backprop-learn-0.1.0.0: Combinators and useful tools for ANNs using the backprop library

Safe Haskell: None
Language: Haskell2010

Backprop.Learn.Regularize


Documentation

type Regularizer p = forall s. Reifies s W => BVar s p -> BVar s Double Source #

A regularizer on parameters

class Backprop p => Regularize p where Source #

A class for data types that support regularization during training.

This class is somewhat similar to Metric Double, in that it supports summing the components and summing squared components. However, the main difference is that when summing components, we only consider components that we want to regularize.

Often, this doesn't include bias terms (terms that "add" to inputs), and only includes terms that "scale" inputs, like components in a weight matrix of a feed-forward neural network layer.

However, if all of your components are to be regularized, you can use norm_1, norm_2, lassoLinear, and ridgeLinear as sensible implementations, or use DerivingVia with RegularizeMetric:

data MyType = ...
  deriving Regularize via (RegularizeMetric MyType)

You can also derive an instance where no components are regularized, using NoRegularize:

data MyType = ...
  deriving Regularize via (NoRegularize MyType)

The default implementations are based on Generics, and work for types that are records of items that are all instances of Regularize.
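As a concrete illustration of which components such an instance sums, here is a standalone sketch (using only base, not the class itself) of a feed-forward layer whose weight matrix is regularized but whose bias vector is not; the Layer type and its field names are hypothetical:

```haskell
-- Hypothetical layer: a weight matrix (as nested lists) and a bias vector.
data Layer = Layer
  { layerWeights :: [[Double]]
  , layerBias    :: [Double]
  }

-- Analogue of rnorm_1: sum of absolute values of the weights only.
layerNorm1 :: Layer -> Double
layerNorm1 = sum . map abs . concat . layerWeights

-- Analogue of rnorm_2: sum of squared weights; the bias is ignored here too.
layerNorm2 :: Layer -> Double
layerNorm2 = sum . map (^ 2) . concat . layerWeights
```

For Layer [[3, -4]] [100], layerNorm1 gives 7.0 and layerNorm2 gives 25.0; the bias term 100 contributes to neither.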

Minimal complete definition

Nothing

Methods

rnorm_1 :: p -> Double Source #

Like norm_1: sums the absolute values of all of the weights in p, but only the ones you want to regularize:

\[ \sum_w \lvert w \rvert \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.

If p is an instance of Metric, then you can set rnorm_1 = norm_1. However, this would count all terms in p, even potential bias terms.

rnorm_1 :: (ADT p, Constraints p Regularize) => p -> Double Source #

Like norm_1: sums the absolute values of all of the weights in p, but only the ones you want to regularize:

\[ \sum_w \lvert w \rvert \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.

If p is an instance of Metric, then you can set rnorm_1 = norm_1. However, this would count all terms in p, even potential bias terms.

rnorm_2 :: p -> Double Source #

Like norm_2: sums all of the squares of the weights in p, but only the ones you want to regularize:

\[ \sum_w w^2 \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.

If p is an instance of Metric, then you can set rnorm_2 = norm_2. However, this would count all terms in p, even potential bias terms.

rnorm_2 :: (ADT p, Constraints p Regularize) => p -> Double Source #

Like norm_2: sums all of the squares of the weights in p, but only the ones you want to regularize:

\[ \sum_w w^2 \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.

If p is an instance of Metric, then you can set rnorm_2 = norm_2. However, this would count all terms in p, even potential bias terms.

lasso :: Double -> p -> p Source #

lasso r p sets all regularized components (that is, components summed by rnorm_1) in p to be either r if that component was positive, or -r if that component was negative. Behavior is not defined if the component is exactly zero, but either r or -r is a sensible possibility.

It must set all non-regularized components (like bias terms, or whatever items that rnorm_1 ignores) to zero.

If p is an instance of Linear Double and Num, then you can set lasso = lassoLinear. However, this is only valid if rnorm_1 counts all terms in p, including potential bias terms.

lasso :: (ADT p, Constraints p Regularize) => Double -> p -> p Source #

lasso r p sets all regularized components (that is, components summed by rnorm_1) in p to be either r if that component was positive, or -r if that component was negative. Behavior is not defined if the component is exactly zero, but either r or -r is a sensible possibility.

It must set all non-regularized components (like bias terms, or whatever items that rnorm_1 ignores) to zero.

If p is an instance of Linear Double and Num, then you can set lasso = lassoLinear. However, this is only valid if rnorm_1 counts all terms in p, including potential bias terms.

ridge :: Double -> p -> p Source #

ridge r p scales all regularized components (that is, components summed by rnorm_2) in p by r.

It must set all non-regularized components (like bias terms, or whatever items that rnorm_2 ignores) to zero.

If p is an instance of Linear Double and Num, then you can set ridge = ridgeLinear. However, this is only valid if rnorm_2 counts all terms in p, including potential bias terms.

ridge :: (ADT p, Constraints p Regularize) => Double -> p -> p Source #

ridge r p scales all regularized components (that is, components summed by rnorm_2) in p by r.

It must set all non-regularized components (like bias terms, or whatever items that rnorm_2 ignores) to zero.

If p is an instance of Linear Double and Num, then you can set ridge = ridgeLinear. However, this is only valid if rnorm_2 counts all terms in p, including potential bias terms.
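To make the lasso and ridge contracts concrete, here is a standalone model (not the class methods themselves) over a hypothetical parameter type holding regularized weights and one bias; in both cases the non-regularized bias slot of the result is zeroed:

```haskell
-- Simplified stand-ins for lasso and ridge on a hypothetical
-- parameter type: a list of regularized weights plus one bias term.
type Params = ([Double], Double)

-- lasso r: every regularized weight becomes r carrying the sign of
-- the original; the bias slot is set to zero.
lassoP :: Double -> Params -> Params
lassoP r (ws, _) = (map (\w -> r * signum w) ws, 0)

-- ridge r: every regularized weight is scaled by r; bias zeroed.
ridgeP :: Double -> Params -> Params
ridgeP r (ws, _) = (map (r *) ws, 0)
```

For example, lassoP 0.5 ([2, -3], 5) is ([0.5, -0.5], 0), while ridgeP 0.5 ([2, -3], 5) is ([1.0, -1.5], 0).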

Instances
Regularize Double Source # 
Instance details

Defined in Backprop.Learn.Regularize

Regularize Float Source # 
Instance details

Defined in Backprop.Learn.Regularize

Regularize () Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: () -> Double Source #

rnorm_2 :: () -> Double Source #

lasso :: Double -> () -> () Source #

ridge :: Double -> () -> () Source #

Integral a => Regularize (Ratio a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

KnownNat n => Regularize (R n) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: R n -> Double Source #

rnorm_2 :: R n -> Double Source #

lasso :: Double -> R n -> R n Source #

ridge :: Double -> R n -> R n Source #

Regularize a => Regularize (TF a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: TF a -> Double Source #

rnorm_2 :: TF a -> Double Source #

lasso :: Double -> TF a -> TF a Source #

ridge :: Double -> TF a -> TF a Source #

Backprop a => Regularize (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

(Metric Double p, Num p, Backprop p) => Regularize (RegularizeMetric p) Source # 
Instance details

Defined in Backprop.Learn.Regularize

(Regularize a, Regularize b) => Regularize (a, b) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: (a, b) -> Double Source #

rnorm_2 :: (a, b) -> Double Source #

lasso :: Double -> (a, b) -> (a, b) Source #

ridge :: Double -> (a, b) -> (a, b) Source #

(KnownNat n, KnownNat m) => Regularize (L n m) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: L n m -> Double Source #

rnorm_2 :: L n m -> Double Source #

lasso :: Double -> L n m -> L n m Source #

ridge :: Double -> L n m -> L n m Source #

(Regularize a, Regularize b) => Regularize (a :# b) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: (a :# b) -> Double Source #

rnorm_2 :: (a :# b) -> Double Source #

lasso :: Double -> (a :# b) -> a :# b Source #

ridge :: Double -> (a :# b) -> a :# b Source #

(KnownNat i, KnownNat o) => Regularize (LRp i o) Source # 
Instance details

Defined in Backprop.Learn.Model.Regression

Methods

rnorm_1 :: LRp i o -> Double Source #

rnorm_2 :: LRp i o -> Double Source #

lasso :: Double -> LRp i o -> LRp i o Source #

ridge :: Double -> LRp i o -> LRp i o Source #

(KnownNat p, KnownNat q) => Regularize (ARIMAp p q) Source # 
Instance details

Defined in Backprop.Learn.Model.Regression

Methods

rnorm_1 :: ARIMAp p q -> Double Source #

rnorm_2 :: ARIMAp p q -> Double Source #

lasso :: Double -> ARIMAp p q -> ARIMAp p q Source #

ridge :: Double -> ARIMAp p q -> ARIMAp p q Source #

KnownNat o => Regularize (LSTMp i o) Source # 
Instance details

Defined in Backprop.Learn.Model.Neural.LSTM

Methods

rnorm_1 :: LSTMp i o -> Double Source #

rnorm_2 :: LSTMp i o -> Double Source #

lasso :: Double -> LSTMp i o -> LSTMp i o Source #

ridge :: Double -> LSTMp i o -> LSTMp i o Source #

KnownNat o => Regularize (GRUp i o) Source # 
Instance details

Defined in Backprop.Learn.Model.Neural.LSTM

Methods

rnorm_1 :: GRUp i o -> Double Source #

rnorm_2 :: GRUp i o -> Double Source #

lasso :: Double -> GRUp i o -> GRUp i o Source #

ridge :: Double -> GRUp i o -> GRUp i o Source #

(Regularize a, Regularize b, Regularize c) => Regularize (a, b, c) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: (a, b, c) -> Double Source #

rnorm_2 :: (a, b, c) -> Double Source #

lasso :: Double -> (a, b, c) -> (a, b, c) Source #

ridge :: Double -> (a, b, c) -> (a, b, c) Source #

(RPureConstrained Regularize as, ReifyConstraint Backprop TF as, RMap as, RApply as, RFoldMap as) => Regularize (Rec TF as) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: Rec TF as -> Double Source #

rnorm_2 :: Rec TF as -> Double Source #

lasso :: Double -> Rec TF as -> Rec TF as Source #

ridge :: Double -> Rec TF as -> Rec TF as Source #

(PureProdC Maybe Backprop as, PureProdC Maybe Regularize as) => Regularize (PMaybe TF as) Source # 
Instance details

Defined in Backprop.Learn.Regularize

(Vector v a, Regularize a, Backprop (Vector v n a)) => Regularize (Vector v n a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: Vector v n a -> Double Source #

rnorm_2 :: Vector v n a -> Double Source #

lasso :: Double -> Vector v n a -> Vector v n a Source #

ridge :: Double -> Vector v n a -> Vector v n a Source #

Regularize (ARIMAs p d q) Source # 
Instance details

Defined in Backprop.Learn.Model.Regression

Methods

rnorm_1 :: ARIMAs p d q -> Double Source #

rnorm_2 :: ARIMAs p d q -> Double Source #

lasso :: Double -> ARIMAs p d q -> ARIMAs p d q Source #

ridge :: Double -> ARIMAs p d q -> ARIMAs p d q Source #

(Regularize a, Regularize b, Regularize c, Regularize d) => Regularize (a, b, c, d) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: (a, b, c, d) -> Double Source #

rnorm_2 :: (a, b, c, d) -> Double Source #

lasso :: Double -> (a, b, c, d) -> (a, b, c, d) Source #

ridge :: Double -> (a, b, c, d) -> (a, b, c, d) Source #

(Regularize a, Regularize b, Regularize c, Regularize d, Regularize e) => Regularize (a, b, c, d, e) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

rnorm_1 :: (a, b, c, d, e) -> Double Source #

rnorm_2 :: (a, b, c, d, e) -> Double Source #

lasso :: Double -> (a, b, c, d, e) -> (a, b, c, d, e) Source #

ridge :: Double -> (a, b, c, d, e) -> (a, b, c, d, e) Source #

l1Reg :: Regularize p => Double -> Regularizer p Source #

Backpropagatable L1 regularization; also known as lasso regularization.

\[ \sum_w \lvert w \rvert \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.

l2Reg :: Regularize p => Double -> Regularizer p Source #

Backpropagatable L2 regularization; also known as ridge regularization.

\[ \sum_w w^2 \]

Note that typically bias terms (terms that add to inputs) are not regularized. Only "weight" terms that scale inputs are typically regularized.
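The connection between these penalties and the lasso/ridge methods runs through the gradient: a per-component penalty r * w^2 has derivative 2 * r * w, which is why a scaling factor of 0.5 (as l2RegMetric's docs suggest) makes the gradient exactly w. A quick standalone check with a central finite difference (names illustrative, not from the library):

```haskell
-- An illustrative per-component L2 penalty, r * w^2.
penaltyL2 :: Double -> Double -> Double
penaltyL2 r w = r * w * w

-- Central finite difference to approximate a derivative numerically.
numericGrad :: (Double -> Double) -> Double -> Double
numericGrad f w = (f (w + h) - f (w - h)) / (2 * h)
  where h = 1e-6
```

numericGrad (penaltyL2 0.5) 3 is approximately 3.0, matching 2 * 0.5 * 3.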

noReg :: Regularizer p Source #

No regularization

l2RegMetric Source #

Arguments

:: (Metric Double p, Backprop p) 
=> Double

scaling factor (often 0.5)

-> Regularizer p 

L2 regularization for instances of Metric. This will count all terms, including any potential bias terms.

You can always use this as a regularizer instead of l2Reg, if you want to ignore the default behavior for a type, or if your type has no instance.

This is what l2Reg would be for a type p if you declare an instance of Regularize with rnorm_2 = norm_2, and ridge = ridgeLinear.

l1RegMetric Source #

Arguments

:: (Num p, Metric Double p, Backprop p) 
=> Double

scaling factor (often 0.5)

-> Regularizer p 

L1 regularization for instances of Metric. This will count all terms, including any potential bias terms.

You can always use this as a regularizer instead of l1Reg, if you want to ignore the default behavior for a type, or if your type has no instance.

This is what l1Reg would be for a type p if you declare an instance of Regularize with rnorm_1 = norm_1, and lasso = lassoLinear.

Build instances

DerivingVia

newtype RegularizeMetric a Source #

Newtype wrapper (meant to be used with DerivingVia) to derive an instance of Regularize that uses its Metric instance, and regularizes every component of a data type, including any potential bias terms.

Constructors

RegularizeMetric a 
Instances
Functor RegularizeMetric Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

fmap :: (a -> b) -> RegularizeMetric a -> RegularizeMetric b #

(<$) :: a -> RegularizeMetric b -> RegularizeMetric a #

Eq a => Eq (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Ord a => Ord (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Read a => Read (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Show a => Show (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Generic (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Associated Types

type Rep (RegularizeMetric a) :: Type -> Type #

Backprop a => Backprop (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

(Metric Double p, Num p, Backprop p) => Regularize (RegularizeMetric p) Source # 
Instance details

Defined in Backprop.Learn.Regularize

type Rep (RegularizeMetric a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

type Rep (RegularizeMetric a) = D1 (MetaData "RegularizeMetric" "Backprop.Learn.Regularize" "backprop-learn-0.1.0.0-LYs2l1OGpKTGmGWQXOoOXm" True) (C1 (MetaCons "RegularizeMetric" PrefixI False) (S1 (MetaSel (Nothing :: Maybe Symbol) NoSourceUnpackedness NoSourceStrictness DecidedLazy) (Rec0 a)))

newtype NoRegularize a Source #

Newtype wrapper (meant to be used with DerivingVia) to derive an instance of Regularize that does not regularize any part of the type.

Constructors

NoRegularize a 
Instances
Functor NoRegularize Source # 
Instance details

Defined in Backprop.Learn.Regularize

Methods

fmap :: (a -> b) -> NoRegularize a -> NoRegularize b #

(<$) :: a -> NoRegularize b -> NoRegularize a #

Eq a => Eq (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Ord a => Ord (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Read a => Read (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Show a => Show (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Generic (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Associated Types

type Rep (NoRegularize a) :: Type -> Type #

Methods

from :: NoRegularize a -> Rep (NoRegularize a) x #

to :: Rep (NoRegularize a) x -> NoRegularize a #

Backprop a => Backprop (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

Backprop a => Regularize (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

type Rep (NoRegularize a) Source # 
Instance details

Defined in Backprop.Learn.Regularize

type Rep (NoRegularize a) = D1 (MetaData "NoRegularize" "Backprop.Learn.Regularize" "backprop-learn-0.1.0.0-LYs2l1OGpKTGmGWQXOoOXm" True) (C1 (MetaCons "NoRegularize" PrefixI False) (S1 (MetaSel (Nothing :: Maybe Symbol) NoSourceUnpackedness NoSourceStrictness DecidedLazy) (Rec0 a)))

Linear

lassoLinear :: (Linear Double p, Num p) => Double -> p -> p Source #

A default implementation of lasso for instances of Linear Double and Num. However, this is only valid if the corresponding rnorm_1 counts all terms in p, including potential bias terms.

ridgeLinear :: Linear Double p => Double -> p -> p Source #

A default implementation of ridge for instances of Linear Double. However, this is only valid if the corresponding rnorm_2 counts all terms in p, including potential bias terms.
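Assuming the Linear Double scaling action coincides with ordinary multiplication at Double, these defaults plausibly reduce to the following standalone per-component sketches (not the library's exact definitions):

```haskell
-- Per-component sketch of lassoLinear at Double:
-- the result has magnitude r and the sign of the input.
lassoD :: Double -> Double -> Double
lassoD r w = r * signum w

-- Per-component sketch of ridgeLinear at Double: scale by r.
ridgeD :: Double -> Double -> Double
ridgeD r w = r * w
```

For instance, lassoD 0.5 (-2) is -0.5 and ridgeD 0.5 4 is 2.0.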

Generics

glasso :: (ADT p, Constraints p Regularize) => Double -> p -> p Source #

gridge :: (ADT p, Constraints p Regularize) => Double -> p -> p Source #

Manipulate regularizers

addReg :: Regularizer p -> Regularizer p -> Regularizer p Source #

Add together two regularizers

scaleReg :: Double -> Regularizer p -> Regularizer p Source #

Scale a regularizer's influence
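Modeling a regularizer as a plain function p -> Double (ignoring the BVar wrapper for the sake of illustration), these combinators plausibly behave as follows; SimpleReg and the elastic example are hypothetical names, not part of the library:

```haskell
-- Simplified model: a regularizer as a plain p -> Double.
type SimpleReg p = p -> Double

-- Pointwise sum of two penalties, as addReg combines Regularizers.
addReg' :: SimpleReg p -> SimpleReg p -> SimpleReg p
addReg' f g p = f p + g p

-- Scale a penalty's influence, as scaleReg does.
scaleReg' :: Double -> SimpleReg p -> SimpleReg p
scaleReg' c f p = c * f p

-- Elastic-net style: a scaled-down L1 term plus an L2 term.
elastic :: SimpleReg Double
elastic = addReg' (scaleReg' 0.1 abs) (\w -> w * w)
```

elastic 2 evaluates to 0.1 * 2 + 2 * 2 = 4.2.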