backprop-learn-0.1.0.0: Combinators and useful tools for ANNs using the backprop library

Safe Haskell: None
Language: Haskell2010

Backprop.Learn.Train


Gradients

gradModelLoss :: Backprop p => Loss b -> Regularizer p -> Model ('Just p) 'Nothing a b -> p -> a -> b -> p

Compute the gradient of the loss (plus regularizer) with respect to the model's parameter, for a given input and target.
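
A minimal usage sketch: one parameter gradient for a single input/target pair. Here squaredError, noReg, myModel, p0, x, y, and the parameter type P are assumed names standing in for a Loss, a Regularizer, a Model, and concrete values; they are not necessarily exports of this library.

    -- One parameter gradient for one sample (all names assumed):
    g :: P   -- P stands in for the model's parameter type
    g = gradModelLoss squaredError noReg myModel p0 x y

    -- A plain gradient-descent step would then scale g and subtract it
    -- from p0 using whatever arithmetic the parameter type P supports.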

gradModelStochLoss :: (Backprop p, PrimMonad m) => Loss b -> Regularizer p -> Model ('Just p) 'Nothing a b -> Gen (PrimState m) -> p -> a -> b -> m p

Compute a stochastic gradient of the loss (plus regularizer) with respect to the model's parameter, for a given input and target, using the model's stochastic prediction function and the supplied random generator.
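
A sketch of the same computation in IO, drawing randomness from an mwc-random generator (createSystemRandom is from System.Random.MWC, which provides the Gen type used here); squaredError, noReg, myModel, p0, x, and y are assumed names as above.

    import System.Random.MWC (createSystemRandom)

    main :: IO ()
    main = do
      gen <- createSystemRandom        -- gen :: Gen (PrimState IO)
      g   <- gradModelStochLoss squaredError noReg myModel gen p0 x y
      -- g is one stochastic gradient sample; repeated calls with the
      -- same generator draw fresh samples.
      pure ()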

Opto

type Grad (m :: Type -> Type) r a = r -> a -> m (Diff a)

Gradient function to compute a direction of steepest ascent in a, with respect to an r sample.
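
For intuition about the shape of this type, here is a hand-rolled Grad that treats each r sample as a target value and returns the gradient of the squared error at the current point. The sketch assumes Diff Double is simply Double, which is an assumption about opto's Diff family rather than a documented fact.

    -- Each sample is a target; the returned Diff is the raw gradient of
    -- (x - target)^2, i.e. the steepest-ascent direction of that error.
    sqErrGrad :: Applicative m => Grad m Double Double
    sqErrGrad target x = pure (2 * (x - target))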

modelGrad :: (Applicative m, Backprop p) => Loss b -> Regularizer p -> Model ('Just p) 'Nothing a b -> Grad m (a, b) p

Using a model's deterministic prediction function (with a given loss function), generate a Grad compatible with Numeric.Opto and Numeric.Opto.Run.
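
For example, a Grad over (input, target) pairs can be built directly from a model and handed to an optimizer; this is a sketch in which squaredError, noReg, myModel, and the parameter type P are assumed names.

    -- A Grad that an optimizer can feed (input, target) samples to:
    gr :: Grad IO (Double, Double) P
    gr = modelGrad squaredError noReg myModel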

modelGradStoch :: (PrimMonad m, Backprop p) => Loss b -> Regularizer p -> Model ('Just p) 'Nothing a b -> Gen (PrimState m) -> Grad m (a, b) p

Using a model's stochastic prediction function (with a given loss function), generate a Grad compatible with Numeric.Opto and Numeric.Opto.Run.
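
The stochastic variant is wired up the same way, except that the generator is threaded in once up front; every subsequent call to the resulting Grad draws fresh randomness from it. A sketch using createSystemRandom from mwc-random, with the other names assumed as above:

    main :: IO ()
    main = do
      gen <- createSystemRandom
      let gr = modelGradStoch squaredError noReg myModel gen
      -- gr :: Grad IO (a, b) P resamples the model's stochastic
      -- prediction on every call before differentiating.
      pure ()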