| Safe Haskell | None |
|---|---|
| Language | Haskell2010 |
Synopsis
- gradModelLoss :: Backprop p => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> p -> a -> b -> p
- gradModelStochLoss :: (Backprop p, PrimMonad m) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Gen (PrimState m) -> p -> a -> b -> m p
- type Grad (m :: Type -> Type) r a = r -> a -> m (Diff a)
- modelGrad :: (Applicative m, Backprop p) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Grad m (a, b) p
- modelGradStoch :: (PrimMonad m, Backprop p) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Gen (PrimState m) -> Grad m (a, b) p
Gradients
gradModelLoss :: Backprop p => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> p -> a -> b -> p
Gradient of a model's parameters with respect to a given loss function and target.
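As a quick illustration, a single step of plain gradient descent can be built directly on top of gradModelLoss. This is a minimal sketch: it assumes the parameter type p has a Fractional instance so the gradient can be scaled and subtracted, which holds for simple scalar parameters but not necessarily for arbitrary parameter types.

```haskell
{-# LANGUAGE DataKinds #-}

import Numeric.Backprop (Backprop)
-- gradModelLoss, Loss, Regularizer, and Model are assumed in scope
-- from this module.

-- One step of plain gradient descent built on gradModelLoss.
gdStep
    :: (Backprop p, Fractional p)
    => Loss b
    -> Regularizer p
    -> Model ('Just p) 'Nothing a b
    -> p  -- ^ learning rate
    -> p  -- ^ current parameters
    -> a  -- ^ input
    -> b  -- ^ target
    -> p  -- ^ updated parameters
gdStep loss reg model eta p x y =
    p - eta * gradModelLoss loss reg model p x y
```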
gradModelStochLoss :: (Backprop p, PrimMonad m) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Gen (PrimState m) -> p -> a -> b -> m p
Stochastic gradient of a model's parameters with respect to a given loss function and target.
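The stochastic variant threads a mutable mwc-random generator through the computation. Here is a sketch of one stochastic gradient step under the same Fractional assumption on p as above; the generator can come from System.Random.MWC's create or createSystemRandom.

```haskell
{-# LANGUAGE DataKinds #-}

import Control.Monad.Primitive (PrimMonad, PrimState)
import Numeric.Backprop       (Backprop)
import System.Random.MWC      (Gen)

-- One stochastic gradient step: sample a gradient through the
-- generator, then scale and subtract it.
sgdStep
    :: (Backprop p, Fractional p, PrimMonad m)
    => Loss b
    -> Regularizer p
    -> Model ('Just p) 'Nothing a b
    -> Gen (PrimState m)  -- ^ e.g. from System.Random.MWC.create
    -> p                  -- ^ learning rate
    -> p -> a -> b -> m p
sgdStep loss reg model gen eta p x y = do
    g <- gradModelStochLoss loss reg model gen p x y
    pure (p - eta * g)
```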
Opto
type Grad (m :: Type -> Type) r a = r -> a -> m (Diff a)
Gradient function to compute a direction of steepest ascent in `a`, with respect to an `r` sample.
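To make the shape concrete, here is a `Grad` written by hand for a scalar: the direction of steepest ascent of f(x) = (x - r)^2 in x, given a sample r. This sketch treats `Diff Double` as plain `Double`.

```haskell
import Data.Functor.Identity (Identity, runIdentity)
-- Grad is the synonym documented above (re-exported from opto).

-- A hand-written Grad for a scalar: ascent direction of
-- f(x) = (x - r)^2 at x, for a sample r.
quadGrad :: Applicative m => Grad m Double Double
quadGrad r x = pure (2 * (x - r))

-- Evaluated purely:
-- runIdentity (quadGrad 1 3 :: Identity Double) == 4.0
```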
modelGrad :: (Applicative m, Backprop p) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Grad m (a, b) p
Using a model's deterministic prediction function (with a given loss function), generate a `Grad` compatible with Numeric.Opto and Numeric.Opto.Run.
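The resulting `Grad` is an ordinary function, so it can also be evaluated directly, outside any optimizer. A small sketch using Identity as the ambient monad:

```haskell
{-# LANGUAGE DataKinds #-}

import Data.Functor.Identity (runIdentity)
import Numeric.Backprop      (Backprop)

-- Run the Grad produced by modelGrad on a single (input, target)
-- sample at parameters p, with no optimizer involved.
evalGrad
    :: Backprop p
    => Loss b
    -> Regularizer p
    -> Model ('Just p) 'Nothing a b
    -> (a, b)  -- ^ sample
    -> p       -- ^ parameters
    -> Diff p
evalGrad loss reg model sample p =
    runIdentity (modelGrad loss reg model sample p)
```

This is essentially gradModelLoss with its sample arguments packaged as a pair, which is the shape that Numeric.Opto's runners consume.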
modelGradStoch :: (PrimMonad m, Backprop p) => Loss b -> Regularizer p -> Model (Just p) Nothing a b -> Gen (PrimState m) -> Grad m (a, b) p
Using a model's stochastic prediction function (with a given loss function), generate a `Grad` compatible with Numeric.Opto and Numeric.Opto.Run.
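Wiring this up in IO might look like the following sketch; createSystemRandom is from mwc-random, and the loss, regularizer, and model are whatever you would otherwise hand to gradModelStochLoss.

```haskell
{-# LANGUAGE DataKinds #-}

import Numeric.Backprop  (Backprop)
import System.Random.MWC (createSystemRandom)

-- Build a stochastic Grad in IO by allocating a system-seeded
-- generator and partially applying modelGradStoch to it.
stochGrad
    :: Backprop p
    => Loss b
    -> Regularizer p
    -> Model ('Just p) 'Nothing a b
    -> IO (Grad IO (a, b) p)
stochGrad loss reg model = do
    gen <- createSystemRandom
    pure (modelGradStoch loss reg model gen)
```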