Copyright | (c) Justin Le 2019 |
---|---|
License | BSD3 |
Maintainer | justin@jle.im |
Stability | experimental |
Portability | non-portable |
Safe Haskell | None |
Language | Haskell2010 |
Core functionality for optimizers.
Synopsis
- type Diff a = a
- type Grad m r a = r -> a -> m (Diff a)
- data Opto :: (Type -> Type) -> Type -> Type -> Type where
- MkOpto :: forall s m r a c. (LinearInPlace m c a, Mutable m s) => {..} -> Opto m r a
- mapSample :: (r -> s) -> Opto m s a -> Opto m r a
- mapOpto :: forall m n r a c. LinearInPlace n c a => (forall x. m x -> n x) -> (forall x. Ref n x -> Ref m x) -> Opto m r a -> Opto n r a
- fromCopying :: (LinearInPlace m c a, Mutable m s) => s -> (r -> a -> s -> m (c, Diff a, s)) -> Opto m r a
- fromStateless :: LinearInPlace m c a => (r -> a -> m (c, Diff a)) -> Opto m r a
- pureGrad :: Applicative m => (r -> a -> Diff a) -> Grad m r a
- nonSampling :: (a -> m (Diff a)) -> Grad m r a
- pureNonSampling :: Applicative m => (a -> Diff a) -> Grad m r a
Documentation
type Diff a = a Source #
Useful type synonym to indicate differences in a and rates of change in type signatures.
type Grad m r a = r -> a -> m (Diff a) Source #
Gradient function to compute a direction of steepest ascent in a, with respect to an r sample.
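To make the shape of a Grad concrete, here is a minimal standalone sketch. The Diff and Grad synonyms are restated locally so the snippet compiles on its own, and squaredErrGrad is a hypothetical example, not part of the library:

```haskell
-- Local restatements of the synonyms above, so this compiles standalone.
type Diff a = a
type Grad m r a = r -> a -> m (Diff a)

-- Hypothetical example: gradient of the loss (x - r)^2 with respect to x,
-- where r is the external sample.  d/dx (x - r)^2 = 2 * (x - r).
squaredErrGrad :: Applicative m => Grad m Double Double
squaredErrGrad r x = pure (2 * (x - r))
```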
data Opto :: (Type -> Type) -> Type -> Type -> Type where Source #
An Opto m r a represents a (potentially stateful) in-place optimizer for values of type a that can be run in a monad m. Each optimization step requires an additional external "sample" r.
Usually these should be defined to be polymorphic on m, so that they can be run in many different contexts in Numeric.Opto.Run.
An Opto m () a is a "non-sampling" optimizer, where each optimization step doesn't require any external input.
mapSample :: (r -> s) -> Opto m s a -> Opto m r a Source #
(Contravariantly) map over the type of the external sample input of an Opto.
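The contravariant mapping can be illustrated with a deliberately simplified stand-in for Opto (the real type carries a monad and mutable state; Step, mapSampleStep, and the "move toward the sample" example below are all hypothetical names for this sketch):

```haskell
-- A simplified, pure stand-in for Opto: just a step function on samples.
newtype Step r a = Step { runStep :: r -> a -> a }

-- Contravariant mapping over the sample type, mirroring mapSample:
-- to accept r samples, first convert them to the s the optimizer expects.
mapSampleStep :: (r -> s) -> Step s a -> Step r a
mapSampleStep f (Step g) = Step (\r x -> g (f r) x)
```

For example, an optimizer consuming bare Double samples can be adapted to labelled (String, Double) samples with mapSampleStep snd, just as one would use mapSample snd on a real Opto.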
mapOpto :: forall m n r a c. LinearInPlace n c a => (forall x. m x -> n x) -> (forall x. Ref n x -> Ref m x) -> Opto m r a -> Opto n r a Source #
Map over the inner monad of an Opto by providing a natural transformation, and also a method to "convert" the references.
fromCopying Source #
:: (LinearInPlace m c a, Mutable m s) | |
=> s | Initial state |
-> (r -> a -> s -> m (c, Diff a, s)) | State-updating function |
-> Opto m r a | |
Create an Opto based on a (monadic) state-updating function, given an initial state and the state updating function. The function takes the external r input, the current value a, and the current state s, and returns a step to move a in, a factor to scale that step via, and an updated state.
The state is updated in a "copying" manner (by generating new values purely), without any in-place mutation.
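A pure, self-contained sketch of the copying-state pattern, iterated by hand: CopyStep is a simplified analogue of the function fromCopying wraps (no monad, scaling factor fixed to Double), and momentum is a hypothetical optimizer whose state is a running velocity:

```haskell
-- Simplified analogue of fromCopying's argument: sample -> value -> state
-- -> (scaling factor, step, new state), all pure.
type CopyStep r a s = r -> a -> s -> (Double, a, s)

-- Hypothetical momentum update: the copied state s is the velocity.
momentum
  :: Double                          -- learning rate
  -> Double                          -- momentum coefficient
  -> (Double -> Double -> Double)    -- gradient, given sample and value
  -> CopyStep Double Double Double
momentum lr beta grad r x v =
  let v' = beta * v + grad r x
  in  (lr, negate v', v')            -- step against the velocity

-- Thread the state through a stream of samples, moving x by (factor * step).
runCopy :: CopyStep r Double s -> s -> [r] -> Double -> Double
runCopy f = go
  where
    go _ []       x = x
    go s (r : rs) x =
      let (c, d, s') = f r x s
      in  go s' rs (x + c * d)
```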
fromStateless :: LinearInPlace m c a => (r -> a -> m (c, Diff a)) -> Opto m r a Source #
Create a stateless Opto based on a (monadic) optimizing function. The function takes the external r input and the current value a, and returns a step to move a in and a factor to scale that step via.
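A pure, self-contained sketch of the stateless pattern: StatelessStep is a simplified analogue of the function fromStateless wraps, and sgdStep is a hypothetical plain gradient-descent step on the loss (x - r)^2:

```haskell
-- Simplified analogue of fromStateless's argument:
-- sample -> value -> (scaling factor, step), all pure.
type StatelessStep r a = r -> a -> (Double, a)

-- Hypothetical plain gradient descent: step against d/dx (x - r)^2.
sgdStep :: Double -> StatelessStep Double Double
sgdStep lr r x = (lr, negate (2 * (x - r)))

-- Fold the step over a stream of samples; no state to thread.
runStateless :: StatelessStep r Double -> [r] -> Double -> Double
runStateless f rs x0 = foldl (\x r -> let (c, d) = f r x in x + c * d) x0 rs
```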
pureGrad :: Applicative m => (r -> a -> Diff a) -> Grad m r a Source #
Create a bona-fide Grad from a pure (non-monadic) sampling gradient function.
nonSampling :: (a -> m (Diff a)) -> Grad m r a Source #
Create a Grad from a monadic non-sampling gradient function, which ignores the external sample input r.
pureNonSampling :: Applicative m => (a -> Diff a) -> Grad m r a Source #
Create a Grad from a pure (non-monadic) non-sampling gradient function, which ignores the external sample input r.