opto-0.1.0.0: General-purpose performant numeric optimization library

Copyright: (c) Justin Le 2019
License: BSD3
Maintainer: justin@jle.im
Stability: experimental
Portability: non-portable
Safe Haskell: None
Language: Haskell2010

Numeric.Opto.Core

Description

Core functionality for optimizers.

Synopsis

Documentation

type Diff a = a Source #

Useful type synonym to indicate differences in a and rates of change in type signatures.

type Grad m r a = r -> a -> m (Diff a) Source #

Gradient function to compute a direction of steepest ascent in a, with respect to an r sample.
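As a concrete illustration (a sketch only; the name squaredErrorGrad and the choice of Double for the sample and the parameter are ours, not part of this module), here is a Grad written by hand for a one-parameter least-squares model:

```haskell
-- Sketch: a Grad for per-sample squared error on the one-parameter
-- model pred = w * x.  The sample r is an (input, target) pair; the
-- value being optimized is the weight w.  A Grad reports the direction
-- of steepest *ascent* of the loss.
squaredErrorGrad :: Applicative m => Grad m (Double, Double) Double
squaredErrorGrad (x, y) w = pure (2 * (w * x - y) * x)
```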

data Opto :: (Type -> Type) -> Type -> Type -> Type where Source #

An Opto m r a represents a (potentially stateful) in-place optimizer for values of type a that can be run in a monad m. Each optimization step requires an additional external "sample" r.

Usually these should be defined to be polymorphic over m, so that they can be run in many different contexts in Numeric.Opto.Run.

An Opto m () a is a "non-sampling" optimizer, where each optimization step doesn't require any external input.

Constructors

MkOpto 

Fields

mapSample :: (r -> s) -> Opto m s a -> Opto m r a Source #

(Contravariantly) map over the type of the external sample input of an Opto.
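For example, mapSample can adapt an optimizer that expects raw pairs so it accepts a richer sample type. The LabeledPoint record here is a hypothetical type of our own devising, not part of this library:

```haskell
-- Hypothetical richer sample type, not part of this library.
data LabeledPoint = LabeledPoint
    { lpInput  :: Double
    , lpTarget :: Double
    }

-- Adapt an optimizer consuming (input, target) pairs to consume
-- LabeledPoint samples instead.  Note the contravariance: we supply a
-- function *from* the new sample type *to* the old one.
onLabeled :: Opto m (Double, Double) a -> Opto m LabeledPoint a
onLabeled = mapSample (\lp -> (lpInput lp, lpTarget lp))
```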

mapOpto :: forall m n r a c. LinearInPlace n c a => (forall x. m x -> n x) -> (forall x. Ref n x -> Ref m x) -> Opto m r a -> Opto n r a Source #

Map over the inner monad of an Opto by providing a natural transformation, and also a method to "convert" the references.

fromCopying Source #

Arguments

:: (LinearInPlace m c a, Mutable m s) 
=> s

Initial state

-> (r -> a -> s -> m (c, Diff a, s))

State-updating function

-> Opto m r a 

Create an Opto based on a (monadic) state-updating function, given an initial state and the state updating function. The function takes the external r input, the current value a, the current state s, and returns a step to move a in, a factor to scale that step via, and an updated state.

The state is updated in a "copying" manner (by generating new values purely), without any in-place mutation.
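As a sketch, assuming a Mutable instance for Int in m (as the mutable-reference machinery this library builds on provides for primitive types), a step-counting variant of gradient descent could look like this; the name decayingDescent is ours:

```haskell
-- Sketch: steepest descent whose learning rate decays with the step
-- count.  The copied state s is simply an Int counting steps taken.
decayingDescent
    :: (LinearInPlace m Double a, Mutable m Int)
    => Double        -- initial learning rate
    -> Grad m r a    -- gradient (direction of steepest ascent)
    -> Opto m r a
decayingDescent lr0 gr = fromCopying 0 $ \r x k -> do
    g <- gr r x
    let rate = lr0 / (1 + fromIntegral k)
    pure (-rate, g, k + 1)   -- negative factor: descend, not ascend
```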

fromStateless :: LinearInPlace m c a => (r -> a -> m (c, Diff a)) -> Opto m r a Source #

Create a stateless Opto based on a (monadic) optimizing function. The function takes the external r input and the current value a, and returns a step to move a in and a factor to scale that step via.
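For instance, fixed-rate steepest descent is stateless: scale the ascent direction from a Grad by the negated learning rate. This is a sketch under the signatures above; the name gradientDescent is ours:

```haskell
-- Sketch: plain gradient descent as a stateless optimizer.  The factor
-- returned is the negated learning rate, so each step moves *against*
-- the direction of steepest ascent.
gradientDescent
    :: LinearInPlace m Double a
    => Double       -- learning rate
    -> Grad m r a
    -> Opto m r a
gradientDescent lr gr = fromStateless $ \r x ->
    (\g -> (-lr, g)) <$> gr r x
```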

pureGrad :: Applicative m => (r -> a -> Diff a) -> Grad m r a Source #

Create a bona fide Grad from a pure (non-monadic) sampling gradient function.
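For example, a pure two-parameter linear-regression gradient (the names here are ours) can be lifted into any Applicative m:

```haskell
-- Sketch: lifting a pure per-sample gradient with pureGrad.  The model
-- is pred = m' * x + b; the sample is an (x, y) pair; the gradient of
-- the squared error is reported componentwise.
linRegGrad :: Applicative m => Grad m (Double, Double) (Double, Double)
linRegGrad = pureGrad $ \(x, y) (m', b) ->
    let err = (m' * x + b) - y
    in  (2 * err * x, 2 * err)
```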

nonSampling :: (a -> m (Diff a)) -> Grad m r a Source #

Create a Grad from a monadic non-sampling gradient function, which ignores the external sample input r.

pureNonSampling :: Applicative m => (a -> Diff a) -> Grad m r a Source #

Create a Grad from a pure (non-monadic) non-sampling gradient function, which ignores the external sample input r.
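As a minimal sketch, a non-sampling gradient for f(x) = (x - 3)^2 ignores the sample entirely; parabolaGrad is a name of our choosing:

```haskell
-- Sketch: a pure, non-sampling Grad.  Because r is ignored, the result
-- is polymorphic in the sample type and can be used with any runner.
parabolaGrad :: Applicative m => Grad m r Double
parabolaGrad = pureNonSampling $ \x -> 2 * (x - 3)
```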