backprop-learn-0.1.0.0: Combinators and useful tools for ANNs using the backprop library

Safe Haskell: None
Language: Haskell2010

Backprop.Learn.Model.Stochastic

Documentation

dropout :: KnownNat n => Double -> Model 'Nothing 'Nothing (R n) (R n)

Dropout layer. Parameterized by the dropout rate, a probability between 0 and 1.

0 corresponds to no dropout; 1 corresponds to complete dropout of all nodes every time.
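
For example, a minimal sketch of a layer that drops each node with probability 0.5 (the name drop50 is illustrative, and Model and dropout are assumed to be in scope from this package):

    {-# LANGUAGE DataKinds #-}

    import GHC.TypeLits                 (KnownNat)
    import Numeric.LinearAlgebra.Static (R)

    -- drop each node with probability 0.5 on each stochastic pass
    drop50 :: KnownNat n => Model 'Nothing 'Nothing (R n) (R n)
    drop50 = dropout 0.5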

rreLU :: (ContGen d, Mean d, KnownNat n) => d -> Model 'Nothing 'Nothing (R n) (R n)

Random leaky rectified linear unit: on each stochastic pass, the slope used for negative inputs is drawn from the given distribution. In non-stochastic mode, the mean of the distribution is used as a fixed slope.
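
A sketch, using the pragma and imports from the dropout example plus uniformDistr from the statistics package (the name rrelu and the slope range are illustrative):

    import Statistics.Distribution.Uniform (uniformDistr)

    -- leak slope drawn uniformly from [0.1, 0.3] on each stochastic pass;
    -- non-stochastic mode uses the distribution's mean (0.2) as the slope
    rrelu :: KnownNat n => Model 'Nothing 'Nothing (R n) (R n)
    rrelu = rreLU (uniformDistr 0.1 0.3)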

injectNoise :: (ContGen d, Mean d, Fractional a) => d -> Model 'Nothing 'Nothing a a

Inject random noise. Usually used between neural network layers, or at the very beginning to pre-process input.

In non-stochastic mode, this adds the mean of the distribution.
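
For example, additive zero-mean Gaussian noise; because the mean is 0, this model is the identity in non-stochastic mode (normalDistr is from the statistics package, and the name noisy is illustrative):

    import Statistics.Distribution.Normal (normalDistr)

    -- add a sample from N(0, 0.1^2) in stochastic mode;
    -- add the mean (0) otherwise
    noisy :: Model 'Nothing 'Nothing Double Double
    noisy = injectNoise (normalDistr 0 0.1)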

applyNoise :: (ContGen d, Mean d, Fractional a) => d -> Model 'Nothing 'Nothing a a

Multiply by random noise. Can be used to implement dropout-like behavior.

In non-stochastic mode, this scales by the mean of the distribution.
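
For example, multiplicative Gaussian noise centered at 1; because the mean is 1, non-stochastic mode leaves the input unchanged (the name jitter is illustrative):

    import Statistics.Distribution.Normal (normalDistr)

    -- multiply by a sample from N(1, 0.05^2) in stochastic mode;
    -- scale by the mean (1) otherwise
    jitter :: Model 'Nothing 'Nothing Double Double
    jitter = applyNoise (normalDistr 1 0.05)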

injectNoiseR :: (ContGen d, Mean d, KnownNat n) => d -> Model 'Nothing 'Nothing (R n) (R n)

injectNoise lifted to R
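
The same sketch as for injectNoise, but as a vector model (imports as in the examples above; the name noisyR is illustrative):

    noisyR :: KnownNat n => Model 'Nothing 'Nothing (R n) (R n)
    noisyR = injectNoiseR (normalDistr 0 0.1)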

applyNoiseR :: (ContGen d, Mean d, KnownNat n) => d -> Model 'Nothing 'Nothing (R n) (R n)

applyNoise lifted to R
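
A sketch of the dropout-like use mentioned under applyNoise: the input is scaled by noise drawn from U(0, 2), whose mean of 1 keeps non-stochastic mode an identity (imports as above; the name jitterR is illustrative):

    -- scale by noise drawn from U(0, 2); the mean is 1, so the
    -- non-stochastic model is the identity
    jitterR :: KnownNat n => Model 'Nothing 'Nothing (R n) (R n)
    jitterR = applyNoiseR (uniformDistr 0 2)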