| Safe Haskell | None |
|---|---|
| Language | Haskell2010 |
Synopsis
- dropout :: KnownNat n => Double -> Model Nothing Nothing (R n) (R n)
- rreLU :: (ContGen d, Mean d, KnownNat n) => d -> Model Nothing Nothing (R n) (R n)
- injectNoise :: (ContGen d, Mean d, Fractional a) => d -> Model Nothing Nothing a a
- applyNoise :: (ContGen d, Mean d, Fractional a) => d -> Model Nothing Nothing a a
- injectNoiseR :: (ContGen d, Mean d, KnownNat n) => d -> Model Nothing Nothing (R n) (R n)
- applyNoiseR :: (ContGen d, Mean d, KnownNat n) => d -> Model Nothing Nothing (R n) (R n)
Documentation
dropout :: KnownNat n => Double -> Model Nothing Nothing (R n) (R n) Source #
Dropout layer. Parameterized by the dropout rate, a probability between 0 and 1:
0 corresponds to no dropout, and 1 corresponds to dropping every node every time.
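The page doesn't spell out the two modes, so here is a minimal sketch of conventional dropout semantics using plain lists instead of `R n` (helper names are mine, and whether this library's non-stochastic mode scales by `1 - p` is an assumption, not confirmed by the docs):

```haskell
-- Stochastic mode: zero out components according to a Boolean keep-mask
-- (supplied explicitly here instead of sampled with probability 1 - p).
dropoutWithMask :: [Bool] -> [Double] -> [Double]
dropoutWithMask = zipWith (\keep x -> if keep then x else 0)

-- Non-stochastic mode (assumed convention): keep every component but
-- scale by the keep probability (1 - p), so the expected output matches
-- the stochastic mode.
dropoutDet :: Double -> [Double] -> [Double]
dropoutDet p = map ((1 - p) *)

main :: IO ()
main = do
  print (dropoutWithMask [True, False, True, False] [1, 2, 3, 4])
  print (dropoutDet 0.5 [1, 2, 3, 4])
```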
rreLU :: (ContGen d, Mean d, KnownNat n) => d -> Model Nothing Nothing (R n) (R n) Source #
Random leaky rectified linear unit (RReLU). Negative inputs are scaled by a slope drawn from the given distribution.
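A randomized leaky ReLU passes non-negative inputs through unchanged and scales negative inputs by a slope taken from the distribution `d` (presumably its mean in non-stochastic mode, given the `Mean d` constraint). A sketch with the slope made an explicit argument (helper name is mine):

```haskell
-- Leaky ReLU with an explicit negative-side slope; rreLU would draw this
-- slope from the supplied distribution (or use its mean when running
-- non-stochastically).
leakyWith :: Double -> Double -> Double
leakyWith slope x
  | x >= 0    = x
  | otherwise = slope * x

main :: IO ()
main = do
  print (leakyWith 0.25 5.0)     -- non-negative input: passed through, 5.0
  print (leakyWith 0.25 (-4.0))  -- negative input: scaled, -1.0
```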
injectNoise :: (ContGen d, Mean d, Fractional a) => d -> Model Nothing Nothing a a Source #
Inject random noise. Usually used between neural network layers, or at the very beginning to pre-process input.
In non-stochastic mode, this adds the mean of the distribution.
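For instance, with a zero-mean distribution such as `normalDistr 0 0.1` from the `statistics` package (whose `NormalDistribution` has the required `ContGen` and `Mean` instances), the non-stochastic mode would be the identity. A sketch of the additive behavior with the sampled-or-mean value made an explicit argument (helper name is mine):

```haskell
-- Additive noise with the noise value passed explicitly: in stochastic
-- mode it would be a sample from the distribution, in non-stochastic
-- mode the distribution's mean.
injectWith :: Num a => a -> a -> a
injectWith noise x = x + noise

main :: IO ()
main = do
  print (injectWith (0 :: Double) 3.0)    -- zero-mean, non-stochastic: identity
  print (injectWith (0.5 :: Double) 3.0)  -- mean 0.5: shifts the input to 3.5
```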
applyNoise :: (ContGen d, Mean d, Fractional a) => d -> Model Nothing Nothing a a Source #
Multiply by random noise. Can be used to implement dropout-like behavior.
In non-stochastic mode, this scales by the mean of the distribution.
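Multiplicative noise therefore degenerates, in non-stochastic mode, to scaling by the distribution's mean; with a mean-1 distribution the layer becomes the identity, which is how it can mimic inverted-style dropout. A sketch with the factor made explicit (helper name is mine):

```haskell
-- Multiplicative noise with the factor passed explicitly: a sample from
-- the distribution in stochastic mode, the distribution's mean otherwise.
applyWith :: Num a => a -> a -> a
applyWith noise x = x * noise

main :: IO ()
main = do
  print (applyWith (1 :: Double) 3.0)    -- mean-1 distribution: identity
  print (applyWith (0.5 :: Double) 4.0)  -- scales the input by 0.5: 2.0
```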