dopt.online.sgd

Contains an implementation of stochastic gradient descent that relies on automatic differentiation.

Members

Functions

sgd
Updater sgd(Operation[] outputs, Operation[] wrt, Projection[Operation] projs, Operation learningRate = float32([], [0.01f]), Operation momentumRate = float32([], [0.0f]))

Creates a delegate that can be used to perform an update step using the stochastic gradient descent rule, optionally with momentum.
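The update performed by the returned delegate can be sketched in plain Python (shown here instead of D purely for illustration). This is a minimal sketch, not dopt's implementation: the function name `sgd_step` and the list-based parameter handling are hypothetical, while the default learning rate (0.01) and momentum rate (0.0) match the signature above. It implements the classical momentum rule v = momentum * v - lr * g; w = w + v.

```python
def sgd_step(params, grads, velocities, learning_rate=0.01, momentum_rate=0.0):
    """One SGD-with-momentum step over parallel lists of parameters,
    gradients, and per-parameter velocity buffers (hypothetical helper,
    not part of dopt's API)."""
    for i, (w, g) in enumerate(zip(params, grads)):
        # Accumulate a decaying average of past gradients...
        velocities[i] = momentum_rate * velocities[i] - learning_rate * g
        # ...and move the parameter along it.
        params[i] = w + velocities[i]
    return params, velocities
```

With zero momentum (the default in the signature above) this reduces to vanilla SGD, w = w - lr * g; the `projs` argument in the real API would additionally let each updated parameter be projected back into a feasible set after the step.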

Meta

Authors

Henry Gouk