dopt.online.sgd

Contains an implementation of stochastic gradient descent that relies on automatic differentiation.

Members

Functions

sgd
Updater sgd(Operation[] outputs, Operation[] wrt, Projection[Operation] projs, Operation learningRate, Operation momentumRate, bool nesterov)

Creates a delegate that can be used to perform a step using the stochastic gradient descent update rule, optionally with classical or Nesterov momentum as controlled by momentumRate and nesterov.
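
The sketch below illustrates, with plain arrays rather than dopt Operations, the conventional momentum and Nesterov update rules that an optimiser of this kind typically applies; it is an assumption-laden illustration of the mathematics, not dopt's actual implementation, and names such as momentumStep, weights, and velocity are placeholders.

import std.stdio;

// Illustrative only: a plain-array version of the usual momentum update rule.
// This is not part of dopt's API.
void momentumStep(float[] weights, float[] gradient, float[] velocity,
                  float learningRate, float momentumRate, bool nesterov)
{
    foreach (i; 0 .. weights.length)
    {
        // Accumulate a decaying sum of past gradient steps.
        velocity[i] = momentumRate * velocity[i] - learningRate * gradient[i];

        if (nesterov)
        {
            // Nesterov variant: apply the velocity as if the next momentum
            // step had already been taken (a "look-ahead" correction).
            weights[i] += momentumRate * velocity[i] - learningRate * gradient[i];
        }
        else
        {
            weights[i] += velocity[i];
        }
    }
}

void main()
{
    auto weights  = [1.0f, -2.0f];
    auto gradient = [0.5f,  0.1f];
    auto velocity = [0.0f,  0.0f];

    momentumStep(weights, gradient, velocity, 0.1f, 0.9f, false);
    writeln(weights); // [0.95, -2.01]
}

Going by the signature documented above, a call would presumably look something like sgd([loss], parameters, null, learningRate, momentumRate, true), where loss, parameters, learningRate, and momentumRate are Operations built elsewhere with dopt; those surrounding details are assumptions rather than part of this page.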

Meta

Authors

Henry Gouk