/**
This package contains a deep learning API backed by dopt.

Working examples for how this package can be used are given in the $(D examples/mnist.d) and $(D examples/cifar10.d)
files.

One would generally start by using UFCS to define a feed-forward network:

---
auto features = float32([128, 1, 28, 28]);

auto layers = dataSource(features)
             .dense(2_000)
             .relu()
             .dense(2_000)
             .relu()
             .dense(10)
             .softmax();
---

The $(D DAGNetwork) class can then be used to traverse the resulting graph and aggregate parameters/loss terms:

---
auto network = new DAGNetwork([features], [layers]);
---

After this, one can define an objective function---there are a few standard loss functions implemented in
$(D dopt.nnet.losses):

---
auto labels = float32([128, 10]);

auto trainLoss = crossEntropy(layers.trainOutput, labels) + network.paramLoss;
---

where $(D network.paramLoss) is the sum of any parameter regularisation terms. The $(D dopt.online) package can be
used to construct an updater:

---
auto updater = sgd([trainLoss], network.params, network.paramProj);
---

Finally, one can call this updater with some actual training data:

---
updater([
    features: Buffer(some_real_features),
    labels: Buffer(some_real_labels)
]);
---

Authors: Henry Gouk
*/
module dopt.nnet;

public
{
    import dopt.nnet.data;
    import dopt.nnet.layers;
    import dopt.nnet.losses;
    import dopt.nnet.models;
    import dopt.nnet.networks;
    import dopt.nnet.parameters;
}

version(Have_dopt_cuda)
{
    import dopt.cuda;
}
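/*
For reference, the snippets in the module documentation above can be combined into a
full training loop. This is an illustrative sketch only: loadFeatures, loadLabels, the
epoch count, and the minibatch slicing are hypothetical placeholders (not part of the
dopt API); only the updater call mirrors the documented usage.

---
// Hypothetical helpers: load the training set into flat float arrays.
float[] allFeatures = loadFeatures(); // length: numExamples * 1 * 28 * 28
float[] allLabels = loadLabels();     // length: numExamples * 10

size_t batchSize = 128;
size_t featureLen = 1 * 28 * 28;
size_t labelLen = 10;
size_t numBatches = allFeatures.length / (batchSize * featureLen);

foreach(epoch; 0 .. 10)
{
    foreach(b; 0 .. numBatches)
    {
        // Slice out one minibatch of features and labels.
        auto fs = allFeatures[b * batchSize * featureLen .. (b + 1) * batchSize * featureLen];
        auto ls = allLabels[b * batchSize * labelLen .. (b + 1) * batchSize * labelLen];

        // Perform one SGD step, as in the updater example above.
        updater([
            features: Buffer(fs),
            labels: Buffer(ls)
        ]);
    }
}
---
*/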