class MXNet::Autograd

Superclass hierarchy: MXNet::Autograd < Reference < Object
Overview
Autograd for MXNet.
x = MXNet::NDArray.array([1, 2, 3, 4], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
y = MXNet::Autograd.record do
  x * x + 1
end
MXNet::Autograd.backward(y)
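After the backward pass the gradient buffer g holds dy/dx: with the default :write request it is overwritten with 2 * x, i.e. [2, 4, 6, 8] for this input.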
Defined in:
mxnet/autograd.cr

Class Method Summary
- .backward(outputs, gradients = nil, retain_graph = false, train_mode = true)
  Compute the gradients with respect to previously marked variables.
- .is_recording
  Gets status of recording/not recording.
- .is_training
  Gets status of training/predicting.
- .mark_variables(variables, gradients, grad_reqs = :write)
  Mark arrays as variables to compute gradients for autograd.
- .pause(train_mode = false, &)
  Creates a scope context for code that does not need gradients to be calculated.
- .predict_mode(&)
  Creates a scope context in which forward-pass behavior is set to inference mode, without changing the recording mode.
- .record(train_mode = true, &)
  Creates a scope context for code that needs gradients to be calculated.
- .train_mode(&)
  Creates a scope context in which forward-pass behavior is set to training mode, without changing the recording mode.
Class Method Detail
.backward(outputs, gradients = nil, retain_graph = false, train_mode = true)

Compute the gradients with respect to previously marked variables.
Parameters
- outputs (NDArray or Enumerable(NDArray)) Output arrays.
- gradients (NDArray or Enumerable(NDArray)) Gradients with respect to outputs.
- retain_graph (Bool, default false) Whether to keep the computation graph for differentiating again, instead of clearing the history and releasing memory (see the sketch below).
- train_mode (Bool, default true) Whether the backward pass is in training or predicting mode.
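A minimal sketch of retain_graph, reusing the style of the overview example (it assumes the usual MXNet semantics that only a retained graph may be differentiated a second time):

x = MXNet::NDArray.array([1, 2, 3], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
y = MXNet::Autograd.record do
  x * x
end
# Keep the computation graph so it can be differentiated again.
MXNet::Autograd.backward(y, retain_graph: true)
# A second pass is only valid because the graph was retained above.
MXNet::Autograd.backward(y)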
.mark_variables(variables, gradients, grad_reqs = :write)

Mark arrays as variables to compute gradients for autograd.
Parameters
- variables (NDArray or Enumerable(NDArray))
- gradients (NDArray or Enumerable(NDArray))
- grad_reqs (::Symbol or Enumerable(::Symbol), default :write; see the sketch below)
  - :write: gradient will be overwritten on every backward pass
  - :add: gradient will be added to the existing value on every backward pass
  - :null: do not compute gradient
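A minimal sketch of gradient accumulation with :add, per the semantics above (the values are illustrative):

x = MXNet::NDArray.array([1, 2], dtype: :float64)
g = MXNet::NDArray.array([0, 0], dtype: :float64)
# With :add, every backward pass adds into g instead of overwriting it.
MXNet::Autograd.mark_variables(x, g, grad_reqs: :add)
2.times do
  y = MXNet::Autograd.record { x * x }
  MXNet::Autograd.backward(y)
end
# g now holds 2 * (2 * x), where a single pass would leave 2 * x.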
.pause(train_mode = false, &)

Creates a scope context for code that does not need gradients to be calculated.
Parameters
- train_mode (Bool, default false) Whether the forward pass is in training or predicting mode.
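For example (a sketch; it assumes is_recording reflects the enclosing scope), pause can be nested inside record to keep side computations out of the graph:

x = MXNet::NDArray.array([1, 2], dtype: :float64)
g = MXNet::NDArray.array([0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
y = MXNet::Autograd.record do
  z = x * x
  MXNet::Autograd.pause do
    # Not recorded; no gradient flows through work done here.
    puts MXNet::Autograd.is_recording # => false
  end
  z + 1
end
MXNet::Autograd.backward(y)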
.predict_mode(&)

Creates a scope context in which forward-pass behavior is set to inference mode, without changing the recording mode.
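For example (a sketch, assuming is_training reports the current scope):

MXNet::Autograd.predict_mode do
  # Layers such as Dropout behave as at inference time here, while
  # the recording status is left unchanged.
  puts MXNet::Autograd.is_training # => false
end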
.record(train_mode = true, &)

Creates a scope context for code that needs gradients to be calculated.

When forwarding with train_mode = false, the corresponding .backward should also use train_mode = false; otherwise the gradient is undefined.
Parameters
- train_mode (Bool, default true) Whether the forward pass is in training or predicting mode. This controls the behavior of some layers, such as Dropout and BatchNorm.
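A sketch of the matching-mode rule described above (variable names are illustrative):

x = MXNet::NDArray.array([1, 2, 3], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
# Forward in predicting mode...
y = MXNet::Autograd.record(train_mode: false) do
  x * x
end
# ...so the backward pass must also use train_mode = false.
MXNet::Autograd.backward(y, train_mode: false)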
.train_mode(&)

Creates a scope context in which forward-pass behavior is set to training mode, without changing the recording mode.
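For example (a sketch, assuming is_training and is_recording report the current scope), train_mode flips forward-pass behavior without starting recording:

MXNet::Autograd.train_mode do
  puts MXNet::Autograd.is_training  # => true
  puts MXNet::Autograd.is_recording # => false (recording mode unchanged)
end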