class MXNet::Autograd

Overview

Autograd for MXNet.

x = MXNet::NDArray.array([1, 2, 3, 4], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)  # gradients computed for x will be stored in g
y = MXNet::Autograd.record do         # record operations so they can be differentiated
  x * x + 1
end
MXNet::Autograd.backward(y)           # g now holds dy/dx = 2 * x, i.e. [2, 4, 6, 8]

Defined in:

mxnet/autograd.cr

Class Method Summary

  • .backward(outputs, gradients = nil, retain_graph = false, train_mode = true)
  • .is_recording
  • .is_training
  • .mark_variables(variables, gradients, grad_reqs = :write)
  • .pause(train_mode = false, &)
  • .predict_mode(&)
  • .record(train_mode = true, &)
  • .train_mode(&)

Class Method Detail

def self.backward(outputs, gradients = nil, retain_graph = false, train_mode = true)

Compute the gradients with respect to previously marked variables.

Parameters

  • outputs (NDArray or Enumerable(NDArray)) Output arrays.
  • gradients (NDArray or Enumerable(NDArray)) Gradients with respect to outputs.
  • retain_graph (Bool, default false) Whether to keep the computation graph so it can be differentiated again, instead of clearing the history and releasing memory.
  • train_mode (Bool, default true) Whether the backward pass is in training or predicting mode.

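For example, a minimal sketch showing head gradients and retain_graph (expected values are shown in comments):

x = MXNet::NDArray.array([1, 2, 3], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
y = MXNet::Autograd.record { x * x }
head = MXNet::NDArray.array([2, 2, 2], dtype: :float64)
MXNet::Autograd.backward(y, head, retain_graph: true)  # g == head * dy/dx = [4, 8, 12]
MXNet::Autograd.backward(y)                            # valid because the graph was retained;
                                                       # g is overwritten with dy/dx = [2, 4, 6]
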
def self.is_recording

Returns true if autograd is currently recording computation, and false otherwise.

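For example (expected values are shown in comments):

MXNet::Autograd.is_recording  # => false
MXNet::Autograd.record do
  MXNet::Autograd.is_recording  # => true
end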

def self.is_training

Returns true if autograd is in training mode, and false if it is in predicting mode.

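For example (expected values are shown in comments):

MXNet::Autograd.is_training  # => false
MXNet::Autograd.train_mode do
  MXNet::Autograd.is_training  # => true
end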

def self.mark_variables(variables, gradients, grad_reqs = :write)

Mark arrays as variables to compute gradients for autograd.

Parameters

  • variables (NDArray or Enumerable(NDArray)) Arrays to mark as variables whose gradients should be computed.
  • gradients (NDArray or Enumerable(NDArray)) Arrays in which the computed gradients will be stored, one per variable.
  • grad_reqs (::Symbol or Enumerable(::Symbol), default :write)
    • :write: gradient will be overwritten on every backward pass
    • :add: gradient will be added to existing value on every backward pass
    • :null: do not compute gradient

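For example, a minimal sketch of gradient accumulation with grad_reqs: :add (expected values are shown in comments):

x = MXNet::NDArray.array([1, 2], dtype: :float64)
g = MXNet::NDArray.array([0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g, grad_reqs: :add)
2.times do
  y = MXNet::Autograd.record { x * x }
  MXNet::Autograd.backward(y)
end
# with :add, the two passes accumulate into g: 2 * (2 * x) = [4, 8]
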
def self.pause(train_mode = false, &)

Creates a scope context for code that does not need gradients to be calculated.

Parameters

  • train_mode (Bool, default false) Whether the forward pass is in training or predicting mode.

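For example (expected values are shown in comments):

MXNet::Autograd.record do
  MXNet::Autograd.is_recording  # => true
  MXNet::Autograd.pause do
    MXNet::Autograd.is_recording  # => false
  end
end
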
def self.predict_mode(&)

Creates a scope context in which forward pass behavior is set to inference mode, without changing the recording mode.

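For example (expected values are shown in comments):

MXNet::Autograd.record do          # train_mode defaults to true
  MXNet::Autograd.is_training      # => true
  MXNet::Autograd.predict_mode do
    MXNet::Autograd.is_training    # => false
    MXNet::Autograd.is_recording   # => true (recording is unchanged)
  end
end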

def self.record(train_mode = true, &)

Creates a scope context for code that needs gradients to be calculated.

When the forward pass is run with train_mode = false, the corresponding call to .backward should also use train_mode = false; otherwise the gradient is undefined.

Parameters

  • train_mode (Bool, default true) Whether the forward pass is in training or predicting mode. This controls the behavior of some layers such as Dropout and BatchNorm.

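For example, a minimal sketch of forwarding and differentiating with train_mode = false (both calls must agree, as noted above):

x = MXNet::NDArray.array([1, 2, 3], dtype: :float64)
g = MXNet::NDArray.array([0, 0, 0], dtype: :float64)
MXNet::Autograd.mark_variables(x, g)
y = MXNet::Autograd.record(train_mode: false) do
  x * x
end
MXNet::Autograd.backward(y, train_mode: false)  # g == dy/dx = 2 * x = [2, 4, 6]
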
def self.train_mode(&)

Creates a scope context in which forward pass behavior is set to training mode, without changing the recording mode.

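For example (expected values are shown in comments):

MXNet::Autograd.train_mode do
  MXNet::Autograd.is_training   # => true
  MXNet::Autograd.is_recording  # => false (recording is unchanged)
end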
