class MXNet::Gluon::Trainer

- MXNet::Gluon::Trainer
- Reference
- Object

Overview

Applies an Optimizer on a set of Parameters. Trainer should be used together with Autograd.

Defined in:
mxnet/gluon/trainer.cr

Constructors
- .new(params, optimizer, **optimizer_params)
  Creates a new instance.

Instance Method Summary

- #learning_rate
- #step(batch_size)
  Makes one step of parameter update.
- #update(batch_size)
  Makes one step of parameter update.
- #weight_decay
Constructor Detail
def self.new(params, optimizer, **optimizer_params)

Creates a new instance.

Parameters
- params (ParameterDict) The set of parameters to optimize.
- optimizer (Optimizer) The optimizer to use.
- optimizer_params (NamedTuple) Keyword arguments to be passed to the optimizer's constructor. See each Optimizer for a list of additional supported arguments common to all optimizers.
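For example, a minimal construction sketch (everything except Trainer.new itself is an assumption — the network, the way its parameters are collected, and the optimizer argument are illustrative, not confirmed API):

```crystal
# Sketch: construct a Trainer over a network's parameters.
# `net` and `#collect_params` are assumed names; the optimizer argument
# and its keyword options follow the signature documented above.
trainer = MXNet::Gluon::Trainer.new(
  net.collect_params,        # params: the ParameterDict to optimize
  MXNet::Optimizer::SGD.new, # optimizer: an Optimizer instance (class name assumed)
  lr: 0.01                   # optimizer_params: forwarded to the optimizer's constructor
)
```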
Instance Method Detail
def step(batch_size)

Makes one step of parameter update. This should be called after Autograd#backward and outside of Autograd.record.

Parameters
- batch_size (Int) Batch size of the data processed. The gradient will be normalized by 1/batch_size. Set this to 1 if you normalized the loss manually with loss = mean(loss).
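The ordering above — record the forward pass, call #backward, then call #step outside the recording scope — can be sketched as one training iteration (the data iterator, `net`, `loss_fn`, and the exact Autograd method names are assumptions; only the call ordering is taken from this page):

```crystal
# Sketch of one training step, assuming `trainer` was built as documented above.
data.each_batch do |x, y|
  loss = MXNet::Autograd.record do
    loss_fn.call(net.call(x), y) # forward pass, recorded for gradient computation
  end
  loss.backward                  # Autograd#backward: compute gradients
  trainer.step(x.shape[0])       # outside record: normalize by batch size, update params
end
```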
def update(batch_size)

Makes one step of parameter update. This should be called after Autograd#backward and outside of Autograd.record.

Parameters
- batch_size (Int) Batch size of the data processed. The gradient will be normalized by 1/batch_size. Set this to 1 if you normalized the loss manually with loss = mean(loss).