Base class for parameter schedulers. It should be inherited by all schedulers that schedule parameters in the optimizer’s param_groups. All subclasses should override _get_value() according to their own scheduling strategy (see the sketch after the parameter list below). The implementation is motivated by https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py.
Parameters:
optimizer (BaseOptimWrapper or Optimizer) – Wrapped optimizer.
param_name (str) – Name of the parameter to be adjusted, such as lr or momentum.
begin (int) – Step at which to start updating the parameters. Defaults to 0.
end (int) – Step at which to stop updating the parameters. Defaults to INF.
last_step (int) – The index of the last step, used for resuming without a state dict. The default value of -1 means the step function has never been called. Defaults to -1.
by_epoch (bool) – Whether the scheduled parameters are updated by epochs. Defaults to True.
verbose (bool) – Whether to print the value for each update. Defaults to False.
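As a concrete illustration, a subclass only needs to implement _get_value(), returning one scheduled value per param group. The sketch below is hypothetical: the import path and the base_values attribute (the initial values assumed to be recorded by the base class at construction) may differ across versions.

    # A minimal sketch of a custom scheduler, not a library-provided one.
    from mmengine.optim.scheduler.param_scheduler import _ParamScheduler

    class HalvedValueScheduler(_ParamScheduler):
        """Hypothetical scheduler holding the parameter at half its
        initial value for every step inside [begin, end)."""

        def _get_value(self):
            # One value per param_group; `base_values` is assumed to hold
            # the initial values captured by the base class.
            return [base * 0.5 for base in self.base_values]

    # Usage: schedules the learning rate of an existing optimizer.
    # scheduler = HalvedValueScheduler(optimizer, param_name='lr')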
get_last_value()
Return the last computed value by the current scheduler.
Returns: A list of the last computed values of the optimizer’s param_groups.
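A usage sketch, assuming scheduler is an instance of any subclass of this base class; with a single param group the returned list has one entry:

    last = scheduler.get_last_value()
    print(last)  # e.g. [0.005] for one param group scheduling 'lr'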
load_state_dict(state_dict)
Loads the scheduler’s state.
state_dict (dict) – Scheduler state. Should be an object returned from a call to state_dict().
print_value()
Display the current parameter value.
state_dict()
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer.
Returns: Scheduler state.
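The two methods above pair naturally for checkpointing. A sketch, using torch.save/torch.load as one possible serialization choice and a hypothetical file name:

    import torch

    # Save: state_dict() captures everything in self.__dict__ except the optimizer.
    torch.save({'scheduler': scheduler.state_dict()}, 'ckpt.pth')

    # Resume: restore the saved state into a freshly constructed scheduler.
    state = torch.load('ckpt.pth')
    scheduler.load_state_dict(state['scheduler'])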
step()
Adjusts the parameter value of each parameter group based on the specified schedule.
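A typical call pattern, sketched with hypothetical model, dataloader, and max_epochs names; with by_epoch=True the scheduler is stepped once per epoch, after the optimizer updates:

    for epoch in range(max_epochs):
        for inputs, targets in dataloader:   # hypothetical data loader
            loss = model(inputs, targets)    # hypothetical forward pass
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        scheduler.step()  # apply the schedule for the next epoch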