ONE - On-device Neural Engine

Public Member Functions

None __init__ (self, float learning_rate=0.001, float beta1=0.9, float beta2=0.999, float epsilon=1e-7)

None onert.experimental.train.optimizer.adam.Adam.__init__ (self, float learning_rate=0.001, float beta1=0.9, float beta2=0.999, float epsilon=1e-7)
Initialize the Adam optimizer.
Args:
learning_rate (float): Step size used for parameter updates.
beta1 (float): Exponential decay rate for the first moment estimates.
beta2 (float): Exponential decay rate for the second moment estimates.
epsilon (float): Small constant added to the denominator to prevent division by zero.
Reimplemented from onert.experimental.train.optimizer.optimizer.Optimizer.
Definition at line 8 of file adam.py.
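To illustrate how the four hyperparameters above interact, here is a minimal, generic sketch of one Adam update step for a single scalar parameter. This is not the onert implementation (the `adam_step` helper and its state-passing convention are assumptions for illustration); it only mirrors the documented defaults.

```python
import math

def adam_step(param, grad, m, v, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-7):
    """One generic Adam update for a single scalar parameter (sketch).

    m and v are the running first/second moment estimates; t is the
    1-based step count used for bias correction.
    """
    m = beta1 * m + (1.0 - beta1) * grad          # first moment: decayed mean of gradients
    v = beta2 * v + (1.0 - beta2) * grad * grad   # second moment: decayed mean of squared gradients
    m_hat = m / (1.0 - beta1 ** t)                # bias correction for the zero-initialized moments
    v_hat = v / (1.0 - beta2 ** t)
    # epsilon keeps the denominator nonzero when v_hat is tiny
    param -= learning_rate * m_hat / (math.sqrt(v_hat) + epsilon)
    return param, m, v
```

On the first step the bias correction cancels the decay factors exactly, so the parameter moves by roughly one full `learning_rate` in the direction opposite the gradient.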
References onert.experimental.train.optimizer.adam.Adam.__init__(), onert::backend::train::optimizer::Adam::Property.beta1, onert::backend::train::optimizer::Adam::Property.beta2, onert::backend::train::optimizer::Adam::Property.epsilon, nnfw_adam_option.beta2, and nnfw_adam_option.epsilon.
Referenced by onert.experimental.train.optimizer.adam.Adam.__init__().