ONE - On-device Neural Engine
onert.experimental.train.optimizer.adam.Adam Class Reference
Collaboration diagram for onert.experimental.train.optimizer.adam.Adam:

Public Member Functions

None __init__ (self, float learning_rate=0.001, float beta1=0.9, float beta2=0.999, float epsilon=1e-7)
 

Detailed Description

Adam optimizer.

Definition at line 4 of file adam.py.

Constructor & Destructor Documentation

◆ __init__()

None onert.experimental.train.optimizer.adam.Adam.__init__ (self,
        float learning_rate = 0.001,
        float beta1 = 0.9,
        float beta2 = 0.999,
        float epsilon = 1e-7)
Initialize the Adam optimizer.

Args:
    learning_rate (float): The learning rate for optimization.
    beta1 (float): Exponential decay rate for the first moment estimates.
    beta2 (float): Exponential decay rate for the second moment estimates.
    epsilon (float): Small constant to prevent division by zero.
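
For context, these four arguments are the hyperparameters of the standard Adam update rule (Kingma & Ba, 2015). The update step itself is performed by the training backend, not by this Python class, so the equations below are shown only as a reference for what each argument controls:

    m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
    \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
    \theta_t = \theta_{t-1} - \mathrm{learning\_rate} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

Here beta1 and beta2 are the decay rates of the first- and second-moment estimates m_t and v_t, and epsilon keeps the denominator away from zero.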

Reimplemented from onert.experimental.train.optimizer.optimizer.Optimizer.

Definition at line 8 of file adam.py.

def __init__(self,
             learning_rate: float = 0.001,
             beta1: float = 0.9,
             beta2: float = 0.999,
             epsilon: float = 1e-7) -> None:
    """
    Initialize the Adam optimizer.

    Args:
        learning_rate (float): The learning rate for optimization.
        beta1 (float): Exponential decay rate for the first moment estimates.
        beta2 (float): Exponential decay rate for the second moment estimates.
        epsilon (float): Small constant to prevent division by zero.
    """
    super().__init__(learning_rate)
    self.beta1: float = beta1
    self.beta2: float = beta2
    self.epsilon: float = epsilon
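
A minimal usage sketch, assuming the onert Python package is installed and the class is importable under the fully qualified name shown above; the printed attribute names follow the assignments in __init__, while passing the optimizer to a training session is outside the scope of this page:

    from onert.experimental.train.optimizer.adam import Adam

    # Default hyperparameters: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-7
    opt = Adam()

    # Custom hyperparameters, e.g. a smaller learning rate for fine-tuning
    opt = Adam(learning_rate=1e-4, beta1=0.9, beta2=0.98, epsilon=1e-8)

    # The constructor stores the decay rates and epsilon on the instance
    print(opt.beta1, opt.beta2, opt.epsilon)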



The documentation for this class was generated from the following file:
adam.py