ONE - On-device Neural Engine
Data Structures
class | BackPropAccumulator
class | BackPropInitializer
class | BinaryArithmeticLayer
class | ConvolutionLayer
class | DepthwiseConvolutionLayer
class | ElementwiseActivationLayer
class | FullyConnectedLayer
class | GradientApplier
class | LossCategoricalCrossentropyLayer
class | LossLayer
class | LossMeanSquaredErrorLayer
class | MeanLayer
class | PadLayer
class | PoolLayer
class | ReshapeLayer
class | SoftMaxLayer
class | TrainingKernelRegistry
Typedefs

using | OperandType = onert::ir::DataType
Enumerations

enum class | ArithmeticType { kAdd, kSub, kMul, kDiv }
enum class | ElementwiseActivationType { kReLU }
enum class | LossType { kMSE }
enum class | PoolType { kMax, kAvg }
Functions

nnfw::cker::Shape | getShape (const IPortableTensor *tensor)
    Get shape of tensor.
const IPortableTensor * | backpropActivation (const ir::Activation &activation, const IPortableTensor *output, const IPortableTensor *input_backprop, IPortableTensor *output_backprop)
    Backpropagate activation.
void | biasGrad (const IPortableTensor *input_backprop, IPortableTensor *bias_grad)
    Backpropagate bias.
nnfw::cker::train::LossReductionType | convertLossReductionType (ir::train::LossReductionType type)
    Convert loss reduction type.
using onert::backend::train::ops::OperandType = typedef onert::ir::DataType |
Definition at line 33 of file OperationUtils.h.
enum class onert::backend::train::ops::ArithmeticType

Enumerator:
  kAdd
  kSub
  kMul
  kDiv

Definition at line 35 of file BinaryArithmeticLayer.h.

enum class onert::backend::train::ops::ElementwiseActivationType

Enumerator:
  kReLU

enum class onert::backend::train::ops::LossType

Enumerator:
  kMSE

Definition at line 35 of file LossLayer.h.

enum class onert::backend::train::ops::PoolType

Enumerator:
  kMax
  kAvg
const IPortableTensor * onert::backend::train::ops::backpropActivation (const ir::Activation &activation, const IPortableTensor *output, const IPortableTensor *input_backprop, IPortableTensor *output_backprop)
Backpropagate activation.

      -- forward direction -->
    [ current layer ] --- [ next layer ]
    [   op   |   act   ]
      <-- backward direction --

Parameters:
  activation       activation of current layer
  output           forward direction's output of current layer
  input_backprop   backward direction's output of next layer; in other words, the incoming gradient to the current layer
  output_backprop  backward direction's output of the activation; in other words, the outgoing gradient of the current layer's activation. If activation is NONE, this parameter can be nullptr.
Definition at line 50 of file OperationUtils.cc.
References getShape(), onert::ir::NONE, onert::ir::RELU, onert::ir::RELU6, nnfw::cker::train::ReLU6Grad(), and nnfw::cker::train::ReLUGrad().
Referenced by onert::backend::train::ops::BinaryArithmeticLayer::backward().
void onert::backend::train::ops::biasGrad (const IPortableTensor *input_backprop, IPortableTensor *bias_grad)

Backpropagate bias.

Parameters:
  input_backprop  backward direction's output of next layer; in other words, the incoming gradient to the current layer
  bias_grad       gradient tensor of bias
Definition at line 86 of file OperationUtils.cc.
References nnfw::cker::functor::biasReductionHelper(), onert::backend::ITensor::buffer(), and getShape().
nnfw::cker::train::LossReductionType onert::backend::train::ops::convertLossReductionType (ir::train::LossReductionType type)

Convert loss reduction type.

Parameters:
  type  loss reduction type defined in ir::train::LossReductionType
Definition at line 100 of file OperationUtils.cc.
References nnfw::cker::train::SUM, onert::ir::train::Sum, nnfw::cker::train::SUM_OVER_BATCH_SIZE, and onert::ir::train::SumOverBatchSize.
Referenced by onert::backend::train::ops::LossCategoricalCrossentropyLayer::backward(), and onert::backend::train::ops::LossMeanSquaredErrorLayer::backward().
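The conversion is a direct one-to-one mapping from the IR-side enum to the kernel-side enum. A sketch using stand-in enums (the real types are ir::train::LossReductionType and nnfw::cker::train::LossReductionType; the enum shapes here are assumptions based on the referenced enumerators Sum/SumOverBatchSize and SUM/SUM_OVER_BATCH_SIZE):

```cpp
#include <cassert>
#include <stdexcept>

// Stand-in enums mirroring the IR-side and kernel-side reduction types.
enum class IrLossReductionType { SumOverBatchSize, Sum };
enum class CkerLossReductionType { SUM_OVER_BATCH_SIZE, SUM };

// Hypothetical analogue of convertLossReductionType: a simple switch.
CkerLossReductionType convertLossReductionTypeSketch(IrLossReductionType type)
{
  switch (type)
  {
    case IrLossReductionType::SumOverBatchSize:
      return CkerLossReductionType::SUM_OVER_BATCH_SIZE;
    case IrLossReductionType::Sum:
      return CkerLossReductionType::SUM;
    default:
      throw std::invalid_argument("unsupported loss reduction type");
  }
}
```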
nnfw::cker::Shape onert::backend::train::ops::getShape (const IPortableTensor *tensor)

Get shape of tensor.

Parameters:
  tensor  tensor to get the shape of
Definition at line 32 of file OperationUtils.cc.
References nnfw::cker::Shape::DimsData().
Referenced by backpropActivation(), onert::backend::train::ops::BackPropAccumulator::backward(), onert::backend::train::ops::BinaryArithmeticLayer::backward(), onert::backend::train::ops::LossCategoricalCrossentropyLayer::backward(), onert::backend::train::ops::LossMeanSquaredErrorLayer::backward(), onert::backend::train::ops::MeanLayer::backward(), onert::backend::train::ops::SoftMaxLayer::backward(), biasGrad(), onert::backend::train::ops::ElementwiseActivationLayer::configureBackward(), onert::backend::train::ops::DepthwiseConvolutionLayer::configureBackward(), onert::backend::train::ops::PadLayer::depad(), onert::backend::train::ops::LossCategoricalCrossentropyLayer::forward(), and onert::backend::train::ops::LossMeanSquaredErrorLayer::forward().
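The helper copies the tensor's dimensions into a nnfw::cker::Shape so the cker training kernels can consume them. The essence can be sketched with hypothetical stand-ins for IPortableTensor and nnfw::cker::Shape (names and layout here are illustrative, not the onert API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for nnfw::cker::Shape: the dimension list plus a flat size.
struct ShapeSketch
{
  std::vector<int32_t> dims;

  // Total number of elements: the product of all dimensions.
  int64_t flatSize() const
  {
    int64_t size = 1;
    for (int32_t d : dims)
      size *= d;
    return size;
  }
};

// Hypothetical getShape analogue: copy each dimension off the tensor.
ShapeSketch getShapeSketch(const std::vector<int32_t> &tensor_dims)
{
  return ShapeSketch{tensor_dims};
}
```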