ONE - On-device Neural Engine
onert::backend::cpu::ops::L2NormLayer Class Reference

#include <L2NormLayer.h>
Public Member Functions
  L2NormLayer ()
  void configure (const IPortableTensor *_input, IPortableTensor *output)
  void run () override
Public Member Functions inherited from onert::exec::IFunction
  virtual ~IFunction ()=default
  virtual void prepare ()
Definition at line 32 of file L2NormLayer.h.
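The header itself does not spell out the math, but given the kernels referenced under run() below, the layer presumably implements the standard L2_NORMALIZATION operation: each element along the innermost dimension is divided by the Euclidean norm of that dimension. This is an assumption based on the common definition of the operation, not on this file:

    y_i = \frac{x_i}{\sqrt{\sum_j x_j^2}}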
L2NormLayer()

onert::backend::cpu::ops::L2NormLayer::L2NormLayer ( )
inline
Definition at line 35 of file L2NormLayer.h.
configure()

void onert::backend::cpu::ops::L2NormLayer::configure (const IPortableTensor *_input, IPortableTensor *output)
Definition at line 33 of file L2NormLayer.cc.
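The sketch below shows how a caller might create, configure, and hand off this layer. It is only an illustration: the helper makeL2NormFunction and the way the tensors are obtained are hypothetical, not part of this API; only the L2NormLayer constructor, configure(), and the IFunction base are taken from the documentation above.

```cpp
// Hypothetical usage sketch; makeL2NormFunction is an illustrative helper,
// and the caller is assumed to own valid, allocated IPortableTensor objects.
#include <memory>
#include <L2NormLayer.h>

std::unique_ptr<onert::exec::IFunction>
makeL2NormFunction(const onert::backend::IPortableTensor *input,
                   onert::backend::IPortableTensor *output)
{
  auto fn = std::make_unique<onert::backend::cpu::ops::L2NormLayer>();
  fn->configure(input, output); // remember the input/output operand tensors
  return fn;                    // the executor later invokes run() during inference
}
```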
run()

void onert::backend::cpu::ops::L2NormLayer::run ()
override virtual
Implements onert::exec::IFunction.
Definition at line 42 of file L2NormLayer.cc.
References onert::backend::IPortableTensor::data_type(), onert::backend::IPortableTensor::data_zero_point(), onert::backend::cpu::ops::getShape(), nnfw::cker::L2NormParams::input_zero_point, nnfw::cker::L2NormalizeFloat32(), and nnfw::cker::L2NormalizeQuant8().
Referenced by package.infer.session::inference().
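Taken together, the references above suggest that run() dispatches on the input's data type: floating-point tensors are handled by nnfw::cker::L2NormalizeFloat32(), while asymmetric-quantized 8-bit tensors go to nnfw::cker::L2NormalizeQuant8() with the input zero point carried in nnfw::cker::L2NormParams. The following is only a hedged sketch of such a dispatch, not the body at L2NormLayer.cc line 42; the _input/_output member names, the exact data-type enumerators, the buffer accessors, and the kernel signatures are assumptions.

```cpp
// Illustrative dispatch consistent with the referenced kernels; assumes the
// includes already present in L2NormLayer.cc. Names marked below are guesses.
void L2NormLayer::run()
{
  switch (_input->data_type()) // _input/_output: assumed member tensors
  {
    case ir::DataType::FLOAT32: // enum name is an assumption
      nnfw::cker::L2NormalizeFloat32(
          getShape(_input), reinterpret_cast<const float *>(_input->buffer()),
          getShape(_output), reinterpret_cast<float *>(_output->buffer()));
      break;
    case ir::DataType::QUANT_UINT8_ASYMM: // enum name is an assumption
    {
      nnfw::cker::L2NormParams params;
      params.input_zero_point = _input->data_zero_point(); // quantization offset
      nnfw::cker::L2NormalizeQuant8(
          params, getShape(_input), reinterpret_cast<const uint8_t *>(_input->buffer()),
          getShape(_output), reinterpret_cast<uint8_t *>(_output->buffer()));
      break;
    }
    default:
      throw std::runtime_error{"L2Norm: unsupported data type"};
  }
}
```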