ONE - On-device Neural Engine

onert::backend::cpu::ops::QuantizeLayer Class Reference

#include <QuantizeLayer.h>
Public Member Functions

  QuantizeLayer ()
  void configure (const IPortableTensor *input, IPortableTensor *output)
  void run () override

Public Member Functions inherited from onert::exec::IFunction

  virtual ~IFunction ()=default
  virtual void prepare ()
Definition at line 26 of file QuantizeLayer.h.
Constructor & Destructor Documentation

onert::backend::cpu::ops::QuantizeLayer::QuantizeLayer ()  [inline]

Definition at line 29 of file QuantizeLayer.h.
Member Function Documentation

void onert::backend::cpu::ops::QuantizeLayer::configure (const IPortableTensor *input, IPortableTensor *output)
Definition at line 36 of file QuantizeLayer.cc.
References onert::backend::IPortableTensor::data_type(), and onert::backend::cpu::ops::QuantizeMultiplier().
void onert::backend::cpu::ops::QuantizeLayer::run ()  [override], [virtual]
Implements onert::exec::IFunction.
Definition at line 63 of file QuantizeLayer.cc.
References onert::backend::IPortableTensor::data_type(), onert::backend::IPortableTensor::data_zero_point(), onert::backend::cpu::ops::getShape(), MatchingFlatSize(), nnfw::cker::Requantize< int8_t, uint8_t >(), and nnfw::cker::Requantize< uint8_t, int8_t >().