ONE - On-device Neural Engine
onert::backend::cpu::ops::OneHotLayer Class Reference
#include <OneHotLayer.h>
Public Member Functions

  OneHotLayer ()

  template<typename T>
  void oneHotImpl ()

  void configure (const IPortableTensor *indices, const IPortableTensor *depth, const IPortableTensor *on_value, const IPortableTensor *off_value, IPortableTensor *output, int32_t axis)

  void run () override

Public Member Functions inherited from onert::exec::IFunction

  virtual ~IFunction ()=default

  virtual void prepare ()
Definition at line 33 of file OneHotLayer.h.

Constructor & Destructor Documentation

OneHotLayer()

onert::backend::cpu::ops::OneHotLayer::OneHotLayer ()  [inline]

Definition at line 36 of file OneHotLayer.h.
Member Function Documentation

configure()

void onert::backend::cpu::ops::OneHotLayer::configure (const IPortableTensor *indices, const IPortableTensor *depth, const IPortableTensor *on_value, const IPortableTensor *off_value, IPortableTensor *output, int32_t axis)

Definition at line 40 of file OneHotLayer.cc.
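
The snippet below is a rough usage sketch, not code from the ONE repository: it assumes the operand tensor pointers (indices, depth, on_value, off_value, output) have already been obtained elsewhere (e.g. from the backend's tensor registry), and the makeOneHotKernel helper is hypothetical. It only exercises the documented configure()/run() interface.

  #include <memory>
  #include <OneHotLayer.h> // declares onert::backend::cpu::ops::OneHotLayer

  using onert::backend::IPortableTensor;
  using onert::backend::cpu::ops::OneHotLayer;

  // Hypothetical helper for illustration: bind the operands and axis once,
  // then the returned kernel's run() can be invoked on every inference.
  std::unique_ptr<OneHotLayer> makeOneHotKernel(const IPortableTensor *indices,
                                                const IPortableTensor *depth,
                                                const IPortableTensor *on_value,
                                                const IPortableTensor *off_value,
                                                IPortableTensor *output, int32_t axis)
  {
    auto layer = std::make_unique<OneHotLayer>();
    layer->configure(indices, depth, on_value, off_value, output, axis);
    return layer;
  }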
oneHotImpl()

template<typename T>
void onert::backend::cpu::ops::OneHotLayer::oneHotImpl ()

Definition at line 32 of file OneHotLayer.cc.

References onert::backend::cpu::ops::getShape().
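
For orientation, the semantics of the one-hot operation this kernel computes can be restated in standalone C++ as below. This is an illustrative sketch of the behaviour implied by configure()'s parameters (indices, depth, on_value, off_value, axis), restricted to a 1-D index vector with the new axis appended last; it is not the body of oneHotImpl().

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // One-hot over a 1-D index vector, new axis last (axis == -1): each index
  // expands to `depth` values that are off_value everywhere except at
  // position indices[i], which receives on_value.
  template <typename T>
  std::vector<T> oneHot1D(const std::vector<int32_t> &indices, int32_t depth,
                          T on_value, T off_value)
  {
    std::vector<T> out(indices.size() * depth, off_value);
    for (std::size_t i = 0; i < indices.size(); ++i)
    {
      const int32_t idx = indices[i];
      if (idx >= 0 && idx < depth) // out-of-range indices stay all-off
        out[i * depth + idx] = on_value;
    }
    return out;
  }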
run()

void onert::backend::cpu::ops::OneHotLayer::run ()  [override virtual]

Implements onert::exec::IFunction.

Definition at line 52 of file OneHotLayer.cc.

References onert::backend::IPortableTensor::data_type().

Referenced by package.infer.session::inference().
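
Since oneHotImpl() is a template and run() consults IPortableTensor::data_type(), the override presumably dispatches to the matching instantiation based on the element type. The self-contained sketch below illustrates that common dispatch pattern; the DataType enum and the OneHotKernelSketch type are stand-ins invented for this example and do not appear in the ONE sources.

  #include <cstdint>
  #include <stdexcept>

  enum class DataType { Float32, Int32 }; // stand-in for the backend's type enum

  struct OneHotKernelSketch
  {
    DataType output_type;

    // Placeholder for the typed kernel body (fills the output with on/off
    // values); the real work lives in OneHotLayer::oneHotImpl<T>().
    template <typename T> void oneHotImpl() {}

    void run()
    {
      switch (output_type) // pick the instantiation matching the output element type
      {
        case DataType::Float32:
          oneHotImpl<float>();
          break;
        case DataType::Int32:
          oneHotImpl<int32_t>();
          break;
        default:
          throw std::runtime_error{"OneHot: unsupported output type"};
      }
    }
  };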