ONE - On-device Neural Engine
onert::backend::cpu::ops::UnpackLayer Class Reference
#include <UnpackLayer.h>
Public Member Functions
  UnpackLayer ()
  void configure (const IPortableTensor *input, uint32_t axis, int32_t num_output, std::vector< IPortableTensor * > &output)
  void run () override

Public Member Functions inherited from onert::exec::IFunction
  virtual ~IFunction ()=default
  virtual void prepare ()
Definition at line 33 of file UnpackLayer.h.
onert::backend::cpu::ops::UnpackLayer::UnpackLayer ()
Definition at line 32 of file UnpackLayer.cc.
void onert::backend::cpu::ops::UnpackLayer::configure (const IPortableTensor *input, uint32_t axis, int32_t num_output, std::vector< IPortableTensor * > &output)
Definition at line 65 of file UnpackLayer.cc.
void onert::backend::cpu::ops::UnpackLayer::run () [override, virtual]
Implements onert::exec::IFunction.
Definition at line 78 of file UnpackLayer.cc.
References onert::backend::IPortableTensor::data_type().
Referenced by package.infer.session::inference().
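For orientation, the sketch below shows how such a kernel can be driven: configure() binds the input tensor, the unpack axis, and the per-slice output tensors, and run() then executes the operation. The helper function runUnpack is hypothetical and not part of the documented API; it assumes the tensors have already been allocated elsewhere in the runtime (inside onert these layers are normally created by the CPU backend's kernel generator rather than by hand).

```cpp
#include <UnpackLayer.h>

#include <cstdint>
#include <vector>

// Illustrative helper (assumption, not part of the documented API):
// wires already-allocated tensors into an UnpackLayer and runs it once.
// The input and output tensors are assumed to come from the backend's
// tensor registry; they are not created here.
void runUnpack(const onert::backend::IPortableTensor *input,
               std::vector<onert::backend::IPortableTensor *> &outputs,
               uint32_t axis)
{
  onert::backend::cpu::ops::UnpackLayer kernel;

  // num_output is the number of slices produced along `axis`;
  // each output tensor receives one slice of the input.
  kernel.configure(input, axis, static_cast<int32_t>(outputs.size()), outputs);

  // run() dispatches on input->data_type() (see the reference above)
  // and writes each slice into the corresponding output tensor.
  kernel.run();
}
```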