ONE - On-device Neural Engine
onert::backend::cpu::ops::ArgMinMaxLayer Class Reference
#include <ArgMinMaxLayer.h>
Public Member Functions

  ArgMinMaxLayer ()
  void configure (const IPortableTensor *indices, IPortableTensor *output, const IPortableTensor *axis, bool is_arg_max)
  void run () override

Public Member Functions inherited from onert::exec::IFunction

  virtual ~IFunction ()=default
  virtual void prepare ()
Definition at line 33 of file ArgMinMaxLayer.h.
ArgMinMaxLayer::ArgMinMaxLayer () [inline]

Definition at line 36 of file ArgMinMaxLayer.h.
void onert::backend::cpu::ops::ArgMinMaxLayer::configure (const IPortableTensor *indices, IPortableTensor *output, const IPortableTensor *axis, bool is_arg_max)
Definition at line 47 of file ArgMinMaxLayer.cc.
void onert::backend::cpu::ops::ArgMinMaxLayer::run () [override, virtual]
Implements onert::exec::IFunction.
Definition at line 56 of file ArgMinMaxLayer.cc.
References onert::backend::IPortableTensor::data_type(), onert::backend::IPortableTensor::getShape(), TF_LITE_ARG_MIN_MAX, and onert::backend::IPortableTensor::total_size().
Referenced by package.infer.session::inference().