ONE - On-device Neural Engine
#include <Executor.h>
Public Member Functions

int run (const Model &model, const Request &request, const std::vector< RunTimePoolInfo > &modelPoolInfos, const std::vector< RunTimePoolInfo > &requestPoolInfos)
Definition at line 80 of file Executor.h.
int Executor::run ( const Model &                          model,
                    const Request &                        request,
                    const std::vector< RunTimePoolInfo > & modelPoolInfos,
                    const std::vector< RunTimePoolInfo > & requestPoolInfos
                  )
Definition at line 126 of file Executor.cpp.
References ANEURALNETWORKS_NO_ERROR, and VLOG.
Referenced by package.infer.session::inference(), and ExecutionBuilder::startCompute().
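For context, a caller such as ExecutionBuilder::startCompute() would invoke this method with a fully prepared model, request, and memory pools. The sketch below is a minimal, hedged illustration of that call pattern: only Executor::run() and the ANEURALNETWORKS_NO_ERROR comparison come from this page; the executeOnce() wrapper, the assumption that Executor is default-constructible, and the idea that the caller has already populated the Model, Request, and RunTimePoolInfo objects are illustrative assumptions, not part of the documented API.

```cpp
// Hedged sketch of a call site for Executor::run().
// Assumptions (not documented on this page): Executor is default-constructible,
// and the caller has already built the Model, Request, and pool-info vectors.
#include <vector>
#include <Executor.h>

int executeOnce(const Model &model,
                const Request &request,
                const std::vector<RunTimePoolInfo> &modelPoolInfos,
                const std::vector<RunTimePoolInfo> &requestPoolInfos)
{
    Executor executor;  // assumption: default construction is sufficient

    // Run the model synchronously against the request's input/output pools.
    int status = executor.run(model, request, modelPoolInfos, requestPoolInfos);

    if (status != ANEURALNETWORKS_NO_ERROR) {
        // Propagate the NNAPI-style error code to the caller unchanged.
        return status;
    }
    return ANEURALNETWORKS_NO_ERROR;
}
```

In this sketch the wrapper simply forwards the NNAPI-style status code; whether run() may be called concurrently or reused across requests is not specified on this page.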