ONE - On-device Neural Engine
tflchef::TFliteImport Class Reference

Loads a TFLite file and provides helpers to access its attributes.
#include <TFliteImport.h>
Public Member Functions

    TFliteImport (const tflite::Model *model)
    TFliteImport ()=delete
    bool select_sub_graph (uint32_t subgraph)
    const TFliteBuffers_t * buffers ()
    const TFliteTensors_t * tensors ()
    const TFliteOperators_t * operators ()
    const std::vector< int32_t > & inputs () const
    const std::vector< int32_t > & outputs () const
    uint32_t num_subgraph () const
    tflite::BuiltinOperator builtin_code (const tflite::Operator *op) const
    std::string opcode_name (const tflite::Operator *op) const
    size_t buffer_info (const tflite::Tensor *tensor, const uint8_t **buff_data)
Public Member Functions inherited from souschef::TensorFiller

    virtual ~TensorFiller ()=default
    void set_tensor_filler (uint32_t tensor_index)
        Records the tensor by index if it needs a filler option, such as kernel or bias.
    void set_tensor_filler (uint32_t tensor_index, std::vector< int32_t > &expvalues)
        Stores int32 filler values for the tensor, such as reshape information.
    void set_tensor_filler (uint32_t tensor_index, std::vector< float > &expvalues)
    bool get_tensor_filler (uint32_t tensor_index)
        Returns true if the tensor by index needs a filler option.
    bool get_tensor_filler (uint32_t tensor_index, std::vector< int32_t > &expvalues)
        Returns true if the tensor by index needs an int array filler option.
    bool get_tensor_filler (uint32_t tensor_index, std::vector< float > &expvalues)
    void clear_tensor_filler ()
    void clear_tensor_filler_vint32 ()
    void clear_tensor_filler_vfloat ()
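A quick sketch (not from the souschef sources) of how this inherited filler interface is typically driven: tensors that need filler options are recorded while operators are scanned, and queried again later. The input layout below (index 1 = kernel, index 2 = bias) is an assumption for illustration only.

    #include <TFliteImport.h>

    // Record filler requirements for one operator's inputs.
    void record_fillers(tflchef::TFliteImport &import, const tflite::Operator *op)
    {
      const auto *op_inputs = op->inputs(); // tensor indices of this operator

      import.set_tensor_filler(op_inputs->Get(1)); // kernel tensor needs a filler
      import.set_tensor_filler(op_inputs->Get(2)); // bias tensor needs a filler
    }

    // Later, ask whether a given tensor was marked as needing a filler.
    bool needs_filler(tflchef::TFliteImport &import, uint32_t tensor_index)
    {
      return import.get_tensor_filler(tensor_index);
    }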
Detailed Description

Loads a TFLite file and provides helpers to access its attributes.
Definition at line 40 of file TFliteImport.h.
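Taken together, these members form a thin read-only view over a FlatBuffers-loaded model. Below is a minimal usage sketch; the file path, the manual buffer loading, and the generated tflite::GetModel() accessor are illustrative assumptions and not part of this class.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <vector>

    #include <TFliteImport.h>

    int main()
    {
      // Read a .tflite file into memory (path is hypothetical).
      std::ifstream file("model.tflite", std::ios::binary);
      std::vector<char> data((std::istreambuf_iterator<char>(file)),
                             std::istreambuf_iterator<char>());

      // GetModel() is the accessor generated from the TFLite schema.
      const tflite::Model *model = tflite::GetModel(data.data());

      tflchef::TFliteImport import(model);
      std::cout << "subgraphs: " << import.num_subgraph() << std::endl;

      // A subgraph must be selected before buffers()/tensors()/operators()
      // and inputs()/outputs() refer to anything meaningful.
      if (import.select_sub_graph(0))
      {
        for (const auto *op : *import.operators())
          std::cout << import.opcode_name(op) << std::endl;
      }
      return 0;
    }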
Constructor & Destructor Documentation

tflchef::TFliteImport::TFliteImport (const tflite::Model *model)
Definition at line 28 of file TFliteImport.cpp.
tflchef::TFliteImport::TFliteImport ()  [delete]
Member Function Documentation

size_t tflchef::TFliteImport::buffer_info (const tflite::Tensor *tensor, const uint8_t **buff_data)
Definition at line 101 of file TFliteImport.cpp.
References size.
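A short sketch of the usual call pattern (assuming import is a TFliteImport with a subgraph already selected, and that a returned size of 0 means the tensor has no constant data):

    // Walk the selected subgraph's tensors and look at their constant contents.
    for (const auto *tensor : *import.tensors())
    {
      const uint8_t *buff_data = nullptr;
      size_t size = import.buffer_info(tensor, &buff_data);
      if (size > 0)
      {
        // buff_data points at `size` bytes of raw tensor data (e.g. kernel
        // weights), valid while the loaded model buffer stays alive.
      }
    }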
const TFliteBuffers_t * tflchef::TFliteImport::buffers ()  [inline]

Definition at line 51 of file TFliteImport.h.
Referenced by tflchef::generate_recipe().
tflite::BuiltinOperator tflchef::TFliteImport::builtin_code (const tflite::Operator *op) const
Definition at line 67 of file TFliteImport.cpp.
References mio::tflite::builtin_code_neutral().
Referenced by tflchef::generate_recipe().
const std::vector< int32_t > & tflchef::TFliteImport::inputs () const  [inline]

Definition at line 54 of file TFliteImport.h.
Referenced by validate_onnx2circle.OnnxRunner::feed_random_inputs(), tflchef::generate_recipe(), and package.infer.session::set_inputs().
uint32_t tflchef::TFliteImport::num_subgraph () const  [inline]

Definition at line 57 of file TFliteImport.h.
References flatbuffers::Vector< T >::size().
Referenced by tflchef::generate_recipe().
std::string tflchef::TFliteImport::opcode_name (const tflite::Operator *op) const
Definition at line 76 of file TFliteImport.cpp.
References mio::tflite::builtin_code_neutral(), mio::tflite::is_custom(), and mio::tflite::is_valid().
Referenced by tflchef::generate_recipe().
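A small sketch contrasting the two lookups (reusing import and op from the earlier sketch): builtin_code() is convenient for dispatching on the schema enum, while opcode_name() produces a printable name and also covers custom operators.

    tflite::BuiltinOperator code = import.builtin_code(op);
    if (code == tflite::BuiltinOperator_CONV_2D)
    {
      // Convolution-specific handling would go here.
    }
    std::cout << "op: " << import.opcode_name(op) << std::endl;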
const TFliteOperators_t * tflchef::TFliteImport::operators ()  [inline]

Definition at line 53 of file TFliteImport.h.
Referenced by tflchef::generate_recipe().
const std::vector< int32_t > & tflchef::TFliteImport::outputs () const  [inline]

Definition at line 55 of file TFliteImport.h.
Referenced by tflchef::generate_recipe(), validate_onnx2circle.OnnxRunner::get_outputs(), package.infer.session::inference(), and package.infer.session::set_outputs().
bool tflchef::TFliteImport::select_sub_graph (uint32_t subgraph)
Definition at line 40 of file TFliteImport.cpp.
References tflchef::as_index_vector(), souschef::TensorFiller::clear_tensor_filler(), souschef::TensorFiller::clear_tensor_filler_vfloat(), souschef::TensorFiller::clear_tensor_filler_vint32(), and flatbuffers::Vector< T >::size().
Referenced by tflchef::generate_recipe().
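A brief sketch of the select-then-access pattern over every subgraph (again assuming import from the earlier sketch); note that each call also clears the inherited TensorFiller state, as the references above indicate.

    for (uint32_t g = 0; g < import.num_subgraph(); ++g)
    {
      if (!import.select_sub_graph(g))
        break; // defensive: stop if selection fails

      // buffers()/tensors()/operators()/inputs()/outputs() now describe subgraph g.
    }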
const TFliteTensors_t * tflchef::TFliteImport::tensors ()  [inline]

Definition at line 52 of file TFliteImport.h.
Referenced by tflchef::generate_recipe().