ONE - On-device Neural Engine
Functions

float get_memory_usage_mb ()
List[List[int]] parse_shapes (List[str] shape_strs)
List[tensorinfo] get_validated_input_tensorinfos (infer.session sess, List[List[int]] static_shapes)
benchmark_inference (str nnpackage_path, str backends, List[List[int]] input_shapes, int repeat)
main ()
inference_benchmark.benchmark_inference (str nnpackage_path, str backends, List[List[int]] input_shapes, int repeat)
Definition at line 50 of file inference_benchmark.py.
References get_memory_usage_mb() and get_validated_input_tensorinfos().
Referenced by main().
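The sketch below illustrates the measurement pattern such a benchmark typically follows: a warm-up run, a timed loop over `repeat` inferences, and a before/after memory comparison. It is a minimal sketch, not the actual implementation; the onert session call is stood in for by a generic zero-argument callable, and psutil is assumed for memory measurement.

# Minimal sketch of the benchmarking pattern, assuming psutil for memory
# measurement; `run_inference_once` stands in for the onert session call,
# whose exact Python API is not shown in this listing.
import time
from typing import Callable, Dict, List

import psutil

def benchmark(run_inference_once: Callable[[], None], repeat: int) -> Dict[str, float]:
    proc = psutil.Process()
    mem_before_mb = proc.memory_info().rss / (1024 * 1024)

    run_inference_once()  # warm-up run, excluded from timing

    latencies_ms: List[float] = []
    for _ in range(repeat):
        start = time.perf_counter()
        run_inference_once()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    mem_after_mb = proc.memory_info().rss / (1024 * 1024)
    return {
        "avg_latency_ms": sum(latencies_ms) / len(latencies_ms),
        "memory_increase_mb": mem_after_mb - mem_before_mb,
    }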
float inference_benchmark.get_memory_usage_mb ( )
Get current process memory usage in MB.
Definition at line 11 of file inference_benchmark.py.
Referenced by benchmark_inference().
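One common way to implement such a helper is shown below. This is a hedged sketch assuming psutil; the actual code may use a different mechanism (for example, reading /proc/self/status).

# Sketch only: report the current process's resident set size in MB.
import psutil

def get_memory_usage_mb() -> float:
    """Return the resident set size of the current process in megabytes."""
    rss_bytes = psutil.Process().memory_info().rss
    return rss_bytes / (1024 * 1024)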
List[tensorinfo] inference_benchmark.get_validated_input_tensorinfos (infer.session sess, List[List[int]] static_shapes)
Definition at line 27 of file inference_benchmark.py.
Referenced by benchmark_inference().
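The idea behind a helper like this is to take the session's reported input tensorinfos and substitute user-supplied static shapes for any dynamic dimensions, failing early on a count or rank mismatch. The sketch below is an assumption-laden illustration: the accessor name on `sess` and the `rank`/`dims` fields of `tensorinfo` are hypothetical, not the confirmed onert Python API.

# Hedged sketch; `sess.get_inputs_tensorinfo()`, `info.rank`, and `info.dims`
# are assumed names, not the verified onert API.
from typing import List

def get_validated_input_tensorinfos(sess, static_shapes: List[List[int]]):
    infos = sess.get_inputs_tensorinfo()  # hypothetical accessor
    if len(static_shapes) != len(infos):
        raise ValueError(
            f"expected {len(infos)} input shapes, got {len(static_shapes)}")
    for info, shape in zip(infos, static_shapes):
        if len(shape) != info.rank:
            raise ValueError(f"rank mismatch: shape {shape} vs rank {info.rank}")
        info.dims = shape  # replace dynamic dims with the static shape
    return infos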
inference_benchmark.main ( )
Definition at line 102 of file inference_benchmark.py.
References benchmark_inference(), main(), and parse_shapes().
Referenced by main().
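An entry point like this typically parses command-line arguments and hands them to benchmark_inference(). The sketch below is illustrative only: the flag names and defaults are assumptions, not the tool's documented CLI, and the module's parse_shapes() and benchmark_inference() are assumed to be in scope.

# Illustrative CLI sketch; flag names and defaults are assumptions.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Benchmark an nnpackage with onert")
    parser.add_argument("nnpackage_path", help="path to the nnpackage")
    parser.add_argument("--backends", default="cpu",
                        help="backend string passed to the session")
    parser.add_argument("--shapes", nargs="*", default=[],
                        help='input shapes, e.g. "1,224,224,3"')
    parser.add_argument("--repeat", type=int, default=10,
                        help="number of timed inference runs")
    args = parser.parse_args()

    input_shapes = parse_shapes(args.shapes)  # module helper, see below
    benchmark_inference(args.nnpackage_path, args.backends, input_shapes,
                        args.repeat)

if __name__ == "__main__":
    main()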
List[List[int]] inference_benchmark.parse_shapes (List[str] shape_strs)
Definition at line 17 of file inference_benchmark.py.
Referenced by main().
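A plausible reading of this helper is that each shape string is a comma-separated list of integers, such as "1,224,224,3". The sketch below assumes that syntax; the exact accepted format is not shown in this listing.

# Sketch: convert shape strings like "1,3,224,224" into lists of ints.
from typing import List

def parse_shapes(shape_strs: List[str]) -> List[List[int]]:
    return [[int(dim) for dim in s.split(",")] for s in shape_strs]

# Example: parse_shapes(["1,3,224,224", "1,10"]) -> [[1, 3, 224, 224], [1, 10]]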