ONE - On-device Neural Engine
#include <CLSplitVEx.h>
Public Member Functions

CLSplitVEx ()
void configure (const ICLTensor *input, const ICLTensor *size_splits, uint32_t split_dim, const std::vector< ICLTensor * > &outputs, unsigned int num_splits)
void run () override
Basic function to run CLSplitVKernel
Definition at line 57 of file CLSplitVEx.h.
CLSplitVEx::CLSplitVEx ( )
Default constructor
Definition at line 156 of file CLSplitVEx.cpp.
void CLSplitVEx::configure ( const ICLTensor * input, const ICLTensor * size_splits, uint32_t split_dim, const std::vector< ICLTensor * > & outputs, unsigned int num_splits )
Configure the split CL kernel

Parameters
[in]  input        The input tensor to split. Data types supported: U8/S8/QASYMM8/U16/S16/F16/U32/S32/F32
[in]  size_splits  A 1-D tensor containing the number of tensor values per split
[out] outputs      A vector containing the output tensors. Data types supported: Same as input. The output tensors should match the input tensor dimensions for all shape dimensions apart from the split dimension.
[in]  split_dim    Integer value representing the input tensor dimension along which to split
[in]  num_splits   Number of splits
Definition at line 161 of file CLSplitVEx.cpp.
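A minimal usage sketch, assuming CLSplitVEx lives in the arm_compute namespace and follows the usual Compute Library configure/allocate/run flow. The shapes, the split sizes {2, 3, 3}, the S32 element type for size_splits, and the point at which size_splits is filled relative to configure() are illustrative assumptions, not requirements stated by this page.

#include <vector>

#include <arm_compute/core/TensorInfo.h>
#include <arm_compute/core/Types.h>
#include <arm_compute/runtime/CL/CLScheduler.h>
#include <arm_compute/runtime/CL/CLTensor.h>

#include <CLSplitVEx.h>

using namespace arm_compute;

int main()
{
    // Standard Compute Library setup: initialise the default OpenCL context/queue.
    CLScheduler::get().default_init();

    // Illustrative case (assumption): split an 8x4 F32 tensor into three pieces of
    // 2, 3 and 3 elements along dimension 0. size_splits is the 1-D tensor of sizes.
    CLTensor input, size_splits, out0, out1, out2;
    input.allocator()->init(TensorInfo(TensorShape(8U, 4U), 1, DataType::F32));
    size_splits.allocator()->init(TensorInfo(TensorShape(3U), 1, DataType::S32));
    out0.allocator()->init(TensorInfo(TensorShape(2U, 4U), 1, DataType::F32));
    out1.allocator()->init(TensorInfo(TensorShape(3U, 4U), 1, DataType::F32));
    out2.allocator()->init(TensorInfo(TensorShape(3U, 4U), 1, DataType::F32));

    // Fill size_splits with the per-split sizes before configuring, in case the
    // function reads them when deriving the output shapes.
    size_splits.allocator()->allocate();
    size_splits.map();
    auto *sizes = reinterpret_cast<int32_t *>(size_splits.buffer());
    sizes[0] = 2;
    sizes[1] = 3;
    sizes[2] = 3;
    size_splits.unmap();

    std::vector<ICLTensor *> outputs{&out0, &out1, &out2};

    CLSplitVEx split;
    split.configure(&input, &size_splits, /*split_dim=*/0U, outputs, /*num_splits=*/3U);

    // Allocate the remaining CL buffers, fill the input, then run the kernels.
    input.allocator()->allocate();
    out0.allocator()->allocate();
    out1.allocator()->allocate();
    out2.allocator()->allocate();

    split.run();               // enqueues the underlying CLSplitVKernel work
    CLScheduler::get().sync(); // wait for the OpenCL queue to drain

    return 0;
}

The output tensors are pre-initialised by the caller so that every dimension except the split dimension mirrors the input, matching the contract described in the parameter list above.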
void CLSplitVEx::run ( ) override
Definition at line 190 of file CLSplitVEx.cpp.
Referenced by package.infer.session::inference().