ONE - On-device Neural Engine
luci::sinf::TensorShapeExpander Class Reference (final)

Create a higher-rank TensorShape following NumPy broadcasting semantics. More...

#include <CircleShapeInferenceHelper.h>

Public Member Functions

 TensorShapeExpander (const loco::TensorShape &shape)
 
loco::TensorShape to (uint32_t output_rank)
 

Detailed Description

Create a higher-rank TensorShape following NumPy broadcasting semantics.

HOW TO USE:

auto expanded_tensor_shape = expand(tensor_shape).to(N);

Definition at line 62 of file CircleShapeInferenceHelper.h.
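The one-liner above goes through an expand() helper from the same header; the sketch below shows the equivalent call through the documented constructor. The rank-2 input dims {3, 4}, the target rank 4, and the loco include path are assumptions made for illustration only.

#include <CircleShapeInferenceHelper.h>
#include <loco/IR/TensorShape.h>

loco::TensorShape make_expanded_example(void)
{
  // Rank-2 input shape {3, 4}; the concrete dims are illustrative only.
  loco::TensorShape tensor_shape;
  tensor_shape.rank(2);
  tensor_shape.dim(0) = 3;
  tensor_shape.dim(1) = 4;

  // Same effect as the documented one-liner: expand(tensor_shape).to(4)
  luci::sinf::TensorShapeExpander expander(tensor_shape);
  return expander.to(4); // -> {1, 1, 3, 4}
}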

Constructor & Destructor Documentation

◆ TensorShapeExpander()

luci::sinf::TensorShapeExpander::TensorShapeExpander ( const loco::TensorShape & shape)
inline

Definition at line 65 of file CircleShapeInferenceHelper.h.

  : _shape{shape}
{
  // DO NOTHING
}

Member Function Documentation

◆ to()

loco::TensorShape luci::sinf::TensorShapeExpander::to ( uint32_t  output_rank)
inline

Definition at line 71 of file CircleShapeInferenceHelper.h.

{
  auto const &input_shape = _shape;
  uint32_t const input_rank = input_shape.rank();

  assert(input_rank <= output_rank && "Cannot shrink rank");
  uint32_t const axis_shift = output_rank - input_rank;

  loco::TensorShape output_shape;

  output_shape.rank(output_rank);
  for (uint32_t axis = 0; axis < output_rank; ++axis)
  {
    output_shape.dim(axis) = (axis < axis_shift) ? 1 : input_shape.dim(axis - axis_shift);
  }

  return output_shape;
}

References output_shape, and loco::TensorShape::rank().
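As a worked illustration of the axis_shift logic in to(): with an input rank of 2 and an output rank of 4, axis_shift is 2, so the two leading axes are filled with 1 and the remaining axes copy the input dims, matching NumPy's left-padding broadcast rule. The {5, 6} input, the target rank 4, and the use of loco::Dimension::value() to read the dims are assumptions for this example.

#include <cassert>
#include <CircleShapeInferenceHelper.h>
#include <loco/IR/TensorShape.h>

void check_expand_to_rank_4(void)
{
  loco::TensorShape in;
  in.rank(2);
  in.dim(0) = 5;
  in.dim(1) = 6;

  // output_rank = 4, input_rank = 2, so axis_shift = 2
  loco::TensorShape out = luci::sinf::TensorShapeExpander(in).to(4);

  assert(out.rank() == 4);
  assert(out.dim(0).value() == 1 && out.dim(1).value() == 1); // padded axes
  assert(out.dim(2).value() == 5 && out.dim(3).value() == 6); // copied axes
}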


The documentation for this class was generated from the following file:

CircleShapeInferenceHelper.h