ONE - On-device Neural Engine
onert::backend::builtin::UserTensor Class Reference

Tensor object that is for Input and Output tensors from the user.

#include <UserTensor.h>

Collaboration diagram for onert::backend::builtin::UserTensor:

Public Member Functions

 UserTensor (const ir::OperandInfo &info, ir::Layout layout, uint8_t *buffer, size_t size)
 
uint8_t * buffer () const override
 
ir::Layout layout () const
 
void set_dynamic () override
 set this tensor dynamic
 
void setShape (const ir::Shape &new_shape) override
 Set the shape of tensor to new_shape.
 
bool applyShape (const ir::Shape &) override
 Set the shape to shape and possibly re-allocate the buffer.
 
- Public Member Functions inherited from onert::backend::IPortableTensor
 IPortableTensor (const ir::OperandInfo &info)
 
virtual ~IPortableTensor ()
 
const ir::OperandInfo & get_info () const
 
const ir::Sparsity * sparsity () const
 
size_t total_size () const override final
 
size_t calcOffset (const ir::Coordinates &coords) const override final
 
ir::DataType data_type () const override final
 
float data_scale () const override final
 
int32_t data_zero_point () const override final
 
const std::vector< float > & data_scales () const override final
 
const std::vector< int32_t > & data_zero_points () const override
 
bool is_constant () const override final
 Return true if the tensor is constant.
 
bool is_dynamic () const override final
 Return true if the tensor needs dynamic allocation, meaning that during compile time the output shape cannot be known and it is calculated during kernel execution time.
 
ir::Shape getShape () const override final
 Get ir::Shape of tensor.
 
bool has_padding () const final
 
void access (const std::function< void(ITensor &tensor)> &fn) final
 
- Public Member Functions inherited from onert::backend::ITensor
virtual ~ITensor ()
 
virtual void deallocBuffer ()
 Dealloc the buffer (only for dynamic tensors)
 
virtual bool is_subtensor () const
 
virtual bool needMemoryMap () const
 
virtual void enqueueWriteBuffer (const void *, bool)
 
virtual void enqueueReadBuffer (void *, bool)
 

Additional Inherited Members

- Protected Attributes inherited from onert::backend::IPortableTensor
ir::OperandInfo _info
 

Detailed Description

Tensor object that is for Input and Output tensors from the user.

This class wraps a buffer that is allocated by the user, so it has no responsibility for allocation or deallocation. All the model input/output tensors are wrapped with this class for execution.

Definition at line 38 of file UserTensor.h.
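
A minimal sketch of how a caller-allocated buffer could be wrapped (the include path, the ir::OperandInfo::createStaticInfo factory, and the Layout/DataType enumerators are assumptions for illustration; only the constructor signature documented below is taken from this page):

#include <cstdint>
#include <vector>

#include "backend/builtin/UserTensor.h" // assumed include path

using namespace onert;

// Caller-owned storage for a float32 input of shape [1, 224, 224, 3].
const ir::Shape shape{1, 224, 224, 3};
std::vector<uint8_t> user_buffer(shape.num_elements() * sizeof(float));

// Assumed OperandInfo factory; any way of building an OperandInfo works here.
auto info = ir::OperandInfo::createStaticInfo(shape, ir::TypeInfo{ir::DataType::FLOAT32});

// The tensor only references the caller's memory; it neither allocates nor frees it.
backend::builtin::UserTensor tensor{info, ir::Layout::NHWC, user_buffer.data(), user_buffer.size()};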

Constructor & Destructor Documentation

◆ UserTensor()

onert::backend::builtin::UserTensor::UserTensor ( const ir::OperandInfo & info,
ir::Layout layout,
uint8_t * buffer,
size_t size
)
inline

Definition at line 41 of file UserTensor.h.

UserTensor(const ir::OperandInfo &info, ir::Layout layout, uint8_t *buffer, size_t size)
  : IPortableTensor{info}, _layout{layout}, _buffer{buffer}, _size{size}
{
}

Member Function Documentation

◆ applyShape()

bool onert::backend::builtin::UserTensor::applyShape ( const ir::Shape & new_shape )
overridevirtual

Set the shape to shape and possibly re-allocate the buffer.

If a tensor is a dynamic tensor and previously allocated memory exists, it will be deallocated. If a tensor is a static tensor (with memory previously allocated by StaticTensorManager), buffer() will be overwritten.

Parameters
shape	tensor's new shape. While allocating memory for this new_shape, the tensor's shape is set to new_shape
Returns
true	If applying the shape was successful
false	If applying the shape is not supported (it throws for other errors)

Reimplemented from onert::backend::ITensor.

Definition at line 29 of file UserTensor.cc.

bool UserTensor::applyShape(const ir::Shape &new_shape)
{
  // User tensors cannot be reallocated.
  auto new_size = new_shape.num_elements() * ir::sizeOfDataType(data_type());
  if (_size < new_size)
    throw InsufficientBufferSizeException{"User given buffer size is too small."};
  setShape(new_shape);
  return true;
}

References onert::backend::IPortableTensor::data_type(), setShape(), and onert::ir::sizeOfDataType().
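
A usage sketch of the contract above, continuing from the construction sketch in the Detailed Description (the exception's namespace qualification is an assumption; the behavior follows the definition shown here):

// `tensor` wraps a caller-owned buffer sized for 1 * 224 * 224 * 3 floats.

// The new shape fits in the wrapped buffer: only the shape metadata is
// updated, no reallocation takes place, and applyShape() returns true.
tensor.applyShape(ir::Shape{1, 112, 112, 3});

try
{
  // The new shape needs more bytes than the caller provided: UserTensor
  // cannot reallocate, so this throws instead of growing the buffer.
  tensor.applyShape(ir::Shape{2, 224, 224, 3});
}
catch (const InsufficientBufferSizeException &e) // enclosing namespace assumed
{
  // Report the required buffer size back to the caller, or fail the run.
}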

◆ buffer()

uint8_t * onert::backend::builtin::UserTensor::buffer ( ) const
inlineoverridevirtual

Implements onert::backend::ITensor.

Definition at line 47 of file UserTensor.h.

{ return _buffer; }

◆ layout()

ir::Layout onert::backend::builtin::UserTensor::layout ( ) const
inline

Definition at line 48 of file UserTensor.h.

{ return _layout; }

◆ set_dynamic()

void onert::backend::builtin::UserTensor::set_dynamic ( )
inlineoverridevirtual

set this tensor dynamic

Reimplemented from onert::backend::ITensor.

Definition at line 49 of file UserTensor.h.

References onert::backend::IPortableTensor::_info, and onert::ir::OperandInfo::setDynamic().

◆ setShape()

void onert::backend::builtin::UserTensor::setShape ( const ir::Shape & new_shape )
inlineoverridevirtual

Set the shape of tensor to new_shape.

Note
Higher dimensions will be placed at the front.

Reimplemented from onert::backend::ITensor.

Definition at line 50 of file UserTensor.h.

{ _info.shape(new_shape); }

References onert::backend::IPortableTensor::_info, and onert::ir::OperandInfo::shape().

Referenced by applyShape().
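
setShape() only rewrites the shape held in the wrapped OperandInfo; the caller-provided buffer is untouched. A small sketch of the observable effect (same tensor as in the earlier sketches):

// Only metadata changes: no memory is allocated or freed.
tensor.setShape(ir::Shape{1, 112, 112, 3});

// total_size() is now derived from the new shape (1 * 112 * 112 * 3 floats),
// while buffer() still returns the original caller-owned pointer.
size_t bytes = tensor.total_size();
uint8_t *same_ptr = tensor.buffer();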


The documentation for this class was generated from the following files:

UserTensor.h
UserTensor.cc