ONE - On-device Neural Engine
onert::backend::cpu::ops::ExpandDimsLayer Class Reference

#include <ExpandDimsLayer.h>


Public Member Functions

 ExpandDimsLayer ()
 
void configure (const IPortableTensor *input, IPortableTensor *output)
 
void run () override
 
- Public Member Functions inherited from onert::exec::IFunction
virtual ~IFunction ()=default
 
virtual void prepare ()
 

Detailed Description

Definition at line 33 of file ExpandDimsLayer.h.

Constructor & Destructor Documentation

◆ ExpandDimsLayer()

onert::backend::cpu::ops::ExpandDimsLayer::ExpandDimsLayer ( )

Definition at line 28 of file ExpandDimsLayer.cc.

: _input(nullptr), _output(nullptr)
{
  // DO NOTHING
}

Member Function Documentation

◆ configure()

void onert::backend::cpu::ops::ExpandDimsLayer::configure ( const IPortableTensor * input, IPortableTensor * output )

Definition at line 33 of file ExpandDimsLayer.cc.
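
The body of configure() is not reproduced on this page. As a hedged sketch (not the actual ONE source), a configure step for a copy-style layer like this one typically only records its operand tensors for later use in run(); the stub type below stands in for onert::backend::IPortableTensor:

// Hypothetical sketch only: configure() caches the operands that run() reads.
struct IPortableTensorStub; // stand-in for onert::backend::IPortableTensor

class ExpandDimsLayerSketch
{
public:
  void configure(const IPortableTensorStub *input, IPortableTensorStub *output)
  {
    _input = input;   // ExpandDims never rewrites the input data
    _output = output; // output differs from input only in its shape metadata
  }

private:
  const IPortableTensorStub *_input = nullptr;
  IPortableTensorStub *_output = nullptr;
};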

◆ run()

void onert::backend::cpu::ops::ExpandDimsLayer::run ( ) override virtual

Implements onert::exec::IFunction.

Definition at line 39 of file ExpandDimsLayer.cc.

{
  // if the output buffer is the same as the input buffer, no copy is needed
  if (_output->buffer() != _input->buffer())
  {
    size_t count = _input->total_size();
    memcpy(_output->buffer(), _input->buffer(), count);
  }
}

References onert::backend::ITensor::buffer(), and onert::backend::IPortableTensor::total_size().

Referenced by package.infer.session::inference().
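
ExpandDims only inserts an axis of size 1 into the shape; the flattened data is byte-identical, which is why run() reduces to a memcpy (or to no work at all when input and output alias the same buffer). The standalone sketch below illustrates this semantics without the onert tensor classes; all names in it are illustrative:

#include <cstddef>
#include <cstring>
#include <vector>

// Illustrative only: expand_dims inserts an axis of size 1 at position `axis`;
// the element count and the raw bytes are unchanged.
std::vector<int> expand_dims_shape(const std::vector<int> &shape, std::size_t axis)
{
  std::vector<int> out = shape;
  out.insert(out.begin() + axis, 1);
  return out;
}

int main()
{
  float in[6] = {1, 2, 3, 4, 5, 6};  // shape {2, 3}
  float out[6];                      // shape {2, 1, 3} after expand_dims(axis = 1)
  std::memcpy(out, in, sizeof(in));  // same bytes, only the shape metadata changes
  std::vector<int> new_shape = expand_dims_shape({2, 3}, 1);
  return static_cast<int>(new_shape.size()) - 3;  // returns 0: {2, 1, 3} has rank 3
}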


The documentation for this class was generated from the following files:

ExpandDimsLayer.h
ExpandDimsLayer.cc