ONE - On-device Neural Engine
#include <NETransposeConvLayer.h>
Public Member Functions
NETransposeConvLayer (std::shared_ptr< IMemoryManager > memory_manager = nullptr)
NETransposeConvLayer (const NETransposeConvLayer &) = delete
NETransposeConvLayer & operator= (const NETransposeConvLayer &) = delete
NETransposeConvLayer (NETransposeConvLayer &&) = delete
NETransposeConvLayer & operator= (NETransposeConvLayer &&) = delete
virtual ~NETransposeConvLayer () = default
void configure (ITensor *input, const ITensor *weights, const ITensor *bias, ITensor *output, const PadStrideInfo &info, unsigned int invalid_right, unsigned int invalid_bottom)
void run () override
void prepare () override
Static Public Member Functions
static Status validate (const ITensorInfo *input, const ITensorInfo *weights, const ITensorInfo *bias, const ITensorInfo *output, const PadStrideInfo &info, unsigned int invalid_right, unsigned int invalid_bottom)
Function to run the deconvolution layer.
Deconvolution Layer is the backward pass of Convolution Layer. First we transform the input depending on the stride and pad info and then perform a 1x1 convolution pass. The input stride defines how many zeroes we should put between each element of the input, pad is the amount of padding, and finally a is a user-specified value, where a < stride - 1, that increases the padding at the top and right of the input image.
The relation between input and output is as follows:

  width_output  = (width - 1) * stride_x - 2 * pad_x + kernel_x
  height_output = (height - 1) * stride_y - 2 * pad_y + kernel_y

where width is the size of the first input dimension, height is the size of the second input dimension, width_output is the size of the first output dimension, height_output is the size of the second output dimension, kernel_x and kernel_y are the convolution sizes in x and y, and stride_x and stride_y are the input strides of the first and second dimensions.
The weights used by Deconvolution are supposed to be the same as the ones used for Convolution. Therefore, it will be necessary to use the weights in the reverse order to perform an actual convolution. This is achieved by using NEReverse.
This function calls the following NEON kernels/functions:
Definition at line 94 of file NETransposeConvLayer.h.
arm_compute::NETransposeConvLayer::NETransposeConvLayer (std::shared_ptr< IMemoryManager > memory_manager = nullptr)
Constructor.
Definition at line 54 of file NETransposeConvLayer.cpp.
delete
Prevent instances of this class from being copied (As this class contains pointers)
delete
Prevent instances of this class from being moved (As this class contains pointers)
virtual, default
Default destructor.
References validate().
void arm_compute::NETransposeConvLayer::configure (ITensor *input, const ITensor *weights, const ITensor *bias, ITensor *output, const PadStrideInfo &info, unsigned int invalid_right, unsigned int invalid_bottom)
Set the input, weights, biases and output tensors.
[in,out] | input | Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: F32/F16/QASYMM8/QASYMM8_SIGNED. |
[in] | weights | The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as input. |
[in] | bias | Optional, ignored if NULL. The biases have one dimension. Data types supported: S32 for QASYMM8 and QASYMM8_SIGNED input, F32 for F32 input, F16 for F16 input. |
[out] | output | Output tensor. The output has the same number of dimensions as the input. |
[in] | info | Contains padding and policies to be used in the deconvolution; this is described in PadStrideInfo. |
[in] | invalid_right | The number of zeros added to the right edge of the output. |
[in] | invalid_bottom | The number of zeros added to the bottom edge of the output. |
Definition at line 135 of file NETransposeConvLayer.cpp.
References arm_compute::misc::shape_calculator::compute_transposeconv_output_shape(), arm_compute::misc::shape_calculator::compute_transposeconv_upsampled_shape(), info, output_shape, arm_compute::transposeconv_output_dimensions(), and validate().
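Putting configure() and run() together, a minimal usage sketch follows. The tensor shapes, allocator calls, and includes are assumptions following the usual arm_compute runtime pattern; they are not taken from this page, and a real build needs the ONE/arm_compute library and a NEON-capable target:

```cpp
#include <arm_compute/runtime/NEON/functions/NETransposeConvLayer.h>
#include <arm_compute/runtime/Tensor.h>

using namespace arm_compute;

int main()
{
    Tensor input, weights, bias, output;

    // Illustrative shapes: 4x4 single-channel input, 3x3 kernel, one output map.
    // With stride 2 and no padding the output is 9x9 (see the size relation above).
    input.allocator()->init(TensorInfo(TensorShape(4U, 4U, 1U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 1U, 1U), 1, DataType::F32));
    bias.allocator()->init(TensorInfo(TensorShape(1U), 1, DataType::F32));
    output.allocator()->init(TensorInfo(TensorShape(9U, 9U, 1U), 1, DataType::F32));

    NETransposeConvLayer deconv;
    // Stride 2 in x and y, zero padding, no invalid right/bottom columns.
    deconv.configure(&input, &weights, &bias, &output, PadStrideInfo(2, 2, 0, 0), 0, 0);

    // Allocate backing memory only after configuration.
    input.allocator()->allocate();
    weights.allocator()->allocate();
    bias.allocator()->allocate();
    output.allocator()->allocate();

    // ... fill input/weights/bias with data ...

    deconv.run();
    return 0;
}
```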
delete
Prevent instances of this class from being copied (As this class contains pointers)
delete
Prevent instances of this class from being moved (As this class contains pointers)
override
Definition at line 218 of file NETransposeConvLayer.cpp.
Referenced by run().
override
Definition at line 208 of file NETransposeConvLayer.cpp.
References prepare().
Referenced by package.infer.session::inference().
static
Static function to check if the given info will lead to a valid configuration of NETransposeConvLayer.
[in] | input | Input tensor info. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs. Data types supported: F32/F16/QASYMM8/QASYMM8_SIGNED. |
[in] | weights | The 4d weights info with dimensions [width, height, IFM, OFM]. Data type supported: Same as input. |
[in] | bias | (Optional) The biases have one dimension. Data types supported: S32 for QASYMM8 and QASYMM8_SIGNED input, F32 for F32 input, F16 for F16 input. |
[in] | output | Output tensor info. The output has the same number of dimensions as the input. |
[in] | info | Contains padding and policies to be used in the deconvolution; this is described in PadStrideInfo. |
[in] | invalid_right | The number of zeros added to the right edge of the output. |
[in] | invalid_bottom | The number of zeros added to the bottom edge of the output. |
Definition at line 61 of file NETransposeConvLayer.cpp.
References arm_compute::misc::shape_calculator::compute_transposeconv_output_shape(), arm_compute::misc::shape_calculator::compute_transposeconv_upsampled_shape(), info, output_shape, and arm_compute::transposeconv_output_dimensions().
Referenced by configure(), and ~NETransposeConvLayer().
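Because validate() works on ITensorInfo descriptions rather than allocated tensors, a typical pattern is to check a candidate configuration before committing any memory. A sketch under the same assumptions (arm_compute headers available; shapes illustrative only):

```cpp
#include <arm_compute/runtime/NEON/functions/NETransposeConvLayer.h>

using namespace arm_compute;

int main()
{
    // Tensor descriptions only; no memory is allocated for validation.
    TensorInfo input(TensorShape(4U, 4U, 1U), 1, DataType::F32);
    TensorInfo weights(TensorShape(3U, 3U, 1U, 1U), 1, DataType::F32);
    TensorInfo bias(TensorShape(1U), 1, DataType::F32);
    TensorInfo output(TensorShape(9U, 9U, 1U), 1, DataType::F32);

    Status s = NETransposeConvLayer::validate(&input, &weights, &bias, &output,
                                              PadStrideInfo(2, 2, 0, 0), 0, 0);
    if(s.error_code() != ErrorCode::OK)
    {
        // Configuration is invalid; s.error_description() explains why.
        return 1;
    }
    return 0;
}
```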