SdcNet for object recognition

Authors:

Highlights:

Abstract

In this paper, a CNN architecture for object recognition is proposed, aiming to achieve good processing quality at the lowest computation cost. The work includes the design of SdcBlock, a convolution module for feature extraction, and that of SdcNet, an end-to-end CNN architecture. The module is designed to extract the maximum amount of high-density feature information from a given set of data channels. To this end, successive depthwise convolutions (Sdc) are applied to each group of data to produce feature elements of different filtering orders. To optimize the functionality of these convolutions, a particular pre- and post-convolution data control is applied. The pre-convolution control organizes the input channels of the module so that the depthwise convolutions can be performed on a single channel or on a combination of multiple data channels, depending on the nature of the data. The post-convolution control combines the critical feature elements of different filtering orders to enhance the quality of the convolved results. SdcNet is mainly composed of cascaded SdcBlocks. The hyper-parameters of the architecture can be adjusted easily so that each module can be tuned to suit its input signals, optimizing the processing quality of the entire network. Three versions of SdcNet have been proposed and tested on the CIFAR datasets, and the results demonstrate that the architecture gives better processing quality at a significantly lower computation cost than networks performing similar tasks. Two other versions have also been tested with samples from ImageNet to demonstrate the applicability of SdcNet to object recognition with images in ImageNet format. In addition, an SdcNet for brain tumor detection has been designed and tested successfully, illustrating that SdcNet can perform the detection effectively with high computational efficiency.
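The core idea of the abstract — applying depthwise convolutions successively so that the output of filtering order k becomes the input of order k+1, then combining the feature maps of selected orders — can be sketched in plain NumPy. This is a minimal illustration under assumed details, not the paper's actual implementation: the function names (`depthwise_conv`, `sdc_block`), the 3×3 "same"-padded kernels, and the choice of channel concatenation for the post-convolution control are all hypothetical simplifications of what an SdcBlock would contain.

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Depthwise 3x3 convolution with 'same' zero padding.

    x:       array of shape (C, H, W)
    kernels: array of shape (C, 3, 3), one filter per channel
             (each channel is filtered independently, as in
             depthwise convolution).
    """
    c, h, w = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(pad[ch, i:i + 3, j:j + 3] * kernels[ch])
    return out

def sdc_block(x, kernels_per_order, keep_orders):
    """Successive depthwise convolutions (Sdc) -- a minimal sketch.

    The depthwise convolutions are chained: the result of order k
    is the input to order k+1, so each pass raises the filtering
    order.  The post-convolution control is modelled here simply as
    concatenating (along the channel axis) the feature maps of the
    orders listed in `keep_orders`.
    """
    outputs = []
    feat = x
    for order, kernels in enumerate(kernels_per_order, start=1):
        feat = depthwise_conv(feat, kernels)
        if order in keep_orders:
            outputs.append(feat)
    return np.concatenate(outputs, axis=0)
```

For example, feeding a 2-channel input through two orders of depthwise filtering and keeping both orders yields a 4-channel output, i.e. the block widens the representation without any cross-channel (pointwise) multiplications, which is where the computation savings over standard convolutions come from.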

Keywords:

Review history: Received 12 September 2020, Revised 15 August 2021, Accepted 30 November 2021, Available online 6 December 2021, Version of Record 16 December 2021.

DOI: https://doi.org/10.1016/j.cviu.2021.103332