Remote-Sensing Image Scene Classification With Deep Neural Networks in JPEG 2000 Compressed Domain
To reduce storage requirements, remote-sensing (RS) images are usually stored in compressed format. Existing scene classification approaches based on deep neural networks (DNNs) require fully decompressing the images, which is a computationally demanding task in operational applications. To address this issue, in this article we propose a novel approach to scene classification of Joint Photographic Experts Group (JPEG) 2000 compressed RS images. The proposed approach consists of two main steps: 1) approximation of the finer-resolution subbands of the reversible biorthogonal wavelet filters used in JPEG 2000 and 2) characterization of the high-level semantic content of the approximated wavelet subbands and scene classification based on the learned descriptors. This is achieved by taking the codestream associated with the coarsest-resolution wavelet subband as input and approximating the finer-resolution subbands with a number of transposed convolutional layers. A series of convolutional layers then models the high-level semantic content of the approximated wavelet subbands. Thus, the proposed approach models the multiresolution paradigm of the JPEG 2000 compression algorithm in a unified, end-to-end trainable neural network. In the classification stage, the proposed approach takes only the coarsest-resolution wavelet subband as input, thereby reducing the time required for decoding. Experiments on two benchmark aerial image archives demonstrate that the proposed approach significantly reduces computation time while achieving classification accuracies similar to those of traditional RS scene classification approaches, which require full image decompression.
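The two-step pipeline described above (transposed convolutions to approximate finer-resolution subbands from the coarsest one, followed by convolutional layers for semantic characterization and classification) can be sketched as a single end-to-end trainable network. The layer counts, channel widths, kernel sizes, and input shape below are illustrative assumptions, not the architecture reported in the paper:

```python
import torch
import torch.nn as nn


class CompressedDomainClassifier(nn.Module):
    """Hypothetical sketch of the proposed two-step approach:
    1) approximate finer-resolution wavelet subbands from the coarsest
       subband via transposed convolutions (learned upsampling), and
    2) model the high-level semantic content of the approximation with
       convolutional layers, ending in a scene-label classifier."""

    def __init__(self, in_channels: int = 4, num_classes: int = 10):
        super().__init__()
        # Step 1: two transposed-conv stages, each doubling spatial
        # resolution, mimicking two JPEG 2000 decomposition levels.
        self.approx = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Step 2: convolutional layers that learn a high-level descriptor
        # of the approximated subbands.
        self.features = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, coarse_subband: torch.Tensor) -> torch.Tensor:
        x = self.approx(coarse_subband)        # e.g. 16x16 -> 64x64
        x = self.features(x)                   # global descriptor
        return self.classifier(x.flatten(1))   # scene-class logits


# Usage: a batch of 2 coarsest-resolution subbands with 4 channels
# (e.g. LL, LH, HL, HH at the last decomposition level) of size 16x16.
model = CompressedDomainClassifier()
logits = model(torch.randn(2, 4, 16, 16))
print(logits.shape)  # torch.Size([2, 10])
```

Because both steps live in one `nn.Module`, the subband-approximation layers and the classification layers are trained jointly with a standard cross-entropy loss, matching the end-to-end multiresolution formulation described in the abstract.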
Published in: IEEE Transactions on Geoscience and Remote Sensing, Institute of Electrical and Electronics Engineers (IEEE), DOI: 10.1109/TGRS.2020.3007523