Double-Branch DenseNet-Transformer for Hyperspectral Image Classification
Abstract
To reduce the number of training samples required for hyperspectral image classification and to obtain better classification results, a double-branch deep network model based on DenseNet and a spatial-spectral transformer is proposed in this study. The model comprises two branches that extract the spatial and spectral features of the image in parallel. First, each branch applies 3D convolution to extract preliminary spatial and spectral information from the input sub-images. Then, deep features are extracted through a DenseNet block composed of batch normalization, the Mish activation function, and 3D convolution. Next, the two branches apply a spectral transformer module and a spatial transformer module, respectively, to further enhance the feature extraction ability of the network. Finally, the output feature maps of the two branches are fused to obtain the final classification results. The model was evaluated on the Indian Pines, University of Pavia, Salinas Valley, and Kennedy Space Center datasets, and its performance was compared with that of six existing methods. The results demonstrate that the overall classification accuracies of our model reach 95.75%, 96.75%, 95.63%, and 98.01%, respectively, when 3% of the Indian Pines samples and 0.5% of the samples from each of the other datasets are used for training. Its overall performance surpassed that of the compared methods.
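As a rough illustration of the two-branch design summarized above, the following PyTorch sketch wires together the described components: a 3D-convolution stem, a dense block of batch normalization, Mish, and 3D convolution, a transformer encoder per branch, and fusion of the two branch outputs. All hyperparameters (channel widths, token dimension, encoder depth, the token-pooling scheme, and the 16-class head) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Stacked BN -> Mish -> 3D-conv layers with dense (concatenative) skips."""
    def __init__(self, in_ch, growth=8, n_layers=2):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(ch),
                nn.Mish(),
                nn.Conv3d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # DenseNet connectivity
        return x

class Branch(nn.Module):
    """3D-conv stem -> dense block -> transformer encoder over tokens.

    mode='spectral' forms one token per band (spatial dims pooled);
    mode='spatial' forms one token per pixel (spectral dim pooled).
    """
    def __init__(self, mode, dim=64):
        super().__init__()
        self.mode = mode
        self.stem = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.dense = DenseBlock3D(8)
        self.proj = nn.Linear(self.dense.out_channels, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):                 # x: (B, 1, bands, H, W)
        f = self.dense(self.stem(x))      # (B, C, bands, H, W)
        if self.mode == 'spectral':
            tokens = f.mean(dim=(3, 4)).transpose(1, 2)        # (B, bands, C)
        else:
            tokens = f.mean(dim=2).flatten(2).transpose(1, 2)  # (B, H*W, C)
        return self.encoder(self.proj(tokens)).mean(dim=1)     # (B, dim)

class DoubleBranchNet(nn.Module):
    """Fuses the spectral and spatial branch outputs for classification."""
    def __init__(self, n_classes, dim=64):
        super().__init__()
        self.spectral = Branch('spectral', dim)
        self.spatial = Branch('spatial', dim)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, x):
        fused = torch.cat([self.spectral(x), self.spatial(x)], dim=1)
        return self.head(fused)

# Example: a batch of 11x11 patches with 30 spectral bands and 16 classes
# (patch size, band count, and class count are illustrative placeholders).
model = DoubleBranchNet(n_classes=16)
logits = model(torch.randn(4, 1, 30, 11, 11))  # -> (4, 16)
```

Mean-pooling the feature cube into band-wise or pixel-wise tokens is one simple way to feed a 3D feature map to a transformer encoder; the paper's spectral and spatial transformer modules may tokenize differently.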