1. Multimorphological Superpixel Model for Hyperspectral Image Classification
With the development of hyperspectral sensors, we can now easily acquire large amounts of hyperspectral images (HSIs) with very high spatial resolution, enabling better identification of relatively small structures. Owing to the high spatial resolution, there are far fewer mixed pixels in the HSIs, and the boundaries between categories are much clearer. However, the high spatial resolution also leads to complex, fine geometrical structures and high intra-class variability, which make the classification results very "noisy." In this paper, we propose a multimorphological superpixel (MMSP) method to extract spectral and spatial features and address these problems. To reduce within-class differences and obtain multilevel spatial information, morphological features (extended morphological profiles built with multiple structuring elements, or extended multi-attribute profiles built with multiple attribute filters) are first computed from the original HSI. The simple linear iterative clustering (SLIC) segmentation method is then applied to each morphological feature to obtain the MMSPs. Next, a uniformity constraint is used to merge MMSPs belonging to the same class, which avoids introducing information from different classes and captures spatial structure at the object level. Subsequently, mean filtering is applied to extract spatial features within and among the MMSPs. Finally, base kernels are computed from the spatial features and the original HSI, and several multiple kernel learning methods are used to obtain an optimal kernel to incorporate into a support vector machine. Experiments conducted on three widely used real HSIs, with comparisons against several well-known methods, demonstrate the effectiveness of the proposed model.
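The within-superpixel mean-filtering step of this pipeline can be sketched as follows. This is a minimal illustration, assuming the superpixel segmentation (e.g. SLIC) has already produced a label map; the function and toy shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def superpixel_mean_features(hsi, labels):
    """Replace each pixel's spectrum by the mean spectrum of its
    superpixel -- the within-superpixel mean-filtering step of the
    MMSP pipeline (the segmentation itself is assumed done)."""
    h, w, b = hsi.shape
    flat = hsi.reshape(-1, b)
    lab = labels.ravel()
    out = np.empty_like(flat)
    for sp in np.unique(lab):
        mask = lab == sp
        # mean spectrum over all pixels of this superpixel
        out[mask] = flat[mask].mean(axis=0)
    return out.reshape(h, w, b)

# toy example: 4x4 image with 3 bands, two superpixels (left/right halves)
hsi = np.arange(48, dtype=float).reshape(4, 4, 3)
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
feat = superpixel_mean_features(hsi, labels)
```

Averaging within superpixels suppresses the intra-class "noise" the abstract describes while respecting object boundaries found by the segmentation.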
2. Weighted Spectral-Spatial Classification of Hyperspectral Images via Class-Specific Band Contribution
Hyperspectral images (HSIs) have evident advantages in image understanding owing to their numerous spectral bands and rich spatial information. The hundreds of spectral bands, however, actually contribute differently to class-specific classification, so treating each band equally may lead to their underuse or overuse. To address this issue, this paper introduces class-specific band contributions (BCs) into the spectral space and proposes a weighted spectral-spatial classification method for HSIs. In this method, a weighted spectral posterior probability (WSP) model is established by incorporating the BC, characterized by the F-measure, into a distance-based posterior probability. Furthermore, to exploit spatial information, the WSP is combined with a spatial consistency constraint via an adaptive tradeoff parameter. Additionally, a semisupervised F-measure prediction method is developed to obtain the class-dependent F-measures of each band. Experiments conducted on four hyperspectral data sets show the superiority of the proposed method over several state-of-the-art methods in terms of three widely used indices.
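The idea of weighting a distance-based posterior by per-class, per-band F-measures can be sketched as below. This is a hedged sketch under simple assumptions (Gaussian-like likelihood from a weighted squared distance); the exact posterior model and the F-measure prediction step in the paper are more involved.

```python
import numpy as np

def weighted_posterior(x, class_means, f_measures):
    """Distance-based class posterior where each band's squared
    distance to a class mean is weighted by that class's per-band
    F-measure (shapes and model form are illustrative only)."""
    # f_measures[c, b]: F-measure of band b for class c
    d2 = ((x[None, :] - class_means) ** 2 * f_measures).sum(axis=1)
    p = np.exp(-d2)        # Gaussian-like likelihood from weighted distance
    return p / p.sum()     # normalise to a posterior over classes

# toy example: 2 bands, 2 classes
x = np.array([1.0, 0.0])
means = np.array([[1.0, 5.0],   # class 0 mean
                  [0.0, 0.0]])  # class 1 mean
fm = np.array([[1.0, 0.0],      # class 0: only band 0 is informative
               [0.5, 0.5]])
post = weighted_posterior(x, means, fm)
```

Here class 0 wins even though band 1 strongly disagrees, because its near-zero F-measure on band 1 suppresses that band's contribution, which is exactly the underuse/overuse problem the weighting targets.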
3. Discriminative Feature Learning for Unsupervised Change Detection in Heterogeneous Images Based on a Coupled Neural Network
Driven by application requirements, change detection based on heterogeneous remote sensing images has attracted increasing attention. However, detecting changes between two heterogeneous images is challenging because they cannot be compared directly in a low-dimensional space. In this paper, we construct an approximately symmetric deep neural network whose two sides contain the same number of coupled layers, transforming the two images into a common feature space. The two images are fed into the two sides and mapped into this space, in which their features are more discriminative and a difference image can be generated by comparing paired features pixel by pixel. The network is first built from stacked restricted Boltzmann machines, and its parameters are then updated in a special, clustering-based way. This update rule, motivated by the fact that two heterogeneous images share the same underlying reality in unchanged areas while retaining their respective properties in changed areas, shrinks the distance between paired features transformed from unchanged positions and enlarges the distance between paired features extracted from changed positions. It is achieved by introducing two types of labels and updating the parameters with an adaptively changing learning rate. This differs from existing deep-learning-based methods, which operate only on positions predicted to be unchanged and extract only one type of label. The whole process is completely unsupervised, requiring no prior knowledge. Moreover, the method can also be applied to homogeneous images. We test our method on both heterogeneous and homogeneous images, and it achieves high accuracy.
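The shrink/enlarge behavior of the coupled update can be illustrated on the feature vectors directly. This is a deliberately simplified sketch: the paper updates network parameters with an adaptive learning rate, whereas here a sign-flipped step is applied straight to paired features to show the two-label idea (all names and values are hypothetical).

```python
import numpy as np

def coupled_update(f1, f2, unchanged, lr=0.1):
    """One illustrative update on paired features: pull 'unchanged'
    pairs together and push 'changed' pairs apart via a sign-flipped
    learning rate, mimicking the two-label, clustering-driven tuning."""
    diff = f1 - f2
    sign = np.where(unchanged[:, None], 1.0, -1.0)  # shrink vs enlarge
    f1_new = f1 - lr * sign * diff
    f2_new = f2 + lr * sign * diff
    return f1_new, f2_new

# two feature pairs: pair 0 labeled unchanged, pair 1 labeled changed
f1 = np.array([[1.0, 0.0], [1.0, 0.0]])
f2 = np.array([[0.0, 0.0], [0.0, 0.0]])
unchanged = np.array([True, False])
g1, g2 = coupled_update(f1, f2, unchanged)
```

After one step the unchanged pair is closer and the changed pair is farther apart, which is what makes the subsequent pixel-by-pixel feature comparison discriminative.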
4. Complex-Valued Convolutional Neural Network and Its Application in PolSAR Image Classification (The source code for this paper is available for download on GitHub.)
Following the great success of deep convolutional neural networks (CNNs) in computer vision, this paper proposes a complex-valued CNN (CV-CNN) specifically for synthetic aperture radar (SAR) image interpretation. It utilizes both the amplitude and phase information of complex SAR imagery. All elements of the CNN, including the input-output layers, convolution layers, activation functions, and pooling layers, are extended to the complex domain. Moreover, a complex backpropagation algorithm based on stochastic gradient descent is derived for CV-CNN training. The proposed CV-CNN is then tested on the typical polarimetric SAR (PolSAR) image classification task, which classifies each pixel into known terrain types via supervised training. Experiments with the benchmark data sets of Flevoland and Oberpfaffenhofen show that the classification error can be further reduced by employing the CV-CNN instead of a conventional real-valued CNN with the same degrees of freedom. The performance of the CV-CNN is comparable to that of existing state-of-the-art methods in terms of overall classification accuracy.
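The core operation a CV-CNN extends to the complex domain can be sketched as a 2-D convolution with complex input and complex kernel. This is a minimal "valid"-mode sketch (cross-correlation convention, as in most CNN frameworks), not the paper's full layer with biases and complex activations.

```python
import numpy as np

def complex_conv2d_valid(x, k):
    """'Valid' 2-D sliding-window multiply-accumulate with complex
    input and complex kernel -- the complex-domain analogue of a
    CNN convolution layer (sketch only, no bias/activation)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=complex)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # complex multiply-accumulate couples real and imaginary parts,
            # so phase information participates in the feature response
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

# toy example: multiplying by a 1x1 kernel of 1j rotates each phase by 90 deg
x = np.array([[1 + 1j, 0], [0, 1 - 1j]])
k = np.array([[1j]])
y = complex_conv2d_valid(x, k)
```

The point of the complex formulation is visible even in this toy case: a real-valued network applied to amplitude alone would discard the phase that the complex multiply carries through.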
5. PCA-Based Edge-Preserving Features for Hyperspectral Image Classification
Edge-preserving features (EPFs), obtained by applying edge-preserving filters to hyperspectral images (HSIs), have been found very effective in characterizing significant spectral and spatial structures of objects in a scene. However, a direct use of the EPFs can be insufficient to fully characterize spatial information when objects of different scales are present in the image. Furthermore, the edge-preserving smoothing operation unavoidably decreases the spectral differences among objects of different classes, which may harm the subsequent classification. To overcome these problems, this paper proposes a novel principal component analysis (PCA)-based EPFs (PCA-EPFs) method for HSI classification, which consists of the following steps. First, the standard EPFs are constructed by applying edge-preserving filters with different parameter settings to the image, and the resulting EPFs are stacked together. Next, the spectral dimension of the stacked EPFs is reduced with PCA, which not only represents the EPFs optimally in the mean-square sense but also highlights the separability of pixels. Finally, the resulting PCA-EPFs are classified with a support vector machine (SVM). Experiments on several real hyperspectral data sets show the effectiveness of the proposed PCA-EPFs, which sharply improve the accuracy of the SVM classifier with respect to the standard edge-preserving filtering-based feature extraction method and other widely used spectral-spatial classifiers.
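The PCA reduction step applied to the stacked EPFs can be sketched with a plain eigendecomposition of the covariance matrix. This is a generic PCA sketch on synthetic data standing in for the stacked EPF vectors; the filtering and classification stages are assumed to happen before and after it.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Reduce the dimension of stacked feature vectors with PCA via
    eigendecomposition of the covariance matrix -- the dimensionality-
    reduction step of PCA-EPFs (sketch only)."""
    # features: (n_pixels, n_dims) stacked EPF vectors, one row per pixel
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (len(features) - 1)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :n_components]      # leading principal directions
    return centered @ top                      # project onto top components

# toy stand-in for stacked EPFs: 6 dims where 3 are redundant copies
rng = np.random.default_rng(0)
stacked = rng.normal(size=(100, 6))
stacked[:, 3:] = stacked[:, :3]                # rank-3 redundancy
reduced = pca_reduce(stacked, 3)
```

Because the stacked EPFs from different filter settings are highly redundant (as in this rank-3 toy data), a few principal components retain essentially all of the variance while discarding the redundant dimensions.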