|Title||Improving Autoencoders Performance for Hyperspectral Unmixing Using Clustering|
|Publication Type||Conference Proceedings|
|Authors||Grabowski B, Głomb P, Książek K, Buza K|
|Conference Name||Asian Conference on Intelligent Information and Database Systems|
|Series Title||Communications in Computer and Information Science|
|Conference Location||Ho Chi Minh City, Vietnam|
|Keywords||Autoencoders, Clustering, Hyperspectral unmixing, Transfer learning|
Hyperspectral cameras acquire images containing information across the electromagnetic spectrum, which conveys useful information about the scene. To enable effective analysis of such data, spectral unmixing is often used. It is an important task in hyperspectral imaging, allowing one to obtain information about the spectral endmembers that make up each hyperspectral pixel. This task, traditionally solved with dedicated statistical methods, has recently been explored with deep learning methods. One class of methods well-suited to this task is autoencoders. These neural networks are initialized with random weights, and this initialization often has a significant impact on their performance. To improve the initialization of autoencoders for the spectral unmixing task, we therefore propose a pre-training scheme consisting of clustering-based artificial labeling. We test the approach on two popular hyperspectral datasets, Samson and Jasper Ridge. Our experiments deliver promising results, improving the autoencoders’ effectiveness on the Samson dataset: for 25-class labeling, the endmember and abundance errors improve by 0.045 and 0.008, respectively. The weaker results on the Jasper Ridge dataset (the endmember error improves by 0.001, while the abundance error worsens by 0.006 for 25-class labeling) show that more research is required to understand when the proposed approach improves spectral unmixing results. The auxiliary experiments that we also conduct allow us to partially answer that question.
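The core of the proposed pre-training scheme is generating artificial labels for hyperspectral pixels via clustering. The following sketch illustrates that labeling step under stated assumptions: k-means is used here as the clustering algorithm, 25 clusters match the "25-class labeling" mentioned above, and the pixel data is synthetic (the paper's actual clustering method, band count handling, and pre-training details may differ).

```python
# Hedged sketch: clustering-based artificial labeling for pre-training.
# Assumptions (not confirmed by the abstract): k-means clustering,
# 25 pseudo-classes, synthetic stand-in data instead of a real dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for a hyperspectral image flattened to pixels:
# 1000 pixels, each a 156-band spectrum (156 bands as in Samson).
pixels = rng.random((1000, 156))

# Step 1: cluster the pixels to obtain artificial (pseudo) labels.
kmeans = KMeans(n_clusters=25, n_init=10, random_state=0).fit(pixels)
pseudo_labels = kmeans.labels_  # one pseudo-class per pixel

# Step 2 (not shown): pre-train the autoencoder's encoder as a
# classifier on (pixels, pseudo_labels), then transfer the learned
# weights to the unmixing autoencoder before reconstruction training,
# so it starts from a data-informed rather than random initialization.
print(pseudo_labels.shape)            # one label per pixel
print(len(np.unique(pseudo_labels)))  # at most 25 pseudo-classes
```

The pseudo-labels cost nothing to produce (no ground truth is needed), which is what makes this a practical way to replace purely random initialization.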