

Advances in Object and Activity Detection in Remote Sensing Imagery II

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 30 April 2024 | Viewed by 4511

Special Issue Editor


Dr. Anwaar Ulhaq
Guest Editor
School of Computing and Mathematics, Charles Sturt University, Port Macquarie, NSW 2444, Australia
Interests: signal and image processing; machine learning; deep convolutional neural nets; data analytics; computer vision; thermal imaging

Special Issue Information

Dear Colleagues,

Air-, sea-, and spaceborne surveillance of objects and their activities has been greatly improved by the ubiquitous availability of drone, satellite, and underwater imaging data. Applications where remote monitoring is essential include surveillance; border control; rescue operations for disaster management; precision agriculture; monitoring the environment; detecting weeds; conducting surveys of land, pest animals, wildlife, and marine life; and detecting individual or group activity.

Recent advances in deep learning have enabled significant progress in the fields of object and activity recognition. Visual object detection attempts to precisely localise objects of target classes inside an image and identify each object instance with the correct class label. Similarly, activity recognition attempts to identify the behaviours or activities of an agent or group of agents based on sensor or video observation data. Detecting, identifying, tracking, and interpreting the behaviour of objects in images/videos captured by multiple cameras are very important and difficult problems. Taken together, the recognition of objects and their activities in imaging data recorded by remote sensing devices is a very dynamic and challenging area of research. In the past decade, the number of papers in the field of object and activity recognition has increased significantly. In particular, many researchers have identified applications for recognising objects and their distinctive behaviours in airborne and spaceborne imagery.

This Special Issue is a continuation of volume 1 on the same subject and encourages papers that investigate innovative and challenging themes for object and activity recognition in remote sensing images/videos recorded from a variety of platforms.

This Special Issue invites articles about the detection of objects and activities in remote sensing imagery. All articles will be reviewed carefully and with a significantly shorter turnaround than is typical for publications in this field.

Dr. Anwaar Ulhaq
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • vision transformers for object, action, and activity detection
  • CNNs for object, action, and activity detection
  • 3D vision and LiDAR sensing for object detection
  • object recognition in UAV, underwater, and satellite imagery
  • underwater surveillance and monitoring
  • weed detection and biodiverse carbon validation
  • group activity detection
  • marine life and wildlife monitoring
  • border control and surveillance from UAVs

Published Papers (4 papers)


Research

21 pages, 2039 KiB  
Article
Spectrally Segmented-Enhanced Neural Network for Precise Land Cover Object Classification in Hyperspectral Imagery
by Touhid Islam, Rashedul Islam, Palash Uddin and Anwaar Ulhaq
Remote Sens. 2024, 16(5), 807; https://doi.org/10.3390/rs16050807 - 25 Feb 2024
Viewed by 811
Abstract
The paradigm shift brought by deep learning in land cover object classification in hyperspectral images (HSIs) is undeniable, particularly in addressing the intricate 3D cube structure inherent in HSI data. Leveraging convolutional neural networks (CNNs), despite their architectural constraints, offers a promising solution for precise spectral data classification. However, challenges persist in hyperspectral image classification, including the curse of dimensionality, data redundancy, overfitting, and computational costs. To tackle these hurdles, we introduce the spectrally segmented-enhanced neural network (SENN), a novel model integrating segmentation-based, multi-layer CNNs, SVM classification, and spectrally segmented dimensionality reduction. SENN adeptly integrates spectral–spatial data and is particularly well suited to agricultural land classification. By strategically fusing CNNs and support vector machines (SVMs), SENN enhances class differentiation while mitigating overfitting through dropout and early stopping techniques. Our contributions extend to effective dimensionality reduction, precise CNN-based classification, and enhanced performance via CNN–SVM fusion. SENN harnesses spectral information to surmount the challenges of hyperspectral image classification, marking a significant advancement in accuracy and efficiency within this domain.
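As a rough illustration of the spectrally segmented dimensionality reduction and SVM-based classification described in the abstract, the sketch below splits the spectral axis into contiguous segments, reduces each segment with PCA, and trains an SVM on the concatenated features. This is not the authors' SENN implementation: the CNN feature-extraction stage is omitted, and the segment count, component count, and synthetic data are assumptions used only for illustration.

```python
# Minimal sketch: spectrally segmented dimensionality reduction + SVM classifier.
# Loosely inspired by the SENN pipeline; all sizes and the synthetic scene are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def spectrally_segmented_reduction(cube, n_segments=4, n_components=5):
    """Split the spectral axis into contiguous segments and reduce each with PCA."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    segments = np.array_split(np.arange(bands), n_segments)
    reduced = [PCA(n_components=n_components).fit_transform(pixels[:, seg])
               for seg in segments]
    return np.concatenate(reduced, axis=1)  # shape: (h*w, n_segments*n_components)

# Synthetic stand-in for a labelled hyperspectral scene (64x64 pixels, 100 bands, 5 classes).
rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 100))
labels = rng.integers(0, 5, size=64 * 64)

features = spectrally_segmented_reduction(cube)
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the full pipeline the abstract describes, the reduced features would additionally pass through segmentation-based multi-layer CNNs, with the CNN and SVM outputs fused for the final decision.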

24 pages, 8008 KiB  
Article
A Weak Sample Optimisation Method for Building Classification in a Semi-Supervised Deep Learning Framework
by Yanjun Wang, Yunhao Lin, Huiqing Huang, Shuhan Wang, Shicheng Wen and Hengfan Cai
Remote Sens. 2023, 15(18), 4432; https://doi.org/10.3390/rs15184432 - 08 Sep 2023
Viewed by 767
Abstract
Deep learning has gained widespread interest for building semantic segmentation from remote sensing images; however, neural network models require a large number of training samples to achieve good classification performance, and they are sensitive to erroneous patches in the training samples. Semi-supervised classification methods can draw on cheaper, weakly labelled samples, but current semi-supervised research typically feeds the generated weak samples directly into the model, with little consideration of how improving the accuracy and quality of those weak samples affects subsequent classification. To address the problem of generating and optimising the quality of weak samples from training data in deep learning, this paper proposes a semi-supervised building classification framework. Firstly, weak image samples of buildings are generated quickly from the predictions of a remote sensing image segmentation model and the unsupervised classification of LiDAR point cloud data. Secondly, to improve the quality of the weak sample patches, an iterative optimisation strategy is proposed that compares the weak samples against the real samples and extracts the accurate ones. Finally, the real samples, the weak samples, and the optimised weak samples are fed into the building semantic segmentation model for accuracy evaluation and analysis. The effectiveness of this approach was experimentally verified on two different building datasets, where the optimised weak samples improved test mIoU by 1.9% and 0.6%, respectively, compared to the initial weak samples. The results demonstrate that the proposed semi-supervised classification framework alleviates the model's demand for large numbers of real labelled samples while improving the ability to exploit weak samples, and it can serve as an alternative to fully supervised classification methods in deep learning applications that require many training samples.
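The weak-sample optimisation idea can be pictured with a simple filtering step: weakly labelled building masks are compared against trusted reference masks, and only sufficiently consistent ones are retained for training. The sketch below is a simplified stand-in, not the authors' iterative strategy; the IoU criterion, threshold value, and toy masks are illustrative assumptions.

```python
# Minimal sketch of weak-sample filtering: keep weakly labelled building masks that
# agree well (by IoU) with trusted reference masks. Threshold and data are assumptions.
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def filter_weak_samples(weak_masks, reference_masks, iou_threshold=0.7):
    """Return indices of weak samples whose mask matches its reference well enough."""
    keep = []
    for i, (weak, ref) in enumerate(zip(weak_masks, reference_masks)):
        if mask_iou(weak > 0, ref > 0) >= iou_threshold:
            keep.append(i)
    return keep

# Toy example: three 32x32 weak masks with ~10% label noise, checked against references.
rng = np.random.default_rng(1)
refs = [rng.integers(0, 2, size=(32, 32)) for _ in range(3)]
weaks = [np.where(rng.random((32, 32)) < 0.1, 1 - r, r) for r in refs]
print("kept weak samples:", filter_weak_samples(weaks, refs))
```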

17 pages, 8457 KiB  
Article
Hyperspectral Marine Oil Spill Monitoring Using a Dual-Branch Spatial–Spectral Fusion Model
by Junfang Yang, Jian Wang, Yabin Hu, Yi Ma, Zhongwei Li and Jie Zhang
Remote Sens. 2023, 15(17), 4170; https://doi.org/10.3390/rs15174170 - 24 Aug 2023
Cited by 1 | Viewed by 1254
Abstract
Marine oil spills pose a crucial concern in the monitoring of marine environments, and optical remote sensing serves as a vital means for marine oil spill detection. However, optical remote sensing imagery is susceptible to interference from sunglints and shadows, leading to diminished spectral differences between oil films and seawater. This makes it challenging to accurately extract the boundaries of oil–water interfaces. To address these issues, this paper proposes a model based on the graph convolutional architecture and spatial–spectral information fusion for the oil spill detection of real oil spill incidents. The model is experimentally evaluated using both spaceborne and airborne hyperspectral oil spill images. Research findings demonstrate the superior oil spill detection accuracy of the developed model when compared to Graph Convolutional Network (GCN) and CNN-Enhanced Graph Convolutional Network (CEGCN), across two hyperspectral datasets collected from the Bohai Sea. Moreover, the performance of the developed model in oil spill detection remains optimal, even with only 1% of the training samples. Similar conclusions are drawn from the oil spill hyperspectral data collected from the Yellow Sea. These results validate the efficacy and robustness of the proposed model for marine oil spill detection.
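To show the dual-branch spatial–spectral fusion pattern in a compact form, the sketch below classifies hyperspectral patches with a spectral branch (1D convolutions over the centre-pixel spectrum) and a spatial branch (2D convolutions over the surrounding patch), fused before the classifier head. The published model is built on a graph convolutional architecture; this plain-CNN version, together with its band count, patch size, and layer widths, is an assumption used only to illustrate the fusion idea.

```python
# Minimal PyTorch sketch of dual-branch spatial-spectral fusion for per-pixel
# oil/water classification. Both branches are plain convolutions (the paper uses a
# graph convolutional branch); all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, n_bands=64, n_classes=2):
        super().__init__()
        # Spectral branch: 1D convolutions along the band axis of the centre pixel.
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),        # -> (N, 16)
        )
        # Spatial branch: 2D convolutions over the neighbourhood patch.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (N, 32)
        )
        self.head = nn.Linear(16 + 32, n_classes)

    def forward(self, patch_cube):
        # patch_cube: (N, bands, patch, patch); centre-pixel spectrum feeds the spectral branch.
        centre = patch_cube[:, :, patch_cube.shape[2] // 2, patch_cube.shape[3] // 2]
        spec = self.spectral(centre.unsqueeze(1))   # (N, 1, bands) -> (N, 16)
        spat = self.spatial(patch_cube)             # (N, 32)
        return self.head(torch.cat([spec, spat], dim=1))

model = DualBranchFusion()
logits = model(torch.randn(8, 64, 9, 9))  # 8 patches, 64 bands, 9x9 neighbourhood
print(logits.shape)  # torch.Size([8, 2])
```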

29 pages, 6545 KiB  
Article
A Deep Learning-Based Hyperspectral Object Classification Approach via Imbalanced Training Samples Handling
by Md Touhid Islam, Md Rashedul Islam, Md Palash Uddin and Anwaar Ulhaq
Remote Sens. 2023, 15(14), 3532; https://doi.org/10.3390/rs15143532 - 13 Jul 2023
Cited by 5 | Viewed by 1227
Abstract
Object classification in hyperspectral images involves accurately categorizing objects based on their spectral characteristics. However, the high dimensionality of hyperspectral data and class imbalance pose significant challenges to object classification performance. To address these challenges, we propose a framework that incorporates dimensionality reduction and re-sampling as preprocessing steps for a deep learning model. Our framework employs a novel subgroup-based dimensionality reduction technique to extract and select the most informative features with minimal redundancy. Additionally, the data are resampled to achieve class balance across all categories. The reduced and balanced data are then processed through a hybrid CNN model, which combines a 3D learning block and a 2D learning block to extract spectral–spatial features and achieve satisfactory classification accuracy. By adopting this hybrid approach, we simplify the model while improving performance in the presence of noise and limited sample size. We evaluated our proposed model on the Salinas scene, Pavia University, and Kennedy Space Center benchmark hyperspectral datasets, comparing it to state-of-the-art methods. Our object classification technique achieves highly promising results, with overall accuracies of 99.98%, 99.94%, and 99.46% on the three datasets, respectively. The proposed approach offers a compelling solution to overcome the challenges of high dimensionality and class imbalance in hyperspectral object classification.
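A minimal sketch of the hybrid "3D learning block followed by a 2D learning block" structure mentioned in the abstract is given below, assuming the subgroup-based dimensionality reduction and re-sampling have already produced class-balanced patches with a reduced number of bands. Layer widths, kernel sizes, and the band/patch dimensions are illustrative assumptions, not the authors' configuration.

```python
# Minimal PyTorch sketch of a hybrid 3D + 2D CNN for reduced hyperspectral patches.
# All sizes are assumptions; dimensionality reduction and re-sampling are assumed done.
import torch
import torch.nn as nn

class Hybrid3D2DCNN(nn.Module):
    def __init__(self, n_bands=20, n_classes=9):
        super().__init__()
        # 3D block: learns joint spectral-spatial features over (band, H, W).
        self.block3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
        )
        reduced_bands = n_bands - 7 + 1  # spectral depth after the 3D convolution
        # 2D block: spectral feature maps folded into channels.
        self.block2d = nn.Sequential(
            nn.Conv2d(8 * reduced_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Sequential(nn.Dropout(0.4), nn.Linear(64, n_classes))

    def forward(self, x):
        # x: (N, 1, bands, patch, patch)
        x = self.block3d(x)               # (N, 8, bands-6, patch, patch)
        n, c, b, h, w = x.shape
        x = x.reshape(n, c * b, h, w)     # fold the spectral dimension into channels
        return self.classifier(self.block2d(x))

model = Hybrid3D2DCNN()
out = model(torch.randn(4, 1, 20, 11, 11))  # 4 patches, 20 reduced bands, 11x11 window
print(out.shape)  # torch.Size([4, 9])
```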
