

Recent Trends of Computer Vision and Pattern Recognition for Ecological Monitoring and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Environmental Sensing".

Deadline for manuscript submissions: 25 April 2024 | Viewed by 6033

Special Issue Editors


Dr. Chunlei Xia
Guest Editor
Yantai Institute of Coastal Zone Research, Chinese Academy of Sciences, Yantai 264003, China
Interests: intelligent biological monitoring; biological image detection/recognition; behavioral monitoring and analysis; automated bio-monitoring systems; marine monitoring systems

Prof. Dr. Tae-Soo Chon
Guest Editor
Research and Development, Ecology and Future Research Institute, Busan 46228, Korea
Interests: ecological modelling; ecological informatics; computational behavior

Prof. Dr. Zongming Ren
Guest Editor
Institute of Environment and Ecology, Shandong Normal University, Jinan 250358, China
Interests: biological sensing

Special Issue Information

Dear Colleagues,

Great progress has been made in artificial intelligence in recent decades, and it is changing the way ecological and environmental studies are undertaken. Computer vision and pattern recognition are the most commonly applied AI techniques in this field. Advanced computer vision techniques provide automated and accurate solutions for obtaining biological and ecological monitoring data, such as population, behavior, and biodiversity data, and computer vision-based tools can release scientists and researchers from the tedious work of biological data processing. Pattern recognition techniques enable in-depth investigation of biological monitoring data to explore the mechanisms of ecosystems and environmental change, and in situ mining of biological and ecological patterns makes it possible to predict potential risks to ecosystems, such as environmental pollution and disasters. Together, computer vision and pattern recognition can be used to establish advanced ecological sensing systems. This Special Issue is organized to collect recent works and research ideas on AI-based methodologies, technologies, and applications in ecological monitoring and sensing.

Topics should include, but are not limited to, the following:

  • Biological image classification/detection and its applications
  • Computer vision and pattern recognition in biodiversity/wildlife monitoring
  • Computer vision and pattern recognition in ecological risk assessment
  • Computer vision/sound-based biological object tracking
  • Computer vision and pattern recognition in agricultural/forest monitoring
  • Computer vision and pattern recognition in fishery/marine monitoring
  • Computer vision and pattern recognition in epidemics and disease transmission
  • Ecological remote sensing data
  • Pattern analysis of biological data
  • Ecological data mining
  • Related AI applications in environmental, agricultural, and marine ecosystems

Dr. Chunlei Xia
Prof. Dr. Tae-Soo Chon
Prof. Dr. Zongming Ren
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI in ecology
  • biomonitoring
  • image classification
  • ecological modeling
  • biological image analysis
  • object detection
  • animal tracking
  • ecosystem assessment

Published Papers (3 papers)


Research

19 pages, 43287 KiB  
Article
UWV-Yolox: A Deep Learning Model for Underwater Video Object Detection
by Haixia Pan, Jiahua Lan, Hongqiang Wang, Yanan Li, Meng Zhang, Mojie Ma, Dongdong Zhang and Xiaoran Zhao
Sensors 2023, 23(10), 4859; https://doi.org/10.3390/s23104859 - 18 May 2023
Cited by 4 | Viewed by 1740
Abstract
Underwater video object detection is a challenging task due to the poor quality of underwater videos, including blurriness and low contrast. In recent years, Yolo series models have been widely applied to underwater video object detection. However, these models perform poorly on blurry and low-contrast underwater videos, and they fail to account for the contextual relationships between frame-level results. To address these challenges, we propose a video object detection model named UWV-Yolox. First, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to augment the underwater videos. Then, a new CSP_CA module is proposed by adding Coordinate Attention to the backbone of the model to strengthen the representations of objects of interest. Next, a new loss function is proposed, combining regression and jitter loss. Finally, a frame-level optimization module is proposed to refine the detection results by utilizing the relationships between neighboring frames in videos, improving video detection performance. To evaluate the performance of our model, we conduct experiments on the UVODD dataset built in this paper and select mAP@0.5 as the evaluation metric. The mAP@0.5 of the UWV-Yolox model reaches 89.0%, which is 3.2% better than that of the original Yolox model. Furthermore, compared with other object detection models, the UWV-Yolox model produces more stable predictions for objects, and our improvements can be flexibly applied to other models.
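The CLAHE preprocessing step the abstract mentions can be illustrated with a minimal sketch. The version below is a simplified, single-tile contrast-limited equalization in NumPy; real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) operates on local tiles with bilinear blending, and nothing here is taken from the authors' code:

```python
import numpy as np

def clip_limited_equalize(gray, clip_limit=0.01, nbins=256):
    """Contrast-limited histogram equalization on a uint8 image.

    A single-tile simplification of CLAHE: the histogram is clipped at
    `clip_limit` (a fraction of the total pixel count), the clipped
    excess is redistributed uniformly across all bins, and the
    resulting CDF is used as a lookup table to remap intensities.
    """
    hist, _ = np.histogram(gray, bins=nbins, range=(0, 255))
    limit = max(1, int(clip_limit * gray.size))
    excess = int(np.sum(np.maximum(hist - limit, 0)))
    hist = np.minimum(hist, limit) + excess // nbins   # clip and redistribute
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[gray]

# Example: a low-contrast "murky" frame with intensities squeezed into [100, 140]
frame = np.clip(np.random.default_rng(0).normal(120, 8, (64, 64)),
                100, 140).astype(np.uint8)
enhanced = clip_limited_equalize(frame)
```

Clipping the histogram before building the CDF is what keeps CLAHE from over-amplifying noise in nearly uniform regions, which is precisely the failure mode of plain histogram equalization on murky underwater frames.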

16 pages, 4420 KiB  
Article
In Situ Sea Cucumber Detection across Multiple Underwater Scenes Based on Convolutional Neural Networks and Image Enhancements
by Yi Wang, Boya Fu, Longwen Fu and Chunlei Xia
Sensors 2023, 23(4), 2037; https://doi.org/10.3390/s23042037 - 10 Feb 2023
Cited by 3 | Viewed by 1689
Abstract
Recently, rapidly developing artificial intelligence and computer vision techniques have provided technical solutions to promote production efficiency and reduce labor costs in aquaculture and marine resource surveys. Traditional manual surveys are being replaced by advanced intelligent technologies. However, underwater object detection and recognition suffer from image distortion and degradation. In this work, automatic monitoring of sea cucumbers in natural conditions is implemented based on a state-of-the-art object detector, YOLOv7. To suppress image distortion and degradation, image enhancement methods are adopted to improve the accuracy and stability of sea cucumber detection across multiple underwater scenes. Five well-known image enhancement methods are employed to improve the detection performance of YOLOv7 and YOLOv5, and their effectiveness is evaluated experimentally. Non-local image dehazing (NLD) was the most effective for sea cucumber detection across multiple underwater scenes for both YOLOv7 and YOLOv5. The best average precision (AP) of sea cucumber detection was 0.940, achieved by YOLOv7 with NLD. With NLD enhancement, the APs of YOLOv7 and YOLOv5 increased by 1.1% and 1.6%, respectively, and the best AP was 2.8% higher than that of YOLOv5 without image enhancement. Moreover, the real-time capability of YOLOv7 was examined; its average prediction time was 4.3 ms. Experimental results demonstrate that the proposed method can be applied to marine organism surveys by underwater mobile platforms or to automatic analysis of underwater videos.
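The AP figures quoted above follow the standard detection-evaluation recipe: match score-sorted predictions to ground-truth boxes at an IoU threshold (0.5 here), then integrate the precision envelope over recall. Below is a minimal single-class sketch of that recipe, not the authors' evaluation code (`average_precision` and its inputs are illustrative):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def average_precision(preds, gts, iou_thresh=0.5):
    """AP for one class: preds are (score, box) pairs, gts are boxes."""
    preds = sorted(preds, key=lambda p: -p[0])      # highest score first
    matched = [False] * len(gts)
    tp, fp = np.zeros(len(preds)), np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):                # greedy match to best free GT
            if not matched[j] and iou(box, gt) > best:
                best, best_j = iou(box, gt), j
        if best_j >= 0 and best >= iou_thresh:
            matched[best_j], tp[i] = True, 1
        else:
            fp[i] = 1
    rec = np.cumsum(tp) / max(len(gts), 1)
    prec = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # All-point interpolation: area under the precision envelope vs. recall.
    mrec = np.concatenate(([0.0], rec, [1.0]))
    mpre = np.concatenate(([0.0], prec, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

For example, a single correct detection of a single ground-truth box yields an AP of 1.0, while one spurious higher-scored box halves it, which is why enhancement methods that reduce false detections in degraded frames raise AP directly.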

16 pages, 2035 KiB  
Article
CGUN-2A: Deep Graph Convolutional Network via Contrastive Learning for Large-Scale Zero-Shot Image Classification
by Liangwei Li, Lin Liu, Xiaohui Du, Xiangzhou Wang, Ziruo Zhang, Jing Zhang, Ping Zhang and Juanxiu Liu
Sensors 2022, 22(24), 9980; https://doi.org/10.3390/s22249980 - 18 Dec 2022
Cited by 2 | Viewed by 1749
Abstract
Taxonomy illustrates that natural creatures can be classified within a hierarchy. The connections between species are explicit and objective and can be organized into a knowledge graph (KG). It is a challenging task to mine features of known categories from a KG and to reason about unknown categories. The Graph Convolutional Network (GCN) has recently been viewed as a potential approach to zero-shot learning, as it enables knowledge transfer by sharing the statistical strength of nodes in the graph. More layers of graph convolution are stacked in order to aggregate the hierarchical information in the KG. However, the Laplacian over-smoothing problem becomes severe as the number of GCN layers deepens, driving node features toward similarity and degrading performance on zero-shot image classification tasks. We consider two ways to mitigate the Laplacian over-smoothing problem: reducing invalid node aggregation and improving the discriminability among nodes in the deep graph network. We propose a top-k graph pooling method based on the self-attention mechanism to control specific node aggregation, and we additionally introduce a dual, structurally symmetric knowledge graph to enhance the representation of nodes in the latent space. Finally, we apply these concepts to the recently popular contrastive learning framework and propose a novel Contrastive Graph U-Net with two Attention-based graph pooling (Att-gPool) layers, CGUN-2A, which explicitly alleviates the Laplacian over-smoothing problem. To evaluate the method on complex real-world scenes, we test it on a large-scale zero-shot image classification dataset. Extensive experiments show the positive effect of allowing nodes to perform specific aggregation, as well as of homogeneous graph comparison, in our deep graph network, and show that these significantly boost zero-shot image classification performance: Hit@1 accuracy is 17.5% higher, in relative terms, than that of the baseline model on the ImageNet21K dataset.
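The top-k, attention-scored pooling the abstract describes follows the gPool pattern from Graph U-Nets: score each node with a learned projection, keep the k highest-scoring nodes, gate their features, and restrict the adjacency to the survivors. A hedged NumPy sketch of that pattern (the projection vector `p` stands in for a learned parameter; this is not the CGUN-2A code):

```python
import numpy as np

def att_gpool(adj, feats, k, p=None):
    """Attention-based top-k graph pooling (gPool-style).

    Scores each node by projecting its features onto a vector p (a
    stand-in for a learned parameter), keeps the k highest-scoring
    nodes, gates their features with sigmoid(score), and restricts
    the adjacency matrix to the kept nodes.
    """
    if p is None:
        p = np.random.default_rng(0).normal(size=feats.shape[1])
    scores = feats @ p / np.linalg.norm(p)      # one attention score per node
    keep = np.argsort(-scores)[:k]              # indices of the top-k nodes
    gate = 1.0 / (1.0 + np.exp(-scores[keep]))  # sigmoid gating
    return adj[np.ix_(keep, keep)], feats[keep] * gate[:, None], keep

# Toy graph: 6 nodes in a ring, random 4-dimensional features
n = 6
adj = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
feats = np.random.default_rng(42).normal(size=(n, 4))
pooled_adj, pooled_feats, keep = att_gpool(adj, feats, k=3)
```

Because such a layer discards nodes rather than averaging over neighborhoods, stacking pooling and unpooling layers in a U-Net arrangement can grow the receptive field of a deep graph network while limiting the repeated Laplacian smoothing that drives node features toward similarity.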
