Computer Vision for Remote Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 19706

Special Issue Editor


Assist. Prof. Dr. Beril Sirmaçek
Guest Editor
School of Engineering, Computer Science and Informatics, Jönköping University, 553 18 Jönköping, Sweden
Interests: computer vision; remote sensing; artificial intelligence; machine learning; pattern recognition

Special Issue Information

Dear Colleagues,

This Special Issue invites the submission of original research articles on Earth observation applications that use remote sensing images. Authors are encouraged to submit novel approaches for processing images and extracting important information based on computer vision, remote sensing, machine learning, and artificial intelligence techniques. The Special Issue is open to contributions on land use/cover change, water resource management, environmental monitoring, vegetation health, smart agriculture, and time series analysis, as well as innovative approaches to understanding climate change. Original contributions on sensor fusion methods (i.e., combining remote sensing information with ground measurements) are also encouraged.

Assist. Prof. Dr. Beril Sirmaçek
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Pattern recognition
  • Neural networks
  • Remotely sensed images
  • Earth observation
  • Land use/cover change detection
  • Environmental monitoring

Published Papers (4 papers)

17 pages, 30258 KiB  
Article
A Multi-Level Feature Fusion Network for Remote Sensing Image Segmentation
by Sijun Dong and Zhengchao Chen
Sensors 2021, 21(4), 1267; https://doi.org/10.3390/s21041267 - 10 Feb 2021
Cited by 15 | Viewed by 2680
Abstract
High-resolution remote sensing image segmentation is a mature application in many industrial-level image tasks, with both military and civil uses. Scene analysis of high-resolution remote sensing images needs to be automated as much as possible, as it plays a significant role in environmental disaster monitoring, forestry, agriculture, urban planning, and road analysis. To address the large differences in scale among target objects in remote sensing images and the poor recognition of small objects, this study proposes a multi-level feature fusion network (MFNet) that integrates the multi-level features in the backbone to obtain different types of image information, improving the recognition results of remote sensing image segmentation to a certain extent. Experiments demonstrate that the proposed network achieves good segmentation results on the Vaihingen and Potsdam datasets. Full article
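The core idea of multi-level feature fusion can be illustrated with a toy sketch. This is hypothetical code, not the authors' MFNet: it assumes a coarse, low-resolution feature map is upsampled by nearest-neighbour interpolation to the resolution of a finer map and the two are combined element-wise, which is one common way such fusion is realized.

```python
# Toy sketch of multi-level feature fusion (hypothetical, not the MFNet
# architecture): upsample a coarse feature map to the fine map's resolution
# with nearest-neighbour interpolation, then fuse by element-wise addition.

def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsampling of a 2-D feature map (list of lists)."""
    out = []
    for row in fmap:
        expanded = [v for v in row for _ in range(factor)]
        out.extend([expanded[:] for _ in range(factor)])
    return out

def fuse(fine, coarse, factor):
    """Bring the coarse map to the fine map's resolution and add them."""
    up = upsample_nearest(coarse, factor)
    return [[f + u for f, u in zip(f_row, u_row)]
            for f_row, u_row in zip(fine, up)]

# A 4x4 "fine" map and a 2x2 "coarse" map from a deeper backbone stage.
fine = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
coarse = [[10, 20],
          [30, 40]]
fused = fuse(fine, coarse, factor=2)
```

In a real network the fusion would operate on multi-channel tensors and often uses learned 1x1 convolutions or concatenation rather than plain addition; the spatial alignment step shown here is the same in either case.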
(This article belongs to the Special Issue Computer Vision for Remote Sensing)

33 pages, 2283 KiB  
Review
Applications of Deep Learning for Dense Scenes Analysis in Agriculture: A Review
by Qian Zhang, Yeqi Liu, Chuanyang Gong, Yingyi Chen and Huihui Yu
Sensors 2020, 20(5), 1520; https://doi.org/10.3390/s20051520 - 10 Mar 2020
Cited by 105 | Viewed by 9156
Abstract
Deep Learning (DL) is a state-of-the-art machine learning technology that shows superior performance in computer vision, bioinformatics, natural language processing, and other areas. As a modern image processing technology in particular, DL has been successfully applied to various tasks, such as object detection, semantic segmentation, and scene analysis. However, with the increasing prevalence of dense scenes in reality, their analysis becomes particularly challenging because of severe occlusions and the small size of objects. To overcome these problems, DL has recently been applied increasingly to dense scenes, including dense agricultural scenes. The purpose of this review is to explore the applications of DL for dense scene analysis in agriculture. To better elaborate the topic, we first describe the types of dense scenes in agriculture, as well as their challenges. Next, we introduce the popular deep neural networks used in these dense scenes. Then, the applications of these architectures to various agricultural tasks are comprehensively reviewed, including recognition and classification, detection, and counting and yield estimation. Finally, the surveyed DL applications, their limitations, and future work on the analysis of dense images in agriculture are summarized. Full article
(This article belongs to the Special Issue Computer Vision for Remote Sensing)

19 pages, 4914 KiB  
Article
MapGAN: An Intelligent Generation Model for Network Tile Maps
by Jingtao Li, Zhanlong Chen, Xiaozhen Zhao and Lijia Shao
Sensors 2020, 20(11), 3119; https://doi.org/10.3390/s20113119 - 31 May 2020
Cited by 16 | Viewed by 3562
Abstract
In recent years, generative adversarial network (GAN)-based image translation models have achieved great success in image synthesis, image inpainting, image super-resolution, and other tasks. However, the images generated by these models often suffer from problems such as insufficient detail and low quality. For the task of map generation in particular, the generated electronic maps cannot match industrial production in accuracy and aesthetics. This paper proposes a model called Map Generative Adversarial Networks (MapGAN) for generating multiple types of electronic maps accurately and quickly from both remote sensing images and render matrices. MapGAN improves the generator architecture of Pix2pixHD and adds a classifier to the model, enabling it to learn the characteristics and style differences of different types of maps. Using datasets from Google Maps, Baidu Maps, and Map World, we compare MapGAN with recent image translation models on one-to-one map generation and one-to-many domain map generation. The results show that the quality of the electronic maps generated by MapGAN is best in terms of both visual inspection and classic evaluation indicators. Full article
(This article belongs to the Special Issue Computer Vision for Remote Sensing)

21 pages, 20381 KiB  
Article
An Agave Counting Methodology Based on Mathematical Morphology and Images Acquired through Unmanned Aerial Vehicles
by Gabriela Calvario, Teresa E. Alarcón, Oscar Dalmau, Basilio Sierra and Carmen Hernandez
Sensors 2020, 20(21), 6247; https://doi.org/10.3390/s20216247 - 02 Nov 2020
Cited by 9 | Viewed by 3552
Abstract
Blue agave is an important commercial crop in Mexico and the main source of the traditional Mexican beverage known as tequila. The blue agave variety known as Tequilana Weber is a crucial element for the tequila agribusiness and the agricultural economy in Mexico, and the number of agave plants in the field is one of the main parameters for estimating tequila production. In this manuscript, we describe a mathematical morphology-based algorithm that addresses the automatic agave counting task. The proposed methodology was applied to a set of real images collected with an Unmanned Aerial Vehicle equipped with a digital Red-Green-Blue (RGB) camera. The number of plants automatically identified in the collected images was compared to the number counted by hand. The accuracy of the proposed algorithm, which ranged from 0.8309 to 0.9806, depended on the size heterogeneity of the plants in the field and on illumination; overall, its performance was satisfactory. Full article
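The general pipeline behind morphology-based plant counting can be sketched as follows. This is an illustrative example, not the authors' algorithm: it assumes a binary vegetation mask has already been extracted from the RGB image, applies binary erosion to separate touching blobs, and counts 4-connected components, each taken to be one plant.

```python
# Illustrative sketch of morphology-based counting (not the paper's method):
# erode a binary vegetation mask, then count connected components.

def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element:
    a pixel survives only if its entire neighbourhood is foreground."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
    return out

def count_components(img):
    """Count 4-connected foreground components (each one ~ one plant)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]          # flood-fill this component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and img[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

# Toy binary mask with two 3x3 plant blobs.
mask = [
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 1, 0],
]
n_plants = count_components(erode(mask))
```

A production pipeline would add a vegetation-index threshold to obtain the mask and a dilation or marker-based step to recover plant extents after erosion, but the count itself comes from the connected-component stage shown here.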
(This article belongs to the Special Issue Computer Vision for Remote Sensing)
