
Pattern Recognition and Image Processing for Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 25522

Special Issue Editors


Guest Editor
School of Astronautics, Beihang University, Beijing 102206, China
Interests: remote sensing image processing and analysis; computer vision; machine learning; pattern recognition
School of Computer Science, Nankai University, Tianjin 300350, China
Interests: hyperspectral unmixing; remote sensing image processing; multi-objective optimization

Special Issue Information

Dear Colleagues,

Remote sensing provides a global perspective and a wealth of data about earth systems, allowing us to visualize and analyze objects and features on the earth’s surface. Today, pattern recognition and image processing technologies are revolutionizing earth observation and presenting unprecedented opportunities and challenges. Despite the recent progress, there are still some open problems and challenges, such as deep learning with multi-modal and multi-resolution remote sensing images, light-weight processing for large-scale data, domain adaptation, and data fusion.

To address these challenges, this Special Issue focuses on presenting the latest advances in pattern recognition and image processing. We invite you to submit papers with methodological contributions as well as innovative applications. All image modalities are encouraged, such as multispectral imaging, hyperspectral imaging, synthetic aperture radar (SAR), multi-temporal imaging, LiDAR, etc. The platform is also unrestricted: sensing can be carried out using drones, aircraft, satellites, robots, etc. Any other applications related to remote sensing are welcome. Potential topics include, but are not limited to:

  • Pattern recognition and machine learning;
  • Deep learning;
  • Image classification, object detection, and image segmentation;
  • Change detection;
  • Image synthesis;
  • Multi-modal data fusion from different sensors;
  • Image quality improvement;
  • Real-time processing of remote sensing data;
  • Unsupervised learning and self-supervised learning;
  • Advanced deep learning techniques (e.g., generative adversarial networks, diffusion probabilistic models, and physics-informed neural networks);
  • Applications of remote sensing images in agriculture, marine science, meteorology, and other fields.

Dr. Zhengxia Zou
Dr. Bin Pan
Dr. Xia Xu
Dr. Zhou Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • pattern recognition
  • image processing
  • machine learning
  • deep learning
  • LiDAR
  • hyperspectral
  • synthetic aperture radar
  • image quality assessment
  • data fusion

Published Papers (11 papers)


Research

21 pages, 6443 KiB  
Article
Infrared Cirrus Detection Using Non-Convex Rank Surrogates for Spatial-Temporal Tensor
by Shengyuan Xiao, Zhenming Peng and Fusong Li
Remote Sens. 2023, 15(9), 2334; https://doi.org/10.3390/rs15092334 - 28 Apr 2023
Cited by 3 | Viewed by 850
Abstract
Infrared small target detection (ISTD) plays a significant role in earth observation infrared systems. However, some high-reflection areas have a grayscale similar to the target, which can cause false alarms in earth observation infrared systems. To improve detection accuracy, we propose a cirrus detection method based on low-rank sparse decomposition as a supplementary measure. To better detect cirrus that may be sparsely insufficient in a single-frame image, the method treats the temporally continuous cirrus image sequence as a tensor, then uses the visual saliency of the image to divide it into a cirrus region and a cirrus-free region. Considering that classical tensor rank surrogates cannot approximate the tensor rank well, we use a non-convex tensor rank surrogate based on the Laplace function for the spatial-temporal tensor (Lap-NRSSTT). To solve the proposed model, we use an efficient optimization approach based on the alternating direction method of multipliers (ADMM). Finally, detection results are obtained by applying threshold segmentation to the reconstructed cirrus images. Results indicate that the proposed scheme achieves better detection capability and higher accuracy than other optimization-based measures in some complex scenarios. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
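Low-rank plus sparse decompositions of the kind described above are typically solved with ADMM, whose sparse-component update reduces to elementwise soft-thresholding. A minimal sketch of that one step (illustrative only, not the authors' implementation; the threshold value is an assumption):

```python
def soft_threshold(x, tau):
    """Elementwise shrinkage: sign(x) * max(|x| - tau, 0).

    This is the proximal operator of tau * |x|, used as the
    sparse-component update inside an ADMM iteration."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Applied to a residual map, small entries (background noise) vanish
# while large entries (candidate target/cirrus pixels) survive.
residual = [0.2, -3.1, 0.9, 4.5, -0.4]
sparse = [soft_threshold(v, 1.0) for v in residual]
```

The same operator appears in virtually every low-rank sparse decomposition solver; only the rank surrogate on the low-rank part varies between methods.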

21 pages, 7495 KiB  
Article
Wavelet Integrated Convolutional Neural Network for Thin Cloud Removal in Remote Sensing Images
by Yue Zi, Haidong Ding, Fengying Xie, Zhiguo Jiang and Xuedong Song
Remote Sens. 2023, 15(3), 781; https://doi.org/10.3390/rs15030781 - 30 Jan 2023
Cited by 6 | Viewed by 1935
Abstract
Cloud occlusion phenomena are widespread in optical remote sensing (RS) images, leading to information loss and image degradation and causing difficulties in subsequent applications such as land surface classification, object detection, and land change monitoring. Therefore, thin cloud removal is a key preprocessing procedure for optical RS images, and has great practical value. Recent deep learning-based thin cloud removal methods have achieved excellent results. However, these methods have a common problem in that they cannot obtain large receptive fields while preserving image detail. In this paper, we propose a novel wavelet-integrated convolutional neural network for thin cloud removal (WaveCNN-CR) in RS images that can obtain larger receptive fields without any information loss. WaveCNN-CR generates cloud-free images in an end-to-end manner based on an encoder–decoder-like architecture. In the encoding stage, WaveCNN-CR first extracts multi-scale and multi-frequency components via wavelet transform, then further performs feature extraction for each high-frequency component at different scales by multiple enhanced feature extraction modules (EFEM) separately. In the decoding stage, WaveCNN-CR recursively concatenates the processed low-frequency and high-frequency components at each scale, feeds them into EFEMs for feature extraction, then reconstructs the high-resolution low-frequency component by inverse wavelet transform. In addition, the designed EFEM consisting of an attentive residual block (ARB) and gated residual block (GRB) is used to emphasize the more informative features. ARB and GRB enhance features from the perspective of global and local context, respectively. Extensive experiments on the T-CLOUD, RICE1, and WHUS2-CR datasets demonstrate that our WaveCNN-CR significantly outperforms existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
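The claim of "larger receptive fields without any information loss" rests on the invertibility of the wavelet transform: each level halves the resolution but splits the signal into low- and high-frequency parts that reconstruct the input exactly. A toy 1D Haar example illustrates the principle (the paper uses a 2D transform inside a CNN; this sketch is only the lossless-downsampling idea):

```python
import math

def haar_forward(x):
    """One level of the orthonormal 1D Haar transform.
    Returns (approximation, detail); len(x) must be even."""
    s = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

signal = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0]
lo, hi = haar_forward(signal)   # half-length outputs: a downsampling step
rec = haar_inverse(lo, hi)      # perfect reconstruction of the input
```

In the paper's setting the high-frequency branches are processed by the EFEMs while the low-frequency branch recurses, and the inverse transform restores full resolution at decode time.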

20 pages, 31332 KiB  
Article
D3CNNs: Dual Denoiser Driven Convolutional Neural Networks for Mixed Noise Removal in Remotely Sensed Images
by Zhenghua Huang, Zifan Zhu, Zhicheng Wang, Xi Li, Biyun Xu, Yaozong Zhang and Hao Fang
Remote Sens. 2023, 15(2), 443; https://doi.org/10.3390/rs15020443 - 11 Jan 2023
Cited by 2 | Viewed by 2091
Abstract
Mixed (random and stripe) noise causes serious degradation of optical remotely sensed image quality, making image content hard to analyze. To remove such noise, various inverse problems are constructed with different priors and solved by either model-based optimization methods or discriminative learning methods. However, each has its drawbacks: the former are flexible but time-consuming in the pursuit of good performance, while the latter are fast but limited to their specialized tasks. To quickly obtain pleasing results by combining their merits, in this paper we propose a novel denoising strategy, Dual Denoiser Driven Convolutional Neural Networks (D3CNNs), to remove both random and stripe noise. D3CNNs has two key parts. First, two auxiliary variables, for the denoised image and the stripe noise respectively, are introduced to reformulate the inverse problem as a constrained optimization problem, which is solved iteratively with the alternating direction method of multipliers (ADMM). Second, a U-shape network is used for the denoised-image auxiliary variable, while a residual CNN (RCNN) is used for the stripe auxiliary variable. Subjective and objective comparisons on both synthetic and real-world remotely sensed images verify that the proposed method is effective and even outperforms the state-of-the-art. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
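Stripped of the learned denoisers and dual variables, the variable-splitting idea above reduces to alternating updates of an image estimate and a stripe estimate. A scalar toy with quadratic penalties standing in for the two networks (an illustration of the splitting only, not the paper's ADMM):

```python
def alternating_split(y, lam_x=1.0, lam_s=1.0, iters=100):
    """Minimize (y - x - s)^2 + lam_x*x^2 + lam_s*s^2 by
    alternating the closed-form updates for x and s."""
    x = s = 0.0
    for _ in range(iters):
        x = (y - s) / (1.0 + lam_x)  # "image" update (U-net stand-in)
        s = (y - x) / (1.0 + lam_s)  # "stripe" update (RCNN stand-in)
    return x, s

# With y = 3 and unit penalties the minimizer is x = s = 1.
x, s = alternating_split(3.0)
```

In D3CNNs the two closed-form updates are replaced by network evaluations, but the loop structure, one sub-update per auxiliary variable per iteration, is the same.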

20 pages, 1292 KiB  
Article
Multi-Scale Feature Aggregation Network for Semantic Segmentation of Land Cover
by Xu Shen, Liguo Weng, Min Xia and Haifeng Lin
Remote Sens. 2022, 14(23), 6156; https://doi.org/10.3390/rs14236156 - 05 Dec 2022
Cited by 6 | Viewed by 2011
Abstract
Land cover semantic segmentation is an important technique with practical value in land resource protection planning, geographical classification, and surveying and mapping analysis. Deep learning has shown excellent performance in image segmentation in recent years, but there are few semantic segmentation algorithms for land cover. When dealing with land cover segmentation tasks, traditional semantic segmentation networks often suffer from low segmentation precision and weak generalization due to the loss of image detail information and the limitation of weight distribution. To achieve high-precision land cover segmentation, this article develops a multi-scale feature aggregation network. The downsampling procedure of a traditional convolutional neural network loses detail information and degrades resolution; to fix these problems, a multi-scale feature extraction spatial pyramid module is designed to assemble regional context from different areas. To address the incomplete multi-scale information of traditional convolutional neural networks, a multi-scale feature fusion module is developed to fuse features from various layers and sizes and boost segmentation accuracy. Finally, a multi-scale convolutional attention module is presented to strengthen attention to the target, addressing the classic convolutional neural network's weak attention to buildings and water areas in land cover segmentation. Contrast and generalization experiments clearly demonstrate that the proposed segmentation algorithm achieves high-precision segmentation of land cover. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
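The spatial pyramid idea described above amounts to pooling the same feature map at several grid sizes and concatenating the results, so each descriptor carries context from regions of different extent. A minimal sketch with average pooling (assumptions: a square single-channel map whose size is divisible by every scale; the scales are illustrative):

```python
def avg_pool(grid, out):
    """Average-pool a square 2D grid down to an out x out grid."""
    n = len(grid)
    k = n // out  # pooling window size
    pooled = []
    for i in range(out):
        row = []
        for j in range(out):
            block = [grid[i*k + a][j*k + b] for a in range(k) for b in range(k)]
            row.append(sum(block) / len(block))
        pooled.append(row)
    return pooled

def pyramid_features(grid, scales=(1, 2, 4)):
    """Concatenate the flattened poolings at every scale."""
    feats = []
    for s in scales:
        for row in avg_pool(grid, s):
            feats.extend(row)
    return feats

fmap = [[float(i + j) for j in range(4)] for i in range(4)]
feats = pyramid_features(fmap)  # 1 + 4 + 16 = 21 context values
```

A real network pools per channel and follows each scale with a convolution, but the aggregation of context at multiple areas is the same bookkeeping.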

23 pages, 15175 KiB  
Article
An Enhanced Image Patch Tensor Decomposition for Infrared Small Target Detection
by Ziling Lu, Zhenghua Huang, Qiong Song, Kun Bai and Zhengtao Li
Remote Sens. 2022, 14(23), 6044; https://doi.org/10.3390/rs14236044 - 29 Nov 2022
Cited by 5 | Viewed by 1389
Abstract
Infrared small-target detection is a key technology for infrared search and track (IRST) systems, but problems such as false detections in complex backgrounds and clutter still exist. To solve these problems, a novel image patch tensor (IPT) model for infrared small-target detection is proposed. First, to better estimate the background component, we utilize the Laplace operator to approximate the background tensor rank. Second, we combine local gradient features and highlighted area indicators to model the local target prior, which can effectively suppress complex background clutter. The proposed model is solved by the alternating direction method of multipliers (ADMM). Experimental results on various scenes show that our model achieves excellent performance in suppressing strong edge clutter and estimating small targets. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
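Laplace-based rank surrogates of the kind used here replace the nuclear norm (the sum of singular values) with a saturating function of each singular value, so large singular values each contribute roughly 1, as they do to the true rank. A sketch of the surrogate itself (eps is an illustrative smoothing parameter, not a value from the paper):

```python
import math

def nuclear_norm(sigmas):
    """Convex surrogate: sum of singular values."""
    return sum(sigmas)

def laplace_surrogate(sigmas, eps=0.1):
    """Non-convex surrogate: sum_i (1 - exp(-sigma_i / eps)).
    Large singular values contribute ~1 each, small ones ~0,
    so the sum tracks the true rank far more closely."""
    return sum(1.0 - math.exp(-s / eps) for s in sigmas)

# Two spectra with the same nuclear norm but different rank:
low_rank = [3.0, 0.0, 0.0]  # rank 1
spread   = [1.0, 1.0, 1.0]  # rank 3
```

The nuclear norm cannot tell these two spectra apart (both sum to 3), while the Laplace surrogate scores them near 1 and 3 respectively, which is why it approximates the rank better inside the decomposition.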

25 pages, 12285 KiB  
Article
Coastline Recognition Algorithm Based on Multi-Feature Network Fusion of Multi-Spectral Remote Sensing Images
by Shi Qiu, Huping Ye and Xiaohan Liao
Remote Sens. 2022, 14(23), 5931; https://doi.org/10.3390/rs14235931 - 23 Nov 2022
Cited by 3 | Viewed by 1405
Abstract
Remote sensing images capture broad geomorphic features and provide a strong basis for analysis and decision making. As 71% of the earth is covered by water, shipping has become an efficient means of international trade and transportation, and the development level of coastal cities directly reflects the development level of a country. The coastline is the boundary between seawater and land, so accurately identifying it is of great significance for assisting shipping traffic and docking, and it also plays an auxiliary role in environmental analysis. Currently, the main problems of coastline recognition from remote sensing images are: (1) image transmission during remote sensing inevitably introduces noise, causing poor image quality that is difficult to enhance; (2) a single scale does not allow for the identification of coastlines at different scales; and (3) features are under-utilized, false detection is high, and intuitive measurement is difficult. To address these issues, we used the following multispectral methods: (1) a PCA-based image enhancement algorithm was proposed to improve image quality; (2) a dual attention network and HRNet network were proposed to extract suspected coastlines at different levels; and (3) a decision-set fusion approach was proposed to transform coastline identification into a probabilistic problem for coastline extraction. Finally, we constructed a coastline straightening model to visualize and analyze the recognition effect. Experiments showed that the algorithm achieves an AOM greater than 0.88 and can accomplish coastline extraction. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
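Turning coastline identification into a probabilistic problem, as described above, can be sketched as fusing per-pixel scores from several detectors and thresholding the combined probability. The equal weights and 0.5 threshold below are illustrative assumptions, not the paper's values:

```python
def fuse_decisions(score_maps, weights=None, threshold=0.5):
    """Weighted average of per-pixel probabilities from several
    detectors, then a hard threshold to a binary coastline mask."""
    n = len(score_maps)
    if weights is None:
        weights = [1.0 / n] * n
    fused, mask = [], []
    for pixel_scores in zip(*score_maps):
        p = sum(w * s for w, s in zip(weights, pixel_scores))
        fused.append(p)
        mask.append(1 if p >= threshold else 0)
    return fused, mask

# Two detectors disagreeing on the middle pixel:
dual_attention = [0.9, 0.6, 0.1]
hrnet          = [0.8, 0.2, 0.2]
probs, mask = fuse_decisions([dual_attention, hrnet])
```

Fusing at the decision level keeps the two networks independent: a confident detection survives only if it is not contradicted by the other branch.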

23 pages, 7471 KiB  
Article
Multi-Modal Feature Fusion Network with Adaptive Center Point Detector for Building Instance Extraction
by Qinglie Yuan and Helmi Zulhaidi Mohd Shafri
Remote Sens. 2022, 14(19), 4920; https://doi.org/10.3390/rs14194920 - 01 Oct 2022
Cited by 10 | Viewed by 3921
Abstract
Building information extraction utilizing remote sensing technology has vital applications in many domains, such as urban planning, cadastral mapping, geographic information censuses, and land-cover change analysis. In recent years, deep learning algorithms with strong feature construction ability have been widely used in automatic building extraction. However, most methods using semantic segmentation networks cannot obtain object-level building information. Some instance segmentation networks rely on predefined detectors and have weak detection ability for buildings with complex shapes and multiple scales. In addition, the advantages of multi-modal remote sensing data have not been effectively exploited to improve model performance with limited training samples. To address the above problems, we propose a CNN framework with an adaptive center point detector for the object-level extraction of buildings. The proposed framework combines object detection and semantic segmentation with multi-modal data, including high-resolution aerial images and LiDAR data, as inputs. Meanwhile, we developed novel modules to optimize and fuse multi-modal features. Specifically, the local spatial–spectral perceptron can mutually compensate for semantic information and spatial features. The cross-level global context module can enhance long-range feature dependence. The adaptive center point detector explicitly models deformable convolution to improve detection accuracy, especially for buildings with complex shapes. Furthermore, we constructed a building instance segmentation dataset using multi-modal data for model training and evaluation. Quantitative analysis and visualized results verified that the proposed network can improve the accuracy and efficiency of building instance segmentation. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
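Fusing aligned imagery and LiDAR streams, as described above, usually means letting one modality gate or compensate the other before concatenation. A toy channel-gating sketch, purely illustrative of the pattern (the gating function and feature values are assumptions, not the paper's modules):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_fusion(image_feats, lidar_feats):
    """Per-channel gates derived from LiDAR features modulate the
    image features; both streams are then concatenated."""
    gates = [sigmoid(v) for v in lidar_feats]
    modulated = [g * f for g, f in zip(gates, image_feats)]
    return modulated + lidar_feats  # channel-wise concatenation

# A strongly positive LiDAR response passes the image feature
# through; a strongly negative one suppresses it.
fused = gated_fusion([1.0, -2.0, 0.5], [0.0, 10.0, -10.0])
```

The benefit over plain concatenation is that height cues can veto spectral false positives (and vice versa) before the detection head sees the features.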

20 pages, 2584 KiB  
Article
LMSD-YOLO: A Lightweight YOLO Algorithm for Multi-Scale SAR Ship Detection
by Yue Guo, Shiqi Chen, Ronghui Zhan, Wei Wang and Jun Zhang
Remote Sens. 2022, 14(19), 4801; https://doi.org/10.3390/rs14194801 - 26 Sep 2022
Cited by 27 | Viewed by 3395
Abstract
At present, deep learning is widely used in SAR ship target detection, but accurate, real-time detection of multi-scale targets still faces tough challenges. CNN-based SAR ship detectors struggle to meet real-time requirements because of their large number of parameters. In this paper, we propose a lightweight, single-stage SAR ship target detection model, the YOLO-based lightweight multi-scale ship detector (LMSD-YOLO), with better multi-scale adaptation capabilities. The proposed LMSD-YOLO consists of a depthwise separable convolution, batch normalization and activate or not (ACON) activation function (DBA) module, a Mobilenet with stem block (S-Mobilenet) backbone module, a depthwise adaptively spatial feature fusion (DSASFF) neck module, and the SCYLLA-IoU (SIoU) loss function. First, the DBA module is proposed as a general lightweight convolution unit from which the whole lightweight model is constructed. Second, the improved S-Mobilenet module is designed as the backbone feature extraction network to enhance feature extraction ability without additional calculations. Then, the DSASFF module is proposed to achieve adaptive fusion of multi-scale features with fewer parameters. Finally, SIoU is used as the loss function to accelerate model convergence and improve detection accuracy. The effectiveness of LMSD-YOLO is validated on the SSDD, HRSID, and GFSDD datasets; the experimental results show that our proposed model has a smaller model volume and higher detection accuracy, and can accurately detect multi-scale targets in more complex scenes. The model volume of LMSD-YOLO is only 7.6 MB (52.77% of the model size of YOLOv5s), and its detection speed on the NVIDIA AGX Xavier development board reaches 68.3 FPS (32.7 FPS higher than the YOLOv5s detector), indicating that LMSD-YOLO can easily be deployed to mobile platforms for real-time application. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
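The parameter savings behind depthwise separable convolution units like the DBA module come from factorizing a standard convolution into a depthwise step and a pointwise step. The bookkeeping is easy to verify (generic formulas, biases ignored; the layer sizes are illustrative, not the paper's):

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution."""
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    """Depthwise separable: one k x k filter per input channel,
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)  # 147456 weights
dsc = dsc_params(3, 128, 128)   # 17536 weights
ratio = dsc / std               # equals 1/c_out + 1/k^2 when c_in == c_out
```

For 3 x 3 kernels the factorized form needs roughly an eighth of the weights, which is the main lever lightweight detectors pull to fit embedded boards such as the AGX Xavier.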

18 pages, 10193 KiB  
Article
3D Reconstruction of Remote Sensing Mountain Areas with TSDF-Based Neural Networks
by Zipeng Qi, Zhengxia Zou, Hao Chen and Zhenwei Shi
Remote Sens. 2022, 14(17), 4333; https://doi.org/10.3390/rs14174333 - 01 Sep 2022
Cited by 8 | Viewed by 2292
Abstract
The remote sensing 3D reconstruction of mountain areas has a wide range of applications in surveying, visualization, and game modeling. Different from indoor objects, outdoor mountain reconstruction faces additional challenges, including illumination changes, diversity of textures, and highly irregular surface geometry. Traditional neural network-based methods that lack discriminative features struggle to handle the above challenges, and thus tend to generate incomplete and inaccurate reconstructions. The truncated signed distance function (TSDF) is a commonly used parameterized representation of 3D structures, which is naturally convenient for neural network computation and computer storage. In this paper, we propose a novel deep learning method with TSDF-based representations for robust 3D reconstruction from images containing mountain terrains. The proposed method takes in a set of images captured around an outdoor mountain and produces high-quality TSDF representations of the mountain areas. To address the aforementioned challenges, such as lighting variations and texture diversity, we propose a view fusion strategy based on reweighted mechanisms (VRM) to better integrate multi-view 2D features of the same voxel. A feature enhancement (FE) module is designed to provide a better discriminative geometry prior in the feature decoding process. We also propose a spatial–temporal aggregation (STA) module to reduce the ambiguity between temporal features and improve the accuracy of the reconstruction surfaces. A synthetic dataset for reconstructing images containing mountain terrains is built. Our method outperforms the previous state-of-the-art TSDF-based and depth-based reconstruction methods in terms of both 2D and 3D metrics. Furthermore, we collect real-world multi-view terrain images from Google Maps. Qualitative results demonstrate the good generalization ability of the proposed method. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
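A TSDF stores, per voxel, the signed distance to the nearest surface clamped to a truncation band, which is what makes it convenient for network regression and storage. A 1D sketch of the representation (illustrative only; the surface position and truncation distance are assumptions, and the paper works on 3D voxel grids):

```python
def tsdf_value(voxel, surface, trunc):
    """Signed distance from a voxel to the surface, normalized by
    the truncation distance and clamped to [-1, 1]. Positive in
    front of the surface, negative behind it."""
    d = (surface - voxel) / trunc
    return max(-1.0, min(1.0, d))

# Surface at coordinate 5.0 with a truncation band of 2.0:
samples = [tsdf_value(v, 5.0, 2.0) for v in [0.0, 4.0, 5.0, 6.0, 9.0]]
# voxels far in front saturate at +1, far behind at -1,
# and the zero crossing marks the reconstructed surface
```

Because the field saturates away from the surface, a network only has to regress meaningful values in a thin shell around the terrain, and the surface itself is recovered as the zero level set.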

19 pages, 7106 KiB  
Article
Few-Shot Multi-Class Ship Detection in Remote Sensing Images Using Attention Feature Map and Multi-Relation Detector
by Haopeng Zhang, Xingyu Zhang, Gang Meng, Chen Guo and Zhiguo Jiang
Remote Sens. 2022, 14(12), 2790; https://doi.org/10.3390/rs14122790 - 10 Jun 2022
Cited by 12 | Viewed by 2273
Abstract
Monitoring and identification of ships in remote sensing images is of great significance for port management, marine traffic, marine security, etc. However, due to the small size of ships and complex backgrounds, ship detection in remote sensing images is still a challenging task. Currently, deep-learning-based detection models need a lot of data and manual annotation, while training data containing ships in remote sensing images may be limited. To solve this problem, in this paper we propose a few-shot multi-class ship detection algorithm with attention feature map and multi-relation detector (AFMR) for remote sensing images. We use the basic framework of You Only Look Once (YOLO) and use the attention feature map module to enhance the features of the target. In addition, the multi-relation head module is used to optimize the detection head of YOLO. Extensive experiments on the publicly available HRSC2016 dataset and the self-constructed REMEX-FSSD dataset validate that our method achieves good detection performance. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
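Attention feature map modules of the kind mentioned above can be thought of as a softmax over positions that reweights features toward likely targets. A minimal sketch of that gating pattern (illustrative of attention in general, not the paper's exact module; scores and features are toy values):

```python
import math

def spatial_attention(scores, feats):
    """Softmax the per-position scores, then rescale each
    position's feature by its attention weight."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [w * f for w, f in zip(weights, feats)], weights

# One position scores higher than the rest and so dominates:
attended, w = spatial_attention([2.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

In a few-shot setting this kind of reweighting matters because the detector cannot rely on many labeled examples to learn which regions to ignore; the attention map supplies that focus explicitly.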

17 pages, 2531 KiB  
Article
Pair-Wise Similarity Knowledge Distillation for RSI Scene Classification
by Haoran Zhao, Xin Sun, Feng Gao and Junyu Dong
Remote Sens. 2022, 14(10), 2483; https://doi.org/10.3390/rs14102483 - 22 May 2022
Cited by 6 | Viewed by 1821
Abstract
Remote sensing image (RSI) scene classification aims to identify the semantic categories of remote sensing images based on their contents. Owing to the strong learning capability of deep convolutional neural networks (CNNs), RSI scene classification methods based on CNNs have drawn much attention and achieved remarkable performance. However, such outstanding deep neural networks are usually computationally expensive and time-consuming, making them impractical to deploy on resource-constrained edge devices, such as the embedded systems used on drones. To tackle this problem, we introduce a novel pair-wise similarity knowledge distillation method, which reduces model complexity while maintaining satisfactory accuracy, to obtain a compact and efficient deep neural network for RSI scene classification. Different from existing knowledge distillation methods, we design a novel distillation loss to transfer valuable discriminative information from the cumbersome model to the compact model, reducing within-class variation and restraining between-class similarity. This yields a compact student model with higher performance than existing knowledge distillation methods in RSI scene classification. To be specific, we distill the probability outputs between sample pairs with the same label and match the probability outputs between the teacher and student models. Experiments on three public benchmark datasets for RSI scene classification, i.e., the AID, UCMerced, and NWPU-RESISC datasets, verify that the proposed method effectively distills knowledge and results in higher performance. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing II)
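Matching probability outputs between teacher and student, as described above, is commonly scored with a temperature-softened softmax and a KL divergence; the pair-wise variant applies the same comparison to outputs of same-label sample pairs. A sketch of the generic distillation loss (standard formulation, not necessarily the authors' exact loss; logits and temperature are toy values):

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-softened softmax; higher temp flattens the
    distribution and exposes inter-class similarity structure."""
    m = max(logits)
    exps = [math.exp((v - m) / temp) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): zero iff the two distributions match."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

teacher = softmax([4.0, 1.0, 0.0], temp=2.0)
student = softmax([2.0, 1.5, 0.5], temp=2.0)
loss = kl_divergence(teacher, student)  # > 0 until the student matches
```

Minimizing this loss pulls the student's softened outputs toward the teacher's, transferring the teacher's view of between-class similarity rather than just its hard labels.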
