Reprint

Synthetic Aperture Radar (SAR) Meets Deep Learning

Edited by
February 2023
386 pages
  • ISBN 978-3-0365-6382-4 (Hardback)
  • ISBN 978-3-0365-6383-1 (PDF)

This book is a reprint of the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning that was published in

Subject areas: Engineering; Environmental & Earth Sciences
Summary

This reprint focuses on combining synthetic aperture radar with deep learning technology. It aims to further promote the development of intelligent SAR image interpretation.

Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it a prominent place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has attracted wide attention across remote sensing applications, e.g., geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications.

In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications.

This reprint provides a platform for researchers to address the significant challenges of applying deep learning to SAR and to present their innovative, cutting-edge research results in various manuscript types, e.g., articles, letters, reviews, and technical reports.

Format
  • Hardback
License
© by the authors
Keywords
heterogeneous transformation; SAR image; optical image; conditional generative adversarial nets (CGANs); self-supervised; synthetic aperture radar (SAR); despeckling; enhanced U-Net; video synthetic aperture radar (Video-SAR); moving target tracking; guided anchor Siamese network (GASN); interferometric synthetic aperture radar; deep convolutional neural network; phase unwrapping; unsupervised change detection; polarimetric synthetic aperture radar (PolSAR); UAVSAR; multi-scale shallow block; multi-scale residual block; image registration; transformer; deep learning; SAR target detection; multiscale learning; ship detection; SAR ship detection; position-enhanced attention; lightweight backbone; image augmentation; building extraction; semantic segmentation; SAR dataset; single-stage detector; two-stage detector; anchor free; train from scratch; oriented bounding box; multi-scale detection; computer vision; low-grade road extraction; remote sensing; image segmentation; optical images; scene classification; on-board; lightweight self-supervised algorithm; synthetic aperture radar (SAR) image; arbitrary-oriented ship detection; differentiable rotational IoU algorithm; triangle distance IoU loss; attention-weighted feature pyramid network; multiple skip-scale connections; attention-weighted feature fusion; Rotated-SARShip dataset (RSSD); object classification; radar image reconstruction; convolutional neural networks; ResNet18; GBSAR; Omega-K algorithm