Article

Inverse Airborne Optical Sectioning

by Rakesh John Amala Arokia Nathan, Indrajit Kurmi and Oliver Bimber *
Institute of Computer Graphics, Johannes Kepler University Linz, 4040 Linz, Austria
* Author to whom correspondence should be addressed.
Drones 2022, 6(9), 231; https://doi.org/10.3390/drones6090231
Submission received: 27 July 2022 / Revised: 29 August 2022 / Accepted: 30 August 2022 / Published: 2 September 2022
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

Abstract

We present Inverse Airborne Optical Sectioning (IAOS), an optical analogy to Inverse Synthetic Aperture Radar (ISAR). Moving targets, such as walking people, that are heavily occluded by vegetation can be made visible and tracked with a stationary optical sensor (e.g., a hovering camera drone above forest). We introduce the principles of IAOS (i.e., inverse synthetic aperture imaging), explain how the signal of occluders can be further suppressed by filtering the Radon transform of the image integral, and present how targets’ motion parameters can be estimated manually and automatically. Finally, we show that while tracking occluded targets in conventional aerial images is infeasible, it becomes efficiently possible in integral images that result from IAOS.

1. Introduction

Higher resolution, a wide depth of field, fast framerates, and high contrast or signal-to-noise ratio often cannot be achieved with compact imaging systems that use narrow-aperture sensors. Synthetic aperture (SA) sensing is a widely recognized technique for achieving these objectives by acquiring the individual signals of multiple small-aperture sensors (or of a single moving one) and computationally combining them to approximate the signal of a physically infeasible, hypothetical wide-aperture sensor [1]. This principle has been used in a wide range of applications, such as radar [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28], telescopes [29,30], microscopes [31], sonar [32,33,34,35], ultrasound [36,37], lasers [38,39], and optical imaging [40,41,42,43,44,45,46,47].
In radar, electromagnetic waves are emitted and their backscattered echoes are recorded by an antenna. Electromagnetic waves at typical radar wavelengths (compared with the visible spectrum) can penetrate scattering media (i.e., clouds, vegetation, and, partly, soil) and are therefore useful for obtaining information in all weather conditions. However, acquiring high spatial resolution images would require an impractically large antenna [2]. Therefore, since its invention in the 1950s [3,4], Synthetic Aperture Radar (SAR) sensors have been placed on space- and airborne systems, such as satellites [5,6,7,8], planes [9,10,11], and drones [12,13], operating in different modes, such as strip-map [11,14], spotlight [11,14], and circular [10,14], to observe various sorts of phenomena on Earth’s surface. These include crop growth [8], mine detection [12], natural disasters [6], and climate change effects, such as deforestation [14] or the melting of glaciers [7]. Phase differences of multiple SAR recordings (interferometry) have even been used to reconstruct depth information and enable finer resolutions [15].
Analogous to SAR (which utilizes moving radars for synthetic aperture sensing of largely static targets), a technique known as Inverse Synthetic Aperture Radar (ISAR) [16,17,18] exploits the relative motion between moving targets and static radars for SAR sensing. In contrast to SAR (where the radar motion is usually known), ISAR is challenged by the estimation of an unknown target motion. It requires sophisticated signal processing and is often limited to sensing one target at a time, while SAR can image large areas and monitor multiple (static) targets simultaneously [17,18]. ISAR has been used for non-cooperative target recognition (non-stationary targets) in maritime [19,20], airspace [21,22], near-space [23,24], and overland surveillance applications [25,26,27,28]. Recently, spatially distributed systems and advanced signal processing, such as compressed sensing and machine learning, have been utilized to obtain 3D images of targets, their reflectivity, and more degrees of freedom for target motion estimation [27,28].
With Airborne Optical Sectioning (AOS) [48,49,50,51,52,53,54,55,56,57,58,59,60], we introduced an optical synthetic aperture imaging technique that captures an unstructured light field with an aircraft, such as a drone. We utilized manually, automatically [48,49,50,51,52,53,54,55,56,58,59], or fully autonomously [57] operated camera drones that sample multispectral (RGB and thermal) images within a certain (synthetic aperture) area above occluding vegetation (such as forest) and combined their signals computationally to remove occlusion. The outcome is a largely occlusion-free integral image of the ground that reveals details of registered targets, while unregistered occluders above the ground, such as trunks, branches, or leaves, disappear in strong defocus. In contrast to SAR, AOS benefits from high spatial resolution, real-time processing rates, and wavelength independence, making it useful in many domains. So far, AOS has been applied to the visible [48,59] and the far-infrared (thermal) spectrum [51] for various applications, such as archeology [48,49], wildlife observation [52], and search and rescue [55,56]. A statistical model of randomly distributed occluders [50,57,60] explains the limits of AOS and its efficacy with respect to optimal sampling parameters. Common image processing tasks, such as classification with deep neural networks [55,56] or color anomaly detection [59], have been shown to perform significantly better when applied to AOS integral images rather than to conventional aerial images. We also demonstrated the real-time capability of AOS by deploying it on a fully autonomous, classification-driven, adaptive search and rescue drone [56]. Yet, the sequential sampling nature of AOS when used with conventional single-camera drones has limited its applications to recovering static targets only. Moving targets lead to motion blur in the AOS integral images, making them nearly impossible to classify or track.
In [59], we presented a first solution for tracking moving people through densely occluding foliage with parallel synthetic aperture sampling, supported by a drone-operated, 10 m wide, 1D camera array (comprising 10 synchronized cameras). Although feasible, such a specialized imaging system is in most cases impractical, as it is bulky and difficult to control.
Inspired by the principles of ISAR for radar, in this article we present Inverse Airborne Optical Sectioning (IAOS) for detecting and tracking moving targets through occluding foliage (cf. Figure 1b) with a conventional, single-camera drone (cf. Figure 1c). As with ISAR, IAOS relies on the motion of targets being sensed over time by a static airborne optical sensor (e.g., a drone hovering above forest) (cf. Figure 1a) to computationally reconstruct an occlusion-free integral image (cf. Figure 1d). Essential for an efficient reconstruction is the correct estimation of the target’s motion.
In this article, we make four main contributions: (1) We introduce the principles of IAOS (i.e., inverse synthetic aperture imaging) in Section 1 and Section 2. (2) We explain how the signal of occluders can be further suppressed by filtering the Radon transform of the image integral in Section 2.1 (cf. Figure 1e). (3) We present how a target’s motion parameters can be estimated manually and automatically in Section 2.1 and Section 2.2. (4) Finally, we show that while tracking occluded targets in conventional aerial images is infeasible, it is efficiently possible in integral images that result from IAOS in Section 3.

2. Materials and Methods

All field experiments were carried out in compliance with the European Union Aviation Safety Agency (EASA) flight regulations, using a DJI Mavic 2 Enterprise Advanced, over dense broadleaf, conifer, and mixed forest, and under direct sunlight as well as under cloudy weather conditions. Free-flight drone operations were performed using DJI’s standalone smart remote controller with DJI’s Pilot application. RGB videos at a resolution of 1920 × 1080 px (30 fps) and thermal videos at a resolution of 640 × 512 px (30 fps) were recorded on the drone’s internal memory and processed offline after landing. For vertical (top-down, as in Figure 3) scans, the drone hovered at an altitude of about 35 m AGL. For horizontal scans (sideways, as in Figure 4), the drone hovered at a distance of about 10 m from the vegetation. For quicker processing, we extracted frames at 1–5 fps from the acquired 30 fps thermal videos using FFmpeg Python bindings. Offline processing included intrinsic camera calibration (a pre-calibrated transformation matrix computed using MATLAB’s camera calibrator application) and image undistortion/rectification using OpenCV’s pinhole camera model (as explained in [48,55]). The undistorted and rectified images were cropped to a field of view of 36° and a resolution of 1024 × 1024 px. Image integration was achieved by averaging the pre-processed images after registering them based on manually or automatically estimated motion parameters, as explained in Section 2.1 and Section 2.2. Radon transform filtering [61,62,63] (also explained in Section 2.1 and Section 2.2) was implemented in Mathworks’ MATLAB R2022a.
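To make this preprocessing pipeline concrete, the following minimal Python sketch extracts reduced-rate frames with the ffmpeg-python bindings and undistorts them with OpenCV’s pinhole camera model. File names, intrinsic parameters, and distortion coefficients are illustrative placeholders, not the calibration values used in our experiments.

import os
import ffmpeg   # ffmpeg-python bindings
import cv2
import numpy as np

# 1) Extract a reduced frame rate (here 5 fps) from the recorded 30 fps thermal video.
os.makedirs("frames", exist_ok=True)
ffmpeg.input("thermal_30fps.mp4").filter("fps", fps=5).output(
    "frames/frame_%04d.png", start_number=0
).run(quiet=True)

# 2) Undistort/rectify a frame with a pre-calibrated pinhole model.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])            # placeholder camera matrix
d = np.array([-0.1, 0.01, 0.0, 0.0, 0.0])  # placeholder distortion coefficients

img = cv2.imread("frames/frame_0000.png", cv2.IMREAD_GRAYSCALE)
undistorted = cv2.undistort(img, K, d)
cv2.imwrite("frames/frame_0000_rect.png", undistorted)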

2.1. Manual Motion Estimation

If the target’s motion parameters (i.e., direction θ [°] and speed s [m/s]) are known and assumed to be constant for all time intervals, the captured images can be registered by shifting them according to θ and s. Thereby, θ can be mapped directly to the image plane, while s must be converted from [m/s] to [px/s] (which is easily possible after camera calibration and with known drone altitude). Averaging the registered images results in an integral image that shows the target in focus (local motion of the target itself, such as the arm movements of a walking person, leads to defocus), while the misregistered occluders vanish in defocus.
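This registration-and-averaging step can be sketched in Python as follows; the helper name, the sign convention for the shifts, and the use of scipy.ndimage.shift are illustrative assumptions (with s already converted to pixels per frame), not our exact implementation.

import numpy as np
from scipy.ndimage import shift as nd_shift

def integrate(frames, theta_deg, speed_px_per_frame):
    # Shift all frames along the (assumed constant) target motion so that the
    # target stays registered to the most recent frame, then average them.
    theta = np.deg2rad(theta_deg)          # image coordinate system: clockwise, +y axis = 0 deg (assumption)
    dy = np.cos(theta) * speed_px_per_frame
    dx = np.sin(theta) * speed_px_per_frame
    n = len(frames)
    integral = np.zeros_like(frames[0], dtype=np.float64)
    for i, f in enumerate(frames):
        k = n - 1 - i                      # frames between frame i and the latest one
        integral += nd_shift(f.astype(np.float64), (k * dy, k * dx))
    return integral / n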
Large occluders that are shifted in direction θ while being integrated appear as linear directional blur artifacts in the integral image (cf. Figure 1d). Their signal can be suppressed by filtering (zeroing out) the Radon transform of the integral image I(θ,s) in direction θ (+/− an uncertainty range that considers local motion non-linearities of the occluders, such as movements of branches caused by wind, etc.). The inverse Radon transform (filtered back projection [63]) of the filtered sinogram results in a new integral image with suppressed signal of the directionally blurred occluders (cf. Figure 1e). This process is illustrated in Figure 2, and can be summarized mathematically with:
$I'(\theta, s) = \mathrm{Rf}^{-1}\big(F\big(\mathrm{Rf}(I(\theta, s)), \theta\big)\big),$  (1)
where F is the filter function which zeros out coefficients at angle θ (+/− uncertainty range) in the sinogram.
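A possible implementation of Equation (1) with scikit-image’s radon/iradon functions is sketched below. The ±15° uncertainty range follows the example of Figure 2; the mapping between our image coordinate system and scikit-image’s angle convention is an assumption and may need adjustment.

import numpy as np
from skimage.transform import radon, iradon

def radon_filter(integral, theta_deg, uncertainty=15.0):
    angles = np.arange(180.0)                    # projection angles in degrees
    sinogram = radon(integral, theta=angles)     # Rf(I)
    # F: zero out sinogram columns within theta +/- uncertainty (modulo 180 deg)
    diff = np.abs(((angles - theta_deg) + 90.0) % 180.0 - 90.0)
    sinogram[:, diff <= uncertainty] = 0.0
    # Rf^-1: filtered back projection of the filtered sinogram
    return iradon(sinogram, theta=angles)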
One way of estimating the correct motion parameters is visual search (i.e., θ and s are interactively modified until the target appears best focused in the integral image). Exploring the two-dimensional parameter space within proper bounds is relatively efficient if the motion can be assumed to be constant. Sample results are presented in Section 3. See also Supplementary Video S1 for an example of manual visual search for the motion parameters of the results shown in Figure 3k. In the case of non-linear motion, the motion parameters must be estimated continuously and automatically; a manual exploration becomes infeasible.

2.2. Automatic Motion Estimation

Automatic estimation of motion parameters requires an error metric which is capable of detecting improvement and degradation in visibility (i.e., focus and occlusion) for different parameters. Here, we utilize simple gray level variance (GLV) [64] as an objective function. We already proved in [53] that, in contrast to traditionally used gradient-, Laplacian-, or wavelet-based focus metrics [65], GLV does not rely on any image features and is thus invariant to occlusion. In [54] (see also Appendix A), we demonstrated that the variance of an integral image is:
$\mathrm{Var}[I] = \frac{D(1 - D)(\mu_o - \mu_s)^2 + D\sigma_o^2 + (1 - D)\sigma_s^2}{N} + (1 - D)^2\left(1 - \frac{1}{N}\right)\sigma_s^2,$  (2)
where D is the probability of occlusion, while $\mu_o, \sigma_o^2$ and $\mu_s, \sigma_s^2$ are the statistical properties (mean and variance) of the occlusion and of the target signal, respectively.
Integrating N individual images with optimal motion parameters results in an occlusion-free view of the target’s signal, whereas the signal strength of the occluders is reduced and disappears in strong defocus. To further suppress occluders, we used Radon filtering [61,62,63], as described in Section 2.1. However, we now utilize the linearity property of the Radon transform, which states that:
$\mathrm{Rf}\Big(\sum_i \alpha_i I_i\Big) = \sum_i \alpha_i\, \mathrm{Rf}(I_i).$  (3)
Thus, instead of filtering the integral image $I(\theta, s)$, as explained in Equation (1), we apply Radon transform filtering to each single image $I_i$ before integrating it.
For automatic motion parameter estimation, we registered the current integral image I (integrating $I_1 \ldots I_{i-1}$) to the latest (most recently recorded) inverse-Radon-transformed filtered image $I'_i = \mathrm{Rf}^{-1}\big(F\big(\mathrm{Rf}(I_i), \theta\big)\big)$ by maximizing $\mathrm{Var}[I]$ while optimizing for the best motion parameters $(\theta, s)$. A deterministic global search, DIRECT [66] (as implemented in NLopt [67]), was applied for the optimization. Consequently, we considered each discrete motion component between two recorded images and within the corresponding imaging time (e.g., 1/30 s at 30 fps) to be piecewise linear. The integration of multiple images, however, can reveal and track a non-linear motion pattern in which $(\theta, s)$ vary at each recording step. Sample results are presented in Section 3 and in Supplementary Videos S2 and S3.
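The following sketch illustrates one such optimization step with NLopt’s DIRECT implementation; the registration helper, parameter bounds, and evaluation budget are illustrative assumptions rather than the settings used for our results.

import numpy as np
import nlopt

def estimate_motion(prev_integral, filtered_frame, register):
    # register(image, theta, s) is assumed to shift prev_integral by the candidate
    # motion so that it aligns with the latest Radon-filtered frame.
    def objective(x, grad):
        theta, s = x
        candidate = (register(prev_integral, theta, s) + filtered_frame) / 2.0
        return float(np.var(candidate))          # gray level variance (GLV) to be maximized

    opt = nlopt.opt(nlopt.GN_DIRECT, 2)          # deterministic global search (DIRECT)
    opt.set_lower_bounds([0.0, 0.0])             # theta [deg], s [px/frame]
    opt.set_upper_bounds([360.0, 10.0])
    opt.set_max_objective(objective)
    opt.set_maxeval(200)
    theta_opt, s_opt = opt.optimize([180.0, 5.0])
    return theta_opt, s_opt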

3. Results

Figure 3 presents results from field studies of IAOS with manual motion estimation, as explained in Section 2.1. Images are recorded top-down, with the drone hovering at a constant position above conifer (Figure 3a–l), broadleaf (Figure 3m–o), and mixed (Figure 3p–r) forest. Estimated motion parameters of hidden walking people were: 118°, 0.5 m/s (Figure 3a–l), 108°, 0.6 m/s (Figure 3m–o), and 90°, 0.6 m/s (Figure 3p–q).
Figure 4 illustrates an example with the drone hovering at a distance of 10 m in front of dense bushes (at an altitude of 2 m, recording horizontally). The hidden person is walking from right to left at 260° with 0.27 m/s (both manually estimated).
Figure 5 illustrates two examples for automatic motion estimation, as explained in Section 2.2, with the drone hovering at an altitude of 35 m and a hidden person walking through dense forest.
For tracking, moving targets are first detected by background subtraction based on Gaussian mixture models [68,69]. The resulting foreground mask is further processed with morphological operations to eliminate noise [70,71]. Subsequently, blob analysis [72,73] detects connected pixels corresponding to each moving target. The association of detections in subsequent frames is based entirely on motion, where the motion of each detected target is estimated by a Kalman filter. The filter predicts the target’s location in the subsequent frame (based on its previous motion and the associated motion model) and then determines the likelihood of assigning the detection to the target.
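The detection stage of this proof-of-concept pipeline can be sketched with OpenCV as follows; all parameter values are illustrative, and the detection-to-track assignment logic is omitted for brevity.

import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

def detect_blobs(integral_u8, min_area=25):
    # Gaussian-mixture background subtraction, morphological noise removal,
    # and blob (connected component) extraction on an 8-bit integral image.
    mask = bg.apply(integral_u8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [centroids[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Constant-velocity Kalman filter predicting the target state (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1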
For comparison, we applied the above tracking approach both to the sequence of captured single thermal images and to the sequence of integral images computed from them, as described in Section 2.2. For each case, the tracking parameters (such as minimum blob size, maximum prediction length, and number of training images for background subtraction) were individually optimized to achieve the best possible results.
While tracking in single images leads to many false positive detections and is therefore practically infeasible, tracking in integral images results in clear track paths of a single target. Estimated mean motion parameters were: 291°, 0.82 m/s (Figure 5a–c), and 309°, 0.16 m/s for the first leg and 241°, 0.41 m/s for the second leg (Figure 5d–f). See Supplementary Videos S2 and S3 for dynamic examples of these results.

4. Discussion and Conclusions

In this article we presented Inverse Airborne Optical Sectioning (IAOS), an optical analogy to Inverse Synthetic Aperture Radar (ISAR). Moving targets, such as walking people, that are heavily occluded by vegetation can be made visible and tracked with a stationary optical sensor (e.g., a hovering camera drone above forest). We introduced the principles of IAOS (i.e., inverse synthetic aperture imaging), explained how the signal of occluders can be further suppressed by filtering the Radon transform of the image integral, and presented how targets’ motion parameters can be estimated manually and automatically. Furthermore, we showed that while tracking occluded targets in conventional aerial images is infeasible, it is efficiently possible in integral images that result from IAOS.
IAOS has several limitations: we assume that the local motion of occluders and of the drone (e.g., caused by wind) is smaller than the motion of the target. Small local motions of the target itself, such as individual moving body parts, appear blurred in integral images. Moreover, the field of view of a hovering drone is limited, and moving targets might leave it quickly. In the future, we will investigate how adapting the drone’s movement to the target’s movement can increase the field of view and reduce the blur caused by local target motion. This corresponds to a combination of IAOS (i.e., occlusion removal by registering target motion) and classical AOS (i.e., occlusion removal by registering drone movement). Furthermore, the results of Radon transform filtering exhibit artifacts that are due to under-sampling; higher imaging rates can overcome this. The blob-based tracking approach applied for our proof of concept is very simple; more sophisticated methods achieve superior tracking results. See Supplementary Videos S2 and S3 for dynamic examples of these results. However, we believe that tracking in integral images will always outperform tracking in conventional images.

Supplementary Materials

The following supporting information can be downloaded at: https://github.com/JKU-ICG/AOS/, Video S1: Manual visual search for the motion parameters. Video S2: Automatic motion estimation (example 1). Video S3: Automatic motion estimation (example 2).

Author Contributions

Conceptualization, O.B.; methodology, O.B., R.J.A.A.N. and I.K.; software, R.J.A.A.N. and I.K.; validation, R.J.A.A.N., I.K. and O.B.; formal analysis, R.J.A.A.N., I.K. and O.B.; investigation, R.J.A.A.N. and I.K.; resources, R.J.A.A.N. and I.K.; data curation, R.J.A.A.N. and I.K.; writing—original draft preparation, O.B., R.J.A.A.N. and I.K.; writing—review and editing, O.B., R.J.A.A.N. and I.K.; visualization, O.B., R.J.A.A.N. and I.K.; supervision, O.B.; project administration, O.B.; funding acquisition, O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Austrian Science Fund (FWF) under grant number P 32185-NBL, and by the State of Upper Austria and the Austrian Federal Ministry of Education, Science and Research via the LIT–Linz Institute of Technology under grant number LIT-2019-8-SEE-114.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data, code, and supplementary material are available on GitHub: https://github.com/JKU-ICG/AOS/ (accessed on 1 September 2022).

Acknowledgments

We want to thank the Upper Austrian Fire Brigade Headquarters for providing the DJI Mavic 2 Enterprise Advanced for our experiments. Open Access Funding by the Austrian Science Fund (FWF).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In the following, we present the derivation of an integral image’s variance ($\mathrm{Var}[I]$). We applied the statistical model described in [50], where the integral image I is composed of N single image recordings $I_i$, and each single image pixel in $I_i$ is either occlusion-free ($S$) or occluded ($O_i$), as determined by $Z_i$:
$I_i = Z_i O_i + (1 - Z_i) S.$
Similar to [50], all variables are independent and identically distributed, with $Z_i$ following a Bernoulli distribution with success parameter D (i.e., $E[Z_i] = E[Z_i^2] = D$; furthermore, note that $E[Z_i(1 - Z_i)] = 0$). The random variable $S$ follows a distribution whose properties can be described with mean $E[S] = \mu_s$ and $E[S^2] = \mu_s^2 + \sigma_s^2$. Analogously, the occluder variable $O_i$ follows a distribution with $E[O_i] = \mu_o$ and $E[O_i^2] = \mu_o^2 + \sigma_o^2$. We compute the first and second moments of $I_i$ to determine its mean and variance:
$E[I_i] = D \mu_o + (1 - D) \mu_s$
and
$E[I_i^2] = D(\mu_o^2 + \sigma_o^2) + (1 - D)(\mu_s^2 + \sigma_s^2).$
The variance of a single image $I_i$ can then be obtained as:
$\mathrm{Var}[I_i] = E[I_i^2] - (E[I_i])^2$
$\quad = D(\mu_o^2 + \sigma_o^2) + (1 - D)(\mu_s^2 + \sigma_s^2) - \big(D^2\mu_o^2 + (1 - D)^2\mu_s^2 + 2D(1 - D)\mu_o\mu_s\big)$
$\quad = D(1 - D)(\mu_o - \mu_s)^2 + D\sigma_o^2 + (1 - D)\sigma_s^2.$
Similarly, for I we determine the first and second moments where the first moment of I is given by:
$E[I] = E\left[\frac{1}{N}\sum_{i=1}^{N} \big(Z_i O_i + (1 - Z_i) S\big)\right] = D \mu_o + (1 - D) \mu_s$
and the second moment of I is as derived in [50]:
$E[I^2] = \frac{1}{N^2}\Big(N\big(D(\sigma_o^2 + \mu_o^2) + (1 - D)(\sigma_s^2 + \mu_s^2)\big) + N(N - 1)\big(D^2\mu_o^2 + 2D(1 - D)\mu_o\mu_s + (1 - D)^2(\sigma_s^2 + \mu_s^2)\big)\Big).$
Consequently, we calculate the variance of the integral image as:
$\mathrm{Var}[I] = E[I^2] - (E[I])^2$
$\quad = \frac{1}{N}\big(D(\sigma_o^2 + \mu_o^2) + (1 - D)(\sigma_s^2 + \mu_s^2)\big) + \big(D^2\mu_o^2 + 2D(1 - D)\mu_o\mu_s + (1 - D)^2(\sigma_s^2 + \mu_s^2)\big) - \frac{1}{N}\big(D^2\mu_o^2 + 2D(1 - D)\mu_o\mu_s + (1 - D)^2(\sigma_s^2 + \mu_s^2)\big) - \big(D^2\mu_o^2 + (1 - D)^2\mu_s^2 + 2D(1 - D)\mu_o\mu_s\big)$
$\quad = \frac{1}{N}\big(D(\sigma_o^2 + \mu_o^2) + (1 - D)(\sigma_s^2 + \mu_s^2)\big) + (1 - D)^2\sigma_s^2 - \frac{1}{N}\big(D^2\mu_o^2 + 2D(1 - D)\mu_o\mu_s + (1 - D)^2(\sigma_s^2 + \mu_s^2)\big)$
$\quad = \frac{1}{N}\big(D(1 - D)(\mu_o - \mu_s)^2 + D\sigma_o^2 + (1 - D)\sigma_s^2\big) + (1 - D)^2\left(1 - \frac{1}{N}\right)\sigma_s^2.$
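As a numerical sanity check of this result, the following short Python sketch compares the derived expression for $\mathrm{Var}[I]$ with a Monte-Carlo simulation of the statistical model above; the chosen parameter values are arbitrary examples, not values from our experiments.

import numpy as np

rng = np.random.default_rng(0)
D, mu_o, sig_o, mu_s, sig_s, N = 0.6, 5.0, 2.0, 10.0, 1.0, 8
trials = 200_000

Z = rng.binomial(1, D, size=(trials, N))             # occlusion indicator Z_i
O = rng.normal(mu_o, sig_o, size=(trials, N))        # occluder signal O_i
S = rng.normal(mu_s, sig_s, size=(trials, 1))        # occlusion-free signal S (same for all N)
I = np.mean(Z * O + (1 - Z) * S, axis=1)             # one integral-image pixel per trial

analytic = (D * (1 - D) * (mu_o - mu_s) ** 2 + D * sig_o ** 2 + (1 - D) * sig_s ** 2) / N \
           + (1 - D) ** 2 * (1 - 1 / N) * sig_s ** 2
print(np.var(I), analytic)                           # the two values should closely agree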

References

  1. Ryle, M.; Vonberg, D.D. Solar radiation on 175 Mc./s. Nature 1946, 158, 339–340. [Google Scholar] [CrossRef]
  2. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  3. May, C.A. Pulsed Doppler Radar Methods and Apparatus. U.S. Patent No. 3,196,436, 20 July 1965. [Google Scholar]
  4. Willey, C.A. Synthetic aperture radars: A paradigm for technology evolution. IRE Trans. Military Electron. 1985, 21, 440–443. [Google Scholar]
5. Farquharson, G.; Woods, W.; Stringham, C.; Sankarambadi, N.; Riggi, L. The Capella synthetic aperture radar constellation. In Proceedings of EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany, 4–7 June 2018; VDE. [Google Scholar]
  6. Chen, F.; Lasaponara, R.; Masini, N. An overview of satellite synthetic aperture radar remote sensing in archaeology: From site detection to monitoring. J. Cult. Herit. 2017, 23, 5–11. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Lin, H.; Wang, M.; Liu, X.; Chen, Q.; Wang, C.; Zhang, H. A Review of Satellite Synthetic Aperture Radar Interferometry Applications in Permafrost Regions: Current Status, Challenges, and Trends. IEEE Geosci. Remote Sens. Mag. 2022, 1, 2–23. [Google Scholar] [CrossRef]
8. Ranjan, A.K.; Parida, B.R. Predicting paddy yield at spatial scale using optical and Synthetic Aperture Radar (SAR) based satellite data in conjunction with field-based Crop Cutting Experiment (CCE) data. Int. J. Remote Sens. 2021, 42, 2046–2071. [Google Scholar]
9. Reigber, A.; Scheiber, R.; Jager, M.; Prats-Iraola, P.; Hajnsek, I.; Jagdhuber, T.; Papathanassiou, K.P.; Nannini, M.; Aguilera, E.; Baumgartner, S.; et al. Very-high-resolution airborne synthetic aperture radar imaging: Signal processing and applications. Proc. IEEE 2021, 101, 759–783. [Google Scholar] [CrossRef]
10. Sumantyo, J.T.S.; Chua, M.Y.; Santosa, C.E.; Panggabean, G.F.; Watanabe, T.; Setiadi, B.; Sumantyo, F.D.S.; Tsushima, K.; Sasmita, K.; Mardiyanto, A.; et al. Airborne circularly polarized synthetic aperture radar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1676–1692. [Google Scholar] [CrossRef]
  11. Tsunoda, S.I.; Pace, F.; Stence, J.; Woodring, M.; Hensley, W.H.; Doerry, A.W.; Walker, B.C. Lynx: A high-resolution synthetic aperture radar. In Proceedings of the 2000 IEEE Aerospace Conference. Proceedings (Cat. No.00TH8484), Big Sky, MT, USA, 25 March 2000; Volume 5, pp. 51–58. [Google Scholar]
12. Fernández, M.G.; López, Y.Á.; Arboleya, A.A.; Valdés, B.G.; Vaqueiro, Y.R.; Andrés, F.L.H.; García, A.P. Synthetic aperture radar imaging system for landmine detection using a ground penetrating radar on board an unmanned aerial vehicle. IEEE Access 2018, 6, 45100–45112. [Google Scholar] [CrossRef]
  13. Deguchi, T.; Sugiyama, T.; Kishimoto, M. Development of SAR system installable on a drone. In Proceedings of the EUSAR 2021, 13th European Conference on Synthetic Aperture Radar, VDE, Online, 2 July 2021. [Google Scholar]
  14. Mondini, A.C.; Guzzetti, F.; Chang, K.T.; Monserrat, O.; Martha, T.R.; Manconi, A. Landslide failures detection and mapping using Synthetic Aperture Radar: Past, present and future. Earth-Sci. Rev. 2021, 216, 103574. [Google Scholar] [CrossRef]
  15. Rosen, P.A.; Hensley, S.; Joughin, I.R.; Li, F.K.; Madsen, S.N.; Rodriguez, E.; Goldstein, R.M. Synthetic aperture radar interferometry. Proc. IEEE 2000, 88, 333–382. [Google Scholar] [CrossRef]
  16. Prickett, M.J.; Chen, C.C. Principles of inverse synthetic aperture radar/ISAR/imaging. In Proceedings of the EASCON’80, Electronics and Aerospace Systems Conference, Arlington, VA, USA, 29 September–1 October 1980. [Google Scholar]
  17. Vehmas, R.; Neuberger, N. Inverse Synthetic Aperture Radar Imaging: A Historical Perspective and State-of-the-Art Survey. IEEE Access 2021, 9, 113917–113943. [Google Scholar] [CrossRef]
  18. Özdemir, C. Inverse Synthetic Aperture Radar Imaging with MATLAB® Algorithms; Wiley-Interscience: Hoboken, NJ, USA, 2021. [Google Scholar] [CrossRef]
  19. Marino, A.; Sanjuan-Ferrer, M.J.; Hajnsek, I.; Ouchi, K. Ship Detection with Spectral Analysis of Synthetic Aperture Radar: A Comparison of New and Well-Known Algorithms. Remote Sens. 2015, 7, 5416–5439. [Google Scholar] [CrossRef]
  20. Wang, Y.; Chen, X. 3-D Interferometric Inverse Synthetic Aperture Radar Imaging of Ship Target With Complex Motion. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3693–3708. [Google Scholar] [CrossRef]
  21. Xu, G.; Zhang, B.; Chen, J.; Wu, F.; Sheng, J.; Hong, W. Sparse Inverse Synthetic Aperture Radar Imaging Using Structured Low-Rank Method. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
22. Berizzi, F.; Corsini, G. Autofocusing of inverse synthetic aperture radar images using contrast optimization. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1185–1191. [Google Scholar] [CrossRef]
  23. Bai, X.; Zhou, F.; Xing, M.; Bao, Z. Scaling the 3-D Image of Spinning Space Debris via Bistatic Inverse Synthetic Aperture Radar. IEEE Geosci. Remote Sens. Lett. 2010, 7, 430–434. [Google Scholar] [CrossRef]
  24. Anger, S.; Jirousek, M.; Dill, S.; Peichl, M. Research on advanced space surveillance using the IoSiS radar system. In Proceedings of the EUSAR 2021, 13th European Conference on Synthetic Aperture Radar, Online, 2 July 2021. [Google Scholar]
  25. Vossiek, M.; Urban, A.; Max, S.; Gulden, P. Inverse Synthetic Aperture Secondary Radar Concept for Precise Wireless Positioning. IEEE Trans. Microw. Theory Tech. 2007, 55, 2447–2453. [Google Scholar] [CrossRef]
26. Jeng, S.L.; Chieng, W.H.; Lu, H.P. Estimating speed using a side-looking single-radar vehicle detector. IEEE Trans. Intell. Transp. Syst. 2013, 15, 607–614. [Google Scholar] [CrossRef]
  27. Ye, X.; Zhang, F.; Yang, Y.; Zhu, D.; Pan, S. Photonics-Based High-Resolution 3D Inverse Synthetic Aperture Radar Imaging. IEEE Access 2019, 7, 79503–79509. [Google Scholar] [CrossRef]
  28. Pandey, N.; Ram, S.S. Classification of automotive targets using inverse synthetic aperture radar images. IEEE Trans. Intell. Veh. 2022. Available online: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=dHBHt38AAAAJ&citation_for_view=dHBHt38AAAAJ:zYLM7Y9cAGgC (accessed on 20 July 2022).
29. Levanda, R.; Leshem, A. Synthetic aperture radio telescopes. IEEE Signal Process. Mag. 2009, 27, 14–29. [Google Scholar] [CrossRef]
  30. Dravins, D.; Lagadec, T.; Nuñez, P.D. Optical aperture synthesis with electronically connected telescopes. Nat. Commun. 2015, 6, 1–5. [Google Scholar] [CrossRef] [PubMed]
  31. Ralston, T.S.; Marks, D.L.; Carney, P.S.; Boppart, S.A. Interferometric synthetic aperture microscopy. Nat. Phys. 2007, 3, 129–134. [Google Scholar] [CrossRef]
  32. Edgar, R. Introduction to Synthetic Aperture Sonar. Sonar Syst. 2011. [Google Scholar] [CrossRef]
  33. Hayes, M.P.; Gough, P.T. Synthetic Aperture Sonar: A Review of Current Status. IEEE J. Ocean. Eng. 2009, 34, 207–224. [Google Scholar] [CrossRef]
  34. Hansen, R.E.; Callow, H.J.; Sabo, T.O.; Synnes, S.A.V. Challenges in Seafloor Imaging and Mapping With Synthetic Aperture Sonar. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3677–3687. [Google Scholar] [CrossRef]
  35. Bülow, H.; Birk, A. Synthetic Aperture Sonar (SAS) without Navigation: Scan Registration as Basis for Near Field Synthetic Imaging in 2D. Sensors 2020, 20, 4440. [Google Scholar] [CrossRef]
  36. Jensen, J.A.; Nikolov, S.I.; Gammelmark, K.L.; Pedersen, M.H. Synthetic aperture ultrasound imaging. Ultrasonics 2006, 44, e5–e15. [Google Scholar] [CrossRef]
  37. Zhang, H.K.; Cheng, A.; Bottenus, N.; Guo, X.; Trahey, G.E.; Boctor, E.M. Synthetic tracked aperture ultrasound imaging: Design, simulation, and experimental evaluation. J. Med. Imaging 2016, 3, 027001. [Google Scholar] [CrossRef]
  38. Barber, Z.W.; Dahl, J.R. Synthetic aperture ladar imaging demonstrations and information at very low return levels. Appl. Opt. 2014, 53, 5531–5537. [Google Scholar] [CrossRef]
39. Terroux, M.; Bergeron, A.; Turbide, S.; Marchese, L. Synthetic aperture lidar as a future tool for earth observation. Proc. SPIE 2017, 10563, 105633V. [Google Scholar] [CrossRef]
40. Vaish, V.; Wilburn, B.; Joshi, N.; Levoy, M. Using plane + parallax for calibrating dense camera arrays. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), Washington, DC, USA, 27 June–2 July 2004; Volume 1. [Google Scholar]
  41. Vaish, V.; Levoy, M.; Szeliski, R.; Zitnick, C.L.; Kang, S.B. Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2. [Google Scholar]
  42. Zhang, H.; Jin, X.; Dai, Q. Synthetic Aperture Based on Plenoptic Camera for Seeing Through Occlusions. In Pacific Rim Conference on Multimedia; Springer: Cham, Switzerland, 2018; pp. 158–167. [Google Scholar] [CrossRef]
  43. Yang, T.; Ma, W.; Wang, S.; Li, J.; Yu, J.; Zhang, Y. Kinect based real-time synthetic aperture imaging through occlusion. Multimed. Tools Appl. 2015, 75, 6925–6943. [Google Scholar] [CrossRef]
  44. Joshi, N.; Avidan, S.; Matusik, W.; Kriegman, D.J. Synthetic aperture tracking: Tracking through occlusions. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007. [Google Scholar]
  45. Pei, Z.; Li, Y.; Ma, M.; Li, J.; Leng, C.; Zhang, X.; Zhang, Y. Occluded-Object 3D Reconstruction Using Camera Array Synthetic Aperture Imaging. Sensors 2019, 19, 607. [Google Scholar] [CrossRef] [PubMed]
  46. Yang, T.; Zhang, Y.; Yu, J.; Li, J.; Ma, W.; Tong, X.; Yu, R.; Ran, L. All-In-Focus Synthetic Aperture Imaging. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 1–15. [Google Scholar] [CrossRef]
  47. Pei, Z.; Zhang, Y.; Chen, X.; Yang, Y.-H. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognit. 2013, 46, 174–187. [Google Scholar] [CrossRef]
  48. Kurmi, I.; Schedl, D.C.; Bimber, O. Airborne Optical Sectioning. J. Imaging 2018, 4, 102. [Google Scholar] [CrossRef]
49. Bimber, O.; Kurmi, I.; Schedl, D.C. Synthetic aperture imaging with drones. IEEE Comput. Graph. Appl. 2019, 39, 8–15. [Google Scholar] [CrossRef] [PubMed]
  50. Kurmi, I.; Schedl, D.C.; Bimber, O. A statistical view on synthetic aperture imaging for occlusion removal. IEEE Sens. J. 2019, 19, 9374–9383. [Google Scholar] [CrossRef]
  51. Kurmi, I.; Schedl, D.C.; Bimber, O. Thermal Airborne Optical Sectioning. Remote Sens. 2019, 11, 1668. [Google Scholar] [CrossRef]
  52. Schedl, D.C.; Kurmi, I.; Bimber, O. Airborne Optical Sectioning for Nesting Observation. Sci. Rep. 2020, 10, 1–7. [Google Scholar] [CrossRef]
  53. Kurmi, I.; Schedl, D.C.; Bimber, O. Fast Automatic Visibility Optimization for Thermal Synthetic Aperture Visualization. IEEE Geosci. Remote Sens. Lett. 2021, 18, 836–840. [Google Scholar] [CrossRef]
54. Kurmi, I.; Schedl, D.C.; Bimber, O. Pose error reduction for focus enhancement in thermal synthetic aperture visualization. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  55. Schedl, D.C.; Kurmi, I.; Bimber, O. Search and rescue with airborne optical sectioning. Nat. Mach. Intell. 2020, 2, 783–790. [Google Scholar] [CrossRef]
  56. Kurmi, I.; Schedl, D.C.; Bimber, O. Combined person classification with airborne optical sectioning. Sci. Rep. 2022, 12, 1–11. [Google Scholar] [CrossRef]
  57. Schedl, D.C.; Kurmi, I.; Bimber, O. An autonomous drone for search and rescue in forests using airborne optical sectioning. Sci. Robot. 2021, 6, eabg1188. [Google Scholar] [CrossRef] [PubMed]
  58. Ortner, R.; Kurmi, I.; Bimber, O. Acceleration-Aware Path Planning with Waypoints. Drones 2021, 5, 143. [Google Scholar] [CrossRef]
  59. Nathan, R.J.A.A.; Kurmi, I.; Schedl, D.C.; Bimber, O. Through-Foliage Tracking with Airborne Optical Sectioning. J. Remote Sens. 2022, 2022, 1–10. [Google Scholar] [CrossRef]
60. Seits, F.; Kurmi, I.; Nathan, R.J.A.A.; Ortner, R.; Bimber, O. On the Role of Field of View for Occlusion Removal with Airborne Optical Sectioning. arXiv 2022, arXiv:2204.13371. [Google Scholar]
  61. Bracewell, R.N. Two-Dimensional Imaging; Prentice-Hall: Englewood Cliffs, NJ, USA, 1995. [Google Scholar]
  62. Lim, J.S. Two-Dimensional Signal and Image Processing; Prentice-Hall: Englewood Cliffs, NJ, USA, 1990. [Google Scholar]
  63. Kak, A.C.; Slaney, M.; Wang, G. Principles of Computerized Tomographic Imaging. Am. Assoc. Phys. Med. 2002, 29, 107. [Google Scholar] [CrossRef]
64. Firestone, L.; Cook, K.; Culp, K.; Talsania, N.; Preston, K., Jr. Comparison of autofocus methods for automated microscopy. Cytometry 1991, 12, 195–206. [Google Scholar] [CrossRef]
  65. Pertuz, S.; Puig, D.; Garcia, M.A. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2012, 46, 1415–1432. [Google Scholar] [CrossRef]
  66. Jones, D.R.; Perttunen, C.D.; Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 1993, 79, 157–181. [Google Scholar] [CrossRef]
  67. Johnson, S.G. The NLopt Nonlinear-Optimization Package. Available online: http://github.com/stevengj/nlopt (accessed on 20 July 2022).
  68. KaewTraKulPong, P.; Bowden, R. An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection. In Video-Based Surveillance Systems; Springer: Boston, MA, USA, 2002; pp. 135–144. [Google Scholar] [CrossRef]
69. Stauffer, C.; Grimson, W.E.L. Adaptive background mixture models for real-time tracking. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), Fort Collins, CO, USA, 23–25 June 1999; Volume 2. [Google Scholar]
  70. Soille, P. Morphological Image Analysis: Principles and Applications; Springer: Berlin, Germany, 1999; Volume 2. [Google Scholar]
  71. Dougherty, E.R.; Lotufo, R.A. Hands-on Morphological Image Processing; SPIE Press: Washington, DC, USA, 2003. [Google Scholar] [CrossRef]
  72. Dillencourt, M.B.; Samet, H.; Tamminen, M. A general approach to connected-component labeling for arbitrary image representations. J. ACM 1992, 39, 253–280. [Google Scholar] [CrossRef]
  73. Shapiro, L.G.; Stockman, G.C. Computer Vision; Prentice Hall: Englewood Cliffs, NJ, USA, 2001; Volume 3. [Google Scholar]
Figure 1. Inverse Airborne Optical Sectioning (IAOS) principle: IAOS relies on the motion of targets being sensed by a static airborne optical sensor (e.g., a drone (c) hovering above forest (b)) over time (a) to computationally reconstruct an occlusion-free integral image I (d). Essential for an efficient reconstruction is the correct estimation of the target’s motion (direction θ, and speed s). By filtering the Radon transform of I, the signal of occluders can be suppressed further (e). Thermal images are shown in (a,d,e).
Figure 2. Radon transform filtering: to suppress directional blur artifacts of large occluders integrated in direction θ (a), the Radon transform (Rf) of the integral image (b) is filtered with a function F that zeros out θ, +/− an uncertainty range which takes the local motion of the occluders themselves into account (c). The inverse Radon transform (Rf⁻¹) of this filtered sinogram suppresses the directional blur artifacts of the occluders (d). Note: remaining directional artifacts in orthogonal directions are caused by under-sampling (i.e., the number of images being integrated). They fluctuate too much to be suppressed in the same manner. In the example above, θ = 118° with +/− 15° (image coordinate system: clockwise, +y-axis = 0°).
Figure 3. Manual motion estimation (vertical): sequence of single thermal images (a–j) with walking persons indicated (yellow box), distance covered by person during capturing time (j), computed integral image (k), and Radon transform filtered integral image (l). Target indicated by yellow arrow. Different forest types: single thermal image example (m,p), integral images (n,q), and Radon transform filtered integral images (o,r).
Figure 4. Manual motion estimation (horizontal): Walking person behind dense bushes. RGB image of drone (a). Single thermal images with person position indicated with yellow box (b–k). Distance covered by person during capturing time (k). Integral image (l) and close-up (m) where the shape of the person can be recognized.
Figure 5. Automatic motion estimation (vertical): Two examples of tracking a moving hidden person within dense forest ((a,d) RGB images of drone) in either single thermal images (b,e) or IAOS integral images. Note: the tracking results of the integral images were projected back to a single thermal image for better spatial reference (c,f). Motion paths are indicated by yellow lines. While tracking in single images leads to many false positive detections, tracking in integral images results in clear track paths of a single target. See Supplementary Videos S2 and S3 for dynamic examples of these results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
