Article

Mosaicing Technology for Airborne Wide Field-of-View Infrared Image

Lei Dong, Fangjian Liu, Mingchao Han and Hongjian You
1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application Systems, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
4 System Engineering Research Institute, Academy of Military Sciences, Beijing 100071, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8977; https://doi.org/10.3390/app13158977
Submission received: 22 May 2023 / Revised: 19 July 2023 / Accepted: 25 July 2023 / Published: 4 August 2023
(This article belongs to the Collection Space Applications)

Abstract

Multi-detector parallel scanning is derived from the traditional airborne panoramic camera and offers a very large lateral field of view. During flight, a wide field-of-view camera can acquire regional remote sensing imagery in whisk-broom mode. Adjacent images are designed to overlap according to the flight path, and a regional image is then generated by image processing. However, interference factors during flight make this processing complex and difficult, because the overlap between the acquired images varies continuously. Based on an analysis of the imaging geometry of a wide field-of-view scanning camera, this paper establishes a rigorous geometric geopositioning model. An infrared image mosaicing technique tailored to the features of regional images is then proposed: the SIFT (Scale-Invariant Feature Transform) operator extracts the two best-matching point pairs in the overlap region of adjacent images; coarse registration is achieved with a translation, rotation, and scale model of geometric image transformation; and local fine stitching is then performed using a normalized cross-correlation matching strategy. A regional mosaic experiment on aerial multi-detector parallel scanning infrared images verifies the feasibility and efficiency of the proposed algorithm.

1. Introduction

Aerial remote sensing has the advantages of flexibility and high resolution, and it has long attracted researchers' attention. Its technical development preceded that of space remote sensing: most advances in remote sensing technology were first tested and verified on aerial platforms before being transferred to space. At present, digital imaging is the main mode in aerial remote sensing, and digital aerial cameras have moved from the experimental stage into practical use; by around 2010, they had replaced analog film cameras. Aerial digital cameras can be divided into two types according to their imaging method, linear-array cameras and area-array cameras, with representative systems including the ADS (Airborne Digital Sensor), DMC (Digital Mapping Camera), and UltraCam (Ultra Digital Camera). With the deepening application of remote sensing and the progress of imaging technology, new imaging modes have entered commercial use in recent years, including multi-detector parallel scanners, step-and-stare framing cameras, and multi-angle oblique cameras. Among them, multi-detector parallel scanning cameras are characterized by wide-coverage mapping.
Multi-detector parallel scanning originates from the panoramic camera of the traditional film era, which has a large lateral field of view. With the application of CCD (Charge-Coupled Device) technology, the multi-detector parallel scanning camera places a linear CCD parallel to the flight direction on the focal plane of the objective lens in place of the film slit, and the objective lens scans perpendicular to the flight direction to form an image. Because of the wide swing of the objective lens, it can image a wide area on both sides of the flight route, producing a wide-field aerial scanning image. The scanning field of view can be designed to be very large, and wide images can be acquired with only a single linear detector, which greatly improves acquisition efficiency; such cameras are therefore often used for fast acquisition of infrared/night-vision imagery. However, because the focal length is fixed while the object distance increases with the scanning angle, the image scale gradually shrinks toward both sides, producing so-called panoramic geometric distortion across the whole image. In addition, the forward motion of the aircraft during scanning and the nonlinear swing of the scanning mirror make the geometric distortion of the image even more complicated.
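The panoramic distortion can be quantified to first order with a flat-terrain approximation (an illustration not given in the original text; H denotes flying height and p the detector pixel pitch, neither of which is specified in the paper). The slant range at scanning angle θ grows as H/cos θ, and the line of sight meets the ground obliquely by a further factor of 1/cos θ, so the across-track ground sample distance grows as:

```latex
% First-order across-track ground sample distance of a panoramic scanner
% over flat terrain (assumed model, not from the paper):
%   H = flying height, f = focal length, p = detector pixel pitch
\mathrm{GSD}_{\mathrm{across}}(\theta) \;\approx\; \frac{p}{f}\cdot\frac{H}{\cos^{2}\theta}
```

At θ = 60°, for example, 1/cos²θ = 4, so a ground pixel is roughly four times larger than at nadir, which is why the image scale shrinks toward the swath edges.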
To acquire images of a given area, the multi-detector parallel scanning camera sweeps the objective lens left and right during flight, the adjacent images are designed to overlap according to the flight path, and the regional image is then obtained by image processing. However, because of the many interference factors during flight, the overlap between the acquired images varies continuously, which increases the complexity and difficulty of regional image processing.
At present, there are few reports on the automatic mosaicing of wide-field aerial scanning images; most research instead addresses the mosaicing of CCD area-array images acquired by UAVs (unmanned aerial vehicles). Wang Q. et al. used the SIFT algorithm to detect candidate feature points, performed rough feature matching by combining the longitude-latitude coordinates of the UAV images with the positional relationship of the overlapping areas, solved the projective transformation matrix with the random sample consensus algorithm to stitch pairs of adjacent farmland remote sensing images, and designed a pyramid strategy to stitch 128 high-resolution area-array images [1]. Wang H.J. et al. proposed an improved SPHP (Shape-Preserving Half-Projective) algorithm to address the deformation and ghosting that geomorphic factors easily introduce into stitched UAV remote sensing images [2]. Lu Z. introduced a feature point matching algorithm into a new fast stitching system for UAV remote sensing images, addressing the long matching times and slow stitching speeds of existing systems [3]. Wang H. et al. proposed a quality-evaluation-based method to find the optimal seam in UAV aerial image stitching [4]. Li Y.F. et al. proposed a new fast stitching algorithm for UAV aerial images [5]. Zhou J. et al. proposed a seamless stitching method for UAV remote sensing images that combines SIFT feature points with overlapping-transition Poisson fusion [6]. Luo X. et al. proposed an automatic registration algorithm for UAV remote sensing images based on deep residual features [7]. Wang Y. et al. proposed a real-time stitching algorithm for UAV aerial images based on genetic algorithm optimization [8]. Yuan Y.T. et al. proposed a new superpixel-based seam-cutting method for UAV image stitching [9].
All these stitching methods extract homonymous (same-name) points in the overlapping areas by matching algorithms, calculate the spatial transformation between adjacent area-array images, and complete the stitching on the basis of that transformation. An area-array camera acquires all pixels in its field of view at the same instant, so the transformation between adjacent images follows the geometry of area-array imaging: a perspective projection model is adopted, and four or more control points must be extracted in the overlapping area for its calculation. A wide-field aerial scanning image, in contrast, is obtained by scanning with a linear CCD; that is, each image is formed line by line in a time sequence, which does not satisfy the perspective projection model of an area-array camera, and the image deforms locally during scanning. Therefore, not only the overall transformation between adjacent scanned images but also the local uneven deformation introduced during scanning must be considered.
In view of the acquisition characteristics of wide-field aerial scanning images and the low accuracy of platform attitude and position measurement, this paper presents a regional mosaicing method for such images. The overall processing flow is shown in Figure 1. First, each image is geometrically corrected according to the imaging geometry model of wide-field aerial scanning. On this basis, homonymous points in the overlapping areas of adjacent images are extracted by the SIFT operator, and a translation, scale, and rotation model is used for rough stitching. Then, normalized cross-correlation matching is applied in the local overlapping areas to achieve fine stitching.
The main contributions of this paper are the following:
  • A "scale + rotation" transformation model is proposed for calculating the geometric transformation between two adjacent images;
  • Rough geometric stitching between adjacent images is realized through whole-image scale and rotation transformation by introducing scale and rotation parameters between adjacent images;
  • The local geometric non-uniform deformation between adjacent images is resolved with an effective method and good results.

2. Materials and Methods

2.1. Geometric Conformation Model of Wide Field Infrared Image

The imaging geometry of multi-detector parallel scanning is shown in Figure 2; each image is obtained by an exposure slit (corresponding to the linear CCD) scanning across the flight direction. The geometry of each scanning line is equivalent to that of an area-array CCD camera tilted across the flight direction by the scanning angle θ and imaging with its center line (y = 0). From the collinearity equation model of photogrammetry, the collinearity equations applicable to multi-detector parallel scanning images can be obtained [10,11]:
$$x' = \frac{x}{\cos\theta} = -f\,\frac{a_1(X_P-X_S)+b_1(Y_P-Y_S)+c_1(Z_P-Z_S)}{a_3(X_P-X_S)+b_3(Y_P-Y_S)+c_3(Z_P-Z_S)},\qquad y' = f\tan\theta = -f\,\frac{a_2(X_P-X_S)+b_2(Y_P-Y_S)+c_2(Z_P-Z_S)}{a_3(X_P-X_S)+b_3(Y_P-Y_S)+c_3(Z_P-Z_S)} \tag{1}$$
where f is the focal length and θ is the scanning angle, which can be calculated from the scanning-line coordinate $y_c$ relative to the nadir line and the focal length f:
$$\theta = \arctan\!\left(\frac{y_c}{f}\right)$$
Here, $a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3$ are the direction cosines formed by the attitude parameters of the scanning line; the three attitude angles are obtained by interpolating the inertial navigation measurements. $(X_S, Y_S, Z_S)$ are the three-dimensional coordinates of the antenna center corresponding to the scanning line, obtained from the onboard GPS measurements by interpolation. $(X_P, Y_P, Z_P)$ are the ground coordinates corresponding to the pixel.
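As an illustrative sketch (not the authors' code; the rotation order and axis convention are assumptions and must be adapted to the actual INS angle definitions), the direction cosines $a_1 \ldots c_3$ can be assembled from the three interpolated attitude angles as a rotation matrix:

```python
import numpy as np

def direction_cosines(roll, pitch, yaw):
    """Assemble the direction-cosine matrix [[a1, b1, c1],
    [a2, b2, c2], [a3, b3, c3]] from three attitude angles (radians).
    The Z-Y-X composition order used here is an assumed convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx
```

Whatever convention is chosen, a valid direction-cosine matrix is orthonormal with determinant 1, which is a useful sanity check when porting the INS definition.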
To improve correction efficiency, each infrared image is corrected block by block: the image is divided into several sub-regions, and each sub-region is corrected separately. The rigorous collinearity model is used to calculate the coordinates of the four corners of each image block, which give the geographic coordinates of the corrected image. For the pixels inside each block, a perspective transformation model determined by the four corner coordinates is used to calculate the ground coordinates corresponding to the image, as shown in Figure 3. The perspective transformation model is as follows:
$$i = \frac{a_1 x + a_2 y + a_3}{c_1 x + c_2 y + 1},\qquad j = \frac{b_1 x + b_2 y + b_3}{c_1 x + c_2 y + 1} \tag{2}$$
where (i, j) are the coordinates in the original image, (x, y) are the coordinates in the corrected image, and the coefficients a1, a2, a3, b1, b2, b3, c1, and c2 are model parameters that can be solved from the coordinates of the four corners.
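The per-block correction can be sketched as follows (a minimal illustration, not the authors' implementation; function names are ours): the eight coefficients of the perspective model above are solved from the four corner correspondences via one small linear system, and the resulting transform is then reused for every pixel in the block.

```python
import numpy as np

def solve_perspective(src, dst):
    """Solve the 8 coefficients a1..a3, b1..b3, c1, c2 of
    i = (a1*x + a2*y + a3) / (c1*x + c2*y + 1)
    j = (b1*x + b2*y + b3) / (c1*x + c2*y + 1)
    from four corner correspondences src[(x, y)] -> dst[(i, j)]."""
    A, rhs = [], []
    for (x, y), (i, j) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -i * x, -i * y]); rhs.append(i)
        A.append([0, 0, 0, x, y, 1, -j * x, -j * y]); rhs.append(j)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def apply_perspective(p, x, y):
    """Map corrected-image coordinates (x, y) to original-image (i, j)."""
    a1, a2, a3, b1, b2, b3, c1, c2 = p
    w = c1 * x + c2 * y + 1.0
    return (a1 * x + a2 * y + a3) / w, (b1 * x + b2 * y + b3) / w
```

Solving one 8 × 8 system per block and reusing it for all interior pixels is what makes block correction much faster than evaluating the rigorous collinearity model pixel by pixel.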

2.2. Rough Mosaic of Adjacent Frames Based on SIFT

Because the accuracy of attitude and position measurement on aviation platforms is limited, geometric correction based on attitude and position alone is not highly accurate, and geometric mosaicing must be carried out according to the overlapping area between adjacent images. Each wide-field aerial scan takes a finite time, during which platform motion causes both systematic and local deformation of the image. To eliminate the overall and local deformation effectively, both the whole image and the local deformation must be processed according to the imagery in the overlapping area. The whole-image transformation between adjacent images is realized by extracting homonymous points with SIFT and calculating translation, scale, and rotation parameters.
The SIFT feature was proposed by Lowe in 2004 and has been applied successfully in computer vision [12]. In remote sensing image processing, scholars have carried out much improvement research exploiting its excellent properties. SIFT effectively extracts local image features as extreme points in scale space and is strongly robust to changes in brightness, translation, rotation, and scale. Feature descriptors are computed from the image neighborhood around each feature point so that the descriptors can be matched. The main steps of SIFT are: construction of the scale space from downsampled images, detection of extreme points in scale space, accurate localization of feature points, assignment of orientation parameters to feature points, and generation of feature point descriptors.
Using the approximate geometric overlap between two adjacent images, we clip an image block from the previous image and the corresponding block from the next image, extract SIFT feature points from each, and then select the best homonymous point pair according to the ratio between the nearest-neighbor and second-nearest-neighbor distances together with a bidirectional matching strategy. The basic principle is shown in Figure 4, and Figure 5 shows a pair of best-matched homonymous points extracted from the overlapping areas of two adjacent images.
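The selection strategy can be sketched independently of any particular SIFT implementation. Given two descriptor arrays (toy 2-D vectors below; real SIFT descriptors are 128-dimensional), a candidate is kept only if its nearest neighbor is sufficiently better than its second-nearest and the match survives a bidirectional (cross-check) test. The 0.8 ratio threshold follows Lowe [12]; the function name is ours.

```python
import numpy as np

def best_mutual_matches(desc_a, desc_b, ratio=0.8):
    """Ratio test + bidirectional cross-check over two descriptor
    arrays of shape (n_a, d) and (n_b, d); returns (ia, ib) pairs."""
    # Full pairwise Euclidean distance matrix (fine for small n).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for ia in range(d.shape[0]):
        order = np.argsort(d[ia])
        if len(order) < 2:
            continue
        nearest, second = order[0], order[1]
        if d[ia, nearest] < ratio * d[ia, second]:      # ratio test a -> b
            if np.argmin(d[:, nearest]) == ia:          # cross-check b -> a
                matches.append((ia, int(nearest)))
    return matches
```

In the method described above, only the single best pair on each side of the overlap is retained as a control point for the whole-image transformation.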
One best-matched point pair is extracted on each of the left and right sides of the image, and these two points are taken as control points. Since the main geometric deformations between two adjacent images are translation, scale, and rotation, the geometric transformation parameters between the two images can be calculated from the two extracted point pairs. The calculation formula is as follows:
$$\begin{bmatrix} k_1 \\ k_2 \\ d_x \\ d_y \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 \\ y_1 & -x_1 & 0 & 1 \\ x_2 & y_2 & 1 & 0 \\ y_2 & -x_2 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} u_1 \\ v_1 \\ u_2 \\ v_2 \end{bmatrix} \tag{3}$$
where $k_1$, $k_2$, $d_x$, and $d_y$ are the four parameters to be calculated; $(x_1, y_1)$ and $(x_2, y_2)$ are the image coordinates of homonymous points P1 and P2 in the current image, and $(u_1, v_1)$ and $(u_2, v_2)$ are their image coordinates in the previous image.
An image consistent in position with the previous one can then be obtained by resampling the current image with the calculated scale, translation, and rotation parameters. The coordinate transformation of the current image is calculated as follows:
$$u = k_1 x + k_2 y + d_x,\qquad v = -k_2 x + k_1 y + d_y \tag{4}$$
where $k_1$, $k_2$, $d_x$, and $d_y$ are the four transformation parameters, $(x, y)$ are the original image coordinates of the current image, and $(u, v)$ are the transformed image coordinates of the current image.
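Formulas (3) and (4) together amount to a four-parameter similarity transform determined exactly by two point pairs. A minimal sketch (function names are ours; the sign convention matches Formula (4)):

```python
import numpy as np

def solve_similarity(p1_cur, p2_cur, p1_prev, p2_prev):
    """Solve k1, k2, dx, dy from two homonymous point pairs, so that
    u = k1*x + k2*y + dx and v = -k2*x + k1*y + dy."""
    (x1, y1), (x2, y2) = p1_cur, p2_cur
    (u1, v1), (u2, v2) = p1_prev, p2_prev
    A = np.array([[x1,  y1, 1, 0],
                  [y1, -x1, 0, 1],
                  [x2,  y2, 1, 0],
                  [y2, -x2, 0, 1]], float)
    return np.linalg.solve(A, np.array([u1, v1, u2, v2], float))

def transform(params, x, y):
    """Map current-image coordinates (x, y) into the previous image."""
    k1, k2, dx, dy = params
    return k1 * x + k2 * y + dx, -k2 * x + k1 * y + dy
```

With k1 = s·cos φ and k2 = s·sin φ, the pair (k1, k2) encodes the scale s and rotation φ jointly, which is why two point pairs suffice where a perspective model would need four.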

2.3. Fine Splicing of Local Matching

Tian, J.Y. et al. proposed a fine stitching algorithm based on Wallis smoothing and distance-weight enhancement [13]. Wu, Y.Q. et al. proposed a remote sensing image matching method based on the Non-Subsampled Contourlet Transform (NSCT) and Speeded-Up Robust Features (SURF) [14]. Schifano, L. et al. proposed a new method for accurately stitching airborne images acquired by multi-sensor imagers [15]. Gao, R. et al. proposed an adaptive multi-view, real-time image mosaic method combining grayscale and feature information [16]. Brown, M. et al. proposed an automatic panoramic image mosaic algorithm using invariant features [17]. Lin, C.C. et al. proposed an adaptive, as-natural-as-possible image stitching algorithm [18]. Bang, S. et al. proposed a high-resolution automatic image stitching algorithm for construction sites based on UAV images [19]. Holtkamp, D.J. et al. described a method for registering and mosaicking multicamera images [20]. Xu, Q. et al. introduced a UAV image stitching algorithm based on mesh-guided deformation and ground constraints, achieving precise registration and an ideal stitching effect [21]. Xiong, J.B. et al. proposed a spatially-varying warping method that uses multiple local transformation matrices to reduce image alignment error and projection distortion and to improve the perspective of the non-overlapping regions of the stitched image [22].
Because the time interval between two adjacent images is short, the scale and rotation relationship between them is essentially constant, and the overall geometric deformation is largely eliminated after geometric correction. A normalized cross-correlation matching algorithm can therefore be used to extract a large number of homonymous points in the overlapping area. The normalized cross-correlation coefficient is a standardized covariance that measures the similarity between two functions: a coefficient of 1 means the two functions are identical; otherwise, the coefficient is less than 1. For discrete infrared images, it is calculated as follows:
$$C(u,v) = \frac{\sum_{x,y}\big[T(x,y)-\mu_T\big]\,\big[I(x-u,\,y-v)-\mu_I(u,v)\big]}{\sqrt{\sum_{x,y}\big[T(x,y)-\mu_T\big]^2\,\sum_{x,y}\big[I(x-u,\,y-v)-\mu_I(u,v)\big]^2}} \tag{5}$$
where T is the template image, I is the image to be registered, $\mu_T$ is the mean value of the template, and $\mu_I(u,v)$ is the mean value of the image window to be registered. The template chip is matched across the reference image, and the position with the largest normalized correlation coefficient is taken as the matching point.
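A direct, unoptimized sketch of the normalized cross-correlation above and of the search for its maximum (in practice, the search is restricted to a small window predicted by the coarse stitching; function names are ours):

```python
import numpy as np

def ncc(template, image, u, v):
    """Normalized cross-correlation of a template against the window
    of `image` whose top-left corner is at (u, v)."""
    h, w = template.shape
    win = image[u:u + h, v:v + w].astype(float)
    t = template.astype(float)
    t0, w0 = t - t.mean(), win - win.mean()
    denom = np.sqrt((t0 ** 2).sum() * (w0 ** 2).sum())
    return float((t0 * w0).sum() / denom) if denom > 0 else 0.0

def match(template, image):
    """Exhaustive search for the offset with the largest coefficient."""
    h, w = template.shape
    H, W = image.shape
    scores = [(ncc(template, image, u, v), (u, v))
              for u in range(H - h + 1) for v in range(W - w + 1)]
    return max(scores)[1]
```

Because both the template and the window are mean-subtracted and variance-normalized, the score is insensitive to local brightness and contrast differences, which is useful between successive infrared scans.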
According to the coordinates of pairs of adjacent matched points, the coordinates of the current image are adjusted along the scanning direction (column direction); the column-direction adjustment formula is as follows:
$$C_{new} = \frac{Y_k - Y_{k-1}}{y_k - y_{k-1}}\,\big(C_{old} - y_{k-1}\big) + Y_{k-1} \tag{6}$$
where $Y_k$ and $Y_{k-1}$ are the column coordinates of the k-th and (k − 1)-th matched points in the previous image, $y_k$ and $y_{k-1}$ are their column coordinates in the current image, $C_{old}$ is an original column coordinate of the current frame, $C_{new}$ is the adjusted column coordinate, and k is the serial number of the matched point.
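The column adjustment above is a piecewise-linear remapping between successive matched points; as a one-function sketch (the name is ours):

```python
def adjust_column(c_old, y_prev, y_cur, Y_prev, Y_cur):
    """Linearly remap a column coordinate c_old of the current frame
    from the interval [y_{k-1}, y_k] (columns of two matched points in
    the current image) onto [Y_{k-1}, Y_k] (their columns in the
    previous image)."""
    return (Y_cur - Y_prev) / (y_cur - y_prev) * (c_old - y_prev) + Y_prev
```

Applying this between every consecutive pair of matched points locally stretches or compresses the scan, which is exactly the local non-uniform deformation the fine-stitching step is meant to remove.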
After the best-matching positions are obtained, the adjacent images are locally resampled along the scanning direction, yielding adjacent images with more accurate positions. Finally, the geometric stitching of the two images is completed according to their geometric positions.
Figure 6 shows two adjacent images with a certain overlap, and Figure 7 shows the result of stitching after manual alignment to an obvious ground object in the overlapping area: when the road on the right side is aligned, the ground objects on the left side are misaligned, and the overall stitching effect is poor. Figure 8 shows the seamless mosaic of the adjacent images obtained by local fine matching with normalized cross-correlation and resampling; the geometric mosaic of the overlapping area is satisfactory, with no obvious dislocation.

3. Results and Discussion

To verify the effectiveness of the processing method, 21 aerial infrared images were used in regional stitching experiments. A fast processing program was developed in VC on a high-performance server, comprising four main modules: geometric correction of each image based on the rigorous model, extraction of homonymous points between adjacent images, coarse stitching of adjacent images based on SIFT, and fine stitching by local matching. The specific experimental steps are as follows.
The first step was the geometric correction of each image based on the rigorous model. This module exploits the rigorous geometry of multi-detector parallel scanning imaging: each image is obtained by an exposure slit (corresponding to the linear-array CCD) scanning across the flight direction, and each scanning line is geometrically equivalent to an area-array CCD camera tilted across the flight direction by the scanning angle θ and imaging with its center line (y = 0). From the collinearity equation model of photogrammetry, Formula (1), applicable to multi-detector parallel scanning images, is obtained. To speed up the geometric correction based on this model, each infrared image was corrected block by block: the image was divided into several sub-areas, each corrected separately. The rigorous collinearity model was used to calculate the coordinates of the four corners of each block to obtain the geographic coordinates of the corrected image, and for the pixels within each block, the perspective transformation model of Formula (2) was used to calculate their coordinates from the four corner points.
The second step was to extract homonymous points between adjacent images. This module uses the approximate geometric overlap between two adjacent images to extract corresponding image blocks from the previous and next images. SIFT feature points are then extracted separately, and the best homonymous point pair is selected based on the ratio between the nearest-neighbor and second-nearest-neighbor distances together with a bidirectional comparison strategy.
The third step was coarse stitching of adjacent images based on SIFT. Homonymous points extracted by SIFT were used to calculate the translation, scale, and rotation parameters: one optimal homonymous point pair was extracted on each of the left and right sides of the image, and these two points were used as control points. Since the main geometric deformations between adjacent images are translation, scale, and rotation, the required parameters were obtained with Formula (3), and the current image was then resampled with Formula (4) using the calculated translation, scale, and rotation parameters, yielding an image essentially consistent in position with the previous one.
The fourth step was fine stitching by local matching. This module uses the normalized cross-correlation matching algorithm to obtain the coordinates of homonymous feature points in the overlapping area of two adjacent images and adjusts the current image along the scanning direction (column direction). After the best-matching positions are obtained, the adjacent images are locally resampled along the scanning direction to obtain images with more accurate positions, and the geometric stitching of the two images is completed according to their geometric positions.
Finally, loop processing was conducted: the stitched result and the next image to be stitched are processed cyclically according to the four steps above, ultimately yielding the stitched large-area image.
Each original image has a size of 11,500 × 480 pixels at 16 bits per pixel, and the regional geometric mosaic of the 21 images (including geometric correction, matching, and resampling) was completed within 80 s. The final mosaic image is shown in Figure 9 and Figure 10, which also show locally zoomed-in views. As the results in these figures show, there are no obvious dislocations between the geometric textures of roads and fields, the transitions between adjacent ground objects are natural and smooth, and the overall mosaic effect is ideal.

4. Conclusions

The effectiveness of the proposed method was verified by processing aerial remote sensing images, and the wide field-of-view processing speed reached quasi-real-time. The following conclusions can be drawn from the results of this study:
(1) In the geometric transformation between two adjacent images, a "scale + rotation" transformation model was adopted instead of the perspective projection model of an area-array camera. This model suits the geometry between adjacent wide-field aerial scanning images and better solves the whole-image transformation problem between them.
(2) In calculating the whole-image transformation, the homonymous points on both sides of the scanned image were used to compute the scale and rotation parameters between adjacent images, and rough geometric stitching was realized through whole-image scale and rotation transformation.
(3) On the basis of the whole-image transformation, local coordinate transformation and adjustment were carried out along the scanning direction according to the large number of locally matched points obtained by normalized cross-correlation matching, which effectively resolves the local geometric non-uniform deformation between adjacent images.

Author Contributions

Conceptualization, L.D.; Methodology, L.D. and F.L.; Software, L.D.; Validation, F.L. and H.Y.; Formal analysis, L.D.; Resources, H.Y.; Writing—original draft preparation, L.D.; Writing—review & editing, F.L., M.H. and H.Y.; Visualization, F.L.; Supervision, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Q.; Ning, J.F.; Cao, Y.X.; Han, W.T. UAV Remote Sensing Image Mosaic Technology Based on SIFT Algorithm. J. Jilin Univ. 2017, 35, 188–197.
  2. Wang, H.J.; Liu, Y.M.; Yue, Y.J.; Zhao, H. UAV remote sensing image mosaic technology based on improved SPHP algorithm. Comput. Eng. Des. 2020, 41, 6.
  3. Lu, Z. UAV remote sensing image fast mosaic system based on feature point matching. Electron. Des. Eng. 2022, 12, 30.
  4. Wang, H.; Zhou, Y.Y.; Wang, X.Y.; Wang, X.Y. Methods of UAV aerial image mosaic based on quality evaluation. Comput. Appl. Res. 2022, 39, 4.
  5. Li, Y.F.; Ren, J.B. A new fast mosaic algorithm for aerial images of UAV. Comput. Simul. 2022, 5, 39.
  6. Zhou, J.; Xie, K.; Fu, C.; Shi, K. UAV remote sensing image mosaic technology combining SIFT feature points and Poisson fusion. Surv. Mapp. Bull. 2021, 1, 94.
  7. Luo, X.; Lai, G.; Wang, X.; Hou, W. UAV Remote Sensing Image Automatic Registration Based on Deep Residual Features. Remote Sens. 2021, 13, 3605.
  8. Wang, Y.; Qi, M. Real-time mosaic of UAV aerial images based on GA-SIFT algorithm. Surv. Mapp. Bull. 2021, 8, 6.
  9. Yuan, Y.T.; Fang, F.M.; Zhang, G.X. Superpixel-based seamless image stitching for UAV images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1565–1576.
  10. Sun, J.P.; Shu, N.; Guan, Z.Q. Principles, Methods and Applications of Remote Sensing; Surveying and Mapping Publishing House: Beijing, China, 1997.
  11. Schenk, T. Digital photogrammetry. TerraScience 1999, 1, 251–266.
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  13. Tian, J.Y.; Duan, F.Z.; Wang, L.; Li, X.J.; Qu, X.Y. Elimination of UAV Image Seam Based on Wallis Transform and Distance Weight Enhancement. J. Image Graph. 2014, 19, 806–812.
  14. Wu, Y.Q.; Shen, Y.; Tao, F.X. Remote Sensing Image Matching Based on NSCT and SURF. J. Remote Sens. 2014, 18, 624–635.
  15. Schifano, L.; Smeesters, L.; Berghmans, F.; Dewitte, S. Wide-Field-of-View Longwave Camera for the Characterization of the Earth's Outgoing Longwave Radiation. Sensors 2021, 21, 4444.
  16. Gao, R.; Miao, C.; Li, X. Adaptive Multi-View Image Mosaic Method for Conveyor Belt Surface Fault Online Detection. Appl. Sci. 2021, 11, 2564.
  17. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73.
  18. Lin, C.C.; Pankanti, S.U.; Natesan Ramamurthy, K.; Aravkin, A.Y. Adaptive As-Natural-As-Possible Image Stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1155–1163.
  19. Bang, S.; Kim, H.; Kim, H. UAV-based automatic generation of high-resolution panorama at a construction site with a focus on preprocessing for image stitching. Autom. Constr. 2017, 84, 70–80.
  20. Holtkamp, D.J.; Goshtasby, A.A. Precision Registration and Mosaicking of Multicamera Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3446–3455.
  21. Xu, Q.; Chen, J.; Luo, L.B.; Gong, W.P.; Wang, Y. UAV Image Stitching Based on Mesh-Guided Deformation and Ground Constraint. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4465–4475.
  22. Xiong, J.B.; Li, F.; Long, F.; Xu, Y.H.; Wang, S.; Xu, J.; Ling, Q. Spatially-varying Warping for Panoramic Image Stitching. In Proceedings of the 2022 34th Chinese Control and Decision Conference (CCDC), Hefei, China, 15–17 August 2022.
Figure 1. Flow chart of processing technology.
Figure 2. Imaging geometry of wide field-of-view scanning.
Figure 3. Block rectification of infrared image.
Figure 4. Homologous point relationships in two overlap images.
Figure 5. Homologous point in the overlap area.
Figure 6. Two adjacent images.
Figure 7. Mosaic image by manual mode.
Figure 8. Mosaic image by local fine matching.
Figure 9. Mosaic region image.
Figure 10. Zoom in on the local area of the mosaic image.

Dong, L.; Liu, F.; Han, M.; You, H. Mosaicing Technology for Airborne Wide Field-of-View Infrared Image. Appl. Sci. 2023, 13, 8977. https://doi.org/10.3390/app13158977
