Article

Luojia-1 Nightlight Image Registration Based on Sparse Lights

Zhichao Guan, Guo Zhang, Yonghua Jiang, Xin Shen and Zhen Li

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
2 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
3 Institute of Remote Sensing Satellite, China Academy of Space Technology (CAST), Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(10), 2372; https://doi.org/10.3390/rs14102372
Submission received: 19 March 2022 / Revised: 7 May 2022 / Accepted: 12 May 2022 / Published: 14 May 2022
(This article belongs to the Section Urban Remote Sensing)

Abstract

When mosaicking adjacent nightlight images of a large area that lacks human activity, traditional registration methods have difficulty registering tie points because of the lack of structural information. To address this issue, this study devises an easy-to-implement engineering solution that registers sparse light areas with high efficiency while guaranteeing accuracy in non-sparse light areas. The proposed method first extracts the positions of sparsely distributed light points using roundness detection and the centroid method. Then, the geometric positioning forward and backward algorithms and the random sample consensus (RANSAC) algorithm are used to achieve a rough registration of the nightlight images, and the remaining tie points are expanded through an affine model. Experiments show that, compared with traditional registration methods, the proposed method is more reliable and produces tie points with a wider distribution in sparse light areas. Finally, in a registration test of 275 scenes of Luojia-1 nightlight images covering China, the coverage ratio of the tie points increased from 59.3% with the traditional method to 95.3% with the proposed method, and the accuracy of the block adjustment was 0.63 pixels, which verifies the effectiveness of the method. The proposed method provides a basis for the registration, block adjustment, and mosaicking of nightlight images.

1. Introduction

Nightlight remote sensing images support evaluations of human activities [1], urban changes [2], socioeconomics [3], and even the impact of war [4]. Researchers typically select a regional or national area and evaluate the above indicators through various nightlight images, such as the 1 km resolution operational line-scan system (OLS) data of the defense meteorological satellite program (DMSP) or the 750 m resolution day/night band (DNB) data collected by the NASA/NOAA Visible Infrared Imaging Radiometer Suite (VIIRS) [5]. With the launch of Luojia-1, the resolution of nightlight images has improved greatly, to 129 m, allowing for more refined indicator evaluations [6,7,8,9]. High-resolution nighttime images have also been acquired by the International Space Station (ISS), EROS-B, and JL1-3B [10], and more satellites (including CubeSats and nanosatellites) will engage in nighttime Earth observation in the future. However, the increase in resolution has reduced image swath widths, and the evaluation of a region or country now requires the mosaicking of multiple nightlight images. Hence, the registration of the tie points of these nightlight images has become a prominent problem.
The pixels of nightlight images have high values in lighted areas, yet their values are almost zero in non-lighted ones (regardless of how richly textured they are during the day). Further, when influenced by background noise, some pixels appear to be lights. These issues limit many traditional registration methods. For areas lacking human activities, the lack of this structural information (which is composed of large areas of urban lights and road lights) creates difficulty for traditional registration methods to be able to register adjacent images in order to obtain the tie points of nightlight images.
Currently, the most commonly used registration methods in the engineering application field include grayscale-based registration and feature-based registration [11]. Grayscale-based registration (also called template registration) mainly compares the similarity of the grayscale of two images and it has a high degree of dependence on this grayscale information. Direct grayscale registration is very sensitive to lighting changes such as those which arise from changes to light angles and shadows; thus, grayscale normalized cross-correlation (NCC) is generally used as the registration criterion in order to enhance the robustness of this method under grayscale changes [12]. For nightlight images such as those that are produced by Luojia-1, grayscale registration usually fails because of the noise and changes in the brightness of the images that are captured in different periods. In addition, grayscale registration requires textural information. For places with few lights, the images mainly have a black background, which has a greater impact on the selection of grayscale templates and the registration’s accuracy.
Feature-based registration extracts feature descriptors in the affine invariant region, calculates their feature vectors, and then calculates the Euclidean distance between the feature vectors in order to determine whether two feature points are homonymous. Representative feature descriptors are the scale-invariant feature transform (SIFT), SURF, KAZE, ORB, and BRISK [13]. Among these, feature registration based on the SIFT operator is the most representative. The SIFT operator has brightness, scale, and rotation invariance; it is therefore not sensitive to the light intensity of the nightlight image and is also stable under noise [14,15,16]. SURF is based on SIFT and is reputed to be faster. In actual registration, more tie points can be obtained for nightlight images in areas with more texture, such as cities. However, for villages and towns in Western China with fewer inhabitants, as well as other sparsely illuminated areas, it is difficult to extract these lights as feature points in the feature point extraction stage because they appear as bright spots occupying only a few pixels in the image. Therefore, feature-based registration cannot be applied to all areas of nightlight remote sensing images.
The Luojia-1 nightlight remote sensing satellite has a sub-satellite point resolution of 129 m and the absolute geolocation accuracy of its images is approximately 650 m (1σ) [6]. This accuracy is sufficient for DMSP/OLS and VIIRS/DNB data, for which the resulting dislocation between adjacent images is less than one pixel. However, it corresponds to a significant dislocation of five pixels when mosaicking Luojia-1 images. To further improve the positioning accuracy of nightlight images in China and finally produce complete nightlight images of China through image mosaicking, it is necessary to apply a block adjustment to all of the images in the region. The acquisition of the tie points between adjacent images during block adjustment is particularly critical.
Considering that the sparse light spots of villages and towns in nightlight images are usually isolated, their presence in the image is similar to that of a star point. This study devises an easy-to-implement engineering solution that allows for the registration of sparse light areas with high efficiency while guaranteeing accuracy in non-sparse light areas. Based on the idea of star point extraction [17], the coordinates of the sparse light points can be obtained through the use of a roundness threshold and the centroid method. The method can then take advantage of the existing geometric information of the image and use the geometric positioning forward and backward algorithms and the random sample consensus (RANSAC) algorithm in order to achieve the preliminary registration of the tie points [18]. Finally, through analysis, the affine model is used to realize the deletion and expansion of the tie points.

2. Materials and Methods

This section first introduces the method that was used to extract sparse lights from nightlight images [19], after which the registration method of Luojia-1 nightlight images is analyzed from the perspective of the geometric positioning model and the image transformation model. Finally, the specific process of the sparse light registration algorithm is elaborated upon. Table 1 shows the parameters of the Luojia-1 nightlight remote sensing satellite that was used to provide the data for the following analysis.

2.1. Sparse Light Extraction Method

2.1.1. Connected Domain Segmentation

The nightlight images of Luojia-1 are single-channel optical images. The digital number (DN) value of the image is theoretically zero in areas with no human activity. Generally, the greater the DN value, the greater the intensity of human activities. Therefore, the human activity areas in the image can be segmented from the black background using background threshold segmentation. Assuming that the gray value of the image containing the light points is expressed as $g(x,y)$ and $t$ is the background threshold, the image binarization is as follows:
$$f(x,y) = \begin{cases} 1, & g(x,y) \ge t \\ 0, & g(x,y) < t \end{cases} \tag{1}$$
That is, if the pixel value is at least the background threshold $t$, the binary value is set to one; otherwise, it is set to zero. Pixels of $f(x,y)$ with a value of one that are adjacent to one another are marked as one connected domain. In this manner, $k$ connected domains $\Omega_i,\ i \in \{1, 2, \dots, k\}$ are obtained. By construction, the connected domains are pairwise disjoint:
$$\Omega_i \cap \Omega_j = \emptyset, \quad i \ne j, \quad i, j \in \{1, 2, \dots, k\} \tag{2}$$
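Such segmentation and labeling map directly onto standard array operations. The following is a minimal sketch in Python, assuming 8-connectivity for the connected domains (the paper does not specify the neighborhood) and using scipy.ndimage for the labeling:

```python
import numpy as np
from scipy import ndimage

def segment_light_domains(image: np.ndarray, t: float):
    """Binarize a nightlight image (Equation (1)) and label connected domains.

    Returns a label image (0 = background, 1..k = domains) and the count k.
    8-connectivity is an assumption; the paper only requires adjacency.
    """
    binary = (image >= t).astype(np.uint8)       # Equation (1)
    structure = np.ones((3, 3), dtype=int)       # 8-connected neighborhood
    labels, k = ndimage.label(binary, structure=structure)
    return labels, k
```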

2.1.2. Roundness Detection

In order to improve the accuracy of the coordinate extraction of sparse lights, this research filtered the registration points by calculating the roundness of the sparse lights and selecting the lights with higher roundness. The calculation equation for the roundness of the connected domain is as follows:
$$P = \frac{4\pi S}{L^2} \tag{3}$$
where $P$ represents the roundness, $S$ represents the area of the connected domain, and $L$ represents the perimeter of the connected domain. From the relationship between the area and perimeter of a circle, the closer $P$ is to one, the rounder the sparse light.
In the process of selecting the light points to be registered, the area $S(\Omega_i)$ is obtained by counting the pixels of the connected domain and the perimeter $L(\Omega_i)$ by counting the pixels in the outer layer of the connected domain. Then, the appropriate connected domains are selected according to area and roundness. The light points, after filtering, are expressed as follows:
$$\Omega = \Omega_i \quad \text{if} \quad S_{min} < S(\Omega_i) \le S_{max} \ \text{and} \ P(\Omega_i) > e \tag{4}$$
where $S_{min}$ and $S_{max}$ represent the minimum and maximum numbers of pixels and $e$ represents the roundness threshold, whose value range is (0, 1). In the specific selection strategy for sparse light points, $e$ is appropriately reduced in areas with few urban buildings in order to extract more points, whereas $e$ is increased in areas with more urban buildings in order to improve the positional accuracy of the light point extraction.
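Under these definitions of $S$ (pixel count) and $L$ (outer-layer pixel count), the filter of Equations (3) and (4) can be sketched as follows. The default thresholds are the experimental settings from Section 3 ($S_{min} = 2 \times 2$, $S_{max} = 20 \times 20$, $e = 0.3$); extracting the outer layer by binary erosion is an implementation assumption:

```python
import numpy as np
from scipy import ndimage

def filter_round_lights(labels: np.ndarray, k: int,
                        s_min: int = 4, s_max: int = 400, e: float = 0.3):
    """Keep connected domains whose area and roundness satisfy Equation (4)."""
    eroded = ndimage.binary_erosion(labels > 0)  # interior pixels of all domains
    kept = []
    for i in range(1, k + 1):
        domain = labels == i
        s = int(domain.sum())                    # area S: number of pixels
        if not (s_min < s <= s_max):
            continue
        boundary = domain & ~eroded              # outer-layer pixels of domain i
        l = int(boundary.sum())                  # perimeter L
        p = 4.0 * np.pi * s / (l * l)            # roundness P, Equation (3)
        if p > e:
            kept.append(i)
    return kept
```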

2.1.3. Centroid Extraction

Once the light points have been filtered, the gray-square weighted centroid method can be used to obtain the sub-pixel coordinates $(\bar{x}, \bar{y})$ of the center of each light point. The calculation equation is as follows:
$$\bar{x} = \frac{\sum_{(x,y)\in\Omega} x \cdot g(x,y)^2}{\sum_{(x,y)\in\Omega} g(x,y)^2}, \qquad \bar{y} = \frac{\sum_{(x,y)\in\Omega} y \cdot g(x,y)^2}{\sum_{(x,y)\in\Omega} g(x,y)^2} \tag{5}$$
It can be seen from the above equation that the higher the roundness of the light point, the higher the accuracy of the calculated center point.
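A minimal sketch of the gray-square weighted centroid of Equation (5), reusing the label image from the segmentation sketch above:

```python
import numpy as np

def weighted_centroid(image: np.ndarray, labels: np.ndarray, i: int):
    """Sub-pixel center of light point i by the gray-square weighted centroid."""
    ys, xs = np.nonzero(labels == i)             # pixel coordinates of domain i
    w = image[ys, xs].astype(np.float64) ** 2    # weights g(x, y)^2
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```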

2.2. Sparse Light Registration Method

The following describes the methods for automatically detecting and selecting homonymous light points:

2.2.1. Geometric Positioning Model Forward and Backward Algorithms

The nightlight image that is used for registration is usually a sensor calibration product [20]. Owing to the different satellite positions and attitudes at the time of imaging, there are errors in translation, scale, and rotation between the adjacent images that need to be registered. If a rigorous geometric model is already available for the image, the geometric positioning forward and backward algorithms can be used to preliminarily solve the problem [21]. Generally, the process of obtaining the object point coordinates from the image point coordinates is called the forward algorithm and the process of obtaining the image point coordinates from the object point coordinates is called the backward algorithm [22]. As Figure 1 shows, assuming that there are two images to be registered, named left and right, the forward algorithm, combined with global digital elevation model (DEM) data, is used to project the left image point $p$ to the ground and obtain the coordinates of $P$. Then, the backward algorithm is used to back-project $P$ onto the right image and obtain the coordinates of $p'$.
According to satellite imaging parameters, the classic rigorous geometric model of optical remote sensing satellites can be constructed as follows [23,24]:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} + m \cdot R \cdot \begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} \tag{6}$$
where
$$R = R(\varphi)R(\omega)R(\kappa) = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \tag{7}$$
In the above equation, $[X\ Y\ Z]^T$ represents the coordinates of the ground point in the World Geodetic System 1984 (WGS84) coordinate system, $[X_S\ Y_S\ Z_S]^T$ represents the satellite's orbital position in the WGS84 coordinate system, $m$ represents the scale factor, and $R$ represents the attitude of the satellite body coordinate system relative to the WGS84 coordinate system, where $R$ can be expressed as a combination of the satellite's pitch angle $\varphi$, roll angle $\omega$, and yaw angle $\kappa$. The variable $(x, y)$ represents the coordinates of the corresponding image point in the image coordinate system, $(x_0, y_0)$ represents the coordinates of the principal point of the camera, and $f$ represents the focal length of the camera. For the geometric forward algorithm, the orbit and attitude parameters at the imaging time can be obtained according to the orbit and attitude interpolation model [25]. Combining the known principal point and principal distance of the camera, as well as the Earth ellipsoid parameters and global DEM, the object coordinates corresponding to any point on the image can be calculated according to the following ellipsoid equation [26]:
$$\frac{X^2 + Y^2}{(m_a + h)^2} + \frac{Z^2}{(m_i + h)^2} = 1 \tag{8}$$
where $m_a$ and $m_i$ are the semi-major and semi-minor axes of the Earth ellipsoid in the WGS84 coordinate system and $h$ represents the elevation of the object point. The solution of the object point coordinates is an iterative process. The above is the forward algorithm of the geometric positioning model. For the backward algorithm, Equation (6) can be inverted and expressed as follows [27]:
$$\begin{cases} x - x_0 = -f \cdot \dfrac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \\[6pt] y - y_0 = -f \cdot \dfrac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)} \end{cases} \tag{9}$$
That is, according to the object coordinates of the point, the orbit parameters at the imaging time, the attitude parameters, and the orientation elements in the camera, the corresponding image point coordinates can be obtained. All of the points in the left image are reprojected onto the right image through the use of the forward and backward algorithms. In this way, most of the translation, rotation, and scale errors between the homonymous points on the left and right images are eliminated.
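Once the attitude matrix $R$ and satellite position are known, the backward algorithm of Equation (9) reduces to a few dot products. A minimal sketch, assuming the object point and satellite position are given as WGS84 Cartesian coordinates and the camera quantities are in consistent units (the orbit/attitude interpolation that produces these inputs is outside the sketch):

```python
import numpy as np

def backward_project(P, Xs, R, f, x0=0.0, y0=0.0):
    """Backward algorithm (Equation (9)): object point -> image coordinates.

    P, Xs: WGS84 Cartesian coordinates of the ground point and satellite.
    R: 3x3 attitude matrix of Equation (7); f, x0, y0: focal length and
    principal point in consistent units.
    """
    d = np.asarray(P, float) - np.asarray(Xs, float)
    num_x = R[0, 0] * d[0] + R[1, 0] * d[1] + R[2, 0] * d[2]   # a1, b1, c1 terms
    num_y = R[0, 1] * d[0] + R[1, 1] * d[1] + R[2, 1] * d[2]   # a2, b2, c2 terms
    den = R[0, 2] * d[0] + R[1, 2] * d[1] + R[2, 2] * d[2]     # a3, b3, c3 terms
    return x0 - f * num_x / den, y0 - f * num_y / den
```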

2.2.2. Image Transformation Model Analysis

The parameters that are introduced into the image geometric positioning model have measurement errors and the DEM data have elevation errors. Therefore, after the forward and backward algorithms of the geometric positioning model are applied, there is an offset between the homonymous points. If the transformation model between the homonymous points in the left and right images can be established, registration between the homonymous points can be realized. Because the orientation accuracy of the interior parameters of Luojia-1 is better than 0.3 pixels after geometric correction [6], the translation model can be used with a larger threshold in order to achieve a rapid preliminary alignment between the homonymous points. In the subsequent process of deleting misregistered points, the overall constraints of the affine model can be used with a smaller threshold so as to eliminate the misregistered points. The parameter errors of the satellite and the elevation error are discussed in Appendix A; this analysis guides the selection of the transformation model between the two sets of homonymous points.

2.2.3. Sparse Light Registration Algorithm

Combined with the selection of the image model transformation, the complete registration process is as shown in Figure 2. The specifics are as follows:
  • Extract the sparse lights whose roundness is greater than a certain threshold on the left and right adjacent images. Judging by the number of points that are extracted, appropriately increase the roundness threshold if the image is of an urban area; conversely, appropriately decrease it if the image is of a sparsely lit area.
  • Use the forward algorithm to calculate the latitude and longitude of every sparse light in the left image and then use the backward algorithm to calculate the position of each of these sparse lights in the right image. Set a certain threshold and delete the light points that fall outside the range of the right image. Then, apply the geometric positioning forward and backward algorithms in the reverse direction and delete the right-image lights that fall outside the range of the left image.
  • If the number of remaining lights in the left and right images is zero, reduce the roundness threshold e and extract the lights again. If the number of light points is one or two, the corresponding registration point pair can be found directly through the geometric positioning model and a positioning error threshold. Use the isolated point principle to determine whether the two points are uniquely paired. The isolated point principle is as follows: take any point on the left image; through the geometric positioning forward and backward algorithms, only one point can be found within a certain distance threshold on the right image. Project that right-image point back onto the left image; if only one point is again found within the distance threshold, this pair of points is called an isolated point pair.
  • If the number of remaining light points in the left and right images is greater than three, use the RANSAC algorithm and the translation model to achieve registration through voting [18,28], as follows (a minimal sketch of this voting step is given after the list).
    • Randomly select a point on the left image and project it onto the right image through the forward and backward algorithms. Calculate the offset between this projected point and each light point on the right image. Collect all of the offsets over all points and count the votes for each offset. Following the RANSAC idea, the offset with the most votes is the translation parameter of the right image relative to the left image.
    • If the translation parameter is unique, use the translation model to translate each light point of the left image onto the right image and then search for a right-image light point within a larger threshold range. If the isolated point principle is satisfied, the right-image point is a homonymous point. Because of the sparse distribution of light points, when the left and right images have a certain number of light points, the translation value is usually unique and takes the highest number of votes according to the RANSAC algorithm.
    • If the translation value is not unique, then adopt the method that is described in Step 3.
  • After judging all of the points, establish the affine transformation model of the registration point pair and calculate the residual error of each point. Starting with the largest residual, delete the larger error values in succession and recalculate the affine model transformation parameters until the residuals of all of the points meet a certain threshold.
  • The isolated point principle will inevitably lead to the omission of certain light points. Based on the registered point pairs, establish an affine model again and calculate the position of the unregistered light point in the left and right images. If it is less than a certain threshold, the point can be expanded.
  • Determine the number of registered homonymous points. If it is greater than one, the tie points of the two images have been registered successfully. If it is zero, reduce the roundness threshold e and restart the process from the first step. The registration has failed if the threshold e falls below 0.1.
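As referenced in Step 4, the translation is estimated by voting over the offsets between projected left-image points and candidate right-image points. A minimal sketch, in which `projector` is a hypothetical callable standing in for the forward/backward geometric positioning chain and offsets are accumulated in 1-pixel bins (an assumed discretization):

```python
import numpy as np
from collections import Counter

def vote_translation(left_pts, right_pts, projector, bin_px: float = 1.0):
    """RANSAC-style voting for the right-image translation (Steps 4a-4b)."""
    votes = Counter()
    for p in left_pts:
        q0 = np.asarray(projector(p), float)     # predicted right-image position
        for q in right_pts:
            d = (np.asarray(q, float) - q0) / bin_px
            votes[tuple(np.round(d).astype(int))] += 1
    (dx, dy), count = votes.most_common(1)[0]    # most frequent offset wins
    return np.array([dx, dy]) * bin_px, count
```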

3. Experiment and Results

This experiment used Luojia-1 nightlight data from 275 scenes covering China. The imaging time range of the data was the second half of 2018. The background noise in a nightlight image generally occupies one pixel and a connected domain composed of more than a certain number of pixels is usually irregular in shape, so the minimum and maximum numbers of pixels were set to $S_{min} = 2 \times 2$ and $S_{max} = 20 \times 20$, respectively. The initial roundness threshold was $e = 0.3$. When fewer than four light points could be extracted from an image, the threshold $e$ was lowered by 0.1 and the extraction was repeated. The registration parameters were the same for all of the images and the tie points were automatically extracted and registered.
The following sections present the results and analyses of the reliability, distribution, comparison, and accuracy of the tie point registrations of the Luojia-1 nightlight images.

3.1. Reliability of Tie Points

Figure 3 shows the registration result of the tie points of two adjacent scenes in Zhengzhou, Henan. A total of 71 sets of tie points were obtained from registering the two images. The tie points were sorted by roundness and the top eight sets, numbered A1–A8 and B1–B8, were selected as shown. By comparing the positions of the centers of the light points, the registration accuracy was found to be very high; moreover, most of the automatically selected light points exist in both images with high roundness, and the position of the center point changes little between the different images.
Figure 4 shows the registration details of the first two pairs of tie points that were selected from the Zhengzhou registration points. The distribution of the light points was extracted according to the established roundness threshold.

3.1.1. Roundness Threshold Limiting Effect

For the light point that is numbered 340 in Figure 4a, the light information at the corresponding position in Figure 4b was weakened and the light point could not be extracted. Similarly, for the light point that is numbered 713 in Figure 4c, no light point was extracted at the corresponding position, as shown in Figure 4d. Therefore, for the registration of nightlight images, owing to the imaging angle, date, noise, etc., the shape of the light points of the left and right images changed. The actual light points that were extracted from the left image do not necessarily appear in the right image when the light points are extracted using the roundness threshold.

3.1.2. Isolated Point Principle Effect

After obtaining their translation parameters by using the RANSAC algorithm, point 1 of Figure 4a and point 799 of Figure 4b could be registered; however, point 514 in the upper-left corner of Figure 4a was relatively close to point 75. When determining the isolated point, it was impossible to determine which point corresponded to point 360 in Figure 4b. These three points were discarded in the initial registration stage in order to avoid affecting the calculation of the translation parameter. However, during subsequent expansion of the registration process, through the affine transformation parameters that were obtained from other registration points, it could be determined that point 514 in Figure 4a and point 360 in Figure 4b were homonymous points. Similarly, the isolation principle between point 2 of Figure 4c and point 366 of Figure 4d was not satisfied; however, after the point was finally expanded by the affine model, point 2 of Figure 4c and point 199 of Figure 4d were registered.
Based on the above situation, it can be seen that, for sparse lights with fewer pixels, the roundness and area size of the lights guarantee the extraction accuracy of the center position of the lights. In addition, the position error of the homonymous points that were extracted from the two images using the centroid method is small. Even if there is an error in the position, it is usually less than one pixel, owing to the small range of the light area itself.

3.2. Distribution of Tie Points

3.2.1. Distribution of Sparsely Populated Areas

As Figure 5 shows, the Tibet area of China is sparsely populated and its lights are weak. Using the proposed method, the tie points that are shown in Figure 5 were automatically extracted and registered. From points 1 through 6, it is evident that the homonymous points are small, not perfectly round, bright spots. Because there are very few lights in the western region, the program automatically reduces the roundness threshold when the light points are insufficient. Therefore, the extracted points are not all circular, but the center coordinate of each homonymous point lies at the center of its light area. The slightly larger light areas numbered 7–9 were not selected by the algorithm as tie points. Obviously, if these larger lights were selected as tie points, there would be no guarantee that the center coordinate would be calculated at the same location in the city, which would cause a greater error in the positional accuracy of the registration points.

3.2.2. Distribution of Densely Populated Areas

As shown in Figure 6 and Figure 7, in densely populated areas, such as the Yangtze River Delta and Beijing–Zhangjiakou, the tie points that were automatically extracted in this study effectively avoided the dense light areas inside the city. At the same time, the algorithm selected various isolated areas around the city with a small light area. The original intention of this algorithm is to extract the tie points that are in areas with less light information. In fact, where there are more small light areas, more effective tie points can be obtained.
Conventional algorithms cannot register nightlight images in the sparsely populated areas of Western China. The algorithm proposed in this paper not only compensates for this limitation, but also ensures the appropriate registration of the densely populated areas of eastern China.

3.3. Registration Method Comparison

3.3.1. Comparison with the SIFT Algorithm

This section compares the proposed method with the SIFT algorithm, which is a representative algorithm that is based on feature registration. The SIFT algorithm sets the window size to 256 pixels, the grid size to five pixels, and the search area size to 30 pixels. In fact, the parameters here only determine the number of light points that can be extracted from an urban area. The lights in sparse light areas cannot be effectively extracted as feature points by the SIFT method (as shown in Figure 8).
Figure 8 shows the registration results of the tie points in Kuche City, Xinjiang, Western China. Figure 8a,b shows the schematic diagrams of the tie points of the two images that were obtained by the SIFT algorithm's registration, and Figure 8c,d shows the schematic diagrams of the tie points of the two images that were obtained by the proposed registration method. Figure 8e,f shows the enlarged displays corresponding to the red frame area of the SIFT algorithm and Figure 8g,h shows the enlarged displays corresponding to the area of the proposed method. Comparing Figure 8a,b with Figure 8c,d, it is evident that, with the proposed method, the registration points are not concentrated in one area but rather are widely distributed across the image. Specific comparisons between Figure 8e,f and Figure 8g,h show that the traditional SIFT algorithm tends to find characteristic points in urban lights and road lights with structural information, while the proposed method is more inclined to use sparse light areas. In fact, for the adjacent areas of Figure 5, several homonymous points were acquired through the proposed method, but no corresponding points were found through the SIFT algorithm. So, for the traditional SIFT algorithm, there are no homonymous points between adjacent images when registering the sparse light areas of Western China.

3.3.2. Comparison with the NCC Algorithm

In this section, the proposed algorithm is compared with the representative algorithm, NCC, which is based on grayscale correlation. Figure 9 shows the results of the obtained tie points based on grayscale correlation registration in the same area.
The accuracy of the points is within one pixel, except for point 38 in Figure 9a,b and point 33 in Figure 9c,d. Therefore, for nightlight images of areas with more lights, such as cities, it is also possible to register tie points using algorithms based on grayscale correlation. However, for sparse light areas, such as those in Figure 9e–h, which are affected by noise and the imaging angle of view, the shape of the lights changes slightly across different periods. Combined with the lack of textural structure, this means that tie points registered by grayscale-correlation algorithms usually do not correspond to the same position of the light point. Therefore, even though the registration of nightlight images in urban areas can be achieved based on grayscale correlation, the NCC method behaves like the SIFT algorithm and still cannot solve the registration of tie points in sparse light areas.

3.4. Accuracy of Tie Points

3.4.1. Tie Point Registration Results in China

The traditional SIFT registration method and the method that was used in this study were applied to register 275 scenes of Luojia-1 nightlight images of China. The final distribution of the tie points is shown in Figure 10a and the mosaic result of nightlight images is shown in Figure 10b.
With the traditional SIFT algorithm, it is evident from the distribution of the red points in Figure 10a that the tie points are mainly distributed in the densely populated eastern region; indeed, in China's vast western region, as well as parts of China's northeastern and southwestern regions, it is impossible to use this method to obtain effective tie points between images. Using the proposed sparse light registration method, not only are the tie points more evenly registered in the eastern areas, but, more importantly, most of the vacancies in the sparse light areas are filled and tie points were obtained for almost all of the areas in China. The proposed algorithm is still unable to obtain tie points in zones that correspond to large uninhabited and desert areas, such as the Hoh Xil region and the Taklimakan Desert in Western China, which have no human-made nightlights; when these areas were photographed by Luojia-1, they were completely black. The statistics for the above calculation process can be found in Table 2.
As shown in Table 2, the proposed registration algorithm is nearly twice as efficient as the traditional SIFT method, even though the latter (SIFT-GPU) is optimized and accelerated by GPU. The number of tie points obtained by SIFT is smaller than that obtained by the proposed method; this may be because the SIFT algorithm thins out dense areas when registering. More importantly, as Figure 10a shows, the distribution of the tie points extracted by the proposed method is more uniform: the points of the SIFT algorithm are concentrated in big cities such as Beijing and Shanghai, whereas the tie points extracted in this study are scattered across different regions.
The number of images covered by tie points is more important when performing the mosaicking of the images. The traditional algorithm obtained tie points for only 163 images, whereas the proposed algorithm covered 262 images with tie points. The coverage ratio of the tie points thus increased from 59.3% to 95.3%, a very large increase, and the proposed method increased the degree of connection between the images, making the subsequent overall block adjustment results of the nightlight images in China more reasonable.

3.4.2. Block Adjustment Accuracy of Free Network

This study also evaluated the block adjustment accuracy of 275 images for the entire China area. Using the global 1 km grid DEM to assist the block adjustment, the results that are shown in Figure 10 and Figure 11 were obtained.
The detailed mosaic effect of the adjacent images is shown in Figure 11a,c; two adjacent urban area images have a certain offset due to a positioning error before block adjustment. By using the proposed sparse light method, the two images can be better stitched, as shown in Figure 11b,d. The overlay effect is also given for before and after the block adjustment in the sparse light area. As shown in Figure 12, for two images, img1 and img2, the red contour denotes the edge of img1’s light (Figure 12a). Before block adjustment, img2’s light was not in the red contour (Figure 12b). After block adjustment, img2 could be better overlaid with the red contour (Figure 12c).
The statistics for the block adjustment accuracy of the whole China region are shown in Table 3. As shown, using the traditional SIFT-GPU algorithm, the plane error was 0.556 pixels after the block adjustment and the maximum error was 6.235 pixels. Using the proposed algorithm, the plane error was 0.632 pixels after the block adjustment and the maximum error was 7.936 pixels. The accuracy of the proposed algorithm was slightly worse than that of the SIFT algorithm, but the area and number of images involved in the block adjustment of the proposed method were much greater than those involved in the SIFT algorithm; many of the available images did not participate in the block adjustment of the SIFT algorithm. Hence, the decrease in accuracy is reasonable. Comparing the root mean square (RMS) errors in the x and y directions, the RMS error of the SIFT algorithm was 0.429 pixels in the x direction and 0.354 pixels in the y direction, so the error was not uniform. The RMS error of the proposed algorithm was 0.448 in the x direction and 0.445 in the y direction, which are equivalent. By combining the tie points that were extracted by the proposed method and the SIFT method, the block adjustment accuracy was slightly improved compared to the results of the proposed method alone and the plane accuracy was approximately 0.6 pixels.

4. Discussion

According to the above experimental results, the tie points extracted by the sparse light method are usually based on isolated light information, so they are unique. Assume that, in a sparsely populated area with sparse light, there are two light points in the overlapping part of two nightlight images and that the two light points are far apart; they are then clearly distinguishable and, using the proposed method, the correspondence between them is easy to find. Additionally, because the area of an isolated light point is usually small, the light points were selected through a roundness threshold and their center positions in the two images were extracted by the centroid method; such a pair of homonymous points has a small positioning error in the actual registration relationship. A larger light area usually contains obvious roads or other textural information; in such areas the proposed algorithm rarely extracts points, instead selecting nearby sparse lights with high roundness. Using the proposed algorithm to register tie points, tie points that are more reasonably distributed around big cities can now be obtained.

5. Conclusions

This paper proposes a method of using sparse lights to obtain the tie points of nightlight images. Through the use of a roundness threshold and centroid extraction, the light points satisfying the conditions were selected, after which the geometric positioning forward and backward algorithms were used in order to initially eliminate the position error between the points to be registered. Subsequently, the preliminary registration of the light points was realized according to the random sample consensus (RANSAC) algorithm and the translation model. Based on the affine model, the points with larger errors were deleted and the tie points were expanded. The experimental results show that the proposed method achieves a high registration accuracy between homonymous points. The larger extraction range of the light points solves the problem of the difficulty of obtaining tie points in sparsely lit areas. For densely lit areas, smaller light points around the area can also be used to register the tie points. Compared with the traditional registration algorithm, the proposed method has good accuracy and improved distribution performance for the registration of nightlight image tie points. Through a registration test of 275 images of China from Luojia-1, the coverage ratio of the tie points increased from 59.3% to 95.3%. The proposed registration algorithm is thus applicable to the nightlight images of various areas.

Author Contributions

Conceptualization, Z.G. and Y.J.; methodology, Z.G.; software, Z.G.; validation, Z.G., Y.J. and Z.L.; formal analysis, Z.G. and Y.J.; investigation, Z.G.; resources, G.Z.; data curation, G.Z.; writing—original draft preparation, Z.G.; writing—review and editing, Z.G., Y.J. and X.S.; visualization, Z.L.; supervision, G.Z.; project administration, G.Z.; funding acquisition, Y.J. and X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [grant numbers 41971412, 42171341].

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: http://59.175.109.173:8888/ (accessed on 18 March 2022).

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

(a) Influence of Satellite Parameter Errors on Image Pixel Offset
The parameter errors of the satellite involve orbit and attitude measurement errors. Based on the analysis of the satellite’s geometric positioning accuracy [29], the image pixel offset that is caused by the orbit error is expressed as follows:
$$\begin{cases} \Delta x = T_x / r \\ \Delta y = T_y / r \end{cases} \tag{A1}$$
where $\Delta x$ and $\Delta y$ represent the pixel errors in the row and column directions of the image, respectively; $T_x$ and $T_y$ represent the orbit errors along and across the orbital direction, respectively; and $r = 129$ m/pixel is the resolution of the sensor at the sub-satellite point. As shown in Table 1 of the main part of this study, if the orbit error is less than 10 m, its effect on the image is less than 0.08 pixels.
For the attitude error, consider the roll angle error in Figure A1 as an example. OP is the true imaging ray, OP′ is the imaging ray perturbed by the roll angle error, $f$ is the camera's focal length (unit: mm), and $\psi$ is the half field of view of the camera. Assuming that $\Delta\omega$ is the roll angle error, the column direction offset $\Delta y$ of the image pixel caused by the roll angle error can be calculated as follows:
$$\Delta y = \frac{f}{\cos\psi \cdot \lambda_{ccd}} \cdot \Delta\omega \tag{A2}$$
where $\lambda_{ccd}$ represents the pixel size of the camera (unit: μm/pixel). Similarly, for the image pixel offset $\Delta x$ in the row direction, which is caused by the pitch angle error, the mechanism of the geometric positioning error is similar to that of the roll angle. Assuming that the pitch angle error is $\Delta\varphi$, the offset $\Delta x$ can be calculated as follows:
$$\Delta x = \frac{f}{\cos\psi \cdot \lambda_{ccd}} \cdot \Delta\varphi \tag{A3}$$
Figure A1. Influence of roll angle error.
Then, for the homonymous points, the offset caused by the roll and pitch angle errors is related to the magnitudes of the roll angle error $\Delta\omega$ and pitch angle error $\Delta\varphi$, but not to the magnitudes of the roll and pitch angles $\omega, \varphi$ themselves. According to Table 1, the focal length is $f = 55$ mm and the pixel size is $\lambda_{ccd} = 11$ μm/pixel. Assuming maximum pitch and roll angle errors of $\Delta\varphi = \Delta\omega = 180''$ (corresponding to the 0.05° attitude accuracy in Table 1), the influence at the central field of view ($\psi = 0$) is 4.36 pixels and at the edge pixels ($\psi = 32.3°/2$) is 4.54 pixels, a difference of only 0.18 pixels. Therefore, in the preliminary registration, the inconsistency between the center of the field of view and the edge offset can be ignored. At this point, the image pixel offset is determined by the roll and pitch angle errors and can be regarded as a translation error.
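The 4.36 and 4.54 pixel values above follow directly from Equation (A2) and the Table 1 parameters; a short numeric check:

```python
import numpy as np

# Worked check of Equation (A2): f = 55 mm, pixel size 11 um (Table 1),
# attitude error 180 arcsec (the 0.05 deg accuracy of Table 1).
f_um, pix_um = 55_000.0, 11.0
d_omega = np.deg2rad(180.0 / 3600.0)             # 180 arcsec in radians
for psi_deg in (0.0, 32.3 / 2):                  # center and edge of the FOV
    dy = f_um / (np.cos(np.deg2rad(psi_deg)) * pix_um) * d_omega
    print(f"psi = {psi_deg:5.2f} deg -> offset = {dy:.2f} px")
# -> 4.36 px at the center, 4.54 px at the edge (difference 0.18 px)
```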
The influence of the yaw angle’s error on the geometric positioning is equivalent to that of the rotation of the camera array. According to the geometric relationship, the row direction image point offset Δ x and the column direction image point offset Δ y can also be obtained as
$$\begin{cases} \Delta x = (\cos\Delta\kappa - 1)\cdot(x - x_0) - \sin\Delta\kappa\cdot(y - y_0) \\ \Delta y = \sin\Delta\kappa\cdot(x - x_0) + (\cos\Delta\kappa - 1)\cdot(y - y_0) \end{cases} \tag{A4}$$
where $\Delta\kappa$ is the yaw angle error. From the equation, the error caused by the yaw angle is related to the distance between the image point and the principal point. When the yaw angle error is $\Delta\kappa = 180''$, the error at the image point $(x, y) = (2048, 2048)$ is $(\Delta x, \Delta y) = (0.89, 0.89)$ pixels.
For the interior orientation elements of the camera [30], the error $\Delta f$ of the focal length $f$ obviously causes a scale error. Its influence on the row and column directions can be expressed as follows:
$$\begin{cases} \Delta x = \dfrac{\Delta f \cdot x \cdot \lambda_{ccd}}{f} \\[4pt] \Delta y = \dfrac{\Delta f \cdot y \cdot \lambda_{ccd}}{f} \end{cases} \tag{A5}$$
The error of the principal point $(x_0, y_0)$ is $(\Delta x_0, \Delta y_0)$, which likewise introduces a translation error that can be expressed as follows:
$$\begin{cases} \Delta x = \Delta x_0 \\ \Delta y = \Delta y_0 \end{cases} \tag{A6}$$
(b) Influence of the Elevation Error
Assume that the rays formed by a pair of homonymous points $(p, q)$ of the left and right images and the ground point G are as shown in Figure A2, and that their angles with the local vertical of the Earth's surface are $(\alpha, \beta)$ (defined clockwise as the positive direction), respectively. Owing to the elevation error $\Delta h$, after the forward and backward algorithms of the geometric positioning model are applied, the point $p$ of the left image is projected to a point that is offset from its true homonymous point $q$ on the right image.
Figure A2. Image pixel offset error caused by elevation error.
Then, the offset error $\Delta_{xy}$ of the image pixel that is caused by the elevation error $\Delta h$ is calculated as follows [31]:
$$\Delta_{xy} = \frac{\Delta h}{r} \cdot \frac{\sin(\alpha - \beta)}{\cos\alpha \cdot \cos\beta} \tag{A7}$$
This experiment used global 1 km grid SRTM elevation data; the 90 m grid SRTM has an elevation error of 16 m and, after downsampling to 1 km, the error is dominated by the uneven distribution of heights within each grid cell. Assuming $\alpha = 15°$, $\beta = -15°$, and $\Delta h = 100$ m gives $\Delta_{xy} = 0.42$ pixels. This error is related to the elevation distribution in the area. In fact, for flat areas such as small cities, $\Delta h$ is small, so the image pixel error is much smaller than this calculated value. Although the elevation error cannot be fitted by any model, it has little effect on the registration accuracy of Luojia-1 nightlight images.
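The 0.42 pixel value can be reproduced from Equation (A7); the check below assumes opposite-side viewing angles ($\alpha = 15°$, $\beta = -15°$), since $\sin(\alpha - \beta)$ would vanish for equal angles:

```python
import numpy as np

# Worked check of Equation (A7): dh = 100 m, r = 129 m/pixel.
dh, r = 100.0, 129.0
a, b = np.deg2rad(15.0), np.deg2rad(-15.0)
dxy = dh / r * np.sin(a - b) / (np.cos(a) * np.cos(b))
print(f"image offset = {dxy:.2f} px")            # -> 0.42 px
```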
(c) Choice of Image Transformation Model
Based on the above error decomposition, for the corresponding relationship of the homonymous points after the use of the forward and backward algorithms of the geometric positioning model, combined with the accuracy of Luojia-1, the registration of the homonymous points can be achieved according to two models.
In the preliminary registration, only the influence of the orbit, roll angle, and pitch angle errors can be considered and the small difference between the center and edge of the field of view can be ignored. According to Equations (A1)–(A3), the orientation error can be modeled using the translation model:
$$\begin{cases} \Delta x = \dfrac{T_x}{r} + \dfrac{f}{\cos\psi \cdot \lambda_{ccd}} \cdot \Delta\varphi = A_0 \\[4pt] \Delta y = \dfrac{T_y}{r} + \dfrac{f}{\cos\psi \cdot \lambda_{ccd}} \cdot \Delta\omega = B_0 \end{cases} \tag{A8}$$
where $A_0$ and $B_0$ are the constant-term substitution parameters of the translation model coefficients.
When considering the orbit, roll angle, pitch angle, yaw angle, focal length, and principal point errors, but neglecting the minor elevation error, the type of error that is caused by geometric positioning can be modeled by an affine model according to Equations (A1)–(A6):
$$\begin{cases}
\Delta x = \dfrac{T_x}{r} + \dfrac{f}{\cos\psi\cdot\lambda_{ccd}}\cdot\Delta\varphi + (\cos\Delta\kappa - 1)\cdot(x - x_0) - \sin\Delta\kappa\cdot(y - y_0) + \dfrac{\Delta f \cdot x \cdot \lambda_{ccd}}{f} + \Delta x_0 \\[4pt]
\quad\;\, = \left(\dfrac{T_x}{r} + \dfrac{f}{\cos\psi\cdot\lambda_{ccd}}\cdot\Delta\varphi - (\cos\Delta\kappa - 1)\,x_0 + \sin\Delta\kappa\cdot y_0 + \Delta x_0\right) + \left(\cos\Delta\kappa - 1 + \dfrac{\Delta f\cdot\lambda_{ccd}}{f}\right) x + (-\sin\Delta\kappa)\, y \\[4pt]
\quad\;\, = A_0 + A_1\, x + A_2\, y \\[8pt]
\Delta y = \dfrac{T_y}{r} + \dfrac{f}{\cos\psi\cdot\lambda_{ccd}}\cdot\Delta\omega + \sin\Delta\kappa\cdot(x - x_0) + (\cos\Delta\kappa - 1)\cdot(y - y_0) + \dfrac{\Delta f \cdot y \cdot \lambda_{ccd}}{f} + \Delta y_0 \\[4pt]
\quad\;\, = \left(\dfrac{T_y}{r} + \dfrac{f}{\cos\psi\cdot\lambda_{ccd}}\cdot\Delta\omega - \sin\Delta\kappa\cdot x_0 - (\cos\Delta\kappa - 1)\,y_0 + \Delta y_0\right) + (\sin\Delta\kappa)\, x + \left(\cos\Delta\kappa - 1 + \dfrac{\Delta f\cdot\lambda_{ccd}}{f}\right) y \\[4pt]
\quad\;\, = B_0 + B_1\, x + B_2\, y
\end{cases} \tag{A9}$$
where $A_i, B_i$ ($i = 0, 1, 2$) are the substitution parameters of the affine model coefficients.
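Steps 5 and 6 of the registration process fit this affine model by least squares and iteratively cull the worst residual. A minimal sketch, assuming at least three registered point pairs and an illustrative residual threshold `tol`:

```python
import numpy as np

def fit_affine_with_culling(left, right, tol: float = 0.5):
    """Fit x' = A0 + A1*x + A2*y, y' = B0 + B1*x + B2*y (Equation (A9) form),
    deleting the largest-residual pair until all residuals are below tol.

    left, right: (n, 2) arrays of homonymous image points, n >= 3 assumed;
    tol is an assumed threshold in pixels.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    keep = np.ones(len(left), dtype=bool)
    coef = None
    while keep.sum() >= 3:
        A = np.c_[np.ones(keep.sum()), left[keep]]          # [1, x, y] design
        coef, *_ = np.linalg.lstsq(A, right[keep], rcond=None)
        pred = np.c_[np.ones(len(left)), left] @ coef
        res = np.linalg.norm(pred - right, axis=1)          # per-point residual
        worst = int(np.argmax(np.where(keep, res, -np.inf)))
        if res[worst] <= tol:
            break
        keep[worst] = False                                 # cull worst pair
    return coef, keep
```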

References

  1. Zeng, C.; Zhou, Y.; Wang, S.; Yan, F.; Zhao, Q. Population spatialization in China based on night-time imagery and land use data. Int. J. Remote Sens. 2011, 32, 9599–9620.
  2. Ma, T.; Zhou, Y.; Zhou, C.; Haynie, S.; Pei, T.; Xu, T. Night-time light derived estimation of spatio-temporal characteristics of urbanization dynamics using DMSP/OLS satellite data. Remote Sens. Environ. 2015, 158, 453–464.
  3. Levin, N.; Duke, Y. High spatial resolution night-time light images for demographic and socio-economic studies. Remote Sens. Environ. 2012, 119, 1–10.
  4. Li, X.; Li, D. Can night-time light images play a role in evaluating the Syrian Crisis? Int. J. Remote Sens. 2014, 35, 6648–6661.
  5. Levin, N. The impact of seasonal changes on observed nighttime brightness from 2014 to 2015 monthly VIIRS DNB composites. Remote Sens. Environ. 2017, 193, 150–164.
  6. Zhang, G.; Wang, J.; Jiang, Y.; Zhou, P.; Zhao, Y.; Xu, Y. On-orbit geometric calibration and validation of Luojia 1-01 night-light satellite. Remote Sens. 2019, 11, 264.
  7. Li, X.; Li, X.; Li, D.; He, X.; Jendryke, M. A preliminary investigation of Luojia-1 night-time light imagery. Remote Sens. Lett. 2019, 10, 526–535.
  8. Ou, J.; Liu, X.; Liu, P.; Liu, X. Evaluation of Luojia 1-01 nighttime light imagery for impervious surface detection: A comparison with NPP-VIIRS nighttime light data. Int. J. Appl. Earth Obs. Geoinf. 2019, 81, 1–12.
  9. Wang, C.; Chen, Z.; Yang, C.; Li, Q.; Wu, Q.; Wu, J.; Zhang, G.; Yu, B. Analyzing parcel-level relationships between Luojia 1-01 nighttime light intensity and artificial surface features across Shanghai, China: A comparison with NPP-VIIRS data. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101989.
  10. Zheng, Q.; Weng, Q.; Huang, L.; Wang, K.; Deng, J.; Jiang, R.; Gan, M. A new source of multi-spectral high spatial resolution night-time light imagery—JL1-3B. Remote Sens. Environ. 2018, 215, 300–312.
  11. Zitová, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
  12. Noh, M.; Howat, I.M. The surface extraction from TIN based search-space minimization (SETSM) algorithm. ISPRS J. Photogramm. Remote Sens. 2017, 129, 55–76.
  13. Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018.
  14. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  15. Goncalves, H.; Corte-Real, L.; Goncalves, J.A. Automatic image registration through image segmentation and SIFT. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2589–2600.
  16. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527.
  17. Guan, Z.; Jiang, Y.; Wang, J.; Zhang, G. Star-based calibration of the installation between the camera and star sensor of the Luojia 1-01 satellite. Remote Sens. 2019, 11, 2081.
  18. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  19. Zhang, Y. Handbook of Image Engineering; Springer: Singapore, 2021.
  20. Pan, H.; Zhang, G.; Tang, X.; Wang, X.; Zhou, P.; Xu, M.; Li, D. Accuracy analysis and verification of ZY-3 products. Acta Geod. Cartogr. 2013, 42, 738–744, 751.
  21. Zhang, L.; Ai, H.; Xu, B.; Sun, Y.; Dong, Y. Automatic tie-point extraction based on multiple-image matching and bundle adjustment of large block of oblique aerial images. Acta Geod. Cartogr. 2017, 46, 554–564.
  22. Zhang, G.; Li, F.; Jiang, W.; Zhai, L.; Tang, X. Study of three-dimensional geometric model and orientation algorithms for systemic geometric correction product of push-broom optical satellite image. Acta Geod. Cartogr. 2010, 39, 34–38.
  23. Jiang, Y.; Zhang, G.; Tang, X.; Li, D.R.; Wang, T.; Huang, W.C.; Li, L.T. Improvement and assessment of the geometric accuracy of Chinese high-resolution optical satellites. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4841–4852.
  24. Zhang, G.; Guan, Z. High-frequency attitude jitter correction for the Gaofen-9 satellite. Photogramm. Rec. 2018, 33, 264–282.
  25. Pan, H.; Zou, Z.; Zhang, G.; Zhu, X.; Tang, X. A penalized spline-based attitude model for high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1849–1859.
  26. Jiang, Y.; Zhang, G.; Wang, T.; Li, D.; Zhao, Y. In-orbit geometric calibration without accurate ground control data. Photogramm. Eng. Remote Sens. 2018, 84, 485–493.
  27. Guan, Z.; Jiang, Y.; Zhang, G. Vertical accuracy simulation of stereo mapping using a small matrix charge-coupled device. Remote Sens. 2018, 10, 29.
  28. Jiang, Y.; Zhang, G.; Tang, X.M.; Li, D.; Huang, W.C.; Pan, H.B. Geometric calibration and accuracy assessment of ZiYuan-3 multispectral images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4161–4172.
  29. Zhang, G. Rectification for High Resolution Remote Sensing Image under Lack of Ground Control Points. Ph.D. Thesis, Wuhan University, Wuhan, China, 2005.
  30. Jiang, Y.; Zhang, G.; Tang, X.; Zhu, X.; Qin, Q.; Li, D.; Fu, X. High accuracy geometric calibration of ZY-3 three-line image. Acta Geod. Cartogr. 2013, 42, 523–529.
  31. Jiang, Y.; Cui, Z.; Zhang, G.; Wang, J.; Xu, M.; Zhao, Y.; Xu, Y. CCD distortion calibration without accurate ground control data for pushbroom satellites. ISPRS J. Photogramm. Remote Sens. 2018, 142, 21–26.
Figure 1. The forward algorithm and backward algorithm diagram.
Figure 2. Nightlight image tie point registration process.
Figure 3. Registration of tie points of two scenes in Zhengzhou, Henan, China (left: a total of 71 sets of tie points (red dots) were obtained by registration of the two images; right: the top eight sets of tie points, numbered A1–A8 and B1–B8, where Ai and Bi are homonymous points, i = 1, 2, …, 8).
Figure 4. Initial light point extraction. (a) Homonymous point A1. (b) Homonymous point B1. (c) Homonymous point A2. (d) Homonymous point B2.
Figure 5. Distribution of nine tie points in Tibet.
Figure 6. Distribution of tie points in the Yangtze River Delta region. (a) Yangtze River Delta. (b–d) Enlarged views.
Figure 7. Distribution of tie points in the Beijing–Zhangjiakou area. (a) Beijing–Zhangjiakou area. (b–d) Enlarged views.
Figure 8. Comparison of SIFT registration results and the proposed method. (a) SIFT's result for the left image. (b) SIFT's result for the right image. (c) Proposed method's result for the left image. (d) Proposed method's result for the right image. (e–h) Enlarged views of (a–d), respectively.
Figure 9. Registration results based on the NCC method. (a–d) City areas. (e–h) Sparse light areas.
Figure 10. Registration situation of tie points and the mosaic result of Luojia-1 nightlight images in China. (a) Red: tie points obtained by the SIFT method; green: tie points obtained by the proposed method. (b) Nightlight image mosaic result.
Figure 11. Comparison of the mosaic effect of two adjacent images in an urban area. (a) Vertical mosaic effect before block adjustment (arrows show the offset direction). (b) Vertical mosaic effect after block adjustment. (c) Horizontal mosaic effect before block adjustment (arrows show the offset direction). (d) Horizontal mosaic effect after block adjustment.
Figure 12. Overlay effect of two adjacent images in the sparse light area. (a) Img1's light and edge. (b) Img2's light and img1's edge before block adjustment. (c) Img2's light and img1's edge after block adjustment.
Table 1. Luojia-1 satellite parameters.

Parameter | Value/Description
Orbit height | 645 km
Orbit type | Sun-synchronous
Revisit period | 15 days
Ground sampling distance | 129 m (sub-satellite point)
Ground swath | 264 km × 264 km
Camera focal length | 55 mm
Camera array size | 2048 × 2048 pixels
Camera pixel size | 11 μm × 11 μm
Camera field of view (FOV) | 32.3°
Geolocation accuracy | 650 m / 5 pixels
Attitude accuracy | 0.05°
Table 2. Comparison of registration methods for Luojia-1 in China.

Registration Method | Registration Time | Tie Point Count | Covered Images/Total Images | Tie Point Coverage Ratio
SIFT-GPU | 31 min | 5413 | 163/275 | 59.3%
This article | 17 min | 14,588 | 262/275 | 95.3%
Table 3. Comparison of registration accuracy of Luojia-1 in China (unit: pixels).

Method | Error Direction | Min | Max | RMS
SIFT-GPU | x | 0 | 4.335 | 0.429
SIFT-GPU | y | 0 | 6.174 | 0.354
SIFT-GPU | plane | 0 | 6.235 | 0.556
Proposed | x | 0 | 6.258 | 0.448
Proposed | y | 0 | 7.917 | 0.445
Proposed | plane | 0 | 7.936 | 0.632
Combination of both tie points | x | 0 | 5.883 | 0.453
Combination of both tie points | y | 0 | 5.764 | 0.403
Combination of both tie points | plane | 0 | 6.179 | 0.606
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
