Article

Fine Classification of UAV Urban Nighttime Light Images Based on Object-Oriented Approach

1 Hunan Key Laboratory of Geospatial Big Data Mining and Application, Hunan Normal University, Changsha 410081, China
2 School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(4), 2180; https://doi.org/10.3390/s23042180
Submission received: 3 February 2023 / Revised: 11 February 2023 / Accepted: 11 February 2023 / Published: 15 February 2023

Abstract
Fine classification of urban nighttime lighting is a key prerequisite for small-scale nighttime urban research. To fill the gap in classification and recognition research on high-resolution urban nighttime light images, this paper uses a small rotary-wing UAV platform to acquire static monocular tilted nighttime light images of communities near Meixi Lake in Changsha City as research data. Using an object-oriented classification method to fully exploit the spectral, textural and geometric features of urban nighttime lights, we build four classification models based on random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN) and decision tree (DT) to finely extract five types of nighttime lights: window light, neon light, road reflective light, building reflective light and background. The main conclusions are as follows: (i) Equally dividing the image into three regions along the view direction alleviates the variable-scale problem of monocular tilted images, and multiresolution segmentation combined with Canny edge detection is better suited to urban nighttime light images; (ii) RF achieves the highest classification accuracy of the four algorithms, with an overall accuracy of 95.36% and a kappa coefficient of 0.9381 in the far view region, followed by SVM and KNN, with DT the worst; (iii) Among the fine classification results, window light and background are classified most accurately, with both UA and PA above 93% in the RF model, while road reflective light has the lowest accuracy; (iv) Among the selected features, spectral features contribute the most, above 59% in all three regions, followed by textural features, with geometric features contributing the least. This paper demonstrates the feasibility of using nighttime UAV static monocular tilted image data for the fine classification of urban light types with an object-oriented classification approach, and provides data and technical support for small-scale urban nighttime research such as community building identification and nighttime human activity perception.

1. Introduction

Nighttime lighting is a reflection of human activity, economic development and energy use [1,2]. Traditional city night light images are mostly obtained by satellite remote sensing, while small rotary-wing UAVs, as a new remote sensing platform, can provide ultra-high resolution city night light images at the vertical level through tilt photography [3,4,5,6,7]. The distribution of different types of urban lights reflects the internal structure of the city and the intensity of human activities. Therefore, the classification and identification of urban nighttime lights is a key prerequisite step for small-scale nighttime urban research, and the fine classification of various types of urban lights has important research significance and application value for community building identification, urban emergency rescue and nighttime human activity perception.
The fine classification of urban nighttime lights requires consideration of feature extraction, classification methods and accuracy evaluation, of which the choice of classification method is the most important. Fine classification studies of high-resolution remote sensing images such as UAV imagery mostly adopt an object-oriented approach, which uses segmented objects as the basic unit and can take full advantage of the spectral, geometric and textural features of the objects to be classified [8]. Researchers also add features tailored to the object under study, such as vegetation indices to identify changes in urban tree cover [9] or topographic information to classify land features [10]; of course, too much feature information can lead to redundancy. Yang et al. [11] demonstrated that differences in feature dimensionality and importance are the main factors behind variation in olive tree extraction accuracy; Guo et al. [12] compared different feature combination schemes and showed that a combination obtained by feature elimination had the highest accuracy in urban tree classification. Object-oriented approaches are often combined with machine learning when selecting classification algorithms. To identify the most suitable machine learning algorithm, Cao et al. [13] compared the accuracies of support vector machine (SVM) and K-nearest neighbor (KNN) algorithms for mangrove species classification, finding SVM more accurate than KNN; Ye et al. [14] compared random forest (RF), SVM and KNN for extracting urban impervious surfaces and concluded that RF had the highest extraction accuracy; Liang et al. [15] compared five machine learning algorithms for extracting permafrost thaw slump boundaries and concluded that SVM was the most accurate; Pádua et al. [16] compared the fine classification results of SVM, RF and artificial neural networks (ANN) for vineyards and found that ANN performed best overall. Evidently, the most suitable machine learning algorithm varies with the research object in object-oriented classification studies. Moreover, in terms of image acquisition time, previous research combining object-oriented methods with UAV images has focused mainly on daytime images; there is little research on the fine classification of urban lights at night, and none on the fine classification of urban nighttime UAV images using object-oriented approaches.
Therefore, this paper uses a small rotary-wing UAV as a new nighttime urban remote sensing platform and takes the captured static monocular tilted images as the data source. Adopting an object-oriented classification method, we explore its effectiveness on urban nighttime lights by comparing the classification accuracy of four machine learning algorithms: random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN) and decision tree (DT). This fills the gap in fine classification research on ultra-high-resolution urban nighttime light images captured by UAV and is expected to provide new data and technical references for smaller-scale urban nighttime light research.

2. Materials and Methods

2.1. Study Site and Data Acquisition

The study area is located in Meixihu Street, Yuelu District, Changsha City, Hunan Province, China, mainly including the Meixihu Jinmaoyue, Baijiatang, Longqin Bay, Zhenye City and Jiajing communities. The area has buildings of uniform, regular height and a high level of community management, making it a typical urban center community. The images were acquired between 20:00 and 20:30 on a night in May 2022, the peak of human nocturnal activity and thus a representative period. The UAV parameter settings are shown in Table 1; a monocular shooting method with fixed vertical takeoff and landing was used to obtain urban nighttime tilted images containing red, green and blue bands (Figure 1). A clear, windless and cloudless night was chosen for shooting to ensure image quality.

2.2. Research Methods

In this study, a classification method for urban nighttime lights based on static monocular tilted UAV visible images is proposed, using object-oriented approaches and machine learning algorithms. The classification process consists of four steps (Figure 2): (1) pre-processing of the UAV images; (2) image segmentation and feature extraction; (3) image classification, using RF, SVM, KNN and DT classifiers to classify urban nighttime lights; (4) accuracy evaluation, using the four indexes of OA, the kappa coefficient, UA and PA to evaluate the classification results and analyze the accuracy of the different machine learning algorithms and light types.

2.2.1. Image Pre-Processing

As can be seen from Figure 1, the urban nighttime UAV images mainly contain five types of lights: window light, neon light, road reflective light, building reflective light and background. Compared with daytime images, the nighttime UAV images suffer from low contrast and noise pollution in dark areas. In addition, because of the monocular tilted shooting angle, the spatial resolution changes continuously along the field-of-view direction. To address these problems, three pre-processing operations were applied: contrast enhancement, bilateral filtering denoising and view field division.
(1) Contrast enhancement
To solve the problem of low contrast, this paper used the image enhancement module of the PIL library to pre-process the images. The process is as follows: first, the original image is converted to grayscale and the arithmetic mean of its gray values is computed; then, a degenerate image is created with the same size and number of channels as the original, with every channel of every pixel set to this grayscale mean; finally, the enhanced image is produced by blending the degenerate image with the original, where the parameter a determines the weight of the two:
$\mathrm{img}_{new} = \mathrm{img}_1 \times (1.0 - a) + \mathrm{img}_2 \times a$

where $\mathrm{img}_1$ is the converted grayscale (degenerate) image, $\mathrm{img}_2$ is the original image and $\mathrm{img}_{new}$ is the enhanced image. The ratio $a$ controls the weighting when the two images are blended: when $a < 1$, image contrast is reduced; when $a = 1$, the image remains unchanged; and when $a > 1$, image contrast is enhanced.
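For illustration, the following sketch reproduces this blending step with Pillow; it is a minimal reconstruction of the procedure described above (the input file name is hypothetical), mirroring what PIL's ImageEnhance.Contrast does internally.

```python
from PIL import Image, ImageStat

def enhance_contrast(img: Image.Image, a: float) -> Image.Image:
    # Mean gray value of the image, as in the first step described above.
    gray_mean = int(ImageStat.Stat(img.convert("L")).mean[0] + 0.5)
    # Degenerate image: every channel of every pixel set to the gray mean.
    degenerate = Image.new(img.mode, img.size,
                           (gray_mean,) * len(img.getbands()))
    # Image.blend(im1, im2, a) computes im1 * (1 - a) + im2 * a,
    # matching the equation above (im1 = degenerate, im2 = original).
    return Image.blend(degenerate, img, a)

img = Image.open("night_image.jpg")      # hypothetical input path
enhanced = enhance_contrast(img, 1.5)    # a > 1 enhances contrast
```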
(2) Bilateral filtering denoising
Bilateral filtering is a nonlinear filter that represents the intensity of a pixel by a Gaussian-weighted average of the luminance of the surrounding pixels, which makes it effective not only at removing image noise but also at preserving object edge information [17]. Its applicability in low-luminance environments has been demonstrated [18]. In this paper, after several comparison tests, a 50 × 50 window was finally used for bilateral filtering of the captured urban nighttime UAV images. The processed images were effectively denoised while the edge information of the various light types remained intact, which benefited the subsequent segmentation and classification operations.
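As a sketch, this step could be performed with OpenCV's bilateral filter; the diameter of 50 approximates the 50 × 50 window above, while the two sigma parameters and the file names are illustrative assumptions not reported in the paper.

```python
import cv2

img = cv2.imread("night_image.jpg")      # hypothetical input path
# d = 50 approximates the paper's 50 x 50 filter window; the color and
# spatial sigma values are assumed, not reported.
denoised = cv2.bilateralFilter(img, 50, 75, 75)
cv2.imwrite("night_image_denoised.jpg", denoised)
```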
(3) View field division
A static monocular tilted image is a single-view tilted image taken at a fixed height by a single-lens UAV with fixed vertical takeoff and landing. It suffers from continuous scale variation: the spatial resolution gradually decreases along the field-of-view direction, so the size of the same type of light varies greatly within the image, which affects image segmentation and classification. Therefore, before segmentation, the image was equally divided along the view direction into three images corresponding to the near, middle and far view regions (Figure 3), which effectively alleviated the continuous variable-scale problem. All subsequent segmentation and classification operations were performed on these three regions.
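A minimal sketch of this three-way division with NumPy, assuming the far view occupies the top rows of the forward-tilted frame (file names are hypothetical):

```python
import cv2
import numpy as np

img = cv2.imread("night_image_denoised.jpg")   # hypothetical input path
# Split into three equal bands along the view (row) direction; with a
# forward-tilted camera the top band corresponds to the far view region.
far_region, middle_region, near_region = np.array_split(img, 3, axis=0)
for name, region in [("far", far_region), ("middle", middle_region),
                     ("near", near_region)]:
    cv2.imwrite(f"region_{name}.png", region)
```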

2.2.2. Multiresolution Segmentation

Image segmentation is a key step in object-oriented classification, and the quality of the segmentation directly affects the subsequent classification results. In this paper, the multiresolution segmentation (MRS) algorithm in eCognition 9.0 was used for image segmentation. Multiresolution segmentation is a bottom-up iterative region-merging algorithm that starts from individual image elements and merges them under different scale conditions, with the final segmented objects keeping local heterogeneity to a minimum. The parameters to be set in the multiresolution segmentation process include the band weights, scale parameter, shape factor and compactness; the scale parameter is determined mainly with the aid of the ESP2 plug-in, which reflects the homogeneity of segmentation results through the local variance (LV) index and identifies potentially optimal segmentation scales via its rate of change (ROC) [19].
Light types that appear as small targets in urban nighttime UAV images, such as window lights, are prone to over-segmentation when multiresolution segmentation is used directly [20,21,22]. In this paper, Canny edge detection was combined with MRS to alleviate the over-segmentation phenomenon. As a well-performing, comprehensive edge-detection algorithm, Canny has been combined with multiresolution segmentation by many scholars [23,24]; the resulting segmentation effectively alleviates over-segmentation, and the edge information of the segmented objects is more consistent with the real object contours.
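As an illustration of the edge-detection half of this combination, the following OpenCV sketch computes a Canny edge map that could then be loaded into eCognition as an additional thematic layer; the hysteresis thresholds and file names are assumptions, not values reported in the paper.

```python
import cv2

gray = cv2.imread("region_near.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
# Hysteresis thresholds are assumed values; tune them per scene brightness.
edges = cv2.Canny(gray, 50, 150)
cv2.imwrite("region_near_edges.png", edges)   # e.g., imported into eCognition
```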

2.2.3. Features Extraction

Based on the image segmentation results, the spectral, textural and geometric features of the lighting objects were extracted. According to the characteristics of the different types of nighttime lighting objects, a total of 24 features were extracted. Spectral features include the mean, standard deviation, maximum difference and brightness of the visible bands; textural features include the correlation, homogeneity, contrast, standard deviation, angular second moment, dissimilarity, entropy and mean extracted using the gray level co-occurrence matrix (GLCM); geometric features include the area, length/width ratio, compactness, density, main direction, roundness, shape index and asymmetry of the lighting objects. To simplify subsequent writing, each feature was assigned a number; the specific feature information is shown in Table 2.
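As an illustration, the GLCM-based textural features could be computed per segmented object with scikit-image; a single distance and angle are assumed here, whereas the paper derives these features (and the spectral and geometric ones) in eCognition.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> dict:
    """Texture features for one segmented object (8-bit grayscale patch)."""
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop)[0, 0])
             for prop in ("correlation", "homogeneity", "contrast",
                          "dissimilarity", "ASM")}
    p = glcm[:, :, 0, 0]                  # normalized co-occurrence matrix
    feats["entropy"] = float(-np.sum(p * np.log2(p + 1e-12)))
    # GLCM mean of reference gray levels: sum over i, j of i * p(i, j).
    feats["mean"] = float(np.sum(p * np.arange(256)[:, None]))
    return feats
```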

2.2.4. Sample Selection

In this paper, based on the three divided regions, different sample sizes were selected according to the actual distribution of each light category, for a total of 2440 samples. Since there was no road reflective light in the far view region, only the other four light types were sampled there. The selected sample data were normalized before model training. To ensure the stability of the classification models and prevent overfitting, stratified sampling was used to randomly divide each category of light samples into training and validation sets at a ratio of 7:3 (Table 3).
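A sketch of this split with scikit-learn, assuming X holds the 24 features of each sampled object and y its light-type label; the random seed and the choice of min-max normalization are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# X: (n_objects, 24) feature matrix; y: light-type label of each object.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Normalize features; the scaler is fitted on the training set only.
scaler = MinMaxScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
```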

2.2.5. Classifier

In this paper, drawing on the classification algorithms used in previous object-oriented classification studies and considering the lighting objects studied here, four classification algorithms, namely random forest, support vector machine, K-nearest neighbor and decision tree, were selected for comparative analysis to find the most suitable algorithm for UAV nighttime urban lighting objects.
Random forest (RF) is a supervised classification algorithm based on ensemble learning [25]; it is constructed by combining multiple decision trees, and the final classification result is obtained by averaging or voting over the outputs of the individual trees. The random forest model offers high accuracy, robustness and resistance to overfitting, performs well on high-dimensional nonlinear classification problems and is widely used in high-resolution image classification [26,27,28].
Support vector machine (SVM) is a supervised machine learning algorithm based on statistical learning theory, with powerful nonlinear and high-dimensional processing capability and high recognition accuracy for small samples [29,30]. The core idea of SVM is to map the low-dimensional input space to a high-dimensional space through a kernel function, search for the optimal separating hyperplane in that space and maximize the distance between the samples and this plane, thereby realizing sample classification [31].
The K-nearest neighbor algorithm (KNN) is a nonparametric classification algorithm that classifies data based on the nearest training examples in the feature space: for a new input, its K nearest neighbors are found, and the majority class among them determines the classification of the input [32]. KNN is a simple but accurate lazy learning algorithm.
Decision tree (DT) is a supervised learning algorithm that uses a tree structure to construct a classification model, in which each internal node represents a test on an attribute [33]. DT has the advantages of fast computation and high accuracy and works effectively on relatively small data sets [34], making it widely used in remote sensing image classification.
In this paper, Python 3.9 was used as the runtime environment for the machine learning models. When building the classifiers, ten-fold cross-validation was applied during training to ensure model reliability, and grid search was used to select the best parameters for each type of model.
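A sketch of this model-selection setup with scikit-learn, continuing from the split above; the candidate parameter grids are illustrative assumptions, as the paper does not report the ranges searched.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Candidate grids are assumed for illustration only.
models = {
    "RF":  (RandomForestClassifier(random_state=42),
            {"n_estimators": [100, 300, 500], "max_depth": [None, 10, 20]}),
    "SVM": (SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]}),
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7, 9]}),
    "DT":  (DecisionTreeClassifier(random_state=42),
            {"max_depth": [None, 5, 10, 20]}),
}

best = {}
for name, (estimator, grid) in models.items():
    search = GridSearchCV(estimator, grid, cv=10)  # ten-fold cross-validation
    search.fit(X_train, y_train)
    best[name] = search.best_estimator_
```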

2.2.6. Accuracy Evaluation Methods

To accurately analyze the classification accuracy of the different machine learning models on UAV nighttime city light images, confusion matrices were calculated, and overall accuracy (OA), the kappa coefficient, producer's accuracy (PA) and user's accuracy (UA) were employed as quantitative metrics to evaluate the classification results:
$OA = \frac{\sum_{i=1}^{n} T_{ii}}{T_{sum}} \times 100\%$

$Kappa = \frac{T_{sum} \sum_{i=1}^{n} T_{ii} - \sum_{i=1}^{n} T_{i+} T_{+i}}{T_{sum}^{2} - \sum_{i=1}^{n} T_{i+} T_{+i}}$

$PA = \frac{T_{ii}}{T_{+i}} \times 100\%$

$UA = \frac{T_{ii}}{T_{i+}} \times 100\%$

where $T_{sum}$ is the total number of samples, $T_{ii}$ is the number of correctly classified samples of category $i$, $n$ is the total number of categories, $T_{i+}$ is the number of samples predicted as category $i$, and $T_{+i}$ is the number of true samples of category $i$.
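These four metrics can be computed from the confusion matrix as in the following scikit-learn sketch, continuing from the models fitted above (rows of the matrix are true classes, columns are predictions):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_pred = best["RF"].predict(X_val)
cm = confusion_matrix(y_val, y_pred)     # rows: true class, columns: predicted

oa = np.trace(cm) / cm.sum() * 100                # overall accuracy (%)
kappa = cohen_kappa_score(y_val, y_pred)          # kappa coefficient
pa = np.diag(cm) / cm.sum(axis=1) * 100           # producer's accuracy (recall)
ua = np.diag(cm) / cm.sum(axis=0) * 100           # user's accuracy (precision)
```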

3. Results

3.1. Comparison of Segmentation Results

In this study, the band weights were all set to one. The segmentation procedure was as follows: first, keeping the scale parameter constant, the shape factor and compactness were varied in steps of 0.1 over the range 0.1–0.9 in repeated segmentation experiments to determine their optimal values; second, the ESP2 plug-in was used to identify potentially suitable scale parameters (Figure 4), and the final scale parameters were chosen by visual interpretation; finally, the optimal segmentation parameter settings for the different regions were obtained (Table 4).
In this paper, Canny edge detection results were incorporated into the multiresolution segmentation to alleviate the over-segmentation of some lighting types. Comparing the segmentation results before and after combining Canny edge detection (Figure 5) shows that it was difficult to segment all light types well using the multiresolution segmentation algorithm alone: single window lights were over-segmented into multiple window lights (areas ①, ② and ③ in the near view region of the MRS-only result in Figure 5), halos appeared around window lights (area ④ in the near view region of the MRS-only result) and building reflective light and background were over-segmented into multiple objects (areas ④ and ⑤ in the middle view region of the MRS-only result). Canny edge detection can detect the edge contours of the various lights with a low error rate and as completely as possible, so combining the two makes the extraction results more consistent with the real contours of the lights. As can be seen from regions ① ② ③ in Figure 5, after combining Canny, over-segmented window lights were re-segmented into complete window lights, making individual window lights more complete and more homogeneous internally. Likewise, regions ④ and ⑤ in Figure 5 show that, after combining the two, the segmentation of building reflective light and background was more regular and smoother, which is more in line with the actual situation. The segmentation results in this paper further validate the feasibility of combining the two algorithms for UAV nighttime urban lighting images.

3.2. Classification Results

3.2.1. Results of Fine Classification of Urban Nighttime Lighting

Based on the segmentation, the four classification algorithms RF, SVM, KNN and DT were used to classify the lighting objects; the classification results are shown in Figure 6. In general, RF produced the best results, reflecting the distribution of the various urban light types most accurately; in particular, the window lights and backgrounds in the near, middle and far view regions were identified almost perfectly. The classification results of SVM and DT were second best, with both algorithms classifying window light and building reflective light well. However, compared with RF, SVM more often produced adhesion (sticking) phenomena, identifying the periphery of neon lights as window light in the middle and far view regions, merging multiple window lights into one, and misclassifying more road reflective light in the near and middle view regions; DT misclassified more window light and road reflective light than RF. KNN had the worst classification results, with a particularly high number of misclassifications and adhesion phenomena, especially for road and building reflective lights.

3.2.2. Accuracy Assessment of Fine Classification of Urban Nighttime Lighting

In this paper, confusion matrices for the different regions and algorithms were calculated from the validation samples and used to obtain the classification accuracy of each light type as well as the OA and kappa (Table 5, Table 6 and Table 7). Overall, the OA of all three regions was above 80% and the kappa above 0.75; the far view region had the highest values, with OA above 90% and kappa above 0.90 for all four algorithms, giving the best classification effect.
The classification accuracies show that the OA and kappa of RF were higher than those of SVM, KNN and DT in all three regions (Table 5, Table 6 and Table 7), with the highest values, an OA of 95.36% and a kappa of 0.9381, reached in the far view region. The RF algorithm therefore performed better than the other three algorithms in the fine classification of urban nighttime lights.
For the classification accuracy of each light type in the far view region (Table 5), window light and background were classified best, while neon light and building reflective light were relatively poor. The UA of every light type was higher than 87%, and the PA of every light type except building reflective light under KNN (81.58%) was higher than 85%, indicating that all four algorithms have low misclassification and omission rates for the urban light types in the far view region.
For the classification accuracy of each light type in the middle view region (Table 6), background was classified with the highest accuracy, followed by window light, with road reflective light the lowest. The PA and UA of every light type in the middle view region were higher than 72%; the lowest UA was DT's for neon light, at only 72.50%, indicating a high misclassification rate, and the lowest PA was SVM's for building reflective light, at 73.17%, indicating a high omission rate.
For the classification accuracy of each light type in the near view region (Table 7), background was classified best and road reflective light worst. The lowest UA was KNN's for building reflective light, at only 60.32%, indicating a very high misclassification rate, while the highest was SVM's for background, at 100% with every classification correct. The lowest PA was KNN's for road reflective light, at only 73.85%, indicating a high omission rate, while the highest was RF's for background, at 100% with no omissions.
In summary, the classification accuracy of each light type is ranked as follows: background > window light > neon light > building reflective light > road reflective light. The classification accuracy of background and window light was the highest, and the classification accuracy of road reflective light was the lowest.

3.3. Feature Contribution Analysis

To further explore which specific features most influenced the fine classification of nighttime urban lighting, the RF algorithm was used to rank the feature contributions in the different regions (Figure 7). In all three regions, spectral features contributed the most: 59.63% in the near view region, 63.49% in the middle view region and 66.09% in the far view region. The proportion increased gradually from near to far, indicating that the smaller the classification objects, the higher the contribution of spectral features. Among them, Mean_Red (S1), Mean_Green (S2) and Brightness (S8) ranked as the top three features in all three regions, jointly accounting for 32.19%, 36.92% and 44.34% from near to far, respectively, indicating that urban nighttime images are more sensitive to the red and green bands than to the blue band. Geometric features contributed the least: 17.43% in the near view region, 14.77% in the middle view region and 14.67% in the far view region, decreasing gradually from near to far, indicating that the smaller the classification objects, the smaller the proportion of geometric features; the same held for textural features. Thus, in UAV-captured nighttime urban lighting images, spectral features contributed the most and geometric features the least, and the smaller the classification objects, the larger the contribution of spectral features and the smaller that of geometric and textural features.
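A sketch of how such a ranking can be derived from a fitted random forest with scikit-learn and pandas, assuming feature_names holds the 24 labels of Table 2 (S1–S8, T1–T8, G1–G8):

```python
import pandas as pd

rf = best["RF"]   # the fitted random forest from the grid search sketch above
importances = pd.Series(rf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))   # per-feature ranking

# Total contribution per category, using the S/T/G prefix of each label.
by_category = importances.groupby(lambda name: name[0]).sum()
print(by_category)    # S: spectral, T: textural, G: geometric
```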

3.4. Offsite Application Comparison

The research area selected in this paper is a typical mature community of Changsha's urban construction. Its buildings are mainly high-rise and densely distributed, and its nighttime lighting types are rich, providing a very good study area for the fine classification of urban nighttime lighting. However, nighttime urban lights appear differently in UAV images of different regions. To explore the performance of this paper's method elsewhere in Changsha and further verify the feasibility and accuracy of the object-oriented method for the fine classification of static monocular tilted urban nighttime light images captured by UAV, the old urban area of Changsha represented by Guanshaling (Figure 8) was selected as a comparison study area. RF, which achieved the highest classification accuracy in the Meixi Lake area, was used as the classification algorithm, and the method of this paper was applied.
As can be seen from the classification accuracy (Table 8), the accuracy in Guanshaling was similar to that in Meixi Lake: the overall classification accuracies of the near, middle and far view regions of Guanshaling were all above 85%, and the kappa values were all above 0.8. The feasibility and accuracy of this paper's classification method for static monocular tilted urban nighttime light images captured by UAV were thus verified in a different study area.

4. Discussion

The classification accuracy was highest in the far view region because the light objects there are smaller, their internal features are more homogeneous and fluctuate less, and different light types are easier to distinguish. At the same time, in a tilted image, the farther a region is from the shooting point, the more likely road reflective light is to be occluded; the far view region in this paper therefore omits road reflective light as a light type, which removes part of the potential misclassification. The middle view region had the lowest classification accuracy, with an OA of 89.9% and a kappa of 0.8732 for the RF algorithm. Compared with the near view region, the object resolution of the middle view region is lower, and in its upper-right part (Figure 6) various lights are interspersed, which increases the difficulty of classification. Additionally, the offsite application comparison shows that the accuracy of the middle view region was higher than that of the near view region there, because the light types in the offsite middle view region were not as complex as in Figure 6; the classification accuracy of urban lights in different regions is thus ultimately affected by both image resolution and lighting complexity.
Our study indicated that RF had the highest classification accuracy in all three regions (Table 5, Table 6 and Table 7), in line with the findings of many classification studies of daytime UAV images [12,35,36,37]. The RF algorithm integrates multiple decision trees, handles high-dimensional data well and effectively resists noise interference. Since this paper selected a relatively large number of lighting features, the RF algorithm, with its better high-dimensional data processing capability, achieved better classification results; the accuracy gap with the other classifiers was largest in the near view region, where its OA was 3.92% higher than that of the second-ranked SVM. SVM had the second-highest classification accuracy; as can be seen from Table 5, its OA difference with RF was smallest in the far view region, at 1.79%, indicating that SVM has some advantages in recognizing small target objects but is less capable than RF of handling high-dimensional data and resisting noise. KNN and DT had the lowest classification accuracy: KNN is a lazy classifier, and the nighttime light objects in this paper are relatively complex, making misclassification and omission more likely; similarly, DT's simple computation process makes it hard to cope with the task of finely classifying urban nighttime light types.
In the fine classification results for each light type (Table 5, Table 6 and Table 7), there were significant differences in accuracy between light types. Taking the RF results in the near view region as an example, the UA of road reflective light was only 85.71%, while the UA of the other light types was above 90% (Table 7). The reasons mainly include the following three points: (1) different types of street lights lead to large differences within the road reflective light class; (2) road reflective light contains more noise, such as cars, pedestrians and traffic markings such as zebra crossings; (3) vehicle and store lights also change the appearance of road reflective light. Therefore, the classification accuracy of road reflective light was poor in both the near and middle view regions. The light types with the best classification results were window light and background, as both have more homogeneous internal properties that fluctuate little.
This study shows that spectral features contribute the most in the classification of urban nighttime low-brightness images, in line with the conventional perception that brightness increases almost monotonically from background, through building reflective light, road reflective light and window light, to neon light. The contribution of spectral features in the near, middle and far view regions was above 59% and increased gradually, mainly because the smaller the classification objects, the less obvious their textural and geometric features. For example, in the far view region a window light becomes a small rectangle of uniform size and shape, which greatly reduces its separability in terms of textural and geometric attributes.

5. Conclusions

In this paper, based on static monocular tilted urban nighttime light images captured by UAV, we qualitatively and quantitatively analyzed the classification results and accuracy of different machine learning algorithms for common urban lights using an object-oriented classification method, and analyzed the contribution of the various feature types to the fine classification. The main conclusions are as follows:
(1) The resolution of the static monocular tilted image captured by the UAV is not fixed, and this variable-scale problem affects the subsequent segmentation and classification operations. In this paper, we propose equally dividing the image into three regions along the view direction, which alleviates the variable-scale problem, and use a segmentation method combining Canny edge detection with the multiresolution segmentation algorithm to alleviate the over-segmentation produced when the multiresolution segmentation algorithm is used alone.
(2) By comparing the accuracy of the four classification algorithms RF, SVM, KNN and DT, it was found that RF has the highest classification accuracy, peaking in the far view region with an OA of 95.36% and a kappa of 0.9381; SVM is second, and KNN and DT have the worst accuracies, indicating that the ensemble-learning-based RF model is more suitable for the object-oriented fine classification of UAV urban nighttime lights. Additionally, the OA of RF in the offsite application was above 85% and the kappa above 0.8 in all regions, further verifying the feasibility and portability of the method in this paper.
(3) In the fine classification results by light type, window light and background have the highest classification accuracy, with UA and PA above 93% in the RF model; neon light and building reflective light are second; and road reflective light has the lowest accuracy.
(4) In the feature importance ranking, spectral features had the highest contribution rate, above 59% in all three view regions and highest in the far view region at 66.09%; geometric features had the lowest contribution rate, below 18% in all three regions and lowest in the far view region at 14.67%. This indicates that the smaller the classification objects in urban nighttime lighting images, the higher the sensitivity to spectral features and the lower the sensitivity to textural and geometric features.
This paper demonstrates that a classification method combining object-oriented analysis with traditional machine learning algorithms is applicable to the fine classification of static monocular tilted urban nighttime light images captured by UAV. This study compensates for the lack of vertical-level, smaller-scale urban nighttime lighting data in traditional nighttime light research and provides a practical research approach for using finer-scale urban nighttime lighting images in related studies. Overall, the urban nighttime lighting classification results obtained by this method can meet the needs of further research and provide methodological and data support for nighttime urban studies. However, some parts of this paper can still be improved: only visible-band images were used, without hyperspectral or LiDAR data, and the variable-scale problem was addressed only by simply dividing the image into three regions, without a finer solution. Future research will further improve these two aspects.

Author Contributions

Conceptualization, D.Z. and D.L.; data curation, D.Z. and J.W.; formal analysis, D.Z.; funding acquisition, D.L. and L.Z.; investigation, D.Z. and L.Z.; methodology, D.Z. and L.Z.; project administration, D.L. and L.Z.; resources, D.Z., D.L. and L.Z.; software, D.Z.; supervision, D.L.; validation, D.Z.; visualization, D.Z.; writing—original draft, D.Z.; writing—review and editing, D.L., L.Z. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Fund of Hunan Provincial Education Department (No. 18A014) and the Construction Program for the First-Class Disciplines (Geography) of Hunan Province, China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.; Levin, N.; Xie, J.; Li, D. Monitoring hourly night-time light by an unmanned aerial vehicle and its implications to satellite remote sensing. Remote Sens. Environ. 2020, 247, 111942.
  2. Yu, B.; Wang, C.; Gong, W.; Chen, Z.; Shi, K.; Wu, B.; Hong, Y.; Li, Q.; Wu, J. Nighttime light remote sensing and urban studies: Data, methods, applications, and prospects. Natl. Remote Sens. Bull. 2021, 25, 342–364.
  3. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443.
  4. Li, D.; Li, M. Research advance and application prospect of unmanned aerial vehicle remote sensing system. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 505–513.
  5. Liao, X.; Xiao, Q.; Zhang, H. UAV remote sensing: Popularization and expand application development trend. J. Remote Sens. 2019, 23, 1046–1052.
  6. Hao, P.; Geertman, S.; Hooimeijer, P.; Sliuzas, R. Spatial analyses of the urban village development process in Shenzhen, China. Int. J. Urban Reg. Res. 2013, 37, 2177–2197.
  7. Levin, N.; Kyba, C.C.; Zhang, Q.; de Miguel, A.S.; Román, M.O.; Li, X.; Portnov, B.A.; Molthan, A.L.; Jechow, A.; Miller, S.D. Remote sensing of night lights: A review and an outlook for the future. Remote Sens. Environ. 2020, 237, 111443.
  8. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134.
  9. Timilsina, S.; Aryal, J.; Kirkpatrick, J.B. Mapping urban tree cover changes using object-based convolution neural network (OB-CNN). Remote Sens. 2020, 12, 3017.
  10. Yu, J.; Hong, W.; Hong, C.; Lei, Z. Unmanned image classification in karst area combining topographic factors and stratification strategy. Bull. Surv. Mapp. 2022, 2, 121.
  11. Yang, K.; Zhang, H.; Wang, F.; Lai, R. Extraction of broad-leaved tree crown based on UAV visible images and OBIA-RF model: A case study for Chinese olive trees. Remote Sens. 2022, 14, 2469.
  12. Guo, Q.; Zhang, J.; Guo, S.; Ye, Z.; Deng, H.; Hou, X.; Zhang, H. Urban tree classification based on object-oriented approach and random forest algorithm using unmanned aerial vehicle (UAV) multispectral imagery. Remote Sens. 2022, 14, 3885.
  13. Cao, J.; Leng, W.; Liu, K.; Liu, L.; He, Z.; Zhu, Y. Object-based mangrove species classification using unmanned aerial vehicle hyperspectral images and digital surface models. Remote Sens. 2018, 10, 89.
  14. Ye, Z.; Guo, Q.; Zhang, J.; Zhang, H.; Deng, H. Extraction of urban impervious surface based on the visible images of UAV and OBIA-RF algorithm. Trans. Chin. Soc. Agric. Eng. 2022, 38, 225–234.
  15. Liang, L.; Jiang, L.; Zhiwei, Z.; Yuxing, C.; Yafei, S. Object-oriented classification of unmanned aerial vehicle image for thermal erosion gully boundary extraction. Remote Sens. Nat. Resour. 2019, 31, 180–186.
  16. Pádua, L.; Matese, A.; Di Gennaro, S.F.; Morais, R.; Peres, E.; Sousa, J.J. Vineyard classification using OBIA on UAV-based RGB and multispectral data: A case study in different wine regions. Comput. Electron. Agric. 2022, 196, 106905.
  17. Xu Chengquan, L.Q. Detection of tilted aerial photography right-angled image control points target based on LSD algorithm. J. Geo-Inf. Sci. 2021, 23, 505–513. (In Chinese)
  18. Shunzhong, X.; Ying, L.G.; Tong, W.; Mengjie, R.; Xiaohui, Y.; Cuizhen, Z. Radiometric consistency correction of UAV multispectral images in strong reflective water environment. Trans. Chin. Soc. Agric. Eng. 2022, 38, 192–200. (In Chinese)
  19. Chen, T.; Hu, Z.; Wei, L.; Hu, S. Data processing and landslide information extraction based on UAV remote sensing. J. Geo-Inf. Sci. 2017, 19, 692–701.
  20. Shen, J.; Chen, H.; Xu, M.; Wang, C.; Liu, H. Intelligent image segmentation model for remote sensing applications. J. Intell. Fuzzy Syst. 2019, 37, 361–370.
  21. Xu, X.; Qiu, J.; Zhang, W.; Zhou, Z.; Kang, Y. Soybean seedling root segmentation using improved U-Net network. Sensors 2022, 22, 8904.
  22. Wu, Y.; Li, Q. The algorithm of watershed color image segmentation based on morphological gradient. Sensors 2022, 22, 8202.
  23. Chuyue, P.; Xiao, C.; Linyuan, X. Study on recognizing the penguin population in UAV image based on object-oriented classification. Geomat. Inf. Sci. Wuhan Univ. 2021, 1–15.
  24. Junjie, Z.; Du Xiaoping, F.X.; Huadong, G. An advanced multi-scale fractal net evolution approach. Remote Sens. Technol. Appl. 2014, 29, 324–329.
  25. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  26. Phan, T.N.; Kuch, V.; Lehnert, L.W. Land cover classification using Google Earth Engine and random forest classifier—The role of image composition. Remote Sens. 2020, 12, 2411.
  27. Zhang, L.; Liu, Z.; Ren, T.; Liu, D.; Ma, Z.; Tong, L.; Zhang, C.; Zhou, T.; Zhang, X.; Li, S. Identification of seed maize fields with high spatial resolution and multiple spectral remote sensing using random forest classifier. Remote Sens. 2020, 12, 362.
  28. Wang, S.; Azzari, G.; Lobell, D.B. Crop type mapping without field-level labels: Random forest transfer and unsupervised clustering techniques. Remote Sens. Environ. 2019, 222, 303–317.
  29. Maulik, U.; Chakraborty, D. Remote sensing image classification: A survey of support-vector-machine-based advanced techniques. IEEE Geosci. Remote Sens. Mag. 2017, 5, 33–52.
  30. Sahour, H.; Kemink, K.M.; O'Connell, J. Integrating SAR and optical remote sensing for conservation-targeted wetlands mapping. Remote Sens. 2022, 14, 159.
  31. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support vector machine versus random forest for remote sensing image classification: A meta-analysis and systematic review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6308–6325.
  32. Taunk, K.; De, S.; Verma, S.; Swetapadma, A. A brief review of nearest neighbor algorithm for learning and classification. In Proceedings of the 2019 International Conference on Intelligent Computing and Control Systems (ICCS), Madurai, India, 15–17 May 2019; pp. 1255–1260.
  33. Mu, Y.; Wu, M.; Niu, Z.; Huang, W.; Yang, J. Method of remote sensing extraction of cultivated land area under complex conditions in southern region. Remote Sens. Technol. Appl. 2020, 35, 1127–1135.
  34. Sen, P.C.; Hajra, M.; Ghosh, M. Supervised classification algorithms in machine learning: A survey and review. In Emerging Technology in Modelling and Graphics: Proceedings of IEM Graph 2018; Springer: Berlin/Heidelberg, Germany, 2020; pp. 99–111.
  35. Zhou, R.; Yang, C.; Li, E.; Cai, X.; Yang, J.; Xia, Y. Object-based wetland vegetation classification using multi-feature selection of unoccupied aerial vehicle RGB imagery. Remote Sens. 2021, 13, 4910.
  36. Islam, N.; Rashid, M.M.; Wibowo, S.; Xu, C.-Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early weed detection using image processing and machine learning techniques in an Australian chilli farm. Agriculture 2021, 11, 387.
  37. Vieira, C.C.; Sarkar, S.; Tian, F.; Zhou, J.; Jarquin, D.; Nguyen, H.T.; Zhou, J.; Chen, P. Differentiate soybean response to off-target dicamba damage based on UAV imagery and machine learning. Remote Sens. 2022, 14, 1618.
Figure 1. The study area.
Figure 2. UAV nighttime city light image classification flow chart.
Figure 3. Equal division of the image into near, middle and far view images along the field-of-view direction.
Figure 4. ROC-LV diagrams for the near, middle and far view regions.
Figure 5. Comparison of segmentation results. (Areas ① ② ③ show the segmentation of window lights; areas ④ ⑤ show the segmentation between building reflective light and background.)
Figure 6. Fine classification results of the four machine learning algorithms for urban nighttime lights in the near, middle and far view regions.
Figure 7. Feature contribution ranking.
Figure 8. Guanshaling nighttime lighting image and region division.
Table 1. UAV parameter settings.

Sensor | Flight Altitude | Resolution | Tilt and Turn | ISO | Aperture Size | Exposure
CMOS | 300 m | 0.2 m (5472 × 3648) | −20° | 800 | f/2.8 | 1/15 s
Table 2. Feature information and number.

Feature Category | Feature Name (Number) | Number of Features
Spectral features | Mean_Red (S1), Mean_Green (S2), Mean_Blue (S3), SD_Red (S4), SD_Green (S5), SD_Blue (S6), Max_diff (S7), Brightness (S8) | 8
Textural features | GLCM_Correlation (T1), GLCM_Homogeneity (T2), GLCM_Contrast (T3), GLCM_StdDev (T4), GLCM_Ang_2nd (T5), GLCM_Dissimilarity (T6), GLCM_Entropy (T7), GLCM_Mean (T8) | 8
Geometric features | Area (G1), Length/Width (G2), Compactness (G3), Density (G4), Main_direction (G5), Roundness (G6), Shape_index (G7), Asymmetry (G8) | 8
Table 3. Number of training and validation samples for different light types.

Light Category | Total Samples | Training Samples | Validation Samples
Window Light | 550 | 385 | 165
Neon Light | 530 | 371 | 159
Road Reflective Light | 300 | 210 | 90
Building Reflective Light | 530 | 371 | 159
Background | 530 | 371 | 159
Table 4. Optimal segmentation parameters for near, middle and far view regions.

Region Name | Scale Parameter | Shape Factor | Compactness
Far view region | 32 | 0.2 | 0.5
Middle view region | 42 | 0.4 | 0.7
Near view region | 52 | 0.3 | 0.8
Table 5. Far view region classification accuracy table (%). Each cell gives UA/PA.

Light Category | RF (UA/PA) | SVM (UA/PA) | KNN (UA/PA) | DT (UA/PA)
Window Light | 97.30/98.63 | 95.95/92.21 | 100.0/96.10 | 90.54/98.53
Neon Light | 93.24/97.18 | 90.54/100.0 | 89.19/100.0 | 93.24/93.24
Building Reflective Light | 95.46/88.73 | 90.91/85.71 | 93.94/81.58 | 95.46/86.30
Background | 95.46/96.92 | 96.97/96.97 | 87.88/95.08 | 95.46/96.92
OA | 95.36 | 93.57 | 92.86 | 93.57
Kappa | 0.9381 | 0.9142 | 0.9047 | 0.9143
Table 6. Middle view region classification accuracy table (%). Each cell gives UA/PA.

Light Category | RF (UA/PA) | SVM (UA/PA) | KNN (UA/PA) | DT (UA/PA)
Window Light | 93.48/95.56 | 95.65/86.27 | 89.13/83.67 | 84.72/79.59
Neon Light | 90.00/85.71 | 85.00/91.89 | 77.50/83.78 | 72.50/78.38
Road Reflective Light | 80.85/90.48 | 74.47/94.59 | 85.11/80.00 | 78.72/80.44
Building Reflective Light | 88.24/78.95 | 88.24/73.17 | 76.47/81.25 | 82.35/75.68
Background | 100.0/100.0 | 100.0/96.88 | 96.77/100.0 | 93.55/100.0
OA | 89.90 | 87.88 | 84.85 | 81.82
Kappa | 0.8732 | 0.8480 | 0.8090 | 0.7712
Table 7. Near view region classification accuracy table (%). Each cell gives UA/PA.

Light Category | RF (UA/PA) | SVM (UA/PA) | KNN (UA/PA) | DT (UA/PA)
Window Light | 93.02/93.02 | 95.35/75.93 | 95.35/82.00 | 83.72/75.00
Neon Light | 91.67/88.00 | 81.25/86.67 | 81.25/90.70 | 75.00/76.60
Road Reflective Light | 85.71/88.89 | 80.36/88.24 | 85.71/73.85 | 83.93/82.46
Building Reflective Light | 90.48/89.06 | 84.13/91.38 | 60.32/84.45 | 80.95/87.93
Background | 97.78/100.0 | 100.0/95.74 | 97.78/84.62 | 97.78/97.78
OA | 91.37 | 87.45 | 82.35 | 83.92
Kappa | 0.8916 | 0.8428 | 0.7793 | 0.7983
Table 8. Classification accuracy of Guanshaling.

Accuracy Index | Near View Region | Middle View Region | Far View Region
OA | 86.67% | 91.11% | 93.02%
Kappa | 0.8320 | 0.8889 | 0.9124
