Article

Research on Method of Farmland Obstacle Boundary Extraction in UAV Remote Sensing Images

Hui Fang, Hai Chen, Hao Jiang, Yu Wang, Yufei Liu, Fei Liu and Yong He

1 College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
2 Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Shaanxi 712100, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4431; https://doi.org/10.3390/s19204431
Submission received: 19 September 2019 / Revised: 6 October 2019 / Accepted: 11 October 2019 / Published: 12 October 2019
(This article belongs to the Section Remote Sensors)

Abstract

To address obstacle detection in farmland, this research adopted a method of farmland information acquisition based on unmanned aerial vehicle (UAV) landmark images, improved the obstacle boundary extraction method based on standard correlation coefficient template matching, and assessed the influence of image resolution on the precision of obstacle extraction. Analyzing RGB images of farmland acquired by UAV remote sensing, the research obtained the following results. Firstly, a method for automatic coordinate registration was applied; the average deviations in the X and Y directions were 4.6 cm and 12.0 cm respectively, versus 4.6 cm and 5.7 cm for manual registration in ArcGIS. Secondly, by improving the search step of traditional correlation coefficient template matching, the matching time was reduced from 12.2 s to 4.6 s, and the average deviation between the obstacle edge lengths computed from the corner points extracted by the algorithm and those obtained by actual measurement was 4.0 cm. Lastly, the original image was compressed at different ratios; at 735 × 2174 pixels (a resolution of about 6 cm), obstacle boundaries extracted by correlation coefficient template matching showed average deviations of boundary point I of the six obstacles of 0.87 cm and 0.95 cm in the X and Y directions respectively, and the whole detection process took about 3.1 s. In summary, the automatic coordinate registration and automatic obstacle boundary extraction algorithms designed in this research can be applied to a basic information collection system for navigation in future work. The optimal image resolution for obstacle boundary detection, chosen by balancing detection precision against detection time, can serve as a theoretical basis for selecting UAV remote sensing image resolution.

1. Introduction

Road obstacle detection methods can be divided by sensor into ultrasonic, machine vision, and laser radar (lidar) approaches [1]. In machine vision, a vision sensor replaces the human eye to acquire images along the vehicle's path; the images are then processed with methods such as color thresholding, edge detection, and stereoscopic matching to obtain information about obstacles [2,3,4,5]. This approach is cheap, easy to operate, and has no influence on the surrounding environment. However, two problems arise when it is applied to obstacle detection in farmland. Firstly, obstacles may be partly or wholly hidden by the crop when farmland images are taken by a vehicle-mounted camera. Secondly, farmland contains various obstacles, typically farm tools, impounding reservoirs, telegraph poles, people, and livestock. These vary greatly in shape, size, and color, and are difficult to recognize accurately by thresholding or boundary detection.
Remote sensing (RS) is a non-contact, remote detection technology [6]. Acquiring images over farmland by UAV remote sensing prevents obstacles from being hidden from the camera. The technology has developed rapidly in recent years and features operational flexibility, low cost, high temporal-spatial resolution, strong environmental adaptability, labor savings, high efficiency, and little environmental pollution. In agriculture, UAVs not only avoid the crop damage caused by large agricultural machinery but also address safety concerns in mechanized production, gaining increasing popularity among farmers and attention from scholars in China and abroad [7]. In 2002, NASA's Pathfinder-Plus UAV was used to monitor weed outbreaks, exposed irrigation, abnormal fertilization, and similar conditions [8]. Zarco-Tejada analyzed moisture during orange planting with UAVs, supporting water-saving irrigation [9]. Zhang used texture analysis to establish decision-tree rules for identifying seed maize fields [10]. Yao acquired multispectral images of wheat under different nitrogen levels, densities, and varieties by UAV remote sensing and derived a method for analyzing wheat nitrogen and growth features from the images. Gong proposed a rapeseed yield estimation module based on fully constrained mixed-pixel analysis of UAV remote sensing imagery [11]. Overall, UAV remote sensing has been widely applied to acquiring agricultural information, monitoring crop condition, and analyzing fertilization effects, but seldom to navigation. This research analyzes the feasibility of UAV remote sensing for the automatic navigation of agricultural machinery.
Remote sensing images acquired by UAV are usually analyzed in a Geographic Information System (GIS) together with machine learning methods, which is how coordinate registration of UAV remote sensing images and obstacle boundary extraction are typically performed. To plan navigation routes from the information extracted from remote sensing images, however, an algorithm is needed that performs these processes automatically. Template matching is an effective pattern recognition technique that directly reflects the similarity between an image and a template, thereby locating the target in the image and determining its coordinates [12]. Being accurate, robust to noise, and easy to implement, it has been applied successfully to target detection. Kherchaoui built templates from human face features and matched them to accomplish face detection [13]. Zhe proposed a handwritten digit recognition method based on template matching and an artificial neural network whose recognition accuracy for the digits 0 to 9 reached 99.6% [14]. Cheng proposed a target localization method mixing template matching and threshold segmentation, using template matching for coarse localization and threshold segmentation for fine localization [15,16]. In recent years, research at home and abroad has focused on increasing the computation speed of template matching, but few scholars have extended the method to new applications. Detection of static obstacles in UAV remote sensing imagery generally takes place before the agricultural machinery operates, so template matching is well suited: the precision requirement is relatively strict while the real-time requirement is relatively low.
This research first used UAV remote sensing to acquire an image of the target farmland. Section 1 introduces the background and related work. Section 2 describes the materials, such as the remote sensing platform and photographic equipment, and the methods, including the affine transformation algorithm and template matching. Section 3 presents the image processing results based on ArcGIS, the coordinate registration results based on the proposed algorithm, the obstacle boundary extraction results based on the improved template matching method, and finally the effect of image resolution on obstacle boundary extraction, where the minimum image size usable for boundary extraction is analyzed. Section 4 concludes that the automatic coordinate registration and automatic obstacle boundary extraction algorithms designed in this study can be used to build a basic information acquisition system for navigation, laying a foundation for the development of path planning and obstacle avoidance functions.

2. Materials and Methods

2.1. Remote Sensing Platform and Photography Equipment

The ultra-low-altitude remote sensing platform used in this study is an eight-rotor UAV independently developed by Zhejiang University [17]. The aircraft is shown in Figure 1a, and its specifications are listed in Table 1.
Considering weight and pixel count, a Sony A7RII full-frame mirrorless camera (Figure 1b) was mounted on the platform. The camera can take about 300 still pictures on one battery charge. Its Sony E-mount lens can focus automatically or manually.

2.2. Layout of Experimental Environment and Acquisition of Data

The aerial images of the test field were collected in a field in the west of the campus on the morning of 13 December 2018; the weather was clear and the wind speed low. The layout of the experimental environment is shown in Figure 2: each aluminum block measures 20 cm × 20 cm × 5 cm, and the umbrella-shaped surface of each navigation landmark has a thickness of 3 mm and a radius of 6 cm.
After the experimental environment was set up, the longitude and latitude of the center points of landmarks 0–26 were measured with the C94-M8P module of an RTK satellite positioning system. Each point was measured for 10 s at five readings per second, and the 50 readings per point were averaged.
After the latitude and longitude coordinates were measured, RGB images of the experimental plot were taken by UAV at an interval of 1 s, yielding 104 aerial photographs. Image mosaic technology was needed to synthesize a complete image for subsequent analysis; this was done with Agisoft PhotoScan 1.2.6. The stitched RGB image was stored in TIF format with a size of 448 MB and a resolution of about 1 cm, and could be read directly by ArcGIS for subsequent operations.

2.3. Affine Transformation Algorithm

Assume that $XOY$ is a Cartesian coordinate system, that $xo'y$ is a physical coordinate system, that the angle between the two coordinate systems is $\alpha$, and that the origin $o'$ is offset from $O$ by a transverse distance $A_0$ and a longitudinal distance $B_0$. The scales of the physical coordinate system (i.e., of the pictures taken in this study) are $m_x$ and $m_y$. According to the principles of graphics, the coordinate transformation is:

$$X = A_0 + A_1 x - A_2 y \tag{1}$$
$$Y = B_0 + B_1 x + B_2 y \tag{2}$$

where $A_1 = m_x \cos\alpha$, $A_2 = m_x \sin\alpha$, $B_1 = m_y \sin\alpha$, $B_2 = m_y \cos\alpha$.
Let $Q_x$ and $Q_y$ denote the differences between the control-point coordinates and the values computed from the transformation:

$$Q_x = X - (A_0 + A_1 x - A_2 y) \tag{3}$$
$$Q_y = Y - (B_0 + B_1 x + B_2 y) \tag{4}$$
According to the principle of least squares, minimizing the sums of the squares of $Q_x$ and $Q_y$ yields the two sets of normal equations:

$$\begin{cases} \sum X = A_0 n + A_1 \sum x - A_2 \sum y \\ \sum xX = A_0 \sum x + A_1 \sum x^2 - A_2 \sum xy \\ \sum yX = A_0 \sum y + A_1 \sum xy - A_2 \sum y^2 \end{cases} \tag{5}$$

$$\begin{cases} \sum Y = B_0 n + B_1 \sum x + B_2 \sum y \\ \sum xY = B_0 \sum x + B_1 \sum x^2 + B_2 \sum xy \\ \sum yY = B_0 \sum y + B_1 \sum xy + B_2 \sum y^2 \end{cases} \tag{6}$$
The coefficients are obtained by solving Equations (5) and (6), which determines the transformation for the whole image.
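As an illustration, the normal equations can also be bypassed by handing the overdetermined system of Equations (1) and (2) directly to a least-squares solver. The following is a minimal C++/OpenCV sketch, not the authors' code: the image coordinates of the control points are placeholders, while the geographic values are the four control points later listed in Table 2.

```cpp
// Least-squares estimation of the affine coefficients [A0,A1,A2] and
// [B0,B1,B2] from control points; a sketch with placeholder image coords.
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    // Image coordinates (x, y) of the four control points: placeholders.
    std::vector<cv::Point2d> img = {{120, 340}, {2050, 310}, {1980, 2900}, {150, 2870}};
    // Geographic coordinates (X, Y) of the same points (Table 2).
    std::vector<cv::Point2d> geo = {{7257.886, 3354312.445},
                                    {7182.646, 3354301.845},
                                    {7179.836, 3354318.106},
                                    {7254.313, 3354333.169}};
    int n = static_cast<int>(img.size());

    cv::Mat Mx(n, 3, CV_64F), bx(n, 1, CV_64F);
    cv::Mat My(n, 3, CV_64F), by(n, 1, CV_64F);
    for (int i = 0; i < n; ++i) {
        // One row of X = A0 + A1*x - A2*y per control point.
        Mx.at<double>(i, 0) = 1.0;
        Mx.at<double>(i, 1) = img[i].x;
        Mx.at<double>(i, 2) = -img[i].y;
        bx.at<double>(i, 0) = geo[i].x;
        // One row of Y = B0 + B1*x + B2*y per control point.
        My.at<double>(i, 0) = 1.0;
        My.at<double>(i, 1) = img[i].x;
        My.at<double>(i, 2) = img[i].y;
        by.at<double>(i, 0) = geo[i].y;
    }

    cv::Mat A, B;  // [A0, A1, A2] and [B0, B1, B2]
    cv::solve(Mx, bx, A, cv::DECOMP_SVD);  // least-squares solution
    cv::solve(My, by, B, cv::DECOMP_SVD);
    std::cout << "A = " << A.t() << "\nB = " << B.t() << std::endl;
    return 0;
}
```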

2.4. Template Matching

(1) Standard correlation coefficient matching
The template matching algorithm can be implemented with the function "matchTemplate" in OpenCV. Depending on how the match value is computed, there are six commonly used methods: square difference matching, standard square difference matching, correlation matching, standard correlation matching, correlation coefficient matching, and standard correlation coefficient matching. From the simplest (square difference matching) to the most complex (standard correlation coefficient matching), more accurate matching results come at the cost of longer computation time. To obtain higher detection accuracy (according to the official OpenCV documentation), the standard correlation coefficient matching method was used in this research.
The correlation coefficient measures the similarity between two vectors. Assuming the target template is a 5 × 5 image, it can be regarded as a 25-dimensional vector in which each dimension is the gray value of one pixel. Comparing this vector with each sub-region of the image and finding the sub-region with the largest standard correlation coefficient constitutes standard correlation coefficient matching, as shown in Equation (7):

$$\rho(c,r) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{i,j}-\bar{g}\right)\left(g'_{i+r,j+c}-\bar{g}'\right)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{i,j}-\bar{g}\right)^2 \times \sum_{i=1}^{m}\sum_{j=1}^{n}\left(g'_{i+r,j+c}-\bar{g}'\right)^2}} \tag{7}$$

where $g(x,y)$ is the gray function of the image, $(i,j)$ are the center pixel coordinates of the target window, $g'(x,y)$ is the gray function of the template, and $(i+r,j+c)$ are the center pixel coordinates of the search window. With $\bar{g} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g_{i,j}$ and $\bar{g}' = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g'_{i+r,j+c}$, we obtain Equation (8):

$$\rho(c,r) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{i,j} \cdot g'_{i+r,j+c}\right) - \frac{1}{mn}\left(\sum_{i=1}^{m}\sum_{j=1}^{n} g_{i,j}\right)\left(\sum_{i=1}^{m}\sum_{j=1}^{n} g'_{i+r,j+c}\right)}{\sqrt{\left[\sum_{i=1}^{m}\sum_{j=1}^{n} g_{i,j}^2 - \frac{1}{mn}\left(\sum_{i=1}^{m}\sum_{j=1}^{n} g_{i,j}\right)^2\right]\left[\sum_{i=1}^{m}\sum_{j=1}^{n} {g'}_{i+r,j+c}^{\,2} - \frac{1}{mn}\left(\sum_{i=1}^{m}\sum_{j=1}^{n} g'_{i+r,j+c}\right)^2\right]}} \tag{8}$$
Using the template as the search window, correlation coefficients are computed over the original image at a fixed step (usually 1 pixel). The closer the result is to 1, the higher the similarity between the region and the template.
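For reference, OpenCV's TM_CCOEFF_NORMED mode computes this normalized correlation coefficient. Below is a minimal sketch of the basic matching call, not the study's code; the file names are placeholders.

```cpp
// Standard correlation coefficient template matching with OpenCV;
// TM_CCOEFF_NORMED corresponds to Equation (8).
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("farmland.tif", cv::IMREAD_GRAYSCALE);  // placeholder
    cv::Mat templ = cv::imread("obstacle.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (image.empty() || templ.empty()) return 1;

    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    // maxLoc is the top-left corner of the best-matching window;
    // scores close to 1 indicate high similarity to the template.
    std::cout << "score " << maxVal << " at " << maxLoc << std::endl;
    return 0;
}
```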
(2) Scale-Invariant Feature Transform (SIFT) descriptor matching
SIFT [18] features are invariant to rotation, scale, and illumination changes, making the extracted features very stable. Extraction consists of four main steps:
a. Extremum detection in Difference-of-Gaussian (DoG) scale space. A DoG scale space is first constructed, with Gaussian blurs of different parameters representing the different scales in SIFT; this scale space is used to detect feature points that exist across scales.
b. Deletion of unstable extremum points. Two main types are removed: low-contrast extremum points and unstable edge response points.
c. Determination of the main direction of each feature point. The gradient magnitude and direction of each pixel are calculated in a neighborhood centered on the feature point with a radius of 3 × 1.5σ (σ being the feature point's scale), and the gradient magnitudes are accumulated in a histogram whose horizontal axis is gradient direction and whose vertical axis is the accumulated magnitude for that direction. The direction corresponding to the highest peak of the histogram is taken as the feature's direction.
d. Generation of feature point descriptors. The coordinate axes are first rotated to the feature point's direction; then the gradient magnitudes and directions of the pixels in a 16 × 16 window centered on the feature point are divided into sixteen 4 × 4 blocks, and an eight-direction histogram is computed for the pixels of each block. Together these form a 128-dimensional feature vector.
After obtaining the key points of the two images, the feature points are matched by their descriptor distances, and the size and average distance of the ten best key-point matches are used to compute a matching score:

$$\mathrm{Score} = \frac{\sum_{i=1}^{10} \mathrm{distance}_i}{10 \times \mathrm{size}_{\mathrm{match}}} \tag{9}$$

The score represents the matching degree between the obstacle image and the UAV image, so the best-matched area on the UAV image, i.e., the one containing an object most similar to the obstacle image, can be found.
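A minimal sketch of this scoring, under our assumptions: OpenCV 4.4+ with SIFT in the main modules, file names as placeholders, and "size_match" read as the number of descriptor matches. The original equation is ambiguous on that last point, so this is one plausible interpretation, not the authors' code.

```cpp
// SIFT keypoint matching and an Equation (9)-style score for one window.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Candidate window from the UAV image and the obstacle template; placeholders.
    cv::Mat window = cv::imread("window.png", cv::IMREAD_GRAYSCALE);
    cv::Mat obstacle = cv::imread("obstacle.png", cv::IMREAD_GRAYSCALE);
    if (window.empty() || obstacle.empty()) return 1;

    // SIFT keypoints and 128-dimensional descriptors (OpenCV >= 4.4).
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    sift->detectAndCompute(window, cv::noArray(), kp1, d1);
    sift->detectAndCompute(obstacle, cv::noArray(), kp2, d2);
    if (d1.empty() || d2.empty()) return 1;

    // Brute-force L2 matching; cv::DMatch sorts by descriptor distance.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);
    std::sort(matches.begin(), matches.end());

    // Mean distance of the ten best matches, normalized by the number of
    // matches ("size_match" is our interpretation of the garbled equation).
    double sum = 0.0;
    int k = std::min(10, static_cast<int>(matches.size()));
    for (int i = 0; i < k; ++i) sum += matches[i].distance;
    double score = (sum / 10.0) / static_cast<double>(matches.size());
    std::cout << "window score: " << score << std::endl;  // lower = closer match here
    return 0;
}
```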

3. Results and Discussion

3.1. Image Processing Results Based on ArcGIS

First, coordinate registration was performed on the remote sensing image. The essence of coordinate registration is to establish the transformation between user coordinates and physical coordinates. Registration was implemented with the Georeferencing tool in ArcGIS, with the control points selected manually.
In this study, a total of four control points (the centers of landmarks) were set. When ArcGIS matches coordinates, the coordinate system of the input control points defaults to Cartesian coordinates, so directly entering latitude and longitude could produce a large error; a map projection must therefore be applied before registration. The Gauss–Krüger projection is generally used in geographic information systems in Hangzhou, China [19], because of Hangzhou's low latitude and the projection's high accuracy there.
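A hedged sketch of the projection step using the PROJ library, which is our choice, not a tool the paper names. EPSG:4549 (CGCS2000 / 3-degree Gauss–Krüger CM 120E) is an assumed zone covering Hangzhou; with it, the easting of control point 0 comes out near 507,257.9 m, which appears to match Table 2's X of 7257.886 m once the 500 km false easting is dropped.

```cpp
// Lat/lon to Gauss-Krueger plane coordinates via PROJ (assumed CRS EPSG:4549).
#include <proj.h>
#include <cstdio>

int main() {
    PJ_CONTEXT *ctx = proj_context_create();
    PJ *p = proj_create_crs_to_crs(ctx, "EPSG:4326", "EPSG:4549", nullptr);
    if (!p) return 1;
    // Normalize axis order so input is always (lon, lat).
    PJ *norm = proj_normalize_for_visualization(ctx, p);
    proj_destroy(p);

    // Control point 0 from Table 2: L = 120.0754564, B = 30.3084806.
    PJ_COORD in = proj_coord(120.0754564, 30.3084806, 0, 0);
    PJ_COORD out = proj_trans(norm, PJ_FWD, in);
    std::printf("easting = %.3f  northing = %.3f\n", out.xy.x, out.xy.y);

    proj_destroy(norm);
    proj_context_destroy(ctx);
    return 0;
}
```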
The latitude and longitude of the control points are shown in Table 2, where L and B represent the longitude and latitude of the control points, respectively, and X and Y represent the abscissa and ordinate after transformation into plane coordinates. Since the plane coordinates are in meters, values are given to three decimal places. The serial numbers correspond to those in Figure 2.
Eight marker points other than the four registration points were selected; their geographic coordinates (X,Y) were read from the registered map, and their actual coordinates (X′,Y′) were obtained by the Gauss–Krüger projection, as shown in Table 3.
As Table 3 shows, the maximum deviation in the X direction between the geographic coordinates read from the registered map at the eight marker points and the geographic coordinates converted from the actual latitude and longitude was 9.7 cm, with an average deviation of 4.6 cm; the maximum deviation in the Y direction was 13.7 cm, with an average deviation of 5.7 cm. There are two reasons for these deviations:
(1) There was some deviation between the actual and measured latitude and longitude, including the positioning deviation of the C94-M8P module itself and the deviation between the mobile station position and the actual center point of the marker;
(2) During registration and accuracy inspection, the center of each marker was judged by the tester's naked eye; although this is fairly accurate, some error still occurred.
In this research, the water-storage wells along the farmland were used as the obstacles (Figure 3), and their corner points were extracted manually on the ArcGIS-registered image. The plane coordinates (X,Y) obtained from the registration, in meters, describe the position of each obstacle boundary, as shown in Table 4.

3.2. Results of Obstacle Boundary Extraction Based on Improved Template Matching Method

Based on traditional standard correlation coefficient template matching, this research improved the algorithm for the specific experimental conditions. The original image size was 4408 × 13,047 pixels, so searching with a step of one pixel would take a long time. A rough matching step was therefore used to determine the approximate positions of the obstacles: the original image was searched in steps of ten pixels, yielding six regions of interest containing obstacles, whose coordinates in the original image coordinate system were recorded separately. An accurate matching step then searched the six regions of interest in steps of one pixel to obtain the six obstacle boundaries, recording the image coordinates within each region. Finally, the obstacle boundaries were mapped back into the original image by coordinate transformation to obtain the specific position information. The improved standard correlation coefficient algorithm reduced the matching time from 12.2 s to 4.6 s. The specific process is shown in Figure 4, and a sketch of the coarse-to-fine scheme is given below.
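The following minimal sketch is not the authors' code: it approximates the stride-10 coarse pass by matching on 1/10-scale copies rather than stepping the full-resolution image by ten pixels, which captures the same coarse-to-fine idea. File names and the ROI margin are placeholders.

```cpp
// Coarse-to-fine template matching: coarse pass at 1/10 scale, then a
// 1-pixel-step refinement restricted to a padded region of interest.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("farmland.tif", cv::IMREAD_GRAYSCALE);  // placeholder
    cv::Mat templ = cv::imread("obstacle.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (image.empty() || templ.empty()) return 1;

    // Coarse pass on 1/10-scale copies (stand-in for the 10-pixel step).
    cv::Mat smallImg, smallTpl, coarse;
    cv::resize(image, smallImg, cv::Size(), 0.1, 0.1, cv::INTER_AREA);
    cv::resize(templ, smallTpl, cv::Size(), 0.1, 0.1, cv::INTER_AREA);
    cv::matchTemplate(smallImg, smallTpl, coarse, cv::TM_CCOEFF_NORMED);
    cv::Point coarseLoc;
    cv::minMaxLoc(coarse, nullptr, nullptr, nullptr, &coarseLoc);

    // Map back to full resolution and pad by a placeholder 20-pixel margin.
    cv::Rect roi(coarseLoc.x * 10 - 20, coarseLoc.y * 10 - 20,
                 templ.cols + 40, templ.rows + 40);
    roi &= cv::Rect(0, 0, image.cols, image.rows);  // clip to image bounds

    // Fine pass: 1-pixel-step matching inside the region of interest only.
    cv::Mat fine;
    cv::matchTemplate(image(roi), templ, fine, cv::TM_CCOEFF_NORMED);
    cv::Point fineLoc;
    cv::minMaxLoc(fine, nullptr, nullptr, nullptr, &fineLoc);
    std::cout << "obstacle at " << roi.tl() + fineLoc << std::endl;
    return 0;
}
```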
Besides standard correlation coefficient template matching, we also tried SIFT descriptor matching for the rough matching stage in Figure 4; the results are shown in Figure 5. Since SIFT matching operates on the whole image, which contains many other objects, we slid a window across the original image, calculated the score between each window and the obstacle image, and selected the window with the best score (the green one in Figure 5), giving the approximate position of the obstacle in the original image.
SIFT-based matching gave a more stable result and required less input, but consumed far more time (ten times or more) than standard correlation coefficient template matching. Since both methods were used only for rough matching, accuracy is unaffected as long as the obstacles are found correctly, so standard correlation coefficient template matching was chosen for the subsequent analysis in this experiment.
The extraction results based on the standard correlation coefficient template matching method are shown in Table 5.
To compare the accuracy of the two methods for obstacle boundary extraction, one side of each obstacle (the straight-line distance from point I to point IV in Figure 3) was selected as the research object, and the corner coordinates obtained by the two methods were compared with the actual measured results, as shown in Table 6.
Table 6 shows that the maximum deviation between the ArcGIS-based side lengths and the actual measured lengths was 9.6 cm, the minimum 0 cm, and the average 4.7 cm; for the template matching method, the maximum deviation was 6.3 cm, the minimum 1.1 cm, and the average 4.0 cm. The difference between the average deviations of the two methods was only 0.7 cm. It can therefore be concluded that template matching extracts obstacle boundaries from UAV remote sensing images slightly more accurately and can be used in the obstacle avoidance module of an automatic navigation system.

3.3. Results of Coordinate Registration Based on Algorithm

The image processing for coordinate registration was implemented in C++ with the OpenCV library in the Visual Studio environment. The specific steps were as follows:
(1) Get the regions of interest (ROIs) containing obstacles. As explained in Section 3.2, a template matching method was used to obtain the obstacle ROIs.
(2) Obtain the image coordinates of the registration markers. The specific flow is shown in Figure 6 (a sketch follows this list): the center is extracted as the center of gravity of the white portion of the binary image. The extracted center pixel gives the image coordinates of the registration point, and its geographic coordinates are obtained by the forward Gauss–Krüger computation from the measured latitude and longitude.
(3) Calculate the affine transformation coefficients from the four registration fiducial points, using Equations (1)–(6).
(4) Convert all image coordinates to geographic coordinates X, Y using the obtained parameters.
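The center-of-gravity extraction in step (2) can be sketched as follows. This is a minimal sketch, not the authors' code; the file name is a placeholder and Otsu thresholding is our assumption for the binarization step Figure 6 describes.

```cpp
// Registration marker center as the centroid of the white region
// in a binarized patch, computed from image moments.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    cv::Mat patch = cv::imread("marker_roi.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (patch.empty()) return 1;

    // Binarize; Otsu picks the threshold automatically (our assumption).
    cv::Mat bin;
    cv::threshold(patch, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Center of gravity of the white portion via image moments.
    cv::Moments m = cv::moments(bin, /*binaryImage=*/true);
    if (m.m00 == 0) return 1;  // no white pixels found
    cv::Point2d center(m.m10 / m.m00, m.m01 / m.m00);
    std::cout << "marker center (image coords): " << center << std::endl;
    return 0;
}
```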
After registration, the same eight marker points as in Table 3 were used to verify the matching accuracy; the results are shown in Table 7. The maximum deviation in the X direction between the geographic coordinates read from the registered map at the eight marker points and the geographic coordinates converted from the actual latitude and longitude was 9.2 cm, with an average deviation of 4.6 cm; the maximum deviation in the Y direction was 24.3 cm, with an average of 12.0 cm. Compared with the ArcGIS results in Table 3, the registration accuracy in the X direction was similar (average deviation 4.6 cm), but the automatic registration in the Y direction was worse. The reason might be that smear in the UAV imaging made the imaged markers inconsistent with their actual shape, especially in the Y direction, causing a large deviation in the algorithm's center extraction; the algorithm therefore needs further improvement. Nevertheless, compared with manual registration in ArcGIS, automatic registration saves time and labor and is better suited to automatic route planning in automatic navigation.

3.4. Effect of Image Resolution on the Extraction of Obstacle Boundary

In this research, the original image resolution was 4408 × 13,047 pixels. At this size, image processing was relatively slow: even with the improved template matching algorithm, boundary extraction still took more than 2 s, which could affect real-time information acquisition. Reducing the image resolution obviously shortens processing time, but it also lowers the accuracy of obstacle boundary detection. This section therefore discusses the influence of image resolution on the results.
The original image was compressed in OpenCV at ratios of 1/2, 1/4, 1/6, 1/8, 1/10, and 1/12, and correlation coefficient template matching was applied to extract the obstacle boundaries from each compressed image. Observation showed that at 1/12 scale, although the naked eye could still clearly identify the obstacles, the error rate of the algorithm's obstacle extraction reached 50%. We therefore recorded only the processing times for the first five ratios and the pixel coordinates of boundary point I, comparing them with those from the original image. The results are shown in Figure 7 and Table 8, and a sketch of the experiment follows.
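A minimal sketch of the compression-and-timing experiment, under our assumptions (placeholder file names; only the matching pass is timed), not the authors' benchmark code:

```cpp
// Shrink the image by each ratio and time template matching at that scale.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <chrono>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("farmland.tif", cv::IMREAD_GRAYSCALE);  // placeholder
    cv::Mat templ = cv::imread("obstacle.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (image.empty() || templ.empty()) return 1;

    const double ratios[] = {1.0 / 2, 1.0 / 4, 1.0 / 6, 1.0 / 8, 1.0 / 10};
    for (double r : ratios) {
        cv::Mat img, tpl, result;
        cv::resize(image, img, cv::Size(), r, r, cv::INTER_AREA);
        cv::resize(templ, tpl, cv::Size(), r, r, cv::INTER_AREA);

        auto t0 = std::chrono::steady_clock::now();
        cv::matchTemplate(img, tpl, result, cv::TM_CCOEFF_NORMED);
        cv::Point loc;
        cv::minMaxLoc(result, nullptr, nullptr, nullptr, &loc);
        auto t1 = std::chrono::steady_clock::now();

        std::cout << "ratio " << r << ": best match at " << loc << ", "
                  << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    }
    return 0;
}
```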
Figure 7 and Table 8 show that as the image size was reduced, the processing time dropped sharply: at 1/10 of the original size, extracting the boundaries of the six obstacles took only 2.6 s. However, extraction accuracy also decreased with size. Taking the boundary extracted from the original image as the standard, at 1/2 scale the average deviations of point I of the six obstacles in the X and Y directions were only 0.22 cm and 0.43 cm, while at 1/10 scale they reached 1.73 cm and 2.60 cm. There are two main reasons: first, reduced image resolution increases the deviation in coordinate registration; second, the detection error of the algorithm grows as resolution decreases.
In the automatic navigation of agricultural machinery, the accuracy deviation is generally required to be less than 2 cm [20]. Balancing detection accuracy against detection time, the 735 × 2174 pixel version of the remote sensing image used in this study (1/6 of the original, with a resolution of about 6 cm) can be used to detect obstacles; the average deviations of point I of the six obstacles in the X and Y directions were 0.87 cm and 0.95 cm respectively, and the detection took about 3.1 s.

4. Conclusions

Based on image processing and template matching technology, automatic coordinate registration and obstacle boundary extraction algorithms were designed. The results were compared with those obtained manually in ArcGIS, and the following conclusions can be drawn.
(1) An RGB image of farmland in the west area of the campus was acquired with a Sony A7RII camera on an eight-rotor UAV, and coordinate registration and obstacle boundary extraction were completed in ArcGIS. The average deviations between the geographic coordinates read from the registered map at eight marker points and the geographic coordinates converted from the actual latitude and longitude were 4.6 cm in the X direction and 5.7 cm in the Y direction.
(2) The designed algorithm performed coordinate registration automatically, with average deviations at the eight marker points of 4.6 cm in the X direction and 12.0 cm in the Y direction.
(3) The traditional correlation coefficient template matching method was improved for the specific conditions, greatly reducing image processing time, and an automatic obstacle boundary extraction algorithm was designed on this basis. The average deviation between the obstacle edge lengths extracted by template matching and the actual measured edge lengths was 4.0 cm.
(4) The original image was compressed at ratios of 1/2, 1/4, 1/6, 1/8, and 1/10, and correlation coefficient template matching was applied to each compressed image to extract the obstacle boundaries. Comparison with the original image showed that at 735 × 2174 pixels (a resolution of about 6 cm), the mean deviations of boundary point I were 0.87 cm and 0.95 cm in the X and Y directions, respectively, and the whole detection process took about 3.1 s.
In conclusion, the automatic coordinate registration and automatic obstacle boundary extraction algorithms designed in this study can be used to build a basic information acquisition system for navigation in the future, laying a foundation for the development of path planning and obstacle avoidance functions. The optimal image size for obstacle boundary detection, proposed after comprehensively weighing detection accuracy and detection time, provides a theoretical basis for the selection of UAV remote sensing image resolution.

Author Contributions

Y.H., H.F. and H.J. conceived and designed the experiments; H.J., Y.W., Y.L. and H.C. performed the experiments; Y.H., H.J., F.L. and H.C. analyzed the data; H.J. wrote the draft manuscript; H.J., H.C. and H.F. revised the manuscript.

Funding

This research was funded by Key Research and Development Projects in Zhejiang Province (grant number 2017C02031); Subproject of National Science and Technology Support Plan (grant number 2017YFD0700401); Major Science and Technology Projects of Ningxia Hui Autonomous Region Key R&D Program (grant number 2017BY067); Research Funds for the Central Universities; Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs (2018AIOT-03).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, Y.; Jiang, H.; Fang, H.; Wang, Y.; Liu, Y. Research progress of intelligent obstacle detection methods of vehicles and their application on agriculture. Trans. Chin. Soc. Agric. Eng. 2018, 34, 21–32. [Google Scholar]
  2. Franke, U.; Heinrich, S. Fast obstacle detection for urban traffic situations. IEEE Trans. Intell. Transp. Syst. 2002, 3, 173–181. [Google Scholar] [CrossRef]
  3. Huh, K.; Park, J.; Hwang, J.; Hong, D. A stereo vision-based obstacle detection system in vehicles. Opt. Lasers Eng. 2008, 46, 168–178. [Google Scholar] [CrossRef]
  4. Oniga, F.; Nedevschi, S. Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection. IEEE Trans. Veh. Technol. 2010, 59, 1172–1182. [Google Scholar] [CrossRef]
  5. Wei, W.; Rui, X. Study on edge detection method. Comput. Eng. Appl. 2006, 42, 88–91. [Google Scholar]
  6. Sun, J.B. Principle and Applications of Remote Sensing; Wuhan University Press: Wuhan, China, 2009; pp. 25–28. [Google Scholar]
  7. He, Y.; Ceng, H.Y.; He, L.W.; Liu, F.; Nie, P.C. Agricultural UAV Technology and Applications; Science Press: Beijing, China, 2017; pp. 150–160. [Google Scholar]
  8. Herwitz, S.R.; Johnson, L.F.; Dunagan, S.E.; Higgins, R.G.; Sullivan, D.V.; Zheng, J.; Lobitz, B.M.; Leung, J.G.; Gallmeyer, B.A.; Aoyagi, M.; et al. Imaging from an unmanned aerial vehicle: Agricultural surveillance and decision support. Comput. Electron. Agric. 2004, 44, 49–61. [Google Scholar] [CrossRef]
  9. Zarcotejada, P.J.; GonzálezDugo, V.; Berni, J.A.J. Fluorescence temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337. [Google Scholar] [CrossRef]
  10. Zhang, C.; Qiao, M.; Liu, Z.; Jin, H.; Ning, M.; Sun, H. Texture scale analysis and identification of seed maize fields based on UAV and satellite remote sensing images. Trans. Chin. Soc. Agric. Eng. 2017, 33, 98–104. [Google Scholar]
  11. Gong, L.; Xiao, J.; Hou, J.Y.; Duan, B. Rape yield estimation research based on spectral analysis for UAV image. J. Geomat. 2017, 6, 44–49. [Google Scholar]
  12. Stefano, L.D.; Mattoccia, S. Fast template matching using bounded partial correlation. Mach. Vis. Appl. 2003, 13, 213–221. [Google Scholar]
  13. Bhandarkar, S.M.; Luo, X. Integrated detection and tracking of multiple faces using particle filtering and optical flow-based elastic matching. Comput. Vis. Image Underst. 2009, 113, 708–725. [Google Scholar] [CrossRef]
  14. Zhe, X.; Lou, W.G. Artificial Neural Network model for handwritten digit recognition based on template matching. Comput. Eng. Appl. 2008, 44, 226–228. [Google Scholar]
  15. Cheng, Y.J.; Ren, X.H.; Zheng, Z.G.; Cai, S.T.; Xu, L.; Fan, S. An hybrid template matching and threshold segmentation target localization method and application. Video Eng. 2018, 42, 73–76. [Google Scholar]
  16. Dey, S.; Motlicek, P.; Madikeri, S.; Ferras, M. Template-matching for text-dependent speaker verification. Speech Commun. 2017, 88, 96–105. [Google Scholar] [CrossRef] [Green Version]
  17. Yin, W.X.; Zhang, C.; Zhu, H.; Zhao, Y.; He, Y. Application of near-infrared hyperspectral imaging to discriminate different geographical origins of Chinese wolfberries. PLoS ONE 2017, 12, e0180534. [Google Scholar] [CrossRef] [PubMed]
  18. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  19. Li, G.; Wang, Y.; Guo, L.F.; He, Y.; Tong, J. Improved pure pursuit algorithm for rice transplanter path tracking. Trans. Chin. Soc. Agric. Mach. 2018, 49, 28–33. [Google Scholar]
  20. Reid, J.F.; Zhang, Q.; Noguchi, N.; Dickson, M. Agricultural automatic guidance research in North America. Comput. Electron. Agric. 2000, 25, 155–167. [Google Scholar] [CrossRef]
Figure 1. Sensing platform and photography equipment. (a) Eight-rotor UAV. (b) Sony A7RII full-frame mirrorless camera.
Figure 2. Diagram of experimental environment.
Figure 3. Obstacle number and corner sequence.
Figure 4. Flow chart of obstacle boundary extraction based on correlation coefficient template matching.
Figure 5. Traversing the UAV image to find the best-matched area.
Figure 6. Flow chart of center extraction of registration markers.
Figure 7. Boundary point I extraction from different pixel images.
Table 1. Parameters of the eight-rotor unmanned aerial vehicle (UAV).

Performance | Parameter | Performance | Parameter
Fuselage diameter | 1.1 m | Max-load | 8 kg
Fuselage height | 0.35 m | Max-altitude | 500 m
Fuselage weight | 3.5 kg | Max-endurance | 25 min
Material | Carbon fiber | Remote sensing platform | Three-axis brushless gimbal
Table 2. Latitude and longitude of control points and their plane coordinates.

Number | B/° | L/° | X/m | Y/m
0 | 30.3084806 | 120.0754564 | 7257.886 | 3,354,312.445
8 | 30.3083855 | 120.0746741 | 7182.646 | 3,354,301.845
10 | 30.3085322 | 120.0746454 | 7179.836 | 3,354,318.106
23 | 30.3086676 | 120.0754194 | 7254.313 | 3,354,333.169
Table 3. Accuracy analysis of ArcGIS registration results.

Number | X/m | X′/m | Deviation/m | Y/m | Y′/m | Deviation/m
9 | 7180.247 | 7180.296 | 0.049 | 3,354,306.324 | 3,354,306.461 | 0.137
11 | 7182.188 | 7182.167 | 0.021 | 3,354,313.827 | 3,354,313.954 | 0.127
12 | 7191.441 | 7191.398 | 0.043 | 3,354,317.348 | 3,354,317.397 | 0.049
13 | 7211.984 | 7211.928 | 0.056 | 3,354,311.750 | 3,354,311.775 | 0.025
17 | 7234.984 | 7234.991 | 0.007 | 3,354,316.475 | 3,354,316.483 | 0.008
19 | 7225.601 | 7225.668 | 0.067 | 3,354,320.290 | 3,354,320.272 | 0.018
21 | 7237.917 | 7237.820 | 0.097 | 3,354,328.555 | 3,354,328.599 | 0.044
22 | 7242.751 | 7242.724 | 0.027 | 3,354,330.839 | 3,354,330.795 | 0.044
Average | | | 0.046 | | | 0.057
Table 4. Corner coordinates of obstacles extracted by ArcGIS (corners I–IV follow the sequence in Figure 3).

Number | X_I/m | Y_I/m | X_II/m | Y_II/m | X_III/m | Y_III/m | X_IV/m | Y_IV/m
1 | 7247.542 | 3,354,307.763 | 7247.222 | 3,354,309.348 | 7248.532 | 3,354,309.566 | 7248.834 | 3,354,307.936
2 | 7234.359 | 3,354,305.454 | 7234.110 | 3,354,307.096 | 7235.415 | 3,354,307.292 | 7235.694 | 3,354,305.686
3 | 7221.311 | 3,354,303.263 | 7221.321 | 3,354,304.900 | 7222.707 | 3,354,305.038 | 7222.694 | 3,354,303.436
4 | 7208.761 | 3,354,301.172 | 7208.563 | 3,354,302.577 | 7209.853 | 3,354,302.787 | 7210.169 | 3,354,301.197
5 | 7195.928 | 3,354,298.716 | 7195.727 | 3,354,300.303 | 7197.023 | 3,354,300.535 | 7197.230 | 3,354,298.947
6 | 7183.296 | 3,354,296.327 | 7182.985 | 3,354,298.010 | 7184.296 | 3,354,298.153 | 7184.602 | 3,354,296.614
Table 5. Coordinates of obstacles extracted by the template matching algorithm (corners I–IV follow the sequence in Figure 3).

Number | X_I/m | Y_I/m | X_II/m | Y_II/m | X_III/m | Y_III/m | X_IV/m | Y_IV/m
1 | 7247.521 | 3,354,307.789 | 7247.228 | 3,354,309.373 | 7248.520 | 3,354,309.621 | 7248.812 | 3,354,308.037
2 | 7234.381 | 3,354,305.413 | 7234.074 | 3,354,307.077 | 7235.405 | 3,354,307.333 | 7235.712 | 3,354,305.669
3 | 7221.383 | 3,354,303.177 | 7221.383 | 3,354,304.801 | 7222.714 | 3,354,305.057 | 7222.677 | 3,354,303.433
4 | 7208.822 | 3,354,300.910 | 7208.523 | 3,354,302.520 | 7209.867 | 3,354,302.778 | 7210.166 | 3,354,301.168
5 | 7195.968 | 3,354,298.644 | 7195.677 | 3,354,300.214 | 7197.014 | 3,354,300.471 | 7197.305 | 3,354,298.901
6 | 7183.303 | 3,354,296.290 | 7182.999 | 3,354,297.927 | 7184.330 | 3,354,298.183 | 7184.633 | 3,354,296.546
Table 6. Obstacle boundary length (m) obtained by different extraction methods.

Method | 1 | 2 | 3 | 4 | 5 | 6 | Average
ArcGIS extraction | 1.304 | 1.355 | 1.393 | 1.408 | 1.322 | 1.337 | 1.353
Template matching extraction | 1.315 | 1.355 | 1.319 | 1.369 | 1.361 | 1.354 | 1.346
Actual measurement | 1.304 | 1.309 | 1.311 | 1.312 | 1.298 | 1.302 | 1.306
Deviation (ArcGIS) | 0 | 0.046 | 0.082 | 0.096 | 0.024 | 0.035 | 0.047
Deviation (template matching) | 0.011 | 0.046 | 0.008 | 0.057 | 0.063 | 0.052 | 0.040
Table 7. Analysis of automatic registration results.

Number | X/m | X′/m | Deviation/m | Y/m | Y′/m | Deviation/m
9 | 7180.254 | 7180.296 | 0.042 | 3,354,306.218 | 3,354,306.461 | 0.243
11 | 7182.185 | 7182.167 | 0.018 | 3,354,313.729 | 3,354,313.954 | 0.225
12 | 7191.409 | 7191.398 | 0.011 | 3,354,317.274 | 3,354,317.397 | 0.123
13 | 7211.979 | 7211.928 | 0.051 | 3,354,311.702 | 3,354,311.775 | 0.073
17 | 7234.908 | 7234.991 | 0.083 | 3,354,316.483 | 3,354,316.483 | 0
19 | 7225.714 | 7225.668 | 0.046 | 3,354,320.142 | 3,354,320.272 | 0.130
21 | 7237.912 | 7237.820 | 0.092 | 3,354,328.520 | 3,354,328.599 | 0.079
22 | 7242.751 | 7242.724 | 0.027 | 3,354,330.881 | 3,354,330.795 | 0.086
Average | | | 0.046 | | | 0.120
Table 8. Boundary point I extraction from different pixel images.

Scaling Multiple | Average Deviation in X Direction/cm | Average Deviation in Y Direction/cm | Image Processing Time/s
1 | — | — | 12.2
1/2 | 0.22 | 0.43 | 5.1
1/4 | 1.08 | 0.65 | 3.4
1/6 | 0.87 | 0.95 | 3.0
1/8 | 1.95 | 2.17 | 2.7
1/10 | 1.73 | 2.60 | 2.6
