Article

Quadratic Curve Fitting-Based Image Edge Line Segment Detection: A Novel Methodology

1 College of Automation, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
2 Key Laboratory of Nondestructive Detection and Monitoring Technology for High Speed Transportation Facilities, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
3 College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8654; https://doi.org/10.3390/app13158654
Submission received: 20 June 2023 / Revised: 20 July 2023 / Accepted: 25 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Computer Graphics and Artificial Intelligence)

Abstract

In the field of computer vision, edge line segment detection in images is widely used in tasks such as 3D reconstruction and simultaneous localization and mapping. Many existing algorithms focus primarily on detecting straight line segments in undistorted images, but they do not perform well on edge line segments in distorted images. To address this gap, the present study introduces a novel line segment detection method founded on the principles of quadratic fitting. The proposed method utilizes the inherent property that a straight line in three-dimensional space projects as a quadratic curve in a distorted two-dimensional image. It applies an iterative estimation process, facilitated by a hypothesis-and-validation mechanism, to ascertain the optimal parameters of the quadratic curve that aligns with the edge contour. The optimal model is then employed to identify the line segments encompassed within the edge contour. The experimental assessment of this novel method incorporates both distorted and distortion-free image datasets. The method eliminates the need for preliminary processing to remove distortion, making it applicable to both distorted and non-distorted images. In addition, the experimental results on the dataset indicate that the proposed algorithm achieves an average computational efficiency 27 times faster than traditional algorithms. Thus, this research will contribute to line segment detection in computer vision.

1. Introduction

Detecting line segments within images constitutes a significant challenge within the field of computer vision [1], and it has extensive ramifications across diverse applications such as camera calibration [2], pose estimation [3], image matching [4], image stitching [5], 3D object detection [6], and visual simultaneous localization and mapping (SLAM) [7].
Contemporary scholarly endeavors in the domain of edge line segment detection predominantly revolve around the identification of straight line segments within undistorted (or marginally distorted) images. Broadly, these techniques can be classified into the following categories: Hough transform [8], edge linking [9], and methodologies premised on gradient direction and magnitude [10]. The Hough transform technique translates image coordinates into a parameter space and identifies straight line segments through a process of accumulation voting, thereby delivering impressive accuracy and reliability. However, the technique necessitates parameter computation for each edge pixel within the image, leading to considerable computational load and compromising real-time efficiency. Furthermore, the Hough transform method relies solely on parameters to represent a line, rendering the precise localization of line segment endpoints challenging. In response to these shortcomings, Almazan et al. [11] implemented a perception grouping methodology based on the Hough transform technique to ascertain line segment endpoints and achieved commendable results. Conversely, edge linking methodologies extract initial line segments by pixel traversal, exploiting the substantial grayscale variations at image edges. The subsequent extension and merging of these initial segments are carried out using parameter fitting to yield reasonable and complete line segments. In comparison to the Hough transform method, edge linking techniques display superior noise resistance and computational efficiency. However, the quality of results obtained from edge linking methodologies heavily hinges on the reliability of the edge operator, potentially leading to a loss of some original image information. To counteract this issue, Dong et al. [12] introduced indicators of edge main direction and image gradient direction during the edge detection process, enhancing the line segment detection's resistance to interference in edge linking methodologies. Lu et al. [13] proposed an adaptive-parameter Canny operator to extract image edges, thereby enhancing the reliability of line segment detection in edge linking methodologies. Additionally, the gradient direction-based technique [14], unlike its counterparts, computes the gradient direction of pixels directly and groups pixels with similar gradients to establish regions for line segment detection. Yet, this method demands further refinement as it overlooks gradient magnitude information. Building upon the work of Burns et al. [14], Grompone et al. [10] proposed the classic line segment detector (LSD) method. This method considers both the gradient direction and the gradient magnitude of the image. By selecting pixels with extremal gradient magnitudes as seed points, it employs a region-growing approach to generate candidate line segment regions, and by combining hypothesis verification, it extracts the final line segments. The parameters of the LSD algorithm are adaptive, resulting in high accuracy and fast detection speed. Consequently, it has become a benchmark for modern line segment detection algorithms [15]. Extending the LSD algorithm, Salaun et al. [16] proposed a multiscale line segment detection algorithm, and Luo et al. [17] developed a line segment detection algorithm for high-resolution color images, delivering satisfactory results.
Based on the literature review presented above, it is evident that existing edge line segment detection methodologies are primarily designed for undistorted images. Hence, prior to employing these methodologies on distorted images, pre-processing of the images would be required in addition to the standard denoising procedures [18]. This preliminary processing step not only amplifies the computational workload, especially for cameras with wide-angle lenses (such as fisheye lenses and spherical lenses), but also fails to effectively preserve image information in wide-angle views. Consequently, detecting edge line segments and performing feature correction on the original distorted images presents a more effective approach for enhancing the speed of distortion correction, particularly for methods requiring high efficiency, such as SLAM methods (e.g., ORB-SLAM [19] and VINS-MONO [20]). In response to these challenges, this paper presents a novel image edge line segment detection technique based on quadratic fitting, termed the QF-LSD method. Experimental outcomes demonstrate that, in comparison to mainstream line segment detection algorithms, the proposed method is capable of not only detecting edge lines in undistorted (or slightly distorted) images, but also of accurately and reliably identifying edge curve segments in highly distorted images. Additionally, the experiments demonstrate that the algorithm's computational efficiency is 27 times faster than that of traditional approaches. The main contributions of this paper are as follows:
(1) This methodology leverages the property that a 3D straight line’s projection onto a distorted 2D image forms a quadratic curve. It iteratively estimates the quadratic curve model’s optimal parameters through hypothesis verification and uses this optimal model to find the longest and continuous pixel sequence as the edge line segment within the edge contour.
(2) To reliably extract the multiple edge line segments that potentially exist within the edge contour, each identified line segment within the contour is removed, and the remaining pixels in the contour form a new edge contour. The technique then re-estimates the quadratic curve model’s optimal parameters through hypothesis verification to extract new edge line segments, and this process is continued until no valid edge line segments can be extracted from the edge contour.
(3) Because the edge detection method used in this article is independent of the camera's distortion parameters, there is no need to preprocess the distorted images. This article conducts tests using image data from normal lenses, wide-angle lenses, and fisheye lenses, respectively.
The subsequent sections of this paper are structured as follows: Section 2 elucidates the mathematical derivations of the edge line segment model; Section 3 provides a detailed description of the edge line segment extraction algorithm based on quadratic fitting; Section 4 presents a performance comparison with the classical LSD algorithm using image data from normal lenses, wide-angle lenses, and fisheye lenses; and Section 5 concludes the paper.

2. Derivation and Parameter Solving of the Edge Line Segment Model

2.1. Construction of the Edge Line Segment Model

Straight edges in three-dimensional space preserve collinearity during the ideal imaging process, meaning that they remain straight when projected from 3D space onto the two-dimensional image plane. Suppose the ideal projection of a 3D straight-line edge onto the two-dimensional image plane satisfies the following equation:
$$a x_u + b y_u + c = 0 \tag{1}$$
where $(x_u, y_u)$ represents any arbitrary pixel point on the ideal projected straight line.
In the actual imaging process, camera lenses typically exhibit radial distortion, tangential distortion, and thin prism distortion due to manufacturing tolerances. However, thin prism distortion and tangential distortion are typically negligible compared to radial distortion and can be ignored in practical applications. Therefore, the radial distortion of the camera lens can be represented using the single-parameter division model (DM) [21] as follows:
$$x_u = \frac{x_d}{1 + \lambda r_d^2}, \qquad y_u = \frac{y_d}{1 + \lambda r_d^2} \tag{2}$$
where $(x_u, y_u)$ is the distortion-free image point, $(x_d, y_d)$ is the corresponding distorted image point, $\lambda$ is the distortion factor, and $r_d^2 = x_d^2 + y_d^2$. Substituting Equation (2) into Equation (1), we have the following:
$$a \left( \frac{x_d}{1 + \lambda r_d^2} \right) + b \left( \frac{y_d}{1 + \lambda r_d^2} \right) + c = 0 \tag{3}$$
Substituting $r_d^2 = x_d^2 + y_d^2$ into Equation (3) and rearranging, we have the following:
$$x_d^2 + y_d^2 + e x_d + f y_d + g = 0 \tag{4}$$
where $e = \frac{a}{c\lambda}$, $f = \frac{b}{c\lambda}$, and $g = \frac{1}{\lambda}$. From Equation (4), it can be seen that under the effect of lens distortion, the projection of a straight line segment in three-dimensional space onto the two-dimensional image plane is no longer an ideal straight line but a quadratic curve.
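As a quick numerical check of this derivation, the sketch below samples points on an ideal line, distorts them with the division model of Equation (2), and verifies that the distorted points satisfy Equation (4). The values of $a$, $b$, $c$, and $\lambda$ are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative line and distortion parameters (assumed values).
a, b, c = 1.0, 2.0, -3.0      # ideal line: a*x_u + b*y_u + c = 0
lam = -0.2                    # division-model distortion factor (barrel)

# Sample undistorted points on the line.
y_u = np.linspace(-1.0, 1.0, 9)
x_u = -(b * y_u + c) / a

# Invert the division model x_u = x_d / (1 + lam*r_d^2). The model is radial,
# so direction is preserved; solve lam*r_u*r_d^2 - r_d + r_u = 0 for r_d.
r_u = np.hypot(x_u, y_u)
r_d = (1.0 - np.sqrt(1.0 - 4.0 * lam * r_u**2)) / (2.0 * lam * r_u)
x_d, y_d = x_u * r_d / r_u, y_u * r_d / r_u

# Coefficients from Equation (4): e = a/(c*lam), f = b/(c*lam), g = 1/lam.
e, f, g = a / (c * lam), b / (c * lam), 1.0 / lam

# Residual of Equation (4) at the distorted points: zero up to rounding.
residual = x_d**2 + y_d**2 + e * x_d + f * y_d + g
print(np.max(np.abs(residual)))
```

The residual is at machine-precision level, confirming that the distorted projection of a straight line lies exactly on the quadratic curve of Equation (4).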

2.2. Solution of Edge Line Segment Model Parameters

From the derivation in Section 2.1, it is known that, owing to distortion, each edge segment to be extracted from the actual image can be regarded as a quadratic curve. The coefficients of this quadratic curve, i.e., the parameters of the image edge segment model, are calculated in this section.
Assuming that the extracted edge line segment consists of $n$ pixel points, denoted as $\{(x_{d,j}, y_{d,j}) \mid j = 1, 2, 3, \ldots, n\}$, each point satisfies Equation (5):
$$x_{d,j}^2 + y_{d,j}^2 + e x_{d,j} + f y_{d,j} + g = 0, \quad j = 1, 2, 3, \ldots, n \tag{5}$$
In order to obtain the exact coefficients $e$, $f$, and $g$ of the quadratic curve, the error function is defined by Equation (6):
$$F = \sum_{j=1}^{n} \left( x_{d,j}^2 + y_{d,j}^2 + e x_{d,j} + f y_{d,j} + g \right)^2 \tag{6}$$
By solving the first-order partial derivative equations of Equation (6), i.e.,
$$\frac{\partial F}{\partial e} = 0, \quad \frac{\partial F}{\partial f} = 0, \quad \frac{\partial F}{\partial g} = 0 \tag{7}$$
we can obtain the optimal solution of Equation (6). Then, by expanding Equation (7) and combining like terms, we obtain the following matrix form:
$$\underbrace{\begin{bmatrix} \sum_{j=1}^{n} x_{d,j}^2 & \sum_{j=1}^{n} x_{d,j} y_{d,j} & \sum_{j=1}^{n} x_{d,j} \\ \sum_{j=1}^{n} x_{d,j} y_{d,j} & \sum_{j=1}^{n} y_{d,j}^2 & \sum_{j=1}^{n} y_{d,j} \\ \sum_{j=1}^{n} x_{d,j} & \sum_{j=1}^{n} y_{d,j} & n \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} e \\ f \\ g \end{bmatrix}}_{X} = \underbrace{-\begin{bmatrix} \sum_{j=1}^{n} \left( x_{d,j}^2 + y_{d,j}^2 \right) x_{d,j} \\ \sum_{j=1}^{n} \left( x_{d,j}^2 + y_{d,j}^2 \right) y_{d,j} \\ \sum_{j=1}^{n} \left( x_{d,j}^2 + y_{d,j}^2 \right) \end{bmatrix}}_{B}, \qquad A X = B \tag{8}$$
The matrix containing the quadratic curve coefficients can be obtained by solving Equation (8) using the least squares method:
$$X = (A^{T} A)^{-1} A^{T} B \tag{9}$$
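The least-squares solution of Equations (6)–(9) can be sketched in a few lines of NumPy. Rather than forming the normal equations of Equation (8) explicitly, this sketch solves the equivalent overdetermined system with design matrix $[x, y, 1]$ and target $-(x^2 + y^2)$; the function name is ours, not the paper's:

```python
import numpy as np

def fit_quadratic_curve(x, y):
    """Least-squares estimate of (e, f, g) in x^2 + y^2 + e*x + f*y + g = 0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])   # design matrix rows [x, y, 1]
    B = -(x**2 + y**2)                             # move the known terms to the RHS
    X, *_ = np.linalg.lstsq(A, B, rcond=None)      # equivalent to X = (A^T A)^{-1} A^T B
    return X                                       # coefficients e, f, g

# Points on the circle x^2 + y^2 - 2x - 4y + 1 = 0 (center (1, 2), radius 2).
theta = np.linspace(0.0, np.pi, 30)
e, f, g = fit_quadratic_curve(1 + 2 * np.cos(theta), 2 + 2 * np.sin(theta))
```

For exact points on this arc, the fit recovers $e = -2$, $f = -4$, $g = 1$ to machine precision.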

3. Edge Line Segment Extraction Based on Quadratic Curve Fitting

From the derivation of the above mathematical equations, it is clear that each edge line segment to be extracted from the actual image should satisfy the quadratic curve model. The parameters of the quadratic curve model are calculated according to Equation (8) using at least three non-collinear pixel points. Therefore, we can take a detected edge contour in the image, select three pixel points from it at random, and estimate the optimal parameters of the quadratic curve model by iterating continuously. This optimal model is then used to find the longest continuous pixel sequence in the edge contour, which serves as an edge segment of this contour. In addition, to avoid the problem of an edge contour containing multiple edge segments (as shown in Figure 1), which affects the reliability of subsequent detection, each extracted edge segment is removed from the edge contour, and the remaining pixel sequences constitute a new edge contour. The edge segment extraction steps above are then repeated until no more edge segments can be extracted from the edge contours. This process is implemented as follows:
Step 1: The Canny operator [22] is used to detect the edge features in the input image, and the Moore-Neighbor algorithm [23] is used to trace the edge contours. Edge contours whose length is smaller than $\frac{1}{20}$ of the image size are removed. Suppose $M$ edge contours are extracted by the above steps (denoted as the set $\{C_i \mid i = 1, 2, 3, \ldots, M\}$), where $C_i$ is any one edge contour.
Step 2: Assume the edge contour $C_i$ consists of $n$ pixel points, denoted as $C_i = \{(x_j, y_j) \mid j = 1, 2, 3, \ldots, n\}$. From $C_i$, randomly select three non-collinear pixel points $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ and substitute them into Equation (9) to calculate the quadratic curve coefficients $e_k$, $f_k$, and $g_k$. With the current solution $e_k$, $f_k$, $g_k$ and Equation (4), we define the error function as follows:
$$E = x_j^2 + y_j^2 + e_k x_j + f_k y_j + g_k \tag{10}$$
For pixel points in $C_i$ that lie on the fitted quadratic curve, the magnitude of $E$ in Equation (10) is small; otherwise, it is large. Therefore, by setting a threshold $\varepsilon$ on the quadratic curve fitting error, it is possible to search $C_i$, based on the magnitude of $E$, for $L$, the longest continuous pixel sequence that satisfies the threshold. This sequence can be recognized as an edge segment. The pseudocode for implementing the above process is illustrated by Algorithm 1:
[Algorithm 1: pseudocode provided as an image in the original article.]
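The search in Step 2 for the longest continuous pixel run whose fitting error stays below $\varepsilon$ can be sketched as a plain linear scan; the function name and interface are ours:

```python
import numpy as np

def longest_inlier_run(xs, ys, e, f, g, eps):
    """Return (start, length) of the longest consecutive run of pixels whose
    error |x^2 + y^2 + e*x + f*y + g| from Equation (10) is below eps."""
    err = np.abs(xs**2 + ys**2 + e * xs + f * ys + g)
    inlier = err < eps
    best_start, best_len, start = 0, 0, None
    for j in range(len(inlier) + 1):          # the extra step flushes a trailing run
        if j < len(inlier) and inlier[j]:
            if start is None:
                start = j                     # a new run begins
        elif start is not None:
            if j - start > best_len:
                best_start, best_len = start, j - start
            start = None                      # the current run has ended
    return best_start, best_len
```

For a contour stored as ordered arrays `xs`, `ys`, the slice `xs[start:start+length]` is then the candidate edge segment $L$.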
Step 3: In Step 2, three non-collinear pixels must be randomly selected from the edge contour set $C_i = \{(x_j, y_j) \mid j = 1, 2, 3, \ldots, n\}$, so pixel selection is performed three times at random. If the edge contour set $C_i$ contains $n$ pixels in total, and the longest image sequence is denoted as $L$ with length $\mathrm{len}(L)$, then the probability that a single randomly selected pixel falls on sequence $L$ is $\frac{\mathrm{len}(L)}{n}$. The probability that all three randomly selected pixels fall on sequence $L$ is $\left(\frac{\mathrm{len}(L)}{n}\right)^3$. The probability that the three sampled pixels do not all fall on sequence $L$ is
$$P = 1 - \left( \frac{\mathrm{len}(L)}{n} \right)^3 \tag{11}$$
In this algorithm, a threshold is set such that the probability of not selecting sequence L after K repeated samplings is less than 1%. If we want to obtain L in the set, it is necessary to repeat Step 2 K times, and the number of repetitions K is determined as follows:
$$P^K < 0.01 \;\Longrightarrow\; K > \operatorname{ceil}\!\left( \frac{\lg 0.01}{\lg P} \right) \tag{12}$$
The function $\operatorname{ceil}(\cdot)$ represents rounding up to the nearest integer. Therefore, as long as the number of iterations $K$ is greater than the value in Equation (12), it is possible to reliably search for $L$ in $C_i$. Furthermore, we define $D_i$ to store the continuous and longest edge segments; the sequence $L$ obtained from $C_i$ in Step 3 is added to the collection $D_i$ as an edge segment. If no $L$ is found after $K$ iterations, $D_i$ remains empty, indicating that there are no satisfying edge segments in $C_i$.
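Equations (11) and (12) translate directly into a small helper for the number of sampling rounds; a minimal sketch, with a function name of our choosing:

```python
import math

def sampling_rounds(n, len_L, fail_prob=0.01):
    """Smallest K with P^K < fail_prob, where P = 1 - (len_L/n)^3 is the
    probability (Equation (11)) that one 3-pixel sample misses sequence L."""
    P = 1.0 - (len_L / n) ** 3
    if P <= 0.0:
        return 1                  # every sample lies entirely on L
    return math.ceil(math.log(fail_prob) / math.log(P))

print(sampling_rounds(100, 50))   # → 35 rounds when len(L) = n/2
```

The count grows quickly as $\mathrm{len}(L)/n$ shrinks, which is why short contours are discarded in Step 1.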
Step 4: After an edge line segment $L$ is found in $C_i$, $L$ is removed from $C_i$, and the remaining pixels form new edge contours stored in $R$. Because $L$ can be distributed within the set $C_i$ in three different ways, deleting $L$ corresponds to three different cases, as shown in Figure 2. The pseudocode for implementing the above process is illustrated by Algorithm 2:
[Algorithm 2: pseudocode provided as an image in the original article.]
As can be seen from Algorithm 2, the final R obtained either contains one edge contour, two edge contours, or is empty.
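Step 4's removal can be sketched as a simple list split. Remainders too short to ever yield a valid fit (fewer than three pixels) are dropped here, which is our simplification rather than the paper's exact rule:

```python
def remove_segment(contour, start, length):
    """Delete the run contour[start:start+length]; the remaining pixels form
    zero, one, or two new contours (the three cases of Figure 2)."""
    before, after = contour[:start], contour[start + length:]
    # At least three non-collinear points are needed to fit Equation (9),
    # so shorter remainders cannot produce further segments.
    return [part for part in (before, after) if len(part) >= 3]
```

The result matches the observation above: the new set $R$ holds one contour, two contours, or nothing.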
Step 5: Treat the edge contours in $R$ as the new $C_i$ and repeat Step 2 to Step 4 until the contour set $R$ formed by the remaining pixels is empty, which indicates that all edge segments satisfying the conditions have been extracted from the initial $C_i$.
Step 6: For the edge contours of the whole image $\{C_i \mid i = 1, 2, 3, \ldots, M\}$, Step 2 to Step 5 are repeated until all $M$ contours have been traversed, yielding the complete set of edge segments in the whole image, $D = \{D_i \mid i = 1, 2, 3, \ldots, M\}$.
The complete procedure of Step 1 to Step 6 above is summarized in pseudocode as Algorithm 3:
[Algorithm 3: pseudocode provided as an image in the original article.]
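Putting Steps 2–5 together, a self-contained sketch of the per-contour loop follows. The thresholds `eps` and `min_len`, and the use of `min_len` as a conservative stand-in for the unknown $\mathrm{len}(L)$ in Equation (12), are our assumptions, not the paper's settings:

```python
import math
import random
import numpy as np

def _longest_true_run(mask):
    """(start, length) of the longest consecutive run of True values."""
    best, start = (0, 0), None
    for j in range(len(mask) + 1):
        if j < len(mask) and mask[j]:
            if start is None:
                start = j
        elif start is not None:
            if j - start > best[1]:
                best = (start, j - start)
            start = None
    return best

def qf_lsd_contour(contour, eps=2.0, min_len=10, fail_prob=0.01):
    """Sketch of Steps 2-5: hypothesize a quadratic curve from three random
    pixels, keep the longest consecutive inlier run as an edge segment,
    remove it, and recurse on the remaining pixels."""
    segments, stack = [], [np.asarray(contour, dtype=float)]
    while stack:
        pts = stack.pop()
        n = len(pts)
        if n < min_len:
            continue
        # Equation (12), with len(L) conservatively replaced by min_len.
        P = 1.0 - (min_len / n) ** 3
        K = 1 if P <= 0.0 else math.ceil(math.log(fail_prob) / math.log(P))
        best = (0, 0)
        for _ in range(K):
            idx = random.sample(range(n), 3)
            A = np.column_stack([pts[idx, 0], pts[idx, 1], np.ones(3)])
            B = -(pts[idx, 0] ** 2 + pts[idx, 1] ** 2)
            try:
                e, f, g = np.linalg.solve(A, B)    # Equation (9) on 3 points
            except np.linalg.LinAlgError:
                continue                           # collinear sample, redraw
            err = np.abs(pts[:, 0] ** 2 + pts[:, 1] ** 2
                         + e * pts[:, 0] + f * pts[:, 1] + g)
            run = _longest_true_run(err < eps)     # Equation (10) inliers
            if run[1] > best[1]:
                best = run
        start, length = best
        if length < min_len:
            continue                               # no valid segment remains here
        segments.append(pts[start:start + length])
        # Step 4: removal leaves at most two remainder contours.
        stack.extend(p for p in (pts[:start], pts[start + length:])
                     if len(p) >= min_len)
    return segments
```

Applied to a single circular arc (the distorted projection of one straight edge), the loop returns that arc as one segment and terminates with nothing left on the stack.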

4. Experimental Analysis

The quadratic curve-based edge line segment detection algorithm proposed in this paper (named the QF-LSD method) is tested on real images and compared with the LSD straight line segment detection algorithm [10]. To make the experimental comparison fair and reasonable, both the proposed algorithm and the LSD algorithm are implemented in the same programming language. All code was run on the same workstation with the following hardware configuration: an i7-10700KF processor with a 3.80 GHz main frequency and 32 GB of RAM.
The performance of the LSD algorithm and the proposed QF-LSD algorithm is depicted in Figure 3 and Figure 4a–x. The algorithms are tested on images from regular lenses (minor distortion), wide-angle lenses, and fisheye lenses to assess their edge line segment detection capabilities. Evidently, both the QF-LSD algorithm and the LSD algorithm can effectively and accurately extract edge line segments from images without distortion (or with minor distortion). The difference is that the LSD algorithm detects richer edge detail information; as Table 1 shows, its recall rate is correspondingly higher. This is because the LSD algorithm uses gradient direction and gradient magnitude information during edge line segment detection, whereas the QF-LSD algorithm proposed in this paper depends on the edge detection results; hence, the extracted edge segments are not as rich as those extracted by the LSD algorithm.
In images captured using wide-angle lenses (i–p) and fisheye lenses (q–x), the LSD algorithm can still detect richer edge details. Nevertheless, as the LSD algorithm is designed for straight line segment extraction in distortion-free images, it is incapable of accurately and completely describing edge arc segments in images with significant distortion; it can only approximate these segments using shorter straight line segments, thus reducing detection accuracy. For example, in Figure 3q, the arc at the junction of the wall and ceiling extracted by LSD is formed by connecting multiple straight-line segments of different colors, whereas in Figure 4q the same curve segment extracted by the QF-LSD algorithm is represented by a single complete green curve segment. To improve the LSD algorithm's accuracy on distorted images, a pre-correction of these images is necessary. This requires precise knowledge of the camera's distortion parameters, which is often challenging for online vision tasks. In contrast, the QF-LSD algorithm proposed in this paper is independent of the camera's distortion parameters, requires no pre-processing of distorted images, and can accurately and completely extract edge arc segments in heavily distorted images. In vision tasks that typically rely on edge segments (e.g., visual pose measurement, visual SLAM, autonomous driving), the accuracy of edge segment detection is more important than the number of segments detected. As long as accurate edge segments are extracted, the reliable implementation of the corresponding vision tasks can be ensured; low accuracy with a high recall rate only introduces more disturbing factors and larger errors into such tasks.
Table 2 displays the time taken by both the LSD algorithm and the proposed QF-LSD algorithm to process each image. It can be observed that the operational efficiency of both algorithms is closely related to image size and texture richness: the larger the image and the richer its texture information, the longer both algorithms take to process it. In comparison, the computational time of the QF-LSD algorithm presented in this paper is generally much smaller than that of the LSD algorithm. This is because the LSD algorithm not only calculates gradient direction and magnitude information when extracting straight line segments, but also employs region growing combined with hypothesis verification to obtain complete straight line segments, making the whole process more complex and time-consuming. Conversely, the QF-LSD algorithm proposed in this paper removes each edge segment from the contour after extraction, which improves the algorithm's efficiency in identifying new edge segments within the remaining pixels. These measures effectively reduce the time cost of the QF-LSD algorithm.
In conclusion, compared to traditional straight line segment detection algorithms, the QF-LSD algorithm proposed in this paper is highly suitable for edge segment extraction from images with substantial distortion. It does not depend on the camera's distortion parameters during edge detection, and it does not require pre-processing of the distorted images. In addition, the overall running time on the test dataset shows that the proposed algorithm achieves an average efficiency 27 times faster than the traditional algorithm. Therefore, compared to traditional algorithms, it exhibits greater competitiveness and advantages in terms of real-time performance.

5. Conclusions

In this research, we introduced a method for image edge line segment detection founded on quadratic curve fitting. This approach capitalizes on the principle that a straight line in three-dimensional space projects as a quadratic curve in a two-dimensional distorted image. The experiments demonstrate that, in addition to excellent detection accuracy, the method proposed in this paper is independent of camera distortion parameters and does not require a pre-correction of distorted images, thus exhibiting good practicality. Moreover, the algorithm's computational efficiency is 27 times faster than that of traditional approaches. We believe that this algorithm will contribute to improving the real-time performance of vision-based autonomous driving technologies.

Author Contributions

Conceptualization, R.Q., G.X., P.W. and Y.C.; methodology, R.Q. and P.W.; resources, G.X., Y.C. and W.D.; software, R.Q. and P.W.; formal analysis, R.Q., G.X. and P.W.; writing—original draft, R.Q.; writing—review and editing, G.X., P.W. and R.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under grant numbers 62073161, 61905112, and 62001198.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, Z.; Shin, B.S.; Klette, R. A statistical method for line segment detection. Comput. Vis. Image Underst. 2015, 138, 61–73.
2. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–139.
3. Wang, P.; Chou, Y.; An, A.; Xu, G. Solving the PnL problem using the hidden variable method: An accurate and efficient solution. Vis. Comput. 2022, 38, 95–106.
4. Cui, W.; Qiang, W.Y.; Chen, X.L. Image dynamic matching for stereo vision based on structural information between lines. Control Decis. 2003, 18, 633–636.
5. Liu, Q.; Han, M. Linear-preserve-mesh warps in aerial image stitching. Control Decis. 2022, 37, 669–675.
6. Ren, K.Y.; Gu, M.Y.; Yuan, Z.Q. 3D object detection algorithms in autonomous driving: A review. Control Decis. 2023, 38, 865–889.
7. Li, Y.T.; Mu, R.J.; Shan, Y.Z. A survey of visual SLAM in unmanned systems. Control Decis. 2021, 36, 513–522.
8. Duda, R.O.; Hart, P.E. Use of the Hough Transformation to Detect Lines and Curves in Pictures. Commun. ACM 1972, 15, 11–15.
9. Etemadi, A. Robust segmentation of edge data. In Proceedings of the 1992 International Conference on Image Processing and Its Applications, Maastricht, The Netherlands, 7–9 April 1992; pp. 311–314.
10. Grompone von Gioi, R.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732.
11. Almazan, E.J.; Tal, R.; Qian, Y.; Elder, J.H. MCMLSD: A Dynamic Programming Approach to Line Segment Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2031–2039.
12. Dong, J.; Yang, X.; Yu, Q.F. Fast line segment detection based on edge connecting. Acta Opt. Sin. 2013, 33, 213–220.
13. Lu, X.; Yao, J.; Li, K.; Li, L. CannyLines: A parameter-free line segment detector. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 507–511.
14. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting Straight Lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 425–455.
15. Zheng, H.J.; Zhong, B.J. Overview and evaluation of image straight line segment detection algorithms. Comput. Eng. Appl. 2019, 55, 9–19.
16. Salaün, Y.; Marlet, R.; Monasse, P. Multiscale line segment detector for robust and accurate SfM. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 2000–2005.
17. Luo, W.Y.; Cheng, Y.; Li, Y.H. Line segment detection algorithms towards high-resolution color image. Microelectron. Comput. 2017, 34, 25–30.
18. Ma, C.Y.; Shiri, B.; Wu, G.C.; Baleanu, D. New fractional signal smoothing equations with short memory and variable order. Optik 2020, 218, 164507.
19. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
20. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
21. Bukhari, F.; Dailey, M.N. Automatic radial distortion estimation from a single image. J. Math. Imaging Vis. 2013, 45, 31–45.
22. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
23. Gonzales, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2017; pp. 664–669.
Figure 1. Edge contour containing multiple edge line segments.
Figure 2. The distribution positions of $L$ within $C_i$. (a) The starting pixel $n_1$ of $L$ is located at the initial position of set $C_i$, and the ending pixel $n_2$ is located in the middle of set $C_i$. (b) The starting pixel $n_1$ of $L$ is located in the middle of set $C_i$, and the ending pixel $n_2$ is located at the final position of set $C_i$. (c) Both the starting pixel $n_1$ and the ending pixel $n_2$ of $L$ are located in the middle of set $C_i$.
Figure 3. Test results of the LSD algorithm for different lens imaging images. (a–h) Regular lens images, (i–p) wide-angle lens images, and (q–x) fisheye lens images.
Figure 4. Test results of the QF-LSD algorithm for different lens imaging images. (a–h) Regular lens images, (i–p) wide-angle lens images, and (q–x) fisheye lens images.
Table 1. Evaluation metrics for line segment detection in undistorted images (unit: number of line segments).

| Algorithm | Metric | a | b | c | d | e | f | g | h | (a–h) Avg. Precision | (a–h) Avg. Recall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LSD | True Positive | 67 | 133 | 130 | 194 | 190 | 161 | 569 | 154 | 0.465 | 0.381 |
| | False Positive | 74 | 139 | 154 | 230 | 201 | 183 | 620 | 223 | | |
| | False Negative | 109 | 207 | 223 | 329 | 295 | 268 | 887 | 257 | | |
| QF-LSD | True Positive | 49 | 103 | 80 | 115 | 103 | 70 | 123 | 105 | 0.878 | 0.218 |
| | False Positive | 13 | 10 | 13 | 11 | 6 | 14 | 12 | 17 | | |
| | False Negative | 127 | 237 | 273 | 408 | 382 | 359 | 1333 | 306 | | |
Table 2. Running time comparison (unit: seconds).

| Algorithm | Image set | Per-image processing times (in order) | (a)–(x) Avg. |
|---|---|---|---|
| LSD | Regular lens (a–h) | 0.89, 1.56, 2.93, 6.22, 4.40, 5.11, 19.18, 5.39 | 18.90 |
| | Wide-angle lens (i–p) | 1.03, 1.90, 2.30, 135.19, 2.76, 1.25, 1.22, 4.91 | |
| | Fisheye lens (q–x) | 1.25, 26.14, 64.19, 157.43, 2.07, 0.78, 2.86, 2.78 | |
| QF-LSD | Regular lens (a–h) | 0.39, 0.49, 0.70, 0.81, 0.72, 0.69, 0.94, 0.82 | 0.70 |
| | Wide-angle lens (i–p) | 0.36, 0.41, 0.65, 1.28, 0.57, 0.32, 0.55, 0.68 | |
| | Fisheye lens (q–x) | 0.39, 0.94, 1.17, 1.82, 0.56, 0.30, 0.67, 0.61 | |

Share and Cite

MDPI and ACS Style

Qiao, R.; Xu, G.; Wang, P.; Cheng, Y.; Dong, W. Quadratic Curve Fitting-Based Image Edge Line Segment Detection: A Novel Methodology. Appl. Sci. 2023, 13, 8654. https://doi.org/10.3390/app13158654
