Article

Vision-Based Detection of Bolt Loosening Using YOLOv5

School of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(14), 5184; https://doi.org/10.3390/s22145184
Submission received: 31 May 2022 / Revised: 25 June 2022 / Accepted: 30 June 2022 / Published: 11 July 2022
(This article belongs to the Special Issue Smart Sensing Technology and Infrastructure Health Monitoring)

Abstract

Bolted connections are widely applied in engineering structures. When a bolted connection is subjected to continuous cyclic load, loosening occurs, and a significant rotation between the nut and the bolt can be observed. Combining deep learning with machine vision, a bolt loosening detection method based on the fifth version of You Only Look Once (YOLOv5) is proposed, in which the rotation of the nut is identified to detect bolt loosening. Two different circular markers are added to the bolt and the nut separately; YOLOv5 is then used to identify the circular markers, and the rotation angle of the nut against the bolt is calculated from the center coordinates of the predicted boxes. A bolted connection structure is adopted to illustrate the effectiveness of the method. First, 200 images containing bolts and circular markers are collected to build the dataset, which is divided into a training set, validation set and test set. Second, YOLOv5 is used to train the model; the precision rate and recall rate are 99.8% and 100%, respectively. Finally, the robustness of the proposed method in different shooting environments is verified by changing the shooting distance, shooting angle and light condition. When using this method to detect the bolt loosening angle, the minimum identifiable angle is 1°, and the maximum detection error is 5.91% when the camera is tilted 45°. The experimental results show that the proposed method can detect the loosening angle of the bolted connection with high accuracy; in particular, tiny angles of bolt loosening can be identified. Even under difficult shooting conditions, the method still works. The early stage of bolt loosening can thus be detected by measuring the rotation angle of the nut against the bolt.

1. Introduction

Bolted connections have been widely used in engineering structures because of the advantages of convenient assembly and disassembly. When the structure is subjected to continuous vibration load, the bolted connection loosens and its reliability decreases, which may lead to structural failure [1,2,3,4]. It is therefore of great significance to investigate detection methods for bolt loosening [5].
The mechanical performance of a bolted connection is related to the pretightening torque, thread pair contact and friction of the contact surfaces [6,7,8]. In order to explore the mechanism of bolt loosening, Goodier et al. [9] tested the behavior of bolted connections under dynamic load and explained that bolt loosening is caused by relative motion between screw threads and fasteners. Junker [10] designed classic experimental equipment to reveal that the relative motion between screw threads and fasteners is the main cause of bolt loosening. The decrease in pretightening torque and the rotation of bolts or nuts are the main phenomena of loosening. Jiang et al. [11] investigated the decline in preload over the loading cycle and obtained the ratio relationship between the current preload and the initial preload. At the same time, Junker's experimental equipment was improved, and a component was added to measure the relative angle between the bolt and the nut. The pretightening torque and the relative angle have thus become important quantities in the detection of bolt loosening. Yin [12] and Huo et al. [13] proposed a new bolt loosening detection method based on a piezoelectric transducer (PZT). The actual contact area can be determined by detecting the ultrasonic wave energy transmitted between contact surfaces. Experimental results show that, before saturation occurs, there is an approximately linear relationship between the signal peak value and the pretightening torque of the bolt, which can be used to monitor bolt loosening. Xu et al. [14] proposed an improved time reversal method to monitor bolt loosening, which reconstructs the phase and amplitude of the signal. If there is a phase shift and amplitude difference between the signal generated by the structure in the healthy state and the reconstructed signal, the structure is damaged. Experiments show that this method can realize quantitative monitoring of bolt preload with high precision and sensitivity. Zhao et al. [15] combined PZT with the time reversal method for real-time health monitoring of bolted connections in wood structures. A pretightening torque loss index based on wavelet analysis was proposed, which can reflect the looseness of bolts in wood structures. Zhang et al. [16] proposed a bolt loosening detection method based on audio classification. By recording and extracting hammer sounds of bolted connections at different loosening degrees, a support vector machine (SVM) was used to train and test datasets, and quantitative detection results of bolt loosening were finally obtained. This method has high recognition accuracy and strong anti-noise ability. Wang et al. [17] proposed a new vibroacoustic method (VAM) for detecting the looseness of multi-bolt connections. The above detection methods can generally achieve unmanned online monitoring of bolted connections. However, they all need specific sensors to collect signals [18,19,20], which undoubtedly increases the cost and difficulty of monitoring bolted connection structures as the number of sensors grows.
With the development of camera and image processing technology, structural health monitoring methods based on machine vision have developed rapidly. Kromanis et al. [21] proposed a vision-based test method for measuring deformation and cracks of reinforced concrete structures. Kromanis also proposed a damage detection technique for bridge structures based on computer vision-derived parameters [22]: cameras were used to collect image frames of a bridge model under traffic loads, the nodal displacements of the bridge model were computed from each image frame by an image processing algorithm, structural responses such as deflection and strain were calculated from the nodal displacements, and the damage of the bridge structure was finally detected by analyzing the structural response. This research shows that machine vision-based methods for structural health monitoring are more efficient and less costly than traditional monitoring methods. Many researchers have studied the loosening of bolted connections based on machine vision and image processing technology. Huynh et al. [23] proposed a method to identify the rotation angle of the nut using the Hough transform algorithm and to detect whether the bolt is loose by comparing the angle changes before and after; this visual-image-based method can detect the nut rotation angle with an accuracy of ±2.6°. With the rapid development of deep learning, various neural networks with high recognition accuracy have emerged. For example, AlexNet [24] used the GPU to accelerate computing for the first time, and other networks such as VGGNet [25], R-CNN [26], Fast R-CNN [27], GoogLeNet [28] and Faster R-CNN [29] have continuously improved the recognition accuracy of target detection. In addition, YOLO [30] and SSD [31] balance recognition speed and accuracy.
Compared with traditional methods, methods based on deep learning can autonomously learn the characteristics of data [32,33,34,35,36]. In the field of bolt loosening detection, Zhuang et al. [37] combined the time reversal method with deep learning to classify the ultrasonic signals in the bolted connections of wood structures, thus realizing the prediction of residual preload on bolted connections. Cha and Choi et al. [38,39,40] combined machine vision with a support vector machine (SVM) to automatically distinguish tight bolts from loose bolts by detecting the horizontal and vertical lengths of bolt heads in images. Huynh et al. [41] used R-CNN to detect and crop plausible bolts in bolt images, and the Hough line transform (HLT) image processing algorithm was then used to automatically estimate the angle of bolt loosening from the bolt images. Zhao et al. [42] used SSD to identify bolt heads and the numbers on them, and the included angle of the center coordinates of the two predicted boxes was calculated; the monitoring of bolt loosening can be realized by measuring the change of this angle. The minimum identifiable angle of the method is 10°, and bolt loosening can be detected over the full 360°. Zhang et al. [43,44] used Faster R-CNN to train on different screw heights after bolt loosening to determine whether bolts are tight or loose, and the recognition accuracy reached 95.03%. Pham et al. [45] used composite bolt images generated by graphical models as training datasets for a neural network, which helps reduce the time and cost of collecting high-quality training data. Pal et al. [46] used a convolutional neural network (CNN) to extract identification features from vibration-based time-frequency scale images to detect bolt loosening; the average accuracies of the method are 100% and 98.1%, respectively. Pan et al. [47] proposed an RTDT-bolt method by combining YOLOv3-tiny with the optical flow method, which achieved real-time detection and tracking of bolt rotation with an accuracy of more than 90%. Yuan et al. [48] used Mask R-CNN to complete the identification and classification of bolt loosening in near real time through a webcam; the minimum identifiable screw height was 4 mm. Gong et al. [49] proposed a bolt loosening detection method combining deep learning with geometric imaging theory, which can accurately calculate the length of the exposed bolt: first, the exposed bolt is located using Faster R-CNN; then, five key points on the exposed bolt are identified using CPN; finally, the length of the exposed bolt is calculated by a length calculation module. The mean measurement error of this method is only 0.61 mm. The above detection methods combine deep learning with machine vision, which can not only identify various features of bolted connections but also detect bolt loosening more intuitively and with higher precision. However, some of the above methods can only distinguish tight bolts from loose bolts and fail to determine the loosening degree of bolted connections. When the bolt loosening angle is tiny, these methods cannot realize the early monitoring of bolt loosening. Therefore, it is necessary to investigate methods for detecting the bolt loosening angle.
Bolt loosening in engineering structures can be transformed into a target detection problem. Combining deep learning with machine vision, a bolt loosening detection method based on a neural network is proposed. Two different circular markers are added to the bolted connection, a neural network is used to detect the markers, and the included angle between them is calculated. The detection of bolt loosening can then be realized by calculating the rotation angle of the nut against the bolt. Because the markers on the bolt are small, YOLOv5, which is effective and efficient at detecting small targets, is adopted. Compared with YOLOv3 [50,51] and YOLOv4, the network structure of YOLOv5 can extract deeper features and achieve better detection results. First, bolt images were collected using a smartphone and trained with YOLOv5. Then, the trained model was used to detect the rotation angle of the nut against the bolt, and experiments were carried out in different environments to verify the detection accuracy of the proposed method.

2. Problem Description

At present, most research on the mechanics of bolt loosening is based on the bolt loosening experimental device designed by Junker [3], as shown in Figure 1a. Jiang [4] obtained the experimental curve of bolt loosening through experiments, as shown in Figure 1b, where P is the bolt preload and θ is the nut rotation angle. In stage Ⅰ of the bolt preload curve, the preload decreases mainly due to material deformation, and the nut rotation angle is small. In stage Ⅱ, the preload begins to decline rapidly, while the nut rotation angle increases; the bolted connection loosens, which leads to structural failure. As shown in Figure 1b, when the preload starts to decline rapidly, the nut rotation angle is less than 5°; the angle increases to 20° when the preload drops to 20%. Thus, an obvious rotation of the nut can be observed when the bolted connection loosens. Therefore, combining deep learning with machine vision to detect the rotation angle of the nut against the bolt is a feasible way to diagnose the loosening of bolted connections.

3. Methodology

3.1. YOLOv5 Algorithm for Bolt Loosening

The core idea of YOLO is to take the entire image as the input of the network and directly obtain all bounding boxes of all categories in a single forward pass, making it a one-stage detector. First, the bolt image is divided into S × S grids, and each grid cell predicts B bounding boxes and their confidence, roughly covering the entire bolt image. Each bounding box contains five predicted values (x, y, w, h, c), where (x, y) is the center coordinate of the bounding box, w and h are the width and height of the bounding box relative to the entire bolt image, and c stands for confidence, representing the intersection over union (IOU) of the predicted box and the ground truth. Each grid cell also predicts the probability C of each object category when an object appears; the final detection is then obtained from the B bounding boxes and confidences predicted by each grid cell together with the class probability map. The detection principle is shown in Figure 2.
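The grid-based prediction described above can be made concrete with a small shape calculation. The values of S and B here are illustrative of the original YOLO formulation; YOLOv5 itself uses an anchor-based, multi-scale detection head:

```python
# Shape of a YOLO-style prediction tensor for the bolt dataset.
# S and B are illustrative; C = 3 matches the classes 'bolt', 'nut', 'sign'.
S, B, C = 7, 2, 3

# Each bounding box carries five values (x, y, w, h, c);
# each grid cell additionally predicts C class probabilities.
per_cell = B * 5 + C
prediction_shape = (S, S, per_cell)
print(prediction_shape)  # → (7, 7, 13)
```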
The network structure of YOLOv5 is shown in Figure 3. The Mosaic data enhancement method is adopted at the input of YOLOv5: images are spliced through random scaling, random clipping and random arrangement, which has a good detection effect on small targets. Adaptive anchor frame computation is also added. During training, the predicted boxes are output on the basis of the initial anchor frames and compared with the ground truths; the gaps between them are calculated, and the network parameters are then updated iteratively through backpropagation. YOLOv5 can adaptively calculate the optimal anchor frame values for different training sets. A Focus structure is added in the backbone of YOLOv5 to realize the slice operation, which can fully extract features and retain more complete downsampling information from the pictures. In addition, the CSP structure can increase the gradient value of backpropagation between layers and avoid vanishing gradients as the network deepens. The FPN+PAN structure is used in the neck of YOLOv5. The feature pyramid network (FPN) transfers and fuses high-level feature information through upsampling to obtain feature maps for prediction; on this basis, a path aggregation network (PAN) structure is added to improve the propagation of low-level features. The FPN+PAN structure enhances the detection capability of the model for objects of different scales and can realize the recognition of the same objects at different sizes and proportions. GIOU_Loss is adopted at the output of YOLOv5 as the loss function of the bounding boxes, and non-maximum suppression (NMS) is used in the post-processing of target detection to retain the bounding boxes with the highest probability.

3.2. The Detection Method of Bolt Loosening

Images of the bolted connection are captured using a smartphone and are taken under different shooting distances, different shooting angles and different light conditions. In these experiments, a total of 200 images (3024 × 3024 pixels) are collected; the image format is JPG. Smartphone camera specifications are shown in Table 1. The labeling tool named ‘LabelImg’ is used to label 200 images. The bolt is defined as ‘bolt’, the blue circular marker on the nut is defined as ‘nut’, and the red circular marker on the bolt is defined as ‘sign’. Figure 4b is the graph after labeling the three classes. These labeled images are then made into the dataset.
The dataset contains three classes, namely 'bolt', 'nut' and 'sign'. The YOLOv5 algorithm under the Pytorch framework was used to train the dataset. The trained YOLOv5 can then identify and locate the three classes and output the center coordinates of their predicted boxes. As shown in Figure 5, with the top left corner of the image as the origin, the center of the 'bolt' class is A(x1, y1), the center of the 'nut' class is B(x2, y2) and the center of the 'sign' class is C(x3, y3). Based on these coordinates, the angle can be calculated by the vector product.
The angle with A as the vertex is
\[ A = \arccos\left( \frac{\overrightarrow{AB} \cdot \overrightarrow{AC}}{\lvert \overrightarrow{AB} \rvert \, \lvert \overrightarrow{AC} \rvert} \right) \tag{1} \]
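The angle calculation in Equation (1) can be sketched in Python; the coordinates below are illustrative values, not data from the paper's experiments:

```python
import math

def angle_at_vertex(a, b, c):
    """Angle at vertex a (in degrees) between vectors a->b and a->c,
    computed from the dot (vector) product as in Equation (1)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    acx, acy = c[0] - a[0], c[1] - a[1]
    dot = abx * acx + aby * acy
    norms = math.hypot(abx, aby) * math.hypot(acx, acy)
    return math.degrees(math.acos(dot / norms))

# Predicted-box centers in pixel coordinates, origin at the top-left:
A = (100.0, 100.0)   # center of the 'bolt' class
B = (100.0, 40.0)    # center of the 'nut' marker
C = (160.0, 100.0)   # center of the 'sign' marker
print(angle_at_vertex(A, B, C))  # → 90.0 for this right-angle layout
```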
When a bolt is loosened, A changes with the rotation angle of the 'nut' class, and the degree of bolt loosening is determined by detecting the change of A, denoted Δα. The expression of Δα is shown in Equation (2).
\[ \Delta\alpha = \alpha - \alpha_0 \tag{2} \]
where α is the angle calculated by the algorithm after the bolted connection has worked for a period of time, and α₀ is the angle when the bolted connection is tightened.
When Δα begins to increase, it indicates that the bolted connection has begun to loosen. The bolted connection fails and the structure may become damaged when Δα > 20°. By detecting the rotation angle of the nut against the bolt and tracking the change of Δα, the loosening of the bolted connection can be effectively monitored, and anti-loosening measures can be taken in the early stage of bolt loosening.
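A minimal monitoring sketch based on Equation (2) might look as follows; the helper name and the returned status labels are illustrative, and the 20° failure threshold follows the preload curve discussed in Section 2:

```python
def loosening_state(alpha, alpha_0, fail_threshold=20.0):
    """Classify a bolted connection from the angle change (Equation (2)).
    Returns (delta_alpha, status); the 20-degree default threshold is the
    rotation at which the preload curve indicates failure."""
    delta = alpha - alpha_0
    if delta > fail_threshold:
        return delta, "failure risk: take anti-loosening measures"
    if delta > 0:
        return delta, "early loosening detected"
    return delta, "tight"

delta, status = loosening_state(171.2, 150.0)  # illustrative angles
print(status)  # → failure risk: take anti-loosening measures
```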

4. Detection of Bolt Loosening

4.1. Model Training

The dataset used in this paper contains 200 images, 80% of which are used as the training and validation set and 20% as the test set. The training set is trained by YOLOv5 under the Pytorch framework. Training uses GPU acceleration on an Nvidia RTX 3070, and the batch size is set to 8. During training, the model loss decreases as the number of iterations increases until it reaches a stable state; at that point, the detection accuracy of the model is also stable and high. An appropriate number of iterations therefore not only reduces the training time of the model but also maintains good detection accuracy. Since there are only 160 images in the training set, in order to make the model more accurate, the total training is set to 500 epochs, where one epoch is one pass over all samples in the training set. A larger value yields a more accurate model but a longer training time. The parameters of network training are shown in Table 2.
The evaluation of model performance is mainly based on parameters such as precision, recall and mAP. Precision is the proportion of detected positive samples that are truly positive; recall is the proportion of the true positive samples in the test set that are correctly detected. mAP is the average identification accuracy over all classes, and mAP@0.5 denotes the mAP computed with an IOU threshold of 0.5. The expressions for precision and recall are shown below.
\[ \mathrm{precision} = \frac{TP}{TP + FP} \tag{3} \]
\[ \mathrm{recall} = \frac{TP}{TP + FN} \tag{4} \]
where TP is the number of positive samples detected as positive samples, FP is the number of negative samples detected as positive samples, and FN is the number of positive samples detected as negative samples.
The output of YOLOv5 adopts GIOU_Loss as the loss function of the bounding boxes. The smaller the value is, the more accurate the predicted box is. The expression of GIOU_Loss is shown in Equation (5). Obj_Loss is the mean value of objectiveness loss. The smaller the value is, the more accurate the target detection is. Cls_Loss indicates the average value of classification loss. A smaller value indicates more accurate classification.
\[ \begin{cases} \mathrm{GIOU} = \mathrm{IOU} - \dfrac{A_c - U}{A_c} \\[4pt] \mathrm{GIOU\_Loss} = 1 - \mathrm{GIOU} \end{cases} \tag{5} \]
where IOU is the intersection over union, i.e., the ratio of the intersection to the union of the predicted box and the ground truth; A_c is the area of the smallest enclosing rectangle of the predicted box and the ground truth; and U is the area of their union.
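A minimal sketch of GIOU_Loss for two axis-aligned boxes, following Equation (5); the corner-coordinate box format (x1, y1, x2, y2) is an assumption for illustration:

```python
def giou_loss(box1, box2):
    """GIOU_Loss for axis-aligned boxes (x1, y1, x2, y2), per Equation (5).
    A_c is the area of the smallest rectangle enclosing both boxes."""
    # intersection area
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # union area
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter
    iou = inter / union
    # smallest enclosing rectangle
    cx1, cy1 = min(box1[0], box2[0]), min(box1[1], box2[1])
    cx2, cy2 = max(box1[2], box2[2]), max(box1[3], box2[3])
    ac = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (ac - union) / ac
    return 1.0 - giou

print(giou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # → 0.0 for identical boxes
print(giou_loss((0, 0, 2, 2), (1, 1, 3, 3)))  # ≈ 1.0794 for partial overlap
```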
The training results of the model are shown in Figure 6.
The training results show that the GIOU_Loss, Obj_Loss and Cls_Loss of the model are all small after 10,000 iterations. The precision rate and recall rate of the model are 99.8% and 100%, respectively, and the value of mAP@0.5 is over 0.95. These data indicate that the model can accurately identify the three classes in the image. Although the 'nut' and 'sign' classes are small and the 'sign' class lies inside the 'bolt' class, the model still detects them accurately and achieves high recognition accuracy. In addition, we tried to use the Single Shot MultiBox Detector (SSD) to train the dataset, but the obtained model could only recognize the 'bolt' class and could not recognize the 'nut' and 'sign' classes. This may be because the feature maps generated by SSD are too coarse to accurately identify the quite small 'nut' and 'sign' classes. YOLOv5, however, has adaptive anchors and multi-scale fusion that better handle objects of any size. Therefore, it is effective and feasible to use YOLOv5 to detect the loosening angle of bolted connections.

4.2. Identification of Bolt Loosening Angles

4.2.1. Identification of Bolt Loosening at Any Angle

This section addresses the detection of the 'bolt', 'nut' and 'sign' classes in pictures taken vertically. YOLOv5 is used to identify the three classes and output the center coordinates of their predicted boxes in the image. According to the calculation process described in the previous section, the nut rotation angle can be obtained, and the loosening degree of the bolted connection can be judged by detecting the change of the nut rotation angle. The measurement method of the bolt loosening angle is shown in Figure 7a. The nut is rotated 15°, 30°, 45° and 60°, and the identification results are shown in Figure 7. Because there is no particularly accurate method to measure the rotation angle of the nut against the bolt, a protractor is used to measure the nut rotation angle. Finally, the nut rotation angle measured by the protractor is compared with the detection value of YOLOv5; the analysis of the experimental data is shown in Table 3.
The error of bolt loosening angle is calculated by Equation (6).
\[ \mathrm{Error} = \frac{\lvert D - M \rvert}{M} \tag{6} \]
where D is the detection value of the YOLOv5 algorithm, and M is the experimentally measured value.
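Equation (6) reduces to a one-line helper. The angles in the example are illustrative values chosen to yield an error of about 5.91%, the worst case reported in Section 4.3.2:

```python
def detection_error(detected, measured):
    """Relative error of the detected loosening angle, per Equation (6):
    detected is the YOLOv5 value D, measured is the protractor value M."""
    return abs(detected - measured) / measured

# Illustrative example: detected 158.87 deg vs measured 150 deg.
print(f"{detection_error(158.87, 150.0):.2%}")  # → 5.91%
```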
The results show that when the bolt loosens at any angle, the average difference of the nut rotation angle is about 1.56° and the average error is about 1.34%. The difference and error at the above rotation angles are small, and the mAP is about 0.946, which indicates that the detection accuracy of this method is sufficient for bolt loosening.

4.2.2. Identification of Bolt Loosening at Tiny Angle

In order to verify the effectiveness of the proposed method in the case of bolt loosening at a tiny angle, experiments are carried out within the range of bolt loosening at 10° in this section. The nut is rotated 10°, 8°, 5°, 2° and 1°. The identification results are shown in Figure 8, and analysis of the experimental data is shown in Table 4.
The results show that when the bolt loosens at a tiny angle, the average difference of the nut rotation angle is about 1.18°, the average error is about 1.27% and the mAP is about 0.942. When the nut is rotated only 1°, the error increases to 2.90%, and the minimum identifiable angle can be determined as 1°. The smaller the bolt loosening angle is, the larger the detection error is. Whether the bolt loosening angle is arbitrary or tiny, the detection errors are small, and the proposed method has high recognition accuracy for the three classes. Therefore, the proposed bolt loosening detection method using YOLOv5 can monitor bolt loosening effectively, and anti-loosening measures can be taken in the early stage of bolt loosening.

4.3. Identification under Different Shooting Conditions

4.3.1. Different Shooting Distances

In order to explore the influence of shooting distance on the detection results, the angle of 150° measured during the bolted connection tightening is taken as the initial state. The light condition and vertical shooting angle are kept unchanged; only the shooting distance of the camera is changed. Images are collected at distances of 5, 10, 15 and 20 cm between the bolted board and the camera. The identification results are shown in Figure 9, and the experimental data analysis of bolt loosening under different shooting distances is shown in Table 5.
The results show that when the shooting distance is 5 cm, although the mAP is high, the images taken by the camera are blurred, and the difference of nut rotation angle is 2.14°. When the shooting distance is within 10~15 cm, there is little difference in the detected value of the nut rotation angle. However, when the shooting distance is 20 cm, the recognition accuracy of the ‘nut’ class detected by the model is only 0.68. The recognition accuracy will decrease with the increase in the shooting distance. Therefore, the recommended shooting distance of images is 10~15 cm to achieve the best detection results when using the proposed bolt loosening detection method.

4.3.2. Different Shooting Angles

The positions of bolted connections are widely distributed in engineering structures; thus, it is impossible to ensure that collected bolt images are always taken from a vertical angle. In order to explore the influence of shooting angle on the detection results, the angle of 150° measured during bolted connection tightening is taken as the initial state. Following the conclusion of the previous section, the distance between the camera and the bolt is kept within 10~15 cm. While maintaining the light conditions, only the shooting angle is changed by tilting the camera. Images are collected under four shooting angles: perpendicular to the bolt, and tilted 10°, 30° and 45°. The identification results are shown in Figure 10, and the experimental data analysis of bolt loosening under different shooting angles is shown in Table 6.
The results show that the error increases to 5.91% when the camera is tilted 45°; the detection error of the nut rotation angle grows with the shooting angle. Therefore, vertical shooting should be maintained as far as possible to avoid large errors when using the proposed bolt loosening detection method.

4.3.3. Different Light Conditions

In a real engineering environment, light conditions are also different. In order to explore the influence of light condition on the detection results, experiments are separately carried out under normal light, weak light, dark light and camera flash with the other shooting conditions unchanged. The identification results are shown in Figure 11, and the experimental data analysis of bolt loosening under different light conditions is shown in Table 7.
The results show that the detection error of the nut rotation angle is smallest under normal light. The model has high recognition accuracy even in a weak light environment, and the error of the bolt loosening angle is still small, only 2.79%. In dark light, none of the three classes could be detected. In this case, the camera flash can be turned on to increase the ambient brightness, so that objects that cannot be recognized under dark light can still be recognized with high accuracy. However, due to the large light intensity of the flash, reflection from the bolt surface is serious, which reduces the identification accuracy of the 'bolt' class.
In this study, when the shooting distance is changed, too close a shooting distance makes the captured image blurred, while too far a shooting distance reduces the accuracy of target detection. Therefore, a suitable shooting distance is necessary to achieve the best detection effect with high precision. As the shooting angle increases, the detection error of the bolt loosening angle also increases; vertical shooting should therefore be maintained to achieve the best identification results. Meanwhile, different light conditions also affect the detection results: the detection error is smallest under normal light, and the bolt loosening angle cannot be detected under dark light.

5. Summary and Conclusions

In this paper, a bolt loosening detection method based on YOLOv5 is proposed by combining deep learning with machine vision. YOLOv5 is used to train a model with high accuracy, and the rotation angle of the nut against the bolt is detected to realize the monitoring of bolt loosening. First, the effectiveness and feasibility of the proposed method are proven by experiments on bolt loosening at arbitrary angles. Second, the applicability and accuracy of the method in detecting tiny angles of bolt loosening are verified experimentally. Finally, the robustness of the method is verified through a series of experiments under different shooting conditions. In general, the proposed method can effectively and intuitively detect bolt loosening, and it has the advantages of high precision, high efficiency and low cost. The detection of tiny angles is also effective, which provides a technical reference for the early detection of bolt loosening. The proposed method can transform bolt loosening monitoring from manual inspection to automatic monitoring by fixed cameras.
Experimental investigations are undertaken under different shooting conditions, and the conclusions are as follows:
  • The precision rate and recall rate of the model trained on the dataset are 99.8% and 100%, respectively, and the mAP of the model is over 0.95. This method not only decreases the time to collect real bolt images, but also improves the generalization ability of the network and reduces the cost of detection.
  • The smaller the bolt loosening angle is, the larger the detection error is. The method is also accurate in detecting the tiny angle of bolt loosening. The minimum identifiable angle is 1°, and the error is only 2.90%.
  • The detection error of nut rotation angle will increase with the increase in shooting angle, and the maximum error of bolt loosening angle is only 5.91% when the camera is tilted 45°. This shows that the method is effective and accurate even under some difficult shooting conditions.
  • The detection results are not sensitive to the shooting distance and the light condition. When the shooting distance is within 10~15 cm and the light is sufficient, the detection accuracy is the best.
Since it is not always convenient to point the camera directly at the bolts, reducing the influence of shooting angle and light conditions on the detection results remains a crucial issue. Addressing these limitations would further expand the scope of application of the proposed method.

Author Contributions

Conceptualization, Y.S. and D.J.; methodology, Y.S.; software, Y.S. and R.D.; validation, Y.S. and M.L.; formal analysis, D.J.; investigation, Y.S.; resources, D.J. and W.C.; data curation, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, D.J.; visualization, Y.S., R.D. and M.L.; supervision, W.C.; project administration, D.J.; funding acquisition, D.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Number 11602112), Natural Science Research Project of Higher Education in Jiangsu Province (Grant Number 20KJB460003) and the Qing Lan Project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Experiment of bolt loosening: (a) experimental equipment of bolt loosening; (b) curve of bolt loosening.
Figure 2. The principle of target detection using YOLO.
Figure 3. YOLOv5 network structure.
Figure 4. Labeling of the dataset: (a) captured bolt image; (b) labels of three classes.
Figure 5. Center coordinates of the three classes.
Figure 6. Training results of the model: (a) training loss; (b) validation loss; (c) precision; (d) recall; (e) mAP@0.5; (f) mAP@0.5:0.95.
Figure 7. Identification results of bolt loosening at any angle: (a) angle-measuring method; (b) initial state; (c) 15° of rotation; (d) 30° of rotation; (e) 45° of rotation; (f) 60° of rotation.
Figure 8. Identification results of loose bolts at tiny angle: (a) initial state; (b) 10° of rotation; (c) 8° of rotation; (d) 5° of rotation; (e) 2° of rotation; (f) 1° of rotation.
Figure 9. Identification results of bolt loosening under different shooting distances: (a) 5 cm; (b) 10 cm; (c) 15 cm; (d) 20 cm.
Figure 10. Identification results of bolt loosening under different shooting angles: (a) 0° of tilt; (b) 10° of tilt; (c) 30° of tilt; (d) 45° of tilt.
Figure 11. Identification results of loose bolts under different light conditions: (a) normal light; (b) weak light; (c) dark light; (d) camera flash.
Table 1. Smartphone camera specifications.
Parameters            | Value
--------------------- | -----------------
Size                  | 3024 × 3024 pixels
Vertical resolution   | 72 dpi
Horizontal resolution | 72 dpi
Bit depth             | 24
Aperture              | f/1.8
Focal length          | 4 mm
Table 2. Parameters of training.
Parameters          | Value
------------------- | ---------
Image size          | 640 × 640
Learning rate       | 0.01
Momentum            | 0.937
Weight decay        | 0.0005
Batch size          | 8
Iteration per epoch | 20
Total epoch         | 500
Table 3. Experimental data analysis of bolt loosening at any angle.
Test Sample | Rotation Angle (°) | Detection Value (°) | Measured Value (°) | Error (%)
----------- | ------------------ | ------------------- | ------------------ | ---------
b           | 0                  | 90.44               | 90                 | 0.49
c           | 15                 | 107.35              | 105                | 2.24
d           | 30                 | 123.69              | 120                | 3.08
e           | 45                 | 135.26              | 135                | 0.19
f           | 60                 | 151.07              | 150                | 0.71
Table 4. Experimental data analysis of bolt loosening at tiny angle.
Test Sample | Rotation Angle (°) | Detection Value (°) | Measured Value (°) | Error (%)
----------- | ------------------ | ------------------- | ------------------ | ---------
a           | 0                  | 89.84               | 90                 | 0.18
b           | 10                 | 100.92              | 100                | 0.92
c           | 8                  | 98.68               | 98                 | 0.69
d           | 5                  | 95.86               | 95                 | 0.91
e           | 2                  | 93.84               | 92                 | 2.00
f           | 1                  | 93.64               | 91                 | 2.90
Table 5. Experimental data analysis of bolt loosening under different shooting distances.
Test Sample | Shooting Distance (cm) | Detection Value (°) | Measured Value (°) | Error (%)
----------- | ---------------------- | ------------------- | ------------------ | ---------
a           | 5                      | 152.14              | 150                | 1.43
b           | 10                     | 150.59              | 150                | 0.39
c           | 15                     | 149.44              | 150                | 0.37
d           | 20                     | 150.28              | 150                | 0.19
Table 6. Experimental data analysis of bolt loosening under different shooting angles.
Test Sample | Shooting Angle (°) | Detection Value (°) | Measured Value (°) | Error (%)
----------- | ------------------ | ------------------- | ------------------ | ---------
a           | 0                  | 118.16              | 120                | 1.53
b           | 10                 | 121.00              | 120                | 0.83
c           | 30                 | 125.63              | 120                | 4.69
d           | 45                 | 127.09              | 120                | 5.91
Table 7. Experimental data analysis of bolt loosening under different light conditions.
Test Sample | Detection Value (°) | Measured Value (°) | Error (%)
----------- | ------------------- | ------------------ | ---------
a           | 89.85               | 90                 | 0.17
b           | 92.51               | 90                 | 2.79
c           | -                   | 90                 | -
d           | 85.57               | 90                 | 4.92

Sun, Y.; Li, M.; Dong, R.; Chen, W.; Jiang, D. Vision-Based Detection of Bolt Loosening Using YOLOv5. Sensors 2022, 22, 5184. https://doi.org/10.3390/s22145184