Article

A Feasibility Study on Extension of Measurement Distance in Vision Sensor Using Super-Resolution for Dynamic Response Measurement

1 Department of Convergence System Engineering, Chungnam National University, Daejeon 34134, Republic of Korea
2 Department of Future & Smart Construction Research, Korea Institute of Civil Engineering and Building Technology, Goyang-si 10223, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(20), 8496; https://doi.org/10.3390/s23208496
Submission received: 14 September 2023 / Revised: 3 October 2023 / Accepted: 10 October 2023 / Published: 16 October 2023
(This article belongs to the Special Issue Structural Health Monitoring Based on Sensing Technology)

Abstract

The current condition of civil infrastructure can be assessed through displacement measurements obtained with conventional contact-type sensors. To address the disadvantages of these traditional sensors, vision-based measurement systems have been developed in numerous studies and proven to be a viable alternative. Despite the benefits of vision sensors, it is well known that the accuracy of vision-based displacement measurement depends strongly on the camera's extrinsic and intrinsic parameters. In this study, the feasibility of a deep learning-based single image super-resolution (SISR) technique in a vision-based sensor system is investigated to alleviate the low spatial resolution of image frames at long measurement distances, and its robustness is evaluated using shaking table tests. The results confirm that SISR can reconstruct sharp images of natural targets, resulting in an extension of the measurement distance range, and that it mitigates displacement measurement error in the vision sensor-based measurement system. Building on this fundamental study of SISR in the feature point-based measurement system, further analyses such as modal analysis and damage detection should be conducted on low-resolution displacement measurement footage in order to explore the functionality of SR images.

1. Introduction

Infrastructure has been constructed to satisfy social needs and improve the quality of life of the public. While it provides convenience, various types of loads and environmental impacts have shortened its service life, degraded its functionality, and in some cases led to structural failures. Many developed countries have examined the present condition of their civil engineering structures and have called for a rational maintenance regime, including prioritization of backlogs of repair needs, to prevent further deterioration [1,2]. To prolong the serviceability of structures, investigating their structural health condition is essential [3,4,5].
The current status of infrastructure should be examined periodically through the measurement of various structural responses to enable preventative maintenance. Among these physical quantities, displacement data are an imperative consideration in broad engineering areas such as deflection measurement [6], load estimation [7,8], damage detection [9], model updating [10], modal analysis [11], and so forth. To obtain structural responses, conventional sensors such as linear variable differential transformers (LVDTs), accelerometers, and/or strain gauges are installed during the examination or monitoring of structures. While these sensors have been proven to provide reliable measurement data, they are often limited in spatial resolution or demand the installation of dense sensor arrays [9,12,13]. Conventional sensors such as LVDTs must be installed at stationary reference points and be in physical contact with the surface of the measurement points [13,14]. Such installation of traditional sensors can be time-consuming, laborious, and expensive to operate, or vulnerable to health and safety issues. Furthermore, direct access to the structure to place the sensors can be difficult or limited owing to low accessibility [4,15].
To address the disadvantages of contact-type sensors, noncontact-type sensors (e.g., Global Positioning System (GPS) receivers, laser Doppler vibrometers (LDVs), and radar interferometry systems) have been considered as alternatives to the traditional sensors [16,17,18]. GPS sensors have gained attention owing to their ease of installation but have limited measurement accuracy (errors between 5 and 10 mm) [17,19]. LDVs achieve good measurement accuracy over short measurement ranges; although a high-intensity laser beam allows the distance to be extended, it raises health and safety issues [17,18]. Interferometric radar can measure structural responses with good resolution, but reflecting surfaces must be placed on the structure, which makes it cumbersome and time-consuming [20].
With the development of vision sensors, high-performance digital cameras with high shutter speeds and high resolution have become more affordable. Together with the advancement of efficient computer vision algorithms, vision sensors have become capable surrogates for conventional contact-based sensors. During the last decade, a large number of studies have been conducted to obtain quantitative information from video [21]. Compared to conventional sensors, vision-based monitoring methods offer distinctive advantages such as convenient instrumentation, simple installation, remote measurement, and the capacity for multi-point measurement using a single sensor [3,4,5,7,13].
Despite the merits of the vision sensor, monitoring structures remotely using a single camera remains challenging in engineering applications [3,5,19,22]. Three concerns limit its practical application: (1) most vision-based displacement sensors require a predesigned high-contrast target panel, whose installation requires access to the structure; (2) the accuracy of vision-based displacement measurement is largely dependent on image quality, which is often difficult to guarantee under outdoor field conditions such as illumination variation and optical turbulence; and (3) the accuracy of the measurement system is sensitive to the operation distance and the camera's intrinsic parameters.
Vision-based measurement systems are susceptible to environmental limitations such as illumination fluctuation, measurement distance, and optical turbulence in the field in hot weather [23]. Among these, the measurement distance and range are crucial factors that govern not only the operational capacity but also the measurement performance of the vision sensor. The spatial resolution of a digital image depends on its spatial sampling density: the sampling interval, i.e., the physical distance represented by each pixel, increases as the field of view becomes wider. To obtain sharp images over a long distance, a long focal length lens or a denser imaging sensor array is required [24].
In this connection, simply upgrading the optical imaging system would seem a logical solution. However, such an approach is not cost-efficient because the hardware must be replaced regularly, and it applies only to newly captured high-resolution images; it cannot enhance the resolution of existing low-resolution images. As the computer vision field has progressed, image resolution enhancement based on signal processing has received increasing attention owing to its flexibility and economic feasibility [25]. Super-resolution is the task of reconstructing a higher spatial resolution version of a given low-resolution image. With super-resolution techniques and deep learning architectures, images with a resolution beyond the limit of the imaging system can support subsequent image processing tasks such as semantic segmentation [26,27], detection [28,29,30], and recognition [31,32]. Recently, upscaled images obtained using super-resolution techniques have been employed to analyze structural deformations from stitched multiple image frames of a beam structure [33] and to improve displacement measurement accuracy using surveillance video cameras [34]. However, only a few studies have examined the capabilities of super-resolution techniques in civil engineering, and their feasibility for measuring structural responses has not yet been broadly investigated.
The objective of this study is to systematically explore the feasibility of a super-resolution model in a vision sensor-based remote measurement system to enhance measurement performance and extend its operational capacity. Additionally, the robustness of the vision sensor with super-resolution images is experimentally investigated under various measurement distances.

2. Methodology of Dynamic Displacement Measurement Using Single Image Super-Resolution

SISR methods are normally applied to restoration and refinement tasks in computer vision. Although most researchers focus on improving the SR performance of the model itself, the implementation of SISR in vision sensor-based measurement systems to extend the measurement distance has rarely been investigated. This section briefly describes a feature point-based displacement measurement system and the methodology used to evaluate its measurement performance as the measurement range increases. In addition, the SISR method implemented in this study is described, and the design of its feasibility and robustness studies is explained.

2.1. Feature Point-Based Measurement System

To obtain the dynamic response of structures from image sequences, various tracking approaches from computer vision are available, such as model-based, region-based, active contour, and feature-based methods. Among these, template matching-based measurement schemes that use artificial targets as reference markers have shown outstanding performance in previous studies [3,4,13]. It has also been found that tracking motion trajectories using key points extracted from objects within images provides good displacement measurement performance [5]. However, this requires fundamental decisions about which features are good to detect and how to track them [35].
Interest points are normally detected in the form of corners, blobs, edges, junctions, lines, and so forth. Researchers have suggested several methods, such as tracking corner points, matching image patches with high spatial frequency content, or using image regions where the mix of second-order derivatives is high enough to overcome the aperture problem [36]. In addition, highly discriminative or salient features, invariant to illumination fluctuation, scale, and geometry, are required to achieve reliable object tracking [5]. To implement feature-based displacement measurement, two feature detectors, the Shi-Tomasi corner detector [36] and the KAZE feature detector [37], are incorporated with the Kanade–Lucas–Tomasi (KLT) feature tracker [38] in the vision sensor system.
Figure 1 shows the framework for measuring dynamic responses using feature points. First, the initial frame of an image sequence is used as the reference image, in which regions of interest (ROIs) are selected for key point detection. The ROIs can be either predesigned high-contrast markers or natural low-contrast targets existing on the structure, such as bolts and nuts. After the ROIs are selected, feature points within them are extracted by either the Shi-Tomasi corner detector [36] or the KAZE feature detector [37] and are registered to identify the initial positions of the subject in pixel coordinates, denoted $P_n(p_{x,1}, p_{y,1})$. The multiple key points $P_n(p_{x,i}, p_{y,i})$ detected in the $i$-th frame are matched with the registered features and tracked using the KLT tracker algorithm [38,39,40]. The Maximum Likelihood Estimation Sample Consensus (MLESAC) method [41] is used to estimate the dominant geometric transformation and to eliminate outliers among the matched features, yielding consistent displacement vectors between the reference and $i$-th images. The structural displacement $D_i$ is then computed as the product of the inlier distance vector $d_i$, estimated as the Euclidean norm in pixels, and a scale factor $S$ [42] determined from the camera's intrinsic and extrinsic parameters, i.e., $D_i = S \, d_i$.
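For illustration, the pipeline above maps onto standard OpenCV primitives. The following is a minimal sketch rather than the authors' implementation: it detects Shi-Tomasi corners in a user-defined ROI of the reference frame, tracks them with the pyramidal KLT tracker, rejects outliers with a robust geometric fit (RANSAC is used here as a stand-in for MLESAC, which OpenCV does not expose directly), and scales the mean pixel motion to millimeters with a given scale factor; the function name and arguments are illustrative.

```python
import cv2
import numpy as np

def measure_displacement(frames, roi, scale_mm_per_px):
    """Track feature points inside `roi` = (x, y, w, h) over grayscale `frames`
    and return the per-frame displacement magnitude in millimeters."""
    x, y, w, h = roi
    ref = frames[0]
    mask = np.zeros_like(ref)
    mask[y:y + h, x:x + w] = 255

    # Shi-Tomasi corners inside the ROI of the reference frame
    p_ref = cv2.goodFeaturesToTrack(ref, maxCorners=100, qualityLevel=0.01,
                                    minDistance=3, mask=mask)
    displacements = []
    for frame in frames[1:]:
        # Pyramidal KLT tracking of the registered key points
        p_cur, status, _ = cv2.calcOpticalFlowPyrLK(ref, frame, p_ref, None)
        ok = status.flatten() == 1
        good_ref = p_ref[ok].reshape(-1, 2)
        good_cur = p_cur[ok].reshape(-1, 2)

        # Robust geometric fit to reject outliers (RANSAC here; the paper
        # uses MLESAC for the same purpose)
        _, inliers = cv2.estimateAffinePartial2D(good_ref, good_cur,
                                                 method=cv2.RANSAC)
        inliers = inliers.flatten().astype(bool)

        # Mean Euclidean pixel motion of the inliers, scaled to millimeters
        d_px = np.linalg.norm(good_cur[inliers] - good_ref[inliers], axis=1).mean()
        displacements.append(d_px * scale_mm_per_px)
    return displacements
```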
Although the scale factor can be obtained simply from the camera's physical parameters, this requires the camera's optical axis to be perpendicular to the object surface, which in turn requires all measurement points on the object to lie within the same depth of field [15,43,44]. Misalignment of the optical axis complicates the implementation of the vision measurement system, because a small misalignment angle can go unnoticed during the experimental setup, especially when the measurement distance is relatively long. Likewise, in practice, it is occasionally unavoidable to tilt the optical axis in order to capture the structural behavior.
The measurement errors caused by nonperpendicularity of the optical axis have been evaluated with the aim of increasing the measurement accuracy of vision systems. Numerical studies compared the measurement errors arising from the scale factor with and without camera perpendicularity [15] and estimated errors of 0.9% and 1.1% for the perpendicular case and for a tilt angle of 30°, respectively. Thus, errors from a small tilt angle can be considered negligible and acceptable in practical implementations [15]. Camera calibration following Zhang's method [45], which can reduce lens distortion [22,44], can also be used to minimize measurement errors, but it was not performed in this study.
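Although calibration was not performed in this study, the procedure referenced above is straightforward to sketch. The snippet below is an illustrative outline (not part of the authors' workflow) of Zhang's checkerboard calibration as implemented in OpenCV, followed by undistortion of a frame before feature tracking; the pattern size and square size are assumed values.

```python
import cv2
import numpy as np

def calibrate_and_undistort(checkerboard_images, frame, pattern=(9, 6), square_mm=25.0):
    """Estimate camera intrinsics and lens distortion from checkerboard views
    (Zhang's method as implemented by cv2.calibrateCamera) and undistort `frame`.
    The pattern and square sizes are illustrative values, not from the paper."""
    # 3D coordinates of the checkerboard corners in the board plane (Z = 0)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points = [], []
    for img in checkerboard_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    h, w = frame.shape[:2]
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, (w, h), None, None)
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```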

2.2. Indoor Experimental Conditions for Dynamic Displacement Measurements

To evaluate the displacement measurement performance of the vision sensor using single image super-resolution (SISR) as the measurement distance increases, a three-story frame structure was prepared for indoor shaking table tests. As described in Figure 2, the aluminum slabs (300 mm × 200 mm) and stainless steel columns (300 mm × 25 mm) are bolted together at all connections. The thicknesses of the slabs and columns are 10 mm and 1.5 mm, respectively. The frame model was horizontally excited by the reciprocating motion of a shaking table driven by a servo motor (APMC-FAL01AM8K by LS ELECTRIC Co., Ltd., Anyang-si, Republic of Korea). The horizontal displacement amplitude of the base slab was ±14.5 mm, and a 9.0 Hz sinusoidal motion was applied to excite the third mode shape.
As reference data, the horizontal displacements of each slab were collected at a sampling rate of 120 Hz by four LVDTs (CDP-50 and CDP-100 by Tokyo Sokki Kenkyujo Co., Ltd., Tokyo, Japan) and a data acquisition device (SCXI-1000 by National Instruments Corp., Austin, TX, USA) during the shaking table tests. The LVDTs were installed between each floor of the frame and stationary reference points. Predesigned high-contrast targets (105 mm × 70 mm) were fixed at the midpoint of each floor, as shown by the red regions in Figure 2a, to provide multiple ROIs for extracting and tracking features. Additionally, existing natural targets, namely bolt shafts and nuts (5 mm and 10 mm in diameter, respectively) and slab cross-sections (105 mm × 10 mm), indicated by the green and blue areas, respectively, were used to compare displacement measurement performance across target types.
To capture the structural behavior, a video camera (DSC-RX100M7 by Sony Corp., Tokyo, Japan) with a 72 mm focal length, equivalent to 200 mm in the 35 mm image sensor format, was initially set at a stationary point 10 m away from the shaking table. As illustrated in Figure 2c, the measurement distance, defined as the distance between the position of the image sensor and the fiducial targets, was surveyed using a total station (GT-501 by Topcon Corp., Tokyo, Japan). The scale factor was 0.95 mm/pixel in this testing configuration. A laptop (P57F by Dell Inc., Round Rock, TX, USA) running Imaging Edge Desktop 3.2.01, software provided by Sony Corp., was used to align the camera's optical axis with the frame structure and to control the recording settings remotely. To obtain sharp image sequences, all targets were manually focused by observing the magnified view in this software. As the harmonic motion was applied, the motion trajectories of the frame structure were recorded as FHD image sequences (1920 × 1080 pixels) for 180 s at a sampling rate of 120 frames per second (FPS). The illuminance was held constant at 450 lux throughout the testing session, and the mean ambient temperature of the laboratory in December was 18 degrees Celsius. The displacement measurements were repeated at camera stationary points in 10 m increments up to 40 m, as shown in Figure 3, and the scale factors corresponding to measurement distances of 20 m to 40 m were 1.91 mm/pixel, 2.87 mm/pixel, and 3.82 mm/pixel, respectively.
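As a side note, the reported scale factors are consistent with a simple pinhole-camera estimate, S = (D / f) · p, where D is the measurement distance, f the focal length, and p the physical pixel pitch. The short check below assumes the FHD frame spans roughly the 13.2 mm width of a 1-inch-type sensor; this sensor width is an assumption for illustration, not a value stated in the paper.

```python
# Pinhole-model estimate of the scale factor S = (D / f) * p, where D is the
# measurement distance, f the focal length, and p the pixel pitch on the sensor.
# The 13.2 mm effective sensor width is an assumed value for a 1-inch-type
# sensor recording FHD video; it is not stated in the paper.
focal_length_mm = 72.0
sensor_width_mm = 13.2
pixels_across = 1920
pixel_pitch_mm = sensor_width_mm / pixels_across

for distance_m in (10, 20, 30, 40):
    scale_mm_per_px = (distance_m * 1000.0 / focal_length_mm) * pixel_pitch_mm
    print(f"{distance_m} m: {scale_mm_per_px:.2f} mm/pixel")
# Prints approximately 0.95, 1.91, 2.86, and 3.82 mm/pixel, close to the
# reported scale factors of 0.95, 1.91, 2.87, and 3.82 mm/pixel.
```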

2.3. Single Image Super-Resolution in Vision Sensor-Based Measurement System

Various convolutional neural network (CNN)-based SR networks have achieved remarkable fidelity on bicubically downsampled images [46,47,48,49]. In most of them, bicubic downsampling is used to construct the training data pairs, and similarly downsampled images are fed to the network architectures in the test phase [50]. Although these networks improve fidelity by training to minimize the average error, the results generated when the original image is used for testing are blurry and noisy, mainly because the bicubically sub-sampled image does not share the same domain as the original image [50]. Owing to this domain gap, these models create unsatisfactory artifacts when tested on original images.
Ji et al. [48] found that EDSR [46] and ZSSR [49], for instance, produce unpleasant results on real-world images. The authors also concluded that a proper degradation technique is essential for real-world super-resolution in order to generate LR images while maintaining the attributes of the original domain [50]. To deal with this issue, many researchers have turned to Generative Adversarial Network (GAN)-based models, introducing adversarial and perceptual losses to improve the visual quality of the results [51,52,53,54]. Ji et al. [50] proposed a framework (RealSR) to overcome the abovementioned challenges on real-world images. As described in Figure 4, the network operates in two stages: (1) a realistic blurry LR image is generated by sub-sampling HR images with an estimated degradation, and (2) the SR network is trained using the constructed data.
To mitigate the degradation of spatial resolution, the pre-trained RealSR model [50] was used for upscaling the image sequences. Owing to its realistic SR performance, particularly on features already existing on the structure, it can be applied directly to natural targets in vision sensor-based measurement systems at long measurement distances. Consequently, the installation of fiducial targets would not be required for the detection of distinctive feature points.
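In practice, applying a pre-trained SISR network to a recorded sequence amounts to running the generator frame by frame before the tracking stage. The sketch below assumes a generic PyTorch generator (`sr_model`) that maps a low-resolution RGB tensor in [0, 1] to an upscaled one, in the style of RealSR/ESRGAN generators; the model object and its loading are left abstract because they depend on the specific checkpoint used, and this is not the authors' code.

```python
import cv2
import numpy as np
import torch

def upscale_sequence(frames_bgr, sr_model, device="cuda"):
    """Run a pre-trained SISR generator on every frame of a sequence.
    `sr_model` is assumed to be a torch.nn.Module mapping a (1, 3, H, W)
    tensor in [0, 1] to a (1, 3, s*H, s*W) tensor, as typical RealSR/ESRGAN-style
    generators do; this is an illustrative wrapper, not the authors' code."""
    sr_model = sr_model.to(device).eval()
    upscaled = []
    with torch.no_grad():
        for frame in frames_bgr:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
            lr = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).to(device)
            sr = sr_model(lr).clamp(0.0, 1.0)
            out = (sr.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255.0).astype(np.uint8)
            upscaled.append(cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
    return upscaled
```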
For the validation tests, two sets of image sequences from the shaking table tests conducted at 10 m and 20 m were first used to assess the feasibility of SISR by computing similarity evaluation metrics, and the displacement measurement error was compared to analyze the measurement performance when the upscaled images were applied. All image frames were first downscaled by a factor of 0.25, producing 480 × 270-pixel images. The LR images were then reconstructed with a scale factor of 4 using bicubic (BC) upscaling and RealSR [50], respectively. The upsampled images (1920 × 1080 pixels) were used to compute similarity via evaluation metrics such as the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Learned Perceptual Image Patch Similarity (LPIPS) [55]. Furthermore, the image sequences of the displacement measurements at 30 m and 40 m were upsampled with a scale factor of 2. These SR images, equivalent to UHD images (3840 × 2160 pixels), were used to verify the robustness of SISR in the vision sensor-based measurement system for extending the measurement distance.
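The downscale-and-restore evaluation described above can be reproduced with commonly used libraries. The following sketch (not the authors' evaluation code) downscales a frame by 0.25 with bicubic interpolation, restores it with a supplied upscaling function, and computes PSNR and SSIM with scikit-image and LPIPS with the `lpips` package; the helper names are illustrative.

```python
import cv2
import numpy as np
import torch
import lpips                      # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")   # perceptual metric network

def _to_lpips_tensor(img_rgb):
    # LPIPS expects a (1, 3, H, W) float tensor scaled to [-1, 1]
    t = torch.from_numpy(img_rgb.astype(np.float32) / 127.5 - 1.0)
    return t.permute(2, 0, 1).unsqueeze(0)

def evaluate_frame(original_bgr, upscale):
    """Downscale a frame by 0.25 (bicubic), restore it with `upscale`
    (bicubic or an SR-model wrapper returning an image of the original size),
    and compute PSNR / SSIM / LPIPS against the original frame."""
    h, w = original_bgr.shape[:2]
    lr = cv2.resize(original_bgr, (w // 4, h // 4), interpolation=cv2.INTER_CUBIC)
    restored = upscale(lr)

    psnr = peak_signal_noise_ratio(original_bgr, restored, data_range=255)
    ssim = structural_similarity(original_bgr, restored,
                                 channel_axis=-1, data_range=255)
    lp = lpips_fn(_to_lpips_tensor(cv2.cvtColor(original_bgr, cv2.COLOR_BGR2RGB)),
                  _to_lpips_tensor(cv2.cvtColor(restored, cv2.COLOR_BGR2RGB))).item()
    return psnr, ssim, lp

# Bicubic baseline: upscale back by a factor of 4
def bicubic_x4(lr):
    return cv2.resize(lr, (lr.shape[1] * 4, lr.shape[0] * 4),
                      interpolation=cv2.INTER_CUBIC)
```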

3. Results and Discussion

The dynamic structural responses of the frame structure were measured with LVDTs and estimated from the trajectories of feature points within three different types of ROIs. First, the robustness of the vision sensor-based measurement system with increasing measurement distance was evaluated by comparison with the LVDT data. In addition, the feasibility of SISR in the vision sensor-based measurement system was assessed, and its robustness for natural targets is discussed with respect to the extension of the measurement distance.

3.1. Displacement Measurement Performance of Feature Point-Based Measurement System

To evaluate the vision sensor's measurement performance as the measurement distance increases, three different targets were selected for extracting corner points and KAZE features, respectively. Figure 5 compares the key point detection results for the artificial and natural targets on the 3rd floor for both feature detection schemes and for each measurement distance increment. The image quality of the targets degrades as the measurement distance increases because of the lower pixel density within the allocated ROIs. As shown in Figure 5, the high-contrast fiducial target provides a good reference region for detecting features with both algorithms as the measurement distance increases.
However, as can be seen in the feature detection results for the bolt connection in Table 1, fewer features are detected within the ROIs because the coarser image pixels obscure the bolt details; that is, the images have lower spatial density as the measurement distance range increases. Because of this pixel blocking, or pixelation, the corner point detection capacity is poor at measurement distances of 30 m and beyond. Although a couple of corner points are still detected in the bolt connections at 40 m, they are not usable because their number falls below the minimum of three points per ROI required by the MLESAC algorithm. In contrast, the mean number of KAZE features extracted within the bolt connection exceeds the minimum requirement for tracking even at the longest measurement distance. For the slab cross-section, both feature detection schemes fail to detect features from a measurement distance of 30 m onward.
To evaluate the vision sensor's performance in the time domain, the dynamic displacements of the floors were estimated by tracking key points extracted from both artificial and natural targets using the two feature detection schemes, the Shi-Tomasi corner detector and the KAZE feature detector. For brevity, the corresponding displacement measurements are denoted STC and KZF, respectively. They are compared to the LVDT data to quantify measurement error using the Root Mean Square Error (RMSE) and the Mean Absolute Peak Error (MAPE).
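The RMSE follows its standard definition; the exact formula for the MAPE is not written out in the paper, and the negative values reported later suggest that the sign of the peak error is retained. The snippet below is therefore one plausible implementation, assuming the MAPE is the signed relative error of peak amplitudes averaged over the peaks of the reference signal.

```python
import numpy as np
from scipy.signal import find_peaks

def rmse(vision_mm, lvdt_mm):
    """Root mean square error between vision-based and reference displacement."""
    vision_mm = np.asarray(vision_mm, dtype=float)
    lvdt_mm = np.asarray(lvdt_mm, dtype=float)
    return np.sqrt(np.mean((vision_mm - lvdt_mm) ** 2))

def mean_peak_error_percent(vision_mm, lvdt_mm):
    """Signed mean relative error (%) of peak amplitudes: one plausible reading
    of the paper's MAPE, whose exact definition is not stated."""
    vision_abs = np.abs(np.asarray(vision_mm, dtype=float))
    lvdt_abs = np.abs(np.asarray(lvdt_mm, dtype=float))
    peaks, _ = find_peaks(lvdt_abs)          # peak samples of the reference signal
    ref_amp = lvdt_abs[peaks]
    vis_amp = vision_abs[peaks]              # vision amplitude at the same samples
    return np.mean((vis_amp - ref_amp) / ref_amp) * 100.0
```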
Figure 6 shows the displacement time histories of the artificial target excited at 9 Hz for each measurement range increment under normal indoor illumination. The displacements of the 1st, 2nd, and 3rd floors are relative to that of the base table. The relative displacement time histories of each floor are shown over a duration of 2 s, and enlarged plots are provided for better illustration.
As presented in all the results, the feature point-based measurement system using artificial targets can accurately capture relative vibration amplitudes over a wide range, from 0 to 30 mm. Moreover, the vision sensor delivers good displacement measurement performance at a high excitation frequency.
Figure 7 and Figure 8 show the structural dynamic responses obtained using the two types of natural targets. As can be seen in the results at 10 m and 20 m, the STCs and KZFs show good agreement with the corresponding reference data. Because of the pixelation resulting from the increase in scale factor for the natural targets, measurement using the slab cross-section becomes infeasible from a distance of 30 m in this testing configuration. Although both feature detection schemes are applicable to the bolt connection up to 30 m, the corner detector is unsuitable for tracking beyond 30 m due to insufficient features, and the measurement accuracy for the bolt connection decreases dramatically between 30 m and 40 m.
To evaluate the displacement measurement performance as the distance increases, the errors were quantified as shown in Table 2, using the RMSE in mm and the MAPE in percent. The RMSE of the fiducial target ranges from 0.61 mm to 0.75 mm across both feature detection schemes and increases roughly in proportion to the measurement distance. The maximum MAPEs of STC and KZF using the fiducial target are 1.58% and 1.56%, respectively, at 40 m. As the error quantification for the artificial target shows, there is no significant difference between the two feature detection techniques.
For the bolt connection, the RMSEs of STC and KZF increase slightly to 0.79 mm and 0.76 mm, respectively, at 30 m, while the MAPEs of both displacement measurements drop sharply to negative values at 30 m. Furthermore, the RMSE of KZF increases significantly to 1.2 mm and the corresponding MAPE decreases sharply to −10.2% at 40 m. Comparing these with the RMSEs and MAPEs of the slab cross-section, similar levels of measurement error are observed at 10 m and 20 m.
These comparative studies show that measurement distance is a critical factor in the degradation of the measurement accuracy of a vision sensor. This is particularly evident for small natural features, whose spatial resolution decreases as the measurement distance increases. Moreover, the KAZE features are found to be more robust to measurement distance than the corner points.

3.2. Feasibility Study of Super-Resolution in Vision Sensor-Based Measurement System

To validate the feasibility of SISR in a vision sensor-based measurement system, two sets of recorded image sequences from the shaking table tests at 10 m and 20 m were used. All original images were first downsampled by a factor of 0.25 using the BC operation, and the downscaled images were then separately upscaled by a factor of 4 using bicubic interpolation and RealSR. Figure 9 shows the resulting images of the artificial target and the bolt connection on the frame structure at a measurement distance of 10 m. Compared with the original image (Figure 9a), both types of targets appear blurry in the BC images (Figure 9c). As shown in the fiducial target image (Figure 9d), distinctive visual features were recovered by RealSR.
Table 3 presents a quantitative analysis of the similarity of the BC and SR images using 1000 corresponding reference image frames. The two SR operations yield similar numerical results at 10 m, whereas the BC images show higher PSNR and SSIM as the measurement distance increases. Because PSNR and SSIM are typically used as evaluation metrics in image restoration, they emphasize the fidelity of the image. RealSR, which uses a perceptual loss for real-world SR, performs better on the LPIPS metric.
The reconstructed image frames were then applied to the feature point-based displacement measurement system. Figure 10 shows the feature detection results for the images generated by the two SR methods. Owing to their sharper visual appearance, the RealSR (RSR) images provide more distinctive corner points, as shown in Figure 10b. Nevertheless, the KAZE feature detector maintains reliable detection performance even on the blurry BC images because of its nonlinear diffusion filtering process.
Table 4 shows the error quantification of the displacement measurements using the upsampled images, used to evaluate the feasibility of SISR in the vision sensor-based measurement system. The RMSEs of the displacement measurements using BC images are generally higher than those using RealSR. The standard deviations and means of the RMSEs for the BC and RealSR images are 0.06 mm and 0.05 mm, and 0.68 mm and 0.67 mm, respectively. For the MAPEs, the standard deviations and means for the BC and RealSR images are 0.147 and 0.151, and 1.75% and 1.57%, respectively. In terms of feature detection methods, the mean RMSE of KZF is 0.65 mm, which is 0.05 mm lower than that of STC.
Smaller average RMSEs are found for both displacement measurements using the artificial target compared with those using the natural targets. Because of the high contrast of the predesigned markers, distinctive features can be detected with both SR methods, leading to smaller RMSE differences than for the natural targets. Comparing the two types of natural targets, the mean RMSEs of the RSR images are smaller than those of the BC images. Owing to the realistic SR performance of RealSR, the SR model can enhance the image quality of natural targets, i.e., the bolt connection and the slab cross-section. Thus, the SR model can be employed both for improving image resolution and for tracking motion trajectories in the feature-based measurement system.

3.3. Alleviation of Low Spatial Resolution Using Super-Resolution

The image sequences recorded in the previous experiments at 30 m and 40 m were used to examine how SR images alleviate the low spatial resolution in the vision-based measurement system. In addition, a robustness evaluation was carried out to investigate the measurement accuracy of the vision sensor as the measurement distance increases. The original image frames were upscaled with the pre-trained RealSR model by a factor of 2, so that each enhanced frame was a 3840 × 2160-pixel image. Owing to the increase in pixel density, the scale factors at 30 m and 40 m decrease to 1.43 mm/pixel and 1.91 mm/pixel, respectively; the scale factor of the SR image at 40 m thus corresponds to that of the original image at 20 m.
Figure 11 shows the feature detection results at 40 m, comparing the original image frames (left) and the upscaled images (right). As observed for the fiducial target, the image quality was properly enhanced under the indoor lighting conditions. The details of the bolt connections are lost in the synthesized images owing to the lack of pixel information in the original frames. Despite the ambiguity of the natural targets in the SR images, points of interest were detected in all types of targets by both detectors.
Figure 12 and Figure 13 plot representative displacement time histories of the LVDT data and the KZFs using the original and SR images at each measurement distance. As both figures show, the KZFs obtained from the SR image sequences of the fiducial targets agree well with the reference data regardless of the increase in measurement distance.
For the bolt connection, the discrepancies observed with the original images at 30 m and 40 m become smaller, as can be seen in the displacement comparisons, particularly in the enlarged time segments in Figure 12b and Figure 13b. Owing to the increase in spatial resolution, the enhanced images of the slab cross-section allow the movement of each floor to be tracked and extend the measurement distance range of the vision sensor (Figure 12c and Figure 13c).
Table 5 presents the error quantification for the robustness evaluation of SR in the feature point-based displacement measurement obtained by refining the coarse details of the image frames. All the RMSEs of the displacement measurements using the upscaled images of the artificial target and bolt connection are smaller than the corresponding measurements from the original images. The RMSEs of STC and KZF for the artificial target at 30 m decrease to 0.67 mm and 0.65 mm, respectively, and the corresponding MAPEs decrease to 1.44% and 1.42%. Similar to the case at 30 m, the higher spatial resolution reduces the RMSEs and MAPEs as the measurement distance range is extended.
For the bolt connection, the RMSEs of STC and KZF at 30 m are 0.70 mm and 0.65 mm, respectively, and the absolute MAPEs of STC and KZF decrease to 1.96% and 1.80%. While the corner points can now contribute to tracking the trajectory of the structure at 40 m owing to the higher pixel density, the RMSE and MAPE of STC at 40 m are 0.77 mm and −5.10%, respectively. Similar to KZF at 30 m, the RMSE and MAPE of KZF at 40 m decrease to 0.87 mm and −4.84%.
For the slab cross-section, SISR provides sufficient spatial resolution to detect key points, i.e., corner points and KAZE features, within the ROIs at longer measurement distances. Compared with the RMSE and MAPE of both measurements at 30 m using the bolt connection, improved measurement accuracy is found for the slab cross-section. While similar measurement error trends are observed for STC and KZF at 30 m, KZF shows better measurement accuracy than STC at 40 m, owing to the larger number of features and the more distinctive points of interest in the slab cross-section that participate in tracking the dynamic responses.

4. Conclusions

The dynamic responses of a frame structure were obtained by tracking the trajectories of feature points within predesigned targets and natural targets, namely bolt connections and slab cross-sections. The fiducial targets provide distinctive ROIs for the Shi-Tomasi corner detector and the KAZE feature detector, allowing features to be extracted and tracked up to a measurement distance of 40 m. The evaluation of the feature point-based measurement system showed excellent agreement between the displacements computed using the artificial targets and those measured by the reference sensors. Although natural targets are suitable for obtaining structural dynamic responses, the measurements using these targets are limited beyond a distance of 30 m, and their feature detection capacity is sensitive to the pixel blocking that results from the loss of spatial density.
The GAN-based SISR was introduced into the vision sensor-based measurement system to alleviate the low spatial resolution caused by the increase in measurement distance. A feasibility study of SISR in the vision-based sensor system was carried out for remotely measuring multiple structural responses by tracking the natural features existing on structures at long measurement distances. The pre-trained SR model produced outstanding results in terms of human perception by refining coarse image frames into high-resolution images. Moreover, the SR images proved usable for displacement estimation, with no significant differences in measurement performance between the reconstructed and original image frames.
In addition to the feasibility study of SISR, the robustness of the feature point-based displacement measurement with respect to extending the measurement distance was evaluated by increasing the spatial resolution of the image sequences. Owing to the enhanced images, the measurement accuracy of the feature point-based measurement system increased remarkably for the natural targets. Furthermore, the operational capacity of the vision sensor using natural targets was extended to 30 m for both feature detection schemes. While displacements can be computed using the natural targets at 40 m, the KAZE features are more suitable for the lower contrast ROIs.
In summary, RealSR was found to reconstruct sharp images of natural targets, resulting in an extension of the measurement distance range of the feature point-based measurement system. It was also confirmed that SISR mitigates the displacement measurement error of the vision sensor-based measurement system at long measurement distances. Building on this fundamental study of SISR in the feature point-based measurement system, further analyses such as modal analysis and damage detection should be conducted on low-resolution displacement measurement footage in order to explore the functionality of SR images.

Author Contributions

Conceptualization, J.G. and D.C.; methodology, J.G.; software, J.G.; validation, D.C.; formal analysis, J.G.; investigation, J.G.; resources, J.G.; data curation, J.G.; writing—original draft preparation, J.G.; writing—review and editing, D.C.; visualization, J.G.; supervision, D.C.; project administration, D.C.; funding acquisition, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Chungnam National University [2022]. The authors gratefully acknowledge support.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This study was supported by the Department of Convergence System Engineering of Chungnam National University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ASCE 2021 Report Card for America’s Infrastructures; ASCE: Reston, VA, USA, 2021.
  2. ICE State of the Nation: Infrastructure 2014; ICE: London, UK, 2014.
  3. Fukuda, Y.; Feng, M.Q.; Narita, Y.; Kaneko, S.; Tanaka, T. Vision-Based Displacement Sensor for Monitoring Dynamic Response Using Robust Object Search Algorithm. IEEE Sens. J. 2013, 13, 4725–4732. [Google Scholar] [CrossRef]
  4. Fukuda, Y.; Feng, M.Q.; Shinozuka, M. Cost-Effective Vision-Based System for Monitoring Dynamic Response of Civil Engineering Structures. Struct. Control Health Monit. 2010, 17, 918–936. [Google Scholar] [CrossRef]
  5. Yoon, H.; Elanwar, H.; Choi, H.; Golparvar-Fard, M.; Spencer, B.F., Jr. Target-Free Approach for Vision-Based Structural System Identification Using Consumer-Grade Cameras. Struct. Control Health Monit. 2016, 23, 1405–1416. [Google Scholar] [CrossRef]
  6. Spencer, B.F., Jr.; Hoskere, V.; Narazaki, Y. Advances in Computer Vision-Based Civil Infrastructure Inspection and Monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
  7. Kim, S.-W.; Jeon, B.-G.; Kim, N.-S.; Park, J.-C. Vision-Based Monitoring System for Evaluating Cable Tensile Forces on a Cable-Stayed Bridge. Struct. Health Monit. 2013, 12, 440–456. [Google Scholar] [CrossRef]
  8. Celik, O.; Dong, C.-Z.; Catbas, F.N. A Computer Vision Approach for the Load Time History Estimation of Lively Individuals and Crowds. Comput. Struct. 2018, 200, 32–52. [Google Scholar] [CrossRef]
  9. Cha, Y.-J.; Chen, J.G.; Büyüköztürk, O. Output-Only Computer Vision Based Damage Detection Using Phase-Based Optical Flow and Unscented Kalman Filters. Eng. Struct. 2017, 132, 300–313. [Google Scholar] [CrossRef]
  10. Feng, D.; Feng, M.Q. Model Updating of Railway Bridge Using in Situ Dynamic Displacement Measurement under Trainloads. J. Bridge Eng. 2015, 20, 4015019. [Google Scholar] [CrossRef]
  11. Poozesh, P.; Sarrafi, A.; Mao, Z.; Niezrecki, C. Modal Parameter Estimation from Optically-Measured Data Using a Hybrid Output-Only System Identification Method. Measurement 2017, 110, 134–145. [Google Scholar] [CrossRef]
  12. Lee, J.J.; Shinozuka, M. A Vision-Based System for Remote Sensing of Bridge Displacement. Ndt E Int. 2006, 39, 425–431. [Google Scholar] [CrossRef]
  13. Feng, M.Q.; Fukuda, Y.; Feng, D.; Mizuta, M. Nontarget Vision Sensor for Remote Measurement of Bridge Dynamic Response. J. Bridge Eng. 2015, 20, 4015023. [Google Scholar] [CrossRef]
  14. Xu, Y.; Brownjohn, J.M.W. Review of Machine-Vision Based Methodologies for Displacement Measurement in Civil Structures. J. Civ. Struct. Health Monit. 2018, 8, 91–110. [Google Scholar] [CrossRef]
  15. Feng, D.; Feng, M.Q.; Ozer, E.; Fukuda, Y. A Vision-Based Sensor for Noncontact Structural Displacement Measurement. Sensors 2015, 15, 16557–16575. [Google Scholar] [CrossRef] [PubMed]
  16. Bock, Y.; Melgar, D.; Crowell, B.W. Real-Time Strong-Motion Broadband Displacements from Collocated GPS and Accelerometers. Bull. Seismol. Soc. Am. 2011, 101, 2904–2925. [Google Scholar] [CrossRef]
  17. Kohut, P.; Holak, K.; Uhl, T.; Ortyl, Ł.; Owerko, T.; Kuras, P.; Kocierz, R. Monitoring of a Civil Structure’s State Based on Noncontact Measurements. Struct. Health Monit. 2013, 12, 411–429. [Google Scholar] [CrossRef]
  18. Nassif, H.H.; Gindy, M.; Davis, J. Comparison of Laser Doppler Vibrometer with Contact Sensors for Monitoring Bridge Deflection and Vibration. Ndt E Int. 2005, 38, 213–218. [Google Scholar] [CrossRef]
  19. Wu, T.; Tang, L.; Shao, S.; Zhang, X.-Y.; Liu, Y.-J.; Zhou, Z.-X. Cost-Effective, Vision-Based Multi-Target Tracking Approach for Structural Health Monitoring. Meas. Sci. Technol. 2021, 32, 125116. [Google Scholar] [CrossRef]
  20. Gentile, C.; Bernardini, G. An Interferometric Radar for Non-Contact Measurement of Deflections on Civil Engineering Structures: Laboratory and Full-Scale Tests. Struct. Infrastruct. Eng. 2010, 6, 521–534. [Google Scholar] [CrossRef]
  21. Bhowmick, S.; Nagarajaiah, S.; Lai, Z. Measurement of Full-Field Displacement Time History of a Vibrating Continuous Edge from Video. Mech. Syst. Signal Process 2020, 144, 106847. [Google Scholar] [CrossRef]
  22. Sładek, J.; Ostrowska, K.; Kohut, P.; Holak, K.; Gaska, A.; Uhl, T. Development of a Vision Based Deflection Measurement System and Its Accuracy Assessment. Measurement 2013, 46, 1237–1249. [Google Scholar] [CrossRef]
  23. Luo, L.; Feng, M.Q.; Wu, J. A Comprehensive Alleviation Technique for Optical-Turbulence-Induced Errors in Vision-Based Displacement Measurement. Struct. Control Health Monit. 2020, 27, e2496. [Google Scholar] [CrossRef]
  24. Han, Y.; Wu, G.; Feng, D. Vision-Based Displacement Measurement Using an Unmanned Aerial Vehicle. Struct. Control Health Monit. 2022, 29, e3025. [Google Scholar] [CrossRef]
  25. Chen, H.; He, X.; Qing, L.; Wu, Y.; Ren, C.; Sheriff, R.E.; Zhu, C. Real-World Single Image Super-Resolution: A Brief Review. Inf. Fusion. 2022, 79, 124–145. [Google Scholar] [CrossRef]
  26. Lei, S.; Shi, Z.; Wu, X.; Pan, B.; Xu, X.; Hao, H. Simultaneous Super-Resolution and Segmentation for Remote Sensing Images. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3121–3124. [Google Scholar]
  27. Wang, L.; Li, D.; Zhu, Y.; Tian, L.; Shan, Y. Dual Super-Resolution Learning for Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3774–3783. [Google Scholar]
  28. Zhang, Y.; Bai, Y.; Ding, M.; Xu, S.; Ghanem, B. KGSnet: Key-Point-Guided Super-Resolution Network for Pedestrian Detection in the Wild. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2251–2265. [Google Scholar] [CrossRef] [PubMed]
  29. Pang, Y.; Cao, J.; Wang, J.; Han, J. JCS-Net: Joint Classification and Super-Resolution Network for Small-Scale Pedestrian Detection in Surveillance Images. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3322–3331. [Google Scholar] [CrossRef]
  30. Li, J.; Wang, J.; Chen, X.; Luo, Z.; Song, Z. Multiple Task-Driven Face Detection Based on Super-Resolution Pyramid Network. J. Internet Technol. 2019, 20, 1263–1272. [Google Scholar]
  31. Yang, X.; Wu, W.; Liu, K.; Kim, P.W.; Sangaiah, A.K.; Jeon, G. Long-Distance Object Recognition with Image Super Resolution: A Comparative Study. IEEE Access 2018, 6, 13429–13438. [Google Scholar] [CrossRef]
  32. Wang, Z.; Chang, S.; Yang, Y.; Liu, D.; Huang, T.S. Studying Very Low Resolution Recognition Using Deep Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4792–4800. [Google Scholar]
  33. Trucco, E.; Verri, A. Introductory Techniques for 3-D Computer Vision; Prentice Hall: Englewood Cliffs, NJ, USA, 1998; Volume 201. [Google Scholar]
  34. Shi, J.; Tomasi, C. Good Features to Track. In Proceedings of the 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar]
  35. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE Features. In Proceedings of the Computer Vision—ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Proceedings, Part VI 12. pp. 214–227. [Google Scholar]
  36. Tomasi, C.; Kanade, T. Detection and Tracking of Point. Int. J. Comput. Vis. 1991, 9, 137–154. [Google Scholar] [CrossRef]
  37. Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the IJCAI’81: 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 2, pp. 674–679. [Google Scholar]
  38. Kalal, Z.; Mikolajczyk, K.; Matas, J. Forward-Backward Error: Automatic Detection of Tracking Failures. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2756–2759. [Google Scholar]
  39. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef]
  40. Badali, A.P.; Zhang, Y.; Carr, P.; Thomas, P.J.; Hornsey, R.I. Scale Factor in Digital Cameras. In Proceedings of the Photonic Applications in Biosensing and Imaging, Toronto, ON, Canada, 12–14 September 2005; Volume 5969, pp. 556–565. [Google Scholar]
  41. Casciati, F.; Wu, L. Local Positioning Accuracy of Laser Sensors for Structural Health Monitoring. Struct. Control Health Monit. 2013, 20, 728–739. [Google Scholar] [CrossRef]
  42. Wu, L.-J.; Casciati, F.; Casciati, S. Dynamic Testing of a Laboratory Model via Vision-Based Sensing. Eng. Struct. 2014, 60, 113–125. [Google Scholar] [CrossRef]
  43. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  44. Hu, X.; Mu, H.; Zhang, X.; Wang, Z.; Tan, T.; Sun, J. Meta-SR: A Magnification-Arbitrary Network for Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1575–1584. [Google Scholar]
  45. He, X.; Mo, Z.; Wang, P.; Liu, Y.; Yang, M.; Cheng, J. Ode-Inspired Network Design for Single Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1732–1741. [Google Scholar]
  46. Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
  47. Qiu, Y.; Wang, R.; Tao, D.; Cheng, J. Embedded Block Residual Network: A Recursive Restoration Model for Single-Image Super-Resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4180–4189. [Google Scholar]
  48. Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-World Super-Resolution via Kernel Estimation and Noise Injection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 466–467. [Google Scholar]
  49. Shocher, A.; Cohen, N.; Irani, M. “Zero-Shot” Super-Resolution Using Deep Internal Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3118–3126. [Google Scholar]
  50. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  51. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. Esrgan: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
  52. Zhang, W.; Liu, Y.; Dong, C.; Qiao, Y. Ranksrgan: Generative Adversarial Networks with Ranker for Image Super-Resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3096–3105. [Google Scholar]
  53. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
  54. Kromanis, R.; Forbes, C.; Borah, S. Super-Resolution Images for Measuring Structural Response. In Proceedings of the SMAR 2019-Fifth Conference on Smart Monitoring, Assessment and Rehabilitation of Civil Structures, Potsdam, Germany, 27–29 August 2019; pp. 1–8. [Google Scholar]
  55. Sun, C.; Gu, D.; Zhang, Y.; Lu, X. Vision-Based Displacement Measurement Enhanced by Super-Resolution Using Generative Adversarial Networks. Struct. Control Health Monit. 2022, 29, e3048. [Google Scholar] [CrossRef]
Figure 1. Flowchart of structural displacement measurement based on feature points.
Figure 2. Frame structure and shaking table test configuration. (a) Frame model configuration; (b) vision sensor setup; (c) shaking table test setup (side view).
Figure 3. Field of view of the indoor shaking table test. Measurement distance changed from 10 m (left) to 40 m (right) with 10 m increments.
Figure 4. The framework of RealSR method. Adapted with permission from [48]. 2023, Xiaozhong Ji.
Figure 5. Key point detection results for each measurement distance. Red +, blue *, and green × denote the detected points within the fiducial target, slab cross-section, and bolt connection, respectively. (a) Corner point detection. (b) KAZE feature detection.
Figure 6. Displacement measurement comparison of artificial target. (a) Measurement distance of 10 m. (b) Measurement distance of 20 m. (c) Measurement distance of 30 m. (d) Measurement distance of 40 m.
Figure 7. Displacement measurement comparison of bolt connection. (a) Measurement distance of 10 m. (b) Measurement distance of 20 m. (c) Measurement distance of 30 m. (d) Measurement distance of 40 m.
Figure 8. Displacement measurement comparison of slab cross-section. (a) Measurement distance of 10 m. (b) Measurement distance of 20 m.
Figure 9. Visualization comparison of upscaled images. (a) Org. (1.0). (b) BC (0.25). (c) BC (1.0). (d) RealSR (1.0).
Figure 10. Corner point and KAZE feature detection results for bicubic upsampled and RealSR images at 10 m and 20 m. Red +, blue *, and green × denote the detected points within the fiducial target, slab cross-section, and bolt connection, respectively. (a) Corner point detection (BC images). (b) Corner point detection (RSR images). (c) KAZE feature detection (BC images). (d) KAZE feature detection (RSR images).
Figure 11. Corner point and KAZE feature detection results for RealSR images at 40 m. Red +, blue *, and green × denote the detected points within the fiducial target, slab cross-section, and bolt connection, respectively. (a) Corner point detection. (b) KAZE feature detection.
Figure 12. KZF comparison at 30 m depending on targets. (a) Artificial target. (b) Bolt connection. (c) Slab cross-section.
Figure 13. Displacement measurement comparison at 40 m depending on targets. (a) KZF using artificial target. (b) KZF using bolt connection. (c) KZF using slab cross-section.
Table 1. Feature detection result comparison.
Feature Type        | Target Type | 10 m | 20 m | 30 m | 40 m
Corner points (no.) | A 1         | 190  | 82   | 41   | 29
                    | B 2         | 25   | 9    | 4    | 2
                    | S 3         | 46   | 9    | -    | -
KAZE features (no.) | A           | 288  | 100  | 54   | 34
                    | B           | 36   | 16   | 10   | 5
                    | S           | 55   | 16   | -    | -
1 Artificial target, 2 bolt connection, and 3 slab cross-section.
Table 2. Error quantification of vision-based measurement system depending on the increase in measurement distance.
Disp. Measurement | Error     | Target Type | 10 m | 20 m | 30 m  | 40 m
STC               | RMSE (mm) | A 1         | 0.62 | 0.67 | 0.70  | 0.75
                  |           | B 2         | 0.65 | 0.69 | 0.79  | -
                  |           | S 3         | 0.66 | 0.67 | -     | -
                  | MAPE (%)  | A           | 1.38 | 1.45 | 1.52  | 1.58
                  |           | B           | 1.74 | 1.85 | −4.30 | -
                  |           | S           | 1.55 | 1.57 | -     | -
KZF               | RMSE (mm) | A           | 0.61 | 0.63 | 0.66  | 0.74
                  |           | B           | 0.63 | 0.65 | 0.76  | 1.20
                  |           | S           | 0.66 | 0.68 | -     | -
                  | MAPE (%)  | A           | 1.40 | 1.43 | 1.50  | 1.56
                  |           | B           | 1.72 | 1.81 | −4.19 | −10.22
                  |           | S           | 1.56 | 1.59 | -     | -
1 Artificial target, 2 bolt connection, and 3 slab cross-section.
Table 3. Quantitative result of image restoration.
Measurement Dist. (m) | PSNR (dB) ↑ (BC 1 / RSR 2) | SSIM ↑ (BC / RSR) | LPIPS ↓ (BC / RSR)
10                    | 32.11 / 32.34              | 0.91 / 0.90       | 0.29 / 0.21
20                    | 35.85 / 32.92              | 0.95 / 0.87       | 0.22 / 0.22
↑ and ↓ mean that a higher or lower value is desired. 1 Bicubic upsampling and 2 RealSR.
Table 4. Error quantitative analysis.
Disp. Measurement | Error     | Target Type | 10 m (Org. / BC 4 / RSR 5) | 20 m (Org. / BC / RSR)
STC               | RMSE (mm) | A 1         | 0.62 / 0.72 / 0.70         | 0.67 / 0.68 / 0.64
                  |           | B 2         | 0.65 / 0.76 / 0.73         | 0.69 / 0.72 / 0.68
                  |           | S 3         | 0.66 / 0.74 / 0.70         | 0.67 / 0.70 / 0.65
                  | MAPE (%)  | A           | 1.38 / 1.59 / 1.43         | 1.45 / 1.53 / 1.37
                  |           | B           | 1.74 / 1.84 / 1.68         | 1.85 / 1.95 / 1.80
                  |           | S           | 1.55 / 1.79 / 1.63         | 1.57 / 1.77 / 1.61
KZF               | RMSE (mm) | A           | 0.61 / 0.69 / 0.69         | 0.63 / 0.59 / 0.59
                  |           | B           | 0.63 / 0.69 / 0.70         | 0.65 / 0.59 / 0.59
                  |           | S           | 0.66 / 0.70 / 0.71         | 0.68 / 0.60 / 0.60
                  | MAPE (%)  | A           | 1.43 / 1.61 / 1.44         | 1.43 / 1.63 / 1.47
                  |           | B           | 1.72 / 1.87 / 1.71         | 1.81 / 1.99 / 1.84
                  |           | S           | 1.55 / 1.78 / 1.62         | 1.59 / 1.66 / 1.50
1 Artificial target, 2 bolt connection, 3 slab cross-section, 4 bicubic upsampling, and 5 RealSR.
Table 5. Error quantification of vision-based measurement system using upscaled images.
Disp. Measurement | Error     | Target Type | 30 m (Org. / RSR 4) | 40 m (Org. / RSR)
STC               | RMSE (mm) | A 1         | 0.70 / 0.67         | 0.75 / 0.72
                  |           | B 2         | 0.79 / 0.70         | - / 0.77
                  |           | S 3         | - / 0.68            | - / 0.85
                  | MAPE (%)  | A           | 1.52 / 1.44         | 1.58 / 1.54
                  |           | B           | −4.30 / 1.96        | - / −5.10
                  |           | S           | - / 1.74            | - / −6.80
KZF               | RMSE (mm) | A           | 0.66 / 0.65         | 0.74 / 0.68
                  |           | B           | 0.76 / 0.65         | 1.20 / 0.87
                  |           | S           | - / 0.63            | - / 0.76
                  | MAPE (%)  | A           | 1.50 / 1.42         | 1.56 / 1.45
                  |           | B           | −4.19 / 1.80        | −10.22 / −4.84
                  |           | S           | - / 1.58            | - / 1.61
1 Artificial target, 2 bolt connection, 3 slab cross-section, and 4 RealSR.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cho, D.; Gong, J. A Feasibility Study on Extension of Measurement Distance in Vision Sensor Using Super-Resolution for Dynamic Response Measurement. Sensors 2023, 23, 8496. https://doi.org/10.3390/s23208496


