
Method and Device of All-in-Focus Imaging with Overexposure Suppression in an Irregular Pipe

School of Mechanical Engineering, Nantong University, Nantong 226019, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7634; https://doi.org/10.3390/s22197634
Submission received: 21 August 2022 / Revised: 30 September 2022 / Accepted: 7 October 2022 / Published: 9 October 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

To avoid the depth-of-field mismatches caused by changes in pipe structure and the image overexposure caused by highly reflective surfaces during radial imaging of irregular pipes, this paper proposes a novel all-in-focus, adaptable, and low scene-coupling imaging method that suppresses overexposure in support of fault detection. Firstly, the pipeline’s radial depth distribution is obtained by sensors, and an optimal all-in-focus imaging scheme is established by combining it with the camera parameters. Secondly, using digital imaging technology, the high-reflection effects produced by light sources at different positions are comprehensively evaluated for overexposure suppression. Thirdly, a device is designed for imaging non-Lambertian free-form surface scenes under low illumination, providing the sequence images needed for the next step. Lastly, specific digital fusions are applied to the sequence images to obtain a final all-in-focus image without overexposure. An image-quality analysis method is then used to measure the efficacy of the system in obtaining the characteristic information of the inner surfaces of an irregular pipe. The experimental results show that the method and device can distinguish fine 0.5 mm wide lines over a 40–878 mm depth range and can provide efficient image support for defect inspection of irregular pipes, free-form surfaces, and other irregular scenes.

1. Introduction

Irregular pipes (e.g., those with S-shaped inlets) have special profiles and variable cross-sections [1,2] characterized by continuous and irregular changes in the axial and radial directions, and the special non-Lambertian coating sprayed on their inner surfaces is highly reflective [3]. It is therefore difficult to acquire the high-quality images needed for rapid digital fault detection.
Among contemporary techniques, axial and radial imaging are the most common. Axial imaging refers to single-viewpoint methods (e.g., closed-circuit television [4], fisheye [5], single-reflection [6,7], and catadioptric panoramic annular lens [8,9]) in which the imaging device is placed along the axial direction of the inner pipeline. This produces a circular ring image whose quality depends directly on the structural regularity of the pipe and the degree of coincidence between the visual axis and the pipeline axis, so an image distortion-correction algorithm must be applied [6,10]. Any sudden change in pipeline appearance or deviation of the visual axis degrades the imaging effect and detection accuracy; thus, it is difficult to apply axial methods directly to irregular pipes. Radial imaging implies that the visual axis of the imaging device is perpendicular to the inner surface of the pipeline, from which information is obtained directly [11]. Although this method can be adapted to circular pipes of varying sizes by changing lenses, irregular pipes typically present frequent and large radial depth-of-field (DoF) changes that make information acquisition prohibitively difficult and time-consuming. Imaging systems with expanded DoFs are helpful, and researchers have offered several related solutions (e.g., wavefront coding [12,13], special flat lensing [14], and light-field imaging [15,16]) that improve DoF handling at the expense of spatial resolution; however, the results are unsuitable for reliable defect detection. To meet the all-in-focus imaging requirements of irregular pipes, more recent studies have applied multifocus fusion methods to sequence images [17,18]. These methods have good fusion effects and can be applied to a wide range of tasks, but they rely on passive focusing algorithms that are also unsuitable for irregular pipes, particularly because little texture can be gleaned under low illumination [19,20]. Later, Liu [21] proposed a method of depth-image segmentation that achieved good all-in-focus results, but the graph-based algorithm was unsuitable for continuous imaging; notably, when the imaged surface has large depth changes, the segmentation depth is difficult to control. Alternatively, a multicamera fusion scheme with multifocal lenses [22] can help resolve the large-DoF problem. However, the required equipment is bulky, the solution is overly complex, and the system does not work well inside small, irregular pipelines requiring large depth spans.
Active illumination is often required when imaging pipe interiors. However, highly reflective and freely changing structural features quickly lead to overexposure, which reduces the utility of the acquired information for fault detection. To mitigate these negative effects, Shao [23] devised a digital micromirror device (DMD) to realize pixel-level spatiotemporal modulation of illumination: the light modulated by the DMD (adaptive stripes with spatiotemporal modulation) is projected through a triangular prism, mirror, and projection lens onto a workpiece with a highly reflective surface, forming uniform illumination on the workpiece surface that highlights defect information. Liu [24] introduced a system that first uses a coaxial light source to obtain coaxial optical images for identifying the bearing center coordinates and text information; images are then obtained in turn under multiangle lighting, and all the defects of the bearing can be highlighted after image fusion. Feng [25] acquired surface defect information of small, highly reflective parts through high-dynamic-range imaging and improved the quality of the acquired images. Chen [26] designed a dual lighting system with large-area, high-brightness illumination from both sides of a steel ball; this design improved the light uniformity on the steel ball surface, thereby avoiding light spots and creating good conditions for subsequent detection. Although these methods addressed specific highly reflective objects and achieved decent results, most are meant for the outer surfaces of objects. Notably, when imaging devices and algorithms are designed for specific, difficult jobs, their performance tends to be tightly coupled to the unique configuration; thus, they adapt poorly to changing scenes.
In summary, to solve the DoF mismatch problem while supporting overexposure-suppressed imaging of the insides of irregular pipes, this paper first proposes a cross-modal all-in-focus imaging device and strategy based on the target surface depth, together with a lighting and imaging method for overexposure suppression that depends on high-reflection prior information. Then, an imaging device is introduced, and an imaging experiment on 0.5 mm wide fine lines is carried out on a highly reflective surface with a maximum depth of 878 mm. Subsequently, image fusion is carried out, and the best all-in-focus image with no overexposure is obtained. The proposed method and device can provide efficient image support for defect inspection of special-shaped pipes, free-form surfaces, and other unconventional scenes, and they have good practical application value.
The structure of the paper is organized as follows: Section 1 presents the background and key issues of all-in-focus imaging and imaging of highly reflective surfaces, along with an overview of the relevant literature. Section 2 elaborates the method of all-in-focus imaging through fusing the depth data of the imaging surface, as well as the method for overexposure suppression on highly reflective surfaces. In Section 3, an imaging device with a high-resolution camera, depth sensors, light sensors, and four evenly spaced light sources is designed, and three imaging experiments and evaluations of the imaging effects are used to verify the effectiveness and superiority of the proposed method. Lastly, Section 4 summarizes the key technologies and achievements of this paper.

2. Materials and Methods

2.1. Cross-Modal All-in-Focus Imaging Strategy

The DoF is the key imaging feature of a pinhole camera, as shown in Figure 1. It is determined by the diameter of the allowable circle of confusion, δ, the focal length, f, and the lens F-value, F, and it can be characterized by the front and back DoFs:
$$\begin{cases} \Delta L_{\mathrm{front}} = \dfrac{F \delta L^{2}}{f^{2} + F \delta L} \\[2mm] \Delta L_{\mathrm{back}} = \dfrac{F \delta L^{2}}{f^{2} - F \delta L} \end{cases} \tag{1}$$
where ΔLfront represents the front DoF, ΔLback represents the back DoF, L represents the shooting distance from the focal plane to the photosensitive element, and δ is a hyperparameter set according to the sharpness requirement; generally, δ is chosen within [1, n] times the pixel size of the photosensitive element. In this study, to clearly express how the front and back DoF edges vary with the object-plane distance L, we let
$$\begin{cases} D_{\mathrm{front}}(L) = L - \Delta L_{\mathrm{front}} = L - \dfrac{F \delta L^{2}}{f^{2} + F \delta L} \\[2mm] D_{\mathrm{back}}(L) = L + \Delta L_{\mathrm{back}} = L + \dfrac{F \delta L^{2}}{f^{2} - F \delta L} \end{cases} \tag{2}$$
where Dfront(L) is the depth function of the DoF’s front edge (Figure 1), and Dback(L) is the depth function of its back edge. According to the Gaussian imaging formula in Equation (3), for a fixed-focus pinhole imaging system, the image distance, v, can be changed by moving the lens focus. Thus, changes in v allow objects to be clearly imaged in the corresponding DoF under different object distances, u.
$$\frac{1}{f} = \frac{1}{u} + \frac{1}{v}. \tag{3}$$
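For readers who want to reproduce the DoF computation, the short Python sketch below evaluates Equations (1)–(3); the circle-of-confusion value δ and the sample distances are illustrative assumptions rather than values taken from the device.

```python
# Minimal sketch of Equations (1)-(3): DoF edges around an object plane at
# distance L, and the Gaussian image distance v.  All lengths are in mm;
# delta (circle of confusion) is an assumed value of two 1.55 um pixel pitches.

def dof_edges(L, f=3.9, F=2.8, delta=3.1e-3):
    """Return (D_front(L), D_back(L)) from Equations (1) and (2)."""
    front = L - (F * delta * L**2) / (f**2 + F * delta * L)
    back = L + (F * delta * L**2) / (f**2 - F * delta * L)
    return front, back

def image_distance(u, f=3.9):
    """Gaussian imaging formula, Equation (3): 1/f = 1/u + 1/v, solved for v."""
    return 1.0 / (1.0 / f - 1.0 / u)

if __name__ == "__main__":
    for L in (300.0, 500.0, 800.0):
        d_front, d_back = dof_edges(L)
        print(f"L = {L:5.0f} mm  ->  DoF [{d_front:6.1f}, {d_back:6.1f}] mm, v = {image_distance(L):.3f} mm")
```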
The distance from the inner surface of an irregular pipe to the geometric centroid of the corresponding radial cross-section varies greatly. When the camera is at the center of the cross-section, the large differences in the DoF required at each viewpoint cause single- or multifocus imaging techniques to fail to match the appropriate DoF. To capture images in such scenes, a pinhole DoF strategy for cross-modal all-in-focus imaging is needed, as shown in Figure 2.
As shown in Figure 2a, dfX and dbX respectively represent the depths of the nearest and farthest points of the imaged object from viewpoint X (X = A, B). The depth span, ΔX (ΔX = dbX − dfX), is the total DoF required for viewpoint X. The curves of the Dfront(L) and Dback(L) functions are shown in Figure 2b. Let the depth of the closest point of the target, D = dfX, and D = Dfront(L) intersect at point NX(Dfront−1(dfX), dfX), and let the depth of the farthest point of the target, D = dbX, and D = Dback(L) intersect at point MX(Dback−1(dbX), dbX). The object-plane depth function is Dobj(L) = L. When Dfront−1(dfX) ≥ Dback−1(dbX), then for any LFS ∈ (Dback−1(dbX), Dfront−1(dfX)), we have (dfX, dbX) ⊂ (Dfront(LFS), Dback(LFS)); that is, the current depth range is included in the DoF of any object-plane depth LFS, which covers the camera imaging range at Viewpoint A. To ensure good imaging quality, the intermediate depth of the scene can be used to focus the image in the single-focus mode, as follows:
$$D_{\mathrm{Fsingle}}(d_{fA}, d_{bA}) = \frac{D_{\mathrm{front}}^{-1}(d_{fA}) + D_{\mathrm{back}}^{-1}(d_{bA})}{2}, \qquad D_{\mathrm{front}}^{-1}(d_{fA}) \ge D_{\mathrm{back}}^{-1}(d_{bA}), \tag{4}$$
where DFsingle(dfA, dbA) denotes the depth of the object plane when the depth range is (dfA, dbA). Correspondingly, when the camera is at Viewpoint B and Dfront−1(dfB) < Dback−1(dbB), then for any LFM ∈ (Dback−1(dbB), Dfront−1(dfB)) we have (dfB, dbB) ⊄ (Dfront(LFM), Dback(LFM)); that is, the DoF corresponding to any single object plane LFM cannot completely cover the current surface, and multiple segmented focusing images are required. The mathematical relationship of the corresponding condition is expressed as
$$\left( \langle D_{1} \rangle \cup \langle D_{2} \rangle \cup \cdots \cup \langle D_{n} \rangle \right) \cap (d_{fB}, d_{bB}) = (d_{fB}, d_{bB}), \qquad n = N_{\min}, \tag{5}$$
where <Dn> is the depth range of each segment after the target depth is segmented, N is the number of segments, and Nmin is its minimum value. The object-plane depth, Dobj(LFMj), corresponding to depth range <Dj>, is obtained from the abscissa of point MjB, where j = 1, 2, …, n (n = 3 in Figure 2b). Then, for Viewpoint B, when Dfront−1(dfB) < Dback−1(dbB), all sub-focal-plane positions in the multifocus mode can be obtained as {DFmulti(dfB, dbB)} = {DFM1, DFM2, …, DFMj}, with
$$D_{\mathrm{FM}j} = \begin{cases} D_{\mathrm{back}}^{-1}(d_{bB}), & j = 1 \\ D_{\mathrm{back}}^{-1}\!\left( D_{\mathrm{front}}(D_{\mathrm{FM}(j-1)}) \right), & D_{\mathrm{front}}(D_{\mathrm{FM}(j-1)}) > d_{fB}, \end{cases} \tag{6}$$
where DFMj represents the depth of each object plane corresponding to <Dj> of multifocus imaging, and DFMj = Dobj(LFMj) = LFMj. In this paper, the process of dividing (df, db) into {<D1>, <D2>, …, <Dn>} is called “depth segmentation”. The process of calculating a {<D1>, <D2>, …, <Dn>} that matches {DFM1, DFM2, …, DFMn} is called “DoF matching”. Accordingly, in view of the large radial depth span of an irregular pipe, a cross-modal adaptive all-in-focus imaging strategy is proposed, as shown in Equation (7).
$$\{D_{F}(d_{f}, d_{b})\} = \begin{cases} \{D_{\mathrm{Fsingle}}(d_{f}, d_{b})\}, & D_{\mathrm{front}}^{-1}(d_{f}) \ge D_{\mathrm{back}}^{-1}(d_{b}) \\ \{D_{\mathrm{Fmulti}}(d_{f}, d_{b})\}, & D_{\mathrm{front}}^{-1}(d_{f}) < D_{\mathrm{back}}^{-1}(d_{b}), \end{cases} \tag{7}$$
where {DF(df, db)} is the set of depth positions of the object plane after solving the (df, db) depth interval. By imaging the object planes in the set, the all-in-focus sequence images in the current field of view can be obtained.
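As an illustration of how Equations (4)–(7) can be turned into a focusing routine, the sketch below performs depth segmentation and DoF matching numerically; the camera constants and the bisection-based inversion of Dfront and Dback are our assumptions, not the authors' implementation.

```python
# A minimal sketch of the cross-modal strategy in Equations (4)-(7): given the
# depth range (d_f, d_b) seen from one viewpoint, return the object-plane
# depths to focus on.  Camera constants are illustrative (mm).
F_NUM, DELTA, FOCAL = 2.8, 3.1e-3, 3.9

def d_front(L):  # front DoF edge, Equation (2)
    return L - (F_NUM * DELTA * L**2) / (FOCAL**2 + F_NUM * DELTA * L)

def d_back(L):   # back DoF edge, Equation (2)
    return L + (F_NUM * DELTA * L**2) / (FOCAL**2 - F_NUM * DELTA * L)

def inverse(func, target, lo=10.0, hi=1500.0, iters=60):
    """Invert a monotonically increasing depth function by bisection.
    hi must stay below the hyperfocal-like singularity of d_back."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if func(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def focus_depths(df, db):
    lf = inverse(d_front, df)            # D_front^-1(d_f)
    lb = inverse(d_back, db)             # D_back^-1(d_b)
    if lf >= lb:                         # one DoF covers the scene: Equation (4)
        return [0.5 * (lf + lb)]
    planes = [lb]                        # multifocus recursion: Equation (6)
    while d_front(planes[-1]) > df:
        planes.append(inverse(d_back, d_front(planes[-1])))
    return planes

print([round(p) for p in focus_depths(319, 485)])   # small span -> single focus plane
print([round(p) for p in focus_depths(247, 878)])   # large span -> several planes
```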

2.2. Lighting and Imaging Strategy of Overexposure Suppression

The classic Phong model [27] can be used to quantitatively describe the observed light intensity as a function of the surface profile and the illumination and viewing angles. As shown in Figure 3a, L, N, R, and V are the incident light, imaging-surface normal, reflected light, and observation vectors, respectively. When the small-area diffuse light source (LS) is independent and unique, Equation (8) is obtained as follows:
$$I = k_{d} I_{pd} \cos i + k_{s} I_{ps} \cos^{m}\theta, \qquad 0 \le \theta \le 90^{\circ}, \tag{8}$$
where kd·Ipd·cos i and ks·Ips·cos^mθ are the diffuse and specular reflection components, respectively, and m is the reflected-light convergence index related to surface smoothness. As the observation angle, θ, decreases, the specular component increases sharply with the m-th power of cos θ, producing a high-reflection region referred to as the area-of-reflect (Ar).
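To make Equation (8) concrete, the following sketch evaluates the Phong intensity for a fixed incidence angle while sweeping the observation angle θ; the reflection coefficients and the convergence index m are arbitrary illustrative values, not measured coating properties.

```python
# Phong reflection, Equation (8): I = kd*Ipd*cos(i) + ks*Ips*cos(theta)^m.
import math

def phong_intensity(i_deg, theta_deg, kd=0.4, ks=0.6, ipd=1.0, ips=1.0, m=20):
    diffuse = kd * ipd * math.cos(math.radians(i_deg))
    specular = ks * ips * math.cos(math.radians(theta_deg)) ** m
    return diffuse + specular

# The specular term dominates only near the mirror direction (small theta),
# which is exactly the high-reflection area Ar discussed above.
for theta in (0, 5, 15, 30, 60, 85):
    print(f"theta = {theta:2d} deg -> I = {phong_intensity(30, theta):.3f}")
```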
When a small-area diffuse light source is used for direct illumination, the effect of the specular reflection component on a non-Lambertian plane is shown in Figure 3b. Under the action of LS1, Ar1, Ar2, Ar3, and Ar4 are the direct high-reflection, high-reflection transition, conventional reflection, and low-reflection areas, respectively; the specular reflection intensity decreases in that order. According to Equation (8) and Figure 3b, the high-reflection areas created by light sources on different paths do not completely overlap. Therefore, under a fixed viewing angle, the high-reflection areas, Ar1 and Ar2, can only be generated by light source LS2 on the incident optical path IL–RL; this is not true for other light sources, such as LS3. Therefore, it is possible to obtain image sequences whose highly reflective positions differ by separately illuminating and imaging with small light sources at different positions, and then to recover the surface information through image fusion [26,28].
As shown in Figure 3d, A-1, A-2, and B are imaging systems with illumination in which the positional relationship between the camera and the small light source is relatively fixed, whereas C-LS1 and C-LS2 are imaging systems with illumination that lack a fixed positional relationship between camera and light source. The reflection problem of free-form surfaces mainly results from the coupling of three factors: the surface feature structure of the free-form surface, the camera pose, and the light-source pose. The reflection type, bright field (high reflection) or dark field (non-high reflection), is determined by these three conditions [29]. In Figure 3d, A-1 and A-2 share the same viewpoint position with different poses, while A-1 (or A-2) and B face the same area from different poses. Even under the same camera pose, C, when the lighting conditions LS4 and LS5 differ, the imaging effect is not highly reflective at C-LS4 but is highly reflective at C-LS5, and different surface and lighting conditions can produce the same effects. Under freely changing imaging conditions and the mutual influence of these reflective factors, the system must adopt different lighting schemes to adapt to the various changes.
Because the surface and camera pose are separately determined for each image, we designed a dynamic lighting device comprising four independently controllable small diffuse light sources in the same plane. The device produces low coupling between the imaging characteristics and the surface features. As shown in Figure 3c, each light source is arranged around a shared central camera at a relatively fixed position, and the following lighting and imaging strategy for overexposure suppression is proposed:
  • Four light sources are used to separately provide illumination and pre-imaging.
  • The reflection of the image formed under the illumination of each light source is calculated to obtain each source’s prior information.
  • The light source whose prior information produces the least high reflection is chosen.
If high reflections cannot be avoided in this fashion, the complementary information between images formed by two light sources is used to suppress the high reflection effect via fusion. The decision conditions are determined as follows:
  • Condition Ca: If a single light source a (a, b ∈ {Tlight, Blight, Llight, Rlight}) can avoid high-reflection overexposure, it is selected for supplementary light:
$$\mathrm{Size}(\mathrm{IMG}_{a}) = 0, \qquad \mathrm{StdDev}(\mathrm{IMG}_{a}) = \min_{b}\left( \mathrm{StdDev}(\mathrm{IMG}_{b}) \right). \tag{9}$$
  • Condition Cb: If highly reflective overexposure cannot be avoided, a combined lighting scheme with dual light sources, a and b (a, b ∈ {Tlight, Blight, Llight, Rlight}), is selected for supplementary illumination, and the decision equation is expressed as
$$LC(\mathrm{IMG}_{a}, \mathrm{IMG}_{b}) = \max\left( k_{O} \times ORI(\mathrm{IMG}_{a}, \mathrm{IMG}_{b}) + k_{M} \times SOI(\mathrm{IMG}_{a}, \mathrm{IMG}_{b}) \right), \tag{10}$$
where IMGa represents the image obtained when light source a provides illumination; Size(IMGa) indicates the area of the overexposed region in IMGa; Tlight, Blight, Llight, and Rlight represent the light sources located at the top, bottom, left, and right, respectively; and StdDev(IMGa) represents the overall standard deviation of image IMGa.
In Equation (10), LC(IMGa, IMGb) is the decision value of the light-source combination (LC). The overall reflective intensity, ORI = 2 − 2/(1 + e^(−kα·α)), represents the overall highly reflective imaging condition under the two light sources, and the spot overlap intensity, SOI = 3/(3 + 10·e^(kβ·(β−1))), characterizes the overlap between the image spots formed under the two light sources. Here, α = Sand(IMGa, IMGb)/Simg is the normalized coincident spot area after the AND operation, Sand(IMGa, IMGb), and β = Sor(IMGa, IMGb)/Simg is the normalized total spot area after the OR operation, Sor(IMGa, IMGb). kO and kM are the ORI and SOI weights in the decision-making system, where kO + kM = 1, and kα and kβ are the penalty coefficients applied to α and β, respectively; a larger penalty coefficient produces a stronger attenuation of the decision result as α or β increases. A larger LC value indicates a better final imaging effect. In essence, this formula selects the two light sources that yield the smallest total spot area for illumination and imaging under the condition that the spot overlap is as small as possible. The sequence of images obtained under this decision is then fused to produce the image with the best high-reflection suppression from the current viewpoint.
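The decision rule of Equations (9) and (10) can be written compactly from the binary overexposure masks produced in the pre-imaging step. The sketch below follows the formulas above; the helper names, the random demonstration masks, and the kα, kβ values are our assumptions.

```python
# A sketch of the dual-light-source decision in Equation (10): masks are
# binary overexposure maps (True = overexposed pixel) from pre-imaging.
import numpy as np

K_O, K_M = 0.7, 0.3            # weights kO, kM as in Table 3
K_ALPHA, K_BETA = 30.0, 5.0    # penalty coefficients (assumed values)

def lc_score(mask_a, mask_b):
    s_img = mask_a.size
    alpha = np.logical_and(mask_a, mask_b).sum() / s_img   # normalized coincident spot area
    beta = np.logical_or(mask_a, mask_b).sum() / s_img     # normalized total spot area
    ori = 2.0 - 2.0 / (1.0 + np.exp(-K_ALPHA * alpha))
    soi = 3.0 / (3.0 + 10.0 * np.exp(K_BETA * (beta - 1.0)))
    return K_O * ori + K_M * soi

def best_pair(masks):
    """Return the light-source pair maximizing LC, i.e., the max() in Equation (10)."""
    names = list(masks)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return max(pairs, key=lambda p: lc_score(masks[p[0]], masks[p[1]]))

rng = np.random.default_rng(0)
demo = {n: rng.random((194, 259)) > 0.97 for n in ("Tlight", "Blight", "Llight", "Rlight")}
print(best_pair(demo))
```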

2.3. Imaging Control and Fusion Scheme

Owing to the complex relationships among surface features, illumination angles, and camera poses, sample quality is often significantly reduced. To remedy this problem, the adaptability of the imaging device is improved, as shown in Figure 4a, using the following adaptive decision-making control strategy:
  • Depth data acquisition and all-in-focus scheme: The depth sensor obtains the depth distribution of the unknown curved surface in the current field of view, and the depth data are applied to Equation (7).
  • Pre-imaging and lighting plan: Using the depth data, each light source in turn provides single-exposure pre-imaging at a fixed photosensitive level focused at the middle depth of the current surface, and image quality is analyzed by judging whether overexposure occurs and by calculating the exposure conditions. The prior information on high reflectivity acquired under each light source is used to form the lighting plan via Equations (9) and (10).
  • All-in-focus imaging under the optimal lighting scheme: Four imaging scenario combinations are preset (i.e., I: single-focus, single-illumination scene with a small DoF; II: single-focus, dual-illumination scene with a small DoF; III: multifocus, single-illumination scene with a large DoF; and IV: multifocus, dual-illumination scene with a large DoF); a minimal mode-selection sketch is given at the end of this subsection. The all-in-focus scheme of Step 1 and the lighting scheme of Step 2 are matched to one of the four preset combinations, the illumination scheme is executed, and each object plane is imaged separately.
  • Sequence image fusion: After acquiring the sequence images, a specific fusion scheme (e.g., multifocus or wavelet) is applied to them according to the combination chosen in Step 3. The image fusion process in the four scenarios proceeds as described below.
As shown in Figure 4b, the output images of each mode are marked as Img-Type-I, Img-Type-II, Img-Type-III, and Img-Type-IV. During object-plane imaging under each light source, multi-exposures at four photosensitive levels are performed by changing the exposure time, and fusion is performed to obtain the high-dynamic-range (HDR) image needed to overcome the insufficient dynamic range and poor brightness uniformity of a single exposure. This facilitates the expression of internal defects and other information needed to obtain high-definition images over the full DoF. This process is indicated by the red arrow in Combination I in Figure 4b.
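As a minimal illustration of the mode matching in Step 3, the snippet below maps the all-in-focus mode from Step 1 and the number of light sources from Step 2 to one of the four preset combinations; the function and key names are illustrative only.

```python
# Toy selector for the four preset combinations (I-IV) described in Step 3.
COMBINATION = {
    ("single-focus", 1): "I",    # small DoF, single illumination
    ("single-focus", 2): "II",   # small DoF, dual illumination
    ("multifocus",   1): "III",  # large DoF, single illumination
    ("multifocus",   2): "IV",   # large DoF, dual illumination
}

def select_combination(focus_planes, light_sources):
    mode = "single-focus" if len(focus_planes) == 1 else "multifocus"
    return COMBINATION[(mode, len(light_sources))]

print(select_combination([386.0], ["Llight"]))                           # Viewpoint i  -> I
print(select_combination([584.0, 350.0, 250.0], ["Tlight", "Blight"]))   # Viewpoint iii -> IV
```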

3. Experimental Verification

3.1. All-in-Focus Imaging Device with Overexposure Suppression

Spatial resolution is an important performance parameter for measuring the ability of an imaging system to resolve fine defects. According to past research [30], the spatial resolution, Rspace, of radial imaging can be described as follows:
$$R_{\mathrm{space}} = \frac{f}{l} = \frac{size_{\mathrm{pixel}}}{size_{\mathrm{obj}}}, \tag{12}$$
where f is the focal length, l is the distance between the lens and the object (l ≈ L), sizepixel is the size of a single pixel of the photosensitive element, and sizeobj is the minimum width of the defective object. To meet the imaging requirement of sizeobj = 0.5 mm wide fine lines at L = 1000 mm in the irregular pipe, the SONY IMX477 was selected as the photosensitive imaging element. According to Equation (12), the lens has a focal length, f, of 3.9 mm, an aperture value, F, of 2.8, and a field of view (FoV) of 75° × 52°. For depth sensing, the ultra-miniature multizone depth sensor VL53L5CX, with a resolution of 8 × 8 and an FoV of 45° × 45°, was selected, and the two were combined as shown in Figure 5. Owing to the excellent multizone ranging ability of this sensor, the device can measure the depth data of a plane, and its measurement accuracy reached 5% under the conditions of this paper. A high-brightness light-emitting diode (LED) lamp bead with a color temperature of 5500 K was selected as the light source, and a polymethylmethacrylate diffuser lens with a diameter of 15 mm and an illumination angle of 120° was selected as the condensing element. The light-source array, the ultra-miniature area depth sensor, and their geometric installation relationship with the camera are shown in Figure 5.
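A quick numerical check of Equation (12) for the selected components is given below; the 1.55 µm pixel pitch of the IMX477 is an assumption taken from the sensor's public data sheet rather than from the text.

```python
# Sanity check of Equation (12): what focal length is needed to map a 0.5 mm
# defect onto at least one pixel at the worst-case distance of 1000 mm?
PIXEL_SIZE_MM = 1.55e-3   # IMX477 pixel pitch (assumed)
SIZE_OBJ_MM = 0.5         # minimum defect width to resolve
DISTANCE_MM = 1000.0      # l ~= L

required_f = DISTANCE_MM * PIXEL_SIZE_MM / SIZE_OBJ_MM     # from f/l = size_pixel/size_obj
resolvable = DISTANCE_MM * PIXEL_SIZE_MM / 3.9             # width resolvable with the 3.9 mm lens
print(f"required f ~= {required_f:.2f} mm; with f = 3.9 mm the resolvable width is {resolvable:.2f} mm")
```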
Controlling the exposure of sequential images is crucial to the accuracy of pre-imaging information and to the acquisition of multiple-exposure sequential images. Exposure is determined by the aperture value, International Standards Organization (ISO) sensitivity, and exposure time, and it is closely related to a stable light-field intensity. Because the overall reflection of incident light on the curved surface differs greatly per light source, an independent four-channel constant-current source-drive circuit and a proportional–integral–derivative (PID) control algorithm are used to achieve constant control of the light intensity, as shown in Figure 6. The CN5711 constant-current LED driver, which has excellent current stability and brightness retention over a wide temperature range, is used to maintain light-level consistency between images. The algorithm uses light-intensity feedback from the two light-intensity sensors (Figure 5) to modulate the brightness of each source via pulse-width modulation. The PID control law is given in Equation (13), where et is the current error and et−1 is the previous error. Simultaneously, to ensure the consistency of the final sequence images, the process is controlled according to the consistency parameters presented in Table 1.
$$C = K_{p} e_{t} + K_{i} \sum_{t=1}^{N} e_{t} + K_{d} \left( e_{t} - e_{t-1} \right). \tag{13}$$
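The constant-light control loop can be sketched as a textbook positional PID over the sensed illuminance, as below; the gains, the 20 lux target from Table 1, and the toy plant model are illustrative assumptions.

```python
# Minimal sketch of the light-intensity PID of Equation (13): the output C is
# used as the LED PWM duty cycle; gains and the plant model are illustrative.
class LightPID:
    def __init__(self, kp=0.6, ki=0.05, kd=0.1, setpoint=20.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint        # target feedback light intensity (lux)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_lux):
        error = self.setpoint - measured_lux
        self.integral += error
        out = self.kp * error + self.ki * self.integral + self.kd * (error - self.prev_error)
        self.prev_error = error
        return max(0.0, min(100.0, out))   # clamp to a PWM duty-cycle range

pid, lux = LightPID(), 5.0
for step in range(5):
    duty = pid.update(lux)
    lux += 0.2 * duty                      # toy LED + sensor response
    print(f"step {step}: duty = {duty:5.1f} %, lux = {lux:5.1f}")
```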

3.2. Imaging Experiments

To verify the high-quality adaptability of the proposed imaging device under arbitrary DoFs, the experimental setup shown in Figure 6a was used. A thin, gray iron sheet with both specular and diffuse reflection characteristics was used to build the scene (Figure 6e). The device was placed on a height-adjustable x–y coordinate slide to simulate different positional and attitudinal perspectives. The film ruler shown in the rectangle was attached to the curved surface and used to judge the final clarity. The device was designed to adaptively match the scene to combination Types I–IV based on the acquired depth and pre-imaging information. In this study, three viewpoints, x (x = i, ii, iii), of different combination types were imaged in a dark environment to verify the adaptability and efficacy of the proposed strategy and device. The viewpoint positions are shown in Figure 6b–d.

3.2.1. Single-Focus and Single Illumination under Small DoF

The positional relationship between the camera and scene was adjusted to Viewpoint i, as shown in Figure 6b. According to the process of Figure 4a, an imaging experiment was performed using the developed imaging device. The experimental steps and results are as follows:
  • As shown in Figure 7b, the depth distribution of the current Viewpoint i is detected to obtain the contour surface-depth data matrix; the maximum and minimum depths are db = 485 mm and df = 319 mm, respectively. Because Dfront−1(df) ≥ Dback−1(db), the single-focus mode is selected according to the all-in-focus imaging strategy of Equation (7), and the camera automatically sets the object-plane depth as DFS = (Dfront−1(df) + Dback−1(db))/2 = 386 mm (Equation (4)) for the final all-in-focus imaging scheme, as shown in Figure 7c.
  • After selecting the intermediate depth of the object plane, Dmid = 402 mm, rapid pre-imaging under the light sources Tlight, Blight, Llight, and Rlight is performed according to the parameters in Table 1.
To improve the operation speed, the obtained image is reduced to 10% of its original size and binarized with a threshold of TH = 245. A 5 × 5 opening operation is then performed to obtain an image with interference removed. The spot area Size(IMGa) is calculated for a ∈ {Tlight, Blight, Llight, Rlight}, and the spot areas under the different light sources are compared as shown in Table 2. The results show that Llight does not cause overexposure, which satisfies decision Condition Ca of Equation (9); hence, Llight is the final decision. The typical image-processing flow is illustrated in Figure 8a.
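The pre-imaging analysis described above can be reproduced with standard OpenCV operations, as sketched below; only the 10% downscaling, the threshold of 245, and the 5 × 5 opening come from the text, and the file names in the commented usage are placeholders.

```python
# Pre-imaging overexposure analysis: downscale to 10%, binarize at TH = 245,
# remove interference with a 5x5 opening, then measure the spot area Size(IMG_a).
import cv2
import numpy as np

def overexposure_stats(gray, th=245):
    small = cv2.resize(gray, None, fx=0.1, fy=0.1, interpolation=cv2.INTER_AREA)
    _, binary = cv2.threshold(small, th, 255, cv2.THRESH_BINARY)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    spot_area = int(np.count_nonzero(opened))
    return spot_area, spot_area / opened.size, float(np.std(small))  # Size, normalized area, StdDev

# usage (file names are placeholders):
# for name in ("Tlight", "Blight", "Llight", "Rlight"):
#     img = cv2.imread(f"pre_{name}.png", cv2.IMREAD_GRAYSCALE)
#     print(name, overexposure_stats(img))
```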
The final imaging scheme from Viewpoint i combines the full-focus scheme of Step 1 and the lighting scheme of Step 2, and DFS = 386 mm is selected as the object plane for HDR imaging under Llight. The imaging consistency control parameters are listed in Table 1.
As shown in Figure 8b, the multi-exposure fusion algorithm [31] is used on the sequence images to obtain the HDR image.
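The HDR step relies on the exposure-fusion algorithm of Mertens et al. [31]; OpenCV ships an implementation of that algorithm, used here as a sketch (the file paths are placeholders).

```python
# Multi-exposure (HDR) fusion of the sequence images with OpenCV's
# implementation of Mertens et al. exposure fusion [31].
import cv2
import numpy as np

def fuse_exposures(images):
    """images: aligned 8-bit frames taken at the shutter times listed in Table 1."""
    fused = cv2.createMergeMertens().process(images)       # float32 result, roughly in [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# usage (paths are placeholders):
# seq = [cv2.imread(f"viewpoint_i_{t}ms.png") for t in (35, 50, 80, 120)]
# hdr = fuse_exposures(seq)
```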

3.2.2. Single-Focus and Dual Illumination under Small DoFs

As shown in Viewpoint ii (Figure 6c), during single-focus imaging with a small DoF, if single-illumination Condition Ca cannot be satisfied, a combined dual-light-source image is required. First, as with Viewpoint i, Step 1 is performed with the depth distribution data (Figure 9a–c), and the object-plane depth DFS = 538 mm is chosen to establish the all-in-focus scheme.
Step 2 is performed next to obtain the pre-imaging single-illumination overexposure information from the four light sources (Figure 9d,e). It can be seen that all single-light-source images suffer from overexposure. Hence, the ORI, SOI, and LC of the overexposed images are computed using the decision parameters in Table 3 to determine the optimal combination of two light sources under Condition Cb (Figure 9f). The black-filled area represents the overlapping overexposed area; a smaller area indicates a better final fusion effect. From the decision results, LC, in Table 4, the Tlight and Llight combination achieves the best high-reflection suppression effect.
Step 3 is then performed; a DFS of 538 mm is chosen for the object plane, and multi-exposure sequence images are obtained under the separate illuminations of Tlight and Llight.
Lastly, in Step 4, multi-exposure fusion is performed following the Viewpoint i steps to obtain the optimal HDR images under the two light sources. Owing to the smoothing character of the wavelet fusion method in image processing [32], wavelet-domain overexposure fusion is then performed on the obtained HDR images while retaining the low-frequency components (Figure 10).
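A possible realization of the wavelet-domain step with PyWavelets is sketched below. The paper states only that low-frequency components are retained; the specific coefficient rules used here (per-pixel minimum for the approximation band, maximum absolute value for the detail bands) are our assumption of one workable overexposure-suppressing choice, not the authors' exact rule.

```python
# Wavelet-domain fusion of the two single-channel HDR images obtained under the
# two selected light sources; coefficient rules are assumptions (see lead-in).
import numpy as np
import pywt

def wavelet_overexposure_fusion(img_a, img_b, wavelet="db2", level=3):
    a = pywt.wavedec2(img_a.astype(np.float32), wavelet, level=level)
    b = pywt.wavedec2(img_b.astype(np.float32), wavelet, level=level)
    fused = [np.minimum(a[0], b[0])]            # approximation band: favour the non-saturated source
    for (ah, av, ad), (bh, bv, bd) in zip(a[1:], b[1:]):
        fused.append(tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)
                           for ca, cb in ((ah, bh), (av, bv), (ad, bd))))
    out = pywt.waverec2(fused, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)
```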

3.2.3. Multifocus and Dual Illumination under Large DoF

The first two imaging experiments used small DoFs; for large depth spans, however, the problem of an insufficient DoF may be encountered. As shown in Figure 6d, Viewpoint iii has large-depth multifocus features (Combination IV), including high reflectivity as under Viewpoint ii. The efficacy of the proposed illumination and imaging strategy under complex multifocus and combined-illumination conditions is demonstrated in the Viewpoint iii experiment via the following steps:
  • According to Step 1, the current depth range detected by the depth sensor is df = 247 mm to db = 878 mm. The depth matrix and surface diagram are shown in Figure 11a,b. As Dfront−1(df) < Dback−1(db), the multifocus mode is selected. Then, according to Equation (7), the depth distribution data are used to segment the current surface with DoF matching (Figure 11c). The assigned object-plane depths of the final all-in-focus scheme are listed in Table 5. Unlike Viewpoints i and ii, Viewpoint iii requires object-plane depths of DFM1 = 584 mm, DFM2 = 350 mm, and DFM3 = 250 mm, as shown in Table 5.
  • Pre-imaging steps such as those in the previous experiments are performed to determine the lighting scheme. The pre-imaging and decision parameters are listed in Table 1 and Table 3, respectively, and the lighting decision-making process and data are shown in Figure 11d–f and Table 6, respectively. By comparing the LC results in Table 6, the combination of Tlight and Blight is selected to provide the lighting.
  • As with Viewpoint ii, under Tlight and Blight, the multi-exposure image sequences at object-plane depths of DFM1 = 584 mm, DFM2 = 350 mm, and DFM3 = 250 mm are acquired.
  • According to the fusion process shown in Figure 4b, the multi-exposure sequence images are first fused into HDR images for each combination of object-plane depth and light-source condition (Figure 12a,b). For the multifocus HDR images obtained under the same light source, the method in [17] is used to perform multifocus fusion and obtain the all-in-focus HDR image under Viewpoint iii; then, a wavelet fusion operation like the one used for Viewpoint ii is applied to the all-in-focus HDR images under Tlight and Blight to suppress the high reflectivity and restore the information in the overexposed areas. The final fusion results are shown in Figure 12c.

3.3. Effect Evaluation of All-in-Focus Imaging with Overexposure Suppression

The quality of the detailed surface information imaged at various depths was evaluated subjectively and objectively, covering both the all-in-focus fusion over a large DoF and the information recovery effect after overexposure suppression.

3.3.1. Evaluation of the All-in-Focus Imaging Effect

The post-all-in-focus fusion image should have better texture feature expression than any single image taken prior to fusion due to the improvements to global sharpness. We compared the results before and after fusion of the three focus images under Viewpoint iii with light source Tlight, as shown in Figure 12a, and we objectively and subjectively evaluated the all-in-focus effects.

Objective Evaluation of the All-in-Focus Effects

The objective evaluation quantified image sharpness using gradient-based and statistics-based metrics. The gradient-based evaluation adopted the energy-of-gradient (EOG) and Tenengrad methods, and the Vollath autocorrelation function was chosen as the statistics-based metric. Furthermore, the improvement in detail expression between the final image and each original image was compared using the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) [32].
EOG function
The EOG function takes the sum of the squared differences in gray value between each pixel and its adjacent pixels in the x- and y-directions as the gradient value of that pixel and accumulates these values as the sharpness measure. After averaging over all pixels, the expression is
$$V_{\mathrm{EOG}} = \frac{1}{X \times Y} \sum_{x} \sum_{y} \left\{ \left[ g(x+1, y) - g(x, y) \right]^{2} + \left[ g(x, y+1) - g(x, y) \right]^{2} \right\}.$$
Tenengrad function
The Tenengrad function uses the Sobel operator to calculate the gradient value representing the sharpness of the image edge in the horizontal and vertical directions. After averaging the gradient values of all pixels, the expression is
$$V_{\mathrm{Tenengrad}} = \frac{1}{X \times Y} \sum_{x} \sum_{y} \left( G_{x}^{2}(x, y) + G_{y}^{2}(x, y) \right),$$
where Gx(x, y) and Gy(x, y) are the gradient values of the pixel at (x, y) in the horizontal and vertical directions, respectively, computed as
$$\begin{cases} G_{x}(x, y) = g(x, y) \otimes g_{x} \\ G_{y}(x, y) = g(x, y) \otimes g_{y}, \end{cases}$$
where ⊗ is the convolution symbol, and gx and gy are the horizontal and vertical templates of the Sobel operator:
$$g_{x} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad g_{y} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.$$
Vollath function
The Vollath autocorrelation function reflects the similarity between neighboring points in space. In the in-focus areas of an image, texture edges are sharp and the correlation between adjacent pixels is low, whereas in out-of-focus areas the texture details are blurred and the correlation between pixels is high. The calculation result therefore reflects the similarity of all adjacent pixels and evaluates overall image sharpness. The Vollath function is expressed as follows:
$$V_{\mathrm{Vollath}} = \frac{1}{X \times Y} \sum_{x=1}^{X-2} \sum_{y=1}^{Y} \left( g(x, y) \times \left| g(x+1, y) - g(x+2, y) \right| \right).$$
Signal-to-noise ratio
$$\mathrm{SNR} = 10 \lg \frac{\sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} p(x, y)^{2}}{\sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left[ p(x, y) - q(x, y) \right]^{2}}.$$
Peak signal-to-noise ratio
$$\mathrm{PSNR} = 10 \lg \left[ \frac{255^{2}}{\mathrm{MSE}} \right],$$
where MSE is the mean square error,
$$\mathrm{MSE} = \frac{1}{X \times Y} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left[ p(x, y) - q(x, y) \right]^{2}.$$
In the evaluation functions above, g(x, y) is the pixel value at (x, y) of the image in the EOG, Tenengrad, and Vollath functions; p(x, y) and q(x, y) are the pixel values at (x, y) of the original and fused images, respectively, in the SNR and PSNR functions; and X and Y are the numbers of rows and columns in the image pixel matrix, respectively. The calculation results are listed in Table 7 and Table 8.
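For reference, compact NumPy versions of the five evaluation functions are given below, written directly from the formulas above; they operate on grayscale float arrays and are not the authors' original evaluation scripts.

```python
# EOG, Tenengrad, Vollath, SNR, and PSNR written from the formulas above.
import numpy as np

def eog(g):
    dx = np.diff(g, axis=1)[:-1, :]                   # g(x+1, y) - g(x, y)
    dy = np.diff(g, axis=0)[:, :-1]                   # g(x, y+1) - g(x, y)
    return float(np.mean(dx**2 + dy**2))

def tenengrad(g):
    # Sobel responses on the interior pixels, built by array shifts.
    gx = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]) - (g[:-2, :-2] + 2 * g[1:-1, :-2] + g[2:, :-2])
    gy = (g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:]) - (g[:-2, :-2] + 2 * g[:-2, 1:-1] + g[:-2, 2:])
    return float(np.mean(gx**2 + gy**2))

def vollath(g):
    return float(np.mean(g[:-2, :] * np.abs(g[1:-1, :] - g[2:, :])))

def snr_db(p, q):
    return float(10.0 * np.log10(np.sum(p**2) / np.sum((p - q)**2)))

def psnr_db(p, q):
    return float(10.0 * np.log10(255.0**2 / np.mean((p - q)**2)))
```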
The results of the objective evaluation show that, compared with the original images, those obtained from multifocus sequence fusion were significantly improved according to the sharpness quantification index, indicating that the overall sharpness of the fused images was significantly improved.

Subjective Evaluation of the Effects of All-in-Focus Fusion

The subjective evaluation was performed by directly observing the detail expression of the surface images at each depth before and after all-in-focus fusion, from which the efficacy of the control strategy was ascertained.
As shown in Figure 13, three regions, the back region (br), middle region (mr), and front region (fr), represent different depths of the surface. The details of the br, mr, and fr regions in the images IMGFMj (j = 1, 2, 3), captured at the different object-plane depths DFMj (j = 1, 2, 3) under Viewpoint iii, were analyzed subjectively. Taking the horizontal comparison of the detailed image of area br in the red frame of Figure 13 as an example, image IMGFM1 had the most prominent detail expression at this depth, and the expressiveness of IMGFM2 and IMGFM3 decreased with decreasing object-plane depth; that is, region br became increasingly blurred as the object plane moved from IMGFM1 to IMGFM3. Meanwhile, the fused image, IMGFused, had a detail-expression capability similar to that of IMGFM1, and the depth regions mr and fr followed similar rules. By comprehensively comparing the clarity of br, mr, and fr, it can be seen that, when facing the inner wall of the irregular pipe over the depth range of 247–878 mm from Viewpoint iii, whereas each single-focus image IMGFMj only represents the surface texture of a certain depth range, the fused image, IMGFused, successfully distinguishes the fine streaks on surfaces at all depths, indicating that the imaging device and strategy effectively obtain pipeline surface information at various depths.

3.3.2. Evaluation of Information Recovery Effect after Suppressing Overexposure

To evaluate the efficacy of lighting and information recovery for the overexposed area, an example was provided via the selection judgment of the lighting source chosen under Viewpoint i, which showed that the proposed control strategy effectively selected the most suitable light source, thus ensuring lighting and imaging quality. However, when the overexposure problem cannot be overcome by changing the light-source positions, a sequence of images with non-overlapping overexposed areas can be generated using two light sources at different positions. As shown in Figure 14, detailed texture information of the overexposed area was recovered by fusion.
Figure 14 compares the details of the all-in-focus images under the different light sources (Tlight and Blight) and after overexposure suppression (Fused) in the same area and range. Taking the horizontal contrast between the detailed images under Tlight as an example (outlined in red), Tlight-P1, Tlight-P2, and Tlight-P3 are the high-reflection overexposed area, the transition area, and the non-overexposed area under that light source, respectively. The overexposed area loses the ability to express texture details owing to pixel saturation, and the detail expression gradually increases toward the non-overexposed area. By longitudinally comparing the texture details of the X-P1, X-P2, and X-P3 areas (X = Tlight, Blight, Fused), it can be seen that the overexposed spots were removed after suppression; magnifying the suppressed image details clearly shows that the texture information (e.g., scratches and dents) was recovered well, demonstrating that the device has a good overexposure suppression capability. These results show that the detailed information of overexposed spots in highly reflective areas can be restored.
On the basis of these subjective and objective evaluations, the all-in-focus imaging and lighting strategy with overexposure suppression effectively guides the developed imaging device to obtain highly detailed images with fully expressed global detail information for fault detection.

4. Conclusions

In this study, a cross-modal all-in-focus imaging method with overexposure suppression was proposed, which realizes its function by obtaining depth and pre-imaging information. Compared with existing methods, our method is unaffected by surface textures and ambient light, and it directly determines the object-plane depths required for focusing, resulting in an all-in-focus imaging capability that ensures large depth-span imaging with efficient focusing. The low-coupling device and method do not depend on the shape of the imaging surface or on specific camera characteristics, and they solve the problem of regional overexposure on highly reflective surfaces. Hence, drastic changes in the final overexposure suppression effect are avoided, showing good robustness.
Imaging experiments on non-Lambertian free-form surfaces were performed under no-ambient-light conditions, demonstrating that the new system adaptively obtains clear all-in-focus images without overexposure under all depth-span and curved-surface conditions inside non-Lambertian irregular pipes. The system is small and demonstrates low coupling and self-adaptation. It is suitable for free-scene imaging in any irregular cavity requiring active lighting, in pipes of different diameters, and on highly reflective free-form surfaces.
Although the all-in-focus imaging method with overexposure suppression presented in this paper obtains the sequence images effectively, the fusion process for the sequence images requires three fusion algorithms, which is inefficient. In the future, research will focus on more efficient dedicated fusion algorithms to further improve the efficiency and robustness of image fusion.

Author Contributions

Conceptualization, S.W. and Q.X.; methodology, S.W., Q.X., and H.X.; software, S.W. and G.L.; validation, S.W., J.W., and G.L.; writing—original draft preparation, S.W. and Q.X.; project administration and funding acquisition, S.W., Q.X., and H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (grant number KYCX21-3080) and the Science and Technology Project of Nantong City (JC2021034).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and code will be made available on request to the corresponding author’s email with appropriate justification.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saheby, E.B.; Gouping, H.; Hays, A. Design of Top Mounted Supersonic Inlets for a Cylindrical Fuselage. Proceedings of the Institution of Mechanical Engineers, Part G. J. Aerosp. Eng. 2019, 233, 2956–2979.
  2. Madadi, A.; Nili-Ahmadabadi, M.; Kermani, M.J. Quasi-3D Inverse Design of S-Shaped Diffuser with Specified Cross-Section Distribution; Super-Ellipse, Egg-Shaped, and Ellipse. Inverse Probl. Sci. Eng. 2021, 29, 2611–2628.
  3. Li, X.; Ma, L.; Xu, L. Empirical Modeling for Non-Lambertian Reflectance Based on Full-Waveform Laser Detection. Opt. Eng. 2013, 52, 116110.
  4. Hawari, A.; Alamin, M.; Alkadour, F.; Elmasry, M.; Zayed, T. Automated Defect Detection Tool for Closed Circuit Television (CCTV) Inspected Sewer Pipelines. Automat. Constr. 2018, 89, 99–109.
  5. Hansen, P.; Alismail, H.; Rander, P.; Browning, B. Visual Mapping for Natural Gas Pipe Inspection. Int. J. Robot. Res. 2015, 34, 532–558.
  6. Karkoub, M.; Bouhali, O.; Sheharyar, A. Gas Pipeline Inspection Using Autonomous Robots With Omni-Directional Cameras. IEEE Sens. J. 2021, 21, 15544–15553.
  7. Zheng, D.; Tan, H.; Zhou, F. A Design of Endoscopic Imaging System for Hyper Long Pipeline Based on Wheeled Pipe Robot. In Proceedings of the AIP Conference Proceedings, Penang, Malaysia, 8–9 August 2017; Volume 1820, p. 060001.
  8. Zhou, Q.; Tian, Y.; Wang, J.; Xu, M. Design and Implementation of a High-Performance Panoramic Annular Lens. Appl. Opt. 2020, 59, 11246–11252.
  9. Lei, J.; Fu, J.; Guo, Q. Distortion Correction on Gun Bore Panoramic Image. In Proceedings of the 2011 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), Chengdu, China, 17–19 June 2011; Huang, H.Z., Zuo, M.J., Jia, X., Liu, Y., Eds.; IEEE: New York, NY, USA, 2011; pp. 956–959.
  10. Wang, Z.; Liang, H.; Wu, X.; Zhao, Y.; Cai, B.; Tao, C.; Zhang, Z.; Wang, Y.; Li, S.; Huang, F.; et al. A Practical Distortion Correcting Method from Fisheye Image to Perspective Projection Image. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 26–30 May 2015; IEEE: New York, NY, USA, 2015; pp. 1178–1183.
  11. Ai, Q.; Yuan, Y. Rapid Acquisition and Identification of Structural Defects of Metro Tunnel. Sensors 2019, 19, 4278.
  12. Akpinar, U.; Sahin, E.; Meem, M.; Menon, R.; Gotchev, A. Learning Wavefront Coding for Extended Depth of Field Imaging. IEEE Trans. Image Process. 2021, 30, 3307–3320.
  13. Yao, C.; Shen, Y. Optical Aberration Calibration and Correction of Photographic System Based on Wavefront Coding. Sensors 2021, 21, 4011.
  14. Banerji, S.; Meem, M.; Majumder, A.; Sensale-Rodriguez, B.; Menon, R. Extreme-Depth-of-Focus Imaging with a Flat Lens. Optica 2020, 7, 214.
  15. Perwass, C.; Wietzke, L. Single Lens 3D-Camera with Extended Depth-of-Field. In Human Vision and Electronic Imaging XVII; Rogowitz, B.E., Pappas, T.N., DeRidder, H., Eds.; SPIE-Int. Soc. Optical Engineering: Bellingham, WA, USA, 2012; Volume 8291, p. 829108.
  16. Ihrke, I.; Restrepo, J.; Mignard-Debise, L. Principles of Light Field Imaging: Briefly Revisiting 25 Years of Research. IEEE Signal Process. Mag. 2016, 33, 59–69.
  17. Liu, S.; Chen, J.; Rahardja, S. A New Multi-Focus Image Fusion Algorithm and Its Efficient Implementation. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1374–1384.
  18. Liu, H.; Zhou, X. Multi-Focus Image Region Fusion and Registration Algorithm with Multi-Scale Wavelet. Intell. Autom. Soft Comput. 2020, 26, 1493–1501.
  19. Guo, C.; Ma, Z.; Guo, X.; Li, W.; Qi, X.; Zhao, Q. Fast Auto-Focusing Search Algorithm for a High-Speed and High-Resolution Camera Based on the Image Histogram Feature Function. Appl. Opt. 2018, 57, F44–F49.
  20. Wang, C.; Huang, Q.; Cheng, M.; Ma, Z.; Brady, D.J. Deep Learning for Camera Autofocus. IEEE Trans. Comput. Imaging 2021, 7, 258–271.
  21. Liu, H.; Li, H.; Luo, J.; Xie, S.; Sun, Y. Construction of All-in-Focus Images Assisted by Depth Sensing. Sensors 2019, 19, 1409.
  22. Wilburn, B.; Joshi, N.; Vaish, V.; Talvala, E.V.; Antunez, E.; Barth, A.; Adams, A.; Horowitz, M.; Levoy, M. High Performance Imaging Using Large Camera Arrays. ACM Trans. Graph. 2005, 24, 765–776.
  23. Shao, W.; Liu, K.; Shao, Y.; Zhou, A. Smooth Surface Visual Imaging Method for Eliminating High Reflection Disturbance. Sensors 2019, 19, 4953.
  24. Liu, B.; Yang, Y.; Wang, S.; Bai, Y.; Yang, Y.; Zhang, J. An Automatic System for Bearing Surface Tiny Defect Detection Based on Multi-Angle Illuminations. Optik 2020, 208, 164517.
  25. Feng, W.; Liu, H.; Zhao, D.; Xu, X. Research on Defect Detection Method for High-Reflective-Metal Surface Based on High Dynamic Range Imaging. Optik 2020, 206, 164349.
  26. Chen, Y.-J.; Tsai, J.-C.; Hsu, Y.-C. A Real-Time Surface Inspection System for Precision Steel Balls Based on Machine Vision. Meas. Sci. Technol. 2016, 27, 074010.
  27. Phong, B.T. Illumination for Computer Generated Pictures. Commun. ACM 1975, 18, 7.
  28. Tian, H.; Wang, D.; Lin, J.; Chen, Q.; Liu, Z. Surface Defects Detection of Stamping and Grinding Flat Parts Based on Machine Vision. Sensors 2020, 20, 4531.
  29. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Tech. 2022, 9, 661–691.
  30. Tezerjani, A.D.; Mehrandezh, M.; Paranjape, R. Optimal Spatial Resolution of Omnidirectional Imaging Systems for Pipe Inspection Applications. Int. J. Optomechatron. 2015, 9, 261–294.
  31. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography. Comput. Graph. Forum 2009, 28, 161–171.
  32. Abdulrahman, A.A.; Rasheed, M.; Shihab, S. The Analytic of Image Processing Smoothing Spaces Using Wavelet. J. Phys. Conf. Ser. 2021, 1879, 022118.
Figure 1. Depth-of-field principle diagram.
Figure 2. DoF principle of pinhole and cross-modal all-in-focus imaging strategy: (a) radial section diagram of the irregular pipe; (b) adaptive hybrid focusing principle.
Figure 3. Non-Lambertian free surface lighting effect: (a) reflection diagram of the Phong model; (b) characteristics of high reflection from different light sources LS2 and LS3 on a plane surface; (c) schematic diagram of the lighting device; (d) influence of the positional relationship between camera, light source, and surface on the reflection performance.
Figure 4. Imaging and fusion process: (a) the image acquisition process of the imaging method in this paper; (b) the image fusion method and process in this paper. The process in the red box (I) indicates that the sequence images obtained under imaging Combination I in Step 3 undergo multi-exposure fusion to obtain Img-Type-I; the processes in the blue box (II) and the orange box (III) indicate that, under imaging Combinations II and III, wavelet fusion or multifocus fusion is used to obtain Img-Type-II or Img-Type-III, respectively; the process in the purple box (IV) shows that Img-Type-IV is obtained by fusing two Img-Type-III images under Combination IV via multifocus fusion.
Figure 5. Composition of the imaging device.
Figure 6. Experimental scene: (a) experimental scene model; (b) viewpoint i; (c) viewpoint ii; (d) viewpoint iii. When the model is at the location of viewpoints i, ii, and iii, the type of imaging combination becomes combinations I, II, and IV, respectively. (e) The actual scenario, where the clarity of the image can be judged by observing the thin line on the “inspection gauge”.
Figure 7. Decision-making process of the all-in-focus scheme at viewpoint i: (a) result of surface simulation at viewpoint i; (b) data of depth distribution at viewpoint i; (c) all-in-focus imaging scheme at viewpoint i.
Figure 8. Decision-making process of lighting scheme and final image at viewpoint i: (a) decision-making process of lighting scheme; (b) HDR process of final image, where the four images with different exposure levels shown in the left part of (b) are fused into the HDR image on the right.
Figure 9. Decision-making process under viewpoint ii: (a) surface simulation; (b) depth distribution data; (c) all-in-focus imaging scheme; (d) pre-imaging images; (e) pre-imaging images after the opening operation, where the four colors represent the different light sources; for example, the image labeled in red was obtained under Tlight, and blue, orange, and purple correspond to Blight, Llight, and Rlight, respectively; (f) inferred images after the AND/OR operations between pairs of pre-imaging images, where the colors have the same meaning as in (e) and the black areas represent the inferred regions where the spots of the two light sources coincide.
Figure 10. Overexposure suppression process under combined lighting.
Figure 11. Decision-making process under viewpoint iii: (a) surface simulation; (b) depth distribution data; (c) all-in-focus imaging scheme; (d) pre-imaging images; (e) pre-imaging images after the opening operation, where the four colors represent the different light sources; for example, the image labeled in red was obtained under Tlight, and blue, orange, and purple correspond to Blight, Llight, and Rlight, respectively; (f) inferred images after the AND/OR operations between pairs of pre-imaging images, where the colors have the same meaning as in (e) and the black areas represent the inferred regions where the spots of the two light sources coincide.
Figure 12. All-in-focus imaging with overexposure suppression under viewpoint iii: (a) three HDR images under Tlight at three object-plane depths, DFS1 = 584 mm, DFS2 = 350 mm, and DFS3 = 250 mm; (b) three HDR images under Blight at the same three object-plane depths; (c) on the left, the all-in-focus HDR image obtained by fusing the three images under Tlight in (a); in the middle, the all-in-focus HDR image obtained by fusing the three images under Blight in (b); on the right, the all-in-focus image without overexposure, obtained by fusing the left and middle all-in-focus images.
Figure 13. Comparison of results before and after all-in-focus fusion.
Figure 14. Comparisons of multifocus renderings.
Table 1. Consistency parameter configuration.
Parameter | Value
Resolution | 2592 × 1944
Feedback light intensity | 20 lux
Aperture value | 2.8
ISO | 800
Shutter time of pre-imaging (ms) | 50
Collection of multiple-exposure shutter times (ms) | {35, 50, 80, 120}
Saturation | 0
Brightness | 50
Sharpness | 20
Table 2. Statistics of pre-imaging overexposures.
Light source | Tlight | Blight | Llight | Rlight
Spot area, Size(IMGa) | 590 | 3571 | 0 | 4713
Normalized spot area | 0.0117 | 0.0711 | 0.0 | 0.0938
Table 3. Decision parameters of the dynamic lighting scheme.
Parameter | kO | kM | kα | kβ
Value | 0.7 | 0.3 | 30 | 5
Table 4. Process data statistics for lighting solutions under Viewpoint ii.
Light-source pair | Sor | Sand | ORI | SOI | LC
Tlight–Llight | 1580 | 0 | 1 | 0.974 | 0.992
Tlight–Blight | 4435 | 54 | 0.984 | 0.966 | 0.979
Tlight–Rlight | 5770 | 165 | 0.950 | 0.962 | 0.953
Llight–Blight | 4131 | 206 | 0.939 | 0.967 | 0.947
Llight–Rlight | 5790 | 0 | 1 | 0.988 | 0.961
Blight–Rlight | 6264 | 2450 | 0.376 | 0.960 | 0.551
Table 5. Results of depth segmentation and depth-of-field matching.
j | <Dj> (mm) | DFMj (mm)
1 | 219–292 | 250
2 | 292–438 | 350
3 | 438–878 | 584
Table 6. Process data statistics for lighting solutions under Viewpoint iii.
Light-source pair | Sor | Sand | ORI | SOI | LC
Tlight–Llight | 662 | 26 | 0.992 | 0.977 | 0.988
Tlight–Blight | 681 | 25 | 1 | 0.977 | 0.993
Tlight–Rlight | 1920 | 39 | 0.988 | 0.974 | 0.984
Llight–Blight | 491 | 66 | 0.980 | 0.980 | 0.979
Llight–Rlight | 1832 | 11 | 0.997 | 0.974 | 0.990
Blight–Rlight | 1802 | 27 | 0.992 | 0.974 | 0.987
Table 7. Objective evaluation results 1 of image clarity.
Metric | IMGFM1 | IMGFM2 | IMGFM3 | IMGFused
VEOG | 51.537 | 89.531 | 98.709 | 171.402
VTenengrad (10³) | 1.294 | 2.342 | 2.879 | 4.377
VVollath (10³) | 0.591 | 0.610 | 0.607 | 0.839
Table 8. Objective evaluation results 2 of image clarity.
Image | SNR (dB) | PSNR (dB)
IMGFM1 | 14.6442 | 33.9384
IMGFM2 | 15.8984 | 35.1926
IMGFM3 | 17.5386 | 36.8323
