Article

Free-Surface Velocity Measurement Using Direct Sensor Orientation-Based STIV

College of Computer and Information Engineering, Hohai University, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Micromachines 2022, 13(8), 1167; https://doi.org/10.3390/mi13081167
Submission received: 29 June 2022 / Revised: 17 July 2022 / Accepted: 20 July 2022 / Published: 23 July 2022
(This article belongs to the Section E:Engineering and Technology)

Abstract

Particle image velocimetry (PIV) is a quantitative flow visualization technique, which greatly improves the ability to characterize various complex flows in laboratory and field environments. However, the deployment of reference objects or ground control points (GCPs) for velocity calibration is still a challenge for in situ free-surface velocity measurements. By combining space-time image velocimetry (STIV) with direct sensor orientation (DSO) photogrammetry, a laser distance meter (LDM)-supported photogrammetric device is designed to realize GCPs-free surface velocity measurement under an oblique shooting angle. The velocity calibration with DSO is based on the collinear equation, while the lens distortion, oblique shooting angle, water level variation, and water surface slope are introduced to build an imaging measurement model whose parameters have explicit physical meaning. To accurately obtain the in situ position and orientation of the camera utilizing the LDM and its embedded tilt sensor, the camera's intrinsic parameters and its relative position with respect to the LDM are calibrated in advance with a planar chessboard. A flume experiment is designed to evaluate the uncertainty of optical flow estimation and velocity calibration. Results show that the proposed DSO-STIV has good transferability and operability for in situ measurements. It is superior to propeller current meters and surface velocity radars in characterizing shallow free-surface flows; this is attributed to its non-intrusive, whole-field, and high-resolution features. In addition, the combined uncertainty of free-surface velocity measurement is analyzed, which provides an alternative solution for error assessment when comparison measurements fail.

1. Introduction

Particle image velocimetry (PIV) is a quantitative flow visualization technique [1]. It seeds the flow with particles that scatter light well as tracers and estimates their motion as displacement or velocity vectors between two successive frames via image processing to present the flow field. Compared with traditional velocity measurement methods, it greatly improves the ability to characterize various complex flows in laboratory and field environments; this is attributed to its non-intrusive and whole-field measurement features [2]. Velocity calibration, which transforms the motion vectors in the image coordinate system to the velocity vectors in the world coordinate system, is one of the key issues in PIV [3].
In laboratory flume and river model experiments, cameras generally shoot perpendicular to the test plane. Under these conditions, the scaling relation for orthographic projection can be simply calculated with known-size reference objects set on the test plane [4,5]. A more rigorous approach is to build the invertible coordinate transformation relation between the image plane and the object plane by solving the homography matrix. The direct linear transformation (DLT) is a commonly used method in close-range photogrammetry, which needs at least four non-aligned ground control points (GCPs) on the free surface [6]. The planar homography model is simple, but the GCPs must be strictly coplanar with the test plane. Otherwise, their projections on the image cannot represent the true elevation of the free surface to be measured. However, these conditions are often difficult to control in practice, resulting in calibration error [7].
In contrast, the conditions of large-scale river surfaces are more complicated. Firstly, in order to survey the river cross-section as completely as possible, the camera needs to be installed on a riverbank and shoot with a small pitch angle to obtain a large field of view, ranging from hundreds to thousands of square meters. This can lead to serious image perspective distortion and the loss of spatial resolution in the far-field image. Image ortho-rectification can correct the perspective distortion [8,9,10,11]; however, it not only increases the consumption of computation and memory, but also brings in errors induced by gray level interpolation [12]. Secondly, the water level of natural rivers varies quickly and significantly, especially in mountain streams where the variation can be several meters in minutes. This can change the projective relation of the free surface, thereby affecting the scale and position of the measuring area [13]. Moreover, it is quite difficult to deploy GCPs on the river surface [14]. Some studies have tried to establish the projective relation with a floating calibration board [15] or laser points tracing the water surface [16]. This can meet the measurement requirement of flow fields ranging from tens to hundreds of square meters under a large shooting angle. However, these reference objects often have poor visibility under complex illumination conditions such as shadows and glares on the river surface, which makes it difficult to automatically detect and accurately locate objects within the image. The detection error will be significantly amplified when applied to a large-scale area [17]. Currently, the commonly used velocity calibration scheme is based on variable height homography (VHH) [7,18,19,20]. It improves the measuring precision of bank-based large-scale PIV (LSPIV) systems with small shooting angles by considering the distance from GCPs to the free surface, as well as modeling the elevation of the camera with the water level and the river slope. A limitation of VHH is that at least six non-coplanar control points should be distributed evenly on both sides of the river and surveyed by a total station or differential global positioning system (DGPS). This is a labor-intensive and high-risk task, especially in flood emergency monitoring. In addition, it is found that solving VHH with DLT is sensitive to the number, distribution, and accuracy of GCPs, as well as lens distortion. Under unfavorable conditions, the solving process tends not to converge and fails to build the coordinate transformation relation [21]. Therefore, the velocity calibration problem has become one of the main bottlenecks of LSPIV.
The two-step method [22] for camera calibration in the computer vision community provides a solution for reducing the dependence of PIV on GCPs. It takes into account the nonlinear distortion of the lens and decomposes the homography matrix into intrinsic and extrinsic camera matrices. The photogrammetric task is divided into two steps: (1) indoor calibration of intrinsic parameters (focal length and principal point) and distortion aberration (radial and tangential distortion); and (2) in situ calibration of extrinsic parameters (translation and rotation). This not only reduces the number of GCPs required by the in situ calibration to four [23], but also improves the flexibility of system deployment. For example, a rotational calibration technique based on precise pan-tilt heads was devised to ensure accurate camera geometry in the wide river setting [24,25]. It can utilize near-field GCPs with better visibility to calibrate the initial values of the extrinsic parameters. The camera can then be rotated to the measuring position, and its orientations (azimuth and pitch angles) are compensated via the readings of two dials on the pan-tilt head. Since the above calibration process requires manual adjustment and reading, automatic measurement has not been supported so far. When the camera orientation changes due to external disturbance (such as a storm), the existing calibration results will no longer be applicable. Re-surveying GCPs and re-calibrating may cause inconsistencies in the world reference system, which creates problems for data use. Obviously, the development of GCPs-free imaging velocimetry has important significance for online river flow monitoring.
With the advances in sensor and information fusion technology, the positioning and orientation system (POS) is explored for extrinsic parameter measurement and coordinate transformation in both land-based [26] and aerial photogrammetry fields [27]. This technique is known as direct sensor orientation (DSO) or direct georeferencing (DG). At present, it can realize the GCPs-free measurement with decimeter-level precision within several square kilometers. The measurement quality of these systems is highly dependent on the precision of POS. This limitation is a major drawback due to the elevated cost associated with high-end POS units, particularly the inertial system. Generally, it is required that the roll and pitch angle errors of inertial systems (INS) should not be greater than 0.01°, the azimuth angle error should not be greater than 0.02°, and the recording frequency should be higher than 50 Hz. The positioning accuracy of GPS should reach the centimeter level, and the minimum sampling interval is generally less than one second. In addition, the potential accuracy of the DSO also depends on the architecture and quality of the GPS/INS integration process, as well as the validity of the system calibration. However, the calibration of multiple sensors, as well as the system mounting parameters (i.e., six elements of exterior orientation), is a delicate and complicated task. For the aerial photogrammetric systems, a special ground control field generally needs to be deployed for regular in-flight calibration. Fortunately, for PIV systems with fixed cameras, the position can be measured after installation and the orientation can be synchronously measured by a tilt sensor when capturing images. The DSO-based PIV has the potential to utilize low-cost sensors and improve the efficiency and safety of in situ measurements.
In this paper, a method which combines DSO and space-time image velocimetry (STIV) is proposed to realize free-surface velocity measurement without GCPs and image ortho-rectification. Both the safety and efficiency of field operations can be effectively improved using this low-cost measurement method. Section 2 firstly introduces the principle of an FFT-STIV based on the fast Fourier transform (FFT), then mainly focuses on the velocity calibration using DSO. Section 3 introduces the design of a laser distance meter (LDM)-supported measurement device and mainly focuses on its calibration. Section 4 presents a flume experiment to validate the method when applied to a small-scale free surface and evaluates its uncertainty. Section 5 concludes the paper and proposes research needs for future work.

2. Principle of DSO-Based FFT-STIV

2.1. Motion Estimation with FFT-STIV

STIV is a motion estimation method for time-averaged optical flow [28]. It assumes that the tracers following the surface flow satisfy the continuity of motion within a short time, so that their trajectories appear in the two-dimensional (2-D) space-time image (STI) as a distinct directional texture, whose main orientation is a function of the one-dimensional (1-D) time-averaged velocity on a testing line. Compared to LSPIV, its spatial resolution can achieve the single-pixel level, and its computational efficiency is more than 10 times that of the correlation-based method [29]. This makes it particularly suitable for real-time velocity-field observation under a small shooting angle. In this study, the central perspective images (rather than the ortho-rectified ones) are directly used for motion estimation to avoid additional errors. Figure 1 illustrates the outline of the FFT-STIV.
M frames are captured from a camera or a video at time intervals of Δt seconds, and a testing line with a length of L pixels is set along the flow direction. The testing line can be regarded as an interrogation area (IA) with a width of a single pixel. An STI of size L × M (pixels) can then be generated in x–t coordinates. It shows a directional texture resembling bright and dark bands, formed by the motion of the optical flow on the free surface. Assuming that the motion of the optical flow is equivalent to the motion of the flow tracers, the main orientation of texture (MOT), defined as δ, reflects the magnitude and direction of the time-averaged velocity V (m/s) during the time interval M · Δt (s) on the testing line:
$$V = \frac{D}{T} = \frac{d \cdot S}{\tau \cdot \Delta t} = \frac{\tan\delta \cdot S}{\Delta t} = v \cdot S$$
where D (m) represents the distance that the tracers travel in physical coordinates within T (s), d (pixel) represents the distance that the tracers travel in image coordinates within τ (frames), v (pixel/s) represents the optical flow velocity, whose sign denotes the motion direction, S (m/pixel) represents the scaling factor (i.e., the spatial resolution) on the testing line, and δ (°) represents the main orientation of texture.
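As an illustrative check (using round numbers taken from the flume experiment in Section 4), a frame interval of Δt = 0.01 s, a scaling factor of S ≈ 0.36 mm/pixel, and an optical flow of about 22 pixel/frame (i.e., v ≈ 2200 pixel/s) give V ≈ 2200 × 0.00036 ≈ 0.79 m/s, which is consistent with the cross-section velocities reported in Table 7.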
To overcome the deficiencies under deteriorated light (shadows and glares) and natural seeding conditions (sparse and uneven spatiotemporal distribution), this study addresses the issue of MOT detection in the frequency domain according to the auto-registration property of the Fourier transform magnitude spectrum (FTMS) F(u, v). This auto-registration depends on the orientation and variation frequency of the image texture, rather than its spatial location, which makes the FFT-STIV robust to local noise and randomly occurring tracers. The 2-D FFT of the STI is noted below:
$$F(u, v) = \mathrm{FFT2}\left(\mathrm{STI}(x, y)\right) = \frac{1}{N^2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\mathrm{STI}(x, y)\,e^{-j 2\pi\left(\frac{xu + yv}{N}\right)}$$
where FFT2 represents the two-dimensional fast Fourier transform, STI(x, y) represents the space-time image, and N represents the size of the space-time image.
To avoid the “spectrum compression” caused by unequal width and height, the STI is enlarged to the size of N × N pixels via zero-padding. The texture patterns of the STI are redistributed along lines passing through the center of its centrosymmetric FTMS, and the main orientation of spectra (MOS) θ is orthogonal to the MOT:
$$\delta = \theta - 90°$$
where δ ( ° ) represents the main orientation of texture.
To detect the MOS, F(u, v) is projected onto a polar coordinate system in which the polar radius ranges from 0 to R = N/2 pixels and the polar angle ranges from 0° to 180°:
$$P(\theta) = \sum_{r=0}^{R}\left| F(r, \theta) \right| / N$$
The MOS θm is then determined by searching for the peak of P(θ) and refining it with Gaussian fitting:
$$\theta_m = \arg\max\left[ P(\theta) \right]$$
The low-frequency background noise is mainly distributed within ±5° around 0° and 90° in the polar projection. Alternatively, high-pass filtering or an edge detection procedure is recommended for background suppression [30] when the peak detection of P(θ) suffers from a low signal-to-noise ratio.
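The following Python sketch outlines the MOS detection described above for a grayscale STI stored as a 2-D NumPy array; the function and parameter names are illustrative, not the authors' implementation, and the simple nearest-neighbor polar sampling stands in for whatever interpolation the original code uses.

```python
import numpy as np

def detect_mot(sti, n_angles=180):
    """Estimate the main orientation of texture (MOT) of a space-time image
    via the peak of the polar-projected FFT magnitude spectrum (MOS - 90 deg)."""
    h, w = sti.shape
    n = max(h, w)
    padded = np.zeros((n, n))
    padded[:h, :w] = sti - sti.mean()            # zero-pad to N x N, remove the DC offset
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(padded)))

    center = n // 2
    radii = np.arange(1, center)                 # skip r = 0 (residual DC component)
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)

    # Polar projection P(theta): accumulate |F(r, theta)| along each polar ray
    p = np.empty(n_angles)
    for k, t in enumerate(np.deg2rad(thetas)):
        u = np.clip(np.round(center + radii * np.cos(t)).astype(int), 0, n - 1)
        v = np.clip(np.round(center - radii * np.sin(t)).astype(int), 0, n - 1)
        p[k] = spectrum[v, u].sum() / n

    theta_m = thetas[np.argmax(p)]               # main orientation of spectra (MOS)
    delta = theta_m - 90.0                       # MOT is orthogonal to the MOS
    return delta, p
```

The time-averaged optical flow velocity on the testing line then follows from tan δ and the frame interval, as in the first equation of this section.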

2.2. Velocity Calibration with DSO

Velocity calibration aims to convert the optical flow velocity into the physical velocity and to determine the position (starting distance) of the testing line. The essential issue is how to establish the coordinate transformation relation between the object plane and the image plane. The central perspective projection model in photogrammetry provides a theoretical basis for solving such problems. In Figure 2, the object point P(X, Y, Z), the projection center C(X_C, Y_C, Z_C), and the image point p(x, y) are assumed to be collinear; o and O are the projections of C(X_C, Y_C, Z_C) onto the image plane and the object plane, respectively. For a perfect pinhole imaging system, the principal point o is at the center of the image, with image plane coordinates (in mm) of:
$$\begin{cases} x_o = s \cdot m / 2 \\ y_o = s \cdot n / 2 \end{cases}$$
where s (mm) is the pixel size of the image sensor and m   ×   n (pixels) is the image size. The image plane coordinates ( x , y ) are expressed by their image coordinates ( i , j ) (pixel) as:
$$\begin{cases} x = s \cdot i \\ y = s \cdot j \end{cases}$$
Considering that the object distance |OC| is much larger than the image distance |oc|, the focal length f can be assumed equal to the image distance. The linear geometric relation between image points and object points is established by similar triangles. If the image plane and the object plane coordinate systems are taken as the reference, respectively, the collinear equation can be expressed in the following direct and indirect forms:
$$\begin{cases} x = x_o - f\,\dfrac{a_1(X - X_C) + b_1(Y - Y_C) + c_1(Z - Z_C)}{a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)} \\[2ex] y = y_o - f\,\dfrac{a_2(X - X_C) + b_2(Y - Y_C) + c_2(Z - Z_C)}{a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)} \end{cases}$$
$$\begin{cases} X = \dfrac{a_1(x - x_o) + a_2(y - y_o) + a_3(-f)}{c_1(x - x_o) + c_2(y - y_o) + c_3(-f)}\,(Z - Z_C) + X_C \\[2ex] Y = \dfrac{b_1(x - x_o) + b_2(y - y_o) + b_3(-f)}{c_1(x - x_o) + c_2(y - y_o) + c_3(-f)}\,(Z - Z_C) + Y_C \end{cases}$$
where f represents the focal length and Z represents the elevation of the object plane.
The above two equations describe the reciprocal transformation relation between the two plane coordinate systems. The direct form projects the object point (X, Y, Z) to the image point (x, y). The indirect form projects the image point (x, y) to the object point (X, Y), but the elevation Z of the object point is required. The rotation matrix formed by the nine coefficients in the equations can be expressed by the camera's orientation angles relative to the object's 3-D coordinate system:
$$R^{T} = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix} = \begin{bmatrix} \cos\kappa\cos\varphi - \sin\kappa\sin\omega\sin\varphi & -\cos\omega\sin\varphi & \sin\kappa\cos\varphi + \cos\kappa\sin\omega\sin\varphi \\ \cos\kappa\sin\varphi + \sin\kappa\sin\omega\cos\varphi & \cos\omega\cos\varphi & \sin\kappa\sin\varphi - \cos\kappa\sin\omega\cos\varphi \\ -\sin\kappa\cos\omega & \sin\omega & \cos\kappa\cos\omega \end{bmatrix}$$
where the pitch (ω), roll (φ), and azimuth (κ) angles are defined as the angles used to rotate the (X, Y, Z) geodetic coordinate system so that it aligns with the image coordinate system. Pitch is the rotation around the X axis, roll is the rotation around the Y axis, and azimuth is the rotation around the Z axis.
There are six unknowns (X_C, Y_C, Z_C, φ, ω, κ) to be solved, which are known as the extrinsic parameters of the camera. Existing methods usually utilize at least three non-collinear GCPs to solve the unknowns, and use (8) to transform coordinates and generate ortho-rectified images for subsequent motion estimation. In this study, the model is reasonably simplified considering the fixed camera position. The object coordinate system is established by taking the reference point R on a fixed ground or sea level as the origin. Point F is the foot of the perpendicular from point C to the measuring plane. The camera's optical axis is adjusted parallel to the cross-section; thus, it can be assumed that (X_C, Y_C) = (0, 0) and κ = 0. The remaining three extrinsic parameters (Z_C, φ, ω) are obtained by distance and tilt sensors to realize the DSO-based photogrammetry.
To avoid additional errors and computations induced by image ortho-rectification, the velocity calibration is carried out based on (9) instead of (7). For STIV, the velocity on the testing line is denoted as:
$$V_{i,j} = X_{i+v_x,\,j} - X_{i,\,j}$$
where X_{i+v_x, j} and X_{i, j} represent the object-plane X coordinates of the image points (i + v_x, j) and (i, j), respectively.
The location (i.e., starting distance of the cross-section) of the testing line is denoted as:
$$D_{i,j} = \left| Y_{i,j} \right| + D_C$$
where D_C represents the Y-axis projection distance of point F relative to the reference point. Note that the scaling factor S in (1) is not directly solved by the above velocity calibration process. Instead, according to its definition, it can be derived from the physical distance between two adjacent pixels on the testing line.
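As a minimal sketch of the DSO-based velocity calibration under the simplified geometry above (camera at (X_C, Y_C) = (0, 0), κ = 0, horizontal object plane), the indirect collinear equation can be evaluated directly for each pixel of the testing line. The function name, the coefficient-matrix argument RT (laid out as in the rotation-matrix equation above), and the commented usage are illustrative assumptions, not the authors' code.

```python
import numpy as np

def image_to_object(i, j, f, s, principal, RT, cam_z, obj_z=0.0):
    """Project an undistorted image point (i, j) (pixels) onto the object
    plane Z = obj_z with the indirect collinear equation.

    f: calibrated focal length (mm); s: pixel size (mm/pixel);
    principal: principal point (x_o, y_o) in mm; cam_z: camera elevation (m);
    RT: 3x3 coefficient matrix [[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]].
    """
    x_o, y_o = principal
    x, y = i * s - x_o, j * s - y_o                 # image plane offsets (mm)
    a, b, c = RT[:, 0], RT[:, 1], RT[:, 2]          # columns give a_k, b_k, c_k
    den = c[0] * x + c[1] * y + c[2] * (-f)
    X = (a[0] * x + a[1] * y + a[2] * (-f)) / den * (obj_z - cam_z)
    Y = (b[0] * x + b[1] * y + b[2] * (-f)) / den * (obj_z - cam_z)
    return X, Y

# Velocity calibration on a testing line (illustrative usage): with the optical
# flow v_x expressed in pixel/s, V = X(i + v_x, j) - X(i, j) gives m/s, and the
# starting distance follows from D = |Y(i, j)| + D_C.
# X0, Y0 = image_to_object(i, j, f, s, (x_o, y_o), RT, cam_z)
# X1, _  = image_to_object(i + v_x, j, f, s, (x_o, y_o), RT, cam_z)
# V, D = X1 - X0, abs(Y0) + D_C
```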

3. Calibration of Measuring Device

3.1. Measuring Device

An LDM-supported measuring device (Figure 3) was designed for the DSO-based FFT-STIV. A USB 3.0 industrial camera (HIKVISION MV-CA013-20UM, made by Hikvision, in Hangzhou, China) was used for image capture. It had a monochrome CMOS sensor with an image size of 1280 × 1024 pixels and a pixel size of 4.8 μm. The frame rate at full resolution was up to 170 fps. In this study, a C-mount lens with a focal length of 8 mm was installed on the camera. The LDM (SNDWAY SW-Q120, made by SNDWAY, in Dongguan, China) was equipped with an embedded dual-axis (pitch and roll) tilt sensor. The measuring ranges of pitch and roll angles were both ±90° with precision up to 0.1°. The measuring range and precision of distance can reach 120 m and 3 mm, respectively. The camera and LDM were rigidly mounted on a platform (a quick release shoe of panhead) with parallel optical axes. Their geometric relation was assumed to be invariant. The major error sources of the measuring device included the nonlinear distortion aberration of the lens, as well as the eccentric distances and eccentric angles between the camera and the LDM. To reduce their effect, the corresponding calibration methods are discussed below.

3.2. Calibration of Distortion Aberration

Due to the complexity of lens design and manufacture, a real imaging system cannot strictly satisfy the central perspective projection model. Lens distortion deviates the actual image position from that given by the central perspective projection model, especially when wide-angle lenses are used. In high-precision photogrammetry, a nonlinear imaging model considering distortion aberration should therefore be applied for image correction. Furthermore, the focal length should take the calibrated value rather than the nominal one. In this study, both the intrinsic parameters and the distortion aberration are calibrated in the laboratory with the planar chessboard method provided by the camera calibration toolbox of OpenCV. A chessboard with 18 × 12 square grids was designed, and the side length of each grid was 60 mm. A total of nine images taken under different shooting angles were captured for calibration (Figure 4). The intrinsic parameter matrix K and the distortion parameter matrix D were calculated as follows:
$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1740.2670 & 0 & 637.8651 \\ 0 & 1743.1978 & 526.1480 \\ 0 & 0 & 1 \end{bmatrix}$$
$$D = \begin{bmatrix} k_1 & k_2 & p_1 & p_2 \end{bmatrix} = \begin{bmatrix} 0.1438 & 0.4073 & 0.0003 & 0.0017 \end{bmatrix}$$
where (c_x, c_y) represents the principal point of the distorted image, and f_x and f_y represent the focal lengths in pixels in the x and y directions; their mean value is converted to the actual focal length f in mm via the pixel size:
$$f = s \cdot (f_x + f_y) / 2 = 8.3603\ \mathrm{mm}$$
k_1 and k_2 represent the radial lens distortion parameters, and p_1 and p_2 represent the tangential lens distortion parameters. Figure 5 indicates that the mean reprojection errors of all nine images are below 0.5 pixels, and the overall mean error is 0.38 pixels.
According to the above results, image correction is performed using the nonlinear imaging model below:
$$\begin{cases} x' = x\,(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x y + p_2\,(r^2 + 2 x^2) \\ y' = y\,(1 + k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2 y^2) + 2 p_2 x y \end{cases}, \quad r^2 = x^2 + y^2$$
where (x′, y′) and (x, y) are the distorted and undistorted points in the normalized camera coordinate system, which satisfy the following relationships with their corresponding points (u′, v′) and (u, v) in the image coordinate system:
$$\begin{cases} x = (u - c_x) / f_x \\ y = (v - c_y) / f_y \end{cases}$$
$$\begin{cases} u' = f_x\, x' + c_x \\ v' = f_y\, y' + c_y \end{cases}$$
The above three equations establish the coordinate transformation relation between distorted and undistorted images.
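For reference, a typical OpenCV calibration of this kind looks like the following Python sketch; the grid parameters mirror the 18 × 12 chessboard above, while the image file names are hypothetical and not part of the original setup.

```python
import glob
import cv2
import numpy as np

# Interior corners of an 18 x 12 chessboard (17 x 11) with a 60 mm grid
pattern = (17, 11)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 60.0

obj_points, img_points = [], []
for path in glob.glob("chessboard_*.png"):        # hypothetical image files
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        corners = cv2.cornerSubPix(
            img, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds (f_x, f_y, c_x, c_y); dist holds (k_1, k_2, p_1, p_2, k_3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)

corrected = cv2.undistort(img, K, dist)           # remove lens distortion from a frame
```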

3.3. Calibration of Eccentric Distances

The eccentric distances below are defined as the 3-D distances between the optical centers of the camera (C) and the LDM (L) shown in Figure 3b, which are manually measured with a Vernier caliper according to their marked positions after system integration.
$$\begin{cases} \Delta X = 45\ \mathrm{mm} \\ \Delta Y = 35\ \mathrm{mm} \\ \Delta Z = 5\ \mathrm{mm} \end{cases}$$
The key point that should be taken into account is their effect on the elevation of the camera on the object plane:
$$Z_C = d \cdot \sin\omega_L - \Delta Y \cdot \sin\omega_L + \Delta Z \cdot \cos\omega_L + Z_S$$
where d and ω_L represent the slope distance and pitch angle of the LDM relative to the object plane, respectively. In practice, d should be measured with the LDM by aiming at the water boundary rather than at the water surface. Z_S is defined as the height of the measured water surface relative to the reference point R, which is assumed to be zero in this case (i.e., point R coincides with point F). In practice, it can be measured simultaneously with the same camera using image-based methods [31].
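As an illustrative check with the values later listed in Table 3 (d = 0.657 m, ω_L = 59.8°, ΔY = 0.025 m, ΔZ = 0.005 m, Z_S = 0), the relation above gives Z_C ≈ 0.568 − 0.022 + 0.003 ≈ 0.549 m, which matches the camera elevation reported there.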

3.4. Calibration of Eccentric Angles

The eccentric angles (Δκ, Δω, and Δφ) are defined as the differences between the three rotation angles of the camera and those of the LDM, where Δκ ≈ 0, while Δω and Δφ are calibrated according to the flow chart shown in Figure 6. The basic principle is to find the eccentric angles that minimize the root-mean-square error (RMSE) of the planar grid size:
$$(\Delta\omega, \Delta\varphi) = \arg\min\left[ E(\Delta\omega, \Delta\varphi) \right] = \arg\min\left[ \sqrt{(E_X^2 + E_Y^2)/2} \right]$$
where E X and E Y represent the total RMSEs of measured grid sizes in X and Y directions, respectively:
$$\begin{cases} E_X = \sqrt{\dfrac{1}{(M-1)\,N}\displaystyle\sum_{i=1}^{M-1}\sum_{j=1}^{N}\left[ (X_{i+1,j} - X_{i,j}) - D_X \right]^2} \\[3ex] E_Y = \sqrt{\dfrac{1}{M\,(N-1)}\displaystyle\sum_{i=1}^{M}\sum_{j=1}^{N-1}\left[ (Y_{i,j+1} - Y_{i,j}) - D_Y \right]^2} \end{cases}$$
where M and N are the numbers of interior corners on the chessboard in the two directions, (X_{i,j}, Y_{i,j}) are their object coordinates, and D_X and D_Y are the actual grid sizes. Considering the system integration precision of the device, the minimum of E(Δω, Δφ) was first searched within the ±2° neighborhood of ω_L and φ_L with a step of 0.1°, and then refined with the three-point Gaussian fitting method so that the error of the eccentric angles is less than 0.1°.
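A minimal Python sketch of this coarse-to-fine search is given below; grid_rmse is a placeholder for a user-supplied function that projects the chessboard corners with a candidate orientation and returns E, and refining the RMSE minimum through a Gaussian fit of its reciprocal is one possible reading of the three-point refinement, not necessarily the authors' exact formulation.

```python
import numpy as np

def gaussian_peak_refine(x, y):
    """Three-point Gaussian interpolation of a peak; x and y hold three
    equally spaced samples bracketing the discrete maximum."""
    ly = np.log(y)
    offset = 0.5 * (ly[0] - ly[2]) / (ly[0] - 2.0 * ly[1] + ly[2])
    return x[1] + offset * (x[1] - x[0])

def calibrate_eccentric_angles(grid_rmse, omega_L, phi_L, half_range=2.0, step=0.1):
    """Search (omega, phi) around the LDM readings to minimize the grid-size
    RMSE E, then refine with three-point Gaussian fitting."""
    omegas = np.arange(omega_L - half_range, omega_L + half_range + step, step)
    phis = np.arange(phi_L - half_range, phi_L + half_range + step, step)
    E = np.array([[grid_rmse(o, p) for p in phis] for o in omegas])
    i, j = np.unravel_index(np.argmin(E), E.shape)

    omega_c, phi_c = omegas[i], phis[j]
    if 0 < i < len(omegas) - 1:                       # refine along omega
        omega_c = gaussian_peak_refine(omegas[i-1:i+2], 1.0 / E[i-1:i+2, j])
    if 0 < j < len(phis) - 1:                         # refine along phi
        phi_c = gaussian_peak_refine(phis[j-1:j+2], 1.0 / E[i, j-1:j+2])
    return omega_c - omega_L, phi_c - phi_L           # eccentric angles (deg)
```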
To analyze the sensitivity of the calibration method to the pitch angle (ω), three sets of chessboard images were captured for testing (Figure 7). The chessboard was the same as the one used in Figure 4 and was placed horizontally on the ground. The measuring device was installed on a tripod and leveled according to its bubble. Table 1 indicates that the elevation bias caused by the eccentric distances ΔZ and ΔY was within 4 mm in this case; it is negligible when it is much less than d. Table 2 compares the measurement errors with and without the calibration of eccentric angles. In the set which directly used ω_L and φ_L for measurement, the RMSE decreased significantly as the pitch angle increased, because a smaller pitch angle produces stronger image perspective distortion and a loss of image resolution, especially in the far field. In view of this, the chessboard should be placed in the near field of the image when the calibration is performed at a small pitch angle, as in Figure 7a. The RMSEs calculated with eccentric angle calibration decrease to a similar level, with a difference of less than 0.05 mm; this is attributed to the global adjustment effect of the least squares method. Note that the RMSEs include the errors of corner detection and angle measurement. According to the grid size in the images, the mean relative error is approximately 2%, which is similar to the mean reprojection error in Figure 5. Additionally, it was found that the differences in the eccentric angles preliminarily calibrated in the three sets were all within 0.1°, which agrees with the searching step. The differences decrease to 0.05° after Gaussian fitting, and the measuring precision is further improved for set 1 with a small pitch angle. Here, the calibration results of set 3, which has the smallest image perspective distortion, are chosen for the following measurements, and the corresponding relative error of 2.03% is taken as the uncertainty of the scaling factor u(S).

4. Experiment in Flume

4.1. Experiment Settings

An experiment was conducted in a recirculating flume to evaluate the proposed method in small-scale surface velocity field measurement. The 4 m-long flume was simply built from PVC rain gutters, as shown in Figure 8. Its cross-section was trapezoidal, with an upper width of 160 mm, a lower width of 100 mm, and a height of 80 mm. The upstream inlet boundary condition was a uniform inflow velocity, the downstream outlet boundary condition was free outflow, and the wall boundary condition was a no-slip solid wall. The measuring device was installed 0.2 m upstream of the outlet. Since the lens parameters cannot be changed after calibration, the height of the device was adjusted to keep the image in focus. Note that the imaging model in Figure 2 takes the measuring plane as the reference system, while the angle measurement of the LDM is based on the geodetic reference system. This difference should be taken into account when calculating the camera orientations. In the experiment, the angle measurements of the LDM were ω_L = 59.8° and φ_L = −1.2°. The vertical and horizontal slopes of the flume were 1.23° and 0°, respectively. This resulted in ω_C = 59.272° and φ_C = −1.927° with eccentric angle and slope compensation. The elevation from the camera to the flume bottom is calculated in Table 3. As the water surface was approximately 8 mm above the flume bottom (i.e., Δl = 8 mm), the actual camera elevation was 0.541 m according to (20). The size of the effective measurement area shown in Figure 8 was approximately 640 × 480 pixels.

4.2. Comparison of In Situ and Non-In Situ Calibration

In order to verify whether the calibration results can be transferred to the actual measuring environment, a high-precision alumina calibration board was used for in situ calibration. The board contained 12 × 9 grids with a side length of 15 mm and an accuracy of 0.01 mm. As its size of 200 × 200 mm was larger than the water surface to be measured, it was set on the top of the flume. Considering the thickness of the calibration board (i.e., Δl = 3 mm), the actual elevation of the camera relative to the board was 0.464 m (Table 4). Compared with the calibration results of set 3 in Table 2, the differences in Δω and Δφ shown in Table 5 were both within the reasonable error range of 0.1°. This demonstrates that the calibration method of this integrated measuring device is practically operable. In addition, the RMSE relative to the grid size was 0.73%. Note that this value is approximately 1/3 of the relative error given in Section 3.4. It was found that the mean spatial resolutions of the two cases (0.286 × 0.364 mm/pixel versus 0.849 × 1.224 mm/pixel) also approximately met this ratio. As a result, the spatial resolution is considered a major error source of the measurement, alongside the shooting angle.

4.3. Velocity Measurement

The flow rate remained stable during the experiment. To make the flow field visible to the naked eye, a small amount of sediment and surfactant was seeded into the water to produce distinguishable foam patterns on the free surface. The camera continuously captured images for 100 s at a frame rate of 100 fps. The total of 10,000 frames was evenly divided into 40 groups for analysis.
Firstly, motion estimation with FFT-STIV was tested. A total of 50 testing lines with a length of 255 pixels and a space interval of 50 pixels were set across the section of the flume in each frame. Figure 9 visualizes the time-averaged optical flow field obtained from group 1, where each black arrow represents the optical flow velocity v_x (pixel/frame) on the corresponding testing line. For easy observation, the arrows were enlarged by 10 times, and two frames (the 18th and the 28th) with uniform and clear foam patterns were selected as background images. The central coordinates (x1, y1) and (x2, y2) of eight foam patches in the two frames were manually labeled as corresponding points (Figure 10). Table 6 provides the displacements (dx, dy) of the corresponding points and compares their optical flow velocities in the x direction (denoted as u_x) with the FFT-STIV measurements, showing good consistency between them. Here, the maximum relative error of 3.24% is taken as the uncertainty of motion estimation u(v), which includes the errors of manual labeling.
Secondly, velocity calibration with DSO was tested. Figure 11 presents the cross-section velocity distribution calculated from the total of 40 groups of calibrated velocity fields. It agreed with the pattern of velocity distribution in trapezoidal open channels [32]; the velocities varied dramatically near the two side walls due to the frictional effect, but remained stable beyond a certain distance from them. In theory, the average velocity at midstream (0.3 m) should be the maximum; in this case, the measured velocity was approximately 0.02 m/s lower than those near 0.32 m. This is because the large foam patches were mainly concentrated at midstream, and they did not follow the surface flow as well as the small ones did. The starting distances of the testing lines measured with the DSO method ranged from 0.260 m to 0.361 m. The field of view within 0.26 m was occluded by the right-side wall due to the limited observation angle, resulting in an unmeasurable area of 0.012 m (shaded area in Figure 11). The measured distance between the two lower boundaries of the trapezoidal section was 0.1 m, which is consistent with the true width of the bottom surface of the flume. These results verify that the DSO method is effective for free-surface measurement and velocity calibration.

4.4. Uncertainty Evaluation

The uncertainty evaluation of real flow field measurements has always been a difficulty in PIV-related studies. In this experiment, a propeller current meter (PCM) and a surface velocity radar (SVR) were set up to carry out a comparison measurement. The PCM was fixed at midstream (0.3 m) to measure the velocity at the water surface. The average velocity measured over a period of 100 s was 0.496 m/s. The SVR was set on one side of the flume, approximately 0.65 m above the water surface. Its pitch angle was 52° and its beam heading angle relative to the flow direction was 5°. The average velocity measured over a period of 100 s was 0.44 m/s, with a maximum of 0.63 m/s and a minimum of 0.38 m/s. Obviously, the measurements of both instruments were much lower than those of the DSO-based FFT-STIV method. The reasons are: (a) the water depth was too shallow for the propeller to submerge completely, which resulted in a low rotation rate; and (b) the water was clear and shallow enough for the electromagnetic wave to penetrate the free surface, reach the bottom, and reflect back, so the actual measured velocity approaches the depth-averaged velocity rather than the surface velocity. Therefore, both instruments have limitations under the experimental conditions and cannot be used as effective references for an uncertainty evaluation.
In view of this, the repeatability precision of the DSO-based FFT-STIV method was assessed by analyzing the 40 groups of velocity measurements, as shown in Table 7. For the testing lines in the bottom inner zone (0.265–0.341 m), the deviation between their mean and median values was within 0.004 m/s. The standard deviation (SD) was less than 0.03 m/s, and the relative standard deviation (RSD) was less than 5%, among which 85% were less than 2%. For each testing line, its 40 measurements were first sorted into ascending order according to the absolute value of their relative deviation, and the 38th value was taken as the relative error at a cumulative frequency of 95%; it ranges from 1.66% to 7.53%. Considering factors such as velocity pulsation, this repeatability precision is reasonable for free-surface velocity measurement. In addition, the cumulative frequency errors (CFEs) are approximately twice the relative standard deviations, so it can be assumed that the measurement error obeys the normal distribution law. In contrast, the random gross errors caused by strong flow turbulence and poor tracer conditions in the bottom outer zone can exceed ±10%. To indicate the relationship between spatial resolution and measurement error, Table 7 also gives the scaling factors of each testing line. Although the scaling factor (i.e., the spatial resolution) increases with the distance of the testing line from the camera, the measurement error does not show an increasing trend; this is because of the global adjustment effect of the least squares method discussed in Section 3.4. Moreover, S_x changes less significantly than S_y, indicating that the x direction of the image is less affected by perspective distortion. It can be inferred that the accuracy of the velocity calibration is higher than that of the starting distance when the optical axis of the camera is perpendicular to the direction of the water flow.
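The per-line statistics of Table 7 can be reproduced along the lines of the following Python sketch; the function name is illustrative, and the per-measurement relative error is assumed here to be the relative deviation from the line's mean velocity.

```python
import numpy as np

def repeatability_stats(velocities):
    """Summarize repeated velocity measurements (m/s) on one testing line:
    mean, median, SD, RSD (%), and the 95% cumulative-frequency error,
    i.e., the 38th of 40 sorted absolute relative deviations."""
    v = np.asarray(velocities, dtype=float)
    mean, median = v.mean(), np.median(v)
    sd = v.std(ddof=1)
    rsd = 100.0 * sd / mean
    rel_dev = np.sort(np.abs(100.0 * (v - mean) / mean))   # ascending order
    cfe_95 = rel_dev[int(np.ceil(0.95 * len(v))) - 1]      # 38th value for 40 groups
    return mean, median, sd, rsd, cfe_95
```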
According to (1), the physical velocity V only depends on the optical flow velocity v and scaling factor S . Considering the independence of their errors, here we bring the previously mentioned u ( S ) and u ( v ) into the following equation to estimate the combined standard uncertainty of velocity:
$$u(V) = \sqrt{u^2(v) + u^2(S)} = \sqrt{(3.24\%)^2 + (2.03\%)^2} = 3.823\%$$
This value is close to the maximum measured RSD of 3.81% (testing line 41) in the bottom inner zone. According to the normal distribution law of measurement error, the expanded uncertainty at the 95% confidence level is estimated as two times the combined standard uncertainty:
$$U_{95}(V) = 2 \cdot u(V) = 7.646\%$$
This value is also close to the CFE (7.53%) of testing line 41. This observation implies that the uncertainty synthesis method is feasible for the evaluation of DSO-based FFT-STIV.

5. Conclusions

The DSO-based STIV is proposed to realize 1-D free-surface velocity measurement under an oblique shooting angle. It also provides a GCPs-free solution to the velocity calibration problem in planar PIV applications where reference objects cannot be deployed in situ. On the basis of the traditional collinear equation, the lens distortion, oblique shooting angle, water level variation, and water surface slope are introduced to build an accurate and reasonably simplified imaging model that describes the coordinate transformation relation between the image and object planes. The LDM-supported measuring device is low-cost and easy to construct. It only needs to be calibrated once, provided that the camera's intrinsic parameters and its relative position with respect to the LDM remain constant. The comparison of non-in situ and in situ calibration results verifies the good transferability of the calibration method. The effectiveness of motion estimation with FFT-STIV and velocity calibration with DSO was verified step by step in the designed flume experiment. Finally, the combined uncertainty of velocity measurement was evaluated, which provides an effective means for error analysis when comparison measurements fail. Overall, the DSO-STIV proved superior to propeller current meters and surface velocity radars for surface velocity measurement under extremely shallow water conditions such as those in this study. It not only has a spatial resolution up to the single-pixel level, but also has good measuring precision, with the standard uncertainty controlled within 5% in small-scale applications. In future work, the DSO-STIV will be applied to natural rivers to evaluate its sensitivity and uncertainty on large-scale river surfaces.

Author Contributions

Conceptualization, Z.Z.; investigation, L.Z. and Z.Z.; methodology, Z.Z.; software, L.Z. and B.L.; supervision, Z.Z.; validation, T.J. and Z.C.; writing—original draft, L.Z.; writing—review and editing, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Postdoctoral Science Foundation (No. 2019M651673), the Fundamental Research Funds for the Central Universities (No. B200202187), the National Natural Science Foundation of China (No. 51709083), and the Natural Science Foundation of JiangSu Province (No. BK20170891).

Data Availability Statement

All the data presented in this study are available in this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adrian, R.J. Twenty years of particle image velocimetry. Exp. Fluids 2005, 39, 159–169.
  2. Di Cristo, C. Particle Imaging Velocimetry and its Applications in Hydraulics: A State-of-the-Art Review. In Proceedings of the 30th International School of Hydraulics, Wiejce Palace, Poland, 14–17 September 2010.
  3. Willert, C.E. Assessment of camera models for use in planar velocimetry calibration. Exp. Fluids 2006, 41, 135–143.
  4. Fox, J.F.; Patrick, A. Large-scale eddies measured with large scale particle image velocimetry. Flow Meas. Instrum. 2008, 19, 283–291.
  5. Mujtaba, B.; de Lima, J.L.M.P. Laboratory testing of a new thermal tracer for infrared-based PTV technique for shallow overland flows. CATENA 2018, 169, 69–79.
  6. Gharahjeh, S.; Aydin, I. Application of video imagery techniques for low cost measurement of water surface velocity in open channels. Flow Meas. Instrum. 2016, 51, 79–94.
  7. Lin, D.; Grundmann, J.; Eltner, A. Evaluating Image Tracking Approaches for Surface Velocimetry with Thermal Tracers. Water Resour. Res. 2019, 55, 3122–3136.
  8. Fujita, I.; Muste, M.; Kruger, A. Large-scale particle image velocimetry for flow analysis in hydraulic engineering applications. J. Hydraul. Res. 1998, 36, 397–414.
  9. Kim, Y.; Muste, M.; Hauet, A.; Krajewski, W.F.; Kruger, A.; Bradley, A. Stream discharge using mobile large-scale particle image velocimetry: A proof of concept. Water Resour. Res. 2008, 44, W09502.
  10. Muste, M.; Fujita, I.; Hauet, A. Large-scale particle image velocimetry for measurements in riverine environments. Water Resour. Res. 2008, 44, W00D19.
  11. Lee, M.C.; Leu, J.M.; Chan, H.C.; Huang, W.C. The measurement of discharge using a commercial digital video camera in irrigation canals. Flow Meas. Instrum. 2010, 21, 150–154.
  12. Hauet, A.; Creutin, J.D.; Belleudy, P. Sensitivity study of large-scale particle image velocimetry measurement of river discharge using numerical simulation. J. Hydrol. 2008, 349, 178–190.
  13. Dramais, G.; Le Coz, J.; Camenen, B.; Hauet, A. Advantages of a mobile LSPIV method for measuring flood discharges and improving stage-discharge curves. J. Hydro-Environ. Res. 2011, 5, 301–312.
  14. Patalano, A.; Garcia, C.M.; Rodriguez, A. Rectification of Image Velocity Results (RIVeR): A simple and user-friendly toolbox for large scale water surface Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV). Comput. Geosci. 2017, 109, 327–330.
  15. Daigle, A.; Berube, F.; Bergeron, N.; Matte, P. A methodology based on Particle image velocimetry for river ice velocity measurement. Cold Reg. Sci. Technol. 2013, 89, 36–47.
  16. Tauro, F.; Porfiri, M.; Grimaldi, S. Orienting the camera and firing lasers to enhance large scale particle image velocimetry for streamflow monitoring. Water Resour. Res. 2014, 50, 7470–7483.
  17. Zhang, Z.; Zhou, Y.; Li, Y.; Ye, Y.; Li, X. IP camera-based LSPIV system for on-line monitoring of river flow. In Proceedings of the 13th IEEE International Conference on Electronic Measurement and Instruments, Yangzhou, China, 20–22 October 2017.
  18. Jodeau, M.; Hauet, A.; Paquier, A.; Le Coz, J.; Dramais, G. Application and evaluation of LS-PIV technique for the monitoring of river surface velocities in high flow conditions. Flow Meas. Instrum. 2008, 19, 117–127.
  19. Al-mamari, M.M.; Kantoush, S.A.; Kobayashi, S.; Sumi, T.; Saber, M. Real-Time Measurement of Flash-Flood in a Wadi Area by LSPIV and STIV. Hydrology 2019, 6, 27.
  20. Zhu, X.; Kouyi, G.L. An Analysis of LSPIV-Based Surface Velocity Measurement Techniques for Stormwater Detention Basin Management. Water Resour. Res. 2019, 55, 888–903.
  21. Zhang, Z.; Xu, F.; Shen, J.; Han, L.; Xu, L. Plane measurement method with monocular vision based on variable-height homography. Chin. J. Sci. Instrum. 2014, 35, 1860–1868.
  22. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
  23. Holland, K.T.; Holman, R.A.; Lippmann, T.C. Practical use of video imagery in nearshore oceanographic field studies. IEEE J. Ocean. Eng. 1997, 22, 81–92.
  24. Bechle, A.J.; Wu, C.H.; Liu, W.C.; Kimura, N. Development and application of an automated river-estuary discharge imaging system. J. Hydraul. Eng. 2012, 138, 327–339.
  25. Huang, W.C.; Young, C.C.; Liu, W.C. Application of an Automated Discharge Imaging System and LSPIV during Typhoon Events in Taiwan. Water 2018, 10, 280.
  26. Rau, J.Y.; Habib, A.F.; Kersting, A.P. Direct Sensor Orientation of a Land-Based Mobile Mapping System. Sensors 2011, 11, 7243–7261.
  27. Costa, F.A.L.; Mitishita, E.A. An approach to improve direct sensor orientation using the integration of photogrammetric and lidar datasets. Int. J. Remote Sens. 2019, 40, 5651–5672.
  28. Fujita, I.; Watanabe, H.; Tsubaki, R. Development of a non-intrusive and efficient flow monitoring technique: The space time image velocimetry (STIV). Int. J. River Basin Manag. 2007, 5, 105–114.
  29. Fujita, I.; Notoya, Y.; Tani, K. Efficient and accurate estimation of water surface velocity in STIV. Environ. Fluid Mech. 2018, 19, 1363.
  30. Zhang, Z.; Wang, X.; Fan, T.; Xu, L. River surface target enhancement and background suppression for unseeded LSPIV. Flow Meas. Instrum. 2013, 30, 99–111.
  31. Zhang, Z.; Zhou, Y.; Liu, H.; Gao, H. In-situ Water Level Measurement Using NIR-imaging Video Camera. Flow Meas. Instrum. 2019, 67, 95–106.
  32. Hu, Y.; Gao, H.; Geng, L.; Cai, F. Laws of velocity distribution in trapezoidal open channels. J. Zhejiang Univ. (Eng. Sci. Ed.) 2009, 43, 1102–1106.
Figure 1. Outline of FFT-STIV.
Figure 2. Model of central projection imaging under an oblique view.
Figure 3. LDM-supported measuring device showing (a) structure diagram and (b) picture of real device.
Figure 4. Chessboard images for camera calibration.
Figure 5. Mean reprojection error of nine images.
Figure 6. Flow chart of eccentric angles calibration.
Figure 7. Calibration of eccentric angles under different shooting angles, showing (a) set 1, (b) set 2, and (c) set 3.
Figure 8. Flume experiment settings.
Figure 9. Visualization of optical flow field with different background image, showing (a) 18th frame and (b) 28th frame.
Figure 10. Matching point pairs manually labeled in 18th frame (a) and 28th frame (b).
Figure 11. Cross-section velocity distribution averaged with a total of 40 calibrated velocity fields.
Table 1. Calibration results of eccentric distances and camera elevation.

| Set | d (m) | ω_L (°) | ΔZ (m) | ΔY (m) | Z_C (m) |
|-----|-------|---------|--------|--------|---------|
| 1 | 2.414 | 19.6 | 0.005 | 0.025 | 0.806 |
| 2 | 1.778 | 41.3 | 0.005 | 0.025 | 1.161 |
| 3 | 1.784 | 59.1 | 0.005 | 0.025 | 1.512 |
Table 2. Calibration results of eccentric angles and camera orientations.

| Set | Without Calibration ω_L (°) | φ_L (°) | E (mm) | Without Gaussian Fitting ω_C (°) | φ_C (°) | E (mm) | With Gaussian Fitting ω_C (°) | φ_C (°) | E (mm) | Δω (°) | Δφ (°) |
|-----|------|------|-------|--------|--------|-------|--------|--------|-------|--------|-------|
| 1 | 19.6 | 0.6 | 2.298 | 19.1 | 1.1 | 1.249 | 19.070 | 1.064 | 1.244 | −0.530 | 0.464 |
| 2 | 41.3 | 0.1 | 1.716 | 40.7 | 0.5 | 1.205 | 40.716 | 0.523 | 1.205 | −0.584 | 0.423 |
| 3 | 59.1 | −0.6 | 1.357 | 58.6 | −0.1 | 1.217 | 58.572 | −0.097 | 1.217 | −0.528 | 0.503 |
Table 3. Calibration results of eccentric distances and camera elevation to flume bottom.

| Item | d (m) | ω_L (°) | Z_L (m) | ΔZ (m) | ΔY (m) | Z_C (m) |
|------|-------|---------|---------|--------|--------|---------|
| Value | 0.657 | 59.8 | 0.568 | 0.005 | 0.025 | 0.549 |
Table 4. Calibration results of eccentric distances and camera elevation to the calibration board.

| Item | d (m) | ω_L (°) | ΔZ (m) | ΔY (m) | Z_C (m) |
|------|-------|---------|--------|--------|---------|
| Value | 0.560 | 59.8 | 0.005 | 0.025 | 0.464 |
Table 5. Calibration results of eccentric angles and camera orientation to the calibration board.

| Item | Without Calibration ω_L (°) | φ_L (°) | E (mm) | Without Gaussian Fitting ω_C (°) | φ_C (°) | E (mm) | With Gaussian Fitting ω_C (°) | φ_C (°) | E (mm) | Δω (°) | Δφ (°) |
|------|------|------|-------|--------|--------|-------|--------|--------|-------|--------|-------|
| Value | 59.8 | −2.4 | 0.175 | 59.3 | −2.0 | 0.110 | 59.272 | −1.958 | 0.109 | −0.528 | 0.442 |
Table 6. Comparison of visual measurements and FFT-STIV measurements.

| Points | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|--------|---|---|---|---|---|---|---|---|
| (x1, y1) (pixel) | (466, 147) | (272, 184) | (291, 195) | (486, 224) | (488, 243) | (480, 258) | (249, 272) | (320, 284) |
| (x2, y2) (pixel) | (251, 156) | (48, 187) | (64, 199) | (269, 233) | (269, 251) | (257, 263) | (22, 278) | (90, 285) |
| (dx, dy) (pixel) | (−215, 9) | (−224, 3) | (−225, 4) | (−217, 9) | (−219, 8) | (−223, 5) | (−227, 6) | (−230, 1) |
| ux (pixels/frame) | 21.5 | 22.4 | 22.5 | 21.7 | 21.9 | 22.3 | 22.7 | 23.0 |
| vx (pixels/frame) | 21.770 | 22.192 | 21.770 | 21.924 | 22.051 | 22.192 | 22.542 | 22.279 |
| Er (%) | 1.26 | −0.93 | −3.24 | 1.03 | 0.69 | −0.48 | −0.70 | −3.13 |
Table 7. Statistical results of velocity measurements.

| No | Starting Distance (m) | Sx (mm/pixel) | Sy (mm/pixel) | Mean (m/s) | Median (m/s) | SD (m/s) | RSD (%) | Maximum RSD (%) | Minimum RSD (%) | CFE (%) |
|----|------|-------|-------|-------|-------|-------|--------|--------|---------|--------|
| 1 | 0.26 | 0.346 | 0.381 | 0.094 | 0.007 | 0.099 | 105.79 | 158.27 | −94.66 | 143.33 |
| 2 | 0.262 | 0.347 | 0.382 | 0.789 | 0.796 | 0.079 | 9.96 | 16.77 | −56.51 | 102.75 |
| 3 | 0.264 | 0.347 | 0.384 | 0.803 | 0.798 | 0.029 | 3.64 | 10.83 | −7.60 | 6.74 |
| 4 | 0.265 | 0.348 | 0.385 | 0.817 | 0.813 | 0.025 | 3.01 | 7.04 | −9.23 | 6.03 |
| 5 | 0.267 | 0.348 | 0.386 | 0.827 | 0.829 | 0.021 | 2.52 | 5.36 | −7.58 | 3.62 |
| 6 | 0.269 | 0.349 | 0.387 | 0.822 | 0.819 | 0.016 | 1.98 | 5.33 | −4.28 | 3.00 |
| 7 | 0.271 | 0.35 | 0.389 | 0.826 | 0.823 | 0.016 | 1.95 | 5.75 | −4.29 | 2.56 |
| 8 | 0.273 | 0.35 | 0.39 | 0.827 | 0.826 | 0.015 | 1.84 | 4.01 | −3.85 | 3.26 |
| 9 | 0.275 | 0.351 | 0.391 | 0.819 | 0.818 | 0.017 | 2.11 | 3.85 | −4.08 | 3.25 |
| 10 | 0.277 | 0.351 | 0.392 | 0.824 | 0.824 | 0.014 | 1.64 | 4.36 | −3.90 | 3.13 |
| 11 | 0.279 | 0.352 | 0.394 | 0.825 | 0.824 | 0.012 | 1.49 | 3.43 | −3.00 | 2.89 |
| 12 | 0.281 | 0.352 | 0.395 | 0.826 | 0.824 | 0.011 | 1.31 | 3.52 | −2.53 | 2.29 |
| 13 | 0.283 | 0.353 | 0.396 | 0.823 | 0.824 | 0.012 | 1.48 | 3.90 | −3.27 | 2.23 |
| 14 | 0.285 | 0.354 | 0.398 | 0.821 | 0.819 | 0.013 | 1.61 | 3.92 | −3.63 | 2.60 |
| 15 | 0.287 | 0.354 | 0.399 | 0.827 | 0.825 | 0.011 | 1.28 | 3.04 | −2.16 | 2.60 |
| 16 | 0.289 | 0.355 | 0.4 | 0.831 | 0.829 | 0.011 | 1.30 | 4.12 | −1.42 | 1.98 |
| 17 | 0.291 | 0.355 | 0.402 | 0.832 | 0.832 | 0.012 | 1.46 | 3.99 | −3.82 | 2.52 |
| 18 | 0.293 | 0.356 | 0.403 | 0.841 | 0.842 | 0.011 | 1.28 | 2.36 | −1.92 | 2.35 |
| 19 | 0.295 | 0.357 | 0.404 | 0.842 | 0.841 | 0.011 | 1.33 | 4.13 | −1.81 | 2.22 |
| 20 | 0.297 | 0.357 | 0.406 | 0.836 | 0.835 | 0.010 | 1.23 | 3.05 | −3.66 | 1.87 |
| 21 | 0.299 | 0.358 | 0.407 | 0.833 | 0.832 | 0.012 | 1.42 | 3.80 | −2.80 | 1.86 |
| 22 | 0.301 | 0.358 | 0.408 | 0.829 | 0.828 | 0.011 | 1.34 | 3.26 | −2.77 | 2.31 |
| 23 | 0.303 | 0.359 | 0.41 | 0.832 | 0.834 | 0.014 | 1.63 | 4.16 | −3.05 | 2.27 |
| 24 | 0.305 | 0.36 | 0.411 | 0.833 | 0.833 | 0.010 | 1.24 | 3.65 | −2.47 | 2.40 |
| 25 | 0.307 | 0.36 | 0.413 | 0.833 | 0.833 | 0.011 | 1.28 | 4.17 | −3.51 | 2.57 |
| 26 | 0.309 | 0.361 | 0.414 | 0.835 | 0.834 | 0.009 | 1.09 | 4.04 | −2.30 | 1.66 |
| 27 | 0.311 | 0.361 | 0.415 | 0.845 | 0.844 | 0.011 | 1.29 | 3.20 | −2.36 | 1.99 |
| 28 | 0.313 | 0.362 | 0.417 | 0.852 | 0.851 | 0.012 | 1.41 | 2.56 | −2.84 | 2.00 |
| 29 | 0.316 | 0.363 | 0.418 | 0.859 | 0.856 | 0.010 | 1.12 | 2.46 | −2.08 | 2.03 |
| 30 | 0.318 | 0.363 | 0.42 | 0.854 | 0.855 | 0.010 | 1.14 | 2.72 | −2.08 | 1.99 |
| 31 | 0.32 | 0.364 | 0.421 | 0.859 | 0.858 | 0.011 | 1.28 | 3.01 | −2.11 | 1.78 |
| 32 | 0.322 | 0.365 | 0.423 | 0.856 | 0.855 | 0.009 | 1.08 | 2.89 | −1.78 | 1.71 |
| 33 | 0.324 | 0.365 | 0.424 | 0.847 | 0.848 | 0.011 | 1.28 | 2.80 | −2.75 | 1.70 |
| 34 | 0.326 | 0.366 | 0.425 | 0.842 | 0.844 | 0.011 | 1.28 | 2.73 | −3.21 | 2.05 |
| 35 | 0.328 | 0.366 | 0.427 | 0.840 | 0.840 | 0.012 | 1.44 | 2.41 | −4.14 | 2.33 |
| 36 | 0.33 | 0.367 | 0.428 | 0.837 | 0.841 | 0.015 | 1.76 | 3.18 | −5.65 | 2.23 |
| 37 | 0.333 | 0.368 | 0.43 | 0.827 | 0.825 | 0.014 | 1.74 | 4.38 | −4.57 | 2.70 |
| 38 | 0.335 | 0.368 | 0.431 | 0.815 | 0.817 | 0.014 | 1.68 | 3.39 | −5.07 | 2.56 |
| 39 | 0.337 | 0.369 | 0.433 | 0.799 | 0.797 | 0.019 | 2.32 | 4.08 | −5.30 | 3.86 |
| 40 | 0.339 | 0.37 | 0.434 | 0.778 | 0.776 | 0.024 | 3.11 | 6.91 | −7.23 | 5.07 |
| 41 | 0.341 | 0.37 | 0.436 | 0.755 | 0.759 | 0.029 | 3.81 | 5.99 | −9.51 | 7.53 |
| 42 | 0.343 | 0.371 | 0.437 | 0.702 | 0.714 | 0.040 | 5.76 | 8.97 | −10.83 | 7.84 |
| 43 | 0.346 | 0.372 | 0.439 | 0.581 | 0.578 | 0.057 | 9.80 | 19.50 | −23.20 | 13.94 |
| 44 | 0.348 | 0.372 | 0.44 | 0.462 | 0.462 | 0.033 | 7.17 | 14.38 | −10.96 | 12.26 |
| 45 | 0.35 | 0.373 | 0.442 | 0.433 | 0.424 | 0.052 | 12.05 | 57.81 | −15.43 | 11.26 |
| 46 | 0.352 | 0.374 | 0.444 | 0.293 | 0.381 | 0.162 | 55.43 | 65.77 | −97.61 | 86.80 |
| 47 | 0.354 | 0.374 | 0.445 | 0.049 | 0.008 | 0.060 | 121.60 | 272.90 | −85.74 | 163.30 |
| 48 | 0.357 | 0.375 | 0.447 | 0.016 | 0.007 | 0.031 | 188.18 | 843.68 | −57.38 | 106.42 |
| 49 | 0.359 | 0.376 | 0.448 | 0.007 | 0.007 | 0 | 0 | 0 | 0 | 0 |
| 50 | 0.361 | 0.376 | 0.45 | 0.007 | 0.007 | 0 | 0 | 0 | 0 | 0 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
