Article

From Light to Displacement: A Design Framework for Optimising Spectral-Domain Low-Coherence Interferometric Sensors for In Situ Measurement

Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, Loughborough, LE11 3TU, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(23), 8590; https://doi.org/10.3390/app10238590
Submission received: 23 October 2020 / Revised: 23 November 2020 / Accepted: 26 November 2020 / Published: 30 November 2020
(This article belongs to the Special Issue Manufacturing Metrology)

Abstract

Growing requirements for in situ metrology during manufacturing have led to increased interest in optical coherence tomography (OCT) configurations of low-coherence interferometry (LCI) for industrial domains. This paper investigates the optimisation of spectral-domain OCT hardware and signal processing for such implementations. A collation of the underlying theory of OCT-configured LCI systems from disparate sources, linking the journey of the light reflected from the object surface to the definition of the measurand, is presented. This is portrayed in an applicable, comprehensible design framework through its application to profilometry measurements for optimising system performance.

1. Introduction

In situ metrology is a growing requirement for many high-value manufacturing processes, ensuring traceability of parts with tight tolerances and complex geometric forms, in addition to supporting process control [1,2]. Current in situ metrology commonly involves sensors collecting information on parameters such as temperature, pressure, vibration and torque [3,4] to infer part quality. Verification of quality becomes increasingly difficult via these indirect methods as adaptive production lines create made-to-measure custom parts, such as biomedical implants. Current systems used for on-machine geometric measurement are often tactile touch-trigger probes; because of their limited measurement speed, only sparse datasets can practically be obtained in production-line scenarios, restricting such measurements to simple geometric forms that need only a low datapoint density to be described accurately [2,5,6]. These methods do not allow comprehensive in-process analysis of manufacturing progress, nor complete verification of a part's adherence to the required tolerances. A system that did would enable quick adaptation to changes in component design, ensure traceability at each stage of the manufacturing process, and allow issues to be identified at the point of occurrence so that corrective action can be taken without removing and re-fixturing the workpiece.
To achieve this goal, direct measurement of the component is required, through migration of complex metrology systems from the measurement laboratory into the manufacturing domain, allowing geometric measurement data to be collected while a manufacturing process is underway or soon after it is completed. However, due to spatial constraints and environmental influences, direct measurement of geometric form is not easily achieved using conventional metrology tools.
Low-coherence interferometry (LCI) is a well-established optical method for obtaining absolute geometric measurements and is present in many metrology laboratories around the world. However, typical LCI instruments are large and not suited to direct implementation into machining processes, often utilising white-light scanning configurations that are prone to error from external environmental factors and require dedicated workstations. Optical coherence tomography (OCT) is a branch of applied LCI developed for operating in non-ideal environments within biomedical science [7]. OCT has been developed for over three decades and demonstrates high data integrity with high spatial resolution even whilst operating within the hostile environment of the human body [8]. OCT configurations couple readily with fibre optics, allowing sensors to be located in situ at a distance from the readout electronics, meaning only the chemically inert, extremely compact and robust optical fibre needs to be integrated within the manufacturing process environment. This allows potential sensor integration into hostile areas subject to high temperatures, electromagnetic interference, corrosive fluids or radioactivity, where current electronic sensors would be prevented from working properly or would require extensive shielding. Additionally, the sensors require relatively inexpensive, readily available optical components, making the technology highly accessible. These characteristics make OCT configurations of LCI particularly interesting for in situ profilometry in locations where integration of current sensor technology is not possible.
Although several reviews of developments in OCT systems have been published [8,9,10,11], these focus on applications in the biomedical domain. As a result, the underlying theoretical principles governing the captured signal, the parameters affecting it, and the methods for processing it remain largely dispersed and isolated across the literature. An in-depth, end-to-end treatment of the theory of operation, from light source to spatial measurement in non-ideal, realistic scenarios, with an additional focus on manufacturing regimes, has not yet been produced. As this technology transitions to greater use in different engineering sectors and applications, a complete practical methodology for its design and deployment into such environments is required.
The purpose of this article is to create an optical and data-processing design framework for selecting system hardware characteristics and signal processing to optimise the desired system performance. This is achieved through collation of the underlying theory of spectral-domain OCT systems from disparate sources with an application to industrial metrology, linking the journey from the light reflected from the object surface to the definition of the measurand, and portraying it in an applicable, comprehensible format through its application to profilometry.

Sensor Applications in Industry

It can be seen from Table 1 that a diverse range of applications have already received initial investigation into the use of LCI sensors for monitoring processes. Recent work demonstrating the use of LCI in a liquid environment has shown that the system is able to provide stable, accurate data during profilometry measurements across various step heights [12]. An extension of this work, demonstrating an embedded LCI sensor for absolute distance measurement of samples within a jet of water, has also shown significant data stability [13], indicating the potential usefulness of the LCI sensor for post-process verification of parts, with the liquid jet used to flush debris and coolant during measurement. This growing interest in the use of LCI sensors in industrial applications indicates a need for a text focused on optimising systems for such environments, along with the expected outcomes and the issues commonly encountered.
Although there are several implementations of OCT configurations, due to the sensitivity advantages of Fourier-domain approaches over time-domain ones [14] and the prohibitive cost of operating with laser-swept light sources, this paper focuses on the use of spectrometer or spectral-domain based implementations.

2. Fundamental Principles

Spectral-domain LCI investigates the optical path length and magnitude of backreflected light from a sample to determine the sample's distance offset. This involves analysing the frequency modulation of the interferometric signal, which depends on the optical path difference between a stable reference signal and the signal returned from the measured sample; this is commonly achieved through conversion of the signal into the Fourier domain via a fast Fourier transform (FFT). This behaviour is well outlined in the literature, especially for measurement of biological systems with OCT [45,46]. This section clarifies, step by step, the underlying theory of how a spatial distance measurement is obtained from a simple LCI system.
A Michelson interferometer configuration is commonly used for the interferometric portion of the LCI sensor, as outlined in Figure 1a; the mathematical description of the light signal is also shown at each stage of its progress. Figure 1b,c shows that optical components can be added at the exit of the fibre, allowing control of specific beam parameters (shape, depth of focus, focal length and intensity distribution). Additionally, scanning methods can be added to perform profilometry, as shown in Figure 1b. However, when applying such a sensor within the spatially constrained, non-ideal environments present in many manufacturing procedures, scanning systems may not provide the required robustness or meet the spatial tolerances imposed within the measurement area; thus, scanning procedures such as that shown in Figure 1c may be more appropriate.

2.1. Mathematical Formulation

The progress of light through a Michelson configured LCI is shown in Figure 2; offshoots at each stage in the process show key parameters which will be affected by the system components. By understanding the impacting factors on signal quality at each stage of this progression, optimisation of system design and signal processing can be undertaken to maximise achieved signal quality.
The theoretical principles of LCI have been detailed in literature for an object with M discrete back-scattering reflectors [45,46]. This section collates that information in an end-to-end theoretical breakdown from light source to spatial measurement.
The interferometric signal is generated by the superposition of the reference and sample signals, given by Equations (2) and (3), respectively; these are composed of the same original light source signal, represented by Equation (1), but with a phase delay due to the optical path difference between them. The resulting interferogram is shown in Figure 1d.
$E_0 = s_0(k,\omega)\, e^{i(kz - \omega t)}$,
$E_R = A_R\, e^{i2kz_R}; \quad A_R = \frac{r_R}{\sqrt{2}}\, s_0(k,\omega)$,
$E_S = A_S\, e^{i2kz_S}; \quad A_S = \frac{r_S}{\sqrt{2}}\, s_0(k,\omega)$,
$E_D = \frac{1}{\sqrt{2}}\left(E_S + E_R\right) = \frac{1}{\sqrt{2}}\left(A_R\, e^{i2kz_R} + A_S\, e^{i2kz_S}\right)$,
where $s_0$ represents the light source spectrum as a function of wavenumber $k$, also known as spatial frequency, and angular frequency $\omega$. The phase offset $(kz)$ can be set to zero for the initial signal in Equation (1). The reference and sample signals are given by Equations (2) and (3), respectively, with corresponding amplitude reflectivities $r_R$ and $r_S(z_S)$, where $z_S$ denotes the path length variable in the sample arm measured from the beam splitter; the amplitude factor $1/\sqrt{2}$ corresponds to the 50:50 distribution of the source optical power at the beam splitter. The interferogram is composed of the superposition of the returned reference and sample electric fields, as shown by Equation (4); here the factor $1/\sqrt{2}$ reflects that only half of the returned signal power is passed towards the detector, as shown in Figure 1a. This signal takes the form shown in Figure 1d.
To digitise the interferometric signal, an optical detector is used, in the form of a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) array; the recorded intensity $I$ is therefore proportional to the time-averaged electric field multiplied by its complex conjugate. The signal obtained by the spectrometer is shown in Equation (5), where the photocurrent generated by the square-law detector is expressed as a function of spatial frequency. Using the relationship $|\tilde{z}|^2 = \tilde{z}\tilde{z}^*$ for the modulus of a complex number, the following expression for the intensity can be determined:
$I_D(k,z) = \left\langle E_D E_D^{*} \right\rangle = \left\langle E_S E_S^{*} \right\rangle + \left\langle E_R E_R^{*} \right\rangle + 2\,\mathrm{Re}\left\langle E_S E_R^{*} \right\rangle$,
where $\mathrm{Re}$ denotes the real component and the angled brackets denote a time average, which can be written as:
$I_D(k,z) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} E_D E_D^{*}\, dt$,
where $I_D$ is the photocurrent generated by the spectrometer. Substituting Equations (2) and (3) for the reference field $E_R$ and sample field $E_S$ in Equation (5) yields Equation (7). Here, the light source intensity spectrum is denoted $S(k) = |s_0(k)|^2$. The result represents a modulating signal term combined with a carrier ("DC") term, producing the modulated signal shown in Figure 1d.
$I_D(k) = \frac{\rho}{4} S(k)\left[R_R + \sum_{n=1}^{N} R_{Sn}\right] + \frac{\rho}{2} S(k) \sum_{n=1}^{N} \sqrt{R_R R_{Sn}}\, \cos\!\left[2k\left(z_R - z_{Sn}\right)\right] + \frac{\rho}{4} S(k) \sum_{n \neq m}^{N} \sqrt{R_{Sn} R_{Sm}}\, \cos\!\left[2k\left(z_{Sn} - z_{Sm}\right)\right],$
where $\rho$ is the responsivity of the line-scan camera, $R_R$ is the power reflectivity of the reference, $R_{Sn}$ is the power reflectivity of the $n$th sample reflector, and $2(z_R - z_{Sn})$ is the round-trip optical path difference (OPD) between the reference and the $n$th (or $m$th) sample reflector. It can be seen from Equation (7) that the intensity of the signal is composed of the sum of three terms:
The 1st term is a constant “DC” signal, independent of changes in the OPD between the reference and sample signals. This term dominates the measured signal with an amplitude proportional to the power reflectivity of the reference and sample, scaled by the light source optical intensity as a function of wavenumber. This dominance can be reduced through changing the beam splitter ratio such that a greater percentage of the signal is directed towards the sample arm of the interferometer. Additionally, as the reference signal optical power is typically far greater than the samples returned signal optical power, this term will limit the signal strength before the spectrometer detector becomes saturated; by reducing the reference signal, the signal strength can be increased without saturation.
The 2nd term represents the reflected signals from the measured sample. The signal magnitude contribution from this term is typically far less than the DC signal contribution, due to the optical intensity being proportional to the square root of the reference and sample reflectivity.
The 3rd term represents the interference between different sample reflectors for a system where the reference and sample paths are separate, as in Figure 1a. This term will appear as additional peak artefacts in the Fourier domain of the measured signal; it can be reduced by increasing the reference signal power through changing the beam splitter ratio directing light along each path. As shown in Equation (7), the 2nd term's amplitude depends on the reflectivity of both the reference and the sample, whilst the 3rd term depends only on the sample reflectivities. However, in a common-path configuration, where the reference reflection originates within the sample arm itself, this term contains the reflected signals from the measured sample.
Depth information is obtained by performing a Fourier transform on Equation (7) to form Equation (8), allowing for determination of the modulating frequencies corresponding to sample reflector locations:
$i_D(z) = \frac{\rho}{8}\gamma(z)\left[R_R + R_{S1} + R_{S2} + \ldots\right] + \frac{\rho}{4}\gamma(z) \otimes \sum_{n=1}^{N} \sqrt{R_R R_{Sn}}\, \delta\!\left[z \pm 2\left(z_R - z_{Sn}\right)\right] + \frac{\rho}{8}\gamma(z) \otimes \sum_{n \neq m}^{N} \sqrt{R_{Sn} R_{Sm}}\, \delta\!\left[z \pm 2\left(z_{Sn} - z_{Sm}\right)\right],$
where $i_D(z)$ is the intensity recorded as a function of OPD and $\gamma(z) = \mathrm{FT}\{S(k)\}$ is the coherence function, which manifests as a convolution with the Dirac delta functions representing the frequency elements of the Fourier-transformed signal at set bins. Hence, this term determines the point spread function (PSF) over which the interference fringes of a given sample reflector are spread.

Signal Analysis for Singular Reflecting Objects

When taking measurements of opaque, singly reflecting objects such as metallic surfaces, as is often the case in manufacturing scenarios, $N = 1$. This allows Equation (7) to be reduced to the form shown in Equation (9), where $n$ denotes the refractive index of the operating medium, and removes the interference noise which occurs between reflectors at different OPDs when $N > 1$:
$I_D(k) = \frac{\rho}{4} S(k)\left[R_R + R_{S1}\right] + \frac{\rho}{2} S(k) \sqrt{R_R R_{S1}}\, \cos\!\left[2kn\left(z_R - z_{S1}\right)\right].$
Depth information can be obtained by performing an inverse Fourier transform on Equation (9) to form Equation (10):
$i_D(z) = \frac{\rho}{8}\gamma(z)\left[R_R + R_{S1}\right] + \frac{\rho}{4}\gamma(z) \otimes \sqrt{R_R R_{S1}}\, \delta\!\left[z \pm 2n\left(z_R - z_{S1}\right)\right].$
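To make Equations (9) and (10) concrete, the following Python sketch simulates a single-reflector spectral interferogram and recovers the reflector offset by Fourier transform. All parameter values (source spectrum, reflectivities, offset) are illustrative assumptions and do not describe the instrument used later in this paper.

```python
import numpy as np

# Illustrative parameters only (not the instrument described in Section 3.2.1)
lambda_0 = 850e-9          # centre wavelength (m)
delta_lambda = 50e-9       # FWHM source bandwidth (m)
N_s = 2048                 # number of spectrometer pixels
n_med = 1.0                # refractive index of the operating medium
R_R, R_S1 = 0.9, 1e-3      # reference and sample power reflectivities
rho = 1.0                  # detector responsivity (arbitrary units)
opd = 0.4e-3               # single-pass offset z_R - z_S1 (m)

# Wavenumber axis sampled linearly in k (resampling is discussed in Section 4.1)
k0 = 2 * np.pi / lambda_0
dk_fwhm = 2 * np.pi * delta_lambda / lambda_0**2
k = np.linspace(k0 - 3 * dk_fwhm, k0 + 3 * dk_fwhm, N_s)
S = np.exp(-4 * np.log(2) * ((k - k0) / dk_fwhm) ** 2)   # Gaussian source spectrum S(k)

# Equation (9): DC term plus a single cosine modulated by the round-trip OPD
dc_term = (rho / 4) * S * (R_R + R_S1)
I_D = dc_term + (rho / 2) * S * np.sqrt(R_R * R_S1) * np.cos(2 * k * n_med * opd)

# Equation (10): Fourier transforming the DC-subtracted interferogram localises the
# reflector at z-hat = 2 n (z_R - z_S1); Section 4.3 covers DC removal for real data
i_z = np.abs(np.fft.rfft(I_D - dc_term))
z_hat = 2 * np.pi * np.fft.rfftfreq(N_s, d=k[1] - k[0])   # conjugate variable of k
z_est = z_hat[np.argmax(i_z)] / (2 * n_med)               # undo the depth doubling
print(f"recovered offset: {z_est * 1e3:.3f} mm (true {opd * 1e3:.3f} mm)")
```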

2.2. Extracting Spatial Depth from Spectral Interferogram

From Equations (8) and (10), the spatial distance measurement of interest is encoded in the sinusoidal modulation term $\cos\left[2kn\left(z_R - z_S\right)\right]$; hence, the period of the fringes in $k$-space is $\pi/\left[n\left(z_R - z_S\right)\right]$.
The Fourier-transformed result is composed of a series of discrete bins dependent on the sampling resolution $\delta_s k$ of the spectrometer with an array of $N_s$ pixels. The spectral sampling resolution is $\delta\lambda = \Delta\lambda/N_s$, or $\delta_s k = 2\pi\,\delta\lambda/\lambda_0^2$ in wavenumber; hence, the sampling interval in the $\hat{z}$-domain is $\delta\hat{z} = 2\pi/(N_s\,\delta_s k)$, where $\hat{z} = 2z$ accounts for the apparent depth-doubling factor in the modulation term $\cos\left[2kn\left(z_R - z_S\right)\right]$. Hence, a peak located at bin $N_i$ of the transformed spectrum will correspond to the depth shown in Equation (11):
$z_i = \frac{\pi N_i}{\delta_s k\, N_s}.$
However, real systems are not perfect representations of their theoretical counterparts; errors in parameters such as pixel spacing and diffraction grating ruling will propagate into the measurement data if such a linear-fit approach is taken. Further calibration of the system can therefore be undertaken to improve the accuracy of this transfer function. Previous work found for one setup that this leads to errors of up to approximately 50 nm during measurements in air and approximately 100 nm whilst operating in water [12]. Hence, depending on the measurement accuracy requirements, further action may be required to compensate for nonlinearities in the system. To calibrate the LCI system, sensor readings were obtained from a mirror sample at a range of offsets from the fibre tip; simultaneous reference measurements were taken using a reference interferometer (Renishaw XL-80, UK), allowing accurate determination of $\delta_s k$ and $k$ across the usable range of operation.
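As a minimal illustration of Equation (11), the helper below converts an FFT peak bin index to a depth. The spectrometer values are nominal figures taken from Section 3.2.1 and stand in for the calibrated $\delta_s k$ that would normally come from the procedure described above.

```python
import numpy as np

def bin_to_depth(peak_bin: float, n_pixels: int, delta_s_k: float) -> float:
    """Equation (11): convert an FFT peak bin index to a depth in metres.

    delta_s_k is the spectral sampling interval in rad/m; in practice it is
    obtained from the calibration described above rather than from nominal
    spectrometer specifications.
    """
    return np.pi * peak_bin / (delta_s_k * n_pixels)

# Example with nominal values close to the spectrometer of Section 3.2.1
# (756 nm start, 174 nm span, 2048 pixels); treat these as placeholders.
lambda_0 = 843e-9
d_lambda = 174e-9 / 2048                        # wavelength sampling interval
delta_s_k = 2 * np.pi * d_lambda / lambda_0**2  # wavenumber sampling interval
print(bin_to_depth(peak_bin=512, n_pixels=2048, delta_s_k=delta_s_k))  # ~1.05e-3 m
```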

2.3. Signal Noise

The noise in a detected signal is a result of noise introduced by the measurement hardware and of non-ideal measurement conditions. Noise sources arising from the measurement hardware can be grouped generally as readout noise, photon noise, relative intensity noise (RIN) and dark noise [47]. These can be further classified as temporal or spatial noise sources. Temporal sources such as photon and dark noise vary with time and can thus be reduced by averaging frames. Spatial noise arises from non-uniformity in dark current and photo response across the array.
Photon noise, also known as shot noise, is the statistical variation in the arrival rate of photons at the camera detector governed by Poisson statistics; its magnitude is therefore linked to the square root of the signal optical intensity, as shown in Equation (12). Dark noise results from charge accumulating in each pixel due to thermal fluctuations in the silicon, which can release electrons. Readout noise arises from converting the pixel charge carriers into a voltage signal and from the analogue-to-digital conversion, with the largest contributor being the amplifier electronics. RIN describes the instability in the power level of the light source and is typically independent of laser power.
Spectrometer CCDs are commonly cooled, which makes the contribution of dark current to the total noise insignificant. The relative contribution of photon noise reduces as photon flux increases, due to the square-root relationship with the signal. When the measured signal has a low optical intensity, the read noise will be greater than the photon noise and the signal is termed read-noise limited. Increasing the optical power input to the spectrometer, or increasing the integration time, allows the pixels to collect more photons, which in turn increases the SNR; a crossover point is then reached where photon noise exceeds read and dark noise, and the signal is termed photon- or shot-noise limited:
$\sigma_{photon} = \sqrt{\frac{\rho\,\eta\,\tau}{h\nu_0}\cdot\frac{P_0}{N}\left(\gamma_S R_S + \gamma_R R_R\right)},$
where $\eta$ is the quantum efficiency of the detector, $\tau$ is the exposure time, $h$ is the Planck constant, $\nu_0$ is the centre frequency of the light source spectrum, $P_0$ is the optical power, and $\gamma_S$ and $\gamma_R$ denote the fraction of the input power that exits the interferometer from each arm, assuming $R_S = R_R = 1$.
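The following back-of-the-envelope Python check illustrates the read-noise versus shot-noise crossover discussed above. It uses the basic Poisson relationship between collected photoelectrons and shot noise rather than the full Equation (12), and every numerical value is an assumed, illustrative input.

```python
import numpy as np

# Rough check of whether a pixel is read-noise or shot-noise limited.
# All numbers are illustrative assumptions, not measured properties of the
# system described in this paper.
h = 6.626e-34                # Planck constant (J s)
nu_0 = 3e8 / 850e-9          # centre optical frequency (Hz)
eta = 0.6                    # detector quantum efficiency (assumed)
tau = 8e-3                   # integration time (s)
P_pixel = 2e-12              # optical power reaching one pixel (W, assumed)
read_noise_e = 10.0          # RMS readout noise in electrons (assumed)

n_electrons = eta * tau * P_pixel / (h * nu_0)   # collected photoelectrons
sigma_shot = np.sqrt(n_electrons)                # Poisson (shot) noise
regime = "shot-noise limited" if sigma_shot > read_noise_e else "read-noise limited"
print(f"{n_electrons:.0f} e-, shot noise {sigma_shot:.1f} e-: {regime}")
```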

2.4. Measurement Range

The useable measurement range of an LCI system depends on the sampling rate of the signal (the Nyquist criterion), the sensitivity fall-off, and the strength of the signal returned from the measured regions, which must be distinguishable from the system's noise. This section looks at factors impacting each of these areas.

2.4.1. Maximum Range—Nyquist Theory

Since the analogue signal is digitised by sampling it across the optical detector's pixel array, the maximum sample distance from the fibre that the sensor can resolve is limited by the detector's sampling rate. Sampling the intensities of the wavelength components across the pixel array is subject to the Nyquist criterion, which requires a sampling rate of at least twice the highest fringe frequency in the measured signal. This hardware limit can be calculated using Equation (13):
$z_{max} = \frac{\lambda_0^2\, N_s}{4\,\Delta\lambda}.$
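As a quick worked example, Equation (13) can be evaluated with the nominal spectrometer parameters quoted in Section 3.2.1; the short snippet below reproduces an operating range of roughly 2.1 mm (the exact figure depends on the value taken for $\lambda_0$).

```python
# Equation (13) evaluated with the nominal spectrometer values quoted in
# Section 3.2.1, treated here as approximate, illustrative inputs.
lambda_0 = 843e-9      # centre wavelength: 756 nm start plus half of the 174 nm span
d_lambda = 174e-9      # total spectral range captured by the pixel array
N_s = 2048             # number of pixels sampling that range

z_max = lambda_0**2 * N_s / (4 * d_lambda)
print(f"z_max = {z_max * 1e3:.2f} mm")   # roughly 2.1 mm, cf. the 2.15 mm quoted later
```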

2.4.2. Sensitivity Fall-Off

When using spectrometer based LCI techniques, there is an apparent fall-off in signal strength following a Gaussian rule with increasing OPD between the reference and sample signal leg, even if the reflected signal amplitude is constant. This is caused by the following two factors: finite pixel width and finite spectrometer resolution.

Finite Pixel Width

The diffraction grating within the spectrometer spectrally separates the interferometric signal in terms of wavelength, directing spectral components of the signal onto discrete pixel detectors across the pixel array. The optical intensity of each of these spectrally dispersed spots is sampled across each pixel. The optical spot intensity is integrated over the pixel area which in effect imposes a rectangular function as a convolution of the Gaussian beam spot. This signal decay factor as a function of the finite pixel width ( R P W ) manifests as a non-normalized Sinc function as shown in Equation (14) along with the Rect function [47]:
$R_{PW} = \mathrm{sinc}^2\!\left(\frac{\pi z}{2\, z_{max}}\right), \qquad \mathrm{Rect}\!\left(\frac{k}{\delta k}\right) = \begin{cases} 0 & \text{for } |k| > \frac{\delta k}{2}, \\ \frac{1}{2} & \text{for } |k| = \frac{\delta k}{2}, \\ 1 & \text{for } |k| < \frac{\delta k}{2}. \end{cases}$
The result of this is a 4 dB drop in the signal-to-noise ratio (SNR) near the Nyquist limit [47]. However, the finite pixel width does not introduce a decay of the noise level due to the statistical independence of the shot noise between neighbouring pixels of the array [46].

Finite Spectrometer Resolution

The finite spectrometer resolution determined by the Gaussian beam profile of the spot size incident on the optical detector pixel array also introduces a sensitivity decay ( R S R ) as shown in Equation (15):
$R_{SR} = \exp\!\left[-\frac{\omega^2}{2\ln 2}\left(\frac{\pi z}{2\, z_{max}}\right)^2\right],$
where $\omega = \delta_r k/\delta_s k$ is the ratio of the spectrometer's spectral resolution $\delta_r k$, which depends on the size of the beam spot in the focal plane of the detector pixels, to the spectral sampling interval $\delta_s k$. Thus, the total signal decay in the system as a function of depth takes the form of a convolution of the finite pixel size with the Gaussian spectral resolution. After applying a Fourier transform, this becomes a multiplication of the two decay functions, giving the expression shown in Equation (16) for the sensitivity reduction $R$ as a function of imaging depth $z$ [48]:
$R(z) = \mathrm{sinc}^2\!\left(\frac{\pi z}{2\, z_{max}}\right)\exp\!\left[-\frac{\omega^2}{2\ln 2}\left(\frac{\pi z}{2\, z_{max}}\right)^2\right].$
The depth-dependent sensitivity of the system presented in this work is shown in Figure 3a; the SNR declines sharply with depth, and measurements near the Nyquist limit become difficult to analyse reliably, as shown in Figure 3b.
A normalisation procedure can be applied in the Fourier domain to compensate for fall-off by multiplying the frequency components of the signal by the inverse of the signal drop-off envelope, indicated by the red dashed line in Figure 3a. This improvement can be seen across the range of operation in Figure 3c, and Figure 3d shows the same signal as in Figure 3b with the normalisation curve applied. However, care should be taken when optimising this normalisation curve, as it will also amplify high-frequency noise and dampen low-frequency signal components.
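The sketch below evaluates the fall-off envelope of Equation (16) and builds the inverse-envelope correction described above. The spectral-resolution ratio $\omega$ and the floor used to cap the correction are assumed values that would need tuning for a real instrument.

```python
import numpy as np

# Fall-off envelope of Equation (16) and the inverse-envelope correction
# described above. omega and the correction floor are assumed values.
z_max = 2.1e-3                        # maximum depth from Equation (13) (m)
omega = 1.5                           # spectral resolution / sampling interval (assumed)
z = np.linspace(0.0, z_max, 1024)     # depth axis

arg = np.pi * z / (2 * z_max)
R = np.sinc(arg / np.pi) ** 2 * np.exp(-(omega ** 2 / (2 * np.log(2))) * arg ** 2)

# Inverse-envelope normalisation, floored so that noise-dominated deep bins
# are not amplified without limit (the trade-off noted in the text).
floor = 0.05
correction = 1.0 / np.maximum(R, floor)

print(f"fall-off at the Nyquist limit: {10 * np.log10(R[-1]):.1f} dB")
print(f"maximum applied gain: {correction.max():.1f}x")
```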

Signal Strength

An additional factor limiting a system's ability to measure over large distances is the signal strength resulting from the amount of light backreflected or backscattered into the interferometer. The surface being measured, and its orientation relative to the measurement axis, will have a large impact on signal magnitude and quality. Environmental effects may also degrade the signal, whether through attenuation in the operating medium or through vibration and particulates in solution scattering the light.
Some improvement can be achieved using optics, focussing the light on a constrained small region to improve re-coupling of back-reflected light into the system from the region of interest. However, for in situ measurements or for integration into many manufacturing regions, tight geometric constraints along with the influence of non-ideal environmental factors may limit the ability to add such optical elements into the system.

3. LCI System Design Framework

The prospect of designing an LCI system from first principles can be somewhat daunting. For practical implementation, the system will be built to a design specification of requirements; a robust methodology and knowledge of the various parameters which affect critical operating qualifiers, such as system resolution, operating range and measurement speed, are therefore required. Furthermore, many parameters are inter-linked, making the design process somewhat iterative; in many cases, trade-offs or compromises will need to be made to achieve a system that performs in the desired manner under the required operating circumstances. Figure 4 shows a system design framework based around metrology design operating criteria, depicted by the blue highlighted elements, with the offshoots in grey representing parameters impacting them. Some of the parameters are broken down into smaller elements, with the positive and negative symbols indicating whether the parameter should be increased or decreased to maximise each design criterion based on our exemplar; other designs may vary based on the prioritisation of certain requirements.
The operating domain should also be taken into account during the design phase, such as operating medium, environmental conditions and sample surface characteristics and their impact on the mentioned design parameters outlined in Figure 4. The following sections consider each step of the flowchart in Figure 4 with more in depth discussion on the various parameters and the cross-linking that occurs.

3.1. Axial Resolution

The axial resolution in an LCI system is determined by the smallest change in OPD that the system can detect, determined by the spectral resolution of the spectrometer. Previous work utilising the LCI system described in Section 3.2.1 has demonstrated submicrometric accuracy across step heights ranging from 5 μ m to 8 μ m with a precision of ±56 nm [12]. However, if transparent samples are measured with multiple reflective layers, the smallest discernible distance between layers will be limited by the coherence length of the light source and the axial resolution will be as shown in Equation (17). The larger the bandwidth of the light that is sent into the interferometer, the narrower the coherence function/PSF upon performing an FFT. Increasing the bandwidth of the light source for a detector of set pixel number will result in a reduction in the maximum achievable measurement range as shown in Equation (13); thus, a balance must be found:
$\delta z = \frac{2\ln 2}{\pi} \cdot \frac{\lambda_0^2}{\Delta\lambda}.$
When performing tomographic measurements, the resolution of the instrument is limited by the coherence function, due to potential reflectors interfering with one another. It has been shown that, for measurements of singly reflecting surfaces, the measurement accuracy is not limited by the width of the PSF peak, with step-height measurements achieving submicrometric accuracy [12].

3.2. Lateral Resolution

The lateral resolution of the system is controlled by the optics of the sensing arm of the interferometer. These fixed optics will allow for an axial field of view ( F O V a x i a l ) as shown in Equation (18) in which the lateral resolution ( δ x ) will be approximately equivalent to the focused spot size shown from Equation (19). Outside of this field of view, the lateral resolution will deteriorate:
$FOV_{axial} = \frac{0.221\,\lambda}{\sin^2\!\left[\frac{1}{2}\sin^{-1}(NA)\right]},$
$\delta x = \frac{0.37\,\lambda_0}{NA}.$
Hence, from Equations (18) and (19), it can be seen that, as lateral resolution is increased due to a large NA focusing optic, the axial field of view over which the spot size is constant decreases.
The refractive index of the operating medium may also impact the lateral resolution [34]. Additionally, for in situ measurements during manufacturing processes, tight spatial constraints may be in place, along with environmental factors such as vibration and non-ideal operating regions containing dust and liquid contaminants which may affect optical components. In such situations, the use of un-lensed optical fibres is a desirable solution due to their compact size and robust nature. However, as highlighted above, the lateral resolution then degrades severely as a result of diffraction of the light upon exiting the fibre.

3.2.1. Operating outside the Region of Focus

To demonstrate degradation in lateral resolution outside of a focused region, profile measurements were taken at two offsets of 235 μ m and 1020 μ m from a British five pence coin using the setup described in Figure 5. The system was configured as a fully fibre-enclosed implementation of an LCI with a common path for both the reference and sample signal leg, taking advantage of reduced sensitivity to vibrations, thermal fluctuations, and humidity, and removing the requirement for dispersion compensation between signals [49]. The system was un-lensed with the bare cleaved fibre having a numerical aperture of 0.13. This provides a micrometre-level footprint and form factor of the sensor. The system in this configuration has a confocal parameter of 83.5 μ m and a 1 / e 2 transverse resolution of 5 μ m, hence both measured regions will be outside of the focused beam region.
The system components consist of a broadband light source provided by a super luminescent diode (EXS210068-01, Beratron, 850 nm) with a 3-dB bandwidth of 58 nm and an emitting power of 5.14 mW at 160 mA. A single-mode fibre coupler with a splitting ratio of 50:50 for beam splitting and coupling was used, with only one branch of signal output being used as the common path for the signal and reference. A spectrometer (MayaPro2000, Ocean Optics) operating at 125 Hz with a 2048 × 64 pixel array, starting wavelength of 756 nm and spectral range of 174 nm with a resolution of 0.21 nm was used, giving a theoretical axial operating range of 2.15 mm before aliasing as defined from Equation (13).
A region of the coin was focused on which contains the artisan’s initials “J.C” to highlight the variation in lateral resolution with the distance of the coin from the fibre. Unilateral profile measurements were taken with the coin being translated at a constant speed of 100 μ m/s on the x-axis over 1100 μ m with a measurement integration time of 8 ms and a single spectrum captured per datapoint. This translated to a resolution of approximately 5 μ m per datapoint. Steps of 5 μ m were taken on the y-axis over a range of 800 μ m.
Figure 6 shows the results of the two measurements; the red circles indicate the predicted spot size calculated from the NA of the fibre for offsets of 235 μm and 1020 μm. A clear reduction in lateral resolution is present between the two offsets, which manifests as a blurring of the measured feature. This degradation is a result of the increased spot on the surface of the sample reflecting light back from a larger area, producing an integration over the illuminated area which can shift the primary returned signal to other regions. From the measurement data, an increase of 1784% in the theoretical spot size area corresponded to an increase of 88% in the measured area covered by the "J.C"; it is worth noting that an uncertainty metric for this measurement has not been evaluated. Nonetheless, the values reflect the reduction in lateral resolution seen in Figure 6, where the achievable resolution is not equal to the spot size; this is due to the optical intensity distribution across the spot and to the fact that, at large incident angles to the sample surface, the light is less likely to be recoupled into the system upon back reflection. Although the comparison may appear dramatic for a change in offset of approximately 900 μm, the lateral scale of the measurement, approximately 700 μm, should be taken into account; thus, for many applications, this introduced blur will not impact measurement data integrity. However, for some small features such as those shown in Figure 6, or for surface roughness, this may be an unacceptable compromise, and the measurement offset must then be taken into account or additional steps, such as focussing optics, must be used to improve results.
Several methods of improving the lateral resolution of an OCT profilometry measurement have been shown in literature [50,51,52]. A promising technique is the use of synthetic-aperture radar (SAR) [53] which has been used for improving the lateral resolution of radar systems for decades; due to LCI operating characteristics being analogous to radar, the same principles can be applied whilst performing line scan measurements to drastically improve the measurement resolution. This may circumvent the issue of lateral resolution whilst operating with an un-lensed system or outside of a systems lateral field of view. Application of SAR to LCI has been demonstrated with some success in literature with a technique named ’Interferometric Synthetic Aperture Microscopy’ (ISAM) [51].

3.3. Motion Artefacts and Fringe Washout

For in situ profilometry, data measurement on the fly is necessary. However, the effect of sample motion can negatively impact on data integrity due to the signal being integrated over a period of time. This results in a reduction in SNR and image degradation [54]. The magnitude of motion artefacts and SNR reduction is controlled by the total axial or transverse displacement during a singular signal acquisition time. Hence, for a given sample motion, if the capture rate is increased or axial and transverse displacements are decreased, then the impact of motion artefacts will reduce.

3.3.1. Axial Motion Effects

Motion in the axial plane may occur due to vibrations of the measured sample or sensor leading to SNR reduction. This manifests as fringe washout due to the continuous phase changes of fringes over the integration period of the measurement. Equation (20) represents the detected number of electrons per wavenumber, N ( k ) :
$N(k) = N_0\, \mathrm{sinc}\!\left(k\,\Delta z\right),$
where $N_0$ is the number of electrons obtained when the sample and sensor are stationary and $\Delta z = |v_z T|$ is the axial displacement of the sample during the integration time $T$ at axial velocity $v_z$. Performing a Fourier transform on Equation (20) allows the factor $\mathrm{sinc}(k\Delta z)$ to be approximated as $\mathrm{sinc}(k_0\Delta z)$ for the case $|\Delta z| \ll |z - z_b|$, where $z_b$ is the zero path length location. Thus, axial motion of the sample or sensor gives rise to an SNR penalty of $\mathrm{sinc}^2(k_0\Delta z)$. For $k\Delta z \gg 1$, broadening of the axial resolution occurs, because the sinc function in Equation (20) limits the effective spectral bandwidth; this can be seen as a convolution of the original image with a rect function given by the Fourier transform of the sinc function [54].

3.3.2. Lateral Motion Effects

Transverse motion will degrade the lateral resolution leading to a blurring of the image in the direction of travel [55] and a reduction in SNR [54]. For a continuous wave (CW) source, the SNR degradation ( S N R d e c r e a s e ) due to lateral motion is given by Equation (21) [54]:
$SNR_{decrease} \approx 5\log_{10}\!\left(1 + 0.5\,\frac{\Delta x^2}{w_0^2}\right),$
where $\Delta x$ is the scanning distance during the camera integration time and $w_0$ denotes the FWHM of the beam profile; the normalised displacement is defined as $\Delta x^2/w_0^2$. For pulsed illumination, $\Delta x$ is replaced by $(T_{pulse}/T_{camera})\,\Delta x$, where $T_{pulse}$ is the pulse width in time and $T_{camera}$ is the integration time of the camera for a single datapoint capture. To combat lateral motion artefacts, the spectrum capture speed can be increased, the motion speed decreased, or a pulsed or stroboscopic light source can be used instead of a CW source [54,56,57]. Additionally, pulsed illumination can offer SNR improvements over CW sources for measurement on the fly [58].
The irradiance distribution profile for a stationary Gaussian beam is given by Equation (22). Once there is transverse movement between the probe and the surface, the irradiance distribution profile for a fixed exposure period $T$ can be calculated using Equation (23) [54]:
$g(x,y) = \frac{4\ln 2}{\pi w_0^2}\exp\!\left[-\frac{4\ln 2\left(x^2 + y^2\right)}{w_0^2}\right],$
$G(x,y) = \frac{1}{T}\int_{-T/2}^{T/2} g\!\left(x + v_x t,\, y + v_y t\right) dt,$
where $x$ and $y$ are locations along the beam profile, $w_0$ is the FWHM of the beam profile, and $v_x$ and $v_y$ represent constant velocities along the transverse $x$- and $y$-axes, respectively. The impact on transverse resolution along the $x$-axis, for instance, can be found by solving for $G(x, y=0)$ and then locating the FWHM of the calculated irradiance profile.
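The short numerical sketch below evaluates Equations (22) and (23) along $y = 0$ and extracts the FWHM of the motion-blurred profile. With the exemplar values from Section 3.5 (20 μm or 10 μm of travel per exposure), it returns an effective lateral resolution of roughly 20 μm in both cases; the grid sizes are arbitrary numerical choices.

```python
import numpy as np

def blurred_fwhm(w0_static: float, delta_x: float, n: int = 4001) -> float:
    """Numerically evaluate Equations (22) and (23) along y = 0 and return the
    FWHM of the motion-blurred irradiance profile. w0_static is the stationary
    FWHM and delta_x = v_x * T is the transverse travel during one exposure."""
    x = np.linspace(-5 * (w0_static + delta_x), 5 * (w0_static + delta_x), n)
    t = np.linspace(-0.5, 0.5, 401)                          # normalised time within T
    # G(x, 0) = (1/T) * integral of g(x + v_x t, 0) dt, Equation (23)
    shifted = x[:, None] + delta_x * t[None, :]
    g = np.exp(-4 * np.log(2) * shifted**2 / w0_static**2)   # Equation (22), y = 0
    G = g.mean(axis=1)
    above = x[G >= 0.5 * G.max()]
    return above[-1] - above[0]

# Values from the design exemplar in Section 3.5: 200 mm/s translation with a
# 100 us and a 50 us exposure give 20 um and 10 um of travel per capture.
print(blurred_fwhm(6.8e-6, 20e-6) * 1e6)    # ~20 um effective lateral resolution
print(blurred_fwhm(18.7e-6, 10e-6) * 1e6)   # ~20 um effective lateral resolution
```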

3.4. Dispersion Compensation

Many manufacturing conditions do not offer an ideal environment to operate within, with machining processes involving various liquids for lubrication and for flushing of debris. OCT has already been shown to suffer measurement errors due to chromatic dispersion in optically dense materials, resulting from the frequency dependence of the propagation constants of different media [59]. The result of unbalanced dispersion within an LCI system is a reduction in signal peak intensity and a broadening of its point spread function (PSF), shown in the literature to depend on refractive index, central wavelength, bandwidth and propagation distance [59,60,61]. LCI is very tolerant of chromatic dispersion if the amounts of dispersion occurring in the reference and sample arms are equal. This can be achieved through insertion of elements of varying thickness and refractive index into the reference arm of the interferometer [60], or by using a common-path setup. Software-based compensation is also achievable, offering a more flexible method of compensation [59,62,63].
The propagation constant for a material can be described by a Taylor series expansion shown by Equation (24) with the expansion made around the light sources central frequency k 0 [63]:
$\theta(k) = \theta(k_0) + \left.\frac{\partial \theta(k)}{\partial k}\right|_{k_0}\!\left(k - k_0\right) + \frac{1}{2!}\left.\frac{\partial^2 \theta(k)}{\partial k^2}\right|_{k_0}\!\left(k - k_0\right)^2 + \ldots + \frac{1}{n!}\left.\frac{\partial^n \theta(k)}{\partial k^n}\right|_{k_0}\!\left(k - k_0\right)^n.$
The terms in Equation (24) relate to the zeroth, first and second orders of dispersion, which respectively represent a constant offset, the group velocity (neither of which is related to dispersive broadening) and the group-delay dispersion per unit length; higher-order terms may also be present.
In practice, the frequency-dependent dispersion variation can be compensated for by multiplying the obtained interferometric signal by a phase correction term, such that $I_{corrected}(k) = I(k) \times e^{i\theta(k)}$, which can be obtained during post-processing [46]. This is achieved by isolating the coherence function of a strong reflector, performing an inverse Fourier transform on the masked signal and then taking the arctangent of the imaginary component divided by the real component. By performing a linear fit and identifying the difference through subtraction, an error term is obtained indicating by how much the wavenumber components are out of phase with one another. This can then be used to form a phase adjustment term $e^{i\theta(k)}$. Using the knowledge that, for a properly dispersion-compensated system, the signal peak intensity will be at its maximum [60], an iterative procedure can be carried out until the maximum peak value is achieved.
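A possible post-processing implementation of this procedure is sketched below. It assumes a k-linear spectrum containing one dominant reflector, and the window half-width around the coherence-function peak is an assumed, tunable parameter; it is a sketch of the approach described above, not the authors' exact implementation.

```python
import numpy as np

def dispersion_phase_error(spectrum_k: np.ndarray, peak_halfwidth: int = 30) -> np.ndarray:
    """Estimate the nonlinear phase error of a k-linear spectrum containing one
    dominant reflector, following the procedure described above. Returns the
    residual phase (radians) after removing a linear fit; the corrected spectrum
    would then be spectrum_k * exp(-1j * phase_error). peak_halfwidth is an
    assumed, tunable masking parameter."""
    z_domain = np.fft.fft(spectrum_k - spectrum_k.mean())
    half = len(z_domain) // 2
    peak = np.argmax(np.abs(z_domain[1:half])) + 1        # dominant positive-frequency peak

    mask = np.zeros_like(z_domain)                        # isolate the coherence function
    lo, hi = max(peak - peak_halfwidth, 1), min(peak + peak_halfwidth, half)
    mask[lo:hi] = z_domain[lo:hi]

    analytic = np.fft.ifft(mask)                          # complex fringe signal
    phase = np.unwrap(np.angle(analytic))                 # arctan(imag/real), unwrapped
    idx = np.arange(len(phase))
    linear = np.polyval(np.polyfit(idx, phase, 1), idx)   # linear (dispersion-free) part
    return phase - linear                                 # residual = dispersion error

# Usage sketch: scale the correction iteratively until the peak magnitude of the
# Fourier-transformed, corrected spectrum is maximised, as suggested in the text.
```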

3.5. Design Exemplar

This section contains a walk-through for component specification dependent on an exemplar set requirement list as outlined in the design framework shown in Figure 4:
  • Axial resolution: 5 μ m
  • Operating range: 2.5 mm
  • Measurement speed: 200 mm/s
  • Lateral resolution: 20 μ m
Axial resolution: The operating medium's absorption spectrum, the degree of scattering present in the measurement and the required axial resolution will determine the potential range of $\lambda_0$. It is also worth noting that some bandwidth regions are better developed than others in terms of product availability and pricing. If a central wavelength of 1 μm is chosen, the corresponding bandwidth for a set axial resolution can be calculated using Equation (17); for this example, $\Delta\lambda \approx 88$ nm.
Operating range: For a required operating range and light source bandwidth, the minimum number of pixels sampling the wavelength range can be found using Equation (13) and solving for $N_s$ (i.e., $N_s = 4\, z_{max}\, \Delta\lambda / \lambda_0^2$). For the given example, this comes to $N_s = 880$ pixels.
Measurement speed: The measurement speed is limited by the linescan rate of the camera. However, signal strength will impact the possibility of operating at such speeds which requires a knowledge of sample reflectivity and potential scattering or absorption of the operating medium; the amount of signal fall-off will also impact this. As shown by Equation (23), when relative transverse motion between the probe and sample occurs for a set exposure time, there will also be a reduction in the transverse resolution.
Lateral resolution: To achieve a lateral resolution of 20 μm at a translational speed of 200 mm/s with a linescan period of 100 μs, the maximum static FWHM of the beam profile should be ∼6.8 μm from Equation (23); if the linescan period is halved to 50 μs, the static FWHM should be ∼18.7 μm. The static FWHM of the beam profile is dependent on the focusing optics used and will only be maintained over a certain axial field of view, as described by Equations (18) and (19), respectively. Objective lenses can be employed to extend the field of view [64].
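The exemplar figures above can be reproduced with a few lines of code, shown below. The NA is back-calculated from the ~6.8 μm static spot quoted above and is an assumption rather than a component selection made in this paper; the resulting axial field of view illustrates the trade-off discussed in Section 3.2.

```python
import numpy as np

# Reproduce the design-exemplar figures using Equations (17), (13), (19) and (18).
# Treating the ~6.8 um static spot from the text as the target FWHM and
# back-calculating an NA from it is an illustrative assumption.
lambda_0 = 1e-6            # chosen centre wavelength (m)
axial_res = 5e-6           # required axial resolution (m)
z_range = 2.5e-3           # required operating range (m)
static_spot = 6.8e-6       # static FWHM needed at a 100 us linescan period (m)

d_lambda = (2 * np.log(2) / np.pi) * lambda_0 ** 2 / axial_res   # Eq. (17): ~88 nm
N_s = 4 * z_range * d_lambda / lambda_0 ** 2                     # Eq. (13): ~880 pixels
NA = 0.37 * lambda_0 / static_spot                               # Eq. (19)
fov_axial = 0.221 * lambda_0 / np.sin(0.5 * np.arcsin(NA)) ** 2  # Eq. (18)

print(f"source bandwidth ~ {d_lambda * 1e9:.0f} nm")
print(f"pixels required  ~ {N_s:.0f}")
print(f"NA ~ {NA:.3f}; focused spot maintained over ~ {fov_axial * 1e6:.0f} um axially")
```

Note that the resulting axial field of view (a few hundred micrometres) is much smaller than the 2.5 mm operating range, illustrating the lateral-versus-axial trade-off noted in Section 3.2.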

4. Signal Processing Algorithms

Signal processing plays an increasingly key role in the analysis of optical measurements due to the enhanced power of microprocessors. There are many approaches to improving the signal quality of LCI systems presented in the literature; however, these remain dispersed across many sources. This section provides a concise overview of the key analysis procedures, along with highlighting the practical signal issues that the user will encounter during real-life measurement scenarios. A signal-processing framework covering the steps required to go from raw detected signal to spatial distance value is illustrated in Figure 7. Here, the solid bounding boxes represent essential parts of the post-processing and the dashed bounding boxes represent procedures that will enhance the signal quality.

4.1. Resampling Captured Interferogram

The majority of spectrometers utilise diffractive components that sample the signal uniformly in wavelength. However, FFT algorithms are most efficient when the data points are evenly spaced in the frequency domain [65]. Due to the relationship between wavenumber and wavelength, $k = 2\pi/\lambda$, the data will be non-uniformly spaced in wavenumber, requiring a re-sampling process before a conventional FFT can be effectively applied. If the dataset is left linear in $\lambda$ before the FFT is applied, severe deterioration of axial resolution and SNR will be present in the signal, with an OPD-dependent broadening of the PSF leading to a reduced operational limit due to signal fall-off. This phenomenon can be seen in Figure 8a.
It is possible to avoid this requirement for re-sampling of the data by using a Non-uniform Direct Fourier Transform (NuDFT) instead of an FFT algorithm. However, the NuDFT is more computationally intensive than other alternatives for re-sampling. Since computational time is particularly important for high-speed imaging, the FFT is preferred. Re-sampling approaches can be categorised into two methods. The first method uses a special hardware configuration which samples the returned signal directly to be linear in k [66,67]. However, this also adds additional cost, design complexity and is not easily translated between systems. The second method utilises software to take the signal acquired nonlinearly in k and re-sample it during post processing through the use of a λ to k-mapping function [68]. However, software-based methods do not perform as well for reflectors close to the maximum achievable depth (Nyquist limit) due to high frequency fringes in the interferogram that are poorly sampled [65].
The approach taken in this work is similar to that presented for automatic calibration [68], relying on the knowledge that, for a singular reflector and whilst operating in a common-path configuration negating the requirement for dispersion compensation, the interference fringes will be evenly spaced in k-space. By using zero-crossing detection shown in Figure 8b, fringe spacing as a function of pixel number is obtained from which a mapping relationship shown by the red line in Figure 8c between linear in λ and linear in k-space can be created; this allows interpolation of the data using a B-spline interpolation to create an evenly sampled dataset in k-space before performing an FFT.
To increase the accuracy of the zero-crossing detection, a zero-filling procedure can be followed, which addresses the issue of errors at greater depths caused by the increased shoulders around the peaks [14,65]. This involves taking the FFT, Fourier-shifting the result and then zero-padding the start and end of the array to increase the data-array length by a factor $M$, followed by an inverse FFT back to the spectral data. These spectral data, now $M$ times larger than the original array, have a greater datapoint density; hence, a more accurate resampling procedure is possible. In practice, this gives an increase in signal magnitude and a reduced PSF width of the signal peak.
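A generic software resampling step of the kind described in this section is sketched below. It maps a wavelength-linear spectrum onto a uniform wavenumber grid using spline interpolation; note that the authors derive their λ-to-k mapping from fringe zero-crossings, whereas this sketch simply uses the nominal wavelength axis, and the synthetic fringe signal is purely illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_to_k(spectrum: np.ndarray, wavelengths: np.ndarray,
                  upsample: int = 1) -> tuple[np.ndarray, np.ndarray]:
    """Resample a spectrum acquired linearly in wavelength onto a grid that is
    linear in wavenumber k = 2*pi/lambda, using spline interpolation. This is a
    generic sketch of the software approach described above; the lambda-to-k
    mapping used by the authors comes from fringe zero-crossings rather than
    the nominal wavelength axis used here."""
    k = 2 * np.pi / wavelengths                            # non-uniform k values (descending)
    order = np.argsort(k)                                  # sort so k is increasing
    spline = CubicSpline(k[order], spectrum[order])
    k_uniform = np.linspace(k.min(), k.max(), len(k) * upsample)
    return k_uniform, spline(k_uniform)

# Example: nominal axis of the Section 3.2.1 spectrometer (756 nm start, 174 nm span)
wl = np.linspace(756e-9, 930e-9, 2048)
spec = np.cos(2 * (2 * np.pi / wl) * 0.4e-3)               # synthetic fringes, linear in k
k_lin, spec_lin = resample_to_k(spec, wl, upsample=4)      # denser grid, loosely analogous
                                                           # to the zero-filling step
```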

4.2. Windowing Signal

Applying a windowing function to the spectral interferogram can improve the sensitivity of FFT spectral-analysis techniques and reduce spectral leakage and side-lobe amplitudes, resulting in a greater SNR. There are various types of windowing functions available, and the appropriateness of a window function depends on the frequency content of the measured signal. For example, if measuring a multi-layered reflecting sample results in signal components close to one another, then a window function with a narrow main lobe should be used to prevent degradation of the spectral resolution. Various windows should be compared against one another to find the best fit for the measurement situation. Table 2 shows four commonly employed windowing functions applied to an LCI signal, demonstrating an increase in PSF FWHM compared with the original unwindowed signal but also a substantial increase in SNR.
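The comparison in Table 2 can be reproduced in outline with the sketch below, which applies several common windows to a synthetic, k-linear interferogram and reports a crude spectral-leakage metric. The signal, the choice of windows and the ±10-bin main-lobe exclusion are all illustrative assumptions.

```python
import numpy as np

# Compare window functions on a synthetic, k-linear spectral interferogram.
N = 2048
x = np.arange(N)
spectrum = np.cos(2 * np.pi * 0.12 * x) * np.exp(-4 * np.log(2) * ((x - N / 2) / (N / 2)) ** 2)

windows = {
    "rectangular": np.ones(N),
    "hann": np.hanning(N),
    "hamming": np.hamming(N),
    "blackman": np.blackman(N),
}
for name, w in windows.items():
    mag = np.abs(np.fft.rfft(spectrum * w))
    p = int(np.argmax(mag))
    mask = np.ones_like(mag, dtype=bool)
    mask[max(p - 10, 0):p + 10] = False      # exclude the main-lobe region (assumed width)
    leakage_db = 20 * np.log10(mag[mask].max() / mag[p])
    print(f"{name:12s} peak bin {p:4d}, worst leakage {leakage_db:6.1f} dB relative to peak")
```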

4.3. Fixed Pattern Noise Removal

Fixed pattern noise refers to components of the captured signal that are separate from the interference modulation pattern which encodes the depth information in response to the OPD between the reference signal and the signal reflected from the measured sample [46]. Examples include variations in the response of pixels in the CCD camera, spurious etalons in the interferometer due to back-reflectance or stray rays within the interferometer, and the DC signal term. Various software methods to remove these unwanted signal additions have been suggested in the literature, the most common techniques being high-pass filtering, subtracting the measured reference signal [69] and subtracting the mid-point between the upper and lower signal envelopes [12].
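One simple software option, sketched below, estimates the fixed-pattern background by averaging many captured spectra and subtracting it from each frame; this is only one of the approaches listed above and assumes the fringes decorrelate between frames.

```python
import numpy as np

def remove_fixed_pattern(frames: np.ndarray) -> np.ndarray:
    """Subtract an estimate of the fixed-pattern background from each captured
    spectrum. frames has shape (n_frames, n_pixels). Averaging many frames of a
    moving or decorrelated sample washes out the interference fringes and leaves
    the DC term, pixel-response structure and spurious etalons, which are then
    removed from every frame. A measured reference spectrum or envelope mid-point
    subtraction are alternatives, as noted above."""
    background = frames.mean(axis=0, keepdims=True)
    return frames - background

# Usage sketch with synthetic data: 100 spectra of 2048 pixels each
rng = np.random.default_rng(0)
frames = 1.0 + 0.05 * rng.standard_normal((100, 2048))
clean = remove_fixed_pattern(frames)
```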

4.4. Zero-Padding

A zero-padding methodology can be applied to the captured spectrum to increase the number of datapoints in the Fourier domain, achieved by appending an array of zero values to the end of the spectrum array. This does not add any frequency components to the signal, but it increases the number of frequency bins against which the signal is compared when an FFT is performed, thus improving the datapoint density of the Fourier-transformed signal. This can be seen in Figure 9a,b, showing the FFT peak of a resampled spectrum without and with zero-padding before Fourier transforming, respectively. However, as the array size is increased, so too is the computational time. The increased datapoint density can improve polynomial fitting to the FFT peak, as shown in Figure 9c.
Figure 9d shows the resolution of the interferometer when processing the raw data by locating the maximum-intensity detected peak; this value is calculated as 2.17 μm according to the number of bins assigned by the 2048-pixel CCD array in the spectrometer. This resolution can be improved by increasing the number of bins through zero-padding, as shown in Figure 9b, or by fitting a polynomial to the datapoints surrounding the maximum peak location and extracting the vertex. In practice, a mixture of zero-padding and polynomial fitting can improve the system's ability to resolve depth values.
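The effect of zero-padding on bin density can be demonstrated with the short sketch below; the synthetic fringe frequency and the padding factor M = 4 are arbitrary illustrative choices.

```python
import numpy as np

# Effect of zero-padding on the bin spacing of the Fourier-transformed spectrum.
N = 2048
x = np.arange(N)
spectrum = np.cos(2 * np.pi * 0.1237 * x)          # fringe frequency between two bins

pad_factor = 4
padded = np.concatenate([spectrum, np.zeros(N * (pad_factor - 1))])

coarse = np.abs(np.fft.rfft(spectrum))
fine = np.abs(np.fft.rfft(padded))

# Peak positions expressed in units of the original (unpadded) bin spacing
print(np.argmax(coarse))                           # integer bin only
print(np.argmax(fine) / pad_factor)                # quarter-bin localisation of the same peak
```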

4.5. Locate Signal Peak(s)

The signal peak(s) which have the largest magnitude will correspond to regions from the sample that the light signal has been backreflected or backscattered from. Thus, these frequencies correspond to the depth values at areas of interest on the sample. A straightforward method of finding the peak(s) within a signal is through use of the MATLAB function “findpeaks”. The resolution of the measurement can be increased by fitting a 2nd order polynomial to datapoints surrounding the peak location relating to the measured sample offset distance as shown in Figure 9b. The obtained peak frequency can then be converted to a spatial depth as described in Section 2.2.
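A possible Python equivalent of this step is sketched below, using scipy.signal.find_peaks in place of the MATLAB findpeaks function and refining the strongest peak with a parabolic (2nd-order polynomial) vertex fit; the resulting fractional bin index would then be converted to a depth via Equation (11).

```python
import numpy as np
from scipy.signal import find_peaks

def subbin_peak(magnitude: np.ndarray) -> float:
    """Locate the dominant FFT peak and refine it by fitting a 2nd-order
    polynomial (parabola) through the peak bin and its two neighbours,
    returning a fractional bin index. scipy.signal.find_peaks is used here as
    a Python analogue of the MATLAB "findpeaks" function mentioned above."""
    peaks, props = find_peaks(magnitude, height=0)
    p = peaks[np.argmax(props["peak_heights"])]            # strongest peak bin
    if p == 0 or p == len(magnitude) - 1:
        return float(p)                                    # cannot fit at the array edge
    y0, y1, y2 = magnitude[p - 1], magnitude[p], magnitude[p + 1]
    # Vertex of the parabola through (-1, y0), (0, y1), (1, y2)
    offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return p + offset

# The fractional bin index is then converted to a spatial depth using Equation (11).
```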

5. Conclusions

This work has identified a growing interest around the application of developments in OCT configurations of LCI to the industrial domain for in situ measurement. However, due to the wealth of knowledge accrued over the past 30 years of progress in the field, an in-depth structured collation of theory with an industrial application mindset was required.
As a result, this work presents, for the first time, an end-to-end holistic framework for the development of OCT-configured LCI systems, linking the journey of light reflected from the object surface to the definition of the measurand. This includes the breakdown of the contributions to the achieved signal, practical operating limits on measurement range due to sampling rate and sensitivity fall-off, lateral resolution variation across the region of measurement, motion effects on SNR and the introduction of spurious artefacts, and dispersion mismatch compensation.
In addition to identification of factors impacting the measurement signal, a simple practical approach to component selection resulting from critical operating qualifiers is also given and finally a comprehensive step by step dataflow approach to digital signal processing techniques for optimising the quality and robustness of achieved system measurements.
Some general limitations of OCT-configured spectral-domain LCI measurements which still require improvement include: achieving a high lateral resolution over a large work volume, difficulty in measuring surfaces with large gradients, limited penetration depth through scattering media, a relatively small operating range, and a signal strength that depends on scatterer reflectivity, is impacted by motion blur and suffers from sensitivity fall-off.
This work equips the reader with the necessary underlying theoretical and practical information to apply the developments that have been made in OCT/LCI to industrial applications for in-process metrology.

Author Contributions

Conceptualization, T.H., L.J., J.P., P.K.; methodology, T.H.; software, T.H.; validation, T.H.; formal analysis, T.H.; investigation, T.H.; resources, J.P., P.K.; data curation, T.H.; writing—original draft preparation, T.H.; writing—review and editing, T.H., L.J., J.P., P.K.; supervision, L.J., J.P., P.K.; funding acquisition, P.K., J.P. All authors have read and agreed to the published version of the manuscript.

Funding

The authors wish to acknowledge support of the UK Engineering Physical Science and Research Council (EPSRC) for funding this work (Grant Nos. EP/M020746/1, EP/L01498X/1).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schwenke, H.; Knapp, W.; Haitjema, H.; Weckenmann, A.; Schmitt, R.; Delbressine, F. Geometric error measurement and compensation of machines—An update. CIRP Ann. 2008, 57, 660–675. [Google Scholar] [CrossRef]
  2. Uhlmann, E.; Mullany, B.; Biermann, D.; Rajurkar, K.P.; Hausotte, T.; Brinksmeier, E. Process chains for high-precision components with micro-scale features. CIRP Ann. 2016, 65, 549–572. [Google Scholar] [CrossRef]
  3. Altintas, Y.; Aslan, D. Integration of virtual and online machining process control and monitoring. CIRP Ann. 2017, 66, 349–352. [Google Scholar] [CrossRef]
  4. Li, Z.; Wang, G.; He, G. Surface quality monitoring based on time-frequency features of acoustic emission signals in end milling Inconel-718. Int. J. Adv. Manuf. Technol. 2018, 96, 2725–2733. [Google Scholar] [CrossRef]
  5. Fang, F.Z.; Zhang, X.D.; Weckenmann, A.; Zhang, G.X.; Evans, C. Manufacturing and measurement of freeform optics. CIRP Ann. 2013, 62, 823–846. [Google Scholar] [CrossRef]
  6. Gao, W.; Haitjema, H.; Fang, F.Z.; Leach, R.K.; Cheung, C.F.; Savio, E.; Linares, J.M. On-machine and in-process surface metrology for precision manufacturing. CIRP Ann. 2019, 68, 843–866. [Google Scholar] [CrossRef] [Green Version]
  7. Huang, D.; Swanson, E.A.; Lin, P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef] [Green Version]
  8. Kirtane, T.S.; Wagh, M.S. Endoscopic optical coherence tomography (OCT): Advances in gastrointestinal imaging. Gastroenterol. Res. Pract. 2014, 2014, 1–7. [Google Scholar] [CrossRef]
  9. Schmitt, J.M. Optical Coherence Tomography (OCT): A Review. IEEE J. Sel. Top. Quantum Electron. 1999, 5, 1205–1215. [Google Scholar] [CrossRef] [Green Version]
  10. Targowski, P.; Iwanicka, M. Optical Coherence Tomography: Its role in the non-invasive structural examination and conservation of cultural heritage objects—A review. Appl. Phys. A Mater. Sci. Process. 2012, 106, 265–277. [Google Scholar] [CrossRef] [Green Version]
  11. Lin, H.; Zhang, Z.; Markl, D.; Zeitler, J.A.; Shen, Y. A Review of the Applications of OCT for Analysing Pharmaceutical Film Coatings. Appl. Sci. 2018, 8, 2700. [Google Scholar] [CrossRef] [Green Version]
  12. Hovell, T.; Matharu, R.S.; Petzing, J.N.; Justham, L.; Kinnell, P. Lensless fiber-deployed low-coherence interferometer for in situ measurements in nonideal environments. Opt. Eng. 2020, 59, 1–11. [Google Scholar] [CrossRef]
  13. Hovell, T.; Petzing, J.; Justham, L.; Kinnell, P. in situ measurement of electrochemical jet machining using low coherence interferometry. In Proceedings of the Euspen’s 20th International Conference & Exhibition, Geneva, Switzerland, 8–12 June 2020. [Google Scholar]
  14. Choma, M.A.; Sarunic, M.V.; Yang, C.H.; Izatt, J.A. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Opt. Express 2003, 11, 2183–2189. [Google Scholar] [CrossRef] [Green Version]
  15. Bosselmann, T.; Ulrich, R. High-Accuracy Position-Sensing with Fiber-Coupled White-Light Interferometers. In Proceedings of the 2nd International Conference on Optical Fiber Sensors, Stuttgart, Germany, 5–7 September 1984. [Google Scholar] [CrossRef]
  16. Takada, K.; Yokohama, I.; Chida, K.; Noda, J. New measurement system for fault location in optical waveguide devices based on an interferometric technique. Appl. Opt. 1987, 26, 1603–1606. [Google Scholar] [CrossRef]
  17. Youngquist, R.C.; Carr, S.; Davies, D.E.N. Optical coherence-domain reflectometry: A new optical evaluation technique. Opt. Lett. 1987, 12, 158–160. [Google Scholar] [CrossRef]
  18. Gilgen, H.H.; Novak, R.P.; Salathe, R.P.; Hodel, W.; Beaud, P. Submillimeter Optical Reflectometry. J. Light. Technol. 1989, 7, 1225–1233. [Google Scholar] [CrossRef]
  19. Liang, H.; Cucu, R.; Dobre, G.M.; Jackson, D.A.; Pedro, J. Application of OCT to Examination of Easel Paintings. In Proceedings of the Second European Workshop on Optical Fibre Sensors, Santander, Spain, 9–11 June 2004. [Google Scholar]
  20. Targowski, P.; Gora, M.; Wojtkowski, M. Optical Coherence Tomography for Artwork Diagnostics. Laser Chem. 2006, 2006. [Google Scholar] [CrossRef] [Green Version]
  21. Kononenko, V.V.; Konov, V.I.; Pimenov, S.M.; Volkov, P.V.; Goryunov, V.A.; Ivanov, V.V.; Novikov, M.A.; Markelov, V.A.; Tertysnik, A.D.; Ustavshikov, S.S. Control of laser machining of polycrystalline diamond plates by the method of low-coherence optical interferometry. Quantum Electron. 2005, 35, 622–626. [Google Scholar] [CrossRef]
  22. Wiesner, M.; Ihlemann, J.; Müller, H.H.; Lankenau, E.; Hüttmann, G. Optical coherence tomography for process control of laser machining. Rev. Sci. Instrum. 2010, 81, 033705. [Google Scholar] [CrossRef] [Green Version]
  23. Webster, P.J.L.; Wright, L.G.; Mortimer, K.D.; Leung, B.Y.; Yu, J.X.Z.; Fraser, J.M. Automatic real-time guidance of laser machining with inline coherent imaging. J. Laser Appl. 2011, 23, 022001. [Google Scholar] [CrossRef]
  24. Purtonen, T.; Kalliosaari, A.; Salminen, A. Monitoring and adaptive control of laser processes. Phys. Procedia 2014, 56, 1218–1231. [Google Scholar] [CrossRef] [Green Version]
  25. Kononenko, V.V.; Bushuev, E.V.; Zavedeev, E.V.; Volkov, P.V.; Luk’yanov, A.Y.; Konov, V.I. Low-coherence interferometry as a tool for monitoring laser micro- and nanoprocessing of diamond surfaces. Quantum Electron. 2017, 47, 1012–1016. [Google Scholar] [CrossRef]
  26. Wiesauer, K.; Pircher, M.; Goetzinger, E.; Bauer, S.; Engelke, R.; Ahrens, G.; Gruetzner, G.; Hitzenberger, C.K.; Stifter, D. En-face scanning optical coherence tomography with ultra-high resolution for material investigation. Opt. Express 2005, 13, 1015–1024. [Google Scholar] [CrossRef] [PubMed]
  27. Liu, P.; Groves, R.M.; Benedictus, R. Signal processing in optical coherence tomography for aerospace material characterization. Opt. Eng. 2013, 53, 033201. [Google Scholar] [CrossRef] [Green Version]
  28. Liu, P.; Groves, R.M.; Benedictus, R. 3D monitoring of delamination growth in a wind turbine blade composite using optical coherence tomography. NDT E Int. 2014, 64, 52–58. [Google Scholar] [CrossRef]
  29. Kirillin, M.Y.; Alarousu, E.; Fabritius, T.; Myllylä, R.; Priezzhev, A.V. Visualization of paper structure by optical coherence tomography: Monte Carlo simulations and experimental study. J. Eur. Opt. Soc. 2007, 2, 07031. [Google Scholar] [CrossRef]
  30. Prykäri, T.; Czajkowski, J.; Alarousu, E.; Myllylä, R. Optical coherence tomography as an accurate inspection and quality evaluation technique in paper industry. Opt. Rev. 2010, 17, 218–222. [Google Scholar] [CrossRef]
  31. You, J.; Kim, S. Optical inspection of complex patterns of microelectronics products. CIRP Ann. Manuf. Technol. 2008, 57, 505–508. [Google Scholar] [CrossRef]
  32. Czajkowski, J.; Prykäri, T.; Alarousu, E.; Palosaari, J.; Myllylä, R. Optical Coherence Tomography as a Method of Quality Inspection for Printed Electronics Products. Opt. Rev. 2010, 17, 257–262. [Google Scholar] [CrossRef]
  33. Majumdar, A.; Huang, H. Development of an in-fiber white-light interferometric distance sensor for absolute measurement of arbitrary small distances. Appl. Opt. 2008, 47, 2821–2828. [Google Scholar] [CrossRef]
  34. Donato, A.D.; Pietrangelo, T.; Anzellotti, A.; Monti, T.; Morini, A.; Farina, M. Infrared imaging in liquid through an extrinsic optical microcavity. Opt. Lett. 2013, 38, 5094–5097. [Google Scholar] [CrossRef] [PubMed]
  35. Tang, D.; Gao, F.; Jiang, X. On-line surface inspection using cylindrical lens-based spectral domain low-coherence interferometry. Appl. Opt. 2014, 53, 5510–5516. [Google Scholar] [CrossRef] [PubMed]
  36. Park, H.M.; Jung, H.W.; Joo, K.N. Dual low coherence scanning interferometry for rapid large step height and thickness measurements. Opt. Express 2016, 24, 28625–28632. [Google Scholar] [CrossRef] [PubMed]
  37. Park, H.M.; Joo, K.N. High-speed combined NIR low-coherence interferometry for wafer metrology. Appl. Opt. 2017, 56, 8592–8597. [Google Scholar] [CrossRef]
  38. Krauter, J.; Osten, W. Nondestructive surface profiling of hidden MEMS using an infrared low-coherence interferometric microscope. Surf. Topogr. Metrol. Prop. 2018, 6, 015005. [Google Scholar] [CrossRef] [Green Version]
  39. Cheymol, G.; Brichard, B.; Villard, J.F. Fiber optics for metrology in nuclear research reactors-Applications to dimensional measurements. IEEE Trans. Nucl. Sci. 2011, 58, 1895–1902. [Google Scholar] [CrossRef]
  40. Amaral, M.M.; Raele, M.P.; Caly, J.P.; Samad, R.E.; Nilson, D.; Freitas, A.Z. Roughness measurement methodology according to DIN 4768 using optical coherence tomography (OCT). In Proceedings of the Modeling Aspects in Optical Metrology II, Munich, Germany, 15–16 June 2009. [Google Scholar] [CrossRef]
  41. Quinsat, Y.; Tournier, C. In situ non-contact measurements of surface roughness. Precis. Eng. 2012, 36, 97–103. [Google Scholar] [CrossRef] [Green Version]
  42. Koller, D.M.; Hannesschläger, G.; Leitner, M.; Khinast, J.G. Non-destructive analysis of tablet coatings with optical coherence tomography. Eur. J. Pharm. Sci. 2011, 44, 142–148. [Google Scholar] [CrossRef]
  43. Meemon, P.; Yao, J.; Lee, K.S.; Thompson, K.P.; Ponting, M.; Baer, E.; Rolland, J.P. Optical Coherence Tomography Enabling Non Destructive Metrology of Layered Polymeric GRIN Material. Sci. Rep. 2013, 3, 1709. [Google Scholar] [CrossRef] [Green Version]
  44. Karim, F.; Bora, T.; Chaudhari, M.B.; Habib, K.; Mohammed, W.S.; Dutta, J. Optical fiber-based sensor for in situ monitoring of cadmium sulfide thin-film growth. Opt. Lett. 2013, 38, 5385–5388. [Google Scholar] [CrossRef]
  45. Brezinski, M.E. Optical Coherence Tomography: Principles and Applications; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  46. Drexler, W.; Fujimoto, J.G. Optical Coherence Tomography Technology and Applications; Springer: Berlin/Heidelberg, Germany, 2014; Volume 1. [Google Scholar]
  47. Leitgeb, R.; Hitzenberger, C.K.; Fercher, A.F. Performance of fourier domain vs. time domain optical coherence tomography. Opt. Express 2003, 11, 889–894. [Google Scholar] [CrossRef] [PubMed]
  48. Yun, S.H.; Tearney, G.J.; Bouma, B.E.; Park, B.H.; de Boer, J.F. High-speed spectral-domain optical coherence tomography at 1.3 μm wavelength. Opt. Express 2003, 11, 3598–3604. [Google Scholar] [CrossRef] [PubMed]
  49. Tan, K.M.; Mazilu, M.; Chow, T.H.; Lee, W.M.; Taguchi, K.; Ng, B.K.; Sibbett, W.; Herrington, C.S.; Brown, C.T.A.; Dholakia, K. In-fiber common-path optical coherence tomography using a conical-tip fiber. Opt. Express 2009, 17, 2375–2384. [Google Scholar] [CrossRef] [PubMed]
  50. Xu, Y.; Chng, X.K.B.; Adie, S.G.; Boppart, S.A.; Carney, P.S. Multifocal interferometric synthetic aperture microscopy. Opt. Express 2014, 22, 16606–16618. [Google Scholar] [CrossRef] [Green Version]
  51. Ralston, T.S.; Marks, D.L.; Carney, P.S.; Boppart, S.A. Real-time interferometric synthetic aperture microscopy. Nat. Phys. 2007, 3, 129–134. [Google Scholar] [CrossRef] [Green Version]
  52. Mason, J.H.; Davies, M.E.; Bagnaninchi, P.O. Blur resolved OCT: Full-range interferometric synthetic aperture microscopy through dispersion encoding. Opt. Express 2020, 28, 3879–3894. [Google Scholar] [CrossRef]
  53. Wiley, C.A. Pulsed Doppler Radar Methods and Apparatus. U.S. Patent 3196436, 13 August 1954. [Google Scholar]
  54. Yun, S.H.; Tearney, G.J.; de Boer, J.F.; Bouma, B.E. Motion artifacts in optical coherence tomography with frequency-domain ranging. Opt. Express 2004, 12, 2977–2998. [Google Scholar] [CrossRef] [Green Version]
  55. Markl, D.; Hannesschläger, G.; Buchsbaum, A.; Sacher, S.; Khinast, J.G.; Leitner, M. In-line quality control of moving objects by means of spectral-domain OCT. Opt. Lasers Eng. 2014, 59, 1–10. [Google Scholar] [CrossRef]
  56. You, J.W.; Chen, T.C.; Mujat, M.; Park, B.H.; de Boer, J.F. Pulsed illumination spectral-domain optical coherence tomography for human retinal imaging. Opt. Express 2006, 14, 6739–6748. [Google Scholar] [CrossRef] [Green Version]
  57. Moneron, G.; Boccara, A.C.; Dubois, A. Stroboscopic ultrahigh-resolution full-field optical coherence tomography. Opt. Lett. 2005, 30, 1351–1353. [Google Scholar] [CrossRef]
  58. Drexler, W.; Morgner, U.; Kärtner, F.X.; Pitris, C.; Boppart, S.A.; Li, X.D.; Ippen, E.P.; Fujimoto, J.G. In vivo ultrahigh-resolution optical coherence tomography. Opt. Lett. 1999, 24, 1221–1223. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Wojtkowski, M.; Srinivasan, V.J.; Ko, T.H.; Fujimoto, J.G.; Kowalczyk, A.; Duker, J.S. Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation. Opt. Express 2004, 12, 2404–2422. [Google Scholar] [CrossRef]
  60. Hitzenberger, C.K.; Baumgartner, A.; Drexler, W.; Fercher, A.F. Dispersion effects in partial coherence interferometry: Implications for intraocular ranging. J. Biomed. Opt. 1999, 4, 144–152. [Google Scholar] [CrossRef] [PubMed]
  61. Hillman, T.R.; Sampson, D.D. The effect of water dispersion and absorption on axial resolution in ultrahigh-resolution optical coherence tomography. Opt. Express 2005, 13, 1860–1874. [Google Scholar] [CrossRef]
  62. Marks, D.L.; Oldenburg, A.L.; Reynolds, J.J.; Boppart, S.A. Autofocus algorithm for dispersion correction in optical coherence tomography. Appl. Opt. 2003, 42, 3038–3046. [Google Scholar] [CrossRef] [PubMed]
  63. Cense, B.; Nassif, N.; Chen, T.C.; Pierce, M.C.; Yun, S.H.; Park, B.H.; Bouma, B.E.; Tearney, G.J.; de Boer, J.F. Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography. Opt. Express 2004, 12, 2435–2447. [Google Scholar] [CrossRef] [PubMed]
  64. Song, S.; Xu, J.; Wang, R.K. Long-range and wide field of view optical coherence tomography for in vivo 3D imaging of large volume object based on akinetic programmable swept source. Biomed. Opt. Express 2016, 7, 4734–4748. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Dorrer, C.; Belabas, N.; Likforman, J.P.; Joffre, M. Spectral resolution and sampling issues in Fourier-transform spectral interferometry. J. Opt. Soc. Am. B 2000, 17, 1795–1802. [Google Scholar] [CrossRef]
  66. Hu, Z.; Rollins, A.M. Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer. Opt. Lett. 2007, 32, 3525–3527. [Google Scholar] [CrossRef]
  67. Payne, A.; Podoleanu, A.G. Direct electronic linearization for camera-based spectral domain optical coherence tomography. Opt. Lett. 2012, 37, 2424–2426. [Google Scholar] [CrossRef]
  68. Liu, X.; Balicki, M.; Taylor, R.H.; Kang, J.U. Towards automatic calibration of Fourier-Domain OCT for robot-assisted vitreoretinal surgery. Opt. Express 2010, 18, 24331–24343. [Google Scholar] [CrossRef] [PubMed]
  69. Nassif, N.; Cense, B.; Park, B.H.; Pierce, M.C.; Yun, S.H.; Bouma, B.E.; Tearney, G.J.; Chen, T.C.; de Boer, J.F. In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve. Opt. Express 2004, 12, 367–376. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Schematic of a Michelson interferometer for an LCI sensor; (b) scanning by a galvanometer mirror; (c) scanning via a moving probe or a moving sample; (d) signal term for a single sample reflector and the corresponding equations relating to signal features. For multiple reflectors, the cross-correlation component is a superposition of sinusoids.
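For reference, the signal term sketched in Figure 1d for a single sample reflector takes the standard spectral-domain form below; the notation (detector responsivity ρ, source spectrum S(k), reference and sample reflectivities R_R and R_S, path-length mismatch Δz) follows common OCT texts [45,46] and is an illustrative restatement rather than the authors' exact equations.

\[ I_D(k) = \frac{\rho}{4}\, S(k)\left[ R_R + R_S + 2\sqrt{R_R R_S}\,\cos\!\left(2k\,\Delta z\right) \right] \]

For N reflectors, the cross-correlation term generalises to \( \sum_{n=1}^{N} 2\sqrt{R_R R_{S_n}}\,\cos(2k\,\Delta z_n) \), i.e., the superposition of sinusoids noted in the caption.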
Figure 2. Flowchart of the signal's progression through a Michelson interferometer illuminated by a broadband light source. The grey boxes indicate the effects resulting from each stage.
Figure 3. Depth-dependent loss in signal sensitivity from the sample reflector. (a) Overlaid signals for sample measurements up to 2.17 mm; (b) an example signal at a depth of 2 mm; (c) the dataset in (a) after normalisation; (d) the signal in (b) after applying the normalisation relationship.
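The normalisation shown in Figure 3c,d compensates for the depth-dependent sensitivity roll-off. A minimal sketch of one way such a correction could be applied is given below, assuming the roll-off envelope has already been characterised from calibration peak amplitudes at known depths; the function names and the interpolation-based envelope are illustrative assumptions, not the authors' implementation.

import numpy as np

def sensitivity_envelope(calib_depths_mm, calib_peak_amps, query_depths_mm):
    """Interpolate a depth-dependent sensitivity envelope from calibration peak
    amplitudes recorded at known depths (assumed sorted in ascending order)."""
    return np.interp(query_depths_mm, calib_depths_mm, calib_peak_amps)

def normalise_ascan(depth_axis_mm, ascan_magnitude, calib_depths_mm, calib_peak_amps):
    """Divide an A-scan magnitude profile by the calibration envelope so that peak
    height becomes approximately independent of measurement depth."""
    env = sensitivity_envelope(calib_depths_mm, calib_peak_amps, depth_axis_mm)
    return ascan_magnitude / np.maximum(env, 1e-12)  # guard against division by zero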
Figure 4. LCI sensor design framework, optimising hardware choices for key operating criteria.
Figure 5. Experimental setup of a common-path low-coherence interferometer.
Figure 6. Effect of offset distance on artefact measurement.
Figure 7. Framework for LCI signal data processing and distance offset retrieval, outlining required procedures (solid bounding boxes) and processing steps that can improve signal quality (dashed bounding boxes).
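A minimal, self-contained sketch of the kind of processing chain Figure 7 outlines is given below (DC removal, resampling to a linear wavenumber axis, optional windowing, zero-padded FFT and peak detection). It assumes wavelengths are supplied in ascending order and uses simple mean subtraction and linear interpolation; it is an illustrative ordering of the steps in Python, not the authors' code.

import numpy as np

def ascan_from_spectrum(spectrum, wavelengths_m, zero_pad_factor=4, window=np.hanning):
    """Sketch of an SD-LCI A-scan pipeline: DC removal, resampling to a linear
    wavenumber axis, optional windowing, zero-padded FFT and peak detection.
    Assumes wavelengths are ascending (so wavenumber is descending)."""
    k = 2.0 * np.pi / np.asarray(wavelengths_m)            # wavenumber axis, rad/m
    s = np.asarray(spectrum, float) - np.mean(spectrum)    # remove the (mean) DC component
    k_lin = np.linspace(k.min(), k.max(), k.size)          # uniform wavenumber grid
    s_lin = np.interp(k_lin, k[::-1], s[::-1])             # resample to linear-in-k
    if window is not None:
        s_lin = s_lin * window(s_lin.size)                 # suppress FFT side lobes
    n = zero_pad_factor * s_lin.size
    mag = np.abs(np.fft.rfft(s_lin, n=n))                  # zero-padded FFT magnitude
    dk = k_lin[1] - k_lin[0]
    dz_axis = np.pi * np.fft.rfftfreq(n, d=dk)             # path mismatch axis, m (fringe ~ cos(2k*dz))
    peak = int(np.argmax(mag[1:]) + 1)                     # skip the residual DC bin
    return dz_axis[peak], dz_axis, mag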
Figure 8. (a) A comparison between the FFT of a resampled spectrum and the FFT of the original spectrum; (b) a sample signal spectrum with the DC component removed, where the red dots show the zero-crossing (ZC) points on the curve; (c) the red line shows the nonlinearly spaced ZC locations for a raw spectrum, and the black line shows the ZC spacing once a resampling procedure has been applied.
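Figure 8b,c uses the spacing of the zero-crossing (ZC) points as a check on wavenumber linearity: evenly spaced crossings indicate a correctly linearised spectrum. A sketch of how ZC locations and their spacing might be extracted from a DC-removed fringe is shown below; the linear-interpolation refinement of each crossing is an assumption, not necessarily the authors' method.

import numpy as np

def zero_crossing_indices(fringe):
    """Fractional sample indices where a DC-removed fringe crosses zero, refined by
    linear interpolation between the two samples that bracket each crossing."""
    s = np.asarray(fringe, dtype=float)
    idx = np.where(np.signbit(s[:-1]) != np.signbit(s[1:]))[0]  # bracketing samples
    frac = s[idx] / (s[idx] - s[idx + 1])                       # interpolated offset
    return idx + frac

def crossing_spacing(fringe):
    """Spacing between consecutive zero crossings; a flat spacing profile indicates
    that the spectrum is (or has been resampled to be) linear in wavenumber."""
    return np.diff(zero_crossing_indices(fringe))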
Figure 9. (a) FFT of the original dataset; (b) FFT of the zero-padded dataset; (c) improving resolution by fitting a 2nd-order polynomial to the FFT peak; (d) comparison of maximum peak detection versus polynomial peak fitting against OPD.
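Figure 9c,d obtains sub-bin resolution by fitting a 2nd-order polynomial to the FFT peak. A common three-point parabolic interpolation that implements this idea is sketched below; the exact fit window used by the authors may be wider than three samples.

import numpy as np

def parabolic_peak(opd_axis, magnitude):
    """Refine the FFT peak location by fitting a parabola through the peak bin and
    its two neighbours, returning an OPD estimate with sub-bin resolution."""
    p = int(np.argmax(magnitude[1:]) + 1)                  # skip the residual DC bin
    if p >= magnitude.size - 1:
        return opd_axis[p]                                 # cannot interpolate at the edge
    y0, y1, y2 = magnitude[p - 1], magnitude[p], magnitude[p + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)         # vertex offset in bins (|delta| <= 0.5)
    return opd_axis[p] + delta * (opd_axis[1] - opd_axis[0])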
Table 1. Publication breakdown by application domain, ordered by the application's first appearance in the literature.
First Seen in Literature | Application Domain | Publications
1984 | Absolute ranging for detecting faults in fibre optic cables | [15,16,17,18]
2004 | Art and archaeology inspection | [10,19,20]
2005 | Measurement during laser machining | [21,22,23,24,25]
2005 | Material investigation | [26,27,28]
2007 | Investigation of paper structure | [29,30]
2008 | Electronic board and micro-electronics | [31,32]
2008 | Profiling measurements | [33,34,35,36,37,38]
2009 | Nuclear research reactor | [39]
2009 | Surface roughness measurements | [40,41]
2011 | Measurement of tablet coatings offline and online | [11,42]
2013 | Measurement of optical components such as GRIN material | [43]
2013 | Tracking film growth | [44]
2020 | Measurement in liquid jets | [13]
Table 2. Comparison of windowing functions applied to the LCI signal, showing the change in PSF and SNR relative to unwindowed data.
Window Function | Hann | Hamming | Taylor | Blackman
PSF FWHM (μm) | 0.81 | 0.7 | 0.45 | 1.23
SNR (dB) | 39.09 | 33.92 | 22.13 | 38.54
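The style of comparison in Table 2 can be reproduced approximately with the sketch below, which applies SciPy's window functions to a synthetic linear-in-wavenumber fringe and reports the FFT-peak FWHM (in original FFT bins rather than micrometres) and a simple peak-to-median-background SNR. The Taylor window uses SciPy's default parameters, and the numbers will not match Table 2, which depends on the authors' source and spectrometer.

import numpy as np
from scipy.signal import windows  # Hann, Hamming, Taylor and Blackman windows

def compare_windows(fringe, pad_factor=8):
    """For each window, return (FWHM of the FFT peak in original bins, SNR in dB)
    computed from a DC-removed, linear-in-wavenumber fringe."""
    n = len(fringe)
    candidates = {"Hann": windows.hann(n), "Hamming": windows.hamming(n),
                  "Taylor": windows.taylor(n), "Blackman": windows.blackman(n)}
    results = {}
    for name, w in candidates.items():
        mag = np.abs(np.fft.rfft(fringe * w, n=pad_factor * n))
        p = int(np.argmax(mag[1:]) + 1)                    # main interference peak
        above = np.where(mag >= mag[p] / 2.0)[0]           # samples above half maximum
        fwhm_bins = (above.max() - above.min()) / pad_factor
        snr_db = 20.0 * np.log10(mag[p] / np.median(mag))  # peak vs. median background
        results[name] = (fwhm_bins, snr_db)
    return results

# Example on a synthetic single-reflector fringe:
k = np.linspace(0.0, 1.0, 2048)
print(compare_windows(np.cos(2.0 * np.pi * 200.0 * k)))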
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
