Article

3D Vascular Pattern Extraction from Grayscale Volumetric Ultrasound Images for Biometric Recognition Purposes

Antonio Iula * and Alessia Vizzuso
School of Engineering, University of Basilicata, 85100 Potenza, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8285; https://doi.org/10.3390/app12168285
Submission received: 28 June 2022 / Revised: 16 August 2022 / Accepted: 17 August 2022 / Published: 19 August 2022
(This article belongs to the Section Applied Biosciences and Bioengineering)

Abstract

Recognition systems based on palm veins are gaining increasing attention as they are highly distinctive and very hard to counterfeit. The most popular systems are based on infrared radiation; they have the merit of being contactless but can provide only 2D patterns. Conversely, 3D patterns can be achieved with Doppler or photoacoustic methods, but these approaches require an excessively long acquisition time. In this work, a method for extracting 3D vascular patterns from conventional grayscale volumetric images of the human hand, which can be collected in a short time, is proposed for the first time. It is based on the detection of low-brightness areas in B-mode images. Centroids of these areas in successive B-mode images are then linked through a minimum distance criterion. Preliminary verification and identification results, carried out on a database previously established for extracting 3D palmprint features, demonstrated good recognition performances: EER = 2%, ROC AUC = 99.92%, and an identification rate of 100%. As a further merit, 3D vein pattern features can be fused with 3D palmprint features to implement a costless multimodal recognition system.

1. Introduction

With the increasing importance of information security in every aspect of our lives, biometrics is becoming one of the most popular and promising authentication techniques because traditional methods, based on smart cards and passwords, carry the risks of being stolen, lost, or forgotten. Biometrics is the science of identifying or verifying a person’s identity based on physiological or behavioral characteristics. There is a wide variety of biometric features that can be used in different applications, including face shape and geometry, fingerprint, iris, palmprint, gait, and voice. Palm veins are gaining more and more attention because of their uniqueness and distinctiveness (even identical twins have different palm vein patterns). Additionally, veins lie underneath the skin and are thus unchangeable and difficult to counterfeit [1,2,3,4].
The most common technologies used to collect palm vein images are near-infrared (NIR) and far-infrared (FIR) radiation. NIR is considered better than FIR because it is more tolerant to environmental changes (temperature or humidity) [5,6,7]. On the other hand, NIR ‘faces the problem of pattern corruption because of visible skin features being mistaken for veins’ [8]. IR technology is easily accepted by users because it is contactless; on the other hand, it allows one to collect only 2D patterns.
Ultrasound is a well-established technology in medical diagnostic imaging, as it allows one to achieve accurate 3D images of human organs in a non-invasive way. This capability can be profitably exploited for biometric recognition as well. In addition, ultrasound systems have other peculiar merits: the ability to detect the liveness of the sample, which makes them practically unspoofable; insensitivity to ambient conditions such as temperature, illumination, or humidity variations; and insensitivity to several kinds of skin contamination (grease, ink).
Several ultrasonic devices have been studied and developed for biometric purposes [9]. The majority of attention was devoted to fingerprints [10,11,12], but other characteristics like palmprints [13] and hand geometry [14,15,16] have also been investigated. Additionally, 3D vein patterns were extracted using power Doppler techniques [17,18,19] and photoacoustics [20,21,22,23]. However, even if quite accurate patterns were achieved, the acquisition time proved to be too long for both technologies.
Palm veins are exploited in multimodal systems together with other characteristics, especially palmprints [24,25,26,27]. This modality, also called biometric fusion, improves the performance of the biometric system in terms of recognition accuracy, universality, and security.
In this work, a recognition system based on a 3D vascular pattern extracted from conventional grayscale volumetric ultrasound images is proposed for the first time and experimentally evaluated. This approach has a twofold merit: the acquisition time is less than 5 s, and the collected volumetric image can be exploited to extract other independent biometric characteristics, such as palmprints [28,29] and/or inner hand geometry [14], upgrading the recognition system to a multimodal one.
Hereinafter, Section 2 presents the experimental setup used for collecting volumetric ultrasound images of a portion of the human hand, Section 3 describes the proposed method for extracting the 3D vascular pattern, Section 4 reports the results of verification and identification experiments, and Section 5 contains the conclusions and a discussion on the perspectives of future work.

2. 3D Image Acquisition

The 3D ultrasonic images of the human hand used for extracting vascular patterns were collected in a previous work to implement a biometric system based on 3D palmprints [29]. The setup was composed of a commercial ultrasound probe with a frequency of 12 MHz (LA435 by ESAOTE) connected to a CNC pantograph and driven by the ultrasound research scanner ULA-OP [30]. In the acquisition procedure, the user dips his hand in a basin of water with the palm facing upwards while the probe, also immersed in water, is mechanically moved from the wrist towards the end of the palm (fingers excluded) and the system continuously acquires B-mode images. Subsequently, in a post-processing phase, these images are grouped into a volumetric image of 25 × 38 × 11 mm³. This 3D image is described by a 3D matrix V(X,Y,Z) of 542 × 814 × 238 voxels. The brightness of each voxel is represented in an 8-bit grayscale, where black indicates the complete absence of wave reflection (value 0), while white corresponds to a full-wave reflection (value 255). Figure 1 shows a 3D rendering of a collected sample. In addition to the palmprint, other features can be extracted from the same volume for biometric recognition purposes, including vascular patterns, which can be achieved by detecting “dark” regions in B-mode images similar to the one highlighted with a red circle in Figure 1.
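To make the data layout concrete, the sketch below shows how such a volume can be represented and sliced in Python with NumPy. The variable names, the axis order (X, Y, Z), and the depth-to-index conversion are illustrative assumptions, not the authors’ code.

```python
import numpy as np

# Placeholder volume with the dimensions reported above:
# V(X, Y, Z) of 542 x 814 x 238 voxels covering 25 x 38 x 11 mm^3,
# 8-bit grayscale (0 = no reflection, 255 = full reflection).
V = np.zeros((542, 814, 238), dtype=np.uint8)

# A B-mode image collected along the y-direction is an XZ slice:
b_mode = V[:, 100, :]        # 542 x 238 pixels, matching N in Section 3

# A C-mode image is an XY slice at a fixed depth; e.g., a depth of
# 6.28 mm maps to voxel index round(6.28 / 11 * 238) = 136 along z.
c_mode = V[:, :, 136]
```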

3. Feature Extraction

The procedure for deriving a vascular pattern is based on the identification of vein centroids in each of the B-mode images collected along the y-direction. These centroids are then linked on the basis of a minimum distance criterion. Figure 2 shows a collected B-mode image. As the user’s hand and the probe are both immersed in water, the image shows a very dark layer corresponding to the water.
As is known, ultrasound images are strongly affected by speckle noise, which is generated when an ultrasonic wave hits acoustic interfaces smaller than its wavelength: soft tissue cells act as independent sources of ultrasound waves, scattering the received energy in all directions. Depending on the positions of the other scatterers, the diffused ultrasonic waves may undergo constructive or destructive interference (based on the wave phase), resulting in an increase or decrease in the ultrasonic energy returning to the probe [31,32]. Several techniques have been proposed in the literature to reduce speckle noise without destroying important image information [33,34]. In this work, a speckle-reducing anisotropic diffusion (SRAD) filter, which is effective and was successfully used in a previous work with similar images [35], has been adopted, based on the following iterative formula [36]:
$$I_{i,j}^{\,n+1} = I_{i,j}^{\,n} + \frac{\Delta t}{4}\, q_{i,j}^{\,n}$$
where $I_{i,j}^{\,n}$ indicates the pixel at position (i, j) at step n, $\Delta t$ is the time step size, and $q_{i,j}^{\,n}$ identifies the edge structure of the pixels adjacent to the one considered.
SRAD is sensitive to the filter window shape and size: according to Ref. [31], a square window is typically applied, and its size depends on the scale of interest. The choice of the parameters n and $\Delta t$, as well as the window dimension, is crucial to optimize image quality. In this work, the optimum set of parameters for the SRAD filter was determined in a heuristic way, i.e., by maximizing recognition results; the best results were achieved with a 50 × 50 window, n = 40 iterations, and a time step $\Delta t$ = 0.1. Figure 3a shows the achieved results. For comparison, the image obtained by setting a 60 × 60 window, $\Delta t$ = 0.5, and n = 60 is reported in Figure 3b. As can be seen, in this latter case, over-smoothing occurs, and edges are blurred, damaging image quality.
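As an illustration of how such a filter can be realized, the following is a minimal NumPy sketch of the SRAD update of Yu and Acton [31]. It is not the authors’ implementation; as simplifying assumptions, borders are handled with wrap-around shifts, and the speckle scale q0 is estimated from global image statistics at each step rather than from a user-selected homogeneous window.

```python
import numpy as np

def srad(image, n_iter=40, dt=0.1, eps=1e-8):
    """Sketch of speckle-reducing anisotropic diffusion (Yu-Acton)."""
    img = image.astype(np.float64) + eps
    for _ in range(n_iter):
        # Differences towards the four neighbours (wrap-around borders).
        dN = np.roll(img, 1, axis=0) - img
        dS = np.roll(img, -1, axis=0) - img
        dW = np.roll(img, 1, axis=1) - img
        dE = np.roll(img, -1, axis=1) - img

        g2 = (dN**2 + dS**2 + dW**2 + dE**2) / img**2   # |grad I|^2 / I^2
        lap = (dN + dS + dW + dE) / img                 # normalized Laplacian
        # Instantaneous coefficient of variation (the edge detector
        # playing the role of q in Equation (1)).
        q2 = np.maximum((0.5 * g2 - lap**2 / 16.0) /
                        ((1.0 + 0.25 * lap)**2 + eps), 0.0)
        q0_2 = np.var(img) / (np.mean(img)**2 + eps)    # speckle scale estimate
        # Diffusion coefficient: ~1 in homogeneous regions, ~0 at edges.
        c = 1.0 / (1.0 + (q2 - q0_2) / (q0_2 * (1.0 + q0_2) + eps))
        c = np.clip(c, 0.0, 1.0)
        # Divergence of c * grad(I) and explicit update, as in Equation (1).
        div = (np.roll(c, -1, axis=0) * dS + c * dN +
               np.roll(c, -1, axis=1) * dE + c * dW)
        img += (dt / 4.0) * div
    return img

filtered = srad(b_mode, n_iter=40, dt=0.1)   # parameters used in this work
```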
The next operation is binarization. In previous works [37,38], a global threshold was set based on heuristic considerations. However, this approach is suitable only if the images under analysis have roughly the same characteristics in terms of illumination and contrast. For better accuracy, an adaptive threshold is preferable: it is chosen according to the grey tones of each image, based on some local statistics. Several binarization methods can be found in the literature [39]. In this work, two well-established adaptive threshold binarization methods have been tested: the mean value and Ridler’s threshold [40,41].
Mean binarization sets, for each B-mode image, the threshold to the arithmetic mean $\mu$ of all its pixels:
$$\mu = \frac{1}{N} \sum_{i=1}^{N} A_i$$
where $A_i$ is the i-th pixel value and N = 542 × 238 is the number of pixels in the image. An example of the pixel value distribution in a B-mode image is shown in Figure 4.
Ridler’s threshold is calculated by the following iterative algorithm [42] (a minimal implementation sketch is given after the list):
  • An initial threshold $T_0$ is established according to Equation (2);
  • All pixels below $T_0$ are placed in a set, called A for simplicity; the others are placed in a set B;
  • For each set, thresholds $T_A$ and $T_B$ are calculated by averaging the pixel values it contains;
  • The new reference threshold $T_1$ is given by the average of $T_A$ and $T_B$.
The algorithm ends if $T_1$ is equal to $T_0$; otherwise, steps 2–4 are repeated until equal thresholds are obtained in two successive iterations.
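Both thresholds can be computed in a few lines; the sketch below follows the steps above, reusing the b_mode slice from the earlier sketch. Function names and the small convergence tolerance are illustrative assumptions.

```python
import numpy as np

def mean_threshold(img):
    # Equation (2): arithmetic mean of all pixel values in the image.
    return img.mean()

def ridler_threshold(img):
    # Ridler-Calvard iterative selection, following steps 1-4 above.
    t = mean_threshold(img)              # initial threshold T0
    while True:
        t_a = img[img < t].mean()        # T_A: mean of set A (below threshold)
        t_b = img[img >= t].mean()       # T_B: mean of set B (the rest)
        t_new = 0.5 * (t_a + t_b)        # new reference threshold T1
        if abs(t_new - t) < 1e-3:        # stop when successive thresholds agree
            return t_new
        t = t_new

# Veins appear dark, so pixels below the threshold are mapped to 1
# (this also performs the complementation mentioned below).
dark_mask = (b_mode < ridler_threshold(b_mode)).astype(np.uint8)
```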
Binarization results obtained with mean and Ridler’s threshold are shown in Figure 5a,b, respectively.
Subsequently, the complementary image is derived so that pixels corresponding to the vein take the value 1, as required by the next operation; some morphological operations [43] are then performed. The connected components that are candidates to be recognized as the vein are first selected by removing areas that are too wide (>1600 pixels) or too small (<129 pixels). These values were set in a heuristic way, by analyzing all the images in the database and accounting for vein dimension, and produced significantly different results from those used in a previous work [38]. If no connected component is found, the search range is extended by lowering the minimum area to 50. Note that upper and lower limits have to be fixed to filter out connected regions that are not related to a vein. As can be seen in Figure 5, there are connected regions too wide (see, for example, the top and the bottom of the image) or too small (almost everywhere) to be considered as a vein. A closing operation is then performed to fill small holes in the connected regions; a sketch of this selection step is given below. Once the connected regions are identified, their centroids are calculated.
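A possible realization of this step with scikit-image is sketched here. The function name and structure are illustrative assumptions, the area bounds are the heuristic values quoted above, and the closing is applied before labelling for simplicity.

```python
from skimage import measure, morphology

def vein_candidate_centroids(dark_mask, min_area=129, max_area=1600,
                             relaxed_min=50):
    """dark_mask: complemented binary image where vein (dark) pixels are 1."""
    # Fill small holes in the connected regions (closing operation).
    closed = morphology.binary_closing(dark_mask.astype(bool))
    labels = measure.label(closed)
    regions = measure.regionprops(labels)
    # Keep only components whose area is plausible for a vein section.
    kept = [r for r in regions if min_area < r.area < max_area]
    if not kept:
        # Extend the search range by lowering the minimum area to 50.
        kept = [r for r in regions if relaxed_min < r.area < max_area]
    return [r.centroid for r in kept]    # (z, x) centroids in pixel units
```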
To the authors’ best knowledge, this is the first work where hand vein patterns are extracted from greyscale ultrasound images for biometric recognition purposes. The analysis is therefore focused on a single pattern to verify extraction accuracy by comparisons with C-mode images extracted at various depths and to gain confidence in the recognition capability by performing preliminary experiments.
In Figure 6, centroids are indicated with red dots. The image from which the pattern begins is then suitably cropped to limit computation time, and the mean point of the new image, indicated in the figure with a blue dot, is calculated. The starting point of the vascular pattern is the centroid with the smallest Euclidean distance from this midpoint, highlighted with a green circle in Figure 6.
Following Dijkstra’s algorithm [37,44], the 3D vascular pattern is achieved by linking the centroid selected in image n to the centroid of image n+1 that has the smallest Euclidean distance from it. However, it may happen that no connected regions, and hence no centroids, are found in image n+1 sufficiently close to the centroid in image n. To handle this problem, the following strategy has been devised and tested: only centroids that are no more than 12 pixels away along the z-axis and 15 pixels along the x-axis from the reference centroid of the previous image are considered valid.
If no valid centroid is found in image n+1, the XZ coordinates of the one detected in the previous image (n) are assigned to it. If this condition occurs for j consecutive images, the coordinates of the missing centroids are calculated by interpolation from those of the centroids in images n and n+j, as in the sketch below. Based on a heuristic analysis of the images in the database, a further crop is performed, and only 461 B-mode images, from the 200th to 640th, are analyzed; these cover the central part of the palm, where veins are clearly detectable.
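The gated linking, hold, and interpolation steps just described might look as follows in Python; the names and the candidate-list data structure are assumptions for illustration.

```python
import numpy as np

def link_centroids(start, slices, max_dz=12, max_dx=15):
    """start: (z, x) centroid in the first analyzed image.
    slices: for each following B-mode image, the list of candidate
    (z, x) centroids found in it."""
    path = [np.asarray(start, dtype=float)]
    missing = []                           # images with no valid centroid
    for y, cands in enumerate(slices, start=1):
        ref = path[-1]
        # Gate: at most 12 pixels away along z and 15 along x.
        valid = [c for c in cands
                 if abs(c[0] - ref[0]) <= max_dz and abs(c[1] - ref[1]) <= max_dx]
        if valid:
            d = [np.hypot(c[0] - ref[0], c[1] - ref[1]) for c in valid]
            path.append(np.asarray(valid[int(np.argmin(d))], dtype=float))
        else:
            path.append(ref.copy())        # hold the previous XZ coordinates
            missing.append(y)
    path = np.array(path)
    if missing:
        # Replace held positions by linear interpolation between the
        # surrounding detected centroids (images n and n + j in the text).
        good = np.setdiff1d(np.arange(len(path)), missing)
        for axis in (0, 1):
            path[missing, axis] = np.interp(missing, good, path[good, axis])
    return path
```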
An example of an obtained pattern is shown in Figure 7. In order to provide a first validation of the accuracy of the method, a 2D pattern is obtained by projecting the 3D pattern onto the XY plane (Figure 8a). This 2D pattern can be compared with a C-mode image, i.e., an XY plane extracted at a depth of 6.28 mm from the acquired volumetric image, shown in Figure 8b. As can be seen, the two patterns are very similar. However, this kind of comparison can be done only in particular cases, i.e., when the 3D pattern has a very low slope in the z-direction, as in the analyzed example, and the whole pattern can be found in a single C-mode image at a certain depth.
In a more general case (see Figure 9a), when the slope along z of the vascular pattern cannot be neglected, several C-mode images extracted at adjacent depths have to be analyzed to verify the similarity between the two patterns. In this case, the projection of the pattern on the XY plane (see Figure 9b) is compared with C-mode images extracted at several depths as shown in Figure 10; in this case, a good agreement is also observed.

4. Performance Evaluation

Recognition performance has been evaluated by executing verification and identification experiments on a preliminary database composed of 64 acquisitions from 22 volunteers, a relatively small subset of a larger database established for palmprint recognition in [29].
The subset contains all 3D images where the selected vein pattern can be found. All other 3D images in the database used in [29], which was established to extract palmprint features located in the outermost layers of the palm, presented regions too thin to contain a complete vein pattern. An example of an acquisition unsuitable for the extraction of the palm vein selected in this work is shown in Figure 11.
One of the objectives of this work is to verify if 3D vascular pattern templates provide more accurate recognition results than 2D ones. To this end, matching scores to evaluate the similarity between two templates are defined for both 2D and 3D cases by following a pixel-to-area approach [45]:
$$Score(A,B)_{2D} = \frac{2}{M_A + M_B} \sum_{i=1}^{n} \sum_{j=1}^{m} \left[ A(i,j)\, B(i,j) \right]$$
$$Score(A,B)_{3D} = \frac{2}{M_A + M_B} \sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{k=1}^{o} \left[ A(i,j,k)\, B(i,j,k) \right]$$
where A is the reference template in the database and B is the template under test; $M_A$ and $M_B$ represent the sum of all pixels with value “1” in A and B, respectively.
In the 2D case, possible misalignments of the hand during the acquisition are taken into account by performing small translations along x and y and rotations of the test image. As far as the 3D case is concerned, only a few translations were performed to save computation time. The maximum score obtained is the one used in the statistical analysis.
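The pixel-to-area score of Equations (3) and (4) and the translation search can be sketched as follows; since the templates are binary, the per-pixel product reduces to a logical AND. The ±3-pixel search range and the wrap-around shifts are illustrative assumptions.

```python
import numpy as np

def match_score(a, b):
    # Pixel-to-area similarity of Equations (3) and (4); for binary
    # templates the product A*B is a logical AND. Works for 2D and
    # 3D arrays alike.
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def best_score(a, b, max_shift=3):
    # Small translations of the test template along each axis; the
    # maximum score over all shifts is retained (rotations omitted).
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            best = max(best, match_score(a, shifted))
    return best
```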

4.1. Verification Experiments

The recognition modality called “verification” determines whether an individual is who they claim to be. Verification systems perform one-to-one comparisons between a test template, extracted from a sample released by a user, and a stored reference template corresponding to the claimed identity. The performances of the 2D and 3D methods are evaluated by comparing each template with all the others. In this way, a much higher number of scores (2016) than achievable with the standard reference-query method is obtained [29]. As is known, a genuine score is the result of a comparison between templates belonging to the same user, while an impostor score results from templates of different users.
There are two types of errors in biometric systems: the false acceptance rate (FAR) and the false rejection rate (FRR), i.e., the frequencies of mistakenly accepted non-legitimate users and of rejected legitimate users, respectively. Different feature extraction techniques are compared through receiver operating characteristic (ROC) curves, which plot 1 − FRR vs. FAR [46,47] (see Figure 12). A method is more accurate the closer its ROC curve is to the upper left corner of the graph.
In this work, two parameters that are often used to synthetically evaluate the recognition performances of the various extraction techniques are examined: the area under the ROC curve (AUC) [48] and the equal error rate (EER), i.e., the error when FRR = FAR. Table 1 reports the achieved values of EER and AUC for all the tested methods. As can be seen, the results are satisfactory in every case. In particular, the use of the proposed SRAD filter always improves recognition results, while the two binarization methods are almost equivalent, with a slight preference for Ridler’s method. Furthermore, the results demonstrate that the 3D template always provides better recognition performances than the 2D one. The best results were achieved by using the SRAD filter, the Ridler adaptive threshold, and the 3D template: EER = 2% and AUC = 99.92%.
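For completeness, these two figures of merit can be computed from the genuine and impostor score sets as sketched below: a plain NumPy threshold sweep, not the authors’ evaluation code.

```python
import numpy as np

def eer_and_auc(genuine, impostor):
    """Sweep the decision threshold over all observed scores; a score >= t
    is accepted. FAR = accepted impostors, FRR = rejected genuines."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    # EER: the point where the FAR and FRR curves cross.
    i = np.argmin(np.abs(far - frr))
    eer = 0.5 * (far[i] + frr[i])
    # AUC: trapezoidal integral of the ROC curve (1 - FRR vs. FAR).
    order = np.argsort(far)
    x, y = far[order], (1.0 - frr)[order]
    auc = np.sum(np.diff(x) * 0.5 * (y[1:] + y[:-1]))
    return eer, auc
```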

4.2. Identification Experiments

In the identification modality, the system performs a one-to-many comparison to recognize an unknown individual by searching through the templates stored in the database. In this work, identification experiments were executed by comparing each template with all the others in the database. The results were grouped in 64 tables, each containing 63 scores sorted in descending order.
The identification rate is calculated as the ratio between successful experiments (i.e., those in which all genuine scores are higher than all impostor scores) and total experiments for each recognition method. The results are summarized in Table 2. As can be seen, the use of the SRAD filter and 3D templates allows us to achieve an identification rate of 100%. It is also appropriate to highlight that 3D performances are worse than 2D ones if the SRAD filter is not used.
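The success criterion can be expressed compactly; the table representation below (lists of score/is-genuine pairs) is an assumption for illustration.

```python
def identification_rate(tables):
    """tables: one list per experiment of (score, is_genuine) pairs,
    as in the 64 tables of 63 scores described above."""
    def success(table):
        genuine = [s for s, g in table if g]
        impostor = [s for s, g in table if not g]
        # The experiment succeeds when every genuine score beats
        # every impostor score.
        return min(genuine) > max(impostor)
    return 100.0 * sum(success(t) for t in tables) / len(tables)
```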

5. Conclusions

In this work, a biometric recognition system based on palm vein patterns extracted from conventional 3D grayscale ultrasonic images is proposed for the first time and experimentally evaluated. The proposed feature extraction method is based on the detection of low-brightness areas in B-mode images. The selected centroids are then linked using a minimum distance criterion. Only one vascular pattern is extracted from each sample.
Different image processing methods, differing in the kind of filter and adaptive threshold used, were tested. Recognition capabilities were evaluated by performing verification and identification experiments on a preliminary homemade database. Results demonstrated good recognition performances in both modalities and for all the tested methods, although these results should be confirmed by experiments on a wider database. In particular, the best results are obtained when an SRAD filter is used to reduce speckle noise; in this case, 3D features, which are a peculiarity of ultrasound over infrared techniques, always provide better results than 2D ones.
A procedure for extracting all the vascular patterns present in the collected region is currently under study; it would allow exploiting a much higher number of samples from the database [29] and, consequently, providing more reliable results. Furthermore, alternatives to each stage of the feature extraction procedure, including anti-speckle filters, binarization methods, and machine learning methods for vein segmentation, are being evaluated.
A major merit of the proposed vein pattern recognition system lies in the possibility of implementing a costless multimodal system by fusing 3D palmprints and 3D vein patterns, as the two features are extracted from the same collected volume. To this end, an approach similar to that presented in [49], which demonstrated its capability of dramatically improving the recognition performances of hand geometry and palmprints extracted from the same volume, is being tested.
The system also exhibits the other advantages of ultrasound, such as insensitivity to many kinds of surface contamination, humidity, environmental light, and temperature and, above all, the capability to detect the liveness of the sample.
Finally, to overcome a significant drawback of the proposed system that may limit its acceptability, i.e., the need for the user to get his hand wet, the possibility of extracting vascular patterns from volumes collected with an acquisition system that uses gel as a coupling medium is under investigation, as has been successfully done for palmprints [50,51,52]. Such a system allows a comfortable positioning of the hand and, as the hand is in contact only with gel, it is acceptable to users in a post-pandemic world. The proposed recognition system is intended for high-security applications. In this perspective, its cost and bulk can be largely contained. In fact, a single stepper motor can be used instead of the pantograph, and probes based on the emerging and potentially cheap MEMS technology [53,54], which is nowadays appearing on the market [55], could be employed.

Author Contributions

Conceptualization, A.I.; methodology, A.I. and A.V.; software, A.V.; validation, A.I. and A.V.; data curation, A.V.; writing—original draft preparation, A.V.; writing—review and editing, A.I.; supervision, A.I.; project administration, A.I.; funding acquisition, A.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Italian Government through the PRIN 2020 Program (Project n. 20205HFXE7).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kavitha, S.; Sripriya, P. A Review on Palm Vein Biometrics. Int. J. Eng. Technol. 2018, 7, 407. [Google Scholar] [CrossRef]
  2. Hernández-García, R.; Barrientos, R.; Rojas, C.; Mora, M. Individuals identification based on palm vein matching under a parallel environment. Appl. Sci. 2019, 9, 2805. [Google Scholar] [CrossRef]
  3. Gautam, A.; Kapoor, R. A Novel ES-RwCNN Based Finger Vein Recognition System with Effective L12 DTP Descriptor and AWM-WOA Selection. Eng. Lett. 2022, 30, 882–891. [Google Scholar]
  4. Wu, W.; Elliott, S.; Lin, S.; Yuan, W. Low-cost biometric recognition system based on NIR palm vein image. IET Biom. 2019, 8, 206–214. [Google Scholar] [CrossRef]
  5. Wang, L.; Leedham, G.; Cho, S.Y. Infrared imaging of hand vein patterns for biometric purposes. IET Comput. Vis. 2007, 1, 113–122. [Google Scholar] [CrossRef]
  6. Zhou, Y.; Kumar, A. Human identification using palm-vein images. IEEE Trans. Inf. Forensics Secur. 2011, 6, 1259–1274. [Google Scholar] [CrossRef]
  7. Palma, D.; Blanchini, F.; Giordano, G.; Montessoro, P.L. A Dynamic Biometric Authentication Algorithm for Near-Infrared Palm Vascular Patterns. IEEE Access 2020, 8, 118978–118988. [Google Scholar] [CrossRef]
  8. Wu, W.; Elliott, S.; Lin, S.; Sun, S.; Tang, Y. Review of palm vein recognition. IET Biom. 2020, 9, 1–10. [Google Scholar] [CrossRef]
  9. Iula, A. Ultrasound systems for biometric recognition. Sensors 2019, 19, 2317. [Google Scholar] [CrossRef]
  10. Schmitt, R.; Zeichman, J.; Casanova, A.; Delong, D. Model based development of a commercial, acoustic fingerprint sensor. In Proceedings of the IEEE International Ultrasonics Symposium, IUS, Dresden, Germany, 7–10 October 2012; pp. 1075–1085. [Google Scholar]
  11. Lamberti, N.; Caliano, G.; Iula, A.; Savoia, A. A high frequency cMUT probe for ultrasound imaging of fingerprints. Sens. Actuators A Phys. 2011, 172, 561–569. [Google Scholar] [CrossRef]
  12. Jiang, X.; Tang, H.Y.; Lu, Y.; Ng, E.J.; Tsai, J.M.; Boser, B.E.; Horsley, D.A. Ultrasonic fingerprint sensor with transmit beamforming based on a PMUT array bonded to CMOS circuitry. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2017, 64, 1401–1408. [Google Scholar] [CrossRef]
  13. Iula, A.; Savoia, A.S.; Caliano, G. An ultrasound technique for 3D palmprint extraction. Sens. Actuators A Phys. 2014, 212, 18–24. [Google Scholar] [CrossRef]
  14. Iula, A.; De Santis, M. Experimental evaluation of an ultrasound technique for the biometric recognition of human hand anatomic elements. Ultrasonics 2011, 51, 683–688. [Google Scholar] [CrossRef]
  15. Iula, A.; Hine, G.; Ramalli, A.; Guidi, F. An improved ultrasound system for biometric recognition based on hand geometry and palmprint. Procedia Eng. 2014, 87, 1338–1341. [Google Scholar] [CrossRef]
  16. Iula, A. Biometric recognition through 3D ultrasound hand geometry. Ultrasonics 2021, 111, 106326. [Google Scholar] [CrossRef]
  17. Iula, A.; Savoia, A.; Caliano, G. 3D Ultrasound palm vein pattern for biometric recognition. In Proceedings of the 2012 IEEE International Ultrasonics Symposium, Dresden, Germany, 7–10 October 2012; pp. 1–4. [Google Scholar]
  18. Harput, S.; Tortoli, P.; Eckersley, R.; Dunsby, C.; Tang, M.X.; Christensen-Jeffries, K.; Ramalli, A.; Brown, J.; Zhu, J.; Zhang, G.; et al. 3-D Super-Resolution Ultrasound Imaging with a 2-D Sparse Array. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2020, 67, 269–277. [Google Scholar] [CrossRef]
  19. Russo, D.; Ricci, S. Electronic Flow Emulator for the Test of Ultrasound Doppler Sensors. IEEE Trans. Ind. Electron. 2022, 69, 6341–6349. [Google Scholar] [CrossRef]
  20. Wang, Y.; Li, Z.; Vu, T.; Nyayapathi, N.; Oh, K.; Xu, W.; Xia, J. A robust and secure palm vessel biometric sensing system based on photoacoustics. IEEE Sens. J. 2018, 18, 5993–6000. [Google Scholar] [CrossRef]
  21. Zhang, C.; Liu, G.; Xie, Z.; Shu, Z.; Ren, Z.; Yao, Q. Vascular recognition system based on photoacoustic detection. J. Laser Appl. 2021, 33, 012051. [Google Scholar] [CrossRef]
  22. Zhan, Y.; Rathore, A.; Milione, G.; Wang, Y.; Zheng, W.; Xu, W.; Xia, J. 3D finger vein biometric authentication with photoacoustic tomography. Appl. Opt. 2020, 59, 8751–8758. [Google Scholar] [CrossRef] [PubMed]
  23. Sun, M.; Ma, Y.; Tong, Z.; Wang, Z.; Zhang, W.; Yang, S. High-security photoacoustic identity recognition by capturing hierarchical vascular structure of finger. J. Biophotonics 2021, 14, e202100086. [Google Scholar] [CrossRef] [PubMed]
  24. Gupta, P.; Srivastava, S.; Gupta, P. An accurate infrared hand geometry and vein pattern based authentication system. Knowl.-Based Syst. 2016, 103, 143–155. [Google Scholar] [CrossRef]
  25. Wu, T.; Leng, L.; Khan, M.; Khan, F. Palmprint-Palmvein Fusion Recognition Based on Deep Hashing Network. IEEE Access 2021, 9, 135816–135827. [Google Scholar] [CrossRef]
  26. Kala, K.; Kumar, S.; Reddy, R.; Shastry, N.; Thakur, R. Contactless Authentication Device using Palm Vein and Palm Print Fusion Biometric Technology for Post Covid World. In Proceedings of the 2021 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C 2021), Bangalore, India, 11–12 June 2021; pp. 281–285. [Google Scholar] [CrossRef]
  27. Ahmed, M.; Roushdy, M.; Salem, A.B. Multimodal Technique for Human Authentication Using Fusion of Palm and Dorsal Hand Veins. Smart Innov. Syst. Technol. 2022, 270, 63–78. [Google Scholar] [CrossRef]
  28. Iula, A.; Nardiello, D. Three-dimensional ultrasound palmprint recognition using curvature methods. J. Electron. Imaging 2016, 25, 033009. [Google Scholar] [CrossRef]
  29. Iula, A.; Nardiello, D. 3-D Ultrasound Palmprint Recognition System Based On Principal Lines Extracted At Several Under Skin Depths. IEEE Trans. Instrum. Meas. 2019, 68, 4653–4662. [Google Scholar] [CrossRef]
  30. Tortoli, P.; Bassi, L.; Boni, E.; Dallai, A.; Guidi, F.; Ricci, S. ULA-OP: An advanced open platform for ultrasound research. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2009, 56, 2207–2216. [Google Scholar] [CrossRef]
  31. Yu, Y.; Acton, S. Speckle reducing anisotropic diffusion. IEEE Trans. Image Process. 2002, 11, 1260–1270. [Google Scholar]
  32. Narayan, N.S.; Marziliano, P.; Kanagalingam, J.; Hobbs, C.G. Speckle in ultrasound images: Friend or FOE? In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014. [Google Scholar]
  33. El-Shafai, W.; Mahmoud, A.; Ali, A.; El-Rabaie, E.S.; Taha, T.; Zahran, O.; El-Fishawy, A.; Soliman, N.; Alhussan, A.; Abd El-Samie, F. Deep CNN Model for Multimodal Medical Image Denoising. Comput. Mater. Contin. 2022, 73, 3795–3814. [Google Scholar] [CrossRef]
  34. Ratheesha, S.; Shamla Beevi, A.; Kalady, S. Performance analysis of speckle reduction filtering algorithms in B-mode ultrasound images. In Proceedings of the 2021 2nd International Conference for Emerging Technology (INCET), Belagavi, India, 21–23 May 2021. [Google Scholar] [CrossRef]
  35. Iula, A.; Micucci, M. Palmprint Recognition Through a Reliable Ultrasound Acquisition System and a 3D Template. Lect. Notes Electr. Eng. 2020, 629, 207–213. [Google Scholar] [CrossRef]
  36. Goyal, S.N.; Rani, A.; Yadav, N.; Singh, V. SGS-SRAD Filter for Denoising and Edge Preservation of Ultrasound Images. In Proceedings of the 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 7–8 March 2019; pp. 676–682. [Google Scholar]
  37. De Santis, M.; Agnelli, S.; Nardiello, D.; Iula, A. 3D Ultrasound Palm Vein recognition through the centroid method for biometric purposes. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017. [Google Scholar] [CrossRef]
  38. Iula, A. Optimization and evaluation of a biometric recognition technique based on 3D Ultrasound Palm Vein. In Proceedings of the 2020 IEEE International Ultrasonics Symposium (IUS), Las Vegas, NV, USA, 7–11 September 2020; pp. 1–4. [Google Scholar] [CrossRef]
  39. Hashem, A.A.; Idris, M.Y.I.; Ahmad, A.E.A. Comparative study of different binarization methods through their effects in characters localization in scene images. Data Knowl. Eng. 2018, 117, 216–224. [Google Scholar] [CrossRef]
  40. Ridler, T.; Calvard, S. Picture Thresholding using an Iterative Selection Method. IEEE Trans. Syst. Man Cybern. 1978, 8, 630–632. [Google Scholar] [CrossRef]
  41. Xue, J.H.; Zhang, Y.J. Ridler and Calvard’s, Kittler and Illingworth’s and Otsu’s methods for image thresholding. Pattern Recognit. Lett. 2012, 33, 793–797. [Google Scholar] [CrossRef]
  42. Karuppanagounder, S.; Genish, T. A Median Filter Approach for Ridler Calvard Method. In Proceedings of the National Conference on Signal and Image Processing (NCSIP-2012), Deemed Uni, India, 9–10 February 2012. [Google Scholar]
  43. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
  44. Dijkstra, E.W. A note on two problems in connexion with graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef]
  45. Zhang, D.; Lu, G.; Li, W.; Zhang, L.; Luo, N. Palmprint recognition using 3-D information. IEEE Trans. Syst. Man Cybern. Part Appl. Rev. 2009, 39, 505–519. [Google Scholar] [CrossRef]
  46. Hernández-Orallo, J. ROC curves for regression. Pattern Recognit. 2013, 46, 3395–3411. [Google Scholar] [CrossRef]
  47. Fan, J.; Upadhye, S.; Worster, A. Understanding receiver operating characteristic (ROC) curves. Can. J. Emerg. Med. 2006, 8, 19–20. [Google Scholar] [CrossRef]
  48. Faraggi, D.; Reiser, B. Estimation of the area under the ROC curve. Stat. Med. 2002, 21, 3093–3106. [Google Scholar] [CrossRef]
  49. Iula, A.; Micucci, M. Multimodal Biometric Recognition Based on 3D Ultrasound Palmprint-Hand Geometry Fusion. IEEE Access 2022, 10, 7914–7925. [Google Scholar] [CrossRef]
  50. Nardiello, D.; Iula, A. A new recognition procedure for palmprint features extraction from ultrasound images. Lect. Notes Electr. Eng. 2019, 512, 113–118. [Google Scholar]
  51. Iula, A.; Micucci, M. Experimental validation of a reliable palmprint recognition system based on 2D ultrasound images. Electronics 2019, 8, 1393. [Google Scholar] [CrossRef]
  52. Iula, A.; Micucci, M. A Feasible 3D Ultrasound Palmprint Recognition System for Secure Access Control Applications. IEEE Access 2021, 9, 39746–39756. [Google Scholar] [CrossRef]
  53. Greenlay, B.; Zemp, R. Fabrication of linear array and top-orthogonal-to-bottom electrode CMUT arrays with a sacrificial release process. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2017, 64, 93–107. [Google Scholar] [CrossRef]
  54. Caliano, G.; Carotenuto, R.; Cianci, E.; Foglietti, V.; Caronti, A.; Iula, A.; Pappalardo, M. Design, fabrication and characterization of a capacitive micromachined ultrasonic probe for medical imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2005, 52, 2259–2269. [Google Scholar] [CrossRef]
  55. Butterfly Network, Inc. 2020. Available online: https://www.butterflynetwork.eu/ (accessed on 16 August 2022).
Figure 1. Example of a volumetric image of a human hand region; the red circle indicates a vein.
Figure 2. A B-mode image used for vein centroid identification.
Figure 3. (a) Image obtained by applying an SRAD filter with a 50 × 50 window, Δt = 0.1, n = 40 (used in this work). (b) Image obtained by applying an SRAD filter with a 60 × 60 window, Δt = 0.5, n = 60, which produces excessive smoothing and blurred edges.
Figure 4. Example of pixel value distribution in a B-mode image.
Figure 5. Binarized image obtained by using (a) mean threshold and (b) Ridler’s threshold.
Figure 6. Cropped region used for vein extraction. The selected centroid (highlighted with a green circle) is the one closest to the center of the selected region (blue point).
Figure 7. An example of a 3D vascular pattern with a slight slope along z.
Figure 8. (a) A 2D pattern obtained by projecting the 3D pattern of Figure 7 on the XY plane; (b) C-mode image extracted at 6.28 mm of depth.
Figure 9. (a) Example of a 3D pattern with non-negligible slope; (b) 2D pattern obtained by projecting the 3D pattern on the XY plane.
Figure 10. C-mode images of the sample in Figure 9a at different depths.
Figure 11. Example of an acquisition discarded because its regions are too thin to contain a complete vascular pattern.
Figure 12. ROC curves computed for all analyzed methods.
Table 1. EER and AUC values for the various recognition methods.

Method            2D EER (%)   3D EER (%)   2D AUC (%)   3D AUC (%)
Mean              3.7          3.06         99.85        99.85
SRAD and mean     3.02         2.8          99.87        99.90
Ridler            4            3.05         99.79        99.80
SRAD and Ridler   3.05         2            99.87        99.92
Table 2. Identification rates for the various recognition methods.

Method            2D (%)   3D (%)
Mean              95.23    93.65
SRAD and mean     98.41    100
Ridler            95.23    93.65
SRAD and Ridler   95.23    100