Article

Recognition Performance Analysis of a Multimodal Biometric System Based on the Fusion of 3D Ultrasound Hand-Geometry and Palmprint

School of Engineering, University of Basilicata, 85100 Potenza, Italy
* Author to whom correspondence should be addressed.
Sensors 2023, 23(7), 3653; https://doi.org/10.3390/s23073653
Submission received: 15 February 2023 / Revised: 28 March 2023 / Accepted: 29 March 2023 / Published: 31 March 2023

Abstract

Multimodal biometric systems are often used in a wide variety of applications where high security is required. Such systems show several merits in terms of universality and recognition rate compared to unimodal systems. Among the available acquisition technologies, ultrasound bears great potential for high-security access applications because it allows the acquisition of 3D information about the human body and is able to verify the liveness of the sample. In this work, the recognition performance of a multimodal system obtained by fusing 3D palmprint and hand-geometry features, extracted from the same collected volumetric image, is extensively evaluated. Several fusion techniques based on the weighted score sum rule and on a wide variety of possible combinations of palmprint and hand-geometry scores are experimented with. The recognition performance of the various methods is evaluated and compared through verification and identification experiments carried out on a homemade database employed in previous works. Verification results demonstrate that fusion, in most cases, produces a noticeable improvement over unimodal systems: an EER value of 0.06% is achieved in at least five cases, against values of 1.18% and 0.63% obtained in the best cases for unimodal palmprint and hand geometry, respectively. The analysis also revealed that the best fusion results do not involve any combination of the best scores of the unimodal characteristics. Identification experiments, carried out for the methods that provided the best verification results, consistently yielded an identification rate of 100%, against 98% and 91% obtained in the best cases for unimodal palmprint and hand geometry, respectively.

1. Introduction

In recent years, biometric recognition has been gaining popularity in various fields where personal security is required, replacing classical authentication methods based on PINs and passwords. Biometric characteristics are mainly employed in commercial applications such as smartphones and access control, as well as in government and forensic applications.
Biometric systems based on the combination of two or more characteristics, referred to as multimodal systems, have several advantages over their unimodal counterparts, as they improve recognition rate and universality and allow the authentication of users for whom one of the single biometric characteristics cannot be detected [1,2,3]. In particular, multimodal systems based on a single sensor are arousing interest because they are cost-effective and better accepted by users [4].
Multimodal systems often rely on human hand characteristics, including hand geometry and palmprint, because both are universal, invariant, acceptable, and collectable [5,6].
Over the years, several technologies have been experimented with for the acquisition of these two hand modalities, the most common being optical and infrared [7]. The former mainly relies on CCD cameras and contactless acquisition [8,9,10]: CCD cameras collect high-quality images but are limited by the bulkiness of the device, while the contactless modality improves user acceptability and personal hygiene but becomes unreliable when image quality is low. Regarding the latter, both Near-Infrared (NIR) and Far-Infrared (FIR) radiation are used [11,12]. The principal limit of these technologies is that they only capture information present on the external skin surface.
Ultrasound is a technology employed in several fields, including sonar [13], motors and actuators [14], Non-Destructive Evaluations (NDE) [15], Indoor Positioning Systems (IPS) [16], medical imaging [17] and therapy [18], and biometric systems [19]. The capability of ultrasound to penetrate the human body can be very useful in the latter field because it provides 3D information on the features, leading to a more accurate description of the biometric characteristic and, hence, improved recognition accuracy [10]. Moreover, ultrasound can effectively detect liveness during the acquisition phase by simply checking vein pulsing, which makes the system very difficult to counterfeit; it is also unaffected by oil or ink stains on the skin and by environmental changes in light or temperature. Ultrasound technology has been widely investigated in the biometric field, particularly for the extraction of fingerprint features [20,21], and the integration of the sensor in smartphone devices has recently become a reality [22]. Other characteristics, including hand geometry [23,24], palmprint [25,26,27], and hand veins [28,29,30], were also investigated.
In a recent paper [31], a single-sensor multimodal system based on the combination of ultrasound hand geometry and palmprint was proposed. The 3D features for both hand geometry and palmprint were extracted from the same volumetric hand images. Verification and identification experiments were first performed by separately considering the single modalities and, then, a preliminary attempt of fusion was performed by considering only the best scores of the two characteristics.
In the present work, the abovementioned study is extended by testing several fusion approaches based on the weighted score sum rule and by considering a wide variety of possible combinations of palmprint and hand-geometry scores.
The remainder of this paper is structured as follows. The main papers related to the present work are reviewed in Section 2. In Section 3, the acquisition of the hand volume and the feature-extraction techniques are described. Section 4 briefly recalls state-of-the-art fusion techniques and describes the experimental methods. In Section 5, fusion results obtained via verification and identification experiments are reported. Lastly, concluding remarks are provided in Section 6.

2. Related Works

Palmprint and hand geometry are two well-explored biometric characteristics. Palmprint is characterized by a rich texture consisting of ridges, singular points, and minutiae. This texture can be extracted from high-resolution images, which are suitable for forensic applications such as criminal detection [32]; for civil and commercial applications, low-resolution images are employed, in which case only principal lines and wrinkles are analyzed [33,34]. A wide variety of feature-extraction methods have been proposed, including line-based, texture-based, and appearance-based features [33,34,35]. More recently, several holistic and coding techniques were evaluated, as well as machine/deep learning approaches [36,37,38]. Most palmprint recognition systems use 2-D images for feature extraction, increasingly collected in a contactless way. However, 2-D palmprint images can be easily counterfeited. To overcome this limitation, optical palmprint recognition systems that use 3-D information on the curvature of the palm were proposed [39].
Biometric systems based on hand geometry, which use a varying number of distances including lengths and/or widths of the palm and fingers as templates, have been under development since the second half of the 20th century. The main technology used for capturing an image of the human hand is optical, but good recognition results were obtained using infrared radiation as well [8,40,41]. A set of pegs is sometimes used to correctly align the fingers; however, unconstrained acquisition is nowadays the most commonly adopted modality.
Both palmprint and hand-geometry features have also been extracted from volumetric ultrasound images.
Various probes and ultrasonic scanners were used for collecting a volumetric region of the palm in a reasonable time (about 5 s) and with a resolution better than 100 dpi [42,43,44]. Two types of 3D palmprint features were extracted. The first was based on palm curvature [25], using methods similar to those employed for optical images. The second was based on the analysis of principal lines extracted at several depths under the skin [27], gaining 3D information that cannot be achieved with any other technology.
Three-dimensional ultrasound images of the whole hand were acquired with a technique similar to that used for palmprint [23]. Because the volume to be collected was much greater, the 3D hand images were acquired at a lower resolution in order to keep the acquisition time acceptable. Also in this case, several 2D images were extracted at various depths under the skin. For each image, a template based on a number of distances was defined, and these templates were suitably combined to provide a 3D template [24].
Biometric fusion can be performed at several levels including at the sensor level, feature level, score level, and decision level.
Sensor-level fusion mainly consists of fusing raw samples of biometric traits acquired by the sensor [45]. This fusion technique can be performed if the samples are compatible and represent the same biometric trait. Moreover, it is mostly employed in multi-sample systems where multiple samples are combined to obtain a composite sample for human identification.
Feature-level fusion combines feature vectors obtained during the feature-extraction phase. This technique can be applied when feature sets of different modalities are compatible or synchronised [46].
Score-level fusion combines the match scores produced by the individual matchers and uses the resulting score to make the final recognition decision [47].
Decision-level fusion is similar to score-level fusion with the difference that scores are turned into match/non-match decisions before fusion [48,49].
Among the above-described methods, fusion at the score level is the most popular because it is easy to implement and retains adequate information content [50]. Over the years, a large number of score-level fusion algorithms have been experimented with. Hanmandlu et al. [47], Peng et al. [51], and El-Latif et al. [52] proposed fusion approaches based on t-norms. Many authors have proposed score-level fusion methods based on the weighted score sum rule, where an appropriate weight is assigned to each score. Weights have been calculated in different ways: for instance, Zhang et al. defined the weight on the basis of the EER of each modality [53]; Damer et al. estimated the weights through the mean of the score distribution and its maxima [54]; Kabir et al. calculated the weight through the distances between the maximum and mean or the mean and minimum of genuine/impostor scores [55]; Poh et al. and Snelick et al. defined the weight on the basis of the mean and standard deviation of genuine and impostor scores [56,57].

3. Image Acquisition and Feature Extraction

Ultrasound image acquisition of the human hand [24] is performed with a system composed of an ultrasound scanner [58], a linear array of 192 elements and a numerical pantograph, which controls the movement of the probe on the region of interest (ROI).
The acoustic coupling between the human body and the probe is created by submerging both in a tank of water. A three-dimensional image is acquired by moving the probe along the elevation direction; during the motion, several B-mode images are collected and regrouped in order to obtain a volume defined by an 8-bit grayscale 3D matrix (416 × 500 × 68 voxels). Figure 1 shows an example of a 3D render of the whole human hand. The resolution of the image is about 400 μm.
Subsequently, an interpolation is performed along the z-axis and 2D renderings are extracted at various depths from the volume: the external surface of the hand is first projected onto the XY plane to obtain the shallowest 2D image; the surface is then translated along the z-axis beneath the skin and projected again onto the XY plane, obtaining 2D images at increasing depths.
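As a rough illustration of this depth-slicing step, the following Python sketch extracts 2D renderings at fixed offsets beneath the skin surface. It assumes the volume is a NumPy array with the depth along the last axis and that the skin surface can be located by a simple threshold; the function name, threshold, and offsets are illustrative and are not taken from the paper.

```python
import numpy as np

def extract_depth_renderings(volume, threshold, depth_offsets):
    """Project 2D images at fixed voxel offsets beneath the skin surface.

    volume        : 3D uint8 array indexed as (x, y, z), z = depth axis (assumption)
    threshold     : gray level used to locate the skin surface along z
    depth_offsets : iterable of integer offsets (in voxels) below the surface
    """
    nx, ny, nz = volume.shape
    # Depth map of the skin: index of the first voxel along z exceeding the threshold.
    surface = np.argmax(volume > threshold, axis=2)          # shape (nx, ny)

    renderings = []
    for d in depth_offsets:
        idx = np.clip(surface + d, 0, nz - 1)                # translate the surface by d voxels
        # Project onto the XY plane by sampling the volume at the shifted depth.
        img = np.take_along_axis(volume, idx[..., None], axis=2)[..., 0]
        renderings.append(img)
    return renderings
```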
Three-dimensional information is taken into account by collecting fourteen 2D images with a step of 50 μm: the shallowest image is taken at 100 μm, while the deepest is captured at 750 μm. Subsequently, 2D and 3D features for both hand geometry and palmprint are extracted from the 2D renderings: for hand geometry, they consist of hand measurements including the size of the palm and the lengths and widths of the fingers, while for palmprint they are represented by the principal lines and main wrinkles.
The procedure employed for the extraction of 2D templates consists of a median filter to reduce noise, binarization with a suitable threshold, and calculation of the Euclidean distance between a reference point located in the middle of the wrist boundary and each point on the hand contour [24]. Then, several feature points, shown in Figure 2a with different colours, including finger peaks, a middle point, valleys between fingers, other finger base points, and an extra point, are extracted [40]. From these points, 26 distances are calculated to define a 2D template. Subsequently, the 2D templates at different depths are combined to obtain the 3D template; three combinations were considered (a code sketch of this combination step is given after the list):
  • Mean features (MF): each length computed as the mean value of the lengths obtained at each depth;
  • Weighted Mean features (WMF): each length represented by a weighted mean of the lengths obtained at various depths;
  • Global features (GF): all lengths computed at every depth.
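A minimal sketch of the combination step is given below, assuming each 2D template is already available as a vector of 26 distances and that the per-depth vectors are stacked row-wise; the WMF weights are not specified here, so uniform weights are used as a placeholder.

```python
import numpy as np

def combine_hand_geometry_templates(templates_2d, weights=None):
    """Combine per-depth 2D templates into the three 3D templates (MF, WMF, GF).

    templates_2d : array of shape (n_depths, 26) holding the distances of Figure 2a
    weights      : per-depth weights for WMF (placeholder: uniform if not given)
    """
    t = np.asarray(templates_2d, dtype=float)
    if weights is None:
        weights = np.ones(t.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()

    mf = t.mean(axis=0)        # Mean Features: average of each length over depths
    wmf = w @ t                # Weighted Mean Features: weighted average per length
    gf = t.reshape(-1)         # Global Features: all lengths at every depth
    return mf, wmf, gf
```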
Regarding the palmprint, a palm ROI is first extracted by defining a square as indicated in Figure 2b; in this way, the repeatability of the procedure is guaranteed. Subsequently, after some preprocessing operations, 2D features are extracted with a classical line-based procedure [27], as shown in Figure 3. The image is scanned along four directions (0°, 90°, 180°, 270°). Along each direction, the edges of the principal lines are detected by calculating intensity variations through the first derivative. Short lines and isolated points are then filtered out by using a Laplacian filter. The four resulting images are combined with a logical OR operation. Finally, morphological operations are executed: a closing operation to fill holes and small concavities, a thinning operation, and a pruning operation to remove short lines. In this way, the 2D templates are obtained.
Subsequently, the 2D templates at different depths are combined with a dedicated algorithm in order to obtain a 3D template. The algorithm is mainly based on two operations that are executed iteratively: the currently analysed 2D template T_i is dilated with a structuring element of dimension β and stored in a 3D matrix; then, a logical AND comparison is performed between the current dilated template and the 2D template at the adjacent depth level (T_{i-1} or T_{i+1}) [31,59], and the result is stored in the 3D matrix. The dilation operation accounts for the fact that, as the under-skin depth increases, a palm trait may not be orthogonal to the XY plane, while the AND operation filters spurious traits in each of the two images. The dimension β of the structuring element affects the quality of the results: if it is too large, the 3D template may contain spurious traits, while if it is too small, some principal information may be eliminated.
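The sketch below illustrates one plausible reading of this dilation-and-AND procedure (the exact order of the storing and comparison steps in the original algorithm may differ); it assumes the per-depth 2D templates are boolean line maps stacked in a NumPy array.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def build_3d_palmprint_template(templates_2d, beta=5):
    """Combine per-depth 2D line templates into a 3D template (sketch).

    templates_2d : boolean array (n_depths, H, W) of principal-line maps
                   at increasing under-skin depths
    beta         : size of the square structuring element used for dilation
    """
    t = np.asarray(templates_2d, dtype=bool)
    structure = np.ones((beta, beta), dtype=bool)

    layers = np.zeros_like(t)
    for i in range(t.shape[0]):
        dilated = binary_dilation(t[i], structure=structure)   # tolerate non-orthogonal traits
        j = i + 1 if i + 1 < t.shape[0] else i - 1             # adjacent depth level
        layers[i] = dilated & t[j]                             # filter spurious traits

    # Occurrence matrix: at how many depths each pixel contains a trait
    # (cf. the colour-scale matrix of Figure 4a).
    occurrences = layers.sum(axis=0)
    return layers, occurrences
```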
Figure 4a shows an example of a 3D template, represented as a colour-scale matrix and obtained by setting β = 5, where each pixel is assigned a value from 0 to 13: 0 defines a blue pixel corresponding to the absence of the trait, while 13 defines a dark red pixel corresponding to the presence of the trait at all depths. For comparison, Figure 4b shows the corresponding 2D greyscale render.

4. Fusion

In a previous work [31], a multimodal system based on 3D hand geometry and 3D palmprint was investigated where it was assumed that the best fusion results were achieved by considering 3D templates, both for hand geometry and palmprint, that provided the best recognition results in unimodal experiments. Instead, in the present work, this assumption has been removed and extensive fusion experiments have been performed by testing several fusion methods based on the weighted score sum rule and by considering a wide variety of possible combinations of palmprint and hand-geometry templates.

Experimented Weighted Score Sum Rules

Methods based on the weighted score sum rule are easy to implement and demonstrate high effectiveness [50]. They are generically expressed through the following equation:
$$R_{MW} = \sum_{i=1}^{n} w_i R_i \quad (1)$$
where n is the number of characteristics, R_i represents the score, and w_i is the corresponding weight. Several kinds of weights were experimented with. In the following, only those discussed in Section 2 are evaluated. In all cases, the weight is calculated according to the expression:
$$w_i = \frac{y_i}{\sum_{j=1}^{n} y_j} \quad (2)$$
For each methodology, the value y_i is defined as follows:
  • EER weighted (EERW) [31,53]:
    $$y_i = \frac{1}{EER_i} \quad (3)$$
  • D-Prime weighted [57]:
    $$y_i = \frac{\mu_i^G - \mu_i^I}{\sqrt{(\sigma_i^G)^2 + (\sigma_i^I)^2}} \quad (4)$$
  • Mean-to-extrema weighted (MEW) [54]:
    $$y_i = (Max_i^I - \mu_i^I) + (\mu_i^G - Max_i^G) \quad (5)$$
  • Fisher’s discriminant ratio weighted (FDRW) [56]:
    $$y_i = \frac{(\mu_i^G - \mu_i^I)^2}{(\sigma_i^G)^2 + (\sigma_i^I)^2} \quad (6)$$
  • Kabir method [55]:
    $$y_i = (Max_i^G - \mu_i^G) + (\mu_i^I - Min_i^I) \quad (7)$$
where μ_i^G and μ_i^I are the means of the genuine and impostor score distributions, respectively; σ_i^G and σ_i^I are the corresponding standard deviations; Max_i^G and Max_i^I are the maximum values of the genuine and impostor scores, respectively; and Min_i^I is the minimum value of the impostor scores.
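As a compact illustration, the weights and the fused score can be computed from the genuine/impostor score samples of each modality as in the following sketch. It assumes the scores of both modalities have already been normalized to a common similarity convention; the mean-to-extrema expression mirrors the formula reported above.

```python
import numpy as np

def weight_term(genuine, impostor, method, eer=None):
    """Compute the y_i term of one modality from its genuine/impostor scores."""
    g, im = np.asarray(genuine, float), np.asarray(impostor, float)
    mu_g, mu_i = g.mean(), im.mean()
    s_g, s_i = g.std(), im.std()
    if method == "eerw":                     # EER weighted (requires the modality EER)
        return 1.0 / eer
    if method == "dprime":                   # D-Prime weighted
        return (mu_g - mu_i) / np.sqrt(s_g**2 + s_i**2)
    if method == "mew":                      # Mean-to-extrema weighted
        return (im.max() - mu_i) + (mu_g - g.max())
    if method == "fdrw":                     # Fisher's discriminant ratio weighted
        return (mu_g - mu_i) ** 2 / (s_g**2 + s_i**2)
    if method == "kabir":                    # Kabir method
        return (g.max() - mu_g) + (mu_i - im.min())
    raise ValueError(f"unknown method: {method}")

def fused_score(scores, y_terms):
    """Weighted score sum R_MW = sum_i w_i R_i with w_i = y_i / sum_j y_j."""
    w = np.asarray(y_terms, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(scores, dtype=float)))
```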

5. Results

Recognition accuracy is evaluated by performing verification and identification experiments on a database previously employed in [31]. It is composed of 110 samples acquired from 50 different users of both sexes with ages ranging from 18 to 55.

5.1. Verification

The verification mode consists of confirming the identity claimed by a person and is based on one-to-one comparisons between a query template and a reference template stored in the database. Verification experiments are performed by comparing each 3D template with all the others in the database, both for palmprint and hand geometry. Regarding hand geometry [24,31], 3D templates are compared by employing the absolute distance function:
$$D = \sum_{i=1}^{n} |Q_i - R_i|$$
where Q_i and R_i are the i-th elements of the query and reference templates, respectively. Instead, the similarity between two 3D palmprint templates [27,59], of the type shown in Figure 4a, is defined by a classic pixel-to-area approach based on a logical AND operation between corresponding pixels of the two images:
$$S_{3D}(R,Q) = \frac{2}{S_R + S_Q} \sum_{i=1}^{n} \sum_{j=1}^{n} \left[ T_R(i,j) \wedge T_Q(i,j) \right]_{\,|O_R(i,j) - O_Q(i,j)| < \alpha}$$
where T_R and T_Q are the reference and query templates, respectively, n × n is the dimension of the templates, and S_R and S_Q are the numbers of pixels of value “1” in T_R and T_Q, respectively; α is an integer value between 0 and the number of 2D templates and acts as a filter for small or secondary traits: the lower the value of α, the stronger the filtering effect. The condition |O_R(i,j) − O_Q(i,j)| < α tunes the acceptable difference of occurrences in corresponding pixels.
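A sketch of the two comparison functions is given below, under one plausible reading of the expressions above; in particular, the occurrence-difference condition is interpreted as a per-pixel filter applied to the AND of the two line maps. Names and types are illustrative.

```python
import numpy as np

def hand_geometry_distance(query, reference):
    """Absolute (L1) distance between two hand-geometry templates."""
    q = np.asarray(query, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.abs(q - r).sum())

def palmprint_similarity_3d(T_R, T_Q, O_R, O_Q, alpha):
    """Pixel-to-area similarity between two 3D palmprint templates.

    T_R, T_Q : boolean line maps of the reference and query templates
    O_R, O_Q : occurrence matrices (number of depths at which each trait appears)
    alpha    : maximum accepted difference of occurrences per pixel
    """
    T_R, T_Q = np.asarray(T_R, dtype=bool), np.asarray(T_Q, dtype=bool)
    diff_ok = np.abs(np.asarray(O_R, dtype=int) - np.asarray(O_Q, dtype=int)) < alpha
    matched = (T_R & T_Q & diff_ok).sum()
    return 2.0 * matched / (T_R.sum() + T_Q.sum())
```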
The result of the comparison, referred to as a score, is defined as genuine or impostor if the two templates come from the same user or from different users, respectively. If the score exceeds a certain threshold, the user is authenticated; otherwise, the user is rejected.
In biometric systems, two types of errors may occur: false acceptances and false rejections, which happen when an impostor score exceeds the threshold and when a genuine score falls below it, respectively. Consequently, the performance of a system is evaluated by the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), computed as the ratio of false acceptances to the total number of impostor scores and of false rejections to the total number of genuine scores, respectively. The Equal Error Rate (EER), which occurs when FRR = FAR, is often used to provide a synthetic evaluation of the recognition capability of the system. The performances of different systems are compared through Detection Error Tradeoff (DET) and Receiver Operating Characteristic (ROC) curves, which plot FRR and the True Acceptance Rate (TAR), defined as 1 − FRR, as a function of FAR, respectively. A DET curve closer to the axes, or a ROC curve closer to the upper-left corner, indicates better recognition performance. In addition, to quantitatively evaluate the performances on the basis of the ROC curves, the Area Under the Curve (AUC) is computed.
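For illustration, these quantities can be estimated from the sets of genuine and impostor scores as in the sketch below; the threshold sweep and the trapezoidal AUC integration are implementation choices, not necessarily the procedure used in the paper.

```python
import numpy as np

def verification_metrics(genuine, impostor, n_thresholds=1000):
    """FAR/FRR curves, EER and ROC AUC from genuine/impostor similarity scores
    (a user is accepted when the score is greater than or equal to the threshold)."""
    g = np.asarray(genuine, dtype=float)
    im = np.asarray(impostor, dtype=float)
    thr = np.linspace(min(g.min(), im.min()), max(g.max(), im.max()), n_thresholds)

    far = np.array([(im >= t).mean() for t in thr])   # impostors wrongly accepted
    frr = np.array([(g < t).mean() for t in thr])     # genuines wrongly rejected

    k = np.argmin(np.abs(far - frr))                  # threshold where FAR ~ FRR
    eer = (far[k] + frr[k]) / 2.0

    order = np.argsort(far)                           # ROC: TAR (= 1 - FRR) vs FAR
    auc = np.trapz((1.0 - frr)[order], far[order])
    return thr, far, frr, float(eer), float(auc)
```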
Figure 5 shows the DET and ROC curves obtained for palmprint by varying α between 3 and 9, for β = 3, β = 4, and β = 5. The EER and AUC values computed for each curve of Figure 5 are reported in Table 1. As can be seen, an overall improvement in recognition results is observed as β increases; for the AUC, the improvement is systematic. As concerns the EER, the best values are obtained with β = 5 for any value of α, while, in most cases, β = 3 provides better results than β = 4. It is also worth noting that, as β increases, the lowest EER values are achieved for decreasing values of α, i.e., α = 7 and α = 8 for β = 3, α = 6 and α = 7 for β = 4, and α = 5 and α = 6 for β = 5. A similar behaviour can be observed for the AUC; it indicates that a stronger dilation (larger β) before the AND operation in the 3D template generation should be compensated by accepting only small differences in pixel occurrences between the two 3D templates during the matching operation.
The EER and AUC values obtained by using the different 3D templates for hand geometry are reported in Table 2. A detailed comparison between the DET curves was reported in a previous work [24].
Subsequently, the two characteristics are fused by employing the methods described in Equations (1)–(7) and by considering all analyzed combinations of α and β for palmprint, while, for hand geometry, only the 3D template obtained with GF is used in the fusion operations. Fusion results are reported in Table 3, Table 4 and Table 5 for β = 3, β = 4, and β = 5, respectively. As can be seen, the fusion between hand geometry and palmprint yields, in most cases, an improvement in recognition capability in terms of EER. In particular, a notable lowering of the EER, with respect to that of the best unimodal methods, is found for the great majority of cases with β = 5; the lowest values (about 0.06%) were achieved with the EERW (α = 4 and α = 5) and Kabir (α = 4, α = 5, and α = 8) methods. The value of 0.074%, obtained with the D-Prime method for β = 4 and α = 9, is noteworthy as well. It should be highlighted that, for all the above-reported cases, the fused EER is lower than the one obtained by choosing β = 5 and α = 6, i.e., the combination that provided the best result for the unimodal palmprint [31]. The worst fusion results were instead obtained for β = 3, in particular with the FDRW and D-Prime methods, for which an increase in EER is found.
Figure 6 shows the DET curves plotted for the best fusion methods and for the best unimodal palmprint and hand-geometry cases, where the dramatic improvement of the fusion results over the unimodal ones can be observed. The bisector FRR = FAR is not plotted for figure readability. As far as the AUC values are concerned, all fusion methods demonstrate an improvement. In particular, for β = 5, the EERW, D-Prime and Kabir methods achieve an AUC value equal to 100% for all α values; the same result is achieved for α = 9 and β = 4 as well.

5.2. Identification

Identification is an alternative modality to verification with the purpose of assigning an identity to an unknown person. The system compares a test template with all templates contained in a database: the highest score determines the identity if it exceeds a predefined threshold; otherwise, the person is considered as not present in the database.
Identification experiments were performed only for fusion scores that provided the best results in verification experiments, i.e., those obtained for β = 5 with the Kabir method ( α = 4, α = 5, α = 8), EERW ( α = 4, α = 5) and for β = 4 with D-Prime ( α = 7), and for the best palmprint and hand geometry cases.
Matching results are stored in 110 tables, one for each sample, each containing 109 scores sorted in descending order. An identification experiment is successful when all genuine scores occupy the first positions of the table, i.e., when the lowest genuine score is higher than the highest impostor score. The identification rate is defined as the number of successful tables over the total number of tables. An identification rate of 100% is registered for all analysed fusion methods, while the best unimodal palmprint and hand-geometry methods scored identification rates of 98% and 91%, respectively, again demonstrating the effectiveness of fusion.
To further test the robustness of the identification procedure, the normalized score difference (NSD) between the lowest genuine and the highest impostor score was calculated for all experiments. Figure 7 shows the distributions of these differences, normalized to the lowest genuine score, for the six fusion methods and for the best palmprint and hand-geometry cases. For each distribution, the mean, the standard deviation, and the number of occurrences of NSD < 0.1 are calculated and reported in Table 6. As can be seen, the fusion based on D-Prime appears to be the most robust because it exhibits the highest mean value and the lowest number of occurrences of NSD < 0.1 among the fusion methods, although it also shows the highest standard deviation. Note that a strict correlation among the three parameters seems to hold for all fusion methods: a decrease in the occurrences of NSD < 0.1 corresponds to an increase in both the mean and the standard deviation. For hand geometry and palmprint, the mean and standard deviation are reported for comparison, while, since they exhibited an identification rate lower than 100%, the occurrences of NSD < 0.1 are not reported.
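A minimal sketch of the per-table success check and of the NSD computation described above, assuming that the genuine and impostor scores of each probe are available as separate arrays:

```python
import numpy as np

def identification_outcome(genuine_scores, impostor_scores):
    """Success test and Normalized Score Difference (NSD) for one probe's table.

    Identification succeeds when the lowest genuine score is higher than the
    highest impostor score; NSD is their difference normalized to the lowest
    genuine score."""
    lowest_genuine = float(np.min(genuine_scores))
    highest_impostor = float(np.max(impostor_scores))
    success = lowest_genuine > highest_impostor
    nsd = (lowest_genuine - highest_impostor) / lowest_genuine
    return success, nsd

# The identification rate is then the fraction of successful tables, e.g.:
# rate = np.mean([identification_outcome(g, i)[0] for g, i in tables])
```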

6. Conclusions

In this work, the recognition performance of a multimodal system based on the fusion of three-dimensional palmprint and hand-geometry features, extracted from the same volumetric ultrasound images, is experimentally evaluated. Several fusion techniques based on the weighted score sum rule and on a wide variety of possible combinations of palmprint and hand-geometry scores, obtained by varying two main parameters (α and β) that significantly affect the palmprint scores, are proposed and tested. The recognition capabilities of the various methods are evaluated and compared by carrying out verification and identification experiments on a homemade database employed in a previous work [31]. Verification results demonstrated that the fusion, in most cases, yields a dramatic improvement in recognition performance with respect to unimodal systems. In particular, an EER value of about 0.06% is achieved in at least five cases, against values of 1.18% and 0.63% in the best cases for unimodal palmprint and hand geometry, respectively. Moreover, it was also shown that the best fusion results are not obtained by fusing the best scores of the two unimodal characteristics. Identification experiments, executed for the fusion methods that provided the best verification results, demonstrated an identification rate of 100%, against 98% and 91% obtained in the best cases for unimodal palmprint and hand geometry, respectively, again confirming the effectiveness of fusion.
The exceptionally high recognition accuracy, together with the other features of ultrasound (above all, the capability of effectively detecting liveness), makes this kind of system particularly suited for high-security access applications.
Future work will be devoted to experimenting with the acquisition of volumetric ultrasound images of the hand by employing gel as a coupling medium, instead of water, as already tested in previous works [59,60]. The benefits offered by this coupling approach include reduced invasiveness of the acquisition procedure and increased user comfort regarding hand placement. This approach will make it easier to establish a wider database, which will improve the effectiveness and reliability of the achieved results. In addition to the extraction of hand geometry and palmprint, the coupling approach will also allow vein patterns to be extracted from the same collected volume. Finally, alternative feature-extraction methods, mainly based on machine learning and deep learning [61], will be investigated in addition to other fusion techniques, particularly feature-level techniques.

Author Contributions

Conceptualization, A.I.; methodology, A.I.; software, M.M.; validation, M.M.; formal analysis, M.M.; resources, A.I.; writing—original draft preparation, M.M.; writing—review and editing, A.I.; supervision, A.I.; project administration, A.I.; funding acquisition, A.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Italian Government through the PRIN 2020 Program (Project n. 20205HFXE7).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, Y.; Shi, D.; Zhou, W. Convolutional Neural Network Approach Based on Multimodal Biometric System with Fusion of Face and Finger Vein Features. Sensors 2022, 22, 6039.
2. Ryu, R.; Yeom, S.; Kim, S.H.; Herbert, D. Continuous Multimodal Biometric Authentication Schemes: A Systematic Review. IEEE Access 2021, 9, 34541–34557.
3. Haider, S.; Rehman, Y.; Usman Ali, S. Enhanced multimodal biometric recognition based upon intrinsic hand biometrics. Electronics 2020, 9, 1916.
4. Bhilare, S.; Jaswal, G.; Kanhangad, V.; Nigam, A. Single-sensor hand-vein multimodal biometric recognition using multiscale deep pyramidal approach. Mach. Vis. Appl. 2018, 29, 1269–1286.
5. Kumar, A.; Zhang, D. Personal recognition using hand shape and texture. IEEE Trans. Image Process. 2006, 15, 2454–2461.
6. Charfi, N.; Trichili, H.; Alimi, A.; Solaiman, B. Bimodal biometric system for hand shape and palmprint recognition based on SIFT sparse representation. Multimed. Tools Appl. 2017, 76, 20457–20482.
7. Gupta, P.; Srivastava, S.; Gupta, P. An accurate infrared hand geometry and vein pattern based authentication system. Knowl. Based Syst. 2016, 103, 143–155.
8. Kanhangad, V.; Kumar, A.; Zhang, D. Contactless and pose invariant biometric identification using hand surface. IEEE Trans. Image Process. 2011, 20, 1415–1424.
9. Kumar, A. Toward More Accurate Matching of Contactless Palmprint Images under Less Constrained Environments. IEEE Trans. Inf. Forensics Secur. 2019, 14, 34–47.
10. Liang, X.; Li, Z.; Fan, D.; Zhang, B.; Lu, G.; Zhang, D. Innovative Contactless Palmprint Recognition System Based on Dual-Camera Alignment. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6464–6476.
11. Wu, W.; Elliott, S.; Lin, S.; Sun, S.; Tang, Y. Review of palm vein recognition. IET Biom. 2020, 9, 1–10.
12. Palma, D.; Blanchini, F.; Giordano, G.; Montessoro, P.L. A Dynamic Biometric Authentication Algorithm for Near-Infrared Palm Vascular Patterns. IEEE Access 2020, 8, 118978–118988.
13. Wang, R.; Müller, R. Bioinspired solution to finding passageways in foliage with sonar. Bioinspir. Biomim. 2021, 16, 066022.
14. Iula, A.; Bollino, G. A travelling wave rotary motor driven by three pairs of langevin transducers. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2012, 59, 121–127.
15. Pyle, R.; Bevan, R.; Hughes, R.; Rachev, R.; Ali, A.; Wilcox, P. Deep Learning for Ultrasonic Crack Characterization in NDE. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 1854–1865.
16. Carotenuto, R.; Merenda, M.; Iero, D.; Della Corte, F. An Indoor Ultrasonic System for Autonomous 3-D Positioning. IEEE Trans. Instrum. Meas. 2019, 68, 2507–2518.
17. Avola, D.; Cinque, L.; Fagioli, A.; Foresti, G.; Mecca, A. Ultrasound Medical Imaging Techniques. ACM Comput. Surv. 2021, 54.
18. Trimboli, P.; Bini, F.; Marinozzi, F.; Baek, J.H.; Giovanella, L. High-intensity focused ultrasound (HIFU) therapy for benign thyroid nodules without anesthesia or sedation. Endocrine 2018, 61, 210–215.
19. Iula, A. Ultrasound systems for biometric recognition. Sensors 2019, 19, 2317.
20. Schmitt, R.; Zeichman, J.; Casanova, A.; Delong, D. Model based development of a commercial, acoustic fingerprint sensor. In Proceedings of the IEEE International Ultrasonics Symposium, IUS, Dresden, Germany, 7–10 October 2012; pp. 1075–1085.
21. Lamberti, N.; Caliano, G.; Iula, A.; Savoia, A. A high frequency cMUT probe for ultrasound imaging of fingerprints. Sens. Actuator A Phys. 2011, 172, 561–569.
22. Jiang, X.; Tang, H.Y.; Lu, Y.; Ng, E.J.; Tsai, J.M.; Boser, B.E.; Horsley, D.A. Ultrasonic fingerprint sensor with transmit beamforming based on a PMUT array bonded to CMOS circuitry. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2017, 64, 1401–1408.
23. Iula, A.; Hine, G.; Ramalli, A.; Guidi, F. An Improved Ultrasound System for Biometric Recognition Based on Hand Geometry and Palmprint. Procedia Eng. 2014, 87, 1338–1341.
24. Iula, A. Biometric recognition through 3D ultrasound hand geometry. Ultrasonics 2021, 111, 106326.
25. Iula, A.; Nardiello, D. Three-dimensional ultrasound palmprint recognition using curvature methods. J. Electron. Imaging 2016, 25, 033009.
26. Nardiello, D.; Iula, A. A new recognition procedure for palmprint features extraction from ultrasound images. Lect. Notes Electr. Eng. 2019, 512, 113–118.
27. Iula, A.; Nardiello, D. 3-D Ultrasound Palmprint Recognition System Based on Principal Lines Extracted at Several under Skin Depths. IEEE Trans. Instrum. Meas. 2019, 68, 4653–4662.
28. De Santis, M.; Agnelli, S.; Nardiello, D.; Iula, A. 3D Ultrasound Palm Vein recognition through the centroid method for biometric purposes. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017.
29. Iula, A.; Vizzuso, A. 3D Vascular Pattern Extraction from Grayscale Volumetric Ultrasound Images for Biometric Recognition Purposes. Appl. Sci. 2022, 12, 8285.
30. Micucci, M.; Iula, A. Ultrasound wrist vein pattern for biometric recognition. In Proceedings of the 2022 IEEE International Ultrasonics Symposium, IUS, Venice, Italy, 10–13 October 2022; Volume 2022.
31. Iula, A.; Micucci, M. Multimodal Biometric Recognition Based on 3D Ultrasound Palmprint-Hand Geometry Fusion. IEEE Access 2022, 10, 7914–7925.
32. Chen, S.; Guo, Z.; Feng, J.; Zhou, J. An Improved Contact-Based High-Resolution Palmprint Image Acquisition System. IEEE Trans. Instrum. Meas. 2020, 69, 6816–6827.
33. Palma, D.; Montessoro, P.; Giordano, G.; Blanchini, F. Biometric Palmprint Verification: A Dynamical System Approach. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2676–2687.
34. Zhang, L.; Li, H.; Niu, J. Fragile Bits in Palmprint Recognition. IEEE Signal Process. Lett. 2012, 19, 663–666.
35. Fei, L.; Lu, G.; Jia, W.; Teng, S.; Zhang, D. Feature extraction methods for palmprint recognition: A survey and evaluation. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 346–363.
36. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA convolutional networks for touchless palmprint recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174.
37. Zhong, D.; Zhu, J. Centralized Large Margin Cosine Loss for Open-Set Deep Palmprint Recognition. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1559–1568.
38. Shao, H.; Zhong, D.; Du, X. Deep Distillation Hashing for Unconstrained Palmprint Recognition. IEEE Trans. Instrum. Meas. 2021, 70.
39. Fei, L.; Zhang, B.; Jia, W.; Wen, J.; Zhang, D. Feature Extraction for 3-D Palmprint Recognition: A Survey. IEEE Trans. Instrum. Meas. 2020, 69, 645–656.
40. Sharma, S.; Dubey, S.; Singh, S.; Saxena, R.; Singh, R. Identity verification using shape and geometry of human hands. Expert Syst. Appl. 2015, 42, 821–832.
41. Klonowski, M.; Plata, M.; Syga, P. User authorization based on hand geometry without special equipment. Pattern Recognit. 2018, 73, 189–201.
42. Iula, A.; Savoia, A.; Caliano, G. Capacitive micro-fabricated ultrasonic transducers for biometric applications. Microelectron. Eng. 2011, 88, 2278–2280.
43. Iula, A.; Savoia, A.S.; Caliano, G. An ultrasound technique for 3D palmprint extraction. Sens. Actuator A Phys. 2014, 212, 18–24.
44. Iula, A.; Hine, G.E.; Ramalli, A.; Guidi, F.; Boni, E.; Savoia, A.S.; Caliano, G. An enhanced ultrasound technique for 3D palmprint recognition. In Proceedings of the 2013 IEEE International Ultrasonics Symposium (IUS), Prague, Czech Republic, 21–25 July 2013; pp. 978–981.
45. Aldjia, B.; Leila, B. Sensor Level Fusion for Multi-modal Biometric Identification using Deep Learning. In Proceedings of the 2021 IEEE International Conference on Recent Advances in Mathematics and Informatics, ICRAMI 2021, Tebessa, Algeria, 21–22 September 2021.
46. Safavipour, M.; Doostari, M.; Sadjedi, H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J. Med. Signals Sens. 2022, 12, 177–191.
47. Hanmandlu, M.; Grover, J.; Gureja, A.; Gupta, H.M. Score level fusion of multimodal biometrics using triangular norms. Pattern Recognit. Lett. 2011, 32, 1843–1850.
48. Punyani, P.; Gupta, R.; Kumar, A. A multimodal biometric system using match score and decision level fusion. Int. J. Inf. Technol. 2022, 14, 725–730.
49. Devi, D.; Rao, K. Decision level fusion schemes for a Multimodal Biometric System using local and global wavelet features. In Proceedings of the CONECCT 2020-6th IEEE International Conference on Electronics, Computing and Communication Technologies, Bangalore, India, 2–4 July 2020.
50. Dwivedi, R.; Dey, S. Score-level fusion for cancelable multi-biometric verification. Pattern Recognit. Lett. 2019, 126, 58–67.
51. Peng, J.; El-Latif, A.; Li, Q.; Niu, X. Multimodal biometric authentication based on score level fusion of finger biometrics. Optik 2014, 125, 6891–6897.
52. El-Latif, A.; Hossain, M.; Wang, N. Score level multibiometrics fusion approach for healthcare. Clust. Comput. 2019, 22, 2425–2436.
53. Zhang, D.; Lu, G.; Li, W.; Zhang, L.; Luo, N. Palmprint recognition using 3-D information. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2009, 39, 505–519.
54. Damer, N.; Opel, A.; Nouak, A. Biometric source weighting in multi-biometric fusion: Towards a generalized and robust solution. In Proceedings of the European Signal Processing Conference, Lisbon, Portugal, 1–5 September 2014; pp. 1382–1386.
55. Kabir, W.; Ahmad, M.; Swamy, M. Normalization and weighting techniques based on genuine-impostor score fusion in multi-biometric systems. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1989–2000.
56. Poh, N.; Bengio, S. A study of the effects of score normalisation prior to fusion in biometric authentication tasks. Technical Report, IDIAP, 2004. Available online: https://infoscience.epfl.ch/record/83130 (accessed on 14 February 2023).
57. Snelick, R.; Uludag, U.; Mink, A.; Indovina, M.; Jain, A. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 450–455.
58. Tortoli, P.; Bassi, L.; Boni, E.; Dallai, A.; Guidi, F.; Ricci, S. ULA-OP: An advanced open platform for ultrasound research. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2009, 56, 2207–2216.
59. Iula, A.; Micucci, M. A Feasible 3D Ultrasound Palmprint Recognition System for Secure Access Control Applications. IEEE Access 2021, 9, 39746–39756.
60. Iula, A.; Micucci, M. Experimental validation of a reliable palmprint recognition system based on 2D ultrasound images. Electronics 2019, 8, 1393.
61. Micucci, M.; Iula, A. Recent Advances in Machine Learning Applied to Ultrasound Imaging. Electronics 2022, 11, 1800.
Figure 1. Example of 3D rendered human hand.
Figure 2. (a) Feature points extracted from the hand shape and the 26 distances defining the 2D template; (b) ROI extraction for palmprint from two feature points of (a).
Figure 3. Palmprint feature extraction procedure step by step: (a) 2D grayscale image at 350 μm; (b) image after detection of edges; (c) feature extraction along direction 0°; (d) along 90°; (e) along 180°; (f) along 270°; (g) logical OR of the images obtained along the four directions; (h) final 2D template.
Figure 4. (a) Three-dimensional template represented as a colour scale matrix where the trait’s depth varies from 0 to 13; (b) 2D greyscale render of the same sample.
Figure 5. DET (first line) and ROC (second line) curves obtained with palmprint templates by varying α and β values.
Figure 6. DET curves obtained with the best fusion methods. Best unimodal palmprint and hand geometry curves are also reported for comparison.
Figure 7. Distribution of Normalized Score Difference (NSD) between the highest impostor score and the lowest genuine score for: (a) Kabir ( α = 4, β = 5), (b) Kabir ( α = 5, β = 5), (c) Kabir ( α = 8, β = 5), (d) EERW ( α = 4, β = 5), (e) EERW ( α = 5, β = 5), (f) D-Prime ( α = 9, β = 4), (g) Palmprint, (h) HG. As can be seen, for fusion methods, NSD values are always higher than 0, ensuring an identification rate equal to 100%.
Table 1. 3D Palmprint: EER and AUC values for all curves reported in Figure 5.

              β = 3              β = 4              β = 5
Method    EER      AUC       EER      AUC       EER      AUC
α = 3     3.08%    99.35%    2.00%    99.61%    1.60%    99.85%
α = 4     2.55%    99.43%    2.75%    99.69%    1.54%    99.88%
α = 5     2.13%    99.50%    2.53%    99.71%    1.48%    99.88%
α = 6     1.82%    99.53%    1.93%    99.70%    1.18%    99.87%
α = 7     1.59%    99.53%    1.91%    99.69%    1.64%    99.84%
α = 8     1.54%    99.50%    2.08%    99.67%    1.78%    99.82%
α = 9     1.75%    99.55%    2.49%    99.77%    2.04%    99.79%
Table 2. 3D Hand Geometry: EER and AUC values for the three types of 3D templates.

Method    EER      AUC
GF        0.64%    99.94%
MF        0.74%    99.94%
WMF       0.93%    99.94%
Table 3. EER and AUC values obtained using various fusion methods and α values for β = 3.

              EERW               D-Prime            FDRW               MEW                Kabir
Method    EER      AUC       EER      AUC       EER      AUC       EER      AUC       EER      AUC
α = 3     0.22%    99.99%    0.90%    99.98%    0.89%    99.98%    0.20%    99.99%    0.24%    99.99%
α = 4     0.24%    99.99%    0.88%    99.98%    0.90%    99.97%    0.26%    99.99%    0.28%    99.99%
α = 5     0.28%    99.99%    0.86%    99.98%    0.90%    99.97%    0.33%    99.99%    0.38%    99.99%
α = 6     0.30%    99.99%    0.90%    99.98%    0.90%    99.96%    0.22%    100%      0.37%    100%
α = 7     0.33%    99.99%    0.88%    99.98%    0.89%    99.96%    0.23%    100%      0.36%    100%
α = 8     0.32%    99.99%    0.90%    99.97%    0.80%    99.97%    0.24%    100%      0.37%    100%
α = 9     0.34%    100%      0.90%    99.98%    0.90%    99.96%    0.33%    99.99%    0.63%    99.99%
Table 4. EER and AUC values obtained using various fusion methods and α values for β = 4.

              EERW               D-Prime            FDRW               MEW                Kabir
Method    EER      AUC       EER      AUC       EER      AUC       EER      AUC       EER      AUC
α = 3     0.21%    99.99%    0.55%    99.99%    0.81%    99.99%    0.25%    99.98%    0.14%    99.99%
α = 4     0.16%    99.99%    0.52%    99.99%    0.85%    99.99%    0.21%    99.99%    0.18%    100%
α = 5     0.16%    99.99%    0.68%    99.99%    0.86%    99.99%    0.16%    99.99%    0.16%    100%
α = 6     0.14%    100%      0.63%    99.99%    0.90%    99.98%    0.094%   99.99%    0.90%    100%
α = 7     0.47%    99.99%    0.47%    99.99%    0.65%    99.99%    0.47%    100%      0.20%    100%
α = 8     0.15%    100%      0.75%    99.99%    0.90%    99.99%    0.14%    100%      0.21%    100%
α = 9     0.16%    100%      0.074%   100%      0.15%    100%      0.15%    100%      0.14%    100%
Table 5. EER and AUC values obtained using various fusion methods and α values for β = 5.

              EERW               D-Prime            FDRW               MEW                Kabir
Method    EER      AUC       EER      AUC       EER      AUC       EER      AUC       EER      AUC
α = 3     0.14%    100%      0.20%    100%      0.14%    100%      0.27%    100%      0.33%    100%
α = 4     0.063%   100%      0.12%    100%      0.29%    100%      0.18%    100%      0.058%   100%
α = 5     0.063%   100%      0.22%    100%      0.28%    99.99%    0.12%    99.99%    0.062%   100%
α = 6     0.081%   100%      0.22%    100%      0.24%    99.99%    0.41%    99.99%    0.098%   100%
α = 7     0.15%    100%      0.20%    100%      0.30%    99.99%    0.15%    100%      0.1%     100%
α = 8     0.13%    100%      0.23%    100%      0.27%    100%      0.15%    100%      0.06%    100%
α = 9     0.15%    100%      0.24%    100%      0.29%    99.99%    0.083%   100%      0.088%   100%
Table 6. Mean, Standard Deviation and occurrences of NSD < 0.1 for the distributions of Figure 7.

Method                      Mean      Standard Deviation    NSD < 0.1
Kabir (α = 4, β = 5)        0.1255    0.0498                27
Kabir (α = 5, β = 5)        0.1313    0.0501                23
Kabir (α = 8, β = 5)        0.1328    0.0503                21
D-Prime (α = 7, β = 4)      0.1785    0.0655                15
EERW (α = 4, β = 5)         0.1209    0.0488                31
EERW (α = 5, β = 5)         0.1247    0.0487                30
HG                          0.0585    0.0423
Palmprint                   0.2085    0.1251
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
