Article

Identity Recognition System Based on Multi-Spectral Palm Vein Image

School of Information Engineering, Shenyang University, Shenyang 110044, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(16), 3503; https://doi.org/10.3390/electronics12163503
Submission received: 17 July 2023 / Revised: 11 August 2023 / Accepted: 15 August 2023 / Published: 18 August 2023
(This article belongs to the Special Issue Biometric Recognition: Latest Advances and Prospects)

Abstract

A multi-spectral palm vein image acquisition device designed for open environments has been developed to achieve a highly secure and user-friendly biometric recognition system. Furthermore, we study a supervised discriminative sparse principal component analysis algorithm that preserves the neighborhood structure for palm vein recognition. The algorithm incorporates label information, sparse constraints, and local information for effective supervised learning. By employing a robust neighborhood selection technique, it extracts discriminative and interpretable principal component features from non-uniformly distributed multi-spectral palm vein images. The algorithm addresses challenges posed by light scattering, as well as issues related to rotation, translation, scale variation, and illumination changes during non-contact image acquisition, which increase intra-class distance. Experimental tests are conducted using databases from CASIA, Tongji University, and Hong Kong Polytechnic University, as well as a self-built multi-spectral palm vein dataset. The results demonstrate that, with the optimal projection parameters, the algorithm achieves the lowest equal error rates of 0.50%, 0.19%, 0.16%, and 0.10%, respectively. Compared to other typical methods, the algorithm exhibits distinct advantages and holds practical value.

1. Introduction

Palm vein recognition technology appeared in 1991 [1], and it utilizes the uniqueness and long-term stability of palm vein distribution for identity authentication [2,3]. It has attracted attention because of its high security, liveness detection, user acceptability [4], and convenience. Traditional palm vein recognition requires users to place their palms inside an image acquisition box to avoid interference from visible light. To enhance recognition accuracy, contact-based capture with hand immobilization devices is used, imposing significant constraints on users. Non-contact palm vein acquisition in open environments is more user-friendly, but it introduces interference from visible light on near-infrared imaging. The non-transparency, non-uniformity, and heterogeneity of the tissues surrounding the palm veins scatter the near-infrared light used for palm vein illumination [3]. The presence of visible light in open environments exacerbates this scattering, increases noise, and leads to unclear imaging and poor image quality, reducing the amount of useful information. This is the fundamental factor limiting the recognition performance of palm vein images. Furthermore, the non-contact acquisition method enlarges intra-class differences due to rotations, translations, scaling, and changes in illumination across multiple captures of the same sample. These two difficulties make palm vein recognition highly challenging.
Palm vein recognition technology has developed four main approaches based on feature extraction methods: texture-based methods, structure-based methods, deep learning-based methods, and sub-space-based methods [3].
Texture-based methods, such as the double Gabor orientation Weber local descriptor (DGWLD) [5], multi-scale Gabor orientation Weber local descriptors (MOGWLD) [6], difference of block means (DBM) [7], democratic voting down-sampling (DVD) [8], and the various local binary pattern (LBP) [9] variants mentioned in [10], extract information about the direction, frequency, and phase of palm vein texture as features for matching and recognition. However, these methods are limited by the insufficient richness and clarity of texture information in palm vein images, which can degrade recognition performance.
Structure-based methods, such as the speeded-up robust feature (SURF) operator [11], histogram of oriented gradient (HOG) [12], and maximum curvature direction feature (MCDF) [13], extract point- and line-based structural features to represent palm veins. Recognition performance may be adversely affected in cases of poor image quality, as certain point and line features might be lost.
Deep learning-based methods employ various neural networks to automatically extract features and perform classification and recognition, overcoming the limitations of hand-crafted feature extraction. For instance, Wu et al. [2] selectively emphasized useful classification features and weakened less useful ones using the SER model, thereby addressing issues related to rotation, translation, and scaling. Similarly, Jia et al. [14] applied neural architecture search (NAS) techniques to overcome the drawbacks of manually designed CNNs, expanding the application of NAS technology in palm vein recognition. However, these methods may require large palm vein databases, limiting their applicability.
Sub-space-based methods, such as two-dimensional principal component analysis (2D-PCA) [15], neighborhood-preserving embedding (NPE) [16], two-dimensional Bhattacharyya bound linear discriminant analysis [17], and variants [18,19,20] of classical methods such as PCA, treat palm vein images as high-dimensional vectors or matrices. These methods transform the palm vein images into low-dimensional representations through projection or mathematical transformations for image matching and classification. Sub-space methods offer advantages, such as high recognition rates and low system resource consumption, compared to other approaches. However, due to their disregard for the texture features of the images, they may exhibit a certain degree of blindness in the dimensionality reduction process. This could lead to the omission of some discriminative features that are crucial for classification, particularly in non-contact acquisition methods in open environments, where the impact on recognition performance becomes more pronounced.
Non-contact palm vein image acquisition in open environments has garnered significant research attention due to its hygienic and convenient nature, offering promising prospects for various applications. Nevertheless, the scarcity of non-contact acquisition devices and publicly available open-environment datasets has impeded progress in non-contact palm vein recognition research. Consequently, this study makes three key contributions. First, a multi-spectral palm vein image acquisition device specifically designed for open environments is proposed. Second, a non-contact palm vein image dataset is established using the developed acquisition device. Finally, to address the existing challenges in the field, the study introduces a supervised discriminative sparse principal component analysis algorithm with a preserved neighborhood structure (SDSPCA-NPE) for palm vein recognition. As a sub-space method, this approach combines supervised label information with sparse constraints, resulting in discriminative and highly interpretable palm vein features. It mitigates issues related to unclear imaging and poor texture quality, expands the inter-class distance of the projected data, and enhances discriminability among different palm vein samples. During projection, the concept of neighborhood structure information, commonly employed in non-linear dimensionality reduction methods, is introduced. Robust neighborhood selection techniques are utilized to preserve similar local structures in palm vein samples before and after projection. This approach captures the non-uniform distribution of palm vein images and alleviates the increased intra-class variation caused by rotation, scaling, translation, and lighting changes. Experimental evaluations conducted on the self-built palm vein database and commonly used public multi-spectral palm vein databases, including the CASIA (Institute of Automation, Chinese Academy of Sciences) database [21], the Tongji University database, and the Hong Kong Polytechnic University database [22], demonstrate the superior performance of the proposed method compared to current typical methods.
The remaining sections of this paper are organized as follows: Section 2 introduces the self-developed acquisition device; Section 3 presents the proposed algorithm; Section 4 describes the experiments and results analysis; and Section 5 concludes the paper.

2. Multi-Spectral Image Capture Device

When near-infrared (NIR) light in the range of 720–1100 nm penetrates the palm, the components of biological tissue absorb NIR radiation at different rates; hemoglobin in the blood (both oxygenated and deoxygenated) absorbs strongly. This leads to the formation of observable shadows, allowing the identification of vein locations and the generation of palm vein images [3]. Due to reflection, scattering, and fluorescence in the different tissues of the palm, the optical penetration depth varies from 0.5 mm to 5 mm. Vein acquisition devices [2] can therefore only capture superficial veins, and palm vein images are typically obtained using a reflection imaging approach. To improve user acceptance and enhance comfort during palm vein recognition, open-environment capture is employed, which unavoidably allows visible light (390–780 nm) to illuminate the palm and enter the imaging system, resulting in the acquisition of multi-spectral palm vein images. As shown in Figure 1, visible light entering the skin increases light scattering, thereby interfering with clear palm vein imaging [22].
The self-developed non-contact palm vein image acquisition device for open environments is shown in Figure 2. To enhance the absorption of near-infrared light by palm veins and minimize interference from visible light, the device employs two CST near-infrared linear light sources (model BL-270-42-IR) with a wavelength of 850 nm, equipped with an intensity adjustment controller. Images are captured with an industrial camera (model MV-VD120SM) as 8-bit grayscale images with a resolution of 1280 × 960 pixels.

3. Method

The proposed methodology consists of the following steps: (1) image pre-processing, (2) feature extraction (SDSPCA-NPE), and (3) feature matching and recognition.

3.1. Image Pre-Processing

The most important issue in image pre-processing is the localization of the region of interest (ROI). ROI extraction normalizes the feature area of different palm veins, significantly reducing computational complexity. In this study, the ROI localization method proposed in reference [2] was adopted. This method identifies stable feature points on the palm, namely the valleys between the index and middle fingers and between the ring finger and little finger. Through ROI extraction, it partially corrects image rotation, translation, and scaling issues caused by non-contact imaging.
The ROI extraction process is illustrated in Figure 3. Firstly, the original image (Figure 3a) is denoised using a low-pass filter. Then, the image is binarized, and the palm contour is extracted using binary morphological dilation, refining it to a single-pixel width. Vertical line scanning is performed from the right side to the left side of the image, and the number of intersection points between the palm contour and the scan line is counted. When there are 8 intersection points, it indicates that the scan line passes through four fingers, excluding the thumb. From the second intersection point, p2, to the third intersection point, p3, the palm contour is traced to locate the valley point, point A, between the index finger and the middle finger (Figure 3c), using the disk method [2]. Similarly, between p6 and p7, the valley point, point B, between the ring finger and the little finger is located. Points A and B are connected, forming a square ABCD on the palm with the side length equal to the length of AB, denoted as d. This square is then grayscale normalized and resized to a size of 128 pixels × 128 pixels, resulting in the desired ROI, as shown in Figure 3d.
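As an illustration of the scan-line step, the following minimal NumPy sketch assumes a single-pixel-wide binary contour image as produced by the morphological processing described above; the function names are ours, and the disk-method valley search [2], grayscale normalization, and resizing to 128 × 128 pixels are omitted.

```python
import numpy as np

def find_four_finger_scan(contour: np.ndarray):
    """Scan vertical lines from the right side to the left over a single-pixel-wide
    palm contour image (non-zero pixels = contour). Return the first column where
    the scan line crosses the contour exactly 8 times, i.e. it passes through the
    four fingers (thumb excluded), together with the crossing rows p1..p8."""
    height, width = contour.shape
    for col in range(width - 1, -1, -1):
        rows = np.flatnonzero(contour[:, col] > 0)   # contour pixels hit by this scan line
        if rows.size == 8:                           # p1..p8: the four fingers are crossed
            return col, rows
    return None, None

def square_side_length(A, B):
    """Side length d of the ROI square ABCD, equal to the distance |AB| between the
    valley points A (between p2 and p3) and B (between p6 and p7)."""
    return float(np.hypot(A[0] - B[0], A[1] - B[1]))
```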

3.2. Feature Extraction (SDSPCA-NPE)

Palm vein images often encounter interference in the form of partial noise and deformation during the non-contact acquisition process. These disturbances not only increase the difficulty of processing palm vein data but also pose challenges to dimensionality reduction and classification, which hinder palm vein image recognition. To address these unique characteristics of palm vein images, this study employs supervised discriminative sparse principal component analysis (SDSPCA) [23] for dimensionality reduction and recognition. SDSPCA combines supervised discriminative information and sparse constraints into the PCA model [15], enhancing interpretability and mitigating the impact of high inter-class ambiguity in palm vein image samples. By projecting palm vein images using SDSPCA, the integration of sparse constraints and supervised learning achieves more effective dimensionality reduction for classification tasks, ultimately improving the recognition performance of palm vein images. The SDSPCA model is depicted below:
$$\min_{Q}\ \|X - QQ^{T}X\|_{F}^{2} + \alpha\,\|Y - QQ^{T}Y\|_{F}^{2} + \beta\,\|Q\|_{2,1} \quad \text{s.t.}\ Q^{T}Q = I_{k}$$
The model is optimized as follows [23]:
Step 1:
$$\|X - QQ^{T}X\|_{F}^{2} = \mathrm{Tr}\big((X - QQ^{T}X)^{T}(X - QQ^{T}X)\big) = \mathrm{Tr}(X^{T}X - X^{T}QQ^{T}X) = \mathrm{Tr}(X^{T}X) - \mathrm{Tr}(Q^{T}XX^{T}Q)$$
$\mathrm{Tr}(X^{T}X)$ is a fixed value, independent of the solution of the minimization problem.
Step 2:
$$\min_{Q}\ \|X - QQ^{T}X\|_{F}^{2} = \min_{Q}\ -\mathrm{Tr}(Q^{T}XX^{T}Q)$$
Through a simple algebraic calculation [24], the full objective can be rewritten as follows:
$$\begin{aligned} \min_{Q}\ & \|X - QQ^{T}X\|_{F}^{2} + \alpha\,\|Y - QQ^{T}Y\|_{F}^{2} + \beta\,\|Q\|_{2,1} \\ & = \min_{Q}\ -\mathrm{Tr}(Q^{T}XX^{T}Q) - \alpha\,\mathrm{Tr}(Q^{T}YY^{T}Q) + \beta\,\mathrm{Tr}(Q^{T}DQ) \\ & = \min_{Q}\ \mathrm{Tr}\big(Q^{T}(-XX^{T} - \alpha YY^{T} + \beta D)Q\big) \quad \text{s.t.}\ Q^{T}Q = I_{k} \end{aligned}$$
In the proposed method, α and β are weight parameters. The training data matrix is $X = [x_{1}, \ldots, x_{n}]^{T} \in \mathbb{R}^{n \times d}$, where n is the number of training samples and d is the feature dimension. The label matrix of the dataset X is $Y = [y_{1}, \ldots, y_{n}]^{T} \in \mathbb{R}^{n \times c}$, constructed as follows:
$$Y_{i,j} = \begin{cases} 1, & \text{if } c_{j} = i,\; j = 1,2,\ldots,n,\; i = 1,2,\ldots,c \\ 0, & \text{otherwise} \end{cases}$$
where c represents the number of classes in the training data, and $c_{j} \in \{1, \ldots, c\}$ represents the class label of the j-th sample. The optimal Q consists of the k-tail eigenvectors (those associated with the k smallest eigenvalues) of $Z = -XX^{T} - \alpha YY^{T} + \beta D$, where $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix whose i-th diagonal element is:
$$D_{ii} = \frac{1}{2\sqrt{\sum_{j=1}^{k} Q_{ij}^{2} + \epsilon}}$$
where ε is a small positive constant to avoid dividing by zero.
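The optimization above can be sketched in a few lines of NumPy. This is an illustrative implementation of the update rule (one-hot label construction, D update, and eigen-decomposition of Z), written under our own naming assumptions rather than the authors' reference code; labels are assumed to be integer class indices.

```python
import numpy as np

def one_hot_labels(labels, n_classes):
    """Label matrix Y in R^{n x c}: Y[j, i] = 1 if sample j belongs to class i, else 0."""
    Y = np.zeros((len(labels), n_classes))
    Y[np.arange(len(labels)), labels] = 1.0
    return Y

def update_D(Q, eps=1e-8):
    """Diagonal matrix for the l2,1-norm term: D_ii = 1 / (2 * sqrt(sum_j Q_ij^2 + eps))."""
    row_norms = np.sqrt(np.sum(Q ** 2, axis=1) + eps)
    return np.diag(1.0 / (2.0 * row_norms))

def sdspca(X, Y, k, alpha, beta, n_iter=20):
    """Iteratively solve min Tr(Q^T (-X X^T - alpha Y Y^T + beta D) Q), s.t. Q^T Q = I_k:
    update D from the current Q, then keep the k eigenvectors of Z with the smallest
    eigenvalues. X is n x d (rows = samples), Y is the n x c label matrix."""
    n = X.shape[0]
    Q = np.linalg.qr(np.random.randn(n, k))[0]       # orthonormal initialization
    XXt, YYt = X @ X.T, Y @ Y.T
    for _ in range(n_iter):
        D = update_D(Q)
        Z = -XXt - alpha * YYt + beta * D
        _, eigvecs = np.linalg.eigh(Z)               # eigenvalues in ascending order
        Q = eigvecs[:, :k]                           # "k-tail" (smallest) eigenvectors
    return Q
```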
SDSPCA has achieved good performance in palm vein recognition. However, when faced with partially low-quality palm vein images, the method exhibits a performance decline. The underlying reason lies in the sparse characteristics of palm vein images, where the effective information often occupies only a few dimensions of the high-dimensional space. This effective information exhibits inherent structures and features among palm vein images, providing a certain level of correlation and similarity among data points in high-dimensional space. These correlations and similarities cause the palm vein dataset to form a low-dimensional manifold in the high-dimensional space, which is essential for dimensionality reduction-based recognition. However, palm vein images captured in open environments suffer from quality degradation due to the scattering of the near-infrared illumination in the palm. This leads to unclear imaging of some palm vein patterns and poor image quality. Additionally, within the same sample, variations in rotation, scale, translation, and lighting conditions across multiple captures further increase image differences. Under the influence of these factors, the effective information is reduced and interfered with, leading to interactions in high-dimensional space. As a result, palm vein images form an unevenly distributed low-dimensional manifold with high inter-class similarity and large intra-class differences. Because SDSPCA is a linear dimensionality reduction method that applies a linear projection to the entire dataset, it has limitations in capturing the uneven non-linear geometric structure of the palm vein dataset in high-dimensional space. Consequently, palm vein samples cannot be well distributed in the final linear projection space, limiting the classification capability.
In order to address the aforementioned issue, previous researchers have utilized several non-linear dimensionality reduction methods, such as locally preserving projection (LPP) [25], locally linear embedding (LLE) [26], and neighborhood preserving embedding (NPE). However, although these methods have achieved non-linear dimensionality reduction, their classification capability is limited. Therefore, this paper proposes a supervised discriminative sparse PCA algorithm, named SDSPCA-NPE, that preserves the neighborhood structure. This algorithm inherits the advantages of SDSPCA in enlarging the inter-class distance through supervised learning while overcoming its limitations. By incorporating the constraints of NPE, the proposed method introduces local structural information to capture the non-linear geometric structure of the palm vein dataset. As a result, the projected palm vein data exhibits an improved distance distribution, enhancing the classification performance of palm vein data. The model for NPE is presented as follows:
$$\min \sum_{i} \Big\| x_{i} - \sum_{j} W_{ij}\,x_{j} \Big\|^{2} = \min_{W}\ \mathrm{Tr}\big(X^{T}(I - W)^{T}(I - W)X\big) = \min_{W}\ \mathrm{Tr}(X^{T}MX) \quad \text{s.t.}\ X^{T}X = I_{k}$$
M is the product $(I - W)^{T}(I - W)$. $W_{ij}$ is the matrix of distance weight coefficients between $X_{i}$ and $X_{j}$ in the original space, constructed as follows: if $X_{i}$ and $X_{j}$ are k-nearest neighbors, $W_{ij}$ is computed with the heat kernel function below; otherwise, $W_{ij} = 0$.
$$W_{ij} = e^{-\frac{\|X_{i} - X_{j}\|^{2}}{t}}$$
Here, t is the bandwidth parameter used in the computation of the heat kernel matrix [27].
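A sketch of how W and M can be built with a k-nearest-neighbor search and the heat kernel above is given below; the values of k and t are illustrative defaults, and the code is our illustration rather than the paper's implementation.

```python
import numpy as np

def heat_kernel_weights(X, k=5, t=1.0):
    """Neighborhood weights for NPE: W_ij = exp(-||x_i - x_j||^2 / t) if x_j is one of
    the k nearest neighbors of x_i, and 0 otherwise. Also returns M = (I - W)^T (I - W)."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
    np.maximum(d2, 0.0, out=d2)                        # guard against tiny negative values
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]                # k nearest neighbors, excluding i itself
        W[i, nn] = np.exp(-d2[i, nn] / t)
    I_minus_W = np.eye(n) - W
    M = I_minus_W.T @ I_minus_W                        # M = (I - W)^T (I - W)
    return W, M
```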
NPE, in essence, aims to preserve the local linear relationships among palm vein data samples during dimensionality reduction. It directly approximates a non-linear projection space that satisfies the implicit low-dimensional manifold of palm vein data while striving to retain the relative proximity relationships between data points. This ensures the connectivity of the same samples during dimensionality reduction of low-dimensional manifolds, thereby reducing the impact of noise and outliers.
Specifically, NPE performs linear reconstruction on palm vein samples within local regions, typically using a k-nearest neighbor approach. By minimizing the reconstruction error in the dimensionality reduction process, the local structure is preserved in the projection space. This yields a complex non-linear low-dimensional embedding space within the high-dimensional manifold, consequently reducing the intra-class distance after projection and achieving optimal dimensionality reduction. Figure 4 illustrates the schematic diagram of the NPE principle.
The neighborhood preserving embedding (NPE) technique employed in our proposed method allows for capturing the non-linear and non-uniform distribution of the palm vein dataset in high-dimensional space. This capability enables the method to mitigate the effects of variations, such as rotation, scaling, translation, and changes in lighting conditions, which often lead to increased differences in palm vein images of the same class across multiple captures. Moreover, it helps reduce the interference caused by outliers present in the palm vein samples. In the context of palm vein recognition, when the original palm vein data exhibits a non-uniform distribution within a class due to the influence of outliers, linear dimensionality reduction methods that seek the final projection space through global linear transformations often fail to preserve the non-linear and non-uniform distribution structure of the high-dimensional palm vein dataset. Consequently, they demonstrate low tolerance towards outliers during dimensionality reduction, resulting in the misclassification of such samples. In contrast, by applying NPE’s non-linear mapping and utilizing robust neighborhood selection techniques, the method encompasses the outliers within the neighborhood range. This allows the outliers to be pulled closer to samples of the same class during the dimensionality reduction process, ultimately resulting in a more compact distribution of palm vein samples within the low-dimensional space for the same class and larger separations from samples of other classes.
The proposed method is as follows:
$$\begin{aligned} \min_{Q}\ & \|X - QQ^{T}X\|_{F}^{2} + \alpha\,\|Y - QQ^{T}Y\|_{F}^{2} + \beta\,\|Q\|_{2,1} + \delta \sum_{i} \Big\| x_{i} - \sum_{j} W_{ij}\,x_{j} \Big\|^{2} \\ & = \min_{Q}\ -\mathrm{Tr}(Q^{T}XX^{T}Q) - \alpha\,\mathrm{Tr}(Q^{T}YY^{T}Q) + \beta\,\mathrm{Tr}(Q^{T}DQ) + \delta\,\mathrm{Tr}(Q^{T}XX^{T}MXX^{T}Q) \\ & = \min_{Q}\ \mathrm{Tr}\big(Q^{T}(-XX^{T} - \alpha YY^{T} + \beta D + \delta XX^{T}MXX^{T})Q\big) \quad \text{s.t.}\ Q^{T}Q = I_{k} \end{aligned}$$
The optimal Q consists of the k-tail eigenvectors of $Z = -XX^{T} - \alpha YY^{T} + \beta D + \delta XX^{T}MXX^{T}$. Q is first initialized, D is calculated from the formula given above, and Z is obtained; Q is then updated iteratively from Z and D until the optimal Q is reached [23].
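Putting the pieces together, the iterative solution can be sketched as follows, reusing update_D and heat_kernel_weights from the earlier sketches; again, this is a hedged illustration of the update described in the text, not the authors' implementation.

```python
import numpy as np

def sdspca_npe(X, Y, M, k, alpha, beta, delta, n_iter=20):
    """Combined solver sketch: iterate the D update and the eigen-decomposition of
    Z = -X X^T - alpha Y Y^T + beta D + delta X X^T M X X^T, keeping the k eigenvectors
    with the smallest eigenvalues as the projection Q."""
    n = X.shape[0]
    Q = np.linalg.qr(np.random.randn(n, k))[0]
    XXt = X @ X.T
    fixed = -XXt - alpha * (Y @ Y.T) + delta * (XXt @ M @ XXt)   # terms that do not change
    for _ in range(n_iter):
        Z = fixed + beta * update_D(Q)              # update_D from the SDSPCA sketch above
        _, eigvecs = np.linalg.eigh(Z)
        Q = eigvecs[:, :k]
    return Q
```

In a typical use of this sketch, X would hold the vectorized ROI training images (rows = samples), Y the one-hot label matrix, and M would come from heat_kernel_weights applied to the same training matrix.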
The proposed method, SDSPCA-NPE, is a supervised learning variant of PCA that inherits the advantages of SDSPCA. The approach utilizes sparse-constrained principal component analysis, which not only extracts and condenses the main components of palm vein images, but also reduces the ambiguity associated with PCA dimensionality reduction. Additionally, class labels are incorporated into the algorithm model, preserving the category information of palm vein images. This constraint influences the extraction of principal components by approximating the given label information through linear transformations. As a result, the similarity of feature vectors projected from different classes of palm vein samples is reduced. The final method extracts highly interpretable and well-classifiable principal component features from palm vein images while mitigating the performance interference caused by poor image quality due to lighting variations. This makes it more suitable for palm vein recognition.
The final proposed method, SDSPCA-NPE, introduces local structural information to address the limitations of SDSPCA, which only considers global and class information in supervised learning. By incorporating sparse constraints and class information, the resulting feature vectors exhibit strong interpretability and sensitivity to class information, effectively reducing the impact of lighting variations in palm vein images and improving the problem of high inter-class similarity. This leads to better inter-class distances. Additionally, the method preserves the local structural information of the data during projection, maintaining certain neighborhood relationships from the original palm vein data. This enhances tolerance for non-uniformly distributed outliers and reduces intra-class distances. As a result, the final palm vein feature vectors demonstrate improved classification performance, with increased inter-class distances and decreased intra-class distances in terms of distance distribution. Figure 5 provides an overview of the proposed method.

3.3. Feature Matching and Recognition

By projecting the ROI image onto the projection matrix, a set of coordinates representing the position of the image in the sub-space is obtained, serving as the basis for classification. After feature extraction, the image p yields a set of positional coordinates, Fp, in the sub-space, which is used as its matching feature vector. First, the within-class/between-class threshold, t, is calculated based on the distribution curves of matches within the training set. Within-class matching refers to matching different images from the same palm, while between-class matching refers to matching images from different palms [2]. The matching distance is computed as the Euclidean distance between the feature vectors Fp of image p and Fq of image q, denoted as:
$$\mathrm{Distance}_{pq} = \|F_{p} - F_{q}\|$$
Intra-class and inter-class matching curves are drawn, and the Euclidean distance corresponding to the intersection of the two curves is the threshold t.
Figure 6 shows the distribution of the Euclidean distances between the feature vectors of the palm vein images for intra- and inter-class matching. The solid line is the intra-class distance distribution, and the dashed line is the inter-class distance distribution. The t-value corresponding to the intersection point is 0.165, i.e., the matching threshold is t = 0.165. This threshold varies and may differ between databases.
For matching, the Euclidean distance between two image feature vectors in the test set is computed; if it satisfies the following:
$$\mathrm{Distance}_{pq} < t$$
the two images are considered to belong to the same person and the match is accepted; otherwise, it is rejected.
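A minimal sketch of this accept/reject decision is given below; the default threshold is the value taken from Figure 6 and should be recomputed for each database.

```python
import numpy as np

def is_same_palm(Fp: np.ndarray, Fq: np.ndarray, t: float = 0.165) -> bool:
    """Accept the pair if the Euclidean distance between the projected feature vectors
    is below the threshold t learned from the training intra-/inter-class curves
    (t = 0.165 corresponds to the curves in Figure 6; it differs between databases)."""
    return float(np.linalg.norm(Fp - Fq)) < t
```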

4. Experimental Results and Analysis

The feasibility of the proposed algorithm was validated through experiments conducted on a self-built image database and the image databases of the Institute of Automation, Chinese Academy of Sciences (CASIA), Hong Kong Polytechnic University, and Tongji University.

4.1. Palm Vein Image Databases

Four palm vein databases collected by heterogeneous devices under different conditions are considered to evaluate the proposed method’s recognition accuracy.
(1)
Self-built image database: The self-developed palm vein acquisition device shown in Figure 2 was used for image capture, and the acquisition environment is shown in Figure 7. Palm images of the left and right hands of 265 people were collected, with the left and right hands of the same person regarded as different samples. In total, 530 palms were captured, with 10 images per palm, resulting in 5300 images. Across these 5300 images, the failure-to-enroll (FTE) rate of our device was 0%.
(2)
CASIA (Chinese Academy of Sciences Institute of Automation) database: The Multi-Spectral Palm Vein Database V1.0 contains 7200 palm vein images collected from 100 different hands. Its palmprint images taken at the 850 nm wavelength clearly show the palm veins, making it a widely used palm vein dataset.
(3)
Hong Kong Polytechnic University (PolyU) database: The PolyU multi-spectral database contains palmprint images captured under blue, green, red, and near-infrared (NIR) illumination. A CCD camera and a high-power halogen light source form a contact-based acquisition device. Palm vein samples are extracted from the palmprint images collected under near-infrared illumination; these comprise 6000 images collected from 250 users.
(4)
Tongji University database: Tongji University's non-contact palm vein image gallery was collected with a light source wavelength of 940 nm. It contains 12,000 palm vein image samples from individuals aged between 20 and 50 years, captured using proprietary non-contact acquisition devices. The data were collected in two stages from 600 palms, with 20 palm vein images per palm.
Figure 8, Figure 9, Figure 10 and Figure 11 show representative samples from each database. As shown in these figures, the collected images are affected both by the palm veins themselves and by external factors, exhibiting different degrees of blurring.

4.2. Performance Evaluation and Error Indicators

Each of the databases consists of 100 classes, with six images per class. For each class, the first four images are used for training, while the remaining two images are used for testing. After feature extraction, a total of 40,000 matches are performed among the 200 test palm vein images. Among these matches, 400 matches are performed for samples of the same class, while 39,600 matches are performed for samples of different classes [2]. The threshold value ‘t’ is determined based on the distribution curve of intra-class and inter-class samples in the training set. The performance of the recognition system is evaluated using the following metrics: false rejection rate (FRR), false acceptance rate (FAR), correct recognition rate (CRR), and recognition time.
$$FRR = \frac{N_{FR}}{N_{AA}} \times 100\%$$
$$FAR = \frac{N_{FA}}{N_{IA}} \times 100\%$$
$N_{AA}$ and $N_{IA}$ are the total numbers of attempts by legitimate and impostor (illegal) users, respectively; $N_{FR}$ and $N_{FA}$ are the numbers of false rejections and false acceptances, respectively. CRR is defined as:
$$CRR = \frac{\text{Number of correct identifications}}{\text{Total number of identifications}} \times 100\%$$
The equal error rate (EER) and receiver operating characteristic (ROC) curve are used as error indicators, as they are the most commonly used indicators for reporting the accuracy of biometric systems in verification mode.
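For reference, the following sketch computes FAR and FRR over a grid of thresholds for distance-based matching and approximates the EER at the crossing point of the two curves; the function and argument names are ours.

```python
import numpy as np

def far_frr_eer(genuine, impostor, thresholds):
    """FAR/FRR curves for distance-based matching (smaller distance = better match).
    `genuine` and `impostor` are arrays of intra- and inter-class match distances."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = np.array([(genuine >= t).mean() for t in thresholds])    # N_FR / N_AA
    far = np.array([(impostor < t).mean() for t in thresholds])    # N_FA / N_IA
    i = int(np.argmin(np.abs(far - frr)))                          # curves cross here
    return far, frr, (far[i] + frr[i]) / 2.0                       # approximate EER
```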

4.3. Parameter Adjustment and Sensitivity Analysis

By adjusting the parameters of the SDSPCA-NPE method, we tested their impact on the final recognition performance in order to determine the optimal parameter combination. In the SDSPCA-NPE method, the weight parameters significantly influence the recognition effectiveness of palm vein features. During the training stage of SDSPCA-NPE, there are three parameters: α, β, and δ. These parameters primarily determine the influence factors for class information, regularization, and local structure.
Taking Figure 12a as an example, we conducted numerous experiments with different values of k to find the optimal setting that ensures a good trade-off between performance and computational efficiency. Setting k to be too large can lead to the curse of dimensionality, resulting in computational challenges and potential over-fitting. Conversely, selecting k to be too small may lead to insufficient information representation and decreased performance. After determining the optimal value for k, we kept α constant and made changes to β and δ. The performance of these parameters was recorded in four different databases, and the results are presented in Figure 12.
The conclusion drawn is that SDSPCA-NPE is robust to β within the range of [0.01, 100], but sensitive to α and δ. Specifically, within a certain range, the weights assigned to class information and local information have a significant impact on the classification ability.
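A simple way to carry out such a parameter sweep is an exhaustive grid search, sketched below under the assumption that sdspca_npe from the Section 3.2 sketch is available and that a hypothetical evaluate_eer helper wraps the projection, matching, and EER computation of Sections 3.3 and 4.2; the grid values and k are illustrative, not the values used in the experiments.

```python
import itertools
import numpy as np

def grid_search_weights(X, Y, M, evaluate_eer, k=100,
                        alphas=(0.01, 0.1, 1, 10, 100),
                        betas=(0.01, 0.1, 1, 10, 100),
                        deltas=(0.01, 0.1, 1, 10, 100)):
    """Exhaustive search over the alpha/beta/delta weights of SDSPCA-NPE.
    `evaluate_eer(Q)` is a stand-in for the full pipeline: project the test images
    with Q, run the matching of Section 3.3, and return the EER."""
    best_params, best_eer = None, np.inf
    for alpha, beta, delta in itertools.product(alphas, betas, deltas):
        Q = sdspca_npe(X, Y, M, k, alpha, beta, delta)   # from the Section 3.2 sketch
        eer = evaluate_eer(Q)
        if eer < best_eer:
            best_params, best_eer = (alpha, beta, delta), eer
    return best_params, best_eer
```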

4.4. Ablation Experiments

In the experiment, the proposed method integrates global information, category information (supervised), and local information, aiming to verify the performance improvement achieved by combining these pieces of information. To validate this, individual experiments were conducted using the NPE, SDSPCA, and SDSPCA-NPE algorithms on the same image database. The specific performance results can be found in Figure 13.
From Figure 13, it can be observed that the proposed method achieves the lowest EER across all four datasets. Furthermore, NPE and SDSPCA exhibit the expected performance differences when applied to datasets that suit their respective dimensionality reduction principles. In conclusion, the SDSPCA-NPE algorithm combines the strengths of each individual algorithm, effectively integrating class-specific, global, and local information. It exhibits better applicability than SDSPCA and NPE alone, resulting in more desirable performance.

4.5. Performance Comparison

A comparison of our proposed algorithm with several typical algorithms is presented here, evaluating their performance on the four databases. Table 1 displays the performance results (EER and recognition time) of the different algorithms, and the corresponding ROC curves are illustrated in Figure 14.
The compared algorithms are briefly introduced below:
(1)
PCA: This method extracts the main information from the data, avoiding the comparison of redundant dimensions in palm vein images. However, it may result in data points being mixed together, making it difficult to distinguish between similar palm vein image samples, leading to sub-par performance.
(2)
NPE: NPE retains the local information structure of the data, ensuring that the projected palm vein data maintains a close connection among samples of the same class. It effectively reduces the intra-class distance of similar palm vein samples. However, this method assumes the effective existence of local structures within the palm vein samples. It lacks robustness for samples that do not satisfy this data characteristic, such as palm vein images with significant deformation.
(3)
SDSPCA: SDSPCA incorporates class information and sparse regularization into PCA. It exhibits a certain resistance to anomalous samples (e.g., blurry or deformed images) in palm vein images. However, its classification capability still cannot overcome the inherent limitations of PCA, resulting in the loss of certain components crucial for classification and unsatisfactory performance, especially for similar palm vein image samples.
(4)
DBM: DBM utilizes texture features extracted from divided blocks, offering a simple structure, easy implementation, and fast speed. However, its performance is significantly compromised when dealing with low-quality or deformed palm vein images. Nevertheless, it performs reasonably well on high-quality palm vein data.
(5)
DGWLD: DGWLD consists of an improved differential excitation operator and dual Gabor orientations. It better reflects the local grayscale variations in palm vein images, enhancing the differences between samples of different classes. However, it still struggles with sample rotation and deformation issues in non-contact palm vein images, and it incurs higher computational costs.
(6)
MOGWLD: MOGWLD builds upon the dual Gabor framework by extracting multi-scale Gabor orientations and improving the original differential excitation by considering grayscale differences in multiple neighborhoods. This method enhances the discriminative power for distinguishing between samples from different classes. However, despite the improvement over the previous method, it increases the computational time and does not fundamentally enhance the classification ability for blurry and deformed samples within the same class.
(7)
JPCDA: JPCDA incorporates class information into PCA, effectively reducing inter-class ambiguity. However, it does not perform well with non-linear palm vein data.
From Table 1 and Figure 14, it can be observed that sub-space methods, such as the SDSPCA-NPE algorithm, outperform the texture-based methods in terms of time efficiency. In terms of recognition performance, the proposed algorithm achieves superior results across all four databases, with the best CRR and EER performance. It also exhibits lower time complexity than the majority of methods. However, it should be noted that certain methods show lower time complexity, and even better EER performance on individual image databases. Nonetheless, those algorithms lack universality and do not generalize well across palm vein databases, especially non-uniformly distributed ones.
It can be concluded that SDSPCA-NPE, as a supervised algorithm, effectively combines local structural information and global information for dimensionality reduction, yielding better overall performance than other algorithms across the four databases.

5. Conclusions

We have designed an open-environment palm vein image acquisition device based on multi-spectral imaging to achieve a high-security palm vein recognition system, and we have established a non-contact palm vein image dataset. In this study, we propose a supervised discriminative sparse principal component analysis algorithm that preserves the neighborhood structure (SDSPCA-NPE) to improve recognition performance. By utilizing sparse constraints in supervised learning, the SDSPCA-NPE algorithm obtains interpretable principal component features that contain class-specific information. This approach reduces the impact of issues such as unclear imaging and low image quality during acquisition. It expands the inter-class distance and enhances the discriminability between different palm vein samples. Moreover, we introduce neighborhood structure information into the projection step using robust neighborhood selection techniques, which preserve similar local structures in the palm vein samples before and after projection. This technique captures the uneven distribution of palm vein images and addresses the increased intra-class image differences caused by rotation, scale variation, translation, and illumination changes. Experimental results demonstrate the effectiveness of the proposed method on the self-built database, the CASIA database, the Hong Kong Polytechnic University database, and the Tongji University database, with equal error rates of 0.10%, 0.50%, 0.16%, and 0.19%, respectively. Our approach outperforms other typical methods in terms of recognition accuracy. The system achieves real-time performance with an identification time of approximately 0.0019 s, indicating its practical value. Future work will focus on miniaturizing the palm vein acquisition device and developing recognition algorithms that accommodate large-scale palm vein databases.

Author Contributions

Conceptualization, W.W.; methodology, W.W.; software, W.W.; validation, W.W.; formal analysis, W.W.; investigation, Y.L. and Y.Z.; resources, Y.L.; data curation, Y.Z. and C.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L.; visualization, Y.L.; supervision, W.W.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61903358), the China Postdoctoral Science Foundation (2019M651142), and the Natural Science Foundation of Liaoning Province (2023-MS-322).

Data Availability Statement

The dataset presented in this study is available. All data used in the study can be obtained from the publicly available databases or from the corresponding author by email.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. MacGregor, P.; Welford, R. Veincheck: Imaging for security and personnel identification. Adv. Imaging 1991, 6, 52–56.
2. Wu, W.; Wang, Q.; Yu, S.; Luo, Q.; Lin, S.; Han, Z.; Tang, Y. Outside Box and Contactless Palm Vein Recognition Based on a Wavelet Denoising Resnet. IEEE Access 2021, 9, 82471–82484.
3. Wu, W.; Elliott, S.J.; Lin, S.; Sun, S.; Tang, Y. Review of Palm Vein Recognition. IET Biom. 2019, 9, 1–10.
4. Lee, Y.P. Palm vein recognition based on a modified (2D)2LDA. Signal Image Video Process. 2013, 9, 229–242.
5. Wang, H.B.; Li, M.W.; Zhou, J. Palmprint recognition based on double Gabor directional Weber local descriptors. Electron. Inform. 2018, 40, 936–943.
6. Li, M.W.; Liu, H.Y.; Gao, X.J. Palmprint recognition based on multiscale Gabor directional Weber local descriptors. Prog. Laser Optoelectron. 2021, 58, 316–328.
7. Almaghtuf, J.; Khelifi, F.; Bouridane, A. Fast and Efficient Difference of Block Means Code for Palmprint Recognition. Mach. Vis. Appl. 2020, 31, 1–10.
8. Leng, L.; Yang, Z.; Min, W. Democratic Voting Downsampling for Coding-based Palmprint Recognition. IET Biom. 2020, 9, 290–296.
9. Karanwal, S. Robust Local Binary Pattern for Face Recognition in Different Challenges. Multimed. Tools Appl. 2022, 81, 29405–29421.
10. El Idrissi, A.; El Merabet, Y.; Ruichek, Y. Palmprint Recognition Using State-of-the-art Local Texture Descriptors: A Comparative Study. IET Biom. 2020, 9, 143–153.
11. Kaur, P.; Kumar, N.; Singh, M. Biometric-Based Key Handling Using Speeded Up Robust Features. In Lecture Notes in Networks and Systems; Springer Nature: Singapore, 2023; pp. 607–616.
12. Kumar, A.; Gupta, R. Futuristic Study of a Criminal Facial Recognition: A Open-Source Face Image Dataset. Sci. Talks 2023, 6, 100229.
13. Yahaya, Y.H.; Leng, W.Y.; Shamsuddin, S.M. Finger Vein Biometric Identification Using Discretization Method. J. Phys. Conf. Ser. 2021, 1878, 012030.
14. Jia, W.; Xia, W.; Zhao, Y.; Min, H.; Chen, Y.-X. 2D and 3D Palmprint and Palm Vein Recognition Based on Neural Architecture Search. Int. J. Autom. Comput. 2021, 18, 377–409.
15. Rida, I.; Al-Maadeed, S.; Mahmood, A.; Bouridane, A.; Bakshi, S. Palmprint Identification Using an Ensemble of Sparse Representations. IEEE Access 2018, 6, 3241–3248.
16. Sun, S.; Cong, X.; Zhang, P.; Sun, B.; Guo, X. Palm Vein Recognition Based on NPE and KELM. IEEE Access 2021, 9, 71778–71783.
17. Guo, Y.-R.; Bai, Y.-Q.; Li, C.-N.; Bai, L.; Shao, Y.-H. Two-Dimensional Bhattacharyya Bound Linear Discriminant Analysis with Its Applications. Appl. Intell. 2021, 52, 8793–8809.
18. Jolliffe, I.T. Principal Component Analysis and Factor Analysis. In Principal Component Analysis; Springer: New York, NY, USA, 1986; pp. 115–128.
19. Liu, J.-X.; Xu, Y.; Gao, Y.-L.; Zheng, C.-H.; Wang, D.; Zhu, Q. A Class-Information-Based Sparse Component Analysis Method to Identify Differentially Expressed Genes on RNA-Seq Data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2016, 13, 392–398.
20. Multilinear Principal Component Analysis. In Multilinear Subspace Learning; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013; pp. 136–169.
21. Al-jaberi, A.S.; Mohsin Al-juboori, A. Palm Vein Recognition, a Review on Prospects and Challenges Based on CASIA's Dataset. In Proceedings of the 2020 13th International Conference on Developments in eSystems Engineering (DeSE), Virtual Conference, 14–17 December 2020.
22. Salazar-Jurado, E.H.; Hernández-García, R.; Vilches-Ponce, K.; Barrientos, R.J.; Mora, M.; Jaswal, G. Towards the Generation of Synthetic Images of Palm Vein Patterns: A Review. Inf. Fusion 2023, 89, 66–90.
23. Feng, C.-M.; Xu, Y.; Liu, J.-X.; Gao, Y.-L.; Zheng, C.-H. Supervised Discriminative Sparse PCA for Com-Characteristic Gene Selection and Tumor Classification on Multiview Biological Data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2926–2937.
24. Jiang, B.; Ding, C.; Luo, B.; Tang, J. Graph-Laplacian PCA: Closed-Form Solution and Robustness. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013.
25. Feng, D.; He, S.; Zhou, Z.; Zhang, Y. A Finger Vein Feature Extraction Method Incorporating Principal Component Analysis and Locality Preserving Projections. Sensors 2022, 22, 3691.
26. Wang, X.; Yan, W.Q. Human Identification Based on Gait Manifold. Appl. Intell. 2022, 53, 6062–6073.
27. Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326.
28. Zhao, X.; Guo, J.; Nie, F.; Chen, L.; Li, Z.; Zhang, H. Joint Principal Component and Discriminant Analysis for Dimensionality Reduction. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 433–444.
Figure 1. Palm vein image schematic diagram.
Figure 2. Device for self-built palm vein database collection.
Figure 3. Flow chart of palm vein image ROI extraction: (a) denoising; (b) determine the cross point; (c) determine the valley point; and (d) extract the ROI region.
Figure 4. Flowchart of the NPE.
Figure 5. Flowchart of the proposed method.
Figure 6. Curves of matching distribution for intra-class and inter-class.
Figure 7. Acquisition environment of the self-built database.
Figure 8. Self-built database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
Figure 9. CASIA database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
Figure 10. PolyU database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
Figure 11. Tongji database samples: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
Figure 12. Performance index and parameter relationship: (a) β and δ; (b) α and δ; (c) α and β. The blue to red gradient represents the CRR from low to high.
Figure 13. Performance of different components in the database.
Figure 14. ROC curves. (a) Self-built database. (b) CASIA database. (c) PolyU database. (d) Tongji database.
Table 1. Performance of different algorithms in each database.

Algorithm        Database     EER (%)    Time (10^-4 s)
PCA [15]         Self-built   0.28       19.59
                 CASIA        2.38       19.77
                 PolyU        1.5        19.45
                 Tongji       6          19.58
DBM [7]          Self-built   5.01       521.54
                 CASIA        22.53      522.52
                 PolyU        2.31       525.16
                 Tongji       5.66       549.56
DGWLD [5]        Self-built   8.26       2020.56
                 CASIA        22.85      2066.15
                 PolyU        7.21       2058.94
                 Tongji       3.66       2054.89
MOGWLD [6]       Self-built   10.26      24,645.82
                 CASIA        19.70      24,645.92
                 PolyU        5.13       24,645.79
                 Tongji       2.73       24,659.23
NPE [16]         Self-built   0.50       13.81
                 CASIA        7.50       13.90
                 PolyU        1          14.19
                 Tongji       9.6        14.11
SDSPCA [23]      Self-built   1.50       13.56
                 CASIA        15.39      13.89
                 PolyU        5.50       13.57
                 Tongji       10.75      13.63
JPCDA [28]       Self-built   0.13       24.56
                 CASIA        0.72       24.39
                 PolyU        0.50       24.77
                 Tongji       0.55       24.96
SDSPCA-NPE       Self-built   0.10       19.77
                 CASIA        0.50       38.50
                 PolyU        0.16       19.75
                 Tongji       0.19       19.69
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
