Deep Learning-Based Biometric Technologies

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: closed (31 August 2019) | Viewed by 39598

Special Issue Editors


Guest Editor
Room 302, Science Building, School of Mathematics and Statistics, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an 710049, Shaanxi, China
Interests: discrete geometry analysis; 3D face recognition; 3D facial expression analysis; deep learning for 3D shapes

Special Issue Information

Dear Colleagues,

Recent developments have led to the widespread use of biometric technologies, such as face, fingerprint, vein, iris, palmprint, wrinkle, voice, and gait recognition, in a variety of applications including access control, financial transactions on mobile devices, and automated teller machines (ATMs). Although existing biometric technology has matured, its performance is still affected by various environmental conditions, and recent approaches have combined deep learning techniques with conventional biometrics to achieve higher performance. The objective of this Special Issue is to invite high-quality, state-of-the-art research papers that address challenging issues in deep learning-based biometric technologies. We solicit original papers reporting completed, unpublished research that is not currently under review by any other conference, magazine, or journal. Topics of interest include, but are not limited to:

  •  Region of interest (ROI) or feature point detection for biometrics based on deep learning
  •  Biometric feature extraction based on deep learning
  •  Biometric recognition based on deep learning
  •  Soft biometrics based on deep learning
  •  Multimodal biometrics based on deep learning
  •  Spoof detection based on deep learning

Prof. Kang Ryoung Park
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Region of interest (ROI) or feature point detection for biometrics based on deep learning
  • Biometric feature extraction based on deep learning
  • Biometric recognition based on deep learning
  • Soft biometrics based on deep learning
  • Multimodal biometrics based on deep learning
  • Spoof detection based on deep learning

Published Papers (7 papers)


Research

14 pages, 1053 KiB  
Article
Low-Rank Multi-Channel Features for Robust Visual Object Tracking
by Fawad, Muhammad Jamil Khan, MuhibUr Rahman, Yasar Amin and Hannu Tenhunen
Symmetry 2019, 11(9), 1155; https://doi.org/10.3390/sym11091155 - 11 Sep 2019
Cited by 9 | Viewed by 2322
Abstract
Kernel correlation filters (KCF) demonstrate significant potential in visual object tracking by employing robust descriptors. Proper selection of color and texture features can provide robustness against appearance variations. However, the use of multiple descriptors would lead to a considerable feature dimension. In this paper, we propose a novel low-rank descriptor that provides better precision and success rate in comparison to state-of-the-art trackers. We accomplished this by concatenating the magnitude component of the Overlapped Multi-oriented Tri-scale Local Binary Pattern (OMTLBP), Robustness-Driven Hybrid Descriptor (RDHD), Histogram of Oriented Gradients (HoG), and Color Naming (CN) features. We reduced the rank of our proposed multi-channel feature to diminish the computational complexity. We formulated the Support Vector Machine (SVM) model by utilizing the circulant matrix of our proposed feature vector in the kernel correlation filter. The use of the discrete Fourier transform in the iterative learning of the SVM reduced the computational complexity of our proposed visual tracking algorithm. Extensive experimental results on the Visual Tracker Benchmark dataset show better accuracy in comparison to other state-of-the-art trackers.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
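The workhorse behind a KCF-style tracker is ridge regression over circularly shifted samples, which the circulant structure lets us solve entirely in the Fourier domain. As a rough illustration (not the paper's method), the sketch below implements a single-channel, linear-kernel correlation filter in Python/NumPy; `feature_map` stands in for one channel of whatever descriptor is used, and the paper's OMTLBP/RDHD/HoG/CN concatenation and low-rank projection are not reproduced.

```python
import numpy as np

def train_filter(feature_map, target, lam=1e-4):
    """Learn dual correlation-filter coefficients in the Fourier domain.

    feature_map : 2-D array, one feature channel of the template patch
    target      : 2-D Gaussian-shaped regression target of the same size
    lam         : ridge-regression regularizer
    """
    X = np.fft.fft2(feature_map)
    Y = np.fft.fft2(target)
    kxx = X * np.conj(X)                 # linear-kernel auto-correlation spectrum
    alpha = Y / (kxx + lam)              # dual coefficients (frequency domain)
    return alpha, X

def detect(alpha, X_template, feature_map):
    """Correlate a search patch with the learned filter and return the response peak."""
    Z = np.fft.fft2(feature_map)
    kxz = Z * np.conj(X_template)        # cross-correlation spectrum
    response = np.real(np.fft.ifft2(alpha * kxz))
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: the peak should land near the centre, where the Gaussian target peaks.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
target = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * 3.0 ** 2))
patch = np.random.rand(h, w)
alpha, X = train_filter(patch, target)
print(detect(alpha, X, patch))
```

For multi-channel descriptors the per-channel spectra are summed before the division, so shrinking the channel count (for example, with a low-rank projection of the concatenated descriptor) directly reduces the FFT workload per frame.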

18 pages, 4375 KiB  
Article
An Adversarial and Densely Dilated Network for Connectomes Segmentation
by Ke Chen, Dandan Zhu, Jianwei Lu and Ye Luo
Symmetry 2018, 10(10), 467; https://doi.org/10.3390/sym10100467 - 09 Oct 2018
Cited by 6 | Viewed by 2831
Abstract
Automatic reconstruction of neural circuits in the brain is one of the most crucial studies in neuroscience. Connectome segmentation plays an important role in reconstruction from electron microscopy (EM) images; however, it is rather challenging due to highly anisotropic shapes with inferior quality and varying thickness. In this paper, we propose a novel connectome segmentation framework called the adversarial and densely dilated network (ADDN) to address these issues. ADDN is based on the conditional Generative Adversarial Network (cGAN) structure, a recent advance in machine learning with the power to generate images similar to the ground truth, especially when the training data are limited. Specifically, we design a densely dilated network (DDN) as the segmentor to allow a deeper architecture and larger receptive fields for more accurate segmentation. The discriminator is trained to distinguish generated segmentations from manual segmentations. During training, the adversarial loss is optimized together with the Dice loss. Extensive experimental results demonstrate that ADDN is effective for the connectome segmentation task, helping to retrieve more accurate segmentations and attenuate the blurry effects of the generated boundary map. Our method obtains state-of-the-art performance while requiring less computation on the ISBI 2012 EM dataset and the mouse piriform cortex dataset.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
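As a minimal sketch of the kind of objective described above, assuming nothing about the paper's actual DDN architecture or loss weighting, the PyTorch snippet below couples a soft Dice loss with an adversarial term from a conditional discriminator that judges (image, mask) pairs; `segmentor` and `discriminator` are tiny placeholder networks.

```python
import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a batch of probability maps."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

# Placeholder networks: the paper's densely dilated segmentor is not reproduced here.
segmentor = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1))   # patch-wise logits

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(segmentor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(em_image, manual_mask, adv_weight=0.1):
    pred = segmentor(em_image)

    # Discriminator: real pairs (image, manual mask) vs. fake pairs (image, prediction).
    d_real = discriminator(torch.cat([em_image, manual_mask], dim=1))
    d_fake = discriminator(torch.cat([em_image, pred.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Segmentor: Dice loss plus the adversarial term (try to fool the discriminator).
    d_fake = discriminator(torch.cat([em_image, pred], dim=1))
    loss_s = dice_loss(pred, manual_mask) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item(), loss_d.item()

# One toy step on random data.
img = torch.rand(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(train_step(img, mask))
```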

26 pages, 8830 KiB  
Article
Deep Learning-Based Multinational Banknote Fitness Classification with a Combination of Visible-Light Reflection and Infrared-Light Transmission Images
by Tuyen Danh Pham, Dat Tien Nguyen, Jin Kyu Kang and Kang Ryoung Park
Symmetry 2018, 10(10), 431; https://doi.org/10.3390/sym10100431 - 25 Sep 2018
Cited by 2 | Viewed by 2881
Abstract
The fitness classification of a banknote is important as it assesses the quality of banknotes in automated banknote sorting facilities, such as counting machines or automated teller machines. The popular approaches are primarily based on image processing, with banknote images acquired by various sensors. However, most of these methods assume that the currency type, denomination, and exposed direction of the banknote are known. In other words, not only is a pre-classification of the type of input banknote required, but in some cases, the type of currency must be manually selected. To address this problem, we propose a multinational banknote fitness-classification method that simultaneously determines the fitness level of banknotes from multiple countries. This is achieved without pre-classification of the input direction and denomination of the banknote, using visible-light reflection and infrared-light transmission images of banknotes and a convolutional neural network. The experimental results on a combined banknote image database consisting of the Indian rupee and Korean won with three fitness levels, and the United States dollar with two fitness levels, show that the proposed method achieves better accuracy than other fitness classification methods.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
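A hedged sketch of the input arrangement suggested by the abstract: the two imaging modalities are stacked as channels of one tensor and fed to a single CNN that scores fitness levels. The layer sizes, image resolution, and three-level output below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class FitnessCNN(nn.Module):
    """Toy CNN that classifies banknote fitness from two stacked image modalities."""
    def __init__(self, num_levels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_levels)

    def forward(self, visible, infrared):
        # Stack visible-light reflection and IR-light transmission images as channels.
        x = torch.cat([visible, infrared], dim=1)
        return self.classifier(self.features(x).flatten(1))

model = FitnessCNN(num_levels=3)
vis = torch.rand(4, 1, 128, 256)   # batch of visible-light reflection images
ir = torch.rand(4, 1, 128, 256)    # corresponding IR-light transmission images
logits = model(vis, ir)            # one score per fitness level for each banknote
print(logits.shape)                # torch.Size([4, 3])
```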

23 pages, 18829 KiB  
Article
Age Estimation Robust to Optical and Motion Blurring by Deep Residual CNN
by Jeon Seong Kang, Chan Sik Kim, Young Won Lee, Se Woon Cho and Kang Ryoung Park
Symmetry 2018, 10(4), 108; https://doi.org/10.3390/sym10040108 - 13 Apr 2018
Cited by 15 | Viewed by 4999
Abstract
Recently, real-time human age estimation based on facial images has been applied in various areas. Underneath this phenomenon lies an awareness that age estimation plays an important role in applying big data to target marketing for age groups, product demand surveys, consumer trend analysis, etc. However, in a real-world environment, various optical and motion blurring effects can occur. Such effects usually prevent facial features such as wrinkles, which are essential to age estimation, from being fully captured, thereby degrading accuracy. Most previous studies on age estimation were conducted on input images almost free from blurring effects. To overcome this limitation, we propose the use of a deep ResNet-152 convolutional neural network for age estimation that is robust to the various optical and motion blurring effects of visible-light camera sensors. We performed experiments with various optically and motion-blurred images created from the park aging mind laboratory (PAL) and craniofacial longitudinal morphological face database (MORPH) databases, which are publicly available. According to the results, the proposed method exhibited better age estimation performance than previous methods.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
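To make the setup concrete, here is a small, assumption-laden sketch: torchvision's ResNet-152 with its final layer swapped for an age output, plus a Gaussian-blur augmentation as a stand-in for the optical and motion blurring studied in the paper. The regression head, loss, and blur parameters are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# ResNet-152 backbone with the final fully connected layer replaced by a single
# regression output for age (an age-bin classification head would work as well).
backbone = models.resnet152(weights=None)   # ImageNet-pretrained weights could be loaded instead
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

# Blur augmentation so the network also sees degraded faces during training;
# kernel size and sigma range here are placeholders, not the paper's settings.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 5.0)),
    transforms.ToTensor(),
])

criterion = nn.L1Loss()                      # mean absolute error in years
faces = torch.rand(2, 3, 224, 224)           # stand-in for a preprocessed face batch
ages = torch.tensor([[31.0], [52.0]])
loss = criterion(backbone(faces), ages)
print(float(loss))
```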

13 pages, 13907 KiB  
Article
A Novel Multimodal Biometrics Recognition Model Based on Stacked ELM and CCA Methods
by Jucheng Yang, Wenhui Sun, Na Liu, Yarui Chen, Yuan Wang and Shujie Han
Symmetry 2018, 10(4), 96; https://doi.org/10.3390/sym10040096 - 04 Apr 2018
Cited by 17 | Viewed by 4137
Abstract
Multimodal biometrics, a newly developed trend in biometric identification technology, combines a variety of biological features to significantly improve identification performance. This study proposes a novel multimodal biometrics recognition model based on the stacked extreme learning machine (ELM) and canonical correlation analysis (CCA) methods. The model, which has a symmetric structure, is found to have high potential for multimodal biometrics. The model works as follows. First, it learns the hidden-layer representations of biological images using extreme learning machines, layer by layer. Second, the canonical correlation analysis method is applied to map the representations to a feature space, which is used to reconstruct the multimodal image feature representation. Third, the reconstructed features are used as the input of a classifier for supervised training and output. To verify the validity and efficiency of the method, we apply it to new hybrid datasets obtained from typical face image datasets and finger-vein image datasets. Our experimental results demonstrate that our model performs better than traditional methods.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
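Both building blocks named in the abstract have compact textbook forms: an ELM layer maps its input through fixed random weights and a nonlinearity, and CCA projects two feature sets into a maximally correlated joint space. The sketch below (NumPy plus scikit-learn's CCA) only illustrates that pipeline on random data; the stacking depth, layer widths, and the supervised read-out classifier used in the paper are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

def elm_layer(X, n_hidden, rng):
    """One extreme-learning-machine layer: random input weights, fixed after initialization."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

# Toy features for two modalities (e.g., face and finger-vein) of the same 100 subjects.
face = rng.standard_normal((100, 256))
vein = rng.standard_normal((100, 196))

# Learn hidden-layer representations layer by layer (two stacked ELM layers per modality).
h_face = elm_layer(elm_layer(face, 128, rng), 64, rng)
h_vein = elm_layer(elm_layer(vein, 128, rng), 64, rng)

# CCA maps the two representations into a maximally correlated joint feature space.
cca = CCA(n_components=32)
f_face, f_vein = cca.fit_transform(h_face, h_vein)

# Fused feature for a downstream supervised classifier (concatenation is one simple choice).
fused = np.concatenate([f_face, f_vein], axis=1)
print(fused.shape)   # (100, 64)
```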

15 pages, 4260 KiB  
Article
Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset
by Lin Zhang, Zaixi Cheng, Ying Shen and Dongqing Wang
Symmetry 2018, 10(4), 78; https://doi.org/10.3390/sym10040078 - 21 Mar 2018
Cited by 80 | Viewed by 7372
Abstract
Among biometric identifiers, the palmprint and the palmvein have received significant attention due to their stability, uniqueness, and non-intrusiveness. In this paper, we investigate the problem of palmprint/palmvein recognition and propose a Deep Convolutional Neural Network (DCNN)-based scheme, namely PalmRCNN (short for palmprint/palmvein recognition using CNNs). The effectiveness and efficiency of PalmRCNN have been verified through extensive experiments conducted on benchmark datasets. In addition, although substantial effort has been devoted to palmvein recognition, it is still quite difficult for researchers to know the potential discriminating capability of the contactless palmvein. One of the root reasons is that a large-scale, publicly available dataset comprising high-quality contactless palmvein images is still lacking. To this end, a user-friendly acquisition device for collecting high-quality contactless palmvein images is first designed and developed in this work. Then, a large-scale palmvein image dataset is established, comprising 12,000 images acquired from 600 different palms in two separate collection sessions. The collected dataset is now publicly available.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
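For readers unfamiliar with CNN-based palm recognition, the matching stage typically reduces to comparing fixed-length embeddings of the palm region-of-interest images. The sketch below shows that step with cosine similarity; the embedding network is a toy placeholder rather than the PalmRCNN architecture, and the decision threshold is an arbitrary example value.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder embedding network; the paper's PalmRCNN architecture is not reproduced.
embedder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
)

def verify(img_a, img_b, threshold=0.85):
    """Decide whether two palm images come from the same palm by comparing
    L2-normalized CNN embeddings with cosine similarity."""
    with torch.no_grad():
        ea = F.normalize(embedder(img_a), dim=1)
        eb = F.normalize(embedder(img_b), dim=1)
    score = (ea * eb).sum(dim=1)          # cosine similarity per pair
    return score, score > threshold

enrolled = torch.rand(1, 1, 128, 128)     # enrolled palmvein ROI (toy data)
probe = torch.rand(1, 1, 128, 128)        # probe palmvein ROI (toy data)
print(verify(enrolled, probe))
```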

9085 KiB  
Article
Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment
by Muhammad Arsalan, Hyung Gil Hong, Rizwan Ali Naqvi, Min Beom Lee, Min Cheol Kim, Dong Seop Kim, Chan Sik Kim and Kang Ryoung Park
Symmetry 2017, 9(11), 263; https://doi.org/10.3390/sym9110263 - 04 Nov 2017
Cited by 102 | Viewed by 14227
Abstract
Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes for the iris region are confronted with many problems, such as heavy occlusion by eyelashes, invalid off-axis rotations, motion blur, and non-regular reflections in the eye area. In addition, iris recognition in the visible-light environment has been investigated to avoid the use of an additional near-infrared (NIR) light camera and NIR illuminator, which increases the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues, this study proposes a two-stage iris segmentation scheme based on a convolutional neural network (CNN), which is capable of accurate iris segmentation in the severely noisy environments of iris recognition by a visible-light camera sensor. In the experiments, the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and the mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed existing segmentation methods.
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
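The two-stage idea sketched in the abstract can be illustrated with two tiny placeholder networks: a coarse localizer that proposes a candidate iris box, and a pixel-wise segmenter applied only inside that crop. Everything below (architectures, box parameterization, image size) is an assumption for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

# Stage 1: coarse localizer predicting a normalized bounding box (cx, cy, w, h) in [0, 1].
localizer = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4), nn.Sigmoid(),
)
# Stage 2: pixel-wise segmenter applied only to the cropped candidate region.
segmenter = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
)

def segment_iris(eye_image):
    """Two-stage segmentation: locate a candidate iris box, then segment inside it."""
    _, _, H, W = eye_image.shape
    cx, cy, w, h = localizer(eye_image)[0]
    x0 = int((cx - w / 2).clamp(0, 1) * W)
    y0 = int((cy - h / 2).clamp(0, 1) * H)
    x1 = max(x0 + 1, int((cx + w / 2).clamp(0, 1) * W))
    y1 = max(y0 + 1, int((cy + h / 2).clamp(0, 1) * H))
    crop = eye_image[:, :, y0:y1, x0:x1]
    mask = segmenter(crop)                 # per-pixel iris probability inside the crop
    return (x0, y0, x1, y1), mask

box, mask = segment_iris(torch.rand(1, 3, 240, 320))
print(box, mask.shape)
```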