Review

A Systematic Literature Review on Human Ear Biometrics: Approaches, Algorithms, and Trend in the Last Decade

1
Department of Computer Science, Federal University of Agriculture, Abeokuta 110124, Nigeria
2
Centre for Lifelong Learning, Universiti Brunei Darussalam, Jalan Tungku Link, Gadong BE1410, Brunei
3
Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka, Lagos 100213, Nigeria
4
Department of Electrical Engineering and Information Technology, Institute of Digital Communication, Ruhr University, 44801 Bochum, Germany
5
Department of Computer Science, Faculty of Information and Communication Sciences, University of Ilorin, Ilorin 240003, Nigeria
*
Authors to whom correspondence should be addressed.
Information 2023, 14(3), 192; https://doi.org/10.3390/info14030192
Submission received: 19 December 2022 / Revised: 13 March 2023 / Accepted: 15 March 2023 / Published: 17 March 2023
(This article belongs to the Special Issue Digital Privacy and Security)

Abstract

Biometric technology is fast gaining pace as a veritable developmental tool. So far, biometric procedures have been predominantly used to ensure identity, and ear recognition techniques continue to provide very robust research prospects. This paper identifies and reviews present techniques for ear biometrics, focusing on machine learning methods and procedures, and provides directions for future research. Ten databases were accessed, including ACM, Wiley, IEEE, Springer, Emerald, Elsevier, Sage, MIT, Taylor & Francis, and Science Direct, and 1121 publications were retrieved. In order to obtain relevant materials, some articles were excluded using certain criteria such as abstract eligibility, duplication, and uncertainty (indeterminate method). As a result, 73 papers were selected for in-depth assessment and significance. A quantitative analysis was carried out on the identified works using the search strategies: source, technique, datasets, status, and architecture. A Quantitative Analysis (QA) of feature extraction methods carried out on the selected studies showed the geometric approach with the highest value at 36%, followed by the local method at 27%. Several architectures, such as the Convolutional Neural Network, restricted Boltzmann machine, auto-encoder, deep belief network, and other unspecified architectures, accounted for 38%, 28%, 21%, 5%, and 4%, respectively. Essentially, this survey also provides the status of existing methods used in classifying related studies. A taxonomy of the current methodologies of ear recognition systems is presented, along with a publicly available occlusion- and pose-sensitive black ear image dataset of 970 images. The study concludes with the need for researchers to consider improvements in the speed and security of available feature extraction algorithms.

1. Introduction

Globally, over 1.5 billion people are without proper identification proof [1]. Establishing a person’s identity, together with the privileges connected to it, is an increasing source of concern for governments all over the world, as it constitutes a major requirement for the attainment of the Sustainable Development Goals (SDGs).
A formal means of personal identity verification is a primary requirement in modern societies. The inability to establish one’s identity can significantly hamper access to basic rights, government services, and other essential services. The task of effectively identifying an individual involves the use of biometric technology. Biometric recognition involves using specialized devices to capture the image of an individual’s feature and computer software to extract, encrypt, store, and match these features [2]. It typically involves the use of unique features such as the face, ear, signature, gait, voice, fingerprint, etc., for automatic computerized identification systems.
A biometric system is principally a pattern recognition system that obtains biometric data from an individual, mines a feature set from the data acquired, and compares this feature set against the stored template in the database [3].
Computer-based biometric systems have become available primarily due to increasing technological sophistication and computing capabilities. The face is a prominent example of an innate human biometric used for identification [4]. It is a major feature for identification due to its uniqueness [5]. However, an upward surge in the global population, coupled with cultural diversity, makes effective identification more challenging, particularly as traditional identification methods such as passwords, locks, and PIN codes are gradually becoming vulnerable to theft, sabotage, or loss; hence the need for more reliable traits like the ear [6]. The recent global pandemic caused by the novel coronavirus (COVID-19) led to the compulsory use of face masks in public [7]. Consequently, this new dressing standard poses a serious challenge to facial recognition in public [8]. The challenge is further emphasized in the performance of recognition systems, particularly in surveillance scenarios, because masks occlude a large portion of the face [9], which has made attention to ear recognition research even more important. Although strategies for ear recognition systems (ERS) were long conceived, actual implementation did not occur until much later [10]. Ear images have lately been advanced as a promising biometric resource [11]. For instance, the human ear has a readily predictable background, and scholarly work on the symmetric features of the human ear continues to generate new interest [12]. Moreover, structural features of the human ear abound, making it readily suitable for robust processing and applications. Not only does the ear represent a biometric trait that is unchanged over time, but it also possesses characteristics applicable to every individual, such as distinctiveness, collectability, universality, and permanence [13].
The advantages of the external ear as a biometric feature include:
  • Fewer inconsistencies in ear structure due to advancing age compared with the face.
  • A reliable ear outline throughout an individual’s life cycle.
  • The distinctiveness of the external ear shape is not affected by moods, emotions, other expressions, etc.
  • The restricted ear surface area leads to faster processing compared with the face.
  • It is easier to capture the human ear, even at a distance.
  • The procedure is non-invasive. Beards, spectacles, and makeup cannot alter the appearance of the ear.
In summary, this study aims to conduct a Systematic Literature Review (SLR) on human ear biometrics and recognition systems. The emphasis is on the contributions of deep learning to improving and enhancing ear recognition system performance vis-à-vis traditional machine learning methods. The subsequent sections of this paper are organized as follows: Section 2 highlights the sequence, search methods, and other strategies used in this study. The results obtained are presented in Section 3, with a follow-up discussion in Section 4. Lastly, Section 5 highlights the research outcomes and challenges and presents a current taxonomy of the ear recognition system.

2. Research Method

Research studies on human ear biometrics abound. These studies, mostly digital, were scientifically analyzed using quantitative methods to highlight significant trends and developments in ear recognition systems. The search procedure used in [1] was adopted and used for this study to provide answers to the following research questions:
RQ1: What is the state of the art in ear recognition research?
RQ2: What has deep learning contributed to ear recognition in the last decade?
RQ3: Is there sufficient publicly available data for ear recognition research?
The research questions, though intertwined, jointly motivate conducting this SLR.

2.1. Search Attributes

The methods of human ear recognition can be roughly divided into traditional methods and methods based on deep learning [14], with studies particularly more inclined towards the latter.
Biometrics has, over time, evolved to include deep learning with artificial neural networks (ANNs) [15]. Deep Convolutional Neural Networks are mathematical models that simulate the functional attributes of human biological neural networks [16]. They represent multiple data layers with multiple abstraction stages through learning to generate precise models autonomously [17]. Research into ear recognition using neural networks with varying performance has existed for a while. Several variants of the ANN, such as the convolutional neural network (CNN), are applicable in advancing various biometric modalities. Studies suggest that approaches applying CNNs epitomize state-of-the-art performance in object detection, segmentation, and image classification, particularly in unconstrained settings [18].
One of the initial efforts at neural networks for ear recognition was described by [19], which employed local binary patterns and a CNN with a recognition accuracy of 93.3%. Recent advances in CNNs for developing verification and identification systems have considerably pushed the development of image classification and object detection [20]. A CNN combines a larger set of parameters than traditional neural networks, thereby generating improved performance [16].
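As background to the discussion above, the core CNN operation is a convolution that slides a small learned kernel over the image to produce a feature map. The numpy sketch below is purely illustrative: a fixed Sobel-style edge kernel stands in for the weights a CNN would learn from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image" whose right half is bright; the kernel responds to the
# vertical edge between the two halves.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
fmap = np.maximum(conv2d(img, sobel_x), 0)  # ReLU activation on the feature map
```

A real CNN stacks many such convolution-plus-activation layers and learns the kernel weights by backpropagation; this sketch only shows the mechanics of one feature map.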

2.2. Search Queries

In order to obtain a robust and comprehensive collection of related articles that have significantly contributed to ERS, the following search criteria were used:
  • Boolean operators “OR” and “AND” to retrieve data.
  • Keywords generated from the research question as search parameters.
  • Restriction to some publication types and publishers.
  • Identifiers from related work.
Search results displayed outcomes with keyword and Boolean combinations such as (human ear) AND (deep convolutional network OR biometrics), and (identification OR recognition OR deep learning OR feature extraction). A logical review of the contributions of neural networks to ERS was conducted through a numerical assessment to identify innovative patterns, methods, and techniques in the ear recognition domain. Table 1 indicates the number of articles downloaded from the respective indexed databases.
Search Stage 1 (Information Extraction): an in-depth search of seven electronic databases showed an initial total article count of 1121, which was further subjected to a careful selection process.
Search Stage 2 (Screening): after the removal of 784 duplicate and 245 irrelevant articles, a residual quantity of 92 was obtained for onward analysis.
Search Stage 3 (Eligibility Determination): in obtaining articles relevant to the study, 92 articles were shortlisted. Subsequently, 18 articles were dropped for lack of a clear-cut methodology.
Search Stage 4 (Inclusion): in line with the research aim, the authors conducted a quality check on the residual articles and concluded on 74 for further systematic review.
The summary of the search procedure from stage 1 (information extraction), stage 2 (screening), and stage 3 (eligibility determination) to stage 4 (inclusion) is represented in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart in Figure 1. Preliminary results from the search criteria were obtained from the Google Scholar, Scopus, Springer, Science Direct, ACM, Emerald, and IEEE Xplore databases using a search criterion of publications not older than ten (10) years.

2.3. Search Strategy

After a preliminary assessment of requirements suitable for answering the research questions, a predominance of varied knowledge repositories, ranging from journal articles, online blogs, and bulletins to book chapters, was returned. Five (5) main sources, which include journals, conferences, workshops, book chapters, and original theses, were selected for the review. A total of 74 articles were carefully selected based on relevance: 52 journal articles, 9 conference proceedings, 5 workshop reports, 5 theses, and 3 book chapters.

2.4. Article Source (AS)

Ten (10) electronic databases, including Taylor & Francis, Springer, Elsevier, Emerald, Wiley, Science Direct, IEEE, ACM, Sage, and MIT, provided data for extraction using keywords and related terms in the study. The sources include workshops, conference proceedings, journal publications, original theses, and book chapters.

2.5. Ear Databases

This section presents a review of databases used in ear detection and recognition. Ear databases are crucial in developing and evaluating ERS and algorithms. Existing databases are in different sizes with varied factors of influence ranging from illumination to the angle of the pose. A summary of existing databases used in ear recognition research studies is presented in Table 2. A number of these databases are either publicly available or can be acquired under license.

2.6. Methods of Classification

The techniques of ear recognition can be grouped into four broad categories: hybrid, geometric, holistic, and local methods [10].

2.6.1. Geometric Approach

Research on the geometric tendencies of the human ear dates back to the early 1890s, when the French researcher Alphonse Bertillon suggested the potential of the human ear for identifying subjects [21]. Additional improvements using geometric features promoted the development of a Voronoi illustration with adjacency graphs [22].
The geometric method involves the extraction and analysis of geometric features of the human ear. These range from Canny edge detection and contours to statistical features [23]. In Canny edge detection, ear image edges are computed after noise reduction using a Gaussian filter. Edges are then connected to generate a pattern [24]. The start and end points of the ear contours are also useful information sources applicable in generating ear features and recognizable patterns [25]. Other feature-based statistical methods represent ear images using parameters such as ear height, width, and the angles between ear portions [26]. The work [27] presents a detailed taxonomy of ear features used for recognition by both machines and humans, such as texture, structure, and details. Typical texture-related features include ear type, skin colour, ear size, and shape, all extractable using linear discriminant analysis and principal component analysis algorithms. Ear features are also derived using more prominent methods, like the local binary pattern [28], SIFT [29], and Gabor filters [30], on ear structures such as the lobes, contours, and folds of the ear to represent its distinctiveness.
However, distortion-invariant methods in ear geometry make only the required details available. This leaves the approach over-dependent on edge detectors, such that only geometric ear information is considered, with little emphasis on texture information.
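The first stages of the edge-based pipeline just described (Gaussian noise reduction followed by gradient computation, the opening steps of Canny edge detection) can be sketched in plain numpy. This is an illustrative simplification: a single global threshold replaces Canny's non-maximum suppression and hysteresis, and the "ear" is a synthetic disc.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve(img, kernel):
    """Same-size convolution with edge padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img, thresh=0.5):
    smooth = convolve(img, gaussian_kernel())                  # noise reduction
    gx = convolve(smooth, np.array([[-1, 0, 1]], float))       # horizontal gradient
    gy = convolve(smooth, np.array([[-1], [0], [1]], float))   # vertical gradient
    mag = np.hypot(gx, gy)
    # simple threshold standing in for non-maximum suppression + hysteresis
    return (mag > thresh * mag.max()).astype(np.uint8)

# Synthetic "ear": a bright disc on a dark background
yy, xx = np.mgrid[0:40, 0:40]
disc = ((xx - 20)**2 + (yy - 20)**2 < 100).astype(float)
edges = edge_map(disc)
```

The resulting binary map marks the disc boundary while the flat interior stays empty, which is exactly the behaviour geometric methods then exploit to trace contours.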

2.6.2. Holistic Approach

In the holistic method, the overall stance of the ear is used to calculate input representations. It provides reasonable performance, particularly for suitably processed images. Hence, the approach requires normalization procedures before the extraction of desired features to ensure quality performance.
In this study, several studies on holistic techniques were reviewed. Ref. [31] conducted preliminary research on Force Field Transformation (FFT) for automatic ear recognition and returned a recognition rate of 99% on about 252 images in the XM2VTS database. Ref. [32] furthered the application of FFT with the underlying principle of Newton’s law of gravitation to consider symmetric image pixels.
Experiments on the USTB IV database by [33] registered a comparatively low recognition rate of 72.2%. Gabor filters are also capable of identifying detailed texture data; when fused, their recognition accuracy varies between 92.06% and 95.93% [32]. Dimensionality reduction techniques such as PCA [31,34], ICA [35], and matrix factorization [36] project higher-dimension vectors into lower dimensions while retaining their distinct features. Selected wavelet coefficients were used by [37] in repeated steps to represent features of ear images from the IITK database, with a stated recognition accuracy of 96%. In their experiment on the UND and FEUD databases, Ref. [38] identified the suitability of sparse representations under changing degrees of illumination and pose.
In [39], numerical methods were used in composing six varied feature vectors that serve as feedback for a back propagation neural network for classifying moment invariant feature sets.
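The subspace projection used by several of the holistic methods above (PCA over whole-ear images, the so-called "eigen-ears") can be sketched as follows. The data here are random stand-ins for flattened, normalized ear images, so the example illustrates only the mechanics, not real recognition performance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 50 flattened 16x16 "ear" images
# (a real system would use cropped, normalized ear photographs)
X = rng.normal(size=(50, 256))

# PCA via SVD: find the top-k "eigen-ear" directions of the centred data
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
eigen_ears = Vt[:k]               # principal axes; each row reshapes to 16x16
features = Xc @ eigen_ears.T      # 50 x 10 low-dimensional feature vectors

# A probe image is matched by projecting it into the same subspace
probe = rng.normal(size=256)
probe_feat = (probe - mean) @ eigen_ears.T
nearest = int(np.argmin(np.linalg.norm(features - probe_feat, axis=1)))
```

Holistic methods depend on the normalization step preceding this projection: if probe and gallery images are not aligned and scaled consistently, the subspace distances lose their meaning.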

2.6.3. Local Approach

The local method depends on local areas at certain locations in an image, to the extent of encoding texture details, such that the region of interest does not automatically match structurally significant parts. Studies such as [40] present SIFT as a robust algorithm suitable for feature extraction under changing conditions. For instance, SIFT can accommodate variations in pose of about 20 degrees [32]. Generally, assigning landmarks to ear images before training ensures proper filtering and matching operations in the local technique. Though the number of SIFT landmarks can be so high that obtaining an exact assignment is experimentally impossible, Ref. [41] attained a recognition rate of 91.5% on the XM2VTS database, with possibilities for further improvement to 96%. Subsequent studies by [42] decomposed ear images into distinct colour segments with a reduced error margin, identifying and calculating unique identifiers for each key point detected. Unlike other approaches, local descriptors have varying degrees of complexity and are often combined into hybrid techniques to provide more reliable results in ear recognition [43].
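Local descriptors encode texture in small neighbourhoods; the local binary pattern mentioned in Section 2.6.1 is one of the simplest and can be sketched compactly (SIFT itself is too involved for a short example). The 8x8 input below is a synthetic patch standing in for a cropped ear region.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern code for each interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # one bit per neighbour: 1 if the neighbour is at least as bright
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Texture feature vector: normalized 256-bin histogram of LBP codes."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Synthetic 8x8 texture patch standing in for a cropped ear region
img = (np.arange(64).reshape(8, 8) % 7).astype(np.uint8)
feat = lbp_histogram(img)
```

In practice the histogram is computed per image region and the regional histograms are concatenated, which is what makes the descriptor local rather than holistic.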

2.6.4. Hybrid Approach

The hybrid technique involves the use of multiple parameters to improve the performance of recognition systems [5]. Edge models are initially generated from training images before adjustment into actual edges, as shown in [44]. Similarly, a fusion of Tchebichef moment descriptors and the triangle ratio method was experimentally determined in [45], while [46] achieved a recognition accuracy of 99.2% on the USTB II database.
The study of [47] famously combined PCA and wavelets, while [39] opted for a fusion of the Haar wavelet and LBP. The sparse representation algorithm by [48] was used on gray-level positioning features before the initial dimension reduction procedures with LDA by [49]. In wavelet transforms, coefficient thresholds are required to obtain feature vectors that are particularly useful in recognition and identification systems [50].
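The wavelet coefficient thresholding mentioned above can be sketched with a single-level (unnormalized) Haar transform in numpy. The 10% retention rate and the gradient-like test image are illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: approximation + three detail sub-bands."""
    a = img[0::2, :] + img[1::2, :]          # row sums
    d = img[0::2, :] - img[1::2, :]          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0     # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0     # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0     # diagonal detail
    return ll, lh, hl, hh

def wavelet_features(img, keep=0.1):
    """Threshold detail coefficients, keeping roughly the largest `keep` fraction."""
    ll, lh, hl, hh = haar2d(img.astype(float))
    details = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
    t = np.quantile(np.abs(details), 1 - keep)     # data-driven threshold
    return np.where(np.abs(details) >= t, details, 0.0)

# Synthetic 16x16 gradient image standing in for a grayscale ear image
img = np.add.outer(np.arange(16.0), np.arange(16.0))
feat = wavelet_features(img)
```

Zeroing the small coefficients yields a sparse, compact feature vector; the hybrid methods cited above then feed such vectors into a second descriptor or classifier.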

2.7. Ear Recognition Stages

In ear recognition systems, ear images are captured using a specific device. The images are then subjected to a preliminary stage of determining potential regions of interest using algorithms before being processed by a classifier, where details are enhanced before further procedures [51]. Essentially, the stages required in ear recognition are highlighted below:

2.7.1. Pre-Processing

This is the first step in ensuring the usability of acquired images. It involves the removal of unwanted background information (noise) before further processing. The techniques used are divided into intensity and filter methods.
Intensity Method: Analysing coloured images for edge and feature detection can be very complex [23]. Hence, a three-channel (RGB) image is often reduced to a single channel (grayscale) to minimize complexity [52]. A method of spreading image intensity across a histogram, known as histogram equalization, is also sometimes applied.
Filter Method: In the filter method, noise reduction and feature enhancement are achieved using fuzzy technology [24]. Mean, median, Gaussian, and Gabor filters are prominent examples used for a similar purpose.
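The two intensity operations above, grayscale reduction and histogram equalization, can be sketched in numpy. The luminance weights follow the common ITU-R BT.601 convention, and the input is a synthetic low-contrast image; both are illustrative choices.

```python
import numpy as np

def to_grayscale(rgb):
    """Reduce a three-channel (RGB) image to one channel using luminance weights."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize(gray):
    """Histogram equalization: remap intensities via the cumulative distribution."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # cdf at the darkest value present
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

rng = np.random.default_rng(1)
# Low-contrast stand-in image: all intensities squeezed into [60, 120)
rgb = rng.integers(60, 120, size=(32, 32, 3), dtype=np.uint8)
gray = to_grayscale(rgb)
eq = equalize(gray)
```

After equalization the intensities span the full 0–255 range, which is what makes subsequent edge and feature detection more reliable.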

2.7.2. Feature Extraction

The task of reducing the dimensions of an image for proper identification is known as feature extraction [53]. The features of an image must be precisely and correctly extracted using certain constituents of ear images, such as texture, colour, and shape. Subsequently, research parameters have been established to further determine the performance of recognition systems [9].

2.7.3. Classification

The classification or authentication stage is the final stage in the recognition process, where the feature set of the probe image is compared with a database image using various authentication techniques [23]. Many studies have been conducted on the stages involved in recognition of ear patterns. A summary of the common methods used by researchers for developing efficient and effective ERS is presented in Table 3.
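The comparison of a probe feature set against stored templates described above can be sketched as a nearest-neighbour search. The cosine-similarity measure, the `identify` helper, and the 0.8 acceptance threshold are illustrative assumptions rather than a method from the reviewed studies.

```python
import numpy as np

def identify(probe, gallery, labels, threshold=0.8):
    """Match a probe feature vector against enrolled templates by cosine similarity."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    sims = g @ p
    best = int(np.argmax(sims))
    # reject as unknown when even the best match falls below the decision threshold
    label = labels[best] if sims[best] >= threshold else None
    return label, float(sims[best])

rng = np.random.default_rng(2)
gallery = rng.normal(size=(5, 64))                 # 5 enrolled ear templates
labels = ["A", "B", "C", "D", "E"]
probe = gallery[2] + 0.05 * rng.normal(size=64)    # noisy sample of subject "C"
who, score = identify(probe, gallery, labels)
```

Raising the threshold trades false accepts for false rejects, which is the usual tuning knob in the authentication stage.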

2.8. Deep Learning Approaches in Ear Recognition

In this study, a relationship is established between the most crucial stage (feature extraction) and the classification techniques, in relation to the number of authors applying them.
Although deep-based schemes are often data-hungry, requiring significant processing time, several light, computationally fast variants have recently evolved [66,67].
In deep learning, the more prominent feature extraction techniques include the Gabor mean [54], ANN classifiers, the Haar wavelet [50], Linear Discriminant Analysis (LDA) [68,69], the Back Propagation Neural Network [70], FFT [23], Principal Component Analysis (PCA) [71], edge-based methods [12], and Voronoi diagrams [20].
Over time, the field of ear recognition has naturally developed along traditional machine learning methods, with few of its methods showing resilience to unconstrained conditions, including lighting and pose variations [69], hence inhibiting the overall performance of traditional systems.
Traditional ear detection and feature extraction methods typically rely on physiological attributes of the ear for normalization, feature extraction, and classification [69,72]. For instance, in [73], various geometrical attributes of the ear were trained with neural classifiers before the appearance of the inner and outer ear was suggested by [74]. Similarly, a combination of ear shape, average, centroid, and distance between pixels has been used to extract features geometrically using contour algorithms [75]. The work [58] extracted features using the exterior ear edge and other local geometric features. Though these procedures appear straightforward, the performance level is often significantly low due to other salient processes involved [23].
Techniques involving subspace learning, such as PCA, LDA, and ICA, sometimes referred to as “Eigenears”, have been experimentally determined to be suitable for local ear contour feature extraction [23]. More recently, the work [61] used a combination of multi-discriminative attributes and dimension reduction techniques to locally extract features of the ear. Such fusion techniques are referred to as hybrid; they are usually more computationally expensive, but deliver higher recognition performance than individual local, holistic, and geometric methods [76].
Nevertheless, traditional learning methods in ear recognition are severely hampered by more complex realities [72]. Even more interesting is the recent research focus which involves obtaining ear images in unrestrained conditions, generally referred to as in the wild. Traditional approaches to human ear recognition often rely on the preliminary processing of images, complex feature extraction, and determination of suitable classifiers [70]. These challenges have opened a new landscape as the research focus has gradually shifted to the automation of biometric identification [77].

3. Results Analysis

This section presents a discussion of search strategy outcomes to provide answers to research questions. Subsequently, different subsections are structured to highlight interpretations of the findings.

3.1. Search Strategy 1: Source

  • RQ1: What is the state of the art in ear recognition research?
In the initial phase, a categorized search was used to identify similar articles on ERS and Neural Networks using paper titles and related keywords before developing a concluding search technique. The search for similar works was conducted for articles between 2010 and 2020 from the following sources: Springer, Elsevier, ACM, IEEE, Sage, Wiley, MIT Press, Taylor & Francis, Emerald and Science Direct. Figure 1 shows the number of relevant articles from selected sources, thus addressing RQ1.

3.2. Relevance of Publication

The 74 selected publications show that IEEE had the highest number with 15 relevant articles, followed by Springer having 12 relevant articles, Elsevier published 11, while Science Direct published 9 relevant articles. Taylor & Francis, Emerald, ACM, and Sage had 8, 8, 7 and 3 articles, respectively, while Wiley and MIT had one relevant publication each.
Ear recognition remains an active area of research that continues to generate diverse interest. The total numbers of relevant publications (with corresponding levels of citation) from 2011 to 2020 are 2, 3, 5, 5, 4, 8, 7, 12, 10, and 13, respectively, confirming the steady rise of neural network techniques, with the year 2020 having the highest number of relevant articles within the decade.
Although diverse methods of pre-processing, feature extraction, and classification exist in the recognition process, there is an upward surge in the use of neural network methods for classification in ear recognition systems. Reasons for this might be inferred from the increasing demand for more foolproof biometric identification systems requiring large datasets and the ability of neural networks to train on very large datasets autonomously.

3.3. Search Strategy 3: (Method)

Ear recognition techniques vary. Over time, several authors have experimentally determined the performance of ERS using single or combined approaches on a wide array of datasets. Table 4 presents a summary of identified works containing the metrics used in ear recognition.
Previous studies have highlighted the numerous methods applied in the process of ear recognition, including local, holistic, geometric, and hybrid. The study of the 74 related articles carefully selected from the literature [7–180] revealed that 65%, 20%, 12%, and 8% of the studies employed local, hybrid, holistic, and geometric methods, respectively. Although works of literature on ear biometrics abound, a concise summary of some existing ear recognition approaches from the list is presented in Table 5. A summary of the pros and cons of the different sub-areas of the ear recognition stages is given in Table 6 in Section 4.
In this study, the authors of the selected articles were divided into five groups. These categories represent the level of ERS implementation in each article, in terms of whether the study was based on:
  • an assessment of existing algorithms on a given dataset (A);
  • a proposed or yet-to-be-evaluated techniques (S);
  • a designed templates using existing procedures (D);
  • planning and assessment with studies based on established procedures (PA);
  • newly proposed and executed techniques (PE).
The results showed that categories A, S, D, PA, and PE returned 26, 19, 8, 9, and 13 articles, respectively. The details of the articles in each category are given in Table 7 (see Section 4).
Results show that 25.33% of the methods used in the selected articles were only suggested (proposed) and not implemented. This is likely connected with the limited availability of ear databases collected in unconstrained situations for experimental studies.
  • RQ2: What are the contributions of deep learning to ear recognition in the last decade?
At present, the acceptance of deep learning techniques is increasing, as they combine the traditional steps in the recognition process into single connected models [72]. Deep learning algorithms have overcome many of the challenges associated with machine learning algorithms, particularly those associated with feature extraction techniques, while also having the ability to handle biometric image transformations. Consequently, attempts at ear detection using neural networks, though initially limited, are rapidly gaining pace. Early attempts by [160] focused on multi-class projection extreme learning machine methods to augment performance. In [10], a concise and detailed review of advances in ear detection using machine learning was presented. Geometric morphometrics and neural networks were suggested in [57] to compare non-automated instances. Ref. [87] developed a neural network model to authenticate responses originating from the human ear, with a 7.56% and 13.3% increase in identification and verification tasks, respectively.
However, variants of the neural network, such as the Convolutional Neural Network (CNN), have shown remarkable performance against conventional systems [161]. The CNN design originates from [162]; it is principally a multi-layer network with capabilities to handle several invariances [169]. Subsequent experimental studies have gradually adapted its use to the recognition of specific human biometric traits. It eliminates the cumbersome pre-processing procedures associated with traditional methods [163,164], and its robustness to texture and shape variation makes it dominant over traditional approaches [20,24].
Experimental studies by [72] compared the performance of some traditional ear recognition approaches to a variant of the CNN, with results exceeding those of the initial descriptors by over 22%. Nonetheless, ear recognition using deep neural networks is still significantly hampered by limited ear recognition databases and few experimental images, leading to data augmentation [18].
  • RQ3: Is there sufficient publicly available data for ear recognition research?
A summary of the findings in Table 2 indicates a predominance of free, publicly available ear databases. This research identifies 27 publicly available datasets. Findings suggest the existence of publicly available ear databases since 1995; however, ear databases have grown to further accommodate different poses, angles, occlusions, and modes of collection.
Ear biometrics represents an active field of research. Nevertheless, ear image databases are very rare and usually strongly limited [165]. Further still, an absence of a unified large-scale publicly available ear database still represents a major challenge in the overall objective evaluation of ear recognition systems.
For instance, as of 2017, the reported performance of ear-recognition techniques has surpassed the rank-1 recognition rate of 90% on most available datasets [10]. This fact suggests that though technology has reached a level of maturity that easily handles images captured in laboratory-like settings, presently available ear databases are inadequate. Consequently, more challenging datasets are needed to identify open problems and provide room for further advancements.

3.4. Comparison with Related Surveys

ERS are not as popular as other biometric systems such as fingerprint, face, vein, and iris systems [113]. Data augmentation of images in neural networks is often a challenging factor. Hence, [166] suggested a learning method using limited datasets to train the network in ear image recognition. Similarly, Ref. [69] proposed a means of ear identification using transfer learning. Ref. [10] also recommended a mean method to improve the performance of datasets and suggested various architectures and controlled learning on previously trained datasets to develop a widely accessible CNN-based ear recognition method. In order to improve upon factors that affect image acquisition, such as contrast, position, and light intensity, a framework for ear localization using a histogram of oriented gradients (HOG) and a support vector machine (SVM) was developed by [116] before subsequent CNN classification. A discriminant method was suggested by [61] to extract ear features in a pecking order, while [21] introduced dual images using SVM to tackle the challenge of limited images per subject. In exploring hand-crafted options, Ref. [167] combined CNN and handcrafted features to augment deep learning techniques, thus suggesting that deep learning can be complemented with other techniques.
This survey extends the review in [23], whose focus was mainly on the three core phases of ear biometric research: pre-processing, feature extraction, and authentication. Consequently, a comprehensive overview of the contributions of prior research is provided, with particular emphasis on the methods used for feature extraction and classification. Unlike previous reviews, this study offers both qualitative and quantitative analyses of prevailing techniques through diverse search strategies, as done in [11]. To the best of our knowledge, this study is the first to provide an in-depth synopsis and grouping of research approaches in ear biometrics across different categories: existing approaches and methods.
Table 7 in Section 4 shows the ear databases predominantly used by researchers across the reviewed articles.
A careful review of selected publications revealed some factors highlighted below as major determinants of the challenge raised in R3.
  • Poor feature selection: feature selection has diverse applications, as it aims to reduce the factors that can degrade classifier performance. Many images are acquired with inherent background noise; invariably, poor feature selection results in poor classification.
  • Hardware dependence: a common drawback identified across the selected literature is the resource-intensive nature of neural networks and their associated costs. They often require large volumes of training data, placing heavy computational demands on processors.
  • Gaps between industry, implementation, research, and deployment: the reviewed articles revealed a missing link between industry, researchers, and other stakeholders, such that the majority of related experimental studies were performed for purely academic purposes, limiting the potential to fine-tune existing technologies to suit user requirements.
Consequently, a need for merging research with actual deployment at user-ends is crucial in assessing the strengths and weaknesses of recognition systems and in providing relevant state-of-the-art systems capable of mitigating emerging vulnerabilities.

3.5. State of the Art in Ear Biometrics over the Last Decade

In the past few years, ear biometrics has been prominent in achieving state-of-the-art results in human verification and identification [173]. Although poor-quality images have often been a limitation, improved methods have since been developed to address them. Various authors [181,182] have consistently explored novel approaches targeting optimal performance of ear biometric systems. Work on ear biometrics has largely concentrated on ear detection approaches, as seen in [183,184,185,186,187,188,189]. The fundamental goal of researchers has been, and continues to be, developing an ear recognition model that overcomes all detection challenges [183], yet ear detection remains an image segmentation problem. In [184], a deep CNN with contextual information was applied for ear detection in 2D profile face images; a single-stage architecture performed detection and classification with scale invariance. The Context-aware Ear Detection Network (ContexedNet) developed in [190] uses a context provider that extracts probability maps corresponding to facial element locations from the input image, and a segmentation model specifically designed for ears that incorporates these probability maps into a context-aware, segmentation-based ear detection algorithm. Extensive tests on the AWE and UBEAR datasets showed very encouraging results for ContexedNet compared to other state-of-the-art methods. In [185], a deep learning object detector, Faster R-CNN, was developed based on CNN features, PCA, and a genetic algorithm (GA) for feature extraction, dimensionality reduction, and selection, respectively. The work in [186] went further, proposing a deep network for segmenting and normalising ear print patterns, trained on the IITD dataset.
Furthermore, the authors in [113] proposed an ear detection method based on Faster Region-based Convolutional Neural Networks (Faster R-CNNs). On the UBEAR and UND datasets, the model delivered highly competitive results by building on advances in general object detection. El-Naggar et al. later presented a conceptually related method in [191], which again demonstrated the effectiveness of the Faster R-CNN architecture for ear identification. A geometric deep learning-based method for ear recognition was reported in [76]; the model uses Gaussian mixture models (GMMs) to define convolutional filters and permits the use of CNNs on graphs. On this basis, the authors developed a competitive detection framework that is highly rotation-resistant (i.e., rotation equivariant) and has other advantageous properties. Using a multi-path model topology and detection grouping, the authors in [123] proposed a CNN-based method that locates ear regions in images. The core idea is to search for ears at various scales, similar to the contextual modules found in contemporary object detection frameworks such as [192,193], to enhance detection performance. The authors in [190] explored a related approach, employing general object detection models with contextual modules for the ear detection task.
The work in [187] studied ear landmark detection utilising image contrast, a Laplace filter, and Gaussian blurring. A Sobel edge detector and a modified adaptive search window were applied to highlight ear edges and detect the ear region, while [188] automatically identified the primary anatomical contour features in depth-map images to detect the auricular elements of the ear. In [189], an Ear Mask Extraction (EME) network, a normalization algorithm, and a novel Siamese-based CNN (CG-ERNet) were used to segment, align, and extract deep ear features, respectively. CG-ERNet used curvature Gabor filters to exploit domain-specific information, while triplet loss, triplet selection, and an adaptive margin were adopted for better loss convergence.
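The edge-based pre-processing described above, Gaussian blurring followed by Sobel filtering, can be sketched in a few lines of SciPy. This is a generic illustration of the technique, not the specific detector of [187]; the synthetic step image stands in for an ear boundary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_magnitude(img, sigma=1.0):
    """Blur to suppress noise, then combine horizontal and vertical Sobel responses."""
    smooth = gaussian_filter(img.astype(float), sigma=sigma)
    gx = sobel(smooth, axis=1)   # horizontal gradient
    gy = sobel(smooth, axis=0)   # vertical gradient
    return np.hypot(gx, gy)

# Synthetic "ear boundary": a vertical intensity step at column 16.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
mag = edge_magnitude(img)
print("strongest edge column:", int(np.argmax(mag.sum(axis=0))))
```

A search window, adaptive or fixed, would then be placed around the columns and rows where this magnitude peaks.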
Recent technological advancements in artificial intelligence, particularly convolutional neural networks, have improved computer vision and, in turn, detection, recognition, regression, and classification in ear biometrics. Some of these innovations are highlighted in [189] and include object detection methods such as Faster R-CNN, Mask R-CNN, SSD, and VGG. Although these methods comprise several non-linear layers with a myriad of parameters, they may be further trained on ear recognition databases.
The work [194] employed a deep unsupervised active learning (DUAL) model to learn new features from ear images during testing, without any feedback or correction. Using conditional Deep Convolutional Generative Adversarial Network (DCGAN) and Convolutional Neural Network (CNN) models, Ref. [195] proposed a framework that includes a generative model for colouring dark and grayscale images as well as a classification model. When tested on the limited AMI and the unconstrained AWE ear datasets, the model displayed encouraging results. A fast CNN-like network (TR-ICANet) was suggested for ear print recognition in [67]: a CNN detected the ear landmarks, PCA geometrically normalized scale and pose, and the convolutional filters were learned through unsupervised learning using Independent Component Analysis (ICA).
Most ear identification techniques are affected by feature selection and weighting, which is a difficult problem in ERS and other pattern recognition applications [196]. The authors presented a deep CNN feature learning technique with a Mahalanobis distance metric: various deep features were retrieved from pre-trained VGG and ResNet models, discriminant correlation analysis was used to reduce dimensionality, the Mahalanobis distance was learned based on the LogDet divergence metric, and a K-nearest neighbour classifier was implemented for recognition. In [197], unconstrained ear recognition was examined using transformer neural networks, namely the Vision Transformer (ViT) and data-efficient image transformers (DeiTs). The recognition accuracy of the ViT-Ear and DeiT-Ear models was on par with previous CNN-based techniques and other deep learning algorithms; without data augmentation, the ViT and DeiT models were shown to outperform ResNets. The authors in [198] utilized Deep Residual Networks (ResNets) to create ear recognition models that act as feature extractors feeding an SVM classifier. The ResNet was trained and fine-tuned on a corpus drawn from various ear datasets, and ensembles of networks with different depths were deployed to improve the performance of the entire system.
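A minimal NumPy sketch of the Mahalanobis-distance nearest-neighbour step follows. It uses toy 2D "deep features" rather than real VGG/ResNet embeddings, and a plain inverse covariance matrix in place of the LogDet-learned metric, so it illustrates only the classification mechanics:

```python
import numpy as np

def mahalanobis_knn(query, gallery, labels, VI, k=3):
    """Classify `query` by majority vote among its k Mahalanobis-nearest gallery points."""
    diff = gallery - query
    d2 = np.einsum('ij,jk,ik->i', diff, VI, diff)   # squared Mahalanobis distances
    nearest = labels[np.argsort(d2)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(1)
# Two toy identity clusters standing in for deep feature vectors.
a = rng.normal([0, 0], 0.5, size=(20, 2))
b = rng.normal([5, 5], 0.5, size=(20, 2))
gallery = np.vstack([a, b])
labels = np.array([0] * 20 + [1] * 20)

VI = np.linalg.inv(np.cov(gallery, rowvar=False))   # inverse covariance as the metric
print("predicted identity:", mahalanobis_knn(np.array([0.3, -0.2]), gallery, labels, VI))
```

A learned metric would simply replace `VI` with the positive-definite matrix produced by the LogDet optimization.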
A six-layer deep convolutional neural network was proposed in [199] to supplement other biometric systems in a pandemic scenario. When deployed in conjunction with an appropriate surveillance system, the method was found to be very effective at identifying people in large crowds in uncontrolled environments. A Particle Swarm Optimization (PSO)-based ERS was presented in [200] and evaluated with 50 and 150 images from the AMI ear database, achieving recognition accuracies of 98% and 96.6%, respectively, superior to benchmark approaches such as PCA and the Scale Invariant Feature Transform (SIFT).
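To make the PSO component concrete, the following self-contained NumPy sketch minimises a stand-in matching-error function. The quadratic surrogate objective and all parameter values are illustrative assumptions, not the actual ear-template objective of [200]:

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best particle swarm minimiser."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (personal best) + social pull (global best).
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, objective(g)

# Surrogate "matching error": distance of candidate parameters to an optimum.
target = np.array([1.0, -2.0])
best, err = pso(lambda p: np.sum((p - target) ** 2))
print(best.round(3), err)
```

In an ERS, `objective` would instead score how well a candidate feature weighting or template alignment matches the enrolled ear.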
Despite the advances in deep learning, ear recognition approaches have grown to include bi- and multi-modal methods. For instance, the works [201,202] underscore the accuracy of multimodal biometric systems in uncontrolled scenarios by integrating ear and profile face images. In [201], each biometric's texture characteristics were extracted using histogram-based local descriptors, local directional patterns, binarized statistical image features, and local phase quantization; the local descriptors from both modalities were combined at the feature and score levels to build a KNN classifier for human identification. In [202], a high-dimensional feature vector represented the ear and face modalities independently in the frequency and spatial domains using local phase quantization (LPQ) and local directional patterns (LDP). To create more non-linear and discriminative characteristics for the kNN classifier, the feature set was merged with the kernel discriminative common vector (KDCV). Experimental results on two benchmark datasets demonstrated that the suggested strategy outperforms the individual modalities and other cutting-edge techniques.

3.6. Threats to Validity

Considering the threats to the review procedure and possibly inaccurate data extraction, the papers highlighted in this review were selected based on the process described earlier. The details in Figure 1 reflect some of the answers to the research questions. Numerous articles may extend beyond the search parameters used; hence, the exclusion of one or more vital but related articles remains possible. Consequently, a reference check was carried out at the initial stage to prevent the omission of such articles. The final article selection was based on parameters such as precision of information, quality assessment, and clarity of methodology. The articles were further evaluated by comparing the results published by various authors to avoid overestimation.

4. Discussions, Limitations, and Taxonomy

This study underscores the contributions of deep learning to ear recognition systems while also highlighting a summary of contemporary techniques discussed in other studies. Security is paramount and accurate recognition of target elements from pre-processing to classification is critical in ensuring the integrity of any biometric system. The contributions of deep learning are multifaceted and far-reaching. Studies reviewed affirm the enormous work done in ERS using minimum distance and support vector machines.
However, newer methods capable of autonomously training on large datasets remain underexplored. Based on the articles selected, the advantages and disadvantages of the various sub-units in the ear recognition stages are indicated in Table 6. Only a small number of novel classification approaches exist for ERS. The work [168] highlighted a few bio-inspired algorithms, such as cuckoo search and particle swarm optimization. Although some of the listed algorithms have widespread application domains, their significance lies primarily in solving the optimization challenge in location search. Consequently, in-depth application of deep learning to the pre-processing and feature extraction stages of ear recognition systems is required in subsequent research.

4.1. Limitations

In line with the research questions, a thorough review of research articles on the contributions of deep learning to ERS was conducted, with 74 publications eventually identified as sufficient to achieve the research objectives. However, most of the listed papers were published between 2015 and 2022. Therefore, we cannot categorically state that all available studies in this research domain have been exhausted, considering the rate and volume of published research articles. Additionally, non-English articles were not considered during the article search.

4.2. Specific Contributions

Presently, the need to develop a black ear-pose invariant ear recognition database is motivated by the following:
  • This study identifies a need to evaluate the performance of ear recognition systems on ear images of different races before they are deployed in real-world scenarios. However, existing ear recognition databases contain mostly Caucasian ear images, while other ethnic groups such as Black, Asian, and Arab people are largely ignored [169].
  • Black people form 18.2% of the world population; however, previous research efforts toward black ear recognition have not been established, and no publicly available dataset dedicated to black ear recognition exists in the literature reviewed.
  • This study observed that ear images are often partially or fully occluded by hair, clothing, headphones, hats/caps, scarves, rings, and other obstacles [170]. Such occlusions and viewpoint changes may cause a significant decline in the performance of an ear recognition algorithm (ERA) during identification or verification tasks [171]. Therefore, a reliable ear recognition system should be equipped with automated occlusion detection to avoid misclassification of occluded samples [51].
Therefore, ear image samples were collected from 152 African (black-skinned) individuals at a public university in Nigeria. The dataset contains left and right ear images of the volunteers at pose angles of 0°, 30°, and 60°, respectively, with some images containing head scarfs, earrings, ear plugs, etc., thus making the dataset pose- and occlusion-sensitive. The corpus, comprising a total of 907 black ear images, is published and publicly available to researchers at [203]. Figure 2 shows the pose angles of the left and right ear images as captured for each volunteer.
Also, this study classified current state-of-the-art techniques to reflect the contributions of the highlighted works under three core categories: approaches, performance parameters, and trait selection [204]. Figure 3 provides an explicit description of this taxonomy, and the complete classification results of the articles are presented in Table 7.

5. Conclusions and Future Direction

Although a high volume of research is geared toward improving the recognition accuracy of biometric systems, none of these techniques has achieved 100% accuracy. In this study, an SLR showing the current contributions of deep learning to the different stages of ear recognition is presented. Before screening, a total of 1121 articles were returned during the preliminary search, followed by a thorough analysis of the existing contributions of deep learning, the research questions, and the various methods used in the recognition process. In the end, 74 articles were deemed relevant to the study and selected for further analysis.
In terms of the number of publications per year, the results indicate that significant contributions were made to ear recognition in 2018, with 18 relevant articles, closely followed by 2016 with 16 articles. Results on the contributions of deep learning obtained from Table 7 showed that CNNs, other architectures, and unspecified architectures accounted for 51.95%, 18.18%, and 29.87%, respectively. Similarly, local, geometric, and hybrid feature extraction approaches accounted for 60.61%, 18.18%, and 21.21%, respectively. For studies that employed existing or newly developed image databases, the analysis revealed that 85.42% (82) of the articles used one database or another, while 14 did not use any database.
Contrastingly, the analysis of article status showed a gap between proposed methods (S) and proposed-and-executed works (P&E), which accounted for 25.33% and 17.33%, respectively. Articles that assessed existing algorithms (A), designed templates (D), or planned and assessed using established procedures (PA) accounted for 34.67%, 10.67%, and 12.0%, respectively.
Traditional machine learning methods were used in 45 (48.91%) of the articles, while 47 (51.09%) employed deep learning methods, reflecting the growth in ear recognition dataset sizes.
Furthermore, the performance metrics used by the authors of the selected articles, including recognition accuracy, template capacity, true acceptance rate, false acceptance rate, false rejection rate, equal error rate, precision, recall, and matching speed, were systematically examined. Interestingly, most studies on ear recognition systems are assessments of existing algorithms on a given dataset, followed by newly proposed or yet-to-be-evaluated techniques.
In real-life applications, speed is of the essence. Future work should investigate various enhancement techniques to improve the speed of feature extraction algorithms in ERS. Although ear biometric technology is renowned for its long history of use, particularly in developed countries, it is still enjoying rapid growth and potential with increasingly dynamic yet secure classification procedures. Establishing an efficient and foolproof ear biometric recognition system is not only a growing concern but also an opportunity to explore the inherent gaps in feature extraction and classification procedures targeted at accurate authentication or identification tasks.

Author Contributions

The manuscript was written through the contributions of all authors. O.G.O. was responsible for the conceptualization of the topic; article gathering and sorting were carried out by O.G.O., A.A.-A., O.‘T.A. and A.Q.; manuscript writing, original drafting, and formal analysis were carried out by O.G.O., A.A.-A., O.‘T.A., A.Q., A.L.I. and J.B.A.; writing of reviews and editing were carried out by A.A.-A., O.‘T.A., A.Q., A.L.I. and J.B.A.; A.A.-A. led the overall research activity. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Agbotiname Lucky Imoize is supported by the Nigerian Petroleum Technology Development Fund (PTDF) and the German Academic Exchange Service (DAAD) through the Nigerian-German Postgraduate Program under grant 57473408.

Data Availability Statement

The black ear recognition dataset is publicly available. Other data that supports the findings in this paper are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Bank Group. Identification for Development Strategic Framework; Working Paper; World Bank Group: Washington, DC, USA, 2016. [Google Scholar]
  2. Atick, J. The Identity Ecosystem of Rwanda. A Case Study of a Performant ID System in an African Development Context. ID4Africa Rep. 2016, 1–38. Available online: https://citizenshiprightsafrica.org/the-identity-ecosystem-of-rwanda-a-case-study-of-a-performant-id-system-in-an-african-development-context/ (accessed on 17 December 2022).
  3. Saranya, M.; Cyril, G.L.I.; Santhosh, R.R. An approach towards ear feature extraction for human identification. In Proceedings of the International Conference on Electrical, Electronics and Optimization Techniques (ICEEOT 2016), Chennai, India, 3–5 March 2016; pp. 4824–4828. [Google Scholar] [CrossRef]
  4. Unar, J.; Seng, W.C.; Abbasi, A. A review of biometric technology along with trends and prospects. Pattern Recognit. 2014, 47, 2673–2688. [Google Scholar] [CrossRef]
  5. Emersic, Z.; Stepec, D.; Struc, V.; Peer, P. Training Convolutional Neural Networks with Limited Training Data for Ear Recognition in the Wild. In Proceedings of the International Conference on Automatic Face Gesture Recognition, Washington, DC, USA, 30 May–3 June 2017; pp. 987–994. [Google Scholar] [CrossRef] [Green Version]
  6. Wang, Z.; Yang, J.; Zhu, Y. Review of Ear Biometrics. Arch. Comput. Methods Eng. 2019, 28, 149–180. [Google Scholar] [CrossRef]
  7. Alaraj, M.; Hou, J.; Fukami, T. A neural network based human identification framework using ear images. In Proceedings of the International Technical Conference of IEEE Region 10, Fukuoka, Japan, 21–24 November 2010; pp. 1595–1600. [Google Scholar]
  8. Song, L.; Gong, D.; Li, Z.; Liu, C.; Liu, W. Occlusion Robust Face Recognition Based on Mask Learning with Pairwise Differential Siamese Network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 773–782. [Google Scholar] [CrossRef] [Green Version]
  9. Li, P.; Prieto, L.; Mery, D.; Flynn, P.J. On Low-Resolution Face Recognition in the Wild: Comparisons and New Techniques. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2000–2012. [Google Scholar] [CrossRef] [Green Version]
  10. Emersic, Z.; Struc, V.; Peer, P. Ear recognition: More than a survey. Neurocomputing 2017, 255, 26–39. [Google Scholar] [CrossRef] [Green Version]
  11. Abayomi-Alli, O.; Misra, S.; Abayomi-Alli, A.; Odusami, M. A review of soft techniques for SMS spam classification: Methods, approaches and applications. Eng. Appl. Artif. Intell. 2019, 86, 197–212. [Google Scholar] [CrossRef]
  12. Youbi, Z.; Boubchir, L.; Boukrouche, A. Human ear recognition based on local multi-scale LBP features with city-block distance. Multimed. Tools Appl. 2019, 78, 14425–14441. [Google Scholar] [CrossRef]
  13. Madhusudhan, M.V.; Basavaraju, R.; Hegde, C. Secured Human Authentication Using Finger-Vein Patterns. In Data Management, Analytics, and Innovation. Advances in Intelligent Systems and Computing; Balas, V., Sharma, N., Chakrabarti, A., Eds.; Springer: Singapore, 2019; pp. 311–320. [Google Scholar] [CrossRef]
  14. Lei, Y.; Qian, J.; Pan, D.; Xu, T. Research on Small Sample Dynamic Human Ear Recognition Based on Deep Learning. Sensors 2022, 22, 1718. [Google Scholar] [CrossRef]
  15. Chen, Y.; Chen, W.; Wei, C.; Wang, Y. Occlusion aware face in painting via generative adversarial networks. In Proceedings of the Image Processing (ICIP), International Conference on IEEE, Beijing, China, 17–20 September 2017; pp. 1202–1206. [Google Scholar]
  16. Tian, L.; Mu, Z. Ear recognition based on deep convolutional network. In Proceedings of the 9th International Congress on Image and Signal Processing, Biomedical Engineering, and Informatics (CISP-BMEI 2016), Datong, China, 15–17 October 2016; pp. 437–441. [Google Scholar] [CrossRef]
  17. Labati, R.D.; Muñoz, E.; Piuri, V.; Sassi, R.; Scotti, F. Deep-ECG: Convolutional Neural Networks for ECG biometric recognition. Pattern Recognit. Lett. 2019, 126, 78–85. [Google Scholar] [CrossRef]
  18. Ramos-Cooper, S.; Gomez-Nieto, E.; Camara-Chavez, G. VGGFace-Ear: An Extended Dataset for Unconstrained Ear Recognition. Sensors 2022, 22, 1752. [Google Scholar] [CrossRef]
  19. Guo, Y.; Xu, Z. Ear Recognition Using a New Local Matching Approach. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP), San Diego, CA, USA, 12–15 October 2008; pp. 289–293. [Google Scholar]
  20. Raghavendra, R.; Raja, K.B.; Venkatesh, S.; Busch, C. Improved ear verification after surgery—An approach based on collaborative representation of locally competitive features. Pattern Recognit. 2018, 83, 416–429. [Google Scholar] [CrossRef]
  21. Bertillon, A. La Photographie Judiciaire, Avec un Appendice Classificationetl Identification Anthropométriques; Technical Report; Gauthier-Villars: Paris, France, 1890. [Google Scholar]
  22. Burge, M.; Burger, W. Ear biometrics in computer vision. In Proceedings of the 15th International Conference on Pattern Recognition. ICPR-2000, Barcelona, Spain, 3–7 September 2000; pp. 822–826. [Google Scholar]
  23. Alva, M.; Srinivasaraghavan, A.; Sonawane, K. A Review on Techniques for Ear Biometrics. In Proceedings of the IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 20–22 February 2019; pp. 1–6. [Google Scholar] [CrossRef]
  24. Chowdhury, M.; Islam, R.; Gao, J. Robust ear biometric recognition using neural network. In Proceedings of the 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; pp. 1855–1859. [Google Scholar]
  25. Kumar, R.; Dhenakaran, S. Pixel based feature extraction for ear biometrics. In Proceedings of the IEEE International Conference on Machine Vision and Image Processing (MVIP), Coimbatore, India, 14–15 December 2012; pp. 40–43. [Google Scholar]
  26. Rahman, M.; Islam, R.; Bhuiyan, I.; Ahmed, B.; Islam, A. Person identification using ear biometrics. Int. J. Comput. Internet Manag. 2007, 15, 1–8. [Google Scholar]
  27. El-Naggar, S.; Abaza, A.; Bourlai, T. On a taxonomy of ear features. In Proceedings of the IEEE Symposium on Technologies for Homeland Security; HST2016, Waltham, MA, USA, 10–11 May 2016; pp. 1–6. [Google Scholar] [CrossRef]
  28. Damer, N.; Führer, B. Ear Recognition Using Multi-Scale Histogram of Oriented Gradients. In Proceedings of the Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Piraeus-Athens, Greece, 18–20 July 2012; pp. 21–24. [Google Scholar]
  29. Tiwari, S.; Singh, A.; Singh, S. Comparison of Adult and Infant Ear Images for Biometric Recognition. In Proceedings of the Fourth International Conference on Parallel Distribution Grid Computing, Waknaghat, India, 22–24 December 2016; pp. 4–9. [Google Scholar]
  30. Tariq, A.; Anjum, M.; Akram, M. Personal identification using computerized human ear recognition system. In Proceedings of the 2011 International Conference on Computer Science and Network Technology, Harbin, China, 24–26 December 2011; pp. 50–54. [Google Scholar]
  31. Chang, K.; Bowyer, K.; Sarkar, S.; Victor, B. Comparison and combination of ear and face images in appearance-based biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1160–1165. [Google Scholar] [CrossRef]
  32. Pflug, A.; Busch, C. Ear biometrics: A survey of detection, feature extraction and recognition methods. IET Biom. 2012, 1, 114–129. [Google Scholar] [CrossRef] [Green Version]
  33. Dong, J.; Mu, Z. Multi-pose ear recognition based on force field transformation. In Proceedings of the 2nd International Symposium on Intelligence in Information Technology Applications, Shanghai, China, 20–22 December 2008; pp. 771–775. [Google Scholar]
  34. Xiao, X.; Zhou, Y. Two-Dimensional Quaternion PCA and Sparse PCA. IEEE Trans. Neural Networks Learn. Syst. 2018, 30, 2028–2042. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, J.; Mu, C.; Qu, W.; Liu, M.; Zhang, Y. A novel approach for ear recognition based on ICA and RBF network. In Proceedings of the International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; pp. 4511–4515. [Google Scholar]
  36. Yuan, L.; Mu, C.; Zhang, Y.; Liu, K. Ear recognition using improved non-negative matrix factorization. In Proceedings of the International Conference on Pattern Recognition, Hong Kong, China, 20–24 August 2006; pp. 501–504. [Google Scholar]
  37. Sana, A.; Gupta, P.; Purkai, R. Ear Biometrics: A New Approach. In Advances in Pattern Recognition; Pal, P., Ed.; World Scientific Publishing: Singapore, 2007; pp. 46–50. [Google Scholar]
  38. Naseem, I.; Togneri, R.; Bennamoun, M. Sparse Representation for Ear Biometrics; Bebis, G., Boyle, R., Parvin, B., Koracin, D., Remagnino, P., Porikli, F., Eds.; Advances in Visual Computing: San Diego, CA, USA, 2008; pp. 336–345. [Google Scholar]
  39. Wang, X.-Q.; Xia, H.-Y.; Wang, Z.-L. The Research of Ear Identification Based On Improved Algorithm of Moment Invariant. In Proceedings of the 2010 Third International Conference on Information and Computing, Wuxi, China, 4–6 June 2010; Volume 1, pp. 58–60. [Google Scholar] [CrossRef]
  40. Bustard, J.D.; Nixon, M.S. Toward Unconstrained Ear Recognition From Two-Dimensional Images. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 2010, 40, 486–494. [Google Scholar] [CrossRef] [Green Version]
  41. Arbab-Zavar, B.; Nixon, S.; Hurley, J. On model-based analysis of ear biometrics. In Proceedings of the Conference on Biometrics: Theory, Applications and Systems, Crystal City, VA, USA, 27–29 September 2007; pp. 1–5. [Google Scholar]
  42. Kisku, R.; Mehrotra, H.; Gupta, P.; Sing, K. SIFT-Based ear recognition by fusion of detected key-points from color similarity slice regions. In Proceedings of the IEEE International Conference on Advances in Computational Tools for Engineering Applications (ACTEA), Beirut, Lebanon, 15–17 July 2009; pp. 380–385. [Google Scholar]
  43. Emersic, Z.; Playa, O.; Struc, V.; Peer, P. Towards accessories-aware ear recognition. In Proceedings of the 2018 IEEE International Work Conference on Bioinspired Intelligence (IWOBI), San Carlos, Costa Rica, 18–20 July 2018; pp. 1–8. [Google Scholar]
  44. Jeges, E.; Mate, L. Model-Based Human Ear Localization and Feature Extraction. Int. J. Intell. Comput. Med. Sci. Image Pro. 2007, 1, 101–112. [Google Scholar]
  45. Liu, H.; Yan, J. Multi-view Ear Shape Feature Extraction and Reconstruction. In Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based System (SITIS), Shanghai, China, 16–18 December 2007; pp. 652–658. [Google Scholar]
  46. Lakshmanan, L. Efficient person authentication based on multi-level fusion of ear scores. IET Biom. 2013, 2, 97–106. [Google Scholar] [CrossRef]
  47. Nosrati, M.S.; Faez, K.; Faradji, F. Using 2D wavelet and principal component analysis for personal identification based On 2D ear structure. In Proceedings of the International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia, 25–28 November 2007; pp. 616–620. [Google Scholar] [CrossRef]
  48. Kumar, A.; Chan, T.-S. Robust ear identification using sparse representation of local texture descriptors. Pattern Recognit. 2013, 46, 73–85. [Google Scholar] [CrossRef]
  49. Galdamez, P.; Arrieta, A.G.; Ramon, M. Ear recognition using a hybrid approach based on neural networks. In Proceedings of the International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014; pp. 1–6. [Google Scholar]
  50. Mahajan, A.S.B.; Karande, K.J. PCA and DWT based multimodal biometric recognition system. In Proceedings of the International Conference on Pervasive Computing (ICPC), Pune, India, 8–10 January 2015; pp. 1–4. [Google Scholar] [CrossRef]
  51. Quoc, H.N.; Hoang, V.T. Real-Time Human Ear Detection Based on the Joint of Yolo and RetinaFace. Complexity 2021, 2021, 7918165. [Google Scholar] [CrossRef]
  52. Panchakshari, P.; Tale, S. Performance analysis of fusion methods for EAR biometrics. In Proceedings of the 2016 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 20–21 May 2016; pp. 1191–1194. [Google Scholar] [CrossRef]
  53. Ghoualmi, L.; Draa, A.; Chikhi, S. An ear biometric system based on artificial bees and the scale invariant feature transform. Expert Syst. Appl. 2016, 57, 49–61. [Google Scholar] [CrossRef]
  54. Mishra, S.; Kulkarni, S.; Marakarkandy, B. A neoteric approach for ear biometrics using multilinear PCA. In Proceedings of the International Conference and Workshop on Electronics and Telecommunication Engineering (ICWET 2016), Mumbai, India, 26–27 February 2016. [Google Scholar] [CrossRef]
  55. Kumar, A.; Hanmandlu, M.; Kuldeep, M.; Gupta, H.M. Automatic ear detection for online biometric applications. In Proceedings of the 2011 3rd International Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, Hubli, India, 15–17 December 2011. [Google Scholar]
  56. Anwar, A.S.; Ghany, K.K.A.; Elmahdy, H. Human Ear Recognition Using Geometrical Features Extraction. Procedia Comput. Sci. 2015, 65, 529–537. [Google Scholar] [CrossRef] [Green Version]
  57. Cintas, C.; Quinto-Sánchez, M.; Acuña, V.; Paschetta, C.; de Azevedo, S.; de Cerqueira, C.C.S.; Ramallo, V.; Gallo, C.; Poletti, G.; Bortolini, M.C.; et al. Automatic ear detection and feature extraction using Geometric Morphometrics and convolutional neural networks. IET Biom. 2017, 6, 211–223. [Google Scholar] [CrossRef]
  58. Rahman, M.; Sadi, R.; Islam, R. Human ear recognition using geometric features. In Proceedings of the International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 5–8 March 2014; pp. 1–4. [Google Scholar]
  59. Canny Edge Detection. Fourier.eng.hmc.edu. 2018. Available online: http://fourier.eng.hmc.edu/e161/lectures/canny/node1.html (accessed on 18 December 2022).
  60. Omara, I.; Li, X.; Xiao, G.; Adil, K.; Zuo, W. Discriminative Local Feature Fusion for Ear Recognition Problem. In Proceedings of the 2018 8th International Conference on Bioscience, Biochemistry and Bioinformatics (ICBBB), Tokyo, Japan, 18–20 January 2018; pp. 139–145. [Google Scholar] [CrossRef]
  61. Omara, I.; Li, F.; Zhang, H.; Zuo, W. A novel geometric feature extraction method for ear recognition. Expert Syst. Appl. 2016, 65, 127–135. [Google Scholar] [CrossRef]
  62. Hurley, D.; Nixon, M.; Carter, J. Force Field Energy Functionals for Ear Biometrics. Comput. Vis. Image Underst. 2005, 98, 491–512. [Google Scholar] [CrossRef] [Green Version]
  63. Polin, Z.; Kabir, E.; Sadi, S. 2D human-ear recognition using geometric features. In Proceedings of the 7th International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 20–22 December 2012; pp. 9–12. [Google Scholar]
  64. Benzaoui, A.; Adjabi, I.; Boukrouche, A. Person identification based on ear morphology. In Proceedings of the International Conference on Advanced Aspects of Software Engineering (ICAASE), Constantine, Algeria, 29–30 October 2016; pp. 1–5. [Google Scholar]
  65. Benzaoui, A.; Hezil, I.; Boukrouche, A. Identity recognition based on the external shape of the human ear. In Proceedings of the 2015 International Conference on Applied Research in Computer Science and Engineering (ICAR), Beirut, Lebanon, 8–9 October 2015; pp. 1–5. [Google Scholar]
66. Sharkas, M. Ear recognition with ensemble classifiers: A deep learning approach. Multimed. Tools Appl. 2022, 81, 43919–43945. [Google Scholar] [CrossRef]
  67. Korichi, A.; Slatnia, S.; Aiadi, O. TR-ICANet: A Fast Unsupervised Deep-Learning-Based Scheme for Unconstrained Ear Recognition. Arab. J. Sci. Eng. 2022, 47, 9887–9898. [Google Scholar] [CrossRef]
  68. Pflug, A.; Busch, C.; Ross, A. 2D ear classification based on unsupervised clustering. In Proceedings of the International Joint Conference on Biometrics, Clearwater, FL, USA, 29 September–2 October 2014; pp. 1–8. [Google Scholar]
  69. Dodge, S.; Mounsef, J.; Karam, L. Unconstrained ear recognition using deep neural networks. IET Biom. 2018, 7, 207–214. [Google Scholar] [CrossRef]
  70. Ying, T.; Shining, W.; Wanxiang, L. Human ear recognition based on deep convolutional neural network. In Proceedings of the 30th Chinese Control and Decision Conference (2018 CCDC), Shenyang, China, 9–11 June 2018; pp. 1830–1835. [Google Scholar] [CrossRef]
71. Zarachoff, M.M.; Sheikh-Akbari, A.; Monekosso, D. Multi-band PCA based ear recognition technique. Multimed. Tools Appl. 2022, 82, 2077–2099. [Google Scholar] [CrossRef]
  72. Alshazly, H.; Linse, C.; Barth, E.; Martinetz, T. Handcrafted versus CNN Features for Ear Recognition. Symmetry 2019, 11, 1493. [Google Scholar] [CrossRef] [Green Version]
73. Moreno, B.; Sanchez, A.; Velez, J. On the use of outer ear images for personal identification in security applications. In Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, Madrid, Spain, 5–7 October 1999; pp. 469–476. [Google Scholar] [CrossRef]
  74. Mu, Z.; Yuan, L.; Xu, Z.; Xi, D.; Qi, S. Shape and Structural Feature Based Ear Recognition; Springer: Berlin/Heidelberg, Germany, 2004; pp. 663–670. [Google Scholar] [CrossRef]
  75. Choras, M. Ear biometrics based on geometrical feature extraction. Electron. Lett. Comput. Vis. Image Anal. 2005, 5, 84–95. [Google Scholar] [CrossRef] [Green Version]
  76. Tomczyk, A.; Szczepaniak, P.S. Ear Detection Using Convolutional Neural Network on Graphs with Filter Rotation. Sensors 2019, 19, 5510. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Abdellatef, E.; Omran, E.M.; Soliman, R.F.; Ismail, N.A.; Elrahman, S.E.S.E.A.; Ismail, K.N.; Rihan, M.; El-Samie, F.E.A.; Eisa, A.A. Fusion of deep-learned and hand-crafted features for cancelable recognition systems. Soft Comput. 2020, 24, 15189–15208. [Google Scholar] [CrossRef]
  78. Traore, I.; Alshahrani, M.; Obaidat, M.S. State of the art and perspectives on traditional and emerging biometrics: A survey. Secur. Priv. 2018, 1, e44. [Google Scholar] [CrossRef] [Green Version]
  79. Prakash, S.; Gupta, P. Human recognition using 3D ear images. Neurocomputing 2014, 140, 317–325. [Google Scholar] [CrossRef]
  80. Raposo, R.; Hoyle, E.; Peixinho, A.; Proenca, H. UBEAR: A dataset of ear images captured on-the-move in uncontrolled conditions. In Proceedings of the 2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM), Paris, France, 11–15 April 2011; pp. 84–90. [Google Scholar] [CrossRef]
  81. Abaza, A.; Bourlai, T. On ear-based human identification in the mid-wave infrared spectrum. Image Vis. Comput. 2013, 31, 640–648. [Google Scholar] [CrossRef]
82. Pandian, A.P.; Ntalianis, K.; Palanisamy, R. (Eds.) Intelligent Computing, Information and Control Systems; Advances in Intelligent Systems and Computing 1039; Springer: Berlin/Heidelberg, Germany, 2019; pp. 176–185. [Google Scholar]
  83. Nait-Ali, A. (Ed.) Hidden Biometrics; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar] [CrossRef]
  84. Srinivas, N.; Flynn, P.J.; Bruegge, R.W.V. Human Identification Using Automatic and Semi-Automatically Detected Facial Marks. J. Forensic Sci. 2015, 61, S117–S130. [Google Scholar] [CrossRef]
85. Almisreb, A.; Jamil, N. Advanced Technologies in Robotics and Intelligent Systems: Proceedings of ITR 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 199–203. [Google Scholar] [CrossRef]
  86. Almisreb, A.; Jamil, N. Automated Ear Segmentation in Various Illumination Conditions. In Proceedings of the IEEE 8th International Colloquium on Signal Processing and Its Applications, Malacca, Malaysia, 23–25 March 2012; pp. 199–203. [Google Scholar]
  87. Kang, J.S.; Lawryshyn, Y.; Hatzinakos, D. Neural Network Architecture and Transient Evoked Otoacoustic Emission (TEOAE) Biometrics for Identification and Verification. IEEE Trans. Inf. Forensics Secur. 2019, 15, 2291–2301. [Google Scholar] [CrossRef]
  88. Rane, M.E.; Bhadade, U.S. Multimodal score level fusion for recognition using face and palmprint. Int. J. Electr. Eng. Educ. 2020. [Google Scholar] [CrossRef]
  89. Saini, R.; Rana, N. Comparison of Various Biometrics Methods. Int. J. Adv. Sci. Technol. 2014, 2, 24–30. [Google Scholar]
  90. Patil, S. Biometric Recognition Using Unimodal and Multimodal Features. Int. J. Innov. Res. Comput. Commun. Eng. 2014, 2, 6824–6829. [Google Scholar]
  91. Khan, B.; Khan, M.; Alghathbar, K. Biometrics and identity management for homeland security applications in Saudi Arabia. Afr. J. Bus. Manag. 2010, 4, 3296–3306. [Google Scholar]
  92. Zhang, J.; Ma, Q.; Cui, X.; Guo, H.; Wang, K.; Zhu, D. High-throughput corn ear screening method based on two-pathway convolutional neural network. Comput. Electron. Agric. 2020, 179, 105525. [Google Scholar] [CrossRef]
  93. Bansal, J.; Das, K.N.; Nagar, A.; Deep, K.; Ojha, A. Soft Computing for Problem Solving. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2017; pp. 1–9. [Google Scholar] [CrossRef]
94. Hansley, E.; Segundo, M.P.; Sarkar, S. Employing fusion of learned and handcrafted features for unconstrained ear recognition. IET Biom. 2018, 7, 215–223. [Google Scholar]
  95. Chen, L.; Mu, Z. Partial Data Ear Recognition From One Sample per Person. IEEE Trans. Hum. Mach. Syst. 2016, 46, 799–809. [Google Scholar] [CrossRef]
  96. Wang, X.; Yuan, W. Gabor wavelets and General Discriminant analysis for ear recognition. In Proceedings of the 8th World Congress on Intelligent Control and Automation, Jinan, China, 7–9 July 2010; pp. 6305–6308. [Google Scholar] [CrossRef]
  97. Fahmi, P.A.; Kodirov, E.; Choi, D.J.; Lee, G.S.; Azli, A.M.F.; Sayeed, S. Implicit Authentication based on Ear Shape Biometrics using Smartphone Camera during a call. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Seoul, Korea, 14–17 October 2012; pp. 2272–2276. [Google Scholar]
  98. Ariffin, S.M.Z.S.Z.; Jamil, N. Cross-band ear recognition in low or variant illumination environments. In Proceedings of the International Symposium on Biometrics and Security Technologies (ISBAST), Kuala Lumpur, Malaysia, 26–27 August 2014; pp. 90–94. [Google Scholar] [CrossRef]
  99. Al Rahhal, M.M.; Mekhalfi, M.L.; Guermoui, M.; Othman, E.; Lei, B.; Mahmood, A. A Dense Phase Descriptor for Human Ear Recognition. IEEE Access 2018, 6, 11883–11887. [Google Scholar] [CrossRef]
  100. Oravec, M. Feature extraction and classification by machine learning methods for biometric recognition of face and iris. In Proceedings of the ELMAR-2014, Zadar, Croatia, 10–12 September 2014; pp. 1–4. [Google Scholar] [CrossRef]
  101. Wu, Y.; Chen, Z.; Sun, D.; Zhao, L.; Zhou, C.; Yue, W. Human Ear Recognition Using HOG with PCA Dimension Reduction and LBP. In Proceedings of the 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 12–14 July 2019; pp. 72–75. [Google Scholar] [CrossRef]
102. Sable, A.H.; Talbar, S.N. An Adaptive Entropy Based Scale Invariant Face Recognition for a Face Altered by Plastic Surgery. Pattern Recognit. Image Anal. 2018, 28, 813–829. [Google Scholar] [CrossRef]
  103. Mali, K.; Bhattacharya, S. Comparative Study of Different Biometric Features. Int. J. Adv. Res. Comput. Commun. Eng. 2013, 2, 30–35. [Google Scholar]
  104. Kandgaonkar, T.V.; Mente, R.S.; Shinde, A.R.; Raut, S.D. Ear Biometrics: A Survey on Ear Image Databases and Techniques for Ear Detection and Recognition. IBMRD’s J. Manag. Res. 2015, 4, 88–103. [Google Scholar] [CrossRef] [Green Version]
  105. Sikarwar, R.; Yadav, P. An Approach to Face Detection and Feature Extraction using Canny Method. Int. J. Comput. Appl. 2017, 163, 1–5. [Google Scholar] [CrossRef]
106. Maity, S. 3D Ear Biometrics and Surveillance Video Based Biometrics. Ph.D. Thesis, University of Miami, Miami, FL, USA, 2017; Open Access Dissertation No. 1789. Available online: https://scholarlyrepository.miami.edu/oa_dissertations/1789 (accessed on 5 December 2022).
  107. Mamta; Hanmandlu, M. Robust ear based authentication using Local Principal Independent Components. Expert Syst. Appl. 2013, 40, 6478–6490. [Google Scholar] [CrossRef]
  108. Galdámez, P.L.; Raveane, W.; Arrieta, A.G. A brief review of the ear recognition process using deep neural networks. J. Appl. Log. 2017, 24, 62–70. [Google Scholar] [CrossRef] [Green Version]
  109. Li, L.; Zhong, B.; Hutmacher, C.; Liang, Y.; Horrey, W.J.; Xu, X. Detection of driver manual distraction via image-based hand and ear recognition. Accid. Anal. Prev. 2020, 137, 105432. [Google Scholar] [CrossRef] [PubMed]
  110. Nguyen, K.; Fookes, C.; Sridharan, S.; Tistarelli, M.; Nixon, M. Super-resolution for biometrics: A comprehensive survey. Pattern Recognit. 2018, 78, 23–42. [Google Scholar] [CrossRef] [Green Version]
  111. HaCohen-Kerner, Y.; Hagege, R. Language and Gender Classification of Speech Files Using Supervised Machine Learning Methods. Cybern. Syst. 2017, 48, 510–535. [Google Scholar] [CrossRef]
  112. Kaur, P.; Krishan, K.; Sharma, S.K.; Kanchan, T. Facial-recognition algorithms: A literature review. Med. Sci. Law 2020, 60, 131–139. [Google Scholar] [CrossRef] [PubMed]
  113. Pedrycz, W.; Chen, S. Deep Learning: Algorithms and Applications. In Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 157–170. [Google Scholar] [CrossRef]
  114. Zhang, Y.; Mu, Z. Ear Detection under Uncontrolled Conditions with Multiple Scale Faster Region-Based Convolutional Neural Networks. Symmetry 2017, 9, 53. [Google Scholar] [CrossRef]
  115. Eyiokur, F.I.; Yaman, D.; Ekenel, H.K. Domain adaptation for ear recognition using deep convolutional neural networks. IET Biom. 2018, 7, 199–206. [Google Scholar] [CrossRef] [Green Version]
  116. Kandaswamy, C.; Monteiro, J.C.; Silva, L.M.; Cardoso, J.S. Multi-source deep transfer learning for cross-sensor biometrics. Neural Comput. Appl. 2016, 28, 2461–2475. [Google Scholar] [CrossRef] [Green Version]
  117. Sinha, H.; Manekar, R.; Sinha, Y.; Ajmera, P.K. Convolutional Neural Network-Based Human Identification Using Outer Ear Images. In Soft Computing for Problem Solving; Springer: Berlin/Heidelberg, Germany, 2018; pp. 707–719. [Google Scholar] [CrossRef]
118. Xu, X.; Liu, Y.; Cao, S.; Lu, L. An Efficient and Lightweight Method for Human Ear Recognition Based on MobileNet. Wirel. Commun. Mob. Comput. 2022, 2022, 9069007. [Google Scholar] [CrossRef]
  119. Hidayati, N.; Maulidah, M.; Saputra, E.P. Ear Identification Using Convolution Neural Network. Available online: www.iocscience.org/ejournal/index.php/mantik/article/download/2263/1800 (accessed on 18 December 2022).
  120. Madec, S.; Jin, X.; Lu, H.; De Solan, B.; Liu, S.; Duyme, F.; Heritier, E.; Baret, F. Ear density estimation from high resolution RGB imagery using deep learning technique. Agric. For. Meteorol. 2018, 264, 225–234. [Google Scholar] [CrossRef]
121. Mikolajczyk, M.; Grochowski, M. Data augmentation for improving deep learning in image classification. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnouście, Poland, 9–12 May 2018; pp. 215–224. [Google Scholar]
  122. Jiang, R.; Tsun, L.; Crookes, D.; Meng, W.; Rosenberger, C. Deep Biometrics, Unsupervised and Semi-Supervised Learning; Springer: Berlin/Heidelberg, Germany, 2020; pp. 238–322. [Google Scholar] [CrossRef]
  123. Pereira, T.M.; Conceição, R.C.; Sencadas, V.; Sebastião, R. Biometric Recognition: A Systematic Review on Electrocardiogram Data Acquisition Methods. Sensors 2023, 23, 1507. [Google Scholar] [CrossRef]
  124. Raveane, W.; Galdámez, P.L.; Arrieta, M.A.G. Ear Detection and Localization with Convolutional Neural Networks in Natural Images and Videos. Processes 2019, 7, 457. [Google Scholar] [CrossRef] [Green Version]
125. Martinez, A.; Moritz, N.; Meyer, B. Should Deep Neural Nets Have Ears? The Role of Auditory Features in Deep Learning Approaches. In Proceedings of Interspeech 2014, Singapore, 14–18 September 2014; pp. 1–5. [Google Scholar]
  126. Jamil, N.; Almisreb, A.; Ariffin, S.; Din, N.; Hamzah, R. Can Convolution Neural Network (CNN) Triumph in Ear Recognition of Uniform Illumination Variant? Indones. J. Electr. Eng. Comput. Sci. 2018, 11, 558–566. [Google Scholar]
  127. de Campos, L.M.L.; de Oliveira, R.C.L.; Roisenberg, M. Optimization of neural networks through grammatical evolution and a genetic algorithm. Expert Syst. Appl. 2016, 56, 368–384. [Google Scholar] [CrossRef]
  128. El-Bakry, H.; Mastorakis, N. Ear Recognition by using Neural networks. J. Math. Methods Appl. Comput. 2010, 770–804. Available online: https://www.researchgate.net/profile/Hazem-El-Bakry/publication/228387551_Ear_recognition_by_using_neural_networks/links/553fa82c0cf2320416eb244b/Ear-recognition-by-using-neural-networks.pdf (accessed on 18 December 2022).
  129. Victor, B.; Bowyer, K.; Sarkar, S. An evaluation of face and ear biometrics. In Proceedings of the International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; Volume 1, pp. 429–432. [Google Scholar]
  130. Jacob, L.; Raju, G. Ear recognition using texture features-a novel approach. In Advances in Signal Processing and Intelligent Recognition Systems; Springer International Publishing: New York, NY, USA, 2014; pp. 1–12. [Google Scholar]
  131. Kumar, A.; Zhang, D. Ear authentication using Log-Gabor wavelets. In Proceedings of the Symposium on Defense and Security, International Society for Optics and Photonics, Orlando, FL, USA, 9–13 April 2007; pp. 1–5. [Google Scholar]
  132. Lumini, A.; Nanni, L. An improved BioHashing for human authentication. Pattern Recognit. 2007, 40, 1057–1065. [Google Scholar] [CrossRef]
  133. Arbab-Zavar, B.; Nixon, M. Robust Log-Gabor Filter for Ear Biometrics. In Proceedings of the International Conference on Pattern Recognition (ICPR 2008), Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar]
134. Wang, Y.; Mu, Z.-C.; Zeng, H. Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns. In Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  135. Xie, Z.; Mu, Z. Ear recognition using LLE and IDLLE algorithm. In Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  136. Zhang, Z.; Liu, H. Multi-view ear recognition based on B-Spline pose manifold construction. In Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 2416–2421. [Google Scholar] [CrossRef]
  137. Nanni, L.; Lumini, A. Fusion of color spaces for ear authentication. Pattern Recognit. 2009, 42, 1906–1913. [Google Scholar] [CrossRef]
138. Xiaoyun, W.; Weiqi, Y. Human ear recognition based on block segmentation. In Proceedings of the International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, Zhangjiajie, China, 10–11 October 2009; pp. 262–266. [Google Scholar]
  139. Chan, S.; Kumar, A. Reliable ear identification using 2-D quadrature filters. Pattern Recognit. Lett. 2012, 33, 1870–1881. [Google Scholar] [CrossRef]
  140. Ganapathi, I.I.; Prakash, S.; Dave, I.R.; Bakshi, S. Unconstrained ear detection using ensemble-based convolutional neural network model. Concurr. Comput. Pract. Exp. 2019, 32, e5197. [Google Scholar] [CrossRef]
  141. Baoqing, Z.; Zhichun, M.; Chen, J.; Jiyuan, D. A robust algorithm for ear recognition under partial occlusion. In Proceedings of the Chinese Control Conference, Xi’an, China, 26–28 July 2013; pp. 3800–3804. [Google Scholar]
  142. Kacar, U.; Kirci, M. ScoreNet: Deep Cascade Score Level Fusion for Unconstrained Ear Recognition. Available online: https://ietresearch.onlinelibrary.wiley.com/doi/pdfdirect/10.1049/iet-bmt.2018.5065 (accessed on 18 December 2022).
  143. Wang, Y.; Cheng, K.; Zhao, S.; Xu, E. Human Ear Image Recognition Method Using PCA and Fisherface Complementary Double Feature Extraction. Available online: https://ojs.istp-press.com/jait/article/download/146/159 (accessed on 18 December 2022).
  144. Basit, A.; Shoaib, M. A human ear recognition method using nonlinear curvelet feature subspace. Int. J. Comput. Math. 2014, 91, 616–624. [Google Scholar] [CrossRef]
  145. Benzaoui, A.; Hadid, A.; Boukrouche, A. Ear biometric recognition using local texture descriptors. J. Electron. Imaging 2014, 23, 053008. [Google Scholar] [CrossRef]
  146. Khorsandi, R.; Abdel-Mottaleb, M. Gender classification using 2-D ear images and sparse representation. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 461–466. [Google Scholar] [CrossRef]
147. Pflug, A.; Paul, N.; Busch, C. A comparative study on texture and surface descriptors for ear biometrics. In Proceedings of the International Carnahan Conference on Security Technology, Rome, Italy, 13–16 October 2014; pp. 1–6. [Google Scholar]
  148. Ying, T.; Debin, Z.; Baihuan, Z. Ear recognition based on weighted wavelet transform and DCT. In Proceedings of the Chinese Conference on Control and Decision, Changsha, China, 31 May–2 June 2014; pp. 4410–4414. [Google Scholar]
  149. Chattopadhyay, P.K.; Bhatia, S. Morphological examination of ear: A study of an Indian population. Leg. Med. 2009, 11, S190–S193. [Google Scholar] [CrossRef] [PubMed]
  150. Krishan, K.; Kanchan, T.; Thakur, S. A study of morphological variations of the human ear for its applications in personal identification. Egypt. J. Forensic Sci. 2019, 9, 1–11. [Google Scholar] [CrossRef] [Green Version]
  151. Houcine, B.; Hakim, D.; Amir, B.; Hani, B.A.; Bourouba, H. Ear recognition based on Multi-bags-of-features histogram. In Proceedings of the International Conference on Control, Engineering Information Technology, Tlemcen, Algeria, 25–27 May 2015; pp. 1–6. [Google Scholar] [CrossRef]
  152. Meraoumia, A.; Chitroub, S.; Bouridane, A. An automated ear identification system using Gabor filter responses. In Proceedings of the International Conference on New Circuits and Systems, Grenoble, France, 7–10 June 2015; pp. 1–4. [Google Scholar]
  153. Morales, A.; Diaz, M.; Llinas-Sanchez, G.; Ferrer, M. Ear print recognition based on an ensemble of global and local features. In Proceedings of the International Carnahan Conference on Security Technology, Taipei, Taiwan, 21–24 September 2015; pp. 253–258. [Google Scholar]
  154. Sánchez, D.; Melin, P.; Castillo, O. Optimization of modular granular neural networks using a firefly algorithm for human recognition. Eng. Appl. Artif. Intell. 2017, 64, 172–186. [Google Scholar] [CrossRef]
  155. Almisreb, A.; Jamil, N.; Din, M. Utilizing AlexNet Deep Transfer Learning for Ear Recognition. In Proceedings of the 4th International Conference on Information Retrieval and Knowledge Management (CAMP), Kota Kinabalu, Malaysia, 26–28 March 2018; pp. 1–5. [Google Scholar]
  156. Wiseman, K.B.; McCreery, R.W.; Walker, E.A. Hearing Thresholds, Speech Recognition, and Audibility as Indicators for Modifying Intervention in Children With Hearing Aids. Ear Hear. 2023. [Google Scholar] [CrossRef]
  157. Khan, M.A.; Kwon, S.; Choo, J.; Hong, S.J.; Kang, S.H.; Park, I.-H.; Kim, S.K. Automatic detection of tympanic membrane and middle ear infection from oto-endoscopic images via convolutional neural networks. Neural Netw. 2020, 126, 384–394. [Google Scholar] [CrossRef] [PubMed]
  158. Ma, Y.; Huang, Z.; Wang, X.; Huang, K. An Overview of Multimodal Biometrics Using the Face and Ear. Math. Probl. Eng. 2020, 2020, 6802905. [Google Scholar] [CrossRef]
159. Hazra, A.; Choudhury, S.; Bhattacharyya, N.; Chaki, N. An Intelligent Scheme for Human Ear Recognition Based on Shape and Amplitude Features. In Advanced Computing Systems for Security: Volume 13; Chaki, R., Chaki, N., Cortesi, A., Saeed, K., Eds.; Lecture Notes in Networks and Systems 241; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  160. Jeyabharathi, J.; Devi, S.; Krishnan, B.; Samuel, R.; Anees, M.I.; Jegadeesan, R. Human Ear Identification System Using Shape and structural feature based on SIFT and ANN Classifier. In Proceedings of the International Conference on Communication, Computing and Internet of Things (IC3IoT), Chennai, India, 10–11 March 2022; pp. 1–6. [Google Scholar] [CrossRef]
  161. Xu, X.; Lu, L.; Zhang, X.; Lu, H.; Deng, W. Multispectral palmprint recognition using multiclass projection extreme learning machine and digital shearlet transform. Neural Comput. Appl. 2016, 27, 143–153. [Google Scholar] [CrossRef]
  162. Borodo, S.; Shamsuddin, S.; Hasan, S. Big Data Platforms and Techniques. Indones. J. Electr. Eng. Comput. Sci. 2016, 1, 191–200. [Google Scholar] [CrossRef]
  163. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574–591. [Google Scholar] [CrossRef] [PubMed]
  164. Booysens, A.; Viriri, S. Ear biometrics using deep learning: A survey. Appl. Comput. Intell. Soft Comput. 2022, 2022. [Google Scholar] [CrossRef]
  165. Zhao, H.M.; Yao, R.; Xu, L.; Yuan, Y.; Li, G.Y.; Deng, W. Study on a Novel Fault Damage Degree Identification Method Using High-Order Differential Mathematical Morphology Gradient Spectrum Entropy. Entropy 2018, 20, 682. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  166. Gonzalez, E.; Alvarez, L.; Mazorra, L. Normalization and feature extraction on ear images. In Proceedings of the IEEE International Carnahan Conference on Security, Newton, MA, USA, 15–18 October 2012; pp. 97–104. [Google Scholar] [CrossRef]
  167. Zhang, J.; Yu, W.; Yang, X.; Deng, F. Few-shot learning for ear recognition. In Proceedings of the 2019 International Conference on Image, Video and Signal Processing, New York, NY, USA, 25–28 February 2019; pp. 50–54. [Google Scholar] [CrossRef]
  168. Zou, Q.; Wang, C.; Yang, S.; Chen, B. A compact periocular recognition system based on deep learning framework AttenMidNet with the attention mechanism. Multimed. Tools Appl. 2022. [Google Scholar] [CrossRef]
  169. Shafi’I, M.; Latiff, M.; Chiroma, H.; Osho, O.; Abdul-Salaam, G.; Abubakar, A.; Herawan, T. A Review on Mobile SMS Spam Filtering Techniques. IEEE Access 2017, 5, 15650–15666. [Google Scholar] [CrossRef]
  170. Perkowitz, S. The Bias in the Machine: Facial Recognition Technology and Racial Disparities. MIT Case Stud. Soc. Ethic Responsib. Comput. 2021. [Google Scholar] [CrossRef]
  171. Kamboj, A.; Rani, R.; Nigam, A. A comprehensive survey and deep learning-based approach for human recognition using ear biometric. Vis. Comput. 2021, 38, 2383–2416. [Google Scholar] [CrossRef]
  172. Othman, R.; Alizadeh, F.; Sutherland, A. A novel approach for occluded ear recognition based on shape context. In Proceedings of the 2018 International Conference on Advanced Science and Engineering (ICOASE), Duhok, Iraq, 9–11 October 2018; pp. 93–98. [Google Scholar]
  173. Zangeneh, E.; Rahmati, M.; Mohsenzadeh, Y. Low resolution face recognition using a two-branch deep convolutional neural network architecture. Expert Syst. Appl. 2020, 139, 112854. [Google Scholar] [CrossRef]
  174. Toprak, I.; Toygar, Ö. Detection of spoofing attacks for ear biometrics through image quality assessment and deep learning. Expert Syst. Appl. 2021, 172, 114600. [Google Scholar] [CrossRef]
  175. Rahim, M.; Rehman, A.; Kurniawan, F.; Saba, T. Biometrics for Human Classification Based on Region Features Mining. Biomed. Res. 2017, 28, 4660–4664. [Google Scholar]
  176. Hurley, D.; Nixon, M.; Carter, J. Automatic ear recognition by force field transformations. In Proceedings of the IEE Colloquium on Visual Biometrics, Ref. No. 2000/018, London, UK, 2 March 2000; pp. 1–5. [Google Scholar]
  177. Abaza, A.; Ross, A.; Herbert, C.; Harrison, M.; Nixon, M. A survey on ear biometrics. ACM Comput. Surv. 2013, 45, 1–35. [Google Scholar] [CrossRef]
  178. Chowdhury, D.P.; Bakshi, S.; Sa, P.K.; Majhi, B. Wavelet energy feature based source camera identification for ear biometric images. Pattern Recognit. Lett. 2018, 130, 139–147. [Google Scholar] [CrossRef]
  179. Miccini, R.; Spagnol, S. HRTF Individualization using Deep Learning. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 390–395. [Google Scholar]
180. Bargal, S.A.; Welles, A.; Chan, C.R.; Howes, S.; Sclaroff, S.; Ragan, E.; Johnson, C.; Gill, C. Image-based Ear Biometric Smartphone App for Patient Identification in Field Settings. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP), Berlin, Germany, 11–14 March 2015; pp. 171–179. [Google Scholar] [CrossRef] [Green Version]
  181. Agarwal, R. Local and Global Features Based on Ear Recognition System. In International Conference on Artificial Intelligence and Sustainable Engineering; Sanyal, G., Travieso-González, C.M., Awasthi, S., Pinto, C.M., Purushothama, B.R., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2022; p. 837. [Google Scholar] [CrossRef]
182. Chowdhury, D.P.; Bakshi, S.; Pero, C.; Olague, G.; Sa, P.K. Privacy Preserving Ear Recognition System Using Transfer Learning in Industry 4.0. IEEE Trans. Ind. Inform. 2022, 1–10. [Google Scholar] [CrossRef]
  183. Minaee, S.; Abdolrashidi, A.; Su, H.; Bennamoun, M.; Zhang, D. Biometrics recognition using deep learning: A survey. Artif. Intell. Rev. 2023, 1–49. [Google Scholar] [CrossRef]
  184. Kamboj, A.; Rani, R.; Nigam, A.; Jha, R.R. CED-Net: Context-aware ear detection network for unconstrained images. Pattern Anal. Appl. 2020, 24, 779–800. [Google Scholar] [CrossRef]
  185. Ganapathi, I.I.; Ali, S.S.; Prakash, S.; Vu, N.S.; Werghi, N. A Survey of 3D Ear Recognition Techniques. ACM Comput. Surv. 2023, 55, 1–36. [Google Scholar] [CrossRef]
  186. Alkababji, A.M.; Mohammed, O.H. Real time ear recognition using deep learning. TELKOMNIKA Telecommun. Comput. Electron. Control 2021, 19, 523–530. [Google Scholar] [CrossRef]
  187. Hamdany, A.H.S.; Ebrahem, A.T.; Alkababji, A.M. Earprint recognition using deep learning technique. TELKOMNIKA Telecommun. Comput. Electron. Control 2021, 19, 432–437. [Google Scholar] [CrossRef]
  188. Hadi, R.A.; George, L.E.; Ahmed, Z.J. Automatic human ear detection approach using modified adaptive search window technique. TELKOMNIKA Telecommun. Comput. Electron. Control. 2021, 19, 507–514. [Google Scholar] [CrossRef]
  189. Mussi, E.; Servi, M.; Facchini, F.; Furferi, R.; Governi, L.; Volpe, Y. A novel ear elements segmentation algorithm on depth map images. Comput. Biol. Med. 2021, 129, 104157. [Google Scholar] [CrossRef]
190. Kamboj, A.; Rani, R.; Nigam, A. CG-ERNet: A lightweight Curvature Gabor filtering based ear recognition network for data scarce scenario. Multimed. Tools Appl. 2021, 80, 26571–26613. [Google Scholar] [CrossRef]
  191. Emersic, Z.; Susanj, D.; Meden, B.; Peer, P.; Struc, V. ContexedNet: Context–Aware Ear Detection in Unconstrained Settings. IEEE Access 2021, 9, 145175–145190. [Google Scholar] [CrossRef]
  192. El-Naggar, S.; Abaza, A.; Bourlai, T. Ear Detection in the Wild Using Faster R-CNN Deep Learning. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Barcelona, Spain, 28–31 August 2018; pp. 1124–1130. [Google Scholar] [CrossRef]
  193. Tang, X.; Du, D.K.; He, Z.; Liu, J. PyramidBox: A Context-Assisted Single Shot Face Detector; Springer: Berlin/Heidelberg, Germany, 2018; pp. 812–828. [Google Scholar] [CrossRef] [Green Version]
  194. Najibi, M.; Samangouei, P.; Chellappa, R.; Davis, L.S. SSH: Single Stage Headless Face Detector. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4875–4884. [Google Scholar] [CrossRef] [Green Version]
  195. Khaldi, Y.; Benzaoui, A.; Ouahabi, A.; Jacques, S.; Taleb-Ahmed, A. Ear Recognition Based on Deep Unsupervised Active Learning. IEEE Sens. J. 2021, 21, 20704–20713. [Google Scholar] [CrossRef]
  196. Khaldi, Y.; Benzaoui, A. A new framework for grayscale ear images recognition using generative adversarial networks under unconstrained conditions. Evol. Syst. 2021, 12, 923–934. [Google Scholar] [CrossRef]
  197. Omara, I.; Hagag, A.; Ma, G.; El-Samie, F.E.A.; Song, E. A novel approach for ear recognition: Learning Mahalanobis distance features from deep CNNs. Mach. Vis. Appl. 2021, 32, 38. [Google Scholar] [CrossRef]
  198. Alejo, M.B. Unconstrained Ear Recognition Using Transformers. Jordanian J. Comput. Inf. Technol. 2021, 7, 326–336. [Google Scholar] [CrossRef]
  199. Alshazly, H.; Linse, C.; Barth, E.; Idris, S.A.; Martinetz, T. Towards Explainable Ear Recognition Systems Using Deep Residual Networks. IEEE Access 2021, 9, 122254–122273. [Google Scholar] [CrossRef]
  200. Priyadharshini, R.A.; Arivazhagan, S.; Arun, M. A deep learning approach for person identification using ear biometrics. Appl. Intell. 2020, 51, 2161–2172. [Google Scholar] [CrossRef] [PubMed]
  201. Lavanya, B.; Inbarani, H.H.; Azar, A.T.; Fouad, K.M.; Koubaa, A.; Kamal, N.A.; Lala, I.R. Particle Swarm Optimization Ear Identification System. In Soft Computing Applications. SOFA 2018. Advances in Intelligent Systems and Computing; Balas, V., Jain, L., Balas, M., Shahbazova, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; pp. 372–384. [Google Scholar] [CrossRef]
  202. Sarangi, P.P.; Panda, M.; Mishra, S.; Mishra, B.S.P. Multimodal biometric recognition using human ear and profile face: An improved approach. In Cognitive Data Science in Sustainable Computing, Machine Learning for Biometrics; Sarangi, P.P., Ed.; Elsevier: Amsterdam, The Netherlands; pp. 47–63. [CrossRef]
  203. Sarangi, P.P.; Nayak, D.R.; Panda, M.; Majhi, B. A feature-level fusion based improved multimodal biometric recognition system using ear and profile face. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 1867–1898. [Google Scholar] [CrossRef]
  204. Abayomi-Alli, A.; Bioku, E.; Folorunso, O.; Dawodu, G.A.; Awotunde, J.B. An Occlusion and Pose Sensitive Image Dataset for Black Ear Recognition. Available online: https://zenodo.org/record/7715970#.ZBPQjPZBxPZ (accessed on 18 December 2022).
Figure 1. PRISMA flow chart for the search procedure.
Figure 2. Pose angles of the left and right ear images.
Figure 3. A taxonomy of state-of-the-art ear recognition methodologies.
Table 1. Articles downloaded from indexed databases.
| S/n | Digital Library | No. of Articles | Percentage (%) |
|---|---|---|---|
| 1 | Taylor & Francis | 89 | 7.9 |
| 2 | Science Direct | 157 | 14.0 |
| 3 | IEEE | 255 | 22.7 |
| 4 | Emerald | 48 | 4.2 |
| 5 | ACM | 73 | 6.5 |
| 6 | Sage | 55 | 4.9 |
| 7 | Springer | 201 | 17.9 |
| 8 | Elsevier | 137 | 12.2 |
| 9 | Wiley | 45 | 4.0 |
| 10 | MIT | 61 | 5.4 |
| | Total | 1121 | 100 |
Table 2. Existing ear recognition research databases.
| S/n | Catalogue | Year | Total Images | Sides | Volunteers | Description | Available |
|---|---|---|---|---|---|---|---|
| 1 | VGGFace-Ear | 2022 | 234651 | Both | 660 | Inter- and intra-subject variations in pose, age, illumination and ethnicity | Free |
| 2 | UERC | 2019 | 11000 | Both | 3690 | Three image datasets to train and test images under varied parameters | Free |
| 3 | EarVN1.0 | 2018 | 28412 | N/A | 164 | Images captured under varied pose, illumination, and occlusion conditions | Free |
| 4 | USTB-HELLOEAR (A) | 2017 | 336572 | Both | 104 | Pose variations | Free |
| 5 | USTB-HELLOEAR (B) | 2017 | 275909 | Both | 466 | Left and right images captured in uncontrolled conditions | Free |
| 6 | WebEars | 2017 | 1000 | N/A | N/A | Images captured under varied conditions | Free |
| 7 | HelloEars | 2017 | 610000 | Both | 1570 | Images captured in a controlled environment | Free |
| 8 | AWE | 2016 | 1000 | Both | 100 | Images captured in the wild in an uncontrolled environment | Free |
| 9 | UND | 2014 | N/A | Both | N/A | Different image collections with varied images captured in 3D | Free |
| 10 | XM2VTS | 2014 | 4 footages | Both | 295 | 32 kHz 16-bit audio/video files | Not Free |
| 11 | UMIST | 2014 | 564 | Both | 20 | Head rotation from the left-hand side to the frontal view | Free |
| 12 | UBEAR | 2011 | 4497 | Both | 127 | Images captured in an uncontrolled environment with different poses and occlusion | Free |
| 13 | WPUT | 2010 | 2071 | Both | 501 | Varied illumination | Free |
| 14 | YSU | 2009 | 2590 | N/A | 259 | Angle images between 0 and 90 degrees | Free |
| 15 | IIT Delhi | 2007 | 493 | Right | 125 | 3 images per subject, taken indoors | Free |
| 16 | WVU | 2006 | 460 | Both | 402 | 2 min audio-visual from both sides | Free |
| 17 | USTB (4) | 2005 | 8500 | Both | 500 | 15-degree differences using 17 cameras | Free |
| 18 | USTB (3) | 2004 | 1738 | Right | 79 | Dual images at 5-degree variation up to 60 degrees | Free |
| 19 | USTB (2) | 2003 | 308 | Right | 77 | Varying degrees of illumination at +30 and −30 degrees | Free |
| 20 | USTB (1) | 2002 | 180 | Right | 60 | Different illumination conditions at a trivial angle | Free |
| 21 | UND (E) | 2002 | 942 | Both | 302 | Both 2D and 3D pictures | Free |
| 22 | UND (F) | 2003 | 464 | Side | 114 | Side profile appearance | Free |
| 23 | UND (G) | 2005 | 738 | Side | 235 | 2D and 3D pictures | Free |
| 24 | UND (J2) | 2005 | 1800 | Both | 415 | 2D and 3D pictures | Free |
| 25 | IITD | 2007 | 663 | Right | 121 | Greyscale images with slight angle variations | Free |
| 26 | PERPINAN | 1995 | 102 | Left | 17 | Images with minor pose variations captured in a controlled environment | Free |
| 27 | AMI | N/A | 700 | Both | 100 | Fixed illumination | Free |
| 28 | NCKU | N/A | 330 | Both | 90 | 37 images for each respondent | Free |
Table 3. Summary of common methods in different stages of human ear recognition.
Pre-Processing
- Filter methods: Log Gabor filter [54]; Gaussian filter [55]; Middle filter [55,56]; Fuzzy filter [24]
- Intensity methods: Histogram equalization [53,57]; RGB-to-grayscale conversion [25,55]

Feature Extraction
- Geometric methods: Numerical technique [58]; Ear contour [25]; Edge detection [59]
- Appearance-based methods: Feature descriptors [60]; Dimension reduction [61]; Force field transformations [62]; Wavelet method [63]

Decision-Making and Classification
- Neural networks [64]; Normalized cross-correlation [53]; SVM classifier [64,65]; K-Nearest Neighbours [28]; Minimum Distance Classifier [50]
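The intensity-based pre-processing steps listed above (RGB-to-grayscale conversion and histogram equalization) can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic low-contrast patch standing in for an ear crop, not code from any of the surveyed systems:

```python
import numpy as np

def rgb_to_grayscale(img):
    """Convert an RGB image (H, W, 3) to grayscale using ITU-R BT.601 weights."""
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def histogram_equalization(gray):
    """Spread an 8-bit grayscale histogram over the full 0-255 range."""
    gray = gray.astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each intensity through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

# A dark, low-contrast synthetic "ear crop" gains contrast after equalization.
rng = np.random.default_rng(0)
ear = rng.integers(40, 90, size=(64, 48, 3), dtype=np.uint8)
gray = rgb_to_grayscale(ear)
eq = histogram_equalization(gray)
print(gray.max() - gray.min(), eq.max() - eq.min())  # dynamic range widens
```

After equalization the occupied intensities span the full 0-255 range, which is the usual motivation for this step before feature extraction.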
Table 4. Summary of Performance metrics used in Traditional and Deep learning techniques in selected articles.
| Performance Metric | Traditional Learning Techniques | Deep Learning Techniques |
|---|---|---|
| True Acceptance Rate | [6,78,79,80,81,82,83] | [110,111,112,113,114] |
| Template capacity | [5,84,85,86] | [115] |
| False Acceptance Rate | [4,6,21,23,83,87,88,89,90,91] | [110,111,112,113,114] |
| Equal Error Rate | [92,93,94] | [72,114] |
| Matching speed | [3,95] | [61,115,116,117] |
| Recognition accuracy | [14,15,24,28,68,85,96,97,98,99,100,101,102,103,104,105] | [70,118,119,120,121] |
| Recall | [106,107,108] | [57,77,122,123,124,125] |
| Precision | [40,95,102,109,110,111] | [126,127] |
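The verification metrics tabulated above (true acceptance rate, false acceptance rate, and equal error rate) are simple functions of the genuine and impostor score distributions. A minimal sketch, assuming a matcher that outputs higher scores for same-ear pairs; the score distributions here are synthetic:

```python
import numpy as np

def tar_far(genuine, impostor, threshold):
    """True/False Acceptance Rates at a fixed decision threshold (accept if score >= t)."""
    tar = np.mean(genuine >= threshold)
    far = np.mean(impostor >= threshold)
    return tar, far

def equal_error_rate(genuine, impostor):
    """Sweep observed scores as thresholds; return the rate where FAR ~= FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = min(thresholds,
               key=lambda t: abs((1 - np.mean(genuine >= t)) - np.mean(impostor >= t)))
    far = np.mean(impostor >= best)
    frr = 1 - np.mean(genuine >= best)
    return (far + frr) / 2

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 500)   # matcher scores for same-ear pairs
impostor = rng.normal(0.4, 0.1, 500)  # scores for different-ear pairs
tar, far = tar_far(genuine, impostor, 0.6)
print(round(tar, 3), round(far, 3), round(equal_error_rate(genuine, impostor), 3))
```

Raising the threshold trades TAR for FAR; the EER is the single operating point often quoted by the surveyed papers because it summarizes that trade-off in one number.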
Table 5. Comparative summary of ear recognition approaches.
| Reference | Year | Method | Type | Dataset | Performance (%) |
|---|---|---|---|---|---|
| [7] | 2010 | PCA and NN | Holistic | UBEAR | 96 |
| [18] | 2022 | Deep learning | CNN | VGGFace | NA |
| [23] | 2019 | NA | NA | NA | NA |
| [27] | 2016 | Geometric features | Geometric | CP | 88 |
| [31] | 2003 | Force field transform | Holistic | Own | NA |
| [31] | 2003 | PCA | Holistic | UND (E) | 71.6 |
| [35] | 2005 | Matrix factorization | Holistic | USTB II | 91 |
| [38] | 2008 | Sparse representation | Holistic | UND | 96.9 |
| [39] | 2010 | Moment invariant method | Holistic | Own | 91.8 |
| [40] | 2010 | SIFT | Local | XM2VTS | 96 |
| [41] | 2007 | Combination of pre-filtered points and SIFT | Local | XM2VTS | 91.5 |
| [47] | 2007 | PCA and wavelet transformation | Hybrid | USTB II, CP | 90.5 |
| [47] | 2007 | Inpainting techniques, neural networks | CNN, traditional learning | UERC | 75 |
| [48] | 2013 | SIFT | Local | CP | 78.8 |
| [49] | 2014 | Hybrid based on SURF, LDA and NN | Hybrid | Own | 97 |
| [49] | 2014 | Neural networks | Deep CNN | UERC | 99.7 |
| [72] | 2019 | Neural networks | CNN | AMI | 75.6 |
| [73] | 1999 | Orthogonal log-Gabor filter pairs | Local | IITD II | 95.9 |
| [75] | 2005 | Ear framework geometry | Geometric | Own | 86.2 |
| [81] | 2013 | NA | NA | NA | NA |
| [85] | 2019 | NA | NA | NA | NA |
| [87] | 2019 | Neural networks | CNN | — | — |
| [92] | 2020 | Deep learning | CNN | NA | 97 |
| [98] | 2014 | Edge image dimension | Geometric | USTB II | 85 |
| [107] | 2016 | CNN | Local | Avila Police School & Bisite Video | 80.5 & 79.2 |
| [107] | 2013 | Deep neural network | CNN | Avila Police School | 84 |
| [108] | 2017 | Traditional machine learning | YOLO, multilayer perceptron | Own | 82 |
| [117] | 2018 | Maximum and minimum height lines | Geometric | USTB & IIT Delhi | 98.3 & 99.6 |
| [119] | 2018 | Deep learning | CNN | Open image dataset | 85 |
| [123] | 2023 | Neural networks | CNN | AMI, UND, Video Dataset, UBEAR | 98 |
| [128] | 2010 | PCA | Holistic | Own | 40 |
| [129] | 2002 | ICA | Holistic | Own | 94.1 |
| [130] | 2014 | Log-Gabor wavelets | Local | UND | 90 |
| [131] | 2007 | Multi-matcher | Hybrid | UND (E) | 80 |
| [132] | 2007 | Log-Gabor filters | Local | XM2VTS | 85.7 |
| [133] | 2008 | LBP and Haar wavelet transformation | Hybrid | USTB III | 92.4 |
| [134] | 2008 | Improved locally linear embedding | Holistic | USTB III | 90 |
| [135] | 2008 | Null kernel discriminant analysis | Holistic | USTB I | 97.7 |
| [136] | 2008 | Gabor filters | Local | UND (E) | 84 |
| [137] | 2009 | Block partitioning and Gabor transformation | Local | USTB II | 100 |
| [138] | 2009 | 2D quadrature filter | Local | IITD I | 96.5 |
| [140] | 2013 | Sparse representation classification | Holistic | USTB III | 90 |
| [141] | 2019 | Multi-level fusion | Hybrid | USTB II | 99.2 |
| [142] | 2014 | Enhanced SURF with NN | Local | IITK | 12.8 |
| [143] | 2014 | Non-linear curvelet features | Local | IITD II | 96.2 |
| [144] | 2014 | BSIF | Local | IITD II | 97.3 |
| [145] | 2014 | LPQ | Local | Several | 93.1 |
| [146] | 2014 | LPQ, BSIF, LBP, HOG with LDA | Hybrid | UND-J2, AMI, IITK | 98.7 |
| [147] | 2014 | Weighted wavelet transforms and DCT | Hybrid | Own | 98.1 |
| [148] | 2015 | Haar wavelet and LBP | Hybrid | IITD | 94.5 |
| [149] | 2016 | BSIF | Local | IITD I, IITD II | 96.7 & 97.3 |
| [150] | 2015 | Multi-bags-of-features histogram | Local | IITD I | 6.3 |
| [151] | 2015 | Gabor filters | Local | IITD II | 92.4 |
| [153] | 2017 | Modular neural network | Hybrid | USTB | 99 |
| [154] | 2018 | Biased normalized cut and morphological operations | Deep neural network | Own | 95 |
| [155] | 2018 | Traditional machine learning | Local | NA | NA |
| [156] | 2020 | Deep learning | CNN | Own | 95 |
| [157] | 2020 | Traditional machine learning | Sparse representation | USTB III | NA |
| [158] | 2022 | Traditional machine learning | Hybrid | IIT Delhi | NA |
| [159] | 2022 | Deep learning | SIFT and ANN | IIT Delhi | NA |
| [180] | 2022 | Global and local ear prints | Hybrid | FEARID | 91.3 |
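Several of the holistic entries above pair PCA with nearest-neighbour matching (the classic eigen-image pipeline, e.g. [7,31,128]). A hedged NumPy sketch of that pipeline on synthetic "ear" vectors, not any surveyed dataset:

```python
import numpy as np

def fit_pca(X, n_components):
    """Learn a PCA subspace from flattened training images (one row per image)."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives principal directions without forming a covariance matrix.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    return (X - mean) @ components.T

def nearest_neighbor(gallery_proj, labels, probe_proj):
    """Classify each probe by the closest gallery projection (Euclidean distance)."""
    d = np.linalg.norm(gallery_proj[None, :, :] - probe_proj[:, None, :], axis=2)
    return labels[d.argmin(axis=1)]

# Synthetic gallery: three "subjects", each a noisy version of a base pattern.
rng = np.random.default_rng(2)
bases = rng.normal(size=(3, 256))                      # 16x16 flattened templates
gallery = np.vstack([b + rng.normal(0, 0.2, (5, 256)) for b in bases])
labels = np.repeat(np.arange(3), 5)
probes = np.vstack([b + rng.normal(0, 0.2, (2, 256)) for b in bases])

mean, comps = fit_pca(gallery, 10)
pred = nearest_neighbor(project(gallery, mean, comps), labels,
                        project(probes, mean, comps))
print(pred)
```

The subspace dimensionality (10 here) is the main tuning knob of such holistic methods; too few components discard identity information, too many retain noise.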
Table 6. Summary of the Pros and Cons of different sub-areas in Ear Recognition Stages.
| Stage | Sub-Area | Pros | Cons |
|---|---|---|---|
| Pre-processing | Filter method | No need for object segmentation | Aligned ears are at a disadvantage |
| | | Graceful degradation is a major boost | Some details may be lost |
| | | Suitable for non-aligned images | Limited bandwidth is a drawback |
| | Intensity method | Reduced computational difficulty | Distorted uniform images are concealed |
| | | Spin and reflection invariant | Poor performance against scaling |
| | | Limited false matches | Copy-and-paste regions of an image cannot be detected |
| Feature extraction | Geometric method | Suitable for obtaining non-varying features | Increased computation requirements |
| | | Methods are easy to implement | Results can sometimes be inaccurate |
| | | Image orientations are detected | Susceptible to noise |
| | Appearance method | Very robust, particularly in 2-dimensional space | Performance decreases with size |
| | | Any image characteristic can be extracted as a feature | Average accuracy is lower than other methods |
| | | Minimized false matches | Cannot handle certain compressions |
| | | Can be used with a few selected features | Illumination is a significant factor |
| | | Recognition accuracy is high | Good-quality images are required |
| Classification | Neural networks | Non-linear problems are easily resolved | Performs poorly with few training samples |
| | Support vector machines | Increased performance with clear gaps between classes | Large datasets are unsuitable for SVM |
| | | Improved memory utilization | Noise is not effectively controlled |
| | | Improved memory utilization | Limited explanation for classification decisions |
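Among the classification methods compared in the stages above, the Minimum Distance Classifier (cited as [50] in Table 3) is the simplest: training reduces to computing one mean vector per class, and prediction assigns a probe to the nearest centroid. A minimal sketch on synthetic 2-D feature vectors:

```python
import numpy as np

def fit_centroids(X, y):
    """Minimum Distance Classifier training: one mean feature vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Assign each sample to the class whose centroid is nearest (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Two well-separated synthetic classes of 2-D feature vectors.
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(m, 0.3, (20, 2)) for m in ([0, 0], [3, 3])])
y_train = np.repeat([0, 1], 20)

classes, centroids = fit_centroids(X_train, y_train)
pred = predict(np.array([[0.1, -0.2], [2.8, 3.1]]), classes, centroids)
print(pred)  # [0 1]
```

Its low memory footprint and trivial training match the "improved memory utilization" advantage noted above, at the cost of ignoring within-class covariance, which the more expressive classifiers in the table address.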
Table 7. Article classification result.
Columns: Year | Authors | Dataset | Approach (Holistic, Local, Geometric, Hybrid) | Method (TL, DL) | Architecture (CNN, Others, Unspecified) | Status: Assessment (A), Proposed (S), Designed (D), Planned & Assessed (P&A), Proposed & Executed (P&E)
2016[3] xx x
2017[5] x x
2019[6]x xx x
2010[7] x x
2017b[10]x xx x
2019[12]x x x
2016[16]x xx x
2018[17]x xx x
2022[18]x xx
2018[20]x xx x
2017[24]x xx
2012[25]x xx xx
2012[28]x x x x
2016[29]x xx
2018[34]x xx x
2010[39]x x x x
2010[40]x x x x
2018[43] x x xx
2013[46]x x x x x
2013[48]x xx x x
2014[49]x x x xx
2015[50]x x x xx
2021[51]x x
2016[52] x x xx
2016[53]x xx
2011[55]x x x x x
2015[56]x x x xx
2016[57]x x x x x
2014[58] x x
2018[59]x xx x
2018[60]x xx x
2016[61]x x x x
2016[64]x x x
2015[65]x x x xx
2022[66]x xx x
2018[69]x x x x
2019[72]x xx x
2019[76]x xx x
2020[77]x x
2018[78]x x xx
2014[79]x xx x
2011[80]x x
2013[81]x x x x
2020[83] x x x
2019[87]x xx xx
2020[88]x x x x
2010[91]x x x x x
2020[92]x xx x
2017[93] xx
2018[94]x xx x
2016[95]x x x x
2014[98]x x x x
2018[99]x x x x
2014[100] x x x
2019[101]x xx x
2018[102]x x x x
2017[104] x x
2013[106]x x x x
2016[107]x x x
2020[108]x x x
2017[109]x x x x
2017[110]x xx x
2020[111] x x
2020[112] xx x
2017[113]x x x
2019[116]x xx x
2018[119]x xx
2020[121]x xx x
2019[123]x xx x
2014[124]x x x x x
2016[126]x xx x
2010[127]x x x xx
2013[140]x x x
2013[141]x x x x x
2014[142]x x x x x
2014[143]x x x x
2015[150]x xx x
2020[156]x xx x
2020[157]x xx x
2019[166]x xx x
2018[167]x xx x
2010[179]x x x xx
2020[183]x xx x
2021[184]x
2021[185]x xx x
2021[186]x x x
2021[187]x x x
2021[188] x x
2021[189]x x x x
2021[190]x xx x
2021[194] x x x
2021[195]x xx x
2021[196]x xx x
2021[198]x x x x
2021[199]x xx x
2022[202]x x x x


Oyebiyi, O.G.; Abayomi-Alli, A.; Arogundade, O.‘T.; Qazi, A.; Imoize, A.L.; Awotunde, J.B. A Systematic Literature Review on Human Ear Biometrics: Approaches, Algorithms, and Trend in the Last Decade. Information 2023, 14, 192. https://doi.org/10.3390/info14030192


