Article

Ear-Touch-Based Mobile User Authentication

by Jalil Nourmohammadi Khiarak 1,*, Samaneh Mazaheri 2 and Rohollah Moosavi Tayebi 3
1 Institute of Control and Computation Engineering, Warsaw University of Technology, 00-665 Warsaw, Poland
2 Faculty of Business and IT, Ontario Tech University, Oshawa, ON L1H 7K4, Canada
3 Faculty of Science, Ontario Tech University, Oshawa, ON L1H 7K4, Canada
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 752; https://doi.org/10.3390/math12050752
Submission received: 23 January 2024 / Revised: 14 February 2024 / Accepted: 27 February 2024 / Published: 2 March 2024
(This article belongs to the Special Issue New Advances and Applications in Image Processing and Computer Vision)

Abstract

Mobile devices have become integral to daily life, necessitating robust user authentication methods to safeguard personal information. In this study, we present a new approach to mobile user authentication utilizing ear-touch interactions. Our novel system employs an analytical algorithm to authenticate users based on features extracted from ear-touch images. We conducted extensive evaluations on a dataset comprising ear-touch images from 92 subjects, achieving an average equal error rate of 0.04, indicative of high accuracy and reliability. Our results suggest that ear-touch-based authentication is a feasible and effective method for securing mobile devices.

1. Introduction

As mobile devices continue to proliferate, the demand for reliable and secure authentication and access control mechanisms becomes increasingly paramount [1]. Various biometric modalities such as fingerprints, iris patterns, faces, voices, and even ears have been explored as potential solutions [2,3]. However, integrating additional sensors to capture such biometric data can substantially increase the cost of mobile devices, potentially limiting their accessibility to a broader user base.
One alternative approach is to leverage existing touchscreen technology to capture biometric data, specifically focusing on earprints. Earprints have long been utilized in forensic research due to their unique identifying characteristics [4,5,6]. Recent studies have demonstrated the feasibility of utilizing the distinctive shape and geometry of an individual’s ear for accurate identification, even among identical twins [7]. This intriguing avenue of research provides an accessible and cost-effective means of biometric authentication.
While traditional earprints are typically captured using specialized hardware or modifications to mobile devices [8], a related biometric feature known as “ear-touch” can be acquired using a smartphone’s touchscreen. Although ear-touch may exhibit lower image quality compared to dedicated earprint capture methods, it can be conveniently recorded using a standard smartphone with multi-touch capability. This makes ear-touch an appealing and practical biometric measure for widespread adoption.
This paper delves into the application of ear-touch for mobile user authentication and access control. We propose a method for capturing ear-touch data and address the challenge of missing data points, ultimately achieving high performance with respect to the equal error rate (EER).
The significance of this innovative approach lies in its potential to address several pressing issues:
  • Enhanced security: ear-touch authentication leverages the unique shape and contact geometry of an individual’s ear, making unauthorized access exceptionally difficult.
  • User convenience: While security is paramount, user experience is equally crucial. Ear-touch authentication strikes a balance between security and convenience. Users can unlock their devices or access applications seamlessly by merely touching their ear, eliminating the need to remember complex passwords or carry additional hardware like fingerprint sensors.
  • Future-proofing mobile security: As the mobile landscape continues to evolve, so do the techniques employed by malicious actors. Ear-touch authentication represents a forward-thinking solution that anticipates future security challenges. Its adaptability to emerging threats positions it as a long-term and sustainable authentication method.
The structure of this paper is as follows: In Section 2, we present the problem statement and provide a comprehensive review of the pertinent literature, including the limitations of existing verification systems based on earprints. Section 3 outlines the ear-touch database we have meticulously collected, detailing the measurement procedures and discussing the issues encountered in dealing with missing data. In Section 4, we present our innovative solution, substantiating our approach’s effectiveness. Following this, Section 5 presents the results we have obtained, and Section 6 offers an in-depth discussion of our findings. Lastly, we summarize our conclusions in Section 7, emphasizing the potential implications of our work in the broader context of mobile device security and user authentication.

2. Problem Statement and Its Literature Solutions

This section has two subsections: the problem statement and its solutions in the literature. We first present the problem and then survey recent related research.

2.1. Problem Statement

An ear-touch is the new biometric feature introduced in this research. To capture an ear-touch, a smartphone’s multi-touch screen is used; ear-touches can therefore be used for authentication on mobile devices in the same way as fingerprints, facial images, iris prints, and so on. Ear-touches have not previously been used as a biometric measure in any research, though related work, called earprint recognition, has been carried out on mobile devices. In this research, we propose a method for an ear-touch recognition system on mobile devices. Ear-touches also present a problem we call ‘missing points’, which arises from the physical features of ears and the way people press smartphones to their ears: each captured ear-touch may contain a different number of touched points. In this research, we take these missing points into account.

2.2. Literature Solutions

Touchscreens have already been used to capture biometric data, but only by modifying the smartphones’ internals. In [9], the authors used the touchscreen to capture fingers, fists, ears, and palms through their system, Bodyprint. The capacitive touchscreen sensor was used to capture biometrics from 12 users. The touchscreen had an input resolution of ~6 dpi, and the sensor captured 27 × 15 pixel images at 8 bits per pixel. Speeded Up Robust Features (SURF) descriptors were used for feature extraction, and features were matched with the L2 distance over 12 key frames extracted by the SURF descriptors. The method was tested on 12 participants with 12-fold cross-validation. In total, there were 864 samples; the false rejection rate was 26.8% across all biometrics (fingers, fists, ears, phalanges, and palms), and 7.8% for ears alone. Touchscreens were also used in [10] as a sensor for capturing biometric data to authenticate users on mobile devices. Touchscreens do not normally expose raw images of presentations, so the Android kernel of the mobile devices was modified to read the touchscreen sensor as image data, capturing all touch points on the capacitive screen. The resolution of the touchscreen was 6 dpi, i.e., 27 × 15 pixels. The database contained 1520 images collected from 37 subjects, with 40 images per subject. Recall and precision were reported as performance evaluation metrics, at 0.5960 and 0.8761, respectively.
The paper referenced in [11] introduced a novel earprint database, EINTU, and a Deep Learning model [12], DEL, for personal verification using earprints. It overcomes challenges in dataset creation and achieves impressive recognition accuracy rates. The study opens up possibilities for enhanced biometric security in mobile phone communications based on earprint recognition.
In [13], the use of earprints in the forensic field and the stability of and variability in ears for earprints were reviewed; the authors identified substantial features in earprints that can be used for forensic identification. In addition, in [14], the Forensic Ear Identification (FearID) earprint identification system was proposed, for which 7364 earprints were collected from 1229 participants. Three feature-extraction methods were introduced: weighted width comparison (the connected structures assumed to represent the imprint are determined, and their corresponding intensities are weighted to extract local intensities as signals); vector template matching (VTM), based on the anatomical annotation of earprints, in which each annotated print yields a template of labelled points representing earprint landmarks and minutiae of different classes, and prints are compared by assessing the similarity between their templates; and angular comparison (signals are compared while keeping track of the angle between the x-axis and the medial axes of the connected structures). Logistic regression was used for classification, the data were split into training and testing sets, and an EER of 9.3% was reported on the test set. In [15], the authors proposed a hybrid method based on global and local features for earprint feature extraction. On the one hand, global features were found from binary images and a comparison between the model and the query earprint. On the other hand, local features were extracted with scale-space extrema detection (the Difference of Gaussians is computed to identify candidate regions invariant to scale and rotation), keypoint localization (each local extremum of the Difference of Gaussians is compared with its 16 neighbours, a candidate point is kept only if it is larger or smaller than all of them, and low-contrast candidates are rejected after a detailed fit to nearby data for location, scale, and ratio of principal curvatures), and orientation assignment (each keypoint is represented by 16 orientations obtained from the local image gradient directions). The proposed method was applied to the FearID database, and an EER of 1.87% was reported.
To recognize an ear-touch captured via the touchscreen of a mobile device, it is necessary to match finite two-dimensional point sets [16]. Various algorithms have been used for this task. In [17], to find exact point patterns matching two sets, the authors found the centre of each set and then computed polar coordinates relative to the centroids. In another paper [18], the authors used a one-to-one matching method, although this method requires exactly the same number of points in both sets.
Overall, the studies reviewed suggest that ear-touch-based mobile user authentication has potential for practical application in mobile device security.

3. Ear-Touch Database

Touchscreens are among the sensors used to capture data, in particular biometric data, and have been used in real-world applications such as Bodyprint [9]. Figure 1 shows the structure of a human ear, and Figure 2a shows how the data are captured [19]. As shown in Figure 2b, the subject holds a smartphone to their ear, and a mobile application captures all touch points registered on the touchscreen. The outer ear (shown in Figure 1) is composed of different parts, including the tragus, antitragus, helix, root of the helix, crus of the helix, antihelix, lower crus of the antihelix, anterior notch of the lobule, navicular fossa, crus of the antihelix, anti-helical fold, lobule, scapha, and concha. However, only the first eight parts are touched or captured by a touchscreen, an example of which is shown in Figure 2b. The touched points are extracted and treated as biometric features to authenticate an individual. The mobile application and data collection procedure are described in detail below.
Ethics committee
As we needed volunteer participants to collect datasets for our investigation of an ear recognition system’s performance under presentation attack detection, the ethics committee of Warsaw University of Technology approved the experimental protocol for data collection. The experiment was designed in particular to determine whether an ear recognition system needs an ear presentation attack detection method. Before the experiment, the participants signed consent agreements. Non-biometric personal data, including age, name, and gender, were also collected from the participants and maintained separately to guarantee the security of their personal data. The participants were thus familiar with the experiment, which we described to them in full detail, and signed consent forms were gathered from all volunteers. The collected database consists of ear data taken from the participants, who were of different origins (namely, Azerbaijan, Afghanistan, Algeria, China, Ecuador, India, Iran, Jordan, Latvia, Mexico, Oman, Poland, Portugal, Spain, Turkey, Vietnam, Uzbekistan) and different age groups.
WUT-Ear V1.0 database collection methodology
To acquire both ear-touches and ear photos, a Samsung Galaxy A7 was used; it features a Super AMOLED capacitive touchscreen with a resolution of 1080 × 2220 pixels. The database has two parts, ear photos and ear-touches; only the ear-touches were used in the experiments reported here. There were 138 subjects, with over 9000 ear photos and approximately 1000 ear-touches.
Ear-touch data were collected for almost half of the participants. More details about the database are presented in Table 1. There were almost 20 ear-touches per subject. At each session, seven touches were taken from each ear, among which some acquisitions failed and were counted as unsuccessful attempts. To identify unsuccessful presentations, it was decided that presentations with fewer than four touch points would not be considered; presentations yielding four or more touch points, together with the associated information, were considered successful. This rule ensured that enough touch points were present to carry out the calculations.
Owing to a hardware limitation of touchscreens on Android mobile devices, we could not acquire more than one set point from each part of the ear. The details of the dataset are given in Table 1: there were 57 subjects, 40 men and 17 women. It is worth explaining the subject count: if the left and right ears of each subject are treated as separate classes, the dataset contains 92 ears in total, since some ears yielded no successful acquisitions. Despite the higher number of male subjects, we obtained more data acquisitions from the female subjects, as there were more unsuccessful attempts during the males’ data collection.
The purpose of Figure 3 is to illustrate the distribution of ear-touch interactions among the participants in our dataset. Each bar on the graph represents the percentage of participants who engaged in a specific number of ear-touch interactions during the data collection phase. The horizontal axis denotes the number of ear-touch interactions, ranging from one to the maximum number recorded. Meanwhile, the vertical axis represents the percentage of participants falling into each category of ear-touch interactions. Our dataset encompasses a diverse range of participants, with some individuals exhibiting a higher frequency of ear-touch interactions than others. For instance, a small percentage of participants were observed to have a substantial number of ear-touch interactions, with some reaching up to 30 interactions.
It is worth noting that while our data collection process allowed for the recording of single ear-touch interactions, the focus of this analysis is primarily on participants with multiple ear-touch instances. Thus, the graph highlights the distribution of participants with at least two ear-touch interactions. As depicted in the graph, a notable proportion of participants (e.g., 5%) were found to have 10 ear-touch interactions or fewer, while an even smaller percentage (e.g., less than 1%) exhibited exceptionally high numbers of ear-touch interactions, such as 30 instances. By presenting this distribution, we aim to provide insights into the variability in ear-touch interactions among participants, which is essential for understanding the usage patterns and potential applications of our proposed ear-touch-based authentication system. The participants’ ages ranged from 18 to 60 years old.
In Figure 4, we present the distribution of the maximum number of set points obtained from ear-touch interactions. The vertical axis illustrates the frequency of ear-touch interactions, while the horizontal axis represents the maximum number of set points detected within each interaction.
Our data collection methodology allowed for the capture of ear-touch interactions with varying numbers of set points. However, to ensure robustness and reliability in our analysis, we focused on interactions with a minimum of four set points. Consequently, the range depicted in the graph spans from four to eight set points, reflecting the diversity of the set point configurations observed in our dataset. This approach enables us to explore the relationship between the number of ear-touch interactions and the complexity of set point patterns, providing valuable insights into the characteristics of ear-touch interactions and their potential implications for authentication systems.

Collection of Measurements, Procedures and Problems (Missing Data)

In this section, the data collection experiments are explained. To collect touchscreen ear biometric data, we created a mobile application, shown in Figure 5, developed in the Android Studio environment. The application can take touchscreen data from several points simultaneously. First, it asks the user for some personal information (Figure 5a). Then, as shown in Figure 5b, ear-touches are captured for the left and right ears of each subject separately.
Like a fingerprint recognition system, ear-touch requires several enrollment presentations. At each session, we took seven touches from each ear, though some data acquisition attempts failed; these were counted as unsuccessful attempts. In total, we captured 1427 ear-touches, of which 467 had fewer than four set points. To identify unsuccessful presentations, we ignored those with fewer than four touch points; a presentation registering four or more point coordinates, with the associated information, was considered successful. This left 960 successfully captured ear-touch presentations.
Missing data problem:
In ear-touch data, we face the problem of missing points: the touches do not have a fixed number of points in each presentation. Normally, during data acquisition, the distances between the points are not small.
Figure 6 shows an example of missing points in various touches. To find the best matching algorithm, we must account for the missing points, because even if all translation and rotation problems are solved, the missing points can still lead to significant errors in the matching.
  • First problem: Ear-touch 1 and Ear-touch 2 in Figure 6 have the same number of points and come from the same ear, but because the points lie in different regions, a naive matching algorithm does not recognise them as the same ear.
  • Second problem: Ear-touch 1 and Ear-touch 3 have different numbers of points, so Ear-touch 3 contains points that do not appear in Ear-touch 1.

4. Materials and Methods

The ear-touch verification system is shown in Figure 7. The input, a set of points with (x, y) coordinates, is presented; the set points’ coordinates serve as the features for the next step of the verification procedure. A similarity score $S_{p,t}$ is computed from $F_p$ (the features of the presented input) and $F_t$ (the template features of the ear-touch stored on the server or in a database). If the score $S_{p,t}$ exceeds the threshold $Th$, the presentation is accepted.
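As a minimal sketch of this decision rule (function and variable names are illustrative, and similarity is taken here as a negated mismatch, since the matcher described in Section 4 returns a distance-like score):

import numpy as np

def authenticate(f_p: np.ndarray, f_t: np.ndarray, th: float) -> bool:
    # f_p: features of the presented ear-touch; f_t: stored template features.
    # Both are assumed to be (N, 2) arrays of aligned set-point coordinates.
    s = -np.sum(np.linalg.norm(f_p - f_t, axis=1) ** 2)  # similarity = -mismatch
    return s > th  # accept when the similarity score exceeds the threshold Th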

4.1. Alignment of the Ear-Touches

In this section, we explain our solution method and the results obtained by applying it. Our solution has two major parts: “matching” between a given ear-touch and a given template, used for authentication; and “template creation/extraction” from a set of related ear-touches (e.g., the ear-touches of a known subject), which serves as a reference for authentication. These are the two basic tasks of our proposed method, and each presents several challenges.
The first challenge concerns the “missing points”. In each acquisition, the touchscreen returns up to eight set points based on the ear contact position, expressed in the device coordinate system. There is no guarantee that all the points will be measured in every acquisition, so each acquisition may contain missing points.
The second challenge concerns “permutations”. For each ear-touch, the sensor returns a “set” of points with no consistent or meaningful anatomical order. When matching two sets, the algorithm must therefore pair the points one by one, considering different permutations between the two sets to find the most meaningful pairing.
The third challenge relates to “rotations and translations”. The touchscreen measures the set points with respect to its own coordinate system, not a fixed coordinate system attached to the subject’s ear. So, even when comparing two ear-touches of the same ear, the algorithm must account for the arbitrary rotations and translations that may occur between data acquisition sessions.
To resolve these challenges, we use an optimisation-based approach that finds the best matches by minimising suitable loss functions. To explain this approach, we first consider a simplified scenario with no missing points, before moving on to the more challenging scenarios that better reflect real-world applications.

4.2. A Simplified Scenario without Missing Points

In an ideal world, with no missing points and no permutations, each ear-touch would be represented by a sequence of set points of fixed length. In this scenario, comparing two ear-touches is as simple as measuring the distance between all corresponding 2D points, provided a fixed coordinate system exists. A zero distance means all the corresponding points match each other, so the two ear-touches match, whereas any non-zero distance means they do not. Such a distance measure is defined by Equation (1), in which N is the number of set points per ear-touch, $\|\cdot\|$ denotes the Euclidean norm, $T = (t_1, t_2, \ldots, t_N)$ is the template or reference ear-touch, $X = (x_1, x_2, \ldots, x_N)$ is the given ear-touch we want to authenticate, and $t_i$ and $x_i$ are the i-th set points of the template and the given ear-touch, respectively.
$$ D_0(T, X) = \sum_{i=1}^{N} \left\| t_i - x_i \right\|^2 \qquad (1) $$
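For illustration, Equation (1) translates directly into a few lines of NumPy; this is a sketch, with the two ear-touches assumed to be stored as (N, 2) arrays:

import numpy as np

def d0(T: np.ndarray, X: np.ndarray) -> float:
    # Equation (1): sum of squared Euclidean distances between corresponding set points.
    return float(np.sum(np.linalg.norm(T - X, axis=1) ** 2))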
In a more sophisticated scenario, where rotations, translations, and permutations are present, we have to consider all possible modes of these transformations. Equation (1) is therefore modified into Equation (2), which contains an optimisation problem. In Equation (2), $\mathcal{R}$ is the set of 2D rotation matrices, $\mathcal{L}$ is the set of all possible translations represented by 2D vectors, and $\mathcal{P}$ is the set of all possible permutations of N points, each permutation represented as $(\pi_1, \pi_2, \ldots, \pi_N)$.
$$ D_1(T, X) = \min_{(R, l, P) \in \mathcal{R} \times \mathcal{L} \times \mathcal{P}} \sum_{i=1}^{N} \left\| t_i - (R x_{\pi_i} + l) \right\|^2 \qquad (2) $$
In fact, $D_1$ measures the distance between T and a transformed version of X, using the $D_0$ distance defined in Equation (1). We call this transformed version the best matching form of X w.r.t. T, which will later be used to create a template. This best matching form, or “best match”, is given by Equation (3), with its parameters defined in Equation (4).
$$ b_i = R^* x_{\pi^*_i} + l^*, \quad i = 1, 2, \ldots, N; \qquad \mathrm{Best}(T, X) \equiv (b_1, b_2, \ldots, b_N) \qquad (3) $$
$$ (R^*, l^*, P^*) = \underset{(R, l, P) \in \mathcal{R} \times \mathcal{L} \times \mathcal{P}}{\arg\min} \; \sum_{i=1}^{N} \left\| t_i - (R x_{\pi_i} + l) \right\|^2 \qquad (4) $$
Pacut [20] resolved the optimisation problem of Equation (4) based on the Procrustes problem [21] and the Kabsch–Umeyama algorithm [22]. His solution is straightforward and efficient, requiring none of the usual iterative schemes of general optimisation algorithms. However, his method does not consider permutations, and it therefore has to be extended to solve our simplified scenario, as done in Algorithm 1. The algorithm returns the mismatch based on Equation (2) and the best match based on Equation (3), without considering any missing points.
Algorithm 1: Matching between a Template and an Ear-Touch with no Missing Set-Points
inputs: template T, ear-touch X
outputs:
   mismatch between T and X based on Equation (2)
   best match of X w.r.t. T (Equation (3))
mismatch = infinity
B = X
for each possible permutation P do
   permute X according to P, and save it as S
   use Pacut’s method to solve (R*, l*) = argmin_{(R, l) ∈ R×L} Σ_{i=1}^{N} ‖t_i − (R s_i + l)‖²
   b_i^aux = R* s_i + l*, i = 1, 2, ..., N;  B^aux = (b_1^aux, b_2^aux, ..., b_N^aux)
   min_aux = D_0(T, B^aux)
   if min_aux < mismatch then
      mismatch = min_aux
      B = B^aux
   end if
end for
return mismatch, B
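For concreteness, the following NumPy sketch implements Algorithm 1 under the stated assumptions: the inner rotation/translation subproblem is solved with the Kabsch–Umeyama construction cited above (an SVD of the cross-covariance of the centred point sets), and the permutation search is brute force. The function names are illustrative; the authors’ own implementation was written in Octave.

import numpy as np
from itertools import permutations

def align(T: np.ndarray, X: np.ndarray):
    # Solve argmin_{R, l} sum_i ||t_i - (R x_i + l)||^2 for (N, 2) arrays:
    # centre both sets, take the SVD of the cross-covariance, and fix the
    # sign so that R is a proper rotation (Kabsch-Umeyama construction).
    ct, cx = T.mean(axis=0), X.mean(axis=0)
    H = (X - cx).T @ (T - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, ct - R @ cx

def match_no_missing(T: np.ndarray, X: np.ndarray):
    # Algorithm 1: try every permutation of X, align it to T, and keep the
    # permutation with the smallest D_0 mismatch (feasible since N <= 8).
    best_mismatch, best_B = np.inf, X
    for perm in permutations(range(len(X))):
        S = X[list(perm)]
        R, l = align(T, S)
        B_aux = S @ R.T + l
        m = float(np.sum(np.linalg.norm(T - B_aux, axis=1) ** 2))
        if m < best_mismatch:
            best_mismatch, best_B = m, B_aux
    return best_mismatch, best_B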
Creating a reference template from a group of known ear-touches is the other part of our problem, so we need an algorithm for this task as well. In the absence of missing points, every known ear-touch could potentially serve as a template; in practice, however, it is preferable to construct or extract the template from a set of related or known ear-touches. This template is found by Equation (5), where E is the set of all possible templates or ear-touches, T is the unknown template we are looking for, $Y = (Y_1, Y_2, \ldots, Y_M)$ is the set of input ear-touches, and Q is a mismatch measure, which can be taken as Equation (2).
$$ T = \underset{T \in E}{\arg\min} \; \frac{1}{M} \sum_{j=1}^{M} Q(T, Y_j) \qquad (5) $$
Pacut [20] also solved the optimisation problem of Equation (5), again without considering permutations; it can be extended straightforwardly as Algorithm 2. The main idea of this method is the notion of “best match” introduced earlier, and it rests on the properties of the 2-norm (Euclidean norm) already used in Equations (1) and (2).
Algorithm 2: Reference Template Creation from a Group of Known Ear-Touches with no Missing Set-Points (Expansion of Algorithm 1)
inputs: a series of M related ear-touches Y = (Y_1, Y_2, ..., Y_M)
outputs: a template that minimises the average mismatch to Y according to Equation (5)
k = 0
L_0 = infinity
T_0 = Y_1
while (k = 0) or (L_{k−1} − L_k > tol)
   k = k + 1
   B_k^j = BestMatch(T_{k−1}, Y_j), j = 1, 2, ..., M   // according to Algorithm 1
   t_{i,k} = (1/M) Σ_{j=1}^{M} b_{i,k}^j
   L_k = (1/M) Σ_{j=1}^{M} D_0(T_k, B_k^j)
end while
return T_k
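A corresponding sketch of Algorithm 2, reusing match_no_missing from the previous sketch: the iteration alternates between best-matching every ear-touch to the current template and averaging the aligned points, until the mean mismatch stops decreasing.

import numpy as np

def create_template(Y: list, tol: float = 1e-6) -> np.ndarray:
    # Y is a list of (N, 2) arrays with no missing points.
    T, L_prev = Y[0].copy(), np.inf
    while True:
        B = [match_no_missing(T, Yj)[1] for Yj in Y]  # best matches (Algorithm 1)
        T = np.mean(B, axis=0)                        # t_{i,k} = mean over j of b_{i,k}^j
        L = float(np.mean([np.sum(np.linalg.norm(T - Bj, axis=1) ** 2) for Bj in B]))
        if L_prev - L <= tol:                         # converged
            return T
        L_prev = L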

4.3. Matching in the Presence of Missing Points

In practice, both the template (T) and the probe (X) are variable-length sequences of set points, so a one-to-one correspondence between their set points may not exist. This means that the matching algorithm described above requires some modifications. First, a simpler scenario is assumed, in which the template is complete; it is then extended to the case where the template itself is an incomplete sequence of set points.
If the template is complete, then there must be an “injection” from the set of the probe’s set points into the set of template set points. We represent this injective function by a “partial permutation”. If the incomplete ear-touch X′ has $N_X$ set points and the template T has N set points, each injection can be written as an $N_X$-permutation of N, i.e., an $N_X$-tuple $(\pi_1, \pi_2, \ldots, \pi_{N_X})$ in which the $\pi_i$ are distinct and $1 \le \pi_i \le N$, $i = 1, 2, \ldots, N_X$. The prime symbol (′) on X′ simply emphasises its incompleteness. Denoting the set of all such partial permutations by $\mathcal{P}(N, N_X)$, $D_1$ can be reformulated as $D_2$ in Equation (6) to account for missing points in its calculations. The optimisation problem can be solved by an extension of Algorithm 1, described in Algorithm 3, which allows the best match to be incomplete and returns an additional output, the index sequence E, to distinguish between existing and non-existing set points. The definition of E is given in Equation (7), and the incomplete best match (B, E) satisfies the equality in Equation (8).
$$ D_2(T, X') = \frac{1}{N_X} \min_{(R, l, P) \in \mathcal{R} \times \mathcal{L} \times \mathcal{P}(N, N_X)} \sum_{i=1}^{N_X} \left\| t_{\pi_i} - (R x_i + l) \right\|^2 \qquad (6) $$
$$ E \equiv (e_1, e_2, \ldots, e_N), \qquad e_i = \begin{cases} 1 & \text{if the } i\text{-th set point exists} \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, 2, \ldots, N \qquad (7) $$
$$ D_2(T, X') = \sum_{i=1}^{N} e_i \left\| t_i - b_i \right\|^2 \qquad (8) $$
Algorithm 3: Matching in the Presence of Missing Points
inputs:
   complete template T = (t_1, t_2, ..., t_N)
   incomplete ear-touch X′ = (x_1, x_2, ..., x_{N_X})
outputs:
   D_2(T, X′) based on Equation (6)
   the pair (B, E) of the incomplete best match that satisfies Equation (8)
min = infinity
for each P ∈ P(N, N_X)
   create a limited version of the template:
      s_i = t_{π_i}, i = 1, 2, ..., N_X;  S = (s_1, s_2, ..., s_{N_X})
   find (R̄, l̄) = argmin_{(R, l) ∈ R×L} Σ_{i=1}^{N_X} ‖s_i − (R x_i + l)‖² using Pacut’s method
   store the virtually complete best match of X′ w.r.t. S:
      c_i = R̄ x_i + l̄, i = 1, 2, ..., N_X;  C = (c_1, c_2, ..., c_{N_X})
   mismatch = D_0(S, C)
   if mismatch < min then
      min = mismatch
      e_i = 0, b_i = (0, 0), i = 1, 2, ..., N
      e_{π_i} = 1, b_{π_i} = c_i, i = 1, 2, ..., N_X
      B = (b_1, b_2, ..., b_N);  E = (e_1, e_2, ..., e_N)
   end if
end for
return min, B, E
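A NumPy sketch of Algorithm 3 under the same assumptions as the earlier sketches; partial permutations P(N, N_X) are enumerated with itertools.permutations(range(N), N_X), and align() is the rotation/translation solver sketched after Algorithm 1.

import numpy as np
from itertools import permutations

def match_with_missing(T: np.ndarray, Xp: np.ndarray):
    # T: complete (N, 2) template; Xp: incomplete (N_X, 2) ear-touch, N_X <= N.
    N, N_X = len(T), len(Xp)
    best = (np.inf, None, None)
    for pi in permutations(range(N), N_X):    # partial permutations P(N, N_X)
        S = T[list(pi)]                       # limited version of the template
        R, l = align(S, Xp)
        C = Xp @ R.T + l                      # virtually complete best match
        m = float(np.sum(np.linalg.norm(S - C, axis=1) ** 2))
        if m < best[0]:
            B = np.zeros_like(T)
            E = np.zeros(N, dtype=int)
            B[list(pi)] = C
            E[list(pi)] = 1
            best = (m, B, E)
    return best  # (mismatch, B, E); Equation (6) additionally divides the mismatch by N_X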
If the template (T′) is incomplete, some set points of X′ may lose their counterparts in T′, so it is important to know which set points are common to both ear-touches. If the total number of set points is denoted by N, and T′ and X′ have $N_T$ and $N_X$ set points, respectively, with $N_X \le N_T$, then the number of common set points $N_c$ obeys the inequality $N_T + N_X - N \le N_c \le N_X$. Our algorithm should therefore search all possible values in this interval, as described in Algorithm 4.
Algorithm 4: Matching when Both the Template and the Ear-Touch Are Incomplete (Expansion of Algorithm 3)
inputs:
   N, the total number of existing set points
   incomplete template T′ = (t_1, t_2, ..., t_{N_T})
   incomplete ear-touch X′ = (x_1, x_2, ..., x_{N_X})
outputs:
   the minimum mismatch between T′ and X′, considering all possible numbers of common set points
   the pair (B, E) of the incomplete best match, which satisfies
      attained mismatch = Σ_{i=1}^{N} e_i ‖t_i − b_i‖²
min = infinity
N_c^min = N_X + N_T − N;  N_c^max = N_X
for N_c = N_c^min to N_c^max
   for each Q ⊆ {1, 2, ..., N_X} with |Q| = N_c
      sort the elements of Q in ascending order as q_1, q_2, ..., q_{N_c}
      create a limited version of X′:
         z_i = x_{q_i}, i = 1, 2, ..., N_c;  Z = (z_1, z_2, ..., z_{N_c})
      find the mismatch between Z and T′ according to Algorithm 3:
         (mismatch, B′, E′) = Algorithm3(T′, Z)
      if mismatch < min then
         min = mismatch
         e_i = e′_i, b_i = b′_i, i = 1, 2, ..., N_c
         e_i = 0, b_i = (0, 0), i = N_c + 1, N_c + 2, ..., N
         B = (b_1, b_2, ..., b_N);  E = (e_1, e_2, ..., e_N)
      end if
   end for
end for
return min, B, E
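A sketch of Algorithm 4’s outer search, with the subsets Q enumerated via itertools.combinations and each limited version of X′ matched by the Algorithm 3 sketch above; the final padding of B and E up to length N is omitted for brevity.

import numpy as np
from itertools import combinations

def match_both_incomplete(N: int, Tp: np.ndarray, Xp: np.ndarray):
    # Tp: incomplete (N_T, 2) template; Xp: incomplete (N_X, 2) ear-touch.
    # The number of common set points N_c lies in [N_T + N_X - N, N_X].
    N_T, N_X = len(Tp), len(Xp)
    best = (np.inf, None, None)
    for n_c in range(max(1, N_T + N_X - N), N_X + 1):
        for Q in combinations(range(N_X), n_c):  # limited versions of X'
            Z = Xp[list(Q)]
            m, B, E = match_with_missing(Tp, Z)  # Algorithm 3
            if m < best[0]:
                best = (m, B, E)
    return best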

4.4. Creating a Template in the Presence of Missing Points

In Algorithm 2, the template is constructed by averaging the related set points over all best matches of the input ear-touches. Those best matches depend on the template itself, so an iterative scheme is employed to reach a fixed point, which is an optimum of Equation (5). The same can be carried out for incomplete ear-touches, as described in Algorithm 5, but there is a major drawback.
Algorithm 5: Reference Template Creation from a Group of Known Ear-Touches with Missing Set-Points
inputs: a sequence of incomplete ear-touches Y = (Y_1, ..., Y_M) with corresponding existence sequences E = (E_1, E_2, ..., E_M)
outputs: an incomplete template, extending Algorithm 2 to incomplete ear-touches
sort the inputs (Y and E) by the number of existing set points in descending order and store them in the original variables
k = 0;  L_0 = infinity;  T_0 = Y_1;  E_0^T = E_1
while (k = 0) or (L_{k−1} − L_k > tol)
   k = k + 1
   (B_k^j, E_k^j) = incBestMatch(T_{k−1}, E_0^T, Y_j, E_j), j = 1, 2, ..., M   // w.r.t. Algorithm 4
   n_i = Σ_{j=1}^{M} e_{i,k}^j
   t_{i,k} = (1/n_i) Σ_{j=1}^{M} b_{i,k}^j if n_i > 0, else (0, 0), i = 1, 2, ..., N
   T_k = (t_{1,k}, t_{2,k}, ..., t_{N,k})
   L_k = (1/M) Σ_{j=1}^{M} D_0(T_k, B_k^j)
end while
return T_k, E_0^T
The template’s existing set points are limited by the first guess, and further ear-touches will not add any new set points to the template. The reason is the if-statement block of Algorithm 3, which discards the extra set points of the given ear-touch when creating the best match. This drawback can be fixed as described in Algorithm 6, but the fix introduces another challenge.
Algorithm 6: Improved if-Statement Block for Template Creation with Missing Set-Points
synopsis: improving the if-statement block of Algorithm 3 so that the extra set points of the given ear-touch are retained
if mismatch < min then
   min = mismatch
   e_i = 0, b_i = (0, 0), i = 1, 2, ..., N
   e_{π_i} = 1, b_{π_i} = c_i, i = 1, 2, ..., N_c
   index = N_c
   for each j ∈ {1, 2, ..., N_X} \ Q
      index = index + 1
      e_index = 1, b_index = R̄ x_j + l̄
   end for
   B = (b_1, b_2, ..., b_N);  E = (e_1, e_2, ..., e_N)
end if
The vacant set points in the template are limited, meaning that during template creation all extra set points in each input ear-touch must be matched and categorised so that they can be aggregated. This process, combined with the iterative scheme, can impose a real computational burden. With this in mind, in our research we preferred the sub-optimal approach described in Algorithm 7 over the optimal strategy discussed above. The results were satisfactory, and we did not develop the algorithm beyond this point.
Algorithm 7: Sub-Optimal Template Creation with Missing Set-Points (Sub-Optimal Variant of Algorithm 6)
inputs: a sequence of incomplete ear-touches Y = (Y_1, ..., Y_M) with corresponding existence sequences E = (E_1, E_2, ..., E_M)
outputs: an incomplete template, extending Algorithm 2 to incomplete ear-touches
sort the inputs (Y and E) by the number of existing set points in descending order and store them in the original variables
k = 1;  L_0 = infinity;  T_1 = Y_1;  E_1^T = E_1;  N_1^T = E_1
for j = 2, ..., M
   (B^j, E^j) = incBestMatch(T^{j−1}, E_{j−1}^T, Y_j, E_j)   // w.r.t. Algorithm 4, modified by Algorithm 6
   t_i^j = (n_{i,j−1}^T t_i^{j−1} + e_i^j b_i^j) / (n_{i,j−1}^T + e_i^j) if n_{i,j−1}^T + e_i^j > 0, else (0, 0), i = 1, 2, ..., N
   n_{i,j}^T = n_{i,j−1}^T + e_i^j, i = 1, 2, ..., N
   T^j = (t_1^j, t_2^j, ..., t_N^j);  N_j^T = (n_{1,j}^T, n_{2,j}^T, ..., n_{N,j}^T)
end for
L = (1/M) Σ_{j=1}^{M} D_0(T^M, B^j, E^j)   // final average mismatch, masked by E^j
return T^M, E_M^T
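The core running-average update of Algorithm 7 can be sketched as follows; T is the (N, 2) template, n the per-point contribution counts, and (B, E) the incomplete best match of the next ear-touch (names are illustrative):

import numpy as np

def update_template(T: np.ndarray, n: np.ndarray, B: np.ndarray, E: np.ndarray):
    # Each template point t_i is the running mean of the aligned points matched
    # to it so far; n_i counts how many ear-touches have contributed to point i.
    n_new = n + E
    denom = np.maximum(n_new, 1)[:, None]  # avoids division by zero
    T_new = (n[:, None] * T + E[:, None] * B) / denom
    return T_new, n_new  # points with n_new = 0 remain at the (0, 0) placeholder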
To ensure a comprehensive understanding of the methodology, a practical real-world example will be furnished in Appendix A.

5. Results

The experimental scenario involved the limited ear-touch database. The users considered in the enrollment process were chosen randomly, and the extracted features for each test user were then calculated. We carried out the tests on our own ear-touch dataset. In verification systems, the problem of missing points due to the physical properties of the ear is marginal, since it is usually possible to acquire a proper ear-touch: pressing slightly longer and keeping the touchscreen on the ear takes only a few seconds, and users generally cooperate with the process. Hence, in the experiments we concentrated on ear-touches with four or more set points and ignored those with fewer.
Experiments were carried out to measure the performance gained by using set point coordinates in a matching system. For each subject, the numbers of impostor and genuine comparisons were approximately 36,315 ((270 × 269)/2) and 1110 (30 × 37), respectively. We did not count the symmetric similarity of the same subject or the similarity between identical ear-touches. The average time to create a template (feature extraction from a minimum of four and a maximum of eight set points) was 0.22 s, and matching took 0.003 s, on a PC with 8 GB RAM and a 2.6 GHz Core i3 CPU. Octave was used to implement all the programs.
The query images were captured for the subjects under similar conditions. The features of the ear-touches were computed, and the ear recognition decision was taken based on the calculated features.
In the following subsections, we present the results for the proposed methods. The False Rejection Rate (FRR) and the False Acceptance Rate (FAR) were calculated, from which the Equal Error Rate (EER) was computed [23]. The False Match Rate (FMR) is the rate at which a biometric process incorrectly matches biometric signals from two distinct individuals as coming from the same individual.
False Acceptance Rate (FAR): this measures the rate at which the system incorrectly accepts an unauthorized user.
$$ \mathrm{FAR} = \frac{\text{number of impostor attempts accepted as genuine}}{\text{total number of impostor attempts}} $$
False Rejection Rate (FRR): this measures the rate at which the system incorrectly rejects a legitimate user.
$$ \mathrm{FRR} = \frac{\text{number of genuine attempts rejected as impostors}}{\text{total number of genuine attempts}} $$
Equal Error Rate (EER): the EER is the operating point at which FAR and FRR are equal, and it is often used as a single-number summary of the system’s overall accuracy. At that point, it can be computed as the average of the two rates:
$$ \mathrm{EER} = \frac{\mathrm{FAR} + \mathrm{FRR}}{2} $$
At the EER, the system is effectively balanced between its ability to accept legitimate users and reject impostors. In practical terms, lower EER values indicate better system performance, as they imply a reduced rate of both false acceptances and false rejections.
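As an illustrative sketch (not the authors’ Octave code), the EER can be estimated from arrays of genuine and impostor similarity scores by sweeping the decision threshold:

import numpy as np

def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    # Sweep the threshold over all observed scores, compute FAR and FRR at each
    # point, and report their average where the two rates are closest.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = np.inf, 1.0
    for th in thresholds:
        far = np.mean(impostor >= th)  # impostor attempts accepted as genuine
        frr = np.mean(genuine < th)    # genuine attempts rejected as impostors
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer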

5.1. Exploratory Data Analysis

Now, let us aim to gain a deeper comprehension of this dataset. Conducting exploratory data analysis stands as a crucial phase in the model development process. Our focus will be on examining the distribution of the target class, and assessing the potential of set points to distinguish between individual identifications. The findings from this section will guide our decisions on selecting features for model training and determining the metrics suitable for model evaluation.
In this work, we used different numbers of enrollment samples, from one ear-touch to eight ear-touches per enrollment, and we evaluate the data accordingly. Subjects with a single ear-touch are used only in the test set. The resulting training and test splits are given in Table 2.
The distribution of all the data is shown in Figure 8, which illustrates the distribution of set points for ear-touches across the 960 samples; each sample consists of at most nine set points. The individual distributions of the set points are represented by the solid curves, where the x-axis denotes the set point values and the y-axis the probability density.
The overall mean of the set points, marked by the dashed vertical line, is 424.58, indicating the central tendency of the dataset. The shaded region around the mean represents the overall standard deviation (254.22), giving a view of the dispersion of the set points. This visualization provides a comprehensive overview of the variability in ear-touch set points across samples, aiding the understanding of the dataset’s statistical characteristics.
Let us now examine the inter-correlations among the input features and their associations with the target variable. Given that all the input features and the target variable consist of numerical values, the Pearson correlation coefficient is employed for measuring the degree of correlation. As a widely utilized metric, the Pearson correlation coefficient quantifies linear relationships between two variables, ranging from +1 to –1. A coefficient magnitude exceeding 0.7 indicates a notably high correlation, while magnitudes falling between 0.5 and 0.7 signify moderately high correlation. Additionally, magnitudes ranging from 0.3 to 0.5 indicate low correlation, and values below 0.3 denote minimal to no correlation. The pairwise correlations can be efficiently computed through the Pandas library’s corr() function. The resultant correlation matrix is illustrated in Figure 9.
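As a minimal sketch of this step (the file name and column layout are hypothetical):

import pandas as pd

# One row per ear-touch: flattened set-point coordinates X1, Y1, ..., X9, Y9
# plus the Target column holding the subject identifier.
df = pd.read_csv("ear_touches.csv")
corr_matrix = df.corr(method="pearson")      # pairwise Pearson coefficients
print(corr_matrix["Target"].sort_values())   # correlations with the target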
((X_1, Y_1), (X_2, Y_2), (X_3, Y_3), ..., (X_9, Y_9)) denote the set-point coordinates gathered from the smartphone, and Target denotes the subject identity. In Figure 9, attention is first directed to the first column, which shows the correlations between all input features and the target variable. It is evident that the features exhibit negligible correlation with the target class. Notably, the correlation coefficients are negative, indicating an inverse relationship: as the feature values increase, the corresponding target values decrease. This observation underscores the effective mitigation of rotation and translation challenges within our approach. Moreover, the analysis reveals substantial inter-correlation among several features: X7, X8, and X9 display high correlations with Y7, Y8, and Y9, respectively. It is pertinent to observe that the majority of ear-touches register zero values for X7, X8, and X9.

5.2. Evaluation of the Recognition System without Missing Points

Our dataset includes some touches with no missing points: 17 users have ear-touches with no missing points, 72 ear-touches in total. This means these 17 users each had at least three ear-touches with the same number of set points located in almost the same places. In this section, we evaluate how our proposed method works on this part of the dataset. Figure 10 shows the FMR and FNMR for the data with no missing points. We used single-enrollment and multi-enrollment scenarios; for the evaluation without missing data, we used only three enrollment images, as no more were available. The proposed method achieved an EER of 0.037 with a single enrollment image and 0.032 with multiple enrollment images. Figure 11 depicts the detection error trade-off (DET) curves for the proposed method on the ear-touch database; the DET curve shows the trade-off between FNMR and FMR.

5.3. Evaluation of the System in the Presence of Missing Points

In this section, we used the whole dataset to evaluate the system’s performance: 92 users and 960 ear-touches in total, including those with missing points. The recognition outcome for the proposed method is shown in Figure 12, and Figure 13 depicts the corresponding DET curves. The experiments showed that the proposed method improves the results by about 0.17 (from 0.27 to 0.10) when eight ear-touches are used for enrollment. This improvement likely arises because, with a single enrollment ear-touch, we must assume that all possible points of an ear appear in the ear-touch with the maximum number of points, whereas with eight enrollment ear-touches the computations draw on all observed set points.

5.4. Different Sample Numbers for Template Creation

We evaluated the proposed method on our dataset with various numbers of ear-touches for template creation, to show how the number of ear-touches affects the results. Table 3 compares the performance of our proposed method for various numbers of enrollment ear-touches in the ear-touch recognition system. There are four enrollment subsets; for instance, “Enrol 1 ear-touch” means that one ear-touch per subject is used for training (template creation) and the rest for testing, and so on. We observed that the more ear-touches were used for enrollment, the better the performance. Figure 14 compares the FMR and FNMR of our proposed method across the different enrollment sizes.
Subsequently, Figure 15 depicts the DET curves for the proposed method on the ear-touch database. The experiments showed a mean cross-validation EER of about 0.04 across all folds, which is acceptable for a recognition system.
Overall, our experiments indicated a remarkable improvement in performance as the number of enrollment presentations increased under the proposed method. The method was shown to provide precise, additional information and could be used for authentication and access control on smartphones. The results of this research strongly suggest that handling missing points is crucial and must be built into the features.

6. Discussion

The goal of this research was to introduce a novel touch-based biometric characteristic for mobile devices. This section discusses the main findings in relation to the literature on ear-touch recognition methods for mobile devices.
Our authentication system aims to verify the authenticity of users based on ear-touch images. While it may appear that a binary classifier could be a straightforward solution, the task at hand is more nuanced and falls into the realm of recognition rather than a binary classification.
The following are key points to consider:
Diversity in ear-touch patterns: Unlike traditional binary classification problems where distinct classes are well-defined, ear-touch patterns can exhibit significant variability among individuals. This diversity makes it challenging to define a single binary boundary that separates authentic from non-authentic users.
Enrollment for recognition: To effectively capture and adapt to the diversity mentioned above, our system employs an enrollment phase. During this phase, the system learns and recognizes unique features in each user’s ear-touch pattern. This allows for a more personalized and accurate recognition process, enhancing the overall security and reliability of the authentication system.
Recognition as a multiclass problem: In essence, our authentication system can be better conceptualized as a multiclass recognition problem rather than a binary classification problem. Each enrolled user represents a distinct class, and the system’s task is to correctly identify the user among the enrolled set.
Achieved equal error rate: The reported average equal error rate of 0.04 in our study underscores the effectiveness of our approach. This metric accounts for both false acceptance and false rejection rates, providing a comprehensive evaluation of the system’s performance in a recognition context.
This section concludes with a discussion of the limitations of this research, areas for future research, and a short summary. The following are questions related to a discussion and future study possibilities of this research:
Question 1: What makes ear-touch person authentication useful on mobile devices?
Question 2: How does the method used for aligning the ear-touches perform as a biometric characteristic?
To answer the first question, consider the existing touch-based ear biometrics. The method proposed in [16] describes a similar biometric system but with a different type of data acquisition: it scans whole ears with a modified screen, which is not possible on a normal smartphone, as the authors changed the kernel of the mobile devices. To make an ear-touch biometric system widely available, we instead used the touchscreen in its normal mode. According to our results, the acquired data can serve as a biometric characteristic.
The method proposed in [20] for the alignment of the ear-touches achieves an equal error rate of 0.04. The system might even be considered for identification, but that would require evaluation on a larger database.
This biometric system has the advantage that data acquisition is easy and could be deployed on any mobile device with a multi-touch screen. However, the number of features obtained from this biometric characteristic might be relatively low.

7. Conclusions

In this study, we have introduced a novel biometric authentication approach tailored for mobile devices equipped with multi-touch screens. Our proposed method leverages the distinctive characteristics of ear-touch patterns, offering a seamless and secure method for individuals to authenticate themselves. By harnessing the inherent capabilities of the multi-touch screen as a sensor, we have developed a robust authentication system.
Our efforts encompass the creation of a comprehensive database consisting of 92 subjects and a total of 960 ear-touch images. To facilitate the extraction and matching of these unique ear-touch features, we employed a theoretical method as outlined in [20]. Remarkably, our methodology yielded an impressive equal error rate (EER) of just 0.04, underlining the effectiveness of our approach.
In conclusion, this research serves as an important stepping stone for the exploration and application of a novel biometric characteristic acquired through multi-touchscreen technology, which is increasingly prevalent in the majority of mobile devices. The potential implications are far-reaching, promising enhanced mobile security and user convenience, thereby setting the stage for further advancements in this promising field.

Author Contributions

Conceptualization, J.N.K.; Methodology, J.N.K.; Software, J.N.K.; Validation, J.N.K.; Formal analysis, J.N.K. and R.M.T.; Investigation, J.N.K.; Resources, J.N.K.; Data curation, J.N.K.; Writing—original draft, J.N.K.; Writing—review & editing, S.M.; Visualization, J.N.K.; Supervision, S.M.; Project administration, J.N.K.; Funding acquisition, J.N.K. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 675087.

Data Availability Statement

The data are stored at the university and can be shared upon signing an agreement between the universities.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Real-world example for Algorithms 1 and 2:
Consider a scenario where a user is trying to authenticate using ear-touch on a mobile device. Let us say that the set of points on the touchscreen represents the user’s unique ear-touches, as shown in Figure A1.
Figure A1. Four ear-touches without missing points; the samples are taken from a participant.
Template creation:
Based on these samples, let us make a template using three of the ear-touches to analyse Algorithm 1. Following Algorithm 2, the template is created as shown in Figure A2.
Figure A2. Three ear-touches without missing points are chosen to create a template.
Ear-touch 1 is taken as the initial template (T_0 = ear-touch 1); the rest are used to find the best match. We move T_0 to the origin and align it to the x-axis. The result is shown in Figure A3.
Figure A3. Ear-touch 1 is moved to the origin and considered T_0 to find the best match.
The created template would be as shown in Figure A4.
Figure A4. Created template using Algorithm 2.
Applying Algorithm 1 to ear-touch 4 leads to the results shown in Figure A5. The calculated distance between the template and the test is 25,070; this is the result obtained for a genuine (same-ear) ear-touch.
Figure A5. Ear-touch 4 is considered the test sample and the result after aligning.
Let us consider a dissimilar ear-touch to test how the algorithm works. Figure A6 shows an example of an ear-touch that is not identical to the template in this scenario.
Figure A6. A dissimilar test ear-touch, used to verify whether its distance from the created template is large.
For the ear-touch shown in Figure A7, the distance equals 42,489. Ear-touches different from the template thus yield noticeably worse (larger) distances.
Figure A7. Aligned ear-touch and template; the aligned ear-touch is a dissimilar ear-touch with no missing set-points.
Real-world example for Algorithms 3–5:
Consider a scenario where we have missing points and would like to authenticate a user. Let us say that the set of points on the touchscreen represents the user’s unique ear-touches, as shown in Figure A8.
Figure A8. Seven ear-touches with missing points; six samples (shown by * and o) are taken from one participant and one sample (shown by +) from another.
Figure A8 shows the template-creation samples, the genuine sample, and the dissimilar sample used in our analysis. Five ear-touches are used to create the template in the presence of missing points.
Template creation:
Based on these samples, let us make a template using five of the ear-touches to analyse Algorithms 3 and 4. Following Algorithm 5, the resulting template is shown in Figure A9.
Figure A9. Created template using Algorithm 5.
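Algorithm 5 is specified in the main text; as a rough sketch of the idea, one can average each set point over the aligned enrollment samples in which it was actually captured, simply skipping samples where it is missing. The dictionary representation keyed by set-point index is our assumption.

```python
from collections import defaultdict
import numpy as np

def build_template_with_missing(samples: list) -> dict:
    """Average each set point over the aligned enrollment samples in
    which it appears. Each sample maps a set-point index to its (x, y)
    coordinates; absent indices are treated as missing points."""
    buckets = defaultdict(list)
    for sample in samples:
        for idx, xy in sample.items():
            buckets[idx].append(xy)
    return {idx: np.mean(pts, axis=0) for idx, pts in buckets.items()}
```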
Applying Algorithm 4 to ear-touch 6, to test whether it is identical, leads to the result shown in Figure A10. The calculated distance between the template and the test sample is 1849; this value is obtained for the identical ear-touch.
Figure A10. Aligned ear-touch and template; the aligned ear-touch is an identical ear-touch.
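The comparison in the missing-point case can then be restricted to the set points present in both the template and the test sample; averaging, rather than summing, keeps samples with few shared points comparable. Again, this is a sketch under our assumptions rather than the paper's exact Algorithm 4.

```python
def distance_with_missing(template: dict, test: dict) -> float:
    """Mean squared distance over the set points shared by the aligned
    template and test sample; infinite if nothing is shared."""
    shared = template.keys() & test.keys()
    if not shared:
        return float("inf")
    return float(np.mean([np.sum((np.asarray(template[i]) - np.asarray(test[i])) ** 2)
                          for i in shared]))
```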
We again consider a dissimilar ear-touch to test how the algorithm behaves. Figure A11 shows an example of an ear-touch that is not identical to the template in this scenario (labelled "Ear-touch 7 dissimilar").
Figure A11. Aligned ear-touch and template; the aligned ear-touch is a dissimilar ear-touch with missing set points.
For the ear-touch shown in Figure A11, the distance is 10,970. Again, an ear-touch different from the template yields a markedly larger distance.

References

1. Patwary, A.A.-N.; Naha, R.K.; Garg, S.; Battula, S.K.; Patwary, M.A.K.; Aghasian, E.; Amin, M.B.; Mahanti, A.; Gong, M. Towards secure fog computing: A survey on trust management, privacy, authentication, threats and access control. Electronics 2021, 10, 1171.
2. Abdulla, W.H.; Marattukalam, F.; Hahn, V.K. Exploring Human Biometrics: A Focus on Security Concerns and Deep Neural Networks. APSIPA Trans. Signal Inf. Process. 2023, 12, e38.
3. Minaee, S.; Abdolrashidi, A.; Su, H.; Bennamoun, M.; Zhang, D. Biometrics recognition using deep learning: A survey. Artif. Intell. Rev. 2023, 56, 8647–8695.
4. Meijerman, L.; Thean, A.; Maat, G.J. Earprints in forensic investigations. Forensic Sci. Med. Pathol. 2005, 1, 247–256.
5. Broeders, A. Of earprints, fingerprints, scent dogs, cot deaths and cognitive contamination—A brief look at the present state of play in the forensic arena. Forensic Sci. Int. 2006, 159, 148–157.
6. Halpin, S. What have we got ear then: Developments in forensic science: Earprints as identification evidence at criminal trials. UC Dublin L. Rev. 2008, 8, 65.
7. Meijerman, L.; Thean, A.; van der Lugt, C.; van Munster, R.; van Antwerpen, G.; Maat, G.J. Individualization of earprints. Forensic Sci. Med. Pathol. 2006, 2, 39–49.
8. Cabra, J.-L.; Parra, C.; Trujillo, L. Earprint touchscreen sensoring comparison between hand-crafted features and transfer learning for smartphone authentication. J. Internet Serv. Inf. Secur. 2022, 12, 16–29.
9. Holz, C.; Buthpitiya, S.; Knaust, M. Bodyprint: Biometric user identification on mobile devices using the capacitive touchscreen to scan body parts. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 3011–3014.
10. Maheshwari, M.; Arora, S.; Srivastava, A.M.; Agrawal, A.; Garg, M.; Prakash, S. Earprint Based Mobile User Authentication Using Convolutional Neural Network and SIFT. In Proceedings of the International Conference on Intelligent Computing, Wuhan, China, 15–18 August 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 874–880.
11. Ali, S.O.; Al-Nima, R.R.; Mohammed, E.A. Individual Recognition with Deep Earprint Learning. In Proceedings of the 2021 International Conference on Communication & Information Technology (ICICT), Basrah, Iraq, 5–6 June 2021; IEEE: New York, NY, USA, 2021; pp. 304–309.
12. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379.
13. Meijerman, L.; Sholl, S.; De Conti, F.; Giacon, M.; van der Lugt, C.; Drusini, A.; Vanezis, P.; Maat, G. Exploratory study on classification and individualisation of earprints. Forensic Sci. Int. 2004, 140, 91–99.
14. Alberink, I.; Ruifrok, A. Performance of the FearID earprint identification system. Forensic Sci. Int. 2007, 166, 145–154.
15. Morales, A.; Diaz, M.; Llinas-Sanchez, G.; Ferrer, M.A. Earprint recognition based on an ensemble of global and local features. In Proceedings of the 2015 International Carnahan Conference on Security Technology (ICCST), Taipei, Taiwan, 21–24 September 2015; IEEE: New York, NY, USA, 2015; pp. 253–258.
16. Alajarmeh, N. Non-visual access to mobile devices: A survey of touchscreen accessibility for users who are visually impaired. Displays 2021, 70, 102081.
17. Atkinson, M. An optimal algorithm for geometrical congruence. J. Algorithms 1987, 8, 159–172.
18. Alt, H.; Mehlhorn, K.; Wagener, H.; Welzl, E. Congruence, similarity, and symmetries of geometric objects. Discret. Comput. Geom. 1988, 3, 237–256.
19. Chen, H.; Bhanu, B. Human ear recognition in 3D. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 718–737.
20. Pacut, A. Alignment of the earprints. arXiv 2021, arXiv:2112.05237.
21. Gower, J.C.; Dijksterhuis, G.B. Procrustes Problems; OUP Oxford: Oxford, UK, 2004.
22. Lawrence, J.; Bernal, J.; Witzgall, C. A purely algebraic justification of the Kabsch-Umeyama algorithm. J. Res. Natl. Inst. Stand. Technol. 2019, 124, 1.
23. Palma, D.; Montessoro, P.L. Biometric-based human recognition systems: An overview. Recent Adv. Biom. 2022, 1–21.
Figure 1. Structure of a human ear.
Figure 2. Setup for capturing data using a touchscreen on a smartphone: (a) shows how to press the smartphone to the ear and (b) illustrates a sample of the touched area on a smartphone. The blue circular shapes are set points; in this sample, five set points are touched.
Figure 3. The distribution of ear-touches from 72 participants.
Figure 4. Maximum number of set points across all subjects.
Figure 5. Ear-touch data collection mobile application: (a) the first page of the developed application, where personal information is entered; this is used to analyze the data in terms of gender, age, and nationality (or race). (b) The second page of the developed application, which has three parts: "take ear scan from left ear" captures the ear-touch from the person's left ear; "take ear scan from right ear" captures the ear-touch from the person's right ear; "send information" sends the collected data to the server. The last button explains the consent form and the purposes of the research.
Figure 6. Ear-touches with different numbers of set points, in some cases located in different regions. These samples are shown in 2D coordinates.
Figure 7. Ear-touch verification procedure: presentation, the captured set points from the ear; F_p, the features extracted after cleaning the data; F_t, the stored ear-touch or ear-touches in the database; S_{p&t}, the similarity values after matching.
Figure 8. Distribution of set points for ear-touches across 960 samples.
Figure 9. Correlation matrix for set points (features) in our dataset.
Figure 10. False Match Rate and False Non-Match Rate for the ear-touch dataset with no missing data.
Figure 11. DET curve for the ear-touch dataset with no missing data.
Figure 12. False Match Rate and False Non-Match Rate for ear-touch data using the proposed method.
Figure 13. DET curve for ear-touch data using the proposed method.
Figure 14. FMR and FNMR for ear-touch data with four types of enrollment models on the ear-touch recognition system.
Figure 15. DET curve for ear-touch data based on enrollment with different numbers of ear-touches.
Table 1. Distribution of ear-touches in the WUT-Ear database. Each ear from a person is considered a subject.

Sex (%)        Single/Multiple         Total Number of Ear-Touches   Number of Subjects
Male (63%)     Multiple ear-touches    595                           42
Female (36%)   Multiple ear-touches    345                           30
Male (90%)     Single ear-touch        18                            18
Female (10%)   Single ear-touch        2                             2
Total                                  960                           92
Table 2. Number of training and test sets in the evaluation of the ear-touch system.

Enrollment Mode     Number of Training Samples   Number of Test Samples
Single ear-touch    72                           888
Eight ear-touches   576                          384
Table 3. Performance comparison of four types of enrollment models on the ear-touch recognition system.

Enrollment Model       EER
Enroll 1 ear-touch     0.178
Enroll 2 ear-touches   0.093
Enroll 4 ear-touches   0.05
Enroll 6 ear-touches   0.04
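The EER values in Table 3 correspond to the operating point at which the FMR and FNMR curves cross (cf. Figures 10–15). Given arrays of genuine and impostor distances, a minimal sketch of that computation, assumed rather than taken from the authors' evaluation code, is:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep the decision threshold over all observed distances and
    return the error rate where FMR (impostors accepted) and FNMR
    (genuine users rejected) are closest to each other."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fmr = np.array([(impostor < t).mean() for t in thresholds])
    fnmr = np.array([(genuine >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(fmr - fnmr)))
    return float((fmr[i] + fnmr[i]) / 2.0)
```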
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.