Article

Heterogeneous Iris One-to-One Certification with Universal Sensors Based on Quality Fuzzy Inference and Multi-Feature Fusion Lightweight Neural Network

1 College of Computer Science and Technology, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3 College of Computer Science, Northeast Electric Power University, Jilin 132012, China
4 College of Software, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(6), 1785; https://doi.org/10.3390/s20061785
Submission received: 12 March 2020 / Revised: 18 March 2020 / Accepted: 21 March 2020 / Published: 23 March 2020
(This article belongs to the Section Electronic Sensors)

Abstract:
Due to the unsteady morphology of heterogeneous irises generated by a variety of different devices and environments, traditional processing methods based on statistical learning or cognitive learning for a single iris source are not effective. Traditional iris recognition divides the whole process into several statistically guided steps, which cannot solve the problem of correlation between the various links. The size and situational classification constraints of existing iris datasets make it difficult to meet the requirements of learning methods under a single deep learning framework. Therefore, aiming at a one-to-one iris certification scenario, this paper proposes a heterogeneous iris one-to-one certification method with universal sensors based on quality fuzzy inference and a multi-feature entropy fusion lightweight neural network. The method is divided into an evaluation module and a certification module. The evaluation module, which can be used with different devices, comprises a quality fuzzy concept inference system and an iris quality knowledge concept construction mechanism; it transforms human logical cognition concepts into digital concepts and selects appropriate concepts to judge iris quality according to different quality requirements, yielding a recognizable iris. The certification module is a lightweight neural network based on statistical learning ideas and a multi-source feature fusion mechanism. The information entropy of the iris feature label is used to set iris entropy feature category labels, and the certification functions are designed according to these category labels to obtain the certification result. As the requirements for the number and quality of irises change, the category labels in the certification functions are dynamically adjusted through a feedback learning mechanism. This paper uses iris data collected from three different sensors in the JLU (Jilin University) iris library. The experimental results show that, for lightweight multi-state irises, the abovementioned problems are ameliorated to a certain extent by this method.

1. Introduction

This paper takes the lightweight one-to-one certification of multi-state irises in the same environment as its research object and proposes a one-to-one certification method with universal sensors for heterogeneous irises based on quality fuzzy inference and a multi-feature entropy fusion lightweight neural network. A feedback learning mechanism enables the overall process to dynamically adjust the iris concepts in the fuzzy system and the category labels in the recognition function as the number and quality of irises change.
The prerequisites for this method are listed as follows:
  • The iris acquisition status and acquisition environment change, and this change cannot be predicted, which causes certain defocusing, deflection, shadowing and other problems. The dimensions of the captured images are 640 × 480;
  • The number of iris categories is lightweight (all iris libraries contain dozens of categories, and each category contains only a few thousand pictures);
  • To ensure the accuracy of lightweight certification, testers are allowed to collect multiple times; thus, it is acceptable to appropriately increase the false rejection rate;
  • The number of training irises can reach several thousand, but the types of training irises (degree of defocus, illumination, strabismus effect, etc.) are not classified; they are mixed directly, forming mixed data;
  • The subject tester is a living human.
The overall working process of the method proposed in this paper is shown in Figure 1.
The whole process of the method in this paper is divided into an evaluation module and a certification module, and they are connected together through a transition phase of iris positioning.
  • Evaluation module: The evaluation module is a set of general processes that can be run on the irises collected by different acquisition instruments. This paper designs a quality concept fuzzy inference method based on a pure fuzzy logic system to select a recognizable iris. First, based on the existing training irises, a construction mechanism for the eye concept base and the qualified iris image concept base is proposed, and the subjective cognition of a person with normal thinking is initially transformed from an iris concept into a digital logic concept of the computer system. When evaluating the quality of a test iris, suitable concepts are selected according to the iris recognition requirements, and a quality inference machine is used to determine whether the test iris can be used for iris recognition. After the recognizable iris is processed, it is input into the recognition model. To accommodate the analysis of certification errors and the emergence of new quality requirements, a feedback learning mechanism for quality evaluation is designed to update and correct this knowledge: existing conceptual labels are checked according to changes in the number of irises, errors in the original labels are corrected, and labels are expanded according to changes in quality requirements. The recognizable iris image determined by the evaluation module is located, normalized, and converted into a 180 × 32 dimensional iris recognition area.
  • Certification module: Based on the convolutional neural network structure, a neural network with a lightweight layer structure is designed for iris certification, and a feature representation mechanism based on multi-source features is proposed. The image of the iris certification area is processed by a smoothing algorithm and a texture highlighting algorithm, forming three different iris images that serve as multi-source features. Each iris image passes through 12 layers of an image-processing network consisting of convolutional layers, pooling layers, ReLU layers, and an expansion layer; each image finally forms 15 expanded parameters in the expansion layer, for a total of 45 expanded parameters. In the certification module, the expanded parameters of the three images are combined by average fusion in the feature fusion layer to form 15 recognition parameters. The certification function is designed based on the sigmoid function [1], and statistical information is used to calculate the information entropy of the certification parameters. According to this information entropy, the certification function parameters are designed as category labels. In the fully connected layer, one-to-one certification is performed through the certification function, and the result is output through the output layer. To accommodate the analysis of certification errors and the emergence of new iris data, a feedback learning mechanism for the certification module is designed to modify the information entropy and the category labels.
Compared with the current research, the innovation and research significance of this method in terms of improving the accuracy of one-to-one certification are described as follows:
  • The problem of the heterogeneous unsteady-state iris: In quality evaluation, the design of the iris quality knowledge concept mechanism minimizes the impact of fixed threshold decisions and uses objective iris features to set the concepts. It does not require a fixed index for mechanical evaluation, but sets the concept of quality based on the actual environment and identification needs for conceptual reasoning. In feature expression, a multi-source feature fusion mechanism is designed; feature fusion is performed from a multi-source perspective, increasing the discrimination between different types of features. The information entropy is used to design the recognition function category label parameters, realize the statistical recognition of the multi-state iris in a mixed dataset, expand the range of recognizable irises, and improve the accuracy of unsteady heterogeneous iris certification.
  • The correlation between the various links in the whole process: This paper uses the non-template matching mode as the basis for the design of the certification model. It does not focus on a specific algorithm, but rather on the protocol mechanism, and proposes a set of solutions for the whole process of iris certification from image acquisition to result output. In constructing the knowledge base, the reasoning process, and the certification process, a series of agreement mechanisms is proposed to connect the links with each other and fully consider the correlation between the certification steps. By resolving the instability of the unsteady-state iris and making the scheme universal, lightweight heterogeneous certification can be realized; that is, iris images collected under different environments by different sensors can be recognized by the scheme.
  • Limitations on iris dataset size and situation classification: In addition to improving the forward certification process, this paper designs a feedback learning mechanism for revising the overall process in reverse. Via professional analysis by human operators and data feedback from the computer system, erroneous certification situations and possible new external conditions are added. Responding to these cases by adjusting or correcting the knowledge base concepts and the entropy features of the category labels enables the entire certification process to achieve dynamic adjustment, thus improving both the forward certification and the reverse correction of deep learning frameworks under dataset constraints.
The structure of this paper is as follows. Section 1—the introduction—explains the research content, prerequisites, and innovations of this paper. Section 2—background and related work—explains the relevant background of this article’s research content and domestic and foreign trends, and then explains the research value of this paper. Section 3—quality fuzzy inference—details the process of iris quality evaluation in this paper. Section 4—heterogeneous iris one-to-one certification—introduces the certification process in detail. Section 5—experiments and analysis—analyzes the advantages of the proposed method in terms of structural significance and certification effects and compares it with various current algorithms to highlight its advantages under the prerequisites of this paper. Section 6—conclusion—summarizes the methods and experiments in this paper.

2. Background and Related Work

Iris recognition is currently one of the highest-security technologies recognized in the field [1]. In actual iris recognition, there are a variety of use scenarios, which are mainly divided into several situations:
In the iris collection state, the main settings are the collection status (collection posture, collection distance) and the external environment (illumination), which can be divided into four categories:
  • Unconstrained state in the same environment: Iris acquisition is performed based on a lack of restriction of the acquisition posture of the acquisition target person, and the external environment is not changed for iris acquisition;
  • Constrained state in the same environment: Iris acquisition is performed based on the restriction of the acquisition posture of the acquisition target person, and the external environment is not changed for iris acquisition;
  • Constrained state with environmental change: Iris acquisition is performed based on the restriction of the acquisition posture of the acquisition target person, and the external environment is changed for iris acquisition;
  • Unconstrained state with environmental change: Iris acquisition is performed based on a lack of restriction of the acquisition posture of the acquisition target person, and the external environment is changed for iris acquisition.
According to the recognition method, the process of recognition can be divided into two types:
  • Template matching: The test iris is coded and matched against a single stored template iris to reach the final conclusion;
  • Non-template matching: The features of the iris are formed into a cognitive concept, which is incorporated into neural network architectures such as deep learning [2] in the form of parameters, etc., to achieve non-coding matching recognition.
The essential purpose of use can be divided into two categories:
  • One-to-one certification: It is used to determine whether the test iris and the template iris belong to the same person;
  • One-to-many recognition: The iris of the test person is matched with multiple template irises, and the identity of the test person must be accurately identified.
There are separate process frameworks for completing iris recognition in various classification situations, and no matter what type of iris recognition is used, they all need to ensure high accuracy. The following aspects still require further breakthroughs.
  • Research on the whole structure of iris recognition: In current research on iris recognition, template matching methods are basically aimed at a specific scene in image acquisition [3], quality evaluation [4], localization normalization [5], feature expression [6], and recognition [7] and take one or more of these steps as the core for improvement. Such methods mechanically separate the steps, ignore the agreement between the algorithms, and limit the overall recognition accuracy. Non-template matching methods based on neural network structures such as deep learning explore the internal connections of certain steps, nesting them together to form a whole, and not necessarily completing all steps, thereby improving accuracy. However, there is no qualitative and quantitative analysis method for clarifying the correlation between the steps in the overall structure of iris recognition, which seriously affects the ability to improve the performance of the overall recognition system.
  • Iris multi-state and single-source feature expression: Most training and learning recognition models in the current iris recognition algorithms are based on publicly known iris training sets or manually set iris labels. However, in the actual shooting process, multi-state heterogeneity is expected to occur between the template image and the collected image. There are many reasons for the multi-state heterogeneity, and they are divided into three main aspects:
    a. The acquisition sensor specifications are different (e.g., an NIR sensor [8] and an ordinary optical sensor [9]; iris sample pictures taken by different cameras are shown in Figure 2).
    b. The collection environment is different, because the iris status is unstable under different environments (illumination, etc.), which affects the relative relationship between the iris textures.
    c. The acquisition status of the iris is different, because the acquisition status of the target person varies at different times, and disturbances such as defocusing and deflection occur.
Because the algorithms applied in the process of iris processing and recognition take on a black-box state, it is impossible to predict the outcome, which makes it difficult to design a unified process with fixed parameters for a variable and unsteady iris and to subsequently form consistent effects for iris quality, localization, and feature expression; this difficulty leads to the unsteady nature of iris features. An example of multi-state irises (taking the same device and the same person as an example) is shown in Figure 3.
It can be observed from Figure 3 that even the same device and the same person cannot guarantee the final appearance of the iris feature, which can affect the expression of the iris feature. Therefore, research into the impact of a multi-state iris on the setting of iris labels requires further examination.
  • Concept label setting and universal recognition of heterogeneous iris datasets: Currently, two main forms of iris concept label setting exist. The first is statistical learning methods [10]: a reasonable solution that meets the vast majority of requirements is obtained through the analysis of a notably large amount of data in the same type of situation. The second is cognitive learning methods [11]: by imitating the process of human learning, labels are ascribed to things to form the concept of things. The statistical learning method is currently the most commonly used and combines well with deep learning. However, this type of method has high requirements for data preparation: in addition to requiring large amounts of data, it also requires a clear division of the iris types, and the limitations of existing iris dataset sizes and situation classifications make it difficult to meet the amount of data required by learning methods under a single deep learning framework; this also makes it difficult to support the establishment of a self-improvement process from forward recognition to reverse correction. In addition, the accumulation of multi-state irises cannot be completed in a short time, which greatly limits the role of statistical learning. Although cognitive learning can reduce the need for data volume, the relationship between iris labels and unknown environments needs further research. Because the hardware configuration of different collectors and the environments in which irises are collected vary at different times, and the algorithm itself has prerequisites for use, the recognition effect of the same algorithm differs substantially in different situations, greatly increasing the design complexity of iris recognition algorithms.
These problems make it difficult to use the iris accurately in one-to-one certification. To solve them, some progress has been reported in current research on the expression and recognition of iris features.
In current research, there are many practically tested pattern recognition frameworks. Existing frameworks have been improved and tested on public iris sets, proving their feasibility; examples include an iris-specific Mask R-CNN [12], deep learning frameworks such as capsule neural networks under lightweight data structures [13], and existing deep learning frameworks specifically for iris recognition, such as DeepIris [14] and DeepIrisNet [15]. In research on the multi-state iris, unsteady-state features are transformed into steady-state features through image processing and other methods [16], and iris features are expressed through multiple recognition methods and weighted fusion [17], or the final result is obtained based on a credibility decision [18]. For setting iris concept labels, research into biomimetic cognition is an important direction; i.e., determining how to first summarize a feasible recognition model from a small number of initial training samples as the number of recognitions and available training samples increases, and how to effectively judge whether the current structure cannot meet the existing situation and requires retraining [19]. A current proposal is to set iris labels using a statistical cognitive learning method on unclassified mixed data [20] and to apply the MiCoRe-Net neural network architecture [21] to eye concept reconstruction. In addition, for label correction in existing neural network architectures, an error correction code-based label optimization method [22] and a feedback mechanism-based label correction method [23] have been proposed, and transfer learning [24] can make up for the lack of data in the dataset. In the study of iris generality, heterogeneous iris recognition is the main research direction [25]. First, by changing the internal structure of the algorithm, the device independence of the iris image is improved [26]. Second, the correction of images (blurring, displacement, etc.) using algorithms [27] improves the feasibility of multi-type iris recognition and further improves environmental independence. In addition, starting from the acquisition sensor, improving the acquisition status during image acquisition and thus the iris quality from the beginning is also a current research direction in iris generality [28].
Previous studies have achieved good results in various experiments, but some problems remain, and thus, there is still a need to improve the accuracy of iris one-to-one certification.
  • The existing structural framework is designed to process a certain type of data, and the design purpose of the framework is not necessarily aimed at iris recognition. When the existing framework is used in iris recognition, inputting the iris data directly into the frame might not achieve a good recognition effect. The unsteady-state iris in an uncertain state is prone to a situation in which the existing framework does not match the input iris data. When the iris data set is limited in size, the situation classification is limited, and altering the existing framework might reduce accuracy, thus greatly limiting the types of existing frameworks that can be used. The question of how to design the framework to better adapt to the iris data must be addressed.
  • In the study of feature expression and universal recognition, most methods normalize the data of different sensors and different environments by using algorithms such that the images can be identified under the same standard. However, not all processes are suitable for all images; this lack of suitability can cause unnecessary calculations and exclude certain images that could be identified but are not considered available irises because they do not meet the standard, possibly resulting in insufficient training of the multi-state iris model. Current research lacks a universal process mechanism for the heterogeneous irises generated by a variety of different devices and environments. Because a single algorithm is prone to omission of multi-state features, many algorithms use multi-source feature expressions, an approach that requires an in-depth understanding of the relationship between different types of features to avoid repeated calculation of highly correlated iris features and to improve the remoteness and discrimination of features.
  • With the limited size of the iris data set and the limitation of situation classification, it is necessary to judge the situation of recognition errors and establish a reverse correction mechanism for the recognition framework process. This mechanism not only avoids mechanical judgment by relying on a large amount of data to generate fixed indicators in statistical learning but also avoids the situation of cognitive loss caused by the inability to form new concepts when unknown situations occur in cognitive learning.
The fault tree of the certification model in the case of certification error is shown in Figure 4.
According to the situation reflected in Figure 4 and the current problems, this paper proposes solutions for the following problems:
  • In the face of the unsteady situation of heterogeneous irises produced by a variety of acquisition sensors and environments, the traditional processing method for a single-source iris based on statistical learning or cognitive learning is not effective;
  • Traditional iris recognition divides the entire process into several statistically guided steps, which cannot solve the problem of correlation among various links;
  • The existing iris dataset size and situation classification constraints make it difficult to meet the requirements of learning methods in a single deep learning framework.

3. Evaluation Module

The whole structure of quality concept fuzzy inference and the process of obtaining the results are shown in Figure 5.
The construction of iris quality concept fuzzy inference is mainly based on a pure fuzzy system structure. A set of iris quality evaluation processes common to different iris libraries is designed to determine the quality of the input image and input the identified recognizable iris into the subsequent recognition model. This system uses the iris quality knowledge concept construction mechanism to build the eye concept knowledge base and quality concept knowledge base, and complete the iris quality concept library. According to the actual requirements, the appropriate concept is selected as the use concept, and the iris that conforms to the selected concept rule is used as the qualified iris for the next iris recognition. This process is called iris quality inference.

3.1. Iris Quality Knowledge Concept Construction Mechanism

An example of an image that may be acquired before the camera is pointed at the eye is shown in Figure 6.
When taking eye images, because live detection is not considered, the difficulty of iris inspection is limited to the image confusion caused before the camera is pointed at the eyes during human eye collection. Therefore, it is necessary to ensure that there is an eye image inside the image.
It can be seen from Figure 6 that it is necessary to establish the concept of the eye, and the purpose of establishing the eye concept knowledge base is to ensure that some form of eye exists in the image. At the same time, a quality evaluation knowledge base is constructed based on iris feature extraction and the demands of the recognition algorithm on iris quality. Iris quality evaluation itself is highly subjective, the number of eye images is limited, and the mechanism must serve different devices and different environments. Therefore, in the iris quality knowledge concept construction mechanism proposed in this paper, people first form a subjective cognitive concept, based on all the initial training samples, from the objective laws of the relative relationships among the gray levels of the various parts of the eye. These logical concepts are then transformed into the digital concepts of computers by means of image processing and data extraction, designing the process so that fixed thresholds are avoided as much as possible, and concept labels and label inference rules are formed.
In the case where the collector is a living person, the formulation of the eye concept and the quality concept is continuously improved under the interaction mechanism between people and the system, transforming human subjective cognition into the digital concept of the computer system to ensure the universality of the quality concept knowledge. The feedback mechanism must dynamically adjust according to the recognition results and the addition of external conditions, so that different feature extraction methods can obtain recognizable irises through the same evaluation reasoning process mechanism.
The operation process of the iris quality knowledge concept construction mechanism is shown in Figure 7.
The operating steps of the iris quality knowledge concept construction mechanism are:
Step 1: Observe the state of the eyes of the training sample held by normal humans, and form the subjective concept of the eyes based on the visual impression of the eyes;
Step 2: Convert a subjective concept into a digital concept label through methods such as image processing and feature extraction;
Step 3: Formulate the inference rules for eye concept labels, combine multiple concept labels to form the concept of eyes, and build an eye concept knowledge base;
Step 4: To allow the unsteady-state iris to be identified by different feature expression and recognition algorithms and to reduce the probability of repeated extraction, design the iris quality knowledge base. First, the subjective concept of iris quality is formed by observing the impressions of the training samples held by normal humans according to the demand for iris quality;
Step 5: Use methods such as image processing and feature extraction to convert subjective concepts about quality into digital concept labels;
Step 6: Formulate inference rules for quality concept labels, combine multiple concept labels together to form the concept of the image of iris recognition, and build the iris quality concept knowledge base;
Step 7: Combine the iris quality knowledge base with the eye concept knowledge base to form the iris quality concept library.

3.2. Example of Iris Quality Concept Fuzzy Inference

This section uses a specific example to show how to use the iris quality knowledge concept construction mechanism to build the iris quality concept library when the iris conditions meet the prerequisites of this paper. In this example, all iris knowledge is selected for iris quality inference to obtain recognizable irises and realize versatility across different iris collectors.

3.2.1. Eye Concept Knowledge Base and Eye Inference

Because the acquisition situation before the eyes are aligned with the camera is unpredictable, and to ensure that the eye concept of the unsteady-state iris can be universal, the establishment of the initial eye concept knowledge base adopts the form of subjective morphological description. First, the eye image needs to be processed to facilitate the transformation of the subjective morphological description of the human eye into the objective data form of the computer. The specific process is as follows:
The first step: process the eye image, and segment the corresponding parts to extract the eye concept:
1. Pupil region segmentation: The pupil is the most critical part of the eye, and the pupil concept is established to facilitate the location of the iris. Therefore, it is necessary to narrow down the pupil region in the image first. The collected iris image is processed by the non-linear function in Equation (1). Equation (1) is as follows:
T = e^{\frac{f(x,y)}{255} - 0.5}
where e represents the base of the natural exponential function, and f(x, y) represents the gray value of the point with coordinates (x, y) in the image.
Because the captured image is a grayscale image, the grayscale range is [0, 255], so the range of the resulting value T is [e^{-0.5}, e^{0.5}], and image segmentation is performed with 1 as the cutoff point, avoiding the difficulty of threshold selection over a large gray range. According to the law of eye gray values, no matter how the external light changes, the gray value of the pupil is generally smaller than that of the iris, sclera, eyelid, and other areas, so the pupil values should lie at the front of the overall range (less than 1); therefore, points whose resulting value lies in [e^{-0.5}, 1) are retained. If most resulting values are greater than 1, or only a few are less than 1, it means that there are no eyes, or that the light on the eyes is too dark or too bright, so that the pupils in the eye image are not obvious. If the image can be segmented by Equation (1), the image itself has no extreme conditions; that is, the gray value difference between the pupil and the iris is relatively normal. Image noise interference is removed through the open operation [29], and a grayscale image with pupils is obtained as the eye discrimination image.
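As an illustration, a minimal Python sketch of this segmentation step is given below, assuming a uint8 grayscale eye image and using OpenCV; setting excluded points to 255 and the 3 × 3 open-operation kernel are assumptions not fixed by the text.

```python
import cv2
import numpy as np

def eye_discrimination_image(gray: np.ndarray) -> np.ndarray:
    """Apply Equation (1) and keep points whose result T lies in
    [e^-0.5, 1); all other points are treated as background.

    gray: uint8 grayscale eye image (e.g., 640 x 480).
    """
    t = np.exp(gray.astype(np.float64) / 255.0 - 0.5)
    # T < 1 corresponds to gray values below 127.5 (pupil candidates).
    out = np.where(t < 1.0, gray, 255).astype(np.uint8)
    # Open operation to remove noise; the 3 x 3 kernel is an assumption.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(out, cv2.MORPH_OPEN, kernel)
```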
2. Binary image of the pupil: Formula (2) is designed to process the eye discrimination image:
f_1(x,y) = \begin{cases} f(x,y) - 0.5z, & f(x,y) > z \\ f(x,y) - 0.5z - \min, & f(x,y) \le z \end{cases}
where z represents the average gray value of all points in the eye discrimination image whose gray value is not 0 or 255; f_1(x, y) represents the result of processing the gray value of the point with coordinates (x, y) by Formula (2); and min represents the minimum gray value (again excluding 0 and 255) among all points in the eye discrimination image. The gray values of the points where f_1(x, y) is less than or equal to 0 are set to 0, and the gray values of the other points are set to 255, producing a binarized image of the pupil that contains a connected area related to the pupil.
Design idea and purpose of Equation (2):
(1) It can be seen from the eye discrimination images (Figure 8) that although the pupils are identified, they are still incomplete due to light and eyelashes. Therefore, in order to better determine the position of the pupil and reflect the pupil rules more accurately, binarization is required.
(2) Lowering the overall gray value of the eye discrimination image, while bringing low gray values closer to 0, does not change the relative relationship between the gray values and makes the pupil site easier to find. The reason for subtracting half of the average gray value is that, for an unstable iris, the specific gray value situation of each image is uncertain, and it is not safe to use fixed parameters; therefore, a non-fixed value related to the nature of the image is needed as the function design parameter. Because the gray value of the pupil itself is relatively small, it will be less than the average value in most cases, but the unpredictable situations of the unsteady iris are complicated, and the gray values of other non-pupil parts cannot be guaranteed. Subtracting the full average value might suppress points larger than the grayscale average too much, affecting the appearance of the pupils. Therefore, only half of the average grayscale value is subtracted in the function design.
(3) For points whose gray value is lower than the average value, additionally subtracting the minimum gray value ensures that the pupil can be separated to the greatest extent. The difference between the pupil and the eyelashes can then be distinguished, and as much of the pupil connected area as possible can be identified.
Therefore, the design idea of Formula (2) is to retain as much of the pupil connected area as possible and eliminate the interference of non-pupil areas.
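A minimal sketch of Formula (2) under the definitions above, assuming the eye discrimination image from the previous step marks excluded points with 0 or 255:

```python
import numpy as np

def pupil_binary_image(disc: np.ndarray) -> np.ndarray:
    """Apply Formula (2) to an eye discrimination image and binarize.

    Points with f1(x, y) <= 0 become 0 (pupil candidates); all other
    points become 255.
    """
    valid = (disc != 0) & (disc != 255)
    z = disc[valid].mean()   # average gray value over valid points
    m = disc[valid].min()    # minimum gray value over valid points
    f = disc.astype(np.float64)
    f1 = np.where(f > z, f - 0.5 * z, f - 0.5 * z - m)
    return np.where(f1 <= 0, 0, 255).astype(np.uint8)
```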
The eye discrimination image and the corresponding binarized image of the images collected by different types of devices after this step are shown in Figure 8 (corresponding to the image of the device of Figure 2).
3. Preliminary determination of the pupil area: The boundary of the pupil connected area is found by performing Canny edge detection [23] and Hough circle detection [24] on the binarized image of the pupil, and the pupil center and radius are determined.
(1) It can be seen from Figure 8 that although the shape of the pupil connected area is irregular and incomplete, it can be regarded as a circle. Therefore, in order to reduce the amount of calculation, existing circle detection technology is used to fit the pupil connected area.
(2) During detection, due to the unpredictability of the unsteady-state iris, the circle radius of the pupil connected area cannot be restricted to a fixed range. The dimension of the images used in this paper is 640 × 480. In order to reduce the computational complexity, the image is reduced to 160 × 120 dimensions by the majority statistical pooling method to form a reduced-dimensional image.
Majority statistical pooling method: The image is divided into 160 × 120 sub-blocks of 4 × 4 points each. The number of points with a gray value of 0 in each sub-block is counted; if it is greater than 8, the block is considered to lie in a small-gray-value region, and the gray value of the corresponding point in the reduced-dimensional image is set to 0; otherwise, it is set to 255. The reason for using 4 × 4 dimensional sub-blocks is that, according to physiological iris features, the radius of the iris is about 1–2 times the pupil radius. Therefore, an iris acquisition image with a width of 480 dimensions can accommodate at most 6 times the pupil radius, that is, a pupil radius of 80 dimensions, which becomes 20 dimensions after being reduced 4 times. Six times twenty is 120, so the 160 × 120 dimensional reduced image can meet the pupil requirements in non-extreme cases.
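A sketch of the majority statistical pooling step as described, assuming a 480 × 640 (rows × columns) pupil binary image:

```python
import numpy as np

def majority_pooling(binary: np.ndarray) -> np.ndarray:
    """Reduce a 480 x 640 (rows x columns) pupil binary image to
    120 x 160 by majority statistics over 4 x 4 sub-blocks."""
    h, w = binary.shape
    blocks = binary.reshape(h // 4, 4, w // 4, 4)
    # Count zero-valued points in each 4 x 4 sub-block.
    zeros = (blocks == 0).sum(axis=(1, 3))
    # More than 8 zeros: the block lies in a small-gray-value region.
    return np.where(zeros > 8, 0, 255).astype(np.uint8)
```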
Because the threshold of circle detection is not fixed in the pupil binarized image, multiple circle centers and radii may be detected in an image. In an unstable iris, the pupil position and radius cannot be completely determined. Therefore, these detected possible circle centers and corresponding radii are used as eye concept label candidate data.
4. Iris connected area: The detected centers and radii are used to make candidate data images from which concept labels are extracted. The positions of the points whose gray value is 0 in the pupil binarized image correspond to the original image, and the gray values of these points in the eye discrimination image are set to 255 to obtain a transition processed image. Because the iris region should be a small-gray-value region in the transition processed image, the same processing is performed on it using Equation (2) to obtain an image containing the iris connected region. The transition processed images corresponding to the images in Figure 8 and the images of the iris connected area are shown in Figure 9.
Using the Daugman rubber band method [30], the detected pupil center is taken as the pole, and the annular area between 1 and 2 times the pupil radius in both the iris connected area image and the original image is transformed into a normalized image of 512 × 64 dimensions; the texture is then highlighted by histogram equalization [31] to form a normalized enhanced image. At the same time, normalization of the iris connected area is also performed. Examples of the normalized enhanced images of Figure 2 and the normalized images of the iris connected area (images of Figure 9) are shown in Figure 10.
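A simplified sketch of this rubber-band normalization, assuming the pupil center and radius from the previous step; nearest-neighbor sampling is an assumption, and the histogram equalization uses OpenCV:

```python
import cv2
import numpy as np

def rubber_band_normalize(img, cx, cy, r, width=512, height=64):
    """Unwrap the annulus between radius r and 2r around (cx, cy)
    into a height x width rectangle (Daugman rubber-sheet model),
    then highlight texture with histogram equalization."""
    out = np.zeros((height, width), np.uint8)
    for i in range(height):
        radius = r + r * i / (height - 1)        # from r to 2r
        for j in range(width):
            theta = 2.0 * np.pi * j / width
            x = int(round(cx + radius * np.cos(theta)))
            y = int(round(cy + radius * np.sin(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[i, j] = img[y, x]
    return cv2.equalizeHist(out)
```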
The image in Figure 10 is used as the basis of the obtained center and radius data to construct knowledge in the eye concept knowledge base. After the collected images are processed, the concept labels in the eye concept knowledge base are constructed, and the concept label rules are developed to convert human subjective concepts into digital concepts and form judgment rules, which together serve as the knowledge reserve. The process is as follows:
1. Pupil: According to the iris connected area image, the inside of the pupil is a white, approximately circular area with a gray value of 255. The circle ranges defined by the detected centers and radii in the iris connected area differ, as shown in Figure 11.
It can be seen from Figure 11 that the gray value of a point inside a circle that can well reflect the pupil range is mostly 255. The design of the conceptual label of the pupil should satisfy this phenomenon.
Therefore, the subjective concept of the pupil is that most points in the pupil satisfying the circled range are white points, and the white points are evenly distributed in each part.
The process of turning subjective concepts into digital concepts is as follows:
(1) In the image of the iris connected area, divide the pupil range circled by a detected center and radius into four equal parts;
(2) Calculate the number of points with a gray value of 255 in each of the four parts, and calculate the proportional value K_i of such points in each part;
(3) Calculate the variance value T of the four proportional values K_i.
The four proportional values K i and the variance value T are the numerical concept labels of the subjective concept.
The concept label rule of the digital concept label is expressed by Formula 3. Formula 3 is as follows:
\sum_{i=1}^{4} (1 - K_i) < \eta, \quad T \le \mu
where η and μ are real numbers close to 0, set according to the collector device and the specific environment.
When the sum of the differences between the four proportional values and 1 is infinitely close to 0 and the variance value is close to 0, it means that there is a pupil-like part in the circled range.
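A sketch of the pupil concept labels and rule (3), assuming the four equal parts are taken as quadrants around the detected center; η and μ remain device-dependent inputs:

```python
import numpy as np

def pupil_concept_labels(iris_conn, cx, cy, r):
    """Compute the four proportional values K_i and their variance T
    for one candidate circle (cx, cy, r) on the iris connected area."""
    h, w = iris_conn.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    # Four equal parts: quadrants around the detected center.
    quads = [inside & (xx >= cx) & (yy >= cy),
             inside & (xx >= cx) & (yy < cy),
             inside & (xx < cx) & (yy >= cy),
             inside & (xx < cx) & (yy < cy)]
    K = [float((iris_conn[q] == 255).mean()) if q.any() else 0.0
         for q in quads]
    return K, float(np.var(K))

def is_pupil(K, T, eta, mu):
    """Concept label rule (3); eta and mu are small positive numbers
    set according to the collector device and environment."""
    return sum(1.0 - k for k in K) < eta and T <= mu
```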
2. Iris: It can be seen from Figure 9 that there is a certain amount of iris around the pupil, so there should be an iris area around an accurately circled pupil. From the perspective of vision, the iris area consists of irregular and uniformly distributed textures which, in an image, tend toward the vertical form; therefore, a Sobel operator that can extract vertical features is used, as shown in Figure 12.
The Sobel operator is used to perform texture highlighting on the normalized image; points with values not greater than 0 are set to 0, and points with values greater than 0 are set to 255. Examples of salient images with and without the iris are shown in Figure 13.
It can be seen from Figure 13 that after the iris image is highlighted again, the points with a gray value of 255 are distributed throughout the highlighted image. Images with no eyes or no iris have fewer point distributions with a gray value of 255, which is used as a subjective concept for the iris.
The process of turning subjective concepts into digital concepts is as follows:
(1) In order to reduce the amount of calculation, the normalized image of the iris connected area is intercepted: the 512 × 64 dimensional image is divided evenly into eight 64 × 64 dimensional images, and the intercepted coordinate points of the image with the smallest gray value among the eight are recorded;
(2) The highlighted image is equally divided into eight 64 × 64 dimensional images, and the image at the same position as the intercepted coordinate points recorded in (1) is selected as the iris detection image. The detection image of the iris part and the detection image of the non-iris part are shown in Figure 14.
(3) The minimum pooling method is used to pool the detection image in 2 × 2 dimensions [27] to form a 32 × 32 dimensional pooled detection image. In order to better reflect the irregular black-and-white form of the iris, the Hamming distance [28] is used for representation: the points with a gray value of 255 are set to 1 and the points with a gray value of 0 are set to 0 to form 32 binary sequences. The Hamming distance between each pair of adjacent sequences is calculated, for a total of 31 values, numbered H_1 to H_31.
The 31 Hamming distances are the digital concept labels for this subjective concept. The concept label rule of the digital concept label is expressed by formula 4, which is as follows:
\sum_{i=1}^{31} (1 - H_i) < \alpha
where α is a real number close to 0, set according to the collector device and the specific environment.
In each row, the 1s are distributed irregularly but are present, and in the ideal situation the Hamming distance between adjacent rows is close to 1. Therefore, when the sum of the differences between the 31 Hamming distances and 1 is close to 0, it means that there is an iris-like part in this area.
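A sketch of the iris concept labels and rule (4), assuming a 64 × 64 detection image; the Hamming distances are normalized to [0, 1] so that the ideal value is close to 1, as described above:

```python
import numpy as np

def iris_concept_labels(detect_img):
    """Compute H_1..H_31 from a 64 x 64 iris detection image:
    2 x 2 minimum pooling to 32 x 32, binarize, then take the
    normalized Hamming distance between adjacent rows."""
    pooled = detect_img.reshape(32, 2, 32, 2).min(axis=(1, 3))
    bits = (pooled == 255).astype(np.uint8)   # 32 binary sequences
    return [float(np.mean(bits[i] != bits[i + 1])) for i in range(31)]

def has_iris(H, alpha):
    """Concept label rule (4); alpha is a small positive number set
    according to the collector device and environment."""
    return sum(1.0 - h for h in H) < alpha
```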

3.2.2. Quality Concept Knowledge Base and Quality Inference

In the example presented in this paper, because of the unpredictability of the unsteady iris, it is difficult to ensure that a single fixed index can evaluate the quality of the iris. Iris images must be excluded only in extreme cases, as many recognizable images as possible must be kept, and the scope of iris recognition must be expanded. In digitizing the concept, based on the premise that there are eyes in the image, and without pre-determining the feature expression and recognition of the unsteady-state iris, a person's intuitive impression is that an image in which the iris can be recognized should not be too blurred or excessively squinted. Therefore, blur and direct-vision measures are designed for iris quality evaluation to exclude irises that cannot be identified, improve the detection rate of unsteady irises, and pave the way for the subsequent expression and recognition of unsteady irises.
The process of transforming subjective concepts into digital concepts and setting rules for concept labels is as follows:
1. Image blur: Determine the center of the pupil as the pole of the polar coordinates, and convert the annular area between 1 and 2 times the radius length in the image into a normalized image of 512 × 64 dimensions. A 256 × 32 dimensional intercepted image is taken from the upper left corner as the blur test image.
The points in the normalized image whose gray value is not 0 or 255 are set as feature points. The image blur degree is obtained according to Formulas (5) and (6). Formula (5) is as follows:
H_i = |F(x_i, y_i + 1) - F(x_i, y_i)| + |F(x_i, y_i - 1) - F(x_i, y_i)| + |F(x_i + 1, y_i) - F(x_i, y_i)| + |F(x_i - 1, y_i) - F(x_i, y_i)|
where F(x_i, y_i) represents the gray value of the i-th feature point with coordinates (x_i, y_i) in the normalized image; F(x_i, y_i + 1) and F(x_i, y_i − 1) represent the gray values of the adjacent points above and below the feature point, respectively; F(x_i + 1, y_i) and F(x_i − 1, y_i) represent the gray values of the adjacent points to the right and left of the feature point, respectively; and H_i represents the sum of the absolute differences between the gray value of the i-th feature point and the gray values of its four adjacent points.
Formula 6 is as follows:
D = \frac{1}{s} \sum_{i=1}^{s} H_i
where D represents the normalized image blur degree, and s represents the number of feature points in the normalized image.
According to the specifications of different collectors, a clarity threshold p is set. When D is greater than or equal to p, the image collected by the collector is clear, and iris image processing can continue; when D is smaller than p, the image is blurred, and iris image processing cannot continue.
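A sketch of Formulas (5) and (6), computed over the interior feature points of the normalized image (the handling of boundary points is an assumption):

```python
import numpy as np

def image_blur_degree(norm_img):
    """Compute the blur degree D (Formulas (5) and (6)) over the
    feature points of a normalized iris image."""
    f = norm_img.astype(np.float64)
    c = f[1:-1, 1:-1]
    # Sum of absolute gray differences to the four neighbors.
    h = (np.abs(f[:-2, 1:-1] - c) + np.abs(f[2:, 1:-1] - c)
         + np.abs(f[1:-1, :-2] - c) + np.abs(f[1:-1, 2:] - c))
    feature = (c != 0) & (c != 255)   # feature points as defined above
    s = feature.sum()
    return float(h[feature].sum() / s) if s else 0.0

# The image is considered clear when D >= p, with p set per collector.
```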
2. Image visibility: With the center of the pupil as the center point, the gray values of the points in the iris grayscale image that are more than 5 times the radius length or less than 1 times the radius length from the center are set to 255, and the gray values of the other points are unchanged. The direct vision of the image is calculated from the four parameters obtained by formula group (7), where the dimensions of the iris grayscale image are M × N, M representing the length and N representing the width (Formula (7)):
G_1 = \frac{1}{N} \sum_{i=1}^{N} |\operatorname{sgn}(f_2(M, y_{1i}) - 255)|, \quad G_2 = \frac{1}{N} \sum_{i=1}^{N} |\operatorname{sgn}(f_3(0, y_{2i}) - 255)|, \quad G_3 = \frac{1}{M} \sum_{i=1}^{M} |\operatorname{sgn}(f_4(x_{1i}, N) - 255)|, \quad G_4 = \frac{1}{M} \sum_{i=1}^{M} |\operatorname{sgn}(f_5(x_{2i}, 0) - 255)|
where f_2(M, y_{1i}) represents the gray value of the i-th point with coordinates (M, y_{1i}) on the rightmost boundary, and G_1 represents the ratio of the number of points with gray values not equal to 255 on the rightmost boundary to the dimension N; f_3(0, y_{2i}) represents the gray value of the i-th point with coordinates (0, y_{2i}) on the leftmost boundary, and G_2 represents the corresponding ratio on the leftmost boundary; f_4(x_{1i}, N) represents the gray value of the i-th point with coordinates (x_{1i}, N) on the lowermost boundary, and G_3 represents the corresponding ratio on the lowermost boundary to the dimension M; f_5(x_{2i}, 0) represents the gray value of the i-th point with coordinates (x_{2i}, 0) on the uppermost boundary, and G_4 represents the corresponding ratio on the uppermost boundary to the dimension M; sgn is the sign function. The direct vision of the image is determined according to Formula (8):
Z_s = \operatorname{sgn}\left( \operatorname{sgn}(G_1 - q) + \operatorname{sgn}(G_2 - q) + \operatorname{sgn}(G_3 - q) + \operatorname{sgn}(G_4 - q) + 3 \right)
where q represents the discrimination threshold, set according to the specifications of different collectors.
Z_s represents the value of image visibility. If Z_s = −1, the image fails the visibility check, and iris image processing cannot continue; if Z_s = 1, the image passes, and iris image processing continues.
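A sketch of formula group (7) and Formula (8) on the masked iris image, with q passed in as the device-dependent discrimination threshold:

```python
import numpy as np

def image_visibility(masked, q):
    """Compute G_1..G_4 (formula group (7)) on the masked iris image
    and the direct-vision value Z_s (Formula (8))."""
    # Proportion of boundary points whose gray value is not 255.
    g1 = np.mean(masked[:, -1] != 255)   # rightmost boundary
    g2 = np.mean(masked[:, 0] != 255)    # leftmost boundary
    g3 = np.mean(masked[-1, :] != 255)   # lowermost boundary
    g4 = np.mean(masked[0, :] != 255)    # uppermost boundary
    s = sum(np.sign(g - q) for g in (g1, g2, g3, g4))
    return int(np.sign(s + 3))           # 1: continue; -1: reject
```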
According to the image blur and image direct vision, the iris quality concept knowledge base (poor quality iris knowledge base in this paper) is formed, and the concept of eyes is combined to jointly build the iris quality concept library in the example.

3.3. Feedback Learning Mechanism of Quality Fuzzy Inference

Uncertainty caused by iris instability, limitations on the cognition of iris quality and digital methods, and restrictions on the number and types of initial irises all lead to inadequate initial concept settings, which lead to incorrect certification results. Under the prerequisites of this paper, the recognition error caused by the iris exceeding the prerequisites of the recognition model is ruled out. In this paper, the cause of the certification error is identified only as a problem in the certification process.
When building the knowledge base, there are some drawbacks for the following reasons:
  • The subjective concept of the human eye is limited by the accumulation of human knowledge. The subjective impressions obtained only by visual observation of the data possessed are not necessarily completely accurate.
  • The method of transforming a subjective concept into a digital concept can only approximately represent the subjective description of the iris in digital form; the chosen means of expression is not necessarily the best or the only one.
  • As the number of acquisitions increases and the external conditions of the acquisition and the needs of the iris change, the perception of the subjective concept of the iris may change.
Because the iris quality evaluation index is highly subjective and the application environment of this paper involves unpredictable unsteady-state irises, it is difficult to accurately modify the quality standard with a traditional function-type judgment method, and only the final recognition result can reflect the performance of the quality concept. Therefore, the feedback dynamic learning mechanism proposed in this paper is initiated every time a misidentification or a new recognition requirement occurs. Its purpose is to analyze the cause of the error and adjust or expand the quality concept, improving the accuracy of the iris quality evaluation, making the iris more adaptable to the recognition method, and thereby improving the accuracy of iris recognition. After the feedback dynamic learning mechanism is initiated, humans with professional knowledge assist in analyzing the causes of the errors and then propose appropriate solutions for improvement so that, except for a very few extreme cases, most images can be correctly identified.
The operating steps of the quality learning system and feedback dynamic learning mechanism are as follows:
  • When it is determined that the quality concept should be adjusted, analyze the existing quality concept labels and each label setting process one by one, check whether the process settings are standardized, and record any irregularities.
  • Modify the parameter settings in the original process appropriately, observe the modified recognition results, and then analyze the relationship between each quality concept label and the standardization of the settings to discover loopholes and record them. Analyze the various recorded situations and modify the existing quality concept labels (the human subjective concepts, the digital expression of subjective concepts, etc.), the quality concept label representation methods, and the image processing processes to correct the quality concept labels.
  • When the recognition environment changes and it is necessary to expand the concept library, analyze the new environment and conditions, add new quality concepts, and expand the quality concepts (by creating new concepts and new digital expression methods).
The revised quality concept inference system requalifies and recognizes the new image and observes the results again. If the recognition is correct, the correction is completed. An incorrect recognition indicates that the correction is insufficient. Repeat steps 1–3 and continue to modify the quality concept until no iris image is available or it is determined that there is an error in the certification module.

4. Certification Module

Under the premise that the amount of data is small and the various states cannot be accurately classified, the iris can only be identified based on mixed data. It is necessary to design a neural network structure that can recognize this situation and that can achieve lightweight feature label setting and heterogeneous authentication in the one-to-one certification case. Therefore, the neural network structure should not be complicated. In this regard, this paper uses a convolutional neural network [32] with a lightweight iris certification layer structure.

4.1. Iris Processing

The images of each stage of positioning are shown in Figure 15. Examples of iris certification areas of different iris libraries are shown in Figure 16.
Iris images must be processed prior to feature extraction. In this paper, the normalized iris image has dimensions of 360 × 64, and the 180 × 32 dimensional area is taken from the upper left corner as the recognition area. In the actual iris acquisition process, under external environmental influences such as illumination, defocusing, and the deflection and change of the collector state, many states of iris presentation might occur. To ensure the efficiency of shooting and recognition, the qualified standard of the iris quality evaluation method for continuous frames is low, and only extreme conditions with poor iris quality are removed.

4.2. Multi-Source Multi-Feature Fusion Mechanism and Heterogeneous One-to-One Certification

A multi-source multi-feature fusion mechanism for iris feature extraction is established. The image of the iris recognition area is processed by a smoothing algorithm and a texture highlighting algorithm, forming three source images. Each iris image passes through 12 layers of an image-processing network consisting of convolutional layers, pooling layers, ReLU layers, and an expansion layer; each image finally forms 15 expanded parameters in the expansion layer, for a total of 45 expanded parameters. The expanded parameters of the three images are combined by average fusion through the feature fusion layer to form 15 recognition parameters, as sketched below.
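A minimal sketch of the average fusion step, assuming the expansion-layer outputs of the three source images are collected into a 3 × 15 array:

```python
import numpy as np

def average_fusion(expanded):
    """Fuse the expansion-layer outputs of the three source images.

    expanded: shape (3, 15), the 15 expanded parameters of the
    original, smoothed, and texture-highlighted images (45 in total).
    Returns the 15 recognition parameters for the certification
    function.
    """
    expanded = np.asarray(expanded, dtype=np.float64)
    assert expanded.shape == (3, 15)
    return expanded.mean(axis=0)   # element-wise average fusion
```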

4.2.1. Feature Expression

The traditional single-angle statistical learning method encounters difficulty with the multi-state iris. Therefore, a multi-source method is used to express the features from multiple angles, applying smoothing filtering and texture highlight filtering. The goal is to find a stable region that can reflect the texture change relationship inside the iris and to suppress the influence of illumination and defocusing on the feature expression as much as possible.
This paper uses Gaussian filtering [33] (smoothed) and the equalization histogram [34] (highlighted) as examples to explain the image-processing network. An example of the filtered image in Figure 16 is shown in Figure 17.
Each image is processed by the same image-processing method. The purpose of this processing is to highlight the iris texture as much as possible so that the recognition parameters reach relatively large values, which facilitates the use of the certification function. Some common convolution kernels can therefore be used to highlight the iris texture. This paper combines the gradient Laplacian convolution kernel [35], the Sobel convolution kernel [36], the gradient convolution kernel [37], and the eight-neighborhood convolution operator with a center weight of nine as an example. A sketch of these kernels in their standard forms follows.
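The exact kernel values used in the paper are given in Figures 18, 20, 22, and 24, which are not reproduced here; the sketch below therefore uses the standard 3 × 3 forms of each named operator as assumed stand-ins.

```python
import numpy as np

# Standard forms of the kernels named above; the paper's exact values
# appear in Figures 18, 20, 22, and 24, so these are assumptions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)   # gradient Laplacian
SOBEL_H = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)   # horizontal Sobel
SOBEL_V = SOBEL_H.T                               # vertical Sobel
GRAD_H = np.array([[ 0, 0, 0],
                   [-1, 0, 1],
                   [ 0, 0, 0]], dtype=float)      # horizontal gradient
GRAD_V = GRAD_H.T                                 # vertical gradient
CENTER9 = np.array([[-1, -1, -1],
                    [-1,  9, -1],
                    [-1, -1, -1]], dtype=float)   # 8-neighborhood, center 9
```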
The specific steps of the image-processing module are listed as follows:
First step: An iris image is input into the first convolutional layer, which uses a gradient Laplacian convolution kernel. After convolution, the image is reduced by 2 × 2 maximum pooling [38] in the first pooling layer to a 90 × 16 dimensional image, and the pooled image is then sparsified by the Softplus function [39] in the first ReLU layer.
The Softplus function is shown in Equation (9):
$\mathrm{Softplus}(x) = \log(1 + e^{x})$
where $x$ represents the gray value of each point of the first pooling image, and $\mathrm{Softplus}(x)$ is the resulting value of each point of the first ReLU image.
The convolutional kernel of the first convolutional layer is shown in Figure 18.
Finally, the result in the first ReLU layer is one processed image. For the original image (taking Figure 2b as an example), an example of the processed image formed in the first step is shown in Figure 19.
Second step: The second convolutional layer uses three convolution kernels: the gradient Laplacian convolution kernel (the same as in the first convolutional layer), the horizontal Sobel convolution kernel, and the vertical Sobel convolution kernel. Convolving the first ReLU image yields three convolved images, which are reduced to 45 × 8 dimensional images by 2 × 2 maximum pooling in the second pooling layer. The second ReLU layer applies the same Softplus function as the first ReLU layer to sparsify the pooled images.
The convolutional kernels of the second convolutional layer are shown in Figure 20.
The result in the second ReLU layer is three processed images. An example of the three processed images formed in the second step (taking the image-processing process in Figure 9 as an example) is shown in Figure 21.
Third step: The third convolutional layer uses five convolution kernels: the gradient Laplacian convolution kernel (the same as in the first convolutional layer), the horizontal and vertical Sobel convolution kernels (the same as in the second convolutional layer), and the horizontal and vertical gradient convolution kernels. Convolving the three second ReLU images yields 15 convolved images. In the third pooling layer, these are reduced to 22 × 4 dimensional images by 2 × 2 maximum pooling, and the third ReLU layer applies the same Softplus function as the first ReLU layer to sparsify the pooled images.
The convolutional kernels of the third convolutional layer are shown in Figure 22.
The result in the third ReLU layer is 15 processed images. An example of the 15 processed images formed in the third step (taking the image-processing process in Figure 11 as an example) is shown in Figure 23.
Fourth step: The images of the third ReLU layer are input into the fourth convolutional layer. The purpose is to sharpen the image edges and enhance local contrast. The eight-neighborhood convolution operator with a center weight of nine, shown in Figure 24, is used to convolve the images.
The processed images are converted into 11 × 2 dimensional images by 2 × 2 maximum pooling, for a total of 15 images. An example of the 15 images formed in the fourth step (taking the image-processing process in Figure 13 as an example) is shown in Figure 25.
The average gray value of each of the 15 images is read and input into the expansion layer, converting the image features into data features; the result of the expansion layer is 15 numbers, which are the expanded parameters of one iris image.
The three iris images (the original image, the smoothed image, and the texture-highlighted image) are processed through the above steps, and their expanded parameters are collected, giving three sets of 15 expanded parameters in the expansion layer. The three numbers at the same position in the three sets are averaged to give the image's recognition parameters, which therefore consist of 15 numbers. A sketch of the whole pipeline, reusing the kernel constants above, follows.
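The four steps and the fusion can be summarized in the following sketch, which reuses the kernel constants defined earlier and stores images as (height, width) arrays; the helper names, the "same"-padding boundary handling, and the truncation of odd dimensions during pooling (45 → 22) are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def softplus(x):
    return np.logaddexp(0.0, x)          # numerically stable log(1 + e^x)

def max_pool_2x2(img):
    # 2 x 2 maximum pooling; odd dimensions are truncated (45 -> 22).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv(img, kernel):
    return convolve2d(img, kernel, mode="same", boundary="symm")

def expanded_parameters(area):
    # Steps 1-4 of Section 4.2.1 for one (32, 180) recognition area;
    # returns the 15 expanded parameters.
    x = softplus(max_pool_2x2(conv(area, LAPLACIAN)))          # step 1
    imgs = [softplus(max_pool_2x2(conv(x, k)))                 # step 2
            for k in (LAPLACIAN, SOBEL_H, SOBEL_V)]
    imgs = [softplus(max_pool_2x2(conv(im, k)))                # step 3
            for im in imgs
            for k in (LAPLACIAN, SOBEL_H, SOBEL_V, GRAD_H, GRAD_V)]
    imgs = [max_pool_2x2(conv(im, CENTER9)) for im in imgs]    # step 4
    return np.array([im.mean() for im in imgs])                # expansion layer

def recognition_parameters(original, smoothed, highlighted):
    # Average fusion of the expanded parameters of the three sources.
    return np.mean([expanded_parameters(a)
                    for a in (original, smoothed, highlighted)], axis=0)
```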

4.2.2. Iris Certification

After obtaining a sufficient number of recognition parameters, category labels must be set for the different categories according to the iris training samples, with the number of training irises set to m. Because these images are all iris components, the obtained recognition parameters can represent the iris features. The parameters of the certification function are set based on the training data currently held. The training irises are used as test objects, and a single category label is set according to the distribution of the data. As the amount of iris data increases, certification error conditions are analyzed; the feedback learning mechanism first determines whether the parameters of the certification function need to be updated and, if so, updates them.
The steps for iris recognition are given as follows:
1. Set the iris information entropy and recognition parameter labels: Calculate the average value $L_i$ of the $i$-th recognition parameter over the training irises, calculate the probability $p_i$ that the $i$-th recognition parameter is less than $L_i$ in all training images, and calculate the information entropy of each recognition node under the two conditions according to Formula (10).
$H_{1i} = -p_i \times \log(p_i)$,  $H_{2i} = -(1 - p_i) \times \log(1 - p_i)$
where $H_{1i}$ represents the information entropy when the recognition parameter is less than $L_i$, and $H_{2i}$ represents the information entropy when it is greater than $L_i$. Finally, $H_{1i} \times p_i + H_{2i} \times (1 - p_i)$ is used as the category label of the iris.
2. Calculate the matching value $HD$ between the template feature information and the test feature information: The objective of the recognition function is to find a category with sufficient discrimination, move the test iris feature as close as possible to the category label, and observe whether it matches.
Each recognition parameter of the test iris is denoted $f_i$ (the $i$-th recognition parameter), and the information offset $Z_i$ of the recognition parameter is calculated according to Formula (11):
$Z_i = \begin{cases} \dfrac{f_i}{L_i} \times H_{2i} \times (1 - p_i), & f_i > L_i \\ \dfrac{f_i}{L_i} \times H_{1i} \times p_i, & f_i \le L_i \end{cases}$
where $Z_i$ represents the deviation of the information content of the test iris's recognition parameter under the respective probability (greater or less than the average $L_i$). Ideally, $f_i$ and $L_i$ would be identical, but in practice they differ, so the information offset is obtained by multiplying the ratio $f_i / L_i$ by the respective probability and information entropy. Formula (12) then calculates the entropy feature $G$ as the sum over all parameters of the ratio of the information offset $Z_i$ to the category label $H_{1i} \times p_i + H_{2i} \times (1 - p_i)$; ideally, $Z_i$ would be 0.5 times the category label:
$G = \sum_{i=1}^{15} \left| \dfrac{2 \times Z_i}{H_{1i} \times p_i + H_{2i} \times (1 - p_i)} \right|$
Although the value of the entropy feature $G$ differs between the categories of the test iris, the values might be relatively similar. Therefore, the entropy feature $G$ is enlarged by the exponential function with base $e$ to form the category label $S$. Based on statistical ideas, we summarize the expanded ranges of the category labels in each category. Because this paper focuses on lightweight iris recognition, different iris categories can be well distinguished even with a multi-range setting of category labels. In Formula (13), $[\alpha_{1i}, \beta_{1i}], \dots, [\alpha_{ni}, \beta_{ni}]$ represent the different category label ranges (the gaps between the ranges are $\lambda_1, \dots, \lambda_n$; these values are not necessarily equal and are set based on the number of irises and the resulting distribution). After enlargement, the entropy feature $G$ of an iris of this category falls within one of these ranges. Entropy features that fall within the ranges are multiplied by 100 to facilitate the calculation of matching values in the sigmoid function and to enlarge the matching values of the same category.
$S = \begin{cases} 100 \times G, & e^{G} \in [\alpha_{1i}, \beta_{1i}] \cup \cdots \cup [\alpha_{ni}, \beta_{ni}] \\ G, & e^{G} \notin [\alpha_{1i}, \beta_{1i}] \cup \cdots \cup [\alpha_{ni}, \beta_{ni}] \end{cases} \qquad HD = \dfrac{1}{1 + e^{-S}}$
After the output layer produces the certification result, the certification threshold is set to L (the specific value depends on the condition of the iris and is greater than 0.9), and HD is compared with L. If HD is greater than L, the test iris is considered to belong to the same person.
The nature of the certification process is judged by the degree of coincidence between the calculated entropy features and the distribution of category labels. $L_i$ and $p_i$ are the basis of all parameters; all parameters in formulas 1–5 are calculated from these values. $L_i$ and $p_i$ are obtained from all the feature data extracted from the training irises, and their impact on certification is reflected in the category label ranges $[\alpha_{1i}, \beta_{1i}], \dots, [\alpha_{ni}, \beta_{ni}]$ of the final training data (the regions into which the expanded entropy features of the same category gather). During testing, if the expanded entropy feature value of the test iris falls within a category label range, the input of the sigmoid function is enlarged accordingly to compute the certification judgment. A minimal sketch of this computation follows.
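Under the reconstruction of Formulas (10)–(13) above, the label statistics and the matching value can be sketched as follows; the function names, the epsilon guard against log(0), and the representation of the category label ranges as a list of intervals are illustrative assumptions.

```python
import numpy as np

def train_statistics(train_params):
    # train_params: (m, 15) recognition parameters of m training irises.
    L = train_params.mean(axis=0)                # per-parameter average L_i
    p = (train_params < L).mean(axis=0)          # probability p_i
    eps = 1e-12                                  # guard against log(0)
    H1 = -p * np.log(p + eps)
    H2 = -(1 - p) * np.log(1 - p + eps)
    label = H1 * p + H2 * (1 - p)                # category label per parameter
    return L, p, H1, H2, label

def matching_value(f, L, p, H1, H2, label, ranges):
    # f: the 15 recognition parameters of the test iris.
    # ranges: category label intervals [(alpha_1, beta_1), ...].
    Z = np.where(f > L,
                 f / L * H2 * (1 - p),           # Formula (11), f_i > L_i
                 f / L * H1 * p)                 # Formula (11), f_i <= L_i
    G = np.abs(2 * Z / label).sum()              # Formula (12)
    in_range = any(a <= np.exp(G) <= b for a, b in ranges)
    S = 100 * G if in_range else G               # Formula (13)
    return 1.0 / (1.0 + np.exp(-S))              # HD, compared with L > 0.9
```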

4.3. Feedback Learning Mechanism of Iris Certification

The feedback learning mechanism of the certification function is based on statistical ideas. As the number of test irises and the number of recognition errors increase, it is determined whether the function parameters should be adjusted. First, we accumulate the certification error conditions to determine whether the recognition parameters need to be modified. As the amount of data and the number of certification errors grow, we modify the parameters in certification functions 1–5 in turn, according to the statistics of the data distribution.
Because of the causal relationships among the parameters of the functions, the $L_i$ and $p_i$ of all the training irises are adjusted first, and then the range intervals $\lambda_1, \dots, \lambda_n$ and $[\alpha_{1i}, \beta_{1i}], \dots, [\alpha_{ni}, \beta_{ni}]$ are modified as $L_i$ and $p_i$ change, achieving dynamic learning. These measures allow the entropy features of the function and the ranges of the category labels to be adjusted dynamically as the number of irises increases, thereby maintaining the current optimal effect.
The specific steps of the feedback learning mechanism are as follows:
  • The number of certification errors within a certain number of tests is counted, and it is determined from the changes in the training data whether the parameters need to be adjusted. This paper assumes that if the correct recognition rate is less than 95% or the number of training irises has increased by more than a factor of five (where the new training set contains the previous test irises), the certification model needs to be updated (see the sketch after these steps).
  • If it is determined that the update of the certification model is not required, the original set parameters and category label range for certification are used.
  • If it is determined that the certification model needs to be updated, $L_i$ and $p_i$ are statistically re-estimated from all existing training data according to the maximum satisfaction principle (satisfying the data distribution as far as possible). $L_i$ and $p_i$ are recalculated from the recognition parameters of all training irises, the certification function group of formulas 1–5 is recalculated from the new $L_i$ and $p_i$ and the new entropy feature $G$, and the distribution of the expanded values is used as the basis for counting the new category label ranges and range intervals.
  • Steps 1–3 are repeated until the certification model can no longer be optimized (e.g., because the training data are exhausted).
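A minimal sketch of the update criterion assumed in step 1 (accuracy below 95%, or a more-than-fivefold growth of the training set) might look as follows; the function and argument names are illustrative.

```python
def needs_update(n_correct, n_tested, n_train_now, n_train_at_last_update):
    # Retrain when accuracy drops below 95% or the training set has
    # grown by more than a factor of five since the last update.
    accuracy = n_correct / n_tested if n_tested else 1.0
    return accuracy < 0.95 or n_train_now > 5 * n_train_at_last_update
```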

5. Experiments and Analysis

Experimental data: All experiments in this paper use the JLU-4.0 (device in Figure 2a), JLU-6.0 (device in Figure 2c), and JLU-7.0 (device in Figure 2e) iris libraries [40]. The key indicators of the three acquisition sensors are shown in Table 1.
The JLU-6.0 iris library was collected with a low-end acquisition device, while JLU-4.0 and JLU-7.0 were collected with advanced devices. As of 2020, the JLU-6.0 original iris library contains more than 100 categories of irises, each with more than 1000 unsteady irises photographed in various states (including the standard morphology set). The JLU-4.0 original iris library contains more than 50 categories, each with thousands of unsteady irises photographed in various states (including the standard morphology set). The JLU-7.0 original iris library contains more than 30 categories, each with more than 1000 unsteady irises taken in various states (including the standard morphology set).
Experimental setup: In these experiments, the computing system included a dual-core 2.5 GHz CPU with 8 GB memory, and the operating system was Windows.
Section 5.1 explains the structural meaning of the method. Section 5.1.1 (the relationship between iris features and iris requirements) explains, from the quality evaluation process, why iris quality affects recognition results. Section 5.1.2 (reasonable structural design description) explains, from the overall design of the certification structure, why this paper did not build on other deep learning frameworks. Section 5.1.3 (structural properties and algorithm independence) demonstrates the rationality of the structural design by replacing the algorithms in the recognition method and by omitting feature fusion. Section 5.1.4 (significance of the certification function) explains the design significance of the certification function. Section 5.1.5 (reasonable setting of the feedback learning mechanism) explains the design significance of the feedback learning mechanism from two aspects: the quality concepts and the design labels.
Section 5.2 describes the performance of the algorithm. Section 5.2.1 (sensor heterogeneous versatility experiment) examines the overall structure of the proposed method for heterogeneous certification. Section 5.2.2 (certification performance) presents the experiment and performance of single-category certification. Section 5.2.3 (time operation experiment) reports the running time of the one-to-one certification process from image collection to result output.
Section 5.3 contains a comprehensive experiment. Under the prerequisites of this paper, the proposed processing structure is compared with existing mechanical processing algorithms and fixed combinations of existing recognition algorithms, illustrating the impact of mismatched iris processing and feature expression recognition algorithms on iris certification, and the advantages of this method over the others.

5.1. Explanation of Structural Meaning

5.1.1. The Relationship Between Iris Features and Iris Requirements

Experimental settings and indicators: Several feature expression and recognition algorithms were listed. According to their requirements on iris quality, the relevant iris concepts were selected from the quality knowledge system of this paper and the literature [41,42]. One category was selected in JLU-6.0, and the irises were evaluated by the corresponding evaluation indicators. The appropriate method was selected to infer the iris images that meet the requirements (each category in the initial iris library contains 1000 iris images of varying internal iris shape, with no prior quality evaluation), and all qualified irises were taken as training irises. Additionally, all of the irises were used as test irises in one-to-one certification tests (to detect overfitting and similar situations). By exploring whether irises that meet the requirements of each feature extraction algorithm are available, the rationality of the iris quality knowledge system designed in this paper can be demonstrated.
The enumeration algorithms are as follows:
  • Fusion method: Iris recognition based on a feature weighted fusion method based on Haar and LBP in [17] and training feature weights through statistical learning;
  • Secondary iris recognition: Multicategory secondary iris recognition based on a BP neural network after noise reduction by principal component analysis, as used in [7];
  • DPSO-certification function: A certification function optimization algorithm based on a decision particle swarm optimization algorithm and stable feature, as used in [16];
  • Statistical cognitive learning: A multistate iris multiclassification recognition method based on statistical cognitive learning, as used in [20];
  • Multialgorithm parallel integration: An unsteady state multialgorithm parallel integration decision recognition algorithm in [18];
  • Capsule Deep Learning: Iris recognition based on capsule deep learning in [13].
In the case of determining that the photographer is in a living state, according to the quality requirements of the iris images of each algorithm, the degree of the quality requirements of the iris indicators of each algorithm is shown in Table 2.
According to the design of the recognition algorithm structure and the nature of each indicator in the knowledge base, the evaluation knowledge selected by the six algorithms is as follows:
  • Fusion method: Clarity uses the secondary inspection method in [41]; effective iris area detection uses the gray histogram distribution detection in [42]; strabismus uses the eccentricity detection in [42]; and confirmation of the presence of eyes uses the eye concept cognition exemplified in this paper.
  • Secondary iris recognition: Clarity uses the secondary inspection method in [41]; effective iris area detection uses the block gray value variance method in [41]; strabismus uses the strabismus detection in [41]; and confirmation of the presence of eyes uses the eye concept cognition exemplified in this paper.
  • DPSO-certification function: Clarity uses the image blur detection exemplified in this paper; effective iris area detection and confirmation of the presence of eyes use the eye concept cognition exemplified in this paper; and strabismus uses the strabismus detection in [41].
  • Statistical cognitive learning: Clarity uses the image blur detection exemplified in this paper; effective iris area detection and confirmation of the presence of eyes use the eye concept cognition exemplified in this paper; and strabismus uses the image direct vision detection exemplified in this paper.
  • Multialgorithm parallel integration: Clarity uses the image blur detection exemplified in this paper; effective iris area detection and confirmation of the presence of eyes use the eye concept cognition exemplified in this paper; and strabismus uses the image direct vision detection exemplified in this paper.
  • Capsule deep learning: Clarity uses the sharpness test method in [42]; effective iris area detection uses the block gray value variance method in [41]; strabismus uses the strabismus detection in [41]; and confirmation of the presence of eyes uses the eye concept cognition exemplified in this paper.
All the indicators have met the qualification standards. Table 3 shows the qualifications and certifications of the six methods.
Table 2 and Table 3 show that different feature extraction methods have different quality requirements for irises. After appropriate quality conditions were selected according to those requirements, the qualified irises were subjected to a rigorous one-to-one certification test, which found that most of them could be used. This test proves that setting quality evaluation conditions specific to the feature expression and recognition algorithm is reasonable.
Additionally, the sample knowledge base and other quality evaluation standards are used in this experiment. According to the quality requirements of different situations and different methods, different types of quality knowledge are selected for quality reasoning, thus showing that the iris quality evaluation standard is not a fixed scale. The standard should be expanded and revised as external knowledge increases. Therefore, the feedback dynamic learning mechanism designed in this paper is reasonable and meaningful. The mechanism can dynamically modify and expand the knowledge base according to the changes in iris quality requirements and prevent the omission of identifiable irises because of fixed indicators.

5.1.2. Reasonable Certification Structural Design Description

Experimental setup: In this section's experiment, the CNN architecture of this paper (CNN-special certification function) and a variety of deep learning architectures are used to perform one-to-one single-category certification experiments; this explains why the CNN framework of this paper was adopted and identifies the prerequisites and innovations of this paper.
For iris localization and normalization, we adopted a method in which the normalized size of the iris recognition area is 180 × 32 dimensions, and quality confirmation was ensured to have no effect on iris certification. The test and training irises in the three iris libraries were the same across architectures. The number of iris categories and the number of irises in a single category are shown in Table 4. The training and testing irises met the prerequisites of this paper, excluding extreme cases of interference. For the multi-source features in the experiment, Gaussian filtering (smoothing) and histogram equalization (highlighting) were used as image-processing algorithms, and the convolution kernels of the image processing were the example kernels of this paper.
The comparative deep learning architectures are as follows:
1. Faster R-CNN Inception Resnet V2 (FRIR-V2) in [43];
2. VGG-Net in [44];
3. DeepIrisNet-A in [15];
4. DeepIris in [14];
5. DeepIrisNet in [24];
6. Alex-Net in [45];
7. ResNet in [46];
8. Inception-v3 in [47];
9. CNN with self-learned features (CNN-self-learned) in [48];
10. Fully convolutional network (FCN) in [49];
11. DBN-RVLR-NN in [50].
The certification statuses of all 11 deep learning architecture methods are shown in Table 5.
As one of the most popular research tools, deep learning is of great significance in pattern recognition, and many deep learning architectures built around the CNN achieve good results. However, the application of deep learning to iris recognition is relatively recent, and the existing deep learning architectures have the following problems:
  • An existing structural framework is designed to process a certain type of data, and its design purpose is not necessarily iris recognition. Therefore, when an existing framework is used for iris recognition, inputting iris data directly into it may not achieve a good recognition effect;
  • At present, there is no detailed definition of iris features. The expression of iris features is usually determined after the iris is converted from an image into digital form, which places certain requirements on the internal feature transformations of the deep learning architecture. When faced with multi-sensor irises whose acquisition status is unpredictable, many frameworks cannot adapt, which lowers the accuracy of identifying irises from different acquisition sensors;
  • Deep learning architectures place high demands on data volume and situational classification. In addition, for a given deep learning framework, computing power is strongly tied to the computing equipment, making it difficult for many low-end devices to run deep learning architectures.
In actual iris recognition research, there will not be much iris data; only lightweight data will be present. Moreover, even for the same person's iris, because the collector is unguided and the acquisition sensors differ, the same deep learning architecture does not learn the iris category sufficiently and performs differently on different sensors. Expensive equipment and cumbersome training processes hinder the promotion of iris recognition technology, so it is necessary to design a simple certification architecture for such situations.
From the analysis of the data in Table 5, among the commonly used deep learning architectures, this paper improves on the basic CNN architecture, which is the most suitable scheme under the prerequisites of this paper. The VGG-Net, Alex-Net, ResNet, and Inception-v3 architectures are not designed specifically for iris recognition. The data therefore need to be converted into a form the architecture can recognize and classified according to acquisition status, and recognition accuracy improves only when a large amount of data is used for training. Because these conditions do not match the prerequisites of this paper, their recognition accuracy is lower than that of this paper's method.
The DeepIrisNet-A and DeepIris architectures are specialized iris recognition frameworks comprising eight convolutional layers (each followed by a small normalization layer), four pooling layers, three fully connected layers, and two dropout layers. These architectures achieve good results on existing public iris libraries. However, the prerequisite of this paper is a small-sample, multi-state training set; the number of iris samples and classifications cannot meet the needs of large-scale iris recognition model training, so their recognition accuracy is low. The FCN and DBN-RVLR-NN architectures are improvements of existing architectures that refine learning and feature extraction, thereby improving the recognition effect. However, they impose certain requirements on the images, so feature learning on multi-state images easily misses information, and their recognition accuracies are lower than that of this paper's method.
To use the same iris recognition model for different irises, the transfer learning iris recognition framework DeepIrisNet is applied to lightweight training samples to handle the situation in which the data set is not large enough; improving the recognition structure through transfer learning allows effective training with lightweight samples and improves certification accuracy. The Faster R-CNN Inception Resnet V2 architecture is an advanced convolutional neural network model that allows training with lightweight data, reducing the difficulty of training. The reason their certification effect is still not as good as the structure of this paper is that the prerequisite of this paper is an unpredictable multi-state iris: compared with traditional public iris libraries, the experimental iris morphology is changeable and difficult to classify by situation, so training can only be performed on mixed data, which reduces the training effect.
Under the prerequisites of this paper, when only a small amount of data mixing irises from different acquisition states is available, and the same structure must work for different acquisition sensors, the structure of this paper focuses on the design of the later certification function. With convolution kernels that highlight the edges of the iris texture as much as possible, and considering the small amount of data and the few categories to be distinguished, a non-linear function is used to expand the entropy features according to the data distribution, ensuring that different categories do not intersect and making the certification feature labels of the same class unique. Therefore, although this paper only uses an improved typical CNN structure, under these prerequisites it can still outperform other deep learning architectures.

5.1.3. Structural Properties and Algorithm Independence

Experimental setup in this section: The knowledge base of the quality evaluation concept system is maintained under a mechanism that can be expanded and modified. The experiments in this section therefore verify the algorithm independence of the recognition model structure by replacing the multi-source image-processing algorithms of this paper, so as to examine the relationship between the structural design and the specific algorithms. Part 1 performs feature extraction using only the original image, without smoothing or highlighting. Part 2 replaces the original Gaussian filtering and histogram equalization with median filtering [51] (smoothing) and the Laplace operator [52] (highlighting); these two operations were chosen because they are also common smoothing and highlighting filters. These two experiments explain the design significance of the multi-source feature fusion of this method and show that the mechanism is independent of the specific algorithm. The convolution kernels of the image processing were the example kernels of this paper.
In this section's experiment, the same iris categories as in the certification experiment were selected, each with 2000 initial training irises (the quality evaluation indicators are those exemplified in this paper). The test and training irises of each category, photographed in the same state, met the prerequisites of this paper and excluded extreme interference. The comparative test irises all came from the experimental iris library, and the number of test irises was 1000 (all passed the quality assessment of this paper, so quality does not affect the certification). The results of the first-part and second-part experiments are shown in Table 6.
It can be observed from Table 6 that, after the algorithms are replaced, the overall certification effect does not fluctuate greatly, whereas when only the original image is used, the certification effect is greatly reduced. It can therefore be concluded that the multi-source feature fusion mechanism of this method improves the recognition effect, but the effect has little correlation with the specific image-processing algorithm. The multi-source image-processing mechanism achieves feature fusion through smoothing, texture highlighting, and numerical averaging of the original image. The reason for this design is that, for an unsteady iris, the appearance status of the iris feature points cannot be determined. If the training irises are fixed in one state, matching the test iris with the template iris becomes more difficult (e.g., the training iris is a defocused image with blurred texture, whereas the test iris is a clear image with prominent texture, causing large differences between the values after convolution processing). Although training on a large amount of data could compensate for this shortcoming, the unpredictable states make it difficult to cover all cases when the data set is incomplete, and the edges of different categories easily intersect.
Therefore, this method adopts a neutralization approach. By averaging the values of the original image, the smoothed image, and the texture-highlighted image, the values after convolution processing are concentrated to a certain extent, which creates conditions for the entropy feature clustering of the subsequent recognition function and avoids the adverse effects of highly scattered data on clustering. This mode does not place overly high requirements on the image-processing algorithm itself; therefore, even if the image-processing algorithm is replaced, the overall recognition effect is not affected.

5.1.4. Significance of Certification Function

Experimental setup in this section: One training iris category in each of the three iris libraries was taken as an example. The changes of each parameter between 100 and 500 training images were compared to explore the parameter selection rule of this paper. The $L_i$ and $p_i$ of the 15 recognition parameters for 500 and 100 training images are shown in Table 7. The information entropies $H_{1i}$ and $H_{2i}$ of the 15 recognition parameters for 500 and 100 training images are shown in Table 8. The 15 category label values of the three iris libraries are shown in Table 9. The recognition parameters of the example training iris are shown in Table 10. The information offsets $Z_i$ of the example training iris are shown in Table 11. The entropy feature $G$ and the enlarged value $e^{G}$ are shown in Table 12.
From the data changes in Tables 7–12, it can be seen that the expanded entropy feature values finally obtained from the same recognition parameter differ for different numbers of training irises. Because the iris defined in this paper is a constrained iris with extreme collection situations eliminated, the quality of the iris itself is treated as a digital representation of features. Therefore, even with different numbers of training irises, the underlying distribution allows the entropy feature values to cluster into certain regions.
To distinguish different features, this method analyzes the distribution of the expanded entropy feature values of all training irises and, provided the data form a certain degree of clustering, selects appropriate category intervals to mark a fixed category. In calculations based on different category labels, there must be disjoint regions for different categories. For example, in the same-category certification of category A, the expanded entropy feature value computed from the category label of A can only fall in the category interval B. An iris not in category A can be excluded even if its expanded entropy feature value lies in interval B, because the basis of its calculation is not the category label of A. Through this mode, different categories can be effectively distinguished in lightweight same-category certification, and the accuracy of one-to-one certification in the same category is guaranteed.
The data difference within the same category is small, so closer clustering can be achieved. However, due to the limited number of recognition parameters, the difference in entropy features is not obvious in certain cases. Therefore, the entropy feature values are enlarged by the exponential function with base $e$, greatly expanding the differentiation among iris categories and the performance of one-to-one certification. The entropy features of irises of the same category are mostly concentrated in a certain region; after enlargement, because of the multi-state iris, the values concentrate in different regions, and even relatively close entropy features can be expanded into a large gap. Because the prerequisites of this paper allow the tester to retake the iris image, this paper uses the multi-range method: with reasonable range intervals, the enlarged values of different iris categories under the same category label can be effectively distinguished without coinciding, which greatly improves the certification accuracy.

5.1.5. Reasonable Setting of the Feedback Learning Mechanism

Reasonable setting of quality concept

Experimental setup: In this section's experiment, the quality of the iris images in one category of each of the three iris libraries was evaluated (1000 images per iris library, without prior classification; the iris normalization dimension is 180 × 32). Using the multi-indicator mechanical determination method in [41], three indicators were selected: sharpness, squint, and effective iris area. Quality was evaluated first with one indicator at a time and then with all three together. The qualified iris images were converted into 512-bit binary feature codes by Haar wavelet, irises of the same category in the same iris library were matched with each other by Hamming distance, and the problems of mechanical determination indicators were analyzed through the recognition results of the same-category matching. A minimal sketch of this matching step follows.
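As a point of reference, same-category matching of the 512-bit binary codes by normalized Hamming distance can be sketched as follows; this is a generic illustration of the comparison step, not the exact implementation of [41].

```python
import numpy as np

def hamming_distance(code_a, code_b):
    # Normalized Hamming distance between two 512-bit binary codes
    # (arrays of 0/1 values): the fraction of disagreeing bits.
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    assert code_a.shape == code_b.shape == (512,)
    return float(np.mean(code_a != code_b))
```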
The results of the quality evaluation and recognition of the four cases in the experiments in this section are shown in Table 13, where:
Case 1: only the sharpness indicator is used; Case 2: only the squint indicator is used; Case 3: only the effective iris area indicator is used; Case 4: the three indicators are used together.
From the previous experiment, it is known that the iris state affects feature expression and recognition, so the absolute accuracy value is not the focus; the observation index of this experiment is the change in accuracy across cases. When a single indicator is used for evaluation, more images qualify than with multiple indicators, but the accuracy rate is lower. For an unsteady iris, many situations affect iris quality, and they result from a combination of factors; it is difficult to judge quality completely with a single or mechanical indicator. Moreover, because the match between image quality and the recognition algorithm is weak, the selected "qualified images" may not suit the feature extraction algorithm. These two points keep the final accuracy rate low. Therefore, this paper centers on the feature extraction and recognition methods, takes cognition as the lead, and formulates a quality evaluation mechanism that can be modified and expanded according to the external environment. The image quality is conceptualized so that it adapts to different algorithms as much as possible, and the feedback mechanism allows flexible adjustment to specific circumstances. This approach is more flexible and more reasonable and can effectively solve the problem of the reverse modification of concepts during quality evaluation.
The significance of the setting of the feedback mechanism of the certification module
Experimental setup in this section: After ensuring that iris quality did not affect the recognition results, one category was selected from each iris library, and the certification function designed in this paper was used for iris certification. The convolution kernels of the image processing were the example kernels of this paper. The growth trend for 200 test irises (experimental test images and training images) without a feedback mechanism, when the training sets consisted of 10, 100, 200, and 500 images in the three iris libraries, is presented in Figure 26.
In Figure 26, the abscissa represents the serial number of the test image, and the ordinate represents the number of correctly recognized irises. The feedback learning mechanism dynamically modifies the certification function (the averages $L_i$ and probabilities $p_i$ of the recognition parameters and the ranges of the category intervals). One category of iris was selected from each of the three iris libraries.
Although the design of the certification function guaranteed the accuracy of the iris certification results, Figure 26 shows that when the number of training irises was small and fixed, and the number of test irises greatly exceeded it, the accuracy rate tended to decrease. An insufficient number of training irises made the selected category label ranges inadequate, so the information entropy values could not be guaranteed to remain valid. It was therefore necessary to increase the training data in time to adjust the recognition model, but the user needed to know when the model required adjustment so as to avoid unnecessary updates. Thus, a feedback learning mechanism was necessary.
Table 14 shows examples of different amounts of training, numbers of iris recognitions, and updated examples of the certification situation. This paper assumes that if the correct recognition rate is less than 95% or if the number of trained irises is increased by more than five times (where the new training iris contains the previous test iris), then the model needs to be updated. After judging that the certification model needs to be updated, the numerical changes of the parameters of the iris category range under different numbers of training iris are shown in Table 15 (the range of iris categories is chosen as an example because the changes in the average and probability values can ultimately be reflected by the range of iris categories).
The feedback learning example in Table 14 shows that the number of training irises affects the accuracy on large-scale test sets, so dynamic adjustment is necessary. The number of training irises should ensure the correct recognition rate over a certain number of test irises, and the designed feedback learning mechanism can prompt the user to add training data and adjust the recognition model (adjusting the averages and probabilities and adding new category interval ranges) so that it always maintains a high accuracy rate; this compensates for the difficulty of dynamically adjusting a convolutional neural network structure trained on lightweight data.
Because $L_i$ and $p_i$ are adjusted according to all the training irises held, these two values change constantly as the amount of training data increases, and the changes propagate to the entropy features and their expanded values. Table 15 shows that as the number of training irises increases, the category label distribution area (the region in which the expanded entropy feature values gather) keeps growing. Because new clustering areas continually appear, the parameters of the certification function must keep changing with the number of irises; every parameter setting is determined by the number of existing irises. For a multi-state iris, it is difficult to cluster all images into a small area with simple entropy features alone. The expansion by the non-linear function greatly enlarges the distinction between different iris categories, preventing the clustered areas of different categories from overlapping and thereby improving the certification accuracy.

5.2. Certification Method Performance Experiment

5.2.1. Sensor Heterogeneous Versatility Experiment

Experimental settings and indicators: In this section's experiment, five categories from each of the three iris libraries were used as verification data, and the parameters of each category were trained on 500 training images qualified by the quality evaluation system (without new feedback learning adjustments). For the one-to-one certification of each category, separate collection tests were run in which one iris was directly collected per attempt for individual matching; the tester's iris was collected without posture-correction guidance or normal shooting conditions. One-to-one certification of each category within the same iris library was selected for comparison. For the multi-source features in the experiment, Gaussian filtering (smoothing) and histogram equalization (highlighting) were used as the image-processing algorithms. The judgment indicators for this experiment are listed as follows:
  • The number of iris acquisitions needed for each category of each iris library to reach 100 accurate certifications (divided into the number of repeated acquisitions due to quality evaluation problems and the number caused by system false rejection, where quality detection found no problem).
  • The number of certification errors that occurred before each category of each iris library reached 100 accurate certifications.
The results of the heterogeneous universality experiment are shown in Table 16.
Table 16 shows that when the same structure is used for iris certification across multiple iris libraries, the false acceptance rate is extremely low, and redundant shooting is essentially caused by quality non-compliance and false rejection (the collector being deemed not to belong to the template iris category). This is because, when repeated shooting is allowed, the quality evaluation is based on the actual shooting conditions of each iris, and the subjective quality concepts are digitally expressed in a suitable way rather than through fixed indicators. The example quality evaluation indicators of this paper take the differences between iris libraries into account and are based largely on non-thresholding processes, so the three libraries can use the same indicators. Each parameter of the iris recognition function is based on the training irises, and the entropy feature expansion greatly dilutes the features of different people, which raises the possibility of parameter values of different people crossing over within a range interval. To improve the model, the samples must be differentiated, the dilution range of the certification function's feature values must be narrowed, and the matching values of the same category must be increased as much as possible while those of different categories are reduced. Therefore, in the formal test, because the quality evaluation conditions are broad and the range intervals compress the range of some parameters, the user sometimes needs to retake an image, but the overall number of retakes is small and within an acceptable range. In addition, although incorrect certifications are very rare, the feedback learning mechanism of this paper can handle new situations, adjust problems in the certification structure in time, and improve the certification accuracy.
From these results, it can be concluded that the overall recognition structure designed in this paper operates effectively across different iris libraries with good heterogeneity and universality, and the overall differences between libraries are small.

5.2.2. Certification Performance Experiment

Experimental indicators: The multi-source features in the certification performance experiments used Gaussian filtering (smoothing) and histogram equalization (highlighting) as the multi-source image-processing algorithms. The evaluation indicators included the correct recognition rate (CRR) and the ROC space (curve), including the false positive rate (FPR), true positive rate (TPR), and area under the curve (AUC) [53]. The ROC space (curve) takes the FPR as the x-axis and the TPR as the y-axis. Because the prerequisites of this paper guarantee accuracy and allow users to re-shoot, false rejections were also counted as true results.
Experimental setup in this section: Five categories were selected from each of the three iris libraries, and the parameters of each category were trained on 2000 training images qualified by the quality evaluation (without new feedback learning adjustments; iris quality does not affect certification). The convolution kernels of the image processing were the example kernels of this paper. During the test, one-to-one certification was performed within the same category and across different categories (originating from the same iris library, with the test and training irises disjoint). Table 17 shows the recognition times of each category in the different iris libraries. The threshold range of the ROC curve in this experiment was [0.5, 1] with an interval of 0.01; a minimal sketch of this computation follows. The ROC curves for single-category recognition of each category in the three iris libraries are shown in Figure 27.
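A minimal sketch of how the ROC points can be computed over this threshold grid is given below; the variable names and the simple accept-when-HD-exceeds-threshold rule are illustrative assumptions, and the sketch does not model the paper's convention of counting false rejections as true results.

```python
import numpy as np

def roc_points(hd_same, hd_diff):
    # hd_same: HD matching values of genuine (same-category) pairs;
    # hd_diff: HD matching values of impostor (different-category) pairs.
    thresholds = np.arange(0.50, 1.00 + 1e-9, 0.01)
    tpr = np.array([(hd_same > t).mean() for t in thresholds])
    fpr = np.array([(hd_diff > t).mean() for t in thresholds])
    return fpr, tpr  # AUC can then be estimated as abs(np.trapz(tpr, fpr))
```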
Figure 27 shows that, provided the number of training irises is guaranteed, although the certification effect differs between categories and iris libraries, the overall area under the curve (AUC) is significantly larger than 0.5, indicating that the accuracy of single-category certification is high for each category and that the method has good heterogeneity. With sufficient data, the method has good predictive value.
From the results in Table 17 and Figure 27, due to the unpredictability of the multi-state irises acquired by different sensors, the feature appearance between iris images might differ notably, which is reflected probabilistically in the amplitude values and their distribution after edge detection. The limitations of the iris data set's scale and situational classification make a large-scale deep learning architecture unusable. The certification function of this method is based on the current iris data when performing one-to-one certification: the same category labels and recognition parameters are clustered within a category, the parameters are differentiated according to the probabilities of different amplitude values, and features are extracted in the form of information entropy to find, as far as possible, the connections between images of the same category. Iris features of the same category can thus be grouped together, while features of different categories cannot aggregate. With the expansion and dilution of the non-linear function, the clusters of the same category in the multi-state iris are strengthened by setting multiple label ranges. Compared with a mechanical single-threshold judgment, the recognition range is larger and the structure is more flexible.

5.2.3. Time Operation Experiment

Experimental indicators: Under the prerequisites of this paper, images were selected from the JLU-4.0, JLU-6.0, and JLU-7.0 iris libraries to constitute the experimental iris library. The classification thresholds of the three algorithms in the recognition model were trained on 1000 training iris images (50 categories selected from each iris library, with 20 images per category). The test set consisted of series of 10 additional images of the same categories (1000 per iris library). The test irises did not pass through the quality judgment; all irises went through the entire iris recognition process, and the certification time of the whole process was measured.
The continuous certification times (in milliseconds (ms)) of the test irises and the certification situation are shown in Table 18.
It can be seen from Table 18 that, after 1000 consecutive one-to-one same-category certification experiments, the running times of the entire iris certification process in the different iris libraries are within 6000–6500 ms. With the front-end preprocessing completed, the certification time of a single image was 6–7 ms, which meets practical working requirements.

5.3. Comprehensive Experiment

Experimental setup: In this section's experiment, the overall process structure of this paper is compared with combinations of various existing quality evaluation and recognition methods (localization and normalization use this paper's method, and the normalized size of the iris recognition area is 180 × 32 dimensions). The certification algorithms in the three iris libraries were trained on the qualified images detected by their respective quality evaluation algorithms (the training irises are not classified by state and are mixed data). The test irises were the initially photographed irises, without passing a quality evaluation algorithm. The number of iris categories and the number of irises in a single category are shown in Table 19.
The training and testing irises met the prerequisites of this paper, excluded extreme-situation interference, and were completely different from one another. During the experiment, quality evaluation was performed first; the irises deemed recognizable by the quality evaluation method underwent same-category one-to-one certification, the irises deemed unrecognizable were likewise matched through the recognition algorithm in same-category one-to-one certification, and the corresponding proportions were calculated to analyze the experimental results. For the multi-source features in the experiment, Gaussian filtering (smoothing) and histogram equalization (highlighting) were used as image-processing algorithms.
The experimental evaluation indicators in this section include: the number of irises deemed recognizable by the quality evaluation method; the number of correctly identified recognizable irises and the corresponding correct certification rate (the number of correctly identified recognizable irises / the number of irises deemed recognizable by the quality evaluation method); and the number of correctly identified unrecognizable irises and the corresponding correct certification rate (the number of correctly identified unrecognizable irises / the number of irises deemed unrecognizable by the quality evaluation method).
The overall certification structure of this paper (Case 0) is compared with the following 12 algorithm combinations:
Mechanical quality evaluation indicator + traditional iris recognition algorithm:
  • Case 1: Qualified iris evaluation indicators (biological detection, sharpness, effective area, strabismus, and transboundary) in [42] and iris recognition with the image processed by the LOG operator based on Gabor filter optimization and Hamming distance in [6].
  • Case 2: Qualified iris evaluation indicators (biological detection, sharpness, effective area, strabismus, and transboundary) in [42] and iris recognition based on feature weighted fusion in [17], which trains feature weights through statistical learning.
  • Case 3: Inference engine system (sharpness, effective area, strabismus, and transboundary) of quality-qualified iris evaluation indicators in [41] and a certification function optimization algorithm based on the decision particle swarm optimization algorithm and stable features in [16].
Mechanical quality evaluation indicator + deep learning framework recognition algorithm:
  • Case 4: Qualified iris evaluation indicators (biological detection, sharpness, effective area, strabismus, and transboundary) in [42] and iris recognition and prediction based on multi-view learning classifiers in [54].
  • Case 5: Inference engine system (sharpness, effective area, strabismus, and transboundary) of quality-qualified iris evaluation indicators in [41] and concept cognition based on the deep learning neural network in [11].
  • Case 6: Inference engine system (sharpness, effective area, strabismus, and transboundary) of quality-qualified iris evaluation indicators in [41] and an unsteady iris one-to-one certification method based on statistical cognitive learning in [20].
Quality evaluation fuzzy inference + traditional iris recognition algorithm:
  • Case 7: Quality evaluation fuzzy reasoning system (example indicators in this paper) and iris feature representation based on the fractal coding method in [55].
  • Case 8: Quality evaluation fuzzy reasoning system (example indicators in this paper) and iris features extracted with the histogram of oriented gradients (HOG) and certified by a support vector machine (SVM) in [56].
  • Case 9: Quality evaluation fuzzy reasoning system (example indicators in this paper) and feature extraction based on the scale-invariant feature transform (SIFT) in [57], with recognition based on SVM.
Quality evaluation fuzzy inference + deep learning framework recognition algorithm:
  • Case 10: Quality evaluation fuzzy reasoning system (example indicators in this paper) and iris recognition based on the cognitive internet of things (CIoT) identified by multi-algorithm methods in [58].
  • Case 11: Quality evaluation fuzzy inference (example indicators in this paper) and the iris recognition method based on the iris-specific Mask R-CNN in [12].
  • Case 12: Quality evaluation fuzzy inference (example indicators in this paper) and an iris recognition method based on error correction codes and convolutional neural networks in [22].
In the comprehensive experiment, the numbers of qualified irises in the three iris libraries, as determined in three different ways, are shown in Table 20. The recognition results of Case 0 and the 12 comparison cases in the three libraries are shown in Table 21.
From the experimental results in Table 20 and Table 21, it can be seen that the results of the three iris libraries are relatively consistent. The design of the overall certification structure has a great impact on iris certification: when the quality evaluation and the recognition algorithm do not match, the certification accuracy is low. An iris judged unrecognizable by one method is not necessarily unusable by another certification algorithm. The overall structure of this paper achieves the highest certification accuracy on the detected unsteady irises and the lowest certification accuracy on the irises it judges unrecognizable.
Analysis of the experimental results in Table 20: Quality evaluation methods that use mechanical threshold indicators to detect qualified images depend on the form of a template iris, and their process is intended to find iris images similar to that template. Compared with the fuzzy system dominated by the unqualified-knowledge system in this paper, their flexibility is insufficient; therefore, the number of irises judged recognizable by the quality evaluation method in this paper is larger. Because the irises judged unrecognizable by the example system in this paper are selected based on the poor-quality knowledge concepts of unsteady irises, it is rare for such an unrecognizable image to still yield a correct result when recognized. The other two comparison methods, however, judge against eligibility criteria, so recognizable irises (irises matching the feature extraction and certification algorithm) may be mistakenly eliminated. The fuzzy system in this paper can be adjusted through the feedback dynamic learning mechanism according to external conditions and other methods, so the overall structure is more flexible.
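To make the contrast concrete, the sketch below compares a mechanical threshold with a fuzzy membership grade for a single indicator; the trapezoidal breakpoints (0.3 and 0.6) are invented for illustration and do not reproduce the paper's membership functions or rule base:

```python
def threshold_qualified(sharpness, cutoff=0.6):
    """Mechanical indicator: a single hard cut-off."""
    return sharpness >= cutoff

def poor_quality_membership(sharpness, low=0.3, high=0.6):
    """Degree of membership in a 'poor quality' concept (assumed trapezoidal shape)."""
    if sharpness <= low:
        return 1.0
    if sharpness >= high:
        return 0.0
    return (high - sharpness) / (high - low)  # linear transition between breakpoints

# An iris with sharpness 0.55 fails the hard threshold outright, yet its
# poor-quality membership is only about 0.17, so a rule base can still admit
# it when the other indicators compensate.
```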
In summary, we can conclude that different iris feature extraction and recognition methods impose different requirements on iris quality, so quality requirements need to be designed specifically for each feature extraction and recognition method. The certification architecture of this paper considers the connections among the different links, which avoids the low adaptation among links caused by mechanically dividing the whole process and solves the problem of correlation between the links. At the same time, it attends to the universality of the quality-concept setting process and the certification structure, so that they are independent of equipment and environment. In addition, knowledge cognition is the main factor, and the concept of each component is established to prevent the insufficient learning that a fixed threshold imposes on the different states of the unsteady iris. To address the lack of training data and of training-data classification, the features of the iris are expressed through multiple sources, and the existing single deep learning architecture model is improved. A new certification function was designed to improve the discrimination between categories, thereby improving the one-to-one certification accuracy of the lightweight certification module and its capability for heterogeneous iris certification.
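As a worked illustration of the entropy-based category labels: the values in Tables 7–9 are consistent with H1_i = −p_i ln p_i, H2_i = −(1 − p_i) ln(1 − p_i), and a label of p_i·H1_i + (1 − p_i)·H2_i (for example, p_i = 0.6 yields H1_i = 0.306495, H2_i = 0.366516, and a label of 0.330504). The sketch below reproduces these numbers; it is our reconstruction from the tables, not code stated by the authors.

```python
# Reconstruction of the entropy terms behind the category labels in Tables 7-9.
import math

def entropy_label(p):
    h1 = -p * math.log(p)                # H1_i, as tabulated in Table 8
    h2 = -(1.0 - p) * math.log(1.0 - p)  # H2_i, as tabulated in Table 8
    label = p * h1 + (1.0 - p) * h2      # matches the category labels in Table 9
    return h1, h2, label

print(entropy_label(0.6))   # ~(0.306495, 0.366516, 0.330504)
print(entropy_label(0.48))  # ~(0.352305, 0.340042, 0.345928)
```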

6. Conclusions

Aiming at the unsteady irises caused by different devices and environments, this paper proposed a lightweight heterogeneous iris one-to-one certification process with universal sensors based on quality evaluation fuzzy inference and a multi-feature fusion lightweight neural network. The focus was on resolving two issues: the ineffectiveness of traditional single-source iris processing methods on unsteady irises, and the insufficient correlation between links caused by mechanically segmenting the iris certification process. The size and situational-classification constraints of existing iris data sets also make it difficult to satisfy learning methods under a single deep learning framework. The results on different iris libraries prove that the design of each part of the method is reasonable and meaningful: the accuracy in the different iris libraries remains high, and the recognition range is wide. Through the feedback learning mechanism, experts can dynamically adjust the model according to the results.
This paper mainly focused on the forward flow of the recognition process; for the reverse optimization process, only the relevant mechanism was designed. Making this mechanism more automated, and enabling the system to perform unsupervised learning and multi-category recognition, are the next research focuses.

Author Contributions

Conceptualization, L.S. and Z.X.; Data curation, L.S.; Formal analysis, L.S., L.Y., and Z.X.; Funding acquisition, L.Y., Z.X., and H.G.; Investigation, L.S.; Methodology, L.S.; Project administration, L.S.; Resources, L.S., W.Z., L.X., and W.C.; Software, L.S. and C.J.; Supervision, L.S.; Validation, L.S.; Visualization, L.S.; Writing—original draft, L.S.; Writing—review and editing, L.S. and C.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 61471181; Natural Science Foundation of Jilin Province, grant number 20140101194JC and 20150101056JC; Jilin Province Industrial Innovation Special Fund Project, grant number 2019C053-2 and the science and technology project of the Jilin Provincial Education Department under Grant No. JJKH20180448KJ. Thanks to the Jilin Provincial Key Laboratory of Biometrics New Technology for supporting this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, S.; Liu, Y.-N.; Zhu, X.; Huo, G.; Liu, W.-T.; Feng, J.-K. Iris double recognition based on modified evolutionary neural network. J. Electron. Imaging 2017, 26, 1.
  2. Chen, B.; Xing, L.; Zheng, N.; Príncipe, J.C. Quantized Minimum Error Entropy Criterion. IEEE Trans. Neural Netw. Learn. Syst. 2019, 5, 1370–1389.
  3. Nguyen, D.; Pham, T.; Lee, Y.W.; Park, K. Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor. Sensors 2018, 18, 2601.
  4. Alonso-Fernandez, F.; Farrugia, R.A.; Bigun, J.; Fierrez, J.; Gonzalez-Sosa, E. A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning. IEEE Access 2018, 7, 6519–6544.
  5. Jha, R.R.; Jaswal, G.; Gupta, D.; Saini, S.; Nigam, A. PixISegNet: Pixel-level iris segmentation network using convolutional encoder–decoder with stacked hourglass bottleneck. IET Biom. 2020, 9, 11–24.
  6. Liu, Y.; Liu, S.; Zhu, X.; Chen, Y.H. LOG operator and adaptive optimization Gabor filtering for iris recognition. J. Jilin Univ. (Eng. Technol. Ed.) 2018, 5, 1606–1613.
  7. Liu, S.; Liu, Y.; Zhu, X.; Lin, Z.; Yang, J. Ant Colony Mutation Particle Swarm Optimization for Secondary Iris Recognition. J. Comput. Des. Comput. Graph. 2018, 30, 1604.
  8. Lee, Y.W.; Kim, K.W.; Hoang, T.M.; Arsalan, M.; Park, K. Deep Residual CNN-Based Ocular Recognition Based on Rough Pupil Detection in the Images by NIR Camera Sensor. Sensors 2019, 19, 842.
  9. Bazrafkan, S.; Thavalengal, S.; Corcoran, P. An end to end Deep Neural Network for iris segmentation in unconstrained scenarios. Neural Netw. 2018, 106, 79–95.
  10. Zhang, M.; He, Z.; Zhang, H.; Tan, T.; Sun, Z. Toward practical remote iris recognition: A boosting based framework. Neurocomputing 2019, 330, 238–252.
  11. Mowla, N.I.; Doh, I.; Chae, K. Binarized Multi-Factor Cognitive Detection of Bio-Modality Spoofing in Fog Based Medical Cyber-Physical System. In Proceedings of the 2019 International Conference on Information Networking (ICOIN), Kuala Lumpur, Malaysia, 9–11 January 2019; pp. 43–48.
  12. Zhao, Z.; Kumar, A. A deep learning based unified framework to detect, segment and recognize irises using spatially corresponding features. Pattern Recognit. 2019, 93, 546–557.
  13. Zhao, T.; Liu, Y.; Huo, G.; Zhu, X. A Deep Learning Iris Recognition Method Based on Capsule Network Architecture. IEEE Access 2019, 7, 49691–49701.
  14. Liu, N.; Zhang, M.; Li, H.; Sun, Z.; Tan, T. Deepiris: Learning pairwise filter bank for heterogeneous iris verification. Pattern Recognit. Lett. 2016, 82, 154–161.
  15. Nguyen, K.; Fookes, C.; Ross, A.; Sridharan, S. Iris Recognition with Off-the-Shelf CNN Features: A Deep Learning Perspective. IEEE Access 2017, 6, 18848–18855.
  16. Liu, Y.; Liu, S.; Zhu, X.; Guang, H.; Tong, D.; Zhang, K.; Jiang, X.; Guo, S.-J.; Zhang, Q.X. Iris secondary recognition based on decision particle swarm optimization and stable texture. J. Jilin Univ. (Eng. Technol. Ed.) 2019, 49, 1329–1338.
  17. Liu, Y.; Liu, S.; Zhu, X.; Liu, T.-H.; Liu, Y.-N. Iris recognition algorithm based on feature weighted fusion. J. Jilin Univ. (Eng. Technol. Ed.) 2019, 49, 221–229.
  18. Shuai, L.; Yuanning, L.; Zhu, X.; Xinlong, L.; Chaoqun, W.; Kuo, Z.; Tong, D. Unsteady State Lightweight Iris Certification Based on Multi-Algorithm Parallel Integration. Algorithms 2019, 12, 194.
  19. Arsalan, M.; Kim, D.S.; Lee, M.B.; Owais, M.; Park, K. FRED-Net: Fully residual encoder–decoder network for accurate iris segmentation. Expert Syst. Appl. 2019, 122, 217–241.
  20. Shuai, L.; Yuanning, L.; Xiaodong, Z.; Guang, H.; Jingwei, C.; Qixian, Z.; Zukang, W.; Zhiyi, D. Statistical Cognitive Learning and Security Output Protocol for Multi-State Iris Recognition. IEEE Access 2019, 7, 132871–132893.
  21. Wang, Z.; Li, C.; Shao, H.; Sun, J. Eye Recognition with Mixed Convolutional and Residual Network (MiCoRe-Net). IEEE Access 2018, 6, 17905–17912.
  22. Cheng, Y.; Liu, Y.; Zhu, X.; Li, S. A Multiclassification Method for Iris Data Based on the Hadamard Error Correction Output Code and a Convolutional Network. IEEE Access 2019, 7, 145235–145245.
  23. Liu, S.; Liu, Y.; Zhu, X.; Huo, G.; Cui, J.; Zhang, Q.; Dong, Z.; Jiang, X. Current optimal active feedback and stealing response mechanism for low-end device constrained defocused iris certification. J. Electron. Imaging 2020, 29, 013012.
  24. Gangwar, A.; Joshi, A. DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2301–2305.
  25. Liu, N.; Liu, J.; Sun, Z.; Tan, T. A Code-Level Approach to Heterogeneous Iris Recognition. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2373–2386.
  26. Garea-Llano, E.; Vázquez, M.S.G.; Colores-Vargas, J.M.; Fuentes, L.M.Z.; Acosta, A.A.R. Optimized robust multi-sensor scheme for simultaneous video and image iris recognition. Pattern Recognit. Lett. 2018, 101, 44–51.
  27. Subramani, B.; Veluchamy, M. Fuzzy contextual inference system for medical image enhancement. Measurement 2019, 148, 106967.
  28. Benalcazar, D.; Perez, C.; Bastias, D.; Bowyer, K. Iris Recognition: Comparing Visible-Light Lateral and Frontal Illumination to NIR Frontal Illumination. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 867–876.
  29. Lv, J.; Wang, Y.; Xu, L.; Gu, Y.; Zou, L.; Yang, B.; Ma, Z. A method to obtain the near-large fruit from apple image in orchard for single-arm apple harvesting robot. Sci. Hortic. 2019, 257, 108758.
  30. Rana, H.K.; Azam, S.; Akhtar, M.R.; Quinn, J.M.; Moni, M.A. A fast iris recognition system through optimum feature extraction. PeerJ Comput. Sci. 2019, 5, e184.
  31. Yao, L.; Muhammad, S. A novel technique for analysing histogram equalized medical images using superpixels. Comput. Assist. Surg. 2019, 24, 53–61.
  32. Shi, H.; Zhang, Y.; Zhang, Z.; Ma, N.; Zhao, X.; Gao, Y.; Sun, J. Hypergraph-Induced Convolutional Networks for Visual Classification. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2963–2972.
  33. Yu, H.; Zhang, X.J.; Wang, S.; Song, S.M. Alternative framework of the Gaussian filter for non-linear systems with synchronously correlated noises. IET Sci. Meas. Technol. 2016, 10, 306–315.
  34. Völgyes, D.; Martinsen, A.C.; Stray-Pedersen, A.; Waaler, D.; Pedersen, M. A Weighted Histogram-Based Tone Mapping Algorithm for CT Images. Algorithms 2018, 11, 111.
  35. Yu, Y.; Chen, Y.; Guo, P.; Chen, P.; Peng, N. Noisy Image Blind Deblurring via Hyper Laplacian Prior and Spectral Properties of Convolution Kernel. Chin. J. Eng. Math. 2018, 35, 648–654.
  36. Lan, G.; Shen, Y.; Chen, T.; Zhu, H. Parallel implementations of structural similarity based no-reference image quality assessment. Adv. Eng. Softw. 2017, 114, 372–379.
  37. Mohebian, M.R.; Marateb, H.R.; Karimimehr, S.; Mañanas, M.A.; Kranjec, J.; Holobar, A. Non-invasive Decoding of the Motoneurons: A Guided Source Separation Method Based on Convolution Kernel Compensation With Clustered Initial Points. Front. Comput. Neurosci. 2019, 13, 14.
  38. Ding, H.; Pan, Z.; Cen, Q.; Li, Y.; Chen, S. Multi-scale fully convolutional network for gland segmentation using three-class classification. Neurocomputing 2020, 380, 150–161.
  39. Gao, L.; Zheng, H. Convolutional neural network based on PReLUs-Softplus nonlinear excitation function. J. Shenyang Univ. Technol. 2018, 40, 54–59.
  40. JLU Iris Image Database. Available online: http://www.jlucomputer.com/index/irislibrary/irislibrary.html (accessed on 22 March 2020).
  41. Liu, S.; Liu, Y.; Zhu, X. Sequence Iris Quality Evaluation Algorithm Based on Morphology and Gray Distribution. J. Jilin Univ. (Eng. Technol. Ed.) 2018, 56, 1156–1162.
  42. Shuai, L.; Yuanning, L.; Xiaodong, Z.; Hao, Z.; Guang, H.; Guangyu, W.; Jingwei, C.; Xinlong, L.; Zukang, W.; Zhiyi, D. Constrained Sequence Iris Quality Evaluation Based on Causal Relationship Decision Reasoning. In Proceedings of the 14th Chinese Conference on Biometric Recognition, CCBR 2019, Zhuzhou, China, 11–12 October 2019; pp. 337–345.
  43. Gao, F.; Huang, T.; Wang, J.; Sun, J.; Hussain, A.; Yang, E. Dual-Branch Deep Convolution Neural Network for Polarimetric SAR Image Classification. Appl. Sci. 2017, 7, 447.
  44. Gangwar, A.; Joshi, A. An Experimental Study of Deep Convolutional Features for Iris Recognition. In Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 3 December 2016.
  45. Maram, A.; Lamiaa, E. Convolutional Neural Network Based Feature Extraction for IRIS Recognition. Int. J. Comput. Sci. Inf. Technol. 2018, 10, 65–78.
  46. Gómez-Ríos, A.; Tabik, S.; Luengo, J.; Shihavuddin, A.; Krawczyk, B.; Herrera, F. Towards highly accurate coral texture images classification using deep convolutional neural networks and data augmentation. Expert Syst. Appl. 2019, 118, 315–328.
  47. Umer, S.; Sardar, A.; Dhara, B.C.; Raout, R.K.; Pandey, H.M. Person identification using fusion of iris and periocular deep features. Neural Netw. 2019, 122, 407–419.
  48. Wang, K.; Kumar, A. Cross-spectral iris recognition using CNN and supervised discrete hashing. Pattern Recognit. 2019, 86, 85–98.
  49. Zhao, Z.; Kumar, A. Towards more accurate iris recognition using deeply learned spatially corresponding features. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3809–3818.
  50. Baqar, M.; Ghani, A.; Aftab, A.; Arbab, S.; Yasin, S. Deep belief networks for iris recognition based on contour detection. In Proceedings of the International Conference on Open Source Systems & Technologies (ICOSST), Lahore, Pakistan, 15–17 December 2016; pp. 72–77.
  51. Cheng, D.; Kou, K.I. Multichannel interpolation of nonuniform samples with application to image recovery. J. Comput. Appl. Math. 2020, 367, 112502.
  52. Biagi, S.; Isernia, T. On the solvability of singular boundary value problems on the real line in the critical growth case. Discret. Contin. Dyn. Syst. A 2020, 40, 1131–1157.
  53. Chen, W.; Sun, Z.; Han, J. Landslide Susceptibility Modeling Using Integrated Ensemble Weights of Evidence with Logistic Regression and Random Forest Models. Appl. Sci. 2019, 9, 171.
  54. Kuehlkamp, A.; Pinto, A.; Rocha, A.; Bowyer, K.; Czajka, A. Ensemble of Multi-View Learning Classifiers for Cross-Domain Iris Presentation Attack Detection. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1419–1431.
  55. Al-Saidi, N.M.; Mohammed, A.J.; Al-Azawi, R.J.; Ali, A.H. Iris Features Via Fractal Functions for Authentication Protocols. Int. J. Innov. Comput. Inf. Control 2019, 14, 1441–1453.
  56. Patil, C.M.; Gowda, S. An Approach for Secure Identification and Authentication for Biometrics using Iris. In Proceedings of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, India, 8–9 September 2017; pp. 421–424.
  57. Manzo, M. Attributed Relational SIFT-based Regions Graph (ARSRG): Concepts and applications. arXiv 2019, arXiv:1912.09972.
  58. Gad, R.; Talha, M.; El-Latif, A.A.A.; Zorkany, M.; El-Sayed, A.; El-Fishawy, N.; Muhammad, G. Iris Recognition Using Multi-Algorithmic Approaches for Cognitive Internet of things (CIoT) Framework. Future Gener. Comput. Syst. 2018, 89, 178–191.
Figure 1. The overall working process of the method in this paper.
Figure 2. Eye acquisition sensors. (a) NIR acquisition sensor, (b) ideal image from the NIR acquisition sensor, (c) ordinary optical acquisition sensor, (d) ideal image from the ordinary optical acquisition sensor, (e) ordinary optical acquisition sensor with upgraded sensors, (f) ideal image from the ordinary optical acquisition sensor with upgraded sensors.
Figure 3. Examples of multi-state irises: (a) normal iris, (b) iris in a dark condition, (c) iris in a defocused condition, (d) iris in a deflected condition.
Figure 4. The fault tree of the certification model in the case of certification error.
Figure 5. The whole structure of quality concept fuzzy inference.
Figure 6. An example of an image that may be acquired before the eye is pointed at the camera.
Figure 7. The operation process of the iris quality knowledge concept construction mechanism.
Figure 8. The eye discrimination images. (a) Eye discrimination image collected by the device in Figure 2b, (b) binarized pupil image of Figure 8a, (c) eye discrimination image collected by the device in Figure 2d, (d) binarized pupil image of Figure 8c, (e) eye discrimination image collected by the device in Figure 2f, (f) binarized pupil image of Figure 8e.
Figure 9. The transition-processed images. (a) The transition-processed image of Figure 2b, (b) connected area of Figure 8a, (c) the transition-processed image of Figure 2d, (d) connected area of Figure 8c, (e) the transition-processed image of Figure 2f, (f) connected area of Figure 8e.
Figure 10. Examples of normalized images. (a) Normalized enhanced image of Figure 2b, (b) normalized image of Figure 9b, (c) normalized enhanced image of Figure 2d, (d) normalized image of Figure 9d, (e) normalized enhanced image of Figure 2f, (f) normalized image of Figure 9f.
Figure 11. Example of the circle ranges of the detected different centers and radii in the iris connected area.
Figure 12. Sobel operator.
Figure 13. Examples of salient images with and without an iris. (a) Salient image of Figure 2b, (b) salient image of Figure 2d, (c) salient image of Figure 2f, (d) example of an eyeless highlight image, (e) example of a salient image with incorrect pupil delineation.
Figure 14. The detection images of the iris part and the non-iris part. (a) Detection image of the iris area, (b) detection image of the non-iris area.
Figure 15. The images of each stage of positioning. (a) Quality-qualified image, (b) positioning image, (c) segmentation image.
Figure 16. Examples of iris recognition areas of different iris libraries. (a) Iris recognition area of Figure 2b, (b) iris recognition area of Figure 2d, (c) iris recognition area of Figure 2f.
Figure 17. Examples of filtered images. (a) Gaussian filtering of Figure 6a, (b) equalization histogram of Figure 6a, (c) Gaussian filtering of Figure 6b, (d) equalization histogram of Figure 6b, (e) Gaussian filtering of Figure 6c, (f) equalization histogram of Figure 6c.
Figure 18. Gradient Laplacian convolutional kernel of the first convolutional layer.
Figure 19. An example of the processed image formed in the first step.
Figure 20. Convolutional kernels of the second convolutional layer: (a) gradient Laplacian convolutional kernel, (b) horizontal Sobel convolutional kernel, and (c) vertical Sobel convolutional kernel.
Figure 21. An example of the three processed images formed in the second step.
Figure 22. Convolutional kernels of the third convolutional layer: (a) gradient Laplacian convolutional kernel, (b) horizontal Sobel convolutional kernel, (c) vertical Sobel convolutional kernel, (d) horizontal gradient convolutional kernel, (e) vertical gradient convolutional kernel.
Figure 23. An example of the fifteen processed images formed in the third step.
Figure 24. Convolutional kernels of the fourth convolutional layer.
Figure 25. An example of the fifteen processed images formed in the fourth step.
Figure 26. The growth trend of 200 test irises: (a) JLU-4.0, (b) JLU-6.0, (c) JLU-7.0.
Figure 27. ROC curves of one-to-one certification of each category of the three iris libraries: (a) JLU-4.0, (b) JLU-6.0, (c) JLU-7.0.
Table 1. The key indicators of the three acquisition sensors.
Indicator | JLU-4.0 | JLU-6.0 | JLU-7.0
Pixel | 5,000,000 | 2,000,000 | 300,000
Collection distance | 150–200 mm | 100–150 mm | 150–200 mm
Resolution | 640 × 480 | 640 × 480 | 640 × 480
Light | Infrared | Infrared | Non-infrared
Color | Color | Grayscale | Grayscale
Table 2. The degree of quality requirements of the iris indicators of each algorithm.
Method | Clarity (Degree of Light Effect) | Effective Iris Area | Strabismus | Confirm the Presence of Eyes
Fusion method | high | high | high | yes
Secondary iris recognition | high | medium | medium | yes
DPSO-certification function | low | low | medium | yes
Statistical cognitive learning | low | low | low | yes
Multialgorithm parallel integration | low | low | low | yes
Capsule deep learning | medium | medium | medium | yes
Table 3. The qualifications and certifications of the six methods.
Library | Method | Considered Qualified Number | Correct Certifications of the Considered Qualified Number
JLU-4.0 | Fusion method | 156 | 155
JLU-4.0 | Secondary iris recognition | 235 | 230
JLU-4.0 | DPSO-certification function | 435 | 434
JLU-4.0 | Statistical cognitive learning | 943 | 943
JLU-4.0 | Multialgorithm parallel integration | 954 | 952
JLU-4.0 | Capsule deep learning | 513 | 511
JLU-6.0 | Fusion method | 179 | 175
JLU-6.0 | Secondary iris recognition | 258 | 257
JLU-6.0 | DPSO-certification function | 474 | 473
JLU-6.0 | Statistical cognitive learning | 963 | 958
JLU-6.0 | Multialgorithm parallel integration | 974 | 973
JLU-6.0 | Capsule deep learning | 526 | 523
JLU-7.0 | Fusion method | 111 | 110
JLU-7.0 | Secondary iris recognition | 210 | 210
JLU-7.0 | DPSO-certification function | 453 | 453
JLU-7.0 | Statistical cognitive learning | 1007 | 1003
JLU-7.0 | Multialgorithm parallel integration | 987 | 985
JLU-7.0 | Capsule deep learning | 546 | 545
Table 4. The number of iris categories and the number of single-category irises.
Category Number | Image Number in Each Category | Total | Match Number in the Same Category | Match Number in Different Category | Total
50 | 50 | 2500 | 2500 | 7500 | 10,000
Table 5. The certification statuses of all 12 deep learning architecture methods.
Method | JLU-4.0: Correct Certifications / CRR | JLU-6.0: Correct Certifications / CRR | JLU-7.0: Correct Certifications / CRR
CNN-special certification function | 9997 / 99.97% | 10,000 / 100% | 9998 / 99.98%
FRIR-V2 | 9135 / 91.35% | 8421 / 84.21% | 8754 / 87.54%
VGG-Net | 7369 / 73.69% | 8123 / 81.23% | 7869 / 78.69%
DeepIrisNet-A | 7869 / 78.69% | 7498 / 74.98% | 8096 / 80.96%
DeepIris | 8456 / 84.56% | 8569 / 85.69% | 8375 / 83.75%
DeepIrisNet | 9236 / 92.36% | 9347 / 93.47% | 9147 / 91.47%
Alex-Net | 6789 / 67.89% | 5698 / 56.98% | 6845 / 68.45%
ResNet | 7125 / 71.25% | 7236 / 72.36% | 7523 / 75.23%
Inception-v3 | 7698 / 76.98% | 7748 / 77.48% | 7496 / 74.96%
CNN-self-learned | 8963 / 89.63% | 8742 / 87.42% | 8541 / 85.41%
FCN | 8546 / 85.46% | 8325 / 83.25% | 8698 / 86.98%
DBN-RVLR-NN | 7968 / 79.68% | 7762 / 77.62% | 8023 / 80.23%
Table 6. The number of comparison times and recognition situation.
Library | Structure | Number of Correct Certifications | Correct Certification Rate
JLU-4.0 | Original structure | 998 | 99.8%
JLU-4.0 | Unprocessed structure | 875 | 87.5%
JLU-4.0 | Structure of replacement algorithm | 998 | 99.8%
JLU-6.0 | Original structure | 1000 | 100%
JLU-6.0 | Unprocessed structure | 869 | 86.9%
JLU-6.0 | Structure of replacement algorithm | 997 | 99.7%
JLU-7.0 | Original structure | 999 | 99.9%
JLU-7.0 | Unprocessed structure | 892 | 89.2%
JLU-7.0 | Structure of replacement algorithm | 1000 | 100%
Table 7. The L_i and p_i of the 15 recognition parameters for 500 and 100 training irises.
The number of training irises is 500 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: L_i = 41.8608, 15.7176, 24.3741; p_i = 0.6, 0.58, 0.58
No. 2: L_i = 38.2424, 25.0296, 67.3079; p_i = 0.4, 0.48, 0.48
No. 3: L_i = 15.127, 16.8023, 49.4673; p_i = 0.4, 0.58, 0.5
No. 4: L_i = 54.5455, 19.2285, 32.8769; p_i = 0.6, 0.54, 0.48
No. 5: L_i = 25.5758, 24.157, 54.8345; p_i = 0.4, 0.48, 0.42
No. 6: L_i = 26.0909, 20.8958, 55.0618; p_i = 0.4, 0.58, 0.46
No. 7: L_i = 38.8545, 13.9576, 25.9436; p_i = 0.4, 0.56, 0.58
No. 8: L_i = 32.9091, 21.7139, 52.7546; p_i = 0.6, 0.5, 0.54
No. 9: L_i = 16.9151, 14.2752, 38.4412; p_i = 0.6, 0.52, 0.54
No. 10: L_i = 30.0727, 14.7309, 21.1418; p_i = 0.2, 0.52, 0.52
No. 11: L_i = 29.3212, 18.7345, 50.537; p_i = 0.6, 0.64, 0.44
No. 12: L_i = 21.6424, 13.2697, 34.1182; p_i = 0.6, 0.62, 0.52
No. 13: L_i = 39.8727, 19.3085, 31.3618; p_i = 0.6, 0.6, 0.48
No. 14: L_i = 25.503, 23.3364, 50.7218; p_i = 0.6, 0.56, 0.48
No. 15: L_i = 19.0424, 22.0957, 54.723; p_i = 0.4, 0.48, 0.36
The number of training irises is 100 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: L_i = 13.1152, 18.8164, 25.5812; p_i = 0.6, 0.48, 0.56
No. 2: L_i = 56.6001, 45.5503, 54.137; p_i = 0.4, 0.48, 0.5
No. 3: L_i = 27.1576, 27.717, 35.3843; p_i = 0.6, 0.58, 0.52
No. 4: L_i = 27.3758, 27.0261, 16.1855; p_i = 0.6, 0.54, 0.48
No. 5: L_i = 36.2848, 40.5636, 47.3764; p_i = 0.4, 0.56, 0.48
No. 6: L_i = 40.3152, 33.6909, 42.1176; p_i = 0.4, 0.46, 0.6
No. 7: L_i = 21.1273, 19.9715, 22.3182; p_i = 0.6, 0.54, 0.52
No. 8: L_i = 42.1394, 35.9509, 44.1636; p_i = 0.6, 0.5, 0.54
No. 9: L_i = 24.4364, 21.9733, 27.5012; p_i = 0.4, 0.54, 0.54
No. 10: L_i = 16.3152, 18.5473, 19.3721; p_i = 0.6, 0.54, 0.5
No. 11: L_i = 36.4909, 36.4655, 42.06; p_i = 0.6, 0.5, 0.42
No. 12: L_i = 17.9817, 19.143, 20.5849; p_i = 0.4, 0.6, 0.54
No. 13: L_i = 30.406, 28.7436, 16.9236; p_i = 0.6, 0.52, 0.52
No. 14: L_i = 24.9576, 36.1054, 39.3079; p_i = 0.6, 0.46, 0.58
No. 15: L_i = 37.6788, 33.1055, 37.7097; p_i = 0.4, 0.44, 0.5
Table 8. The information entropies H1_i and H2_i of the 15 recognition parameters for 500 and 100 training irises.
The number of training irises is 500 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: H1_i = 0.306495, 0.315942, 0.315942; H2_i = 0.366516, 0.36435, 0.36435
No. 2: H1_i = 0.366516, 0.352305, 0.352305; H2_i = 0.306495, 0.340042, 0.340042
No. 3: H1_i = 0.366516, 0.315942, 0.346574; H2_i = 0.306495, 0.36435, 0.346574
No. 4: H1_i = 0.306495, 0.332741, 0.352305; H2_i = 0.366516, 0.357203, 0.340042
No. 5: H1_i = 0.366516, 0.352305, 0.36435; H2_i = 0.306495, 0.340042, 0.315942
No. 6: H1_i = 0.366516, 0.315942, 0.357203; H2_i = 0.306495, 0.36435, 0.332741
No. 7: H1_i = 0.366516, 0.324698, 0.315942; H2_i = 0.306495, 0.361231, 0.36435
No. 8: H1_i = 0.306495, 0.346574, 0.332741; H2_i = 0.366516, 0.346574, 0.357203
No. 9: H1_i = 0.306495, 0.340042, 0.332741; H2_i = 0.366516, 0.352305, 0.357203
No. 10: H1_i = 0.321888, 0.340042, 0.340042; H2_i = 0.178515, 0.352305, 0.352305
No. 11: H1_i = 0.306495, 0.285624, 0.361231; H2_i = 0.366516, 0.367794, 0.324698
No. 12: H1_i = 0.306495, 0.296382, 0.340042; H2_i = 0.366516, 0.367682, 0.352305
No. 13: H1_i = 0.306495, 0.306495, 0.352305; H2_i = 0.366516, 0.366516, 0.340042
No. 14: H1_i = 0.306495, 0.324698, 0.352305; H2_i = 0.366516, 0.361231, 0.340042
No. 15: H1_i = 0.366516, 0.352305, 0.367794; H2_i = 0.306495, 0.340042, 0.285624
The number of training irises is 100 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: H1_i = 0.306495, 0.352305, 0.324698; H2_i = 0.366516, 0.340042, 0.361231
No. 2: H1_i = 0.366516, 0.352305, 0.346574; H2_i = 0.306495, 0.340042, 0.346574
No. 3: H1_i = 0.306495, 0.315942, 0.340042; H2_i = 0.366516, 0.36435, 0.352305
No. 4: H1_i = 0.306495, 0.332741, 0.352305; H2_i = 0.366516, 0.357203, 0.340042
No. 5: H1_i = 0.366516, 0.324698, 0.352305; H2_i = 0.306495, 0.361231, 0.340042
No. 6: H1_i = 0.366516, 0.357203, 0.306495; H2_i = 0.306495, 0.332741, 0.366516
No. 7: H1_i = 0.306495, 0.332741, 0.340042; H2_i = 0.366516, 0.357203, 0.352305
No. 8: H1_i = 0.306495, 0.346574, 0.332741; H2_i = 0.366516, 0.346574, 0.357203
No. 9: H1_i = 0.366516, 0.332741, 0.332741; H2_i = 0.306495, 0.357203, 0.357203
No. 10: H1_i = 0.306495, 0.332741, 0.346574; H2_i = 0.366516, 0.357203, 0.346574
No. 11: H1_i = 0.306495, 0.346574, 0.36435; H2_i = 0.366516, 0.346574, 0.315942
No. 12: H1_i = 0.366516, 0.306495, 0.332741; H2_i = 0.306495, 0.366516, 0.357203
No. 13: H1_i = 0.306495, 0.340042, 0.340042; H2_i = 0.366516, 0.352305, 0.352305
No. 14: H1_i = 0.306495, 0.357203, 0.315942; H2_i = 0.366516, 0.332741, 0.36435
No. 15: H1_i = 0.366516, 0.361231, 0.346574; H2_i = 0.306495, 0.324698, 0.346574
Table 9. The 15 values of the category labels of the three iris libraries.
The number of training irises is 500 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: 0.330504, 0.336273, 0.336273
No. 2: 0.330504, 0.345928, 0.345928
No. 3: 0.330504, 0.336273, 0.346574
No. 4: 0.330504, 0.343993, 0.345928
No. 5: 0.330504, 0.345928, 0.336273
No. 6: 0.330504, 0.336273, 0.343993
No. 7: 0.330504, 0.340773, 0.336273
No. 8: 0.330504, 0.346574, 0.343993
No. 9: 0.330504, 0.345928, 0.343993
No. 10: 0.207189, 0.345928, 0.345928
No. 11: 0.330504, 0.315205, 0.340773
No. 12: 0.330504, 0.323476, 0.345928
No. 13: 0.330504, 0.330504, 0.345928
No. 14: 0.330504, 0.340773, 0.345928
No. 15: 0.330504, 0.345928, 0.315205
The number of training irises is 100 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: 0.330504, 0.345928, 0.340773
No. 2: 0.330504, 0.345928, 0.346574
No. 3: 0.330504, 0.336273, 0.345928
No. 4: 0.330504, 0.343993, 0.345928
No. 5: 0.330504, 0.340773, 0.345928
No. 6: 0.330504, 0.343993, 0.330504
No. 7: 0.330504, 0.343993, 0.345928
No. 8: 0.330504, 0.346574, 0.343993
No. 9: 0.330504, 0.343993, 0.343993
No. 10: 0.330504, 0.343993, 0.346574
No. 11: 0.330504, 0.346574, 0.336273
No. 12: 0.330504, 0.330504, 0.343993
No. 13: 0.330504, 0.345928, 0.345928
No. 14: 0.330504, 0.343993, 0.336273
No. 15: 0.330504, 0.340773, 0.346574
Table 10. The recognition parameters of the training iris of the example (values for JLU-4.0, JLU-6.0, JLU-7.0).
No. 1: 35.7268, 12.788, 12.9091
No. 2: 30.7579, 24.5452, 55.6973
No. 3: 24.9394, 22.3033, 26.9697
No. 4: 28.3939, 13.1515, 15.1817
No. 5: 28.0303, 25.6065, 6.2121
No. 6: 21.7576, 21.9091, 51.4849
No. 7: 19.8787, 11.2424, 26.9091
No. 8: 22.8181, 6.99993, 48.4243
No. 9: 21.8484, 24.0909, 19.1818
No. 10: 8.36367, 22.2121, 7.5758
No. 11: 22.3636, 24.3636, 47.606
No. 12: 15.2727, 7.9696, 23.2121
No. 13: 15.7272, 4.06066, 10.4243
No. 14: 23.3637, 13.6667, 40.4849
No. 15: 5.93933, 11.1515, 56.1819
Table 11. The information offset Z_i of the example training iris.
The number of training irises is 500 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: 0.15695, 0.149091, 0.0970519
No. 2: 0.117914, 0.165834, 0.139936
No. 3: 0.303185, 0.203128, 0.0944764
No. 4: 0.0957285, 0.122894, 0.078089
No. 5: 0.201546, 0.187428, 0.18785
No. 6: 0.122257, 0.160448, 0.153639
No. 7: 0.0750067, 0.146459, 0.158722
No. 8: 0.127508, 0.0558626, 0.164931
No. 9: 0.189364, 0.285385, 0.0896586
No. 10: 0.0179044, 0.254989, 0.063361
No. 11: 0.14026, 0.17219, 0.149724
No. 12: 0.129773, 0.110362, 0.1203
No. 13: 0.0725356, 0.0386744, 0.056209
No. 14: 0.168471, 0.106487, 0.134977
No. 15: 0.0457266, 0.0853465, 0.187673
The number of training irises is 100 (values for JLU-4.0, JLU-6.0, JLU-7.0):
No. 1: 0.399367, 0.114928, 0.0917578
No. 2: 0.0796696, 0.0911246, 0.178281
No. 3: 0.168877, 0.147454, 0.134772
No. 4: 0.152059, 0.0874362, 0.158619
No. 5: 0.113255, 0.114782, 0.209799
No. 6: 0.0791217, 0.106853, 0.179213
No. 7: 0.173029, 0.101146, 0.203892
No. 8: 0.0995787, 0.0337403, 0.180166
No. 9: 0.13108, 0.180149, 0.125325
No. 10: 0.0942713, 0.196781, 0.0677668
No. 11: 0.112702, 0.115778, 0.207409
No. 12: 0.12452, 0.07656, 0.185284
No. 13: 0.095119, 0.0249799, 0.108916
No. 14: 0.172153, 0.0621963, 0.157609
No. 15: 0.0231097, 0.0535391, 0.258172
Table 12. The entropy feature G and enlarging value e^G.
Value | 500 training irises: JLU-4.0, JLU-6.0, JLU-7.0 | 100 training irises: JLU-4.0, JLU-6.0, JLU-7.0
G | 11.9502, 13.2395, 11.0544 | 12.211, 8.7975, 14.2748
e^G | 154,844, 562,137, 63,220.3 | 201,013, 6617.71, 1,583,010
Table 13. The results of the quality evaluation and recognition of the four cases.
Case | Indicator | JLU-4.0 | JLU-6.0 | JLU-7.0
1 | Qualified quantity | 937 | 953 | 964
1 | Accuracy | 42.39% | 47.53% | 28.75%
2 | Qualified quantity | 975 | 946 | 968
2 | Accuracy | 44.53% | 42.85% | 32.56%
3 | Qualified quantity | 986 | 986 | 972
3 | Accuracy | 37.59% | 39.42% | 41.25%
4 | Qualified quantity | 903 | 863 | 893
4 | Accuracy | 50.67% | 58.25% | 49.53%
Table 14. Feedback and updates.
Library | No. | Training Iris Number | Test Iris Number | Number of Correct Certifications | Correct Certification Rate | Whether to Update Judgment
JLU-4.0 | 1 | 10 | 50 | 48 | 96% | No
JLU-4.0 | 2 | 10 | 100 | 69 | 69% | Yes
JLU-4.0 | 3 | 100 | 200 | 189 | 94.5% | Yes
JLU-4.0 | 4 | 500 | 200 | 200 | 100% | No
JLU-6.0 | 1 | 10 | 50 | 38 | 76% | Yes
JLU-6.0 | 2 | 100 | 200 | 193 | 96.5% | No
JLU-6.0 | 3 | 100 | 500 | 375 | 75% | Yes
JLU-6.0 | 4 | 500 | 500 | 499 | 99.8% | No
JLU-7.0 | 1 | 10 | 50 | 42 | 84% | Yes
JLU-7.0 | 2 | 50 | 100 | 89 | 89% | Yes
JLU-7.0 | 3 | 200 | 200 | 198 | 99% | No
JLU-7.0 | 4 | 200 | 500 | 423 | 84.6% | Yes
Table 15. The change situation of the category label range distribution.
The Number of Training Irises | Category Label Range
JLU-4.0:
10 | [1000, 2000]
100 | [1000, 2000], [1800, 1900]
500 | [1000, 2000], [1800, 1900], [13,000, 14,000], [8900, 9000]
JLU-6.0:
10 | [890,000, 900,000], [310,000, 320,000]
50 | [890,000, 900,000], [310,000, 320,000], [138,000,000, 139,000,000], [1,820,000, 1,830,000]
100 | [890,000, 900,000], [310,000, 320,000], [138,000,000, 139,000,000], [1,820,000, 1,830,000], [2,000,000, 2,100,000]
500 | [890,000, 900,000], [310,000, 320,000], [138,000,000, 139,000,000], [1,820,000, 1,830,000], [2,000,000, 2,100,000], [1,190,000, 1,200,000], [125,000,000, 126,000,000]
JLU-7.0:
10 | [138,000, 139,000], [1,150,000, 1,160,000]
50 | [138,000, 139,000], [1,150,000, 1,160,000], [3,500,000, 3,800,000]
100 | [138,000, 139,000], [1,150,000, 1,160,000], [3,500,000, 3,800,000], [2,900,000, 3,000,000], [840,000, 850,000]
200 | [138,000, 139,000], [1,150,000, 1,160,000], [3,500,000, 3,800,000], [2,900,000, 3,000,000], [840,000, 850,000], [2,470,000, 2,480,000], [720,000, 730,000]
500 | [138,000, 139,000], [1,150,000, 1,160,000], [3,500,000, 3,800,000], [2,900,000, 3,000,000], [840,000, 850,000], [2,470,000, 2,480,000], [720,000, 730,000]
Table 16. The results of the heterogeneous universality experiment.
Library | Category | Total Number of Collections | Quality Error | False Rejection | False Acceptance
JLU-4.0 | Category 1 | 105 | 3 | 2 | 0
JLU-4.0 | Category 2 | 107 | 5 | 1 | 1
JLU-4.0 | Category 3 | 103 | 0 | 3 | 0
JLU-4.0 | Category 4 | 101 | 1 | 0 | 0
JLU-4.0 | Category 5 | 106 | 2 | 3 | 1
JLU-6.0 | Category 1 | 101 | 0 | 1 | 0
JLU-6.0 | Category 2 | 100 | 0 | 0 | 0
JLU-6.0 | Category 3 | 103 | 1 | 1 | 1
JLU-6.0 | Category 4 | 100 | 0 | 0 | 0
JLU-6.0 | Category 5 | 104 | 1 | 3 | 0
JLU-7.0 | Category 1 | 105 | 1 | 3 | 1
JLU-7.0 | Category 2 | 109 | 5 | 4 | 0
JLU-7.0 | Category 3 | 104 | 1 | 2 | 1
JLU-7.0 | Category 4 | 107 | 0 | 7 | 0
JLU-7.0 | Category 5 | 105 | 2 | 3 | 0
Table 17. Certification times of each category in different iris libraries.
Library | Category | The Same Category | Different Category
JLU-4.0 | Category 1 | 1756 | 8435
JLU-4.0 | Category 2 | 1657 | 9135
JLU-4.0 | Category 3 | 1568 | 9253
JLU-4.0 | Category 4 | 1456 | 8965
JLU-4.0 | Category 5 | 1536 | 9568
JLU-6.0 | Category 1 | 1478 | 8965
JLU-6.0 | Category 2 | 1569 | 8567
JLU-6.0 | Category 3 | 1745 | 9542
JLU-6.0 | Category 4 | 1869 | 9645
JLU-6.0 | Category 5 | 1756 | 9456
JLU-7.0 | Category 1 | 1689 | 7895
JLU-7.0 | Category 2 | 1756 | 8456
JLU-7.0 | Category 3 | 1896 | 9674
JLU-7.0 | Category 4 | 1598 | 8456
JLU-7.0 | Category 5 | 1796 | 8695
Table 18. The results of the time operation experiment.
Iris Library | Algorithm Running Time in This Paper | Recognizable Irises Considered by Evaluation | The Correct Number of Recognizable | Unrecognizable Irises Considered by Evaluation | The Correct Number of Unrecognizable
JLU-4.0 | 6186 ms | 786 | 785 | 214 | 0
JLU-6.0 | 6347 ms | 815 | 815 | 185 | 0
JLU-7.0 | 6412 ms | 868 | 868 | 132 | 0
Table 19. The number of iris categories and the number of single-category irises.
Category Number | Image Number in Each Category | Total | Match Number in the Same Category | Match Number in Different Category | Total
50 | 100 | 5000 | 5000 | 10,000 | 15,000
Table 20. The number of qualified irises in the three iris libraries, as determined in three different ways.
Library | Method | Recognizable Irises Considered by Evaluation | Proportion Judged Recognizable | Unrecognizable Irises Considered by Evaluation
JLU-4.0 | Qualified iris evaluation indicators in [42] | 9135 | 60.9% | 5865
JLU-4.0 | Inference engine system in [41] | 9635 | 64.233% | 5365
JLU-4.0 | Quality evaluation fuzzy reasoning system (example indicators in this paper) | 13,585 | 90.567% | 1415
JLU-6.0 | Qualified iris evaluation indicators in [42] | 8123 | 54.153% | 6877
JLU-6.0 | Inference engine system in [41] | 7658 | 51.053% | 7342
JLU-6.0 | Quality evaluation fuzzy reasoning system (example indicators in this paper) | 14,325 | 95.5% | 675
JLU-7.0 | Qualified iris evaluation indicators in [42] | 6635 | 44.233% | 8365
JLU-7.0 | Inference engine system in [41] | 7468 | 49.787% | 7532
JLU-7.0 | Quality evaluation fuzzy reasoning system (example indicators in this paper) | 12,453 | 83.02% | 2547
Table 21. The recognition results of Case 0 and the 12 comparison cases in the three libraries.
Library | Case | The Correct Number of Recognizable Irises | CRR of Recognizable | The Correct Number of Unrecognizable Irises | CRR of Unrecognizable
JLU-4.0 | 0 | 13,585 | 100% | 1 | 0.0707%
JLU-4.0 | 1 | 6842 | 74.899% | 1125 | 19.182%
JLU-4.0 | 2 | 7368 | 80.657% | 426 | 7.263%
JLU-4.0 | 3 | 8568 | 88.928% | 536 | 9.991%
JLU-4.0 | 4 | 8356 | 91.472% | 1365 | 23.274%
JLU-4.0 | 5 | 6789 | 70.462% | 145 | 2.703%
JLU-4.0 | 6 | 8756 | 90.877% | 3658 | 68.183%
JLU-4.0 | 7 | 10,685 | 78.653% | 85 | 6.007%
JLU-4.0 | 8 | 9756 | 71.815% | 12 | 0.848%
JLU-4.0 | 9 | 11,423 | 84.085% | 6 | 0.424%
JLU-4.0 | 10 | 13,086 | 96.327% | 10 | 0.707%
JLU-4.0 | 11 | 10,985 | 80.861% | 9 | 0.636%
JLU-4.0 | 12 | 10,586 | 77.924% | 23 | 1.625%
JLU-6.0 | 0 | 14,323 | 99.986% | 0 | 0%
JLU-6.0 | 1 | 6135 | 75.526% | 1574 | 22.888%
JLU-6.0 | 2 | 6585 | 81.066% | 589 | 8.565%
JLU-6.0 | 3 | 6987 | 91.237% | 458 | 6.238%
JLU-6.0 | 4 | 7162 | 88.169% | 2869 | 41.719%
JLU-6.0 | 5 | 5869 | 76.639% | 95 | 1.294%
JLU-6.0 | 6 | 7436 | 97.101% | 3658 | 49.823%
JLU-6.0 | 7 | 11,468 | 80.056% | 25 | 3.704%
JLU-6.0 | 8 | 10,125 | 70.681% | 3 | 0.444%
JLU-6.0 | 9 | 11,987 | 83.678% | 5 | 0.741%
JLU-6.0 | 10 | 13,845 | 96.649% | 0 | 0%
JLU-6.0 | 11 | 12,452 | 86.925% | 0 | 0%
JLU-6.0 | 12 | 12,653 | 88.328% | 5 | 0.741%
JLU-7.0 | 0 | 12,452 | 99.992% | 2 | 0.079%
JLU-7.0 | 1 | 4785 | 72.118% | 1023 | 12.230%
JLU-7.0 | 2 | 4985 | 75.132% | 523 | 6.252%
JLU-7.0 | 3 | 6723 | 90.024% | 354 | 4.700%
JLU-7.0 | 4 | 5621 | 84.717% | 1862 | 22.259%
JLU-7.0 | 5 | 5862 | 78.495% | 115 | 1.527%
JLU-7.0 | 6 | 7256 | 97.161% | 3675 | 48.792%
JLU-7.0 | 7 | 9926 | 79.708% | 1 | 0.039%
JLU-7.0 | 8 | 9352 | 75.098% | 3 | 0.118%
JLU-7.0 | 9 | 9789 | 78.607% | 1 | 0.039%
JLU-7.0 | 10 | 11,589 | 93.062% | 1 | 0.039%
JLU-7.0 | 11 | 10,985 | 88.212% | 3 | 0.118%
JLU-7.0 | 12 | 10,709 | 85.995% | 4 | 0.157%
