Article

Recognition of Continuous Face Occlusion Based on Block Permutation by Using Linear Regression Classification

College of Electrical and Electronic Engineering, Wenzhou University, Wenzhou 325035, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 11885; https://doi.org/10.3390/app122311885
Submission received: 7 October 2022 / Revised: 10 November 2022 / Accepted: 11 November 2022 / Published: 22 November 2022
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

Abstract

Face occlusion is still a key issue in the study of face recognition. Continuous occlusion affects the overall features and contour structure of a face, which brings significant challenges to face recognition. In previous studies, although the Representation-Based Classification Method (RBCM) can better capture the differences between different categories of faces and accurately identify face images with changes in illumination and facial expression, it is easily affected by continuous occlusion: the model frequently learns occlusion features along with facial features, which leads to misrecognition. Therefore, eliminating occlusion information from the image is necessary to improve the robustness of such models. The Block Permutation Linear Regression Classification (BPLRC) method proposed in this paper combines image block permutation with Linear Regression Classification (LRC). The LRC algorithm belongs to the category of nearest-subspace classification and uses the Euclidean distance as a metric to classify images, which makes it susceptible to outliers. Therefore, block permutation is used to construct an image set that contains as little occlusion information as possible and to build a robust linear regression model. The BPLRC method first partitions all images into blocks, then enumerates all block arrangement schemes, feeds the image features of each scheme into linear models, and classifies the test image according to the minimum residual between the face image and its reconstruction. Compared to several state-of-the-art algorithms, the proposed method effectively solves the continuous occlusion problem on the Extended Yale B, ORL, and AR datasets. The proposed method recognizes scarf-occluded face images in the AR dataset with an accuracy of 93.67% at a recognition speed of 0.094 s per image. The permutation strategy can be combined not only with the LRC algorithm but also with other algorithms of weak robustness. Because the number of arrangement schemes grows rapidly with the number of blocks, future work should explore efficient search methods that quickly find the optimal or near-optimal arrangement scheme and reduce the computational cost of the proposed method.

1. Introduction

With the emergence of the epidemic, face recognition technology has been rapidly deployed in many areas of daily life, such as automatic access control on campuses, temperature-measurement all-in-one machines, and attendance systems. Traditional face recognition technology includes two main parts: feature extraction and classification. Gabor features [1], the Histogram of Oriented Gradient (HOG) [2], and the Local Binary Pattern (LBP) [3] are often used to describe image features. Principal Component Analysis (PCA) [4,5] is based on the singular value decomposition (SVD) [6] algorithm. It performs eigendecomposition on the covariance matrix to obtain the principal components of the data, achieving dimensionality reduction and extraction of important features; when applied to face images, these principal components are also known as eigenfaces. Feature extraction methods, combined with corresponding classification rules, were used to solve early face recognition problems. Common classifiers include the Nearest Neighbor Classifier (NNC) [7], the Minimum Distance Classifier (MDC) [8], the K-nearest Neighbor Classifier (KNNC) [9], and so on. Since traditional face recognition technology may overlook some important facial information or cause over-fitting in the feature extraction process, the Representation-Based Classification Method (RBCM) [10,11,12,13] has attracted much attention in recent years. Due to the epidemic's impact, people must wear masks when going out. Traditional face recognition methods cannot effectively identify obscured faces, and the RBCM method is easily affected by abnormal features. In real life, face images often suffer from illumination changes, facial expression changes, and facial occlusion, of which facial occlusion is considered the most challenging. The literature [10,11,12,13,14,15,16] shows that the RBCM method can effectively identify face images with changes in illumination and facial expression, but it struggles to identify face images with occlusion. Therefore, this paper improves the RBCM method with respect to facial occlusion so that it can handle all three problems: illumination changes, facial expression changes, and facial occlusion.
To solve the problem of abnormal features in the RBCM, many research groups have proposed robust variants, among which the Sparse Representation-based Classification (SRC) algorithm was among the first [10,11]. The biggest feature of this type of algorithm is its ability to linearly represent the test samples by building a dictionary containing all the training samples. However, the relatively high time complexity of the SRC algorithm in solving the L1 norm optimization problem has largely limited its applications, and Zhang et al. [14] therefore proposed the Collaborative Representation-based Classification (CRC) algorithm. The SRC and CRC algorithms both use training samples from all categories to linearly represent test samples, but the biggest difference between them is that the CRC algorithm uses the less computationally intensive L2 norm, instead of the L1 norm used in the SRC algorithm, to solve the optimization problem. Some scholars have proposed a Two-Phase Test Sample Sparse Representation (TPTSSR) algorithm based on the CRC algorithm [15]. In the first stage of the algorithm, CRC is used to select the M training samples that best represent the test sample. In the second stage, the CRC algorithm is used again to identify the test sample, with the training set being the M training samples determined in the previous stage. However, the training samples selected in the first stage of TPTSSR cannot improve the accuracy of CRC for face image recognition; thus, Tang et al. [16] further improved the algorithm and proposed the Random-filtering-based Sparse Representation (RFSR) algorithm. Liu et al. [17] improved the distance metric of the SRC algorithm. According to the authors' experiments, using the cosine or Euler distance as the metric expands the inter-class and intra-class distances simultaneously, and the inter-class distances are expanded by a much larger factor than the intra-class distances, which is conducive to improving the robustness of the SRC algorithm. The various RBCM methods proposed in the literature [10,11,12,13,14,15,16,17] constrain the model coefficients to a small range, reducing the negative impact of occlusion features on the model coefficients. However, constraining the model coefficients through the L1 and L2 norms alone cannot fully restore identification performance; only when the model is constrained to learn as few occlusion features as possible does its performance become stable.
In addition to using all classes of training samples to represent test samples linearly, representation-based classification methods can also use a single class of training samples to represent the test samples. Linear Regression Classification (LRC) [18] uses a single category of training samples to reconstruct the sample to be tested. LRC can be viewed as an L2-norm-based representation that uses the nearest-subspace rule to classify face images: the test image is projected onto each class subspace, and the subspace with the smallest distance is finally selected. Because this decision is based purely on a distance metric, it is unsuitable for dealing with severe continuous occlusion. The LRC method is similar to the SRC and CRC methods, but it does not restrict the representation coefficients. Therefore, if there are many abnormal variables in the training or test samples, the LRC method can learn a lot of abnormal information, and the classification fails as a result. Therefore, some researchers, such as the authors of [18], adopted a modular [19] image representation, which makes LRC more promising for accurately identifying face images with facial occlusion. The modular LRC algorithm [18] first segments a given occluded image into blocks. Each image patch is then judged as "good or bad" by the distance metric of an intermediate decision, and the "best" image patch is selected. Finally, the decision made on that block is taken as the final classification result. The main advantage of this method is that it is equivalent to dynamically removing occluded partitions. However, its biggest drawback is that it uses only a specific block (the one with minimal residual), discarding blocks that contain other useful face information. At this point, how to extract effective face characteristics and remove occlusion has become a question that researchers [20,21] care about.
This paper proposes the Block Permutation Linear Regression Classification (BPLRC) algorithm. The proposed method first partitions the image into blocks and then groups the arrangement schemes that retain the same number of blocks. The residuals of the schemes are compared to determine the best arrangement within each group, and the group with the best recognition effect is finally selected. Compared to the modular LRC algorithm, the proposed method retains more useful face information while still removing invalid occlusion blocks. Moreover, there is no need to estimate the occlusion ratio to achieve the best recognition effect. The algorithm improves the robustness of LRC to occluded images to a certain extent.
The RBCM methods used in this article mainly include LRC, SRC, CRC, Euler Sparse Representation-based Classification (ESRC [17]), Module LRC, and the proposed BPLRC. All of these methods are linear models; they differ in the type of linear representation used. LRC uses a single class of training samples to represent the test sample, whereas SRC, ESRC, and CRC use training samples of all classes to represent test samples linearly. Among them, SRC and ESRC differ in their distance metric, while the difference between SRC and CRC is that SRC uses the L1 norm to constrain the representation coefficients and CRC uses the L2 norm. Module LRC selects the "most useful" block as the feature input of the LRC algorithm through blocking. The difference from the LRC algorithm is that Module LRC screens for effective face features as much as possible during identification, whereas the LRC algorithm simply uses all image features. The BPLRC algorithm proposed in this article is similar to Module LRC: all images are partitioned into blocks, and effective face features are selected for identification with the LRC algorithm. The difference between BPLRC and Module LRC is that Module LRC retains only one block, while BPLRC considers all block arrangement schemes in order to retain as much information as possible. From a principle point of view, the block arrangement schemes of BPLRC include both the full-retention LRC and the Module LRC method that retains only one block.
The rest of this article is organized as follows: Section 2 details the principles of the LRC, SRC, CRC, and BPLRC algorithms and briefly introduces ESRC and Module LRC. Section 3 describes the decision principles and experimental results of the proposed method. Section 4 compares BPLRC with the other related algorithms in terms of further model performance metrics. Section 5 concludes this paper.

2. Materials and Methods

The LRC [18] method can effectively identify face images with illumination and expression changes, but it finds it challenging to identify occluded face images. According to the literature [14,17,20,21], among these linear models LRC identifies face images with illumination and expression changes more easily than the SRC, CRC, and ESRC algorithms, but its robustness is weaker than theirs. The BPLRC algorithm proposed in this paper improves robustness on the basis of the LRC algorithm so as to effectively identify face images with illumination changes, expression changes, and facial occlusion.

2.1. Related Works

Suppose that the training set samples are represented as $X = [X_1, X_2, \ldots, X_n]^{\mathrm{T}} \in \mathbb{R}^{n \times d}$, where $n$ is the number of categories contained in the training set samples and $d$ is the dimension of any sample. Suppose the training samples of the $i$-th subspace are $X_i = [x_{i1}, x_{i2}, \ldots, x_{in_i}]^{\mathrm{T}} \in \mathbb{R}^{n_i \times d}$, with $x_{ij} \in \mathbb{R}^{d}$ ($i = 1, 2, \ldots, n$) denoting the $j$-th sample of the $i$-th class, $n_i$ denoting the number of samples of the $i$-th class, and the test sample denoted as $y$.
The Linear Regression Classification algorithm is based on the assumption of subspace, and the data containing n categories are represented as n different subspace vectors, in which the samples belonging to the i-th category are represented as X i . The following is the specific principle of the LRC [18] algorithm when classifying the test samples.
Assuming that the test sample y belongs to the i-th class, it can be approximately represented as a linear combination of the training samples of the same category.
$y = X_i \beta_i, \quad i = 1, 2, \ldots, n,$  (1)
where $\beta_i$ is the representation coefficient of the $i$-th category of training samples.
Face recognition is expressed as a regression problem in the above formula, and the representation coefficient is obtained through the pseudo-inverse matrix.
$\hat{\beta}_i = (X_i^{\mathrm{T}} X_i)^{-1} X_i^{\mathrm{T}} y,$  (2)
where $y$ is the test sample vector.
The projection of the test sample $y$ onto each subspace can be expressed as:
$\hat{y}_i = X_i \hat{\beta}_i, \quad i = 1, 2, \ldots, n,$  (3)
$\hat{y}_i = X_i (X_i^{\mathrm{T}} X_i)^{-1} X_i^{\mathrm{T}} y,$  (4)
$\hat{y}_i = P_i y,$  (5)
where $\hat{\beta}_i$ is the predicted regression coefficient of the $i$-th category of training samples, $\hat{y}_i$ is the reconstruction vector of the linear model, and $P_i$ is the projection matrix.
The distance between the test sample vector and the projection of y on the i-th subspace can be expressed as:
$d_i(y) = \|y - \hat{y}_i\|_2, \quad i = 1, 2, \ldots, n.$  (6)
The category with the smallest Euclidean distance is selected as the discrimination result:
$\mathrm{identity}(y) = \arg\min_i \, d_i(y), \quad i = 1, 2, \ldots, n.$  (7)
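As an illustration only, the following is a minimal NumPy sketch of the LRC decision rule in Equations (1)–(7). The function name `lrc_classify` and the data layout (one list entry per class, rows as training samples) are our own assumptions, not the authors' implementation.

```python
import numpy as np

def lrc_classify(class_samples, y):
    """Minimal LRC sketch: class_samples is a list of (n_i x d) arrays holding
    the training samples of each class as rows; y is a length-d test vector.
    Returns the index of the predicted class."""
    residuals = []
    for X_i in class_samples:
        A = X_i.T                                    # d x n_i: columns are samples
        # Least-squares coefficients, as in Eq. (2)
        beta_i, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ beta_i                           # projection onto the class subspace, Eq. (3)
        residuals.append(np.linalg.norm(y - y_hat))  # Euclidean distance, Eq. (6)
    return int(np.argmin(residuals))                 # nearest subspace, Eq. (7)
```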

2.2. Other Related Algorithms

Sparse Representation-based Classification (SRC) extends linear regression by penalizing the regression coefficients. The SRC [10,11] algorithm introduces the L1 norm to constrain the regression coefficients so that the coefficient vector contains more zero values, which is equivalent to using the Lasso regression model. The SRC algorithm first encodes the test sample as a sparse linear combination of all the training samples and then makes the final decision by comparing which category yields the smallest reconstruction error.
The sparse coefficients of the SRC model can be obtained equivalently from the Lasso regression model:
$\hat{\beta} = \arg\min_{\beta} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1,$  (8)
where $X$ contains all the training samples, $y$ is the test sample, $\beta$ is the regression coefficient of $X$, and $\|\cdot\|_1$ is the L1 norm.
After the sparse coefficients are determined, the SRC algorithm proceeds similarly to the LRC algorithm: Equations (3), (6), and (7), with the Euclidean distance as the measure, determine the category of the test sample.
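For illustration, the sparse coefficients of Equation (8) can be obtained with an off-the-shelf Lasso solver and the class decided by per-class reconstruction residuals. This is a hedged sketch under assumed names and a default regularization value; note that scikit-learn's Lasso scales the data-fit term, so its `alpha` corresponds to the λ of Equation (8) only up to a constant factor.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X_cols, labels, y, lam=0.01):
    """SRC sketch: X_cols is d x N (training samples as columns), labels has
    length N, y is a length-d test vector."""
    # Sparse coefficients via a Lasso solver, approximating Eq. (8)
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    lasso.fit(X_cols, y)
    beta = lasso.coef_
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Per-class reconstruction residual, as in Eqs. (3) and (6)
    residuals = [np.linalg.norm(y - X_cols[:, labels == c] @ beta[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]        # smallest residual wins, Eq. (7)
```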
Since the Lasso problem has no analytical solution, its computational complexity is much greater than that of a closed-form classifier. Later, the L2 norm was introduced to constrain the regression coefficients instead, yielding the Collaborative Representation-based Classification (CRC) method.
The CRC [14] problem is equivalent to the ridge regression problem. It is similar to the SRC algorithm in that it similarly encodes the test samples as a linear combination of all the training samples. The CRC algorithm performs the L2 norm constraint on the regression coefficient and its sparse coefficient:
$\hat{\beta} = \arg\min_{\beta} \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2,$  (9)
where $X$ contains all the training samples, $\beta$ is the regression coefficient of $X$, $\lambda$ is the regularization coefficient, and $\|\cdot\|_2$ is the L2 norm.
The regression coefficients in Equation (9) have analytical solutions with the expression:
$\hat{\beta} = (X^{\mathrm{T}} X + \lambda I)^{-1} X^{\mathrm{T}} y.$  (10)
By combining Equations (3), (6), and (7), the residuals of the CRC algorithm and the predicted test sample category can be expressed as:
$d_i(y) = \dfrac{\|y - X_i \hat{\beta}_i\|_2}{\|\hat{\beta}_i\|_2}, \quad i = 1, 2, \ldots, n,$  (11)
$\mathrm{identity}(y) = \arg\min_i \, d_i(y), \quad i = 1, 2, \ldots, n,$  (12)
where $\hat{\beta}_i$ is the coefficient corresponding to the $i$-th category of training samples.
The SRC and CRC algorithms improve the linear regression classifier by imposing the L1 and L2 norm, respectively, on the regression coefficients, which can somewhat suppress the influence of noise on the linear model. Nevertheless, the SRC algorithm is more computationally expensive than the CRC algorithm.
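By contrast, the CRC coefficients of Equation (10) are available in closed form, so the classifier needs no iterative solver. The sketch below is our own illustration, with an assumed data layout and λ value; it uses the regularized residual of Equation (11) and the decision rule of Equation (12).

```python
import numpy as np

def crc_classify(X_cols, labels, y, lam=0.5):
    """CRC sketch: X_cols is d x N (training samples as columns), y has length d."""
    d, N = X_cols.shape
    # Closed-form coefficients, Eq. (10): beta = (X^T X + lam*I)^{-1} X^T y
    beta = np.linalg.solve(X_cols.T @ X_cols + lam * np.eye(N), X_cols.T @ y)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    scores = []
    for c in classes:
        mask = (labels == c)
        beta_c = beta[mask]
        # Regularized residual, Eq. (11): ||y - X_c beta_c||_2 / ||beta_c||_2
        scores.append(np.linalg.norm(y - X_cols[:, mask] @ beta_c)
                      / (np.linalg.norm(beta_c) + 1e-12))
    return classes[int(np.argmin(scores))]            # decision rule, Eq. (12)
```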
The Euler Sparse Representation-based Classification (ESRC [17]) algorithm is similar to the SRC algorithm but takes the Euler distance as its measure, which expands both intra-class and inter-class distances. For some data, the inter-class distances are expanded by a larger factor than the intra-class distances, thus improving the robustness of the SRC algorithm. Details of the ESRC implementation and of the process of mapping images to the complex space can be found in [17].

2.3. The Proposed Method

The workflow of the BPLRC method is shown in Figure 1 under the assumption that the image is divided into 5 blocks. When an image retains only one block, the principle of the BPLRC algorithm is equivalent to Module LRC; when the image retains all blocks, it is equivalent to LRC. The scheme space of the proposed method therefore contains both the single-block schemes of Module LRC and the full-retention scheme of LRC, so theoretically its recognition effect will be better than LRC and Module LRC. BPLRC aims to find the best facial features for classification. The proposed method first divides all training images and a test image into blocks and groups the schemes according to the number of retained blocks. As shown in the figure, the numbers of arrangements from the first to the fifth group are 5, 10, 10, 5, and 1, respectively. A linear model is then constructed for each scheme, the scheme with the minimum residual is determined, and the identification result of that scheme is obtained. Finally, samples are drawn one after another from the test set and the above procedure is repeated.
Assuming that the training samples contain some noise when the test samples have the same noise, test samples can be linearly represented by the training samples. However, if the training samples contain no noise and the test samples contain a lot of noise, the linear model will be invalid. Therefore, selecting the sample variables and filtering out the variables with noise in the samples are necessary, which can then help train a more robust linear regression model. The proposed method divides the variables into blocks and performs permutation, combination, and reorganization. The best combination method is then judged by Euclidean distance (residual value) to achieve the purpose of removing continuous noise variables. BPLRC aims to find face characteristics that contain only a small amount of noise or no noise. The following is the basic process of the BPLRC method:
Suppose the training samples on the i-th subspace are divided into T blocks such that each sub-image can be represented as:
$U_i^{(t)} = [w_{i1}^{(t)}, w_{i2}^{(t)}, \ldots, w_{ip_i}^{(t)}], \quad i = 1, 2, \ldots, n,$  (13)
where $w_{ip_i}^{(t)}$ is the grayscale value at the $p_i$-th pixel of the $t$-th block of the $i$-th class image.
The number of groups of arrangement schemes is determined by the number of divided blocks, and the block arrangement and combination of the i-th type of training samples in the first group can be expressed as:
$X_i^{(11)} = [U_i^{(1)}, O^{(2)}, O^{(3)}, O^{(4)}, \ldots, O^{(T)}]^{\mathrm{T}}, \;\ldots,\; X_i^{(1m_1)} = [O^{(1)}, O^{(2)}, \ldots, U_i^{(m_1)}, \ldots, O^{(T)}]^{\mathrm{T}}, \quad m_1 = 1, 2, \ldots, C_T^1,$  (14)
where $O$ is a zero matrix and $U_i^{(m_1)}$ is the $m_1$-th block of class $i$.
Different groups retain a certain number of sub-images; for example, the second group retains 2 sub-images and the $T$-th group retains $T$ sub-images. Therefore, the recombined training samples of class $i$ in groups 2 through $T$ can be expressed as:
$X_i^{(2m_2)}, X_i^{(3m_3)}, \ldots, X_i^{(tm_t)}, \quad t = 1, 2, \ldots, T; \; m_t = 1, 2, \ldots, C_T^t,$  (15)
where $m_t$ is the index of the $m_t$-th arrangement and $C_T^t$ is the number of arrangements.
Similar to Equations (14) and (15), the recombination test samples of different groups can be expressed as:
$y^{(1m_1)}, y^{(2m_2)}, \ldots, y^{(tm_t)}, \quad t = 1, 2, \ldots, T; \; m_t = 1, 2, \ldots, C_T^t,$  (16)
where $m_t$ is the index of the $m_t$-th arrangement and $C_T^t$ is the number of arrangements. Equations (14)–(16) represent the different segmentation arrangements of an image. Following the idea of Module LRC, this article compares the residuals of all schemes and selects the arrangement scheme with the minimum residual in order to obtain the best face features for identification. BPLRC reduces the amount of continuous occlusion that the linear model learns and increases the amount of effective face information it learns, thereby building a robust linear model.
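The arrangement schemes of Equations (14)–(16) can be enumerated with standard combinatorial tools, as in the sketch below; the function and variable names are our own illustration.

```python
from itertools import combinations

def enumerate_schemes(T):
    """Return a dict mapping group t (number of retained blocks) to the list of
    block-index tuples kept in that group, as in Equations (14)-(16)."""
    return {t: list(combinations(range(T), t)) for t in range(1, T + 1)}

# Example with T = 5 blocks: group sizes are C(5,1)..C(5,5) = 5, 10, 10, 5, 1.
schemes = enumerate_schemes(5)
print({t: len(s) for t, s in schemes.items()})  # {1: 5, 2: 10, 3: 10, 4: 5, 5: 1}
```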
The coefficient vector and prediction vector for each group were calculated in the same manner as the expressions in Equations (2) and (3):
$\hat{\beta}_i^{(tm_t)} = \left( (X_i^{(tm_t)})^{\mathrm{T}} X_i^{(tm_t)} \right)^{-1} (X_i^{(tm_t)})^{\mathrm{T}} y^{(tm_t)},$  (17)
$\hat{y}_i^{(tm_t)} = X_i^{(tm_t)} \hat{\beta}_i^{(tm_t)},$  (18)
where $X_i^{(tm_t)}$ contains all the training samples of class $i$ in the $t$-th group under scheme $m_t$.
In the same arrangement, the distance between the test vector and its projection on the i-th subspace is:
$d_i(y^{(tm_t)}) = \|y^{(tm_t)} - \hat{y}_i^{(tm_t)}\|_2, \quad i = 1, 2, \ldots, n,$  (19)
where $y^{(tm_t)}$ is the test sample under the $m_t$-th arrangement scheme in the $t$-th group and $\hat{y}_i^{(tm_t)}$ is the prediction vector of class $i$ in the $t$-th group under scheme $m_t$.
The distance to the subspace nearest to the test vector under the same arrangement is then selected:
$d^{(tm_t)} = \min_i \, d_i(y^{(tm_t)}), \quad i = 1, 2, \ldots, n,$  (20)
where $d_i(y^{(tm_t)})$ is the distance between the test sample vector and the $i$-th subspace in the $t$-th group under scheme $m_t$.
The nearest-subspace distances of the different arrangements in the same group are compared, and the arrangement with the smallest distance is selected as the optimal arrangement of the group:
$D_t = \arg\min_{m_t} \, d^{(tm_t)}, \quad m_t = 1, 2, \ldots, C_T^t,$  (21)
where $d^{(tm_t)}$ is the distance (residual) of the $m_t$-th arrangement in the $t$-th group.
Under the optimal scheme, the subspace closest to the recombination test sample is selected as the prediction result:
$\mathrm{identity}(y) = \arg\min_i \, d_i(y^{(tD_t)}), \quad i = 1, 2, \ldots, n,$  (22)
where $d_i(y^{(tD_t)})$ is the distance between the recombined test sample vector and the $i$-th subspace under the optimal scheme of the $t$-th group.
Assuming that the number of test samples is $S$, the true label is $A_t^{s_t} \in \{1, \ldots, C\}$, and the predicted label is $\hat{A}_t^{s_t}$, the model's recognition accuracy can be expressed as:
$acc_t = \dfrac{1}{S} \sum_{s_t = 1}^{S} \mathrm{I}\left( A_t^{s_t} = \hat{A}_t^{s_t} \right), \quad t = 1, 2, \ldots, T,$  (23)
where $\mathrm{I}(\cdot)$ is the indicator function.
According to Equation (23), the recognition accuracy of each group was obtained, and the final result was obtained by comparison:
$\mathrm{result} = \max_t \, acc_t, \quad t = 1, 2, \ldots, T,$  (24)
where $acc_t$ is the identification accuracy over all test samples in the $t$-th group.
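Putting Equations (17)–(22) together, the following sketch classifies a single test image: for every retained-block scheme it concatenates the kept blocks, fits a per-class least-squares model, and keeps the scheme and class with the smallest residual. As a simplification, it takes the globally smallest residual over all schemes, whereas the paper selects the best arrangement within each group and then compares per-group accuracies via Equations (23) and (24); the function names and block layout are our own assumptions.

```python
import numpy as np
from itertools import combinations

def bplrc_classify(train_blocks, test_blocks):
    """Hedged BPLRC sketch. train_blocks: list over classes, each a list of T
    arrays of shape (n_i, p_t) holding the t-th block of every training image
    of that class. test_blocks: list of T 1-D arrays (blocks of the test image).
    Returns (predicted_class_index, best_scheme)."""
    T = len(test_blocks)
    best_class, best_scheme, best_res = None, None, np.inf
    for t in range(1, T + 1):                            # group t keeps t blocks
        for scheme in combinations(range(T), t):         # one arrangement, Eq. (15)
            y = np.concatenate([test_blocks[b] for b in scheme])        # Eq. (16)
            for c, blocks_c in enumerate(train_blocks):
                X_c = np.concatenate([blocks_c[b] for b in scheme], axis=1)
                beta, *_ = np.linalg.lstsq(X_c.T, y, rcond=None)        # Eq. (17)
                res = np.linalg.norm(y - X_c.T @ beta)                  # Eqs. (18)-(19)
                if res < best_res:                       # keep the smallest residual
                    best_class, best_scheme, best_res = c, scheme, res
    return best_class, best_scheme
```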
In addition to recognition accuracy, precision and recall can also be used to evaluate the model. They can be expressed as:
$\mathrm{Precision} = \dfrac{TP}{TP + FP},$  (25)
$\mathrm{Recall} = \dfrac{TP}{TP + FN},$  (26)
where $TP$ is the number of positive samples predicted as positive, $FP$ is the number of negative samples predicted as positive, and $FN$ is the number of positive samples predicted as negative.
The F1-score is the harmonic mean of precision and recall:
$F1\text{-}score = \dfrac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$  (27)
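For completeness, here is a small self-contained sketch of Equations (25)–(27) computed per class; the function name and label encoding are our own assumptions, and the per-class (rather than averaged) form matches the category counts discussed in Section 4.

```python
import numpy as np

def per_class_prf(y_true, y_pred, n_classes):
    """Per-class precision, recall, and F1-score as in Eqs. (25)-(27)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    prec, rec, f1 = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))   # false positives
        fn = np.sum((y_pred != c) & (y_true == c))   # false negatives
        p = tp / (tp + fp) if tp + fp else 0.0       # Eq. (25)
        r = tp / (tp + fn) if tp + fn else 0.0       # Eq. (26)
        prec.append(p); rec.append(r)
        f1.append(2 * p * r / (p + r) if p + r else 0.0)  # Eq. (27)
    return prec, rec, f1
```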

3. Results

3.1. Data Sources and Operating Environment

The effectiveness of the proposed method was demonstrated based on three standard databases, namely AR [22], Extended Yale B [23], and ORL [24]. These databases contain several deviations from ideal conditions, including pose, lighting, occlusion, and gesture changes. Appropriate experimental results have demonstrated that the developed method performs well for severe continuous occlusion with small changes in pose, scale, illumination, and rotation. All the above experiments were run on the Windows 10 operating system (Intel Core i7-4770 CPU M620 @ 3.40 GHz and 8 GB RAM), and the programming environment was Python 3.7.

3.2. Selection of Optimal Block Arrangement and Combination Scheme

Taking the AR data as an example: the AR face database subset [22], comprising 50 males and 50 females, contains 2600 images in total. In the experiment, eight images without facial occlusion (e.g., smiling, not smiling, and brightness changes) were selected as training samples for each object, and three face images occluded by sunglasses and three occluded by scarves were selected as test samples for each object. A total of 100 objects were selected. Following references [14,17,18,20,21,25], the images, with a resolution of 165 × 120 pixels, were downsampled to 15 × 10 pixels, 20 × 15 pixels, and 25 × 20 pixels for the experiments.
The occluded face image contains face information and non-face information. Since the distribution of non-face information cannot be perceived in advance by algorithms, it is difficult to distinguish between face information and non-face information. Figure 2 shows a part of the occluded face images from the AR subset.
In this paper, the block arrangement method was adopted. All the blocks were arranged, combined, and finally compared to the residual values of each scheme to determine the final arrangement (see Figure 3, Figure 4 and Figure 5). During this process, the approximate position of the obstructions, such as sunglasses or scarves, was determined. Considering the amount of computation, when taking three blocks in the five-block image as an example, the number of permutation schemes was 10. Only the horizontal block combination is shown in the figure, but the vertical block is also considered. The residual calculation of the arrangement of scheme h is shown in Figure 5, and Figure 6 shows the minimum residuals of face images involving sunglasses with different permutations.
It can be seen from Figure 4 that this group retains 60% of the image information, and the sunglasses part was removed in scheme h. The distance between the test sample and the subspace projected by all training samples of an object was considered as the basis for the selection of the scheme. It can be seen from Figure 6 that the residual value of scheme h (preserved block positions one, four, and five) was the smallest. This means that the reorganized training samples in Figure 3h were constructed as an optimal linear model to predict the category of the test image in Figure 4h. Thus, the prediction result of scheme h was selected at this time.
The test sample contained face images involving scarves as an example, and the situation after the reorganization is shown in Figure 7. The residual calculation of the arrangement of scheme a is shown in Figure 8, and Figure 9 shows the minimum residuals of face images involving scarves with different permutations.
As shown in Figure 7, the scarf part was removed in scheme a. It can be seen from the residual values of each scheme in Figure 9 that the residual value of scheme a (reserved block positions one, two, and three) was the smallest. This means that the reconstituted training samples in Figure 3a were built using the optimal linear model to predict the class of the test image in Figure 7a. Therefore, the prediction result of scheme a was selected at this time. Similarly, the optimal arrangement and combination of one, two, four, and five sub-images were also similar to the above examples.

3.3. Continuous Occlusion Result for the AR Database

According to the above selection, the optimal block arrangement and combination was applied to the AR datasets (this method was also applied to the Extended Yale B and ORL datasets later). For data processing, the experiment divided the test analysis into three parts: the first part was the recognition of face images occluded by scarves, the second part was the recognition of face images occluded by sunglasses, and the third part was the analysis of face images occluded by scarves and by sunglasses. When the BPLRC algorithm was applied, the face images were all horizontally divided into five blocks. The results of AR dataset recognition are shown in Table 1. In the table, ESRC stands for the Euler Sparse Representation Classification (ESRC) [17] algorithm. The experimental parameters were set according to [17], where λ = 1.9 and α = 0.5. The results show that the proposed BPLRC method was significantly better than the LRC, SRC, CRC, ESRC, and Module LRC algorithms in all three parts of the AR database. Among these methods, LRC is easily affected by occlusion features: in the AR datasets, the scarf part of a test image was linked to the beard characteristics of males in the training images, and learning the wrong characteristics caused poor recognition performance. Although the SRC, ESRC, and CRC algorithms can also connect the scarf parts with the characteristics of a male beard, their coefficients are constrained through the L1 and L2 norms, thus limiting their ability to learn in the wrong direction to a certain extent. Therefore, the accuracy of the SRC, ESRC, and CRC algorithms in recognizing scarf-occluded face images was higher than that of the LRC algorithm. The Module LRC and BPLRC methods extract the effective face characteristics as much as possible; unlike the LRC, SRC, ESRC, and CRC algorithms, they learn only a little noise information. The BPLRC method is similar to Module LRC, but the former considers retaining more blocks; the retained face information is therefore more effective than with the Module LRC method, so the linear models learn more face characteristics. Therefore, the BPLRC algorithm identifies occluded face images better than the other related algorithms.
The LRC and CRC methods are relatively simple, and their calculation time is short: for 25 × 20 pixel images, the average time per image with the LRC and CRC methods was 0.61 s and 0.98 s, respectively. The advantage of the LRC method is that it is very simple to compute; however, it is easily affected by abnormal values, and when the test image contains continuous occlusion its recognition performance decreases sharply. The BPLRC method proposed in this article improves robustness on the basis of LRC, but the calculation cost increases. Table 2 shows that the calculation time of SRC and ESRC was much higher than that of the BPLRC method, while Table 1 shows that their robustness was not higher than that of BPLRC.

3.4. Robust Regression Model Based on Occlusion Training

In practice, there may be too few images for each object, and these images may contain a lot of non-face information. A face image with a scarf-occluded face was used as the training sample, and face images without occlusion but with different lighting conditions were used as the recognition objects (see Figure 10). Table 3 shows the results of identifying downsampled images with a resolution of 25 × 20 pixels. The results in Table 3 and Figure 11 prove that the algorithm has strong robustness and outperforms the other algorithms, even with a small number of training samples.

3.5. Analysis of Verification Results of Different Data

3.5.1. Continuous Occlusion Results for the Extended Yale B Database

The Extended Yale B [23] face database comprises 64 face images per object captured under different lighting conditions (see Figure 12). There are a total of 38 objects, of which the 11th and 13th objects have only 60 images; the 12th contains 59, the 15th has 62, and the 14th, 16th, and 17th have 63. Therefore, in the experiment, the first 26 images of each object were selected as training samples, and the last 33 images were randomly occluded and used as test samples. A total of 38 objects were selected. Each image was downsampled from the original 165 × 120 pixels to 15 × 10 pixels, 20 × 15 pixels, and 25 × 20 pixels for the analysis.
Figure 13 shows occlusion maps of different proportions that were applied to the test samples. The results of Extended Yale B database recognition are shown in Table 4 and Table 5. The results in the tables show that the proposed BPLRC method was significantly better than the LRC, SRC, CRC, ESRC, and Module LRC (five-block processing) algorithms. For 25 × 20 pixel images, the average time taken by the BPLRC algorithm to identify each image was 0.08 s (see Table 6), which is not very long. Moreover, a calculation time longer than that of the LRC algorithm is sometimes a worthwhile trade for higher robustness; for example, in Table 4, the higher the occlusion rate of the test image, the greater the difference between the accuracy of the BPLRC and LRC methods.
However, randomly generated occlusion blocks do not cover all occlusion cases in the Extended Yale B dataset, so the two special cases of vertical and diagonal occlusion also need to be considered. Vertical occlusion of 10% of the face is shown in Figure 14a,b, and diagonal occlusion of 20% of the face is shown in Figure 14c,d.
It can be seen from Table 5 that, in most cases, the recognition accuracy of BPLRC is higher than that of LRC, SRC, CRC, ESRC, and Module LRC, as it was when images were randomly occluded by 10%. An exception is the case of 20% diagonal occlusion, where the SRC method achieves better recognition accuracy than BPLRC for images downsampled to 15 × 10 pixels; the main reason is the particular occlusion distribution and the small size of the image after downsampling. The results in Table 5 show that good results can also be obtained with the BPLRC method in the special case of diagonal face occlusion.

3.5.2. Continuous Occlusion Results with the ORL Database

There were 10 grayscale images per object in the ORL [24] face database for a total of 40 objects. Some of these images were different in terms of shooting time, lighting, facial expressions (eyes open/closed, smiling), and facial details (glasses). All images in the ORL database were selected for the experiment. The first six images were used as training samples. The last four images were randomly occluded as test samples (see Figure 15). The pixel size of each image was downsampled from the original 112 × 92 pixels to 15 × 10 pixels, 20 × 15 pixels, and 25 × 20 pixels for the experiments.
The processing method for the ORL database was consistent with that for the Extended Yale B database. The results of ORL database recognition are shown in Table 7 and Table 8, and Table 9 shows the calculation times of the different methods for identifying 160 images in the ORL dataset. The proposed BPLRC method also significantly outperformed the LRC, SRC, CRC, ESRC, and Module LRC (five-block processing) algorithms on this database. The computation time of the BPLRC method is moderate, and its computational complexity is lower than that of the SRC and ESRC methods. Taken together, the results of BPLRC on the AR, Extended Yale B, and ORL datasets demonstrate the effectiveness of the proposed method for solving the face occlusion problem.
In the ORL datasets, the two special cases of vertical occlusion of the face and diagonal occlusion of the face were also considered. Vertical occlusion of 10% of the face is shown in Figure 16a,b, and diagonal occlusion of 20% of the face is shown in Figure 16c,d.
It can be seen from Table 8 that, compared to face images randomly occluded by 10% and 20%, the recognition accuracy of the BPLRC method decreases. However, the BPLRC method still obtains the best results compared to the other related methods.

4. Discussion

As has been reported in the literature [10,11,14,18], LRC, SRC, and CRC use the Euclidean distance to measure the residual between the model reconstruction and the test image, and each has its own advantages and disadvantages. From the above experimental results, ESRC identification is, in many cases, inferior to the SRC method. The proposed BPLRC algorithm aims to determine the best face features for recognition and then uses the LRC method to classify them; therefore, the measurement method of the BPLRC algorithm is the Euclidean distance.
To fully verify that the BPLRC method's recognition performance is better than that of the other methods, in addition to the recognition accuracy indicator, this section introduces precision, recall, and F1-score indicators to evaluate the model. Because the AR, Extended Yale B, and ORL datasets include many categories, each category cannot be evaluated individually. All the methods involved in this paper were therefore evaluated globally by counting the number of categories for which precision, recall, and F1-score were greater than 0.7. When precision, recall, and F1-score are equal to 0, the samples of a certain category are incorrectly identified; the more such categories there are, the worse the model's performance.

4.1. Continuous Occlusion Analysis of the AR Database

The AR dataset contains 100 classes of target faces, and discussing the assessment metrics of every category would take considerable time and effort. We therefore set the threshold for precision, recall, and F1-score to 0.7 and analyzed the number of categories whose values exceeded 0.7. If the threshold were too large or too small, the counts would be close together and it would not be easy to compare the performance of the methods below.
As shown in Table 10, when BPLRC identifies face images with scarf occlusion, sunglasses occlusion, or mixed occlusion, its number of categories with precision, recall, and F1-score > 0.7 is greater than or equal to that of the other methods, and its number of categories with precision, recall, and F1-score equal to 0 is smaller than that of the other methods. These model evaluation indicators, combined with the recognition accuracy from the experimental part, show that the proposed method can effectively solve the face occlusion problem in the AR face dataset. Other relevant algorithms, such as the LRC, SRC, CRC, and ESRC methods, have poor identification performance, partly because they are linear models and are therefore susceptible to anomalous variables or abnormal points. The key difference of the SRC, CRC, and ESRC methods compared to the LRC method is that they regularize the model coefficients, which improves the robustness of the linear model to some extent; in the results for scarf occlusion, the SRC, CRC, and ESRC methods are much better than the LRC method. The results for sunglasses occlusion, however, show that the LRC method is superior to the SRC, CRC, and ESRC methods, which indicates that a face image linearly represented by face images of the same category is generally represented better than one linearly represented by face images of all categories. To circumvent the drawback that LRC is extremely poor in robustness, block arrangement is combined with the LRC method. Block arrangement gives LRC excellent recognition performance and ensures that more useful face information is extracted. Module LRC retains only the most useful block and thus keeps too little effective information, which leads to its inferior recognition performance compared to BPLRC.

4.2. Extended Yale B Database Analysis and Discussion

The Extended Yale B dataset includes 38 target faces. As shown in Table 11, when BPLRC recognizes face images with 10%, 20%, and 30% occlusion, its number of categories with precision, recall, and F1-score > 0.7 is greater than that of the other methods. When BPLRC identifies face images with 40% occlusion, its number of categories with precision > 0.7 is greater than that of the other methods, but its numbers of categories with recall and F1-score > 0.7 are smaller than those of the Module LRC method. In fact, in face recognition, the precision indicator is more important than recall and F1-score. For example, in a home access control system, if the precision value is too low, strangers may be identified as family members; if the recall value is too low, even family members may not be identified (though they can still enter the house by entering a password), and there will be no severe theft incidents. The F1-score is the harmonic mean of the two metrics and simply provides a summary evaluation of model performance. According to the results in Table 11, the number of categories with precision, recall, or F1-score equal to 0 is only 1 or 0; thus, the model's identification performance cannot be judged from the number of categories with precision, recall, or F1-score equal to 0.
In summary, from the number of categories for each method with precision >0.7, the proposed method identifies 10%, 20%, 30%, and 40% of faces, vertical occlusion of 10% of faces, and diagonal occlusion of 20% of face images better than other related algorithms.

4.3. ORL Database Analysis and Discussion

The ORL dataset contains 40 classes of target faces. As shown in Table 12, when BPLRC identifies face images with random occlusion of 10%, 20%, and 30%, 10% vertical occlusion, and 20% diagonal occlusion, its number of categories with precision, recall, and F1-score greater than 0.7 is greater than that of the other related methods, and its number of categories with precision, recall, and F1-score equal to 0 is less than or equal to that of the other related algorithms. Therefore, the BPLRC method solves the face occlusion problem in the ORL dataset more effectively than the other related algorithms, showing stronger robustness than LRC, SRC, CRC, ESRC, and Module LRC. Combined with the identification accuracy obtained in the previous experiments, this fully confirms that BPLRC improves upon both LRC and Module LRC.

4.4. Comparison of Differences in Algorithms

The LRC method uses individual categories of training samples to represent the test samples linearly, whereas the SRC, CRC, and ESRC methods use all categories of training samples. Other differences are shown in Table 13: SRC is equivalent to the Lasso regression model and constrains the regression coefficients using the L1 norm; CRC is equivalent to the ridge regression model and constrains the regression coefficients using the L2 norm; ESRC replaces the metric of the SRC method with the Euler distance. The Module LRC and BPLRC methods both determine the face features that benefit the LRC method and then determine the category to which the image belongs. The BPLRC approach is similar to Module LRC but additionally considers all block combinations.
Table 13 shows the differences between the LRC, SRC, CRC, ESRC, Module LRC, and BPLRC methods. The linear model is susceptible to contaminated data [26]: LRC is directly affected by contaminated data, while the other RBCM algorithms are more robust than the LRC algorithm. Notably, Module LRC and BPLRC can identify the target images by effectively using face features. If a small number of non-face images exist in the test set, the influence function in the literature [27] can be used to obtain a clean sample set. From the recognition results on the three face datasets, Module LRC and BPLRC are more suitable for recognizing occluded face images and have strong robustness. The three datasets contain various face images with illumination and expression changes; therefore, all RBCM methods (including LRC, SRC, CRC, ESRC, Module LRC, and BPLRC) can achieve good results when the occlusion ratio is low, which at the same time shows that RBCM methods can effectively solve the problem of illumination and facial expression changes.

5. Conclusions

The method proposed in this paper combines local image information into a whole, reflecting both the local and the overall information of the image. Taking face images with scarves from the AR dataset downsampled to 25 × 20 pixels as an example, the recognition accuracy of the BPLRC algorithm on scarf-occluded face images was 93.67%; the numbers of categories with precision, recall, and F1-score greater than 0.7 were 86, 94, and 93, respectively, and the number of categories with precision, recall, and F1-score equal to 0 was 1. These indicators reflect how well the various algorithms perform in identification. Furthermore, experiments on the Extended Yale B, ORL, and AR datasets showed that the BPLRC algorithm was significantly better than other related classification methods at identifying images with continuous occlusion. The LRC, SRC, ESRC, and CRC algorithms do not remove occlusions, which causes them to learn a lot of noise; as a result, their performance is worse than that of the Module LRC and BPLRC models. Although Module LRC removes the continuous occlusion part, it considers retaining only one block and may therefore retain less effective face information than the BPLRC method. Therefore, the BPLRC algorithm identifies face images better than the other related algorithms.
BPLRC recombines training and test samples through block arrangement and combination, but the number of combinations increases exponentially with the number of blocks. Compared to the LRC algorithm, the proposed method selects the best face features in the image while reducing the negative impact of occlusion on the model; for example, the LRC method easily attributes scarf-occluded face images to categories with heavy beards or other characteristics similar to scarves. Compared to the Module LRC algorithm, the novelty of this algorithm lies in retaining as many image block schemes as possible in order to keep useful face characteristics. Additionally, the block arrangement strategy can be combined with other algorithms that are less robust. When the number of blocks is five, the average time taken by BPLRC on the AR datasets for an image with a size of 25 × 20 pixels is 0.094 s, and the amount of calculation is relatively small. However, when the number of image divisions is too large, the many arrangement schemes greatly reduce the recognition speed of the BPLRC algorithm. In response to this defect of the BPLRC algorithm, future work will study fast search methods that find the optimal or near-optimal arrangement among the many schemes, reducing the amount of calculation and making it possible to divide the image into more blocks.

Author Contributions

Conceptualization, data curation, software, validation, writing—original draft preparation, visualization, J.X.; conceptualization, writing—review and editing, funding acquisition, project administration and formal analysis, X.C. (Xiaojing Chen); conceptualization, writing—review and editing, and resources, Z.X.; writing—review and editing, and resources, S.A.; resources, L.Y. and X.C. (Xi Chen); writing—review and editing, formal analysis, and validation, W.S.; software, writing—review and editing, and supervision, G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Wenzhou Social Development (Medical and Health) Science and Technology Project grant number [ZY2021027] and National Natural Science Foundation of China grant number [62105245, 61893096014].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors would like to acknowledge the financial support provided by the Natural Science Foundation of Zhejiang (LY21C200001 and LQ20F030059) and the Wenzhou Science and Technology Bureau General Project (S2020011 and G20200044).

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Kamarainen, J.K.; Kyrki, V.; Kalviainen, H. Invariance properties of Gabor filter-based features-overview and applications. IEEE Trans. Image Process. 2006, 15, 1088–1099.
2. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893.
3. Zhang, B.; Gao, Y.; Zhao, S.; Liu, J. Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor. IEEE Trans. Image Process. 2009, 19, 533–544.
4. Turk, M.A.; Pentland, A.P. Face recognition using eigenfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, 3–6 June 1991; pp. 586–591.
5. Huang, G.; Chen, X.; Chen, X.; Chen, X.; Shi, W. A one-class feature extraction method based on space decomposition. Soft Comput. 2022, 26, 5553–5561.
6. Wall, M.E.; Rechtsteiner, A.; Rocha, L.M. Singular value decomposition and principal component analysis. In A Practical Approach to Microarray Data Analysis; Springer: Boston, MA, USA, 2003; pp. 91–109.
7. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
8. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, Third Edition. J. Biomed. Opt. 2009, 14, 29901.
9. Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883.
10. Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S. Sparse Representation for Computer Vision and Pattern Recognition. Proc. IEEE 2010, 98, 1031–1044.
11. Shi, Y.; Dai, D.; Liu, C.; Yan, H. Sparse discriminant analysis for breast cancer biomarker identification and classification. Prog. Nat. Sci. 2009, 19, 1635–1641.
12. Mairal, J.; Ponce, J.; Sapiro, G.; Zisserman, A.; Bach, F. Supervised dictionary learning. Adv. Neural Inf. Process. Syst. 2008, 21. Available online: https://proceedings.neurips.cc/paper/2008 (accessed on 6 October 2022).
13. Yang, J.; Zhang, L.; Xu, Y.; Yang, J. Beyond sparsity: The role of L1-optimizer in pattern classification. Pattern Recognit. 2012, 45, 1104–1118.
14. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011.
15. Xu, Y.; Zhang, D.; Yang, J.; Yang, J.Y. A Two-Phase Test Sample Sparse Representation Method for Use with Face Recognition. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 1255–1262.
16. Tang, D.; Zhou, S.; Yang, W. Random-filtering based sparse representation parallel face recognition. Multimed. Tools Appl. 2018, 78, 1419–1439.
17. Liu, Y.; Gao, Q.; Han, J.; Wang, S. Euler sparse representation for image classification. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
18. Naseem, I.; Togneri, R.; Bennamoun, M. Linear Regression for Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2106–2112.
19. Pentland, A.; Moghaddam, B.; Starner, T. View-based and modular eigenspaces for face recognition. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994.
20. Luan, X.; Fang, B.; Liu, L.; Zhou, L. Face recognition with contiguous occlusion using linear regression and level set method. Neurocomputing 2013, 122, 386–397.
21. Mi, J.X.; Zhu, Q.; Luo, Z. Matrix regression-based classification with block-Norm. Pattern Recognit. Lett. 2019, 125, 654–660.
22. Martinez, A.; Benavente, R. The AR Face Database: CVC Technical Report, No. 24. 1998. Available online: https://portalrecerca.uab.cat/en/publications/the-ar-face-database-cvc-technical-report-24 (accessed on 6 October 2022).
23. Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D.J. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 643–660.
24. Lee, K.C.; Ho, J.; Kriegman, D.J. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 684–698.
25. Shen, F.; Shen, C.; Zhou, X.; Yang, Y.; Shen, H.T. Face image classification by pooling raw features. Pattern Recognit. 2016, 54, 94–103.
26. Xie, Z.; Chen, X. Partial least trimmed squares regression. Chemom. Intell. Lab. Syst. 2022, 221, 104486.
27. Xie, Z.; Chen, X. Subsampling for partial least-squares regression via an influence function. Knowl.-Based Syst. 2022, 245, 108661.
Figure 1. The overall workflow of the BPLRC method (with 5 blocks) [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 2. Faces from the AR subset occluded by sunglasses and scarves: (a) face images occluded by sunglasses; (b) face images occluded by scarves [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 3. Permutation scheme for the third group of training sub-images: (a) reserved block 1, 2, 3; (b) reserved block 1, 2, 4; (c) reserved block 1, 3, 4; (d) reserved block 2, 3, 4; (e) reserved block 1, 2, 5; (f) reserved block 1, 3, 5; (g) reserved block 2, 3, 5; (h) reserved block 1, 4, 5; (i) reserved block 2, 4, 5; (j) reserved block 3, 4, 5 [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 4. Arrangement and combination scheme of a certain test sub-image (involving sunglasses) in the third group: (a) reserved block 1, 2, 3; (b) reserved block 1, 2, 4; (c) reserved block 1, 3, 4; (d) reserved block 2, 3, 4; (e) reserved block 1, 2, 5; (f) reserved block 1, 3, 5; (g) reserved block 2, 3, 5; (h) reserved block 1, 4, 5; (i) reserved block 2, 4, 5; (j) reserved block 3, 4, 5 [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 5. Residual calculation of arrangement scheme h (sunglasses occluding the face) [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
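The residual computation illustrated in Figures 5 and 8 follows the standard LRC rule: the reserved blocks of the test image are regressed, class by class, on the reserved blocks of that class's training images, and the class with the smallest Euclidean reconstruction error is chosen. Writing $X_c$ for the matrix whose columns are the reserved-block pixels of class $c$'s training images and $y$ for the reserved-block pixels of the test image (notation introduced here, not taken from the paper), this reads:

```latex
\hat{\beta}_c = \left(X_c^{\top} X_c\right)^{-1} X_c^{\top} y, \qquad
r_c = \bigl\| y - X_c \hat{\beta}_c \bigr\|_2, \qquad
\hat{c} = \arg\min_{c}\, r_c .
```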
Figure 6. Residuals of face images occluded by sunglasses under different permutation schemes [authors' own processing].
Figure 7. Arrangement and combination schemes for a test sub-image (occluded by a scarf) in the third group: (a) reserved block 1, 2, 3; (b) reserved block 1, 2, 4; (c) reserved block 1, 3, 4; (d) reserved block 2, 3, 4; (e) reserved block 1, 2, 5; (f) reserved block 1, 3, 5; (g) reserved block 2, 3, 5; (h) reserved block 1, 4, 5; (i) reserved block 2, 4, 5; (j) reserved block 3, 4, 5 [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 8. Residual calculation of arrangement scheme a (a scarf occluding the face) [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 9. Residuals of face images occluded by scarves under different permutation schemes [authors' own processing].
Figure 10. Some face images from the AR subset: (a) the left picture shows some of the training images used in the experiment; (b) the right picture shows some of the test samples used in the experiment [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 11. The relationship between the recognition rate of different methods for face images with scarves (the images were downsampled to 500 dimensions) and the number of training samples per class [authors’ own processing].
Figure 12. Some face images from Extended Yale B [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 13. Occlusion maps of different proportions used with the test samples: (a) face image with an occlusion ratio of 10%; (b) face image with an occlusion ratio of 20%; (c) face image with an occlusion ratio of 30%; (d) face image with an occlusion ratio of 40% [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 14. Test samples with vertical occlusion of 10% and diagonal occlusion of 20%: (a,b) face images with 10% vertical occlusion; (c,d) face images with 20% diagonal occlusion [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 15. Some images from the ORL database: (a) the ORL database without occluded images; (b) the ORL database with 10%-occluded images [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Figure 16. Test samples with vertical occlusion of 10% and diagonal occlusion of 20% from the ORL datasets: (a,b) face images with 10% vertical occlusion; (c,d) face images with 20% diagonal occlusion [Reprinted with permission from Elsevier [20]. Copyright 2013, Neurocomputing].
Table 1. Accuracy (%) of different methods when identifying scarf occlusion, sunglasses occlusion, and mixed occlusion images.
| Occlusion Test | Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|---|
| Scarf occlusion | 15 × 10 | 8.33 | 31.33 | 25.67 | 7.67 | 37.00 | 85.33 |
| Scarf occlusion | 20 × 15 | 10.33 | 42.67 | 46.67 | 18.33 | 60.67 | 91.33 |
| Scarf occlusion | 25 × 20 | 11.33 | 48.00 | 61.67 | 22.33 | 67.33 | 93.67 |
| Sunglasses occlusion | 15 × 10 | 37.00 | 25.00 | 20.33 | 24.00 | 71.67 | 85.67 |
| Sunglasses occlusion | 20 × 15 | 51.33 | 48.00 | 46.33 | 38.67 | 86.00 | 91.00 |
| Sunglasses occlusion | 25 × 20 | 54.33 | 47.33 | 47.00 | 39.67 | 88.00 | 90.67 |
| Mixed occlusion | 15 × 10 | 22.67 | 28.17 | 23.00 | 15.84 | 54.34 | 85.50 |
| Mixed occlusion | 20 × 15 | 30.83 | 45.34 | 46.50 | 28.50 | 73.34 | 91.17 |
| Mixed occlusion | 25 × 20 | 32.83 | 47.67 | 54.34 | 31.00 | 77.67 | 92.17 |
Table 2. Computational time (seconds) for different methods to identify 300 images.
| Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|
| 15 × 10 | 0.46 | 55.46 | 0.78 | 61.03 | 2.36 | 15.83 |
| 20 × 15 | 0.46 | 143.15 | 0.78 | 140.65 | 3.49 | 21.74 |
| 25 × 20 | 0.61 | 323.80 | 0.98 | 329.51 | 4.92 | 28.17 |
Table 3. Accuracy (%) of different methods when recognizing face images with scarves, by number of training samples per class (images were downsampled to 500 dimensions).
| Method | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|
| LRC | 6.75 | 7.00 | 10.75 | 9.88 | 10.88 |
| SRC | 3.88 | 2.38 | 3.88 | 10.88 | 26.00 |
| CRC | 24.00 | 28.63 | 51.13 | 59.25 | 61.00 |
| Module LRC | 20.25 | 49.13 | 63.00 | 73.00 | 80.75 |
| BPLRC | 32.75 | 59.88 | 74.75 | 83.88 | 89.25 |
Table 4. Accuracy (%) of different methods when identifying 10%-, 20%-, 30%-, and 40%-occluded face images in the Extended Yale B database.
| Occlusion Rate | Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|---|
| 10% | 15 × 10 | 60.21 | 47.29 | 43.06 | 33.25 | 4.94 | 64.51 |
| 10% | 20 × 15 | 65.15 | 64.83 | 60.05 | 47.45 | 32.46 | 77.27 |
| 10% | 25 × 20 | 66.83 | 67.78 | 67.46 | 46.73 | 68.42 | 78.87 |
| 20% | 15 × 10 | 43.46 | 41.63 | 35.89 | 28.79 | 5.02 | 59.57 |
| 20% | 20 × 15 | 52.47 | 55.74 | 52.23 | 41.87 | 32.30 | 74.40 |
| 20% | 25 × 20 | 57.58 | 59.49 | 61.48 | 40.91 | 67.62 | 76.00 |
| 30% | 15 × 10 | 26.95 | 31.58 | 27.51 | 23.52 | 5.26 | 51.28 |
| 30% | 20 × 15 | 40.83 | 45.69 | 40.99 | 33.73 | 30.78 | 71.85 |
| 30% | 25 × 20 | 46.33 | 45.61 | 50.80 | 34.53 | 68.66 | 74.00 |
| 40% | 15 × 10 | 21.29 | 25.36 | 22.57 | 19.70 | 4.70 | 40.19 |
| 40% | 20 × 15 | 29.11 | 32.70 | 32.69 | 27.43 | 30.46 | 66.59 |
| 40% | 25 × 20 | 37.16 | 33.25 | 39.95 | 26.95 | 67.30 | 68.74 |
Table 5. Accuracy (%) of different methods when identifying 10% vertically and 20% diagonally occluded face images in the Extended Yale B database.
| Occlusion | Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|---|
| 10% (Vertical Occlusion) | 15 × 10 | 58.21 | 51.51 | 46.81 | 31.34 | 3.11 | 64.19 |
| 10% (Vertical Occlusion) | 20 × 15 | 65.15 | 66.91 | 63.08 | 45.21 | 54.70 | 70.41 |
| 10% (Vertical Occlusion) | 25 × 20 | 67.62 | 70.41 | 71.05 | 46.73 | 56.86 | 74.72 |
| 20% (Diagonal Occlusion) | 15 × 10 | 33.25 | 40.27 | 33.01 | 24.80 | 3.35 | 33.25 |
| 20% (Diagonal Occlusion) | 20 × 15 | 40.43 | 51.12 | 54.39 | 39.39 | 42.50 | 75.44 |
| 20% (Diagonal Occlusion) | 25 × 20 | 46.25 | 63.32 | 63.32 | 35.33 | 69.06 | 76.79 |
Table 6. Computational time (seconds) for different methods to identify 1254 images.
| Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|
| 15 × 10 | 0.83 | 248.20 | 1.40 | 255.64 | 6.89 | 45.69 |
| 20 × 15 | 0.97 | 70.95 | 1.59 | 860.85 | 12.67 | 72.87 |
| 25 × 20 | 1.04 | 2231.74 | 2.03 | 2073.83 | 15.08 | 100.86 |
Table 7. Accuracy (%) of different methods when identifying 10%-, 20%-, 30%-, and 40%-occluded face images from the ORL database.
| Occlusion Rate | Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|---|
| 10% | 15 × 10 | 83.13 | 63.13 | 73.13 | 78.75 | 39.38 | 93.75 |
| 10% | 20 × 15 | 90.63 | 30.00 | 75.63 | 35.00 | 76.25 | 96.88 |
| 10% | 25 × 20 | 91.25 | 69.38 | 83.13 | 75.63 | 82.50 | 95.63 |
| 20% | 15 × 10 | 78.13 | 46.25 | 61.25 | 59.38 | 41.25 | 88.75 |
| 20% | 20 × 15 | 82.50 | 15.63 | 68.75 | 23.75 | 78.13 | 93.13 |
| 20% | 25 × 20 | 88.75 | 53.75 | 71.88 | 54.38 | 84.38 | 93.75 |
| 30% | 15 × 10 | 56.25 | 36.25 | 50.00 | 51.51 | 37.50 | 85.63 |
| 30% | 20 × 15 | 65.00 | 13.75 | 52.50 | 13.75 | 76.25 | 93.13 |
| 30% | 25 × 20 | 69.38 | 43.13 | 58.13 | 43.75 | 80.00 | 91.88 |
| 40% | 15 × 10 | 39.38 | 23.75 | 35.00 | 34.38 | 34.38 | 83.75 |
| 40% | 20 × 15 | 45.00 | 10.00 | 40.00 | 11.88 | 75.63 | 88.13 |
| 40% | 25 × 20 | 46.25 | 29.38 | 48.13 | 29.38 | 83.75 | 88.75 |
Table 8. Accuracy (%) of different methods when identifying 10% vertically and 20% diagonally occluded face images from the ORL database.
| Occlusion | Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|---|
| 10% (Vertical Occlusion) | 15 × 10 | 89.38 | 36.88 | 53.13 | 58.75 | 85.63 | 91.25 |
| 10% (Vertical Occlusion) | 20 × 15 | 85.63 | 17.50 | 60.00 | 24.38 | 91.25 | 95.00 |
| 10% (Vertical Occlusion) | 25 × 20 | 89.38 | 47.50 | 63.13 | 55.00 | 90.63 | 93.13 |
| 20% (Diagonal Occlusion) | 15 × 10 | 23.75 | 13.75 | 30.63 | 31.25 | 53.13 | 76.25 |
| 20% (Diagonal Occlusion) | 20 × 15 | 28.13 | 5.63 | 36.88 | 3.75 | 85.63 | 85.63 |
| 20% (Diagonal Occlusion) | 25 × 20 | 27.50 | 17.50 | 26.88 | 12.50 | 85.63 | 85.63 |
Table 9. Computational time (seconds) for different methods to identify 160 images.
| Image Size | LRC | SRC | CRC | ESRC | Module LRC | BPLRC |
|---|---|---|---|---|---|---|
| 15 × 10 | 0.16 | 9.30 | 0.19 | 11.16 | 0.48 | 2.97 |
| 20 × 15 | 0.10 | 40.83 | 0.16 | 22.23 | 0.60 | 3.46 |
| 25 × 20 | 0.13 | 24.08 | 0.22 | 20.70 | 0.66 | 4.06 |
Table 10. The number of categories for which each performance indicator (precision, recall, or F1-score) exceeded 0.7, and the number of categories that were completely misclassified (precision, recall, and F1-score all equal to 0), on the AR dataset for different methods.
Occlusion TestOther Measurement Model Performance IndicatorsMethod
LRCSRCCRCESRCModule LRCBPLRC
Scarf occlusionPrecision > 0.76243847786
Recall > 0.784670249194
F1Score > 0.75335369093
Precision, recall, and F1Score = 08124155021
Sunglasses
occlusion
Precision > 0.7372725137275
Recall > 0.7544854418593
F1Score > 0.7423043238994
Precision, recall, and F1Score = 02929153120
Mixed occlusionPrecision > 0.74153419396
Recall > 0.755126268691
F1Score > 0.745206919393
Precision, recall, and F1Score = 023771610
Table 11. The number of categories for which each performance indicator (precision, recall, or F1-score) exceeded 0.7, and the number of categories that were completely misclassified (precision, recall, and F1-score all equal to 0), on the Extended Yale B dataset for different methods.
Occlusion Rate and MethodOther Measurement Model Performance IndicatorsMethod
LRCSRCCRCESRCModule LRCBPLRC
10% (Random Occlusion) Precision > 0.716121601730
Recall > 0.728193142431
F1Score > 0.717152201632
Precision, recall, and F1Score = 0000000
20% (Random Occlusion) Precision > 0.78101201529
Recall > 0.721132452228
F1Score > 0.7771201631
Precision, recall, and F1Score = 0000000
30% (Random Occlusion) Precision > 0.744401625
Recall > 0.72121922425
F1Score > 0.721401527
Precision, recall, and F1Score = 0000000
40% (Random Occlusion) Precision > 0.720201017
Recall > 0.71211212924
F1Score > 0.700102517
Precision, recall, and F1Score = 0000000
10% (Vertical
Occlusion)
Precision > 0.71820182825
Recall > 0.730203072529
F1Score > 0.71922240427
Precision, recall, and F1Score = 0000000
20% (Diagonal
Occlusion)
Precision > 0.77171421826
Recall > 0.717192882527
F1Score > 0.74131711728
Precision, recall, and F1Score = 0100000
Table 12. The number of categories for which each performance indicator (precision, recall, or F1-score) exceeded 0.7, and the number of categories that were completely misclassified (precision, recall, and F1-score all equal to 0), on the ORL dataset for different methods.
Occlusion Rate and MethodOther Measurement Model
Performance Indicators
Method
LRCSRCCRCESRCModule LRCBPLRC
10% (Random Occlusion)Precision > 0.7392433293740
Recall > 0.7382732273740
F1Score > 0.7391830253740
Precision, recall, and F1Score = 0010100
20% (Random Occlusion) Precision > 0.7381427133640
Recall > 0.7381326223739
F1Score > 0.737923103640
Precision, recall, and F1Score = 0041200
30% (Random Occlusion) Precision > 0.7241115123638
Recall > 0.7261017153438
F1Score > 0.7165873438
Precision, recall, and F1Score = 0171600
40% (Random Occlusion) Precision > 0.71451853939
Recall > 0.718618113737
F1Score > 0.7911023737
Precision, recall, and F1Score = 071361200
10% (Vertical
Occlusion)
Precision > 0.7361219153840
Recall > 0.7361524253639
F1Score> 0.736815113639
Precision, recall, and F1Score = 0030100
20% (Diagonal
Occlusion)
Precision > 0.795823535
Recall > 0.7961483434
F1Score > 0.741413333
Precision, recall, and F1Score = 02527212700
Table 13. Attribute comparison with prior methods.
| Attribute | LRC [18] | SRC [10,11] | CRC [14] | ESRC [17] | Module LRC [18] | BPLRC |
|---|---|---|---|---|---|---|
| Basic model | Linear regression | Lasso regression | Ridge regression | Lasso regression | Linear regression | Linear regression |
| Measuring method | Euclidean distance | Euclidean distance | Euclidean distance | Euclidean distance | Euclidean distance | Euclidean distance |
| Image recognition speed | Extremely fast | Extremely slow | Fast | Extremely slow | Relatively fast | Relatively slow |
| Robustness | Extremely weak | Relatively weak | Relatively weak | Weak | Relatively strong | Strong |
| Scope | Light changes; expression changes | Light changes; expression changes | Light changes; expression changes | Light changes; expression changes | Light changes; expression changes; face occlusion | Light changes; expression changes; face occlusion |
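For reference, the "basic model" row in Table 13 distinguishes the three underlying regressions only by the penalty placed on the coefficient vector; in textbook form (not taken from any one of the cited papers):

```latex
\text{Linear (LRC):}\quad \min_{\beta}\ \|y - X\beta\|_2^2, \qquad
\text{Lasso (SRC, ESRC):}\quad \min_{\beta}\ \|y - X\beta\|_2^2 + \lambda\|\beta\|_1, \qquad
\text{Ridge (CRC):}\quad \min_{\beta}\ \|y - X\beta\|_2^2 + \lambda\|\beta\|_2^2 .
```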
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
