Article

A New Bilinear Supervised Neighborhood Discrete Discriminant Hashing

Xueyu Chen, Minghua Wan, Hao Zheng, Chao Xu, Chengli Sun and Zizhu Fan
1 School of Information Engineering, Nanjing Audit University, Nanjing 211815, China
2 Key Laboratory of Intelligent Information Processing, Nanjing Xiaozhuang University, Nanjing 211171, China
3 Jiangsu Key Laboratory of Image and Video Understanding for Social Safety, Nanjing University of Science and Technology, Nanjing 210014, China
4 School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China
5 School of Science, East China Jiaotong University, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(12), 2110; https://doi.org/10.3390/math10122110
Submission received: 25 April 2022 / Revised: 7 June 2022 / Accepted: 15 June 2022 / Published: 17 June 2022
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)

Abstract

Feature extraction is an important part of perceptual hashing. How to compress the robust features of images into hash codes has become a hot research topic. Converting a two-dimensional image into a one-dimensional descriptor requires a higher computational cost and is not optimal. In order to maintain the internal feature structure of the original two-dimensional image, a new Bilinear Supervised Neighborhood Discrete Discriminant Hashing (BNDDH) algorithm is proposed in this paper. Firstly, the algorithm constructs two new neighborhood graphs to maintain the geometric relationship between samples and reduces the quantization loss by directly constraining the hash codes. Secondly, two small rotation matrices are used to realize the bilinear projection of the two-dimensional descriptor. Finally, experiments verify the performance of the BNDDH algorithm under different feature types, such as original image pixels and a Convolutional Neural Network (CNN)-based AlexConv5 feature. The experimental results and discussion clearly show that the proposed BNDDH algorithm outperforms existing traditional hashing algorithms and represents images more efficiently.

1. Introduction

A series of problems caused by the “curse of dimensionality” make feature representation a core problem in the vision field [1,2,3]. As a feature extraction approach with a low computational cost, the hashing algorithm has received increasing attention. The advantage of a hashing algorithm is that it uses a binary hash code to represent each image, which is different from other subspace learning algorithms whose learned low-dimensional feature elements are floating-point numbers. There is no doubt that binary operations are faster than real-valued ones, and the storage efficiency of hash codes is higher.
Generally, hashing algorithms are divided into data-independent hashing and data-dependent hashing, according to whether training samples are considered or not. Data-dependent hashing has attracted much attention because it can obtain better hash codes more efficiently. In data-dependent hashing, many algorithms obtain hash codes through linear projection, such as Spectral Hashing (SH) [4], Iterative Quantization (ITQ) [5] and Semi-supervised Hashing (SSH) [6]. In detail, the SH algorithm proposed by Weiss et al. [4] transforms the hash-code-solving problem into the eigenfunction problem of the weighted Laplace–Beltrami operators in manifold learning by relaxing the constraints, and defines the concept of spectral relaxation. However, this algorithm is more suitable for the representation of dense matrices, and the data need to be uniformly distributed, so its application range is very limited. The ITQ algorithm proposed by Gong et al. [5] differs from the Principal Component Analysis Hierarchical clustering method (PCAH) [7], which directly uses the Principal Component Analysis (PCA) [8] algorithm to reduce the dimension of the data and then obtains the hash codes through sign-function quantization. The ITQ algorithm rotates the projected data on the basis of PCA to reduce the loss of information in the quantization process; therefore, the hash codes learned by ITQ are more robust than those learned by PCAH. In order to reduce the bad-code problem caused by the empirical error on the label set, the SSH [6] algorithm uses the distance in metric space or the class label to obtain the sample relationship matrix, constructs a regularizer using labeled and unlabeled data to reduce over-fitting, and finally obtains the hash codes through a simple linear mapping.
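To make the contrast between PCAH-style direct quantization and the ITQ rotation concrete, the following NumPy sketch illustrates both ideas; it is a simplified illustration under our own naming, not the reference implementations of [5,7]:

```python
import numpy as np

def pcah_codes(X, b):
    """PCAH-style codes: project onto the top-b principal directions, then take signs.
    X: n x d data matrix (assumed zero-centered), b: number of hash bits."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:b].T                                 # d x b projection (principal directions)
    return np.sign(X @ W), W

def itq_codes(X, b, n_iter=50):
    """ITQ-style codes: alternate between assigning codes and finding an orthogonal
    rotation R that reduces the quantization loss ||B - X W R||_F."""
    _, W = pcah_codes(X, b)
    V = X @ W                                    # n x b PCA-projected data
    R = np.linalg.qr(np.random.randn(b, b))[0]   # random orthogonal initialization
    for _ in range(n_iter):
        B = np.sign(V @ R)                       # fix R, update codes
        U_, _, Vh = np.linalg.svd(V.T @ B)       # fix B, update rotation (Procrustes)
        R = U_ @ Vh
    return np.sign(V @ R)
```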
The hashing algorithms mentioned above usually transform discrete constraints into continuous problems without directly constraining the discrete variables. In order to further reduce the loss when data are projected from the original space to the Hamming space, some more promising algorithms with direct discrete constraints have been proposed, such as Supervised Discrete Hashing (SDH) [9], Supervised Discrete Hashing with Relaxation (SDHR) [10], and Supervised Discrete Discriminant Hashing (SDDH) [11]. The SDH algorithm proposed by Shen et al. [9] regresses the hash codes to the corresponding labels through least-squares regression and uses the Discrete Cyclic Coordinate Descent (DCC) method to solve for the hash codes. Obviously, the ordinary least-squares problem cannot achieve the optimal classification effect. Therefore, in order to further improve the possibility of correct classification of each data point, Gui et al. [10] proposed SDHR, which learns the regression targets directly from the input data and applies them to the learning of hash codes. Cui et al. [11] proposed the SDDH algorithm, which uses the given label information to construct a relationship matrix and directly learns a hashing function and effective hash codes at the same time.
One-dimensional feature extraction algorithms have apparent shortcomings because they need to manually convert two-dimensional images into one-dimensional vectors, resulting in the loss of image structure information. Yang et al. proposed the Two-Dimensional Principal Component Analysis (2DPCA) [12] on the basis of PCA, which makes it easier to accurately calculate the covariance matrix so as to obtain effective features more conveniently. Li et al. proposed the Two-Dimensional Linear Discriminant Analysis (2DLDA) [13] on the basis of the Linear Discriminant Analysis (LDA) [14], which not only reduced the calculation cost, but also improved the recognition accuracy. In manifold learning, similar ideas have attracted extensive attention, such as Two-Dimensional Local Preserving Projections (2DLPP) [15,16,17], which effectively improve the performance of feature extraction. These 2D feature-extraction algorithms make full use of the advantages of matrix features, so they show stronger robustness and save more space.
Inspired by the above Two-Dimensional methods, many hashing algorithms that are based on bilinear projection have been proposed. For example, Mao et al. proposed two-dimensional PCA hashing (2DPCAH) [18], which first uses the two-dimensional features of the image for principal component analysis, and then quantizes the reduced features to obtain the hash codes. Ding et al. proposed the 2D version of ITQ, Two-Dimensional Iterative Quantization (2DITQ) [19]. Based on the 2DLDA algorithm, they also proposed Linear Discrimination Discrete Hashing (LDDH) [19] to unify 2DLDA and SDH into one framework. 2DITQ alternately reduces the dimension of features from row direction and column direction by 2DPCA to project high-dimensional 2D image features into a low-dimensional matrix. Bilinear Random Projections for Locality-Sensitive Binary Code (BLSH-SIK) [20] proposed by Kim is a bilinear extension of LSH. Like LSH, BLSH-SIK can maintain the similarity between similar data, and the calculation time and memory consumption of binary code that are required by BLSH-SIK are significantly reduced compared with linear projection. Bilinear Projection-based Binary Codes (BPBC) [21] is a typical unsupervised bilinear projection-hashing algorithm. It transforms a large projection matrix into two small projection matrices to obtain better binary features with less calculating complexity. Bilinear Discriminant Analysis Hashing (BDAH) [22] uses the 2DLDA algorithm to reduce the dimension of features, and its performance is better than the unsupervised hashing algorithm. Another supervised bilinear projection hash is Bilinear Supervised Hashing (BSDH) [23], which no longer adopts the algorithm of continuous optimization quantization, but restricts discrete variables to obtain more competitive hash codes.
However, most of the existing 2D-hashing algorithms work in two steps: first, a traditional 2D image-processing algorithm is used to reduce the dimension of the input image, and then the reduced features are quantized to obtain the hash codes. This two-step strategy has an obvious shortcoming: directly binarizing the features after dimension reduction loses a lot of useful information. Although many improved algorithms have been proposed to address this defect, such as 2DITQ, Global Similarity Preserving Hashing (GSPH) [24], and PCA-ITQ [25], the learned hash codes are still suboptimal. Therefore, based on the advantages of 2D image processing and the shortcomings of existing 2D-hashing algorithms, Bilinear Supervised Neighborhood Discrete Discriminant Hashing (BNDDH) is proposed in this paper, in order to reduce the information loss in the process of projection into Hamming space and to try to maintain the original geometric structure relationship between data, so as to obtain more efficient hash codes. The proposed algorithm uses discrete constraints rather than continuous relaxations, and in the projection process the two-dimensional data are no longer transformed into one-dimensional vectors but are directly projected by two orthogonal matrices.
Our main contributions are outlined as follows:
(1) We present a novel hashing algorithm called the BNDDH algorithm, which can perform perceptual hashing, optimal Laplacian graph construction, and supervised discrete-discriminant preserving simultaneously in a unified strategy.
(2) We propose an optimized Laplace matrix construction strategy to maintain the neighborhood structure information of samples, so as to narrow the scope of similarity retrieval from the same object category to the same instance.
(3) We perform bilinear projection on two-dimensional images, which overcomes the disadvantage of the large amount of calculation of high-dimensional linear projection in the past. In addition, bilinear projection better maintains the two-dimensional features of the image.
The rest of this paper is organized as follows: Section 2 mainly presents the algorithms that are related to this paper, including SDDH and BPBC. The objective function and optimization algorithm of BNDDH are given in detail in Section 3. The experimental results on the CIFAR-10, Yale-B, MNIST and AR databases show the effectiveness of BNDDH in Section 4. Finally, Section 5 summarizes the full text.

2. Related Algorithms

2.1. Supervised Discrete Discriminant Hashing (SDDH)

To facilitate reading, the main notations that are used in this article are shown in Table 1.
SDDH [11] is characterized by paying attention to the direct learning of hash codes. It is a supervised hashing algorithm with direct constraints on the discrete variables. For the input samples $X \in \mathbb{R}^{d \times n}$, the relationship matrix $S \in \{-1, 1\}^{n \times n}$ of the samples is constructed according to the given label information: $S_{ij} = 1$ indicates that $x_i$ and $x_j$ are of the same class, and $S_{ij} = -1$ indicates different classes. The similarity measurement matrix $W \in \mathbb{R}^{b \times n}$ is defined to describe the relationship between the hash codes. The objective function is given in Equation (1), where $b$ ($b \ll d$) is the length of the hash codes:
$$\max_{B, W, F} \; \mathrm{tr}(W^T B S B^T W) - \nu \sum_{i=1}^{n} \| b_i - F(x_i) \|^2 \quad \text{s.t.} \; b_i \in \{-1, 1\}^b \tag{1}$$
The second term of the formula defines $F(x) = G^T \Phi(x)$, where $G \in \mathbb{R}^{m \times b}$ is the projection matrix, which needs continuous iterative updating to obtain a better projection effect, and $\nu$ is the penalty parameter. Like the idea of using anchors to reduce time complexity in Anchor Graph Hashing (AGH) [26], in SDDH $\Phi(X) = [\phi(x_1), \phi(x_2), \dots, \phi(x_n)] \in \mathbb{R}^{m \times n}$ ($m < n$) stacks $m$-dimensional column vectors obtained with the Radial Basis Function (RBF) kernel, where $\phi(x_i) = [\exp(-\|x_i - a_1\|^2 / t), \dots, \exp(-\|x_i - a_m\|^2 / t)]^T$. $\{a_i\}_{i=1}^{m}$ are randomly selected anchors from the training database $X$, and $t$ is the kernel bandwidth. Experiments show that SDDH can effectively ensure the minimum feature loss when the samples are mapped to Hamming space.
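For illustration, the anchor-based RBF embedding $\Phi(\cdot)$ described above can be sketched in NumPy as follows; this is a minimal sketch, and the anchor sampling, the default bandwidth heuristic and the variable names are our assumptions, not taken from [11]:

```python
import numpy as np

def rbf_anchor_embedding(X, m=300, t=None, rng=None):
    """Map each column x_i of X (d x n) to an m-dimensional RBF feature
    phi(x_i) = [exp(-||x_i - a_1||^2 / t), ..., exp(-||x_i - a_m||^2 / t)]^T,
    where a_1..a_m are anchors sampled from the training data."""
    rng = np.random.default_rng() if rng is None else rng
    d, n = X.shape
    anchors = X[:, rng.choice(n, size=m, replace=False)]                  # d x m
    # squared Euclidean distances between all samples and all anchors
    sq_dist = ((X[:, :, None] - anchors[:, None, :]) ** 2).sum(axis=0)    # n x m
    if t is None:
        t = np.mean(sq_dist)          # a common heuristic for the kernel bandwidth
    return np.exp(-sq_dist / t).T     # m x n, i.e. Phi(X)
```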

2.2. Bilinear Projection-Based Binary Codes (BPBC)

BPBC [21] is a typical unsupervised bilinear-projection hashing algorithm. Given samples $X = \{x_1, x_2, \dots, x_n\}$ ($x_i \in \mathbb{R}^{d_1 \times d_2}$), these samples are projected into Hamming space by using a random rotation matrix $R \in \mathbb{R}^{d_1 d_2 \times d_1 d_2}$:
$$H(x) = \mathrm{sgn}(R^T x) \tag{2}$$
where $\mathrm{sgn}(\cdot)$ is the binarization used to learn binary codes: for positive numbers it outputs $+1$, otherwise it outputs $-1$. In order to reduce the loss from the projection process, the goal of the BPBC algorithm is to minimize the angle $\theta$ between $\mathrm{vec}(\hat{R}^T X)$ and $\mathrm{vec}(\mathrm{sgn}(\hat{R}^T X))$, where $\mathrm{vec}(\cdot)$ denotes vectorization:
$$\max \sum_{i=1}^{n} \cos(\theta_i) = \sum_{i=1}^{n} \frac{\mathrm{sgn}(\hat{R}^T X_i)^T (\hat{R}^T X_i)}{\sqrt{d_1 \times d_2}} \tag{3}$$
Then, based on the Kronecker product $\otimes$, the complete rotation is transformed into a bilinear rotation $\hat{R} = R_2 \otimes R_1$ ($R_1 \in \mathbb{R}^{d_1 \times d_1}$, $R_2 \in \mathbb{R}^{d_2 \times d_2}$). Thus, the projection matrix is reduced to lower the computational complexity and further accelerate the projection. Equation (2) can be rewritten as:
$$H(x) = \mathrm{sgn}(R_1^T x R_2) \tag{4}$$
To obtain the low-dimensional $c$-bit ($c = c_1 \times c_2$) hash codes, let $R_1 \in \mathbb{R}^{d_1 \times c_1}$ and $R_2 \in \mathbb{R}^{d_2 \times c_2}$; the objective function can then be transformed into:
$$\max_{B, R_1, R_2} \sum_{i=1}^{n} \cos(\theta_i) = \sum_{i=1}^{n} \frac{\mathrm{vec}(\mathrm{sgn}(R_1^T X_i R_2))^T \mathrm{vec}(R_1^T X_i R_2)}{\sqrt{c}} = \frac{1}{\sqrt{c}} \sum_{i=1}^{n} \mathrm{tr}(b_i R_2^T X_i^T R_1) \quad \text{s.t.} \; R_1^T R_1 = I, \; R_2^T R_2 = I \tag{5}$$
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix and $I$ is the identity matrix. $b_i = \mathrm{sgn}(R_1^T X_i R_2)$ and $B = [b_1, b_2, \dots, b_n]$. In order to optimize the projection, the variables $B$, $R_1$ and $R_2$ need to be updated alternately. SVD is used to obtain $R_1$ and $R_2$ in each iteration. Further details of the optimization can be found in reference [21].
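As a quick illustration of the bilinear encoding in Equation (4) and of its equivalence to the full rotation $\hat{R} = R_2 \otimes R_1$, the following toy NumPy check may help; the dimensions and the use of random orthogonal matrices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 6, 5
X = rng.standard_normal((d1, d2))

# random orthogonal rotations (square case for the identity check: c1 = d1, c2 = d2)
R1 = np.linalg.qr(rng.standard_normal((d1, d1)))[0]
R2 = np.linalg.qr(rng.standard_normal((d2, d2)))[0]

# bilinear projection and its equivalent full rotation R_hat = R2 (x) R1
proj_bilinear = (R1.T @ X @ R2).flatten(order="F")   # vec(R1^T X R2)
R_hat = np.kron(R2, R1)
proj_full = R_hat.T @ X.flatten(order="F")           # R_hat^T vec(X)
assert np.allclose(proj_bilinear, proj_full)         # the two projections coincide

code = np.sign(proj_bilinear)                        # binary code, Equation (4)
```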

3. Proposed Methods

3.1. The Objective Function

Consider a labeled image dataset $X = [x_1, \dots, x_n] \in \mathbb{R}^{d_1 \times d_2 \times n}$; for each sample $x_i$, its corresponding label is $l_j$ ($j = 1, 2, \dots, q$). Our goal is to learn a binary hash code $b_i$ to represent $x_i$, and we set $B = [b_1, \dots, b_n]$.
In order to enhance the scalability of the algorithm, we obtain the binary hash codes of each input two-dimensional image through a hash function $H(X)$. The purpose of our learning is to maximize the within-class similarity and minimize the between-class similarity. In order to better measure the relationship between hash codes, we define the similarity measurement matrix $A$. We hope to learn the optimal hash function $H(X)$ by maximizing Equation (6), where $S$ denotes the set of similar (same-class) sample pairs; the specific definition of $H(X)$ can be found in Equation (20).
$$\max \sum_{(x_i, x_j) \in S} H(x_i)^T A\, H(x_j) - \sum_{(x_i, x_j) \notin S} H(x_i)^T A\, H(x_j) \tag{6}$$
We need to define a formulation to represent the relationship between the input data. Some hashing algorithms define the relationship matrix $S \in \{0, 1\}^{n \times n}$, where an entry equal to $1$ indicates that the two samples at the corresponding position belong to the same class, and $0$ indicates different classes. Previous hashing algorithms only consider the label information of a sample when describing or representing it, so the scope of similarity retrieval is the same object label. However, in the real world, the given labels are not always accurate, and obtaining high-quality labels through Subject Matter Experts (SMEs) is often expensive and difficult [27,28], which affects the effectiveness of hash learning to a certain extent. In order to avoid the loss caused by unreliable label information, we relax the relationship matrix $S$ and define $S_b \in \mathbb{R}^{n \times n}$ and $S_w \in \mathbb{R}^{n \times n}$ to represent the between-class and within-class relationships, respectively. In these relationship matrices, we also pay attention to sample points that share the same label but are not in a nearest-neighbor relationship, and accurately describe the geometric relationship between the samples by reducing their weight. Firstly, we define the between-class relationship graph as:
$$S_b(i, j) = \begin{cases} 1, & x_i \in ON_o(x_j) \;\text{or}\; x_j \in ON_o(x_i) \\ \exp(-\|x_i - x_j\| / \varepsilon), & x_i \in N_o(x_j) \;\text{or}\; x_j \in N_o(x_i) \\ 0, & \text{otherwise} \end{cases} \tag{7}$$
where $ON_o(x_i)$ represents the set of samples that belong to different classes from $x_i$ and do not belong to the $o$ nearest neighbors of $x_i$; we give such pairs the maximum weight. $N_o(x_i)$ represents the set of the $o$ nearest neighbors of $x_i$; we reduce the weight of elements in this set. The value of the weight is determined by the Euclidean distance between $x_i$ and $x_j$, which is denoted $\|x_i - x_j\|$. $\exp(\cdot)$ is the exponential function and $\varepsilon$ is a parameter.
Similarly, the within-class relationship matrix is defined as:
$$S_w(i, j) = \begin{cases} 1, & x_i \in N_o^+(x_j) \;\text{or}\; x_j \in N_o^+(x_i) \\ \exp(-\|x_i - x_j\| / \varepsilon), & \text{otherwise} \end{cases} \tag{8}$$
where $N_o^+(x_i)$ represents the set of the $o$ nearest neighbors belonging to the same class as $x_i$.
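A possible NumPy sketch of constructing the two relationship matrices in Equations (7) and (8) is given below; flattening each sample for the distance computation, the default values of $o$ and $\varepsilon$, and all helper names are our assumptions:

```python
import numpy as np

def neighborhood_graphs(X, labels, o=5, eps=1.0):
    """Build the between-class matrix S_b (Eq. 7) and within-class matrix S_w (Eq. 8).
    X: n x d matrix of flattened samples; labels: length-n array of class labels."""
    labels = np.asarray(labels)
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    knn = np.argsort(dist, axis=1)[:, 1:o + 1]                     # o nearest neighbors
    is_nn = np.zeros((n, n), dtype=bool)
    for i in range(n):
        is_nn[i, knn[i]] = True
    is_nn = is_nn | is_nn.T                    # symmetric o-nearest-neighbor relation
    same = labels[:, None] == labels[None, :]  # same-class indicator

    weight = np.exp(-dist / eps)               # distance-based reduced weight
    S_b = np.zeros((n, n))
    S_b[~same & ~is_nn] = 1.0                  # different class and not neighbors
    S_b[is_nn] = weight[is_nn]                 # neighbors: reduced weight
    S_w = np.where(same & is_nn, 1.0, weight)  # same-class neighbors get weight 1
    return S_b, S_w
```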
Through the above definition, Equation (6) can be converted to the following form:
$$\max_{B, U} \; \mathrm{tr}(U^T B S_w B^T U) - \mathrm{tr}(U^T B S_b B^T U) \quad \text{s.t.} \; B \in \{-1, 1\}^{b \times n} \tag{9}$$
where $U$ is obtained by decomposing the similarity matrix $A$, with $A = U U^T$.
Transforming this problem into one over continuous variables brings steep losses. In this paper, we directly constrain the binary codes to obtain a better hash function and better hash codes. Our goal is to learn a hash function $H(X)$ with good scalability. Therefore, we add a constraint to the above objective function and convert it into Equation (10):
$$\max_{B, U, H} \; \mathrm{tr}(U^T B S_w B^T U) - \mathrm{tr}(U^T B S_b B^T U) - \nu \| B - H(X) \|^2 \tag{10}$$
where $\nu$ is the penalty parameter; we hope that $B$ and $H(X)$ are as similar as possible, although in the real world a slight difference between $B$ and $H(X)$ is acceptable.

3.2. Bilinear Projection Learning

Firstly, we define the bilinear rotation matrices $R_1 \in \mathbb{R}^{d_1 \times c_1}$ and $R_2 \in \mathbb{R}^{d_2 \times c_2}$ ($c_1 < d_1$, $c_2 < d_2$); the input data $X \in \mathbb{R}^{d_1 \times d_2 \times n}$ will be converted to $R_1^T X R_2 \in \mathbb{R}^{c \times n}$.
Referring to the Fisher Discriminant Analysis (FDA) algorithm, we define the between-class scatter matrix in Equation (11) and the within-class scatter matrix in Equation (12):
$$D_b = \sum_{j=1}^{q} k_j \left\| R_1^T (M_j - M) R_2 \right\|^2 = \mathrm{tr}\left( \sum_{j=1}^{q} k_j R_1^T (M_j - M) R_2 R_2^T (M_j - M)^T R_1 \right) = \mathrm{tr}\left( R_1^T \left( \sum_{j=1}^{q} k_j (M_j - M) R_2 R_2^T (M_j - M)^T \right) R_1 \right) \tag{11}$$
$$D_w = \sum_{j=1}^{q} \sum_{i=1}^{k_j} \left\| R_1^T (X_i - M_j) R_2 \right\|^2 = \mathrm{tr}\left( \sum_{j=1}^{q} \sum_{i=1}^{k_j} R_1^T (X_i - M_j) R_2 R_2^T (X_i - M_j)^T R_1 \right) = \mathrm{tr}\left( R_1^T \left( \sum_{j=1}^{q} \sum_{i=1}^{k_j} (X_i - M_j) R_2 R_2^T (X_i - M_j)^T \right) R_1 \right) \tag{12}$$
In order to make the between-class data as dispersed as possible while keeping the within-class data as similar as possible, we maximize $D_b / D_w$, as in Equation (13):
$$\max \; D_b / D_w = \mathrm{tr}\left( R_1^T \left( \sum_{j=1}^{q} k_j (M_j - M) R_2 R_2^T (M_j - M)^T \right) R_1 \right) \Big/ \mathrm{tr}\left( R_1^T \left( \sum_{j=1}^{q} \sum_{i=1}^{k_j} (X_i - M_j) R_2 R_2^T (X_i - M_j)^T \right) R_1 \right) \tag{13}$$
To simplify the expressions, we define $H_b^{Q_1}$, $H_w^{Q_1}$, $H_b^{Q_2}$ and $H_w^{Q_2}$ in Equations (14)–(17), respectively:
$$H_b^{Q_1} = \sum_{j=1}^{q} k_j (M_j - M)^T R_1 R_1^T (M_j - M) \tag{14}$$
$$H_w^{Q_1} = \sum_{j=1}^{q} \sum_{i=1}^{k_j} (X_i - M_j)^T R_1 R_1^T (X_i - M_j) \tag{15}$$
$$H_b^{Q_2} = \sum_{j=1}^{q} k_j (M_j - M) R_2 R_2^T (M_j - M)^T \tag{16}$$
$$H_w^{Q_2} = \sum_{j=1}^{q} \sum_{i=1}^{k_j} (X_i - M_j) R_2 R_2^T (X_i - M_j)^T \tag{17}$$
Because the eigenvectors with larger eigenvalues carry more discriminative information, we calculate the eigenvalues of $(H_w^{Q_2})^{-1} H_b^{Q_2}$ and use the eigenvectors corresponding to the larger eigenvalues to form $R_1$, and we obtain $R_2$ by calculating the eigenvalues of $(H_w^{Q_1})^{-1} H_b^{Q_1}$. After obtaining the bilinear projection matrices $R_1$ and $R_2$, we project the high-dimensional 2D image features to obtain a smaller two-dimensional feature matrix, as shown in Equation (18). This operation reduces the computing cost.
$$z_i = \mathrm{vec}(R_1^T X_i R_2) \tag{18}$$
$$Z = [z_1, z_2, \dots, z_n] \in \mathbb{R}^{c \times n} \tag{19}$$
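The alternating computation of $R_1$, $R_2$ and $Z$ described by Equations (14)–(18) could be sketched as follows. The generalized eigensolver and the small ridge term added for numerical stability are implementation assumptions on our part:

```python
import numpy as np
from scipy.linalg import eigh

def learn_rotations(X, labels, c1, c2, t1=10, ridge=1e-6):
    """Alternately update R1 (d1 x c1) and R2 (d2 x c2) for a stack X of n 2D samples
    of size d1 x d2, following Equations (14)-(17)."""
    labels = np.asarray(labels)
    n, d1, d2 = X.shape
    classes = np.unique(labels)
    M = X.mean(axis=0)                                        # global mean matrix
    Mj = {c: X[labels == c].mean(axis=0) for c in classes}    # per-class mean matrices
    kj = {c: int(np.sum(labels == c)) for c in classes}

    R1 = np.eye(d1, c1)                                       # diagonal initialization
    R2 = np.eye(d2, c2)
    for _ in range(t1):
        # Eqs. (14)-(15): d2 x d2 scatter matrices -> update R2
        Hb1 = sum(kj[c] * (Mj[c] - M).T @ R1 @ R1.T @ (Mj[c] - M) for c in classes)
        Hw1 = sum((x - Mj[c]).T @ R1 @ R1.T @ (x - Mj[c])
                  for c in classes for x in X[labels == c])
        _, vecs = eigh(Hb1, Hw1 + ridge * np.eye(d2))         # generalized eigenproblem
        R2 = vecs[:, ::-1][:, :c2]                            # top-c2 eigenvectors

        # Eqs. (16)-(17): d1 x d1 scatter matrices -> update R1
        Hb2 = sum(kj[c] * (Mj[c] - M) @ R2 @ R2.T @ (Mj[c] - M).T for c in classes)
        Hw2 = sum((x - Mj[c]) @ R2 @ R2.T @ (x - Mj[c]).T
                  for c in classes for x in X[labels == c])
        _, vecs = eigh(Hb2, Hw2 + ridge * np.eye(d1))
        R1 = vecs[:, ::-1][:, :c1]
    return R1, R2

def bilinear_features(X, R1, R2):
    """Eq. (18): z_i = vec(R1^T X_i R2), stacked column-wise into Z (c x n)."""
    return np.stack([(R1.T @ x @ R2).flatten(order="F") for x in X], axis=1)
```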
To ensure bilinear projection, we define the embedding function in Equation (20), where P is the projection matrix that projects Z into a lower dimensional space.
$$H(X) = P Z \tag{20}$$
Finally, the objective function is transformed into:
$$\max_{B, U, P} \; \mathrm{tr}(U^T B S_w B^T U) - \mathrm{tr}(U^T B S_b B^T U) - \nu \| B - P Z \|^2 \tag{21}$$

3.3. Optimization

In order to alternately optimize $U$, $B$ and $P$ to obtain a better data representation, we need to fix two of them and update the remaining one. The optimization process is divided into the following steps:
1. Update U: $U$ only depends on $B$. By keeping $B$ fixed, we can obtain $U$ from the optimization problem in Equation (22):
$$\max_{U} \; \mathrm{tr}(U^T B S_w B^T U) - \mathrm{tr}(U^T B S_b B^T U) \tag{22}$$
$U$ consists of the eigenvectors corresponding to the largest eigenvalues of $B S_w B^T - B S_b B^T$, so the update reduces to an eigenvalue–eigenvector problem.
2. Update P: From the objective function (21), we find that the value of $P$ is only related to $B$, and we can define the function $L(P)$ as shown in Equation (23). By fixing $B$, the projection matrix $P$ can be easily calculated by regression:
$$L(P) = \| P Z - B \|^2 \tag{23}$$
Taking the partial derivative with respect to $P$ and setting it equal to 0, we obtain Equations (24) and (25):
$$\frac{\partial L(P)}{\partial P} = 2 (P Z - B) Z^T = 0 \tag{24}$$
$$P Z Z^T = B Z^T \tag{25}$$
We can then easily obtain:
$$P = B Z^T (Z Z^T)^{-1} \tag{26}$$
3. Update B: By keeping $Z$ and $U$ constant, the objective function transforms into the following problem:
$$\mathrm{tr}(U^T B S_w B^T U) - \mathrm{tr}(U^T B S_b B^T U) - \nu\, \mathrm{tr}\big( (B - P Z)^T (B - P Z) \big) = \mathrm{tr}\big( B^T ( U U^T B (S_w - S_b) + \nu P Z ) \big) + \mathrm{const} \tag{27}$$
In the $t$-th iteration, we denote the learned hash codes by $B^t$. Then, we can update $B^{t+1}$ from Equation (28):
$$B^{t+1} = \mathrm{sgn}\big( U U^T B^t S_w - U U^T B^t S_b + \nu P Z \big) \tag{28}$$
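A compact sketch of the three alternating updates in Equations (22), (26) and (28) is given below; the random initialization of $B$, keeping all $b$ eigenvectors for $U$, and the small regularizer added when inverting $Z Z^T$ are our assumptions:

```python
import numpy as np

def optimize_codes(Z, S_w, S_b, b, nu=1e-2, t2=5, rng=None):
    """Alternating updates of U, P and B for the objective in Equation (21).
    Z: c x n bilinear features; S_w, S_b: n x n relationship matrices."""
    rng = np.random.default_rng() if rng is None else rng
    c, n = Z.shape
    B = np.sign(rng.standard_normal((b, n)))              # random {-1, +1} initialization

    for _ in range(t2):
        # Update U (Eq. 22): eigenvectors of B (S_w - S_b) B^T, ordered by eigenvalue
        M = B @ (S_w - S_b) @ B.T
        M = 0.5 * (M + M.T)                                # symmetrize for numerical safety
        _, vecs = np.linalg.eigh(M)
        U = vecs[:, ::-1]                                  # descending eigenvalue order

        # Update P (Eq. 26): closed-form regression of B on Z
        # (a small ridge is added so that Z Z^T is always invertible)
        P = B @ Z.T @ np.linalg.inv(Z @ Z.T + 1e-6 * np.eye(c))

        # Update B (Eq. 28): discrete update by taking signs
        B = np.sign(U @ U.T @ B @ (S_w - S_b) + nu * P @ Z)
    return B, P, U
```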
Algorithm 1 gives the concrete steps of BNDDH:
Algorithm 1 BNDDH
Input: Training database $X = \{x_i\}_{i=1}^{n}$, label information $l_i$, bilinear projection sizes $c_1$ and $c_2$, numbers of iterations $t_1$ and $t_2$, hash bits $b$, parameter $\nu$.
1. Compute $M$ and $M_j$, respectively. Initialize $R_1$ as a diagonal matrix.
2. Repeat the following steps $t_1$ times:
Compute $H_b^{Q_1}$ by Equation (14);
Compute $H_w^{Q_1}$ by Equation (15);
Update $R_2$ by solving the eigenvalue–eigenvector problem of $(H_w^{Q_1})^{-1} H_b^{Q_1}$;
Compute $H_b^{Q_2}$ by Equation (16);
Compute $H_w^{Q_2}$ by Equation (17);
Update $R_1$ by solving the eigenvalue–eigenvector problem of $(H_w^{Q_2})^{-1} H_b^{Q_2}$.
3. Initialize $B$ randomly. Compute $Z$ by Equation (18).
4. Repeat the following steps $t_2$ times:
Update $U$ by Equation (22);
Solve $P$ by Equation (26);
Solve $B$ by Equation (28).
Output: Rotation matrices $R_1$ and $R_2$, and the hash encoding matrix $B = \{b_i\}_{i=1}^{n}$.
The optimization process is mainly divided into two parts. First, $R_1$ and $R_2$ are updated, with computational complexities of $O(d_1^3)$ and $O(d_2^3)$. The second part can be divided into three sub-processes, in which the time complexity of updating $U$ is $O(b n^2)$, the time complexity of updating $P$ is $O(c^2 n)$, and the time complexity of updating $B$ is $O(b n^2)$. Therefore, the total time complexity of the optimization process is $O(t_1 (d_1^3 + d_2^3) + t_2 (2 b n^2 + c^2 n))$.

4. Experiment and Discussion

In order to verify the effectiveness of the BNDDH algorithm, we first introduce four experimental databases: CIFAR-10, Yale-B, MNIST and AR. Then, we provide evaluation indexes to evaluate the effects of different algorithms. Finally, for each database, we give the experimental results and the corresponding analysis.

4.1. Databases and Setting

As shown in Figure 1, the CIFAR-10 database involves 10 different classes, including birds, deer, cats, automobiles, airplanes, etc., all of which are 32 × 32 RGB images, 60,000 images in total. This database is used to verify the ability of the BNDDH algorithm to classify the coarse-grained labeled data that are common in real life.
At present, facial recognition is widely used, and it is very useful to be able to process high-dimensional face data. Therefore, we choose the Yale-B face database to evaluate the performance of the proposed algorithm. As shown in Figure 2, this database includes 15 people, each with 11 pictures, and is used to study facial expression changes. Each image is 50 × 50 pixels, giving 165 pictures in total.
The other database we selected is a handwritten numeral set, as shown in Figure 3. The MNIST database is composed of 10 different categories of pictures, corresponding to the handwritten numerals 0 to 9, and the size of each picture is 28 × 28 pixels. If handwritten data can be recognized quickly and accurately by a computer, it will free up the labor currently required for manual input.
The accuracy of facial recognition in the real world will be affected by expression, occlusion, and lighting conditions. The AR face database is used to verify the robustness of the proposed algorithm. As shown in Figure 4, the database contains 120 people, each with 26 pictures. The size of each picture is 50 × 40 pixels.
To verify the effectiveness of the BNDDH algorithm proposed in this paper, we select several classical hashing algorithms for performance comparison, including SH [4], PCAH [7], SDDH [11], 2DITQ [19], LDDH [19], BPBC [21], BSDH [23], LSH [29], SKLSH [30], SP [31], and DSH [32]. In the experiments, five different hash-code lengths are tested: 8, 16, 32, 48 and 64 bits. To ensure that the results are reliable, we set $t_1 = 10$ and $t_2 = 5$. The iteration numbers of the other comparison algorithms adopt the convergence settings used in the experiments of the original papers.
In the experiments, the Hamming radius is uniformly set to 2; that is, we judge whether the data retrieved within a Hamming radius of 2 are correct. We evaluate the difference between the classification of each algorithm and the ground truth. The precision rate, recall rate, precision–recall curves and mean average precision (MAP) are used to evaluate the performance of the algorithms.
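For reference, precision within a Hamming radius of 2 and MAP can be computed from binary codes as sketched below; this is our own implementation of the standard metrics, not the evaluation code used in the experiments:

```python
import numpy as np

def hamming_distance(B_query, B_db):
    """Hamming distances between {-1,+1} codes: b x nq queries vs b x nd database."""
    b = B_query.shape[0]
    return (b - B_query.T @ B_db) / 2          # inner product converted to distance

def precision_at_radius(B_query, B_db, y_query, y_db, radius=2):
    """Mean precision of the items retrieved within the given Hamming radius."""
    dist = hamming_distance(B_query, B_db)
    precisions = []
    for i in range(dist.shape[0]):
        retrieved = np.where(dist[i] <= radius)[0]
        if retrieved.size:
            precisions.append(np.mean(y_db[retrieved] == y_query[i]))
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(B_query, B_db, y_query, y_db):
    """MAP with ranking by Hamming distance; relevance = matching class label."""
    dist = hamming_distance(B_query, B_db)
    aps = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        rel = (y_db[order] == y_query[i]).astype(float)
        if rel.sum() == 0:
            continue
        precision_at_k = np.cumsum(rel) / (np.arange(rel.size) + 1)
        aps.append((precision_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))
```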

4.2. Experimental Results

4.2.1. Experiment on the CIFAR-10 Database

In this part, we test the proposed algorithm on the CIFAR-10 database. Inspired by reference [8], we extract deep two-dimensional features of the CIFAR-10 database using a Convolutional Neural Network (CNN) [33]. The two-dimensional AlexConv5 feature (256 × 36) is used to replace the original 32 × 32 RGB image in the experiments. We select 10,000 images as the training set and 1000 images as the test set. The precision–recall curves of the experiment are shown in Figure 5. The BNDDH algorithm shows good performance on different hash bits: even when the hash code is short, the BNDDH algorithm is still better than the other compared hashing algorithms.
Figure 6 shows the precision and recall rates of the experiment for hash bits from 8 to 64. When the hash bit is 32, the BNDDH algorithm performs best; the precision rate is about 60%, and the recall rate is close to 60%. Among all the comparison algorithms, the hash codes learned by the PCAH algorithm are the worst, possibly because the PCAH algorithm relaxes the discrete constraint in the learning process, transforms the hash-code-solving problem into a continuous function-solving problem, and obtains a suboptimal hashing representation. Therefore, it is necessary to consider direct constraints on the discrete hash codes.
The MAP results are shown in Table 2. It can be found that the MAP of the BNDDH algorithm is optimal under different hash bits and is about 10% higher than that of SDDH and 2% higher than that of LDDH. The SDDH algorithm is a supervised hashing algorithm; it learns binary hash codes according to the class labels, and is therefore better than the unsupervised hashing algorithms compared in the experiment. Because images sharing a category label in the CIFAR-10 database are not the same object instance, its effect is inferior to the BNDDH algorithm; we believe that the retrieval of the BNDDH algorithm is more accurate. LDDH adopts the idea of 2DLDA to reduce the dimension of two-dimensional images and discretizes the hash codes with reference to SDH. From the experimental results, it can be seen that LDDH is better than the other comparison algorithms; these results prove that it is effective to consider the two-dimensional features.

4.2.2. Experiment on the Yale-B Database

Different from CIFAR-10, the same object in the Yale-B database is marked with the same label. On this database, we analyze the performance of the BNDDH algorithm. The accuracy and recall under different hash bits are shown in Figure 7. It can be seen that when the hash code is equal to 8, the accuracy of the BNDDH algorithm is not as good as the SDDH algorithm and the 2DITQ algorithm. However, the BNDDH algorithm is better than other comparison algorithms in general.
As shown in Table 3, when the hash codes are 8, 16, 32, 48 and 64, the MAP of BNDDH exceeds 0.65; when the hash code is 48, the MAP value is close to 0.8. The performance of the ITQ algorithm is generally consistent with that of the 2DITQ algorithm, and the MAP value of the 2DITQ algorithm is higher than that of the ITQ algorithm, except when the hash code is 48. We think it is useful to consider the two-dimensional characteristics of the image.

4.2.3. Experiment on the MNIST Database

The experimental results on the MNIST database are shown in Table 4. It can be seen that when the hash code is short, the advantage of the BNDDH algorithm is obvious. When the hash bit equals 8, the MAP of every comparison algorithm is lower than 0.3, while the MAP of the BNDDH algorithm proposed in this paper is close to 0.6. When the number of hash bits increases, the advantage of BNDDH becomes slight, which may be because the size of each picture in the MNIST database is only 28 × 28, while the BNDDH algorithm is more suitable for high-dimensional databases. Among all the comparison algorithms, the best performance comes from the BPBC algorithm, which is an unsupervised hashing algorithm based on bilinear projection. We believe that it is rational to integrate the idea of bilinear projection into a hashing algorithm, and the experimental results support this point of view.
We choose an unsupervised algorithm with relatively good MAP and the BNDDH algorithm to analyze the running time. The training time and testing time are shown in Figure 8. It can be seen that the training time of all algorithms increases with the increase in hash code. The training time of the DSH algorithm is the shortest, probably because the method is relatively simple. The DSH algorithm uses K-means to partition data, and learns projection vectors through the geometric structure of the data to obtain hash codes. The training time of the BNDDH algorithm is similar to that of another unsupervised bilinear projection BPBC algorithm, but the testing time of BNDDH is relatively long.

4.2.4. Experiment on the AR Database

On the AR face database, we test the performance of the proposed BNDDH algorithm. BSDH and LDDH are selected as the comparison algorithms; both are supervised hashing algorithms based on two-dimensional image learning. The experimental results are shown in Figure 9. Under different hash bits, the MAP of BNDDH is higher than that of the two comparison algorithms. However, the MAP of all three algorithms is not good, which may be due to the many interference factors in the AR database, such as facial expression, illumination, and facial occlusion. It is worth mentioning that the training time of BNDDH is significantly better than that of BSDH and LDDH. With the increase in hash-code length, the training time of BSDH and LDDH increases significantly, while the training time of BNDDH increases very little. This may be because both BSDH and LDDH use the Discrete Cyclic Coordinate Descent method when learning hash codes, which only updates one bit of the hash code at a time and thus takes a lot of time when the hash code is long. It can be said that the time advantage of the BNDDH algorithm is obvious.
Image feature extraction is crucial in the fields of pattern recognition and machine learning. The above experimental results and analysis show that the BNDDH algorithm that is proposed in this paper performs well in feature extraction. Using compressed binary hash codes that are learned by the BNDDH algorithm to represent images is very useful in image storage and retrieval.

4.3. Friedman Test

In order to represent the performance of each algorithm more intuitively, we conducted a Friedman test and a Nemenyi post-hoc test on three databases, and the results are shown in Figure 10. $\delta = q_{\alpha} \sqrt{k(k+1)/(6N)}$ is the critical threshold, where $k$ represents the number of compared hashing algorithms and $N = 5$ represents the number of different hash-code lengths. The test level $\alpha$ is 0.05, so we obtain $q_{\alpha 1} = 3.031$ ($k = 8$) and $q_{\alpha 2} = 3.164$ ($k = 10$). For each algorithm, the dot represents the average value of the MAP over the different hash codes, and the length of the horizontal line represents the value of the critical threshold $\delta$; we obtain $\delta_1 = 4.696$ and $\delta_2 = 3.498$.
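The critical threshold follows directly from the formula above; a short sketch reproducing the $k = 8$ case quoted in the text:

```python
import numpy as np

def nemenyi_critical_difference(q_alpha, k, N):
    """Critical threshold delta = q_alpha * sqrt(k(k+1) / (6N)),
    where k is the number of compared algorithms and N the number of settings."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))

# alpha = 0.05, k = 8 algorithms, N = 5 hash-code lengths
print(nemenyi_critical_difference(3.031, k=8, N=5))   # approx. 4.696, as in the text
```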

4.4. Parameter Analysis

There are three important parameters in this paper: $c_1$, $c_2$ and $\nu$. We analyze and compare the influence of these parameters on CIFAR-10 when the hash-code length is 8, 16, 32, 48 and 64.
In order to reconcile the computational efficiency and the performance of the BNDDH algorithm, we compare the relationship between the sizes $c_1$ and $c_2$ of the projected feature matrix and the value of the MAP under different hash bits. Since the size of the AlexConv5 feature of CIFAR-10 is 36 × 256, the experiments set $c_1$ much smaller than $c_2$. Ideally, we hope to learn the optimal hash codes with $c_1$ and $c_2$ as small as possible: smaller $c_1$ and $c_2$ mean a lower complexity of the algorithm, but often a lower MAP value. As can be seen from Figure 11a–e, the MAP increases as $c_1$ and $c_2$ increase, but when the value of $c_1$ is greater than 16 and the value of $c_2$ is greater than 48, the MAP is basically stable. In practical applications, it is straightforward to find such critical values to ensure good learning.
Figure 11f shows the change in the MAP of the BNDDH algorithm on the CIFAR-10 database when $\nu$ takes values in $\{10^{-8}, 10^{-6}, 10^{-4}, 10^{-2}, 10^{-1}, 10^{1}, 10^{2}, 10^{3}, 10^{4}\}$. We can observe that when $\nu$ is equal to $10^{3}$ or $10^{4}$, the hash-learning ability decreases significantly, while when $\nu$ ranges from $10^{-8}$ to $10^{2}$ the change in the MAP value is not obvious. This finding shows that in practical applications, $\nu$ can be chosen from a wide range without significant parameter tuning.

5. Conclusions

This paper proposed a Bilinear Supervised Neighborhood Discrete Discriminant Hashing algorithm, namely the BNDDH algorithm, which overcomes the shortcomings of the traditional manual transformation of two-dimensional pictures into one-dimensional vectors and applies the idea of bilinear projection to supervised discrete discriminant hashing. Compared with one-dimensional projection, the projection matrices of bilinear projection are smaller, so the calculation memory is also smaller. Different from other supervised hashing methods, the proposed BNDDH algorithm maintains the relationships among the original data samples by constructing new neighborhood graphs, and effectively alleviates the problem where different objects are marked with the same label. In order to reduce the projection loss from Euclidean space to Hamming space, the BNDDH algorithm directly constrains the binary hash codes. Experimental results on four different databases show that, compared with other hashing algorithms, the BNDDH algorithm performs better in similar-image retrieval. However, the advantages of the BNDDH algorithm are not obvious on low-dimensional data. In the future, we will explore self-supervised learning for the BNDDH algorithm to further broaden its application scope.

Author Contributions

Conceptualization, X.C. and M.W.; methodology, M.W. and H.Z.; software, X.C. and C.S.; validation, Z.F., M.W. and X.C.; formal analysis, C.S.; investigation, H.Z.; resources, C.X.; data curation, X.C.; writing—original draft preparation, X.C. and M.W.; writing—review and editing, X.C. and M.W.; visualization, X.C.; supervision, M.W.; project administration, C.S. and X.C.; funding acquisition, C.X. and M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province No. KYCX21_1945; the National Science Foundation of China under Grant Nos. 61876213, 61861033, 61991401, 62172229, 61976117, 71972102, 61976118; the Key R&D Program Science Foundation in Colleges and Universities of Jiangsu Province Grant Nos. 18KJA520005, 19KJA360001, 20KJA520002; the Natural Science Fund of Jiangsu Province under Grant Nos. BK20201397, BK20191409, BK20211295; and the Jiangsu Key Laboratory of Image and Video Understanding for Social Safety of Nanjing University of Science and Technology under Grant J2021-4. This work is also supported by the Future Network Scientific Research Fund Project SRFP-2021-YB-25 and the Jiangxi Province Natural Science Foundation of China (No. 20202ACBL202007). This work is funded in part by the “Qinglan Project” of Jiangsu Universities under Grant D202062032.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

References

  1. Abu Khurma, R.; Aljarah, I.; Sharieh, A.; Abd Elaziz, M.; Damaševičius, R.; Krilavičius, T.S. A Review of the Modification Strategies of the Nature Inspired Algorithms for Feature Selection Problem. Mathematics 2022, 10, 464. [Google Scholar] [CrossRef]
  2. Wan, M.; Yao, Y.; Zhan, T.; Yang, G. Supervised Low-Rank Embedded Regression (SLRER) for Robust Subspace Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1917–1927. [Google Scholar] [CrossRef]
  3. Berisha, V.; Krantsevich, C.; Hahn, P.R.; Hahn, S.; Dasarathy, G.; Turaga, P.; Liss, J. Digital medicine and the curse of dimensionality. NPJ Digit. Med. 2021, 4, 1–8. [Google Scholar] [CrossRef]
  4. Weiss, Y.; Torralba, A.; Fergus, R. Spectral hashing. Adv. Neural Inf. Processing Syst. 2009, 282, 1753–1760. [Google Scholar]
  5. Gong, Y.; Lazebnik, S.; Gordo, A.; Perronnin, F. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2916–2929. [Google Scholar] [CrossRef] [Green Version]
  6. Wang, J.; Kumar, S.; Chang, S.F. Semi-supervised hashing for large-scale search. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2393–2406. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. He, Y.; Wang, J.; Zhong, X.; Mei, L.; Wu, Z. PCAH: A PCA-based hierarchical clustering method for visual words construction. In Proceedings of the 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, Shenzhen, China, 4–7 May 2015; pp. 1009–1018. [Google Scholar]
  8. Turk, M.; Pentland, A. Eigenfaces for recognition. Cognit. Neurosci. 1991, 3, 71–86. [Google Scholar] [CrossRef]
  9. Shen, F.; Shen, C.; Liu, W.; Shen, H.T. Supervised Discrete Hashing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 37–45. [Google Scholar]
  10. Jie, G.; Liu, T.; Sun, Z.; Tao, D.; Tan, T. Supervised discrete hashing with relaxation. IEEE Trans. Neural Netw. Learn. Syst. 2016, 29, 608–617. [Google Scholar]
  11. Cui, Y.; Jiang, J.; Lai, Z.; Hu, Z.; Wong, W.K. Supervised discrete discriminant hashing for image retrieval. Pattern Recognit. 2018, 78, 79–90. [Google Scholar] [CrossRef]
  12. Yang, J.; Zhang, D.; Frangi, A.F.; Yang, J.Y. Two-dimensional PCA: A new approach to appearance-based face representation and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 131–137. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Li, M.; Yuan, B. 2D-LDA: A statistical linear discriminant analysis for image matrix. Pattern Recogn. Lett. 2005, 26, 527–532. [Google Scholar] [CrossRef]
  14. Belhumeur, P.N.; Hespanha, J.P.; Kriengman, D.J. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 711–720. [Google Scholar] [CrossRef] [Green Version]
  15. Hu, D.; Feng, G.; Zhou, Z. Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition. Pattern Recognit. 2007, 40, 339–342. [Google Scholar] [CrossRef]
  16. Wan, M.; Chen, X.; Zhan, T.; Xu, C.; Yang, G.; Zhou, H. Sparse Fuzzy Two-Dimensional Discriminant Local Preserving Projection (SF2DDLPP) for Robust Image Feature Extraction. Inf. Sci. 2021, 563, 1–15. [Google Scholar] [CrossRef]
  17. Lu, Y.; Yuan, C.; Lai, Z.; Li, X.; Wong, W.K.; Zhang, D. Nuclear Norm-Based 2DLPP for Image Classification. IEEE Trans. Multimed. 2017, 19, 2391–2403. [Google Scholar] [CrossRef]
  18. Liang, X.; Tang, Z.; Xie, X.; Wu, J.; Zhang, X. Robust and fast image hashing with two-dimensional PCA. Multimed. Syst. 2021, 27, 389–401. [Google Scholar] [CrossRef]
  19. Ding, Y.; Wong, W.K.; Lai, Z. Study on 2D Feature-Based Hash Learning. IEEE Trans. Multimed. 2020, 22, 1298–1309. [Google Scholar] [CrossRef]
  20. Kim, S.; Choi, S. Bilinear random projections for locality-sensitive, binary codes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1338–1346. [Google Scholar]
  21. Gong, Y.; Kumar, S.; Rowley, H.A.; Lazebnik, S. Learning binary codes for high-dimensional data using bilinear projections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 484–491. [Google Scholar]
  22. Liu, Y.; Bai, X.; Yan, C.; Zhou, J. Bilinear discriminant analysis hashing: A supervised hashing approach for high-dimensional data. In Proceedings of the Asian Conference of Computer Vision, Taipei, Taiwan, 21–23 November 2016; pp. 297–310. [Google Scholar]
  23. Ding, Y.; Wong, W.K.; Lai, Z.; Zhang, Z. Bilinear Supervised Hashing Based on 2D Image Features. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 590–602. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, Y.; Feng, L.; Liu, S.; Sun, M. Global similarity preserving hashing. Soft Comput. 2018, 22, 2105–2120. [Google Scholar] [CrossRef]
  25. Mao, M.; Zheng, Z.; Chen, Z.; Liu, H.; He, X.; Ye, R. Two-dimensional pca hashing and its extension. In Proceedings of the 2016 23rd International Conference on Pattern Recognition, Cancun, Mexico, 4–8 December 2016; pp. 1624–1629. [Google Scholar]
  26. Liu, W.; Wang, J.; Kumar, S.; Chang, S.F. Hashing with Graphs. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011; pp. 1–8. [Google Scholar]
  27. Zhang, L.; Du, G.; Liu, F.; Tu, H.; Shu, X. Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification. IEEE Trans. Neur. Net. Lear. 2021, 1–11. [Google Scholar] [CrossRef]
  28. Shen, F.; Zhou, X.; Yu, J.; Yang, Y.; Liu, L.; Shen, H.T. Scalable Zero-Shot Learning via Binary Visual-Semantic Embeddings. IEEE Trans. Image Process. 2019, 28, 3662–3674. [Google Scholar] [CrossRef] [PubMed]
  29. Datar, M. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Twentieth Annual Symposium on Computational Geometry, Brooklyn, NY, USA, 8–11 June 2004; pp. 253–262. [Google Scholar]
  30. Raginsky, M.; Lazebnik, S. Locality-sensitive binary codes from shift-invariant kernels. Adv. Neural Inf. Processing Syst. 2009, 22, 1509–1517. [Google Scholar]
  31. Yan, X.; He, K.; Kohli, P.; Sun, J. Sparse projections for high-dimensional binary codes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3332–3339. [Google Scholar]
  32. Jin, Z.; Li, C.; Lin, Y.; Cai, D. Density Sensitive Hashing. IEEE Trans. Cybern. 2017, 44, 1362–1371. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology, Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar]
Figure 1. Some samples of the CIFAR-10 database.
Figure 2. Some samples of the Yale-B database.
Figure 3. Some samples of the MNIST database.
Figure 4. Some samples of the AR database.
Figure 5. Precision–recall on the CIFAR-10 database (a) @8-bits (b) @16-bits (c) @32-bits (d) @48-bits (e) @64-bits.
Figure 6. Precision and recall on the CIFAR-10 database (a) precision (b) recall.
Figure 7. Precision and recall on the Yale-B database (a) precision (b) recall.
Figure 8. Training time on the MNIST database. (a) training time (b) testing time.
Figure 9. MAP and training time on the AR database (a) MAP (b) training time.
Figure 10. The Friedman test of the MAP (%) on the (a) CIFAR-10 database; (b) Yale-B database; (c) MNIST database.
Figure 11. The heatmap of parameter selection experiments on the CIFAR-10 database. The MAP changes with c1 and c2 when: (a) hash bits equal 8; (b) hash bits equal 16; (c) hash bits equal 32; (d) hash bits equal 48; (e) hash bits equal 64. (f) The MAP changes with ν on different hash bits.
Table 1. Explanation of main notations.

| Notation | Size | Explanation |
| $q$ | - | Total number of classes |
| $l_j$ $(j = 1, 2, \dots, q)$ | - | Label information |
| $k_j$ $(j = 1, 2, \dots, q)$ | - | Number of samples per class |
| $n$ | - | Total number of samples |
| $d_1, d_2$ | - | Size of the feature matrix |
| $d$ | - | The product of $d_1$ and $d_2$ |
| $c_1, c_2$ | - | Size of the feature matrix after projection |
| $c$ | - | The product of $c_1$ and $c_2$ |
| $b$ | - | Length of the hash codes |
| $o$ | - | Number of nearest neighbors |
| $X = [x_1, \dots, x_n]$ | $d_1 \times d_2 \times n$ | Input data matrix |
| $M$ | $d_1 \times d_2$ | Average value of all samples |
| $M_j$ | $d_1 \times d_2$ | Average value of the samples of the $j$-th class |
| $B = [b_1, \dots, b_n]$ | $b \times n$ | Binary hashing matrix |
| $Z$ | $c \times n$ | Bilinear projection feature matrix |
| $A$ | $b \times b$ | Similarity measure matrix |
| $S_b$ | $n \times n$ | Between-class relationship matrix |
| $S_w$ | $n \times n$ | Within-class relationship matrix |
| $P$ | $b \times c$ | Projection matrix |
| $R_1, R_2$ | $d_1 \times c_1$, $d_2 \times c_2$ | Bilinear rotation matrices |
| $H(X)$ | - | Hash function |
Table 2. MAP of different hash code lengths on the CIFAR-10 database.

| Method | 8 bits | 16 bits | 32 bits | 48 bits | 64 bits |
| LSH | 0.0819 | 0.1240 | 0.1792 | 0.2232 | 0.2589 |
| AGH | 0.0917 | 0.1421 | 0.1922 | 0.1544 | 0.1675 |
| DSH | 0.1516 | 0.1959 | 0.3185 | 0.2971 | 0.3717 |
| PCAH | 0.1202 | 0.1164 | 0.1057 | 0.1020 | 0.1117 |
| SH | 0.1151 | 0.1306 | 0.1889 | 0.2137 | 0.2490 |
| SKLSH | 0.0653 | 0.0900 | 0.1434 | 0.2711 | 0.2926 |
| SDDH | 0.3270 | 0.4472 | 0.4228 | 0.4093 | 0.4801 |
| LDDH | 0.4204 | 0.4842 | 0.5337 | 0.5459 | 0.5526 |
| BNDDH | 0.4375 | 0.5030 | 0.5519 | 0.5624 | 0.5644 |
Table 3. MAP of different hash code lengths on the Yale-B database.

| Method | 8 bits | 16 bits | 32 bits | 48 bits | 64 bits |
| LSH | 0.3808 | 0.4698 | 0.5696 | 0.6811 | 0.6133 |
| DSH | 0.4593 | 0.6309 | 0.6150 | 0.6222 | 0.6755 |
| PCAH | 0.2850 | 0.2647 | 0.3239 | 0.3080 | 0.2959 |
| SH | 0.2573 | 0.2637 | 0.2872 | 0.3444 | 0.3064 |
| SKLSH | 0.2460 | 0.3854 | 0.5456 | 0.6031 | 0.7194 |
| SP | 0.5285 | 0.6001 | 0.6436 | 0.6699 | 0.6507 |
| ITQ | 0.5630 | 0.6480 | 0.6490 | 0.6916 | 0.7011 |
| 2DITQ | 0.5736 | 0.6493 | 0.6784 | 0.6632 | 0.7124 |
| SDDH | 0.5891 | 0.6302 | 0.694 | 0.6786 | 0.6851 |
| BNDDH | 0.6595 | 0.6776 | 0.6934 | 0.7923 | 0.7687 |
Table 4. MAP of different hash code lengths on the MNIST database.

| Method | 8 bits | 16 bits | 32 bits | 48 bits | 64 bits |
| LSH | 0.0812 | 0.2103 | 0.3220 | 0.4586 | 0.5015 |
| DSH | 0.0840 | 0.1968 | 0.3916 | 0.5540 | 0.6500 |
| PCAH | 0.1987 | 0.3524 | 0.3833 | 0.3654 | 0.3427 |
| SH | 0.1410 | 0.2971 | 0.4077 | 0.4573 | 0.4624 |
| SKLSH | 0.0583 | 0.0557 | 0.2134 | 0.2265 | 0.3093 |
| SP | 0.2055 | 0.4763 | 0.6382 | 0.6811 | 0.7356 |
| BPBC | 0.2533 | 0.5158 | 0.6829 | 0.7365 | 0.7783 |
| 2DITQ | 0.2731 | 0.3653 | 0.3715 | 0.4598 | 0.4641 |
| BNDDH | 0.5859 | 0.6676 | 0.6934 | 0.7409 | 0.7686 |
