Article

A Modified LBP Operator-Based Optimized Fuzzy Art Map Medical Image Retrieval System for Disease Diagnosis and Prediction

1 Department of Computing Technologies, School of Computing, College of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur, Chennai 603203, India
2 Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602107, India
3 Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, India
4 Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
5 Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
6 School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
* Author to whom correspondence should be addressed.
Biomedicines 2022, 10(10), 2438; https://doi.org/10.3390/biomedicines10102438
Submission received: 4 August 2022 / Revised: 19 September 2022 / Accepted: 21 September 2022 / Published: 29 September 2022
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)

Abstract

Medical records generated in hospitals are treasures for academic research and future reference. Medical Image Retrieval (MIR) systems contribute significantly to locating the relevant records required for a particular diagnosis, analysis, and treatment. An efficient classifier and an effective indexing technique are required for the storage and retrieval of medical images. In this paper, a retrieval framework is formulated by adopting a modified Local Binary Pattern feature (AvN-LBP) for indexing and an optimized Fuzzy ARTMAP (FAM) for classifying and searching medical images. The proposed indexing method extracts LBP considering information from neighborhood pixels and is robust to background noise. The FAM network is optimized using the Differential Evolution (DE) algorithm (DEFAMNet) with a modified mutation operation to minimize the size of the network without compromising the classification accuracy. The performance of the proposed DEFAMNet is compared with that of other classifiers and descriptors; the classification accuracy of the proposed AvN-LBP operator with DEFAMNet is higher. The experimental results on three benchmark medical image datasets provide evidence that the proposed framework classifies medical images faster and more efficiently with lower computational cost.

1. Introduction

1.1. Motivation

Advanced developments in computer hardware, storage technology, and the availability of intelligent algorithms have provided opportunities for maintaining records, data, and information in digital form. The information available in the stored data leads to faster development, improvement, detection, forecasting, analysis, education, and innovation. The challenge at hand is to frame a methodology that can search and retrieve the required information from these huge digital databases. An overview of a retrieval system architecture is depicted in Figure 1. Keywords relevant to the required information are presented as queries to a search process. The records in the database are indexed using their related keywords. The search process looks for matches between the query and the database index and displays the best matches along with the locations of the records. The queries to the retrieval system, the indexing, and the search algorithm adopted for classification and matching play a vital role in deciding the performance.
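To make the indexing-and-matching flow of Figure 1 concrete, the following is a minimal sketch of a keyword-based inverted index and search; the record identifiers and keywords are purely illustrative and are not taken from any dataset used in this paper.

```python
from collections import defaultdict

# Toy records and their keywords (purely illustrative).
records = {
    "rec_001": ["mri", "brain", "t1"],
    "rec_002": ["ct", "chest", "nodule"],
    "rec_003": ["mri", "brain", "lesion"],
}

# Indexing: map every keyword to the set of records containing it.
index = defaultdict(set)
for rec_id, keywords in records.items():
    for kw in keywords:
        index[kw].add(rec_id)

def search(query_keywords):
    """Rank records by the number of query keywords they match."""
    scores = defaultdict(int)
    for kw in query_keywords:
        for rec_id in index.get(kw, ()):
            scores[rec_id] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search(["mri", "brain"]))   # best-matching records listed first
```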
Health care is one among the various applications of image retrieval systems. The improvements in and availability of advanced technologies and methods of digital storage have created a broad scope for adopting Electronic Medical Record (EMR) systems. EMR systems contain information about the health of an individual [1]. The information in the medical records includes the patient's personal details, administrative data, allergies, vital signs, medical history, lab test results, radiology images, diagnoses, medications, progress details, and follow-up dates. One or more of these pieces of information will carry a feature significant enough to serve as the index for the record and as the key representing the record in the EMR system.
The need is to frame an efficient retrieval system that can extract features to index, search, and retrieve the records relevant to queries on large image databases. The challenge is to develop a well-structured system that is on par with techniques such as human annotation. MIR systems provide prominent scope for developing and improving services in the field of medicine. Many MIR systems have been developed and are available. In this paper, we propose an MIR system adopting LBP-based indexing and a FAM-based classifier with efficient retrieval performance.
The main contributions of the proposed work are outlined below:
The framework adopts a novel LBP operator, namely, AvN-LBP.
The proposed descriptor addresses the issues due to noisy images and includes the information available in the neighborhood pixels.
An optimized FAM network is modeled as a Classifier.
The Differential Evolution (DE) algorithm is used for evolving the FAM network, namely, DEFAM.
An MIR system, namely, DEFAMNet, to retrieve the required images from medical databases, is developed, trained, and tested using benchmark databases.
A modified Mutation operator is implemented during the evolution of the FAM Network.

1.2. Related Work

Visual patterns in images are well represented by texture. LBP is broadly adopted by researchers in image classification studies such as face detection, industrial applications, and pattern recognition. The texture-based LBP descriptor [2] is an efficient and simple operator derived from a pre-defined threshold segmentation method. The neighborhood pixels are compared with a threshold, represented as '0' or '1', and finally converted into a binary number. The computational simplicity and robustness of the LBP operator to variations in illumination make it an obvious choice for various practical applications. Many enhanced and modified variants of LBP operators have been developed to overcome the challenges faced by the original operator and to fit different applications. Modifications are incorporated either by including the features of other operators or by implementing new mathematical processes for estimating the LBP decimal.
Research articles have provided ample evidence of the effectiveness of LBP operators and their variants. An enhanced LBP operator, EF-ALBP [3], which combines the local binary pattern with an edge smoothing filter and an accumulation function, suppresses noise, preserves edges, and extracts the margins of medical images. Experiments were performed on medical images comprising MRI, CT, and X-ray. The results demonstrate the strength and robustness of EF-ALBP compared to other descriptors. A novel operator, namely, Local Binary Patterns Clustering (LC), is used for dermoscopic image segmentation [4]. The operator improves the segmentation process by extracting border lines, geometric details, and target regions. The performance of LC was found to be better compared to 26 popular segmentation algorithms. Color information is combined with the LBP feature to generate a maximal multi-channel LBP (MMLBP) [5] operator. The new MMLBP proved efficient in identifying similarities between images and improves the performance of the CBIR system. The MMLBP operator is smaller in size and performed consistently across different datasets. The Attractive Center-Symmetric LBP (ACS-LBP) and the Hessian matrix are combined to develop a hybrid operator, Hess-ACS-LBP [6]. The new LBP operator has texture analysis capabilities and provides differential texture information. A similar feature with the combined capabilities of LBP and affine-invariant detectors [7] proved its ability to identify key points and descriptors from MRI images to detect convertible mild cognitive impairment. A wavelet-based LBP for classifying X-ray images using a random forest classifier [8] exhibited improved performance. Local and global features represent images and are informative for differentiating objects. A combined feature of the LBP operator and Chebyshev moments [9] is introduced and tested to classify images in the challenging Outex and ALOT datasets. The results were found to be very encouraging and support the superiority of the LBP operator.
Texture patterns from biomedical images [10] are extracted and a weighting aspect is added to form a multi-scale Gabor rotation-invariant local binary pattern (MGRLBP). Using the MGRLBP operator significantly improved the classification capability and reduced the false classification ratio. To include the strength of the local information available in the pixels, a center-symmetric LBP operator (CSLBP) [11] was developed. The operator was tested on databases of different types of images, including medical images from the OASIS dataset, and was found to be effective. The algorithm to estimate CSLBP features is simple and needs less computation time. The length of the operator is also considerably smaller, and the performance is on par with other operators. The influence of noise on the LBP operator makes it less suitable for facial expression recognition. An LBP operator with reduced noise in its feature extraction, estimated using new neighborhoods obtained by averaging pixels in radial directions and adaptive windows, was derived and successfully tested [12]. A hybrid operator combining CLBP and MSCLBP has been developed to perform classification tasks in GeoEye imagery [13] and was found suitable. LBP operators and their variants are based on grayscale images; hence, information is lost during conversion from color images to grayscale images. A color-based LBP extracted directly from color images [14] is used for the classification of color images and is found to be more efficient. An LBP operator in which the color difference sign and color difference magnitude are fused forms a new multiple-channel LBP (MCLBP) [15] that has the potential to perform well in color image classification.
Experiments performed on images in well-known datasets demonstrate the superior performance of descriptors. QuLBP [16], the quantum Local Binary Pattern, has been formulated and tested using MRI and grayscale images. Extensive experimentation results were recorded, and the analysis of the QuLBP approach provided encouraging results. A combination of Local Binary Pattern in three orthogonal planes (LBP-TOP) and a histogram of orientation gradients (HOG-TOP) has been used to learn brain tumors [17]. The results of testing on BRATS 2013 certify the suitability of the proposed framework in detecting brain tumors. The LBP operator in combination with a local information analysis component produced a high classification accuracy of around 99.86% in the identification of biometrics from coronary angiography images [18]. Identifying a more suitable set of LBP variants to represent images assures the best performance of classification systems. Experiments have been performed to explore the capacity of several LBP-based descriptors to represent images; it was found that LBP operators generated with the same radius and number of neighbors best represent the images [19]. HSI images carry a huge amount of spatial and spectral information, and existing feature descriptors incur large computational costs. Classical LBP operators are implemented exclusively for spatial texture representation, so LBP operators used to index HSI images have limitations regarding spatial–spectral information. To overcome this limitation, a Clifford-algebra-based multidimensional LBP (MDLBP) for HSI was proposed [20]. The descriptor is able to extract spatial–spectral features from multiple dimensions, and the test results of HSI classification using MDLBP outperform the LBP variants. A new DR_LBP [21] approach including the information available in neighborhood pixels was framed and implemented for face recognition tasks and produced good results.
The LBP operator occupies a prominent place in texture classification because of its capability to represent the texture features of an image. The operator has a few limitations which, if addressed, would make it an even more powerful tool for applications in sensitive areas such as security, medicine, and defense. The three major shortfalls of an LBP operator are (1) its performance being limited by the quality of texture images caused by low-resolution imaging sensors, size constraints in transmission networks, and transmission loss, (2) its reliance on the center pixel, and (3) the corruption of uniform patterns by the presence of noise.
The retrieval of records similar to the requirements from the database involves classifying, searching, matching, and locating the records. This complete sequence is performed by algorithms, namely, classifiers. Several effective algorithms are available and used as classifiers. Among them, neural networks with deep learning techniques have proven their suitability for Medical Image Classification (MIC). CNN architectures such as AlexNet, ResNet, Inception, DenseNet, etc. are used in many MIC studies. Published medical image classification research has been surveyed with a complete analysis [22]. The survey provides a detailed insight into the experimentation and methodologies used on medical image databases for feature extraction and the various classification techniques.
A Deep Convolutional Neural Network (DCNN) [23] has been trained and implemented for detecting Alzheimer's disease in MRI brain images. The DCNN delivered better performance and enhanced precision in retrieval. The CNN architectures VGG16 and InceptionV3, capsule network training, and Support Vector Machine classifiers were evaluated for their performance in classifying pneumonia from chest X-ray images [24]. The results showed that CNN architectures are more effective and delivered useful performance. CNN and its best-performing frameworks have been discussed in detail [25]. The discussion covers medical imaging tasks such as classification, segmentation, localization, and detection, and the detailed survey provides clarity about CNNs in medical image processing. The tasks include analysis and alignment in the breast, chest, lungs, brain, and other organs. CNNs are also implemented for various other applications such as object detection and segmentation in medical image studies [26]. The algorithms are found to be suitable for processing ultrasound images, endoscopic images, CT, PET, and MRI. A CNN was trained to detect skin lesions from images and to measure border irregularity [27]; the network achieved outstanding performance. A deep-learning CNN network was employed to diagnose infections from X-ray images [28]. The analysis was performed by extracting LBP features from the images, and the results were found to be very encouraging compared to results obtained in other studies. The BreastUNet [29] framework, a CNN with the capability to graft features, has been developed and trained to analyze mitotic nuclei in breast histopathology images. The GLocal Pyramid Pattern, a variant of LBP, is used for texture recognition in breast cancer datasets. The framework is found suitable for successfully classifying images with mitotic and non-mitotic nuclei. Detecting the various types of cells in and around the tumor matrix is very significant and supports classifying the tumor micro-environment for cancer prognostication and research. The availability of real-time datasets plays a vital role and encourages researchers to develop and test algorithms for efficient classification and retrieval networks. A large and diverse dataset of nucleus boundary annotations and class labels, MoNuSAC2020, along with key findings from the associated challenge, has been published [31]. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, 4 organs, and 4 nucleus types. A detailed analysis has been performed by implementing six different algorithms for classification tasks [30]; the evaluation of retrieval and classification algorithms dealt with textured 3D objects, and the results and insights of the work have provided much-required content to the research community. Images received from sensors and transmitted from remote locations suffer a loss of information. The lost information may cause a variety of transformations simultaneously, such as non-rigid deformations (changes in pose), topological noise, and missing parts, a combination of nuisance factors that renders the retrieval process extremely challenging. A total of 15 retrieval algorithms were evaluated in the contest [32], which provides the details of the dataset and presents comparisons among the methods. Intuitionistic fuzzy sets (IFSs) are a competitive tool for decision-making during classification tasks and for resolving ambiguity and vagueness.
A novel similarity-distance technique [33] with a better performance rating has been framed. A comparative analysis is presented to showcase the advantages of the novel similarity-distance over similar existing approaches. Furthermore, the applications of the novel similarity-distance technique in various decision-making situations have been explored. Predicting patient survival with a high degree of accuracy and efficiency is the goal of any retrieval approach. An approach is demonstrated [34] to understand the importance of using classification and feature selection (FS) algorithms to obtain the best results faster, as this is a crucial factor in a patient's survival. After conducting experiments and analyzing the results in terms of error rate and accuracy, it was concluded that the classification algorithms produce better results without being combined with the feature selection filter algorithms. The rich literature provides strong evidence for using an LBP operator for indexing and a neural network-based framework for classification.
From the literature, a few characteristics of images, such as overlapping classes, noisy images, and the sequence in which images are presented for training, influence performance and cause the category proliferation problem in neural networks. This strongly influences the prediction accuracy of the classifier, and the proposed work aims to improve this accuracy.

2. Materials and Methods

The proposed framework performs image indexing using a novel LBP operator, namely, AvN-LBP. The proposed descriptor addresses the issues caused by noisy images by including the information available in the neighborhood pixels. The difficulties faced by stand-alone classification algorithms are addressed by including a neural network-based FAM classifier. To improve the computational efficiency and classification accuracy, the FAM network is optimized: DE is used for evolving the FAM network, namely, DEFAM. An MIR framework, DEFAMNet, which retrieves the required images from benchmark medical image databases by using AvN-LBP for indexing and DEFAM for classification and retrieval, is developed, trained, and tested.

2.1. Introduction to LBP Methodology

An LBP, modeled by Ojala [35,36], is a descriptor used to represent texture images and is estimated at each pixel of an image I of size N × M. $x_{r,n}$ denotes the (n + 1)th pixel at a distance r around the center pixel $x_{0,0}$. The image I represented in Equation (1) is of size 5 × 5 with r = 2; for example, $x_{2,4}$ represents the 5th (fifth) pixel at a distance r = 2 around the center pixel.

$$I_{5\times 5}=\begin{bmatrix}
x_{2,6} & x_{2,5} & x_{2,4} & x_{2,3} & x_{2,2}\\
x_{2,7} & x_{1,3} & x_{1,2} & x_{1,1} & x_{2,1}\\
x_{2,8} & x_{1,4} & x_{0,0} & x_{1,0} & x_{2,0}\\
x_{2,9} & x_{1,5} & x_{1,6} & x_{1,7} & x_{2,15}\\
x_{2,10} & x_{2,11} & x_{2,12} & x_{2,13} & x_{2,14}
\end{bmatrix} \tag{1}$$
The LBP descriptor is a string of binary bits 0 (zero) or 1 (one) based on a mathematical relationship between a particular pixel and the pixels present in its neighborhood. The estimation of LBP is defined in Equation (2).
$$LBP_{p,r}=\sum_{n=0}^{p-1} S\!\left(x_{r,n}-x_{0,0}\right)\,2^{n} \tag{2}$$

where

$$S(x)=\begin{cases}1, & x\ge 1\\ 0, & x< 1\end{cases}$$
Two examples of image patches (a,b) and the corresponding LBP patterns (a1,b1) are shown in Figure 2.
Equation (3) defines the histogram h(k) of length $K = 2^{p}$, which is the LBP descriptor for the complete image.

$$h(k)=\sum_{i=0}^{N}\sum_{j=0}^{M}\delta\!\left(LBP_{p,r}(i,j)-k\right) \tag{3}$$

where $0 \le k \le K-1$ and $LBP_{p,r}(i,j)$ is the LBP pattern of pixel $(i,j)$ of an image I of size N × M.
LBP has many promising features that make it an eminent choice for the representation of texture images. These features include invariance, simple calculation that reduces computational time, few assumptions and parameters, rotation invariance, and good discrimination power. However, LBP suffers from a few weaknesses that need to be addressed before adopting the descriptor for applications expecting superior performance. From Equation (3), it can be noticed that the length of the histogram depends on p and stretches to a length of $K = 2^{p}$; larger p values produce longer histograms. It is also evident from the estimation process that the presence of noise influences the LBP operator: noise in the images produces non-uniform patterns, and the performance of stand-alone classifiers is affected by these non-uniform patterns. To overcome the weaknesses of the LBP descriptor, the next section presents a modified LBP-based descriptor, AvN-LBP, for texture image representation.
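As a concrete illustration of Equations (2) and (3), below is a minimal sketch of the classical LBP computation with p = 8 and r = 1; it uses the eight immediate grid neighbours and the common non-negative difference threshold rather than interpolated circular sampling, which is a simplifying assumption.

```python
import numpy as np

def lbp_8_1(img):
    """Sketch of LBP with p = 8, r = 1 (Equation (2)), eight grid neighbours, threshold at 0."""
    img = np.asarray(img, dtype=np.float64)
    # Neighbour offsets (dy, dx), ordered counter-clockwise around the centre pixel.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for n, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += ((neighbour - centre) >= 0).astype(np.int32) << n   # S(x) * 2^n
    return codes

def lbp_histogram(codes, p=8):
    """Histogram h(k) of length K = 2^p over the whole image (Equation (3))."""
    return np.bincount(codes.ravel(), minlength=2 ** p)

patch = np.random.randint(0, 256, (64, 64))
print(lbp_histogram(lbp_8_1(patch)).shape)   # (256,)
```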

2.2. Proposed Approach

The proposed approach is modeled explicitly to include the combined information of the central pixel and the pixels in its neighborhood; hence, it retains the information coming from the neighbors and minimizes the effects of noise. It produces consistent patterns and is discriminative and robust to noise. The AvN-LBP operator of an image is estimated using Equation (4).
$$AvN\text{-}LBP_{p,r}=\sum_{n=0}^{p-1} S\!\left(x_{r,n}-\mu_{r}\right)\,2^{n} \tag{4}$$

where

$$S(x)=\begin{cases}1, & x\ge 0\\ 0, & x< 0\end{cases}$$

$$\mu_{r}=\frac{1}{p}\sum_{n=0}^{p-1} x_{r,n}$$
Here, $x_{r,n}$ is computed by averaging the pixels in an image patch lying on an angular sector of a circle with radius R = 2, 3 and θ = 15°, 30°, 45°. If θ = 45°, the pixels within the circle of radius R around the center pixel are divided into eight sectors. The example in Figure 3 provides the steps for the calculation of AvN-LBP (a short code sketch following Figure 3 repeats this example). The proposed AvN-LBP descriptor has the following major advantages:
  • Thresholding at $\mu_r$ tends to make local neighborhood vectors almost zero-mean. Hence, the proposed descriptor is not affected by grayscale changes and is resistant to lighting effects.
  • Since the threshold is estimated from the neighborhood pixels, the pattern is more discriminative compared to LBP. Figure 4 shows the discriminative strength of the proposed descriptor.
  • Weak edges are better preserved by AvN-LBP. Analyzing Figure 5, it can be noticed that the LBP patterns do not reflect the actual distribution of pixels as well as the proposed descriptor does.
  • The descriptor is less influenced by the presence of noise. Figure 6 provides a comparison between the signal-to-noise ratio (SNR) and the classification accuracy for the proposed AvN-LBP and conventional LBP. The results provide evidence of the robustness of the proposed descriptor for different levels of Gaussian noise added to the images.
Figure 3. Steps detailed for the calculation of the proposed AvN-LBP descriptor for the center pixel in an example image patch. (a) Image patch and sectors for R = 3, p = 8, and θ = 45°. (b) Average value of the circular sector = sum(185, 163, 144, 190, 132, 146)/6 = 160.33. (c) The AvN-LBP pattern is 01100100 for $\mu_r$ = sum(160.33, 157.5, 172.66, 157.33, 161.66, 172.66, 192.83, 157.83)/8 = 166.26, and AvN-LBP = 100.
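To make the worked example of Figure 3 concrete, the following minimal sketch reproduces the per-pixel AvN-LBP code of Equation (4) starting from the eight sector averages listed in the caption; the sector-averaging step over the raw patch is omitted.

```python
import numpy as np

# Sector averages for the example patch of Figure 3 (step (b), computed beforehand).
sector_means = np.array([160.33, 157.5, 172.66, 157.33, 161.66, 172.66, 192.83, 157.83])

mu_r = sector_means.mean()                        # threshold mu_r of Equation (4)
bits = (sector_means >= mu_r).astype(int)         # S(x_{r,n} - mu_r), n = 0 .. 7
code = int(np.sum(bits * (2 ** np.arange(8))))    # sum of S_n * 2^n

print("".join(map(str, bits[::-1])), code)        # pattern written MSB-first and its decimal: 01100100 100
```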

2.3. Fuzzy ARTMAP Architectures

2.3.1. Fuzzy ARTMAP (FAM)

The FAM architecture in Figure 7 has two ART modules, $ART_a$ and $ART_b$, interconnected by a mapping field. The unlabeled descriptors of images from a class are submitted to input (a) of module $ART_a$, and the labeled descriptor of the same class is given to input (b) of $ART_b$. The map field relates the descriptors to their labeled counterparts. In FAM, learning ensures that each category is connected to one node representing a particular class in the database. The classes of all categories are identified and labeled with their corresponding classes during the learning process. Each pair of patterns (a, b) given as an input to the FAM classifier during the learning process must belong to correctly labeled classes.
In case the input pattern to $ART_a$ does not choose a class associated with the one presented to $ART_b$, FAM performs a lateral reset operation and the vigilance parameter is varied; thus, the network selects a different suitable category. The categories in $ART_a$ and $ART_b$ that are not committed to patterns will not have associations in the map field during the initial stages of the learning process. The focus of this work is classification learning: the images are classified into their respective classes. The output layer F2 of FAM will have 10 nodes to represent a database with 10 classes. A FAM network is trained to identify the classes in a database and used as a classifier in Content-Based Image Retrieval (CBIR) systems to organize and search the images in the database based on their features. Images with the same features are considered similar and grouped into the same class. Algorithm 1 provides the algorithm for training a FAM network. After proper training, a FAM network is capable of identifying the class of query images submitted to the CBIR system.
Algorithm 1: Training a FAM network
  • FNT = Training(F, l) // FNT: trained FAM network
  • // F: feature vector; l: label (class of the feature vector)
  • Initialize ρ, NE = 0 // ρ: vigilance parameter; NE: count of training epochs
  • While NE < number of training epochs
  • Ii = F = (a1, a2, …, ad) // d: dimension of the feature vector
  • AI = (a1, a2, …, ad, 1 − a1, 1 − a2, …, 1 − ad) // complement coding
  • if AI = first input of label l
  • W = AI
  • W ← l
  • else for (all j) compute Tj(AI) = |AI ∧ Wj|/(α + |Wj|)
  • J = argmax(Tj(AI)) // J: winner node
  • if |AI ∧ WJ|/|AI| ≥ ρ // vigilance test
  • if l = J: W_J^new = W_J^old + β(AI ∧ W_J^old)
  • else: ρ = MF_J(AI) + ε // match tracking
  • While (more winner nodes are available)
  • While (more training patterns are available)
  • End
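A minimal sketch of the core operations in Algorithm 1 (complement coding, the choice function T_j, the vigilance test, match tracking, and the weight update) is given below. It is a single-module simplification in which the class label takes the place of a full $ART_b$ module, features are assumed to be scaled to [0, 1], and the weight update is written in the standard fuzzy ARTMAP form.

```python
import numpy as np

def complement_code(a):
    """AI = (a, 1 - a): complement coding of a feature vector scaled to [0, 1]."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fam_train(features, labels, rho=0.9, alpha=0.001, beta=0.8, epochs=1):
    """Sketch of Algorithm 1: returns the category weights W and their class labels."""
    W, W_labels = [], []
    for _ in range(epochs):
        for a, label in zip(features, labels):
            AI = complement_code(a)
            rho_t = rho                      # working vigilance for this pattern
            disabled = set()                 # categories reset while searching for a winner
            while True:
                if len(disabled) == len(W):  # no committed node fits: create a new category
                    W.append(AI.copy())
                    W_labels.append(label)
                    break
                # Choice function T_j = |AI ^ W_j| / (alpha + |W_j|) over the enabled nodes.
                T = [np.minimum(AI, w).sum() / (alpha + w.sum()) if j not in disabled else -np.inf
                     for j, w in enumerate(W)]
                J = int(np.argmax(T))
                match = np.minimum(AI, W[J]).sum() / AI.sum()     # vigilance test |AI ^ W_J| / |AI|
                if match < rho_t:
                    disabled.add(J)                               # reset: try the next winner
                elif W_labels[J] != label:
                    rho_t = match + 1e-6                          # match tracking: raise vigilance
                    disabled.add(J)
                else:
                    W[J] = beta * np.minimum(AI, W[J]) + (1 - beta) * W[J]   # learning update
                    break
    return W, W_labels

# Toy usage: three 2-D features in [0, 1] belonging to two classes.
W, W_labels = fam_train([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8]], [0, 0, 1])
print(len(W), W_labels)
```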
Depending on the sequence of input images during the training phase of the classifier, the presence of similar characteristics between images across different classes in the dataset (overlapping classes) and the presence of noise are major reasons for the increase in the size of FAM networks. This problem is known as the category proliferation problem [36]. It reduces the precision of image categorization and also increases the computational cost.

2.3.2. Non-Proliferation Fuzzy ARTMAP (NPFAM)

NPFAM is a FAM architecture with a modified learning process that addresses the category proliferation problem. The NPFAM framework is adopted for image retrieval [37] with an inter-ART reset model that restricts the growth of the number of categories which would otherwise result in larger networks and category proliferation. The model decides whether a newly created category is required and skips its creation if it is found unnecessary. In addition, to ensure classification accuracy, an offline learning model is included, and the necessity of a category is verified before deciding on its deletion. Probabilistic data for connected one-to-many relationships are stored in $w_{jk}^{ab}$, and the probability of an inter-ART reset during prediction is stored in the weight $v_{jk}^{ab}$. These weights define the entropy (coverage) of the training set.

2.3.3. Differential Evolution of FAM Networks (DEFAM)

DEFAM [38] is an optimized FAM architecture used for image retrieval. The architecture is created by repeatedly applying DE operators to an initial population of trained FAM networks. The initial population is generated by training FAM networks for image classification under all the conditions that cause the category proliferation problem; as a result, large networks are created. A network evolved from this population of trained networks using DE is able to deliver high-precision classification with a smaller classifier network. In the proposed work, the complex problem of optimizing the FAM network is addressed by adopting an advanced version of the DE algorithm that uses neighborhood mutation and opposition-based learning [39]. The DE algorithm searches the space of solutions guided by the differences between individuals. The basic idea is to generate a mutation vector by differencing and scaling two vectors of a population and adding the result to a third vector in the same population. The mutation vector is crossed with the parent vector with a predefined probability to generate a target vector. Then, the fitter vectors among the target and parent vectors are identified using a fitness function and proceed to the next generation. The basics of the adopted DEFAM evolution process, including a brief explanation of the proposed mutation strategy, are outlined below.

Initialization

Each of the trained FAM networks is represented by D dimensional vectors of size M. Similarly, N networks are considered as the initial population. Each individual vector shall be expressed as in Equation (5).
$$x_{i}(G)=\left(x_{i1},\,x_{i2},\,x_{i3},\,\ldots,\,x_{iD}\right) \tag{5}$$
where G represents the generation and i = 1 to N.

Mutation

The mutation operation generates a mutation vector $V_{i,G}$ corresponding to each target vector $x_{i,G}$ of the present population by employing mutation strategies. Equations (6)–(9) provide the most commonly used mutation strategies:

$$\text{DE/rand/1:}\quad V_{i,G}=x_{r1,G}+F\cdot\left(x_{r2,G}-x_{r3,G}\right) \tag{6}$$

$$\text{DE/best/1:}\quad V_{i,G}=x_{best,G}+F\cdot\left(x_{r1,G}-x_{r2,G}\right) \tag{7}$$

$$\text{DE/rand-to-best/1:}\quad V_{i,G}=x_{i,G}+F\cdot\left(x_{best,G}-x_{i,G}\right)+F\cdot\left(x_{r1,G}-x_{r2,G}\right) \tag{8}$$

$$\text{DE/rand/2:}\quad V_{i,G}=x_{r1,G}+F\cdot\left(x_{r2,G}-x_{r3,G}\right)+F\cdot\left(x_{r4,G}-x_{r5,G}\right) \tag{9}$$

where $r_1, r_2, r_3, r_4$, and $r_5$ are randomly selected integers in (1, M), F is the scaling factor, and $x_{best,G}$ is the individual vector evaluated with the best fitness in the current generation.
DE and particle swarm algorithms face difficulties in balancing local and global search abilities. During the evolution process, relying on local search leads to premature convergence while, on the other side, relying on global search degrades the exploring ability and increases the convergence time. Neighborhood mutation modules [39,40] provide promising results and avoid convergence of the DE algorithm to a local optimum. Influenced by the capability and results of neighborhood approaches, a modified version of the strategy in Equation (8), based on the neighborhood approach, is framed as in Equation (10).
$$\text{DE/neigh-to-best/1:}\quad V_{i}^{G}=x_{r3}^{G}+F_{1}\cdot\left(x_{best}^{G}-x_{r3}^{G}\right)+F_{2}\cdot\left(x_{r1}^{G}-x_{r2}^{G}\right) \tag{10}$$
The proposed strategy in Equation (10) avoids premature convergence, restricts incompetent mutation operations, involves individual vectors with high fitness values so that convergence is reached faster, and avoids large-scale global convergence. Algorithm 2 depicts the steps involved in the proposed mutation technique.
Algorithm 2: Steps for mutation
  • Input = x^G
  • For all x^G, calculate the fitness value using Equation (13)
  • Sort x^G based on its fitness value
  • Select x_best^G and 30% to 60% of the top-ranked x^G
  • Implement the mutation operation based on Equation (10)
  • End
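A minimal sketch of the mutation step of Algorithm 2 using the DE/neigh-to-best/1 strategy of Equation (10) is given below; the population is assumed to be a NumPy array of flattened network parameter vectors, lower fitness values are assumed to be better (F(p) is minimized), and drawing r1, r2, r3 from the top-ranked subset is our reading of the neighborhood selection in Algorithm 2.

```python
import numpy as np

def neigh_to_best_mutation(pop, fitness, F1=0.5, F2=0.5, top_frac=0.5, rng=None):
    """Equation (10): V_i = x_r3 + F1*(x_best - x_r3) + F2*(x_r1 - x_r2),
    with r1, r2, r3 drawn from the top-ranked portion of the population."""
    rng = rng or np.random.default_rng()
    fit = np.array([fitness(x) for x in pop])
    order = np.argsort(fit)                              # lower fitness assumed better
    x_best = pop[order[0]]
    top = order[:max(3, int(top_frac * len(pop)))]       # 30-60% top-ranked individuals
    mutants = np.empty_like(pop)
    for i in range(len(pop)):
        r1, r2, r3 = rng.choice(top, size=3, replace=False)
        mutants[i] = pop[r3] + F1 * (x_best - pop[r3]) + F2 * (pop[r1] - pop[r2])
    return mutants

# Toy usage: 10 candidate parameter vectors of dimension 5, fitness = squared norm.
pop = np.random.default_rng(0).random((10, 5))
print(neigh_to_best_mutation(pop, lambda x: float(np.sum(x ** 2))).shape)   # (10, 5)
```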

Crossover

A new test vector $U_{i,G}=(u_{1,G}, u_{2,G}, \ldots, u_{D,G})$ is generated during the process by employing a binomial crossover between the target vector $x_{i,G}$ and the mutation vector $v_{i,G}$. The procedure is defined in Equation (11).

$$u_{j,G}=\begin{cases}v_{j,G}, & \text{if } rand_{j}(0,1) > CR \ \text{or}\ j=j_{rand}\\ x_{j,G}, & \text{otherwise}\end{cases},\qquad j=1,2,3,\ldots,D \tag{11}$$

where CR is the crossover rate, $CR\in[0,1]$, and $j_{rand}\in[1,D]$.
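A minimal sketch of the binomial crossover of Equation (11) follows; note that the mutant component is taken when $rand_j(0,1) > CR$, following the inequality as written above.

```python
import numpy as np

def binomial_crossover(x, v, CR=0.7, rng=None):
    """Equation (11): the test vector U takes the mutant component when rand_j > CR or j = j_rand."""
    rng = rng or np.random.default_rng()
    D = x.shape[0]
    j_rand = rng.integers(D)                 # at least one component always comes from the mutant
    take_mutant = rng.random(D) > CR
    take_mutant[j_rand] = True
    return np.where(take_mutant, v, x)

# Toy usage with 5-dimensional target and mutant vectors.
x = np.zeros(5); v = np.ones(5)
print(binomial_crossover(x, v))
```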

Selection

Selection is performed between the target vector $x_{i,G}$ and the test vector $U_{i,G}$ based on their fitness. The selected vectors become part of the population used for evolving the network in the next generation. Selection is performed by evaluating the fitness of $x_{i,G}$ and $U_{i,G}$ using the function f(p). The selection operation follows Equation (12).

$$X_{i,G+1}=\begin{cases}U_{i,G}, & \text{if } f\!\left(U_{i,G}\right) > f\!\left(X_{i,G}\right)\\ x_{i,G}, & \text{otherwise}\end{cases} \tag{12}$$

The fitness function for optimizing the FAM classifier is expressed in Equation (13).

$$F(p)=\frac{100\,cat_{\min}-pcc(p)\,N_{a}(p)}{\left(cat_{\max}-N_{a}(p)\right)\,pcc^{2}(p)} \tag{13}$$

subject to:

$$N_{c}\le cat_{\min}\le cat_{\max}$$

$$cat_{\min} < N_{a}(p)\le cat_{\max}$$
where $cat_{\min}$ and $cat_{\max}$ are the minimum and maximum numbers of categories that can be allowed; these values are decided by the user. The best $cat_{\min}$ value is equal to the number of classes in the dataset. The fitness function in Equation (13) is derived such that DE has to evolve a FAM-based classifier for which F(p) is a minimum. An optimum FAM network is one that has a small number of categories $N_a(p)$ close to the minimum number of categories $cat_{\min}$ and delivers the highest possible accuracy $pcc(p)$, equal to or near 100; that is, the numerator of the fitness function in Equation (13), $100\,cat_{\min}-pcc(p)\,N_a(p)$, shall be near zero. Similarly, the denominator of the fitness function, $(cat_{\max}-N_a(p))\,pcc^{2}(p)$, must be very large. Algorithm 3 portrays the steps involved in the evolution of the FAM network using the proposed DE algorithm.
Algorithm 3: Evolution of the FAM network
  • Pop_best_fit = DE(FNT)
  • Initialize β = 1.0 and CR = 0.7
  • Set generation count t = 0 (max = G)
  • Create the initial population C(t) = NP
  • While (stopping criterion not true)
  • For each target xi(G) ∈ C(G)
  • Create the mutant vector Vi(G)
  • Create the test vector Ui(G)
  • For all i
  • Evaluate F(xi(G)) and F(Ui(G))
  • If f(xi(t)) ≥ f(Ui(t))
  • xi(t + 1) = xi(t)
  • else xi(t + 1) = Ui(t)
  • Return the population with the best fitness
  • End
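A minimal sketch of the fitness evaluation of Equation (13) used inside Algorithm 3 follows; $cat_{\min}$, $cat_{\max}$, the category count $N_a(p)$, and the accuracy $pcc(p)$ in percent are supplied by the caller, and networks violating the category constraints (including $N_a(p)=cat_{\max}$, which would make the denominator zero) are simply rejected.

```python
def fam_fitness(pcc, Na, cat_min, cat_max):
    """Equation (13): F(p) = (100*cat_min - pcc*Na) / ((cat_max - Na) * pcc**2).
    The DE evolution minimizes F(p)."""
    if not (cat_min < Na < cat_max):        # constraint check; Na == cat_max also rejected
        return float("inf")                 # infeasible network
    return (100.0 * cat_min - pcc * Na) / ((cat_max - Na) * pcc ** 2)

# Example: a candidate network with 12 categories and 98% accuracy on a 10-class dataset.
print(fam_fitness(pcc=98.0, Na=12, cat_min=10, cat_max=50))
```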

3. Results

The parameters for AvN-LBP extraction and for the training and optimization of the FAM network adopted during the experimentation of the proposed DEFAMNet-based MIR system are provided in Table 1. These parameters influence the overall performance of the proposed framework and can be grouped into three categories. The parameters related to AvN-LBP belong to the first category and decide the effectiveness of indexing in the proposed framework. p is the number of parts into which the circle is divided; in the proposed work, the pixels on the circle are partitioned into eight sectors at an angle of 45°. A low number of partitions will not consider the actual neighbors of the pixel, and a large number of partitions will increase the computations and consume more time; hence, an optimum value of p = 8 is chosen. R is the radius of the circle considered for averaging; R = 3 means three pixels in the grid are averaged. The radius also influences the number of computations and hence the time consumption. The second category of parameters is related to the FAM network. The FAM dynamics are decided by the learning parameter $\beta \in [0, 1]$, the vigilance parameter $\rho \in [0, 1]$, and the choice parameter α > 0. For fast learning, β = 1; we performed experiments using different β values and decided to assign β = 0.8 during the learning process of the FAM. The vigilance parameter ρ decides the size of the hyperboxes representing the category of a class in the database: ρ ≅ 1 allows larger boxes, whereas ρ ≅ 0 restricts the size of the boxes and increases the size of the FAM network. In our work, we decided to set ρ ≅ 1 to obtain smaller networks and to use DE for achieving the required performance. Small values of α reduce recoding during learning; hence, α = 0.001 is assigned during the experimentation. DE has been adopted to optimize plenty of problems, and from the literature, β = 1 and CR = 0.7 are considered.
The proposed DEFAMNet classifier is trained for 100 epochs, with a batch size of 64, assigning a learning rate of 0.001 and training data from a medical image dataset.
During the development of the MIR framework, 40%, 30%, and 30% of the images in each class in the database are used for training, testing, and validation, respectively [41,42]. An NVIDIA DGX station with a 2.2 GHz Intel Xeon E5-2698 (20-core) processor and NVIDIA Tesla V100 4 × 16 GB GPUs was used.
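For reference, the parameter values reported in this section can be collected into one configuration object; the sketch below is only a convenience, and the grouping and key names are ours.

```python
# Parameter values reported in Section 3, gathered for convenience (key names are illustrative).
DEFAMNET_CONFIG = {
    "avn_lbp":  {"p": 8, "theta_deg": 45, "R": 3},           # descriptor / indexing
    "fam":      {"beta": 0.8, "rho": 1.0, "alpha": 0.001},   # FAM learning dynamics (rho ~ 1)
    "de":       {"beta": 1.0, "CR": 0.7},                    # differential evolution
    "training": {"epochs": 100, "batch_size": 64, "learning_rate": 0.001},
    "split":    {"train": 0.4, "test": 0.3, "validation": 0.3},
}
```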

3.1. Database

There are several databases available to support research on medical images. The images in each database have different features and offer interesting challenges. An MIR system should be capable of performing similarly for any type of image. The capability of the proposed DEFAMNet medical image retrieval system is examined using the Public Lung Image Database (I-ELCAP) [43], the Open Access Series of Imaging Studies Magnetic Resonance Imaging (OASIS–MRI) database [44], and the Interstitial Lung Disease (ILD) database [45].

3.2. Evaluation of Proposed DEFAMnet Medical Image Retrieval System

An effective retrieval system should guarantee that the retrieved images are close in similarity to the query images. The Average Retrieval Precision (ARP) and Average Retrieval Rate (ARR) are evaluation metrics commonly used to measure the performance of CBIR systems. The proposed DEFAMNet classifier is evaluated and analyzed using the ARP and ARR values calculated with Equations (16)–(19).
$$\mathrm{Precision}=P(I_q)=\frac{\left|N_{R}\cap N_{RT}\right|}{n_{RT}} \tag{16}$$

$$\mathrm{Recall}=R(I_q)=\frac{\left|N_{R}\cap N_{RT}\right|}{n_{R}} \tag{17}$$

$$ARP=\frac{1}{\left|DB\right|}\sum_{n=1}^{\left|DB\right|}P\!\left(I_{n}\right)\Big|_{\,n\le 10} \tag{18}$$

$$ARR=\frac{1}{\left|DB\right|}\sum_{n=1}^{\left|DB\right|}R\!\left(I_{n}\right)\Big|_{\,n\le 10} \tag{19}$$
where $I_n$ is the query image and DB is the number of images in the database. $N_R$ is the set of images in the database that are similar to the query image, $N_{RT}$ is the set of retrieved images similar to the query image, and $N_R \cap N_{RT}$ gives the number of images in common between the similar images in the database and the similar images retrieved. $n_R$ is the total number of images in the database relevant to the query image, and $n_{RT}$ is the total number of images retrieved by the MIR system. The results of the proposed AvN-LBP descriptor are compared with texture descriptors such as LBP [46], LTP [47], LDP [48], and other patterns [49,50].
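A minimal sketch of the precision, recall, ARP, and ARR computations of Equations (16)–(19) follows; it assumes each query yields a ranked list of retrieved image identifiers plus a ground-truth set of relevant identifiers, and it applies the n ≤ 10 convention by evaluating the top 10 retrievals per query.

```python
import numpy as np

def precision_recall(retrieved, relevant):
    """Equations (16) and (17): precision and recall for a single query."""
    hits = len(set(retrieved) & set(relevant))          # |N_R intersect N_RT|
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def arp_arr(results):
    """Equations (18) and (19): average precision and recall over all queries,
    each evaluated on its top-10 retrieved images (n <= 10)."""
    p_list, r_list = [], []
    for retrieved, relevant in results:
        p, r = precision_recall(list(retrieved)[:10], relevant)
        p_list.append(p)
        r_list.append(r)
    return float(np.mean(p_list)), float(np.mean(r_list))

# results = [(ranked_ids_for_query, relevant_ids_for_query), ...]
print(arp_arr([(["a", "b", "c"], {"a", "c", "d"})]))    # approximately (0.67, 0.67)
```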

3.3. Performance on the I-ELCAP Database

The proposed DEFAMNet framework is evaluated using the I-ELCAP dataset, which has been chosen because of its popularity in evaluating MIR systems. The dataset has 10 classes with 100 images in each class. The weights associated with DEFAMNet are trained with 10% of the images from each class of the dataset, and the proposed DEFAMNet is tested with the remaining images. One sample image from each of the 10 classes in the I-ELCAP database is shown in Figure 8. A comparison of the performance in terms of ARP and ARR achieved by DEFAMNet and other approaches, along with a performance comparison between AvN-LBP and other LBP variants, is presented in Table 2 and Table 3. DEFAMNet retrieved 100% similar images for the query image in eight classes and achieved an average performance of 99.8% over all 10 classes. It can also be noticed from Table 2 and Table 3 that the best performance among the other techniques considered for comparison is 99.1%.

3.4. Retrieval Analysis on the OASIS–MRI Database

Experiments are performed on the publicly available OASIS–MRI medical image database to evaluate AvN-LBP and DEFAMNet. OASIS comprises 421 images belonging to patients aged between 18 and 96. The images in the OASIS database are organized into four classes consisting of 124, 102, 89, and 106 images, respectively. Figure 9 presents one sample image from each of the four classes. Briefly, 10% of the images from the database are used to tune the weights of DEFAMNet, and the remaining 90% of the images are used to evaluate DEFAMNet's retrieval performance. The results achieved by AvN-LBP and DEFAMNet are compared with existing approaches and variants of LBP on the OASIS dataset and are summarized in Table 4. It is clear from the results that the best performance of the existing approaches is 84.3%, while that of the proposed DEFAMNet is 93.01% in terms of average retrieval accuracy.

3.5. Result Analysis on the ILD Database

In reality, medical records are a combination of images and clinical data. The evaluation of the proposed framework is therefore also performed on the ILD database. The ILD database has valuable content and contains 658 images of interstitial lung disease along with clinical data of patients for research studies. One sample image from each of the five classes is provided in Figure 10. The classes of ILD are micronodules (173 images), ground glass (106 images), emphysema (53 images), fibrosis (187 images), and healthy (139 images). The DEFAMNet weights are tuned with 10% of the images, and the system is tested with the remaining 90% of the images. The performance of the proposed approach in terms of ARP compared with other approaches and feature descriptors on the ILD dataset is given in Table 5. DEFAMNet achieved a 96.06% average retrieval accuracy against the 92.4% best performance of the approaches considered for comparison.
From the experimental results on the three databases used for evaluation, the proposed DEFAMNet and AvN-LBP outperformed the existing approaches used for medical image retrieval, demonstrating that the proposed methods perform better across different types of image datasets and different classes within the datasets.

3.6. MIR System Adopting NPFAM and DEFAMNet Classifiers

The medical images in the OASIS database are categorized using two classifiers, namely, NPFAM and DEFAMNet. These classifiers are chosen for comparison because both are FAM networks that address the category proliferation problem, whereas the procedures adopted to construct the classification models are entirely different. This section compares the performance of these classifiers in medical image classification. The classifiers trained with AvN-LBP operators produced better results. Training was undertaken with 30 percent of the images (36, 30, 26, and 32) from each of the four classes. The trained classifiers were evaluated by testing them on the 421 images in the database (124, 102, 89, and 106). After training, the network was tested to classify the images from the training set, and we found that DEFAMNet with AvN-LBP performed at 100%. The trained network was then tested to classify the remaining 40% of images (50, 40, 35, and 43) from each class. Table 6 provides the comparison of accuracy considering only the top 25 retrieved images based on similarity.

3.7. Retrieval Accuracy Obtained on Different Body Parts

Medical images of different parts of the human body exhibit different texture patterns. Hence, the performance of the DEFAMNet MIR system is examined with diverse X-ray images of body parts. Medical images were collected from the internet for six different parts (classes) of the body, namely, the chest, head, foot, neck, palm, and spine [54,55,56]. A total of 150 images has been used, with 25 images in each class. The classifier is trained with 10 images per class and tested with the remaining images. Table 7 shows the retrieval accuracy achieved by the system with AvN-LBP alone and with AvN-LBP combined with the classifiers.

4. Discussion

Retrieval time, required resources, and retrieval performance are factors considered to evaluate the suitability of retrieval systems for performing the retrieval task. The execution time consists of the time required for feature extraction from query images and retrieval (classification, searching, and retrieving similar images from the database). The system’s time consumption depends on the procedure for the extraction of the feature descriptor. The larger the operator, the longer it will take to extract the vector and retrieve the images from the database. The retrieval time is also affected by the organization of databases and the use of classifiers in the retrieval system. Table 8 summarizes the time required for the estimation of the feature descriptor and retrieval time in seconds over the OASIS–MRI database using the proposed AvN-LBP and other descriptors considered for comparison.
The extraction time of images from the OASIS database with the proposed method is less than that of LDEP and ZMs, but greater than that of LBP and ULBP. Table 8 and Figure 11 provide the comparison of the total CPU time (feature extraction time + retrieval time) required for image retrieval using the proposed method and other feature descriptors on the OASIS–MRI database. The results show that the computation time of ULBP is less than that of the proposed AvN-LBP operator. The extra time is compensated for with improved retrieval accuracy.

5. Conclusions

This paper has proposed a local texture-based descriptor, AvN-LBP, a modified version of the well-known LBP. The proposed descriptor addresses issues due to noisy images by including the information available in the neighborhood pixels to better represent the images and avoid the loss of valuable information. In addition, a FAM classifier evolved using the DE algorithm, which avoids the category proliferation problem, is implemented. Modifications are performed during the mutation stage to make the DE algorithm more suitable for the intended application of MIR. The complete proposed MIR system, accommodating AvN-LBP for indexing medical images and DEFAMNet for retrieval, has been subjected to extensive experimentation, and the results presented provide proof of its suitability. In the future, if the time consumption and resource requirements are reduced, the proposed system can be more competitive with existing MIR systems. The experimentation has been performed on smaller databases, but the size of the datasets influences the retrieval results; hence, the proposed framework should be implemented on larger datasets and needs to be extended to real-time applications.

Author Contributions

A.K.: research concept and methodology, writing—original draft preparation. R.S. and K.C.: Investigation, S.R.S. and N.K.: review and editing. W.-C.L.: Validation and Funding Acquisition. All authors contributed to the article and approved the submitted version. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the National Yunlin University of Science and Technology, Douliu.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Clarke, R.L. National Alliance for Health Information Technology. Healthc. Financ. Manag. 2002, 56, 16–17. [Google Scholar]
  2. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  3. Qiao, S.; Yu, Q.; Zhao, Z.; Song, L.; Tao, H.; Zhang, T.; Zhao, C. Edge extraction method for medical images based on improved local binary pattern combined with edge-aware filtering. Biomed. Signal Process. Control. 2022, 74, 103490. [Google Scholar] [CrossRef]
  4. Pereira, P.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.; Tavora, L.M.; Thomaz, L.A.; Faria, S.M. Dermoscopic skin lesion image segmentation based on Local Binary Pattern Clustering: Comparative study. Biomed. Signal Process. Control. 2020, 59, 101924. [Google Scholar] [CrossRef]
  5. Vimina, E.; Divya, M.O. Maximal multi-channel local binary pattern with colour information for CBIR. Multimed. Tools Appl. 2020, 79, 25357–25377. [Google Scholar] [CrossRef]
  6. Alpaslan, N.; Hanbay, K. Multi-resolution intrinsic texture geometry-based local binary pattern for texture classification. IEEE Access 2020, 8, 54415–54430. [Google Scholar] [CrossRef]
  7. Francis, A.; Pandian, I.A. Early detection of Alzheimer’s disease using local binary pattern and convolutional neural network. Multimed. Tools Appl. 2021, 80, 29585–29600. [Google Scholar] [CrossRef]
  8. Ko, B.C.; Kim, S.H.; Nam, J.Y. X-ray image classification using random forests with local wavelet-based CS-local binary patterns. J. Digit. Imaging 2011, 24, 1141–1151. [Google Scholar] [CrossRef]
  9. Hosny, K.M.; Magdy, T.; Lashin, N.A. Improved color texture recognition using multi-channel orthogonal moments and local binary pattern. Multimed. Tools Appl. 2021, 80, 13179–13194. [Google Scholar] [CrossRef]
  10. Murugappan, V.; Beenian, R.S. Texture based medical image classification by using multi-scale Gabor rotation-invariant local binary pattern (MGRLBP). Clust. Comput. 2019, 22, 10979–10992. [Google Scholar] [CrossRef]
  11. Verma, M.; Raman, B. Center symmetric local binary co-occurrence pattern for texture, face and bio-medical image retrieval. J. Vis. Commun. Image Represent. 2015, 32, 224–236. [Google Scholar] [CrossRef]
  12. Kola, D.G.R.; Samayamantula, S.K. A novel approach for facial expression recognition using local binary pattern with adaptive window. Multimed. Tools Appl. 2021, 80, 2243–2262. [Google Scholar] [CrossRef]
  13. Chairet, R.; Ben Salem, Y.; Aoun, M. Potential of multi-scale completed local binary pattern for object based classification of very high spatial resolution imagery. J. Indian Soc. Remote Sens. 2021, 49, 1245–1255. [Google Scholar] [CrossRef]
  14. Zhao, Q. Research on the application of local binary patterns based on color distance in image classification. Multimed. Tools Appl. 2021, 80, 27279–27298. [Google Scholar] [CrossRef]
  15. Shu, X.; Song, Z.; Shi, J.; Huang, S.; Wu, X.J. Multiple channels local binary pattern for color texture representation and classification. Signal Process. Image Commun. 2021, 98, 116392. [Google Scholar] [CrossRef]
  16. Lekehali, S.; Moussaoui, A. Quantum Local Binary Pattern for Medical Edge Detection. J. Inf. Technol. Res. 2019, 12, 36–52. [Google Scholar] [CrossRef]
  17. Abbasi, S.; Tajeripour, F. Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient. Neurocomputing 2017, 219, 526–535. [Google Scholar] [CrossRef]
  18. Kobat, M.A.; Tuncer, T. Coronary Angiography Print: An Automated Accurate Hidden Biometric Method Based on Filtered Local Binary Pattern Using Coronary Angiography Images. J. Pers. Med. 2021, 11, 1000. [Google Scholar] [CrossRef]
  19. Nanni, L.; Brahnam, S.; Lumini, A. Combining different local binary pattern variants to boost performance. Expert Syst. Appl. 2011, 38, 6209–6216. [Google Scholar] [CrossRef]
  20. Li, Y.; Tang, H.; Xie, W.; Luo, W. Multidimensional local binary pattern for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
  21. Najafi Khanbebin, S.; Mehrdad, V. Local improvement approach and linear discriminant analysis-based local binary pattern for face recognition. Neural Comput. Appl. 2021, 33, 7691–7707. [Google Scholar] [CrossRef]
  22. Kotadiya, H.; Patel, D. Review of medical image classification techniques. In Third International Congress on Information and Communication Technology; Springer: Singapore, 2019; pp. 361–369. [Google Scholar]
  23. Sathiyamoorthi, V.; Ilavarasi, A.K.; Murugeswari, K.; Ahmed, S.T.; Devi, B.A.; Kalipindi, M. A deep convolutional neural network based computer aided diagnosis system for the prediction of Alzheimer’s disease in MRI images. Measurement 2021, 171, 108838. [Google Scholar] [CrossRef]
  24. Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef]
  25. Sarvamangala, D.R.; Kulkarni, R.V. Convolutional neural networks in medical image understanding: A survey. Evol. Intell. 2021, 15, 1–22. [Google Scholar] [CrossRef]
  26. Yang, R.; Yu, Y. Artificial convolutional neural network in object detection and semantic segmentation for medical imaging analysis. Front. Oncol. 2021, 11, 638182. [Google Scholar] [CrossRef]
  27. Ali, A.R.; Li, J.; Yang, G.; O’Shea, S.J. A machine learning approach to automatic detection of irregularity in skin lesion border using dermoscopic images. PeerJ Comput. Sci. 2020, 6, e268. [Google Scholar] [CrossRef]
  28. Yasar, H.; Ceylan, M. A new deep learning pipeline to detect COVID-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks. Appl. Intell. 2021, 51, 2740–2763. [Google Scholar] [CrossRef]
  29. Iqbal, S.; Qureshi, A.N. A heteromorphous deep CNN framework for Medical Image Segmentation using Local Binary Pattern. IEEE Access 2022, 10, 63466–63480. [Google Scholar] [CrossRef]
  30. Biasotti, S.; Cerri, A.; Aono, M.; Hamza, A.B.; Garro, V.; Giachetti, A.; Giorgi, D.; Godil, A.; Li, C.; Sanada, C.; et al. Retrieval and classification methods for textured 3D models: A comparative study. Vis. Comput. 2016, 32, 217–241. [Google Scholar] [CrossRef]
  31. Verma, R.; Kumar, N.; Patil, A.; Kurian, N.C.; Rane, S.; Graham, S.; Vu, Q.D.; Zwager, M.; Raza, S.E.; Rajpoot, N.; et al. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE Trans. Med. Imaging 2021, 40, 3413–3423. [Google Scholar] [CrossRef]
  32. Rodolà, E.; Cosmo, L.; Litany, O.; Bronstein, M.M.; Bronstein, A.M.; Audebert, N.; Hamza, A.B.; Boulch, A.; Castellani, U.; Do, M.N.; et al. SHREC’17: Deformable Shape Retrieval with Missing Parts. In Proceedings of the Eurographics Workshop on 3D Object Retrieval, Lyon, France, 23–24 April 2017. [Google Scholar]
  33. Ejegwa, P.A.; Agbetayo, J.M. Similarity-Distance Decision-Making Technique and its Applications via Intuitionistic Fuzzy Pairs. J. Comput. Cogn. Eng. 2022. [Google Scholar] [CrossRef]
  34. Masood, F.; Masood, J.; Zahir, H.; Driss, K.; Mehmood, N.; Farooq, H. Novel Approach to Evaluate Classification Algorithms and Feature Selection Filter Algorithms Using Medical Data. J. Comput. Cogn. Eng. 2022. [Google Scholar] [CrossRef]
  35. Liu, L.; Zhao, L.; Long, Y.; Kuang, G.; Fieguth, P. Extended local binary patterns for texture classification. Image Vis. Comput. 2012, 30, 86–99. [Google Scholar] [CrossRef]
  36. Marček, D.; Rojček, M. The category proliferation problem in ART neural networks. Acta Polytech. Hung. 2017, 14, 49–63. [Google Scholar]
  37. Anitha, K.; Chilambuchelvan, A. NPFAM: Non-proliferation fuzzy ARTMAP for image classification in content-based image retrieval. KSII Trans. Internet Inf. Syst. 2015, 9, 2683–2702. [Google Scholar]
  38. Anitha, K.; Naresh, K.; Devi, D.R. A framework to reduce category proliferation in fuzzy ARTMAP classifiers adopted for image retrieval using differential evolution algorithm. Multimed. Tools Appl. 2020, 79, 4217–4238. [Google Scholar] [CrossRef]
  39. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  40. Deng, W.; Xu, J.; Gao, X.Z.; Zhao, H. An enhanced MSIQDE algorithm with novel multiple strategies for global optimization problems. IEEE Trans. Syst. Man Cybern. Syst. 2020, 52, 1578–1587. [Google Scholar] [CrossRef]
  41. Chaudhary, S.; Murala, S. Deep network for human action recognition using Weber motion. Neurocomputing 2019, 367, 207–216. [Google Scholar] [CrossRef]
  42. Galshetwar, G.M.; Patil, P.W.; Gonde, A.B.; Waghmare, L.M.; Maheshwari, R.P. Local directional gradient-based feature learning for image retrieval. In Proceedings of the 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 1–2 December 2018; pp. 113–118. [Google Scholar]
  43. Reeves, A.P.; Xie, Y.; Liu, S. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation. J. Med. Imaging 2017, 4, 024505. [Google Scholar] [CrossRef]
  44. Marcus, D.S.; Fotenos, A.F.; Csernansky, J.G.; Morris, J.C.; Buckner, R.L. Open access series of imaging studies: Longitudinal MRI data in nondemented and demented older adults. J. Cogn. Neurosci. 2010, 22, 2677–2684. [Google Scholar] [CrossRef]
  45. Depeursinge, A.; Vargas, A.; Platon, A.; Geissbuhler, A.; Poletti, P.A.; Müller, H. Building a reference multimedia database for interstitial lung diseases. Comput. Med. Imaging Graph. 2012, 36, 227–238. [Google Scholar] [CrossRef]
  46. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650. [Google Scholar]
  47. Verma, M.; Raman, B. Local neighborhood difference pattern: A new feature descriptor for natural and texture image retrieval. Multimed. Tools Appl. 2018, 77, 11843–11866. [Google Scholar] [CrossRef]
  48. Murala, S.; Maheshwari, R.P.; Balasubramanian, R. Local tetra patterns: A new feature descriptor for content-based image retrieval. IEEE Trans. Image Process. 2012, 21, 2874–2886. [Google Scholar] [CrossRef]
  49. Murala, S.; Wu, Q.J. Local mesh patterns versus local binary patterns: Biomedical image indexing and retrieval. IEEE J. Biomed. Health Inform. 2013, 18, 929–938. [Google Scholar] [CrossRef]
  50. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  51. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 60, 84–90. [Google Scholar] [CrossRef]
  52. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  53. Takala, V.; Ahonen, T.; Pietikäinen, M. Block-based methods for image retrieval using local binary patterns. In Proceedings of the Scandinavian Conference on Image Analysis, Joensuu, Finland, 19–22 June 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 882–891. [Google Scholar]
  54. Kavitha, C.; Anita, X.; Selvan, S. Improving the efficiency of speculative execution strategy in hadoop using amazon elasticache for redis. J. Eng. Sci. Technol. 2021, 16, 4864–4878. [Google Scholar]
  55. Kavitha, C.; Anita, X. Task failure resilience technique for improving the performance of MapReduce in Hadoop. ETRI J. 2020, 42, 748–760. [Google Scholar]
  56. Kavitha, C.; Mani, V.; Srividhya, S.R.; Khalaf, O.I.; Tavera Romero, C.A. Early-Stage Alzheimer’s Disease Prediction Using Machine Learning Models. Front. Public Health 2022, 10, 853294. [Google Scholar] [CrossRef]
Figure 1. General Architecture of CBIR System.
Figure 2. Image patches (a,b) and corresponding LBP patterns (a1,b1).
Figure 4. Three texture patterns and the LBP, AvN-LBP, MBP, and VAR values for the texture patterns.
Figure 5. Example of effective edge preservation by the proposed AvN-LBP. Two 5 × 5 example image patches are shown in (a,b); (a1,a2) are the patterns produced by conventional LBP for (a,b), respectively, and (b1,b2) are the AvN-LBP patterns of (a,b), respectively, which preserve the weak edge.
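For readers who wish to reproduce patterns such as those in Figure 5, the following minimal Python sketch contrasts the conventional 3 × 3 LBP code with an averaged-neighbourhood thresholding variant. The variant shown here is only an illustrative assumption; it does not reproduce the exact AvN-LBP definition given earlier in the paper.

import numpy as np

def lbp_code(patch):
    # Conventional 3x3 LBP: threshold the 8 neighbours against the centre pixel
    # and pack the resulting bits (clockwise from the top-left) into one code.
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def avg_neighbourhood_code(patch):
    # Illustrative averaged-neighbourhood variant (an assumption, not the
    # paper's exact AvN-LBP): threshold each neighbour against the mean of the
    # whole 3x3 patch instead of the centre pixel alone, which damps the
    # influence of a single noisy centre value.
    mean = patch.mean()
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= mean else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[52, 55, 61], [59, 60, 65], [62, 64, 70]], dtype=float)
print(lbp_code(patch), avg_neighbourhood_code(patch))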
Figure 6. Comparison of the noise robustness of the proposed AvN-LBP descriptor with that of conventional LBP.
Figure 7. FAM Architecture.
Figure 8. Sample images from 10 different classes of the I-ELCAP image database.
Figure 9. Sample images from different classes of the OASIS database.
Figure 10. Sample images from five classes of the ILD database.
Figure 11. Comparison of total CPU elapsed time and retrieval time in seconds for LBP, ULBP, LDEP, ZMs, and the proposed LBP/VAR-based feature descriptors over the OASIS-MRI database.
Table 1. Parameters adopted during experimentation for AvN-LBP extraction and for training and optimization of the FAM network.

MODULE: AvN-LBP, FAM, DE
PARAMETER | p | R | Θ | Ρ | β | δ | α | β | CR
VALUE | 8 | 3 | 450 | 1.0 | 0.8 | 0.2 | 0.001 | 1.0 | 0.7
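As an illustration of how the DE parameters in Table 1 act during optimization, the sketch below implements one generation of the classical DE/rand/1/bin update with the crossover rate CR (0.7 in Table 1). The scaling factor F, the toy fitness function, and all variable names are assumptions used only for illustration; the modified mutation operation proposed in this paper is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def de_step(population, fitness, F=0.8, CR=0.7):
    # One generation of classical DE/rand/1/bin: for each target vector, build a
    # mutant from three distinct random individuals, recombine it with the target
    # using crossover rate CR, and keep whichever of the two has lower fitness.
    new_pop = population.copy()
    n, d = population.shape
    for i in range(n):
        a, b, c = population[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True  # guarantee at least one gene from the mutant
        trial = np.where(cross, mutant, population[i])
        if fitness(trial) < fitness(population[i]):
            new_pop[i] = trial
    return new_pop

# Toy fitness (sphere function) standing in for the FAM classification error.
pop = rng.random((20, 5))
for _ in range(50):
    pop = de_step(pop, lambda x: float(np.sum(x ** 2)))
print(pop[np.argmin([np.sum(x ** 2) for x in pop])])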
Table 2. Comparison of Percentage Retrieval Accuracy (ARP) on the I-ELCAP database.

OPERATOR/ALGORITHM | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7 | Class 8 | Class 9 | Class 10 | Avg
LBP [46] | 56.6 | 72.7 | 64.7 | 89.5 | 67.8 | 91.6 | 80.4 | 99.7 | 78.4 | 90.7 | 79.1
LTP [47] | 49.6 | 52.6 | 58.5 | 80.8 | 48.2 | 77.2 | 55.3 | 92.5 | 61.5 | 74.8 | 65.4
GLBP [46] | 67.8 | 84.2 | 77.8 | 92.3 | 79.2 | 89.3 | 77.1 | 99.1 | 84.1 | 97.4 | 84.7
LMeP [49] | 77.3 | 75.1 | 68.1 | 92.6 | 73.5 | 93.4 | 86.8 | 100 | 80.1 | 87.2 | 83.2
AvN-LBP * | 85.6 | 89.9 | 86.7 | 95.6 | 91.1 | 98.5 | 91.4 | 99.4 | 97.1 | 97.1 | 93.4
GLCM [47] | 76.8 | 55.1 | 55.1 | 54.9 | 49.9 | 74.2 | 68.5 | 94.7 | 32.3 | 72.4 | 63.3
GLMeP [49] | 82.5 | 82.2 | 85.6 | 95.5 | 74.6 | 97.5 | 90.1 | 100 | 82.7 | 94.2 | 88.4
ResNet [50] | 100 | 96.8 | 100 | 100 | 100 | 100 | 85.7 | 96.8 | 96.8 | 100 | 97.3
AlexNet [51] | 82.9 | 99.1 | 95.2 | 78.9 | 96.6 | 92.6 | 99.1 | 73.2 | 93.8 | 62.5 | 84.1
VGG-16 [52] | 63.8 | 100 | 79.4 | 85.7 | 90.9 | 59.1 | 100 | 83.9 | 100 | 95.2 | 82.1
RetrieveNet [53] | 100 | 100 | 100 | 98.1 | 100 | 98.1 | 100 | 100 | 100 | 94.2 | 99.1
DEFAMNet * | 100 | 100 | 100 | 99.4 | 100 | 100 | 100 | 100 | 100 | 98.6 | 99.8
* Proposed Method.
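Assuming the usual CBIR definitions, where ARP is the percentage of the top-n retrieved images that belong to the query's class and ARR is the percentage of the query's class that is recovered within the top-n results, per-class figures such as those in Tables 2 and 3 can be computed with a sketch like the following; the function name and toy data are illustrative assumptions, not artifacts of this paper.

import numpy as np

def arp_arr(retrieved_labels, query_labels, class_sizes, n=10):
    # retrieved_labels: for each query, the class labels of its top-n matches.
    # ARP: mean fraction of the top-n results sharing the query's class.
    # ARR: mean fraction of the query's class found within the top-n results.
    precisions, recalls = [], []
    for q_label, ranked in zip(query_labels, retrieved_labels):
        hits = sum(1 for lbl in ranked[:n] if lbl == q_label)
        precisions.append(hits / n)
        recalls.append(hits / class_sizes[q_label])
    return 100 * np.mean(precisions), 100 * np.mean(recalls)

# Toy example: two queries of class 0, top-5 retrievals, 8 images per class.
arp, arr = arp_arr([[0, 0, 1, 0, 0], [0, 1, 0, 0, 1]], [0, 0], {0: 8, 1: 8}, n=5)
print(round(arp, 1), round(arr, 1))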
Table 3. Comparison of Percentage Retrieval Rate (ARR) on the I-ELCAP database.
OPERATOR/ALGORITHM | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7 | Class 8 | Class 9 | Class 10 | Avg
LBP [46] | 32.4 | 40.5 | 38.6 | 67.0 | 36.2 | 54.2 | 48.5 | 81.2 | 47.8 | 72.7 | 51.9
LTP [47] | 23.1 | 25.8 | 23.8 | 52.1 | 20.6 | 38.5 | 22.8 | 40.9 | 26.4 | 40.6 | 31.4
GLBP [46] | 34.5 | 42.5 | 41.5 | 66.0 | 37.7 | 51.1 | 40.2 | 72.1 | 47.4 | 76.7 | 51.0
LMeP [49] | 37.4 | 35.4 | 28.9 | 73.0 | 37.9 | 52.9 | 53.3 | 95.6 | 43.0 | 69.6 | 53.7
AvN-LBP * | 49.8 | 51.2 | 47.9 | 72.5 | 41.1 | 52.9 | 52.7 | 76.1 | 45.6 | 84.6 | 57.5
GLCM [47] | 33.2 | 28.2 | 16.5 | 26.4 | 19.7 | 34.2 | 27.6 | 52.9 | 15.4 | 42.1 | 29.6
GLMeP [49] | 38.7 | 38.3 | 37.3 | 71.4 | 36.5 | 62.2 | 56.3 | 89.6 | 44.8 | 70.4 | 54.6
ResNet [50] | 100 | 100 | 96.7 | 86.7 | 100 | 100 | 100 | 100 | 100 | 90.2 | 97.3
AlexNet [51] | 96.7 | 76.7 | 66.7 | 100 | 93.3 | 83.3 | 33.3 | 90.2 | 100 | 100 | 84.2
VGG-16 [52] | 100 | 96.7 | 90.1 | 100 | 33.3 | 96.7 | 60.5 | 86.7 | 100 | 66.7 | 82.4
RetrieveNet [53] | 100 | 100 | 100 | 100 | 100 | 100 | 96.1 | 96.8 | 100 | 98.5 | 99.1
DEFAMNet * | 100 | 97.0 | 99.5 | 99.0 | 100 | 100 | 100 | 99.0 | 100 | 98.0 | 99.3
* Proposed Method.
Table 4. Comparison of Retrieval Accuracy (ARP) on the OASIS database.

OPERATOR/ALGORITHM | Class 1 | Class 2 | Class 3 | Class 4 | Avg
LBPSEG [45] | 43.21 | 38.03 | 28.32 | 46.44 | 39.01
LTP [47] | 56.33 | 36.70 | 34.97 | 50.02 | 45.17
CSLBP [45] | 44.72 | 40.15 | 31.17 | 48.27 | 41.06
GLDP [47] | 48.72 | 40.09 | 38.41 | 41.52 | 42.23
AvN-LBP * | 58.91 | 61.38 | 51.3 | 66.43 | 59.51
LDP [47] | 46.29 | 36.37 | 36.82 | 45.56 | 41.8
LMEBP [54] | 46.17 | 40.17 | 36.83 | 49.17 | 43.08
ResNet [50] | 78.01 | 57.74 | 73.91 | 86.52 | 75.62
AlexNet [51] | 88.01 | 54.51 | 62.51 | 73.82 | 68.52
VGG-16 [52] | 75.74 | 57.14 | 52.41 | 70.74 | 66.15
RetrieveNet [53] | 90.01 | 71.25 | 82.15 | 95.92 | 84.3
DEFAMNet * | 96.16 | 84.64 | 91.23 | 100 | 93.01
* Proposed Method.
Table 5. Retrieval Accuracy (ARP) in percentage on the ILD database.

OPERATOR/ALGORITHM | Emphysema | Fibrosis | Groundglass | Healthy | Micronodules | Avg
LBP [46] | 26.79 | 47.85 | 35.66 | 28.71 | 28.79 | 33.53
LTP [47] | 36.79 | 49.82 | 45.66 | 38.71 | 37.72 | 41.73
LTCop [50] | 31.32 | 50.82 | 44.15 | 28.71 | 63.88 | 43.77
LTrP [54] | 42.07 | 51.87 | 49.27 | 41.79 | 57.65 | 48.52
AvN-LBP * | 56.42 | 65.64 | 57.64 | 48.24 | 65.86 | 58.76
ResNet [50] | 90.98 | 91.49 | 88.96 | 83.78 | 72.49 | 82.86
AlexNet [51] | 62.52 | 92.68 | 88.23 | 55.78 | 83.78 | 75.47
VGG-16 [52] | 50.75 | 79.45 | 56.95 | 73.68 | 94.72 | 75.89
RetrieveNet [53] | 99.89 | 98.92 | 91.75 | 80.56 | 96.36 | 92.40
DEFAMNet * | 100 | 97.92 | 94.56 | 92.90 | 99.12 | 96.90
* Proposed Method.
Table 6. Accuracy in percentage achieved on the OASIS database for the top 25 retrieved images.

DESCRIPTOR | NPFAM | DEFAMNet
LBP | 87.64 | 90.00
CS-LBP | 89.41 | 91.76
NI-LBP | 86.47 | 88.82
LTP | 88.82 | 90.58
AvN-LBP | 91.88 | 93.71
Table 7. Percentage accuracy achieved by MIR Systems on the database of human parts for the top 25 retrieved results.

CLASS | AvN-LBP | AvN-LBP + NPFAM | AvN-LBP + DEFAMNet
CHEST | 54.6 | 89.1 | 91.2
HEAD | 66.8 | 86.4 | 90.3
FOOT | 60.2 | 85.2 | 89.2
NECK | 71.4 | 87.6 | 91.7
PALM | 69.2 | 88.7 | 94.4
SPINE | 56.1 | 86.2 | 90.3
Table 8. Feature extraction and retrieval time in seconds over the OASIS-MRI database using the proposed LBP/VAR descriptor and the other descriptors considered for comparison.

Feature Descriptor | Feature Extraction Time (s) | Retrieval Time (s), Without Classifier | Retrieval Time (s), With NPFAM Classifier | Retrieval Time (s), With DEFAMNet Classifier
ZMs | 16.85 | 1.37 | 0.42 | 0.41
ULBP | 3.89 | 5.48 | 1.14 | 1.02
LBP | 4.56 | 20.41 | 0.52 | 0.51
AvN-LBP | 8.46 | 22.32 | 0.49 | 0.47
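The timings in Table 8 depend on the measurement protocol; a minimal sketch of one way such figures can be collected is shown below, where extract_features and retrieve are placeholders for the descriptor and search routines being profiled (both names, and the toy histogram descriptor, are assumptions rather than functions defined in this paper).

import time
import numpy as np

def profile(extract_features, retrieve, images, query):
    # Wall-clock timing of descriptor extraction over the database and of a
    # single query retrieval, both in seconds.
    t0 = time.perf_counter()
    index = [extract_features(img) for img in images]
    extraction_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    retrieve(extract_features(query), index)
    retrieval_time = time.perf_counter() - t0
    return extraction_time, retrieval_time

# Toy example with a trivial histogram descriptor and nearest-neighbour search.
imgs = [np.random.rand(64, 64) for _ in range(100)]
hist = lambda im: np.histogram(im, bins=16, range=(0, 1))[0]
nn = lambda q, idx: int(np.argmin([np.linalg.norm(q - f) for f in idx]))
print(profile(hist, nn, imgs, imgs[0]))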
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
