Article

Optimization of FireNet for Liver Lesion Classification

School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(8), 1237; https://doi.org/10.3390/electronics9081237
Submission received: 30 June 2020 / Revised: 18 July 2020 / Accepted: 23 July 2020 / Published: 31 July 2020
(This article belongs to the Section Computer Science & Engineering)

Abstract

In recent years, deep learning techniques, and in particular convolutional neural network (CNN) methods, have demonstrated superior performance in image classification and visual object recognition. In this work, we propose the classification of four types of liver lesions, namely, hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues, using convolutional neural networks with a succinct model called FireNet. We improved speed for quick classification and decreased the model size and the number of parameters by using fire modules from SqueezeNet. We added bypass connections around the Fire modules to learn a residual function between input and output and to alleviate the vanishing gradient problem. We also propose a new Particle Swarm Optimization (NPSO) method to optimize the network parameters in order to further boost the performance of the proposed FireNet. The experimental results show that FireNet has 9.5 times fewer parameters than GoogLeNet, 51.6 times fewer than AlexNet, and 75.8 times fewer than ResNet. The model size of FireNet is 16.6 times smaller than GoogLeNet, 75 times smaller than AlexNet, and 76.6 times smaller than ResNet. The final accuracy of our proposed FireNet model was 89.2%.

1. Introduction

The biggest challenge in applying deep learning to the medical imaging domain is the small size of datasets and the lack of labeled data [1,2]. Radiographic imaging plays an important role in reducing cancer mortality; in particular, computed tomography (CT) is a widely used method to assist liver tumor diagnosis. In medical imaging, once a liver lesion is detected through radiographic imaging, a radiologist needs to recognize whether the nature and the type of the liver lesion is benign or malignant [3,4]. There are three types of liver lesion, including hemangiomas (Hema), metastases (Meta), and hepatocellular carcinoma (HCC) [5,6]. Hemangioma is the most common type of benign liver lesion [7]. Metastases (the plural of metastasis) most commonly develop when cancer cells break away from the main tumor and enter the bloodstream or lymphatic system; these systems carry fluids around the body, which means that the cancer cells can travel far from the original tumor and form new tumors when they settle and grow in a different part of the body. Metastasis is the most common secondary liver cancer [8]. Hepatocellular carcinoma is the most common primary malignant liver lesion [9].
Nowadays, convolutional neural networks (CNNs) have achieved outstanding performance in image classification and pattern recognition, and more recently in the medical domain [10]. To prevent overfitting while training, researchers have used many effective tricks, including dropout [11], the Rectified Linear Unit (ReLU) activation [12], batch normalization, data augmentation [13], and transfer learning. Some researchers have attempted to modify networks by reducing the number of parameters while maintaining state-of-the-art accuracy [14]. SqueezeNet is a perfect example [15]: it is a smaller CNN architecture that uses fewer parameters while still preserving accuracy, and it achieved AlexNet-level accuracy on ImageNet with 50 times fewer parameters. In medical imaging tasks, data annotation is usually performed by radiologists, and it takes a considerable amount of effort and experience on their part to detect and label a medical image as benign or as a probable case of malignancy [16]. Considering the large number of cases encountered by radiologists every day, there is constant pressure on them to analyze a huge amount of data and make a decision as quickly as possible based on the analysis [17]. Researchers attempt to solve the problem of the lack of labeled data in medical imaging by using data augmentation, which includes modifications of dataset images such as rotation, translation, scaling, and flipping. Recently, several medical imaging studies have focused more on data augmentation to improve the classification performance of medical images, especially liver lesions; most of these studies have applied a Generative Adversarial Network (GAN) framework [18].
This paper focuses on addressing these challenges in medical imaging, specifically the classification of liver lesions, namely, hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues, by utilizing state-of-the-art deep learning-based techniques. The goal of this research is to build a concise model that has few parameters while maintaining accuracy. Our proposed FireNet model can help doctors reduce misdiagnosis and single out images containing lesions, thus reducing their workload. We replaced traditional convolutional layers with fire modules from SqueezeNet and removed the fully connected layers to obtain a fully convolutional network (FCN). Based on these choices, our proposed FireNet model is smaller and has fewer parameters. We also propose a new particle swarm optimization method called NPSO, which improved the accuracy of our proposed FireNet model from 81.8% to 89.2%. In addition, compared to standard deep neural networks and state-of-the-art methods, our proposed FireNet method achieves higher classification accuracy in less time. Our contributions in this work can be summarized as follows: (i) We constructed a new method called FireNet for classifying liver lesions using convolutional neural networks. (ii) We used bypass connections and concatenation connections on top of SqueezeNet to enhance its performance while decreasing the model size and the number of parameters, yielding a succinct model that saves time in liver lesion classification. (iii) We propose a new Particle Swarm Optimization (NPSO) method to optimize the results of the proposed FireNet, which increased the final accuracy while keeping the model succinct.
The rest of the paper is organized as follows: Section 2 presents an overview of related work; Section 3 introduces our proposed method; Section 4 presents our experiments and results; and Section 5 concludes the paper.

2. Related Work

The literature review shows that there are several proposed approaches for classifying medical images, in particular liver lesions. In [19], Frid-Adar, M. et al. applied a GAN framework and showed that synthetic medical images can be used for data augmentation to improve the performance of CNNs in medical image classification. Using only classic data augmentation, their classification yielded 78.6% sensitivity and 88.4% specificity; after adding synthetic data augmentation, the results increased to 85.7% sensitivity and 92.4% specificity. In [20], Devi, S. et al. proposed an automatic support system for stage classification using an artificial neural network, and detected liver tumors through fuzzy clustering methods for medical applications. In [21], Gletsos, M. et al. applied a method for liver lesion classification using texture features in four categories, including the normal liver parenchyma class, with a hierarchical classifier of neural networks. Yasaka, K. et al. [22] constructed a convolutional neural network for classification based on 1068 lesion CT images and tested the models preserved at different stages of training, which yielded a total accuracy of 84%. In [23], Liang, N. et al. proposed a model that combines local information with global information, based on a dataset containing 480 CT liver slice images, to distinguish diverse types of focal liver lesions, and which yielded a total accuracy of 87%. In [24], Diamant, I. et al. used a bag-of-visual-words (BoVW) method learned from image patches. They applied two dictionaries, for the lesion interior and boundary regions, generated histograms for each lesion region of interest (ROI) based on the two dictionaries, and used support-vector machines (SVMs) for the final classification. Chen, P. et al. [25] used the same dataset as this study. They proposed a clean and effective feature fusion adversarial learning network to mine useful features and relieve over-fitting. Firstly, they train a fully convolutional autoencoder network with unsupervised learning to mine useful feature maps from liver lesion data; secondly, they transfer the feature maps to their proposed adversarial SENet network for liver lesion classification. Their results for liver lesion classification in CT show an average accuracy of 85.4%. In 2017, Hoogi, A. et al. [26] presented an adaptive model based on a convolutional neural network. They validated their model on a reasonably sized dataset, i.e., 164 magnetic resonance imaging (MRI) and 112 CT images of liver lesions, and evaluated it with measures such as the Dice similarity coefficient; this model was found to be significantly more accurate than other existing models. Perdigón Romero, F. et al. [27] presented a deep learning approach to assist in discriminating between liver metastases from colorectal cancer and benign cysts in abdominal CT images of the liver. The approach incorporates the efficient feature extraction of InceptionV3 combined with residual connections and pre-trained weights from ImageNet. The result obtained was an accuracy of 0.96 and an F1 score of 0.92, based on an in-house clinical biobank with 230 liver lesions originating from 63 patients.
Alahmer, H. and Ahmed, A. in [28] proposed an automated computer-aided classification (CAD) system to classify liver lesions as benign or malignant. The proposed method consists of three stages: firstly, automatic liver segmentation and lesion detection; secondly, feature extraction from multiple ROIs, where the novelty of dividing the segmented lesion into inside and border areas improved the classification accuracy to over 98%. In [29], Stoitsis, J. et al. presented a semi-automatic classification system. During the image pre-processing stage, they enhanced image quality and defined the tumor as an ROI. Their proposed system was able to classify four types of liver tissue: healthy, cyst, hemangioma, and HCC. Five texture feature sets (first-order statistics (FOS), spatial gray level dependence matrix (SGLDM), gray level difference method (GLDM), texture energy measures (TEM), and fractal dimension measurements (FDM)) were extracted for each tumor, and feature selection based on genetic algorithms was applied. The final classification achieved 90.63% accuracy. Wang, L. et al. in [30] proposed a classification system for liver lesions that distinguishes three types of hepatic tissue (normal, HCC, and hemangioma). The ROIs of the tumors were defined by experienced radiologists. For each ROI, four texture features (FOS, SGLDM, gray level run length matrix (GLRLM), and GLDM) were extracted to feed an SVM classifier; the classifier used two strategies to achieve multiclass SVMs. In [31], Kumar, S.S. et al. proposed a fully automated classification system specialized in differentiating between HCC (malignant) and hemangioma (benign). From each ROI, four texture feature sets were extracted, and a probabilistic neural network classifier was used for tumor classification. The final result was 96.7% accuracy, obtained with contourlet coefficient co-occurrence features. The proposed system can be extended to other types of liver diseases, but the performance measures and accuracy mainly depend on the number of samples used. In [32], Çomak, E. proposed a new PSO method called reverse direction supported particle swarm optimization (RDS-PSO) with an adaptive regulation procedure; RDS-PSO was constructed with both linearly increasing and decreasing inertia weights (with 1000 and 2000 iterations). In [33], Chen, S. et al. proposed an improved particle swarm optimization algorithm (IPSO) based on two forms of exponential inertia weight and two types of centroids. Their experimental results show that the proposed IPSO algorithm is more efficient than existing methods.

3. Proposed Model

We propose a novel FireNet method for liver lesion classification. Firstly, we use fire modules from SqueezeNet to improve speed for quick classification and to reduce the model size and the number of parameters. Secondly, we add bypass connections around the Fire modules to learn a residual function between input and output and to solve the vanishing gradient problem. In addition, we apply concatenation connections to maintain the feature information of each layer used for classification, which increases accuracy compared to the standard SqueezeNet. Thirdly, we propose a new Particle Swarm Optimization (NPSO) method to optimize the results of the proposed FireNet model, which increased the final accuracy. Figure 1 shows the overall structure of our proposed FireNet method for liver lesion classification.

3.1. Architecture of FireNet

In this work, we used eight fire modules and two traditional convolutional layers to create our proposed FireNet model as a smaller convolutional neural network (CNN) with fewer parameters. A fire module is composed of a squeeze layer and an expand layer [15]; its main contribution is to reduce the model size and the number of parameters during the training stage. We constructed FireNet by starting with a single convolutional layer; to reduce internal covariate shift and overfitting, we added batch normalization before the nonlinearity, followed by max-pooling (2, 2), then fire2, fire3, and fire4, followed by max-pooling (2, 2), then fire5, fire6, fire7, and fire8, followed by max-pooling (2, 2), then fire9, ending with a final convolutional layer. We used three types of kernels: first, a 3 × 3 kernel in the convolutional layers to capture large-scale structure; second, a 2 × 2 kernel for extracting high-dimensional semantic information; and third, 1 × 1 kernels in the squeeze and expand layers for extracting useful information while discarding redundant information.
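For illustration, below is a minimal PyTorch sketch of a fire module in the spirit of SqueezeNet [15], with a 1 × 1 squeeze layer followed by parallel 1 × 1 and 3 × 3 expand layers; the channel counts and the placement of batch normalization are assumptions of this sketch, not the exact FireNet configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Fire module: a 1x1 squeeze layer followed by parallel 1x1 and 3x3 expand layers."""
    def __init__(self, in_channels, squeeze_channels, expand_channels):
        super().__init__()
        self.squeeze = nn.Sequential(
            nn.Conv2d(in_channels, squeeze_channels, kernel_size=1),
            nn.BatchNorm2d(squeeze_channels),   # batch normalization before the nonlinearity
            nn.ReLU(inplace=True),
        )
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.squeeze(x)
        # Concatenate the two expand branches along the channel dimension.
        return self.relu(torch.cat([self.expand1x1(x), self.expand3x3(x)], dim=1))
```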
We used bypass connections to avoid the saturating neuron problem and the vanishing gradient problem during training. In this paper, a bypass connection is added around the Fire modules to learn a residual function between input and output. The bypass connection takes the output of the first layer as the input of the first fire module, and we set the input to Fire4 equal to the output of Fire2 + the output of Fire3, where the + operator is an element-wise addition, as shown in Figure 2. The bypass can be calculated as follows:
$x_n = S(x_{n-1}) + x_{n-1}$
where $x_{n-1}$ denotes the input of the fire module and $S$ denotes the nonlinear function representing the transformation in a fire module. In addition, we used concatenation to maintain the feature information of different layers by adding a 1 × 1 convolutional layer on top of each concatenation. This structure is shown in Figure 2. The concatenation connections can be calculated as:
$x_n = S([\, m(x_{n-1}),\; m(x_{n-2}),\; m(x_{n-3}) \,])$
where m is a 1 × 1 convolution layer added on top of each concatenation.
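As a sketch of how these two connections could be realized, the snippet below wraps a fire module with an element-wise bypass and merges earlier feature maps through 1 × 1 convolutions before concatenation; it assumes matching channel counts and spatial sizes and reuses the Fire class from the previous sketch.

```python
import torch
import torch.nn as nn

class FireWithBypass(nn.Module):
    """Bypass connection: x_n = S(x_{n-1}) + x_{n-1}, with S a fire module."""
    def __init__(self, fire_module):
        super().__init__()
        self.fire = fire_module  # assumed to keep the number of channels unchanged

    def forward(self, x):
        return self.fire(x) + x  # element-wise addition (residual connection)

class ConcatMerge(nn.Module):
    """Concatenation connection: a 1x1 convolution m(.) applied to each earlier feature map."""
    def __init__(self, in_channel_list, out_channels):
        super().__init__()
        self.reducers = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channel_list
        )

    def forward(self, feature_maps):
        # feature_maps: earlier layer outputs, assumed to share the same spatial size
        return torch.cat([m(f) for m, f in zip(self.reducers, feature_maps)], dim=1)
```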

3.2. A Brief Overview of PSO

The particle swarm optimization (PSO) algorithm is an evolutionary computing method based on simulating the social behavior of some animals [34]. The PSO algorithm finds the optimum value by sharing information between particles and individuals. This is achieved by first randomly initializing the positions and velocities of a group of particles. At each step, each particle is updated with new values: its velocity is updated based on the two best positions found so far. The best position found by the particle itself is $p_{best}$, and the best position discovered by the whole swarm is $g_{best}$. The positions and velocities of the particles are updated according to the following velocity and position equations:
$v_{t+1} = v_t + c_1 r_1 (p_{best} - x_t) + c_2 r_2 (g_{best} - x_t)$
$x_{t+1} = x_t + v_{t+1}$
where $c_1$, $c_2$ are positive constants, $r_1$, $r_2$ are two random numbers within the range 0–1, and $t$ is the iteration index. Figure 3 shows the flow chart of the PSO algorithm.
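The two update rules can be written compactly in NumPy, as in the sketch below; the array shapes and the choice of $c_1 = c_2 = 2$ follow common practice and the settings reported later in the Results section, but the function itself is only an illustration.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One PSO iteration over a swarm: x, v, pbest are (n_particles, n_dims); gbest is (n_dims,)."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x_new = x + v_new                                          # position update
    return x_new, v_new
```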
A basic PSO method can be described as follows:
Step 1:
Initialize the original position and velocity of particle swarm;
Step 2:
Evaluate the fitness of each particle;
Step 3:
Determine gbest from PSO Swarm; determine pbest from PSO Swarm;
Step 4:
If f(x) < f(gbest), update the swarm, gbest = x;
Step 5:
Repeat until certain termination criteria are met
Step 5.1:
Pick random numbers r1 and r2;
Step 5.2:
Update every particle’s velocity;
Step 5.3:
Update every particle’s position;
Step 5.4:
If f(x) < f(pbest), update the particle pbest = x.
If f(x) < f(gbest), update the swarm, gbest = x.
Step 5 End.

3.3. A New Particle Swarm Optimization (NPSO)

In this paper, we propose a new Particle Swarm Optimization (NPSO) method to optimize the results of the proposed FireNet model, improving its accuracy from 81.8% to 89.2%. To improve the performance of our proposed FireNet model, the inertia weight parameter $\omega$ is added to control the impact of the previous velocity of the particle and to improve the global search. To improve the result further, the parameter $\alpha$ is added. A particle's velocity is updated as follows:
$v_{t+1} = \alpha \omega v_t + c_1 r_1 (p_{best} - x_t) + c_2 r_2 (g_{best} - x_t)$
where $\omega$ is the inertia weight and $\alpha$ is a parameter added to further improve performance.
The inertia weight $\omega$ is one of the most important parameters of the NPSO algorithm, as it enables the algorithm to find the optimum solution accurately. The inertia weight is computed as follows:
$\omega = \omega_{max} - \dfrac{\omega_{max} - \omega_{min}}{k_{max}} \cdot k$
Here, $\omega_{max}$ is the maximum inertia weight, $\omega_{min}$ is the minimum inertia weight, $k_{max}$ is the maximum number of iterations, and $k$ is the current epoch. The inertia weight decreases linearly as the number of iterations increases. In this study, we set $\omega_{min} = 0.4$ and $\omega_{max} = 0.9$. Figure 4 illustrates the flow chart of the NPSO algorithm.
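A minimal sketch of the NPSO update with the linearly decreasing inertia weight above is given below; the values $\omega_{min} = 0.4$, $\omega_{max} = 0.9$, $\alpha = 1.2$, and $c_1 = c_2 = 2$ follow the settings reported in the Results section, and everything else is illustrative.

```python
import numpy as np

def inertia_weight(k, k_max, w_min=0.4, w_max=0.9):
    """Linearly decreasing inertia weight: w = w_max - (w_max - w_min) / k_max * k."""
    return w_max - (w_max - w_min) / k_max * k

def npso_velocity(v, x, pbest, gbest, k, k_max, alpha=1.2, c1=2.0, c2=2.0):
    """NPSO velocity update: v_{t+1} = alpha*w*v_t + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    w = inertia_weight(k, k_max)
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    return alpha * w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```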
The NPSO procedure can be divided into the following steps:
Step 1:
Initialize the learning rate and batch size of the proposed FireNet model. NPSO is used to control the convergence of the model; after three iterations, if the error value does not change, the NPSO is considered to have converged;
Step 2:
FireNet training process;
Step 3:
The results of FireNet are optimized using the NPSO algorithm;
Step 4:
The output of FireNet is updated if the solution of the swarm has a smaller error than the old output;
Step 5:
FireNet testing;
Step 6:
The final output is the accuracy of FireNet.
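To make the procedure concrete, the schematic sketch below wraps FireNet training in the NPSO loop, assuming that each particle encodes a candidate learning rate and batch size and that the fitness is the validation error; train_firenet and evaluate are hypothetical helper functions, the search ranges are placeholders, and the velocity update reuses the npso_velocity sketch above.

```python
import numpy as np

def npso_optimize(train_firenet, evaluate, n_particles=50, n_iter=175):
    """Hypothetical NPSO wrapper: each particle encodes (learning_rate, log2 batch size)."""
    low, high = np.array([1e-4, 4.0]), np.array([1e-2, 7.0])      # assumed search ranges
    x = np.random.uniform(low, high, size=(n_particles, 2))
    v = np.zeros_like(x)
    pbest, pbest_err = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_err = x[0].copy(), np.inf

    for k in range(n_iter):
        for i, particle in enumerate(x):
            lr, batch_size = particle[0], int(2 ** round(particle[1]))
            model = train_firenet(lr=lr, batch_size=batch_size)   # Step 2: FireNet training
            err = evaluate(model)                                 # fitness = validation error
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = particle.copy(), err
            if err < gbest_err:                                   # Step 4: update on smaller error
                gbest, gbest_err = particle.copy(), err
        v = npso_velocity(v, x, pbest, gbest, k, n_iter)          # NPSO velocity update (Section 3.3)
        x = np.clip(x + v, low, high)                             # position update within bounds
    return gbest, gbest_err
```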

4. Experiment and Results

4.1. Dataset

In this paper, the CT image dataset of liver lesions used in the study was collected from Jiangbin Hospital, an affiliated hospital of Jiangsu University (from 2015 to 2018), by searching the medical records for hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues. This work included data from 120 different patients: 30 patients with one or multiple HCC, 26 patients with one or multiple Hema, 23 patients with one or multiple Meta, and 41 Heal. The dataset contains a total of 4142 images, including 1040 images of HCC, 1036 images of Hema, 1032 images of Meta, and 1034 images of Heal. From each class, 250 images were randomly selected for the testing dataset, and the remaining images were used as the training dataset. Figure 5 shows CT image samples of each lesion type. An expert radiologist was in charge of marking the margins and determining the corresponding diagnosis, which was established by biopsy. Figure 6 illustrates a set of data samples from the different types. Liver lesions differ in size, shape, and contrast. We preprocessed the raw Digital Imaging and Communications in Medicine (DICOM) CT images with 512 × 512-pixel matrices, a slice collimation of 5–7 mm, and an in-plane slice resolution range of 0.57–0.89. We truncated the CT scan Hounsfield unit (HU) values and normalized all slice intensities into the range [0, 1] with min–max normalization.
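As an illustration of this preprocessing step, the sketch below converts a DICOM slice to Hounsfield units, truncates the values, and applies min–max normalization; the HU window of [-100, 400] is an assumed abdominal window rather than a value reported in the paper, and the pydicom-based loading (which assumes the rescale tags are present) is likewise illustrative.

```python
import numpy as np
import pydicom

def preprocess_slice(dicom_path, hu_min=-100.0, hu_max=400.0):
    """Load a CT slice, convert to HU, truncate, and min-max normalize to [0, 1]."""
    ds = pydicom.dcmread(dicom_path)
    hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
    hu = np.clip(hu, hu_min, hu_max)                       # truncate Hounsfield unit values
    return (hu - hu.min()) / (hu.max() - hu.min() + 1e-8)  # min-max normalization
```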

4.2. Experiment

The proposed FireNet model was run on a computer with an NVIDIA GeForce GTX 1080 Ti GPU. The code was written in Python using the PyTorch framework. The available patient data were split into a 75% training set and a 25% test set. For training our model, we used a batch size of 64 with a learning rate of 0.001 and stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0001, for 175 epochs. The input images are 64 × 64 regions of interest (ROIs) cropped from the CT scans. Our loss function can be calculated as follows:
$L_{C.E} = -\dfrac{1}{L}\sum_{i=1}^{L}\left[\, y^{(i)} \cdot \log \hat{y}^{(i)} + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \,\right]$
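A minimal PyTorch training sketch with the reported hyperparameters is given below; FireNet and train_dataset are placeholders for the model and data pipeline described above, and nn.CrossEntropyLoss is used here as the standard multi-class form of the cross-entropy loss.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

model = FireNet(num_classes=4).cuda()        # placeholder: the FireNet architecture of Section 3.1
criterion = nn.CrossEntropyLoss()            # multi-class cross-entropy over the four classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0001)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)  # 64 x 64 ROI crops

for epoch in range(175):
    for images, labels in train_loader:
        images, labels = images.cuda(), labels.cuda()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```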

4.3. Results

In this section, we discuss the results of our study from several aspects. First, we evaluate the classification performance of our proposed FireNet and compare it with SqueezeNet. Second, we analyze the number of parameters and the model size. Third, we discuss the impact of the NPSO method on FireNet. Finally, we compare our proposed FireNet model with GoogLeNet, AlexNet, ResNet, and state-of-the-art methods.

4.3.1. Performance of Proposed FireNet and SqueezeNet

Four measures are used to evaluate the performance of our model: precision, recall, accuracy, and F1 score, as assessed with true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) [35]. These measures are defined by the following equations:
$\text{Total Accuracy} = \dfrac{TP + TN}{(TP + TN) + (FP + FN)}$
$\text{Precision} = \dfrac{TP}{TP + FP}$
$\text{Recall} = \dfrac{TP}{TP + FN}$
$\text{F1-Score} = \dfrac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$
where TP is the number of cases correctly classified as lesions; TN is the number of cases correctly classified as non-lesions; FP is the number of cases classified as lesions that were non-lesions; and FN is the number of cases classified as non-lesions that were lesions.
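These measures can be computed directly from the confusion-matrix counts, as in the small helper below (a sketch using the counts defined above; the example counts in the comment are arbitrary).

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with arbitrary counts:
# classification_metrics(tp=80, tn=90, fp=10, fn=20) -> (0.85, 0.889, 0.80, 0.842)
```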
Figure 7a shows the performance of FireNet and SqueezeNet during the training phase. The results show that our proposed FireNet model works better than SqueezeNet, with an accuracy rate of 81.4%. Figure 7b shows the training loss of FireNet and SqueezeNet; our proposed FireNet model performs better than SqueezeNet, with a loss of 0.079.
In this work, four categories of liver data were used for the classification task: hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues. Healthy tissues are included to establish the diversity of the training data so that the model has the ability to distinguish lesions from non-lesions. The use of bypass connections and concatenation connections is very significant and greatly increased the accuracy of our proposed FireNet model: we applied concatenation to maintain the feature information of different layers by adding a 1 × 1 convolutional layer on top of each concatenation, and the bypass connections were added around the Fire modules to learn a residual function between input and output. Table 1 shows that our proposed FireNet model performs better than SqueezeNet.

4.3.2. Number of Parameters and Size Model

Our proposed FireNet model introduces the fire module from SqueezeNet to decrease the model size and the number of parameters.
Table 2 shows that FireNet has 9.5 times fewer parameters than GoogLeNet, 51.6 times fewer than AlexNet, and 75.8 times fewer than ResNet. The model size of FireNet is 16.6 times smaller than GoogLeNet, 75 times smaller than AlexNet, and 76.6 times smaller than ResNet.
Figure 8a shows the performance of FireNet and the state-of-the-art deep learning models during the training phase, and Figure 8b shows their training loss. The results show that the proposed model performs better than the other models.
Table 3 shows the performance of FireNet, before applying the proposed fine-tuning method, in comparison with the state-of-the-art deep learning models. FireNet is more robust and more accurate, with an accuracy of 81.8%, as compared to GoogLeNet with 80.9%, AlexNet with 79.8%, and ResNet with 78.2%. FireNet performs very well as a small architecture (model size 3 MB, 790,890 parameters) compared to GoogLeNet (model size 50 MB, 7,521,212 parameters), AlexNet (model size 225 MB, 40,885,256 parameters), and ResNet (model size 230 MB, 60,012,023 parameters).
Figure 9a shows the performance of FireNet and the state-of-the-art deep learning models during the training phase after applying the NPSO method; the results show that our proposed FireNet model works better than the other models, with an accuracy rate of 89.2%. Figure 9b illustrates the corresponding training loss. Figure 9c shows the performance of the proposed FireNet model with NPSO versus PSO; the results show that NPSO performs better than PSO.
At this stage, we used NPSO to optimize the results of the proposed FireNet model, increasing the accuracy from 81.8% to 89.2%. NPSO starts by initializing the number of particles, the velocities, and the number of iterations; the output of FireNet is updated if the solution of the swarm has a smaller error than the old output. The number of particles was set to 50, the number of iterations was set to 175, the inertia weight bounds were set to 0.9 and 0.4, the parameter $\alpha$ was set to 1.2, and $c_1$ and $c_2$ were both set to 2. Table 4 shows that our FireNet model gives better results than the standard CNN models, with an accuracy rate of 89.2%.
Table 5 presents a performance comparison of our model against some state-of-the-art models. The comparison data are taken from Chen, P. et al. [25], who used the same dataset as the present study. The results in Table 5 clearly demonstrate the superiority of the proposed FireNet model in terms of both accuracy and execution time.

4.4. Discussion

In this paper, we illustrated the potential benefit of a concise model with few parameters while preserving accuracy. We constructed a CNN called FireNet to classify four categories of liver tissue, namely, hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues. Fire modules from SqueezeNet were applied as the basic component of the architecture to decrease the model size and the number of parameters, thereby improving the computational efficiency and classification speed of the proposed FireNet model. The proposed FireNet model is 16.6 times smaller than GoogLeNet, 75 times smaller than AlexNet, and 76.6 times smaller than ResNet, and it has 9.5 times fewer parameters than GoogLeNet, 51.6 times fewer than AlexNet, and 75.8 times fewer than ResNet. FireNet requires 2.2 s, which shows that our proposed model with few parameters is faster and more efficient compared to the 3.4 s needed by GoogLeNet, 4.6 s by AlexNet, or 4.3 s by ResNet. The use of bypass connections and concatenation connections is very significant and greatly improved the classification accuracy of FireNet, with 81.8% accuracy, as compared to SqueezeNet with 79.3% accuracy. We used concatenation to preserve the feature information of different layers by adding a 1 × 1 convolutional layer on top of each concatenation, and the bypass connections were added around the fire modules to learn a residual function between input and output and to solve the vanishing gradient problem. We also proposed NPSO to optimize our proposed FireNet model by adding the two parameters $\omega$ and $\alpha$ to the velocity update, which greatly increased the final result, leading to an accuracy of 89.2%. The proposed method is promising for the recognition of liver lesions, which will help doctors avoid misdiagnosis.

5. Conclusions

In this study, we proposed a new method called FireNet for classifying liver lesions, namely, hepatocellular carcinoma, metastases, hemangiomas, and healthy tissues. Our proposed FireNet model introduces fire modules to decrease the model size and the number of parameters while increasing speed for quick classification. The model size of FireNet is 16.6 times smaller than GoogLeNet, 75 times smaller than AlexNet, and 76.6 times smaller than ResNet, and its number of parameters is 9.5 times smaller than GoogLeNet, 51.6 times smaller than AlexNet, and 75.8 times smaller than ResNet. After training our proposed model from scratch, the accuracy could only reach 81.8%, which is insufficient for clinical systems. We therefore introduced a new Particle Swarm Optimization (NPSO) method to optimize the results of the proposed FireNet model, improving the classification accuracy from 81.8% to 89.2%, which demonstrates that a model with few parameters can reach an outstanding result. We hope that the proposed model can lead to stronger radiology support systems. For future work, we plan to enlarge the number of CT images to improve the performance of FireNet.

Author Contributions

G.K.K., Y.S., and Z.L. developed the main framework and collaborated in writing the paper. G.K.K.: methodology and software; Y.S. and Z.L.: supervision. All authors contributed to revising the manuscript and have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61976106, 61772242, 61572239); the China Postdoctoral Science Foundation (2017M611737); the Six Talent Peaks Project in Jiangsu Province (DZXX-122); and the Key Special Projects of Health and Family Planning Science and Technology in Zhenjiang City (SHW2017019).

Acknowledgments

The authors would like to thank the radiologists of the Medical Imaging Department of the Affiliated Hospital of Jiangsu University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Roth, H.; Lu, L.; Liu, J.; Yao, J.; Seff, A.; Cherry, K.; Kim, L.; Summers, R.M. Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation. IEEE Trans. Med. Imaging 2015, 35, 1170–1181.
2. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med. Image Anal. 2017, 42, 60–88.
3. Greenspan, H.; Van Ginneken, B.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159.
4. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
5. Marrero, J.A.; Ahn, J.; Reddy, R.K. ACG clinical guideline: The diagnosis and management of focal liver lesions. Am. J. Gastroenterol. 2014, 109, 1328–1347.
6. Dietrich, C.F.; Sharma, M.; Gibson, R.N.; Schreiber-Dietrich, D.; Jenssen, C. Fortuitously discovered liver lesions. World J. Gastroenterol. 2013, 19, 3173–3188.
7. Bajenaru, N.; Balaban, V.; Săvulescu, F.; Campeanu, I.; Patrascu, T. Hepatic hemangioma-review. J. Med. Life 2015, 8, 4–11.
8. Serrablo, A.; Tejedor, L.; Ramia, J.-M. Liver Metastases—Surgical Treatment. In Liver Tumors; Reeves, H., Manas, D.M., Lochan, R., Eds.; IntechOpen: Rijeka, Croatia, 2013.
9. Bruix, J.; Sherman, M. Management of hepatocellular carcinoma: An update. Hepatology 2011, 53, 1020–1022.
10. Shi, J.; Zhou, S.; Liu, X.; Zhang, Q.; Lu, M.; Wang, T. Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 2016, 194, 87–94.
11. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
12. Pereira, S.; Pinto, J.A.A.D.S.R.; Alves, V.; Silva, C. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251.
13. Miao, S.; Wang, Z.J.; Liao, R. A CNN Regression Approach for Real-Time 2D/3D Registration. IEEE Trans. Med. Imaging 2016, 35, 1352–1363.
14. Zhang, X.; Hu, W.; Chen, F.; Liu, J.; Yang, Y.; Wang, L.; Duan, H.; Si, J. Gastric precancerous diseases classification using CNN with a concise model. PLoS One 2017, 12.
15. Iandola, F.; Han, S.; Moskewicz, M.; Ashraf, K.; Dally, W.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
16. Bellver, M.; Maninis, K.-K.; Pont-Tuset, J.; Giró-i-Nieto, X.; Torres, J.; van Gool, L. Detection-aided liver lesion segmentation using deep learning. arXiv 2017, arXiv:1711.11069.
17. Wang, W.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Chen, Q.; Liang, N.; Lin, L.; Hu, H.; Zhang, Q. Classification of Focal Liver Lesions Using Deep Learning with Fine-Tuning. In Proceedings of the 2018 International Conference on Digital Medicine and Image Processing, Okinawa, Japan, 12–14 November 2018; pp. 56–60.
18. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; Volume 2, pp. 2672–2680.
19. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331.
20. Devi, S.M.; Sruthi, A.N.; Jothi, S.C. MRI Liver Tumor Classification Using Machine Learning Approach and Structure Analysis. Res. J. Pharm. Technol. 2018, 11, 434.
21. Gletsos, M.; Mougiakakou, S.G.; Matsopoulos, G.K.; Nikita, K.S.; Kelekis, D.; Nikita, A.S. A computer-aided diagnostic system to characterize CT focal liver lesions: Design and optimization of a neural network classifier. IEEE Trans. Inf. Technol. Biomed. 2003, 7, 153–162.
22. Yasaka, K.; Akai, H.; Abe, O.; Kiryu, S. Deep Learning with Convolutional Neural Network for Differentiation of Liver Masses at Dynamic Contrast-enhanced CT: A Preliminary Study. Radiology 2018, 286, 887–896.
23. Liang, N.; Lin, L.; Hu, H.; Zhang, Q.; Chen, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W. Residual Convolutional Neural Networks with Global and Local Pathways for Classification of Focal Liver Lesions. In Proceedings of the 15th Pacific Rim International Conference on Artificial Intelligence, Nanjing, China, 28–31 August 2018; pp. 617–628.
24. Diamant, I.; Hoogi, A.; Beaulieu, C.F.; Safdari, M.; Klang, E.; Amitai, M.; Greenspan, H.; Rubin, D.L. Improved Patch-Based Automated Liver Lesion Classification by Separate Analysis of the Interior and Boundary Regions. IEEE J. Biomed. Health Inform. 2015, 20, 1585–1594.
25. Chen, P.; Song, Y.; Yuan, D.; Liu, Z. Feature fusion adversarial learning network for liver lesion classification. In Proceedings of the ACM Multimedia Asia, Beijing, China, 16–18 December 2019; pp. 1–7.
26. Hoogi, A.; Subramaniam, A.; Veerapaneni, R.; Rubin, D.L. Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis. IEEE Trans. Med. Imaging 2016, 36, 781–791.
27. Romero, F.P.; Diler, A.; Bisson-Gregoire, G.; Turcotte, S.; Lapointe, R.; Vandenbroucke-Menu, F.; Tang, A.; Kadoury, S. End-To-End Discriminative Deep Network for Liver Lesion Classification. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019.
28. Alahmer, H.; Ahmed, A. Computer-aided Classification of Liver Lesions from CT Images Based on Multiple ROI. Procedia Comput. Sci. 2016, 90, 80–86.
29. Stoitsis, J.; Valavanis, I.; Mougiakakou, S.G.; Golemati, S.; Nikita, A.; Nikita, K.S. Computer aided diagnosis based on medical image processing and artificial intelligence methods. Nucl. Instrum. Methods Phys. Res. A 2006, 569, 591–595.
30. Wang, L.; Zhang, Z.; Liu, J.; Jiang, B.; Duan, X.; Xie, Q.; Hu, D.; Li, Z. Classification of Hepatic Tissues from CT Images Based on Texture Features and Multiclass Support Vector Machines. In Proceedings of the Advances in Neural Networks—ISNN 2009, Wuhan, China, 26–29 May 2009; pp. 374–381.
31. Kumar, S.S.; Moni, R.; Rajeesh, J. An automatic computer-aided diagnosis system for liver tumours on computed tomography images. Comput. Electr. Eng. 2013, 39, 1516–1526.
32. Comak, E. A particle swarm optimizer with modified velocity update and adaptive diversity regulation. Expert Syst. 2019, 36, e12330.
33. Chen, S.; Xu, Z.; Tang, Y.; Liu, S. An Improved Particle Swarm Optimization Algorithm Based on Centroid and Exponential Inertia Weight. Math. Probl. Eng. 2014, 976486.
34. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
35. Zhu, W.; Zeng, N.; Wang, N. Sensitivity, Specificity, Accuracy, Associated Confidence Interval and ROC Analysis with Practical SAS® Implementations. Northeast SAS Users Group Health Care Life Sci. 2010, 19, 67.
Figure 1. Overall structure of our proposed FireNet method for liver lesions classification.
Figure 2. The schema for concatenation.
Figure 3. Flowchart of a PSO algorithm.
Figure 4. Flowchart of the proposed NPSO algorithm.
Figure 5. Computed tomography (CT) images of liver lesions.
Figure 6. Dataset examples.
Figure 7. Training progress of our proposed FireNet model and SqueezeNet. (a) the performance during the training phase. (b) the performance during the training loss phase.
Figure 8. Training progress of our proposed FireNet model with the state-of-the-art deep learning models before using NPSO method. (a) the performance of FireNet with the state-of-the-art deep learning models during the training phase. (b) the performance of FireNet with the state-of-the art deep learning models during the training loss phase.
Figure 9. Training progress of our proposed FireNet model with the state-of-the-art deep learning models by using NPSO method. (a) illustrates the performance of FireNet with the state-of-the-art deep learning models during the training phase. (b) illustrates the performance of FireNet with the state-of-the-art deep learning models during the training loss phase. (c) shows the performance of the proposed FireNet model versus NPSO and PSO.
Table 1. Results of the proposed FireNet method and SqueezeNet.

Method       Loss    Accuracy  Recall  Precision  F1-Score
SqueezeNet   0.96    81.2      77.7    79.9       78.7
FireNet      0.079   81.8      78.7    81.6       80.1
Table 2. Number of parameters and model size.

Method      Model Size (MB)  Fire Modules  Parameters
GoogLeNet   50               -             7,521,212
AlexNet     225              -             40,885,256
ResNet      230              -             60,012,023
FireNet     3                8             790,890
Table 3. Results before using the NPSO method in comparison with standard convolutional neural network (CNN) models.

Method      Loss    Accuracy  Recall  Precision  F1-Score
GoogLeNet   0.102   81.2      77.6    80.3       78.3
AlexNet     0.150   80.1      77.1    80.6       78.8
ResNet      0.097   78.6      77.7    77.9       77.7
FireNet     0.079   81.8      78.7    81.6       80.1
Table 4. Results after using the proposed fine-tuning method in comparison with the standard CNN models.

Method      Loss    Accuracy  Recall  Precision  F1-Score  Time (s)
GoogLeNet   0.083   87.1      84.9    83.3       84.0      3.4
AlexNet     0.097   88.8      85.1    87.3       86.3      4.6
ResNet      0.096   86.2      84.1    86.5       85.2      4.3
FireNet     0.049   89.2      86.2    87.3       86.7      2.2
Table 5. Comparison with state-of-the-art methods.

Method                   Accuracy (%)
Chen, P. et al. [25]     85.4
Liang, N. et al. [23]    87.0
Yasaka, K. et al. [22]   84.0
FireNet                  89.2
