Article

Auxiliary Segmentation Method of Osteosarcoma in MRI Images Based on Denoising and Local Enhancement

1 School of Computer Science and Engineering, Central South University, Changsha 410083, China
2 The First People’s Hospital of Huaihua, Huaihua 418099, China
3 Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha 410036, China
4 Research Center for Artificial Intelligence, Monash University, Clayton, Melbourne, VIC 3800, Australia
* Authors to whom correspondence should be addressed.
Healthcare 2022, 10(8), 1468; https://doi.org/10.3390/healthcare10081468
Submission received: 11 July 2022 / Revised: 1 August 2022 / Accepted: 3 August 2022 / Published: 4 August 2022
(This article belongs to the Special Issue Advances of Decision-Making Medical System in Healthcare)

Abstract: Osteosarcoma is a malignant bone tumor. Doctors face many difficulties when they manually examine patients’ MRI images to complete a diagnosis: the osteosarcoma in MRI images is very complex, making its recognition and segmentation resource-consuming. Automatic segmentation of the osteosarcoma area can solve these problems to a certain extent. However, existing studies usually fail to balance segmentation accuracy and efficiency; they are either sensitive to noise, with low accuracy, or time-consuming. We therefore propose an auxiliary segmentation method based on denoising and local enhancement. The method first optimizes the osteosarcoma images by removing noise with the Edge Enhancement based Transformer for Medical Image Denoising (Eformer) and by using a non-parametric method to localize and enhance the tumor region in the MRI images. The osteosarcoma is then segmented by Deep Feature Aggregation for Real-Time Semantic Segmentation (DFANet). Our method achieves impressive segmentation accuracy while remaining efficient in both time and space, and it can provide the location and extent of the osteosarcoma as a basis for further diagnosis.

1. Introduction

Osteosarcoma is a kind of malignant bone tumor [1], accounting for 20% to 45% of all malignant bone tumors, with a high incidence rate. The disease is locally aggressive, develops rapidly, and has a high metastasis rate [2,3,4,5]. Owing to its chemotherapy resistance and high recurrence rate, the overall prognosis of osteosarcoma is still unsatisfactory [6,7,8,9]. At present, osteosarcoma still has high morbidity and mortality: about 35% of patients face amputation [10], and the survival rate is about 20% [11]. Early detection and localization of the disease before surgery or treatment can improve overall survival and reduce amputation rates [12].
Among the imaging methods used for the clinical evaluation of osteosarcoma, MRI offers good soft-tissue contrast and is sensitive to osteosarcoma, so it can detect abnormal signals in the early stage of lesions. Its multi-parameter and multi-planar slicing capabilities can display the location and extent of lesions [13,14]. Therefore, MRI is often used to diagnose osteosarcoma. After the clinical acquisition of patients’ MRI images, diagnosis requires tumor identification and delineation [12].
In most developing countries, the diagnosis of osteosarcoma is difficult due to economic backwardness, a shortage of medical resources, and a lack of experienced doctors [15]. Conventional diagnosis of osteosarcoma relies on manual work. One patient generates more than 600 MRI images at a time [16], a huge amount of data of which only about 3% is useful, leading to a heavy workload for doctors and inefficiency in diagnosis. Manual interpretation of MRI images is also limited by subjectivity and fatigue. The average manual segmentation accuracy of doctors is about 90%; for inexperienced doctors, it drops to about 85% because of subjective judgments or mistakes [17]. The significant differences in the location, extent, and shape of osteosarcoma across patients make MRI images complex. Moreover, the tumor area is difficult to identify because of its uneven internal grayscale and texture features, and imaging artifacts further affect the visual appearance [18]. Due to the high heterogeneity of osteosarcoma, the formed osteoid is indistinguishable, and the edge blurring caused by partial volume effects in structural MRI reduces segmentation accuracy.
However, the identification and segmentation of osteosarcoma lesions is necessary: it is the basis for further quantitative analysis or tissue classification. Segmentation is usually achieved by drawing a region of interest (ROI) within the tumor margin [19,20,21]. Compared with manual segmentation, automated methods are generally faster, more objective, and more accurate [22,23]. Therefore, automatic segmentation techniques for osteosarcoma are needed clinically. In recent years, artificial intelligence-assisted segmentation methods for osteosarcoma images have been developed, including fuzzy connectivity [17], region growing, unsupervised clustering methods [24], and supervised machine learning methods [25,26,27]. Computer-aided diagnosis technology and artificial intelligence systems can ease the shortage of medical resources, the serious imbalance of the doctor-patient ratio, and the lack of professional doctors in developing countries.
However, it has been reported that the segmentation accuracy of osteosarcoma is generally 38–89% [12]. In the existing studies on osteosarcoma MRI image segmentation, cluster-based methods are computationally efficient but are sensitive to noise with low accuracy. Learning-based segmentation methods cannot balance accuracy and segmentation efficiency. CNN-based segmentation methods are more accurate but time-consuming and memory-consuming. In order to find an osteosarcoma image segmentation method with high accuracy, high efficiency, high degree of automation, and good reproducibility, we propose a method based on denoising and tumor localization and enhancement.
Our method aims to segment osteosarcoma target regions in MRI images automatically so that doctors can analyze the patient’s condition accurately and quickly on this basis. The method first preprocesses the osteosarcoma MRI image dataset: it removes noise with the Eformer model and then uses non-parametric localization and enhancement methods to process the tumor area, making it clearer and easier to identify. The osteosarcoma region is then segmented using DFANet, a lightweight real-time semantic segmentation network that handles image segmentation with high speed, high efficiency, considerable accuracy, and low memory consumption.
Image preprocessing, including denoising and local enhancement, improves segmentation accuracy, and the real-time semantic segmentation network improves computation speed. Our method therefore improves efficiency while ensuring accuracy.
Processing MRI images with the method in this paper yields the location and extent of the osteosarcoma, providing radiologists with intuitive features of the tumor as a reliable basis for subsequent diagnosis and analysis. At the same time, our method saves resources and reduces the cost of osteosarcoma image segmentation. This can alleviate the problems of low levels of medical care and the shortage of medical resources in developing countries, making it suitable for clinical use and promotion.
The contributions are as follows:
  • This study proposes an auxiliary segmentation method of osteosarcoma in MRI images based on denoising and local enhancement, improving the accuracy and speed of segmentation and reducing resource consumption.
  • We use the medical denoising model Eformer to remove noise and then localize and enhance the osteosarcoma region in MRI images. After preprocessing, the tumor region in the MRI image will be clearer and the boundary can be enhanced. Finally, an efficient and accurate network DFANet is used to segment osteosarcoma in MRI images.

2. Related Works

In the diagnosis of osteosarcoma, computer-aided diagnosis technology and artificial intelligence systems are particularly important aids. Current methods for segmenting tumor regions from images can be broadly divided into three categories [28].
The first type of method is based on clustering. Mandava et al. proposed a spatial fuzzy clustering method based on multi-criteria optimization, which considers the two criteria of spatial information and intensity on top of fuzzy C-means clustering (FCM) [29]. Mohamed Nasor et al. presented a technique using K-means clustering, among other steps, to complete segmentation [18]. EBK et al. created an automated segmentation system to measure RECIST using SLIC-S and FCM [30]. However, these methods are sensitive to initial noise and, lacking object priors, can only deal with images with simple structure and ordered texture.
The second type is the traditional learning-based approach. Frangi et al. proposed a method using a supervised cascaded feedforward neural network to segment osteosarcoma in dynamic perfusion MRI images, training a two-stage cascade classifier model with multi-scale spatial features generated by a pharmacokinetic model [31]; the overall segmentation accuracy was 38–78%. Glass et al. segmented MRI images with a hybrid neural network and then used a multi-layer BPNN for classification [32]. Chen et al. used Zernike moments and an SVM to segment osteosarcoma in T1-weighted images (T1WI) [33]. The limitation of these methods is that they must compute a large number of features, such as texture and wavelet features, to train a classifier, which is slow, time-consuming, and memory-intensive, while reducing dimensionality [34] or selecting features [35] to cut this cost can result in low accuracy [36]. Therefore, because they rely on handcrafted features, these methods are not effective when the number of osteosarcoma images is large, and they cannot extract target tumor regions with complex structures and disordered textures.
The last type of method is based on CNNs. Zhang et al. used a multi-supervised residual network (MSRN)-based method [37]. Wu et al. [13] used the Mean Teacher algorithm to optimize the dataset, subjected the obtained noisy data to a second round of training, and finally used SepUNet and a CRF to segment osteosarcoma lesions. Barzekar et al. [38] used CNNs as feature extractors to classify malignant and benign tumors in images. Anisuzzaman et al. used computer-aided detection (CAD) and diagnosis (CADx) to detect osteosarcoma [39].
Built on deep architectures, various CNN-based studies have addressed tumor segmentation in images [40]. A CNN operates on patches using kernels, without the need for handcrafted features, which can significantly improve segmentation accuracy [41]. However, overlapping patches cause redundancy [42], so these methods are time-consuming and memory-intensive. Improved fully convolutional network (FCN)-based segmentation achieves good results but fails to identify some smaller object regions [42,43].

3. Methods

When diagnosing osteosarcoma from patients’ MRI images, traditional manual identification by doctors faces many difficulties. Because each patient generates a large amount of MRI data of which only a few images are useful, doctors have a large workload and low work efficiency [44,45,46]. Results based on experience and subjective assessment may also be inaccurate. In developing countries, because of economic backwardness, the shortage of medical resources, and the lack of equipment and professionals, early diagnosis of osteosarcoma is even more difficult [13].
Segmenting and delineating osteosarcoma in MRI images can assist doctors in determining important information such as the location and extent of lesions [45]. Automated segmentation can reduce labor costs and time costs. It also improves the reliability of segmentation results to a certain extent. In order to further improve the accuracy and efficiency of segmentation, this paper proposes an auxiliary segmentation method of osteosarcoma in MRI images based on denoising and local enhancement. The overall structure of the method is shown in Figure 1.
The method begins by preprocessing the osteosarcoma image dataset, including denoising and tumor localization and enhancement, because MRI images usually carry noise that affects segmentation accuracy, and the high heterogeneity of osteosarcoma can blur edges. Therefore, we use the Eformer model to remove noise from MRI images and perform edge enhancement. Afterward, we use non-parametric localization and enhancement methods to process the tumor area so that the shape of the osteosarcoma is clearer and easier to identify. Osteosarcoma in the MRI images is then segmented using DFANet, a lightweight network that handles image segmentation quickly and efficiently. We finally obtain the location and extent of the osteosarcoma, which provides a more accurate and reliable basis for subsequent diagnosis and analysis.
The following content is divided into three parts. Firstly, collected MRI images are preprocessed to improve the accuracy and robustness of segmentation and enhance the model’s generalization ability. Section 3.1 shows using Eformer to remove noise in MRI images and perform osteosarcoma edge enhancement. In Section 3.2, we locate and clarify tumor regions, achieving a more efficient segmentation process. Section 3.3 introduces the process of segmenting the preprocessed images using DFANet.
The symbols used in this chapter and their explanations are shown in Table 1.

3.1. Remove Noise

The distribution density of osteosarcoma in MRI images is not uniform, brightness differs between images, and MRI images contain much noise, which can lead to overfitting of the model. Artifacts caused by partial volume effects and edge blurring caused by the high heterogeneity of osteosarcoma also hurt subsequent segmentation accuracy. To address these issues, we adopt Eformer [47], which effectively removes noise from the MRI images and enhances the edge of the tumor region, improving the accuracy of subsequent segmentation. It uses transformer blocks to build an encoder-decoder network, with window-based self-attention to reduce computational cost. Furthermore, Eformer connects the learnable Sobel-Feldman operator to the intermediate layers of the architecture to enhance edges and improve denoising performance. Its model structure is shown in Figure 2.
Given an input osteosarcoma MRI image I, an edge-enhancement feature map S(I) is first generated by the Sobel filter and activated by GeLU [48]; the Sobel filter also produces a feature map of the entire image with a changed number of channels. In each LC2D block of the encoding stage, the feature map is first processed by LeWin transformer blocks, then concatenated with S(I) and passed through a convolutional layer; finally, the feature map and S(I) are down-sampled and encoded. The encoded feature map is passed to another LeWin transformer block for processing. During decoding, each LC2U block up-samples the feature map, concatenates it with the previously generated edge feature map S(I), passes it through a convolution block, and finally feeds it to LeWin transformer blocks. The last part of decoding uses a single-layer convolutional output projection. After this process, the noise in the MRI image is removed and the edges of the osteosarcoma are enhanced.
The denoising principle of our model is residual learning: rather than predicting the clean image directly, the network estimates the residual, that is, the noise distribution in the image. Once the noise distribution is obtained, subtracting it from the original image completes the denoising.
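As a minimal sketch of this residual-learning scheme (PyTorch is assumed here; `model` stands for any trained network that predicts the noise map, not Eformer's actual API):

```python
import torch

def denoise(model: torch.nn.Module, t: torch.Tensor) -> torch.Tensor:
    """Residual-learning denoising: the network R estimates the noise r
    in the noisy image t = c + r; the clean image is then c = t - R(t)."""
    model.eval()
    with torch.no_grad():
        r = model(t)  # predicted residual, i.e., the noise distribution
    return t - r      # denoised image
```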
Sobel-Feldman operator: The Sobel operator is a classic edge detection algorithm [49]. Eformer uses its expanded version, including the diagonal direction, not only the horizontal and vertical directions. The four filters used are shown in Figure 3. The edge enhancement feature map is used many times in the whole network. It is repeatedly cascaded with the image feature map, which can enhance the edge features to the greatest extent and solve the blurred edge of osteosarcoma in images.
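For illustration, the four directional kernels (horizontal, vertical, and the two diagonals) can be applied as a fixed convolution; in Eformer the operator is learnable, so this sketch shows only one plausible initialization:

```python
import torch
import torch.nn.functional as F

# Horizontal, vertical, and two diagonal Sobel-Feldman kernels.
sobel_kernels = torch.tensor([
    [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],   # vertical edges (d/dx)
    [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],   # horizontal edges (d/dy)
    [[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]],   # diagonal (45 degrees)
    [[-2., -1., 0.], [-1., 0., 1.], [0., 1., 2.]],   # diagonal (135 degrees)
]).unsqueeze(1)  # shape (4, 1, 3, 3): four output channels, one input channel

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 1, H, W) grayscale MRI slice -> (B, 4, H, W) edge maps S(I)."""
    return F.conv2d(img, sobel_kernels, padding=1)
```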
Transformer-based encoder-decoder: Both the LC2D and LC2U blocks use LeWin transformer blocks, which contain a locally-enhanced window (LeWin) attention mechanism, to process the convolutional feature maps at low resolution. Equation (1) gives the calculation, where LN denotes layer normalization, $P_n$ is the output of the W-MSA block, and $Q_n$ is the output of the LeFF block.
$P_n = \mathrm{W\text{-}MSA}(\mathrm{LN}(Q_{n-1})) + Q_{n-1}, \qquad Q_n = \mathrm{LeFF}(\mathrm{LN}(P_n)) + P_n$ (1)
The structure of the LeWin block is shown in Figure 4. The feature map is normalized by layer normalization and passed to W-MSA, where the two-dimensional feature map is split into non-overlapping windows and self-attention is performed on the flattened features of each window. All outputs are then concatenated and the final result is linearly projected. The result is passed through another layer normalization to the locally-enhanced feed-forward network (LeFF), in which each image patch is processed by a linear projection layer and a 3 × 3 depth-wise convolutional layer.
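The window decomposition used by W-MSA can be illustrated with a standard Swin-style partition (H and W are assumed divisible by the window size):

```python
import torch

def window_partition(x: torch.Tensor, win: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping win x win
    windows and flatten each one for window-based self-attention.
    Returns (num_windows * B, win * win, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // win, win, W // win, win, C)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return x.view(-1, win * win, C)
```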
Down-sampling & up-sampling: The Eformer down-sampling uses strided convolutions. Up-sampling adopts transpose convolution [50], which can reconstruct the spatial structure. To avoid uneven overlap, the size of the convolution kernel should generally be divisible by the step size, so the size of the convolution kernel of 4 × 4 is selected for transpose convolution, and the step size is set to 2.
Residual Learning: This aims to implicitly remove the potentially clean image hidden in the noisy MRI image. For example, an osteosarcoma MRI image that contains noise is $t = c + r$. Eformer’s approach is to train a network that learns the residual mapping $R(t) = r$, thereby estimating the noise distribution $r$ in the image; the denoised image is then obtained as $c = t - r$.
Optimization: Multiple loss functions are used. First, the Mean Squared Error (MSE) loss function is used to estimate the pixel distance between the actual output and the clean image as in Equation (2). The clean image here refers to the corresponding low-noise image.
$L_{mse} = \frac{1}{N} \sum_{i=1}^{N} \left\| (t_i - R(t_i)) - c_i \right\|^2$ (2)
However, the MSE loss function easily causes image artifacts such as over-smoothing and blurring, which is very unfavorable for denoising osteosarcoma MRI images, so the Multi-scale Perceptual (MSP) loss function [51], built on ResNet features, is added, as in Equation (3).
$L_{msp} = \frac{1}{NM} \sum_{i=1}^{N} \sum_{s=1}^{M} \left\| \phi_s(t_i - R(t_i); \hat{\theta}) - \phi_s(c_i; \hat{\theta}) \right\|^2$ (3)
Here, $\phi$ is a ResNet-50 feature extractor (with the pooling layers removed, only the convolutional layer parameters kept, and weights pre-trained on ImageNet). Both $c$ and $t - R(t)$ are passed through the feature extractor to produce the perceptual loss. The final loss function combines the two losses above, as shown in Equation (4), where the two $\lambda$ are predefined weights.
$L_{final} = \lambda_{mse} L_{mse} + \lambda_{msp} L_{msp}$ (4)
In this way, the combination of the perceptual MSP loss and the pixel-wise MSE loss handles both the overall structural information and the pixel-by-pixel similarity, yielding more accurate results, improving the denoising effect, and minimizing the loss of important information from the original osteosarcoma MRI image. After Eformer processing, we obtain denoised, edge-enhanced images.
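A simplified single-scale sketch of the combined loss, assuming PyTorch/torchvision; the λ weights and the choice of feature layer are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Feature extractor phi: ResNet-50 convolutional layers only (final
# pooling/fc removed), pre-trained on ImageNet and frozen.
_resnet = resnet50(pretrained=True)
phi = nn.Sequential(*list(_resnet.children())[:-2]).eval()
for p in phi.parameters():
    p.requires_grad = False

mse = nn.MSELoss()

def final_loss(t, r_pred, c, lam_mse=1.0, lam_msp=0.1):
    """t: noisy image, r_pred: predicted residual R(t), c: clean target.
    lam_* are illustrative placeholder weights."""
    denoised = t - r_pred
    l_mse = mse(denoised, c)  # pixel-wise similarity (Eq. 2)
    # Perceptual term: distance between deep feature maps of the denoised
    # output and the clean image (single-scale simplification of Eq. 3).
    x = denoised.repeat(1, 3, 1, 1) if denoised.shape[1] == 1 else denoised
    y = c.repeat(1, 3, 1, 1) if c.shape[1] == 1 else c
    l_msp = mse(phi(x), phi(y))
    return lam_mse * l_mse + lam_msp * l_msp  # Eq. (4)
```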

3.2. Tumor Localization and Enhancement Methods

Since the location, shape, and structure of the osteosarcoma region vary greatly, it is difficult to identify in the image. Therefore, non-parametric tumor localization and enhancement methods [52] are used to locate and clarify the tumor region, providing a more accurate target for subsequent segmentation. First, the frequency-distribution histogram of intensity values in the MRI image is used to distinguish the background from the tumor region, with zero intensity values discarded. We calculate the frequency of each intensity value with Equation (5), where $j$ ranges from 0 to the maximum intensity value $\sigma$ in the image, $n_j$ is the frequency of the $j$th intensity in the MRI image, $I_j$ is the image, and $x$ and $y$ are the coordinates of points in the image.
$n_j = \sum_{j=1}^{\sigma} I_j, \quad I(x, y) \in (0, \sigma]$ (5)
After the above calculation, we can draw the frequency distribution histogram shown in Figure 5. Figure 5a is an image requiring osteosarcoma localization and enhancement, and Figure 5b is the corresponding frequency distribution histogram.
Then, the initial non-parametric threshold θ (average of the frequency of the intensity values) is determined by the frequency of the intensity values. Its calculation method is shown in Equation (6).
$\theta = \frac{1}{\sigma} \sum_{j=1}^{\sigma} n_j$ (6)
In MRI images of osteosarcoma, areas with infrequent intensity values represent the background, while areas with the most frequent intensity values represent healthy tissue and the tumor. $\theta$ can then be used to determine the intensity minima $E_{min}$ and $F_{min}$ that delimit the background and tumor areas, using Equation (7), where $I_\theta$ denotes the intensity values whose frequency is greater than $\theta$. Computing $E_{min}$ and $F_{min}$ gives a preliminary localization of the tumor area: intensities between $E_{min}$ and $F_{min}$ correspond to the background, and those between $F_{min}$ and $\sigma$ to the tumor area.
$E_{min} = \min(I_\theta) \quad \text{and} \quad F_{min} = \max(I_\theta)$ (7)
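A NumPy sketch of Equations (5)–(7), assuming an integer-valued 2-D slice `img`:

```python
import numpy as np

def locate_tumor_bounds(img: np.ndarray):
    """Initial localization from the intensity-frequency histogram.
    Returns (theta, E_min, F_min) per Equations (5)-(7)."""
    sigma = int(img.max())                         # maximum intensity value
    # n_j: frequency of each non-zero intensity (Eq. 5); zero intensities
    # (pure background) are discarded.
    n = np.bincount(img[img > 0].ravel(), minlength=sigma + 1)[1:]
    theta = n.mean()                               # non-parametric threshold (Eq. 6)
    frequent = np.flatnonzero(n > theta) + 1       # I_theta: frequency > theta
    E_min, F_min = frequent.min(), frequent.max()  # Eq. (7)
    return theta, E_min, F_min
```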
Subsequent operations aim to identify tumor regions as significant or low-contrast tumors. To judge the distinguishability and contrast between the background and the tumor area, we use a basic statistical approach, comparing standard deviations. If the osteosarcoma is a significant tumor, it is easy to distinguish and segment; otherwise, as a low-contrast tumor, its localization requires further processing. Standard deviations for the background and tumor areas are calculated using Equation (8), where $j$ represents the intensity, $m_j$ is its mean value, and $n_j$ represents the number of pixels.
$S_j = \sqrt{\frac{\sum (j - m_j)^2}{n_j}}$ (8)
After determining the initial localization and contrast of the osteosarcoma in the MRI image, we find the final localization of the osteosarcoma, given by Equation (9).
$\text{final region} = \begin{cases} E & \text{if } S_E > S_F \\ F & \text{if } S_E < S_F \end{cases}$ (9)
To enhance the visual appearance of the tumor area, we process the localized tumor area, ignoring the blank area, to make the osteosarcoma more prominent. First, when the tumor is located in the F region, we use Equation (10).
$Z_F = \frac{I_F(x, y) - F_{min}}{S_F}$ (10)
The enhancement method also takes into account the updated minimum of the tumor region. The updated $E_{min}$ and $S_F$, denoted $E'_{min}$ and $S'_F$, are given in Equations (11) and (12).
$E'_{min} = \frac{1}{x} \sum_{j=E_{min}}^{\sigma} I_j$ (11)
In Equation (11), $E'_{min}$ is the updated minimum value of the tumor area, defined as the average intensity value between $E_{min}$ and $\sigma$; $x$ represents the number of pixels in this area.
$S'_F = \sqrt{\frac{\sum (F - m_F)^2}{n_F}}$ (12)
In Equation (12), $F$ represents the intensity value, $m_F$ the mean intensity value, and $n_F$ the number of pixels in the updated tumor area. The final calculation is shown in Equation (13) below.
$Z_E = \frac{I_E(x, y) - E'_{min}}{S'_F}$ (13)
Finally, by adding the enhanced image to the preprocessed MRI image, the localized and enhanced osteosarcoma image is obtained, as shown in Equation (14), where $I_f$ represents the filtered image and $Z_i$ is defined as $Z_E$ or $Z_F$ according to Equation (9).
$Z_R = I_f + Z_i$ (14)
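Putting Equations (8)–(14) together, a rough NumPy sketch (continuing the previous snippet; treating E and F as the intensity bands [E_min, F_min) and [F_min, σ] is our simplifying assumption):

```python
def localize_and_enhance(img: np.ndarray, E_min: int, F_min: int) -> np.ndarray:
    """Contrast test and enhancement; img is the filtered slice I_f,
    and the returned array is Z_R = I_f + Z_i (Eq. 14)."""
    E = img[(img >= E_min) & (img < F_min)]       # background band
    F = img[img >= F_min]                         # tumor band
    S_E, S_F = E.std(), F.std()                   # Eq. (8)

    if S_E < S_F:                                 # significant tumor (Eq. 9)
        Z = np.where(img >= F_min, (img - F_min) / S_F, 0.0)          # Eq. (10)
    else:                                         # low-contrast tumor
        E_min_new = img[img >= E_min].mean()      # updated minimum (Eq. 11)
        S_F_new = img[img >= E_min].std()         # updated deviation (Eq. 12)
        Z = np.where(img >= E_min, (img - E_min_new) / S_F_new, 0.0)  # Eq. (13)

    return img + Z                                # Eq. (14)
```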
The use of non-parametric localization and enhancement methods can effectively solve the difficulty caused by the osteosarcoma’s complex structure, shape, and location. This method makes the shape of the osteosarcoma clearer, facilitating the subsequent segmentation to obtain more accurate results.

3.3. Osteosarcoma Image Segmentation

3.3.1. Deep Feature Aggregation Network (DFANet)

During tumor localization and enhancement, some healthy tissues whose intensity values are close to those of the osteosarcoma region are also localized and enhanced, which forces the use of supervised segmentation techniques to avoid segmenting unrelated areas. We use the DFANet model to segment the osteosarcoma region in the image. DFANet is a very efficient CNN structure with high accuracy, designed for semantic segmentation [53].
To pursue inference speed, we adopt a modified Xception network with smaller computational complexity, pre-train it, and use it as the backbone of our model.

3.3.2. Network Architecture

The DFANet semantic segmentation network can be regarded as an encoder-decoder structure. The encoder comprises three Xception backbones; it is composed not only of the sub-backbone networks but also of sub-stages that connect their information, combining high-level and low-level features [54]. DFANet implements cross-level feature aggregation: sub-network aggregation up-samples the high-level feature maps of the previous backbone and feeds them into the next backbone, while sub-stage aggregation transfers receptive fields and higher-dimensional structural details by combining layers with the same dimensions [55]. The decoder consists of convolution and bilinear up-sampling operations, which combine the outputs of each stage to generate the segmentation result. The architecture is shown in Figure 6.

3.3.3. Deep Feature Aggregation

DFANet first learns sub-pixel details by up-sampling the output features of the network and refining the feature map at a larger resolution with another sub-network. To avoid losing the accuracy of high-dimensional features and receptive fields in the feature flow, DFANet achieves stage-level refinement by concatenating feature layers with the same resolution. It reuses high-level features extracted from the backbone network to bridge semantic information and structural details (the spatial structure, such as edges and shapes). These two aggregation strategies combine detailed and spatial information at different depths of the feature extraction network to achieve strong performance. A schematic diagram of the two aggregation strategies is shown in Figure 7, and a rough code sketch follows below.
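To make the sub-network aggregation concrete, here is a rough sketch; the module wiring (fusing the up-sampled output with the original input features) is our simplification, not DFANet's exact topology:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNetworkAggregation(nn.Module):
    """Rough sketch of DFANet-style sub-network aggregation: each backbone
    receives the up-sampled high-level output of the previous backbone,
    fused with the input features; the decoder later combines all stages."""
    def __init__(self, backbones: nn.ModuleList, fuse: nn.ModuleList):
        super().__init__()
        self.backbones = backbones  # e.g., three modified-Xception trunks
        self.fuse = fuse            # 1x1 convs that match channel counts

    def forward(self, x):
        feats, inp = [], x
        for trunk, conv1x1 in zip(self.backbones, self.fuse):
            out = trunk(inp)                         # high-level feature map
            feats.append(out)
            up = F.interpolate(out, size=x.shape[-2:], mode='bilinear',
                               align_corners=False)  # up-sample for reuse
            inp = conv1x1(torch.cat([x, up], dim=1))
        return feats                                 # handed to the decoder
```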
Our paper uses a new method for assisted segmentation of osteosarcoma that includes denoising and tumor localization and enhancement. It addresses the noise and blurred edges of osteosarcoma MRI images while enhancing the osteosarcoma in complex images, which improves the accuracy and precision of segmentation; segmentation is fast and consumes little memory. After an MRI image is processed by the above methods, the position and range of the osteosarcoma are obtained, providing intuitive image features for subsequent diagnosis and analysis. The value of this method lies mainly in giving hospitals and doctors more accurate auxiliary information for diagnosing osteosarcoma. Our method also saves resources, including labor and time costs; if promoted clinically, it could effectively improve the currently difficult diagnosis of osteosarcoma.

4. Experiments

4.1. Dataset

We collected a total of 81,326 clinical images from 204 patients, approximately 400 images per patient, all of which are representative. We selected about 80% of the data as the training set and about 20% as the test set: of the 204 patients, 164 were assigned to the training set and 40 to the test set. Several radiologists and image-processing operators participated in annotating the image data; after identifying the osteosarcoma, they marked the images with the ITK-SNAP tool. The basic information of the patients who provided the experimental data is shown in Table 2.
In addition, to improve the model’s generalization and accuracy, we augment the image data: we scale, randomly crop, horizontally flip, and rotate the images by 90 and 180 degrees.
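For segmentation data, the image and its mask must receive identical geometric transforms. A sketch with torchvision's functional API (tensor inputs assumed; the crop size and scale range are illustrative):

```python
import random
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def augment(image, mask):
    """Apply the same scale / crop / flip / rotation to a slice and its mask."""
    # Random scaling
    if random.random() < 0.5:
        h, w = image.shape[-2:]
        s = random.uniform(0.8, 1.2)
        size = [int(h * s), int(w * s)]
        image = TF.resize(image, size)
        mask = TF.resize(mask, size, interpolation=InterpolationMode.NEAREST)
    # Random crop to a fixed patch
    h, w = image.shape[-2:]
    if h > 256 and w > 256:
        top, left = random.randint(0, h - 256), random.randint(0, w - 256)
        image = TF.crop(image, top, left, 256, 256)
        mask = TF.crop(mask, top, left, 256, 256)
    # Horizontal flip
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    # Rotation by 90 or 180 degrees
    angle = random.choice([0, 90, 180])
    if angle:
        image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    return image, mask
```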

4.2. Evaluation Indicators

We use several metrics to evaluate the model. Confusion matrices are often used to evaluate network performance in supervised learning, mainly by comparing predicted classifications with the true classifications of instances. Evaluation of our network rests on four quantities: osteosarcoma regions predicted by the model as osteosarcoma (True Positive, TP), other regions predicted as osteosarcoma (False Positive, FP), osteosarcoma areas predicted as non-tumor (False Negative, FN), and other areas predicted as non-tumor (True Negative, TN) [56]. The confusion matrix is shown in Table 3 below. From these four quantities we compute accuracy, precision, recall, and F1-score to assess the correctness of the segmentation results [57]; we use IOU and DSC to measure the segmentation effect by area; and the number of parameters is used to measure model complexity.
Acc is the most commonly used classification performance indicator [58]. It is defined in Equation (18).
$Acc = \frac{TP + TN}{TP + TN + FP + FN}$ (18)
Pre is the percentage of all regions predicted to be tumors that are correctly predicted; it is defined in Equation (19).
$Pre = \frac{TP}{TP + FP}$ (19)
Recall is the proportion of all true tumor regions that are correctly predicted as tumors; it is defined in Equation (20).
$Re = \frac{TP}{TP + FN}$ (20)
F1-score represents the model’s robustness and it is defined as follows [59].
$F1 = \frac{2 \times precision \times recall}{precision + recall}$
IOU is the intersection-over-union ratio.
$IOU = \frac{|prediction \cap target|}{|prediction \cup target|}$
DSC reflects the similarity of two samples, and it is defined as follows.
$DSC = \frac{2 \times |prediction \cap target|}{|prediction| + |target|}$
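For concreteness, all six metrics can be computed from a pair of binary masks; a NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """pred/target: binary masks (1 = osteosarcoma, 0 = other tissue)."""
    tp = np.sum((pred == 1) & (target == 1))
    tn = np.sum((pred == 0) & (target == 0))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    pre, re = tp / (tp + fp), tp / (tp + fn)
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Pre": pre,
        "Re": re,
        "F1": 2 * pre * re / (pre + re),
        "IOU": tp / (tp + fp + fn),          # |A ∩ B| / |A ∪ B|
        "DSC": 2 * tp / (2 * tp + fp + fn),  # 2|A ∩ B| / (|A| + |B|)
    }
```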
The result of the above metrics can reflect the effect of our segmentation model of osteosarcoma, and we use these metrics to measure model performance.

4.3. Comparison Algorithm

We set up comparative experiments to compare our method with FCN [60], PSPNet [61], MSRN [37], MSFCN [62], FPN [63], and U-Net [64] algorithms and analyze the experimental results.
(1) FCN implements pixel-level classification and avoids the repeated storage and computation caused by using pixel blocks. This paper uses the FCN-8s and FCN-16s variants.
(2) PSPNet adopts a pyramid pooling model, which can collect hierarchical information and use global knowledge, promoting the development of scene parsing and semantic segmentation.
(3) MSRN uses residual blocks and convolution blocks to detect features at different scales. A simple and efficient reconstruction structure is also designed to easily achieve multi-scale upscaling.
(4) MSFCN uses feature channels to capture contextual information while up-sampling, ensuring segmentation accuracy.
(5) FPN adopts a unique feature pyramid model and utilizes the hierarchical semantic features of convolutional networks to achieve feature extraction. It can greatly improve the performance of segmentation models.
(6) U-Net proposes a network structure and a strategy to use labeled data efficiently. It uses concatenation-based feature fusion and relies on data augmentation to use data more effectively.

4.4. Parameter Setting

The experiments were trained using the “poly” learning rate strategy, in which the initial rate is multiplied by $(1 - \frac{iter}{max\_iter})^{0.9}$; the base learning rate is set to $2 \times 10^{-1}$. The batch size is 48, and the weight decay is $10^{-5}$. The cross-entropy error at each pixel over the classes is applied as our loss function. All experiments were repeated five times, and the results were averaged. When training the model, we use three-fold cross-validation to obtain a model with better generalization ability.
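A sketch of this “poly” schedule in PyTorch (the optimizer choice and max_iter value are assumptions, not stated in the paper):

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

base_lr, max_iter, power = 2e-1, 100_000, 0.9   # max_iter is illustrative

def poly_factor(cur_iter: int) -> float:
    """'poly' policy: lr = base_lr * (1 - cur_iter / max_iter) ** power."""
    return (1 - cur_iter / max_iter) ** power

# Example wiring (model is assumed to exist):
# optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, weight_decay=1e-5)
# scheduler = LambdaLR(optimizer, lr_lambda=poly_factor)
# ... call scheduler.step() once per training iteration.
```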

4.5. Evaluation of Segmentation Effect

In our method, we performed denoising and tumor localization and enhancement on the initial MRI images of the dataset, and the processing effect is shown in Figure 8. The three MRI images of osteosarcoma represent the initial image (Figure 8a), the image after noise removal (Figure 8b), and the image after localization enhancement (Figure 8c). It can be seen that preprocessing can effectively remove noise and enhance the tumor area, which is convenient for improving segmentation accuracy.
We can see the effect of data processing in Figure 9, including the original osteosarcoma MRI image (Figure 9a), the ground truth (Figure 9b), the segmentation rendering obtained on the unpreprocessed dataset (Figure 9c) and the result after denoising, tumor localization, and enhancement (Figure 9d). We can see the preprocessed model segmentation results are closer to the ground truth, and the edge processing and shape of osteosarcoma are more accurate.
Figure 10 contains the results of the comparison experiment, showing the intuitive results of the segmentation of the same original MRI image by DFANet and the comparison algorithm, respectively. The first column is the original MRI image of the osteosarcoma before segmentation, and the second column represents the image after marking the position of the osteosarcoma, i.e., the real mask. Columns 3–7 represent the segmentation results of different comparison algorithms, respectively. After comparison, DFANet has better segmentation results. Compared with other algorithms, the images predicted by DFANet are closest to the real labels. The edge processing and predicted shape of osteosarcoma are more accurate.
The performance of our algorithm cannot be wholly judged from the visual results alone. To evaluate the model more accurately, we adopt several indicators to analyze the algorithm’s performance. After obtaining the value of each evaluation index for each comparison algorithm, we compare them with our method; the results are shown in Table 4. Our method performs better on every evaluation index. The table also compares the segmentation indicators with and without the data preprocessing operation, showing that dataset preprocessing effectively improves the segmentation results and optimizes the segmentation boundary. It is therefore necessary to denoise the image data and enhance the tumor area before model training.
In order to compare the effects more intuitively, we drew a line chart as shown in Figure 11. Our model has better performance than other models in each evaluation index. Among them, the IOU and DSC indicators are nearly 0.1 higher than other models. The IOU index is about 0.15 higher than PSPNet, and the precision is nearly 0.1 higher than PSPNet. The F1 indicator is also nearly 0.05 higher than the other models. From various indicators, the indicators of PSPNet are generally the lowest, and the segmentation effect is the worst.
Figure 12 compares the IOU and parameter counts of DFANet and the comparison algorithms. The IOU value of our model is the highest, and it consumes the fewest parameters, only 8.72 M. Among the other algorithms, FCN achieves an average segmentation effect but consumes the largest number of parameters, while PSPNet has the worst segmentation effect yet still consumes a large number of parameters.
Figure 13 shows the accuracy values of those methods. Our method’s performance is not as good as that of FPN and UNet in the first 30 rounds of training. However, in the subsequent training, the accuracy value of our model is higher than that of other comparison algorithms, nearly 0.1 higher than MSRN. The performance of MSRN and MSFCN models is very unstable. In contrast, the accuracy value of our model remains above 0.95, and reaches 0.99 in the later stage.
We plot the precision over 100 epochs for five of the comparison models in Figure 14. The precision of our method is the highest most of the time, reaching more than 0.95, nearly 0.2 higher than the worst-performing MSRN. Between epochs 30 and 60 and after epoch 80, the precision of DFANet remains relatively stable and high, and the segmentation it yields is relatively accurate.
We compared the F1-score values of different models, as shown in Figure 15. The F1-score value of DFANet gradually increased in the later stage, and reached 0.94 at about 100 rounds of training. Our method has a good segmentation effect and high segmentation accuracy for osteosarcoma.
Finally, we compared the DSC values of these models at corresponding epochs. As shown in Figure 16, the DSC value of our model is the highest, close to 0.95, nearly 0.05 higher than MSFCN, and about 0.15 higher than the worst MSRN. DSC of our model has a continuous upward trend, and as the number of training rounds increases, its effect will get better and better. It shows that DFANet has high segmentation similarity and can better segment the target area of osteosarcoma and process the boundary.

5. Summary

In this paper, an auxiliary segmentation method of osteosarcoma in MRI images based on denoising and local enhancement is proposed. More than 8000 osteosarcoma MRI images collected from hospitals are used as a dataset. We also compare the method with other segmentation models. The results show that our method is very accurate and combines accuracy and speed, reducing the consumption of resources and costs. Using this method to process MRI images of osteosarcoma is helpful for doctors to diagnose patient conditions more efficiently and accurately.
Our method only uses MRI image processing to aid in diagnosing osteosarcoma. In the future, we will work on combining more data sources, relying on images for diagnosis, and studying multimodal learning combined with medical records.

Author Contributions

Writing—original draft, L.W. and L.Y.; writing—review & editing, J.Z., H.T., F.G. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work received funding from the Hunan Provincial Natural Science Foundation of China (2018JJ3299, 2018JJ3682).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data, 12 months after publication of this article, will be considered by the corresponding author.

Acknowledgments

Availability of data and materials: all data analyzed during the current study are included in the submission.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Corre, I.; Verrecchia, F.; Crenn, V.; Redini, F.; Trichet, V. The Osteosarcoma Microenvironment: A Complex but Targetable Ecosystem. Cells 2020, 9, 976.
  2. Liu, F.; Gou, F.; Wu, J. An Attention-Preserving Network-Based Method for Assisted Segmentation of Osteosarcoma MRI Images. Mathematics 2022, 10, 1665.
  3. Kazuhiro, T.; Toshifumi, O. Management of elderly patients with bone and soft tissue sarcomas: JCOG Bone and Soft Tissue Tumor Study Group. Jpn. J. Clin. Oncol. 2022, 52, 526–530.
  4. Le, T.; Su, S.; Shahriyari, L. Immune classification of osteosarcoma. Math. Biosci. Eng. 2021, 18, 1879–1897.
  5. Liu, Z.; Zhou, X.; Xiong, W. BA-GCA Net: Boundary Aware Grid Contextual Attention Net in Osteosarcoma MRI Image Segmentation. Comput. Intell. Neurosci. 2022, 2022, 3881833.
  6. Xiao, P.; Zhou, Z.; Dai, Z. An artificial intelligence multiprocessing scheme for the diagnosis of osteosarcoma MRI images. IEEE J. Biomed. Health Inform. 2021, 15, 1–12.
  7. Arianna, F.; Gasperini, C.; Gómez, M.P.A.; Bazzocchi, A.; Fanti, S.; Nanni, C. The Role of FDG-PET and Whole-Body MRI in High Grade Bone Sarcomas with Particular Focus on Osteosarcoma; WB Saunders: Philadelphia, PA, USA, 2021.
  8. Abd, A.R.H.; Gaber, K.M.M.; Ahmed, A.I. Role of Diffusion Weighted MRI in Assessment of Treatment Response to Chemotherapy in Patients with Osteogenic Sarcoma. QJM Int. J. Med. 2021, 114 (Suppl. S1).
  9. Chen, H.; Liu, J.; Cheng, Z.; Lu, X.; Wang, X.; Lu, M.; Li, S.; Xiang, Z.; Zhou, Q.; Liu, Z.; et al. Development and external validation of an MRI-based radiomics nomogram for pretreatment prediction for early relapse in osteosarcoma: A retrospective multicenter study. Eur. J. Radiol. 2020, 129, 109066.
  10. Zhuang, Q.; Dai, Z. Deep active learning framework for lymph nodes metastases prediction in medical support system. Comput. Intell. Neurosci. 2022, 2022, 4601696.
  11. Tian, X.; Yan, L.; Jiang, L. Comparative transcriptome analysis of leaf, stem, and root tissues of Semiliquidambar cathayensis reveals candidate genes involved in terpenoid biosynthesis. Mol. Biol. Rep. 2022, 49, 5585–5593.
  12. Kayal, E.B.; Kandasamy, D.; Sharma, R.; Bakhshi, S.; Mehndiratta, A. Segmentation of osteosarcoma tumor using diffusion weighted MRI: A comparative study using nine segmentation algorithms. Signal Image Video Process. 2020, 14, 727–735.
  13. Yang, S.; Zhou, Z.; Xie, P.; Xu, N.; Dai, Z. Intelligent Segmentation Medical Assistance System for MRI Images of Osteosarcoma in Developing Countries. Comput. Math. Methods Med. 2022, 2022, 6654946.
  14. Yu, G.; Chen, Z.; Tan, Y. Medical Decision Support System for Cancer Treatment in Precision Medicine in Developing Countries. Expert Syst. Appl. 2021, 186, 115725.
  15. Jia, W.; Gou, F.; Tan, Y.A. Staging Auxiliary Diagnosis Model for Nonsmall Cell Lung Cancer Based on the Intelligent Medical System. Comput. Math. Methods Med. 2021, 2021, 6654946.
  16. Shen, Y.; Gou, F.; Dai, Z. Osteosarcoma MRI Image-Assisted Segmentation System Base on Guided Aggregated Bilateral Network. Mathematics 2022, 10, 1090.
  17. Yu, G.; Wu, J. Efficacy prediction based on attribute and multi-source data collaborative for auxiliary medical system in developing countries. Neural Comput. Appl. 2022, 34, 5497–5512.
  18. Chang, L.; Yu, G. Effective Data Decision-Making and Transmission System Based on Mobile Health for Chronic Disease Management in the Elderly. IEEE Syst. J. 2021, 15, 5537–5548.
  19. Qin, Y.; Li, X. A management method of chronic diseases in the elderly based on IoT security environment. Comput. Electr. Eng. 2022, 102, 108188.
  20. Jiao, Y.; Qi, H. Capsule network assisted electrocardiogram classification model for smart healthcare. Biocybern. Biomed. Eng. 2022, 42, 543–555.
  21. Fangfang, G.; Wu, J. Data Transmission Strategy Based on Node Motion Prediction IoT System in Opportunistic Social Networks. Wirel. Pers. Commun. 2022, 121, 1–12.
  22. Krm, F.; Tsokos, C.P. Deep and Statistical Learning in Biomedical Imaging: State of the Art in 3D MRI Brain Tumor Segmentation. arXiv 2021, arXiv:2103.05529.
  23. Xiong, W.; Zhou, X. A Reputation Value-based Task-sharing Strategy in Opportunistic Complex Social Networks. Complexity 2021, 2021, 8554351.
  24. Amin, J.; Sharif, M.; Raza, M.; Saba, T.; Anjum, M.A. Brain tumor detection using statistical and machine learning method. Comput. Methods Programs Biomed. 2019, 177, 69–79.
  25. Chang, L.; Moustafa, N.; Bashir, A.K.; Yu, K. AI-Driven Synthetic Biology for Non-Small Cell Lung Cancer Drug Effectiveness-Cost Analysis in Intelligent Assisted Medical Systems. IEEE J. Biomed. Health Inform. 2021, 54, 1–12.
  26. Ouyang, T.; Yang, S.; Dai, Z. Rethinking U-Net from an Attention Perspective with Transformers for Osteosarcoma MRI Image Segmentation. Comput. Intell. Neurosci. 2022, 2022, 7973404.
  27. Norouzi, A.; Shafry, M.; Rahim, M.; Altameem, A.; Saba, T.; Rad, A.E.; Rehman, A.; Uddin, M. Medical image segmentation methods, algorithms, and applications. IETE Tech. Rev. 2014, 3, 37–41.
  28. Li, L.; Gou, F.; Long, H.; He, K.; Wu, J. Effective data optimization and evaluation based on social communication with AI-assisted in opportunistic social networks. Wirel. Commun. Mob. Comput. 2022, 2022, 4879557.
  29. Mandava, R.; Wei, B.C.; Yeow, L.S. Spatial multiple criteria fuzzy clustering for image segmentation. In Proceedings of the Second International Conference on Computational Intelligence, Washington, DC, USA, 28–30 September 2010; IEEE: Manhattan, NY, USA, 2010; pp. 305–310.
  30. Ebk, A.; Kandasamy, D.; Yadav, R.; Bakhshi, S.; Sharma, R.; Mehndiratta, A. Automatic segmentation and RECIST score evaluation in osteosarcoma using diffusion MRI: A computer aided system process. Eur. J. Radiol. 2020, 133, 109359.
  31. Frangi, A.F.; Egmont-Petersen, M.; Niessen, W.J.; Reiber, J.H.C.; Viergever, M.A. Bone tumor segmentation from MR perfusion images with neural networks using multi-scale pharmacokinetic features. Image Vis. Comput. 2001, 19, 679–690.
  32. Glass, J.O.; Reddick, W.E. Hybrid artificial neural network segmentation and classification of dynamic contrast-enhanced MR imaging (DEMRI) of osteosarcoma. Magn. Reson. Imaging 1998, 16, 1075–1083.
  33. Chen, C.X.; Zhang, D.; Li, N.; Qian, X.J.; Wu, S.; Gail, S. Osteosarcoma segmentation in MRI based on Zernike moment and SVM. Chin. J. Biomed. Eng. 2013, 22, 70–78.
  34. Tripathy, R.; Bilionis, I.; Gonzalez, M. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation. J. Comput. Phys. 2016, 321, 191–223.
  35. Zhang, X.; Mei, C.L.; Chen, D.G.; Li, J.H. Feature selection in mixed data: A method using a novel fuzzy rough set-based information entropy. Pattern Recognit. 2016, 56, 1–15.
  36. Mohammad, H.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2016, 35, 18–31.
  37. Rui, Z.; Huang, L.; Xia, W.; Zhang, B.; Qiu, B.; Gao, X. Multiple supervised residual network for osteosarcoma segmentation in CT images. Comput. Med. Imaging Graph. 2018, 63, 1–8.
  38. Barzekar, H.; Yu, Z. C-Net: A Reliable Convolutional Neural Network for Biomedical Image Classification. Expert Syst. Appl. 2022, 187, 116003.
  39. Anisuzzaman, D.M.; Barzekar, H.; Tong, L.; Luo, J.; Yu, Z. A deep learning study on osteosarcoma detection from histological images. Biomed. Signal Process. Control 2021, 69, 102931.
  40. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
  41. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
  42. Fangfang, G.; Wu, J. Message transmission strategy based on recurrent neural network and attention mechanism in IoT system. J. Circuits Syst. Comput. 2022, 31, 2250126.
  43. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–16 December 2015; pp. 1395–1403.
  44. Cui, R.; Chen, Z.; Tan, Y.; Yu, G. A Multiprocessing Scheme for PET Image Pre-Screening, Noise Reduction, Segmentation and Lesion Partitioning. IEEE J. Biomed. Health Inform. 2021, 25, 1699–1711.
  45. Yu, G.; Chen, Z.; Tan, Y. A Diagnostic Prediction Framework on Auxiliary Medical System for Breast Cancer in Developing Countries. Knowl.-Based Syst. 2021, 232, 107459.
  46. Jia, W.; Xia, J.; Gou, F. Information transmission mode and IoT community reconstruction based on user influence in opportunistic social networks. Peer-to-Peer Netw. Appl. 2022, 15, 1398–1416.
  47. Yang, W.; Luo, J. Application of information transmission control strategy based on incremental community division in IoT platform. IEEE Sens. J. 2021, 21, 21968–21978.
  48. Tian, X.; Wu, J.; Gou, F. Disease Control and Prevention in Rare Plants Based on the Dominant Population Selection Method in Opportunistic Social Networks. Comput. Intell. Neurosci. 2022, 2022, 1489988.
  49. Sobel, I. An Isotropic 3 × 3 Image Gradient Operator. Presented at the Stanford A.I. Project 1968, San Francisco, CA, USA, 2 February 2014.
  50. Yepeng, D.; Gou, F.; Wu, J. Hybrid Data Transmission Scheme Based on Source Node Centrality and Community Reconstruction in Opportunistic Social Network. Peer-to-Peer Netw. Appl. 2021, 14, 3460–3472.
  51. Liang, T.; Jin, Y.; Li, Y.; Wang, T. EDCNN: Edge enhancement-based densely connected network with compound loss for low-dose CT denoising. In Proceedings of the 2020 15th IEEE International Conference on Signal Processing (ICSP), Beijing, China, 6–9 December 2020; IEEE: Manhattan, NY, USA, 2020.
  52. Ilhan, A.; Sekeroglu, B.; Abiyev, R. Brain tumor segmentation in MRI images using non-parametric localization and enhancement methods with U-net. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 589–600.
  53. Li, H.; Xiong, P.; Fan, H.; Sun, J. DFANet: Deep Feature Aggregation for Real-Time Semantic Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9514–9523.
  54. Gou, F.; Wu, J. Triad link prediction method based on the evolutionary analysis with IoT in opportunistic social networks. Comput. Commun. 2022, 181, 143–155.
  55. Xiangbing, Z.; Long, H.; Duan, X.; Kong, G. A Convolutional Neural Network-Based Intelligent Medical System with Sensors for Assistive Diagnosis and Decision-Making in Non-Small Cell Lung Cancer. Sensors 2021, 21, 7996.
  56. Shen, Y.; Gou, F.; Wu, J. Node Screening Method Based on Federated Learning with IoT in Opportunistic Social Networks. Mathematics 2022, 10, 1669.
  57. Zhou, L.; Tan, Y. A Residual Fusion Network for Osteosarcoma MRI Image Segmentation in Developing Countries. Comput. Intell. Neurosci. 2022, 2022, 7285600.
  58. Limiao, L.; Gou, F.; Wu, J. Modified Data Delivery Strategy Based on Stochastic Block Model and Community Detection with IoT in Opportunistic Social Networks. Wirel. Commun. Mob. Comput. 2022, 2022, 5067849.
  59. Guo, Y.; Jia, W.; Gou, F.; Dai, Z. A Medical Assistant Segmentation Method for MRI Images of Osteosarcoma Based on DecoupleSegNet. Int. J. Intell. Syst. 2022, 56, 1–17.
  60. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
  61. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
  62. Huang, L.; Xia, W.; Zhang, B.; Qiu, B.; Gao, X. MSFCN-multiple supervised fully convolutional networks for the osteosarcoma segmentation of CT images. Comput. Methods Programs Biomed. 2017, 143, 67–74.
  63. Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
  64. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
Figure 1. The overall structure of the method.
Figure 2. Eformer model structure diagram.
Figure 3. Schematic diagram of the Sobel filter.
Figure 4. LeWin transformer block structure diagram.
Figure 5. The frequency distribution histogram.
Figure 6. DFANet structure diagram.
Figure 7. Schematic diagram of deep feature aggregation.
Figure 8. Denoising and tumor localization and enhancement renderings.
Figure 9. Comparison of segmentation effects before and after preprocessing.
Figure 10. Comparison of segmentation effects of different algorithms.
Figure 11. Comparison of evaluation indicators of various algorithms.
Figure 12. The IOU value and the amount of training parameters of different models.
Figure 13. Accuracy values of different models under corresponding epochs.
Figure 14. Precision values of different models under corresponding epochs.
Figure 15. F1-score of different models under certain epochs.
Figure 16. DSC values of different models under certain epochs.
Table 1. Some symbols and their meanings.

Symbol | Paraphrase
$P_n$ | output of the W-MSA module
$Q_n$ | output of the LeFF module
$t$ | MRI image of osteosarcoma with noise
$c$ | clean MRI image
$r$ | residual noise
$E$ | background region in MRI images
$F$ | tumor area in the image
Table 2. Patient information providing experimental data.

Characteristics | Category | Total | Training Set | Test Set
Age | <15 | 48 (23.5%) | 38 (23.2%) | 10 (25.0%)
Age | 15–25 | 131 (64.2%) | 107 (65.2%) | 24 (60.0%)
Age | >25 | 25 (12.3%) | 19 (11.6%) | 6 (15.0%)
Sex | Female | 92 (45.1%) | 69 (42.1%) | 23 (57.5%)
Sex | Male | 112 (54.9%) | 95 (57.9%) | 17 (42.5%)
Marital status | Married | 32 (15.7%) | 19 (11.6%) | 13 (32.5%)
Marital status | Unmarried | 172 (84.3%) | 145 (88.4%) | 27 (67.5%)
SES | Low SES | 78 (38.2%) | 66 (40.2%) | 12 (30.0%)
SES | High SES | 126 (61.8%) | 98 (59.8%) | 28 (70.0%)
Surgery | Yes | 181 (88.8%) | 146 (89.0%) | 35 (87.5%)
Surgery | No | 23 (11.2%) | 18 (11.0%) | 5 (12.5%)
Grade | Low grade | 41 (20.1%) | 15 (9.1%) | 26 (65.0%)
Grade | High grade | 163 (79.9%) | 149 (90.9%) | 14 (35.0%)
Location | Axial | 29 (14.2%) | 21 (12.8%) | 8 (20.0%)
Location | Extremity | 138 (67.7%) | 109 (66.5%) | 29 (72.5%)
Location | Other | 37 (18.1%) | 34 (20.7%) | 3 (7.5%)
Table 3. Confusion matrix.

 | Predicted: NO | Predicted: YES
Actual: NO | TN | FP
Actual: YES | FN | TP
Table 4. Comparison of osteosarcoma segmentation performance of different algorithms.

Model | Acc | Pre | Re | F1 | IOU | DSC | Params
FCN-8s | 0.993 | 0.941 | 0.873 | 0.901 | 0.831 | 0.876 | 134.3 M
FCN-16s | 0.989 | 0.922 | 0.882 | 0.900 | 0.824 | 0.859 | 134.3 M
PSPNet | 0.975 | 0.856 | 0.888 | 0.871 | 0.771 | 0.870 | 46.70 M
MSRN | 0.988 | 0.893 | 0.945 | 0.917 | 0.853 | 0.886 | 23.38 M
MSFCN | 0.991 | 0.881 | 0.936 | 0.906 | 0.841 | 0.874 | 14.27 M
FPN | 0.989 | 0.914 | 0.924 | 0.919 | 0.852 | 0.888 | 48.20 M
UNet | 0.990 | 0.922 | 0.924 | 0.923 | 0.868 | 0.893 | 17.26 M
Our (DFANet) | 0.991 | 0.943 | 0.952 | 0.950 | 0.903 | 0.949 | 11.25 M
Our (Eformer + DFANet) | 0.995 | 0.959 | 0.955 | 0.961 | 0.928 | 0.964 | 19.97 M
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, L.; Yu, L.; Zhu, J.; Tang, H.; Gou, F.; Wu, J. Auxiliary Segmentation Method of Osteosarcoma in MRI Images Based on Denoising and Local Enhancement. Healthcare 2022, 10, 1468. https://doi.org/10.3390/healthcare10081468

