Article

Dense Multi-Scale Graph Convolutional Network for Knee Joint Cartilage Segmentation

by Christos Chadoulos 1, Dimitrios Tsaopoulos 2, Andreas Symeonidis 1, Serafeim Moustakidis 1 and John Theocharis 1,*

1 Department of Electrical & Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
2 Institute for Bio-Economy and Agri-Technology, Centre for Research and Technology—Hellas, 38333 Volos, Greece
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(3), 278; https://doi.org/10.3390/bioengineering11030278
Submission received: 14 February 2024 / Revised: 7 March 2024 / Accepted: 11 March 2024 / Published: 14 March 2024
(This article belongs to the Special Issue Machine Learning Technology in Biomedical Engineering)

Abstract:
In this paper, we propose a dense multi-scale adaptive graph convolutional network (DMA-GCN) method for the automatic segmentation of knee joint cartilage from MR images. Under the multi-atlas setting, the suggested approach exhibits several novelties, as described in the following. First, our models integrate local-level and global-level learning simultaneously. The local learning task aggregates spatial contextual information from aligned spatial neighborhoods of nodes at multiple scales, while global learning explores pairwise affinities between nodes located globally, at different positions in the image. We propose two different building-model structures, whereby the local and global convolutional units are combined in an alternating or a sequential manner. Second, based on the previous models, we develop the DMA-GCN network by utilizing a densely connected architecture with residual skip connections. This is a deeper GCN structure, expanded over different block layers, and is thus capable of providing more expressive node feature representations. Third, all units pertaining to the overall network are equipped with their individual adaptive graph learning mechanism, which allows the graph structures to be automatically learned during training. The proposed cartilage segmentation method is evaluated on the entire publicly available Osteoarthritis Initiative (OAI) cohort. To this end, we have devised a thorough experimental setup, with the goal of investigating the effect of several factors of our approach on the classification rates. Furthermore, we present exhaustive comparative results, considering traditional existing methods, six deep learning segmentation methods, and seven graph-based convolution methods, including the currently most representative models from this field. The obtained results demonstrate that the DMA-GCN outperforms all competing methods across all evaluation measures, providing $DSC = 95.71\%$ and $DSC = 94.02\%$ for the segmentation of femoral and tibial cartilage, respectively.

1. Introduction

Osteoarthritis (OA) is one of the most prevalent joint diseases worldwide, causing pain and mobility issues, reducing the ability to lead an independent lifestyle, and ultimately decreasing the quality of life in patients. It primarily manifests among populations of advanced age, with an estimated 10% of people over the age of 55 dealing with this condition. That percentage is likely to increase noticeably in the coming years, especially in the developing parts of the world where life expectancy is steadily on the rise [1].
Among the available imaging modalities, magnetic resonance imaging (MRI) constitutes a valuable tool in the characterization of the knee joint, providing a robust quantitative and qualitative analysis for detecting anatomical changes and defects in the cartilage tissue. Unfortunately, manual delineations performed by human experts are resource- and time-consuming, while also suffering from unacceptable levels of inter- and intrarater variability. Thus, there is an increasing demand for accurate and time-efficient fully automated methods for achieving reliable segmentation results.
During the past decades, a considerable amount of research has been conducted to achieve the above goals. However, the thin cartilage structure, as well as the great variability and intensity inhomogeneity of MR images, have posed significant challenges. Several methods have been proposed to address these issues, ranging from more traditional image processing ones, such as statistical shape models and active appearance models, to more automated ones employing classical machine learning and deep learning techniques. A comprehensive review of such automated methods for knee articular segmentation can be found in [2].

1.1. Statistical Shape Methods

A wide variety of methods that fall under the statistical shape model (SSM) and active appearance model (AAM) family have been extensively employed in knee joint segmentation applications in the past. Since the specific shape of the cartilage structure is quite distinct and characteristic, these methods employ this feature as a stepping stone towards complete delineation for the whole knee joint [3]. Additionally, SSMs have been successfully utilized as shape regularizers within more complex segmentation pipelines, mainly as a final postprocessing step [4]. While conceptually simple, these methods are highly sensitive to the initial landmark selection process.

1.2. Machine Learning Methods

Under the classical machine learning setting, knee cartilage segmentation is cast as a supervised classification task, estimating the label of each voxel from a set of handcrafted or automatically extracted features from the available set of images. Typical examples of such approaches can be found in [5,6]. These methods are conceptually simple but usually offer mediocre results, due to their poor generalization capabilities and the utilization of fixed feature descriptors that may not be well suited to efficiently capture the data variability.

1.3. Multi-Atlas Patch-Based Segmentation Methods

Multi-atlas patch-based methods have long been a staple in medical imaging applications [7,8]. Utilizing an atlas library $\mathcal{A} = \{A_i, L_i\}_{i=1}^{n_A}$ comprising $n_A$ magnetic resonance images $(A_i)$ and their corresponding label maps $L_i$, these methods operate on a single target image $\mathcal{T}$ at a time, annotating it by propagating voxel labels from the atlas library $\mathcal{A}$. The implicit assumption in this framework is that the target image $\mathcal{T}$ and the images comprising the atlas library $\mathcal{A}$ reside in a common coordinate space. This assumption is enforced by registering all atlases $A_i \in \mathcal{A}$, along with their corresponding label maps $L_i$, to the target image space via an affine or deformable transformation [9].
Multi-atlas patch-based methods usually consist of the following steps: For all voxels $x \in \mathcal{T}$, a search volume $\mathcal{N}(x)$ of size $N_s = (n \times n \times n)$ centered around $x$ is formed, and every corresponding voxel $y \in \mathcal{N}(x)$ in the spatially adjacent locations in the registered atlases yields a patch library $\mathcal{PL} = \{p_{A_i}(y), \, y \in \mathcal{N}(x)\}$ for all atlases $i = 1, \ldots, n_A$. An optimization problem such as sparse coding (SC) is then used to reconstruct the target patch as a linear combination of its corresponding atlas library. Established methods of this category of segmentation algorithms are presented in [7,8]. Despite being capable of achieving appreciable results, these methods do not scale well to large datasets, due to the intense computational demands of constructing a patch library and solving an optimization problem for each voxel of every target image.

1.4. Deep Learning Methods

The recent resurgence of deep learning has had a great impact on medical imaging applications, with an increasing number of works reporting the use of deep architectures in various applications pertaining to that field. Initially restricted to 2D models due to the large computational load imposed by the 3D structure of magnetic resonance images, the recent advancements in processing power have allowed for a wide variety of fully 3D models to be proposed, offering markedly better performance with respect to the more traditional methods [10,11]. In our study, we compare our proposed method against a series of representative deep architectures, with applications ranging from semantic segmentation to point-cloud classification and medical image segmentation. In particular, we consider the SegNet [12], DenseVoxNet [13], VoxResNet [14], PointNet [15], CAN3D [13], and KCB-Net [16] architectures (Section 9.4.2).

1.5. Graph Convolutional Neural Networks

Recently, intensive research has been conducted in the field of graph convolutional networks, owing to their efficiency in handling non-Euclidean data [17]. These models can be distinguished into two general categories, namely, the spectral-based [18,19] and the spatial-based methods [20,21,22,23,24]. The spectral-based networks rely on the graph signal processing principles, utilizing filters to define the node convolutions. The ChebNet in [18] approximates the convolutional filters by Chebyshev polynomials of the diagonal matrix of eigenvalues while the GCN model in [19] performs a first-order approximation of ChebNet.
Spatial-based graph convolutions, on the other hand, update a central node’s representation by aggregating the representations of its neighboring nodes. The message-passing neural network (MPNN) [20] considers graph convolutions as a message-passing process, whereby information is traversed between nodes via the graph edges. GraphSAGE [25] applies node sampling to obtain a fixed number of neighbors for each node’s aggregation. A graph attention network (GAT) [24] assumes that the contribution of the neighboring nodes to the central one is determined according to a relative weight of importance, a task achieved via a shared attention mechanism across nodes with learnable parameters.
During the last years, GCNs have found extensive use in a diverse range of applications, including citation and social networks [19,26], graph-to-sequence learning tasks in natural language processing [27], molecular/compound graphs [28], and action recognition [29]. Considerable research has been conducted on the classification of remotely sensed hyperspectral images [30,31,32], mainly due to the capabilities of GCNs to capture both the spatial contextual information of pixels, as well as the long-range relationships of distant pixels in the image. Another domain of application is the forecasting of traffic features in smart transportation networks [33,34]. To capture the varying spatio-temporal relationships between nodes, integrated models are developed in these works, which combine graph-based spatial convolutions with temporal convolutions.
Finally, to confront the gradient vanishing effect faced by traditional graph-based models, deep GCN networks have recently been suggested [35,36]. Particularly, in [35], a densely connected graph convolutional network (DCGCN) is proposed for graph-to-sequence learning, which can capture both local and nonlocal features. In addition, Ref. [36] presents a densely connected block of GCN layers, which is used to generate effective shape descriptors from 3D meshes of images.

1.6. Outline of Proposed Method

The existing patch-based methods exhibit several drawbacks which can potentially degrade their segmentation performance. First, for each target voxel, these methods construct a local patch library at a specific spatial scale, comprising neighboring voxels from atlas images. Then, classifiers are developed by considering pairwise similarities between voxels in that local region. This suggests that target labeling is accomplished by relying solely on local learning while disregarding the global contextual information among voxels. Hence, long-range relationships among distant voxels in the region of interest are ignored, although these voxels may belong to the same class but with a different textural appearance. Second, previous methods in the field resort to inductive learning to produce voxel segmentation, which implies that the features of the unlabeled target voxels are not leveraged during the labeling process. Finally, some recent segmentation methods employ graph-based approaches allowing a more effective description of voxel pairwise affinities via sparse code reconstructions [8,37]. Despite the better data representation, the target voxel labeling is achieved using linear aggregation rules for transferring the atlas voxels' labels, such as the traditional label propagation (LP) mechanism via the graph edges. Such first-order methods may fail to adequately capture the full scope of dependencies among the voxel representations. The labeling of each voxel proceeds by aggregating spectral information strictly from its immediate neighborhood, failing to exploit long-range dependencies with potentially more similar patches in distant regions of the image, thus ultimately yielding suboptimal segmentation results.
To properly address the above shortcomings, in this paper, we present a novel method for the automatic segmentation of knee articular cartilage, based on recent advances in the field of graph-based neural networks. More concretely, we propose the dense multi-scale adaptive graph convolutional network (DMA-GCN) method, which constructively integrates local spatial-level learning and global-level contextual learning concurrently. Our goal is to generate, via automatic convolutional learning, expressive node representations by merging pairwise importance at multiple spatial scales with long-range dependencies among nodes, for enhanced volume segmentation. We approach the segmentation task as a multi-class classification problem with five classes: background: 0; femoral bone: 1; femoral cartilage: 2; tibial bone: 3; tibial cartilage: 4. Recognizing the more crucial role of the cartilage structure in the assessment of the knee joint, and considering the increased difficulty of its automatic segmentation compared with that of the bones, our efforts are primarily devoted to that issue. Figure 1 depicts a schematic framework of the proposed approach. The main properties and innovations of the DMA-GCN model are described as follows:
Multi-atlas setting: Our scheme is tailored to the multi-atlas approach, whereby label information from the (labeled) atlas images is transferred to segment the (unlabeled) target image. To this end, at the preliminary stage, images are aligned using a cost-effective affine registration. Subsequently, for each target image $\mathcal{T}$, we generate its corresponding atlas library $\{\mathcal{A}, \mathcal{L}\}$ according to a similarity criterion (Figure 1a).
Graph construction: This part refers to the way in which images are represented in terms of nodes and the organization of node data to construct the overall graph (Figure 1b). Here, a graph node corresponds to a generic patch of size $5 \times 5 \times 5$ around a central voxel, while the node feature vector is provided by a 3D-HOG feature descriptor. Accordingly, the image is represented as a collection of spatially stratified nodes, adequately covering all classes across the region of interest. Following the multi-atlas setting, we construct sequences of aligned data, comprising target nodes and those of the atlases at spatially correspondent locations. Given the node sequences, we further generate the sequence libraries, which are composed of neighboring nodes at various spatial scales. The collection of all node libraries forms the overall graph structure, whereby both local (spatially neighboring) and global (spatially distant) node relationships are incorporated.
Semi-supervised learning (SSL): Following the SSL scenario, the input graph data comprise both labeled nodes from the atlas library and unlabeled ones from the target image to be segmented. In that respect, contrary to some existing methods, the features of unlabeled data are leveraged via learning to compute the node embeddings and label the target nodes.
Local–global learning: As can be seen from Figure 1c, graph convolutions over the layers proceed along two directions, namely, the local spatial level and the global level, respectively. The local spatial branch includes the so-called local convolutional (Lconv) units, which operate on the subgraph of aligned neighborhoods of nodes (sequence libraries). The node embeddings generated by these units incorporate the contextual information between nodes at a local spatial level. To further improve local search, we integrate local convolutions at multiple scales, so that the local context around nodes is captured more efficiently. On the other hand, the global branch includes global convolutional (Gconv) units. These units provide the global node embeddings by taking into consideration the pairwise affinities of distant nodes distributed over the entire region of interest of the cartilage volume. The final node representations are then obtained by aggregating the embeddings computed at the local spatial and the global levels, respectively.
Convolutional building models: An important issue is how the local and global hidden representations of nodes are combined across the convolution layers. In this context, we propose two different structures: the cross-talk building model (CT-BM) and the sequential building model (SEQ-BM). Both models comprise four convolutional units overall, specifically, two Lconv and two Gconv units, undertaking local and global convolutions, respectively. The CT-BM (Figure 1c) performs intertwined local–global learning, with skip connections and aggregators. The links indicate the cross-talks between the two paths. The SEQ-BM, on the other hand, adopts a sequential learning scheme. In particular, local spatial learning is completed first, followed by the respective convolutions at the global level.
Adaptive graph learning (AGL): Considering fixed graphs with predetermined adjacency weights among nodes can degrade the segmentation results. To confront this drawback, every Lconv and Gconv unit is equipped here with an AGL mechanism, which allows us to automatically learn the proper graph structure at each layer. At the local spatial level, AGL adaptively designates the connectivity relationships between nodes via learnable attention coefficients. Hence, Lconv can concentrate on and aggregate features from relevant nodes in the local search region. Further, at the global level, we propose a different AGL scheme for Gconv units, whereby graph edges are learned from the input features of each layer.
Densely connected GCN: The proposed CT-BM and SEQ-BM can be utilized as standalone models to undertake the graph convolution task. Nevertheless, their depth is confined to two local–global layers, since an attempt to deepen the networks is hindered by the gradient-vanishing effect. To circumvent this deficiency, we finally propose a densely connected convolutional network, the DMA-GCN model. The DMA-GCN considers CT-BM or SEQ-BM as the building block of the deep structure. It exhibits a deep architecture with skip connections whereby each layer in the block receives feature maps from all previous layers and transmits its outputs to all subsequent layers. Overall, the DMA-GCN shares some salient qualities, such as a deep structure with an enhanced performance rate and better information flow, local–global level convolutions, and adaptive graph learning.
In summary, the main contributions of this paper are described as follows.
  • A novel multi-atlas approach is presented for knee cartilage segmentation from MRI images based on graph convolutional networks which operates under the semi-supervised learning paradigm.
  • With the aim to generate expressive node representations, we propose a new learning scheme that integrates graph information at both local and global levels concurrently. The local branch exploits the relevant spatial information of neighboring nodes at multiple scales, while the global branch incorporates global contextual relationships among distant nodes.
  • We propose two convolutional building models, the CT-BM and SEQ-BM. In the CT-BM, the local and global learning tasks are intertwined along the layer convolutions, while the SEQ-BM follows a sequential mode.
  • Both local and global convolutional units, at each layer, are equipped with suitable attention mechanisms, which allows the network to automatically learn the graph connective relationships among nodes during training.
  • Using the proposed CT-BM and SEQ-BM as block units, we finally present a novel densely connected model, the DMA-GCN. The network exhibits a deeper structure which leads to more enhanced segmentation results, while at the same time, it shares all salient properties of our approach.
  • We have devised a thorough experimental setup to investigate the capabilities of the suggested segmentation framework. In this setting, we examine different test cases and provide an extensive comparative analysis with other segmentation methods.
The remainder of this paper is organized as follows. Section 2 reviews some representative forms of graph convolutional networks related to our work and involved in the experimental analysis. Section 3 presents the image preprocessing steps and the atlas selection process. In Section 4, we discuss the node feature descriptor, as well as the graph construction of the images. Section 5 elaborates on the proposed local and global convolutional units, along with their attention mechanisms. Section 6 describes the suggested convolutional building blocks, while Section 7 presents our densely connected network. Section 8 discusses the transductive vs. inductive learning and the full-batch vs. mini-batch learning in our approach. In Section 9 and Section 10, we provide the experimental setup and respective comparative results of the proposed methodology, while Section 11 concludes this study.

2. Related Work

In this section, we review some representative models in the field of graph convolutional networks that are related to our work and are also included in the experiments.
Definition 1.
A graph $\mathcal{G}$ is defined as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, X)$, where $\mathcal{V}$ denotes the set of $N$ nodes $v_i \in \mathcal{V}$, and $\mathcal{E}$ is the set of edges connecting the nodes, $(v_i, v_j) \in \mathcal{E}$. $X \in \mathbb{R}^{N \times F}$ is a matrix subsuming the node feature descriptors $x_i \in \mathbb{R}^{F}, i = 1, \ldots, N$, with $F$ denoting the feature vector dimensionality. The graph is associated with an adjacency matrix $A \in \mathbb{R}^{N \times N}$ (binary or weighted), which includes the connection links between nodes. A larger entry $A_{ij} > 0$ suggests the existence of a strong relationship between nodes $(v_i, v_j)$, while $A_{ij} = 0$ signifies the lack of connectivity. The graph Laplacian matrix $L \in \mathbb{R}^{N \times N}$ is defined as $L = D - A$, where $D$ is the diagonal degree matrix with $D_{ii} = \sum_{j=1}^{N} A_{ij}, i = 1, \ldots, N$. Finally, the graph adjacency matrix with added self-connections is denoted by $\tilde{A} = A + I_N$, with the corresponding degree matrix given by $\tilde{D}_{ii} = \sum_{j=1}^{N} \tilde{A}_{ij}$.
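For concreteness, the quantities in Definition 1 can be written out in a few lines of NumPy; the graph below is a toy example with illustrative values:

```python
import numpy as np

# Toy graph: N = 4 nodes, F = 3 features per node (illustrative values).
N, F = 4, 3
X = np.random.rand(N, F)                  # node feature matrix X in R^{N x F}
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float) # symmetric binary adjacency

D = np.diag(A.sum(axis=1))                # degree matrix, D_ii = sum_j A_ij
L = D - A                                 # graph Laplacian, L = D - A

A_tilde = A + np.eye(N)                   # adjacency with self-connections
D_tilde = np.diag(A_tilde.sum(axis=1))    # corresponding degree matrix

# Symmetrically normalized adjacency, as used in the GCN propagation rule:
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D_tilde)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
```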

2.1. Graph Convolutional Network (GCN)

The GCN proposed in [19] is a spectral convolutional model. It tackles the node classification task under the semi-supervised framework, i.e., where labels are available only for a portion of the nodes in the graph. Under this setting, learning is achieved by enforcing a graph Laplacian regularization term with the aim of smoothing the node labels:
$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{reg}$  (1)
$\mathcal{L}_{reg} = \sum_{i,j} A_{ij} \left\| f(x_i) - f(x_j) \right\|^2 = f(X)^{\top} L f(X)$  (2)
where $\mathcal{L}_0$ represents the supervised loss measured on the labeled nodes of the graph, $f(X, A)$ is a differentiable function implemented by a graph neural network, $\lambda$ is a regularization parameter balancing the supervised loss against the overall smoothness of the graph, $X$ is the node feature matrix, and $L$ is the graph Laplacian.
In a standard multilayer graph-based neural network framework, information flows across the nodes by applying the following layerwise propagation rule:
$H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)$  (3)
where $\sigma(\cdot)$ denotes the $LeakyReLU(\cdot)$ activation function, $W^{(l)}$ is a layer-specific trainable weight matrix, and $H^{(l)}$ is the matrix of activations in the $l$th layer, with $H^{(0)} = X$. The authors in [19] show that the propagation rule in Equation (3) provides a first-order approximation of localized spectral filters on graphs. Most importantly, we can construct multilayered graph convolutional networks by stacking several convolutional layers of the form in Equation (3). For instance, a two-layered GCN can be represented by
$Z = f(X, A) = \mathrm{softmax}\left( \hat{A} \, \mathrm{ReLU}(\hat{A} X W^{(0)}) \, W^{(1)} \right)$  (4)
where $Z$ is the network's output, $\mathrm{softmax}(\cdot)$ is the output layer activation function for multi-class problems, and $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix. The weight matrices $W^{(0)}$ and $W^{(1)}$ are trained using some variant of gradient descent with the aid of a loss function.
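As an illustration, a minimal NumPy sketch of the two-layer GCN of Equation (4) is given below; the hidden width, the random weights, and the identity adjacency are placeholder choices:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_two_layer(A_hat, X, W0, W1):
    """Z = softmax(A_hat · ReLU(A_hat · X · W0) · W1) -- Equation (4)."""
    H1 = relu(A_hat @ X @ W0)        # first graph convolution layer
    return softmax(A_hat @ H1 @ W1)  # second layer + output activation

# Illustrative dimensions: F input features, 16 hidden units, C classes.
N, F, H, C = 4, 3, 16, 5
rng = np.random.default_rng(0)
A_hat = np.eye(N)                    # placeholder normalized adjacency
X = rng.random((N, F))
W0, W1 = rng.random((F, H)), rng.random((H, C))
Z = gcn_two_layer(A_hat, X, W0, W1)  # (N, C) class distributions per node
```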

2.2. Graph Attention Network (GAT)

A salient component in GATs [24] is an attention mechanism incorporated in the aggregation of the graph attention layers (GALs), with the aim to automatically capture valuable relationships between neighboring nodes. Let $H = \{h_1, \ldots, h_N\}, h_i \in \mathbb{R}^{D}$ and $\tilde{H} = \{\tilde{h}_1, \ldots, \tilde{h}_N\}, \tilde{h}_i \in \mathbb{R}^{\tilde{D}}$ denote the inputs and outputs of a GAL, where $N$ is the number of nodes, while $D$ and $\tilde{D}$ are the corresponding dimensionalities of the node feature vectors. The convolution process entails three distinct issues: the shared node embeddings, the attention mechanism, and the update of node representations. As an initial step, a learnable transformation parameterized by the weight matrix $W \in \mathbb{R}^{\tilde{D} \times D}$ is applied on the nodes, with the goal of producing expressive feature representations. Next, for every node pair, a shared attention mechanism is performed on the transformed features,
$g_{ij} = \alpha^{\top} \cdot \left[ W h_i \,\|\, W h_j \right]$  (5)
where $g_{ij}$ signifies the importance between nodes $h_i$ and $h_j$, $\alpha$ is a learnable weight vector, and $\|$ denotes the concatenation operator. To make the above mechanism effective, the computation of the attention coefficients is confined between each node $i$ and its neighboring nodes $j$, $j \in \mathcal{N}_i$. In the GAT framework, the attention mechanism is implemented by a single-layer feed-forward neural network, parameterized by $LeakyReLU$ nonlinearities, which provides the normalized attention coefficients:
$\alpha_{ij} = \dfrac{\exp\left( LeakyReLU\left( \alpha^{\top} \cdot [W h_i \,\|\, W h_j] \right) \right)}{\sum_{k \in \mathcal{N}_i} \exp\left( LeakyReLU\left( \alpha^{\top} \cdot [W h_i \,\|\, W h_k] \right) \right)}$  (6)
Given the attention coefficients, the node feature representations at the output of the GAT are updated via a linear aggregation of the neighboring nodes' features:
$\tilde{h}_i = \sigma\left( \sum_{j \in \mathcal{N}_i} \alpha_{ij} W h_j \right)$  (7)
To stabilize the learning procedure, the previous approach is extended in a GAT by considering multiple attention heads. In that case, the node features are computed by
$\tilde{h}_i = \big\Vert_{k=1}^{K} \, \sigma\left( \sum_{j \in \mathcal{N}_i} \alpha_{ij}^{(k)} W^{(k)} h_j \right)$  (8)
where $K$ is the number of independent attention heads applied, while $\alpha_{ij}^{(k)}$ and $W^{(k)}$ denote the normalized attention coefficients associated with the $k$th attention head and its corresponding embedding matrix, respectively. In this work, we exploit the principles of the GAT-based attention mechanism in the proposed local convolutional units, with the goal to aggregate valuable contextual information from local neighborhoods at multiple search scales (Section 5.1).
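The following sketch implements a single attention head of Equations (5)-(7) in NumPy; the added self-loops and the use of tanh for the output nonlinearity are our own illustrative choices:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def gat_layer(H, A, W, a):
    """Single-head graph attention layer (Equations (5)-(7)).
    H: (N, D) node features; A: (N, N) binary adjacency defining the
    neighborhoods; W: (D, Dp) shared embedding; a: (2*Dp,) attention vector."""
    A = A + np.eye(len(A))      # self-loops keep the row softmax well defined
    WH = H @ W                  # shared node embeddings W h_i
    Dp = WH.shape[1]
    # a^T [W h_i || W h_j] decomposes into a "source" and a "target" part.
    g = leaky_relu((WH @ a[:Dp])[:, None] + (WH @ a[Dp:])[None, :])
    g = np.where(A > 0, g, -np.inf)          # attend to neighbors only
    e = np.exp(g - g.max(axis=1, keepdims=True))
    alpha = e / e.sum(axis=1, keepdims=True) # normalized coefficients, Eq. (6)
    return np.tanh(alpha @ WH)               # aggregation, Eq. (7)
```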

2.3. GraphSAGE

The GraphSAGE network in [25] tackles the inductive learning problem, where labels must be generated for previously unseen nodes, or even entirely new subgraphs. GraphSAGE aims to learn a set of aggregator functions $AGGR_k, k = 1, \ldots, K$, which are used to aggregate information from each node's local neighborhood. Node aggregation is carried out at multiple spatial scales (hops). Among the different schemes proposed in [25], in our experiments, we consider the max-pooling aggregator, where each neighbor's vector is independently supplied to a fully connected neural network:
$h_{\mathcal{N}(v)}^{k} = AGGR_k^{pool} = \max\left( \left\{ \sigma\left( W_{pool} h_{u_i}^{k} + b \right), \; u_i \in \mathcal{N}(v) \right\} \right)$  (9)
where $\max(\cdot)$ denotes the element-wise max operator, $W_{pool}$ is the weight matrix of learnable parameters, $b$ is the bias vector, and $\sigma(\cdot)$ is a nonlinear activation function. Further, $h_{\mathcal{N}(v)}^{k}$ denotes the result obtained after a max-pooling aggregation on the neighboring nodes of node $v$. GraphSAGE then concatenates the current node's representation $h_v^{k-1}$ with the aggregated neighborhood feature vector $h_{\mathcal{N}(v)}^{k}$ to compute, via a fully connected layer, the updated node feature representations:
$h_v^{k} = \sigma\left( W^{k} \cdot \left[ h_v^{k-1} \,\|\, h_{\mathcal{N}(v)}^{k} \right] \right)$  (10)
where $W^{k}$ is a weight matrix associated with the aggregator $AGGR_k^{pool}$.
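A compact sketch of the max-pooling update of Equations (9) and (10); the ReLU nonlinearity is an illustrative stand-in for $\sigma(\cdot)$:

```python
import numpy as np

def sigma(x):
    return np.maximum(0.0, x)  # ReLU stands in for the nonlinearity sigma

def sage_maxpool_update(h_v, h_neighbors, W_pool, b, W_k):
    """GraphSAGE max-pooling update (Equations (9)-(10)).
    h_v: (D,) current node representation;
    h_neighbors: (n, D) representations of the sampled neighbors;
    W_pool: (D, Dp), b: (Dp,)  -- the pooling MLP;
    W_k: (D + Dp, Do)          -- the update weight matrix."""
    # Each neighbor vector goes through a fully connected layer, then an
    # element-wise max over the neighborhood -- Equation (9).
    h_N = np.max(sigma(h_neighbors @ W_pool + b), axis=0)
    # Concatenate self and pooled neighborhood, then embed -- Equation (10).
    return sigma(np.concatenate([h_v, h_N]) @ W_k)
```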

2.4. GraphSAINT

GraphSAINT [21] differs from the previously examined architectures in that, instead of building a full GCN on all the available training data, it samples the training graph itself, creating subsets of the original graph and building and training the associated GCNs on those subgraphs. For each mini-batch sampled in this iterative process, a subgraph $\mathcal{G}_s = (\mathcal{V}_s, \mathcal{E}_s)$ (where $|\mathcal{V}_s| \ll |\mathcal{V}|$) is used to construct a GCN. Forward and backward propagation is performed, updating the node representations and the participating edge weights. An initial preprocessing step is required for the smooth operation of the process, whereby an appropriate sampling probability must be assigned to each node and edge of the initial graph.

3. Materials

In this section, we present the dataset used in this study, the image preprocessing steps, and finally, the construction of the atlas library.

3.1. Image Dataset

The MR images used in this study comprise the entirety of the publicly available baseline Osteoarthritis Initiative (OAI) repository for which segmentation masks are available, consisting of a total of 507 subjects. The specific MRI modality utilized across all the experiments corresponds to the sagittal 3D dual-echo steady-state (3D-DESS) sequence with water excitation, with an image size of $384 \times 384 \times 384$ voxels and a voxel size of $0.36 \times 0.36 \times 0.70$ mm. The respective segmentation masks serving as the ground truth are provided by the publicly available repository assembled by [38], including labels for the following knee joint structures (classes): background tissue, femoral bone (FB), femoral cartilage (FC), tibial bone (TB), and tibial cartilage (TC). Figure 2 showcases a typical knee MRI in the three standard orthogonal planes (sagittal, coronal, axial).

3.2. Image Preprocessing

The primary source of difficulties in automated cartilage segmentation stems from the similar texture and intensity profile of articular cartilage and background tissues, as they are depicted in most MRI modalities, a problem further accentuated by the usually high intersubject variability present in the imaging data. To this end, the images were preprocessed by applying the following steps:
  • Curvature flow filtering: A denoising curvature flow filter [39] was applied, with the aim of smoothing the homogeneous image regions, while simultaneously leaving the surface boundaries intact.
  • Inhomogeneity correction: N3 intensity nonuniform bias field correction [40] was performed on all images, dealing with the issue of intrasubject variability within similar classes among subjects.
  • Intensity standardization: MRI histograms were mapped to a common template, as described in [41], ensuring that all associated structures across the subjects shared a similar intensity profile.
  • Nonlocal-means filtering: A final filtering process smoothed out any leftover artefacts and further reduced noise. The method presented in [42] offers a robust performance and is widely employed in similar medical imaging applications. Finally, the intensity range of all images was rescaled to  [ 0 , 100 ] .
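The pipeline above can be approximated with off-the-shelf SimpleITK filters, as sketched below; note that N4 bias correction and a Gaussian smoothing pass stand in for the N3 correction and the nonlocal-means filter used in the paper, and all parameter values are illustrative rather than the paper's settings:

```python
import SimpleITK as sitk

def preprocess(image: sitk.Image, template: sitk.Image) -> sitk.Image:
    """Sketch of the four preprocessing steps with SimpleITK filters."""
    img = sitk.Cast(image, sitk.sitkFloat32)
    tmpl = sitk.Cast(template, sitk.sitkFloat32)
    # 1. Curvature flow filtering: smooth homogeneous regions,
    #    leave surface boundaries intact.
    img = sitk.CurvatureFlow(img, timeStep=0.0625, numberOfIterations=5)
    # 2. Bias field (inhomogeneity) correction; N4 stands in here for the
    #    N3 correction used in the paper.
    mask = sitk.OtsuThreshold(img, 0, 1, 200)
    img = sitk.N4BiasFieldCorrection(img, mask)
    # 3. Intensity standardization: map the histogram onto a common template.
    img = sitk.HistogramMatching(img, tmpl,
                                 numberOfHistogramLevels=256,
                                 numberOfMatchPoints=15)
    # 4. Final denoising pass (Gaussian stand-in for nonlocal means),
    #    then rescale intensities to [0, 100].
    img = sitk.DiscreteGaussian(img, variance=0.5)
    return sitk.RescaleIntensity(img, outputMinimum=0.0, outputMaximum=100.0)
```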

3.3. Atlas Selection and ROI Extraction

The construction of an atlas library for each target image $\mathcal{T}$ to be segmented necessitates the registration of all atlases $\{A_i\}_{i=1}^{n_A}$ to the particular target image. An affine transformation was employed, registering all atlas images in the target image domain space, accounting for deformations of a linear nature, such as rotations, translations, shearing, scaling, etc. The same transformation was also applied to the corresponding label map $L_i$ of each atlas, resulting in the atlas library $\{A_i^{\mathcal{T}}, L_i^{\mathcal{T}}\}_{i=1}^{n_A}$ registered to $\mathcal{T}$.
Considering the fact that the cartilage volume accounts for a very small percentage of the overall image volume, a region of interest (ROI) was defined for every target image, covering the entire cartilage structure and its surrounding area. A presegmentation mask was constructed by passing the registered atlas cartilage masks through a majority voting (MV) filter and then expanding the result with a binary morphological dilation filter, yielding the ROI for the target image. This region corresponded to the sampling volume for the target image $\mathcal{T}$ and its corresponding atlas library $\{A_i, L_i\}_{i=1}^{n_A}$. This process guaranteed that the selected ROI enclosed the totality of the cartilage tissue, both in the target $\mathcal{T}$ and in the corresponding atlas library.
Finally, to simultaneously reduce the computational load and increase the spatial correspondence between the target and atlas images, we included a final atlas selection step. Measuring the spatial misalignment in the ROI of every pair $\{\mathcal{T}, A_i\}$ using the mean squared difference $(MSD_i^{ROI})$, we kept only the first $n_A$ atlases exhibiting the least disagreement in this metric [37].
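A minimal sketch of this MSD-based ranking, assuming the registered atlas ROIs are available as arrays of the same shape as the target ROI:

```python
import numpy as np

def select_atlases(target_roi, atlas_rois, n_A):
    """Keep the n_A registered atlases whose ROI best agrees with the target.
    target_roi: (X, Y, Z) intensity array; atlas_rois: list of arrays of the
    same shape; returns the indices of the selected atlases."""
    msd = [np.mean((target_roi - a) ** 2) for a in atlas_rois]  # MSD_i^ROI
    order = np.argsort(msd)                 # least disagreement first
    return [int(i) for i in order[:n_A]]
```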

4. Graph Constructions

In this section, we describe the node representation, the construction of aligned sequences of nodes, and the sequence libraries, which lead to the formation of the aligned image graphs used in the convolutions.

4.1. Node Representation

An important issue to properly address is how an image is transformed into a graph structure of nodes. In our setting, a node was described by a generic $5 \times 5 \times 5$ patch $p_i = p(x_i)$ surrounding a central voxel $x_i$. The image was then represented by a collection of nodes which were spatially distributed across the ROI volume.
Each node was described by a feature vector $x_i = f_{enc}(p_i) \in \mathbb{R}^{20}$, implemented via HOG descriptors [43], which aggregated the local information in the node patch. HOG descriptors constitute a staple feature descriptor in image processing and recognition. Here, we applied a modification suitable for operating on 3D data [44]. For each voxel $x_i$, we extracted an HOG feature description by computing the gradient magnitude and direction along the $x$-$y$-$z$ axes for each constituent voxel in the node patch. The resulting values were binned into a $(q = 20)$-dimensional feature vector, where each entry corresponded to a vertex of a regular icosahedron, with each bin representing the strength of the gradient along that particular direction.
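The following sketch illustrates such a 3D-HOG computation for one node patch; a Fibonacci-sphere set of 20 directions stands in here for the icosahedral directions, so the binning geometry is an approximation of the descriptor described above:

```python
import numpy as np

def hog3d_descriptor(patch, q=20):
    """Illustrative 3D-HOG for one node patch (e.g., 5x5x5 voxels):
    gradient magnitudes are binned over q near-uniform 3D directions."""
    # q near-uniform unit directions on the sphere (Fibonacci spiral).
    k = np.arange(q)
    phi = np.arccos(1.0 - 2.0 * (k + 0.5) / q)
    theta = np.pi * (1.0 + 5.0 ** 0.5) * k
    dirs = np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)               # (q, 3)
    # Per-voxel gradients along the x-y-z axes.
    gx, gy, gz = np.gradient(patch.astype(float))
    g = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)   # (n_voxels, 3)
    mag = np.linalg.norm(g, axis=1)
    # Assign each gradient to its closest direction; accumulate magnitudes.
    bins = np.argmax(g @ dirs.T, axis=1)
    hist = np.bincount(bins, weights=mag, minlength=q)
    return hist / (hist.sum() + 1e-12)   # normalized 20-dim node feature
```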
Finally, each node was associated with a class indicator vector $y_i = [y_{i,1}, \ldots, y_{i,C}] \in \mathbb{R}^{C}$, where $y_{i,c} = 1$ if voxel $x_i$ belongs to class $c$ and 0 otherwise.

4.2. Aligned Image Graphs

The underlying principle of the multi-atlas approach is that the target image $\mathcal{T}$ and the atlas library $A_i^{\mathcal{T}}$ are aligned via affine registration, thus sharing a common coordinate space. This allows the transfer of label information from the atlas images toward the target one by operating upon sequences of spatially correspondent voxels. Complying with the multi-atlas setting, we applied a two-stage sampling process with the aim to construct a sequence of aligned graphs, involving the target image and its respective atlases. This sequence contains the so-called root nodes, which are distinguished from the neighboring nodes introduced in the sequel.
  • Target graph construction: This step used a spatially stratified sampling method to generate an initial set of target voxels $X_r^{\mathcal{T}} \in \mathbb{R}^{n_T \times D}$, where $D$ denotes the feature dimensionality. To ensure a uniform spatial covering of all classes in the target ROI, we performed a spatial clustering step partitioning all contained voxels into $n_r^{\mathcal{T}}$ clusters. After interpolating the cluster centers to the nearest grid point, we obtained the global dataset $X_r^{\mathcal{T}} = \{x_r^{\mathcal{T}}(i), i = 1, \ldots, n_r^{\mathcal{T}}\}$, which defined a corresponding target graph of root nodes $\mathcal{G}_r^{\mathcal{T}}$. These target nodes served as reference points from which the aligned sequences were subsequently generated.
  • Sequences of aligned data: For each $x_r^{\mathcal{T}}(i) \in X_r^{\mathcal{T}}$, we defined a sequence of aligned nodes $S(i)$, containing the target node $x_r^{\mathcal{T}}(i)$ and its respective nodes from the atlas library, located at spatially correspondent positions (a sampling sketch is given after this list):
    $S(i) = \left\{ x_r^{\mathcal{T}}(i); \; x_r^{A_1}(i), \ldots, x_r^{A_{n_A}}(i) \right\}, \quad i = 1, \ldots, n_T$  (11)
    The entire global dataset of root nodes, containing all sampled target nodes and their associated atlas ones, was defined as the union of all those sequences:
    $X_r = \bigcup_{i=1}^{n_T} S(i) = \left\{ X_r^{\mathcal{T}}, X_r^{A} \right\}$  (12)
    where $X_r \in \mathbb{R}^{N_r \times D}$ contains a total number of $N_r = n_T \cdot (n_A + 1)$ root nodes, while $X_r^{\mathcal{T}}$ and $X_r^{A}$ denote the datasets of root nodes sampled from the target and the atlases, respectively. Accordingly, this led to a sequence of aligned graphs $\mathcal{G}_r = \{\mathcal{G}_r^{\mathcal{T}}; \mathcal{G}_r^{A_1}, \ldots, \mathcal{G}_r^{A_{n_A}}\}$, which is schematically shown in Figure 3. In this figure, we can distinguish two modes of pairwise relationships among root nodes that should be explored. Concretely, there are local spatial affinities across the horizontal axis between nodes belonging to a specific node sequence. On the other hand, there also exist global pairwise affinities between nodes of each image individually, as well as between nodes belonging to different images in the sequence. The latter type of search ensures that nodes of the same class located at different positions in the ROI volume and with a different textural appearance are taken into consideration, thus leading to the extraction of more expressive node representations of the classes via learning.
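A sketch of the two-stage sampling, assuming scikit-learn's KMeans for the spatial clustering step; since the images share a common coordinate space after registration, each aligned sequence simply reuses the root coordinate across the atlases:

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_root_sequences(target_coords, n_roots, n_atlases):
    """Stratified sampling of target root nodes plus their aligned atlas
    nodes. target_coords: (n_voxels, 3) integer ROI coordinates."""
    km = KMeans(n_clusters=n_roots, n_init=4).fit(target_coords)
    roots = np.rint(km.cluster_centers_).astype(int)  # snap to the grid
    # Sequence S(i): the root coordinate, read once in the target image
    # and once in each of the n_A registered atlases.
    return [{"target": tuple(c), "atlases": [tuple(c)] * n_atlases}
            for c in roots]
```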

4.3. Sequence Libraries

It should be stressed that the cost-effective affine registration used in our method is not capable of coping with severe image deformations. Hence, it cannot provide a sufficiently accurate alignment between the target and the atlases. To account for this deficiency, we expanded the domain of local search by considering neighborhoods around nodes. Specifically, for each node $x_i$, we defined multihop neighborhoods at multiple scales:
$R_s(x_i) = R_{s-1}(x_i) \cup R_1\left( R_{s-1}(x_i) \right)$  (13)
for $s = 1, \ldots, S$, where $R_s(x_i)$ denotes the neighborhood at scale $s$, and $S$ is the number of scales used. $R_0(x_i) = x_i$ corresponds to the basic $5 \times 5 \times 5$ patch of the node itself. $R_1(x_i)$ and $R_2(x_i)$ are the 1-hop and 2-hop neighborhoods, delineated as $9 \times 9 \times 9$ and $13 \times 13 \times 13$ volumes around $x_i$, respectively. In our experiments, we considered two different spatial scales $(S = 2)$. Figure 4 illustrates the different node neighborhoods.
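In terms of node centers, these neighborhoods can be enumerated as grid offsets; the 2-voxel center spacing below is an assumption consistent with the quoted 5×5×5, 9×9×9, and 13×13×13 volumes:

```python
import numpy as np
from itertools import product

def neighborhood_offsets(scale):
    """Center offsets of the nodes in the scale-s neighborhood R_s (Eq. 13).
    With 5x5x5 node patches and neighbor centers spaced 2 voxels apart,
    s = 0, 1, 2 cover 5x5x5, 9x9x9, and 13x13x13 volumes, respectively."""
    r = 2 * scale
    axis = range(-r, r + 1, 2)
    return [np.array(o) for o in product(axis, repeat=3)]

# |R_0| = 1, |R_1| = 27, |R_2| = 125 candidate node centers.
```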
Next, for each sequence $S(i), i = 1, \ldots, n_T$, we created the corresponding sequence libraries by incorporating the local neighborhoods of all root nodes belonging to that sequence. The sequence library $SL_s(i)$ at scales $s = 0, 1, \ldots, S$ was defined by
$SL_s(i) = R_s\left( x_r^{\mathcal{T}}(i) \right) \cup R_s\left( x_r^{A_1}(i) \right) \cup \ldots \cup R_s\left( x_r^{A_{n_A}}(i) \right)$  (14)
$SL_s(i)$ contains $(n_A + 1) \cdot |R_s|$ nodes, where $|R_s|$ denotes the size of the spatial neighborhood at scale $s$. Figure 5 provides a schematic illustration of a sequence library.
The collection of all $SL_s$ forms a global dataset $X_s$ of aligned neighborhoods, described as follows:
$X_s = \bigcup_{i=1}^{n_T} SL_s(i) = \left\{ X_r, N_s \right\}$  (15)
for $s = 1, \ldots, S$. $X_s$ is formed as the union of the dataset $X_r$ of root nodes and the dataset $N_s$ comprising their neighboring nodes at scale $s$. Its cardinality is $|X_s| = N_r + |N_s|$, where $N_r$ is the number of root nodes and $|N_s|$ the cardinality of $N_s$. Further, $X_s$ corresponds to a subgraph $\mathcal{G}_s(\mathcal{V}_s, \mathcal{E}_s)$, with $|\mathcal{V}_s| = |X_s|$. In this subgraph, connective edges are established in $\mathcal{E}_s$ along the horizontal axis, namely, between root nodes and the neighboring nodes across the sequence libraries. Concluding, since the neighborhoods are by definition inclusive as the scale increases, the dataset $X_S$ contains the maximum number of nodes, forming the overall dataset $\tilde{X}$:
$\tilde{X} = X_S = \bigcup_{i=1}^{n_T} SL_S(i)$  (16)
The corresponding graph $\mathcal{G}$ comprises an overall total of $N = N_r + |N_S|$ nodes, including the root and neighboring nodes.

5. Convolutional Units

This section elaborates on the basic convolutional units, namely, the local convolutional unit (Lconv) and the global convolutional unit (Gconv), which serve as structural elements to devise our proposed models (Figure 6).

5.1. Local Convolutional Unit

The local convolutional unit undertakes the local spatial learning task, operating horizontally along the sequence libraries (SLs) of nodes. Instead of confining ourselves to predefined and fixed weights in the graphs, we opted to apply a local attention mechanism to adaptively learn the graph structure information at each layer. Specifically, we used the attention approach suggested in the GAT as a means to capture the local contextual relationships among nodes in the search area.
A functional outline of Lconv at layer $l$ is shown in Figure 7. It receives an input $V^{(l-1)} \in \mathbb{R}^{N \times E_{in}^{(l-1)}}$ from the previous layer and provides its output $Q^{(l)} \in \mathbb{R}^{N \times E_{o}^{(l)}}$, where $E_{in}^{(l-1)}$ and $E_{o}^{(l)}$ denote the dimensionalities of the input and output node features, respectively.
Figure 7 provides a detailed architecture of Lconv. The model involves $S$ sub-modules, each one associated with a specific spatial scale of aggregation. Sub-module $s$ acts upon the subgraph $\mathcal{G}_s(\mathcal{V}_s, \mathcal{E}_s), s = 0, \ldots, S$, which subsumes the sequence libraries $SL_s$ of root nodes. Its input is $V_s \in \mathbb{R}^{|X_s| \times E_{in}^{(l-1)}}$, and after a local convolution at scale $s$, it provides its own output $Q_s^{(l)} \in \mathbb{R}^{|X_s| \times E_{o}^{(l)}}$. In this context, the structure of $\mathcal{G}_s$ is adapted to the local attention mechanism. Let us assume that a root node $x_i$ belongs to the $q$th sequence library: $x_i \in SL_s(q), q = 1, \ldots, n_T$. Then, node $x_i$ pays attention to two pools of neighboring nodes (Figure 3): (a) it aggregates relevant feature information from nodes $x_j$ of its own neighborhood, $x_j \in R_s(x_i)$ (self-neighborhood attention); (b) it aggregates features of nodes belonging to the other aligned neighborhoods in $SL_s(q)$, pertaining to the atlas images: $x_j \in SL_s(q) \setminus R_s(x_i)$. For these pairs of nodes, we compute normalized attention coefficients using Equation (6). Further, pairwise affinities between nodes belonging to different sequence libraries are disregarded, i.e., $\alpha_{ij} = 0$ when $x_i \in SL_s(q)$ and $x_j \in SL_s(p)$, $p \neq q$. It should be noted that we are primarily focused on computing comprehensive feature representations of the root nodes. Nevertheless, neighboring nodes are also updated; in their case, however, the attention is confined to the neighborhood of the root node they belong to.
For convenience, let us consider the input node features in the form $V_s^{(l-1)} = [v_{s,1}^{(l-1)}, \ldots, v_{s,|X_s|}^{(l-1)}]$, where $v_{s,i} \in \mathbb{R}^{E_{in}^{(l-1)}}$, and similarly, $Q_s^{(l)} = [q_{s,1}^{(l)}, \ldots, q_{s,|X_s|}^{(l)}]$, $q_{s,i}^{(l)} \in \mathbb{R}^{E_{o}^{(l)}}$. The local-level convolution at scale $s$ of a root node $x_i \in SL_s(q), i = 1, \ldots, N_r$ is obtained by
$q_{s,i}^{(l)} = \sigma\left( \sum_{j \in R_s(x_i)} \alpha_{ij} W_s^{(l)} v_{s,j}^{(l-1)} + \sum_{j \in SL_s(q) \setminus R_s(x_i)} \alpha_{ij} W_s^{(l)} v_{s,j}^{(l-1)} \right)$  (17)
The first term in the above equation refers to the self-neighborhood attention, which aggregates node features from $R_s(x_i)$ within the same image. Moreover, the second term aggregates node information from the other aligned neighborhoods in the sequence. In an attempt to stabilize the learning process and further enhance the local feature representations, we followed a multi-head approach, whereby $K$ independent attention mechanisms are applied. Accordingly, the node convolutions proceed as follows:
$q_{s,i}^{(l)} = \big\Vert_{k=1}^{K} \, \sigma\left( \sum_{j \in R_s(x_i)} \alpha_{ij}^{(k)} W_{s,k}^{(l)} v_{s,j}^{(l-1)} + \sum_{j \in SL_s(q) \setminus R_s(x_i)} \alpha_{ij}^{(k)} W_{s,k}^{(l)} v_{s,j}^{(l-1)} \right)$  (18)
where $\alpha_{ij}^{(k)}$ denote the attention coefficients between nodes $x_i$ and $x_j$ according to the $k$th attention head, while $W_{s,k}^{(l)} \in \mathbb{R}^{E_{o}^{(l)} \times E_{in}^{(l-1)}}$ are the corresponding parameter weights used for the node embeddings. The attention parameters are shared across all nodes in $\mathcal{G}_s(\mathcal{V}_s, \mathcal{E}_s)$ and are simultaneously learned at each layer $l$ and for each spatial scale individually. The outputs of the different sub-modules are finally aggregated to yield the overall output of the Lconv unit:
$Q^{(l)} = Q_1^{(l)} \oplus \ldots \oplus Q_S^{(l)}$  (19)
where $\oplus$ denotes the concatenation operator.
The multilevel attention-based aggregation of valuable contextual information from the sequence libraries offers some noticeable assets to our approach: (a) it acquires comprehensive node representations, which assist in producing better segmentation results; (b) the graph learning circumvents the inaccuracies caused by affine registration under severe image deformations, which may lead to node misclassification.
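A simplified PyTorch sketch of the Lconv unit is given below; the sequence-library structure is encoded as boolean attention masks over a common node set, and ELU is an illustrative choice for the nonlinearity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttention(nn.Module):
    """One spatial scale of an Lconv unit: multi-head GAT-style attention
    restricted by a mask to node pairs within the same sequence library
    (a simplified sketch of Equations (17) and (18))."""
    def __init__(self, in_dim, out_dim, n_heads=4):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False)
                                for _ in range(n_heads)])
        self.a = nn.ParameterList([nn.Parameter(torch.randn(2 * out_dim))
                                   for _ in range(n_heads)])

    def forward(self, V, mask):
        # V: (N, in_dim); mask: (N, N) boolean, True where attention is
        # allowed (same sequence library; True diagonal keeps softmax finite).
        outs = []
        for W, a in zip(self.W, self.a):
            h = W(V)                                          # (N, out_dim)
            d = h.size(1)
            g = F.leaky_relu(h.matmul(a[:d]).unsqueeze(1)
                             + h.matmul(a[d:]).unsqueeze(0))  # (N, N) scores
            alpha = torch.softmax(g.masked_fill(~mask, float("-inf")), dim=1)
            outs.append(F.elu(alpha @ h))                     # one head
        return torch.cat(outs, dim=1)                         # head concat

class LConv(nn.Module):
    """Lconv: one attention sub-module per spatial scale; the per-scale
    outputs are concatenated to form Q^(l), as in Equation (19)."""
    def __init__(self, in_dim, out_dim, n_scales=2, n_heads=4):
        super().__init__()
        self.scales = nn.ModuleList([LocalAttention(in_dim, out_dim, n_heads)
                                     for _ in range(n_scales)])

    def forward(self, V, masks):  # masks: one (N, N) boolean mask per scale
        return torch.cat([f(V, m) for f, m in zip(self.scales, masks)], dim=1)
```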

5.2. Global Convolutional Unit

The global convolutional unit conducts the global convolution task; it acts upon the subgraph $\mathcal{G}_r(\mathcal{V}_r, \mathcal{E}_r)$, which includes the sequences of root nodes $S(i)$. Gconv aims at exploring the global contextual relationships among nodes located at different positions in the target image and the atlases (Figure 3). Accordingly, we established in $\mathcal{E}_r$ suitable pairwise connective weights according to spectral similarity, $\tilde{A}_{ij} > 0$, for nodes $x_i \in S(p)$, $x_j \in S(q)$, $p \neq q$. Node pairs belonging to the same sequence are processed by the Lconv units; hence, they are disregarded in this case.
The global convolution at layer $l$ is acquired using the spectral convolutional principles of the GCN:
$H^{(l)} = \sigma\left( \tilde{A}^{(l)} S^{(l-1)} W_g^{(l)} \right)$  (20)
where $S^{(l-1)} \in \mathbb{R}^{N \times F_{in}^{(l-1)}}$ and $H^{(l)} \in \mathbb{R}^{N \times F_{o}^{(l)}}$ denote the input and output of Gconv, respectively, whereas $F_{in}^{(l-1)}$ and $F_{o}^{(l)}$ are the corresponding dimensionalities. $W_g^{(l)}$ is the learnable embedding matrix, and $\tilde{A}^{(l)}$ is the adjacency matrix, as defined in Section 2.1.
Similar to Lconv, we also incorporated the AGL mechanism into Gconv, so that global affinities can be automatically captured at each layer via learning. More concretely, we applied an adaptive scheme whereby the connective weights between nodes are determined from the module's input signals [45]. The adjacency matrix elements were computed by
$\tilde{A} = \sigma\left( \left( \tilde{S}^{(l-1)} W_{\phi} \right) \left( \tilde{S}^{(l-1)} W_{\phi} \right)^{\top} \right) + I_{N \times N}$  (21)
where $\tilde{S}^{(l-1)} = \mathrm{BN}(S^{(l-1)})$ is obtained after applying batch normalization to the inputs, $\sigma(\cdot)$ is the sigmoidal activation function applied element-wise, and $W_{\phi}$ is the embedding matrix to be learned, shared across all nodes of $\mathcal{G}_r(\mathcal{V}_r, \mathcal{E}_r)$. The adaptation scheme in Equation (21) assigns greater edge values between nodes with high spectral similarity and vice versa.
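A corresponding PyTorch sketch of Gconv with its AGL mechanism; the embedding width is an arbitrary choice:

```python
import torch
import torch.nn as nn

class GConv(nn.Module):
    """Sketch of the global convolutional unit with adaptive graph learning:
    the adjacency is derived from the batch-normalized layer inputs
    (Equation (21)) and then used in a GCN-style convolution (Equation (20))."""
    def __init__(self, in_dim, out_dim, embed_dim=32):
        super().__init__()
        self.bn = nn.BatchNorm1d(in_dim)
        self.W_phi = nn.Linear(in_dim, embed_dim, bias=False)  # AGL embedding
        self.W_g = nn.Linear(in_dim, out_dim, bias=False)      # node embedding

    def forward(self, S):
        # S: (N, in_dim) features of the root nodes at this layer.
        E = self.W_phi(self.bn(S))                             # (N, embed_dim)
        A = torch.sigmoid(E @ E.t()) + torch.eye(S.size(0), device=S.device)
        return torch.relu(A @ self.W_g(S))                     # H^(l)
```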
In the descriptions above, we considered the GCN model equipped with AGL as a baseline scheme. Nevertheless, in our experimental investigation, we examined several scenarios whereby the global convolution task was tackled using alternative convolutional models, including GraphSAGE, GAT, GraphSAINT, etc.

6. Proposed Convolutional Building Blocks

In this section, we present two alternative building models, namely, the cross-talk building model (CT-BM) and the sequential building model (SEQ-BM). They are distinguished according to the way the local and global convolutional units are blended across the layers. Every constituent local and global unit within the structures has its individual embedding matrix of learnable parameters. Further, it is also equipped with its own AGL mechanism for adaptive learning of the graphs, as described in the previous section.

6.1. Cross-Talk Building Model (CT-BM)

The CT-BM is shown in the outline of our approach in Figure 1, while a more compact form is depicted in Figure 8. The model comprises two composite layers $(l = 1, 2)$, each one containing one local and one global unit. As can be seen, convolutions proceed in an alternating manner across the layers, whereby the local unit transmits its output to the next global unit, and vice versa. A distinguishing feature of this structure is that there are also skip connections and aggregators which implement cross-talk links between the local and global components. Particularly, in addition to the standard flow from one unit to the next, each unit's output is aggregated with the output of the subsequent unit.
The overall workflow of the CT-BM is outlined below:
  • The first local unit yields
    $V^{(0)} = X, \quad Q^{(1)} = Lconv(V^{(0)})$  (22)
  • The local unit's output is passed to the first global unit to compute
    $S^{(0)} = Q^{(1)}, \quad H^{(1)} = Gconv(S^{(0)})$  (23)
  • The second local unit receives an aggregated signal to provide its output,
    $V^{(1)} = H^{(1)} + Q^{(1)}, \quad Q^{(2)} = Lconv(V^{(1)})$  (24)
  • The second global unit produces
    $S^{(1)} = H^{(1)} + Q^{(2)}, \quad H^{(2)} = Gconv(S^{(1)})$  (25)
  • The final output of the CT-BM is then obtained by
    $Z = H^{(2)} + Q^{(2)}$  (26)
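This workflow translates directly into a forward pass, as in the following sketch (the unit output widths are assumed to match, so that the aggregating sums are well defined; in practice, projection layers may be needed):

```python
import torch.nn as nn

class CTBM(nn.Module):
    """Cross-talk building model: two Lconv and two Gconv units with the
    aggregating skip connections of the workflow above (a sketch)."""
    def __init__(self, lconv1, gconv1, lconv2, gconv2):
        super().__init__()
        self.l1, self.g1 = lconv1, gconv1
        self.l2, self.g2 = lconv2, gconv2

    def forward(self, X, masks):
        Q1 = self.l1(X, masks)         # first local unit
        H1 = self.g1(Q1)               # first global unit
        Q2 = self.l2(H1 + Q1, masks)   # cross-talk: aggregated input
        H2 = self.g2(H1 + Q2)          # cross-talk: aggregated input
        return H2 + Q2                 # final output Z
```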

6.2. Sequential Building Model (SEQ-BM)

The architecture of the SEQ-BM is illustrated in Figure 9. This model also contains two local and two global units. Contrary to the CT-BM, convolutions are conducted in the SEQ-BM sequentially. Concretely, the local learning task is first completed using the first two local convolutional units. The outputs of this stage are then transmitted to the subsequent stage which accomplishes the global learning task, using the two global convolutional units. The overall output of the SEQ-BM is formed by aggregating the resulting local and global features of the two stages.
The workflow of the SEQ-BM is outlined as follows:
  • The local learning task is described by
    $V^{(0)} = X, \quad Q^{(1)} = Lconv(V^{(0)})$  (27)
    $V^{(1)} = Q^{(1)}, \quad Q^{(2)} = Lconv(V^{(1)})$  (28)
  • The global learning task is described by
    $S^{(0)} = Q^{(2)}, \quad H^{(1)} = Gconv(S^{(0)})$  (29)
    $S^{(1)} = H^{(1)}, \quad H^{(2)} = Gconv(S^{(1)})$  (30)
  • The final output of the SEQ-BM is obtained by
    $Z = Q^{(2)} + H^{(2)} = Z_{loc} + Z_{glo}$  (31)
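The corresponding forward pass, under the same width-matching assumption as the CT-BM sketch:

```python
import torch.nn as nn

class SEQBM(nn.Module):
    """Sequential building model: local learning is completed first, then
    global learning; the two stage outputs are summed (a sketch)."""
    def __init__(self, lconv1, lconv2, gconv1, gconv2):
        super().__init__()
        self.l1, self.l2 = lconv1, lconv2
        self.g1, self.g2 = gconv1, gconv2

    def forward(self, X, masks):
        Q2 = self.l2(self.l1(X, masks), masks)  # local stage: Z_loc
        H2 = self.g2(self.g1(Q2))               # global stage: Z_glo
        return Q2 + H2
```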
The alternating blending of the CT-BM provides a more effective integration between the local and global features at each layer individually, as compared to the sequential combination in the SEQ-BM. This observation is attested experimentally, as shown in Section 10.

7. Proposed Dense Convolutional Networks

In this section, we present two variants of our main model, the DMA-GCN network. The motivation behind this design stems from the attempt to further expand the CT-BM and SEQ-BM structures by including multiple layers of convolutions. Such an expansion, however, faces the gradient vanishing effect, where the gradients diminish, thus hindering effective learning or even worsening the results. This is the reason why, in the above building blocks, we are restricted to two-layered local–global convolutions.
To tackle this problem, we resorted to the recent advancements in deep GCNs [36] and developed the DMA-GCN model with a densely connected convolutional architecture utilizing residual skip connections, as shown in Figure 10. As can be seen, the model consists of several blocks arranged across $M$ layers of block convolutions. These blocks are implemented by either the CT-BM or the SEQ-BM described in Section 6, which leads to two alternative configurations, the DMA-GCN(CT-BM) and the DMA-GCN(SEQ-BM), respectively. Let $Z_{in}^{(i)} \in \mathbb{R}^{N \times P_{in}^{(i)}}$ and $Z_{o}^{(i)} \in \mathbb{R}^{N \times P_{o}^{(i)}}$ denote the input and output of the $i$th block, $i = 1, \ldots, M$, with $P_{in}^{(i)}$ and $P_{o}^{(i)}$ being the feature dimensionalities, respectively. The properties of the suggested DMA-GCN are discussed in the following (a sketch of the dense connectivity is given after this list):
  • The skip connections interconnect the blocks across the layers. Concretely, each block receives as input the outputs of the blocks from all preceding layers:
    $Z_{in}^{(i)} = Z_{o}^{(1)} \oplus Z_{o}^{(2)} \oplus \ldots \oplus Z_{o}^{(i-1)}$  (32)
    for $i = 1, \ldots, M$, where $\oplus$ denotes the concatenation operator. This allows the generation of deeper GCN structures which can acquire more expressive node features. Overall, the DMA-GCN involves $4M$ convolutional units. Within each block, two layers of local–global convolutions are internally performed; the resulting outputs are then integrated along the block layers to provide the final output:
    $Z_{o} = Z_{o}^{(1)} \oplus Z_{o}^{(2)} \oplus \ldots \oplus Z_{o}^{(M)}$  (33)
  • The other beneficial effect of the skip connections is that they allow the final output to have direct access to the outputs of all blocks in the dense network. This assures a better reverse flow of information and facilitates the effective learning of the parameters pertaining to the blocks. Since block operations are confined to two-layered local–global convolutions, overall, we can circumvent the gradient vanishing problem.
  • In order to preserve the parametric complexity at a reasonable level, similar to [36], we define the feature dimensions of each block in the DMA-GCN to be the same:
    $P_{o}^{(1)} = P_{o}^{(2)} = \ldots = P_{o}^{(M)} = d$  (34)
    The node feature growth caused by the aggregators can then be expressed as $P_{in}^{(i)} = (i - 1) \cdot d$, $i = 1, \ldots, M$. The input dimensions grow linearly as we proceed to deeper block layers, with the last block showing the largest increase, $P_{in}^{(M)} = (M - 1) \cdot d$. To prevent the feature dimensionalities from growing too large, we initially considered a DMA-GCN model with $M = 4$ blocks. The particular number of blocks in the above range was then decided after experimental validation (Section 10).
  • Every block in the DMA-GCN is supported by its corresponding AGL process to automatically learn the graph connective affinities at each layer. This is accomplished by applying an attention-based mechanism for the local convolutional units (Section 5.1) and an adaptive construction of adjacency matrices from the input node features (Section 5.2).
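The dense connectivity pattern can be sketched as follows; the blocks are assumed to be constructed with the matching input widths implied by the growth rate above:

```python
import torch
import torch.nn as nn

class DMAGCN(nn.Module):
    """Densely connected stack of M building blocks (CT-BM or SEQ-BM).
    Each block receives the concatenation of all previous block outputs,
    and the final representation concatenates every block output (a sketch;
    block i is assumed to expect (i - 1) * d input features for i > 1)."""
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, X, masks):
        outs = []
        z = X                            # input of the first block
        for block in self.blocks:
            outs.append(block(z, masks)) # Z_o^(i)
            z = torch.cat(outs, dim=1)   # dense skip connections, Eq. (32)
        return torch.cat(outs, dim=1)    # Z_o^(1) ⊕ ... ⊕ Z_o^(M), Eq. (33)
```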
The output of the DMA-GCN is fed to a two-layer MLP unit to obtain the label estimates:
$\hat{Z}_o = \mathrm{softmax}\left( \mathrm{ReLU}(W_1 Z) \, W_2 \right)$  (35)
We adopted the cross-entropy error to penalize the differences between the model's output $\hat{Z}_o$ and the corresponding node labels:
$\mathcal{L} = - \sum_{l \in \mathcal{Y}_L} \sum_{c=1}^{C} Y_{lc} \ln Z_{lc}$  (36)
where $\mathcal{Y}_L$ denotes the subset of labeled nodes. The DMA-GCN network was trained under either the transductive or the inductive learning scheme, as discussed in the following section.
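A sketch of the readout and loss, with illustrative layer widths; softmax is folded into PyTorch's cross-entropy:

```python
import torch
import torch.nn as nn

# Two-layer MLP readout mapping DMA-GCN node features to the 5 classes.
# The widths 64 and 32 are illustrative choices, not the paper's settings.
mlp = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
criterion = nn.CrossEntropyLoss()     # applies log-softmax internally

def loss_on_labeled(Z, labels, labeled_mask):
    """Z: (N, 64) node features; labels: (N,) class ids; labeled_mask: (N,)
    boolean marking the (atlas) nodes that carry ground-truth labels."""
    logits = mlp(Z)
    return criterion(logits[labeled_mask], labels[labeled_mask])
```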

8. Network Learning

In our setting, we considered transductive learning (SSL) as the basic learning scheme for training the DMA-GCN models. In this case, both the unlabeled data from the target image $\mathcal{T}$ to be segmented and the labeled data from its atlas library $\mathcal{LB}(\mathcal{T})$ were used for the construction of the model. Nevertheless, in the experiments, we also investigated the inductive (supervised) learning scenario, whereby training was conducted solely using labeled data from the atlases.

8.1. Transductive Learning (SSL)

The SSL scheme was adapted to the context of the image segmentation task elaborated here. In regard to the data used in learning, SSL can be carried out in two different modes of operation, namely, mini-batch learning and full-batch learning. Next, we detail mini-batch learning and then conclude with full-batch learning, which is a special case of the former.
Mini-batch learning was implemented by following a three-stage procedure. In stage 1, an initial model was learned and used to label an initial batch of data from $\mathcal{T}$. Stage 2 was an iterative process, where out-of-sample batches were sequentially sampled from $\mathcal{T}$ and labeled via refreshing learning. Finally, stage 3 labeled the remaining voxels of the target image using a majority voting scheme.
Stage 1: learning. In this stage $(t = 0)$, we started by sampling an initial unlabeled batch $X_r^T(0)$ from the target image. Then, we followed the steps detailed in Section 4 to construct the corresponding graph of nodes. (a) Given $X_r^T(0)$, we created the corresponding aligned sequences (Section 4.3), giving rise to the dataset of root nodes $X_r(0) = \{X_r^T(0), X_r^A(0)\}$. (b) Next, we incorporated neighborhood information by generating sequence libraries at multiple scales (Section 4.3), leading to the datasets $X_s(0) = X_r(0) \cup N_s(0)$, $s = 1, \ldots, S$, where $N_s(0)$ denotes the neighboring nodes. (c) Finally, we considered the overall dataset $\tilde{X}(0) = X_S(0)$, which corresponds to the graph $G_0(V_0, E_0)$ containing the root and neighboring nodes at this stage.
The next step was to perform convolutional learning on the graph $G_0(V_0, E_0)$ using the DMA-GCN models. Upon completion of the training process, we accomplished the labeling of $X_r^T(0)$:
$l\left(X_r^T(0)\right) = F^{(0)}\left(\tilde{X}(0), W^{(0)}\right)$
where $F^{(0)}$ denotes the model's functional mapping, $l(\cdot)$ is the labeling function of the target nodes, and $W^{(0)}$ stands for the network's weights, including the learnable parameters of the embedding matrices and the attention coefficients, across all layers of the DMA-GCN.
Stage 2: iterative learning. This stage followed an iterative procedure, $t = 1, \ldots, T$, whereby at each iteration, an out-of-sample batch of yet unlabeled nodes, $X_o^T(t)$ of size $n_o$, was sampled from $T$. Considering the nodes in $X_o^T(t)$ as root nodes, we then applied steps (a)–(c) of the previous stage to obtain the datasets $X_{o,s}(t) = X_o(t) \cup N_{o,s}(t)$, $s = 0, \ldots, S$, and the overall set $\tilde{X}_o(t) = X_{o,S}(t)$, which corresponded to a graph of out-of-sample nodes. The data $\tilde{X}_o(t)$ were then fed to the pretrained model from stage 1, $l(X_o^T(t)) = F^{(t)}(\tilde{X}_o(t), W^{(t)})$. The model was initialized as $W^{(t)} = W^{(0)}$ to preserve previously acquired knowledge, and was then subjected to several epochs of refreshing convolutional learning, with the aim of adapting to the newly presented data. The above sequential process terminated at $t = T$, when all target nodes had been labeled.
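The following Python sketch summarizes stages 1 and 2. All helper names (`sample_batch`, `build_graph`, `train`, `refresh`, and the `target` interface) are placeholders for the operations described above, not functions from our released code.

```python
def segment_target(target, atlases, sample_batch, build_graph, train, refresh):
    """Sketch of stages 1-2 of mini-batch transductive learning."""
    labels = {}
    # Stage 1: initial batch X_r^T(0), graph construction, full training
    batch0 = sample_batch(target, size=128)
    g0 = build_graph(batch0, atlases)        # aligned sequences + multi-scale libraries
    model = train(g0)                        # learn F^(0) with weights W^(0)
    labels.update(model.label(batch0))
    # Stage 2: out-of-sample batches X_o^T(t) with refreshing learning
    while target.has_unlabeled_nodes():
        batch_t = sample_batch(target, size=128)
        g_t = build_graph(batch_t, atlases)
        model = refresh(model, g_t)          # warm start from W^(0), few epochs
        labels.update(model.label(batch_t))
    return labels
```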
Stage 3: labeling of remaining voxels. This was the final stage of target image segmentation, entailing the labeling of target voxels not considered during the previous learning stages. Given that nodes were the central voxels of generic $5 \times 5 \times 5$ patches, multiple remaining voxels were scattered within $3 \times 3 \times 3$ volumes. Labeling of these voxels was accomplished by a voting scheme. Specifically, for each voxel $x_r$, the voting function accounted for both the spectral and the spatial distance from its surrounding labeled vertices:
$l(x_r) = \sum_i w_{r,i}\, l(x_i)$
$w_{r,i} = w_{r,i}^{spec} \times w_{r,i}^{spat}$
where $w_{r,i}^{spec}$ and $w_{r,i}^{spat}$ are normalized weighting coefficients denoting the spectral and spatial proximity, measured by the $\ell_2$ norm (Euclidean distance) and the $\ell_1$ norm (Manhattan distance), respectively.
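A possible realization of this voting rule is sketched below. Converting the $\ell_2$ and $\ell_1$ distances into proximities via their reciprocals is our assumption for illustration; the text only fixes the distance measures and the normalization.

```python
import numpy as np

def vote_label(feat_r, pos_r, feats, positions, labels, n_classes):
    """Stage-3 voting sketch for a remaining voxel with features feat_r at
    position pos_r, given its surrounding labeled vertices."""
    d_spec = np.linalg.norm(feats - feat_r, ord=2, axis=1)   # Euclidean (l2)
    d_spat = np.abs(positions - pos_r).sum(axis=1)           # Manhattan (l1)
    w_spec = 1.0 / (d_spec + 1e-8)                           # proximity (assumed form)
    w_spat = 1.0 / (d_spat + 1e-8)
    w = (w_spec / w_spec.sum()) * (w_spat / w_spat.sum())    # normalized product
    scores = np.bincount(labels, weights=w, minlength=n_classes)
    return int(scores.argmax())                              # weighted vote
```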
Full-batch learning is a special case of the above mini-batch learning. In this case, the dataset $X_r^T(0)$ is a large body of data comprising all possible target nodes contained in the target ROI, and the iterative stage 2 is skipped. Full-batch learning is completed after convolutional learning (stage 1), followed by the labeling of the remaining target voxels (stage 3).

8.2. Inductive Learning

Under this setting, the target data remain unseen during the entire training phase. Adapting to the multi-atlas scenario, we devised a supervised learning scheme comprising the following steps. (a) For each target image $T$, we selected the most similar labeled image $\hat{T} = \mathrm{NN}(T)$ from the atlases, where $\mathrm{NN}(\cdot)$ is a spectral similarity function used to identify the nearest neighbor of $T$. (b) The image $\hat{T}$, along with its corresponding atlas library $LB(\hat{T})$, was used to learn a supervised model $F_{ind}(\hat{T})$ by applying exactly stage 1 of the previous subsection. (c) Finally, the target image $T$ was labeled by means of $F_{ind}(\hat{T})$. As opposed to the SSL scenario, the critical difference was that the developed model relied solely on labeled data, disregarding target image information.

9. Experimental Setup

9.1. Evaluation Metrics

The overall segmentation accuracy achieved by the proposed methods was evaluated using the following three standard volumetric measures: the Dice similarity coefficient $(\mathrm{DSC})$, the volumetric difference $(\mathrm{VD})$, and the volume overlap error $(\mathrm{VOE})$. Denoting by $Y$ the ground-truth labels and by $\hat{Y}$ the estimated ones, the above measures are defined as:
$\mathrm{DSC} = 100 \cdot \dfrac{2\,|Y \cap \hat{Y}|}{|Y| + |\hat{Y}|}$
$\mathrm{VOE} = 100 \cdot \left(1 - \dfrac{\mathrm{DSC}}{200 - \mathrm{DSC}}\right)$
$\mathrm{VD} = 100 \cdot \dfrac{|\hat{Y}| - |Y|}{|Y|}$
Taking into account that the large majority of voxels correspond to either the background class or the two bone classes, we opted to also include the precision and recall measures, to better evaluate the segmentation performance on each individual structure. All measures are computed exclusively on the image content delineated by the respective ROI of each evaluated MRI.
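The three volumetric measures can be computed directly from a pair of boolean masks, as in the following sketch; the masks are assumed to be already restricted to the ROI and nonempty.

```python
import numpy as np

def volumetric_metrics(y_true, y_pred):
    """DSC, VOE and VD (all in %) for boolean masks, per the definitions above."""
    inter = np.logical_and(y_true, y_pred).sum()
    dsc = 100.0 * 2.0 * inter / (y_true.sum() + y_pred.sum())
    voe = 100.0 * (1.0 - dsc / (200.0 - dsc))
    vd = 100.0 * (y_pred.sum() - y_true.sum()) / y_true.sum()
    return dsc, voe, vd
```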

9.2. Hyperparameter Setting and Validation

The overall performance of the proposed method depends on a multitude of preset parameters, the most prominent of which are the number of atlases $N_A$ comprising the atlas library $\{A, L\}_i$, $i = 1, \ldots, N_A$, the number of heads $K$ utilized in the multi-head attention mechanism, and the number of scales $S$ corresponding to the different neighborhood sizes. The optimal values of the above hyperparameters, as well as the performance of the DMA-GCN(SEQ) and DMA-GCN(CT) segmentation methods, were evaluated through 5-fold cross-validation.
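A minimal sketch of this validation loop is given below. The grid values for $N_A$, $K$, and $S$ follow the ranges examined in Section 10, while `evaluate` and `n_subjects` are placeholders for the training/evaluation routine and the cohort size.

```python
from itertools import product
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(n_subjects, evaluate):
    """5-fold CV over a hypothetical hyperparameter grid (N_A, K, S)."""
    subjects = np.arange(n_subjects)
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    results = {}
    for n_a, k, s in product([5, 10, 15, 20], [4, 8, 12], [0, 1, 2]):
        scores = [evaluate(tr, va, n_a, k, s) for tr, va in kf.split(subjects)]
        results[(n_a, k, s)] = np.mean(scores)
    return max(results, key=results.get)   # best (N_A, K, S) by mean score
```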

9.3. Experimental Test Cases

The proposed methodology comprises several components affecting its overall capacity and performance. We hereby present a series of experimental test cases, aiming to shed light on those effects.
  • Local vs. global learning: In this scenario, we aimed to observe the effect of performing local-level learning in addition to global learning, in order to determine the potential boost in performance provided by the inclusion of the attention mechanism in our models.
  • Transductive vs. inductive learning: The goal here was to ascertain whether the increased cost accompanying the transductive learning scheme could be justified in terms of performance, as compared to the less computationally demanding inductive learning.
  • Sparse vs. dense adjacency matrix: Here, we examined the effect on overall performance of progressively sparsifying the adjacency matrix $\tilde{A}^{(l)}$ at each layer. We examined the following cases: (1) the default case with a dense $\tilde{A}^{(l)}$ and (2) thresholding $\tilde{A}^{(l)}$ so that each node was allowed connections to only its 5, 10, or 20 most spectrally adjacent ones (a sketch of both sparsification modes is given after this list).
  • Global convolution models: Finally, we tested the effect of varying the design of the global components by examining some prominent architectures, namely, GCN, ClusterGCN, GraphSAINT, and GraphSAGE.
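The sketch below illustrates the two sparsification modes referenced in the third test case: cutting off edges below a strength threshold, and keeping only the $k$ strongest (most spectrally similar) neighbors per node. It is a generic illustration, not the exact routine of our implementation.

```python
import torch

def sparsify(A, threshold=None, k=None):
    """Sparsify a dense (N, N) affinity matrix A."""
    if threshold is not None:
        A = torch.where(A >= threshold, A, torch.zeros_like(A))  # cut weak edges
    if k is not None:
        vals, idx = A.topk(k, dim=-1)                            # k strongest per node
        A = torch.zeros_like(A).scatter_(-1, idx, vals)
    return A
```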

9.4. Competing Cartilage Segmentation Methods

The efficacy of our proposed method was evaluated against several published works dealing with the problem of automatic knee cartilage segmentation.

9.4.1. Patch-Based Methods

The patch-based sparse coding (PBSC) [8] and patch-based nonlocal-means (PBNLM) [7] methods are two state-of-the-art approaches in medical image segmentation. For consistency with the DMA-GCN, we set the patch size for both methods to $(5 \times 5 \times 5)$ and the corresponding search volume size to $(13 \times 13 \times 13)$. The remaining parameters were set as described in the respective works.

9.4.2. Deep Learning Methods

Here, we opted to evaluate the DMA-GCN against some state-of-the-art deep learning architectures that were successfully applied in the field of medical image segmentation.
SegNet [12]: A convolutional encoder–decoder architecture built upon the VGG16 [46] network, suitable for pixelwise classification. Since we are dealing with 3D data, we split each input image patch into its constituent planes $(xy, xz, yz)$ and fed those to the network.
DenseVoxNet [13]: A convolutional network proposed for cardiovascular MRI segmentation. It comprises a downsampling and an upsampling sub-component, utilizing skip connections from each layer to its subsequent ones, enforcing a richer information flow across the layers. In our experiments, the model was trained using the same parameter initialization scheme as described in the original paper.
VoxResNet [14]: A deep residual network comprising a series of stacked residual modules, each performing batch normalization and convolution, and containing skip connections from each module's input to its respective output.
KCB-Net [16]: A recently proposed network that performs cartilage and bone segmentation from volumetric images using a modular architecture: three sub-components are each trained to process a separate plane (sagittal, coronal, axial), followed by a 3D component tasked with aggregating the respective outputs into a single overall segmentation map.
CAN3D [47]: This network utilizes successively dilated convolution kernels, aggregating multi-scale information by performing feature extraction within an increasingly dilated receptive field and facilitating the final voxelwise classification at full resolution. Additionally, the loss function employed at the final layer combines the Dice similarity coefficient $\mathrm{DSC}$ with $\mathrm{DSF}$, a variant of the standard $\mathrm{DSC}$ used in evaluating segmentation results.
PointNet [15]: PointNet is a recently proposed architecture specifically geared towards point-cloud classification and segmentation. As an initial preprocessing step, it incorporates a spatial transformer network (STN) [48] that renders the input invariant to permutations and is used to produce a global feature for the whole point cloud. This global feature is appended to the output of a standard multilayer perceptron (MLP) that operates on the initial point-cloud features, and the resulting aggregated features are passed through another MLP to produce the final segmentation map.

9.4.3. Graph-Based Deep Learning Methods

In regard to graph-based convolution models, we compared our DMA-GCN approach to a series of baseline GCN architectures carrying out the global learning task. In addition, we included in the comparisons the multilevel GCN with automatic graph learning (MGCN-AGL) method [30], used for the classification of hyperspectral remote sensing images. The MGCN-AGL approach takes a form similar to that of the SEQ-BM in Figure 9, combining local learning via a GAT-based attention mechanism with global learning implemented by a GCN. A salient feature of this method is that the global contextual affinities are reconstructed based on the node representations obtained after completion of the local learning stage.

9.5. Implementation Details

All models presented in this study were developed using the PyTorch Geometric library (https://github.com/pyg-team/pytorch_geometric, accessed on 1 May 2023), specifically built upon PyTorch (https://pytorch.org) to handle graph neural networks. For the initial registration step, we used the elastix toolkit (https://github.com/SuperElastix/elastix, accessed on 1 May 2023). The code for all models proposed in this study can be found at (https://gitlab.com/christos_chadoulos/graph-neural-networks-for-medical-image-segmentation, accessed on 10 March 2024).
Regarding the network and optimization parameters used in our study, we opted for the following choices: after some initial experimentation, the parameter $d$ controlling the node feature growth rate (Section 7) was set to $d = 128$, resulting in the input feature dimensionality progression $128 \rightarrow 256 \rightarrow 384 \rightarrow 512$ for $M = 4$ dense layers. All models were trained for up to 500 epochs using the Adam optimizer, with an early stopping criterion halting the training process either when no further improvement was detected in the validation error over a span of 50 epochs, or when the validation error increased steadily for more than 10 consecutive epochs.
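The early stopping criterion just described can be captured by a small helper. The function below is a sketch of the two halting conditions, with `patience = 50` and `rise_limit = 10` taken from the text.

```python
def should_stop(val_errors, patience=50, rise_limit=10):
    """Stop when the best validation error occurred more than `patience`
    epochs ago, or when the error rose for `rise_limit` consecutive epochs."""
    if not val_errors:
        return False
    best_epoch = min(range(len(val_errors)), key=val_errors.__getitem__)
    if len(val_errors) - 1 - best_epoch >= patience:
        return True
    tail = val_errors[-(rise_limit + 1):]
    return len(tail) == rise_limit + 1 and all(b > a for a, b in zip(tail, tail[1:]))
```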

10. Experimental Results

10.1. Parameter Sensitivity Analysis

In this section, we examine the effect of critical hyperparameters on the performance of the models under examination. The results presented for each hyperparameter were obtained while the remaining ones were held at their optimal values.

10.1.1. Number of Selected Atlases $N_A$

The number of selected atlases is a crucial parameter for all methods adopting the multi-atlas framework. Figure 11 shows the effect on the performance of DMA-GCN(CT) and DMA-GCN(SEQ) for $N_A = 5, \ldots, 20$.
For both methods, the number of atlases has a similar effect on the overall performance. The highest score in each case is achieved for $N_A = 10$ atlases, and performance slowly diminishes as that number grows. Constructing the graph by sampling voxels from a small pool of atlases increases the bias of the model, which thus fails to capture the underlying structure of the data. Increasing that number allows the image graphs to include a greater percentage of nodes with dissimilar feature descriptions, which enhances the expressive power of the features. Accordingly, a moderate number of aligned atlases provides the best overall rates, as it achieves a reasonable balance between bias and variance.

10.1.2. Number of Attention Heads

The number of attention heads is arguably one of the most influential parameters for the local units of our models. Figure 12 demonstrates the effect on performance for $K = 0, 4, 8, 12$. The trivial case of $K = 0$ corresponds to the setting where the local convolutional units are disregarded, i.e., the convolution task is undertaken solely by the global units.
The best performance is achieved at $K = 8$ for both DMA-GCN(CT) and DMA-GCN(SEQ). Most importantly, both charts show a sharp drop in performance at $K = 0$, indicating that discarding the local convolutional units significantly degrades the overall performance of the DMA-GCN. In that case, the model disregards the local contextual information contained in the node libraries, and the node features are formed by applying graph convolutions exclusively at the global level.
It can also be noted that the independent embeddings of the attention mechanism provide different representations of the local pairwise affinities between nodes, which facilitates a better aggregation of the local information. Nevertheless, beyond a threshold value, the results deteriorate, most likely due to overfitting.
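For reference, the multi-head mechanism examined here corresponds to the `heads` argument of PyTorch Geometric's `GATConv`. The toy snippet below only illustrates how $K = 8$ concatenated heads shape the node embeddings; it is not our full local convolutional unit.

```python
import torch
from torch_geometric.nn import GATConv

conv = GATConv(in_channels=128, out_channels=16, heads=8, concat=True)

x = torch.randn(200, 128)                      # 200 nodes, 128 input features
edge_index = torch.randint(0, 200, (2, 1000))  # random connectivity, for demonstration
out = conv(x, edge_index)                      # shape (200, 8 * 16): heads concatenated
```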

10.1.3. Sparsity of Adjacency Matrix $\tilde{A}$

The adjacency matrix $\tilde{A}$ is the backbone of graph neural networks in general, encoding the graph structure and node connectivity. As noted in [30], a densely connected $\tilde{A}$ may have a negative impact on the overall segmentation performance. To this end, we evaluated a number of thresholds serving as cut-off points, discarding edges that were not sufficiently strong. A small threshold leaves most edges in the graph intact, while a larger one creates a sparser $\tilde{A}$ by preserving only the most significant edges. Figure 13 summarizes the effects of thresholding $\tilde{A}$ at different sparsity levels.
As can be seen, large sparsity values considerably reduce the segmentation accuracy, which suggests that preserving only the strongest graph edges restricts the aggregation range over neighboring nodes, and hence the expressive power of the resulting models. On the other hand, similar deficiencies are incurred at low sparsity values with dense matrices $\tilde{A}$: in that case, the neighborhood's range is unduly expanded, allowing aggregations between nodes with weak spectral similarity. A moderate sparsity, corresponding to a threshold value of 0.50, attains the best results for both DMA-GCN(CT) and DMA-GCN(SEQ).

10.1.4. Number of Scales

The number of scales considered in conjunction with the local attention mechanism plays an important role in the overall performance of the DMA-GCN. It defines the size of the spatial neighborhoods considered in the local convolutional units, which greatly affects the resulting node feature representations. Figure 14 shows the segmentation rates for scales $s = 0, 1, 2$. As can be seen, for both the DMA-GCN(CT) and DMA-GCN(SEQ) models, the incorporation of additional neighborhoods of progressively larger scales consistently improves the accuracy.
In the trivial case of $s = 0$, each node aggregates local information by paying attention solely to its aligned root nodes. This restricted attention leads to weak local representations of the nodes, and hence to degraded overall performance (left columns). Incorporating the one-hop neighborhoods $(s = 1)$ expands the range of the attention mechanism, providing more enriched node features and significantly better results compared to the previous case (middle columns). This trend is retained when including the two-hop neighborhoods $(s = 2)$, where an even greater improvement can be observed (right columns).

10.2. Number of Dense Layers

In this section, we examine the effect of the number $M$ of dense layers used in the DMA-GCN models (Section 7). It defines the depth of the networks and thus directly impacts the size, as well as the representational capabilities, of the respective models. Figure 15 shows the results obtained by progressively using up to four dense layers. Note that the single-layer results refer to the case where DMA-GCN(CT) and DMA-GCN(SEQ) coincide with their constituent block models, i.e., the CT-BM and SEQ-BM, respectively.
Figure 15a clearly demonstrates an upward trend in the obtained rates as the number of dense layers increases. The lowest performance is, unsurprisingly, achieved by the shallow single-layer network (CT-BM). The best results are achieved for $M = 3$, while the inclusion of an additional layer slightly diminishes the accuracy, an effect most likely attributed to overfitting. A similar pattern of improvements can also be observed for the DMA-GCN(SEQ) model. In conclusion, the proposed densely connected block architectures lead to deeper GCN networks with multiple local–global convolutional layers, which acquire more comprehensive node features and thus offer better results.

10.3. Mini-Batch vs. Full-Batch Training

The goal of this section is twofold. First, we compare the mini-batch against the full-batch learning scheme (Section 8). Secondly, we examine the effect of the batch size on the performance of mini-batch learning. Figure 16a,b presents the $\mathrm{DSC}$ metrics of the DMA-GCN(CT) and DMA-GCN(SEQ) models, respectively, for varying batch sizes. The first four columns refer to mini-batch learning, while the rightmost column represents the full-batch scenario.
Both figures share a similar pattern of results. Noticeably, in mini-batch learning, the best results $(\mathrm{DSC} = 95\%)$ were achieved for a batch size of 128. This value refers to the size of the initial batch $X_r^T(0)$ as well as of the out-of-sample batches extracted from the target $T$. Given these batches, we then proceeded to the formation of the sequences of aligned nodes and the sequence libraries at multiple scales, to generate the corresponding graphs of nodes used for convolutional learning. Finally, full-batch learning provided a significantly inferior performance $(\mathrm{DSC} = 81\%)$ compared to the mini-batch scenario.

10.4. Global Module Architecture

In this section, we examine the effectiveness of several popular graph-based networks within our approach. Concretely, in the context of the DMA-GCN structures, we considered several model combinations, whereby the GAT-based attention and local information aggregation carried out the local learning task, while the global task was undertaken by GCN, GraphSAGE, GraphSAINT, or ClusterGCN, respectively.
Figure 17 shows the $\mathrm{DSC}$ measures for both the DMA-GCN(CT) and DMA-GCN(SEQ) models, while Table 1 provides more detailed results. As can be seen, for both models, the utilization of GraphSAGE clearly provided the best performance, possibly due to its more sophisticated node sampling method. Nevertheless, all alternatives offered consistently good results.
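In PyTorch Geometric terms, exchanging the global unit largely amounts to swapping the underlying convolution class; the hypothetical factory below illustrates this for GCN and GraphSAGE. ClusterGCN and GraphSAINT, by contrast, are realized as sampling strategies (`ClusterLoader`, `GraphSAINTRandomWalkSampler`) wrapped around such layers.

```python
from torch_geometric.nn import GCNConv, SAGEConv

def make_global_conv(name, in_dim, out_dim):
    """Hypothetical factory for the global convolutional unit."""
    layers = {"gcn": GCNConv, "sage": SAGEConv}
    return layers[name](in_dim, out_dim)

global_unit = make_global_conv("sage", 128, 128)  # GraphSAGE variant (best in Table 1)
```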

10.5. Transductive vs. Inductive Learning

In this final test case, we investigated the efficacy of the transductive against the inductive learning schemes (Section 8.1). Table 2 presents detailed results pertaining to both DMA-GCN(CT/SEQ) structures.
According to the results, transductive learning significantly outperforms the inductive learning scenario, for both cartilage classes of interest and across all evaluation metrics. This can be attributed to the following reasons. First, corroborating a well-established finding in the literature, the superior rates underscore the importance of utilizing the features of unlabeled nodes in the training process, in combination with those of labeled ones. Secondly, the refreshing learning stage applied in mini-batch learning (Section 10.3) allows the network to appropriately adjust to the newly observed out-of-sample batches. The inductive model, on the other hand, is trained once using the nearest-neighbor image; this network is then used to segment the target image by classifying the entire set of unlabeled batches from $T$.

10.6. Comparative Results

Table 3 presents extensive comparative results, contrasting our DMA-GCN models with traditional patch-based approaches and state-of-the-art deep learning architectures established in the field of medical image segmentation. We also included six graph-based convolution networks in the comparisons; these networks were used as standalone models, solely conducting the global learning of the nodes. Finally, we applied the more integrated MGCN-AGL model [30]. For the DMA-GCN models, we applied the following parameter setting: $N_A = 10$ atlases, $K = 8$ attention heads, $S = 2$ spatial scales, $M = 3$ dense layers, and transductive learning with a mini-batch size of 128.
Based on the results in Table 3, we should notice the higher rates of GAT and MGCN compared with those of the other graph convolution networks, suggesting that an attention mechanism combined with a multi-scale consideration of the data can improve a model's performance. Furthermore, the proposed deep DMA-GCN(CT) and DMA-GCN(SEQ) models both outperform all competing methods in the experimental setup, achieving $\mathrm{DSC}_{fmrl} = (95.71\%, 95.44\%)$ and $\mathrm{DSC}_{tbl} = (94.02\%, 93.87\%)$, respectively, across all evaluation metrics and for both femoral and tibial segmentation. DMA-GCN(CT) provides a slightly better performance than DMA-GCN(SEQ), indicating that the alternating combination of local–global convolutional units is more effective. However, both methods may fail to deliver satisfactory results in certain cases where the cartilage tissue is severely damaged or otherwise deformed. Figure 18 showcases an example of a successful application of both the DMA-GCN(SEQ) and DMA-GCN(CT) models, along with a marginal case exhibiting suboptimal results due to extreme cartilage thinning.

11. Conclusions and Future Work

In this paper, we presented the DMA-GCN for knee joint cartilage segmentation. Our models share a number of attractive properties, such as the constructive integration of local-level and global-level learning, a densely connected structure, and adaptive graph learning. These features render the DMA-GCN capable of acquiring expressive node representations. A comparative analysis with various state-of-the-art deep learning and graph-based convolution networks validated the efficacy of the proposed approach. As future research, we intend to extend the current framework by focusing on hypergraph networks, which allow the incorporation of multiple views of graph data.

Author Contributions

Conceptualization, J.T.; methodology, C.C. and J.T.; software, C.C.; writing—original draft preparation, C.C.; writing—review and editing, J.T., D.T., A.S., S.M. and C.C.; visualization, C.C. and S.M.; supervision, J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available in [4,49].

Conflicts of Interest

The authors declare that they have no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Woolf, A.D.; Pfleger, B. Burden of major musculoskeletal conditions. Bull. World Health Organ. 2003, 81, 646–656. [Google Scholar]
  2. Ebrahimkhani, S.; Jaward, M.H.; Cicuttini, F.M.; Dharmaratne, A.; Wang, Y.; de Herrera, A.G. A review on segmentation of knee articular cartilage: From conventional methods towards deep learning. Artif. Intell. Med. 2020, 106, 101851. [Google Scholar] [CrossRef] [PubMed]
  3. Fripp, J.; Crozier, S.; Warfield, S.K.; Ourselin, S. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee. Phys. Med. Biol. 2007, 52, 1617–1631. [Google Scholar] [CrossRef]
  4. Ambellan, F.; Tack, A.; Ehlke, M.; Zachow, S. Automated segmentation of knee bone and cartilage combining statistical shape knowledge and convolutional neural networks: Data from the Osteoarthritis Initiative. Med. Image Anal. 2019, 52, 109–118. [Google Scholar] [CrossRef] [PubMed]
  5. Folkesson, J.; Dam, E.B.; Olsen, O.F.; Pettersen, P.C.; Christiansen, C. Segmenting articular cartilage automatically using a voxel classification approach. IEEE Trans. Med. Imaging 2007, 26, 106–115. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, K.; Lu, W.; Marziliano, P. Automatic knee cartilage segmentation from multi-contrast MR images using support vector machine classification with spatial dependencies. Magn. Reson. Imaging 2013, 31, 1731–1743. [Google Scholar] [CrossRef] [PubMed]
  7. Rousseau, F.; Habas, P.A.; Studholme, C. A supervised patch-based approach for human brain labeling. IEEE Trans. Med. Imaging 2011, 30, 1852–1862. [Google Scholar] [CrossRef]
  8. Zhang, D.; Guo, Q.; Wu, G.; Shen, D. Sparse patch-based label fusion for multi-atlas segmentation. In Multimodal Brain Image Analysis, Proceedings of the Multimodal Brain Image Analysis: Second International Workshop, MBIA 2012, Nice, France, 1–5 October 2012; Lecture Notes in Computer Science Series; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7509, pp. 94–102. [Google Scholar] [CrossRef]
  9. Hajnal, J.V.; Hill, D.L.; Hawkes, D.J. Medical image registration. Med. Image Regist. 2001, 46, 1–383. [Google Scholar] [CrossRef]
  10. Wang, R.; Lei, T.; Cui, R.; Zhang, B.; Meng, H.; Nandi, A.K. Medical image segmentation using deep learning: A survey. IET Image Process. 2022, 16, 1243–1267. [Google Scholar] [CrossRef]
  11. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef]
  12. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  13. Yu, L.; Cheng, J.Z.; Dou, Q.; Yang, X.; Chen, H.; Qin, J.; Heng, P.A. Automatic 3D cardiovascular MR segmentation with densely-connected volumetric convnets. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017, Proceedings of the 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Lecture Notes in Computer Science Series; Springer: Cham, Switzerland, 2017; Volume 10434, pp. 287–295. [Google Scholar] [CrossRef]
  14. Chen, H.; Dou, Q.; Yu, L.; Heng, P.A. VoxResNet: Deep Voxelwise Residual Networks for Volumetric Brain Segmentation. arXiv 2016, arXiv:1608.05895. [Google Scholar]
  15. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar] [CrossRef]
  16. Xie, H.; Pan, Z.; Zhou, L.; Zaman, F.A.; Chen, D.Z.; Jonas, J.B.; Xu, W.; Wang, Y.X.; Wu, X. Globally optimal OCT surface segmentation using a constrained IPM optimization. Opt. Express 2022, 30, 2453. [Google Scholar] [CrossRef] [PubMed]
  17. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24. [Google Scholar] [CrossRef] [PubMed]
  18. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. Adv. Neural Inf. Process. Syst. 2016, 29, 395–398. [Google Scholar]
  19. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017—Conference Track Proceedings, Toulon, France, 24–26 April 2017; pp. 1–14. [Google Scholar]
  20. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural Message Passing for Quantum Chemistry. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017. [Google Scholar]
  21. Zeng, H.; Zhou, H.; Srivastava, A.; Kannan, R.; Prasanna, V. GraphSAINT: Graph sampling based inductive learning method. In Proceedings of the 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26–30 April 2020; pp. 1–19. [Google Scholar]
  22. Chen, J.; Ma, T.; Xiao, C. FastGCN: Fast learning with graph convolutional networks via importance sampling. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018—Conference Track Proceedings, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–15. [Google Scholar]
  23. Chiang, W.L.; Li, Y.; Liu, X.; Bengio, S.; Si, S.; Hsieh, C.J. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 257–266. [Google Scholar] [CrossRef]
  24. Veličković, P.; Casanova, A.; Liò, P.; Cucurull, G.; Romero, A.; Bengio, Y. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018—Conference Track Proceedings, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–12. [Google Scholar] [CrossRef]
  25. Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 2017, 30, 1025–1035. [Google Scholar]
  26. Qiu, J.; Tang, J.; Ma, H.; Dong, Y.; Wang, K.; Tang, J. DeepInf: Social influence prediction with deep learning. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 19–23 August 2018; pp. 2110–2119. [Google Scholar] [CrossRef]
  27. Song, L.; Zhang, Y.; Wang, Z.; Gildea, D. A Graph-to-Sequence Model for AMR-to-Text Generation. arXiv 2018, arXiv:1805.02473. [Google Scholar]
  28. Cao, N.D.; Kipf, T. MolGAN: An implicit generative model for small molecular graphs. arXiv 2019, arXiv:1805.11973v2. [Google Scholar]
  29. Yan, S.; Xiong, Y.; Lin, D. Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. arXiv 2017, arXiv:1801.07455v2. [Google Scholar] [CrossRef]
  30. Wan, S.; Pan, S.; Zhong, S.; Yang, J.; Yang, J.; Zhan, Y.; Gong, C. Multi-level graph learning network for hyperspectral image classification. Pattern Recognit. 2022, 129, 108705. [Google Scholar] [CrossRef]
  31. Yang, P.; Tong, L.; Qian, B.; Gao, Z.; Yu, J.; Xiao, C. Hyperspectral Image Classification with Spectral and Spatial Graph Using Inductive Representation Learning Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 791–800. [Google Scholar] [CrossRef]
  32. Jia, S.; Jiang, S.; Zhang, S.; Xu, M.; Jia, X. Graph-in-Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 1157–1171. [Google Scholar] [CrossRef] [PubMed]
  33. Zhang, M.; Chen, Y. Link prediction based on graph neural networks. Adv. Neural Inf. Process. Syst. 2018, 31, 5165–5175. [Google Scholar]
  34. Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. IJCAI Int. Jt. Conf. Artif. Intell. 2018, 2018, 3634–3640. [Google Scholar]
  35. Guo, Z.; Li, X.; Huang, H.; Guo, N.; Li, Q. Deep learning-based image segmentation on multimodal medical imaging. IEEE Trans. Radiat. Plasma Med. Sci. 2019, 3, 162–169. [Google Scholar] [CrossRef] [PubMed]
  36. Ma, Y.; Tang, J. Deep Learning on Graphs; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
  37. Chadoulos, C.; Moustakidis, S.; Tsaopoulos, D.; Theocharis, J. Multi-atlas segmentation of knee cartilage by Propagating Labels via Semi-supervised Learning. In Proceedings of the IMIP 2022: 2022 4th International Conference on Intelligent Medicine and Image Processing, Tianjin, China, 18–21 March 2022; pp. 76–82. [Google Scholar] [CrossRef]
  38. Peterfy, C.G.; Schneider, E.; Nevitt, M. The osteoarthritis initiative: Report on the design rationale for the magnetic resonance imaging protocol for the knee. Osteoarthr. Cartil. 2008, 16, 1433–1441. [Google Scholar] [CrossRef]
  39. Sethian, J. Advancing interfaces: Level set and fast marching methods. In Level Set Methods and Fast Marching Methods, 2nd ed.; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  40. Sled, J.G.; Zijdenbos, A.P.; Evans, A.C. A nonparametric method for automatic correction of intensity nonuniformity in mri data. IEEE Trans. Med. Imaging 1998, 17, 87–97. [Google Scholar] [CrossRef]
  41. Nyul, L.G.; Udupa, J.K. Standardizing the MR image intensity scales: Making MR intensities have tissue specific meaning. Med. Imaging 2000 Image Disp. Vis. 2000, 3976, 496–504. [Google Scholar]
  42. Buades, A.; Coll, B.; Morel, J.M. Non-Local Means Denoising. Image Process. Line 2011, 1, 208–212. [Google Scholar] [CrossRef]
  43. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar] [CrossRef]
  44. Klaser, A.; Marszalek, M.; Schmid, C. A spatio-temporal descriptor based on 3D-gradients. In Proceedings of the British Machine Vision Conference (BMVC 2008), Leeds, UK, September 2008. [Google Scholar]
  45. Liu, Q.; Xiao, L.; Yang, J.; Wei, Z. CNN-Enhanced Graph Convolutional Network with Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8657–8671. [Google Scholar] [CrossRef]
  46. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  47. Peng, Y.; Zheng, H.; Liang, P.; Zhang, L.; Zaman, F.; Wu, X.; Sonka, M.; Chen, D.Z. KCB-Net: A 3D knee cartilage and bone segmentation network via sparse annotation. Med. Image Anal. 2022, 82, 102574. [Google Scholar] [CrossRef] [PubMed]
  48. Jaderberg, M.; Simonyan, K.; Zisserman, A.; Kavukcuoglu, K. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015, 28, 2017–2025. [Google Scholar]
  49. Schneider, E.; NessAiver, M.; White, D.; Purdy, D.; Martin, L.; Fanella, L.; Davis, D.; Vignone, M.; Wu, G.; Gullapalli, R. The osteoarthritis initiative (OAI) magnetic resonance imaging quality assurance methods and results. Osteoarthr. Cartil. 2008, 16, 994–1004. [Google Scholar] [CrossRef]
Figure 1. Outline of the proposed knee cartilage segmentation approach. It comprises the atlas subset selection (a), the graph construction part (b), a specific form of graph-based convolutional model (c), the adaptive graph learning (d), and the MLP network providing the class estimates for the segmentation of the target image. Black dots correspond to central nodes and colored nodes to neighboring ones, respectively. The encircled C symbol represents an aggregation function.
Figure 2. A typical knee MRI viewed in three orthogonal planes (left to right: sagittal, coronal, axial).
Figure 3. Schematic illustration of a sequence of aligned image graphs of root nodes, including the target graph (left) and the graphs of its corresponding atlases (right). There are local spatial affinities at aligned positions (horizontal axis), as well as global pairwise similarities between nodes located at different positions in the ROIs.
Figure 4. A generic  5 × 5 × 5  patch (left) representing a node  ( s = 0 ) . The corresponding 1-hop ( s = 1 , middle) and 2-hop neighborhoods ( s = 2 , right), corresponding to  9 × 9 × 9  and  13 × 13 × 13  hypercubes, respectively. Black dots correspond to root nodes, while colored ones stand for the neighboring nodes. All nodes are represented by  5 × 5 × 5  patches.
Figure 5. Schematic illustration of a sequence library for a specific scale  s = 1 , comprising the aligned neighborhoods from the target and the atlas images. Green arrows indicate the different scopes of the attention mechanism. For a particular root node, attention is paid to its own neighborhood, as well as the other neighborhoods in the sequence.
Figure 6. Outline of the convolutional units employed. (a) The local convolutional unit  ( L c o n v ) , (b) the global convolutional unit  ( G c o n v ) .
Figure 7. Detailed description of the local convolutional unit, which aggregates local contextual information from node neighborhoods at different spatial scales.
Figure 8. Illustration of the proposed cross-talk building model (CT-BM), where local and global convolutional units are combined following an alternating scheme.
Figure 9. Illustration of the proposed sequential building model (SEQ-BM), where the local and global learning tasks are carried out sequentially.
Figure 10. Description of the proposed DMA-GCN model, with densely connected block convolutional structure and residual skip connections.
Figure 11. Cartilage  DSC ( % )  score vs. number of atlases. (a) Femoral cart, (b) tibial cart.
Figure 12. Cartilage  DSC ( % )  score vs. number of attention heads. (a) Femoral cart, (b) tibial cart.
Figure 13. Cartilage  DSC ( % )  score vs. adjacency threshold. (a) Femoral cart, (b) tibial cart.
Figure 14. Cartilage  DSC ( % )  score vs. number of scales. (a) Femoral cart, (b) tibial cart.
Figure 15. Cartilage  DSC ( % )  score vs. number of dense layers. (a) Femoral cart, (b) tibial cart.
Figure 16. Cartilage  DSC ( % )  score vs. batch size. (a) Femoral cart, (b) tibial cart.
Figure 17. Cartilage  DSC ( % )  score vs. global module architecture. (a) Femoral cart, (b) tibial cart.
Figure 18. Segmentation results for femoral (FC) and tibial (TC) cartilage for the two main proposed models (DMA-GCN(SEQ) and DMA-GCN(CT)). The first part of the figure illustrates a case of successful application of DMA-GCN on a healthy knee (KL grade 0), while the second and third parts correspond to more challenging subjects with moderate (KL grade 2) and severe (KL grade 4) osteoarthritis. (Left to right: ground truth, DMA-GCN(SEQ), DMA-GCN(CT)—color coding: pink → FC, white → TC). (a) Segmentation showcase—KL grade 0. (b) Segmentation showcase—KL grade 2. (c) Segmentation showcase—KL grade 4.
Table 1. Summary of segmentation performance measures (means ± stds) of the two cartilage classes of our proposed method DMA-GCN (CT/SEQ) by varying the global component of the sub-modules (GCN, SAINT, SAGE, ClustGCN). Best results for each category (CT vs SEQ) with respect to DSC index are highlighted.
Femoral cartilage:

| Module | Mode | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|---|
| GAT-GCN | CT | 88.26 ± 0.076 | 88.91 ± 0.041 | 89.23 ± 0.074 | 22.87 ± 0.058 | 7.05 ± 0.048 |
| GAT-GCN | SEQ | 87.13 ± 0.077 | 88.12 ± 0.052 | 86.49 ± 0.061 | 23.06 ± 0.082 | 7.13 ± 0.029 |
| GAT-SAGE | CT | 96.17 ± 0.021 | 95.81 ± 0.019 | 95.71 ± 0.039 | 13.17 ± 0.055 | 3.94 ± 0.091 |
| GAT-SAGE | SEQ | 96.13 ± 0.061 | 95.21 ± 0.071 | 95.44 ± 0.065 | 13.21 ± 0.059 | 3.98 ± 0.031 |
| GAT-SAINT | CT | 92.91 ± 0.051 | 93.43 ± 0.039 | 92.87 ± 0.072 | 14.02 ± 0.082 | 5.21 ± 0.031 |
| GAT-SAINT | SEQ | 87.13 ± 0.066 | 88.12 ± 0.045 | 86.49 ± 0.061 | 23.06 ± 0.071 | 7.13 ± 0.049 |
| GAT-ClustGCN | CT | 93.29 ± 0.075 | 94.05 ± 0.043 | 92.18 ± 0.071 | 19.21 ± 0.062 | 6.02 ± 0.039 |
| GAT-ClustGCN | SEQ | 93.16 ± 0.037 | 93.75 ± 0.072 | 92.42 ± 0.057 | 19.09 ± 0.081 | 6.41 ± 0.047 |

Tibial cartilage:

| Module | Mode | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|---|
| GAT-GCN | CT | 87.02 ± 0.032 | 85.36 ± 0.022 | 90.12 ± 0.038 | 25.36 ± 0.078 | 7.97 ± 0.055 |
| GAT-GCN | SEQ | 86.88 ± 0.021 | 85.03 ± 0.035 | 84.79 ± 0.021 | 25.76 ± 0.041 | 8.01 ± 0.044 |
| GAT-SAGE | CT | 95.31 ± 0.029 | 94.78 ± 0.045 | 94.02 ± 0.036 | 17.98 ± 0.047 | 4.99 ± 0.022 |
| GAT-SAGE | SEQ | 95.19 ± 0.025 | 94.41 ± 0.045 | 93.87 ± 0.023 | 18.65 ± 0.038 | 5.03 ± 0.044 |
| GAT-SAINT | CT | 88.81 ± 0.023 | 86.05 ± 0.039 | 88.03 ± 0.057 | 25.39 ± 0.062 | 7.87 ± 0.044 |
| GAT-SAINT | SEQ | 87.92 ± 0.028 | 86.45 ± 0.056 | 85.91 ± 0.032 | 25.32 ± 0.033 | 7.71 ± 0.034 |
| GAT-ClustGCN | CT | 92.81 ± 0.027 | 93.17 ± 0.041 | 89.61 ± 0.026 | 22.13 ± 0.033 | 7.16 ± 0.049 |
| GAT-ClustGCN | SEQ | 92.59 ± 0.041 | 93.08 ± 0.032 | 89.24 ± 0.039 | 22.31 ± 0.043 | 7.63 ± 0.055 |
Table 2. Summary of segmentation performance measures (means ± stds) of the two cartilage classes of our proposed methods DMA-GCN (CT/SEQ) by varying the overall learning paradigm (transductive vs. inductive). Best results for each category (CT vs SEQ) with respect to DSC index are highlighted.
Femoral cartilage:

| Learning | Mode | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|---|
| Inductive | SEQ | 91.78 ± 0.051 | 89.61 ± 0.038 | 89.45 ± 0.086 | 15.11 ± 0.093 | 5.98 ± 0.067 |
| Inductive | CT | 85.78 ± 0.045 | 87.71 ± 0.033 | 85.87 ± 0.056 | 23.61 ± 0.069 | 7.79 ± 0.041 |
| Transductive | SEQ | 96.13 ± 0.121 | 95.21 ± 0.183 | 95.44 ± 0.022 | 13.21 ± 0.045 | 3.98 ± 0.089 |
| Transductive | CT | 96.17 ± 0.117 | 95.81 ± 0.189 | 95.71 ± 0.032 | 13.17 ± 0.051 | 3.94 ± 0.094 |

Tibial cartilage:

| Learning | Mode | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|---|
| Inductive | SEQ | 86.09 ± 0.036 | 84.29 ± 0.053 | 83.81 ± 0.041 | 26.02 ± 0.094 | 8.45 ± 0.078 |
| Inductive | CT | 85.92 ± 0.049 | 84.93 ± 0.071 | 84.05 ± 0.046 | 26.15 ± 0.029 | 8.71 ± 0.027 |
| Transductive | SEQ | 95.19 ± 0.031 | 94.41 ± 0.027 | 93.87 ± 0.024 | 18.65 ± 0.034 | 5.03 ± 0.023 |
| Transductive | CT | 95.31 ± 0.032 | 94.78 ± 0.025 | 94.02 ± 0.021 | 17.98 ± 0.034 | 4.99 ± 0.029 |
Table 3. Summary of segmentation performance measures (means ± stds) of the two cartilage classes of our proposed methods DMA-GCN (CT) and DMA-GCN (SEQ) compared to state of the art: 1. patch-based methods, 2. deep learning methods, 3. graph deep learning methods. Best results for all three categories (CT vs SEQ) with respect to DSC index are highlighted.
Femoral cartilage:

| Method | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|
| PBSC | 83.51 ± 0.066 | 82.65 ± 0.045 | 82.23 ± 0.061 | 29.97 ± 0.082 | 11.01 ± 0.037 |
| PBNLM | 84.12 ± 0.071 | 83.28 ± 0.045 | 84.09 ± 0.052 | 26.73 ± 0.061 | 8.25 ± 0.084 |
| HyLP | 94.04 ± 0.052 | 93.16 ± 0.051 | 92.56 ± 0.026 | 15.16 ± 0.028 | 5.12 ± 0.034 |
| SegNet | 89.18 ± 0.116 | 89.48 ± 0.219 | 89.09 ± 0.089 | 20.73 ± 0.039 | 5.65 ± 0.066 |
| DenseVoxNet | 88.75 ± 0.156 | 88.67 ± 0.204 | 87.54 ± 0.042 | 21.83 ± 0.048 | 6.45 ± 0.121 |
| VoxResNet | 88.03 ± 0.187 | 88.92 ± 0.205 | 88.12 ± 0.047 | 22.71 ± 0.055 | 6.64 ± 0.128 |
| KCB-Net | 89.74 ± 0.149 | 90.12 ± 0.185 | 88.92 ± 0.031 | 23.13 ± 0.042 | 6.72 ± 0.098 |
| CAN3D | 88.04 ± 0.156 | 88.54 ± 0.205 | 87.12 ± 0.030 | 22.93 ± 0.042 | 6.59 ± 0.098 |
| PointNet | 87.13 ± 0.121 | 88.12 ± 0.187 | 86.49 ± 0.023 | 23.06 ± 0.051 | 7.13 ± 0.104 |
| GCN | 90.19 ± 0.129 | 90.84 ± 0.126 | 89.23 ± 0.030 | 19.65 ± 0.046 | 5.48 ± 0.098 |
| SGC | 91.02 ± 0.212 | 91.31 ± 0.132 | 89.84 ± 0.032 | 17.41 ± 0.064 | 5.19 ± 0.098 |
| ClusterGCN | 90.56 ± 0.141 | 91.08 ± 0.150 | 90.12 ± 0.039 | 17.33 ± 0.058 | 5.16 ± 0.107 |
| GraphSAINT | 92.61 ± 0.132 | 92.74 ± 0.131 | 90.87 ± 0.027 | 17.18 ± 0.051 | 5.16 ± 0.102 |
| GraphSAGE | 92.87 ± 0.129 | 92.91 ± 0.144 | 90.95 ± 0.031 | 17.12 ± 0.065 | 5.09 ± 0.098 |
| GAT | 93.14 ± 0.141 | 93.09 ± 0.203 | 92.87 ± 0.045 | 13.90 ± 0.051 | 4.36 ± 0.101 |
| MGCN | 94.11 ± 0.125 | 93.92 ± 0.181 | 93.27 ± 0.029 | 14.05 ± 0.045 | 4.27 ± 0.096 |
| DMA-GCN (SEQ) | 96.13 ± 0.121 | 95.21 ± 0.183 | 95.44 ± 0.022 | 13.21 ± 0.045 | 3.98 ± 0.089 |
| DMA-GCN (CT) | 96.17 ± 0.117 | 95.81 ± 0.189 | 95.71 ± 0.032 | 13.17 ± 0.051 | 3.94 ± 0.094 |

Tibial cartilage:

| Method | Recall (%) | Precision (%) | DSC (%) | VOE (%) | VD (%) |
|---|---|---|---|---|---|
| PBSC | 79.76 ± 0.021 | 81.45 ± 0.036 | 78.85 ± 0.027 | 34.28 ± 0.031 | 11.79 ± 0.041 |
| PBNLM | 81.22 ± 0.038 | 82.07 ± 0.019 | 80.04 ± 0.027 | 33.91 ± 0.031 | 11.41 ± 0.029 |
| HyLP | 91.08 ± 0.074 | 89.98 ± 0.023 | 89.91 ± 0.018 | 19.67 ± 0.025 | 5.85 ± 0.011 |
| SegNet | 87.22 ± 0.056 | 89.07 ± 0.062 | 86.12 ± 0.034 | 22.79 ± 0.012 | 6.23 ± 0.016 |
| DenseVoxNet | 87.45 ± 0.076 | 86.03 ± 0.041 | 85.68 ± 0.047 | 25.47 ± 0.028 | 7.98 ± 0.025 |
| VoxResNet | 87.04 ± 0.071 | 85.26 ± 0.044 | 85.12 ± 0.043 | 26.03 ± 0.032 | 8.04 ± 0.029 |
| KCB-Net | 88.12 ± 0.055 | 87.46 ± 0.029 | 87.92 ± 0.033 | 25.90 ± 0.017 | 8.04 ± 0.021 |
| CAN3D | 87.28 ± 0.055 | 85.26 ± 0.029 | 85.02 ± 0.033 | 25.76 ± 0.017 | 8.01 ± 0.021 |
| PointNet | 86.88 ± 0.062 | 85.03 ± 0.031 | 84.79 ± 0.023 | 24.92 ± 0.024 | 7.39 ± 0.018 |
| GCN | 88.92 ± 0.059 | 89.02 ± 0.033 | 88.26 ± 0.021 | 23.27 ± 0.028 | 6.78 ± 0.017 |
| SGC | 89.81 ± 0.061 | 89.54 ± 0.047 | 89.02 ± 0.032 | 22.11 ± 0.039 | 5.89 ± 0.028 |
| ClusterGCN | 90.28 ± 0.054 | 91.05 ± 0.061 | 89.93 ± 0.039 | 22.08 ± 0.036 | 5.82 ± 0.024 |
| GraphSAINT | 91.75 ± 0.054 | 91.04 ± 0.029 | 90.12 ± 0.026 | 22.04 ± 0.022 | 5.76 ± 0.020 |
| GraphSAGE | 92.04 ± 0.049 | 92.53 ± 0.033 | 90.49 ± 0.038 | 20.71 ± 0.027 | 5.66 ± 0.024 |
| GAT | 93.29 ± 0.055 | 93.81 ± 0.031 | 90.86 ± 0.019 | 19.58 ± 0.029 | 5.61 ± 0.026 |
| MGCN | 93.79 ± 0.041 | 94.06 ± 0.027 | 91.43 ± 0.024 | 19.84 ± 0.035 | 5.34 ± 0.021 |
| DMA-GCN (SEQ) | 95.19 ± 0.031 | 94.41 ± 0.027 | 93.87 ± 0.024 | 18.65 ± 0.034 | 5.03 ± 0.023 |
| DMA-GCN (CT) | 95.31 ± 0.032 | 94.78 ± 0.025 | 94.02 ± 0.021 | 17.98 ± 0.034 | 4.99 ± 0.029 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
