Article

Spatial Perspectives toward the Recommendation of Remote Sensing Images Using the INDEX Indicator, Based on Principal Component Analysis

1 Department of Geomatics, National Cheng Kung University, Tainan 701, Taiwan
2 Information Division, National Science and Technology Center for Disaster Reduction, New Taipei City 23143, Taiwan
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(8), 1277; https://doi.org/10.3390/rs12081277
Submission received: 10 March 2020 / Revised: 5 April 2020 / Accepted: 14 April 2020 / Published: 17 April 2020
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Progress in the development of sensor technology has increased the speed and convenience of remote sensing (RS) image acquisition. As the volume of RS images steadily increases, the challenge is no longer in producing or acquiring an RS image, but in finding, among numerous RS images, the particular image that precisely meets user application needs. Some spatial measuring methods specific to the recommendation of RS images have been proposed and can be used to score and sort RS images according to users’ requests. Our previous study introduced two measuring methods, namely, available space (AS) and image extension (IE), which produce similar results but have complementary effects for spatially ranking recommended images: the AS indicator can cover the inadequacies of the IE indicator in some cases and vice versa. The current study combines these two indicators using principal component analysis and produces a new indicator called INDEX, which we use for RS image spatial recommendation. The ranking results were measured using the normalized discounted cumulative gain (NDCG) and several other statistical criteria. The results indicate that users are more satisfied with the recommendations of the INDEX indicator than with those of AS, IE and the Hausdorff distance for single RS image type selections, which is the most common scenario for RS image applications. When dealing with hybrid RS image types, the INDEX indicator performs very close to the dominant IE indicator while maintaining the characteristics of the AS indicator.

Graphical Abstract

1. Introduction

In recent years, the ability to acquire remote sensing data has improved to an unprecedented level. Evidently, handling, storing, managing and making the best use of this tremendous volume of data is a massive challenge [1]. The correct use of remote sensing images has proven to be an effective solution for resolving real-world problems; for example, Liu and Di [2] introduced the latest theory, methods, and applications for managing, exploiting, and analyzing remote sensing big data. To optimize the efficiency of the geospatial service in a flood response decision-making system, a parallel agent-as-a-service (P-AaaS) method was proposed and implemented on a cloud computing platform [3]. The P-AaaS method includes a parallel architecture and a mechanism for adjusting the computational resources and the execution algorithm. Sub-pixel land-cover change detection using images of different resolutions has been addressed by considering multi-scale and dynamic-state characteristics [4]; a novel approach based on a back propagation neural network with different resolution images was proposed to solve the problem of mixed pixels in change detection. Ding [5] presents the association rules-based coastal land use spatial sequence model (ARCLUSSM) to mine sequential patterns of land use with interesting associations in the sea–land direction of a coastal zone. ARCLUSSM is a good application of remote sensing big data and focuses on land use in the sea–land direction and the sequential relationship between land-use types. As the variety and volume of remote sensing (RS) images grow at an overwhelming speed, users need an efficient and effective way to find the RS images that best fit their applications, and the design of recommendation methods therefore becomes an important topic. Although many factors must be considered, an essential and necessary requirement is the spatial perspective, i.e., the area of interest (AOI) must be visible in the RS image. This location-based constraint narrows the data range according to the location of the demand. The studies in [6,7] address location-based recommendation and use point patterns to represent user preferences. Rather than simply using point patterns, Wang [8] further considers the sequential influence of locations.
Two primary challenges in the supply of RS imagery are the selection of candidate images that match the spatial relationships given by the search constraints based on an AOI and the recommendation of optimal spatial rankings for the candidate images. These issues were addressed in our previous study, which proposed the location-based RS image finding engine (LIFE) search engine framework for RS images [9]. This framework includes RS image metadata databases and a recommendation engine. Figure 1 shows that the recommendation is based on the results of two proposed quantitative parameters, namely, the available space (AS) and image extension (IE), according to a user-defined AOI. All qualified images are ranked by each of the proposed indicators.
For given RS image datasets, the AS and IE indicators can effectively compare the spatial conditions between the AOI and RS images and separately rank the candidate images. As the ranking behaviors of these two parameters differ, the representative indicator proposed in this research intends to combine the qualities of the AS and IE indicators such that their complementary characteristics are retained to facilitate user selection among the RS image search results. A model that can merge multiple variables is needed to combine the highly correlated AS and IE spatial ranking parameters. Using the experimental platform, AOIs, and candidate image databases from our previous study, we augmented the LIFE framework by introducing a method to linearly merge the AS and IE spatial ranking parameters, thereby deriving a new spatial ranking indicator named INDEX. After comparing factor analysis (FA) and principal component analysis (PCA), PCA was chosen as the method for merging the AS and IE indicators. Further discussion of the reasons for choosing PCA is included in the next section.
This paper focuses on the recommendation of remote sensing images from the spatial perspective. In addition to the required topological constraint, i.e., contains, the selection of RS images may also be based on other factors, e.g., spectrum, imagery ground resolution, cloud coverage, and temporal constraints. By specifying an acceptable range of values according to the application requirements, e.g., a specific spectral band, ground resolution higher than 2 m, acquisition time between 2017 and 2019, and cloud coverage less than 30%, we can easily use the database management system to reduce the number of candidate images. As the spatial coverage of RS images is usually arbitrary and an area may be covered by multiple RS images, the spatial recommendation strategies must take the “difference in degree” between the AOI and candidate RS images into consideration, not only the topological relationship of “contains.” The proposed indicators were therefore developed for ranking candidate images from the spatial perspective. The recommendation strategies for other factors are beyond the scope of this research, but can easily be added to the RS imagery selection module.
The remainder of this paper is organized as follows. Section 2 reviews and compares models that can be used to merge the AS and IE indicators. Section 3 states the problem examined in this research and explains our proposed solution. Section 4 introduces the definition of the INDEX indicator and the designed procedure. Section 5 presents the major findings from the experimental data. Section 6 discusses the findings and future work. Lastly, Section 7 provides the conclusions of this study.

2. Related Work

The objective of this paper is to combine the indicators proposed in our previous study [9], using a variable reduction method to endow a single indicator with the characteristics of both. For RS image users without expertise and experience in handling complicated platforms or archives, an effective spatial ranking and recommendation indicator may help in finding useful images, a task that may otherwise consume most of the precious time in RS image applications. This section describes the definitions of the proposed spatial ranking indicators and variable reduction methods.

2.1. RS Image Ranking Indicators

Despite the fact that many ranking mechanisms for spatial data have been proposed, the majority of these proposals rank images only according to combinations and calculations of predefined geographical location attributes of the data [10,11,12]. In the LIFE framework, we proposed the AS and IE indicators, which measure the extensibility and centrality of data, respectively. These indicators are defined as follows.

2.1.1. AS Indicator

The idea of the AS indicator was inspired by considering the neighborhood area of the AOI, which is often neglected by conventional ordering methods but may be very useful as an additional reference. Originating from the idea of distance buffers [13], the AS indicator uses the maximum buffering distance (MBD) as the basis for measuring the additional spatial coverage adjacent to the AOI in each RS image (see Definition 1). Instead of choosing a fixed distance buffer, our proposed method uses a dynamically defined boundary and MBD, meaning that the distance buffer is determined according to the relative location between an AOI and an RS image, and every image will have its own AS indicator value after the AOI is specified.
Definition 1.
[14] MBD. Given an AOI aoi = <l1, l2, …, ln>, an RS image rsi = <id, c, llt, lrt, llb, lrb>, and aoi ⊆ rsi, the MBD of rsi for aoi, denoted as MBD(rsi, aoi), is defined in Equation (1).
$$MBD(rsi, aoi) = \min_{l \in aoi}\Big(\min\big(|l.x - l_{lt}.x|,\ |l.x - l_{rb}.x|,\ |l.y - l_{lt}.y|,\ |l.y - l_{rb}.y|\big)\Big). \tag{1}$$
Definition 2.
[14] Extensibility. Given an AOI aoi = <l1, l2, …, ln>, an RS image rsi = <id, c, llt, lrt, llb, lrb>, and aoi ⊆ rsi, the Extensibility of rsi for aoi, denoted as Extensibility(rsi, aoi), physically means the maximum buffering area (MBA), which is defined in Equation (2), where the symbol “%” indicates the modulo operation.
$$Extensibility(rsi, aoi) = \sum_{i=1}^{n} dis\big(l_i, l_{(i \% n)+1}\big) \times MBD(rsi, aoi) + \pi \times MBD(rsi, aoi)^2. \tag{2}$$

2.1.2. IE Indicator

The idea of the IE indicator is to measure the closeness of the center points of an AOI and an RS image. The IE indicator measures the extent to which the RS image must be expanded so that the center point of the RS image coincides with the center of the AOI. The area of this expansion is denoted as the minimum expanded area (MEA), defined in Equation (3). The objective of the IE indicator is to measure the centrality of an RS image with respect to the AOI, which is defined in Equation (4). The proportion of MEA in the RS image is proportional to the difference between the center points of the AOI and the RS image.
Definition 3.
[14] MEA. Given an AOI aoi = <l1, l2, …, ln>, an RS image rsi = <id, c, llt, lrt, llb, lrb>, and aoi ⊆ rsi, the MEA of rsi for aoi, denoted as MEA(rsi, aoi), is defined in Equation (3), where MBR = <lMBRlt, lMBRrt, lMBRlb, lMBRrb> indicates the four corner locations of the minimum boundary rectangle of aoi.
$$\begin{aligned} MEA(rsi, aoi) &= XExpand \times |l_{lt}.y - l_{rb}.y| + YExpand \times |l_{lt}.x - l_{rb}.x| + XExpand \times YExpand \\ XExpand &= \big|\,|l_{MBRlt}.x - l_{lt}.x| - |l_{MBRrb}.x - l_{rb}.x|\,\big| \\ YExpand &= \big|\,|l_{MBRlt}.y - l_{lt}.y| - |l_{MBRrb}.y - l_{rb}.y|\,\big| \end{aligned} \tag{3}$$
Definition 4.
[14] Centrality. Given an AOI aoi = <l1, l2, …, ln>, an RS image rsi = <id, c, llt, lrt, llb, lrb>, and aoi ⊆ rsi, the Centrality of rsi for aoi, denoted as Centrality(rsi, aoi), is defined in Equation (4).
$$Centrality(rsi, aoi) = \frac{|l_{lt}.x - l_{rb}.x| \times |l_{lt}.y - l_{rb}.y|}{|l_{lt}.x - l_{rb}.x| \times |l_{lt}.y - l_{rb}.y| + MEA(rsi, aoi)}. \tag{4}$$
The values of the AS and IE indicators are based on the respective measurements of the extensibility and centrality of a candidate image relative to the AOI. Our previous study shows that each of the AS and IE indicators presents a unique set of behaviors and characteristics for ranking RS images, and they can also supplement each other. For example, IE can be used to rank two RS images whose AS values are indistinguishable by AS alone, and vice versa (Figure 2).
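For axis-aligned rectangular images, the four measures above can be computed directly from the corner coordinates. The following Python sketch illustrates one possible implementation under that assumption; the function names and the corner-tuple representation are illustrative and are not taken from the LIFE implementation:

import math

def mbd(aoi_points, llt, lrb):
    """Maximum buffering distance: smallest distance from any AOI vertex
    to the image boundary (Definition 1)."""
    return min(
        min(abs(x - llt[0]), abs(x - lrb[0]), abs(y - llt[1]), abs(y - lrb[1]))
        for x, y in aoi_points
    )

def extensibility(aoi_points, llt, lrb):
    """Maximum buffering area around the AOI perimeter (Definition 2)."""
    d = mbd(aoi_points, llt, lrb)
    n = len(aoi_points)
    perimeter = sum(math.dist(aoi_points[i], aoi_points[(i + 1) % n]) for i in range(n))
    return perimeter * d + math.pi * d ** 2

def mea(aoi_points, llt, lrb):
    """Minimum expanded area needed to center the image on the AOI (Definition 3)."""
    xs = [p[0] for p in aoi_points]
    ys = [p[1] for p in aoi_points]
    mbr_lt, mbr_rb = (min(xs), max(ys)), (max(xs), min(ys))  # MBR corners of the AOI
    x_expand = abs(abs(mbr_lt[0] - llt[0]) - abs(mbr_rb[0] - lrb[0]))
    y_expand = abs(abs(mbr_lt[1] - llt[1]) - abs(mbr_rb[1] - lrb[1]))
    width, height = abs(llt[0] - lrb[0]), abs(llt[1] - lrb[1])
    return x_expand * height + y_expand * width + x_expand * y_expand

def centrality(aoi_points, llt, lrb):
    """Centrality of the image with respect to the AOI (Definition 4)."""
    area = abs(llt[0] - lrb[0]) * abs(llt[1] - lrb[1])
    return area / (area + mea(aoi_points, llt, lrb))

# Example: a 10 km x 10 km AOI inside a 20 km x 20 km image (coordinates in km)
# aoi = [(5, 5), (15, 5), (15, 15), (5, 15)]
# print(extensibility(aoi, llt=(0, 20), lrb=(20, 0)), centrality(aoi, llt=(0, 20), lrb=(20, 0)))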

2.2. Variable Reduction

Multivariate statistical analysis (also known as multi-element statistical analysis or multivariate analysis) is a branch of statistics frequently used in such areas as managerial science [15], social science [16], and biology [17]. Some of the most common methods employed for multivariate analysis include principal component analysis (PCA), factor analysis (FA), and canonical correlation analysis [18]. PCA and FA are superficially similar but are completely different methods in practice.

2.2.1. Factor Analysis

  • Exploratory factor analysis
Exploratory factor analysis (EFA) is a technique to find the essential structure of diversified observational variables and perform processing and dimensionality reduction. Therefore, EFA could synthesize variables with complicated relationships into a few core factors. Spearman [19] proposed a single intellectual factor for EFA. However, further research indicates Spearman’s single factor theory is inadequate due to lack of diversity and applicability. Thurstone [20] challenged the popular single factor theory assumption and proposed multiple factor analysis, which broke through the limitations in terms of the number of factors. The purpose of EFA is therefore to reduce the dimensions of the original space and represent the original data in a lower-dimensional space of compact variables [21].
  • Confirmatory Factor Analysis
Confirmatory factor analysis (CFA) is a statistical approach to verify the known factor structures or assumptive theory. The purpose of CFA is to use the actual data collected to verify the consistency of previously assumed factor structures [22].
CFA and EFA are similar to each other in many aspects, because they are both linear statistical models that assume a normal distribution in the sample space. Both methods also involve the discovery of latent structures and variable measurement. However, these two methods differ in various important aspects. EFA is mainly used to search for unknown structures and factors in data, whereas CFA is used to validate theoretically deduced factors. For example, Featherman and Pavlou [23] used a CFA model that was deduced theoretically instead of derived from their data. For CFA, a model must be predefined before factor validation can be performed, whereas EFA probes for structures that connect the factors within a data set [24].
  • CFA
    Requires a model a priori
    Requires the number of factors
    Requires information on which items load on each factor
    Requires a model supported by theory or previous research
    Requires explicit error description
  • EFA
    Determines the factor structure (model)
    Explains a maximum amount of variance

2.2.2. Principal Component Analysis

PCA is a technique for analyzing and simplifying data sets. The formal description of PCA, as proposed by Pearson [25], is to find the line or surface nearest to a sample in the sample space, thereby effectively reducing the data dimension. The main idea of PCA is to analyze the characteristic properties of a covariance matrix to obtain the principal components of the data (eigenvectors) and their weights (eigenvalues), retaining the lower-order principal components (corresponding to the largest eigenvalues). The higher-order principal components (corresponding to the smallest eigenvalues) are abandoned to reduce the data set dimension while preserving the maximum data set variation. More discussion about this approach can be found in [26,27].
The PCA process is as follows.
  • Suppose that we have n independent observations of the p-element random vector x, denoted by x1, x2, …, xn. The deviation matrix X is defined as follows:
    $$X = \begin{bmatrix} (x_1 - m)^T \\ \vdots \\ (x_n - m)^T \end{bmatrix}, \quad \text{where } m = \frac{1}{n}\sum_{k=1}^{n} x_k.$$
  • The covariance matrix S is defined as follows:
    $$S = \frac{1}{n-1} X^T X$$
  • The eigenvalues of S are obtained by solving $|S - \lambda I_p| = 0$, where λ denotes an eigenvalue; p eigenvalues, ordered so that $\lambda_j \ge \lambda_{j+1}$, are obtained. The number of retained components r is the smallest value satisfying $\frac{\sum_{j=1}^{r} \lambda_j}{\sum_{j=1}^{p} \lambda_j} \ge 0.85$, so that at least 85% of the data variation is retained. For each $\lambda_j,\ j = 1, 2, \ldots, r$, the equation $S c_j = \lambda_j c_j$ is solved to obtain the eigenvector $c_j$.
  • The principal component matrix C is defined as follows:
    $$C = [c_1, c_2, \ldots, c_r]$$
  • The coefficient matrix Z of the principal components is defined as follows:
    $$Z = \begin{bmatrix} (x_1 - m)^T \\ \vdots \\ (x_n - m)^T \end{bmatrix} [c_1, c_2, \ldots, c_r] = XC$$
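The steps above translate almost directly into a short numpy routine. The following sketch is illustrative only; the function name, the 0.85 retained-variance threshold taken from the text, and the data layout (one observation per row) are assumptions:

import numpy as np

def pca(data, variance_threshold=0.85):
    # data: n x p array, one observation per row
    X = data - data.mean(axis=0)                 # deviation matrix (x_k - m)
    S = X.T @ X / (X.shape[0] - 1)               # covariance matrix S
    eigvals, eigvecs = np.linalg.eigh(S)         # eigen-decomposition of S
    order = np.argsort(eigvals)[::-1]            # sort eigenvalues in descending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()   # cumulative explained variance
    r = int(np.searchsorted(ratio, variance_threshold) + 1)  # smallest r retaining >= 85%
    C = eigvecs[:, :r]                           # principal component matrix C
    Z = X @ C                                    # principal component scores Z = XC
    return eigvals, C, Z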

2.2.3. Comparison of FA and PCA

According to the FA model defined in [28], Figure 3 shows the different roles that FA and PCA play in our work. FA (left part of Figure 3) is for discovering the latent factors and their weights (denoted by b1, b2, b3 and b4), namely centrality and extensibility for RS image spatial selection, which formed the foundations of the AS and IE indicators in our previous study. The PCA adopted in this research (right part of Figure 3) combines the two observations with calculated weights (denoted by w1 and w2) into a new indicator for more efficient and effective RS image spatial recommendation.
This study aims to integrate the two indicators proposed by the LIFE framework from our previous study through variable reduction. EFA is used to determine the latent variables that have an overall effect on the data, thereby summarizing and simplifying the data. CFA is subsequently used to validate the known variable structures and models. Apart from the reduction of variables, this research also aims to retain the characteristics of the two aforementioned indicators after they are merged. Therefore, the FA-based variable reduction for discovering latent factors (centrality and extensibility) and related weights is not used, because it is unable to determine the principal component from observations (AS, IE), which is the target of this study.

3. Problem Statement

The core idea of the INDEX indicator is inspired by the shortcomings of the AS and IE indicators when they are used for ranking recommendations. Our previous study found that the ranking behaviors of AS and IE differ completely from each other under certain AOI/RS image conditions. If the two indicators give contradicting suggestions, a dilemma arises as to which one should be chosen for ranking. The two indicators have their respective significance; previous research on the normalized discounted cumulative gain (NDCG) indicates that both of them can be given a certain degree of approval. Therefore, we do not attempt to recommend to the user which of the two indicators to use. Instead, we propose an integrated solution that provides a new indicator combining the characteristics of both, thereby enabling users to manage only a single consideration. The growing demand for RS images in various domains and applications has also resulted in the establishment of cloud-based RS image platforms such as GEE. The addition of an RS image recommendation capability enables such platforms to provide more precise recommendations and to reduce the time and effort domain users spend finding the best images for their applications.
The selection of remote sensing images may involve multiple types of constraint, and spatial constraints are a necessary consideration. By using the proposed ranking indicators, users can acquire a sorted list of recommended results that meet their spatial conditions for specific areas of interest. For increasingly complex remote sensing image products, a comprehensive and effective spatial recommendation can facilitate the acquisition of potentially suitable images for users without expert knowledge, which is the first step in starting remote sensing image applications.

4. New Indicator Design Based on PCA

This section describes the design of the INDEX indicator and its role in the LIFE framework. According to the discussions in previous sections, the INDEX indicator must retain the characteristics of the AS and IE indicators, yet deliver better spatial recommendations for RS image users. In order to merge the AS and IE indicators discussed in Section 2.1 properly, PCA was adopted as the method for producing the INDEX indicator, according to the discussion in Section 2.2. The fundamental definition and calculation workflow of the INDEX indicator are discussed in Section 4.1. In Section 4.2, the expanded LIFE framework is explored.

4.1. The INDEX Indicator

Our objective is to combine the AS and IE indicators from our previous study. However, the obtained AS and IE values should first be normalized, because the ranges of the non-normalized AS and IE values are quite divergent and difficult to control. Therefore, we normalized the AS and IE indicators and used PCA to merge them.
PCA seeks to reduce the number of variables while simultaneously reflecting inter-variable relationships. To use PCA to merge the AS and IE indicators, the AS and IE eigenvectors were first obtained. These eigenvectors were subsequently multiplied by the corresponding normalized values and combined to form the INDEX indicator. The detailed procedure for combining the AS and IE indicators into the INDEX indicator is described in Figure 4 and in the pseudo-code below.
  • Pseudo code for the INDEX Indicator
    Begin
  • Given an AOI aoi = <l1, l2, …, ln>, an RS image rsi = <id, c, llt, lrt, llb, lrb>
  • Obtain the MBD and Extensibility of rsi for aoi:
    $$MBD(rsi, aoi) = \min_{l \in aoi}\Big(\min\big(|l.x - l_{lt}.x|,\ |l.x - l_{rb}.x|,\ |l.y - l_{lt}.y|,\ |l.y - l_{rb}.y|\big)\Big)$$
    $$Extensibility(rsi, aoi) = \sum_{i=1}^{n} dis\big(l_i, l_{(i \% n)+1}\big) \times MBD(rsi, aoi) + \pi \times MBD(rsi, aoi)^2$$
  • Obtain the MEA and Centrality of rsi for aoi:
    $$\begin{aligned} MEA(rsi, aoi) &= XExpand \times |l_{lt}.y - l_{rb}.y| + YExpand \times |l_{lt}.x - l_{rb}.x| + XExpand \times YExpand \\ XExpand &= \big|\,|l_{MBRlt}.x - l_{lt}.x| - |l_{MBRrb}.x - l_{rb}.x|\,\big| \\ YExpand &= \big|\,|l_{MBRlt}.y - l_{lt}.y| - |l_{MBRrb}.y - l_{rb}.y|\,\big| \end{aligned}$$
    $$Centrality(rsi, aoi) = \frac{|l_{lt}.x - l_{rb}.x| \times |l_{lt}.y - l_{rb}.y|}{|l_{lt}.x - l_{rb}.x| \times |l_{lt}.y - l_{rb}.y| + MEA(rsi, aoi)}$$
    • Let minDist = min(right_MinDist, left_MinDist, upper_MinDist, bottom_MinDist)
    • AS = BufferArea(AOI, minDist)
    • IE = (left_MinDist − right_MinDist) × Y + (bottom_MinDist − upper_MinDist) × (X + (left_MinDist − right_MinDist))
  • Normalization
    • N_AS = (AS − min(AS)) / (max(AS) − min(AS))
    • N_IE = 1 − (IE − min(IE)) / (max(IE) − min(IE))
  • PCA calculation
    Obtain the covariance matrix, then compute its eigenvalues and eigenvectors:
    $$c = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} w_{1,1} & \cdots & w_{1,n} \\ \vdots & \ddots & \vdots \\ w_{n,1} & \cdots & w_{n,n} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
    where c is the vector of the principal components, W denotes the transformation matrix, and x denotes the vector of the original data.
  • Select c1 and c2 as the eigenvector components corresponding to N_AS and N_IE.
    INDEX = c1 × N_AS + c2 × N_IE.
  • END
Steps 1 to 3 define AS and IE. Step 4 normalizes AS and IE; their normalized values are denoted as N_AS and N_IE, respectively. Step 5 defines the PCA. Step 6 identifies the eigenvector components of AS and IE through Step 5, and the INDEX is subsequently formed by multiplying N_AS and N_IE by their corresponding eigenvector components.
We use the following example to illustrate the INDEX parameter.
Let right_MinDist = k1, left_MinDist = k2, upper_MinDist = k3, bottom_MinDist = k4.
Assume that min(k1, k2, k3, k4) = k1. Then, minDist = k1.
AS = BufferArea(aoi, k1).
IE = (k2 − k1) × Y + (k4 − k3) × (X + (k2 − k1)).
Thereafter, N_AS = (AS − min(AS)) / (max(AS) − min(AS)) and N_IE = 1 − (IE − min(IE)) / (max(IE) − min(IE)).
Let x1 and x2 be the vectors of N_AS and N_IE, respectively.
By the PCA calculation, we obtain the eigenvectors c1 and c2.
Thereafter, $c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} w_1 & w_2 \\ w_3 & w_4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} w_1 x_1 + w_2 x_2 \\ w_3 x_1 + w_4 x_2 \end{pmatrix}$.
Therefore, INDEX = (w1x1 + w2x2) × N_AS + (w3x1 + w4x2) × N_IE.
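For concreteness, the following Python sketch mirrors the pseudo-code for one AOI and its candidate images, interpreting c1 and c2 as the components of the first eigenvector of the covariance matrix of the normalized indicators; the function name and input layout are illustrative, not the authors' implementation:

import numpy as np

def index_scores(as_values, ie_values):
    as_values = np.asarray(as_values, dtype=float)
    ie_values = np.asarray(ie_values, dtype=float)
    # Step 4: normalization (note the "1 - ..." form for IE, as in the pseudo-code)
    n_as = (as_values - as_values.min()) / (as_values.max() - as_values.min())
    n_ie = 1 - (ie_values - ie_values.min()) / (ie_values.max() - ie_values.min())
    # Step 5: PCA on the two normalized indicators
    data = np.column_stack([n_as, n_ie])
    X = data - data.mean(axis=0)
    S = X.T @ X / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(S)
    c1, c2 = eigvecs[:, np.argmax(eigvals)]      # weights from the first eigenvector
    if c1 < 0:                                   # fix the arbitrary sign of the eigenvector
        c1, c2 = -c1, -c2
    # Step 6: combine the normalized indicators into the INDEX
    return c1 * n_as + c2 * n_ie

# Ranking the candidate images of one AOI (values are made up):
# scores = index_scores([1.2e7, 3.4e7, 2.1e7], [0.8e6, 1.5e6, 0.3e6])
# ranking = np.argsort(scores)[::-1]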
A demonstration of the PCA-related statistics is shown in Figure 5. After the AOI (no. 62, the orange polygon in Figure 5) is selected, the image database returns nine images (the red and blue rectangles) that satisfy the condition of containing the AOI. For each pair of AOI and candidate image, the values of the AS, IE, Hausdorff distance, and INDEX indicators were calculated.
The parameter calculation data of the AOI (62) are shown in Figure 6. These data include nine records with attributes such as aoiId, imageId, normalized AS, normalized IE, Hausdorff distance, the INDEX indicator, and the eigenvector components of the PCA calculation (c1, c2).
Figure 7 shows the relevant statistics of the PCA calculations for the INDEX indicator for the AOI in Figure 5, including the covariance matrix, eigenvalues, eigenvectors, factor loadings, factor scores, and cumulative variances (%). The eigenvalues associated with the AS and IE indicators were 0.170 and 0.064, and the cumulative variance from the AS to the IE indicator was 72.534% and 100%, respectively. These results show that the first component explains 72.534% of the variance of the original data, which is consistent with the eigenvalues (0.170 and 0.064) and the scree plot in Figure 7.

4.2. System Framework

Figure 8 shows the three modules in this system. The first module is the RS image database module, which contains six data sets with varying sizes of RS images. The second module is the LIFE framework, which performs RS image spatial ranking according to the AOI input from the user and generates two different types of spatial recommendations that correspond to the AS and IE indicators. The third module is the INDEX generator described in this study.

5. Experimental Evaluations

In order to deliver comprehensive discussion and comparison among the AS, IE, INDEX and Hausdorff distance indicators, four sets of experiments are discussed in this section. In Section 5.1, we compare the ranking behaviors of the AS, IE and INDEX indicators using 1000 simulated images of different sizes. In Section 5.2, we describe a user score collection platform built over 10,000 simulated images of six RS image formats, and we discuss user preferences for the four indicators based on the scores collected for each image–AOI pair. In Section 5.3, we use the NDCG method to compare the results of the four indicators among the six simulated RS image formats; we also calculate the improvement rates of the INDEX indicator with respect to the other three indicators for the six RS image formats based on the NDCG results. In Section 5.4, we use five ranking evaluation criteria based on the user scores and the corresponding precision and recall to compare the performance of the four indicators.

5.1. Analyses of Spatial Recommendation Indicator Behaviors

This section uses a single AOI to compare the ranking behaviors of the three proposed indicators (AS, IE and INDEX). All simulated images conform to the setting conditions, in which the spatial extent of each image entirely contains the spatial extent of the AOI (refer to Figure 9). As the AS, IE and INDEX are correlated, their ranking behaviors are separately evaluated in the following three subsections. The ranking results of the Hausdorff distance are further added in Sections 5.2–5.4.
A 10 km × 10 km AOI was first specified, and 1000 images of different sizes completely containing the AOI were randomly simulated. The magnification of the size of the simulated images ranged from 1.1 to 3.0, with an interval of 0.1. Three indicators (AS, IE and INDEX) were calculated for each of the 1000 simulated images (Figure 9).
Figure 10 summarizes the figures and tables and their major focuses in the following discussions.

5.1.1. AS Indicator Score According to the Sorting Pattern

Figure 11 shows the distribution of IE after sorting by the AS indicator score. The AS indicator score of the tested images maintains a flat growth at lower values and climbs at the end (Figure 11), mainly because the AS indicator score is normalized with respect to the maximum and minimum AS scores of the images. When a larger image provides a higher AS indicator score, the AS indicator scores of smaller images are compressed and do not change significantly, thereby resulting in a gradual change. Further analysis shows that the average size ratio of the top 100 AS-ranked images among the 1000 tested images is 2.45. Thus, a larger image is considerably favored in AS-based sorting. By contrast, Figure 11 also shows that many images with lower AS indicator scores have excellent IE counterparts. Consequently, the two indicators may produce different ranking results regarding optimal image recommendation. Figure 12 limits the results to the top 100 AS-ranked images. The IE indicator score of the topmost AS-ranked image does not rank first and, due to fluctuations, the IE indicator values of two adjacent AS-ranked images (with only a minor discrepancy in AS score) may change dramatically. Only 35 of the top 100 AS-ranked images are among the top 100 IE-ranked images, and only three of the top 10 AS-ranked images are on the list of the top 100 IE-ranked images. Moreover, none of the top 10 AS-ranked images are also top 10 IE-ranked images. Since there is no consistency in the top-ranked results based on the sorting of the two indicators, image recommendation based on the AS indicator alone cannot guarantee that images with the optimal IE indicator score will be recommended, and vice versa. That is, a mechanism is necessary to assist in the selection of images by considering both types of ranking preference.
After the addition of the INDEX, the INDEX and IE line charts show similar changing tendencies with evidently small amplitudes (Figure 13). However, a clear difference from the IE line chart is the significant climb at the end of the INDEX line chart, suggesting that a greater AS indicator score has a positive impact on the sorting results of the INDEX. Such a change also implies that the INDEX-based recommendation results will differ from the IE-based results. Table 1 compares the top 10 AS-ranked images and their corresponding IE and INDEX scores and sorting orders. Although the INDEX and IE line charts show similar trends, the INDEX-based sorting results differ considerably from the IE-based sorting results because the INDEX also considers the impact of the AS indicator score. Thus, the INDEX can be regarded as a recommendation reference that combines the behaviors of both the AS and IE indicators, which is an outcome consistent with the design viewpoint of this study.

5.1.2. IE Indicator Score-Based Sorting Pattern

Figure 14 shows the changes of the AS indicator according to the IE-ranked result. Similar to the discussion of Figure 11, as the IE score rises, the AS indicator score fluctuates, which means that two images with similar IE indicator scores may have a sizable gap in their AS indicator scores, again verifying the inconsistency between the recommendations based on the two indicators. As for the distribution pattern, two significant stages of climb occur in the IE distribution. The climb at the end indicates that higher IE indicator scores can provide good recommendations based on the consideration of centrality. However, a notable difference between Figure 11 and Figure 14 is that many images have higher IE indicator scores, but their AS indicator scores are close to 0. Typical scenarios are those in which the size of the AOI is close to the size of the image. The average size ratio of the top 100 IE-ranked images is 1.706, indicating that IE-based recommendations are not significantly affected by image size. A total of 37 of the top 100 IE-ranked images have their AS indicators on the list of the top 100 AS-ranked images (Figure 15). Further evaluation reveals that only 1 of the top 10 IE-ranked images is on the list of the top 100 AS-ranked images and none of the top 10 IE-ranked images are on the list of the top 10 AS-ranked images.
Figure 16 shows the score changes of the AS and INDEX of the top 100 IE-ranked images. The changes of the AS and INDEX line charts are approximately similar. Table 2 shows the corresponding sorting results of the AS and INDEX indicator scores of the top 10 IE-ranked images. Significant differences occur between the sorting results of the IE indicator and those of the other two indicators, thereby indicating many top-ranked images based on IE indicators are scenarios with good centrality, but poor additional spatial coverage reference. The INDEX-based sorting results are significantly affected by the AS indicator scores and none of the top 10 IE-ranked images are on the list of the top 10 INDEX-ranked images. However, the INDEX sorting results of each image are considerably better than those of the AS-ranked images, because of the high IE indicator scores.

5.1.3. INDEX Indicator Score-Based Sorting Pattern

Figure 17 shows the changes of the three indicators of the top 100 INDEX-ranked images. As the INDEX indicator score increases, the scores of AS and IE fluctuate and the AS indicator score shows a tendency to increase, thereby indicating that the higher the INDEX indicator score, the higher the AS indicator score. Comparatively speaking, the increase of IE score is less obvious, but both converge at the tail. Thus, the top INDEX-ranked images may satisfy the recommended conditions by considering the two indicators simultaneously. In Figure 14, images on the left side mostly have one indicator score higher and another lower, while those on the right side (i.e., top-ranked images according to INDEX) have better AS and IE scores. That is, the INDEX-based sorting results can effectively and automatically exclude images with only one optimal indicator score. Moreover, additional reasonable image recommendations are given by simultaneously considering the IE and AS indicators.
Table 3 shows the top 10 INDEX-ranked images and their scores and rankings. The AS ranks of these images range from 1 to 33, while the IE ranks range from 12 to 192. The INDEX score is significantly affected by the change of the AS indicator score. Compared with the shortfalls in the recommendation results in the previous two tables, the INDEX-based sorting results may compensate for the deficiency of results ranked by a single indicator, thereby finding a better combination of the AS and IE indicators.

5.2. User Scores Analysis

In order to apply effective evaluation criteria such as NDCG, a user score collection platform was implemented. The collected data are used to evaluate whether the scores of the four indicators match human subjects’ image selection preferences. In Section 5.2.1, the specifications of the simulated images are discussed. In Section 5.2.2, the system UI and workflow of the platform are introduced. Finally, the comparisons of the four indicators and user scores are presented in Section 5.2.3.

5.2.1. Image Simulation for User Scoring

The test image database is simulated according to the specifications of six RS image products, namely, FORMOSAT-2, GEOEYE-1, IKONOS, QUICKBIRD, GEOES, and WORLDVIEW-1. The spatial coverage ranges from 11.3 km × 11.3 km to 30 km × 30 km. Overall, 10,000 images were simulated based on the geographic extent of Taiwan (119.9E–122.1E and 21.8N–25.4N) (shown in Figure 18).

5.2.2. User Score Data Collecting Platform

To evaluate the ranking behaviors of the AS, IE, Hausdorff distance and INDEX indicators, an online web-based system (http://demo4.gips.com.tw:8080/aoi/) was developed to guide testers through the scoring procedure via a map interface that illustrates the spatial layout of both the candidate images and the AOI. The system UI is shown in Figure 19.
  • Each tester will be prompted with 30 AOIs randomly selected by the system (from the 100 AOIs in the database). A search of qualified images is performed after the tester selects an AOI. The system returns the candidate images whose spatial extent matches the constraint of containing the spatial extent of the selected AOI.
  • The testers are required to give a score (from 1 to 10) to each pair of candidate image and AOI (the red frame in Figure 20), with the score representing their preference for the candidate image. For example, Figure 20 shows the scores of four images with respect to the same AOI from one tester. The scores of the candidate images for each AOI do not need to be given in sequence, and two candidate images may be given the same score. For example, the AOI is located at the center of the first image, so the tester gives 9 points. In the latter cases, the position of the AOI is relatively close to the boundary of the image, so only 6 and 4 points are given.
  • The scores of the candidate images of each AOI are stored in the system for further analyses.

5.2.3. Comparisons of the User Scores for Four Indicators

For a randomly selected AOI, Table 4 shows the top 10 images ranked by the average scores given by multiple testers. The scores of the four indicators and the testers’ scores are listed separately. Four of the top five IE/INDEX-ranked images are consistent with the top five images ranked by the testers’ scores, although the two slightly differ in the sorting results. The INDEX-ranked images are obviously affected by the AS indicator score. However, only two of the top five AS-ranked images are consistent with the recommendations from the IE and INDEX. When the number of recommended images increases to six, five of the top six IE/INDEX-ranked images are consistent with the testers’ choices. When the number of recommended images further increases to 10, nine of the top 10 images recommended based on the IE and INDEX are consistent with the testers’ choices. Thus, the recommendation results based on these two indicators are extremely close to those based on the testers’ selection behavior. The inference can be made that, although the testers may not select images by considering only the additional available space as the AS indicator suggests, it is still favorable to consider both the demands of centrality and additional space, as the results of the INDEX indicator suggest.
Figure 21 shows the relationships between the scores of the four indicators of all images and the testers’ scores, with the x- and y-axes respectively representing the testers’ scores and the average scores of the four indicators. The scores of the four indicators increase with the testers’ scores, indicating that, in principle, all four indicators can provide a reasonable recommendation reference for image selection. However, further analysis shows that the outcome can be subdivided into three stages as the testers’ scores increase. In the first stage (testers’ scores between 1 and 4), the four indicator scores are positively correlated with the testers’ scores, suggesting the indicator scores can serve as a sorting reference. When the testers’ scores are between 4 and 7, the changes of all four indicator scores are gradual; the four indicators therefore do not provide recommended results similar to those ranked by the testers’ scores. When the testers’ scores are between 7 and 10, the four indicator scores are positively correlated again. Moreover, the increases of the IE and INDEX indicator scores are relatively evident, suggesting that the testers’ preferences may be better represented by these two indicators. As the results of the INDEX are also close to the testers’ behaviors, we argue that the INDEX indicator is an appropriate choice that combines the advantages of its two counterparts. Although the AS indicator scores appear to increase gradually with the testers’ scores, the change in the third stage is not significant, which may suggest that testers’ decisions do not consider the size of the additional available space.

5.3. Normalized Discounted Cumulative Gain

5.3.1. Evaluation Methodology

Discounted cumulative gain (DCG) is a method for measuring the quality of a ranking. The NDCG is obtained via the normalization of DCG, which is a considerably common method for gauging search engine performance [29].
The equation for calculating the G vector in DCG calculations is shown in Equation (5), where the parameter b defines the index at which the relevance discounting begins. In this experiment, b was set to 2. For example, if the relevance vector is <1,3,5,7> and b is set to 3, then the result of DCG[4] will be $1 + 3 + \frac{5}{\log_3 3} + \frac{7}{\log_3 4}$.
$$DCG[i] = \begin{cases} G[i], & \text{if } i = 1 \\ DCG[i-1] + G[i], & \text{if } 1 < i < b \\ DCG[i-1] + \dfrac{G[i]}{\log_b i}, & \text{if } i \ge b \end{cases} \tag{5}$$
NDCG[k] calculates the relevance of the top k, as shown in Equation (6). The ideal discounted cumulative gain (IDCG) refers to the DCG values in the ideal ranking list.
$$NDCG[k] = \frac{DCG[k]}{IDCG[k]}. \tag{6}$$
For example, the ideal ranking list of a ranking result with three items and a vector <4,1,3> is <4,3,1>, and the NDCG[3] of this ranking result is shown in Equation (7).
$$NDCG[3] = \frac{4 + \frac{1}{\log_2 2} + \frac{3}{\log_2 3}}{4 + \frac{3}{\log_2 2} + \frac{1}{\log_2 3}}. \tag{7}$$
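A minimal Python sketch of the DCG and NDCG calculations described above is given below, assuming the relevance values are the (normalized) tester scores; the function names are illustrative:

import math

def dcg(relevances, b=2):
    total = 0.0
    for i, g in enumerate(relevances, start=1):
        total += g if i < b else g / math.log(i, b)   # no discount before position b
    return total

def ndcg(relevances, k, b=2):
    ideal = sorted(relevances, reverse=True)          # ideal ranking gives IDCG
    return dcg(relevances[:k], b) / dcg(ideal[:k], b)

# Example from Equation (7): relevance vector <4, 1, 3> with b = 2
# ndcg([4, 1, 3], k=3)  ->  (4 + 1/log2(2) + 3/log2(3)) / (4 + 3/log2(2) + 1/log2(3))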

5.3.2. Data Preprocessing

To ensure that the scores of each tester have a consistent benchmark and remove unreasonable scores, the collected scores were preprocessed according to the following procedure.

Normalization of User Scoring Data

The system allows for a scoring range of 1 to 10, but the scoring standards of individual testers may not be equal. For example, Table 5 shows that Tester A may score images from 2 to 9, whereas Tester B may score images from 3 to 7. If the original scoring data were used directly in the NDCG calculations, then the results may be inaccurate and insufficiently objective.
Definition 5.
Normalization of Tester Scores. The original score of the tester is denoted as Origin(tester, score). The minimum and maximum scores of the tester are denoted as Min(tester, score) and Max(tester, score), respectively. Thus, the normalized score of the tester, denoted as Normalization(tester, score), is defined in Equation (8).
$$Normalization(tester, score) = \frac{Origin(tester, score) - Min(tester, score)}{Max(tester, score) - Min(tester, score)}. \tag{8}$$
For example, Image 3 corresponding to the same AOI received scores of 8 and 6 from Testers A and B, respectively (Table 5). After normalization, the scores became 0.86 and 0.75 (Table 6).

Deletion of Unreasonable Scoring Data

After the testers’ data were normalized, unreasonable scoring data were filtered out in this step. The average and standard deviation of the testers’ scores for each AOI–image pair were calculated. At a 95% confidence level, a score is removed if it exceeds the mean by more than 1.96 standard deviations, i.e., score > mean + 1.96 × stdev. All unreasonable scores for each image were deleted using this rule.
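A minimal sketch of this preprocessing, assuming the scores being processed are collected in a simple list, is shown below; the function names are illustrative:

import statistics

def normalize_tester(scores):
    # Min-max normalize one tester's scores (Equation (8))
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def drop_unreasonable(scores):
    # Remove scores exceeding mean + 1.96 * stdev (95% confidence level)
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return [s for s in scores if s <= mean + 1.96 * stdev]

# Example: Tester A scored from 2 to 9, so an original score of 8 becomes (8 - 2) / (9 - 2) ≈ 0.86
# normalize_tester([2, 5, 8, 9])  ->  [0.0, 0.43, 0.86, 1.0]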

5.3.3. Comparison of Four Indicators

Figure 22 is an NDCG line chart of the AS, IE, INDEX and Hausdorff distance indicators for k values of 1 to 9. Users are most satisfied with the IE recommendation results, followed by the INDEX, then the AS, and the Hausdorff distance last. Previous studies have revealed that users are more interested in the degree of centering of the image than in neighboring information; therefore, we are convinced that such a result is reasonable. The INDEX is a combination of the IE and AS indicators, and the degree of satisfaction with its recommended images lies somewhere in between, which we believe to be acceptable. Compared with using AS alone, users are more satisfied with the INDEX because it simultaneously considers centrality, the aspect users are most interested in.

5.3.4. Relationship between NDCG and RS Image Types

Only when an image completely contains the entire AOI is it marked as a candidate image. Relative to a counterpart with lower image coverage, an RS image type with larger image coverage has a higher chance of containing the target AOI, so more candidate images are selected. Figure 23 shows the number of NDCG evaluations of 100 AOIs on each type of image when k = 3. When k = 3, each NDCG result indicates that the AOI has at least three candidate images for calculating the NDCG. The numbers of NDCG (k = 3) evaluations on GEOES (30 km × 30 km) and FORMOSAT-2 (24 km × 24 km) are evidently larger than those summarized from the other RS image types with smaller coverage, such as WorldView-1 (3 qualified), QUICKBIRD (6 qualified), GEOEYE-1 (2 qualified) and IKONOS (0 qualified). For RS image types with relatively small coverage, the number of candidate images is inevitably minimal, and the recommendation mechanism may be of no help because the user can make very few choices. However, with larger coverage, the number of candidate images is relatively large, and an effective recommendation mechanism helps users select proper RS images quickly. Therefore, the proposed INDEX indicator can fully demonstrate its value for RS image types with larger coverage.

5.3.5. Comparison of NDCG of Each Type of RS Image

The NDCG results discussed in Section 5.3.3 were computed with 10,000 simulated images of the six RS image types merged. In this section, the recommendations for each RS image type are evaluated individually for calculating the NDCG, and the differences are discussed. The y-axis and the x-axis in Figure 24 indicate the NDCG and the k value, respectively.
Figure 24a shows the NDCG of the four indicators for GEOES, which has the largest coverage of the six RS image types. The NDCG of the AS and INDEX are stably superior to those of the IE and Hausdorff distance. Figure 24b shows the NDCG of the four indicators for the WorldView-1 RS image type; the NDCG of the four indicators differ when k = 1. Figure 24c shows the NDCG of the four indicators for QUICKBIRD. The NDCG of the INDEX is the best among the four indicators, and when the k value is 1–4, the NDCG of the INDEX is superior to those of the other three indicators. Figure 24d shows the NDCG of the four indicators for GEOEYE-1. As the image coverage of this image type is relatively small (15.2 km × 15.2 km), the k value of the NDCG is at most 3, and only a few candidate images are available for calculating the NDCG as the k value increases. When k = 1, the NDCG of the INDEX is superior to those of the other three indicators; otherwise, it is not inferior to them. Figure 24e shows the NDCG of the four indicators for FORMOSAT-2, which has the second largest image size of the six image types. The NDCG of the INDEX is still the best regardless of the k value compared with the other three indicators. Figure 24f shows the IKONOS image type, which has the smallest image coverage among the six RS image types. With very few candidate images, the four indicators are identical in NDCG.
The major reason the AS indicator performs well in the {GEOES, FORMOSAT-2} datasets is their relatively large coverage, which provides more additional information for a specific AOI. Hence, the AS performs well in large-coverage RS image datasets. When adding the INDEX into the comparison, we found that the INDEX performed better than the AS. The INDEX retains the advantages of the AS in large-coverage RS image types and considers centrality simultaneously. This design makes the INDEX better than the AS.
The IE performs better than the AS in the {QUICKBIRD, GEOEYE-1} datasets. The reason is that users prefer to select an image that just contains the AOI, and such a situation is more likely to occur for RS images with small coverage. In this case, centrality greatly dominates the user’s choice preferences, so the IE performs better than the AS in the medium- and small-coverage RS image types. Even so, the INDEX outperforms the IE in QUICKBIRD. In GEOEYE-1, the INDEX and the IE have similar performance. This shows that the INDEX successfully retains the characteristics of the IE and, after adding the consideration of additional information, performs better than the IE in QUICKBIRD. Overall, the NDCG of the INDEX with images recommended separately for each RS image type is superior to that with images of all the included RS image types combined for recommendation. The NDCG of the INDEX is best on three of the six image types, is acceptable on the other three types, and is by no means the worst.
When the images of all the participating RS image types are combined for recommendation, the NDCG of the IE is the best. However, when an individual RS image type is recommended separately, the NDCG of the INDEX is generally superior to those of the IE and the other indicators.

5.3.6. Evaluations of the INDEX Improvement Rates According to RS Image Type

A total of 100 AOIs with candidate images were used for calculating the average NDCG of the four indicators on each of five RS image types (the IKONOS images, with their smaller coverage, can only support k = 2 NDCG for the 100 AOIs). The observations indicate that the NDCG of the INDEX is better than those of AS, IE and the Hausdorff distance for every RS image type (k = 3). The average improvement rate of the INDEX over AS reaches 0.76% on the GEOEYE-1 type (Figure 25), and the improvements over AS and IE are positive on all RS image types. The performance of the INDEX on larger images such as GEOES (30.0 km × 30.0 km) or FORMOSAT-2 (24.0 km × 24.0 km) is better than that of the AS and IE in terms of user scores and improvement rates. Thus, substituting the INDEX for AS, IE or the Hausdorff distance is consistent with what users expect, and the INDEX is superior to AS, IE and the Hausdorff distance.

5.4. Other Evaluation Criteria Demonstrations

According to the definitions of precision and recall in previous studies, we considered user scores of 6 or higher as correct recommendations. A confusion matrix is shown in Figure 26, where |R| represents the number of images with a rating of 6 or higher for each AOI in the simulated RS image database.
Accordingly, we calculated the precision and recall for each AOI and obtained a graphical data output for the P–R curve, AP, and mAP for each indicator (i.e., AS, IE, Hausdorff distance, and INDEX), according to the specified k values. To illustrate the recommendation effect of each indicator, we calculated the precision@k and recall@k of each AOI and averaged the results with respect to k to obtain the P–R curve and AP for each recommendation indicator. According to the definitions provided in previous studies [30], after acquiring the P–R curve, the AP for each recommendation indicator was determined by calculating the area under the curve (AUC). The mAP was obtained by averaging the AP values for a single query [31]; in this study, the AP values of 100 AOIs were averaged. Figure 28 shows the process for calculating the P–R curve, AP, and mAP. According to Figure 28, the four leftmost recommendation indicators (AS, IE, Hausdorff distance, and INDEX) in the comparison were tested against the 100 AOIs and 10,000 simulated remote sensing images, covering six specifications in the simulated image database. The number of candidate images for scoring was determined based on the condition of fully containing the AOI (e.g., AOI1 was contained in nine images). Next, the precision and recall values were calculated for all k values and expressed as Precision@k and Recall@k, respectively. For AOI1, which was contained in nine images, we calculated up to Precision@9 and Recall@9. Because k was limited to a maximum of nine in the scope provided in a previous study, this condition was followed; for example, although AOI100 was contained in 13 candidate images, only Precision@9 and Recall@9 were calculated. The AP values for AOI1–AOI100 were calculated and averaged to determine the mAP value. To calculate the AP values for the four recommendation parameters for each AOI query condition, we averaged the precision and recall values for the various k values. Averaged precision and averaged recall are defined as shown in Figure 27.
Averaged precision and averaged recall are expressed as P@k and R@k, respectively, as shown in Figure 28 ((P@1,R@1), (P@2,R@2), (P@3,R@3), …, (P@9,R@9)). The averaged precision and averaged recall values were used to plot the P–R curves of the four recommendation parameters for the 100 AOI query conditions, and the corresponding AP values were obtained by calculating the AUC (area under the curve).
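The following Python sketch illustrates one way to compute Precision@k, Recall@k, AP (as the area under the P–R curve) and mAP as described above; the threshold of 6 for a correct recommendation follows the text, while the data layout and names are assumptions:

def precision_recall_at_k(relevant_flags, num_relevant, k):
    # relevant_flags[i] is True if the i-th recommended image was scored 6 or higher
    hits = sum(relevant_flags[:k])
    precision = hits / k
    recall = hits / num_relevant if num_relevant else 0.0
    return precision, recall

def average_precision(precisions, recalls):
    # AP as the area under the P-R curve (trapezoidal rule over recall)
    points = sorted(zip(recalls, precisions))
    area = 0.0
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area

def mean_average_precision(ap_values):
    # mAP: average of the AP values over all AOI queries
    return sum(ap_values) / len(ap_values)

# Example for one AOI with nine candidate images (k = 1..9):
# flags = [True, True, False, True, False, False, True, False, False]
# pr = [precision_recall_at_k(flags, num_relevant=4, k=k) for k in range(1, 10)]
# ap = average_precision([p for p, _ in pr], [r for _, r in pr])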
In order to enrich the evaluation and comparison of the indicators, we applied five evaluation criteria in our experiment. Figure 29 illustrates the experimental system architecture and steps. The architecture includes a user rating collection module and an evaluation module. In the user rating collection module, the ground truth of each AOI was averaged over the total number of times each AOI was scored in the system (e.g., AOI1 was tested by nine users). The evaluation module calculated the AS, IE, Hausdorff distance, and INDEX indicators. The precision, recall, P–R curve, AP, and mAP values were derived for each tested AOI and spatial recommendation parameter (i.e., AS, IE, Hausdorff distance, INDEX), as shown in Figure 29. These results are discussed in the following sections.

5.4.1. Precision Evaluation

Figure 30 shows the averaged precision curves for the four recommendation indicators, based on 100 AOIs for k = 1–9. According to the figure, INDEX and IE exhibited equally optimal performance, followed by AS and then the Hausdorff distance.

5.4.2. Recall Evaluation

Figure 31 shows the averaged recall curves for the four recommendation indicators based on 100 AOIs for k = 1–9. According to the figure, IE exhibited the optimal performance, followed by INDEX, AS, and then the Hausdorff distance.

5.4.3. Precision-Recall Curve Evaluation

Figure 32 shows the averaged precision–recall curves for the four recommendation indicators, based on 100 AOIs for k = 1–9. According to the referenced studies, the higher the AUC (area under the curve) of the P–R curve, the more favorable the performance. The figure indicates that IE and INDEX exhibited similar performance and were superior to AS and the Hausdorff distance. The Hausdorff distance had the lowest AUC, representing the worst performance among the four indicators in the P–R curve evaluations.

5.4.4. Average Precision Evaluation

The averaged AUC of the P–R curves based on 100 AOIs were obtained for the four recommendation indicators. Figure 33 shows a histogram of AP values for the four recommendation indicators. According to the definition provided in a previous study, AP is calculated from the AUC of the P–R curve; a higher AP value indicates a more favorable discrimination. The AP results revealed that the performance of INDEX was superior to those of IE, AS, and the Hausdorff distance.

5.4.5. Mean Average Precision Evaluation

The AP values of the four recommendation indicators were based on 100 AOIs, after which the results were averaged. Figure 34 shows a histogram of the mAP values of the four recommendation indicators. The results of mAP and AP were similar: the performance ranking orders of the indicators were the same, and they differed only in magnitude. According to the mAP results, INDEX was the optimal indicator, followed by IE, AS, and then the Hausdorff distance.

6. Discussion

In this study, a new indicator, INDEX, was proposed to provide a single ranking recommendation based on the AS and IE indicators introduced in our previous work. To demonstrate the integrated characteristics that INDEX inherits from AS and IE and its relatively comprehensive spatial recommendation ability, four parts of experiments and evaluations, covering three perspectives (fundamental behaviors, user preference scores, and ranking evaluation criteria), were developed and discussed in Section 5.
In the first part of the experiment, the INDEX indicator demonstrated its superiority in providing better recommendations than those based on the AS or IE indicator alone. While both AS and IE have their shortcomings, the distinguishing characteristic of the INDEX indicator is its integrated capability for finding images that satisfy both the preference for extensibility (AS) and the preference for centrality (IE), especially among the highly ranked RS images.
In the second part of the experiment, the recommendations from the proposed indicators were compared with the quantitative scores collected from human subjects. The results in Table 4 demonstrate the high consistency between users' image selection preferences and the recommendations of the INDEX and IE indicators (80% consistency with the user score ranking among the top five images). According to the conclusions of the previous study, users are more satisfied with IE-based recommendations. However, the results show that images with a high IE score may have an extremely poor AS score. To avoid this shortcoming, the INDEX indicator neutralizes the dominant effect of the IE indicator and delivers balanced recommendations by also taking the AS characteristics into account.
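One plausible reading of such a top-five consistency figure is the overlap between the top five user-ranked images and an indicator's top five. The short Python sketch below computes that overlap; it is illustrative only, and the image IDs are hypothetical rather than taken from Table 4.

```python
def top_k_consistency(user_ranking, indicator_ranking, k=5):
    """Fraction of the user's top-k images that also appear in the indicator's top-k."""
    user_top = set(user_ranking[:k])
    indicator_top = set(indicator_ranking[:k])
    return len(user_top & indicator_top) / k

# Hypothetical image IDs ranked by user score and by an indicator.
user_ranked = ["i001", "i002", "i003", "i004", "i005", "i006"]
index_ranked = ["i004", "i003", "i001", "i002", "i008", "i005"]

print(top_k_consistency(user_ranked, index_ranked))  # 0.8 -> 80% consistency
```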
In the third part of the experiment, NDCG and other well-known evaluation methods were introduced to evaluate the performance of the four indicators. The NDCG results show that the INDEX indicator outperforms AS, IE, and the Hausdorff distance. We also evaluated the performance of the four indicators for each type of simulated image, using the NDCG of the available k values. The NDCG of the INDEX indicator led in three of the six image types and was equivalent to that of the IE indicator for the other three types. The improvement rates of the average NDCG of the INDEX indicator over the other three indicators were also calculated for each simulated image type. Five evaluation methods, namely precision, recall, the P–R curve (precision–recall curve), AP (average precision), and mAP (mean average precision), were further used to evaluate the performance of the four indicators. In the precision, recall, and P–R curve evaluations, the INDEX indicator performed very closely to the IE indicator and outperformed the AS indicator and the Hausdorff distance. In the AP and mAP evaluations, the histograms showed results consistent with the NDCG and other evaluations.
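As a rough illustration of how an NDCG@k evaluation can be computed from graded user scores, the following Python sketch may be helpful. It is a minimal sketch using the common 2^rel − 1 gain with a log2 discount; the example scores are hypothetical, and the exact gain formulation used in our evaluation is not restated here.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k graded relevance scores."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_scores, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(ranked_scores, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_scores, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: user scores of images in the order recommended by an indicator
# (hypothetical values for illustration only).
print(ndcg_at_k([8.1, 8.66, 7.62, 8.6, 6.8], k=3))
```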
To provide a consistent and solid foundation for comparison with the previous study, the results of the INDEX indicator were compared with those of the AS and IE indicators using the identical dataset (10,000 simulated RS images) and methods (precision, recall, and NDCG) from our previous study, with the intention of delivering unbiased comparisons.
The following lists the major findings and evaluations of this research:
  • In terms of the spatial consideration, centrality appears to be the dominant factor in the selection of RS images, but considering only the IE indicator may result in recommending images with superior centrality yet poor extensibility. The INDEX indicator demonstrates its capability to resolve this issue by providing ranking recommendations that consider both spatial perspectives.
  • RS image coverage has a large impact on the scores of the AS and INDEX indicators. When images with very different levels of spatial coverage are considered simultaneously, images with larger spatial coverage have a higher chance of receiving better ranking scores, which may negatively affect the recommendation. To avoid such a dominant influence, further examination of the impact of spatial coverage is necessary in the future.
  • In the operation model of GEE, users are provided with a variety of powerful tools for building applications from the abundant archives of remote sensing images. It enables domain users to quickly develop workflows based on their knowledge of remote sensing images, including searching, interpretation, processing, and visualization. As the variety and volume of remote sensing images grow rapidly every day, making good use of them begins with efficiently finding the right images. From the spatial perspective, the ranking mechanism proposed in this paper provides a single ranking result based on the formalization of RS image selection knowledge. We believe this ranking and recommendation mechanism can serve as a good complementary aid to current cloud-based RS image platforms such as GEE. With the use of a catalogue service based on standardized metadata, such platforms can further expand their service capabilities.
  • Only spatial perspectives are considered in this research. Other constraints must be integrated with the spatial constraints to narrow down the list of candidate images and provide reasonable recommendations. Similar to what has been addressed in this research, some of these constraints, e.g., temporal constraints, can also be examined from the perspective of the "degree of difference". An integrated model of the spatial and temporal perspectives will require more in-depth examination in the future.
  • Machine learning has demonstrated outstanding ability in remote sensing-related big data applications such as feature detection and classification [32]. For the RS image ranking and recommendation problem, machine learning may be another effective approach in the future.

7. Conclusions

Given the rapid increase in the quantity of RS images, the ability to search for the most appropriate RS images should receive more research attention, and a ranking mechanism is required to help users rank RS images. In our previous study, we proposed the LIFE framework with the AS and IE indicators for ranking and recommending RS images according to user-defined AOIs. In this work, we refined our previous efforts by combining the strengths of the AS and IE indicators into a single indicator called INDEX. PCA was used to reduce the number of variables while retaining their features, allowing AS and IE to be combined into a meaningful result. The experiment indicated that the INDEX indicator is positively correlated with the user scores: when users give high scores to a candidate RS image, the corresponding INDEX value is also relatively high. The statistical analysis in Section 5.3.3 reveals that INDEX can remedy IE's limited ability to distinguish patterns in the intermediate scores and improve the performance of AS in the last ascending period. Thereafter, we used NDCG calculations to verify our work. The current outcomes verify those of previous studies: users' interest in centrality is higher than their interest in neighbor information. The NDCG results indicated that users' satisfaction with the indicators follows the order IE, INDEX, and AS. However, we found that INDEX performs best when the images come from the same platform. If the user wants to pick an image from all platforms, IE is the best of the four indicators according to the NDCG evaluation; if the user works with a single image platform, INDEX is the better choice in most cases.
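To make the combination step concrete, the sketch below shows how the first principal component of standardized AS and IE scores could be derived and used as a single ranking value. It is a minimal sketch of the generic PCA mechanism; the scores and the resulting loadings are illustrative assumptions, not the coefficients obtained in this study.

```python
import numpy as np

# Illustrative AS and IE scores for a handful of candidate images
# (hypothetical values, chosen to be positively correlated as in our data).
as_scores = np.array([0.95, 0.80, 0.60, 0.40, 0.20])
ie_scores = np.array([0.90, 0.85, 0.55, 0.45, 0.15])

# Standardize each indicator, then compute the covariance matrix.
X = np.column_stack([as_scores, ie_scores])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(X_std, rowvar=False)

# The first principal component (largest eigenvalue) gives the weights
# used to merge the two indicators into a single score.
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, np.argmax(eigvals)]
if pc1.sum() < 0:          # fix the arbitrary eigenvector sign
    pc1 = -pc1

combined = X_std @ pc1     # single merged score per image
ranking = np.argsort(-combined)  # highest combined score first
print(pc1, ranking)
```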
The contributions of this study are as follows.
  • We merged the highly correlated AS and IE ranking parameters for the RS images through PCA calculations. PCA was used to calculate reasonable values for the coefficients of the merging equation.
  • The highly ranked portions (best 10%) of AS and IE diverge drastically because of the different considerations behind the two indicators. The INDEX indicator can effectively reconcile the highly ranked portion, which users care about most, and deliver better recommendations than either the AS or IE indicator alone.
  • We applied NDCG and other ranking evaluation methods to user scores representing selection preferences for RS image applications. The results verified the applicability and superiority of the INDEX indicator and demonstrated better spatial recommendations for the entire LIFE mechanism.

Author Contributions

Conceptualization, J.-H.H. and Z.L.-T.S.; methodology, J.-H.H., Z.L.-T.S. and E.H.-C.L.; software: Z.L.-T.S.; formal analysis, J.-H.H., Z.L.-T.S. and E.H.-C.L.; writing—original draft preparation, J.-H.H. and Z.L.-T.S.; writing—review and editing, J.-H.H., Z.L.-T.S. and E.H.-C.L.; visualization, Z.L.-T.S.; supervision, J.-H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Skytland, N. What is NASA Doing with Big Data Today? 2012. Available online: https://open.nasa.gov/blog/what-is-nasa-doing-with-big-data-today/ (accessed on 30 March 2017).
  2. Liu, P.; Di, L.; Du, Q.; Wang, L. Remote sensing big data: Theory, methods and applications. Remote Sens. 2018, 10, 711.
  3. Tan, X.; Guo, S.; Di, L.; Deng, M.; Huang, F.; Ye, X.; Sun, Z.; Gong, W.; Sha, Z.; Pan, S. Parallel agent-as-a-service (P-AaaS) based geospatial service in the cloud. Remote Sens. 2017, 9, 382.
  4. Wu, K.; Du, Q.; Wang, Y.; Yang, Y. Supervised sub-pixel mapping for change detection from remotely sensed images with different resolutions. Remote Sens. 2017, 9, 284.
  5. Ding, Z.; Liao, X.; Su, F.; Fu, D. Mining coastal land use sequential pattern and its land use associations based on association rule mining. Remote Sens. 2017, 9, 116.
  6. Li, H.; Hong, R.; Zhu, S.; Ge, Y. Point-of-interest recommender systems: A separate-space perspective. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; pp. 231–240.
  7. Kosmides, P.; Demestichas, K.; Adamopoulou, E.; Remoundou, C.; Loumiotis, I.; Theologou, M.; Anagnostou, M. Providing recommendations on location-based social networks. J. Ambient Intell. Hum. Comput. 2016, 7, 567–578.
  8. Wang, W.; Yin, H.; Sadiq, S.; Chen, L.; Xie, M.; Zhou, X. SPORE: A sequential personalized spatial item recommender system. In Proceedings of the 2016 IEEE 32nd International Conference on Data Engineering (ICDE), Helsinki, Finland, 16–20 May 2016; pp. 954–965.
  9. Hong, J.-H.; Su, Z.L.-T.; Lu, H.-C. A recommendation framework for remote sensing images by spatial relation analysis. J. Syst. Softw. 2014, 90, 151–166.
  10. Hjaltason, G.R.; Samet, H. Ranking in spatial databases. In Proceedings of the International Symposium on Spatial Databases, Portland, ME, USA, 6–9 August 1995; pp. 83–95.
  11. Kim, J.-H.; Yoon, T.-B.; Kim, K.-S.; Lee, J.-H. The trackback-rank algorithm for the blog search. In Proceedings of the 2008 IEEE International Multitopic Conference, Karachi, Pakistan, 23–24 December 2008; pp. 454–459.
  12. Markowetz, A.; Chen, Y.-Y.; Suel, T.; Long, X.; Seeger, B. Design and implementation of a geographic search engine. In Proceedings of the WebDB, Baltimore, MD, USA, 16–17 June 2005; pp. 19–24.
  13. Dong, P.; Yang, C.; Rui, X.; Zhang, L.; Cheng, Q. An effective buffer generation method in GIS. In Proceedings of the IGARSS 2003 (2003 IEEE International Geoscience and Remote Sensing Symposium), Toulouse, France, 21–25 July 2003 (IEEE Cat. No. 03CH37477); Volume 6, pp. 3706–3708.
  14. ISO. 19107: 2003 Geographic Information—Spatial Schema; International Organization for Standardization: Geneva, Switzerland, 2003.
  15. Adler, N.; Golany, B. Evaluation of deregulated airline networks using data envelopment analysis combined with principal component analysis with an application to Western Europe. Eur. J. Oper. Res. 2001, 132, 260–273.
  16. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 1999, 4, 272.
  17. Oja, E. Simplified neuron model as a principal component analyzer. J. Math. Biol. 1982, 15, 267–273.
  18. Wikipedia. Multivariate Statistics. 2006. Available online: https://en.wikipedia.org/wiki/Multivariate_statistics (accessed on 13 October 2017).
  19. Spearman, C. “General intelligence” objectively determined and measured. Am. J. Psychol. 2000, 15, 201–293.
  20. Thurstone, L.L. Multiple-Factor Analysis; A Development and Expansion of the Vectors of Mind; University of Chicago Press: Chicago, IL, USA, 1947.
  21. Habing, B. Exploratory Factor Analysis. 2003. Available online: http://www.stat.sc.edu/~habing/courses/530efa.pdf (accessed on 20 August 2018).
  22. Suhr, D.D. Exploratory or confirmatory factor analysis? In Proceedings of the SUGI 31, San Francisco, CA, USA, 26–29 March 2006.
  23. Featherman, M.S.; Pavlou, P.A. Predicting e-services adoption: A perceived risk facets perspective. Int. J. Hum. Comput. Stud. 2003, 59, 451–474.
  24. Steyn, P. Which Test: Factor Analysis (FA, EFA, PCA, CFA). 2017. Available online: https://www.introspective-mode.org/factor-analysis-fa-efa-pca-cfa/ (accessed on 15 July 2019).
  25. Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572.
  26. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52.
  27. Jolliffe, I.T. Principal components in regression analysis. In Principal Component Analysis; Springer: New York, NY, USA, 1986; pp. 129–155.
  28. DeCoster, J. Overview of Factor Analysis. 1998. Available online: http://www.stat-help.com/notes.html (accessed on 26 December 2019).
  29. Manning, C.D.; Raghavan, P.; Schütze, H. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008.
  30. Boyd, K.; Eng, K.H.; Page, C.D. Area under the precision-recall curve: Point estimates and confidence intervals. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Prague, Czech Republic, 23–27 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 451–466.
  31. Cormack, G.V.; Lynam, T.R. Statistical precision of information retrieval evaluation. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006; pp. 533–540.
  32. Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10.
Figure 1. Remote sensing (RS) image recommendations using the location-based RS image finding engine (LIFE) framework.
Figure 2. Image extension (IE)-based ranking is necessary for RS images that have the same area of interests (AOI) and available space (AS).
Figure 3. The roles of factor analysis (FA) and principal component analysis (PCA) in our work.
Figure 4. The workflow for the INDEX indicator calculations.
Figure 5. Relationship between the AOI and the candidate images.
Figure 6. PCA calculation data of the AOI example in Figure 5.
Figure 7. Statistics of PCA for the AOI (62).
Figure 8. LIFE recommendation framework with the INDEX generator.
Figure 9. Simulation of RS images and the calculation of indicators.
Figure 10. Figures and tables for the first part of the experiment.
Figure 11. The 1000 images ranked by AS.
Figure 12. Line graph for AS and IE of the top 100 AS-ranked images.
Figure 13. Line graph for AS, IE, and INDEX of the top 100 AS-ranked images.
Figure 14. The 1000 images ranked by IE.
Figure 15. Line graph for AS and IE of the top 100 IE-ranked images.
Figure 16. Line graph for AS, IE, and INDEX of the top 100 IE-ranked images.
Figure 17. Line graph for AS, IE, and INDEX of the top 100 INDEX-ranked images.
Figure 18. Specifications and legend of the 10,000 simulated RS images.
Figure 19. Interface of the system.
Figure 20. Screenshot of the scoring system.
Figure 21. User score preference for the four indicators.
Figure 22. Comparison of normalized discounted cumulative gain (NDCG)@k by various RS image ranking indicators.
Figure 23. The number of NDCG evaluations for each type of RS image (with k = 3).
Figure 24. NDCG of AS, IE, INDEX, and the Hausdorff distance for the six types of RS image.
Figure 25. NDCG improvement rates of INDEX over the other three indicators for each type of RS image (k = 3).
Figure 26. Confusion matrix for the experiments.
Figure 27. Definitions of averaged Precision@k and averaged Recall@k.
Figure 28. P–R curve, AP, and mAP calculation workflow.
Figure 29. Experimental architecture for the other five evaluation criteria.
Figure 30. Precision of AS, IE, Hausdorff distance, and INDEX for k = 1–9.
Figure 31. Recall of AS, IE, Hausdorff distance, and INDEX for k = 1–9.
Figure 32. P–R curve of AS, IE, Hausdorff distance, and INDEX for k = 1–9.
Figure 33. AP of AS, IE, Hausdorff distance, and INDEX for k = 1–9.
Figure 34. mAP of AS, IE, Hausdorff distance, and INDEX for k = 1–9.
Table 1. Image recommendation by the three indicators (sorted by the AS indicator).

Image No.   INDEX (Rank)   AS (Rank)    IE (Rank)
999         1.15 (9)       1.00 (1)     0.66 (192)
982         1.28 (1)       0.98 (2)     0.84 (45)
996         1.22 (3)       0.96 (3)     0.78 (77)
995         1.09 (17)      0.92 (4)     0.64 (211)
998         1.15 (8)       0.91 (5)     0.73 (112)
963         1.22 (2)       0.90 (6)     0.83 (49)
987         1.16 (6)       0.88 (7)     0.77 (81)
990         1.10 (14)      0.86 (8)     0.72 (122)
986         1.03 (32)      0.84 (9)     0.63 (220)
989         1.09 (16)      0.83 (10)    0.72 (123)
Table 2. Image recommendation by the three indicators (sorted by the IE indicator).

Image No.   INDEX (Rank)   AS (Rank)    IE (Rank)
360         0.89 (65)      0.20 (294)   1.00 (1)
728         1.07 (20)      0.49 (79)    0.99 (2)
280         0.86 (71)      0.16 (350)   0.99 (3)
82          0.81 (81)      0.10 (498)   0.99 (4)
671         1.03 (29)      0.43 (105)   0.98 (5)
610         0.99 (38)      0.38 (132)   0.98 (6)
18          0.80 (84)      0.11 (470)   0.96 (7)
179         0.78 (92)      0.09 (523)   0.96 (8)
201         0.80 (85)      0.13 (414)   0.95 (9)
113         0.79 (99)      0.08 (555)   0.95 (10)
Table 3. Top 10 INDEX images.

Image No.   INDEX (Rank)   AS (Rank)    IE (Rank)
982         1.28 (1)       0.98 (2)     0.84 (45)
963         1.22 (2)       0.90 (6)     0.83 (49)
996         1.22 (3)       0.96 (3)     0.78 (77)
901         1.20 (4)       0.75 (19)    0.94 (12)
940         1.16 (5)       0.82 (12)    0.82 (55)
987         1.16 (6)       0.88 (7)     0.77 (81)
863         1.15 (7)       0.68 (29)    0.94 (13)
998         1.15 (8)       0.91 (5)     0.73 (112)
999         1.15 (9)       1.00 (1)     0.66 (192)
923         1.12 (10)      0.65 (33)    0.92 (16)
Table 4. Users’ scores and the four indicator values for the recommended images (rows labeled by image thumbnail IDs i001–i010).

Image   Coverage (km × km)   User Score (Rank)   INDEX (Rank)   AS (Rank)    IE (Rank)    Hausdorff Distance (Rank)
i001    17.6 × 14.0          8.66 (1)            0.892 (4)      0.291 (7)    1.000 (1)    8713.785 (1)
i002    17.6 × 14.0          8.6 (2)             0.852 (5)      0.272 (8)    0.962 (2)    10,461.089 (2)
i003    24.0 × 24.0          8.1 (3)             0.961 (3)      0.703 (3)    0.654 (4)    15,842.102 (6)
i004    30.0 × 30.0          8 (4)               1.130 (1)      1.00 (1)     0.582 (6)    21,723.347 (12)
i005    17.6 × 14.0          7.84 (5)            0.664 (7)      0.198 (10)   0.765 (3)    10,722.992 (3)
i006    30.0 × 30.0          7.62 (6)            1.017 (2)      0.921 (2)    0.501 (8)    25,951.888 (15)
i007    17.6 × 14.0          7.16 (7)            0.440 (10)     0.103 (14)   0.537 (7)    11,402.754 (4)
i008    24.0 × 24.0          6.8 (8)             0.686 (6)      0.353 (5)    0.629 (5)    20,596.387 (11)
i009    24.0 × 24.0          6.64 (9)            0.609 (8)      0.391 (4)    0.475 (11)   18,953.448 (9)
i010    16.5 × 16.5          6.60 (10)           0.41 (11)      0.11 (13)    0.48 (10)    11,406.427 (5)
Table 5. Rating scores before normalization.

            Image 1   Image 2   Image 3   Image 4   Image 5
Tester A    2         2         8         9         6
Tester B    3         4         6         7         5
Table 6. Rating scores after normalization.

            Image 1   Image 2   Image 3   Image 4   Image 5
Tester A    0         0         0.86      1         0.57
Tester B    0         0.25      0.75      1         0.5
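The mapping from Table 5 to Table 6 is consistent with per-tester min–max normalization; the short Python sketch below reproduces it under that assumption (the function and variable names are illustrative).

```python
def min_max_normalize(scores):
    """Rescale one tester's scores to [0, 1] using min-max normalization."""
    lo, hi = min(scores), max(scores)
    return [round((s - lo) / (hi - lo), 2) for s in scores]

# Raw scores from Table 5.
tester_a = [2, 2, 8, 9, 6]
tester_b = [3, 4, 6, 7, 5]

print(min_max_normalize(tester_a))  # [0.0, 0.0, 0.86, 1.0, 0.57]  (Table 6, Tester A)
print(min_max_normalize(tester_b))  # [0.0, 0.25, 0.75, 1.0, 0.5]  (Table 6, Tester B)
```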
