Recent Advances in Statistical Theory and Applications

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 20585

Special Issue Editors


Dr. Augustine Wong
Guest Editor
Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3, Canada
Interests: statistical inference; higher order likelihood-based asymptotic methods; econometrics; survival data analysis; statistical computational methods

Dr. Xiaoping Shi
Guest Editor
Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan, Kelowna, BC V1V 1V7, Canada
Interests: KL divergence; clustering; change-point analysis; variable selection; high-dimensional inference

Special Issue Information

Dear Colleagues,

Complex data pose unique challenges for data processing in an era of ever-increasing data availability. Methods have been explored for modeling latent data, but calculation of the likelihood function is often hindered by an integral without a closed form, so highly accurate approximations are needed for efficient inference. Additionally, a small perturbation of the data may lead to a very different statistical inference; robust inference methods guard against such perturbations. Moreover, multivariate analysis is increasingly challenged by high dimensionality. A weighted graph can capture the structure of high-dimensional data and has been developed as a tool for nonparametric inference.

More specifically, recently developed methods for complex data include:

  • Density approximation based on KL divergence (see the sketch after this list).
  • Saddlepoint approximation for integrals.
  • Inverse moment approximation for risk evaluation.
  • Nonparametric inference for change point and multiple sample comparison.
  • Robust clustering.
  • Penalized regression for variable selection.
  • Data sharpening for bias reduction.
  • Data depth for multivariate analysis.
  • Graph theory for high-dimensional inference.
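
As a concrete flavour of the first item, the sketch below approximates an empirical density by the member of a normal family that minimizes a discretized KL divergence. The sample, the candidate family, and the parameter grid are illustrative assumptions, not taken from any contribution in this issue.

```python
import numpy as np
from scipy import stats

# Hypothetical data: approximate an unknown (here, gamma) density by the
# normal candidate minimizing KL divergence on a histogram grid.
rng = np.random.default_rng(0)
x = rng.gamma(shape=5.0, scale=1.0, size=2000)

edges = np.linspace(x.min(), x.max(), 60)
p_hat, _ = np.histogram(x, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def kl(p, q):
    """Discretized KL(p || q) over the histogram grid, skipping empty cells."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * width

best = min(
    (kl(p_hat, stats.norm.pdf(mids, mu, sd)), mu, sd)
    for mu in np.linspace(3, 7, 41)
    for sd in np.linspace(1, 4, 31)
)
print("KL-closest normal: mu=%.2f, sd=%.2f (KL=%.4f)" % (best[1], best[2], best[0]))
```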

The scope of the contributions to this Special Issue will include new and original research motivated by real-world problems. The articles will focus on approximate inference, methods, and statistical applications in a wide range of areas, including finance, economics, environmetrics, the biological sciences, psychometrics, the social sciences, the physical sciences, and geography. Manuscripts extending the current theoretical development of these topics are also welcome.

Dr. Augustine Wong
Dr. Xiaoping Shi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • KL divergence
  • asymptotic inference
  • high dimension
  • clustering
  • change point
  • multiple-sample comparison
  • data sharpening
  • data depth
  • variable selection
  • robust inference

Published Papers (15 papers)

Editorial


3 pages, 170 KiB  
Editorial
Recent Advances in Statistical Theory and Applications
by Augustine Wong and Xiaoping Shi
Entropy 2023, 25(12), 1661; https://doi.org/10.3390/e25121661 - 15 Dec 2023
Viewed by 789
Abstract
Complex data pose unique challenges for data processing [...] Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)

Research


17 pages, 431 KiB  
Article
Modeling Terror Attacks with Self-Exciting Point Processes and Forecasting the Number of Terror Events
by Siyi Wang, Xu Wang and Chenlong Li
Entropy 2023, 25(7), 1011; https://doi.org/10.3390/e25071011 - 30 Jun 2023
Viewed by 997
Abstract
Rampant terrorism poses a serious threat to the national security of many countries worldwide, particularly due to separatism and extreme nationalism. This paper focuses on the development and application of a temporal self-exciting point process model to the terror data of three countries: the US, Turkey, and the Philippines. To account for occurrences with the same time-stamp, this paper introduces an order mark and a reward term in parameter selection. The reward term captures the triggering effect between events with the same time-stamp but different orders. Additionally, this paper compares the self-exciting models generated by day-based and month-based arrival times. Another highlight of this paper is the development of a model to predict the number of terror events using a combination of simulation and machine learning, specifically the random forest method, to achieve better predictions. This research offers an insightful approach to discovering terror event patterns and forecasting future occurrences of terror events, which may have practical applications in national security strategies. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
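
A temporal self-exciting (Hawkes) process can be simulated with Ogata's thinning algorithm. The minimal sketch below uses an exponential kernel with illustrative parameters; it is not the authors' fitted model, which additionally handles tied time-stamps via order marks and a reward term.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for lambda(t) = mu + sum_{t_i < t} alpha*beta*exp(-beta*(t - t_i))."""
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    while t < horizon:
        hist = np.asarray(events)
        # The intensity decays between events, so its current value is a
        # valid upper bound until the next candidate point.
        lam_bar = mu + alpha * beta * np.exp(-beta * (t - hist)).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_t = mu + alpha * beta * np.exp(-beta * (t - hist)).sum()
        if rng.uniform() < lam_t / lam_bar:  # accept with prob lambda(t)/bound
            events.append(t)
    return np.asarray(events)

times = simulate_hawkes(mu=0.5, alpha=0.6, beta=1.2, horizon=200.0)
print(len(times), "simulated events")  # alpha < 1 keeps the process stable
```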

20 pages, 2521 KiB  
Article
Multilayer Perceptron Network Optimization for Chaotic Time Series Modeling
by Mu Qiao, Yanchun Liang, Adriano Tavares and Xiaohu Shi
Entropy 2023, 25(7), 973; https://doi.org/10.3390/e25070973 - 24 Jun 2023
Cited by 1 | Viewed by 1067
Abstract
Chaotic time series are widely present in practice, but due to their characteristics—such as internal randomness, nonlinearity, and long-term unpredictability—it is difficult to achieve high-precision intermediate or long-term predictions. Multi-layer perceptron (MLP) networks are an effective tool for chaotic time series modeling. Focusing on chaotic time series modeling, this paper presents a method for approximating the generalized degrees of freedom of an MLP. We then obtain its Akaike information criterion, which is designed as the loss function for training, and hence develop an overall framework for chaotic time series analysis, including phase space reconstruction, model training, and model selection. To verify the effectiveness of the proposed method, it is applied to two artificial chaotic time series and two real-world chaotic time series. The numerical results show that the proposed optimized method is effective in obtaining the best model from a group of candidates. Moreover, the optimized models perform very well in multi-step prediction tasks. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
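
Phase space reconstruction, the first step in the framework above, is commonly done by time-delay embedding. The minimal sketch below builds embedding vectors and one-step-ahead targets for the chaotic logistic map; the embedding dimension and delay are illustrative choices, not those used in the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar series into vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])
    paired with the next value as a one-step-ahead regression target."""
    n = len(x) - (dim - 1) * tau - 1
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    y = x[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
    return X, y

# Example series: the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(3000)
x[0] = 0.2
for t in range(2999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

X, y = delay_embed(x, dim=3, tau=1)
print(X.shape, y.shape)  # ready as inputs/targets for an MLP regressor
```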

27 pages, 422 KiB  
Article
From Bilinear Regression to Inductive Matrix Completion: A Quasi-Bayesian Analysis
by The Tien Mai
Entropy 2023, 25(2), 333; https://doi.org/10.3390/e25020333 - 11 Feb 2023
Cited by 2 | Viewed by 1134
Abstract
In this paper, we study the problem of bilinear regression, a type of statistical modeling that deals with multiple variables and multiple responses. One of the main difficulties that arise in this problem is the presence of missing data in the response matrix, a problem known as inductive matrix completion. To address these issues, we propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Our proposed method starts by addressing the problem of bilinear regression using a quasi-Bayesian approach. The quasi-likelihood method that we employ in this step allows us to handle the complex relationships between the variables in a more robust way. Next, we adapt our approach to the context of inductive matrix completion. We make use of a low-rankness assumption and leverage the powerful PAC-Bayes bound technique to provide statistical properties for our proposed estimators and for the quasi-posteriors. To compute the estimators, we propose a Langevin Monte Carlo method to obtain approximate solutions to the problem of inductive matrix completion in a computationally efficient manner. To demonstrate the effectiveness of our proposed methods, we conduct a series of numerical studies. These studies allow us to evaluate the performance of our estimators under different conditions and provide a clear illustration of the strengths and limitations of our approach. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
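
A minimal unadjusted Langevin sketch for a quasi-posterior over a low-rank factorization is given below. The squared-error quasi-likelihood, Gaussian prior, temperature, and step size are all illustrative assumptions; the paper's exact quasi-likelihood, prior, and tuning differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 50, 40, 3
U_true, V_true = rng.normal(size=(n, r)), rng.normal(size=(p, r))
M = U_true @ V_true.T
mask = rng.uniform(size=(n, p)) < 0.3                 # observed entries
Y = np.where(mask, M + 0.1 * rng.normal(size=(n, p)), 0.0)

lam, tau2, h = 4.0, 1.0, 1e-4                         # temperature, prior var, step
U, V = rng.normal(size=(n, r)), rng.normal(size=(p, r))
samples = []
for it in range(5000):
    R = mask * (U @ V.T - Y)                          # residuals on observed cells
    gU = 2.0 * lam * R @ V + U / tau2                 # gradient of quasi-potential
    gV = 2.0 * lam * R.T @ U + V / tau2
    U = U - h * gU + np.sqrt(2 * h) * rng.normal(size=U.shape)
    V = V - h * gV + np.sqrt(2 * h) * rng.normal(size=V.shape)
    if it >= 2500 and it % 10 == 0:                   # keep post-burn-in draws
        samples.append(U @ V.T)

M_hat = np.mean(samples, axis=0)                      # quasi-posterior mean
print("held-out RMSE:", round(float(np.sqrt(np.mean((M_hat - M)[~mask] ** 2))), 3))
```
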
13 pages, 1459 KiB  
Article
Two-Sample Tests Based on Data Depth
by Xiaoping Shi, Yue Zhang and Yuejiao Fu
Entropy 2023, 25(2), 238; https://doi.org/10.3390/e25020238 - 28 Jan 2023
Viewed by 1465
Abstract
In this paper, we focus on the homogeneity test that evaluates whether two multivariate samples come from the same distribution. This problem arises naturally in various applications, and there are many methods available in the literature. Several tests based on data depth have been proposed for this problem, but they may not be very powerful. In light of the recent development of data depth as an important measure in quality assurance, we propose two new test statistics for the multivariate two-sample homogeneity test. The proposed test statistics have the same χ²(1) asymptotic null distribution. The generalization of the proposed tests to the multivariate multisample situation is discussed as well. Simulation studies demonstrate the superior performance of the proposed tests. The test procedure is illustrated through two real data examples. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
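
As a rough illustration of depth-based comparison (not the proposed test statistics, whose construction and χ²(1) calibration are developed in the paper), the sketch below computes Mahalanobis depth with respect to the pooled sample and compares the two depth samples with a rank test.

```python
import numpy as np
from scipy import stats

def mahalanobis_depth(points, ref):
    """Depth of each row of `points` relative to sample `ref`:
    D(x) = 1 / (1 + (x - mean)' S^{-1} (x - mean))."""
    mu = ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    d = points - mu
    md2 = np.einsum("ij,jk,ik->i", d, S_inv, d)       # quadratic forms, rowwise
    return 1.0 / (1.0 + md2)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 3))
Z = rng.normal(0.5, 1.0, size=(100, 3))               # shifted second sample

# Under homogeneity, depths of both samples w.r.t. the pooled data
# should be exchangeable; here they are compared with a rank test.
pooled = np.vstack([X, Z])
print(stats.mannwhitneyu(mahalanobis_depth(X, pooled),
                         mahalanobis_depth(Z, pooled)))
```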

15 pages, 6055 KiB  
Article
Forecast of Winter Precipitation Type Based on Machine Learning Method
by Zhang Lang, Qiuzi Han Wen, Bo Yu, Li Sang and Yao Wang
Entropy 2023, 25(1), 138; https://doi.org/10.3390/e25010138 - 10 Jan 2023
Cited by 4 | Viewed by 1617
Abstract
Winter precipitation-type prediction is a challenging problem due to the complexity of the physical mechanisms and the computability limits of numerical modeling. In this study, we introduce a new method of precipitation-type prediction based on the machine learning approach LightGBM. The precipitation-type records of the in situ observations collected from 32 national weather stations in northern China during 1997–2018 are used as the labels. The features are selected from the conventional meteorological data of the corresponding hourly reanalysis data ERA5. The evaluation of the model performance shows that randomly sampled validation data lead to an illusion of better model performance. Extreme climate background conditions reduce the prediction accuracy of the predictive model. A feature importance analysis illustrates that the features of the surrounding area with a –12 h offset time have a higher impact on the ground precipitation types. The exploration of the predictability of our model reveals the feasibility of using the analysis data to predict future precipitation types. We use the ECMWF precipitation-type (ECPT) forecast products as the benchmark to compare with our machine learning precipitation-type (MLPT) predictions. The overall accuracy (ACC) and Heidke skill score (HSS) of the MLPT are 0.83 and 0.69, respectively, which are considerably higher than the 0.78 and 0.59 of the ECPT. For stations at elevations below 800 m, the overall performance of the MLPT is even better. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
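
A minimal LightGBM classification sketch in the spirit of the study is shown below; the three predictors are synthetic stand-ins for ERA5 fields, and the label rule is invented for illustration. It assumes the lightgbm and scikit-learn packages are installed.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-ins for reanalysis predictors (2 m temperature, a wet-bulb
# proxy, a low-level thickness proxy); the real study uses ERA5 fields.
rng = np.random.default_rng(0)
n = 5000
t2m = rng.normal(0, 4, n)                            # deg C
tw = t2m - rng.uniform(0, 2, n)                      # wet-bulb proxy
thick = 1300 + 2.5 * t2m + rng.normal(0, 5, n)       # thickness proxy
X = np.column_stack([t2m, tw, thick])
y = np.select([tw < -1.0, tw < 1.0], ["snow", "sleet"], default="rain")

# Time-ordered split: the paper notes that randomly sampled validation
# data flatter the model, so we avoid shuffling.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = LGBMClassifier(num_leaves=31, n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("ACC:", accuracy_score(y_te, model.predict(X_te)))
```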

9 pages, 267 KiB  
Article
Comparing Several Gamma Means: An Improved Log-Likelihood Ratio Test
by Augustine Wong
Entropy 2023, 25(1), 111; https://doi.org/10.3390/e25010111 - 05 Jan 2023
Viewed by 1037
Abstract
The two-parameter gamma distribution is one of the most commonly used distributions in analyzing environmental, meteorological, medical, and survival data. It has a two-dimensional minimal sufficient statistic, and the two parameters can be taken to be the mean and shape parameters. This makes it closely comparable to the normal model, but it differs substantially in that the exact distribution for the minimal sufficient statistic is not available. A Bartlett-type correction of the log-likelihood ratio statistic is proposed for the one-sample gamma mean problem and extended to testing for homogeneity of k ≥ 2 independent gamma means. The exact correction factor, in general, does not exist in closed form. In this paper, a simulation algorithm is proposed to obtain the correction factor numerically. Real-life examples and simulation studies are used to illustrate the application and the accuracy of the proposed method. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
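
The simulation idea — rescale the log-likelihood ratio statistic so its null mean matches its χ² degrees of freedom — can be sketched as follows for the one-sample gamma mean test. The sample, the null mean, the optimizer bounds, and the number of Monte Carlo replicates are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, special, stats

def loglik(alpha, mu, x):
    """Gamma log-likelihood with shape alpha and mean mu (rate = alpha/mu)."""
    return np.sum(alpha * np.log(alpha / mu) - special.gammaln(alpha)
                  + (alpha - 1) * np.log(x) - alpha * x / mu)

def lrt_stat(x, mu0):
    """Log-likelihood ratio statistic for H0: mean = mu0 (df = 1)."""
    prof = lambda mu: -optimize.minimize_scalar(
        lambda a: -loglik(a, mu, x), bounds=(1e-3, 1e3), method="bounded").fun
    return 2.0 * (prof(x.mean()) - prof(mu0))  # x.mean() is the unrestricted MLE of mu

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=15)   # small sample; true mean = 3
mu0 = 3.0
W = lrt_stat(x, mu0)

# Monte Carlo Bartlett-type factor: estimate E[W] under H0 at the constrained
# MLE of the shape, then rescale W so its null mean matches df = 1.
alpha0 = optimize.minimize_scalar(
    lambda a: -loglik(a, mu0, x), bounds=(1e-3, 1e3), method="bounded").x
sims = [lrt_stat(rng.gamma(alpha0, mu0 / alpha0, size=len(x)), mu0)
        for _ in range(500)]
W_adj = W / np.mean(sims)
print("p-values: raw %.3f, corrected %.3f"
      % (stats.chi2.sf(W, 1), stats.chi2.sf(W_adj, 1)))
```
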
21 pages, 360 KiB  
Article
Asymptotics of Subsampling for Generalized Linear Regression Models under Unbounded Design
by Guangqiang Teng, Boping Tian, Yuanyuan Zhang and Sheng Fu
Entropy 2023, 25(1), 84; https://doi.org/10.3390/e25010084 - 31 Dec 2022
Viewed by 1162
Abstract
Optimal subsampling is a statistical methodology for generalized linear models (GLMs) that enables quick inference about parameter estimation in massive data regression. The existing literature considers only bounded covariates. In this paper, the asymptotic normality of the subsampling M-estimator based on the Fisher information matrix is obtained. Then, we study the asymptotic properties of subsampling estimators of unbounded GLMs with nonnatural links, including conditional and unconditional asymptotic properties. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
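
A common optimal-subsampling recipe for logistic regression (pilot fit, subsampling probabilities, weighted refit) is sketched below with mVc-style probabilities proportional to |y − p̂|·‖x‖, one standard choice in this literature; the paper's own weights and asymptotic analysis differ. The sketch assumes scikit-learn ≥ 1.2 for penalty=None.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, r = 200_000, 5, 2_000
X = rng.normal(size=(n, d))                          # covariates (possibly unbounded)
beta = np.array([0.5, -1.0, 0.8, 0.0, 0.3])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

# Step 1: pilot estimate from a small uniform subsample.
pilot = rng.choice(n, 2_000, replace=False)
b0 = LogisticRegression(penalty=None).fit(X[pilot], y[pilot]).coef_.ravel()

# Step 2: mVc-style subsampling probabilities |y - p_hat| * ||x||.
p_hat = 1 / (1 + np.exp(-X @ b0))
score = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
pi = score / score.sum()
idx = rng.choice(n, r, p=pi)                         # sampling with replacement

# Step 3: weighted M-estimation; weights 1/pi correct the sampling bias.
model = LogisticRegression(penalty=None)
model.fit(X[idx], y[idx], sample_weight=1.0 / pi[idx])
print("subsample estimate:", np.round(model.coef_.ravel(), 2))
```
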
12 pages, 287 KiB  
Article
Nonparametric Clustering of Mixed Data Using Modified Chi-Squared Tests
by Yawen Xu, Xin Gao and Xiaogang Wang
Entropy 2022, 24(12), 1749; https://doi.org/10.3390/e24121749 - 29 Nov 2022
Viewed by 980
Abstract
We propose a non-parametric method to cluster mixed data containing both continuous and discrete random variables. The product space of the continuous and discrete sample spaces is transformed into a new product space based on adaptive quantization of the continuous part. Detection of cluster patterns on the product space is determined locally by using a weighted modified chi-squared test. Our algorithm does not require any user input, since the number of clusters is determined automatically by the data. Simulation studies and real data analysis results show that our proposed method outperforms the benchmark method, AutoClass, in various settings. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
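
One ingredient of such a method — quantile-based quantization of the continuous part followed by a chi-squared test on the resulting product-space cells — is sketched below. This shows only the testing ingredient, not the full clustering algorithm or the authors' local weighting scheme.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Mixed data: one continuous and one discrete variable with two latent groups.
g = rng.integers(0, 2, 600)
cont = rng.normal(2.0 * g, 1.0)
disc = np.where(rng.uniform(size=600) < 0.2 + 0.6 * g, "A", "B")

# Quantile-based quantization of the continuous part: cell boundaries
# follow the data rather than a fixed grid.
q = pd.qcut(cont, q=4, labels=False)

# Chi-squared test on the product space of quantized x discrete cells;
# strong dependence signals joint cluster structure.
table = pd.crosstab(q, disc).to_numpy()
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2e}")
```
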
14 pages, 812 KiB  
Article
Mildly Explosive Autoregression with Strong Mixing Errors
by Xian Liu, Xiaoqin Li, Min Gao and Wenzhi Yang
Entropy 2022, 24(12), 1730; https://doi.org/10.3390/e24121730 - 26 Nov 2022
Viewed by 1128
Abstract
In this paper, we consider the mildly explosive autoregression y_t = ρ_n y_{t−1} + u_t, 1 ≤ t ≤ n, where ρ_n = 1 + c/n^ν, c > 0, ν ∈ (0, 1), and u_1, …, u_n are arithmetically α-mixing errors. Under some weak conditions, such as Eu_1 = 0, E|u_1|^{4+δ} < ∞ for some δ > 0, and mixing coefficients α(n) = O(n^{−(2+8/δ)}), the Cauchy limiting distribution is established for the least squares (LS) estimator ρ̂_n of ρ_n, which extends the cases of independent errors and geometrically α-mixing errors. Some simulations for ρ̂_n, such as the empirical probability of the confidence interval and the empirical density, are presented to illustrate the Cauchy limiting distribution and show good finite-sample performance. In addition, we use the Cauchy limiting distribution of the LS estimator ρ̂_n to analyze real data from the NASDAQ composite index from April 2011 to April 2021. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
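
The model is straightforward to simulate, which also illustrates how the LS estimator concentrates around the mildly explosive root. The AR(1)-type error process and the constants below are illustrative stand-ins for the paper's arithmetically α-mixing errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, nu = 500, 1.0, 0.5
rho_n = 1.0 + c / n**nu                              # mildly explosive root

# Dependent (mixing-type) errors: a simple AR(1) innovation process.
e = rng.normal(size=n + 1)
u = np.empty(n + 1)
u[0] = e[0]
for t in range(1, n + 1):
    u[t] = 0.3 * u[t - 1] + e[t]

y = np.empty(n + 1)
y[0] = 0.0
for t in range(1, n + 1):
    y[t] = rho_n * y[t - 1] + u[t]

# LS estimator: sum y_t y_{t-1} / sum y_{t-1}^2.
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print(f"rho_n={rho_n:.5f}, rho_hat={rho_hat:.5f}")
```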

23 pages, 535 KiB  
Article
A New Class of Weighted CUSUM Statistics
by Xiaoping Shi, Xiang-Sheng Wang and Nancy Reid
Entropy 2022, 24(11), 1652; https://doi.org/10.3390/e24111652 - 14 Nov 2022
Cited by 1 | Viewed by 1259
Abstract
A change point is a location or time at which observations or data obey two different models: before and after. In real problems, we may know some prior information about the location of the change point, say at the right or left tail of the sequence. How does one incorporate this prior information into the current cumulative sum (CUSUM) statistics? We propose a new class of weighted CUSUM statistics with three different types of quadratic weights accounting for different prior positions of the change points. One interpretation of the weights is the mean duration in a random walk. Under the normal model with known variance, the exact distributions of these statistics are explicitly expressed in terms of eigenvalues. Theoretical results on the explicit differences between the distributions are valuable in their own right. The expansions of the asymptotic distributions are compared with the expansions of the limit distributions of the Cramér–von Mises statistic and the Anderson–Darling statistic. We provide some extensions from independent normal responses to more interesting models, such as graphical models, the mixture of normals, Poisson, and weakly dependent models. Simulations suggest that the proposed test statistics have better power than graph-based statistics. We illustrate their application to a detection problem with video data. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
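
A generic weighted CUSUM is sketched below. The quadratic weight used here simply emphasizes a change expected near the right tail and is illustrative, not the specific class of weights derived in the paper.

```python
import numpy as np

def weighted_cusum(x, weight):
    """max_k w(k/n) * |S_k - (k/n) S_n| / (sigma_hat * sqrt(n));
    weight = lambda s: 1.0 recovers the classical CUSUM."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    dev = np.abs(s - (k / n) * s[-1]) / (np.std(x, ddof=1) * np.sqrt(n))
    return np.max(weight(k / n) * dev)

rng = np.random.default_rng(0)
# A late change in mean, matching a right-tail prior on its location.
x = np.concatenate([rng.normal(0, 1, 180), rng.normal(1.0, 1, 20)])

print("unweighted:        ", round(weighted_cusum(x, lambda s: 1.0), 2))
print("right-tail weighted:", round(weighted_cusum(x, lambda s: s * (2 - s)), 2))
```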

18 pages, 1166 KiB  
Article
Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning and L2,1-Norm
by Jiaxuan Xu, Jiang Wu, Taiyong Li and Yang Nan
Entropy 2022, 24(10), 1324; https://doi.org/10.3390/e24101324 - 21 Sep 2022
Cited by 3 | Viewed by 1326
Abstract
Accurate clustering is a challenging task with unlabeled data. Ensemble clustering aims to combine sets of base clusterings to obtain a better and more stable clustering and has shown its ability to improve clustering accuracy. Dense representation ensemble clustering (DREC) and entropy-based locally weighted ensemble clustering (ELWEC) are two typical methods for ensemble clustering. However, DREC treats each microcluster equally and, hence, ignores the differences between microclusters, while ELWEC conducts clustering on clusters rather than microclusters and ignores the sample–cluster relationship. To address these issues, a divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL) is proposed in this paper. Specifically, the DLWECDL consists of four phases. First, the clusters from the base clusterings are used to generate microclusters. Second, a Kullback–Leibler divergence-based ensemble-driven cluster index is used to measure the weight of each microcluster. With these weights, an ensemble clustering algorithm with dictionary learning and the L2,1-norm is employed in the third phase. Meanwhile, the objective function is solved by optimizing four subproblems, and a similarity matrix is learned. Finally, a normalized cut (Ncut) is used to partition the similarity matrix and obtain the ensemble clustering results. In this study, the proposed DLWECDL was validated on 20 widely used datasets and compared to some other state-of-the-art ensemble clustering methods. The experimental results demonstrate that the proposed DLWECDL is a very promising method for ensemble clustering. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
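
The microcluster construction, together with an entropy-based local weight, is sketched below as a rough stand-in for the paper's KL-divergence-based ensemble-driven cluster index; the dictionary-learning and Ncut phases are omitted, and the weight formula here is an assumption for illustration.

```python
import numpy as np

def microcluster_weights(base_labels, theta=0.5):
    """Microclusters = groups of samples with identical label vectors across
    all base clusterings. Each microcluster is weighted by the average
    entropy, over base clusterings, of the cluster containing it (how its
    members scatter in the other partitions): low entropy -> high weight."""
    L = np.asarray(base_labels)                  # shape (n_samples, n_base)
    n, B = L.shape
    keys, inverse = np.unique(L, axis=0, return_inverse=True)
    weights = np.empty(len(keys))
    for m, key in enumerate(keys):
        ent = 0.0
        for j in range(B):
            members = L[L[:, j] == key[j]]       # cluster holding m in partition j
            for k in range(B):
                if k == j:
                    continue
                _, cnt = np.unique(members[:, k], return_counts=True)
                p = cnt / cnt.sum()
                ent -= (p * np.log(p)).sum()
        weights[m] = np.exp(-ent / (B * max(B - 1, 1) * theta))
    return keys, inverse, weights

# Toy example: 5 samples, 2 base clusterings.
labels = np.array([[0, 1], [0, 1], [0, 2], [1, 2], [1, 2]])
keys, inv, w = microcluster_weights(labels)
print(keys)
print(np.round(w, 3))
```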

30 pages, 633 KiB  
Article
Assessing, Testing and Estimating the Amount of Fine-Tuning by Means of Active Information
by Daniel Andrés Díaz-Pachón and Ola Hössjer
Entropy 2022, 24(10), 1323; https://doi.org/10.3390/e24101323 - 21 Sep 2022
Cited by 7 | Viewed by 1529
Abstract
A general framework is introduced to estimate how much external information has been infused into a search algorithm, the so-called active information. This is rephrased as a test of fine-tuning, where tuning corresponds to the amount of pre-specified knowledge that the algorithm makes use of in order to reach a certain target. A function f quantifies specificity for each possible outcome x of a search, so that the target of the algorithm is a set of highly specified states, whereas fine-tuning occurs if it is much more likely for the algorithm to reach the target as intended than by chance. The distribution of a random outcome X of the algorithm involves a parameter θ that quantifies how much background information has been infused. A simple choice of this parameter is to use θf in order to exponentially tilt the distribution of the outcome of the search algorithm under the null distribution of no tuning, so that an exponential family of distributions is obtained. Such algorithms are obtained by iterating a Metropolis–Hastings type of Markov chain, which makes it possible to compute their active information under the equilibrium and non-equilibrium of the Markov chain, with or without stopping when the targeted set of fine-tuned states has been reached. Other choices of tuning parameters θ are discussed as well. Nonparametric and parametric estimators of active information and tests of fine-tuning are developed when repeated and independent outcomes of the algorithm are available. The theory is illustrated with examples from cosmology, student learning, reinforcement learning, a Moran type model of population genetics, and evolutionary programming. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
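
Exponential tilting and the resulting active information are easy to compute on a finite search space. In the sketch below, the uniform null distribution, the indicator specificity function, and the target set are illustrative assumptions.

```python
import numpy as np

# Finite search space: p0 is the no-tuning (null) distribution, f a
# specificity function, and p_theta(x) ∝ p0(x) * exp(theta * f(x)).
states = np.arange(10)
p0 = np.full(10, 0.1)                        # uniform null
f = (states >= 8).astype(float)              # target = highly specified states

def tilt(p0, f, theta):
    w = p0 * np.exp(theta * f)
    return w / w.sum()                       # exponentially tilted distribution

target = f == 1.0
for theta in [0.0, 1.0, 3.0]:
    p = tilt(p0, f, theta)
    # Active information: log ratio of target probabilities under tuning
    # versus under the null distribution.
    I_plus = np.log(p[target].sum() / p0[target].sum())
    print(f"theta={theta}: P(target)={p[target].sum():.3f}, I+={I_plus:.3f} nats")
```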

Review


14 pages, 4491 KiB  
Review
Fuzzy C-Means Clustering: A Review of Applications in Breast Cancer Detection
by Daniel Krasnov, Dresya Davis, Keiran Malott, Yiting Chen, Xiaoping Shi and Augustine Wong
Entropy 2023, 25(7), 1021; https://doi.org/10.3390/e25071021 - 04 Jul 2023
Cited by 4 | Viewed by 2197
Abstract
This paper reviews the potential use of fuzzy c-means clustering (FCM) and explores modifications to the distance function and centroid initialization methods to enhance image segmentation. The application of interest in the paper is the segmentation of breast tumours in mammograms. Breast cancer is the second leading cause of cancer deaths in Canadian women. Early detection reduces treatment costs and offers a favourable prognosis for patients. Classical methods, like mammograms, rely on radiologists to detect cancerous tumours, which introduces the potential for human error in cancer detection. Classical methods are labour-intensive and, hence, expensive in terms of healthcare resources. Recent research supplements classical methods with automated mammogram analysis. The basic FCM method relies upon the Euclidean distance, which is not optimal for measuring non-spherical structures. To address these limitations, we review the implementation of a Mahalanobis-distance-based FCM (FCM-M). The three objectives of the paper are: (1) review FCM, FCM-M, and three centroid initialization algorithms in the literature; (2) illustrate the effectiveness of these algorithms in image segmentation; and (3) develop a Python package with the optimized algorithms and upload it to GitHub. Image analysis of the algorithms shows that using one of the three centroid initialization algorithms enhances the performance of FCM. FCM-M produced higher clustering accuracy and outlined the tumour structure better than basic FCM. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
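
The basic FCM updates, with an optional Mahalanobis-style distance, are sketched below. For simplicity, the Mahalanobis variant uses a single pooled covariance, whereas the FCM-M reviewed in the paper adapts the metric; initialization here is random rather than one of the three centroid initialization algorithms discussed.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, mahalanobis=False, seed=0):
    """Alternate fuzzy membership and centroid updates (fuzzifier m > 1)."""
    rng = np.random.default_rng(seed)
    S_inv = (np.linalg.inv(np.cov(X, rowvar=False)) if mahalanobis
             else np.eye(X.shape[1]))
    U = rng.dirichlet(np.ones(c), size=len(X))       # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted centroids
        d = X[:, None, :] - V[None, :, :]            # (n, c, dim) differences
        dist2 = np.einsum("ncj,jk,nck->nc", d, S_inv, d) + 1e-12
        # Standard FCM membership update written in squared distances.
        U = 1.0 / (dist2 ** (1 / (m - 1))
                   * np.sum(dist2 ** (-1 / (m - 1)), axis=1, keepdims=True))
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
U, V = fcm(X, c=2)
print("centroids:\n", np.round(V, 2))
```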

Other

14 pages, 3821 KiB  
Essay
Detecting Structural Change Point in ARMA Models via Neural Network Regression and LSCUSUM Methods
by Xi-hame Ri, Zhanshou Chen and Yan Liang
Entropy 2023, 25(1), 133; https://doi.org/10.3390/e25010133 - 09 Jan 2023
Viewed by 1112
Abstract
This study considers the change point testing problem in autoregressive moving average (ARMA) (p,q) models through the location and scale-based cumulative sum (LSCUSUM) method combined with neural network regression (NNR). We estimated the model parameters via the NNR method based on the training sample, where a long AR model was fitted to obtain the residuals. Then, we selected the optimal model orders p and q of the ARMA models using the Akaike information criterion based on a validation set. Finally, we used the forecasting errors obtained from the selected model to construct the LSCUSUM test. Extensive simulations and their application to three real datasets show that the proposed NNR-based LSCUSUM test performs well. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
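
A generic location/scale CUSUM on one-step forecast errors is sketched below; it follows the spirit, not the exact construction, of the LSCUSUM test, and the simulated errors stand in for the NNR forecasting residuals.

```python
import numpy as np

def ls_cusum(e):
    """Maximum of standardized CUSUMs of the forecast errors (location part)
    and of their squares (scale part) — a generic location/scale statistic."""
    n = len(e)
    k = np.arange(1, n + 1)
    loc = np.abs(np.cumsum(e) - k / n * e.sum()) / (e.std(ddof=1) * np.sqrt(n))
    s2 = e ** 2
    scale = np.abs(np.cumsum(s2) - k / n * s2.sum()) / (s2.std(ddof=1) * np.sqrt(n))
    return max(loc.max(), scale.max())

rng = np.random.default_rng(0)
# Forecast errors that are white noise before t = 150, then shift in mean.
e = np.concatenate([rng.normal(0, 1, 150), rng.normal(1.2, 1, 50)])
print("LSCUSUM-type statistic:", round(ls_cusum(e), 2))
```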
