Algorithms in Monte Carlo Methods

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Analysis of Algorithms and Complexity Theory".

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 5023

Special Issue Editors


Guest Editor
Department of Mathematics and Statistics, Macquarie University, Sydney, NSW 2109, Australia
Interests: statistical modeling; change-point problems; Markov chain Monte Carlo methods; cross-entropy method; optimal stopping rules

Guest Editor
Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, P.O. Box 830688, MS-EC31 Richardson, TX 75083-0688, USA
Interests: communication networks and their protocols; network design/analysis methods; algorithms; complexity

Special Issue Information

Dear Colleagues,

Monte Carlo methods, in the most general sense, encompass algorithms characterized by two key features: (1) they use random choices in their operation; and (2) they do not insist on a deterministically correct result. A Monte Carlo method may produce a solution that is correct only with high probability (but not with certainty), with respect to the internal random choices of the algorithm.

Such an algorithm can have a variety of goals. It may solve a deterministic problem, for example, finding a minimum cut in a graph. It may look for an approximate solution of a numerical task, including optimization and numerical integration. The target can also be inherently random, such as drawing a random sample from a complicated probability distribution.
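As a minimal illustration of the numerical-integration goal mentioned above, the following sketch estimates a definite integral by averaging the integrand at uniformly random points; the function name and parameters are illustrative, not taken from any particular paper.

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Estimate the integral of x^2 over [0, 1], whose exact value is 1/3.
# The standard error shrinks like O(1/sqrt(n)), so the answer is correct
# only with high probability -- the hallmark of a Monte Carlo method.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```
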

The key issue in designing a Monte Carlo method is to find an algorithm that can capitalize on the random choices to efficiently produce a result that is correct with high probability. In some cases, such an algorithm is completely problem specific; examples include the Karger–Stein minimum cut algorithm and various randomized primality tests. However, there are also useful general schemes that provide solutions to a multitude of tasks, including general-purpose randomized optimization methods such as simulated annealing, genetic algorithms, solutions for optimal stopping, change-point detection, the Moser–Tardos algorithm, and so on. Another class of methods aims at producing random samples from complex probability distributions; examples include rejection sampling, importance sampling, the cross-entropy method, Markov chain Monte Carlo algorithms, and others.
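Of the general sampling schemes just listed, rejection sampling is perhaps the simplest to state: propose from an easy distribution and accept with probability proportional to the target density. A minimal sketch, with an illustrative Beta(2, 2) target and uniform proposals (all names and constants are our own, not from any submission):

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, c, rng):
    """Draw one sample from target_pdf, assuming target_pdf(x) <= c * proposal_pdf(x)."""
    while True:
        x = proposal_sample(rng)
        # Accept x with probability target(x) / (c * proposal(x)), which is <= 1.
        if rng.random() <= target_pdf(x) / (c * proposal_pdf(x)):
            return x

# Example: Beta(2, 2) density 6x(1-x) on [0, 1] with uniform proposals.
# The density peaks at 1.5, so the envelope constant c = 1.5 suffices.
rng = random.Random(1)
beta22 = lambda x: 6.0 * x * (1.0 - x)
xs = [rejection_sample(beta22, lambda r: r.random(), lambda x: 1.0, 1.5, rng)
      for _ in range(20_000)]
mean = sum(xs) / len(xs)  # should be close to the Beta(2,2) mean, 0.5
```

The acceptance rate is 1/c, so a tight envelope constant matters; for heavy-tailed or high-dimensional targets this scheme degrades, which is one motivation for the importance-sampling and MCMC methods also mentioned above.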

This is a broad and rich research area. Topics of interest include, but are not limited to, the following:

  • Randomized algorithms for graph problems and combinatorial optimization;
  • Randomized algorithms for continuous optimization;
  • Randomized algorithms for numerical problems;
  • Methods for generating random samples from complex probability distributions;
  • Optimal stopping rules;
  • Change-point detection methods;
  • Markov chain Monte Carlo algorithms;
  • Estimation of mixing rate in Markov chains;
  • Computational complexity issues relating to Monte Carlo methods;
  • Case studies of solving practical problems via Monte Carlo methods.
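Since Markov chain Monte Carlo appears both in the schemes above and in the topic list, here is a minimal random-walk Metropolis sketch targeting a standard normal; the step size and chain length are illustrative choices, not recommendations.

```python
import math
import random

def metropolis(log_density, x0, step=1.0, n=50_000, seed=0):
    """Random-walk Metropolis sampler targeting exp(log_density); returns the chain."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(y)/pi(x)), computed in log space.
        if rng.random() < math.exp(min(0.0, log_density(y) - log_density(x))):
            x = y
        chain.append(x)
    return chain

# Target a standard normal: log-density -x^2/2 up to a constant.
# After discarding a burn-in prefix, the sample variance should approach 1.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
tail = chain[10_000:]
var = sum(v * v for v in tail) / len(tail)
```

The quality of such a chain is exactly what the "estimation of mixing rate in Markov chains" topic above is about: correlated samples mean the effective sample size is far below the chain length.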

Dr. Georgy Sofronov
Prof. Dr. Andras Farago
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • randomized algorithms
  • graph problems
  • continuous optimization
  • numerical problems
  • generating random samples
  • complex probability distributions
  • optimal stopping rules
  • change-point detection methods
  • Markov chain Monte Carlo algorithms
  • estimation of mixing rate in Markov chains
  • Monte Carlo methods

Published Papers (3 papers)


Research

15 pages, 1356 KiB  
Article
A Discrete Partially Observable Markov Decision Process Model for the Maintenance Optimization of Oil and Gas Pipelines
by Ezra Wari, Weihang Zhu and Gino Lim
Algorithms 2023, 16(1), 54; https://doi.org/10.3390/a16010054 - 12 Jan 2023
Cited by 6 | Viewed by 1711
Abstract
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion growth, and implementing optimal maintenance policies. This paper proposes a partially observable Markov decision process (POMDP) model for optimizing maintenance based on the corrosion progress, which is monitored by an inline inspection to assess the extent of pipeline corrosion. The states are defined by dividing the deterioration range equally, whereas the actions are determined based on the specific states and pipeline attributes. Monte Carlo simulation and a pure birth Markov process method are used for computing the transition matrix. The costs of maintenance and failure are considered when calculating the rewards. Reading distortion caused by the inline inspection methods and tool measurement errors is used to formulate the observations and the observation function. The model is demonstrated with two numerical examples constructed based on problems and parameters in the literature. The result shows that the proposed model performs well with the added advantage of integrating measurement errors and recommending actions for multiple-state situations. Overall, this discrete model can serve the maintenance decision-making process by better representing the stochastic features. Full article
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)
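The Monte Carlo step in the abstract above, computing a transition matrix by simulation, can be sketched generically as follows. This is not the authors' model: the four-state pure-birth deterioration rule and its parameters are hypothetical stand-ins.

```python
import random

def estimate_transition_matrix(step_fn, n_states, n_sims=20_000, seed=0):
    """Estimate P[i][j] = Pr(next state j | current state i) by repeated simulation.
    step_fn(state, rng) is any stochastic one-step deterioration model."""
    rng = random.Random(seed)
    counts = [[0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        for _ in range(n_sims):
            counts[i][step_fn(i, rng)] += 1
    return [[c / n_sims for c in row] for row in counts]

# Hypothetical pure-birth deterioration: stay put with probability 0.8,
# otherwise move one state worse; the last state is absorbing (failure).
def birth_step(state, rng, n_states=4, p_stay=0.8):
    if state == n_states - 1 or rng.random() < p_stay:
        return state
    return state + 1

P = estimate_transition_matrix(birth_step, n_states=4)
```

Each row of the estimated matrix sums to one by construction, and the accuracy of each entry improves at the usual O(1/sqrt(n_sims)) Monte Carlo rate.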

20 pages, 489 KiB  
Article
Coordinate Descent for Variance-Component Models
by Anant Mathur, Sarat Moka and Zdravko Botev
Algorithms 2022, 15(10), 354; https://doi.org/10.3390/a15100354 - 28 Sep 2022
Viewed by 1474
Abstract
Variance-component models are an indispensable tool for statisticians wanting to capture both random and fixed model effects. They have applications in a wide range of scientific disciplines. While maximum likelihood estimation (MLE) is the most popular method for estimating the variance-component model parameters, it is numerically challenging for large data sets. In this article, we consider the class of coordinate descent (CD) algorithms for computing the MLE. We show that a basic implementation of coordinate descent is numerically costly to implement and does not easily satisfy the standard theoretical conditions for convergence. We instead propose two parameter-expanded versions of CD, called PX-CD and PXI-CD. These novel algorithms not only converge faster than existing competitors (MM and EM algorithms) but are also more amenable to convergence analysis. PX-CD and PXI-CD are particularly well-suited for large data sets—namely, as the scale of the model increases, the performance gap between the parameter-expanded CD algorithms and the current competitor methods increases. Full article
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)
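For readers unfamiliar with the coordinate descent family the paper above builds on, here is a generic sketch of the basic cyclic scheme: exact one-dimensional updates applied coordinate by coordinate. The toy quadratic objective is our own illustration, not the paper's PX-CD or PXI-CD algorithms.

```python
def coordinate_descent(grad_i, hess_i, x0, n_sweeps=100):
    """Minimize a smooth function by exact one-dimensional Newton updates,
    cycling through the coordinates (the basic CD scheme)."""
    x = list(x0)
    for _ in range(n_sweeps):
        for i in range(len(x)):
            # Exact minimization along coordinate i for a quadratic objective.
            x[i] -= grad_i(x, i) / hess_i(x, i)
    return x

# Toy objective: f(x) = x0^2 + x1^2 + x0*x1, minimized at (0, 0).
grad = lambda x, i: 2 * x[i] + x[1 - i]   # partial derivative in coordinate i
hess = lambda x, i: 2.0                   # second derivative in coordinate i
x_star = coordinate_descent(grad, hess, [3.0, -2.0])
```

For the variance-component MLE, each such one-dimensional step is itself expensive, which is the numerical cost the parameter-expanded variants in the paper are designed to reduce.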

20 pages, 1305 KiB  
Article
Estimating Tail Probabilities of Random Sums of Phase-Type Scale Mixture Random Variables
by Hui Yao and Thomas Taimre
Algorithms 2022, 15(10), 350; https://doi.org/10.3390/a15100350 - 27 Sep 2022
Cited by 2 | Viewed by 1102
Abstract
We consider the problem of estimating tail probabilities of random sums of scale mixtures of phase-type distributions: a class of distributions corresponding to random variables that can be represented as the product of a non-negative but otherwise arbitrary random variable with a phase-type random variable. Our motivation arises from applications in risk and queueing, such as estimating ruin probabilities and waiting time distributions. Mixtures of distributions are flexible models and can be exploited in modelling non-life insurance loss amounts. Classical rare-event simulation algorithms cannot be implemented in this setting because these methods typically rely on the availability of the cumulative distribution function or the moment generating function, which are difficult to compute, or not even available, for the class of scale mixtures of phase-type distributions. This paper addresses these issues by proposing alternative simulation methods for estimating tail probabilities of random sums of scale mixtures of phase-type distributions that combine importance sampling and conditional Monte Carlo methods, showing the efficiency of the proposed estimators for a wide class of scaling distributions, and validating the empirical performance of the suggested methods via numerical experimentation. Full article
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)
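The conditional Monte Carlo idea used in the paper above can be illustrated in the simplest phase-type case, a single exponential factor: conditioning on the scaling variable makes the remaining tail probability available in closed form, which typically reduces variance dramatically compared with crude simulation. The scaling distribution below is a hypothetical choice for illustration only.

```python
import math
import random

def tail_prob_conditional_mc(sample_scale, t, n=50_000, seed=0):
    """Conditional Monte Carlo estimate of P(W * X > t) with X ~ Exp(1),
    the simplest phase-type law: given W = w, P(X > t/w) = exp(-t/w),
    so we average this exact conditional tail over draws of W."""
    rng = random.Random(seed)
    return sum(math.exp(-t / sample_scale(rng)) for _ in range(n)) / n

# Hypothetical scaling distribution: W uniform on [1, 2].
p = tail_prob_conditional_mc(lambda r: r.uniform(1.0, 2.0), t=5.0)
```

Because each term of the average lies in a narrow range here, the estimator's variance is far below that of the indicator-based crude estimate; for heavy-tailed scaling distributions the paper combines this conditioning with importance sampling.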
