High-Speed Computing and Parallel Algorithms

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 July 2024

Special Issue Editor


Dr. Ivan Lirkov
Guest Editor
Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
Interests: high-speed computing and parallel algorithms; computational linear algebra; numerical methods for partial differential equations

Special Issue Information

Dear Colleagues,

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. Parallelism, the capability of a computer to execute operations concurrently, has been a constant throughout the history of computing, and it affects hardware, software, theory, and applications alike. Supercomputers owe their performance advantage to parallelism. Today, physical limitations have made parallelism the preeminent strategy by which computer manufacturers obtain performance gains across all classes of machines, from embedded and mobile systems to the most powerful servers. Parallelism is crucial for many applications in the sciences and engineering.
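As a small, self-contained illustration of this divide-and-solve idea (a generic sketch, not tied to any topic or submission of this Special Issue), the following C++ fragment splits one large summation into independent chunks and processes them on the threads available on the host machine; the problem size and chunking scheme are arbitrary example choices:

```cpp
// Minimal sketch: one large reduction split into smaller, independent chunks
// that are solved simultaneously. Problem size and chunking are arbitrary.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 22;        // size of the "large problem"
    std::vector<double> data(n, 1.0);     // toy data: the sum should equal n

    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> workers;

    // Divide the problem into contiguous chunks, one per thread.
    const std::size_t chunk = (n + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = std::min(n, t * chunk);
            const std::size_t end = std::min(n, begin + chunk);
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    // Combine the partial solutions of the smaller problems.
    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << '\n';
    return 0;
}
```

The same pattern, partitioning a problem, solving the parts concurrently, and combining the results, underlies far more sophisticated parallel algorithms on clusters, accelerators, and GPUs.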

The focus of this Special Issue will be on models, algorithms, and software tools that facilitate the efficient and convenient use of modern parallel and distributed computing architectures, as well as on large-scale applications.

Topics of interest include, but are not limited to, the following:

  • Parallel/distributed architectures, enabling technologies;
  • Quantum computing and communication;
  • Cluster, cloud, edge, and fog computing;
  • Multi-core and many-core parallel computing, GPU computing;
  • Heterogeneous/hybrid computing and accelerators;
  • Parallel/distributed algorithms;
  • Performance analysis;
  • Performance issues on various types of parallel systems;
  • Auto-tuning and auto-parallelization: methods, tools, and applications;
  • Parallel/distributed programming;
  • Tools and environments for parallel/distributed computing;
  • HPC numerical linear algebra;
  • HPC methods of solving differential equations;
  • Applications of parallel/distributed computing;
  • Methods and tools for the parallel solution of large-scale problems.

Dr. Ivan Lirkov
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • parallel computing
  • GPU computing
  • parallelization on HPC platforms
  • performance analysis
  • multi-/many-core platforms
  • cache management

Published Papers (1 paper)


Research

18 pages, 427 KiB  
Article
Efficient Automatic Subdifferentiation for Programs with Linear Branches
by Sejun Park
Mathematics 2023, 11(23), 4858; https://doi.org/10.3390/math11234858 - 03 Dec 2023
Abstract
Computing an element of the Clarke subdifferential of a function represented by a program is an important problem in modern non-smooth optimization. Existing algorithms either are computationally inefficient in the sense that the computational cost depends on the input dimension or can only cover simple programs such as polynomial functions with branches. In this work, we show that a generalization of the latter algorithm can efficiently compute an element of the Clarke subdifferential for programs consisting of analytic functions and linear branches, which can represent various non-smooth functions such as max, absolute values, and piecewise analytic functions with linear boundaries, as well as any program consisting of these functions such as neural networks with non-smooth activation functions. Our algorithm first finds a sequence of branches used for computing the function value at a random perturbation of the input; then, it returns an element of the Clarke subdifferential by running the backward pass of the reverse-mode automatic differentiation following those branches. The computational cost of our algorithm is at most that of the function evaluation multiplied by some constant independent of the input dimension n, if a program consists of piecewise analytic functions defined by linear branches, whose arities and maximum depths of branches are independent of n.
(This article belongs to the Special Issue High-Speed Computing and Parallel Algorithms)
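To make the two-step procedure summarized in the abstract concrete, the following is a minimal hand-written sketch of the branch-following idea on a toy program with two linear branches (a max and an absolute value). It is not the paper's implementation: the toy function, the perturbation scale, and the names Trace, evaluate, and backward are invented for illustration only.

```cpp
// Hedged sketch of the branch-following idea: evaluate at a random
// perturbation to fix the branches, then differentiate the selected
// analytic program at the original input. NOT the paper's code.
#include <array>
#include <iostream>
#include <random>

struct Trace { bool take_product; bool nonneg; };   // recorded branch decisions

// Toy program f(x0, x1) = |max(x0*x1, x0 + x1)|, recording its branches.
double evaluate(const std::array<double, 2>& x, Trace& tr) {
    const double a = x[0] * x[1];
    const double b = x[0] + x[1];
    tr.take_product = (a >= b);                     // linear branch 1: the max
    const double y = tr.take_product ? a : b;
    tr.nonneg = (y >= 0.0);                         // linear branch 2: the abs
    return tr.nonneg ? y : -y;
}

// Reverse-mode pass at the original input, following the recorded branches.
std::array<double, 2> backward(const std::array<double, 2>& x, const Trace& tr) {
    const double dz_dy = tr.nonneg ? 1.0 : -1.0;    // |.| on its selected branch
    if (tr.take_product)                             // y = x0 * x1 on this branch
        return {dz_dy * x[1], dz_dy * x[0]};
    return {dz_dy, dz_dy};                           // y = x0 + x1 on this branch
}

int main() {
    // The two arguments of the max tie at this point, so f is non-smooth here.
    const std::array<double, 2> x = {2.0, 2.0};

    // Step 1: fix the branches by evaluating at a small random perturbation.
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 1e-9);
    const std::array<double, 2> xp = {x[0] + noise(rng), x[1] + noise(rng)};
    Trace tr{};
    evaluate(xp, tr);

    // Step 2: differentiate the branch-selected analytic program at x itself.
    const auto g = backward(x, tr);
    std::cout << "subgradient: (" << g[0] << ", " << g[1] << ")\n";
    return 0;
}
```

At this tie point the sketch returns either (2, 2) or (1, 1) depending on the perturbation; both are elements of the Clarke subdifferential, and the cost is a small constant multiple of one function evaluation, independent of the input dimension, mirroring the guarantee stated in the abstract.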
