Review

Review of Methods, Applications and Publications on the Approximation of Piecewise Linear and Generalized Functions

by Sergei Aliukov 1,*, Anatoliy Alabugin 2 and Konstantin Osintsev 3
1 Department of Automotive Engineering, Institute of Engineering and Technology, South Ural State University, 76 Prospekt Lenina, 454080 Chelyabinsk, Russia
2 Department of Digital Economy and Information Technology, School of Economics and Management, South Ural State University, 76 Prospekt Lenina, 454080 Chelyabinsk, Russia
3 Department of Energy and Power Engineering, Institute of Engineering and Technology, South Ural State University, 76 Prospekt Lenina, 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 3023; https://doi.org/10.3390/math10163023
Submission received: 18 July 2022 / Revised: 16 August 2022 / Accepted: 19 August 2022 / Published: 22 August 2022
(This article belongs to the Section Dynamical Systems)

Abstract: Approximation of piecewise linear and generalized functions is an important and difficult problem. These functions are widely used in mathematical modeling of various processes and systems in fields such as automatic control theory, electrical engineering, radio engineering, information theory and the transmission of signals and images, equations of mathematical physics, oscillation theory, differential equations and many others. The widespread use of such functions is explained by their positive properties. For example, piecewise linear functions are characterized by a simple structure over segments. However, these functions also have disadvantages. For example, when piecewise linear functions are used, solutions have to be built segment by segment. In this case, the problem of matching the obtained solutions at the boundaries of the segments arises, which complicates the research results. The use of generalized functions has similar disadvantages. To eliminate these shortcomings in practice, one resorts to the approximation of these functions. There are a large number of well-known methods for approximating piecewise linear and generalized functions. Recently, new methods for their approximation have been developed. In this study, an attempt is made to generalize and discuss the existing methods for approximating the considered functions. Particular emphasis is placed on the description of new approximation methods and their applications in various fields of science and technology. The publication-based review discusses the strengths and weaknesses of each method, compares them, and considers suitable application examples. The review will undoubtedly be of interest not only to mathematicians, but also to specialists and scientists working in various applied fields of research.

1. Introduction

In many areas of mathematics and its applications, the problem often arises of replacing functions that are complex or inconvenient in one sense or another with simpler ones that are close to the original. This problem is called the approximation of functions. A number of studies have been devoted to solving this problem, and a large amount of scientific and educational literature has been published; as an example, one can point to the fundamental books [1,2,3,4] and many others.
The review considers methods of approximation of piecewise linear and generalized functions [5,6,7,8,9,10,11,12,13,14,15,16,17], which are universal and find wide application for solving a variety of problems related to mathematical modeling of systems, processes and phenomena described using such functions.
Piecewise linear and generalized functions are widely used in various fields of scientific research. Their traditional fields of application are technical and mathematical disciplines, for example, automatic control theory, electrical engineering, radio engineering, information theory and the transmission of signals and images, equations of mathematical physics, oscillation theory, differential equations and many others [18,19,20,21,22,23,24]. Such functions are used, for example, to describe the dynamics of mechanical systems with nonlinear elasticities, the elastic-dissipative characteristics of vehicle suspensions, systems with loads of the "dry friction" type, impulse transformations during the transmission and reception of signals, distributed and concentrated loads, and many other processes.
The widespread use of piecewise linear functions is explained by the simplicity of their structure on individual sections. On each section, these functions are straight lines or their segments, which in many cases allows one to obtain solutions using the methods of the theory of linear systems. At the same time, problems often arise when constructing solutions over the entire domain of definition of piecewise linear functions, since linking the solutions obtained on the sections requires special mathematical methods. Systems with piecewise linear characteristics and functions are referred to as essentially nonlinear structures, which emphasizes the complexity of obtaining solutions for such structures. Despite the simplicity of piecewise linear functions on the sections, the construction of solutions over the entire domain of definition requires special mathematical methods, for example, the matching method [25], with the need to match solutions on the sections and switching surfaces. Thus, when constructing periodic solutions, it is necessary to monitor the fulfillment of the conditions for the transition of the system from section to section, fixing the values of the variables at the end of the previous section and taking these values as the initial conditions for the next section. The need to match the values of the variables over the sections, as well as at the beginning and end of the cycle, leads to a cumbersome system of transcendental equations. Therefore, the application of the matching method often requires overcoming significant mathematical difficulties, and even if a periodic solution can be found in analytical form, it is usually obtained in the form of complex expressions. In addition, the derivatives of piecewise linear functions are not continuous, which in some cases also complicates research. To simplify calculations, one often resorts to approximating piecewise linear functions. The existing approximation methods have their drawbacks.
Generalized functions were introduced in connection with the problems of physics and mathematics that appeared in the twentieth century and required a new understanding of the concept of a function. Such problems include the problems of determining the density of a point mass, the intensity of a point charge and a point dipole, problems of quantum theory and many others. Generalized functions are now widely used in a wide variety of research areas. For example, generalized functions and their derivatives represent a very convenient mathematical tool for developing control systems. The use of impulse controls significantly increases the control capabilities of various systems. An example of a generalized function is the δ-function or the Dirac function.
Generalized singular functions are very different from regular functions. In many practical applications, these unusual functions are used as purely abstract mathematical constructions, completely divorced from their physical meaning. This approach does not seem to be correct. For a better understanding of generalized functions and for consciously solving practical problems on their basis, one can use approximations of these functions. Some of these possible approximations are suggested in this review.
The content of the new methods is explained by a number of practical examples. The examples given are taken from a wide variety of fields: structural mechanics, medicine, quantum theory, signal theory, semiconductor theory, mechanical engineering, heat engineering, and others. The variety of the considered examples emphasizes the rather broad possibilities of practical application of the considered approximation methods. Therefore, the review will undoubtedly be of interest not only to mathematicians, but also to specialists and scientists working in various applied fields of research.

2. Basic Provisions and Methods of Approximation Theory

This section of the review introduces the basic concepts of approximation theory and discusses the canonical methods of approximation of continuous and discontinuous functions [26,27,28,29]. The considered methods are illustrated with examples, and their positive and negative sides are noted.

2.1. The Main Idea of the Approximation of the Original Function: Basic Concepts

Let $M$ be some point set in $n$-dimensional space and let $f(A)$, $A \in M$, be some function defined on this set. For reasons discussed below, we want to replace the function $f(A)$ with another, so-called approximating function $\varphi(A, B_1, B_2, \ldots, B_k)$, also defined on the set $M$, where $B_1, B_2, \ldots, B_k$ are parameters. It is necessary to determine the parameters so that the deviation of the function $\varphi(A, B_1, B_2, \ldots, B_k)$ from the original function $f(A)$ is the least according to some criterion.
It is clear that such a replacement makes sense only when the original function does not suit us in some way, so that we want to pass to a function that is as close to the original as possible but free of its shortcomings. For example, unlike the original function $f(A)$, the approximating function $\varphi(A, B_1, B_2, \ldots, B_k)$ can have a simpler structure, be continuous, differentiable or analytic, allow the use of particular mathematical methods, and so on. The approximating function must belong to a certain class of functions that has these advantages over the original function and whose properties are well studied in mathematics. For example, an algebraic polynomial, a fractional rational function, a trigonometric sum, or a spline function can act as an approximating function.
When constructing an approximating function, the question arises of what is to be understood by the deviation, or, in other words, by the proximity of functions, in order to determine the approximation error. To address this question, the concepts of metric and metric space are introduced in functional analysis.
Definition 1.
A set $Y$ is called a metric space if each pair of its elements $f_1$ and $f_2$ is associated with a real number $\rho(f_1, f_2) \ge 0$ for which the following axioms hold:
1. Identity axiom: $\rho(f_1, f_2) = 0$ if and only if $f_1 = f_2$;
2. Symmetry axiom: $\rho(f_1, f_2) = \rho(f_2, f_1)$;
3. Triangle axiom: $\rho(f_1, f_2) + \rho(f_2, f_3) \ge \rho(f_1, f_3)$ for any $f_3 \in Y$.
The number $\rho(f_1, f_2)$ is called the metric of the set $Y$ or the distance between the elements $f_1$ and $f_2$.
Within the framework of this review, we mainly consider the approximation of such mathematical objects as functions. In this case, the elements (points) of the sets under consideration are functions. Therefore, we will often denote points of sets and spaces by the letter $f$, as is done in the above definition of a metric space.
Consider examples of metrics and metric spaces that are important from the point of view of approximation of functions.
1. Let $Y$ be the set of all continuous functions on the segment $[a, b]$. As a metric, we can take the maximum modulus of the difference $\rho(f_1, f_2) = \max_{x \in [a, b]} |f_1(x) - f_2(x)|$.
Without loss of generality, the segment $[a, b]$ can always be reduced to the segment $[0, 1]$ by introducing a new variable. Therefore, the set of all continuous functions defined on a closed interval with the metric introduced in this way is called the space of continuous functions and is denoted by $C[0, 1]$;
2. Let $Y$ be the set of all real bounded functions on the interval $[0, 1]$. In this case, as a metric, we can take the supremum of the modulus of the difference $\rho(f_1, f_2) = \sup_{x \in [0, 1]} |f_1(x) - f_2(x)|$.
The set of all real bounded functions with the metric introduced in this way is called the space $M[0, 1]$. It is clear that $C[0, 1] \subset M[0, 1]$;
3. Let $Y$ be the set of all measurable functions defined on the interval $[0, 1]$. Two functions that differ only on a set of measure zero (i.e., coincide almost everywhere) are considered identical. As a metric, we can take $\rho(f_1, f_2) = \int_0^1 \frac{|f_1(x) - f_2(x)|}{1 + |f_1(x) - f_2(x)|}\,dx$. Such a space is called the space $S[0, 1]$. Convergence in this space is convergence in measure, that is, a sequence of elements $f_n \to f$ if $\rho(f_n, f) \to 0$ as $n \to \infty$;
4. Let $Y$ be the set of all functions with integrable $p$-th power on the interval $[0, 1]$. Recall that a function $f(x)$ is called a function with integrable $p$-th power on the interval $[a, b]$ if $\int_a^b |f(x)|^p\,dx < \infty$. The integral is understood in the sense of Lebesgue. As in the previous case, two functions that differ only on a set of measure zero are considered identical. In this case, as a metric, we can take $\rho(f_1, f_2) = \left(\int_a^b |f_1(x) - f_2(x)|^p\,dx\right)^{1/p}$. Such a space is called the space $L_p[0, 1]$. For $p = 2$ we obtain the so-called functional Hilbert space.
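As a small numerical sketch of the first and fourth metrics above, the following Python fragment compares two arbitrarily chosen continuous functions on [0, 1] on a dense grid; the grid-based maximum and mean only approximate the true supremum and Lebesgue integral.

```python
import numpy as np

# Two continuous functions on [0, 1] (chosen only for illustration).
f1 = lambda x: np.sin(2 * np.pi * x)
f2 = lambda x: x * (1 - x)

x = np.linspace(0.0, 1.0, 10_001)          # dense grid on [0, 1]
diff = np.abs(f1(x) - f2(x))

# Metric of C[0, 1]: maximum modulus of the difference (approximated on the grid).
rho_C = diff.max()

# Metric of L_p[0, 1]: (integral of |f1 - f2|^p dx)^(1/p); since the interval has
# length 1, the integral is approximated by the grid mean.
p = 2
rho_Lp = np.mean(diff ** p) ** (1.0 / p)

print(f"C[0,1] distance   ~ {rho_C:.6f}")
print(f"L_{p}[0,1] distance ~ {rho_Lp:.6f}")
```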
In the further description, we need concepts such as norm and linear normed space.
Definition 2.
A set $E$ is called a linear space if the operations of addition of elements and multiplication of an element by a number are defined in this set, and for any elements $f_1, f_2, f_3 \in E$ and any numbers $\alpha, \beta$ the following conditions are met:
1. $f_1 + f_2 = f_2 + f_1$;
2. $(f_1 + f_2) + f_3 = f_1 + (f_2 + f_3)$;
3. $f_1 + 0 = f_1$;
4. $f_1 + (-f_1) = 0$;
5. $\alpha(f_1 + f_2) = \alpha f_1 + \alpha f_2$;
6. $(\alpha + \beta) f_1 = \alpha f_1 + \beta f_1$;
7. $\alpha(\beta f_1) = (\alpha\beta) f_1$;
8. $1 \cdot f_1 = f_1$.
Definition 3.
The norm of an arbitrary element $f$ of the set $E$ is a nonnegative number, denoted by $\|f\|$, for which the following conditions are met:
1. $\|f\| = 0 \Leftrightarrow f = 0$;
2. $\|\alpha \cdot f\| = |\alpha| \cdot \|f\|$;
3. $\|f_1 + f_2\| \le \|f_1\| + \|f_2\|$.
Definition 4.
A linear space $E$ with an introduced norm is called a normed linear space.
A metric in a normed linear space can be introduced using the equality $\rho(f_1, f_2) = \|f_1 - f_2\|$. Convergence in a normed linear space is convergence in the norm, that is, a sequence of elements $f_n \to f$ if $\|f_n - f\| \to 0$ as $n \to \infty$.
Let us give examples of some normed linear spaces that are of great importance from the point of view of approximation theory.
1. The space $C[0, 1]$ of all continuous functions with the operations of addition and multiplication by a number defined in the usual way. The norm is introduced by the relation $\|f\| = \max_x |f(x)|$;
2. The space $L_p[0, 1]$ of all functions with integrable $p$-th power on the interval $[0, 1]$ with the operations of addition and multiplication by a number defined in the usual way. The norm in the space of such functions is introduced using the equality $\|f\| = \left(\int_a^b |f(x)|^p\,dx\right)^{1/p}$;
3. The space of all functions continuous on the segment $[0, 1]$ together with their derivatives up to the $k$-th order inclusive. The notation for such a space is $C^k[0, 1]$. As a norm in this space, one usually takes $\|f\| = \max\left\{\max_x |f(x)|,\; \max_x |f'(x)|,\; \ldots,\; \max_x |f^{(k)}(x)|\right\}$.
The basic idea of approximation in a normed linear space can be expressed by the following theorem.
Theorem 1.
Let $E$ be a normed linear space in which the elements $f_1, f_2, \ldots, f_n$ are linearly independent, and let some element $f \in E$ be given. Then one can choose numbers $\lambda_1^*, \lambda_2^*, \ldots, \lambda_n^*$ for which the value $\Delta(\lambda_1, \lambda_2, \ldots, \lambda_n) = \|f - (\lambda_1 f_1 + \lambda_2 f_2 + \ldots + \lambda_n f_n)\|$ attains its smallest value.
Let us present the proof of the theorem [22].
Let $\lambda_1', \lambda_2', \ldots, \lambda_n'$ be an arbitrary set of numbers. Using the reverse triangle inequality, we estimate the modulus of the difference
$$\left|\Delta(\lambda_1', \ldots, \lambda_n') - \Delta(\lambda_1, \ldots, \lambda_n)\right| = \left| \left\| f - \sum_{k=1}^n \lambda_k' f_k \right\| - \left\| f - \sum_{k=1}^n \lambda_k f_k \right\| \right| \le \left\| \sum_{k=1}^n (\lambda_k - \lambda_k') f_k \right\| \le \sum_{k=1}^n |\lambda_k - \lambda_k'|\,\|f_k\| \le \max_k |\lambda_k - \lambda_k'| \cdot \sum_{k=1}^n \|f_k\|.$$
Therefore, as $(\lambda_1', \ldots, \lambda_n') \to (\lambda_1, \ldots, \lambda_n)$ we obtain $\Delta(\lambda_1', \ldots, \lambda_n') \to \Delta(\lambda_1, \ldots, \lambda_n)$, so the function $\Delta(\lambda_1, \ldots, \lambda_n)$ is continuous.
Consider the continuous function $\Omega(\lambda_1, \ldots, \lambda_n) = \|\lambda_1 f_1 + \lambda_2 f_2 + \ldots + \lambda_n f_n\|$. The continuity of the function $\Omega$ can be proved similarly to that of $\Delta$.
The sphere $\sum_{k=1}^n \lambda_k^2 = 1$ is a bounded closed set in a finite-dimensional Euclidean space; therefore, by the well-known Weierstrass theorem, the function $\Omega(\lambda_1, \ldots, \lambda_n)$ attains its minimum on this set, which we denote by $m$. Note that $m > 0$, since the function $\Omega$ is a norm and the elements $f_1, f_2, \ldots, f_n$ are linearly independent.
Let $\bar{\Delta} \ge 0$ be some lower bound for the set of values of the function $\Delta(\lambda_1, \ldots, \lambda_n)$. If $\sqrt{\sum_{k=1}^n \lambda_k^2} > \frac{1}{m}\left(\bar{\Delta} + 1 + \|f\|\right) = R$, then it is easy to obtain
$$\Delta(\lambda_1, \ldots, \lambda_n) \ge \|\lambda_1 f_1 + \lambda_2 f_2 + \ldots + \lambda_n f_n\| - \|f\| \ge m\sqrt{\sum_{k=1}^n \lambda_k^2} - \|f\| > \bar{\Delta} + 1;$$
therefore, to find the minimum it suffices to consider the function $\Delta(\lambda_1, \ldots, \lambda_n)$ only in the closed bounded domain $\sum_{k=1}^n \lambda_k^2 \le R^2$. According to the well-known Weierstrass theorem, a continuous function attains its minimum in such a domain. Therefore, there exist numbers $\lambda_1^*, \lambda_2^*, \ldots, \lambda_n^*$ that provide the best approximation of the element $f$ by the linear combination $\lambda_1^* f_1 + \lambda_2^* f_2 + \ldots + \lambda_n^* f_n$. □

2.2. Contraction Mapping Principle

The contraction mapping principle is one of the most important mathematical achievements and is widely used to prove existence and uniqueness theorems, find solutions to equations of various types by the method of successive approximations, and prove the convergence of iterative procedures used, for example, in approximation theory. The contraction mapping principle was formulated by the Polish mathematician S. Banach [30,31,32,33,34].
First, we give the following definition.
Let $Y$ be some metric space and $\{f_n\}$ a sequence of elements of this space.
Definition 5.
A sequence $\{f_n\}$ is called a Cauchy sequence if for any positive number $\varepsilon$ there is a number $N_0$, depending on $\varepsilon$, such that for any numbers $n$ and $m$ the condition $n, m \ge N_0$ implies $\rho(f_n, f_m) < \varepsilon$.
In quantifiers, this definition is written as follows:
$$\forall \varepsilon > 0 \;\; \exists N_0 = N_0(\varepsilon) \in \mathbb{N} \;\; \forall n, m \in \mathbb{N}: \; n, m > N_0 \Rightarrow \rho(f_n, f_m) < \varepsilon.$$
A space $Y$ is called complete if any Cauchy sequence converges to some limit that is an element of the same space.
Theorem 2 (Principle of contraction mappings).
Let $Y$ be a complete metric space on which an operator $A$ is given that transforms elements of this space into elements of the same space, i.e., $A: Y \to Y$. Let there exist a number $\alpha < 1$, independent of $f$ and $\varphi$, such that for any $f, \varphi \in Y$ the condition $\rho(Af, A\varphi) \le \alpha\,\rho(f, \varphi)$ is satisfied. Then there is a unique element (point) $f_0$ such that $A f_0 = f_0$. This point is called the fixed point of the operator $A$.
Proof.
Let $f$ be an arbitrary fixed element of the set $Y$. Let us create the sequence $f_1 = Af, \; f_2 = Af_1, \; \ldots, \; f_n = Af_{n-1}, \ldots$, which is a Cauchy sequence. Indeed,
$$\rho(f_1, f_2) = \rho(Af, Af_1) \le \alpha\,\rho(f, Af), \quad \rho(f_2, f_3) = \rho(Af_1, Af_2) \le \alpha\,\rho(f_1, f_2) \le \alpha^2\,\rho(f, Af), \text{ and so on.}$$
We obtain $\rho(f_n, f_{n+1}) \le \alpha^n\,\rho(f, Af)$.
By the triangle axiom for the metric, we write
$$\rho(f_n, f_{n+p}) \le \rho(f_n, f_{n+1}) + \rho(f_{n+1}, f_{n+2}) + \ldots + \rho(f_{n+p-1}, f_{n+p}) \le (\alpha^n + \alpha^{n+1} + \ldots + \alpha^{n+p-1})\,\rho(f, Af) = \frac{\alpha^n - \alpha^{n+p}}{1 - \alpha}\,\rho(f, Af), \quad p \in \mathbb{N}.$$
Taking into account that $\alpha < 1$, we obtain $\rho(f_n, f_{n+p}) \le \frac{\alpha^n}{1 - \alpha}\,\rho(f, Af)$.
Therefore, $\rho(f_n, f_{n+p}) \to 0$ as $n \to \infty$. Hence the sequence $\{f_n\}$ is a Cauchy sequence, and since, by the hypothesis of the theorem, the space $Y$ is complete, there is an element $f_0 \in Y$ such that $\lim_{n \to \infty} f_n = f_0$.
Let us show that this element $f_0$ is a fixed point. Indeed,
$$\rho(f_0, Af_0) \le \rho(f_0, f_n) + \rho(f_n, Af_0) = \rho(f_0, f_n) + \rho(Af_{n-1}, Af_0) \le \rho(f_0, f_n) + \alpha\,\rho(f_{n-1}, f_0).$$
Take an arbitrary number $\varepsilon > 0$. Since the sequence $\{f_n\}$ converges to the element $f_0$, there is a number $N_1$ such that for any $n \ge N_1$ the condition $\rho(f_0, f_n) < \varepsilon/2$ is satisfied, and there is a number $N_2$ such that for any $n \ge N_2$ the condition $\alpha\,\rho(f_{n-1}, f_0) < \varepsilon/2$ is satisfied.
Then, we obtain
$$\forall \varepsilon > 0 \;\; \exists N(\varepsilon) = \max\{N_1, N_2\} \;\; \forall n > N: \quad \rho(f_0, Af_0) \le \rho(f_0, f_n) + \alpha\,\rho(f_{n-1}, f_0) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$
Since the number $\varepsilon$ is arbitrary, $\rho(f_0, Af_0) = 0$, whence we obtain $A f_0 = f_0$, that is, the element $f_0$ is a fixed point.
Suppose that along with the fixed point $f_0$ there is another one belonging to $Y$, which we denote by $f_0'$. By the definition of a fixed point, $A f_0 = f_0$ and $A f_0' = f_0'$. In this case $\rho(f_0, f_0') = \rho(Af_0, Af_0') \le \alpha\,\rho(f_0, f_0')$, which would require $\alpha \ge 1$ if the fixed points were different. The obtained value of $\alpha$ contradicts the hypothesis of the theorem; therefore, the fixed point is unique. □
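The proof above is constructive: the successive approximations $f_{n+1} = A f_n$ converge to the fixed point at a geometric rate. A minimal Python sketch of this iteration, assuming the illustrative operator $Af = \cos f$, which maps $[0, 1]$ into itself and is a contraction there since $|\sin f| \le \sin 1 < 1$:

```python
import math

def fixed_point(A, f, tol=1e-12, max_iter=1000):
    """Successive approximations f, A f, A(A f), ... from the proof of Theorem 2."""
    for _ in range(max_iter):
        f_next = A(f)
        if abs(f_next - f) < tol:      # rho(f_n, f_{n+1}) is already negligible
            return f_next
        f = f_next
    return f

# A(f) = cos f is a contraction on [0, 1] (|A'| <= sin 1 < 1), so the iteration
# converges to its unique fixed point regardless of the starting element.
f0 = fixed_point(math.cos, 0.5)
print(f0, math.cos(f0) - f0)           # ~0.739085, residual ~0
```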

2.3. Weierstrass Theorems on the Convergence of a Sequence of Approximating Functions

Let us define the concept of uniform convergence of a sequence $\{f_n(x)\}$ of elements of a metric space.
Let $Y$ be a metric space.
Definition 6.
A sequence $\{f_n(x)\}$ of elements of the space $Y$ converges uniformly on the segment $[0, 1]$ to an element $f(x) \in Y$ if
$$\forall \varepsilon > 0 \;\; \exists N_0(\varepsilon) \in \mathbb{N} \;\; \forall n \in \mathbb{N}: \; n \ge N_0 \Rightarrow \rho(f_n(x), f(x)) < \varepsilon \;\; \forall x.$$
Theorem 3 (Weierstrass' First Theorem).
If the function $f(x)$ is continuous on the interval $[0, 1]$, then for every $\varepsilon > 0$ there is an $n(\varepsilon) \in \mathbb{N}$ and a polynomial $P_n(x)$ of degree $n$ for which the relation holds:
$$|f(x) - P_n(x)| < \varepsilon, \quad x \in [0, 1].$$
In other words, one can construct a sequence of polynomials $P_n(x)$ converging uniformly to the original function $f(x)$ on the segment $[0, 1]$. The polynomials of this sequence can be used to approximate the function $f(x)$.
Proof of Theorem.
Consider the sequence of polynomials $A_n(x) = \sum_{k=0}^n C_n^k x^k (1-x)^{n-k} f\!\left(\frac{k}{n}\right)$, where $C_n^k$ are binomial coefficients.
Differentiating the binomial relation
$$\sum_{k=0}^n C_n^k p^k q^{n-k} = (p + q)^n$$
with respect to $p$ twice and carrying out simple transformations, we write down the relations
$$\sum_{k=0}^n \frac{k}{n} C_n^k x^k (1-x)^{n-k} = x; \quad \sum_{k=0}^n \frac{k^2}{n^2} C_n^k x^k (1-x)^{n-k} = \left(1 - \frac{1}{n}\right)x^2 + \frac{1}{n}x.$$
Taking into account the identity $\sum_{k=0}^n C_n^k x^k (1-x)^{n-k} = 1$, the obtained relations allow us to find
$$\sum_{k=0}^n \left(\frac{k}{n} - x\right)^2 C_n^k x^k (1-x)^{n-k} = \frac{x(1-x)}{n}.$$
Let us write the expression
$$f(x) - A_n(x) = \sum_{k=0}^n \left(f(x) - f\!\left(\frac{k}{n}\right)\right) C_n^k x^k (1-x)^{n-k} = B_1 + B_2,$$
where $B_1$ corresponds to the terms for which $\left|\frac{k}{n} - x\right| \le n^{-1/4}$, and $B_2$ collects all other terms.
We set $\lambda_n = \max_{|k/n - x| \le n^{-1/4}} \left|f(x) - f\!\left(\frac{k}{n}\right)\right|$. Then we obtain $|B_1| \le \lambda_n \sum_{k=0}^n C_n^k x^k (1-x)^{n-k} = \lambda_n$. Note that $\lambda_n \to 0$ as $n \to \infty$ by the uniform continuity of $f$.
By the conditions of the theorem, the function $f(x)$ is continuous on a closed segment and, therefore, bounded: $|f(x)| \le M$ on this segment. Hence we can obtain
$$|B_2| \le 2M \sum_{B_2} C_n^k x^k (1-x)^{n-k} = 2M \sum_{B_2} \frac{(k - nx)^2}{(k - nx)^2} C_n^k x^k (1-x)^{n-k} \le \frac{2M}{n^{3/2}} \sum_{B_2} (k - nx)^2 C_n^k x^k (1-x)^{n-k} \le \frac{2M}{n^{3/2}} \sum_{k=0}^n (k - nx)^2 C_n^k x^k (1-x)^{n-k} = \frac{2M}{n^{3/2}}\, n\, x(1-x) \le \frac{M}{2\sqrt{n}},$$
since for the terms of $B_2$ we have $(k - nx)^2 > n^{3/2}$.
We finally write down
$$|f(x) - A_n(x)| = |B_1 + B_2| \le |B_1| + |B_2| \le \lambda_n + \frac{M}{2\sqrt{n}}.$$
It is easy to see that $\lambda_n + \frac{M}{2\sqrt{n}} \to 0$, whence it follows that the sequence of polynomials $A_n(x)$ tends uniformly to the function $f(x)$ on the given interval. □
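A short numerical sketch of the Bernstein construction $A_n(x)$ used in the proof, applied here to the continuous piecewise linear function $|x - 1/2|$ (an arbitrary illustrative choice); the slow decay of the error, consistent with the bound $\lambda_n + M/(2\sqrt{n})$, is visible in the printed values.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Bernstein polynomial A_n(x) = sum_k C(n, k) x^k (1 - x)^(n - k) f(k / n)."""
    x = np.asarray(x, dtype=float)
    k = np.arange(n + 1)
    coeff = np.array([comb(n, j) for j in k], dtype=float)
    basis = coeff * x[:, None] ** k * (1 - x[:, None]) ** (n - k)   # shape (len(x), n + 1)
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)          # continuous piecewise linear test function
x = np.linspace(0.0, 1.0, 1001)

for n in (5, 20, 80, 320):
    err = np.max(np.abs(f(x) - bernstein(f, n, x)))
    print(f"n = {n:4d}   max |f - A_n| = {err:.5f}")
```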
Theorem 4 (Weierstrass' Second Theorem).
If a periodic function $f(x)$ with period $2\pi$ is continuous, then for every $\varepsilon > 0$ there is an $n(\varepsilon) \in \mathbb{N}$ and a trigonometric sum $S_n(x) = \frac{a_0}{2} + \sum_{k=1}^n (a_k \sin kx + b_k \cos kx)$ for which the relation $|f(x) - S_n(x)| < \varepsilon$ holds for all $x$.
Proof.
We introduce two even continuous functions with period $2\pi$ using the relations
$$\mu(x) = \frac{f(x) + f(-x)}{2} \quad \text{and} \quad \nu(x) = \frac{f(x) - f(-x)}{2}\sin x.$$
Let $t = \cos x$. Then, if $x \in [0, \pi]$, then $t \in [-1, 1]$. Note that due to the evenness and periodicity of the functions under consideration, all conclusions that are valid for $x \in [0, \pi]$ will be valid for any $x$.
Let us introduce the functions $\varphi(t) = \mu(x)$ and $\psi(t) = \nu(x)$, which are continuous for $t \in [-1, 1]$. Based on the first Weierstrass theorem, for any number $\varepsilon > 0$ there are polynomials $P(t)$ and $Q(t)$ for which the conditions $|\mu(x) - P(\cos x)| < \frac{\varepsilon}{4}$ and $|\nu(x) - Q(\cos x)| < \frac{\varepsilon}{4}$ hold.
Since the relation $f(x)\sin x = \mu(x)\sin x + \nu(x)$ holds, there is a trigonometric sum $A(x) = Q(\cos x) + P(\cos x)\sin x$ for which, for all $x$, the relation holds
$$|f(x)\sin x - A(x)| = |(\mu(x) - P(\cos x))\sin x + (\nu(x) - Q(\cos x))| \le |\mu(x) - P(\cos x)| + |\nu(x) - Q(\cos x)| < \frac{\varepsilon}{4} + \frac{\varepsilon}{4} = \frac{\varepsilon}{2}.$$
Similarly, for the function $f\!\left(\frac{\pi}{2} - x\right)$ there is a trigonometric sum $B(x)$ for which $\left|f\!\left(\frac{\pi}{2} - x\right)\sin x - B(x)\right| < \frac{\varepsilon}{2}$.
Making the substitution $\frac{\pi}{2} - x \to x$, we rewrite the last inequality in the form $\left|f(x)\cos x - B\!\left(\frac{\pi}{2} - x\right)\right| < \frac{\varepsilon}{2}$.
From the inequalities obtained, we find
$$|f(x)\sin^2 x - A(x)\sin x| < \frac{\varepsilon}{2}; \quad \left|f(x)\cos^2 x - B\!\left(\frac{\pi}{2} - x\right)\cos x\right| < \frac{\varepsilon}{2},$$
whence we obtain
$$\left|f(x) - A(x)\sin x - B\!\left(\frac{\pi}{2} - x\right)\cos x\right| = \left|f(x)\sin^2 x - A(x)\sin x + f(x)\cos^2 x - B\!\left(\frac{\pi}{2} - x\right)\cos x\right| \le |f(x)\sin^2 x - A(x)\sin x| + \left|f(x)\cos^2 x - B\!\left(\frac{\pi}{2} - x\right)\cos x\right| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$
Since $\varepsilon$ can be any positive number, including an arbitrarily small one, it can be concluded that the trigonometric sum $S(x) = A(x)\sin x + B\!\left(\frac{\pi}{2} - x\right)\cos x$ converges uniformly to the function $f(x)$. □
Note that the considered Weierstrass theorems can be generalized to the case of the space L p 0 , 1 .

2.4. Approximation by Algebraic Polynomials

The approximation of functions by algebraic polynomials [35,36,37,38,39], in particular of piecewise linear ones, is often performed using polynomials in the system $1, x, x^2, \ldots$. These polynomials are simple and well-studied mathematical constructions that are easy to differentiate, and the derivative is again a polynomial.
Let the function $f: \mathbb{R} \to \mathbb{R}$ be subject to approximation. Let $x \in [a, b]$ and let the values $f_k = f(x_k)$ of the function be known at the points $x_k$, $k = 0, 1, \ldots, n$. We approximate the function by the algebraic polynomial $P_n(x) = a_0 x^n + a_1 x^{n-1} + \ldots + a_{n-1}x + a_n$, $a_0 \ne 0$, for which the condition $P_n(x_k) = f(x_k)$ holds. Such a polynomial exists and is unique.
The approximating polynomial can be found by the Lagrange formula
$$L_n(x) = \sum_{k=0}^n f(x_k) \prod_{\substack{i=0 \\ i \ne k}}^{n} \frac{x - x_i}{x_k - x_i}.$$
Example. Approximate the piecewise linear function on the segment [0, 7]
$$f(x) = \begin{cases} 2x, & x \in [0, 2]; \\ 6 - x, & x \in [2, 7]. \end{cases}$$
Let us calculate the values of the function at several points:
$$f(0) = 0, \quad f(1) = 2, \quad f(2) = 4, \quad f(3) = 3, \quad f(5) = 1, \quad f(7) = -1.$$
Using the calculated values, we construct an approximating polynomial in the Lagrange form
$$L_5(x) = x(x-1)(x-2)(x-3)(x-5)(x-7)\left[\frac{0}{-210\,x} + \frac{2}{-48\,(x-1)} + \frac{4}{-30\,(x-2)} + \frac{3}{48\,(x-3)} + \frac{1}{-240\,(x-5)} + \frac{-1}{1680\,(x-7)}\right].$$
After transformations, we find
$$L_5(x) = -0.034x^5 + 0.548x^4 - 2.941x^3 + 5.495x^2 - 1.068x.$$
Figure 1 shows the graphs of the original function (thickened line) and the approximating function (thin line). As can be seen, despite the relatively high degree of the approximating polynomial, the approximation error is large.
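A small Python sketch that reproduces this example numerically, evaluating the Lagrange form directly at the six nodes used above and recovering the same coefficients:

```python
import numpy as np

def lagrange(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolation polynomial through (x_nodes, y_nodes) at x."""
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for k, (xk, yk) in enumerate(zip(x_nodes, y_nodes)):
        term = np.full_like(x, yk)
        for i, xi in enumerate(x_nodes):
            if i != k:
                term *= (x - xi) / (xk - xi)
        result += term
    return result

# Nodes and values of the piecewise linear function f from the example.
x_nodes = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 7.0])
y_nodes = np.array([0.0, 2.0, 4.0, 3.0, 1.0, -1.0])

f = lambda x: np.where(x <= 2, 2 * x, 6 - x)         # original function
x = np.linspace(0.0, 7.0, 701)
print("max |f - L_5| on [0, 7]:", np.max(np.abs(f(x) - lagrange(x_nodes, y_nodes, x))))
print("coefficients of L_5 (highest degree first):", np.round(np.polyfit(x_nodes, y_nodes, 5), 3))
```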
The approximation error can be estimated by the relation [35]
$$|f(x) - L_n(x)| \le \frac{M_{n+1}\left|\prod_{k=0}^n (x - x_k)\right|}{(n+1)!}, \quad \text{where } M_{n+1} = \sup_{x \in [a, b]}\left|f^{(n+1)}(x)\right|.$$
This estimate should be treated with care: it is not valid for all continuous functions, since it involves the derivative of order $n + 1$. Convergence issues are considered in the works [1,2,3,4].
Since the expression for the error includes the product $\prod_{k=0}^n (x - x_k)$, the error depends on the choice of the points $x_k$. The approximation error attains its smallest value on the interval $[-1, 1]$ if the points $x_k$ are the roots of the Chebyshev polynomial $G_{n+1}(x) = \frac{\cos((n+1)\arccos x)}{2^n}$, which are calculated by the formula $x_k = \cos\frac{(2k+1)\pi}{2(n+1)}$, $k = 0, 1, \ldots, n$.
In the case $x \in [a, b]$, the optimal points are as follows:
$$x_k = \frac{a+b}{2} + \frac{b-a}{2}\cos\frac{(2k+1)\pi}{2(n+1)}.$$
Moreover, $\max_{x \in [a, b]}\left|\prod_{k=0}^n (x - x_k)\right| = \frac{(b-a)^{n+1}}{2^{2n+1}}$, and the approximation error estimate takes the form $|f(x) - L_n(x)| \le \frac{M_{n+1}(b-a)^{n+1}}{2^{2n+1}(n+1)!}$.
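The effect of the node choice can be checked numerically. The sketch below interpolates the Runge function $1/(1 + 25x^2)$ (assumed here purely as an illustrative test function) on $[-1, 1]$ with equidistant and with Chebyshev nodes and compares the maximal errors:

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Direct evaluation of the Lagrange interpolation polynomial."""
    result = np.zeros_like(x)
    for k, (xk, yk) in enumerate(zip(x_nodes, y_nodes)):
        term = np.full_like(x, yk)
        for i, xi in enumerate(x_nodes):
            if i != k:
                term *= (x - xi) / (xk - xi)
        result += term
    return result

def chebyshev_nodes(a, b, n):
    """n + 1 Chebyshev nodes x_k = (a+b)/2 + (b-a)/2 * cos((2k+1)pi / (2(n+1))) on [a, b]."""
    k = np.arange(n + 1)
    return (a + b) / 2 + (b - a) / 2 * np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)            # illustrative test function
a, b, n = -1.0, 1.0, 10
x = np.linspace(a, b, 2001)

for name, nodes in (("equidistant", np.linspace(a, b, n + 1)),
                    ("Chebyshev  ", chebyshev_nodes(a, b, n))):
    err = np.max(np.abs(f(x) - lagrange_eval(nodes, f(nodes), x)))
    print(f"{name} nodes: max interpolation error = {err:.4f}")
```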
The approximation accuracy can be improved by increasing the number of approximation points (nodes).
To approximate the function, one can apply a polynomial in the Newton form
$$N_n(x) = f(x_0) + (x - x_0)f[x_0, x_1] + (x - x_0)(x - x_1)f[x_0, x_1, x_2] + \ldots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})f[x_0, x_1, \ldots, x_n],$$
where $f[x_i, x_j] = \frac{f(x_j) - f(x_i)}{x_j - x_i}$, $i, j = 0, 1, \ldots, n$, $i \ne j$, are the divided differences of the first order, $f[x_i, x_j, x_k] = \frac{f[x_j, x_k] - f[x_i, x_j]}{x_k - x_i}$, $i, j, k = 0, 1, \ldots, n$, $i \ne j \ne k$, are the divided differences of the second order, and $f[x_i, x_{i+1}, \ldots, x_{i+k}] = \frac{f[x_{i+1}, x_{i+2}, \ldots, x_{i+k}] - f[x_i, x_{i+1}, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}$ are the divided differences of the $k$-th order.
Newton's formula is usually applied at equidistant points at which the values of the original function are calculated. The advantage of Newton's formula over Lagrange's formula is that when new approximation points are added, all the coefficients of the Lagrange formula have to be recalculated, whereas in Newton's formula only new terms are added, while the old ones remain unchanged.
Newton’s formula is a difference analogue of Taylor’s formula, which is used in the case of approximation by algebraic polynomials of an analytic function in a neighborhood of some point x 0 , and which has the form
f x = f x 0 + k = 1 n f k x 0 k ! x x 0 + f n + 1 x 0 + θ x x 0 n + 1 ! x x 0 n + 1 ,   θ 0 , 1 ,
where the remainder f n + 1 x 0 + θ x x 0 n + 1 ! x x 0 n + 1 is written in Lagrange form.
In some cases, the original function cannot be approximated with the required accuracy by algebraic polynomials. Sometimes such an approximation is possible, but the sequence of polynomials converges very slowly. In these cases, rational fractions or fractional rational functions representing the ratio of polynomials are used to approximate the function [1,2,3].

2.5. Approximation of Piecewise Linear Functions by Fourier Series

This paragraph is based on the publications [40,41,42,43,44].
Let Y be a linear space. Let us introduce the scalar product operation on this space, which is defined as follows.
Definition 7.
To each pair of elements $f_1, f_2 \in Y$ we associate some (generally complex) number $(f_1, f_2)$ satisfying the conditions:
1. $(f_1, f_1) \ge 0$, and $(f_1, f_1) = 0 \Leftrightarrow f_1 = 0$;
2. $(f_1, f_2) = \overline{(f_2, f_1)}$;
3. $(f_1 + f_2, f_3) = (f_1, f_3) + (f_2, f_3)$ for any $f_3 \in Y$;
4. $(\lambda f_1, f_2) = \lambda(f_1, f_2)$ for any complex number $\lambda$.
The number so introduced is called the scalar product of the elements $f_1$ and $f_2$.
The norm of an element $f \in Y$ can be introduced using the relation $\|f\| = \sqrt{(f, f)}$, and the metric in this space can be defined by the relation $\rho(f_1, f_2) = \|f_1 - f_2\|$. It is easy to verify that all the axioms of the norm and of the metric are satisfied.
Elements $f_1, f_2$ are called orthogonal if $(f_1, f_2) = 0$.
A system of elements $f_1, f_2, \ldots, f_n \in Y$ is called orthonormal if for any elements $f_i, f_j$ of this system the condition $(f_i, f_j) = \delta_{i,j}$ is satisfied, where $\delta_{i,j}$ is the Kronecker symbol, equal to one for $i = j$ and to zero for $i \ne j$.
Let the system of functions $f_1(x), f_2(x), \ldots, f_k(x), \ldots$ be orthogonal on the segment $[a, b]$. A series of the form $\sum_{k=1}^{\infty} c_k f_k(x)$, in which the coefficients $c_k$ are found by the formulas $c_k = \frac{1}{\|f_k(x)\|^2}\int_a^b f(x) f_k(x)\,dx$, is called the Fourier series in the orthogonal system $f_1(x), f_2(x), \ldots, f_k(x), \ldots$ for the function $f(x)$. For an orthonormal system of functions $f_1(x), f_2(x), \ldots, f_k(x), \ldots$, the coefficients of the Fourier series can be found by the formulas $c_k = \int_a^b f(x) f_k(x)\,dx$.
There are various orthogonal systems of functions. One often takes the trigonometric system $1, \sin x, \cos x, \sin 2x, \cos 2x, \ldots, \sin kx, \cos kx, \ldots$, for which the scalar product is defined by the relation $(f_i, f_j) = \int_a^b f_i(x) f_j(x)\,dx$.
If the functions of this system are normalized, then we obtain the orthonormal system of functions
$$\frac{1}{\sqrt{2\pi}}, \; \frac{\sin x}{\sqrt{\pi}}, \; \frac{\cos x}{\sqrt{\pi}}, \; \frac{\sin 2x}{\sqrt{\pi}}, \; \frac{\cos 2x}{\sqrt{\pi}}, \; \ldots, \; \frac{\sin kx}{\sqrt{\pi}}, \; \frac{\cos kx}{\sqrt{\pi}}, \; \ldots.$$
A trigonometric series is a series of the form
$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}(a_k \sin kx + b_k \cos kx),$$
where $a_0, a_k, b_k$ ($k = 1, 2, \ldots$) are the coefficients of the trigonometric series.
The sum of a convergent trigonometric series is a periodic function with period $2\pi$, since the functions $\sin kx$ and $\cos kx$ are periodic with period $2\pi$.
If the coefficients of the trigonometric series are found by the formulas
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx; \quad a_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin kx\,dx; \quad b_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos kx\,dx,$$
then the trigonometric series is called the Fourier series in the orthogonal system $1, \sin x, \cos x, \sin 2x, \cos 2x, \ldots, \sin kx, \cos kx, \ldots$.
Theorem 5 (Dirichlet's Theorem).
If the original function $f(x)$ is periodic with period $2\pi$, piecewise monotone and bounded on the interval $[-\pi, \pi]$, then the Fourier series for this function converges at all points. At the points of continuity of the function, the sum of the series is equal to the value of the function at these points. If $x_0$ is a point of discontinuity of the original function, then the sum of the series at this point is equal to the half-sum of the one-sided limits, i.e.,
$$(f(x_0 - 0) + f(x_0 + 0))/2.$$
The proof of the theorem is omitted.
Example. Let a periodic function $f(x)$ with period $2\pi$ be defined as
$$f(x) = \begin{cases} 0, & x \in [-\pi, 0); \\ x, & x \in [0, \pi]. \end{cases}$$
Find the coefficients of the Fourier series:
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{1}{\pi}\left(\int_{-\pi}^{0} 0\,dx + \int_{0}^{\pi} x\,dx\right) = \frac{1}{\pi}\cdot\frac{\pi^2}{2} = \frac{\pi}{2};$$
$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin kx\,dx = \frac{1}{\pi}\int_{0}^{\pi} x\sin kx\,dx = \frac{1}{\pi}\left(\left.-\frac{x\cos kx}{k}\right|_0^{\pi} + \frac{1}{k}\int_0^{\pi}\cos kx\,dx\right) = -\frac{\cos k\pi}{k};$$
$$b_k = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos kx\,dx = \frac{1}{\pi}\int_{0}^{\pi} x\cos kx\,dx = \frac{1}{\pi}\left(\left.\frac{x\sin kx}{k}\right|_0^{\pi} - \frac{1}{k}\int_0^{\pi}\sin kx\,dx\right) = \frac{1}{\pi k^2}(\cos k\pi - 1).$$
Then, the Fourier series for the original function will be written as follows:
$$f(x) = \frac{\pi}{4} + \sin x - \frac{2}{\pi}\cos x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \frac{2}{9\pi}\cos 3x + \ldots$$
The graphs of the original piecewise linear function (thickened line) and its approximations by several successive partial sums of the Fourier series (thin lines) on the segment $[-\pi, \pi]$ are shown in Figure 2.
As can be seen in Figure 2, the approximation error is quite large and, as will be shown in the next section, in the vicinity of the discontinuity points the error, understood as the difference between the values of the original function and its approximation, does not tend to zero even with an infinite increase in the number of terms of the Fourier series.
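A sketch that evaluates the partial sums of the Fourier series computed above and reports the maximal deviation on $[-\pi, \pi]$; the error near the discontinuity of the periodic extension at $x = \pm\pi$ does not die out as more terms are added.

```python
import numpy as np

def partial_sum(x, n):
    """Partial Fourier sum of f(x) = 0 on [-pi, 0), x on [0, pi], with the coefficients derived above."""
    s = np.full_like(x, np.pi / 4)                        # a_0 / 2
    for k in range(1, n + 1):
        a_k = -np.cos(k * np.pi) / k                      # sine coefficients
        b_k = (np.cos(k * np.pi) - 1) / (np.pi * k ** 2)  # cosine coefficients
        s += a_k * np.sin(k * x) + b_k * np.cos(k * x)
    return s

f = lambda x: np.where(x >= 0, x, 0.0)
x = np.linspace(-np.pi, np.pi, 4001)

for n in (5, 20, 100):
    err = np.max(np.abs(f(x) - partial_sum(x, n)))
    print(f"n = {n:4d}   max |f - S_n| on [-pi, pi] = {err:.4f}")
```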
For an odd function, the coefficients of the Fourier series are found by the formulas
$$a_0 = 0; \quad a_k = \frac{2}{\pi}\int_0^{\pi} f(x)\sin kx\,dx; \quad b_k = 0;$$
for an even function, these coefficients can be found as follows:
$$a_0 = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx; \quad a_k = 0; \quad b_k = \frac{2}{\pi}\int_0^{\pi} f(x)\cos kx\,dx.$$
In the case of a periodic function with period $2l$, by changing the variable one can always pass to a function with period $2\pi$. In this case, the Fourier series for a function with period $2l$ has the form
$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left(a_k\sin\frac{k\pi x}{l} + b_k\cos\frac{k\pi x}{l}\right),$$
where the coefficients are found by the formulas
$$a_0 = \frac{1}{l}\int_{-l}^{l} f(x)\,dx; \quad a_k = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{k\pi x}{l}\,dx; \quad b_k = \frac{1}{l}\int_{-l}^{l} f(x)\cos\frac{k\pi x}{l}\,dx.$$
If a piecewise-monotone non-periodic function is given whose values are of interest to us only on a certain interval $[a, b]$, then to expand this function in a Fourier series we can use a periodic function with period $2l \ge b - a$ that coincides with the original function on the interval $[a, b]$.

2.6. Function Approximation Using Splines

A spline is a function composed of pieces of polynomials that form a basis [45,46,47,48,49]. The polynomials $1, x, x^2, \ldots$ are usually taken as the basis. In the general case, the functions forming the basis need not be polynomials, but in the overwhelming majority of cases so-called polynomial splines are constructed, whose basis functions are precisely polynomials.
Some advantages of approximating the original function using splines can be pointed out:
1. Stability of splines with respect to outliers and bursts;
2. Good convergence of the approximation method;
3. Ease of implementation on computers using well-developed mathematical methods such as the sweep method (the Thomas algorithm for tridiagonal systems).
There are other advantages to this kind of approximation.
Let’s consider the basic idea of spline-function approximation.
Let the segment $[a, b]$ be divided by the points $x_0, x_1, x_2, \ldots, x_i, x_{i+1}, \ldots, x_{n-1}, x_n$ into partial segments so that
$$a = x_0 < x_1 < x_2 < \ldots < x_i < x_{i+1} < \ldots < x_{n-1} < x_n = b.$$
It is said that a grid is given on the segment $[a, b]$. In addition, let $P_k$ be the set of all polynomials of degree at most $k$, and let $C^k[a, b]$ be the set of all functions continuous on the segment $[a, b]$ together with their derivatives up to the $k$-th order inclusive.
Definition 8.
The function $S_k(x)$ is called a spline of degree $k$ with defect $d$ on a given grid if the following conditions are met:
1. $S_k(x) \in P_k$ for $x \in [x_i, x_{i+1}]$, $i = 0, 1, \ldots, n - 1$;
2. $S_k(x) \in C^{k-d}[a, b]$.
Let some initial function $f(x)$ defined on the segment $[a, b]$ be given. A spline can be used to approximate this function by setting $S_k(x_i) = f(x_i)$ for all $i$. In this case, the grid nodes are called approximation nodes.
Consider an example of constructing a parabolic spline, that is, a spline of the second degree.
Let there be a function $y(x)$ which at the nodes $x = 0, 1, 2$ takes the values 5, 2, 3, respectively.
For this function on the segment [0, 2] we construct a spline of the form
$$S_2(x) = \begin{cases} a_1 x^2 + b_1 x + c_1, & x \in [0, 1]; \\ a_2 x^2 + b_2 x + c_2, & x \in [1, 2]. \end{cases}$$
From the condition $S_2(x_i) = y(x_i)$ we obtain
$$c_1 = 5; \quad a_1 + b_1 + c_1 = 2; \quad a_2 + b_2 + c_2 = 2; \quad 4a_2 + 2b_2 + c_2 = 3.$$
From the condition of continuity of the first derivative at $x = 1$, we find
$$2a_1 + b_1 = 2a_2 + b_2.$$
In total, we have obtained five equations, while there are six unknowns. The additional equation needed is usually derived from some considerations, most often associated with boundary conditions. Setting, for example, $S_2'(0) = 0$, we obtain the missing condition $b_1 = 0$.
Solving the resulting system of equations, we find the values of all the coefficients: $a_1 = -3$, $a_2 = 7$, $b_1 = 0$, $b_2 = -20$, $c_1 = 5$, $c_2 = 15$.
The desired spline will be written as follows:
$$S_2(x) = \begin{cases} -3x^2 + 5, & x \in [0, 1]; \\ 7x^2 - 20x + 15, & x \in [1, 2]. \end{cases}$$
The spline plot is shown in Figure 3.
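A sketch that assembles and solves the same six linear conditions for the coefficients of the parabolic spline:

```python
import numpy as np

# Unknowns ordered as (a1, b1, c1, a2, b2, c2) for the two pieces of S_2(x).
A = np.array([
    [0, 0, 1, 0, 0, 0],    # S_2(0) = c1 = 5
    [1, 1, 1, 0, 0, 0],    # S_2(1) = a1 + b1 + c1 = 2   (left piece)
    [0, 0, 0, 1, 1, 1],    # S_2(1) = a2 + b2 + c2 = 2   (right piece)
    [0, 0, 0, 4, 2, 1],    # S_2(2) = 4 a2 + 2 b2 + c2 = 3
    [2, 1, 0, -2, -1, 0],  # continuity of S_2' at x = 1: 2 a1 + b1 = 2 a2 + b2
    [0, 1, 0, 0, 0, 0],    # boundary condition S_2'(0) = b1 = 0
], dtype=float)
rhs = np.array([5, 2, 2, 3, 0, 0], dtype=float)

a1, b1, c1, a2, b2, c2 = np.linalg.solve(A, rhs)
print(a1, b1, c1)    # -3.0,  0.0,  5.0  ->  S_2(x) = -3 x^2 + 5        on [0, 1]
print(a2, b2, c2)    #  7.0, -20.0, 15.0 ->  S_2(x) = 7 x^2 - 20 x + 15 on [1, 2]
```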
Many practical examples, especially those related to mechanics, consider cubic splines, that is, splines of the third degree. Such splines allow not only the first derivative to be continuous, but also the second order derivative. In this case, it is possible to simulate the laws of motion with continuous speeds and accelerations. However, since we are primarily interested in piecewise linear functions, we will consider splines of the first degree, or, in other words, linear splines. The graphs of such splines will be continuous broken lines. For such splines, the following conditions will be met:
1. $S_1(x) \in P_1$ for $x \in [x_i, x_{i+1}]$, $i = 0, 1, \ldots, n - 1$;
2. $S_1(x) \in C[a, b]$;
3. $S_1(x_i) = y(x_i)$ for all $i$.
Example. The original function on the segment [0, 10] is defined by the expression
$$y = 5\sin x \cdot e^{-3x/17}.$$
Taking the points $x = 0, 1, 2, \ldots, 10$ as the nodes of the approximation, we construct a linear spline, which will have the form
$$S(x) = \begin{cases} 3.507x, & x \in [0, 1]; \\ -0.314x + 3.821, & x \in [1, 2]; \\ -2.778x + 8.749, & x \in [2, 3]; \\ -2.284x + 7.266, & x \in [3, 4]; \\ -0.116x - 1.404, & x \in [4, 5]; \\ 1.499x - 9.123, & x \in [5, 6]; \\ 1.44x - 9.123, & x \in [6, 7]; \\ 0.251x - 0.798, & x \in [7, 8]; \\ -0.785x + 7.483, & x \in [8, 9]; \\ -0.8787x + 8.402, & x \in [9, 10]. \end{cases}$$
The graphs of the original function (solid line) and its approximating linear spline (dashed line) are shown in Figure 4.
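A linear spline on a given grid is easy to evaluate with `numpy.interp`. The sketch below assumes the original function of this example in the form $y = 5\sin x\,e^{-3x/17}$ (as written above) and reports the first spline piece and the maximal deviation; any other sampled function could be substituted.

```python
import numpy as np

y = lambda x: 5 * np.sin(x) * np.exp(-3 * x / 17)   # original function of the example (assumed form)

nodes = np.arange(0, 11)                 # approximation nodes x = 0, 1, ..., 10
values = y(nodes)

x = np.linspace(0.0, 10.0, 2001)
spline = np.interp(x, nodes, values)     # piecewise linear interpolation through the nodes

print("piece on [0, 1]: slope =", round(values[1] - values[0], 3), " intercept =", round(values[0], 3))
print("max |y - S| on [0, 10]:", np.max(np.abs(y(x) - spline)))
```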
The error in approximating the original function using linear splines can be quite large. Nevertheless, in some cases, approximation by linear splines may be preferable to approximation by splines of higher degrees, for example, due to the simpler expressions for linear splines. In addition, a linear spline preserves the monotonicity of the values of the original function, which may not be the case for splines of higher degrees.
To reduce the error, the number of nodal points can be increased, but then, while having a simple structure on each partial section, the linear spline acquires a cumbersome appearance as a whole, which is clearly seen in the example considered. In addition, even the first derivative of a linear spline is not continuous. This drawback often prevents linear spline functions from being used to solve practical problems. For example, the study of the dynamics of motion of various objects involves the use of velocities and accelerations, which are derivatives of the angles of rotation and of the displacements. Discontinuities in the functions for velocities and accelerations create uncertainties and inconsistencies between mathematical models and real processes. A way out of this situation is offered by the methods of approximation of piecewise linear splines considered in the subsequent sections of the review.

2.7. Least Squares Method: Linear Regression

The approximating function $F(x)$ in the least squares method [50,51] is determined from the condition of the minimum of the sum of squared deviations $\xi_i$ of the calculated approximating function from a given array of experimental data. This criterion of the least squares method is written as the following expression:
$$\sum_{i=1}^{N}\xi_i^2 = \sum_{i=1}^{N}\left(F(x_i) - y_i\right)^2 \to \min,$$
where $F(x_i)$ are the values of the calculated approximating function $F(x)$ at the nodal points $x_i$, and $y_i$ is the given array of experimental data at the nodal points $x_i$.
This method can be useful when dealing with a large amount of information.
As an example, consider the method for determining the approximating function when it is given as a linear relationship [52]. In accordance with the least squares method, the condition for the minimum of the sum of squared deviations is written as follows:
$$S = \sum_{i=1}^{N}\xi_i^2 = \sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right)^2 \to \min,$$
where $x_i, y_i$ are the coordinates of the nodal points of the table, and $a_0, a_1$ are the unknown coefficients of the approximating function, which is given as a linear dependence.
A necessary condition for the existence of a minimum of a function is that its partial derivatives with respect to the unknown variables vanish. Then, we obtain:
$$\frac{\partial S}{\partial a_0} = 2\sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right) = 0; \quad \frac{\partial S}{\partial a_1} = 2\sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right)x_i = 0.$$
After some transformations we have:
$$a_0 N + a_1\sum_{i=1}^{N}x_i = \sum_{i=1}^{N}y_i; \quad a_0\sum_{i=1}^{N}x_i + a_1\sum_{i=1}^{N}x_i^2 = \sum_{i=1}^{N}x_i y_i.$$
Solving the resulting system of linear equations, we find the coefficients of the approximating function:
$$a_0 = \frac{\sum_{i=1}^{N}y_i\sum_{i=1}^{N}x_i^2 - \sum_{i=1}^{N}x_i y_i\sum_{i=1}^{N}x_i}{N\sum_{i=1}^{N}x_i^2 - \left(\sum_{i=1}^{N}x_i\right)^2}; \quad a_1 = \frac{N\sum_{i=1}^{N}x_i y_i - \sum_{i=1}^{N}y_i\sum_{i=1}^{N}x_i}{N\sum_{i=1}^{N}x_i^2 - \left(\sum_{i=1}^{N}x_i\right)^2}.$$
These coefficients are used to construct a linear approximating function according to the criterion of the minimum sum of squared deviations of the approximating function from the given tabular values representing the experimental data.
Example. Suppose we have initial data (Table 1).
Using the above formulas, we find the pair of regression coefficients: a 0 = 328.3 ,   a 1 = 12.078 .
Then, the regression equation will take the form
y = 12.078 · x + 328.3 .
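Since Table 1 itself is not reproduced here, the sketch below uses a small hypothetical data set and applies the closed-form expressions for $a_0$ and $a_1$ derived above, cross-checking them against a library least-squares fit.

```python
import numpy as np

# Hypothetical experimental data (stand-in for Table 1, which is not reproduced here).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([315.0, 305.0, 290.0, 280.0, 268.0, 256.0])

N = len(x)
denom = N * np.sum(x ** 2) - np.sum(x) ** 2
a0 = (np.sum(y) * np.sum(x ** 2) - np.sum(x * y) * np.sum(x)) / denom
a1 = (N * np.sum(x * y) - np.sum(y) * np.sum(x)) / denom
print(f"least-squares line: y = {a1:.3f} * x + {a0:.3f}")

# Cross-check with the built-in least-squares polynomial fit (returns [a1, a0]).
print("numpy.polyfit:", np.polyfit(x, y, 1))
```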

2.8. Hermite Interpolation

When constructing the Hermite interpolation polynomial, it is required not only that its values coincide with the tabular data at the nodes, but also that the values of its derivatives up to a certain order coincide as well [53,54].
Suppose that at the nodes $x_i \in [a, b]$, $i = 0, 1, \ldots, m$, among which there are no coinciding ones, the values of the function $f(x_i)$ and of its derivatives $f^{(j)}(x_i)$, $j = 1, 2, \ldots, N_i - 1$, up to order $N_i - 1$ are given. The number $N_i$ is called the multiplicity of the node $x_i$. Thus, at each point $x_i$ the following $N_i$ values are given: $f(x_i), f'(x_i), f''(x_i), \ldots, f^{(N_i - 1)}(x_i)$. In total, $N_0 + N_1 + \ldots + N_m$ values are known on the entire set of nodes $x_0, x_1, \ldots, x_m$, which makes it possible to pose the question of constructing a polynomial $H_n(x)$ of order $n = N_0 + N_1 + \ldots + N_m - 1$ satisfying the requirements:
$$H_n^{(j)}(x_i) = f^{(j)}(x_i), \quad i = 0, 1, \ldots, m, \quad j = 0, 1, \ldots, N_i - 1.$$
Such a polynomial is called the Hermite interpolation polynomial for the function $f(x)$. It can be proved that the Hermite interpolation polynomial exists and is unique.
The construction of the Hermite polynomial in the general case for an arbitrary number of nodes and their multiplicity leads to rather cumbersome expressions and is rarely used. Therefore, we confine ourselves to one example.
Example. Construct the Hermite interpolation polynomial for the function $f(x)$ in the case when at all interpolation nodes $x_i \in [a, b]$, $i = 0, 1, \ldots, m$, the values of the function $f(x_i) = f_i$ and of its first derivative $f'(x_i) = f_i'$ are given.
In this case $N_i = 2$, $i = 0, 1, \ldots, m$; therefore, the degree of the polynomial $H_n(x)$ is $2m + 1$. We write the polynomial in the form:
$$H_{2m+1}(x) = \sum_{i=0}^{m}\left[f_i + \alpha_i(x - x_i)\right]\cdot\frac{(x - x_0)^2\cdots(x - x_{i-1})^2(x - x_{i+1})^2\cdots(x - x_m)^2}{(x_i - x_0)^2\cdots(x_i - x_{i-1})^2(x_i - x_{i+1})^2\cdots(x_i - x_m)^2}.$$
When calculating the derivative of the polynomial $H_{2m+1}(x)$ at the node $x_i$, it should be taken into account that all terms of the sum, except for the term corresponding to the node itself, give zero contribution to the derivative at this point, so we obtain
$$H_{2m+1}'(x_i) = f_i\cdot\left(\frac{2}{x_i - x_0} + \ldots + \frac{2}{x_i - x_{i-1}} + \frac{2}{x_i - x_{i+1}} + \ldots + \frac{2}{x_i - x_m}\right) + \alpha_i = f_i'.$$
Therefore, we obtain $\alpha_i = f_i' - 2 f_i A_i$, where the numbers $A_i$ are determined by the formula
$$A_i = \frac{1}{x_i - x_0} + \ldots + \frac{1}{x_i - x_{i-1}} + \frac{1}{x_i - x_{i+1}} + \ldots + \frac{1}{x_i - x_m}.$$
Thus, the solution to this problem is the Hermite polynomial
$$H_{2m+1}(x) = \sum_{i=0}^{m}\left[f_i + (f_i' - 2 f_i A_i)(x - x_i)\right]\cdot\frac{(x - x_0)^2\cdots(x - x_{i-1})^2(x - x_{i+1})^2\cdots(x - x_m)^2}{(x_i - x_0)^2\cdots(x_i - x_{i-1})^2(x_i - x_{i+1})^2\cdots(x_i - x_m)^2}.$$
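A sketch of the formula just derived (values and first derivatives prescribed at every node, $N_i = 2$); the nodes and data below are hypothetical, taken from $f(x) = \sin x$ only to verify that $H_{2m+1}$ reproduces both the values and the derivatives.

```python
import numpy as np

def hermite(nodes, f_vals, df_vals, x):
    """H_{2m+1}(x) = sum_i [f_i + (f_i' - 2 f_i A_i)(x - x_i)] * prod_{j != i} (x - x_j)^2 / (x_i - x_j)^2."""
    x = np.asarray(x, dtype=float)
    H = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        others = [xj for j, xj in enumerate(nodes) if j != i]
        A_i = sum(1.0 / (xi - xj) for xj in others)
        alpha_i = df_vals[i] - 2.0 * f_vals[i] * A_i
        basis = np.ones_like(x)
        for xj in others:
            basis *= (x - xj) ** 2 / (xi - xj) ** 2
        H += (f_vals[i] + alpha_i * (x - xi)) * basis
    return H

# Hypothetical data: values and derivatives of f(x) = sin x at three nodes.
nodes = np.array([0.0, 1.0, 2.5])
f_vals, df_vals = np.sin(nodes), np.cos(nodes)

# The polynomial must reproduce the prescribed values and (checked by central differences) derivatives.
print(hermite(nodes, f_vals, df_vals, nodes) - f_vals)     # ~0
h = 1e-6
num_der = (hermite(nodes, f_vals, df_vals, nodes + h) - hermite(nodes, f_vals, df_vals, nodes - h)) / (2 * h)
print(num_der - df_vals)                                   # ~0
```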

2.9. Lebesgue Functions and Lebesgue Constant in Polynomial Interpolation

The Lebesgue constant is a valuable numerical tool for linear interpolation because it provides a measure of how close the interpolation of a function is to the best polynomial approximation of a function. Many publications [55,56,57,58,59] have been devoted to finding optimal interpolation points in the sense that these points lead to the minimum Lebesgue constant for interpolation problems on the interval [−1,1].
Definition 9.
Let $\Omega_n = \{x_i\}_{i=1}^{n}$ be a grid on $[a, b]$. The function
$$\Lambda_n(x) = \Lambda_n(x; x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n}|l_i(x)|$$
is called the Lebesgue function, and the Lebesgue constant is the number
$$\Lambda_n = \Lambda_n(x_1, x_2, \ldots, x_n) = \max_{x \in [a, b]}\Lambda_n(x; x_1, x_2, \ldots, x_n).$$
Here $l_1(x), l_2(x), \ldots, l_n(x)$ is some basis in the linear (vector) space of functions of dimension $n$.
The following statements are true [59]:
• $1 \le \Lambda_n(x) \le \Lambda_n$ for any $x \in [a, b]$;
• The value of $\Lambda_n$ does not depend on $[a, b]$, but depends only on the relative position of the nodes on it.
Let us pose the question: to what extent is the method of interpolation of a function by an algebraic polynomial inferior in accuracy to the best possible method of approximating a function by an algebraic polynomial of the same degree?
Let $P_{n-1}$ be an algebraic polynomial approximating the function $f$, obtained by some method. Thus, each method has its own polynomial $P_{n-1}$. The value $|f(x) - P_{n-1}(x)|$ determines the approximation error at a point $x \in [a, b]$, and the number $\|f - P_{n-1}\| = \max_{x \in [a, b]}|f(x) - P_{n-1}(x)|$ is the maximum error of this method.
Definition 10.
An algebraic polynomial $L_{n-1}$ is called the polynomial of best uniform approximation if $\|f - L_{n-1}\| = \min_{P_{n-1}}\|f - P_{n-1}\|$. The solution to this problem exists and is uniquely determined. The value $E_n(f) = \|f - L_{n-1}\|$ is called the error of the best uniform approximation.
Let’s make the following remarks:
  • If F n 1 is an approximation of f obtained by some method (for example, F n 1 is an interpolation polynomial), then f F n 1 E n x .
  • E n x   0 as n for any function f continuous on a , b . This follows directly from the Weierstrass’ theorem.
Theorem 6.
The estimates $E_n(f) \le \|f - R_{n-1}\| \le (1 + \Lambda_n)E_n(f)$ are valid.
Proof.
Let $L_{n-1}$ be the polynomial of best uniform approximation of $f$. Since the interpolation polynomial is unique, $L_{n-1}(x) = \sum_{i=1}^{n}L_{n-1}(x_i)l_i(x)$. Therefore,
$$|f(x) - R_{n-1}(x)| = |f(x) - L_{n-1}(x) + L_{n-1}(x) - R_{n-1}(x)| \le |f(x) - L_{n-1}(x)| + \sum_{i=1}^{n}|L_{n-1}(x_i) - f(x_i)|\,|l_i(x)| \le (1 + \Lambda_n(x))\|f - L_{n-1}\| \le (1 + \Lambda_n)E_n(f).$$
The lower estimate is valid by the definition of $E_n(f)$. It follows from the upper estimate that the interpolation polynomial $R_{n-1}(x)$ is less accurate than the best uniform approximation by at most a factor of $1 + \Lambda_n$. □
Let’s pose the second question: how sensitive is the interpolation polynomial to the error in setting the function?
Let the approximate values f ˜ x i be known at the interpolation nodes instead of the exact values of f x i with an error ϵ x i not exceeding ε : ϵ x i = f x i f ˜ x i ε . Thus, instead of R n 1 x the perturbed polynomial R ˜ n 1 x will be constructed from the values of f ˜ x i . Of practical interest is the deviation of R ˜ n 1 from f .
Theorem 7.
The estimate is f x R ˜ n 1 x f x R n 1 x + ε Λ n .
Proof. 
Obviously,
R n 1 x R ˜ n 1 x = i = 1 n f x i f ˜ x i l i x ε Λ n .
Therefore,
f x R ˜ n 1 x f x R n 1 x + R n 1 x R ˜ n 1 x f x R n 1 x + ε Λ n .
It follows from estimate that the larger Λ n , the more sensitive the interpolation procedure to the error in setting the function.
Important conclusions follow from the estimates obtained [59]:
1. The smaller the Lebesgue constant $\Lambda_n$ and the smoother the function, the better the interpolation is, both in terms of accuracy and in terms of its sensitivity to errors in specifying the function;
2. If the sequence of grids $\Omega_n = \{x_i\}_{i=1}^{n}$ satisfies the condition $\Lambda_n E_n(f) \to 0$ as $n \to \infty$, then $|f(x) - R_{n-1}(x)| \to 0$ as $n \to \infty$ uniformly in $x$ (in this case one speaks of the convergence of the interpolation process);
3. In calculations, the following picture can be observed: as $n$ increases, the error $|f(x) - \tilde{R}_{n-1}(x)|$ first decreases and then begins to increase.
The value of $\Lambda_n$ depends on the choice of the nodes $\Omega_n$.
A detailed consideration of issues related to the significance of the Lebesgue constant, moduli of smoothness, selection of optimal nodes, weighted polynomial interpolation is given in fundamental works [1,2,3,4].
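A sketch that approximates the Lebesgue function $\Lambda_n(x) = \sum_i |l_i(x)|$ for the Lagrange basis on $[-1, 1]$ on a fine grid and compares the resulting Lebesgue constants for equidistant and Chebyshev nodes; the rapid growth of $\Lambda_n$ for equidistant nodes is clearly visible.

```python
import numpy as np

def lebesgue_constant(nodes, a=-1.0, b=1.0, n_samples=20001):
    """Approximate Lambda_n = max over [a, b] of sum_i |l_i(x)| for the Lagrange basis l_i."""
    x = np.linspace(a, b, n_samples)
    lebesgue_fn = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        lebesgue_fn += np.abs(li)
    return lebesgue_fn.max()

for n in (5, 10, 15, 20):
    equidistant = np.linspace(-1.0, 1.0, n)
    k = np.arange(n)
    chebyshev = np.cos((2 * k + 1) * np.pi / (2 * n))      # n Chebyshev nodes on [-1, 1]
    print(f"n = {n:2d}   Lambda_n(equidistant) = {lebesgue_constant(equidistant):12.2f}"
          f"   Lambda_n(Chebyshev) = {lebesgue_constant(chebyshev):6.3f}")
```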

3. New Methods of Approximation of Piecewise-Linear and Generalized Functions

This section of the review describes new methods for approximating piecewise linear functions, especially piecewise constant functions, and generalized functions; a comparative analysis of the proposed and existing methods for approximating such functions by analytical dependences based on Fourier series is carried out. In addition, the issues of convergence and error of the proposed methods are studied, and numerous examples and applications are considered.
The general idea of using a repeating procedure, which gives a more accurate result with each subsequent application, is the basis, for example, of the mathematical theory of deep learning. Deep learning is a type of machine learning using multi-layer neural networks that learn on their own on a large dataset. Artificial intelligence with deep learning itself finds an algorithm for solving the original problem, learns from its mistakes, and after each iteration of training gives a more accurate result.

3.1. Disadvantages of Approximating Piecewise Linear Functions by Fourier Series

To simplify calculations when working with piecewise linear and generalized functions, one resorts in many cases to approximation methods. Replacing piecewise linear functions with more regular functions of class $C^k$ makes it possible not to worry about tracking and matching the values of the process variables at the boundaries of the sections, which greatly simplifies the calculations. In some cases, algebraic polynomials are used to approximate piecewise linear functions. Another of the most widely used methods for approximating piecewise linear functions is the expansion of these functions in Fourier series $f = \sum_{k=1}^{\infty}c_k\varphi_k$, where $\varphi_1, \varphi_2, \ldots, \varphi_n, \ldots$ is an orthogonal system in the functional Hilbert space $L_2(-\pi, \pi)$ of measurable functions with Lebesgue-integrable squares, $f \in L_2(-\pi, \pi)$, $c_k = (f, \varphi_k)/\|\varphi_k\|^2$. The trigonometric system of $2\pi$-periodic functions $1, \sin nx, \cos nx$, $n \in \mathbb{N}$, is often taken as the orthogonal system.
As for the approximation of continuous functions by polynomials or Fourier series, we can discuss the uniform convergence of the approximating functions based on the Weierstrass theorems.
However, for discontinuous piecewise linear functions, the Weierstrass theorems do not hold. Therefore, when approximating such functions, problems may arise that cause negative consequences in solving applied problems. For example, the use of Fourier series, along with its positive properties, has certain disadvantages. With a relatively small number of terms in the Fourier series used for the expansion of a piecewise linear function, the approximating function has a pronounced wavy character even within one rectilinear section of the piecewise linear function, which leads to a rather large approximation error. Curves 1 and 2 in Figure 5 illustrate this drawback.
Moreover, even for a large number of terms in the expansion using the Fourier series, there are characteristic jumps of the approximating function in the neighborhood $O_\delta(x_0)$ of a discontinuity point of the original function. For such points, $\sup_{x \in O_\delta(x_0)\setminus\{x_0\}}|f(x) - S_n(x)|$ does not tend to zero as $n \to \infty$, where $S_n(x)$ is the partial sum of the Fourier series [60].
For example, for the function
$$f_0(x) = \mathrm{sign}(\sin x)$$
with rectangular pulses, the point $x = \pi/m$, where $m = 2[(n+1)/2]$ and $[A]$ is the integer part of the number $A$, is the maximum point of the partial sum $S_n(f_0)$ of the trigonometric Fourier series [61]; moreover, $S_n(f_0, \pi/m) \to \frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt \approx 1.17898$ as $n \to \infty$. That is, the magnitude of the absolute error $\left|f_0(\pi/m) - \lim_{n\to\infty}S_n(f_0, \pi/m)\right| > 0.178$, and the relative error is more than 17%, regardless of the number of terms in the partial sum of the Fourier series. Notice that $x = \pi/m \to 0 + 0$ as $n \to \infty$.
In Figure 5, curve 3 corresponds to the graph of the approximating function $f_n(x) = \sum_{k=1}^{20}c_k\varphi_k$ and illustrates the increased approximation error in the vicinity of the discontinuity points of the original Function (1). This is a manifestation of the so-called Gibbs effect; with an increase in the number of harmonics, the Gibbs effect does not disappear, which leads to extremely negative consequences of using the approximating function. Figure 6 shows the graph of the partial sum $S_{20}(f_0)$ of the trigonometric series on the segment $[-\pi, \pi]$, illustrating the manifestation of the Gibbs effect.
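A sketch illustrating the Gibbs overshoot for $f_0(x) = \mathrm{sign}(\sin x)$. Its Fourier expansion contains only the odd sine harmonics $\frac{4}{\pi k}\sin kx$ (the standard square-wave series, stated here as a known fact rather than derived in the text), and the maximum of the partial sum near the jump stays close to 1.17898 no matter how many terms are taken.

```python
import numpy as np

def square_wave_partial_sum(x, n):
    """Partial Fourier sum of f0(x) = sign(sin x): (4 / pi) * sum over odd k <= n of sin(k x) / k."""
    s = np.zeros_like(x)
    for k in range(1, n + 1, 2):
        s += 4.0 / (np.pi * k) * np.sin(k * x)
    return s

x = np.linspace(1e-4, np.pi / 2, 200001)     # look to the right of the jump at x = 0
for n in (9, 99, 999):
    overshoot = square_wave_partial_sum(x, n).max()
    print(f"n = {n:4d}   max of S_n near the jump = {overshoot:.5f}")   # approaches ~1.17898
```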
The most unpleasant thing is that the Gibbs effect is general in nature: it manifests itself for any function $f \in L_2[a, b]$ that has bounded variation on the segment $[a, b]$ and an isolated discontinuity point $x_0 \in (a, b)$. For such functions, the following condition is satisfied [61]:
$$\lim_{n\to\infty}S_n(f, x_0 + \pi/m) = f(x_0 + 0) + \frac{d}{2}\left(\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt - 1\right),$$
where $d = f(x_0 + 0) - f(x_0 - 0)$.
Let us show that the absolute approximation error $\Delta = \Delta(x)$ and the relative approximation error $\delta = \delta(x)$ in the vicinity of the discontinuity points can be arbitrarily large.
Indeed,
$$\lim_{n\to\infty}\Delta(x_0 + \pi/m) = \lim_{n\to\infty}\left|S_n(f, x_0 + \pi/m) - f(x_0 + \pi/m)\right| = \left|\lim_{n\to\infty}S_n(f, x_0 + \pi/m) - \lim_{n\to\infty}f(x_0 + \pi/m)\right| = \left|f(x_0 + 0) + \frac{d}{2}\left(\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt - 1\right) - f(x_0 + 0)\right| = \frac{|d|}{2}\left(\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt - 1\right) = \Delta(d).$$
For each $d$ there exists a function $f = f_d$ satisfying the previous conditions. The property $\Delta(d) \to +\infty$ has to be understood in the following way: the function $\Delta(d)$ is infinitely large, since
$$\forall M > 0 \;\; \exists d^* = d^*(M) > 0 \;\; \forall d: \; |d| > d^* \Rightarrow \Delta(d) = \frac{|d|}{2}\left(\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt - 1\right) > M.$$
As $d^*$ one can take, for example, $\dfrac{2M\pi}{2\int_0^{\pi}\frac{\sin t}{t}\,dt - \pi} + 1$.
For the relative error $\delta(x) = \Delta(x)/|f(x)|$ the proof is similar. Moreover, even for a fixed value $d \in \mathbb{R}$ ($d \ne 0$), for any $M > 0$ one can choose a function $f(x) \in L_2[a, b]$ for which $\delta(x_0 + 0, d) = \Delta(x_0 + 0, d)/|f(x_0 + 0)| > M$. As such a function one can take, for example, a function $f$ for which $|f(x_0 + 0)| < \Delta(x_0 + 0, d)/M$, $f(x_0 + 0) \ne 0$.
Note that even on the set of continuous functions $C[-\pi, \pi]$ the Fourier series, as is known [31], need not converge at every point.
The existence of the Gibbs effect leads to extremely negative consequences of using a partial sum of a trigonometric series as an approximating function for solving problems of mathematical modeling, for example, when studying periodic movements of technical systems, distortions in signal transmission, etc.
The approximation error is especially striking when Fourier series are used for generalized functions, for example, the δ-function, or, in other words, the Dirac function. This function is widely used in quantum theory and to describe the density of a point mass, the density of a point charge, concentrated loads, instantaneous impulse processes, shock effects, the intensity of a point heat source, diffusion processes in semiconductors, and so on.
Generalized functions were introduced in connection with problems of physics and mathematics that appeared in the twentieth century and required a new understanding of the concept of a function. Singular generalized functions are very different from regular functions. It is known that the δ-function is not a function in the usual sense of this word; rather, it is defined as a functional, and informally by the expression
$$\delta(x) = \begin{cases} +\infty, & x = 0, \\ 0, & x \ne 0, \end{cases}$$
moreover, $\int_{-\infty}^{+\infty}\delta(x)\,dx = 1$.
Generalized functions were introduced in mathematics by Sobolev and Schwartz [62,63,64,65]. Generalized functions have become a key tool in much of PDE theory and form a huge part of analysis.
Let us provide a mathematically more precise definition of a generalized function.
Definition 11.
A generalized function in the sense of Sobolev-Schwartz is any linear continuous functional on the space of basic functions [23].
Thus: (1) the generalized function $f$ is a functional on the set of basic functions $D$ [23], that is, each basic function $\varphi \in D$ is associated with a (complex) number $\langle f, \varphi \rangle$; (2) a generalized function $f$ is a linear functional on $D$, that is, if $\varphi, \psi \in D$ and $\lambda, \mu$ are complex numbers, then $\langle f, \lambda\varphi + \mu\psi \rangle = \lambda\langle f, \varphi \rangle + \mu\langle f, \psi \rangle$; (3) the generalized function $f$ is a continuous functional on $D$, that is, if $\varphi_k \to \varphi$ as $k \to \infty$ in $D$, then $\langle f, \varphi_k \rangle \to \langle f, \varphi \rangle$ as $k \to \infty$.
A very intuitive graph of the δ-function is shown in Figure 7.
For the convenience of using analytical research methods, the delta function is decomposed into a Fourier series.
We introduce a sequence of step functions of the form
$$\delta_n(x) = \begin{cases} n/2, & x \in [-1/n,\ 1/n], \\ 0, & x \notin [-1/n,\ 1/n]. \end{cases}$$
The functions of this sequence have graphs corresponding to the graph of the step function shown in Figure 8.
It is easy to see that for any n the area of the figure under the graph of such a step function is equal to one.
For the function, the graph of which is shown in Figure 8, we find the values of the coefficients of the Fourier series on the segment [−π,π]:
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{+\pi}\delta_n(x)\,dx = \frac{1}{\pi}\int_{-1/n}^{1/n}\frac{n}{2}\,dx = \frac{1}{\pi};$$
$$b_k = 0\ \text{(the function is even)};\qquad a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\delta_n(x)\cos kx\,dx = \frac{1}{\pi}\int_{-1/n}^{1/n}\frac{n}{2}\cos kx\,dx = \frac{1}{\pi}\cdot\frac{n}{2}\cdot\frac{2}{n}\cos(k\xi_n) = \frac{\cos(k\xi_n)}{\pi},\quad \xi_n \in [-1/n,\ 1/n],$$
by virtue of the theorem on the mean value of a definite integral.
The equalities with δ(x) have to be understood in the sense of limits of sequences of distributions. In the theory of generalized functions, the limit of a sequence of distributions is the distribution that the sequence approaches: the distance, suitably quantified, to the limiting distribution can be made arbitrarily small by selecting a distribution sufficiently far along the sequence. This notion generalizes the limit of a sequence of functions; a limit in the sense of distributions may exist when a limit of functions does not.
Given a sequence of distributions $f_n$, its limit $f$ is the distribution given by $\langle f, \varphi \rangle = \lim_{n\to\infty} \langle f_n, \varphi \rangle$ for each test function $\varphi$, provided that this distribution exists.
Since the delta function $\delta(x) = \lim_{n\to\infty}\delta_n(x)$, and noticing that $\xi_n \to 0$, we find $a_k = 1/\pi$.
Consequently, the expansion of the delta function in a Fourier series on the interval $[-\pi, \pi]$ has the form
$$\delta(x) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{k=1}^{\infty}\cos(kx).$$
For a finite series, we have an approximate relation
$$\delta(x) \approx \frac{1}{2\pi} + \frac{1}{\pi}\sum_{k=1}^{n}\cos(kx).$$
This approximate equality is only informal because the point value δ(x) has no meaning in the Sobolev-Schwartz theory. Generalized functions can, however, be considered as set-theoretic maps (into a non-Archimedean ring of scalars) [24].
The graph of the approximation of the delta function by the Fourier series is shown in Figure 9.
Comparison of the graphs (Figure 7 and Figure 9) shows that even with a significant number of harmonics (in our case n = 1000), the approximation error is very large. The minimum value of the constructed approximation is negative and equals −69.182. Moreover, with an infinite increase in the number of terms in the approximating Fourier series, the minimum value of its sum tends to −∞ (Figure 10), which fully corresponds to the assertion proved in this section about the possibly infinitely large error of approximation using the Fourier series.
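These figures are straightforward to reproduce. The short Python sketch below (ours, not from the cited works) evaluates the truncated expansion written above on a fine grid; for n = 1000 the printed minimum is close to the value −69.182 quoted above, and the undershoot keeps growing with n:

import numpy as np

def delta_fourier(x, n):
    # Truncated Fourier expansion of the delta function on [-pi, pi]:
    # 1/(2*pi) + (1/pi) * sum_{k=1}^{n} cos(k*x).
    s = np.full_like(x, 1 / (2 * np.pi))
    for k in range(1, n + 1):
        s += np.cos(k * x) / np.pi
    return s

x = np.linspace(-np.pi, np.pi, 200_001)
for n in (100, 1000):
    print(n, round(delta_fourier(x, n).min(), 3))
# Prints a minimum of about -7 for n = 100 and about -69 for n = 1000.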
The existence of the Gibbs effect in the approximation of functions by trigonometric expressions also calls into question the proofs of some important theorems. In particular, in the theory of signal transmission, the classical Nyquist-Shannon-Kotelnikov sampling theorem is widely used. The proof of the theorem [66] uses, to approximate functions, the so-called integral sine $\operatorname{Si}(x) = \int_0^x \frac{\sin t}{t}\,dt$.
On the basis of the integral sine, Kotelnikov, in proving the theorem [66], builds the function $\operatorname{Si}(T(\omega + \omega_1)) - \operatorname{Si}(T(\omega - \omega_1))$, where $\omega$ is the argument and $T$, $\omega_1$ are parameters. At the same time, he claims that with increasing $T$ this function tends to the limits shown in Figure 11a, that is, we quote literally, it is equal to zero at $\omega > \omega_1$ and equal to $\pi$ at $\omega < \omega_1$.
In fact, this is not the case. The graph of the limiting function has the form shown in Figure 11b. That is, for any, even arbitrarily large but finite, value of the parameter $T$, there will always be $\omega < \omega_1$ for which the values of the function constructed by Kotelnikov differ from $\pi$, and there will always be $\omega > \omega_1$ for which its values differ from zero. Moreover, it is important to note that the indicated difference does not tend to zero with increasing $T$, but tends to a number other than zero, approximately equal to 0.281, that is, a sufficiently large value. Therefore, the classical Nyquist-Shannon-Kotelnikov sampling theorem requires careful revision.
In the practice of image formation, the noted errors lead to a speckle effect, which manifests itself in the spotting of such images and their increased graininess (Figure 12). The speckle effect is the result of the interference of many waves of the same frequency having different phases and amplitudes, which add up to a resulting wave whose amplitude, and therefore intensity, changes randomly. To the viewer, the image appears covered with frequent small spots, which, of course, degrades its quality. Such shortcomings in signal transmission lead to signal distortions, which can be significant.
The described shortcomings clearly indicate the need to develop new, more efficient methods for approximating piecewise linear functions.

3.2. Description of New Methods of Approximation of Piecewise-Linear Functions and Their Convergence

To eliminate the noted shortcomings, S. Aliukov [5,6,11,14,16] proposed new methods for approximating piecewise-linear functions, based, like the Fourier series, on the use of trigonometric expressions, but in the form of recursive functions.
To explain these methods, consider, for example, step Function (1) in more detail. This function is often used for an example of the application of Fourier series and therefore it is convenient to take this function for a comparative analysis of the traditional Fourier series expansion and the proposed method.
The expansion of Function (1) in a Fourier series has all the above-described disadvantages. To eliminate them, it is proposed to approximate the original step function by a sequence of recursive periodic functions
$$\left\{ f_n(x)\ \middle|\ f_n(x) = \sin\left(\frac{\pi}{2} f_{n-1}(x)\right),\ f_1(x) = \sin x;\ n \in \mathbb{N},\ n > 1 \right\} \subset C[-\pi, \pi]. \qquad (2)$$
The graphs of the original function (thickened line) and its five successive approximations in this case have the form (Figure 13).
As can be seen, even for relatively small values of $n$, when the iterative procedure (2) is used, the graph of the approximating function approximates the original Function (1) quite well. In this case, the approximating functions obtained using the proposed method are free from the drawbacks of the expansion in Fourier series. The Gibbs effect is completely absent.
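The recursion (2) is also straightforward to implement. The short Python sketch below (ours; the helper name is arbitrary) evaluates $f_n$ and shows that, in contrast to the Fourier partial sums, the approximants never exceed 1 in absolute value, so there is no overshoot, while the mean deviation from $\operatorname{sign}(\sin x)$ decreases with $n$:

import numpy as np

def f_recursive(x, n):
    # Recursive approximation (2): f_1(x) = sin(x), f_k(x) = sin((pi/2) * f_{k-1}(x)).
    f = np.sin(x)
    for _ in range(n - 1):
        f = np.sin(np.pi / 2 * f)
    return f

x = np.linspace(-np.pi, np.pi, 4001)
target = np.sign(np.sin(x))
for n in (1, 3, 5, 10, 20):
    fn = f_recursive(x, n)
    print(n, round(np.abs(fn).max(), 6), round(np.abs(target - fn).mean(), 4))
# max|f_n| never exceeds 1 (no Gibbs-type overshoot); the mean deviation shrinks as n grows.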
Let us note some features of the proposed approximating iteration procedure.
Note that the functions $f_n(x)$ and $f_0(x)$ are odd and periodic with period $2\pi$. The functions $f_n(x + \pi/2)$ and $f_0(x + \pi/2)$ are even and periodic. Therefore, it is sufficient to consider the sequence of approximating functions (2) on the interval $[0, \pi/2]$.
Theorem 8.
The sequence of functions f n ( x ) converges to the original function f 0 ( x ) , and the convergence is pointwise, but not uniform.
Proof. 
At the points $x = 0$ and $x = \pi/2$ we have $f_n(x) - f_0(x) = 0$, $n \in \mathbb{N}$. Therefore, at these points $f_n(x) \to f_0(x)$ as $n \to \infty$.
Since $\sin x > (2/\pi)x$ for $x \in (0, \pi/2)$, the condition $f_n(x) = \sin\left(\frac{\pi}{2} f_{n-1}(x)\right) > f_{n-1}(x) > \ldots > f_1(x) > 0$ is satisfied for any $x \in (0, \pi/2)$. Then the sequence $f_n(x)$, $x \in (0, \pi/2)$, is positive, increasing and bounded, and therefore has a finite limit, which we denote by $\lim_{n\to\infty} f_n(x) = A \in \mathbb{R}$. We obtain $A = \lim_{n\to\infty}\sin\left(\frac{\pi}{2} f_{n-1}(x)\right) = \sin\left(\frac{\pi}{2}\lim_{n\to\infty} f_{n-1}(x)\right) = \sin\left(\frac{\pi}{2}A\right)$, whence we find that $A = 0$ or $A = 1$. Since the sequence is positive and increasing, $A = 1 = f_0(x)$. Then, on the considered interval, $f_n(x) \to f_0(x)$ as $n \to \infty$. Taking into account the previously made conclusion about the convergence of the sequence at the points $x = 0$ and $x = \pi/2$, we conclude that $f_n(x) \to f_0(x)$, $x \in [0, \pi/2]$, as $n \to \infty$. This convergence is only pointwise and not uniform, since the function $f_0(x)$ is not continuous on the segment $[0, \pi/2]$. □
Theorem 9.
In the space of integrable functions $L_1[0, \pi/2]$ and in the Hilbert space $L_2[0, \pi/2]$ the sequence of approximating functions $f_n(x)$ converges in norm to the original function $f_0(x)$.
Proof. 
We introduce a sequence of minorants for the sequence of functions $f_n(x)$:
$$\left\{ \eta_n(x)\ \middle|\ \eta_n(x) = \frac{2}{\pi}\operatorname{arctg}(nx);\ n \in \mathbb{N} \right\} \subset C[0, \pi/2].$$
It can be shown that $f_n(x) \ge \eta_n(x)$, $n \in \mathbb{N}$, $x \in [0, \pi/2]$. Note that the measure of the set of discontinuity points of the function $f_0(x)$ is equal to zero. Then, taking into account the non-negativity and boundedness of the functions $f_n(x)$ and $\eta_n(x)$ on the segment under consideration, in the space $L_1[0, \pi/2]$ we obtain
$$\left\| f_0(x) - f_n(x) \right\| = \int_0^{\pi/2}\left(1 - f_n(x)\right)dx \le \int_0^{\pi/2}\left(1 - \eta_n(x)\right)dx = \frac{\pi}{2} - \operatorname{arctg}\frac{\pi n}{2} + \frac{1}{\pi n}\ln\left(1 + \frac{(\pi n)^2}{4}\right).$$
Since $\lim_{n\to\infty}\left(\frac{\pi}{2} - \operatorname{arctg}\frac{\pi n}{2} + \frac{1}{\pi n}\ln\left(1 + \frac{(\pi n)^2}{4}\right)\right) = 0$, it follows that $\left\| f_0(x) - f_n(x) \right\| \to 0$ as $n \to \infty$.
Similarly, one can prove that the sequence f n ( x ) converges in the norm to a function f 0 ( x ) in the space L 2 [ 0 , π / 2 ] . □
Thus, the sequence of approximating functions $f_n(x)$ is fundamental in the spaces $L_1[-\pi, \pi]$ and $L_2[-\pi, \pi]$. In the space $C[-\pi, \pi]$ the sequence $f_n(x)$ is not fundamental.
The function $f_1(x)$ will be called the initial (or angular) function. Instead of the sine, another (not necessarily periodic) function can be used as the initial function. Note that when the iterative procedure (2) is used under the condition $\left| f_1(x) \right| < 2$, we obtain $\lim_{n\to\infty} f_n(x) = \operatorname{sign}(f_1(x))$. In this case, any step function can be approximated. Indeed, for the step function
$$f(x) = \begin{cases} h, & x \in (x_1, x_2), \\ 0, & x \notin (x_1, x_2), \end{cases} \qquad (3)$$
take the initial function in the form $f_1(x) = \exp\left(1 - (ax + b)^2\right) - 1$. From the condition $f_1(x_1) = f_1(x_2) = 0$ we find $a = 2/(x_1 - x_2)$, $b = (x_1 + x_2)/(x_2 - x_1)$. For these values of the coefficients $a$ and $b$ the sequence
$$\left\{ f_n(x)\ \middle|\ f_n(x) = \frac{h}{2}\left(1 + \sin\varphi_n(x)\right),\ \varphi_n(x) = \frac{\pi}{2}\sin\varphi_{n-1}(x),\ \varphi_1(x) = \frac{\pi}{2} f_1(x),\ n \in \mathbb{N},\ n > 1 \right\}$$
converges to the step function $f(x)$. Then, any step function with values $h_i$ on the intervals $(x_{1i}, x_{2i})$ can be approximated by the sum $\sum_{i=1}^{k} f_n^{(i)}(x)$ of similar sequences.
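For illustration, the construction just described can be coded directly. The Python sketch below (ours; the numerical values of $x_1$, $x_2$, $h$ and $n$ are arbitrary) builds the approximation of a single step of height $h$ on $(x_1, x_2)$ from the initial function $f_1(x) = \exp\left(1 - (ax + b)^2\right) - 1$:

import numpy as np

def step_approx(x, x1, x2, h, n):
    # Approximation of the step equal to h on (x1, x2) and 0 outside,
    # built as f_n = (h/2)(1 + sin(phi_n)) with phi_n = (pi/2) sin(phi_{n-1}).
    a = 2.0 / (x1 - x2)
    b = (x1 + x2) / (x2 - x1)
    phi = np.pi / 2 * (np.exp(1.0 - (a * x + b) ** 2) - 1.0)   # phi_1 = (pi/2) * f_1
    for _ in range(n - 1):
        phi = np.pi / 2 * np.sin(phi)
    return h / 2 * (1.0 + np.sin(phi))

x = np.linspace(-1.0, 3.0, 801)
y = step_approx(x, x1=0.0, x2=2.0, h=5.0, n=15)   # close to 5 on (0, 2) and close to 0 outside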
The theorem proved above is of a general nature and is valid for an arbitrary step function. Therefore, for example, an arbitrary periodic step function can be represented as a linear combination $f(x) = \sum_{i=1}^{k} h_i\, f_0^{(i)}(x)$, $h_i \in \mathbb{R}$, of the functions $f_0^{(i)}(x) = \operatorname{sign}\left(\sin\left(l_i (x - x_i)\right)\right)$, $l_i, x_i \in \mathbb{R}$, shifted in phase and along the ordinate axis. According to the proved theorem, in the spaces $L_1[-\pi, \pi]$ and $L_2[-\pi, \pi]$ we have the convergence $\left\| f_0^{(i)}(x) - f_n^{(i)}(x) \right\| \to 0$ as $n \to \infty$ for every $i$; therefore, the function $f_n(x) = \sum_{i=1}^{k} h_i\, f_n^{(i)}(x)$ converges in norm to the function $f(x)$, since
$$\left\| f(x) - f_n(x) \right\| = \left\| \sum_{i=1}^{k} h_i\, f_0^{(i)}(x) - \sum_{i=1}^{k} h_i\, f_n^{(i)}(x) \right\| \le \sum_{i=1}^{k} \left| h_i \right| \cdot \left\| f_0^{(i)}(x) - f_n^{(i)}(x) \right\| \to 0 \quad (n \to \infty).$$

3.3. Approximation Error

To estimate the error of approximation (2), we use the relation
$$\varphi_n(x) \le f_n(x) \le \psi_n(x)$$
(Figure 14), where
$$\psi_n(x) = \left(\frac{\pi}{2}\right)^{n-1} x, \quad |x| \le \frac{\pi}{2}, \quad n \in \mathbb{N}.$$
The functions $\varphi_n(x)$ and $\psi_n(x)$ are constructed from the condition of equality of the derivatives at zero, $\varphi_n'(0) = \psi_n'(0) = f_n'(0)$, which allows one to obtain a narrow interval for estimating the approximation error.
In the space $L_1[0, \pi/2]$ the estimates for the absolute and relative errors are, respectively,
$$\frac{1}{2}\left(\frac{2}{\pi}\right)^{n-1} \le \left\| f_n(x) - f_0(x) \right\|_{L_1[0, \pi/2]} \le \left(\frac{2}{\pi}\right)^{n-1}\left(1 - \exp\left(-\left(\frac{\pi}{2}\right)^{n}\right)\right);$$
$$\frac{1}{2}\left(\frac{2}{\pi}\right)^{n} \le \frac{\left\| f_n(x) - f_0(x) \right\|_{L_1[0, \pi/2]}}{\pi/2} \le \left(\frac{2}{\pi}\right)^{n}\left(1 - \exp\left(-\left(\frac{\pi}{2}\right)^{n}\right)\right).$$
For the space $L_2[0, \pi/2]$, these estimates take, respectively, the form
$$\left(\frac{1}{3}\left(\frac{2}{\pi}\right)^{n-1}\right)^{1/2} \le \left\| f_n(x) - f_0(x) \right\|_{L_2[0, \pi/2]} \le \left(\frac{1}{2}\left(\frac{2}{\pi}\right)^{n-1}\left(1 - \exp\left(-2\left(\frac{\pi}{2}\right)^{n}\right)\right)\right)^{1/2};$$
$$\left(\frac{1}{3}\left(\frac{2}{\pi}\right)^{n}\right)^{1/2} \le \frac{\left\| f_n(x) - f_0(x) \right\|_{L_2[0, \pi/2]}}{(\pi/2)^{1/2}} \le \left(\frac{1}{2}\left(\frac{2}{\pi}\right)^{n}\left(1 - \exp\left(-2\left(\frac{\pi}{2}\right)^{n}\right)\right)\right)^{1/2}.$$
The graphs of the upper and lower estimates of the relative error $\delta$ as functions of $n \in \mathbb{N}$ for the space $L_1[0, \pi/2]$ (curves 1) and the space $L_2[0, \pi/2]$ (curves 2) are shown in Figure 15.
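These estimates can be checked numerically. The Python sketch below (ours) computes the $L_1[0, \pi/2]$ error of $f_n$ by simple quadrature and compares it with the lower and upper bounds written above:

import numpy as np

def f_recursive(x, n):
    # f_1 = sin x, f_k = sin((pi/2) f_{k-1}): the sequence (2).
    f = np.sin(x)
    for _ in range(n - 1):
        f = np.sin(np.pi / 2 * f)
    return f

x = np.linspace(0.0, np.pi / 2, 200_001)
for n in (2, 5, 10):
    err = np.mean(1.0 - f_recursive(x, n)) * (np.pi / 2)            # L1 error on [0, pi/2]
    lower = 0.5 * (2 / np.pi) ** (n - 1)
    upper = (2 / np.pi) ** (n - 1) * (1 - np.exp(-(np.pi / 2) ** n))
    print(n, round(lower, 5), round(err, 5), round(upper, 5), lower <= err <= upper)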
Considering the approximation of the step function $f(x)$ (3), we assumed that its position and height are known exactly. In real problems, the parameters are usually specified approximately. Let, for example, the initial parameters be specified with absolute errors
$$\left| \hat{x}_1 - x_1 \right| = \Delta x_1 \in \left[0, \overline{\Delta x}_1\right), \quad \left| \hat{x}_2 - x_2 \right| = \Delta x_2 \in \left[0, \overline{\Delta x}_2\right), \quad \left| \hat{h} - h \right| = \Delta h \in \left[0, \overline{\Delta h}\right),$$
where $\overline{\Delta x}_1 = \sup\Delta x_1$, $\overline{\Delta x}_2 = \sup\Delta x_2$, $\overline{\Delta h} = \sup\Delta h$, and $\hat{x}_1$, $\hat{x}_2$, $\hat{h}$ are the approximate values of the parameters. Consider the step function (3) on a segment $[c, d]$ for which $[x_1 - \overline{\Delta x}_1,\ x_2 + \overline{\Delta x}_2] \subset [c, d]$. In this case, in the spaces $L_1[c, d]$, $L_2[c, d]$, $M[c, d]$, where $M[c, d]$ is the set of functions bounded on the interval $[c, d]$ with the metric $\rho\left(f^{(1)}(x), f^{(2)}(x)\right) = \sup_{x \in [c, d]}\left| f^{(1)}(x) - f^{(2)}(x) \right|$, for the absolute error of approximation in the norm we obtain, respectively, the estimates
$$\Delta f < \sup_{\Delta x_1,\, \Delta x_2,\, \Delta h}\ \lim_{n\to\infty}\left\| f(x) - f_n(x) \right\|_{L_1[c, d]} = \left(h + \overline{\Delta h}\right)\left(\overline{\Delta x}_1 + \overline{\Delta x}_2\right) + \left(x_2 - x_1\right)\overline{\Delta h};$$
$$\Delta f < \sup_{\Delta x_1,\, \Delta x_2,\, \Delta h}\ \lim_{n\to\infty}\left\| f(x) - f_n(x) \right\|_{L_2[c, d]} = \sqrt{\left(h + \overline{\Delta h}\right)^2\left(\overline{\Delta x}_1 + \overline{\Delta x}_2\right) + \left(x_2 - x_1\right)\overline{\Delta h}^{\,2}};$$
$$\Delta f < \sup_{\Delta x_1,\, \Delta x_2,\, \Delta h}\ \lim_{n\to\infty}\left\| f(x) - f_n(x) \right\|_{M[c, d]} = h + \overline{\Delta h}.$$
As we can see from the estimates obtained, the approximation error does not accumulate, which is a positive side of the proposed method.
Since in practice, as a rule, only the approximate values of the parameters and the measurement errors are known, it is better to express the upper estimates for the absolute approximation error in the form
$$\left\| \Delta f \right\|_{L_1[c, d]} < \left(\hat{h} + 2\overline{\Delta h}\right)\left(\overline{\Delta x}_1 + \overline{\Delta x}_2\right) + \left(\hat{x}_2 - \hat{x}_1 + \overline{\Delta x}_1 + \overline{\Delta x}_2\right)\overline{\Delta h};$$
$$\left\| \Delta f \right\|_{L_2[c, d]} < \sqrt{\left(\hat{h} + 2\overline{\Delta h}\right)^2\left(\overline{\Delta x}_1 + \overline{\Delta x}_2\right) + \left(\hat{x}_2 - \hat{x}_1 + \overline{\Delta x}_1 + \overline{\Delta x}_2\right)\overline{\Delta h}^{\,2}};$$
$$\left\| \Delta f \right\|_{M[c, d]} < \hat{h} + 2\overline{\Delta h}.$$
Let us return to function (1) and its approximation using sequence (2) in the space of bounded functions M [ 0 ,   π ] .
Let $\Delta = f_0(x) - f_n(x) \in [0, 1]$ be the absolute error of the approximation.
Let us write down the sequence $\left\{ r_n\ \middle|\ r_n = r_n(\Delta) = \max\left\{ \left| x_2 - x_1 \right| : x_1, x_2 \in [0, \pi],\ f_n(x_1) = f_n(x_2) = 1 - \Delta \right\} \right\}$ of the lengths of the intervals on which the approximation error does not exceed $\Delta$. From the equation $f_n(x) = 1 - \Delta$ we obtain that this sequence can be represented as
$$\left\{ r_n\ \middle|\ r_n = \pi - 2\arcsin\lambda_n,\ \lambda_n = \frac{2}{\pi}\arcsin\lambda_{n-1},\ \lambda_1 = 1 - \Delta,\ n \in \mathbb{N},\ n > 1 \right\}.$$
It can be proved, similarly to the proof of Theorem 8, that the sequence $r_n(\Delta) \to r(\Delta) = \begin{cases} \pi, & \Delta \in (0, 1], \\ 0, & \Delta = 0, \end{cases}$ as $n \to \infty$, where the convergence on the interval $[0, 1]$ is pointwise but not uniform. It is important to note that the sequence $\{r_n\}$ also converges to a step function.
The graphs of the first few functions in the sequence are shown in Figure 16.
As can be seen in Figure 16, the length of the interval on which the approximation error does not exceed $\Delta$ increases sharply with increasing $n$ in the region of sufficiently small error values $\Delta$. This fact speaks of the fast convergence of the proposed method and is its positive feature.
For a quantitative assessment of the change in the length of this interval, we derive an approximate dependence for the function $\Delta r(n, \Delta) = r_n - r_{n-1}$. For this purpose, we use the relation $r_n - r_{n-1} = 2(x_{n-1} - x_n)$, where $x_n = \arcsin\left(\frac{2}{\pi} x_{n-1}\right)$, $x_1 = \arcsin(1 - \Delta)$. Then $r_n - r_{n-1} = 2\left(x_{n-1} - \arcsin\left(\frac{2}{\pi} x_{n-1}\right)\right)$. Expanding $\arcsin\left(\frac{2}{\pi} x_{n-1}\right)$ in a Maclaurin series and taking into account the sufficiently small values of $x_{n-1}$, we obtain approximately $r_n - r_{n-1} \approx \frac{2}{\pi}(\pi - 2)\, x_{n-1}$. Then $r_n - r_{n-1} \approx \left(\frac{2}{\pi}\right)^{n-1}(\pi - 2)\arcsin(1 - \Delta)$.
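The recursion for $r_n$ is easy to evaluate directly. The Python sketch below (ours; the value of $\Delta$ is arbitrary) shows how quickly the interval with error not exceeding $\Delta$ widens toward the full period:

import numpy as np

def r_sequence(delta, n_max):
    # Interval lengths r_n = pi - 2*arcsin(lambda_n),
    # lambda_1 = 1 - delta, lambda_n = (2/pi)*arcsin(lambda_{n-1}).
    lam, r = 1.0 - delta, []
    for _ in range(n_max):
        r.append(np.pi - 2.0 * np.arcsin(lam))
        lam = 2.0 / np.pi * np.arcsin(lam)
    return np.array(r)

r = r_sequence(delta=0.05, n_max=12)
print(np.round(r, 3))           # grows quickly toward pi
print(np.round(np.diff(r), 3))  # increments r_n - r_{n-1}; cf. Property 1 below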
Let us indicate some properties of the proposed approximation (2).
Property 1.
The maximum value of the difference of the interval lengths $r_n - r_{n-1}$ does not depend on $n$ and is given by the relation:
$$\max_{\Delta \in [0, 1]}\left(r_n - r_{n-1}\right) = \sqrt{\pi^2 - 4} - 2\arcsin\sqrt{1 - 4/\pi^2}, \quad n \in \mathbb{N},\ n > 1.$$
Proof. 
Based on the previously obtained relation $r_n - r_{n-1} = 2\left(x_{n-1} - \arcsin\left(\frac{2}{\pi} x_{n-1}\right)\right)$, $n \in \mathbb{N}$, $n > 1$, we find the derivative
$$\frac{d\left(r_n - r_{n-1}\right)}{d\Delta} = \frac{2^{n-1}\left(\sqrt{\pi^2 - 4x_{n-1}^2} - 2\right)}{\sqrt{\prod_{i=1}^{n-1}\left(\pi^2 - 4x_i^2\right)\left(1 - (1 - \Delta)^2\right)}}.$$
The points $x_{n-1} = x_{n-2} = \ldots = x_1 = \pi/2$ are minimum points, at which $r_n - r_{n-1} = 0$. In the case $\Delta = 1$ we also have $r_n - r_{n-1} = 0$. The points $x_{n-1} = \sqrt{\pi^2/4 - 1}$ are maximum points and are independent of $n$. Then we obtain that
$$\max_{\Delta \in [0, 1]}\left(r_n - r_{n-1}\right) = \sqrt{\pi^2 - 4} - 2\arcsin\sqrt{1 - 4/\pi^2}.$$
For reference, we point out that $\max_{\Delta \in [0, 1]}\left(r_n - r_{n-1}\right) \approx 0.661$.
Property 2.
The maximum value of the difference between the values of the functions $f_n(x) - f_{n-1}(x)$ does not depend on $n$ and is given by the relation:
$$\max_{x \in [0, \pi]}\left(f_n(x) - f_{n-1}(x)\right) = \frac{\sqrt{\pi^2 - 4} - 2\arccos(2/\pi)}{\pi}, \quad n \in \mathbb{N},\ n > 1.$$
The proof is similar to the proof of Property 1.
Moreover, for reference, we point out that $\max_{x \in [0, \pi]}\left(f_n(x) - f_{n-1}(x)\right) \approx 0.211$.
Property 2 shows that the sequence of approximating functions $f_n(x)$ (2) does not converge in the Cauchy sense in the uniform metric, that is, it is not fundamental, since $\exists\, \varepsilon > 0\ \forall n' \in \mathbb{N}\ \exists\, n, m > n'$ such that $\max_{x \in [0, \pi]}\left| f_n(x) - f_m(x) \right| > \varepsilon$. As $\varepsilon$ one can take, for example, the number 0.1, setting $m = n' + 1$, $n = n' + 2$.
The obtained relations can be used to estimate the approximation error in solving applied problems.

3.4. Generalized Functions and Their Approximation by a Sequence of Recursive Functions

Generalized functions [67] became widespread in the 20th century, when new problems in physics and mathematics led to an urgent need to expand the definition of a function.
Let Y be a linear space whose elements are functions in the sense of the usual definition.
If there is a rule according to which a certain number is assigned to each function $y \in Y$, then it is said that a functional is given on the set $Y$. We denote the functional by $I : Y \to \mathbb{R}$, or more simply $I(y)$.
A functional is called linear if the condition $I(\alpha y_1 + \beta y_2) = \alpha I(y_1) + \beta I(y_2)$, $y_1, y_2 \in Y$, $\alpha, \beta \in \mathbb{R}$, holds.
A functional is called continuous if the condition $y_n \to y$ implies $I(y_n) \to I(y)$, $y_n, y \in Y$.
We will consider functions on the set R.
A function $\varphi(x)$ is said to be compactly supported if it equals zero outside some finite interval $[a, b]$, where the boundaries of the interval depend on $\varphi(x)$. Any continuous compactly supported function is called a basic (test) function. The set of basic functions will be denoted by $C_0$.
Let the function $f(x)$ be an ordinary function in the usual sense, continuous except, perhaps, at a finite number of discontinuity points, and bounded on any finite interval.
We define a functional by the integral $I(\varphi) = \int_{-\infty}^{+\infty} f(x)\varphi(x)\,dx$, which is finite for any basic function $\varphi(x)$. A functional of this kind is called a regular functional.
Definition 12.
A linear functional $\xi$ on $C_0$ is continuous if for every convergent sequence $\{f_n\}_{n=1}^{\infty}$ of functions $f_n \in C_0$ we have $\lim_{n\to\infty}\langle \xi, f_n \rangle = \langle \xi, \lim_{n\to\infty} f_n \rangle$. We use the notation $\langle \xi, f \rangle$ instead of $\xi(f)$.
Definition 13.
A generalized function is any linear continuous functional $I(\varphi)$, defined on the set $C_0$, having the properties
  • $I(\alpha\varphi_1 + \beta\varphi_2) = \alpha I(\varphi_1) + \beta I(\varphi_2)$, $\varphi_1, \varphi_2 \in C_0$, $\alpha, \beta \in \mathbb{R}$;
  • $I(\varphi_n) \to I(\varphi)$ if $\varphi_n \to \varphi$ in $C_0$.
Not every generalized function is regular. A generalized function that cannot be represented by the integral $I(\varphi) = \int_{-\infty}^{+\infty} f(x)\varphi(x)\,dx$ is called singular. An example of a singular generalized function is the functional $I(\varphi) = \varphi(0)$. This functional is called the δ-function, or the Dirac function.
Using the proposed methods, one can also approximate singular generalized functions, for example, a δ-function.
The meaning of singular generalized functions can be understood based on their approximations, perceiving the generalized function as the limit of some approximating sequence of ordinary functions. For example, as noted, the delta function can be viewed as the limit of a sequence of step functions (Figure 8). However, the use of a sequence of step functions does not allow adequate representation of the derivatives of the delta function, which, in turn, are also generalized functions. The problem is that step functions have discontinuity points at which they are not mathematically differentiable. Therefore, to represent the derivatives of a delta function, it is necessary to use an approximating sequence of analytic functions with derivatives of any order.
The expression used for approximation in this case can be of the form [11]
$$f(x) = \cos\left(A\left(A\left(\ldots A(x)\ldots\right)\right)\right), \quad \text{where } A(x) = \frac{\pi}{2}\sin x.$$
In particular, Figure 17 shows the graph of the function
$$f(x) = 310\cos\big(\underbrace{A(A(\ldots A}_{20}(x)\ldots))\big).$$
Comparing the graphs in Figure 7, Figure 9 and Figure 17, we note that the proposed approximation methods give a much more accurate approximation of the δ-function than the Fourier series. Moreover, the accuracy of the approximation can be increased to an arbitrarily large degree by increasing the number of nested functions. The height of the approximation peak (amplitude) can be determined by the integral condition in the definition of the δ-function.
To determine the height of the approximation peak, we use the fact that the δ-function is the derivative of the Heaviside function, or the unit jump function, which is defined as
$$H(x) = \begin{cases} 1, & x > 0; \\ 0, & x < 0. \end{cases}$$
The Heaviside function can be approximated by a sequence of functions of the form $H_n(x) = 0.5\left(1 + f_n(x)\right)$, where the sequence of functions $f_n(x)$ is defined by relation (2) and is considered on the interval $[-\pi/2, \pi/2]$.
For example, Figure 18 shows graphs of three successive approximations
$$H_n(x) = 0.5\left(1 + \sin\big(\underbrace{A(A(\ldots A}_{n-1}(x)\ldots))\big)\right) \quad \text{for } n = 9, 10, 11,$$
where $A(x) = \frac{\pi}{2}\sin x$.
The thickness of the graph in Figure 18 increases as the number of the approximating dependence increases.
Finding the first derivatives of the approximations of the Heaviside function, we obtain successive approximations $\frac{dH_9(x)}{dx}$, $\frac{dH_{10}(x)}{dx}$ and $\frac{dH_{11}(x)}{dx}$ of the delta function. Their graphs are shown in Figure 19.
With a sufficiently large number of nested functions, we obtain an approximating function $\frac{dH_{18}(x)}{dx}$, the graph of which was obtained using the MathCAD computer program and is shown in Figure 17.
Differentiating the approximating functions of the considered sequence $H_n(x) = 0.5\left(1 + f_n(x)\right)$, we obtain
$$\frac{dH_n(x)}{dx} = \frac{\pi^{n-1}}{2^n}\prod_{k=1}^{n-1}\cos\left(\frac{\pi}{2} f_k(x)\right)\cdot\cos x.$$
Substituting $x = 0$ into the resulting expression for the derivative, and taking into account the evenness of the δ-function, we find the value of the peak height $A_n$ of the approximating functions $dH_n(x)/dx$:
$$A_n = \frac{\pi^{n-1}}{2^n}.$$
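This value is easy to verify numerically: differentiating $H_n(x)$ on a fine grid reproduces the analytic peak height $\pi^{n-1}/2^n$. A short Python sketch (ours):

import numpy as np

def heaviside_approx(x, n):
    # H_n(x) = 0.5*(1 + f_n(x)), f_1 = sin x, f_k = sin((pi/2) f_{k-1}).
    f = np.sin(x)
    for _ in range(n - 1):
        f = np.sin(np.pi / 2 * f)
    return 0.5 * (1.0 + f)

n = 11
x = np.linspace(-np.pi / 2, np.pi / 2, 400_001)
peak = np.gradient(heaviside_approx(x, n), x).max()        # height of the delta-like spike at x = 0
print(round(peak, 2), round(np.pi ** (n - 1) / 2 ** n, 2)) # both are about 45.73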

3.5. Approximation of Derivatives of Generalized Functions: Comparison of Approximation Methods

Since we approximated the generalized functions by analytic functions, we can differentiate these approximating functions and thereby obtain approximations of the derivatives of the generalized functions with any degree of accuracy. For example, similarly to what was done in the previous section, we can build graphs of approximations of the derivatives of the δ-function. Figure 20 shows graphs of successive approximations of the first, second and third derivatives of the δ-function.
Derivatives of higher orders can be found in the same way. The plotted graphs give a good idea of the behavior of the derivatives of the δ-function. By mentally increasing the index of the approximating function [11], one can, from the graphs (Figure 20), continue the observed tendencies of the approximations and picture the limiting positions of the sequences of functions approximating the derivatives of the δ-function. This helps to improve the understanding of generalized functions that are derivatives of the δ-function and to use them not just as an abstract mathematical apparatus, but with a conscious understanding of their structure, even if they are written in limiting form. This approach can also be used to better understand other generalized functions and their behavior.
The δ-function can also be approximated by other continuously differentiable functions, for example,
$$\delta(x, \alpha) = \frac{\alpha}{\pi\left(\alpha^2 x^2 + 1\right)},\ \alpha \to \infty; \qquad \delta(x, \alpha) = \frac{\alpha}{\sqrt{\pi}}\exp\left(-\alpha^2 x^2\right),\ \alpha \to \infty; \qquad \delta(x, \alpha) = \frac{\alpha}{\pi}\cdot\frac{\sin(\alpha x)}{\alpha x},\ \alpha \to \infty,$$
for which $\lim_{\alpha\to\infty}\delta(x, \alpha) = 0$ $(x \ne 0)$ and $\lim_{\alpha\to\infty}\int_{-\infty}^{+\infty}\delta(x, \alpha)\,dx = 1$.
The disadvantage of approximating the δ-function by the third of these functions is a large deviation from the δ-function, since this function takes not only positive but also negative values. The graphs of such a function correspond to the graph shown in Figure 10. Moreover, the sequence of negative values is not bounded from below, that is, the error can be arbitrarily large.
As for the approximation using the first two functions, they allow approximating the periodic delta function only as a sum $\sum_{k=-\infty}^{+\infty}\delta(x - 2\pi k)$, which can be inconvenient for practical use, while the approximating functions of the proposed method are periodic in nature and allow approximating the periodic delta function without any additional constructions. By a periodic delta function we mean a generalized function whose argument values at which the function is not equal to zero repeat periodically.
An example is the graph of the function
$$f(x) = \frac{\pi^{13}}{2^{14}}\cos\big(\underbrace{A(A(\ldots A}_{13}(x)\ldots))\big), \quad \text{where } A(x) = \frac{\pi}{2}\sin x,$$
shown in Figure 21.
The constructed function $f(x)$ can be used to approximate the distribution function of a discrete random variable using the relation $F(x) = P\int_{t_0}^{x} f(t)\,dt$, where $P$ is a parameter determined from the properties of the distribution function. An example of a distribution function constructed in this way is shown in Figure 22 [5,6,14].
These techniques can be used in the context of mathematical models represented by ODEs or PDEs. For example, in the case of systems with a variable structure, mathematical models are often presented in the form of several systems of differential equations in sections. In this case, problems arise in constructing a solution to equations during a cycle, periodic solutions, the need to track the transitions of the system from section to section and coordinate solutions at the boundaries of sections. Similar problems arise when solving differential equations with piecewise linear and impulse characteristics. The use of the developed approximation methods makes it possible to overcome these problems. In particular, the authors applied the developed methods in studies, the results of which are published in the following articles [6,7,8,9,13,68,69,70] and others.

4. Practical Application and Examples of Using the Developed Approximation Methods

This section of the review discusses various possibilities of using the developed methods for approximating piecewise linear and generalized functions. The examples given are characterized by specific content and a more complete reflection of physical reality. In reality, even very fast processes occur over a short, but nonzero, time. For example, there cannot be an instantaneous change in the speed of a material object with nonzero mass, since such an instantaneous change would require an infinite amount of energy.

4.1. Application of New Methods of Approximation in Problems of Structural Mechanics

The proposed approximation methods, similarly to the Fourier series, are universal. They can be applied in a wide variety of fields of science and technology. One of the many possible examples of the use of these methods in practice is their application in problems of resistance of materials in the broad sense of this term and in problems of structural mechanics in particular.
When calculating building structures such as trusses, beams, etc., it is necessary to take into account the action of distributed and concentrated loads [71]. Loads, as a rule, have piecewise linear characteristics, and Fourier series are widely used for their approximation.
Consider, for example, a beam on which a uniformly distributed load with an intensity p acts on a section of length 2c (Figure 23). Other parameters are directly given in Figure 23.
By transforming the argument, without loss of generality, we can always consider the load function $f = f(x)$ only on the interval $[-\pi, \pi]$. In this case, the expansion of the load in a Fourier series has the form
$$f(x) = \frac{pc}{\pi} + \frac{2p}{\pi}\sum_{k=1}^{\infty}\frac{1}{k}\sin(kc)\cos\left(k(x - d)\right).$$
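A partial sum of this series is easy to evaluate. The Python sketch below (ours; the parameter values are illustrative) shows the typical wavy behavior of such an expansion on the loaded section and the overshoot at its edges:

import numpy as np

def load_fourier(x, p, c, d, kmax):
    # Partial Fourier sum for a uniform load of intensity p acting on [d - c, d + c],
    # considered on [-pi, pi] and continued with period 2*pi.
    k = np.arange(1, kmax + 1)
    series = (np.sin(k * c) / k) * np.cos(np.outer(x, k) - k * d)
    return p * c / np.pi + 2 * p / np.pi * series.sum(axis=1)

x = np.linspace(-np.pi, np.pi, 2001)
approx = load_fourier(x, p=4.0, c=np.pi / 6, d=np.pi / 2, kmax=17)
# Oscillates around p on the loaded section and around 0 elsewhere, with overshoot near the edges.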
Figure 24 shows some other types of loads and their representations in the form of Fourier series.
Similarly, if necessary, you can write down the corresponding Fourier series for other symmetric and asymmetric types of loading of building structures.
The decomposition of loads (acting forces) in Fourier series will have all the disadvantages noted in Section 2.1. Therefore, to eliminate these disadvantages, one can use the proposed approximation methods.
For the load shown in Figure 24a, for c = π / 6 ,   d = π / 2 ,   p = 4   the Fourier series expansion will have the form
$$f(x) = \frac{4}{3} + \frac{16}{\pi}\sum_{k=1}^{\infty}\frac{1}{k}\sin\frac{k\pi}{6}\cos\frac{k\pi}{6}\cos kx.$$
The approximating function of the proposed method is given by the expression
$$f(x) = 2 - 2\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\sin\left(\sin\left(0.5 + \cos 2x\right)\right)\right)\right).$$
Figure 25 shows the graphs of the original function and its approximating functions. Here, the graph of the original function is highlighted with a thickened stepped line. The graph of the approximating function, constructed using the Fourier series for k = 1, 2, …, 17, is shown by a thin line. The dotted line corresponds to the graph of the approximating function constructed by the proposed method for four nestings.
From Figure 25 we see that, despite the fact that for the expansion in the Fourier series we used seventeen harmonic terms (k = 17), and in the approximation by the proposed method we used only four compositions, the last approximating function approximates the original function much better. Moreover, one could graphically show that a further, arbitrarily large increase in the number of harmonic terms in the Fourier series does not save the situation. The reason for this lies in the aforementioned Gibbs effect.
The advantages of approximating the load using the proposed methods are clearly manifested for concentrated actions on a beam. For example, for the case shown in Figure 26 under the action of two equal in magnitude, but oppositely directed forces applied at symmetrically located points, the expansion in a Fourier series has the form
$$f(x) = \frac{2P}{\pi}\sum_{k=1}^{\infty}\sin(kd)\sin(kx).$$
For $d = \pi/2$, $P = 10$, we approximate this loading scheme by the finite sum of the Fourier series $f(x) = \frac{2P}{\pi}\sum_{k=1}^{20}\sin(kd)\sin(kx)$ and the function
$$y(x) = \cos\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi}{2}\cos x\right)\right)\right)\right)\right)\right)\cdot 40x.$$
Figure 27 shows the graphs of the obtained approximating functions. The thin line corresponds to the approximation using the Fourier series, and the thickened line to the approximation by the proposed method. As can be seen, the results of the approximation clearly speak in favor of the proposed method. The proposed approach makes it possible to model the load with an analytical function for which derivatives of any order exist. This function is a single analytical expression over the entire domain of definition and is not composed of separate pieces on different sections. Such an approach can describe a given design load scheme with any degree of accuracy.

4.2. Application of New Approximation Methods for Modeling Diffusion Processes in Semiconductor Materials

Based on the developed methods for approximating piecewise linear and generalized functions, modified models of diffusion processes in semiconductor materials were created [70,72,73,74,75,76]. These models make it possible to provide a more accurate description of diffusion processes in comparison with the known models.
In the modified models, the possibilities of using a new method of continuous approximation of step functions based on the use of trigonometric expressions in the form of recursive functions are considered. Calculations are performed for the classical model of diffusion of minority charge carriers generated by a wide electron beam in a two-layer semiconductor material.
To quantitatively describe the phenomenon of diffusion of nonequilibrium minority charge carriers generated in a semiconductor by an external energy action, the following two approaches are usually used:
(1)
a model of collective motion of minority charge carriers [77,78], according to which diffusion of nonequilibrium minority charge carriers from any microvolume of a semiconductor is influenced by other electrons or holes from other microregions of the material. Mathematically, this is expressed in the fact that the differential diffusion equation as a function of generation of minority charge carriers (usually written in the right-hand side of the differential equation) includes a function that describes the dependence on the coordinates of the density of minority charge carriers generated per unit time in the target. This model is successfully used to quantitatively describe the diffusion processes of minority charge carriers generated by a wide electron beam in homogeneous semiconductors, for which the right-hand side of the differential equation is a continuous function of coordinates. The use of a wide electron beam makes it possible to neglect the edge effects and to solve the one-dimensional problem of heat and mass transfer;
(2)
the model of independent sources, according to which the diffusion of nonequilibrium minority charge carriers from any microvolume of the semiconductor is not influenced by other electrons or holes from other microregions of the material. Mathematically, this is expressed in the fact that first the diffusion equation is solved for each of the point sources of minority charge carriers, after which, by integrating over the volume occupied by the sources of minority charge carriers, their concentration in the semiconductor is found as a result of their diffusion. The idea of this approach is borrowed from classical work [79]. This model was previously used to quantitatively describe the processes of one-dimensional diffusion of minority charge carriers generated by a wide electron beam in inhomogeneous and multilayer planar structures, for which the distribution of electrophysical parameters of materials over depth has break points of the first kind [80,81].
A modification of the first model was proposed, allowing it to be used to simulate the diffusion of minority charge carriers in a two-layer material. The possibility of using this model to solve such a problem appears if, instead of piecewise constant coefficients (electrophysical parameters) of the differential equation of diffusion of minority charge carriers we use their new approximations based on trigonometric expressions in the form of recursive functions [5]. Note that the approximating functions are continuous and analytical, and therefore, at the boundary of the layers, they correspond to a greater extent than step functions to the dependence of the values of real electrophysical parameters on the coordinate [82].
Within the framework of the considered mathematical model, in the case of one-dimensional diffusion in a finite semiconductor, the depth distribution of the concentration of minority charge carriers is found as a solution of the differential equation [68,70]
$$D(z)\frac{d^2\Delta p(z)}{dz^2} - \frac{\Delta p(z)}{\tau(z)} = -\rho(z)$$
with boundary conditions
$$D_1\left.\frac{d\Delta p(z)}{dz}\right|_{z=0} = \nu_{s1}\,\Delta p(0), \qquad \Delta p(l) = 0.$$
In the modified models in the equation, instead of piecewise-constant coefficients (electrophysical parameters), their new approximations are used, based on the use of trigonometric expressions in the form of recursive functions [5]:
$$\left\{ f_n(z)\ \middle|\ f_n(z) = \frac{H}{2}\left(1 + \sin\varphi_n(z)\right),\ \varphi_n(z) = \frac{\pi}{2}\sin\varphi_{n-1}(z),\ \varphi_1(z) = \frac{\pi}{2} f_1(z),\ n \in \mathbb{N},\ n > 1 \right\}.$$
As the initial function, a function of the form
$$f_1(z) = \exp\left(1 - (az + b)^2\right) - 1$$
is taken.
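As an illustration of how a piecewise-constant electrophysical parameter can be replaced by such an analytic approximation, the Python sketch below (ours; the numerical values are illustrative and are not the parameters of the GaAs structure from [68,70]) builds a smoothed coefficient profile for a two-layer structure with an interface at depth z0:

import numpy as np

def smooth_coeff(z, z0, v_film, v_substrate, n):
    # Analytic replacement for a coefficient equal to v_film on [0, z0) and
    # v_substrate beyond z0. The initial function has its zeros at -z0 and z0,
    # so only the transition at z = z0 falls inside the physical domain z >= 0.
    a = 2.0 / (-z0 - z0)                  # a = 2/(z1 - z2) with z1 = -z0, z2 = z0
    b = 0.0                               # b = (z1 + z2)/(z2 - z1) = 0
    phi = np.pi / 2 * (np.exp(1.0 - (a * z + b) ** 2) - 1.0)
    for _ in range(n - 1):
        phi = np.pi / 2 * np.sin(phi)
    step = 0.5 * (1.0 + np.sin(phi))      # about 1 for z < z0 and about 0 for z > z0
    return v_substrate + (v_film - v_substrate) * step

z = np.linspace(0.0, 5.0, 1001)           # depth, illustrative units
D = smooth_coeff(z, z0=2.0, v_film=8.0, v_substrate=2.0, n=11)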
Figure 28 shows the results of calculations carried out using the mathematical package Matlab (MathWorks, Inc., Natick, MA, USA) for the parameters characteristic of the semiconductor structure “epitaxial GaAs film—single-crystal GaAs substrate”.
The following estimates were obtained for the relative error between the exact analytical solution of the problem and the numerical one for n = 5:
$$\Delta = \frac{\max_{0 \le i \le n}\left|\delta p(z_i) - \delta p_i\right|}{\max_{0 \le i \le n}\left|\delta p(z_i)\right|}\cdot 100\% = 9.66\%$$
and for n = 11: Δ = 0.37 % .
Thus, for n = 5, the influence of the approximations on the simulation result is visible, and for n = 11, the error in the results is rather small, which indicates the convergence of the approximating procedure used.
The authors of the modified model note [68,70] that the described model allows, with a relatively small number of recursive functions (up to 11), to estimate the concentration of minority charge carriers generated by a wide electron beam in a semiconductor target with an accuracy sufficient for practical use. The model subsequently makes it possible to relatively easily take into account the features of real semiconductor structures (the number and nature of the layers, the space charge region, possibly the energy distribution of electrons that occurs during the interaction of the primary beam with the target, etc.), which makes it promising for quantitative descriptions of the processes of one-dimensional diffusion of minority charge carriers in inhomogeneous and multilayer planar structures.
Note that modified models have been developed for various other examples describing diffusion processes in semiconductor technology [69,70,72,73,74,75,76].

5. Conclusions

As the review has shown, the developed methods for approximating periodic and non-periodic piecewise-linear and generalized functions are undoubtedly promising and have a number of advantages. These methods are characterized by fast convergence and low approximation errors. Similarly to the Fourier series, these methods can be based on the use of well-studied trigonometric functions that have good implementation in applied computer programs. While retaining the positive qualities of the Fourier series in this respect, the new methods are devoid of their drawbacks and can be widely used in solving applied problems. The developed methods are characterized by the complete absence of negative consequences of the Gibbs effect. There is also no wavelike character of the approximation on straight sections of the original piecewise-linear function, even with a small number of nested functions used for approximation.
The proved theorems and properties concerning the convergence and error of the developed approximation methods have confirmed all of the positive aspects of these methods.
The developed methods are illustrated by a large number of theoretical and practical examples taken from a wide variety of areas, so we can confidently speak about the universality of these methods. Despite the wide variety of examples given, these examples by no means exhaust all possible areas of application of new methods. The developed methods can find wide application in the field of signal transmission, the theory of automatic control and regulation systems, mathematical models of technical systems of variable structure, systems with discontinuous characteristics, systems with distributed and concentrated loads and influences, the theory of quantum and mathematical physics, and many other areas. These methods make it possible to find analytical functions for approximating singular generalized functions, and therefore, using differential calculus methods, construct approximations for the derivatives of generalized functions, thereby helping to achieve a better understanding of the meaning of generalized functions when applied to real applied problems. In other words, the developed methods also perform an epistemological function.
Numerical tests carried out on the basis of various practical applications have convincingly shown the correctness of the theoretical studies and the assumptions made.
Note also that the proposed approximating functions are continuous and analytic and correspond to real processes even better than step functions do, since in reality even jump-like processes occur over short, but nonzero, time intervals. For example, an instantaneous change in the speed of a material object would require infinite energy, which is impossible to realize in practice. A concentrated impact in reality does not occur at a point, but is a distributed impact over some small neighborhood of this point. These realities are fully consistent with the approximating functions obtained using the considered methods based on recursive sequences. In addition, we note that widespread approximating functions based, for example, on splines are not analytic. By smoothing abrupt changes in a function, the proposed approximation methods bring mathematical models closer to reality, contributing to a deeper understanding of the laws of the world around us.

Author Contributions

Conceptualization, S.A.; Methodology, S.A.; Validation, S.A. and K.O.; Formal Analysis, S.A.; Investigation, S.A.; Resources, S.A. and A.A.; Data Collection, S.A.; Writing-Original Draft Preparation, S.A.; Writing-Review & Editing, A.A.; Visualization, S.A.; Supervision, S.A.; Project Administration, S.A., K.O. and A.A.; Funding Acquisition, S.A. and A.A. All of the authors contributed significantly to the completion of this review, conceiving and designing the review, writing and improving the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

There are no data applicable in this study.

Acknowledgments

The authors thank South Ural State University (SUSU) for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. DeVore, R.A.; Lorentz, G.G. Constructive Approximation; Springer-Verlag: Berlin, Germany, 1993. [Google Scholar]
  2. Lorentz, G.G.; von Golitschek, M.; Makovoz, Y. Constructive Approximation: Advanced Problems; Springer: Berlin, Germany, 1996. [Google Scholar]
  3. Mastroianni, G.; Milovanovi, G.V. Interpolation Processes. Basic Theory and Applications; Springer-Verlag: Berlin, Germany, 2008. [Google Scholar]
  4. Von Neumann, J.; Taub, A.H. (Eds.) Theory of Games, Astrophysics, HydroDynamics and Meteorology. In Method in the Physical Sciences; Collected Works Volume VI; Pergamon Press: Oxford, UK, 1961. [Google Scholar]
  5. Alyukov, S.V. Approximation of step functions in problems of mathematical modeling. Math. Modeling J. Russ. Acad. Sci. 2011, 23, 75–88. [Google Scholar] [CrossRef]
  6. Alyukov, S.V. Modeling of dynamic processes with piecewise-linear characteristics. Proceedings of universities. Appl. Nonlinear Dyn. 2011, 19, 27–34. [Google Scholar]
  7. Alyukov, S.V. Improvement of mathematical models of inertial torque transformers. Bull. Mech. Eng. 2010, 7, 3–10. [Google Scholar]
  8. Alyukov, S.V. Improved Models of Inertial Torque Transformers. Russ. Eng. Res. 2010, 30, 4–15. [Google Scholar] [CrossRef]
  9. Alyukov, S.V. Relay type freewheel mechanisms. Heavy Mech. Eng. 2010, 12, 34–37. [Google Scholar]
  10. Dubrovskaya, O.A.; Dubrovsky, S.A.; Dubrovsky, A.F.; Alyukov, S.V. On the analytical representation of the elastic-dissipative characteristics of the car suspension. Bull. Sib. State Automob. Acad. 2010, 16, 23–26. [Google Scholar]
  11. Alyukov, S.V. Approximation of generalized functions and their derivatives. Questions of atomic science and technology. Ser. Math. Model Phys. Process. 2013, 2, 57–62. [Google Scholar]
  12. Osintsev, K.V.; Alyukov, S.V. Mathematical modeling of discontinuous gas-dynamic flows using a new approximation method. Materials Science. Energy 2020, 26, 41–55. [Google Scholar]
  13. Alyukov, S.V. Dynamics of Inertial Continuously Variable Automatic Transmissions; INFRA-M: Moscow, Russia, 2020; p. 251. [Google Scholar]
  14. Alabugin, A.; Aliukov, S.; Khudyakova, T. Models and Methods of Formation of the Foresight-Controlling Mechanism. Sustainability 2022, 14, 9899. [Google Scholar] [CrossRef]
  15. Alyukov, S.V. Scientific Basis of Inertial Continuous Transmissions of Increased Loading Capacity. Ph.D. Thesis, South Ural State University, Chelyabinsk, Russia, 2014; p. 369. [Google Scholar]
  16. Aliukov, S.; Buleca, J. Comparative Multidimensional Analysis of the Current State of European Economies Based on the Complex of Macroeconomic Indicators. Mathematics 2022, 10, 847. [Google Scholar] [CrossRef]
  17. Alyukov, S.V. Approximation of electrocardiograms with help of new mathematical methods. Comput. Math. Model. 2018, 29, 59–70. [Google Scholar] [CrossRef]
  18. Nikitin, A.V.; Shishlakov, V.F. Parametric Synthesis of Nonlinear Automatic Control Systems; SPb, SPbGUAP: St Petersburg, Russia, 2003; p. 358. [Google Scholar]
  19. Meltzer, D. On the expressibility of piecewise linear continuous functions as the difference of two piecewise linear convex functions. Math. Program. Study 1986, 29, 118–134. [Google Scholar]
  20. Baskakov, S.I. Radio Circuits and Signals: Textbook for Universities, 3rd ed.; Moscow Higher School: Moscow, Russia, 2000; p. 462. [Google Scholar]
  21. Rahman, M. Applications of Fourier Transforms to Generalized Functions; Dalhousie University: Halifax, NS, Canada, 2011; 192p. [Google Scholar]
  22. Balakrishnan, V. Generalized Functions. In Mathematical Physics; Springer: Cham, Switzerland, 2020; pp. 29–41. [Google Scholar] [CrossRef]
  23. Vladimirov, V.S. Methods of the Theory of Generalized Functions. In Analytical Methods and Special Functions; Taylor&Francis: Abingdon, UK, 2002; p. 328. [Google Scholar]
  24. Grosser, M.; Kunzinger, M.; Oberguggenberger, M.; Steinbauer, R. Geometric Theory of Generalized Functions with Applications to General Relativity; Kluwer Academic Publishers: Amsterdam, The Netherlands, 2001; 505p. [Google Scholar]
  25. Popov, E.P. Theory of Nonlinear Systems of Automatic Regulation and Control: Textbook Allowance, 2nd ed.; Science, C., Ed.; Springer: Berlin/Heidelberg, Germany, 1988; p. 256. [Google Scholar]
  26. Achieser, N.I. Theory of Approximation; Dover Publications: Dover, UK, 2004; 320p. [Google Scholar]
  27. Varga, R. Functional Analysis and Approximation Theory in Numerical Analysis; Publishing House: Mir, Russia, 1974; p. 124. [Google Scholar]
  28. Lyusternik, L.A.; Sobolev, V.I. A Short Course in Functional Analysis; Moscow Higher School: Moscow, Russia, 1982; p. 271. [Google Scholar]
  29. Von Neumann, J. Selected Works on Functional Analysis; Nauka: Moscow, Russia, 1987; Volume 1–2. [Google Scholar]
  30. Banach, S. Sur les opérations dans les ensembles abstraits et leur applications aux équations intégrales. Fundam. Math. 1922, 3, 133–181. [Google Scholar] [CrossRef]
  31. Kirk, W.A. Fixed points of asymptotic contractions. J. Math. Anal. Appl. 2003, 277, 645–650. [Google Scholar] [CrossRef]
  32. Jleli, M.; Samet, B. A new generalization of the Banach contraction principle. J. Inequal Appl. 2014, 2014, 38. [Google Scholar] [CrossRef]
  33. Romaguera, S. On the Correlation between Banach Contraction Principle and Caristi’s Fixed Point Theorem in b-Metric Spaces. Mathematics 2022, 10, 136. [Google Scholar] [CrossRef]
  34. Choudhury, B.S.; Chakraborty, P. Local generalizations of Banach’s contraction mapping principle. J. Anal. 2022, 30, 1131–1142. [Google Scholar] [CrossRef]
  35. Kokkotas, K. Interpolation, Extrapolation & Polynomial Approximation; Eberhard Karls University of Tübingen: Tübingen, Germany, 2019; p. 38. [Google Scholar]
  36. Lasser, R.; Mache, D.; Obermaier, J. On approximation methods by using orthogonal polynomial expansions. In Advanced Problems in Constructive Approximation; Springer: Basel, Switzerland, 2003; pp. 95–107. [Google Scholar]
  37. Ditzian, Z.; Jiang, D. Approximation of Functions by Polynomials in C[-L, 1]. Can. J. Math. 1992, 44, 924–940. [Google Scholar] [CrossRef]
  38. Khuromonov, K.M.; Shabozov, M.S. Jackson–Stechkin type inequalities between the best joint polynomials approximation and a smoothness characteristic in Bergman space. Vladikavkaz. Mat. Zh. 2022, 24, 109–120. [Google Scholar]
  39. Trigub, R.M. On the Approximation of Functions by Polynomials and Entire Functions of Exponential Type. Ukr. Math J. 2019, 71, 333–341. [Google Scholar] [CrossRef]
  40. Gasquet, C.; Witomski, P. Fourier Analysis and Applications: Filtering, Numerical Computation, Wavelets; Springer: New York, NY, USA, 1999; Volume 30. [Google Scholar]
  41. Levin, D. Reconstruction of Piecewise Smooth Multivariate Functions from Fourier Data. Axioms 2020, 9, 88. [Google Scholar] [CrossRef]
  42. Fischer, J.V. Four Particular Cases of the Fourier Transform. Mathematics 2018, 6, 335. [Google Scholar] [CrossRef]
  43. Rahman, M. Applications of Fourier Transforms to Generalized Functions; WIT Press: Southampton, UK, 2011. [Google Scholar]
  44. Brandwood, D. Fourier Transforms in Radar and Signal Processing; Artech House, Inc.: Norwood, MA, USA, 2003. [Google Scholar]
  45. Howard, R.M. Dual Taylor Series, Spline Based Function and Integral Approximation and Applications. Math. Comput. Appl. 2019, 24, 35. [Google Scholar] [CrossRef]
  46. Ezhov, N.; Neitzel, F.; Petrovic, S. Spline Approximation—Part 1: Basic Methodology. J. Appl. Geod. 2018, 12, 139–155. [Google Scholar] [CrossRef]
  47. Ezhov, N.; Neitzel, F.; Petrovic, S. Spline Approximation, Part 2: From Polynomials in the Monomial Basis to B-splines—A Derivation. Mathematics 2021, 9, 2198. [Google Scholar] [CrossRef]
  48. Schumaker, L. Spline Functions: Basic Theory, 3rd ed.; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  49. Abdallah, A.K. An Approximation Method of Spline Functions. Am. J. Math. Stat. 2015, 5, 311–315. [Google Scholar] [CrossRef]
  50. Usmonov, M.T. Least square method. Sci. Educ. Sci. J. 2021, 2, 54–65. [Google Scholar]
  51. Hirica, I.; Udriste, C.; Pripoae, G.; Tevy, I. Least Squares Approximation of Flatness on Riemannian Manifolds. Mathematics 2020, 8, 1757. [Google Scholar] [CrossRef]
  52. Kumari, K.; Yadav, S. Linear regression analysis study. J. Pract. Cardiovasc. Sci. 2018, 4, 33–36. [Google Scholar] [CrossRef]
  53. Khasyanov, R. Hermite interpolation on a simplex. Izv. Saratov Univ. Math. Mech. Inform. 2018, 18, 316–327. [Google Scholar] [CrossRef]
  54. Berriochoa, E.; Cachafeiro, A.; García Rábade, H.; García-Amor, J.M. Mechanical Models for Hermite Interpolation on the Unit Circle. Mathematics 2021, 9, 1043. [Google Scholar] [CrossRef]
  55. Ibrahimoglu, B.A. Lebesgue functions and Lebesgue constants in polynomial interpolation. J. Inequal. Appl. 2016, 2016, 93. [Google Scholar] [CrossRef]
  56. Smith, S.J. Lebesgue constants in polynomial interpolation. Ann. Math. Inform. 2006, 33, 109–123. [Google Scholar]
  57. Dencker, P.; Erb, W.; Kolomoitsev, Y.; Lomako, T. Lebesgue constants for polyhedral sets and polynomial interpolation on Lissajous-Chebyshev nodes. J. Complex 2017, 43, 93. [Google Scholar] [CrossRef]
  58. Stenger, F.; El-Sharkawy, H.; Baumann, G. The Lebesgue Constant for Sinc Approximations; Birkhäuser: Cham, Switzerland, 2014; pp. 319–335. [Google Scholar] [CrossRef]
  59. Dautov, R.; Timerbaev, M. Numerical methods. In Approximation of Functions; Kazan Federal University: Kazan, Russia, 2021; p. 123. [Google Scholar]
  60. Helmberg, G. The Gibbs phenomenon for Fourier interpolation. J. Approx. Theory 1994, 78, 41–63. [Google Scholar] [CrossRef]
  61. Zhuk, V.V.; Natanson, G.I. Trigonometric Fourier Series and Elements of Approximation Theory; Publishing House Leningrad University: Saint Petersburg, Russia, 1983; p. 188. [Google Scholar]
  62. Gjestland, F.J. Distributions, Schwartz Space and Fractional Sobolev Spaces; Norwegian University of Science and Technology: Trondheim, Norway, 2013. [Google Scholar]
  63. Montillet, J.P. Sobolev Spaces, Schwartz Spaces, and a Definition of the Electromagneticand Gravitational Coupling. J. Mod. Phys. 2017, 8, 1700–1722. [Google Scholar] [CrossRef]
  64. Bhattacharyya, P.K. Distributions: Generalized Functions with Applications in Sobolev Spaces; De Gruyter: Berlin, Germany; Boston, MA, USA, 2012. [Google Scholar] [CrossRef]
  65. Becnel, J.; Sengupta, A. The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis. Mathematics 2015, 3, 527–562. [Google Scholar] [CrossRef]
  66. Kotelnikov, V.A. On the bandwidth of “ether” and wire in telecommunications. Adv. Phys. Sci. J. 2006, 7, 762–770. [Google Scholar]
  67. Gel’fand, I.M.; Vilenkin, N.Y. Generalized Functions, Vols I–VI; Academic Press: Cambridge, MA, USA, 1964. (In Russian) [Google Scholar]
  68. Alabugin, A.; Aliukov, S.; Osintsev, K. Approximation Methods for Analysis and Formation of Mechanisms for Regulating Heat and Mass Transfer Processes in Heat Equipment Systems. Int. J. Heat Technol. 2020, 38, 45–58. [Google Scholar] [CrossRef]
  69. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M.; Filippov, M.N.; Platoshin, E.V. On a Modified Model of the Collective Motion of Minority Charge Carriers in a Two-Layer Semiconductor. In Proceedings of the XXIII International Scientific and Technical Conference on Photoelectronics and Night Vision Devices, Yundola, Russia, 13–15 November 2014; pp. 427–430. [Google Scholar]
  70. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M. On modification of the model of diffusion of non-basic charge carriers in semiconductor materials based on the use of recursive trigonometric functions, and estimation of the stability of the power of the solution. Surf. X-ray Synchrotron Neutron Res. 2014, 9, 72. [Google Scholar]
  71. Kiselev, V.A. Construction Mechanics: Special Course. Dynamics and Stability of Structures. In Textbook for Universities, 3rd ed.; Stroyizdat Publisher: Moscow, Russia, 1980; p. 616. [Google Scholar]
  72. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M. Modified model of minority charge carrier diffusion in semiconductor materials. Adv. Appl. Phys. 2013, 1, 544–547. [Google Scholar]
  73. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M. On the modification of a model of minority charge-carrier diffusion in semiconductor materials based on the use of recursive trigonometric functions and the estimation of the stability of solutions for the modified model. J. Surf. Investig. X-ray Synchrotron Neutron Tech. 2014, 8, 922–925. [Google Scholar] [CrossRef]
  74. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M.; Filippov, M.N.; Platoshin, E.V. On the possibility of using trigonometric expressions in the form of recursive functions for solving the diffusion equation with discontinuous coefficients. Appl. Phys. 2015, 1, 5–10. [Google Scholar]
  75. Seregina, E.V.; Stepovich, M.A.; Makarenkov, A.M.; Filippov, M.N. On the possibility of using recursive trigonometric functions for calculating the distribution of nonequilibrium minority charge carriers in a two-layer semiconductor. Surf. X-ray Synchrotron Neutron Res. 2015, 9, 70. [Google Scholar]
  76. Seregina, E.V.; Makarenkov, A.M.; Stepovich, M.A.; Filippov, M.N. On the possibility of using recursive trigonometric functions to calculate the distribution of nonequilibrium minority charge carriers in a two-layer semiconductor material. J. Surf. Investig. X-ray Synchrotron Neutron Tech. 2015, 9, 929–933. [Google Scholar] [CrossRef]
  77. Wittry, D.B.; Kyser, D.F. Measurement of Diffusion Lengths in Direct-Gap Semiconductors by Electron-Beam Excitation. J. Appl. Phys. 1967, 38, 375. [Google Scholar] [CrossRef]
  78. Rao-Sahib, T.S.; Wittry, D.B. Measurement of Diffusion Lengths in p-Type Gallium Arsenide by Electron Beam Excitation. J. Appl. Phys. 1969, 40, 3745. [Google Scholar] [CrossRef]
  79. Van Roosbroeck, W. Injected Current Carrier Transport in a Semi-Infinite Semiconductor and the Determination of Lifetimes and Surface Recombination Velocities. J. Appl. Phys. 1955, 26, 380. [Google Scholar] [CrossRef]
  80. Stepovich, M.A.; Snopova, M.G.; Khokhlov, A.G. Use of a model of independent sources for calculating the distribution of minority carriers generated in a two-layer semiconductor by an electron beam. Appl. Phys. 2004, 3, 61. [Google Scholar]
  81. Burylova, I.V.; Petrov, V.I.; Snopova, M.G.; Stepovich, M.A. Mathematical simulation of the distribution of minority charge carriers generated in a multi-layer semiconductor structure by a wide electron beam. Phys. Technol. Semicond. 2007, 41, 458. [Google Scholar]
  82. Baek, D.H.; Kim, S.B.; Schroder, D.K. Epitaxial silicon minority carrier diffusion length by photoluminescence. J. Appl. Phys. 2008, 104, 054503. [Google Scholar] [CrossRef]
Figure 1. Graphs of the original and approximating functions.
Figure 2. Graphs of the original function and its approximations.
Figure 3. Parabolic spline plot.
Figure 4. An example of approximating a function by a linear spline.
Figure 5. Approximation errors using Fourier series expansion.
Figure 6. Manifestation of the Gibbs effect.
Figure 7. An intuitive graph of the δ-function.
Figure 8. Graph of the step function.
Figure 9. Plot of a partial sum of the Fourier series.
Figure 10. An intuitive graph of the approximation of the δ-function by a Fourier series.
Figure 11. Graphs of the limit functions in the classical Nyquist-Shannon-Kotelnikov sampling theorem.
Figure 12. Example of the speckle effect (https://bigenc.ru/physics/text/4246597, accessed on 17 July 2022).
Figure 13. Graphs of the original function and five of its successive approximations.
Figure 14. Graphs of limiting functions.
Figure 15. Graphs of estimates of the relative error.
Figure 16. The lengths of the intervals with an approximation error not exceeding Δ.
Figure 17. Graph of an approximation of the δ-function.
Figure 18. Graphs of approximations of the Heaviside function.
Figure 19. Graphs of approximations of the δ-function.
Figure 20. Graphs of approximations of the derivatives of the δ-function.
Figure 21. Graph of the function that approximates a periodic Δ-function.
Figure 22. An example of approximation of the distribution function.
Figure 23. Example of a uniformly distributed load.
Figure 24. Variants of loads and their approximation by Fourier series.
Figure 25. Graphs of the initial load and approximating functions.
Figure 26. Scheme of action of concentrated forces on a beam.
Figure 27. Graphs of approximating functions.
Figure 28. The concentration of minority charge carriers obtained using n = 5 recursive trigonometric functions (crosses) and using n = 11 functions (solid line), together with the concentration calculated exactly analytically using the piecewise constant coefficients (electro-physical parameters) of the differential equation of minority-carrier diffusion (dashed line).
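Figures 13, 18 and 28 refer to the successive (recursive) trigonometric approximations of step functions discussed in the review. Purely as an illustration, the sketch below assumes the recursion takes the iterated-sine form s_1(x) = sin x, s_{n+1}(x) = sin((π/2)·s_n(x)), which converges pointwise to the square wave sign(sin x); the exact construction, normalization and n values used for the figures may differ, and the plotting code and variable names are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def recursive_sine(x, n):
    """n-th iterated-sine approximation: s_1 = sin x, s_{k+1} = sin(pi/2 * s_k)."""
    s = np.sin(x)
    for _ in range(n - 1):
        s = np.sin(np.pi / 2 * s)
    return s

x = np.linspace(-2 * np.pi, 2 * np.pi, 2001)
square_wave = np.sign(np.sin(x))  # the piecewise constant (step) target function

# Plot a few successive approximations against the target square wave
for n in (1, 2, 5, 11):
    plt.plot(x, recursive_sine(x, n), label=f"n = {n}")
plt.plot(x, square_wave, "k--", label="sign(sin x)")
plt.legend()
plt.title("Iterated-sine approximations of a square wave (illustrative sketch)")
plt.show()
```

Such a sketch only reproduces the qualitative behaviour suggested by the captions of Figures 13 and 18 (approximations sharpening towards a step as n grows); it is not the implementation used to generate the figures.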
Table 1. Initial data.

x_i: 17.28, 17.05, 18.30, 18.80, 19.20, 18.50
y_i: 537, 534, 550, 555, 560, 552
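Table 1 lists six (x_i, y_i) pairs without stating how they are processed in the corresponding example. Assuming they are used as input to a least-squares fit, a minimal sketch might look as follows; the straight-line model and the use of numpy.polyfit are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

# Data from Table 1 ("Initial data")
x = np.array([17.28, 17.05, 18.30, 18.80, 19.20, 18.50])
y = np.array([537.0, 534.0, 550.0, 555.0, 560.0, 552.0])

# Fit y ~ a*x + b by ordinary least squares and report the residual error
a, b = np.polyfit(x, y, deg=1)
residuals = y - (a * x + b)
print(f"a = {a:.3f}, b = {b:.3f}")
print(f"max |residual| = {np.abs(residuals).max():.3f}")
```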
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
