Article

On a Boundary Value Problem for the Biharmonic Equation with Multiple Involutions

by Batirkhan Turmetov 1,*,†, Valery Karachik 2,† and Moldir Muratbekova 1,†
1 Department of Mathematics, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkistan 161200, Kazakhstan
2 Department of Mathematical Analysis, South Ural State University (NRU), 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(17), 2020; https://doi.org/10.3390/math9172020
Submission received: 29 July 2021 / Revised: 16 August 2021 / Accepted: 18 August 2021 / Published: 24 August 2021
(This article belongs to the Special Issue Functional Differential Equations and Applications 2020)

Abstract:
A nonlocal analogue of the biharmonic operator with involution-type transformations was considered. For the corresponding biharmonic equation with involution, we investigated the solvability of boundary value problems with a fractional-order boundary operator having a derivative of the Hadamard-type. First, transformations of the involution type were considered. The properties of the matrices of these transformations were investigated. As applications of the considered transformations, the questions about the solvability of a boundary value problem for a nonlocal biharmonic equation were studied. Modified Hadamard derivatives were considered as the boundary operator. The considered problems covered the Dirichlet and Neumann-type boundary conditions. Theorems on the existence and uniqueness of solutions to the studied problems were proven.

1. Introduction

The concept of a nonlocal operator and the related concept of a nonlocal differential equation appeared in the theory of differential equations not long ago. For example, in [1], the authors considered equations containing fractional derivatives of the desired function and equations with deviating arguments, in other words, equations that include an unknown function and its derivatives evaluated, generally speaking, at different values of the argument. Such equations are called nonlocal differential equations. Among nonlocal differential equations, a special place is occupied by equations in which the deviation of the arguments has an involutive character. A mapping $S$ is usually called an involution if $S^2(x) = S(S(x)) = x$. The theory of equations with involutively transformed arguments and their applications were described in detail in the monographs [2,3,4]. By now, for differential equations with various types of involution, the well-posedness of boundary and initial-boundary value problems, the qualitative properties of solutions, and spectral questions have been studied quite well [5,6,7,8,9,10,11,12,13,14,15,16]. In addition, for classical equations, one can study nonlocal boundary value problems of the Bitsadze–Samarskii type, in which the values of the sought function $u(x)$ on the boundary of the domain are related to the values of $u(Sx)$ [17,18,19]. Note that the questions of the solvability of the main problems for the nonlocal Poisson equation were studied in [20,21]. Note also that boundary value problems with fractional-order boundary operators for elliptic equations were studied in [22,23,24,25,26,27,28,29,30].
Applications of boundary value problems with boundary operators of fractional order for elliptic equations were given in [31,32,33,34].
Let us formulate the problem that is studied in this work. Let $\Omega$ be the unit ball in $\mathbb{R}^l$, $l \ge 2$, and let $\partial\Omega$ be the unit sphere. Let also $u(x)$ be a smooth function in the domain $\Omega$, $\mu \ge 0$, $\alpha \in (m-1, m]$, $m = 1, 2, \dots$, $r = |x| = \sqrt{x_1^2 + \dots + x_l^2}$, $\theta = x/r$, $x \in \Omega$, and let $\delta = r\frac{d}{dr}$ be the Dirac operator, where $r\frac{d}{dr} = \sum_{j=1}^{l} x_j \frac{\partial}{\partial x_j}$. Consider the modified Hadamard integrodifferential operators ([35], p. 116):
$$J_\mu^\alpha[u](x) = \begin{cases} u(x), & \alpha = 0, \\ \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_0^1 \Big(\ln\frac{1}{\tau}\Big)^{\alpha-1} \tau^{\mu-1} u(\tau x)\, d\tau, & \alpha > 0, \end{cases}$$
$$D_\mu^\alpha[u](x) = r^{-\mu} J^{m-\alpha}\big[\delta^m\big[\tau^\mu \cdot u\big]\big](x), \quad m-1 < \alpha \le m.$$
Let $S_1, \dots, S_n$ be a set of real symmetric commuting matrices, $S_iS_j = S_jS_i$, such that $S_i^2 = I$. Note that, since any transform $S_i$ is isometric, if $x \in \Omega$ or $x \in \partial\Omega$, then $S_ix \in \Omega$ or $S_ix \in \partial\Omega$, respectively. For example, $S_i$ can be the matrix of the linear mapping $S_ix = (x_1, \dots, x_{i-1}, -x_i, x_{i+1}, \dots, x_l)$.
Let $a_0, a_1, a_2, a_3, \dots, a_{2^n-1}$ be a set of real numbers. If we write the summation index $i$ in the binary number system, $i = (i_n \dots i_1)_2$, where $i_k = 0, 1$ for $k = 1, \dots, n$, then the coefficients can be written as $a_{(0\dots00)_2}, a_{(0\dots01)_2}, a_{(0\dots10)_2}, a_{(0\dots11)_2}, \dots, a_{(1\dots11)_2}$.
Let us introduce the following nonlocal analogue of the biharmonic operator:
$$L_n u(x) \equiv \sum_{i=0}^{2^n-1} a_i \Delta^2 u(S_n^{i_n} \cdots S_1^{i_1} x).$$
Consider the following boundary value problem in the domain Ω .
Problem H.
Let $\mu \ge 0$, $0 \le \alpha \le 1$. Find a function $u(x) \in C^4(\Omega) \cap C(\overline{\Omega})$ such that the functions $D_\mu^\alpha[u](x)$, $D_\mu^{\alpha+1}[u](x)$ are continuous in $\overline{\Omega}$ and satisfy the conditions:
$$L_n u(x) = f(x), \quad x \in \Omega, \tag{1}$$
$$D_\mu^\alpha[u](x)\big|_{\partial\Omega} = g_0(x), \quad x \in \partial\Omega, \tag{2}$$
$$D_\mu^{\alpha+1}[u](x)\big|_{\partial\Omega} = g_1(x), \quad x \in \partial\Omega. \tag{3}$$
Remark 1.
Note that in the case $a_0 = 1$, $a_k = 0$, $k = 1, 2, \dots, 2^n-1$, Equation (1) coincides with the classical inhomogeneous biharmonic equation.
Remark 2.
Note also that for $x \in \partial\Omega$, the following equalities hold:
$$\delta[u](x)\big|_{\partial\Omega} = r\frac{du(x)}{dr}\Big|_{\partial\Omega} = \frac{\partial u(x)}{\partial\nu}\Big|_{\partial\Omega},$$
$$\frac{\partial^2 u(x)}{\partial\nu^2}\Big|_{\partial\Omega} = r\frac{d}{dr}\Big(r\frac{d}{dr}-1\Big)u(x)\Big|_{\partial\Omega} = r^2\frac{d^2u(x)}{dr^2}\Big|_{\partial\Omega},$$
where $\nu$ is the outer normal to the sphere $\partial\Omega$. Then, for $\alpha = 1$, we obtain:
$$D_\mu^1[u](x) = r^{-\mu}J^0\big[\delta[\tau^\mu\cdot u]\big](x) = r^{-\mu}\, r\frac{d}{dr}\big[r^\mu u(x)\big] = r\frac{d}{dr}u(x) + \mu u(x),$$
$$D_\mu^2[u](x) = r^{-\mu}\Big(r\frac{d}{dr}\Big)^2\big[r^\mu u(x)\big] = r^{1-\mu}\frac{d}{dr}\Big(r^{\mu+1}\frac{d}{dr}u(x) + \mu r^\mu u(x)\Big) = r^2\frac{d^2u(x)}{dr^2} + (\mu+1)r\frac{d}{dr}u(x) + \mu r\frac{d}{dr}u(x) + \mu^2 u(x) = r^2\frac{d^2u(x)}{dr^2} + (2\mu+1)r\frac{d}{dr}u(x) + \mu^2 u(x).$$
Thus, if $\mu = 0$, then in the case $\alpha = 0$ in the problem H, we obtain the Dirichlet boundary conditions:
$$u(x) = g_0(x), \quad \frac{\partial u(x)}{\partial\nu} = g_1(x), \quad x\in\partial\Omega,$$
and in the case $\alpha = 1$, boundary conditions of the Neumann type [36,37]:
$$\frac{\partial u(x)}{\partial\nu} = g_0(x), \quad \frac{\partial^2 u(x)}{\partial\nu^2} + \frac{\partial u(x)}{\partial\nu} = g_1(x), \quad x\in\partial\Omega.$$
Moreover, in the case $\alpha = 1$ and $\mu > 0$, the boundary conditions of the problem H are written as:
$$\frac{\partial u(x)}{\partial\nu} + \mu u(x) = g_0(x), \quad \frac{\partial^2 u(x)}{\partial\nu^2} + (2\mu+1)\frac{\partial u(x)}{\partial\nu} + \mu^2 u(x) = g_1(x), \quad x\in\partial\Omega,$$
i.e., we obtain a generalization of the third boundary value problem [38].
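The two expansions of $D_\mu^1$ and $D_\mu^2$ above can be confirmed symbolically for an arbitrary radial profile $u(r)$. The following is a small sketch (not part of the paper), assuming SymPy is available; `delta` below is the operator $r\,d/dr$:

```python
# Symbolic check (sketch) of the Remark 2 expansions
# D_mu^1[u] = r^{-mu} (r d/dr)[r^mu u]  and  D_mu^2[u] = r^{-mu} (r d/dr)^2 [r^mu u].
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
u = sp.Function('u')(r)

def delta(expr):
    """Euler operator r d/dr applied to a radial expression."""
    return r * sp.diff(expr, r)

D1 = sp.expand(r**(-mu) * delta(r**mu * u))
D2 = sp.expand(r**(-mu) * delta(delta(r**mu * u)))

D1_expected = r * sp.diff(u, r) + mu * u
D2_expected = r**2 * sp.diff(u, r, 2) + (2*mu + 1) * r * sp.diff(u, r) + mu**2 * u

print(sp.simplify(D1 - D1_expected))   # prints 0
print(sp.simplify(D2 - D2_expected))   # prints 0
```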
The article is organized as follows. In Section 1, the main problem is formulated. In Section 2, the properties of matrices of involutive transformations are considered. Section 3 contains well-known statements about the properties of Hadamard integrodifferential operators. In Section 4, the solvability of the Dirichlet problem (case α = 0 ) is studied, and in Section 5, the necessary and sufficient conditions for the solvability of Neumann-type boundary value problems (case μ = 0 , α = 1 ) are found. In Section 6, for the general case, the main boundary value problem is studied.

2. Auxiliary Statements

To study the problem (1)–(3), we need some auxiliary statements. Let us introduce the function:
$$v(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2} a_i u(S_n^{i_n}\cdots S_1^{i_1}x), \tag{4}$$
where the summation is taken in ascending order with respect to the index $i$. From this equality, it is easy to conclude that the functions $v(S_n^{j_n}\cdots S_1^{j_1}x)$, where $j = 0,\dots,2^n-1$, can be linearly expressed in terms of the functions $u(S_n^{i_n}\cdots S_1^{i_1}x)$. If we consider the following vectors of order $2^n$:
$$U(x) = \big(u(x), \dots, u(S_n^{i_n}\cdots S_1^{i_1}x), \dots, u(S_n^{1}\cdots S_1^{1}x)\big)^T, \quad V(x) = \big(v(x), \dots, v(S_n^{j_n}\cdots S_1^{j_1}x), \dots, v(S_n^{1}\cdots S_1^{1}x)\big)^T,$$
then this dependence can be expressed in the matrix form:
$$V(x) = A_n U(x), \tag{5}$$
where $A_n = (a_{i,j})_{i,j=0,\dots,2^n-1}$ is a matrix of order $2^n\times 2^n$.
Theorem 1.
The matrix $A_n$ from the equality (5) can be represented in the form:
$$A_n = (a_{i,j})_{i,j=0,\dots,2^n-1} = (a_{i\oplus j})_{i,j=0,\dots,2^n-1}, \tag{6}$$
where the operation $\oplus$ in the subscript of the matrix coefficients is understood in the following sense: $i\oplus j \equiv (i)_2\oplus(j)_2 = \big((i_n+j_n \bmod 2)\dots(i_1+j_1 \bmod 2)\big)_2$, where $(i)_2 = (i_n\dots i_1)_2$ is the representation of the index in the binary number system. A linear combination of matrices of the form of (6) is a matrix of the form of (6).
Proof. 
Let $n = 1$; then we have:
$$A_1 = \begin{pmatrix} a_{0\oplus0} & a_{0\oplus1} \\ a_{1\oplus0} & a_{1\oplus1} \end{pmatrix} = \begin{pmatrix} a_0 & a_1 \\ a_1 & a_0 \end{pmatrix},$$
and if $n = 2$, then we obtain:
$$A_2 = \begin{pmatrix} a_{(00)_2\oplus(00)_2} & a_{(00)_2\oplus(01)_2} & a_{(00)_2\oplus(10)_2} & a_{(00)_2\oplus(11)_2} \\ a_{(01)_2\oplus(00)_2} & a_{(01)_2\oplus(01)_2} & a_{(01)_2\oplus(10)_2} & a_{(01)_2\oplus(11)_2} \\ a_{(10)_2\oplus(00)_2} & a_{(10)_2\oplus(01)_2} & a_{(10)_2\oplus(10)_2} & a_{(10)_2\oplus(11)_2} \\ a_{(11)_2\oplus(00)_2} & a_{(11)_2\oplus(01)_2} & a_{(11)_2\oplus(10)_2} & a_{(11)_2\oplus(11)_2} \end{pmatrix} = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix}.$$
Consider the function $v(S_n^{i_n}\cdots S_1^{i_1}x)$, whose coefficients at $u(S_n^{j_n}\cdots S_1^{j_1}x)$ make up the $i=(i_n\dots i_1)_2$-th row of the matrix $A_n$:
$$v(S_n^{i_n}\cdots S_1^{i_1}x) = \sum_{j=(j_n\dots j_1)_2=0}^{(1\dots1)_2} a_{(j_n\dots j_1)_2} u(S_n^{j_n}\cdots S_1^{j_1}S_n^{i_n}\cdots S_1^{i_1}x) = \sum_{j=(j_n\dots j_1)_2=0}^{(1\dots1)_2} a_{(j_n\dots j_1)_2} u(S_n^{j_n+i_n \bmod 2}\cdots S_1^{j_1+i_1 \bmod 2}x). \tag{7}$$
Here, the properties $S_j^2x = x$ and $S_jS_ix = S_iS_jx$ of the matrices $S_1,\dots,S_n$ are taken into account. Let us replace the index: $i\oplus j = l$. Then, $l\oplus i = i\oplus j\oplus i = j$, and the correspondence $j\leftrightarrow l$ is one-to-one. The replacement $j\to l$ of the index changes only the order of summation in the sum (7). For example, if $i = 1$, then the sequence $j: 0,1,2,3,4,5,\dots$ goes to $l = 1\oplus j: 1,0,3,2,5,4,\dots$. After replacing the index, we obtain:
$$v(S_n^{i_n}\cdots S_1^{i_1}x) = \sum_{l=0}^{(1\dots1)_2} a_{(i_n+l_n \bmod 2\,\dots\, i_1+l_1 \bmod 2)_2} u(S_n^{l_n}\cdots S_1^{l_1}x),$$
whence $a_{i,l} = a_{(i_n+l_n \bmod 2\,\dots\,i_1+l_1 \bmod 2)_2} = a_{i\oplus l}$, which proves (6).
It is clear that if $\alpha$, $\beta$ are constants, then:
$$\alpha(a_{i\oplus j})_{i,j=0,\dots,2^n-1} + \beta(b_{i\oplus j})_{i,j=0,\dots,2^n-1} = (\alpha a_{i\oplus j} + \beta b_{i\oplus j})_{i,j=0,\dots,2^n-1}.$$
The theorem is proven. □
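Since $i\oplus j$ is just the bitwise XOR of the binary index representations, a matrix of the form of (6) can be generated directly from its first row. The following is a small illustrative sketch (not part of the paper), assuming NumPy; the helper name `build_An` is ours:

```python
# Sketch: build the matrix A_n = (a_{i XOR j}) of Theorem 1 from its first row.
import numpy as np

def build_An(a):
    """a is the first row (a_0, ..., a_{2^n - 1}); returns the 2^n x 2^n matrix (6)."""
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]   # entry (i, j) is a_{i XOR j}

# n = 2 reproduces the 4 x 4 pattern written out in the proof
print(build_An([1.0, 2.0, 3.0, 4.0]))
```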
Corollary 1.
The matrix $A_n$ is uniquely determined by its first row $(a_0, a_1, \dots, a_{2^n-1})$.
Indeed, the $i$-th row of the matrix $A_n$ can be written through its first row in the form $(a_{i\oplus0}, a_{i\oplus1}, \dots, a_{i\oplus(2^n-1)})$.
We denote this property of the matrix $A_n$ by the equality $A_n \equiv A_n(a_0,\dots,a_{2^n-1})$.
Corollary 2.
The matrix $A_n$ has the symmetry property:
$$(a_{i,j})_{i,j=0,\dots,2^n-1} = (a_{j,i})_{i,j=0,\dots,2^n-1} \tag{8}$$
and can be written as:
$$A_n = \begin{pmatrix} A_{n-1}(a_0,\dots,a_{2^{n-1}-1}) & A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}) \\ A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}) & A_{n-1}(a_0,\dots,a_{2^{n-1}-1}) \end{pmatrix} \tag{9}$$
or, more generally, in the form of a block matrix $A_{n-m}$ consisting of matrices $A_m$:
$$A_n = A_{n-m}\big(A_m^{(0\dots0)_2},\dots,A_m^{(k_n\dots k_{m+1})_2},\dots,A_m^{(1\dots1)_2}\big), \tag{10}$$
where $A_m^{(k_n\dots k_{m+1})_2} \equiv A_m\big(a_{(k_n\dots k_{m+1}0\dots0)_2},\dots,a_{(k_n\dots k_{m+1}1\dots1)_2}\big)$ is a matrix of the form of (6) of order $2^m$.
Proof. 
Indeed, since the binary operation $i\oplus j$ is commutative:
$$i\oplus j = (i_n+j_n \bmod 2\,\dots\,i_1+j_1 \bmod 2)_2 = (j_n+i_n \bmod 2\,\dots\,j_1+i_1 \bmod 2)_2 = j\oplus i,$$
the property (8) holds:
$$(a_{i,j})_{i,j=0,\dots,2^n-1} = (a_{i\oplus j})_{i,j=0,\dots,2^n-1} = (a_{j\oplus i})_{i,j=0,\dots,2^n-1} = (a_{j,i})_{i,j=0,\dots,2^n-1}.$$
Further, it is easy to see the validity of the equalities:
$$\big(a_{(0i_{n-1}\dots i_1)_2\oplus(0j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = \big(a_{(1i_{n-1}\dots i_1)_2\oplus(1j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} \tag{11}$$
and:
$$\big(a_{(0i_{n-1}\dots i_1)_2\oplus(1j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = \big(a_{(1i_{n-1}\dots i_1)_2\oplus(0j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1}, \tag{12}$$
from which the property (9) follows. Indeed, if we divide the matrix $A_n$ into four equal square blocks and consider the lower right block, then its indices lie in the range $(10\dots0)_2 \le i,j \le (11\dots1)_2$, which means that this block, by virtue of (11), has the form:
$$\big(a_{(1i_{n-1}\dots i_1)_2\oplus(1j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = \big(a_{(0i_{n-1}\dots i_1)_2\oplus(0j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = A_{n-1}(a_0,\dots,a_{2^{n-1}-1}),$$
i.e., the diagonal blocks of the matrix $A_n$ are of the form $A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$.
Similarly, the top right block of $A_n$ has indices in the range $(00\dots0)_2\le i\le(01\dots1)_2$, $(10\dots0)_2\le j\le(11\dots1)_2$, which means this block has the form:
$$\big(a_{(0i_{n-1}\dots i_1)_2\oplus(1j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}).$$
By the equality (12), the lower left block of $A_n$ has the form:
$$\big(a_{(1i_{n-1}\dots i_1)_2\oplus(0j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = \big(a_{(0i_{n-1}\dots i_1)_2\oplus(1j_{n-1}\dots j_1)_2}\big)_{i,j=0,\dots,2^{n-1}-1} = A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}).$$
Equality (9) is proven. Now, consider a block matrix of the form:
$$A_{n-m}\big(A_m^{(0\dots0)_2},\dots,A_m^{(k_n\dots k_{m+1})_2},\dots,A_m^{(1\dots1)_2}\big) = \big(A_m^{(i_n\dots i_{m+1})_2\oplus(j_n\dots j_{m+1})_2}\big)_{i,j=0,\dots,2^{n-m}-1}.$$
The elements of its block with the number $(k_n\dots k_{m+1})_2$ can be written as:
$$A_m^{(k_n\dots k_{m+1})_2} \equiv A_m\big(a_{(k_n\dots k_{m+1}0\dots0)_2},\dots,a_{(k_n\dots k_{m+1}1\dots1)_2}\big) = \big(a_{(k_n\dots k_{m+1}\,(i_m\dots i_1)_2\oplus(j_m\dots j_1)_2)_2}\big)_{i,j=0,\dots,2^m-1}.$$
Consider the element $a_{i,j}$ of the block matrix $A_{n-m}\big(A_m^{(0\dots0)_2},\dots,A_m^{(1\dots1)_2}\big)$. It is located in the block with coordinates $(i_n\dots i_{m+1})_2$, $(j_n\dots j_{m+1})_2$, i.e., in the block $A_m^{(i_n\dots i_{m+1})_2\oplus(j_n\dots j_{m+1})_2}$, and, therefore, has the form:
$$a_{i,j} = a_{\big((i_n\dots i_{m+1})_2\oplus(j_n\dots j_{m+1})_2\;(i_m\dots i_1)_2\oplus(j_m\dots j_1)_2\big)_2} = a_{i\oplus j}.$$
This coincides with Formula (6). The corollary is proven. □
Theorem 2.
Multiplication of matrices of the form of (6) is commutative. The product of matrices of the form of (6) is again a matrix of the form of (6).
Proof. 
Let us prove this property by induction on $n$. For $n = 1$, it is obviously true that:
$$A_1B_1 = \begin{pmatrix} a_0 & a_1 \\ a_1 & a_0 \end{pmatrix}\begin{pmatrix} b_0 & b_1 \\ b_1 & b_0 \end{pmatrix} = \begin{pmatrix} a_0b_0+a_1b_1 & a_0b_1+a_1b_0 \\ a_1b_0+a_0b_1 & a_1b_1+a_0b_0 \end{pmatrix} = \begin{pmatrix} b_0a_0+b_1a_1 & b_0a_1+b_1a_0 \\ b_1a_0+b_0a_1 & b_1a_1+b_0a_0 \end{pmatrix} = B_1A_1.$$
Assuming that the multiplication of the matrices $A_{n-1}$ and $B_{n-1}$ of order $n-1$ is commutative, we prove that the multiplication of the matrices $A_n$ and $B_n$ of order $n$ is also commutative. Denote $A_{n-1} = A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$, $\hat{A}_{n-1} = A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1})$. By the property (9):
$$A_nB_n = \begin{pmatrix} A_{n-1} & \hat{A}_{n-1} \\ \hat{A}_{n-1} & A_{n-1} \end{pmatrix}\begin{pmatrix} B_{n-1} & \hat{B}_{n-1} \\ \hat{B}_{n-1} & B_{n-1} \end{pmatrix} = \begin{pmatrix} A_{n-1}B_{n-1}+\hat{A}_{n-1}\hat{B}_{n-1} & A_{n-1}\hat{B}_{n-1}+\hat{A}_{n-1}B_{n-1} \\ \hat{A}_{n-1}B_{n-1}+A_{n-1}\hat{B}_{n-1} & \hat{A}_{n-1}\hat{B}_{n-1}+A_{n-1}B_{n-1} \end{pmatrix} = \begin{pmatrix} B_{n-1}A_{n-1}+\hat{B}_{n-1}\hat{A}_{n-1} & B_{n-1}\hat{A}_{n-1}+\hat{B}_{n-1}A_{n-1} \\ \hat{B}_{n-1}A_{n-1}+B_{n-1}\hat{A}_{n-1} & \hat{B}_{n-1}\hat{A}_{n-1}+B_{n-1}A_{n-1} \end{pmatrix} = B_nA_n.$$
The induction step is proven, and therefore, matrices of the form $A_n(a_0,\dots,a_{2^n-1})$ commute. It is not hard to see that:
$$AB = (a_{i\oplus j})_{i,j=0,\dots,2^n-1}(b_{i\oplus j})_{i,j=0,\dots,2^n-1} = \Big(\sum_{k=0}^{2^n-1}a_{i\oplus k}b_{k\oplus j}\Big)_{i,j=0,\dots,2^n-1}.$$
In the sum from the formula above, let us change the index $k\to l$, as in Theorem 1, according to the equality $i\oplus k = l$. Then, $l\oplus i = i\oplus k\oplus i = k$, and this means that the correspondence $k\leftrightarrow l$ is one-to-one. The replacement of the index $k\to l$ changes only the order of summation in the sum. By virtue of the associativity of the operation $\oplus$, we have:
$$AB = \Big(\sum_{l=0}^{2^n-1}a_l b_{(l\oplus i)\oplus j}\Big)_{i,j=0,\dots,2^n-1} = \Big(\sum_{l=0}^{2^n-1}a_l b_{l\oplus(i\oplus j)}\Big)_{i,j=0,\dots,2^n-1}.$$
The first row of the matrix $AB$ is:
$$\big(AB\big)_{i=0} = \Big(\sum_{k=0}^{2^n-1}a_k b_{k\oplus j}\Big)_{j=0,\dots,2^n-1},$$
and hence, the matrix $C$ of the form of (6), constructed from the first row of $AB$, coincides with $AB$:
$$C \equiv \Big(\sum_{k=0}^{2^n-1}a_k b_{k\oplus(i\oplus j)}\Big)_{i,j=0,\dots,2^n-1} = AB.$$
The theorem is proven. □
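Both claims of Theorem 2 are easy to probe numerically for random coefficients. A minimal sketch (not part of the paper), assuming NumPy; `build_An` is the hypothetical generator of matrices of the form of (6) used in the previous sketch:

```python
# Sketch: matrices of the form (6) commute, and their product is again of the form (6).
import numpy as np

def build_An(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)      # n = 3
A, B = build_An(a), build_An(b)

print(np.allclose(A @ B, B @ A))                  # commutativity
print(np.allclose(A @ B, build_An((A @ B)[0])))   # product is generated by its first row
```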
Theorem 3.
The eigenvectors of the matrix $A_n(a_0,\dots,a_{2^n-1})$ can be chosen in the form:
$$a_n^k = \big(a_{n-1}^k, \pm a_{n-1}^k\big)^T, \quad k = 0,\dots,2^{n-1}-1,$$
where $a_{n-1}^k$ is an eigenvector of the matrix $A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$, $k = 0,\dots,2^{n-1}-1$; for $n = 1$, we have $a_1^0 = (1,1)^T$, $a_1^1 = (1,-1)^T$. The eigenvectors of the matrix $A_n$ are orthogonal. The eigenvalues of the matrix $A_n$ are of the form:
$$\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat{\mu}_{n-1}^k, \quad k = 0,\dots,2^{n-1}-1,$$
where $\mu_{n-1}^k$ and $\hat{\mu}_{n-1}^k$ are the eigenvalues of the matrices $A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$ and $\hat{A}_{n-1} = A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1})$, respectively, corresponding to the eigenvector $a_{n-1}^k$; $\mu_1^0 = a_0+a_1$, $\mu_1^1 = a_0-a_1$.
Proof. 
Let us carry out the proof by induction on $n$, showing along the way that the eigenvectors of the matrix $A_n(a_0,\dots,a_{2^n-1})$ can be chosen independently of the numbers $a_0,\dots,a_{2^n-1}$. For $n = 1$, it is obvious that the eigenvectors of the matrix $A_1(a_0,a_1)$ can be chosen in the form $a_1^+ = (1,1)^T$, $a_1^- = (1,-1)^T$, and the corresponding eigenvalues are $\mu_1^+ = a_0+a_1$, $\mu_1^- = a_0-a_1$. For the matrix:
$$A_2 = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix} = \begin{pmatrix} A_1(a_0,a_1) & A_1(a_2,a_3) \\ A_1(a_2,a_3) & A_1(a_0,a_1) \end{pmatrix}$$
the eigenvectors have the form:
$$a_2^{(+,+)} = (a_1^+, a_1^+)^T,\quad a_2^{(-,+)} = (a_1^-, a_1^-)^T,\quad a_2^{(+,-)} = (a_1^+, -a_1^+)^T,\quad a_2^{(-,-)} = (a_1^-, -a_1^-)^T,$$
or, briefly, $a_2^{(\pm_1,\pm_2)} = (a_1^{\pm_1}, \pm_2 a_1^{\pm_1})^T$. The $+$ and $-$ signs in the expressions $\pm_1$ and $\pm_2$ are taken independently of each other. Indeed, the equalities:
$$A_2 a_2^{(\pm_1,\pm_2)} = \begin{pmatrix} A_1(a_0,a_1) & A_1(a_2,a_3) \\ A_1(a_2,a_3) & A_1(a_0,a_1) \end{pmatrix}\begin{pmatrix} a_1^{\pm_1} \\ \pm_2 a_1^{\pm_1}\end{pmatrix} = \begin{pmatrix} A_1(a_0,a_1)a_1^{\pm_1} \pm_2 A_1(a_2,a_3)a_1^{\pm_1} \\ A_1(a_2,a_3)a_1^{\pm_1} \pm_2 A_1(a_0,a_1)a_1^{\pm_1}\end{pmatrix} = \begin{pmatrix} (a_0\pm_1 a_1)a_1^{\pm_1} \pm_2 (a_2\pm_1 a_3)a_1^{\pm_1} \\ (a_2\pm_1 a_3)a_1^{\pm_1} \pm_2 (a_0\pm_1 a_1)a_1^{\pm_1}\end{pmatrix} = \big(a_0\pm_1 a_1 \pm_2(a_2\pm_1 a_3)\big)\begin{pmatrix} a_1^{\pm_1} \\ \pm_2 a_1^{\pm_1}\end{pmatrix} = \big(a_0\pm_1 a_1 \pm_2(a_2\pm_1 a_3)\big)\, a_2^{(\pm_1,\pm_2)}$$
are true, and hence, $(a_1^{\pm_1}, \pm_2 a_1^{\pm_1})^T$ are the eigenvectors for the four different combinations of the signs $\pm_1$ and $\pm_2$. It is seen that the eigenvectors $a_2^{(\pm_1,\pm_2)} = (1, \pm_1 1, \pm_2 1, \pm_2\pm_1 1)^T$ of the matrix $A_2(a_0,a_1,a_2,a_3)$ do not depend on the numbers $a_k$.
Further, assuming that the eigenvectors $a_{n-1}^0,\dots,a_{n-1}^{2^{n-1}-1}$ of the matrix $A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$ do not depend on its coefficients, we prove that this property also holds for the matrix $A_n(a_0,\dots,a_{2^n-1})$.
Let $\mu_{n-1}^0,\dots,\mu_{n-1}^{2^{n-1}-1}$ be the eigenvalues corresponding to the above eigenvectors of the matrix $A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$, independent of its coefficients; then vectors of the form $a_n^k = (a_{n-1}^k, \pm a_{n-1}^k)^T$, where $k = 0,\dots,2^{n-1}-1$, are the eigenvectors of the matrix $A_n(a_0,\dots,a_{2^n-1})$. Indeed, we have:
$$A_n a_n^k = A_n\begin{pmatrix} a_{n-1}^k \\ \pm a_{n-1}^k\end{pmatrix} = \begin{pmatrix} A_{n-1}(a_0,\dots,a_{2^{n-1}-1}) & A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}) \\ A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1}) & A_{n-1}(a_0,\dots,a_{2^{n-1}-1})\end{pmatrix}\begin{pmatrix} a_{n-1}^k \\ \pm a_{n-1}^k\end{pmatrix} = \begin{pmatrix}\mu_{n-1}^k a_{n-1}^k \pm \hat{\mu}_{n-1}^k a_{n-1}^k \\ \hat{\mu}_{n-1}^k a_{n-1}^k \pm \mu_{n-1}^k a_{n-1}^k\end{pmatrix} = \big(\mu_{n-1}^k \pm \hat{\mu}_{n-1}^k\big)\begin{pmatrix} a_{n-1}^k \\ \pm a_{n-1}^k\end{pmatrix} = \big(\mu_{n-1}^k \pm \hat{\mu}_{n-1}^k\big)a_n^k,$$
where $\hat{\mu}_{n-1}^k$ is the eigenvalue of the matrix $A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1})$ corresponding to the eigenvector $a_{n-1}^k$. Obviously, there are $2^n$ vectors of the form $a_n^k = (a_{n-1}^k, \pm a_{n-1}^k)^T$. Therefore, the eigenvalues of the matrix $A_n(a_0,\dots,a_{2^n-1})$ are $\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat{\mu}_{n-1}^k$.
Orthogonality: it is obvious that the eigenvectors $a_1^+ = (1,1)^T$, $a_1^- = (1,-1)^T$ of the matrix $A_1(a_0,a_1)$ are orthogonal. If the eigenvectors $a_{n-1}^k$, $k = 0,\dots,2^{n-1}-1$, of the matrix $A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1})$ are chosen orthogonal, then the eigenvectors $a_n^k = (a_{n-1}^k, \pm a_{n-1}^k)^T$ of the matrix $A_n(a_0,\dots,a_{2^n-1})$ are also orthogonal:
$$a_n^{k_1}\cdot a_n^{k_2} = \big(a_{n-1}^{k_1}, \pm a_{n-1}^{k_1}\big)^T\cdot\big(a_{n-1}^{k_2}, \pm a_{n-1}^{k_2}\big)^T = a_{n-1}^{k_1}\cdot a_{n-1}^{k_2} \pm a_{n-1}^{k_1}\cdot a_{n-1}^{k_2} = 0, \quad k_1\ne k_2,$$
and $\big(a_{n-1}^k, a_{n-1}^k\big)^T\cdot\big(a_{n-1}^k, -a_{n-1}^k\big)^T = 0$. The theorem is proven. □
Corollary 3.
Let $k = (k_n\dots k_1)_2$, $k_i = 0,1$; then the eigenvector of the matrix $A_n$ numbered by $k$ can be written in the form:
$$a_n^k = \big(1, (-1)^{k_1}, (-1)^{k_2}, (-1)^{k_2+k_1}, (-1)^{k_3}, (-1)^{k_3+k_1}, (-1)^{k_3+k_2}, (-1)^{k_3+k_2+k_1}, (-1)^{k_4}, \dots, (-1)^{k_n+\dots+k_1}\big)^T = \big((-1)^{k\cdot m}\big)_{m=0,\dots,2^n-1}, \tag{13}$$
where $k\cdot i \equiv (k_n\dots k_1)_2\cdot(i_n\dots i_1)_2 = k_ni_n+\dots+k_1i_1$ is a “scalar” product of the indices $(k)_2$ and $(i)_2$.
The eigenvalue corresponding to the eigenvector $a_n^{(k_n\dots k_1)_2}$ can be written in a similar form:
$$\mu_n^k \equiv \mu_n^{(k_n\dots k_1)_2} = \sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i = \sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\dots+k_1i_1}a_{(i_n\dots i_1)_2}. \tag{14}$$
Proof. 
Let us prove (13). For $n = 1$, we have $a_1^+ = a_1^{(0)_2} = ((-1)^{0\cdot0}, (-1)^{0\cdot1})^T$, $a_1^- = a_1^{(1)_2} = ((-1)^{1\cdot0}, (-1)^{1\cdot1})^T$, and (13) is true. If Formula (13) is true for the vector $a_{n-1}^{(k_{n-1}\dots k_1)_2}$, then by Theorem 3, we have:
$$\big(a_{n-1}^{(k_{n-1}\dots k_1)_2}, \pm a_{n-1}^{(k_{n-1}\dots k_1)_2}\big)^T = \big(a_{n-1}^{(k_{n-1}\dots k_1)_2}, (-1)^{k_n}a_{n-1}^{(k_{n-1}\dots k_1)_2}\big)^T = \Big(\big((-1)^{(k_nk_{n-1}\dots k_1)_2\cdot(0m_{n-1}\dots m_1)_2}\big)_{m=0,\dots,2^{n-1}-1}, \big((-1)^{(k_nk_{n-1}\dots k_1)_2\cdot(1m_{n-1}\dots m_1)_2}\big)_{m=0,\dots,2^{n-1}-1}\Big)^T = \Big(\big((-1)^{k\cdot m}\big)_{m=0,\dots,2^{n-1}-1}, \big((-1)^{k\cdot m}\big)_{m=2^{n-1},\dots,2^n-1}\Big)^T = \big((-1)^{k\cdot m}\big)_{m=0,\dots,2^n-1} = a_n^{(k_nk_{n-1}\dots k_1)_2},$$
and hence, the formula (13) is also true for the vector $a_n^{(k_nk_{n-1}\dots k_1)_2} = a_n^k$.
Let us prove (14). For $n = 1$, we have:
$$\mu_1^{k_1} = a_0 + (-1)^{k_1}a_1 = (-1)^{0}a_{(0)_2} + (-1)^{k_1\cdot1}a_{(1)_2},$$
where $k_1 = 0,1$. Assume that the formula (14) is valid for $n-1$, and prove its validity for $n$. By Theorem 3, changing the notation $\pm = (-1)^{k_n}$, we write:
$$\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat{\mu}_{n-1}^k = \sum_{i=0}^{2^{n-1}-1}(-1)^{k_{n-1}i_{n-1}+\dots+k_1i_1}a_{(i_{n-1}\dots i_1)_2} + \sum_{i=0}^{2^{n-1}-1}(-1)^{k_n\cdot1+k_{n-1}i_{n-1}+\dots+k_1i_1}a_{(1i_{n-1}\dots i_1)_2} = \sum_{i=0}^{2^{n-1}-1}(-1)^{k_ni_n+k_{n-1}i_{n-1}+\dots+k_1i_1}a_{(i_ni_{n-1}\dots i_1)_2} + \sum_{i=2^{n-1}}^{2^n-1}(-1)^{k_ni_n+k_{n-1}i_{n-1}+\dots+k_1i_1}a_{(i_ni_{n-1}\dots i_1)_2} = \sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\dots+k_1i_1}a_{(i_ni_{n-1}\dots i_1)_2},$$
which proves (14). The corollary is proven. □
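Formulas (13) and (14) can be checked directly against matrix–vector multiplication. A small sketch (not part of the paper), assuming NumPy; the helpers `build_An` and `bit_dot` are ours, the latter computing the “scalar” product $k\cdot m$ as the popcount of the bitwise AND:

```python
# Sketch: verify the eigenvectors (13) and eigenvalues (14) for random coefficients.
import numpy as np

def build_An(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]

def bit_dot(k, m):
    """k.m = k_n*m_n + ... + k_1*m_1 for the binary digits of k and m."""
    return bin(k & m).count('1')

rng = np.random.default_rng(1)
N = 8                                    # N = 2^n with n = 3
a = rng.standard_normal(N)
A = build_An(a)
for k in range(N):
    vec = np.array([(-1.0) ** bit_dot(k, m) for m in range(N)])    # eigenvector (13)
    lam = sum((-1.0) ** bit_dot(k, i) * a[i] for i in range(N))    # eigenvalue (14)
    assert np.allclose(A @ vec, lam * vec)
print("Corollary 3 confirmed for N =", N)
```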
For convenience of presentation, let us denote the operation of taking the adjoint matrix of the matrix $A_n(a_0,\dots,a_{2^n-1})$ by $\operatorname{adj}(A_n(a_0,\dots,a_{2^n-1})) = \overline{A_n}(a_0,\dots,a_{2^n-1})$. Then, by the definition of the adjoint matrix:
$$A_n(a_0,\dots,a_{2^n-1})\,\overline{A_n}(a_0,\dots,a_{2^n-1}) = \det A_n\, I_{2^n}. \tag{15}$$
In the case $n = 0$, we assume that $\overline{a_k} = 1$. Obviously, $\overline{\overline{A_n}} = A_n$, and if the matrices $A_n$ and $B_n$ of the form of (6) are nonsingular, then, by virtue of Theorem 2 on the commutativity of $A_n$ and $B_n$, we have:
$$\overline{A_nB_n} = \overline{B_nA_n} = \det(B_nA_n)\,(B_nA_n)^{-1} = \det(B_nA_n)\,A_n^{-1}B_n^{-1} = \det A_n\,A_n^{-1}\,\det B_n\,B_n^{-1} = \overline{A_n}\,\overline{B_n}.$$
Define the operation of multiplying a matrix $C_{k-1}$ by a block matrix $\begin{pmatrix} A_{k-1}^0 & A_{k-1}^1 \\ A_{k-1}^1 & A_{k-1}^0\end{pmatrix}$ in the form:
$$C_{k-1}\cdot\begin{pmatrix} A_{k-1}^0 & A_{k-1}^1 \\ A_{k-1}^1 & A_{k-1}^0\end{pmatrix} \equiv \begin{pmatrix} C_{k-1} & 0 \\ 0 & C_{k-1}\end{pmatrix}\begin{pmatrix} A_{k-1}^0 & A_{k-1}^1 \\ A_{k-1}^1 & A_{k-1}^0\end{pmatrix} = \begin{pmatrix} C_{k-1}A_{k-1}^0 & C_{k-1}A_{k-1}^1 \\ C_{k-1}A_{k-1}^1 & C_{k-1}A_{k-1}^0\end{pmatrix}.$$
Note that if $C_{k-1} = C_{k-1}(c_0,\dots,c_{2^{k-1}-1})$, then, setting $C_k = C_k(c_0,\dots,c_{2^{k-1}-1},0,\dots,0)$:
$$C_{k-1}\cdot\begin{pmatrix} A_{k-1}^0 & A_{k-1}^1 \\ A_{k-1}^1 & A_{k-1}^0\end{pmatrix} = \begin{pmatrix} C_{k-1} & 0 \\ 0 & C_{k-1}\end{pmatrix}\begin{pmatrix} A_{k-1}^0 & A_{k-1}^1 \\ A_{k-1}^1 & A_{k-1}^0\end{pmatrix} = C_kA_k = A_kC_k = A_k\cdot C_{k-1},$$
and hence, by Theorem 2, the matrix $C_{k-1}\cdot A_k$ has the form of (6).
Theorem 4.
The determinant of the matrix $A_n(a_0,\dots,a_{2^n-1})$ can be written in the form:
$$\det A_n(a_0,\dots,a_{2^n-1}) = \prod_{k=0}^{2^n-1}\sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\dots+k_1i_1}a_{(i_n\dots i_1)_2}. \tag{16}$$
For any $m = 0,\dots,n$, the following equality holds:
$$\overline{A_n}(a_0,\dots,a_{2^n-1}) = \overline{\det A_{n-m}\big(A_m^{(0\dots0)_2},\dots,A_m^{(k_n\dots k_{m+1})_2},\dots,A_m^{(1\dots1)_2}\big)}\times\overline{A_{n-m}}\big(A_m^{(0\dots0)_2},\dots,A_m^{(k_n\dots k_{m+1})_2},\dots,A_m^{(1\dots1)_2}\big), \tag{17}$$
where $\det A_{n-m}$ is calculated for the block matrix $A_{n-m}$, first with the numerical coefficients $a_{(k_n\dots k_{m+1})_2}$; then the corresponding matrices $A_m^{(k_n\dots k_{m+1})_2}$ are substituted for them, and the adjoint matrix of the resulting matrix is taken. In addition, the adjoint matrix $\overline{A_{n-m}}$ is also constructed for the numerical coefficients $a_{(k_n\dots k_{m+1})_2}$, and then the corresponding matrices $A_m^{(k_n\dots k_{m+1})_2}$ are substituted for them.
Proof. 
Since the determinant of a matrix is equal to the product of its eigenvalues, using Formula (14), we obtain:
$$\det A_n(a_0,\dots,a_{2^n-1}) = \prod_{k=0}^{2^n-1}\mu_n^{(k_n\dots k_1)_2} = \prod_{k=0}^{2^n-1}\sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\dots+k_1i_1}a_{(i_n\dots i_1)_2}.$$
Let us prove (17). Note that all operations for constructing matrices in this formula are correct, since the determinant of a matrix and the elements of an adjoint matrix are algebraic expressions, and all matrices involved in these constructions commute.
Denote the matrix on the right-hand side of (17) by $M$, and multiply it on the right by the matrix $A_n$ represented in the form of (10); then substitute the numbers $a_{(k_n\dots k_{m+1})_2}$ instead of the matrices $A_m^{(k_n\dots k_{m+1})_2}$, and after the transformations return the matrices $A_m^{(k_n\dots k_{m+1})_2}$ back:
$$M A_n(a_0,\dots,a_{2^n-1}) = M\,A_{n-m}\big(A_m^{(0\dots0)_2},\dots,A_m^{(1\dots1)_2}\big) = \Big[\overline{\det A_{n-m}}\big(a_{(0\dots0)_2},\dots,a_{(k_n\dots k_{m+1})_2},\dots,a_{(1\dots1)_2}\big)\,\overline{A_{n-m}}\big(a_{(0\dots0)_2},\dots,a_{(1\dots1)_2}\big)\,A_{n-m}\big(a_{(0\dots0)_2},\dots,a_{(1\dots1)_2}\big)\Big]^b = \overline{\det A_{n-m}}\,\det A_{n-m}\cdot I^{\,b}_{2^{n-m}} = \prod_{k=0}^{2^{n-m}-1}\overline{\Big(\sum_{i=0}^{2^{n-m}-1}(-1)^{k\cdot i}A_m^{(i_{n-m}\dots i_1)_2}\Big)}\,\Big(\sum_{i=0}^{2^{n-m}-1}(-1)^{k\cdot i}A_m^{(i_{n-m}\dots i_1)_2}\Big)\cdot I^{\,b}_{2^{n-m}} = \prod_{k=0}^{2^{n-m}-1}\det\Big(\sum_{i=0}^{2^{n-m}-1}(-1)^{k_{n-m}i_{n-m}+\dots+k_1i_1}A_m^{(i_{n-m}\dots i_1)_2}\Big)\,I_{2^m}\cdot I^{\,b}_{2^{n-m}} = \prod_{k=0}^{2^{n-m}-1}\det\Big(\sum_{i=0}^{2^{n-m}-1}(-1)^{k_{n-m}i_{n-m}+\dots+k_1i_1}A_m^{(i_{n-m}\dots i_1)_2}\Big)\,I_{2^n} \equiv \lambda I_{2^n},$$
where the formula (16) and an equality of the form of (15) are used, and $\lambda\in\mathbb{R}$. Here, the identity matrix $I^{\,b}_{2^{n-m}}$ carries the superscript $b$ indicating that it is a block matrix, each block of size $2^m\times2^m$. The resulting equality proves (17). The theorem is proven. □
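Formula (16) can likewise be compared with a direct determinant computation. A short sketch (not part of the paper) under the same assumptions (NumPy, the hypothetical `build_An` helper):

```python
# Sketch: check the determinant formula (16) against numpy.linalg.det.
import numpy as np

def build_An(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]

rng = np.random.default_rng(2)
a = rng.standard_normal(8)              # n = 3
det_by_formula = np.prod([
    sum((-1.0) ** bin(k & i).count('1') * a[i] for i in range(8))
    for k in range(8)
])
print(np.isclose(det_by_formula, np.linalg.det(build_An(a))))
```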
Corollary 4.
The matrix $\overline{A_n}(a_0,\dots,a_{2^n-1})$ is a matrix of the form of (6). If the inverse matrix $A_n^{-1}(a_0,\dots,a_{2^n-1})$ exists, then it also has the form of (6).
Proof. 
Obviously, $\overline{A_1}(a_0,a_1) = A_1(a_0,-a_1)$, and therefore the matrix $\overline{A_1}(a_0,a_1)$ is of the form of (6). Suppose that the matrix $\overline{A_{n-1}}(a_0,\dots,a_{2^{n-1}-1})$ for $n > 1$ is of the form of (6). Then, using Formula (17) for $m = n-1$ and denoting $A_{n-1}^0 = A_{n-1}(a_0,\dots,a_{2^{n-1}-1})$, $A_{n-1}^1 = A_{n-1}(a_{2^{n-1}},\dots,a_{2^n-1})$, we obtain:
$$\overline{A_n} = \overline{\det A_1\big(A_{n-1}^0, A_{n-1}^1\big)}\cdot\overline{A_1}\big(A_{n-1}^0, A_{n-1}^1\big) = \overline{\big(A_{n-1}^0\big)^2-\big(A_{n-1}^1\big)^2}\cdot A_1\big(A_{n-1}^0, -A_{n-1}^1\big).$$
It is clear that the matrix $\det A_1\big(A_{n-1}^0,A_{n-1}^1\big) = \big(A_{n-1}^0\big)^2-\big(A_{n-1}^1\big)^2$ has the form of (6), and hence, by the induction hypothesis, the matrix $\overline{\big(A_{n-1}^0\big)^2-\big(A_{n-1}^1\big)^2}$ is also a matrix of the form of (6). As noted before Theorem 4, the matrix $C_{n-1}\cdot A_n$ has the form of (6), and the product of matrices of the form of (6) is again a matrix of the form of (6). The induction step is proven, and therefore, the corollary is true. □
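The claim of Corollary 4 about the inverse matrix can also be observed numerically: for a nonsingular matrix of the form of (6), the inverse is again generated by its own first row. A sketch (not part of the paper), assuming NumPy and the hypothetical `build_An` helper; the shift of $a_0$ only serves to keep the random matrix safely nonsingular:

```python
# Sketch: the inverse of a nonsingular matrix of the form (6) again has the form (6).
import numpy as np

def build_An(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]

rng = np.random.default_rng(3)
a = rng.standard_normal(8)
a[0] += 10.0                            # large a_0 keeps every eigenvalue (14) nonzero here
A_inv = np.linalg.inv(build_An(a))
print(np.allclose(A_inv, build_An(A_inv[0])))
```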
Corollary 5.
The matrix $\overline{A_n}(a_0,\dots,a_{2^n-1})$ can be calculated using the recurrence formula:
$$\overline{A_n}(a_0,\dots,a_{2^n-1}) = \overline{\det A_{n-1}\big(A_1^{(0\dots0)_2},\dots,A_1^{(k_n\dots k_2)_2},\dots,A_1^{(1\dots1)_2}\big)}\times\overline{A_{n-1}}\big(A_1^{(0\dots0)_2},\dots,A_1^{(k_n\dots k_2)_2},\dots,A_1^{(1\dots1)_2}\big) = \prod_{(k_n\dots k_2)_2=0}^{2^{n-1}-1}\overline{\Big(\sum_{(i_n\dots i_2)_2=0}^{2^{n-1}-1}(-1)^{k_ni_n+\dots+k_2i_2}A_1^{(i_n\dots i_2)_2}\Big)}\;\cdot\;\overline{A_{n-1}}\big(b_0,\dots,b_{2^{n-1}-1}\big)\Big|_{b_k=A_1^k},$$
where $A_1^k = A_1^{(k_n\dots k_2)_2} = A_1\big(a_{(k_n\dots k_20)_2}, a_{(k_n\dots k_21)_2}\big)$.
This equality follows from (17) with $m = 1$, taking into account (16) and the fact that $\overline{A_1+B_1} = \overline{A_1}+\overline{B_1}$.

3. Properties of Integrodifferential Operators

In this section, we present some well-known statements about fractional integrodifferential operators introduced in Section 1. In [27], the following assertions were proven.
Lemma 1.
Let $\alpha>0$, $\mu\ge0$, $0<\lambda<1$, and $u(x)\in C^{\lambda+p}(\overline{\Omega})$, $p\in\mathbb{N}_0$. Then:
(1) If $\mu>0$, then $J_\mu^\alpha[u](x)\in C^{\lambda+p}(\overline{\Omega})$;
(2) If $\mu=0$, then, provided that $u(0)=0$, the function $J_0^\alpha[u](x)$ also belongs to the class $C^{\lambda+p}(\overline{\Omega})$, and the equality $J_0^\alpha[u](0)=0$ is valid.
Lemma 2.
Let $\mu\ge0$, $p-1<\alpha\le p$, $p=1,2,\dots$, $0<\lambda<1$, and $u(x)\in C^{\lambda+q}(\overline{\Omega})$, $q\ge p$. Then, the function $D_\mu^\alpha[u](x)$ belongs to the class $C^{\lambda+q-p}(\overline{\Omega})$, and the equality $D_0^\alpha[u](0)=0$ holds.
Lemma 3.
Let $\mu\ge0$, $p-1<\alpha\le p$, $p=1,2,\dots$, $0<\lambda<1$, and $u(x)\in C^{\lambda+q}(\overline{\Omega})$, $q\ge p$. Then, for any $x\in\overline{\Omega}$, the following equality holds:
$$J_\mu^\alpha\big[D_\mu^\alpha[u]\big](x)=\begin{cases}u(x),&\mu>0,\\ u(x)-u(0),&\mu=0.\end{cases}\tag{18}$$
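For $\alpha=1$ and $\mu>0$, the identity (18) reduces to a one-dimensional integral over the radial variable and can be checked by quadrature. A minimal numerical sketch (not from the paper), assuming SciPy; the radial test profile $u=|x|^2$, for which $D_\mu^1[u]=(2+\mu)|x|^2$ by Remark 2, and the sample values of $\mu$ and $r$ are ours:

```python
# Sketch: J_mu^1[ D_mu^1[u] ](x) = u(x) for mu > 0, tested on u(x) = |x|^2.
from scipy.integrate import quad

mu, r = 0.7, 0.9                        # weight mu > 0 and a radius |x| inside the unit ball
u = lambda rho: rho**2                  # radial profile of u
D1u = lambda rho: (2.0 + mu) * u(rho)   # D_mu^1[u] for this profile (see Remark 2)

# J_mu^1[g](x) = int_0^1 tau^(mu - 1) g(tau x) d tau, written on the radial profile
J_of_D, _ = quad(lambda tau: tau ** (mu - 1.0) * D1u(tau * r), 0.0, 1.0)
print(J_of_D, u(r))                     # both values agree up to quadrature error
```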
Lemma 4.
Let $\mu\ge0$, $p-1<\alpha\le p$, $p=1,2,\dots$, $0<\lambda<1$, and $u(x)\in C^{\lambda+q}(\overline{\Omega})$, $q\ge p$. Then, for any $x\in\overline{\Omega}$ and $\mu>0$, the following equality holds:
$$D_\mu^\alpha\big[J_\mu^\alpha[u]\big](x)=u(x).\tag{19}$$
For $\mu=0$, the equality (19) is valid under the additional condition $u(0)=0$.
Lemma 5.
Let $\mu\ge0$, $p-1<\alpha\le p$, $p=1,2,\dots$, let $f(x)$ be a smooth function in the domain $\overline{\Omega}$, and let $\Delta u(x)=f(x)$, $x\in\Omega$. Then, the following equality is valid:
$$\Delta D_\mu^\alpha[u](x)=F(x),\quad x\in\Omega,\tag{20}$$
where:
$$F(x)=D_{\mu+2}^\alpha[f](x).\tag{21}$$
Lemma 6.
If $\mu=0$, $0<\alpha\le1$, then the function $F(x)$ from the equality (21) can be represented as:
$$F(x)=\Big(r\frac{d}{dr}+4\Big)\big[f_{1-\alpha}\big](x),\tag{22}$$
where:
$$f_{1-\alpha}(x)=J_4^{1-\alpha}[f](x).\tag{23}$$

4. The Dirichlet Problem

In this section, we study the problem H for $\alpha = 0$, i.e., the following Dirichlet problem:
$$L_n u(x) = f(x),\quad x\in\Omega, \tag{24}$$
$$u(x) = g_0(x),\quad \frac{\partial u(x)}{\partial\nu} = g_1(x),\quad x\in\partial\Omega. \tag{25}$$
In [20], the following assertion was proven.
Lemma 7.
([20], Lemma 3.1) Let $S$ be an orthogonal matrix; then the operator $I_Su(x) = u(Sx)$ and the Laplace operator $\Delta$ commute, $\Delta I_Su(x) = I_S\Delta u(x)$, on functions $u\in C^2(\Omega)$. The operator $\delta u(x) = \sum_{i=1}^{l}x_iu_{x_i}(x)$ and the operator $I_S$ also commute, $\delta I_Su(x) = I_S\delta u(x)$, on functions $u\in C^1(\overline{\Omega})$, and the equality $\nabla I_S = I_S S^T\nabla$ is valid.
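The commutation relations of Lemma 7 are easy to verify symbolically for a concrete reflection and test function. A small sketch (not part of the paper), assuming SymPy; the test function and the choice $Sx=(-x_1,x_2)$ are ours:

```python
# Sketch: Delta[u(Sx)] = (Delta u)(Sx) and delta[u(Sx)] = (delta u)(Sx) for S x = (-x1, x2).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.exp(x1 * x2) + x1**3 * x2                      # any smooth test function

lap   = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)
euler = lambda f: x1 * sp.diff(f, x1) + x2 * sp.diff(f, x2)
I_S   = lambda f: f.subs(x1, -x1)                     # the involution u -> u(Sx)

print(sp.simplify(lap(I_S(u)) - I_S(lap(u))))         # 0
print(sp.simplify(euler(I_S(u)) - I_S(euler(u))))     # 0
```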
Corollary 6.
If the function $u(x)$ is biharmonic in the domain $\Omega$, then the function $u(S_n^{i_n}\cdots S_1^{i_1}x) = I_{S_n^{i_n}\cdots S_1^{i_1}}u(x)$ is also biharmonic in $\Omega$, and hence the function $v(x)$ from (4) is also biharmonic in $\Omega$.
Proof. 
Indeed, the matrix $S_n^{i_n}\cdots S_1^{i_1}$ is symmetric and orthogonal, since $\big(S_n^{i_n}\cdots S_1^{i_1}\big)^2 = I$, and hence, by virtue of Lemma 7:
$$\Delta^2u(x) = 0 \;\Rightarrow\; \Delta^2 I_{S_n^{i_n}\cdots S_1^{i_1}}u(x) = I_{S_n^{i_n}\cdots S_1^{i_1}}\Delta^2u(x) = 0,$$
and $\Delta^2v(x) = 0$. It follows that if the function $u(x)$ is biharmonic in $\Omega$, then it satisfies the homogeneous Equation (1) in $\Omega$. The corollary is proven. □
The converse assertion is also valid.
Lemma 8.
Let the function $u\in C^4(\Omega)$ satisfy the homogeneous Equation (1), and let $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$ for $k = 0,\dots,2^n-1$; then the function $u(x)$ is biharmonic in $\Omega$.
Proof. 
Let the function $u\in C^4(\Omega)$ satisfy the homogeneous Equation (1). Consider the function $v(x)$ from (4):
$$v(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_iu(S_n^{i_n}\cdots S_1^{i_1}x).$$
It is evident that $v(x)\in C^4(\Omega)$ and $\Delta^2v(x) = 0$, $x\in\Omega$, i.e., the function $v(x)$ is biharmonic in $\Omega$. By virtue of Corollary 6, the functions $v(S_n^{i_n}\cdots S_1^{i_1}x)$ are also biharmonic in $\Omega$. Further, as stated in (5), for the vectors:
$$U(x) = \big(u(x),\dots,u(S_n^{i_n}\cdots S_1^{i_1}x),\dots,u(S_n^1\cdots S_1^1x)\big)^T,\quad V(x) = \big(v(x),\dots,v(S_n^{j_n}\cdots S_1^{j_1}x),\dots,v(S_n^1\cdots S_1^1x)\big)^T,$$
the equality $V(x) = A_nU(x)$ holds, where $A_n = A_n(a_0,\dots,a_{2^n-1})$ is a matrix of the form of (6). By the condition of the lemma and by virtue of Theorem 4, the determinant of this system does not vanish. By Corollary 4, the matrix $A_n^{-1}$ also has the form of (6). Let us introduce the notation $B_n(b_0,\dots,b_{2^n-1}) = A_n^{-1}(a_0,\dots,a_{2^n-1})$. If the first row of the matrix $B_n$ is written as $b = (b_0,\dots,b_{2^n-1})^T$, we obtain:
$$u(x) = b\cdot V(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}b_iv(S_n^{i_n}\cdots S_1^{i_1}x). \tag{26}$$
This immediately implies that the function u ( x ) is biharmonic in Ω . The lemma is proven. □
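The inversion step (26) is pure linear algebra and can be illustrated numerically: build $v$ from a test function $u$ and coefficients $a_i$, then recover $u$ with the first row of $A_n^{-1}$. A sketch (not part of the paper), assuming NumPy; the coefficients, the test function, and the helpers `build_An` and `S_pow` (which applies $S_n^{i_n}\cdots S_1^{i_1}$ as coordinate sign changes) are ours:

```python
# Sketch: if v(x) = sum_i a_i u(S^i x), then u(x) = sum_i b_i v(S^i x) with b the first
# row of A_n^{-1}, as in (26).
import numpy as np

def build_An(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[idx[:, None] ^ idx[None, :]]

def S_pow(i, x):
    """Apply S_n^{i_n} ... S_1^{i_1}: flip the sign of x_k when bit k-1 of i is set."""
    signs = np.array([(-1.0) ** ((i >> k) & 1) for k in range(len(x))])
    return signs * x

n = 2
a = np.array([3.0, 1.0, 0.5, -0.25])                  # a_0, ..., a_{2^n - 1}
b = np.linalg.inv(build_An(a))[0]                     # first row of B_n = A_n^{-1}

u = lambda x: x[0]**3 - 2.0 * x[0] * x[1] + 0.5       # any test function on R^2
v = lambda x: sum(a[i] * u(S_pow(i, x)) for i in range(2**n))

x = np.array([0.3, -0.6])
u_recovered = sum(b[i] * v(S_pow(i, x)) for i in range(2**n))
print(np.isclose(u_recovered, u(x)))                  # True
```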
Theorem 5.
If the coefficients of the operator $L_n$ satisfy the condition $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$ for $k = 0,\dots,2^n-1$, and a solution to the Dirichlet problem (24) and (25) exists, then it is unique.
Proof. 
Let us prove that the homogeneous problem (24) and (25) has only the zero solution, and hence, the solution to the inhomogeneous problem (24) and (25), if it exists, is unique. Let $u(x)$ be a solution to the homogeneous problem (24) and (25). By Lemma 8, the function $u(x)$ is biharmonic in $\Omega$ and satisfies the homogeneous conditions (25). Therefore, the function $u(x)$ is a solution to the following Dirichlet problem:
$$\Delta^2u(x) = 0,\quad x\in\Omega;\qquad u(x)\big|_{\partial\Omega} = 0,\quad \frac{\partial u(x)}{\partial\nu}\Big|_{\partial\Omega} = 0.$$
By virtue of the uniqueness of the solution to this Dirichlet problem, we have $u(x)\equiv0$ in $\Omega$. The theorem is proven. □
Let us consider a theorem on the existence of a solution to the problem (24) and (25).
Theorem 6.
If the coefficients of the operator $L_n$ satisfy the condition $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$, $k = 0,\dots,2^n-1$, then for functions $f(x)\in C^\lambda(\overline{\Omega})$, $g_0(x)\in C^{\lambda+2}(\partial\Omega)$, and $g_1(x)\in C^{\lambda+1}(\partial\Omega)$, $0<\lambda<1$, the solution to the problem (24) and (25) exists, is unique, and can be represented as:
$$u(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}b_iv(S_n^{i_n}\cdots S_1^{i_1}x), \tag{27}$$
where the function $v(x)$ is a solution to the Dirichlet problem:
$$\Delta^2v(x) = f(x),\quad x\in\Omega;\qquad v(x) = \tilde{g}_0(x),\quad \frac{\partial v(x)}{\partial\nu} = \tilde{g}_1(x),\quad x\in\partial\Omega, \tag{28}$$
with the functions:
$$\tilde{g}_0(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_0(S_n^{i_n}\cdots S_1^{i_1}x),\quad \tilde{g}_1(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_1(S_n^{i_n}\cdots S_1^{i_1}x)$$
on the boundary, where $b_i$, $i = 0,\dots,2^n-1$, are the coefficients of the first row of the matrix $B_n(b_0,\dots,b_{2^n-1})$ inverse to the matrix $A_n = A_n(a_0,\dots,a_{2^n-1})$.
Proof. 
Let $u(x)$ be a solution to the problem (24) and (25). Consider the function $v(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_iu(S_n^{i_n}\cdots S_1^{i_1}x)$. Then, for the function $v(x)$, by virtue of Lemma 7, we obtain the following boundary value problem:
$$\Delta^2v(x) = f(x),\quad x\in\Omega,\qquad \frac{\partial^jv(x)}{\partial\nu^j}\Big|_{\partial\Omega} = \delta^j\Big[\sum_{i=0}^{2^n-1}a_iu(S_n^{i_n}\cdots S_1^{i_1}x)\Big]\Big|_{\partial\Omega} = \sum_{i=0}^{2^n-1}a_iI_{S_n^{i_n}\cdots S_1^{i_1}}\delta^j[u](x)\Big|_{\partial\Omega} = \sum_{i=0}^{2^n-1}a_iI_{S_n^{i_n}\cdots S_1^{i_1}}g_j(x) = \sum_{i=0}^{2^n-1}a_ig_j(S_n^{i_n}\cdots S_1^{i_1}x) = \tilde{g}_j(x),$$
where $j = 0,1$. If $g_j(x)\in C^{2+\lambda-j}(\partial\Omega)$, $j = 0,1$, then it is obvious that:
$$\tilde{g}_j(x) = \sum_{i=0}^{2^n-1}a_ig_j(S_n^{i_n}\cdots S_1^{i_1}x)\in C^{2+\lambda-j}(\partial\Omega).$$
It is known (see, for example, [39]) that for the given functions $f(x)$ and $\tilde{g}_j(x) = \sum_{i=0}^{2^n-1}a_ig_j(S_n^{i_n}\cdots S_1^{i_1}x)$, $j = 0,1$, a solution to the Dirichlet problem (28) exists and is unique. As in the case of the equality (5), between the functions $v(x)$ and $u(x)$ we have the algebraic relation $V(x) = A_nU(x)$. If $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$, then, by virtue of the equality (26), the unknown function $u(x)$ is uniquely determined through the function $v(x)$. Conversely, let the function $v(x)$ be a solution to the problem (28). Let us show that the function $u(x)$ defined by Formula (27) satisfies all conditions of the problem (24) and (25). Indeed, if $f(x)\in C^\lambda(\overline{\Omega})$ and $g_j(x)\in C^{2+\lambda-j}(\partial\Omega)$, $j = 0,1$, then we obtain $v(x)\in C^4(\Omega)\cap C^{2+\lambda}(\overline{\Omega})$. From here, it follows that $u(x)\in C^4(\Omega)\cap C^{2+\lambda}(\overline{\Omega})$. Therefore, according to Lemma 7, in $\Omega$, we have the equalities:
$$\Delta^2u(x) = \sum_{j=0}^{2^n-1}b_j\Delta^2v(S_n^{j_n}\cdots S_1^{j_1}x) = \sum_{j=0}^{2^n-1}b_jI_{S_n^{j_n}\cdots S_1^{j_1}}\Delta^2v(x) = \sum_{j=0}^{2^n-1}b_jI_{S_n^{j_n}\cdots S_1^{j_1}}f(x) = \sum_{j=0}^{2^n-1}b_jf(S_n^{j_n}\cdots S_1^{j_1}x).$$
Let us denote $w(x) = \sum_{j=0}^{2^n-1}b_jf(S_n^{j_n}\cdots S_1^{j_1}x)$ and consider the function $\hat{u}(x) = \sum_{i=0}^{2^n-1}a_iw(S_n^{i_n}\cdots S_1^{i_1}x)$. Then, for the vectors:
$$W(x) = \big(w(x),\dots,w(S_n^{i_n}\cdots S_1^{i_1}x),\dots,w(S_n^1\cdots S_1^1x)\big)^T,\quad F(x) = \big(f(x),\dots,f(S_n^{j_n}\cdots S_1^{j_1}x),\dots,f(S_n^1\cdots S_1^1x)\big)^T,$$
the equalities $W(x) = B_nF(x)$ and $\hat{U}(x) = A_nW(x)$ hold, and hence, $A_nW(x) = A_nB_nF(x) = F(x)$. Thus:
$$\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\Delta^2u(S_n^{i_n}\cdots S_1^{i_1}x) \equiv \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_iw(S_n^{i_n}\cdots S_1^{i_1}x) = \big(A_nW(x)\big)_{j=0} = \big(A_nB_nF(x)\big)_{j=0} = \big(F(x)\big)_{j=0} = f(x).$$
Therefore, the function u ( x ) satisfies Equation (24).
Let us check the boundary conditions (25) of the studied problem. Taking into account that:
$$\big(\tilde{G}_0(x)\big)_{j=0} = \tilde{g}_0(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_0(S_n^{i_n}\cdots S_1^{i_1}x) = \big(A_nG_0(x)\big)_{j=0}$$
for $x\in\partial\Omega$, according to Lemma 7, we obtain:
$$\frac{\partial^ku(x)}{\partial\nu^k} = \delta^k[u](x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}b_i\delta^k[v](S_n^{i_n}\cdots S_1^{i_1}x) = \big(B_n\delta^k[V](x)\big)_{j=0} = \big(B_n\tilde{G}_k(x)\big)_{j=0} = \big(B_nA_nG_k(x)\big)_{j=0} = g_k(x),$$
where $k = 0,1$, and hence, the boundary conditions (25) for the function $u(x)$ are satisfied. The theorem is proven. □

5. The Neumann Problem

In this section, we first consider the following analogue of the Neumann problem:
$$L_nu(x) = f(x),\quad x\in\Omega, \tag{30}$$
$$\frac{\partial u(x)}{\partial\nu} = g_0(x),\quad \frac{\partial^2u(x)}{\partial\nu^2} = g_1(x),\quad x\in\partial\Omega. \tag{31}$$
First, let us consider a property of the solution to the Dirichlet problem for the classical biharmonic equation. Consider the following problem:
$$\Delta^2w(x) = F(x),\quad x\in\Omega;\qquad w(x) = \tilde{h}_0(x),\quad \frac{\partial w(x)}{\partial\nu} = \tilde{h}_1(x),\quad x\in\partial\Omega, \tag{32}$$
where the function $F(x)$ has the form $F(x) = (\delta+4)f(x)$, and the functions $\tilde{h}_0(x)$ and $\tilde{h}_1(x)$ are defined by the equalities:
$$\tilde{h}_0(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_0(S_n^{i_n}\cdots S_1^{i_1}x),\quad \tilde{h}_1(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_1(S_n^{i_n}\cdots S_1^{i_1}x).$$
Let us assume that the functions $f(x)$, $h_0(x)$, $h_1(x)$ are sufficiently smooth. First, we study the problem for the homogeneous equation, i.e., consider the problem:
$$\Delta^2w(x) = 0,\quad x\in\Omega;\qquad w(x) = \tilde{h}_0(x),\quad \frac{\partial w(x)}{\partial\nu} = \tilde{h}_1(x),\quad x\in\partial\Omega. \tag{33}$$
It is known (see, for example, [36]) that the solution to the problem (33) can be represented as $w(x) = w_0(x)+(1-|x|^2)w_1(x)$, where the functions $w_0(x)$, $w_1(x)$ are solutions to the following problems:
$$\Delta w_0(x) = 0,\quad x\in\Omega;\qquad w_0(x) = \tilde{h}_0(x),\quad x\in\partial\Omega, \tag{34}$$
$$\Delta w_1(x) = 0,\quad x\in\Omega;\qquad w_1(x) = \frac{1}{2}\Big(\frac{\partial w_0(x)}{\partial\nu}-\tilde{h}_1(x)\Big),\quad x\in\partial\Omega. \tag{35}$$
If we represent the solutions to the problems (34) and (35) in the form of the Poisson integral, then we obtain:
$$w_0(x) = \int_{\partial\Omega}P(x,y)\tilde{h}_0(y)\,ds_y = \int_{\partial\Omega}P(x,y)\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_0(S_n^{i_n}\cdots S_1^{i_1}y)\,ds_y,\qquad w_1(x) = \frac{1}{2}\int_{\partial\Omega}P(x,y)\Big(\frac{\partial w_0(y)}{\partial\nu}-\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_1(S_n^{i_n}\cdots S_1^{i_1}y)\Big)ds_y,$$
where $P(x,y) = \dfrac{1}{\omega_l}\dfrac{1-|x|^2}{|x-y|^l}$ is the Poisson kernel of the Dirichlet problem. Further, it is obvious that $P(0,y) = \dfrac{1}{\omega_l}$, and for the harmonic function $w_0(x)$ the equality:
$$\int_{\partial\Omega}\frac{\partial w_0(y)}{\partial\nu}\,ds_y = 0$$
holds. Moreover, in [18] (Lemma 5.1), it was proven that:
$$\int_{\partial\Omega}g(S_iy)\,ds_y = \int_{\partial\Omega}g(y)\,ds_y.$$
Then, for the function $w(x) = w_0(x)+(1-|x|^2)w_1(x)$, we obtain:
$$w(0) = w_0(0) + w_1(0) = \frac{1}{\omega_l}\int_{\partial\Omega}\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_0(S_n^{i_n}\cdots S_1^{i_1}y)\,ds_y - \frac{1}{2\omega_l}\int_{\partial\Omega}\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ih_1(S_n^{i_n}\cdots S_1^{i_1}y)\,ds_y = \frac{1}{\omega_l}\sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\Big(h_0(y)-\frac{1}{2}h_1(y)\Big)ds_y = \frac{1}{2\omega_l}\sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(2h_0(y)-h_1(y)\big)ds_y.$$
Thus, if $w(x)$ is a solution to the problem (33), then the equality:
$$w(0) = \frac{1}{2\omega_l}\sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(2h_0(y)-h_1(y)\big)ds_y \tag{36}$$
is valid.
Let us now study the problem (32) when the boundary conditions are homogeneous, i.e., consider the problem:
$$\Delta^2w(x) = (\delta+4)f(x),\quad x\in\Omega;\qquad w(x) = 0,\quad \frac{\partial w(x)}{\partial\nu} = 0,\quad x\in\partial\Omega. \tag{37}$$
In [27], it was proven that the solution to the problem (37) satisfies the equality:
$$w(0) = \frac{1}{4\omega_l}\int_\Omega\big(1-|x|^2\big)f(x)\,dx. \tag{38}$$
Thus, using the equalities (36) and (38), for the solution to the problem (32) we obtain:
$$w(0) = \frac{1}{4\omega_l}\int_\Omega\big(1-|x|^2\big)f(x)\,dx + \frac{1}{2\omega_l}\sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(2h_0(y)-h_1(y)\big)ds_y.$$
Hence, it follows that for the condition $w(0) = 0$ to be satisfied, it is necessary and sufficient that the equality:
$$\frac{1}{2}\int_\Omega\big(1-|x|^2\big)f(x)\,dx + \sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(2h_0(y)-h_1(y)\big)ds_y = 0 \tag{39}$$
be satisfied. Thus, we have proven the following assertion.
Lemma 9.
If the function w ( x ) is a solution to the problem (32), then for the condition w ( 0 ) = 0 to be satisfied, it is necessary and sufficient to satisfy the equality (39).
Let us consider an analogue of the Neumann problem (30) and (31). The following assertion is valid.
Theorem 7.
Let the coefficients of the operator $L_n$ satisfy the condition $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$ for $k = 0,\dots,2^n-1$, and let $f(x)\in C^{\lambda+1}(\overline{\Omega})$, $g_0(x)\in C^{\lambda+4}(\partial\Omega)$, $g_1(x)\in C^{\lambda+3}(\partial\Omega)$, $0<\lambda<1$. Then, for the solvability of the problem (30) and (31), it is necessary and sufficient that the condition:
$$\frac{1}{2}\int_\Omega\big(1-|x|^2\big)f(x)\,dx = \sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(g_1(x)-g_0(x)\big)dS_x \tag{40}$$
be satisfied. If a solution to the problem exists, then it is unique up to a constant term and belongs to the class $C^{\lambda+4}(\overline{\Omega})$.
Proof. 
Assume that a solution $u(x)$ to the problem (30) and (31) exists, and let $\delta = \sum_{j=1}^{l}x_j\frac{\partial}{\partial x_j}$. Let us apply the operator $\delta$ to the function $u(x)$ and denote $v(x) = \delta[u](x)$. Note that for $x\in\partial\Omega$, the following equalities hold:
$$\frac{\partial u(x)}{\partial\nu}\Big|_{\partial\Omega} = \delta[u](x)\big|_{\partial\Omega} = v(x)\big|_{\partial\Omega};\qquad \frac{\partial^2u(x)}{\partial\nu^2}\Big|_{\partial\Omega} = (\delta-1)\delta[u](x)\big|_{\partial\Omega} = \big(\delta[v](x)-v(x)\big)\big|_{\partial\Omega}. \tag{41}$$
Then, taking into account the equality $\Delta^2\delta[u](x) = (\delta+4)\Delta^2u(x)$, $x\in\Omega$, and the commutativity of the operators $\delta$ and $I_{S_i}$, we obtain:
$$L_nv(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\Delta^2I_{S_n^{i_n}\cdots S_1^{i_1}}\delta[u](x) = (\delta+4)\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\Delta^2u(S_n^{i_n}\cdots S_1^{i_1}x) = (\delta+4)L_nu(x) = (\delta+4)f(x)\equiv F(x).$$
From the boundary conditions of the problem (30) and (31), taking into account the equalities (41), we obtain:
$$v(x)\big|_{\partial\Omega} = g_0(x);\qquad \frac{\partial v(x)}{\partial\nu}\Big|_{\partial\Omega} = g_1(x)+g_0(x).$$
Thus, if $u(x)$ is a solution to the problem (30) and (31), then for the function $v(x) = \delta[u](x)$, we obtain the following Dirichlet problem:
$$L_nv(x) = (\delta+4)f(x)\equiv F(x),\quad x\in\Omega;\qquad v(x)\big|_{\partial\Omega} = g_0(x),\quad \frac{\partial v(x)}{\partial\nu}\Big|_{\partial\Omega} = g_1(x)+g_0(x). \tag{42}$$
Moreover, from the equality $v(x) = \delta[u](x)$, it follows that the solution to this problem must satisfy the condition $v(0) = 0$. Let us find out when this condition is fulfilled. We introduce the notation:
$$w(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_iv(S_n^{i_n}\cdots S_1^{i_1}x). \tag{43}$$
If $v(x)$ is a solution to the problem (42), then the function $w(x)$ from (43) satisfies the conditions of the problem (32) with the functions:
$$\tilde{h}_0(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_0(S_n^{i_n}\cdots S_1^{i_1}x),\quad \tilde{h}_1(x) = \tilde{h}_0(x)+\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_1(S_n^{i_n}\cdots S_1^{i_1}x).$$
From (43), it follows that:
$$w(0) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_iv(S_n^{i_n}\cdots S_1^{i_1}\cdot0) = \Big(\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\Big)v(0).$$
Then, under the condition $\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\ne0$, the equality $v(0) = 0$ is satisfied if and only if the condition $w(0) = 0$ is satisfied. In turn, by the assertion of Lemma 9, for the equality $w(0) = 0$ to hold, it is necessary and sufficient that the condition (39) be satisfied. Since in our case:
$$2h_0(x)-h_1(x) = 2g_0(x)-\big(g_1(x)+g_0(x)\big) = g_0(x)-g_1(x),$$
then the condition for the solvability of the problem (30) and (31) can be rewritten as (40). Thus, the necessity of fulfilling the condition (40) for the existence of a solution to the problem (30) and (31) is proven.
Let us show that the fulfillment of the condition (40) is also sufficient for the existence of a solution to the problem (30) and (31). To do this, consider in $\Omega$ the following Neumann-type problem with respect to the function $z(x)$:
$$\Delta^2z(x) = F(x),\quad x\in\Omega, \tag{44}$$
$$z(x)\big|_{\partial\Omega} = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_0(S_n^{i_n}\cdots S_1^{i_1}x)\equiv\tilde{h}_0(x),\qquad \frac{\partial z(x)}{\partial\nu}\Big|_{\partial\Omega} = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_ig_1(S_n^{i_n}\cdots S_1^{i_1}x)\equiv\tilde{h}_1(x). \tag{45}$$
It is known (see, for example, [36]) that the solvability condition of this problem can be written as:
$$\frac{1}{2}\int_\Omega\big(1-|x|^2\big)F(x)\,dx = \int_{\partial\Omega}\big(\tilde{h}_1(x)-\tilde{h}_0(x)\big)dS_x. \tag{46}$$
By virtue of [18] (Lemma 5.1), $\int_{\partial\Omega}g(S_ix)\,ds_x = \int_{\partial\Omega}g(x)\,ds_x$, whence it follows that:
$$\sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}a_i\int_{\partial\Omega}g_0(S_n^{i_n}\cdots S_1^{i_1}x)\,ds_x = \sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}g_0(x)\,ds_x,$$
and therefore, the condition (46) can be rewritten in the form of (40). If this condition is satisfied, then a solution to the problem (44) and (45) exists and is unique up to a constant term.
Further, as in the case of the Dirichlet problem, the solution to the Neumann problem (30) and (31) can be found by the formula:
$$u(x) = \sum_{i=(i_n\dots i_1)_2=0}^{(1\dots1)_2}b_iz(S_n^{i_n}\cdots S_1^{i_1}x). \tag{47}$$
Indeed, if we consider the vector $Z(x) = \big(z(x),\dots,z(S_n^{j_n}\cdots S_1^{j_1}x),\dots,z(S_n^1\cdots S_1^1x)\big)^T$, then from the vector equality $U(x) = A_n^{-1}Z(x)$, provided the conditions of the theorem are satisfied, we can determine the vector $U(x) = \big(u(x),\dots,u(S_n^{i_n}\cdots S_1^{i_1}x),\dots,u(S_n^1\cdots S_1^1x)\big)^T$. Since $A_nU(x) = Z(x)$, the function $u(x)$ is uniquely determined through the function $z(x)$ from (44) and (45) by Formula (47). In a similar way, as in the case of the Dirichlet problem, it can be shown that the function (47) satisfies all conditions of the problem (30) and (31). The theorem is proven. □
Further, we define the solvability condition for the problem H in the case $\alpha = 1$, $\mu = 0$. In this case, the boundary conditions are written as:
$$\frac{\partial u(x)}{\partial\nu} = g_0(x),\quad \frac{\partial^2u(x)}{\partial\nu^2}+\frac{\partial u(x)}{\partial\nu} = g_1(x),\quad x\in\partial\Omega. \tag{48}$$
The second condition from (48), taking into account the first condition, can be rewritten in the form:
$$\frac{\partial^2u(x)}{\partial\nu^2} = g_1(x)-g_0(x).$$
Then, the problem under consideration is equivalent to the problem (30) and (31) with the following boundary conditions:
$$\frac{\partial u(x)}{\partial\nu} = g_0(x),\quad \frac{\partial^2u(x)}{\partial\nu^2} = g_1(x)-g_0(x),\quad x\in\partial\Omega.$$
Therefore, Theorem 7 implies the following assertion.
Corollary 7.
Let $\alpha = 1$, $\mu = 0$, let the coefficients of the operator $L_n$ satisfy the condition $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$ for $k = 0,\dots,2^n-1$, and let $0<\lambda<1$, $f(x)\in C^{\lambda+1}(\overline{\Omega})$, $g_0(x)\in C^{\lambda+4}(\partial\Omega)$, $g_1(x)\in C^{\lambda+3}(\partial\Omega)$. Then, for the solvability of the problem H, it is necessary and sufficient that the condition:
$$\frac{1}{2}\int_\Omega\big(1-|x|^2\big)f(x)\,dx = \sum_{i=0}^{2^n-1}a_i\int_{\partial\Omega}\big(g_1(x)-2g_0(x)\big)dS_x \tag{49}$$
be satisfied. If a solution to the problem exists, then it is unique up to a constant term and belongs to the class $C^{\lambda+4}(\overline{\Omega})$.

6. The General Case of the Main Problem

In this section, we consider the general case of the problem H. The following assertion is true.
Theorem 8.
Let $0<\alpha\le1$, $\mu\ge0$, let the coefficients of the operator $L_n$ satisfy the condition $\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i\ne0$ for $k = 0,\dots,2^n-1$, and let $f(x)\in C^{\lambda+1}(\overline{\Omega})$, $g_0(x)\in C^{\lambda+4}(\partial\Omega)$, $g_1(x)\in C^{\lambda+3}(\partial\Omega)$, $0<\lambda<1$. Then:
(1) If $\mu>0$, then a solution to the problem H exists, is unique, belongs to the class $C^{\lambda+4}(\overline{\Omega})$, and can be represented in the form:
$$u(x) = J_\mu^\alpha[v](x), \tag{50}$$
where $v(x)$ is a solution of the following Dirichlet problem:
$$L_nv(x) = D_{\mu+4}^\alpha[f](x),\quad x\in\Omega;\qquad v(x) = g_0(x),\quad \frac{\partial v(x)}{\partial\nu} = g_1(x),\quad x\in\partial\Omega; \tag{51}$$
(2) If $\mu = 0$, then for the solvability of the problem H, it is necessary and sufficient that the condition:
$$\frac{1}{2}\int_\Omega\big(1-|x|^2\big)f_{1-\alpha}(x)\,dx = \sum_{k=0}^{2^n-1}a_k\int_{\partial\Omega}\big(g_1(x)-2g_0(x)\big)dS_x \tag{52}$$
be satisfied, where $f_{1-\alpha}(x)$ is defined by the equality $f_{1-\alpha}(x) = J_4^{1-\alpha}[f](x)$.
If a solution to the problem exists, it is unique up to a constant term, belongs to the class $C^{\lambda+4}(\overline{\Omega})$, and can be represented as:
$$u(x) = C + J_0^\alpha[v](x),\quad C = \mathrm{const},$$
where $v(x)$ is a solution to the problem (51) for $\mu = 0$.
Proof. 
Let the function $u(x)$ be a solution to the problem H. We apply the operator $D_\mu^\alpha$ to this function and denote $v(x) = D_\mu^\alpha[u](x)$. Let us apply the operator $\Delta^2$ to this function. Then, by virtue of the equality (20), $\Delta^2v(x) = D_{\mu+4}^\alpha[\Delta^2u](x)$. From here, it follows that for any $i = 0,\dots,2^n-1$, $\Delta^2v(S_n^{i_n}\cdots S_1^{i_1}x) = D_{\mu+4}^\alpha[\Delta^2u](S_n^{i_n}\cdots S_1^{i_1}x)$. Hence,
$$L_nv(x) = \sum_{i=0}^{2^n-1}a_i\Delta^2D_\mu^\alpha[u](S_n^{i_n}\cdots S_1^{i_1}x) = D_{\mu+4}^\alpha\Big[\sum_{i=0}^{2^n-1}a_i\Delta^2u(S_n^{i_n}\cdots S_1^{i_1}\cdot)\Big](x) = D_{\mu+4}^\alpha[f](x),$$
where x Ω .
In [27], it was proven that the function $D_\mu^{\alpha+1}[u](x)$ can be represented as $D_\mu^{\alpha+1}[u](x) = \delta\big[D_\mu^{\alpha}[u]\big](x)$. Then, from the boundary conditions (2) and (3), it follows that:
$$v(x)\big|_{\partial\Omega} = D_\mu^\alpha[u](x)\big|_{\partial\Omega} = g_0(x),\qquad \frac{\partial v(x)}{\partial\nu}\Big|_{\partial\Omega} = \delta\big[D_\mu^\alpha[u]\big](x)\big|_{\partial\Omega} = g_1(x).$$
Thus, if $u(x)$ is a solution to the problem H, then for the function $v(x) = D_\mu^\alpha[u](x)$, we obtain the Dirichlet problem (51). If $f(x)\in C^{\lambda+1}(\overline{\Omega})$, then, by virtue of Lemma 2, $D_{\mu+4}^\alpha[f](x)\in C^\lambda(\overline{\Omega})$. Moreover, by the condition of the theorem, $g_0(x)\in C^{\lambda+4}(\partial\Omega)$, $g_1(x)\in C^{\lambda+3}(\partial\Omega)$. Then, by virtue of the assertion of Theorem 6, a solution to the problem (51) exists, is unique, and belongs to the class $C^{\lambda+4}(\overline{\Omega})$. If we apply the operator $J_\mu^\alpha$ to both sides of the equality $v(x) = D_\mu^\alpha[u](x)$, then by virtue of Lemma 3, in the case $\mu>0$, we obtain $u(x) = J_\mu^\alpha[v](x)$; that is, if a solution to the problem H exists, then the representation (50) is valid for it.
Conversely, let the function $v(x)$ be a solution to the problem (51). Let us show that the function $u(x) = J_\mu^\alpha[v](x)$ satisfies all the conditions of the problem H. Indeed, if we apply the operator $L_n$ to this function, then we obtain:
$$L_nu(x) = L_n\big[J_\mu^\alpha[v]\big](x) = J_{\mu+4}^\alpha\big[L_nv\big](x) = J_{\mu+4}^\alpha\big[D_{\mu+4}^\alpha[f]\big](x) = f(x),\quad x\in\Omega,$$
i.e., the function $u(x) = J_\mu^\alpha[v](x)$ satisfies Equation (1). Let us check the fulfillment of the boundary conditions (2) and (3). By the assertion of Lemma 4, namely by virtue of the equality (19), in the case $\mu>0$ the equalities:
$$D_\mu^\alpha[u](x)\big|_{\partial\Omega} = D_\mu^\alpha\big[J_\mu^\alpha[v]\big](x)\big|_{\partial\Omega} = v(x)\big|_{\partial\Omega} = g_0(x),\qquad D_\mu^{\alpha+1}[u](x)\big|_{\partial\Omega} = r\frac{\partial}{\partial r}D_\mu^\alpha\big[J_\mu^\alpha[v]\big](x)\Big|_{\partial\Omega} = r\frac{\partial v(x)}{\partial r}\Big|_{\partial\Omega} = \frac{\partial v(x)}{\partial\nu}\Big|_{\partial\Omega} = g_1(x)$$
are satisfied, i.e., the boundary conditions are also satisfied.
It remains to investigate the case μ = 0 . In this case, the function v ( x ) —a solution to the problem (51)—must satisfy the additional condition v ( 0 ) = 0 .
Since the function $F(x) = D_{\mu+4}^\alpha[f](x)$, by virtue of Lemma 6, is represented as $F(x) = (\delta+4)f_{1-\alpha}(x)$, where $f_{1-\alpha}(x) = J_4^{1-\alpha}[f](x)$, then, by Lemma 9, for the equality $v(0) = 0$ to hold, it is necessary and sufficient that the condition (39) be satisfied. In our case, $2h_0(x)-h_1(x) = 2g_0(x)-g_1(x)$, and therefore, the condition (39) can be rewritten in the form of (52). Thus, the necessity of the fulfillment of the condition (52) for the existence of a solution to the problem H is proven. The other part of the theorem can be proven in the same way as in the case $\mu>0$. The theorem is proven. □
Remark 3.
If $\alpha = 1$ and $\mu = 0$, then $f_0(x) = J_4^0[f](x) \equiv f(x)$, and the solvability condition (52) coincides with the condition (49).

7. Conclusions

Summarizing the investigation carried out, we note that, with the help of involution-type transformations, a nonlocal analogue of the biharmonic operator was introduced. Then, for the corresponding biharmonic equation with multiple involutions, the solvability of boundary value problems with a fractional-order boundary operator of Hadamard type was investigated. Theorems on the existence and uniqueness of solutions to the problems under consideration were proven. Exact solvability conditions were found, depending on the order of the boundary operators. In this work, we thus extended the class of correctly posed boundary value problems for a class of nonlocal partial differential equations. In the future, we plan to investigate spectral questions for operators with multiple involutions.

Author Contributions

Conceptualization, B.T., V.K. and M.M.; investigation, B.T., V.K. and M.M.; writing—original draft preparation, B.T., V.K. and M.M.; writing—review and editing, B.T., V.K. and M.M. All authors read and agreed to the published version of the manuscript.

Funding

The research of the first and third authors was supported by the grant of the Committee of Sciences, Ministry of Education and Science of the Republic of Kazakhstan, Project AP09259074. The second author was supported by Act 211 of the Government of the Russian Federation, Contract No. 02.A03.21.0011.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data are present within the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nahushev, A.M. Equations of Mathematical Biology; Nauka: Moscow, Russia, 1995. (In Russian)
2. Cabada, A.; Tojo, F.A.F. Differential Equations with Involutions; Atlantis Press: New York, NY, USA, 2015.
3. Karapetiants, N.; Samko, S. Equations with Involutive Operators; Birkhäuser: Boston, MA, USA, 2001.
4. Litvinchuk, G.S. Solvability Theory of Boundary Value Problems and Singular Integral Equations with Shift; Kluwer Academic Publishers: Boston, MA, USA, 2000.
5. Al-Salti, N.; Kirane, M.; Torebek, B.T. On a class of inverse problems for a heat equation with involution perturbation. Hacet. J. Math. Stat. 2019, 48, 669–681.
6. Al-Salti, N.; Kerbal, S.; Kirane, M. Initial-boundary value problems for a time-fractional differential equation with involution perturbation. Math. Model. Nat. Phenom. 2019, 14, 1–15.
7. Andreev, A.A. Analogs of classical boundary value problems for a second-order differential equation with deviating argument. Differ. Equ. 2004, 40, 1192–1194.
8. Ashyralyev, A.; Sarsenbi, A.M. Well-posedness of an elliptic equation with involution. Electron. J. Differ. Equ. 2015, 284, 1–8.
9. Ashyralyev, A.; Sarsenbi, A.M. Well-posedness of a parabolic equation with involution. Numer. Funct. Anal. Optim. 2017, 38, 1295–1304.
10. Baskakov, A.G.; Uskova, N.B. Fourier method for first order differential equations with involution and groups of operators. Ufa Math. J. 2018, 10, 11–34.
11. Baskakov, A.G.; Krishtal, I.A.; Uskova, N.B. On the spectral analysis of a differential operator with an involution and general boundary conditions. Eurasian Math. J. 2020, 11, 30–39.
12. Burlutskaya, M. Some properties of functional-differential operators with involution ν(x) = 1 − x and their applications. Russ. Math. 2021, 5, 89–97.
13. Burlutskaya, M. Mixed problem for a first-order partial differential equation with involution and periodic boundary conditions. Comput. Math. Math. Phys. 2014, 54, 1–10.
14. Kirane, M.; Al-Salti, N. Inverse problems for a nonlocal wave equation with an involution perturbation. J. Nonlinear Sci. Appl. 2016, 9, 1243–1251.
15. Ruzhansky, M.; Tokmagambetov, N.; Torebek, B.T. On a non-local problem for a multi-term fractional diffusion-wave equation. Fract. Calc. Appl. Anal. 2020, 23, 324–355.
16. Torebek, B.T.; Tapdigoglu, R. Some inverse problems for the nonlocal heat equation with Caputo fractional derivative. Math. Methods Appl. Sci. 2017, 40, 6468–6479.
17. Przeworska-Rolewicz, D. Some boundary value problems with transformed argument. Comment. Math. 1974, 17, 451–457.
18. Karachik, V.V.; Turmetov, B. On solvability of some nonlocal boundary value problems for biharmonic equation. Math. Slovaca 2020, 70, 329–342.
19. Karachik, V.V.; Turmetov, B. Solvability of one nonlocal Dirichlet problem for the Poisson equation. Novi Sad J. Math. 2020, 50, 67–88.
20. Karachik, V.V.; Sarsenbi, A.M.; Turmetov, B. On the solvability of the main boundary value problems for a nonlocal Poisson equation. Turk. J. Math. 2019, 43, 1604–1625.
21. Yarka, U.; Fedushko, S.; Vesely, P. The Dirichlet Problem for the Perturbed Elliptic Equation. Mathematics 2020, 8, 2108.
22. Ashurov, R.; Fayziev, Y. On some boundary value problems for equations with boundary operators of fractional order. Int. J. Appl. Math. 2021, 34, 283–295.
23. Vasylyeva, N. Local Solvability of a Linear System with a Fractional Derivative in Time in a Boundary Condition. Fract. Calc. Appl. Anal. 2015, 18, 982–1005.
24. Gorenflo, R.; Luchko, Y.F.; Umarov, S.R. On some boundary value problems for pseudo-differential equations with boundary operators of fractional order. Fract. Calc. Appl. Anal. 2000, 3, 453–468.
25. Kirane, M.; Torebek, B.T. On a nonlocal problem for the Laplace equation in the unit ball with fractional boundary conditions. Math. Methods Appl. Sci. 2016, 39, 1121–1128.
26. Krasnoschok, M.; Vasylyeva, N. On a nonclassical fractional boundary-value problem for the Laplace operator. J. Differ. Equ. 2014, 257, 1814–1839.
27. Turmetov, B. On the solvability of some boundary value problems for the inhomogeneous polyharmonic equation with boundary operators of the Hadamard type. Differ. Equ. 2017, 53, 333–344.
28. Turmetov, B.; Nazarova, K. On fractional analogs of Dirichlet and Neumann problems for the Laplace equation. Mediterr. J. Math. 2019, 16, 1–17.
29. Turmetov, B.; Nazarova, K. On a generalization of the Neumann problem for the Laplace equation. Math. Nachrichten 2020, 293, 169–177.
30. Umarov, S.R. Some boundary value problems for elliptic equations with a boundary operator of fractional order. Dokl. Akad. Nauk. 1993, 333, 708–710. (In Russian)
31. Veliev, T.M.; Ivakhnychenko, M.T.; Ahmedov, T. Fractional boundary conditions in plane waves diffraction on a strip. Prog. Electromagn. Res. 2008, 79, 443–462.
32. Ivakhnychenko, M.; Veliev, E.; Akhmedov, T. Fractional operators approach in electromagnetic wave reflection problems. J. Electromagn. Waves Appl. 2007, 21, 1787–1802.
33. Tabatadze, V.; Karaçuha, K.; Veliev, E. The solution of the plane wave diffraction problem by two strips with different fractional boundary conditions. J. Electromagn. Waves Appl. 2020, 34, 881–893.
34. Tabatdze, V.; Karaçuha, K.; Veliev, E.; Karaçuha, E. Diffraction of the electromagnetic plane waves by double half-plane with fractional boundary conditions. Prog. Electromagn. Res. M 2021, 101, 207–218.
35. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006.
36. Bitsadze, A.V. On polyharmonic functions. Dokl. Akad. Nauk SSSR 1987, 294, 521–525. (In Russian)
37. Karachik, V.V. Class of Neumann-type problems for the polyharmonic equation in a ball. Comput. Math. Math. Phys. 2020, 60, 144–162.
38. Karachik, V.V. Generalized Third Boundary Value Problem for the Biharmonic Equation. Differ. Equ. 2017, 53, 756–765.
39. Agmon, S.; Douglis, A.; Nirenberg, L. Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions. I. Commun. Pure Appl. Math. 1959, 12, 623–727.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
