Review

Understanding Electronic Structure and Chemical Reactivity: Quantum-Information Perspective

by
Roman F. Nalewajski
Department of Theoretical Chemistry, Jagiellonian University, Gronostajowa 2, 30-387 Kraków, Poland
Appl. Sci. 2019, 9(6), 1262; https://doi.org/10.3390/app9061262
Submission received: 18 February 2019 / Revised: 17 March 2019 / Accepted: 22 March 2019 / Published: 26 March 2019
(This article belongs to the Special Issue The Application of Quantum Mechanics in Reactivity of Molecules)

Abstract
Several applications of quantum mechanics and information theory to chemical reactivity problems are presented with emphasis on equivalence of variational principles for the constrained minima of the system electronic energy and its kinetic energy component, which also determines the overall gradient information. Continuities of molecular probability and current distributions, reflecting the modulus and phase components of molecular wavefunctions, respectively, are summarized. Resultant measures of the entropy/information descriptors of electronic states, combining the classical (probability) and nonclassical (phase/current) contributions, are introduced, and information production in quantum states is shown to be of a nonclassical origin. Importance of resultant information descriptors for distinguishing the bonded (entangled) and nonbonded (disentangled) states of reactants in acid(A)–base(B) systems is stressed and generalized entropy concepts are used to determine the phase equilibria in molecular systems. The grand-canonical principles for the minima of electronic energy and overall gradient information allow one to explore relations between energetic and information criteria of chemical reactivity in open molecules. The populational derivatives of electronic energy and resultant gradient information give identical predictions of electronic flows between reactants. The role of electronic kinetic energy (resultant gradient information) in chemical-bond formation is examined, the virial theorem implications for the Hammond postulate of reactivity theory are explored, and changes of the overall structure information in chemical processes are addressed. The frontier-electron basis of the hard (soft) acids and bases (HSAB) principle is reexamined and covalent/ionic characters of the intra- and inter-reactant communications in donor-acceptor systems are explored. The complementary A–B coordination is compared with its regional HSAB analog, and polarizational/relaxational flows in such reactive systems are explored.

1. Introduction

Quantum mechanics (QM) and information theory (IT) establish a solid basis for both determining the electronic structure of molecules and understanding, in chemical terms, general trends in their chemical behavior. The energy principle of QM has recently been interpreted [1,2,3] as an equivalent variational rule for the overall content of the gradient information in the system electronic wavefunction, proportional to the state average kinetic energy. In the grand-ensemble representation of thermodynamic (mixed) states, they both determine the same equilibrium of an externally open molecular system. This equivalence parallels the identical predictions resulting from the minimum-energy and maximum-entropy principles in ordinary thermodynamics [4].
The generalized, Fisher-type gradient information in the specified electronic state is proportional to the system average kinetic energy. This allows one to interpret the variational principle for electronic energy as equivalent information rule. The energy and resultant-information/kinetic-energy rules thus represent equivalent sources of reactivity criteria, the populational derivatives of ensemble-average values of electronic energy or overall information, e.g., the system chemical potential (negative electronegativity) and hardness/softness descriptors. The IT transcription of the variational principle for the minimum of electronic energy allows one to interpret the familiar (energetical) criteria of chemical reactivity, the populational derivatives of electronic energy, in terms of the corresponding derivatives of the state-resultant information content. The latter combines the classical (probability) and nonclassical (current) contributions to the state kinetic energy of electrons, generated by the modulus and phase components of molecular wavefunctions, respectively. This proportionality between the state resultant gradient information and its kinetic energy also allows one to use the molecular virial theorem [5] in general reactivity considerations [1,2,3].
The resultant measures combining the probability and phase/current contributions allow one to distinguish the information content of states generating the same electron density but differing in their phase/current composition. To paraphrase Prigogine [6], the electron density alone reflects only the molecular static structure of “being”, missing the dynamic structure of “becoming” contained in the state current distribution. Both these manifestations of electronic “organization” in molecules ultimately contribute to resultant IT descriptors of the structural entropy/information content in generally complex electronic wavefunctions [7,8,9,10]. In quantum information theory (QIT) [7], the classical information contribution probes the entropic content of incoherent (disentangled) local “events” while its nonclassical supplement provides the information complement due to their coherence (entanglement).
The classical IT [11,12,13,14,15,16,17,18] has already been successfully applied to interpret the molecular probability distributions, e.g., References [19,20,21,22]. Information principles have been explored [1,2,3,23,24,25,26,27,28] and density pieces attributed to atoms-in-molecules (AIM) have been approached [22,26,27,28,29,30], providing the information basis of the intuitive (stockholder) division of Hirshfeld [31]. Patterns of chemical bonds have been extracted from electronic communications in molecules [7,19,20,21,32,33,34,35,36,37,38,39,40,41,42] and entropy/information distributions in molecules have been explored [7,19,20,21,43,44]. The nonadditive Fisher information [7,19,20,21,45,46] has been linked to the electron localization function (ELF) [47,48,49] of modern density functional theory (DFT) [50,51,52,53,54,55]. This analysis has also formulated the contragradience (CG) probe [7,19,20,21,56] for localizing chemical bonds, and the orbital communication theory (OCT) of the chemical bond has identified the bridge bonds originating from the cascade propagations of information between AIM, which involve intermediate orbitals [7,21,57,58,59,60,61,62].
In entropic theories of molecular electronic structure, one ultimately requires the quantum (resultant) extensions of the familiar complementary measures of Fisher [11] and Shannon [13], of the information and entropy content in probability distributions, which are appropriate for the complex probability amplitudes (wavefunctions) of molecular QM. The wavefunction phase, or its gradient determining the current density and the associated velocity field, gives rise to nonclassical supplements in resultant measures of an overall entropic content of molecular states [7,63,64,65,66,67,68]. The information distinction between the bonded (entangled) and nonbonded (disentangled) states of subsystems, e.g., molecular substrates of a chemical reaction, also calls for such generalized information descriptors [69,70,71,72]. The extremum principles for the global and local measures of the resultant entropy have been used to determine the phase-equilibrium states of molecular systems, identified by their optimum (local, probability-dependent) “thermodynamic” phase.
Various DFT-based approaches to classical issues in reactivity theory [73,74,75,76,77,78,79] use the energy-centered arguments in justifying the observed reaction paths and relative yields of their products. Qualitative considerations on preferences in chemical reactions usually emphasize changes in energies of both reactants and of the whole reactive system, which are induced by displacements (perturbations) in parameters describing the relevant (real or hypothetical) electronic state. In such classical treatments, also covering the linear responses to these primary shifts, one also explores reactivity implications of the electronic equilibrium and stability criteria. For example, in charge sensitivity analysis (CSA) [73,74] the “principal” (energy) derivatives with respect to the system external potential (v), due to the fixed nuclei defining molecular geometry, and its overall number of electrons (N), as well as the associated charge responses of both the whole reactive system and its constituent subsystems, have been used as reactivity criteria. In R = acid(A)← base(B) ≡ A–B complexes, consisting of the coordinated electron-acceptor and electron-donor reactants, respectively, such responses can be subsequently combined into the corresponding in situ descriptors characterizing the B→A charge transfer (CT).
We begin this overview with a summary of the probability and current distributions, the physical attributes reflecting the modulus and phase components of quantum states, and an introduction to the resultant QIT descriptors. The phase equilibria, representing extrema of the overall entropy measures, will be explored and molecular orbital (MO) contributions to the overall gradient-information measure will be examined. Using the molecular virial theorem, the role of electronic kinetic energy, also reflecting the system resultant information, in shaping the electronic structure of molecules, will be examined. This analysis of the theorem implications will cover the bond-formation process and the qualitative Hammond [80] postulate of reactivity theory. The hypothetical stages of chemical reactions invoked in reactivity theory will be explored and the in situ populational derivatives will be applied to determine the optimum amount of CT in donor-acceptor coordinations. Populational derivatives of the resultant gradient information will be advocated as alternative indices of chemical reactivity, related to their energetical analogs. They will be shown to be capable of predicting both the direction and magnitude of electron flows in A–B systems. The frontier-electron (FE) [81,82,83] framework for describing molecular interactions will be used to reexamine Pearson’s [84] hard (soft) acids and bases (HSAB) principle of structural chemistry (see also Reference [85]) and electron communications between reactants will be commented upon. The ionic and covalent interactions between the “frontier” MO will be invoked to fully explain the HSAB stability predictions, and the “complementary” A–B complex will be compared with its regional-HSAB analog. The complementary preference will be explained by examining physical implications of the polarizational and relaxational flows in these alternative reactive complexes. In appendices, the continuity relations for the probability and phase distributions of molecular electronic states resulting from the Schrödinger equation (SE) of QM will be summarized, the dynamics of resultant gradient information will be addressed, the nonclassical origin of the overall gradient-information production will be demonstrated, and the grand-ensemble representation of open molecular systems will be outlined.

2. Physical Attributes of Quantum States and Generalized Information Descriptors

The electronic wavefunctions of molecules are determined by SE of QM. This fundamental equation also determines the dynamics of the modulus (probability) and phase (current) attributes of such elementary quantum states. In a discussion of “productions” of the resultant entropy/information quantities [7,69,72], it is of interest to examine implications of SE for the dynamics of these fundamental physical distributions of quantum states. For simplicity, let us first consider a single electron at time t in state |ψ(t)〉 ≡ |ψ〉, described by the (complex) wavefunction in position representation,
ψ(r, t) = 〈r|ψ(t)〉 = R(r, t) exp[iϕ(r, t)] ≡ R(t) exp[iϕ(t)] ≡ ψ(t),
where the real functions R(r, t) ≡ R(t) and ϕ(r, t) ≡ ϕ(t) stand for its modulus and phase parts, respectively. It determines the state probability distribution at the specified time t,
p(r, t) = 〈ψ(t)|r〉〈r|ψ(t)〉 = ψ(r, t)*ψ(r, t) = R(t)2 ≡ p(t),
and its current density
j(r, t) = [ħ/(2mi)] [ψ(r, t)*∇ψ(r, t) − ψ(r, t) ∇ψ(r, t)*] = (ħ/m) Im[ψ(r, t)*∇ψ(r, t)]
= (ħ/m) p(r, t) ∇ϕ(r, t) ≡ p(t) V(t) ≡ j(t).
The effective velocity field V(r, t) = j(t)/p(t) ≡ V(t) of the probability “fluid” measures the local current-per-particle and reflects the state phase gradient:
V(t) = j(t)/p(t) = (ħ/m) ∇ϕ(t).
The wavefunction modulus, the classical amplitude of the particle probability density, and the state phase, or its gradient determining the effective velocity of the probability flux, thus constitute two physical degrees-of-freedom in the full IT treatment of quantum states of a monoelectronic system:
ψ ⇔ (R, ϕ) ⇔ (p, j).
One envisages the electron moving in the external potential v(r), due to the “frozen” nuclei of the Born–Oppenheimer (BO) approximation determining the system geometry, described by the electronic Hamiltonian
Ĥ(r) = −(ħ2/2m) ∇2 + v(r) ≡ T̂(r) + v(r),
where T̂(r) denotes its kinetic part. The quantum dynamics of a general electronic state of Equation (1) is generated by SE
∂ψ(t)/∂t = (iħ)−1 Ĥ ψ(t),
which also determines temporal evolutions of the state physical distributions: The (instantaneous) probability density p(t) and (local) phase ϕ(t) or its gradient reflecting the velocity field V(t), the current-per-particle of the probability “fluid” [7,69,72]. The relevant continuity relations resulting from SE are summarized in Appendix A.
To simplify the notation for the specified time t = t0, let us suppress this parameter in the list of state arguments, e.g., ψ(r, t0) ≡ ψ(r) = 〈r|ψ〉, etc. We again examine the mono-electron system in (pure) quantum state |ψ〉. The average Fisher’s measure [11,12] of the classical gradient information for locality events, called the intrinsic accuracy, which is contained in the molecular probability density p(r) = R(r)2, is reminiscent of von Weizsäcker’s [86] inhomogeneity correction to the density functional for electronic kinetic energy:
I[p] = ∫p(r) [∇lnp(r)]2dr = 〈ψ|(∇lnp)2|ψ〉 = ∫[∇p(r)]2/p(r) dr = 4∫[∇R(r)]2 dr ≡ I[R].
This local measure characterizes an effective “narrowness” of the particle spatial probability distribution, i.e., a degree of the particle position determinicity. It represents the complementary measure to the global entropy of Shannon [13,14], the position-uncertainty index,
S[p] = −∫p(r) lnp(r) dr = − 〈ψ|lnp|ψ〉 = −2∫R(r)2 lnR(r) dr ≡ S[R],
which reflects the particle position indeterminicity, a “spread” of probability distribution. This classical descriptor also measures the amount of information received when the uncertainty about particle’s location is removed by an appropriate experiment: IS[p] ≡ S[p].
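As a simple numerical illustration (an added sketch with an assumed one-dimensional Gaussian distribution, not taken from the cited works), both classical descriptors can be evaluated on a grid and checked against their analytical Gaussian values, I[p] = 1/s2 and S[p] = (1/2)ln(2πes2):

import numpy as np

# Assumed model: 1D Gaussian probability density of width s (illustration only).
s = 0.75
x = np.linspace(-10.0, 10.0, 20001)
p = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

dp = np.gradient(p, x)                 # numerical derivative p'(x)
I_num = np.trapz(dp**2 / p, x)         # classical Fisher (gradient) information I[p]
S_num = np.trapz(-p * np.log(p), x)    # classical Shannon (global) entropy S[p]

print(I_num, 1 / s**2)                               # both ≈ 1.778
print(S_num, 0.5 * np.log(2 * np.pi * np.e * s**2))  # both ≈ 1.131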
In QM, these classical measures can be supplemented by the associated nonclassical contributions in the corresponding resultant QIT descriptors [7,45,63,64,65,66,67]. The intrinsic accuracy concept then naturally generalizes into the associated overall descriptor, a functional of the quantum state |ψ〉 itself. This generalized Fisher-type measure is defined by the expectation value of the Hermitian operator Î(r) [7,45] of the overall gradient information,
Î(r) = −4Δ = (2i∇)2 = (8m/ħ2) T̂(r),
related to the kinetic-energy operator T̂(r) of Equation (6). Using integration by parts then gives:
I[ψ] = 〈ψ|Î|ψ〉 = −4∫ψ(r)*Δψ(r) dr = 4∫|∇ψ(r)|2 dr = I[p] + 4∫p(r) [∇ϕ(r)]2 dr ≡ ∫p(r) [Ip(r) + Iϕ(r)] dr ≡ I[p] + I[ϕ] ≡ I[p, ϕ] = I[p] + (2m/ħ)2 ∫p(r)−1 j(r)2 dr ≡ I[p] + I[j] ≡ I[p, j].
The classical and nonclassical densities-per-electron of this information functional read:
Ip(r) = [∇p(r)/p(r)]2  and  Iϕ(r) = 4[∇ϕ(r)]2.
The quantum-information concept I[ψ] = I[p, ϕ] = I[p, j] thus combines the classical (probability) contribution I[p] of Fisher and the nonclassical (phase/current) supplement I[ϕ] = I[j]. The positive sign of the latter expresses the fact that a nonvanishing current pattern introduces more structural determinicity (order information) about the system, which also implies less state indeterminicity (disorder information). This dimensionless measure is seen to reflect the average kinetic energy T[ψ] = 〈ψ| T ^ |ψ〉:
I[ψ] = (8m/ħ2) T[ψ] ≡ σ T[ψ].
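The above decomposition of I[ψ] into its classical and nonclassical parts can be verified on a simple model state (an added sketch; a Gaussian modulus and a linear, plane-wave-like phase ϕ(x) = kx are assumed):

import numpy as np

s, k = 1.0, 2.0
x = np.linspace(-12.0, 12.0, 40001)
p = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
psi = np.sqrt(p) * np.exp(1j * k * x)        # assumed model state ψ = √p exp(ikx)

dpsi = np.gradient(psi, x)
I_psi = 4 * np.trapz(np.abs(dpsi)**2, x)     # resultant measure I[ψ] = 4∫|∇ψ|² dx
I_p = np.trapz(np.gradient(p, x)**2 / p, x)  # classical (probability) part I[p]
I_phi = 4 * k**2                             # nonclassical (phase/current) part I[ϕ]

print(I_psi, I_p + I_phi)                    # both ≈ 1/s² + 4k² = 17.0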
One similarly generalizes the entropy (uncertainty) concept of the disorder information in probability density, e.g., the global quantity of Shannon or the gradient descriptor of Fisher, by supplementing the relevant classical measure of the information contained in probability distribution with the corresponding nonclassical complement due to the state (positive) phase or the associated current pattern [7,8,9,10]. The resultant Shannon-type global-entropy measure then reads
S[ψ] = − 〈ψ|lnp + 2ϕ|ψ〉 = S[p] − 2∫p(r) ϕ(r) dr ≡ S[p] + S[ϕ] ≡ S[p, ϕ].
It includes the (positive) probability information IS[p] ≡ S[p] and (negative) nonclassical supplement S[ϕ] reflecting the state average phase. These entropy contributions also reflect the real and imaginary parts of the associated complex-entropy concept [8], the quantum expectation value of the non-Hermitian entropy operator S(r) = − 2lnψ(r),
S[p, ϕ] = − 2〈ψ|lnψ|ψ〉 ≡ S[p] + i S[ϕ].
The resultant gradient entropy similarly combines the (positive) Fisher probability information and (negative) phase contribution due to the current density:
M[ψ] = 〈ψ|(∇lnp)2 − (2∇ϕ)2|ψ〉 = I[p] − I[ϕ] ≡ M[p] + M[ϕ] ≡ M[p, ϕ].
The sign of the latter reflects an extra decrease of the state overall structure indeterminicity due to its nonvanishing current pattern.
The extrema of these resultant entropies identify the same optimum, equilibrium-phase solution ϕeq. ≥ 0 [7,63,64,65,66,67]:
{δS[ψ]/δψ*(r) = 0  or  δM[ψ]/δψ*(r) = 0}   ⇒   ϕeq.(r) = − (1/2) lnp(r).
This local “thermodynamic” phase generates the associated current contribution reflecting the negative probability gradient:
jeq.(r) = (ħ/m) p(r) ∇ϕeq.(r) = − [ħ/(2m)] ∇p(r).
The above one-electron development can be straightforwardly generalized into a general case of N-electron system in the specified (pure) quantum state |Ψ(N)〉, exhibiting the electron density ρ(r) = Np(r), where p(r) stands for the density probability (shape) factor. The corresponding N-electron information operator then combines terms due to each particle,
Î(N) = ∑i=1…N Î(ri) = (8m/ħ2) ∑i=1…N T̂(ri) ≡ σ T̂(N),
and determines the state overall gradient information,
I(N) = 〈Ψ(N)|Î(N)|Ψ(N)〉 = σ 〈Ψ(N)|T̂(N)|Ψ(N)〉 = σ T(N),
proportional to the expectation value T(N) of the system kinetic-energy operator T̂(N). The relevant separation of the modulus and phase components of such general N-electron states calls for a wavefunction yielding the specified electron density [52]. For example, this goal can be accomplished using the Harriman–Zumbach–Maschke (HZM) [87,88] construction of DFT. It uses N (complex) equidensity orbitals, each generating the molecular probability distribution p(r) and exhibiting the density-dependent spatial phases, f(r) = f[ρ; r], which safeguard the MO orthogonality.
Consider the Slater-determinant describing an electron configuration defined by N (singly) occupied spin MO,
ψ = {ψs} = (ψ1, ψ2, …, ψN),   {ns = 1},
Ψ(N) = |ψ1ψ2ψN|.
The kinetic-energy/gradient-information descriptors then combine additive contributions due to each particle:
T(N) = ∑s ns 〈ψs|T̂|ψs〉 ≡ ∑s ns Ts = (ħ2/8m) ∑s ns 〈ψs|Î|ψs〉 ≡ σ−1 ∑s ns Is.
In the analytical (LCAO MO) representation, with MO expressed as linear combinations of the (orthogonalized) atomic orbitals (AO) χ = (χ1, χ2, …, χk, …),
|ψ〉 = |χ〉C,   C = 〈χ|ψ〉 = {Ck,s = 〈χk|ψs〉},
the average gradient information in Ψ(N), for the unit matrix n = {ns δs,s′} = {δs,s′} of MO occupations, then reads
I(N) = ∑s ns 〈ψs|Î|ψs〉 = ∑k ∑l {∑s Ck,s ns (C†)s,l} 〈χl|Î|χk〉 ≡ ∑k ∑l γk,l Il,k = tr(γI).
Here, the AO matrix representation of the gradient-information operator,
I = {Ik,l = 〈χk|Î|χl〉 = σ 〈χk|T̂|χl〉 = σ Tk,l},
and the charge/bond-order (CBO) (density) matrix of LCAO MO theory,
γ = CnC† = 〈χ|ψ〉 n 〈ψ|χ〉 ≡ 〈χ|P̂ψ|χ〉,
is the associated matrix representation of the projection operator onto the occupied MO-subspace,
P̂ψ = N [∑s |ψs〉(ns/N)〈ψs|] ≡ N [∑s |ψs〉 ps 〈ψs|] ≡ N d̂,
proportional to the density operator d̂ of the configuration MO “ensemble”.
This expression for the average overall information in Ψ(N) thus assumes a thermodynamic-like form, as the trace of the product of the CBO matrix, the AO representation of the (occupation-weighted) MO projector determining the configuration density operator d̂, and the corresponding AO matrix of the resultant gradient information related to that of the kinetic energy of electrons. It has been argued elsewhere [7,32,33,34,35,36,37,38,39,40,41,42] that elements of the CBO matrix generate amplitudes of electronic “communications” between AO “events” in the molecule. Therefore, the average gradient information of Equation (24) is seen to represent the communication-weighted (dimensionless) kinetic-energy descriptor.
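The trace expression of Equation (24) is easily illustrated in a minimal LCAO model (an added sketch; the two-AO kinetic-energy integrals and the single occupied bonding MO are assumed, in atomic units, where σ = 8m/ħ2 = 8):

import numpy as np

sigma = 8.0                                # 8m/ħ² in atomic units
T_ao = np.array([[0.60, 0.15],             # assumed AO kinetic-energy integrals 〈χk|T̂|χl〉
                 [0.15, 0.60]])
I_ao = sigma * T_ao                        # AO matrix of the gradient information, I = σT

C = np.array([[1.0], [1.0]]) / np.sqrt(2)  # LCAO coefficients of the bonding MO (χ1 + χ2)/√2
n = np.array([[1.0]])                      # occupation of the single spin-MO
gamma = C @ n @ C.conj().T                 # charge/bond-order (CBO) matrix γ = CnC†

I_total = np.trace(gamma @ I_ao)           # average gradient information, tr(γI)
T_total = np.trace(gamma @ T_ao)           # average kinetic energy, tr(γT)
print(I_total, sigma * T_total)            # identical: the communication-weighted σT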
The SE (7) also determines a temporal evolution of the average resultant descriptor of the gradient-information content in the specified (pure) quantum state (see Equation (11)). The time derivative of this overall information functional I[ψ] is addressed in Appendix B.

3. Probing Formation of the Chemical Bond

The association between the overall gradient information and electronic kinetic energy suggests the use of molecular virial theorem in extracting the physical origins of the chemical bonding [89,90,91,92] and for understanding general reactivity rules [1,2,3,5]. The previous analyses [89,90,91] have focused on the interplay between the longitudinal (in the bond direction, along “z” coordinate) and transverse (perpendicular to the bond axis, due to coordinates “x” and “y”) components of electronic kinetic energy. The former appears as the true driving force of the covalent-bond formation and the accompanying electron-delocalization process at an early approach by the two atoms, while the latter reflects an overall transverse contraction of the electron density in the attractive field of both nuclei.
It is of interest to examine the global production (Equation (A20)) of the resultant gradient measure of the electronic information due to the equilibrium current of Equation (18),
σIeq. = −σMeq. ∝ − ∫jeq.(r)⋅∇v(r) dr ∝ ∫∇p(r)⋅∇v(r) dr,
which accompanies a formation of the covalent chemical bond A–B (Figure 1). Reference to this figure shows that in the axial (bond) section of the molecule ∇p(r)⋅∇v(r) < 0, thus confirming a decrease of the longitudinal contribution to the average structure information, i.e., an increase in the axial component of the overall gradient entropy (Equation (A24)) as a result of the chemical bond formation: σIeq.(axial) < 0 and σMeq.(axial) > 0. This accords with the chemical intuition: electron delocalization in the covalent chemical bond at its equilibrium length R = Re should produce a higher indeterminicity (disorder, entropy) measure and a lower level of the determinicity (order, information) descriptor, particularly in the axial bond region between the two nuclei.
Since the state overall gradient information reflects the system kinetic-energy content, one could indeed relate these conclusions to the known profiles of the longitudinal and transverse components of this energy contribution [89,90,91]. The former contribution effectively lowers the longitudinal inhomogeneity of molecular probability density, σIeq.(axial) < 0, particularly in the bond region between the two nuclei, while the (dominating) latter component implies an effective transverse contraction of the electron distribution, i.e., σIeq.(transverse) > 0. The bonded system thus exhibits a net increase in the probability inhomogeneity, i.e., a higher gradient information compared to the separated-atoms limit (SAL). This is independently confirmed by a lowering of the system overall potential-energy displacement ΔW(R) = W(R) − W(SAL) at the equilibrium bond-length Re,
ΔW(Re) = 2ΔE(Re) = −2ΔT(Re) < 0
Here,
ΔW(R) = ΔV(R) + [ΔUe(R) + ΔUn(R)] ≡ ΔV(R) + ΔU(R)
combines the (electron-nuclear) attraction (V) and repulsion (U = Ue + Un) energies between electrons (Ue) and nuclei (Un).
It is also of interest to examine variations of the resultant gradient information in specific geometrical displacements ΔR of this diatomic system. Its proportionality to the system kinetic-energy component again calls for using the molecular virial theorem, which allows one to partition the relative BO potential ΔE(R) = ΔT(R) + ΔW(R) into the SAL-related changes in the electronic kinetic energy [ΔT(R)] and its overall potential complement [ΔW(R)]. In the BO approximation the virial theorem for diatomics reads:
2ΔT(R) + ΔW(R) + R[dΔE(R)/dR] = 0.
It implies the following kinetic and potential energy components:
ΔT(R) = − ΔE(R) − R [dΔE(R)/dR] = − d[RΔE(R)]/dR   and
ΔW(R) = 2ΔE(R) + R [dΔE(R)/dR] = R−1 d[R2ΔE(R)]/dR.
Figure 2 presents qualitative plots of the BO potential ΔE(R) and its kinetic-energy contribution ΔT(R) in diatomics. The latter also reflects the associated displacement plots for the resultant gradient information ΔI(R) = σ ΔT(R). It follows from this qualitative diagram that, during a mutual approach by two constituent atoms, the kinetic-energy/gradient information is first diminished relative to SAL, due to the dominating longitudinal contribution related to Cartesian coordinate “z” (along the bond axis). However, at the equilibrium distance Re the resultant information already rises above the SAL value, due to the dominating increase in transverse components of the kinetic-energy/information (corresponding to coordinates “x” and “y” perpendicular to the bond axis). Therefore, at the equilibrium separation Re between atoms the bond-formation results in a net increase of the resultant gradient-information relative to SAL, due to—on average—more compact electron distribution in the field of both nuclei.
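This qualitative behavior is readily reproduced for a model BO potential (an added sketch; a Morse form of ΔE(R) with assumed parameters is used, in atomic units):

import numpy as np

De, a, Re = 0.17, 1.0, 1.4                      # assumed Morse parameters
R = np.linspace(0.8, 10.0, 5000)
dE = De * ((1 - np.exp(-a * (R - Re)))**2 - 1)  # model ΔE(R), relative to SAL

dT = -np.gradient(R * dE, R)                    # virial partition: ΔT(R) = −d[RΔE(R)]/dR
dW = np.gradient(R**2 * dE, R) / R              # ΔW(R) = R⁻¹ d[R²ΔE(R)]/dR

print(np.max(np.abs(dT + dW - dE)))             # consistency check: ΔT + ΔW = ΔE (≈ 0)
print(dT[np.argmin(np.abs(R - Re))])            # ≈ +De > 0: higher ΔT (and ΔI = σΔT) at Re
print(dT.min())                                 # < 0: the initial dip below SAL at large R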
Consider next the (intrinsic) reaction coordinate Rc, or the associated progress variable P = |Rc| measuring the arc length along this trajectory, for which the virial relations also assume the diatomic-like form. The virial theorem decomposition of the energy profile E(P) along Rc in the bimolecular reaction
A + B → R → C + D,
where the R denotes the transition-state (TS) complex, then generates the associated profile of its kinetic-energy component T(P), which also reflects the associated resultant gradient information I(P). Such an application of the molecular virial theorem to endo- and exo-ergic reactions is presented in the upper panel of Figure 3, while the energy-neutral case of such a chemical process, on a “symmetric” potential energy surface (PES), refers to a lower panel in the figure.
The (qualitative) Hammond postulate [80] of reactivity theory relates a general resemblance/proximity of the reaction TS complex R to either its substrates α ∈ (A, B) or products β ∈ (C, D) to the reaction energy ΔEr = E(Pprod.) − E(Psub.): in exo-ergic (ΔEr < 0) processes, R ≈ α, and in endo-ergic (ΔEr > 0) reactions, R ≈ β. Accordingly, for the vanishing reaction energy ΔEr = 0, the position of TS complex is expected to be located symmetrically between the reaction substrates and products. A reference to Figure 3 indeed shows that the activation barrier appears “early” in the exo-ergic reaction, e.g., H2 + F → H + HF, with the reaction substrates being only slightly modified in TS, R ≈ [A–B]. Accordingly, in the endo-ergic bond-breaking–bond-forming process, e.g., H + HF → H2 + F, the barrier is “late” along the reaction coordinate P and the activated complex more closely resembles the reaction products: R ≈ [C–D]. This qualitative statement has been subsequently given several more quantitative formulations and theoretical explanations using both the energetic and entropic arguments [93,94,95,96,97,98,99,100].
Previous virial-theorem analyses [1,2,3,5] have shown that this qualitative rule is fully indexed by the sign of the P-derivative of the average kinetic energy or of the resultant gradient information at TS complex. The energy profile along the reaction “progress” coordinate P, ΔE(P) = E(P) − E(Psub.) is again directly “translated” by the virial theorem into the associated displacement in its kinetic-energy contribution ΔT(P) = T(P) − T(Psub.), proportional to the corresponding change ΔI(P) = I(P) − I(Psub.) in the system resultant gradient information, ΔI(P) = σ ΔT(P),
ΔT(P) = −ΔE(P) − P [dΔE(P)/dP] = − d[PΔE(P)]/dP.
A reference to qualitative plots in Figure 3 shows that the related ΔT(P) or ΔI(P) criteria distinguish these two directions by the sign of their geometrical derivative at TS complex:
endo-direction:   (dI/dP) > 0 and (dT/dP) > 0, ΔEr > 0;
energy-neutral:   (dI/dP) = 0 and (dT/dP) = 0, ΔEr = 0;
exo-direction:   (dI/dP) < 0 and (dT/dP) < 0, ΔEr < 0.
This observation demonstrates that the RC derivative of the resultant gradient information at the TS complex, proportional to dT/dP, can indeed serve as an alternative detector of the reaction energetic character: its positive/negative values identify the endo/exo-ergic processes, exhibiting the late/early activation barriers, respectively, with the neutral case, ΔEr = 0 or dT/dP = 0, exhibiting an “equidistant” position of TS between the reaction substrates and products on a symmetric PES, e.g., in the hydrogen exchange reaction H + H2 → H2 + H.
The reaction energy ΔEr determines the corresponding change in the resultant gradient information, ΔIr = I(Pprod.) − I(Psub.) = σ ΔTr, proportional to ΔTr = T(Pprod.) − T(Psub.) = −ΔEr. The virial theorem thus implies a net decrease of the resultant gradient information in endo-ergic processes, ΔIr(endo) < 0, its increase in exo-ergic reactions, ΔIr(exo) > 0, and a conservation of the resultant gradient information in the energy-neutral chemical processes: ΔIr(neutral) = 0. One also recalls that the classical part of this information displacement probes an average inhomogeneity of electronic density. Therefore, the endo-ergic processes, requiring a net supply of energy to R, give rise to more diffused electron distributions in the reaction products, compared to the substrates. Accordingly, the exo-ergic transitions, which release the energy from R, generate more compact electron distributions in the products, and no such change is predicted for the energy-neutral case.

4. Reactivity Criteria

The grand-ensemble basis of populational derivatives of the energy or information descriptors in the externally open molecular systems [1,2,3,53,101,102] has been briefly summarized in Appendix C. The equilibrium energy function
E[D̂eq.] = E(μ, T; v) = ∑i ∑j Pji(μ, T; v) Eji,   D̂eq. = ∑i ∑j |ψji〉 Pji(μ, T; v) 〈ψji| ≡ D̂(μ, T; v),
is determined by the optimum probabilities {Pji(μ, T; v)} of the ensemble stationary states {|ψji〉}, eigenstates of the Hamiltonians {Ĥ(Ni, v)}: Ĥ(Ni, v)|ψji〉 = Eji |ψji〉. These state probabilities correspond to the grand-potential minimum with respect to the ensemble density operator (see Equations (A29) and (A38)):
minD̂ Ω[D̂] = Ω[D̂eq.].
Thermodynamic energy of Equation (33) identifies the two Lagrange multipliers involved in this variational rule as corresponding partial derivatives with respect to the constraint values:
μ = (∂〈E〉/∂〈N〉)S|D̂eq.   and   T = (∂〈E〉/∂〈S〉)N|D̂eq.
The minimum-energy principle of Equation (34) (see also Equation (A38)) can be alternatively interpreted as the associated extremum rule for the overall gradient information [1,2,3,19,45],
σ minD̂ Ω[D̂] = σ Ω[D̂eq.] = I[D̂eq.] + (8m/ħ2){W[D̂eq.] − μ N[D̂eq.] − T S[D̂eq.]},
where the ensemble-average value of the system potential energy again combines the electron-nuclear attraction (V) and the repulsion (U) contributions: W[D̂eq.] = V[D̂eq.] + U[D̂eq.]. This gradient-information/kinetic-energy principle is seen to contain the additional constraint of the fixed overall potential energy, 〈W〉ens. = W, multiplied by the Lagrange multiplier
λW = σ = (∂〈I〉/∂〈W〉)N,S|D̂eq. ≡ κ.
It also includes modified “intensities” associated with the remaining constraints: information potential
λN = σμ = (∂〈I〉/∂〈N〉)W,S|D̂eq. ≡ ξ,   and
information “temperature”
λS = σT = (∂〈I〉/∂〈S〉)W,N|D̂eq. ≡ τ.
The conjugate thermodynamic principles, for the constrained extrema of the ensemble-average energy,
δ(E[D̂] − μ N[D̂] − T S[D̂])|D̂eq. = 0,
and thermodynamic gradient information,
δ(I[D̂] − κ W[D̂] − ξ N[D̂] − τ S[D̂])|D̂eq. = 0,
have the same state-probability solutions [1,2,3]. This manifests the physical equivalence of the energetic and entropic principles for determining the equilibrium states in thermodynamics [4].
The equilibrium value of resultant gradient information, given by the weighted expression in terms of the equilibrium probabilities in the grand-canonical mixed state,
〈I〉ens. ≡ I[D̂eq.] = tr(D̂eq. Î) = ∑i ∑j Pji(μ, T; v) 〈ψji|Î|ψji〉 ≡ ∑i ∑j Pji(μ, T; v) Iji,   Iji = σ 〈ψji|T̂|ψji〉 ≡ σ Tji,
is related to the ensemble-average kinetic energy 〈T〉:
〈T〉ens. ≡ 〈T〉 = tr(D̂eq. T̂) = ∑i ∑j Pji(μ, T; v) 〈ψji|T̂|ψji〉 = ∑i ∑j Pji(μ, T; v) Tji = σ−1〈I〉.
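For a hypothetical few-state model these ensemble averages can be illustrated directly (an added sketch; the familiar grand-canonical Gibbs form of the equilibrium probabilities, Pji ∝ exp[−(Eji − μNi)/(kBT)], and all state data are assumed, in atomic units with σ = 8):

import numpy as np

mu, kT, sigma = -0.20, 0.02, 8.0
# assumed stationary states: (electron number Ni, energy Eji, kinetic energy Tji)
states = [(9, -1.10, 1.00), (9, -0.95, 0.90), (10, -1.32, 1.15), (10, -1.20, 1.05)]

x = np.array([E - mu * N for N, E, Tkin in states])
w = np.exp(-(x - x.min()) / kT)            # grand-canonical (Gibbs) weights
P = w / w.sum()                            # equilibrium state probabilities Pji

N_avg = sum(P[i] * s[0] for i, s in enumerate(states))   # ensemble-average 〈N〉
T_avg = sum(P[i] * s[2] for i, s in enumerate(states))   # ensemble-average 〈T〉
I_avg = sigma * T_avg                                    # ensemble-average 〈I〉 = σ〈T〉

print(P, N_avg, I_avg)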
In this grand-ensemble approach the system chemical potential appears as the first (partial) populational derivative [53,73,74,101,102,103,104,105] of the system average energy. This interpretation also applies to the diagonal and mixed second derivatives of equilibrium electronic energy, which involve differentiation with respect to the electron-population variable 〈N〉. In this energy representation, the chemical hardness reflects the 〈N〉-derivative of the chemical potential [53,73,74,106],
η = (∂2〈E〉/∂〈N〉2)S|D̂eq. = (∂μ/∂〈N〉)S|D̂eq. > 0,
while the information hardness measures the 〈N〉-derivative of the information potential:
ω = (∂2〈I〉/∂〈N〉2)W,S|D̂eq. = (∂ξ/∂〈N〉)W,S|D̂eq. = ση > 0.
By the Maxwell cross-differentiation relation, the mixed derivative of the energy,
f(r) = (∂2〈E〉/∂〈N〉∂v(r))S|D̂eq. = (∂μ/∂v(r))S|D̂eq. = (∂ρ(r)/∂〈N〉)S|D̂eq.,
measuring the global Fukui Function (FF) [53,73,74,107], can be alternatively interpreted as either the density response per unit populational displacement, or the global chemical-potential response per unit local change in the external potential. The associated mixed derivative of the resultant gradient information then reads:
φ(r) = (∂2〈I〉/∂〈N〉∂v(r))W,S|D̂eq. = (∂ξ/∂v(r))W,S|D̂eq. = σ f(r).
The positive signs of the diagonal descriptors assure the external stability of a molecule with respect to external flows of electrons, between the molecular system and its electron reservoir. Indeed, they imply an increase (a decrease) of the global energetic and information “intensities” conjugate to 〈N〉 = N, the chemical (μ) and information (ξ) potentials, in response to a perturbation created by an electron inflow (outflow) ΔN. This is in accordance with the familiar Le Châtelier and Le Châtelier–Braun principles of thermodynamics [4], that the secondary (spontaneous) responses in system intensities to an initial population displacement diminish effects of this primary perturbation.
Since reactivity phenomena involve electron flows between the mutually open substrates, only in such generalized, grand-ensemble framework can one precisely define the relevant CT criteria, determine the hypothetical “states” of subsystems, and eventually measure the effects of their mutual interaction. The open microscopic systems require the mixed-state description, in terms of the ensemble-average physical quantities, capable of reflecting the externally imposed thermodynamic conditions and defining the infinitesimal populational displacements invoked in reactivity theory. In this ensemble approach, the energetic and information principles are physically equivalent, giving rise to the same equilibrium probabilities. This basic equivalence is consistent with the alternative energetic and entropic principles invoked in equilibrium thermodynamics of macroscopic systems [4].

5. Donor-Acceptor Systems

In reactivity considerations one conventionally recognizes several hypothetical stages of chemical processes involving either the mutually closed (nonbonded, disentangled) or open (bonded, entangled) reactants α = {A, B} [1,2,3,73,74], e.g., substrates in a typical bimolecular reactive system R = A–B involving the acidic (A, electron acceptor) and basic (B, electron donor) subsystems. The nonbonded status of these fragments, when they conserve their initial (integer) overall numbers of electrons {Nα = Nα0} in the isolated (separated) reactants {α0}, is symbolized by the solid vertical line, e.g., in the intermediate, polarized reactive system R+ ≡ (A+|B+) combining the internally polarized but mutually closed subsystems. It should be emphasized that only due to this mutual closure does the substrate identity remain a meaningful concept. Their descriptors in the final, equilibrium-reactive system R* ≡ (A*¦B*) ≡ R, combining the mutually open (bonded) fragments, as symbolized by the vertical broken line separating the two subsystems, can be inferred only indirectly [1,2,3], by externally opening the two mutually closed subsystems of R+ with respect to their separate (macroscopic) electron reservoirs {ℛα} in the composite polarized system
R+ = (ℛA¦A+|B+¦ℛB) ≡ [A+|B+].
The subsystem densities {ρα = Nα pα}, with pα denoting the internal probability distribution in fragment α, are “frozen” in the promolecular reference R0 = (A0|B0) consisting of the isolated-reactant distributions {ρα0 = Nα0 pα0} shifted to their actual positions in the “molecular” system R. The polarized reactive system R+ combines the relaxed subsystem densities, modified in presence of the reaction partner at finite separation between both subsystems: {ρα+ = Nα+pα+, Nα+ = ∫ρα+dr = Nα0}. In the global equilibrium state of R as a whole, these polarized subsystem densities are additionally modified by the effective inter-reactant CT: {ρα* = Nα*pα*, Nα* = ∫ρα*dr ≠ Nα0}.
The overall electron density in the whole R+ is given by the sum of reactant densities, polarized due to “molecular” external potential v = vA + vB combining contributions due to the fixed nuclei in both substrates at their final mutual separation,
ρR+ ≡ NR pR+ = ρA+ + ρB+ ≡ NA+ pA+ + NB+ pB+,   Nα+ = ∫ρα+dr,   ∑α Nα+ = NR.
Here, {pα+ = ρα+/Nα+} stand for the internal probability densities in such promoted fragments, and the global probability distribution reflects the “shape” factor of the overall electron density,
pR+ = ρR+/NR = (NA+/NR) pA+ + (NB+/NR) pB+
≡ PA+ pA+ + PB+ pB+,   ∫pR+dr = PA+ + PB+ = 1,
where condensed reactant probabilities {Pα+ = Nα+/NR = Nα0/NR = Pα0} denote fragment shares in NR. At this polarization stage, both fragments exhibit internally equalized chemical potentials {μα+ = μ[Nα0, v]}, different from the separate-reactant levels {μα0 = μ[Nα0, vα]}.
The two molecular subsystems lose their identity in the bonded status, as the mutually open parts of the externally closed reactive system R*, which allows for the inter-fragment (intra-R) flows of electrons. In such a global equilibrium each “part” effectively extends over the whole molecular system since the hypothetical boundary defining the fragment identity no longer exists. Both subsystems then effectively exhaust the molecular electron distribution, and their electron populations are equal to the global number of electrons,
ρA* = ρB* = ρR,   {μα* = μR ≡ μ[NR, v],   Nα* = NR,   pα* = ρR/NR ≡ pR},
and subsystem chemical potentials in R are equalized at molecular level μR: {μα* = μR}.
One can contemplate, however, the external flows of electrons, between the mutually closed (nonbonded) but externally open reactants and their separate (macroscopic) reservoirs {ℛα}. The mutual closure then implies the relevancy of subsystem identities established at the polarization stage of R+, while the external openness in the composite subsystems {α+ = (α+¦ℛα+)} allows one to independently “regulate” the external chemical potentials of both parts, {μα+ = μ(ℛα)}, and hence also their average densities {ρα+ = Nα+pα+} and electron populations Nα+ = ∫ρα+dr. In particular, the substrate chemical potentials equalized at the molecular level in both subsystems, {μα+ = μ[Nα*, v] = μR ≡ μ[NR, v]}, which then also describes a common molecular reservoir {ℛα(μR) = ℛ(μR)} coupled to both reactants, (ℛ(μR)¦A*¦B*), formally define the global equilibrium state in the molecular part R* = (A*¦B*) of the composite system which (indirectly) represents the global equilibrium in R as a whole:
R* = R+(μR) ≡ [A*(μR)|B*(μR)]
= [ℛ(μR)¦A+(μR)|B+(μR)¦ℛ(μR)] ≡ [ℛ(μR)¦A*(μR)¦B*(μR)] ≡ (ℛ*¦R*).
This reactive system indeed implies an effective mutual openness of both reactants, realized through the direct external openness of both substrates to a common (molecular) reservoir ℛ(μR). It allows for an effective donor(B)→acceptor(A) flow of electrons, between subsystems of the molecular fragment R* of R*, while retaining the reactant identities assured by the direct mutual closure of the polarized molecular part R+ in R+.
The final reactant distributions {ρα* = ρα+(μR)} in R* and the associated electron populations {Nα* = Nα+(μR)} are then modified by effects of the inter-reactant CT
NCT = NA*NA0 = NB0NB* > 0,
for the conserved overall (average) number of electrons in the globally isoelectronic processes in the reactive system as a whole:
NA* + NB* ≡ NR* = N(μR) = NR = NA+ + NB+ ≡ NR+ = NA0 + NB0 ≡ NR0.
Density changes due to these equilibrium redistributions of electrons are indexed by the corresponding in situ FF. In CSA [73,74], one introduces the reactant-resolved FF matrix of the substrate density responses to displacements in the fragment electron populations, for the fixed (molecular) external potential (geometry),
f+(r) = {fα,β(r) = [∂ρβ+(r)/∂Nα]v},
which generate the FF indices of reactants in the B→A CT:
fαCT(r) = ∂ρα+(r)/∂NCT = fα,α(r) − fβ,α(r);   (α, β ≠ α) ∈ {A, B}.
One recalls that these relative responses of reactants eventually combine into the corresponding global CT derivative of the reactive system as a whole,
FRCT = ∂ρR+/∂NCT = ∑α=A,Bβ=A,B (∂ρβ+/∂Nα) (∂Nα/∂NCT)
= (fA,AfB,A) − (fB,BfA,B) ≡ fACTfBCT,
which represents the populational sensitivity of R* with respect to the effective internal CT, between the externally open but mutually closed reactants.
To summarize, the fragment identity remains a meaningful concept only for the mutually closed (nonbonded) status of the acidic and basic reactants, e.g., in the polarized reactive system R+ or in the R+ part of R+. The global equilibrium in R as a whole, R = R*, combining the effectively “bonded”, externally open but mutually closed subsystems {α*} in R*, accounts for the extra CT-induced polarization of reactants compared to R+. Descriptors of this state, of the mutually “bonded” reactants, can be inferred only indirectly, by examining the chemical potential equalization in the equilibrium composite system R*. Similar external reservoirs are involved when one examines independent population displacements on reactants, e.g., in defining the fragment chemical potentials and their hardness tensor in the substrate fragment of R+. In this hypothetical chain of reaction “events”, the polarized system R+ thus appears as the launching stage for the subsequent CT and the accompanying induced polarization, after the hypothetical barrier for the flow of electrons between subsystems has been effectively lifted.
Thus, the equilibrium case R* of R+ also represents the effectively open reactants in R*. The equilibrium substrates {α*} indeed display the final equilibrium densities {ρα* = ρα(μR)} after the B→A CT, giving rise to the molecular electron distribution
ρA* + ρB* = ρA+(μR) + ρB+(μR) = ρR(μR) ≡ ρR
and the associated populations {∫ρα+(μR) dr = Nα+(μR) = Nα*} corresponding to the chemical potential equalization in R = R* as a whole: μA* = μB* = μR. One observes that the reactant chemical potentials have not been equalized at the preceding polarization stage in the molecular part R+ = (A+|B+) of a general composite system
R+ = (A+|B+) = (ℛA+¦A+|B+¦ℛB+),   {ℛα+ = ℛα[μα+]},
when μA+[ρA+] < μB+[ρB+].
In the polarized reactive system, the fragment chemical potentials μR+ = {μα+ = μα[Nα0, v]} and the resulting elements of the hardness matrix ηR+ = {ηα,β} represent populational derivatives of the system average electronic energy in reactant resolution, Ev({Nβ}). They are properly defined in R+, calculated for the fixed external potential v reflecting the “frozen” molecular geometry. These quantities represent the corresponding partials of the system ensemble-average energy with respect to ensemble-average populations {Nα} on subsystems in the mutually closed (externally open) composite subsystems {α+ = (α+¦ℛα+)} in R+ = (A+|B+):
μα ≡ ∂Ev({Nγ})/∂Nα,   ηα,β = ∂2Ev({Nγ})/∂Nα∂Nβ = ∂μα/∂Nβ.
The global reactivity descriptors, of the R = (A*¦B*) ≡ R* part of R*, similarly involve differentiations with respect to the average number of electrons of R in the combined system R* = (R*¦ℛ*) = (A*¦B*¦ℛ*):
μR = ∂Ev(NR)/∂NR,  ηR = ∂2Ev(NR)/∂NR2 = ∂μR/∂NR.
The optimum amount of the (fractional) CT is determined by the populational “force”, measuring the difference between chemical potentials of the polarized acidic and basic reactants which defines the effective CT gradient,
μCT = ∂Ev(NCT)/∂NCT = μA+ − μB+ = σ−1[ξA+ − ξB+] ≡ σ−1ξCT < 0,
and the in situ hardness (ηCT) or softness (SCT) for this process,
ηCT = ∂μCT/∂NCT = (ηA,A − ηA,B) + (ηB,B − ηB,A) ≡ ηAR + ηBR = SCT−1
= σ−1 ∂ξCT/∂NCT = σ−1 ωCT,
representing the effective CT Hessian and its inverse, respectively, proportional to the CT information hardness Hessian ωCT. The optimum amount of the inter-reactant CT,
NCT = −μCT/ηCT = −ξCT/ωCT > 0,
then generates the associated second-order stabilization energy due to CT,
ECT = μCT NCT/2 = − μCT2/(2ηCT) = σ−1 [− ξCT2/(2ωCT)] ≡ σ−1 ICT < 0,
proportional to the associated change ICT in the resultant gradient information.
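A simple numerical illustration of these in situ descriptors is given below (an added sketch; all chemical potentials and hardnesses are assumed values, in atomic units, used only to show how NCT, ECT and the associated information quantities follow from the above relations):

sigma = 8.0                          # 8m/ħ² in atomic units

mu_A, mu_B = -0.30, -0.18            # assumed chemical potentials of the polarized A+ and B+
eta_AR, eta_BR = 0.45, 0.40          # assumed effective hardnesses of the two reactants in R

mu_CT = mu_A - mu_B                  # in situ populational "force" for B→A CT (< 0)
eta_CT = eta_AR + eta_BR             # in situ hardness (CT Hessian)

N_CT = -mu_CT / eta_CT               # optimum amount of transferred electrons (> 0)
E_CT = -mu_CT**2 / (2 * eta_CT)      # second-order CT stabilization energy (< 0)

xi_CT = sigma * mu_CT                # in situ information potential
omega_CT = sigma * eta_CT            # in situ information hardness
I_CT = sigma * E_CT                  # associated change in the resultant gradient information

print(N_CT, E_CT, I_CT)              # ≈ 0.141, −0.0085, −0.068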

6. HSAB Principle Revisited

The physical equivalence of reactivity concepts formulated in the energy and resultant gradient-information representations has direct implications for OCT, in which one treats a molecule as an information network propagating signals of the AO origins of electrons in the bond system determined by the configuration occupied MO. It has been argued that elements of the CBO matrix γ = {γk,l} of Equation (26), weighting factors in Equation (24), determine amplitudes of conditional probabilities defining molecular (direct) communications between AO. Entropic descriptors of such an information channel generate the entropic bond orders and measures of their covalent/ionic components, which ultimately facilitate an IT understanding of molecular electronic structure in chemical terms [7,19,20,21,32,33,34,35,36,37,38,39,40,41,42]. The communication “noise” (orbital indeterminicity) in this network, measured by the channel conditional entropy, is due to the electron delocalization in the bond system of a molecule. It represents the system overall bond “covalency”, while the channel information “capacity” (orbital determinicity), reflected by the mutual information of the molecular communication network, measures its resultant bond “ionicity”. The more scattering (indeterminate) the molecular information system, the higher its covalent character; a more deterministic (less noisy) channel thus represents a more ionic molecular system. These two bond attributes thus compete with one another.
In chemistry, the bond covalency, a common possession of electrons by interacting atoms, is indeed synonymous with the electron delocalization generating communication noise. Classical examples of purely covalent interactions are bonds connecting identical atoms, e.g., hydrogens in H2 or carbons in ethylene, when the interacting AO in the familiar MO diagrams of chemistry exhibit the same AO energies. The bond ionicity accompanies large differences in electronegativities, generating a substantial CT between the interacting atoms. Such bonds correspond to a wide separation of AO energies in MO diagrams. The ionic bond component diminishes noise and introduces more determinicity into AO communications, thus representing a bond mechanism competitive with the bond covalency.
One of the celebrated (qualitative) rules of chemistry deals with stability preferences in molecular coordinations. The HSAB principle [84,85,106] predicts that chemically hard (H) acids prefer to coordinate hard bases in the [HH] complexes A–B, and soft (S) acids prefer to coordinate soft bases in the [SS] complexes, whereas the “mixed” [HS] or [SH] coordinations, of hard acids with soft bases or of soft acids with hard bases, are relatively unstable. Little is known about the communication implications of this principle. In such a communication perspective on reactive systems the H and S reactants correspond to internally ionic (deterministic) and covalent (noisy) substrate channels, respectively. The former involves relatively localized communications between AO, while the latter corresponds to delocalized probability scatterings between the basis states.
In the reactivity context, the following additional questions arise:
What is an overall character of communications responsible for the mutual interaction between reactants?
How does the HSAB rule influence the inter-reactant propagations of information, i.e., how do the [HH] and [SS] preferences shape the inter-reactant communications in these complexes?
Do the S substrates in [SS] complex predominantly interact “covalently”, and H substrates in the [HH] complex “ionically”?
In the frontier electron (FE) [81,82,83] approach to molecular interactions and CT phenomena, the orbital energy of the substrate highest occupied MO (HOMO) determines its donor (basic) level of the chemical potential, while the lowest unoccupied MO (LUMO) energy establishes its acceptor (acidic) capacity (see Figure 4). The HOMO–LUMO energy gaps in subsystems then reflect the substrate molecular hardnesses. The interaction between the reactant frontier MO of comparable orbital energies is predominantly covalent (chemically “soft”) in character, while that between the subsystem MO of distinctly different energies becomes mostly ionic (chemically “hard”). Figure 4 summarizes the alternative relative positions of the donor (HOMO) levels of the basic reactant, relative to the acceptor (LUMO) levels of its acidic partner, for all admissible hardness combinations in the reactive system R = A–B. In view of the proportionality relations between the energetic and information reactivity criteria, these relative MO energy levels also reflect the corresponding information potential and hardness descriptors of subsystems, including the in situ derivatives driving the information transfer between reactants.
A magnitude of the ionic, CT-stabilization energy in A–B systems is then determined by the corresponding in situ populational derivatives in R,
Δεion. = |ECT| = μCT2/(2ηCT) > 0,
where μCT and ηCT stand for the effective chemical potential and hardness descriptors of R involving the FE of reactants. Since the donor/acceptor properties of reactants are already implied by their known relative acidic or basic character, one applies the biased estimate of the CT chemical potential. In FE approximation, the chemical potential difference μCT for the effective internal B→A CT then reads (see Figure 4):
μCT(B→A) = μA(−) − μB(+) = εA(LUMO) − εB(HOMO) ≈ IB − AA > 0.
It determines the associated first-order energy change for the B→A transfer of NCT electrons:
ΔEB→A(NCT) = μCT(B→A) NCT > 0.
The CT chemical potential of Equation (66) thus combines the electron-removal potential of the basic reactant, i.e., the negative of its ionization potential IB = E(B+1) − E(B0) > 0,
μB(+) = εB(HOMO) ≈ − IB,
and the electron-insertion potential of the acidic substrate, i.e., the negative of its electron affinity
AA = E(A0) − E(A−1) > 0,
μA(−) = εA(LUMO) ≈ −AA.
The energy of the CT disproportionation process,
[A–––B] + [A–––B] → [A−1–––B+1] + [A+1–––B−1],
then generates the (unbiased) finite-difference measure of the effective hardness descriptor for this implicit CT:
ηCT = (IA − AA) + (IB − AB)
≈ [εA(LUMO) − εA(HOMO)] + [εB(LUMO) − εB(HOMO)]
= ηA + ηB > 0.
These in situ derivatives ultimately determine a magnitude of the CT stabilization energy of Equation (65), which reflects the ionic part of the overall interaction energy,
Δεion. = μCT2/(2ηCT) = [εA(LUMO) − εB(HOMO)]2/[2(ηA + ηB)].
In the FE framework of Figure 4, the CT-interaction energy is thus proportional to the squared gap between the LUMO orbital energy of the acidic reactant and the HOMO level of the basic substrate. This ionic interaction is thus predicted to be strongest in [HH] pairs of subsystems and weakest in [SS] arrangements, with the mixed [HS] and [SH] combinations representing the intermediate magnitudes of this ionic-stabilization effect.
It should be realized, however, that ionic and covalent energy contributions complement each other in the resultant bond energy [85]. Therefore, the [SS] complex, for which the energy gap εA(LUMO)−εB(HOMO) between the interacting orbitals reaches the minimum value, implies the strongest covalent stabilization in the reactive complex. Indeed, the lowest (bonding) energy level εb of this FE interaction, corresponding to the bonding combination of the (positively overlapping) frontier MO of subsystems,
φb = Nb [φB(HOMO) + λφA(LUMO)], S = 〈φA(LUMO)|φB(HOMO)〉 > 0,
then exhibits the maximum bonding energy due to the covalent interaction:
Δεcov. = εB(HOMO) − εb > 0.
It follows from the familiar secular equations of the Ritz method that this covalent energy can be approximated by the limiting MO expression
Δεcov. ≅ (β − εb S)2/[εA(LUMO) − εB(HOMO)],
where the matrix element of the system electronic Hamiltonian, which couples the two states,
β = 〈φA(LUMO)|Ĥ|φB(HOMO)〉
is expected to be proportional to the overlap integral S between the frontier MO involved. It indeed follows from Equation (75) that the maximum covalent component of the inter-reactant chemical bond is expected in interactions between soft, strongly overlapping reactants, since then the numerator assumes the highest value while the denominator reaches its minimum. For the same reason one predicts the smallest covalent stabilization in interactions between the hard, weakly overlapping substrates, with the mixed hardness combinations giving rise to intermediate bond covalencies.
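These FE estimates are easily compared numerically (an added sketch; all orbital energies, hardnesses, the coupling β, the overlap S, and the bonding level εb are hypothetical values, in eV, chosen only to contrast a hard–hard with a soft–soft pair):

def fe_stabilizations(e_lumo_A, e_homo_B, eta_A, eta_B, beta, S, e_b):
    gap = e_lumo_A - e_homo_B                 # εA(LUMO) − εB(HOMO) > 0
    d_ion = gap**2 / (2.0 * (eta_A + eta_B))  # ionic (CT) stabilization, cf. Equation (65)
    d_cov = (beta - e_b * S)**2 / gap         # covalent stabilization, cf. Equation (75)
    return d_ion, d_cov

# hard acid + hard base: wide inter-reactant gap, large hardnesses, weak FE overlap
print(fe_stabilizations(4.0, -9.0, 8.0, 7.0, beta=-0.3, S=0.02, e_b=-9.1))
# soft acid + soft base: narrow gap, small hardnesses, strong FE overlap
print(fe_stabilizations(-1.0, -5.0, 3.0, 2.5, beta=-3.0, S=0.25, e_b=-6.0))
# for these assumed data the ionic term dominates in the [HH] pair,
# while the covalent term is largest in the [SS] pair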
To summarize, the [HH] complex exhibits the maximum ionic stabilization, the [SS] complex the maximum covalent interaction, while the mixed combinations of reactant hardnesses in [HS] and [SH] coordinations exhibit a mixture of moderate covalent and ionic bond components between the acidic and basic subsystems. Therefore, communications representing the inter-reactant bonds between the chemically soft (covalent) reactants are also expected to be predominantly “soft” (delocalized, indeterministic) in character, while those between the chemically hard (ionic) subsystems are predicted to be dominated by the “hard” (localized, deterministic) propagations in the communication system for R as a whole.
The electron communications between reactants {α = A, B} in the acceptor-donor reactive system R = A–B are determined by the corresponding matrix of conditional probabilities (or of their amplitudes) in AO resolution, which can be partitioned into diagonal blocks of the intra-reactant (internal) communications within individual substrates and off-diagonal blocks of the inter-reactant (external) communications between different subsystems:
[R→R] = {[α→β]} = {[α→α] δα,β} + {[α→β](1 − δα,β)} = {intra} + {inter}.
The [SS] complexes combining the “soft” (noisy, delocalized) internal blocks of such probability propagations imply a similar covalent character of the external blocks of electron AO communications between reactants, i.e., strongly indeterministic scatterings between subsystems: {intra-S} ⇒ {inter-S}. The “hard” (ionic) internal channels are similarly associated with the ionic (localized) external communications: {intra-H} ⇒ {inter-H}. This observation adds a new communication angle to the classical HSAB principle of chemistry.
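This block structure can be made concrete with a small sketch: a toy 4×4 matrix of conditional AO probabilities for R = A–B is split into its intra- and inter-reactant parts using a hypothetical AO-to-reactant assignment (both the matrix and the assignment are purely illustrative).

import numpy as np

# Sketch of the intra/inter block partition written above: the first two AO are
# assigned to reactant A, the last two to B. Each row of P sums to 1.
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.1, 0.5, 0.3],
              [0.1, 0.1, 0.3, 0.5]])

owner = np.array([0, 0, 1, 1])             # reactant label of each AO (A=0, B=1)
same = owner[:, None] == owner[None, :]    # True for intra-reactant AO pairs

P_intra = np.where(same, P, 0.0)           # {[alpha->alpha]} diagonal blocks
P_inter = np.where(~same, P, 0.0)          # {[alpha->beta]} off-diagonal blocks

assert np.allclose(P, P_intra + P_inter)   # {intra} + {inter} recovers the channel
print("intra-reactant weight per AO:", P_intra.sum(axis=1))
print("inter-reactant weight per AO:", P_inter.sum(axis=1))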

7. Regional HSAB versus Complementary Coordinations

One of the still problematic issues in reactivity theory is the most favorable mutual arrangement of the acidic and basic parts of molecular reactants in donor-acceptor systems. It appears that the global HSAB preference is no longer valid regionally, in interactions between fragments of reactants, where the complementarity principle [108,109] establishes the preferred arrangement between the acidic and basic parts of both substrates. These more subtle reactivity preferences result from the induced electron flows in reactive systems, reflecting responses to the primary displacements in molecular complexes. Such flow patterns can be diagnosed, estimated, and compared using either the energetical or information reactivity criteria defined above.
Consider the reactive A–B complex consisting of the basic reactant B = (aB|…|bB) ≡ (aB|bB) and the acidic substrate A = (aA|…|bA) ≡ (aA|bA), where aX and bX denote the acidic and basic parts of X, respectively. The acidic (electron acceptor) part is relatively hard, i.e., less responsive to external perturbations, exhibiting lower values of the fragment FF descriptor, while the basic (electron donor) fragment is relatively soft and more polarizable, as reflected by its higher density or population response descriptors. The acidic part aX exerts an electron-accepting (stabilizing) influence on the neighboring part of another reactant Y, while the basic fragment bX produces an electron-donor (destabilizing) effect on a fragment of Y in its vicinity.
There are two ways in which both reactants can mutually coordinate in the reactive complexes. In the complementary (c) arrangement of Figure 5,
Rc ≡ [aA---bB; bA---aB]
the reactants orient themselves in such a way that geometrically accessible a-fragment of one reactant faces the geometrically accessible b-fragment of the other substrate. This pattern follows from the maximum complementarity (MC) rule of chemical reactivity [108,109], which reflects an electrostatic preference: an electron-rich (repulsive, basic) fragment of one reactant prefers to face an electron-deficient (attractive, acidic) part of the reaction partner. In the alternative regional HSAB-type structure of Figure 6,
RHSAB ≡ [aA---aB; bA---bB]
the acidic (basic) fragment of one reactant faces the like-fragment of the other substrate.
The complementary complex, in which the “excessive” electrons of bX are placed in the attractive field generated by the electron “deficient” aY, is expected to be favored electrostatically since the other arrangement produces regional repulsions between two acidic and two basic sites of reactants. Additional rationale for this complementary preference over the regional HSAB alignment comes from examining charge flows created by the dominating shifts in the site chemical potential due to the presence of the (“frozen”) coordinated site of the nearby part of the reaction partner. At finite separations between the two subsystems, these displacements trigger the polarizational flows {PX} shown in Figure 5 and Figure 6, which restore the internal equilibria in subsystems, initially displaced by the presence of the other reactant.
In Rc, the harder (acidic) site aY initially lowers the chemical potential of the softer (basic) site bX, while bY raises the chemical potential level of aX. These shifts trigger the internal polarizational flows {aX→bX} in both reactants, which enhance the acceptor capacity of aX and the donor ability of bX, thus creating more favorable conditions for the subsequent inter-reactant CT of Figure 5. A similar analysis of RHSAB (Figure 6) predicts the bX→aX polarizational flows, which lower the acceptor capacity of aX and the donor ability of bX, i.e., produce an excess electron accumulation on aX and a stronger electron depletion on bX, thus creating less favorable conditions for the subsequent inter-reactant CT.
The complementary preference also follows from the electronic stability considerations, in the spirit of the familiar Le Châtelier–Braun principle of ordinary thermodynamics [4]. In Figure 5 and Figure 6, the CT responses follow the preceding polarizations of reactants, i.e., the equilibrium responses to the displacements {ΔvX} in the external potential of subsystems. In the stability considerations one first assumes the primary (inter-reactant) CT displacements {ΔCT1, ΔCT2} of Figure 5 and Figure 6, in the internally “frozen” but externally open reactants, and then examines the induced secondary (intra-reactant) relaxational responses {IX} to these populational shifts in the CT-perturbed substrates.
Let us first examine the CT-displaced complementary structure Rc,
RcCT ≡ [aA ←ΔCT1− bB; bA −ΔCT2→ aB], supplemented by the induced internal relaxation channels IA (within A) and IB (within B).
It corresponds to the primary CT perturbations of Figure 5:
[Δ(CT1) = ΔN(aA) = −ΔN(bB)] > [Δ(CT2) = ΔN(aB) = −ΔN(bA)].
In accordance with the Le Châtelier principle, an inflow/outflow of electrons to/from a given site x increases/decreases the site chemical potential, as indeed reflected by the positive value of the site diagonal hardness descriptor
ηx,x = ∂μx/∂Nx ≡ ηx > 0.
The initial perturbations of the partial CT flows {ΔCTk} thus create the following shifts in the site chemical potentials, compared to the initially equalized levels in isolated reactants A0 = (aA0¦bA0) and B0 = (aB0¦bB0),
[ΔμaA(CT1) > 0] > [ΔμbA(CT2) < 0],   [ΔμaB(CT2) > 0] > [ΔμbB(CT1) < 0].
These CT-induced shifts in the fragment electronegativities thus trigger the following secondary (induced) relaxational flows {IX} in RcCT,
aA −IA→ bA   and   aB −IB→ bB,
which diminish the effects of the CT perturbations by reducing the extra charge accumulations/depletions created by the primary populational displacements.
Consider next the CT-displaced HSAB complex RHSAB,
RHSABCT ≡ [aA ←ΔCT2− aB; bA ←ΔCT1− bB], supplemented by the induced internal relaxation channels IA (within A) and IB (within B).
The primary CT shifts in the site electron populations,
[Δ(CT1) = ΔN(bA) = −ΔN(bB)] < [Δ(CT2) = ΔN(aA) = −ΔN(aB)],
where the inequality reflects the expected magnitudes of the associated in situ chemical potentials,
[|μ(CT1)| = μ(bB) − μ(bA)] < [|μ(CT2)| = μ(aB) − μ(aA)],
now induce the following internal relaxations in reactants:
aA −IA→ bA   and   bB −IB→ aB.
These charge responses further exaggerate the charge depletions/accumulations created by the primary CT perturbations, thus giving rise to a less stable reactive complex.
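The two stability analyses can be condensed into a short numerical sketch: assuming illustrative site hardnesses and primary CT amounts (all hypothetical), the shifts Δμx = ηxΔNx determine the direction of each induced internal relaxation, reproducing the qualitative conclusions reached above for Rc and RHSAB.

# Sketch of the Le Chatelier-type analysis above: the relaxation within each
# reactant runs from the higher- to the lower-chemical-potential site.
# Site hardnesses (eV/e) and CT amounts (e) are hypothetical.
eta = {"aA": 8.0, "bA": 4.0, "aB": 7.0, "bB": 3.5}

def induced_relaxations(dN, label):
    dmu = {x: eta[x] * dN[x] for x in dN}          # d_mu(x) = eta(x) * dN(x)
    for X in ("A", "B"):
        a, b = "a" + X, "b" + X
        src, dst = (a, b) if dmu[a] > dmu[b] else (b, a)
        print(f"{label}, reactant {X}: I_{X} flows {src} -> {dst}")

# complementary R_c:  CT1 = (bB -> aA) > CT2 = (bA -> aB)
induced_relaxations({"aA": +0.20, "bB": -0.20, "aB": +0.10, "bA": -0.10}, "R_c")
# regional R_HSAB:    CT1 = (bB -> bA) < CT2 = (aB -> aA)
induced_relaxations({"aA": +0.20, "aB": -0.20, "bA": +0.05, "bB": -0.05}, "R_HSAB")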

8. Conclusions

In this work we have emphasized the information contributions due to the modulus (probability) and phase (current) components of general (complex) electronic states in molecules, accounted for in generalized entropy/information concepts of the QIT description. The relevant continuity relations for these elementary physical distributions are summarized in Appendix A. In this overview, the physical equivalence of variational principles formulated in terms of the average electronic energy and the resultant-information/kinetic energy has been stressed. The quantum dynamics of the resultant gradient information measure is examined in Appendix B. The proportionality of the state average gradient information to electronic kinetic energy allows one to use the molecular virial theorem in the information exploration of general (energetical) rules of structural chemistry and chemical reactivity. The phase aspect of molecular states was shown to be vital for distinguishing the hypothetical bonded (entangled) and nonbonded (disentangled) states of molecular subsystems in reactive systems, for the same set of the fragment electron densities. The resultant-information analysis of reactivity phenomena complements earlier classical IT approaches to chemical reactions [110,111,112] as well as the catastrophe-theory or quantum-topology descriptions [113,114,115]. One also recalls a close connection between the ELF criterion of electron localization and the nonadditive component of the molecular Fisher information [46,56].
The grand-ensemble description of thermodynamic equilibria in externally open molecular systems is outlined in Appendix C. It has been argued that thermodynamic variational principles for the ensemble-average electronic energy of molecular systems and their resultant gradient information are physically equivalent. The populational derivatives of the resultant gradient information, related to the system average kinetic energy, have been suggested as alternative reactivity criteria, proportional to their energetical analogs and fully equivalent in describing the CT phenomena. Indeed, they were shown to correctly predict both the direction and magnitude of the electron flows in reactive systems. The virial theorem has been used in an information exploration of the bond-formation process and in probing the qualitative Hammond postulate of reactivity theory. The information production in chemical reactions has been addressed, and the ionic/covalent interactions between frontier-electrons of the acidic and basic reactants have been examined. The HSAB preferences in molecular coordinations have been explained and the communication perspective on interaction between reactants has been addressed. It has been argued that the internally soft and hard reactants prefer to externally communicate in the like manner, consistent with their internal communications. The HSAB preference is thus reflected by the predicted character of the inter-reactant bonds/communications: Covalent/noisy in [SS] and ionic/localized in [HH] complexes. The regional complementary and HSAB coordinations, between acidic and basic fragments of the donor and acceptor reactants, have been compared, and the complementary preference has been explained using the electrostatic, polarizational, and relaxational (stability) arguments.

Conflicts of Interest

The author declares no conflict of interest.

Nomenclature

The following notation is adopted: A denotes a scalar, A is the row or column vector, A represents a square or rectangular matrix, and the dashed symbol Â stands for the quantum-mechanical operator of the physical property A. The logarithm of Shannon’s information measure is taken to an arbitrary but fixed base: log = log2 corresponds to the information content measured in bits (binary digits), while log = ln expresses the amount of information in nats (natural units): 1 nat ≅ 1.44 bits.

Appendix A. Continuities of Probability and Phase Distributions

Consider a general quantum state in a mono-electronic system, represented by the wavefunction of Equation (1). The total time derivative of the particle probability density p(t) = p[r(t), t] reads
dp(t)/dt = ∂p(t)/∂t + (dr/dt) ⋅ ∂p(t)/∂r = ∂p(t)/∂t + V(t) ⋅∇p(t) ≡ σp(t).
It determines the local probability “source” σp(r, t) ≡ σp(t), which measures the time rate of change in an infinitesimal volume element around r in the probability fluid, moving with the local velocity dr/dt = V(t), while the partial derivative ∂p(t)/∂t refers to the volume element around the fixed point in space:
∂p(t)/∂t = σp(t) − V(t) ⋅∇p(t)
= σp(t) − (ħ/2mi)[ψ(t)* Δψ(t) − ψ(t) Δψ(t)*] = σp(t) − ∇⋅ j(t)
= σp(t) − [V(t) ⋅∇p(t) + p(t) ∇⋅V(t)]
= σp(t) − (ħ/m) [∇ϕ(t) ⋅∇p(t) + p(t) ∇2ϕ(t)].
One also recalls that the probability dynamics resulting from SE expresses the sourceless continuity relation for the particle probability “fluid”, σp(t) = 0, or
∂p(t)/∂t + ∇⋅j(t) = ∂p(t)/∂t + ∇p(t)⋅V(t) + p(t) ∇⋅V(t) = 0.
This relation thus implies the vanishing divergence of effective velocity field determined by the phase-Laplacian:
∇⋅V(t) = (ħ/m) ∇2ϕ(t) = 0.
The partial derivative of Equation (A2) expressing the probability dynamics thus reads:
∂p(t)/∂t = − V(t) ⋅∇p(t) = − (ħ/m) ∇ϕ(t) ⋅∇p(t)
or
dp(t)/dt = ∂p(t)/∂t + ∇⋅ j(t) = ∂p(t)/∂t + V(t) ⋅∇p(t) = 0.
The probability continuity also determines the dynamics of the state modulus component:
∂R(t)/∂t = − (ħ/m) ∇ϕ(t) ⋅∇R(t).
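These kinematic relations are easy to verify numerically; the sketch below (1D grid, atomic units ħ = m = 1) checks that the current computed from ψ = R exp(iϕ) coincides with pV and that ∇⋅j decomposes into V⋅∇p + p∇⋅V. The test modulus and phase are arbitrary smooth functions, not a particular molecular state (and they need not satisfy the dynamical condition ∇⋅V = 0).

import numpy as np

# Numerical check of j = p*V (V = d(phi)/dx) and div(j) = V*dp/dx + p*dV/dx.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
R = np.exp(-0.5 * x**2)                    # test modulus
phi = 0.3 * x + 0.1 * np.sin(x)            # test local phase
p = R**2
psi = R * np.exp(1j * phi)

j_qm = np.imag(np.conj(psi) * np.gradient(psi, dx))   # j = Im(psi* dpsi/dx)
V = np.gradient(phi, dx)                              # effective velocity
j_pv = p * V                                          # j = p*V

div_j = np.gradient(j_qm, dx)
rhs = V * np.gradient(p, dx) + p * np.gradient(V, dx)  # V.grad(p) + p.div(V)

print(np.max(np.abs(j_qm - j_pv)))   # ~ 0 up to finite-difference error
print(np.max(np.abs(div_j - rhs)))   # confirms the product-rule decomposition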
For example, for a general local-phase expression [86,87]
ϕ(r) = k⋅f(r) + C,
where the constant C remains unspecified in QM, one determines the probability-velocity field at point r = {xα}: V(r) = (ħ/m) k ∇⋅f(r). Its vanishing divergence then implies a local condition
k ⋅∇[∇⋅f(r)] ≡ k ⋅∇²f(r) = ∑αβ kα [∂²fβ(r)/∂xα∂xβ] = 0.
The effective velocity V(t) also determines the phase current, the flux concept associated with the state phase: J(t) = ϕ(t) V(t). The scalar field ϕ(t) and its conjugate current density J(t) then generate a nonvanishing phase source in the associated continuity equation:
σϕ(t) ≡ dϕ(t)/dt = ∂ϕ(t)/∂t + ∇⋅ J(t) = ∂ϕ(t)/∂t + V(t) ⋅ ∇ϕ(t) ≠ 0   or
∂ϕ(t)/∂t − σϕ(t) = −∇⋅J(t) = −(ħ/m) [∇ϕ(t)]².
The phase dynamics from SE [7],
∂ϕ/∂t = [ħ/(2m)] [R⁻¹ΔR − (∇ϕ)²] − v/ħ,
finally identifies the phase source:
σϕ = [ħ/(2m)] [R⁻¹∇²R + (∇ϕ)²] − v/ħ.
As an illustration consider the stationary wavefunction corresponding to energy Es,
ψs(t) = Rs(r) exp[iϕs(t)],   ϕs(t) = − (Es/ħ) t = − ωs t,
representing the eigenstate of electronic Hamiltonian of Equation (6):
Ĥ(r) Rs(r) = [−(ħ²/2m)∇² + v(r)] Rs(r) = Es Rs(r).
It exhibits the stationary probability distribution, ps(t) ≡ Rs(r)2 = ps(r), the purely time-dependent phase ϕs(t), Vs(t) = 0 and hence also js(t) = Js(t) = 0. The phase-dynamics Equations (A11) and (A12) then recover the preceding stationary SE and identify the constant phase source (see Equation (A13)):
σϕ[ψs] = [ħ/(2m)] (Rs⁻¹∇²Rs) − v/ħ = −ωs = −(Es/ħ) = const.
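This constant phase source is easily checked numerically; the sketch below uses the harmonic-oscillator ground state in units ħ = m = ω = 1 (an illustrative choice, not tied to the molecular discussion), for which R(x) = exp(−x²/2), v(x) = x²/2 and E0 = 1/2, so σϕ should equal −0.5 everywhere.

import numpy as np

# Finite-difference check of sigma_phi = (hbar/2m) R''/R - v/hbar = -E_s/hbar.
x = np.linspace(-4.0, 4.0, 2001)
dx = x[1] - x[0]
R = np.exp(-0.5 * x**2)
v = 0.5 * x**2

R_second = np.gradient(np.gradient(R, dx), dx)   # finite-difference R''
sigma_phi = 0.5 * R_second / R - v               # (hbar/2m) R''/R - v/hbar

core = slice(200, -200)                          # drop boundary FD artifacts
print(sigma_phi[core].min(), sigma_phi[core].max())   # both close to -0.5 = -E_0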

Appendix B. Information Dynamics

Let us now examine a temporal evolution of the overall integral measure of the gradient-information content in the specified (pure) quantum state |ψ(t)〉 of such a mono-electronic system (Equation (11)). In the Schrödinger dynamical picture, the time change of the resultant gradient information, the operator of which does not depend on time explicitly,
Î(r) = −4∇² = (8m/ħ²) T̂(r) ≡ σ T̂(r),
results solely from the time dependence of the system state vector itself. The time derivative of the average (Fisher-type) gradient information is then generated by the expectation value of the commutator
[Ĥ, Î] = [v, Î] = 4[∇², v] = 4{[∇, v]⋅∇ + ∇⋅[∇, v]},   [∇, v] = ∇v,
dI(t)/dt = (i/ħ) 〈ψ(t)|[Ĥ, Î]|ψ(t)〉,
and the integration by parts implies:
〈ψ(t)|∇ψ(t)〉 = −〈∇ψ(t)|ψ(t)〉 ≡ 〈∇†ψ(t)|ψ(t)〉   or   ∇† = −∇.
Hence, the total time derivative of the overall gradient information, i.e., the integral production (source) of this information descriptor, reads:
σI(t) ≡ dI(t)/dt = (4i/ħ) {〈ψ(t)|∇v ⋅ |∇ψ(t)〉 − 〈∇ψ(t)| ⋅∇v|ψ(t)〉}
= − (8/ħ) Im 〈ψ(t)|∇v ⋅ |∇ψ(t)〉
= − (8/ħ) Im[∫ψ(t)*v ⋅∇ψ(t) dr]
= − (8/ħ) ∫p(t) ∇ϕ(t) ⋅∇v dr = − σj(t) ⋅∇v dr.
This derivative is seen to be determined by the current content of the molecular electronic state. Therefore, it identically vanishes for the zero current density everywhere, when the local component of the state phase identically vanishes, thus confirming its nonclassical origin.
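A minimal numerical sketch of this integral source is given below for a 1D model (ħ = m = 1, so the prefactor is 8); the density, phase and external potential are arbitrary illustrative functions, chosen only to show that σI vanishes for a currentless state and becomes finite once a local phase gradient is present.

import numpy as np

# Sketch of sigma_I = -(8m/hbar^2) ∫ j · grad(v) dr in 1D with hbar = m = 1.
x = np.linspace(-6.0, 6.0, 3001)
dx = x[1] - x[0]
p = np.exp(-x**2) / np.sqrt(np.pi)        # normalized probability density
v = 0.5 * x**2 + 0.1 * x                  # model external potential
dv = np.gradient(v, dx)

def sigma_I(phi):
    j = p * np.gradient(phi, dx)          # current j = p * grad(phi)
    return -8.0 * float(np.sum(j * dv) * dx)

print(sigma_I(np.zeros_like(x)))          # zero phase gradient -> vanishing source
print(sigma_I(0.4 * x))                   # finite current -> nonzero source (~ -0.32)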
This conclusion can also be demonstrated directly (see Equation (11)):
σI(t) ≡ dI(t)/dt = dI[p]/dt + dI[ϕ]/dt = dI[ϕ]/dt,
since by the sourceless probability continuity of Equation (A6),
σp(r) = dp(r)/dt = 0,
and hence
dI[p]/dt = ∫ [dp(r)/dt] [δI[p]/δp(r)] dr = 0.
Therefore, the integral source of resultant gradient information in fact reflects the total time derivative of its nonclassical contribution I[ϕ]. Hence the associated derivative of the overall gradient entropy of Equation (16):
σM(t) ≡ dM(t)/dt = dI[p]/dtdI[ϕ]/dt = −dI[ϕ]/dt = −σI(t).
This result is in accordance with the intuitive expectation that an increase in the state overall structural determinicity (order) information, σI(t) > 0, implies the associated decrease in its structural indeterminicity (disorder) information (entropy): σM(t) < 0.

Appendix C. Ensemble Representation of Thermodynamic Conditions

Only the ensemble-average overall number of electrons, 〈N〉ens. ≡ 𝒩, of the (open) molecular part M(v), identified by the external potential v of the specified (microscopic) system, in the composite (macroscopic) system [M(v)¦ℛ] including the external electron reservoir ℛ,
𝒩 = tr(D̂N̂) = ∑i Ni (∑j Pji) ≡ ∑i Ni Pi,   ∑i Pi = 1,
exhibits a continuous (fractional) spectrum of values, thus justifying the very concept of the populational (𝒩) derivative itself. Here, N̂ = ∑i Ni (∑j |ψji〉〈ψji|) stands for the particle-number operator in Fock’s space, and the density operator D̂ = ∑i∑j |ψji〉 Pji 〈ψji| identifies the statistical mixture of the system (pure) stationary states {|ψji〉 ≡ |ψj(Ni)〉}, defined for different (integer) numbers of electrons {Ni}, which appear with the (external) probabilities {Pji} in the ensemble. Such 𝒩-derivatives are involved in the definitions of familiar CT criteria of chemical reactivity, e.g., the chemical potential (negative electronegativity) [53,101,102,103,104,105] or the chemical hardness/softness [106], and Fukui function (FF) [107] descriptors (see also References [53,73,74]).
Such 𝒩-derivatives are thus definable only in the mixed electronic states, e.g., those corresponding to thermodynamic equilibria in externally open molecules. In the grand ensemble, this state is determined by the density operator specified by the relevant (externally imposed) intensive parameters: the chemical potential μ fixed by the electron reservoir and the absolute temperature T fixed by the heat bath, D̂eq. ≡ D̂(μ, T; v). The optimum state probabilities {Pji(μ, T; v)} correspond to the minimum of the associated thermodynamic potential, the Legendre transform
Ω = E − (∂E/∂𝒩)𝒩 − (∂E/∂𝒮)𝒮 = E − μ𝒩 − T𝒮
of the ensemble-average energy
E[D̂] ≡ E(𝒩, 𝒮; v) = tr(D̂Ĥ) = ∑i∑j Pji Eji,
called the grand potential. The latter corresponds to replacing the “extensive” (ensemble-average) state parameters, the particle number 𝒩 and thermodynamic entropy
S[D̂] = tr(D̂Ŝ) = ∑i∑j Pji Sji(Pji),   Sji(Pji) = −kB ln Pji,
where kB denotes the Boltzmann constant, by their respective “intensive” conjugates defining the applied thermodynamic conditions: the reservoir chemical potential μ and the bath temperature T. The grand potential of Equation (A26) includes these externally imposed intensities as Lagrange multipliers enforcing the constraints of the specified values of the system average number of electrons, 〈N〉ens. = 𝒩, and of its thermodynamic entropy, 〈S〉ens. = 𝒮, at the grand-potential minimum:
minD̂ Ω[D̂] = Ω[D̂(μ, T; v)] ≡ Ω(μ, T; v).
The externally imposed parameters (μ, T) then determine the optimum probabilities of the ensemble stationary states, eigenstates of the Hamiltonians { H ^ ( N i , v ) }, in the (mixed) equilibrium state of the grand ensemble,
Pji(μ, T; v) = Ξ⁻¹ exp[β(μNi − Eji)],
and the associated density operator
D̂(μ, T; v) = ∑i∑j |ψji〉 Pji(μ, T; v) 〈ψji| ≡ D̂eq.
Here, Ξ stands for the grand-ensemble partition function and β = (kBT)−1. The equilibrium probabilities of Equation (A30) represent eigenvalues of the grand-canonical statistical operator acting in Fock’s space:
d̂(μ, T; v) = exp{β[μN̂ − Ĥ(v)]}/tr exp{β[μN̂ − Ĥ(v)]}.
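A small sketch of these grand-canonical probabilities follows; the toy spectrum of a few states with N = 5, 6, 7 electrons and the chosen (μ, T) are purely illustrative (the temperature is taken unrealistically high so that several states mix visibly), but the construction of Pji and of the ensemble averages follows the formulas above.

import numpy as np

# Sketch of P_ji = Xi^{-1} exp[beta(mu N_i - E_ji)] and the ensemble averages.
kB = 3.1668e-6                       # Boltzmann constant, hartree/K
T, mu = 5.0e4, -0.20                 # bath temperature (K), reservoir potential (hartree)
beta = 1.0 / (kB * T)

N_i  = np.array([5, 5, 6, 6, 7])                       # electron numbers of the states
E_ji = np.array([-10.0, -9.8, -10.3, -10.1, -10.25])   # toy state energies (hartree)

expo = beta * (mu * N_i - E_ji)
w = np.exp(expo - expo.max())        # shift the exponent for numerical stability
P = w / w.sum()                      # equilibrium probabilities P_ji

N_avg = float(P @ N_i)               # <N>_ens (fractional in general)
E_avg = float(P @ E_ji)              # <E>_ens
S_ens = float(-kB * np.sum(P * np.log(P)))   # ensemble entropy, -kB sum P ln P
print(N_avg, E_avg, S_ens)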
In the T→0 limit [53,101,102] only two ground states (j = 0), |ψ0i〉 and |ψ0,i+1〉, corresponding to the neighboring integers “bracketing” the given (fractional) 〈N〉ens. = 𝒩,
Ni ≤ 𝒩 ≤ Ni + 1,
appear in the equilibrium mixed state. Their ensemble probabilities for the specified
Nens. = i Pi(T→0) + (i + 1)[1 − Pi(T→0)] = 𝒩
read:
Pi(T→0) = 1 + Ni − 𝒩 ≡ 1 − ω   and   Pi+1(T→0) = 𝒩 − Ni ≡ ω.
The continuous energy function E(𝒩, 𝒮) then consists of the straight-line segments between the neighboring integer values of 𝒩. This implies constant values of the chemical potential in all such admissible (partial) ranges of the average number of electrons, and the μ-discontinuity at the integer values of the average electron number {𝒩 = Ni}.
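This T→0 construction can be illustrated directly: with assumed ground-state energies E[Ni, v] for three neighboring integers, the ensemble energy is piecewise linear in 𝒩 and its slope (the chemical potential) jumps at the integer electron numbers; all values below are hypothetical.

import numpy as np

# Sketch of the zero-temperature interpolation: <N> = N_i + omega mixes only the
# two bracketing ground states, with P_i = 1 - omega and P_{i+1} = omega.
E0 = {5: -10.00, 6: -10.35, 7: -10.50}      # hypothetical E[N_i, v] (hartree)

def E_ens(N_avg):
    i = int(np.floor(N_avg))
    omega = N_avg - i                        # weight of the (N_i + 1)-electron state
    return (1.0 - omega) * E0[i] + omega * E0[i + 1]

for N in (5.25, 5.75, 6.25, 6.75):
    mu = (E_ens(N + 0.01) - E_ens(N - 0.01)) / 0.02    # slope within the segment
    print(f"<N> = {N:4.2f}:  E = {E_ens(N):7.3f},  mu = {mu:6.3f}")
# mu = -0.35 on the (5,6) segment and -0.15 on (6,7): discontinuity at <N> = 6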
This zero-temperature mixture of the molecular ground states {|ψ0i〉 = ψ[Ni, v]}, defined for the integer number of electrons Ni = 〈ψi| N ^ |ψi〉 and corresponding to energies
E0i = 〈ψ0i|Ĥ(Ni, v)|ψ0i〉 = E[Ni, v],
which appear in the ensemble with the equilibrium thermodynamic probabilities Pi(μ, T→0; v), represents an externally open molecule 〈M(μ, T→0; v)〉ens. in these thermodynamic conditions.
In this thermodynamic scenario the pure-state probabilities thus result from the variational principle for the thermodynamic potential
Ω[D̂] = E[D̂] − μ N[D̂] − T S[D̂] = tr(D̂ Ω̂),   Ω̂(μ, T; v) = Ĥ(v) − μN̂ − TŜ,   Ŝ = −kB ln d̂,
minD̂ Ω[D̂] = Ω[D̂(μ, T; v)] = E[D̂eq.] − μ N[D̂eq.] − T S[D̂eq.].
The relevant ensemble averages of the system energy and electron number read:
〈E(μ, T)〉ens. = E[D̂eq.] = ∑i∑j Pji(μ, T; v) Eji,   〈N(μ, T)〉ens. = N[D̂eq.] = ∑i [∑j Pji(μ, T; v)] Ni = ∑i Pi(μ, T; v) Ni
and von Neumann’s [116] ensemble entropy
〈S(μ, T)〉ens. = S[D̂eq.] = tr[D̂eq. Ŝ(μ, T; v)] = −kB tr(D̂eq. ln D̂eq.) = −kB ∑i∑j Pji(μ, T; v) ln Pji(μ, T; v) ≡ ∑i∑j Pji(μ, T; v) Sji(μ, T; v),   Ŝ(μ, T; v) = ∑i∑j |ψji〉 Sji(μ, T; v) 〈ψji|,   Sji(μ, T; v) = 〈ψji|Ŝ(μ, T; v)|ψji〉 = −kB ln Pji(μ, T; v).
The latter identically vanishes in a pure quantum state ψji, when Pji = 1 and all the remaining state probabilities vanish.

References

  1. Nalewajski, R.F. On Entropy/Information Description of Reactivity Phenomena. In Advances in Mathematics Research; Baswell, A.R., Ed.; Nova Science Publishers: New York, NY, USA, 2019; Volume 26, in press. [Google Scholar]
  2. Nalewajski, R.F. Information description of chemical reactivity. Curr. Phys. Chem. 2019, in press. [Google Scholar]
  3. Nalewajski, R.F. Role of electronic kinetic energy (resultant gradient information) in chemical reactivity. J. Mol. Model. 2019. submitted. [Google Scholar]
  4. Callen, H.B. Thermodynamics: An Introduction to the Physical Theories of Equilibrium Thermostatics and Irreversible Thermodynamics; Wiley: New York, NY, USA, 1962. [Google Scholar]
  5. Nalewajski, R.F. Virial theorem implications for the minimum energy reaction paths. Chem. Phys. 1980, 50, 127–136. [Google Scholar] [CrossRef]
  6. Prigogine, I. From Being to Becoming: Time and Complexity in the Physical Sciences; Freeman WH & Co.: San Francisco, CA, USA, 1980. [Google Scholar]
  7. Nalewajski, R.F. Quantum Information Theory of Molecular States; Nova Science Publishers: New York, NY, USA, 2016. [Google Scholar]
  8. Nalewajski, R.F. Complex entropy and resultant information measures. J. Math. Chem. 2016, 54, 1777–1782. [Google Scholar] [CrossRef]
  9. Nalewajski, R.F. On phase/current components of entropy/information descriptors of molecular states. Mol. Phys. 2014, 112, 2587–2601. [Google Scholar] [CrossRef]
  10. Nalewajski, R.F. Quantum information measures and their use in chemistry. Curr. Phys. Chem. 2017, 7, 94–117. [Google Scholar] [CrossRef]
  11. Fisher, R.A. Theory of statistical estimation. Proc. Camb. Phil. Soc. 1925, 22, 700–725. [Google Scholar] [CrossRef]
  12. Frieden, B.R. Physics from the Fisher Information—A Unification; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  13. Shannon, C.E. The mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–493, 623–656. [Google Scholar] [CrossRef]
  14. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois, Urbana: Champaign, IL, USA, 1949. [Google Scholar]
  15. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  16. Kullback, S. Information Theory and Statistics; Wiley: New York, NY, USA, 1959. [Google Scholar]
  17. Abramson, N. Information Theory and Coding; McGraw-Hill: New York, NY, USA, 1963. [Google Scholar]
  18. Pfeifer, P.E. Concepts of Probability Theory; Dover: New York, NY, USA, 1978. [Google Scholar]
  19. Nalewajski, R.F. Information Theory of Molecular Systems; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  20. Nalewajski, R.F. Information Origins of the Chemical Bond.; Nova Science Publishers: New York, NY, USA, 2010. [Google Scholar]
  21. Nalewajski, R.F. Perspectives in Electronic Structure Theory; Springer: Heidelberg, Germany, 2012. [Google Scholar]
  22. Nalewajski, R.F.; Parr, R.G. Information theory, atoms-in-molecules and molecular similarity. Proc. Natl. Acad. Sci. USA 2000, 97, 8879–8882. [Google Scholar] [CrossRef]
  23. Nalewajski, R.F. Information principles in the theory of electronic structure. Chem. Phys. Lett. 2003, 272, 28–34. [Google Scholar] [CrossRef]
  24. Nalewajski, R.F. Information principles in the Loge Theory. Chem. Phys. Lett. 2003, 375, 196–203. [Google Scholar] [CrossRef]
  25. Nalewajski, R.F. Electronic structure and chemical reactivity: density functional and information theoretic perspectives. Adv. Quantum Chem. 2003, 43, 119–184. [Google Scholar]
  26. Nalewajski, R.F.; Parr, R.G. Information-theoretic thermodynamics of molecules and their Hirshfeld fragments. J. Phys. Chem. A 2001, 105, 7391–7400. [Google Scholar] [CrossRef]
  27. Nalewajski, R.F. Hirschfeld analysis of molecular densities: subsystem probabilities and charge sensitivities. Phys. Chem. Chem. Phys. 2002, 4, 1710–1721. [Google Scholar] [CrossRef]
  28. Parr, R.G.; Ayers, P.W.; Nalewajski, R.F. What is an atom in a molecule? J. Phys. Chem. A 2005, 109, 3957–3959. [Google Scholar] [CrossRef] [PubMed]
  29. Nalewajski, R.F.; Broniatowska, E. Atoms-in-Molecules from the stockholder partition of molecular two-electron distribution. Theor. Chem. Acc. 2007, 117, 7–27. [Google Scholar] [CrossRef]
  30. Heidar-Zadeh, F.; Ayers, P.W.; Verstraelen, T.; Vinogradov, I.; Vöhringer-Martinez, E.; Bultinck, P. Information-theoretic approaches to Atoms-in-Molecules: Hirshfeld family of partitioning schemes. J. Phys. Chem. A 2018, 122, 4219–4245. [Google Scholar] [CrossRef]
  31. Hirshfeld, F.L. Bonded-atom fragments for describing molecular charge densities. Theor. Chim. Acta 1977, 44, 129–138. [Google Scholar] [CrossRef]
  32. Nalewajski, R.F. Entropic measures of bond multiplicity from the information theory. J. Phys. Chem. A 2000, 104, 11940–11951. [Google Scholar] [CrossRef]
  33. Nalewajski, R.F. Entropy descriptors of the chemical bond in Information Theory: I. Basic concepts and relations. II. Application to simple orbital models. Mol. Phys. 2004, 102, 531–546, 547–566. [Google Scholar] [CrossRef]
  34. Nalewajski, R.F. Entropic and difference bond multiplicities from the two-electron probabilities in orbital resolution. Chem. Phys. Lett. 2004, 386, 265–271. [Google Scholar] [CrossRef]
  35. Nalewajski, R.F. Reduced communication channels of molecular fragments and their entropy/information bond indices. Theor. Chem. Acc. 2005, 114, 4–18. [Google Scholar] [CrossRef]
  36. Nalewajski, R.F. Partial communication channels of molecular fragments and their entropy/information indices. Mol. Phys. 2005, 103, 451–470. [Google Scholar] [CrossRef]
  37. Nalewajski, R.F. Entropy/information descriptors of the chemical bond revisited. J. Math. Chem. 2011, 49, 2308–2329. [Google Scholar] [CrossRef]
  38. Nalewajski, R.F. Quantum information descriptors and communications in molecules. J. Math. Chem. 2014, 52, 1292–1323. [Google Scholar] [CrossRef]
  39. Nalewajski, R.F. Multiple, localized and delocalized/conjugated bonds in the orbital-communication theory of molecular systems. Adv. Quantum Chem. 2009, 56, 217–250. [Google Scholar]
  40. Nalewajski, R.F.; Szczepanik, D.; Mrozek, J. Bond differentiation and orbital decoupling in the orbital communication theory of the chemical bond. Adv. Quant. Chem. 2011, 61, 1–48. [Google Scholar]
  41. Nalewajski, R.F.; Szczepanik, D.; Mrozek, J. Basis set dependence of molecular information channels and their entropic bond descriptors. J. Math. Chem. 2012, 50, 1437–1457. [Google Scholar] [CrossRef]
  42. Nalewajski, R.F. Electron Communications and Chemical Bonds. In Frontiers of Quantum Chemistry; Wójcik, M., Nakatsuji, H., Kirtman, B., Ozaki, Y., Eds.; Springer: Singapore, 2017; pp. 315–351. [Google Scholar]
  43. Nalewajski, R.F.; Świtka, E.; Michalak, A. Information distance analysis of molecular electron densities. Int. J. Quantum Chem. 2002, 87, 198–213. [Google Scholar] [CrossRef]
  44. Nalewajski, R.F.; Broniatowska, E. Entropy displacement analysis of electron distributions in molecules and their Hirshfeld atoms. J. Phys. Chem. A 2003, 107, 6270–6280. [Google Scholar] [CrossRef]
  45. Nalewajski, R.F. Use of Fisher information in quantum chemistry. Int. J. Quantum Chem. 2008, 108, 2230–2252. [Google Scholar] [CrossRef]
  46. Nalewajski, R.F.; Köster, A.M.; Escalante, S. Electron localization function as information measure. J. Phys. Chem. A 2005, 109, 10038–10043. [Google Scholar] [CrossRef] [PubMed]
  47. Becke, A.D.; Edgecombe, K.E. A simple measure of electron localization in atomic and molecular systems. J. Chem. Phys. 1990, 92, 5397–5403. [Google Scholar] [CrossRef]
  48. Silvi, B.; Savin, A. Classification of chemical bonds based on topological analysis of electron localization functions. Nature 1994, 371, 683–686. [Google Scholar] [CrossRef]
  49. Savin, A.; Nesper, R.; Wengert, S.; Fässler, T.F. ELF: The electron localization function. Angew. Chem. Int. Ed. Engl. 1997, 36, 1808–1832. [Google Scholar] [CrossRef]
  50. Hohenberg, P.; Kohn, W. Inhomogeneous electron gas. Phys. Rev. 1964, 136B, 864–871. [Google Scholar] [CrossRef]
  51. Kohn, W.; Sham, L.J. Self-consistent equations including exchange and correlation effects. Phys. Rev. 1965, 140A, 1133–1138. [Google Scholar] [CrossRef]
  52. Levy, M. Universal variational functionals of electron densities, first-order density matrices, and natural spin-orbitals and solution of the v-representability problem. Proc. Natl. Acad. Sci. USA 1979, 76, 6062–6065. [Google Scholar] [CrossRef]
  53. Parr, R.G.; Yang, W. Density-Functional Theory of Atoms and Molecules; Oxford University Press: New York, NY, USA, 1989. [Google Scholar]
  54. Dreizler, R.M.; Gross, E.K.U. Density Functional Theory: An Approach to the Quantum Many-Body Problem; Springer: Berlin, Germany, 1990. [Google Scholar]
  55. Nalewajski, R.F. (Ed.) Density Functional Theory I-IV. Topics in Current Chemistry; Springer: Berlin/Heidelberg, Germany, 1996; Volume 180–183. [Google Scholar]
  56. Nalewajski, R.F.; de Silva, P.; Mrozek, J. Use of nonadditive Fisher information in probing the chemical bonds. J. Mol. Struct. 2010, 954, 57–74. [Google Scholar] [CrossRef]
  57. Nalewajski, R.F. Through-space and through-bridge components of chemical bonds. J. Math. Chem. 2011, 49, 371–392. [Google Scholar] [CrossRef]
  58. Nalewajski, R.F. Chemical bonds from through-bridge orbital communications in prototype molecular systems. J. Math. Chem. 2011, 49, 546–561. [Google Scholar] [CrossRef]
  59. Nalewajski, R.F. On interference of orbital communications in molecular systems. J. Math. Chem. 2011, 49, 806–815. [Google Scholar] [CrossRef]
  60. Nalewajski, R.F.; Gurdek, P. On the implicit bond-dependency origins of bridge interactions. J. Math. Chem. 2011, 49, 1226–1237. [Google Scholar] [CrossRef]
  61. Nalewajski, R.F. Direct (through-space) and indirect (through-bridge) components of molecular bond multiplicities. Int. J. Quantum Chem. 2012, 112, 2355–2370. [Google Scholar] [CrossRef]
  62. Nalewajski, R.F.; Gurdek, P. Bond-order and entropic probes of the chemical bonds. Struct. Chem. 2012, 23, 1383–1398. [Google Scholar] [CrossRef]
  63. Nalewajski, R.F. Exploring molecular equilibria using quantum information measures. Ann. Phys. 2013, 525, 256–268. [Google Scholar] [CrossRef]
  64. Nalewajski, R.F. On phase equilibria in molecules. J. Math. Chem. 2014, 52, 588–612. [Google Scholar] [CrossRef]
  65. Nalewajski, R.F. Quantum information approach to electronic equilibria: Molecular fragments and elements of non-equilibrium thermodynamic description. J. Math. Chem. 2014, 52, 1921–1948. [Google Scholar] [CrossRef]
  66. Nalewajski, R.F. Phase/current information descriptors and equilibrium states in molecules. Int. J. Quantum Chem. 2015, 115, 1274–1288. [Google Scholar] [CrossRef]
  67. Nalewajski, R.F. Quantum Information Measures and Molecular Phase Equilibria. In Advances in Mathematics Research; Baswell, A.R., Ed.; Nova Science Publishers: New York, NY, USA, 2015; Volume 19, pp. 53–86. [Google Scholar]
  68. Nalewajski, R.F. Phase Description of Reactive Systems. In Conceptual Density Functional Theory; Islam, N., Kaya, S., Eds.; Apple Academic Press: Waretown, NJ, USA, 2018; pp. 217–249. [Google Scholar]
  69. Nalewajski, R.F. Entropy Continuity, Electron Diffusion and Fragment Entanglement in Equilibrium States. In Advances in Mathematics Research; Baswell, A.R., Ed.; Nova Science Publishers: New York, NY, USA, 2017; Volume 22, pp. 1–42. [Google Scholar]
  70. Nalewajski, R.F. On entangled states of molecular fragments. Trends Phys. Chem. 2016, 16, 71–85. [Google Scholar]
  71. Nalewajski, R.F. Chemical reactivity description in density-functional and information theories. Acta Phys.-Chim. Sin. 2017, 33, 2491–2509. [Google Scholar]
  72. Nalewajski, R.F. Information equilibria, subsystem entanglement and dynamics of overall entropic descriptors of molecular electronic structure. J. Mol. Model. 2018, 24, 212–227. [Google Scholar] [CrossRef]
  73. Nalewajski, R.F.; Korchowiec, J.; Michalak, A. Reactivity Criteria in Charge Sensitivity Analysis. Topics in Current Chemistry: Density Functional Theory IV; Nalewajski, R.F., Ed.; Springer: Berlin/Heidelberg, Germany, 1996; Volume 183, pp. 25–141. [Google Scholar]
  74. Nalewajski, R.F.; Korchowiec, J. Charge Sensitivity Approach to Electronic Structure and Chemical Reactivity; World Scientific: Singapore, 1997. [Google Scholar]
  75. Geerlings, P.; De Proft, F.; Langenaeker, W. Conceptual density functional theory. Chem. Rev. 2003, 103, 1793–1873. [Google Scholar] [CrossRef] [PubMed]
  76. Nalewajski, R.F. Sensitivity analysis of charge transfer systems: In situ quantities, intersecting state model and ist implications. Int. J. Quantum Chem. 1994, 49, 675–703. [Google Scholar] [CrossRef]
  77. Nalewajski, R.F. Charge sensitivity analysis as diagnostic tool for predicting trends in chemical reactivity. In Proceedings of the NATO ASI on Density Functional Theory (Il Ciocco, 1993); Dreizler, R.M., Gross, E.K.U., Eds.; Plenum: New York, NY, USA, 1995; pp. 339–389. [Google Scholar]
  78. Chattaraj, P.K. (Ed.) Chemical Reactivity Theory: A Density Functional View; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar]
  79. Gatti, C.; Macchi, P. Modern Charge-Density Analysis; Springer: Berlin, Germany, 2012. [Google Scholar]
  80. Hammond, G.S. A correlation of reaction rates. J. Am. Chem. Soc. 1955, 77, 334–338. [Google Scholar] [CrossRef]
  81. Fukui, K. Theory of Orientation and Stereoselection; Springer: Berlin, Germany, 1975. [Google Scholar]
  82. Fukui, K. Role of frontier orbitals in chemical reactions. Science 1982, 218, 747–754. [Google Scholar] [CrossRef] [PubMed]
  83. Fujimoto, H.; Fukui, K. Intermolecular Interactions and Chemical Reactivity. In Chemical Reactivity and Reaction Paths; Klopman, G., Ed.; Wiley-Interscience: New York, NY, USA, 1974; pp. 23–54. [Google Scholar]
  84. Pearson, R.G. Hard and Soft Acids and Bases; Dowden, Hutchinson and Ross: Stroudsburg, PA, USA, 1973. [Google Scholar]
  85. Nalewajski, R.F. Electrostatic effects in interactions between hard (soft) acids and bases. J. Am. Chem. Soc. 1984, 106, 944–945. [Google Scholar] [CrossRef]
  86. von Weizsäcker, C.F. Zur theorie der kernmassen. Z. Phys. Hadron. Nucl. 1935, 96, 431–458. [Google Scholar]
  87. Harriman, J.E. Orthonormal orbitals for the representation of an arbitrary density. Phys. Rev. 1980, A24, 680–682. [Google Scholar]
  88. Zumbach, G.; Maschke, K. New approach to the calculation of density functionals. Phys. Rev. 1983, A28, 544–554, Erratum in 1984, A29, 1585–1587. [Google Scholar] [CrossRef]
  89. Ruedenberg, K. The physical nature of the chemical bond. Rev. Mod. Phys. 1962, 34, 326–376. [Google Scholar] [CrossRef]
  90. Feinberg, M.J.; Ruedenberg, K. Paradoxical role of the kinetic-energy operator in the formation of the covalent bond. J. Chem. Phys. 1971, 54, 1495–1512. [Google Scholar] [CrossRef]
  91. Feinberg, M.J.; Ruedenberg, K. Heteropolar one-electron bond. J. Chem. Phys. 1971, 55, 5805–5818. [Google Scholar] [CrossRef]
  92. Bacskay, G.B.; Nordholm, S.; Ruedenberg, K. The virial theorem and covalent bonding. J. Phys. Chem. 2018, A122, 7880–7893. [Google Scholar] [CrossRef]
  93. Marcus, R.A. Theoretical relations among rate constants, barriers, and Broensted slopes of chemical reactions. J. Phys. Chem. 1968, 72, 891–899. [Google Scholar] [CrossRef]
  94. Agmon, N.; Levine, R.D. Energy, entropy and the reaction coordinate: Thermodynamic-like relations in chemical kinetics. Chem. Phys. Lett. 1977, 52, 197–201. [Google Scholar] [CrossRef]
  95. Agmon, N.; Levine, R.D. Empirical triatomic potential energy surfaces defined over orthogonal bond-order coordinates. J. Chem. Phys. 1979, 71, 3034–3041. [Google Scholar] [CrossRef]
  96. Miller, A.R. A theoretical relation for the position of the energy barrier between initial and final states of chemical reactions. J. Am. Chem. Soc. 1978, 100, 1984–1992. [Google Scholar] [CrossRef]
  97. Ciosłowski, J. Quantifying the Hammond Postulate: Intramolecular proton transfer in substituted hydrogen catecholate anions. J. Am. Chem. Soc. 1991, 113, 6756–6761. [Google Scholar] [CrossRef]
  98. Nalewajski, R.F.; Formosinho, S.J.; Varandas, A.J.C.; Mrozek, J. Quantum mechanical valence study of a bond breaking—Bond forming process in triatomic systems. Int. J. Quantum Chem. 1994, 52, 1153–1176. [Google Scholar] [CrossRef]
  99. Nalewajski, R.F.; Broniatowska, E. Information distance approach to Hammond postulate. Chem. Phys. Lett. 2003, 376, 33–39. [Google Scholar] [CrossRef]
  100. Dunning, T.H., Jr. Theoretical studies of the energetics of the abstraction and exchange reactions in H + HX, with X = F−I. J. Phys. Chem. 1984, 88, 2469–2477. [Google Scholar] [CrossRef]
  101. Gyftopoulos, E.P.; Hatsopoulos, G.N. Quantum-thermodynamic definition of electronegativity. Proc. Natl. Acad. Sci. USA 1965, 60, 786–793. [Google Scholar] [CrossRef]
  102. Perdew, J.P.; Parr, R.G.; Levy, M.; Balduz, J.L. Density functional theory for fractional particle number: Derivative discontinuities of the energy. Phys. Rev. Lett. 1982, 49, 1691–1694. [Google Scholar] [CrossRef]
  103. Mulliken, R.S. A new electronegativity scale: Together with data on valence states and on ionization potentials and electron affinities. J. Chem. Phys. 1934, 2, 782–793. [Google Scholar] [CrossRef]
  104. Iczkowski, R.P.; Margrave, J.L. Electronegativity. J. Am. Chem. Soc. 1961, 83, 3547–3551. [Google Scholar] [CrossRef]
  105. Parr, R.G.; Donnelly, R.A.; Levy, M.; Palke, W.E. Electronegativity: The density functional viewpoint. J. Chem. Phys. 1978, 69, 4431–4439. [Google Scholar] [CrossRef]
  106. Parr, R.G.; Pearson, R.G. Absolute hardness: Companion parameter to absolute electronegativity. J. Am. Chem. Soc. 1983, 105, 7512–7516. [Google Scholar] [CrossRef]
  107. Parr, R.G.; Yang, W. Density functional approach to the frontier-electron theory of chemical reactivity. J. Am. Chem. Soc. 1984, 106, 4049–4050. [Google Scholar] [CrossRef]
  108. Nalewajski, R.F. Manifestations of the maximum complementarity principle for matching atomic softnesses in model chemisorption systems. Top. Catal. 2000, 11, 469–485. [Google Scholar] [CrossRef]
  109. Chandra, A.K.; Michalak, A.; Nguyen, M.T.; Nalewajski, R.F. On regional matching of atomic softnesses in chemical reactions: Two-reactant charge sensitivity study. J. Phys. Chem. 1998, A102, 10182–10188. [Google Scholar] [CrossRef]
  110. Hô, M.; Schmider, H.L.; Weaver, D.F.; Smith, V.H., Jr.; Sagar, R.P.; Esquivel, R.O. Shannon entropy of chemical changes: SN2 displacement reactions. Int. J. Quantum Chem. 2000, 77, 376–382. [Google Scholar] [CrossRef]
  111. López-Rosa, S.; Esquivel, R.O.; Angulo, J.C.; Antolin, J.; Dehesa, J.S.; Flores-Gallegos, N. Fisher information study in position and momentum spaces for elementary chemical reactions. J. Chem. Theory Comput. 2010, 6, 145–154. [Google Scholar] [CrossRef] [PubMed]
  112. Esquivel, R.O.; Liu, S.B.; Angulo, J.C.; Dehesa, J.S.; Antolin, J.; Molina-Espiritu, M. Fisher information and steric effect: Study of the internal rotation barrier in ethane. J. Phys. Chem. 2011, A115, 4406–4415. [Google Scholar] [CrossRef]
  113. Krokidis, X.; Noury, S.; Silvi, B. Characterization of elementary chemical processes by catastrophe theory. J. Phys. Chem. 1997, A101, 7277–7282. [Google Scholar] [CrossRef]
  114. Ndassa, I.M.; Silvi, B.; Volatron, F. Understanding reaction mechanisms in organic chemistry from catastrophe theory: Ozone addition on benzene. J. Phys. Chem. 2010, A114, 12900–12906. [Google Scholar] [CrossRef] [PubMed]
  115. Domingo, L.R. A new C-C bond formation model based on the quantum chemical topology of electron density. Rsc Adv. 2014, 4, 32415–32428. [Google Scholar] [CrossRef]
  116. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Princeton University Press: Princeton, NJ, USA, 1955. [Google Scholar]
Figure 1. Schematic diagram of the axial (bond) profiles, in section containing the “z” direction of the coordinate system (along the bond axis), of the external potential (v) and electron probability (p) in a diatomic molecule A–B demonstrating a negative character of the scalar product ∇p(r) ⋅∇v(r). It confirms the negative equilibrium contribution σIeq.(axial) of the resultant gradient information (Equations (A20) and (A21)) and positive source σMeq.(axial) of the resultant gradient entropy (Equation (A24)) in the bond formation process, due to the equilibrium current of Equation (18), jeq.(r) ∝ −∇p(r).
Figure 2. Variations of the electronic energy ΔE(R) (solid line) with the internuclear distance R in a diatomic molecule and of its kinetic energy component ΔT(R) (broken line) determined by the virial theorem partition.
Figure 3. Variations of the electronic total (E) and kinetic (T) energies in exo-ergic (ΔEr < 0) or endo-ergic (ΔEr > 0) reactions (upper Panel (a)), and on the symmetrical BO potential energy surface (PES) (ΔEr = 0) (lower Panel (b)).
Figure 4. Schematic diagram of the in situ chemical potentials μCT(B→A), determining the effective internal charge transfer (CT) from basic (B) reactant to its acidic (A) partner in A–B complexes, for their alternative hard (H) and soft (S) combinations. The subsystem hardnesses reflect the HOMO-LUMO gaps in their orbital energies.
Figure 5. Polarizational {Pα = (aαbα)} and charge-transfer {CTα = (bαaβ)} electron flows, (α, βα) ∈ {A, B}, involving the acidic A = (aA|bA) and basic B = (aB|bB) reactants in the complementary arrangement Rc of their acidic (a) and basic (b) fragments, with the chemically “hard” (acidic) fragment of one substrate facing the chemically “soft” (basic) part of its reaction partner. The polarizational flows {Pα} (black arrows) in the mutually closed substrates, relative to the substrate “promolecular” references, preserve the overall numbers of electrons of isolated reactants {α0}, while the two partial {CTi} fluxes (white arrows), from the basic fragment of one reactant to the acidic part of the other reactant, generate a substantial resultant B→A transfer of NCT = CT1 − CT2 electrons between the mutually open reactants. These hypothetical electron flows in such a “complementary complex” are seen to produce an effective concerted (“circular”) flux of electrons between the four fragments invoked in this regional “functional” partition, which precludes an exaggerated depletion or concentration of electrons on any fragment of reactive system.
Figure 6. Polarizational {Pα = (bα→aα)} and charge-transfer, CT1 = (bB→bA) and CT2 = (aB→aA), electron flows, involving the acidic A = (aA|bA) and basic B = (aB|bB) reactants in the regional HSAB complex RHSAB, in which the chemically hard (acidic) and soft (basic) fragments of one reactant coordinate to the like fragment of the other substrate. The two partial {CTi} fluxes (white arrows) now generate a moderate overall B→A transfer of NCT = CT1 + CT2 electrons between the mutually open reactants. These hypothetical electron flows in the regional HSAB complex are seen to produce a disconcerted pattern of fluxes producing an exaggerated outflow of electrons from bB and their accentuated inflow to aA.
