Foundations of Inference

1 Departments of Physics and Informatics, University at Albany (SUNY), Albany, NY 12222, USA
2 Maximum Entropy Data Consultants Ltd., Kenmare, County Kerry, Ireland
* Author to whom correspondence should be addressed.
Axioms 2012, 1(1), 38-73; https://doi.org/10.3390/axioms1010038
Submission received: 20 January 2012 / Revised: 1 June 2012 / Accepted: 7 June 2012 / Published: 15 June 2012
(This article belongs to the Special Issue Axioms: Feature Papers)

Abstract:
We present a simple and clear foundation for finite inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying lattices of logical statements in a way that satisfies general lattice symmetries. With other applications such as measure theory in mind, our derivations assume minimal symmetries, relying on neither negation nor continuity nor differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of quantifying rules that form the familiar probability calculus. We also derive a unique quantification of divergence, entropy and information.
Classification: PACS 02.50.Cw; MSC 06A05


1. Introduction

The quality of an axiom rests on it being both convincing for the application(s) in mind, and compelling in that its denial would be intolerable.
We present elementary symmetries as convincing and compelling axioms, initially for measure, subsequently for probability, and finally for information and entropy. Our aim is to provide a simple and widely comprehensible foundation for the standard quantification of inference. We make minimal assumptions—not just for aesthetic economy of hypotheses, but because simpler foundations have wider scope.
It is a remarkable fact that algebraic symmetries can imply a unique calculus of quantification. Section 2 gives the background and outlines the procedure and major results. Section 3 lists the symmetries that are actually needed to derive the results, and the following Section 4 writes each required symmetry as an axiom of quantification. In Section 5, we derive the sum rule for valuation from the associative symmetry of ordered combination. This sum rule is the basis of measure theory. It is usually taken as axiomatic, but in fact it is derived from compelling symmetry, which explains its wide utility. There is also a direct-product rule for independent measures, again derived from associativity. Section 6 derives from the direct-product rule a unique quantitative divergence from source measure to destination.
In Section 7 we derive the chain product rule for probability from the associativity of chained order (in inference, implication). Probability calculus is then complete. Finally, Section 8 derives the Shannon entropy and information (a.k.a. Kullback–Leibler) as special cases of divergence of measures. All these formulas are uniquely defined by elementary symmetries alone.
Our approach is constructivist, and we avoid unnecessary formality that might unduly confine our readership. Sets and quantities are deliberately finite since it is methodologically proper to axiomatize finite systems before any optional passage towards infinity. R.T. Cox [1] showed the way by deriving the unique laws of probability from logical systems having a mere three elementary “atomic” propositions. By extension, those same laws applied to Boolean systems with arbitrarily many atoms and ultimately, where appropriate, to well-defined infinite limits. However, Cox needed to assume continuity and differentiability to define the calculus to infinite precision. Instead, we use arbitrarily many atoms to define the calculus to arbitrarily fine precision. Avoiding infinity in this way yields results that cover all practical applications, while avoiding unobservable subtleties.
Our approach unites and significantly extends the set-based approach of Kolmogorov [2] and the logic-based approach of Cox [1], to form a foundation for inference that yields not just probability calculus, but also the unique quantification of divergence and information.

2. Setting the Scene

We model the world (or some interesting aspect of it) as being in a particular state out of a finite set of mutually exclusive states (as in Figure 1, left). Since we and our tools are finite, a finite set of states, albeit possibly very large in number, suffices for all practical modeling.
As applied to inference, each state of the world is associated, via isomorphism, with a statement about the world. This results in a set of mutually exclusive statements, which we call atoms. Atoms are combined through logical OR to form compound statements comprising the elements of a Boolean lattice (Figure 1, right), which is isomorphic to a Boolean lattice of sets (Figure 1, center). Although carrying different interpretations, the mathematical structures are identical. Set inclusion “⊂” is equivalent to logical implication “⇒”, which we abstract to lattice order “<”. It is a matter of choice whether to include the null set ∅, equivalent to the logical absurdity ⊥. The set-based view is ontological in character and associated with Kolmogorov, while the logic-based view is epistemological in character and associated with Cox.
Figure 1. The Boolean lattice of potential states (center) is constructed by taking the $2^N$ powerset of an antichain of $N$ mutually exclusive atoms (in this case $\mathtt{a}_1, \mathtt{a}_2, \mathtt{a}_3$; left). This lattice is isomorphic to the Boolean lattice of logical statements ordered by logical implication (right).
Quantification proceeds by assigning a real number $m(\mathtt{x}) = x$, called a valuation, to each element $\mathtt{x}$. (Typewriter font denotes lattice elements $\mathtt{x}$, whereas their associated valuations (real numbers) $x$ are shown in italic.) We require valuations to be faithful to the lattice, in the sense that
$$\underbrace{\mathtt{x} < \mathtt{y}}_{\text{lattice elements}} \;\Longrightarrow\; \underbrace{x < y}_{\text{real numbers}}$$
so that compound elements carry greater value than any of their components. Clearly, this by itself is only a weak restriction on the behavior of valuation.
Combination of two atoms (or disjoint compounds) into their compound is written with the operator ⊔, for example $\mathtt{z} = \mathtt{x} \sqcup \mathtt{y}$. Our first step is to quantify the combination of disjoint elements through an operator ⊕ that combines values (Table 1 below lists such operators and their eventual identifications).
$$\underbrace{z = x \oplus y}_{\text{real numbers}} \quad\text{representing}\quad \underbrace{\mathtt{z} = \mathtt{x} \sqcup \mathtt{y}}_{\text{joined elements}}$$
We find that the symmetries underlying ⊔ place constraints on ⊕ that effectively require it to be addition +. At this stage, we already have the foundation of measure theory, and the generalization of combination (of disjoint elements) to the lattice join (of arbitrary elements) is straightforward. The wide applicability of these underlying symmetries explains the wide utility of measure theory, which might otherwise be mysterious.
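As a concrete numerical illustration (a minimal Python sketch of our own; the choice Θ = log is illustrative, not the paper's), the snippet below checks that an order-preserving regrade Θ of ordinary addition, $x \oplus y = \Theta^{-1}(\Theta(x) + \Theta(y))$, still satisfies the order and associativity constraints that ⊔ imposes on ⊕:

```python
import math

def regrade_sum(x, y, theta, theta_inv):
    """Combine values via x (+) y = theta_inv(theta(x) + theta(y))."""
    return theta_inv(theta(x) + theta(y))

# An illustrative order-preserving regrade: theta = log, theta_inv = exp,
# under which the regraded "sum" becomes ordinary multiplication.
theta, theta_inv = math.log, math.exp

def op(x, y):
    return regrade_sum(x, y, theta, theta_inv)

x, y, z = 2.0, 3.0, 5.0

# Associativity: (x (+) y) (+) z == x (+) (y (+) z)
assert math.isclose(op(op(x, y), z), op(x, op(y, z)))

# Order: x < y implies x (+) z < y (+) z and z (+) x < z (+) y
assert op(x, z) < op(y, z) and op(z, x) < op(z, y)

print(op(x, y))   # ~6.0 = 2 * 3: addition re-expressed on a logarithmic grade
```

Reverting by Θ recovers plain addition, which is why such a regrade carries no extra generality.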
Table 1. Operators and their symbols.

| Operation | Symbol | Quantification | (Eventual form) |
| --- | --- | --- | --- |
| ordering | < | < | |
| combination | ⊔ | ⊕ | (addition) |
| direct product | × | ⊗ | (multiplication) |
| chaining | , | ⊙ | (multiplication) |
We can consider the atoms $\mathtt{a}_1, \mathtt{a}_2, \mathtt{a}_3, \ldots, \mathtt{a}_N$ and $\mathtt{b}_1, \mathtt{b}_2, \ldots, \mathtt{b}_M$ from separate problems as $NM$ composite atoms $\mathtt{c}_{ij} = \mathtt{a}_i \times \mathtt{b}_j$ in an equivalent composite problem. The direct-product operator ⊗ quantifies the composition of values:
$$\underbrace{c = a \otimes b}_{\text{real numbers}} \quad\text{representing}\quad \underbrace{\mathtt{c} = \mathtt{a} \times \mathtt{b}}_{\text{composite element}}$$
We find that the symmetries of × place constraints on ⊗ that require it to be multiplication.
It is common in science to acquire numerical assignments by optimizing a variational potential. By requiring consistency with the numerical assignments of ordinary multiplication, we find that there is a unique variational potential $H(\mathbf{p} \parallel \mathbf{q})$, of "$p \log p$" form, known as the (generalized Kullback–Leibler) Bregman divergence of measure $\mathbf{p}$ from measure $\mathbf{q}$.
Inference involves the relationship of one logical statement (predicate $\mathtt{x}$) to another (context $\mathtt{t}$), initially in a situation where $\mathtt{x} \leq \mathtt{t}$ so that the context includes subsidiary predicates. To quantify inference, we assign real numbers $p(\mathtt{x} \mid \mathtt{t})$, ultimately recognized as probability, to predicate–context intervals $[\mathtt{x}, \mathtt{t}]$. Such intervals can be chained (concatenated) so that $[\mathtt{x},\mathtt{z}] = \big[[\mathtt{x},\mathtt{y}], [\mathtt{y},\mathtt{z}]\big]$, with ⊙ representing the chaining of values.
$$\underbrace{p(\mathtt{x}\mid\mathtt{z}) = p(\mathtt{x}\mid\mathtt{y}) \odot p(\mathtt{y}\mid\mathtt{z})}_{\text{real numbers}} \quad\text{representing}\quad \underbrace{[\mathtt{x},\mathtt{z}] = \big[[\mathtt{x},\mathtt{y}], [\mathtt{y},\mathtt{z}]\big]}_{\text{chained intervals}}$$
We find that the symmetries of chaining require ⊙ to be multiplication, yielding the product rule of probability calculus. When applied to probabilities, the divergence formula reduces to the information, also known as the Kullback–Leibler formula, with entropy being a variant.

2.1. The Order-Theoretic Perspective

The approach we employ can be described in terms of order-preserving (monotonic) maps between order-theoretic structures. Here we present our approach, described above, from this different perspective.
Order-theoretically, a finite set of exclusive states can be represented as an antichain, illustrated in Figure 1(left) as three states a 1 , a 2 , and a 3 situated side-by-side. Our state of knowledge about the world (more precisely, of our model of it—we make no ontological claim) is often incomplete so that we can at best say that the world is in one of a set of potential states, which is a subset of the set of all possible states. In the case of total ignorance, the set of potential states includes all possible states. In contrast, perfect knowledge about our model is represented by singleton sets consisting of a single state. We refer to the singleton sets as atoms, and note that they are exclusive in the sense that no two can be true.
The space of all possible sets of potential states is given by the partially-ordered set obtained from the powerset of the set of states ordered by set inclusion. For an antichain of mutually exclusive states, the powerset is a Boolean lattice (Figure 1, center), with the bottom element optional. By conceiving of a statement about our model of the world in terms of a set of potential states, we have an order-isomorphism from the Boolean lattice of potential states ordered by set inclusion to the Boolean lattice of statements ordered by logical implication (Figure 1, right). This isomorphism maps each set of potential states to a statement, while mapping the algebraic operations of set union ∪ and set intersection ∩ to the logical OR and AND, respectively.
The perspective provided by order theory enables us to focus abstractly on the structure of a Boolean lattice with its generic algebraic operations join ∨ and meet ∧. This immediately broadens the scope from Boolean to more general distributive lattices, the first fruit of our minimalist approach. For additional details on partially ordered sets and lattices in particular, we refer the interested reader to the classic text by Birkhoff [3] or the more recent text by Davey & Priestley [4].
Quantification proceeds by assigning valuations $m(\mathtt{x}) = x$ to elements $\mathtt{x}$, to form a real-valued representation. For this to be faithful, we require an order-preserving (monotonic) map between the partial order of a distributive lattice and the total order of the chains that are to be found within. Thus $\mathtt{x} < \mathtt{y}$ is to imply that $x < y$, a relationship that we call fidelity. The converse is not true: the total order imposed by quantification must be consistent with, but can extend, the partial order of the lattice structure.
We write the combination of two atoms into a compound element (and more generally any two disjoint compounds into a compound element) as ⊔, for example $\mathtt{z} = \mathtt{x} \sqcup \mathtt{y}$. Derivation of the calculus of quantification starts with this disjoint combination operator, where we find that its symmetries place constraints on its representation ⊕ that allow us the convention of ordinary addition "⊕ = +". This basic result generalizes to the standard join lattice operator ∨ for elements that (possibly having atoms in common) need not be disjoint, for which the sum rule generalizes to its standard inclusion/exclusion form [5], which involves the meet ∧ for any atoms in common.
There are two mathematical conventions concerning the handling of the nothing-is-true null element ⊥ at the bottom of the lattice, known as the absurdity. Some mathematicians opt to include the bottom element on aesthetic grounds, whereas others opt to exclude it because of its paradoxical interpretation [4]. If it is included, its quantification is zero. Either way, fidelity ensures that the other elements are quantified by values that are positive (or, by elementary generalization, zero). At this stage, we already have the foundation of measure theory.
Logical deduction is traditionally based on a Boolean lattice and proceeds “upwards” along a chain (as in the arrows sketched in Figure 1). Given some statement x , one can deduce that x implies x OR y since x OR y includes x . Similarly, x AND y implies x since x includes x AND y . The ordering relationships among the elements of the lattice are encoded by the zeta function of the lattice [6]
$$\text{zeta function:}\qquad \zeta(\mathtt{x}, \mathtt{y}) = \begin{cases} 1 & \text{if } \mathtt{x} \leq \mathtt{y} \\ 0 & \text{if } \mathtt{x} \not\leq \mathtt{y} \end{cases}$$
Deduction is definitive.
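As a minimal computational sketch (our own illustration, with atom names assumed), the zeta function of the Boolean lattice over three atoms can be tabulated directly from subset inclusion:

```python
from itertools import combinations

atoms = ['a1', 'a2', 'a3']

# The 2^N powerset of the antichain of atoms, as frozensets.
lattice = [frozenset(c) for r in range(len(atoms) + 1)
           for c in combinations(atoms, r)]

def zeta(x, y):
    """Encodes the lattice order: 1 if x is included in y, else 0."""
    return 1 if x <= y else 0

x, y = frozenset({'a1'}), frozenset({'a1', 'a2'})
print(zeta(x, y), zeta(y, x))   # 1 0: a1 implies (a1 OR a2), not conversely
print(sum(zeta(u, v) for u in lattice for v in lattice))  # 27 ordered pairs
```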
Inference, or logical induction, is the inverse of deduction and proceeds "downwards" along a chain, losing logical certainty as knowledge fragments. Our aim is to quantify this loss of certainty, in the expectation of deriving probability calculus. This requires generalization of the binary zeta function $\zeta(\mathtt{x}, \mathtt{y})$ to some real-valued function $p(\mathtt{x} \mid \mathtt{y})$, which will turn out to be the standard probability of $\mathtt{x}$ GIVEN $\mathtt{y}$. However, a firm foundation for inference must be devoid of a choice of arbitrary generalizations. By viewing quantification in terms of an order-preserving map between the partial order (Boolean lattice) and a total order (chain) subject to compelling symmetries alone, we obtain a firm foundation for inference, devoid of further assumptions of questionable merit.
By considering atoms (singleton sets, which are the join-irreducible elements of the Boolean lattice) as precise statements about exclusive states, and composite lattice elements (sets of several exclusive states) as less precise statements involving a degree of ignorance, the two perspectives of logic and sets, on which the Cox and Kolmogorov foundations are based, become united within the order-theoretic framework.
In summary, the powerset comprises the hypothesis space of all possible statements that one can make about a particular model of the world. Quantification of join using + is the sum rule of probability calculus, and is required by adherence to the symmetries we list. It fixes the valuations assigned to composite elements in terms of valuations assigned to the atoms. Those latter valuations assigned to the atoms remain free, unconstrained by the calculus. That freedom allows the calculus to apply to inference in general, with the mathematically-arbitrary atom valuations being guided by insight into a particular application.

2.2. Commentary

Our results—the sum rule and divergence for measures, and the sum and product rules with information for probabilities—are standard and well known (their uniqueness perhaps less so). The matter we address here is which assumptions are necessary and which are not. A Boolean lattice, after all, is a special structure with special properties. Insofar as fewer properties are needed, we gain generality. Wider applicability may be of little value to those who focus solely on inference. Yet, by showing that the basic foundations of inference have wider scope, we can thereby offer extra—and simpler—guidance to the scientific community at large.
Even within inference, distributive problems may have relationships between their atoms such that not all combinations of states are allowed. Rather than extend a distributive lattice to Boolean by padding it with zeros, the tighter framework immediately empowers us to work with the original problem in its own right. Scientific problems (say, the propagation of particles, or the generation of proteins) are often heavily conditional, and it could well be inappropriate or confusing to go to a full Boolean lattice when a sparser structure is a more natural model.
We also confirm that commutativity is not a necessary assumption. Rather, commutativity of measure is imposed by the associativity and order required of a scalar representation. Contrapositively, systems that are not commutative (matrices under multiplication, for example) cannot be both associative and ordered.

3. Symmetries

Here, we list the relevant symmetries on which our axioms are based. All are properties of distributive lattices, and our descriptions are styled that way so that a reader wary of further generality does not need to move beyond this particular, and important, example. However, one may note that not all the properties of a distributive lattice (such as commutativity of the join) are listed, which implies that these results are applicable to a broader class of algebraic structures that includes distributive lattices.
Valuation assignments rank statements via an order-preserving map which we call fidelity.
$$\text{Symmetry 0:}\qquad \underbrace{\mathtt{x} < \mathtt{y}}_{\text{lattice elements}} \;\Longrightarrow\; \underbrace{x < y}_{\text{real numbers}}$$
It is a matter of convention that we choose to order the valuations in the same sense as the lattice order (“more is bigger”). Reverse order would be admissible and logically equivalent, though less convenient.
In the specific case of Boolean lattices of logical statements, the binary ordering relation, represented generically by <, is equivalent to logical implication (⇒) between different statements, or equivalently, proper subset inclusion (⊂) in the powerset representation. Combination preserves order from the right and from the left
$$\text{Symmetry 1:}\qquad \mathtt{x} < \mathtt{y} \;\Longrightarrow\; \begin{cases} \mathtt{x} \sqcup \mathtt{z} < \mathtt{y} \sqcup \mathtt{z} \\ \mathtt{z} \sqcup \mathtt{x} < \mathtt{z} \sqcup \mathtt{y} \end{cases}$$
for any z (a property that can be viewed as distributivity of ⊔ over <) on the grounds that ordering needs to be robust if it is to be useful. Combination is also taken to be associative
$$\text{Symmetry 2:}\qquad (\mathtt{x} \sqcup \mathtt{y}) \sqcup \mathtt{z} = \mathtt{x} \sqcup (\mathtt{y} \sqcup \mathtt{z})$$
Independent systems can be considered together (Figure 2).
Figure 2. One system might, for example, be playing-card suits $\mathtt{x} \in \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$, while another independent system might be music keys $\mathtt{t} \in \{\flat, \natural, \sharp\}$. The direct-product combines the spaces of $\mathtt{x}$ and $\mathtt{t}$ to form the joint space of $\mathtt{x} \times \mathtt{t}$ with atoms like $\heartsuit \times \sharp$.
The direct-product operator × is taken to be (right-)distributive over ⊔
$$\text{Symmetry 3:}\qquad (\mathtt{x} \times \mathtt{t}) \sqcup (\mathtt{y} \times \mathtt{t}) = (\mathtt{x} \sqcup \mathtt{y}) \times \mathtt{t}$$
so that relationships in one set, such as perhaps $\heartsuit < \heartsuit \sqcup \spadesuit$, remain intact whether or not an independent element from the other, such as perhaps ♮, is appended. Left distributivity may well hold but is not needed. The direct product of independent lattices is also taken to be associative (Figure 3).
$$\text{Symmetry 4:}\qquad (\mathtt{u} \times \mathtt{v}) \times \mathtt{w} = \mathtt{u} \times (\mathtt{v} \times \mathtt{w})$$
Figure 3. Associativity of direct product can be viewed geometrically.
Finally, we consider a totally ordered set of logical statements that form a chain $\mathtt{x} < \mathtt{y} < \mathtt{z} < \mathtt{t}$. We focus on an interval on the chain, which is defined by an ordered pair of logical statements $[\mathtt{x}, \mathtt{t}]$. Adjacent intervals can be chained, as in $[\mathtt{x},\mathtt{y}], [\mathtt{y},\mathtt{z}] = [\mathtt{x},\mathtt{z}]$, and chaining is associative
$$\big([\mathtt{x},\mathtt{y}], [\mathtt{y},\mathtt{z}]\big), [\mathtt{z},\mathtt{t}] = [\mathtt{x},\mathtt{y}], \big([\mathtt{y},\mathtt{z}], [\mathtt{z},\mathtt{t}]\big)$$
Using Greek symbols to represent an interval, $\alpha = [\mathtt{x},\mathtt{y}]$, $\beta = [\mathtt{y},\mathtt{z}]$, $\gamma = [\mathtt{z},\mathtt{t}]$, we have
$$\text{Symmetry 5:}\qquad (\alpha, \beta), \gamma = \alpha, (\beta, \gamma)$$
These and these alone are the symmetries we need for the axioms of quantification. They are presented as a cartoon in the “Conclusions” section below.

4. Axioms

We now introduce a layer of quantification. Our axioms arise from the requirement that any quantification must be consistent with the symmetries indicated above. Therefore, each symmetry gives rise to an axiom. We seek scalar valuations to be assigned to elements of a lattice, while conforming to the above symmetries (#0–#5) for disjoint elements.
Fidelity (symmetry #0) requires us to choose an increasing measure so that, without loss of generality, we may set $m(\bot) = 0$ and thereafter
$$\text{Axiom 0:}\qquad x > 0$$
To conform to the ordering symmetry #1, we require ⊕ as set up in Equation 2 to obey
$$\text{Axiom 1:}\qquad x < y \;\Longrightarrow\; \begin{cases} x \oplus z < y \oplus z \\ z \oplus x < z \oplus y \end{cases}$$
To conform to the associative symmetry #2, we also require ⊕ to obey
$$\text{Axiom 2:}\qquad (x \oplus y) \oplus z = x \oplus (y \oplus z)$$
These equations are to hold for arbitrary values x, y, z assigned to the disjoint x , y , z . Appendix A will show that these order and associativity axioms are necessary and sufficient to determine the additive calculus of measure.
To conform to the distributive symmetry #3, we require ⊗ as set up in Equation 3 to obey
$$\text{Axiom 3:}\qquad (x \otimes t) \oplus (y \otimes t) = (x \oplus y) \otimes t$$
for disjoint x and y combined with any t from the second lattice. Presence of t may change the measures, but does not change their underlying additivity. To conform to the associative symmetry #4, we also require ⊗ to obey
$$\text{Axiom 4:}\qquad (u \otimes v) \otimes w = u \otimes (v \otimes w)$$
These axioms determine the multiplicative form of ⊗ and also lead to a unique divergence between measures.
To conform to the associative symmetry #5, we require ⊙ as set up in Equation 4 to obey
$$\text{Axiom 5:}\qquad \big(p(\alpha) \odot p(\beta)\big) \odot p(\gamma) = p(\alpha) \odot \big(p(\beta) \odot p(\gamma)\big)$$
where $\alpha = [\mathtt{x},\mathtt{y}]$, $\beta = [\mathtt{y},\mathtt{z}]$, $\gamma = [\mathtt{z},\mathtt{t}]$ are individual steps concatenated along the chain $\alpha, \beta, \gamma$, which is $\big([\mathtt{x},\mathtt{y}], [\mathtt{y},\mathtt{z}], [\mathtt{z},\mathtt{t}]\big) = [\mathtt{x},\mathtt{t}]$. This final axiom will let us pass from measure to probability and Bayes' theorem, and from divergence to information and entropy. For each operator (Table 1), the eventual form satisfies all relevant axioms, which assures existence. Uniqueness remains to be demonstrated.

5. Measure

Preliminary to investigating probability, we attend to the foundation of measure.

5.1. Disjoint Arguments

According to the scalar associativity theorem (Appendix A), an operator ⊕ obeying axioms 1 and 2 exists and can without loss of generality be taken to be addition +, giving the sum rule.
$$\text{Sum rule:}\qquad x \oplus y = x + y$$
Commutativity $x \oplus y = y \oplus x$, though not explicitly assumed, is an unsurprising property. In accordance with fidelity (axiom 0), element values are strictly positive, $x > 0$. In this form, positive-valued valuation $m(\mathtt{x}) = x$ of lattice elements is known as a measure. If the null element is included as the bottom of the lattice, it has zero value.
Whilst we are free to adopt additivity as a convenient convention, we are also free to adopt any order-preserving regrade Θ for which the rule would be
$$x \oplus y = \Theta^{-1}\big(\Theta(x) + \Theta(y)\big)$$
This carries no extra generality because this form can be reverted to additivity by applying Θ, but we will need such alternative grading later to avoid inconsistency between different assignments. There is no other freedom. If the linear form of the sum rule is to be maintained, the only freedom is linear rescaling $\Theta(x) = Kx$, with $K > 0$ to retain positivity.
Measure theory (see for example [7]) is usually introduced with additivity (countably additive or σ-additive) and non-negativity as “obvious” basic assumptions, with emphasis on the technical control of infinity in unbounded applications. Here we emphasize the foundation, and discover the reason why measure theory is constructed as it is. The symmetries of combination require it. Any other formulation would break these basic properties of associativity and order, and would not yield a widely useful theory.

5.2. Arbitrary Arguments

For elements x and y that need not be disjoint, their join ∨ is defined as comprising all their constituent atoms counted once only, and the meet ∧ as comprising those atoms they have in common. In inference, ∨ is logical OR and ∧ is logical AND.
By putting $\mathtt{x} = \mathtt{u} \sqcup \mathtt{v}$ and $\mathtt{y} = \mathtt{v} \sqcup \mathtt{w}$ for disjoint $\mathtt{u}, \mathtt{v}, \mathtt{w}$, we reach the general "inclusion/exclusion" sum rule for arbitrary $\mathtt{x}$ and $\mathtt{y}$
$$m(\mathtt{x} \vee \mathtt{y}) + m(\mathtt{x} \wedge \mathtt{y}) = m(\mathtt{x}) + m(\mathtt{y})$$
Commutativity of join and meet follows:
$$m(\mathtt{x} \vee \mathtt{y}) = m(\mathtt{y} \vee \mathtt{x}), \qquad m(\mathtt{x} \wedge \mathtt{y}) = m(\mathtt{y} \wedge \mathtt{x}).$$
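A small numeric check of the inclusion/exclusion sum rule, with illustrative atom values of our own choosing (any positive numbers would do):

```python
value = {'a1': 1.0, 'a2': 2.5, 'a3': 0.7}   # arbitrary positive atom values

def m(x):
    """Measure of a lattice element, represented as its set of atoms."""
    return sum(value[a] for a in x)

x, y = {'a1', 'a2'}, {'a2', 'a3'}

# m(x join y) + m(x meet y) == m(x) + m(y)
lhs = m(x | y) + m(x & y)
rhs = m(x) + m(y)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)   # 6.7 6.7
```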

5.3. Independence

From the associativity of direct product (axiom 4), the associativity theorem (Appendix A again) assures the existence of an additivity relationship of the form
$$\Theta(x \otimes t) = \Theta(x) + \Theta(t)$$
for some invertible function Θ of the measures $x = m(\mathtt{x})$, $t = m(\mathtt{t})$ and $x \otimes t = m(\mathtt{x} \times \mathtt{t})$. We cannot proceed as before to re-grade in terms of $\Theta(m)$ to supersede m, because we are already using additivity
$$x \otimes t + y \otimes t = (x + y) \otimes t$$
(axiom 3, distributivity of ⊗ over "⊕ = +") to define the grade. Instead, we require consistency with the sum-rule behavior for $\mathtt{x} \times \mathtt{t}$ and $\mathtt{y} \times \mathtt{t}$. Defining $\Psi = \Theta^{-1}$ gives, term by term,
$$\Psi(\xi + \tau) + \Psi(\eta + \tau) = \Psi\big(\zeta(\xi, \eta) + \tau\big)$$
where
$$\xi = \Theta(x), \quad \eta = \Theta(y), \quad \zeta = \Theta(x + y), \quad \tau = \Theta(t).$$
Among these variables, ξ , η , τ are independent, but (through the sum rule), ζ depends on ξ and η but not τ. This is the product equation. By definition, Ψ returns a measure, so it is positive.
The product theorem (Appendix B) shows Θ to be logarithmic, with Equation 23 reading
$$\frac{1}{A} \log \frac{x \otimes t}{C} = \frac{1}{A} \log \frac{x}{C} + \frac{1}{A} \log \frac{t}{C}$$
with A and C universal constants (A cancelling out), and C being positive. The obvious convention C = 1 loses no generality, and shows ⊗ to be simple multiplication
$$\text{Direct-product rule:}\qquad x \otimes t = x\, t$$
Measures are required to multiply, because of associativity of direct product, and the "⊗ t" operation is represented by "scale by t". This is consistent with linear rescaling (here depending on the second factor t) being the only allowed freedom for the measure assigned to the first factor x.
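Continuing the playing-card and music-key picture of Figure 2, here is a hedged sketch (all measure values are illustrative choices of our own) of independent measures combining under the direct-product rule:

```python
# Independent systems with illustrative measure values.
suits = {'club': 1.0, 'diamond': 1.0, 'heart': 1.0, 'spade': 1.0}
keys = {'flat': 2.0, 'natural': 3.0, 'sharp': 5.0}

# Composite atoms valued by the direct-product rule: m(a x b) = m(a) * m(b).
composite = {(s, k): suits[s] * keys[k] for s in suits for k in keys}

# Axiom 3 in numbers: appending an independent t preserves additivity.
x, y, t = suits['heart'], suits['spade'], keys['sharp']
assert x * t + y * t == (x + y) * t

print(sum(composite.values()))   # 40.0 = (1+1+1+1) * (2+3+5): totals multiply
```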

6. Variation

Variational principles are common in science—minimum energy for equilibrium, Hamilton’s principle for dynamics, maximum entropy for thermodynamics, and so on—and we seek one for measures. The aim is to discover a variational potential H ( m ) whose constrained minimum allows the valuations m = ( m 1 , m 2 , , m N ) of N atoms to be assigned subject to appropriate constraints of the form f ( m ) = constant . (The vectors which appear in this section are shown in bold-face font.)
The variational potential is required to be general, applying to arbitrary constraints. Just like values themselves, constraints on individual atom values can be combined into compound constraints that influence several values: indeed the constraints could simply be imposition of definitive values. Such combination allows a Boolean lattice, entirely analogous to Figure 1, to be developed from individual atomic constraints. The variational potential H is to be a valuation on the measures resulting from these constraints, combination being represented by some operator ⊕ so that
$$H(\mathtt{x}\ \text{WITH}\ \mathtt{y}) = H(\mathtt{x}) \oplus H(\mathtt{y})$$
for constraints acting on disjoint atoms or compounds.
Adding extra constraints always increases H, otherwise the variational requirement would be broken, so H must be faithful to chaining in the lattice.
$$\underbrace{\mathtt{x} < \mathtt{y}}_{\text{chained}} \;\Longrightarrow\; \underbrace{H(\mathtt{x}) < H(\mathtt{y})}_{\text{real numbers}}$$
We also have order
$$H(\mathtt{x}) < H(\mathtt{y}) \;\Longrightarrow\; \begin{cases} H(\mathtt{x}) \oplus H(\mathtt{z}) < H(\mathtt{y}) \oplus H(\mathtt{z}) \\ H(\mathtt{z}) \oplus H(\mathtt{x}) < H(\mathtt{z}) \oplus H(\mathtt{y}) \end{cases}$$
because if y is a “harder” constraint than x (meaning H ( y ) > H ( x ) ), that ranking should not be affected by some other constraint on something else. Associativity
$$\big(H(\mathtt{x}) \oplus H(\mathtt{y})\big) \oplus H(\mathtt{z}) = H(\mathtt{x}) \oplus \big(H(\mathtt{y}) \oplus H(\mathtt{z})\big)$$
is likewise required and expresses the combination of three constraints. It would also be natural to assume commutativity, $H(\mathtt{x}) \oplus H(\mathtt{y}) = H(\mathtt{y}) \oplus H(\mathtt{x})$, but that is not necessary because we already recognize Equations 30–32 as our axioms 0, 1, 2. Hence, using Appendix A again, there exists a "⊕ = +" grade on which H is additive.
$$H(\mathbf{m}) = \sum_{\text{atoms } i} H_i(m_i)$$
We have now justified additivity, thus filling a gap in traditional accounts of the calculus of variations.
Under perturbation, the minimization requirement is
$$\delta H(\mathbf{m}) \geq 0 \quad\text{when}\quad \delta f_1(\mathbf{m}) = \delta f_2(\mathbf{m}) = \cdots = 0$$
The standard “ = + ” form of the sum rule happens to be continuous and differentiable, so is applicable to valuation of systems that differ arbitrarily little. We adopt it, and can then justifiably require the variational potential to be valid for arbitrarily small perturbations:
$$dH(\mathbf{m}) = 0 \quad\text{when}\quad df_1(\mathbf{m}) = df_2(\mathbf{m}) = \cdots = 0$$
This limit Equation 35 is weaker than the original Equation 34 not only because of the restricted context, but also because the nature of the extremum (maximum or minimum or saddle) is lost in the discarded second-order effects. However, it still needs to be satisfied. It also shows that any variational potential must by its nature be differentiable at least once.
One now invents supposedly constant "Lagrange multiplier" coefficients $\lambda_1, \lambda_2, \ldots$ and considers what appears at first to be the different problem of solving
$$d\big(H(\mathbf{m}) - \lambda_1 f_1(\mathbf{m}) - \lambda_2 f_2(\mathbf{m}) - \cdots\big) = 0 \quad\text{under arbitrary perturbation}$$
for m . Clearly, Equation 36 is equivalent to Equation 35 for perturbations that happen to hold the f’s constant ( d f = 0 ). However, the values those f’s take may well be wrong. The trick is to choose the λ’s so that the f’s take their correct constraint values. That being done, Equation 36 solves the variational problem Equation 35.
Let the application be two-dimensional, x-by-y, in the sense of applying to values $m(\mathtt{x} \times \mathtt{y})$ of elements on a direct-product lattice. Suppose we have x-dependent constraints that yield $m(\mathtt{x}) = m_x$ on one factor (say the card suits in Figure 2 above), and similar y-dependent constraints that yield $m(\mathtt{y}) = m_y$ on the other factor (say music keys in Figure 2). Both factors being thus controlled, their direct-product is implicitly controlled by those same constraints. Here, we already know the target value $m(\mathtt{x} \times \mathtt{y}) = m_x m_y$ from the direct-product rule Equation 28. Hence the variational assignment for the particular value $m(\mathtt{x} \times \mathtt{y})$ derives from
$$H'_{xy}(m_x m_y) = \lambda_1 f'_1(m_x) + \lambda_2 f'_2(m_y)$$
(where′ indicates derivative). The variational theorem (Appendix C) gives the solution of this functional equation as
$$H_i(m_i) = A_i + B_i m_i + C_i (m_i \log m_i - m_i)$$
for the individual valuation being considered, where A i , B i , C i are constants. Combining all the atoms yields
$$H(\mathbf{m}) = \sum_{\text{atoms } i} \Big[ A_i + B_i m_i + C_i (m_i \log m_i - m_i) \Big]$$
The coefficient C i represents the intrinsic importance of atom a i in the summation, but usually the atoms are a priori equivalent so that the C’s take a common value. The scaling of a variational potential is arbitrary (and is absorbed in the Lagrange multipliers), so we may set C = 1 , ensuring that H has a minimum rather than a maximum. Alternatively, C = - 1 would ensure a maximum. However, the settings of A and B depend on the application.

6.1. Divergence and Distance

One use of H is as a quantifier of the divergence of destination values $\mathbf{w}$ from source values $\mathbf{u}$ that existed before the constraints that led to $\mathbf{w}$ were applied. For this, we set $C = 1$ to get a minimum, $B_i = -\log u_i$ to place the unconstrained minimizing $\mathbf{w}$ at $\mathbf{u}$, and $A_i = u_i$ to make the minimum value zero. This form is
$$\text{Divergence:}\qquad H(\mathbf{w} \parallel \mathbf{u}) = \sum_{\text{atoms } i} \big( u_i - w_i + w_i \log (w_i / u_i) \big)$$
This formula is unique: none other has the properties (Equations 33 and 37) that elementary applications require. Equivalently, any different formula would give unjustifiable answers in those applications. Plausibly, H is non-negative, $H(\mathbf{w} \parallel \mathbf{u}) \geq 0$ with equality if and only if $\mathbf{w} = \mathbf{u}$, so that it usefully quantifies the separation of destination from source.
In general, H obeys neither commutativity nor the triangle inequality: $H(\mathbf{w} \parallel \mathbf{u}) \neq H(\mathbf{u} \parallel \mathbf{w})$ and $H(\mathbf{w} \parallel \mathbf{u}) \not\leq H(\mathbf{w} \parallel \mathbf{v}) + H(\mathbf{v} \parallel \mathbf{u})$ in general. Hence it cannot be a geometrical "distance", which is required to have both those properties. In fact, there is no definition of geometrical measure-to-measure distance that obeys the basic symmetries, because H is the only candidate, and it fails.
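The following sketch (illustrative measure values; the function name is ours) evaluates the divergence and exhibits the non-negativity and asymmetry just described:

```python
import math

def divergence(w, u):
    """H(w || u) = sum_i (u_i - w_i + w_i log(w_i / u_i))."""
    return sum(ui - wi + wi * math.log(wi / ui) for wi, ui in zip(w, u))

u = [1.0, 2.0, 3.0]   # source measure (illustrative)
w = [2.0, 2.0, 2.0]   # destination measure (illustrative)

print(divergence(w, u))   # ~0.575: positive, since w != u
print(divergence(u, u))   # 0.0: zero exactly when destination equals source
print(divergence(u, w))   # ~0.523: differs from H(w || u), so not symmetric
```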
Here again we see our methodology yielding clear insight. “From–to” can be usefully quantified, but “between” cannot. A space of measures may have connectedness, continuity, even differentiability, but it cannot become a metric space and remain consistent with its foundation.
In the limit of many small values, H admits a continuum limit
$$H(w \parallel u) = \int \big( u(\theta) - w(\theta) + w(\theta) \log (w(\theta) / u(\theta)) \big)\, d\theta$$
The constraints that force a measure away from the original source may admit several destinations, but minimizing H is the unique rule that defines a defensibly optimal choice. This is the rationale behind maximum entropy data analysis [8].
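As a hedged sketch of that rationale (the constraint, source values, and bisection solver are illustrative choices of our own), minimizing $H(\mathbf{w} \parallel \mathbf{u})$ subject to a single linear constraint $\sum_i f_i w_i = F$ gives the Lagrange condition $\log(w_i / u_i) = \lambda f_i$, i.e. $w_i = u_i e^{\lambda f_i}$, with λ fixed by the constraint:

```python
import math

def min_divergence(u, f, F, lo=-50.0, hi=50.0):
    """Minimize H(w || u) subject to sum_i f_i * w_i = F.
    Lagrange condition: w_i = u_i * exp(lam * f_i); the constraint value
    g(lam) = sum_i f_i * w_i is strictly increasing in lam, so bisect."""
    def g(lam):
        return sum(fi * ui * math.exp(lam * fi) for fi, ui in zip(f, u))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < F else (lo, mid)
    lam = 0.5 * (lo + hi)
    return [ui * math.exp(lam * fi) for fi, ui in zip(f, u)]

u = [1.0, 1.0, 1.0]        # uniform source measure (illustrative)
f = [1.0, 2.0, 3.0]        # constraint coefficients (illustrative)
w = min_divergence(u, f, F=4.0)
print(w, sum(fi * wi for fi, wi in zip(f, w)))   # constraint met: ~4.0
```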

7. Probability Calculus

In inference, we seek to impose on the hypothesis space a quantified degree of implication $p(\mathtt{x} \mid \mathtt{t})$, to represent the plausibility of predicate $\mathtt{x}$ conditional on current knowledge that excludes all hypotheses outside the stated context $\mathtt{t}$. This is accomplished via a bivaluation, a function that takes a pair of lattice elements to a real number. This bivaluation should depend on both $\mathtt{x}$ (obviously) and $\mathtt{t}$ (otherwise it would be just the measure assigned to $\mathtt{x}$). The natural conjecture is that probability should be identified with a normalized measure, and we proceed to prove this: measures can have arbitrary total, but probabilities will (according to standard convention) sum to unity.
At the outset, though, we simply wish to set up a bivaluation for predicate x within context t .

7.1. Chained Arguments

Within a given context $\mathtt{t}$, we require $p(\mathtt{x} \mid \mathtt{t})$ to have the order and associative symmetries #1 and #2 that define a measure. Consequently, p obeys the sum rule
$$p(\mathtt{x} \sqcup \mathtt{y} \mid \mathtt{t}) = p(\mathtt{x} \mid \mathtt{t}) + p(\mathtt{y} \mid \mathtt{t})$$
for disjoint $\mathtt{x}$ and $\mathtt{y}$ with $\mathtt{x} \sqcup \mathtt{y} \leq \mathtt{t}$. It is the dependence on $\mathtt{t}$ that remains to be determined.
Associativity of chaining (axiom 5) for a < b < c < d is represented by
$$\Big( \underbrace{p(\mathtt{a} \mid \mathtt{b})}_{p(\alpha)} \odot \underbrace{p(\mathtt{b} \mid \mathtt{c})}_{p(\beta)} \Big) \odot \underbrace{p(\mathtt{c} \mid \mathtt{d})}_{p(\gamma)} = \underbrace{p(\mathtt{a} \mid \mathtt{b})}_{p(\alpha)} \odot \Big( \underbrace{p(\mathtt{b} \mid \mathtt{c})}_{p(\beta)} \odot \underbrace{p(\mathtt{c} \mid \mathtt{d})}_{p(\gamma)} \Big)$$
We do not have commutativity, $(\alpha, \beta) = \big[[\mathtt{a},\mathtt{b}], [\mathtt{b},\mathtt{c}]\big] = [\mathtt{a},\mathtt{c}]$ not being the same as $(\beta, \alpha)$ (which is meaningless), but we do have associativity and we do have order along the chain. By the associativity theorem, ⊙ exists and there is a scale on which it is simple addition. However, we cannot regrade to that scale and discard the original, because we have already fixed the grade of p to be additive with respect to its first argument. Instead, we infer additivity on some other grade $\Theta(p)$
$$\Theta\Big( \underbrace{p(\mathtt{a} \mid \mathtt{c})}_{p(\alpha)\, \odot\, p(\beta)} \Big) = \Theta\big(p(\mathtt{a} \mid \mathtt{b})\big) + \Theta\big(p(\mathtt{b} \mid \mathtt{c})\big)$$
required to be consistent with the sum-rule behavior of p. Defining Ψ = Θ - 1 gives
$$\underbrace{p(\mathtt{a} \mid \mathtt{c})}_{p(\alpha)\, \odot\, p(\beta)} = \Psi\Big( \Theta\big(p(\mathtt{a} \mid \mathtt{b})\big) + \Theta\big(p(\mathtt{b} \mid \mathtt{c})\big) \Big)$$
Substituting this in the sum rule Equation 42, term by term, yields the same product equation as Equation 25
$$\Psi\big(\zeta(\xi, \eta) + \tau\big) = \Psi(\xi + \tau) + \Psi(\eta + \tau)$$
as before, where
$$\xi = \Theta\big(p(\mathtt{x} \mid \mathtt{z})\big), \quad \eta = \Theta\big(p(\mathtt{y} \mid \mathtt{z})\big), \quad \zeta = \Theta\big(p(\mathtt{x} \sqcup \mathtt{y} \mid \mathtt{z})\big), \quad \tau = \Theta\big(p(\mathtt{z} \mid \mathtt{t})\big).$$
Through the sum rule, ζ depends as shown on ξ and η but not τ. The independent variables are ξ , η , τ .
The solution (Appendix B again) shows Θ to be logarithmic, so that ⊙ was multiplication and
$$p(\mathtt{x} \mid \mathtt{z}) = p(\mathtt{x} \mid \mathtt{y})\, p(\mathtt{y} \mid \mathtt{z}) / C$$
in which p (positive by virtue of being a measure on predicates) takes the sign of a universal constant C. Without loss of generality, we assign the scale of p by fixing C = 1 , giving the standard product rule for conditioning.
$$\text{Chain-product rule:}\qquad p(\mathtt{x} \mid \mathtt{z}) = p(\mathtt{x} \mid \mathtt{y})\, p(\mathtt{y} \mid \mathtt{z})$$

7.2. Arbitrary Arguments

The chain-product rule, which as written above is valid for any chain, can be generalized to accommodate arbitrary elements. This is accomplished by noting that $\mathtt{x} \wedge \mathtt{y} = \mathtt{x}$ in a chain where $\mathtt{x} < \mathtt{y}$, so that $p(\mathtt{x} \wedge \mathtt{y} \mid \mathtt{y}) = p(\mathtt{x} \mid \mathtt{y})$. The general form
$$p(\mathtt{a} \wedge \mathtt{b} \mid \mathtt{c}) = p(\mathtt{a} \mid \mathtt{b} \wedge \mathtt{c})\, p(\mathtt{b} \mid \mathtt{c})$$
follows by observing that $\mathtt{x} = \mathtt{a} \wedge \mathtt{b} \wedge \mathtt{c}$, $\mathtt{y} = \mathtt{b} \wedge \mathtt{c}$ and $\mathtt{z} = \mathtt{c}$ form a chain and hence are subject to the chain rule.
The special case $p(\mathtt{t} \mid \mathtt{t}) = 1$ is obtained by setting $\mathtt{y} = \mathtt{z} = \mathtt{t}$ in the chain-product rule. For any $\mathtt{x} \leq \mathtt{t}$, ordering requires $p(\mathtt{x} \mid \mathtt{t}) \leq p(\mathtt{t} \mid \mathtt{t}) = 1$, so that the range of values is $0 \leq p \leq 1$ and we recognize p as probability, hereafter denoted Pr.
Probability calculus is now proved:
$$\begin{aligned}
\text{Range:} &\quad 0 = \Pr(\bot \mid \mathtt{t}) \;\leq\; \Pr(\mathtt{x} \mid \mathtt{t}) \;\leq\; \Pr(\mathtt{t} \mid \mathtt{t}) = 1 \\
\text{Sum rule:} &\quad \Pr(\mathtt{x} \vee \mathtt{y} \mid \mathtt{t}) + \Pr(\mathtt{x} \wedge \mathtt{y} \mid \mathtt{t}) = \Pr(\mathtt{x} \mid \mathtt{t}) + \Pr(\mathtt{y} \mid \mathtt{t}) \\
\text{Chain-product:} &\quad \Pr(\mathtt{x} \wedge \mathtt{y} \mid \mathtt{t}) = \Pr(\mathtt{x} \mid \mathtt{y} \wedge \mathtt{t})\, \Pr(\mathtt{y} \mid \mathtt{t})
\end{aligned}$$
The top element of the current lattice, t , is the (provisional) truth, often written ⊤.
From the commutativity $\Pr(\mathtt{x} \wedge \mathtt{y} \mid \mathtt{t}) = \Pr(\mathtt{y} \wedge \mathtt{x} \mid \mathtt{t})$ associated with ∧, we obtain Bayes' Theorem
$$\Pr(\mathtt{x} \mid \theta \wedge \mathtt{t})\, \Pr(\theta \mid \mathtt{t}) = \Pr(\theta \mid \mathtt{x} \wedge \mathtt{t})\, \Pr(\mathtt{x} \mid \mathtt{t})$$
which can be simplified by making the common context implicit and writing
$$\underbrace{\Pr(\mathtt{x} \mid \theta)}_{\text{Likelihood}}\ \underbrace{\Pr(\theta)}_{\text{Prior}} = \underbrace{\Pr(\theta \mid \mathtt{x})}_{\text{Posterior}}\ \underbrace{\Pr(\mathtt{x})}_{\text{Evidence}}$$
to relate data x and parameter θ (with context t understood). Do not misinterpret the abbreviated notation. Probability is always and necessarily, by construction, a bivaluation that assigns a real number to a pair of elements in a Boolean lattice. In addition, one does not need to differentiate between likelihood, prior, posterior, and evidence by giving each one a different notation. The terms that comprise Bayes’ Theorem represent the same bivaluation applied to different pairs of elements.
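A minimal worked example of the update (the two hypotheses and all numbers are hypothetical choices of our own):

```python
# Hypothetical two-hypothesis Bayesian update on data x = "heads".
prior = {'fair': 0.5, 'biased': 0.5}
likelihood = {'fair': 0.5, 'biased': 0.9}    # Pr(heads | theta)

evidence = sum(likelihood[t] * prior[t] for t in prior)        # Pr(x) = 0.7
posterior = {t: likelihood[t] * prior[t] / evidence for t in prior}

print(posterior)   # {'fair': 0.357..., 'biased': 0.642...}
```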

7.3. Probability as a Ratio

The equations of probability calculus (range, sum rule, and chain-product rule) can all be subsumed in the single expression
$$\Pr(\mathtt{x} \mid \mathtt{t}) = \frac{m(\mathtt{x} \wedge \mathtt{t})}{m(\mathtt{t})} \qquad \forall\, \mathtt{x}, \mathtt{t}$$
for probability as a ratio of measures. Thus the calculus of probability is nothing more than the elementary calculus of proportions of measure. As anticipated, within its context t , a probability distribution is simply the shape of the confined measure, automatically normalized to unit mass.
This is, essentially, the original discredited frequentist definition (see [9]) of probability, as the ratio of number of successes to number of trials. However, it is here retrieved at an abstract level, which bypasses the catastrophic difficulties of literal frequentism when faced with isolated non-reproducible situations. Just as ordinary addition is forced for measures in [ 0 , ) , so ordinary proportions in [ 0 , 1 ] are forced for probability calculus.
Just as the sum rule for measure and probability generalizes to the inclusion/exclusion form for general elements which need not be disjoint, so the ratio form of probability allows generalization from intervals [3] to generalized intervals, consisting of arbitrary pairs $[\mathtt{x}, \mathtt{t}]$ which need not be in a chain. The bivaluation form Equation 53 still holds, but now represents a general degree of implication between arbitrary elements.
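The ratio form makes the whole calculus checkable in a few lines; the sketch below (atom values are illustrative) verifies the sum and chain-product rules directly from $\Pr(\mathtt{x} \mid \mathtt{t}) = m(\mathtt{x} \wedge \mathtt{t}) / m(\mathtt{t})$:

```python
import math

value = {'a1': 1.0, 'a2': 2.0, 'a3': 3.0}   # arbitrary positive atom values

def m(x):
    return sum(value[a] for a in x)

def pr(x, t):
    """Probability as a ratio of measures: Pr(x | t) = m(x AND t) / m(t)."""
    return m(x & t) / m(t)

t = {'a1', 'a2', 'a3'}
x, y = {'a1'}, {'a2'}

assert math.isclose(pr(x | y, t), pr(x, t) + pr(y, t))       # sum rule
assert math.isclose(pr(x, t), pr(x, x | y) * pr(x | y, t))   # chain rule
print(pr(x, t), pr(x | y, t))   # 0.1666... 0.5
```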

8. Information and Entropy

Here, we take special cases of the variational potential H, appropriate for probability distributions instead of arbitrary measures.

8.1. Information

Within a given context, probability is a measure, normalized to unit mass. The divergence H of destination probability p from source probability q then simplifies to
$$\text{Information:}\qquad H(\mathbf{p} \parallel \mathbf{q}) = \sum_k p_k \log \frac{p_k}{q_k}$$
In statistics, this is known as the Kullback–Leibler formula [10].
If the final destination is a fully determined state, with a single p equal to 1 while all the others are necessarily 0, then we have the extreme case
$$H(\mathbf{p} \parallel \mathbf{q}) = -\log q_k \quad\text{when}\quad p_k = 1.$$
This represents the information gained on acquiring the knowledge that the specific k was true—equivalently the surprise at finding k instead of any available alternative. Generally, H is the amount of compression (logarithmically, with respect to the source) induced by the constraints that modulate source into destination.
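A short sketch (distributions are illustrative; the 0 log 0 = 0 convention is handled by skipping zero terms) of the information formula and its fully determined extreme case:

```python
import math

def information(p, q):
    """H(p || q) = sum_k p_k log(p_k / q_k), with 0 log 0 taken as 0."""
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q) if pk > 0)

q = [0.5, 0.25, 0.25]    # source distribution (illustrative)
p = [1.0, 0.0, 0.0]      # fully determined destination

print(information(p, q))   # 0.693... = log 2
print(-math.log(q[0]))     # the same: the surprise -log q_k at k = 0
```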
In the limit of many small values, H admits a continuum limit
$$H(p \parallel q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx$$
sometimes (with a minus sign) known as the cross-entropy.

8.2. Entropy

The variational potential
$$H(\mathbf{p}) = \sum_k \Big[ A_k + B_k p_k + C (p_k \log p_k - p_k) \Big]$$
can also quantify uncertainty. For this, we require zero uncertainty when one probability value equals 1 (definitely present) and all the others are necessarily 0 (definitely not present). This is accomplished by setting $A_k = 0$ and $B_k = C$. Setting $C = -1$ gives the conventional scale, and yields
$$\text{Entropy:}\qquad S(\mathbf{p}) = -\sum_k p_k \log p_k$$
We call this “entropy”, and give it a separate symbol S as well as a separate name, to distinguish it from the previous “information” special case of divergence.
Entropy happens to be the expectation value of the information gained by deciding on one particular cell instead of any of the others in a partition.
$$S(\mathbf{p}) = \big\langle -\log p_k \big\rangle_k$$
It is a function of the partitioning as well as the probability distribution, which is why it does not have a continuum limit. Plausibly, entropy has the following three properties:
  • S is a continuous function of its arguments.
  • If there are n equal choices, so that p k = 1 / n , then S is monotonically increasing in n.
  • If a choice is broken down into subsidiary choices, then S adds according to probabilistic expectation, meaning $S(p_1, p_2, p_3) = S(p_1, p_2 + p_3) + (p_2 + p_3)\, S\big(\tfrac{p_2}{p_2 + p_3}, \tfrac{p_3}{p_2 + p_3}\big)$.
These are the three properties from which Shannon [11] originally proved the entropy formula. Here, we see that those properties, like that formula, are inevitable consequences of seeking a variational quantity for probabilities.
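Shannon's own worked example (1/2, 1/3, 1/6) makes the grouping property concrete; the following sketch verifies the decomposition numerically:

```python
import math

def S(p):
    """Shannon entropy S(p) = -sum_k p_k log p_k (0 log 0 taken as 0)."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

p1, p2, p3 = 1/2, 1/3, 1/6
s = p2 + p3                    # probability of the grouped choice

lhs = S([p1, p2, p3])
rhs = S([p1, s]) + s * S([p2 / s, p3 / s])
assert math.isclose(lhs, rhs)
print(lhs, rhs)   # ~1.0114 both ways
```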
Information and entropy are near synonyms, and are often used interchangeably. As seen here, though, entropy S is different from H. It is a property of just one partitioned probability distribution, it has a maximum not a minimum, and it does not have a continuum limit. Its least value, attained when a single probability is 1 and all the others are 0, is zero. Its value generally diverges upwards as the partitioning deepens, whereas H usually tends towards a continuum limit.

9. Conclusions

9.1. Summary

We start with a set { a 1 , a 2 , a 3 , , a N } of “atomic” elements which in inference represent the most fundamental exclusive statements we can make about the states (of our model) of the world. Atoms combine to form a Boolean lattice which in inference is called the hypothesis space of statements. This structure has rich symmetry, but other applications may have less and we have selected only what we needed, so that our results apply more widely and to distributive lattices in particular. The minimal assumptions are so simple that they can be drawn as the cartoon below (Figure 4).
Axiom 1 represents the order property that is required of the combination operator ⊔. Axiom 2 says that valuation must conform to the associativity of ⊔. These axioms are compelling in inference. By the associativity theorem (Appendix A — see the latter part for a proof of minimality) they require the valuation to be a measure m ( x ) , with ⊔ represented by addition (the sum rule). Any 1:1 regrading is allowed, but such change alters no content so that the standard linearity can be adopted by convention. This is the rationale behind measure theory.
The direct product operator × that represents independence is distributive (axiom 3) and associative (axiom 4), and consequently independent measures multiply (the direct-product rule). There is then a unique form of variational potential for assigning measures under constraints, yielding a unique divergence of one measure from another.
Probability Pr ( x t ) is to be a bivaluation, automatically a measure over predicate x within any specified context t . Axiom 5 expresses associativity of ordering relations (in inference, implications) and leads to the chain-product rule which completes probability calculus. The variational potential defines the information (Kullback–Leibler) carried by a destination probability relative to its source, and also yields the Shannon entropy of a partitioned probability distribution.

9.2. Commentary

We have presented a foundation for inference that unites and significantly extends the approaches of Kolmogorov [2] and Cox [1], yielding not just probability calculus, but also the unique quantification of divergence and information. Our approach is based on quantifying finite lattices of logical statements in such a way that quantification satisfies minimal required symmetries. This generalizes algebraic implication, or equivalently subset inclusion, to a calculus of degrees of implication. It is remarkable that the calculus is unique.
Our derivations have relied on a set of explicit axioms based on simple symmetries. In particular, we have made no use of negation (NOT), which in applications other than inference may well not be present. Neither have we assumed any additive or multiplicative behavior (as did Kolmogorov [2], de Finetti [12], and Dupré & Tipler [13]). On the contrary, we find that sum and product rules follow from elementary symmetry alone.
Figure 4. Cartoon graphic of the symmetries invoked, and where they lead. Ordering is drawn as upward arrows.
We find that associativity and order provide minimal assumptions that are convincing and compelling for scalar additivity in all its applications. Associativity alone does not force additivity, but associativity with order does. Positivity was not assumed, though it holds for all applications in this paper.
Commutativity was not assumed either, though commutativity of the resulting measure follows as a property of additivity. Associativity and commutativity do not quite force additivity because they allow degenerate solutions such as $a \oplus b = \max(a, b)$. To eliminate these, strict order is required in some form, and if order is assumed then commutativity does not need to be. Hence scalar additivity rests on ordered sequences rather than the disordered sets for which commutativity would be axiomatic.
$$\begin{aligned}
\text{Associativity} + \text{Order} \;&\Longrightarrow\; \text{Additivity} \;\Longrightarrow\; \text{Commutativity} \\
\text{Associativity alone} \;&\Longrightarrow\; \text{Additivity allowed, not forced} \\
\text{Associativity} + \text{Commutativity} \;&\Longrightarrow\; \text{Additivity allowed, not forced}
\end{aligned}$$
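A two-line check (our own illustration) of the degenerate solution mentioned above: max is associative and commutative, yet it collapses strict order, so it cannot serve as a faithful ⊕:

```python
a, b, c = 1.0, 2.0, 3.0

assert max(max(a, b), c) == max(a, max(b, c))   # associative
assert max(a, b) == max(b, a)                   # commutative

# Strict order fails: a < b, yet combining with c washes the difference out.
print(max(a, c), max(b, c))   # 3.0 3.0 -- not max(a, c) < max(b, c)
```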
Aczél [14] assumes order in the form of reducibility, and he too derives commutativity. However, his analysis assumes the continuum limit already attained, which requires him to assume continuity.
$$\text{Associativity} + \text{Order} + \text{Continuity} \;\Longrightarrow\; \text{Additivity} \;\Longrightarrow\; \text{Commutativity}$$
Our constructivist approach uses a finite environment in which continuity does not apply, and proceeds directly to additivity. Here, continuity and differentiability are merely emergent properties of + as the continuum limit is approached by allowing arbitrarily many atoms of different type.
Yet there can be no requirement of continuity, which is merely a convenient convention. For example, re-grading could take the binary representations of standard arguments ($101.011_2$ representing $5\tfrac{3}{8}$) and interpret them in base-3 ternary (with $101.011_3$ representing $10\tfrac{4}{27}$), so that $\Theta(10\tfrac{4}{27}) = 5\tfrac{3}{8}$. Valuation becomes discontinuous everywhere, but the sum rule still works, albeit less conveniently. Indeed, no finite system can ever demonstrate the infinitesimal discrimination that defines continuity, so continuity cannot possibly be a requirement of practical inference.
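The re-grading example computes as follows (a minimal sketch; the helper name is ours):

```python
def reinterpret(digits, frac, base):
    """Value of a digit string: integer digits plus fractional digits."""
    whole = sum(d * base**i for i, d in enumerate(reversed(digits)))
    part = sum(d * base**-(i + 1) for i, d in enumerate(frac))
    return whole + part

digits, frac = [1, 0, 1], [0, 1, 1]      # the digit string 101.011
print(reinterpret(digits, frac, 2))      # 5.375      = 5 3/8
print(reinterpret(digits, frac, 3))      # 10.1481... = 10 4/27
```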
At the cost of lengthening the proofs in the appendices, we have avoided assuming continuity or differentiability. Yet we remark that such infinitesimal properties ought not influence the calculus of inference. If they did, those infinitesimal properties would thereby have observable effects. But detecting whether or not a system is continuous at the infinitesimal scale would require infinite information, which is never available. So assuming continuity and differentiability, had that been demanded by the technicalities of mathematical proof (or by our own professional inadequacy), would in our view have been harmless. As it happens, each appendix touches on continuity, but the arguments are appropriately constructed to avoid the assumption, so any potential controversy over infinite sets and the rôle of the continuum disappears.
Other than reversible regrading, any deviation from the standard formulas must inevitably contradict the elementary symmetries that underlie them, so that popular but weaker justifications (e.g., de Finetti [12]) in terms of decisions, loss functions, or monetary exchange can be discarded as unnecessary. In fact, the logic is the other way round: such applications must be cast in terms of the unique calculus of measure and probability if they are to be quantified rationally. Indeed, we hold generally that it is a tactical error to buttress a strong argument (like symmetry) with a weak argument (like betting, say). Doing that merely encourages a skeptic to sow confusion by negating the weak argument, thereby casting doubt on the main thesis through an illogical impression that the strong argument might have been circumvented too.
Finally, the approach from basic symmetry is productive. Goyal and ourselves [15] have used just that approach to show why quantum theory is forced to use complex arithmetic. Long a deep mystery, the sum and product rules of complex arithmetic are now seen as inevitably necessary to describe the basic interactions of physics. Elementary symmetry thus brings measure, probability, information and fundamental physics together in a remarkably unified synergy.

Acknowledgements

The authors would like to thank Seth Chaikin, Janos Aczél, Ariel Caticha, Julian Center, Philip Goyal, Steve Gull, Jeffrey Jewell, Vassilis Kaburlasos, Carlos Rodríguez, and a thoughtful anonymous reviewer. KHK was supported in part by the College of Arts and Sciences and the College of Computing and Information of the University at Albany, NASA Applied Information Systems Research Program (NASA NNG06GI17G) and the NASA Applied Information Systems Technology Program (NASA NNX07AD97A). JS was supported by Maximum Entropy Data Consultants Ltd.

References

  1. Cox, R.T. Probability, frequency, and reasonable expectation. Am. J. Phys. 1946, 14, 1–13. [Google Scholar] [CrossRef]
  2. Kolmogorov, A.N. Foundations of the Theory of Probability, 2nd English ed.; Chelsea: New York, NY, USA, 1956. [Google Scholar]
  3. Birkhoff, G. Lattice Theory; American Mathematical Society: Providence, RI, USA, 1967. [Google Scholar]
  4. Davey, B.A.; Priestley, H.A. Introduction to Lattices and Order; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  5. Klain, D.A.; Rota, G.-C. Introduction to Geometric Probability; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  6. Knuth, K.H. Deriving Laws from Ordering Relations. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering; Erickson, G.J., Zhai, Y., Eds.; Jackson Hole, WY, USA, 2003. [Google Scholar]
  7. Halmos, P.R. Measure Theory; Springer: Berlin/Heidelberg, Germany, 1974. [Google Scholar]
  8. Gull, S.F.; Skilling, J. Maximum entropy method in image processing. IEE Proc. F 1984, 131, 646–659. [CrossRef]
  9. Von Mises, R. Probability, Statistics, and Truth; Dover: Mineola, NY, USA, 1981. [Google Scholar]
  10. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Statist. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  11. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  12. De Finetti, B. Theory of Probability, Vol. I and Vol. II; John Wiley and Sons: New York, NY, USA, 1974. [Google Scholar]
  13. Dupré, M.J.; Tipler, F.J. New axioms for rigorous Bayesian probability. Bayesian Anal. 2009, 4, 599–606. [Google Scholar] [CrossRef]
  14. Aczél, J. Lectures on Functional Equations and Their Applications; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  15. Goyal, P.; Knuth, K.H.; Skilling, J. Origin of complex quantum amplitudes and Feynman’s rules. Phys. Rev. A 2010, 81, 022109. [Google Scholar] [CrossRef]

A. Appendix A: Associativity Theorem

Atoms $\mathtt{x}, \mathtt{y}, \mathtt{z}, \ldots$, or disjoint lattice elements more generally, are to be assigned valuations $x, y, z, \ldots$. If valuations coincide (though other marks may differ), such atoms are said to be of the same type. We allow arbitrarily many atoms of arbitrarily many types. Our proof is constructive, with combinations built as sequences of atoms appended one at a time, $\mathtt{x} \sqcup \mathtt{y}$ having valuation $x \oplus y$. The consequent stand-alone derivation is rather long, but avoids making what would in our finite environment be an unnatural assumption of continuity. We also avoid assuming that an inverse to combination exists.
We merely assume order (axiom 1)
$$\text{Axiom 1a:}\quad x < y \;\Longrightarrow\; x \oplus z < y \oplus z \qquad\qquad \text{Axiom 1b:}\quad x < y \;\Longrightarrow\; z \oplus x < z \oplus y$$
and associativity (axiom 2)
$$\text{Axiom 2:}\qquad (x \oplus y) \oplus z = x \oplus (y \oplus z)$$
Theorem:
Axiom 1 (order) and axiom 2 (associativity) imply that $x \oplus y = \Theta^{-1}\big(\Theta(x) + \Theta(y)\big)$ for some order-preserving regrade Θ of "⊕ = +" applied to scalar values.

A.1. Proof: 

The form quoted in the theorem is easily seen to satisfy both axioms 1 and 2, which demonstrates existence of a calculus ⊕ of quantification. The remaining question is whether this calculus is unique.
We start by building sequences from just one type of atom before introducing successively more types to reach the general case. In this way, we lay down successively finer grids. Whenever another atom is introduced to generate a new sequence, that new sequence’s value inevitably lies somewhere at, between, or beyond previously assigned values. If it lies within an interval, we are free to choose it to be anywhere convenient. Such choice loses no generality, because the original value could be recovered by order-preserving regrade of the assignments. Values can be freely and reversibly regraded in and only in any way that preserves their order. Any such mapping preserves axiom 1, but reversal of ordering would allow the axiom to be broken.
Most points of the continuum escape this approach and are never accessed, so we do not allow ourselves continuum properties such as continuity. We build our finite system from the bottom up, using only those values that we actually need.
By interchanging x and y in axiom 1, the same relationship holds when “<” is replaced throughout by “>”, and replacement by “=” holds trivially. So, in effect, the axiom makes a three-fold assertion
$$x \;\{{<},{=},{>}\}\; y \;\Longrightarrow\; x \oplus z \;\{{<},{=},{>}\}\; y \oplus z \quad\text{and}\quad z \oplus x \;\{{<},{=},{>}\}\; z \oplus y$$
(the same relation holding in each position).
Because these three possibilities ( < , > , = ) are exhaustive, consistency implies the reverse, sometimes called “cancellativity”:
$$x \oplus z \;\{{<},{=},{>}\}\; y \oplus z \quad\text{or}\quad z \oplus x \;\{{<},{=},{>}\}\; z \oplus y \;\Longrightarrow\; x \;\{{<},{=},{>}\}\; y$$

A.2. One Type of Atom

Consider a set of disjoint atoms $\{\mathtt{a}_1, \mathtt{a}_2, \mathtt{a}_3, \ldots, \mathtt{a}_r, \mathtt{a}_{r+1}, \ldots, \mathtt{a}_N\}$, each of which is associated with the same value, so that $m(\mathtt{a}_i) = a$ for all $i \in [1, N]$. We will append such atoms one at a time, using the combination operator ⊔ to construct compound elements
$$\Big( \cdots \big( (\mathtt{a}_1 \sqcup \mathtt{a}_2) \sqcup \cdots \big) \sqcup \mathtt{a}_r \Big) \sqcup \mathtt{a}_{r+1} \cdots$$
which are to be valued as
$$\Big( \cdots \big( (m(\mathtt{a}_1) \oplus m(\mathtt{a}_2)) \oplus \cdots \big) \oplus m(\mathtt{a}_r) \Big) \oplus m(\mathtt{a}_{r+1}) \cdots$$
Since the atoms $\mathtt{a}_i$ all have the same value, the subscripts are immaterial for valuation and we may write
$$1 \text{ of } \mathtt{a} \equiv \mathtt{a}_1, \quad\text{so that}\quad m(1 \text{ of } \mathtt{a}) = m(\mathtt{a}_1) = m(\mathtt{a}) = a$$
and
$$2 \text{ of } \mathtt{a} \equiv \mathtt{a}_1 \sqcup \mathtt{a}_2, \quad\text{so that}\quad m(2 \text{ of } \mathtt{a}) = m(\mathtt{a}_1 \sqcup \mathtt{a}_2) = a \oplus a$$
and so on, with the addition of
$$0 \text{ of } \mathtt{a}, \quad\text{so that}\quad m(0 \text{ of } \mathtt{a}) = m(\varnothing) \equiv m_\varnothing$$
In principle, we could have any of
$$m(0 \text{ of } \mathtt{a}) \;\begin{cases} < m(1 \text{ of } \mathtt{a}) & \text{positive style} \\ = m(1 \text{ of } \mathtt{a}) & \text{null style} \\ > m(1 \text{ of } \mathtt{a}) & \text{negative style} \end{cases}$$
Null-style atoms all share the same value $m_\varnothing$. If there were two such values, say $m_\varnothing$ and $m'_\varnothing$, then the equalities
$$m(\mathtt{x}) = m_\varnothing \oplus m(\mathtt{x}) = m'_\varnothing \oplus m(\mathtt{x})$$
for any x would, by cancellativity, make them equal.
We proceed with atoms restricted to positive style, leaving the extension to negative (if required) until the end. Chaining a sequence of positive $\mathtt{a}$'s with another $\mathtt{a}$ yields, successively, the same nature of relationship between $m(1 \text{ of } \mathtt{a})$ and $m(2 \text{ of } \mathtt{a})$, then $m(2 \text{ of } \mathtt{a})$ and $m(3 \text{ of } \mathtt{a})$, and by induction $m(r \text{ of } \mathtt{a})$ and $m(r+1 \text{ of } \mathtt{a})$. Hence successive multiples are ranked by cardinality, and can continue indefinitely.
$$m(\varnothing) < m(1 \text{ of } \mathtt{a}) < m(2 \text{ of } \mathtt{a}) < \cdots < m(r \text{ of } \mathtt{a}) < m(r+1 \text{ of } \mathtt{a}) < \cdots$$
Whatever values were initially proposed, we are free to regrade to other values of our choice, provided only that relevant order is preserved. Here, we are free to assign values as multiples
m(r of a) = r a    (70)
of any positive value a > 0 . The basic linear additive scale is now in place.

Illustration

We are not forced to adopt this linear scale, and a user's original assignments may well not have used it. We could equally allow any other increasing series, such as m(r of a) = r³a, but we could not use a non-monotonic series like m(r of a) = a sin(r) without some values being the wrong way round. The only acceptable grades preserve order, so that they can be monotonically reverted to the adopted integer scale (Figure 5).
Figure 5. Ordered multiples can be placed on an integer scale, here drawn with a = 1.
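A one-line numeric restatement of this illustration (our sketch, with a = 1): any strictly increasing grade of the multiples is admissible, while a non-monotonic one breaks the required order.

```python
# Admissible versus inadmissible grades of m(r of a), with a = 1 (our sketch).
import math

r_values = range(1, 20)
cubic = [r ** 3 for r in r_values]        # increasing: an acceptable regrade
sine  = [math.sin(r) for r in r_values]   # not increasing: wrongly ordered

print(all(x < y for x, y in zip(cubic, cubic[1:])))  # True
print(all(x < y for x, y in zip(sine, sine[1:])))    # False
```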

A.3. Induction to More Than One Type of Atom

Suppose that sequences of atoms drawn from up to k types { a , , c } are quantified as the grid of values
μ(r, …, t) ≡ m(r of a and ⋯ and t of c)   (multiples of up to k types, in any order)
       = r a + ⋯ + t c   (corresponding terms)    (71)

for positive multiples r, …, t. Any individual marks that the atoms may possess beyond their type are ignored in this scalar representation. This hypothesis, Equation 71, is already the assignment for k = 1, and we aim to develop it to all k by induction. Before doing this, we note that commutativity is implicit in Equation 71 for atoms or sequences drawn from the original k types, because

μ(r + r′, …, t + t′) = μ(r, …, t) + μ(r′, …, t′)

But commutativity for k > 1 is not being improperly assumed, because the inductive proof starts from k = 1, for which Equation 71 reduces to the proven Equation 70.
We now append an extra type d of atom, and investigate values of the extended function
μ(r, …, t; u) = m(r of a and ⋯ and t of c) ⊕ m(u of d)
formed by appending, successively, u = 1 , 2 , 3 , new atoms. If a new value coincides with an already-assigned value, it is thereby determined. Otherwise, the new value must interleave (including lying beyond) existing ones, and we are free to assign it any convenient value within that particular interval (Figure 6).
Figure 6. A new value, displaced away from the existing grid, must lie within some interval. Any assignment outside the strict interior would be wrongly ordered, while any value inside could be reverted to some other selection by order-preserving regrade.

A.3.1 Repetition Lemma

To proceed, we need the repetition lemma: if

μ(r, …, t) ⋚ μ(r₀, …, t₀; u)    (74)

then

μ(n r, …, n t) ⋚ μ(n r₀, …, n t₀; n u)    (75)

for n-fold repetition.
Suppose the lemma holds for n. Prefix Equation 74 with "n r₀ of a and … and n t₀ of c", and postfix with n u of d:

μ(n r₀ + r, …, n t₀ + t) ⊕ m(n u of d) ⋚ μ((n+1) r₀, …, (n+1) t₀; (n+1) u)    (76)

Prefix Equation 75 with "r of a and … and t of c":

μ((n+1) r, …, (n+1) t) ⋚ μ(n r₀ + r, …, n t₀ + t; n u)    (77)

Because

μ(n r₀ + r, …, n t₀ + t) ⊕ m(n u of d) = μ(n r₀ + r, …, n t₀ + t; n u)

(these two expressions being alternative notations for the same quantity), the relationships Equation 77 and Equation 76 combine to give

μ((n+1) r, …, (n+1) t) ⋚ μ((n+1) r₀, …, (n+1) t₀; (n+1) u)
So, if Equation 75 holds for n, it also holds for n + 1 . It does hold for n = 1 , proving by induction the repetition lemma for all n = 1 , 2 , 3 , .

A.3.2 Separation 

We define the relevant intervals for the new sequences μ(r₀, …, t₀; u) by listing the previous values Equation 71 that lie below (set A), at (set B), and above (set C) the new targets (Figure 7).

A, B, C : {r, …, t; u} such that μ(r, …, t) { <, =, > } μ(r₀, …, t₀; u) respectively
Figure 7. The interval encompassing the new value lies above set A and below set C.
This decomposition must hold consistently across all new sequences, for all u. Values for any particular target multiplicity u lie in subsets of A , B , C with u fixed appropriately. It is convenient to denote provenance with a suffix (1 for A , 2 for B , 3 for C ), so that these definitions can be alternatively written as
A : {r₁, …, t₁; u₁} such that μ(r₁, …, t₁) < μ(r₀, …, t₀; u₁)
B : {r₂, …, t₂; u₂} such that μ(r₂, …, t₂) = μ(r₀, …, t₀; u₂)
C : {r₃, …, t₃; u₃} such that μ(r₃, …, t₃) > μ(r₀, …, t₀; u₃)    (80)
Apply repetitions n = u₂u₃ for set A, n = u₁u₃ for set B, and n = u₁u₂ for set C.
A : μ(u₂u₃ r₁, …, u₂u₃ t₁) < μ(u₂u₃ r₀, …, u₂u₃ t₀; u₁u₂u₃)
B : μ(u₁u₃ r₂, …, u₁u₃ t₂) = μ(u₁u₃ r₀, …, u₁u₃ t₀; u₁u₂u₃)
C : μ(u₁u₂ r₃, …, u₁u₂ t₃) > μ(u₁u₂ r₀, …, u₁u₂ t₀; u₁u₂u₃)
Prefix various multiples of "r₀ of a and … and t₀ of c".
A : μ((u₁u₂ + u₁u₃) r₀ + u₂u₃ r₁, …, (u₁u₂ + u₁u₃) t₀ + u₂u₃ t₁) < Q
B : μ((u₁u₂ + u₂u₃) r₀ + u₁u₃ r₂, …, (u₁u₂ + u₂u₃) t₀ + u₁u₃ t₂) = Q
C : μ((u₁u₃ + u₂u₃) r₀ + u₁u₂ r₃, …, (u₁u₃ + u₂u₃) t₀ + u₁u₂ t₃) > Q

where

Q = μ((u₁u₂ + u₁u₃ + u₂u₃) r₀, …, (u₁u₂ + u₁u₃ + u₂u₃) t₀; u₁u₂u₃)
Evaluate the left-hand sides and eliminate the common right-hand sides Q.
((u₁u₂ + u₁u₃) r₀ + u₂u₃ r₁) a + ⋯ + ((u₁u₂ + u₁u₃) t₀ + u₂u₃ t₁) c
  < ((u₁u₂ + u₂u₃) r₀ + u₁u₃ r₂) a + ⋯ + ((u₁u₂ + u₂u₃) t₀ + u₁u₃ t₂) c
  < ((u₁u₃ + u₂u₃) r₀ + u₁u₂ r₃) a + ⋯ + ((u₁u₃ + u₂u₃) t₀ + u₁u₂ t₃) c

Subtract (u₁u₂ + u₁u₃ + u₂u₃)(r₀ a + ⋯ + t₀ c) throughout and divide by u₁u₂u₃.

((r₁ − r₀) a + ⋯ + (t₁ − t₀) c)/u₁  [any member of A]
  < ((r₂ − r₀) a + ⋯ + (t₂ − t₀) c)/u₂  [any member of B]
  < ((r₃ − r₀) a + ⋯ + (t₃ − t₀) c)/u₃  [any member of C]    (86)
Taking ((r − r₀) a + ⋯ + (t − t₀) c)/u as the statistic, all members of A lie beneath all members of B, which in turn lie beneath all members of C. We can now assign the value of μ(r₀, …, t₀; u) for some target multiple u. The treatment differs somewhat according to whether or not B is empty.

A.3.3 Assignment When B Has Members 

If B is non-empty, we now show that all its members share a common value. Let two members be {r, …, t; u} and {r′, …, t′; u′} (the suffix "2" being temporarily redundant), so that, by definition,

μ(r, …, t) = μ(r₀, …, t₀; u)  and  μ(r′, …, t′) = μ(r₀, …, t₀; u′)

Apply repetitions by u′ and u, respectively:

μ(u′r, …, u′t) = μ(u′r₀, …, u′t₀; u′u)  and  μ(u r′, …, u t′) = μ(u r₀, …, u t₀; u u′)

Prefix multiples u and u′ of "r₀ of a and … and t₀ of c":

μ(u r₀ + u′r, …, u t₀ + u′t) = μ((u + u′) r₀, …, (u + u′) t₀; u u′)
μ(u′r₀ + u r′, …, u′t₀ + u t′) = μ((u + u′) r₀, …, (u + u′) t₀; u u′)

Evaluate the left-hand sides and eliminate the common right-hand side:

(u r₀ + u′r) a + ⋯ + (u t₀ + u′t) c = (u′r₀ + u r′) a + ⋯ + (u′t₀ + u t′) c

Subtract (u + u′)(r₀ a + ⋯ + t₀ c) and divide by u u′,

((r − r₀) a + ⋯ + (t − t₀) c)/u = ((r′ − r₀) a + ⋯ + (t′ − t₀) c)/u′ = d    (91)

in which d denotes the common value now seen to be shared by all members of B. Using the definitions again, evaluating, and inserting this common value gives

μ(r₀, …, t₀; u) = μ(r, …, t) = r a + ⋯ + t c = r₀ a + ⋯ + t₀ c + u d
where d is seen to be the value m(d) of a single atom of type d. By Equation 91, this value is rationally related to the previous values a, …, c.

Illustration

Suppose for simplicity that only one type of atom has previously been assigned (k = 1), according to the integer scale m(r of a) = r a with a = 1. Suppose that the new atom d has value d = 5/3, rationally related to a. By 3-fold repetition, this means that m(3 of d) lies exactly at 5, and is a member of set B. Again by 3-fold repetition, m(1 of d) cannot lie at or below 1, because that would wrongly imply m(3 of d) ≤ 3. Similarly, it cannot lie at or above 2, because that would imply m(3 of d) ≥ 6. So m(1 of d) necessarily lies between 1 (which lies in set A) and 2 (which lies in set C) and can without loss of generality be assigned 5/3. Similarly, m(2 of d) necessarily lies between 3 and 4 and can without loss of generality be assigned 10/3, and so on (Figure 8).
Figure 8. Multiples of a new type of atom can be assigned linear values.
These assignments obey axioms 1 and 2, and we now have a and d on the same linear scale.
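The bracketing in this illustration is mechanical enough to reproduce directly. In the sketch below (ours), the value d = 5/3 plays the role of the ordering oracle: each multiple of d is compared against the integer grid, and every third multiple lands exactly on it.

```python
# Rational case (our sketch): integer grid from atoms of type a (a = 1),
# new atom d with value 5/3. Multiples of d are pinned by the grid; every
# third multiple coincides with it (set B), which fixes d = 5/3 exactly.
from fractions import Fraction

d = Fraction(5, 3)
for u in range(1, 7):
    target = u * d
    if target.denominator == 1:                 # lands on the grid: set B
        print(f"m({u} of d) = {target}  (member of set B)")
    else:                                       # strictly between grid points
        lo = target.numerator // target.denominator
        print(f"{lo} < m({u} of d) = {target} < {lo + 1}")
```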

A.3.4 Assignment When B Has no Members 

When B is empty, the strict inequalities Equation 86 separating A and C imply that partitioning between them can be accomplished by some real δ.
((r₁ − r₀) a + ⋯ + (t₁ − t₀) c)/u₁  [any member of A]  <  δ  <  ((r₃ − r₀) a + ⋯ + (t₃ − t₀) c)/u₃  [any member of C]    (93)
For the target multiplicity u, the definitions Equation 80 showed μ(r₀, …, t₀; u) to be bounded below by those members of A having u₁ = u, and bounded above by those members of C having u₃ = u. These constraints relevant to the target u are
r₁ a + ⋯ + t₁ c  [subset u₁ = u of A]  <  μ(r₀, …, t₀; u)  <  r₃ a + ⋯ + t₃ c  [subset u₃ = u of C]    (94)
which is equivalent to
((r₁ − r₀) a + ⋯ + (t₁ − t₀) c)/u  [subset u₁ = u of A]  <  (μ(r₀, …, t₀; u) − (r₀ a + ⋯ + t₀ c))/u  <  ((r₃ − r₀) a + ⋯ + (t₃ − t₀) c)/u  [subset u₃ = u of C]    (95)

Because this refers to subsets involving a single u rather than the entire sets involving all u, it is a weaker constraint on its central quantity than the preceding global constraint Equation 93 was on δ. In other words, the central quantity (μ(r₀, …, t₀; u) − (r₀ a + ⋯ + t₀ c))/u is allowed to lie anywhere within an interval that contains the narrower interval containing δ. Accordingly, it is legitimate to assign

(μ(r₀, …, t₀; u) − (r₀ a + ⋯ + t₀ c))/u = δ
which automatically satisfies all the relevant constraints Equation 95. So the simple assignment
μ(r₀, …, t₀; u) = r₀ a + ⋯ + t₀ c + u δ
automatically falls in the correct interval. The only freedom is regrade to some alternative value within the relevant interval.

Illustration

Suppose that three types of atom have previously been assigned ( k = 3 ), according to
m(r of a) ⊕ m(s of b) ⊕ m(t of c) = r a + s b + t c

with a = 1, b = √2, c = √3. Now introduce a fourth type d. Omitting r₀, s₀, t₀ for simplicity, we might find that multiples u of d fall into successive intervals as follows.

 2.0000 = 2a        < m(1 of d)  < a + b          =  2.4142
 4.4641 = a + 2c    < m(2 of d)  < 2b + c         =  4.5605
 6.6569 = a + 4b    < m(3 of d)  < 5a + c         =  6.7321
   ⋮
22.3424 = 14a + b + 4c < m(10 of d) < 9a + 7b + 2c = 22.3636
These are the constraints Equation 94 relevant to each individual u = 1, 2, 3, …, 10, …. It is guaranteed that there exists some δ such that the relevant interval for each target multiplicity u covers uδ, as illustrated by the diagonal line of slope 1/δ in the diagram. Any breakout from these intervals would have contradicted axiom 2, thereby showing that δ had been incorrectly assigned (Figure 9).
According to Equation 93 with r₀ = s₀ = t₀ = 0, the value of δ = m(u of d)/u is constrained by all the members of A, B and C.
Figure 9. Multiples of a new atom can always be assigned linear values δ, 2δ, 3δ, …. An individual multiple can be assigned anywhere within the corresponding interval, but the linear assignment can always be chosen.
By the time these sets have expanded to cover up to 10 copies of d , the surviving interval is
2.2360 = (8a + 7c)/9  [from u₁ = 9]  <  δ  <  (7a + 5c)/7  [from u₃ = 7]  = 2.2372
and by the time 1000 copies are allowed, the union of all the constraints fixes δ to 10 decimal places.
2.236067977497 = (1345a + 56b + 359c)/915  [from u₁ = 915]  <  δ  <  (80a + 545b + 286c)/602  [from u₃ = 602]  = 2.236067977505
(The example happened to have δ = √5.)
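The quoted intervals can be reproduced by brute force. In this sketch (ours), the hidden value δ = √5 stands in for the ordering oracle that classifies grid points into A or C, and the tightest grid values bracketing each multiple uδ are printed along with the bounds they impose on δ.

```python
# Irrational case (our sketch): grid r*a + s*b + t*c with a = 1, b = sqrt(2),
# c = sqrt(3); new atom d of hidden value delta = sqrt(5), used only to decide
# which grid points lie below (set A) or above (set C) each multiple u*delta.
import itertools
import math

a, b, c = 1.0, math.sqrt(2.0), math.sqrt(3.0)
delta = math.sqrt(5.0)
cap = 24.0
grid = sorted({r * a + s * b + t * c
               for r, s, t in itertools.product(range(25), repeat=3)
               if r * a + s * b + t * c <= cap})

for u in (1, 2, 3, 10):
    below = max(v for v in grid if v < u * delta)   # tightest member of A
    above = min(v for v in grid if v > u * delta)   # tightest member of C
    print(f"u={u:2d}:  {below:.4f} < m({u} of d) < {above:.4f}"
          f"   =>   {below / u:.6f} < delta < {above / u:.6f}")
```

Extending the grid and the multiples tightens the bracket without limit, which is what the Accuracy argument below formalizes.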

Accuracy

The gap between A and C might allow δ to be uncertain. We assume that δ is bounded below, otherwise the appended atoms of type d never have measurable effect. This implies the existence of u such that uδ > na for any multiple n, no matter how large. We also assume that δ is bounded above, otherwise even a single d atom always overwhelms everything else. This implies the existence of a greatest r (necessarily with r ≥ n) such that ra < uδ for that u. Taking other types of atom to be absent for simplicity, we have

(r, 0, …, 0; u) ∈ A  and  (r + 1, 0, …, 0; u) ∈ C

where r can be indefinitely large. The corresponding inequalities ra/u < δ < (r + 1)a/u from Equation 93 fix δ to an accuracy of 1 part in r (1 in n or better).

This proves that δ can be found to arbitrarily high accuracy by allowing sufficiently high multiples. Denote the limiting value of δ by d. This value m(d) = d of a single atom of type d is now fixed to unlimited accuracy, but has no rational relationship to the previous values a, …, c.
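Even with only the a-grid available, the bracketing of Equation 93 converges at the advertised rate; a sketch (ours, again using δ = √5 as the hidden value):

```python
# Accuracy of the bracket r*a/u < delta < (r+1)*a/u with a = 1 (our sketch):
# r is the greatest integer with r < u*delta, so the bracket width is 1/u.
import math

delta = math.sqrt(5.0)
for u in (10, 100, 1000, 10000):
    r = math.floor(u * delta)
    print(f"u={u:5d}:  {r / u:.10f} < delta < {(r + 1) / u:.10f}")
```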

A.3.5 End of Inductive Proof 

Whether or not B had members, the assignment
μ(r₀, …, t₀; u) = r₀ a + ⋯ + t₀ c + u d
obeys all the defining inequalities Equation 80. This updates the original assignment Equation 71 from k atom types to k + 1 , so by induction from k = 1 it holds for any k.
m(r of a and ⋯ and t of c and ⋯ and v of e)   (any number of types, in any order)
       = r a + ⋯ + t c + ⋯ + v e   (corresponding terms)    (104)
Atom types in the above expression are often different, but do not need to be, and the formula represents the quantification of a general sequence. Embedded in it, and equivalent to it, is the sum rule x ⊕ y = x + y for the values m(x) = x and m(y) = y of arbitrary sequences. Any order-preserving regrade Θ is also permitted, but no order-breaking transform is permitted.
This completes the inductive proof for atoms of positive style. The proof holds equally well for atoms of negative style, for which the values are negative. Meanwhile, Equation 68 shows that atoms of null style have zero value. So, even when atoms of arbitrary style are mixed, Equation 104 offers the only consistent combination rule. The result thus holds for atom values of arbitrary sign and arbitrary magnitude, though the nature of the constructive proof requires atom multiplicities to be non-negative. ☐

A.4. Axioms are Minimal

Theorem:
Axioms 1a, 1b, 2 are individually required.
Proof:
We construct operators (“not quite ⊕”) which deny each axiom in turn, while not being a monotonic strictly increasing regrade of addition.
Without axiom 1a (postfix ordering), the definition
a ⊕ b = ⌊a⌋ + b

where ⌊a⌋ is the integer at or immediately below a, satisfies axioms 1b and 2 but cannot be equivalent to addition, because it is not commutative: a ⊕ b ≠ b ⊕ a. So axiom 1a is required.
Without axiom 1b (prefix ordering), the definition
a ⊕ b = a + ⌊b⌋
satisfies axioms 1a and 2, but cannot be equivalent to addition because it is not commutative. So axiom 1b is required.
Without axiom 2 (associativity), the definition
x ⊕ y = x² + y²    (107)

satisfies axioms 1a and 1b (ordering), and also happens to be continuous and commutative (x ⊕ y = y ⊕ x). Yet it cannot be equivalent to addition, because Θ(x ⊕ y) = Θ(x) + Θ(y) has no solution that would enable a regrade Θ. That can be shown by appropriately differencing in x and y to reach Θ(z + δ) − 2Θ(z) + Θ(z − δ) = 0, whose solution Θ(z) = Az + B fails to satisfy the supposedly defining Equation 107. Hence ordering is insufficient, even when accompanied by continuity and commutativity. Axiom 2 (associativity) is definitely required. ☐
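These counterexamples are also easy to probe numerically. A sketch (ours) confirming that the floor-based operators are associative but non-commutative, while x² + y² is commutative but non-associative:

```python
# Probing the "not quite (+)" counterexamples (our sketch).
import math

def deny_1a(a, b): return math.floor(a) + b   # keeps axioms 1b and 2
def deny_1b(a, b): return a + math.floor(b)   # keeps axioms 1a and 2
def deny_2(x, y):  return x ** 2 + y ** 2     # keeps ordering, commutativity

# floor-based operators: associative, but not commutative
print(deny_1a(deny_1a(0.5, 0.6), 0.7) == deny_1a(0.5, deny_1a(0.6, 0.7)))  # True
print(deny_1a(0.5, 0.6) == deny_1a(0.6, 0.5))                              # False
print(deny_1b(0.5, 0.6) == deny_1b(0.6, 0.5))                              # False

# x^2 + y^2: commutative, but not associative
print(deny_2(1.0, 2.0) == deny_2(2.0, 1.0))                                # True
print(deny_2(deny_2(1.0, 2.0), 3.0) == deny_2(1.0, deny_2(2.0, 3.0)))      # False
```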

B. Appendix B: Product Theorem

Theorem:
The solution of the functional product equation

Ψ(τ + ξ) + Ψ(τ + η) = Ψ(τ + ζ(ξ, η))

in which τ, ξ and η are independent real variables and Ψ is positive, is

Ψ(x) = C e^{A x}

where A and C are constants (C necessarily being positive).

B.1. Proof: 

The quoted solution is easily seen to satisfy the product equation, which demonstrates existence. The remaining question is whether the solution is unique.
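The existence half admits a direct numerical check; a sketch (ours) with arbitrary constants:

```python
# Existence check for the product equation (our sketch):
# Psi(tau + xi) + Psi(tau + eta) = Psi(tau + zeta(xi, eta))
# with Psi(x) = C * exp(A*x) and exp(A*zeta) = exp(A*xi) + exp(A*eta).
import itertools
import math

A, C = 0.7, 2.5                       # arbitrary, with C > 0

def Psi(x):
    return C * math.exp(A * x)

def zeta(xi, eta):
    return math.log(math.exp(A * xi) + math.exp(A * eta)) / A

for tau, xi, eta in itertools.product([-1.0, 0.3, 2.0], repeat=3):
    lhs = Psi(tau + xi) + Psi(tau + eta)
    rhs = Psi(tau + zeta(xi, eta))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("Psi(x) = C exp(A x) satisfies the product equation")
```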
First, we take the special case ξ = η, so that ζ − ξ and ζ − η take a common value a. This gives a 2-term recurrence

2 Ψ(τ + ζ − a) = Ψ(τ + ζ)

in which τ and ζ remain independent, though a might be constant. In fact, a must be constant, otherwise there would be no solution for Ψ. Consequently, Ψ behaves geometrically, with

Ψ(θ + n a) = 2^n Ψ(θ)

for any integer n, θ being arbitrary. Although this plausibly suggests that Ψ will be exponential, that is not yet proved, because Ψ could still be arbitrary within any assignment range of width a.
To complete the proof, take a second special case where ζ − ξ and (ζ − η)/2 take a common value b. This gives a 3-term recurrence

Ψ(τ + ζ − b) + Ψ(τ + ζ − 2b) = Ψ(τ + ζ)

in which τ and ζ remain independent, though b might be constant. In fact, b must be constant, otherwise there would be no solution for Ψ. The solution is

Ψ(θ + m b) = (2Ψ(θ)/(5 + √5) + Ψ(θ + b)/√5) ((1 + √5)/2)^m + (2Ψ(θ)/(5 − √5) − Ψ(θ + b)/√5) (−2/(1 + √5))^m
for any integer m, θ being arbitrary.
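The closed form can be checked against the recurrence for arbitrary seed values; a sketch (ours):

```python
# Verify (our sketch) that the golden-ratio closed form satisfies the 3-term
# recurrence F(m) = F(m-1) + F(m-2), where F(m) = Psi(theta + m*b), for
# arbitrary seeds F(0) = Psi(theta) and F(1) = Psi(theta + b).
import math

F0, F1 = 1.3, 0.8                          # arbitrary seed values
s5 = math.sqrt(5.0)
phi, psi = (1 + s5) / 2, (1 - s5) / 2      # note (1 - sqrt(5))/2 = -2/(1 + sqrt(5))

def F(m):
    return ((2 * F0 / (5 + s5) + F1 / s5) * phi ** m
            + (2 * F0 / (5 - s5) - F1 / s5) * psi ** m)

assert math.isclose(F(0), F0) and math.isclose(F(1), F1)
for m in range(2, 20):
    assert math.isclose(F(m), F(m - 1) + F(m - 2), rel_tol=1e-9)
print("closed form reproduces the 3-term recurrence")
```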
This closed form combines with the 2-term formula to make

Ψ(θ + m b − n a) = (2Ψ(θ)/(5 + √5) + Ψ(θ + b)/√5) e^{m log((1 + √5)/2) − n log 2} + (−1)^m (2Ψ(θ)/(5 − √5) − Ψ(θ + b)/√5) e^{−m log((1 + √5)/2) − n log 2}
For any integer n, there is an even integer m for which 0 ≤ m b − n a < 2b, so that all three arguments of Ψ lie in the range [θ, θ + 2b]. As n is allowed to increase indefinitely, so does this m, in proportion m/n ≈ a/b. Depending on the sign of n, at least one of the exponents ±m log((1 + √5)/2) − n log 2 can become indefinitely large and positive. Unbounded values of Ψ being unacceptable, the coefficient of that exponent must vanish. So either
Ψ(θ + m b − n a) = Ψ(θ) e^{m log((1 + √5)/2) − n log 2}

(first term only) or

Ψ(θ + m b − n a) = (−1)^m Ψ(θ) e^{−m log((1 + √5)/2) − n log 2}

(second term only, with even m making the sign (−1)^m = 1). In the first case, bounded Ψ requires

b/a = log((1 + √5)/2) / log 2

and in the second case, bounded Ψ requires

b/a = −log((1 + √5)/2) / log 2
Either way,
Ψ(θ + m b − n a) = Ψ(θ) e^{A(m b − n a)}
with A constant.
Although this strongly suggests that Ψ will be exponential, that is not yet fully proved, because offsets m b − n a with even m are only a subset of the reals. There could be one scaling for arguments θ of the form m b − n a, another for the form √2 + m b − n a, yet another for π + m b − n a, and so on. Fortunately, b/a is irrational, so the offset m b − n a can approach any real value x arbitrarily closely. Express x as x = m b − n a + ε, with m and n chosen to make ε arbitrarily small. Then
Ψ(x) = e^{A(m b − n a)} Ψ(ε) = e^{A(x − ε)} Ψ(ε) ≈ e^{A x} Ψ(ε)

because e^{Aε} ≈ 1. Separating variables, Ψ(ε) ≈ constant, giving

(solution)  Ψ(x) = C e^{A x}

to arbitrarily high precision (ε → 0), with constant C.
This obeys the original product equation without further restriction and is the general solution, with corollary e^{A ξ} + e^{A η} = e^{A ζ} defining ζ(ξ, η) and confirming that a = A^{-1} log 2 and b = A^{-1} log((1 + √5)/2) were appropriate constants. ☐
The sought inverse, in terms of the constants A and C, is
(inverse)  Θ(u) = A^{-1} log(u/C)
in which u and hence C are both positive.

C. Appendix C: Variational Theorem

Theorem:
The solution of the functional variational equation
H′(m_x m_y) = λ(m_x) + μ(m_y)

with positive m_x and m_y, is

H(m) = A + B m + C (m log m − m)
where A, B, C are constants.

C.1. Proof: 

The quoted solution is easily seen to satisfy the variational equation, with corollaries that the functions λ and μ are logarithmic, which demonstrates existence. The remaining question is whether the solution is unique.
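Since H′(m) = B + C log m, the existence claim reduces to log(m_x m_y) = log m_x + log m_y; a numerical sketch (ours):

```python
# Existence check for the variational equation (our sketch):
# H(m) = A + B*m + C*(m*log(m) - m)  gives  H'(m) = B + C*log(m),
# so H'(mx * my) = lambda(mx) + mu(my) with logarithmic lambda and mu.
import itertools
import math

A, B, C = 0.4, 1.1, 2.0
B1 = 0.3
B2 = B - B1                 # any split with B1 + B2 = B

def H(m):       return A + B * m + C * (m * math.log(m) - m)
def H_prime(m): return B + C * math.log(m)
def lam(x):     return B1 + C * math.log(x)
def mu(y):      return B2 + C * math.log(y)

# check the derivative formula by central finite differences
h = 1e-6
for m in (0.5, 2.0):
    num = (H(m + h) - H(m - h)) / (2 * h)
    assert math.isclose(num, H_prime(m), rel_tol=1e-6)

# check the variational equation itself
for mx, my in itertools.product([0.2, 1.0, 3.7], repeat=2):
    assert math.isclose(H_prime(mx * my), lam(mx) + mu(my), rel_tol=1e-12)
print("H(m) = A + B m + C (m log m - m) satisfies the variational equation")
```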
Write log m_x = u, log m_y = v, and rewrite the functions as λ*(u), μ*(v) and H′(m) = h(log m):

h(u + v) = λ*(u) + μ*(v)

Put v = 0 to get λ*(u) = h(u) − constant, and u = 0 to get μ*(v) = h(v) − constant, so that

h(u + v) = h(u) + h(v) − B
This is Cauchy's functional equation [14]

f(u + v) = f(u) + f(v)

for f(t) = h(t) − B, from which f(n t) = n f(t) and then f((r/n) t) = (r/n) f(t) follow by induction for integer r and n. Hence

f(t) = c t

where c = f(t₀)/t₀, evaluated at any convenient base t₀. Awkwardly, the recurrence only relates arguments on a rational grid—there could be one value of c for rational multiples of 1, another value for rational multiples of √2, yet another for rational multiples of π, and so on. Fortunately, the sought function H is an integral of f, on which such infinitesimal detail has no effect.
To show that, we blur functions φ(u, v) by convolving them with the following unit-mass ellipse, chosen to blur u, v and u + v equally:

Φ(u, v) = ∬ dx dy [𝟙(x² + x y + y² < ¾ ε²) / (√3 π ε²/2)] φ(u − x, v − y)
For small width ϵ, blurring has negligible macroscopic effect. The convolution transforms the Cauchy equation to the same form
F ( u + v ) = F ( u ) + F ( v )
as before, with the new function
F(t) = ∫_{−ε}^{ε} dx [2√(ε² − x²) / (π ε²)] f(t − x)
being a continuous version of the original f, narrowly blurred over finite support. With continuity in place, the Cauchy solution
F ( t ) = C t
can only have one value for the constant C.
Finally, the definition dH/dm = h(log m) = B + f(log m) yields

H(m) = B m + ∫ f(log m) dm   (integrate)
  = B m + ∫^{log m} f(t) e^t dt   (change variable)
  = B m + ∫_{−ε}^{ε} dx [2√(ε² − x²)/(π ε²)] ∫^{log m} f(t) e^t dt   (insert blurring)
  = B m + ∫_{−ε}^{ε} dx [2√(ε² − x²)/(π ε²)] ∫^{x + log m} f(t − x) e^{t − x} dt   (offset dummy t)
  ≈ B m + ∫_{−ε}^{ε} dx [2√(ε² − x²)/(π ε²)] ∫^{log m} f(t − x) e^t dt   (|x| ≤ ε small)
  = B m + ∫^{log m} F(t) e^t dt   (definition of F)
  = B m + C ∫^{log m} t e^t dt   (substitute)
Hence, to arbitrarily high precision (ε → 0), H integrates to

H(m) = A + B m + C (m log m − m).
This obeys the original variational equation, with corollaries λ(x) = B₁ + C log x and μ(x) = B₂ + C log x where B₁ + B₂ = B, and is the general solution. ☐
