Article

Fitness Gain of Individually Sensed Information by Cells

by Tetsuya J. Kobayashi * and Yuki Sughiyama
Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
* Author to whom correspondence should be addressed.
Entropy 2019, 21(10), 1002; https://doi.org/10.3390/e21101002
Submission received: 26 August 2019 / Revised: 27 September 2019 / Accepted: 10 October 2019 / Published: 13 October 2019
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)

Abstract:
Mutual information and its causal variant, directed information, have been widely used to quantitatively characterize the performance of biological sensing and information transduction. However, once coupled with selection in response to decision-making, the sensing signal could have more or less evolutionary value than its mutual or directed information. In this work, we show that an individually sensed signal always has a better fitness value, on average, than its mutual or directed information. The fitness gain, which satisfies fluctuation relations (FRs), is attributed to the selection of organisms in a population that obtain a better sensing signal by chance. A new quantity, similar to the coarse-grained entropy production in information thermodynamics, is introduced to quantify the total fitness gain from individual sensing; it also satisfies FRs. Using this quantity, optimizing the fitness gain of individual sensing is shown to be related to fidelity allocations for individual environmental histories. Our results are supplemented by numerical verifications of the FRs and by a discussion of how this problem is linked to information encoding and decoding.

1. Introduction

Most biological systems are equipped with active sensing machinery to monitor an ever-changing environment. The fidelity of sensing is crucial for choosing appropriate states and behaviors in response to changes in environmental states [1,2,3]. Instantaneous mutual information, path-wise mutual information, and its causal variant, directed information, have been used to quantitatively characterize the performance of sensing and information transduction, both theoretically [4,5,6,7] and experimentally [8,9,10,11]. These information measures are also fundamental to the thermodynamic cost of sensing [12,13,14,15,16].
However, it is still elusive whether these measures can appropriately quantify the biological and fitness value of sensed information. Despite intensive work on the fitness value of information [17,18,19,20,21,22,23,24,25], almost all studies considered a biologically unrealistic situation in which all cells or organisms in a population receive a common sensing signal, which is a prerequisite for proving that the fitness value of sensing is bounded by these information measures. A few studies have conjectured that biologically realistic sensing by individual organisms may have greater fitness value than these measures suggest [22,25].
In this work, we resolve this problem by proving generally that individual sensing always has greater fitness value than common sensing. The additional fitness gain, which satisfies fluctuation relations (FRs), is attributed to the selection of organisms that obtain a correct sensing signal by chance. A new quantity, which is similar to the coarse-grained entropy production in information thermodynamics, is introduced to quantify the total fitness gain from individual sensing, the upper bound of which is strictly higher than the directed information. We further show that the optimization of this quantity is closely related to optimizing an auto-encoding network, in which sensing, phenotypic switching, and metabolic allocation work as encoding, processing, and decoding, respectively. Our general results, especially those for FRs, are verified by numerical simulations.

2. Modeling Sensing and Adaptation Processes

We consider a population of an asexual organism that replicates with an instantaneous replication rate $k(x,y)$, which depends on its phenotype $x \in \mathcal{S}_x$ and the state of the environment $y \in \mathcal{S}_y$; the phenotypic and environmental states are assumed to be discrete and finite for simplicity. The organism switches its phenotype stochastically from $x$ to $x'$ by exploiting a sensing signal $z \in \mathcal{S}_z$ with a transition probability $T_F(x'|x,z)$ within a small time interval $\Delta t$. Depending on the physical entity of $z$, the sensing can be categorized as either individual or common sensing [22,25]. In the case of individual sensing, $z$ is the state of a sensing system of the organism, such as the activity of receptors. Due to the stochasticity of the sensing process, individual organisms receive different sensing signals $z$ (Figure 1a). By assuming that the stochastic sensing output $z'$ depends on the state of the environment $y'$ as $T_S(z'|z,y')$, we describe the dynamics of the number of organisms $N_t^Y(x_t, z_t)$ that have phenotypic state $x_t$ and sensing signal $z_t$ at time $t$ as
$$N_{t+1}^Y(x_{t+1}, z_{t+1}) = e^{k(x_{t+1}, y_{t+1})} \times \sum_{x_t, z_t} T_F(x_{t+1}|x_t, z_{t+1})\, T_S(z_{t+1}|z_t, y_{t+1})\, N_t^Y(x_t, z_t),$$
where $Y_t := \{y_0, \ldots, y_t\}$ is the history of the environmental state, the statistical properties of which are characterized by a path probability $Q[Y_t]$. In this representation, we implicitly assume a causal dependency among $x_t$, $y_t$, and $z_t$ as $y_t \to z_t \to x_t \to y_{t+1} \to z_{t+1} \to x_{t+1}$. Such a representation has been conventionally used for notational simplicity.
In contrast, in the case of common sensing, $z$ is assumed to be partial information on the environmental state that is common to all organisms [26,27] (Figure 1b). An example is an extracellular chemical that correlates with the environmental state and can be sensed by the organisms with negligible error. The dynamics of the number of organisms $N_t^{Y,Z}(x_t)$ with phenotypic state $x_t$ at time $t$ under a realization of the environmental and common-signal histories, $Y_t$ and $Z_t$, can be represented as
$$N_{t+1}^{Y,Z}(x_{t+1}) = e^{k(x_{t+1}, y_{t+1})} \times \sum_{x_t \in \mathcal{S}_x} T_F(x_{t+1}|x_t, z_{t+1})\, N_t^{Y,Z}(x_t).$$
We assume that the history of the common signal $Z_t := \{z_0, \ldots, z_t\}$ follows a statistical law $Q[Z_t \| Y_t]$, which is causally conditional on the environmental history. At this stage, the statistical property of the common signal is abstractly represented as $Q[Z_t \| Y_t]$. In the following, however, we assume that $Q[Z_t \| Y_t]$ is identical to the path probability generated by the sensing law of individual sensing, $T_S(z'|z,y')$, to clarify the difference between individual and common sensing. Although common sensing is less biologically realistic, most previous works on the fitness value of information addressed only common sensing, and proved that the fitness gain of common sensing is upper bounded by the directed information [26,27].
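As a minimal computational sketch (in Python; the memoryless forms $T_S(z'|z,y') = T_S(z'|y')$ and $T_F(x'|x,z') = T_F(x'|z')$ are assumed for brevity, and the matrices in the usage below are hypothetical placeholders rather than values from the text), one step of the update rules, Equations (1) and (2), can be written as:

```python
import math

def step_individual(N, y_new, ek, TF, TS):
    # One step of Eq. (1), memoryless case: N[x][z] is the number of
    # organisms with phenotype x and individual sensing signal z.
    # Each organism draws its own new signal z' from T_S(z'|y_new),
    # switches phenotype via T_F(x'|z'), then replicates at rate k(x', y_new).
    total = sum(sum(row) for row in N)
    nx, nz = len(TF), len(TS)
    return [[math.exp(ek[xp][y_new]) * TF[xp][zp] * TS[zp][y_new] * total
             for zp in range(nz)] for xp in range(nx)]

def step_common(N, y_new, z_new, ek, TF):
    # One step of Eq. (2): every organism receives the same realized
    # common signal z_new; N[x] is the number of organisms with phenotype x.
    total = sum(N)
    return [math.exp(ek[xp][y_new]) * TF[xp][z_new] * total
            for xp in range(len(TF))]
```

Here `ek[x][y]` stores $k(x,y)$. Before replication, the transition factors conserve the total population mass, so any fitness difference between the two schemes comes purely from how the replication factor $e^{k(x', y')}$ correlates with the realized signals.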
Before deriving relations between the fitness gain and information measures, we mention the limitations of the model assumed above. In our modeling, we did not include the carrying capacity of the environment, which reduces the growth of individual cells as the number of cells approaches the capacity. We also assumed that cells cannot affect the behavior of the environment. Even though these factors are biologically important and theoretically intriguing, we excluded them because the following information-theoretic analysis of the population dynamics would not otherwise be applicable. We touch on potential extensions of this work that include these factors in the Discussion.

Fitness of a Population with Individual and Common Sensing

The fitness of a population with individual sensing, $\Psi_i[Y_t]$, and with common sensing, $\Psi_c[Y_t, Z_t]$, can be defined respectively as
$$\Psi_i[Y_t] := \ln \frac{N_t^Y}{N_0^Y}, \qquad \Psi_c[Y_t, Z_t] := \ln \frac{N_t^{Y,Z}}{N_0^{Y,Z}},$$
where $N_t^Y := \sum_{x_t, z_t} N_t^Y(x_t, z_t)$ and $N_t^{Y,Z} := \sum_{x_t} N_t^{Y,Z}(x_t)$. We define a pathwise historical fitness [28]
$$K[X_t, Y_t] := \sum_{\tau=0}^{t-1} k(x_{\tau+1}, y_{\tau+1}),$$
and path probabilities for phenotypic and signal histories
$$P_F[X_t \| Z_t] := \prod_{\tau=0}^{t-1} T_F(x_{\tau+1}|x_\tau, z_{\tau+1})\, p_F(x_0|z_0),$$
$$P_S[Z_t \| Y_t] := \prod_{\tau=0}^{t-1} T_S(z_{\tau+1}|z_\tau, y_{\tau+1})\, p_S(z_0|y_0),$$
respectively. Then, by using Equations (1) and (2), we can explicitly represent the fitnesses [26,27,28,29] as
$$\Psi_i[Y_t] = \ln \left\langle e^{K[X_t, Y_t]} \right\rangle_{P_{F,S}[X_t|Y_t]},$$
$$\Psi_c[Y_t, Z_t] = \ln \left\langle e^{K[X_t, Y_t]} \right\rangle_{P_F[X_t \| Z_t]},$$
where $\langle \cdot \rangle_{P[X_t]}$ denotes the average with respect to $P[X_t]$, and $P_{F,S}[X_t|Y_t] := \sum_{Z_t} P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]$ (see also Appendix A.1 for the derivation). Here, $\|$ denotes Kramer's causal conditioning, which indicates a causal relation between the conditioning and the conditioned histories [30,31]. Using the path representation of the fitnesses, we can define the time-backward retrospective path probabilities as
$$P_B^i[X_t, Z_t | Y_t] := \frac{N_t^Y[X_t, Z_t, Y_t]}{N_t^Y} = e^{K[X_t, Y_t] - \Psi_i[Y_t]}\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t],$$
$$P_B^c[X_t | Y_t, Z_t] := \frac{N_t^{Y,Z}[X_t, Z_t, Y_t]}{N_t^{Y,Z}} = e^{K[X_t, Y_t] - \Psi_c[Y_t, Z_t]}\, P_F[X_t \| Z_t],$$
where $N_t^Y[X_t, Z_t, Y_t]$ is the number of cells at time $t$ that have phenotypic and individual-sensing histories $X_t$ and $Z_t$ under the realization of the environmental history $Y_t$. Similarly, $N_t^{Y,Z}[X_t, Z_t, Y_t]$ is the number of cells at time $t$ that have phenotypic history $X_t$ under the realization of the environmental and common-sensing histories $Y_t$ and $Z_t$. Thus, $P_B^i$ and $P_B^c$ can be interpreted as the probabilities of observing a phenotypic history $X_t$ when we trace the phenotypic history from time $t$ back to time $0$, retrospectively [26,27,29]. In contrast, $P_F[X_t \| Z_t]$ is the probability of observing $X_t$ when we trace the phenotypic history in a time-forward manner [26,27,29]. The difference between the two is attributed to the impact of selection, which can be characterized by investigating a population after selection, retrospectively.

3. Stochastic Trajectories of Individual and Common Sensing

In order to provide numerical examples of the difference between individual and common sensing, we consider a Markovian environment with three states, $\mathcal{S}_y = \{s_1^y, s_2^y, s_3^y\}$, and a population with two phenotypic states, $\mathcal{S}_x = \{s_1^x, s_2^x\}$. Of the three environmental states, $s_1^y$ and $s_2^y$ are nutrient A-rich and nutrient B-rich environments, respectively. The environmental state fluctuates between these two states most of the time (Figure 2a). In contrast, $s_3^y$ is a nutrient-poor environment, in which the growth of the population is limited (Figure 2b). The environmental state occasionally sojourns in $s_3^y$ after leaving either $s_1^y$ or $s_2^y$ (Figure 2a). The rule for these stochastic transitions among the environmental states is specified by a stochastic transition matrix $T_{EF}(y'|y)$ from $y$ to $y'$:
$$\{T_{EF}(y'|y)\} = \begin{pmatrix} 0.70 & 0.25 & 0.25 \\ 0.25 & 0.70 & 0.25 \\ 0.05 & 0.05 & 0.50 \end{pmatrix},$$
where the rows are indexed by $y' \in \{s_1^y, s_2^y, s_3^y\}$ and the columns by $y \in \{s_1^y, s_2^y, s_3^y\}$.
The two phenotypic states, $s_1^x$ and $s_2^x$, are assumed to be adapted specifically to the nutrient A-rich state $s_1^y$ and the nutrient B-rich state $s_2^y$, respectively. This is modeled by replication rates $k(s_1^x, s_1^y)$ and $k(s_2^x, s_2^y)$ in the adaptive environments that are higher than $k(s_1^x, s_2^y)$ and $k(s_2^x, s_1^y)$ in the non-adaptive environments (Figure 2b):
$$\{e^{k(x,y)}\} = \begin{pmatrix} 2.24 & 0.32 & 0.08 \\ 0.32 & 2.24 & 0.08 \end{pmatrix},$$
where the rows are indexed by $x \in \{s_1^x, s_2^x\}$ and the columns by $y \in \{s_1^y, s_2^y, s_3^y\}$.
The sensing signal has two states, $\mathcal{S}_z = \{s_1^z, s_2^z\}$, which correspond to the nutrient A-rich and nutrient B-rich environments, $s_1^y$ and $s_2^y$, respectively. A cell in the case of individual sensing, or the cells in the case of common sensing, receive $s_1^z$ or $s_2^z$ with high probability when the environmental state is $s_1^y$ or $s_2^y$, respectively. If the environment is in the nutrient-poor state $s_3^y$, a cell or the cells obtain $s_1^z$ or $s_2^z$ with equal probability. Here, the sensing is assumed to be memoryless, $T_S(z'|z,y') = T_S(z'|y')$; thus, its stochastic behavior is defined by a transition matrix $T_S(z|y)$ for individual sensing and by $T_{EF}(z|y)$ for common sensing (Figure 2c):
$$\{T_S(z|y)\} = \{T_{EF}(z|y)\} = \begin{pmatrix} 0.8 & 0.2 & 0.5 \\ 0.2 & 0.8 & 0.5 \end{pmatrix},$$
where the rows are indexed by $z \in \{s_1^z, s_2^z\}$ and the columns by $y \in \{s_1^y, s_2^y, s_3^y\}$.
In order to compare individual and common sensing, we set the accuracies of sensing to be equal, $T_S(z|y) = T_{EF}(z|y)$, for all $y \in \mathcal{S}_y$ and $z \in \mathcal{S}_z$. Finally, a cell is assumed to switch into phenotypic state $s_i^x$ with high probability when it receives the sensing signal $s_i^z$, for $i \in \{1, 2\}$ (Figure 2d):
$$\{T_F(x|z)\} = \begin{pmatrix} 0.95 & 0.05 \\ 0.05 & 0.95 \end{pmatrix},$$
where the rows are indexed by $x \in \{s_1^x, s_2^x\}$ and the columns by $z \in \{s_1^z, s_2^z\}$,
and the phenotypic switching is set to be memoryless, $T_F(x'|x,z') = T_F(x'|z')$.
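The comparison below (Figure 3) can be sketched numerically as follows (Python; the initial conditions — a uniform initial environment, $p_S(z_0|y_0) = T_S(z_0|y_0)$, and $p_F(x_0|z_0) = T_F(x_0|z_0)$ — are assumptions made here for illustration, as the section does not state them):

```python
import math, random

# Matrices from this section; entry [i][j] is the probability of row state i
# given column state j (each column sums to one).
TEF = [[0.70, 0.25, 0.25], [0.25, 0.70, 0.25], [0.05, 0.05, 0.50]]  # T_EF(y'|y)
EK  = [[2.24, 0.32, 0.08], [0.32, 2.24, 0.08]]                      # e^{k(x,y)}
TS  = [[0.8, 0.2, 0.5], [0.2, 0.8, 0.5]]                            # T_S(z|y)
TF  = [[0.95, 0.05], [0.05, 0.95]]                                  # T_F(x|z)

def draw(probs):
    # Sample an index from a discrete distribution.
    r, c = random.random(), 0.0
    for i, p in enumerate(probs):
        c += p
        if r < c:
            return i
    return len(probs) - 1

def fitness_trajectories(T, seed=0):
    """Sample one environmental history Y and one common-signal history Z,
    and return the fitness trajectories (Psi_i, Psi_c) from Eqs. (1)-(2)."""
    random.seed(seed)
    y = draw([1/3, 1/3, 1/3])                     # assumed uniform initial env.
    z = draw([TS[0][y], TS[1][y]])                # common signal at t = 0
    Ni = [[TF[x][zz] * TS[zz][y] for zz in (0, 1)] for x in (0, 1)]  # N_0(x,z)
    Nc = [TF[x][z] for x in (0, 1)]               # N_0(x) given z_0
    psi_i, psi_c = [], []
    for _ in range(T):
        y = draw([TEF[0][y], TEF[1][y], TEF[2][y]])
        z = draw([TS[0][y], TS[1][y]])
        tot = sum(map(sum, Ni))
        Ni = [[EK[x][y] * TF[x][zz] * TS[zz][y] * tot
               for zz in (0, 1)] for x in (0, 1)]
        Nc = [EK[x][y] * TF[x][z] * sum(Nc) for x in (0, 1)]
        psi_i.append(math.log(sum(map(sum, Ni))))
        psi_c.append(math.log(sum(Nc)))
    return psi_i, psi_c
```

Running many seeds reproduces the qualitative behavior of Figures 3 and 4: either scheme can win on a particular realization, while every per-step fitness increment lies between $\ln 0.08$ and $\ln 2.24$, the extreme replication rates.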
Given these conditions, Figure 3 illustrates the population dynamics of cells with individual sensing (Figure 3a,b) and with common sensing (Figure 3c,d) under two different realizations of the environment. For the first realization, shown in Figure 3a,c,e, $\Psi_i[Y_t]$ is higher than $\Psi_c[Y_t, Z_t]$ (red and blue solid lines in Figure 3e), whereas, for the second realization (Figure 3b,d,f), $\Psi_c[Y_t, Z_t]$ is greater than $\Psi_i[Y_t]$ (Figure 3f). This clearly illustrates that the fitness advantages of individual and common sensing depend strongly on the actual realization of the environment and of the common sensing signal. When common sensing produces a correct signal by chance, the population with common sensing can enjoy a higher fitness gain than that with individual sensing; by contrast, the population with common sensing loses fitness when the signal is incorrect. Figure 4 also shows the behaviors of $\Psi_i[Y_t]$ (Figure 4b) and $\Psi_c[Y_t, Z_t]$ (Figure 4c) under 100 different realizations of $\{Y_t, Z_t\}$, which reinforces the observation that both $\Psi_i[Y_t]$ and $\Psi_c[Y_t, Z_t]$ can fluctuate significantly, depending on the realizations. However, an ensemble average of the fitness shows that $\langle \Psi_i \rangle_Q$ is greater than $\langle \Psi_c \rangle_Q$, at least for this specific instance (red and blue solid lines in Figure 4a).

4. Value of Individual Sensing is Always Greater than that of Common Sensing

In order to characterize the fitness difference between individual and common sensing in general, we derive a detailed fluctuation relation for the fitness difference $g[Y_t, Z_t] := \Psi_i[Y_t] - \Psi_c[Y_t, Z_t]$ from Equations (9) and (10) as
$$e^{-g[Y_t, Z_t]} = \frac{e^{\Psi_c[Y_t, Z_t]}}{e^{\Psi_i[Y_t]}} = \frac{P_B^i[X_t, Z_t|Y_t]}{P_B^c[X_t|Y_t, Z_t]\, P_S[Z_t \| Y_t]} = \frac{P_B^i[Z_t|Y_t]}{P_S[Z_t \| Y_t]},$$
where $P_B^i[Z_t|Y_t] := \sum_{X_t} P_B^i[X_t, Z_t|Y_t]$ (see also Appendix A.2 for the derivation). By assuming that the statistical property of common sensing is the same as that of individual sensing, $Q[Z_t \| Y_t] = P_S[Z_t \| Y_t]$, as in Figure 3 and Figure 4, we obtain the average fluctuation relation (FR) as
$$\left\langle \Psi_i[Y_t] \right\rangle_{Q[Y_t]} - \left\langle \Psi_c[Y_t, Z_t] \right\rangle_{Q[Y_t, Z_t]} = \mathcal{G},$$
where
$$\mathcal{G} := \langle g \rangle_Q = D\left[ P_S[Z_t \| Y_t]\, Q[Y_t] \,\middle\|\, P_B^i[Z_t|Y_t]\, Q[Y_t] \right],$$
is the Kullback–Leibler (KL) divergence between the time-forward sensing behavior, $P_S[Z_t \| Y_t]$, and the time-backward behavior, $P_B^i[Z_t|Y_t]$ (see also Appendix A.3 for the derivation). Together with the non-negativity of the KL divergence, the average FR indicates that the average fitness of individual sensing is always greater than that of common sensing by $\mathcal{G} \ge 0$. As individual and common sensing are assumed to have the same statistical property, the source of the gain $\mathcal{G}$ is attributed to the individuality of the sensing. In the case of individual sensing, the organisms that receive the correct signal by chance grow more than those that receive an incorrect signal. Thus, the retrospective signal histories $P_B^i[Z_t|Y_t]$ are biased by selection away from the time-forward signal histories $P_S[Z_t \| Y_t]$. The gain $\mathcal{G}$ is exactly this bias, quantified by the KL divergence. No such gain is obtained from common sensing, because the sensing signal is common to all organisms and, thus, no bias is induced by selection. This result clearly indicates that the fitness value of individual sensing cannot be properly evaluated by considering only the time-forward behavior of the signal and the environment. Even though individual sensing gains more fitness than common sensing on average, as demonstrated in Figure 4a, $g[Y_t, Z_t]$ fluctuates significantly, and common sensing can gain more fitness than individual sensing by chance (Figure 3b,d and Figure 5a). From the detailed FR for $g[Y_t, Z_t]$, Equation (15), we also derive the integral fluctuation relation (IFR):
$$\left\langle e^{-g[Y_t, Z_t]} \right\rangle_{Q[Y_t, Z_t]} = \left\langle e^{-(\Psi_i[Y_t] - \Psi_c[Y_t, Z_t])} \right\rangle_{Q[Y_t, Z_t]} = 1,$$
which clarifies that $g[Y_t, Z_t]$ fluctuates such that its positive values balance its negative values to satisfy the equality. The integral FR is also verified numerically in Figure 5a,b.
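For the three-state example of Section 3, both the average FR and the IFR can be checked by exact enumeration over all short histories (Python; the uniform initial environment distribution and the initial conditions $p_S = T_S$, $p_F = T_F$ are assumptions made for illustration):

```python
import math
from itertools import product

TEF = [[0.70, 0.25, 0.25], [0.25, 0.70, 0.25], [0.05, 0.05, 0.50]]
EK  = [[2.24, 0.32, 0.08], [0.32, 2.24, 0.08]]
TS  = [[0.8, 0.2, 0.5], [0.2, 0.8, 0.5]]
TF  = [[0.95, 0.05], [0.05, 0.95]]
PE0 = [1/3, 1/3, 1/3]                  # assumed initial environment distribution

def psi_i(Y):
    # Eq. (1) with N_0(x,z) = p_F(x|z_0) p_S(z_0|y_0); returns ln(N_t / N_0).
    N = [[TF[x][z] * TS[z][Y[0]] for z in (0, 1)] for x in (0, 1)]
    for y in Y[1:]:
        tot = sum(map(sum, N))
        N = [[EK[x][y] * TF[x][z] * TS[z][y] * tot for z in (0, 1)] for x in (0, 1)]
    return math.log(sum(map(sum, N)))

def psi_c(Y, Z):
    # Eq. (2) with N_0(x) = p_F(x|z_0).
    N = [TF[x][Z[0]] for x in (0, 1)]
    for y, z in zip(Y[1:], Z[1:]):
        N = [EK[x][y] * TF[x][z] * sum(N) for x in (0, 1)]
    return math.log(sum(N))

def q_Y(Y):                            # Q[Y_t] for the Markovian environment
    p = PE0[Y[0]]
    for a, b in zip(Y, Y[1:]):
        p *= TEF[b][a]
    return p

def ps_Z_given_Y(Z, Y):                # P_S[Z_t || Y_t] with memoryless sensing
    p = 1.0
    for z, y in zip(Z, Y):
        p *= TS[z][y]
    return p

t = 3
ifr, avg_g = 0.0, 0.0
for Y in product(range(3), repeat=t + 1):
    pY, Pi = q_Y(Y), psi_i(Y)
    for Z in product(range(2), repeat=t + 1):
        w = pY * ps_Z_given_Y(Z, Y)    # joint weight Q[Y] P_S[Z || Y]
        g = Pi - psi_c(Y, Z)
        ifr += w * math.exp(-g)
        avg_g += w * g
```

The enumeration recovers $\langle e^{-g} \rangle = 1$ exactly, while $\langle g \rangle = \mathcal{G} > 0$: individual sensing beats common sensing on average even though $g$ changes sign across realizations.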

4.1. The Gain of Fitness by Individual Sensing

We further investigate $\Psi_i[Y_t]$ to clarify how the fitness of organisms with individual sensing is shaped. To this end, as in a previous work that investigated the fitness value of common sensing [27], we additionally assume that $k(x,y)$ can be decomposed as $e^{k(x,y)} = e^{k_{\max}(y)}\, T_K(y|x)$ [27]. In this decomposition, $k_{\max}(y)$ can be interpreted as the maximum replication rate that would be attained if the organisms allocated all their metabolic resources to adapt only to the environmental state $y$; under this extreme allocation, the organisms die out under any environmental state other than $y$. The decomposition effectively means that we presume that each phenotypic state can be characterized by how a cell allocates its metabolic resources to different environmental conditions. $T_K(y|x)$ is the fraction of metabolic resources allocated to the environmental state $y$ in a phenotypic state $x$; we therefore call $T_K(y|x)$ the metabolic allocation strategy of the organisms in this work. The biological motivation behind the metabolic allocation is the problem of generalists versus specialists. In order to adapt to a changing environment, a cell has essentially two tactics. One is to equip the cell with a single fixed phenotypic state that distributes the metabolic resources over all possible environmental states; such a cell can, at least, evade extinction under any environmental state. The other is to switch between multiple specialized phenotypic states, each of which allocates the metabolic resources to a small number of environmental states; in this case, each phenotypic state runs the risk of extinction, but this risk is hedged at the population level by stochastic switching of the phenotypic states. These two tactics are continuously interpolated, and the optimal one depends on the way the environment fluctuates.
The introduction of the metabolic allocation strategy enables us to consider a wider spectrum of biological adaptation, in which the character of each phenotypic state is also optimized evolutionarily.
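For the replication rates of the numerical example in Section 3, one consistent choice of this decomposition can be found by requiring that $T_K(\cdot|x)$ be a probability distribution over $y$ (the paper does not state these values, so they are an illustrative assumption): $e^{k_{\max}(y)} = (3.2,\, 3.2,\, 0.4)$.

```python
# Decomposition e^{k(x,y)} = e^{kmax(y)} T_K(y|x) for the Section 3 example.
EK = [[2.24, 0.32, 0.08], [0.32, 2.24, 0.08]]   # e^{k(x,y)}, rows x, columns y
EKMAX = [3.2, 3.2, 0.4]                          # e^{kmax(y)}: one consistent choice
TK = [[EK[x][y] / EKMAX[y] for y in range(3)] for x in range(2)]
# TK[x] is the metabolic allocation of phenotype x over the three environments:
# approximately [[0.7, 0.1, 0.2], [0.1, 0.7, 0.2]]
```

Under this (non-unique) choice, each phenotype allocates 70% of its resources to its adaptive environment, 10% to the other nutrient-rich environment, and 20% to the nutrient-poor state.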
By defining
$$P_K[Y_t \| X_t] := \prod_{\tau=0}^{t-1} T_K(y_{\tau+1}|x_{\tau+1}),$$
$$K_{\max}[Y_t] := \sum_{\tau=1}^{t} k_{\max}(y_\tau),$$
the historical fitness, Equation (4), is decomposed as
$$K[X_t, Y_t] = K_{\max}[Y_t] + \ln P_K[Y_t \| X_t].$$
By introducing this decomposition into Equation (9), we obtain
$$e^{\Psi_i[Y_t] - \Psi_0[Y_t]} = \frac{P_K[Y_t \| X_t]\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]}{P_B^i[X_t, Z_t|Y_t]\, Q[Y_t]},$$
where $\Psi_0[Y_t] := K_{\max}[Y_t] + \ln Q[Y_t]$, the average of which is known to bound the average fitness of a population without sensing [27] (see Appendix A.4 for the derivation). By marginalizing over $X_t$ and $Z_t$, we have
$$\Psi_i[Y_t] = \Psi_0[Y_t] + \sigma[Y_t] = K_{\max}[Y_t] + \ln P_{KFS}[Y_t|Y_t],$$
where
$$P_{KFS}[Y_t'|Y_t] := \sum_{X_t, Z_t} P_K[Y_t' \| X_t]\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t],$$
and
$$\sigma[Y_t] := \ln \frac{P_{KFS}[Y_t|Y_t]}{Q[Y_t]}.$$
Since the average of $\Psi_0[Y_t]$ is the tight bound of the fitness without sensing, $\sigma[Y_t]$ is the gain of fitness from individual sensing. Here, $P_{KFS}[Y_t'|Y_t]$ is the probability that an organism allocates its metabolic resources to an environmental history $Y_t'$ when it experiences the environmental history $Y_t$. Thus, $P_{KFS}[Y_t|Y_t]$ measures the probability that the metabolic resources are correctly allocated to the actual environmental history $Y_t$, and $1 - P_{KFS}[Y_t|Y_t]$ is the probability of an incorrect allocation. In other words, $P_{KFS}[Y_t|Y_t]$ characterizes how accurately the individual sensing, phenotypic switching, and metabolic allocation together respond to the actual environment. From an information-theoretic viewpoint, this cascade from environment to metabolic allocation via sensing and phenotypic switching is very similar to the auto-encoding and decoding of the information $Y_t$ via multiple layers [32]. The sensing works as the encoding of an environmental history $Y_t$ into $Z_t$. The signal-dependent phenotypic switching is the processing of the encoded signal in the internal layers. The metabolic allocation is the decoding process that recovers the original information, $Y_t$, from $X_t$. Under this interpretation, $P_{KFS}[Y_t'|Y_t]$ determines the statistical correspondence between the encoded information $Y_t$ and the decoded information $Y_t'$, and $P_{KFS}[Y_t|Y_t]$ is the probability that the encoded data $Y_t$ is correctly decoded as $Y_t$. Therefore, the total fidelity can be quantified as
$$\gamma_t := \ln \sum_{Y_t} P_{KFS}[Y_t|Y_t] = \ln \left\langle e^{\sigma[Y_t]} \right\rangle_{Q[Y_t]}.$$
Formally, quantities similar to $\sigma[Y_t]$ and $\gamma_t$ were introduced by Sagawa and Ueda as the coarse-grained entropy production and the efficiency parameter of feedback control in information thermodynamics [33,34]. Using $\gamma_t$, $\sigma[Y_t]$ can be decomposed as
$$\sigma[Y_t] = \gamma_t - \ln \frac{Q[Y_t]}{P_\gamma[Y_t]},$$
where
$$P_\gamma[Y_t] := e^{-\gamma_t}\, P_{KFS}[Y_t|Y_t],$$
is a path probability. By combining this with Equation (22), we have
$$\Psi_i[Y_t] = \Psi_0[Y_t] + \gamma_t - \ln \frac{Q[Y_t]}{P_\gamma[Y_t]}.$$
By taking the average with respect to $Q[Y_t]$, we obtain
$$\langle \Psi_i \rangle_Q = \langle \Psi_0 \rangle_Q + \gamma_t - D\left[ Q[Y_t] \,\middle\|\, P_\gamma[Y_t] \right] \le \langle \Psi_0 \rangle_Q + \gamma_t.$$
Equations (25) and (26) can be regarded as detailed and average FRs, respectively, with respect to $\Psi_0[Y_t] + \gamma_t - \Psi_i[Y_t]$. As $\langle \Psi_0 \rangle_Q$ is the tight upper bound of the average fitness without sensing, this relation means that $\gamma_t$ is an upper bound of the fitness gain from individual sensing. Moreover, $\gamma_t$ is an intrinsic quantity of the population, in the sense that it is determined irrespective of the actual statistical law of the environment, $Q[Y_t]$. The deviation of $\Psi_i[Y_t]$ from $\langle \Psi_0 \rangle_Q + \gamma_t$ satisfies an integral FR as
$$\left\langle e^{-(\Psi_0[Y_t] + \gamma_t - \Psi_i[Y_t])} \right\rangle_{Q[Y_t]} = \left\langle e^{-(\gamma_t - \sigma[Y_t])} \right\rangle_{Q[Y_t]} = 1,$$
the behaviors of which are illustrated numerically in Figure 5c,d.
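The decomposition of $\Psi_i[Y_t]$ into $K_{\max}[Y_t] + \ln P_{KFS}[Y_t|Y_t]$, Equation (22), and the bound $\langle \Psi_i \rangle_Q \le \langle \Psi_0 \rangle_Q + \gamma_t$, Equation (26), can be checked by brute force for the Section 3 example (Python; the decomposition $e^{k_{\max}(y)} = (3.2, 3.2, 0.4)$, the uniform initial environment, and the initial conditions $p_S = T_S$, $p_F = T_F$ are illustrative assumptions):

```python
import math
from itertools import product

TEF = [[0.70, 0.25, 0.25], [0.25, 0.70, 0.25], [0.05, 0.05, 0.50]]
EK  = [[2.24, 0.32, 0.08], [0.32, 2.24, 0.08]]
TS  = [[0.8, 0.2, 0.5], [0.2, 0.8, 0.5]]
TF  = [[0.95, 0.05], [0.05, 0.95]]
EKMAX = [3.2, 3.2, 0.4]                          # e^{kmax(y)}: one consistent choice
TK = [[EK[x][y] / EKMAX[y] for y in range(3)] for x in range(2)]

def psi_i(Y):
    # Eq. (1) with N_0(x,z) = p_F(x|z_0) p_S(z_0|y_0); returns ln(N_t / N_0).
    N = [[TF[x][z] * TS[z][Y[0]] for z in (0, 1)] for x in (0, 1)]
    for y in Y[1:]:
        tot = sum(map(sum, N))
        N = [[EK[x][y] * TF[x][z] * TS[z][y] * tot for z in (0, 1)] for x in (0, 1)]
    return math.log(sum(map(sum, N)))

def p_kfs(Y):
    # P_KFS[Y|Y]: brute-force sum over all phenotype and signal histories.
    t = len(Y) - 1
    total = 0.0
    for X in product(range(2), repeat=t + 1):
        pk = math.prod(TK[X[i + 1]][Y[i + 1]] for i in range(t))
        for Z in product(range(2), repeat=t + 1):
            pf = TF[X[0]][Z[0]] * math.prod(TF[X[i + 1]][Z[i + 1]] for i in range(t))
            ps = TS[Z[0]][Y[0]] * math.prod(TS[Z[i + 1]][Y[i + 1]] for i in range(t))
            total += pk * pf * ps
    return total

def kmax(Y):
    return sum(math.log(EKMAX[y]) for y in Y[1:])

t = 3
Ys = list(product(range(3), repeat=t + 1))
Q = {}
for Y in Ys:                                     # Markovian Q[Y_t], uniform start
    p = 1 / 3
    for a, b in zip(Y, Y[1:]):
        p *= TEF[b][a]
    Q[Y] = p
gamma = math.log(sum(p_kfs(Y) for Y in Ys))      # total fidelity
avg_sigma = sum(Q[Y] * math.log(p_kfs(Y) / Q[Y]) for Y in Ys)
```

Here `avg_sigma` is $\langle \sigma \rangle_Q = \langle \Psi_i \rangle_Q - \langle \Psi_0 \rangle_Q$, which must not exceed `gamma` by Equation (26).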

4.2. Connection with Other Information Measures

In order to link the quantities $\sigma[Y_t]$ and $\gamma_t$ with other common information measures, we further assume that the environment is Markovian,
$$Q[Y_t] = \prod_{\tau=0}^{t-1} T_{EF}(y_{\tau+1}|y_\tau)\, p_E(y_0),$$
and that the sensing is memoryless,
$$T_S(z_{t+1}|Z_t, y_{t+1}) = T_S(z_{t+1}|y_{t+1}).$$
Then, we obtain the joint time-forward probability for $Y_t$ and $Z_t$ and its Bayesian causal decomposition (see Appendix A.5 for the derivation) as
$$P_S[Y_t, Z_t] := P_S[Z_t \| Y_t]\, Q[Y_t] = P_{SB}[Y_t \| Z_t]\, P_{SB}[Z_t \| Y_{t-1}].$$
In this decomposition,
$$P_{SB}[Y_t \| Z_t] := \prod_{\tau=0}^{t-1} T_{SB}(y_{\tau+1}|z_{\tau+1}, y_\tau)\, p(y_0|z_0),$$
$$P_{SB}[Z_t \| Y_{t-1}] := \prod_{\tau=0}^{t-1} T_{SB}(z_{\tau+1}|y_\tau)\, p(z_0)$$
are path probabilities generated by two pairs of new transition probabilities that are obtained by Bayes’ theorem as
$$T_{SB}(z_{t+1}|y_t) := \sum_{y_{t+1}} T_S(z_{t+1}|y_{t+1})\, T_{EF}(y_{t+1}|y_t),$$
$$T_{SB}(y_{t+1}|z_{t+1}, y_t) := \frac{T_S(z_{t+1}|y_{t+1})\, T_{EF}(y_{t+1}|y_t)}{T_{SB}(z_{t+1}|y_t)},$$
where $T_{SB}(y_{t+1}|z_{t+1}, y_t)$ is the Bayesian posterior of the environmental state $y_{t+1}$, given the sensed signal $z_{t+1}$ and the previous environmental state $y_t$. In this procedure, we switch the causal order of $y_t$ and $z_t$ by using Bayes' theorem. Then, using this decomposition, Equations (15) and (21) can be rearranged as
$$e^{-\left(\Psi_i[Y_t] - (\Psi_0[Y_t] + i[Z_t \to Y_t] + g[Y_t, Z_t])\right)} = \frac{P_{SB}[Y_t \| Z_t]}{P_{K,F}[Y_t|Z_t]},$$
where $P_{K,F}[Y_t|Z_t] := \sum_{X_t} P_K[Y_t \| X_t]\, P_F[X_t \| Z_t]$ and $i[Z_t \to Y_t] := \ln \left( P_{SB}[Y_t \| Z_t] / Q[Y_t] \right)$ is the pointwise directed information from $Z_t$ to $Y_t$ (see Appendix A.6 for the derivation). This is another detailed FR with individual sensing, the average version of which can be obtained by taking the average with respect to $P_S[Y_t, Z_t]$:
$$\langle \Psi_i \rangle_Q = \langle \Psi_0 \rangle_Q + I[Z_t \to Y_t] + \mathcal{G} - D_{\mathrm{loss}},$$
where $D_{\mathrm{loss}} := D\left[ P_S[Y_t, Z_t] \,\middle\|\, P_{K,F}[Y_t|Z_t]\, P_{SB}[Z_t \| Y_{t-1}] \right]$ and $I[Z_t \to Y_t] := \langle i[Z_t \to Y_t] \rangle_{P_S[Y_t, Z_t]}$ is the directed information [31]. Directed information extends the mutual information between two trajectories by accounting for the causal relationship between them. Similarly to transfer entropy, directed information quantifies the causal dependency between two trajectories; transfer entropy is related to the upper bound of the rate of directed information [35]. The integral versions of these relations are illustrated numerically in Figure 5e,f. Since $g[Y_t, Z_t] = \Psi_i[Y_t] - \Psi_c[Y_t, Z_t]$, we can immediately see that Equations (35) and (36) are equivalent to the detailed and average FRs, respectively, for the fitness with common sensing:
$$e^{-\left(\Psi_c[Y_t, Z_t] - (\Psi_0[Y_t] + i[Z_t \to Y_t])\right)} = \frac{P_{SB}[Y_t \| Z_t]}{P_{K,F}[Y_t|Z_t]},$$
and
$$\langle \Psi_c \rangle_Q = \langle \Psi_0 \rangle_Q + I[Z_t \to Y_t] - D_{\mathrm{loss}}.$$
These relations for common sensing were originally derived in [27]. For a given and fixed sensing property, $T_S(z_\tau|y_\tau)$, the maximum gain of the average fitness by common sensing is shown to be bounded by $I[Z_t \to Y_t]$ as
$$\max_{T_F, T_K} \langle \Psi_c \rangle_Q - \langle \Psi_0 \rangle_Q \le I[Z_t \to Y_t],$$
where the equality is attained when $D_{\mathrm{loss}} = 0$. $D_{\mathrm{loss}}$ is the loss of fitness due to an imperfect implementation of sequential Bayesian inference, and becomes $0$ if and only if the phenotypic switching strategy, $P_F^*[X_t \| Z_t]$, and the metabolic allocation strategy, $P_K^*[Y_t \| X_t]$, are jointly optimized to implement the sequential Bayesian inference as $P_{K,F}^*[Y_t|Z_t] = P_{SB}[Y_t \| Z_t]$, where
$$P_{K,F}^*[Y_t|Z_t] := \sum_{X_t} P_K^*[Y_t \| X_t]\, P_F^*[X_t \| Z_t].$$
An instance of the optimal metabolic allocation and phenotypic switching strategies is $T_K^*(y|x) = \delta_{x,y}$ and $T_F^*(x'|x,z') = T_{SB}(y'|z',y)\big|_{y'=x',\, y=x}$, when $\mathcal{S}_x = \mathcal{S}_y$. These optimal strategies mean that, for each phenotypic state, all metabolic resources are allocated to one of the environmental states, and the phenotype switches with exactly the Bayesian posterior probability of the environment given the sensing signal. In other words, cells with phenotype $x$ can survive and grow only when the environment is in the state to which all the metabolic resources are allocated under phenotype $x$, and the cells change their phenotypic state by computing the Bayesian posterior probability of the next environmental state given the common sensing signal.
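The Bayesian-posterior ingredient $T_{SB}(y'|z', y)$ of these optimal strategies can be computed directly from the example matrices of Section 3 (a sketch; the phenotype space of that example has only two states rather than three, so the posterior below is shown over the environmental states themselves):

```python
# One-step Bayesian posterior T_SB(y'|z', y) over the next environmental state,
# given the previous state y and the newly sensed signal z' (Markovian case).
TEF = [[0.70, 0.25, 0.25], [0.25, 0.70, 0.25], [0.05, 0.05, 0.50]]  # T_EF(y'|y)
TS  = [[0.8, 0.2, 0.5], [0.2, 0.8, 0.5]]                            # T_S(z|y)

def posterior(z_new, y_prev):
    # Prediction-correction: predict y' via T_EF, then reweight by T_S(z'|y').
    joint = [TS[z_new][yn] * TEF[yn][y_prev] for yn in range(3)]
    norm = sum(joint)                    # = T_SB(z_new | y_prev)
    return [j / norm for j in joint]
```

For instance, after observing the nutrient-A signal $s_1^z$ from state $s_1^y$, the posterior concentrates on $s_1^y$; under the optimal strategies above, $T_F^*$ would switch phenotypes with exactly these posterior probabilities.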
In contrast, in the case of individual sensing, the Bayesian inference is no longer optimal, as $\mathcal{G}$ depends on the strategies of phenotypic switching and metabolic allocation, and $\{P_F^*, P_K^*\}$ may not be the maximizer of $\mathcal{G}$. This fact is shown more clearly as
$$\max_{T_F, T_K} \langle \Psi_i \rangle_Q \ge \langle \Psi_i^* \rangle_Q = \langle \Psi_0 \rangle_Q + I[Z_t \to Y_t] + \mathcal{G}^*,$$
where $\Psi_i^*$ and $\mathcal{G}^*$ are obtained by inserting the $P_F^*$ and $P_K^*$ that satisfy $D_{\mathrm{loss}} = 0$. Equivalently, from $\sigma[Y_t] = \Psi_i[Y_t] - \Psi_0[Y_t]$, we have
$$\left\langle \sigma[Y_t] \right\rangle_{Q[Y_t]} = I[Z_t \to Y_t] + \mathcal{G} - D_{\mathrm{loss}},$$
and
$$\max_{T_F, T_K} \left\langle \sigma[Y_t] \right\rangle_{Q[Y_t]} \ge I[Z_t \to Y_t] + \mathcal{G}^*.$$
This inequality further indicates that the maximum average fitness gain from individual sensing for a fixed sensing strategy is greater than or equal to the directed information plus $\mathcal{G}^*$, which means that sequential Bayesian inference is no longer optimal. It is optimal in the case of common sensing because the sensing signal is common, and the subsequent phenotypic diversification following the sequential Bayesian inference hedges the risk of signal error optimally. With individual sensing, in contrast, the stochasticity of the sensing itself automatically induces diversification in the population, which makes additional diversification following the Bayesian posterior suboptimal and redundant. Moreover, information measures of sensing, such as the directed information, may not be appropriate quantities to capture the efficiency of the overall decision-making process with individual sensing.

5. Discussion and Future Works

These results indicate that $\sigma[Y_t]$ and $\gamma_t$ are the more relevant quantities for characterizing the fitness gain from individual sensing. From the average FR for $\sigma[Y_t]$,
$$\langle \sigma \rangle_Q = \gamma_t - D\left[ Q[Y_t] \,\middle\|\, P_\gamma[Y_t] \right],$$
the maximization of $\langle \sigma \rangle_Q$ reduces to balancing the maximization of the total fidelity $\gamma_t$ against the minimization of $D\left[ Q[Y_t] \,\middle\|\, P_\gamma[Y_t] \right]$. As both $\gamma_t$ and $P_\gamma[Y_t]$ depend on the actual strategies of the organisms, there exists, in general, a tradeoff between them.
In the analogy with autoencoding and decoding, $\gamma_t$ becomes higher when each input $Y_t$ is decoded more correctly. In contrast, $D\left[ Q[Y_t] \,\middle\|\, P_\gamma[Y_t] \right]$ is minimized when the relative fidelity for $Y_t$ matches the probability, $Q[Y_t]$, that the environmental history $Y_t$ appears, since $P_\gamma[Y_t]$ measures the relative fidelity of decoding $Y_t$, given $Y_t$ as the encoded information. From the definition of $P_\gamma[Y_t]$, Equation (24), $P_\gamma[Y_t] \le e^{-\gamma_t}$ must hold for each $Y_t$. If the total fidelity $\gamma_t$ is fixed and small enough to satisfy $\max_{Y_t} Q[Y_t] \le e^{-\gamma_t}$, then balancing sensing, phenotypic switching, and metabolic allocation so that $P_\gamma[Y_t] = Q[Y_t]$ becomes the optimal strategy to maximize $\langle \sigma \rangle_Q$. This observation suggests that, under biologically realistic situations with moderate total fidelity, $P_\gamma[Y_t] = Q[Y_t]$ can be regarded as a proxy for the optimal strategy with individual sensing. If the total fidelity is so high that $\max_{Y_t} Q[Y_t] < e^{-\gamma_t}$ is violated, however, $D\left[ Q[Y_t] \,\middle\|\, P_\gamma[Y_t] \right] = 0$ cannot be achieved, and a more complicated optimization is required.
These investigations, together with the analogy to autoencoding and decoding, show that in order to understand the decision-making of cells and organisms with individual sensing, we should consider a joint optimization of sensing, phenotypic switching, and metabolic allocation, rather than an optimization of some of them with the others fixed and given [36]. In the evolution of cellular and organismal decision-making, these three factors are concurrently subject to natural selection, and we have to frame the problem accordingly. This challenge may lead to a deeper understanding of thermodynamics with feedback, because quantities similar to $\sigma[Y_t]$ and $\gamma_t$ have already appeared in the problem of feedback efficiency in information thermodynamics [33,34]. Moreover, the analogy with auto-encoding may pave the way to linking the fields of machine learning and deep learning with those of evolutionary biology and optimization.
Finally, we should note that all the information-theoretic relations derived in this work, as well as those in previous works, basically assume no cell-cell interactions and no feedback from organisms to the environment. Although extending the relations to relax such assumptions is a difficult problem, doing so would substantially expand the applicability of the information-theoretic approach to a wide range of biological problems.

Author Contributions

Conceptualization, Formal analysis, Writing – original draft, and Funding acquisition: T.J.K. and Y.S.; Software and Visualization: T.J.K.

Funding

This research was supported in part by JST PRESTO Grant Number JPMJPR15E4, JST CREST Grant Number JPMJCR1927, JSPS KAKENHI Grant Numbers 16K17763 and 19H05799, and the 2016 Inamori Research Grants Program, Japan.

Acknowledgments

We acknowledge Yuichi Wakamoto, Takahiro Sagawa, and Takashi Nozoe for their useful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of Equations

Appendix A.1. Derivation of Equations (7) and (8)

$$
\begin{aligned}
\Psi_i[Y_t] &= \ln \sum_{X_t, Z_t} \prod_{\tau=0}^{t-1} e^{k(x_{\tau+1}, y_{\tau+1})}\, T_F(x_{\tau+1}|x_\tau, z_{\tau+1})\, T_S(z_{\tau+1}|z_\tau, y_{\tau+1})\, p_F(x_0|z_0)\, p_S(z_0|y_0) \\
&= \ln \sum_{X_t, Z_t} \left[ \prod_{\tau=0}^{t-1} e^{k(x_{\tau+1}, y_{\tau+1})} \right] \left[ \prod_{\tau=0}^{t-1} T_F(x_{\tau+1}|x_\tau, z_{\tau+1})\, p_F(x_0|z_0) \right] \left[ \prod_{\tau=0}^{t-1} T_S(z_{\tau+1}|z_\tau, y_{\tau+1})\, p_S(z_0|y_0) \right] \\
&= \ln \sum_{X_t, Z_t} e^{K[X_t, Y_t]}\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t] \\
&= \ln \sum_{X_t} e^{K[X_t, Y_t]}\, P_{F,S}[X_t | Y_t] = \ln \left\langle e^{K[X_t, Y_t]} \right\rangle_{P_{F,S}[X_t | Y_t]},
\end{aligned}
$$
and
$$
\begin{aligned}
\Psi_c[Y_t, Z_t] &= \ln \sum_{X_t} \prod_{\tau=0}^{t-1} e^{k(x_{\tau+1}, y_{\tau+1})}\, T_F(x_{\tau+1}|x_\tau, z_{\tau+1})\, p_F(x_0|z_0) \\
&= \ln \sum_{X_t} \left[ \prod_{\tau=0}^{t-1} e^{k(x_{\tau+1}, y_{\tau+1})} \right] \left[ \prod_{\tau=0}^{t-1} T_F(x_{\tau+1}|x_\tau, z_{\tau+1})\, p_F(x_0|z_0) \right] \\
&= \ln \sum_{X_t} e^{K[X_t, Y_t]}\, P_F[X_t \| Z_t] = \ln \left\langle e^{K[X_t, Y_t]} \right\rangle_{P_F[X_t \| Z_t]}.
\end{aligned}
$$
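The two path sums above can be checked numerically on a toy model. The sketch below is our own construction, not the paper's simulation; the model size, parameter values, and all names are arbitrary stand-ins. It enumerates every phenotype and signal history of a small two-state model, computes $\Psi_i[Y_t]$ by the direct double sum of Equation (7), and confirms that collapsing the $Z_t$-sum first via Equation (8) gives the same value, together with the Jensen bound $\Psi_i[Y_t] \ge \langle \Psi_c[Y_t, Z_t] \rangle_{P_S[Z_t \| Y_t]}$, i.e., the average fitness gain of individual over common sensing.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
nx, nz, ny, t = 2, 2, 2, 3       # phenotypes, signals, env states, time steps

def kernel(shape):
    m = rng.random(shape)
    return m / m.sum(axis=0, keepdims=True)   # stochastic in the first index

TF = kernel((nx, nx, nz))   # T_F(x'|x, z')
TS = kernel((nz, nz, ny))   # T_S(z'|z, y')
pF = kernel((nx, nz))       # p_F(x0|z0)
pS = kernel((nz, ny))       # p_S(z0|y0)
k = rng.random((nx, ny))    # replication rate k(x, y)

Y = (0, 1, 1, 0)            # one fixed environmental history y_0, ..., y_t
Xs = list(itertools.product(range(nx), repeat=t + 1))
Zs = list(itertools.product(range(nz), repeat=t + 1))

def P_S(Z):
    # P_S[Z_t || Y_t]: causal path weight of a sensing history given Y
    p = pS[Z[0], Y[0]]
    for tau in range(t):
        p *= TS[Z[tau + 1], Z[tau], Y[tau + 1]]
    return p

def psi_c(Z):
    # Equation (8): sum over phenotype histories with the signal history fixed
    acc = 0.0
    for X in Xs:
        w = pF[X[0], Z[0]]
        for tau in range(t):
            w *= math.exp(k[X[tau + 1], Y[tau + 1]]) * TF[X[tau + 1], X[tau], Z[tau + 1]]
        acc += w
    return math.log(acc)

# Equation (7): direct double sum over phenotype and signal histories
acc = 0.0
for X in Xs:
    for Z in Zs:
        w = pF[X[0], Z[0]] * pS[Z[0], Y[0]]
        for tau in range(t):
            w *= (math.exp(k[X[tau + 1], Y[tau + 1]])
                  * TF[X[tau + 1], X[tau], Z[tau + 1]]
                  * TS[Z[tau + 1], Z[tau], Y[tau + 1]])
        acc += w
psi_i = math.log(acc)

# collapsing the Z_t-sum first reproduces psi_i; Jensen then gives
# Psi_i[Y_t] >= <Psi_c[Y_t, Z_t]>_{P_S[Z_t || Y_t]}
psi_i_factored = math.log(sum(P_S(Z) * math.exp(psi_c(Z)) for Z in Zs))
mean_psi_c = sum(P_S(Z) * psi_c(Z) for Z in Zs)
assert abs(psi_i - psi_i_factored) < 1e-9
assert psi_i >= mean_psi_c - 1e-12
```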

Appendix A.2. Derivation of Equation (15)

$$
\begin{aligned}
e^{-g[Y_t, Z_t]} &= \frac{e^{\Psi_c[Y_t, Z_t]}}{e^{\Psi_i[Y_t]}} = \frac{e^{K[X_t, Y_t]}\, P_F[X_t \| Z_t]}{P_B^c[X_t | Y_t, Z_t]} \cdot \frac{P_B^i[X_t, Z_t | Y_t]}{e^{K[X_t, Y_t]}\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]} \\
&= \frac{P_B^i[X_t, Z_t | Y_t]}{P_B^c[X_t | Y_t, Z_t]\, P_S[Z_t \| Y_t]} = \frac{P_B^i[Z_t | Y_t]}{P_S[Z_t \| Y_t]},
\end{aligned}
$$
where we used Equations (9) and (10) in the first line and marginalized the numerator and the denominator with respect to $X_t$ in the second line.

Appendix A.3. Derivation of Equations (16) and (17)

By taking the average of
$$ g[Y_t, Z_t] = \Psi_i[Y_t] - \Psi_c[Y_t, Z_t] $$
with respect to $Q[Y_t, Z_t] = Q[Z_t \| Y_t]\, Q[Y_t] = P_S[Z_t \| Y_t]\, Q[Y_t]$, we have
$$ \left\langle g[Y_t, Z_t] \right\rangle_{Q[Y_t, Z_t]} = \left\langle \Psi_i[Y_t] \right\rangle_{Q[Y_t]} - \left\langle \Psi_c[Y_t, Z_t] \right\rangle_{Q[Y_t, Z_t]}. $$
Similarly, by taking the average of
$$ g[Y_t, Z_t] = \ln \frac{P_S[Z_t \| Y_t]}{P_B^i[Z_t | Y_t]} = \ln \frac{P_S[Z_t \| Y_t]\, Q[Y_t]}{P_B^i[Z_t | Y_t]\, Q[Y_t]} $$
with respect to the same $Q[Y_t, Z_t] = P_S[Z_t \| Y_t]\, Q[Y_t]$, we have
$$ \left\langle g[Y_t, Z_t] \right\rangle_{Q[Y_t, Z_t]} = D\!\left[ P_S[Z_t \| Y_t]\, Q[Y_t] \,\middle\|\, P_B^i[Z_t | Y_t]\, Q[Y_t] \right] = G. $$
Thus,
$$ \left\langle \Psi_i[Y_t] \right\rangle_{Q[Y_t]} - \left\langle \Psi_c[Y_t, Z_t] \right\rangle_{Q[Y_t, Z_t]} = G. $$
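The relations above, together with Equation (15), can be verified by brute force on a small model. The following sketch is our own construction; the model and its parameters are arbitrary stand-ins for the kernels defined in the main text. It enumerates all histories, checks that $P_B^i[Z_t|Y_t] = e^{-g[Y_t,Z_t]}\, P_S[Z_t \| Y_t]$ is a proper distribution over $Z_t$, and confirms $\langle g \rangle = G \ge 0$ as well as the integral fluctuation relation $\langle e^{-g} \rangle = 1$.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)
nx, nz, ny, t = 2, 2, 2, 2       # phenotypes, signals, env states, time steps

def kernel(shape):
    m = rng.random(shape)
    return m / m.sum(axis=0, keepdims=True)   # stochastic in the first index

TF, TS = kernel((nx, nx, nz)), kernel((nz, nz, ny))  # T_F(x'|x,z'), T_S(z'|z,y')
pF, pS = kernel((nx, nz)), kernel((nz, ny))          # p_F(x0|z0), p_S(z0|y0)
TE, pE = kernel((ny, ny)), kernel((ny,))             # env kernel and initial law
k = rng.random((nx, ny))                             # replication rate k(x, y)

Xs = list(itertools.product(range(nx), repeat=t + 1))
Zs = list(itertools.product(range(nz), repeat=t + 1))
Ys = list(itertools.product(range(ny), repeat=t + 1))

def Q(Y):
    # Q[Y_t]: path probability of an environmental history
    p = pE[Y[0]]
    for tau in range(t):
        p *= TE[Y[tau + 1], Y[tau]]
    return p

def PS(Z, Y):
    # P_S[Z_t || Y_t]
    p = pS[Z[0], Y[0]]
    for tau in range(t):
        p *= TS[Z[tau + 1], Z[tau], Y[tau + 1]]
    return p

def psi_c(Y, Z):
    # Equation (8), brute-force sum over phenotype histories
    acc = 0.0
    for X in Xs:
        w = pF[X[0], Z[0]]
        for tau in range(t):
            w *= math.exp(k[X[tau + 1], Y[tau + 1]]) * TF[X[tau + 1], X[tau], Z[tau + 1]]
        acc += w
    return math.log(acc)

psi_i = {Y: math.log(sum(PS(Z, Y) * math.exp(psi_c(Y, Z)) for Z in Zs)) for Y in Ys}
g = {(Y, Z): psi_i[Y] - psi_c(Y, Z) for Y in Ys for Z in Zs}

# Equation (15): P_B^i[Z_t|Y_t] = e^{-g} P_S[Z_t||Y_t] is normalized over Z_t
for Y in Ys:
    assert abs(sum(math.exp(-g[Y, Z]) * PS(Z, Y) for Z in Zs) - 1.0) < 1e-9

# Equations (16) and (17): <g> = G >= 0, and the IFR <e^{-g}> = 1
avg_g = sum(Q(Y) * PS(Z, Y) * g[Y, Z] for Y in Ys for Z in Zs)
ifr = sum(Q(Y) * PS(Z, Y) * math.exp(-g[Y, Z]) for Y in Ys for Z in Zs)
assert avg_g >= -1e-12 and abs(ifr - 1.0) < 1e-9
```

The IFR check is the discrete analogue of the numerical verification shown in Figure 5a,b, computed here by exact enumeration instead of sampling.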

Appendix A.4. Derivation of Equation (21)

From Equation (9),
$$ \frac{P_B^i[X_t, Z_t | Y_t]}{P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]} = e^{K[X_t, Y_t] - \Psi_i[Y_t]} = e^{\Psi_0[Y_t] - \Psi_i[Y_t]}\, \frac{P_K[Y_t \| X_t]}{Q[Y_t]}, $$
where we used Equation (22) to obtain the last equality. By rearranging the first and the last terms, we have
$$ e^{\Psi_i[Y_t] - \Psi_0[Y_t]} = \frac{P_K[Y_t \| X_t]\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]}{P_B^i[X_t, Z_t | Y_t]\, Q[Y_t]}. $$

Appendix A.5. Derivation of Equation (30)

From Equations (28) and (29), we have
$$
\begin{aligned}
P_S[Y_t, Z_t] := P_S[Z_t \| Y_t]\, Q[Y_t] &= \prod_{\tau=0}^{t-1} T_S(z_{\tau+1} | y_{\tau+1})\, p_S(z_0 | y_0) \prod_{\tau=0}^{t-1} T_E^F(y_{\tau+1} | y_\tau)\, p_E(y_0) \\
&= \prod_{\tau=0}^{t-1} T_S(z_{\tau+1} | y_{\tau+1})\, T_E^F(y_{\tau+1} | y_\tau)\, p_S(z_0 | y_0)\, p_E(y_0) \\
&= \prod_{\tau=0}^{t-1} T_{SB}(y_{\tau+1} | z_{\tau+1}, y_\tau)\, T_{SB}(z_{\tau+1} | y_\tau)\, p(y_0 | z_0)\, p(z_0) \\
&= \prod_{\tau=0}^{t-1} T_{SB}(y_{\tau+1} | z_{\tau+1}, y_\tau)\, p(y_0 | z_0) \prod_{\tau=0}^{t-1} T_{SB}(z_{\tau+1} | y_\tau)\, p(z_0) \\
&= P_{SB}[Y_t \| Z_t]\, P_{SB}[Z_t \| Y_{t-1}],
\end{aligned}
$$
where we used Equations (33) and (34) to obtain the third line, and $p(y_0 | z_0)\, p(z_0) = p_S(z_0 | y_0)\, p_E(y_0)$, which follows from Bayes' theorem.

Appendix A.6. Derivation of Equation (35)

By marginalizing the numerator and the denominator of Equation (21) with respect to $X_t$, we have
$$ e^{\Psi_i[Y_t] - \Psi_0[Y_t]} = \frac{\sum_{X_t} P_K[Y_t \| X_t]\, P_F[X_t \| Z_t]\, P_S[Z_t \| Y_t]}{\sum_{X_t} P_B^i[X_t, Z_t | Y_t]\, Q[Y_t]} = \frac{P_{K,F}[Y_t | Z_t]\, P_S[Z_t \| Y_t]}{P_B^i[Z_t | Y_t]\, Q[Y_t]}. $$
By rearranging this equality, we have
$$ e^{-\Psi_i[Y_t] + \Psi_0[Y_t]} = \frac{P_B^i[Z_t | Y_t]}{P_S[Z_t \| Y_t]} \cdot \frac{Q[Y_t]}{P_{SB}[Y_t \| Z_t]} \cdot \frac{P_{SB}[Y_t \| Z_t]}{P_{K,F}[Y_t | Z_t]} = e^{-g[Y_t, Z_t]}\, e^{-i[Z_t \to Y_t]}\, \frac{P_{SB}[Y_t \| Z_t]}{P_{K,F}[Y_t | Z_t]}, $$
where we used Equation (15) and the definition of $i[Z_t \to Y_t]$. Thus, we have
$$ e^{-\left( \Psi_i[Y_t] - \left( \Psi_0[Y_t] + i[Z_t \to Y_t] + g[Y_t, Z_t] \right) \right)} = \frac{P_{SB}[Y_t \| Z_t]}{P_{K,F}[Y_t | Z_t]}. $$

References

1. Perkins, T.J.; Swain, P.S. Strategies for cellular decision-making. Mol. Syst. Biol. 2009, 5, 326.
2. Kobayashi, T.J.; Kamimura, A. Theoretical aspects of cellular decision-making and information-processing. Adv. Exp. Med. Biol. 2012, 736, 275–291.
3. Bowsher, C.G.; Swain, P.S. Environmental sensing, information transfer, and cellular decision-making. Curr. Opin. Biotechnol. 2014, 28, 149–155.
4. Tostevin, F.; ten Wolde, P.R. Mutual Information between Input and Output Trajectories of Biochemical Networks. Phys. Rev. Lett. 2009, 102, 218101.
5. Kobayashi, T.J. Implementation of dynamic Bayesian decision making by intracellular kinetics. Phys. Rev. Lett. 2010, 104, 228104.
6. Bowsher, C.G.; Swain, P.S. Identifying sources of variation and the flow of information in biochemical networks. Proc. Natl. Acad. Sci. USA 2012, 109, E1320–E1328.
7. Mayer, A.; Mora, T.; Rivoire, O.; Walczak, A.M. Transitions in optimal adaptive strategies for populations in fluctuating environments. Phys. Rev. E 2017, 96, 032412.
8. Tkacik, G.; Callan, C.G.; Bialek, W. Information flow and optimization in transcriptional regulation. Proc. Natl. Acad. Sci. USA 2008, 105, 12265–12270.
9. Cheong, R.; Rhee, A.; Wang, C.J.; Nemenman, I.; Levchenko, A. Information transduction capacity of noisy biochemical signaling networks. Science 2011, 334, 354–358.
10. Brennan, M.D.; Cheong, R.; Levchenko, A. How information theory handles cell signaling and uncertainty. Science 2012, 338, 334–335.
11. Uda, S.; Saito, T.H.; Kudo, T.; Kokaji, T.; Tsuchiya, T.; Kubota, H.; Komori, Y.; Ozaki, Y.; Kuroda, S. Robustness and Compensation of Information Transmission of Signaling Pathways. Science 2013, 341, 558–561.
12. Barato, A.C.; Hartich, D.; Seifert, U. Nonequilibrium sensing and its analogy to kinetic proofreading. New J. Phys. 2014, 17, 055026.
13. Das, S.G.; Iyengar, G.; Rao, M. A lower bound on the free energy cost of molecular measurements. arXiv preprint.
14. Lahiri, S.; Sohl-Dickstein, J.; Ganguli, S. A universal tradeoff between power, precision and speed in physical communication. arXiv 2016, arXiv:1603.07758.
15. Bo, S.; Del Giudice, M.; Celani, A. Thermodynamic limits to information harvesting by sensory systems. J. Stat. Mech. 2015, P01014.
16. Ouldridge, T.E. The importance of thermodynamics for molecular systems, and the importance of molecular systems for thermodynamics. Nat. Comput. 2018, 17, 3–29.
17. Stephens, D.W. Variance and the value of information. Am. Nat. 1989, 134, 128–140.
18. Haccou, P.; Iwasa, Y. Optimal mixed strategies in stochastic environments. Theor. Popul. Biol. 1995, 47, 212–243.
19. Bergstrom, C.T.; Lachmann, M. Shannon information and biological fitness. In Proceedings of the IEEE Information Theory Workshop, San Antonio, TX, USA, 24–29 October 2004.
20. Kussell, E.; Leibler, S. Phenotypic diversity, population growth, and information in fluctuating environments. Science 2005, 309, 2075–2078.
21. Donaldson-Matasci, M.C.; Bergstrom, C.T.; Lachmann, M. The fitness value of information. Oikos 2010, 119, 219–230.
22. Rivoire, O.; Leibler, S. The value of information for populations in varying environments. J. Stat. Phys. 2011, 142, 1124–1166.
23. Pugatch, R.; Barkai, N.; Tlusty, T. Asymptotic Cellular Growth Rate as the Effective Information Utilization Rate. arXiv 2013, arXiv:1308.0623.
24. Rivoire, O.; Leibler, S. A model for the generation and transmission of variations in evolution. Proc. Natl. Acad. Sci. USA 2014, 111, E1940–E1949.
25. Rivoire, O. Informations in Models of Evolutionary Dynamics. J. Stat. Phys. 2015, 162, 1324–1352.
26. Kobayashi, T.J.; Sughiyama, Y. Fluctuation Relations of Fitness and Information in Population Dynamics. Phys. Rev. Lett. 2015, 115, 238102.
27. Kobayashi, T.J.; Sughiyama, Y. Stochastic and Information-thermodynamic Structures of Population Dynamics in Fluctuating Environment. arXiv 2017, arXiv:1703.00125.
28. Leibler, S.; Kussell, E. Individual histories and selection in heterogeneous populations. Proc. Natl. Acad. Sci. USA 2010, 107, 13183–13188.
29. Sughiyama, Y.; Kobayashi, T.J.; Tsumura, K.; Aihara, K. Pathwise thermodynamic structure in population dynamics. Phys. Rev. E 2015, 91, 032120.
30. Kramer, G. Directed Information for Channels with Feedback. Ph.D. Thesis, Swiss Federal Institute of Technology, Zurich, Switzerland, 1998.
31. Permuter, H.H.; Kim, Y.-H.; Weissman, T. Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing. IEEE Trans. Inf. Theory 2011, 57, 3248–3259.
32. Baldi, P. Autoencoders, Unsupervised Learning, and Deep Architectures. JMLR Workshop Conf. Proc. 2012, 27, 37–50.
33. Sagawa, T. Thermodynamics of Information Processing in Small Systems; Springer: Berlin/Heidelberg, Germany, 2012.
34. Sagawa, T.; Ueda, M. Nonequilibrium thermodynamics of feedback control. Phys. Rev. E 2012, 85, 021104.
35. Liu, Y.; Aviyente, S. The relationship between transfer entropy and directed information. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, MI, USA, 5–8 August 2012; pp. 73–76.
36. Govern, C.C.; ten Wolde, P.R. Optimal resource allocation in cellular sensing systems. Proc. Natl. Acad. Sci. USA 2014, 111, 17486–17491.
Figure 1. Schematic diagrams of the population dynamics of cells with individual (a) and common (b) sensing. The colors of the cells and of the molecules on them represent phenotypic states and sensing signals, respectively. Bars above the diagrams indicate the histories of the environmental state and of the common sensing signal. In (a), the sensing signal of each cell is correlated with the environmental state but shows intercellular variation due to the stochasticity of individual sensing. In (b), by contrast, all cells at a given time point share the same sensing signal, shown by the background colors in the diagram.
Figure 2. (a) A diagrammatic representation of the state transitions of the environment used for the simulations in Figure 3, Figure 4 and Figure 5. Three environmental states are assumed; (b) Replication rates of cells with two different phenotypic states under the different environmental states; (c) Environment-dependence of the sensing signal, i.e., the probabilities of obtaining each sensing signal under each environmental state; (d) Signal-dependent phenotype switching. The thickness of the arrows represents the relative probabilities and replication rates. The parameter values used for the simulations are given by Equations (11)–(14).
Figure 3. (a,b) Trajectories of population sizes with individual sensing under two different realizations of the environment. The history of the environment, $Y_t$, is shown by the colored bar on each panel; the colors represent the environmental states defined in Figure 2a. Each colored line corresponds to the population size of the cells with phenotypic state x and sensing signal z, with the actual value of (x, z) designated in the panels. The gray lines in (a,b) replicate the trajectories in (c,d), respectively, for comparison. (c,d) Trajectories of population sizes with common sensing under the same realizations of the environment as in (a) and (b), respectively. On each panel, the history of the common signal, $Z_t$, is additionally shown by a colored bar. Each colored line corresponds to the population size of the cells with phenotypic state x, with the actual value of x designated in the panels. (e,f) Fitnesses of the populations with individual and common sensing, $\Psi_i[Y_t]$ (red solid curve) and $\Psi_c[Y_t, Z_t]$ (blue solid curve), under the same realizations of the environment and common signal as in (a,c) and (b,d). Related quantities are also shown for comparison.
Figure 4. (a) Average values of the fitnesses and related quantities; (b,c) Fluctuation of the fitness with individual sensing, $\Psi_i[Y_t]$ (b), and with common sensing, $\Psi_c[Y_t, Z_t]$ (c); (d–f) Fluctuation of $\Psi_0[Y_t]$ (d); $\Psi_0[Y_t] + i[Z_t \to Y_t] + g[Y_t, Z_t]$ (e); and $\Psi_0[Y_t] + i[Z_t \to Y_t]$ (f).
Figure 5. Numerical verification of the IFRs for $g[Y_t, Z_t]$ (a,b); $\gamma_t - \sigma[Y_t]$ (c,d); and $\Psi_0[Y_t] + i[Z_t \to Y_t] + g[Y_t, Z_t] - \Psi_i[Y_t]$ (e,f). The left panels show the integrands of the IFRs for 100 different realizations of the environmental and common-signal histories. The right panels show the sample averages of the integrands. Thin colored curves are obtained by averaging $10^5$ samples each, and the thick black curves by averaging $1.2 \times 10^8$ samples.

Share and Cite

Kobayashi, T.J.; Sughiyama, Y. Fitness Gain of Individually Sensed Information by Cells. Entropy 2019, 21, 1002. https://doi.org/10.3390/e21101002