Article

Limits to Perception by Quantum Monitoring with Finite Efficiency

by Luis Pedro García-Pintos 1,2,* and Adolfo del Campo 1,3,4,5,*
1 Department of Physics, University of Massachusetts, Boston, MA 02125, USA
2 Joint Center for Quantum Information and Computer Science and Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA
3 Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg
4 Donostia International Physics Center, E-20018 San Sebastián, Spain
5 IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao, Spain
* Authors to whom correspondence should be addressed.
Entropy 2021, 23(11), 1527; https://doi.org/10.3390/e23111527
Submission received: 19 October 2021 / Revised: 13 November 2021 / Accepted: 14 November 2021 / Published: 17 November 2021
(This article belongs to the Special Issue Quantum Darwinism and Friends)

Abstract
We formulate limits to perception under continuous quantum measurements by comparing the quantum states assigned by agents that have partial access to measurement outcomes. To this end, we provide bounds on the trace distance and the relative entropy between the assigned state and the actual state of the system. These bounds are expressed solely in terms of the purity and von Neumann entropy of the state assigned by the agent, and are shown to characterize how an agent’s perception of the system is altered by access to additional information. We apply our results to Gaussian states and to the dynamics of a system embedded in an environment illustrated on a quantum Ising chain.

Quantum theory rests on the fact that the quantum state of a system encodes all predictions of possible measurements as well as the system’s posterior evolution. However, in general, different agents may assign different states to the same system, depending on their knowledge of it. Complete information about the physical state of a system is represented by a pure state, mathematically modeled by a unit vector in Hilbert space. By contrast, mixed states correspond to an incomplete description of the system, due either to uncertainties in the preparation or to correlations of the system with secondary systems. In this paper, we address how the perception of a system differs among observers with different levels of knowledge. Specifically, we quantify how different the effective descriptions that two agents provide of the same system can be, when information is acquired through continuous measurements.
Consider a monitored quantum system, that is, a system being continuously measured in time. An omniscient agent O is assumed to know all interactions and measurements that occur to the system. In particular, she has access to all outcomes of the measurements that are performed. As such, O has a complete description of the system in terms of a pure state, $\rho_t^O = (\rho_t^O)^2$.
While not necessary for subsequent results, we model such a monitoring process by continuous quantum measurements [1,2,3] as a natural test-bed with experimental relevance [4,5,6]. For ideal continuous quantum measurements, the state ρ t O satisfies a stochastic equation dictating its change,
$d\rho_t^O = -i[H,\rho_t^O]\,dt + \Lambda\rho_t^O\,dt + \sum_\alpha \mathcal{I}[A_\alpha]\rho_t^O\,dW_t^\alpha. \qquad (1)$
The dephasing superoperator Λ ρ t O is of Lindblad form,
$\Lambda\rho_t^O = -\sum_\alpha \frac{1}{8\tau_m^\alpha}\,\big[A_\alpha,[A_\alpha,\rho_t^O]\big] \qquad (2)$
for the set of measured physical observables { A α } , and the “innovation terms” are given by
$\mathcal{I}[A_\alpha]\rho_t^O = \sqrt{\frac{1}{4\tau_m^\alpha}}\left(\{A_\alpha,\rho_t^O\} - 2\,\mathrm{Tr}(A_\alpha\rho_t^O)\,\rho_t^O\right). \qquad (3)$
The latter account for the information about the system acquired during the monitoring process, and model the quantum back-action on the state during a measurement. The characteristic measurement times  τ m α depend on the strength of the measurement, and characterize the time over which information of the observable A α is acquired. The terms d W t α are independent random Gaussian variables of zero mean and variance d t .
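As a concrete, merely illustrative numerical sketch, Equation (1) for a single qubit monitored in $\sigma^z$ can be integrated with an Euler-Maruyama scheme; the Hamiltonian, measurement time, and step size below are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative parameters (assumptions, not from the paper)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sx        # Hamiltonian
A = sz              # monitored observable
tau_m = 1.0         # characteristic measurement time
dt = 1e-4           # integration step

def sme_step(rho):
    """One Euler-Maruyama step of the ideal stochastic master equation (1)."""
    dW = rng.normal(0.0, np.sqrt(dt))
    comm = H @ rho - rho @ H                          # [H, rho]
    Arho = A @ rho - rho @ A                          # [A, rho]
    dephasing = -(A @ Arho - Arho @ A) / (8 * tau_m)  # -(1/8 tau_m)[A,[A,rho]]
    mean_A = np.trace(A @ rho).real
    innovation = np.sqrt(1 / (4 * tau_m)) * (A @ rho + rho @ A - 2 * mean_A * rho)
    rho = rho + (-1j * comm + dephasing) * dt + innovation * dW
    rho = 0.5 * (rho + rho.conj().T)                  # restore Hermiticity
    return rho / np.trace(rho).real                   # remove residual Euler drift

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # pure |+> state
for _ in range(2000):
    rho = sme_step(rho)

purity = np.trace(rho @ rho).real   # ideal monitoring keeps the state (nearly) pure
```

Dropping the innovation term and the noise recovers the deterministic dephasing evolution assigned by an agent without access to the record.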
An agent A without access to the measurement outcomes possesses a different, incomplete description of the state of the system. The need to average over the unknown results implies that the state ρ t A assigned by A satisfies the master equation
$d\rho_t^A = -i[H,\rho_t^A]\,dt + \Lambda\rho_t^A\,dt, \qquad (4)$
obtained from (1) by using that $\langle dW_t^\alpha\rangle = 0$, where $\langle\cdot\rangle$ denotes averages over realizations of the measurement process [1]. Assuming that agent A knows the initial state of the system before the measurement process, $\rho_0^O = \rho_0^A$, the state that she assigns at later times is $\rho_t^A \neq \rho_t^O$.
As a result of the incomplete description of the state of the system, agent A suffers from a growing uncertainty in the predictions of measurement outcomes. We quantify this by means of two figures of merit: the trace distance and the relative entropy.
The trace distance between states σ 1 and σ 2 is defined as
$D(\sigma_1,\sigma_2) = \frac{\|\sigma_1-\sigma_2\|_1}{2}, \qquad (5)$
where the trace norm of an operator with spectral decomposition $A = \sum_j \lambda_j |j\rangle\langle j|$ is $\|A\|_1 = \sum_j |\lambda_j|$. Its operational meaning derives from the fact that the trace distance characterizes the maximum difference between outcome probabilities for any measurement on the states σ 1 and σ 2 :
$D(\sigma_1,\sigma_2) = \max_{0\le P\le \mathbb{1}} \big|\mathrm{Tr}(P\sigma_1) - \mathrm{Tr}(P\sigma_2)\big|, \qquad (6)$
where the maximization is over POVM elements, i.e., operators satisfying $0\le P\le \mathbb{1}$. The trace distance also quantifies the probability p of successfully guessing, with a single measurement instance, the correct state in a scenario where one assumes equal prior probabilities for having state σ 1 or σ 2 : the best conceivable protocol gives $p = \frac{1}{2}\big(1 + D(\sigma_1,\sigma_2)\big)$. Thus, if two states are close in trace distance, they are hard to distinguish under any conceivable measurement [7,8,9].
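The trace distance and the associated optimal guessing probability are straightforward to evaluate from the spectrum of $\sigma_1-\sigma_2$; the following sketch, with an illustrative pair of qubit states, makes the operational statement concrete.

```python
import numpy as np

def trace_distance(s1, s2):
    """D(s1, s2) = (1/2)||s1 - s2||_1, via the eigenvalues of the difference."""
    return 0.5 * np.abs(np.linalg.eigvalsh(s1 - s2)).sum()

# Illustrative example: a pure qubit state vs. the maximally mixed state
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2

D = trace_distance(pure, mixed)   # = 1/2
p_guess = 0.5 * (1 + D)           # optimal single-shot guessing probability = 3/4
```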
The relative entropy also serves as a figure of merit to quantify the distance between probability distributions, in particular characterizing the extent to which one distribution can encode information contained in the other one [10]. In the quantum case, the relative entropy is defined as:
$S(\sigma_1\|\sigma_2) \equiv \mathrm{Tr}(\sigma_1\log\sigma_1) - \mathrm{Tr}(\sigma_1\log\sigma_2). \qquad (7)$
In a hypothesis-testing scenario between states σ 1 and σ 2 , the probability $p_N$ of wrongly believing that σ 2 is the correct state scales as $p_N \sim e^{-N S(\sigma_1\|\sigma_2)}$ in the limit of large N, where N is the number of copies of the state available to measure [11,12]. That is, σ 2 is easily confused with σ 1 if $S(\sigma_1\|\sigma_2)$ is small [13,14].
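A direct numerical implementation of the quantum relative entropy (assuming natural logarithms and the convention $0\log 0 = 0$) might read as follows; the two commuting example states are an illustrative choice.

```python
import numpy as np

def relative_entropy(s1, s2, eps=1e-12):
    """S(s1||s2) = Tr(s1 log s1) - Tr(s1 log s2), natural logarithm."""
    p, _ = np.linalg.eigh(s1)
    q, V = np.linalg.eigh(s2)
    p = p[p > eps]
    term1 = (p * np.log(p)).sum()                       # Tr(s1 log s1)
    # Tr(s1 log s2) = sum_j <v_j|s1|v_j> log q_j, in the eigenbasis of s2
    overlaps = np.einsum("ji,jk,ki->i", V.conj(), s1, V).real
    term2 = (overlaps * np.log(np.maximum(q, eps))).sum()
    return term1 - term2

s1 = np.diag([0.6, 0.4])
s2 = np.diag([0.5, 0.5])
S12 = relative_entropy(s1, s2)   # = 0.6 ln(1.2) + 0.4 ln(0.8)
```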

1. Quantum Limits to Perception

Lack of knowledge of the outcomes from measurements performed on the system induces A to assign an incomplete, mixed, state to the system. This hinders the agent’s perception of the system (see illustration in Figure 1). We quantify this by the trace distance and the relative entropy.
We are interested in comparing A ’s incomplete description to the pure state ρ T O assigned by O , i.e., to the complete description. Under ideal monitoring, an initially pure state ρ T O remains pure. Therefore, the following holds [7]:
$1 - \mathrm{Tr}(\rho_T^O\rho_T^A) \le D(\rho_T^O,\rho_T^A) \le \sqrt{1 - \mathrm{Tr}(\rho_T^O\rho_T^A)}. \qquad (8)$
One can then directly relate the average trace distance to the purity $\mathcal{P}(\rho_T^A) \equiv \mathrm{Tr}[(\rho_T^A)^2]$ of the state ρ T A as
$1 - \mathcal{P}(\rho_T^A) \le \langle D(\rho_T^O,\rho_T^A)\rangle \le \sqrt{1-\mathcal{P}(\rho_T^A)}, \qquad (9)$
by using Jensen’s inequality, the concavity of the square root, and the fact that $\langle\mathrm{Tr}(\rho_T^O\rho_T^A)\rangle = \mathrm{Tr}[(\rho_T^A)^2]$ since $\rho_T^A = \langle\rho_T^O\rangle$. The level of mixedness of the state ρ T A that A assigns to the system thus provides lower and upper bounds on the average probability of error that she makes in guessing the actual state of the system ρ T O . This provides an operational meaning to the purity of a quantum state, as a quantifier of the average trace distance between a state ρ t O and the post-measurement (average) state ρ t A .
To appreciate the timescale over which the average trace distance grows, we note that at short times
$\frac{T}{\tau_D} \le \langle D(\rho_T^O,\rho_T^A)\rangle \le \sqrt{\frac{T}{\tau_D}}, \qquad (10)$
where the decoherence rate is given by [15,16]
$\frac{1}{\tau_D} = \sum_\alpha \frac{1}{4\tau_m^\alpha}\,\mathrm{Var}_{\rho_0^A}(A_\alpha), \qquad (11)$
in terms of the variance Var ρ 0 A ( A α ) of the measured observables over the initial pure state ρ 0 A . Analogous bounds can be derived at arbitrary times of evolution for the difference of perceptions among various agents (see Appendix A).
For the case of the quantum relative entropy between states of complete and incomplete knowledge, the following identity holds:
$\langle S(\rho_t^O\|\rho_t^A)\rangle = S(\rho_t^A), \qquad (12)$
proven by using that ρ t O is pure and that the von Neumann entropy of a state σ is $S(\sigma) \equiv -\mathrm{Tr}(\sigma\log\sigma)$. Thus, the entropy of the state assigned by agent A fully determines the average relative entropy with respect to the complete description ρ t O (alternative interpretations of this quantity have been given in [17,18]).
Similar calculations allow one to bound the variances of $D(\rho_T^O,\rho_T^A)$ and of $S(\rho_t^O\|\rho_t^A)$ as well. The variance of the trace distance, $\Delta D_T^2 \equiv \langle D^2(\rho_T^O,\rho_T^A)\rangle - \langle D(\rho_T^O,\rho_T^A)\rangle^2$, satisfies
$\Delta D_T^2 \le \mathcal{P}(\rho_T^A) - \mathcal{P}(\rho_T^A)^2, \qquad (13)$
while for the variance of the relative entropy it holds that
$\Delta S^2(\rho_t^O\|\rho_t^A) \le \mathrm{Tr}(\rho_t^A\log^2\rho_t^A) - S^2(\rho_t^A). \qquad (14)$
The right-hand side of this inequality admits a classical interpretation in terms of the variance of the surprisal $-\log p_j$ over the eigenvalues $p_j$ of $\rho_t^A$ [14]. We thus find that, at the level of a single realization, the dispersion of the relative entropy between the states assigned by the agents O and A is upper bounded by the variance of the surprisal in the description of A . The latter naturally vanishes when ρ t A is pure, and increases as the state becomes more mixed.
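For a concrete feel of this right-hand side, the surprisal variance can be computed directly from the eigenvalues of $\rho_t^A$; the spectrum below is an illustrative choice.

```python
import numpy as np

p = np.array([0.8, 0.2])            # eigenvalues of rho_A (illustrative)
surprisal = -np.log(p)              # -log p_j
S = (p * surprisal).sum()           # von Neumann entropy = mean surprisal
var_surprisal = (p * surprisal**2).sum() - S**2   # bounds the dispersion of S(rho_O||rho_A)
```

For a pure state (spectrum (1, 0)) the surprisal variance vanishes, as noted in the text.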

2. Transition to Complete Descriptions

So far we considered the extreme case of comparing the states assigned by A , who is in complete ignorance of the measurement outcomes, and by an omniscient agent O . One can in fact consider a continuous transition between these limiting cases, i.e., as the accuracy in the perception of the monitored system by an agent is enhanced, as illustrated in Figure 1. Consider a third agent B , with access to a fraction of the measurement output. This can be modeled by introducing a filter function η ( α ) [ 0 , 1 ] characterizing the efficiency of the measurement channels in Equation (1) [1]. Then, the dynamics of state ρ t B is dictated by
$d\rho_t^B = -i[H,\rho_t^B]\,dt + \Lambda\rho_t^B\,dt + \sum_\alpha \sqrt{\eta(\alpha)}\,\mathcal{I}[A_\alpha]\rho_t^B\,dV_t^\alpha, \qquad (15)$
with $dV_t^\alpha$ Wiener noises for observer B . It holds that $\rho_t^B = \langle\rho_t^O\rangle_B$, where the average is now over the outcomes obtained by O that are unknown to B [1].
Note that the case of null measurement efficiencies, η ( α ) = 0 , gives exactly the same dynamics as that of a system in which the monitored observables { A α } are coupled to environmental degrees of freedom, producing dephasing [19,20]. Equations (15) and (1) then correspond to unravellings in which partial or full access to environmental degrees of freedom allows one to learn the state of the system by conditioning on the state observed in the environment. Therefore, knowing how $\langle D(\rho_t^B,\rho_t^O)\rangle$ and $\langle S(\rho_t^O\|\rho_t^B)\rangle$ decrease as η increases directly quantifies how much the description of an open system can be improved by observing a fraction of the environment. This is reminiscent of the Quantum Darwinism approach, whereby fractions of the environment encode objective approximate descriptions of the system. While in the Darwinistic framework the focus is on environmental correlations, we focus on the state of the system itself.
The results of the previous section hold for partial-ignorance state ρ t B as well:
$1 - \mathcal{P}(\rho_T^B) \le \langle D(\rho_T^O,\rho_T^B)\rangle_B \le \sqrt{1-\mathcal{P}(\rho_T^B)}, \qquad (16a)$
$\langle S(\rho_t^O\|\rho_t^B)\rangle_B = S(\rho_t^B). \qquad (16b)$
Similar extensions are obtained for the variances. This allows one to explore the transition from the incomplete description of A to a complete description of the state of the system as η → 1 . Note that these results hold for each realization of a trajectory of B ’s state ρ t B , and that, if one averages over the measurement outcomes unknown to both agents A and B , Equation (16b) gives $\langle S(\rho_t^O\|\rho_t^B)\rangle = \langle S(\rho_t^B)\rangle$.
These results allow one to compare the descriptions of different agents that jointly monitor a system [1,20,21,22,23]. We show in Appendix A that
$\left|\mathrm{Tr}[(\rho_T^A)^2] - \mathrm{Tr}[(\rho_T^B)^2]\right| \le \langle D(\rho_T^A,\rho_T^B)\rangle_{AB} \le \sqrt{1-\mathrm{Tr}[(\rho_T^A)^2]} + \sqrt{1-\mathrm{Tr}[(\rho_T^B)^2]}. \qquad (17)$
The joint monitoring of a system by independent observers has been realized experimentally in [24,25].

3. Illustrations

3.1. Evolution of the Limits to Perception

Consider a 1D transverse field Ising model, with the Hamiltonian
$H = -h\sum_{j=1}^{N}\sigma_j^x - J\sum_{j=1}^{N-1}\sigma_j^z\sigma_{j+1}^z, \qquad (18)$
where $\sigma_j^x$ and $\sigma_j^z$ denote Pauli matrices in the x and z directions acting on spin j, and { h , J } denote coupling strengths.
We study the case of observer O monitoring the individual spin z components. Equation (1) thus governs the evolution of the state ρ t O , with { A α } = { σ j z } . Meanwhile, the state assigned by observers with partial access to measurement outcomes follows Equation (15). The case η ( j ) = 0 gives equivalent dynamics to that of an Ising chain in which individual spins couple to environmental degrees of freedom via σ j z , producing dephasing.
Figure 2 illustrates the evolution of the averaged relative entropy $\langle S(\rho_t^O\|\rho_t^B)\rangle$ between the complete description and B ’s partial one, for different values of the monitoring efficiency η . The average $\langle\cdot\rangle$ is over all measurement outcomes. Analogous results for the average trace distance can be found in Appendix A. The dynamics are simulated by implementing the monitoring process as a sequence of weak measurements, which can be modeled by Kraus operators acting on the state of the system. Specifically, the evolution of ρ t O and of the corresponding state ρ t B under partial measurements is obtained numerically by assuming two independent measurement processes, as in [1].
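The weak-measurement implementation can be sketched in a minimal setting: a single qubit measured in $\sigma^z$ with Gaussian Kraus operators, following the standard construction of [1]. The parameters are illustrative, and the chain Hamiltonian is omitted for simplicity; repeated application collapses the conditional state onto a $\sigma^z$ eigenstate.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
tau_m, dt = 1.0, 0.01          # illustrative measurement time and step

def weak_measure(rho):
    """One weak sigma_z measurement step via Gaussian Kraus operators."""
    p0 = rho[0, 0].real                           # population of |0> (sigma_z = +1)
    z = 1.0 if rng.random() < p0 else -1.0        # latent eigenvalue for this sample
    r = z + np.sqrt(tau_m / dt) * rng.normal()    # noisy measurement record
    # Kraus operator M_r = exp(-dt (r - sigma_z)^2 / (4 tau_m)), diagonal here
    M = np.diag(np.exp(-dt * (r - np.array([1.0, -1.0]))**2 / (4 * tau_m)))
    rho = M @ rho @ M
    return rho / np.trace(rho).real

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+> state
for _ in range(5000):                                     # total time 50 tau_m
    rho = weak_measure(rho)

z_mean = (rho[0, 0] - rho[1, 1]).real   # collapses toward +1 or -1
purity = np.trace(rho @ rho).real       # conditional state ends up (nearly) pure
```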

3.2. Transition to Complete Descriptions

Consider the case of a one-dimensional harmonic oscillator with position and momentum operators X and P. We assume agent B is monitoring the position of the oscillator with an efficiency η . The dynamics is dictated by Equation (15) for the case of a single monitored observable X, and can be determined by a set of differential equations on the moments of the Gaussian state ρ t B [1,21].
We prove in Appendix A that the purity of the density matrix at long times has a simple expression in terms of the measurement efficiency, $\mathcal{P}(\rho_T^B) = \sqrt{\eta}$. Equation (16) and properties of Gaussian states [22,23,24,25,26] then imply
$1 - \sqrt{\eta} \le \langle D(\rho_T^O,\rho_T^B)\rangle_B \le \sqrt{1-\sqrt{\eta}}, \qquad (19)$
and
$\langle S(\rho_t^O\|\rho_t^B)\rangle_B = \left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right) - \left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right). \qquad (20)$
See [27] for further results on the gains in purity that can be obtained by conditioning on measurement outcomes in Gaussian systems. Figure 3 depicts the trace distance $\langle D(\rho_t^B,\rho_t^O)\rangle_B$ and the relative entropy $\langle S(\rho_t^O\|\rho_t^B)\rangle_B$ as a function of the efficiency η of B ’s measurement process, illustrating the transition from least accurate perception to most accurate perception and optimal predictive power as η → 1 . Note that, since both the bounds on the trace distance and the relative entropy are independent of the parameters of the model in this example, the transition to the most accurate perception of the system is solely a function of the measurement efficiency. The figures show that most of the knowledge of the state of the system is gained near η ≈ 0 as η increases; this gain decreases for larger values of η . This observation is confirmed by explicit computation using the relative entropy, which satisfies $\frac{d}{d\eta}\langle S(\rho_t^O\|\rho_t^B)\rangle_B = \log\!\left(\frac{1-\sqrt{\eta}}{1+\sqrt{\eta}}\right)\big/\,(4\eta^{3/2})$. Thus, the rate of information gain diverges for η → 0 as a power law, $\frac{d}{d\eta}\langle S(\rho_t^O\|\rho_t^B)\rangle_B = -\left(\frac{1}{2\eta}+\frac{1}{6}\right)+O(\eta)$, while it becomes essentially constant for intermediate values of η . In the transition to the most accurate perception, the effective description of the system changes from a mixed to a pure state, and the rate of information gain diverges as η → 1 as well.
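The closed-form dependence on η can be evaluated directly; a small helper (with the convention $x\log x \to 0$ as $x\to 0$) reproduces the monotonic decrease of the long-time average relative entropy with the measurement efficiency.

```python
import numpy as np

def xlogx(x):
    """x*log(x) with the continuous extension 0*log(0) = 0."""
    return np.where(x > 0, x * np.log(np.maximum(x, 1e-300)), 0.0)

def avg_relative_entropy(eta):
    """Long-time <S(rho^O || rho^B)>_B as a function of efficiency eta."""
    nu = 1.0 / (2.0 * np.sqrt(eta))       # 1/(2 purity), with purity = sqrt(eta)
    return xlogx(nu + 0.5) - xlogx(nu - 0.5)

etas = np.array([0.1, 0.25, 0.5, 0.9, 1.0])
S_vals = avg_relative_entropy(etas)       # decreases monotonically to 0 at eta = 1
```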

4. Discussion

Different levels of information about a system amount to different effective descriptions. We studied these different descriptions for the case of a system monitored by an observer, and compared this agent’s description to those of other agents with restricted access to the measurement outcomes. With continuous measurements as an illustrative case study, we derived bounds on the average trace distance between the states that different agents assign to the system, and obtained exact results for the average quantum relative entropy. The expressions involve only the state assigned by the less knowledgeable agent, providing estimates of the distance to the exact state that this agent can compute without knowing the latter.
The setting presented here has a natural application to the case of a system interacting with an environment. For all practical purposes, one can view the effect of an environment as effectively monitoring the system with which it interacts [28,29]. Without access to the environmental degrees of freedom, the master equation that governs the state of the system takes a Lindblad form with Hermitian operators, as in Equation (4). However, access to the degrees of freedom of the environment can provide information about the state of the system, effectively leading to dynamics governed by Equation (15). Access to a large fraction of the environment leads to dynamics as in Equation (1), providing a complete description of the state of the system by conditioning on the observed state of the environmental degrees of freedom. With this in mind, our results shed light on how much one can improve the description of a given system by incorporating information encoded in an environment [29,30,31,32,33,34,35], as experimentally explored in [36,37]. Note that, since our bounds depend on the state assigned by the agent with less information, the above is independent of the unraveling chosen. It would also be interesting to extend our results and the connections to the dynamics of open systems to more general monitoring dynamics (e.g., non-Hermitian operators or other noise models).
As revealed by the analysis of a continuously monitored harmonic oscillator, a large gain of information about the state of the system already occurs when an agent gains access to a small fraction of the measurement output, whether quantified by the trace distance or by the relative entropy. Our results thus complement the Quantum Darwinism program and related approaches [29,30,31,32,33,34,35], in which one compares the state of a system interacting with an environment to the state of fractions of that environment. While those works focused on the correlation buildup between the system and the environment, we instead address the subjective description that observers assign to the state of the system, conditioned on the information encoded in a given measurement record.

Author Contributions

Formal analysis, L.P.G.-P. and A.d.C.; Investigation, L.P.G.-P. and A.d.C.; Writing—original draft, L.P.G.-P. and A.d.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the John Templeton Foundation, UMass Boston (project P20150000 029279), and DOE grant DE-SC0019515.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Derivation of Bounds to Average Trace Distance

Using Equations (2) and (4) in the main text and that ρ 0 O = ρ 0 A , we find
$1 - \langle\mathrm{Tr}(\rho_T^O\rho_T^A)\rangle = \mathrm{Tr}(\rho_0^O\rho_0^A) - \langle\mathrm{Tr}(\rho_T^O\rho_T^A)\rangle = -\int_0^T \langle d\,\mathrm{Tr}(\rho_t^O\rho_t^A)\rangle$
$= -\int_0^T d\,\mathrm{Tr}(\rho_t^A\rho_t^A) = -2\int_0^T \mathrm{Tr}(\rho_t^A\,\Lambda\rho_t^A)\,dt$
$= 2\sum_\alpha \frac{1}{8\tau_m^\alpha}\int_0^T \mathrm{Tr}\big(\big[A_\alpha,[A_\alpha,\rho_t^A]\big]\,\rho_t^A\big)\,dt$
$= \sum_\alpha \frac{1}{4\tau_m^\alpha}\int_0^T \mathrm{Tr}\big([\rho_t^A,A_\alpha][A_\alpha,\rho_t^A]\big)\,dt.$
This identity can be conveniently expressed in terms of the 2-norm of the commutator $[\rho_t^A, A_\alpha]$ as
$1 - \langle\mathrm{Tr}(\rho_T^O\rho_T^A)\rangle = \sum_\alpha \frac{1}{4\tau_m^\alpha}\int_0^T \big\|[\rho_t^A,A_\alpha]\big\|_2^2\,dt = \sum_\alpha \frac{T}{4\tau_m^\alpha}\,\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2},$
where we denote the time average of a function f by $\bar f \equiv \int_0^T f(t)\,dt/T$. Note that the expression $\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\|[\rho_t^A,A_\alpha]\|_2^2}$ plays the role of a time-averaged decoherence rate [15,16], generalizing Equation (11) in the main text.
This sets alternative bounds on the average distance between the state ρ t A assigned by A and the actual state of the system ρ t O , in terms of the effect of the Lindblad dephasing term acting on the incomplete-knowledge state ρ t A ,
$T\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2} \le \langle D(\rho_T^O,\rho_T^A)\rangle \le \sqrt{T\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2}}.$
A short time analysis provides a sense of the evolution of the upper and lower bounds on the trace distance and how they compare to its variance. To leading order in a Taylor series expansion,
$\mathcal{P}(\rho_\tau^A) \approx 1 + 2\,\mathrm{Tr}(\rho_0^A\,\Lambda\rho_0^A)\,\tau = 1 - \sum_\alpha \frac{1}{4\tau_m^\alpha}\,\mathrm{Tr}\big([\rho_0^A,A_\alpha][A_\alpha,\rho_0^A]\big)\,\tau,$
and one finds
$\tau\sum_\alpha \frac{1}{4\tau_m^\alpha}\big\|[\rho_0^A,A_\alpha]\big\|_2^2 \le \langle D(\rho_\tau^O,\rho_\tau^A)\rangle \le \sqrt{\tau\sum_\alpha \frac{1}{4\tau_m^\alpha}\big\|[\rho_0^A,A_\alpha]\big\|_2^2}.$
Note that the behavior of the trace distance is determined by the timescale over which decoherence occurs.
Using Equation (9) in the main text and Jensen’s inequality, one obtains
$\langle D^2(\rho_T^O,\rho_T^A)\rangle \le 1 - \mathcal{P}(\rho_T^A),$
which implies that the variance $\Delta D_T^2 \equiv \langle D^2(\rho_T^O,\rho_T^A)\rangle - \langle D(\rho_T^O,\rho_T^A)\rangle^2$ satisfies
$\Delta D_T^2 \le \mathcal{P}(\rho_T^A) - \mathcal{P}(\rho_T^A)^2.$
In the short time limit this becomes
$\Delta D_\tau^2 \le -2\,\mathrm{Tr}(\rho_0^A\,\Lambda\rho_0^A)\,\tau.$

Appendix A.2. Derivation of the Average and Variance of the Quantum Relative Entropy

Using that ρ t O is pure, and that the von Neumann entropy is given by $S(\rho) \equiv -\mathrm{Tr}(\rho\log\rho)$, we obtain that the average over the results unknown to agent A satisfies
$\langle S(\rho_t^O\|\rho_t^A)\rangle = \langle\mathrm{Tr}(\rho_t^O\log\rho_t^O)\rangle - \langle\mathrm{Tr}(\rho_t^O\log\rho_t^A)\rangle = 0 - \mathrm{Tr}(\rho_t^A\log\rho_t^A) = S(\rho_t^A).$
This directly relates the average error incurred by assigning the state ρ t A instead of the exact state ρ t O , as quantified by the relative entropy, to the von Neumann entropy of the state accessible to agent A .
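This identity can be checked on a toy, hypothetical ensemble in which all states commute: suppose the unknown outcomes leave the system in |0⟩ with probability 0.7 and in |1⟩ with probability 0.3. The per-outcome relative entropies are then the surprisals $-\log p$, and their average equals $S(\rho^A)$.

```python
import numpy as np

probs = np.array([0.7, 0.3])                       # hypothetical outcome probabilities
# With rho^O pure, S(rho^O || rho^A) = -Tr(rho^O log rho^A); all states diagonal here,
# so the per-outcome relative entropy is the surprisal -log p of the realized outcome.
rho_A = probs                                      # eigenvalues of the averaged state
avg_rel_entropy = (probs * -np.log(rho_A)).sum()   # <S(rho^O || rho^A)>
von_neumann = -(rho_A * np.log(rho_A)).sum()       # S(rho^A)

assert abs(avg_rel_entropy - von_neumann) < 1e-12  # the identity holds
```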
In turn, the variance of the relative entropy satisfies
$\Delta S^2(\rho_t^O\|\rho_t^A) = \langle S^2(\rho_t^O\|\rho_t^A)\rangle - \langle S(\rho_t^O\|\rho_t^A)\rangle^2$
$= \langle[\mathrm{Tr}(\rho_t^O\log\rho_t^A)]^2\rangle - S^2(\rho_t^A)$
$\le \langle\mathrm{Tr}(\rho_t^O)\,\mathrm{Tr}(\rho_t^O\log^2\rho_t^A)\rangle - S^2(\rho_t^A)$
$= \mathrm{Tr}(\rho_t^A\log^2\rho_t^A) - S^2(\rho_t^A),$
using the Cauchy–Schwarz inequality in the third line. Note that this expression is identical to the variance of the operator $-\log\rho_t^A$, which can be thought of as the quantum extension of the notion of “information content” or “surprisal” $-\log p$ in classical information theory.

Appendix A.3. Bounds to the Difference between Perceptions of Multiple Agents

Consider two agents A and B who simultaneously monitor different observables on a system. Each one has access to the measurement outcomes of their devices, but not to the results obtained by the other agent. The states ρ T A and ρ T B that A and B assign to the system differ from the actual pure state ρ T O that corresponds to the complete description of the system. For simplicity let us consider that A monitors a single observable A and B monitors a single observable B. The complete-description state of the system assigned by all-knowing agent O evolves according to
$d\rho_t^O = \mathcal{L}\rho_t^O\,dt + \mathcal{I}[A]\rho_t^O\,dW_t^A + \mathcal{I}[B]\rho_t^O\,dW_t^B,$
with the Lindbladian $\mathcal{L}\rho_t^O \equiv -i[H,\rho_t^O] + \Lambda_A\rho_t^O + \Lambda_B\rho_t^O$, with corresponding dephasing terms for observables A and B. The innovation terms $\mathcal{I}[A]$ and $\mathcal{I}[B]$ are defined as in Equation (3) in the main text, and $dW_t^A$ and $dW_t^B$ are independent noise terms.
The states of both observers satisfy
$d\rho_t^A = \mathcal{L}\rho_t^A\,dt + \mathcal{I}[A]\rho_t^A\,dV_t^A,$
$d\rho_t^B = \mathcal{L}\rho_t^B\,dt + \mathcal{I}[B]\rho_t^B\,dV_t^B.$
Consistency between observers implies that their noises are related to the ones appearing in Equation (A10) by [1,3]:
$dW_t^A = \big(\mathrm{Tr}(\rho_t^A A) - \mathrm{Tr}(\rho_t^O A)\big)\frac{dt}{\sqrt{\tau_m}} + dV_t^A, \qquad dW_t^B = \big(\mathrm{Tr}(\rho_t^B B) - \mathrm{Tr}(\rho_t^O B)\big)\frac{dt}{\sqrt{\tau_m}} + dV_t^B.$
As the state of each observer satisfies Equation (9), the triangle inequality provides the upper bound
$\langle D(\rho_T^A,\rho_T^B)\rangle_{AB} \le \sqrt{1-\mathrm{Tr}[(\rho_T^A)^2]} + \sqrt{1-\mathrm{Tr}[(\rho_T^B)^2]},$
and the lower bound
$\langle D(\rho_T^A,\rho_T^B)\rangle_{AB} \ge \left|\mathrm{Tr}[(\rho_T^A)^2] - \mathrm{Tr}[(\rho_T^B)^2]\right|.$

Appendix A.4. Illustration—Evolution of Limits to Perception

We consider the case of observer O monitoring the spin components $\sigma_j^z$ of a 1D transverse-field Ising model, with the Hamiltonian defined in Equation (18) of the main text. Figure A1 shows the evolution of the average trace distance $\langle D(\rho_T^O,\rho_T^B)\rangle$ between the complete description and B ’s partial one, along with the bounds (16), for different values of the monitoring efficiency η . Figure A2 shows the evolution of the average relative entropy $\langle S(\rho_T^O\|\rho_T^B)\rangle$. The dynamics are simulated by implementing the monitoring process as a sequence of weak measurements modeled by Kraus operators acting on the state of the system. Specifically, the evolution of ρ t O and of the corresponding state ρ t B under partial measurements is obtained numerically by assuming two independent measurement processes, as in [1].
Figure A1. Evolution of the average trace distance and its bounds. Simulated evolution of the average trace distance $\langle D(\rho_T^O,\rho_T^B)\rangle$ between complete and incomplete descriptions for a spin chain initially in a paramagnetic state on which the individual spin components $\sigma_j^z$ are monitored. The simulation corresponds to N = 6 spins, with couplings $J\tau_m = h\tau_m = 1/2$. The upper and lower bounds (16) on the average trace distance are depicted by dashed lines, while the shaded area represents the (one standard deviation) confidence region obtained from the upper bound (13) on the standard deviation in the main text, calculated with respect to the mean distance. For η = 0 (left), agent A , without any access to the measurement outcomes, has the most incomplete description of the system. After gaining access to partial measurement results, with η = 0.5 (center), B gets closer to the complete description of the state of the system. Finally, when η = 0.9 (right), access to enough information provides B with an almost complete description of the state. Importantly, in all cases the agent can bound how far her description is from the complete one solely in terms of the purity $\mathcal{P}(\rho_T^B)$.
Figure A2. Evolution of the average relative entropy and its bounds. Simulated evolution of the average relative entropy $\langle S(\rho_T^O\|\rho_T^B)\rangle$ between complete and incomplete descriptions for a spin chain on which the z components of individual spins are monitored. The shaded area represents the (one standard deviation) confidence region obtained from the upper bound on the standard deviation of the relative entropy, Equation (14) in the main text. As in the case of the trace distance, access to more information leads to a more accurate state assigned by the agent.

Appendix A.5. Illustration—Transition to Complete Descriptions

Consider the case of a one-dimensional harmonic oscillator with position and momentum operators X and P, respectively. We assume agent B is monitoring the position of the harmonic oscillator, with an efficiency η . The dynamics of state ρ t B is dictated by Equation (15) in the main text for the case of a single monitored observable, with
$\Lambda\rho_t^B = -\frac{1}{8\tau_m}\big[X,[X,\rho_t^B]\big]; \qquad \mathcal{I}[X]\rho_t^B = \sqrt{\frac{1}{4\tau_m}}\left(\{X,\rho_t^B\} - 2\,\mathrm{Tr}(X\rho_t^B)\,\rho_t^B\right).$
Such a dynamics preserves the Gaussian property of states. For these, the variances
$v_x \equiv \mathrm{Tr}(\rho_t^B X^2) - \mathrm{Tr}(\rho_t^B X)^2,$
$v_p \equiv \mathrm{Tr}(\rho_t^B P^2) - \mathrm{Tr}(\rho_t^B P)^2,$
and covariance
$c_{xp} \equiv \tfrac{1}{2}\mathrm{Tr}\big(\rho_t^B\{X,P\}\big) - \mathrm{Tr}(\rho_t^B X)\,\mathrm{Tr}(\rho_t^B P),$
satisfy the following set of differential equations (in natural units) [1,21]:
$\frac{d}{dt}v_x = 2\omega c_{xp} - \frac{\eta}{\tau_m}v_x^2,$
$\frac{d}{dt}v_p = -2\omega c_{xp} + \frac{1}{4\tau_m} - \frac{\eta}{\tau_m}c_{xp}^2,$
$\frac{d}{dt}c_{xp} = \omega v_p - \omega v_x - \frac{\eta}{\tau_m}v_x c_{xp}.$
While the first moments do evolve stochastically, the second moments above satisfy a set of deterministic coupled differential equations. This in turn implies that the purity of the state, which can be obtained from the covariance matrix [22,23,24,25,26]
$\sigma(t) = \begin{pmatrix} v_x & c_{xp} \\ c_{xp} & v_p \end{pmatrix}$
as
$\mathcal{P}(\rho_T^B) = \frac{1}{2\sqrt{\det\sigma(t)}},$
evolves deterministically as well.
The solution for long times can be derived from Equations (A20), giving
$c_{xp}^{ss} = \frac{-\omega\tau_m \pm \sqrt{\omega^2\tau_m^2 + \eta/4}}{\eta},$
$v_x^{ss} = \sqrt{\frac{2\omega\tau_m}{\eta}\,c_{xp}^{ss}},$
$v_p^{ss} = v_x^{ss}\left(1 + \frac{\eta}{\omega\tau_m}\,c_{xp}^{ss}\right),$
where the positive root of $c_{xp}^{ss}$ is selected so that the covariances are real and positive,
which provides the long-time asymptotic value of the purity as a function of the measurement efficiency. The latter turns out to have the following simple expression
$\mathcal{P}(\rho_T^B) = \frac{1}{2\sqrt{v_x^{ss}v_p^{ss} - (c_{xp}^{ss})^2}} = \frac{1}{2\sqrt{\frac{2\omega\tau_m}{\eta}c_{xp}^{ss}\left(1+\frac{\eta}{\omega\tau_m}c_{xp}^{ss}\right) - (c_{xp}^{ss})^2}} = \frac{1}{2\sqrt{\frac{2\omega\tau_m}{\eta}c_{xp}^{ss} + (c_{xp}^{ss})^2}}$
$= \frac{1}{2\sqrt{\frac{\tau_m}{\eta}\left(\frac{1}{4\tau_m} - \frac{\eta}{\tau_m}(c_{xp}^{ss})^2\right) + (c_{xp}^{ss})^2}} = \frac{1}{2\sqrt{\frac{1}{4\eta}}} = \sqrt{\eta}.$
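As a cross-check, an illustrative numerical sketch (with assumed parameter values ω = τ_m = 1, η = 0.4) that forward-integrates Equations (A20) reproduces the long-time purity √η:

```python
import numpy as np

omega, tau_m, eta = 1.0, 1.0, 0.4     # illustrative parameters
dt, steps = 1e-3, 40000               # integrate to t = 40, well past transients

vx, vp, cxp = 0.5, 0.5, 0.0           # vacuum covariances as initial condition
for _ in range(steps):
    dvx = 2 * omega * cxp - (eta / tau_m) * vx**2
    dvp = -2 * omega * cxp + 1 / (4 * tau_m) - (eta / tau_m) * cxp**2
    dcxp = omega * vp - omega * vx - (eta / tau_m) * vx * cxp
    vx, vp, cxp = vx + dvx * dt, vp + dvp * dt, cxp + dcxp * dt

purity = 1 / (2 * np.sqrt(vx * vp - cxp**2))   # approaches sqrt(eta)
```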
Using that
$1 - \mathcal{P}(\rho_T^B) \le \langle D(\rho_T^O,\rho_T^B)\rangle_B \le \sqrt{1-\mathcal{P}(\rho_T^B)},$
then implies
$1 - \sqrt{\eta} \le \langle D(\rho_T^O,\rho_T^B)\rangle_B \le \sqrt{1-\sqrt{\eta}}.$
The entropy of a 1-mode Gaussian state can be expressed in terms of the purity of the state as
$S(\rho_T^B) = \left(\frac{1}{2\mathcal{P}(\rho_T^B)}+\frac{1}{2}\right)\log\left(\frac{1}{2\mathcal{P}(\rho_T^B)}+\frac{1}{2}\right) - \left(\frac{1}{2\mathcal{P}(\rho_T^B)}-\frac{1}{2}\right)\log\left(\frac{1}{2\mathcal{P}(\rho_T^B)}-\frac{1}{2}\right).$
Then, using that $\langle S(\rho_t^O\|\rho_t^B)\rangle_B = S(\rho_t^B)$ and Equation (A25), we obtain that for long times,
$\langle S(\rho_t^O\|\rho_t^B)\rangle_B = S(\rho_T^B) = \left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right) - \left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right).$

References

1. Jacobs, K.; Steck, D. A straightforward introduction to continuous quantum measurement. Contemp. Phys. 2006, 47, 279–303.
2. Wiseman, H.M.; Milburn, G.J. Quantum Measurement and Control; Cambridge University Press: Cambridge, UK, 2009.
3. Jacobs, K. Quantum Measurement Theory and Its Applications; Cambridge University Press: Cambridge, UK, 2014.
4. Murch, K.W.; Weber, S.J.; Macklin, C.; Siddiqi, I. Observing single quantum trajectories of a superconducting quantum bit. Nature 2013, 502, 211–214.
5. Devoret, M.H.; Schoelkopf, R.J. Superconducting Circuits for Quantum Information: An Outlook. Science 2013, 339, 1169–1174.
6. Weber, S.J.; Chantasri, A.; Dressel, J.; Jordan, A.N.; Murch, K.W.; Siddiqi, I. Mapping the optimal route between two quantum states. Nature 2014, 511, 570.
7. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information: 10th Anniversary Edition; Cambridge University Press: Cambridge, UK, 2010.
8. Wilde, M.M. Quantum Information Theory; Cambridge University Press: Cambridge, UK, 2013.
9. Watrous, J. The Theory of Quantum Information; Cambridge University Press: Cambridge, UK, 2018.
10. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
11. Hiai, F.; Petz, D. The proper formula for relative entropy and its asymptotics in quantum probability. Commun. Math. Phys. 1991, 143, 99–114.
12. Ogawa, T.; Nagaoka, H. Strong converse and Stein’s lemma in quantum hypothesis testing. In Asymptotic Theory of Quantum Statistical Inference: Selected Papers; World Scientific: Singapore, 2005; pp. 28–42.
13. Schumacher, B.; Westmoreland, M.D. Relative entropy in quantum information theory. Contemp. Math. 2002, 305, 265–290.
14. Vedral, V. The role of relative entropy in quantum information theory. Rev. Mod. Phys. 2002, 74, 197–234.
15. Chenu, A.; Beau, M.; Cao, J.; del Campo, A. Quantum Simulation of Generic Many-Body Open System Dynamics Using Classical Noise. Phys. Rev. Lett. 2017, 118, 140403.
16. Beau, M.; Kiukas, J.; Egusquiza, I.L.; del Campo, A. Nonexponential Quantum Decay under Environmental Decoherence. Phys. Rev. Lett. 2017, 119, 130401.
17. Barchielli, A. Entropy and information gain in quantum continual measurements. In Quantum Communication, Computing, and Measurement 3; Springer: Berlin/Heidelberg, Germany, 2002; pp. 49–57.
18. Barchielli, A.; Gregoratti, M. Quantum Trajectories and Measurements in Continuous Time: The Diffusive Case; Springer: Berlin/Heidelberg, Germany, 2009; Volume 782.
19. Zurek, W.H. Decoherence and the Transition from Quantum to Classical. Phys. Today 1991, 44, 36–44.
20. Schlosshauer, M.A. Decoherence: And the Quantum-to-Classical Transition; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
21. Doherty, A.C.; Jacobs, K. Feedback control of quantum systems using continuous state estimation. Phys. Rev. A 1999, 60, 2700–2711.
22. Paris, M.G.A.; Illuminati, F.; Serafini, A.; De Siena, S. Purity of Gaussian states: Measurement schemes and time evolution in noisy channels. Phys. Rev. A 2003, 68, 012314.
23. Ferraro, A.; Olivares, S.; Paris, M. Gaussian States in Quantum Information; Napoli Series on Physics and Astrophysics; Bibliopolis: Pittsburgh, PA, USA, 2005.
24. Wang, X.B.; Hiroshima, T.; Tomita, A.; Hayashi, M. Quantum information with Gaussian states. Phys. Rep. 2007, 448, 1–111.
25. Weedbrook, C.; Pirandola, S.; García-Patrón, R.; Cerf, N.J.; Ralph, T.C.; Shapiro, J.H.; Lloyd, S. Gaussian quantum information. Rev. Mod. Phys. 2012, 84, 621–669.
26. Adesso, G.; Ragy, S.; Lee, A.R. Continuous variable quantum information: Gaussian states and beyond. Open Syst. Inf. Dyn. 2014, 21, 1440001.
27. Laverick, K.T.; Chantasri, A.; Wiseman, H.M. Quantum State Smoothing for Linear Gaussian Systems. Phys. Rev. Lett. 2019, 122, 190402.
28. Schlosshauer, M. Decoherence, the measurement problem, and interpretations of quantum mechanics. Rev. Mod. Phys. 2005, 76, 1267–1305.
29. Zurek, W.H. Quantum Darwinism. Nat. Phys. 2009, 5, 181.
30. Zwolak, M.; Quan, H.T.; Zurek, W.H. Redundant imprinting of information in nonideal environments: Objective reality via a noisy channel. Phys. Rev. A 2010, 81, 062110.
31. Riedel, C.J.; Zurek, W.H.; Zwolak, M. The rise and fall of redundancy in decoherence and quantum Darwinism. New J. Phys. 2012, 14, 083010.
32. Zwolak, M.; Zurek, W.H. Complementarity of quantum discord and classically accessible information. Sci. Rep. 2013, 3, 1729.
33. Brandão, F.G.S.L.; Piani, M.; Horodecki, P. Generic emergence of classical features in quantum Darwinism. Nat. Commun. 2015, 6, 7908.
34. Horodecki, R.; Korbicz, J.K.; Horodecki, P. Quantum origins of objectivity. Phys. Rev. A 2015, 91, 032122.
35. Le, T.P.; Olaya-Castro, A. Strong Quantum Darwinism and Strong Independence are Equivalent to Spectrum Broadcast Structure. Phys. Rev. Lett. 2019, 122, 010403.
36. Ciampini, M.A.; Pinna, G.; Mataloni, P.; Paternostro, M. Experimental signature of quantum Darwinism in photonic cluster states. Phys. Rev. A 2018, 98, 020101.
37. Chen, M.C.; Zhong, H.S.; Li, Y.; Wu, D.; Wang, X.L.; Li, L.; Liu, N.L.; Lu, C.Y.; Pan, J.W. Emergence of classical objectivity of quantum Darwinism in a photonic quantum simulator. Sci. Bull. 2019, 64, 580–585.
Figure 1. Illustration of the varying degrees of perception by different agents. The amount of information that an agent possesses of a system can drastically alter its perception, as the expectations of outcomes for measurements performed on the system can differ. (a) The state ρ t O assigned by omniscient agent O , who has full access to the measurement outcomes, corresponds to a complete pure-state description of the system. O thus has the most accurate predictive power. (b) An agent A completely ignorant of measurement outcomes possesses the most incomplete description of the system. (c) A continuous transition between the two descriptions, corresponding to the worst and most complete perceptions of the system respectively, is obtained by considering an agent B with partial access to the measurement outcomes of the monitoring process.
Figure 2. Evolution of the average relative entropy. Simulated evolution of the average S ρ t O | | ρ t B = S ρ t B of the relative entropy between complete and incomplete descriptions for a spin chain initially in a paramagnetic state on which individual spin components σ j z are monitored. Here · denotes an average over all measurement outcomes, and ρ t B = ρ t O B is the state assigned by agent B after discarding the outcomes unknown to him. The simulation corresponds to N = 6 spins, with couplings J τ m = h τ m = 1 / 2 . For η = 0 (black continuous curve), agent A , without any access to the measurement outcomes, has the most incomplete description of the system. For η = 0.5 (red dashed curve), B gets closer to the complete description of the state of the system, after gaining access to partial measurement results. Finally, when η = 0.9 (blue dotted curve), access to enough information provides B with an almost complete description of the state. Importantly, in all cases the agent can estimate how far the description possessed is from the complete one solely in terms of the entropy S ( ρ t B ) .
Figure 3. Transition between levels of perception. Bounds on the average trace distance (left) and average relative entropy (right) as a function of measurement efficiency for a harmonic oscillator undergoing monitoring of its position. For such a system the purity of the state ρ t B depends solely on the measurement efficiency with which observer B monitors the system. This illustrates the transition from complete ignorance of the outcomes of measurements performed ( η = 0 ) to the most complete description as η 1 —the situation with the most accurate perception. Efficient use of information happens when a small fraction of the measurement output is incorporated at η 1 , as then both D ρ t B , ρ t O and the relative entropy S ρ t O | | ρ t B decay rapidly.