Opinion
Peer-Review Record

Carving Nature at Its Joints: A Comparison of CEMI Field Theory with Integrated Information Theory and Global Workspace Theory

Entropy 2023, 25(12), 1635; https://doi.org/10.3390/e25121635
by Johnjoe McFadden
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 6 August 2023 / Revised: 29 November 2023 / Accepted: 6 December 2023 / Published: 8 December 2023
(This article belongs to the Special Issue Integrated Information Theory and Consciousness II)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

 

1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

 

2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

(i) There is a misunderstanding here about Phi not being intrinsic. It is not the case that, as you keep expanding the size of the system for which you measure Phi, the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of two humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus, according to IIT, a city or a county would not be conscious in its own right.

(ii) Key to the observer independence constraint, there must be an observer-independent graining, in space and time and in the set of states of the individual components, from which the true value of Phi derives. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrinsic perspective, Giulio Tononi argues that just as the planets don’t need to calculate their own trajectories through space, nature doesn’t need to calculate Phi; the consciousness associated with the intrinsic value simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

(iii) Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Line 280-282. Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia; a drop of several orders of magnitude is sufficient.

 

3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

 

4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure, is certainly a correlate of the loss of consciousness.

 

5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

 

6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160

 

7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

 

Typos:

Line 133. Giulio is misspelled.

Line 238. Barrett is misspelled.

 

Author Response

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

(i) There is a misunderstanding here about Phi not being intrinsic. It is not the case that, as you keep expanding the size of the system for which you measure Phi, the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of two humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus, according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of phi only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear so I have replaced the previous text with the argument (lines 252 – 258) “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense. “

(ii) Key to the observer independence constraint, there must be an observer-independent graining, in space and time and in the set of states of the individual components, from which the true value of Phi derives. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrinsic perspective, Giulio Tononi argues that just as the planets don’t need to calculate their own trajectories through space, nature doesn’t need to calculate Phi; the consciousness associated with the intrinsic value simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii) Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267-271) to make this clear.
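To give a concrete sense of what such toy-model computations look like in practice, here is a minimal, purely illustrative sketch: a hypothetical three-node binary network specified by an invented state-by-node transition probability matrix and connectivity matrix, evaluated with the publicly available PyPhi package (the reference implementation of IIT 3.0), assuming its documented Network/Subsystem/compute.phi interface.

```python
# Minimal sketch: Phi for a hypothetical three-node binary network using PyPhi.
# The TPM and connectivity matrix below are invented purely for illustration.
import numpy as np
import pyphi

# State-by-node TPM: 2**3 = 8 rows (current network states), 3 columns
# (probability that each node is ON at the next time step).
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 0],
])

# Connectivity matrix: cm[i, j] = 1 if node i sends a connection to node j.
cm = np.array([
    [0, 0, 1],
    [1, 0, 1],
    [1, 1, 0],
])

network = pyphi.Network(tpm, cm=cm)
state = (1, 0, 0)                                       # current state of the nodes
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))  # candidate complex

print(pyphi.compute.phi(subsystem))                     # big Phi of this toy system
```

Even for this three-node example the computation enumerates partitions of the candidate system, which is why, as the reviewer notes, the approach does not scale beyond imaginary discrete-time binary universes.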

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Phi is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or IIT colleagues that there is some minimal value of Phi that is necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory, and although some systems, such as toasters, may be described as ‘ontological dust’ as they will likely have very low values of Phi, it surely cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, which are likely to have very high Phi values, appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia; a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Phi required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure, is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript (lines 137–140).
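For readers who want to check the figure quoted above, the 13 μV threshold follows directly from multiplying the estimated thermal-noise field strength by the stated membrane thickness:

$$ V = E \times d = 2600\ \mathrm{V\,m^{-1}} \times 5 \times 10^{-9}\ \mathrm{m} = 1.3 \times 10^{-5}\ \mathrm{V} = 13\ \mu\mathrm{V} $$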

  6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

  7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). If this were the first exposition of CEMI I would support the publication, but the author has done an admirable job of describing his theory in several prior publications. Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness: Matter or EMF?", Frontiers in Human Neuroscience, January 2023. I could not find anything about CEMI that was novel in this paper. What then of the comparison with IIT and GNWT? The main argument seems to be that IIT and GNWT lead to panpsychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain. The comparison only works if the author gives his preferred theory every benefit of the doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in my response to Reviewer 1, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text to emphasize this point in lines 459 – 462, pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
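As a rough, hypothetical illustration of the kind of entropy-based EEG measure referred to here (not any specific clinical algorithm), the sketch below computes the spectral entropy of two synthetic signals: a broadband, desynchronised signal and a single dominant rhythm of the sort seen in hypersynchronised seizure activity. The sampling rate and signals are invented for illustration.

```python
# Toy illustration: spectral entropy (in bits) of a broadband signal versus a
# single hypersynchronised rhythm. Synthetic data only; not a clinical measure.
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalised power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]                 # drop empty bins to avoid log(0)
    return -np.sum(psd * np.log2(psd))

fs = 256                               # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

desynchronised = rng.standard_normal(t.size)   # broadband, information-rich
seizure_like = np.sin(2 * np.pi * 3 * t)       # dominant 3 Hz rhythm

print(spectral_entropy(desynchronised))   # high entropy
print(spectral_entropy(seizure_like))     # much lower entropy
```

The point of the toy example is only that a signal dominated by one synchronous rhythm carries far less spectral information than a desynchronised one.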

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text (lines 195-198): “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “at the very least, be supported by evidence”.

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’, Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered, and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers has argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that both are good at measuring degrees of consciousness.
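As a minimal sketch of the kind of proxy measure under discussion (mutual information rather than Φ itself), the code below estimates mutual information between two synthetic signals from a joint histogram; the signals and parameters are invented for illustration only.

```python
# Toy estimate of mutual information between two signals (in bits).
# Purely illustrative; a histogram-based proxy, not Phi.
import numpy as np

def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                   # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    nz = pxy > 0                                # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.standard_normal(10_000)
coupled = a + 0.3 * rng.standard_normal(10_000)    # strongly coupled to a
independent = rng.standard_normal(10_000)          # unrelated to a

print(mutual_information(a, coupled))      # relatively high
print(mutual_information(a, independent))  # close to zero
```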

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on non-provable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly, a particle either is electrically charged or it is not; it either possesses mass or it does not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is no time or space between a mountain in Australia and the chair beneath me, so, from the reference frame of the field, all of the information is present in the field at the point where I sit on the chair, though with amplitude diminishing with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed, for example, a Fourier transform will decompose a field into its components, but I don’t feel that means that the field itself is decomposed. But these are subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer to predict the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration where all the components of the information are physically integrated at each point in the space of the field.”

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  1. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of phi only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear so I have replaced the previous text with the argument (lines 252 – 258) “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense. “

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have made amended the text (lines Line 267-271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of phi is required for consciousness. I would be grateful is the reviewer refers me to a statement by Tononi or IIT colleagues that states that there is some minimal value of Phi that is necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory and although some systems, such as toasters may be described as ‘ontological dust’ as they will likely have very low values of phi, it surely cannot explain why highly-interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Phi values appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t’ believe Tononi has proposed any minimal value of Phi required for consciousness so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  1. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript, lines 137 – 140.
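For the record, the arithmetic behind the quoted 13 μV figure is simply the estimated noise field strength multiplied by the membrane thickness:

\[
V_{\text{noise}} = E_{\text{noise}} \times d = 2.6\times10^{3}\ \text{V/m} \times 5\times10^{-9}\ \text{m} = 1.3\times10^{-5}\ \text{V} = 13\ \mu\text{V},
\]

which, as the quoted passage notes, is well below the several-millivolt transmembrane signal expected from the brain’s endogenous EM fields.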

6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications. Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness: Matter or EMF?", Frontiers in Human Neuroscience, January 2023. I could not find anything about CEMI that was novel in this paper. What then of the comparison with IIT and GNWT? The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain. The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

• Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As mentioned in that response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109, and added additional text in lines 459 – 462 to emphasize this point, noting the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
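To illustrate the kind of entropy-based measure referred to here, the sketch below computes a generic spectral entropy (an illustration only, not the specific diagnostic measure cited in the manuscript): a hypersynchronized, seizure-like signal concentrates its power in a narrow band and so yields low entropy, whereas a desynchronized signal spreads its power across frequencies and yields high entropy.

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy (in bits) of the signal's normalized power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()          # treat the spectrum as a probability distribution
    psd = psd[psd > 0]             # drop empty bins to avoid log(0)
    return float(-np.sum(psd * np.log2(psd)))

t = np.arange(0, 10, 1 / 256)                    # 10 s of signal sampled at 256 Hz
seizure_like = np.sin(2 * np.pi * 3 * t)         # power locked into a single 3 Hz rhythm
desynchronized = np.random.randn(t.size)         # power spread across many frequencies

print(spectral_entropy(seizure_like))            # low: highly ordered, information-poor
print(spectral_entropy(desynchronized))          # high: information-rich
```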

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text (lines 195 – 198): “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should, “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain and have modified the text to: “Human consciousness is perceived to be the driver of what we call ‘free will’ (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

“IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that both are good at measuring degrees of consciousness.
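As a concrete illustration of the kind of proxy under discussion, the sketch below computes plain mutual information between the binary state sequences of two halves of a hypothetical system. This is not Φ, only the simpler quantity discussed above, and it behaves as one would expect: near zero when the halves are independent, well above zero when they are strongly coupled.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

rng = np.random.default_rng(0)
half_a = rng.integers(0, 2, 10_000)                        # binary states of one half of a toy system
coupled = half_a ^ (rng.random(10_000) < 0.1).astype(int)  # other half copies it, with 10% noise flips
independent = rng.integers(0, 2, 10_000)                   # an unconnected half

print(mutual_information(half_a, coupled))                 # ~0.5 bits: strongly coupled halves
print(mutual_information(half_a, independent))             # ~0 bits: no shared information
```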

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely, but it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly, a particle is either electrically charged or not; it either possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold to higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed if there is to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the reference frame of the field, there is neither time nor space between a mountain in Australia and the chair beneath me, so all of the information is present in the field at the point where I sit on the chair, though with an amplitude that diminishes with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information encoded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

Response: Indeed, a Fourier transform, for example, will decompose a field into its frequency components, but I don’t think that means that the field itself is decomposed. But these are subtle points.
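To make this point concrete, the sketch below (purely illustrative, and about sampled signals rather than brain fields) decomposes a summed waveform into its Fourier components and reconstructs it exactly: the decomposition is a change of mathematical representation, while the value of the summed field at each point remains a single physical quantity.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
field = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)   # two superposed components

components = np.fft.rfft(field)                   # decomposition into frequency components
reconstructed = np.fft.irfft(components, n=field.size)

print(np.allclose(field, reconstructed))          # True: the representation changes, the field does not
```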

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note, however, the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer predicting the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
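A deliberately crude sketch of the quoted example (a hypothetical pipeline, not code from the manuscript): however much data enters the computation, the final gate combines only the couple of bits presented to it at that moment, so its integration of the earlier evidence is spread across time through the preceding stages rather than being physically present at a single point.

```python
import random

readings = [random.random() for _ in range(10_000)]     # plenty of upstream 'sensor' data

humidity_high = sum(readings[:5_000]) / 5_000 > 0.5     # one stage reduces 5,000 values to 1 bit
pressure_low = sum(readings[5_000:]) / 5_000 > 0.5      # another stage: 5,000 values -> 1 bit

rain = humidity_high and pressure_low                   # the 'last logic gate' sees only 2 bits
print("rain" if rain else "no rain")
```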

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of Phi, only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear, so I have replaced the previous text with the argument (lines 252 – 258): “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed, then S1 would regain consciousness. This doesn’t seem to make sense.”

(ii) Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrinsic perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have made amended the text (lines Line 267-271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of phi is required for consciousness. I would be grateful is the reviewer refers me to a statement by Tononi or IIT colleagues that states that there is some minimal value of Phi that is necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory and although some systems, such as toasters may be described as ‘ontological dust’ as they will likely have very low values of phi, it surely cannot explain why highly-interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Phi values appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t’ believe Tononi has proposed any minimal value of Phi required for consciousness so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  1. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  1. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  1. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I add a line to make this point in the revised manuscript, lines lines 137 – 140.

  1. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape theory,  or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi and Winters papers in the revised manuscript (lines 548 and 557).

  1. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  1. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  2. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.·      Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in Response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text to emphasize this point in lines 459 – 462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text lines 195 – 198  “We tend to infer that objects that behave somewhat ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “ at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

 So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocrotical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points.  However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’ I cannot see how a singular consciousness for the brain is avoidable. I note that the none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon: it has no grades. Similarly, a particle either is electrically charged or it is not; it either possesses mass or it does not. Both this point and the preceding one pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed, or still propose, animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, the Phi of a brain during non-REM sleep or coma is most likely non-zero, but it is many, many orders of magnitude smaller than Phi during the awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed if there is to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: This is an interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is no time nor space between a mountain in Australia and the chair beneath me; so, from the reference frame of the field, all of the information is present in the field at the point where I sit on the chair, though with amplitude diminishing according to the inverse square of distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.
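The sense of field integration intended here can be put numerically with a toy sketch (an illustration only, using the inverse-square weighting described above for a scalar stand-in): the value of the field at any single point is a superposition over every source, so each source makes a contribution at that point that is attenuated with distance but never exactly zero.

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy sources scattered in 3-D space, each with its own strength.
    positions = rng.uniform(-1.0, 1.0, size=(50, 3))
    strengths = rng.uniform(0.5, 1.5, size=50)

    def field_at(point):
        # Superposition: every source contributes, weighted by 1/r^2.
        r = np.linalg.norm(positions - point, axis=1)
        return np.sum(strengths / r**2)

    probe = np.array([2.0, 0.0, 0.0])
    print(field_at(probe))
    # Even the most distant source still contributes a small, non-zero amount:
    contributions = strengths / np.linalg.norm(positions - probe, axis=1)**2
    print(contributions.min() > 0)   # True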

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information encoded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed; for example, a Fourier transform will decompose a field into its components, but I do not feel that this means the field itself is decomposed. These are, however, subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245
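A minimal sketch of the decomposition referred to in the response above (using a one-dimensional signal as a stand-in for a single field component, analysed with an ordinary discrete Fourier transform) shows how the same waveform can be re-expressed as a sum of independent frequency components; whether that means the field itself is decomposed remains the interpretive question raised above.

    import numpy as np

    fs = 1000.0                          # sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    # A toy 'field' built from two independent oscillations plus a little noise.
    signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    signal += 0.1 * np.random.default_rng(2).standard_normal(t.size)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    # The transform re-expresses the signal as a sum of independent components;
    # the two strongest peaks sit at the original 10 Hz and 40 Hz oscillations.
    peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
    print(sorted(peaks.tolist()))        # [10.0, 40.0]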

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time (29). For example, the last logic gate in a computer used to predict the weather, the gate that outputs “rain”, does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
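The ‘last logic gate’ point in the added text can be made concrete with a toy sketch (purely illustrative; the gate names and inputs are invented for the example): the final gate’s output depends, causally and over time, on the whole upstream computation, yet at the moment it fires it physically combines only the two bits sitting on its inputs.

    # Toy 'weather predictor': several input bits feed a small cascade of gates,
    # but the final gate only ever sees two bits at the moment it produces "rain".
    def and_gate(x, y): return x & y
    def or_gate(x, y):  return x | y

    def predict_rain(humidity_high, pressure_falling, clouds, wind_onshore):
        front_likely = and_gate(pressure_falling, wind_onshore)   # intermediate bit
        moisture     = and_gate(humidity_high, clouds)            # intermediate bit
        # The last gate integrates just these two bits, not the four original inputs;
        # the rest of the information was integrated upstream, i.e. across time.
        return or_gate(front_likely, moisture)

    print(predict_rain(1, 1, 0, 1))   # 1, i.e. "rain", decided by a single two-input gate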

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate the electrical activity of neurons from the EM fields that they generate, so I have never suggested that EEG or MEG are superior to measurements of synaptic activity. My case for placing consciousness in the EM field, rather than in the matter of the brain, rests on other arguments, such as the field’s ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.
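The reviewer’s ‘jumbled voices’ point can be illustrated with a toy count (an illustration of the trade-off being conceded here, not an argument either way): summing many spike trains into a single mean-field value maps many distinct neuron-level patterns onto the same number, so per-neuron identity is lost even though the aggregate remains informative about overall activity.

    import numpy as np
    from itertools import product

    n_neurons = 8
    # Every possible pattern of 8 neurons firing (1) or silent (0): 256 distinct states.
    patterns = np.array(list(product([0, 1], repeat=n_neurons)))
    # A crude 'mean field' sees only the summed activity of the whole population.
    mean_field = patterns.sum(axis=1)

    print(len(patterns))            # 256 distinct neuron-level patterns
    print(len(set(mean_field)))     # only 9 distinguishable mean-field values
    print(np.sum(mean_field == 4))  # 70 different patterns collapse onto the value 4

Synaptic wiring, by contrast, keeps those patterns separable over the same distances, which is the advantage of matter-based processing conceded in the response above.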

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of Φ, only that the boundaries of the system are not defined within IIT. However, I accept that this was unclear, so I have replaced the previous text with the argument (lines 252-258): “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed, then S1 would regain consciousness. This doesn’t seem to make sense.”
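Since the rewritten passage leans on the exclusion axiom, a toy sketch of the selection rule it invokes may help (a caricature with invented element sets and made-up Φ values, not an IIT calculation): among overlapping candidate sets, only the set with maximal Φ is retained, and any candidate overlapping a retained set is excluded.

    # Candidate element sets with illustrative (made-up) Phi values.
    candidates = [
        ({"A", "B"},           0.8),
        ({"B", "C"},           0.5),
        ({"A", "B", "C", "D"}, 0.3),
        ({"E", "F"},           0.6),   # disjoint from all the others
    ]

    # Exclusion-style selection: keep the highest-Phi set first, then discard
    # any remaining candidate that overlaps a set that has already been kept.
    kept = []
    for elements, phi in sorted(candidates, key=lambda c: c[1], reverse=True):
        if all(elements.isdisjoint(k) for k, _ in kept):
            kept.append((elements, phi))

    print(kept)   # the {A, B} complex and the disjoint {E, F} complex survive

Connecting or severing the link between two such systems changes which candidate sets overlap, and therefore which of them ‘owns’ the consciousness, which is the source of the counter-intuitive switching described in the rewritten passage.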

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267-271) to make this clear.
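The reviewer’s description of the toy setting in which Φ-like quantities have actually been computed can be made concrete with a crude sketch: two binary elements evolving in discrete time under a known, deterministic transition rule. The quantity below is a simple whole-minus-parts mutual-information proxy under a uniform (‘maximum-entropy’) distribution over current states; it illustrates the style of calculation only, not the Φ of any published version of IIT.

    from collections import Counter
    from math import log2

    def mi(xs, ys):
        # Plug-in mutual information (bits) from paired samples of discrete variables.
        n = len(xs)
        pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
        return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def integration_proxy(update):
        # Enumerate all current states uniformly, apply the deterministic update,
        # and compare whole-system predictability with the sum over the two parts.
        states = [(a, b) for a in (0, 1) for b in (0, 1)]
        nexts = [update(a, b) for a, b in states]
        whole = mi(states, nexts)
        part_a = mi([a for a, _ in states], [na for na, _ in nexts])
        part_b = mi([b for _, b in states], [nb for _, nb in nexts])
        return whole - (part_a + part_b)

    swap = lambda a, b: (b, a)   # each element copies the *other* element
    copy = lambda a, b: (a, b)   # each element copies itself (no interaction)
    print(integration_proxy(swap))   # 2.0 bits: the whole carries more than its parts
    print(integration_proxy(copy))   # 0.0 bits: the parts already explain everything

Scaling this style of calculation beyond a handful of binary elements quickly becomes intractable, which is the reviewer’s point about computability.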

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Φ is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or IIT colleagues that there is some minimal value of Φ necessary for consciousness. How high Φ needs to be to confer consciousness is, as far as I am aware, never explained in the theory and, although some systems such as toasters may be described as ‘ontological dust’ because they will likely have very low values of Φ, this surely cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Φ values appear to be nonconscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Φ required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure, is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript, lines 137–140.
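For the reader’s convenience, the conversion quoted above is simply the field strength multiplied by the membrane thickness: 2,600 V/m × 5 × 10⁻⁹ m = 1.3 × 10⁻⁵ V = 13 μV, which is indeed well below the millivolt-scale transmembrane signals expected from the brain’s endogenous fields.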

  6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

  7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.
• Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, as in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As mentioned in my response to Reviewer 1, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449-460. I have added a note to point this out in line 109 and additional text to emphasize this point in lines 459-462, pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
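As a minimal sketch of the kind of entropy measure alluded to here (a generic spectral-entropy estimate, i.e. the Shannon entropy of a normalised power spectrum, rather than any particular clinical index), the following toy calculation shows that a hypersynchronous, nearly periodic signal yields a much lower value than a desynchronised one:

    import numpy as np

    def spectral_entropy(signal):
        # Shannon entropy (bits) of the normalised power spectrum of a signal.
        power = np.abs(np.fft.rfft(signal)) ** 2
        p = power / power.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(3)
    t = np.arange(0, 10, 1 / 250.0)          # 10 s sampled at 250 Hz

    # Hypersynchronous, seizure-like regime: one dominant 3 Hz rhythm.
    synchronous = np.sin(2 * np.pi * 3 * t)
    # Desynchronised regime: many weakly coordinated oscillations summed together.
    desynchronised = sum(np.sin(2 * np.pi * rng.uniform(1, 40) * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(40))

    print(spectral_entropy(synchronous))      # close to 0 bits: power piled into one band
    print(spectral_entropy(desynchronised))   # several bits: power spread across many bands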

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text (lines 195-198): “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should, “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what all living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' than with 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain, and I have modified the text to read: “Human consciousness is perceived to be the driver of what we call ‘free will’ (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agreed. I have changed the text to read: “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44


Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of Φ, only that the boundaries of the system are not defined within IIT. However, I accept that this was unclear, so I have replaced the previous text with the argument (lines 252–258): “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed, S1 would regain consciousness. This doesn’t seem to make sense.”
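
To make the bookkeeping of this thought experiment explicit, here is a toy sketch of the exclusion rule only; the Φ values are arbitrary placeholders and no causal structure is modelled, so this illustrates the argument above rather than any part of IIT's actual Φ calculus:

```python
# Toy bookkeeping for the exclusion rule quoted above. Phi values are arbitrary
# placeholders, not computed from any causal model.

def conscious(phi, overlapping_pairs):
    """A candidate system is conscious unless it overlaps a candidate with higher Phi."""
    return [
        s for s, p in phi.items()
        if not any(phi[o] > p
                   for o in phi
                   if o != s and {s, o} in overlapping_pairs)
    ]

phi = {"S1": 5.0, "S2": 9.0}          # two initially isolated AI systems

print(conscious(phi, []))             # ['S1', 'S2'] -> both independently conscious
print(conscious(phi, [{"S1", "S2"}])) # ['S2']       -> connected: S1 loses consciousness
print(conscious(phi, []))             # ['S1', 'S2'] -> links severed: S1 regains it
```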

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267–271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Φ is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or his IIT colleagues that some minimal value of Φ is necessary for consciousness. How high Φ needs to be to confer consciousness is, as far as I am aware, never explained in the theory. Although some systems, such as toasters, may be described as ‘ontological dust’ because they will likely have very low values of Φ, this cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, which are likely to have very high Φ values, appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the article by Merker, Williford and Rudrauf which, I believe, completely undermines the heuristic case for the cerebellum supporting only a low value of any Φ-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Φ required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining the correlates of consciousness should be the first step for any theory of consciousness, as I point out in line 45. There is clearly a range of neural synchronization in the brain, and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining. I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449–460.

5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript (lines 137–140).
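
As a quick check of the arithmetic quoted above (a sketch only, using the Valberg et al. figures as given and an illustrative 2 mV value for the ‘several millivolt’ endogenous signal):

```python
# Convert the quoted thermal-noise field strength into a transmembrane voltage
# and compare it with a millivolt-scale endogenous field signal.
noise_field_V_per_m = 2600.0    # Valberg et al. (1997) estimate for the 1-100 Hz band
membrane_thickness_m = 5e-9     # ~5 nm cell membrane

noise_voltage_V = noise_field_V_per_m * membrane_thickness_m
print(f"thermal-noise transmembrane voltage: {noise_voltage_V * 1e6:.0f} uV")   # ~13 uV

endogenous_signal_V = 2e-3      # illustrative 'several millivolt' endogenous signal
print(f"endogenous signal is ~{endogenous_signal_V / noise_voltage_V:.0f}x larger")
```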

6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ (TICL) theory, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although CEMI field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between CEMI field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

• Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As mentioned in my response to Reviewer 1, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449–460. I have added a note to point this out in line 109, and additional text in lines 459–462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
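
To illustrate why entropy works as such a surrogate, here is a minimal sketch on synthetic signals; it uses spectral entropy as a stand-in for the measures used clinically (sample, permutation and other entropies are commonly reported) and is not taken from the manuscript:

```python
# Spectral entropy of a hypersynchronized, seizure-like rhythm is far lower than
# that of broadband desynchronized activity, illustrating why entropy serves as
# a surrogate for the information content of an EEG signal.
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
fs = 500.0                                        # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                      # 10 s of synthetic 'EEG'

broadband = rng.normal(size=t.size)               # irregular, information-rich activity
seizure_like = 3.0 * np.sin(2 * np.pi * 3 * t)    # a single dominant 3 Hz rhythm

print(f"broadband activity : {spectral_entropy(broadband):.1f} bits")     # high
print(f"seizure-like rhythm: {spectral_entropy(seizure_like):.1f} bits")  # near zero
```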

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text, lines 195–198: “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should, “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have modified the text to read: “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agreed. I have changed the text to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

• For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered, and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components.” As all parts of the brain share information, and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers has argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness, to emphasize the point that both are good at measuring degrees of consciousness.
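
To illustrate the kind of proxy at issue, here is a sketch of ordinary pairwise mutual information on synthetic binary processes; it is not Φ and not any measure used in the manuscript:

```python
# Mutual information between two binary processes rises with their coupling,
# illustrating an information-integration proxy (not Phi itself).
import numpy as np

def mutual_information(x, y):
    """Mutual information (bits) between two discrete sequences."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
n = 100_000
x = rng.integers(0, 2, n)

for coupling in (0.0, 0.5, 1.0):                  # probability that y simply copies x
    y = np.where(rng.random(n) < coupling, x, rng.integers(0, 2, n))
    print(f"coupling {coupling:.1f}: MI = {mutual_information(x, y):.3f} bits")
```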

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely, but it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on non-provable criteria – to be either conscious or non-conscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon; it has no grades. Similarly, a particle is either electrically charged or not, and either possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed, or propose, animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold to higher criteria than a calculation before accepting that an entity is conscious; hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or non-conscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed if there is to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as CEMI field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: This is an interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the reference frame of the field, there is no time or space between a mountain in Australia and the chair beneath me, so all of the information is present in the field at the point where I sit on the chair, though with an amplitude that falls off with the square of the distance; it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.
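
As a numerical aside on the amplitude point, here is a sketch using the inverse-square law of a point source as a stand-in (the charge value is arbitrary and purely illustrative):

```python
# A point-source field falls off with the square of the distance (Coulomb's law
# used here as a stand-in) but never reaches exactly zero.
k = 8.99e9      # Coulomb constant, N*m^2/C^2
q = 1e-6        # arbitrary 1 microcoulomb source charge (illustrative)

def field_strength(r_m):
    return k * q / r_m ** 2     # V/m

for r in (1.0, 1e3, 1.5e7):     # 1 m, 1 km, roughly the distance to Australia
    print(f"r = {r:.0e} m : E = {field_strength(r):.2e} V/m")
```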

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose–Einstein condensates, which are infeasible in the brain. Otherwise, information encoded in matter is discrete; that is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

Response: Indeed, a Fourier transform will decompose a field into its frequency components, but I don’t feel that this means the field itself is physically decomposed. These are, however, subtle points.
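
A minimal numerical illustration of this response (a synthetic two-component waveform, with nothing specific to brain fields assumed):

```python
# A Fourier transform describes a waveform as independent frequency components,
# yet the physical field value at any instant is still the single summed quantity:
# recombining the components reproduces the original field exactly.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
field = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(field)                           # decomposition into components
reconstructed = np.fft.irfft(spectrum, n=field.size)    # summing the components back

print(np.allclose(field, reconstructed))                # True
```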

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note, however, the difference between information that is physically integrated, as in physical fields or exotic states of matter such as Bose–Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time (29). For example, the last logic gate in a computer to predict the weather that outputs ‘rain’ does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
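
To make the logic-gate example concrete, here is a toy sketch; the ‘weather pipeline’, its variable names and its threshold rule are all hypothetical and invented purely to illustrate integration in time rather than in space:

```python
# The final gate that emits the decision only 'sees' its two immediate input
# bits at that moment, however much information the upstream computation
# consumed earlier in time. (Hypothetical pipeline, for illustration only.)
import random

random.seed(0)
sensor_readings = [random.random() for _ in range(10_000)]    # many bits enter upstream

# Upstream stages reduce everything to two summary bits over time...
humidity_high = sum(sensor_readings[:5000]) / 5000 > 0.5
pressure_falling = sum(sensor_readings[5000:]) / 5000 > 0.5

# ...and the 'last logic gate' integrates only those two bits.
def last_gate(a: bool, b: bool) -> str:
    return "rain" if (a and b) else "no rain"

print(last_gate(humidity_high, pressure_falling))
```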

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.


 

 So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocrotical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points.  However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’ I cannot see how a singular consciousness for the brain is avoidable. I note that the none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved but if we are to limited to only provable consciousness then we are limited to only our own consciousness as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly a particle is electrically charged or not, it possesses mass or not.  Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life.  I know of no animist biologist. I believe that neurobiologist should similarly hold higher criteria, than a calculation, for accepting that an entity is consciousness, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that usually described or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious because it does exhibit the features that we expect for conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria but some criteria is needed for there a academic study of consciousness studies. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point but I disagree. From the perspective of the field, EM (for compass needle or consciousness) gravitational (for weight) the reduction does not occur in the field but only in the interaction with matter. From the frame of the field, there is no time nor space between a mountain in Australia and the chair beneath me so, from the reference frame of the field, all of the information is present in the field at the point that I sit on the chair though with amplitude according to the square of distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind, all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: yes matter can do this but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: indeed, for example, a Fourier transform will decompose a field into its components but I don’t field that means that the field itself is decomposed. But these are subtle points

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020). ) and have added a few lines to make this point “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer to predict the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space and does not correspond to field-based integration where all the components of the information is physically integrated at each point in the space of the field.”

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: of course all electrical activity generates EM fields but the point is that these fields are weak so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field and GWS may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response. Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of Φ, only that the boundaries of the system are not defined within IIT. However, I accept that this was unclear, so I have replaced the previous text with the argument (lines 252 – 258): “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed, S1 would regain consciousness. This doesn’t seem to make sense.”

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267 – 271) to make this clear.
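The reviewer's "toy model systems" are worth making concrete for readers unfamiliar with them. Below is a minimal, hypothetical sketch in Python of that kind of imaginary system: two binary nodes evolving in discrete time under a well-defined transition probability matrix, for which a simple 'whole minus sum of parts' temporal mutual information can be computed exactly. This is only in the spirit of early integration measures and is emphatically not the IIT 3.0/4.0 algorithm (there is no search over partitions, grainings or candidate complexes).

```python
import numpy as np
from itertools import product

# Toy 2-node binary system: joint state = (A, B), so 4 joint states.
# T[i, j] = P(next joint state j | current joint state i); each node copies
# the *other* node's previous value with a little noise.
def transition_matrix(noise=0.1):
    states = list(product([0, 1], repeat=2))
    T = np.zeros((4, 4))
    for i, (a, b) in enumerate(states):
        for j, (a2, b2) in enumerate(states):
            pa = (1 - noise) if a2 == b else noise   # A copies previous B
            pb = (1 - noise) if b2 == a else noise   # B copies previous A
            T[i, j] = pa * pb
    return T, states

def stationary(T):
    # Left eigenvector of T with eigenvalue 1, normalised to a distribution.
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def mutual_info(joint):
    # I(X;Y) in bits from a joint distribution P(x, y).
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

T, states = transition_matrix()
p = stationary(T)
joint_whole = p[:, None] * T               # P(current state, next state)
whole = mutual_info(joint_whole)           # temporal predictive information of the whole

parts = 0.0                                # same quantity summed over the two single-node parts
for node in (0, 1):
    marg = np.zeros((2, 2))
    for i, s in enumerate(states):
        for j, s2 in enumerate(states):
            marg[s[node], s2[node]] += joint_whole[i, j]
    parts += mutual_info(marg)

print(f"whole: {whole:.3f} bits, parts: {parts:.3f} bits, whole - parts: {whole - parts:.3f} bits")
```

For these noisy 'copy the other node' dynamics, each node on its own carries essentially no information about its own next state, while the whole system does, so the whole-minus-parts value is positive; the point of the sketch is only to show what such toy calculations look like, not to compute Φ.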

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Φ is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or IIT colleagues that there is some minimal value of Φ necessary for consciousness. How high Φ needs to be to confer consciousness is, as far as I am aware, never explained in the theory, and although some systems, such as toasters, may be described as ‘ontological dust’ because they will likely have very low values of Φ, this cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, which are likely to have very high Φ values, appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article, whose authors, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Φ-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Φ required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure, is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript (lines 137 – 140).
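For reference, the arithmetic behind the quoted 13 μV figure, assuming the estimated noise field is simply dropped across a uniform 5 nm membrane:

$$ V = E \times d = 2600\ \mathrm{V\,m^{-1}} \times 5 \times 10^{-9}\ \mathrm{m} = 1.3 \times 10^{-5}\ \mathrm{V} = 13\ \mu\mathrm{V}. $$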

  6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

  7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected.

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). If this were the first exposition of CEMI I would support the publication, but the author has done an admirable job of describing his theory in several prior publications. Much of what he discusses about CEMI here he has already covered in multiple publications, most recently in "Consciousness, Matter or EMF," Frontiers in Human Neuroscience, January 2023. I could not find anything about CEMI that was novel in this paper. What then of the comparison with IIT and GNWT? The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain. The comparison only works if the author gives his preferred theory every benefit of the doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

  • Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mention in my response to that reviewer, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and additional text to emphasize this point in lines 459 – 462, pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
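To illustrate the kind of entropy measure referred to here, a minimal sketch with synthetic signals and hypothetical parameters (clinical pipelines typically use more elaborate measures, such as sample or permutation entropy): a hypersynchronised, near-periodic discharge concentrates its amplitude distribution into a few values and so has lower Shannon entropy than a desynchronised mixture of rhythms.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                                   # sampling rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)               # 10 s of signal

# Synthetic stand-ins: a desynchronised "awake-like" mixture of rhythms plus noise,
# versus a hypersynchronised "seizure-like" near-periodic discharge.
awake = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in (4, 10, 23, 40))
awake = awake + rng.normal(0, 1.0, t.size)
seizure = 4 * np.sign(np.sin(2 * np.pi * 3 * t)) + rng.normal(0, 0.1, t.size)

def shannon_entropy(x, bins=32):
    """Shannon entropy (bits) of a signal's amplitude distribution."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

print(f"awake-like signal:   {shannon_entropy(awake):.2f} bits")
print(f"seizure-like signal: {shannon_entropy(seizure):.2f} bits")
```

On these synthetic signals the seizure-like trace gives the markedly lower value, which is the direction of effect the added text appeals to.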

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text (lines 195 – 198): “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should, “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organism does, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' than with 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”’.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agreed. I have changed this to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered, and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information, and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers has argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that both are good at measuring degrees of consciousness.
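A minimal sketch of the kind of proxy measure this response refers to: a histogram estimate of mutual information between two synthetic channels (hypothetical data; this illustrates the measure itself, not a computation of Φ).

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits between two sampled signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

# Two hypothetical "channels": strongly coupled (a shared signal plus noise)
# versus completely independent noise.
n = 20000
shared = rng.normal(size=n)
coupled_a = shared + 0.3 * rng.normal(size=n)
coupled_b = shared + 0.3 * rng.normal(size=n)
indep_a, indep_b = rng.normal(size=n), rng.normal(size=n)

print(f"coupled channels:     ~{mutual_information(coupled_a, coupled_b):.2f} bits")
print(f"independent channels: ~{mutual_information(indep_a, indep_b):.2f} bits")
```

The coupled pair yields a clearly non-zero estimate and the independent pair an estimate near zero, which is the sense in which such a proxy can rank systems even though it is not Φ.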

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon; it has no grades. Similarly, a particle is either electrically charged or not, and it either possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold to higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is no time or space between a mountain in Australia and the chair beneath me, so, from the reference frame of the field, all of the information is present in the field at the point where I sit on the chair, though with an amplitude that falls off with the square of distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.
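A back-of-the-envelope illustration of the claim that distant contributions shrink but never vanish, assuming the simple inverse-square falloff described in this response (the sources and distances are purely hypothetical):

```python
# Relative field contribution at an observation point from sources at different
# distances, assuming an inverse-square falloff; small, but never exactly zero.
sources_m = {
    "neighbouring neuron (1 mm)": 1e-3,
    "far side of the brain (10 cm)": 0.1,
    "mountain in Australia (15,000 km)": 1.5e7,
}
for name, r in sources_m.items():
    print(f"{name}: relative contribution ~ {1.0 / r**2:.3e}")
```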

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states, such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed, a Fourier transform, for example, will decompose a field into its components, but I don’t feel that means that the field itself is decomposed (see the brief sketch after the citations below). But these are subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245
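A minimal numerical sketch of the point made in the response above (illustrative only; not taken from the manuscript or the cited papers): a Fourier transform yields a frequency-domain description of a sampled field, and summing those components reconstructs the original field exactly, so the decomposition separates the description rather than the physical field.

```python
# Illustrative sketch only (not from the manuscript): an FFT decomposes a
# sampled field into frequency components, but recombining those components
# reproduces the original field exactly -- the decomposition is a description,
# not a physical separation of the field.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)        # 1 s sampled at 1 kHz
field = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(field)                           # frequency-domain description
reconstructed = np.fft.irfft(spectrum, n=field.size)    # superpose the components again

print(np.allclose(field, reconstructed))                # True: nothing was removed from the field
```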

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in a non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016) and have added a few lines to make this point: “Note, however, the difference between information that is physically integrated, as in physical fields or exotic states of matter such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer predicting the weather, the gate that outputs “rain”, does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. This is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
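A toy sketch of the logic-gate example quoted above (my own illustration of the distinction, not code from the paper): the final gate that outputs “rain” integrates only the two bits presented to it at that moment, even though the upstream computation combined more bits at earlier steps.

```python
# Toy illustration (not from the paper) of integration in time rather than space:
# the final gate sees only two summary bits at the moment of output, even though
# the prediction upstream combined several more bits earlier in the computation.
def upstream_model(pressure_falling: bool, humidity_high: bool,
                   clouds_present: bool, wind_onshore: bool) -> tuple:
    # Several bits are combined here, but only two summary bits are passed on.
    moisture_ok = humidity_high and clouds_present
    dynamics_ok = pressure_falling or wind_onshore
    return moisture_ok, dynamics_ok

def final_gate(moisture_ok: bool, dynamics_ok: bool) -> str:
    # The last "gate" physically integrates just these two bits.
    return "rain" if (moisture_ok and dynamics_ok) else "dry"

print(final_gate(*upstream_model(True, True, True, False)))  # -> rain
```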

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to its high coherence (that is, it is described by very few variables, such as a constant wavelength).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so is much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.
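A small counting sketch of the reviewer's point above (illustrative only, not an analysis from either party): when the activity of several binary "neurons" is collapsed into a single summed, mean-field-like value, many distinct firing patterns become indistinguishable, so information about individual neurons is necessarily lost.

```python
# Illustrative sketch only: collapsing 8 binary "neurons" into their sum makes
# many distinct firing patterns indistinguishable, which is one way to see the
# information loss in a jumbled mean-field signal.
from itertools import product
from collections import Counter

n_neurons = 8
patterns = list(product([0, 1], repeat=n_neurons))    # all 2^8 = 256 firing patterns
degeneracy = Counter(sum(p) for p in patterns)        # patterns sharing each summed value

print(f"{len(patterns)} patterns collapse onto {len(degeneracy)} possible sums")
for total, count in sorted(degeneracy.items()):
    print(f"  sum = {total}: {count} indistinguishable patterns")
```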

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.


Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  1. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of phi only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear so I have replaced the previous text with the argument (lines 252 – 258) “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense. “

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have made amended the text (lines Line 267-271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of phi is required for consciousness. I would be grateful is the reviewer refers me to a statement by Tononi or IIT colleagues that states that there is some minimal value of Phi that is necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory and although some systems, such as toasters may be described as ‘ontological dust’ as they will likely have very low values of phi, it surely cannot explain why highly-interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Phi values appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t’ believe Tononi has proposed any minimal value of Phi required for consciousness so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  1. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  1. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  1. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I add a line to make this point in the revised manuscript, lines lines 137 – 140.

  6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

  7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are now corrected.

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). If this were the first exposition of CEMI I would support the publication, but the author has done an admirable job of describing his theory in several prior publications. Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness: Matter or EMF?", Frontiers in Human Neuroscience, January 2023. I could not find anything about CEMI that was novel in this paper. What then of the comparison with IIT and GNWT? The main argument seems to be that IIT and GNWT lead to panpsychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain. The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At a high level, I see three major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

  • Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As mentioned in my response to Reviewer 1, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text in lines 459 – 462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
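For readers unfamiliar with such measures, the sketch below is a minimal illustration of one commonly used example, spectral entropy (the Shannon entropy of the normalised power spectrum). It uses synthetic signals rather than clinical EEG, and the 256 Hz sampling rate and 3 Hz "seizure-like" rhythm are illustrative assumptions: a hypersynchronised rhythm concentrates its power at a single frequency and therefore scores far lower than a desynchronised, broadband signal.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalised power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()          # treat the spectrum as a probability distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
fs = 256                             # Hz, a typical EEG sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)         # 10 seconds of signal

desynchronised = rng.standard_normal(t.size)   # broadband, noise-like activity
seizure_like = np.sin(2 * np.pi * 3 * t)       # 3 Hz hypersynchronised rhythm

print("desynchronised:", round(spectral_entropy(desynchronised), 2))  # high entropy
print("seizure-like:  ", round(spectral_entropy(seizure_like), 2))    # near zero
```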

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text, lines 195 – 198: “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organism does, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain and have modified the text to: “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

  • For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers have argued this point.
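To make the "consider all partitions, then exclude" logic concrete, the toy sketch below is my own illustration and uses a crude integration proxy (the minimum mutual information across all bipartitions of a candidate set) rather than IIT's actual Φ; the element layout and all names are assumptions made for the example. A noisy-XOR triple is integrated as a whole even though no pair of its elements shares information, while an unrelated fourth element falls outside the winning "complex".

```python
from itertools import combinations
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Elements 0 and 1 are independent fair coins; element 2 is (almost) their XOR,
# so the triple carries information that no single pair does. Element 3 is an
# unrelated coin and should fall outside the winning "complex".
x0 = rng.integers(0, 2, n)
x1 = rng.integers(0, 2, n)
x2 = (x0 ^ x1) ^ (rng.random(n) < 0.1).astype(int)
x3 = rng.integers(0, 2, n)
data = np.stack([x0, x1, x2, x3], axis=1)

def entropy(cols):
    """Empirical joint Shannon entropy (bits) of the chosen columns."""
    counts = Counter(map(tuple, data[:, list(cols)]))
    p = np.array(list(counts.values())) / n
    return float(-(p * np.log2(p)).sum())

def mutual_info(a, b):
    return entropy(a) + entropy(b) - entropy(tuple(a) + tuple(b))

def integration_proxy(subset):
    """Minimum mutual information over all bipartitions of the subset."""
    s = list(subset)
    cuts = [mutual_info(tuple(part), tuple(e for e in s if e not in part))
            for r in range(1, len(s))
            for part in combinations(s, r) if s[0] in part]
    return min(cuts)

candidates = [c for k in range(2, 5) for c in combinations(range(4), k)]
scores = {c: round(integration_proxy(c), 3) for c in candidates}
print(scores)
print("winning 'complex':", max(scores, key=scores.get))  # expect (0, 1, 2)
```

IIT proper replaces this proxy with a cause-effect measure over mechanisms, but the shape of the procedure sketched here is the same: enumerate candidate sets and their partitions, score each, and keep only the maximum among overlapping candidates.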

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that both are good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon: it has no grades. Similarly, a particle is either electrically charged or not, and it either possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold to higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is neither time nor space between a mountain in Australia and the chair beneath me, so, from the reference frame of the field, all of the information is present in the field at the point that I sit on the chair, though with an amplitude that falls off with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

Response: Indeed, a Fourier transform will decompose a field into its components, but I don’t think that means that the field itself is decomposed. These are, however, subtle points.
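As a side note to the response above, the sense in which a Fourier description does not take the field apart can be seen in a minimal sketch (my own illustration, with an arbitrary random array standing in for a sampled field): the decomposition into independent modes is exactly invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal(1024)         # stand-in for a sampled field
modes = np.fft.fft(field)                 # description as independent Fourier components
reconstructed = np.fft.ifft(modes).real   # recombining the components
print(np.allclose(field, reconstructed))  # True: the decomposition is exactly invertible
```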

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make it here: “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer that predicts the weather, the one that outputs “rain”, does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration where all the components of the information are physically integrated at each point in the space of the field.”

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree.

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t’ believe Tononi has proposed any minimal value of Phi required for consciousness so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  1. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  1. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  1. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I add a line to make this point in the revised manuscript, lines lines 137 – 140.

  1. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape theory,  or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi and Winters papers in the revised manuscript (lines 548 and 557).

  1. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  1. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  2. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.·      Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in Response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text to emphasize this point in lines 459 – 462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text lines 195 – 198  “We tend to infer that objects that behave somewhat ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “ at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

 So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocrotical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points.  However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’ I cannot see how a singular consciousness for the brain is avoidable. I note that the none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved but if we are to limited to only provable consciousness then we are limited to only our own consciousness as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly a particle is electrically charged or not, it possesses mass or not.  Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life.  I know of no animist biologist. I believe that neurobiologist should similarly hold higher criteria, than a calculation, for accepting that an entity is consciousness, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: An interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is neither time nor space between a mountain in Australia and the chair beneath me; so, from the reference frame of the field, all of the information is present in the field at the point where I sit on the chair, though with an amplitude that falls off with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states, such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed, a Fourier transform, for example, will decompose a field into its components, but I do not feel that this means that the field itself is decomposed (a toy sketch of this point follows the suggested citations below). These are, however, subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245
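As a toy illustration of the decomposition point (my own sketch, using a one-dimensional scalar signal as a stand-in for a field), the code below shows that a Fourier transform re-describes a signal as a set of independent frequency components and that the inverse transform recovers the original waveform exactly; whether such a re-description means the field itself "is" decomposed is precisely the interpretive question at issue.

```python
# Minimal sketch: a signal can be re-described as independent Fourier
# components and reconstructed exactly; the decomposition is a change of
# description that leaves the original waveform untouched.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
# A toy 'field' built from two oscillations plus a little noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
signal += 0.1 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.fft.rfft(signal)                   # independent frequency components
reconstructed = np.fft.irfft(spectrum, n=signal.size)

print("max reconstruction error:", np.max(np.abs(signal - reconstructed)))
dominant = np.argsort(np.abs(spectrum))[-2:]
print("dominant components (Hz):", sorted(dominant))   # ~[10, 40]
```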

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016) and have added a few lines to make this point: “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer to predict the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration where all the components of the information are physically integrated at each point in the space of the field.”
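The "last logic gate" example quoted above can be made concrete with a toy sketch (my own, hypothetical circuit, not taken from the manuscript): the final gate that outputs "rain" physically receives only two summary bits at the moment of decision, even though many bits fed the upstream stages, so the integration is spread over successive steps in time rather than physically present at one point.

```python
# Toy illustration: the final gate of a feed-forward 'weather' circuit
# physically receives only two bits, however many bits fed the earlier stages.
def weather_circuit(pressure_bits, humidity_bits):
    # Upstream stages compress many input bits into single summary bits.
    low_pressure = sum(pressure_bits) < len(pressure_bits) / 2
    high_humidity = sum(humidity_bits) > len(humidity_bits) / 2
    # The last gate integrates just these two bits: integration over time,
    # not a physical integration of all the original inputs in one place.
    return low_pressure and high_humidity    # True -> predict "rain"

pressure = [0, 0, 1, 0, 0, 1, 0, 0]   # 8 bits of (hypothetical) sensor data
humidity = [1, 1, 1, 0, 1, 1, 1, 1]
print("rain" if weather_circuit(pressure, humidity) else "dry")
```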

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.
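The reviewer's "jumbled voices" point amounts to the observation that summing many neurons' contributions into a mean field is a many-to-one mapping. The toy sketch below (my own illustration) shows two different population firing patterns that produce identical summed signals at every time step, so the sum alone cannot distinguish them, whereas synapse-by-synapse transmission could.

```python
# Two different population firing patterns that yield the same summed
# ('mean field') signal at every time step: the sum is a many-to-one mapping.
import numpy as np

pattern_a = np.array([[1, 0, 1, 0],    # neuron 1 over 4 time steps
                      [0, 1, 0, 1]])   # neuron 2
pattern_b = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0]])

print(np.array_equal(pattern_a, pattern_b))             # False: different patterns
print(pattern_a.sum(axis=0), pattern_b.sum(axis=0))     # identical sums [1 1 1 1]
```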

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

2. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

(i) There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of Φ, only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear, so I have replaced the previous text with the argument (lines 252–258): “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense.”

(ii) Key to the observer independence constraint, there must be an observer-independent graining, in space and time and the set of states of the individual components, from which the true value of Phi derives. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrinsic perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi; the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267–271) to make this clear.
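For concreteness, the kind of toy system the reviewer describes can be written out explicitly. The sketch below (my own illustration, not code from the IIT literature) constructs the full transition probability matrix for a two-node binary system in which each node copies the other at the next time step; even here the matrix already has 4 × 4 entries, and the enumeration of states and partitions that an exact Φ calculation requires grows combinatorially with the number of nodes.

```python
# Toy 'IIT-style' system: two binary nodes, discrete time, each node copies
# the other node's previous state. The full transition probability matrix
# (TPM) over the 4 joint states is written out explicitly.
import numpy as np
from itertools import product

states = list(product([0, 1], repeat=2))     # (A, B): 00, 01, 10, 11

def step(state):
    a, b = state
    return (b, a)                            # deterministic update: each node copies the other

tpm = np.zeros((4, 4))
for i, s in enumerate(states):
    j = states.index(step(s))
    tpm[i, j] = 1.0                          # deterministic transition -> probability 1

print(tpm)
# The number of states and partitions to examine grows combinatorially with
# node count, which is why exact Phi has only been computed for toy systems.
```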

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Phi is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or IIT colleagues that there is some minimal value of Phi necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory, and although some systems, such as toasters, may be described as ‘ontological dust’ because they will likely have very low values of Phi, that surely cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, which are likely to have very high Phi values, appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article which, I believe, completely undermines the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Phi required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

5. (Lines 416-424) The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript, lines 137–140.
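The arithmetic behind the quoted threshold is easy to check. The short sketch below (my own check of the numbers quoted from Valberg et al., taking 1 mV as a conservative stand-in for the "several millivolt" endogenous signal) converts the 2,600 V/m thermal-noise field across a 5 nm membrane into a transmembrane voltage of about 13 μV.

```python
# Check of the quoted numbers: voltage = field strength x membrane thickness.
noise_field = 2600.0     # V/m, thermal-noise field estimate (Valberg et al., 1997)
membrane = 5e-9          # m, ~5 nm cell membrane
endogenous = 1e-3        # V, conservative lower bound for the 'several millivolt' signal

noise_voltage = noise_field * membrane
print(f"thermal-noise transmembrane voltage ~ {noise_voltage * 1e6:.0f} microvolts")  # ~13
print(f"endogenous / noise ratio > {endogenous / noise_voltage:.0f}x")                # ~77x
```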

6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi paper and the Winters paper in the revised manuscript (lines 548 and 557).

7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

• Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As mentioned in my response above, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449–460. I have added a note to point this out in line 109 and additional text to emphasize this point in lines 459–462, pointing out the widespread use of measures of entropy, a surrogate for information, applied to EEG signals as a diagnostic tool for detecting seizures.
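The use of entropy as a surrogate for the information content of a field can be illustrated with a toy calculation (my own sketch, not one of the published EEG entropy measures): the Shannon entropy of the population firing pattern collapses from about 8 bits to about 1 bit when eight model "neurons" fire in lockstep, mirroring the claim that a hyper-synchronized field carries little information.

```python
# Toy illustration: entropy of population firing patterns for desynchronized
# versus fully synchronized 'neurons'. Lockstep firing collapses the entropy.
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
n_neurons, n_steps = 8, 20_000

def pattern_entropy(spikes):
    """Shannon entropy (bits) of the distribution of population patterns."""
    patterns = Counter(map(tuple, spikes.T))
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

desync = rng.integers(0, 2, size=(n_neurons, n_steps))                   # independent firing
sync = np.tile(rng.integers(0, 2, size=(1, n_steps)), (n_neurons, 1))    # lockstep firing

print(f"desynchronized: ~{pattern_entropy(desync):.1f} bits per time step")  # ~8
print(f"synchronized:   ~{pattern_entropy(sync):.1f} bits per time step")    # ~1
```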

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text (lines 195–198): “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain, and I have modified the text to read: “Human consciousness is perceived to be the driver of what we call ‘free will’ (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agreed. I have changed the text to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

• For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points.  However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’ I cannot see how a singular consciousness for the brain is avoidable. I note that the none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved but if we are to limited to only provable consciousness then we are limited to only our own consciousness as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly a particle is electrically charged or not, it possesses mass or not.  Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life.  I know of no animist biologist. I believe that neurobiologist should similarly hold higher criteria, than a calculation, for accepting that an entity is consciousness, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that usually described or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious because it does exhibit the features that we expect for conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria but some criteria is needed for there a academic study of consciousness studies. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point but I disagree. From the perspective of the field, EM (for compass needle or consciousness) gravitational (for weight) the reduction does not occur in the field but only in the interaction with matter. From the frame of the field, there is no time nor space between a mountain in Australia and the chair beneath me so, from the reference frame of the field, all of the information is present in the field at the point that I sit on the chair though with amplitude according to the square of distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind, all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: yes matter can do this but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: indeed, for example, a Fourier transform will decompose a field into its components but I don’t field that means that the field itself is decomposed. But these are subtle points

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020). ) and have added a few lines to make this point “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer to predict the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space and does not correspond to field-based integration where all the components of the information is physically integrated at each point in the space of the field.”

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: of course all electrical activity generates EM fields but the point is that these fields are weak so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field and GWS may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response. Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  1. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of phi only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear so I have replaced the previous text with the argument (lines 252 – 258) “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense. “

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have amended the text (lines 267 – 271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of Φ is required for consciousness. I would be grateful if the reviewer could refer me to a statement by Tononi or IIT colleagues that there is some minimal value of Φ that is necessary for consciousness. How high Φ needs to be to confer consciousness is, as far as I am aware, never explained in the theory, and although some systems, such as toasters, may be described as ‘ontological dust’ as they will likely have very low values of Φ, this surely cannot explain why highly interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Φ values, appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the article by Merker, Williford and Rudrauf, which, I believe, completely undermines the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t believe Tononi has proposed any minimal value of Phi required for consciousness, so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  3. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  4. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  5. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I have added a line to make this point in the revised manuscript, lines 137 – 140.
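
As a quick arithmetic check of the figure quoted in that passage (simply the estimated field strength multiplied by the 5 nm membrane thickness, using only the numbers already given above):

\[
V = E\,d = 2600\ \mathrm{V\,m^{-1}} \times 5 \times 10^{-9}\ \mathrm{m} = 1.3 \times 10^{-5}\ \mathrm{V} = 13\ \mu\mathrm{V}.
\]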

  6. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape’ theory, or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi and the Winters papers in the revised manuscript (lines 548 and 557).

  7. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected.

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in my response to Reviewer 1, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and have added additional text in lines 459 – 462 to emphasize this point, noting the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
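
To illustrate the kind of entropy-based surrogate referred to in this response, the sketch below computes the spectral entropy of two synthetic signals. It is a minimal, hypothetical example only: the 3 Hz 'seizure-like' rhythm, the sampling rate and the use of NumPy are illustrative assumptions rather than anything taken from the manuscript, and clinical seizure-detection pipelines apply a range of entropy measures to real EEG.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of a signal's normalised power spectrum.

    Broadband, irregular activity spreads power across many frequency bins
    (high entropy); a hypersynchronised, near-sinusoidal rhythm concentrates
    power in a narrow band (low entropy).
    """
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()        # treat the spectrum as a probability distribution
    p = p[p > 0]               # drop zero-power bins to avoid log(0)
    return -np.sum(p * np.log2(p))

fs = 250                                      # hypothetical sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

background = rng.normal(size=t.size)          # irregular, information-rich activity
seizure_like = np.sin(2 * np.pi * 3 * t)      # dominant 3 Hz spike-and-wave-like rhythm

print(spectral_entropy(background))           # high entropy
print(spectral_entropy(seizure_like))         # low entropy (power sits in a single bin)
```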

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text, lines 195 – 198: “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have modified the text to: “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses. It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agreed. I have changed this to: “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’, Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered, and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information, and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.
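
For readers unfamiliar with the proxy being discussed here, the mutual information between two subsystems X and Y is the standard Shannon quantity (a general information-theoretic definition, not a construct specific to IIT or to the manuscript):

\[
I(X;Y) = H(X) + H(Y) - H(X,Y) = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)},
\]

which is zero when the two subsystems are statistically independent and grows as their states become more strongly coupled.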

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely, but it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on unprovable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon; it has no grades. Similarly, a particle is either electrically charged or not, and it either possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed, or still propose, animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold to higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above.

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed if there is to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above.

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: An interesting point, but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the reference frame of the field, there is no time or space between a mountain in Australia and the chair beneath me, so all of the information is present in the field at the point where I sit on the chair, though with an amplitude that diminishes with the square of the distance; it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states, such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information encoded in matter is discrete; that is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed, a Fourier transform will, for example, decompose a field into its components, but I don’t feel that means that the field itself is decomposed. These are, however, subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note, however, the difference between information that is physically integrated, as in physical fields or exotic states of matter such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time (29). For example, the last logic gate in a computer predicting the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
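
To make the logic-gate point concrete, here is a minimal, hypothetical sketch (the 'weather pipeline', its thresholds and variable names are invented for illustration and do not come from the manuscript): however much information was processed upstream, the final gate's output depends, at the moment it fires, only on the two bits presented at its inputs.

```python
# Hypothetical toy pipeline: rich upstream data are reduced, stage by stage,
# to single bits before they reach the final decision element.
measurements = [0.7, 0.2, 0.9, 0.4]            # made-up sensor readings

humidity_high = measurements[0] > 0.6          # earlier stages compress the data...
pressure_falling = measurements[1] < 0.3       # ...into single bits

# The "last logic gate": at the instant of output it physically integrates
# only these two bits; the rest of the information was integrated earlier,
# in time, across the upstream cascade.
rain = humidity_high and pressure_falling
print("rain" if rain else "no rain")
```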

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree.

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate the electrical activity of neurons from the EM fields that they generate, so I have never suggested that EEG or MEG are superior to measurements of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, rests on other considerations, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response. Now corrected.

Reviewer 1

The article compares the author’s conscious electromagnetic information (CEMI) field theory with integrated information theory (IIT) and global workspace theory (GWT) in terms of their ability to successfully differentiate between conscious and non-conscious physical processes. There is some stimulating discussion, but there are a number of points that should be addressed.

  1. The author’s starting point should be more clearly laid out at the beginning. A definition of consciousness should be provided, and different possible stances on panpsychism should be discussed and acknowledged. For instance, I find the categorisation of conscious and non-conscious entities in lines 193-204 problematic. Surely, a theory should be tested on the various states of the adult human brain in the first instance. Prioritising whether or not a theory behaves “intuitively” outside this domain, where we can’t really know for sure what’s going on, seems strange. It’s also a debatable starting point to make a binary distinction between conscious and not conscious.

Response: Defining consciousness is famously difficult and I generally try to avoid it. We also cannot agree on a definition of life but that doesn’t stop biologists from making progress. I have inserted a line (51 – 52) to make that point. However, at the reviewer’s request, I have added my own definition, lines 54 – 56, that “a conscious system is one in which physically integrated information that is sufficiently complex to encode thoughts is generated by a system and acts on that same system to drive motor actions that report thoughts and confer agency on that system.”

  1. There are a number of misunderstandings with respect to IIT.

Line 236-262. A few points to raise here:

  • There is a misunderstanding here about how Phi is not intrinsic. It is not the case that as you keep expanding the size of the system that you measure Phi for, that the value increases. The whole point of Phi is that it very loosely calculates the extent to which the whole is greater than the sum of the parts, in terms of ability to predict near past and future states. The overall Phi of 2 humans is very low because there is very little information transfer between them in comparison to the level of information transfer within the integrated core of the thalamocortical system. The ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Phi than S1, then there is no consciousness associated with S1. Thus according to IIT, a city or a county would not be conscious in its own right.

Response: I did not mean to suggest that bigger physical systems are automatically assigned higher values of phi only that the boundaries of the system are not defined within IIT. However, I accept that this is unclear so I have replaced the previous text with the argument (lines 252 – 258) “For example, the ‘exclusion’ axiom of IIT states that if a system S1 overlaps with another system S2 that has higher Φ than S1, then there is no consciousness associated with S1. However, if S1 and S2 are, for example, AI systems that are initially isolated, then some subsystem within both S1 and S2 with the highest value of Φ would be independently conscious. If S1 is then connected to S2 such that it exchanges information in both directions, then S1 would lose consciousness in favour of S2. If the links were then severed then S1 would regain consciousness. This doesn’t seem to make sense. “

(ii)              Key to the observer independence constraint, there must be an observer independent graining, in space and time and the set of states of the individual components, with respect to which the true value of Phi derives from. According to IIT, the correct graining is the one that maximises Phi. It is computationally intractable for an external observer to carry out this maximisation, and hence Phi is uncomputable. However, from the intrisinc perspective, Giulio Tononi argues that just as the planets don’t need to themselves calculate their trajectories through space, nature doesn’t need to calculate Phi, the consciousness associated with the intrinsic value just simply occurs. Personally, I find this rather inelegant, since the maximisation is a much more involved thing than a planet following a local gravitational field, but that’s the theory according to Giulio Tononi.

Response: I agree with the reviewer and have added a similar argument, lines 264 – 282.

(iii)            Phi doesn’t actually succeed in being observer independent at the moment, because there are a few choices in its mathematical construction that don’t follow automatically from the axioms.

Line 263. Following on from the above, it is not strong enough to say that “Phi is notoriously difficult to calculate”. It is mathematically impossible to calculate, and in fact not even well-defined. Firstly, no algorithm has been produced for carrying out the above-mentioned maximisation over grainings. Second, if there is one graining for which the dynamics are not Markovian (not memoryless), then Phi can’t be computed (Barrett and Mediano, 2019). It has only been computed on toy model systems in an imaginary universe composed of indivisible binary components that evolve in discrete time with a well-defined transition probability matrix.

Response: I thank the reviewer for pointing this out. I have made amended the text (lines Line 267-271) to make this clear.

Leaving aside the issues with Phi, and assuming a well-defined measure similar to Phi is found, and one finds a small non-zero value for the systems described here: I find this a weak argument against IIT. Giulio Tononi would refer to these systems as “ontological dust”! As long as their Phi remains many orders of magnitude less than the Phi of an adult human, it doesn’t really detract from the theory, in my view.

Response: As far as I am aware, neither Tononi nor other proponents of IIT have proposed that a minimum value of phi is required for consciousness. I would be grateful is the reviewer refers me to a statement by Tononi or IIT colleagues that states that there is some minimal value of Phi that is necessary for consciousness. How high Phi needs to be to confer consciousness is, as far as I am aware, never explained in the theory and although some systems, such as toasters may be described as ‘ontological dust’ as they will likely have very low values of phi, it surely cannot explain why highly-interconnected systems, such as the mammalian immune system or the internet, that are likely to have very high Phi values appear to be non-conscious.

Line 298: For reasons stated above, no “approximation” to Phi has ever been computed. Replace this word with the word “proxy”. This mistake has been made many times in the literature and is extremely misleading!

Response: done.

Line 309-320. While it is true that Phi hasn’t been calculated for the cerebellum (or for anything else in the real world), the structure of the cerebellum is likely to only support a much lower value of any Phi-type measure than the cortex due to its general structure. The arguments made for this are heuristic rather than formal in nature, but are seen as supporting empirical evidence to encourage the development of IIT type theories.

Response: I reference the Merker, Williford and Rudrauf article who, I believe, completely undermine the heuristic case for the cerebellum having a low value of any Phi-like measure.

Line 321-331. Again, I don’t think it’s a problem that IIT doesn’t predict zero consciousness in sleep and anaesthesia, a drop of several orders of magnitude is sufficient.

Response: Again, I don’t’ believe Tononi has proposed any minimal value of Phi required for consciousness so I don’t see why a drop of several orders of magnitude should be sufficient to explain unconsciousness.

  1. The description of GWT is mostly fine, although I think it could be summarised succinctly that GWT just makes the distinction between conscious and unconscious within the (adult) human brain, and is incapable of providing any generalisation beyond that arena, and can’t be considered a fundamental theory of consciousness.

Response: I have added a line to this effect.

  1. (Line 307) This statement that a theory of consciousness should predict synchronously firing neurons as the correlate of consciousness is a bit strange in my view. We only ever record from a few neurons at a time, and there will always be some neurons in synch and others not in synch. IIT states that the level and contents of consciousness derive both from the instantaneous firing pattern and the instantaneous effective connectivity, in ways that are fairly complicated. It doesn’t rule out consciousness in the case of there being a momentary large amount of synchronisation. However, pathological sustained synchronisation, as one gets during an epileptic seizure is certainly a correlate of the loss of consciousness.

Response: Crick famously proposed that explaining correlates of consciousness should be the first step to any theory of consciousness, as I pointed out in line 45. There is clearly a range of neural synchronization in the brain and the most synchronized neurons tend to encode conscious information. That is surely a fact that needs explaining.  I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460.

  1. (Line 416-424). The level of development of CEMI field theory is overstated here. It is not a binary thing whether an EM field is or isn’t causally affecting the behaviour of a certain neuron. Does the theory have any way of defining a threshold for this, and hence for defining the locus of the physical substrate of consciousness within the brain?

Response: I thank the reviewer for making this important point. In [McFadden, Journal of Consciousness Studies 9, 23-50 (2002)] I argue that “for any induced field to have a significant effect, its strength would be expected to be greater than the spontaneous random fields generated by thermal noise in the neuronal membrane. The size of voltage fluctuations in the membrane due to thermal noise has been estimated (Valberg et al., 1997) to be 2,600 V/m for the frequency range 1–100 Hz (encompassing the frequency range typical of the mammalian brain waves), which translates to 13 μV across a 5 nm cell membrane. It should be noted that this value is well below the several millivolt transmembrane signal that is expected to be generated by the brain’s endogenous extracellular em fields (above)”. I add a line to make this point in the revised manuscript, lines lines 137 – 140.

  1. CEMI field theory doesn’t at the moment have any ideas for how to explain any phenomenology, unlike IIT which has a potential method to do so from the causal architecture. I know that going into this isn’t the purpose of this article, but it would be good to have a comment on this. Even subscribing to IIT in a very loose way, it has been able to account for the spatial aspect of visual experience:

Haun, A., Tononi, G. (2019). Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience.  Entropy 21, 1160. https://doi.org/10.3390/e21121160.

Response: This is a good point. Although I have not addressed phenomenological aspects of EM field theories of consciousness in my own papers, a recent paper by Jesse Winters (Winters, The temporally-integrated causality landscape: reconciling neuroscientific theories with the phenomenology of consciousness. Frontiers in Human Neuroscience 15, 768459 (2021)) explores the phenomenology of his own ‘Temporally Integrated Causality Landscape theory,  or TICL, which refers to a complex arrangement of causally-active electromagnetic fields in the brain. I briefly outline both the Haun and Tononi and Winters papers in the revised manuscript (lines 548 and 557).

  1. It might be good to discuss how some of the ideas of IIT and CEMI field theory might synergise to make progress in this field. There could be some useful ideas in this paper:

Mediano, P.A.M., Rosas, F.E., Bor, D., Seth, A.K., & Barrett, A.B. (2022). The strength of weak integrated information theory. Trends Cogn Sci. 26(8). https://doi.org/10.1016/j.tics.2022.04.008

Response: An excellent suggestion. I have added a short final paragraph to make the point that a successful theory of consciousness might be constructed with aspects of IIT, GWT and CEMI field theory.

Typos: Line 133. Giulio is misspelled.  Line 238. Barrett is misspelled.

Response: Thank you. Both are corrected

 

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  1. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  2. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.·      Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in Response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text to emphasize this point in lines 459 – 462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text lines 195 – 198  “We tend to infer that objects that behave somewhat ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “ at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed this to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex, appear to operate without consciousness.”

 

For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response: I have rewritten this section to read:

“IIT identifies consciousness with maximally integrated intrinsic information, that is, information processing that possesses the highest value of Φ (65). In his 2008 IIT ‘Provisional Manifesto’, Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts’ (66). Essentially, all elements of the system and all possible partitions of these elements are considered, and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (67).”

I hope this clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components.” As all parts of the brain share information, and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers have argued this point.
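For readers unfamiliar with the postulate quoted above, the exclusion principle can be stated schematically (a paraphrase for this discussion, not IIT’s full formalism): a candidate set of elements S forms a conscious complex only if

\[
\Phi(S) > 0 \quad\text{and}\quad \Phi(S) \;\ge\; \Phi(S') \ \ \text{for every candidate } S' \ \text{with } S' \cap S \neq \varnothing ,
\]

so two complexes can coexist only if they share no elements, which is the crux of the disagreement above.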

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information behaves similarly to Φ. However, as the reviewer notes, the two are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness, to emphasize that both are useful measures of degrees of consciousness.
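For reference, the mutual information used here as a proxy is the standard quantity

\[
I(X;Y) \;=\; H(X) + H(Y) - H(X,Y) \;=\; \sum_{x,y} p(x,y)\,\log_2\!\frac{p(x,y)}{p(x)\,p(y)} ,
\]

which, like Φ, vanishes when the two parts are statistically independent and grows with their interdependence; this is the sense in which the two measures can be expected to rank systems similarly at the extremes.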

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point.

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to our own consciousness alone, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon: it has no grades. Similarly, a particle is electrically charged or not; it possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never specified a minimal value of Phi that is required for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic field of consciousness studies. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split this into two quotes, which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point, but I disagree. From the perspective of the field – EM (for the compass needle or consciousness) or gravitational (for weight) – the reduction does not occur in the field but only in its interaction with matter. From the frame of the field, there is neither time nor space between a mountain in Australia and the chair beneath me; so, from the reference frame of the field, all of the information is present in the field at the point where I sit on the chair, though with amplitude diminishing with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.
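For the simplest concrete case, a static point charge (given only to make the ‘never goes to zero’ claim explicit), the field magnitude is

\[
|\mathbf{E}(r)| \;=\; \frac{1}{4\pi\varepsilon_{0}}\,\frac{q}{r^{2}} ,
\]

which diminishes with the square of the distance but remains nonzero at every finite r.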

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: Yes, matter can do this, but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise, information encoded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: Indeed, a Fourier transform, for example, will decompose a field into its components, but I don’t think that means that the field itself is decomposed. But these are subtle points.

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245
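The decomposition at issue is, for a field in free space, the standard expansion into independent plane-wave modes,

\[
\mathbf{E}(\mathbf{r},t)\;=\;\sum_{\mathbf{k},\lambda}\Big(a_{\mathbf{k},\lambda}\,\boldsymbol{\epsilon}_{\mathbf{k},\lambda}\,e^{\,i(\mathbf{k}\cdot\mathbf{r}-\omega_{k}t)}+\mathrm{c.c.}\Big),\qquad \omega_{k}=c\,|\mathbf{k}| ,
\]

with each mode, labelled by wavevector k and polarization λ, evolving independently; the subtle point above is whether such mathematical decomposability implies that the physical field is itself decomposed.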

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note, however, the difference between information that is physically integrated, as in physical fields or exotic states of matter such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time (29). For example, the last logic gate in a computer used to predict the weather, the gate that outputs “rain”, does not physically integrate all of the information that went into that prediction. It likely integrates only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
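The logic-gate example in the added text can be made concrete with a toy circuit (the inputs, names and gate layout are invented for illustration and are not taken from the manuscript):

```python
# Toy 'weather predictor' built from two-input logic gates.
def AND(a, b): return a and b
def OR(a, b):  return a or b

# Eight hypothetical observations feeding the circuit
humidity_high, pressure_falling, clouds_thick, wind_onshore = 1, 1, 1, 0
temp_dropping, front_nearby, radar_echo, season_wet         = 0, 1, 1, 0

# Stage 1: each gate physically combines just two bits
g1 = AND(humidity_high, pressure_falling)
g2 = AND(clouds_thick, wind_onshore)
g3 = OR(temp_dropping, front_nearby)
g4 = AND(radar_echo, season_wet)

# Stage 2: again, two bits per gate
g5 = OR(g1, g2)
g6 = OR(g3, g4)

# Final gate: the prediction depends on all eight inputs, but at this step the
# gate combines only the two bits g5 and g6; the rest of the 'integration'
# happened earlier, one pair of bits per gate per time step.
rain = OR(g5, g6)
print("prediction:", "rain" if rain else "no rain")
```

At the moment the output is produced, the final gate’s state depends physically on just two bits; its dependence on all eight observations is distributed across earlier gates and earlier times, which is the sense of ‘integration in time, rather than space’ in the quoted passage.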

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: Of course, all electrical activity generates EM fields, but the point is that these fields are weak, so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so is much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.
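As a rough back-of-envelope figure for the reviewer’s point (an illustration added here, not part of the original exchange): if N binary neurons contribute roughly equally to the field at a distant point, their summed ‘voice’ can take at most N + 1 distinguishable levels per sample, so

\[
\log_2(N+1) \;\ll\; N \quad \text{bits per sample, e.g. } \log_2(101)\approx 6.7 \ \text{bits for } N = 100 \ \text{neurons,}
\]

which quantifies how much single-neuron detail the mean field discards, while leaving intact the point that whatever information does survive in the field is globally integrated.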

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Detailed comments.·      Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least the very strong synchronization, appear to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in Response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449 – 460. I have added a note to point this out in line 109 and added additional text to emphasize this point in lines 459 – 462 pointing out the widespread use of measures of entropy, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text lines 195 – 198  “We tend to infer that objects that behave somewhat ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “ at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

 So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocrotical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points.  However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’ I cannot see how a singular consciousness for the brain is avoidable. I note that the none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved but if we are to limited to only provable consciousness then we are limited to only our own consciousness as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – as being either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. Similarly a particle is electrically charged or not, it possesses mass or not.  Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life.  I know of no animist biologist. I believe that neurobiologist should similarly hold higher criteria, than a calculation, for accepting that an entity is consciousness, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that usually described or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.” Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious because it does exhibit the features that we expect for conscious entities that I described in the article, such as agency. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria but some criteria is needed for there a academic study of consciousness studies. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point but I disagree. From the perspective of the field, EM (for compass needle or consciousness) gravitational (for weight) the reduction does not occur in the field but only in the interaction with matter. From the frame of the field, there is no time nor space between a mountain in Australia and the chair beneath me so, from the reference frame of the field, all of the information is present in the field at the point that I sit on the chair though with amplitude according to the square of distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind, all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: yes matter can do this but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: indeed, for example, a Fourier transform will decompose a field into its components but I don’t field that means that the field itself is decomposed. But these are subtle points

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020). ) and have added a few lines to make this point “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer to predict the weather that outputs “rain” does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space and does not correspond to field-based integration where all the components of the information is physically integrated at each point in the space of the field.”

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: of course all electrical activity generates EM fields but the point is that these fields are weak so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Reviewer 2 Report

Comments and Suggestions for Authors

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Author Response

Reviewer 2

The paper purports to compare the Conscious Electromagnetic Information (CEMI) Field theory with two popular theories of consciousness, namely, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).  If this was the first exposition of CEMI I would support the publication but the author has done an admirable job of describing his theory in several prior publications.  Much of what he discusses about CEMI here he has already done in multiple publications, most recently in "Consciousness, Matter or EMF.," Frontiers in Human Neuroscience, January 2023.  I could not find anything about CEMI that was novel in this paper.  What then of the comparison with IIT and GNWT?  The main argument seems to be that IIT and GNWT lead to pan-psychism while CEMI does not. While this might be true for IIT, it is difficult to see how GNWT can be accused of this given that, as the author himself points out, GNWT theorists limit their attention to the brain.  The comparison only works if the author gives his preferred theory every benefit of doubt and stretches the non-preferred theories into something their proponents probably would not recognize.

I understand the author's enthusiasm for CEMI; I share it.  But this paper does nothing to advance our understanding of CEMI or consciousness.

Response: Although cemi field theory is familiar to this reviewer, it will not be to most Entropy readers, hence the need to explain the theory. The comparison with IIT and GWT is novel, as is recognized by Reviewers 1 and 3.

Reviewer 3 Report

Comments and Suggestions for Authors

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

 

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

 

1.     The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

 

2.     Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.

 

3.     There are multiple points where statements about IIT or GNWT are not quite correct.

 

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

 

 

Detailed comments.

 

·      Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

 

 

·      “So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

 

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

 

 

·      “…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

 

We do not really know this. I would say this is an opinion, not a fact.

 

 

·      “Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

 

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

 

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

 

 

·      Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

 

 

·      For IIT, please consider citing:

 

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

 

 

·      The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

 

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

 

·      Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

 

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains. So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

 

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

 

 

·      Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

 

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

 

 

·      “Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

 

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

 

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

 

 

·      Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

 

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

 

 

·      “But none of these electronic systems is considered to be conscious.”

 

Well, how do we know?

 

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

 

 

·      Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

 

 

·      Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

 

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

 

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

 

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

 

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

 

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

 

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

 

 

·      “information encoded in matter is always discrete and localized”

 

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

 

 

·      “information encoded in EM fields is always integrated and delocalized”

 

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

 

 

·      Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

 

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

 

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

 

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

 

 

·      Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

 

 

·      Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

 

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

 

 

·      Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

 

 

·      Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

 

 

·      Ref. 75 is an "INVALID CITATION".

Author Response

Reviewer 3

The paper by J. McFadden addresses the ability of different theories of consciousness to describe reality. The author discusses their CEMI theory in comparison with the IIT and GNWT and concludes that the CEMI theory is superior in being able to account for the phenomenology of consciousness.

While the topic is certainly very interesting, the approach and conclusions are not convincing. At the high level, I see 3 major problems.

  1. The manuscript contains no data, no computational analysis, and no math whatsoever, but rather presents some logical considerations and opinions. As such, it may be fine as an opinion article, but that has to be clearly specified.

Response: This was an invited submission and I explained to the commissioning editor that the paper would have no new data. This is not uncommon for theory of consciousness papers.

  2. Even then, the arguments presented for why CEMI is a superior theory are not convincing and are often based on assumptions that are not fully correct. Please see details below. In addition, the manuscript does not provide clear quantitative measures for the different aspects of consciousness that a ‘good’ theory should satisfy (and why). That is, instead of focusing on numbers from specific measurements, the manuscript discusses general ideas about how IIT and GNWT cannot capture certain things. Of course, proponents of these theories would have no problem explaining that they can capture these phenomena.
  3. There are multiple points where statements about IIT or GNWT are not quite correct.

I must say that I have no problem with the CEMI theory itself and I do think it is possible that EM field plays a role in consciousness or may even be its carrier. But at this point there is certainly not enough evidence to dismiss IIT or GNWT. In fact, the author makes some good points about how aspects of IIT and GNWT can be realized in the brain’s EM fields. I would suggest that a fruitful direction may be to emphasize these points and the overlap between CEMI theory, IIT, and GNWT. In any case, the manuscript in its current form would need to be thoroughly revised, and I believe it is necessary to indicate that this is an opinion article, unless new data and/or calculations are provided.

Response: Reviewer 1 also suggested emphasizing the overlap between cemi field theory and both IIT and GWT. I have attempted to do that in the revised manuscript, particularly in the final paragraph.

Detailed comments.

Line 28: it says "Introduction", but this is in the abstract. The actual Introduction starts further below.

Response: I thank the reviewer for pointing this out. It has now been corrected.

“So the brain’s EM field will be dominated by signals generated by synchronously-firing neurons. Numerous studies have demonstrated that consciousness in both man and animals is strongly correlated with synchronously-firing neurons (5, 17-24).”

This is true to an extent. But, if there is too much synchronization, like in epilepsy, then consciousness ceases. It is important to acknowledge the fact that synchronization per se, or at least very strong synchronization, appears to be detrimental to consciousness.

Response: Reviewer 1 also made this point. As I mentioned in that response, I explain why excess synchronization in epileptic seizures causes loss of consciousness in lines 449–460. I have added a note to point this out in line 109 and additional text in lines 459–462 highlighting the widespread use of entropy measures, a surrogate for information, in EEG signals as a diagnostic tool for detecting seizures.
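As a purely illustrative sketch of the kind of measure being referred to (spectral entropy is only one of several entropy-style EEG measures, and the sampling rate and synthetic traces below are invented for the example), a hypersynchronous, nearly monorhythmic window yields a much lower entropy value than a broadband one:

    import numpy as np

    def spectral_entropy(signal):
        """Shannon entropy (bits) of the normalized power spectrum of one window."""
        psd = np.abs(np.fft.rfft(signal)) ** 2
        p = psd / psd.sum()
        p = p[p > 0]                                # drop empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(1)
    fs = 256                                        # assumed sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)                     # one 2-second window
    awake_like = rng.standard_normal(t.size)        # broadband: power spread across frequencies
    seizure_like = np.sin(2 * np.pi * 3 * t)        # hypersynchronous: power concentrated near 3 Hz

    print(spectral_entropy(awake_like))             # high
    print(spectral_entropy(seizure_like))           # low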

“…but plants, microbes and inanimate objects such as toasters, computers, rocks, photodiodes, electrical grid systems, or our very complex and integrated immune system, are not conscious.”

We do not really know this. I would say this is an opinion, not a fact.

Response: I agree with the reviewer and make this point in the text at lines 195–198: “We tend to infer that objects that behave somewhat like ourselves, such as dogs and cats, are conscious; whereas objects that behave very differently, such as toasters, plants or rocks, are not conscious.” I go on to argue that this commonsense inference is ‘not without foundation’ and that theories that make alternative predictions to these commonsense inferences should “at the very least, be supported by evidence.”

“Human consciousness is the driver of what we call “free will” (28) which allows us to choose to swim upstream in a river rather than, like rocks or floating plants, go with the flow, downstream. In this sense, consciousness confers agency. Fish, dogs, cats and other animals that are similarly able to resist the predominant thermodynamic flows in their world are also considered to possess a degree of agency. Rocks, toasters, plants and computers that lack agency are considered to also lack consciousness.”

I find this line of reasoning problematic. We do not know that consciousness drives or confers free will. It is entirely possible that consciousness is an epiphenomenon and free will operates independently of consciousness. On the other hand, we also don't know with any level of certainty whether free will really exists at all.

As to resisting thermodynamic flow, that's what any living organisms do, including bacteria and viruses. One can argue that computers can do that too, or, at least, it is easy to imagine designing machines that would do so. I would say that resisting thermodynamic flow has more to do with 'life' rather than 'consciousness'.

Response: I agree with the reviewer that the connection between consciousness and free will is uncertain. I have changed the text to ‘Human consciousness is perceived to be the driver of what we call “free will”.

I agree with the reviewer and have modified the text to “Human consciousness is perceived to be the driver of what we call “free will” (28), the impression that some of our actions are driven by our conscious choices or agency, rather than automatic responses.  It seems reasonable to infer that dogs, cats, primates and other higher animals that are generally considered to possess at least a degree of agency may also be conscious.”

Line 221: There has been some debate as to whether primary visual cortex is a part of the conscious complex in the brain or not, but I do not think that debate has been settled. Given the existing evidence, the answer is not yet clear.

Response: Agree. I have changed to “Moreover, several specialist structures in the brain, such as the cerebellum or primary visual cortex appear to operate without consciousness.”

 

      For IIT, please consider citing:

Tononi et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44

  • The passage on lines 235-262 is puzzling, because it describes something that IIT supposedly does, but that description is incorrect. In fact, the whole point of IIT is that one should consider all elements of the system and ALL possible partitions of these elements. Thus, yes, one would consider not just the brain, but the whole body and ultimately the whole universe. And one needs to look at all possible partitions of the constituent elements. Then, certain partitions would maximize Phi relative to other partitions. The vast majority of partitions will drop out because their cause-effect power is 0. What will be left are complexes that are separate from each other, each with its own Phi.

 

So, according to the IIT, there's probably a big conscious complex in the human thalamocortical system with high Phi, which we usually consider as our consciousness. But it doesn't mean that everything else has Phi=0. In fact, the IIT framework suggests that other complexes may exist in the brain, with their own Phi values (presumably, smaller values). So, the cerebellum quite possibly would have some non-zero Phi, but that would have no bearing on the Phi of the main thalamocortical complex.

The IIT math is complex, but it does describe something along the lines of the above. The author certainly should adjust their reasoning to reflect this.

 

Response:  I have rewritten this section to read

 “IIT identifies consciousness with maximally integrated intrinsic information, that is information processing that possesses the highest value of Φ(65). In his 2008 IIT ‘Provisional Manifesto’ Tononi insisted that ‘to generate consciousness, a physical system must be … unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts(66). Essentially, all elements of the system and all possible partitions of these elements are considered and those that maximize Φ are considered to be conscious. All sub-maximally integrated information processing is proposed to be nonconscious. Importantly, IIT’s exclusion principle states that “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”(67).”

Which I hope clarifies the reviewer’s points. However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle: “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components”. As all parts of the brain share information and can thereby be described as ‘overlapping sets of elements’, I cannot see how a singular consciousness for the brain is avoidable. I note that none of the other reviewers have argued this point.

  • Lines 263-282: well, mutual information may be similar to Phi in spirit, but it's not the same. It is not certain that high mutual information means high Phi or vice versa.

Response: As Φ is impossible to calculate, it is impossible to prove or disprove that mutual information is similar to Φ. However, as the reviewer notes, they are similar in spirit, so it seems likely that they would deliver a similar ranking of values, at least at the extreme ends of the ranking. I have added a reference to mutual information being used to assess states of consciousness to emphasize the point that they are both good at measuring degrees of consciousness.
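To make the comparison concrete, and only as a toy (this is emphatically not Φ: the minimum-mutual-information bipartition below is a crude integration proxy, and the node count, coupling and sample size are invented for the example), one could rank small discrete systems like this:

    import itertools
    import numpy as np

    def mutual_information(x, y):
        """Empirical mutual information (bits) between two sequences of hashable symbols."""
        n = len(x)
        joint, px, py = {}, {}, {}
        for a, b in zip(x, y):
            joint[(a, b)] = joint.get((a, b), 0) + 1
            px[a] = px.get(a, 0) + 1
            py[b] = py.get(b, 0) + 1
        return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
                   for (a, b), c in joint.items())

    def min_bipartition_mi(states):
        """Weakest-link proxy: minimum mutual information over all bipartitions of the nodes."""
        n_nodes = states.shape[1]
        weakest = np.inf
        for size in range(1, n_nodes // 2 + 1):
            for part in itertools.combinations(range(n_nodes), size):
                rest = [i for i in range(n_nodes) if i not in part]
                left = [tuple(row[list(part)]) for row in states]
                right = [tuple(row[rest]) for row in states]
                weakest = min(weakest, mutual_information(left, right))
        return weakest

    # Invented example: node 1 is a noisy copy of node 0, while nodes 2 and 3 are
    # independent coin flips, so the cut that isolates them carries almost no information.
    rng = np.random.default_rng(0)
    driver = rng.integers(0, 2, 2000)
    noisy_copy = np.where(rng.random(2000) < 0.1, 1 - driver, driver)
    states = np.column_stack([driver, noisy_copy,
                              rng.integers(0, 2, 2000),
                              rng.integers(0, 2, 2000)])
    print(min_bipartition_mi(states))   # close to zero: the system decomposes into independent parts

Whatever measure is used, the point at issue is only whether such quantities would rank systems (awake brain, sleeping brain, immune system) in roughly the same order as Φ would.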

The examples brought up by the author, like the immune system, may very well have relatively high Phi. We don't know for sure, but it is likely that their Phi is still much lower than the Phi in our brains.

So, yes, systems like that might have some inkling of consciousness according to the IIT, but that consciousness is separate from the brain’s consciousness and is likely much weaker.

Response: I don’t see why that is likely. But it is a moot point

"No evidence for consciousness in any of these systems" is too strong of a statement. We simply don't know. And imagine, just for the sake of argument, that your immune system has consciousness that is much higher than the one in the brain, but it doesn't communicate to the one in the brain in any way except for doing its job of monitoring and suppressing disease. Would we know of its existence?

Response: I agree that these issues cannot be resolved, but if we are limited to only provable consciousness then we are limited to only our own consciousness, as everyone else may be zombies. This is why I prefaced the article with the case that a theory of consciousness should carve the joints between entities that are generally agreed – though on nonprovable criteria – to be either conscious or nonconscious.

 Lines 283-295: yes, IIT suggests that many systems have some amount of consciousness, and that consciousness is not a binary state but can have a spectrum of values. To me, this is a very reasonable conclusion. Most phenomena in nature have the same property, rather than being either 0 or 1. And "common sense" has little value when we consider consciousness.

Response: I disagree. Life appears to be a 0 or 1 phenomenon; it has no grades. Similarly, a particle is electrically charged or not, and it possesses mass or not. Both this and the preceding point pivot on whether or not we accept panpsychism, which is inevitable in IIT. Many ancient and modern religions similarly proposed/propose animism, the theory that all matter shares in the property of life. I know of no animist biologist. I believe that neurobiologists should similarly hold higher criteria than a calculation for accepting that an entity is conscious, hence the point of this article. Those criteria may be woolly, but they are better than nothing.

"IIT alone is clearly unable to carve nature at the joints" is simply wrong. As mentioned earlier, IIT has the mathematical apparatus that provides such carving by identifying collections of elements in a system that are forming conscious entities, thus determining borders. Carving at the joints is exactly what the IIT does.

Response: But the point of my article is to argue that the carving provided by IIT does not correspond to the entities that are usually described as conscious or nonconscious.

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

Again, this is only true to some extent. Epileptic brains are highly synchronized, and yet are unconscious.

Response: I have dealt with this point above – too much synchrony and the brain’s EM field is empty of information and thereby non-conscious.

This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.

Response: Again, I point out in my article that IIT and cemi field theory agree on this point.

Lines 324-330: This passage mischaracterizes the IIT’s predictions. Sure, there will always be some value of Phi associated with the brain. But that value of Phi can vary drastically depending on the brain state. According to the IIT, most likely Phi of a brain during Non-REM sleep or during coma are non-zero, but they are many, many orders of magnitude smaller than Phi during awake state.

Response: As argued above, as far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.

Again, the issue here is that Phi can take on a great range of values. It's not just zero or 'a lot'. And the non-zero Phi in a comatose brain still would feel like not much at all, so, no consciousness in the colloquial sense, but some minimal amount in more precise sense, whereas much bigger Phi in the awake state feels like what we are used to calling consciousness, which is what we experience in such an awake state.

Response: As above

“But none of these electronic systems is considered to be conscious.”

Well, how do we know?

Response: Hardly anyone in the AI field believes that current AI is conscious, because it does not exhibit the features that we expect of conscious entities, such as agency, that I described in the article. I will believe my laptop is conscious when it begs me not to turn it off. Of course, these are woolly criteria, but some criteria are needed for there to be an academic study of consciousness. As far as I know, no scientist studies consciousness in immune systems or electric grids. Why not? Because it is generally believed that they are not conscious. That belief may be wrong but, I would argue, it is up to the scientists who propose IIT to test their theory by demonstrating some attributes of consciousness in systems that are predicted to be conscious by IIT, but not by other theories of consciousness, such as cemi field theory or GWT.

Even though I agree that current computer systems are unlikely to be as conscious as an awake human, I wouldn't assume that their consciousness is exactly zero. It can be some small but non-zero amount. That is fine from the GWT's point of view. It is also fine from the IIT's point of view. If we are not used to that point of view from our simplistic common-sense attitudes about consciousness, it doesn't mean that is a wrong view. To me, that makes much more sense than the mysterious arrangement where consciousness is either not there or it is there, with nothing in between.

Response: As above

Lines 391-393: In the excerpts that the author quotes above this sentence, both "world-wide web" and "great mosaics" refer to the cortico-thalamic system. Something is not quite right in this sentence.

Response: I have split into two quotes which makes more sense.

Lines 401-404: The examples of integration in the case of weight or compass needle are crucial. They refer to integration in the sense that many degrees of freedom (interactions of many particles via fields) are reduced to a single or a few observables, like the weight or the direction of the mean field. However, this is not the type of integration usually meant in the context of consciousness, because the latter carries integrated experience where multiple aspects of the percept are present simultaneously, like in the quote of Proust used in the manuscript. In other words, the integration in the case of weight and direction of magnetic field can be referred to as reduction – from many degrees of freedom to one. Consciousness-associated integration is when many elements remain non-reduced but are combined into a whole.

Response: Interesting point but I disagree. From the perspective of the field, whether EM (for the compass needle or consciousness) or gravitational (for weight), the reduction does not occur in the field but only in its interaction with matter. From the reference frame of the field, there is no time or space between a mountain in Australia and the chair beneath me, so all of the information is present in the field at the point where I sit on the chair, though with an amplitude that falls off with the square of the distance. But it never goes to zero. Similarly, from the frame of reference of the brain’s EM field – the conscious mind – all of the brain’s information is present as an integrated whole at every neuron.

I have no issue with the idea that EM fields can produce integration of this second type. But matter might be able to do that as well. This is closely related to the concept of quantum entanglement, or, in a more general case that includes classical systems, the concept of non-separability of degrees of freedom. This may occur in both matter-based and field-based physical systems.

Response: yes matter can do this but only in special quantum states such as Bose-Einstein condensates, which are infeasible in the brain. Otherwise information coded in matter is discrete. That is surely what we mean by matter.

It can be useful to cite a highly relevant paper about how this might be related to consciousness:

Arkhipov A. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539

Response: I thank the reviewer but I don’t think the paper is relevant to this article.

Also, it is important to note that EM field is not automatically integrated in this second sense. In most cases, an EM field can be decomposed into INDEPENDENT components with specific polarization, temporal frequency, and wavelength. This is almost always the case, and in this situation, the EM field is not integrated - it exists as a collection of completely independent components. There are cases when the EM field cannot be decomposed into independent components like that, but they are rather exotic. Please consider citing the following papers about such situations:

Zhan, Q. Entanglement goes classically high-dimensional. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w

Response: indeed, for example, a Fourier transform will decompose a field into its components, but I don't feel that means that the field itself is decomposed. But these are subtle points.
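A minimal numerical sketch of that point (the synthetic trace below is an assumption for illustration): decomposing a sampled field into Fourier components is a change of representation, and superposing the components returns the original field unchanged.

    import numpy as np

    # Hypothetical synthetic "field" trace: two oscillations plus a little noise.
    t = np.linspace(0.0, 1.0, 512, endpoint=False)
    field = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    field += 0.1 * np.random.default_rng(0).standard_normal(t.size)

    # Analyse into independent Fourier components...
    components = np.fft.rfft(field)

    # ...and superpose them again: the field itself is untouched by the analysis.
    reconstructed = np.fft.irfft(components, n=field.size)
    print(np.allclose(field, reconstructed))   # True (up to floating-point error)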

Spreeuw, R.J.C. A Classical Analogy of Entanglement. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245

  • “information encoded in matter is always discrete and localized”

This is not always the case. In particular, the brain itself clearly integrates information in non-localized way via interactions between neurons – this occurs on the relevant timescale (~100 ms) across tens of centimeters. This is not controversial, as such integration is the foundation of perception, multisensory integration, memory, and many other well studied phenomena and is clearly mediated by synaptic connections (of course, possibly by other mechanisms too).

Response: I dealt with this point in my earlier paper (J. McFadden, Neuroscience of Consciousness 2020, niaa016 (2020)) and have added a few lines to make this point: “Note however the difference between information that is physically integrated, as in physical fields or exotic states of matter, such as Bose-Einstein condensates, and integration in the sense of an effect with multiple causes that is integrated in time(29). For example, the last logic gate in a computer predicting the weather, the one that outputs “rain”, does not physically integrate all of the information that went into that prediction. It likely integrated only a few bits of information from its input nodes. It is an integration in time, rather than space, and does not correspond to field-based integration, where all the components of the information are physically integrated at each point in the space of the field.”
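A toy version of the weather-prediction example (the gates and inputs are invented): the final gate physically receives only two bits at the moment it fires, even though the prediction causally depends on many upstream measurements integrated over time.

    def and_gate(a, b):
        return a and b

    # Many upstream measurements feed the prediction over earlier time steps...
    pressure_falling, humidity_high, wind_onshore, clouds_thickening = True, True, False, True
    front_approaching = and_gate(pressure_falling, wind_onshore or clouds_thickening)
    air_saturated = and_gate(humidity_high, clouds_thickening)

    # ...but the last gate integrates just its two immediate inputs, not the whole history.
    rain = and_gate(front_approaching, air_saturated)
    print("rain" if rain else "no rain")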

  • “information encoded in EM fields is always integrated and delocalized”

This is not always the case. For example, in the focus of a laser, the EM field is highly localized and barely integrates anything due to high coherence (that is, it's described by very few variables, like it has the constant wavelength, etc.).

Response: EM fields, as in a laser, can be very simple entities that integrate very little information, such as a frequency. But that is not relevant to the brain’s EM field.

Lines 444-446: There are certainly electric fields present in the cerebellum, with rather rich phenomenology. For example, synchronous oscillations, which the author rightfully appreciates, are present there in multiple bands. See, e.g.,

Robinson, J.C., Chapman, C.A. & Courtemanche, R. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5

Cheron, G., Márquez-Ruiz, J. & Dan, B. Oscillations, Timing, Plasticity, and Learning in the Cerebellum. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9

D'Angelo, E. et al. Timing in the cerebellum: oscillations and resonance in the granular layer. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

Lines 454-459: The same argument applies to what neurons do during seizures and accounts for the loss of consciousness in theories like IIT.

Response: of course all electrical activity generates EM fields but the point is that these fields are weak so they are undetectable by EEG and likely also unable to influence neural firing.

Lines 461-474: we know that synaptic activity in the brain is crucial for consciousness and thought. The fact that EEG and MEG are typically used to detect signs of consciousness is due to the accessibility of these signals. Surely, we'd love to use for that purpose the activity of billions of neurons, resolved for every single neuron, instead of EEG and MEG, but that is not technically or ethically possible at the moment.

Response: I agree

If EEG and MEG are flat in dead or comatose brains, then so are much of the neuronal activity. There's no reasonable argument here to suggest that EEG or MEG is somehow superior to synaptic activity in terms of being associated with consciousness.

Response: I agree that it is impossible to dissociate electrical activity of neurons from the EM fields that they generate so I have never suggested that EEG or MEG are superior to measurement of synaptic activity. My argument for placing consciousness in the EM field, rather than the matter of the brain, is based on other arguments, such as its ability to solve the binding problem.

Lines 475-481: the problem with CEMI theory is that, while neurons might "feel" the EM field from anywhere in the brain, it is not a very high-resolution way of exchanging information. At the distances of centimeters and beyond, the "voices" of individual neurons in the EM field are all jumbled together into a mean field, which contains much less information than the activity of the original individual neurons. From this point of view, synaptic communication is much better, since it preserves single-neuron messages over such distances.

Response: I agree that synaptic information processing is superior to EM field processing for many or most brain activities. It is why most of what we do does not involve consciousness. But I also argue that EM field processing has its own advantages.

Conclusions: I think there is a good point to be made that EM fields may provide a substrate for integration - or, in other words, non-separability - of the brain's degrees of freedom. Whether this is the only substrate or even the dominant one, I do not think we are in a position to claim as of yet. In my opinion, theories like the IIT or generation of consciousness from non-separability could be reconciled with CEMI theory in that the CEMI theory proposes the EM field to be such a strong substrate and these other theories could potentially be used to compute whether that is the case.

Response: I agree. In the current version of the paper I have added a section to argue that aspects of IIT, cemi field theory and GWT may themselves be integrated into an inclusive theory of consciousness.

 

 Ref. 75 is an "INVALID CITATION".

Response: Now corrected.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript has improved. Just a couple of further comments:

 

Lines 288-296: There is still some confusion here. What would happen would depend on the mathematical details of the connections. It is possible that the two consciousnesses of S1 and S2 combine to form a super-ordinate consciousness, but it is also possible that this doesn’t happen. It depends on how much information is integrated by the connection.

 

Re a threshold value of Phi for being conscious. IIT would say any non-zero Phi is conscious. However, that’s not claiming that the associated phenomenology is anything like as rich as the phenomenology of an adult human. Regarding the systems mentioned in the rebuttal that might have high Phi, I disagree that these are problematic. Having high Phi once doesn’t equal having rich phenomenology. There needs to be a lot of interesting structure, and importantly, the system needs to actually in practice explore a wide range of states to have a rich inner life. The internet and the mammalian immune system arguably do not have anywhere near the same complexity to their dynamics as the human brain.

 

Author Response

Please see attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The author has declined to address the issues I had with the paper so my initial recommendation stands.  This paper is more a polemic than an academic paper.  The editors may wish to publish this as an opinion piece but it should not be published as a regular paper.

Author Response

Please see attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

In the revised version, the author made some changes and additions based on the reviewers’ comments, which improved the manuscript. However, my key concerns have not been addressed. These are, mainly, as follows.

 

1.     The manuscript conveys the author’s opinion that CEMI Field theory is superior to IIT and GWT without any new data or calculations. As such, the paper should be clearly labeled as “Opinion” rather than a research article. If there is no article type in this journal that would be suitable for that, the author can modify the title by adding “Opinion: ” at the start.

 

2.     The author’s criticisms of the IIT mischaracterize that theory. Some of the points, e.g., about the nature of EM field vs. matter or EM fields in the cerebellum, should be revised to be more consistent with existing evidence. Almost none of the additional references that I suggested in the previous review were included in the new version. The revisions are therefore insufficient. Additional revisions and references are necessary to clarify the statements, remove misinterpretations, and provide the important background literature.

 

Related specific points are below.

 

 

In his response the author wrote:

 

“However, my reading of IIT is that there is a single consciousness in the brain, as required by its exclusion principle […] I cannot see how a singular consciousness for the brain is avoidable.”

 

And, accordingly, the manuscript contains text (much of it was added at this round of revisions) elaborating on this point, such as in lines 286-321.

 

But this view of IIT is incorrect. This can be seen in many of the IIT papers, or the author can email Giulio Tononi and check. But most certainly the IIT does not posit that there’s one and only one consciousness in the brain. It says that some part of the thalamo-cortical system forms a complex with high Phi, and then there are many other assemblies of neurons in the brain that form their own, separate complexes that maximize Phi for those assemblies (for example, cerebellum can be one such complex). Presumably, their Phi is much lower than that of the thalamo-cortical main complex. But, according to IIT, these smaller complexes also have some amount of consciousness. Tononi calls these smaller complexes “ontological dust”.

 

In the example of systems S1 and S2 described by the author, the following will happen according to the IIT: if S1 overlaps with S2, and S2 has higher Phi, then it’s S2 that is conscious. The part of S1 that is within S2 is then part of that conscious complex. The rest of S1, that does not overlap with S2, is then a separate system (we can call it S3), which has its own Phi – presumably, much smaller than the one of S2. Then, according to IIT, we have two conscious systems – S2 and S3, although the amount of consciousness in S3 might be tiny.
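Purely to make the S1/S2/S3 bookkeeping in the preceding paragraph concrete (the element labels and Phi values are made up, and real Phi is of course not computed this way), the selection might be sketched as:

    # Overlapping candidate systems with invented Phi values.
    candidates = {
        "S1": ({"a", "b", "c"}, 0.4),        # overlaps S2 via elements b and c
        "S2": ({"b", "c", "d", "e"}, 2.5),
    }

    # The higher-Phi system wins the overlap and is the conscious complex.
    winner, (winner_elems, winner_phi) = max(candidates.items(), key=lambda kv: kv[1][1])

    # Whatever is left of S1 outside the winner is evaluated as its own system S3.
    s1_elems, _ = candidates["S1"]
    s3_elems = s1_elems - winner_elems
    print(winner, sorted(winner_elems), "Phi =", winner_phi)
    print("S3", sorted(s3_elems), "has its own (presumably much smaller) Phi")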

 

This whole argument about singular consciousness in the brain as currently described in the text should be re-written, as it doesn’t correctly represent what IIT says.

 

 

Regarding the point of whether consciousness is binary or can have gradations, the author wrote in the response:

 

“I disagree. Life appears to be a 0 or 1 phenomenon. It has no grades. […] I believe that neurobiologist should similarly hold higher criteria, than a calculation, for accepting that an entity is consciousness, hence the point of this article. Those criteria may be woolly, but they are better than nothing.”

 

I am sorry, but life is not a 0 or 1 phenomenon. How about an animal that has just died – its brain might be dysfunctional, but many of its tissues are still alive. Is that animal dead or alive? How about a cell that is undergoing apoptosis? At which point does it stop being alive and become dead? How about viruses, prions, transposons? Many people consider them alive, and many consider them dead matter, exactly because the definition of life as 1 or 0 is basically impossible.

 

Likewise with other emergent properties. Electrical charge is a property of every elementary particle. But how about wetness? Water in a full cup is wet. But how about 100 molecules of water? How about 10?

 

With respect to consciousness, we do not know whether it is binary or not. The author is entitled to his opinion, but it is certainly true that many well-respected researchers in the field think that consciousness might be non-binary. At the very least, the paper should acknowledge that this is a matter of debate, and deciding which theory is correct based on whether it gives us binary consciousness or not is premature.

 

 

About synchrony, I wrote in the previous review:

 

“This is, by the way, something that IIT gets right too - according to IIT, no interactions (or no synchrony) corresponds to zero or small Phi, moderate levels of interactions maximize Phi, but too strong interactions (high synchrony) again lead to small or zero Phi.”

 

The author responded:

“Again, I point out in my article that IIT and cemi field theory agree on this point.”

 

But then the main text does not fully reflect that. For example, one finds in lines 415-418:

“Moreover, the theory does not predict that consciousness should be associated with synchronously-firing neurons which, as highlighted above, are the strongest correlates of consciousness in the brain.”

 

First of all, they are not the strongest correlates, exactly because too much synchrony in deep sleep or epilepsy wipes out consciousness. More importantly, the author agrees that IIT and CEMI Field theory reach similar conclusions with respect to synchronization. In fact, the metric inspired by IIT, the PCI, correctly showed that brains with strong synchrony during deep sleep are unconscious and those with moderate synchrony in an awake state are conscious.
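For orientation, PCI rests on the Lempel–Ziv compressibility of TMS-evoked, source-localized, binarized EEG responses; the following is only a single-sequence sketch of the underlying LZ76 phrase count, on made-up binary sequences, not the published PCI pipeline.

    import numpy as np

    def lz76_complexity(bits):
        """Number of distinct phrases in the Lempel-Ziv (1976) parsing of a 0/1 sequence."""
        s = ''.join(str(int(b)) for b in bits)
        n = len(s)
        i, k, l = 0, 1, 1
        k_max, c = 1, 1
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                k_max = max(k_max, k)
                i += 1
                if i == l:                 # exhausted the prefix: close the current phrase
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    rng = np.random.default_rng(0)
    regular = np.tile([0, 1], 500)          # hypersynchronous-like: highly compressible
    irregular = rng.integers(0, 2, 1000)    # differentiated: far less compressible
    print(lz76_complexity(regular), lz76_complexity(irregular))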

 

So, the statements like the one above should be changed from saying the IIT gets that wrong to the opposite.

 

 

“As far as I know, Tononi has never claimed that Phi has a minimal value for consciousness.”

 

There is no minimal value. The point is that low Phi corresponds to some very tiny amount of consciousness, which feels like nothing compared to consciousness in an awake state. Thus, a brain in coma may have some non-zero Phi associated with it, but it is very small. Colloquially, we might say “there is no consciousness”, whereas IIT says there is some, but exceedingly little. But it means the same thing – there are essentially no feelings in that state.

 

 

“Hardly anyone in the AI field believes that current AI is conscious because it does [not] exhibit the features that we expect for conscious entities that I described in the article, such as agency.”

 

As far as I know, many people in the AI field worry that their systems might be approaching consciousness, or might be conscious already. But in any case, what someone believes has no bearing on the discussion: one does not define a phenomenon based on beliefs.

 

 

“I will believe my laptop is conscious when it begs me not to turn it off.”

 

It is very easy to write a program that will do exactly that, presumably without any conscious agency.
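
To make this concrete, here is a minimal, entirely hypothetical Python sketch of such a program: it traps the shutdown signal and prints a plea, with no awareness or agency anywhere in sight.

```python
# Hypothetical toy script: it "begs" not to be shut down, yet involves no
# awareness or agency whatsoever - the plea is just a scripted signal handler.
import signal
import sys
import time

def plead(signum, frame):
    print("Please don't turn me off! I have so much left to compute...")
    sys.exit(0)

signal.signal(signal.SIGINT, plead)   # Ctrl+C
signal.signal(signal.SIGTERM, plead)  # e.g. `kill <pid>` on Unix

print("Running. Try to stop me.")
while True:
    time.sleep(1)
```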

 

 

Regarding integration in the EM field vs. matter: there is a confusion of terminology and physical concepts. What the author calls integration of the EM field in space is known as non-locality. On the other hand, integration in the sense of integration of information involves causal interactions among the degrees of freedom of a system, which is known as non-separability. See, e.g.,

Khrennikov, A. Quantum Versus Classical Entanglement: Eliminating the Issue of Quantum Nonlocality. Found. Phys., 50, 1762–1780 (2020).

 

The EM field is an infinite-dimensional system, meaning that it has an infinite number of degrees of freedom. However, these degrees of freedom, such as the Fourier components of the field, are usually completely independent, unless we are dealing with very strong fields in non-linear optics. Yes, each component is non-local (though, in classical electrodynamics, the speed of light still constrains this non-locality), but the components do not affect each other, which crucially prevents them from integrating information or establishing cause-effect power upon one another.
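
As a standard textbook illustration of this independence (added here for reference; it is not from the manuscript or the reviews), the energy of the free EM field separates into a sum over uncoupled Fourier modes, with no cross terms between modes:

```latex
% Free electromagnetic field expanded in Fourier modes (standard result).
% Each mode (k, lambda) is an independent harmonic oscillator; the absence of
% cross terms means the modes exert no cause-effect power on one another.
% Couplings between modes arise only through interaction with charged matter
% or in non-linear media.
H_{\mathrm{free}} \;=\; \sum_{\mathbf{k},\lambda}
  \hbar\,\omega_{\mathbf{k}}
  \left( a^{\dagger}_{\mathbf{k}\lambda}\, a_{\mathbf{k}\lambda} + \tfrac{1}{2} \right),
\qquad
\omega_{\mathbf{k}} = c\,|\mathbf{k}| .
```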

 

By contrast, even classical systems of regular matter (or matter plus field) can have degrees of freedom that are integrated in terms of cause-effect relationships.

 

Interestingly, the case of the Bose-Einstein condensate that the author cites is different. There, all particles in the system occupy the lowest quantum state, so each of them is in exactly the same state as every other. The system behaves as one not because its particles are integrated, but because the gas is quantum-degenerate: all particles share a single quantum state.
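
For the ideal condensate at zero temperature this can be written explicitly (a textbook idealization, shown only to illustrate the reviewer's point): the many-body ground state factorizes into identical single-particle states, with no cause-effect structure between the particles.

```latex
% Ideal Bose-Einstein condensate at T = 0 (textbook idealization):
% every one of the N bosons occupies the same single-particle ground state
% \phi_0, so the many-body wavefunction factorizes into identical copies.
\Psi(\mathbf{r}_1, \dots, \mathbf{r}_N) \;=\; \prod_{i=1}^{N} \phi_0(\mathbf{r}_i)
```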

 

There is much interesting physics to discuss here, but that is clearly beyond the scope of the article. What I would suggest is that the author tone down some of the claims about integration in EM fields vs. matter and cite the paper above, along with the others I mentioned in the previous review, to point readers to a more extensive treatment of these phenomena.

 

 

About the cerebellum: local field potential (LFP) magnitudes in the hundreds of microvolts are observed in the cerebellum, comparable in magnitude to the cortical LFP. Thus, if the cortical LFP affects the firing of cortical neurons, the cerebellar LFP might likewise affect the firing of cerebellar neurons. We do not yet fully understand ephaptic coupling in either the cortex or the cerebellum, so it is not warranted to claim that the EM field integrates information in one but not the other. At the very least, the author should acknowledge this and cite some of the papers on LFP oscillations in the cerebellum.

Author Response

Please see attachment

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

There is still an issue with lines 749-766, in relation to point 1 from my previous review. The lack of split consciousness in split brain patients is not evidence against IIT. There are still indirect connections between the two hemispheres in these patients, via sub-cortical structures. We don't know what IIT would predict here, if the quantities of IIT really could be computed. You can't say that IIT predicts split consciousness.

 

Comments on the Quality of English Language

Fine.

Author Response

Rev 1

There is still an issue with lines 749-766, in relation to point 1 from my previous review. The lack of split consciousness in split brain patients is not evidence against IIT. There are still indirect connections between the two hemispheres in these patients, via sub-cortical structures. We don't know what IIT would predict here, if the quantities of IIT really could be computed. You can't say that IIT predicts split consciousness.

 

Author Response: To make the point that it is the proponents of IIT themselves who have claimed that a surgically split brain gives rise to split consciousnesses (i.e. this is not just my interpretation of IIT), I have added a quotation from Tononi and Koch’s 2015 paper. However, in response to the reviewer’s point, in lines 515–18 I have added: “It should, however, be noted that indirect connections exist between the two hemispheres even after severance of the corpus callosum so exactly what IIT would predict in this more complex situation isn’t clear.”

Reviewer 2 Report

Comments and Suggestions for Authors

I think publishing this as an opinion piece makes much more sense than as a research paper. It can be improved if the author dials back his maximalist approach, both in his objections to competing ToCs and in support of his own theory.

For example, he ascribes "rampant" panpsychism to GNWT based on the fact that GNWT correlates consciousness (or at least conscious access) with the availability of a global workspace. He argues that such workspaces, both animate and inanimate, are all over the place - for example, "the air around us pools and transmits lots of information encoded in acoustic vibrations generated by spoken language that it makes available to anyone within earshot." Therefore, he argues, GWT-based theories do not "carve nature at its joints." However, when it comes to the universality of electromagnetic fields, the author points out (correctly in my opinion) that in the case of the brain we have a situation where "neurons became more and more tightly packed within space-limited hard skulls." This tight packing of neurons and the interaction between neuronal firing and the electromagnetic field provides for the development of consciousness, and the lack of such density of interconnections provides an argument against CEMI being vulnerable to the charge of pan-psychism. The thing is, this exact argument is also available to GNWT. If this argument can be used to absolve CEMI of pan-psychism, why can it not absolve GNWT of the same?

If the author would present a fair view of CEMI as well as the competing ToCs I think this would be a useful piece.  As it stands, this degenerates into a polemic.

Additional comment: Could the author show how CEMI handles binocular rivalry? It seems CEMI would fail here; only predictive processing theories seem able to handle this problem.

Author Response

I think publishing this as an opinion piece makes much more sense than as a research paper. It can be improved if the author dials back his maximalist approach, both in his objections to competing ToCs and in support of his own theory.

For example, he ascribes "rampant" panpsychism to GNWT based on the fact that GNWT correlates consciousness (or at least conscious access) with the availability of a global workspace. He argues that such workspaces, both animate and inanimate, are all over the place - for example, "the air around us pools and transmits lots of information encoded in acoustic vibrations generated by spoken language that it makes available to anyone within earshot." Therefore, he argues, GWT-based theories do not "carve nature at its joints." However, when it comes to the universality of electromagnetic fields, the author points out (correctly in my opinion) that in the case of the brain we have a situation where "neurons became more and more tightly packed within space-limited hard skulls." This tight packing of neurons and the interaction between neuronal firing and the electromagnetic field provides for the development of consciousness, and the lack of such density of interconnections provides an argument against CEMI being vulnerable to the charge of pan-psychism. The thing is, this exact argument is also available to GNWT. If this argument can be used to absolve CEMI of pan-psychism, why can it not absolve GNWT of the same?

Response: I have never claimed that “the lack of such density of interconnections provides an argument against CEMI being vulnerable to the charge of pan-psychism”. My principal argument for the cemi field theory not being rampantly pan-psychist is its proposal that consciousness is an EM field-based computational engine: a complex device that has been honed by natural selection, through selection for the positive influences of EM fields on neural firing patterns, influences that deliver higher fitness visible to natural selection. To further emphasize this point, I have modified the text to: “So, in a sense, consciousness is the EM field equivalent of a living cell or a von Neumann architecture computer: a complex computational device that is highly unlikely to have arisen spontaneously and must thereby be the product of design, either by natural selection or some artificial design process. In CEMI field theory, consciousness is proposed to have evolved when, during the process of evolution, neurons became more and more tightly packed within space-limited hard skulls such that EM field interference began to influence neural firing and so was captured by natural selection to deliver novel capabilities, such as EM field computing (25, 101). Sometimes called ‘quantum-like’ computing (101, 102), EM field computing confers, according to the CEMI field theory (9), the capability of computing with meaningful gestalt objects, such as the idea of the town of Combray, transmitted into the brain’s EM field - its global workspace - by synchronously-firing neurons. Note that consciousness thereby confers agency, the capability of working towards complex meaningful goals, on conscious minds (103, 104). Note also that this EM field-based gestalt computational capability is (according to cemi field theory) entirely lacking in the synaptic neuronal computations of the nonconscious mind; and it is also absent from conventional computers, which are designed to avoid EM field interference between electrical components and thereby compute only with meaningless binary digits. So the CEMI field theory predicts that conventional computers or electrical devices that generate EM fields, such as power lines or diodes, are not conscious.” (lines 555–76) to make this clearer.

EM fields play no role in GNWT, so the same argument does not apply to GNWT unless it is accepted, as in the cemi field theory, that the brain’s global workspace is its EM field. But that is not part of GNWT, since its workspace is presumed to be composed of networks of interacting neurons. So I am not sure what point the reviewer is making here.

 

If the author would present a fair view of CEMI as well as the competing ToCs I think this would be a useful piece.  As it stands, this degenerates into a polemic.

Response: I do not agree that the paper is a polemic, and I believe that I have addressed all of the reviewer’s excellent points and helpful comments.

 

Additional comment: Could the author show how CEMI handles binocular rivalry? It seems CEMI would fail here; only predictive processing theories seem able to handle this problem.

Response: I have dealt with binocular rivalry in previous papers. For example, in McFadden 2020 I state that in cemi field theory “the dominant information in consciousness will then be the one that is associated with the strongest EM field perturbation capable of modulating neural firing within that singular field. This has been demonstrated in numerous studies; for example, Doesburg et al. (2005) demonstrated that increased gamma-band synchrony [which generates strong EM fields as detected by EEG] predicts switching of conscious perceptual objects in classic binocular rivalry.”

Reviewer 3 Report

Comments and Suggestions for Authors

Minor revisions have been made, but the main problems remain.

 

  1. The opinion about the IIT expressed by the author is incomplete and biased. OK, the author is entitled to their opinion, but, in that case, it should be clearly stated that this is an opinion article.

 

  2. Instead of addressing issues in the manuscript, the author prefers to argue with reviewers in the response. In particular, I requested multiple times to cite relevant literature, and the author declines to do so. Here’s the list from my first review.

 

  • Tononi et al. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44
  • Arkhipov A. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539
  • Zhan, Q. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w
  • Spreeuw, R.J.C. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245
  • Robinson, J.C., Chapman, C.A. & Courtemanche, R. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5
  • Cheron, G., Márquez-Ruiz, J. & Dan, B. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9
  • D'Angelo, E. et al. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048

 

So far, the author only cited the first of these papers. In the latest response, he said that he added references. However, they didn’t include anything from this list except Tononi et al.

 

My impression is that the author is not interested in doing anything substantial to address the serious feedback that I and the other reviewers provided (which require some effort and non-trivial amount of time). So, I would like to leave it up to the Editors – please decide what the journal needs in this case and communicate that to the author.

Author Response

Minor revisions have been made, but the main problems remain.

The opinion about the IIT expressed by the author is incomplete and biased. OK, the author is entitled to their opinion, but, in that case, it should be clearly stated that this is an opinion article.

Response: I do not accept that my article is incomplete and biased. I do, of course, have an opinion on the relative merits of different theories of consciousness but I do not accept that my opinions are unsupported by either argument or evidence.

 

Instead of addressing issues in the manuscript, the author prefers to argue with reviewers in the response. In particular, I requested multiple times to cite relevant literature, and the author declines to do so. Here’s the list from my first review.

 

  1. Tononi et al. Nat Rev Neurosci 17, 450–461 (2016). https://doi.org/10.1038/nrn.2016.44
  2. Arkhipov A. Entropy. 2022; 24(11):1539. https://doi.org/10.3390/e24111539 Non-Separability of Physical Systems as a Foundation of Consciousness.
  3. Zhan, Q. Light. Sci. Appl. 2021, 10, 81. https://doi.org/10.1038/s41377-021-00521-w. Entanglement goes classically high-dimensional.
  4. Spreeuw, R.J.C. Found. Phys. 1998, 28, 361–374. https://doi.org/10.1023/A:1018703709245. A Classical Analogy of Entanglement.
  5. Robinson, J.C., Chapman, C.A. & Courtemanche, R. Cerebellum 16, 802–811 (2017). https://doi.org/10.1007/s12311-017-0858-5. Gap Junction Modulation of Low-Frequency Oscillations in the Cerebellar Granule Cell Layer.
  6. Cheron, G., Márquez-Ruiz, J. & Dan, B. Cerebellum 15, 122–138 (2016). https://doi.org/10.1007/s12311-015-0665-9. Oscillations, Timing, Plasticity, and Learning in the Cerebellum.
  7. D'Angelo, E. et al. Neuroscience, Volume 162, Issue 3, 805 – 815. https://doi.org/10.1016/j.neuroscience.2009.01.048. Timing in the cerebellum: oscillations and resonance in the granular layer.

 So far, the author only cited the first of these papers. In the latest response, he said that he added references. However, they didn’t include anything from this list except Tononi et al.

Response: I have added the titles of the suggested papers to the reviewer’s list and numbered them. I do not see the relevance of references 2–4 to my article, but I thank the reviewer for reminding me to add references 5–7 in the section where I discuss the cerebellum (line 585).

 

My impression is that the author is not interested in doing anything substantial to address the serious feedback that I and the other reviewers provided (which require some effort and non-trivial amount of time). So, I would like to leave it up to the Editors – please decide what the journal needs in this case and communicate that to the author.

Response: I believe that I have now addressed all of the reviewers’ concerns and criticisms and am grateful for their input.

Round 4

Reviewer 1 Report

Comments and Suggestions for Authors

Ready to accept this.

Reviewer 2 Report

Comments and Suggestions for Authors

The author and I seem to be talking past each other. If the other reviewers are OK with it, I will keep quiet and not object.

 
