First off, please feel free to contact me at selfconsciousdotcom@gmail.com.

So now I’m going to lay out my very own theory of consciousness. For the record: no, I’m not delusional. As a scientist myself, I’m vividly aware that my theory is highly speculative. But, in light of all the salient facts of neuroscience and physics, it’s the only theory I can think of that makes any sense whatsoever... Please remember that, when it comes to consciousness, we’re starting at square one. Nobody has even the faintest idea what it is. Nature has made us very desperate, and I’m not the only one grasping at straws.

Today’s leading theories of consciousness (Global Workspace Theory, Integrated Information Theory, Recurrent Processing Theory, etc.) all treat the brain as a biochemical machine that somehow becomes conscious when it displays certain properties. For example, the biochemical machine of the brain would somehow become conscious when it broadcasts information, when it integrates information, when it contains recurrent networks, etc. And it’s very easy to get lost in such sophisticated, abstract concepts as integrated information, recurrent networks, and the like. But, at the end of the day, if the brain is just a biochemical machine, where the firing of every neuron can be explained mechanistically by thermodynamics and biochemistry, then consciousness becomes totally superfluous. It’s one thing if these theories merely try to describe the correlates, the signatures of consciousness (and there’s undoubtedly a lot of value in such an effort, as it serves to guide the whole field in the right direction). But if they downright state "Consciousness is X," whether X be integrated information, recurrent networks, or what have you, then no: in every case, you don’t need consciousness to bring about X; thermodynamics and biochemistry take care of everything.

Consciousness is superfluous to a machine, but what exactly is a machine? Well, the essence of a machine (be it mechanical or biological) is that it obeys something called local realism. Nuts and bolts may well contribute to the operation of the greater machine, but they still experience only local forces (such as torque). If, likewise, neurons fire based only on local deterministic cues (e.g. neurotransmitters, ion gradients, electromagnetic fields, incoming action potentials), then they too are merely nuts and bolts in a machine. And consciousness, not being one of the local deterministic cues I just listed, becomes irrelevant, both to these nuts and bolts (neurons) and the greater machine (the brain). Why would nature evolve such a wondrous thing as consciousness, if it has no actual part to play in the brain? But then where, in nature, can we find non-locality?

Up until about a century ago, there was not a single phenomenon in all of known physics that displayed non-locality. Today, however, there is: quantum entanglement. I give here the Wikipedia definition of entanglement: "Quantum entanglement occurs when a group of particles are generated or interact in a way such that the quantum state of each particle cannot be described independently of the state of the others, including when the particles are separated by a large distance." Let us describe a simple example of entanglement: a pair of particles whose total spin is zero. Prior to any measurement, the pair is in a superposition of two states: the first particle spinning clockwise while the second spins counterclockwise, and the reverse. Measuring the spin of either particle randomly collapses it to one of the two spins (say, clockwise), which, in turn, instantly collapses the other particle to the opposite spin (counterclockwise), even if they’re separated by great distances (hence the non-local quality of entanglement). Note these two particles are fundamentally unlike those nuts and bolts that independently contribute to the operation of the greater machine. Due to the non-locality of entanglement, the two particles evolve over time together, collectively, as one.
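
For readers who like to see the standard notation, here is the textbook way of writing the spin-zero pair just described (this is a sketch of the usual quantum-mechanics formalism, not anything original to my theory):

```latex
% The spin-zero ("singlet") pair in standard Dirac notation: the up and down
% arrows stand for the two opposite spins (clockwise and counterclockwise) of
% particles A and B along the chosen measurement axis.
\[
  |\psi\rangle = \frac{1}{\sqrt{2}} \Big( |{\uparrow}\rangle_A |{\downarrow}\rangle_B \,-\, |{\downarrow}\rangle_A |{\uparrow}\rangle_B \Big)
\]
% Measuring A yields "up" or "down" with probability 1/2 each; whichever outcome
% occurs, B is then found in the opposite state, however far apart the pair may be.
```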

However, there is one seemingly insurmountable obstacle to the widespread adoption of quantum theory in neuroscience: the brain is much too "warm, wet, and noisy" to allow conventional quantum phenomena. The whole field of quantum mechanics, after all, is devoted to studying superpositions of quantum states, which are notoriously fragile. Any interaction with the environment induces decoherence, which is why quantum computers have to be kept near absolute zero. But the term superposition is misleading, for quantum theory doesn’t state that a particle actually spins in both directions at once. Quantum theory merely says the particle could have either a clockwise or counterclockwise spin, and, prior to collapse, the probability of each spin is given by the particle’s wave function. The particle’s spin is uncertain. And if you study a lone particle in a physics lab (near absolute zero), you can keep that uncertainty alive for a long time, safe from decoherence or collapse.

But what if the particle’s spin is uncertain at only a single point in time? Do the predictions of quantum theory change dramatically? No, they would just apply to only a single point in time. For one instant, the particle’s spin would have a certain probability of being clockwise or counterclockwise. Likewise, in the brain, let us assume that, at a single point in time, there is a non-zero probability the system will evolve into one of two states. And by a non-zero probability, I don’t mean it the same way that a flipped coin has a 50/50 chance of landing heads or tails. Such a probability is merely a reflection of our ignorance of all the physical forces at play. If we precisely modeled the momentum my thumb imparts to the coin, the force of gravity, the drag exerted by air molecules, and so on, we could predict with 100% accuracy whether the coin will land heads or tails. No, I mean that, even if we knew the position of every last atom in the brain, at this one point in time, we still could not predict, with 100% accuracy, how the system will evolve. Even if we were omniscient, all-knowing, there would still be a non-zero probability that the brain will evolve into one of two states.

Indeed, the brain is known to display a phenomenon called chaos (or the butterfly effect), whereby an infinitesimally small event can determine the future of the whole system. If that infinitesimally small event (like a single ion crossing a membrane channel) is truly probabilistic, there’s simply no way, on Earth or in Heaven, to predict the final outcome. We are then perfectly entitled to speak of the wave function of a macroscopic system like the brain, even if, at the very next point in time, the wave function will have decohered or (what is far more likely) collapsed, as the brain actually assumes one of the two possible states. Note there is no talk here of superpositions, of the brain being in two states at once. The brain, unlike a particle, cannot remain in a state of uncertainty for extended periods of time. No, we can only speak of the wave function of the brain at specific points in time, when the future state of the brain is fundamentally undetermined.
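
To make the butterfly-effect point concrete, here is a minimal sketch in Python (my own toy illustration using the logistic map, a standard example of chaos; it is not a model of any neural process): two trajectories that start out differing only in the fifteenth decimal place soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions ("butterfly effect"), illustrated
# with the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# This is a toy example, not a model of any neural process.

r = 4.0
x_a = 0.400000000000000   # trajectory A
x_b = 0.400000000000001   # trajectory B: differs in the 15th decimal place

for step in range(1, 61):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}:  A = {x_a:.6f}   B = {x_b:.6f}   |A - B| = {abs(x_a - x_b):.2e}")

# After a few dozen iterations the two trajectories are completely uncorrelated:
# an unresolvably small difference in the starting state decides the outcome.
```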

But what exactly are these states the brain could inhabit, with a certain probability? I mentioned that existing theories of consciousness might be of great value, if taken to describe the correlates, the signatures of consciousness. Below is an edited excerpt from a great paper by Victor A. F. Lamme (which you can find here):


"There is little controversy on how visual processing proceeds. With every saccade, the eye lands on a new scene and the image is processed by the retina. In about 50 ms, this information has reached primary visual cortex [area of the brain that processes visual information], and from there on is distributed along a large number of other areas. During this rapid feedforward sweep, many features have been extracted from the image. Neurons have detected shapes, colours, motion or depth. Higher level cells have signalled the presence of faces or animals or other complex constellations that we are strongly familiar with (e.g. letters and words). You might say that almost everything there is to know about the image has been signalled by the powerful machinery of visual cortex and its many low- and high-level feature selective cells (Figure 1).

This feedforward processing itself is unconscious. The feedforward sweep, as intelligent as it may be, apparently does not suffice to give you a conscious sensation of the image that ignited it. So what is generating the conscious sensation that we automatically have when something hits our eyes?

What typically follows feedforward processing is recurrent processing. Via horizontal and feedback connections, neurons that initially responded to very different parts of the scene, or that had extracted different types of lower- or higher-level features, start to influence each other’s activity patterns (Figure 1). For example, neurons in the primary visual cortex typically are selective for particular features within a small visual field—say the orientation of lines within a scene. Because of their small fields, the neurons will respond identically, regardless of whether these lines belong to background or foreground. That is, up to about 100 ms [duration of feedforward processing]. After that, the neurons’ responses start to diverge for foreground and background, as if they suddenly start to care about the larger perceptual context of the lines they are responding to. These interactions are crucial for the emergence of conscious experience. They enable consciousness of the visual stimulus: you SEE. Various manipulations of conscious perception have consistently been shown to influence EEG activity in the 200 ms latency range—known as the visual awareness negativity (VAN)—which is the typical latency of recurrent visual processing.

Recurrent processing does not stop in visual cortex. The more activity advances towards frontal and motor regions, the larger the network of recurrent interactions becomes. Fronto-parietal activity may induce attention-related modulation of visual responses. Evaluation by medial prefrontal cortex will cause emotional valence to influence sensory activity, and so on. Thus, the network of recurrent interactions grows in size and complexity, making responses of neurons in this network more and more interdependent (Figure 1). With that, the neural signals also come at longer latencies than those of purely visual recurrent signals. In humans, using electroencephalography (EEG), we are dealing with latencies of up to 300–400 ms, for example, as expressed in P300 responses. The disruption of recurrent interactions with longer latencies causes a failure to report conscious percepts. These manipulations typically leave early recurrent signals relatively intact, while selectively disrupting later recurrent signals (e.g. P300).

While the importance of recurrent processing for visual consciousness is widely accepted, the main controversy lies in the question of what extent of recurrent processing is sufficient for conscious experience to arise. The two main positions and their respective arguments (briefly) are:


1. Only when recurrent processing includes the fronto-parietal network do we experience the visual input. At the basis of this line of reasoning is that for sensory information to become conscious it must become available to what is called the Global Neuronal Workspace (global ignition in Figure 1). This global availability enables the cognitive manipulation of that information, conscious access to the information, and eventually also the ability to report it. Hence, when subjects report not to have seen visual (or other modality) stimuli—as happens in cases like change blindness, attentional blink, neglect, and so on—this is taken at face value: it is then considered unconscious. The neural correlates of such global availability are the involvement of fronto-parietal activation, and the later onset (P300) recurrent interactions.


2. Recurrent processing between visual areas suffices for conscious visual experience to arise. The hallmark of conscious vision, it is argued, is the integration of visual features into a coherent single scene. Once this is achieved, all the necessary and sufficient requirements for conscious vision have been fulfilled. Unconscious vision, on the other hand, is characterized by unbound visual features, detected by neurons in distributed areas. The boundary between unconscious and conscious vision is, therefore, put at the transition between feedforward and recurrent processing (Figure 1b). Adding more widespread recurrent interactions—including fronto-parietal activation—may give you the ability to cognitively manipulate and report the conscious visual percept, but this is not explaining the unconscious/conscious transition itself. Conditions like change blindness, neglect, etc., are considered failures of attention and report, not of conscious vision. Experimental support comes primarily from findings showing that report, cognitive access or attention do not change the quality of (neural) visual representations; they just add report, access and attention.


Further arguments in favour of the one or the other have been laid out extensively elsewhere. The discussion is reminiscent of some older philosophical discussions, for example about the distinction between A (access) and P (phenomenal) consciousness. The controversy between the two ideas seems to be in a stalemate, with support for one or the other swinging back and forth. There is sufficient data for either point of view; it mainly boils down to the two theories having clearly different explananda: are you interested in explaining the seeing or the knowing that we see? What is crucial is whether we should consider consciousness as closely intertwined with attention and cognition (as in the first view) or as a function or phenomenon with its own ontological status (as in the second view)."


I highly recommend looking at Figure 1, which summarizes everything very nicely. Lamme, of course, supports the second view. In the interest of fairness and balance, below is a similar synopsis supporting the first view, adapted from an equally great review by Stanislas Dehaene et al. (which you can find here):


"Information only becomes conscious when it is widely broadcast across the brain. The Global Neuronal Workspace (GNW) acts as a distributed 'router' associated with many brain regions through which information can be broadcast. Broadcasting implies that the information in the GNW becomes available to many local processors, and the wide accessibility of this information is hypothesized to constitute conscious experience. The prefrontal cortex is posited to play a key role in the GNW, due to its greater density of neurons thought to be critical for global broadcasting of information.

The GNW activates in a non-linear manner called 'ignition,' triggering a sudden, coherent, and reverberant neural activity that mediates global broadcasting. Ignition amplifies and sustains information, allowing it to be broadcast to local processors. Ignition may be triggered by an external stimulus, or it may occur spontaneously and stochastically at rest. In the latter case, even during the unstimulated resting state, simulations show that the GNW is subject to a continuous stream of stochastic spontaneous activity, thereby implementing a source of diversity that can continuously activate mental representations in an endogenous manner. This property of the model fits with the constant variations in fMRI and EEG that are observed in the awake resting brain and that vanish during anesthesia or in patients with disorders of consciousness.

In the case of vision, unseen, subliminal stimuli may result in a long series of evoked brain activations, sometimes lasting several seconds. The difference, however, is that subliminal stimuli do not evoke a sudden ignition but rather a slowly decaying wave of activity. To become conscious, the signal must be strong enough to reach the GNW, which in turn leads to the activation—or ignition—of a reverberant network. It is this reverberation that allows the signal to be sustained over time."


The key intuition behind the GNW theory is that, once information becomes conscious, you can do anything with it. If you see a cluster of grapes on your kitchen table, you can count the grapes, you can describe their appearance, you can eat them, etc. Clearly, information is being broadcast to different specialized brain centers (e.g. for counting), and, according to the GNW theory, this very broadcasting of information is what makes up consciousness. Here’s more:


"Over the past 15 years, abundant neurophysiological and neuroimaging studies in humans have provided evidence in support of the ignition concept. These investigations reported the existence of a sudden divergence in brain activity, around 200 to 300 ms after stimulus onset, between trials with or without conscious perception, with an intense propagation of additional activity, particularly toward the prefrontal and parietal cortex, on conscious trials. This non-linear divergence occurs regardless of the stimulus modality or paradigm used to manipulate consciousness (e.g. reduced visibility, masking, inattention). For instance, Noel et al. (2018) tested three putative signatures of conscious access for auditory, visual, and audiovisual trials in human EEG. They found that sudden late ignition was the only clear signature common to all three conditions. Similarly, Sanchez et al. (2019) evaluated conscious perception in the visual, auditory, and tactile modalities. Multivariate decoders trained to classify perceived versus unperceived stimuli identified a late sudden ignition (>200 ms) that generalized across modalities. Importantly, the supramodal activity patterns signaling conscious perception included late activity in sensory regions belonging to other modalities (for instance, auditory detection could be detected from visual areas), compatible with the idea that consciously perceived stimuli were broadcasted globally in a top-down manner.

The first 200 ms of brain activity, corresponding to early perceptual processing, can be fully preserved on trials without conscious perception, particularly under inattention conditions. Instead, conscious appraisal correlates with late events that typically lag stimulus onset by at least 200 ms, such as the P300 or 'late-positive' component of scalp event-related potentials. For instance, the crossing of the threshold for auditory or visual conscious perception is associated with a sudden increase in the P300 component, and only this component vanishes almost entirely under inattention conditions or during sleep. These convergent findings indicate that conscious access is not attached to early sensory processing but instead relates to a late stage whose timing is often decoupled from the timing of the actual stimulus.

In our previous review, we stressed the P3b (a subcomponent of P300) as the most consistent scalp-recorded correlate of conscious ignition, common to auditory and visual modalities, as well as to many paradigms (masking, attentional blink, etc.). This proposal received support but also criticism, because an earlier negative event (called Visual Awareness Negativity [VAN]) peaking at ~260 ms and with a total duration of ~200 ms is also often observed when contrasting conscious to unconscious stimuli. VAN has been suggested as the earliest electrophysiological correlate of visual awareness, and this claim has been corroborated with magnetoencephalography (MEG). It remains unclear whether P3b is correlated with awareness, post-perceptual processes, or both. In many experiments, the VAN simply precedes the P3b, and their succession may index the spread of global ignition as reflected in intracranial and MEG signals. However, the two waves occasionally dissociate. Most importantly, only the VAN remains under conditions where the stimuli are task-irrelevant yet reported to be consciously perceived. The (unresolved) controversy surrounds the issue of whether one can ascertain that such stimuli are truly seen as opposed to being merely potentially visible but unattended: intermediate-latency sustained negativities may reflect a neural state of information accessibility, whereas the P3b would reflect genuine conscious access and processing.

In terms of mechanisms, the current understanding is that during visual processing, activity from lower visual areas is fed forward to higher areas and then fed back to form recurrent loops. The early recurrent loops occur in lower areas and may correspond to VAN. As later recurrent loops involve higher areas, including the frontal-parietal network, global recurrent processing ensues. This process may be captured by the late positive, and it enables subjects to report their awareness. Although non-response tasks may be a promising approach to separating neural correlates of awareness from those of post-perceptual processes, experimental findings so far do not resolve this discussion. Feedback processing within sensory cortices (e.g. during figure-ground segregation) could be important for phenomenal consciousness, whereas more global feedback (e.g. in frontal-parietal networks) is important for access consciousness. At present, however, short of an experimental method for producing conscious experience in the absence of conscious access, this proposal remains untested."


Now let’s get back to the key question: what precisely are these states the brain could inhabit, with a certain probability? Well, there is one thing, and only one thing, that you can do to any object of consciousness: you can focus your attention on it. You can’t be conscious of something you can’t focus your attention on, and, conversely, you can’t focus your attention on something unconscious. Some objects of consciousness you can count, others you can love, still others you can remember, but the one thing you can do to every object of consciousness is focus your attention on it. And note that you can only focus your attention on one object of consciousness at a time. Here I could cite a lot of neuroscience studies with fancy names (like binocular rivalry), but it’s something we are all familiar with. You simply can’t focus your attention on two different things at once. You can lump them together and consider them in aggregate, but you can’t focus on them separately, for there is only one global workspace. The act of focusing your attention on an object of consciousness (called ignition by Dehaene) corresponds to the measurable P300. It results in the broadcast of information throughout the brain’s global workspace, information first encoded in lower recurrent signals of the brain (corresponding, in the case of vision, to the measurable VAN).

So, in light of the above, how would the brain evolve (probabilistically) into one of multiple possible states? It’s very simple: every VAN-associated packet of information has a certain probability of inducing the P300-associated ignition of the one and only global workspace. In other words, all VAN-associated packets of information compete for the spotlight, and each has a certain probability of capturing it. Look out your window, and everything you see has a certain probability of catching your eye. Perhaps due to the sheer complexity of the human brain, perhaps due to a neural mechanism yet unknown, it would be impossible, even in theory, to predict what you will focus your attention on at any point in time. This agrees with our most intimate experience of life: each moment of consciousness is really a choice of what to focus your attention on (even if it’s just the gestalt). Of course, all probabilities must sum to one, which gives rise to quantum entanglement. For entanglement arises when you can’t assign separate wave functions to different parts of a system, when they all share one wave function. If each VAN-associated information packet has a certain probability of inducing ignition, then, taken together, these probabilities must sum to one. Therefore, all such information packets must share a single wave function, meaning they must be entangled. Such entanglement readily solves the so-called "binding problem" of consciousness, namely how every sensation, every thought, and every emotion you experience at any given moment are all integrated into one consciousness. In short, consciousness is the collective wave function of every packet of information that can, with a certain probability, at a specific point in time (again, there are no long-lasting superpositions in the brain), lead to ignition, to its broadcast throughout the brain. Note such a theory places consciousness in the time window between VAN and P300, rather than linking it to one or the other.
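
To make this competition concrete, here is a minimal sketch in Python. The packet names and salience weights are entirely made up; the only point is that normalizing the weights yields a single probability distribution, summing to one, over which packet ignites at a given moment.

```python
import numpy as np

# Toy sketch of the proposed competition: several VAN-associated information
# packets, each with a hypothetical, made-up weight standing in for how likely
# it is to capture the global workspace. Normalizing the weights turns them
# into one probability distribution over outcomes, summing to one.

rng = np.random.default_rng(0)

packets = ["grapes on the table", "hum of the fridge", "itch on the left arm", "stray thought"]
weights = np.array([3.0, 1.0, 0.5, 1.5])     # arbitrary illustrative saliences

p_ignite = weights / weights.sum()            # probabilities must sum to one
print(dict(zip(packets, np.round(p_ignite, 3))))

# One "moment": exactly one packet ignites (is broadcast), chosen probabilistically.
winner = rng.choice(packets, p=p_ignite)
print("ignited:", winner)
```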

But I haven’t yet described how the brain actually conjures up a state of uncertainty where multiple information packets could potentially trigger ignition. The answer may lie in the issue of synchronization. Here I turn to the brilliant early work (which you can find here) of Lucia Melloni, from which I quote:


On neural synchrony being the earliest correlate of consciousness: "The first electrographic difference between conscious and nonconscious stimulus processing was increased phase-locking of induced gamma oscillations across widely distributed cortical regions. This suggests that early large-scale synchronization could be the event that triggers ignition of the global workspace of consciousness, as postulated by Dehaene et al. [...] [P]hase synchronization across electrodes clearly differentiated between conditions, suggesting enhanced long-range coordination of oscillatory activity only in the visible condition. Several authors have proposed that conscious perception should be related to coordinated dynamical states of the cortical network, rather than to the activation of specific brain regions. Our results offer direct support for this notion. [...] The transient character of the long-distance synchronization is not entirely compatible with models such as global workspace (Dehaene) and recurrent processing (Lamme) because these predict a more sustained response for consciously perceived stimuli. Our results show increased neural synchrony for the visible condition, which lasts ∼100 ms but reaches significance only during a short time window (∼50 ms), suggesting that neural synchronization could last longer but is nonetheless transient."

On neural synchrony preceding P300-related ignition: "Interestingly, the global long-distance synchronization found in the visible condition was very transient and the earliest event differentiating conscious from nonconscious processing. After this, other electrophysiological measures, such as P3a [a subcomponent of P300] and theta oscillations, continue to differentiate between consciously and nonconsciously perceived words. This suggests that long-distance synchronization plays a role in triggering the cognitive processes associated with conscious awareness. [...] Our results on ERPs [Event-Related Potentials, like P300] agree with the data of Sergent et al., which suggests that ERPs evoked by perceived and unperceived stimuli start to diverge around 270 ms. Interestingly, these ERP differences occur only after the end of the transient increase in phase synchrony."


I further quote from her book chapter entitled Distinct characteristics of conscious experience are met by large-scale neuronal synchronization:


"Synchronization is also ideally suited to contribute to the selection of contents for access to consciousness. Synchronization enhances the saliency of signals and thereby facilitates their propagation in sparsely connected networks such as the cerebral cortex. Gamma band synchronization, in particular, assures coincidence among distributed inputs with millisecond precision."


Indeed, the very nice work of Yuji Ikegaya shows that coincident, synchronized presynaptic inputs are far more likely to trigger an action potential in the postsynaptic neuron than asynchronous inputs. Let us assume that synchronized inputs enhance the probability of synaptic transmission, which, due to the noisy character of synapses, is fundamentally probabilistic. Neural synchrony would then naturally be the earliest correlate of consciousness since, due to such widespread synchrony across the cortex, multiple information packets would have non-zero probabilities of triggering ignition. Note that the precise timing of neural synchrony, which always precedes ignition, nicely supports this hypothesis. Also note how the oscillating nature of neural synchrony results in probability waves, much like wave functions in quantum mechanics.
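
Here is a toy Monte Carlo sketch of that assumption (my own illustration with arbitrary numbers, not a model of Ikegaya’s experiments): the same twenty probabilistic synapses drive a simple leaky integrator, with their inputs arriving either within a 2 ms window (synchronized) or spread over 50 ms (asynchronous).

```python
import numpy as np

# Toy Monte Carlo sketch: 20 presynaptic inputs, each releasing transmitter with
# probability 0.5, drive a leaky integrator. When the inputs arrive nearly
# together, their EPSPs summate before they decay away, so the firing threshold
# is crossed far more often than when the same inputs are spread out in time.
# All numbers are arbitrary illustrative choices.

rng = np.random.default_rng(1)

N_INPUTS  = 20
P_RELEASE = 0.5    # probabilistic ("noisy") synapse
EPSP      = 1.0    # depolarization per successful release (arbitrary units)
TAU       = 5.0    # membrane decay time constant (ms)
THRESHOLD = 7.0    # firing threshold (arbitrary units)

def spike_probability(window_ms, n_trials=10000):
    spikes = 0
    for _ in range(n_trials):
        arrival  = np.sort(rng.uniform(0.0, window_ms, N_INPUTS))  # input times
        released = rng.random(N_INPUTS) < P_RELEASE                # which synapses release
        times = arrival[released]
        if times.size == 0:
            continue
        # Between inputs the potential only decays, so its peak always occurs
        # just after one of the successful releases.
        v, peak, last_t = 0.0, 0.0, times[0]
        for t in times:
            v *= np.exp(-(t - last_t) / TAU)   # passive decay since the last input
            v += EPSP                          # add one EPSP
            peak = max(peak, v)
            last_t = t
        if peak >= THRESHOLD:
            spikes += 1
    return spikes / n_trials

print("synchronized (2 ms window):", spike_probability(2.0))
print("dispersed   (50 ms window):", spike_probability(50.0))
```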

So far, I’ve described how the entangled wave function that is consciousness comes into being, but what precisely dictates the content of the wave function, and, by extension, the content of consciousness? Why does the color blue look the way it does? First, let’s clarify that entanglement protects from randomness the relation between things. For example, in the case of two particles with zero total spin, the relation between them is that they have opposite spins. The key idea is that, while wave function collapse may be probabilistic, the two particles together must still obey universal conservation laws of physics. If both particles collapsed independently, and, by chance, landed on the same spin, they would together violate such a conservation law: the conservation of angular momentum. And if neural dynamics in the time window between VAN and P300 are probabilistic as well, then entanglement must protect from randomness the relation between neural signals during that time window. Let us assume each neuron has, on average, 10,000 synapses. Furthermore, let us assume, for simplicity’s sake, the output signal is a weighted sum of the 10,000 inputs, such that the relation between all of them becomes: s_output = β_1·s_1 + β_2·s_2 + β_3·s_3 + ... + β_10000·s_10000 (where the β’s are weighting coefficients). As s_output in turn becomes an input to downstream neurons, entanglement has to protect from randomness an increasingly complex web of relations between neural signals, lest conservation laws of physics be broken.
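
A minimal sketch of the conservation point, using the spin-zero pair again (my own illustration): if the two particles collapsed independently, the conservation of angular momentum would be violated half the time, whereas the entangled, joint collapse keeps each individual outcome random while protecting the relation between the two.

```python
import numpy as np

# Contrast between two collapse schemes for a pair whose total spin must be zero.
# Spins are recorded as +1 (clockwise) and -1 (counterclockwise); conservation of
# angular momentum demands that every pair sums to zero. Purely an illustration
# of the "entanglement protects relations" point.

rng = np.random.default_rng(2)
n = 100_000

# Scheme 1: each particle collapses independently. Half the time the pair ends
# up with equal spins (total spin +2 or -2), violating conservation.
a = rng.choice([+1, -1], size=n)
b = rng.choice([+1, -1], size=n)
print("independent collapse: fraction violating conservation =", np.mean(a + b != 0))

# Scheme 2: entangled collapse. The outcome for A is still perfectly random,
# but B is constrained to the opposite value, so the relation (opposite spins)
# is protected even though each individual outcome remains unpredictable.
a = rng.choice([+1, -1], size=n)
b = -a
print("entangled collapse:   fraction violating conservation =", np.mean(a + b != 0))
```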

Thus, the content of consciousness is the information contained, not in absolute signals (e.g. the number of times a neuron fired per millisecond), but in the relations between neural signals in the time window between VAN and P300. It should now be clear why, in stark contrast to the genetic code that translates DNA into amino acids, there’s no canonical "neural code" that translates a neuron’s absolute firing rate into conscious experience. For instance, you don’t see black because a neuron in your brain fired once per millisecond, or white because it fired twice. Indeed, the relations between neural activation patterns in response to color stimuli map well to the relations between colors we actually experience (to quote a recent paper: “fMRI activation patterns associated with red were more similar to orange-related patterns than to green-related patterns, mirroring the structural organization of the associated experiential domain”). These relations between neural signals (resulting from relative differences in synaptic strength at the same neuron, and safeguarded by entanglement) are what make blue look the way it does (more similar to purple than yellow).
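
Here is an illustrative sketch of what "the information is in the relations" means in practice. The activation vectors are invented for the example (they are not data from the cited fMRI paper); the point is that the similarity structure among patterns, not any neuron’s absolute firing rate, carries the content.

```python
import numpy as np

# Toy representational-similarity sketch with invented activation patterns.
# What matters is the pattern of similarities among the responses, not the
# absolute level of any single response.

def similarity(x, y):
    """Pearson correlation between two activation patterns."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical responses of six neurons/voxels to three color stimuli.
red    = np.array([5.0, 1.0, 0.5, 4.0, 2.0, 0.2])
orange = np.array([4.5, 1.5, 0.8, 3.5, 2.2, 0.4])   # chosen to resemble "red"
green  = np.array([0.5, 4.0, 3.5, 0.8, 1.0, 4.5])   # chosen to differ from both

print("red vs orange:", round(similarity(red, orange), 3))   # high
print("red vs green: ", round(similarity(red, green), 3))    # low (negative)

# Doubling every firing rate (a change in the absolute signals) leaves the
# correlations, and hence the relational structure, untouched.
print("2*red vs 2*orange:", round(similarity(2 * red, 2 * orange), 3))
```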

While features (like the color green) are defined relative to each other by their distinctive patterns of neural activity in a cortical area (like the visual cortex), what binds together different features (like color and location) of the same object encoded in different cortical areas? After all, we do not just see swirls of shapes and colors, we see coherent, integrated objects that combine a variety of features (an apple, for instance, is both round and green). To quote again from the GNW review (edited): “Cortical neurons coding for the various features of attended objects enhance their firing rate, and these attentional effects are widespread: they occur in all cortical regions, ranging from primary sensory areas to the motor cortex. A cortical area devoid of attentional influences on neuronal firing rates remains to be discovered. Attention is object based, which means that the attentional selection of one feature of a perceptual object, represented in one brain region, causes the co-selection of other features of the same object, represented in different brain regions. In the visual modality, the spread of enhanced neuronal activity through the network of corticocortical connections enables binding operations, which establish the relations between visual features. An example is the determination of a shape at a cued location. In this case, neural activity spreads from regions that represent the location of the cue to the brain regions that represent the cued shape.” So, going back to my original question, what binds together different features of the same object (like shape and location) encoded in different cortical areas? The above-described synchronization between cortical areas. It is because cortical areas can become synchronized that, upon focusing your attention on one feature (like the object’s color) and thereby enhancing neural firing in the associated cortical area (like the visual cortex), neural firing in the other cortical area encoding the other feature (like the object’s location) is automatically enhanced. We can now define an object as any collection of features whose underlying cortical areas, being synchronized, are constrained to be co-activated. As described above, such synchronization, even as it binds together different features into one object, induces a non-zero probability of triggering ignition, meaning the object can become the focus of your attention.
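
And here is a toy sketch of object-based co-selection (my own illustration, with invented objects, features, and gain values): because features of the same object are constrained to be co-activated, attending to any one feature automatically enhances its partners in other cortical areas.

```python
# Toy sketch of object-based co-selection. Features of the same object live in
# different "areas" but are bound together: boosting one feature's firing also
# boosts its partners, because their areas are constrained to be co-activated.
# Objects, features, and gain values are all invented for the example.

baseline_rate = 10.0    # arbitrary firing-rate units
attention_gain = 1.5    # multiplicative boost from attention

# Each object is a set of (area, feature) pairs that are bound together.
objects = {
    "apple":  [("color area", "green"),  ("shape area", "round"),  ("place area", "left of bowl")],
    "banana": [("color area", "yellow"), ("shape area", "curved"), ("place area", "in bowl")],
}

def attend_to_feature(attended_feature):
    """Attend to one feature; every feature bound to the same object is co-enhanced."""
    rates = {}
    for obj, features in objects.items():
        owns_attended = any(f == attended_feature for _, f in features)
        gain = attention_gain if owns_attended else 1.0
        for area, feature in features:
            rates[(area, feature)] = baseline_rate * gain
    return rates

# Attending to "green" (a color) also enhances "round" and "left of bowl",
# the apple's other features, while the banana's features stay at baseline.
for key, rate in attend_to_feature("green").items():
    print(key, "->", rate)
```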

The wave function of an entangled system is often described by physicists as "smeared-out" information. But why would information itself be conscious? Why can’t it be unconscious, like inanimate objects? As I see it, consciousness arises from the need for information to exist for itself. Every piece of information encoded in this paragraph I’m writing requires a reader to decode it. It lacks any existence of its own. The same can’t hold true for the "smeared-out" information in my entangled brain, since no one’s there to decode it. That information must have an existence of its own. It must exist for itself. The for-itself, incidentally, is what Sartre used to call consciousness.

Finally, I may have described how consciousness comes into being, but not how we make spontaneous choices, how we enjoy pure freedom. From the moment you wake up in the morning to the moment you fall asleep at night, consciousness presents you with a rich, coherent, multi-faceted picture, and you can choose to focus your attention on any facet of this picture. It’s the only real choice you make, and it leads your stream of consciousness down one path or another. The choice of whether to wake up your iPhone or not is first a choice of whether to attend to it at all. If you choose to keep focusing on that side button, eventually you will click it. But why do you need to make such choices? Imagine there were no such thing as free will. Now, for every input, the brain has to come up with a corresponding output, like a computer. But there’s an almost infinite number of possible inputs to the human brain! Think of every combination of sensations and emotions and ideas you can possibly experience! How can the brain map every possible state of consciousness to a corresponding course of action? Whereas if you react to every moment of consciousness with a spontaneous choice, you’re infinitely flexible, and can deal with any situation whatsoever.