Addressing the temporality of phenomenal experience, these diverse treatments of phenomenal consciousness range in methodology from philosophy, through surveys and syntheses of behavioral and neuroscientific findings, to computational analysis.
A useful theory linking dynamical systems to phenomenal experience will be a story thrice told. It will involve (first) some description of phenomenal experience, which should be true. It will also involve (second) some sort of dynamical model. But (third) the model will have to be at least plausibly implementable in human beings – that’s where the theory becomes useful. Finally, once all three stories are told, they must align. It should be evident to all that the phenomenal story, the dynamical story, and the implementation story are really one story, about one entity, described in three different ways, akin to one story as it might be told in three different languages. A theory of consciousness, then, is an exercise in translation, somewhat like deciphering the Rosetta Stone. This chapter outlines a possible alignment with respect to a foundational, structural property of experience, namely, time.
For a neuroscientist working under the assumption of a complete correspondence between mind and brain, conscious awareness poses a profound mystery. It is a unique phenomenon in which a state of a physical system – i.e. a distributed pattern of neuronal activity – is inexplicably transformed into an absolutely private and internal mental experience. Thus, the relevant point of view that needs to be taken when considering neuronal mechanisms underlying phenomenal experience must also be intrinsic – that of the brain’s neurons themselves. In contrast, most neuroscience research examines neuronal activity from an external observer’s perspective. Here I will conjecture that the intrinsic perspective is implemented in the brain through rapid and recurrent neuronal activity – local neuronal “ignitions”. In these dynamics, information about the state of a local neuronal assembly is distributed back to the neurons that form the assembly through recurrent activations. A conscious percept emerges when, through these neuronal reflections, the ambiguity inherent in the activity of isolated neurons is converted into a unique and meaningful assembly state. The rapid distribution of assembly information necessitates high firing rates, sustained activity, and dense local connectivity. All these conditions fit nicely with recent experimental findings. A fourth consequence – the founding of conscious awareness on local reverberatory activity – is still highly controversial and should be viewed, at this stage, as a prediction of the local “ignition” hypothesis.
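The core dynamical idea above can be illustrated with a standard Hopfield-style attractor network (my illustrative stand-in, not the chapter's own model): recurrent feedback distributes assembly information back to each neuron, so that an initially ambiguous activity pattern settles into a unique assembly state.

```python
import numpy as np

# Illustrative sketch (assumed, not from the chapter): a Hopfield-style
# assembly in which recurrent feedback ("ignition") converts ambiguous
# single-neuron activity into a unique assembly state.

rng = np.random.default_rng(0)
n = 50
pattern = rng.choice([-1, 1], size=n)      # the stored assembly state
W = np.outer(pattern, pattern) / n         # Hebbian recurrent weights
np.fill_diagonal(W, 0)

# Start from an ambiguous state: the pattern with 40% of units flipped.
state = pattern.astype(float)
flip = rng.choice(n, size=20, replace=False)
state[flip] *= -1

# Recurrent updates feed assembly information back to every neuron.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0

overlap = (state @ pattern) / n            # 1.0 means full recovery
print(overlap)
```

With a single stored pattern and a majority of units initially correct, the recurrent dynamics recover the full assembly state in a step or two; the point of the sketch is only that the disambiguation is done by the feedback itself, not by an external readout.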
A brain charged with guiding its body through a complex and lively world from a position of solitary confinement inside its opaque skull faces a set of functional challenges beset with inverse and ill-posed problems at every turn. Uncertainty and ambiguity therefore encumber all cortical labors, making probability distributions the natural medium of its disambiguating inferential operations. This chapter proposes that those operations take place unconsciously, in keeping with Helmholtz’s original suggestion, and that the functional logic of an inherently probabilistic cortex implies a need for an extracortical “global best estimate buffer” as a means to complete cortical sensory disambiguation through a definitive but ephemeral estimate of current sensory circumstances. It further proposes that the contents of that extracortical buffer are conscious, not by virtue of anything being “added” to buffer operations in order to “make them conscious”, but by virtue of the format of buffer contents alone: its dynamics issue in a nested arrangement placing an ego-center in perspectival relation to a neural model of body-world interactions. Finally, the organization of the higher-order nuclei of the dorsal thalamus is scrutinized for its suitability to implement the putative global best estimate buffer, with particular attention to the possibility that the caudal reaches of the dorsal pulvinar might host its specifically sensory aspects, i.e. sensory awareness. Keywords: ambiguity; architecture of consciousness; best estimate buffer; constraint satisfaction; format of sensory awareness; phenomenal content; probabilistic operations; pulvinar
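The division of labor proposed above can be caricatured in a few lines of Bayesian bookkeeping (the interpretations and numbers are invented for illustration): the cortical medium carries a full posterior distribution over interpretations, while the buffer holds only the single definitive, ephemeral estimate that completes disambiguation.

```python
import numpy as np

# Toy illustration (assumed, not the chapter's model): cortex maintains a
# probability distribution over sensory interpretations; the "global best
# estimate buffer" holds one definitive estimate derived from it.

interpretations = ["duck", "rabbit"]
prior = np.array([0.5, 0.5])           # prior over interpretations
likelihood = np.array([0.7, 0.3])      # p(current input | interpretation)

posterior = prior * likelihood
posterior /= posterior.sum()           # the probabilistic cortical medium

best_estimate = interpretations[np.argmax(posterior)]   # buffer content
print(best_estimate, posterior)
```

The contrast is the point: the distribution remains graded and ambiguous, whereas the buffer's content is categorical and unique at any moment, even if it is revised a moment later.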
The proponents of machine consciousness predicate the mental life of a machine, if any, exclusively on its formal, organizational structure, rather than on its physical composition. Given that matter is organized on a range of levels in time and space, this generic stance must be further constrained by a principled choice of the levels on which the posited structure is supposed to reside. Indeed, not only must the formal structure fit the physical system that realizes it, but it must do so in a manner that is determined by the system itself, simply because the mental life of a machine cannot be up to an external observer. To illustrate just how tall this order is, we carefully analyze the scenario in which a digital computer simulates a network of neurons. We show that the formal correspondence between the two systems thereby established is at best partial, and, furthermore, that it is fundamentally incapable of realizing both some of the essential properties of actual neuronal systems and some of the fundamental properties of experience. Our analysis suggests that, if machine consciousness is at all possible, conscious experience can only be instantiated in a class of machines that are entirely different from digital computers, namely, time-continuous, open analog dynamical systems.
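One facet of the partial-correspondence point can be made concrete with a minimal sketch (my example, not the chapter's analysis): a digital computer can only approximate a time-continuous neuron model by discretizing it, and the resulting trajectory depends on a step size chosen by an external observer rather than by the system itself.

```python
import math

# Illustrative sketch (assumed): forward-Euler discretization of a
# time-continuous leaky integrator, dV/dt = -V/tau + I. Different step
# sizes yield different digital trajectories, none identical to the
# continuous solution.

tau, I, T = 0.02, 1.0, 0.1        # time constant, input, duration (s)

def euler(dt):
    """Digitally simulate the integrator with time step dt."""
    v = 0.0
    for _ in range(round(T / dt)):
        v += dt * (-v / tau + I)
    return v

exact = tau * I * (1 - math.exp(-T / tau))   # closed-form continuous value
coarse, fine = euler(0.01), euler(0.001)
print(exact, coarse, fine)
```

Shrinking the step size reduces the discrepancy but never eliminates it, and the choice of step size is external to the simulated system, which is one way of seeing why the formal correspondence remains partial.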
This chapter explores the consequences of treating consciousness as a fuzzy dynamical system. A fuzzy dynamical system is one in which labeled concepts and percepts are altered as a function of context and conditions, and these changes occur continuously in time. We offer speculations on the groundwork for a consciousness state space in which the sets of trajectories over time form tube-like structures called cylinder sets. Consciousness is a trajectory within this structure, and it passes by or through those fuzzy concepts. When the trajectory of a mental event travels close to a particular concept or percept, one experiences awareness of that concept or percept. As these constrained pathways in mental state space become more heavily traveled, they develop increased density (or attraction strength) in their central threads, and more and more nearby trajectories get captured by that cylinder. At the same time, these tubes slowly gravitate toward shortcuts in the state space over the lifespan, thus gradually straightening out and skipping past intermediating concepts that used to get visited as part of the sequence. The fringes of these concepts become a part of conscious experience and, for everyday coping, do not need to be recruited for explicit awareness. The formation and streamlining of cylinder sets over the course of learning may have the paradoxical result of producing an increase in tacit conscious experience (of “being in time”) and a decrease in explicit awareness (of this or that labeled concept).
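The proximity-to-concept picture above admits a toy rendering (the concepts, geometry, and update rule are all my illustrative assumptions): a trajectory moves through a two-dimensional state space, awareness of a concept is registered whenever the trajectory passes within a threshold distance of it, and each pass increments that concept's attraction strength, a stand-in for a cylinder set's central thread gaining density with traffic.

```python
import numpy as np

# Toy model (assumed, not the chapter's formalism): concepts as points in
# a 2-D state space; awareness = trajectory passing within a threshold.

concepts = {"coffee": np.array([1.0, 0.0]),
            "deadline": np.array([0.0, 1.0])}
strength = {name: 0.0 for name in concepts}   # cylinder thread density
threshold = 0.3

# A straight trajectory drifting from near "coffee" toward "deadline".
trajectory = [concepts["coffee"] * (1 - a) + concepts["deadline"] * a
              for a in np.linspace(0, 1, 50)]

aware_of = []
for point in trajectory:
    for name, center in concepts.items():
        if np.linalg.norm(point - center) < threshold:
            strength[name] += 0.1             # the thread thickens
            if not aware_of or aware_of[-1] != name:
                aware_of.append(name)         # concept enters awareness

print(aware_of, strength)
```

In a fuller model the accumulated strengths would feed back to bend future trajectories toward well-traveled threads, which is the capture-and-shortcut dynamic the chapter describes; this sketch only shows the bookkeeping.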
I describe and partially formalize two aspects of Edmund Husserl’s phenomenological philosophy, in a way that highlights their relevance to cognitive science. First, I describe “constitutive phenomenology”, the study of structures (what I call phenomenological “models”) that constitute a person’s sense of reality. These structures develop incrementally over the course of a person’s life, and serve a variety of functions, e.g. generating expectations relative to actions, and determining the contents of context awareness. Second, I describe “transcendental-eidetic phenomenology”, which posits a hierarchy of laws, each governing the way consciousness must be organized in order for a particular type of thing (a physical thing, a person, a social institution, etc.) to appear.
We review theories and empirical research on underlying mechanisms of selfhood, awareness, and conscious experience. The mechanisms that have been identified for these phenomena are many and multifarious, lying at many levels of space and time, complexity, and abstractness. Proposals have included the global workspace for conscious information, action and its centrality to self-awareness, a role for social information and narrative, and more. We argue that phenomenal experience, whatever it “really is,” is probably dependent upon all of these levels simultaneously. We end with two challenges for consciousness research. Both are couched in terms of the dynamics of phenomenal experience. The first is to investigate the sustained dynamics of phenomenal experience; the second is to unveil the way that multi-scale processes in the cognitive system interact to produce that richness of experience. We do not aim to solve the hard problem, but argue that any solution will require this plural characteristic.