What is Informational Identity Theory?

(This is a philosophy and cognitive science article originally published at What is Informational Identity Theory? I have added the Preliminary Remarks section for this post.)

Or: How to solve Putnam’s multiple realisability problem with traditional identity theory by picking the right process types.

Identity theory about mind has been around a long time. Stated simply, it is the materialist (or physicalist) theory that the mind is identical to the physical brain. More specifically and more accurately, it is the theory that the mind is identical to — the same thing as — ‘brain states’ or the states of the brain, that mental events are identical to (and reduce completely to) neurological events, and that mental processes are identical to (and reduce completely to) brain processes, or neurological processes, broadly construed.

Identity theory was one of the first and most successful theories of mind — or of how the mind is realised by the brain — in cognitive science. It has had many revisions and is generally regarded as having been superseded by other theories, such as various versions of epiphenomenalism and — more recently — information processing theories informed by computer science and modern neuroscience.

Preliminary Remarks and Cautions

Information-theoretic conceptions of mind and cognition have also been around since at least the early part of the 20th century. Recently, Integrated Information Theory (IIT) has been proposed. Although similar to that IIT in many ways, this IIT (Informational Identity Theory) is not that IIT (Integrated Information Theory).

Before proceeding, some cautions and clarifications. Using the term 'information process' (or the alternative term 'informational process') is problematic, since it can be variously construed as vacuous, tautological, vague, or incoherent.

As a philosopher and philosopher of psychology and of science, I tend to defer to the best and most influential scientific theories for definitions and conceptions. The prevailing and important conception of information generation, processing, encoding, decoding, and transmission is that of Claude E. Shannon's 1948 research article A Mathematical Theory of Communication. According to that theory, an information source is either a continuous or a discrete stochastic (able to be modelled statistically) physical process. Shannon's measure of information is a measure of the objective reduction in (frequentist) statistical uncertainty at the receiver-destination about the state of the source.
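
To make that concrete, here are the two standard textbook quantities involved (my summary, not Shannon's original notation):

```latex
% Entropy of a discrete source X: the average uncertainty, in bits,
% about which state the source is in.
H(X) = -\sum_{x} p(x)\,\log_2 p(x)

% Mutual information: the reduction in uncertainty about the source X
% obtained at the receiver-destination Y.
I(X;Y) = H(X) - H(X \mid Y)
```

The second quantity is the 'objective reduction in statistical uncertainty' just mentioned: how much the receiver-destination learns about the state of the source.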

An interesting and salient adjunct note is that James Ladyman and Don Ross have proposed a physicalist-frequentist conception of probabilities. Researching this is left to the reader. However, I mention it here because it's one way of keeping everything in the informational world convincingly physical. (I acknowledge the problem of what it even means for something to be physical, but defer to an adaptation of Ian Hacking's 'spray stuff' move: if X can interact causally with a quantum field per QFT then X is physical.)

It's long been observed that this 'hard science first' approach doesn't solve all of the salient conceptual problems. For example, there is an equally important conception of information based upon the program complexity measure of Andrey Kolmogorov. That measure of information is based upon the length of the minimal program/description for producing/generating a given piece of data. However, Kolmogorov's concept is more abstract, and doesn't apply directly to physical processes except inasmuch as such a process - in Shannon's theory - can produce a series of discrete symbols or data.
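
Kolmogorov complexity itself is uncomputable, but compression conveys the intuition. Here is a minimal Python sketch (mine, purely illustrative): the zlib-compressed length is only a crude, computable upper bound standing in for the true minimal description length.

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length in bytes of the zlib-compressed data: a crude, computable
    upper bound standing in for the (uncomputable) minimal description length."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500        # highly regular: very short minimal description
random_ish = os.urandom(1000)  # near-incompressible: description ~ the data itself

print(compressed_length(patterned))   # small: tens of bytes
print(compressed_length(random_ish))  # large: close to 1000 bytes
```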

Now, Shannon was an applied mathematician, not a philosopher or a metaphysician. So it is understandable that there are some problems with his concepts. The most consequential confusion is one which had no deleterious impact on Shannon's practical objectives: he often conflates the concept of information (which is also a form of entropy in a physical process or source) with the quantitative measure of information he presents. It's clear, however, that Shannon himself could not have thought these two things were the same. His measure of information is a number representing the aforementioned objective reduction in objective, frequentist statistical uncertainty at the destination about the source. The information is what that measure is intended to measure. Or, if that's too confusing, then it's Shannon's version of entropy at the source which is being measured.

For the purposes of this article, I will regard information as something realised by and inhering in physical stochastic processes (per Shannon) or - at minimum - physical structures (per Kolmogorov and - indirectly - Shannon.)

Some ‘Theory of Mind’ Contenders — Briefly

According to epiphenomenalism, the mind is an ‘epiphenomenon’ of the brain. It is a phenomenon that happens upon, or alongside (‘epi’), the functioning, processes, systems, and mechanisms of our neurology.

According to the original computational theory of mind (CTM) the brain is a symbol-processing system like modern digital computers are (although the processing and the symbols are both different — and not digital.) The exact nature of the symbols is not known, but is a ‘promissory note’ in neuroscience: a cheque written now to be honoured by future neuroscience. According to this theory, the symbols are generally taken to comprise a language of thought, or mentalese.

According to representational theories of mind (RTMs), the brain is a system that creates, manipulates, and processes representations of the world and our environment. There are many theories of mental representation on offer. Literally dozens. Hundreds, probably. Here is Stephen Stich writing in Mind in 1992:

Theories of mental representation were multiplying already in the 1980s

There is a lot of shared conceptual territory between CTMs and RTMs (both plural because of the many versions of each) and sometimes mental symbols in CTM mentalese (the proposed symbolic language of thought) are also representations, depending upon which theory one chooses. They don’t generally stray far from materialism and physicalist neuroscience, although the question of the nature of representations is much debated.

There are many other contender theories, including various versions of mental dualism (the classical Cartesian version of which is not taken seriously now by neuroscience) and various functionalist approaches to mind.

Here is Place surveying the theory of mind battlefield in 1988, and referring to his original 1956 treatise (note the reference to Eccles as the sole defender of dualism, and note also the reference to the rise of materialism):

U T Place writing in 1988 returns to his 1956 thesis

Note also the uncertainty and variety of views about the nature of mental states (apricot/orange highlighter):

U T Place writing in 1988 returns to his 1956 thesis

Even though these theorists had a psychophysical brain-state theory of mind, they couldn’t figure out what a brain state and a mental state are supposed to be exactly! (Pedantic exactitude is grist for the mill of analytic philosophy, and it is kind of important in neuroscience too!)

To be fair, they saw the diabolical details as a promissory note to be fulfilled by neuroscience. However, even above that level of abstraction and explanation they could not decide on the right concepts.

According to functionalism about mind and cognition, it is not the semantic, conceptual, or informational content of mental states that is important, but the functions that mental states play in relation to perception, cognition, and behaviour. Functionalism is thus similar to psychological behaviourism in the sense that it abstracts away from the details of cognition and focusses upon more easily observable macroscopic phenomena. (Behaviourism focussed on the outward behaviour of the organism; functionalism focusses on the apparent bodily function(s) performed by a mental process or state, where neurological functions that occur within the central and peripheral nervous system are also bodily.)

Here is philosopher David Lewis reviewing a presentation of functionalism by philosopher Hilary Putnam (in 1969!):

Lewis explaining Putnam’s functionalist theory of mind

The overarching idea is that the outputs from the mental states perform — or cause to be performed — some function realised via (bodily) action(s) that are conducive to survival in the environment.

There are various combinations of all of these theories, and lots of variations of each of them. These are all what cognitive scientists and philosophers broadly refer to as theories of mind.

All of these theories of mind and cognition have limitations and conceptual aporia, or ‘blind spots’. For example, one of the many problems with epiphenomenalism is that it has never been clear what, exactly, the non-physical, mental epiphenomenon is.

If it is emphatically not the information processing being done in neural subsystems, nor the neural subsystems themselves, then what exactly is it? It seems hard to get a definitive and coherent answer, and epiphenomenalists generally don’t offer a method of empirically and experimentally verifying epiphenomena.

Epiphenomenalists tend only to tell us what they think mind is not, rather than what it is, empirically and testably, per normative scientific method and experimentation.

Moreover, it is not clear what, if any, causation occurs between such ontologically mysterious epiphenomena (whatever, exactly, they are) and the brain states (which they are emphatically not).

Epiphenomenalists also tend to get stuck on the question of ‘downward causation’ — the problem of whether the epiphenomenon of the mind can cause changes in processes and states in the brain, or is instead itself only caused (upward causation) by changes in states and processes in the brain.

To be frank, some of the problems with epiphenomenalism make traditional identity theory that focusses on the ill-defined concept of brain states look almost attractive again.

Types, Tokens, and Multiple Realisability: Problems for Identity Theory

My own response to the problems of epiphenomenalism is that they’re a result of wrongly avoiding the idea that mental processes and events are identical to brain (neurological) processes and events. In other words — epiphenomenalism is an explanatorily inadequate, neo-dualist alternative to traditional identity theory.

Traditional identity theory has a lot of problems, but it is not as bad as it is often made out to be. It is just that its original proponents set some premises and expectations for it that were unnecessary and too strong, and that talk of brain states was a poor choice for discussing what the mind was identical to.

Identity theory pioneer U T Place’s original talk of brain processes as the target of the identity was thus more coherent. However, his discussion focused upon consciousness, and Place was working with 1950s neuroscience that had only just been introduced to Shannon’s information theory, with its mathematically rigorous conceptions and definitions of information sources.

In any case, the term ‘identity theory’ is often a misrepresented strawman target for epiphenomenalists and neo-dualists who essentially do not like any kind of reductionist, materialist, or physicalist theory of mind — often (but not always) on a religious or spiritual basis. When dealing with theories of mind one must always keep in mind that something so important and intrinsic to human nature will be a standing target of various kinds of emotionality and religious fervour.

Physicalism and reductionism are usually (not always) unpopular with mystics and with mysterians about consciousness — those philosophers of mind and consciousness who think that the mind and its properties are the primary demonstration of the truth of anti-physicalism. Consciousness, say mysterians, is in-principle ineffable to science, and so some mental things are not physical.

Nevertheless, traditional brain-mind state identity theory is not paid much attention any more. It tends to be hopelessly inexact and non-specific about what a brain state is in terms of contemporary information-theoretic neuroscience. Traditional brain process identity theory — originally proposed by U T Place in 1956 — is more coherent and based upon a more neuro-scientifically tractable and workable model. Yet it has still had multiple problems historically.

There were two primary problems that plagued traditional identity theory of both the state-focused and process-focused variety (and of varieties that discussed both more or less interchangeably). One was the distinction between types and tokens of brain states, and the other more serious challenge was the associated problem of multiple realisability proposed by philosopher Hilary Putnam.

Tokens of brain states are specific instances of types of brain states. Similarly for brain events and processes that sustain brain states in various ways (although, as I have indicated, the details of how processes map to states were left unclear and incomplete.) The type-token distinction created problems because identity theory meant something different depending upon which one — types or tokens of brain states/processes/events — one was referring to. Early identity theorists were writing in the 1950s and 1960s, and at that time even David Armstrong’s naturalistic treatises about nominalism and universals (types) were well in the future (1978).

Moreover, mind-brain type identity theory (where the type of mind state is identical to the type of brain state) had problems because, as Putnam argued, different earthly organisms and different possible life forms or artificial intelligences might realise the same mind state using completely different physical brain states with completely different structures and physical bases. If different natural kinds of organism and perhaps even different artificial kinds of system (AI) can produce the same mental states or processes using different and differently configured brain states and processes, then a given mental state/process/event does not map to a single type of neuronal state/process/event.

This thesis — called multiple realisability — was partly a result of the rise of CTMs and RTMs: computational and representational theories of mind. If computers were able to process symbols to perform tasks, then perhaps it followed that ultimately mental states could be proposed to be realised on a similar basis.

Informational Identity Theory

According to my proposed informational identity theory, the identity holds between mental processes — referred to as mental information sources — and the information and information processing in sub-personal information sources (modelled at an appropriate level of abstraction). The sub-personal source(s) doing the processing and the information processed are both included in what is identical to mind processes or mental information sources.

Sources here are defined according to classical information theory. According to Claude E Shannon’s 1948 A Mathematical Theory of Communication, an information source is a physical stochastic process. That is, a physical process that changes its structure or configuration over time such that it exhibits statistical patterns. The term ‘subpersonal source’ is simply my paraphrase of Daniel Dennett’s concept of a subpersonal process: a process in the brain that performs a specific task, which process we are not subjectively familiar with or even aware of. These are neurological information processes comprised of neurological subprocesses that are completely hidden from us subjectively yet sustain our cognition, perception, and conscious awareness.
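
As a toy illustration of 'information source = physical stochastic process', here is a minimal Python sketch (my own; the two states and transition probabilities are invented, and stand in only very loosely for coarse-grained configurations of some neurological process):

```python
import random

# Transition probabilities of a two-state Markov chain: the value is the
# probability of moving to (or staying in) state "A" from the given state.
TRANSITIONS = {"A": 0.9, "B": 0.5}

def emit_states(n: int, state: str = "A") -> list[str]:
    """Generate n successive states of the source."""
    states = []
    for _ in range(n):
        states.append(state)
        state = "A" if random.random() < TRANSITIONS[state] else "B"
    return states

sequence = emit_states(10_000)
# The stable empirical frequencies are the 'statistical patterns' that make
# this a stochastic process, and hence a Shannon source.
print({s: sequence.count(s) / len(sequence) for s in ("A", "B")})
```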

There are a number of helpful features of this theory:

  1. Information sources are rigorously mathematically defined and yet they can be any physical stochastic process.
  2. Information sources can be comprised of n other information sources, where n is any natural number (see the sketch after this list). (In the brain there might be an upper limit on this, but it will be very large and of little importance to our purposes. It might somehow become relevant in studies of consciousness, which I have suggested likely requires a minimum number of sources of adequate complexity.)
  3. The information sources that comprise a larger information source can be distributed in an arbitrary manner. One can have an information source comprised of the Pope’s entourage, a telephone conversation with your bestie, and the sensory-perceptual processing in the brain of a bumblebee pollinating a flower. (It probably wouldn’t be very useful, but it is a Shannon information source.)
  4. Subpersonal sources can be defined and measured at any arbitrary level of abstraction from the quantum level (if one has the ability) to the neurochemical level, to the level of neural circuits and ensembles, to the level of electrical activity in regions of the brain (and any combination thereof.)
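
Regarding point 2, here is a minimal sketch of composition (my own illustration with invented distributions): two independent component sources combine into one larger source over joint states, and for independent parts the entropies simply add.

```python
from itertools import product
from math import log2

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Two independent component sources over discrete states...
x = {"0": 0.5, "1": 0.5}    # 1.0 bit
y = {"a": 0.25, "b": 0.75}  # ~0.811 bits

# ...composed into one larger source whose states are joint states.
joint = {sx + sy: px * py
         for (sx, px), (sy, py) in product(x.items(), y.items())}

print(entropy(x), entropy(y))  # 1.0 and ~0.811
print(entropy(joint))          # ~1.811: entropies add for independent parts
```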

What I call informational identity theory says that our minds are identical to our cognition, which is identical to our brain’s physical processing of information in sub-personal sources, or neurologically-sustained information sources.

In case this is still somewhat unclear because of the terms and concepts involved, here is a basic taxonomy of views to help bring out the distinctions:

1 Brain-mind state identity theory:

  • Mental states broadly (and not very exactly or consistently) construed are identical to brain states (also not terribly well defined).
  • Type and token versions exist.

2 Brain-mind process identity theory:

  • Compatible with (1) as brain states are normally taken to be based upon processing and events.
  • Type and token versions exist.
  • Processes are not necessarily construed as information sources, but are often associated with the general concept of processing information. However, they’re often associated instead with signal processing, data processing, knowledge processing, thought processing, mental processing, perceptual processing, and with just neurological processing and processes that have neurological functions.

3 Brain-mind event identity theory:

  • Type and token versions exist.
  • Reconciles with (1) and (2) since processes can be regarded as series of events.
  • Lots of historical confusion about the relationship between events and states.
  • This is usually not stand alone, but comes with (1) or (2).

4 Brain-mind informational identity theory:

  • Loose types only at lower, more detailed levels of abstraction.
  • Types are types of information sources; they are loose (allowing small variations in the details of tokens) except at higher, more general levels of abstraction.
  • Similar to process identity theory but instead of processes it is specifically information sources and their information that the mental process is identical to.

The information of a source is set according to:

  1. The structure of the source.
  2. What that structure indicates about inputs to the source, or any signal or message that it indicates has configured it.
  3. Anything the structure of the source might somehow contribute to representing in intentional terms (e.g. picking out entities/events/processes/objects/dynamics in the distal and proximal environment.)
  4. The causal outcomes of the source and anything that they might indicate or signal in order to serve the survival and functional interests of the organism.

Functional conceptions of information can be included, but are not necessary. Algorithmic conceptions of information and the nature of information can also be included or applied.

According to informational identity theory, subpersonal sources have states per Shannon’s A Mathematical Theory of Communication, and these states are a more exact way of defining brain states. Whatever total set of sub-personal sources realises a given mental process is itself a sub-personal source.

However, there is no need to talk about mental states at all in informational identity theory. Mental processes identical to sub-personal sources and their information (processing) will do.

If some of this seems pedantic and misguided, then consider an exercise (not wanting simply to play language games): can you specify exactly and definitively the difference between, and the relationship between, a mental event and a mental state, or a brain event and a brain state? Can a brain event sometimes be a brain state? Are states just big events? Are states made up of events?

Think this is otiose? (A forgivable response.) Here is Place again in 1988, struggling to get a peg in the theoretic and conceptual ground:

And here is Lewis chipping away at Putnam’s claim that there are only brain functions and no brain states:

Do you have pain in your brain from reading this? Putnam said that pain is only a brain function. Lewis demonstrates that it can still be a brain state (whatever that is exactly) as well.

When considering the most complex functional information processing entity science knows of — ambiguity and inexactitude are probably not conducive to successful theorising.

The information processing model of cognition has been around since the 1980s in neuropsychology and cognitive science. Moreover, information processing was often understood to be relevant — to varying degrees — to process-focused mind-brain identity theories like that of Place. However, those theories did not see information processing as the solution to the multiple realisability challenge and to the problems with type identity theory. I suggest it offers the best solution, because it avails us of a way to manage types at different levels of abstraction elegantly and with coherence.

David Lewis realised the salience of the degree of variety and complexity (structural and functional) in neurological systems and processes within and between — for example — individuals and species:

“…a reasonable brain-state theorist would anticipate that pain might well be one brain state in the case of men, and some other brain (or non-brain) state in the case of mollusks. It might even be one brain state in the case of Putnam, another in the case of Lewis.”

There have been retorts to, and attempted rebuttals of, this position. However, although Lewis’ view is not considered a knock-down argument against identity-theory-killing multiple-realisability, neither are responses to it considered knock-down.

Putnam’s multiple realisability argument killed a certain very unnecessary version of identity theory, but ultimately served to direct the theory into better versions.

The important point to grasp from an information-theoretic and information-processing perspective is that there are different physical, neurological things that mental events, processes, and states can be identical to. If one pitches the target of the identity at the right level of abstraction — information processing and the information sources that do it — then the problems presented by multiple realisability for types and tokens are no longer relevant. In fact, multiple realisability is revealed as a natural feature of the way in which mental processes are identical to specific instances of neurological information processing.

Of course, as I have said — for this to be a physicalist informational identity theory, one must regard causal physical structure and processes as necessary conditions for information to exist.

Newer approaches to state and process identity theory were already based upon adding context and variation for brain states and processes as allowable in mind-brain identity. For example — adding that the identity between a mental state and a neurological state must occur at time T. This pushes type identity theory towards something more like token identity theory. However, according to my informational approach that more fine-grained kind of loose or weak type theory is a feature, not a bug.

To have a working identity theory, we only need any given mental process to be identical to — to reduce to at the appropriate level of abstraction — a specific set of physical neurological processes defined in an appropriate way. Types-versus-tokens be hanged, as it were. The instance of the information being processed and the sub-personal sources doing the processing — modelled at a relevant level of abstraction — will do.

We don’t expect every token of a given type of sub-personal source (Shannonian-Dennettian neurological process) to be perfectly structurally identical. We expect them to be informationally identical at an adequate level of abstraction. However, we expect them to all be physico-structurally and thus psycho-physically non-identical to each other, or at least certainly not perfectly identical.

The mental process or event is identical to the information of the neurological information processing, but the neuro-structural underpinnings of instances of such processing are all naturally necessarily going to be at least a tiny bit different (although often they might be very, very close to ontic-structurally the same when viewed at the right ‘magnification’ or level of abstraction.)

(‘Naturally necessarily’ is a reference to natural necessity. Natural necessity is the way in which nomic constraints or laws of nature force things to be a certain way. Like the way in which one can have a two-ton ball of blueberry yoghurt and suffer no serious consequences provided one does not drown in it, whereas a two-ton ball of the unstable isotope uranium-235 will cause serious problems very fast.)

So what my informational identity theory gets one is ‘typoken’ identity theory where the ‘tokeny types’ are the information source(s) and the information processing. They are ‘loosely’ typed according to what neurological systems are deployed and what causal information is processed by them (the sub-personal sources or Shannonian-Dennettian sub-personal processes). They’re ‘loosely’ typed because one doesn’t need them to be neurologically exactly the same between token instances to every level of structural and dynamic detail. Indeed this is in-principle impossible per natural necessity. (I will present a thought experiment below involving monozygotic twins that brings this out and brings out also the advantages of this approach.)

So here is the key structural concept again. No two ‘typoken’ (yes, I know it’s awful, but it’s easy to use, clean, and remember) instances of information processing in token instances of sub-personal sources are expected to be exactly the same. The causal information can and might be exactly the same at some relevant level of abstraction. In fact the information processing sub-personal neurological sources (remember this is just a Shannonian relabelling of Dennett’s subpersonal processes) are expected to be structurally different at some (low) level of detail in every case.

It is still identity theory. Why? Because each loose/weak ‘typoken’ instance of a neurologically-realised information process in the brain is nothing more than, and completely reduces to, the physical information processing done in the sub-personal sources of the right natural kind and type (essentially — the right neurological subsystems in the given neurological and psycho-physical phenotype).

What Could Go Wrong?

In cognitive science the answer to this question is always ‘a lot’ and the above excerpts from the writings of Lewis and Place are a demonstration thereof. What went wrong is always up to one’s philosophical opponents to decide. However, I can anticipate at least one important issue.

If there is a cost, or a weakness, to this approach, it is that one must regard information processing as necessarily physical, and also hold that physical dynamic structures are a necessary and sufficient condition for information. (Again, I suggest that the kind of necessity involved here is natural necessity, but this is harder to explain.)

So in the same way that brain-state identity theorists have problems with what a mental state is, and RTM-ers have problems with the nature of mental representation (refer to Stephen Stich’s lamentations above): there’s a question about the nature of information.

Since I am a physicalist and ontic structural realist about information, I don’t in fact see this requirement of physicalism about information processing (and realisation) as a weakness or disadvantage. However, critically: the nature of information is not decided upon. It is an open question. Not a done deal. It might never be a done deal.

Moreover, pluralism about the nature of information — the idea that information is different things in different scientific theoretic settings — is also coherent and prevalent in science. (Although science tends to be dominated by Shannon’s theory and a few well known algorithmic alternatives due to Alan Turing and Andrey Kolmogorov. Algorithmic information still has physical structure as a necessary condition, or so I contend.)

In some ways the problem with information is worse than that with states and representations, because there are a lot of different ways of defining the nature of information. However, some of those ways are very scientifically rigorous, and conducive to very clear and well-defined scientific modelling and conceptual analysis. In that sense the situation for information and neurological-mental information sources (or sub-personal sources) is not remotely as bad as it is for mental representations and mental states. In fact it is a vast improvement.

That all said: the best live alternative to my causal-structural view of the nature of information is probably functional information. The two can be reconciled in the sense that causal structural information is almost certainly a necessary — but not sufficient — condition for the obtaining of functional information.

Functional information can reconcile with informational identity theory and may in fact strengthen it. There is no in-principle reason why subpersonal information sources (neural information processes) cannot be analysed as sustaining functions, and in fact neuroscience does so often. The mental process/state simply becomes identical to the physical neurological processing and realisation of functional information.

An entire sub-personal source can be considered an output, and that output can be monitored by other sub-personal sources and thus be an input for them. This is inter-monitoring.

I don’t eschew functional information (nor structured algorithmic information), but for the purposes of informational identity theory I will stick first and foremost with the causal, dynamic-structural, processual conception of information sources familiar from the metaphysics and model of Shannon’s theory. Such information is related to the functioning of the brain in terms of the causal power — the ability to cause certain kinds of effects — of specific processes with specific structures or configurations. (A necessary condition for functional information.)

In other words — if one wants functional information then one necessarily needs first to have certain processes with certain structures to realise it (and said processes may well be evolved). If you profess to be able to get functional information any other way then I confess that you have me stumped.

More About Information Theory and Brain States

U T Place above introduced us to the problems with defining brain states and mental states. I have always disliked the terminology ‘brain states’ because although it is coherent in some ways, it is also misleading by way of being an incomplete abstraction.

Talk of mental states as brain states is not very helpful in the context of informational identity theory, which focusses upon the many and various distributed, interacting, and inter-monitoring neurological processes that occur in the brain to effect perception and cognition. Those collective, interacting, sub-personal processes are what sustain what we might call states of mind.

Recall that such sub-personal processes are also sub-personal information sources, since according to Claude Shannon’s A Mathematical Theory of Communication, information sources are stochastic (exhibiting statistical patterns) physical processes.

In computer science, engineering, and in any applied science that deals with structurally and functionally complicated systems, ‘abstraction’ means abstracting out — or abstracting away from — the details of some systems in order to focus on analysing and explaining the function, mechanism, and structure of others. In computer science and neuroscience this approach of ‘abstracting out’ some subsystems to focus on others is often called ‘black boxing’ and is part of modularising or modularity: breaking a complex system down into manageable modules. The details of a connected or interdependent system are hidden or ignored (‘black boxed’) and only the inputs to and outputs from that system are included in any analysis or modelling.
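
A minimal sketch of what black boxing amounts to in code (the class and names are mine, purely illustrative):

```python
class BlackBoxedSubsystem:
    """A module whose internal workings are deliberately hidden: the rest
    of the model interacts with it only via its input/output contract."""

    def __init__(self) -> None:
        self._hidden_state = 0.0  # implementation detail, invisible outside

    def process(self, signal: float) -> float:
        # At the chosen level of abstraction, the analysis records only
        # that `signal` goes in and a transformed value comes out.
        self._hidden_state += signal
        return signal * 0.5

box = BlackBoxedSubsystem()
print(box.process(2.0))  # all the model sees: input 2.0 -> output 1.0
```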

The term ‘brain state’ tends to give the impression that the mind is some kind of static entity when of course — as neuroscience and our own subjective experience both tell us, and as Place rightly assumed (as does the phenomenology of 20th-century phenomenologist Maurice Merleau-Ponty) — the brain and cognition are very dynamic. The brain is constantly changing and processing new (and memory-stored) information.

We’ve already witnessed in earlier sections Place’s struggle with this throughout the second half of the last century. (Keep in mind that U T Place wrote his original treatise in 1956 — eight years after the publication of Shannon’s 1948 A Mathematical Theory of Communication, with its central conception of information sources as physical stochastic processes. Shannon’s theory had already had a major impact on many sciences by the late 1950s, including psychology and neuroscience.)

Subjectively, mentally healthy individuals experience a fairly (although not completely) singular state of mind at a given time in order to perform functions and tasks and to focus on solving problems. Objectively, the many subsystems of the brain — from neural circuits to neural ensembles and assemblies of neurons to rhythmic glial pulses or waves — work in a networked, unified manner in order to sustain such task-orientated ‘states’.

However, processes and information processing are the basis of all of these ‘states’. In information-theoretic terms, a brain state is a discrete state of a set of dynamic, structured systems and processes that add up to an information source: the sub-personal sources I have introduced several times above.

The brain certainly contains various somewhat static structures. These are the basis of our long term memory types: our episodic memory (experiences), our semantic memory (what things mean), and our procedural memory (how to perform functions and tasks.) However, even these change often — and certainly no electro-chemically active, neuroprotein-and-neurochemical-based system that depends upon a dynamic metabolism, and upon such processes as the glymphatic clearing of metabolic toxins from the brain's fluid bath, will be completely static. With a truly static neurological basis of memory we would not be able to learn, nor to forget redundant information.

Yet the concept of brain states is in fact — ironically — itself largely grounded in information theory. In Claude E Shannon’s 1948 A Mathematical Theory of Communication an information source is defined as a physical stochastic process. Shannon’s applied statistical model is sophisticated and elegant. Importantly, it has both a continuous and a discrete interpretation, or version.

It turns out — for practical mathematical reasons — that modelling continuous natural information sources, like the continuous undulating sound of someone producing phonemic speech (human language), is most easily done by approximating such sources using the discrete model. This is based upon the same discretising principle as digitising music and video. One samples discrete values at fixed time intervals from the continuous source and stores that sequence of values.

Similarly in Shannon’s theory one discretises the continuous source into a long sequence of discrete states. Thus, at any given moment, the continuous information source (physical stochastic process) has a specific discrete state. Each time-point or momentary snapshot of the source is a discrete source state. How long is the moment? That’s arbitrary, but clearly it needs to be non-zero. Long enough to get a set of values for all of the variables of interest that belong to that source at the relevant or chosen level of abstraction.
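
Here is a minimal sketch of that discretisation (my own illustration; the 440 Hz sine wave stands in for a continuous natural source):

```python
import math

def discretise(duration_s: float, rate_hz: float, levels: int) -> list[int]:
    """Approximate a continuous source (here a 440 Hz sine wave) as a
    sequence of discrete states: sample at fixed time intervals, then
    round each sample to one of a finite set of amplitude levels."""
    states = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        amplitude = math.sin(2 * math.pi * 440 * t)  # continuous value in [-1, 1]
        states.append(round((amplitude + 1) / 2 * (levels - 1)))  # discrete state index
    return states

# e.g. 5 ms of 'signal' sampled at 8 kHz with 16 discrete states per sample
print(discretise(0.005, 8000.0, 16))
```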

(Note that a level of abstraction is not a degree of abstraction. It is the level at which the abstraction of the model is set: the set of physical variables chosen to be of interest at a given ontological level for a given analysis or explanation. For example, the level of abstraction for quantum mechanics is much lower and more detailed than the level of abstraction for — say — Newtonian mechanics applied to aircraft flight dynamics.)

People often like to refer to having a certain ‘state of mind’ and of course there are many features and properties of our conscious awareness and thinking that are relatively stable over time. That is one of the functions that memory and our neurologically-sustained stream of consciousness, or conscious awareness, performs for us. It gives us the ability to retain focus and hold information pertaining to situational awareness, long-term goals, and everyday tasks.

This kind of mental activity is organised and carried out by our executive function. The neurological seat of our executive function is called the central executive. In neuro-anatomical terms it resides mostly in the prefrontal cortex at the front of the brain, above the eyes. It is also identified with the fronto-parietal network (FPN). It sustains rational problem assessment, solving, and planning.

The central executive — itself a collection of inter-monitoring and interacting sub-personal information sources in the FPN — constantly refers to other sub-personal sources sustaining our long- and short-term memory using neural circuits in our hippocampus and cerebellum, and networks of neurons across both working in unison (as well as other highly interconnected areas of the brain). It involves ongoing neurologically-sustained information processing that continues even while we sleep — albeit at much lower activity levels of mental ‘idling’ that happen in conjunction with the operation of what is called the default mode network or DMN.

So — sub-personal information processes, or neurological processing of information, are a necessary condition for brain states, which latter are not really static and are an idealisation even in Shannon’s information-theoretic terms. Indeed, brain state types could only ever have been based upon an idealisation that abstracts out neuro-structural and neuro-processual differences that might — or might not — make a difference to brain functions.

Two token instances of a certain natural kind of neurological information processing system might have the same basic phenotypic components such as, say, the same basic components of the hippocampus doing memory processing of similar information according to the multiple trace theory of memory function. Yet those two instances will be naturally necessarily not exactly identically structured token instances of the same type of system.

The tokens of the sub-personal information processing system are of a loose (pheno)type. They process the same information the ‘same’ way, but are naturally-necessarily not internally-structurally exactly the same.

An Illustrative Thought Experiment

Each token instance of a certain natural kind of neurological information processing system — even in two monozygotic twins — will necessarily exhibit tiny micro-structural and neurostructural variations.

In other words, the two monozygotic twins might be processing visual imagery of the exact same object from exactly the same angle in exactly the same lighting conditions (although presumably they will have to do this at two separate times), and yet even if all of the external perceptual, distal, and proximal details could be held constant/equal between the two viewings, the internal neurological processing structures simply cannot have the exact same structure in each twin’s brain, by natural necessity.

So — some clever technicians have set up the experiment so that it manages to be ideal — to perfectly replicate all of the variables between experiments (impossible to do). They take the twins one at a time and screw their noggins into a brace that prevents cranial movement, adjusting for any tiny variations in skull shape and sub-cutaneous cellulite levels, and even ensuring that the two subjects have the exact same fitness levels and dietary health…

Clearly — the experiment is an impossible idealisation. Even if each twin deploys exactly the same set of neurological subsystems (sub-personal sources) in perceptual and cognitive processing, there will be unavoidably ineliminable differences in the configuration of each at the microstructural, neurostructural level. Moreover, it’s impossible that the rolling out of the states of the deployed subpersonal sources in each twin will match over time.

In much the same way — the two sub-personal, neurological, weakly or loosely ‘typed’ ‘typoken’ information-processing systems will have unavoidably different configurations at multiple levels of (albeit very detailed, neuron-level) abstraction. There might be less difference at more macroscopic levels of abstraction, but certainly at lower levels the divergence will increase by natural necessity (due to entropic nomic constraints at minimum).

It is not physically possible to align and match the between-subjects experiments perfectly. The more detailed the experiment and the more detailed the level of abstraction, the more impossible it is to get perfect alignment and structural mapping (or anything remotely like it.) There are going to be a bunch of neurons misaligned, a bunch missing, and some neural circuits laid out differently spatiotemporally at the neurostructural and microstructural level.

Naturally necessarily.

Put in other words — even if the experimenters choose a certain set of subsystems at a certain level of abstraction to focus on a certain set of very specific physical variables it will be impossible to get alignment and perfect 1–1 structural and processual mapping.

To be frank, given contemporary neuroscience and our understanding of the complexity of human neurology since at least the 1970s — it is somewhat trivially true that this is a natural necessity. In other words: it is somewhat obvious and undeniable. The more structurally and processually complex the instances of the systems being matched or compared the more different they will naturally necessarily be at low or detailed levels of ontological abstraction.

The more structurally and functionally complex an information processing system and its configurations are, the more unlikely it is that two token instances of the same system processing the same information in two conspecific organisms will be perfect duplicates of each other.

Informational Identity Theory Answers Putnam

My proposed informational identity theory solves Putnam’s multiple realisability challenge in at least two ways:

  1. By simply changing the focus of the analysis to information-process ‘typokens’ of ‘loose’ types of information processing and processes and
  2. By allowing abstracting-out certain details of the neurological processes at an arbitrary chosen level of abstraction to facilitate (1).

(1) means that it doesn’t necessarily matter what specific ‘typoken’ configuration of a physical processing mechanism (sub-personal source) is deployed to realise the mental source, so long as the same information is processed appropriately to sustain whatever causes, outcomes, actions, functions and semantic and conceptual (cognitive) content are realised, required, or involved.

However, informational identity theory also allows for the possibility that — as philosopher David Lewis argued — there might be a very large number of variations of more general process-state types. According to informational identity theory, mental processes and states are realised by the appropriate loose, or weak, type of processing of the appropriate information.

However, it is possible that the same information processing might be achieved by physical systems and processes that are configured differently to varying degrees. Their structural differences might be small or large, yet their informational profiles very similar or even identical, depending upon the level of abstraction at which the information processing is realised.

At the right level of granularity and abstraction the differences may become irrelevant in terms of bits of information, indication of inputs and upstream causal sources, causal outputs, and functional information outputs. The correct type(s) of indication, intentional content, function, and action will be instantiated as a result of the configuration and output of the sub-personal sources.
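
A final toy sketch of the point (mine; the function and its two realisers are invented): two structurally different realisers of the same input-output mapping are informationally identical at the input/output level of abstraction.

```python
# Two structurally different 'realisers' of the same information processing:
# a lookup table and an arithmetic rule computing the same function.
LOOKUP = {0: 0, 1: 1, 2: 4, 3: 9}

def square_by_table(x: int) -> int:
    return LOOKUP[x]

def square_by_arithmetic(x: int) -> int:
    return x * x

# Viewed at the input/output level of abstraction, the two realisers are
# informationally identical despite their different internal structure:
# observing the output reduces uncertainty about the input identically.
assert all(square_by_table(x) == square_by_arithmetic(x) for x in range(4))
```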

