
Reductionism

Reductionism illustrated as love equals oxytocin

Oxytocin is the cuddle hormone – it is what makes us feel affectionate.
So where is love in all this?

Is this chemical all that there is to this powerful emotion?

Adapted from an image of the oxytocin molecule in Wikimedia Commons
Edgar181 – Accessed 8 Sept. 2015

Introduction – Reductionism

Doesn’t biology, when all is said and done, really boil down to just physics and chemistry?

What are we to make of an assertion like this?

Is it correct . . . or is it just playing with words? Is there something special and unique about biology that cannot be expressed in a simple physicochemical way? If it is true, then will we, in the future, find ourselves replacing biological language with physicochemical language? If you think this is unlikely, then what are the problems or misconceptions that will prevent it from happening?

The questions outlined above relate to one of the most vexed questions in science today – the relationship between the various scientific disciplines: what characteristics do they share, and what makes each distinct? Are some disciplines more scientific than others? What are the differences in principles and methodology between, on the one hand, the so-called hard sciences like physics and chemistry and, on the other, the soft or special sciences like biology, sociology, politics, and economics? Is there a foundational subject or subjects? If biological explanations are ‘improved’ by being ‘broken down’ (explained or analyzed) in terms of physics and chemistry (reductionism), then can explanations be equally improved by being ‘built up’ into greater wholes?

The problem

The problem of reductionism is notoriously complicated because it touches on so many concerns in the philosophy of science. Among the major topics are: the relationship between physics and the rest of science, including debates about the difference between ‘hard’ and ‘soft’ science and between physics and the special sciences; the difference (if any) between physics and biology; and the relationship between the brain as physico-chemical processes and the mind or consciousness.

A brief scanning of the literature on reductionism quickly reveals its complexity as it hits up against a whole lexicon of daunting specialist terms, all warning you of the minefield ahead: constructivism, emergence, foundationalism, holism, organicism, vitalism, eliminativism, supervenience, multiple realization, epiphenomena, teleology, qualia, degeneracy, consilience, granularity, realism and anti-realism, perspectivism, and much more . . . all seemingly mixed up into a scientific and philosophical soup of ideas.

Reductionism lies at the heart of two competing notions or paradigms concerning the way we should be doing science based on different metaphysical systems . . . different assumptions about the nature of reality itself. The two paradigms are not distinct, being related in complex ways. However, for ease of exposition they can be contrasted as, on the one hand, reductionism (foundationalism) and, on the other hand, holism (emergentism).

As understood here:

 

Reductionism emphasizes: the unity of science; the ideal of mathematics; the foundation of science in the laws, theories and concepts of physics; and the primacy of analysis as a mode of explanation – the understanding of scientific entities in terms of the operation of their parts.

Holism challenges the notion of a unified science, advocates anti-foundationalism by asserting the validity of independent domains of discourse, and the equivalent use and validity of synthesis as a means of scientific explanation – the understanding of scientific entities in terms of their relationship to more encompassing wholes.

 

The question being posed is whether biology ‘. . . differs in its subject-matter, conceptual framework and methodology from the physical sciences’.[12]

We may be suspicious of reductionist claims but rarely are they subject to close scrutiny by practicing biologists. In this article I shall try to draw some of the threads of this vexed problem together. For simplicity, and to challenge the reader, the article will develop an overall claim based on a series of challengeable principles.

The reductionist challenge

One way of loosely circumscribing reductionism is to regard it as the translation of ideas from one domain of knowledge to another. In this form it is often used as a way of simplifying, debunking, or explaining away.

Reductionist claims usually take the form ‘A is really just B’ or ‘A is nothing but B’. A well-known example would be the statement ‘humans are really just DNA’s way of making more DNA’. Is this a serious, valid, and useful scientific claim?

After being confronted by a reductionist claim of this sort we are left wondering whether we have been cheated – thinking that something important, even critical, has been ignored or passed over – but not knowing what that is.

There is the implication that some subjects, language, or ideas are superfluous, that they can be eliminated altogether or explained and understood in a scientifically more respectable way.

Examples

Actual examples of reduction in the history of science are quite rare but there is the move from classical thermodynamics to statistical mechanics; the transition from physical optics to Maxwell’s electromagnetic theory whose equations have resulted in smartphones and TVs; and the transition from Newtonian mechanics to Einsteinian relativistic mechanics. Maybe in biology there is the translation of Mendel’s gene theory into the biochemistry of DNA. But when do we know that such a task has been completed successfully?

One famous attempt at reduction was that of British philosophers Alfred North Whitehead and Bertrand Russell who, in Principia Mathematica (1910, 1912, 1913, & 2nd edn 1927), examined the foundations of mathematics. Using axioms and inference rules they tried, unsuccessfully, to explain mathematics purely in terms of logic and set theory.

To understand the provocative flavour of reductionism here are some further examples . . .

It might be claimed, for instance, that history is just a fancy name for what is really only biology – the study of human behaviour; that morality is just a set of functional biological adaptations; that concepts and mental images in our minds are just physicochemical processes in our brains; or that sociology is a fiction because there is no such thing as ‘society’ just groups of individuals.

Various forms of reduction have become ‘-isms’: like psychologism – the claim that many aspects of our behaviour can be explained in purely psychological terms; biological determinism – that human behaviour can be explained in purely biological terms; environmental determinism – that social development is largely a consequence of environmental factors; genetic determinism – that genes determine our behaviour more than culture.

In the humanities similar cases, perhaps slightly different in character, might be for example a historical or literary analysis from the perspective of psychoanalysis or Marxist theory.

Of special current relevance is the way that the mind reduces to the brain, the mental to the neural, and the neural to the physico-chemical.

One articulate contemporary statement of scientific reductionism in general is that of Alex Rosenberg, Professor of Philosophy at Duke University in America. In The Atheist’s Guide to Reality: Enjoying Life without Illusions (2011) Rosenberg poses his thesis with an uncompromising directness. He is an advocate of scientism[1] and the claim that ‘science alone gives us genuine knowledge of reality‘, that ‘What ultimately exist are just fermions and bosons and the physical laws that describe the way these particles and the larger objects made up of them behave‘, that ‘the physical facts fix all the facts‘ in a ‘purposeless and meaningless world‘. Further, since ‘morality is illusory‘, the consistent atheist must be a nihilist, ‘albeit a nice one‘.

Rosenberg’s view may be characterized as an extreme variant of materialism[5] or physicalism[2] within the host of philosophical positions that can be adopted on such matters.[3][4][5][6][7][8][9] However, this is a particularly hard-nosed approach[10] when compared to naturalism[3] and scientific realism[4] which have a more relaxed attitude to the claims of science.

Rosenberg’s stance is that of physicalist reductionism[12] and it provides a useful target for those with different views.

Unpacking ‘reduction’

Already you may be thinking that these examples are just a matter of woolly thinking, oversimplifications, unreasoned comparisons, or semantic confusions . . . so let’s be more specific.

To get started we need to establish a common understanding, some ground rules. What exactly do we mean by ‘reduction’? What is being reduced when we suggest reducing X to Y . . . are we talking about one, several, or all of the following: properties, terms, concepts, physical objects or phenomena, explanations, meanings, theories, principles, and laws?

Philosophers have found a way of simplifying this problem by distinguishing three kinds of reduction as it relates to: existence, explanation and methodology. This provides us with a broad classification of, on the one hand, the objects of reduction and, on the other, what is being claimed for a reduction.

EXISTENCE – what exists
EXPLANATION – how we claim that a reduction has been achieved
METHODOLOGY – how the reduction is carried out

This distinction will be needed in the discussions to come as it helps to clarify any disagreements about reductionist claims.

Principle 1 – when encountering a reductionist claim it helps to distinguish whether the claim is about existence, explanation, or methodology

Existence

These are reductionist (metaphysical) claims about what ‘really’ exists, made by reducing some entities or phenomena to others – often relating to modes of representation and the distinction between ‘appearance’ and ‘reality’. For example, it may be claimed that a table or chair, though seeming to be a solid unitary object, is ‘really’ made up of molecules that consist mostly of empty space (ontological reduction).

Explanation

These are reductionist claims about our ways of knowing, understanding, and explaining – such as the reduction of one theory to another. For example, that it is best to explain genes in terms of molecular biology (epistemological reduction).

Methodology

This relates to the problems of translating one form of knowledge into another: chemistry into physics, biology into chemistry and so on. For example, what exactly are the factors complicating the translation of words like ‘mitosis’, ‘predator’, ‘hibernation’, and ‘interest rates’ into physicochemical language (methodological reduction)?

Five Faces of reduction

I shall now ask you, the reader, to examine a set of challengeable principles that can be used to assess reductionist claims. The principles are organized around five topics that are key ingredients in the confusions and problems relating to reductionism:

1. The representation of reality in perception, cognition, and language
2. Explanation & causation
3. Scientific fundamentalism – that there is a unity of science based on a foundation of mathematics, physics, universal physical laws and constants, and fundamental particles
4. Emergentism (holism) – wholes, parts, and emergent properties
5. Domains of knowledge and the translation of ideas from one domain of knowledge into those of another

1. Reality & representation
The first topic looks at the ambiguities and confusions that can arise from the way we intuitively structure reality – the way our perception and cognition filter all our experience to give us a uniquely human outlook on reality. It also considers the way scientific language can conceal errors and ambiguities in how we describe and represent the world.

2. Explanation & causation
Causation is the glue that we use to bind our scientific explanations and it is, we believe, the bedrock on which science builds its observations and predictions. And yet causation is problematic to scientists and philosophers alike.

3. Foundationalism
This article examines the claim that there is a unity of science based on the foundations of physics and mathematics, that ‘physics fixes all the facts’. That, at least in principle, everything in the universe can be explained in terms of the fundamental constituents of matter and their relations – including normativity, function, purpose, mind, meaning, thoughts and representations. The actual foundations may be treated in terms of matter (the foundational physics of fundamental particles out of which all matter is made) or explanatory axioms (the physical laws that underpin the order of the cosmos).

4. Wholes and parts
This article examines the challenge to foundationalism, that wholes are in some sense more than an aggregation of parts, and that novelty has emerged in the universe in an unpredictable way by giving rise to new and unexpected features and properties – like the emergence of life from inanimate matter, and consciousness from brains.

5. Domains of knowledge
Science tends to arrange its subdisciplines in a sequence that runs from mathematics and logic to physics, chemistry, biology, behaviour and psychology, then the economic, social and political sciences. Is the segregation of scientific knowledge into these domains just a matter of convenience or does it relate in some way to the structure of the world? Related to this question is the way that each scientific discipline has developed its own particular language, principles, practices, and academic empires. How are these domains of knowledge to communicate with one another? Is it possible to translate one discipline into another?

 

A preliminary thought to ponder: the challenge for science and philosophy in the 21st century is not just to devise a physical account of the material universe in the form of some kind of unified field theory of space-time, M-theory, or string theory, but to provide an intellectually coherent account of the world that encompasses, among (many) other things, matter, life, the mind, consciousness, normativity, function and purpose, information, meaning, and representation.

1. Reality & representation

Whether something can be ‘reduced’ to something else depends largely on our intuitions about what there ‘is’ . . . about the nature of the objects in existence or, at least, the way we represent them in scientific theories, laws etc. This is a complex topic addressed in the article on representation.

2. Explanation & causation

Causation underlies the workings of the universe and our discourse about it. Anyone who is curious about the natural world must at some time or another in their lives have wondered about the true nature of causation, especially those people with a scientific curiosity. This series of articles on causation became necessary, not only for these reasons, but because causation is so frequently called on to do work in the philosophical debate about reductionism and today’s competing scientific world views.

It is dubious whether the reduction of causal relations to non-causal features has been achieved; scientific accounts are strong alternatives, with revisionary non-eliminative accounts finding favour. Can emergent entities play a causal role in the world? And is causation confined to the physical realm?

Two issues are addressed here: first, whether causation itself can be reduced to something simpler; and second, the role that causation plays in causal interactions operating within and between domains of knowledge. The outline of this article follows the account given by Humphreys in the Oxford Handbook of Causation of 2009.[2]

At the outset it is important to distinguish between ontological reduction – reduction between the objects of investigation themselves – and linguistic or conceptual reduction, the reduction of our representations of those objects.

Reduction of causation itself
Eliminative reduction of causation
We must decide whether causation is itself amenable to reductive treatment. Reduction may be eliminative: the reduced entity is considered dispensable because inaccessible (Hume’s claim that we do not experience causal connection), so we can eliminate it from our theoretical discourse and/or ontology (the Mill/Ramsey/Lewis model), substituting phenomena that are more amenable to direct empirical inspection. The most popular theory of this kind is Humean lawlike regularity, but in this group would also be the logical positivists, the logical empiricists (e.g. Ernest Nagel, Carl Hempel), Bertrand Russell, and many contemporary physicalists with an empiricist epistemology. Hume’s view was that we arrive at cause through the habit of association; in this way he removed causal necessity from the world by giving it a psychological foundation. A benign expression of this view would be that ‘C caused E when, from initial conditions A described using law-like statements, it can be deduced that E’.

Non-eliminative reduction of causation
Causation is so central to everyday explanation, scientific experiment, and action that many have adopted a non-eliminative position: X is reduced to Y but not eliminated, simply expressed in different concepts like probabilities, interventions, or lawlike regularities. Non-eliminativists like the late Australian philosopher David Armstrong hold that causation is essentially a primitive concept that we can at least sometimes access epistemically as contingent relations of nomic necessity among universals, and thus amenable to multiple realization.

Revisionary reduction of causation
Here the reduced concept is modified somewhat, as when folk causation is replaced by scientific causation. Most philosophical and self-conscious accounts of causation are revisionary to a greater or lesser degree.

Circularity
Many accounts of causation include reference to causation-like factors as occurs with natural necessity, counterfactual conditionals, and dispositions in what has become known as the modal circle. The fact that no fully satisfactory account of causation can totally eliminate the notion of cause itself is support for a primitivist case.

Domains of reduction
Discussions in both science and philosophy refer to ‘levels’ or ‘scales’ or ‘domains’ of both objects and discourse. So physics is overtopped by progressively more complex or inclusive layers of reality such as chemistry, biochemistry, biology, sociology etc. This hierarchically stratified characterization of reality is discussed elsewhere. Here the task is to examine the way causation might operate within and between these different objects and domains of discourse.
The attempt at reducing one domain to another is not a straightforward translation, as an account must be given of the different objects, terms, theories, laws, and properties and their role in causal processes. The preferred theory of causation (whether, say, a singularist or a regularity theory) will be pertinent to what kind of causal reduction may be possible.

Relations between domains
Suppose we are engaged in the reduction of a biological process to one in physics and chemistry, say the reduction of Mendelian genetics to biochemistry; then what kinds of causal interactions might we invoke? The causal relation might be: a relation of identity; an explicit definition; an implicit definition via a theory; a contingent statement of a lawlike connection; a relation of natural or metaphysical necessitation, as in supervenience; an explanatory relation; a relation of emergence; a realization relation; a relation of constitution; even causation itself. If the causation were indeed different in different domains then this might render reduction restricted or impossible. Accounts like counterfactual analysis are domain independent (p. 636).

However, there are domain-specific claims such as physicalism’s Humean supervenience. Under some theories causation is restricted to physical causation as the transfer of conserved physical quantities and this is difficult to apply to the social sciences.

Domain-specific causation & physicalism
Could it be that causation in biology is different from that in physics or sociology, or is causation of the same general kind – is there ‘social cause’ and ‘biological cause’, or just ’cause’? The most contentious area here is mental causation, where intentionality is often treated as ‘agency’ rather than ‘event’ causation.

Supervenience
In the 1960s domain reduction was promoted through the reduction of theories via bridging laws (Ernest Nagel). One major challenge for such an approach has been multiple realization, whereby something like ‘pain’ can be expressed physically in so many ways that further reduction seems unlikely, although this has been countered by supervenience accounts. For example, Humean supervenience regards the world as the spatio-temporal distribution of localized physical particulars, with everything else, including laws of nature and causal relations, supervening on this (p. 639). Supervenience is generally regarded as a non-reductive relation.

Functionalism
Functionalism characterizes properties in terms of their causal roles, and these roles may be multiply realized: money is causally realized by coins, cheques, promissory notes etc. The role of ‘doorstop’ can be functionally and reducibly defined, so not all cases of multiple realization are irreducible; irreducibility needs to be taken case by case. For Kim (1997; 1999) ‘Functionalization of a property is both necessary and sufficient for reduction …. it explains why reducible properties are predictable and explainable’. Since almost all properties can be functionalized, few need be candidates for emergent properties (p. 644).
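The idea of a functionally defined, multiply realized property can be sketched in code. The following is a toy illustration only (the classes and the `holds_door_open` test are hypothetical, invented for this example): membership in the kind ‘doorstop’ is fixed by causal role alone, not by physical constitution, so physically quite different things can realize it.

```python
# Toy sketch of functionalism and multiple realization.
# The 'doorstop' kind is defined purely by a causal role: anything that
# holds a door open counts, whatever it is physically made of.

class Wedge:
    material = "rubber"
    def holds_door_open(self):
        return True

class Brick:
    material = "clay"
    def holds_door_open(self):
        return True

class Feather:
    material = "keratin"
    def holds_door_open(self):
        return False  # too light to resist the door

def is_doorstop(thing):
    """Functional definition: membership depends only on causal role,
    not on the realizer's constitution."""
    return thing.holds_door_open()

# Physically different realizers of one functional kind.
realizers = [t for t in (Wedge(), Brick(), Feather()) if is_doorstop(t)]
print([type(t).__name__ for t in realizers])  # ['Wedge', 'Brick']
```

The point of the sketch is that nothing in `is_doorstop` mentions rubber or clay: the functional property abstracts away from its physical realizers, which is exactly why a type-identity with any one realizer fails.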

Upward & downward causation
The restriction of cause to physical domains is supported by the downward causation and exclusion argument.

Causal exclusion principle & non-reductive physicalism
The causal exclusion principle states that there cannot be more than one sufficient cause for an effect. If we accept this then how are we to account for the causes we allocate at large scales, say the cause of a rise in interest rates? What is the causal relevance of multiply realizable or functional properties (redness, pain, and mental properties)? Does this principle automatically devolve into smallism – that we ultimately explain everything all the way down to leptons and bosons, or smaller and more basic entities when we find them, because they are the ones doing the causal work? How can a macro situation have causal relevance if it can be fully accounted for at the micro scale? Macro properties then become epiphenomena: by-products with no causal work of their own.

The argument runs as follows. If C is causally sufficient for E then any other event D is causally irrelevant to E. Every physical event E has a physical event C causally sufficient for it. And if an event D merely supervenes on C then D is distinct from C – so D is excluded from the causal work.
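The exclusion argument can be set out schematically as follows (a reconstruction of the premises as just stated, not a formal proof):

```latex
\begin{align*}
&\text{(Exclusion)}    && \text{If } C \text{ is causally sufficient for } E,
                          \text{ then no event } D \neq C \text{ is a cause of } E.\\
&\text{(Closure)}      && \text{Every physical event } E \text{ has a physical cause } C
                          \text{ causally sufficient for } E.\\
&\text{(Distinctness)} && \text{If } D \text{ supervenes on } C, \text{ then } D \neq C.\\
&\therefore            && \text{Any merely supervening (e.g. mental) event } D
                          \text{ does no causal work.}
\end{align*}
```

Non-reductive physicalists typically resist the conclusion by rejecting or weakening one of the premises, most often Exclusion or Distinctness.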

There is increasing evidence supporting the causal autonomy of disciplinary discourse, or non-reductive physicalism. Properties in the special sciences are not identical to physical properties, since they are multiply realized, although they do supervene on (instances of) physical properties, since changes in the special properties entail changes in the physical properties; further, the special properties are causes and effects of other special properties.

A large-scale cause can exclude a small-scale cause. Pain might cause screaming while there is no equivalent neural property. This occurs when the trigger is extrinsic to the system. The pain resulting from a pin prick is initiated by the pin; it cannot possibly be initiated at the neural scale.

The exclusion principle can be applied to any kind of event that supervenes on physical events, and it shows that there is no clear causal role for supervening events.

The main questions to be addressed in relation to causation and reduction are: can causation itself be reduced; is there a base-level physicochemical causation underlying all other forms of causation; and how does causation operate (a) within non-physicochemical domains of discourse and scales, and (b) between them?

In posing these questions it should be noted that it is customary to discuss different academic disciplines as different domains of knowledge, each using its own specific terminology, theories and principles. So, for example, we have physics, chemistry, biology, and sociology being referred to as ‘domains of discourse’ and stratified into ‘levels’ or ‘scales’ of existence. From the outset a careful distinction must be made between ontological reduction, the reductive relations between objects themselves, and linguistic or conceptual reduction, which deals with our representations of these objects.

Cause & reductionism
So far in discussing reductionism it has been noted that at present we explain the world scientifically using several scales or perspectives. These scales correspond approximately to particular specialised academic disciplines with their own objects of study, terminologies, theories, and principles. One possible way of expressing this would be: matter, energy, motion, and force (physics); living organisms (biology); behaviour (psychology); and society (sociology, politics, economics). Each discipline has its own specialist objects of study, like quarks (physics), lungs (biology), desires (psychology), and interest rates (economics). Since it has been argued that each discipline is addressing the same physical reality from a different perspective or scale, the question arises as to the causal relationships between these various objects of study – between causes at different scales, perspectives, or, in the old terminology, ‘levels of organisation’. How do we reconcile causation at the fundamental-particle scale with causation at the political scale, assuming the physical reality that they are dealing with is the same?

To answer this question we need to do some groundwork … our modest philosophical program is to ask: What is causation and in what sense does it exist? Is it something that exists independently of us and, if not, in what way does it depend on us? Is causation part of the human-centred Manifest Image? What role does causation play in our reasoning? In other words we need to demonstrate that causation is either a fundamental fact of the universe, or some kind of mental construct, or something that can be explained in different and simpler terms.

If we assume that explanation proceeds by analysis or synthesis, and we regard fermions and bosons as the smallest units of matter, then causation must act primarily from the wider context. A rise in interest rates, or the pumping of a heart, cannot be initiated by fermions and bosons themselves. To make sense of the fermions and bosons that exist in a heart we must consider their wider context.

Does causation occur at all scales, depending on its initiators, or is there a privileged foundational scale, with macroscales explained by microscales – genes (in humans about 25,000 genes and 100,000 proteins) coding for proteins, then cells, tissues, organs, and finally the organism? That is, a causal chain that leads to progressively larger, more inclusive, and more complex structures. This is the central dogma of genetic determinism. But does causation also occur between cells, tissues, and organs? Genes are triggered by transcription factors that turn them on and off; the environment acts causally from outside the organism, along with other constraining factors, such as homeostasis, at all scales. And evolution occurs through changes in the genotype that are produced by selection of the phenotype, as natural selection expresses the organism-environment continuum.

If ‘levels’ or ‘scales’ do not exist as separate physical objects then there is only one fundamental mode of being. There is simply one physical reality that can be interpreted or explained in different ways: it has no foundational scale or level.

Weak emergence holds that descriptions at scale X are shorthand for descriptions at a lower scale Y; emergence is strong when no description at Y can be given for what happens at X.

Universal laws apply to biology – an unsupported elephant will fall to the ground – but biology also has its own causal regularities that are, of their very nature, restricted to living organisms.

A cause can be sufficient for its effect but not necessary (a piece of glass C starting a fire E) – we can infer E from C but not C from E; or it may be necessary but not sufficient (the presence of oxygen C for a fire E) – we can infer C from E but not E from C. Under this characterization cause can be defined as sufficient conditions (or even as necessary and sufficient conditions).
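These two inference directions can be made concrete with a small sketch. The scenario data below are hypothetical, invented for illustration; the point is only that sufficiency and necessity are checked in opposite directions across a set of observed cases.

```python
# Toy sketch: causes treated as conditions over observed scenarios,
# checking the two inference directions described in the text.

def sufficient(cause, effect, scenarios):
    """C is sufficient for E if E holds in every scenario where C holds."""
    return all(s[effect] for s in scenarios if s[cause])

def necessary(cause, effect, scenarios):
    """C is necessary for E if C holds in every scenario where E holds."""
    return all(s[cause] for s in scenarios if s[effect])

# Hypothetical observations (invented data for the glass/oxygen examples).
scenarios = [
    {"glass": True,  "oxygen": True,  "fire": True},   # glass focuses sunlight
    {"glass": False, "oxygen": True,  "fire": True},   # fire started another way
    {"glass": False, "oxygen": True,  "fire": False},  # no ignition source
    {"glass": False, "oxygen": False, "fire": False},  # no oxygen, no fire
]

# Glass is sufficient but not necessary: fire also occurs without it.
print(sufficient("glass", "fire", scenarios))   # True
print(necessary("glass", "fire", scenarios))    # False

# Oxygen is necessary but not sufficient: it is present without fire.
print(necessary("oxygen", "fire", scenarios))   # True
print(sufficient("oxygen", "fire", scenarios))  # False
```

The asymmetry in the two functions mirrors the asymmetry in the prose: sufficiency licenses inference from C to E, necessity from E to C.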

Some scales of explanation or causal description are more appropriate than others: it is possible to provide an explanation that is either overly general or overly detailed. What is appropriate depends on the causal structure – on what would provide the most effective terms and structures for empirical investigation. This contrasts with the view that there is a fundamental or foundational scale at which explanation is most complete (Woodward 2009). Causes need to be appropriate to their effects: we do not expect bosons to influence interest rates, or interest rates to affect the configuration of sub-atomic particles. Fine-grained explanations may be more stable, but not always (Woodward 2009).

One area where this tension expresses itself is in the argument over the mechanism of biological selection in evolution. Should we regard natural selection as ultimately and inevitably a consequence of what is going on in the genes (see Richard Dawkins’ book The Selfish Gene), or are there causal influences that operate between cells, between tissues, between individuals, between populations, and in relation to causes generated by the environment?

Noble, D. 2012. A Theory of Biological Relativity. Interface Focus 2: 55-64.

It is widely assumed that large-scale causes can be reduced to small-scale causes, the macro to the micro: that macro causation frequently (but not always) falls under micro laws of nature. This presupposes a means of correlating the relata at the different scales. It might be interpreted as microdeterminism, the claim that the macro world is a consequence of the micro world: the causal order of the macro world emerges out of the causal order of the micro world. A strict interpretation would be that a macro causal relation exists between two events when there are micro descriptions of the events instantiating a physical law of nature; a more relaxed version, that there are causal relations between events that supervene on such micro events. It might also be the case that the existence of necessitating lawful microdeterminism does not entail causal completeness. Perhaps in some cases there is counterfactual dependence at the macro but not the micro scale.

Granularity & reductionism
We are tempted to think that we can improve the precision of causal explanations by giving more detail or by being more ‘scientific’. For example, I might explain how driving over a dog was related to my personal psychology, the biochemical activity going on in my brain, the politics of the suburb where the accident occurred, and so on. That is, the explanation could be given using language and concepts taken from different domains of knowledge: psychology, politics, sociology, biochemistry and so on. The same situation can be described using different domains of knowledge, scales of existence, and so on, and what is of special interest is that the cause will be different depending on the perspective chosen. For simplicity, the level of detail chosen for an explanation is referred to as its granularity. This raises the problems of reduction discussed elsewhere. Is there a foundational or more informative scale or terminology that can be used? Is an explanation taken to the smallest possible physical scale the best explanation? Are the causal relations dependent on more metaphysically basic facts like fundamental laws? Do facts about organisms beneficially reduce to biochemical facts … and so on. Is fine grain best?

Principle 3 – Any description of causation presents the metaphysical challenge of selecting the grain of the terms and conditions to be employed

We can appear to express the same cause using different terms that seem to alter the meaning, and therefore the causal relations, under consideration. For example, we might replace ‘The match caused the fire’ with ‘Friction acting on phosphorus produced a flame that caused the fire’. This raises the question ‘But what was really the cause?’, with the potential for seemingly different answers when we want only one. The depth of detail in terminology is sometimes referred to as granularity, and it raises the question of whether some explanations are more basic or fundamental than others, and whether some statements can be beneficially reduced to others (reductionism).
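The point can be made concrete with a toy sketch in Python (an illustration only; the data structure and grain names are hypothetical, not taken from any source): the same event carries descriptions at several grains, and the 'cause' we report depends entirely on which grain we select.

```python
# Toy illustration of granularity: one event, several causal descriptions.
# The grain names and wording are invented for illustration.
descriptions = {
    "everyday": "The match caused the fire.",
    "chemical": "Friction acting on phosphorus produced a flame that caused the fire.",
    "physical": "Energy transfer raised local molecules past their ignition threshold.",
}

def cause_at(grain: str) -> str:
    """Return the causal description selected at the chosen grain."""
    return descriptions[grain]

for grain in descriptions:
    print(f"{grain:>8}: {cause_at(grain)}")
```

Nothing in the event itself privileges one entry over another; the selection is made by the enquirer, which is just the metaphysical challenge the principle above describes.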

This gives us an extended definition of science: science studies the order of the world by investigating causal processes. Causal processes are of many kinds. Though contentious, we might add that we must resist the temptation to reduce causes of one kind to causes of another kind. Causally it makes no sense to reduce biology to physics by saying that fermions and bosons cause the heart to beat. A heart might consist of fermions and bosons, but these do not have causal efficacy in this sense. This takes us away from the traditional method of attempting to define science in terms of its methodology (the hypothetico-deductive or deductive-nomological method).

Multiple realization
Physicalists can be divided into two camps: those who think everything can be reduced to physics (reductive physicalists) and those who do not (nonreductive physicalists). The reductive physicalist claims a type-identity thesis such that, for example, mental properties like feelings are identical with physical properties, not merely caused by them. To picture two entities, one acting causally on the other, seems mistaken, the two being, in fact, one and the same. Similarly, the connection between temperature and mean molecular kinetic energy is one of identity, not causation; perhaps the same can be said of life and complex biochemistry. The question arises, though, as to the identity of objects. Is pain physically identical in a human and a herring? Here it seems that pain can be expressed in many different physical ways, known as ‘multiple realization’. This attack on the type-identity thesis led to the modified claim that mental states are identifiable with functional states, which then allows multiple realization, a functional property being understood in terms of the causal role it plays. However, we can think of pain as being either coarse-grained (one thing) or fine-grained (a mix of properties hardly warranting aggregation under a single category).
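Functionalism is often glossed with a computational analogy, and a minimal Python sketch may help (the analogy and every class name here are illustrative assumptions, not claims from the literature): a functional state is defined by its causal role, so quite different 'hardware' can realize the same state.

```python
from typing import Protocol

class PainState(Protocol):
    """A functional state defined by its causal role:
    damage in, avoidance behaviour out."""
    def respond_to_damage(self, damage: float) -> str: ...

class HumanRealization:
    # Realized by one physical substrate (say, C-fibre firing).
    def respond_to_damage(self, damage: float) -> str:
        return "withdraw and vocalize" if damage > 0.1 else "ignore"

class HerringRealization:
    # Realized by a quite different neural substrate.
    def respond_to_damage(self, damage: float) -> str:
        return "dart away" if damage > 0.1 else "ignore"

def exhibits_pain_role(state: PainState) -> bool:
    """Checks only the causal profile, never the physical realization."""
    return state.respond_to_damage(1.0) != "ignore"
```

The sketch makes the nonreductive point: `exhibits_pain_role` accepts both realizations because nothing in the functional description mentions the physical type, which is exactly why a type-identity between pain and any one physical property fails.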

Emergence
Reduction is generally contrasted with emergence. Accounts of emergence are rarely causal in form. Why cannot ‘horizontal’ causation give rise to emergent features within the same domain?

3. Scientific fundamentalism

It might be assumed that science provides us with the most secure form of knowledge and that, within science, the most secure forms of knowledge are mathematics and physics. But why is this so?

The explanatory regress

We explain one ‘thing’ in terms of another – we do not explain it in terms of itself. Reductionism, like all science, is a form of explanation: it gives a clarification, simplification, reasons, or justification. And it does so by explaining the whole in terms of its parts.

Justification

Aristotle observed that explanations, to be logically consistent, require further explanation. Like a child, we can continue to ask ‘but why?’, demanding yet more explanations as justification.

In practice, at some stage in the explanatory process we accept one particular explanation as sufficient for our purposes – but that does not mean that, logically, the demand for further explanation cannot continue.

Fundamentalism

Explanations, like philosophical justification, can enter an infinite regress or lapse into circularity. The only way out of this dilemma is to draw a line in the sand, to accept one particular explanation as sufficient for purpose, and then use this as a point of security or foundation for further inferences.

This fundamentalism can then serve as an unquestioned bedrock of self-evident or unjustified truth or axiom (sometimes called a primitive or brute fact). A good example of a scientific brute fact is a law of physics.

We feel a compulsion to be as fundamental as possible in our explanations: if further questions can be posed then the problem has not been adequately addressed. Scientific explanation seems to stop at physical constants and laws – even though we cannot explain why these laws are as they are or, indeed, why there are any laws at all. Mathematics is the cardinal case of theories and explanations built on axioms.

Coherentism

An alternative to fundamentalism is coherentism whereby beliefs must hang together, forming a coherent web of interlocked ideas.

Semantics, metaphor, definition

Much turns on what we assume is meant by ‘foundational’ or ‘fundamental’, and on our mental characterization of ‘reduction’.

Fundamental

We have seen that one way of bringing a regress to a halt is to find an explanation that does not need justification – one that is beyond question, primitive, self-evident, or a brute fact. It then becomes futile looking for further definitions, explanations or proofs because such foundational concepts presuppose the things they are meant to be explaining.

In mathematics these basic assumptions are known as axioms and they form the foundational logical structure on which all mathematics rests: if the axioms are unreliable, then the entire edifice comes crashing down.

This is the mode of thinking that we can call scientific fundamentalism. Aristotle used this principle to underpin his logic of scientific demonstration – the famous deductive syllogism. This was a form of argument which first stated a universally secure foundational principle, then declared a particular instance, such that the premises necessarily entailed the conclusion (e.g. All swans are white (foundational or universal principle), this is a swan (particular instance), therefore this swan is white).

Induction, by contrast, generalizes from particular observations (e.g. This is a swan, all observed swans have been white, therefore this swan is probably white). The conclusion of a deductive argument appears certain, while that of an inductive argument has degrees of probability that depend on the quality of the evidence.
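The contrast between the two argument forms can be sketched in Python (a toy model only; the swan observations are invented for illustration): deduction applies a universal rule and inherits its certainty, while induction merely estimates a probability from observed cases.

```python
# Deduction: a universal premise applied to a particular instance.
def deduce(universal, instance):
    """If the universal rule holds, the conclusion is certain."""
    return universal(instance)

# Induction: a generalization from observed instances.
def induce(observed_colours):
    """Estimate the probability that the next swan is white."""
    white = sum(1 for c in observed_colours if c == "white")
    return white / len(observed_colours)

all_swans_are_white = lambda swan: "white"   # the foundational premise

print(deduce(all_swans_are_white, "this swan"))   # certain: white
print(induce(["white"] * 99 + ["black"]))         # probable: 0.99
```

Note where the security comes from: the deductive conclusion is only as secure as its universal premise, which is precisely the foundational assumption the regress argument puts in question.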

Principle 2 – Foundationalism – is the search for secure assertions that can be taken as the underpinning for other statements and assertions

The overriding character of foundationalism or fundamentalism is that of ranked dependency: some entities only exist, have authenticity, or can be explained because of others. They are diminished in relation to something else of greater significance.

Principle 3 – foundationalism or fundamentalism are relations of dependency – where one object depends on, or is subordinate to, the existence, explanation, or method of investigation, of another

Reduction

How do we represent ‘reduction’ in our mind’s eye? There are two objects: the reducee (that which is reduced) and reducer (that to which it is reduced). The word is a metaphor derived from the Latin reducere to bring back, to be assimilated by, or to diminish. We imagine the reducer as in some way prior to, or more basic than the reducee. Sometimes this is treated as a process of elimination (eliminativism) as when we regard the description of mental illness (reducer) as eliminating or substituting for possession by demons (reducee), or the idea of oxygen (reducer) replacing that of phlogiston (reducee).

Whether the reducee is eliminated, subsumed, or replaced by the reducer, a prioritization or ranking has taken place: the reducer has been prioritized over the reducee (for whatever reason). Ranking and prioritization are characteristics of our minds, not of nature, so whenever we perform a ‘reduction’ we need to determine whether we are assuming that the reduction occurs in nature or in our minds.

Principle 4 – ‘reduction’ is metaphorical language used for the prioritization or ranking of something in relation to something else. It occurs in our minds, not in nature

Maths & physics

‘All science is either physics or stamp collecting’

Ernest Rutherford, New Zealand-born British physicist, c. 1900

Many reasons can be found for placing mathematics and physics at the forefront of the sciences. Since at least the time of the classical philosophers of Ancient Greece, mathematics has been treated as a model or template for all knowledge, including physics, as the mode of thinking towards which all other thinking should aspire. A sign above the entrance to Plato’s Academy in ancient Athens is said to have read: ‘Let no-one ignorant of geometry enter here’.

Geometry

Artist’s impression of Gravity Probe B orbiting the Earth to measure space-time

This is a four-dimensional description of the universe including height, width, length, and time using differential geometry
Differential geometry is the language in which Einstein’s General Theory of Relativity expresses the smooth manifold that is the curvature of space-time – which allows us to position satellites in orbit around the earth. Differential geometry is also used to study gravitational lensing and black holes
The Riemannian geometry of relativity is a non-Euclidian geometry of curved space
Courtesy Wikimedia Commons
Image sourced from NASA at http://www.nasa.gov/mission_pages/gpb/gpb_012.html

In the ancient civilizations of Egypt and Mesopotamia it was mathematics, physics, and astronomy that dominated scientific enquiry. These disciplines were used to study the celestial realm of mystery and gods as they recorded their observations in tables and charts that later became known to the Greeks.

Mathematics had practical application beyond astronomy: it provided the precision needed to engineer the magnificent monumental architecture we associate with classical civilization. Number mystics like Pythagoras (c. 570–495 BCE) became cult figures for thinking men. The pre-Socratic philosophers had examined the nature of substance, looking for universal properties and fundamental elements, bequeathing to their successors the idea of four foundational elements – Earth, Air, Fire, and Water – in a tradition that continued into the Medieval world, along with Democritus’s idea of matter being composed of tiny indivisible particles called atoms. The study of living organisms, we believe, did not really get started until the time of Aristotle (zoology) and Theophrastus (botany). Only then do we see the emergence of a critical analytic curiosity in organisms themselves rather than just their utilitarian value as food, medicines, and materials. So biology, it seems, arrived as an afterthought in scientific enquiry, as expressed so eloquently in Aristotle‘s ‘Invitation to Biology’.

‘It is not good enough to study the stars no matter how perfect they may be. Rather we must also study the humblest creatures even if they seem repugnant to us. And that is because all animals have something of the good, something of the divine, something of the beautiful’ … ‘inherent in each of them there is something natural and beautiful. Nothing is accidental in the works of nature: everything is, absolutely, for the sake of something else. The purpose for which each has come together, or come into being, deserves its place among what is beautiful’ – Aristotle, De Partibus Animalium (The Parts of Animals), 645a15

The universality of mathematics

One feature of the 17th century Scientific Revolution was the unification by Kepler, Newton, and others of subjects like optics and astronomy with physics to yield what are sometimes referred to as the ‘mathematical’ or ‘exact’ sciences. These approximate the exactness and precision of mathematics. Philosophers from Descartes, Leibniz, and Kant to Bertrand Russell and the logical positivists have regarded these subjects as paradigms of rational and objective knowledge because they are quantitative investigations of the physical causes of natural phenomena using rigorous hypothesis testing to yield precisely quantifiable predictions.

Mathematical knowledge has a unique and appealing beauty: it gives us knowledge that is certain; incorrigible (it does not undergo revision in the way that empirical facts do); timeless or eternal (we are inclined to think that 2 + 2 = 4 must always be true: it was true before humans occupied the world and it would be true even if no universe existed); and necessary (its truths seem to lie outside our world of space and time and yet they can be grasped by our reason; they could not be otherwise). In addition, numbers are not causally interactive.

All this makes mathematical knowledge highly abstract, since we are not really sure what it is actually about. The simple answer is of course ‘it is about numbers’, but the concept of number has baffled philosophers from the earliest times. If numbers do not actually exist in space and time and are causally inert (they are abstract objects), then how can we have any knowledge of them? There is no universally agreed answer to this question, but three broad approaches: either numbers are independent abstract objects, or they are in the world, or they are mental constructs. The details need not concern us, but if numbers do not depend on experience then perhaps we have some special faculty of numerical perception (say, Kant’s pure intuition), or we can relate them to logic and set theory (logicism), or they are simply mental constructs. Each of these positions has major difficulties, and the question still has no universally accepted answer. The fact that mathematics is so abstract might tempt us to dismiss it as some kind of mental construct, a phantom of our minds. But maths has been applied directly to the world in a practical and economic way that has had an immeasurable impact on human life (see, for example, Gravity Probe B illustrated above). There are many facts about the material world that were first suggested by mathematics before being empirically confirmed – for example, the Higgs boson, gravitational waves, the existence of Neptune, and the speed of light.

Because numbers seem to have a special kind of reality (and probably under the influence of the charismatic Pythagoras), Plato postulated his world of Forms: a world of timeless truths and generalities, not to be thought of as a separate place from Earth, like a heaven, but a realm of ideas that could be accessed and applied by reason. This was a special kind of objective knowledge, superior to empirical knowledge which, being derived from experience and sensation, was contingent and corrigible.

But how can we possibly believe in the objectivity of such an abstract realm and, anyway, how could we possibly connect with it?

Aristotle did not believe in Plato’s world of Forms, considering number to exist in the world as a property of objects. But, as philosophers later pointed out, how can number exist in a pair of shoes (one pair or two shoes)? Is the property in such a case 1 or 2? Philosopher Kant believed mathematics to be a form of innate intuition, an expression of our human sense of space and time. Arithmetic expressed, through number, our linear and sequential experience of time, while geometry was a way of representing our sense of space. For Kant then mathematics was an abstraction that came from our heads, it did not exist objectively in the world.

The subjectivity or objectivity of number (whether numbers are real) remains a matter for intense intellectual debate. The impact of mathematics on the world cannot be questioned, and the security we feel as a consequence of its necessity, universality, and certainty have given it a special place in the scientific vision of reality . . . so it is hardly surprising that it has been emulated by other disciplines. In physics we see its universality reflected in the laws of physics.

Modernity has maintained its reverence for the application of mathematics to scientific theories and concepts, but with the recognition that maths, at its core, is logical not empirical: it is founded on conditional statements (if … then): if X (which may be an axiom) then Y. As philosopher David Hume expressed it, maths is about ‘relations of ideas’ not ‘matters of fact’ … it is not empirical.

Principle 5 – Mathematics was inherited from the ancient world as the most secure form of knowledge. Since mathematics provided certain, necessary, timeless, and universal truth, it was regarded as the form of knowledge against which the statements of all science could be measured, and to which all science should aspire

Smallism

Physics, in investigating the nature of matter, proceeds analytically by breaking it up into ever smaller parts, a process that, over the years, has repeatedly found the world’s apparent ‘rock bottom’ material constituents, albeit different ones at different times. In 1947 these physical building blocks were electrons, protons, and neutrons; later it became quarks and other sub-atomic particles; today we have fermions and bosons. In this way all our explanations of matter have brought us to an end point, what we might indeed call the ‘fundamental reality’ of matter and existence . . . the smallest scientifically acceptable units as described by physics.

Principle 3 – Smallism – physics explains matter by proceeding analytically and experimentally to discover its smallest indivisible constituents, its fundamental particles. These are sometimes regarded as the foundational ingredients of ‘reality’

Fundamentalism

We can refer to our intuition that the small units of physics and chemistry are fundamental to both matter and material explanation as ‘scientific fundamentalism’. From this flows the sense of what has also been called ‘generative atomism’: the belief that, like a child’s Lego set, any whole can be built out of its fundamental building blocks. To understand the whole we must start with the parts. Small units, it might seem, somehow have greater scientific credibility; they are more authoritative and reliable; they provide better explanations; they are less complicated and therefore more easily understood; and they are the objects studied by physics.

Principle 4 – Fundamentalism – is the assumption that all scientific explanation of matter must ultimately reduce to explanation of the smallest known particles of matter and their interactions

In arriving at the smallest or fundamental constituents of matter we have a feeling of finality: being fundamental, we might feel that these constituents are in some sense more real than the wholes of which they are a part. But this is clearly some kind of mental trickery, a cognitive illusion. There is nothing more ‘real’ about a fundamental particle than an elephant. Indeed, because we can see, touch, and hear an elephant we might argue that the elephant is more empirically real than an invisible fermion or boson (which has a smaller wavelength than that of light). We regard small things as special not because of their mere existence (their ontology or being) but because of their role in analysis and explanation (their significance is epistemological). They are part of our habitual explanation of wholes in terms of their components and the relations between those components. Following Aristotle’s explanatory regress, our explanations must therefore bottom out at the smallest particles we know at any point in history.

Sometimes referred to as ‘ontological reduction’ this principle asserts that no physical object ‘exists’ more or less than any other. Smaller units of matter are no more ‘real’ than larger units of matter, nor are more inclusive or less inclusive units, or even more or less complex units. In terms of existence or reality atoms, rocks, bacteria, and humans are equals.

Principle 5 – All matter exists equally: no physical object ‘exists’ more or less than any other. Smaller units of matter are no more ‘real’ than larger units of matter, nor are more inclusive or less inclusive units, or more complex or less complex units (principle of flat ontology)

Reduction, organization, explanatory power

What is controversial in reductionism and science today is not the matter itself (ontological reduction) but the nature of its organisation, the relations between its parts (epistemological reduction) – especially the parts of living organisms. We must therefore look for other reasons for our prioritization of one domain of knowledge over another, for the intuition that explanations in one domain are in some way superior to (have greater explanatory power than) those in another: why, for example, we might consider it useful to think of biology in terms of physico-chemical processes. Why does scientific fundamentalism have such persuasive power over our general attitude to science? If all matter is ontologically equivalent then it is our cognitive focus that is making a distinction between different domains or scales of existence (the physicochemical, biological, social, psychological, and so on). Analysis has explanatory power, but this does not make the parts under consideration, whatever their size or inclusiveness, more ‘real’. On reflection we realise that no sort of matter is more real or fundamental in itself. Matter is just matter: small matter is just smaller than big matter; it does not have properties that make it existentially privileged in any way. So, in terms of material reality or existence (ontology), a bison is just as real as a boson.

When we take an overview of all the sciences is it true that ‘Particle physics is the foundational subject underlying – and in some sense explaining – all the others‘?[1] Could this be simply a comment on the way analysis is a habituated mode of explanation? To investigate the regress of scientific explanation to foundational particle physics we need to look at different kinds of explanation.

Explanatory rock bottom and adequate explanation

We might assume that, of necessity, the explanatory regress passes to ever smaller and ‘more fundamental’ material objects. But this is not inevitable: sometimes one particular kind of explanation is sufficient. Sometimes we feel no need to enter an explanatory regress. One particular answer is adequate.

Here are a couple of everyday examples of explanation. First, if asked ‘Why did the chicken cross the road?’ we could call on answers from scientific specialists such as a chicken biochemist, a neurologist, an endocrinologist, and an animal psychologist. But what if we were told that the chicken was being chased by a fox? This, surely, for most of us, is a satisfying and sufficient answer to our question. We do not need or desire to be told anything else. Does this mean that in this case the scientific answers were incorrect or inferior in some way? No, only that their explanations were not the most appropriate for the circumstances under consideration. Statements like ‘polar bears hibernate in winter’, ‘inflation can be managed by adjusting interest rates’, ‘evolution is replication with variation under selection’, or even ‘E = mc²’ appear sufficient in themselves: their veracity may be challenged, but we do not think they need reformulating or reducing to improve or clarify what is being expressed.

Practical incoherence

Firstly, there is the logical absurdity of trying to explain all phenomena in terms of the smallest workable scientific particles. What is to be achieved by explaining many biological facts in this way, like the fact that polar bears hibernate in winter? Examples become more ludicrous as we consider wider scientific contexts. How could we possibly explain a rise in interest rates in terms of fundamental physical particles and the laws of physics? What would such an explanation possibly look like? It is not that such a situation is logically impossible. We can imagine a supercomputer of the future that could enumerate the many causal factors at play in such a situation but we simply do not think this way, and nor do we need to. Explaining the causes of an interest rate rise in physicochemical terms would not simplify matters and give greater clarity, it would entail an explanation so complex as to be barely imaginable.

What then constitutes a satisfactory scientific answer to a scientific question?

Principle 6 – The principle of sufficient explanation: explanations are fit for purpose, they do not need to be circular, foundational, or part of an infinite regress

This example demonstrates the multi-causal nature of many occurrences – like car and plane crashes. Questions about cause(s) in such situations are not abandoned because of their complexity since they must achieve a resolution in a court of law. In many instances, in spite of the apparent complexity, rulings are readily made.

Our intuitive desire for foundational explanations creates several difficulties.

The primacy of analysis – generative atomism

If someone asks ‘What is a heart and how does it work?’ we might answer analytically by treating the heart as a whole and explaining its parts and how they interact. Alternatively, we might answer synthetically by treating the heart as a part and explaining how it interacts with other organs to contribute to the functioning of the body as a whole.

Much of science proceeds by explanatory analysis, breaking down physical entities into their constituent parts. But here too Aristotle’s dictum applies, as we are inclined to proceed in a regress to ever smaller parts until we feel we have reached rock bottom, the world’s fundamental particles. There has, in the course of history, been a variable rock bottom. If the future continues as the past then there is nothing absolute, necessary, or certain about the particles that make up rock bottom. Democritus defined atoms as indivisible particles, but physics has split the atom again and again, with today perhaps fermions and bosons approximating the foundational bricks out of which the universe is constructed.

Scope – universality of physical constants

Physics approaches mathematics in the (near) universality of its physical constants. Since it has universal scope it also has an all-embracing character that is not shared by other sciences: its principles, theories, and laws are of such generality that they encompass all matter except under the most extreme situations. A falling stone and a falling monkey both conform to Newton’s law of gravitational attraction. Physics tries to explain the world not only at the smallest scale, as the behaviour of fundamental particles, but also at the widest scale, as constants or constraints that apply universally to all matter.
The foundations of science are generally taken to lie in mathematics and physics because their basic assumptions have universal application in two important ways: firstly, physics works with the stuff of the universe at its extremes – from the smallest particles to the cosmos in its entirety; secondly, its constants and laws apply to all matter, at all times and places.

Principle 7 – Physics combines with mathematics to formulate constants and constraints that apply not only to the smallest known particles but to the universe as a whole, and therefore its scope is wider than that of other sciences

The challenge to scientific fundamentalism

So what have we decided constitutes something being more scientific or less scientific?

Arguing that one science is ‘more scientific’ than another requires an extended justification. So far we might claim, for example, that physics encompasses all matter, while biology deals only with living matter. Physics deals with generalities and regularities that apply throughout the universe, while biology deals only with the subset of generalities that relate to living organisms. Whatever principles and generalities we can establish in relation to life appear to lack the scope and reliability that we see in physical laws.

That both a rock and an elephant conform to the same effects of gravity does not automatically mean that physics is more fundamental.

Adding value

We might intuitively feel that the terms doing the explaining (the explanans) are more fundamental than the thing being explained (the explanandum).

True science, special science, hard and soft science

Has this account so far established a clear distinction between fundamental or foundational science and other science? Can we distinguish between hard and soft sciences, or indeed between science and non-science – or are such distinctions just a matter of semantics? The term ‘special sciences’ is generally used to denote those sciences dealing with a restricted class of objects, such as biology (living organisms) and psychology (minds), while physics, in contrast, is known as ‘general science’. Reductionism would maintain that the special sciences are, in principle, reducible to physics or to entities that may be described by physics.

Can we establish a clear benchmark, using criteria of certainty, necessity, universality, corrigibility (falsifiability), or predictive capacity, by which to rank the following areas of study: mathematics, physics, astrology, genetics, biology, psychoanalysis, psychology, history, political science, sociology, and economics? Would this establish a reliable table of scientific merit? Are such ranking criteria appropriate, or should other factors be considered and, if so, what would they be?

In spite of many historical attempts, the philosophy of science has failed to establish uncontroversial necessary and sufficient conditions that would satisfy a definition of ‘science’ (see Science and reason). At present it appears that what we call science is, more or less, our most rigorous application of reason to an assemblage of theories, principles, and practices that share a family resemblance as a means of enquiry. It is this that has proved our most effective way of organising the knowledge we use to understand, explain, and manage the natural world.

In at least a practical and intellectual sense the special sciences are autonomous: their explanations, methodologies, terms, and objects of study are perceived as self-sufficient, without any requirement for, or benefit flowing from, translation to another scale or ‘lower level’, in spite of assumptions about successful reductions in the past and the causal completeness of physics.

‘Fundamental’ can be ontic (that out of which everything is made – microphysics) or epistemic (that to which everything conforms).

When we reduce, are we suggesting a relation of identity between the reduced and reducing entities that justifies the elimination of the reduced entity, or are we merely referring to different modes of describing the same thing?

Method & subject-matter

Abstraction-reduction

It is a characteristic of explanation that it abstracts: it considers one particular aspect of the natural world to the exclusion of a more general context. In general our focus is on the explanation, not the context, the context being assumed or taken for granted. When a biologist gives an explanation of the way a heart pumps blood, it is assumed that the laws of physics are in operation – this does not have to be stated. Thus all explanations we provide have two key characteristics: firstly, abstraction – that is, they abstract from a greater whole, they focus on a particular situation or object while ignoring the context; secondly, they enter a potential analytic or synthetic regress. Explanations thus resemble our perceptive and cognitive focus by paying attention to a particular set of circumstances (foreground) while ignoring the wider context (background). In providing an explanation there is a kind of unspoken rider … something along the lines … ‘assuming the uniformity of nature, and other things being equal (ceteris paribus)’.

Principle 8 – Explanations abstract information from a wider context

(It is a characteristic of explanations that they tend to abstract (reduce) from the whole (the wider context). An explanation considers one aspect to the exclusion of others. An explanation is regarded as satisfactory, or ‘complete’, when it is sufficient for its purpose; it cannot account for the full context, which is taken for granted. When a biologist explains the way a heart pumps blood, it is assumed that the laws of physics are in operation – this does not have to be stated in order to make the explanation complete.
Though most explanations ignore the wider context, they have the potential to enter either an analytic or a synthetic regress. That is, the explanation can proceed by progressive reduction and simplification (analysis) or it can consider an ever-widening context (synthesis).
Explanations resemble our perceptual and cognitive focus by paying attention to a particular set of circumstances (foreground) while ignoring the wider context (background). In providing any explanation there is an unspoken rider, something along the lines of: ‘assuming the uniformity of nature, and other things being equal (ceteris paribus)’.)

Proximate & ultimate explanation

Is sex for recreation or procreation?

A proximate explanation is the explanation closest to the event to be explained, while an ultimate explanation is a more distant reason. In behaviour a proximate cause is the immediate trigger for that behaviour: the proximate cause for running might be a gun shot, the ultimate cause being survival. Biology itself divides in its approach to proximate and ultimate causes. Ultimate causes usually relate to evolution and adaptation, and therefore function, answering the question of why selection favoured that trait – and the answers tend to be teleological. Proximate causes deal with day-to-day situations and immediate causation. Proximate and ultimate explanations are complementary; they are not in opposition, with one being better or more explanatory than the other – both have their place. This is a trap for the unwary, since proximate answers can be mistakenly given to ultimate questions.

So, one possibility is that there is no privileged perspective that entails all others; each is equally valid, and the explanation that is most appropriate will depend on the particular circumstances. In all this we are abstracting, studying certain factors while ignoring others. When we study the genetic code we do not consider it appropriate to think about electrons and quantum mechanics; when we study the heart we do not worry about gravity or consult the periodic table.

Principle 9 – Satisfactory explanations generally depend, not on the size of the units under consideration or the inclusiveness of the frame of reference, but on the plausibility, effectiveness, or utility of the answer in relation to the question posed.

 

So, sex is for both procreation and pleasure.

(Is the explanation contingent on our human interests and limitations or is it a full causal account?)

4. Unity of science, spatiotemporal boundaries, scope & scale

As science progressed it provided increasingly elegant summations of knowledge about the physical world. Apparently disparate phenomena were united under common laws that could be expressed using mathematical equations: the motion of the planets, the behaviour of fluids, electricity, and light. The integration of physics and mathematics had such explanatory and predictive power in relation to so many phenomena that there seemed no end to what they might achieve. Gravity was a universal force that treated falling rocks and falling monkeys with absolute equality. Physics embraced space and time, matter and energy – and that was mighty close to everything. Its explanatory breadth and predictive power were, and still are, thoroughly demonstrated through its spin-off technology. Today our GPS systems integrate space flight and complex electronics with relativity theory and quantum physics to provide flat earth maps on our car navigation systems. There was a vision of physics as a fundamental discipline incorporating all other knowledge. Physics was universal in scope and scale while other scientific disciplines dealt with only sub-sets of the physics enterprise. So, for example, physics encompassed all matter, biology only living matter, animal behaviour all sentient living matter, sociology humans as they interact in groups, anthropology human beings, psychology human brains and behaviour. This characterization of science presents us with a metaphysical monism: there is one scientific truth for one reality based on one set of underlying principles (scientific laws). This vision is generally referred to as the ‘unity of science’.

Principle 10 – Scientific fundamentalism is a metaphysical monism: there is one scientific truth for one reality based on one set of underlying principles (scientific laws). This monistic vision is generally referred to as the ‘Unity of Science’

 

All the convoluted complication of complexity – the multiplicity of objects, their properties, relations, and aggregations – can be simplified and reduced by analysis: the adoption of a philosophy approximating monism, a description of the many in terms of the few. Scientifically we do this by means of the elementary particle, by generalization to principles and laws, and by systematization.

For some physicists there is a goal like a ‘unified field theory’: when quantum mechanics is reconciled with relativity then our account of the physical world will be complete.

Principle 11 – The unity of science (metaphysical monism) – there is one scientific truth for one reality based on one set of underlying principles (scientific laws)

Does this universal character of physics give some kind of precedence to physics: does it make physics more ‘fundamental’?

Principle 12 – Because physics is broad in scope it seems to encompass or absorb other disciplines of more limited scope.


Citations
[1] Ellis 2005
[2] see Naomi Thompson and Fictionalism about grounding https://www.youtube.com/watch?v=yMO64-21aik
[3] Fictionalism can apply across many domains. So, for example, we can be fictionalist about numbers (i.e. numbers have no referents, but they are useful) and morality (there is no objective right or wrong, but the notion of right and wrong, good and bad serve an important role in human life)

References
Ellis, G.F.R. 2005. Physics, complexity and causality. Nature 435: 743

Physical reductionism is possible but explanatory reductionism is not.
Supervenience of the mental on the neurological was an idea introduced by Donald Davidson as a dependence relation.

The article on reality and representation also discussed the way our minds, that is, our cognition based on the objects of our perception, attempt to put order into the confusing complexity of mental categories that make up reality. Working on the scientific image can improve the categories we use to describe the nature of reality but it does not give us an overall structure. We give structure to reality by applying metaphors that generally work well for us in daily life – by distinguishing between: what is bigger and what is smaller; what is contained in or is a part of something else; what is simple and what is tied to other factors in a complex relationship; and what can be ranked or valued in relation to something else.

It was also noted that when we describe the physical world we do so from different perspectives: we can give different accounts and explanations of the same physical state of affairs. So, for example, we can give physical, chemical, biological, psychological, sociological accounts of what is the same physical situation.

The question then arises as to whether any one particular mode of explanation and description should have priority over others and, if so, for what reason. That is the topic of this article.

The problem of reduction in science brings together a web of ideas, beliefs and assumptions about the world. To help connect some of the threads of this story I have organized the discussion into a set of principles that can be used for easy reference.

So far we have considered cognitive segregation, the way our minds divide the world into meaningful categories of understanding, our percepts and concepts, and the way that our cognition allows us to, as it were, look beyond the world of our biologically-given human perception (the manifest image) to a less anthropocentric world that allows us to not only investigate the way other sentient organisms perceive the world but to investigate the composition and operation of the external world itself.

What about the world of solid objects around us? Our curiosity about substance stretches back at least to Democritus and his world of fundamental indivisible particles called atoms. This was not an observed world but a postulated, metaphorical one. By the 1940s it was thought that we had reached the truly fundamental constituents of matter when atoms were split into protons, electrons, and neutrons. The metaphor was still one of ‘particles’, like billiard balls rotating in a solar-system-like way around a nucleus. The world of particles would later be transformed into one of forces made up of fields. Since the 1940s the metaphor has changed again. Particles have been replaced by waves: the world outside our minds is perhaps best characterized as space consisting of interacting vibrating fields. The Higgs field explains where ‘particles’ get their masses.

Humans are, nevertheless, special. Our unique mode of representation and comprehension (our reasoning faculty and the capacity to communicate and store information using symbolic languages) allows us to look beyond the world of direct experience (the manifest image) towards the way the world actually is (the scientific image).

This is amply demonstrated by the time-honoured deference to ‘hard’ sciences like maths, physics and chemistry when compared to a ‘soft’ science like biology.

A force is due to a field and a field is something that has energy and a value in space and time like a magnetic field, temperature, and wind speed.
With increasing complexity comes greater difficulty in predicting outcomes. As a consequence, biological principles and patterns seem to lack the precision and universality that we see in the laws of physics. Biological principles are derived from highly complex organisational and causal networks and open systems with a vast number of variables, in which no two organisms are structurally identical. We might think that physics, in accounting for the behaviour of the planets in the solar system, has achieved much, but the impressive and universal predictive laws of celestial mechanics can be derived relatively simply from the positions and momenta of planetary bodies in a relatively closed system. The number of variables is few.

Because the physical world ‘contains’ living organisms as a part, does it follow that the universal laws of physics ‘contain’ those of biology? It is tempting to assume that the scope of physics is universal and that other realms of knowledge are simply sub-sets of it. Biology, for example, is spatiotemporally bounded;[8] it takes the laws of physics as given; it answers different questions in a different realm of thought. We could conceive of replacing biology with the physical sciences, thus making biology part of a system of strict universal laws, but even if that were possible ‘we would not have explained the phenomena of biology. We would have rendered them invisible’.[9] Some laws apply over the whole range of scales.

Perhaps an explanation at one level does not require an explanation at another – or, at least, not at a level that is distant from it? We might explain chemistry in terms of physics but biology is conceptually more distant. We can feel cognitive focus at work here … atomic numbers, Maxwell’s equations, or the theory of relativity are not directly relevant when we work within the biological domain, or at least they are taken for granted as background. Hence the absurdity of explaining sociological phenomena in terms of physics and chemistry.

For example, since the large is explained analytically in terms of the small, we intuitively place greater value on the small, giving it ontological precedence simply by virtue of size (but see Principle 3). Biologists no longer claim, as they once did, that living matter is quite different in kind from inanimate matter, but this is a matter of perspective (all matter is physical matter but not all matter is biological matter). Many people once believed that the mind was inhabited by a spirit or soul and that, in a similar way, bodies were also inhabited by some special kind of spirit or vital force (élan vital, entelechy). This general view, known as vitalism, is now discredited. The existence of such forces is not only implausible but, since they cannot be detected and studied, they are of no explanatory value. They are best ignored.

Physics has its own problems with scale as it wrestles to reconcile the behaviour of matter at the small distances of quantum physics with the vast scale of cosmology, and the break-down of laws at the Big Bang or the singularities of black holes. Whether we look at the patterns in nature described by Newton, Einstein, Joule, Faraday, Maxwell, or the various laws of thermodynamics, the link to biology frequently seems tenuous. Of course the physics of matter is important to know about when studying nerves and macromolecules, but much of this is incidental to many biological questions.

In its most basic form foundationalism regards matter as the only reality, but even the mechanistic philosophers of the Scientific Revolution recognised that this matter was in motion, and today we realize the significance of not just matter but its mode of organization.

A flat ontology removes the necessity for the grounding of an object in something other than itself. There is no need for the Principle of Sufficient Reason. Explanations and reasons do not provide underlying truth or get closer to reality; they simply express, or ‘reduce’, one scale or mode of existence in terms of another.

Commentary

Scientific fundamentalism & the unity of science

We can define scientific fundamentalism as the view that the smallest particles of matter and the principles and theories of physics and chemistry underpin all other science. There are at least six reasons why this view has appeal.

1. The analytic process of explaining wholes in terms of their constituent parts has explanatory weight that suggests parts are in some way more real or fundamental than wholes.
2. Analytic explanation, like the philosophical requirement for rational justification or causal origin, leads to an explanatory regress seeking ever more ‘fundamental’ solutions, suggesting that there must be a rock-bottom or ultimate explanation that can only lie within physics.
3. The explanation of the complex in terms of the simple reduces causal complexity.
4. The scope of physics (the universe, space, time, and matter) suggests that it must incorporate or subsume all other scientific disciplines.
5. Both philosophers and scientists, when explaining natural phenomena, employ the metaphorical hierarchical language and imagery of levels of organisation. Though a convenient mental device, hierarchical thinking suggests that the natural world is itself ranked from high to low (with physics as a foundation). Talk of hierarchical organisation is better replaced by the language of scale.
6. It is a consequence of the historical tradition coming to us from antiquity, whereby the physics of astronomy and mathematics both preceded and received greater attention than biology, although subjects that today we might call political and social science were regarded as very important.

Aristotle gave science its foundation in reason through deductive logic, while scientists of the early modern period emphasized inductive logic and the importance of an emphasis on the world itself, on experiment and observation. Up to the 1960s there was a hope and belief that science could be defined and unified under a common set of principles. Today this ambition meets strong opposition because it seems that we have no conclusive criterion clearly demarcating science from non-science. That does not mean that astrology is science: robust scientific explanation entails many demanding criteria that astrology fails to meet. But the distinction between the sciences of physics, chemistry, biology, the social sciences, history, and everyday reasoning is one of family resemblance or degree, not necessary and sufficient demarcation. Foundationalism, with its insistence on science as a unique and special form of knowledge grounded in physics, has been replaced by coherentism or pragmatism: the view of science as a coherent system of justified belief, a system of shared ideas that work.

The view that physics somehow expresses ‘reality’ more effectively than other disciplines (scientific fundamentalism) comes from the general impression given by these factors. However, parts do not have some special quality (ontological privilege), nor are they more ‘real’, than wholes. Scientifically credible units of matter have no intrinsic (ontological) precedence over one another based on size or inclusiveness alone. Smaller units of matter (molecules) are no more ‘real’ than larger units of matter (dogs and cats). They might, however, have utility in explanation, and large units may be more complex in terms of their causation and our conceptual understanding of them.

All explanation abstracts from a wider context and, in this sense, it is reduction. Though it is in the nature of explanation to ‘reduce’ by looking at constituent parts, the adequacy of the explanation does not depend on the size of the units under consideration, but on the plausibility, effectiveness, or utility of the answer in relation to the question posed. Using parts to explain wholes gives parts explanatory value but does not make them more ‘fundamental’ in any meaningful physical sense.

Though the objects of physics are no more real or fundamental than those of biology, it is evident that adaptive complexity (life) involves intricate systems of causality that increase the difficulty of prediction from smaller scales. The greater the complexity (causal relations) of the units under consideration, the greater the difficulties in prediction, communication, and translation into other domains.

Aristotle’s Objection
Pre-Socratic natural philosophers were materialists who regarded nature as consisting only of matter (Earth, Air, Fire, and Water in some combination). Aristotle criticised this view because matter is always changing. Any functional structure such as an organism can have all or some of its matter replaced by different matter and yet retain its identity as a particular organism. That is, continuity is maintained, not through the matter of an organism but through its arrangement or functional structure. Although we have a concept of ‘dog’, each individual and kind of dog consists of different matter; matter is just ‘stuff’. When an organism grows it grows in a particular structured way; it does not simply add to what is already there by getting larger.

For Aristotle, an organism’s form rather than its matter is its nature. To understand an animal or plant we need to know not only its constituent matter but the way it is structured, and why it is structured in a particular way. Matter is necessary to create form but it is subordinate to it.

Biology is not just molecules, it is molecules of certain kinds integrated in ways that give rise to unique properties. A living organism (life) is very different from a rock (inanimate matter). Every physical thing is physical, but not every physical thing is biological. There is no privileged bottom level or a universe consisting of one stuff: all representations are partial.

In science ‘black box’ refers to a system whose inputs and outputs are known but not the inner workings. We really need a corresponding term ‘white box’ to indicate the explanation of the inner workings of a system that ignores or takes for granted the context outside the system which, in much of science, might be expressed as ‘the uniformity of nature’.

Laplace’s demon
Scientific explanation is steeped in the culture of causation and hence determinism. Lurking in the background there is always the figure of Laplace’s demon, the claim that someone (the demon) who knows the precise location and momentum of every atom in the universe (to infinite precision) at a given time should, in principle, be able to calculate all past and future states of the universe.
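The demon's predicament can be sketched in a few lines of code. The example below uses the logistic map, a standard toy model of deterministic dynamics (the choice of model, parameter r = 4.0, and starting value 0.2 are illustrative assumptions, not anything from the text): given the exact initial state, every future state follows necessarily, yet a minute error in the initial data is amplified until prediction fails.

```python
# A deterministic toy universe: the logistic map x' = r * x * (1 - x).
# From an exact initial state every future state follows necessarily,
# just as Laplace's demon requires. But perturb the initial condition
# in the twelfth decimal place and the trajectories soon diverge,
# showing why determinism in principle does not guarantee
# predictability in practice. (Illustrative sketch only.)

def trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map from initial state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

exact = trajectory(0.2)             # the demon's infinitely precise knowledge
rounded = trajectory(0.2 + 1e-12)   # our measurement, wrong in the 12th decimal

# Same laws, near-identical initial states: indistinguishable at first...
print(abs(exact[5] - rounded[5]))   # still a tiny difference
# ...but the error grows roughly exponentially with each step.
print(max(abs(a - b) for a, b in zip(exact[40:], rounded[40:])))  # large
```

The point is not that determinism is false, but that in-principle calculability and practical predictability come apart once precision is finite.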

Explanation by analysis & synthesis
All explanations abstract certain features from a wider circumstance and in this sense they are reductionist.

When we wish to explain the structure and/or function of a particular physical object, as we have seen, we do not explain it in terms of itself but either in terms of the structures out of which it is composed or the role that it plays within a greater whole (or both). Which option we choose (analysis or synthesis) depends to some extent on the particular object that we choose to explain and understand. If, say, the object is gold, Au, then I tend to proceed by analysis, looking for the atomic number, density, boiling point, and so on. It is true that I gain a better understanding of gold if I see where it fits in the periodic table in relation to other elements, but my focus of interest is on the element itself and the method of analysis. In contrast, if I want to understand and explain the heart then, although I can explain its division into auricle, ventricle, valves, and so on, it is difficult to rely on such factors alone without explaining the role that the heart plays within a body, that is, its relation to the other organs within a greater whole. In this case we proceed by both analysis and synthesis.

This is the methodology of explanation but, also as already considered, the success of the outcome depends on the purpose for which the explanation was given.

Science has always fought over what appear to be these alternative or opposing methodologies. On the one hand, knowledge and understanding are to be gained by placing an object in its full and natural context (synthesis). On the other, we try to understand the same object by isolating it from its natural context in order to better understand its unique features (analysis).

Scientific utility
We may simply choose the explanations, terms, definitions, laws, and assumptions (categories) that provide answers to the particular questions that concern us.

Principle 14 – There is no unequivocal criterion that distinguishes science from non-science

We may assume that science proceeds by the constant critical scrutiny and refinement of our scientific categories (which include theories and generalisations, principles, names, definitions, laws, phenomena, and so on) as we map our concepts onto the natural world itself (reality). The better we can explain and understand the world, the better we can manage it. And of course science has extended our senses through technology like microscopes and telescopes, which have allowed us to experience the world that lies beyond our natural biology and sensory input.

Principle 15 – Scientific categories help us to organise the knowledge we use to understand, explain, and manage the natural world.

Hierarchy operates like the ‘stacking’ of subroutines in computer programming: as nested subroutines are completed, control returns to the primary routine. It also resembles the nesting and trees that occur in generative grammar (the Chomsky hierarchy).
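The stacking analogy can be made concrete with a short sketch. In the hypothetical example below (the level names are purely illustrative), each routine calls a nested subroutine one level down, and as each completes, control unwinds back up the stack:

```python
# A minimal illustration of hierarchical 'stacking': each routine calls
# a nested subroutine, and as each completes, control returns to the
# routine that invoked it - mirroring nested levels of organisation.
# (Hypothetical example; the level names are purely illustrative.)

trace = []  # records the order in which levels are entered and exited

def run_level(levels: list[str]) -> None:
    """Recursively descend through nested levels, then unwind the stack."""
    if not levels:
        return
    trace.append(f"enter {levels[0]}")
    run_level(levels[1:])              # call the nested subroutine
    trace.append(f"exit {levels[0]}")  # control returns here on completion

run_level(["organism", "organ", "cell", "molecule"])
print(trace)
# Entry order descends the hierarchy; exit order ascends it,
# exactly like stacked subroutine calls returning to the primary routine.
```

Notice that the 'hierarchy' here is a feature of the calling structure, not of the data itself, which echoes the point that hierarchical talk is a cognitive device rather than a fact about the world.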

As we establish new cognitive frames of reference with the macro-microscope, so causation appears to occur between the different cognitive categories. A molecule causes W, a leg causes X, a body causes Y, a colony causes Z. Molecules do not cause legs, legs do not cause colonies, colonies do not cause biomes. We have to ask whether causation can occur in this way according to each frame (do causal ‘levels’ make sense?). Is there a nested hierarchy of causation? And does any particular kind of causation take priority? How does this relate to the material, formal, efficient, and final causes of Aristotle? Adaptive significance deals with ultimate or teleological causes, while mechanistic and developmental analysis deals with proximate causes.
Explaining bird song

Principle 10 – The analytic process of explaining the large and complex in terms of the small and simple persuades us that parts have some ontological privilege (are more real) than wholes – but parts can also be explained synthetically, by considering their role within a greater whole

The use of the word ‘reduction’ emphasises the size of the units under consideration rather than the actual source of the process, which lies in the abstractive process of cognitive focus on scale.

A cognitive dissonance arises when we realize that we can think of such a grouping in two ways – either as progressive division (analysis) or progressive addition (synthesis) – depending on whether we begin our thinking with the most-inclusive or least-inclusive category. The dissonance seems to arise in part because we think of groups as ranks, and it is then difficult to think of ranks as being of equal status; we find it very difficult to resist our impulse to create rank-value. We also find it difficult to think of a particular system in terms of analysis and synthesis at the same time, and for similar reasons.

Principle 10 – Nested hierarchies can be understood in two ways as being either progressively inclusive or progressively divisive – to understand and describe the objects within the hierarchy we can proceed either by analysis or by synthesis (or both)

Explore top-down and bottom-up. Is the world nested?

Life is not just stuff but the dynamic constraints operating within dynamic structural relations that are inherited through the work of negentropy.

Senses in which all science is grounded in physics:
1. It aspires to the certain, necessary, timeless and universal truth of mathematics

The nature of explanation (see [2])
We might regard metaphysics as the study of ‘what there is’ and/or the study of ‘what depends on what’. The latter refers to the way the human mind struggles to find order and the slippery relation between our mental ordering processes and the order of the world. Explanations proceed by ‘grounding’, by providing reasons. One ‘thing’ can be grounded in many ways and we can express grounding in many ways – as a means of justification, a reason, a cause, a foundational axiom, ‘because’ etc. So, for example, we explain wholes in terms of their parts.

The position argued here is that this grounding is illusory, but it cannot simply be removed (eliminativism) because it serves a valuable role (fictionalism).[3] The grounding relation may be between, say, facts and material objects.

Grounding also relates to our intuitions about the structure of reality – say, for example, that facts about biology depend on facts about chemistry, which depend on facts about physics, and so on. Dependence relations thus present us with a structure that facilitates thought and further explanation. ‘Grounders’ (people who support the idea of grounding) support their views in several ways: that grounding is asymmetric – higher-level scientific facts depend on lower-level scientific facts (if biology is explained by chemistry, then chemistry cannot be explained by biology); that it is irreflexive (nothing can ground itself); that it is transitive (if biological facts depend on chemical facts and chemical facts depend on physical facts, then biological facts depend on physical facts); and that there is a fundamental or foundational ‘level’ of explanation where the process of grounding must stop, a foundation that explains everything else. Philosophers use the idea of supervenience to try to come to grips with grounding.

Examples of possible ‘grounds’ might be: for morality – non-moral properties like happiness, pleasure or pain; for material objects – the smallest possible particles; for logic – the way true propositions are based in the world (that the proposition ‘snow is white’ is true if in fact snow is white); the logically complex is grounded in the logically simple.

Grounding talk expresses our intuitions about dependence relations in reality – that some things are less ‘real’, or less significant than others.

Realists and eliminativists
Realists hold that grounding relations hold independently of what people may think or say; they are independent of conceptual and linguistic schemes. They are discovered, not created. Eliminativists hold that grounding talk is incoherent or unintelligible and should be abandoned: perhaps there are composition relations between a table and its parts, but that is all; there is no role for grounding talk.

COMMENTARY
Reductionism is not just a thesis about the way the world is; it is also a thesis about what the mind is like.

Ultimate reality
The question of ultimate reality is a metaphysical question. Consider the following: any understanding of the universe can only be established from a particular point of view. There must be, as it were, an independent interpreter of whatever there is – a ‘point of view of the universe’. No such point of view exists, with the unlikely exception of God who, arguably, must have a God’s-eye view. Secondly, it is reasonable to claim that the only worthwhile, relatively reliable, or non-controversial answer to such a question posed in human terms must ultimately rest on empirical evidence. With this tacitly agreed, ultimate reality then translates into the best that science has to tell us. For some reason many people then interpret the question as one about the nature of matter and its relations. Falling back on our predilection for analytic explanation, and the mistaken conviction that the smallest is the most real, we let the discussion of ultimate reality collapse into a debate about fundamental particles, waves, fields, and the like. But the boson is no more real than a bison, or a human being.


Reductionism – Afterthoughts

Reductionism reminds us of how, in apparent defiance of the Second Law of Thermodynamics, the universe has accumulated local regions of negentropy and structure. The universe began as plasma before forming hydrogen and helium, and later the elements of the Periodic Table and life. How do we explain the emergence of complexity in the universe? What do we know about the causal processes intervening between the universe at the Big Bang, when it was undifferentiated plasma, and objects in the universe today that are as structurally complex as living organisms?

Many scientists would answer that complexity arises as the laws of physics play out in a deterministic way. This is one aspect of reductionism, which we need to define.

Classical reductionism (determinism)
The path of cosmic destiny is determinate. Knowing the precise conditions at the origin of the universe should, in principle, be sufficient to explain the emergence of smartphones and you, here, right now. Expressed another way: it should, in theory, be possible (though complex in the extreme) to explain sociology in terms of particle physics; that is, the small-scale accounts for the large-scale. Physics rests on the proven predictive capacity of mathematics.

By this account the regularities of chemistry, biology, psychology and the social sciences are epiphenomena (by-products) because they are grounded in physical causation. All physical objects are composed of elementary particles under the influence of the four fundamental forces, and physical laws alone give a unique outcome for each set of initial data. English theoretical physicist Paul Dirac (1902-1984) took a first step down this path when he claimed that ‘chemistry is just an application of quantum physics’. One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom-up fashion, from micro to macro scales. The physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane (Weaver 1948).

There are many objections to such a view, and since the 1970s these objections have become the focus of studies in complexity theory. After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination: though the movement of 10 billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer and quantify general questions about their collective behaviour (how frequently will they collide, how far will each one move on average before it is hit, etc.) even when we know nothing of the behaviour of any individual ball. In fact, as the number of variables increases certain calculations become more accurate: say, the average frequency of calls to a telephone exchange, or the likelihood of any given number being rung by more than one person. This allows, for example, insurance companies and casinos to calculate odds and ensure that the odds are in their favour. It applies even when the individual events (say, the age at death) are unpredictable, unlike the predictable behaviour of a billiard ball. Much of our knowledge of the universe and natural systems depends on calculations of such probabilities.
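The stabilising effect of large numbers can be sketched in a few lines of Python (the distribution and sample sizes here are illustrative only):

```python
import random

def sample_mean(n, rng):
    """Mean of n individually unpredictable outcomes, uniform on [0, 10)."""
    return sum(rng.uniform(0, 10) for _ in range(n)) / n

rng = random.Random(42)
small = sample_mean(10, rng)        # a handful of events: the mean wanders widely
large = sample_mean(100_000, rng)   # very many events: the mean settles near 5.0
print(small, large)
```

No individual draw is predictable, yet the average of a hundred thousand draws is reliably close to the expected value of 5.0 – the same statistical regularity that underwrites actuarial tables and casino margins.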

Science in the 21st century is tackling complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; what the probability is that they might die of some heritable disease; how the brain works; what degree of risk attaches to the use of a particular genetically modified organism; and whether interest rates will be higher in six months’ time and, if so, by how much. This is the world of organic complexity, neural networks, chaos theory, fractals, and complex networks like the internet. The processes going on in biological systems seemed, in contrast, to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to the formulation of law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations. But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere. Such a system consists of many simple components interconnected, often as a network, through a complex non-linear architecture of causation, with no central control, producing emergent behaviour. Emergent behaviour can entail scaling laws, hierarchical (nested, scalar) structure, coordinated information-processing, dynamic change, and adaptive behaviour – the self-organisation, non-conscious evolution, ‘learning’, and feedback of complex adaptive systems such as ecosystems, the biosphere and the stock market. An ant colony, an economic system, and a brain are all examples.

However, when a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.

Levels of reality
Many scientists and philosophers find it useful to grasp complexity in the world through the metaphor of hierarchy as ‘levels of organisation’, with its associations of ‘higher’ and ‘lower’ and a world that is in some way layered. But ‘as if’ (metaphorical) language can mistakenly be taken for reality and is best minimised unless it serves a clear purpose or is unavoidable.

What exactly do we mean by ‘levels’ in nature and can these ideas be expressed more clearly? Hierarchies rank their objects as ‘higher’ or ‘lower’ with their ‘level’ based on some ranking criterion.

Scale
‘Level’ is used in various senses. Firstly, it expresses scale or size as we move from small to large in a sequence like molecules – cells – tissues – organs – organisms, and from large to small as we pass along the sequence universe – galaxy – solar system – rock – molecule – quark.

Complexity
But it cannot be just a matter of physical size, because organisms are generally treated as ‘higher’ in such hierarchies than, say, a large lump of rock. So secondly, or perhaps in addition, we are referring to complexity: the fact that an organism has parts that are closely integrated in a complex network of causation in a way that does not occur in a rock. There are difficulties of definition here too: how, for example, do we rank a society, a human being, and an ecosystem against one another? A microorganism could well be considered more complex than the universe.

Context
But then, thirdly, ‘levels’ also suggests frames of reference: one set of things can be related to another set of things, so that we view one set from a ‘higher’ or ‘lower’ vantage point; here context or scope becomes the key factor.

There are then three major criteria on which scientific hierarchies are built: scale (inclusiveness or scope), causal complexity, and context. When considering any particular scientific hierarchy it helps to consider these factors (separately or in combination).

There are a few complications. Sometimes the layering is expressed in terms of disciplines or domains of knowledge as physics, chemistry, biology, psychology, and sociology which is different from the phenomena that the disciplines study. In this case each discipline or domain constitutes a contextual ‘level’ with its own language, vocabulary, and mathematical equations that are valid for the restricted conditions and variables of that level. There is increased decoupling with increased separation of domains.

Sometimes the hierarchy is given as a loose characterisation of what these subjects deal with – laws of the universe, molecules, organisms, minds, humans in groups, or some-such. Sometimes the nature of the physical matter is given emphasis – whether it is organic or inorganic.

At the core of the scientific enterprise is the idea of causation and for eminent physicist George Ellis it is causal relations acting within hierarchical systems that are the source of complexity in the universe. His views are summarised in his paper On the Nature of Causation in Complex Systems in which he outlines the controversial idea of ‘top-down’ causation.[1] I briefly outline these ideas below.

My preference would be to regard causation as acting between the cognitive categories we use to denote phenomena in the world. We take these categories to vary in both their inclusiveness and their complexity. This, for me, is a more satisfactory mental representation of ‘reality’ than a layered hierarchy of objects arranged above and below one another. I would use the expression ‘more inclusive and complex to less inclusive and complex’ in preference to ‘top-down’, but the convenience of the shorthand is obvious.

Degree of causal complexity in parts & wholes
Although we can speak of parts and wholes in general terms, actual instances demonstrate varying degrees of causal interdependence. (A holon is a category that can be treated as either a whole or a part.) At one end of the spectrum are aggregates with minimal interdependence of constituents; at the other are living organisms, where the significance of a constituent, like a heart, depends strongly on its relation to the body.

As we shift cognitive focus, the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant’s body is unlikely to be problematic, and similarly removing one ant from its colony, but removing an organ from a body could well be.

Wholes sometimes only exist because of the precise relations of the parts – in other wholes this does not matter. Sometimes characteristics appear ‘emergent’ (irreducible, as in highly organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes): we need to consider each instance in context. Some holons are straightforwardly additive (sugar grains aggregated into a sugar lump) but others grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.

Neither of these factors need affect the following account which is highly abbreviated from Ellis:

Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’. Particular causes are substantiated by experimental verification, isolated from surrounding noise.
Causation, or causality, is the capacity of one variable to influence another. The first variable may bring the second into existence or may cause the incidence of the second variable to fluctuate. A distinction is sometimes made between causation and correlation, the latter being a statistical association between variables that need not imply any causal connection.

The philosopher David Hume saw even the laws of physics as a matter of constant conjunction rather than the kind of causation we generally assume. One event (effect) can have multiple causes; one cause can have multiple effects.

The claim is that causation is not restricted to physics and physical chemistry as is frequently maintained. Examples of ‘bottom-up’ causation would be the brain as a neural network of firing neurons or the view that there is a direct causal chain from DNA to phenotype.

Complexity emerges through two-way whole-part causation as cause-initiating wholes (scientific categories) become progressively more inclusive and complex and the entities at a particular scale are precisely defined (with the properties associated with hierarchies: information hiding, abstraction, inheritance, encapsulation, transitivity, etc.).

Top-down causation
Ellis claims that bottom-up causation is limited in the complexity it can produce: genuine complexity requires a reversal of information flow from ‘bottom-up’ to ‘top-down’ and a coordination of its effects. To understand what a neuron does we must explain not only its structure or parts (analysis) but how it fits into the function of the brain as a whole (synthesis). Fractal geometry, with its self-similar structure repeated across scales, offers one mathematical picture of such two-way relations.

‘Higher’ levels are causally real because they have causal influence over ‘lower’ levels. It is ‘top-down’ causation that gives rise to complexity – such as computers and human beings. As we pass from ‘lower’ to ‘higher’ categories there is a loss of detailed information but an increase in generality (coarse-graining) which is why car mechanics do not need to understand particle physics. As wholes and parts become more widely separated (‘higher’ from ‘lower’) so the equivalence of language and concepts generally becomes more obscure although sometimes translation (coarse-graining) is possible.

Top-down causation is demonstrated when a change in high-level variables results in a demonstrable change in low-level variables in a reliable (non-random) way, this being repeatable and testable. The change must depend only on the higher level, the contextual variables having no description as a lower-level state. It is common in physics, chemistry and biology – for example, the influence of the moon on the tides and the subsequent effect of the tides on organisms. Biological ‘function’ derives from higher-order constructs through downward causation. Cultural neuroscience is an excellent example of a synthetic discipline dominated by top-down causation.

Equivalence classes relate higher level behaviour to (often many different) lower level states. The same top-level state must lead to the same top-level outcome regardless of which lower level state produces it.

Top-down causation occurs when higher level variables set the context for lower-level action.

Ellis recognises five kinds of top-down causation:
Algorithmic – high-level variables have causal power over lower-level dynamics so that the outcome depends uniquely on the higher-level structural, boundary and initial conditions, e.g. algorithmic computation, where a program determines the machine code, which determines the switching of transistors; nucleosynthesis in the early universe, determined by pressure and temperature; and developmental biology, a consequence of reading the DNA but also of responding to the environment at all stages (developmental plasticity).

Non-adaptive information control – higher-level entities influence lower-level entities towards particular ends through feedback control loops. The outcome is determined not by the initial or boundary conditions but by the ‘goals’, e.g. a thermostat and homeostatic systems.
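A thermostat loop is easy to simulate. In the sketch below (heater power, leak rate and goal temperature are all made-up values) the outcome is fixed by the goal, not by the starting state:

```python
def thermostat_step(temp, goal, heater_power=0.5, leak=0.1):
    """One feedback step: the heater switches on below the goal, off above it."""
    heating = heater_power if temp < goal else 0.0
    return temp + heating - leak * (temp - 15.0)  # heat leaks away towards 15 degrees

def run(start_temp, goal, steps=500):
    temp = start_temp
    for _ in range(steps):
        temp = thermostat_step(temp, goal)
    return temp

# Wildly different initial conditions converge on the same goal-set outcome:
# the temperature at which heat input balances leakage, just below the goal.
print(run(0.0, 21.0), run(40.0, 21.0))
```

Whether the room starts at 0 or at 40 degrees, the feedback loop delivers the same final temperature; only the goal matters.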

Adaptive selection (teleonomy) – variation in entities with subsequent selection of the kinds suited to the environment and to survival, e.g. the products of Darwinian evolution. Selection discards unimportant information. Genes do not determine outcomes alone but in concert with the environment. This is like non-adaptive information control except that particular kinds of outcome are selected rather than just one. In evolution we see convergence from different starting points. Novel complex outcomes can be achieved from a simple algorithm or underlying set of rules. Adaptive selection allows local resistance to entropy, with a corresponding build-up of useful information.
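Adaptive selection can be sketched as a toy mutate-and-select loop; the bit-string ‘target’ stands in for the environment, and every parameter here is an arbitrary choice made for illustration:

```python
import random

def evolve(target, rng, pop_size=30, generations=200, mutation_rate=0.05):
    """Variation (random bit-flips) plus selection of the best-matched variant:
    the 'environment' (the target), not the starting population, shapes the outcome."""
    n = len(target)
    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)                      # selection keeps what fits
        pop = [[1 - g if rng.random() < mutation_rate else g for g in best]
               for _ in range(pop_size)]                  # variation explores alternatives
        pop[0] = best                                     # never lose the best found
    return max(pop, key=fitness)

rng = random.Random(1)
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
result = evolve(target, rng)
print(result == target)
```

A well-adapted outcome emerges from a very simple underlying rule set, and runs from quite different random starting populations converge on the same target.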

Adaptive information control – adaptive selection of goals in feedback control system such as Darwinian evolution that results in homeostatic systems; associative learning.

Intelligence – the selection of goals involves the symbolic representation of objects, states and relationships in order to investigate the possible outcomes of goal choices. Thoughts and plans are implemented using language, including the quantitative and geometric representations of mathematics. These are all abstract and irreducible higher-level variables that can be represented in spoken and written form. Causally effective, too, are the imagined realities (resting on trust) of ideologies, money, laws, rules of sport and values: all abstract, but all causal. Ultimately this underwrites our understanding of meaning and purpose.

So it is the goals that determine outcomes, and the initial conditions are irrelevant.
In living systems the best example of downward causation is adaptation, in which the environment is a major determinant of the evolution of the structure of DNA.

For higher levels to be causally effective there must be some causal access (causal slack) at lower levels. This comes from: the way the system is structured, constraining lower-level dynamics; openness, allowing new information across the boundary to affect local conditions, even changing the nature of the lower-level elements (as in cell differentiation, or humans in society); and micro-indeterminism combined with adaptive selection.

Top-down causation in computers
What happens in this hierarchy? Top-down causation occurs when the boundary conditions (the constraints imposed on a system at its edges) and the initial conditions (its starting state) determine the consequences.

Top-down causation is especially prevalent in biology, but also in digital computers – the paradigm of mechanistic algorithmic causation – and it operates without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Computer systems illustrate downward causation: the software tells the hardware what to do, and what the hardware does will differ with different software. What drives the process is the abstract informational logic in the system, not the physical medium – the USB stick – on which the software is stored. The context matters.

Abstract causation
Non-physical entities can have causal efficacy: in a computer the high levels drive the low levels, while the bottom levels enable but do not cause. A program is not the same as its instantiations. Which of the following are abstract? Which are real? Which exist? Which can have causal influence: values, moral precepts, social laws, scientific laws, numbers, computer programs, thoughts, equations? Can something have causal influence and not exist? In what sense?

A software program is abstract logic: it is not the stored electronic states in computer memory but their precise pattern (a higher-level relation), something not evident in the electrons themselves.

Logical relations

High-level algorithms determine what computations occur according to an abstract logic that cannot be deduced from physics.

Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except in processing speed). It facilitates higher-level actions rather than constraining them.

Multiple realization
The same high-level logic can be implemented in many ways (electronic transistors and relays, hydraulic valves, biological molecules), demonstrating that the lower-level physics is not driving the causation. Higher-level logic can be instantiated in many ways by equivalence classes of lower-level states. For example, our bodies are still the same – they are still us – even though our cells are different from those we had 10 years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, light pixels or printed ink … but it is still the letter ‘p’. The higher-level function drives the lower-level interactions, which can happen in many different ways (information hiding), so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher-level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
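The equivalence-class idea can be shown in code itself: the two procedures below (a hypothetical pairing, chosen for contrast) work through entirely different lower-level steps yet satisfy the same higher-level specification, ‘produce a sorted sequence’:

```python
def sort_by_comparison(xs):
    """One lower-level realisation: insertion sort, comparing and shifting items."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_counting(xs):
    """A quite different realisation: tally occurrences, then replay them in order."""
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [v for v in sorted(counts) for _ in range(counts[v])]

data = [3, 1, 2, 3, 0]
# Both lower-level realisations belong to the same higher-level equivalence class.
print(sort_by_comparison(data), sort_by_counting(data))
```

What matters to a caller is only the higher-level outcome; which lower-level realisation produced it is hidden, just as the argument in the text suggests.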

Are thoughts, like computer programs and data, then, not physical entities?

How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Simply: by placing constraints on what is possible at the lower level; by properties changing in combination, as when hydrogen combines with oxygen to form water; where low-level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; and when random fluctuation and quantum indeterminacy affect the low-level physics.

Supervenience
Supervenience is an ontological relation in which upper-level properties are determined by, or depend on (supervene on), lower-level properties – say, the social on the psychological, the psychological on the biological, the biological on the chemical, and so on. Do mental properties supervene on neural properties? Properties of one kind are dependent on (but not determined by, in a causal sense) those of another kind. For example, can the same mental state be supported by different brain states? (Yes.) Why, then, do we need statements about mental states if we know the underlying brain states?

A set of properties A supervenes upon another set B when no two things can differ with respect to their A-properties without also differing with respect to their B-properties: there cannot be an A-difference without a B-difference (Stanford Encyclopaedia).

Everyone agrees that reduction requires supervenience. This is particularly obvious for those who think that reduction requires property identity, because supervenience is reflexive. But on any reasonable view of reduction, if some set of A-properties reduces to a set of B-properties, there cannot be an A-difference without a B-difference. This is true both of ontological reductions and what might be called ‘conceptual reductions’, i.e. conceptual analyses.
The more interesting issue is whether supervenience suffices for reduction (see Kim 1984, 1990). This depends upon what reduction is taken to require. If it is taken to require property identity or entailment then even supervenience with logical necessity is not sufficient for reduction. Further, if reduction requires that certain epistemic conditions be met, then, once again, supervenience with logical necessity is not sufficient for reduction. That A supervenes on B as a matter of logical necessity need not be knowable a priori. (Stanford Encyclopaedia)
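The ‘no A-difference without a B-difference’ condition is mechanical enough to check in code. The sketch below uses invented mental-state/physical-state labels purely for illustration:

```python
def supervenes(items, a_props, b_props):
    """A supervenes on B iff no two items differ in A without differing in B,
    i.e. an item's B-description fixes its A-description."""
    b_to_a = {}
    for item in items:
        a, b = a_props(item), b_props(item)
        if b in b_to_a and b_to_a[b] != a:
            return False  # same B-state but different A-states: supervenience fails
        b_to_a[b] = a
    return True

# Toy (mental state, physical state) pairs; all labels are hypothetical.
items = [("pain", "c-fibres"), ("pain", "silicon"), ("itch", "a-fibres")]
mental = lambda it: it[0]
physical = lambda it: it[1]
print(supervenes(items, mental, physical))
```

Note that multiple realisation is compatible with supervenience: here ‘pain’ is realised by two different physical states and the relation still holds; it would fail only if one physical state supported two different mental states.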

SUMMARY
Reductionism regards a partial cause as a whole cause, with analysis passing down to the smallest scales of physics and causation then proceeding up from those levels – a representation both of reality and of the process of science. Recent work has muddied the simplicity of this approach in many ways. For example, current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If so, prediction becomes a false hope even at this early stage, quite apart from any chaotic factors arising from complexity. Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of their constituent processes.

Biology presents us with many fascinating examples of how organic complexity arises – for example, how the iteration of simple rules can give rise to complexity, as with the fractal genes that produce the scale-free bifurcating systems of the pulmonary, nervous and blood circulatory systems. In complex systems there is often strength in quantity, randomness, local interactions with simple iterated rules, and gradients, and generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation. Multiple individuals acting randomly can take on some kind of spontaneous ordering or structuring.
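How the iteration of a simple rule yields intricate structure can be seen with an elementary cellular automaton; the sketch below uses Wolfram’s rule 30, a standard textbook illustration rather than an example from the text above:

```python
def rule30_step(cells):
    """One pass of rule 30: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 41, 15
row = [0] * width
row[width // 2] = 1          # begin with a single 'on' cell
history = [row]
for _ in range(steps):
    row = rule30_step(row)
    history.append(row)

for r in history:            # an intricate, aperiodic triangle unfolds
    print("".join("#" if c else "." for c in r))
```

A one-line local rule, iterated, produces a pattern complex enough that rule 30 has been used as a pseudo-random generator: complexity from simplicity, with no central control.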

When some elements combine they take on a completely new and independent character.

These are presented as systems operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged, much as Wikipedia emerges from a grass-roots base ‘bottom-up’ rather than scholarly entries ‘top-down’.

We cannot predict the future structure of DNA coding from its present structure – that future is determined by the environment.

In language we have letters, words, sentences, paragraphs exhibiting increasing complexity and inclusiveness with meaning an emergent property. Meaning determines a top-down constraint on the choice of words but the words constrain the meanings that can be expressed.

Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that higher-level activities do not exist, it is the task of mechanistic explanation to show how they arise from the parts.

Fundamentalism suggests that a partial cause is the whole cause.

In sociology, although agency seems ultimately to derive from the individual, we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation is also strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but sometimes by changing its rules of operation: in the case of society this could mean social laws or customs of various kinds.

This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.

Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?

We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).

The characteristics of emergent or complex systems include: ‘self-regulation’ by feedback loops; a large number of causally related variables acting in a ‘directed’ way; some form of natural selection through differential survival and reproduction; and unusual, constrained, path-dependent outcomes, as occur in markets. Emergence may be a particular consequence of diversity and complexity, organisation and connectivity.

Economist Jeffrey Goldstein in the journal Emergence isolates key characteristics of emergence: it involves radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; it is the product of a dynamic process (it evolves); it is supervenient (lower-level properties of a system determine its higher level properties).

Chaos theory draws attention to the way complex systems are aperiodic and unpredictable (chaotic), and to the way that variability in a complex system is not noise but the way the system works. Chaos in dynamical systems is sensitive dependence on initial conditions, with the iteration of simple patterns able to produce complexity.

[Are levels fractal – is there the same noise at each level?]

Weaver distinguished three regimes: simplicity, with few variables; disorganised complexity, with many variables whose behaviour can be averaged; and organised complexity, in which the behaviour of the whole is not simply the sum of the parts (Weaver 1948).

Chaos
Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions – an effect popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behaviour is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behaviour is known as deterministic chaos, or simply chaos, and was summarised by Edward Lorenz as follows: ‘Chaos: when the present determines the future, but the approximate present does not approximately determine the future.’
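Sensitive dependence is easy to demonstrate with the logistic map, a standard one-line chaotic system (the parameter r = 4 and the perturbation size are conventional illustrative choices):

```python
def logistic(x, r=4.0):
    """One fully deterministic step of the logistic map."""
    return r * x * (1.0 - x)

def max_divergence(x0, eps, steps):
    """Largest gap that opens between two orbits started only eps apart."""
    x, y = x0, x0 + eps
    gap = 0.0
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        gap = max(gap, abs(x - y))
    return gap

# A perturbation of one part in a billion grows to a macroscopic difference:
# the approximate present does not approximately determine the future.
print(max_divergence(0.2, 1e-9, 60))
```

The rule itself contains no randomness – identical starting points give identical orbits – yet any uncertainty in the starting point, however tiny, destroys long-term prediction.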

Fractals
Fractals show self-similarity at different scales, mathematically created through iteration. They are evident in biological networks – branching, veins, the nervous system, roots, the lungs – as a form of optimal space-filling that can also be applied to human networks. Fractal structure is also a means of packing information into a small space, as in the brain.
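Iterated self-similarity can be made quantitative with the Koch curve, the textbook fractal: each iteration replaces every segment with four segments a third as long, giving unbounded length and a non-integer dimension:

```python
import math

def koch_length(iterations, base=1.0):
    """Curve length after n iterations: each pass multiplies length by 4/3."""
    return base * (4.0 / 3.0) ** iterations

# Self-similar scaling (N = 4 copies at scale r = 1/3) gives the
# fractal dimension D = log N / log(1/r), between a line and a plane.
dimension = math.log(4) / math.log(3)
print(koch_length(5), dimension)
```

The dimension of about 1.26 captures the sense in which a branching, space-filling structure is ‘more than’ a line but ‘less than’ a surface.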

Citations & notes
Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.

Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability, making the system non-linear, non-additive, non-periodic and chance-prone. Though predictive over short time spans, long-term predictions are not possible. Minute differences in a particular state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings changing a major weather pattern – the ‘butterfly effect’). The unpredictability is scale-free, a fractal (of fractional dimension).

Scale-free systems
These are often produced as a logical and most efficient solution to a biophysical problem.

Some systems and patterns are scale-free: they look, and arguably behave, the same at any scale. For example, bifurcating systems, in which a system repeatedly divides into two like the branching of a tree, occur in neurons, in the circulatory system (where no cell is more than five cells from the circulatory system, although that system makes up no more than 5% of total body mass), and in the pulmonary system.

Scale-free systems can be generated from simple rule(s) and one property of such systems is that the variability that occurs at any particular scale is proportionally the same: it does not decrease as the scale is reduced.

Much of the complexity of living systems can be accounted for by ‘fractal genes’ which code complex systems with simple rules and mutations in fractal genes can be recognised easily.

Simple rules can arise in nature through a variety of sources:

• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest route between a number of points)
• Power-law distributions (which are fractal – the neurons of the cortex follow a power-law distribution of their dendrites, making the cortex an ideal structure as a neural network for pattern recognition)
• The wisdom of the crowd, when the members of the crowd are truly expert and unbiased (or evenly biased)

One further feature of complex systems is that a finishing state cannot be predicted from a starting state – yet from widely divergent starting states we often see convergence to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in particular environments, or, in human history, the independent origins of agriculture.

Emergence

Primacy of explanation
For example, in providing explanations that ‘reduce’ complexity we can place undue emphasis on particular ‘levels’ or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people, in terms of the hormones that drive it, in terms of the genes that trigger the production of the hormones, in terms of the processes going on in the brain when the behaviour occurs, or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others: each is equally valid, and which explanation is most appropriate will depend on the particular circumstances.

Nature and nurture interact in a subtly nuanced way. Remember too that although the brain can influence behaviour, the body can also influence the brain: there is a subtle causal interplay between the two.

Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable – there is, in fact, a universal physical law of entropy, a tendency towards randomness – it is a historical fact that …

1. As things get more complex they become less predictable.
2. Quantity can produce quality (chimpanzees share about 98% of our DNA; half the difference is olfactory, the rest is largely the quantity of neurons and of genes that release the brain from genetic influence)
3. The simpler the constituent parts the better
4. More random noise produces better solutions in networks
5. There is much to be gained from power of gradients, attraction and repulsion
6. Generalists work better than specialists (more adaptive)
7. All emergent adapted systems ‘work’ from the bottom up, not the top down: they arise without a blueprint or designer (e.g. Wikipedia)
8. There is no ‘ideal’ or optimal complex system except insofar as it is the ‘best adapted’ which is a general not a precise condition.

Hierarchies and heterarchies.

Citations & notes
[1] Ellis 2008. http://www.mth.uct.ac.za/~ellis/Top-down%20Ellis.pdf

General references
Ellis, G. 2008. On the nature of causation in complex systems. An extended version of the RSSA Centenary Transactions paper.
Ellis, G. 2012. Recognising top-down causation. http://fqxi.org/community/forum/topic/1337
Gleick, J. 1987. Chaos: Making a New Science.
Sapolsky, R. Chaos and complexity (YouTube lecture), and Introduction to Human Behavioural Biology (Stanford lecture series).
Weaver, W. 1948. Science and complexity. American Scientist 36: 536–544.
http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf

If we explain something by considering it as the effect of some preceding cause, then this chain of cause and effect either regresses ad infinitum or ends in a primordial cause which, since it cannot be related to a preceding cause, does not explain anything. Hume suggested that when billiard balls collide there is not something additional to the collision, ‘the cause’, some external factor or force acting on the billiard balls: the balls simply collide.

While free will entails purpose and meaningful choice.

Ontology is clear at all levels except the quantum level.

Origin (emergence) of complexity. Specific outcomes can be achieved through many different low-level implementations, which shows that it is the higher level that shapes outcomes (is causal). Higher levels constrain what lower levels can do, and this creates new possibilities; such channelling allows complex development. A thermostat is a non-adaptive cybernetic system with a feedback loop that uses information flow: feedback controls the physical process so that the goal determines the outcome and the initial conditions are irrelevant. Organisms have goals, and a goal is not a physical thing. Adaptation is a selection of states. DNA sequences are determined by their context, the environment; selection is a process of winnowing out important information.
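The thermostat point can be made concrete with a toy simulation (my sketch, not Ellis's code; the set-point, gain and step count are arbitrary): whatever the initial temperature, the negative-feedback loop delivers the goal state, so the set-point, not the starting condition, determines the outcome.

```python
# Toy thermostat: a feedback loop in which the goal (set-point) fixes the
# final state while the initial condition is irrelevant.

def thermostat(temp, setpoint=20.0, gain=0.3, steps=60):
    """Each step, nudge the temperature a fraction of the way to the set-point."""
    for _ in range(steps):
        temp += gain * (setpoint - temp)   # negative feedback on the error
    return temp

# Widely different starting states converge on the same goal state:
final = [round(thermostat(t), 3) for t in (-10.0, 5.0, 35.0)]
# final == [20.0, 20.0, 20.0]
```

The residual error shrinks by a factor of 0.7 per step, so after 60 steps every trajectory is indistinguishable from the goal: a minimal instance of a higher-level goal constraining lower-level dynamics.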

Where do goals come from? Goals are adaptively selected in the organism. The mind is a special case of adaptive selection in which symbolic representations play a role in determining how goals work. The plan of an aircraft is abstract, yet the aircraft could not exist without it. Money is causally effective because of its social system. Maxwell’s equations, a theory, gave us televisions and smartphones.

The brain constrains your muscles. Pluripotent cells are directed by their context. Because of randomness, selection can take place. The key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, a single high-level state. It is higher-level states that get selected for when adaptation takes place. Whenever you can find many low-level states corresponding to one high-level state, this indicates that top-down causation is going on.

You must acknowledge the entire causal web. There are always multiple explanations: top-down and bottom-up can both be true at the same time. Why do aircraft fly? A bottom-up physical explanation might refer to aerodynamics; a top-down explanation might refer to the design of the plane, its pilot and so on.

When we consider cause as relating to context we might also consider Aristotle’s categorisation of cause into four kinds:

Material – the lower-level (physical) cause – ‘that out of which’
Formal – the same-level (immediate) cause – ‘what it is to be’: the arrangement, shape, pattern or form which, when present, makes matter into a particular type of thing; its organisation
Efficient – the immediate higher (contextual) cause – the ‘source of change’
Final – the ultimate higher-level cause – ‘that for the sake of which’

When does top-down causation take place? Bottom-up explanation suffices when you don’t need to know the context (the perfect gas law, black-body radiation); but the vibrations of a drum depend on the container, cars etc.

Randomness at the bottom level is needed for selection to occur.

Like any explanation, biology itself is contextual: it uses the world of physics as background and then explains its own domain as best it can.

Martin Nowak, Harvard University Professor of Mathematics and Biology.

Explanation, testing and description.

‘Function’ derives from higher-order constructs in downward causation.

To understand what a neuron does we must know not only its structure or parts (analysis) but how it fits into the function of the brain as a whole (synthesis).

When we sense something, we are receiving a physically existent phenomenon, i.e. one that exists independently of our sensing of it.

Scope

Continua
Selective perception and cognition have the potential to create discrete categories out of objects that in nature we know to be continua (e.g. the continuous colour spectrum split into individual colours, or the continuous sound waves of spoken language broken up into words and meanings) or to make continua out of things that in nature we know to be discrete (as we do with all universals like ‘tree’, ‘table’ or ‘coat’). We can underestimate how different entities are when they occur in the same mental category and overestimate how similar they might be when placed in different categories. When using reducing categories we can lose sight of the big picture.

Causation, explanation, justification
Why did the hooligan smash the shop window?

Because in her evolutionary history violence was a useful adaptation
Because political parties are too soft-handed about law and order nowadays
Because she came from a rough neighbourhood
Because the police were out on strike
Because there was nobody around
Because of the negative influence of her peer group
Because her boyfriend told her to
Because her parents failed to teach her to respect property
Because her body produced a temporary surge in testosterone
Because neurons were firing in the anger regions of her brain
Because her genes indicate that she was predisposed to violence

Are all of these simultaneously true and relevant or can they be prioritised in some way? If prioritised – on what grounds?

What matters most in science – explanation, testing, or description?

Adaptive selection allows local resistance to entropy with a corresponding build up of useful information.

So what we can see at the largest and smallest scales is approaching the limits of what will ever be possible, except for refining the details.

Anton Biermans

If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum, or it ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot by definition be understood.

Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.

If you press the A key on your computer keyboard, you don’t cause the letter A to appear on your screen: you just switch that letter on with the A key, just as when you press the latch you don’t cause the door to open, but just open it. Similarly, if I let a glass fall from my hand, I don’t cause it to break as it hits the floor; I just use gravity to smash the glass, so there is nothing causal in this action.

Though chaos theory is often taken to say that the antics of a moth in one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane then the moth’s antics can only be a cause in retrospect, if the hurricane actually does happen; so it cannot be said to cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.

The flaw at the heart of big bang cosmology is that, in the concept of cosmic time (the time passed since the mythical bang), it states that the universe lives in a time continuum not of its own making: it presumes the existence of an absolute clock, a clock we can use to determine what, in an absolute sense, precedes what.

This originates in our habit in physics of thinking about objects and phenomena as if looking at them from an imaginary vantage point outside the universe, as if it were scientifically legitimate to look over God’s shoulder at His creation, so to say.

However, a universe which creates itself out of nothing, without any outside interference, does not live in a pre-existing time continuum but contains and produces all time within itself: in such a universe there is no clock we can use to determine what precedes what in an absolute sense, what is the cause of what.

For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’

Paul

There is no experiment which says that an act is good or bad. There are no units of good and bad, no measurements.

We are concerned with knowledge of reality, not reality itself. When we sense something, we are sensing something that exists independently of our sensing of it.

Cultural neuroscience is a good example.

Cosmic context: physics alone cannot tell us the outcome, because quantum uncertainty leaves physical outcomes indeterminate (34 mins).

The problem of prediction

At around 50 words we are struggling to understand a sentence.

Which of the following are abstract? Which are real? Which exist? Which can have causal influence: values, moral precepts, social laws, scientific laws, numbers, computer programs, thoughts, equations. Can something have causal influence and not exist? In what sense?

Complexity
Since the 1970s these issues have been subsumed by the study of complexity theory and complex systems: everything from organic complexity and neural networks to chaos theory, the internet and so on. At the core of the scientific enterprise are the causal relations between phenomena, and for physicist George Ellis it is causation acting within hierarchical systems that is the source of complexity in the universe, as summarised in his paper On the Nature of Causation in Complex Systems, which I outline briefly here.[1] In this paper Ellis explains the idea of ‘top-down’ causation.
http://humbleapproach.templeton.org/Top_Down_Causation/

Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’.

Top-down causation
Higher levels are causally real because they have causal influence over lower levels. It is top-down causation that gives rise to complexity – such as computers and human beings.

Ellis claims that bottom-up causation is limited in the complexity it can produce; genuine complexity requires a reversal of information flow from bottom-up to top-down, some coordination of effects. Tidal wave patterns in the sand, or the mesmerising movement of flocks of birds, which are governed by local rules, cannot compare with organic complexity.

Top-down causation in computers
What happens in this hierarchy? Top-down causation occurs when the boundary conditions (the constraints imposed at the edges of a system) and the initial conditions (its starting state) determine the consequences.

Top-down causation is especially prevalent in biology, but it also occurs in digital computers – the paradigm of mechanistic, algorithmic causation – and it is possible without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Abstract causation
Non-physical entities can have causal efficacy. High levels drive the low levels in a computer; the bottom levels enable but do not cause. A program is not the same as its instantiations.

A software program is abstract logic: it is not the stored electronic states in computer memory but their precise pattern (a higher-level relation), which is not evident in the electrons themselves.

Logical relations
High-level algorithms determine what computations occur according to an abstract logic that cannot be deduced from physics.

Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except the processing speed). It facilitates higher-level actions rather than constraining them.

Multiple realization
The same high-level logic can be implemented in many ways (electronic transistors, relays, hydraulic valves, biological molecules), demonstrating that the lower-level physics is not driving the causation: higher-level logic can be instantiated by equivalence classes of lower-level states. For example, our bodies are still us even though their cells are different from those we had ten years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, lit pixels or printed ink … but it is still the letter ‘p’. The higher-level function drives the lower-level interactions, which can happen in many different ways (‘information hiding’), so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher-level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
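Multiple realization is easy to demonstrate in software (a sketch of my own, not from the text): two entirely different low-level procedures, insertion sort and merge sort, realize the same high-level function and so belong to one equivalence class defined at the higher level, not by their implementation.

```python
# Two low-level realizations of one high-level function: sorting.

def insertion_sort(xs):
    """Build the result by inserting each item into its place."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """Split recursively, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 3, 8, 1, 2]
# Different low-level processes, identical high-level behaviour:
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

What gets specified (and, in an adaptive setting, selected) is the high-level relation ‘output is ordered’; which lower-level process realizes it is hidden from, and irrelevant to, the caller.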

Are thoughts, like computer programs and data, non-physical entities?

How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Simply: by placing constraints on what is possible at the lower level; by changing properties in combination, as when an independent hydrogen molecule combines with oxygen to form water; where low-level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; and when random fluctuation and quantum indeterminacy affect the low-level physics.

SUMMARY
Classical reductionism
Given the initial conditions and sufficient information we can predict future states: outcomes are determinate. The physicist Paul Dirac claimed that ‘chemistry is just an application of quantum physics’. This appears to be physically untrue in many ways. Current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If this is the case then prediction becomes a false hope even at this early stage, quite apart from any other chaotic factors arising from complexity.

The origin of novelty & complexity
Biology presents us with many fascinating examples of how organic complexity arises: for example, how the iteration of simple rules can give rise to complexity, as with the ‘fractal genes’ that produce the scale-free bifurcating systems of the pulmonary, nervous and circulatory systems.
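The idea can be sketched with a toy rule (my illustration; the 0.7 scaling factor and depth are arbitrary assumptions, and no real gene is this simple): a single recursive instruction, ‘split in two and shrink’, specifies an entire bifurcating tree of the kind seen in lungs, neurons and blood vessels.

```python
# One iterated rule specifying a whole bifurcating system.

def bifurcate(length, depth):
    """Return the lengths of all segments in a binary branching tree."""
    if depth == 0:
        return []
    # the entire 'genetic' instruction: a branch of length L
    # spawns two branches of length 0.7 * L
    return [length] + 2 * bifurcate(0.7 * length, depth - 1)

segments = bifurcate(1.0, 5)
# 1 + 2 + 4 + 8 + 16 = 31 segments, all from a one-line rule
```

A mutation in such a rule (say, changing the 0.7) alters every level of the tree at once, which is why changes to fractal rules are easy to recognise.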

What is not clear is why this should be considered in some way special or unaccounted for by molecular interaction.

Can a reductionist view of reality account for the origin of complexity: can examining parts explain how the whole hangs together?

Clearly material organisational novelty must arise, since the universe, which was once undifferentiated plasma, now contains objects as structurally complex as living organisms.

When some elements combine they take on a completely new and independent character. An example of emergence: multiple individuals acting randomly assume some kind of spontaneous ordering or structuring.

Within the theme of emergence there is often observation of the teleonomic (goal-directed) character of evolutionary novelty and of the way organic systems are functional wholes: an organism acting under natural selection illustrates how novel complex outcomes can be achieved from a simple algorithm or underlying set of rules.

Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of constituent processes.

Top-down causation
Top-down causation is more common than bottom-up.

We cannot predict the future structure of DNA coding from its own structure: that is determined by the environment.

These are presented as systems operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged, much as Wikipedia emerges from a grass-roots base rather than from scholarly entries imposed ‘top-down’.

Hierarchy of causation
The idea of hierarchy can add further complication and confusion through the notions of ‘bottom-up’ and ‘top-down’ causality, and through the fact that biological systems have a special kind of history as a consequence of the teleonomic character of natural selection, which leads us to ask about function: what is the whole, or the part, for (implying the future)?

In complex systems there is often strength in quantity, randomness, local interactions with simple iterated rules, and gradients; generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation.

Unpredictability – is there a blueprint from the ‘start’? In evolution we see convergence from different starting points.

Simile and association.

Hierarchy
Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that higher-level activities do not exist, the task of mechanistic explanation is to show how they arise from the parts.

Examples of emergence come from many disciplines.

In language we have letters, words, sentences and paragraphs, exhibiting increasing complexity and inclusiveness, with meaning as an emergent property. Meaning imposes a top-down constraint on the choice of words, while the words constrain the meanings that can be expressed.

When a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.

In sociology although agency seems to ultimately derive from the individual we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation can also be strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but by changing the rules of operation: in the case of society this could be social laws or customs of various kinds.

This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.
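A toy network model (entirely my own construction; the ring network and threshold rule are illustrative assumptions) shows how the two methodologies combine: the individuals (nodes) are identical in both runs, but changing the collective rule of adoption changes the social outcome.

```python
# Individuals as nodes in a web of connections: a node adopts a behaviour
# once at least `threshold` of its neighbours have adopted it.

def spread(edges, seeds, threshold):
    nodes = {n for e in edges for n in e}
    neighbours = {n: set() for n in nodes}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for n in nodes - adopted:
            if len(neighbours[n] & adopted) >= threshold:
                adopted.add(n)
                changed = True
    return adopted

ring = [(i, (i + 1) % 6) for i in range(6)]   # six people in a circle
# Same individuals, different collective rule, different outcome:
everyone = spread(ring, seeds={0}, threshold=1)   # adoption sweeps the ring
few = spread(ring, seeds={0}, threshold=2)        # adoption never spreads
```

Here change is instigated not by altering the composition of the whole but by altering the rule of operation, the network analogue of changing a social law or custom.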

Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?

We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).

The characteristics of emergent or complex systems include ‘self-regulation’ by feedback loops, and a large number of variables that are causally related but in a ‘directed’ way, exhibiting some form of natural selection through differential survival and reproduction, or unusually constrained path-dependent outcomes, as occurs in markets. Emergence may be a particular consequence of diversity, complexity, organisation and connectivity.

Economist Jeffrey Goldstein, in the journal Emergence, isolates key characteristics of emergence: radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; being the product of a dynamic process (it evolves); and supervenience (the lower-level properties of a system determine its higher-level properties).

To summarise: when considering wholes and parts in general we need to consider specific instances. Some wholes are more or less straightforwardly additive (sugar grains aggregated into a sugar lump) but other wholes grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.

Part of the disagreement between reduction and emergence can be explained by regarding wholes as having parts that are more or less interdependent. At one end of the spectrum are aggregates and at the other living organisms. As we shift cognitive focus the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant body is unlikely to be problematic although removing an organ could be, while removing an ant from its colony is probably unproblematic. Wholes sometimes only exist because of the precise relations of the parts – in other wholes it does not matter. Sometimes characteristics appear ‘emergent’ (irreducible as in organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes).

Chaos
Chaos theory draws attention to the way that complex systems are aperiodic and unpredictable (chaotic), and to the way that variability in a complex system is not noise but the way the system works.

(Methodological reductionism assumes a causal relationship between the elements of structure and higher-order constructs (‘function’), holding that the right way, or the only way, to understand the whole is to understand the elements that compose it. The criticism of this is deep: it claims not only that the whole cannot be understood by looking solely at the parts, but also that the parts themselves cannot be fully understood without understanding the whole. That is, to understand what a neuron does, one must understand how it contributes to the organisation of the brain, or more generally of the living entity: you cannot understand a phenomenon just by looking at its elements, at whatever scale they are defined; you must also take into account all the relationships between them.)

[Are levels fractal – is there the same noise at each level?]

Complex systems
What kind of scientific questions will we want to answer in the 21st century?

There appears to have been a change in the character of scientific questions towards the end of the 20th century. Science in the 21st century is dealing much more with complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; what is the probability that I might die of some heritable disease; what is the degree of risk related to the use of a particular genetically modified organism; will interest rates be higher in six months’ time and, if so, by how much?

Fundamentalism suggests that a partial cause is the whole cause. Why does a plane fly? (Air molecules under the wing; it has a pilot; it was designed to fly; there is a timetable; the airline must make a profit.)

Historical background
Few-variable simplicity
In very general terms the physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane.[1 pg]

In contrast, the processes going on in biological systems seemed to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to the formulation of law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations.

Disorganised complexity
After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination. Though the movement of ten billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer and quantify general questions about their collective behaviour (how frequently they collide, how far each moves on average before being hit, and so on) even when we have no idea of the behaviour of any individual ball. In fact, as the number of variables increases certain calculations become more accurate: say, the average frequency of calls to a telephone exchange, or the likelihood of any given number being rung by more than one person. This allows insurance companies and casinos to calculate odds and ensure that the odds are in their favour, even when the individual events (say, the age at death) are unpredictable, unlike the predictable way a billiard ball behaves. Much of our knowledge of the universe and of natural systems depends on calculations of such probabilities.
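The billiard-ball point can be sketched numerically (my illustration; the seed and flip counts are arbitrary): no single coin flip is predictable, yet the aggregate frequency converges on one half as the number of events grows — Weaver's ‘disorganised complexity’.

```python
# Individual events unpredictable, aggregates sharp: the mathematics of averages.
import random

def average_heads(n_flips, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The more variables (flips), the closer the aggregate sits to the true 0.5:
for n in (10, 1_000, 100_000):
    print(n, average_heads(n))
```

This is exactly the regularity that lets an insurer price a policy without knowing which individual will claim.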

Organised complexity
Disorganised complexity has predictive power because of the predictable randomness of the behaviour of its components – the mathematics of averages.

But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere.

Chaos in dynamical systems is sensitive dependence on initial conditions; it shows how the iteration of simple patterns can produce complexity.

A complex system consists of many simple interconnected components, often arranged as a network, linked through a complex non-linear architecture of causation with no central control, producing emergent behaviour. Emergent behaviour, such as scaling laws, can entail hierarchical (nested, scalar) structure, coordinated information-processing, dynamic change and adaptive behaviour (complex adaptive systems – ecosystems, the biosphere, the stock market – with self-organisation, non-conscious evolution, ‘learning’ or feedback).
Examples are an ant colony, an economic system and the brain.

Simplicity involves few variables; disorganised complexity involves many variables whose behaviour can be averaged; organised complexity is where the whole is greater than the sum of the parts and behaviour is not simply additive.

Complex systems
Dynamic systems theory works on the mathematics of how systems change.

Chaos
Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions—an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general.[1] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[2] In other words, the deterministic nature of these systems does not make them predictable.[3][4] This behavior is known as deterministic chaos, or simply chaos. This was summarised by Edward Lorenz as follows:[5] Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

Fractals
Fractals show self-similarity at different scales, created mathematically through iteration. They are evident in biological networks – branching veins, the nervous system, roots, the lungs – as a form of optimal space-filling that can also be applied to human networks. Fractal structure is also a means of packing information into a small space, as in the brain.
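The arithmetic of such iteration can be made concrete with the Koch curve (a standard fractal chosen here for illustration): each iteration replaces every segment with four segments a third as long, so total length grows by 4/3 each step while the self-similarity dimension log 4 / log 3 stays fixed.

```python
import math

# Self-similarity through iteration: each Koch-curve iteration replaces
# every segment with four segments one third as long, so the total
# length is multiplied by 4/3 each time while the curve stays bounded.

def koch_length(iterations, base_length=1.0):
    segments, seg_len = 1, base_length
    for _ in range(iterations):
        segments *= 4       # each segment becomes four smaller ones
        seg_len /= 3.0      # each new segment is a third as long
    return segments * seg_len

for n in range(5):
    print(f"iteration {n}: total length {koch_length(n):.4f}")

# Self-similarity dimension: N = 4 copies at scale factor s = 3.
dimension = math.log(4) / math.log(3)
print(f"fractal dimension log4/log3 = {dimension:.3f}")
```

The dimension of about 1.26 – between a line (1) and a plane (2) – is what ‘fractional dimension’ means.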

Nature is three-dimensional, and fractal geometry is an important mathematical tool for describing it.

Citations & notes
[1] Weaver, W. 1948. Science and Complexity. American Scientist 36:536

http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf

Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.

Reductionism
Western science has for about 500 years tackled such situations by breaking them down into their component parts, concluding that if you know the state of a system at time A then it should be possible to determine its state at time B. On this view the whole is the sum of its parts: complex systems are additive.

Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability, making the system non-linear, non-additive, non-periodic and chance-prone. Though such a system is predictable over short time spans, long-term predictions are not possible. Minute differences in a particular state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings can change a major weather pattern – the ‘butterfly effect’). The unpredictability is scale-free, a fractal (fractional dimension).

Scale-free systems
These are often produced as a logical and most efficient solution to a biophysical problem.

Some systems and patterns are scale-free: they look and behave the same at any scale. For example, bifurcating systems, where a system repeatedly divides into two like the branching of a tree, occur in neurons, the circulatory system (where no cell is more than 5 cells from the circulatory system, although the circulatory system makes up no more than 5% of the total body mass), and the pulmonary system.

Scale-free systems can be generated from simple rule(s) and one property of such systems is that the variability that occurs at any particular scale is proportionally the same: it does not decrease as the scale is reduced.

Much of the complexity of living systems can be accounted for by ‘fractal genes’, which encode complex systems using simple rules; mutations in fractal genes can be recognised easily.

Simple rules can arise in nature through a variety of sources:

• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest route between a number of points)
• Power-law distributions (which are fractal – the neurons of the cortex follow a power-law distribution of their dendrites, making the cortex an ideal structure as a neural network for pattern recognition)
• Wisdom of the crowd, when the crowd’s members are truly expert and unbiased (or evenly biased)
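The power-law point in the list above can be sketched with a toy ‘rich-get-richer’ network (a hypothetical illustration – the growth rule and parameters are assumptions, not from the text): nodes that already have many links attract more, producing the heavy-tailed degree distribution characteristic of scale-free systems.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the run is repeatable

def preferential_attachment(n_nodes):
    # endpoints holds one entry per link-end, so a uniform choice from it
    # picks an existing node with probability proportional to its degree.
    endpoints = [0, 1]                 # start with a single edge 0-1
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        endpoints += [new, target]     # the new node links to the target
    return Counter(endpoints)          # node -> degree

degrees = preferential_attachment(5000)
print("three best-connected nodes:", degrees.most_common(3))
print("fraction of nodes with a single link:",
      round(sum(1 for d in degrees.values() if d == 1) / len(degrees), 2))
```

A handful of hubs accumulate very many links while most nodes keep one – the heavy tail that a normal distribution would never produce.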

Emergence
One feature of complex systems is that you cannot predict a finishing state from a starting state of the system – but often from widely divergent starting states we see a convergence to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in particular environments (compare the independent origins of agriculture).

Cellular automata provide a simple model of such emergence.
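A sketch of Wolfram's elementary Rule 30 (a standard example chosen here for illustration): each cell is updated from purely local information, yet a complex global pattern emerges from a single live cell.

```python
# Elementary cellular automaton, Wolfram's Rule 30: each cell's next
# state depends only on itself and its two neighbours (wrapping at the
# edges), yet the global pattern from one live cell is complex.

RULE = 30  # the rule number's bits give the output for each neighbourhood

def step(cells):
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((RULE >> neighbourhood) & 1)
    return out

row = [0] * 31
row[15] = 1                # a single live cell in the middle
for _ in range(15):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

No cell ‘knows’ the overall pattern; the structure exists only at the level of the whole row's history.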

Holism

Explanatory frameworks & categories

Continua
Sometimes, for convenience, we break down things which are continuous in nature into discontinuous categories. The most obvious example is the colour spectrum, which, though physically continuous, we break up into the colours of the rainbow. Though we are aware of what we are doing, we are less aware of some of the consequences: we underestimate how different entities are when they occur in the same category; we overestimate how different they are when they are placed in different categories; and when using reducing categories we can lose sight of the big picture.

Primacy of explanation
For example, in providing explanations that ‘reduce’ complexity we can place undue emphasis on particular ‘levels’ or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people, the hormones that drive it, the genes that trigger the production of those hormones, the processes going on in the brain when the behaviour occurs, or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others; each is equally valid, and which explanation is most appropriate will depend on the particular circumstances.

Nature and nurture interact in subtly nuanced ways, remembering also that though the brain can influence behaviour, the body can influence the brain: there is a subtle causal interplay between brain and body.

Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable – there is in fact a universal physical law of entropy, a tendency towards randomness – it is a historical fact that …

9. As things get more complex they become less predictable.
10. Quantity can produce quality (chimps share 98% of our DNA; half the difference is olfactory, the rest is a matter of the quantity of neurons and of genes that release the brain from genetic influence)
11. The simpler the constituent parts the better
12. More random noise produces better solutions in networks
13. There is much to be gained from the power of gradients, attraction and repulsion
14. Generalists work better than specialists (more adaptive)
15. All emergent adapted systems ‘work’ from the bottom up, not the top down: they arise without blueprints or designers (e.g. Wikipedia)
16. There is no ‘ideal’ or optimal complex system except insofar as it is the ‘best adapted’, which is a general, not a precise, condition.

Hierarchies and heterarchies.

One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom up fashion, from micro to macro scales.

The edge of the observable universe is about 46–47 billion light-years away.

What matters most – explanation, testing, or description?

The key point about adaptive selection (one-off or repeated) is that it lets us locally go against the flow of entropy, and this lets us build up useful information.

Daniel Bernstein
Though I believe that any and all interactions can be expressed and described in terms of the fundamental aspects of reality, we lack the theory to do so. And even if we did have such theory that would show all higher scale interactions to be emerging from the fundamental interactions, the amount of data necessary to track every elementary particle and force would prohibit the description of even the simplest systems.

My understanding is that objects are structurally bound if, within a given scale of reality and under the effect of a given force associated with that scale, they behave as a single object. So the mathematical models of a particular scale of physical reality can treat composite objects as “virtually fundamental”, in such a way that top-down or bi-directional causalities not only make sense but become the only workable alternative to tracking the interactions between the fundamental particles composing the interacting structures.

So what we can see at the largest and smallest scales is approaching what will ever be possible, except for refining the details.

Anton Biermans
I’m afraid that you (and everybody else, for that matter) confuse causality with reason.

If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum, or it ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot be understood by definition.

Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.

If you press the A key on your computer keyboard, you don’t cause the letter A to appear on your computer screen but just switch that letter on with the A key, just like when you press the door handle you don’t cause the door to open, but just open it. Similarly, if I let a glass fall out of my hand, I don’t cause it to break as it hits the floor; I just use gravity to smash the glass, so there’s nothing causal in this action.

Though chaos theory is often thought to say that the antics of a moth in one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane then the moth’s antics can only be a cause in retrospect, if the hurricane actually does happen – so it cannot cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.

The flaw at the heart of Big Bang Cosmology is that in the concept of cosmic time (the time passed since the mythical bang) it states that the universe lives in a time continuum not of its own making, that it presumes the existence of an absolute clock, a clock we can use to determine what in an absolute sense precedes what.

This originates in our habit in physics to think about objects and phenomena as if looking at them from an imaginary vantage point outside the universe, as if it is legitimate scientifically to look over God’s shoulders at His creation, so to say.

However, a universe which creates itself out of nothing, without any outside interference, does not live in a time continuum that pre-exists it but contains and produces all time within: in such a universe there is no clock we can use to determine what precedes what in an absolute sense – what is the cause of what.

For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’

Ontology is clear at all levels except the quantum level.

Computer systems illustrate downward causation: the software tells the hardware what to do, and what the hardware does will depend on the software. What drives it is the abstract informational logic in the system, not the physical medium, such as a USB stick, that carries it. The context matters. Five kinds of downward causation can be distinguished: algorithmic; non-adaptive information control (systems with feedback such as a thermostat, the heart beat, body temperature); adaptive selection; adaptive information control; and intelligent (goal-directed).

So it is the goals that determine outcomes, and the initial conditions are largely irrelevant.
In living systems the best example of downward causation is adaptation, in which the environment is a major determinant of the structure of the DNA.

Origin (emergence) of complexity: specific outcomes can be achieved in many low-level implementations, because it is the higher level that shapes outcomes (is causal). Higher levels constrain what lower levels can do, and this creates new possibilities; such channelling allows complex development. A non-adaptive cybernetic system with a feedback loop – a thermostat – uses information flow: feedback controls the physical structure, so the goals determine outcomes and the initial conditions are irrelevant. Organisms have goals, and a goal is not a physical thing. Adaptation is a selection state: DNA sequences are determined by their context, the environment. Selection is a process of winnowing out important information.

Where do goals come from? Goals are adaptively selected in the organism. Mind is a special case of adaptive selection in which symbolic representations take a role in determining how goals work. The plan of an aircraft is abstract, and the aircraft could not be built without it. Money is causally effective because of the social system it operates in. Maxwell’s equations – a theory – gave us televisions and smartphones.

The brain constrains your muscles. Pluripotent cells are directed by their context. Because of randomness, selection can take place. The key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, a single high-level state. It is higher-level states that get selected for when adaptation takes place. Whenever you can find many low-level states corresponding to a high-level state, this indicates that top-down causation is going on.
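The equivalence-class idea can be made concrete with a toy coarse-graining (a hypothetical illustration; the parity function is an assumption chosen for simplicity): many low-level states map to each high-level state, so fixing the high-level state leaves the low-level realization undetermined.

```python
from collections import defaultdict
from itertools import product

# Toy coarse-graining: low-level states are bit-triples; the high-level
# state is their parity. Several low-level states realize each
# high-level state, so selecting a high-level state does not fix the
# low-level realization.

micro_states = list(product([0, 1], repeat=3))
classes = defaultdict(list)
for state in micro_states:
    classes[sum(state) % 2].append(state)   # high-level state = parity

for macro, members in sorted(classes.items()):
    print(f"high-level state {macro}: {len(members)} low-level realizations")
```

Selection acting on ‘odd parity’ picks out a class of four microstates, not any particular one – the many-to-one mapping that signals top-down causation in the account above.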

You must acknowledge the entire causal web. There are always multiple explanations – top-down and bottom-up can both be true at the same time. Why does an aircraft fly? The bottom-up physical explanation is air pressure; the top-down explanation is that it was designed to fly (by a pilot, to a timetable, to make a profit). All are simultaneously true and relevant.

Aristotle was right about cause.

Material – the lower-level (physical) cause – ‘that out of which’
Formal – the same-level (immediate) cause – ‘what it is to be’: the arrangement, shape, appearance, pattern or form which, when present, makes matter into a particular type of thing
Efficient – the immediate higher (contextual) cause – ‘the source of change’
Final – the ultimate higher-level cause – ‘that for the sake of which’

When does top-down causation take place? Bottom-up explanation suffices when you don’t need to know the context – the perfect gas law, black-body radiation. Top-down matters when context counts: the vibrations of a drum depend on the shape of its container, the behaviour of a car’s components depends on the car, and so on.

Cultural neuroscience is a good example.

Determinism is challenged by chaos theory and by quantum uncertainty (entanglement).

Recognise we are not at the centre of the knowledge universe. Epistemologically the axiomatic method has limitations. Human conceptual frameworks do not set the limits to knowledge – we need ways of understanding how machines represent the world.

Reduction & causation

Causation underlies the workings of the universe and our discourse about it. Anyone who is curious about the natural world must at some time or another in their lives have wondered about the true nature of causation, especially those people with a scientific curiosity. This series of articles on causation became necessary, not only for these reasons, but because causation is so frequently called on to do work in the philosophical debate about reductionism and today’s competing scientific world views.

It is dubious whether the reduction of causal relations to non-causal features has been achieved, and scientific accounts are strong alternatives, with revisionary non-eliminative accounts finding favour. Can emergent entities play a causal role in the world? And is causation confined to the physical realm?

The issues to be addressed here are, firstly, whether causation itself can be reduced to something simpler and, secondly, the role that causation plays in causal interactions that operate within and between domains of knowledge. The outline of this article follows the account given by Humphreys in the Oxford Handbook of Causation of 2009.[2]

At the outset it is important to distinguish between reduction of the objects of investigation themselves (ontological reduction) and linguistic or conceptual reduction, the reduction of our representations of those objects.

Reduction of causation itself
Eliminative reduction of causation
We must decide whether causation is itself amenable to reductive treatment. Reduction may be eliminative reduction, in which the reduced entity is considered dispensable because inaccessible (Hume’s claim that we do not experience causal connection), so that we can eliminate it from our theoretical discourse and/or our ontology (the Mill/Ramsey/Lewis model), substituting phenomena that are more amenable to direct empirical inspection. The most popular theory of this kind is Humean lawlike regularity, but in this group would also be the logical positivists, logical empiricists (e.g. Ernest Nagel, Carl Hempel), Bertrand Russell, and many contemporary physicalists with an empiricist epistemology. Hume’s view was that we arrive at cause through the habit of association; in this way he removed causal necessity from the world by giving it a psychological foundation. A benign expression of this view would be that ‘C caused E when, from initial conditions A described using law-like statements, it can be deduced that E’.

Non-eliminative reduction of causation
Causation is so central to everyday explanation, scientific experiment, and action that many have adopted a non-eliminative position: X is reduced to Y but not eliminated, simply expressed in different concepts like probabilities, interventions, or lawlike regularities. Non-eliminativists like the late Australian philosopher David Armstrong hold that causation is essentially a primitive concept that we can at least sometimes access epistemically as contingent relations of nomic necessity among universals, and thus amenable to multiple realization; this contrasts with eliminativist accounts that explain causation in non-causal terms.

Revisionary reduction of causation
Here the reduced concept is modified somewhat, as when folk causation is replaced by scientific causation. Most philosophical and self-conscious accounts of causation are revisionary to a greater or lesser degree.

Circularity
Many accounts of causation include reference to causation-like factors as occurs with natural necessity, counterfactual conditionals, and dispositions in what has become known as the modal circle. The fact that no fully satisfactory account of causation can totally eliminate the notion of cause itself is support for a primitivist case.

Domains of reduction
Discussions in both science and philosophy refer to ‘levels’ or ‘scales’ or ‘domains’ of both objects and discourse. So physics is overtopped by progressively more complex or inclusive layers of reality such as chemistry, biochemistry, biology, sociology etc. This hierarchically stratified characterization of reality is discussed elsewhere. Here the task is to examine the way causation might operate within and between these different objects and domains of discourse.
The attempt at reducing one domain to another is not a straightforward translation, as an account must be given of the different objects, terms, theories, laws, and properties and their role in causal processes. The preferred theory of causation (whether, say, a singularist or regularity theory) will be pertinent to what kind of causal reduction may be possible.

Relations between domains
Suppose we are engaged in the reduction of a biological process to one in physics and chemistry, say the reduction of Mendelian genetics to biochemistry: what kinds of causal interactions might we invoke? The causal relation might be: a relation of identity; an explicit definition; an implicit definition via a theory; a contingent statement of a lawlike connection; a relation of natural or metaphysical necessitation, as in supervenience; an explanatory relation; a relation of emergence; a realization relation; a relation of constitution; or even causation itself. If indeed the causation were different in different domains then this might render reduction restricted or impossible. Accounts like counterfactual analysis are domain-independent. (p. 636)

However, there are domain-specific claims such as physicalism’s Humean supervenience. Under some theories causation is restricted to physical causation as the transfer of conserved physical quantities and this is difficult to apply to the social sciences.

Domain-specific causation & physicalism
Could it be that causation in biology is different from that in physics or sociology, or is causation of the same general kind – is there ‘social cause’ and ‘biological cause’, or just ‘cause’? The most contentious area here is mental causation, where intentionality is often treated as ‘agency’ rather than ‘event’ causation.

Supervenience
In the 1960s domain reduction was promoted through the reduction of theories via bridging laws (Ernest Nagel). One major challenge for such an approach has been multiple realization, whereby something like ‘pain’ can be expressed physically in so many ways that this renders its further reduction unlikely, although this has been countered by supervenience accounts. For example, Humean supervenience regards the world as the spatio-temporal distribution of localized physical particulars, with everything else, including laws of nature and causal relations, supervening on this. (p. 639) Supervenience is generally regarded as a non-reductive relation.

Functionalism
Functionalism characterizes properties in terms of their causal roles under multiple realization. Money is causally realized by coins, cheques, promissory notes etc. The role of ‘doorstop’ can be functionally and reducibly defined, so not all cases of multiple realization are irreducible; irreducibility needs to be taken case by case. For Kim (1997; 1999) ‘Functionalization of a property is both necessary and sufficient for reduction …. it explains why reducible properties are predictable and explainable’. Since almost all properties can be functionalized, few need to be candidates for emergent properties. (p. 644)

Upward & downward causation
The restriction of cause to physical domains is supported by the downward causation and exclusion argument.

Causal exclusion principle & non-reductive physicalism
The causal exclusion principle states that there cannot be more than one sufficient cause for an effect. If we accept this, then how are we to account for the causes we allocate at large scales, say the cause of a rise in interest rates? What is the causal relevance of multiply realizable or functional properties (redness, pain, and mental properties)? Does this principle automatically devolve into smallism – that we ultimately explain everything all the way down to leptons and bosons, or smaller and more basic entities when we find them, because they are the ones doing the causal work? How can a macro situation have causal relevance if it can be fully accounted for at the micro scale? These properties then become epiphenomena: by-products with no causal power of their own.

If C is causally sufficient for E then any other event D is causally irrelevant to E. Every physical event E has a physical event C causally sufficient for it. So if event D supervenes on C while being distinct from C, D is excluded as a cause of E.

There is increasing evidence supporting the causal autonomy of disciplinary discourse, or non-reductive physicalism. Properties in the special sciences are not identical to physical properties, since they are multiply realized, although they do supervene on (instances of) physical properties, since changes in the special properties entail changes in the physical properties; further, the special properties are causes and effects of other special properties.

A large-scale cause can exclude a small-scale cause. Pain might cause screaming while there is no equivalent neural property. This occurs when the trigger is extrinsic to the system. The pain resulting from a pin prick is initiated by the pin; it cannot possibly be initiated at the neural scale.

The exclusion principle can be applied to any kind of event that supervenes on physical events, and shows that there is no clear causal role for supervening events.

The main questions to be addressed in relation to causation and reduction are: can causation itself be reduced; is there a base-level physicochemical causation underlying all other forms of causation; and how does causation operate (a) within and (b) between non-physicochemical domains of discourse and scales?

In posing these questions it should be noted that it is customary to discuss different academic disciplines as different domains of knowledge that use their own specific terminology, theories and principles. So, for example, we have physics, chemistry, biology, and sociology being referred to as ‘domains of discourse’ and stratified into ‘levels’ or ‘scales’ of existence. From the outset a careful distinction must be made between ontological reduction, the reductive relations between objects themselves, and linguistic or conceptual reduction, which deals with our representations of these objects.

Cause & reductionism
So far in discussing reductionism it has been noted that at present we explain the world scientifically using several scales or perspectives. These scales correspond approximately to particular specialised academic disciplines with their own objects of study, terminologies, theories, and principles. One possible way of expressing this would be: matter, energy, motion, and force (physics); living organisms (biology); behaviour (psychology); and society (sociology, politics, economics). Each discipline has its own specialist objects of study, like quarks (physics), lungs (biology), desires (psychology), and interest rates (economics). Since it has been argued that each discipline is addressing the same physical reality from different perspectives or scales, the question arises as to the causal relationships between their various objects of study. How do we reconcile causation at the fundamental-particle scale with causation at the political scale, assuming that the physical reality they are dealing with is the same – that is, how are causes related across different scales, perspectives, or, in the old terminology, ‘levels of organisation’?

To answer this question we need to do some groundwork … our modest philosophical program is to ask: What is causation and in what sense does it exist? Is it something that exists independently of us and, if not, in what way does it depend on us? Is causation part of the human-centred Manifest Image? What role does causation play in our reasoning? In other words, we need to demonstrate that causation is either a fundamental fact of the universe, or some kind of mental construct, or something that can be explained in different and simpler terms.

If we assume that explanation proceeds by analysis or synthesis, and we regard fermions and bosons as the smallest units of matter, then causation must act primarily from the wider context. A rise in interest rates, or the pumping of a heart, cannot be initiated by fermions and bosons themselves. To make sense of the fermions and bosons that exist in a heart we must consider their wider context.

Does causation occur at all scales, depending on its initiators, or is there a privileged foundational scale, with macroscales explained by microscales – genes (in humans, about 25,000 genes and 100,000 proteins) coding for proteins, cells, tissues, organs, and the organism? That is, a causal chain that leads to progressively larger, more inclusive, and complex structures: the central dogma of genetic determinism. But does causation also occur between cells, organs, and tissues? Genes are triggered by transcription factors that turn them on and off; the environment is causal from outside the organism, along with other constraining factors at all scales, as in homeostasis. Evolution occurs through changes in the genotype that are produced by selection of the phenotype, natural selection expressing the organism-environment continuum.

If ‘levels’ or ‘scales’ do not exist as separate physical objects then there is only one fundamental mode of being. There is simply one physical reality that can be interpreted or explained in different ways: it has no foundational scale or level.

Weak emergence: descriptions at scale X are shorthand for those at scale Y. Strong emergence: descriptions at scale X cannot be derived from those at scale Y.

Universal laws apply to biology, an unsupported elephant will fall to the ground, but biology has its own causal regularities that are, of their very nature, restricted to living organisms.

A cause can be sufficient for its effect but not necessary (a piece of glass C starting a fire E) – we can infer E from C but not vice-versa; it may be necessary but not sufficient (the presence of oxygen C in a fire-prone region E) – we can infer C from E but not vice-versa. Under this characterization cause can be defined as sufficient conditions (or even necessary and sufficient conditions).
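These inference directions can be checked mechanically over toy possible worlds (the world lists are hypothetical illustrations): ‘C is sufficient for E’ means every C-world is an E-world, and ‘C is necessary for E’ means every E-world is a C-world.

```python
# Toy possible-worlds check of the two inference directions:
# C sufficient for E  =  every C-world is an E-world (C => E),
# C necessary for E   =  every E-world is a C-world (E => C).

def implies(p, q):
    return (not p) or q

def sufficient(worlds):
    return all(implies(c, e) for c, e in worlds)

def necessary(worlds):
    return all(implies(e, c) for c, e in worlds)

# Glass (C) always starts a fire (E), but fires can start without glass:
glass_worlds = [(True, True), (False, True), (False, False)]
print(sufficient(glass_worlds), necessary(glass_worlds))  # True False

# Oxygen (C) accompanies every fire (E) but does not guarantee one:
oxygen_worlds = [(True, True), (True, False), (False, False)]
print(sufficient(oxygen_worlds), necessary(oxygen_worlds))  # False True
```

The asymmetry of the two checks is exactly the asymmetry of inference in the text: sufficiency licenses C-to-E inference, necessity licenses E-to-C inference.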

Some scales of explanation or causal description are more appropriate than others. It is possible to provide an explanation that is either overly general or overly detailed. What is appropriate depends on the causal structure – on what would provide the most effective terms and structures for empirical investigation. This contrasts with the view that there is a fundamental or foundational scale at which explanation is most complete (Woodward 2009). Causes need to be appropriate to their effects: bosons do not influence interest rates, nor do interest rates affect the configuration of sub-atomic particles. Fine-grained explanations may be more stable, but not always (Woodward 2009).

One area where this tension expresses itself is in the argument over the mechanism of biological selection in evolution. Should we regard natural selection as ultimately and inevitably a consequence of what is going on in the genes (see Richard Dawkins’ book The Selfish Gene), or are there causal influences that operate between cells, between tissues, between individuals, between populations, and in relation to causes generated by the environment?

Noble, D. 2012. A Theory of Biological Relativity. Interface Focus 2: 55-64.

It is widely assumed that large-scale causes can be reduced to small-scale causes, the macro to micro: that macro causation frequently (but not always) falls under micro laws of nature. This presupposes a means of correlating the relata at the different scales. This might be interpreted as microdeterminism, the claim that the macro world is a consequence of the micro world. The causal order of the macro world emerges out of the causal order of the micro world. A strict interpretation might be that a macro causal relation exists between two events when there are micro descriptions of the events instantiating a physical law of nature and a more relaxed version that there are causal relations between events that supervene. It might also be the case that even if there is causal sufficiency and completeness the existence of necessitating lawful microdeterminism (laws) does not entail causal completeness. Perhaps in some cases there is counterfactual dependence at the macro but not the micro scale.

Granularity & reductionism
We are tempted to think that we can improve on the precision of causal explanations. Could or should we try to improve the precision of causal explanations by giving more detail or being more scientific? For example, I might explain how driving over a dog was related to my personal psychology, the biochemical activity going on in my brain, the politics of the suburb where the accident occurred, and so on. That is, the explanation could be given using language and concepts taken from different domains of knowledge: psychology, politics, sociology, biochemistry and so on. The same situation can be described using different domains of knowledge, scales of existence, and so on. What is of special interest is that the cause will be different depending on the perspective chosen. For simplicity, the level of detail chosen for an explanation is referred to as its granularity. This raises the problems of reduction discussed elsewhere. Is there a foundational or more informative scale or terminology that can be used? Is an explanation taken to the smallest possible physical scale the best explanation? Are the causal relations dependent on more metaphysically basic facts like fundamental laws? Do facts about organisms beneficially reduce to biochemical facts … and so on. Is fine grain best?

Principle 3 – Any description of causation presents the metaphysical challenge of selecting the grain of the terms and conditions to be employed

We can appear to express the same cause using different terms that seem to alter the meaning and therefore the causal relations under consideration. For example, we might replace 'The match caused the fire' with 'Friction acting on phosphorus produced a flame that caused the fire'. This raises the question 'But what was really the cause?', with the potential for seemingly different answers when we want only one. The depth of detail in terminology is sometimes referred to as granularity, and it raises the question of whether some explanations are more basic or fundamental than others – whether some statements can be beneficially reduced to others (reductionism).

This gives us an extended definition of science: science studies the order of the world by investigating causal processes, and causal processes are of many kinds. Though contentious, we might add that we must resist the temptation to reduce causes of one kind to causes of another kind. Causally it makes no sense to reduce biology to physics by saying that fermions and bosons cause the heart to beat. A heart might consist of fermions and bosons, but these do not have causal efficacy in this sense. This takes us away from the traditional method of attempting to define science, which has been in terms of its methodology (the hypothetico-deductive or deductive-nomological method).

Multiple realization
Physicalists can be divided into two camps: those who think everything can be reduced to physics (reductive physicalists) and those who do not (non-reductive physicalists). The reductive physicalist claims a type-identity thesis such that, for example, mental properties like feelings are identical with physical properties. On this view, treating the two as separate entities, one acting causally on the other, seems mistaken: the two are, in fact, one and the same. Similarly, the connection between temperature and mean molecular kinetic energy is one of identity, not causation; perhaps the same holds for life and complex biochemistry. The question arises, though, as to the identity of objects. Is pain physically identical in a human and a herring? It seems that pain can be expressed in many different physical ways, a situation known as 'multiple realization'. This attack on the type-identity thesis led to the modified claim that mental states are identifiable with functional states, which allows multiple realization, a functional property being understood in terms of the causal role it plays. However, we can think of pain as being either coarse-grained (one thing) or fine-grained (a mix of properties hardly warranting aggregation under a single category).
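The functionalist idea has a familiar analogue in programming: an interface defined entirely by the role it plays, with multiple concrete 'realizers'. The sketch below is a hypothetical illustration of that analogy only – the class names and method are invented here, and nothing about neuroscience is being claimed.

```python
from abc import ABC, abstractmethod

class Pain(ABC):
    """A functional state: defined by the causal role it plays,
    not by any particular physical realization."""
    @abstractmethod
    def triggers_avoidance(self) -> bool: ...

class HumanPain(Pain):
    # Realized by one physical substrate (say, C-fibre firing).
    def triggers_avoidance(self) -> bool:
        return True

class HerringPain(Pain):
    # Realized by a quite different neural substrate.
    def triggers_avoidance(self) -> bool:
        return True

def responds_to_damage(state: Pain) -> bool:
    # Only the functional role matters here; the realizer is invisible.
    return state.triggers_avoidance()
```

Any number of physically distinct realizers can satisfy the same functional role, which is precisely why a type-identity between the role and any one realizer fails.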

Emergence
Reduction is generally contrasted with emergence. Accounts of emergence are rarely causal in form. Why can't 'horizontal' causation give rise to emergent features within the same domain?

Commentary & Key points

Science is divided not only over what exactly we mean by ‘science’ but the relationship between the various scientific disciplines in terms of their scientific ranking and validity. This reflects, in part, differing metaphysical assumptions, contrasting views on the underlying structure of the universe, causation, and the nature of reality. It also draws on differing methodologies and modes of explanation.

If we take physicalism to be the scientific canon then non-reductive physicalism, as some version of emergentism, is becoming the new orthodoxy. Reductive physicalism assumes that the special sciences ultimately reduce to the physical sciences. Non-reductive physicalism claims that the special sciences supervene on the physical sciences: where there can be no change at one scale without a change at the other.

Domains of knowledge

 

The sciences – and topics like normativity, consciousness, and biological complexity – exhibit a gradation in character rather than sharp disjunctions. With an increase in physical complexity comes greater imprecision.

Academic disciplines or domains of knowledge – maths, physics, biology, social science, English literature, history, etc. – as taught in universities and schools have arisen largely as a matter of convenience and historical circumstance, not because they are thought to reflect the structure of reality. The disciplines of science are, however, different, because many scientists believe their subjects reflect the structure of the natural world. When we divide disciplines into, say, physics, chemistry, biology, geology, social science, history, and geography, what criteria distinguish one from the other? How useful are the criteria we use, and could we devise some system that expresses the relationship between domain categories in a coherent way?

To illustrate the way domain categories can influence the way we think, consider the following. Many philosophers of science today believe that there is no single factor demarcating science from other intellectual pursuits; if this becomes widely accepted, then any clear demarcation between science and the humanities becomes untenable. 'Geography, like biology and sociology, is a huge and loosely defined field (so loosely defined, in fact, that since the 1940s many universities have decided that it is not an academic discipline at all and have closed their geography departments)'.[1, Morris, Why West Rules] A prominent historian has suggested that the word 'history' is a pretentious way of talking about human behaviour in the past, in which case the subject History might be better treated as a sub-discipline of a more inclusive domain like 'animal behaviour' or 'sociobiology', which could also include subjects like human psychology, political science, and sociology. Maybe we should include anthropology and archaeology here. Would economics and political science be more logically considered as sub-disciplines of sociology and therefore of slightly lower rank? If biology is really about physics and chemistry, then maybe it would make more sense if deliberately taught as a sub-discipline of these subjects? Then there are all the hybrid disciplines like astrobiology, biophysics, biogeography, biochemistry, geochemistry, social science, and the scientific humanities. Old arrangements are clearly archaic relics, but the question now is whether academic disciplines should simply be a matter of convenience or reflect some coherent rationale or, indeed, the world.

In practical terms a restructuring of the sciences is unlikely. Scientists today tend to work within their own fields, each with their own procedures, principles, theories and technical terms, resulting in physical and intellectual separation that resists fragmentation or combination. Academic territory, whether of theoretical knowledge or physical space, will be defended. Real or imagined disciplinary imperialism is a factor to be noted in any analysis of reductionism.

The point is that we take our existing taxonomy of domains of knowledge for granted when there are no unambiguous criteria on which our classification is based. Which are the wholes and which are the parts? Are our classifications a matter of subjective convenience and utility or are they sometimes founded in the objective nature of the world as we might suppose for science?

Principle 1 – Reductionism challenges the boundaries (criteria) that we use to distinguish one domain of knowledge from another, asking whether we can devise scientifically acceptable criteria for dividing up the natural world into domains that more closely mirror the world itself.

Principle 2 – Classifications proceed (like reduction and explanation in general) by abstraction – by simplifying complexity. Classifications also establish relations between items and in so doing they contribute to theory-construction, description, explanation and, importantly, prediction.

There does appear to be a great opportunity for consilience here – a reconsideration of the way we both represent all knowledge and teach it in our schools and universities.

Domain units
If we accept the principle that no piece of matter exists in a more fundamental way than any other – all matter is ontologically equal: either it exists or it does not – then can we make use of the scale units of the various disciplines?

The important point is that no domain of knowledge or scale is 'in reality' (ontologically) more fundamental than any other, but talk of 'scales' and 'domains' now gives us some useful categories to work with.

Principle 3 – Explanations in one particular domain do not take precedence over those in any other – although some explanations may carry higher degrees of confidence than others and some domains may contain more high confidence explanations than other domains

Consider the material reality of the following: molecules (chemistry), organisms in general and ant colonies in particular (biology), historical events (history), society (organisations, trading blocs, communities). There seems to be some abstraction going on as we move through this series: we are passing from the language of brute matter into worlds with some conceptual loading. We will return to this later – suffice it to say at this stage that the status of these units becomes increasingly fuzzy.

THE UNITY OF SCIENCE
A universal language
At present science is divided into disciplines with effectively different languages and objects of study. Wouldn't it be easier if we abandoned all talk of scales and levels and spoke in a language where all units were composed of the same thing? This would be like suddenly enjoying the efficiency of a single world currency instead of many, or a single global language instead of the confusing babble of the often mutually incomprehensible languages of many nations. Instead of the diverse terminologies of sociology, politics, biology, and chemistry we could have one single language – that of physics.

This may be possible in theory but could never eventuate in practice in spite of some unification. In theory it may be possible to describe cell division in molecular terms but in practice the translation of structures, variables and pathways of interaction would be phenomenally and prohibitively complex.

We can see here how our cognition copes with complexity by imposing 'scales' on the world. Scales close to one another, like physics and chemistry, operate in similar ways and use similar terminologies, so explaining characteristics of one in the specialist technical terms and theories of the other may not present major problems. We can easily understand the close connections between molecular biology, biochemistry, and genetics. But as the difference in scale units increases, so too do the difficulties in translating one domain of knowledge into another – and there is a corresponding decrease in the benefits of doing so. Explaining the major concepts and theories of political science in terms of atoms and molecules appears, at face value, absurd. Predicting weather patterns is difficult enough in the terms we use today without breaking our explanations down into the causal interrelationships of every molecule within the system. The obstacle is not logical impossibility but our cognitive limitations: we do not have the computing power to do this and so find simpler modes of mental synthesis.
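A back-of-the-envelope calculation suggests the scale of the problem. All figures below are rough, commonly cited orders of magnitude assumed purely for illustration, not measurements of any real simulation.

```python
# Why a molecule-by-molecule explanation of a macro system is impractical:
# order-of-magnitude estimates only.
atoms_in_human_body = 7e27   # commonly cited estimate (~7 x 10^27 atoms)
ops_per_atom_per_step = 10   # assumed cost of one update per particle
exaflop_machine = 1e18       # operations per second on an exascale computer

seconds_per_step = atoms_in_human_body * ops_per_atom_per_step / exaflop_machine
years_per_step = seconds_per_step / (3600 * 24 * 365)
# Even a single simulation time-step works out at thousands of years.
```

On these assumptions, one update of every atom in a human body would occupy an exascale machine for roughly two millennia – before any molecular interactions, let alone a day of bodily business, were computed.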

From this we can derive a general principle. The relationship between biochemistry, molecular genetics, and chemistry is clearly close, and so in this case 'reduction' is much more credible. On the other hand, the general use of physicochemical or molecular language and ideas in describing the generalities of ecology makes little sense. The question 'Do we only need one scale or level to explain everything?' is not so much a question of feasibility as one of utility. We need the cognitive convenience of talk at different scales.

THE METHODOLOGY OF REDUCTION
It is one thing to speak of reducing theory A into theory B but quite another to carry it out. In fact there is a subtle distinction between the ways that this can be done – whether it be done by translation, derivation, explanation or some combination of these.

1. Translation
One way of expressing the methodological aspect of reduction is to consider the reduction of one knowledge domain or theory to another: maths to logic, consciousness to physics and chemistry. Reducing A to B may be done by: translating the key concepts of A into those of B; deriving the key ideas of A from those of B; or explaining all the circumstances of A in terms of B.

The attempt to translate the terms of one discipline into those of another has proved too problematic to be realistic. Even where terms apply to the same object they may have slightly different independent meanings.

THE REDUCTION
It has been agreed that, in terms of 'matter', an organism consists of molecules. It has also been suggested that molecules are not 'fundamental'. After all, we could also say that organisms consist of cells, tissues, or organs without threatening their physical 'reality'. Indeed, rather than looking at 'wholes' that are larger than molecules, we could be more 'fundamental' by describing an organism in terms of its sub-atomic particles. But at this point it will probably be argued that subatomic particles are not informative – it is the way they are organised into functional parts, and the relationship between these parts, that is important.

We now need to ask how we can translate one scale, level, or knowledge domain into another. How do we translate language about molecules into language about cells, cells into tissues, tissues into organs, and organs into organisms? We must also ask about the 'reality' of the units chosen.

Theory reduction
Philosophers have asked whether the theories or principles of the biological sciences can be demonstrated to be logical consequences of the theories of the physical sciences.

In the 1960s the American philosopher Ernest Nagel (an early logical empiricist philosopher of science, along with Carl Hempel, and followed in the 1970s by David Hull) suggested a theoretical in-principle model for this logical reduction: a target theory is deduced from a base theory via bridging laws. This has proved difficult, although it is still pursued in potentially comparable fields of study, one example being the reduction of classical Mendelian genetics to molecular biology. The attempts were messy and imprecise: concepts vary, terms are not strictly equivalent, and target theories need subsequent modification.
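Nagel's model can be expressed schematically. In this simplified sketch, a law of the target theory is derived from a law of the base theory via bridge laws connecting the two vocabularies; the predicates are placeholders for illustration, not an actual worked reduction:

```latex
\begin{align*}
&\text{Base law: } && \forall x\,\bigl(B_1(x) \rightarrow B_2(x)\bigr) \\
&\text{Bridge laws: } && \forall x\,\bigl(T_1(x) \leftrightarrow B_1(x)\bigr),
   \qquad \forall x\,\bigl(T_2(x) \leftrightarrow B_2(x)\bigr) \\
&\text{Derived target law: } && \forall x\,\bigl(T_1(x) \rightarrow T_2(x)\bigr)
\end{align*}
```

The practical difficulties noted above arise at the bridge laws: biological predicates like 'gene' rarely admit a clean biconditional with any single molecular predicate.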

If nature is indeed arranged hierarchically then we can perhaps take advantage of the principles of hierarchical classification: inclusivity, transitivity, association and distinction, and exclusivity.

Supposing the successful reduction of biology to physics and chemistry, why not continue to, say, sociology? Would sociologists find it of any use to address the consequences of the Protestant work ethic in physicochemical terms?

We fall back on the principle of explanatory utility: biological terminology is generally more useful than physicochemical terminology. For the most part there seems little to be gained from a statement like 'leg = block of physicochemical processes X'. There will, of course, be times when we need to know the chemical composition of a leg, and nowadays in many circumstances we might want to think of a gene in chemical terms rather than as a blob of matter like a bead on a string, so a reduction here is useful. In this sense reduction is not mistaken, misrepresentative, or inadequate – just totally impractical.

For example, the word 'cell' in biology, when used in a general sense, is a useful biological concept, but it can refer to many different kinds of physical object, and no single physical description is definitive: we cannot say, for example, 'x is a cell iff x is physical expression P'. Different structures can produce the same outcomes, as when compensatory adjustments are made after brain damage (a phenomenon referred to as multiple realization or degeneracy). Some form of physicochemical shorthand might convey the general meaning of 'leaf', but individual leaves will have unique molecular structures. In general, the vocabulary of biology does not map easily onto that of physics.

Interaction between levels or scales
In providing explanations and 'reducing' complexity we can place undue emphasis on particular 'levels', frames of explanation, or scales. Society is not always concerned with large things, nor physics with small things.

Insofar as science is concerned with the structure of the natural world then it encourages the improved understanding of the structure and behaviour of matter. Simply stating something in different words is unproductive unless something is gained in the process.

Principle 3 – reduction is only scientifically useful when it improves our understanding by providing a better explanation (by giving a necessary and sufficient answer to the question being posed). Provided scientific units are credible then the scale we use for explanation is simply a matter of utility.

Probability (degree of confidence)
To some extent we measure the scientific merit of an explanation through our confidence in its predictability – the probability of a particular outcome. However, if we predict the likelihood of the sun coming up tomorrow morning as being very near to 100% and the likelihood of climate change causing a rise of 2°C over the next 50 years as 70%, can we say that one statement is more scientific than the other? It would seem not, but this draws attention to the fact that there do seem to be greater degrees of certainty in (parts of) some domains than others. In cases like this we understand a scientific explanation as being the best we can achieve at present.

REDUCING BIOLOGY TO PHYSICS & CHEMISTRY
Why shouldn’t biology become a branch of the physical sciences? And in exactly what way can biological organisation be something over and above the molecules out of which an organism is composed?

In some respects it already is, since, as lumps of matter, organisms obey many of the laws of physics, such as those of gravitation. And we can recognise how much of genetics has moved from the domain of general biology into the world of biochemistry and molecular biology. Since the human body undoubtedly consists of physicochemical objects and interactions, perhaps we can envisage a super-computer that might one day formulate a vast algorithm simulating how all these molecules interact as the body goes about its daily business. But are there any problems that make the proposition theoretically impossible?

Because we can study both the brain and the gene in terms of molecules then for some biologists it might appear that we have, in the macro-molecule, a more fundamental level, scale, or source of unity. The question is whether such explanations are feasible, and if they are, whether the answers they give are informative or not. We are yet to resolve this.

Principle 7 – what is controversial about organic wholes is not their existence but the nature of their origins, their differentiation into parts, and the interaction of those parts.

Is there something about organic organisation and the complex interaction of the parts and their properties that defies reduction to explanations in terms of constituent physicochemical processes?
The biological theory of ‘emergence’ claims that this is so.

Biology & its link to other disciplines
Living matter is variable replicating matter that has the capacity, over many generations, to incorporate physical changes in response to influences from its surroundings. The variation that facilitates the persistence and replication of this matter is incorporated as physical change over many generations since changes that do not permit replication simply cease to exist. In mechanical terminology this is fine-tuning using feedback.

In biological terminology we have environmental adaptation by natural selection – descent with modification as a result of heritable variation and differential reproduction. Natural selection is the way we account for adaptive complexity and design in nature – the complex interplay of parts serving some function – and it is the process underlying the evolution of the entire community of life.

The process of natural selection introduces several crucial ideas:

1. There is a clear distinction between evolving and non-evolving matter: living and inanimate matter
2. Living matter cannot exist independently of its surroundings and therefore exists in a kind of organism-environment continuum
3. Natural selection gives a naturalistic account of the self-evident design we see in nature: it is the mindless way in which functional organized organic complexity, including humans and their brains, arose
4. Natural selection is a process that discriminates (selects) and which can therefore succeed or fail. Living matter has rudimentary ‘interests’ in the sense that some changes in the environment facilitate its survival and reproduction while others do not
5. Life 'adapts' to its environment, introducing notions of value and reason, and an interplay between living and inanimate matter as a kind of continuum
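The feedback loop described above – heritable variation plus differential reproduction – can be sketched as a toy simulation. Every number here (the environmental optimum, mutation size, population size) is an arbitrary assumption chosen for illustration; the point is only that the population mean is driven toward the optimum with no designer in the loop.

```python
import random

TARGET = 0.8  # a hypothetical environmental optimum for some trait

def fitness(trait: float) -> float:
    # Variants closer to the optimum are more likely to replicate.
    return 1.0 - abs(trait - TARGET)

def generation(population: list) -> list:
    # Differential reproduction: fitter variants are copied more often.
    parents = random.choices(population,
                             weights=[fitness(t) for t in population],
                             k=len(population))
    # Heritable variation: offspring resemble parents, with small mutations,
    # clamped to the trait's [0, 1] range.
    return [min(1.0, max(0.0, p + random.gauss(0, 0.02))) for p in parents]

random.seed(1)
pop = [random.random() for _ in range(200)]  # initial random population
for _ in range(100):
    pop = generation(pop)

mean_trait = sum(pop) / len(pop)  # settles near TARGET after selection
```

Changes that hinder replication simply cease to exist in later generations, which is all the 'selecting' amounts to – the mechanical fine-tuning by feedback described above.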


Media Gallery

Emergence – How Stupid Things Become Smart Together

Kurzgesagt – In a Nutshell – 2017 – 7:30

What’s Strong Emergence?

Closer to Truth – 2020 – 26:47

First published on the internet – 1 March 2019
