
tag: emergence


Emergence ch15: Is Anything Ever New?

in project: emergence-advent

James P. Crutchfield - Is Anything Ever New? Considering Emergence (1999)

James Crutchfield is a veteran of the Santa Fe Institute and director of UC Davis's Complexity Sciences Center. From an information-theoretic standpoint, he considers the optimal approach for an observer trying to explain the behaviours emerging from a black-box natural system. The solution put forward here is to attempt to build a machine which generates a corresponding output, minimising:

  • the model size, and
  • the error margin between our model and the observed data

From the complexity of this model (which here takes the form of an FSA-like ε-machine), we can deduce the structural complexity of the underlying natural system. These ideas form the core of the field of computational mechanics, driven by Crutchfield, Shalizi and others.
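To make the size-versus-error trade-off concrete, here is a minimal sketch in Python. To be clear, this is not Crutchfield's ε-machine reconstruction; it's a toy MDL-style stand-in of my own, which selects the Markov order minimising model size plus prediction error for a hypothetical observed binary sequence.

```python
import math
from collections import Counter, defaultdict

def mdl_score(seq: str, order: int) -> float:
    """Two-part cost of a fixed-order Markov model of a binary sequence:
    model size (bits to describe its parameters) plus prediction error
    (negative log-likelihood of the data under the model)."""
    contexts = defaultdict(Counter)
    for i in range(order, len(seq)):
        contexts[seq[i - order:i]][seq[i]] += 1

    # Error term: -log2 P(data | model), with empirical transition probabilities.
    error = 0.0
    for counts in contexts.values():
        total = sum(counts.values())
        for n in counts.values():
            error -= n * math.log2(n / total)

    # Size term: half of log2(N) bits per free parameter, a standard MDL convention;
    # with a binary alphabet there is one free probability per observed context.
    size = 0.5 * len(contexts) * math.log2(max(len(seq), 2))
    return size + error

# Choose the model that minimises size + error for the observed data.
observed = "0110110101101101101101011011"  # hypothetical black-box output
best = min(range(5), key=lambda k: mdl_score(observed, k))
print(f"selected Markov order: {best}")
```

The same two-part logic, applied to far richer model classes, is what lets Crutchfield read structural complexity off the smallest adequate machine.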

It's an incredibly dense yet engaging paper, itself a reduction of The Calculi of Emergence (pdf), which is probably the most essential piece of work on quantifying emergence and effective complexity.

Emergence ch14: The Theory of Everything

in project: emergence-advent

Robert Laughlin and David Pines - The Theory of Everything (1999)

Read as PDF

In which Laughlin and Pines continue the many-body physics discussion of Anderson, arguing that the "more is different" tenet holds so strongly in certain contexts that a reductive Theory of Everything is rendered effectively irrelevant.

The objective of a Theory of Everything is a set of base-level equations which underpin all activity in the universe, from which the phenomena of higher levels can be constructed. Evidently, this quickly becomes computationally infeasible for (say) a biosystem. Laughlin and Pines' position is stronger than this, however, citing phenomena such as Laughlin's fractional quantum Hall effect as evidence of transcendent "higher organizing principles", in that:

"..they would continue to be true and lead to exact results even if the Theory of Everything were changed. Thus the existence of these effects is profoundly important, for it shows us that for at least some fundamental things in nature the Theory of Everything is irrelevant."

The effects in question relate to their notion of a "quantum protectorate", key to the FQHE, in which the effects of macroscopic principles eclipse those on the microscopic level, to the point that the latter becomes negligible. The consequence is that strongly emergent laws do exist, structurally independent of the underlying equations that govern single-particle interactions.
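For what it's worth (and to the limits of my physics), the canonical exact result here is the quantisation of the Hall conductance, which in the fractional effect locks onto plateaux at fractional filling factors regardless of the microscopic details of the sample:

$$\sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu = \tfrac{1}{3}, \tfrac{2}{5}, \ldots$$

The quantisation is measured to extraordinary precision, which is exactly the sense in which the result would survive changes to the underlying Theory of Everything.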

My flimsy understanding of theoretical physics forbids me from attempting any further analysis of this paper. Interested readers can find it here; Laughlin's A Different Universe expands his ideas into book form, most notably the view that emergent processes should be the central focus of theoretical physics.

Emergence ch13: Alternative Views of Complexity

in project: emergence-advent

Herbert Simon - Alternative Views of Complexity (1996)

Another all-too-brief excerpt, this chapter is best treated as a trailer for Herbert Simon's The Sciences of the Artificial, his canonical ode to design and structural organisation. It's a bite-sized tour of the fashions in complexity theory since WW2.

Worth a read for some context - though note that this chapter was written in 1981, so many more recent models (neural nets, self-organized criticality, agent-based models) have since had their time in the complexity limelight.

Emergence ch12: Sorting and Mixing: Race and Sex

in project: emergence-advent

Thomas Schelling - Sorting and Mixing: Race and Sex (1978)

Read as PDF

Moving into the socioeconomic applications of emergence theory, Schelling (Nobel laureate #2 in this collection) gives a model-based analysis of feedback loops within free market social systems, taken from his text Micromotives and Macrobehaviour. The outcome is a somewhat more insidious emergent result than the constructive formations we've seen in previous accounts: Schelling's models suggest that minor dispositions against living in a racial minority can result in a neighbourhood's total racial segregation.
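To see just how minor those dispositions can be, here is a toy grid version in Python -- my own minimal sketch, not Schelling's exact specification. Agents relocate whenever fewer than about a third of their neighbours share their group:

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 0.34  # mild preference: ~1/3 like neighbours

def neighbours(grid, x, y):
    # The eight surrounding cells, wrapping at the grid edges.
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    me = grid[x][y]
    if me is None:
        return False
    occupied = [n for n in neighbours(grid, x, y) if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

# Populate the grid with two groups, leaving some cells empty.
grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(50):  # each round, every unhappy agent moves to a random empty cell
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    for x, y in movers:
        empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
        if not empties:
            break
        i, j = random.choice(empties)
        grid[i][j], grid[x][y] = grid[x][y], None
```

Even though every agent here would be perfectly content as a two-to-one minority, the grid settles into starkly homogeneous blocks within a few dozen rounds.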

Appearing at almost precisely the same time as the rise in popularity of game theory, it incorporates many of the same ideas and approaches. It seems that Micromotives is to economics what The Evolution of Cooperation and The Selfish Gene are to evolutionary theory. Schelling's work is similarly appealing in scope and presentation, and significantly less dogmatic than the hardline reductionism of Dawkins.

Emergence ch11: Emergence

in project: emergence-advent

Andrew Assad and Norman Packard - Emergence (1992)

This chapter marks a watershed as the first from the perspective of computational modelling and artificial life. It's very brief, with its prime contributions being an outline of a couple of key characteristics of (epistemic-computational) emergence plus a useful bibliography from the field: Bergson, Langton, Kauffman, Pattee, Cariani (who, I would argue, is by far the most glaring omission as an author in this book).

Assad and Packard offer a yardstick scale of emergence, based on mechanical deducibility of behaviours:

  • Non-emergent: Behavior is immediately deducible upon inspection of the specification or rules generating it
  • Weakly emergent: Behavior is deducible in hindsight from the specification after observing the behavior
  • ...
  • Strongly emergent: Behavior is deducible in theory, but its elucidation is prohibitively difficult
  • Maximally emergent: Behavior is impossible to deduce from the specification

It strikes me that, if we are to maintain an axiom of fundamental reducibility, the "maximally emergent" pole can only be approached asymptotically (i.e. never attained), since "impossible to deduce" implies that the base-level laws are insufficient to explain the properties - so we have smuggled in (in Bedau's terminology) strong emergence.

More interestingly, they suggest a hierarchy of subsets of the types of thing that emerge from a substrate: structure (in space-time or symbolic space); from which arises computation (information-processing capabilities); from which then arises functionality (towards beneficial objectives). This seems like an elegant and useful formulation, and one which can clearly be seen at work in the emergence of complexity described in the previous chapter.

Emergence ch10: More Is Different

in project: emergence-advent

P.W. Anderson - More Is Different (1972)

Read as PDF

Progressing into the second part of the collection, we now switch perspectives to those from the scientific community. Nobel laureate PW Anderson writes from his work within condensed matter physics; this paper addresses the ways in which structures of increasing size and complexity begin to shift further from the symmetry we expect from particle physics, giving rise to quasi-stable far-from-equilibrium states which escape the pull of the Second Law of Thermodynamics.

It's a nice insight into how complexity emerges at the most fundamental level, answering those who claim that the special sciences of level X are "nothing but" an applied form of level Y: it's clear that new (constructive) causal explanations are needed as we shift from the point of view of electrodynamic equilibrium to the information-processing work of biology. This doesn't detract from the acceptance that level X can still be ontologically reduced to level Y.

Amusingly, I read this in the wake of skimming the "doctoral" dissertation of creationist Kent Hovind (which is quite a piece of work; it begins with the word "Hello", for god's sake). Hovind's opening argument, based on a fallacious extrapolation of thermodynamics, is essentially as follows: anything in the universe, if left to itself, will tend towards maximal entropy and go to shit (and thus, "This clearly indicates a Creator"). Yes, this is true for a closed system, but it's hardly true that the aquatic wetlabs which first spawned life on earth were isolated from the immense energy of the sun or the environment beyond.

Emergence ch8: Downward Causation and Autonomy in Weak Emergence

in project: emergence-advent

Mark A Bedau - Downward Causation and Autonomy in Weak Emergence (2003)

In which we are given the first serious and hopeful treatment of 'weak emergence', a term coined by Bedau some time prior to this stellar piece of work. Focusing squarely on the objectivist models of emergence (that is, those which do not rely on some subjective element of surprise), Bedau lays down a convincing argument that the ability to reduce an emergent property to its underlying parts does not make it uninteresting or insignificant.

Weak emergence sits in the middle of a trio of identified categories: "nominal emergence", which stipulates simply that a macro-level property depends on a collection of micro-level parts but cannot be held individually by these parts; and "strong emergence", which requires the existence of irreducible, supervenient macro-level properties and causal powers. Bedau makes his thoughts on strong emergence clear when he states: "Strong emergence starts where scientific explanation ends".

A phenomenon which falls within the class of "weak emergence" can, given archangel-like computational powers, be derived from the network of interactions through which it emerges. However, such a network may consist of "myriad non-linear and context-dependent micro-level interactions", making it unfeasible to forecast its outcome without simply iterating through these interactions. There is no "short-cut" derivation, to use Bedau's terminology; the extreme context sensitivity means we must simply churn through the micro-level processes until the macro-level outcome has been determined. This touches on the Kolmogorov-Chaitin notions of algorithmic complexity and incompressibility: there is no shorter way to calculate the algorithm's output than simply executing the algorithm itself.

One immediate question that surfaces is how complex this complexity needs to be. Surely some derivations are reasonably obvious, with quasi-shortcuts or broadly general solutions. Bedau addresses this by affirming a "spectrum of more or less weak emergence", with a property counting as more weakly emergent the more intractable its simulation. This fits the intuition that there is no clear line between emergent and non-emergent properties.

We finally see how "weak emergence" tallies up with the perennial problems of causal exclusion and downwards causation. Since there is no longer the unbridgeable rift of duality that strong emergence imposes between macro and micro, the macro causal structure is equal to the aggregate of micro causal elements, and so no micro causal laws are overturned or out-prioritized. The vicious circle argument (in which a macro property can, at a given time t, theoretically affect and scoop out its own subvenient base) is not applicable because weak emergence is inherently diachronic; a macro pattern can act back on its micro constituents at time t+1, but that's OK -- this is exactly what happens in the real world (we experience neural pain as a headache, we take a painkiller, the underlying neural cause dissipates and we no longer have the conscious experience of pain).

The last worry, then, is that we are back to a plain epistemic mode of emergence: the entire causal structure at the macro level can be predicted, given knowledge of the micro-level constituents and sufficient processing time. This is true. However, given that a macro behaviour can be realised through many different routes, whole new general classes of macro entities can be created, with autonomy from particular micro pathways (and here, it's noticeable that the language switches to talking about the same "kinds of" macro behaviours). The justification parallels the way causal autonomy is granted between, say, chemistry and biology. This defence is only somewhat convincing, and feels like it is lacking in rigour.

As an addendum, most of Bedau's novel examples are given by way of Conway's Game of Life, making its first major appearance in this reader. We'll be seeing more of it in the following chapters.
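The Game of Life makes Bedau's no-shortcut point nicely concrete. A minimal sketch in Python (my own, not one of Bedau's examples specifically): to learn the fate of even a five-cell seed like the R-pentomino, which famously takes over a thousand generations to settle, there is no real option but to iterate the micro rule and watch.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy) for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}  # the R-pentomino
for generation in range(1200):
    cells = step(cells)
print(f"population after 1200 generations: {len(cells)}")
```

No inspection of the five starting cells, or of the two-line rule, will tell you the answer in advance; the derivation just is the simulation.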

Sorry to all of you who have been checking back each day, only to be brutally rebuffed by the lack of any new doors. With luck, the missing days should be made up very shortly.

Emergence ch7: Making Sense of Emergence

in project: emergence-advent

Jaegwon Kim - Making Sense of Emergence (1999)

Read as RTF

Kim's 'Making Sense of Emergence' ploughs thoroughly through several of the central questions for the philosophy of emergence:

  • downwards causation: is it possible, or even necessary, for a macro-level entity to exert causal powers on micro-level parts (and, beyond that, its own micro-level parts)?
  • explainability, predictability, reducibility: can these properties be meaningfully decoupled, and which can then be applied to a truly emergent property?
  • synchronic vs diachronic causality: does it make sense for emergence to be divorced from a temporal base?

The conclusion is that the only well-formed foundation for strong emergence is one that is diachronically causal. A clearly seminal paper, but one that left me with another bout of metaphysical fatigue.

Emergence ch6: How Properties Emerge

in project: emergence-advent

Paul Humphreys - How Properties Emerge (1997)

Read as Word document

The goal of Humphreys' paper is to coherently formulate a generalised position that does not fall prey to two key problems for mental causation: the exclusion argument (PDF) and Kim's downward causation problem. The solution is a logical "fusion" operation which, by forming compounds of micro-level properties, is proposed as the actual source of emergence (rather than the instantiation of the micro-level base). This also serves to resolve the problem of downward causation by giving us chains of causal couples which always begin and end at the base level, though potentially mediated through higher-level causal structures.

Again, however, this gives us a coherent metaphysical possibility, without very much real-world meat. Humphreys hazards quantum entanglement as one potential example, but concludes that the jury is still out.

Emergence ch5: Aggregativity: Reductive Heuristics for Finding Emergence

in project: emergence-advent

William C Wimsatt - Aggregativity: Reductive Heuristics for Finding Emergence (1997)

Rather than focusing on seeking the essential characteristics of emergence, Wimsatt's paper takes the opposite approach and attempts to pin down the conditions a property must satisfy to be definitively non-emergent. We saw earlier that it's not a straightforward process to distinguish between the two in any case, with certain "obviously" linear-additive properties being a little more complex on inspection, and vice versa. Wimsatt throws in another nice example of nonlinear composition, that of volume under dissolution: the volume of a salt-water solution is actually less than the sum of the volumes of its constituents. Sometimes, more is less.

The key thesis is that emergence is a consequence of certain organisational properties, combined with context-sensitivity of the parts that constitute this organisation. Non-emergent systems properties are termed "aggregates" by Wimsatt. To be truly aggregative, a property must be functionally invariant under each of the following conditions:

  • intersubstitution (that is, rearranging or substituting parts for others)
  • size scaling (adding or subtracting parts)
  • decomposition and reaggregation (of parts)
  • linearity (no cooperative or inhibitory interactions among parts)

For a macro-scale systems property not to vary under any of these conditions, it is pretty clear that it must be radically functionally homogeneous. Wimsatt observes that the only paradigmatic aggregative properties are those governed by the major laws of conservation: mass, energy, momentum and charge.
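As a toy illustration (my own framing, not Wimsatt's formal apparatus), two of these invariances can be checked mechanically. Total mass passes; a hypothetical non-additive "volume", standing in for the brine example above, fails under decomposition and reaggregation:

```python
import random

def is_invariant(P, parts):
    """Check P under intersubstitution and under decompose-then-reaggregate."""
    shuffled = random.sample(parts, len(parts))            # intersubstitution
    left, right = parts[:len(parts) // 2], parts[len(parts) // 2:]
    reaggregated = P([P(left), P(right)])                  # decomposition + reaggregation
    return P(parts) == P(shuffled) == reaggregated

masses = [1.0, 2.0, 3.0, 4.0]
print(is_invariant(sum, masses))  # True: mass, a conservation-law property, aggregates

def solution_volume(vols):
    # Hypothetical shrinkage-on-mixing term, echoing the salt-water example.
    return sum(vols) - 0.05 * sum(vols) ** 2

print(is_invariant(solution_volume, masses))  # False: fails under reaggregation
```

Size scaling and linearity would need analogous checks; the point is just that genuine aggregativity is a brutally demanding standard, which is why so few properties meet it.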

Perhaps the most rewarding movement of this argument is where Wimsatt takes it in the final couple of pages. With a background in the philosophy of biology, he writes on the structures that underlie natural selection and the models that we, as scientists, impose to understand them. Here, he criticises the "nothing but X" language of radical reductionism, such as in the oft-touted "genes are the only units of selection". If we take the complex dynamical systems that comprise the natural world and reduce them to a model based on one underlying constituent unit (the gene, the agent, the neuron), we cannot then make claims to universality for our model: this is what Wimsatt terms the functional localisation fallacy. Such a decomposition is useful for studying some aspects of a system, but it must be complemented by other decompositions, from different levels and angles.