Web.log

Archives: 2009-12


Tomorrow's World Looks To The Eighties: An Illustrated Guide

The close of the decade is almost upon us, and so it is time for us to join Tomorrow's World, the British Broadcasting Corporation's flagship documentary programme, and look to the future of science and technology.

Tomorrow's World cover

We'll be taking you through each of the gloriously illustrated sections of this period-defining book, checking out the advances that are predicted for the 1980s and seeing how they tally up with the miserable hindsight of three decades.

Without further ado...

Lifestyle

The boundless optimism of the early silicon age shines through in Tomorrow's World's lookahead to how daily life will be transformed by the nascent technologies of integrated circuits and networked telecommunications:

"You will awaken some morning five years hence, speak a few simple words from your bed to your toaster, coffee pot and frying pan, and walk into the kitchen fifteen minutes later to a fully prepared breakfast.

"The same computer that is wired into the walls of your house and built to recognise your voice will turn on lights when you walk into the kitchen and turn them off when you leave."

Sounds idyllic. Sadly, as the authors themselves observe, such predictions are made on a regular basis. Thirty years later, countless automated housing schemes have appeared and often subsequently vanished: the Xanadu Houses (1979-2005), Japan's TRON House (1989-1992), and Honeywell's TotalHome (1992-). Integrating automation into an existing structure is difficult and expensive, and the TRON commentary records early journalistic objections to intelligent homes, which described the experience as "like a haunted house".

However, miniaturization has today progressed and stabilized to the point where realizing many of these technologies is relatively straightforward. Smart homes and automation (under the sometime-aegis of "domotics") are pushing forward, with open-source software making implementation possible for eager hackers. Even Microsoft are chasing the action.

In the living room, Tomorrow's World report on the imminent rollout of interactive television in the form of QUBE. Linking viewers to the TV stations, these home controllers allowed live interaction in applications such as quiz shows and Comedy Store-style popularity meters. A noble idea, but a false start in this case: seven years later, the Qube network was axed in a cost-cutting exercise, and it took another decade to catch on once again. Today, we have TiVo and countless other interactive TV endeavours (including, of course, the UK's Freeview monstrosity that is Rabbit Chat And Date), though it's still hardly commonplace.

Cookery's salvation is on the horizon in the form of "microwaves"; despite resulting in "sausages which are both limp and colourless" and "chicken which looks, and tastes, slightly parched", they're on the money here. However, a bleak warning is given:

The beam is inherently dangerous. Subject your hand, or worse still, your brain, to that agitation and it will be comprehensively addled.

Be careful, readers.

There's also some coverage of infra-red ovens, which it seems are currently returning to fashion.

Finally, the office has the prospect of electronic word-processing systems, capable of digitally storing text before it is printed to paper, saving on materials and obsoleting the typing pool in one fell swoop. Can't really fault this prediction.

Transport

Written shortly after the last Concorde was born, the Tomorrow's World team are a little more cautious in their transport forecasts: removable, high-capacity batteries may begin to give us electric vehicles (well, nearly), and the prototypal Advanced Passenger Train promises high-speed rail travel at over 100mph (which we finally got, two decades later, with the Pendolino, bringing us almost to where the French have been for aeons).

More exciting is their account of an in-car auto-navigation system, using induction loops laid along the length of the road system and bleeping when a turn should be taken. Being then trialled in Germany, such a system could also warn of impending fog, ice or traffic, but must first be "programmed by a traffic policeman".

Natural Power

"What will happen when the oil runs out?", asks Tomorrow's World. Thirty years later, it's still unclear. This chapter is dedicated to renewable power sources, yet it's clear to see that its optimism has thus far failed: at the turn of the millennium, only 3% of UK electricity came from renewable sources, and it's only a couple of percent higher today.

One radical idea, illustrated by this lovely diagram (above), is farming seaweed for biofuel: offshore kelp plants produce micro-organisms on a huge scale, which are then dried and fermented to produce methane. It's still being investigated today.

Inner Space

Slightly out of left-field, a whole chapter dedicated to deep-sea exploration. Perhaps it was popular at the time. First up are several pages dedicated to dredging for "nodules", accumulated nuggets of manganese, nickel, iron, and other valuable minerals which lie on the sea floor. It seems that these were a great hope at the time but, according to Wikipedia, the prohibitive expense and proliferation of terrestrial resources caused interest in nodule extraction to wane. Shame.

We've also got the prospect of deep-sea rescue missions and the WASP submersible, for a single diver to explore the untapped wealth of the ocean. It seems it was successful in its goals, and one-man submersibles continue to look cooler and cooler.


Media and Telecoms

"One day we all may find it useful to have a facility for sending documents, writing and pictures across the telephone lines". The breathless coverage of the future of telecoms is a winner, particularly the up-and-coming Prestel technology, rolled out by the UK Post Office to provide interactive data through our television sets via telephone lines. The author describes using Prestel for such purposes as to determine whether to adopt a child, as well as for informing your wife that you are not coming home.

We are introduced to technologies such as LaserDisc, home video and the digital audio revolution. There is also a sombre page-long description of the "knuckle-whitening" thrills of the new loop-the-loop rollercoasters.

Outer Space

Another advance which today remains in the eternal "forthcoming" pipeline is the Powersat, or space-based solar power. Collecting solar energy with satellites from outside the atmosphere and beaming it to earth via non-ionizing EM waves, the theory remains sound; earlier this year, PowerSat corp filed a new patent for space-solar tech.

Likewise the space colony (below), home to the first galactic settlers, estimated to be "well under way within fifteen years". We've had a good shot with Mir, but it's a far cry from the colonies of 10,000 described here.

Whatever Happened To..?

What really brought Tomorrow's World into its own was its championing of the off-kilter and quintessentially British inventions which were to define our future age. As a rather sad closer to this look to the future, we hear back from two inventions prototyped earlier in the 1970s.

The 360 Degree Scissors, from Devonshire designer Richard Hawkins, are an ingeniously simple (if faintly hazardous) concept: with double-sided blades, they can spin round fully to be operated equally effectively by right- or left-handed users. Hawkins took his idea to the show, making repeated journeys to the scissor-forgers of Sheffield to gather support. However, a manufacturing deal with Wilkinson Sword was thwarted at the last minute: it turned out that his idea had been patented over 50 years earlier, and the now-dormant patent meant that it could be freely manufactured for the US market, bypassing Hawkins.

The last news is that Hawkins was investing £5,000 of his own capital to manufacture a limited run in the UK. I can't track down any further trace; Richard Hawkins, if you're out there, I'd love to hear from you.

And finally, the Moulton Coach (above). There's no denying the streamlined elegance of this vehicle, constructed from parts in DIY assembly kits and using a simple, rigid frame. Yet, despite passing its wind-tunnel tests and being described by William Woollard as possessing "remarkable" braking power, the coach was deemed not to be cost-effective. The Moulton Coach never went into production.

Emergence ch15: Is Anything Ever New?

in project: emergence-advent

James P. Crutchfield - Is Anything Ever New? Considering Emergence (1999)

James Crutchfield is a veteran of the Santa Fe Institute and director of UC's Complexity Sciences Center. From an information-theoretic standpoint, he here considers the optimal approach for an observer to explain the behaviours emerging from a black-box natural system. The solution put forward is to attempt to build a machine which generates a corresponding output, minimising:

  • the model size, and
  • the error margin between our model and the observed data

From the complexity of this model (which here takes the form of an FSA-like ε-machine), we can deduce the structural complexity of the underlying natural system. These ideas form the core of the computational mechanics field, behind which lie Crutchfield, Shalizi and others.
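As a rough illustration of that size-versus-error trade-off (a toy of my own, not Crutchfield's ε-machine reconstruction procedure), one can score fixed-order Markov models of an observed binary stream by a crude description length — bits spent on the model plus bits spent on the data it fails to predict — and pick the cheapest:

    # Toy model selection in the spirit of "smallest machine + smallest error".
    # NOT the epsilon-machine algorithm; just a minimum-description-length sketch.
    import math
    from collections import Counter

    def description_length(data, order):
        """Crude total cost in bits of an order-k Markov model of the stream."""
        contexts, counts = Counter(), Counter()
        for i in range(order, len(data)):
            ctx = data[i - order:i]
            contexts[ctx] += 1
            counts[(ctx, data[i])] += 1
        # data cost: -log2 P(symbol | context), with Laplace smoothing
        data_bits = sum(-n * math.log2((n + 1) / (contexts[ctx] + 2))
                        for (ctx, _), n in counts.items())
        # model cost: a nominal 16 bits per parameter of the full conditional table
        return 16 * (2 ** order) + data_bits

    stream = "01" * 200                                  # a simple periodic process
    best = min(range(1, 5), key=lambda k: description_length(stream, k))
    print("preferred model order:", best)                # -> 1: the simplest adequate model wins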

It's an incredibly dense yet engaging paper, itself a reduction of The Calculi of Emergence (pdf), probably the most essential piece of work on quantifying emergence and effective complexity.

Emergence ch14: The Theory of Everything

in project: emergence-advent

Robert Laughlin and David Pines - The Theory of Everything (1999)

Read as PDF

In which Laughlin and Pines continue the many-body physics discussion of Anderson, arguing that the "more is different" tenet holds so strongly in certain contexts that the idea of a reductive Theory Of Everything is effectively impossible.

The objective of a Theory Of Everything is a set of base-level equations which underpin all activity in the universe, from which the phenomena of higher levels can be constructed. Evidently, this is quickly computationally unfeasible for (say) a biosystem. Laughlin and Pines' position is stronger than this, however, citing principles such as Laughlin's fractional quantum Hall effect as transcendent "higher organizing principles", in that:

"..they would continue to be true and lead to exact results even if the Theory of Everything were changed. Thus the existence of these effects is profoundly important, for it shows us that for at least some fundamental things in nature the Theory of Everything is irrelevant."

The effects in question relate to their notion of a "quantum protectorate", key to the FQHE, in which the effects of macroscopic principles eclipse those on the microscopic level, to the point that the latter becomes negligible. The consequence is that strongly emergent laws do exist, structurally independent of the underlying equations that govern single-particle interactions.

My flimsy understanding of theoretical physics forbids me from attempting any further analysis of this paper. Interested readers can find it here; Laughlin's A Different Universe expands his ideas into book form, most notably the view that emergent processes should be the central focus of theoretical physics.

Emergence ch13: Alternative Views of Complexity

in project: emergence-advent

Herbert Simon - Alternative Views of Complexity (1996)

Another all-too-brief excerpt, this chapter is best treated as a trailer for Herbert Simon's The Sciences of the Artificial, his canonical ode to design and structural organisation. It's a bite-sized tour of the fashions in complexity theory since WW2.

Worth a read for some context - though note that this chapter was written in 1981, so many more recent models (neural nets, self-organized criticality, agent-based models) have since had their time in the complexity limelight.

Emergence ch12: Sorting and Mixing: Race and Sex

in project: emergence-advent

Thomas Schelling - Sorting and Mixing: Race and Sex (1978)

Read as PDF

Moving into the socioeconomic applications of emergence theory, Schelling (Nobel laureate #2 in this collection) gives a model-based analysis of feedback loops within free market social systems, taken from his text Micromotives and Macrobehaviour. The result is somewhat more insidious than the constructive formations we've seen in previous accounts: Schelling's model suggests that minor dispositions against living in a racial minority can result in a neighbourhood's total racial segregation.
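The mechanism is simple enough to sketch in a few lines of Python (a toy version of my own, with made-up numbers, rather than Schelling's original checkerboard exercise): agents relocate whenever fewer than 30% of their occupied neighbours share their type, and even that mild preference tips the grid into stark clusters.

    import random

    SIZE, THRESHOLD, STEPS = 20, 0.3, 50
    cells = ['A'] * 150 + ['B'] * 150 + [None] * 100    # two groups plus empty cells
    random.shuffle(cells)
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

    def unhappy(x, y):
        """True if fewer than THRESHOLD of the occupied neighbours match this agent."""
        me, same, other = grid[y][x], 0, 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                n = grid[(y + dy) % SIZE][(x + dx) % SIZE]
                if n is not None:
                    same += (n == me)
                    other += (n != me)
        return (same + other) > 0 and same / (same + other) < THRESHOLD

    for _ in range(STEPS):
        empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] is None]
        movers = [(x, y) for y in range(SIZE) for x in range(SIZE)
                  if grid[y][x] is not None and unhappy(x, y)]
        for (x, y) in movers:
            ex, ey = empties.pop(random.randrange(len(empties)))
            grid[ey][ex], grid[y][x] = grid[y][x], None
            empties.append((x, y))

    print('\n'.join(''.join(c or '.' for c in row) for row in grid))  # blocks of A and B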

Appearing at almost precisely the same time as the rise in popularity of game theory, it incorporates many of the same ideas and approaches. It seems that Micromotives is to economics what The Evolution of Cooperation and The Selfish Gene are to evolutionary theory. Schelling's work is similarly appealing in scope and presentation, and significantly less dogmatic than the hardline reductionism of Dawkins.

Emergence ch11: Emergence

in project: emergence-advent

Andrew Assad and Norman Packard - Emergence (1992)

This chapter marks a watershed as the first from the perspective of computational modelling and artificial life. It's very brief, with its prime contributions being an outline of a couple of key characteristics of (epistemic-computational) emergence plus a useful bibliography from the field: Bergson, Langton, Kauffman, Pattee, Cariani (who, I would argue, is by far the most glaring omission as an author in this book).

Assad and Packard offer a yardstick scale of emergence, based on mechanical deducibility of behaviours:

Non-emergent: Behavior is immediately deducible upon inspection of the specification or rules generating it
Weakly emergent: Behavior is deducible in hindsight from the specification after observing the behavior
...
Strongly emergent: Behavior is deducible in theory, but its elucidation is prohibitively difficult
Maximally emergent: Behavior is impossible to deduce from the specification.

It strikes me that, if we are to maintain an axiom of fundamental reducibility, the "maximally emergent" pole must be approached asymptotically (ie, cannot be attained) as "impossible to deduce" implies that the base-level laws are insufficient to explain the properties - so we have smuggled in (in Bedau's terminology) strong emergence.

More interestingly, they suggest a hierarchy of subsets of the types of thing that emerge from a substrate: structure (in space-time or symbolic space); from which arises computation (information-processing capabilities); from which then arises functionality (towards beneficial objectives). This seems like an elegant and useful formulation which can clearly be seen when looking back at the emergence of complexity described in the previous chapter.

Emergence ch10: More Is Different

in project: emergence-advent

P.W. Anderson - More Is Different (1972)

Read as PDF

Progressing into the second part of the collection, we now switch perspectives to those from the scientific community. Nobel laureate PW Anderson writes from his work within condensed matter physics; this paper addresses the ways in which structures of increasing size and complexity begin to shift further from the symmetry we expect from particle physics, giving rise to quasi-stable far-from-equilibrium states which escape the pull of the Second Law of Thermodynamics.

It's a nice insight into how complexity emerges at the most fundamental level, filling out the justifications demanded by those who claim that special sciences of level X are "nothing but" an applied form of level Y: it's clear that new (constructive) causal explanations are needed as we shift from the point of view of electrodynamic equilibrium to the information-processing work of biology. This doesn't detract from the acceptance that level X can still be ontologically reduced to level Y.

Amusingly, I read this in the wake of skimming the "doctoral" dissertation of creationist Kent Hovind (which is quite a piece of work; it begins with the word "Hello", for god's sake). Hovind's opening argument, based on a fallacious extrapolation of thermodynamics, is essentially as follows: anything in the universe, if left to itself, will tend towards maximal entropy and go to shit (and thus, "This clearly indicates a Creator"). Yes, this is true for a closed system, but it's hardly true that the aquatic wetlabs which first spawned life on earth were isolated from the immense energy of the sun or the environment beyond.

Emergence ch9: Real Patterns

in project: emergence-advent

Daniel Dennett - Real Patterns (1991)

Dennett elegantly bridges the chasm between cellular automata and human social intentionality by leveraging the concept of a stable pattern and its status of reality within the world. "Real Patterns" is a great piece of work, and its logic is worth following closely.

The presentation of a "pattern" is done by recourse to information theory and Chaitin's compressibility of a data stream. Though it's not explicitly mentioned, there's also an assumption of what Shannon and Weaver would term a "sender", encoding some structure based on an underlying pattern (through which the output is compressed - using, say, the shared grammar of a chess game). The receiver then interprets this data, perceiving the pattern through its underlying actuality. Chess pieces placed randomly would, to a professional, be significantly harder to perceive and recall; to a non-player, however, both layouts would appear arbitrary, and no pattern could be perceived.

Thus, patterns are real yet potentially only discernible from a given perspective. Dennett asserts that a pattern exists in some data "if there is a description of the data that is more efficient than the bit map, whether or not anyone can concoct it". We can infer from this, then, that there are relative magnitudes of pattern-ness, correlating with the degree of information compressibility that we can apply.
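Dennett's criterion can be crudely illustrated with an off-the-shelf compressor standing in for ideal (Kolmogorov/Chaitin) compressibility — my own toy, not anything from the paper: a patterned board admits a description far shorter than its bit map, while a random one barely shrinks at all.

    import random, zlib

    patterned = b"01" * 2048                                    # stripes: highly regular
    noise = bytes(random.getrandbits(8) for _ in range(4096))   # no pattern to exploit

    for name, board in (("patterned", patterned), ("random", noise)):
        ratio = len(zlib.compress(board)) / len(board)
        print(f"{name}: compressed to {ratio:.0%} of its bit map")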

When we apply our formidable pattern-matching apparatus in the real world, we form what Sellars terms a "manifest image", overlaid onto our sensations through acquired knowledge and folk psychology, which allows us to make judgements as to what is presented to us and so make intentional decisions. This is done through a significantly statistical, inductive process: a highly weighted network of probabilities based on accumulated experience.

Now, back to the Game of Life. Dennett's critical move here is to go beyond glider guns and explain how we can create a Turing-complete machine from aggregates of Life cells, essentially constructing three new levels in the Life hierarchy (a toy Life stepper is sketched after the list):

  • at L=0, individual cells
  • at L=1, persistent aggregates of gliders, blinkers, beehives, etc
  • at L=2, aggregates of L=1 units which can perform logical operations
  • at L=3, aggregates of logic structures capable of playing a (deterministic or pseudo-random) game of chess
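For reference, the entire L=0 rule set fits in a few lines (a minimal sketch of my own); everything above it — gliders, logic gates, Dennett's chess player — is pattern rather than additional machinery.

    from collections import Counter

    def step(live):
        """live: a set of (x, y) cells. Returns the next Life generation."""
        neighbours = Counter((x + dx, y + dy)
                             for (x, y) in live
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    # an L=1 persistent aggregate: the glider, which reproduces itself one cell
    # diagonally along every four generations
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = step(g)
    print(sorted(g) == sorted((x + 1, y + 1) for (x, y) in glider))   # True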

The thought experiments that we are left to take away include: what is the ontological status of the patterns (glider guns, logic gates, etc) that have been created through these illusory collections of cells? At what perspectives would we be able to perceive our Life chess-player as such, and at what perspectives would it appear to be a random, chaotic mulch? Does the latter matter?

A really beautiful work, and one which subtly also begins to emphasise the statistical nature of how such patterns (on a vastly complex scale) may function in consciousness and other real-world emergent scenarios. Just as Bedau argued previously, what arises are whole classes of macro phenomena which can be grouped by some mean tendencies: the tendency for 2-2 Life to result in a chaotic slime, the tendency of birds to flock in synchrony, the tendency of a human agent to act in loosely predictable, intentional ways. The metaphysical reality of an abstract centrepoint to such tendencies is difficult to confirm, but the broader reality of such persistent, useful patterns is difficult to deny.

This marks the end of the "Philosophical Perspectives on Emergence" section. Next up: Scientific Perspectives.

Emergence ch8: Downward Causation and Autonomy in Weak Emergence

in project: emergence-advent

Mark A Bedau - Downward Causation and Autonomy in Weak Emergence (2003)

In which we are given the first serious and hopeful treatment of 'weak emergence', a term coined by Bedau some time prior to this stellar piece of work. Focusing squarely on the objectivist models of emergence (that is, those which do not rely on some subjective element of surprise), Bedau lays down a convincing argument that the ability to reduce an emergent property to its underlying parts does not make it uninteresting or insignificant.

Weak emergence sits in the middle of a trio of identified categories: "nominal emergence", which stipulates simply that a macro-level property depends on a collection of micro-level parts but cannot be held individually by these parts; and "strong emergence", which requires the existence of irreducible, supervenient macro-level properties and causal powers. Bedau makes his thoughts on strong emergence clear when he states: "Strong emergence starts where scientific explanation ends".

A phenomenon which falls within the class of "weak emergence" can, given archangel-like computational powers, be derived from the network of interactions through which it emerges. However, such a network may comprise "myriad non-linear and context-dependent micro-level interactions", making it unfeasible to forecast its outcome without simply iterating through these interactions. There is no "short-cut" derivation, to use Bedau's terminology; the extreme context sensitivity means we must simply churn through the micro-level processes until the macro-level outcome has been determined. This touches on the Kolmogorov-Chaitin notions of algorithmic complexity and incompressibility: there is no shorter way to calculate the algorithm's output than simply executing the algorithm itself.
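To illustrate the distinction (my own example rather than Bedau's): a simple recurrence has a closed-form short-cut, whereas for something like Wolfram's rule 30 cellular automaton no such formula is known, so the only route to step N is to churn through every step before it.

    def rule30_step(cells):
        """One update of the elementary cellular automaton rule 30 on a ring."""
        w = len(cells)
        return [cells[(i - 1) % w] ^ (cells[i] | cells[(i + 1) % w]) for i in range(w)]

    # short-cut derivation: x(n) = 2**n, no iteration required
    print(2 ** 20)

    # no short-cut known: iterate the micro-level rule all the way to step 20
    row = [0] * 31 + [1] + [0] * 31
    for _ in range(20):
        row = rule30_step(row)
    print(''.join('#' if c else '.' for c in row))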

One immediate question that surfaces is how complex this complexity needs to be. Surely some derivations are reasonably obvious, with quasi-shortcuts or broadly general solutions. Bedau addresses this by affirming a "spectrum of more or less weak emergence", graded by how intractable a property is to derive without simulation. This fits the intuition that there is no clear line between emergent and non-emergent properties.

We finally see how "weak emergence" tallies up with the perennial problems of causal exclusion and downwards causation. Since there is no longer the unbridgeable rift of duality that strong emergence imposes between macro and micro, the macro causal structure is equal to the aggregate of micro causal elements, and so no micro causal laws are overturned or out-prioritized. The vicious circle argument (in which a macro property can, at a given time t, theoretically affect and scoop out its own subvenient base) is not applicable because weak emergence is inherently diachronic; a macro pattern can subvert its micro constituents at time t+1, but that's OK -- this is exactly what happens in the real world (we experience neural pain as a headache, we take a painkiller, the underlying neural cause dissipates and we no longer have the conscious experience of pain).

The last worry, then, is that we are back to a plain epistemic mode of emergence: the entire causal structure at the macro level can be predicted, given knowledge of the micro-level constituents and sufficient processing time. This is true. However, given that a macro behaviour can be realised through many different routes, whole new general classes of macro entities can be created, with autonomy from particular micro pathways (and here, it's noticeable that the language switches to talking about the same "kinds of" macro behaviours). The justification is that the same reasoning underwrites causal autonomy between, say, chemistry and biology. This defence is only somewhat convincing, and feels like it is lacking in rigour.

As an addendum, most of Bedau's novel examples are given by way of Conway's Game of Life automata, its first major appearance in this reader. We'll be seeing more of it in the following chapters.

Sorry to all of you who have been checking back each day, only to be brutally rebuffed by the lack of any new doors. With luck, the missing days should be made up very shortly.

Emergence ch7: Making Sense of Emergence

in project: emergence-advent

Jaegwon Kim - Making Sense of Emergence (1999)

Read as RTF

Kim's 'Making Sense of Emergence' ploughs thoroughly through a number of the major questions for the philosophy of emergence:

downwards causation: is it possible or even necessary for a macro-level entity to be able to exert causal powers on micro-level parts (and, beyond that, its own micro-level parts)?

explainability, predictability, reducibility: can these properties be meaningfully decoupled, and which can then be applied to a truly emergent property?

synchronic vs diachronic causality: does it make sense for emergence to be divorced from a temporal base?

The conclusion is that the only well-formed foundation for strong emergence is one that is diachronically causal. A clearly seminal paper, but one which left me with another feeling of metaphysical fatigue.

Emergence ch6: How Properties Emerge

in project: emergence-advent

Paul Humphreys - How Properties Emerge (1997)

Read as Word document

The goal of Humphreys' paper is to coherently formulate a generalised position that does not fall prey to two key problems for mental causation: the exclusion argument (PDF) and Kim's downward causation problem. The solution is to create a logical "fusion" operation which, by creating compounds of micro-level properties, is proposed as the actual source of emergence (not the instantiation of the micro-level base). This also serves to resolve the problem of downward causation by giving us chains of causal couples which always begin and end at the base level, though potentially mediated through higher-level causal structures.

Again, however, this gives us a coherent metaphysical possibility, without very much real-world meat. Humphreys hazards quantum entanglement as one potential example, but concludes that the jury is still out.

Emergence ch5: Aggregativity: Reductive Heuristics for Finding Emergence

in project: emergence-advent

William C Wimsatt - Aggregativity: Reductive Heuristics for Finding Emergence (1997)

Rather than focusing on seeking the essential characteristics of emergence, Wimsatt's paper takes the opposite approach and attempts to pin down the set of conditions for a property to be definitively non-emergent. We saw earlier that it's not a straightforward process to distinguish between the two in any case, with certain "obviously" linear-additive properties being a little more complex on inspection, and vice versa. Wimsatt throws in another nice example of nonlinear composition, that of volume under dissolution: the volume of a salt-water solution is actually less than the sum of the volumes of its constituents. Sometimes, more is less.

The key thesis is that emergence is a consequence of certain organisational properties, combined with context-sensitivity of the parts that constitute this organisation. Non-emergent systems properties are termed "aggregates" by Wimsatt. To be truly aggregative, a property must be functionally invariant when its parts are subjected to any of the following transformations:

  • intersubstitution (that is, rearranging or substituting parts for others)
  • size scaling (adding or subtracting parts)
  • decomposition and reaggregation (of parts)
  • linearity

For the macro-scale systems property to not vary under any of these transformations, it is pretty clear that it must be radically functionally homogeneous. Wimsatt observes that the only paradigmatic aggregative properties are those governed by major laws of conservation: mass, energy, momentum and charge.

Perhaps the most rewarding movement of this argument is where Wimsatt takes it in the final couple of pages. With a background in the philosophy of biology, he writes on the structures that underlie natural selection and the models that we, as scientists, impose to understand them. Here, he criticises the "nothing but X" language of radical reductionism, such as in the oft-touted "genes are the only units of selection". If we take the complex dynamical systems that comprise the natural world and attempt to reduce them to a model based on one underlying constituent unit (the gene, the agent, the neuron), we cannot then make claims to universality of our model: this is what Wimsatt terms the functional localisation fallacy. Such a decomposition is useful for studying some aspects of a system, but it should be understood that it must be complemented by other such decompositions from different levels and angles.

Emergence ch4: Emergence and Supervenience

in project: emergence-advent

Brian McLaughlin - Emergence and Supervenience (1997)

Read as PDF

In terms of its subject matter, McLaughlin's second paper in the collection follows on chronologically from his first. In the wake of quantum mechanics and other modern scientific advances, he affirms that: "On the current evidence, it appears that all fundamental forces are exerted below the level of the atom". So, is it still logically tenable to appeal to truly "emergent" forces which are genuinely, radically unexplainable from underlying processes?

Yes, is the result, though it's highly unclear that there actually exist any such forces. In brief, McLaughlin completes the hard work of the later Emergentists by formulating a rigorous definition of what it would mean to be truly, intuitively emergent:

If P is a property of w, then P is emergent if and only if (1) P supervenes with nomological necessity, but not with logical necessity, on properties the parts of w have taken separately or in other combinations; and (2) some of the supervenience principles linking properties of the parts of w with w's having P are fundamental laws.

Got that?

The magic lies in the use of "nomological necessity", which approximates to a semantic relationship of implication (if "the parts have property A" is true, then "the whole has property B" is true) rather than necessity through logical deduction (such as that of "all bachelors are male"); and the "fundamental laws" clause, which is akin to the Emergentist "configurational laws": that is, non-deducible first principles.
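On my reading, the definition can be schematised roughly as follows (a loose gloss of my own, not McLaughlin's notation), writing \(\Box_N\) for nomological necessity, \(\Box_L\) for logical necessity, and \(Q\) for the relevant properties of w's parts:

\[
\mathrm{Emergent}(P, w) \iff
\Box_N\big(Q(\mathrm{parts\ of\ }w) \rightarrow P(w)\big)
\;\wedge\;
\neg\Box_L\big(Q(\mathrm{parts\ of\ }w) \rightarrow P(w)\big)
\;\wedge\;
\text{some of the linking laws are fundamental.}
\]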

Supervenience diagram

This definition is constructed through the use of the supervenience relation (see left), which sees wide use throughout analytic philosophy. To say that mental states supervene on neural states is to say that any change in mental state also entails (or, alternatively considered, requires) a change in neural state. Conversely, many neural states (labelled A on the diagram) may potentially map to the same mental state (B).

So, there we go. Through this modal-logic scaffolding, emergence has been shown to be logically valid. However, McLaughlin himself is the first to admit that, even so, the only remaining known candidate for true emergence is consciousness - and this too is only left as an "open question". The resolution will come if it is ever revealed that the principles on which conscious states supervene are "fundamental" (i.e., in accordance with vitalism) or otherwise. My feeling is the latter.

Emergence ch3: Reductionism and the Irreducibility of Consciousness

in project: emergence-advent

John Searle - Reductionism and the Irreducibility of Consciousness (1992)

So far in this festive season of emergence, we have seen the radically strong position (i.e., emergent properties are real and ontologically irreducible to their parts); and the radically weak position (i.e., emergent properties are illusory, a consequence of our present ignorance of their true causal factors). With Searle's Reductionism and the Irreducibility of Consciousness, we start hitting the nitty-gritty, philosophically refining the space between these two polar points. We do so by walking into the minefield of philosophy of mind.

Consciousness is perhaps the paradigmatic example of a radically emergent phenomenon. By its nature, it is intrinsically subjective and complex (see Nagel's What Is It Like To Be A Bat?). Science is faring little better - no neurophysiological correlate has yet been found to allow us to predict reliably whether a subject is experiencing consciousness.

Searle's account begins with the claim that consciousness is emergent not just from the spatial relationships between the mind's constituent neurons, but from the causal interactions between them. He accepts that mental features (those of experience) are caused by their neural substrate, but denies that they can simply be ontologically reduced to them, in the same way that the liquidity of a substance cannot be reduced to the spatial configuration of its molecules; instead, both rely on "causal" emergence, in which the causal powers of consciousness can be fully explained by the causal powers of its underlying neurons.

Here, linguistic concepts are added to the mix, as Searle looks at how emergent concepts are formalised. Take the example of "redness". Starting with a subjective experience of red things in the world, we advance our scientific knowledge and come to the understanding that "redness" is caused by the reflection of a certain range of wavelengths of light. We then redefine "redness" as this objective, underlying principle, and our subjective experience of red things becomes relative to this real-world fact. In Searle's terminology, we "carve off the surface features" of redness - the surplus contained within a subjective experience - and are left with a relationship between affect and reality.

He proceeds to argue that, given that consciousness is itself the "subjectiveness" of experience, there is nothing to carve off, and no underlying reference point. We can no longer distinguish between the referent and our experience of it - indeed, the underlying phenomenon in question is subjectivity itself. So, the two have converged, meaning that this technique of "reduction" cannot apply to consciousness, by definition.

This is all fine. However, I can't help but feel a little short-changed: all we are left with is the outcome of a metaphysical game.

Searle uses the convergence exercise to argue that "consciousness" is an irreducible fact, after whose application "we are still left with a universe that contains an irreducibly subjective physical component as a component of physical reality". That is, consciousness exists, and we cannot use the carving-off technique to attach it to some external pattern. But, in the neural state space, is it not possible that there is some continuous subspace which directly correlates to the experience of a given conscious state? If so, would it not be acceptable to come to refer to this fuzzy state space as "consciousness"?

I'm away tomorrow and over the weekend, so normal advent programming will resume on Monday. Apologies for any distress this may cause.

Emergence ch2: On The Idea of Emergence

in project: emergence-advent

Carl Hempel and Paul Oppenheim - On The Idea of Emergence (1965)

Moving from the previous chapter's account of perhaps the strongest ontological statement of emergence, Hempel and Oppenheim make the counterpoint by arguing that the appearance of "emergent" phenomena is, in fact, a result of our ignorance of intermediary laws. In the tradition of logical positivism, they state that, until we have a micro-theory that gives us insight into the "inner mechanism" of a phenomenon, we do not truly have "real scientific understanding" of it. This epistemic limitation gives rise to our surprise when encountering such a phenomenon, creating illusory emergence, which later dissipates as our knowledge of the world develops.

Emergence is, then, relative to a set of theories, which include a set of bridging laws to map between (say) physico-chemical terms and biological terms. A sufficiently advanced theory-set allows us to deductively infer the relevant macro property, and the emergence vanishes.

What we are left with is the weak emergence in vogue today, against the Emergentist tradition of strong, ontological emergence. Both will be refined further over the next few weeks.

One interesting addendum is their criticism of the resultant/emergent dichotomy adopted by many of the British Emergentists. A property is said to be resultant if it can obviously be deduced from the sum of its parts; for example, the mass of a stone is equal to the additive sum of the masses of its constituent molecules. It is said to be emergent if it is not explainable or predictable from the combination of its parts. However, both classes are really subjective judgements. "Obviousness" is in the eye of the beholder, and a compound behaviour which may seem unpredictable under some set of theories can seem obvious - logically deducible, even - under some other set. Additionally, under the mechanics of relativity, the "obvious" additive property of mass is not, in fact, a linear resultant, and so the argument loses its remaining support.

Note, however, that this is dependent on our beholder having sufficiently broad perspective as to encompass all of the relevant theories and bridge laws, and potentially unlimited computational powers (a la Broad's "mathematical archangel") to be able to follow through the labyrinthine web of causation that may lead to our emergent...

Emergence ch1: The Rise and Fall of British Emergentism

in project: emergence-advent

Brian McLaughlin - The Rise and Fall of British Emergentism (1992)

The first and longest of the papers published in Emergence, McLaughlin's Rise and Fall puts the collection in context by providing an overview of the first major, sustained philosophical discussion of emergence: between a series of British thinkers, from JS Mill's System of Logic (1843) to the scientific advances of the 1920s onwards.

First, as the opening of this series, an overview of what's at stake. "Emergence" is the phenomenon of macro-level properties or behaviours that are a product of an aggregate of micro-level parts. Popular examples include nature's synchronised swarming behaviours, physical phase transitions (say, from solid to liquid) at a critical temperature, consciousness and thought, the phenomenological experience of colour, etc. The Stanford Encyclopaedia provides a potted overview.

Emergentism is a strong philosophical brand of belief in emergence, which states that there are emergent phenomena which can in no way, ontologically or epistemologically, be fully explained from their lower-level constituents. Working from the assumptions that:

  • everything can be reduced to matter, with some underlying level of elementary particles; and
  • there is a hierarchy of levels above this; from bottom up: physics, chemistry, biology, psychology… (cf xkcd)

..the British Emergentists claim that, though the matter of a level B may be composed of the same stuff as its underlying A, it may be able to exhibit special "configurational forces" which cannot be explained or deduced from the forces of A. The motivation for such thinking was the contemporary interest in chemical reactions, and the unexplainability of (say) the dissolution of salt in water from elementary particles.

Unfortunately, as McLaughlin observes, although this form of emergence may be logically coherent, it only remains empirically viable so long as we have no scientific way of understanding how such laws can emerge without resorting to some mystical forces; this is a God of the Gaps scenario. McLaughlin refers repeatedly to the "natural piety" that Alexander recommends we adopt for such faith-reliant situations. And, as Schrödinger and Einstein's leaps in quantum mechanics provided explanations of chemical bonding which did, indeed, bridge between levels, the a posteriori basis of British Emergentism collapsed.

The punchline of the chapter is that, today, we must accept with "natural piety" the difficult fact that high-level concepts such as production do indeed supervene on the same minimal set of forces as electromagnetic bonds.

Emergence Advent Calendar

http://www.erase.net/.../emergence-advent/

It's the last month of the last year of the decade. The list of good intentions that I have failed to accomplish this decade would probably take the entirety of the remaining month to recite; instead, I'd like to commit to spend at least some of it doing these things.

Most prominent on my to-read list is Bedau and Humphreys' Emergence, a reader of seminal philosophical and scientific texts in the field of emergent phenomena. It's occurred to me that I could apply the same brute-force methods of the hackpact to a kind of public read-through of the book. Noticing its 24-chapter girth, I am thus proud to begin:

The Emergence Advent Calendar

Over the first 24 days of December, I will be reading each of the 24 chapters that comprise the Emergence reader, and writing a brief critical overview of each. It's partly a sustained research project, partly an exercise in self-discipline, and partly an ongoing secular-academic Christmas gift.