The Cone of Plausible Pasts

14 September, 2010

In the first meeting of Art Kleiner's scenario planning class at ITP, The Future of the Infrastructure, Art drew a diagram on the board that he labeled "The Cone of Plausibility".

Cone of plausibility

The point at the bottom of the cone represents the present moment, its surface the most widely divergent plausible futures we can imagine at any given future time. Hence, the cone widens as time progresses and more widely differing futures become possible. All the points within the circle at the top of the cone represent scenarios we consider plausible at whatever date we decide to consider, in this example 2025. And all the lines within the cone represent series of events linking the present to those divergent future situations.

In one example we considered in class regarding the future of the economic recovery, three plausible points on the final circle represented: (1) a full economic recovery led by the existing elites, who respond to the current troubles by taking them as a spur to needed reforms; (2) a "wisdom of crowds" scenario, where the elites fail to respond adequately to the troubles and are hence overtaken and made irrelevant by a new grassroots economy enabled by digital organization tools; (3) a scenario of "chaos and collapse", where the elites fail to respond but the wise crowd also fails to emerge to take its place.

Exploring the differences between these scenarios and the drivers that would possibly get us from here to there makes up the primary work of scenario planning. And this exercise was meant to get us thinking like scenario planners.

A key premise of the exercise is that the current moment, the point at the bottom of the cone, represents the moment about which we know the most, the moment of least uncertainty. As we project further into the future, the cone gets wider because we know less about how things will turn out, and more (and more divergent) scenarios become possible.

However, in thinking about this diagram further, it occurred to me that it is radically incomplete: it doesn't include the past. Even worse, it implicitly assumes that the past is a solid line, a series of known points, like the present, receding from the cone along a single well-understood path.

Even as a dabbling historian, I know quite well that the past is at least as ambiguously and incompletely known to us as the future. The further afield you venture from your present (chronologically or culturally), the cloudier the past becomes. This cloudiness ranges from disagreement over matters of interpretation (to keep to the economic theme: what ended the Great Depression? Was it progressive economic policy that reduced inequality, or was it simply the tremendous amount of economic activity required by the advent of World War II?) all the way to disagreement over basic facts (to name an extreme example: the recent paleontological debate over whether much-beloved schoolroom species like Triceratops were really distinct creatures at all; or, to name a more mundane example: events in ancient Greece that come to us only through a small number of historical texts, many of which are, on some issues, of questionable trustworthiness at best).

It seems to me that, rather than a solid line leading up to the present moment, the past actually looks a lot more like a cone of its own, with plausible historical scenarios diverging from the present in a mirror image of the future ones:

At first glance, this might not seem to alter the situation of a scenario planner much. After all, however we got to the present moment, we're here now; we know, more or less, our current situation; and what we're interested in is what will happen as the cone widens into the future.

On the contrary, though, I will argue that our sense of the past, of which track through the lower cone of plausible pasts we believe led to the present moment, will make an enormous difference in the range and type of future scenarios we are able to imagine.

The question is one of both meaning and momentum. How we interpret and prioritize driving forces when examining future scenarios will depend on our sense of how those forces have acted up to now.

Let me use an extremely common example to illustrate my point: the increase in computational power over time described by Moore's law. Taking this as raw data, many current forecasters graph the exponential growth of computing power against estimates of the brain's capacity and start spinning scenarios in which artificial intelligence begins to play a large role in the development of practical technology on a relatively short timeline, since machine computing capacity will exceed that of human brains before too long.
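
To make that extrapolation concrete, here is a minimal back-of-envelope sketch of the forecasters' arithmetic. Every figure in it is an illustrative assumption of mine (a present-day machine at roughly 10^13 operations per second, a much-contested brain estimate of 10^16, a two-year doubling period), not a claim drawn from any particular forecast:

    # A toy Moore's-law extrapolation. All numbers are illustrative
    # assumptions, not measurements.
    import math

    machine_ops = 1e13     # assumed ops/sec of a high-end machine today
    brain_ops = 1e16       # one common (and contested) brain estimate
    doubling_years = 2.0   # rough Moore's-law doubling period

    # Solve machine_ops * 2**(t / doubling_years) = brain_ops for t.
    years_to_parity = doubling_years * math.log2(brain_ops / machine_ops)
    print("~%.0f years to parity under these assumptions" % years_to_parity)

Under these assumptions the crossover lands about 20 years out, and the result is strikingly insensitive to the inputs: being wrong about either estimate by a factor of ten shifts it by only about 6.6 years (two years per doubling, times log2(10) doublings). That insensitivity is part of why the extrapolation feels so inevitable to those inclined toward the AI track through the history cone.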

Now, let me try to place that piece of data in a few different historical contexts in order to illustrate how our choice of historical "world track" (the line through the cone of plausible pasts we tend to believe in) might influence what scenarios we can conjure for the future.

In the 50s and 60s, the dominant thinkers in computing research were devotees of the artificial intelligence vision of computer development. People like J.C.R. Licklider, who was in charge of computer funding at ARPA, John McCarthy, a prominent researcher at MIT who went on to found the Stanford Artificial Intelligence Lab, and others pictured the goal of computing research as creating computer "colleagues" who could collaborate with researchers as equals (cf. Bootstrapping by Thierry Bardini). This group produced a number of innovations that led to advances in factory automation and other fields.

A 1955 Time Magazine cover featuring Thomas Watson.

Deep in the shadow of this group, a small minority of researchers working under Doug Engelbart at the Augmentation Research Center saw computing as a means of enhancing and extending the human ability to recall, create, and manipulate information. Engelbart's group focused on developing means of interacting with the computer: the mouse, the graphical user interface, the idea of the computer as a communication device.

Let me rephrase these two views of computing in terms of the cones of plausible history. There are two tracks through the part of the cone relevant to the development of computation. Both have in common the belief that the quantity of raw computing power has increased wildly since the early 60s. However, they differ in their interpretation of the ultimate purpose of computer technology, of what its greatest impact on society will end up being.

For the view oriented towards AI, very little of the potential impact of computers has been realized so far. In our present moment, computers are not much like "colleagues", but as the raw quantity of computation reaches and exceeds that embodied by the human brain, they would argue, this will change: AI technology will have a transformative effect on human civilization that will make the role of computers thus far seem minor.

Advocates of the Augment view, on the other hand, would argue that, over the last 40 years, computers have in fact dramatically transformed a wide array of human endeavors. From communication via cell phones and electronic networks to computer-aided design and fabrication techniques, wherever computers act to extend already existing human abilities, our economic and social realities have been transformed.

Now, having reviewed the history and traced two tracks through it, let us return to the present. Each of these two paths through the history cone reaches the present moment with an interpretive momentum that will alter how it sees driving forces such as the likely growth of computer capacity beyond that of the human brain. A proponent of the AI view has specific expectations of the transformative events this change will cause, a set of ideas that tends to get grouped under the label of The Singularity. Augment devotees, however, might imagine a range of different scenarios playing out in a future with greater-than-human raw computing power, ranging from incremental, non-transformative improvements in current devices and networks all the way to the advent of transformative new ones, such as augmented reality, that depend on the new capability.

Hopefully, at this point, I've made a pretty compelling case that how we interpret history can change how we weigh driving forces in the present and how we spin them into scenarios for the future. As a novice scenario planner, I have little sense of whether this kind of thinking is conventionally part of the practice, or of what more structured role it could productively play. As an enthusiast of computer history, I'll be keeping a close eye on these questions as the class proceeds.