March, 2005: The Blind Men and The Elephant

Michael is editor-at-large for DDJ. He can be contacted at [email protected].


From one perspective, it makes no difference whether a programming problem is solved by 200 lines of Fortran code or a handful of Java classes or a set of Lisp functions. Any general-purpose programming language can, in principle, solve any problem that any other general-purpose programming language can solve: That's what it means to be a general-purpose programming language. And if it's solved, it's solved.

From another perspective—the working programmer's—it can make a great deal of difference what language you use to solve a problem. There is typically a domain of problems that is natural to a given programming language. Using the right tool saves time and effort.

Exploring the implications of a set of premises is what Prolog was designed for. Fortran was originally created to speed up large but straightforward mathematical calculations. Snobol is all about manipulating strings of characters. When Perl fanatics show you that they can build a spreadsheet in Perl, they may be demonstrating the flexibility of Perl or demonstrating their chops while busting yours, but they're probably not demonstrating sensible professional programming behavior.

What goes for programming languages also goes for other programming tools, whether they are called libraries or toolsets or frameworks or methodologies or whatever. Like the hammer that conditions the carpenter to think of every problem as a nail, programming tools are all just different paradigms, different perspectives from which we look at problems. All perspectives are in one sense equivalent, but for any given problem, one perspective may reveal a solution much more quickly and naturally than another.

I've been thinking a lot recently about this idea of perspectives that are in one sense equivalent but in another sense very different. I was led into these thoughts by two things: playing around with the latest version of Mathematica, and rereading the key chapter of Stephen Wolfram's A New Kind of Science. These thoughts, in turn, led me to research an old story about some blind men and an elephant, and to realize that the moral of that story might be very different from what I always took it to be.

All of which led to the following tentative reflections on paradigms and perspectives and programming and science and relativism.

A Hindu Parable

Do you know the story of the blind men and the elephant? If you do, you probably either have read the poem by John Godfrey Saxe or have been introduced to the blind men by some speaker or writer using the story to illustrate a point. The poem ties up the story with a straightforward moral, and the essayists and lecturers use it similarly, but the original Hindu parable, at least in the version that I've seen, is surprisingly ambiguous.

In the parable, a raja sent a servant to gather several men who were born blind and to have them examine an elephant and report on their findings. The servant showed each blind man a different part of the elephant, and predictably, each reported on the aspect of the elephant that he had experienced—the one that had touched its side saying that it resembled a wall, the one that felt its tusk saying that it resembled a plowshare, and so on. And each thought that his experience of the elephant was the complete and correct view of the beast. So certain were they that they came to blows over this matter of the nature of the elephant. The raja, according to the parable, was delighted with this scene. Go figure.

Actually, there is a frame-story wrapped around this one, in which the Buddha relates this story of the raja and the servant—who seems to enjoy playing practical jokes on the visually impaired—for a purpose: The Buddha wants to teach his disciples a lesson about those who argue over whether the world is infinite or finite or whether the soul dies with the body or lives forever. His lesson is that, in their quarreling, each clings to his own view and sees only one side of the issue.

I've heard or read the story several times, always presented to make a point about the need to recognize the limits of your present perspective. But on rereading it, it seems ambiguous. Does the Buddha expect his disciples to "see" the true nature of the universe, or merely to recognize their own blindness? The blind men are, after all, congenitally blind. Are we to believe that there is what Albert Einstein called, in a different context, a "privileged perspective," a nonblind view of the elephant, of reality? Or do we get only a choice of different but equivalent perspectives—which are not views of some underlying reality, but are themselves all there is to reality? I think the story can be read either way.

Blind Men and Programming Paradigms

In the case of programming languages as perspectives, it seems to me that the second interpretation of the story is the relevant one. In other words, there is no elephant: no privileged programming language, or privileged programming paradigm, merely a possibly infinite set of functionally equivalent ways of going about solving problems of computation.

Mathematica, the symbolic mathematics software invented by Stephen Wolfram, is a good playpen for fooling around with different programming paradigms. You can use it like a procedural language in the style of Basic or Fortran:

z = a;
Do[Print[z *= z + 1], {i, 3}]   (* loop three times, each pass updating z in place and printing it *)

or as a functional language like Lisp, in which everything is a function call and functions can be treated as data objects:

NestList[(1 + #)^2 &, x, 3]   (* apply the function three times, keeping x and each successive result *)

or as a string-manipulation language like Snobol or some of the popular "little" or scripting languages:

StringReplace[s,
{"AG" -> "AC", "GT" -> "GT"}]

or as a Prolog-like rule-based language:

p[x_ + y_] := p[x] + p[y]
p[a + b + c]

or define objects as in object-oriented programming languages, or mix paradigms in one program.
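
Mathematica has no built-in class system, but you can get object-like behavior by hiding state inside a Module and dispatching "methods" through the symbol it returns. Here is a minimal sketch of one such idiom (the names newCounter, increment, and value are purely illustrative, not anything built into Mathematica):

newCounter[] := Module[{n = 0, obj},
  obj[increment] := ++n;   (* a "method" that updates the hidden state *)
  obj[value] := n;         (* a "method" that reads it *)
  obj]                     (* return the dispatching symbol *)

c = newCounter[];
c[increment]; c[increment];
c[value]   (* 2 *)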

But there is, according to Wolfram, one unifying idea underlying Mathematica: Everything can be represented as a symbolic expression of the form

head[arg1, arg2, ...]

Every operation in Mathematica is ultimately a transformation of such a symbolic expression. So maybe for Mathematica, there is a privileged perspective: symbolic expressions transformed by transformation rules.
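
You can see this uniform representation for yourself: FullForm displays any expression in its underlying head[arg1, arg2, ...] shape, whatever notation you typed it in:

FullForm[a + b^2]
(* Plus[a, Power[b, 2]] *)

FullForm[{1, "two", x -> y}]
(* List[1, "two", Rule[x, y]] *)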

Does the fact that Mathematica seems to have a privileged paradigm mean that there is some privileged programming paradigm in a general sense? I don't think so. Surely Mathematica's privileged perspective is simply a consequence of its architecture.

But what about assembly language or machine language? Might that be the "true" perspective against which high-level languages are merely distorted views, not from blindness maybe, but through tinted glasses?

I suspect not. I think that when we talk about the perspective of a programming language, we are not talking about a particular implementation on particular hardware, but about the programming paradigm behind that language—object-oriented programming, for example, or declarative programming. And if the question is really about paradigms, and about full computational systems that include the hardware, then it doesn't seem that there is any privileged perspective. There are practical reasons for building the underlying logic hardware the way we do, but not fundamental logical reasons.

The CA paradigm

It is, of course, of great practical importance that different programming paradigms work better for different purposes. Particular paradigms are easier to apply, more natural in particular contexts.

Stephen Wolfram's preferred programming paradigm seems to consist of the following components:

  1. A set of transformational rules.
  2. Data to operate on.
  3. An engine that applies the rules to the data.

That's loose enough to describe Mathematica or an expert-system inference engine or any of a number of other programming systems. If you add the assumptions that the data enter only at the beginning of the process, as the initial condition of the system, and that the engine keeps applying the same rules to the output of its previous application of the rules, then what you have is a pretty good definition of a cellular automaton (CA). The most famous example of a cellular automaton is John Horton Conway's Game of Life, popularized in the 1970s by Martin Gardner in the pages of Scientific American.
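
Mathematica's built-in CellularAutomaton function puts all three components on one line. In this small, illustrative run, 30 names the set of transformation rules, {{1}, 0} is the data (a single 1 on an infinite background of 0s), and the function itself is the engine, applying the rule five times, each time to its own previous output:

CellularAutomaton[30, {{1}, 0}, 5]
(* {{0,0,0,0,0,1,0,0,0,0,0},
    {0,0,0,0,1,1,1,0,0,0,0},
    ...four more rows...} *)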

The CA paradigm turns out to be capable of emulating a Turing machine, and is therefore computationally equivalent to any general-purpose programming language. It's a paradigm that Wolfram has spent the past 20 years studying. The question raised by his research is: Is the CA paradigm the best perspective for studying the universe? Is it, in fact, the universe's privileged perspective?

My reading of Stephen Wolfram on science is that, contrary to the situation with programming, there is a privileged perspective in science. That may not seem particularly strange: It is hardly shocking, in fact may be a little old-fashioned, to suggest that there is a fundamental reality behind our various views of the universe, that there is a real elephant behind the differing reports of the blind men. The curious thing, though, is that, for Stephen Wolfram, this privileged perspective is itself functionally equivalent to a programming language.

Blind Men and the Universe

I've written before about Wolfram's magnum opus A New Kind of Science, but I never really did justice to the key chapter of the book, the one in which he explains his Principle of Computational Equivalence. I don't know that I can do better now. I keep trying to absorb it, but I begin to suspect that the simple writing style that Wolfram adopted for the book is inadequate for fully explaining this concept, which he claims is broader than previously established deep results about computation, with richer implications than the laws of thermodynamics: a new law of nature, an abstract fact, and a powerful and enlightening definition.

That's a lot to claim. But if you take Wolfram seriously, and his intellect makes it foolish not to at least give him a hearing, the concept is central to understanding a great many things, including the question of whether or not there is a privileged perspective on the universe. He says:

[I]t has become particularly common in the academic humanities in the past few decades to believe that there can be no valid absolute conclusions about the world—only statements made relative to particular cultural contexts...But the Principle of Computational Equivalence implies that in the end essentially any method of perception and analysis that can actually be implemented in our universe must have a certain computational equivalence, and must therefore at least in some respects come to the same absolute conclusions.
—Stephen Wolfram,
A New Kind of Science, p. 1131

Before he can explain his Principle of Computational Equivalence, though, Wolfram has to demonstrate what he could call (but doesn't) the Principle of Computational Ubiquity.

Part of the 1200-page book consists of detailed demonstrations that computations that are similar to, and computationally equivalent to, cellular automata can be found just about everywhere in nature. Wolfram's researches take him into crystal structures, fracture patterns in materials, fluid flow, and patterns in biological morphology. He examines growth patterns in plants and animals, with hundreds of illustrations showing the similarity between the output of a simple program and the structure of a particular leaf. He reasons from the ubiquitous appearance of the angle 137.5 degrees in plant structures to the likelihood of an underlying process that is very much like a cellular automaton. His detailed study of the shapes of seashells is reminiscent of Darwin.

Other chapters in the book explore the way in which such seemingly simple computations show up in other realms, like fundamental physics. One highly interesting assumption of Wolfram's is that these discrete computations are adequate to capture all of physics. He doesn't insist, but he does apparently believe, that the universe is discrete, and that continuous functions are a mathematical abstraction with no direct realization in nature.

There are, I guess, two points to be made here. First, that Wolfram finds computations everywhere. Where the ancients thought that all was fire or earth or air or water or some combination of these elements, and more recently "all is atoms" was a mantra of science, Wolfram holds that "all is computation." And second, the computations that he finds everywhere tend to be, or at least appear to be, quite simple, either cellular automata or equivalent systems.

The reason for this, Wolfram tells us, is that there are no computations more sophisticated than the ones these simple CAs already carry out.

PCE

Wolfram's Principle of Computational Equivalence, the punchline of his book, states that almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. In particular, simple CA systems no more elaborate than Conway's Game of Life are computationally equivalent to powerful computer systems.

It says that once you get beyond very simple systems, all systems immediately attain the highest level of complexity possible, and are computationally equivalent to all other nonsimple systems. The Principle of Computational Equivalence, Wolfram says, "tells us what kinds of computations can and cannot happen in our universe [and] summarizes purely abstract deductions about possible computations, and provides foundations for more general definitions of the very concept of computation." [A New Kind of Science, p. 719.] It introduces a new law of nature asserting that "no system can ever carry out explicit computations that are more sophisticated than those carried out by systems like cellular automata and Turing machines."

One consequence of the Principle is that the detailed behavior of most systems that are not trivially simple cannot be known without in effect running the computation and observing the behavior directly, because any accurate theory, model, or simulation of the system is necessarily of the same degree of complexity as the system itself. This runs counter to our idea of how science works, but that is, Wolfram says, because science today restricts itself to those systems that are simple enough to produce only repetitive or nested patterns of behavior. Science today ignores the vast majority of the processes of nature, looking only at those where easy answers can be found. Whereas the new kind of science revealed by Stephen Wolfram boldly takes on all the hard questions that no scientist has ever had the courage or imagination to tackle before.

Sorry; I got carried away. It's hard to characterize Stephen Wolfram's views without a little of the Wolfram ego slipping in.
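
Ego aside, computational irreducibility is easy to illustrate with Mathematica itself. Take the center column of the rule 30 pattern: as far as anyone knows, there is no shortcut formula that predicts its values, so to learn the hundredth bit you effectively have to run the automaton a hundred steps and look. A sketch (the variable names are mine):

ca = CellularAutomaton[30, {{1}, 0}, 100];   (* 101 rows, each 201 cells wide *)
center = ca[[All, 101]]                      (* the middle cell of every row *)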

So what is the scientific status of this Principle of Computational Equivalence? Well, the whole of A New Kind of Science is an argument for the Principle. And Wolfram acknowledges that the Principle is so fundamental that it may not be directly testable by the conventional methods of science. But he argues that the large amount of data presented in the book and the new perspective that the book opens up strongly support the Principle. Perhaps, he suggests, various aspects of the Principle will come to be accepted, until eventually the whole thing seems too obvious even to mention.

Time will tell.

The Privileged Perspective?

Wolfram titled his book A New Kind of Science because, essentially, nobody has ever done science in the way he proposes. Scientific method has traditionally consisted of looking at complex processes and discovering simple regularities in the output of these processes. These regularities are invariably either repetitions or, as in the case of fractals, nested regularities. Wolfram proposes studying the processes themselves in all their computationally irreducible complexity. Because he finds computational systems everywhere in nature, he concludes that this means studying the behavior and properties of computational systems that are equivalent to cellular automata.

And so, the image emerges of the entire universe as a vastly complex system creating itself anew each instant from a possibly simple set of initial conditions and a possibly simple transformational rule.

Now, although a CA can be emulated by a Turing machine or other programming paradigm, we know that one programming paradigm is usually the most convenient, the most natural, in a given context.

For the universe, is that most natural paradigm the cellular automaton?

DDJ

