

A New Kind of Science and Wolfram's Computational Equivalence

October 15, 2009

With the official launch of Wolfram|Alpha earlier this year, and the recent release of the Wolfram|Alpha Webservice API, I got to thinking about Stephen Wolfram and his contributions to computing. These classic 2002 entries from Michael Swaine's long-running Programming Paradigms column in DDJ examine Wolfram's theories as laid out in his seminal book, A New Kind of Science.

A New Kind of Science

by Michael Swaine

Michael examines Stephen Wolfram's magnum opus, A New Kind of Science.

"I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

- Isaac Newton

In this column, I'm writing about Stephen Wolfram's magnum opus, A New Kind of Science (Wolfram Media, 2002; ISBN 1-57955-008-8). This 1200-page, half-a-million-word book is the most astonishing thing I've read in a long time. One of the astonishing things about the book is its ambition. Like Newton, Wolfram sees the achievements of science as instances of playing in puddles on the shore while the great ocean of truth lies undiscovered near at hand. Unlike Newton, Wolfram wades in.

"[T]he entire fabric of human reason which we employ in the inquisition of nature is badly put together and built up, and like some magnificent structure without any foundation," Francis Bacon wrote in his magnum opus, The New Organon, published in 1620. "There was but one course left, therefore - to try the whole thing anew upon a better plan, and to commence a total reconstruction of sciences, arts, and all human knowledge, raised upon the proper foundations."

Wolfram's ambition is no more modest - he intends a total reconstruction of science and all human knowledge, erected on the foundation he lays out in this book.

Wolfram says that science has thus far huddled on that safe shore, asking only those questions that its established methods have a good chance of answering. In doing so, it has avoided a vastly larger realm of questions - questions that Wolfram proposes tackling with the method he has been exploring for the past 20 years. That's what his book is about.

I won't attempt here to judge how well it succeeds, but I'll at least try to give a sense of what Wolfram thinks he's figured out. I also have another installment of my thread on quantum computing.

The Key to Everything

Ten years ago this fall, DDJ senior editor Ray Valdez and I sat down with Stephen Wolfram to talk about Mathematica (which he created), mathematics, and whatever might be on Wolfram's mind. The result of that conversation was published in this column in the January and February 1993 issues of DDJ. At the end of that published interview, after sharing his views on narrower technical issues, Wolfram said:

It's actually an interesting historical thing that I've been studying, how partial differential equations ended up being thought by people to be the fundamental equations of physics. It's very bizarre, because it isn't true, and not only is it not true, even the fact that atoms exist makes it clear that it's not true. So why is it that people will [say] that the fundamental equations of physics are partial differential equations?

What happened, I think, is that when these models were first developed, the only methods for figuring out what the consequences were was hand calculation. Computers are a very recent phenomenon in the history of science, and the fundamental models that exist in science have not yet adapted to computation. And that's my next big thing.

That was his plan 10 years ago: To use basic ideas from computation to figure out the correct models of physics. "Next big thing" indeed. At the time, this looked like an outrageously ambitious plan, but I suspected that Wolfram might just be the guy to pull it off. I do think that Wolfram may be a Newton-scale intellect. I'm sure he has Newton-scale ambition.

Ten years later, the ambition of Wolfram's plan has ballooned. In 1992, he was still willing to talk about looking for the fundamental equations of physics. Now, he's apparently convinced that the whole idea of looking for simple equations is the wrong way to do science. Or put it this way: That the way of doing science pursued by every scientist since Isaac Newton amounts to picking the low-hanging fruit from the tree of knowledge. Based on what he said in that interview, in 1992 Wolfram saw computers as a tool for finding the right equations. Today, he sees computation as the method for doing direct experimental research on the fundamental processes by which the universe works. He isn't talking about simulation - his approach isn't to model the clockworks of the universe, but to find the actual program by which today makes tomorrow. Program, not equation: For equations, Wolfram would substitute simple programs as the proper form of scientific law. And along the way, he would throw out the continuum: For continuous models like partial differential equations, he would substitute discrete computations. Again, not as an approximation, not as a simulation.

He thinks the universe is discrete. He thinks it's digital. And it's no longer just physics that he's trying to rebuild from the foundation up. All of science, Wolfram believes, needs to be reconstructed along lines that he lays out in his book. Physics. Biology. Cognitive science. Economics. His approach promises answers, or a path to answers, to such puzzles as free will, intelligence, and how all this bewildering complexity could have come from simple beginnings. He thinks he's found the key to everything.

The Genius In the Attic

"In my early years," Wolfram says, "I was very much a part of the traditional scientific community."

Technically correct, but Wolfram was never much a part of the traditional system of educating scientists. He never had the patience to attend college classes or to finish any traditional academic degree program, although after he had spent a year at Caltech, the university granted him a Ph.D. The number and quality of his professional publications demanded it. At Caltech, and later at the Institute for Advanced Study and the University of Illinois, there was something about Wolfram that rubbed people the wrong way. He had little patience with people whom he considered his intellectual inferiors - and that was most of the planet - and he had no tact at all.

"[H]ad I remained there," he says, "I have little doubt that I would never have been able to create something of the magnitude [of what] I describe in this book."

But he didn't have to remain there. While he was at Caltech, Wolfram became the youngest person ever to be awarded a MacArthur Fellowship, one of the so-called "genius grants." The fellowship carried with it enough money to let a genius pursue his or her research interests for a good stretch of time, although probably not as long as Wolfram has been at it. For that, he has Wolfram Research, the company he built to sell the program that he wrote, Mathematica. The success of Mathematica and his ownership of this profitable, private company created an intriguing situation: One of the brightest people on the planet has had all the money he needs and precisely the tools he needs to pursue whatever research he wishes with no academic responsibilities.

Wolfram has made the most of it, devoting nearly all his time to a single area of research. He is right, too: As part of the traditional scientific community, he probably could not have retreated to an attic room for a decade obsessed with what his colleagues regard as mere recreational mathematics, and it would have been a bit of a challenge to get 10 years of funding for a plan to prove that the god program of the universe is a cellular automaton.

Cellular Automata

Some of us discovered cellular automata, or CAs, through Martin Gardner's February 1971 "Mathematical Games" column in Scientific American, where he introduced John Horton Conway's CA called the "Game of Life." I probably don't need to remind you that the Game of Life involves binary cells in a grid, each either "on" or "off" and all simultaneously updated on each play of the game according to a simple rule. Conway's rule is: Each on cell with two or three on neighbors (out of its eight adjacent cells, diagonals included) stays on; each off cell with exactly three on neighbors goes on; every other cell turns off or stays off. I should remind you that Conway chose this rule carefully to make the behavior of the system over time interesting and unpredictable.
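
To make the rule concrete, here is a minimal sketch of one generation of Life in Python. The toroidal wraparound, the grid size, and the function name are my own choices for the example, not anything from Conway or from the column.

# A minimal sketch of one Game of Life step on a small grid that wraps at the
# edges (toroidal). Cells are 1 for "on" and 0 for "off".

def life_step(grid):
    """Apply Conway's rule to every cell simultaneously; return the new grid."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, diagonals included, wrapping around.
            neighbors = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                new_grid[r][c] = 1 if neighbors in (2, 3) else 0  # survival
            else:
                new_grid[r][c] = 1 if neighbors == 3 else 0       # birth
    return new_grid

# A glider on a 10x10 grid; run a few generations and watch it crawl.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = life_step(grid)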

(First aside: Conway's still into games, as his talk at http://technetcast.ddj.com/tnc_play_stream.html?stream_id=672 proves. Second aside: While researching something else, I stumbled across what is probably my own first implementation of the Game of Life. Since it was written under weekly deadline pressure for the October 11, 1982 issue of InfoWorld and coded in The Software Works Forth for the Osborne 1 computer, I'll spare you the code.)

Others discovered cellular automata through some of the fascinating work done on them in the field known as artificial life, popularized in Steven Levy's Artificial Life (Vintage Books, 1992; ISBN 0-679-74389-8). But the history of CAs extends back decades earlier: Edward Fredkin, former head of MIT's Project MAC, was exploring CAs in 1960, and John von Neumann and Stanislaw Ulam really pioneered the field over a decade before that. Konrad Zuse, one of the inventors of the digital computer, independently invented CAs. Tommaso Toffoli and Alvy Ray Smith have also done significant CA work.

I have to give credit to the other explorers in this territory because giving credit to others is not Wolfram's long suit. "His gracelessness toward his predecessors knows no bounds," Levy wrote in Artificial Life.

But maybe Wolfram has improved. Although the 800-plus pages in the main body of A New Kind of Science read as though no one had ever thought any of these things before Stephen Wolfram shone the light of his prodigious intellect on them, the 300-plus pages of notes go some distance toward acknowledging others' work. (And the notes, printed in smaller type, actually run 50,000 words longer than the main text.)

It is a fact that the study of cellular automata was languishing before Wolfram dismayed his IAS colleagues by dropping quantum chromodynamics to take up this seemingly frivolous subject, and that it was reinvigorated by a series of papers by Wolfram in the 1980s.

Wolfram prefers the simplicity of the one-dimensional CA. Here's a description: Picture an infinitely long line of pixels, each either black or white. That's the input to your 1D CA: the initial conditions. Wolfram often works with extremely simple initial conditions, like one black pixel and the rest white, or alternating black and white pixels. Now apply a nearest-neighbor rule to all the pixels simultaneously, a rule like "if the pixels immediately to the right and left of me are black and I am white, I go black; otherwise I become or remain white." Apply the rule repeatedly. To see what is happening over time, Wolfram shows each successive generation of pixels below the previous one, so the diagrams of his 1D CAs are two-dimensional, but the second dimension is time.
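
A minimal sketch of such a 1D CA in Python follows. Wolfram's line of pixels is infinite; the finite width and the wraparound at the ends here are my simplifications. The rule-number encoding is Wolfram's standard one for elementary CAs: bit (4*left + 2*center + right) of the rule number gives a cell's next color.

# One step of an elementary (two-color, nearest-neighbor) cellular automaton.
def ca_step(row, rule):
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from one black pixel and print successive generations, one per line,
# so time runs down the page just as in Wolfram's diagrams.
def run_ca(rule=30, width=63, steps=31):
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = ca_step(row, rule)

run_ca()  # rule 30: the seemingly random triangle Wolfram made famous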

Wolfram noticed something odd about some of these CAs. One in particular seemed to be able to produce infinite complexity from an extremely simple rule and trivially simple initial conditions. Others had noticed this odd feature of CAs. It was almost what Conway had in mind in picking the Game of Life rule so as to make its behavior "interesting and unpredictable." But Wolfram began to understand this as a discovery of the utmost importance. In information-theoretic terms, it looks like a violation of the second law of thermodynamics: How does all that complexity come out of such simplicity? Where does it come from? And what are the implications of being able to generate complexity out of simplicity?

How Today Makes Tomorrow

That's the question on which Wolfram has spent the past 10 years of his life, the question on which he hangs the half-million words of this book, and the question that he thinks knocks the props out from under science as we know it. The answer, briefly, is CAs.

CAs are computer programs, but they are vastly simpler than the kind of programs that scientists usually use in trying to model nature or physics.

They are no less powerful, though. Edward Fredkin preceded Wolfram in seeing that certain simple CAs were complex enough to model physics, and the artificial life people see them as a way to explore the processes of life. But Wolfram has carefully detailed all the complex systems to which CAs are computationally equivalent. For starters, Turing machines. Simple CAs have the computational power of a Turing machine, which means that they are computationally equivalent to any computer. Wolfram doesn't stop with stating this theoretical result; he shows how to compute with CAs. He explores generalized CAs (continuous, mobile, totalistic), substitution systems (including fractals), production systems or string rewriting systems, register machines, recursive functions, and the tag systems developed by Emil Post for implementing syntactic reduction rules in Principia Mathematica. He shows how to emulate these kinds of systems with CAs, and how to emulate CAs with these kinds of systems.
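
To give a flavor of how small these other systems are, here is a minimal sketch of a Post tag system in Python. At each step the machine reads the first symbol of the word, appends that symbol's production to the end, and deletes the first m symbols. The particular productions, deletion number, and starting word are an arbitrary toy example of my own, not one taken from the book.

# A tiny Post tag system simulator.
def run_tag_system(word, productions, m=2, max_steps=20):
    word = list(word)
    for step in range(max_steps):
        if len(word) < m:                      # halt when the word gets too short
            break
        word.extend(productions[word[0]])      # append the first symbol's production
        del word[:m]                           # delete the first m symbols
        print(f"step {step + 1}: {''.join(word)}")
    return "".join(word)

run_tag_system("aaa", {"a": "bc", "b": "a", "c": "aaa"})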

They are all, he says, equivalent. Wolfram enshrines this equivalence in a principle he calls the "Principle of Computational Equivalence." Not only are CAs and Turing machines and such computationally equivalent, but they are all computationally equivalent to thunderstorms, airfoil turbulence, and human consciousness. So far, this sounds like the Church-Turing thesis.

I'm not sure, at least after one reading of the book, what Wolfram intends here beyond what is already well accepted, unless it is that this computational equivalence is not just a fact about math or computer science or logic, but is a fact about the physical nature of the universe.

But if that's what he means, David Deutsch said it first in 1985. Deutsch, a professor at the University of Oxford, is one of the leading lights in quantum computing, credited with first elucidating the concept of quantum parallelism.

I think I'll have to study Wolfram's Principle of Computational Equivalence further and, if I figure it out, discuss it here in my next column. In any case, Wolfram demonstrates that CAs can embody more general rules than can the mathematical equations that make up most of science. And this means that they can go to places where science has not gone before. Put it this way: Despite the dazzling success of science in uncovering connections among the phenomena of nature and giving us better guesses about aspects of the future, the mystery of the succession of events remains. At one instant, everything that is, is exactly the way it is; in the next instant, everything is different. What is the rule that maps one instant onto the next? Isn't this the real job of science: to explain how today makes tomorrow? That's the challenge Wolfram lays out in A New Kind of Science.

The Quantum Thread

One huge gap in A New Kind of Science is its near-total neglect of quantum physics, especially quantum computation. Deutsch thinks that this alone wrecks Wolfram's thesis. Responding to a request from the Daily Telegraph for his first impressions of A New Kind of Science, Deutsch wrote: "I was disappointed that there is only the barest mention of quantum computation. If computation-based ideas really are going to play a fundamental role in physics, it will have to be through the quantum theory of computation...not the classical one that this book is based on."

Indeed, Richard Feynman, the previous-generation boy genius whom boy genius Wolfram encountered at Caltech, pointed out as early as 1981 that there are quantum phenomena, like the EPR effect, that simply cannot be emulated by a classical computer - only by a quantum computer. Deutsch again: "Hence my first impression is that the book's central thesis is false: I do not think that the sciences...will be revolutionised by reinterpreting nature in terms of simple computational rules rather than simple equations."

On the other hand, maybe Wolfram's approach can handle quantum phenomena. Edward Fredkin would probably think so, since he, like Wolfram, believes that the universe is discrete, not continuous; that it is fundamentally computational, and that the ultimate program of the universe is a cellular automaton. Fredkin calls his theory "Digital Mechanics" and he is convinced that it should be able to explain quantum phenomena like the EPR effect.


Wolfram's Computational Equivalence

by Michael Swaine

What do Stephen Wolfram, author of A New Kind of Science, Ed Fredkin, quantum computers, and BASIC have in common? They're all topics that Michael examines in this column.

In the last column, I wrote about Stephen Wolfram's big book, A New Kind of Science (Wolfram Media 2002; ISBN 1-57955-008-8) and concluded by saying that I would have to study Wolfram's Principle of Computational Equivalence further and, if I figured it out, discuss it here. I fear that what I meant was that I would figure out exactly what Wolfram was getting at with this Principle and what all the implications of this Principle are, and that I would explain it all succinctly and entertainingly here in 2700 words or less. If that's what I meant, I have to confess failure.

Here's what I can do: I can summarize what Wolfram says about computational equivalence and computational universality and the universe as computation and whether the universe is continuous or discrete, and provide some context on some of these issues. But as to how important or even how original Wolfram's Principle is, my ego is not big enough to think that I have the answers.

Besides, I have to devote some of this column space to a couple of other obsessions. In my continuing thread on quantum computing, I'll touch on computational universality as it pertains to the possibility of building quantum computers. And just in case you were wondering, I'll answer the question, whatever happened to Turbo Basic?

Wolfram's Big Idea

Wolfram spends 700 pages building up to this Principle of Computational Equivalence. He thinks it's hugely important, and isn't reticent about saying so. Here is Wolfram's Principle of Computational Equivalence, stated in what he says are the most general terms: "[A]lmost all processes that are not obviously simple can be viewed as computations of equivalent sophistication."

Several ideas are packed into this simple-sounding statement. First, there is the claim that all processes can be viewed as computations. Among DDJ readers, that claim is probably not terribly contentious. We are not put off by the notion that DNA performs computations, or that certain physical processes "compute" a fast Fourier transform. But Wolfram does mean all processes - analytical thought, perception, biological evolution, and the unfolding of the universe since the Big Bang, to name a few.

Second, the implicit message that this is a productive, probably the most productive, way of looking at things. That scientific inquiry ought to involve, in a much greater sense than it has in the past, the search for the program behind the process. Again, not obviously heretical. Science has generally meant a search for general laws rather than detailed programs, but most scientists today would probably agree with a statement along the lines of "Computer simulations of natural processes can be a useful tool for the advancement of scientific understanding." Most would probably not go along with Wolfram's view that science before Stephen Wolfram and his new kind of computation-centric science has just been picking the low-hanging fruit off the tree of knowledge.

Third, discreteness: Discrete and continuous models of the universe have battled for supremacy since before Plato's time, and Wolfram comes down on the side of the atomists. This definitely is controversial. He does supply examples of continuous programs in his book and he does take pains to show that discreteness is not a necessary feature of the cellular automata examples that he uses to make his points. But nearly all the cellular automata in the book are discrete. In these cellular automata, discrete cells in a discrete grid-like space nudge their neighbors into discrete changes of state in discrete time increments. Ultimately, Wolfram makes it clear that he doubts that continuous mathematical functions have any connection with physical reality, and suspects that space is not a continuum, but rather something discrete. Probably not a cellular automata-style grid, but maybe a network. And time: "There is nothing to say that on shorter scales, time is not discrete." Wolfram would find it convenient if the cosmic clockworks were shown to be digital. He dances around the discrete/continuous issue for a while, but eventually commits himself: "[M]y strong, strong suspicion is that at a fundamental level, absolutely every aspect of our universe will in the end turn out to be discrete." This is certainly not mainstream scientific thinking.

Fourth, Wolfram's criterion of "obviously simple" is a very low threshold. Absurdly simple cellular automata with trivial initial conditions are sometimes complex enough to clear the bar. It was confronting these apparently simple programs that generate apparently infinitely complex output that set Wolfram off on the 20-year research program that culminated in A New Kind of Science.

Fifth, the "equivalent sophistication" part. He's saying that, in some sense, there is no fundamental difference between a Rule-30 cellular automaton and a human mind, or between a hurricane and the entire universe.

They are all computations of equivalent sophistication. Not just in some sense, but in a very important sense: All such processes are computationally equivalent, meaning that they all have the same fundamental limits. The human mind is capable of the same set of computations as a dust cloud or a banana slug - and no more.

Some Implications

In A New Kind of Science, Wolfram describes the simple programs that caught his attention so dramatically two decades ago. He shows how they can produce output of unlimited complexity, that in fact, they can be universal computers. He shows how they can be made to mirror processes in nature, the form and structure of living things, turbulent fluids, economic behavior. He asserts that these processes are also effectively universal computers. And he asserts that all of these things are computationally equivalent - that there is really only one level of computational capability beyond the trivial. It's sort of a You Got It or You Don't Got It thing, and all these processes have got it. They are universal computers and, therefore, they are computationally equivalent. "[I]f one looks at a sequence of systems with progressively more complicated rules, one should expect that the overall behavior they produce will become more complex only until the threshold of universality is reached. [After that,] there should be no further fundamental change..."

One implication of his Principle of Computational Equivalence, which Wolfram draws explicitly, is that no system of any kind can ever carry out a computation that is more sophisticated than what can be carried out on a Turing machine or with a cellular automaton. That includes human brains and quantum computers.

On the other hand, the Principle of Computational Equivalence implies that any computation that can be performed by the most powerful computer can, in principle, be performed by any process that passes its low threshold of complexity. A cloud can compute a Quicksort - in principle. Another implication of the Principle of Computational Equivalence is that many of the unsolved problems of science and mathematics may remain unsolved - because they are computationally irreducible. Many of the cellular automata that Wolfram explores in loving detail in the book produce output that seems to be unpredictable by any means other than running the program and seeing what it does. Given the initial conditions and the rule governing the process, it is still impossible to short-cut the process to determine, say, the color of cell c at time t.
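
A small sketch may make the contrast vivid. For a "reducible" elementary rule such as rule 250, which simply grows a checkerboard from a single black cell, the color of cell c at time t can be written down directly; for rule 30 no comparable formula is known, and as far as anyone can tell you just have to run the program. The rule choices and helper names below are mine, not Wolfram's.

# One step of an elementary CA, with Wolfram's usual rule-number encoding.
def ca_step(row, rule):
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def cell_by_running(rule, c, t, width=201):
    """Brute force: start from a single black cell, iterate t steps, read cell c."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(t):
        row = ca_step(row, rule)
    return row[width // 2 + c]

def rule250_shortcut(c, t):
    """Closed form for rule 250's expanding checkerboard."""
    return 1 if abs(c) <= t and (c + t) % 2 == 0 else 0

# The shortcut agrees with brute force for rule 250...
assert all(rule250_shortcut(c, t) == cell_by_running(250, c, t)
           for t in range(20) for c in range(-t, t + 1))

# ...but for rule 30 the only known general method is to run it out to time t.
print(cell_by_running(30, 5, 40))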

In traditional science and mathematics, the length of the derivation is, at most, an aesthetic issue. But Wolfram's computational approach makes explicit the fact that there is always a race between the system used to predict behavior and the system whose behavior is being predicted. The tacit assumption of science has always been that our models, through their sophistication, can outperform the process we are trying to model. The Principle of Computational Equivalence says that, unless we are willing to model limited aspects of the target system, this is not the case - our models cannot be more sophisticated than the processes they are modeling. Not if they are computations of equivalent sophistication.

Fredkin's Digital Philosophy

It needs to be said that others have said many of these things. Ed Fredkin, for one.

Fredkin has been many things, including head of MIT's Project MAC. He is one of the world's experts on cellular automata, like Stephen Wolfram; is an atomist, like Stephen Wolfram; and has for many years entertained some of the same ideas that Wolfram spells out in his book. I can't go into all of them, but Mark Henschel pointed me to Fredkin's web site (http://www.digitalphilosophy.org/) where you can read many of his (very readable and interesting) papers. At that site, you will find many of the ideas discussed here, as well as many of the ideas in Wolfram's book not discussed here, such as the law of conservation of information. And you'll read this: "[Digital Philosophy] suggests that the Universe, with finite resources, is busy computing its future as fast as it can [and] that there is no way from within the DM Universe to, in general, predict exact future states sooner than the Universe will get to those states." Sound familiar?

Then there's David Deutsch.

Deutsch is the winner of the 1998 Paul Dirac Prize for theoretical physics and a researcher at the Centre for Quantum Computation at Oxford University. Deutsch has come up with a Principle, too, one that he thinks is as important as the laws of thermodynamics. Here's how he puts it: "There exists an abstract universal computer whose repertoire includes any computation that any physically possible object can perform." Deutsch's Principle is clearly related to Wolfram's, and the relationship is a little more obvious when you realize that Deutsch's principle is merely a translation into the realm of physical objects of the familiar Church-Turing thesis: "Every function which would naturally be regarded as computable can be computed by the universal Turing machine." [Alan Turing] I leave it to you to decide how original Wolfram's Principle is.

How Today Makes Tomorrow

One aspect of A New Kind of Science that I do find revolutionary is this idea of attacking head-on the question of how today makes tomorrow. Wolfram's little cellular automata chug along, generating a new generation of black and white cells at each time increment, and, for the most interesting ones, there is (Wolfram tells us) no shortcut to finding out the color of cell c at time t. The only way to find out is to run the program up to time t. Most of science as we know it, I take Wolfram to be saying, is about finding shortcuts. And most of the time, he says, there are none. You have to run the program.

Wolfram says that science as we have known it is successful because it is designed to ask only the kinds of questions that are easy for its methods to solve. When you ask, what is the relationship between this variable and that, you are not asking how some process actually works. You are asking if there is a shortcut to finding certain values. And in simple processes that unfold in symmetric or nested ways, there typically are shortcuts. So science investigates such simple processes and ignores the processes that generate most of the complexity in the universe. But true knowledge of the universe consists of understanding exactly how it works, and that means finding the program. Laws don't explain what the universe is doing; they just point out some interesting consequences of what it's doing. What it's doing is what it's doing - the program that it's executing. And in most cases, we don't know that. In most cases, Wolfram suggests, we never will. I find this viewpoint both inspiring and humbling.

Inspiring because it attempts to remove the magical action at a distance inherent in science as it is done today. Most scientific theories state a causal connection between two phenomena without specifying the steps that mediate that connection. Wolfram advocates a science that finds the actual program that nature is executing.

Humbling because he thinks that, in many cases, this will be impossible, because the program that nature is executing is like one of his cellular automata whose state at time t can only be discovered by running the program until time t. Having identified a vast realm of ignorance, Wolfram is saying that much of this realm lies forever outside the light cone of human knowledge.

Universal Quantum Computers

Universality is the threshold of seriousness in computer design: If it's not universal, it's just a glorified calculator. Wolfram has shown that the threshold is a lot easier to reach than von Neumann and other computer pioneers thought, but it's not quite trivial.

Early designs for quantum computers didn't rise to the threshold. It was a real breakthrough when Paul Benioff designed the first quantum computer in 1981, but Benioff's model was highly abstract and idealized. Richard Feynman's 1984 quantum computer design was worse than the first digital computers that had to be rewired for every computation. As Julian Brown points out in Minds, Machines, and the Multiverse: The Quest for the Quantum Computer (Simon & Schuster, 2000; ISBN 0-684-81481-1), you would have had to effectively reinvent Feynman's computer for every new problem.

Then, in 1985, David Deutsch (him again!) published the paper "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer." While Deutsch's plan for a universal quantum computer would not have been practical to build, it did advance the field in several ways. Feynman had pushed for quantum computers because he thought that classical computers could not adequately model quantum physics - and Deutsch's quantum computer satisfied Feynman's wish: It could act as a quantum simulator. Beyond that, it was a universal computer and could do anything that any computer could do. Deutsch achieved this in a familiar way: He made the machine model an abstract Turing machine, as Benioff had done earlier, more obscurely. Beyond this, Deutsch's quantum computer could take advantage of quantum parallelism. This meant, according to the thinking of quantum computer designers, that it would be able to do things that classical computers could not do.

It is worth noting that Stephen Wolfram expresses strong doubts that this ever will prove to be the case.

Nevertheless, Deutsch had demonstrated that a universal quantum computer was a real possibility. The challenge then became to come up with a practical design and build one. To be continued...

Whatever Happened To...?

Tom Hanlin at PowerBasic wrote to call my attention to PowerBasic. A visit to the PowerBasic site (http://www.powerbasic.com/) and some further research told me what happened to Borland's Turbo Basic. Turbo Basic was a follow-on product to Borland's hugely popular Turbo Pascal and a response to Microsoft's QuickBasic - which was a precursor to Visual Basic. Borland didn't commit to Turbo Basic for long, though. After introducing the product in 1985 or '86, Borland sold all rights within four years to developer Bob Zale, who promptly started a company and published the first version of the successor product, PowerBasic.

By 1993, PowerBasic 3.0 was capable of compiling to 386 opcodes and featured an inline assembler. PC Magazine gave it high honors. As recently as three years ago, PowerBasic was judged the fastest DOS-based Basic around. And to answer the obvious question: yes, even today there are still quite a few DOS-based Basics around. PowerBasic is one of them, but it also comes in Windows versions. A complete standalone .EXE Hello World program written in PowerBasic reportedly compiles to just 6144 bytes on disk, or 3208 bytes in memory. That's the Windows version. The DOS version doesn't appear to have been updated in years, but it's still for sale. Remember TSRs? PowerBasic lets you create one with just five lines of code. If you don't remember TSRs, they were small programs whose initials stood for terminate-and-stay-resident - and beyond that my memory is a little hazy.
