At the turn of the millennium, the Freedom CPU project was just starting to get widespread press, including in these pages (“Open Hardware,” Editor’s Notes, April 2000). The goal? Design and make freely available a 64-bit RISC processor for application-specific integrated circuits or other custom uses. It should emphasize user-level parallelism “derived through intelligent architecture rather than an advanced silicon process,” according to the project constitution, and be low-cost, “including monetary and thermal considerations.”
But while some similar open hardware projects have had small successes, the ambitions of F-CPU have proven difficult to attain. Software Development sought answers on what’s gone right and wrong thus far from Yann Guidon, a developer who lives outside Paris, enjoys computer science, electronics and music, and keeps the Freedom CPU dream alive. His fellow F-CPUer, University of Hanover student Michael Riepe, contributed some comments as well.
[Photo caption: Yann Guidon, a.k.a. whygee, holds the F-CPU banner outside the 2002 Libre Software Meeting in Bordeaux, France.]

Software Development: What is F-CPU?
F-CPU (don’t forget the dash!) is one of the most ambitious ideas of the free or open source hardware wave of the late ’90s. First, it’s a CPU “soft core” distributed under a copyleft license, currently the GNU GPL. No restriction may be placed on distributing the source code and the derived physical circuits, or on examining and modifying them for your own purposes. We use our copyright to keep the source free and evolving. This should enable “coopetition” in the highly proprietary microelectronics world.
F-CPU is a modern architecture, more ’90s than the usual ’80s-like CPUs designed by other open hardware teams. This has set the bar very high, and on top of that, we take the scalability, reliability and portability issues very seriously. This position is more GNU-ish than Linux-ish. It doesn’t attract new contributors, partly because implementing fancy ideas is not always that fancy.
Michael Riepe: Last time I looked, there weren’t any “open” 64-bit CPUs under active development.
What’s happened since 2000?
Guidon: Many things. I joined the Freedom CPU project in December 1998, and the F-CPU architecture was defined in mid-1999 with heated, endless mailing list discussions. The F-CPU features were agreed upon and have remained stable.
Then we had to “make it”—write the VHDL source code—and it became less sexy to most of the mailing list subscribers. It’s more interesting to debate the definition of a feature, such as an interface or a structure, than to solve a syntax or compilation-option problem. Most people were also software-oriented, and few could help with VHDL, silicon or real microprocessor fabrication—most gurus in this area are under NDA from their employers.
Are you near completion?
Far from it: Only a quarter of the units have been implemented, and several aren’t completely tested. This is a painful task, even for the most straightforward modules: All units must have an exhaustive test bench that works on all simulation and synthesis tools. It takes weeks (at 12 hours a day) to do this. Since most units depend on others’ signals, touching one unit implies re-engineering its neighborhood, which requires a deep understanding of the architecture.
The development started with the easily “parallelizable” parts: Most integer execution units (add/sub, multiply, shift, Boolean logic and so on) are available; students from [the French computer science institute] ISIMA are currently developing a floating-point unit. The next difficult step is to gather and homogenize these modules, then connect them and finally make them work together. And that’s only the data path—accessing the memory uses a sophisticated mechanism that isn’t yet well documented.
Can F-CPU fuel other efforts?
Other projects usually don’t care much about tool portability: They have a design tool, so they use it to lay the circuit out quickly, often bumping into small tool-specific details. Not only are these tools often difficult to get, but the tool maker can decide to change the distribution terms at any time, or simply drop the product. Most of them have time-based activation keys; the designs developed with them can thus end up worthless.
F-CPU is written in VHDL’87 with some rare features of VHDL’93. The minimum requirement is to run under Vanilla VHDL, a tiny simulator that is freely available. Cadence and Aldec have provided personal keys for their tools; at least five tools were tested successfully.
I hope similar projects can reuse the small bash scripts I have written to keep their code portable.
Do the new 64-bit chips out there help?
Indirectly: Most Debian packages already run nicely on Alpha and on 64-bit MIPS and SPARC systems. In short, F-CPU needs a very good GCC port.
However, the definition of the F-CPU architecture does not bind it to the 64-bit world. Well-written software will run on 64-bit implementations as well as on 128-, 256- or 512-bit ones … with the same binary! The definition says that there is no arbitrary limit to the data width: For the programmer, a “register” is “at least” 64 bits. The F-CPU architecture is inherently a SIMD, or “short vector,” machine if very wide registers are implemented. A simple hardware/software interface allows portable code to be scalable, so we’re ready for decades of evolution.
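That width-agnostic style can be sketched in ordinary C. In the fragment below, `register_bits` is a hypothetical run-time parameter standing in for the hardware’s actual register width (it is not a real F-CPU interface); the point is that one source, like one F-CPU binary, serves 64-, 128- or 256-bit registers unchanged.

```c
/* Sketch of the scalability idea in plain C: never hard-code the lane
   count. `register_bits` is a hypothetical stand-in for the register
   width a real F-CPU implementation would report. */
#include <stdint.h>
#include <stddef.h>

/* Wrapping byte-wise add of `src` into `dst`, processed one
   "register" worth of 8-bit lanes per pass of the outer loop. The
   same source works unchanged when register_bits grows to 128, 256... */
void vec_add_u8(uint8_t *dst, const uint8_t *src, size_t len,
                unsigned register_bits) {
    size_t lanes = register_bits / 8;      /* 8-bit lanes per register */
    size_t i = 0;
    for (; i + lanes <= len; i += lanes)   /* full registers */
        for (size_t j = 0; j < lanes; j++)
            dst[i + j] = (uint8_t)(dst[i + j] + src[i + j]);
    for (; i < len; i++)                   /* leftover tail bytes */
        dst[i] = (uint8_t)(dst[i] + src[i]);
}
```

A wider implementation simply consumes more lanes per iteration; nothing in the loop structure changes, which is the property Guidon is claiming for F-CPU binaries.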
Riepe: Not to forget Intel Itanium, AMD Opteron/Athlon64 and IBM Power4(+) CPUs. I’ve seen Linux running nicely on all of them. The 64-bit breakthrough on the desktop will probably come with AMD’s Athlon 64, which is able to execute native 32-bit (IA-32) code at full speed (as opposed to the Itanium, which uses rather slow built-in emulation hardware), in conjunction with an AMD64 version of Windows.
We can’t compete with Intel, AMD or IBM anyway—they use the latest-and-greatest 90nm SOI low-k copper you-name-it silicon technology, which will probably never be available—and affordable—for open hardware projects (at least until there is something even more sophisticated). We won’t be able to offer comparable speed.
That’s the difference between open hardware and open software: Linux and *BSD can be as good as proprietary operating systems, maybe even better, but the F-CPU project may always lag behind. But on the other hand, we offer the opportunity to learn, participate and share.
What have you learned?
Guidon: Many things, technically and socially. I have explored methods, gained skills and learned from group dynamics. There have been surprises and inventions, and I have met many interesting people; all of this is enriching. Not financially, though—but nobody ever said that freedom comes with a free lunch.
“Free hardware” is not software—it’s not like anything we knew before. The mindsets, the methods and the outcomes are different. Many tools and licenses are borrowed from the software world, but you can’t patch a silicon die and you can’t “execute” an etching mask. This field is still in its infancy.
The law is different, as well: Hardware patents are valid worldwide and can cause more harm than software patents, which are still being debated in Europe. F-CPU’s design is purposely unorthodox, not only to avoid the x86 binary-compatibility baggage that would make it uselessly complex, but also to be less prone to potential patent infringement.
Has the open source community grown?
The downturn certainly accelerated its evolution. The worldwide economic daydreaming of 1999-2000 was too good to be true, and now the gates of the geeks’ imagination are shut. I hope they will reopen slowly, wisely, one step at a time. It takes time to make something stable and successful, particularly if it’s groundbreaking. Look at GNU.
The open source hardware community has matured, leaving trolls and hype behind. Unfortunately, this slowed down many projects, and free tools are necessary for the development of other free tools. Working conditions are also much tougher. I’m currently among the unemployed crowd, and I spend most of my time developing small electronics projects for commercial purposes. However, I keep the faith.
Is the $150-per-chip goal still viable?
It’s not impossible; we just have to finish the source code and then find a few million dollars. Do the math: $150 x 10,000 chips = $1.5 million, and that’s really the minimum budget for this kind of undertaking. When a bug is found, another million must be spent on a new mask set. Multi Project Wafer (MPW) programs—Europractice, CMP or MOSIS—gather several projects together on a single mask to share these high costs.
How has the design evolved?
The project founders had built a draft around their fantasies and their experience as software writers. By the time they left, the group they had created included people with more clues. Ultimately, the proven RISC principles prevailed.
“FC0,” F-CPU’s Core #0, is a superpipelined “Out-Of-Order Completion” core that can issue one SIMD instruction per cycle. The very fine pipeline granularity ensures that the speed will scale up easily with new silicon processes.
It looks almost like a “normal RISC” core. Computational instructions are powerful (SIMD with many options), though not groundbreaking. Memory access and branch mechanisms are a bit unusual, but represent the best compromise among latency, simplicity, scalability and efficiency for this class of core.
Riepe: From the programming point of view, the F-CPU has some unique features that are not found in today’s 64-bit designs, whether open or not. There is no separate MMX/SSE execution unit as in Intel (or AMD) processors (Apple’s G4/G5 use a similar unit, called “velocity engine”). Almost every operation can be performed on as many operands as will fit into a register—eight 8-bit bytes, four 16-bit “words,” two 32-bit “double words” or one 64-bit “quad word,” as Intel programmers call them. Floating-point operations support the standard 32-bit “single” and 64-bit “double” formats.
Compared to the 128-bit SIMD units that are included in other processors, a 64-bit F-CPU may seem inferior at first sight. But later versions may have wider registers. With 256-bit-wide registers, an F-CPU could process twice as many operands as today’s SSE units. It still remains a 64-bit processor, however.
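The lane-wise arithmetic Riepe describes can be mimicked in plain C with the classic “SIMD within a register” trick. This is a sketch of the concept only, not F-CPU code: on F-CPU a single instruction would do this in hardware, on registers of any width.

```c
/* "SIMD within a register" sketch: add eight 8-bit lanes packed into
   one 64-bit word, with no carry leaking between lanes. Concept
   illustration only; F-CPU would provide this as one instruction. */
#include <stdint.h>

uint64_t add_8x8(uint64_t x, uint64_t y) {
    const uint64_t HI = 0x8080808080808080ULL;  /* top bit of each lane */
    /* Add the low 7 bits of every lane in one 64-bit add, then patch
       the top bits back in with XOR so no carry crosses a lane. */
    return ((x & ~HI) + (y & ~HI)) ^ ((x ^ y) & HI);
}
```

For example, adding 0xFF and 0x01 in one lane wraps to 0x00 without disturbing the neighboring lanes, exactly the per-lane behavior a packed-byte SIMD add gives.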
How do you simulate the chip?
We can examine signals and features with VHDL freeware as well as, from time to time, with proprietary tools. We can’t run “real world” software because simulation is far too slow, so some C emulators are being written. I also plan to acquire FPGA boards from different vendors, now that prices have become almost affordable.
Can open hardware succeed?
Several projects are already successful; for example, the 32-bit CPUs LEON and OpenRISC. Compared to F-CPU, the Opencores.org group is very successful, showing that it’s not an issue of open versus proprietary design.
Obviously, the F-CPU project has not yet delivered on its promises—but it’s only a matter of time and goodwill. We know that it’s possible, so it would be sad to not at least try.
Hardware, Copyrights and Patents
How can circuits be protected so that open hardware can succeed?
Circuits cannot be copylefted because they cannot be copyrighted. Definitions of circuits written in HDLs (hardware description languages) can be copylefted, but the copyleft covers only the expression of the definition, not the circuit itself. Likewise, a drawing or layout of a circuit can be copylefted, but this covers only the drawing or layout, not the circuit itself. This means that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition that produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed-circuit layouts may do some good nonetheless.
It is probably not possible to use patents for this purpose either. Patents do not work like copyrights, and they are very expensive to obtain.
Whether or not a hardware device’s internal design is free, it is absolutely vital for its interface specifications to be free. We can’t write free software to run the hardware without knowing how to operate it. (Selling a piece of hardware, and refusing to tell the customer how to use it, strikes me as unconscionable.) But that is another issue.
Richard Stallman is the founder of the Free Software Foundation. © 1999 Richard Stallman. Verbatim copying and redistribution of this entire article is permitted provided this notice is preserved.