Visions of ICOT
Since Cameron started this discussion on ICOT, what were they really trying to achieve, and what did they actually accomplish? What states of ah-wareness did they experience? Examining their original purpose and their findings, they must have experienced a lot of Ah naw and Ah hem Multicore Moments.

ICOT (the Institute for New Generation Computer Technology) was established by the Japanese government in 1982. The Institute was a joint venture of several computer companies and university researchers for the purpose of creating the 5th generation of hardware and software. This 5th generation of hardware and software was to be knowledge-based, parallel, and logical: a Parallel Inference Machine (PIM). Their gamble was that parallel inference technology would lay the groundwork for future computer systems. The goal was to integrate advances in VLSI (very large scale integration) and knowledge-base management systems with AI and human-computer interaction. This system would be capable of processing speech, text, and graphics, and of performing such processes as inferencing, learning, associating, etc.
This system was to be designed in three tiers. The first tier was to be a knowledge-base system that included parallel database management hardware with a 100 - 1000 GB storage capacity, capable of quickly retrieving the knowledge bases needed to answer individual questions. The second tier was to be the "problem-solving inference" tier, with hardware for parallel inference mechanisms capable of performing 100 M Logical Inferences per Second (a logical inference takes about 100 instructions on conventional machines) to 1 G LIPS, executed by 1000 processors. That's about 1 M LIPS per processor. At the time, conventional computers were only capable of 10 K LIPS. The third tier was to be an intelligent interface system that included a natural language interface, speech, graphics, and image processing, with special hardware for speech and signal processing. This system was to have up to a 100,000-word vocabulary (in Japanese, of course) and 2,000 grammatical rules, with an input of continuous speech from multiple speakers.
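The target arithmetic above is easy to sanity-check. Here is a quick back-of-envelope sketch (my own, purely illustrative) using the rough conversion factor quoted above of 100 conventional instructions per logical inference:

```python
# Sanity-checking ICOT's second-tier performance target.
# Assumption (from the text): one logical inference ~ 100 conventional instructions.

INSTRUCTIONS_PER_INFERENCE = 100   # rough conversion factor cited above
target_lips = 1e9                  # upper target: 1 G LIPS
processors = 1000                  # spread over 1000 processors

per_processor_lips = target_lips / processors
per_processor_mips = per_processor_lips * INSTRUCTIONS_PER_INFERENCE / 1e6

print(f"per-processor target: {per_processor_lips:.0e} LIPS")       # 1 M LIPS each
print(f"roughly {per_processor_mips:.0f} MIPS of conventional work per processor")
```

So each of the 1000 processors would have to sustain about 1 M LIPS, the equivalent of roughly 100 MIPS of conventional instruction throughput; ambitious for the early 1980s, when, as noted above, whole conventional machines managed about 10 K LIPS.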
This project was to be completed in 10 years, culminating in a complete and working prototype developed in the final three years. It seems that the Ah hems and Ah naws came early on. Those outside the project were skeptical about whether parallel machines were even possible (at the time) or whether research in AI was commercially meaningful. Those working inside the project doubted the approach itself. They decided to do all development in one language, KL1, their kernel language for the parallel system, based on Parlog (parallel Prolog), Concurrent Prolog, and FGHC (Flat Guarded Horn Clauses, developed in the beginning stages of the project). This was radical and seen as a possible "creative constraint" on the project. Another mistake, some thought, was the coupling of KL1 to their experimental hardware, which prevented other researchers from testing KL1 on conventional machines.
As the years went by, many conferences, papers, and preliminary research results emerged from the Institute. At the end of the project, the prototype was demonstrated and evaluated by researchers, computer scientists, and developers at the FGCS Conference in 1992.
As proof-of-concept hardware, they developed two parallel machines at different stages of the project: the Multi-PSI (Personal Sequential Inference Machine Version 2), which connected 64 PSIs (with a maximum of 256) in a mesh architecture, and, in the final stage, the PIM (Parallel Inference Machine), a loosely coupled network of tightly coupled clusters of processors. Both were capable of running PIMOS, their parallel operating system written in KL1.
While I was reading many of the papers and evaluations of the hardware from the 1992 conference proceedings, I found a mixed bag of Multicore Moments. Some viewed the project as a failure, with the prototype not meeting the desired level of performance. Utilizing 64 processing elements, the Multi-PSI was able to perform 5 M LIPS, and the PIM was able to perform 200 M LIPS on 512 processing elements; far short of the 100 M - 1 G LIPS goal using 1000 processors. These systems were outperformed by Unix workstations and by x86, HP, and DEC uniprocessors performing 100 - 200 MIPS. But keep in mind those were conventional machines. The PIM was also compared to, and outperformed by, the CM-5 from Thinking Machines, a descendant of the Connection Machine work begun at MIT and also an alternative to the traditional architecture. The CM-5 was a massively parallel machine of RISC processors that performed symbolic processing, originally intended for AI applications. Floating-point numeric co-processors (up to 1 TFLOPS of peak performance) were added to make the system more commercially viable for scientific computing, seen at the time as the only market where massive parallelism would be used.
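To put those 1992 figures in perspective, here is another small back-of-envelope sketch (again my own, using only the numbers quoted above) of the per-processing-element rates and how far each machine fell short of the 1 G LIPS upper goal:

```python
# Achieved vs. targeted inference rates, per the 1992 evaluation figures above.
machines = {
    # name: (total LIPS achieved, number of processing elements)
    "Multi-PSI": (5e6, 64),
    "PIM":       (200e6, 512),
}
target_total = 1e9  # upper goal: 1 G LIPS on 1000 processors

for name, (lips, pes) in machines.items():
    per_pe = lips / pes
    print(f"{name}: {lips / 1e6:.0f} M LIPS total, "
          f"~{per_pe / 1e3:.0f} K LIPS per PE, "
          f"{100 * lips / target_total:.1f}% of the 1 G LIPS goal")
```

Run this and you see the Multi-PSI managed roughly 78 K LIPS per processing element and the PIM roughly 391 K LIPS; respectable scaling, but well under the roughly 1 M LIPS per processor the original targets implied.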
I had a difficult time trying to locate the final report given by Kazuhiro Fuchi, the director of ICOT. When I finally located a link to his final report on the Internet, every click of my mouse led to a version in Japanese. This began to sow the seeds of a conspiracy in my mind, reminiscent of how I felt about the Warren Report ("Who shot JFK?"). What were they hiding? What did they not want the world to read in his report? Many of the reports by the other researchers involved were in English and viewed these systems as successful. But did Mr. Fuchi apologize for the many shortcomings of the systems? Or did he discuss an overwhelming success that would not be revealed internationally?
I finally found his report in English and read it word for word. At the beginning, he takes time to clarify their original goals and what they felt it was possible for them to accomplish, removing some goals and narrowing others. It was not their objective to develop commercial systems ready for market at the end of the 10-year adventure, or "... to solve in a mere ten years some of the most difficult problems in the field of Artificial Intelligence (AI) or create a machine translation system equipped with the same capabilities as humans." He stated:
"From the beginning, we envisaged that we would take logic programming and give it a role as a link that connects highly parallel machine architecture and problems concerning applications and software. Our mission was to find a programming language for Parallel Inference."
Ah hah, Parallel Inference! We definitely agree with some aspects of this approach to massive parallelism, and some connection to Web 2.0 or the Semantic Web is starting to become apparent.
This is his original diagram of the concept of the PIM and its development environment presented at the FGCS conference in 1981.
He urges the attendees to compare this diagram to the plan and results generated by the preparatory committee, which would be presented later in the conference. Then he made some comments about horses and camels whose point I am not sure I understood. I have not yet found that diagram, but when I do, I will show it to you.
Fuchi also spoke of a "maturation of the technologies" that may take another 10-20 years. From the time of the initial announcement to the end of the project, the ah-wareness of parallel computers went from Ah hem to Ah yes! Now, almost 30 years later, the question has shifted from what applications can utilize massively parallel computers to how to put many cores inexpensively on every desktop, and that question has been answered. Now, what tools or methods can be developed that effectively utilize these cores? Maybe the answer is Parallel Inferencing.
If you want to read more on ICOT: History and Goals of ICOT, and the ICOT Museum, where you can download software, papers, pictures, etc.