


Neurobiology Will Become "No-Brainer" Substitute for Software


Sam Venneri is senior vice president of Asynchrony Solutions, an innovative software technology firm focused on the agile delivery of systems integration, custom application development and secure collaboration solutions. He is the former CTO and CIO of NASA, where he focused on transforming advanced technology research into practical applications.


It used to be the stuff of fantasy, what Hollywood scriptwriters and producers made their careers out of. Computers and robots, all gaining self-awareness, able to "learn" from and adapt to their environment. No longer dumb machines capable of merely following explicit orders, they gain intelligence and can actually think for themselves.

The movies are replete with such images. HAL from 2001: A Space Odyssey. The machines from The Terminator. More recently, the human-looking beings from I, Robot. All of these machines became capable of making their own decisions without the input of their creators. Unfortunately, they all turned their newfound brainpower towards the purpose of destroying mankind. None succeeded, but the attempt was certainly frightening.

For many viewers of these cinema classics, one question arose in the back of their minds: Could this really happen? The answer: yes, but without the part about destroying mankind.

In fact, the creation of intelligent robots and computers with the power to learn and adapt to a changing environment is closer than many people realize. Research into this fascinating discipline has been ongoing for decades, and the development of an actual prototype is very much in the offing.

At the root of this once-unthinkable phenomenon is the dynamic transformation of the software industry.

Software History

If one looks at the trends in software starting in the early 1950s, this industry was out in front of the hardware community in terms of sophisticated conceptual development processes and technology. The software community, in fact, started what has been defined as the modern systems engineering approach long before the hardware community adopted it.

In the 1960s and 1970s, the software field progressed from machine and assembly language to high-level languages such as FORTRAN. Then, in the 1980s and 1990s, software became more of a driver of the devices of the day -- automobiles, dishwashers, and microwaves all were equipped with microprocessor controls (the automobile is, essentially, a distributed computing environment).

At this time, software became a critical issue in many organizations. The majority of team leaders and program managers at aerospace companies, automakers, and even NASA had come from the hardware environment; to these people, software was almost an afterthought. Consequently, in most industries, software projects were increasingly outsourced to specialists.

However, software ultimately became a problem, not only because of the nature of code development and validation but also because of the complexity of the programs necessary to drive the new, advanced devices. At NASA, we saw an abundance of errors in the lines of code we used. We were going from thousands to millions of lines of code. Plus, at the time, one individual would write single strings of code. Today, software in large, sophisticated systems isn't one continuous piece; it's written in sections by groups of programmers, with interfaces defined to transfer all critical parameters from one section of code to another.
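To make the idea of defined interfaces concrete, here is a minimal Python sketch -- purely illustrative, with hypothetical module names and parameters, not drawn from any NASA system -- of two independently written sections exchanging only their agreed-upon parameters:

from dataclasses import dataclass

@dataclass
class GuidanceState:
    # The agreed interface: only these parameters cross the boundary.
    altitude_m: float
    velocity_ms: float

def navigation_section(raw_altitude, raw_velocity):
    # Written by one team: publishes only what the contract declares.
    return GuidanceState(altitude_m=raw_altitude, velocity_ms=raw_velocity)

def control_section(state: GuidanceState):
    # Written by another team: consumes the contract, never the
    # other team's internals.
    return "brake" if state.velocity_ms < -5.0 else "cruise"

print(control_section(navigation_section(1200.0, -3.5)))   # -> cruise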

In the early 1990s, despite the introduction of tools like the Unified Modeling Language (UML) and integrated verification and validation techniques, errors in software programming became more frequent -- and more costly -- due to the growing complexity of the programs. A case in point was the space vehicles NASA was producing: the functionality and controllability of virtually all of the systems in these vehicles were becoming software-driven. The ramifications of faulty software were dramatically illustrated by the loss of NASA's Mars Polar Lander to software errors, an incident that made worldwide headlines.

The point was clear: whether the task is control theory, pointing telescopes, controlling automotive processes such as energy management and braking, or spacecraft operational management, complex software-driven systems have unanticipated consequences for which there is no adequate testing method.

Three-Sided Engineering

In the mid-1990s, NASA adopted a different approach. We looked at systems engineering as a sort of triangle: engineering on one side, a combination of information technology and nanotechnology on a second, and biology on the third, creating synergy among them all. In many ways, it was the beginning of a new era: going from Newtonian mechanics to quantum physics to the principles of neurobiology. By starting to think in these terms, we began to view software not as deterministic code but rather as a flexible and "learnable" asset.

If you look at how the mammalian brain processes data, it's rather slow, but it's massively parallel and it doesn't work on instruction-based rules. Plus, no two brains are alike, not even in identical twins.
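To see what processing without instruction-based rules can look like, consider a toy Hebbian unit in Python -- a sketch of the general principle only, not any system described here. Connections that are active together get reinforced, so behavior emerges from exposure rather than from a rule table:

import random

class HebbianUnit:
    def __init__(self, n_inputs, rate=0.1):
        # Small random starting weights -- like brains, no two copies alike.
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.rate = rate

    def respond(self, inputs):
        # A weighted sum over all inputs at once, not a rule lookup.
        return sum(w * x for w, x in zip(self.weights, inputs))

    def learn(self, inputs):
        # Hebb's rule: connections active alongside a strong response
        # are reinforced.
        out = self.respond(inputs)
        self.weights = [w + self.rate * out * x
                        for w, x in zip(self.weights, inputs)]

unit = HebbianUnit(3)
for _ in range(20):
    unit.learn([1.0, 0.0, 1.0])   # a repeatedly presented pattern
print(unit.weights)               # connections for the repeated pattern grew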

Ultimately, it is the environment, as well as the interaction within it, that determines how memory works, whether episodic or temporal. Back at NASA, we held a workshop in the late 1990s with respected neurobiologists to help us understand the advances in neural science and the limitations of artificial intelligence. We also invited experts in the biomedical field to aid in understanding the human brain and how the neurobiological principles behind this amazing organ could enable a revolutionary approach to embedded software systems.

Our excitement grew as we began to imagine the possibilities. Take the example of an unmanned aerial vehicle (UAV). Instead of writing software code to control the various onboard processes, you have something you can train, something that you can port into another UAV once it becomes functional. Rather than being software-driven, you'd have a device that is controlled by an almost intelligent platform. It doesn't have actual emotion, but you can train it, for instance, to "feel" fear by teaching it to avoid certain hazards or dangerous conditions when those situations present themselves. This does not involve reaction to instinct, but it does constitute a first-level emotional response -- all without software programming or language.
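A deliberately simple sketch of that kind of trained avoidance follows -- the names and the reinforcement scheme are assumptions for illustration, standing in for whatever learning substrate an actual platform would use:

import random

actions = ["approach", "avoid"]
preference = {a: 0.0 for a in actions}   # learned, never hand-coded

def outcome(action):
    # The hazard punishes approach; staying clear is mildly rewarded.
    return -1.0 if action == "approach" else 0.1

for trial in range(200):
    # Noisy choice: mostly follow the stronger learned preference.
    action = max(actions, key=lambda a: preference[a] + random.gauss(0, 0.3))
    # Nudge the preference toward the experienced outcome.
    preference[action] += 0.1 * (outcome(action) - preference[action])

print(preference)   # "avoid" dominates: a trained, not programmed, response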

At NASA, we conducted a number of experiments with robots. If one of the robots lost a sensor, it would ignore the loss and move to the remaining sensor sets that were still functional, whereas in a deterministic software system the device might go into an endless loop or simply shut down. An analogy is when a person loses one of his or her senses -- e.g., sight -- and the remaining senses compensate for the loss. This highlights the robust redundancy characteristics and the ability to integrate multiple sensors; it is similar to fuzzy logic but doesn't use rule-based or predetermined processes. The robot utilizes environmental interaction with the ability to learn, anticipate, and take actions based on previously stored memories. It has what neurologists call "plasticity": the ability to form and reinforce connections based on previous training. The bottom line? The machine's performance models that of the mammalian brain.
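The graceful degradation itself is easy to sketch. The following toy fusion routine -- illustrative only, not the robots' actual code -- shows the contrast with a brittle deterministic pipeline: it re-forms its estimate around whatever sensors remain rather than failing on the missing one:

def fuse(readings):
    # readings: sensor name -> value, or None when the sensor has failed.
    live = {name: v for name, v in readings.items() if v is not None}
    if not live:
        raise RuntimeError("no functional sensors remain")
    # The estimate re-forms around the surviving senses instead of the
    # system looping or shutting down on the failure.
    return sum(live.values()) / len(live)

print(fuse({"lidar": 4.9, "sonar": 5.2, "camera": None}))   # -> 5.05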

Actually, what we are talking about is not even software. It is, at its core, an entirely new engineering discipline, using neurobiological principles as its foundation. A number of academic institutions are advancing this science, including George Mason University, the University of California at Berkeley, and Rutgers University. Other schools are also starting to think in terms of formally integrating biology into computer science disciplines. There are even people earning PhDs in this field. And the National Science Foundation is exploring the idea of coordinating university activities in this area. There is no doubt that a groundswell of support for this discipline is in its nascent stages.

The truth is, programmers cannot keep writing millions of lines of code and expect reliability. The programs are getting too complex, which results in mistakes that simply cannot be caught. All of this represents a "change state" that started when neural nets were first applied -- and it will continue unabated. In fact, it's already here in some forms, as evidenced by the fuzzy logic currently incorporated into the Japanese bullet trains. Further, the European and Asian markets are already well on their way to making substantial human and financial investments in this area -- even more so than the United States, where pockets of resistance still remain.
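For readers unfamiliar with fuzzy control, the essential idea is graded membership in place of crisp thresholds. A toy braking controller -- an assumption-laden sketch, not the trains' actual logic -- shows the flavor:

def mu_fast(speed_kmh):
    # Degree (0..1) to which the train counts as "fast".
    return min(max((speed_kmh - 60.0) / 40.0, 0.0), 1.0)

def mu_near(dist_m):
    # Degree (0..1) to which the next stop counts as "near".
    return min(max((500.0 - dist_m) / 400.0, 0.0), 1.0)

def brake_level(speed_kmh, dist_m):
    # Rule: IF fast AND near THEN brake; AND is min(), so the output
    # blends smoothly instead of snapping between if/else branches.
    return min(mu_fast(speed_kmh), mu_near(dist_m))

print(brake_level(90.0, 200.0))   # -> 0.75: firm but not full braking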

A Question of Ethics

It's important to note that the ethics of this discipline are not being ignored. The second we started talking about neurobiological principles at NASA, we brought in experts from various related fields to examine the moral concerns that could potentially be raised. Without question, when neurobiological topics are discussed, there is inevitable worry -- and understandably so -- from a segment of the public that wonders whether we should even be venturing into this realm.

To highlight this concern, one need look no further than the case years ago when the U.S. Department of Agriculture began to promote genetically engineered crops, prompting a public outcry over "mutant tomatoes." And of course, cloning remains an emotional, hot-button practice that promises significant medical breakthroughs but raises legitimate ethical conundrums.

Further, experimentation involving animals, even the lowest invertebrate life forms, stirs highly charged and visceral reactions. Witness the outcry when university researchers years ago built a computer from rat neuron cells to control an aircraft in Microsoft Flight Simulator. Consequently, nothing we had been doing at NASA involved any of these approaches; all work in the area of neurobiology centered on embedding neurobiological principles in electronics -- as opposed to the "wet" or molecular computing that has stirred so much controversy.

Despite the grim prospects for the software industry, shed no tears for its eventual demise. No other industry could provide a product with such a plethora of bugs, errors and malfunctions and still be considered a viable market. (Most people working in science use Linux systems because of their higher degree of reliability.)

Industry Applications

A handful of forward-thinking companies, including Asynchrony Solutions, have been investigating ways that neurobiology can be applied to practical applications; the possibilities are virtually endless. Take healthcare, for example. Our engineers are researching different ways of displaying data, so that doctors will be able to carry handheld mobile computing devices that allow them to get in contact with anyone anywhere -- much like an iPhone. In the defense industry, diverse information from many sources can be brought into the real-time battlefield environment in a multi-modal form that utilizes all the senses of a human operator, ultimately allowing commanders to make split-second tactical and strategic decisions.

Ultimately, the adoption of neurobiology into engineering will help us to open up what a knowledge repository really is supposed to be -- including low-cost, wearable computing visualization capabilities. Our work in this area is still at the proof of principle level, but within a year or two, we're confident that some of these actual devices will be available for use.

When you start talking about intelligent, brain-based neurobiological principles, you open up a whole new venue in terms of what embedded computing hardware solutions can become. You can really start to think about intelligent learning capabilities that go well beyond artificial intelligence and deterministic rule-based systems. In the end, this represents a major change in what software will become over the next decade.

Remember: think computers and autonomous robots that have the ability to learn and adapt to their changing environment. This is not a movie -- this is the future.

