T.V. Raman is the author of Emacspeak and Auditory User
Interfaces: Toward the Speaking Computer. He is currently a
computer scientist at Adobe Systems. He can be reached at
[email protected].
The primary function of a computing application is to accept user
input, perform any necessary computations, and display the results.
Thus, the role of the user interface (UI) is to accept user input and
display results.
A user interface is a means to an end -- its role is to facilitate
man-machine communication. A truly well-designed UI is therefore one
that the user does not notice. However, today's computer interfaces
have become so complex that they often overshadow the computing task
that they were supposed to enable.
Alan
Perlis once said, "One man's constant is another man's variable."
Today's computer interfaces certainly reflect the truth of this
statement, especially when one tries to provide alternative interfaces
such as speech access to standard computing tasks.
Motivated by the needs of adaptive technology, attempts at
providing speech access have until now concentrated on retrofitting
speech interfaces onto a visual UI that was designed with no thought
to alternative modalities. These interfaces are cumbersome, and have
been tolerated primarily because users possess no alternatives. The
inadequacy of these screen-access interfaces points to the need for
separating the UI from the underlying computation engine and treating
speech as a first-class medium in the interface.
In a world solely dominated by Graphical User Interfaces (GUIs),
application logic has become irrevocably tangled up with the user
interface. Thus, even applications developed in modern
object-oriented languages like Java suffer from a distinct lack of
separation of concerns -- this is evinced by the raging debates over
the various Java user interface toolkits such as AWT, IFC and AFC. If
user interface and application logic were clearly separated, the
choice of UI toolkit would not in the least affect the rest of the
software application.
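The separation argued for above can be sketched in a few lines of Java. The names here (Presenter, Engine, and the two implementations) are hypothetical, chosen only for illustration: the computation lives in a class with no UI dependency, and each interface -- visual or spoken -- is just another implementation of a small presentation contract.

```java
// Sketch of separating computation from user interface.
// All names here are illustrative, not from any particular toolkit.

interface Presenter {
    // The only contract the computation layer relies on.
    void present(String result);
}

class VisualPresenter implements Presenter {
    public void present(String result) {
        System.out.println("[screen] " + result);
    }
}

class SpokenPresenter implements Presenter {
    public void present(String result) {
        // A real implementation would hand the string to a speech synthesizer.
        System.out.println("[speech] " + result);
    }
}

class Engine {
    // Pure computation: knows nothing about screens, keyboards, or speech.
    static int add(int a, int b) {
        return a + b;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Choosing a different UI leaves Engine untouched.
        Presenter ui = (args.length > 0 && args[0].equals("speech"))
                ? new SpokenPresenter()
                : new VisualPresenter();
        ui.present("2 + 3 = " + Engine.add(2, 3));
    }
}
```

Because Engine depends only on primitive types and Presenter depends only on a string, either side can evolve -- or be replaced by a new toolkit -- without the other noticing.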
Ubiquitous software needs to do more than just run anywhere; it
needs to be usable everywhere. The need to expose the user-level
functionality provided by these applications via a multiplicity of
appliances such as mobile phones and smart cards, as well as
traditional computers, offers software engineers a unique opportunity
to rethink application design.
The speech-enabling approach described in my book, Auditory User
Interfaces: Toward the Speaking Computer, benefits greatly from
the separation of the computational and user interface components of
software applications -- a concept familiar to researchers in the
field of programming methodology as Dijkstra's classical separation of
concerns. Application designers can implement desired functionality
in the computational component and have different user interfaces
expose the resulting power in the manner most suited to a given user
environment.
This separation of computation and user interaction is significant
in both software architecture and the evolution of tomorrow's personal
communication devices. The size and shape of today's smallest laptop
is determined by the user interface peripherals -- the keyboard and
visual display. But a computing device in itself needs neither of
these to perform its computations. As computers become more prevalent
in our day-to-day environment, the current design of computational
devices would force us to have several visual displays, numerous
keyboards, and a plethora of mice in our offices and living rooms.
Separating the computational component from the user interface
enables smart personal communication devices to share a common set of
peripherals. Thus, instead of every computing device coming with its
own display, intelligent communication devices of the future could
share a single high-quality flat-panel visual display that hangs on
our walls. Such an evolution would vastly speed up the convergence of
computing and telecommunications. Similarly, a computing device that
can recognize and synthesize speech could be designed to fit in a
pocket! Such dedicated communication devices could then be the ears
and voice of all personal appliances.
The benefits of separating the UI from the underlying computation
engine will first be felt in the field of adaptive technology. But in
the long term, those benefits extend to all users. History has proven
this over and over again -- how
many of us remember that the invention of the telephone was a
byproduct of attempts to invent a hearing aid?
These op/eds do not necessarily reflect the opinions of the author's
employer or of Dr. Dobb's Journal. If you have comments, questions,
or would like to contribute your own opinions, please contact us at
[email protected].
August 01, 1997
URL: http://www.drdobbs.com/user-interface-a-means-to-an-end/184410453