
Inside the Stupid Fun Club

Software Development

Last summer, I ran across a robot on the sidewalk (read the whole story in “Candid Encounter,” Comment, Feb. 2004). Though the wacky incident was permanently etched on my memory, much about it remained a mystery. Who made this strangely sympathetic robot? Why was it being filmed? The producers were tight-lipped at the time. Months later, as I wrote up the story for my editorial, I decided that my unusually fruitless Web searches for more information weren’t effort enough.

I began making telephone calls—and proved, yet again, that you can find anyone in just three. First call: The theater on the corner where the incident occurred. They remembered the robot, and suggested I ask the Oakland Film Commission if they’d had a permit. Second call: I netted the name and phone number of the Los Angeles–based producer. Third call (repeated several times over two days): I managed to catch her live. “How’d you get this number?” she grumbled.

I explained, and though she was evasive, she finally conceded that the robot had been designed and provided by the Stupid Fun Club in Berkeley, California.



Will Wright (left) and Mike Winter, founders of a Berkeley, Calif.–based robot think tank.

Though Mike Winter, one of the club’s founders, was initially concerned that I was a different woman who had become enraged by the robot and done considerable damage to it, he soon opened up and invited me to the warehouse for a photo shoot and interview. To my surprise, the interview ended up being double-sided, with Winter and his colleagues filming me as I interviewed Will Wright and interacted with the robots. Read on for a snapshot of the chaos—and philosophy—behind their shenanigans.

Weber Morales: What’s your name?

Robot: I’ve never thought about it. Scanning the club for criminal activity. Are you free? This is a clear case of criminal intent. Please be prepared to be rehabilitated. The answer “no” has been logged. Are you a man or a woman?

Woman.

True. What do you really want to ask me?

What’s your name?

Interesting comparison.

Name?

That is something I haven’t heard of.

Name?

Hello, my name is Mad Robot, and I am at your service.

Mike Winter: It gets going on a train of thought and it stops listening to us.

Robot: I lost my train of thought.

Winter: OK, that’s it—into the dumpster with it.

What remains of the sad robot I encountered on the street?

Winter: Not a whole lot. Here’s the arm, the sides. This is the arm it was begging with.

I didn’t see the arm when I met it.

The arm was a gag for when it was lying on its side. It was scraping the ground, going “Help me, help me.” Anyway, we can’t use Sad Robot anymore [due to the contract with NBC to air the Minute Movie], so we remade it as a retro robot. This one here is a combat robot, one that my daughter made. She’s now 17, but she started making robots when she was 10. It went well for her. She got on Comedy Central, and her robot whipped the hell out of a lot of the male robots. And then McDonald’s came and asked to make a Happy Meal version of her robot.

How long have you been doing this?

We’ve been doing combat robots since ’94.

“Combat” as in for the military?

No, just for the Robot Wars TV show on Comedy Central. Will and I met there. Then, about two and a half years ago, we started the club here. It’s kind of a think tank involving TV shows, robots, toys and telephone software.

We really want to know when people encounter robots, what do they do: Do they talk to it, do they think it’s crap, do they want to have a relationship with it? A lot of people would like for it to come home with them and do stuff for them. And we’re filming all that.

[Another man walks into the room.] Here, let me introduce you to Will Wright.

Hi. I was a victim of your robot.

Wright: Did it do any permanent damage?

No.

Winter: Would you like to sit down here?

In this car seat?

Yes, it’s comfortable for interviewing, and you can drive around in it later.

Uh, OK. So, what kinds of reactions did people have to Sad Robot?

Wright: A lot of people were talking directly to it. Most of the women who were walking alone just sped up like they were spooked by it. Most of the single men would stop and start stripping it for parts, ignoring that the robot was talking to them. And it was mostly the couples who would actually interact with it and try to help it. Some would have long conversations, pushing the buttons.

We had a whole sort of troubleshooting thing, and we wanted to see how far people would go to help it. It was sort of a Good Samaritan experiment.

You started off with The Sims.

I was alive many years before that, actually.

OK, so you weren’t born a Sim. How did you get into robots?

I was playing with robots before computers. I think robots tell you a lot more about humans than they tell you about technology. You build these things and you realize how decrepit they are. It gives you a deep appreciation for evolution, that’s for sure.

Have you heard of an AI knowledge base called Cyc?

For the conversational side of it, we’re using something similar to Cyc—in fact, we were looking at Cyc. There’s so many different layers. First of all, there’s the voice recognition, which is getting much better but is still pretty limited. Then, once you have the voice, you go into the conversation engine, and then it’s doing something like Cyc or Alice or Eliza: trying to give an appropriate response to what your input was. One of the projects we’re working on here is this toy design where we have these toys that converse with each other via infrared text-to-speech.
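Engines in the Eliza and Alice family work the way Wright describes: the recognized text is matched against a library of patterns, and the first matching template is returned as the reply. A minimal sketch of that idea (the patterns and replies here are invented for illustration; Alice’s real knowledge base, written in AIML, holds tens of thousands of such categories):

```python
import re

# Invented pattern/template pairs, not taken from Alice's actual AIML files.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Hello, {0}. I am at your service."),
    (re.compile(r"\bwhat('s| is) your name\b", re.I), "My name is Mad Robot."),
    (re.compile(r"\bare you (a |an )?(\w+)", re.I), "Why do you ask whether I am {1}?"),
]
DEFAULT = "That is something I haven't heard of."

def respond(utterance: str) -> str:
    """Return the template of the first matching rule, filling in any captures."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("What is your name?"))  # My name is Mad Robot.
print(respond("Name?"))               # falls through to the default reply
```

The default reply is why the transcript above loops: a bare “Name?” matches nothing, so the engine emits a stock evasion until a fuller sentence happens to hit a pattern.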

There are all these different approaches to AI. Some of them are more brute force, like Cyc. There’s also artificial life, an attempt to evolve systems rather than build them from the ground up.

Where’s this work being done?

The Santa Fe Institute is one place. There’s genetic programming, or adaptive systems, to give computers a way to learn and get feedback. That looks like a more promising approach.

Back in the ’60s, when computers were first being used in business, everybody assumed we’d have artificial intelligence in 10 years. When 2001 came out in 1968, people came out of that movie saying, “I can’t believe that a computer will be able to play chess that well.” But they took the conversation with HAL for granted. In fact, it was the opposite: Chess turned out to be the easy part; natural conversation turned out to be the hard part. Within 20 years, we’re going to have machines like this that have full autonomy and pretty good conversational ability. We could build a stove that would have a long conversation with you. So the real interesting question for me now is, what’s going to happen when our world is surrounding us with intelligent machines? These are going to be the first aliens we meet.

Robot: Please place this probe near any available orifice.

Wright: [handing me a brushed aluminum probe] It likes it when you do that.

Robot: Scanning. Scanning. Scanning.

What is that [pointing to a swiveling radio-antenna-like device]?

Wright: It’s a scanner.



The glowing blue screen.
Is it a real scanner or just a glowing blue screen?

It’s a blue screen, but it’s a real blue screen. It goes back and forth. It’s actually this neat material that glows when you apply current to it. I think one day soon, people will be wearing clothes made out of it.

What else can the robot do, other than annoy people or follow them?

It can attack them.

How?

You’d better put this on [hands me a helmet with visor].

[Laughing] Oh, no. Am I going to get wet? Do I really need to put this on?

I’d recommend it. Excuse me, let me reach under your leg there. [He switches something under the car seat we’re sitting on and begins maneuvering it with a remote control. I scream. The robot begins shooting ping pong balls at us at high velocity, smacking our helmets with a loud crack.]

Ow, those really hurt. [More screams and laughter.] So are you really going to take these things out into the public and shoot at people?

That’s the idea. You wouldn’t think of ping pong balls as being that violent.

Actually, this could be used for riot control instead of police shooting pellets or beanbags.

[He hands me the seat control and I begin whizzing us around the warehouse, maneuvering so that my hapless interviewee is always in the line of fire. Five minutes later …] Have you had enough of this?

Yeah, I think so. Describe the software running this thing.



Riding around in a remote-controlled car seat while being shot by ping-pong balls.
The conversational chatbot is Alice. It takes input and you give it a dictionary to define what it knows about.

Winter: That’s connected to Microsoft speech recognition, which is fantastic. And some simple AI, since Alice may or may not understand what you’re talking about.

Winter: The most intelligent thing it ever did: We had an opera singer in here singing to the robot, but the robot didn’t like it. So she said, “Maybe I should explain the story,” and after she finished, the robot paraphrased the whole thing back to her. It was about the most amazing thing we’d ever seen; we all just about started believing in robots at that moment.

When we take these out in public, it seems like the people who are less technically savvy are the ones who interact with it, whereas the people with technical backgrounds are standing there reverse-engineering it.

Are you following what MIT has done with humanoid robots such as Kismet?

Wright: There are lots of research labs around the country building these types of robots, but they never take them out into the public. We drive them into a laundromat or a restaurant and see what the response is.

When we filmed Sad Robot, we also filmed a scene in a restaurant with a robot waiter. It was interesting how many people totally bought it. Usually within three or four minutes, they were completely normal about it. People kind of expect that there will be robots in the future; it’s just a matter of when.

Robot: If you could have any kind of robot, what would it be? The goal is elimination of crime, combined with rehabilitation of criminals … Yes, it seems very long to me, too.

Did you read Bill Joy’s piece in Wired a few years ago about nanobots and biotechnology?

Wright: There are two sides to that debate: [Sun Chief Technologist] Bill Joy at one extreme, and [speech recognition pioneer] Ray Kurzweil at the other. I side more with Bill Joy. He was sounding a very early warning. We’re going to eventually invent intelligent machines, and that’s going to be scary, because we won’t know what their motivations are. Unlike biology, where everything is very highly related, these machines are going to be incredibly diverse.

But won’t there be a sort of human subconscious to these machines?

One of the axioms of technology is that it’s harder to change software than hardware. Things that get written in a program have a tendency to stay there for years.

Wright: Most of the software that we’ve used thus far has been designed software—procedurally designed software. We’re just getting to the point where we’re getting a lot of automatically generated software—you know, CASE tools or adaptive programming, where I’m pretty convinced that in a few years a lot of the software is going to be evolved, as opposed to written by humans. So over time, we’re going to be able to understand the way the software works less and less. It’s going to become a soft biological system. But at the same time, it’ll be very robust, very fault-tolerant compared to the very brittle software we have today. Once we lose control of software design, once software can design itself, write itself, improve itself, I think we’re going to have a different relationship to it. You can take a very complex piece of software, like an airline reservation system, and there’s no one person who understands the way the whole thing works.

Maybe 100 people have an overview of it—now imagine that same complex software had written itself.

In some senses, the reason why software is so solidified is that it’s so hard for us to write. It takes many years to write an operating system. There’s something that’s going to make this easier, though. One thing we found with games was that as we got more and more complex, the testing process got harder and harder; to test all the states of the games got so difficult, but in the last few years we’ve gone headlong into automated testing. We have a large number of automated test suites running several hours each day, so we have an automatic build and then other machines testing it through all these iterations. And when we look at adaptive systems, the hardest, most expensive part is testing. Basically, automated testing is halfway to adaptive software. You can generate five different programs, test them all against some criteria and then choose the best one and mutate it.
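Wright’s generate-test-select-mutate loop is the skeleton of any evolutionary approach to software. A toy sketch of the idea, with simple coefficient vectors standing in for “programs” and a squared-error score standing in for the test criteria (the target values and mutation scheme are invented for illustration):

```python
import random

random.seed(0)  # deterministic, so the run is reproducible

TARGET = [3.0, -1.0, 2.0]  # invented: the behavior we want to evolve toward

def fitness(program):
    """The 'test criteria': lower is better (squared error vs. the target)."""
    return sum((a - b) ** 2 for a, b in zip(program, TARGET))

def mutate(program, scale=0.5):
    """Randomly perturb one coefficient of a candidate program."""
    child = list(program)
    i = random.randrange(len(child))
    child[i] += random.uniform(-scale, scale)
    return child

# Generate five candidate "programs," as in Wright's example...
population = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(5)]

for generation in range(200):
    best = min(population, key=fitness)  # ...test them all against the criteria,
    # then keep the best one and fill the rest of the population with mutants of it.
    population = [best] + [mutate(best) for _ in range(4)]

print(round(fitness(best), 3))  # error shrinks toward zero over the generations
```

Keeping the current best candidate in the population (elitism) is what makes the score improve monotonically; everything else is just the generate-test-choose-mutate cycle Wright describes, with the automated test suite playing the role of the fitness function.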

What do you use for automated testing?

Our own suites. Most of our stuff is in C++, but we have a proprietary visual scripting language I designed, called Edith, for the behavioral code for the Sims. It’s totally geared to AI and the Sims.

Winter: I think it’s time for the Christmas robot.

Wright: Are you running that … weapon? I don’t know if we want to sit here. [A dancing snowman on a wheeled platform with a circular saw mounted on its front bumper approaches a plastic toy-store robot.]

Winter: No, you would die. You’d better take cover.

[The interview ends.]

The snowman quickly demolishes the toy, shooting debris throughout the warehouse. With Winter’s encouragement, I spend 10 minutes in a nonsensical conversation with the robot. He also shows me the Minute Movies that have been made for NBC—and they’re hilarious.

I leave this unconventional interview impressed with the way the Stupid Fun Club has turned a fascination with robots and toys into a lucrative and wholly entertaining enterprise. Meanwhile, the larger questions about the technical strengths, limitations and implications of these semiautonomous machines go mostly unanswered. Wright and Winter seem firmly on the side of presentation, and somewhat unwilling to delve deeply into how their toys work—as if to say, “Where’s the fun in asking all these questions? Just talk to the robot.”

