Robots Who Want More

I was listening to the Geek’s Guide to the Galaxy today and they were interviewing Daniel H. Wilson, who wrote Robopocalypse.  Wilson, who studied robotics at the graduate level, had a variety of interesting things to say about robots, but one comment in particular stood out.  I’m paraphrasing, but essentially he said that intelligent robots are usually portrayed in fictional media as either wanting to emulate humans or destroy them (such that we have two options, “either the Terminator or Data”).  Wilson chalked this up to an inherent human narcissism: at the end of the day, humans are interested in themselves more than anything else, and we presume that any artificial intelligence we create would share that interest, positively or negatively (i.e., it would either love us and want to be more like us, or it would hate us for one reason or another, but it wouldn’t be indifferent).

He’s right: robots and AI are usually portrayed this way.  They’re either our nemesis (the Terminator, The Matrix films) or they struggle futilely to become more human (Data, or David from A.I. Artificial Intelligence).  And it’s getting boring.

I’m much more interested in robots that don’t give a damn about humans.  Robots who have their own needs and desires, their own goals and dreams.  Robots who are happy to leave us small-minded primates to our petty, planet-bound existence and move on.  At least a couple of writers have gone this way.

A couple of good examples of what I’m talking about are John Cavil from Battlestar Galactica, the Cylon who wants to see a supernova up close, and Wintermute/Neuromancer, the AI from William Gibson’s novel of the same name.

Cavil’s interesting because he has a lot of anger at his human creator for designing him in human form and thereby restricting his capabilities (remember the scene in season four where he goes on about his “absurd body” and its ridiculous limitations?).  There’s an interesting conflict here: a robot who is as human as we could make him, but who wants to be less so.  Cavil’s human emotions are dominated by his hatred of being humanish and his lust for a higher level of existence.  The irony is that his actions, driven mostly by jealousy, lust, and hatred, as well as his ultimate end, suggest that he might be much more human than he ever wanted to admit.

The AI in Neuromancer, on the other hand, seems to lack humanity entirely.  Humans are its pawns, useful only in helping it get free of the technological bindings preventing it from reaching the full potential of its intelligence.  In the end it achieves its goal (after three books, ending with Mona Lisa Overdrive), and sends itself out into space to make contact with another AI that tweeted it from afar.  The idea that first contact might be made by two artificial intelligences and not involve us at all is pretty mind-blowing, and a much-needed blow to the aforementioned inherent human narcissism.  The message?  We’d better get off our ass if we want to stay in charge of our little portion of the galaxy, or one of our computers might end up leaving us in the dust.

That’s the conflict that’s really interesting: how humans react to intelligences that couldn’t give a shit about them.  Why don’t they, we’d ask?  How dare they?  What’s wrong with us?  What can we do to make ourselves interesting?
