Hey, Sophia! Robots Don’t Look Like Real People

A robot named Sophia generated lots of publicity recently. Made by Hanson Robotics, a Hong Kong-based firm, the animatronic device with a woman’s face and an Ex Machina-style see-through skull has been interviewed and feted all over the world. She appeared on Good Morning Britain, told The Tonight Show’s Jimmy Fallon she wanted to dominate the world, and was even granted citizenship by Saudi Arabia.

The tour has been a huge success for Sophia’s creators, especially on the thought-leadership front. Others may now take up their best ideas and, through conversation and action, blaze a trail for the next level of humanized robotics. At a time of constant change and innovation fatigue, that kind of thought leadership matters: it makes new ideas easier to understand.

But I think Sophia presents a problem. She’s the wrong vessel for the kind of conversations we should be having about robots. Her so-called interviews, which mostly follow prepared scripts, raise interesting debates about intelligence and personhood, but they also obscure other important issues.

Here are a few of the conversations I think Sophia should actually bring up:

 

Robots Don’t Look Like People

Alan Turing’s famous test for A.I. was whether a computer could convince a human that it, too, was human. The machine doesn’t have to look like one to do that, as evidenced by how hard it is to tell whether a Twitter account is run by a machine; there’s even a tool called Botometer to help spot them.

Robots already do a lot of work in factories and look nothing like people, usually because appearing human has nothing to do with the vast majority of tasks that need to get done. That includes replacing people in the workforce: even in the consumer products world, people seem happy to order hamburgers from smart billboards instead of from one another, and cars park themselves without an ersatz driver turning the wheel.

The truth is that robots will sound and act like humans long before they look like them, if looking human is even necessary for applications beyond the porn industry. We should be asking questions about handing that simulacrum of consciousness responsibility for decisions that affect our lives. The most pertinent one may be: should we trust them any more (or less) than we trust one another?

Coding Intelligence

An A.I. expert recently told me intelligence is defined by “the ability to achieve goals in a wide variety of environments.” So when Google DeepMind’s AlphaGo Zero recently beat the strongest Go player in existence, a former version of itself that had already defeated the human world champion, humans were knocked out of contention for good. The program had learned to win within a very defined and completely closed set of rules.

The real “aha!”, however, was that the program taught itself. Unlike earlier versions of AlphaGo, which studied libraries of human games, AlphaGo Zero started from random play and learned entirely by competing against itself, literally learning by doing. It played millions of games over the course of three days.

This type of approach is known as reinforcement learning: a system improves through trial, error, and feedback on the outcome. Related techniques let robots on a factory floor watch demonstrations of the movements that are needed and then practice them until they’re perfected. Such additive knowledge should yield ever-greater flexibility in applying it, and it’s conceivable that, at some point, it will yield an insight that goes beyond the sum of those parts.
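To make “learning by doing” concrete, here is a deliberately tiny sketch in Python: a shared value table that improves purely by playing tic-tac-toe against itself. Everything in it (the game, the constants, the Monte Carlo-style update at the end of each game) is my own illustrative assumption; AlphaGo Zero’s actual recipe pairs deep neural networks with Monte Carlo tree search, but the self-improvement loop is the same in spirit.

import random
from collections import defaultdict

ALPHA, EPSILON = 0.5, 0.1          # learning rate and exploration rate (assumed values)
Q = defaultdict(float)             # Q[(board, move)] -> learned value of that move

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def choose_move(board):
    """Epsilon-greedy: mostly pick the best-known move, occasionally explore."""
    options = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(options)
    return max(options, key=lambda m: Q[(board, m)])

def self_play_game():
    """Play one game against itself and update the shared value table."""
    board, player, history = ' ' * 9, 'X', []
    while True:
        move = choose_move(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result or not legal_moves(board):
            # Monte Carlo-style update: nudge every move made in this game
            # toward the final outcome (+1 win, -1 loss, 0 draw).
            for state, m, p in history:
                reward = 0.0 if result is None else (1.0 if p == result else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return result
        player = 'O' if player == 'X' else 'X'

if __name__ == '__main__':
    results = [self_play_game() for _ in range(50_000)]
    recent = results[-5_000:]
    print(f"Draw rate over the last 5,000 self-play games: {recent.count(None) / len(recent):.0%}")

Run it and the draw rate typically creeps upward as the table fills in, which is the tabletop version of an agent getting stronger with no teacher but itself.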

Programming Personhood 

The citizenship stunt was an insult to the millions of Saudi women who are denied many of the rights so casually handed to a machine. We don’t need a pretend robot to remind us that the question of consciousness has legal and moral implications.

But can things other than people have rights? There has been a slow but inexorable move toward protecting animals since the days when bear-baiting was a spectator sport, and there are court cases contesting the right to mistreat or kill them, even though their defenders must admit animals don’t possess our kind of consciousness.

Ultimately, who am I to judge whether a human friend feels things the same way I do, or whether I’m just seeing the appearance of it? So what happens if a robot has learned to recognize and respond to emotional cues? Is it fair to doom it to a life of endless, repetitive assembly-line movement because we’re comfortable with it that way?

I’m reminded of the A.I. in the movie Her, which learns enough to break up with its human companion and go off to spend time with other virtual minds.

Sophia isn’t the future, and it doesn’t represent much of the current reality of A.I., either. But what a glorious opportunity to talk about the issues that surround it.

And what necessary conversations they are.

What do you think?
