Course: Wireless Philosophy > Unit 5
Lesson 5: Are there hidden dangers in robots that look like us?
In this Wireless Philosophy video, Ryan Jenkins (professor of Philosophy at Cal Poly) examines the ethical problems raised by the use of anthropomorphic “framing” in the design of robots and other AI technologies. How might unconscious biases end up shaping these supposedly neutral designs, and what responsibilities do engineers have to ensure that their designs don’t perpetuate certain kinds of marginalization and injustice? Created by Gaurav Vazirani.
Video transcript
[upbeat music] Hi, I’m Ryan Jenkins, a philosophy
professor at Cal Poly in San Luis Obispo. It’s easy to think that technologies
are basically just neutral tools. But an overlooked aspect of
technology is the way that it can reflect and in turn, distort our
perceptions of the world. Sometimes, even when trying to be objective we can end up importing
our unconscious biases into the way we create “neutral” tools. And some of these ways
might not be so innocent because they might
reinforce social injustices. Designers sometimes use
what’s called “framing” when creating robots
or artificial assistants – that is, they design these technologies such that they imitate
human behavior or animal behaviors. Users feel more comfortable
interacting with robots and AI if they can understand
our “natural language,” if they have human-sounding voices
and human names, if they can make
facial expressions, and so on. Children enjoy playing with robots
that look like animals, like puppies. Could there be anything wrong
with this kind of framing? Let’s take a look at a few examples. Think of the voice assistants,
powered by artificial intelligence, that are common nowadays, on every smartphone,
smart speaker and in many new cars. Google, Apple, and Amazon
all offer such assistants, and all of them, Google’s Assistant,
Apple’s Siri and Amazon’s Alexa — are female by default. It took years for those companies
to even make a male voice an option. This is important because voice
is an increasingly common way we interact with our devices. But as these examples show, AI voice assistants
also provide an opportunity to reinforce the idea of women as servants, as if we’re calling upon our secretaries
in the typing pool back in the 1960s. We see this not justin voice assistants
but also in robots — whether they’re products
that help keep our house clean or robots on the big screen in movies. Female robots tend to be
overly sexualized and subservient, with names like Rosie,
Cherry, Lenore, and so on. In the early days of Siri, users who asked the assistant questions that were sexually explicit
were met with coy and cutesy responses like,
“I would blush, if I could.” Meanwhile, robots that have male names more often reflect a sense
of mastery, domination, or virility: Ulysses, Gigantor,
Optimus Prime, and so on. Consider another stark
example in robot design — not concerning gender, but race. In the 2009 movie
Transformers: Revenge of the Fallen, the pair of robots Skids
and Mudflap provide comic relief — but they also end up reinforcing some pretty awful stereotypes
about African Americans. They’re the only robots,
we’re told, that can’t read. They use tone-deaf “street slang.” One of them has a gold tooth. But then again, they’re just robots, right? We have to be careful
when designing these depictions. Even without the deliberate
framing that technologists can build into their products, human users have a tendency to “anthropomorphize”
the objects they interact with. They project human qualities onto them, like personalities,
feelings, beliefs, and so on. We name our boats and planes. We say that it’s “bad for”
our car to be driven without oil, in the same way that too much
sugar might be bad for a growing child. This is true even though,
of course, our artifacts are inanimate: they don’t have thoughts or feelings. Still, engineers must understand that the products they design
play into this process and can nurture these kinds of associations and invite anthropomorphic
projections from their users. What’s the point of all of this? In general,
we should try to avoid sustaining these kinds of harmful stereotypes that can marginalize
historically underrepresented groups, lead to denied opportunities
for education or employment and significantly harm their self-esteem. And rather than seeing their
end product as neutral or innocent,
to resist stereotypes, too. One of the best solutions to this
is to ensure the diversity of the people working on these projects. For example, it’s hard to imagine
that Siri’s “blushing” responses
wouldn’t have raised eyebrows among women engineers… that is, if there were any in the room. Ironically, though, the more
common these representations become, the more likely it is
that they will discourage women or minorities from entering STEM. This is a hard problem,
since subconscious biases are everywhere and especially persistent. The philosopher Immanuel Kant said that “from the crooked timber of humanity,
no straight thing was ever fashioned.” How persistent should we be
in trying to uncover and redress moral problems
in our allegedly “neutral” design decisions when we know that human bias
is ultimately inescapable? And after all, aren’t we just talking about
imaginary robots? What do you think?