Neda Atanasoski and Kalindi Vora, co-authors of the book “Surrogate Humanity: Race, Robots, and the Politics of Technological Futures,” drew connections between the history of colonialism and the social hierarchies of the rapidly developing technology-labor sector at an event hosted by the University’s Center for the Study of Race and Ethnicity in America.
The event was the center’s first panel in its new “Equilibrium Discussion Series,” which aims to “invite scholars whose work examines the intersections of race and STEM fields,” according to the event website.
“The starting point (for the book) was thinking about how we conceptualize political revolution when technology seems to be the space where people are now thinking about revolution,” Atanasoski said in an interview prior to the panel.
“We're thinking about this as a message to help people who think technology can be used for changing society instead of just reproducing what we already have,” Vora said in the same interview.
The lecture opened with a picture of the “Mine Kafon,” a wind-powered minesweeping device that tumbles across fields with active landmines, detonating them without risk to humans.
Vora then drew a sharp contrast with the Atlas Robot, designed by Boston Dynamics and funded by the U.S. military, which can independently navigate complex terrain, lift heavy objects and recover itself after falling.
In contrast to the Kafon, the Atlas “employs fantasies of human autonomy and command, together. These are values that we identify with the often violent history of globalism,” Vora said. These robots, she added, fulfill the “fantasy for (a perpetuation) of enslaved labor. They cannot rebel.”
Vora defined the goal of technological labor as the reproduction of “labor performed by … marginalized workers of the past.”
She then introduced Jibo, a discontinued robotic assistant that was mass-marketed as an emotive home companion able to order food, set reminders and perform other household tasks.
The robot’s “obedient physicality,” Vora argued, “brings it into the immediate context of the gendered and raced history of domesticity in the normative family form.”
“Mainstream social robots were designed to maintain a form of relating that preserved a dominating, autonomous, post-enlightenment subject as the only one that is recognizable as human,” Atanasoski said. She said that this design is rooted in Charles Darwin’s theory that white Europeans could control their emotions while people of color could not.
In other words, many robots’ emotional expressions were programmed with Darwin’s discriminatory ideologies at the forefront of the design, she said.
They ended the lecture with a discussion of two pieces of art. Kelly Dobson’s “Omo,” a green ball that expanded and contracted with the rhythm of its holder, challenged the idea that robots are “modeled to seem like a human, so they can be imagined as a human replacement,” Vora said.
The second piece, “Drone Selfies,” involved drones taking pictures of their own reflections in a mirror, giving viewers a chance to “see what drones would do without humans,” Atanasoski said.
A discussion followed with moderator Suresh Venkatasubramanian, professor of data science and computer science. The panel discussed ChatGPT, job obsolescence and the risks of large language models “inscribing a very minoritarian view of the world into what’s accessible.”
Audience questions ranged from the interaction of different socioeconomic groups with robots and the “uncanny valley” to human emotional attachment to robots and hierarchy among robots themselves.
Event attendees said the panel offered a compelling discussion of unfamiliar ideas. Alexander Jackson, a Providence community member, said that the talk “opened up a whole bunch of questions about the development of technology and how it plays a role socially.”
Corey Wood ’24 said that he had not “had the opportunity to hear about (this topic) as much in general settings or previous classes I’ve taken.”
Following the panel’s discussion of labor, Nico Gascon ’23 said that he could not “help but think about the TA labor organization in the context of the discussion on labor that we had, and the automation of certain labor and what necessary labor looks like.”
Though the talk centered on the relationship between robots and humans, the panelists could not agree on a single definition of the difference between the two. Atanasoski offered her closest approximation: “Becoming moral is the object of becoming human,” something she believes robots will not achieve any time soon.
Owen Dahlkamp is a section editor overseeing coverage for University News and Science and Research. Hailing from San Diego, CA, he is concentrating in Political Science and Cognitive Neuroscience with an interest in data analytics. In his free time, you can find him making spreadsheets at Dave’s Coffee.