Advanced Robotics within Oppressive Systems

Context

Advanced Robotics

On December 29th, 2020, Boston Dynamics wished the world a happy new year by releasing a video of its various robots dancing to "Do You Love Me" by The Contours:

With my jaw dropped, I replayed that video at least five times over, sharing it with friends and debating whether it was motion-captured animation or real life; the movements of the robots were smooth, graceful, natural, and human. Ever since Boston Dynamics first released its video documenting BigDog over a decade ago (2010), followed by parody videos (seedwellcomedy, 2011), I - along with many others - have been fascinated and inspired.

Boston Dynamics’ BigDog overview.

A parody of Boston Dynamics’ BigDog.

A compilation of fails from DARPA’s Robotics Challenge.

Stuff like this is what motivated me to participate in the FIRST Robotics Competition (2015) in high school and the NASA Robotic Mining Competition (2017) in college. It further inspired me to pursue a more hardware-balanced Computer Engineering degree for my undergraduate studies.

The technology that organizations like Boston Dynamics push forward absolutely towers over my own experience with robotics, and even over other cutting-edge endeavors, such as those of the teams in the DARPA Robotics Challenge.

Early in my degree, I was taught the vast difference between how far we’ve advanced in computing technologies, such as artificial intelligence, and the relatively dismal state of robotics clearly shown by the DARPA fails video above. Nevertheless, both are computer-driven; the new deep learning architectures that have enabled AIs to beat humans at a variety of tasks (AlphaGo) feed back into robotics (Pierson & Gashler, 2017), enabling organizations like Boston Dynamics to excel.

Within Oppressive Systems

Despite the bright start to the year, Boston Dynamics - with a similar degree of virality - faced backlash for providing its technology to the New York Police Department (NYPD), which deployed it as the Digidog.

This was a stark contrast to the past year, filled with Black Lives Matter protests against policing. A quote from the introduction of Critical Race Theory for HCI (Ogbonnaya-Ogburu et al., 2020) clearly summarizes the racism enforced by the police:

“Race-based disparities in the United States are firmly established (figures from 2016-2018 data): Median incomes for Blacks, Hispanics, and Native Americans are about 65% of White income [74, 135]. The median Black and Latinx families own 2% and 4% the median wealth of White families, respectively [27]. Rates of college degrees among adults 25 years and older are 54% for Asians, 36% for Whites, 22.5% for Blacks, and 15.5% for Hispanics [108]. Blacks are six times more likely to be in prison than Whites, Hispanics are three times more likely [21, 57]. There is no end to such statistics.

These inequities have historical roots embedded deeply in our legal and economic institutions. Slavery in America goes back to 1619 [122], a century and a half before the country’s founding. The country’s constitution, ratified in 1788, counted slaves as three-fifths of a person and denied them a vote [28], a situation that continued until the 14th Amendment, passed in 1868. After slavery was abolished, legally enforced racism continued with oppressive Jim Crow laws [73], voting laws [50], and redlining [12].”

Are Prisons Obsolete? by Angela Davis (2003), among countless other essays and books, shows how policing in America was fundamentally founded upon protecting capital and perpetuating slavery and oppression, continuing to this day.

Recent tech worker petition subjects.


A (not yet published) study I did last quarter in Computer Ethics with Amal Nanavati, analyzing tech worker petitions, showed that anti-racism was the third-most popular topic, with engineers from large companies like Google and Microsoft speaking out against their involvement in militarized police inflicting racial trauma on Black and Indigenous people of color. The second-most popular topic was immigration, criticizing CBP and ICE (Customs and Border Protection and Immigration and Customs Enforcement) for inflicting trauma on undocumented immigrants. For instance, one open letter in our dataset, signed by nearly 2,500 researchers and titled Abolish the #TechToPrisonPipeline, spoke out against crime prediction technologies.



My take

All this brings up the question:
How complicit are we engineers in all this?


I recognize the arguments in Data Science as Political Action: Grounding Data Science in a Politics of Justice by Ben Green (2018), which addresses claims such as "I'm just an engineer," "Our job isn't to take political stances," and "We should not let the perfect be the enemy of the good," arguing that data scientists should recognize themselves as political actors. The general point is that works produced from science and engineering don't exist in some ideal, value-free vacuum; research of any kind is inherently political in several ways, and exists in the context of biased, oppressive systems. Robotics is not exempt from this.

I recognize the indubitable fact that all policing intentionally harms Black and Indigenous people of color, as well as other minorities. I recognize the fact that all policing prioritizes upholding power structures over protecting people.

I recognize that responsibility is a convoluted thing. With extremely interconnected systems, such as those depicted in Anatomy of an AI System, how much are we responsible for the inhumane cruelties which may lie outside our narrow purview? How much is a roboticist who designs something like a neural network-powered control algorithm for smooth manipulator movement responsible when it is used to shoot minorities, even though it may also be used in surgical robots? How much of this infinitely complex cause-and-effect is captured by our human intuitions, and what levels of involvement do we find acceptable?

Anatomy of an AI system


I recognize the tension between obligating ourselves to minimize suffering (as argued by Peter Singer in Famine, Affluence, and Morality (1972)) and preserving values like justice, equity, and fairness (Gabriel, 2016). I further recognize the tension between being morally obligated to preserve all those things and focusing on meaningfully (Wolf, 1997) living a human life which is positively and healthily selfish. Robotics incites passion within a lot of people, including me.

I recognize that given all this, it would be easy to label everything as political and unethical, which is valid.
I recognize that an organization of people, be it a gang or a company, is like a bucket of mixed paint, with each color being an individual’s intentions and values. I recognize that in some cases, which don’t include policing, an individual can effectively utilize the group’s power to amplify their own intentions and values while condemning some of the group’s actions - this is what the plethora of tech petitions is for. I also recognize that this may be futile. For instance, organized actions by Tableau employees speaking out against involvement with CBP and ICE last year didn’t prevent job postings for that exact application from appearing a couple of months ago.

I recognize that all this is really complex. While there are some clear lines, such as not developing weapons technologies, there are infinitely more gray areas, which means there are a lot of places for tech bros to play devil’s advocate. Neither debating this complexity nor dismissing the question entirely helps the children in the near future who will be harmed by oppressive robots.

Ultimately, my answer is that developing robotics for these purposes - policing, military, and weaponization - is most definitely unethical. Some may argue that using robots in place of police officers or soldiers only saves lives. This is directly analogous to arguing for the development of weaponized drones to save pilots’ lives; given the resulting damage and which side faces the deaths, this argument doesn’t appeal to me. So what, then, of the roboticists who develop the “neutral” components which make up these robots; who work for organizations such as Boston Dynamics with good intentions, such as building robots that assist the disabled, dance, and perform surgery? Could they ethically work for such an organization? I would say that it depends on the degree of separation from the harm done, how that component is used, and whom the roboticists’ total efforts end up serving.

References

Abolish the #TechToPrisonPipeline | by Coalition for Critical Technology | Medium. (n.d.). Retrieved May 30, 2021, from https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16

AlphaGo Zero: Starting from scratch | DeepMind. (n.d.). Retrieved May 30, 2021, from https://deepmind.com/blog/article/alphago-zero-starting-scratch

Amal Nanavati – Learner, Activist, Creator. (n.d.). Retrieved May 30, 2021, from https://amalnanavati.com/

Anatomy of an AI System. (n.d.). Anatomy of an AI System. Retrieved May 30, 2021, from http://www.anatomyof.ai

Boston Dynamics. (2010, April 22). BigDog Overview (Updated March 2010). https://www.youtube.com/watch?v=cNZPRsrwumQ

Boston Dynamics. (2020, December 29). Do You Love Me? https://www.youtube.com/watch?v=fn3KWM1kuAw

Collective Action in Tech. (n.d.). Retrieved May 30, 2021, from https://collectiveaction.tech/

Ogbonnaya-Ogburu, I. F., Smith, A. D. R., To, A., & Toyama, K. (2020). Critical Race Theory for HCI. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/abs/10.1145/3313831.3376392

DARPA Robotics Challenge. (n.d.). Retrieved May 30, 2021, from https://www.darpa.mil/program/darpa-robotics-challenge

Davis, A. Y. (2003). Are prisons obsolete? Seven Stories Press.

FIRST Robotics Competition. (2015, May 8). FIRST. https://www.firstinspires.org/robotics/frc

Gabriel, I. (2016). Effective Altruism and its Critics. Journal of Applied Philosophy, 34. https://doi.org/10.1111/japp.12176

Green, B. (2018). Data Science as Political Action: Grounding Data Science in a Politics of Justice. https://arxiv.org/abs/1811.03435v3

IEEE Spectrum. (2015, June 6). A Compilation of Robots Falling Down at the DARPA Robotics Challenge. https://www.youtube.com/watch?v=g0TaYhjpOfo

NASA RMC 2017—Anand Sekar. (n.d.). Retrieved May 30, 2021, from http://www.anandsekar.com/portfolio/rmc2017

NYPD robot dog backlash—The New York Times. (n.d.). Retrieved May 30, 2021, from https://www.nytimes.com/2021/04/28/nyregion/nypd-robot-dog-backlash.html

NYPD uses robot dog during police operation—YouTube. (n.d.). Retrieved May 30, 2021, from https://www.youtube.com/watch?v=24jufNhuUSI

Pierson, H. A., & Gashler, M. S. (2017). Deep Learning in Robotics: A Review of Recent Research. https://arxiv.org/abs/1707.07217v1

seedwellcomedy. (2011, May 18). BigDog Beta (High Quality)—Early Big Dog quadruped robot testing. https://www.youtube.com/watch?v=mXI4WWhPn-U

Singer, P. (1972). Famine, Affluence, and Morality. Philosophy and Public Affairs, 1(3), 229–243.

Tableau Employee Ethics Alliance – Medium. (n.d.). Retrieved May 30, 2021, from https://medium.com/@TableauEmpEthicsAlliance

Wolf, S. (1997). Meaning and Morality. Proceedings of the Aristotelian Society, 97, 299–315.

Anand Sekar