Quick introduction to Philosophy Tube before I begin: Philosophy Tube is a YouTube channel where a woman, Abigail Thorn, discusses various philosophical issues in a video-essay format, dressing in outfits that match the themes she talks about and cracking jokes every once in a while (she also cites all her sources – academic plus!). To give examples (and to have an excuse to list some of my favorites), she has covered how death changes your perspective, censorship in media, how the law is made up, and, most recently, whether Friedrich Nietzsche could be considered woke! (I fully recommend watching these – I won’t spoil anything, but they go places you would never expect!)
But we’re not talking about any of these – we’re talking about the video below, which essentially dares to answer the question: Can AI be ethical?
*Content Warning: Cussing and provocative outfits
In this blog post, I’m going to summarize what she discusses and share it with the class. It will be organized into the same sections as her video.
Level One – Does Not Compute
In this section, she discusses two proposed solutions to the ethical problems with AI, using the example of an AI at a tech company that ranked job applications from women lower than those from men.
The first solution is to ‘build it better’. In the example above, the AI had been trained on previously accepted job applications, and because tech is a male-dominated field, it learned to prefer male applicants. So, to fix this, why don’t we tell the AI that, if 30% of jobs in the field go to women, it should give 30% of jobs to women? But that would ruin the purpose of the AI – it’s meant to find the best job applicants, regardless of gender. What if we offer the AI more diverse data so that it doesn’t only hire men? But this diverse data wouldn’t be as accurate, which would, once again, undermine the AI’s purpose.
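To make the quota idea concrete, here’s a minimal sketch of my own (not code from the video – the function, names, and numbers are all hypothetical) of what ‘give 30% of jobs to women’ looks like when bolted onto a score-based ranker:

```python
# Toy illustration of a hard demographic quota on a score-based ranker.
# Everything here is hypothetical; it just makes the trade-off visible.

def select_with_quota(applicants, n_hires, women_share):
    """Pick n_hires applicants, forcing a fixed share to be women.

    applicants: list of (score, gender) tuples, higher score = better.
    """
    women = sorted((a for a in applicants if a[1] == "F"), reverse=True)
    men = sorted((a for a in applicants if a[1] == "M"), reverse=True)
    n_women = round(n_hires * women_share)
    # The quota overrides pure score ranking: we take the top women and
    # top men separately, even when a skipped man outscores a chosen
    # woman (or vice versa) -- exactly the tension the video points at.
    return women[:n_women] + men[: n_hires - n_women]

pool = [(0.91, "M"), (0.88, "F"), (0.85, "M"), (0.80, "M"),
        (0.77, "F"), (0.70, "M"), (0.66, "F"), (0.60, "M")]
print(select_with_quota(pool, n_hires=4, women_share=0.3))
# -> one woman and three men, no matter where the other women
#    sit in the score order
```

The quota fixes the headline number, but notice that the ranker is no longer just ‘finding the best applicants’ – which is her point.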
The second solution is to ‘police it better’. Because AI is unable to explain itself, trusting it is dangerous. To fix this problem, we would need the right to an explanation – the right to go up to someone and get an explanation for why the AI rejected your application, or for anything else it has done that affected your life. But if you went to the CEO of the company that rejected your application, what explanation would you even want? An explanation of how the code works? How would that help you? A counterfactual explanation could be useful – ‘if you had done this, you would’ve gotten the job’ – but that assumes the CEO, or anyone else you talk to, actually has enough knowledge to give you that explanation, and that the explanation holds in every single scenario, which is essentially impossible to test given AI’s ability to adapt. You could also become a victim of fairwashing, a concept put forward by scientists who built what Abigail calls the ‘racism machine’ (a machine that was intentionally biased) and then a surrogate model that gives a bullshit explanation to job applicants of color who didn’t get the job. You are told that your results are fair, and given good reason to believe so, but you are being fairwashed.
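To see why fairwashing works, here’s a toy sketch of my own (not the actual method from the paper she’s describing – the features and numbers are made up): a ‘black box’ that secretly penalizes one group, and a surrogate rule fitted only on an innocent-looking feature. Because the surrogate agrees with the black box most of the time, its tidy explanation sounds fair even though the system underneath isn’t:

```python
# Toy fairwashing demo: a biased "black box" plus a surrogate explainer
# that never sees the protected attribute. All numbers are made up.
import random

random.seed(0)

def black_box(applicant):
    # Intentionally biased: group B is penalized directly.
    score = applicant["experience"] * 0.1
    if applicant["group"] == "B":
        score -= 0.5
    return score > 0.4  # hired?

pool = [{"experience": random.randint(0, 10),
         "group": random.choice("AB")} for _ in range(1000)]
labels = [black_box(a) for a in pool]

# "Surrogate": the experience threshold that best mimics the black box,
# found WITHOUT ever looking at the protected attribute.
best_t = max(range(11), key=lambda t: sum(
    (a["experience"] >= t) == y for a, y in zip(pool, labels)))
agree = sum((a["experience"] >= best_t) == y
            for a, y in zip(pool, labels)) / len(pool)

print(f"Surrogate rule: hired iff experience >= {best_t}")
print(f"Agreement with black box: {agree:.0%}")
# The rule ("we only look at experience!") sounds perfectly fair and
# matches the black box's decisions most of the time -- yet the real
# system is discriminating. That gap is fairwashing.
```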
Level Two – Read Only
But fairwashing isn’t just a theoretical concept – it’s real, via the penis detection machine (yes, that’s what I said). To illustrate this, Abigail talks about her experiences as a trans woman at airports when she walks through full-body scanners (or the penis detection machine). Fun fact for cisgender people (including myself): when you walk through one, someone on the other side pushes either a blue or a pink button, and if there are ‘lumps’ on your body that don’t correspond to the button they pressed, you may have to have a humiliating conversation explaining why (she had to do this in front of her parents) and maybe even get assaulted for the sake of ‘checking’. ‘Interesting’, huh?
The scanners have cisgender biases because they were built by and for cisgender people, similar to how the racism machine had biases favoring white people. This is an example of digital epidermalization – how technology can impose an identity onto a person’s body from the outside. It’s like white people defining the term ‘black people’ and building society around that definition, when black people should be the only ones allowed to define what being a black person means. Facial recognition AI, when asked to pick out a specific gender in a crowd, has to decide which physical features correspond to each gender, even though gender can be considered more of a personal thing than a physical one.
Level Three – Data
To introduce this section, Abigail tells a story about someone who made and then posted non-consensual, AI-generated pornography of her, built from a specific video she had posted. All of the hard work done by the people who made that video – her, everyone in the crew, the people who made the costume she was wearing – was used to make pornography.
This is the data-flattening problem. We’ve talked in class about how ChatGPT is trained on data, and here she shows how problematic that can be: it is both violating and a theft of many people’s work. Many writers and artists feel incredibly insulted when AI takes their work and creates a soulless rip-off of it – and it threatens their jobs, too.
How can AI use data while also prioritizing consent over its usage? One proposal is to set up a data-owning democracy, in which everyone holds a copy of the data they produce and can decide what to do with it. Another is digital socialism, in which tech companies would be democratically controlled by their workers and the infrastructure they rely on – including data – would be owned by society.
But Sam Altman, CEO of OpenAI (the company behind ChatGPT), and others in his field generally don’t understand these problems, so they aren’t really being addressed in a professional capacity. (This is explained much better in the video!)
Level Four – A Ghost Haunting The Machine
Abigail dispels the idea that AI is an algorithm that simply exists in the air. The computers that generate AI content require lithium batteries, and lithium is a mineral that must be mined by miners who are severely underpaid for their work. She points out that the true cost of mining is generally more than the market value of the mineral it produces, but the business still works because the companies aren’t the ones paying the extra costs – the miners are, going to the hospital when they get sick, for example. The extra money that makes mining so costly never appears on the company’s books; it shows up as the miners’ ‘personal struggles’, which they just so happen to face after every job.

And it’s not only miners – there are also data labelers. Data labeling is a form of subemployment: a job that pays so little you might as well be unemployed. The only reason it’s treated as an acceptable job is that society has decided the people who do it don’t matter – whether because of their race, national origin, age, or whatever else.

Keyword time! Abigail suggests that ‘AI’ is not the best term for the concept – ‘Large-Scale Computing’ is better, because it captures how many resources are used, how much CO2 is emitted, and how many people are exploited.
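Going back to the mining point for a second, here are some toy numbers (entirely made up by me, not from the video) showing how a mine can look ‘profitable’ even when the mineral is worth less than what extracting it really costs:

```python
# Hypothetical per-unit numbers illustrating an externalized cost.
mineral_price = 100   # what the company earns per unit sold
company_costs = 70    # wages, equipment, shipping the company pays
worker_costs = 50     # medical bills etc. the miners absorb themselves

company_profit = mineral_price - company_costs   # +30: looks great
true_cost = company_costs + worker_costs         # 120: more than 100

print(f"Company books: +{company_profit} per unit")
print(f"True cost {true_cost} vs price {mineral_price}: "
      f"someone eats the missing {true_cost - mineral_price}")
```

The company’s books show a profit only because that last line of costs lands on the miners.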
But just because it’s ‘large-scale’ doesn’t mean it isn’t vulnerable. Large-Scale Computing is vulnerable to climate change: the video gives an example where a drought in the Panama Canal choked off the flow of resources the computers depend on. Workers are another pressure point – in response to Large-Scale Computing advances, people have gone on strike. She also says that the institutions that benefit from Large-Scale Computing’s negative impacts work hard to suppress workers’ rights, to reduce the risk of strikes.
Conclusions
“There is no ethical computation under Capitalism!”
If the goal of Large-Scale Computing is to make money rather than to be ethical, then it can never be ethical. Based on the argument she presented, I agree. Ethics must be the main focus: every piece of tech must be built so that it does not harm justice and fairness by any means.
My Thoughts
This part doesn’t come from the video; it’s just me giving my thoughts, so hello!
This video opened my mind in so many ways. Setting Large-Scale Computing aside for a moment, I never knew about the potential harassment transgender people have to fear every time they want to get on a plane. That’s absolutely insane to me. It’s even more insane to think about how little room transgender people have to defend themselves. On paper, the harasser would have a good reason for their actions – the person ‘might have a gun in their underpants’ – and if they were to refuse being searched…what a world of trouble to step into. Potentially more trouble than it’s worth, unfortunately.
The other main thing that stood out to me was the environmental problems with Large-Scale Computing, which I only really started learning about because of this class. As Abigail made clear to me in particular, AI is a real, tangible thing that harms the Earth, and the people who make it, through its very creation. Why isn’t this a more discussed topic? Actually, scratch that – the answer is obviously that the institutions involved don’t want this kind of information propagating. Better question: why don’t people know that AIs are physical computers? That seems like a silly question – computers are obviously physical objects – but it’s still a good one, isn’t it? It’s something a lot of people never consider: there is a large, physical computer, made with lithium, that heats up the Earth every time it is used. Perhaps this basic building block of understanding is what people are missing, the thing that would let them sympathize with the lithium miners who make AI possible. Maybe. I don’t know. I’m just throwing conjecture around.
I think it is another question altogether whether humans will ever be able to use AI ethically – and, further, what it even means to use AI ethically. I won’t touch on that here, since my source material didn’t, but I feel I should point out that her critique doesn’t, by itself, tell us how to make AI ethical.