The research question I’m engaging with in this essay remains “Why is AI being pushed so hard?” My proposed structure for answering it goes as follows: The first section provides the ostensible reason AI is being pushed so hard, namely efficiency and better work, and demonstrates the insidious ways AI is pushed even when the hype levels seem low, serving as a transition into the next section. The second section explains the manufactured discourse around building trustworthy AI. The third section explains how technological progress has been used as an excuse to abuse or otherwise devalue laborers, with an example or two. Sections two and three may swap order if needed. The fourth section is the obvious synthesis of sections two and three: more trustworthy AI (or the perception that more trustworthy AI is inevitable) enables the interests of shareholders/CEOs/capitalists to abuse and devalue laborers/consumers/the public, and the conflation of interests between the two groups serves this end. The fifth section is the thesis of the paper: something akin to “building local systems of communication and creation as an antidote to the commodification of the self.” Or, in other words: Writing Art with Your Friends to Save the World. This is the section that needs the most workshopping.
Richard Waters’ “Investors Must Ask If OPENAI’s Valuation Marks the Peak of AI Mania: INSIDE BUSINESS TECHNOLOGY,” from The Financial Times, will serve as the aforementioned transition between sections one and two, where I point out the quiet and insidious way that pushing for AI is present on all sides of the AI hype/trust discourse, as preparation for the next source. That next source is Steffen Kruger and Christopher Wilson’s “The Problem with Trust: On the Discursive Commodification of Trust in AI,” an academic source that is the foundation for most of the points made within the second and fourth sections about manufacturing trust, the way interests are conflated, and how all of that serves those in power. I will also be drawing heavily on Marx’s definition of commodification from *Capital*, and generally using Marx as my primary “Theory” source under the B-TEAM framework.
4 Comments
Are there other reasons why CEOs want trustworthy AI? How much do CEOs care if AI is trustworthy? How does the trustworthiness of AI impact sales (does it?)?
These are the first questions that came to mind. The first answer I think of to your question is *money*, so I think it’d be interesting if you delved more into that.
Also, while this isn’t a very academic source, there’s a YouTube video by Philosophy Tube called “Is AI ethical?” that touches on the topics you’re bringing up (specifically your second and third sections). I wrote about it in my blog. There are sources cited on screen, so maybe it could lead you to more sources?
What this comment tells me, in essence, is that either section 2 or section 3 needs a passage more explicitly spelling out Marxist theory. I think I can hit all the points you mentioned by going over class dynamics, profit incentives, and types of interests under capitalism. I originally wanted to keep all the Marx material in the background except commodification, both because it would take a lot of time and because people hear “Marx” or “socialism” and go insane, but a basic explanation seems unavoidable.
Your outline for your paper is laid out really well, and it all fits cohesively together from point to point. When I read it, though, I’m left with a question somewhere between sections 2 and 3. Is the rise of AI viewed more negatively by the workforce, especially those whose jobs may become devalued, out of fear for the livelihood and purpose they derive from their professional work? I know your fourth section addresses their intersection, so you may have already planned to include the answer to my question there.
This is a good question. The simple answer is: the most effective way to get away with abusing someone is to convince them it’s for their own good. Frankly, I’m not sure how the general working class views AI getting more “trustworthy,” but I am sure that it’s in the interests of their bosses and shareholders to pump everything they have into portraying it as good and inevitable. I probably will address this in my fourth section.