Final RP Proposal

The question I am going to engage with in this paper is what we should allow AI (specifically deep learning AI) to do for us. My answer is that, at least at this time, we should allow AI to do nothing for us. I plan for the paper to proceed in essentially three stages. First, I want to demonstrate, using a variety of sources, that trust (in a broad sense) seems to be the most important factor in deciding what we should let AI do. Second, I will show, again using a variety of sources, that at the present stage in the development of AI we cannot trust it with certain (perhaps the most important) things. And third, I will argue that if we take claims 1 and 2 together, we should not allow AI to do much of anything, at least not yet.

I’m happy with the above because it seems more research paper-y to me than my original idea did: my conclusion can be grounded in the concrete facts I plan to demonstrate instead of in a purely reason-based, a priori philosophical argument about what we should let AI do. But it still keeps a lot of the bits that make me interested.

Moving on to sources, I think for my paper I will primarily be using Exhibit/Argument pieces, although my argument pieces will probably be ones that I agree with. Basically, I think I need two “kinds” of sources: those that demonstrate/argue for claim 1, and those that demonstrate/argue for claim 2. As an example, and to show sources that I think will be big for me: for claim 1 I can use Lubars and Tan’s “Ask Not What AI Can Do, but What AI Should Do: Towards a Framework of Task Delegability,” since their research indicates how important a factor trust is in what we allow AI to do. For claim 2 I can use Hicks et al.’s “ChatGPT Is Bullshit” (although this might be more of an exhibit piece to accompany sources that actually show data that AI is fooling or “bullshitting” us) to show that we seemingly cannot trust AI. I plan on having at least two or three sources for each of those sections, since having multiple sources bolsters the strength of my claims. I don’t see these sources being hard to find given the resources I have. I’m not sure if the third section, my conclusion/argument, will have any sources, since it may just be me putting together an argument/answer to the posed question based on previously used sources.

2 responses

  1. One of the main sources in my paper happens to be about almost exactly what you’re talking about. However, due to the nature of that paper (and the nature of your paper), I think it makes sense to investigate the question “should we want AI even if we can trust it?”, since the only gripe you seem to have currently is that it’s not *yet* trustworthy. (Here’s the source: Steffen Krüger and Christopher Wilson’s “The Problem with Trust: On the Discursive Commodification of Trust in AI.”)

  2. Wyatt Haas

    This prompt is very interesting; it makes me curious about the use of AI in my own life and naturally draws any reader in. I love how all-encompassing the idea of what AI should do for us is, but I think this might also be a detriment. The scope of the question of choosing what AI should do is very large and philosophical, so it might be worth focusing on a smaller area/subject that has more information and sources about it, rather than on how AI affects life in general.

    For example, choosing a specific industry to focus on, such as silicon, oil, or data, would help narrow down the scope. You could totally go with this prompt and probably be fine! I just picture myself having trouble navigating a prompt like this, which is why I bring it up.
