RP Blog 4: Completed Bibliography

1. Huang, Changwu, et al. “An overview of artificial intelligence ethics.” IEEE Transactions on Artificial Intelligence 4.4 (2022): 799-819.

Huang et al., from institutions including the Chinese Academy of Sciences and Hong Kong Polytechnic University, provide a broad overview of AI ethics. Their paper lays out ethical concerns in AI, such as fairness, accountability, transparency, and privacy, while also addressing challenges in implementing ethical AI frameworks. For me, Section 3, on the ethical issues and risks of AI, is the most important. I think this is really a context piece, since it lays out the current thought on what we should allow AI to do. This paper doesn't contribute anything new, but it does a good job of gathering the current discourse around the ethics of AI in one place. It also has a massive bibliography, which is helpful for further research.

2. Lubars, Brian, and Chenhao Tan. “Ask not what AI can do, but what AI should do: Towards a framework of task delegability.” Advances in Neural Information Processing Systems 32 (2019).

Lubars and Tan, both from the University of Colorado Boulder, present a paper which attempts to begin answering the question of what AI should do for us or, as they put it, AI task delegability. They do this by first deciding on four factors which they think contribute the most to our deciding whether a task should be fully automated (motivation, difficulty, risk, and trust), and then administering a survey about how comfortable participants would be with AI performing a specific task. They find that trust is the most important factor in how automated a task should be, and also that, on average, we prefer a level of automation which puts humans in the leading role, with AI assisting (a rough sketch of this kind of survey analysis follows below). This text is definitely an exhibit piece, but I think it could also work as a context one depending on what conclusion I come to as I begin to work on my paper. This paper also has a section on related work which is helpful, specifically regarding papers that talk about the role of trust, risk, and uncertainty when considering what AI should do for us.
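To keep the setup straight for myself, here is a minimal Python sketch of the kind of analysis the survey implies. This is my own illustration, not the authors’ code: the four factor names and the human-to-machine delegation levels come from the paper, but the 1–5 ratings, the toy responses, and the correlation step are assumptions I’m making for the example.

```python
"""Toy sketch of the Lubars & Tan survey setup (my illustration, not
the authors' code). Assumed: 1-5 Likert ratings for each factor and
four delegation levels mapped to 0-3."""
from statistics import correlation  # Python 3.10+

# Delegation levels, least to most automated (indices 0-3).
LEVELS = ["no AI", "AI assists human", "human assists AI", "full automation"]
FACTORS = ["motivation", "difficulty", "risk", "trust"]

# Toy responses: ratings for one task on the four factors, plus the
# respondent's preferred delegation level for that task.
responses = [
    {"motivation": 2, "difficulty": 2, "risk": 1, "trust": 5, "preferred": "full automation"},
    {"motivation": 4, "difficulty": 4, "risk": 4, "trust": 2, "preferred": "AI assists human"},
    {"motivation": 5, "difficulty": 3, "risk": 5, "trust": 1, "preferred": "no AI"},
    {"motivation": 3, "difficulty": 5, "risk": 2, "trust": 4, "preferred": "human assists AI"},
]

# Correlate each factor's ratings with the numeric delegation level --
# a crude stand-in for the paper's statistical analysis, which found
# trust to be the strongest factor.
for factor in FACTORS:
    ratings = [r[factor] for r in responses]
    levels = [LEVELS.index(r["preferred"]) for r in responses]
    print(f"{factor}: r = {correlation(ratings, levels):+.2f}")
```

The toy data is contrived, but the shape is the point: per-task factor ratings on one side, a preferred degree of delegation on the other, and a measure of which factor tracks that preference most strongly.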

3. Lewandowsky, Stephan, Michael Mundy, and Gerard Tan. “The dynamics of trust: comparing humans to automation.” Journal of Experimental Psychology: Applied 6.2 (2000): 104.

4. Lee, John D., and Katrina A. See. “Trust in automation: Designing for appropriate reliance.” Human Factors 46.1 (2004): 50-80.

5. McDermid, John A., et al. “Artificial intelligence explainability: the technical and ethical dimensions.” Philosophical Transactions of the Royal Society A 379.2207 (2021): 20200363.

6. Kabir, Samia, et al. “Is Stack Overflow obsolete? An empirical study of the characteristics of ChatGPT answers to Stack Overflow questions.” Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 2024.

7. Jaźwińska and Chandrasekar, working with the Columbia Journalism Review, conduct a study on the capability of multiple LLMs, like ChatGPT, DeepSeek, and Copilot, to correctly answer questions about where a given text is sourced from. What they find is that these models overwhelmingly make up links, misidentify the source of the text they were given, and won’t decline to answer questions they can’t answer accurately. Overall, the models were incorrect on about 60% of the questions. This is a great source for me in demonstrating that we can’t trust AI in its current state. This is an exhibit piece for me, showing that beyond what we may have experienced personally, the reliability of LLMs is low, or at least low enough that we shouldn’t trust them to do dangerous or important things for us. A rough sketch of what this kind of evaluation looks like follows the citation below.

Jaźwińska, Klaudia, and Aisvarya Chandrasekar. “AI Search Has a Citation Problem.” Columbia Journalism Review, 6 Mar. 2025, www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php.
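To spell out what a test like this looks like in practice, here is a minimal Python sketch of a citation-accuracy check. Again, this is my own illustration, not the study’s methodology or code: `ask_model` is a hypothetical stand-in for querying a real chatbot, and the exact-match scoring is an assumption I’m making for the example.

```python
"""Toy sketch of a citation-accuracy evaluation in the spirit of the
CJR study (my illustration, not their code). `ask_model` is a
hypothetical stand-in for a real chatbot API."""

def ask_model(excerpt: str) -> str | None:
    """Hypothetical model call: given an article excerpt, return the
    URL the model claims as the source, or None if it declines."""
    return "https://example.com/made-up-article"  # canned answer for the demo

# Each test case pairs an excerpt with the URL it actually came from.
test_cases = [
    ("Excerpt from a climate story...", "https://example.com/climate-story"),
    ("Excerpt from an election story...", "https://example.com/election-story"),
]

correct = incorrect = declined = 0
for excerpt, true_url in test_cases:
    answer = ask_model(excerpt)
    if answer is None:
        declined += 1   # the model admitted it couldn't answer
    elif answer == true_url:
        correct += 1    # matched the real source
    else:
        incorrect += 1  # wrong or fabricated citation

total = len(test_cases)
print(f"correct {correct}/{total}, incorrect {incorrect}/{total}, declined {declined}/{total}")
```

What the study found, in these terms, is that the incorrect bucket dominates and the declined bucket stays nearly empty, which is exactly the failure mode that should worry us.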

8. Jacovi et al., from various universities, attempt to define what trust is between a human and an AI, how it differs from human-to-human trust and, importantly, how to evaluate whether our trust is warranted. They also draw the connection between trust and the explainability of AI. But really, I like this source as an example for all the sources that follow below, those that I plan on using to demonstrate how often trust is cited as important in our treatment of machines and AIs. I think that, combined with 2, which is a study done on ordinary people that demonstrates the importance of trust, a collection of real studies that cite trust as important will give a lot of weight to my claim that trust is the most important factor in what we let AI do. These aren’t sources that I will really “use,” but simply ones that I can cite as marking trust as an important factor in what we let AI do.

Jacovi, Alon, et al. “Formalizing trust in artificial intelligence: prerequisites, causes and goals of human trust in AI.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021.

Carvalho, Diogo V., Eduardo M. Pereira, and Jaime S. Cardoso. “Machine learning interpretability: A survey on methods and metrics.” Electronics 8.8 (2019): 832.

Das, Arun, and Paul Rad. “Opportunities and challenges in explainable artificial intelligence (XAI): A survey.” arXiv preprint arXiv:2006.11371 (2020).

Tjoa, Erico, and Cuntai Guan. “A survey on explainable artificial intelligence (XAI): Toward medical XAI.” IEEE Transactions on Neural Networks and Learning Systems 32.11 (2020): 4793-4813.

Xu, Feiyu, et al. “Explainable AI: A brief survey on history, research areas, approaches and challenges.” Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II. Springer International Publishing, 2019.

9. European Commission. “REPowerEU Plan: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions.” European Commission: Brussels, Belgium (2022).

10. IBM. “What Are Large Language Models (LLMs)?” IBM.com, 2 Nov. 2023, www.ibm.com/think/topics/large-language-models.
