Statement of Jennifer E. Rothman, Nicholas F. Gallicchio …, 2 Feb. 2024, rightofpublicityroadmap.com/wp-content/uploads/2024/02/Rothman_Statement_Subcommittee-on-IP_February-2_2024_Submitted.pdf.
- This PDF records a formal statement that Jennifer E. Rothman submitted to the House Subcommittee on Courts, Intellectual Property, and the Internet. Professor Rothman is the Nicholas F. Gallicchio Professor of Law at the University of Pennsylvania Carey Law School and is widely regarded as a leading expert on identity and publicity rights. Given her expertise, the subcommittee invited her to testify about the issues raised by AI and whether Congress should have a role in addressing them. The hearing was particularly meant to shed light on the proposed No AI Fraud Act, and her statement certainly does so. Professor Rothman explicates overlooked problems with the No AI Fraud Act through her critiques, providing important insight into the bill. Overall, her statement will be very helpful in my review of the bill.
“Blueprint for an AI Bill of Rights.” The White House, Oct. 2022, www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
“Blueprint for an AI Bill of Rights.” The White House, The United States Government, 22 Nov. 2023, www.whitehouse.gov/ostp/ai-bill-of-rights/.
“CDT and Civil Society Partners Urge Congress to Protect Artists, Creators, and Free Expression as It Examines Possible Misuse of AI Technologies.” Center for Democracy & Technology, 1 Feb. 2024, cdt.org/wp-content/uploads/2024/02/Coalition-Letter-NO-AI-Fraud-Act-_-NO-FAKES-Act-2.1.2024-.pdf.
- This source is an open letter from several well-known free speech and technology organizations (mostly nonprofits) to Congress. In the letter, the organizations urge Congress not to advance the proposed No AI Fraud Act. In their critique, they point to the bill's broad, vague language and argue that, because of it, the bill could have broader implications than intended, potentially hindering free speech rights. One of the organizations that co-authored the letter is FIRE, whose president, Greg Lukianoff, recently spoke at the University of Oklahoma at an event I attended. Overall, these organizations raise valid concerns that will bring insight to my analysis of the proposed legislation.
“Voluntary Commitments from Leading Artificial Intelligence Companies on July 21, 2023.” Harvard Law Review, vol. 137, no. 4, The Harvard Law Review Association, 10 Feb. 2024, harvardlawreview.org/print/vol-137/voluntary-commitments-from-leading-artificial-intelligence-companies-on-july-21-2023/.
- This source is an article from volume 137, issue 4, of the Harvard Law Review, a “student-run journal of legal scholarship” associated with Harvard University. The article responds to the commitment several AI companies made while convening at the White House to develop systems, such as watermarking, for labeling AI-generated content, a development that could change the way we currently understand copyright law. It also responds in part to the AI Bill of Rights published under the Biden-Harris Administration. What the article proposes may very well be a solution to many unprecedented legal issues surrounding AI: it would provide courts with a standard by which such cases could be decided and more clearly define the liability held by AI companies and consumers. It also explicates the strengths and weaknesses of the AI Bill of Rights, providing a nuanced understanding of that document.
“Jennifer E. Rothman.” PennLaw, www.law.upenn.edu/faculty/rothmj. Accessed 25 Nov. 2024.
McSherry, Corynne. “Generative AI Policy Must Be Precise, Careful, and Practical: How to Cut through the Hype and Spot Potential Risks in New Legislation.” Electronic Frontier Foundation, 25 Sept. 2024, www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype.
Salazar, Maria, and Madeleine Dean. No AI Fraud Act, 10 Jan. 2024, dean.house.gov/_cache/files/7/7/77047f57-f9f8-4ddb-9385-f4f82c7d22fd/090C34FC92DED2E83456EB85C8E64E44.no-ai-fraud-act.pdf.
- This source is the first draft of the proposed No AI Fraud Act, a bipartisan bill introduced in the House by Representatives Maria Salazar and Madeleine Dean on January 10, 2024. Though it is still in the early stages of review and revision, if passed into law it would create a “federal framework to protect Americans’ individual right to their likeness and voice against AI generated fakes and forgeries.” At the very outset of the bill, Representatives Salazar and Dean detail the ethical dilemmas spurred by generative AI, specifically deepfakes. This source is particularly helpful, as it identifies legal and ethical issues with AI that have not yet been publicly addressed by the courts currently considering such cases.