Artificial Intelligence Legislation Tracker

UPDATED: Jan. 22, 2026

The rapid rise in interest in – and adoption of – artificial intelligence (“AI”) technology, including generative AI, has resulted in global demands for regulation and corresponding legislation. Microsoft, for one, has pushed for the development of “new law and regulations for highly capable AI foundation models” and the creation of a new agency in the United States to implement those new rules, as well as the establishment of licensing requirements in order for entities to operate the most powerful AI models. At the same time, Sam Altman, the CEO of ChatGPT-developer OpenAI, has called for “the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations, and tests that A.I. models must pass before being released to the public,” among other things.

The United States has “trailed the globe on regulations in privacy, speech, and protections for children,” the New York Times reported in connection with calls for AI regulation. The paper’s Cecilia Kang noted that the U.S. is “also behind on A.I. regulations” given that the EU AI Act was formally adopted in 2024 and entered into force Aug. 1, 2024. Meanwhile, China currently has among “the most comprehensive suite of AI regulations in the world,” with the Interim Measures for the Management of Generative AI Services becoming effective on Aug. 15, 2023.

What the U.S. has done to date is release non-binding guidance in the form of the AI Risk Management Framework: the National Institute of Standards and Technology released the final version (AI RMF 1.0) on Jan. 26, 2023. Intended for voluntary use, the AI Risk Management Framework aims to enable companies to “address risks in the design, development, use, and evaluation of AI products, services, and systems” in light of the “rapidly evolving” AI research and development standards landscape.

Before that, in October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, at the center of which are five principles intended to minimize potential harm from AI systems. (Those five principles are: safe and effective systems; algorithmic discrimination protection; data privacy; notice and explanation; and human alternatives, consideration, and fallback.)

Despite such lags in regulation, a growing number of new AI-focused bills coming from lawmakers at the federal and state levels are worth keeping an eye on. With that in mind, here is a running list of key domestic legislation that industry participants should be aware of – and we will continue to track substantive developments for each and update accordingly …

(This is not an exhaustive list of AI legislation; it is focused on bills that broadly have implications for retail companies and online platforms, and/or that focus on intellectual property.)

Jan. 22, 2026 – Transparency and Responsibility for AI Networks Act (TRAIN Act) (H.R. 7209)

Introduced: Jan. 22, 2026, by Rep. Madeleine Dean (D-PA) and Rep. Nathaniel Moran (R-TX)

Snapshot: The TRAIN Act (H.R. 7209) establishes a legal mechanism for copyright owners to determine whether their copyrighted works were used in the training of generative artificial intelligence (AI) models. The bill creates an administrative subpoena process allowing rights holders to access records from AI developers under certain conditions.

Key Provisions: The bill permits copyright holders to request a subpoena from any U.S. district court clerk requiring AI developers to disclose records or copies of training data, when the requester has a good faith belief that their copyrighted works were used. It defines key terms such as “developer,” “generative artificial intelligence model,” and “training material,” and limits subpoenas to works owned by the requester. Developers must respond expeditiously upon receipt of a valid subpoena, and there is a duty of confidentiality on recipients. If a developer fails to comply, a rebuttable presumption is created that the copyrighted material was used. The bill includes safeguards against abuse, with courts authorized to impose sanctions for bad faith subpoena requests.

Potential Implications: Supporters argue the bill introduces much-needed transparency into how AI systems are trained and helps protect intellectual property rights. “This bill gives creatives a tool to seek the truth about how their work is being used in AI training,” said Rep. Dean. “It’s a step toward accountability and fairness in the age of generative technology.”

Jan. 1, 2026 – California: Generative AI Training Data Transparency

Bill: Generative Artificial Intelligence Training Data Transparency Act (AB 2013)

Introduced by/Sponsor: Assemblymember Jacqui Irwin (D)

Snapshot: Requires developers of generative AI systems or services made available to Californians to publicly disclose “high-level” information about the training data used. Covered information includes data sources and ownership; data characteristics and volume; relevance to the AI system’s purpose; collection and processing methods; intellectual property status, including use of copyrighted or licensed data; whether training data includes personal information under the California Consumer Privacy Act; collection timelines; and whether synthetic data was used in training.

The law applies to systems first released or substantially modified on or after January 1, 2022, and broadly defines covered developers as any individual or entity that designs, codes, produces, or substantially modifies generative AI for public use in California.

Exemptions include systems used solely for cybersecurity or security testing, aircraft operations in national airspace, or systems developed for national security, military, or defense purposes and made available only to a U.S. federal agency. AB 2013 does not prescribe specific enforcement mechanisms or penalties, and it contains no explicit trade secret exemption, leaving uncertainties around enforcement and proprietary protections.

Status: Enacted – Signed by Governor September 28, 2024; effective January 1, 2026.

Jan. 1, 2026 – California: Companion Chatbot Operational and Reporting Requirements

Bill: Companion Chatbot Law (SB 243)

Introduced by/Sponsors: Sen. Steve Padilla (D) (signed by Governor October 13, 2025)

Snapshot: Establishes the first comprehensive state law regulating “companion chatbots” — AI systems with adaptive, human-like conversational capabilities designed to meet users’ social or emotional needs. The law applies only to operators that make companion chatbot platforms available to users in California and excludes many customer service bots, business process tools, video game chatbots limited to in-game responses, and voice-activated assistants that do not sustain emotional or relational interactions.

Core requirements include clear AI disclosures when a reasonable person might be misled into thinking they are interacting with a human; implementation of safety protocols to prevent harmful content (including referrals to crisis services for suicide/self-harm risk); and, for users known to be minors, additional safeguards such as periodic reminders that the chatbot is AI and measures to prohibit sexually explicit content.

Beginning July 1, 2027, operators must annually report to the California Office of Suicide Prevention on crisis referrals and safety-protocol measures, with summary data published by the state. The statute creates an express private right of action for individuals harmed by non-compliance, allowing claims for injunctive relief, damages (the greater of $1,000 per violation or actual damages), and attorneys’ fees.

Status: Enacted – Signed into law on October 13, 2025; key provisions effective January 1, 2026; annual reporting begins July 1, 2027.

2025

Dec. 11, 2025 – New York: AI Transparency in Advertising

Bill: AI Transparency in Advertising (S.8420-A / A.8887-B)

Introduced by/Sponsors: Sen. Michael Gianaris; Assemblymember Linda B. Rosenthal

Snapshot: Requires individuals and entities that produce or create commercial advertisements to clearly and conspicuously disclose when an advertisement includes an AI-generated “synthetic performer”—defined as digitally created media that appears as a real person. The law is designed to prevent consumer deception as AI-generated human likenesses become increasingly realistic. Exceptions apply for audio-only advertisements, AI used solely for translation, and promotional materials for expressive works (such as films or television shows) where the synthetic performer appears consistently in the underlying work.

Status: Enacted – Signed into law on December 11, 2025; effective June 9, 2026. Enforced through civil penalties of $1,000 for a first violation and $5,000 for subsequent violations.

Dec. 11, 2025 – New York: Posthumous Right of Publicity

Bill: Posthumous Right of Publicity (S.8391 / A.8882)

Introduced by/Sponsors: Sen. Michael Gianaris; Assemblymember Tony Simone

Snapshot: Expands New York’s right of publicity law by requiring prior consent from a deceased individual’s heirs or executors for the commercial use of the individual’s name, image, voice, or likeness. The law specifically restricts the unauthorized creation and use of AI-generated “digital replicas” of deceased performers in audiovisual works, sound recordings, and live musical performances. It applies to deceased personalities and performers who were domiciled in New York at the time of death, while preserving exemptions for expressive works such as parody, satire, criticism, and commentary.

Status: Enacted – Signed into law and effective immediately on December 11, 2025; violations may result in statutory damages of $2,000 or compensatory damages (including profits), with potential exposure to punitive damages.

Dec. 10, 2025 – Federal AI Disclosure in Government Communications

Bill: Responsible and Ethical AI Labeling Act (H.R. 6571) (“REAL Act”)

Introduced by / Sponsors: Rep. Bill Foster (IL); co-sponsored by Rep. Pete Sessions (TX)

Snapshot: Would require Federal officials — including the President, Vice President, and agency employees — to clearly disclose when any publicly released content has been created or materially manipulated using generative artificial intelligence. Disclaimers must be clear, conspicuous, written in plain language, and explain that AI was used, how the content was generated or altered, and the technology involved.

The bill carves out narrow exceptions for non-public communications, classified content (with recordkeeping safeguards), minor visual edits that do not alter meaning (e.g., cropping or brightness), routine text drafting tools reviewed by staff, and personal, non-government social media activity unrelated to official duties.

Implementation would be led by the Office of Management and Budget, which must issue government-wide rules within 180 days. Annual public audits of compliance would be required from the White House and federal agencies. Violations could trigger mandatory retractions and corrective disclosures, corrective action plans overseen by the Comptroller General, and disciplinary measures against federal employees or contractors.

Status: Introduced in the House on December 10, 2025; referred to the House Committee on Oversight and Government Reform. Not yet enacted.

Dec. 1, 2025 – Deepfake Liability Act

Bill: Deepfake Liability Act (H.R. 6334)

Introduced by/Sponsors: Rep. Jake Auchincloss (D–MA)

Snapshot: Amends Section 230 and the TAKE IT DOWN Act framework to define and target certain forms of “digital forgery,” including AI-generated or manipulated audio, images, and video that falsely depict identifiable individuals. The bill would narrow platform immunity and introduce new liability exposure for the creation, distribution, and hosting of covered deepfake content, particularly where such material is used for deception, harassment, or nonconsensual exploitation.

Nov. 10, 2025 – New York: Algorithmic Pricing Disclosure Act

Bill: Algorithmic Pricing Disclosure Act (New York General Business Law § 349-A)

Introduced by/Sponsors: New York State Legislature (signed by Governor Kathy Hochul)

Snapshot: Requires businesses to clearly and conspicuously disclose when a price offered to a New York consumer is set by an algorithm using that consumer’s personal data. The law targets individualized or “surveillance” pricing practices, while carving out exceptions for pricing based solely on non-personal data (such as supply and demand), certain subscription discounts, regulated financial institutions, insurers, and specific transportation fares.

Status: Enacted – Effective November 10, 2025; enforced exclusively by the New York Attorney General, with civil penalties of up to $1,000 per violation and a mandatory notice-and-cure period before enforcement actions.

Aug. 13, 2025 – New York: AI Disclosure in News Work

Bill: AI Disclosure in News Work (A 8962)

Introduced by/Sponsors: Assemblymember Rozic

Snapshot: Mandates that news content substantially created with generative AI must be conspicuously labeled at the top of articles, visuals, or beginning of audio; content must be human-reviewed before publication.

Status: Active – Introduced and referred to Committee on Consumer Affairs and Protection on Aug. 13, 2025.

Jun. 9, 2025 – New York: Synthetic Performer Ad Disclosure

Bill: Synthetic Performer Ad Disclosure (S 8420 / A 8887)

Introduced by/Sponsors: Senator Gianaris; Assemblymember Rosenthal

Snapshot: Requires advertisements featuring AI-generated or synthetic performers to include a clear disclosure; imposes civil penalties of $1,000 for the first violation, $5,000 for subsequent ones.

Status: In progress – To Governor (passed both chambers; A 8887 was substituted by S 8420-A).

Mar. 24, 2025 – California: Bots: Disclosure

Bill: Bots: Disclosure (AB 410)

Introduced by/Sponsors: Assemblymember Wilson (D-CA)

Snapshot: Expands California’s bot disclosure laws. Requires bots that autonomously interact with people online to identify themselves in their first communication, truthfully respond to follow-up identity queries, and avoid misrepresenting themselves as human. Public prosecutors may bring civil actions for violations.

Status: Not enacted – California ended its 2025 legislative session without passing AB 410.

Mar. 20, 2025 – Arkansas: Ownership of Model Training and Content Generated by a Generative AI Tool

Bill: Ownership of Model Training and Content Generated by a Generative AI Tool (H 1876 / Act 927)

Introduced by/Sponsors: Rep. Scott Richardson; co-sponsored by Sen. Joshua Bryant

Snapshot: Clarifies that when an individual uses a generative AI tool, the person who provides the input or data is the owner of the resulting content or trained model (so long as it does not infringe on existing IP). Includes a “work for hire” provision: if done under employment scope and direction, ownership belongs to the employer.

Status: Enacted – Signed on May 2, 2025 (Act 927)

Mar. 5, 2025 – New York: Stop Deepfakes Act / Synthetic Content Provenance 

Bill: Stop Deepfakes Act / Synthetic Content Providers Must Include Provenance (S 6954)

Introduced by/Sponsors: Assemblymember Alex Bores (D)

Snapshot: Requires AI-generated or synthetic content systems to embed cryptographic provenance data—including origin, AI provider, and generation timestamp—to preserve authenticity and prevent tampering.

Status: In progress – Passed Senate, pending Assembly consideration.

Feb. 19, 2025 – California: AI Transparency Act 

Bill: AI Transparency Act (AB 853)

Introduced by/Sponsors: Assemblymember Buffy Wicks (D-Berkeley)

Snapshot: Requires creators of generative AI systems with over 1 million monthly users to offer a free AI-detection tool that reveals whether image, video, or audio content was created or altered by AI. Beginning January 1, 2027, large platforms must display provenance labels for AI-generated content. (AB 853 also pushed back the operative date of the underlying California AI Transparency Act (SB 942) from January 1, 2026 to August 2, 2026.)

Status: Enacted October 13, 2025.

Feb. 13, 2025 – Pennsylvania: Clarifying AI & Copyright

Bill: Clarifying AI & Copyright (HR 81)

Introduced by/Sponsors: Rep. Kristine Howard (D-PA-167); co-sponsors include Giral, Guenst, Sanchez, Merski, Otten, Cepeda-Freytiz, Daley, and Green

Snapshot: A resolution calling on Congress to amend U.S. copyright law to clarify that works generated predominantly by AI are not copyrightable, that only human-authored works qualify for copyright, and that scraping copyrighted works for AI model training is not protected by fair use.

Status: Active – Introduced and circulated among House members; pending further action

Feb. 4, 2025 – California: Generative AI: Training Data

Bill: Generative AI: Training Data (AB 412)

Introduced by/Sponsors: Assemblymember Rebecca Bauer-Kahan (D-CA)

Snapshot: Requires developers of generative AI models available to Californians to document copyrighted materials used in training and record the copyright owner. Developers must, within seven days of a rights-owner’s request, provide a list of training materials or certify that none were used. Civil action permitted for non-compliance.

Status: Did not advance out of committee during the 2025 legislative year; not enacted.

Jan. 27, 2025 – New York: Advanced AI Licensing Act

Bill: Advanced AI Licensing Act (A 3356)

Introduced by/Sponsors: Assemblymembers Vanel, Blumencranz, Hyndman; cosponsored by Levenberg

Snapshot: Creates a licensing and regulatory framework for “high-risk” AI systems. Requires registration and compliance with ethical standards. Grants the Department of State authority to suspend/revoke licenses and impose penalties.

Status: Active – Referred to Assembly Committee on Science and Technology.

Jan. 22, 2025 – New York: Algorithmic Pricing Disclosure Act

Bill: Algorithmic Pricing Disclosure Act (A.6765A)

Introduced by: New York State Assembly

Snapshot: The Act requires most companies using algorithmic pricing—systems that automatically adjust prices based on personal data such as location, income, or shopping habits—to clearly disclose that prices are being set using personal information. Businesses must display the notice: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” The law aims to prevent “surveillance pricing” practices that charge consumers different prices for the same products without transparency.

Status: Signed into law by Governor Kathy Hochul in May 2025 as part of the FY 2026 Executive Budget; effective November 10, 2025. Attorney General Letitia James has issued a consumer alert urging New Yorkers to report violations.

Jan. 10, 2025 – New York: Publishers of Books Created with AI 

Bill: Publishers of Books Created with AI (A 1509 / S 1815)

Introduced by/Sponsors: Assemblymember Rivera (A 1509); Senator Fernandez (S 1815)

Snapshot: Requires books (print or digital) created wholly or partly with AI to include conspicuous cover disclosures. Applies to text, images, audio, puzzles, and games.

Status: Active – A 1509 referred to Assembly Consumer Affairs and Protection; S 1815 referred to Senate Internet and Technology Committee.

2024

Nov. 21, 2024 – SCAM Platform Act

Bill: SCAM Platform Act (H.R. 10212)

Introduced by: Curtis, John R. (R-UT-3)

Snapshot: The bill directs the Federal Communications Commission (“FCC”) to add a tool on its website that uses AI to assist the public in identifying likely scams. Individuals would be able to submit emails, texts, website addresses, or photographs for the FCC online tool to provide a rating that reflects the likelihood of the submission being a scam.

Nov. 21, 2024 – Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act

Bill: Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act (S. 5379)

Introduced by: Welch, Peter (Sen.-D-VT)

Snapshot: The bill would create an administrative subpoena process to assist copyright owners in determining which of their copyrighted works have been used in the training of AI models.

Nov. 1, 2024 – Eliminating Bias in Algorithmic Systems Act of 2024

Bill: Eliminating Bias in Algorithmic Systems Act of 2024 (H.R. 10092)

Introduced by: Lee, Summer L. (D-PA-12)

Snapshot: The bill would require certain agencies that use, fund, or oversee algorithms to have an office of civil rights focused on bias, discrimination, and other algorithmic harms. It requires these civil rights offices to issue annual reports detailing the technology of covered algorithms with respect to jurisdiction of the covered agency, including risks relating to bias, discrimination, and other harms, actions the agency has taken to mitigate these risks and harms, and other information. It would also create an interagency working group on covered algorithms and civil rights.

Sept. 24, 2024 – Artificial Intelligence Civil Rights Act of 2024

Bill: Artificial Intelligence Civil Rights Act of 2024 (S. 5152)

Introduced by: Markey, Edward J. (Sen.-D-MA)

Snapshot: The bill regulates the use of algorithms in consequential decisions. It requires developers and deployers of such algorithms to evaluate the algorithm’s potential harms before deployment and implement post-deployment impact assessments. It increases transparency around the use of algorithms in consequential decisions and grants individuals the right to appeal an algorithmic decision.

Status: Expired; introduced in the 118th Congress and did not become law before the session ended in January 2025.

Apr. 18, 2024 – Future of Artificial Intelligence Innovation Act of 2024

Bill: Future of Artificial Intelligence Innovation Act of 2024 (S. 4178)

Introduced: Apr. 18, 2024

Introduced by/Sponsors: Sens. Maria Cantwell (D-Wash.), Todd Young (R-Ind.), John Hickenlooper (D-Colo.), and Marsha Blackburn (R-Tenn.)

Snapshot: The bill aims to set the foundation for continued U.S. leadership in the development of AI and emerging technologies, responding to “longstanding calls from researchers by incorporating key cybersecurity recommendations, such as the development of international standards, metrics, and AI testbeds; increased collaboration between the public-private sector and governments both domestically and abroad; and enhanced information sharing to drive secure AI research and development.”

Apr. 9, 2024 – Generative AI Copyright Disclosure Act

Bill: Generative AI Copyright Disclosure Act (H.R. 7913)

Introduced by/Sponsors: Rep. Schiff, Adam (D-CA)

Snapshot: The bill would require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system. The bill’s requirements would also apply retroactively to previously released generative AI systems.

Mar. 21, 2024 – Protecting Consumers From Deceptive AI Act

Bill: Protecting Consumers From Deceptive AI Act (H.R. 7766)

Introduced by/Sponsors: Rep. Eshoo, Anna G. (D-CA)

Snapshot: The bill requires the National Institute of Standards and Technology (NIST) to establish task forces to facilitate and develop technical standards and guidelines for identifying and labeling AI-generated content. It also requires generative artificial intelligence (GAI) developers and online content platforms to provide disclosures on AI-generated content.

Mar. 19, 2024 – AI CONSENT Act

Bill: AI CONSENT Act (S. 3975)

Introduced by/Sponsors: Sen. Welch, Peter (D-VT)

Snapshot: The bill would require online platforms to obtain consumers’ express informed consent before using their personal data to train artificial intelligence (AI) models. Failure to do so would be considered a deceptive or unfair practice, subject to Federal Trade Commission (FTC) enforcement. The bill also directs the FTC to study the efficacy of data de-identification given advancements in AI tools.

Mar. 6, 2024 – Protect Victims of Digital Exploitation and Manipulation Act of 2024

Bill: Protect Victims of Digital Exploitation and Manipulation Act of 2024 (H.R. 7567)

Introduced by/Sponsors: Rep. Mace, Nancy (R-SC)

Snapshot: The bill would amend title 18, United States Code, to prohibit the production or distribution of non-consensual deepfake pornography.

Feb. 1, 2024 – Artificial Intelligence Environmental Impacts Act of 2024

Bill: Artificial Intelligence Environmental Impacts Act of 2024 (H.R. 7197)

Introduced by/Sponsors: Rep. Eshoo, Anna G. (D-CA)

Snapshot: The bill would direct the National Institute of Standards and Technology (NIST) to develop standards to measure and report the full range of artificial intelligence’s (AI) environmental impacts, as well as create a voluntary framework for AI developers to report environmental impacts.

Jan. 29, 2024 – R U REAL Act

Bill: Restrictions on Utilizing Realistic Electronic Artificial Language (“R U REAL”) Act (H.R. 7120)

Introduced by/Sponsors: Rep. Schakowsky, Janice D. (D-IL)

Snapshot: The bill would give the Federal Trade Commission about three months to add a new mandate to telemarketing rules to require telemarketers to disclose if they are using artificial intelligence to mimic a human at the beginning of any call or text message.

Jan. 10, 2024 – No AI FRAUD Act

Bill: No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943)

Sponsors: Reps. María Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY), and Rob Wittman (R-VA).

Snapshot: The bill establishes a federal framework to protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries. The No AI FRAUD Act establishes a federal solution with baseline protections for all Americans by: (1) Reaffirming that everyone’s likeness and voice is protected, giving individuals the right to control the use of their identifying characteristics; (2) Empowering individuals to enforce this right against those who facilitate, create, and spread AI frauds without their permission; and (3) Balancing the rights against First Amendment protections to safeguard speech and innovation.

Jan. 10, 2024 – Ensuring Likeness Voice and Image Security (ELVIS) Act

Bill: Ensuring Likeness Voice and Image Security (ELVIS) Act

Sponsors: Tennessee Governor Bill Lee

Snapshot: The bill would update Tennessee’s Protection of Personal Rights law to include protections for songwriters, performers, and music industry professionals’ voices from the misuse of AI. The ELVIS Act aims to prevent the production and distribution of audio-visual and sound recordings featuring unauthorized AI-generated replica vocals of an individual without said individual’s consent. Tennessee law currently protects an individual’s image, photograph, and likeness from being exploited without consent, but the protection does not extend to an individual’s voice.

Status: Mar. 21, 2024 – The ELVIS Act was signed into law by Tennessee Gov. Bill Lee.

2023

Dec. 22, 2023 – AI Foundation Model Transparency Act of 2023

Bill: AI Foundation Model Transparency Act of 2023 (H.R.6881)

Sponsors: Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA)

Snapshot: The bill would direct the Federal Trade Commission (FTC), in consultation with the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP), to set standards for what information high-impact foundation models must provide to the FTC and what information they must make available to the public. Information identified for increased transparency would include training data used, how the model is trained, and whether user data is collected in inference.

What the sponsors are saying: “AI offers incredible possibilities for our country, but it also presents peril. Transparency into how AI models are trained and what data is used to train them is critical for consumers and policy makers,” said Eshoo. “The AI Foundation Model Transparency Act directs the Federal Trade Commission and NIST to establish standards for data sharing by foundation model deployers. This critical legislation will provide necessary information and empower consumers to make well informed decisions when they interact with AI. It will also provide the FTC critical information for it to continue to protect consumers in an AI-enabled world.”

Status: Expired; introduced in the 118th Congress and did not become law before the session ended in January 2025.

Dec. 15, 2023 – Artificial Intelligence Literacy Act of 2023

Bill: Artificial Intelligence Literacy Act of 2023 (H.R.6791)

Sponsors: Reps. Lisa Blunt Rochester (D-Del.) and Larry Bucshon, M.D. (R-Ind.)

Snapshot: The bill would codify AI literacy as a key component of digital literacy and create opportunities to incorporate AI literacy into existing programs.

What the sponsors are saying: “It’s no secret that the use of artificial intelligence has skyrocketed over the past few years, playing a key role in the ways we learn, work, and interact with one another. Like any emerging technology, AI presents us with incredible opportunities along with unique challenges,” said Rep. Blunt Rochester. “That’s why I’m proud to introduce the bipartisan AI Literacy Act with my colleague, Rep. Bucshon. By ensuring that AI literacy is at the heart of our digital literacy program, we’re ensuring that we can not only mitigate the risk of AI, but seize the opportunity it creates to help improve the way we learn and the way we work.”

Status: Expired; introduced in the 118th Congress and did not become law before the session ended in January 2025.

Nov. 24, 2023 – AI Labeling Act of 2023

Bill: AI Labeling Act of 2023 (H.R.6466)

Sponsors: Rep. Tom Kean, Jr. (NJ-07)

Snapshot: The bill would help ensure people know when they are viewing AI-made content or interacting with an AI chatbot by requiring clear labels and disclosures. Specifically, the bill would: (1) Direct the Director of the National Institute of Standards and Technology (NIST) to coordinate with other federal agencies to form a working group to assist in identifying AI-generated content and establish a framework on labeling AI; (2) Require that developers of generative AI systems incorporate a prominently displayed disclosure to clearly identify content generated by AI; (3) Ensure developers and third-party licensees take responsible steps to prevent systematic publication of content without disclosures; and (4) Establish a working group of government, AI developers, academia, and social media platforms to identify best practices for identifying AI-generated content and determining the most effective means of transparently disclosing it to consumers.

Nov. 15, 2023 – AI Research, Innovation, and Accountability Act of 2023

Bill: Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312)

Sponsors: Sens. John Thune (R-S.D.), Amy Klobuchar (D-Minn.), Roger Wicker (R-Miss.), John Hickenlooper (D-Colo.), Shelley Moore Capito (R-W.Va.), and Ben Ray Luján (D-N.M.)

Snapshot: The bill establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.

What the sponsors are saying: “AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” said Thune. “As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention. This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.”

Oct. 3, 2023 – Digital Replica Contracts Act

Bill: Digital Replica Contracts Act (NY S.7676B)

Sponsors: NY Sen. Jessica Ramos

Snapshot: The bill aims to protect performers' voices and likenesses from being copied or used without permission in New York state, mirroring laws previously passed in California (AB 2602 and AB 1836) and Tennessee (the ELVIS Act).

Status: Dec. 13, 2024 – The bill was signed by New York Governor Kathy Hochul.

Oct. 11, 2023 – NO FAKES Act

Bill: The Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act

Sponsors: Sens. Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC)

Snapshot: The NO FAKES Act would protect the voice and visual likeness of all individuals from unauthorized recreations generated by artificial intelligence. The draft legislation would: (1) Hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance; (2) Hold platforms liable for hosting an unauthorized digital replica if the platform has knowledge of the fact that the replica was not authorized by the individual depicted; and (3) Exclude certain digital replicas from coverage based on recognized First Amendment protections.

Status: Oct. 11, 2023 – A “discussion draft” of the legislation was introduced with the sponsors saying that they “look forward to continuing to work with stakeholders to ensure that Congress appropriately balances the need to protect individuals and creators, First Amendment considerations, and fostering U.S. leadership and innovation in AI.”

Sept. 28, 2023 – Preventing Deep Fake Scams Act

Bill: Preventing Deep Fake Scams Act (H.R.5808)

Sponsors: Reps. Brittany Pettersen (D-CO-7) and Mike Flood (R-NE)

Snapshot: The bill would address the growing threat that artificial intelligence (AI), often deployed in so-called "deep fake" scams, poses to consumers, banks, credit unions, and the American economy by establishing a task force to examine AI in the financial services sector, including both the potential benefits of the technology for financial institutions and the unique risks it poses to customer account security.

What the sponsors are saying: “Scammers are already learning how to take advantage of regular Americans by stealing audio, photos, videos, and other personal information to hack into bank accounts and steal people’s hard-earned money. Artificial intelligence will only become more advanced and widely available, so our policies must keep up. Our bipartisan bill will allow Congress to stay at the cutting-edge of technological advances, understanding both the positive and negative impacts AI could have on our financial sector,” said Pettersen.

“Artificial intelligence is already changing how people live, work, and do business. While I am excited about the potential for artificial intelligence to change our economy in positive ways, deep fakes have the potential to lead to some troubling threats for Americans like identity theft and fraud,” said Flood.

Sept. 21, 2023 – Algorithmic Accountability Act of 2023

Bill: Algorithmic Accountability Act of 2023 (S.2892) and (H.R.5628)

Sponsors: Sen. Ron Wyden (D-OR) and Rep. Yvette D. Clarke (D-NY-9)

Snapshot: The bill would require companies to assess the impacts of the AI systems they use and sell, create new transparency about when and how such systems are used, and empower consumers to make informed choices when they interact with AI systems.

What the sponsors are saying: “AI is making choices, today, about who gets hired for a job, whether someone can rent an apartment and what school someone can attend. Our bill will pull back the curtain on these systems to require ongoing testing to make sure artificial intelligence that is responsible for critical decisions actually works, and doesn’t amplify bias based on where a person lives, where they go to church or the color of their skin,” Wyden said.

Sept. 20, 2023 – DEEPFAKES Accountability Act

Bill: DEEPFAKES Accountability Act (H.R.5586)

Sponsors: Rep. Yvette D. Clarke (D-NY-9)

Snapshot: The bill would protect national security against the threats posed by deepfake technology and provide legal recourse to victims of harmful deepfakes by giving prosecutors, regulators, and, particularly, victims resources such as detection technology, and by requiring creators to label all deepfakes uploaded to online platforms and make transparent any alterations made to a video or other type of content.

What the sponsors are saying: “We know that weaponized deception can be extremely harmful to our society. This bill is meant to take us into the 21st century and establish a baseline so we can discern who is intending to harm us,” said Clarke.

Sept. 12, 2023 – Advisory for AI-Generated Content Act

Bill: Advisory for AI-Generated Content Act (S.2765)

Sponsors: Sen. Pete Ricketts (R-NE)

Snapshot: The bill would make it unlawful for an AI-generating entity to create covered AI-generated material unless such material includes a watermark that meets the standards established by the FTC.

Jul. 27, 2023 – Digital Consumer Protection Commission Act of 2023

Bill: Digital Consumer Protection Commission Act of 2023 (S.2597)

Sponsors: Sens. Elizabeth Warren (D-MA) and Lindsey Graham (R-SC)

Snapshot: The bill would rein in Big Tech by establishing a new commission to regulate online platforms. The commission would have concurrent jurisdiction with the FTC and DOJ, and would be responsible for overseeing and enforcing the new statutory provisions in the bill and implementing rules to promote competition, protect privacy, protect consumers, and strengthen our national security.

What the sponsors are saying: “The digital revolution provided new opportunities for promoting social interaction, starting businesses, and democratizing information. But digital advancement has a dark side. Today, a tiny number of Big Tech companies generate most of the world’s Internet traffic and effectively regulate Americans’ digital lives. Big Tech companies have far too much power — over our economy, our society, and our democracy. Tech monopolies suppress competition by buying up rivals, preferencing their own products, and charging hefty commissions to other businesses. To get ever more users and data, social media companies manipulate users to drive them to addiction. They target kids with content on self-harm, eating disorders, and bullying. And they leave consumers in the dark about how their data is collected or used, and fall prey to massive data leaks that leave us vulnerable to criminal activity, foreign interference, and disinformation,” Warren and Graham said in a joint statement.

Jul. 27, 2023 – AI Labeling Act of 2023

Bill: AI Labeling Act of 2023 (S.2691)

Sponsors: Sens. Brian Schatz (D-HI) and John Kennedy (R-LA)

Snapshot: The bill would require generative artificial intelligence (AI) systems to include a clear and conspicuous disclosure that identifies the content as AI-generated and that is permanent or not easily removable by subsequent users. The bill also outlines obligations for developers and third-party licensees to implement procedures to prevent downstream use of AI systems without the required disclosure.

Jul. 27, 2023 – CREATE AI Act of 2023

Bill: CREATE AI Act of 2023 (S.2714)

Sponsors: Sens. Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD)

Snapshot: The bill would establish the National Artificial Intelligence Research Resource as a shared national research infrastructure that provides AI researchers and students from diverse backgrounds with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.

What the sponsors are saying: “We know that AI will be enormously consequential. If we develop and deploy this technology responsibly, it can help us augment our human creativity and make major scientific advances, while also preparing American workers for the jobs of the future. If we don’t, it could threaten our national security, intellectual property, and civil rights,” said Sen. Heinrich. “The bipartisan CREATE AI Act will help us weigh these challenges and unleash American innovation by making the tools to conduct important research on this cutting-edge technology available to the best and brightest minds in our country. It will also help us prepare the future AI workforce, not just for Silicon Valley companies, but for the many industry sectors that will be transformed by AI. By truly democratizing and expanding access to AI systems, we can maintain our nation’s competitive lead while ensuring these rapid advancements are a benefit to our society and country — not a threat.”

Jul. 20, 2023 – Consumer Safety Technology Act

Bill: Consumer Safety Technology Act (H.R. 4814)

Sponsors: Reps. Darren Soto (D-FL), Michael Burgess (R-TX), Brett Guthrie (R-KY), and Lori Trahan (D-MA)

Snapshot: The bill would direct the Consumer Product Safety Commission to launch a pilot program exploring the use of artificial intelligence to track injury trends, identify hazards, monitor recalls, or identify products not meeting importation requirements; require the Department of Commerce and other agencies to study blockchain technology in the context of consumer products and safety; and direct the Department of Commerce and FTC to report on their efforts to address unfair or deceptive trade practices related to digital tokens.

What the sponsors are saying: “Emerging technologies like artificial intelligence, blockchain, and digital tokens are playing a growing importance in our daily lives and are proving to be an economic driver for the 21st-century economy. It is critical that the U.S. acts as a global leader in these emerging technologies to ensure our democratic values remain at the forefront of this technological development. As a responsible global leader, the U.S. must strike the appropriate balance of providing an environment that fosters innovation while ensuring consumer protection,” said Rep. Soto.

“By directing the Consumer Product Safety Commission (CPSC) to explore the application of artificial intelligence, this legislation would proactively track injury trends, identify hazards, and monitor recalls swiftly, ensuring timely interventions and improved safety standards for all,” said Rep. Burgess.

“For too long, Congress and regulators have struggled to keep up with new and emerging technologies – only stepping in after consumers are harmed. With the Act, we have the opportunity to flip that script,” said Rep. Trahan.

“The U.S. has been a global technology leader. It is crucial that we continue to lead in technological innovation and ensure we do not become reliant on foreign adversaries like the Chinese Communist Party to stay connected. The Act would authorize the Department of Commerce and other agencies to study blockchain technology to help us unleash its capabilities and protect consumers from fraud. As new technologies are quickly developed, it is critical we stay ahead of the curve by developing policies that allow our home-grown technologies to thrive domestically and globally,” said Rep. Guthrie.

Jul. 6, 2023 – Jobs of the Future Act of 2023

Bill: Jobs of the Future Act of 2023 (H.R. 4498)

Sponsors: Reps. Darren Soto (D-FL), Lori Chavez-DeRemer (R-OR), Lisa Blunt Rochester (D-DE), and Andrew Garbarino (R-NY).

Snapshot: The bill would authorize the Department of Labor and the National Science Foundation to work with private and public stakeholders to create a report analyzing the future growth of artificial intelligence and its impact on the American workforce. The study authorized by the bill would analyze: (1) Industries and occupations projected to have the most growth in AI use, and whether the technology is likely to result in the enhancement of workers’ capabilities or their replacement; (2) Opportunities for various stakeholders to influence the impact of AI on workers across various industries; (3) Which workforce demographics currently stand to be most affected by the proliferation of AI; (4) Skills, expertise, and education needed by workers to develop, operate, or work alongside AI; (5) Data required to evaluate the impact of AI on the U.S. workforce, and the availability of such data; (6) Methods by which these skills can effectively be delivered to the U.S. workforce; and (7) Potential for various academic institutions to disseminate necessary skills and training.

What the sponsors are saying: “As AI continues to grow rapidly, the Jobs of the Future Act will ensure we have information on industries projected to have the most growth, demographics affected by these changes, and more. In Central Florida, AI is increasingly being utilized in tourism, agriculture, aerospace, at Lake Nona’s Medical City, and at NeoCity for microchip manufacturing, so it is important for us to have this data. We hope the report generated as a result of our bill will help organizations identify opportunities for workers and prepare for the changes created by AI,” said Rep. Soto.

“As the Future of Work Caucus co-chair, I’m constantly looking at ways in which our world and our workforce is evolving. There’s no question that the development of artificial intelligence poses many challenges and opportunities, particularly when it comes to our economy,” said Rep. Blunt Rochester. “We, as lawmakers, have to come to the table with open eyes and relevant data to be able to make informed decisions. That’s why I’m thrilled to be introducing the Jobs of the Future Act of 2023 with my colleagues to help us gather that data. Working with the Department of Labor and the National Science Foundation, we can bring together public and private stakeholders, gather and channel important insights, and ultimately harness the power of AI to unleash the full potential of the American economy.”

Jun. 14, 2023 – A Bill to Waive Immunity Under Section 230 for Generative AI

Bill: A bill to waive immunity under Section 230 of the Communications Act for claims and charges related to generative AI (S.1993)

Sponsors: Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT)

Snapshot: The legislation would amend Section 230 by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI.

What the sponsors are saying: “We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” said Hawley. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”

“AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield,” said Senator Blumenthal. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”

Jun. 7, 2023 – Transparent Automated Governance Act

Bill: Transparent Automated Governance (“TAG”) Act (S.1865)

Sponsors: Sens. Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK)

Snapshot: The legislation would direct agencies to be transparent when using automated and augmented systems to interact with the public or make critical decisions, and for other purposes.

What the sponsors are saying: “Artificial intelligence is already transforming how federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies,” Peters said. “This bipartisan bill will ensure taxpayers know when they are interacting with certain federal AI systems and establishes a process for people to get answers about why these systems are making certain decisions.”

Jun. 5, 2023 – AI Disclosure Act of 2023

Bill: AI Disclosure Act of 2023 (H.R.3831)

Sponsors: Rep. Ritchie Torres (D-NY)

Snapshot: The legislation would require all material generated by artificial intelligence technology to include the following disclaimer – "DISCLAIMER: this output has been generated by artificial intelligence." It would apply to videos, photos, text, audio, and any other AI-generated material. The Federal Trade Commission (FTC) would be responsible for enforcement, and violations could result in civil penalties.

What the sponsors are saying: “Artificial intelligence is the most revolutionary technology of our time. It has the potential to be a weapon of mass disinformation, dislocation, and destruction,” said Rep. Torres. “Carefully crafting a regulatory framework for managing the existential risks of AI will be one of the central challenges confronting Congress in the years and decades to come. There is danger in both under-regulating and over-regulating. The simplest place to start is disclosure. All generative AI should be required to disclose itself as AI. Disclosure is by no means a magic bullet, but it’s a common-sense starting point to what will surely be a long road toward federal regulation.”

May 16, 2023 – AI Shield for Kids Act

Bill: Artificial Intelligence Shield for Kids (“ASK”) Act (S.1626)

Sponsors: Sen. Rick Scott (R-FL)

Snapshot: The legislation would require the Federal Communications Commission, in consultation with the Federal Trade Commission, to issue rules prohibiting entities from offering artificial intelligence features in their products to minor consumers without parental consent, and for other purposes.

What the sponsors are saying: “Artificial intelligence surely has productive uses, but it can also present grave threats, especially to our children. Today, as the Senate Committee on Homeland Security and Governmental Affairs discusses the threats posed by AI, I am introducing my ASK Act to protect our kids and give parents the power to decide what their children are exposed to without paying ridiculous fees,” Sen. Scott said.

Updated as of Sept. 9, 2025