As Governments Focus on AI, U.S. Senators Introduce Bipartisan Framework


September 13, 2023 - By TFL


key points

U.S. Senators Richard Blumenthal and Josh Hawley have announced the launch of a bipartisan framework focused on AI.

The proposal aims to impose licensing requirements for training and deployment, liability for harms, and limitations on international transfer of software and hardware.

The framework “should put us on a path to addressing the promise and peril AI portends,” the senators say.


Amid ongoing hearings in Washington that focus on the rise and widespread adoption of artificial intelligence (“AI”), including generative AI, and the need for legislation to address corresponding ethics, privacy, infringement, and transparency/neutrality risks, U.S. Senators Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) have announced the launch of a bipartisan framework focused on AI. Calling it “the first tough, comprehensive legislative blueprint for real, enforceable AI protections” in the U.S., Sen. Blumenthal, who is the chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, and Sen. Hawley, who is the Subcommittee’s ranking member, say that the framework “should put us on a path to addressing the promise and peril AI portends.” 

The framework includes proposed requirements for the licensing and auditing of AI, the creation of an independent federal office to oversee the technology, liability for companies for privacy and civil rights violations, and requirements for data transparency and safety standards. Sen. Blumenthal stated in connection with the release of the framework that “hearings with industry leaders and experts [will continue],” as will “other conversations and fact finding to build a coalition of support for legislation.” 

In one show of early support, Institute for AI Policy executive director Daniel Colson stated that the AI governance framework is “a major step in the right direction for managing the risks from AI,” noting that “licensing requirements for training and deployment, liability for harms, and limitations on international transfer of software and hardware are three of the most important policy objectives for safety advocates.”

At a high level, the framework aims to … 

Establish a Licensing Regime Administered by an Independent Oversight Body: Companies developing sophisticated general-purpose AI models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition) should be required to register with an independent oversight body. Licensing requirements should include the registration of information about AI models and be conditioned on developers maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs. The oversight body should have the authority to conduct audits of companies seeking licenses and cooperate with other enforcers, including considering vesting concurrent enforcement authority in state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI, such as effects on employment. Personnel must be subject to strong conflict of interest rules to mitigate capture and revolving door concerns.

Ensure Legal Accountability for Harms: Congress should ensure that AI companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Where existing laws are insufficient to address new harms created by AI, Congress should ensure that enforcers and victims can take companies and perpetrators to court, including clarifying that Section 230 does not apply to AI. In particular, Congress must take steps to directly prohibit harms that are already emerging from AI, such as non-consensual explicit deepfake imagery of real people, production of child sexual abuse material from generative AI, and election interference.

Defend National Security and International Competition: Congress should utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models, hardware and related equipment, and other technologies to China, Russia, and other adversary nations, as well as countries engaged in gross human rights violations.

Promote Transparency: Congress should promote responsibility, due diligence, and consumer redress by requiring transparency from the companies developing and deploying AI systems. Specifically: (1) developers should be required to disclose essential information about the training data, limitations, accuracy, and safety of AI models to users and to companies deploying those systems, including through simple, comprehensible disclosures, and to provide independent researchers access to data necessary to evaluate AI model performance; (2) users should have a right to affirmative notice that they are interacting with an AI model or system; (3) AI system providers should be required to watermark or otherwise provide technical disclosures of AI-generated deepfakes; and (4) the new oversight body should establish a public database and reporting regime so that consumers and researchers have easy access to AI model and system information, including when significant adverse incidents occur or failures in AI cause harms.

Protect Consumers and Kids: Companies deploying AI in high-risk or consequential situations should be required to implement safety brakes, including giving notice when AI is being used to make decisions, particularly adverse decisions, and providing the right to human review of those decisions. Consumers should have control over how their personal data is used in AI systems, and strict limits should be imposed on generative AI involving kids.
