Risks and Regulations: Charting the Rise of Generative AI

Image: ChatGPT

April 12, 2023 - By TFL

The Biden Administration is focusing on generative artificial intelligence (“AI”) if a new call for comment from the Department of Commerce is any indication. The Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced a call for public comment on Tuesday, stating that it is looking to gather input to advance its efforts on the AI front, and noting that it will use the insights to inform the administration’s ongoing work to “ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities,” including “policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

“While people are already realizing the benefits of AI, there are a growing number of incidents where AI and algorithmic systems have led to harmful outcomes,” according to the NTIA. “There is also growing concern about potential risks to individuals and society that may not yet have manifested, but which could result from increasingly powerful systems,” it continued. Against that backdrop, the agency asserts that “companies have a responsibility to make sure their AI products are safe before making them available [and] businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.”

In particular, the NTIA says that it is seeking input on “what policies should shape the AI accountability ecosystem,” including: (1) what kinds of trust and safety testing AI development companies and their enterprise clients should conduct; (2) what kinds of data access are necessary to conduct audits and assessments; (3) how regulators and other actors can incentivize and support credible assurance of AI systems, along with other forms of accountability; and (4) what different approaches might be needed in different industry sectors, such as employment or health care.

AI Risks, Regulation & Guidance

The NTIA’s call for comment comes amid an array of proposed legislation, regulation, and informal guidance relating to AI that was introduced in the first quarter of the year. “At the federal level, some members of Congress have noted concerns with the rapid uptake of AI technologies,” according to a note from Covington & Burling LLP, which points to a House resolution (H. Res. 66) introduced by Rep. Ted Lieu (D-CA-36) that “urges Congress to focus on AI and would resolve that the House of Representatives supports focusing on AI to ensure development of AI is done in a way that ‘is safe, ethical, and respects the rights and privacy of all Americans’ and widely distributes AI benefits while minimizing risks,” including privacy, security, ethical, and other legal risks.

At the state level, multiple bills have been introduced with the aim of regulating AI. For example, in Massachusetts, Senate Bill No. 31 – which the state legislature describes as “an Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT” – establishes “operating standards” for the makers/operators of “large-scale generative AI models” (i.e., machine learning models with a capacity of at least one billion parameters that generate text or other forms of output, such as ChatGPT), and requires registration with the state Attorney General.

Meanwhile, in California, the Covington & Burling attorneys state that A.B. 331 “would regulate automated decision tools by requiring, among other things, ‘deployers’ (defined as a person, partnership, state or local government agency, or corporation that uses an automated decision tool to make a decision that has a legal, material, or similar significant effect on an individual’s life) to perform impact assessments for any automated decision tool, notify persons about the use of the tool, and prohibit using a tool that contributes to algorithmic discrimination.”

As for federal agency and regulatory developments, in January, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework “to better manage risks to individuals, organizations, and society associated with AI.” Developed in response to a directive from Congress and intended for voluntary use, NIST says that the Framework provides “a flexible, structured and measurable process that will enable organizations to address AI risks … in context-specific use cases and at any stages of the AI life cycle.”

Immediate Issues

The NTIA highlighted the “potential risks to individuals and society that may not yet have manifested” from generative or other forms of AI, but many of those issues are already at hand. “AI policy implications are immediate, not far off matters,” says Fenwick McKelvey, an associate professor in Information and Communication Technology Policy at Concordia University. “Because GPT-4 is trained on the entire internet and has expressly commercial ends, it raises questions about fair dealing and fair use.” While the Copyright Office has provided some guidance in terms of registrability, questions remain, including whether the use of copyrighted works as training data – and the output of AI generators – amounts to infringement.

“Substantive disputes are anticipated in the wake of generative AI gaining widespread use,” RPC’s Nicholas McKenzie and Lauren Butler stated recently. For instance, they note that “visual artists are not happy with AI products being trained on their work without their consent,” giving rise to concerns about widespread infringement and raising questions about the viability of the fair use defense.

McKenzie and Butler assert that “another crucial factor causing concern is that the underlying neural networks and deep learning that next-gen AIs use means that it can be difficult, and often impossible, to understand exactly how a generative AI has reached a decision or created its masterpiece.” This has led “some businesses to begin to crack down on allowing employees to use generative AI at work due to the fear of confidential information shared with AIs being leaked,” which could have implications from a trade secret perspective, among other things. 

Still yet, when it comes to privacy matters, McKelvey claims that ChatGPT’s approach is “hard to distinguish from another AI application, Clearview AI, as the models for both were trained using massive amounts of personal information collected on the open internet.” This has prompted action from Italy’s data-protection authority, for one, which announced a ban on ChatGPT this month due to privacy concerns. The regulator said on April 1 that it would ban and investigate ChatGPT-developer OpenAI – including through the lens of the General Data Protection Regulation – “with immediate effect.” A rep for OpenAI says that the company complies with privacy laws.

LOOKING FORWARD: The influx of venture capital, the array of new AI projects, and rising regulatory interest in this realm suggest that generative AI is only slated to grow further in the near future, and “with development seemingly unleashed,” per McKenzie and Butler, “we can expect the next 12-18 months to bring us more bots, products and experiments as new generative AIs hit the market.”
