GPT-4 poses too many risks and releases should be halted, AI group tells FTC

[Image: The ChatGPT website displayed on a smartphone screen. Credit: Getty Images | VCG]

A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.

OpenAI “has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment,” said a complaint submitted to the FTC today by the Center for Artificial Intelligence and Digital Policy (CAIDP).

Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

Noting that the FTC “has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability,’” the nonprofit group argued that “OpenAI’s product GPT-4 satisfies none of these requirements.”

OpenAI unveiled GPT-4 on March 14, and it is available to ChatGPT Plus subscribers; Microsoft’s Bing is already using it. OpenAI called GPT-4 a major advance, saying it “passes a simulated bar exam with a score around the top 10 percent of test takers,” while GPT-3.5 scored around the bottom 10 percent.

OpenAI said it had external experts assess the potential risks posed by GPT-4, but CAIDP isn’t the first group to argue that the AI field is moving too fast. As we reported yesterday, the Future of Life Institute published an open letter urging AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter’s long list of signers included many professors alongside some notable tech-industry names like Elon Musk and Steve Wozniak.

Group claims GPT-4 violates the FTC Act

CAIDP said the FTC should probe OpenAI using its authority under Section 5 of the Federal Trade Commission Act to investigate, prosecute, and prohibit “unfair or deceptive acts or practices in or affecting commerce.” The group claims that “the commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC’s well-established guidance to businesses on the use and advertising of AI products, as well as the emerging norms for the governance of AI that the United States government has formally endorsed and the Universal Guidelines for AI that leading experts and scientific societies have recommended.”

The FTC should “halt further commercial deployment of GPT by OpenAI,” require independent assessment of GPT products prior to deployment and “throughout the GPT AI lifecycle,” “require compliance with FTC AI Guidance” before future deployments, and “establish a publicly accessible incident reporting mechanism for GPT-4 similar to the FTC’s mechanisms to report consumer fraud,” the group said.

More broadly, CAIDP urged the FTC to issue rules requiring “baseline standards for products in the Generative AI market sector.”

We contacted OpenAI and will update this article if we get a response.

“OpenAI has not disclosed details”

CAIDP’s president and founder is Marc Rotenberg, who previously co-founded and led the Electronic Privacy Information Center. Rotenberg is an adjunct professor at Georgetown Law and served on the Expert Group on AI convened by the Organisation for Economic Co-operation and Development (OECD). He also signed the Future of Life Institute’s open letter, which is cited in the CAIDP complaint.

CAIDP’s chair and research director is Merve Hickok, who is also a data ethics lecturer at the University of Michigan. She testified in a congressional hearing about AI on March 8. CAIDP’s list of team members includes many other people involved in technology, academia, privacy, law, and research fields.

The FTC last month warned companies to analyze “the reasonably foreseeable risks and impact of your AI product before putting it on the market.” The agency also raised various concerns about “AI harms such as inaccuracy, bias, discrimination, and commercial surveillance creep” in a report to Congress last year.

GPT-4 poses many types of risks, and its underlying technology hasn’t been adequately explained, CAIDP told the FTC. “OpenAI has not disclosed details about the architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods,” the CAIDP complaint said. “The practice of the research community has been to document training data and training techniques for Large Language Models, but OpenAI chose not to do this for GPT-4.”

“Generative AI models are unusual consumer products because they exhibit behaviors that may not have been previously identified by the company that released them for sale,” the group also said.
