An AI Ethics Group Claims ChatGPT Violates FTC Regulations and Has Called for an Investigation – OpenAI’s GPT model does not meet the safety standards outlined by the FTC


According to the complaint, OpenAI’s GPT model does not meet the safety standards outlined by the FTC, and there is a request to halt development.

The Center for AI and Digital Policy submitted a complaint to the Federal Trade Commission, calling for an investigation into OpenAI and a halt to its development of large language models. The complaint claims that OpenAI’s GPT-4 model is biased, deceptive, and poses a risk to privacy and public safety.

The Center for AI and Digital Policy (CAIDP) filed the complaint shortly after more than 500 AI experts signed an open letter demanding a pause on the development of LLMs more advanced than GPT-4, citing the potential risks they pose. CAIDP’s president, Marc Rotenberg, was among the signatories. The complaint centers on the FTC’s guidance that AI systems should be transparent, explainable, fair, and empirically sound while fostering accountability, and argues that GPT-4 fails to meet these standards.

The complaint alleges that OpenAI released GPT-4 without an independent assessment and without providing a means for outside parties to replicate its results. CAIDP expressed concern that the system could be used to spread misinformation, pose cybersecurity risks, and further entrench biases already present in AI models.

The group stated that independent oversight and evaluation of commercial AI products offered in the United States is needed, and that it is time for the FTC to act.

The FTC has expressed concern over the potential risks that new AI systems could pose to consumers. Through a series of blog posts, the agency has examined how chatbots and synthetic media could potentially complicate the ability to distinguish between what is real and fake online, creating possible opportunities for fraud and deception at scale.

The FTC noted that there is already evidence of fraudsters using these tools to create convincing but false content quickly and cheaply, distributing it to large audiences or targeting specific communities and individuals.

The Future of Life Institute released a letter this week expressing concern about potential societal-scale risks from AI. Experts in the field differ on how worried to be about future LLMs and whether they should be considered to approach human-level intelligence. While many agree that policymakers need to establish rules and regulations to guide AI development, there is no consensus on the matter.

According to Sarah Myers West, Managing Director of the AI Now Institute, the hype surrounding AI systems can exaggerate their capabilities and distract from important issues, such as the industry’s heavy reliance on a small group of firms.
