US FTC opens investigation into OpenAI over misleading statements – document

(Reuters) - The U.S. Federal Trade Commission has opened an investigation into OpenAI on claims it has run afoul of consumer protection laws by putting personal reputations and data at risk, the strongest regulatory threat yet to the Microsoft-backed startup.

The FTC this week sent a 20-page demand for records about how OpenAI, the maker of the generative artificial intelligence chatbot ChatGPT, addresses risks tied to its AI models.

The agency is probing whether OpenAI engaged in unfair practices that resulted in “reputational harm” to consumers.

The investigation marks another high-profile effort to rein in technology companies by the FTC’s progressive chair, Lina Khan, days after the agency suffered a big loss in court in its fight to stop Microsoft from buying Activision Blizzard. The FTC said it would appeal the court decision.

One of the questions the FTC has asked OpenAI pertains to the steps the company has taken to address its products’ potential to “generate statements about real individuals that are false, misleading, or disparaging.”

The Washington Post was the first to report the probe. The FTC declined to comment, while OpenAI did not respond to a request for comment.

OpenAI CEO Sam Altman said in a series of tweets on Thursday that the latest version of the company’s technology, GPT-4, was built on years of safety research and the systems were designed to learn about the world and not private individuals.

“Of course we will work with the FTC,” he said.

OpenAI launched ChatGPT in November, enthralling consumers and fuelling one-upmanship at large tech companies to showcase how their AI-imbued products will change the way societies and businesses operate.

The AI race has raised widespread concerns about potential risks and regulatory scrutiny of the technology.

Global regulators are aiming to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, Reuters reported in May.

In the United States, Senate Majority leader Chuck Schumer has called for “comprehensive legislation” to advance and ensure safeguards on AI, and pledged to hold a series of forums later this year aimed at “laying down a new foundation for AI policy.”

OpenAI ran into trouble in Italy in March, when the regulator had ChatGPT taken offline over accusations that the company had violated the European Union’s GDPR, a wide-ranging privacy regime that took effect in 2018.

ChatGPT was later reinstated after the U.S. company agreed to install age verification features and let European users block their information from being used to train the AI model.

(Reporting by Diane Bartz in Washington and Mrinmay Dey, Samrhitha Arunasalam, Aditya Soni and Yuvraj Malik in Bengaluru; Editing by Nivedita Bhattacharjee, Mark Porter and Maju Samuel)