The chief executive of ChatGPT creator OpenAI has said he is disappointed that the US Federal Trade Commission's investigation of his company was disclosed through a leak. The agency is examining whether OpenAI breached consumer protection laws by scraping public data and publishing false information through its chatbot.
“It is very disappointing to see the FTC's request start with a leak and does not help build trust,” Sam Altman said on Twitter.
“That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course, we will work with the FTC.”
The FTC has yet to comment or post any statements on its website regarding the investigation.
The Washington Post, which reported the investigation, published the FTC’s 20-page civil investigative demand (CID) that lays out the focus of the probe.
In it, the FTC demands that OpenAI reach out to the government agency's legal counsel for a meeting by telephone within 14 days and orders the company to refrain from “routine procedures for document destruction and [to] take measures to prevent the destruction of documents” that are relevant to the investigation.
The FTC's document shows that the investigation's primary focus is on whether OpenAI has “engaged in unfair or deceptive privacy or data security practices, or engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm, in violation of Section 5 of the FTC Act”.
Technology, particularly AI, is at an inflection point. As large language models become more pervasive and sophisticated, experts are raising alarm over potential dangers.
Since OpenAI launched its generative AI platform ChatGPT late last year, attracting more than 100 million users in a matter of months, companies have been racing to bring AI-powered products to market.
Investors poured more than $4.2 billion into generative AI start-ups across 215 deals in 2021 and 2022, after interest in the field surged in 2019, according to recent data from CB Insights.
The technology has been in the hands of consumers for more than a decade, and there are concerns that it cannot be effectively regulated.
Critics say that while AI tools and automation technology can craft human-like text, music, images and computer code, boosting productivity and making companies more efficient, they also pose numerous risks.
Detractors have said the technology could replace workers, cause layoffs, and create false images and videos that spread disinformation and influence elections.
Globally, generative AI could expose the equivalent of 300 million full-time jobs to automation across major economies, Goldman Sachs said in a report in March. Lawyers and administrative staff would be among those at greatest risk of becoming redundant.
In May, scientists and tech industry leaders from Microsoft and Google, including Mr Altman and Geoffrey Hinton, often described as the “godfather of AI”, were among hundreds of leading figures who signed a statement warning of the dangers AI poses to humankind.
Earlier in the year, more than 1,000 researchers and technology experts, including Elon Musk, signed another letter that called for a six-month pause on AI development, saying it posed “profound risks to society and humanity”.
In May, Mr Altman told a US Senate panel that regulating artificial intelligence was “critical” and he urged Congress to impose new rules on Big Tech.
US President Joe Biden has said his country needs to address concerns about AI while several other world governments, including EU states, have expressed a desire to regulate the rapidly emerging technology before it is too late.
Last month, Mr Biden convened a group of technology leaders to debate what he said were the “risks and enormous promises” of AI.
The FTC's civil investigative demand asks OpenAI to list each website or mobile application that it owns or operates.
It also seeks to find out which third parties have access to the company's large language models and to specify if their access is paid or unpaid.
The civil investigative demand also requires OpenAI to list its top ten customers or licensors and to explain how it retains and uses consumer information, how it obtains the data used to train its large language models, and which algorithms it uses to process and understand natural language.
The document also asks the company to list the steps it has taken to assess or mitigate risks and how it addresses statements about “real individuals that are false, misleading or disparaging”.