The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.
In a letter sent to the Microsoft-backed company, the FTC said it would look at whether people have been harmed by the AI chatbot creating false information about them, as well as whether OpenAI has engaged in “unfair or deceptive” privacy and data security practices.
Generative AI products are increasingly in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm about the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments.
In May, the FTC fired a warning shot to the industry, saying it was “focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers”.
In its letter, the US regulator asked OpenAI to share internal material ranging from how the group uses or retains user information to steps the company has taken to address the risk of its model producing statements that are “false, misleading or disparaging”.
The FTC declined to comment on the letter, which was first reported by the Washington Post. OpenAI declined to comment.
Lina Khan, FTC chair, on Thursday testified before the House judiciary committee and faced strong criticism from Republican lawmakers over her tough enforcement stance.
Asked about the investigation during the hearing, Khan declined to comment on the specifics of the probe but said the regulator’s broader concerns involved ChatGPT and other AI services “being fed a huge trove of data” while there were “no checks on what type of data is being inserted into these companies”.
She added: “We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else. We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we’re concerned about.”
Experts have raised concerns about the huge amount of data hoovered up by the language models behind ChatGPT. ChatGPT had more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1 million people in 169 countries within two weeks of its release in January.
Users have reported that ChatGPT has fabricated names, dates and facts, as well as fake links to news websites and academic paper references, an issue known in the industry as “hallucinations”.
The FTC’s probe digs into the technical details of how ChatGPT was designed, including the company’s work to fix hallucinations and its oversight of human reviewers, areas that directly affect consumers. The regulator has also asked for information on consumer complaints and on the company’s efforts to assess how well consumers understand the chatbot’s accuracy and reliability.
In March, Italy’s privacy watchdog temporarily banned ChatGPT while it examined, among other issues, the US company’s collection of personal information following a cyber security breach. The service was reinstated a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.
OpenAI chief executive Sam Altman has previously admitted that ChatGPT has weaknesses. “ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness,” he wrote on Twitter in December. “It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” – Copyright The Financial Times Limited 2023