State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix ‘delusional’ outputs

After a string of disturbing mental health incidents involving AI chatbots, a group of state attorneys general has sent a letter to the AI industry’s top companies, warning them to fix “delusional outputs” or risk being in breach of state law. 

The letter, signed by dozens of AGs from U.S. states and territories with the National Association of Attorneys General, asks the companies, including Microsoft, OpenAI, Google, and 10 other major AI firms, to implement a wide range of new internal safeguards to protect their users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI were also included in the letter.

The letter comes as a fight over AI regulation has been brewing between state and federal authorities.

Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic ideations, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company,” the letter states.

“GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations,” the letter states, pointing to a number of well-publicized incidents over the past year, including suicides and homicide, in which violence has been linked to heavy AI use. “In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”

The AGs also recommend that companies handle mental health incidents the same way tech companies handle cybersecurity incidents: with clear and transparent incident reporting policies and procedures.

Companies should develop and publish “detection and response timelines for sycophantic and delusional outputs,” the letter states. Much as data breaches are currently handled, companies should also “promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs,” the letter says. 


Another ask is that the companies develop “reasonable and appropriate safety tests” on GenAI models to “ensure the models do not produce potentially harmful sycophantic and delusional outputs.” Those tests should be conducted before the models are ever offered to the public, the letter adds.  

TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment prior to publication. This article will be updated if the companies respond.

Tech companies developing AI have had a much warmer reception at the federal level.

The Trump administration has made it known it’s unabashedly pro-AI, and, over the past year, several attempts have been made to pass a national moratorium on state-level AI regulation. So far, those attempts have failed, thanks in part to pressure from state officials.

Not to be deterred, Trump announced Monday that he plans to sign an executive order next week that will restrict the ability of states to regulate AI. The president said in a post on Truth Social that he hoped his EO would stop AI from being “DESTROYED IN ITS INFANCY.”

