Senators propose banning teens from using AI chatbots

The new rules would force AI companies to verify whether users are over 18.

A new piece of legislation could require AI companies to verify the ages of everyone who uses their chatbots. Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the GUARD Act on Tuesday, which would also ban everyone under 18 from accessing AI chatbots, as reported earlier by NBC News.


The bill comes just weeks after safety advocates and parents attended a Senate hearing to call attention to the impact of AI chatbots on kids. Under the legislation, AI companies would have to verify ages by requiring users to upload a government ID or provide validation through another “reasonable” method, such as a face scan.
AI chatbots would be required to disclose that they aren’t human at 30-minute intervals under the bill. They would also have to include safeguards preventing them from claiming to be human, similar to an AI safety bill recently passed in California. The bill would additionally make it illegal to operate a chatbot that produces sexual content for minors or promotes suicide.
“Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” Blumenthal says in a statement provided to The Verge. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”