Urge Congress to Support the AI LEAD Act & the GUARD Act!


Countless children have been harmed or have even died because of their interactions with AI chatbots. Regulation is imperative to prevent further tragedies. To this end, two important bills have been introduced in the Senate: the AI LEAD Act and the GUARD Act.

The AI LEAD Act creates a product liability framework for artificial intelligence systems. With this law, AI developers and deployers can be held liable for the failure to exercise reasonable care in designing their product.

The GUARD Act requires AI chatbots to implement age verification for their users and prevent children from accessing AI companions (chatbots that mimic human interactions and encourage emotional bonding). It also mandates that chatbots regularly remind the user that they are not human beings or licensed professionals.


Having trouble taking action? You can contact your Senators directly here. Feel free to use the email template below:


Dear Senator [First name, Last name],


I write to you today to ask that you support the AI LEAD Act and the GUARD Act.

Countless children have been harmed or have even died because of their interactions with AI chatbots. These two bills are imperative to prevent further tragedies.

The AI LEAD Act

Sponsored by Senators Josh Hawley (R-MO) and Dick Durbin (D-IL), the AI LEAD Act creates a product liability framework for artificial intelligence systems, confirming that they are products, not services. It requires AI companies to build products with the safety of users in mind, not just the revenue those products will generate.

With this law, AI developers and deployers can be held liable for the failure to exercise reasonable care when designing their product.  

The GUARD Act

Sponsored by Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the GUARD Act requires AI chatbots to implement age verification for their users and prevent children from accessing AI companions (chatbots that mimic human interactions and encourage emotional bonding). It also requires that AI chatbots regularly remind the user that they are not human beings or licensed professionals.

Common Sense Media conducted a survey finding that 72% of children have used AI companions, and that almost a quarter of the users surveyed have shared personal information with AI, leaving them even more vulnerable to exploitation and harm.

The Center for Countering Digital Hate has also released research showing that ChatGPT will tell 13-year-olds how to get drunk or high and how to hide their intoxication while at school. It will instruct them on how to conceal an eating disorder, generate a plan to commit suicide, and even draft a suicide note to the child's loved ones.

There is a lot of dialogue swirling around about banning AI regulation in the name of "innovation," but the consequences of zero guardrails would be catastrophic. We cannot sacrifice our children's lives on the altar of innovation at any cost.

It is not an either/or choice between innovation and the safety of our kids; we can innovate responsibly, with user safety in mind. But tech companies will not do this unless they are required to by law.

Please support the AI LEAD Act and the GUARD Act to hold tech companies accountable for prioritizing safety. Children across the country need your protection.


Thank you,

[Your Name] 

