As amended on 2/3/2026:

- Requires an operator of an AI companion to disclose the companion's nonhuman status if a user could be misled into believing they are interacting with a human.
- Requires operators to implement a protocol to prevent an AI companion from producing suicidal or self-harm content and to refer users who express such ideation to crisis services.
- Imposes specific safeguards for users known or suspected to be minors, including periodic reminders of the AI's nonhuman status and measures to prevent the AI from producing sexually explicit material.
- Authorizes the Attorney General to enforce the act and seek civil penalties of up to $10,000 per violation.

The 2/3/2026 amendment applies these obligations when the operator knows or should have known that a user is a minor, rather than when a minor is reasonably suspected. It requires the disclosure that companion chatbots may not be suitable for minors only if the service is offered to users whom the operator knows to be minors.
| Date | Chamber | Action |
|---|---|---|
| Mar 18, 2026 | H | Referred to Communications & Technology |
| Mar 17, 2026 | S | Hearing Scheduled - Appropriations |
| Mar 17, 2026 | S | (Remarks see Senate Journal Page ....) |
| Mar 17, 2026 | S | Third consideration and final passage (49-1) |
| Mar 17, 2026 | S | Re-reported as committed |
| Feb 4, 2026 | S | Re-referred to Appropriations |
| Feb 4, 2026 | S | Second consideration |
| Feb 3, 2026 | S | Amended on second consideration |
| Field | Value |
|---|---|
| Last Action | Mar 18, 2026 |
| Year | 2025 |
| Bill Type | Bill |
| Created | Nov 17, 2025 |
| Updated | Mar 19, 2026 |