As substituted on 1/27/2026:

- Requires large developers of frontier artificial intelligence models (those with at least $500 million in annual revenue) to create, implement, and publish safety plans addressing catastrophic risks, to produce assessments of catastrophic risk, and to implement cybersecurity practices.
- Requires large frontier developers that operate a covered chatbot likely to be accessed by minors to write, implement, and publish a child protection plan that mitigates risks to children and uses third-party risk assessments.
- Prohibits developers from making materially false or misleading statements regarding covered risks or compliance.
- Requires large frontier developers to report safety incidents.
- Establishes whistleblower protections for employees.
- Adds the AI Transparency Enforcement Restricted Account, funded by civil penalties (new in the 1/27/2026 substitution).
| Date | Chamber | Action |
|---|---|---|
| Mar 6, 2026 | House | filed |
| Mar 6, 2026 | House | strike enacting clause |
| Mar 3, 2026 | House | 3rd Reading Calendar to Rules |
| Feb 5, 2026 | House | circled |
| Feb 5, 2026 | House | 3rd reading |
| Feb 4, 2026 | LFA | fiscal note publicly available for HB0286S01 |
| Feb 2, 2026 | LFA | fiscal note sent to sponsor for HB0286S01 |
| Jan 28, 2026 | House | 2nd reading |
| Field | Value |
|---|---|
| Last Action | Mar 6, 2026 |
| Year | 2026 |
| Bill Type | Bill |
| Created | Jan 20, 2026 |
| Updated | Mar 7, 2026 |