As introduced, the bill:
- Creates a duty of care for developers of artificial intelligence (AI) systems to use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
- Requires developers to provide deployers with documentation on intended and foreseeable uses, known limitations and risks, and information on datasets.
- Requires deployers of AI systems to disclose discrimination risks, and their measures to mitigate discrimination, to the Attorney General and in public statements.
- Requires deployers of AI systems to maintain a risk management program and complete an annual impact assessment.
- Requires any corporation that uses AI systems to target specific consumer groups or influence behavior to disclose the purposes of its AI use, the ways its tools are designed to influence consumer behavior, and details of third-party partnerships.
- Requires notification to consumers when they are targeted by AI systems in ways that materially impact their decisions, or when algorithms are used to determine pricing, eligibility, or access to services.
- Grants sole enforcement authority to the Attorney General.
| Year | 2025 |
| Bill Type | Bill |
| Created | Jan 9, 2025 |
| Updated | Jan 6, 2026 |