New York State Enacts the RAISE Act: Landmark AI Safety Law Mandates Transparency and Incident Reporting for Frontier Models
Category: Industry Trends
Excerpt:
On December 19, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law, the strongest state-level AI safety framework in the U.S. to date. Targeting developers of advanced "frontier" AI models, the law requires public disclosure of safety protocols, mandates reporting of critical safety incidents within 72 hours, and establishes penalties of up to $3 million for violations. Coming amid federal efforts to preempt state regulation, the move aligns New York with California while setting stricter standards, signaling a new era of accountability for Big Tech's most powerful systems.
Core Requirements That Hit Hard
- Safety Protocol Disclosure: Large developers (those training models with >10^26 operations or $100M+ in compute costs) must create safety frameworks covering testing, risk mitigation, and security measures, publish redacted versions, and retain the full documents internally.
- Rapid Incident Reporting: Any "critical safety incident" (e.g., model theft, unintended dangerous behavior) triggers mandatory notification to the state within 72 hours, faster than California's 15-day window.
- Enforcement Muscle: The New York Attorney General gains civil enforcement powers; fines start at $1M for a first offense and scale to $3M for repeat violations. A new office within the Department of Financial Services (DFS) oversees rulemaking, assessments, and annual reports.
- Scope Smarts: Startups and academia are exempt; the law focuses on "frontier models" capable of existential-scale risks, avoiding overreach on everyday AI.
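The two coverage triggers described above can be expressed as a simple check. This is an illustrative sketch based only on the thresholds cited in this article (the function name and inputs are assumptions, not language from the statute itself):

```python
# Hypothetical check of the RAISE Act coverage thresholds cited above.
# Threshold values come from the article; names and inputs are illustrative.

OPS_THRESHOLD = 10**26         # training compute, in operations
COST_THRESHOLD = 100_000_000   # training compute cost, in USD


def is_covered_developer(training_ops: float, compute_cost_usd: float) -> bool:
    """Return True if either coverage trigger from the article is met:
    more than 10^26 training operations, or $100M+ in compute costs."""
    return training_ops > OPS_THRESHOLD or compute_cost_usd >= COST_THRESHOLD
```

Either trigger alone is enough: a lab spending $150M on a sub-threshold training run would still fall under the law's coverage as described here.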
The Political Firefight Behind the Scenes
Passed by lawmakers in June with broad support (including endorsements from AI safety experts such as Geoffrey Hinton), the bill weathered Big Tech lobbying blitzes and rewrite attempts from Hochul's office. Sponsors preserved key wins, including dedicated enforcement and tougher timelines, turning a potential veto into America's strictest transparency regime. OpenAI and Anthropic voiced support, noting that unified standards across major states ease compliance burdens.
Real-World Impact Already Brewing
Developer Prep Mode
Labs like Google, Meta, and Anthropic are ramping up documentation and red-teaming; expect "safety plan" templates to become an industry standard.
Federal Clash Incoming
Trump's executive order directs agencies to challenge "patchwork" state regulations; New York's defiance sets up lawsuits that could escalate to SCOTUS.
Innovation Safeguard
Proponents argue that codifying best practices prevents corner-cutting without stifling progress; the Empire AI consortium continues to thrive alongside the new rules.
Broader Privacy & Ethics Echoes
While RAISE does not directly mandate child-specific age gates or granular data-privacy rules (those fall under separate laws such as NY's Child Data Protection Act), its transparency push indirectly bolsters accountability in sensitive domains like education and healthcare AI.
The RAISE Act isn't the endgame — it's the opening salvo in America's fragmented fight for responsible AI. By forcing frontier developers to show their safety homework and report failures fast, New York proves states can lead when federal action stalls. As litigation looms and more states eye similar bills, one thing's clear: the Wild West era of AI deployment is closing, replaced by guardrails that balance breakthrough innovation with unbreakable public trust.
Key RAISE Act Metrics
- Reporting Window: 72 hours (vs CA's 15 days)
- First Offense Fine: $1M
- Repeat Fine: Up to $3M
- Coverage Threshold: >10^26 ops / $100M+ compute
- Exemptions: Startups, academia
- Enforcer: NY AG + new DFS office