New York Approves Legislation for AI Safety

Key Takeaways

  • New York Governor Kathy Hochul signed the Responsible Artificial Intelligence and Safety Education (RAISE) Act on December 19, following President Trump’s executive order on AI regulation.
  • The law requires companies with more than $500 million in revenue to create safety protocols and report incidents within 72 hours, with penalties for non-compliance of up to $3 million.
  • A new oversight office will be established within the Department of Financial Services to monitor AI developers and issue annual reports.

New York Advances AI Safety Legislation

Governor Kathy Hochul has signed the RAISE Act, a landmark piece of legislation aimed at establishing safety protocols for advanced artificial intelligence systems. This move comes shortly after President Donald Trump issued an executive order intending to centralize federal control over AI, signaling a potential clash between state and federal regulatory approaches.

The timing of the RAISE Act’s passage is significant, arriving amid a broader debate over AI governance. Hochul emphasized that the law reinforces a framework akin to one recently adopted in California. “The law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states,” she stated. She also criticized the federal government for lagging in implementing what she termed “common sense regulations.”

The RAISE Act will take effect on January 1, 2027, and it imposes several critical requirements on companies that develop AI technologies. Those with revenues exceeding $500 million are required to draft and publish their safety protocols, ensuring transparency in AI development. In addition, any incidents related to AI use must be reported to state authorities within a strict 72-hour window.

To facilitate oversight and compliance, New York will establish a dedicated office within the Department of Financial Services. This office will assess developers and issue annual reports regarding AI safety measures. Non-compliance with the new regulations could result in fines of up to $1 million for first-time offenses and escalate to $3 million for subsequent breaches.

Despite intensive lobbying from tech companies, some proposed measures were modified or removed during the legislative process. For instance, a ban on releasing AI models that fail safety tests was ultimately not included, and the initially proposed steeper fines were reduced. Nevertheless, Alex Bores, the bill’s sponsor and a member of the New York State Assembly, expressed satisfaction with the outcome, saying that the regulatory framework established by the RAISE Act sets a significant precedent for AI safety legislation.

Bores remarked, “In New York, we defeated last-ditch attempts from AI oligarchs to wipe out this bill and, by doing so, raised the floor for what AI safety legislation can look like.” He added that the state had successfully countered efforts to undermine the RAISE Act, including regulatory challenges from President Trump and his allies.

As AI technology continues to evolve rapidly, New York’s proactive stance through the RAISE Act could serve as a critical model for other states, signaling a commitment to balancing innovation with public safety. This law not only aims to protect users but also to establish a clear framework for AI development that other tech-forward states may adopt in the future.

The content above is a summary. For more details, see the source article.
