# New York's RAISE Act: What AI Developers Need to Know About New Safety Rules

On March 19, 2026, New York State's Responsible AI Safety and Education (RAISE) Act officially took effect, establishing comprehensive safety and transparency requirements for developers of large-scale artificial intelligence models. This landmark regulation represents one of the most significant state-level AI governance frameworks in the United States, with implications that extend far beyond New York's borders.

## What is the RAISE Act?

The RAISE Act is comprehensive AI safety legislation designed to ensure that powerful AI systems are developed and deployed responsibly. The law focuses on large AI models: systems with significant capabilities that could pose risks to public safety, privacy, or civil rights if not properly managed.

Key objectives of the RAISE Act include:

- Establishing safety standards for AI model development and deployment
- Requiring transparency about AI system capabilities and limitations
- Protecting New York residents from potential AI-related harms
- Creating accountability mechanisms for AI developers
- Promoting responsible AI innovation while managing risks

The legislation represents a middle-ground approach: stringent enough to address legitimate safety concerns, but designed to avoid stifling innovation in the rapidly evolving AI sector.

## Who is Affected by the RAISE Act?

The RAISE Act applies to developers of "covered AI systems," defined as:

- Large language models with more than 10 billion parameters
- Multimodal AI systems that process multiple data types and exceed specified capability thresholds
- AI systems deployed in New York that affect state residents, regardless of where the developer is located
- Foundation models that are fine-tuned or adapted for use in New York

This broad scope means that major AI developers, including OpenAI, Google, Anthropic, and Meta, must comply with the RAISE Act when their systems are accessible to New York users. A rough sketch of how a developer might screen systems against these criteria appears below.
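The statute's actual applicability tests will turn on definitions and implementing guidance that this summary only gestures at, but as a rough sketch, a compliance team might encode the four criteria above as a first-pass screen. Everything here, from the field names to the threshold constant, is illustrative rather than drawn from the Act's text.

```python
from dataclasses import dataclass

# Threshold taken from the summary above ("more than 10 billion
# parameters"); the statute's real tests are more nuanced.
LLM_PARAMETER_THRESHOLD = 10_000_000_000

@dataclass
class AISystem:
    name: str
    parameter_count: int
    is_multimodal: bool
    exceeds_capability_threshold: bool  # per the Act's capability tests (not defined here)
    deployed_in_new_york: bool          # accessible to or affecting NY residents
    adapted_for_new_york: bool          # foundation model fine-tuned for NY use

def may_be_covered(system: AISystem) -> bool:
    """First-pass screen mirroring the four criteria listed above.
    Errs on the side of inclusion; flags candidates for legal review."""
    return (
        system.parameter_count > LLM_PARAMETER_THRESHOLD
        or (system.is_multimodal and system.exceeds_capability_threshold)
        or system.deployed_in_new_york
        or system.adapted_for_new_york
    )
```

A screen like this can only surface candidates for legal review; terms like "specified capability thresholds" are defined by the statute and its implementing guidance, not by anything measurable from a parameter count alone.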
## Key Provisions and Requirements

The RAISE Act imposes several categories of requirements on covered AI developers.

### Safety Testing and Evaluation

Before deploying a covered AI system in New York, developers must:

- Conduct comprehensive safety testing to identify potential risks
- Evaluate the system for bias, discrimination, and fairness issues
- Test for potential misuse scenarios, including generation of harmful content
- Document testing methodologies and results
- Engage third-party auditors for independent safety assessments

### Transparency and Disclosure

The Act imposes extensive transparency obligations, requiring developers to:

- Publish detailed model cards describing system capabilities and limitations
- Disclose training data sources and data governance practices
- Provide clear information about known risks and failure modes
- Explain the safety measures and guardrails implemented
- Make technical documentation available to regulators

A minimal sketch of what such a model card might contain follows.
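The Act presumably leaves the model-card format to the developer or to implementing guidance; this summary only names the categories of information to disclose. As a hedged illustration, those categories could be organized as a simple structured document. All field names and example values here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical disclosure structure covering the categories named
    above; the Act or its regulators may prescribe a different format."""
    system_name: str
    version: str
    capabilities: list[str]           # what the system can do
    limitations: list[str]            # known limitations
    training_data_sources: list[str]  # data provenance and governance notes
    known_risks: list[str]            # known risks and failure modes
    safety_measures: list[str]        # guardrails and mitigations in place
    evaluation_summary: str = ""      # testing methodology and results, or a pointer to them

card = ModelCard(
    system_name="example-model",
    version="1.0",
    capabilities=["text generation", "summarization"],
    limitations=["may state falsehoods with confidence"],
    training_data_sources=["licensed corpora", "filtered public web text"],
    known_risks=["harmful content under adversarial prompting"],
    safety_measures=["refusal training", "output filtering"],
    evaluation_summary="See independent audit report, 2026-Q1.",
)
```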
### Ongoing Monitoring and Reporting

Compliance doesn't end at deployment. Developers must:

- Implement systems to monitor AI behavior in production
- Report significant incidents or safety failures to state authorities
- Conduct periodic safety re-evaluations as systems are updated
- Maintain detailed logs of system modifications and their safety implications

A sketch of the kind of incident record such monitoring might produce appears below.
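This summary does not specify the reporting mechanics (format, deadlines, receiving agency), so the sketch below only illustrates the kind of structured incident record a monitoring pipeline might keep so that logs and reports are available when required. The class, fields, and severity labels are all hypothetical.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SafetyIncident:
    """Hypothetical incident record; the Act's actual reporting format,
    deadlines, and recipient agency are set by statute and regulators."""
    system_name: str
    system_version: str
    observed_at: str   # ISO-8601 timestamp
    description: str   # what the system did
    severity: str      # e.g. "low" or "significant"; significant ones get reported
    mitigation: str    # immediate steps taken

def record_incident(incident: SafetyIncident, log_path: str = "incidents.jsonl") -> None:
    """Append the incident to a local audit log (JSON Lines)."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(incident)) + "\n")
    if incident.severity == "significant":
        # Placeholder: route to the team responsible for filing the state
        # report; the actual filing is a legal/process step, not code.
        print(f"ALERT: significant incident on {incident.system_name}; review for state reporting")

record_incident(SafetyIncident(
    system_name="example-model",
    system_version="1.0",
    observed_at=datetime.now(timezone.utc).isoformat(),
    description="Model produced disallowed content despite guardrails.",
    severity="significant",
    mitigation="Prompt pattern blocked; re-evaluation scheduled.",
))
```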
### User Rights and Protections

The RAISE Act establishes rights for New York residents, including:

- The right to know when they're interacting with an AI system
- Access to information about how AI systems make decisions affecting them
- Mechanisms to challenge or appeal AI-generated decisions
- Protection against discriminatory AI outcomes

## Penalties for Non-Compliance

The RAISE Act includes substantial penalties to ensure compliance:

- Civil penalties: Up to $5,000 per violation, with each day of non-compliance potentially constituting a separate violation; a single violation left uncured for 30 days could therefore accrue $150,000
- Enhanced penalties: Up to $50,000 for violations that result in demonstrable harm to individuals
- Injunctive relief: Courts can order companies to cease operating non-compliant AI systems
- Private right of action: Individuals harmed by RAISE Act violations can sue for damages

These penalties are designed to be significant enough to ensure compliance even from large, well-resourced companies.

## Background: Why New York Created the RAISE Act

The RAISE Act emerged from growing concerns about AI safety and the perceived gap in federal AI regulation. Several factors motivated the legislation.

### High-Profile AI Incidents

Incidents involving AI systems producing harmful outputs, exhibiting bias, or being misused for malicious purposes highlighted the need for governance frameworks.

### Rapid AI Advancement

The pace of AI capability improvement, particularly in generative AI, created urgency around establishing safety standards before more powerful systems are deployed.

### Federal Regulatory Vacuum

With limited federal AI regulation, states like New York moved to fill the gap, much as California led on privacy regulation with the CCPA.

### Stakeholder Input

The Act was developed through extensive consultation with AI researchers, industry representatives, civil rights organizations, and the public.

## Comparison with Other AI Regulations

The RAISE Act exists within a growing landscape of AI governance.

### EU AI Act

The European Union's AI Act, which took effect in 2024, uses a risk-based approach categorizing AI systems by potential harm. The RAISE Act shares this risk-based philosophy but is more narrowly focused on large foundation models.

### California AI Regulations

California has enacted several AI-related laws, including requirements for algorithmic transparency in hiring and restrictions on facial recognition. The RAISE Act is more comprehensive in scope.

### Federal AI Executive Orders

Federal executive orders have established AI safety standards for government use and encouraged voluntary industry commitments. The RAISE Act goes further by creating legally enforceable requirements.

### International Approaches

Countries like the UK, Canada, and Singapore have adopted various AI governance frameworks. New York's approach is notable for being state-level regulation in a federal system.

## Industry Reactions and Compliance Challenges

The AI industry's response to the RAISE Act has been mixed.

### Supportive Voices

Some AI safety researchers and responsible AI advocates have praised the Act as a necessary step toward ensuring AI systems are developed with appropriate safeguards. Several companies have publicly committed to compliance.

### Concerns and Criticisms

Critics have raised several concerns:

- Compliance costs: Smaller AI companies worry about the resources required for testing, auditing, and documentation
- Innovation impact: Some fear that regulatory requirements could slow AI development or drive it to less-regulated jurisdictions
- Technical challenges: Measuring and ensuring AI safety remains technically difficult, making compliance complex
- Regulatory fragmentation: Differing state-level requirements could create a patchwork of rules that is difficult to navigate

## What AI Developers Need to Do to Comply

For AI developers subject to the RAISE Act, compliance requires a systematic approach.

### Immediate Steps

1. Assess applicability: Determine whether your AI systems meet the "covered system" criteria
2. Conduct a gap analysis: Compare current practices against RAISE Act requirements
3. Establish a compliance team: Designate personnel responsible for RAISE Act compliance
4. Review existing documentation: Inventory current safety testing, model cards, and technical documentation

### Ongoing Compliance

- Implement safety testing protocols: Develop comprehensive testing frameworks that meet RAISE Act standards
- Create transparency documentation: Prepare detailed model cards and technical disclosures
- Establish monitoring systems: Implement production monitoring to detect safety issues
- Engage third-party auditors: Contract with qualified auditors for independent assessments
- Train staff: Ensure development teams understand AI compliance requirements
- Develop incident response plans: Create procedures for reporting and addressing safety incidents

### Best Practices

- Integrate safety considerations into the entire development lifecycle, not just as a pre-deployment checklist
- Engage with regulators proactively to clarify requirements and demonstrate good-faith compliance efforts
- Participate in industry working groups developing compliance standards and best practices
- Treat compliance as an opportunity to improve AI safety, not just a regulatory burden

## Future Outlook and Implications

The RAISE Act's implementation will likely influence AI regulation beyond New York:

- Model for other states: Other states may adopt similar legislation, potentially creating momentum for federal action
- Industry standards: Compliance practices developed for the RAISE Act may become de facto industry standards
- International influence: New York's approach may inform AI governance discussions globally
- Evolution of requirements: The Act includes provisions for updating requirements as AI technology evolves

## Conclusion

New York's RAISE Act represents a significant milestone in AI governance, establishing comprehensive safety and transparency requirements for large AI systems. While the Act presents compliance challenges for AI developers, it also reflects growing recognition that powerful AI technologies require appropriate oversight to protect public interests.

For AI developers, the RAISE Act necessitates integrating responsible AI development practices throughout the development lifecycle. Companies that view compliance as an opportunity to strengthen their AI safety practices, rather than merely a regulatory burden, will be better positioned for long-term success in an increasingly regulated environment.

As AI capabilities continue to advance, frameworks like the RAISE Act will play a crucial role in ensuring that innovation proceeds responsibly, with appropriate safeguards to protect individuals and society. The Act's implementation will be closely watched as a test case for state-level AI regulation in the United States.