US to Review OpenAI and Anthropic Models

September 2, 2024
Reading Time: 2 mins read
in News
The U.S. AI Safety Institute has announced a significant collaboration with leading artificial intelligence firms OpenAI and Anthropic. The partnership, formalized through a memorandum of understanding, grants the institute early access to new AI models developed by the two companies. The primary goal is to rigorously evaluate these models for safety and recommend improvements before their public release. The agreement is part of the institute’s broader mandate, under the Department of Commerce’s National Institute of Standards and Technology (NIST), to develop and operationalize AI testing methodologies and risk mitigation strategies.

Elizabeth Kelly, Director of the U.S. AI Safety Institute, emphasized the importance of this agreement in advancing AI safety. She described it as a crucial step towards responsibly managing the future of AI technology. The partnership aims to enhance the scientific understanding of AI evaluations and foster innovation while maintaining rigorous safety standards. This initiative comes in response to growing concerns about AI security and the need for effective regulatory measures.

OpenAI CEO Sam Altman has publicly endorsed the agreement, highlighting its role in advancing the science of AI evaluations. His remarks come amid recent legislative activity, including a California bill that would impose new safety standards on advanced AI models. Although OpenAI opposes the bill and Anthropic supports it with reservations, both companies have committed to working with the U.S. AI Safety Institute and its U.K. counterpart to address shared concerns about AI system security.

This collaboration marks a pioneering effort in the intersection of government and tech industry partnerships. By sharing their models and engaging in joint research with the institute, OpenAI and Anthropic are contributing to a framework of responsible AI development. The agreement aligns with international efforts to establish safety tests and regulatory measures for AI, reflecting a global consensus on the need to balance innovation with robust security practices.

Reference:

  • US AI Safety Institute to Evaluate OpenAI and Anthropic Models for Safety
Tags: AI, AI Safety Institute, Anthropic, Cyber News, Cyber News 2024, Cyber Threats, Machine Learning, OpenAI, September 2024, USA
    © 2025 | CyberMaterial | All rights reserved
