US to Review OpenAI and Anthropic Models

September 2, 2024

The U.S. AI Safety Institute has announced a significant collaboration with leading artificial intelligence firms OpenAI and Anthropic. This partnership, formalized through a memorandum of understanding, grants the institute early access to new AI models developed by these companies. The primary goal is to rigorously evaluate these models for safety and suggest necessary improvements before their public release. The deal is part of the institute’s broader mandate, under the Department of Commerce’s National Institute of Standards and Technology (NIST), to develop and operationalize AI testing methodologies and risk mitigation strategies.

Elizabeth Kelly, Director of the U.S. AI Safety Institute, emphasized the importance of this agreement in advancing AI safety. She described it as a crucial step towards responsibly managing the future of AI technology. The partnership aims to enhance the scientific understanding of AI evaluations and foster innovation while maintaining rigorous safety standards. This initiative comes in response to growing concerns about AI security and the need for effective regulatory measures.

OpenAI’s CEO Sam Altman has publicly endorsed the agreement, highlighting its role in advancing the science of AI evaluations. His remarks come against the backdrop of recent legislative activity, including a California bill that would impose new safety standards on advanced AI models. Although OpenAI opposes that bill and Anthropic supports it with reservations, both companies have committed to working with the U.S. AI Safety Institute and its U.K. counterpart to address shared concerns about AI system security.

This collaboration marks a pioneering effort in the intersection of government and tech industry partnerships. By sharing their models and engaging in joint research with the institute, OpenAI and Anthropic are contributing to a framework of responsible AI development. The agreement aligns with international efforts to establish safety tests and regulatory measures for AI, reflecting a global consensus on the need to balance innovation with robust security practices.

Reference:

  • US AI Safety Institute to Evaluate OpenAI and Anthropic Models for Safety
Tags: AI, AI Safety Institute, Anthropic, Cyber News, Cyber News 2024, Cyber Threats, Machine Learning, OpenAI, September 2024, USA