US to Review OpenAI and Anthropic Models

September 2, 2024

The U.S. AI Safety Institute has announced a significant collaboration with leading artificial intelligence firms OpenAI and Anthropic. This partnership, formalized through a memorandum of understanding, grants the institute early access to new AI models developed by these companies. The primary goal is to rigorously evaluate these models for safety and suggest necessary improvements before their public release. The deal is part of the institute’s broader mandate, under the Department of Commerce’s National Institute of Standards and Technology (NIST), to develop and operationalize AI testing methodologies and risk mitigation strategies.

Elizabeth Kelly, Director of the U.S. AI Safety Institute, emphasized the importance of this agreement in advancing AI safety. She described it as a crucial step towards responsibly managing the future of AI technology. The partnership aims to enhance the scientific understanding of AI evaluations and foster innovation while maintaining rigorous safety standards. This initiative comes in response to growing concerns about AI security and the need for effective regulatory measures.

OpenAI CEO Sam Altman has publicly endorsed the agreement, highlighting its role in advancing the science of AI evaluations. His remarks come against the backdrop of recent legislative activity, including a California bill that would impose new safety standards on advanced AI models. Although OpenAI opposes the bill and Anthropic supports it with reservations, both companies have committed to working with the U.S. AI Safety Institute and its U.K. counterpart to address shared concerns about AI system security.

This collaboration marks a pioneering effort in the intersection of government and tech industry partnerships. By sharing their models and engaging in joint research with the institute, OpenAI and Anthropic are contributing to a framework of responsible AI development. The agreement aligns with international efforts to establish safety tests and regulatory measures for AI, reflecting a global consensus on the need to balance innovation with robust security practices.

Reference:

  • US AI Safety Institute to Evaluate OpenAI and Anthropic Models for Safety
Tags: AI, AI Safety Institute, Anthropic, Cyber News, Cyber News 2024, Cyber Threats, Machine Learning, OpenAI, September 2024, USA