US to Review OpenAI and Anthropic Models

September 2, 2024
Reading Time: 2 mins read

The U.S. AI Safety Institute has announced a significant collaboration with leading artificial intelligence firms OpenAI and Anthropic. This partnership, formalized through a memorandum of understanding, grants the institute early access to new AI models developed by these companies. The primary goal is to rigorously evaluate these models for safety and suggest necessary improvements before their public release. The deal is part of the institute’s broader mandate, under the Department of Commerce’s National Institute of Standards and Technology (NIST), to develop and operationalize AI testing methodologies and risk mitigation strategies.

Elizabeth Kelly, Director of the U.S. AI Safety Institute, emphasized the importance of this agreement in advancing AI safety. She described it as a crucial step towards responsibly managing the future of AI technology. The partnership aims to enhance the scientific understanding of AI evaluations and foster innovation while maintaining rigorous safety standards. This initiative comes in response to growing concerns about AI security and the need for effective regulatory measures.

OpenAI CEO Sam Altman has publicly endorsed the agreement, highlighting its role in advancing the science of AI evaluations. His remarks come amid recent legislative activity, including a California bill proposing new safety standards for advanced AI models. Although OpenAI opposes the bill and Anthropic supports it cautiously, both companies have committed to working with the U.S. AI Safety Institute and its U.K. counterpart to address shared concerns about AI system security.

This collaboration marks a pioneering effort in the intersection of government and tech industry partnerships. By sharing their models and engaging in joint research with the institute, OpenAI and Anthropic are contributing to a framework of responsible AI development. The agreement aligns with international efforts to establish safety tests and regulatory measures for AI, reflecting a global consensus on the need to balance innovation with robust security practices.

Reference:

  • US AI Safety Institute to Evaluate OpenAI and Anthropic Models for Safety
Tags: AI, AI Safety Institute, Anthropic, Cyber News, Cyber News 2024, Cyber Threats, Machine Learning, OpenAI, September 2024, USA