© 2024-2025 Craftium.AI.

Meta defined its policy on risky AI systems

The company will restrict access to dangerous systems, taking expert opinions into account

Igor Lev
Published: 04.02.2025
News | Meta

Meta has published a document titled “Frontier AI Framework” outlining its policy for releasing highly capable AI systems. The company identifies two categories of systems it considers too risky to release: “high-risk” systems, which could make attacks easier to carry out, and “critical-risk” systems, which could lead to catastrophic consequences that cannot be mitigated.

The document states that risk assessment relies on the judgment of internal and external experts rather than on quantitative metrics. Meta will restrict access to “high-risk” systems until the risk is reduced to a moderate level; if a system is classified as “critical-risk,” development is suspended until the necessary safeguards are in place.


This step is Meta’s response to criticism of its open approach to releasing AI systems. While the company strives to make its technologies widely accessible, it faces the risk that they could be used for dangerous purposes: one of its Llama models, for example, was reportedly used in a hostile country to develop a defense-oriented chatbot.

Meta emphasizes the importance of balancing benefits and risks when developing and deploying advanced AI technologies. The company believes it is possible to provide society with the advantages of technology while maintaining an acceptable level of risk.

Tagged: Meta, Security