Meta has published a policy document titled “Frontier AI Framework” that outlines when it will and will not release high-capability AI systems. The company identifies two categories of systems it may consider too risky to release: “high-risk” and “critical-risk” systems. High-risk systems could make attacks easier to carry out, while critical-risk systems could lead to catastrophic outcomes that cannot be mitigated.
According to the document, a system’s risk level is assessed through the judgment of internal and external experts rather than through quantitative metrics. Meta will restrict access to “high-risk” systems and will not release them until mitigations reduce the risk to a moderate level. If a system is classified as “critical-risk,” its development is suspended until the necessary safeguards are in place.
The move responds to criticism of Meta’s open approach to releasing its models. While the company strives to make its technology broadly accessible, that openness carries the risk of misuse for dangerous purposes: its Llama model, for example, was reportedly used in a hostile country to develop a defense-oriented chatbot.
Meta emphasizes that developing and deploying advanced AI requires weighing benefits against risks, and argues that it is possible to deliver the technology’s benefits to society while keeping the associated risks at an acceptable level.