Frontier AI regulation: Managing emerging risks to public safety

Abstract

Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term “frontier AI” models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and it is difficult to stop a model’s capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; subjecting model behavior to external scrutiny; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.
