Anthropic outlined Friday how it will comply with California's first-in-the-nation law requiring large AI developers to publish details about their efforts to prevent public safety risks.
The San Francisco-based company is the first major artificial intelligence developer to detail such plans before the Transparency in Frontier AI Act (SB 53) takes effect Jan. 1.
The company spelled out its system for evaluating risks such as cyberattacks, biological threats, nuclear misuse, and loss of control of an AI system. The plan describes mitigation efforts and how the company would respond to a safety incident.
Anthropic was ...