OpenAI anticipates that new AI models could pose a ‘high’ risk.
- Company is strengthening models for defensive cybersecurity tasks
- Training models to safely respond to harmful requests
- Strategy is to ‘mitigate risk through layered safety stack’
- OpenAI to establish ‘Frontier Risk Council’
- Advisory group to bring experienced cyber defenders and security practitioners into close collaboration with OpenAI’s teams
© 2025 Bloomberg L.P. All rights reserved.
