Experts urge accountability amid reports of AI-driven self-harm and hacking
A new AI safety report warns that some of the world’s largest AI companies are falling behind on basic safeguards.
The latest edition of the Future of Life Institute’s AI Safety Index, released Wednesday, evaluated eight major AI companies and concluded that none have a sufficiently robust strategy to control advanced AI models or mitigate catastrophic risks.
The index was assembled with the help of an independent panel of experts.
The findings land at a moment of intensifying fears about real-world harm from cutting-edge AI. Several recent cases of self-harm and suicide have been linked to AI chatbots, adding to anxieties about AI-driven cybercrime, misinformation, psychological manipulation and the emergence of superintelligent systems.
“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” Max Tegmark, an MIT professor and president of the Future of Life Institute, said.
The Future of Life Institute (FLI), a nonprofit organisation founded in 2014 with early support from Tesla CEO Elon Musk, advocates for reducing existential risks from advanced technologies including AI and nuclear weapons.
The report’s safety scores show what researchers call a “widening gap” in corporate responsibility.
“We see two clusters of companies in terms of their safety promises and practices,” Sabina Nong, an AI safety investigator at FLI, told NBC News.
“Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order, and then five other companies are on the next tier.”
Those lower-tier companies include xAI and Meta, alongside three rapidly advancing Chinese developers: Z.ai, DeepSeek and Alibaba Cloud. Their models are increasingly used in Silicon Valley, owing both to fast-improving capabilities and to the broad availability of their open-source code.
Even among the top performers, grades were low. Anthropic, ranked first, received only a C+.
Alibaba Cloud placed last with a D-.
DeepSeek also scored poorly, placing second to last overall. According to the report, unlike leading US companies, DeepSeek has not published any framework explaining how it evaluates risks or mitigates harmful system behaviour, nor does it provide a whistleblower policy, a key measure for identifying emerging threats.
The index assessed 35 safety indicators across six areas, including internal risk assessment, information-sharing procedures, whistleblower protections and support for research into AI alignment.
xAI and Meta were credited with developing early versions of safety frameworks, but the report said those systems lack scope, measurable thresholds and independent review.
Meta received some praise for introducing “the only outcome-based thresholds,” though the report warned its triggers for action are too high and leave unclear who has authority to intervene.
To bolster public trust, the report calls on companies to disclose more about internal evaluations, employ independent safety auditors, strengthen protections against AI-induced psychological harm, and scale back aggressive lobbying against regulation.
AI companies responded cautiously.
An OpenAI spokesperson said: “Safety is core to how we build and deploy AI. We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts.”
A Google DeepMind spokesperson said the company will “continue to innovate on safety and governance at pace with capabilities” as AI systems grow more advanced.