
AI firm Anthropic seeks weapons expert to stop ‘misuse’ by users

Anthropic is not the only AI firm adopting this strategy.

A similar position has been advertised by ChatGPT developer OpenAI. On its careers website, it lists a job vacancy for a researcher in “biological and chemical risks”, with a salary of up to $455,000 (£335,000), almost double that offered by Anthropic.

But some experts are alarmed by the risks of this approach, warning that it involves giving AI tools information about weapons, even if they have been instructed not to use it.

“Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?” said Dr Stephanie Hare, tech researcher and co-presenter of the BBC’s AI Decoded TV programme.

“There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight.”

The AI industry has repeatedly warned about the potential existential threats posed by its technology, but there has been no real attempt to slow its progress.

The issue has gained urgency as the US government calls on AI firms while waging war in Iran and carrying out military operations in Venezuela.



