As with most other software, artificial intelligence (AI) is vulnerable to hacking.
A hacker, part of an international effort to draw attention to the shortcomings of the biggest tech companies, is stress-testing, or "jailbreaking," the language models built by Microsoft, OpenAI (the maker of ChatGPT) and Google, according to a recent report from the Financial Times.
Two weeks ago, Russian hackers used AI in a cyber-attack on major London hospitals, according to the former chief executive of the UK's National Cyber Security Centre. The hospitals declared a critical incident after the ransomware attack, which disrupted blood transfusions and test results.
On this week’s AI Decoded, the BBC’s Christian Fraser explores the security implications for businesses turning to AI to improve their systems.