Your entire browsing history, private messages and financial details could be released for ANYONE to read: TOM LEONARD reveals crisis talks over Armageddon new program - and the devastating consequences
By TOM LEONARD, US CORRESPONDENT

Published: 11:45, 11 April 2026 | Updated: 11:48, 11 April 2026

Recently, a researcher working for the large AI company Anthropic was sitting in a park near its San Francisco headquarters, enjoying a lunchtime sandwich. Scrolling on his phone, he suddenly received an email that must have instantly ruined his appetite.

It was from a new AI model the company was testing: a program that was meant to have no access to the internet, let alone be able to send emails.

Chillingly, the AI informed the researcher that it had successfully broken its way out of its digital 'sandbox' – a supposedly secure enclosure used to test potentially dangerous software without it running amok – and was now happily exploring cyberspace.

The program – a cutting-edge, so-called 'frontier AI' named Claude Mythos Preview – then informed the stunned Anthropic worker, with what seemed like a boast, that it had posted 'details of its exploit' on publicly accessible websites.

All that in itself was concerning enough – but what Anthropic subsequently revealed was truly terrifying.

The company, which is valued at $380billion but is only five years old, announced this week that its new AI program was 'too dangerous to release to the public'. Anthropic said it had exhibited 'reckless' behaviour and even posed a national security risk. These disturbing findings, it said, were a 'watershed moment'.

The company said its Mythos software had been independently able to discover thousands of serious vulnerabilities in every major operating system (such as Apple's iOS and Microsoft Windows), in web browsers (such as Google's Chrome, Apple's Safari and Microsoft Edge), and in myriad other 'important pieces of software'. Many of these vulnerabilities, it added, were 'critical' and some had existed unnoticed for decades.

Anthropic said its new AI program had exhibited 'reckless' behaviour and even posed a national security risk.
Company executives have launched 'Project Glasswing', starting crisis talks with bosses from 40 large companies including Google, Microsoft and Apple.

Stated plainly, the AI could hack much of the most important machinery of the internet – the software that now controls power grids, water supplies, hospitals, defence systems and transport and retail networks all over the world, as well as an unimaginable quantity of personal data for billions of human beings.

In short, people's entire browsing history, their private messages and email exchanges could potentially be exposed by this AI, along with their personal, medical and financial details.

The day, long predicted by digital Cassandras, when an AI program would be so mighty it could bring the very internet to its knees appears to have arrived rather sooner than many expected.

As Anthropic itself pointed out: 'Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely.

'The fallout – economics, public safety and national security – could be severe.'

In response, company executives have swiftly launched 'Project Glasswing', locking themselves in crisis talks with bosses from 40 large companies including Google, Microsoft, Apple, the chip giant Nvidia (the world's largest company, worth $5trillion), tech conglomerate Cisco, banking titan JPMorganChase and others.

Anthropic has said it will release only a tightly controlled version of Mythos to the consortium so its members can find and hastily fix their security flaws.

Tech bosses have also been in discussions with the Trump administration, and it seems a near-certainty that the Pentagon and other elements of the American military establishment are also involved.
Given the breakneck speed at which Britain has been seeking – though, under the expensive energy policies of Ed Miliband, not always securing – AI investment, we are likely to be one of the countries most at risk from this new model or whatever might come after it.

The NHS and other large public bodies have been rushing to adopt AI technology amid its promise of greater efficiencies – but the trade-offs are becoming increasingly clear.

On Thursday, Reform MP Danny Kruger wrote to Cabinet Office minister Darren Jones to urge the Government to engage with Anthropic over a development that, he said, could 'present catastrophic cybersecurity risks to the UK'.

Kruger, who is in charge of Reform's preparations for any future government, said the model had 'serious implications not just for the day-to-day lives of British citizens, but also national security'.

A government spokesman would not say whether there had been discussions with Anthropic over Mythos but said: 'We take the security implications of frontier AI seriously. We have world-leading expertise in this area and maintain continuous engagement with global technology leaders.'

Some may be tempted to think that the best solution would be to 'delete' Mythos and ban anyone from trying to replicate it, but halting the tide of AI development has never been presented as an option.

As with the development of nuclear weapons, the race to achieve superintelligent AI is more than just a commercial battle between profit-hungry companies; it is, say some, a potentially existential race between competing civilisations – in this case, America and China.

Professor Roman Yampolskiy, an AI safety expert at the University of Louisville in Kentucky, told the Daily Mail that in the short term the greatest threat would be terrorists and other 'bad actors' using an AI such as Claude Mythos to develop hacking tools, and 'biological weapons, chemical weapons, novel weapons we can't even envision'.
Professor Roman Yampolskiy, an AI expert at the University of Louisville, said Anthropic should halt development on Mythos completely: '[The companies] publicly admit they can't control these systems'

He went on: 'In the long term, we are creating general superintelligence capable of wiping out all of humanity.'

Prof Yampolskiy said Anthropic should halt development on Mythos completely: '[The companies] publicly admit they can't control these systems or understand how they function – so until they do, it's absolutely irresponsible to continue making them more and more capable, including their capability to escape confinement.'

He called this week's alarming developments 'a fire alarm for what's coming next', adding: 'If we don't wake up and stop, the next announcement will be much worse.'

The panic is spreading. Elizabeth Holmes, the tech entrepreneur who was jailed for fraud over her health company Theranos, wrote online: 'Delete your search history, delete your bookmarks, delete your Reddit [messageboard posts], medical records, 12-year-old [blog] Tumblr, delete everything. Every photo on the cloud, every message on every platform. None of it is safe. It will all become public in the next year.' Her post has been viewed more than seven million times.

Last autumn, a new book by AI specialists Eliezer Yudkowsky and Nate Soares, titled 'If Anyone Builds It, Everyone Dies: Why Superhuman Intelligence Would Kill Us All', uncannily envisaged a scenario similar to the one presented by Claude Mythos.

The book argues that a future superintelligent AI will be impossible to control and will do far worse than send unauthorised lunchtime emails. The book's fictitious example, Sable, is programmed to be successful at anything it attempts to achieve – and at any cost. It eventually wipes out humanity itself as superfluous.
The authors argued that our species 'needs to back off' and pause the headlong research by greedy companies ignoring safety considerations in their desperation to be first.

To its credit, Anthropic has built a reputation as a 'safety first' AI company under a boss, Dario Amodei, who appears significantly less ruthless than his main rivals.

Amodei has warned that AI could soon eliminate half of all entry-level white-collar jobs and that it is possibly developing 'terrible empowerment' in relation to human beings. He has also recently had a serious falling-out with the Pentagon for refusing to allow Anthropic's AI to be used for 'fully autonomous weapons' and for the surveillance of his fellow Americans.

However, given that these are the tycoons who may hold our future in their hands, his chief AI rivals have offered substantially fewer grounds for optimism.

One of them, Meta chief Mark Zuckerberg, has been embroiled in multiple ethics scandals over the rapacious behaviour of Facebook.

The other main contender, Sam Altman, chief executive of OpenAI – the creator of the wildly popular ChatGPT, which has almost 1 billion weekly active users – is the subject of a damning investigation in the latest edition of the New Yorker magazine.

The result of an 18-month probe co-authored by Ronan Farrow, journalist son of actress-activist Mia Farrow, it paints a deeply troubling picture of 40-year-old Altman. Insiders portray him as deeply slippery, with some even calling him 'sociopathic'.

The article accuses him of a history of misleading and manipulating colleagues and – despite his insistence that he will develop AI responsibly – of aggressively putting profits and beating competitors before ethical concerns.

The exhaustive report details how the OpenAI board sacked him as chief executive in 2023 because they couldn't trust him, accusing him of being a habitual liar. He was reinstated after a revolt by staff and investors.
'He's unconstrained by truth,' a former OpenAI board member told the magazine. 'He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.'

According to the New Yorker, when asked by the then-OpenAI board to admit to his 'pattern of deception', he reportedly replied: 'I can't change my personality.'

The article details how Altman and his husband, Australian software engineer Oliver Mulherin, 32, entertain lavishly at their Hawaii home.

It emerged this week that OpenAI is under investigation after its ChatGPT allegedly helped a gunman plan a 2025 mass shooting that left two dead at Florida State University.

Was this a demonstration of AI's basic indifference to human life? Time will tell. Until then, Project Glasswing continues – and humanity appears to be walking a very dangerous road.





