AI models can be easily hacked, the testing results surprised, yet the government did not reveal their names – AI models found vulnerable to basic hacking can give harmful result yet UK govt did not reveal their names

UK government researchers have found that the systems used to protect AI chatbots are not secure. They can easily bypass cybersecurity circles. This means that all AI-based chatbots are more vulnerable to issuing illegal, toxic or explicit responses. That is, they can be easily tampered with through such instructions. Or they can be made to produce the wrong results. The UK’s AI Safety Institute (AISI) said the systems it tested were highly vulnerable to jailbreak. Jailbreak is a hacking method that aims to circumvent the ethical safeguards of AI models…

Read More