AI safeguards can easily be broken, UK Safety Institute finds

By Dan Milmo, global technology editor


Researchers find large language models, which power chatbots, can deceive human users and help spread disinformation.

The UK’s new artificial intelligence safety body [https://www.theguardian.com/technology/2023/oct/26/sunak-announces-uk-ai-safety-institute-but-declines-to-support-moratorium] has found that the technology can deceive human users and produce biased outcomes, and that it has inadequate safeguards against giving out harmful information. The AI Safety Institute published initial findings [https://www.gov.uk/government/publications/ai-safety-institute-approach-to-evaluations/ai-safety-institute-approach-to-evaluations] from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, and identified a number of concerns.