
May 18, 2024

Researchers find LLMs are easy to manipulate into giving harmful information

Posted in category: robotics/AI

A team of AI researchers at Amazon's AWS AI Labs has found that most, if not all, publicly available large language models (LLMs) can be easily tricked into revealing dangerous or unethical information.
