Jan 17, 2024

In Leaked Audio, Microsoft Cherry-Picked Examples to Make Its AI Seem Functional

Posted by in categories: cybercrime/malcode, robotics/AI

Microsoft “cherry-picked” examples of its generative AI’s output because the tool would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.