
Sep 26, 2023

OpenAI’s GPT-4 with vision still has flaws, paper reveals

Posted in categories: climatology, mobile phones, robotics/AI

When OpenAI first unveiled GPT-4, its flagship text-generating AI model, the company touted the model’s multimodality — in other words, its ability to understand the context of images as well as text. GPT-4 could caption — and even interpret — relatively complex images, OpenAI said, for example identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.

But since GPT-4’s announcement in late March, OpenAI has held back the model’s image features, reportedly over concerns about abuse and privacy. Until recently, the exact nature of those concerns remained a mystery. Earlier this week, however, OpenAI published a technical paper detailing its work to mitigate the more problematic aspects of GPT-4’s image-analyzing tools.

To date, GPT-4 with vision, abbreviated “GPT-4V” internally by OpenAI, has been used regularly by only a few thousand users of Be My Eyes, an app that helps blind and low-vision people navigate their surroundings. Over the past few months, however, OpenAI also began engaging with “red teamers” to probe the model for signs of unintended behavior, according to the paper.
