Blog

Mar 14, 2018

A new test could tell us whether an AI has common sense

Posted in categories: materials, robotics/AI

Virtual assistants and chatbots don’t have much common sense. That’s because these machine learning systems rely on specific situations they have encountered before, rather than drawing on broader knowledge to answer a question. Researchers at the Allen Institute for AI (Ai2) have devised a new test, the AI2 Reasoning Challenge (ARC), that probes an artificial intelligence’s understanding of the way our world operates.

Humans use common sense to fill in the gaps of any question they are posed, delivering answers within an understood but non-explicit context. Peter Clark, the lead researcher on ARC, explained in a statement, “Machines do not have this common sense, and thus only see what is explicitly written, and miss the many implications and assumptions that underlie a piece of text.”

The test asks basic multiple-choice questions that draw from general knowledge. For example, one ARC question is: “Which item below is not made from a material grown in nature?” The possible answers are a cotton shirt, a wooden chair, a plastic spoon and a grass basket.
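As a rough illustration of what such a benchmark item looks like in practice, here is a minimal Python sketch of how an ARC-style multiple-choice question might be represented and scored. The field names, the `answer_question` model stub, and the scoring loop are hypothetical illustrations for this post, not the actual ARC dataset schema or Ai2 evaluation code.

```python
from dataclasses import dataclass

@dataclass
class ArcQuestion:
    """One multiple-choice item: a question, labeled choices, and the correct label."""
    question: str
    choices: dict[str, str]   # choice label -> choice text
    answer_key: str           # label of the correct choice

# The example question quoted above, encoded in this hypothetical format.
sample = ArcQuestion(
    question="Which item below is not made from a material grown in nature?",
    choices={
        "A": "a cotton shirt",
        "B": "a wooden chair",
        "C": "a plastic spoon",
        "D": "a grass basket",
    },
    answer_key="C",
)

def answer_question(q: ArcQuestion) -> str:
    """Stand-in for the system under test; returns the label it selects."""
    # A real system would need background knowledge (plastic is synthetic,
    # cotton, wood, and grass are grown) rather than pattern matching.
    return "C"

def score(questions: list[ArcQuestion]) -> float:
    """Fraction of questions the system answers correctly."""
    correct = sum(answer_question(q) == q.answer_key for q in questions)
    return correct / len(questions)

print(f"Accuracy: {score([sample]):.0%}")
```

The point of the example is that nothing in the question text states that plastic is man-made; answering correctly depends on exactly the kind of implicit, common-sense knowledge the test is designed to measure.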

