
Oct 16, 2023

Minds of machines: The great AI consciousness conundrum

Posted in category: robotics/AI

At the same time, Mudrik has been trying to figure out what this diversity of theories means for AI. She’s working with an interdisciplinary team of philosophers, computer scientists, and neuroscientists who recently put out a white paper that makes some practical recommendations on detecting AI consciousness. In the paper, the team draws on a variety of theories to build a sort of consciousness “report card”—a list of markers that would indicate an AI is conscious, under the assumption that one of those theories is true. These markers include having certain feedback connections, using a global workspace, flexibly pursuing goals, and interacting with an external environment (whether real or virtual).

In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong. That’s where LLMs like LaMDA currently are: they don’t possess the right type of feedback connections, use global workspaces, or appear to have any other markers of consciousness.
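
The report-card logic lends itself to a toy illustration. Below is a minimal Python sketch of the idea, under the assumption that the markers can be treated as a simple checklist; the identifiers and the example system properties are invented for illustration and are not the white paper's actual rubric. The point is just the aggregation: the more markers a system satisfies, the more theories would flag it as potentially conscious.

```python
# Hypothetical sketch of the "report card" idea: tally how many
# consciousness markers a system exhibits, and treat more satisfied
# markers as weak evidence of consciousness. Marker names below are
# illustrative assumptions, not the white paper's actual list.

MARKERS = [
    "recurrent_feedback_connections",
    "global_workspace",
    "flexible_goal_pursuit",
    "environment_interaction",
]

def consciousness_report_card(system_properties: set[str]) -> tuple[int, int]:
    """Return (markers satisfied, total markers) for a given system."""
    satisfied = sum(1 for marker in MARKERS if marker in system_properties)
    return satisfied, len(MARKERS)

# Example: an LLM-like system that, per the article, lacks these markers.
llm_properties = {"next_token_prediction", "large_text_corpus"}
score, total = consciousness_report_card(llm_properties)
print(f"Markers satisfied: {score}/{total}")  # 0/4 under this toy rubric
```

On this toy rubric, a system scoring 0/4 could only be conscious if the current theories are badly mistaken, which is the article's point about today's LLMs.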

The trouble with consciousness-by-committee, though, is that this state of affairs won’t last. According to the authors of the white paper, there are no major technological hurdles in the way of building AI systems that score highly on their consciousness report card. Soon enough, we’ll be dealing with a question straight out of science fiction: What should one do with a potentially conscious machine?
