Artificial Intelligence (AI), a term coined at the 1956 Dartmouth workshop, has seen several boom-and-bust cycles over the past 66 years. Is the current boom different?
The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive volumes of text from the web. Still highly experimental, Large Language Models haven’t yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches.
Large Language Models do far better at routine language-processing tasks than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem capable of much more, including possibly passing the Turing Test, named for computing pioneer Alan Turing’s thought experiment, which posits that when an AI in conversation can no longer be reliably distinguished from a human, it has achieved general intelligence.
But can Large Language Models really understand anything, or are they just mimicking the superficial “form” of language? What can we say about our progress toward creating real intelligence in a machine? What do “intelligence” and “understanding” even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion.