
Techno-futurists are selling an interplanetary paradise for the posthuman generation—they just forgot about the rest of us

Inside the cult of TESCREALism and the dangerous fantasies of Silicon Valley’s self-appointed demigods, for Document’s Spring/Summer 2024 issue.

As legend has it, Steve Jobs once asked Larry Kenyon, an engineer tasked with developing the Mac computer, to reduce its boot time by 10 seconds. Kenyon said that was impossible. “What if it would save a person’s life?” Jobs asked. Then, he went to a whiteboard and laid out an equation: If 5 million users spent an additional 10 seconds waiting for the computer to start, the total hours wasted would be equivalent to 100 human lifetimes every year. Kenyon shaved 28 seconds off the boot time in a matter of weeks.

Often cited as an example of the late CEO’s “reality distortion field,” this anecdote illustrates the combination of charisma, hyperbole, and marketing with which Jobs convinced his disciples to believe almost anything—elevating himself to divine status and creating “a cult of personality for capitalists,” as Mark Cohen put it in an article about his death for the Australian Broadcasting Corporation. In helping to push the myth of the genius tech founder into the cultural mainstream, Jobs laid the groundwork for future generations of Silicon Valley investors and entrepreneurs who have, amid the global decline of organized religion, become our secular messiahs. They preach from the mounts of Google and Meta, selling the public on digital technology’s saving grace, its righteous ability to reshape the world.

Did AI Just Pass the Turing Test?

A recent study by UC San Diego researchers brings fresh insight into the ever-evolving capabilities of AI. The authors looked at the degree to which several prominent AI models (GPT-4, GPT-3.5, and the classic chatbot ELIZA) could convincingly mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence.

The results were telling: In a five-minute text-based conversation, GPT-4 was mistakenly identified as human 54 percent of the time, contrasted with ELIZA’s 22 percent. These findings not only highlight the strides AI has made but also underscore the nuanced challenges of distinguishing human intelligence from algorithmic mimicry.

The important twist in the UC San Diego study is what it reveals about how people recognize human-level intelligence. It isn't mastery of advanced calculus or another challenging technical field. Instead, what stands out about the most advanced models is their social-emotional persuasiveness. For an AI to pass (that is, to fool a human judge), it has to effectively imitate the subtleties of human conversation. When judging whether their interlocutor was an AI or a human, participants tended to focus on whether responses were overly formal, grammatically too perfect, repetitive in sentence structure, or unnatural in tone. Participants also flagged stilted or inconsistent personalities and senses of humor as non-human.
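Those headline rates invite a quick statistical sanity check: how cleanly do 54 percent and 22 percent separate from each other and from the 50 percent chance line? Below is a minimal sketch that computes Wilson score intervals for the two pass rates. The per-model judgment counts aren't quoted above, so n = 500 is a purely hypothetical sample size used for illustration.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# Hypothetical judge counts: the study reports rates (54%, 22%), not raw
# counts here, so n = 500 per model is an assumed illustration.
for model, rate in [("GPT-4", 0.54), ("ELIZA", 0.22)]:
    n = 500
    lo, hi = wilson_interval(round(rate * n), n)
    print(f"{model}: {rate:.0%} judged human, 95% CI ({lo:.0%}, {hi:.0%})")
```

Even at that assumed sample size, the lower end of GPT-4's interval sits near the chance line, which is why "mistaken for human slightly more often than not" is the careful way to read the result.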

9.523: Aspects of a Computational Theory of Intelligence

The problem of intelligence — its nature, how it is produced by the brain and how it could be replicated in machines — is a deep and fundamental problem that cuts across multiple scientific disciplines. Philosophers have studied intelligence for centuries, but it is only in the last several decades that developments in science and engineering have made questions such as these approachable: How does the mind process sensory information to produce intelligent behavior, and how can we design intelligent computer algorithms that behave similarly? What is the structure and form of human knowledge — how is it stored, represented, and organized? How do human minds arise through processes of evolution, development, and learning? How are the domains of language, perception, social cognition, planning, and motor control combined and integrated? Are there common principles of learning, prediction, decision, or planning that span across these domains?

This course explores these questions with an approach that integrates cognitive science, which studies the mind; neuroscience, which studies the brain; and computer science and artificial intelligence, which study the computations needed to develop intelligent machines. Faculty and postdoctoral associates affiliated with the Center for Brains, Minds and Machines discuss current research on these questions.

Robot planning tool accounts for human carelessness

A new algorithm may make robots safer by making them more aware of human inattentiveness. In computerized simulations of packaging and assembly lines where humans and robots work together, the algorithm, developed to account for human carelessness, improved safety by up to about 80% and efficiency by up to about 38% compared to existing methods.

The work is reported in IEEE Transactions on Systems, Man, and Cybernetics: Systems.

“There are a large number of accidents that are happening every day due to carelessness—most of them, unfortunately, from human errors,” said lead author Mehdi Hosseinzadeh, assistant professor in Washington State University’s School of Mechanical and Materials Engineering.
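The algorithm itself is specified in the IEEE paper, but the general idea, folding a running estimate of human inattentiveness into the robot's motion limits, can be sketched in a few lines. The sketch below is purely illustrative and is not the WSU method: the carelessness estimator, the thresholds, and the speed law are all invented for demonstration.

```python
import random

def update_carelessness(prev: float, reaction_time_s: float,
                        nominal_s: float = 0.7, alpha: float = 0.2) -> float:
    """Exponentially smoothed carelessness estimate in [0, 1], inferred
    from how slowly the human reacts versus a nominal baseline.
    (Illustrative stand-in, not the estimator from the paper.)"""
    observed = min(1.0, max(0.0, (reaction_time_s - nominal_s) / nominal_s))
    return (1 - alpha) * prev + alpha * observed

def safe_speed(max_speed: float, distance_m: float, carelessness: float,
               stop_radius_m: float = 0.5) -> float:
    """Scale robot speed down as the human gets closer and less attentive."""
    if distance_m <= stop_radius_m:
        return 0.0  # full stop inside the protective radius
    proximity = min(1.0, (distance_m - stop_radius_m) / 2.0)
    return max_speed * proximity * (1.0 - 0.8 * carelessness)

c = 0.0
for step in range(5):
    reaction = random.uniform(0.5, 2.0)   # simulated human reaction time (s)
    distance = random.uniform(0.3, 3.0)   # simulated human-robot distance (m)
    c = update_carelessness(c, reaction)
    v = safe_speed(1.0, distance, c)
    print(f"step {step}: carelessness={c:.2f}, dist={distance:.2f} m, v={v:.2f} m/s")
```

The design point the sketch illustrates is that attentiveness is treated as state to be estimated online, not a fixed assumption, so the planner degrades speed gracefully rather than relying on a single worst-case safety margin.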

NIST’s post-quantum cryptography standards are here

The US National Institute of Standards and Technology has released Federal Information Processing Standards (FIPS) publications for three quantum-resistant cryptographic algorithms: ML-KEM (FIPS 203), ML-DSA (FIPS 204), and SLH-DSA (FIPS 205).

In a landmark announcement, the National Institute of Standards and Technology (NIST) has published its first set of post-quantum cryptography (PQC) standards. This announcement serves as an inflection point in modern cybersecurity: as the global benchmark for cryptography, the NIST standards signal to enterprises, government agencies, and supply chain vendors that the time has come to make the world’s information security systems resistant to future cryptographically relevant quantum computers.
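For developers, the practical shape of the flagship standard is a key-encapsulation workflow: one side publishes a public key, the other encapsulates a shared secret against it, and the first side decapsulates the same secret. Here is a minimal sketch using the open-source liboqs-python bindings (an assumption; the article above doesn't name a library), whose mechanism string for FIPS 203 varies by release:

```python
# Sketch of the KEM handshake standardized in FIPS 203 (ML-KEM), using the
# open-quantum-safe liboqs-python bindings. Assumes those bindings are
# installed; older liboqs releases expose this mechanism as "Kyber768".
import oqs

MECH = "ML-KEM-768"

with oqs.KeyEncapsulation(MECH) as receiver, oqs.KeyEncapsulation(MECH) as sender:
    public_key = receiver.generate_keypair()                  # receiver publishes pk
    ciphertext, ss_sender = sender.encap_secret(public_key)   # sender encapsulates
    ss_receiver = receiver.decap_secret(ciphertext)           # receiver decapsulates
    assert ss_sender == ss_receiver  # both sides now hold the same shared secret
    # In practice the shared secret feeds a key-derivation function rather
    # than being used directly as a session key.
```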



Do SETI Optimists Have a Fine-Tuning Problem?

Abstract: In ecological systems, be it a garden or a galaxy, populations evolve from some initial value (say zero) up to a steady state equilibrium, when the mean number of births and deaths per unit time are equal. This equilibrium point is a function of the birth and death rates, as well as the carrying capacity of the ecological system itself. The growth curve is S-shaped, saturating at the carrying capacity for large birth-to-death rate ratios and tending to zero at the other end. We argue that our astronomical observations appear inconsistent with a cosmos saturated with ETIs, and thus SETI optimists are left presuming that the true population is somewhere along the transitional part of this S-curve. Since the birth and death rates are a priori unbounded, we argue that this presents a fine-tuning problem. Further, we show that if the birth-to-death rate ratio is assumed to have a log-uniform prior distribution, then the probability distribution of the ecological filling fraction is bimodal, peaking at zero and unity. Indeed, the resulting distribution is formally the classic Haldane prior, conceived to describe the prior expectation of a Bernoulli experiment, such as a technological intelligence developing (or not) on a given world. Our results formally connect the Drake Equation to the birth-death formalism and the treatment of ecological carrying capacity, and relate both to the Haldane perspective.
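The chain of reasoning compresses neatly into formulas. What follows is a minimal sketch in one standard logistic birth-death form consistent with the abstract; the paper's exact formalism may differ.

```latex
% Logistic birth-death model: N = ETI population, \lambda = birth rate,
% \delta = death rate, K = carrying capacity. (One standard form; the
% paper's exact formalism may differ.)
\[
  \frac{dN}{dt} = \lambda N\left(1 - \frac{N}{K}\right) - \delta N,
  \qquad
  N_{*} = K\left(1 - \frac{\delta}{\lambda}\right) \quad (\lambda > \delta).
\]
% The filling fraction F depends only on the birth-to-death ratio R:
\[
  F \equiv \frac{N_{*}}{K} = 1 - \frac{1}{R}, \qquad R \equiv \frac{\lambda}{\delta}.
\]
% With R a priori unbounded and \log R given a wide uniform prior, almost
% all prior mass has R \ll 1 (so F = 0) or R \gg 1 (so F \to 1), yielding
% the bimodal, Haldane-like distribution the abstract describes.
```

On this form, a cosmos observably neither empty nor saturated requires R tuned near unity, which is the fine-tuning problem the title names.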

From: David Kipping.
