
Technological singularity

It is with sadness — and deep appreciation of my friend and colleague — that I must report the passing of Vernor Vinge.


The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]
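The “runaway reaction” Good describes can be made concrete with a small numerical toy model. The sketch below is our own illustration, not taken from Good, Vinge, or Kurzweil: it simply assumes that each generation’s improvement is proportional to its current intelligence, which is what turns steady progress into an “explosion”. All numbers are arbitrary.

```python
# Toy sketch (not from any cited source): a minimal numerical illustration of
# I. J. Good's "intelligence explosion" idea. The growth rule and all parameter
# values are arbitrary assumptions chosen only to show how self-improvement
# can compound into runaway growth.

def intelligence_explosion(generations=10, start=1.0, gain_per_unit=0.5):
    """Each generation designs its successor; the assumed improvement it can
    achieve scales with its own intelligence, so progress compounds."""
    level = start
    history = [level]
    for _ in range(generations):
        # The smarter the current agent, the bigger the jump it can engineer.
        level = level * (1.0 + gain_per_unit * level)
        history.append(level)
    return history

if __name__ == "__main__":
    for gen, level in enumerate(intelligence_explosion()):
        print(f"generation {gen:2d}: intelligence level ~ {level:.3g}")
```

With these assumed parameters the level roughly doubles at first, then quadruples, and within ten generations exceeds 10^70; the point is the qualitative shape of the curve, not any prediction.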

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge, first in 1983 in an article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to the wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, which predicts the singularity by 2045.[7]

The Political Singularity and a Worthy Successor, with Daniel Faggella

Calum and David recently attended BGI24, the Beneficial General Intelligence summit and unconference, in Panama City. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one dimension of this matrix is the kind of end-goal future that people desire as intelligent systems become ever more powerful, and the other dimension is the kind of methods people want to use to bring about that desired future.
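To make the shape of the ITPM concrete, here is a minimal sketch. The two axes follow the description above (desired end-goal future, and preferred methods of getting there), but the three example positions on each axis are hypothetical placeholders of ours, not Faggella’s actual labels.

```python
# Illustrative sketch only: the two axes follow the episode's description of the
# ITPM, but the three positions on each axis are hypothetical placeholders,
# not Faggella's actual labels.
from itertools import product

end_goal_futures = ["human-centric", "hybrid / augmented", "post-human successor"]   # placeholders
methods = ["laissez-faire acceleration", "regulated development", "strict control"]  # placeholders

# A 3 by 3 grid: each cell pairs a desired end-goal future with a preferred
# method of getting there, giving nine distinct positions on the future of AI.
itpm = list(product(end_goal_futures, methods))

for i, (goal, method) in enumerate(itpm, start=1):
    print(f"cell {i}: end goal = {goal!r}, method = {method!r}")
```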

So, if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two names that are often thrown around these days, they’re actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

AGI in 3 to 8 years

When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence… the kind of intelligence that could teach itself and grow to a vastly larger intellect than any individual human?

According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.

We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence.

We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet’s evolution, and consider the role of humans in a world where AI surpasses human intelligence.

00:00 Introduction to the Future of AI
01:28 Predicting the Timeline of Artificial General Intelligence
02:06 The Role of LLMs in the Path to AGI
05:23 The Impact of AI on Jobs and Economy
06:43 The Future of AI Development
10:35 The Role of Humans in a World with AGI
35:10 The Diverse Future of Human and Post-Human Minds
36:51 The Challenges of Transitioning to a World with AGI
39:34 Conclusion: The Future of AGI

The Paradox Of Time That Scares Scientists

When time reaches its limits, scientists call those moments “singularities.” These can mark the start or end of time itself. The most famous singularity is the big bang, which happened around 13.7 billion years ago, kicking off the universe and time as we know it. If the universe ever stops expanding and starts collapsing, it could lead to a reverse of the big bang called the big crunch, where time would stop. As our distant descendants approach the end of time, they will face increasing challenges in a hostile universe, and their efforts will only accelerate the inevitable. We are not passive victims of time’s demise; we contribute to it. Through our existence, we convert energy into waste heat, contributing to the universe’s degeneration. Yet time must keep flowing for us to continue living, even as our living hastens its end.
