
This video covers digital immortality, its required technologies, processes of uploading a mind, its potential impact on society, and more. Watch this next video about the world in 2200: https://bit.ly/3htaWEr.
🎁 5 Free ChatGPT Prompts To Become a Superhuman: https://bit.ly/3Oka9FM
đŸ€– AI for Business Leaders (Udacity Program): https://bit.ly/3Qjxkmu.
☕ My Patreon: https://www.patreon.com/futurebusinesstech.
âžĄïž Official Discord Server: https://discord.gg/R8cYEWpCzK

CHAPTERS
00:00 Required Technologies.
01:42 The Processes of Uploading a Mind.
03:32 Positive Impacts On Society.
05:34 When Will It Become Possible?
05:53 Is Digital Immortality Potentially Dangerous?

SOURCES:
‱ The Singularity Is Near: When Humans Transcend Biology (Ray Kurzweil): https://amzn.to/3ftOhXI
‱ The Future of Humanity (Michio Kaku): https://amzn.to/3Gz8ffA
‱ https://www.scientificamerican.com/article/what-is-the-memory-capacity/
‱ https://www.anl.gov/article/researchers-image-an-entire-mous
first-time
‱ https://interestingengineering.com/cheating-death-and-becomi
-uploading

Join Dr. Ben Goertzel, the visionary CEO and Founder of SingularityNET, as he delves into the compelling realm of large language models. In this Dublin Tech Summit keynote presentation, Dr. Goertzel navigates the uncharted territories of AI, discussing the imminent impact of large language models on innovation across industries. Discover the intricacies, challenges, and prospects of developing and deploying these transformative tools. Gain insights into the future of AI as Dr. Goertzel unveils his visionary perspective on the role of large language models in shaping the AI landscape. Tune in to explore the boundless potential of AI and machine learning in this thought-provoking session.

Themes: AI & Machine Learning | Innovation | Future of Technology | Language Models | Industry Transformation.
Keynote: Dr. Ben Goertzel, CEO and Founder, SingularityNET
#dubtechsummit

In 1993, acclaimed sci-fi author and computer scientist Vernor Vinge made a bold prediction – within 30 years, advances in technology would enable the creation of artificial intelligence surpassing human intelligence, leading to “the end of the human era.”

Vinge theorized that once AI becomes capable of recursively improving itself, it would trigger a feedback loop of rapid, exponential improvements to AI systems. This hypothetical point in time when AI exceeds human intelligence has become known as “the Singularity.”

While predictions of superhuman AI may have sounded far-fetched in 1993, today they are taken seriously by many AI experts and tech investors seeking to develop “artificial general intelligence” or AGI – AI capable of fully matching human performance on any intellectual task.

This book, ‘The Singularity Is Near’, is a work of prediction. Unlike most best-selling futurology, however, it is written by Ray Kurzweil, a renowned technology expert, and its forecasts are not technocratic wild fantasies but are grounded in his deep reflection on how technology actually develops.

The audiobook argues that, because of the exponential progress described by Moore’s Law, the pace of human technological advancement will far exceed our expectations. By 2045, Kurzweil predicts, we will reach the technological ‘Singularity’, which will profoundly alter the human condition; technology, he suggests, may even enable humans to expand across the universe within a millennium.
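Kurzweil’s timeline rests on compounding exponential growth. As a rough illustration only (the 2005 baseline and the two-year doubling period below are assumptions for this sketch, not figures taken from the book), here is how quickly a Moore’s-Law-style doubling schedule compounds over forty years:

```python
# Toy illustration of exponential growth under a Moore's-Law-style
# doubling schedule. The baseline year and doubling period are
# assumptions chosen for the example, not Kurzweil's numbers.

def capability(year, base_year=2005, base=1.0, doubling_years=2.0):
    """Relative capability after exponential doubling since base_year."""
    return base * 2 ** ((year - base_year) / doubling_years)

growth_by_2045 = capability(2045) / capability(2005)
print(f"Relative growth from 2005 to 2045: {growth_by_2045:,.0f}x")
# 40 years at a 2-year doubling period = 20 doublings = 1,048,576x
```

Twenty doublings yield a factor of over a million, which is why linear intuition tends to underestimate exponential trends.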

The author, Ray Kurzweil, is a true tech maestro. He has been inducted into the National Inventors Hall of Fame in the U.S., is a recipient of the National Medal of Technology, holds 13 honorary doctorates, has been lauded by three U.S. presidents, and is dubbed by the media as the ‘rightful heir to Thomas Edison’.

In the audio, you will hear:

Artificial General Intelligence (AGI) is a term for AI systems that meet or exceed human performance across the broad range of tasks humans can perform. There are benefits and downsides to AGI. On the upside, AGI could take over most of the labor that consumes a vast amount of humanity’s time and energy, heralding a utopia in which no one has wants that cannot be fulfilled. On the downside, AGI could produce an unbalanced situation in which one company (or a few) dominates the economy, exacerbating the existing divide between the top 1% and the rest of humankind. Beyond that, the argument goes, a super-intelligent AGI could find it beneficial to enslave humans for its own purposes, or to exterminate them so as not to compete for resources. One hypothetical scenario is that an AGI smarter than humans could simply design a better AGI, which could in turn design an even better one, leading to what is called a hard take-off and the singularity.
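The hard take-off scenario in the last sentence can be sketched as a toy recurrence in which each generation’s improvement factor grows with its own capability. Every number here (the `gain` parameter, the starting capability, the generation count) is an illustrative assumption, not a validated model:

```python
# Toy model of the "hard take-off" argument: each AGI generation
# designs a successor whose multiplier depends on the designer's own
# capability, so growth accelerates. All parameters are illustrative.

def take_off(initial=1.0, gain=0.1, generations=10):
    """Simulate recursive self-improvement: c_{n+1} = c_n * (1 + gain * c_n)."""
    history = [initial]
    for _ in range(generations):
        c = history[-1]
        history.append(c * (1 + gain * c))
    return history

trajectory = take_off()
print([round(c, 2) for c in trajectory])
```

Because the per-generation multiplier `1 + gain * c` itself rises as capability rises, growth here is faster than exponential; that runaway quality is the core of the take-off argument.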

I do not know of any theory that claims that AGI or the singularity is impossible. However, I am generally skeptical of arguments that Large Language Models such as the GPT series (GPT-2, GPT-3, GPT-4, GPT-X) are on the pathway to AGI. This article will attempt to explain why I believe that to be the case, and what I think is missing should humanity (or some of its members) choose to try to achieve AGI. I will also try to convey a sense of why it is easy to talk about the so-called “recipe for AGI” in the abstract, but why physics itself will prevent any sudden and unexpected leap from where we are now to AGI or super-AGI.

To achieve AGI it seems likely we will need one or more of the following:

🍿 Watch the full interview for free at https://londonreal.tv/dr-ben-goertzel-artificial-intelligence-know-it/
đŸ€ The Investment Club: https://londonreal.tv/club.
đŸ”„ The Crypto & DeFi Accelerator: https://londonreal.tv/defi-ytd.
💰 The Wealth Accelerator: https://londonreal.tv/wealth.
đŸ‡ș🇾 Biden bombed the Nord Stream?! https://londonreal.tv/nordstream.

Dr Ben Goertzel is a cross-disciplinary scientist, futurist, author and entrepreneur, who has spent the best part of his working life focused on creating benevolent superhuman artificial general intelligence (AGI).

🔔 SUBSCRIBE ON YOUTUBE: http://bit.ly/SubscribeToLondonReal.
▶ FREE FULL EPISODES: https://londonreal.tv/episodes.

🎁 FREE 30 Day Audible Trial: https://londonreal.tv/audible.

Peter Diamandis is best known as the founder of the XPrize Foundation, which offers big cash prizes as an incentive for tech solutions to big problems. The entrepreneur and investor is also co-founder of the Singularity University, a Silicon Valley-based nonprofit offering education in futurology. His new book, The Future Is Faster Than You Think, argues that the already rapid pace of technological innovation is about to get a whole lot quicker.

Do you think people are worried about where technology is going to take us? I can palpably feel how fast things are changing and that the rate of change is accelerating, and I have picked up a growing amount of fear from people who don’t understand where the world is going. And that is not good when you’re trying to solve problems. This book is about giving people a roadmap for where things are going over the next decade, so they have less fear and more anticipation. Because, yes, in the next 10 years we’re going to reinvent every industry on this planet, but the change is one that is for the benefit of the masses, whether it’s in longevity or food or banking.

Does our increasing dependency on technology diminish our human potential? In this episode, visionary scientist Gregg Braden discusses the current transhuman movement – the merging of technology and human biology, often referred to as the singularity. He describes three levels of tech integration where the final level replaces our natural biology. In a time of rapid evolution, reflection and discernment are key. Braden highlights what we can do to release the conditioning of a technology-dependent society and how to follow the natural rhythms within ourselves.