In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of the AI ethics think tank The Future Society. De Kai is also the creator of one of Hong Kong’s best-known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
 
  Chandra X-ray Observatory identifies new stellar danger to planets
Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.
This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure may trigger an extinction event on the planet.
The new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath, mostly from NASA’s Chandra X-ray Observatory, Swift, and NuSTAR missions, and ESA’s XMM-Newton. These observations show that planets located as much as about 160 light-years away can be subjected to lethal doses of radiation. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.
 
  Why do some AI researchers dismiss the potential risks to humanity?
Existential risk from AI is admittedly more speculative than pressing concerns such as its bias, but the basic solution is the same. A robust public discussion is long overdue, says David Krueger.
Why we can still avoid imminent extinction with Daniel Schmachtenberger
Some of Daniel Schmachtenberger’s friends say you can be “Schmachtenberged”: it means realising that we are on our way to self-destruction as a civilisation, on a global level. This is a topic often addressed by the American philosopher and strategist, in a world of powerful weapons and technologies and a lack of effective governance. But with the catastrophic script already being written, is there still hope? And how do we start reversing the scenario?
 
  Lightning strike creates a material seen for the first time on Earth
After lightning struck a tree in New Port Richey, Florida, a team of scientists from the University of South Florida (USF) discovered that the strike led to the formation of a new phosphorus material in a rock. This is the first time such a material has been found in solid form on Earth, and it could represent a member of a new mineral group.
“We have never seen this material occur naturally on Earth – minerals similar to it can be found in meteorites and space, but we’ve never seen this exact material anywhere,” said study lead author Matthew Pasek, a geoscientist at USF.
According to the researchers, high-energy events such as lightning can sometimes cause unique chemical reactions which, in this particular case, have led to the formation of a new material that seems to be transitional between space minerals and minerals found on Earth.
The intelligence explosion: Nick Bostrom on the future of AI
We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.
Nick Bostrom, a professor at Oxford University and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility.
Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of a superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty.
Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.
 
  Doomsday Predictions Around ChatGPT Are Counter-Productive
The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted 300 million jobs would be lost, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).
Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, a sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.
As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?
Fermi Paradox: The Vulnerable World Hypothesis
An exploration of the Vulnerable World Hypothesis solution to the Fermi Paradox.
An exploration of the possibility of finding fossils of alien origin right here on the surface of the Earth.