Unlike many of his peers in the artificial intelligence community, Andrew Ng isn’t convinced about the dangers of AI.
In a video posted to Twitter this week, Ng, a Stanford University professor and founder of several Silicon Valley AI startups, expressed doubt about the doomsday predictions of other executives and experts in the field.
Science fiction author Robert J. Sawyer talks about Oppenheimer and about his alternate-history novel, The Oppenheimer Alternative.
Where to find The Oppenheimer Alternative: Robert J. Sawyer's website, https://sfwriter.com.
* The Trinity moment: AI vs. nuclear.
* "Now I am become death, the destroyer of worlds."
* The Jewish connection to the Manhattan Project and the Nazi nuclear program.
* Nuking Japan.
* Oppenheimer's personality.
* Nuclear power as a double-edged sword: the existential risk of a nuclear holocaust.
* Thermonuclear weapons: the rivalry with Edward Teller.
* Alternate history: the end of the world by 2030.
* Military-driven science vs. science driven by scientists.
* Nuclear energy in space.
* Project Orion: nuclear pulse propulsion.
* The controversy around Wernher von Braun.
* The role of science fiction.
How to support the channel:
* Get $5 in NDAX (Canadian crypto exchange): https://refer.ndax.io/vm1j
* Buy the Escape Velocity short story collection.
* Support with an Ethereum or Polygon donation: sciandscifi.nft
The sex of humans and other mammals is determined by a male-determining gene on the Y chromosome. But the human Y chromosome is degenerating and may disappear in a few million years, leading to our extinction unless we evolve a new sex-determining gene.
The good news is two branches of rodents have already lost their Y chromosome and have lived to tell the tale.
A raft of industry experts have given their views on the likely impact of artificial intelligence on humanity in the future. The responses are unsurprisingly mixed.
The Guardian has published an interesting article on the potential socioeconomic and political impact of the ever-expanding rollout of artificial intelligence (AI) on society. The paper asked various experts in the field, and the responses were a mixed bag of doom, gloom, and hope.
“I don’t think the worry is of AI turning evil or AI having some kind of malevolent desire,” Jessica Newman, director of University of California Berkeley’s Artificial Intelligence Security Initiative, told the Guardian. “The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society,” she added.
Geneticists have unearthed a major event in the ancient history of sturgeons and paddlefish that has significant implications for the way we understand evolution. They have pinpointed a previously hidden “whole genome duplication” (WGD) in the common ancestor of these species, which seemingly opened the door to genetic variations that may have conferred an advantage around the time of a major mass extinction some 200 million years ago.
The big-picture finding suggests there may be many more overlooked, shared WGDs in other lineages that preceded periods of extreme environmental upheaval throughout Earth's tumultuous history.
The research, led by Professor Aoife McLysaght and Dr. Anthony Redmond from Trinity College Dublin’s School of Genetics and Microbiology, has just been published in Nature Communications.
As apocalyptic warnings go, today is right up there. Some of the world’s most influential tech geniuses and entrepreneurs say AI risks the extinction of humanity.
With the ball lobbed firmly into the court of global leaders and lawmakers, the question is: will they have any idea what to do about it?
It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.
A group of top AI researchers, engineers, and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.
The 22-word statement, kept deliberately brief to make it as broadly acceptable as possible, reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
When will an asteroid hit Earth and wipe us out? Not for at least 1,000 years, according to a team of astronomers. Probably.
Either way, you should get to know an asteroid called 7482 (1994 PC1), the only one known whose orbital path will consistently cross Earth's for the next millennium, and which therefore has the largest probability of a "deep close encounter" with us, specifically in 502 years. Possibly.
Published on a preprint archive and accepted for publication in The Astronomical Journal, the paper states that astronomers have found almost all of the kilometer-sized asteroids; there are a little under 1,000 of them.