The future of AI is unknown. That’s the problem with tech ‘prophets’ influencing AI policy

The skies above where I reside near New York City were noticeably apocalyptic last week. But to some in Silicon Valley, the fact that we wimpy East Coasters were dealing with a sepia hue and a scent profile that mixed cigar bar, campfire and old-school happy hour was nothing to worry about. After all, it is AI, not climate change, that appears to be top of mind for this cohort, who believe future superintelligence is either going to kill us all, save us all, or almost kill us all if we don’t save ourselves first.

Whether they predict the “existential risks” of runaway AGI that could lead to human “extinction” or foretell an AI-powered utopia, this group seems to have equally strong, fixed opinions (for now, anyway — perhaps they are “loosely held”) that easily tip into biblical prophet territory.

AI researcher and Stanford professor Andrew Ng: AI poses ‘no extinction risk’ for humans

Unlike many of his peers in the artificial intelligence community, Andrew Ng isn’t convinced about the dangers of AI.

In a video posted to Twitter this week, Ng, a Stanford University professor and founder of several Silicon Valley AI startups, expressed doubt about the doomsday predictions of other executives and experts in the field.

Oppenheimer — with Robert J. Sawyer

Science fiction author Robert J. Sawyer talks about Oppenheimer and about his alternate-history book, The Oppenheimer Alternative.

Where to find ‘The Oppenheimer Alternative’ book?
Robert J. Sawyer’s website: https://sfwriter.com.

* Trinity moment — AI vs. nuclear.
* ‘Now I am become death, the destroyer of worlds.’
* The Jewish connection to the Manhattan Project and the Nazi nuclear program.
* Nuking Japan.
* Oppenheimer’s personality.
* Nuclear as a double-edged sword. The existential risk of a nuclear holocaust.
* Thermonuclear — the rivalry with Edward Teller.
* Alternate history — the end of the world by 2030.
* Military-driven science vs. science driven by scientists.
* Nuclear energy in space.
* The Orion project — nuclear pulse propulsion.
* The controversy of Wernher von Braun.
* The role of science fiction.

The Y Chromosome Is Vanishing. A New Sex Gene Could Be The Future of Men

The sex of human and other mammalian babies is determined by a male-determining gene on the Y chromosome. But the human Y chromosome is degenerating and may disappear in a few million years, potentially leading to our extinction unless we evolve a new sex gene.

The good news is that two branches of rodents have already lost their Y chromosome and lived to tell the tale.

A recent paper in Proceedings of the National Academy of Sciences shows how the spiny rat has evolved a new male-determining gene.

The rise of AI: ‘AI doomsday’ or the best thing since sliced bread?

A raft of industry experts have given their views on the likely future impact of artificial intelligence on humanity. The responses are unsurprisingly mixed.

The Guardian has published an interesting article on the potential socioeconomic and political impact of the ever-increasing rollout of artificial intelligence (AI) on society. It asked various experts in the field, and the responses were, not surprisingly, a mixed bag of doom, gloom, and hope.


“I don’t think the worry is of AI turning evil or AI having some kind of malevolent desire,” Jessica Newman, director of the University of California, Berkeley’s Artificial Intelligence Security Initiative, told the Guardian. “The danger is from something much more simple, which is that people may program AI to do harmful things, or we end up causing harm by integrating inherently inaccurate AI systems into more and more domains of society,” she added.

Geneticists discover hidden ‘whole genome duplication’ that may explain why some species survived mass extinctions

Geneticists have unearthed a major event in the ancient history of sturgeons and paddlefish that has significant implications for the way we understand evolution. They have pinpointed a previously hidden “whole genome duplication” (WGD) in the common ancestor of these species, which seemingly opened the door to genetic variations that may have conferred an advantage around the time of a major mass extinction some 200 million years ago.

The big-picture finding suggests there may be many more overlooked, shared WGDs in other species that predate periods of extreme environmental upheaval throughout Earth’s tumultuous history.

The research, led by Professor Aoife McLysaght and Dr. Anthony Redmond from Trinity College Dublin’s School of Genetics and Microbiology, has just been published in Nature Communications.

Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement

It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

A group of top AI researchers, engineers, and CEOs have issued a new warning about the existential threat they believe that AI poses to humanity.

The 22-word statement, kept deliberately short to make it as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


