
The Internet just changed forever, but most people living in the United States don’t even realize what just happened. A draconian new law known as the “Digital Services Act” went into effect in the European Union on Friday, and it establishes an extremely strict regime of Internet censorship that is far more authoritarian than anything we have ever seen before.

From this point forward, hordes of European bureaucrats will be the arbiters of what is acceptable to say on the Internet. If they discover something that you have said on a large online platform that they do not like, they can force that platform to take it down, because someone in Europe might see it. So even though this is a European law, the truth is that it is going to have a tremendous impact on all of us.

A major _New York Times_ investigation reveals how the United States’ aquifers are being severely depleted by overuse, driven in part by huge industrial farms and sprawling cities. The _Times_ reports that Kansas corn yields are plummeting due to a lack of water, that there is not enough water to support the construction of new homes in parts of Phoenix, Arizona, and that rivers across the country are drying up as aquifers are drained far faster than they refill. “It can take millions of years to fill an aquifer, but they can be depleted in 50 years,” says Warigia Bowman, director of sustainable energy and natural resources law at the University of Tulsa College of Law. “All coastal regions in the United States are really being threatened by groundwater and aquifer problems.”

Transcript: democracynow.org.

Democracy Now! is an independent global news hour that airs on over 1,500 TV and radio stations Monday through Friday. Watch our livestream at democracynow.org Mondays to Fridays 8–9 a.m. ET.

Support independent media: https://democracynow.org/donate.

It sometimes presents incorrect steps to arrive at the answer, because it is designed to base conclusions on precedent. And a precedent based on a given data set is limited to the confines of the data set. This, says Microsoft, leads to “increased costs, memory, and computational overheads.”

AoT to the rescue. The algorithm evaluates whether the initial steps—“thoughts,” to use a word generally associated only with humans—are sound, thereby avoiding a situation where an early wrong “thought” snowballs into an absurd outcome.
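The pruning idea can be sketched as a small search: generate candidate next steps, score each one, and discard weak "thoughts" before they can snowball. Everything below is illustrative; the `propose` and `score` functions stand in for an LLM generating and rating candidate steps, and none of this is Microsoft's actual AoT code.

```python
# Minimal sketch of early "thought" evaluation: depth-first search over
# candidate reasoning steps, pruning any step whose score falls below a
# threshold before exploring it further. Names are hypothetical.

def solve(state, propose, score, is_goal, threshold=0.5, depth=0, max_depth=5):
    """Return a path of states ending at a goal, or None if none is found."""
    if is_goal(state):
        return [state]
    if depth == max_depth:
        return None
    for step in propose(state):
        if score(step) < threshold:   # discard unsound "thoughts" early
            continue
        path = solve(step, propose, score, is_goal, threshold, depth + 1, max_depth)
        if path is not None:
            return [state] + path
    return None

# Toy usage: reach 10 by repeatedly adding 1, 2, or 3; any step that
# overshoots 10 is scored as a dead end and pruned immediately.
propose = lambda n: [n + 1, n + 2, n + 3]
score = lambda n: 0.0 if n > 10 else 1.0
path = solve(0, propose, score, lambda n: n == 10)
print(path)
```

The point of the sketch is the `score(step) < threshold` check: a branch that starts badly is abandoned at once, rather than being carried to an absurd conclusion and evaluated only at the end.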

Though not expressly stated by Microsoft, one can imagine that if AoT is what it’s cracked up to be, it might help mitigate the so-called AI “hallucinations”—the funny, alarming phenomenon whereby programs like ChatGPT spit out false information. In one of the more notorious examples, in May 2023, a lawyer named Stephen A. Schwartz admitted to “consulting” ChatGPT as a source when conducting research for a 10-page brief. The problem: The brief referred to several court decisions as legal precedents… that never existed.

What happens in femtoseconds in nature can now be observed in milliseconds in the lab.

Scientists at the University of Sydney.

The University of Sydney is a public research university located in Sydney, New South Wales, Australia. Founded in 1850, it is the oldest university in Australia and is consistently ranked among the top universities in the world. The University of Sydney has a strong focus on research and offers a wide range of undergraduate and postgraduate programs across a variety of disciplines, including arts, business, engineering, law, medicine, and science.

In the last ten years, AI systems have developed at rapid speed. Since the breakthrough of besting a legendary player at the complex game Go in 2016, AI has gone on to recognize images and speech better than humans, and to pass tests including business school exams and Amazon coding interview questions.

Last week, during a U.S. Senate Judiciary Committee hearing about regulating AI, Senator Richard Blumenthal of Connecticut described the reaction of his constituents to recent advances in AI. “The word that has been used repeatedly is scary.”

The Subcommittee on Privacy, Technology, and the Law overseeing the meeting heard testimonies from three expert witnesses, who stressed the pace of progress in AI. One of those witnesses, Dario Amodei, CEO of prominent AI company Anthropic, said that “the single most important thing to understand about AI is how fast it is moving.”

Google DeepMind researchers have finally found a way to make life coaching even worse: infuse it with generative AI.

According to internal documents obtained by The New York Times, Google and the Google-owned DeepMind AI lab are working with “generative AI to perform at least 21 different types of personal and professional tasks.” And among those tasks, apparently, is an effort to use generative AI to build a “life advice” tool. You know, because an inhuman AI model knows everything there is to know about navigating the complexities of mortal human existence.

As the NYT points out, the news of the effort notably comes just months after AI safety experts at Google warned, back in December, that users of AI systems could suffer “diminished health and well-being” and a “loss of agency” as a result of taking AI-spun life advice. The Google chatbot Bard, meanwhile, is barred from providing legal, financial, or medical advice to its users.

Check out our Patreon page: https://www.patreon.com/teded.

View full lesson: https://ed.ted.com/lessons/can-we-grow-human-brains-outside-…-lancaster.

Shielded by our thick skulls and swaddled in layers of protective tissue, the human brain is extremely difficult to observe in action. Luckily, scientists can use brain organoids — pencil eraser-sized masses of cells that function like human brains but aren’t part of an organism — to look closer. How do they do it? And is it ethical? Madeline Lancaster shares how to make a brain in a lab.

Lesson by Madeline Lancaster, animation by Adam Wells.

Thank you so much to our patrons for your support! Without you this video would not be possible! Nik Maier, Robert Sukosd, Mark Morris, Tamás Drávai, Adi V, Peter Liu, Leora Allen, Hiroshi Uchiyama, Michal Salman, Julie Cummings-Debrot, Gilly, Ka-Hei Law, Maya Toll, Aleksandar Srbinovski, Jose Mamattah, Mauro Pellegrini, Ricardo Rendon Cepeda, Renhe Ji, Andrés Melo Gámez, Tim Leistikow, Moonlight, Shawar Khan, Chris, Alex Serbanescu, Megan Douglas, Barbara Smalley, Filip Dabrowski, Joe Giamartino, Clair Chen, Vik Nagjee, Karen Goepen-Wee, Della Palacios, Rui Rizzi, Bryan Blankenburg, Bah Becerra and Stephanie Perozo.

Please check out Numerai, our sponsor, using our link: http://numer.ai/mlst.

Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.

Support us! https://www.patreon.com/mlst.
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk.

In this fascinating interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of “Counterfeit People.” Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.

They delve into Dennett’s “Two Black Boxes” thought experiment, the Chinese Room Argument by John Searle, and discuss the implications of AI in terms of reversibility, reontologisation, and realism. Dr. Scarfe and Dennett also examine adversarial LLMs, mental trajectories, and the emergence of consciousness and semanticity in AI systems.

Throughout the conversation, they touch upon various philosophical perspectives, including Gilbert Ryle’s Ghost in the Machine, Chomsky’s work, and the importance of competition in academia. Dennett concludes by highlighting the need for legal and technological barriers to protect against the dangers of counterfeit AI creations.

Reichman University’s new Innovation Institute, set to formally open this spring in the new Graziella Drahi Innovation Building, aims to encourage interdisciplinary, innovative, and applied research in cooperation between the university’s academic schools. The institute’s establishment comes alongside a new vision for the university, one that emphasizes synthetic biology, artificial intelligence (AI), and advanced reality (XR). Prof. Noam Lemelshtrich Latar, the head of the institute, identifies these as fields of the future. The Innovation Institute will focus on interdisciplinary applied research and on the ramifications of these fields for the subjects researched and taught at the schools: for example, how law and ethics influence new medical practices and scientific research.

Synthetic biology is a new interdisciplinary field that integrates biology, chemistry, computer science, electrical and genetic engineering, enabling fast manipulation of biological systems to achieve a desired product.

Prof. Lemelshtrich Latar, together with Dr. Jonathan Giron, then the institute’s chief operating officer, drove a significant transformation at the university by raising a substantial donation to establish the Scojen Institute for Synthetic Biology. The Scojen Institute’s vision is to conduct applied scientific research by bringing top global scientists to Reichman University, with the goal of becoming the leading synthetic biology research institute in Israel. The donation will fund the recruitment of four world-leading scientists in various areas of synthetic biology within the life sciences. The first of these, who will also head the Scojen Institute, has already been recruited: Prof. Yosi Shacham Diamand, a leading global scientist in biosensors and the integration of electronics and biology. The Scojen Institute’s labs will be located in the Graziella Drahi Innovation Building and will form part of the future Dina Recanati School of Medicine, set to open in the 2024–2025 academic year.