
Taking AI Welfare Seriously

In this interview Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI as mere tools, and strategies to mitigate suffering in artificial systems. Drawing on themes from the paper ‘Taking AI Welfare Seriously’ and his forthcoming book ‘The Moral Circle’, Sebo examines how to detect markers of sentience in AI systems, and what to do if we find them. We explore ethical considerations through the lens of population ethics and AI governance (especially important in an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI aiding human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.

Paper ‘Taking AI Welfare Seriously’: https://eleosai.org/papers/20241030_T…

Book — The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20

Jeff’s Website: https://jeffsebo.net/

Eleos AI: https://eleosai.org/

Chapters:

00:00 Intro
01:40 Implications of failing to take AI welfare seriously
04:43 Engaging the disengaged
08:18 How Blake Lemoine’s ‘disclosure’ influenced public discourse
12:45 Will people take AI sentience seriously if it is seen as a tool or commodity?
16:19 Importance, neglectedness and tractability (INT)
20:40 Tractability: difficulties in measuring moral significance, e.g. by aggregate brain mass
22:25 Population ethics and the repugnant conclusion
25:16 Pascal’s mugging: low probabilities of infinite or astronomically large costs and rewards
31:21 Distinguishing real high-stakes causes from infinite utility scams
33:45 The nature of consciousness, and what to measure in looking for moral significance in AI
39:35 Varieties of views on what’s important: computational functionalism
44:34 AI arms race dynamics and the need for governance
48:57 Indirect approaches to achieving ideal solutions: indirect normativity
51:38 The marker method: looking for morally relevant behavioral & anatomical markers in AI
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems: reverse wireheading
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo
1:09:46 What kind of world could be achieved
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems
1:16:30 AI to help humans improve mood and quality of experience
1:18:48 How to find out more about Jeff Sebo’s research
1:19:12 How to get involved

Many thanks for tuning in! Please support SciFuture by subscribing and sharing!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford

Viva City Fountainside Chat with Robin Hanson

Hey everyone! Robin Hanson will be speaking on Thursday about his galaxy brain ideas on better incentive models for longevity. Plus his unique takes on prediction markets and long-term thinking. https://lu.ma/wzuwk1lp


Join us for a groundbreaking discussion with economist Robin Hanson on the future of longevity economics and city governance!

Grok 3 Is Now The Most Powerful Artificial Intelligence Model, And You Can Try It Free

In today’s AI news, Elon Musk’s AI company, xAI, has officially launched its latest flagship AI model, Grok 3. Released late on February 17, 2025, Grok 3 introduces significant advancements over its predecessor, Grok 2, and aims to compete with leading AI models such as OpenAI’s GPT-4o and Google’s Gemini.

In other advancements, Replit has transformed non-technical employees at Zillow into software developers. The real estate giant now routes over 100,000 home shoppers to agents using applications built by team members who had never written code before. This breakthrough stems from Replit’s new partnership with Anthropic and Google Cloud, which has enabled over 100,000 applications on Google Cloud Run.

Then, Wu Yonghui, a prestigious “Google Fellow” who worked at the US tech giant for 17 years, recently joined TikTok owner ByteDance to lead foundational research on artificial intelligence (AI), as the firm seeks to “explore the upper limit of intelligence”. Wu now works at ByteDance’s Seed department, which the Beijing-based company started in early 2023.

Meanwhile, large companies are not adopting AI as quickly as startups, AWS managing director Tanuja Randery says. The gap is leading to a “two-tier” AI economy as startups outpace corporations. Citing a new report from AWS, Randery said that European startups had integrated AI at pace over the last year while larger enterprises in the region were falling behind.

In videos, join Sara Bacha from Converge Technology Solutions as she delves into how GraphRAG outperforms traditional RAG by leveraging knowledge graphs and LLMs to enhance data relationships and accuracy. Learn the benefits in development, production, and governance, where better explainability and traceability make maintenance easier.
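As a rough illustration of the idea (not the pipeline from the video; the graph, entities, and prompt format below are invented for demonstration), a GraphRAG-style system retrieves an entity’s neighborhood from a knowledge graph and hands those explicit relationships to an LLM as grounded context, rather than relying on similarity search over raw text chunks:

```python
# Minimal GraphRAG-style sketch (illustrative only; entities and
# relations are made up for this example).
import networkx as nx

# Build a tiny knowledge graph of entities and typed relations.
kg = nx.DiGraph()
kg.add_edge("Acme Corp", "Widget X", relation="manufactures")
kg.add_edge("Widget X", "EU market", relation="sold_in")
kg.add_edge("EU market", "CE certification", relation="requires")

def graph_context(entity: str, hops: int = 2) -> list[str]:
    """Collect facts within `hops` edges of an entity as plain-text triples."""
    nearby = nx.single_source_shortest_path_length(kg, entity, cutoff=hops)
    facts = []
    for u, v, data in kg.edges(data=True):
        if u in nearby:
            facts.append(f"{u} --{data['relation']}--> {v}")
    return facts

question = "What certification does Acme Corp's product need?"
context = "\n".join(graph_context("Acme Corp"))
prompt = f"Answer using only these facts:\n{context}\n\nQ: {question}"
print(prompt)  # In a real pipeline, this prompt would be sent to an LLM.
```

Because the retrieved facts are explicit graph edges rather than opaque embeddings, it is straightforward to trace which relationships informed an answer, which is the explainability benefit the video highlights.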

Then, join 20VC host Harry Stebbings as he speaks with Jonathan Ross, the Founder & CEO of Groq, the creator of the world’s first Language Processing Unit (LPU™). Prior to Groq, Jonathan began what became Google’s Tensor Processing Unit (TPU) as a 20% project, where he designed and implemented the core elements of the first-generation TPU chip.

And, can AI chatbots be trusted? Join IBM’s Jeff Crume as he delves into the complexities of AI errors, from innocent mistakes to deliberate lies. Discover how AI chatbots handle truth, generate hallucinations, and the essential principles behind AI transparency, fairness, and privacy.

YEAR 3050: The Rise of Mega-Corporations & AI Domination 🚀 | Sci-Fi Cyberpunk Future

🚀 Welcome to the year 3050 – a cyberpunk dystopian future where mega-corporations rule over humanity, AI surveillance is omnipresent, and cities have become neon-lit jungles of power and oppression.

🌆 In this AI-generated vision, experience the breathtaking yet terrifying future of corporate-controlled societies:
✅ Towering skyscrapers and hyper-dense cityscapes filled with neon and holograms.
✅ Powerful corporations with total control over resources, AI, and governance.
✅ A world where the elite live above the clouds, while the masses struggle below.
✅ Hyper-advanced AI, cybernetic enhancements, and the ultimate surveillance state.

🎧 Best experienced with headphones!

If you love Cyberpunk, AI-driven societies, and futuristic cityscapes, this is for you!
🔥 Would you survive in the dystopian world of 3050? Let us know in the comments!

👉 Subscribe & Turn on Notifications for More Epic AI Sci-Fi!

💎 Support the channel on Patreon for exclusive content: https://www.patreon.com/PintoCreation

Elon Musk shares ‘Mars video’ with this ‘Welcome to Mars’ post

Elon Musk has revived discussion of Mars colonization with a viral AI-generated video showing an advanced Martian city, amassing over 46 million views. Musk, who had originally predicted Mars missions for 2024–2025, envisions direct democracy for Mars governance. The video sparked a mix of curiosity and criticism, especially regarding the absence of natural greenery.

China shaping AI governance mechanism

The development of artificial intelligence has entered a pivotal phase. With groundbreaking advancements in large models such as ChatGPT and Sora, AI is approaching what has been termed the “technological singularity”. The allure of AI’s potential is undeniable, but that immense potential is accompanied by significant risks, including deepfakes, fraud and autonomous weapons systems.

The complexities and interconnectedness of AI pose a new global challenge. Hence, building a coordinated global governance framework for AI is no longer optional; it is an urgent necessity.

AI transcends national boundaries, creating both global opportunities and risks that no country can manage alone. Countries across the world therefore need to work together to mitigate these risks.

Researchers combine holograms and AI to create uncrackable optical encryption system

WASHINGTON — As the demand for digital security grows, researchers have developed a new optical system that uses holograms to encode information, creating a level of encryption that traditional methods cannot penetrate. This advance could pave the way for more secure communication channels, helping to protect sensitive data.

“From rapidly evolving digital currencies to governance, healthcare, communications and social networks, the demand for robust protection systems to combat digital fraud continues to grow,” said research team leader Stelios Tzortzakis from the Institute of Electronic Structure and Laser, Foundation for Research and Technology Hellas and the University of Crete, both in Greece.


“Our new system achieves an exceptional level of encryption by utilizing a neural network to generate the decryption key, which can only be created by the owner of the encryption system.”
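To make the intuition concrete, here is a toy numerical sketch (our simplification, not the researchers’ actual method): optical scattering is modeled as a fixed scrambling transform, and the neural-network-generated “key” plays the role of its inverse, reproducible only by the owner of the physical setup:

```python
# Conceptual sketch of the scheme's logic (assumed details; the real
# system uses physical holograms and a trained neural network decoder).
import numpy as np

rng = np.random.default_rng(seed=42)  # the seed stands in for the physical optical setup
n = 64                                # message length

# The "optical system": a fixed random mixing matrix (stand-in for scattering).
scatter = rng.normal(size=(n, n))

def encrypt(message: np.ndarray) -> np.ndarray:
    """Scramble the message, as scattering scrambles a light field."""
    return scatter @ message

def decrypt(ciphertext: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Recover the message; `key` plays the role of the trained neural decoder."""
    return key @ ciphertext

key = np.linalg.inv(scatter)          # the "decryption key": derivable only by the owner
msg = rng.integers(0, 2, size=n).astype(float)
assert np.allclose(decrypt(encrypt(msg), key), msg)
print("Recovered message matches the original.")
```

Without the key (here, the inverse of the mixing matrix; in the real system, the trained network), the ciphertext is an unstructured jumble, which is what makes the scheme hard to crack by conventional means.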

Mind the Anticipatory Gap: Genome Editing, Value Change and Governance

I was recently a co-author on a paper about anticipatory governance and genome editing. The lead author was Jon Rueda, and the others were Seppe Segers, Jeroen Hopster, Belén Liedo, and Samuela Marchiori. It’s available open access on the Journal of Medical Ethics website, and there is a short (900-word) summary on the JME blog. Here’s a quick teaser for it:

Transformative emerging technologies pose a governance challenge. Back in 1980, a little-known academic at the University of Aston in the UK, David Collingridge, identified the dilemma that has come to define this challenge: the control dilemma (also known as the ‘Collingridge Dilemma’). The dilemma states that, for any emerging technology, we face a trade-off between our knowledge of its impact and our ability to control it. Early on, we know little about it, but it is relatively easy to control. Later, as we learn more, it becomes harder to control, because technologies tend to diffuse throughout society and become embedded in social processes and institutions. Think about our recent history with smartphones. When Steve Jobs announced the iPhone back in 2007, we didn’t know just how pervasive and all-consuming this device would become. Now we do, but it is hard to put the genie back in the bottle (as some would like to do).

The field of anticipatory governance tries to address the control dilemma. It aims to carefully manage the rollout of an emerging technology so as to avoid losing control just as we learn more about the technology’s effects. Anticipatory governance has become popular in the world of responsible innovation and design. In the field of bioethics, approaches to anticipatory governance often try to anticipate future technical realities and ethical concerns, and to incorporate differing public opinion about a technology. But there is a ‘gap’ in current approaches to anticipatory governance.

A Guide to Managing Interconnected AI Systems

Increasingly, AI systems are interconnected, which generates new complexities and risks. Managing these ecosystems effectively requires comprehensive training, technological infrastructures and processes designed to foster collaboration, and robust governance frameworks. Examples from healthcare, financial services, and the legal profession illustrate the challenges and ways to overcome them.


The risks and complexities of these ecosystems require specific training, infrastructure, and governance.