
Increasingly, AI systems are interconnected, which creates new complexities and risks. Managing these ecosystems effectively requires comprehensive training, technological infrastructures and processes designed to foster collaboration, and robust governance frameworks. Examples from healthcare, financial services, and the legal profession illustrate the challenges and ways to overcome them.


Researchers at the University of Pennsylvania.

The University of Pennsylvania (Penn) is a prestigious private Ivy League research university located in Philadelphia, Pennsylvania. Founded in 1740 by Benjamin Franklin, Penn is one of the oldest universities in the United States. It is renowned for its strong emphasis on interdisciplinary education and its professional schools, including the Wharton School, one of the leading business schools globally. The university offers a wide range of undergraduate, graduate, and professional programs across various fields such as law, medicine, engineering, and arts and sciences. Penn is also known for its significant contributions to research, innovative teaching methods, and active campus life, making it a hub of academic and extracurricular activity.

New research challenges the ease of implanting false memories, highlighting flaws in the influential “Lost in the Mall” study.

By reexamining the data from a previous study, researchers found that many supposed false memories might actually be based on real experiences, casting doubt on the use of such studies in legal contexts.

Reevaluating the “Lost in the Mall” Study.

A new theory related to the second law of thermodynamics describes the motion of active biological systems ranging from migrating cells to traveling birds.

In 1944, Erwin Schrödinger published the book What is life? [1]. Therein, he reasoned about the origin of living systems by using methods of statistical physics. He argued that organisms form ordered states far from thermal equilibrium by minimizing their own disorder. In physical terms, disorder corresponds to positive entropy. Schrödinger thus concluded: “What an organism feeds upon is negative entropy […] freeing itself from all the entropy it cannot help producing while alive.” This statement poses the question of whether the second law of thermodynamics is valid for living systems. Now Benjamin Sorkin at Tel Aviv University, Israel, and colleagues have considered the problem of entropy production in living systems by putting forward a generalization of the second law [2].
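
The summary doesn't reproduce the new result, but the baseline it generalizes is the standard second law for an open system such as an organism: the system's own entropy may decrease as long as the entropy it exports to its environment compensates. A minimal statement, using the usual decomposition of total entropy production:

```latex
% Second law for an open system (organism + environment):
% total entropy production is non-negative, even though the
% system's own entropy can decrease.
\[
  \frac{dS_{\mathrm{tot}}}{dt}
  = \frac{dS_{\mathrm{sys}}}{dt} + \frac{dS_{\mathrm{env}}}{dt}
  \;\geq\; 0,
  \qquad
  \frac{dS_{\mathrm{sys}}}{dt} < 0
  \;\text{ allowed when }\;
  \frac{dS_{\mathrm{env}}}{dt} \geq \Bigl|\frac{dS_{\mathrm{sys}}}{dt}\Bigr|.
\]
```

Schrödinger’s “negative entropy” is precisely this regime: local order is paid for by entropy exported to the surroundings. The generalization by Sorkin and colleagues extends this kind of entropy-production bound to active systems such as migrating cells and traveling birds.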

Researchers at the University of Cincinnati College of Medicine and Cincinnati Children’s Hospital have developed a new approach that combines advanced screening techniques with computational modeling to significantly shorten the drug discovery process. It has the potential to transform the pharmaceutical industry.

The research, published recently in Science Advances, represents a significant leap forward in drug discovery efficiency. It was featured on LegalReader.com.
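
The article does not detail the method, but the general pattern it describes, computational modeling used to pre-screen candidates before expensive laboratory work, can be illustrated with a toy sketch. Everything here (compound names, features, weights) is hypothetical; real pipelines use docking simulations or learned activity models.

```python
# Toy illustration of computational pre-screening in drug discovery:
# rank candidate compounds by a predicted score so that only the most
# promising ones proceed to costly wet-lab assays.

def predicted_activity(compound: dict) -> float:
    """Hypothetical score from simple, made-up features (higher is better)."""
    return (0.6 * compound["binding_affinity"]   # predicted target binding
            - 0.3 * compound["toxicity_risk"]    # penalize likely toxicity
            + 0.1 * compound["solubility"])      # mild bonus for solubility

candidates = [
    {"name": "cmpd-A", "binding_affinity": 0.9, "toxicity_risk": 0.2, "solubility": 0.5},
    {"name": "cmpd-B", "binding_affinity": 0.7, "toxicity_risk": 0.1, "solubility": 0.8},
    {"name": "cmpd-C", "binding_affinity": 0.4, "toxicity_risk": 0.6, "solubility": 0.9},
]

# Keep only the top-scoring candidates for experimental validation.
shortlist = sorted(candidates, key=predicted_activity, reverse=True)[:2]
print([c["name"] for c in shortlist])  # ['cmpd-A', 'cmpd-B']
```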

https://www.uc.edu/news/articles/2024/09/uc-college-of-medic…aster.html


Legal Reader seeks to provide the latest legal news & commentary on the laws that shape our world.


In this episode of the Eye on AI podcast, we dive into the world of Artificial General Intelligence (AGI) with Ben Goertzel, CEO of SingularityNET and a leading pioneer in AGI development.

Ben shares his vision for building machines that go beyond task-specific capabilities to achieve true, human-like intelligence. He explores how AGI could reshape society, from revolutionizing industries to redefining creativity, learning, and autonomous decision-making.

Throughout the conversation, Ben discusses his unique approach to AGI, which combines decentralized AI systems and blockchain technology to create open, scalable, and ethically aligned AI networks. He explains how his work with SingularityNET aims to democratize AI, making AGI development transparent and accessible while mitigating risks associated with centralized control.

Ben also delves into the philosophical and ethical questions surrounding AGI, offering insights into consciousness, the role of empathy, and the potential for building machines that not only think but also align with humanity’s best values. He shares his thoughts on how decentralized AGI can avoid the narrow, profit-driven goals of traditional AI and instead evolve in ways that benefit society as a whole.

3 – Human-First Mode:

There are contexts where human cognitive and emotional intelligence takes precedence over AI, which plays a supporting role in decision-making without overriding human judgment. Here, AI “protects” human cognitive processes from bias, heuristic thinking, and decision-making that activates the brain’s reward system and leads to incoherent or skewed results. In the human-first mode, artificial integrity can assist judicial processes by analyzing previous cases and outcomes, for instance, without substituting for a judge’s moral and ethical reasoning. For this to work well, the AI system would also have to show how it arrives at its conclusions and recommendations, taking into account any cultural context or values that apply differently across regions or legal systems, as in the sketch below.
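
As a hedged sketch of this human-first pattern, consider an assistant that retrieves similar precedents and shows exactly why each one matched, while the ruling itself stays with the judge. The cases and the keyword-overlap similarity below are invented for illustration; a real system would use legal-domain embeddings and jurisdiction-aware filtering.

```python
# Hypothetical human-first legal assistant: surface similar past cases
# and explain each match, without recommending any ruling.

PRECEDENTS = [
    {"case": "Doe v. Roe (2015)", "keywords": {"contract", "breach", "damages"}},
    {"case": "Smith v. Jones (2019)", "keywords": {"negligence", "duty", "damages"}},
]

def retrieve_similar(query_keywords: set, top_k: int = 2) -> list:
    """Rank precedents by keyword overlap and record why each matched."""
    scored = []
    for p in PRECEDENTS:
        overlap = query_keywords & p["keywords"]
        if overlap:
            # Keeping the matched terms makes each suggestion auditable.
            scored.append((len(overlap), p["case"], sorted(overlap)))
    return sorted(scored, reverse=True)[:top_k]

for score, case, reasons in retrieve_similar({"breach", "damages"}):
    print(f"{case}: matched on {reasons}")
# The output informs, but never replaces, the judge's own reasoning.
```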

4 – Fusion Mode:

Artificial integrity in this mode is a synergy between human intelligence and AI capabilities, combining the best of both worlds. Autonomous vehicles operating in Fusion Mode would have AI managing the vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like Brain-Computer Interfaces (BCIs), would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and the AI, allowing ethical decision-making to occur in real time and blending AI’s precision with human moral reasoning. These kinds of advanced integrations between humans and machines will require artificial integrity at the highest level of maturity: it would ensure not only technical excellence but also ethical robustness, guarding against exploitation or manipulation of neural data while prioritizing human safety and autonomy.

Delivering Innovative, Compassionate And Accessible Patient Care — Robert Stone, CEO — City of Hope & Dr. Marcel van den Brink, MD, PhD, President, City of Hope Comprehensive Cancer Center.


Robert Stone is the CEO of City of Hope (https://www.cityofhope.org/robert-stone), a premier cancer research and treatment center dedicated to innovation in biomedical science and the delivery of compassionate, world-class patient care. A seasoned health care executive, he has served in a number of strategic decision-making roles since he joined City of Hope in 1996, culminating with his appointment as president in 2012, CEO in 2014, and as the Helen and Morgan Chu Chief Executive Officer Distinguished Chair in 2021.

Mr. Stone holds a J.D. from the University of Chicago Law School in Chicago, IL.

Mr. Stone’s strategic acumen, empathy and visionary leadership have driven City of Hope’s rapid evolution.

As an independent institution dedicated to advancing the fight against cancer and diabetes, City of Hope is accelerating opportunities for high-impact discovery and ensuring that patients around the world have access to the most advanced therapies. Recent examples include a groundbreaking alliance in precision medicine with the Translational Genomics Research Institute (TGen), a leader in genomic analysis and bioinformatics; leadership in CAR T cell research and therapy; and an innovative program that offers cancer support services to the employees of some of America’s largest employers, regardless of geography.

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?

In many cases, AI systems gather external information to use as context when answering a particular query. For example, to answer a question about a medical condition, the system might reference recent research papers on the topic. Even with this relevant context, models can make mistakes with what feels like unwarranted confidence. When a model errs, how can we trace a specific claim back to the piece of context it relied on, or determine that no such support exists?

To help tackle this obstacle, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers created ContextCite, a tool that can identify the parts of external context used to generate any particular statement, improving trust by helping users easily verify the statement.
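
ContextCite's actual algorithm isn't described here, but the core idea of context attribution can be sketched with a simple leave-one-out ablation: remove one context source at a time and measure how much the model's support for the statement drops. The toy `score_statement` below stands in for a real language-model call (for example, the log-probability of the statement given the context); it is not the ContextCite method itself.

```python
# Simplified leave-one-out context attribution (illustrates the general
# idea behind tools like ContextCite; not the tool's actual algorithm).

def score_statement(statement: str, context: list) -> float:
    """Toy stand-in for an LLM call: support grows with word overlap."""
    words = set(statement.lower().split())
    return float(sum(len(words & set(c.lower().split())) for c in context))

def attribute(statement: str, sources: list) -> list:
    """Score each source by how much removing it reduces support."""
    base = score_statement(statement, sources)
    drops = []
    for i, src in enumerate(sources):
        ablated = sources[:i] + sources[i + 1:]
        # A large drop means the statement leaned heavily on this source.
        drops.append((base - score_statement(statement, ablated), src))
    return sorted(drops, reverse=True)

sources = [
    "The trial reported a 40% reduction in symptoms.",
    "The study was funded by a university grant.",
]
for drop, src in attribute("Symptoms fell by 40% in the trial.", sources):
    print(f"{drop:4.1f}  {src}")
```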


The ContextCite tool from MIT CSAIL can find the parts of external context that a language model used to generate a statement. Users can easily verify the model’s response, making the tool useful in fields like health care, law, and education.