Blog

Archive for the ‘information science’ category

Nov 2, 2024

Decomposing causality into its synergistic, unique, and redundant components

Posted in categories: futurism, information science

Information theory, the science of message communication [44], has also served as a framework for model-free causality quantification. The success of information theory relies on the notion of information as a fundamental property of physical systems, closely tied to the restrictions and possibilities of the laws of physics [45,46]. The grounds for causality as information are rooted in the intimate connection between information and the arrow of time. Time-asymmetries present in the system at a macroscopic level can be leveraged to measure the causality of events using information-theoretic metrics based on the Shannon entropy [44]. The initial applications of information theory for causality were formally established through the use of conditional entropies, employing what is known as directed information [47,48]. Among the most recognized contributions is transfer entropy (TE) [49], which measures the reduction in entropy about the future state of a variable by knowing the past states of another. Various improvements have been proposed to address the inherent limitations of TE. Among them, we can cite conditional transfer entropy (CTE) [50,51,52,53], which stands as the nonlinear, nonparametric extension of conditional GC [27]. Subsequent advancements of the method include multivariate formulations of CTE [45] and momentary information transfer [54], which extends TE by examining the transfer of information at each time step. Other information-theoretic methods, derived from dynamical system theory [55,56,57,58], quantify causality as the amount of information that flows from one process to another as dictated by the governing equations.
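
Transfer entropy can be estimated directly from data by comparing conditional Shannon entropies of discretized time series. Below is a minimal, histogram-based sketch of the definition TE(X→Y) = H(Y_t | Y_{t−1}) − H(Y_t | Y_{t−1}, X_{t−1}); the binning, lag, and variable names are illustrative assumptions, not the estimators used in the cited works.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of TE from x to y (one-step lag), in bits.
    TE(X->Y) = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1})."""
    # Discretize both series into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_t, y_p, x_p = yd[1:], yd[:-1], xd[:-1]   # target, past of Y, past of X

    def H(*vars_):
        # Joint Shannon entropy of the given discrete variables, in bits.
        _, counts = np.unique(np.column_stack(vars_), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(Y_t, Y_p) - H(Y_p) - H(Y_t, Y_p, X_p) + H(Y_p, X_p)
    return H(y_t, y_p) - H(y_p) - H(y_t, y_p, x_p) + H(y_p, x_p)

# Toy check: y is driven by the past of x, so TE(x -> y) should exceed TE(y -> x).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```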

Another family of methods for causal inference relies on conducting conditional independence tests. This approach was popularized by the Peter-Clark algorithm (PC) [59], with subsequent extensions incorporating tests for momentary conditional independence (PCMCI) [23,60]. PCMCI aims to optimally identify a reduced conditioning set that includes the parents of the target variable [61]. This method has been shown to be effective in accurately detecting causal relationships while controlling for false positives [23]. Recently, new PCMCI variants have been developed for identifying contemporaneous links [62], latent confounders [63], and regime-dependent relationships [64].
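
The primitive underlying PC and PCMCI is a conditional independence test between a candidate driver and the target, given a conditioning set. The sketch below shows one common choice, a partial-correlation test on linear residuals; it illustrates the test primitive only, not the PCMCI procedure for selecting the conditioning set, and the toy data are illustrative.

```python
import numpy as np
from scipy import stats

def partial_corr_test(x, y, Z):
    """Test X independent of Y given Z via partial correlation of linear residuals.
    Returns (partial correlation, two-sided p-value)."""
    Z = np.column_stack([np.ones(len(x)), Z])            # conditioning set + intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # residual of X given Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]    # residual of Y given Z
    r = np.corrcoef(rx, ry)[0, 1]
    dof = len(x) - 2 - (Z.shape[1] - 1)                  # n - 2 - |Z|
    t = r * np.sqrt(dof / (1.0 - r**2))
    return r, 2 * stats.t.sf(abs(t), dof)

# Toy check: x and y share a common driver z (a confounder), so they are correlated,
# but conditioning on z should render them approximately independent (large p-value).
rng = np.random.default_rng(1)
z = rng.normal(size=2000)
x = z + 0.5 * rng.normal(size=2000)
y = z + 0.5 * rng.normal(size=2000)
print(partial_corr_test(x, y, z.reshape(-1, 1)))
```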

The methods for causal inference discussed above have significantly advanced our understanding of cause-effect interactions in complex systems. Despite the progress, current approaches face limitations in the presence of nonlinear dependencies, stochastic interactions (i.e., noise), self-causation, and mediator, confounder, and collider effects, among others. Moreover, they are not capable of classifying causal interactions as redundant, unique, and synergistic, which is crucial for identifying the fundamental relationships within the system. Another gap in existing methodologies is their inability to quantify causality that remains unaccounted for due to unobserved variables. To address these shortcomings, we propose SURD: Synergistic-Unique-Redundant Decomposition of causality. SURD offers causal quantification in terms of redundant, unique, and synergistic contributions and provides a measure of the causality from hidden variables. The approach can be used to detect causal relationships in systems with multiple variables, dependencies at different time lags, and instantaneous links. We demonstrate the performance of SURD across a large collection of scenarios that have proven challenging for causal inference and compare the results to previous approaches.

Nov 1, 2024

Ultra-low-power neuromorphic hardware shows promise for energy-efficient AI computation

Posted in categories: information science, internet, robotics/AI

A team including researchers from Seoul National University College of Engineering has developed neuromorphic hardware capable of performing artificial intelligence (AI) computations with ultra-low power consumption. The research, published in the journal Nature Nanotechnology, addresses fundamental issues in existing intelligent semiconductor materials and devices while demonstrating potential for array-level technology.

Currently, vast amounts of power are consumed by parallel computing for processing big data in fields such as the Internet of Things (IoT), user data analytics, generative AI, large language models (LLMs), and autonomous driving. However, the conventional silicon-based CMOS semiconductor computing used for parallel computation faces problems such as high energy consumption, limited memory and processor speeds, and the physical limits of high-density fabrication processes. This results in energy and carbon-emission issues, despite AI’s positive contributions to daily life.

To address these challenges, it’s necessary to overcome the limitations of digital von Neumann computing architectures. As such, the development of next-generation intelligent-semiconductor-based neuromorphic hardware that mimics the working principles of the human brain has emerged as a critical task.

Oct 29, 2024

Idaho State Researcher Develops Algorithm to Model Brain Activity

Posted in categories: biotech/medical, information science, robotics/AI

Thanks to an algorithm created by an Idaho State University professor, the way engineers, doctors, and physicists tackle the hard questions in their respective fields could all change.

Emanuele Zappala, an assistant professor of mathematics at ISU, and his colleagues at Yale have developed the Attentional Neural Integral Equations algorithm, or ANIE for short. Their work was recently published in Nature Machine Intelligence and describes how ANIE can model large, complex systems using data alone.

“Natural phenomena – everything from plasma physics to how viruses spread – are all governed by equations which we do not fully understand,” explains Zappala. “One of the main complexities lies in long-distance relations between different data points in the systems over space and time. What ANIE does is it allows us to learn these complex systems using just those known data points.”
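
As a rough illustration of the idea (not the authors’ published ANIE implementation), an integral operator can be parameterized with self-attention over sampled points, and the resulting integral equation solved by fixed-point iteration. All layer sizes, names, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionIntegralOperator(nn.Module):
    """Approximates (K u)(t) = ∫ k(t, s) u(s) ds with self-attention over sampled
    points, so a learned kernel can capture long-range dependencies in the domain."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.q = nn.Linear(dim + 1, hidden)   # queries built from (u(t), t)
        self.k = nn.Linear(dim + 1, hidden)   # keys built from (u(s), s)
        self.v = nn.Linear(dim + 1, dim)      # values approximate the integrand

    def forward(self, u, t):
        # u: (batch, n_points, dim) samples of u; t: (n_points,) locations in [0, 1]
        pos = t.expand(u.shape[0], -1).unsqueeze(-1)
        z = torch.cat([u, pos], dim=-1)
        queries, keys, values = self.q(z), self.k(z), self.v(z)
        attn = torch.softmax(queries @ keys.transpose(1, 2) / keys.shape[-1] ** 0.5, dim=-1)
        # The softmax weights play the role of learned quadrature weights.
        return attn @ values

def solve_integral_equation(g, operator, t, iters=8):
    """Fixed-point iteration for u(t) = g(t) + ∫ k(t, s) u(s) ds."""
    u = g.clone()
    for _ in range(iters):
        u = g + operator(u, t)
    return u

# Toy usage with random data; in practice the operator would be trained so that
# the solved u(t) reproduces observed trajectories of the system.
t = torch.linspace(0, 1, 32)
g = torch.randn(4, 32, 3)               # batch of forcing terms g(t)
op = AttentionIntegralOperator(dim=3)
print(solve_integral_equation(g, op, t).shape)   # torch.Size([4, 32, 3])
```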

Oct 29, 2024

Michael Levin: What is Synthbiosis? Diverse Intelligence Beyond AI & The Space of Possible Minds

Posted in categories: bioengineering, biotech/medical, cyborgs, education, ethics, genetics, information science, robotics/AI

Michael Levin is a Distinguished Professor in the Biology department at Tufts University and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University. @drmichaellevin holds the Vannevar Bush endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Prior to college, Michael Levin worked as a software engineer and independent contractor in the field of scientific computing. He attended Tufts University with an interest in artificial intelligence and unconventional computation. To explore the algorithms by which the biological world implements complex adaptive behavior, he earned dual B.S. degrees in computer science and biology, then received a PhD from Harvard University. He did post-doctoral training at Harvard Medical School, where he began to uncover a new bioelectric language by which cells coordinate their activity during embryogenesis. His independent laboratory develops new molecular-genetic and conceptual tools to probe large-scale information processing in regeneration, embryogenesis, and cancer suppression.

TIMESTAMPS:
0:00 — Introduction.
1:41 — Creating High-level General Intelligences.
7:00 — Ethical implications of Diverse Intelligence beyond AI & LLMs.
10:30 — Solving the Fundamental Paradox that faces all Species.
15:00 — Evolution creates Problem Solving Agents & the Self is a Dynamical Construct.
23:00 — Mike on Stephen Grossberg.
26:20 — A Formal Definition of Diverse Intelligence (DI)
30:50 — Intimate relationships with AI? Importance of Cognitive Light Cones.
38:00 — Cyborgs, hybrids, chimeras, & a new concept called “Synthbiosis”
45:51 — Importance of the symbiotic relationship between Science & Philosophy.
53:00 — The Space of Possible Minds.
58:30 — Is Mike Playing God?
1:02:45 — A path forward: through the ethics filter for civilization.
1:09:00 — Mike on Daniel Dennett (RIP)
1:14:02 — An Ethical Synthbiosis that goes beyond “are you real or faking it”
1:25:47 — Conclusion.

Continue reading “Michael Levin: What is Synthbiosis? Diverse Intelligence Beyond AI & The Space of Possible Minds” »

Oct 29, 2024

Malur Narayan Shares About A Large Language Model Trained with Diverse Histories & Inclusive Voices

Posted in categories: information science, neuroscience, quantum physics, robotics/AI, sustainability

Here’s Malur Narayan of Latimer AI sharing his work on removing bias and on setting a standard for identifying and measuring it in artificial intelligence systems and LLMs.

Malur is a tech leader in AI/ML, mobile, and quantum, and an advocate for tech for good and responsible AI.

Continue reading “Malur Narayan Shares About A Large Language Model Trained with Diverse Histories & Inclusive Voices” »

Oct 28, 2024

Computers normally can’t see optical illusions — but a scientist combined AI with quantum mechanics to make it happen

Posted in categories: information science, particle physics, quantum physics, robotics/AI

The AI system is dubbed a “quantum-tunneling deep neural network” and combines neural networks with quantum tunneling. A deep neural network is a collection of machine learning algorithms inspired by the structure and function of the brain, with multiple layers of nodes between the input and output. It can model complex nonlinear relationships; unlike conventional neural networks (which have a single layer between input and output), deep neural networks include many hidden layers.

Quantum tunneling, meanwhile, occurs when a subatomic particle, such as an electron or photon (a particle of light), passes through a barrier that would classically be impenetrable. Because such a particle also behaves as a wave — when it is not directly observed it has no fixed location — it has a small but finite probability of being found on the other side of the barrier. When enough particles are present, some will “tunnel” through the barrier.

After the data representing the optical illusion passes through the quantum tunneling stage, the slightly altered image is processed by a deep neural network.
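
As a rough sketch of the pre-processing idea described above (not the authors’ published model), one can map each pixel intensity to a WKB-style tunneling transmission probability, T ≈ exp(−2L√(2m(V − E))/ħ), treating intensity as the particle energy E relative to a barrier of height V and width L, before passing the transformed image to an ordinary deep neural network. All parameter choices below are illustrative assumptions.

```python
import numpy as np

def tunneling_transform(image, V=1.0, width=1.0):
    """Map pixel intensities in [0, 1] to approximate tunneling probabilities.
    For E < V the WKB-style estimate is T ≈ exp(-2 * width * sqrt(2 * (V - E)))
    (with m = ħ = 1); for E >= V the particle passes classically, so T = 1.
    A toy nonlinearity, not the published quantum-tunneling network."""
    E = np.clip(image, 0.0, 1.0)
    kappa = np.sqrt(2.0 * np.clip(V - E, 0.0, None))
    return np.where(E >= V, 1.0, np.exp(-2.0 * width * kappa))

# The transformed image would then be flattened and fed to a standard
# multi-layer (deep) neural network classifier.
img = np.random.rand(28, 28)            # stand-in for an optical-illusion image
print(tunneling_transform(img).shape)   # (28, 28)
```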

Oct 28, 2024

AI ‘can stunt the skills necessary for independent self-creation’: Relying on algorithms could reshape your entire identity without you realizing

Posted in categories: information science, media & arts, robotics/AI

“If you constantly use an AI to find the music, career or political candidate you like, you might eventually forget how to do this yourself.” Ethicist Muriel Leuenberger considers the personal impact of relying on AI.

Oct 27, 2024

Computer Scientists Establish the Best Way to Traverse a Graph

Posted in categories: computing, information science

A new proof shows that an upgraded version of Dijkstra’s 70-year-old algorithm reigns supreme: it finds the most efficient pathways through any graph.

It doesn’t just tell you the fastest route to one destination; it finds the fastest routes from a starting point to every other node in the graph.

In an interview toward the end of his life, Dijkstra credited his algorithm’s enduring appeal in part to its unusual origin story. “Without pencil and paper you are almost forced to avoid all avoidable complexities,” he said.
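
For reference, the classic algorithm computes shortest-path distances from a single source to every other node using a priority queue; a minimal Python sketch is below. The heap-optimized variant analyzed in the new proof is not shown.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` to every reachable node.
    `graph` maps each node to a list of (neighbor, non-negative weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]                          # (distance so far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                              # stale entry; node already settled
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)],
         "C": [("D", 3)], "D": []}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```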

Continue reading “Computer Scientists Establish the Best Way to Traverse a Graph” »

Oct 27, 2024

NIST Advances 14 Candidates in Post-Quantum Cryptography Digital Signatures Process

Posted in categories: encryption, information science, quantum physics

PRESS RELEASE — After over a year of evaluation, NIST has selected 14 candidates for the second round of the Additional Digital Signatures for the NIST PQC Standardization Process. The advancing digital signature algorithms are:

NIST Internal Report (IR) 8528 describes the evaluation criteria and selection process. Questions may be directed to [email protected]. NIST thanks all of the candidate submission teams for their efforts in this standardization process as well as the cryptographic community at large, which helped analyze the signature schemes.

Moving forward, the second-round candidates have the option of submitting updated specifications and implementations (i.e., “tweaks”). NIST will provide more details to the submission teams in a separate message. This second phase of evaluation and review is estimated to last 12–18 months.

Oct 26, 2024

An Efficient Way to Optimize Laser-Driven Nuclear Fusion

Posted in categories: information science, nuclear energy

In 2022, a nuclear-fusion experiment yielded more energy than was delivered by the lasers that ignited the fusion reaction (see Viewpoint: Nuclear-Fusion Reaction Beats Breakeven). That demonstration was an example of indirect-drive inertial-confinement fusion, in which lasers collapse a fuel pellet by heating a gold can that surrounds it. This approach is less efficient than heating the pellet directly since the pellet absorbs less of the lasers’ energy. Nevertheless, it has been favored by researchers at the largest laser facilities because it is less sensitive to nonuniform laser illumination. Now Duncan Barlow at the University of Bordeaux, France, and his colleagues have devised an efficient way to improve illumination uniformity in direct-drive inertial-confinement fusion [1]. This advance helps overcome a remaining barrier to high-yield direct-drive fusion using existing facilities.

Triggering self-sustaining fusion by inertial confinement requires pressures and temperatures that are achievable only if the fuel pellet implodes with high uniformity. Such uniformity can be prevented by heterogeneities in the laser illumination and in the way the beams interact with the resulting plasma. Usually, researchers identify the laser configuration that minimizes these heterogeneities by iterating radiation-hydrodynamics simulations that are computationally expensive and labor intensive. Barlow and his colleagues developed an automatic, algorithmic approach that bypasses the need for such iterative simulations by approximating some of the beam–plasma interactions.

Compared with an experiment using a spherical, plastic target at the National Ignition Facility in California, the team’s optimization method should deliver an implosion that reaches 2 times the density and 3 times the pressure. But the approach can also be applied to other pellet geometries and at other facilities.
