
Archive for the ‘information science’ category: Page 143

Mar 18, 2022

Artificial neurons help decode cortical signals

Posted by in categories: information science, robotics/AI

Russian scientists have proposed a new algorithm for automatically decoding neural signals and interpreting the decoder weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic chair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most frequently, brain-computer interfaces use the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed to translate neuronal signals into commands. Traditional methods of signal processing require painstaking work to identify informative features—signal characteristics that, from a researcher’s point of view, appear to be most important for the decoding task.
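The decoding step described above can be illustrated with a toy linear decoder. This is a hedged sketch, not the authors' method: the simulated "features" and least-squares fit stand in for real EEG/MEG processing, and the point is that the decoder's weight vector is the object one then tries to interpret.

```python
import numpy as np

# Hypothetical sketch: a linear decoder maps measured neural features
# (e.g. band power per channel) to a command signal. The decoder
# weights are what interpretation methods then try to explain.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 16

X = rng.normal(size=(n_trials, n_features))       # simulated feature matrix
true_w = np.zeros(n_features)
true_w[:3] = [1.5, -2.0, 0.7]                     # only 3 informative features
y = X @ true_w + 0.1 * rng.normal(size=n_trials)  # simulated command signal

# Least-squares decoder: w = argmin ||X w - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The largest-magnitude weights point back at the informative features.
top = np.argsort(-np.abs(w))[:3]
print(sorted(int(i) for i in top))  # → [0, 1, 2]
```

In this toy setting the fitted weights recover the informative features directly; the article's point is that with real, correlated neural data such interpretation is much harder, which is what the proposed algorithm addresses.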

Mar 17, 2022

Mathematical paradoxes demonstrate the limits of AI

Posted by in categories: information science, mathematics, robotics/AI

Humans are usually pretty good at recognizing when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.

Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realize when it’s making a mistake than to produce a correct result.

Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.

Mar 17, 2022

Wormholes May Be Lurking in the Universe — Here Are Proposed Ways of Finding Them

Posted by in categories: cosmology, information science, physics

Albert Einstein’s theory of general relativity profoundly changed our thinking about fundamental concepts in physics, such as space and time. But it also left us with some deep mysteries. One was black holes, which were only unequivocally detected over the past few years. Another was “wormholes” – bridges connecting different points in spacetime, in theory providing shortcuts for space travellers.

Wormholes are still in the realm of the imagination. But some scientists think we will soon be able to find them, too. Over the past few months, several new studies have suggested intriguing ways forward.

Black holes and wormholes are special types of solutions to Einstein’s equations, arising when the structure of spacetime is strongly bent by gravity. For example, when matter is extremely dense, the fabric of spacetime can become so curved that not even light can escape. This is a black hole.

Mar 15, 2022

Machine Learning Reimagines the Building Blocks of Computing

Posted by in categories: information science, robotics/AI

Traditional algorithms power complicated computational tools like machine learning. A new approach, called algorithms with predictions, uses the power of machine learning to improve algorithms.
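The core idea of algorithms with predictions can be shown in a few lines. The sketch below is this post's own illustration, not code from the research: a search over a sorted array starts at a learned position estimate and falls back to exponential search, so a good prediction makes it fast while a bad one still leaves it correct.

```python
import bisect

def predicted_search(arr, target, predict):
    """Find the index of target in sorted arr, starting from a predicted
    index. Cost grows only with how wrong the prediction is."""
    n = len(arr)
    p = min(max(predict(target), 0), n - 1)
    # Grow a window outward from the prediction until it brackets the target
    lo, step = p, 1
    while lo > 0 and arr[lo] > target:
        lo = max(0, lo - step)
        step *= 2
    hi, step = p, 1
    while hi < n - 1 and arr[hi] < target:
        hi = min(n - 1, hi + step)
        step *= 2
    # Classical fallback: binary search inside the bracketing window
    i = bisect.bisect_left(arr, target, lo, hi + 1)
    return i if i < n and arr[i] == target else -1

arr = list(range(0, 1000, 2))   # sorted even numbers
perfect = lambda x: x // 2      # an exact "learned" index predictor
sloppy = lambda x: 0            # a useless predictor still gives a correct answer
print(predicted_search(arr, 424, perfect), predicted_search(arr, 424, sloppy))  # → 212 212
```

The design point is the one the article makes: the machine-learned part only supplies a hint, so the worst case of the classical algorithm is preserved while the typical case improves.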

Mar 15, 2022

When It Comes to AI, Can We Ditch the Datasets?

Posted by in categories: information science, robotics/AI

Summary: Training a machine learning algorithm with synthetic data for image classification can rival one trained on traditional datasets.

Source: MIT

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model’s performance.

Mar 15, 2022

Entanglement unlocks scaling for quantum machine learning

Posted by in categories: information science, quantum physics, robotics/AI

The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
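The averaging claim can be checked exhaustively on a toy domain. The following sketch illustrates the classical theorem and is not taken from the paper: over all 16 labelings of a four-point domain, an arbitrary learner's mean accuracy on unseen points comes out to exactly 1/2.

```python
from itertools import product

# Toy check of the classical No-Free-Lunch theorem: averaged over ALL
# possible labelings of a finite domain, any learner's accuracy on
# points outside the training set is exactly 1/2.
domain = [0, 1, 2, 3]
train_points = [0, 1]   # the learner sees labels here
test_points = [2, 3]    # it is evaluated here

def learner(train_labels):
    # An arbitrary rule: predict 1 everywhere if any training label is 1.
    guess = int(sum(train_labels) >= 1)
    return lambda x: guess

accs = []
for labeling in product([0, 1], repeat=len(domain)):   # all 16 target functions
    f = learner([labeling[p] for p in train_points])
    correct = sum(f(x) == labeling[x] for x in test_points)
    accs.append(correct / len(test_points))

print(sum(accs) / len(accs))  # → 0.5
```

Swapping in any other rule for `learner` gives the same average, which is exactly why more data, restricting the class of plausible functions, is what buys performance in practice.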

Mar 15, 2022

The promise of AI with Demis Hassabis — DeepMind: The Podcast (Season 2, Episode 9)

Posted by in categories: information science, media & arts, robotics/AI

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].


Mar 14, 2022

Study highlights the potential of neuromorphic architectures to perform random walk computations

Posted by in categories: information science, mathematics, robotics/AI, space

Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation: random walk computations. These are computations that involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well-suited for implementing these computations and could thus reach beyond machine learning applications.

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”
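Random walk computations of the kind described can be sketched in a few lines of ordinary, non-neuromorphic code; this toy example is an illustration, not Sandia's implementation. Many independent walkers estimate a boundary-hitting probability, the sort of embarrassingly parallel stochastic workload the study maps onto spiking hardware.

```python
import random

# Illustrative sketch: a 1-D random walk between absorbing boundaries.
# Each walker is independent, which is what makes this workload a good
# fit for hardware with many simple parallel units.
def walk_until_boundary(start, lo, hi, rng):
    """Step +/-1 with equal probability; return 1 if we exit at hi, else 0."""
    pos = start
    while lo < pos < hi:
        pos += rng.choice((-1, 1))
    return 1 if pos == hi else 0

rng = random.Random(42)
n_walkers = 20_000
hits = sum(walk_until_boundary(2, 0, 10, rng) for _ in range(n_walkers))
estimate = hits / n_walkers

# For a symmetric walk, the exact exit probability is start / hi = 0.2,
# so the Monte Carlo estimate should land close to that.
print(estimate)
```

Problems like this (diffusion, financial modeling, particle transport) reduce to averaging over huge numbers of such walkers, which is why the architecture question matters.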

Mar 13, 2022

New algorithm could help enable next-generation deep brain stimulation devices

Posted by in categories: bioengineering, biotech/medical, information science, neuroscience

Now, an algorithm developed by Brown University bioengineers could be an important step toward such adaptive DBS. The algorithm removes a key hurdle that makes it difficult for DBS systems to sense neural signals while simultaneously delivering stimulation.

“We know that there are signals in the brain associated with disease states, and we’d like to be able to record those signals and use them to adjust neuromodulation therapy automatically,” said David Borton, an assistant professor of biomedical engineering at Brown and corresponding author of a study describing the algorithm. “The problem is that stimulation creates electrical artifacts that corrupt the signals we’re trying to record. So we’ve developed a means of identifying and removing those artifacts, so all that’s left is the signal of interest from the brain.”
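One common way to remove such stimulation artifacts is template subtraction; the sketch below illustrates that general idea and is not the Brown team's algorithm. An identical artifact is injected at each stimulation pulse, averaged into a template, and subtracted back out.

```python
import numpy as np

# Hedged sketch of template subtraction for a periodic stimulation
# artifact: average the pulse-aligned segments to build a template,
# then subtract it at every pulse, leaving the neural signal.
rng = np.random.default_rng(1)
fs, n = 1000, 5000                        # 1 kHz sampling, 5 s recording
t = np.arange(n) / fs

neural = 0.5 * np.sin(2 * np.pi * 6 * t)  # slow "signal of interest"
recording = neural + 0.05 * rng.normal(size=n)

# Inject an identical artifact at every stimulation pulse (~130 Hz, DBS-like)
artifact = 4.0 * np.exp(-np.arange(5))
pulse_starts = np.arange(0, n - 5, fs // 130)
for s in pulse_starts:
    recording[s:s + 5] += artifact

# Average all pulse-aligned segments into a template, then subtract it
template = np.mean([recording[s:s + 5] for s in pulse_starts], axis=0)
cleaned = recording.copy()
for s in pulse_starts:
    cleaned[s:s + 5] -= template

err_before = np.abs(recording - neural).max()
err_after = np.abs(cleaned - neural).max()
print(err_before, err_after)  # artifact dominates before, noise-level after
```

Real adaptive DBS is harder than this toy: artifacts are orders of magnitude larger than neural signals and not perfectly repeatable, which is the hurdle the Brown algorithm targets.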

Mar 13, 2022

AI Overcomes Stumbling Block on Brain-Inspired Hardware

Posted by in categories: information science, robotics/AI

Algorithms that use the brain’s communication signal can now work on analog neuromorphic chips, which closely mimic our energy-efficient brains.