Blog

Archive for the ‘information science’ category: Page 199

Aug 17, 2020

Gearing for the 20/20 Vision of Our Cybernetic Future — The Syntellect Hypothesis, Expanded Edition | Press Release

Posted in categories: computing, cosmology, engineering, information science, mathematics, nanotechnology, neuroscience, quantum physics, singularity

“A neuron in the human brain can never equate the human mind, but this analogy doesn’t hold true for a digital mind, by virtue of its mathematical structure, it may – through evolutionary progression and provided there are no insurmountable evolvability constraints – transcend to the higher-order Syntellect. A mind is a web of patterns fully integrated as a coherent intelligent system; it is a self-generating, self-reflective, self-governing network of sentient components… that evolves, as a rule, by propagating through dimensionality and ascension to ever-higher hierarchical levels of emergent complexity. In this book, the Syntellect emergence is hypothesized to be the next meta-system transition, developmental stage for the human mind – becoming one global mind – that would constitute the quintessence of the looming Cybernetic Singularity.” –Alex M. Vikoulov, The Syntellect Hypothesis https://www.ecstadelic.net/e_news/gearing-for-the-2020-visio…ss-release

#SyntellectHypothesis


Aug 15, 2020

New Algorithm Paves the Way Towards Error-Free Quantum Computing

Posted in categories: computing, information science, quantum physics

To avoid the problem of calculations that grow intractably with the number of qubits, the researchers came up with several shortcuts and simplifications that focus on the most important interactions, making the calculations tractable while still providing results precise enough to be practically useful.

To test their approach, they put it to work on a 14-qubit IBM quantum computer accessed via the company’s IBM Quantum Experience service. They were able to visualize correlations between all pairs of qubits, and even uncovered long-range interactions between qubits that had not previously been detected and that will be crucial for creating error-corrected devices.

They also used simulations to show that they could apply the algorithm to a quantum computer as large as 100 qubits without the calculations becoming intractable. As well as helping to devise error-correction protocols that cancel out the effects of noise, the researchers say their approach could also be used as a diagnostic tool to uncover the microscopic origins of that noise.
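
The excerpt doesn’t include the algorithm itself, but the kind of output it describes, a map of correlations between every pair of qubits, is easy to illustrate. Below is a minimal sketch assuming only that each shot yields a per-qubit error flag; the error rates and the injected correlation between qubits 2 and 11 are invented for illustration:

```python
# Illustrative sketch, not the paper's algorithm: estimate pairwise error
# correlations between qubits from repeated measurements.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_shots = 14, 10_000

# Simulated per-shot error flags: 1 where a qubit's readout flipped.
errors = (rng.random((n_shots, n_qubits)) < 0.02).astype(float)

# Inject a weak long-range correlation between qubits 2 and 11.
shared = (rng.random(n_shots) < 0.01).astype(float)
errors[:, 2] = np.clip(errors[:, 2] + shared, 0.0, 1.0)
errors[:, 11] = np.clip(errors[:, 11] + shared, 0.0, 1.0)

# Pairwise correlation matrix: off-diagonal entries far from zero flag
# correlated noise that independent-error models would miss.
corr = np.corrcoef(errors, rowvar=False)

off_diag = np.abs(corr - np.eye(n_qubits))
i, j = np.unravel_index(np.argmax(off_diag), off_diag.shape)
print(f"strongest cross-qubit correlation: qubits {i} and {j}, r = {corr[i, j]:.3f}")
```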

Aug 15, 2020

Soldiers could teach future robots how to outperform humans

Posted in categories: information science, military, robotics/AI

In the future, a soldier and a game controller may be all that’s needed to teach robots how to outdrive humans.

At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing systems by watching a human drive. The team tested its approach, called adaptive planner parameter learning from demonstration, or APPLD, on one of the Army’s experimental autonomous ground vehicles.

The researchers fused machine learning from demonstration algorithms with more classical autonomous navigation systems. Rather than replacing the classical system altogether, APPLD learns how to tune it to behave more like the human demonstration. This paradigm allows the deployed system to retain all the benefits of classical navigation systems, such as optimality, explainability and safety, while also remaining flexible and adaptable to new environments, Warnell said.
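
The core design choice here, tuning an existing planner instead of replacing it, can be sketched in a few lines. The one-parameter “planner” below is invented purely for illustration; the real APPLD tunes many parameters of a full navigation stack from segmented demonstrations:

```python
# Toy sketch of parameter tuning from demonstration, not the APPLD code.
import numpy as np

def planner(speed_gain: float, n_steps: int = 50) -> np.ndarray:
    """Toy stand-in for a classical navigation system: position over time."""
    return np.cumsum(np.full(n_steps, speed_gain))

# Hypothetical human demonstration (e.g., recorded via a game controller).
demo = planner(0.8) + np.random.default_rng(1).normal(0.0, 0.05, 50)

# Tune the classical system's parameter to imitate the demonstration,
# keeping the planner's optimality, explainability and safety intact.
candidates = np.linspace(0.1, 2.0, 200)
losses = [np.mean((planner(g) - demo) ** 2) for g in candidates]
best = candidates[int(np.argmin(losses))]
print(f"tuned speed_gain = {best:.2f} (demonstration used 0.8)")
```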


Aug 12, 2020

Artificial Intelligence And Data Privacy – Turning A Risk Into A Benefit

Posted in categories: information science, privacy, robotics/AI

On the higher end, they work to keep development open so that it runs on multiple cloud infrastructures, giving companies confidence that portability exists.

That openness is also why deep learning is not yet part of the solution: there is still not enough transparency into the layers of a deep-learning model to support the trust that privacy protection requires. Instead, these systems aim to help manage information privacy for machine-learning applications.

Artificial intelligence applications are often opaque and can put privacy at risk. Adding good tools to address privacy for the data used by AI systems is an important early step in adding trust to the AI equation.

Aug 11, 2020

Time-reversal of an unknown quantum state

Posted in categories: computing, engineering, information science, mathematics, quantum physics

Physicists have long sought to understand the irreversibility of the surrounding world and have credited its emergence to the time-symmetric fundamental laws of physics. According to quantum mechanics, genuinely reversing time requires extremely intricate and implausible scenarios that are unlikely to occur spontaneously in nature. Physicists had previously shown that, while time reversal is exponentially improbable in a natural environment, it is possible to design an algorithm that artificially reverses the arrow of time to a known or given state within an IBM quantum computer. However, that version of the reversed arrow of time only embraced a known quantum state and is therefore comparable to the quantum version of pressing rewind on a video to “reverse the flow of time.”

In a new report now published in Communications Physics, physicists A.V. Lebedev and V.M. Vinokur and colleagues in materials science, physics and advanced engineering in the U.S. and Russia built on their previous work to develop a technical method to reverse the temporal evolution of an arbitrary unknown quantum state. The work opens new routes for general universal algorithms that send the temporal evolution of an arbitrary system backward in time. The report outlines only the mathematical process of time reversal, without experimental implementation.
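
The result described here is about reversing the evolution of a state you do not know. The principle it builds on, that unitary evolution is undone by the inverse operator, can be shown in a toy numpy sketch; here the Hamiltonian (invented for the example) is assumed known, which is exactly the assumption the paper removes:

```python
# Toy illustration of the underlying principle, not the paper's method:
# unitary time evolution is undone by the conjugate-transpose operator.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])    # invented Hermitian Hamiltonian
U = expm(-1j * H * 2.0)                    # forward evolution for t = 2

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state |0>
psi_t = U @ psi0                            # evolve forward in time
psi_back = U.conj().T @ psi_t               # apply the reversed evolution

print(np.allclose(psi_back, psi0))          # True: the state is restored
```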

Aug 9, 2020

An Algorithm Has Been Developed to Obstruct AI Facial Recognition, and It’s Free to Use

Posted in categories: information science, robotics/AI

Are you worried about AI collecting your facial data from all the pictures you have ever posted or shared? Researchers have now developed a method for hindering facial recognition.

It is commonly accepted nowadays that the images we post or share online can end up being used by third parties for one reason or another. It may not be something we truly agree with, but it is a consequence most of us have accepted as the undesirable price of using freely available social media apps and websites.


Aug 8, 2020

F-16 pilots to face off against AI in simulated dogfight for DARPA

Posted in categories: government, information science, robotics/AI

An aerial combat simulation between an F-16 pilot and an artificial intelligence algorithm is part of the government-sponsored “AlphaDogfight Trials” on Aug. 20.

Aug 7, 2020

Algorithm predicts the compositions of new materials

Posted in categories: information science, robotics/AI, solar power, sustainability

A machine-learning algorithm that can predict the compositions of trend-defying new materials has been developed by RIKEN chemists. It will be useful for finding materials for applications where there is a trade-off between two or more desirable properties.

Artificial intelligence has great potential to help scientists find new materials with desirable properties. A machine-learning model that has been trained with the compositions and properties of known materials can predict the properties of unknown materials, saving much time in the lab.

But discovering new materials for applications can be tricky because there is often a trade-off between two or more material properties. One example is organic materials for solar cells, where it is desirable to maximize both the voltage and the current, notes Kei Terayama, who was at the RIKEN Center for Advanced Intelligence Project and is now at Yokohama City University. “There’s a trade-off between voltage and current: a material that exhibits a high voltage will have a low current, whereas one with a high current will have a low voltage.”
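
One concrete way to read “trend-defying”: train a predictor per property on known materials, then screen candidates and rank them by their weaker predicted property, so only balanced performers surface. The sketch below is a generic stand-in with synthetic data and invented descriptors, not the RIKEN algorithm:

```python
# Generic screening sketch with synthetic data; not the RIKEN method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic training data: 8 composition descriptors per known material.
# Descriptor 0 drives the voltage/current trade-off; descriptor 1 is a
# hidden factor that lifts both, so trend-defying materials exist to find.
X_known = rng.random((200, 8))
voltage = X_known[:, 0] + 0.3 * X_known[:, 1] + 0.05 * rng.standard_normal(200)
current = (1.0 - X_known[:, 0]) + 0.3 * X_known[:, 1] + 0.05 * rng.standard_normal(200)

v_model = RandomForestRegressor(random_state=0).fit(X_known, voltage)
c_model = RandomForestRegressor(random_state=0).fit(X_known, current)

# Screen unseen candidate compositions with the trained predictors.
X_cand = rng.random((5000, 8))
v_pred, c_pred = v_model.predict(X_cand), c_model.predict(X_cand)

# Rank candidates by their weaker predicted property: a candidate ranks
# highly only if it does well on BOTH sides of the trade-off.
score = np.minimum(v_pred, c_pred)
for i in np.argsort(score)[::-1][:5]:
    print(f"candidate {i}: voltage={v_pred[i]:.2f}, current={c_pred[i]:.2f}")
```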

Aug 6, 2020

A Quintillion Calculations a Second: DOE Calculating the Benefits of Exascale and Quantum Computers

Posted in categories: information science, quantum physics, supercomputing

A quintillion calculations a second. That’s a 1 with 18 zeros after it. It’s the speed at which an exascale supercomputer will process information. The Department of Energy (DOE) is preparing for the first exascale computer to be deployed in 2021, with two more to follow soon after. Yet quantum computers may eventually complete even more complex calculations faster than these up-and-coming exascale machines. The two technologies, though, complement each other much more than they compete.

It’s going to be a while before quantum computers are ready to tackle major scientific research questions. While quantum researchers and scientists in other areas are collaborating to design quantum computers to be as effective as possible once they’re ready, that’s still a long way off. Scientists are figuring out how to build qubits for quantum computers, the very foundation of the technology. They’re establishing the most fundamental quantum algorithms that they need to do simple calculations. The hardware and algorithms need to be far enough along for coders to develop operating systems and software to do scientific research. Currently, we’re at the same point in quantum computing that scientists in the 1950s were with computers that ran on vacuum tubes. Most of us regularly carry computers in our pockets now, but it took decades to get to this level of accessibility.

In contrast, exascale computers will be ready next year. When they launch, they’ll already be five times faster than our fastest computer – Summit, at Oak Ridge National Laboratory’s Leadership Computing Facility, a DOE Office of Science user facility. Right away, they’ll be able to tackle major challenges in modeling Earth systems, analyzing genes, tracking barriers to fusion, and more. These powerful machines will allow scientists to include more variables in their equations and improve models’ accuracy. As long as we can find new ways to improve conventional computers, we’ll do it.
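
For scale, a quick back-of-envelope check of the numbers above (Summit’s peak is roughly 200 petaflops, which is where the “five times faster” figure comes from):

```python
# Back-of-envelope arithmetic for the scale claims above.
quintillion = 10 ** 18       # exascale: operations per second, a 1 with 18 zeros
summit_peak = 2 * 10 ** 17   # Summit's peak, roughly 200 petaflops

print(f"exascale is about {quintillion / summit_peak:.0f}x Summit's peak")  # ~5x
```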

Aug 6, 2020

AI is learning when it should and shouldn’t defer to a human

Posted in categories: biotech/medical, information science, robotics/AI

The context: Studies show that when people and AI systems work together, they can outperform either one acting alone. Medical diagnostic systems are often checked over by human doctors, and content moderation systems filter what they can before requiring human assistance. But algorithms are rarely designed to optimize for this AI-to-human handover. If they were, the AI system would only defer to its human counterpart if the person could actually make a better decision.

The research: Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed an AI system to do this kind of optimization based on the strengths and weaknesses of the human collaborator. It uses two separate machine-learning models: one makes the actual decision, whether that’s diagnosing a patient or removing a social media post, and the other predicts whether the AI or the human is the better decision maker.

The latter model, which the researchers call “the rejector,” iteratively improves its predictions based on each decision maker’s track record over time. It can also take into account factors beyond performance, including a person’s time constraints or a doctor’s access to sensitive patient information not available to the AI system.
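
A minimal sketch of that two-model setup, assuming synthetic data for the task and for the human’s track record; the model choices and deferral rule here are illustrative guesses, not the CSAIL implementation:

```python
# Hedged sketch of a decision model plus "rejector"; synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic task with label noise, so the AI model is imperfect.
X = rng.random((2000, 10))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(2000) > 1.0).astype(int)

# Simulated human track record: reliable when feature 2 is high, guessing
# otherwise (e.g., a doctor with extra context on some cases).
human_pred = np.where(X[:, 2] > 0.5, y, rng.integers(0, 2, len(y)))

decision_model = LogisticRegression().fit(X, y)   # makes the actual decision
ai_correct = decision_model.predict(X) == y
human_correct = human_pred == y

# The rejector learns from both track records when deferring pays off.
defer_label = (human_correct & ~ai_correct).astype(int)
rejector = LogisticRegression().fit(X, defer_label)

x_new = rng.random((1, 10))
if rejector.predict(x_new)[0] == 1:
    print("defer to the human expert")
else:
    print("AI decision:", decision_model.predict(x_new)[0])
```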