Archive for the ‘information science’ category: Page 25

Apr 28, 2024

ETH Zurich’s wheeled-legged robot masters urban terrain

Categories: information science, robotics/AI

ETH Zurich researchers have developed a locomotion controller that enables wheeled-legged robots to autonomously navigate a range of urban environments.

The robot was equipped with sophisticated navigational abilities thanks to a combination of machine learning algorithms. It was tested in the cities of Seville, Spain, and Zurich, Switzerland.

With little human assistance, the team’s ANYmal wheeled-legged robot carried out kilometer-scale autonomous operations in urban settings.

Apr 28, 2024

Google Chrome’s new post-quantum cryptography may break TLS connections

Categories: encryption, information science, quantum physics

Some Google Chrome users report having issues connecting to websites, servers, and firewalls after Chrome 124 was released last week with the new quantum-resistant X25519Kyber768 encapsulation mechanism enabled by default.

Google started testing the post-quantum secure TLS key encapsulation mechanism in August and has now enabled it in the latest Chrome version for all users.

The new version utilizes the Kyber768 quantum-resistant key agreement algorithm for TLS 1.3 and QUIC connections to protect Chrome TLS traffic against quantum cryptanalysis.
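
To make the hybrid construction concrete, here is a minimal Python sketch of the X25519-plus-Kyber768 idea: each side runs a classical X25519 exchange and a Kyber768 encapsulation, then derives the session key from both secrets, so an attacker would have to break both algorithms. It assumes the cryptography package and the liboqs-python bindings (imported as oqs); it is a sketch of the general scheme, not Chrome’s exact TLS 1.3 implementation.

```python
# Hybrid (classical + post-quantum) key agreement sketch.
# Assumptions: `pip install cryptography` plus liboqs-python ("oqs") built with Kyber768.
import oqs
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(x25519_secret: bytes, kyber_secret: bytes) -> bytes:
    # Both shared secrets are concatenated and fed through a KDF.
    return HKDF(algorithm=SHA256(), length=32, salt=None,
                info=b"hybrid-kem-demo").derive(x25519_secret + kyber_secret)

# Client: generate an X25519 key pair and a Kyber768 key pair.
client_x25519 = X25519PrivateKey.generate()
client_kem = oqs.KeyEncapsulation("Kyber768")
client_kyber_public = client_kem.generate_keypair()

# Server: its own X25519 key pair, plus an encapsulation against the client's Kyber key.
server_x25519 = X25519PrivateKey.generate()
server_kem = oqs.KeyEncapsulation("Kyber768")
kyber_ciphertext, kyber_secret_server = server_kem.encap_secret(client_kyber_public)
x25519_secret_server = server_x25519.exchange(client_x25519.public_key())

# Client: recover both shared secrets from the server's public values.
x25519_secret_client = client_x25519.exchange(server_x25519.public_key())
kyber_secret_client = client_kem.decap_secret(kyber_ciphertext)

# Both sides arrive at the same session key.
assert session_key(x25519_secret_client, kyber_secret_client) == \
       session_key(x25519_secret_server, kyber_secret_server)
```

The connection failures reported after Chrome 124 are generally attributed to the larger ClientHello that the added Kyber768 key share produces, which some servers and middleboxes do not handle correctly.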

Apr 27, 2024

Unveiling a new quantum frontier: Frequency-domain entanglement

Categories: computing, information science, quantum physics

Scientists have introduced a form of quantum entanglement known as frequency-domain photon number-path entanglement. This advance in quantum physics involves an innovative tool called a frequency beam splitter, which has the unique ability to alter the frequency of individual photons with a 50% success rate.

For years, the scientific community has delved into spatial-domain number-path entanglement, a key player in the realms of quantum metrology and information science.

This concept involves photons prepared in a special pattern, known as a NOON state, in which all of the photons are in one pathway or all in the other. Such states enable applications like super-resolution imaging beyond classical limits, enhanced quantum sensors, and quantum computing algorithms designed for tasks that demand exceptional phase sensitivity.
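
For reference, the phase-sensitivity claim can be made precise with the standard two-mode NOON-state expressions below (textbook notation, not formulas taken from the paper itself):

```latex
% N-photon NOON state across two modes a and b (spatial paths or, here, frequencies)
|\mathrm{NOON}\rangle \;=\; \frac{1}{\sqrt{2}}\Big( |N\rangle_a |0\rangle_b \;+\; e^{iN\phi}\,|0\rangle_a |N\rangle_b \Big)

% The relative phase \phi is accumulated N times faster than for a single photon,
% giving Heisenberg-limited phase uncertainty instead of the shot-noise limit:
\Delta\phi_{\mathrm{NOON}} \;=\; \frac{1}{N}
\qquad\text{vs.}\qquad
\Delta\phi_{\mathrm{shot\ noise}} \;=\; \frac{1}{\sqrt{N}}
```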

Apr 26, 2024

Tweak to Schrödinger’s cat equation could unite Einstein’s relativity and quantum mechanics, study hints

Categories: information science, particle physics, quantum physics

Physicists have proposed modifications to the equation at the heart of the infamous Schrödinger’s cat paradox that could help explain why quantum particles can exist in more than one state simultaneously, while large objects (like the universe) seemingly cannot.

Apr 25, 2024

Quantum Computing Meets Genomics: The Dawn of Hyper-Fast DNA Analysis

Categories: biotech/medical, computing, information science, quantum physics

A new project unites world-leading experts in quantum computing and genomics to develop new methods and algorithms to process biological data.

Researchers aim to harness quantum computing to speed up genomics, enhancing our understanding of DNA and driving advancements in personalized medicine.

A new collaboration has formed, uniting a world-leading interdisciplinary team with skills across quantum computing, genomics, and advanced algorithms. They aim to tackle one of the most challenging computational problems in genomic science: building, augmenting, and analyzing pangenomic datasets for large population samples. Their project sits at the frontiers of research in both biomedical science and quantum computing.

Apr 25, 2024

Scientists Confirm the Incredible Existence of Wigner Crystals

Category: information science

A phenomenon once confined to equations breaks into the observable world.
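
For context, the equations in question describe a competition between Coulomb repulsion and quantum kinetic energy; a brief textbook-style sketch of Wigner’s criterion (not the specific analysis behind the new measurements) looks like this:

```latex
% Density parameter: mean inter-electron spacing a in units of the Bohr radius a_B
r_s = \frac{a}{a_B},
\qquad
\frac{E_{\mathrm{Coulomb}}}{E_{\mathrm{kinetic}}}
\sim \frac{e^2/a}{\hbar^2/(m a^2)}
\;\propto\; r_s

% At sufficiently low density (large r_s) repulsion dominates and the electrons
% freeze into a lattice; quantum Monte Carlo estimates place the transition
% around r_s of order 10^2 in three dimensions.
```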

Apr 23, 2024

Navigating The Generative AI Divide: Open-Source Vs. Closed-Source Solutions

Categories: information science, robotics/AI, security

If you’re considering how your organization can use this revolutionary technology, one of the choices you’ll have to make is whether to go with open-source or closed-source (proprietary) tools, models and algorithms.

Why is this decision important? Well, each option offers advantages and disadvantages when it comes to customization, scalability, support and security.

In this article, we’ll explore the key differences and the pros and cons of each approach, and explain the factors to consider when deciding which is right for your organization.

Apr 22, 2024

Single atoms captured morphing into quantum waves in startling image

Categories: information science, particle physics, quantum physics

In the 1920s, Erwin Schrödinger wrote an equation that predicts how particles-turned-waves should behave. Now, researchers are perfectly recreating those predictions in the lab.

By Karmela Padavic-Callaghan
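
The equation in question is the time-dependent Schrödinger equation; for a free particle it predicts, among other things, how an initially localized Gaussian wave packet spreads out over time. The standard textbook statements (not formulas taken from the new study) are:

```latex
% Time-dependent Schrödinger equation for a free particle of mass m in one dimension
i\hbar\,\frac{\partial \psi(x,t)}{\partial t}
= -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x,t)}{\partial x^2}

% A Gaussian wave packet of initial width \sigma_0 spreads as
\sigma(t) = \sigma_0 \sqrt{\,1 + \left(\frac{\hbar t}{2 m \sigma_0^2}\right)^{2}}
```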

Apr 21, 2024

Lights, camera, algorithm: How artificial intelligence is being used to make films

Categories: information science, media & arts, robotics/AI

The “it” Mr Woodman is referring to is Sora, a new text-to-video AI model from OpenAI, the artificial intelligence research organisation behind viral chatbot ChatGPT.

Instead of using their broad technical skills in filmmaking, such as animation, to overcome obstacles in the process, Mr Woodman and his team relied only on the model to generate footage for them, shot by shot.

“We just continued generating and it was almost like post-production and production in the same breath,” says Patrick Cederberg, who also worked on the project.

Apr 20, 2024

Making AI more energy efficient with neuromorphic computing

Categories: biological, information science, mobile phones, robotics/AI

CWI senior researcher Sander Bohté started working on neuromorphic computing as early as 1998, as a PhD student, when the subject was barely on the map. In recent years, Bohté and his CWI colleagues have achieved a number of algorithmic breakthroughs in spiking neural networks (SNNs) that finally make neuromorphic computing practical: in theory, many AI applications can become a factor of a hundred to a thousand more energy-efficient. This means it will be possible to put much more AI into chips, allowing applications to run on a smartwatch or a smartphone. Examples include speech recognition, gesture recognition and the classification of electrocardiograms (ECGs).

“I am really grateful that CWI, and former group leader Han La Poutré in particular, gave me the opportunity to follow my interest, even though at the end of the 1990s neural networks and neuromorphic computing were quite unpopular”, says Bohté. “It was high-risk work for the long haul that is now bearing fruit.”

Spiking neural networks more closely resemble the biology of the brain: they process discrete pulses instead of the continuous signals used in classical neural networks. Unfortunately, that also makes them mathematically much harder to handle, and for many years SNNs were limited in the number of neurons they could train. Thanks to clever algorithmic solutions, Bohté and his colleagues have managed to scale the number of trainable spiking neurons first to thousands in 2021 and then to tens of millions in 2023.
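
To make the contrast with classical networks concrete, here is a minimal Python sketch of a leaky integrate-and-fire (LIF) neuron, the basic building block of an SNN. It illustrates only the general principle of pulse-based, event-driven computation; it is not the training algorithms developed by Bohté and his colleagues.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array with one input value per time step.
    Returns the membrane-potential trace and a binary spike train.
    """
    v = v_reset
    potentials, spikes = [], []
    for i_t in input_current:
        # Leaky integration: the potential decays toward rest while
        # accumulating the input current.
        v += (dt / tau) * (-(v - v_reset) + i_t)
        if v >= v_thresh:        # threshold crossed -> emit a spike (a pulse)
            spikes.append(1)
            v = v_reset          # reset the membrane potential after the spike
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# A constant drive above threshold yields a sparse, regular spike train;
# downstream neurons only need to do work when a spike arrives, which is
# where the potential energy savings of neuromorphic hardware come from.
rng = np.random.default_rng(seed=0)
drive = 1.2 + 0.05 * rng.standard_normal(300)
_, spike_train = lif_neuron(drive)
print("spikes emitted over 300 steps:", int(spike_train.sum()))
```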
