Blog

Archive for the ‘information science’ category: Page 33

Mar 1, 2024

Elon Musk sues OpenAI for abandoning its mission to benefit humanity

Posted by in categories: Elon Musk, health, information science, law, robotics/AI

Elon Musk claims OpenAI is using GPT-4 to ‘maximize profits’ instead of ‘for the benefit of humanity.’


The lawsuit claims that the GPT-4 model OpenAI released in March 2023 isn't just capable of reasoning but is actually "better at reasoning than average humans," having scored in the 90th percentile on the Uniform Bar Examination for lawyers. The company is rumored to be developing a more advanced model, known as "Q Star," with a stronger claim to being true artificial general intelligence (AGI).

Altman was fired (and rehired five days later) by OpenAI in 2023 over vague claims that his communication with the board was "hindering its ability to exercise its responsibilities." Musk's lawsuit alleges that in the days following this event, Altman, Brockman, and Microsoft "exploited Microsoft's significant leverage over OpenAI" to replace board members with handpicked alternatives more amenable to Microsoft.

Mar 1, 2024

Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage

Posted by in categories: computing, information science, quantum physics

Popular Summary

Unequivocally demonstrating that a quantum computer can significantly outperform any existing classical computer would be a milestone in quantum science and technology. Recently, groups at Google and at the University of Science and Technology of China (USTC) announced that they have achieved such a quantum computational advantage. The central quantity behind their claims is the linear cross-entropy benchmark (XEB), which has been used to approximate the fidelity of their quantum experiments and to certify the correctness of their computation results. However, such claims rely on several assumptions, some of which are only implicit. Hence, it is critical to understand when and how the XEB can be used in quantum advantage experiments. By combining tools from computer science, statistical physics, and quantum information, we critically examine the properties of the XEB and show that it bears several intrinsic vulnerabilities, limiting its utility as a benchmark for quantum advantage.
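
For readers unfamiliar with the quantity, the linear XEB is conventionally defined from the ideal output probabilities of the sampled bitstrings. A standard formulation, stated here for context since the summary above does not spell it out, is:

```latex
% Linear cross-entropy benchmark for an n-qubit circuit:
% x_1, ..., x_M are bitstrings sampled from the device, and
% p_ideal(x) is the circuit's ideal output probability of x,
% computed by classical simulation.
F_{\mathrm{XEB}} = \frac{2^n}{M} \sum_{i=1}^{M} p_{\mathrm{ideal}}(x_i) - 1
```

Uniform random guessing gives an XEB near zero, while sampling from the ideal output distribution of a deep random circuit gives a value near one, which is why a markedly positive XEB has been read as a proxy for fidelity.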

Concretely, we introduce a novel framework to identify and exploit several vulnerabilities of the XEB, leading to an efficient classical algorithm that obtains XEB values comparable to those of Google's and USTC's quantum devices (2%–12% of theirs) with a single GPU within 2 s. Furthermore, its performance scales better with system size than that of a noisy quantum device. We observe that this is possible because the XEB can greatly overestimate the fidelity, which implies the existence of "shortcuts" to high XEB values that do not require simulating the system. This contradicts the intuition that achieving high XEB values is hard for all possible classical algorithms.
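
As a concrete illustration of how the benchmark is evaluated, here is a minimal Python sketch. The ideal probabilities and samples are toy stand-ins; in a real experiment, p_ideal comes from classically simulating the circuit.

```python
import numpy as np

def linear_xeb(p_ideal, samples, n_qubits):
    """Linear XEB: F = 2^n * mean(p_ideal over sampled bitstrings) - 1."""
    return (2 ** n_qubits) * np.mean(p_ideal[samples]) - 1.0

rng = np.random.default_rng(0)
n = 2
p = rng.dirichlet(np.ones(2 ** n))             # hypothetical ideal distribution
good = rng.choice(2 ** n, size=100_000, p=p)   # samples from the ideal distribution
noise = rng.integers(0, 2 ** n, size=100_000)  # uniform noise, no signal

print(linear_xeb(p, good, n))    # clearly positive
print(linear_xeb(p, noise, n))   # close to zero
```

The paper's point is that a positive XEB need not come from faithful simulation: a classical "spoofing" algorithm can exploit structure in the circuits to concentrate its samples on high-probability bitstrings without computing the full quantum state.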

Mar 1, 2024

AI could find out when cancer cells will resist chemotherapy

Posted by in categories: biotech/medical, information science, nanotechnology, robotics/AI

In a new study, scientists have been able to leverage a machine learning algorithm to tackle one of the biggest challenges facing cancer researchers — predicting when cancer will resist chemotherapy.


But in what could be a game-changer, scientists at the University of California San Diego School of Medicine revealed today in a study that a high-tech machine learning tool might just figure out when cancer is going to give the cold shoulder to chemotherapy.


Feb 28, 2024

AI Is Everywhere—Including Countless Applications You’ve Likely Never Heard Of

Posted by in categories: information science, mapping, robotics/AI, transportation

One major area of our lives that uses largely “hidden” AI is transportation. Millions of flights and train trips are coordinated by AI all over the world. These AI systems are meant to optimize schedules to reduce costs and maximize efficiency.

Artificial intelligence can also manage real-time road traffic by analyzing traffic patterns, volume and other factors, and then adjusting traffic lights and signals accordingly. Navigation apps like Google Maps also use AI optimization algorithms to find the best path in their navigation systems.
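
The "best path" computation alluded to here is classically a shortest-path problem; navigation systems build on algorithms such as Dijkstra's or A* (the exact algorithms inside Google Maps are not public). A minimal Dijkstra sketch over an invented toy road network:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest travel time from start to goal.

    graph: dict mapping node -> list of (neighbor, travel_time) edges.
    Returns (total_time, path), or (inf, []) if goal is unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, t in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (time + t, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road network: edge weights are travel times in minutes.
roads = {
    "home": [("junction", 5), ("highway", 2)],
    "highway": [("junction", 1), ("office", 10)],
    "junction": [("office", 4)],
}
print(dijkstra(roads, "home", "office"))
# (7.0, ['home', 'highway', 'junction', 'office'])
```

Production systems layer live traffic data and precomputed road hierarchies on top of this, but the core idea is the same: continuously re-solve a weighted shortest-path problem as the edge costs change.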

AI is also present in various everyday items. Robot vacuum cleaners use AI software to process all their sensor inputs and deftly navigate our homes.

Feb 28, 2024

A General Equation of State for a Quantum Simulator

Posted by in categories: information science, particle physics, quantum physics

Researchers have characterized the thermodynamic properties of a model that uses cold atoms to simulate condensed-matter phenomena.

Feb 27, 2024

Frontiers: Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems

Posted by in categories: biotech/medical, information science, neuroscience, robotics/AI, supercomputing

This feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems using efficient, parallel, low-power computation, and the goal of NE is to design systems capable of such brain-like computation. Numerous large-scale neuromorphic projects have emerged recently; the interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum.

NE has a twofold goal: first, a scientific goal, to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal, to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Compared to conventional CPUs, neuromorphic emulators are therefore beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks.

In this review article, we describe some of the most significant neuromorphic spiking emulators, compare their different architectures and approaches, illustrate their advantages and drawbacks, and highlight the capabilities each can deliver to neural modelers. The article focuses on large-scale emulators and is a continuation of a previous review of neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses to achieve these specifications. Even existing supercomputing platforms are unable to demonstrate a full cortex simulation in real time with complex, detailed neuron models. For example, for mouse-scale (2.5 × 10⁶ neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10¹⁰ neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10¹⁸ flops) and as much power as a quarter-million households (0.5 GW).

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data-processing requirements. Neuromorphic computing is an alternative solution inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of neural computation, fundamentally different from the digital principles of traditional computing, initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks built from silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulation.
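
To make "emulating the electrophysiological behavior of real neurons" concrete, here is a minimal software simulation of a leaky integrate-and-fire neuron, the kind of dynamics that silicon neurons implement directly in analog circuitry. The parameters are illustrative and not taken from any particular chip or model in the review.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R_m * I.

    i_input: input current (amperes) at each time step of length dt.
    Returns the membrane-voltage trace and the spike times (seconds).
    """
    v = v_rest
    trace, spike_times = [], []
    for step, i in enumerate(i_input):
        v += (-(v - v_rest) + r_m * i) * (dt / tau)  # Euler integration step
        if v >= v_thresh:                            # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset                              # reset membrane voltage
        trace.append(v)
    return np.array(trace), spike_times

# A constant 2 nA input for 100 ms drives the neuron to fire repeatedly.
trace, spikes = simulate_lif(np.full(1000, 2e-9))
print(f"{len(spikes)} spikes in 100 ms")
```

A general-purpose CPU must step through this loop for every neuron; a neuromorphic chip instead lets the physics of its analog circuits integrate the same equation in parallel for all neurons at once, which is where the energy and speed advantages come from.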

Feb 27, 2024

Super-Resolution Microscopy Harnesses Digital Display Technology

Posted by in categories: information science, innovation

In the ever-evolving realm of microscopy, recent years have witnessed remarkable strides in both hardware and algorithms, propelling our ability to explore the infinitesimal wonders of life. However, the journey towards three-dimensional structured illumination microscopy (3DSIM) has been hampered by challenges arising from the speed and intricacy of polarization modulation.

Enter the high-speed modulation 3DSIM system “DMD-3DSIM,” combining digital display with super-resolution imaging, allowing scientists to see cellular structures in unprecedented detail.

As reported in Advanced Photonics Nexus, Professor Peng Xi’s team at Peking University developed this innovative setup around a digital micromirror device (DMD) and an electro-optic modulator (EOM). It tackles resolution challenges by significantly improving both lateral (side-to-side) and axial (top-to-bottom) resolution, for a 3D spatial resolution reportedly twice that achieved by traditional wide-field imaging techniques.
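
The resolution doubling mentioned here is a general property of structured illumination: mixing the sample's spatial frequencies with the illumination pattern's frequency shifts otherwise-unobservable detail into the microscope's passband. Schematically, as a textbook relation for SIM in general rather than a claim about DMD-3DSIM's specific numbers:

```latex
% A widefield microscope observes sample frequencies up to the detection
% cutoff k_det. Illuminating with a pattern of spatial frequency k_ill
% makes sample frequencies up to k_det + k_ill observable:
k_{\mathrm{eff}} = k_{\mathrm{det}} + k_{\mathrm{ill}},
\qquad k_{\mathrm{ill}} \le k_{\mathrm{det}}
\;\Rightarrow\; k_{\mathrm{eff}} \le 2\,k_{\mathrm{det}}
```

Because the illumination pattern is projected through the same optics, its frequency cannot exceed the detection cutoff, which caps the gain at roughly a factor of two in each direction.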

Feb 27, 2024

Algorithms are everywhere

Posted by in categories: education, energy, information science, internet

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are.

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

Feb 26, 2024

Fundamental equation for superconducting quantum bits revised

Posted by in categories: computing, information science, quantum physics

Physicists from Forschungszentrum Jülich and the Karlsruhe Institute of Technology have uncovered that Josephson tunnel junctions—the fundamental building blocks of superconducting quantum computers—are more complex than previously thought.

Just like overtones in a musical instrument, harmonics are superimposed on the fundamental mode. As a consequence, corrections to the model may lead to quantum bits that are two to seven times more stable. The researchers support their findings with experimental evidence from multiple laboratories across the globe, including the University of Cologne, Ecole Normale Supérieure in Paris, and IBM Quantum in New York.
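
The "standard model" mentioned below treats the junction's current-phase relation as purely sinusoidal; the harmonics in question are higher-order terms superimposed on it. Schematically, this is the general form, not the specific coefficients measured in the paper:

```latex
% Standard (purely sinusoidal) Josephson current-phase relation:
I(\varphi) = I_c \sin\varphi
% With higher harmonics superimposed on the fundamental mode:
I(\varphi) = \sum_{m \ge 1} I_m \sin(m\varphi), \qquad I_1 = I_c
```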

It all started in 2019, when Dr. Dennis Willsch and Dennis Rieger, two Ph.D. students from FZJ and KIT at the time and joint first authors of a new paper published in Nature Physics, were having a hard time understanding their experiments using the standard model for Josephson tunnel junctions. This model had won Brian Josephson the Nobel Prize in Physics in 1973.

Feb 26, 2024

Physicists Discover Evidence of Time Being Reversible in Glass

Posted by in categories: information science, physics

Time’s inexorable march might well wait for no one, but a new experiment by researchers at the Technical University of Darmstadt in Germany and Roskilde University in Denmark shows how in some materials it might occasionally shuffle.

An investigation into the way substances like glass age has uncovered the first physical evidence of a material-based measure of time being reversible.

For the most part the laws of physics care little about time’s arrow. Flip an equation describing the movement of an object and you can easily calculate where it started. We describe such laws as time reversible.
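
A minimal worked example of that symmetry: Newton's second law for a particle in a potential keeps exactly the same form when t is replaced by -t, because the sign flip enters twice through the second derivative:

```latex
m\,\frac{d^{2}x}{dt^{2}} = -\frac{dV}{dx}
\;\;\xrightarrow{\,t \,\to\, -t\,}\;\;
m\,\frac{d^{2}x}{d(-t)^{2}} = m\,\frac{d^{2}x}{dt^{2}} = -\frac{dV}{dx}
```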
