Microsoft “cherry-picked” examples of its generative AI’s output because the tool would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.
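BI’s report does not describe how the tool works internally. Purely as a sketch of the underlying task of hunting through a Windows security log, the snippet below scans a CSV export for accounts with repeated failed logons (event ID 4625); the column names and the threshold are assumptions for illustration, not anything from Security Copilot.

```python
# Minimal sketch: flag accounts with many failed logons (event ID 4625)
# in a CSV export of the Windows Security log. The column names and the
# threshold are assumptions for illustration, not Security Copilot's logic.
import csv
from collections import Counter

FAILED_LOGON_ID = "4625"   # Windows event ID for a failed logon
THRESHOLD = 10             # arbitrary cutoff for "suspicious"

def flag_failed_logons(path: str) -> dict[str, int]:
    failures = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == FAILED_LOGON_ID:
                failures[row.get("TargetUserName", "unknown")] += 1
    return {user: n for user, n in failures.items() if n >= THRESHOLD}

if __name__ == "__main__":
    for user, count in flag_failed_logons("security_log.csv").items():
        print(f"possible brute-force activity: {user} ({count} failed logons)")
```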

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the artificial intelligence power of large language models (LLMs) and, when combined with the state-of-the-art tool Thor, yields an unprecedented proof rate of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author.
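The announcement describes generating a whole proof with an LLM and then checking it with a proof assistant. The sketch below outlines what such a generate-and-check loop might look like, including a repair pass that feeds the checker’s error message back to the model; the function names and interfaces are hypothetical stand-ins, not Baldur’s actual code.

```python
# Outline of a whole-proof generation loop in the spirit of Baldur:
# an LLM proposes a complete proof, a proof assistant checks it, and a
# repair step gets one shot at fixing it using the checker's error.
# generate_proof, repair_proof, and the checker are hypothetical stand-ins.
from typing import Callable, Optional, Tuple

Checker = Callable[[str, str], Tuple[bool, str]]  # (theorem, proof) -> (ok, error)

def prove(theorem: str,
          generate_proof: Callable[[str], str],
          repair_proof: Callable[[str, str, str], str],
          check: Checker) -> Optional[str]:
    candidate = generate_proof(theorem)                 # whole proof in one shot
    ok, error = check(theorem, candidate)
    if ok:
        return candidate
    repaired = repair_proof(theorem, candidate, error)  # use the error message
    ok, _ = check(theorem, repaired)
    return repaired if ok else None
```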

Thousands of WordPress sites running a vulnerable version of the Popup Builder plugin have been compromised with malware called Balada Injector.

First documented by Doctor Web in January 2023, the campaign unfolds in a series of periodic attack waves, weaponizing security flaws in WordPress plugins to inject a backdoor designed to redirect visitors of infected sites to bogus tech support pages, fraudulent lottery wins, and push notification scams.

Subsequent findings unearthed by Sucuri have revealed the massive scale of the operation, which is said to have been active since 2017 and to have infiltrated no fewer than one million sites in that time.
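Cleanup guidance for campaigns like this generally amounts to updating the vulnerable plugin and hunting for injected script tags that point at unexpected domains. The sketch below illustrates only that kind of file sweep; the allowlist and indicators are placeholders, not a vetted Balada signature set, and real incident response should also inspect the database.

```python
# Illustrative scan of a WordPress install for injected <script> tags that
# reference unexpected external domains. The allowlist is a placeholder;
# real incident response should use vetted signatures and also check the DB.
import re
from pathlib import Path

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\'](https?://[^"\']+)["\']', re.I)
ALLOWED_HOSTS = ("example-my-site.com", "googleapis.com")  # placeholder allowlist

def suspicious_scripts(wp_root: str):
    for path in Path(wp_root).rglob("*"):
        if path.suffix.lower() not in {".php", ".js", ".html"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for url in SCRIPT_SRC.findall(text):
            if not any(host in url for host in ALLOWED_HOSTS):
                yield path, url

if __name__ == "__main__":
    for path, url in suspicious_scripts("/var/www/html"):
        print(f"{path}: external script {url}")
```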

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction—and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia, and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them—with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
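One category in the taxonomy is evasion: small, deliberate perturbations to an input that flip a model’s prediction at inference time. As a self-contained toy (not drawn from the NIST publication), the sketch below applies a fast-gradient-sign style perturbation to a hand-rolled logistic regression classifier; the data, model, and step size are arbitrary.

```python
# Toy evasion attack: a fast-gradient-sign style perturbation that pushes an
# input toward the decision boundary of a simple logistic-regression model.
# The data, model, and step size are synthetic choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian classes in 2-D.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

x = np.array([1.5, 1.5])                 # clean input, confidently class 1
grad = (sigmoid(x @ w + b) - 1.0) * w    # gradient of the class-1 loss w.r.t. x
x_adv = x + 2.0 * np.sign(grad)          # FGSM-style step that increases the loss

print("score on clean input:    ", sigmoid(x @ w + b))
print("score on perturbed input:", sigmoid(x_adv @ w + b))
```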

Adam Stern is founder and CEO of Infinitely Virtual, a provider of cloud technology solutions based in Los Angeles. Twitter: @iv_cloudhosting

Back in the 1960s, when the U.S. faced off against the Soviets, MAD Magazine waged a snarky proxy war of its own in the form of a recurring comic strip that pitted two cartoon spies against each other in endless attempts to outsmart one another. In “Spy vs. Spy,” there were no permanent victors.

Fast forward to the ChatGPT generation. In cybersecurity, it’s now AI vs. AI, and the contest between the black-hatted figure and the guy in the white hat is no longer as binary as it once was.


A new report by computer scientists from the National Institute of Standards and Technology (NIST) describes new kinds of cyberattacks that can “poison” AI systems.

AI systems are being integrated into more and more aspects of our lives, from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To perform these tasks, the models are trained on vast amounts of data, which in turn helps the AI predict how to respond in a given situation.
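That reliance on training data is exactly what poisoning attacks target: an adversary who can tamper with even a slice of the training set can degrade the resulting model. The toy sketch below (not taken from the NIST report) flips a fraction of training labels and compares the accuracy of models fit on clean versus poisoned data; the dataset and the 20% flip rate are arbitrary choices.

```python
# Toy label-flipping poisoning: corrupt a fraction of training labels and
# compare test accuracy of models trained on clean vs. poisoned data.
# Synthetic data and a 20% flip rate are arbitrary choices for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.20          # poison 20% of the training labels
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean-data model accuracy:   ", clean.score(X_te, y_te))
print("poisoned-data model accuracy:", poisoned.score(X_te, y_te))
```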

The required precision to perform quantum simulations beyond the capabilities of classical computers imposes major experimental and theoretical challenges. The key to solving these issues is highly precise ways of characterizing analog quantum simulators. Here, we robustly estimate the free Hamiltonian parameters of bosonic excitations in a superconducting-qubit analog quantum simulator from measured time-series of single-mode canonical coordinates. We achieve the required levels of precision in estimating the Hamiltonian parameters by maximally exploiting the model structure, making it robust against noise and state-preparation and measurement (SPAM) errors. Importantly, we are also able to obtain tomographic information about those SPAM errors from the same data, crucial for the experimental applicability of Hamiltonian learning in dynamical quantum-quench experiments. Our learning algorithm is highly scalable both in terms of the required amounts of data and post-processing. To achieve this, we develop a new super-resolution technique coined tensorESPRIT for frequency extraction from matrix time-series. The algorithm then combines tensorESPRIT with constrained manifold optimization for the eigenspace reconstruction with pre- and post-processing stages. For up to 14 coupled superconducting qubits on two Sycamore processors, we identify the Hamiltonian parameters — verifying the implementation on one of them up to sub-MHz precision — and construct a spatial implementation error map for a grid of 27 qubits. Our results constitute a fully characterized, highly accurate implementation of an analog dynamical quantum simulation and introduce a diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.

Submitted 18 Aug 2021 to Quantum Physics [quant-ph]

Subjects: quant-ph, cond-mat.quant-gas, physics.comp-ph
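The abstract’s pipeline hinges on pulling eigenfrequencies out of measured time series. The paper’s tensorESPRIT variant is not reproduced here; purely for orientation, the sketch below implements the textbook single-channel ESPRIT estimator on a synthetic signal, the standard subspace method that super-resolution techniques of this kind build on.

```python
# Standard single-channel ESPRIT on a synthetic signal: estimate the
# frequencies of a sum of complex exponentials from a noisy time series.
# Shown only to illustrate subspace-based frequency extraction; it is the
# classical method, not the paper's tensorESPRIT variant.
import numpy as np

def esprit_frequencies(x: np.ndarray, n_modes: int, window: int) -> np.ndarray:
    """Return n_modes normalized frequencies (cycles/sample) from signal x."""
    n = len(x)
    # Hankel data matrix whose column space spans the signal subspace.
    H = np.array([x[i:i + window] for i in range(n - window + 1)]).T
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_modes]                       # signal subspace
    # Rotational invariance: Us[1:] ≈ Us[:-1] @ Phi
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    eigvals = np.linalg.eigvals(Phi)
    return np.angle(eigvals) / (2 * np.pi)

# Synthetic test: two tones plus noise.
rng = np.random.default_rng(1)
t = np.arange(400)
true_f = np.array([0.12, 0.31])
x = sum(np.exp(2j * np.pi * f * t) for f in true_f)
x = x + 0.05 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

print("true frequencies:     ", np.sort(true_f))
print("estimated frequencies:", np.sort(esprit_frequencies(x, n_modes=2, window=100)))
```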