Blog

Archive for the ‘information science’ category: Page 219

Mar 11, 2020

Google releases quantum computing library

Posted by in categories: information science, quantum physics, robotics/AI

Google announced Monday that it is making available an open-source library for quantum machine-learning applications.

TensorFlow Quantum, a free library for building quantum machine-learning models, is an add-on to the widely used TensorFlow toolkit, which has helped bring machine learning to developers across the globe.

“We hope this framework provides the necessary tools for the quantum computing and machine learning research communities to explore models of both natural and artificial quantum systems, and ultimately discover new quantum algorithms which could potentially yield a quantum advantage,” a report posted by members of Google’s X unit on the AI Blog states.
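
For a flavor of the workflow, here is a minimal sketch of a trainable hybrid model built with TensorFlow Quantum’s public API; the one-qubit circuit is purely illustrative.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')

# Parameterized circuit whose angle will be trained by Keras.
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))

model = tf.keras.Sequential([
    # Input: quantum circuits serialized as tf.string tensors.
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # PQC layer runs the parameterized circuit and measures <Z>.
    tfq.layers.PQC(model_circuit, cirq.Z(qubit)),
])

# An empty input circuit acts as |0> state preparation.
inputs = tfq.convert_to_tensor([cirq.Circuit()])
print(model(inputs))  # expectation value in [-1, 1]
```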

Mar 8, 2020

Intel AI gives women career advice for Int’l Women’s Day

Posted by in categories: information science, robotics/AI

Intel Israel announced that the project is the first of its kind to use AI to create “female intelligence.” The experts who worked on the project, led by data scientist and researcher Shira Guskin, analyzed thousands of insights from “veteran career women.” Once the initial advice was submitted by many women across the Israeli workforce, the researchers passed the data through three algorithm models: Topic Extraction, Grouping and Summarization. This led to an algorithm which “processed the tips pool and extracted the key tips and guidelines.”
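
Intel has not published its models, but the three stages map onto standard NLP building blocks. A hypothetical sketch with scikit-learn, using made-up tips as stand-in data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

tips = [
    "Invest fully in your career and own your growth.",
    "Trust your gut when making hard calls.",
    "Build and maintain a professional network.",
    "Be confident in meetings; speak up early.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(tips)

# Topic extraction: factor the tip-term matrix into latent topics.
topics = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# Grouping: cluster tips by their topic mixtures.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topics)

# Summarization (extractive): keep the tip closest to each cluster centroid.
for k in range(2):
    members = np.where(labels == k)[0]
    centroid = topics[members].mean(axis=0)
    best = members[np.argmin(np.linalg.norm(topics[members] - centroid, axis=1))]
    print(f"Cluster {k}: {tips[best]}")
```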


The AI said that women should fully invest in their careers, be confident, network, love, and trust their guts.

Mar 6, 2020

Scientists break Google’s quantum algorithm

Posted by in categories: information science, quantum physics

Google is racing to develop quantum-enhanced processors that use quantum mechanical effects to increase the speed at which data can be processed. In the near term, Google has devised new quantum-enhanced algorithms that operate in the presence of realistic noise. The so-called quantum approximate optimisation algorithm, or QAOA for short, is the cornerstone of a modern drive toward noise-tolerant quantum-enhanced algorithm development.
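
For reference, a depth-one QAOA circuit for MaxCut on a triangle graph takes only a few lines of Cirq. The angles below are illustrative stand-ins for values a classical optimizer would tune, and the gate exponent parameterizes the cost layer up to rescaling and global phase:

```python
import cirq
import numpy as np

qubits = cirq.LineQubit.range(3)
edges = [(0, 1), (1, 2), (0, 2)]          # triangle graph
gamma, beta = 0.6, 0.4                    # illustrative variational angles

circuit = cirq.Circuit()
circuit.append(cirq.H(q) for q in qubits)  # start in uniform superposition
for i, j in edges:
    # Cost layer: ZZ**gamma applies the per-edge phase.
    circuit.append(cirq.ZZ(qubits[i], qubits[j]) ** gamma)
circuit.append(cirq.rx(2 * beta)(q) for q in qubits)  # mixer layer
circuit.append(cirq.measure(*qubits, key="m"))

# Estimate the expected cut size from samples.
samples = cirq.Simulator().run(circuit, repetitions=2000).measurements["m"]
cut = sum(np.mean(samples[:, i] != samples[:, j]) for i, j in edges)
print(f"estimated cut value: {cut:.2f}")
```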

The celebrated approach taken by Google in QAOA has sparked vast commercial interest and ignited a global research community to explore novel applications. Yet, little is known about the ultimate performance limitations of Google’s QAOA.

A team of scientists from Skoltech’s Deep Quantum Laboratory took up this contemporary challenge. The all-Skoltech team led by Prof. Jacob Biamonte discovered and quantified what appears to be a fundamental limitation in the widely adopted approach initiated by Google.

Mar 5, 2020

Algorithms, Transhumanism & Futurism

Posted by in categories: information science, transhumanism

Posthuman Daily has a new format: https://paper.li/e-1437691924


A fancy futuristic word like transhumanism has become a daily reality: technology has already pushed past our limitations. Here you will find JD’s thoughts about the future of humanity.

Opinion

Mar 5, 2020

Stanford’s AI Index Report: How Much Is BS?

Posted by in categories: economics, engineering, health, information science, law, mobile phones, robotics/AI, sustainability, transportation

Another important question is the extent to which continued increases in computational capacity are economically viable. The Stanford Index reports a 300,000-fold increase since 2012 in the computing power used to train the largest AI systems. But in the same month that the Report was issued, Jerome Pesenti, Facebook’s AI head, warned that “The rate of progress is not sustainable…If you look at top experiments, each year the cost is going up 10-fold. Right now, an experiment might be in seven figures but it’s not going to go to nine or 10 figures, it’s not possible, nobody can afford that.”
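
Pesenti’s arithmetic is easy to check: compounding tenfold annual growth from a seven-figure budget reaches ten figures within three years.

```python
# At 10x cost growth per year, a seven-figure experiment hits
# nine-to-ten figures within three years.
cost = 1_000_000  # "seven figures" today
for year in range(1, 4):
    cost *= 10
    print(f"year {year}: ${cost:,}")
# year 1: $10,000,000; year 2: $100,000,000; year 3: $1,000,000,000
```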

AI has feasted on low-hanging fruit, like search engines and board games. Now comes the hard part — distinguishing causal relationships from coincidences, making high-level decisions in the face of unfamiliar ambiguity, and matching the wisdom and commonsense that humans acquire by living in the real world. These are the capabilities that are needed in complex applications such as driverless vehicles, health care, accounting, law, and engineering.

Despite the hype, AI has had very little measurable effect on the economy. Yes, people spend a lot of time on social media and playing ultra-realistic video games. But does that boost or diminish productivity? Technology in general and AI in particular are supposed to be creating a new New Economy, where algorithms and robots do all our work for us, increasing productivity by unheard-of amounts. The reality has been the opposite. For decades, U.S. productivity grew by about 3% a year. Then, after 1970, it slowed to 1.5% a year, then 1%, now about 0.5%. Perhaps we are spending too much time on our smartphones.

Mar 4, 2020

Invisible Headlights

Posted by in categories: information science, robotics/AI, transportation

Autonomous and semi-autonomous systems need active illumination to navigate at night or underground. Switching on visible headlights or some other emitting system like lidar, however, has a significant drawback: It allows adversaries to detect a vehicle’s presence, in some cases from long distances away.

To eliminate this vulnerability, DARPA announced the Invisible Headlights program. The fundamental research effort seeks to discover and quantify information contained in ambient thermal emissions in a wide variety of environments and to create new passive 3D sensors and algorithms to exploit that information.
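
DARPA has not disclosed any sensor designs, but one way to ground the idea of passive 3D sensing is stereo triangulation: depth recovered from two passively captured frames, with nothing emitted. A hedged sketch with OpenCV on synthetic stand-in imagery (all camera parameters are assumed):

```python
import cv2
import numpy as np

# Synthetic stereo pair standing in for passively captured thermal frames;
# the right frame is the left shifted by a known 8-pixel disparity.
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)

# Block matching recovers disparity with no active illumination.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# depth = f * B / disparity, for assumed focal length f (px), baseline B (m).
f_px, baseline_m = 800.0, 0.5
depth_m = np.where(disparity > 0, f_px * baseline_m / disparity, 0.0)
print("median recovered depth (m):", np.median(depth_m[depth_m > 0]))
```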

“We’re aiming to make completely passive navigation in pitch dark conditions possible,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “In the depths of a cave or in the dark of a moonless, starless night with dense fog, current autonomous systems can’t make sense of the environment without radiating some signal—whether it’s a laser pulse, radar or visible light beam—all of which we want to avoid. If it involves emitting a signal, it’s not invisible for the sake of this program.”

Mar 4, 2020

Musician uses algorithm to generate every possible melody to prevent copyright lawsuits

Posted by in category: information science

Catalogue of 68 billion tunes contains ‘every melody that’s ever existed and ever can exist’
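
The reported figure is consistent with exhaustive enumeration over a small alphabet: 8 pitch choices across 12 positions gives 8^12 = 68,719,476,736 melodies. A back-of-envelope sketch; the pitch set and melody length are assumptions inferred from that count:

```python
from itertools import islice, product

PITCHES = "C D E F G A B C'".split()  # 8 candidate pitches (illustrative)
MELODY_LENGTH = 12

print(len(PITCHES) ** MELODY_LENGTH)  # 68,719,476,736 combinations

# Enumerate melodies lazily; materializing all ~68.7B is infeasible in RAM.
melodies = product(PITCHES, repeat=MELODY_LENGTH)
for m in islice(melodies, 3):
    print(" ".join(m))
```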

Mar 4, 2020

Unveiling Biology with Deep Microscopy

Posted by in categories: biotech/medical, finance, information science, military, robotics/AI, space

The scientific revolution was ushered in at the beginning of the 17th century with the development of two of the most important inventions in history — the telescope and the microscope. With the telescope, Galileo turned his attention skyward, and advances in optics led Robert Hooke and Antonie van Leeuwenhoek toward the first use of the compound microscope as a scientific instrument, circa 1665. Today, we are witnessing an information technology-era revolution in microscopy, supercharged by deep learning algorithms that have propelled artificial intelligence to transform industry after industry.

One of the major breakthroughs in deep learning came in 2012, when Hinton and colleagues [1] demonstrated the superior performance of a deep convolutional neural network, trained on GPUs, for image classification in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In AI’s current innovation and implementation phase, deep learning algorithms are propelling nearly all computer vision-intensive applications, including autonomous vehicles (transportation, military), facial recognition (retail, IT, communications, finance), biomedical imaging (healthcare), autonomous weapons and targeting systems (military), and automation and robotics (military, manufacturing, heavy industry, retail).
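
The 2012 breakthrough pattern was a stack of convolution and pooling layers feeding a classifier over ImageNet’s 1,000 categories. A toy Keras sketch of that pattern (deliberately much smaller than AlexNet itself):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # learn local features
    tf.keras.layers.MaxPooling2D(),                     # downsample
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1000, activation="softmax"),  # 1000 ImageNet classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```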

It should come as no surprise that the field of microscopy would be ripe for transformation by artificial intelligence-aided image processing, analysis and interpretation. In biological research, microscopy generates prodigious amounts of image data; a single experiment with a transmission electron microscope can generate a data set containing over 100 terabytes of images [2]. The myriad instruments and image processing techniques available today can resolve structures ranging in size across nearly 10 orders of magnitude, from single molecules to entire organisms, and capture spatial (3D) as well as temporal (4D) dynamics on time scales from femtoseconds to seconds.

Mar 3, 2020

Google algorithm teaches robot how to walk in mere hours

Posted by in categories: information science, robotics/AI

A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot to learn how to walk across a range of surfaces within just hours of practice, annihilating the record times set by its human overlords.
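
Google’s researchers reportedly used deep reinforcement learning (a soft actor-critic variant) directly on the hardware. As a stand-in for that trial-and-error loop, here is a minimal sketch using simple random search on a toy continuous-control task, not Google’s actual algorithm or robot:

```python
import gymnasium as gym
import numpy as np

# Stand-in continuous-control task; the real system trained a quadruped.
env = gym.make("Pendulum-v1")

def policy(obs, params):
    # Linear policy as a placeholder for the learned gait controller.
    return np.clip(obs @ params, env.action_space.low, env.action_space.high)

def episode_return(params, seed=0):
    obs, _ = env.reset(seed=seed)
    total = 0.0
    for _ in range(200):
        obs, reward, terminated, truncated, _ = env.step(policy(obs, params))
        total += reward
        if terminated or truncated:
            break
    return total

# Crude trial-and-error "learning": keep parameter perturbations that help.
rng = np.random.default_rng(0)
params = np.zeros((3, 1))
best = episode_return(params)
for step in range(50):
    candidate = params + 0.1 * rng.standard_normal(params.shape)
    score = episode_return(candidate)
    if score > best:
        params, best = candidate, score
print("best episode return:", best)
```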


Mar 3, 2020

Honeywell says it will soon launch the world’s most powerful quantum computer

Posted by in categories: computing, information science, quantum physics

“The best-kept secret in quantum computing.” That’s what Cambridge Quantum Computing (CQC) CEO Ilyas Khan called Honeywell’s efforts in building the world’s most powerful quantum computer. In a race where most of the major players are vying for attention, Honeywell has quietly worked on its machine for the last few years (and under strict NDAs, it seems). But today, the company announced a major breakthrough that it claims will allow it to launch the world’s most powerful quantum computer within the next three months.

Honeywell also announced today that it has made strategic investments in CQC and Zapata Computing, both of which focus on the software side of quantum computing. In addition, it has partnered with JPMorgan Chase to develop quantum algorithms using Honeywell’s quantum computer, and it recently announced a partnership with Microsoft.