Archive for the ‘information science’ category: Page 151

Dec 29, 2021

Simple, accurate, and efficient: Improving the way computers recognize hand gestures

Posted in categories: information science, mobile phones, robotics/AI, wearables

In the 2002 science fiction blockbuster film “Minority Report,” Tom Cruise’s character John Anderton uses his hands, sheathed in special gloves, to interface with his wall-sized transparent computer screen. The computer recognizes his gestures to enlarge, zoom in, and swipe away. Although this futuristic vision of human-computer interaction is now 20 years old, people today still interface with computers using a mouse, keyboard, remote control, or small touch screen. However, researchers have devoted much effort to unlocking more natural forms of communication that do not require contact between the user and the device. Voice commands are a prominent example: they have found their way into modern smartphones and virtual assistants, letting us interact with and control devices through speech.

Hand gestures constitute another important mode of human communication that could be adopted for human-computer interaction. Recent progress in camera systems, image analysis, and machine learning has made optical gesture recognition a more attractive option in most contexts than approaches relying on wearable sensors or data gloves, as used by Anderton in “Minority Report.” However, current methods are hindered by a variety of limitations, including high computational complexity, low speed, poor accuracy, or a small number of recognizable gestures. To tackle these issues, a team led by Zhiyi Yu of Sun Yat-sen University, China, recently developed a new hand gesture recognition algorithm that strikes a good balance between complexity, accuracy, and applicability.
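
The team’s algorithm itself is not reproduced in this excerpt, but the general shape of an optical gesture-recognition pipeline (extract hand landmarks from camera frames, then classify them with a lightweight model) can be sketched as below. MediaPipe Hands and the nearest-centroid classifier are illustrative stand-ins, not the authors’ method, and the gesture templates are placeholders.

```python
# Illustrative sketch only: 21-point hand landmarks (MediaPipe Hands) fed to a
# nearest-centroid classifier; this is NOT the Sun Yat-sen algorithm.
import cv2
import mediapipe as mp
import numpy as np

def landmarks_from_frame(frame_bgr, hands):
    """Return a flat (63,) array of normalized hand landmarks, or None."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    pts = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in pts]).ravel()

def classify(vec, templates):
    """Pick the gesture whose stored template landmark vector is closest."""
    return min(templates, key=lambda g: np.linalg.norm(vec - templates[g]))

if __name__ == "__main__":
    # Hypothetical templates: per-gesture mean landmark vectors learned offline.
    templates = {"swipe": np.zeros(63), "zoom": np.full(63, 0.5)}
    cap = cv2.VideoCapture(0)
    with mp.solutions.hands.Hands(max_num_hands=1) as hands:
        ok, frame = cap.read()
        if ok:
            vec = landmarks_from_frame(frame, hands)
            if vec is not None:
                print("gesture:", classify(vec, templates))
    cap.release()
```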

Dec 27, 2021

New AI improves itself through Darwinian-style evolution

Posted in categories: information science, mathematics, robotics/AI

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms “from scratch,” as the paper states. Then, it selects the best ones, and mutates them through a process that’s similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the “parent.”

“This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed,” the paper states.
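
As a rough illustration of that loop (a population of candidates, selection of a strong parent, copy-and-mutate, removal of the oldest member), here is a toy version in Python. The “algorithms” are reduced to weight vectors scored on a synthetic task; the real system evolves sequences of mathematical instructions.

```python
# Toy regularized-evolution loop in the spirit of the description above.
import random
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

def evaluate(w):
    """Fitness: accuracy of a thresholded linear model on the toy task."""
    return float((((X @ w) > 0).astype(float) == y).mean())

def mutate(w):
    """Child = copy of the parent with one random perturbation."""
    child = w.copy()
    child[rng.integers(len(child))] += rng.normal(scale=0.5)
    return child

population = [rng.normal(size=5) for _ in range(100)]   # 100 random candidates
for step in range(1000):
    parent = max(random.sample(population, 10), key=evaluate)  # best of a sample
    population.append(mutate(parent))   # add the mutated child
    population.pop(0)                   # remove the oldest member

print("best accuracy:", max(evaluate(w) for w in population))
```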

Dec 24, 2021

Heart Rate Detection using Eulerian Magnification + YOLOR

Posted in categories: drones, information science, robotics/AI

Real-time heart rate detection using Eulerian Magnification and YOLOR: YOLOR is used for head detection, and the detected head region feeds into a Eulerian Magnification algorithm developed by Rohin Tangirala. Courtesy of Dragos Stan for assistance with this demo and code.

⭐️Code+Dataset — https://lnkd.in/deRj6SPf.
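
The linked code is the authoritative version. As a rough sketch of the heart-rate step only: a common Eulerian-magnification-style approach is to average the green channel of the detected head region over time, band-pass the signal to plausible pulse frequencies, and read off the dominant spectral peak. The function below assumes head detection (YOLOR in this demo) has already produced the cropped frames; it is an illustration, not the repository’s implementation.

```python
import numpy as np

def estimate_bpm(roi_frames, fps, low_hz=0.8, high_hz=3.0):
    """roi_frames: (T, H, W, 3) frames of the same face/head region."""
    signal = roi_frames[..., 1].mean(axis=(1, 2))      # mean green value per frame
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= low_hz) & (freqs <= high_hz)      # roughly 48-180 beats/min
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz

# Synthetic sanity check: a 1.2 Hz (72 BPM) flicker should be recovered.
fps = 30
t = np.arange(300) / fps
frames = (128 + 2 * np.sin(2 * np.pi * 1.2 * t))[:, None, None, None] * np.ones((1, 8, 8, 3))
print(round(estimate_bpm(frames, fps)))   # ~72
```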

Continue reading “Heart Rate Detection using Eulerian Magnification + YOLOR” »

Dec 24, 2021

The quantum mechanics of time travel through post-selected teleportation

Posted in categories: information science, quantum physics, time travel

This paper discusses the quantum mechanics of closed timelike curves (CTC) and of other potential methods for time travel. We analyze a specific proposal for such quantum time travel, the quantum description of CTCs based on post-selected teleportation (P-CTCs). We compare the theory of P-CTCs to previously proposed quantum theories of time travel: the theory is physically inequivalent to Deutsch’s theory of CTCs, but it is consistent with path-integral approaches (which are the best suited for analyzing quantum field theory in curved spacetime). We derive the dynamical equations that a chronology-respecting system interacting with a CTC will experience. We discuss the possibility of time travel in the absence of general relativistic closed timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation.
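
For reference, the central object of the P-CTC model is the nonlinear map induced on the chronology-respecting system: the interaction unitary U is traced over the CTC subsystem and the result is renormalized (notation as in the P-CTC literature).

```latex
% C is the partial trace of the interaction unitary U over the CTC subsystem;
% the chronology-respecting state \rho is transformed nonlinearly.
\[
  C \;=\; \operatorname{Tr}_{\mathrm{CTC}}\!\left[ U \right],
  \qquad
  \rho \;\longmapsto\; \frac{C\,\rho\,C^{\dagger}}{\operatorname{Tr}\!\left[ C\,\rho\,C^{\dagger} \right]} .
\]
```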

Dec 24, 2021

LightOn Photonic Co-processor Integrated Into European AI Supercomputer

Posted in categories: information science, robotics/AI, supercomputing

PARIS, Dec. 23, 2021 – LightOn announces the integration of one of its photonic co-processors in the Jean Zay supercomputer, one of the Top500 most powerful computers in the world. Under a pilot program with GENCI and IDRIS, the insertion of a cutting-edge analog photonic accelerator into high-performance computers (HPC) represents a technological breakthrough and a world premiere. The LightOn photonic co-processor will be available to selected users of the Jean Zay research community over the next few months.

LightOn’s Optical Processing Unit (OPU) uses photonics to speed up randomized algorithms at very large scale while working in tandem with standard silicon CPUs and NVIDIA’s latest A100 GPU technology. The technology aims to reduce overall computing time and power consumption in an area deemed “essential to the future of computational science and AI for Science,” according to a 2021 U.S. Department of Energy report on “Randomized Algorithms for Scientific Computing.”
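
LightOn’s own SDK is not shown in this announcement. As a purely numerical stand-in for what the optics compute, the OPU’s core operation can be modeled as a fixed random projection followed by intensity detection, roughly y = |Rx|² for a large fixed random matrix R; the class below is an illustrative simulation, not LightOn’s API.

```python
# Software stand-in for the random-feature transform a photonic OPU performs
# optically: y = |R x|^2 with a fixed random complex matrix R. On the device
# the matrix multiply happens in light scattering rather than in silicon.
import numpy as np

rng = np.random.default_rng(42)

class SimulatedOPU:
    def __init__(self, n_in, n_out):
        # Fixed random complex "transmission matrix" (the optics fix it once).
        self.R = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2)

    def transform(self, X):
        """X: (n_samples, n_in) -> nonnegative intensity features (n_samples, n_out)."""
        return np.abs(X @ self.R.T) ** 2   # a camera measures intensity, hence |.|^2

opu = SimulatedOPU(n_in=1000, n_out=10000)
X = rng.normal(size=(8, 1000))
features = opu.transform(X)       # random features for a downstream linear model
print(features.shape)             # (8, 10000)
```

Random features of this kind are typically consumed by a simple linear model such as ridge regression; random projections are one of the core primitives in the randomized algorithms the DOE report discusses.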

INRIA (France’s Institute for Research in Computer Science and Automation) researcher Dr. Antoine Liutkus provided additional context on the integration of LightOn’s coprocessor in the Jean Zay supercomputer: “Our research is focused today on the question of large-scale learning. Integrating an OPU in one of the most powerful nodes of Jean Zay will give us the keys to carry out this research, and will allow us to go beyond a simple ‘proof of concept.’”

Dec 24, 2021

Azure AI milestone: Microsoft KEAR surpasses human performance on CommonsenseQA benchmark

Posted in categories: food, information science, robotics/AI

KEAR (Knowledgeable External Attention for commonsense Reasoning), along with recent milestones in computer vision and neural text-to-speech, is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work, with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a groundbreaking milestone by advancing commonsense language understanding. When given a question that requires drawing on prior knowledge and five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, where human performance is calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we followed a similar technique for multilingual commonsense reasoning and topped the X-CSR leaderboard.
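
The paper describes the full system; the core “external attention” idea, retrieving relevant knowledge as plain text and letting a standard transformer attend over it alongside the question and each answer choice, can be sketched as simple input concatenation. The retrieval function and checkpoint below are placeholders (KEAR draws on ConceptNet, a dictionary, and related training examples), and the untrained classification head here only illustrates the data flow.

```python
# Sketch of "external attention" via input concatenation; not Microsoft's code.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

def retrieve_knowledge(question, choice):
    # Placeholder for KEAR's knowledge sources (ConceptNet, dictionary, etc.).
    return f"{choice} is a candidate answer related to: {question}"

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

question = "Where would you put uncooked crab meat?"
choices = ["wharf", "red lobster", "tidepools", "boss's office", "stew pot"]

# First segment: question plus retrieved knowledge; second segment: the choice.
firsts = [question + " " + retrieve_knowledge(question, c) for c in choices]
enc = tok(firsts, choices, padding=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}        # (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits                          # (1, num_choices)
print("predicted:", choices[int(logits.argmax())])        # meaningless until fine-tuned
```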

Continue reading “Azure AI milestone: Microsoft KEAR surpasses human performance on CommonsenseQA benchmark” »

Dec 23, 2021

Machine learning used to predict synthesis of complex novel materials

Posted in categories: chemistry, information science, nanotechnology, quantum physics, robotics/AI

Scientists and institutions dedicate more resources each year to the discovery of novel materials to fuel the world. As natural resources diminish and the demand for higher value and advanced performance products grows, researchers have increasingly looked to nanomaterials.

Nanoparticles have already found their way into applications ranging from energy storage and conversion to quantum computing and therapeutics. But given the vast compositional and structural tunability that nanochemistry enables, serial experimental approaches to identifying new materials impose insurmountable limits on discovery.

Now, researchers at Northwestern University and the Toyota Research Institute (TRI) have successfully applied machine learning to guide the synthesis of new nanomaterials, eliminating barriers associated with materials discovery. The highly trained algorithm combed through a defined dataset to accurately predict new structures that could fuel processes in the clean energy, chemical, and automotive industries.
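
The Northwestern/TRI dataset and model are not reproduced in this summary. The general workflow described, fitting a surrogate model on composition-to-structure measurements and using it to rank unmeasured candidates for synthesis, can be sketched with entirely synthetic placeholder data:

```python
# Hedged sketch of ML-guided materials screening; the data are random toys,
# not the megalibrary measurements used in the actual study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Toy training data: three-element composition fractions and a structure label.
compositions = rng.dirichlet(np.ones(3), size=200)            # rows sum to 1
labels = (compositions[:, 0] > 0.5).astype(int)                # fake "phase" rule

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(compositions, labels)

# Screen unmeasured candidate compositions and rank them by the predicted
# probability of forming the desired phase (label 1).
candidates = rng.dirichlet(np.ones(3), size=5000)
scores = model.predict_proba(candidates)[:, 1]
best = candidates[np.argsort(scores)[-5:]]
print("top candidate compositions:\n", np.round(best, 3))
```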

Dec 21, 2021

DeepMind’s New AI With a Memory Outperforms Algorithms 25 Times Its Size

Posted in categories: information science, robotics/AI

DeepMind’s model, with just 7 billion parameters, outperformed the 178 billion-parameter Jurassic-1 transformer on various language tasks.
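
The model described is DeepMind’s Retrieval-Enhanced Transformer (RETRO). The mechanism behind the headline, letting a smaller language model consult an external text database at inference time, can be sketched with a toy embedding index; note that RETRO attends to retrieved chunks inside the network (chunked cross-attention) rather than via prompt concatenation, and the embedding function and database below are placeholders.

```python
# Toy retrieval step: embed the query, fetch the nearest stored chunks, and
# prepend them to the prompt that would be handed to a language model.
import numpy as np

def embed(text, dim=64):
    """Placeholder embedding: hash words into a fixed-size bag-of-words vector."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

database = [
    "Retrieval lets small language models consult a large text corpus.",
    "The Jean Zay supercomputer is located in France.",
    "Eulerian video magnification reveals subtle colour changes in video.",
]
index = np.stack([embed(d) for d in database])

def retrieve(query, k=2):
    sims = index @ embed(query)
    return [database[i] for i in np.argsort(sims)[::-1][:k]]

query = "Why can a small model with retrieval match a much larger one?"
prompt = "\n".join(retrieve(query)) + "\n\nQ: " + query + "\nA:"
print(prompt)
```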

Dec 21, 2021

New haptic device communicates emotion with nearly 80% accuracy of human touch

Posted in categories: information science, robotics/AI

With the spread of the omicron variant, not everyone can or is eager to travel for the winter break. But what if virtual touch could bring you assurance that you were not alone?

At the USC Viterbi School of Engineering, computer scientist and roboticist Heather Culbertson has been exploring various methods to simulate touch. As part of a new study, Culbertson, a senior author on the study, along with researchers at Stanford, her alma mater, wanted to see whether two companions (platonic or romantic) could communicate and express care and emotion remotely. People perceive a partner’s true intentions through in-person touch an estimated 57 percent of the time. When interacting with a device that simulated human touch, respondents were able to discern the touch’s intention 45 percent of the time. Since 0.45 / 0.57 ≈ 0.79, the devices in this study appear to convey intention with approximately 79 percent of the accuracy of in-person human touch.

Our sense of touch is unique. In fact, people have a “touch language,” says Culbertson, the WiSE Gabilan Assistant Professor and Assistant Professor of Computer Science and Aerospace and Mechanical Engineering at USC. Thus, she says, creating virtual touch that people can direct towards their loved ones is quite complex: not only do we differ in our comfort with and levels of “touchiness,” but we may also each have a distinct way of communicating different emotions such as sympathy, love, or sadness. The challenge for the researchers was to create an algorithm flexible enough to incorporate the many dimensions of touch.

Dec 18, 2021

Optical Chip Promises 350x Speedup Over RTX 3080 in Some Algorithms

Posted in categories: finance, information science, robotics/AI, space

Lightelligence, a Boston-based photonics company, revealed the world’s first small form-factor, photonics-based computing device, meaning it uses light to perform compute operations. The company claims the unit is “hundreds of times faster than a typical computing unit, such as NVIDIA RTX 3080.” 350 times faster, to be exact, but that only applies to certain types of applications.


However, PACE achieves that coveted specialization through an added field of computing, which not only makes the system faster but also far more efficient. While traditional semiconductor systems contend with the excess heat that results from running current through nanometre-scale features at sometimes ludicrous frequencies, the photonic system processes its workloads with zero Ohmic heating: no heat is produced by current flowing through resistance. Instead, it is all done with light.
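
As an illustration of the kind of workload such an accelerator targets (and not Lightelligence’s algorithm), the operation performed in optics on each cycle is a dense matrix-vector multiply; iterating that multiply with a threshold gives a simple Ising-style heuristic of the sort often cited for photonic hardware.

```python
# Toy Ising-style loop: the J @ s product is the step a photonic engine would
# compute in light each cycle; everything here runs in NumPy for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 64
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                      # symmetric spin couplings
np.fill_diagonal(J, 0)

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)    # random initial spin configuration
best_s, best_e = s.copy(), energy(s)
for _ in range(200):
    s = np.sign(J @ s + 1e-12)         # matrix-vector multiply + thresholding
    if energy(s) < best_e:
        best_s, best_e = s.copy(), energy(s)

print("best Ising energy found:", round(best_e, 2))
```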

Continue reading “Optical Chip Promises 350x Speedup Over RTX 3080 in Some Algorithms” »