
Computer vision inches toward ‘common sense’ with Facebook’s latest research

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That’s not always easy, and researchers are always looking for a way to add a bit of “common sense” to AI so you don’t have to show it 500 pictures of a cat before it gets it. Facebook’s newest research takes a big step toward reducing the data bottleneck.

The company’s formidable AI research division has been working for years now on how to advance and scale things like advanced computer vision algorithms, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat or just saying there’s a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc. — it scales linearly, which is a word you never want to see in tech.
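To make the contrast concrete, here is a minimal sketch (in Python with scikit-learn, not Facebook's actual method or models) of the difference between fully supervised training, where every example needs a label, and a simple semi-supervised approach that pseudo-labels most of the data itself. The toy dataset, the train/test split, and the 5% labeling fraction are illustrative assumptions.

```python
# Sketch: fully supervised vs. semi-supervised training on toy data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Toy stand-in for labeled image data (e.g. "500 pictures per class").
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully supervised baseline: every training example carries a label.
supervised = SVC(probability=True, random_state=0).fit(X_train, y_train)

# Semi-supervised: keep labels for only ~5% of the data, mark the rest -1
# (scikit-learn's convention for "unlabeled") and let the model pseudo-label.
rng = np.random.default_rng(0)
y_partial = y_train.copy()
y_partial[rng.random(len(y_partial)) > 0.05] = -1

semi = SelfTrainingClassifier(SVC(probability=True, random_state=0))
semi.fit(X_train, y_partial)

print("supervised accuracy:     ", accuracy_score(y_test, supervised.predict(X_test)))
print("semi-supervised accuracy:", accuracy_score(y_test, semi.predict(X_test)))
```

The point of the comparison is simply that the semi-supervised model gets most of its training set without human labels, which is the bottleneck the research aims to relax.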

Machine learning algorithm helps unravel the physics underlying quantum systems

Scientists from the University of Bristol’s Quantum Engineering Technology Labs (QETLabs) have developed an algorithm that provides valuable insights into the physics underlying quantum systems—paving the way for significant advances in quantum computation and sensing, and potentially turning a new page in scientific investigation.

The Science of Consciousness: Towards the Cybernetic Theory of Mind

Consciousness remains scientifically elusive because it constitutes layers upon layers of non-material emergence: Reverse-engineering our thinking should be done in terms of networks, modules, algorithms and second-order emergence — meta-algorithms, or groups of modules. Neuronal circuits correlate with “immaterial” cognitive modules, and these cognitive algorithms, when activated, produce meta-algorithmic conscious awareness and phenomenal experience — all in all, at least two layers of emergence on top of “physical” neurons. Furthermore, consciousness represents certain transcendent aspects of projective ontology, according to the now widely accepted Holographic Principle.

#CyberneticTheoryofMind


There’s no shortage of workable theories of consciousness and its origins, each with its own merits and perspectives. We discuss the most relevant of them in the book, in line with my own Cybernetic Theory of Mind that I’m currently developing. Interestingly, these leading theories, if metaphysically extended, in large part lend support to Cyberneticism and Digital Pantheism, which may come into scientific vogue with the future cyberhumanity.

According to the Interface Theory of Perception developed by Donald Hoffman and the Biocentric theory of consciousness developed by Robert Lanza, any universe is essentially non-existent without a conscious observer. In both theories, conscious minds are required as primary building blocks for any universe arising from the probabilistic domain into existence. But biological minds represent just a snippet of the space of possible minds. Building on the tenets of Biocentrism, Cyberneticism goes further and includes all other possible conscious observers, such as artificially intelligent self-aware entities. Perhaps the extended theory could be dubbed ‘Noocentrism’.

Existence boils down to experience. No matter what ontological level a conscious entity finds itself at, it will be smack in the middle, between its transcendental realm and lower levels of organization. This is why I prefer the terms ‘Experiential Realism’ and ‘Pantheism’ as opposed to ‘Panentheism’, as some have suggested in regard to my philosophy.

Nvidia Entangled in Quantum Simulators

Quantum simulators are a strange breed of systems for purposes that might seem a bit nebulous from the outset. These are often HPC clusters with fast interconnects and powerful server processors (although not usually equipped with accelerators) that run a literal simulation of how various quantum circuits function for design and testing of quantum hardware and algorithms. Quantum simulators do more than just test. They can also be used to emulate quantum problem solving and serve as a novel approach to tackling problems without all the quantum hardware complexity.
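For a sense of what running a literal simulation of a quantum circuit means in practice, here is a minimal sketch (an assumption on my part: plain NumPy, two qubits, the state-vector method) of the core bookkeeping a simulator does. Production simulators do the same kind of linear algebra at vastly larger scale.

```python
# Sketch of a tiny state-vector quantum simulator: track the full state
# of n qubits and apply gates as matrix multiplications.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control on the first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Two qubits start in |00>: a 4-element state vector.
state = np.zeros(4)
state[0] = 1.0

# Hadamard on qubit 0, then CNOT, yields the Bell state (|00> + |11>)/sqrt(2).
state = np.kron(H, I) @ state
state = CNOT @ state

print("amplitudes:               ", np.round(state, 3))
print("measurement probabilities:", np.round(np.abs(state) ** 2, 3))

# The state vector doubles in size with every added qubit (2**n amplitudes),
# which is why large simulations need HPC-class memory and interconnects.
```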

Despite the various uses, there’s only so much commercial demand for quantum simulators. Companies like IBM run their own internally, and for others, Atos/Bull has built simulators based on its big-memory Sequana systems, but these are, as one might imagine, niche machines for special purposes. Nonetheless, Nvidia sees enough opportunity in this arena to make an announcement at its GTC event about the performance of quantum simulators using the DGX A100 and its own custom-cooked quantum development software stack, called cuQuantum.

After all, it is probably important for Nvidia to have some kind of stake in quantum before (and if) it ever really takes off, especially in large-scale and scientific computing. What better way to get an insider view than to work with quantum hardware and software developers who are designing better codes and qubits via a benchmark and testing environment?

Hive raises $85M for AI-based APIs to help moderate content, identify objects and more

As content moderation continues to be a critical aspect of how social media platforms work — one that they may be pressured to get right, or at least do better in tackling — a startup that has built a set of data and image models to help with that, along with any other tasks that require automatically detecting objects or text, is announcing a big round of funding.

Hive has built a training data trove based on crowdsourced contributions from some 2 million people globally. That data powers a set of APIs that can automatically identify objects, words and phrases in images — a process used not just in content moderation platforms, but also in building algorithms for autonomous systems, back-office data processing, and more. The startup has now raised $85 million in a Series D round of funding that it has confirmed values it at $2 billion.
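As a rough illustration of how such classification APIs are typically consumed, here is a hedged sketch of posting an image to an endpoint and keeping labels above a confidence threshold. The endpoint URL, authentication scheme, and response fields below are hypothetical stand-ins, not Hive’s actual API.

```python
# Sketch: submit an image to a (hypothetical) moderation/classification API.
import requests

API_URL = "https://api.example.com/v1/image/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                # hypothetical credential

def moderate_image(path: str, threshold: float = 0.9) -> list[str]:
    """Return labels the (hypothetical) model reports above the threshold."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    labels = resp.json().get("labels", [])  # assumed response shape
    return [l["name"] for l in labels if l.get("confidence", 0.0) >= threshold]

if __name__ == "__main__":
    print("flagged labels:", moderate_image("upload.jpg"))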

“At the heart of what we’re doing is building AI models that can help automate work that used to be manual,” said Kevin Guo, Hive’s co-founder and CEO. “We’ve heard about RPA and other workflow automation, and that is important too but what that has also established is that there are certain things that humans should not have to do that is very structural, but those systems can’t actually address a lot of other work that is unstructured.” Hive’s models help bring structure to that other work, and Guo claims they provide “near human level accuracy.”

FTC warns it could crack down on biased AI

AI systems can lead to race or gender discrimination.


The US Federal Trade Commission has warned companies against using biased artificial intelligence, saying they may break consumer protection laws. A new blog post notes that AI tools can reflect “troubling” racial and gender biases. If those tools are applied in areas like housing or employment, falsely advertised as unbiased, or trained on data that is gathered deceptively, the agency says it could intervene.

“In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver,” writes FTC attorney Elisa Jillson — particularly when promising decisions that don’t reflect racial or gender bias. “The result may be deception, discrimination — and an FTC law enforcement action.”

As Protocol points out, FTC chair Rebecca Slaughter recently called algorithm-based bias “an economic justice issue.” Slaughter and Jillson both mention that companies could be prosecuted under the Equal Credit Opportunity Act or the Fair Credit Reporting Act for biased and unfair AI-powered decisions, and unfair and deceptive practices could also fall under Section 5 of the FTC Act.
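One concrete, deliberately simplified example of the kind of check this implies: comparing a model’s selection rates across demographic groups, as in the rough “four-fifths” disparate-impact screen. The data and threshold below are illustrative assumptions, not a legal test or an FTC-endorsed method.

```python
# Sketch: compare approval rates across groups as a rough bias screen.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("warning: approval rates differ enough to warrant a closer look")
```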

Scientists are on a path to sequencing 1 million human genomes and using big data to unlock genetic secrets

The more data collected, the better the results.


Understanding the genetics of complex diseases, especially those related to the genetic differences among ethnic groups, is essentially a big data problem. And researchers need more data.

1,000,000 genomes

To address the need for more data, the National Institutes of Health has started a program called All of Us. The project aims to collect genetic information, medical records and health habits from surveys and wearables of more than a million people in the U.S. over the course of 10 years. It also has a goal of gathering more data from underrepresented minority groups to facilitate the study of health disparities. The All of Us project opened to public enrollment in 2018, and more than 270,000 people have contributed samples since. The project is continuing to recruit participants from all 50 states. Participating in this effort are many academic laboratories and private companies.
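A toy calculation, with assumed numbers, of why sample size matters here: the uncertainty of an estimated allele frequency shrinks with the square root of the number of participants, which is one reason larger and more representative cohorts give more reliable estimates for rare variants and for subgroups.

```python
# Sketch: how the standard error of an allele-frequency estimate shrinks
# as the number of sequenced participants grows (hypothetical frequency).
import math

true_freq = 0.05  # assumed risk-allele frequency

for n_people in (1_000, 10_000, 100_000, 1_000_000):
    n_alleles = 2 * n_people  # each person carries two copies
    std_err = math.sqrt(true_freq * (1 - true_freq) / n_alleles)
    print(f"{n_people:>9,} participants -> 95% CI roughly +/- {1.96 * std_err:.5f}")
```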
