
The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations—climatic, epidemiological, financial, and so forth.

Eight years ago a machine learning algorithm learned to identify a cat, and it stunned the world. A few years later AI could accurately translate languages and take down world champion Go players. Now, machine learning has begun to excel at complex multiplayer video games like StarCraft and Dota 2, and at subtle games like poker. AI, it would appear, is improving fast.

But how fast is fast, and what’s driving the pace? While better computer chips are key, AI research organization OpenAI thinks we should also measure the pace of improvement of the machine learning algorithms themselves.

In a blog post and paper, authored by OpenAI’s Danny Hernandez and Tom Brown and published on the arXiv, an open repository for preprint (not-yet-peer-reviewed) studies, the researchers say they’ve begun tracking a new measure for machine learning efficiency (that is, doing more with less). Using this measure, they show AI has been getting more efficient at a wicked pace.
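As a rough back-of-the-envelope illustration of what that measure captures, the sketch below (plain Python, illustrative numbers only) converts a reported reduction in training compute for a fixed benchmark into a doubling time for algorithmic efficiency. The roughly 44x reduction over about seven years is the headline figure the authors cite for reaching AlexNet-level ImageNet accuracy with less compute; the exact doubling time depends on how the endpoints are counted.

```python
from math import log2

def efficiency_doubling_time(compute_old, compute_new, years_elapsed):
    """Years per halving of the compute needed to reach a fixed benchmark."""
    gain = compute_old / compute_new       # how much less compute is needed now
    return years_elapsed / log2(gain)

# Illustrative figures: roughly a 44x drop in the compute needed to reach
# AlexNet-level ImageNet accuracy over about seven years (2012-2019).
months = efficiency_doubling_time(44.0, 1.0, 7.0) * 12
print(f"~{months:.0f} months per efficiency doubling")
```

Run as written, this works out to an efficiency doubling every 15 months or so, which is in the same ballpark as the trend the researchers describe.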

As far back as 2015, the National Institute of Standards and Technology (NIST) began asking encryption experts to submit their candidate algorithms for testing against quantum computing’s expected capabilities — so this is an issue that has already been front of mind for security professionals and organizations. But even with an organization like NIST leading the way, working through all those algorithms to judge their suitability to the task will take time. Thankfully, others within the scientific community have also risen to the challenge and joined in the research.

It will likely take years for a consensus to coalesce around the most suitable algorithms, roughly the time it took ECC encryption to gain mainstream acceptance, which seems like a fair comparison. The good news is that such a timeframe should still leave the opportunity to arrive at, and widely deploy, quantum-resistant cryptography before quantum computers capable of sustaining enough qubits to seriously threaten RSA and ECC encryption become available to potential attackers.

The ongoing development of quantum-resistant encryption will be fascinating to watch, and security professionals will be sure to keep a close eye on which algorithms and encryption strategies ultimately prove most effective. The world of encryption is changing more quickly than ever, and it has never been more important for the organizations dependent on that encryption to ensure that their partners are staying ahead of the curve.

A machine-learning algorithm has been developed by scientists in Japan to breathe new life into old molecules. Called BoundLess Objective-free eXploration, or Blox, it allows researchers to search chemical databases for molecules with the right properties to see them repurposed. The team demonstrated the power of their technique by finding molecules that could work in solar cells from a database designed for drug discovery.

Chemical repurposing involves taking a molecule or material and finding an entirely new use for it. Suitable molecules for chemical repurposing tend to stand apart from the larger group when considering one property against another. These materials are said to be out-of-trend and can display previously undiscovered yet exceptional characteristics.

‘In public databases there are a lot of molecules, but each molecule’s properties are mostly unknown. These molecules have been synthesised for a particular purpose, for example drug development, so unrelated properties were not measured,’ explains Koji Tsuda of the Riken Centre for Advanced Intelligence, who led the development of Blox. ‘There are a lot of hidden treasures in databases.’
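To make the “out-of-trend” idea concrete, here is a toy sketch, not the actual Blox algorithm and using synthetic data: fit a simple trend between two properties across a molecule set and flag the entries that deviate most from that trend as candidates for repurposing.

```python
import numpy as np

# Toy illustration of out-of-trend screening (not the Blox method itself):
# most molecules follow a broad trend between two properties; the few that
# sit far off the trend line are the interesting "hidden treasures".
rng = np.random.default_rng(0)
prop_a = rng.uniform(0.0, 10.0, size=500)              # first measured property
prop_b = 0.8 * prop_a + rng.normal(0.0, 1.0, 500)      # mostly trend-following
prop_b[:5] += 6.0                                      # plant a few outliers

slope, intercept = np.polyfit(prop_a, prop_b, deg=1)   # least-squares trend line
residuals = prop_b - (slope * prop_a + intercept)

# Molecules furthest above the trend are candidate out-of-trend materials.
candidates = np.argsort(residuals)[-5:]
print("Out-of-trend candidates:", candidates)
```

In practice the search runs over real property predictions rather than a single fitted line, but the underlying intuition of looking for points that break away from the bulk of the data is the same.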

Tissue biopsy slides stained using hematoxylin and eosin (H&E) dyes are a cornerstone of histopathology, especially for pathologists needing to diagnose and determine the stage of cancers. A research team led by MIT scientists at the Media Lab, in collaboration with clinicians at Stanford University School of Medicine and Harvard Medical School, now shows that digital scans of these biopsy slides can be stained computationally, using deep learning algorithms trained on data from physically dyed slides.
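As a rough sketch of the general approach, and not the authors’ actual model, the snippet below trains a tiny convolutional network (PyTorch, dummy data standing in for co-registered image patches) to map unstained scans toward an H&E-stained appearance using a pixel-wise loss; the published work relies on considerably more elaborate deep networks and real whole-slide data.

```python
import torch
from torch import nn

# Minimal sketch of virtual staining as paired image-to-image translation:
# learn a mapping from unstained tissue patches to H&E-stained appearance.
class VirtualStainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # RGB out
        )

    def forward(self, x):
        return self.net(x)

model = VirtualStainer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise loss on (unstained, stained) patch pairs

# Dummy batch standing in for co-registered unstained/stained image patches.
unstained = torch.rand(4, 3, 64, 64)
stained = torch.rand(4, 3, 64, 64)

optimizer.zero_grad()
prediction = model(unstained)
loss = loss_fn(prediction, stained)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```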

Pathologists who examined the computationally stained H&E images in a blind study could not tell them apart from traditionally stained slides while using them to accurately identify and grade prostate cancers. What’s more, the slides could also be computationally “de-stained” in a way that resets them to an original state for use in future studies, the researchers conclude in their May 20 study published in JAMA Network Open.

This process of computational digital staining and de-staining preserves small amounts of tissue biopsied from cancer patients and allows researchers and clinicians to analyze slides for multiple kinds of diagnostic and prognostic tests, without needing to extract additional tissue sections.