
Yes, this works with the financial profile of “middle class” American families.


(Tech Xplore)—RethinkX, an independent think tank that analyzes and forecasts disruptive technologies, has released an astonishing report predicting a far more rapid transition to electric, autonomous vehicles than most experts currently expect. The report is based on an analysis of the so-called technology-adoption S-curve that describes the rapid uptake of truly disruptive technologies like smartphones and the internet. It also addresses in detail the massive economic implications of this prediction across various sectors, including energy, transportation and manufacturing.

Rethinking Transportation 2020–2030 suggests that within 10 years of regulatory approval, by 2030, 95 percent of U.S. passenger miles traveled will be served by on-demand autonomous electric vehicles (AEVs). The primary driver of this enormous change in American life is economics: the cost savings of using transport-as-a-service (TaaS) providers will be so great that consumers will abandon individually owned vehicles. The report predicts that switching to TaaS will save the average family $5,600 annually, the equivalent of a 10 percent raise in salary, and that this will lead to the biggest increase in consumer spending in history.
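
As a quick sanity check on that framing (the implied income figure is an inference from the report's own numbers, not something the report states), the "10 percent raise" equivalence corresponds to a household income of roughly $56,000:

```python
# Back-of-envelope check of the "$5,600 savings ≈ a 10 percent raise" framing.
# The implied household income is an inference, not a figure from the report.
annual_savings = 5600   # dollars per year, per the RethinkX report
raise_fraction = 0.10   # "equivalent of a 10 percent raise"

implied_income = annual_savings / raise_fraction
print(f"Implied household income: ${implied_income:,.0f}")  # -> $56,000
```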

Consumers are already beginning to adapt to TaaS through the broad availability of ride-sharing services; additionally, the report says, Uber, Lyft and Didi are investing billions in developing technologies and services that help consumers overcome psychological and behavioral hurdles to shared transportation, such as habit, fear of strangers and an affinity for driving. In New York City alone, 550,000 passengers chose TaaS services in 2016.

The “WatchSense” prototype uses a small depth camera attached to the arm, mimicking a depth camera embedded in a smartwatch. It could make typing easier or, in a music program, let the user increase the volume simply by raising a finger. (credit: Srinath Sridhar et al.)

If you wear a smartwatch, you know how limiting it is to type on it or otherwise operate it. Now European researchers have developed an input method that uses a depth camera (similar to the Kinect game controller) to track fingertip touch and location on the back of the hand or in mid-air, allowing for precision control.

The researchers have created a prototype called “WatchSense,” worn on the user’s arm. It captures the movements of the thumb and index finger on the back of the hand or in the space above it. The approach would also work with smartphones, smart TVs, and virtual- or augmented-reality devices, explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.
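
The team's tracking code isn't reproduced here, but the basic idea, turning per-frame fingertip detections from a wrist-worn depth camera into on-skin touches versus mid-air gestures, can be sketched roughly as follows. The data structure, the depth threshold and the event strings are hypothetical placeholders, not the researchers' actual API.

```python
# Hypothetical sketch of the WatchSense idea: classify thumb/index fingertip
# detections from a wrist-worn depth camera into on-skin taps vs. mid-air gestures.
# A real system would first run a depth-based hand tracker to produce these detections.
from dataclasses import dataclass

TOUCH_DEPTH_MM = 8  # assumed threshold: a fingertip within ~8 mm of the skin counts as a touch

@dataclass
class Fingertip:
    x: float           # position on/above the back of the hand (mm)
    y: float
    height_mm: float   # distance above the skin surface

def classify(tip: Fingertip) -> str:
    """Map a single fingertip detection to an input event."""
    if tip.height_mm <= TOUCH_DEPTH_MM:
        return f"touch at ({tip.x:.0f}, {tip.y:.0f})"        # e.g. select a key
    return f"mid-air gesture at {tip.height_mm:.0f} mm"      # e.g. raise the volume

# Example frame: index finger touching the skin, thumb hovering above the hand.
for tip in [Fingertip(12, 30, 3), Fingertip(-5, 10, 45)]:
    print(classify(tip))
```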

Read more

UK artificial intelligence (AI) startup Babylon has raised $60 million (£47 million) for its smartphone app, which aims to put a doctor in your pocket.

The latest funding round, which comes just over a year after the startup’s last fundraise, means that the three-year-old London startup now has a valuation in excess of $200 million (£156 million), according to The Financial Times.

Babylon’s app has been downloaded over a million times, and it allows people in the UK, Ireland, and Rwanda to ask a chatbot a series of questions about their condition without having to visit a GP.
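
Babylon hasn't published how its triage logic works, so the following is only a rough illustration of what a question-and-answer symptom checker of this general shape might look like; the questions, weights and escalation threshold are invented for the example.

```python
# Illustrative symptom-checker loop; not Babylon's actual logic.
# Each question carries an assumed weight; a high total score triggers a GP referral.
QUESTIONS = [
    ("Do you have a fever above 39°C?", 2),
    ("Has the symptom lasted more than a week?", 1),
    ("Is the pain severe enough to disturb sleep?", 2),
]
ESCALATE_AT = 3  # assumed score at which the bot recommends seeing a GP

def triage(answers: list) -> str:
    score = sum(weight for (_, weight), yes in zip(QUESTIONS, answers) if yes)
    if score >= ESCALATE_AT:
        return "Please book a GP appointment."
    return "Self-care advice: rest, fluids, and monitor your symptoms."

print(triage([True, False, True]))   # -> "Please book a GP appointment."
```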

Read more

In the past 10 years, the best-performing artificial-intelligence systems—such as the speech recognizers on smartphones or Google’s latest automatic translator—have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.
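
The original McCulloch-Pitts unit is simple enough to state in a few lines: it sums binary inputs and fires if the sum reaches a threshold. The sketch below shows such a neuron implementing logical AND and OR; it is an illustration of that early model, not code from any of the modern systems mentioned above.

```python
# A McCulloch-Pitts threshold unit: outputs 1 when the weighted sum of its
# binary inputs meets or exceeds its threshold, otherwise 0.
def mp_neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Logical AND: both inputs must be on (threshold 2).
# Logical OR: any one input suffices (threshold 1).
for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              "AND:", mp_neuron((a, b), (1, 1), 2),
              "OR:",  mp_neuron((a, b), (1, 1), 1))
```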

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

Read more

Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustration with waiting a few extra seconds for our emails to push through doesn’t mean we have to simply stand by.

To help us make the most of these “micro-moments,” researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a series of apps called “WaitSuite” that test you on vocabulary words during idle moments, like when you’re waiting for an instant message or for your phone to connect to WiFi.

Building on micro-learning apps like Duolingo, WaitSuite aims to leverage moments when a person wouldn’t otherwise be doing anything — a practice that its developers call “wait-learning.”
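
WaitSuite's own code isn't shown in the announcement, but the core "wait-learning" trick, wrapping a slow operation so that a flashcard appears while the user would otherwise just be waiting, can be sketched like this; the flashcard deck and the wrapped operation are stand-ins, not MIT's implementation.

```python
# Rough sketch of the "wait-learning" idea: show a vocabulary prompt while a
# slow operation (message send, WiFi reconnect, ...) runs in the background.
import random
import threading
import time

FLASHCARDS = {"la manzana": "the apple", "el perro": "the dog"}

def wait_learn(slow_operation):
    word, answer = random.choice(list(FLASHCARDS.items()))
    worker = threading.Thread(target=slow_operation)
    worker.start()                       # kick off the thing we are waiting for
    guess = input(f"While you wait: what does '{word}' mean? ")
    print("Correct!" if guess.strip().lower() == answer else f"It means '{answer}'.")
    worker.join()                        # resume once the slow operation has finished

wait_learn(lambda: time.sleep(3))        # stand-in for, e.g., waiting on a message send
```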

Read more

Drawing inspiration from the plant world, researchers have invented a new electrode that could boost our current solar energy storage by an astonishing 3,000 percent.

The technology is flexible and can be attached directly to solar cells — which means we could finally be one step closer to smartphones and laptops that draw their power from the Sun, and never run out.

A major problem with reliably using solar energy as a power source is finding an efficient way to store it for later use without leakage over time.

Read more

I’ve been reading about Gcam, the Google X project that was first sparked by the need for a tiny camera to fit inside Google Glass, before evolving to power the world-beating camera of the Google Pixel. Gcam embodies an atypical approach to photography in seeking to find software solutions for what have traditionally been hardware problems. Well, others have tried, but those have always seemed like inchoate gimmicks, so I guess the unprecedented thing about Gcam is that it actually works. But the most exciting thing is what it portends.

I think we’ll one day be able to capture images without any photographic equipment at all.

Now I know this sounds preposterous, but I don’t think it’s any more so than the internet or human flight might have once seemed. Let’s consider what happens when we tap the shutter button on our cameraphones: light information is collected and focused by a lens onto a digital sensor, which converts the photons it receives into data that the phone can understand, and the phone then converts that into an image on its display. So we’re really just feeding information into a computer.
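
To make the "it's all just data" point concrete, here is a toy version of the last step of that pipeline: turning raw sensor counts into displayable pixel values. Real camera processing (demosaicing, white balance, noise reduction) is far more involved; the bit depth and gamma value below are assumptions for the sake of illustration.

```python
# Toy version of the final step of the imaging pipeline: raw photon counts
# from the sensor -> normalized, gamma-encoded pixel values for the display.
import numpy as np

raw = np.array([[120, 4000, 9800],
                [16000, 300, 7000]], dtype=np.float64)   # fake 12-bit sensor counts

normalized = raw / 2**12                    # scale counts to the range [0, 1]
display = (normalized ** (1 / 2.2)) * 255   # assumed display gamma of 2.2
print(display.astype(np.uint8))             # the "image" the phone would show
```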

Read more

Microscopically fine conductor paths are required on the surfaces of smartphone touchscreens. At the edges of the appliances, these microscopic circuit paths come together to form larger connective pads. Until now, these different conductive paths had to be manufactured in several steps in time-consuming processes. With photochemical metallization, this is now possible in one single step on flexible substrates. The process has several benefits: It is fast, flexible, variable in size, inexpensive and environmentally friendly. Additional process steps for post-treatment are not necessary.

For the new process, the foils are coated with a photoactive layer. “After that, we apply a colorless, UV-stable silver compound,” explains Peter William de Oliveira, head of optical materials. When this stack of layers is irradiated with UV light, the silver compound decomposes at the photoactive layer and the silver ions are reduced to metallic, electrically conductive silver. In this way, conductive paths of varying widths, down to a thousandth of a millimeter, can be produced.
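
The silver deposition step is a standard photoreduction: the UV-excited photoactive layer supplies electrons that reduce the silver ions to metallic silver. The source does not name the photoactive material, so the reaction below is written in generic form, as a rough sketch of the chemistry rather than the institute's exact mechanism.

```latex
% Generic photoreduction of silver ions on a UV-excited photoactive layer
\[
  \text{photoactive layer} \;\xrightarrow{\;h\nu\ (\mathrm{UV})\;}\; e^{-},
  \qquad
  \mathrm{Ag^{+}} + e^{-} \;\longrightarrow\; \mathrm{Ag^{0}}\ \text{(metallic, conductive)}
\]
```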

This basic principle allows conductive paths to be created individually. “There are different possibilities we can use depending on the requirements: writing conductive paths with a UV laser is particularly suitable for initial customized prototyping and for testing a new conductive-path design. For mass production, however, this method is too time-consuming,” de Oliveira explains.

Read more

Four years ago, Google started to see the real potential for deploying neural networks to support a large number of new services. During that time it was also clear that, given the existing hardware, if people did voice searches for three minutes per day or dictated to their phone for short periods, Google would have to double the number of datacenters just to run machine learning models.

The need for a new architectural approach was clear, Google distinguished hardware engineer Norman Jouppi tells The Next Platform, but it required some radical thinking. As it turns out, that’s exactly what he is known for. One of the chief architects of the MIPS processor, Jouppi has pioneered new technologies in memory systems and is one of the most recognized names in microprocessor design. When he joined Google over three years ago, there were several options on the table for an inference chip to churn out services from models trained on Google’s CPU and GPU hybrid machines for deep learning, but Jouppi says he never expected to return to what is essentially a CISC device.

We are, of course, talking about Google’s Tensor Processing Unit (TPU), which has not been described in much detail or benchmarked thoroughly until this week. Today, Google released an exhaustive comparison of the TPU’s performance and efficiencies compared with Haswell CPUs and Nvidia Tesla K80 GPUs. We will cover that in more detail in a separate article so we can devote time to an in-depth exploration of just what’s inside the Google TPU to give it such a leg up on other hardware for deep learning inference. You can take a look at the full paper, which was just released, and read on for what we were able to glean from Jouppi that the paper doesn’t reveal.
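
A key design point of the TPU is that inference can run on low-precision integer arithmetic (8-bit multiply-accumulates) rather than 32-bit floating point, which is much of what lets a fixed-function matrix unit pull ahead of general-purpose CPUs and GPUs on this workload. The sketch below only emulates that style of arithmetic in NumPy, quantizing activations and weights to int8, multiplying in integers, then rescaling; the scales and shapes are arbitrary examples, not anything from the paper.

```python
# Emulating 8-bit quantized inference, the style of arithmetic the TPU's
# matrix unit is built around. Scales and shapes here are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4)).astype(np.float32)   # activations
w = rng.standard_normal((4, 3)).astype(np.float32)   # weights

def quantize(a):
    """Symmetric quantization of a float array to int8 plus a scale factor."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

# Integer multiply-accumulate (accumulate in int32, as integer hardware does),
# then rescale the result back to floating point.
y_int = xq.astype(np.int32) @ wq.astype(np.int32)
y_approx = y_int * (xs * ws)

print("float32 result:", x @ w)
print("int8-path result:", y_approx)
```

Eight-bit operands make each multiply-accumulate far cheaper in silicon area and memory bandwidth than a 32-bit floating-point one, which is the kind of advantage the comparison against Haswell CPUs and Tesla K80 GPUs is quantifying.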

Read more