Google was referring to its dominant position in the search market. In the early 2000s, several search engines competed for users, but Google now holds 90 per cent of the market. Nor was Google the pioneer of Android; it purchased the operating system at an early stage, when the market was dominated by BlackBerry and Nokia’s custom operating systems. Today, Android has the largest mobile OS market share.

In the email, Pichai reportedly remained optimistic and said that AI has “gone through many winters and springs”, adding that the “most important thing we can do right now is to focus on building a great product and developing it responsibly.”

Following Google Bard’s patchy launch last week, Google employees called the rollout rushed and botched. Google was under massive pressure from Microsoft, which is building OpenAI’s next-generation ChatGPT technology into its Bing search engine and Edge browser.

ChatGPT and other AI systems are propelling us faster toward the long-term technology dream of artificial general intelligence and the radical transformation called the “singularity,” Silicon Valley chip luminary and former Stanford University professor John Hennessy believes.

Hennessy won computing’s highest prize, the Turing Award, with colleague Dave Patterson for developing the computing architecture that made energy-efficient smartphone chips possible and that now is the foundation for virtually all major processors. He’s also chairman of Google parent company Alphabet.

Quantum mechanics is the science behind nuclear energy, smartphones, and particle collisions. Yet, almost a century after its discovery, there is still controversy over what the theory actually means. The problem is that its key element, the quantum-mechanical wave function describing atoms and subatomic particles, isn’t observable. As physics is an experimental science, physicists continue to argue over whether the wave function can be taken as real, or whether it is just a tool for making predictions about what can be measured—typically large, “classical” everyday objects.

The view of the antirealists, advocated by Niels Bohr, Werner Heisenberg, and an overwhelming majority of physicists, has become the orthodox mainstream interpretation. For Bohr especially, reality was like a movie shown without a film or projector creating it: “There is no quantum world,” Bohr reportedly affirmed, suggesting an imaginary border between the realms of microscopic, “unreal” quantum physics and “real,” macroscopic objects—a boundary that has received serious blows from experiments ever since. Albert Einstein was a fierce critic of this airy philosophy, although he didn’t come up with an alternative theory himself.

For many years, only a small number of outcasts, including Erwin Schrödinger and Hugh Everett, populated the camp of the realists. This renegade view, however, is becoming increasingly popular—and it naturally triggers the question of what this quantum reality really is. It is a question that occupied me for many years, until I arrived at the conclusion that quantum reality, deep down at the most fundamental level, is an all-encompassing, unified whole: “The One.”

Over the past decade, I’ve kept a close eye on the emergence of artificial intelligence in healthcare. Throughout, one truth remained constant: Despite all the hype, AI-focused startups and established tech companies alike have failed to move the needle on the nation’s overall health and medical costs.

Finally, after a decade of underperformance in AI-driven medicine, success is approaching faster than physicians and patients currently recognize.

The next version, ChatGPT-4, is scheduled for release later this year, as is Google’s rival AI product. And, last week, Microsoft unveiled an AI-powered search engine and web browser in partnership with OpenAI, with other tech-industry competitors slated to join the fray.

It remains to be seen which company will ultimately win the generative-AI arms race. But regardless of who comes out on top, we’ve reached a tipping point.

Lurking inside your next gadget may be a chip unlike those of the past. People used to do all the complex silicon design work themselves, but for the first time, AI is helping to build new chips for data centers, smartphones, and IoT devices. Chip-design software firm Synopsys has announced that its DSO.ai tool has successfully aided in the design of 100 chips, and it expects that upward trend to continue.

Companies like STMicroelectronics and SK Hynix have turned to Synopsys to accelerate semiconductor designs in an increasingly competitive environment. The past few years have seen demand for new chips increase while materials and costs have rocketed upward. Therefore, companies are looking for ways to get more done with less, and that’s what tools like DSO.ai are all about.

The tool can search design spaces, telling its human masters how best to arrange components to optimize power, performance, and area, or PPA as it’s often called. Among those 100 AI-assisted chip designs, companies have seen up to a 25% drop in power requirements and a 3x productivity increase for engineers. SK Hynix says a recent DSO.ai project resulted in a 15% cell area reduction and a 5% die shrink.
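Synopsys hasn’t published DSO.ai’s internals (the company describes it as a reinforcement-learning system), so the following is only a toy sketch of what design-space exploration means in this context: propose candidate settings, score each against a PPA-style cost, keep the best. Every knob name and the cost model below are invented for illustration, not Synopsys APIs.

```python
import random

# Hypothetical knobs a place-and-route flow might expose.
# Names and value ranges are illustrative only.
DESIGN_SPACE = {
    "clock_mhz": [800, 1000, 1200],
    "vdd_volts": [0.7, 0.8, 0.9],
    "cell_library": ["high_density", "high_speed"],
    "placement_density": [0.6, 0.7, 0.8],
}

def evaluate_ppa(cfg):
    """Toy stand-in for a slow EDA evaluation run.

    A real flow would run synthesis and place-and-route, then read
    back power, timing, and area reports."""
    fast = cfg["cell_library"] == "high_speed"
    power = cfg["vdd_volts"] ** 2 * cfg["clock_mhz"] * (1.3 if fast else 1.0)
    performance = cfg["clock_mhz"] * (1.15 if fast else 1.0)
    area = (1.2 if fast else 1.0) / cfg["placement_density"]
    return power, performance, area

def cost(cfg):
    # Scalarize PPA: lower power and area are better, higher performance is better.
    power, perf, area = evaluate_ppa(cfg)
    return power / perf + 0.5 * area

def random_search(n_trials=200, seed=42):
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(n_trials):
        cfg = {knob: rng.choice(values) for knob, values in DESIGN_SPACE.items()}
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

print(random_search())
```

A production tool replaces both pieces, with a real physical-design flow as the evaluator and a search far smarter than random sampling, but the loop structure is the same.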

A pair of engineers at Delft University of Technology, working with a colleague at Aix-Marseille University, reports that applying ultrasound to the surface of a glass plate can mimic the feel of a pressed button. Laurence Willemet, Michaël Wiertlewski and Jocelyn Monnoyer have published a paper in the Journal of the Royal Society Interface describing the device they built to test the idea of using ultrasound as a haptic screen enhancer.

Currently, users pressing buttons on their smartphone screens do not receive much in the way of physical feedback—phone engineers would like to change that. In this new effort, the researchers looked into the idea of using ultrasound applied to a glass plate to mimic the sensations of pushing a physical button.

The researchers created the device by merging two modules. One used blue and red lights to optically track the movement of an approaching finger. The other monitored and responded to contact. Together, the modules controlled piezo actuators that generated ultrasound at a frequency of 28.85 kHz. The device was affixed to a glass plate, which in turn was held in place by an aluminum frame. When in use, the actuators were driven by a ±200 V carrier signal.
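The paper’s drive electronics aren’t reproduced here, but the signal itself is easy to picture: an ultrasonic carrier whose amplitude is modulated as the finger makes contact. Below is a minimal NumPy sketch under that assumption; only the 28.85 kHz carrier and the ±200 V swing come from the article, while the sample rate, envelope shape, and function names are invented for the example.

```python
import numpy as np

FS = 500_000          # sample rate (Hz), comfortably above the carrier (assumed)
CARRIER_HZ = 28_850   # ultrasonic carrier frequency reported in the article
V_PEAK = 200.0        # +/-200 V drive swing reported for the piezo actuators

def drive_signal(duration_s, contact_envelope):
    """Amplitude-modulated ultrasonic drive voltage.

    contact_envelope maps time (s) to a 0..1 modulation depth and stands
    in for the device's finger-contact controller; the paper's actual
    control law is more involved than this."""
    t = np.arange(0.0, duration_s, 1.0 / FS)
    carrier = np.sin(2.0 * np.pi * CARRIER_HZ * t)
    envelope = np.clip(np.vectorize(contact_envelope)(t), 0.0, 1.0)
    return V_PEAK * envelope * carrier

# Example: ramp the vibration up over 5 ms once a press is detected,
# a purely illustrative stand-in for a button-click onset.
press = lambda t: min(t / 0.005, 1.0)
voltage = drive_signal(0.02, press)
print(voltage.shape, voltage.min(), voltage.max())
```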

Dramatic advances in quantum computing, smartphones that only need to be charged once a month, trains that levitate and move at superfast speeds. Technological leaps like these could revolutionize society, but they remain largely out of reach as long as superconductivity—the flow of electricity without resistance or energy waste—isn’t fully understood.

One of the major limitations for real-world applications of this technology is that the materials that make superconductivity possible typically need to be kept at extremely cold temperatures to reach that level of electrical efficiency. To get around this limit, researchers need to build a clear picture of what different superconducting materials look like at the atomic scale as they transition through different states of matter to become superconductors.

Scholars in a Brown University lab, working with an international team of scientists, have moved a small step closer to cracking this mystery for a recently discovered family of superconducting Kagome metals. In a new study, they used an innovative strategy combining nuclear magnetic resonance and quantum modeling theory to describe the microscopic structure of this superconductor at 103 kelvins, which is about 275 degrees below zero on the Fahrenheit scale.
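As a quick sanity check on that temperature, the kelvin-to-Fahrenheit conversion is plain arithmetic:

```python
def kelvin_to_fahrenheit(kelvin):
    # Subtract 273.15 to get Celsius, then rescale to Fahrenheit.
    return (kelvin - 273.15) * 9 / 5 + 32

print(kelvin_to_fahrenheit(103))  # -274.27, i.e. roughly 275 degrees below zero
```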

Google is launching new updates for Maps that are part of its plan to make the navigation app more immersive and intuitive for users, the company announced today at its event in Paris.

Most notably, the company announced that Immersive View is rolling out starting today in London, Los Angeles, New York, San Francisco and Tokyo. Immersive View, which Google first announced at I/O in May 2022, is designed to help you plan ahead and get a deeper understanding of a city before you visit it. The company plans to launch Immersive View in more cities, including Amsterdam, Dublin, Florence and Venice in the coming months.

The feature fuses billions of Street View and aerial images to create a digital model of the world. It also layers information on top of the digital model, such as details about the weather, traffic and how busy a location may be. For instance, say you’re planning to visit the Rijksmuseum in Amsterdam and want to get a feel for it before you go. You can use Immersive View to virtually soar over the building to get a better idea of what it looks like and where the entrances are located. You can also see what the area looks like at different times of the day and what the weather will be like. Immersive View can also show you nearby restaurants, and it allows you to look inside them to see if they would be an ideal spot for you.

“To create these true-to-life scenes, we use neural radiance fields (NeRF), an advanced AI technique that transforms ordinary pictures into 3D representations,” Google explained in a blog post. “With NeRF, we can accurately recreate the full context of a place including its lighting, the texture of materials and what’s in the background. All of this allows you to see if a bar’s moody lighting is the right vibe for a date night or if the views at a cafe make it the ideal spot for lunch with friends.”
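Google hasn’t published the Immersive View pipeline beyond that description, but the NeRF rendering rule it refers to is well documented: a field maps each 3D point to a density and a color, and a pixel is rendered by compositing samples along the camera ray. Here is a minimal NumPy sketch of that volume-rendering step; toy_field is an invented stand-in for the trained network, not anything from Google’s system.

```python
import numpy as np

def render_ray(field, origin, direction, t_near=0.1, t_far=4.0, n_samples=64):
    """Composite color along one camera ray with NeRF's quadrature rule:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the transmittance accumulated before sample i."""
    t = np.linspace(t_near, t_far, n_samples)
    pts = origin + t[:, None] * direction              # sample points on the ray
    sigma, color = field(pts)                          # density and RGB per sample
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)      # final pixel RGB

def toy_field(pts):
    """Invented radiance field: a soft reddish sphere at the origin."""
    r = np.linalg.norm(pts, axis=1)
    sigma = np.where(r < 1.0, 5.0, 0.0)
    color = np.tile([0.8, 0.3, 0.2], (len(pts), 1))
    return sigma, color

rgb = render_ray(toy_field,
                 origin=np.array([0.0, 0.0, -2.0]),
                 direction=np.array([0.0, 0.0, 1.0]))
print(rgb)
```

In a real NeRF, field is a neural network trained on the photographs and millions of such rays are rendered per frame; the compositing step above is unchanged.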

The company also announced that a new feature called “glanceable directions” is rolling out globally on Android and iOS in the coming months. The feature lets you track your journey right from the route overview or your lock screen, showing updated ETAs and where to make your next turn. If you decide to take another path, the app will update your trip automatically. Google notes that previously, this information was only visible by unlocking your phone, opening the app and using comprehensive navigation mode. Glanceable directions can be used whenever you’re in the app, whether you’re walking, biking or taking public transit.

Users of Apple’s AirPods are well aware that the product they purchased is pretty much disposable. Once the rechargeable batteries on the device give way, there is no way to replace them; you need to buy new AirPods, unless you are ready to do the hard work yourself, with a little help, of course.

Ken Pillonel is no stranger to toying with Apple products. As an engineering student, he built the world’s first iPhone with a USB-C port and has previously shown us how the batteries in the AirPods can be replaced if you can 3D-print a new case.

Pillonel has noticed that millions watch his videos, but very few actually attempt the repairs themselves. He wants to help people by making replacement parts available.