Something once thought too delicate for real cities just survived them. A quiet test in Germany hints that the next internet may be both unbreakable and already under our feet.
On a 30-kilometer loop of commercial fiber in Berlin, researchers just teleported data while ordinary internet traffic flowed on the same line without a hiccup. The feat, executed by T-Labs with Qunnect’s Carina platform, kept delicate quantum states steady against city vibrations and temperature swings, hitting 95 percent fidelity in real time. It shows that today’s networks can carry tomorrow’s quantum links, with stakes that range from unbreakable cryptography to connected quantum computers. For Deutsche Telekom’s Abdu Mudesir, it also signals a path to European technological sovereignty as the system scales to longer distances and more nodes.
Quantum technologies, meaning computers and other devices that leverage quantum mechanical effects, rely on the precise control of light and matter. Over the past decades, quantum physicists and materials scientists have been trying to identify systems that can reliably generate photons (i.e., light particles) and could thus be used to build quantum technologies.
One approach to generating photons relies on silicon color centers, such as the emerging T center. Color centers are defects or irregularities in the crystal structure of silicon, characterized by a locally different arrangement of atoms.
The T center and other silicon color centers can emit light in the wavelength band that is already used by fiber-optic internet cables, which is desirable for the development of quantum networks and quantum communication systems.
Cables underneath New York City are teeming with entangled quantum particles of light thanks to Qunnect, a company that has spent a decade working on building an unhackable quantum internet.
Google API keys for services like Maps embedded in accessible client-side code could be used to authenticate to the Gemini AI assistant and access private data.
Researchers found nearly 3,000 such keys while scanning internet pages from organizations in various sectors, and even from Google.
The problem emerged when Google introduced its Gemini assistant and developers began enabling the LLM API in their projects. Before that, Google Cloud API keys were not treated as sensitive data, and exposing them online carried little risk.
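Finding exposed keys at this scale is mostly pattern matching: Google API keys follow a well-known public format, the literal prefix `AIza` followed by 35 URL-safe characters. A minimal sketch of the kind of scan the researchers describe (the HTML sample and function names here are illustrative, not the researchers' actual tooling):

```python
import re

# Google API keys have a documented public shape: "AIza" + 35
# URL-safe characters, 39 characters in total.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_exposed_keys(page_source: str) -> list[str]:
    """Return unique Google API key candidates found in client-side code."""
    return sorted(set(GOOGLE_API_KEY_RE.findall(page_source)))

# Example: a key baked into inline JavaScript (fabricated placeholder key).
html = '<script>const cfg = {mapsKey: "AIza' + "B" * 35 + '"};</script>'
print(find_exposed_keys(html))
```

A match only identifies a candidate; whether a key actually authenticates to a sensitive API such as Gemini depends on the restrictions configured for it.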
Supply chain attacks are now a top cyber threat—SolarWinds and Colonial Pipeline showed how one weak link can cascade across entire sectors.
In my latest article, I examine how AI, 5G, IoT, and quantum computing are expanding both risks and defenses, and share practical steps: zero trust, SBOMs, supplier audits, public-private collaboration, and board-level ownership.
Cyber supply chain security is no longer optional—it’s essential for resilience, innovation, and national security.
At Rice University, a research lab’s signature keepsake has helped perfect a method for growing patterned diamond surfaces that could help decrease operating temperatures in electronics by 23 degrees Celsius. The paper is published in the journal Applied Physics Letters.
“In the world of electronics, heat is the enemy,” said Xiang Zhang, assistant research professor of materials science and nanoengineering at Rice and a first author on the study. “A reduction of 23 C is significant—it can extend the lifespan of a device and allow it to run faster without overheating.”
Heat management is one of the major challenges facing today’s high-power technologies, from the gallium nitride transistors used in radar and 5G devices to the processing units powering the data center infrastructure that supports artificial intelligence. Diamond outshines most other materials when it comes to handling heat, but its hardness makes it difficult to work with. Growing diamond in technology-relevant forms is particularly challenging.
With Fortinet appliances becoming an attractive target for threat actors, it’s essential that organizations ensure management interfaces are not exposed to the internet, change default and common credentials, rotate SSL-VPN user credentials, implement multi-factor authentication for administrative and VPN access, and audit for unauthorized administrative accounts or connections.
It’s also recommended to isolate backup servers from general network access, ensure all software programs are up-to-date, and monitor for unintended network exposure.
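One of the checks above, auditing for unintended network exposure, can be approximated from an external vantage point with a simple TCP reachability probe. A minimal sketch, assuming the ports listed are the ones your deployment uses for management or SSL-VPN (adjust them to your environment; this does not replace a proper external scan):

```python
import socket

# Ports commonly used for HTTPS management or SSL-VPN interfaces.
# These are assumptions for illustration; match them to your config.
PORTS_TO_CHECK = [443, 8443, 10443]

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_exposure(host: str) -> dict[int, bool]:
    """Report which candidate management ports answer from this vantage point."""
    return {port: is_port_reachable(host, port) for port in PORTS_TO_CHECK}
```

Run it from outside the network perimeter: any `True` result for a management port means the interface is reachable from the internet and should be locked down.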
“As we expect this trend to continue in 2026, organizations should anticipate that AI-augmented threat activity will continue to grow in volume from both skilled and unskilled adversaries,” Moses said. “Strong defensive fundamentals remain the most effective countermeasure: patch management for perimeter devices, credential hygiene, network segmentation, and robust detection for post-exploitation indicators.”
Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.
She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ring-side seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.
So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?
Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:
• Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
• Microbiology enabled bioweapons, but also faster vaccine development.
• The internet allowed lies to spread faster, but let fact-checks spread just as fast.
But she also thinks this will be a much harder case. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.
The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.
Artificial intelligence and quantum computing are no longer hypothetical; they are actively altering cybersecurity, extending attack surfaces, escalating dangers, and eroding existing defenses. We are in a new era of emerging technologies that directly reshape cybersecurity requirements.
As a seasoned observer and participant in the cybersecurity domain, through my work, teaching, contributions to Homeland Security Today, and my book “Inside Cyber: How AI, 5G, IoT, and Quantum Computing Will Transform Privacy and Our Security,” I have consistently underscored that technological advancement is outpacing our institutions, policies, and workforce preparedness.
Current frameworks, designed for a pre-digital-convergence era, are increasingly unsuitable. To deal with these dual-use technologies, which act as force multipliers for both defenders and adversaries, we must adjust our strategy now; time is of the essence.
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.
When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot and quickly converged to more than 99 percent accuracy using multiple measurements. MAFT-ONN required only about 120 nanoseconds to perform the entire process.
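One way to see how repeated measurements can lift 85 percent single-shot accuracy above 99 percent is a simple majority-vote model. This is an illustrative sketch assuming independent errors across shots; it is not a description of MAFT-ONN's actual aggregation scheme:

```python
from math import comb

def majority_vote_accuracy(p_single: float, n_shots: int) -> float:
    """Probability that a majority of n independent shots, each correct
    with probability p_single, yields the right answer (n odd)."""
    needed = n_shots // 2 + 1  # votes needed for a majority
    return sum(
        comb(n_shots, i) * p_single**i * (1 - p_single)**(n_shots - i)
        for i in range(needed, n_shots + 1)
    )

print(majority_vote_accuracy(0.85, 1))  # 0.85: the single-shot baseline
print(majority_vote_accuracy(0.85, 9))  # ~0.994: nine shots exceed 99%
```

Under this toy model, roughly nine independent shots suffice to cross the 99 percent threshold, consistent with the "multiple measurements" figure quoted above.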