
What if accessing knowledge, which used to require hours of analyzing handwritten scrolls or books, could be done in mere moments?

Throughout history, the way humans acquire knowledge has undergone great revolutions. The birth of writing and books transformed learning, allowing ideas to be preserved and shared across generations. Then came the Internet, putting vast information at the fingertips of billions of people.

Today, we stand at another shift: the age of AI tools, where AI doesn’t just give us answers; it provides reliable, tailored responses in seconds. We no longer need to spend hours gathering and vetting the right information for a problem ourselves. If knowledge is now a tool everyone can hold, the real revolution starts when we use that superpower to solve problems and improve the world.

At the heart of this breakthrough – driven by Japan’s National Institute of Information and Communications Technology (NICT) and Sumitomo Electric Industries – is a 19-core optical fiber with a standard 0.125 mm cladding diameter, designed to fit seamlessly into existing infrastructure and eliminate the need for costly upgrades.

Each core acts as an independent data channel, collectively forming a “19-lane highway” within the same space as traditional single-core fibers.

Unlike earlier multi-core designs limited to short distances or specialized wavelength bands, this fiber operates efficiently across the C and L bands (commercial standards used globally) thanks to a refined core arrangement that slashes signal loss by 40% compared to prior models.
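To see why a multi-core fiber matters, note that spatial-division multiplexing simply multiplies the wavelength-multiplexed capacity of a single core by the core count. The back-of-envelope sketch below uses illustrative numbers that are my own assumptions, not figures from the article:

```python
# Back-of-envelope SDM capacity: cores × WDM channels × per-channel rate.
# All numeric values below are illustrative assumptions, not article figures.
cores = 19               # cores in the NICT/Sumitomo fiber
wdm_channels = 180       # assumed wavelength channels across the C and L bands
gbps_per_channel = 300   # assumed data rate per wavelength channel

single_core_tbps = 1 * wdm_channels * gbps_per_channel / 1000
total_tbps = cores * wdm_channels * gbps_per_channel / 1000

print(f"single-core fiber: {single_core_tbps:.0f} Tb/s")
print(f"19-core fiber:     {total_tbps:.0f} Tb/s")
```

The point of the sketch is the scaling law: because all 19 cores fit inside the standard 0.125 mm cladding, the fiber multiplies capacity by 19 without changing anything else about the cable plant.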

Back in 2018, a scientist from the University of Texas at Austin proposed a protocol to generate randomness in a way that could be certified as truly unpredictable. That scientist, Scott Aaronson, now sees that idea become a working reality. “When I first proposed my certified randomness protocol in 2018, I had no idea how long I’d need to wait to see an experimental demonstration of it,” said Aaronson, who now directs a quantum center at a major university.

The experiment was carried out on a cutting-edge 56-qubit quantum computer, accessed remotely over the internet. The machine belongs to a company that recently made a significant upgrade to its system. The research team included experts from a large bank’s tech lab, national research centers, and universities.

To generate certified randomness, the team used a method called random circuit sampling, or RCS. The idea is to feed the quantum computer a series of hard problems, known as challenge circuits, which it must answer by sampling from many possible outcomes in a way that is impossible to predict. Classical supercomputers then check the samples against the circuits to confirm whether the answers are genuinely random.
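The verification step can be sketched in miniature. The toy example below is my own illustration, not the team's code: it uses a 4-qubit Haar-random "circuit" and the linear cross-entropy benchmark, a common scoring method for RCS, to separate a sampler that faithfully follows the circuit's output distribution from a classical guesser that samples uniformly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

# Toy "challenge circuit": a Haar-random unitary applied to |0...0>.
# (A real RCS experiment runs deep random gate circuits on ~56 qubits.)
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # fix phases -> Haar-random unitary
probs = np.abs(u[:, 0]) ** 2               # output distribution on |0...0>
probs /= probs.sum()                       # guard against rounding error

# "Quantum" sampler: draws from the circuit's true output distribution.
samples = rng.choice(dim, size=2000, p=probs)
# Classical "spoofer": draws uniformly, ignoring the circuit.
uniform = rng.integers(dim, size=2000)

def linear_xeb(outcomes):
    # Linear cross-entropy benchmark: D * mean(p(x)) - 1.
    # Near 1 for a faithful sampler, near 0 for a uniform guesser.
    return dim * probs[outcomes].mean() - 1.0

print(f"faithful sampler XEB: {linear_xeb(samples):.2f}")
print(f"uniform guesser XEB:  {linear_xeb(uniform):.2f}")
```

In the actual protocol the verifier must also check that the samples arrived faster than any classical simulation could have produced them; that timing constraint, not shown here, is what certifies the randomness.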

HELSINKI — Chinese commercial satellite manufacturer MinoSpace has won a major contract to build a remote sensing satellite constellation for Sichuan Province, under a project approved by the country’s top economic planner.

Beijing-based MinoSpace won the bid to construct a “space satellite constellation,” the National Public Resources Trading Platform (Sichuan Province) announced May 18, the Chinese-language Economic Observer reported.

The contract is worth 804 million yuan (around $111 million) and the constellation has been approved by the National Development and Reform Commission (NDRC), China’s top economic planning agency, signaling potential alignment with national satellite internet and remote sensing infrastructure goals.

Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s Neuralink, which is creating “minimally invasive” neural implants to suit a wide range of potential future applications, or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while these well-funded companies can push the technology forward today only as medical devices, thanks to the regulatory hoops governing implants and their relative safety, eventually the technology will become safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine reflects a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research on smaller foundation models trained on curated, domain-specific literature, such as programming textbooks, showed that they can match or exceed large generalist models, suggesting a middle ground between small task-specific models and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and trained exclusively on data from Neurosurgery Publications.

METHODS:

We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
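As one illustration of the evaluation tasks above, zero-shot classification with a CLIP-style model reduces to cosine similarity between an image embedding and one text embedding per candidate label, followed by a softmax. The minimal sketch below is my own illustration, not the study's pipeline: it uses hand-built toy embeddings and hypothetical labels in place of the actual CNS-CLIP encoder outputs:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot classification: cosine similarity between an
    image embedding and one text embedding per candidate label, converted
    to probabilities with a temperature-scaled softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature
    logits -= logits.max()          # numerical stability before exponentiating
    p = np.exp(logits)
    return p / p.sum()

# Toy stand-ins for encoder outputs; a real pipeline would embed the image
# and the label prompts with the fine-tuned CLIP encoders instead.
rng = np.random.default_rng(0)
labels = ["axial CT", "MRI", "angiogram"]        # hypothetical labels
text_embs = np.eye(3, 8)                          # orthonormal toy text embeddings
image_emb = text_embs[0] + 0.05 * rng.normal(size=8)  # image near "axial CT"

probs = zero_shot_classify(image_emb, text_embs)
print(labels[int(np.argmax(probs))])
```

The same similarity machinery, run in the other direction (one query caption against many figure embeddings), is what the information-retrieval evaluation measures.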

Self-driving cars that eliminate traffic jams, instant health care diagnoses without leaving your home, or feeling the touch of loved ones on the other side of the continent may sound like the stuff of science fiction.

But new research, led by the University of Bristol and published in the journal Nature Electronics, could bring all this and more a step closer to reality.

The futuristic concepts rely on the ability to communicate and transfer vast volumes of data much faster than existing networks. So physicists have developed an innovative way to accelerate this process between scores of users, potentially across the globe.

By Chuck Brooks


Dear Friends and Colleagues, this issue of the Security & Insights newsletter focuses on cybersecurity and the convergence of devices and networks. The convergence of the Internet of Things, industrial control systems (ICS), operational technology (OT), and information technology (IT) has exposed vulnerabilities and expanded attack surfaces. These converged environments are prime targets for hackers, who frequently look for unprotected ports and systems on internet-connected industrial devices. IT/OT/ICS supply chains in continuous integration (CI) are especially vulnerable, both because they offer attackers several avenues of entry and because older OT systems were not built to withstand cyberattacks. Below is a collection of articles that address the challenges and threats of cybersecurity for connected devices and people.

Thanks for reading and stay safe! Chuck Brooks

Growing cyberthreats to the internet of things.