
PsiQuantum unveiled Omega, a quantum photonic chipset designed for large-scale quantum computing. This development, detailed in a Nature publication, marks a significant milestone in the mass production of quantum chips. Manufactured in partnership with GlobalFoundries at their Albany, New York facility, Omega integrates advanced components essential for constructing million-qubit quantum computers. The chipset employs photonics technology, manipulating single photons for computations, which offers advantages such as simplified cooling mechanisms. PsiQuantum has achieved manufacturing yields comparable to standard semiconductors, producing millions of these chips. The company plans to establish two Quantum Compute Centers, in Brisbane, Australia, and Chicago, Illinois, aiming for operational facilities by 2027. This progress positions PsiQuantum at the forefront of the quantum computing industry, alongside other major companies making significant strides in the field.

Summary of the paper in Nature: For decades, scientists have dreamed of building powerful quantum computers using light—photonic quantum computers. These machines could solve complex problems far beyond the reach of today’s most advanced supercomputers. However, a major roadblock has been the sheer difficulty of manufacturing the components required at the necessary scale. Now, researchers have developed a manufacturable platform for photonic quantum computing, marking a significant breakthrough. Their system is built using silicon photonics, a technology that integrates optical components directly onto a chip, much like modern semiconductor chips. The team demonstrated key capabilities:

* Ultra-precise qubits: They achieved a stunning 99.98% accuracy in preparing and measuring quantum states.
* Reliable quantum interference: Independent photon sources interacted with a visibility of 99.50%, crucial for quantum logic operations.
* High-fidelity entanglement: A critical quantum process, known as two-qubit fusion, reached 99.22% accuracy.
* Seamless chip-to-chip connections: The team linked quantum chips with 99.72% fidelity, a crucial step for scaling up quantum systems.

Looking ahead, the researchers highlight new technologies that will further improve performance, including better photon sources, advanced detectors, and high-speed switches. This work represents a major step toward large-scale, practical quantum computing, bringing us closer to a future where quantum machines tackle problems that are impossible today.


PsiQuantum’s focus is now on wiring these chips together across racks into increasingly large-scale multi-chip systems – work the company is expanding through its partnership with the U.S. Department of Energy at SLAC National Accelerator Laboratory in Menlo Park, California, as well as a new manufacturing and testing facility in Silicon Valley. While chip-to-chip networking remains a hard research problem for many other approaches, photonic quantum computers have the intrinsic advantage that photonic qubits can be networked using standard telecom optical fiber without any conversion between modalities, and PsiQuantum has already demonstrated high-fidelity quantum interconnects over distances of up to 250 m.

In 2024, PsiQuantum announced two landmark partnerships, one with the Australian Federal and Queensland State governments and another with the State of Illinois and the City of Chicago, to build its first utility-scale quantum computers in Brisbane and Chicago. These partnerships treat quantum computing as a sovereign capability and underscore the urgency of the race to build million-qubit systems. Later this year, PsiQuantum will break ground on Quantum Compute Centers at both sites, where the first utility-scale, million-qubit systems will be deployed.

At the threshold of a century poised for unprecedented transformations, we find ourselves at a crossroads unlike any before. The convergence of humanity and technology is no longer a distant possibility; it has become a tangible reality that challenges our most fundamental conceptions of what it means to be human.

This article seeks to explore the implications of this new era, in which Artificial Intelligence (AI) emerges as a central player. Are we truly on the verge of a symbiotic fusion, or is the conflict between the natural and the artificial inevitable?

The prevailing discourse on AI oscillates between two extremes: on one hand, some view this technology as a powerful extension of human capabilities, capable of amplifying our creativity and efficiency. On the other, a more alarmist narrative predicts the decline of human significance in the face of relentless machine advancement. Yet, both perspectives seem overly simplistic when confronted with the intrinsic complexity of this phenomenon. Beyond the dichotomy of utopian optimism and apocalyptic pessimism, it is imperative to critically reflect on AI’s cultural, ethical, and philosophical impact on the social fabric, as well as the redefinition of human identity that this technological revolution demands.

Since the dawn of civilization, humans have sought to transcend their natural limitations through the creation of tools and technologies. From the wheel to the modern computer, every innovation has been seen as a means to overcome the physical and cognitive constraints imposed by biology. However, AI represents something profoundly different: for the first time, we are developing systems that not only execute predefined tasks but also learn, adapt, and, to some extent, think.

This transition should not be underestimated. While previous technologies were primarily instrumental—serving as controlled extensions of human will—AI introduces an element of autonomy that challenges the traditional relationship between subject and object. Machines are no longer merely passive tools; they are becoming active partners in the processes of creation and decision-making. This qualitative leap radically alters the balance of power between humans and machines, raising crucial questions about our position as the dominant species.

But what does it truly mean to “be human” in a world where the boundaries between mind and machine are blurring? Traditionally, humanity has been defined by attributes such as consciousness, emotion, creativity, and moral decision-making. Yet, as AI advances, these uniquely human traits are beginning to be replicated—albeit imperfectly—within algorithms. If a machine can imitate creativity or exhibit convincing emotional behavior, where does our uniqueness lie?

This challenge is not merely technical; it strikes at the core of our collective identity. Throughout history, humanity has constructed cultural and religious narratives that placed us at the center of the cosmos, distinguishing us from animals and the forces of nature. Today, that narrative is being contested by a new technological order that threatens to displace us from our self-imposed pedestal. It is not so much the fear of physical obsolescence that haunts our reflections but rather the anxiety of losing the sense of purpose and meaning derived from our uniqueness.

Despite these concerns, many AI advocates argue that the real opportunity lies in forging a symbiotic partnership between humans and machines. In this vision, technology is not a threat to humanity but an ally that enhances our capabilities. The underlying idea is that AI can take on repetitive or highly complex tasks, freeing humans to engage in activities that truly require creativity, intuition, and—most importantly—emotion.

Concrete examples of this approach can already be seen across various sectors. In medicine, AI-powered diagnostic systems can process vast amounts of clinical data in record time, allowing doctors to focus on more nuanced aspects of patient care. In the creative industry, AI-driven text and image generation software are being used as sources of inspiration, helping artists and writers explore new ideas and perspectives. In both cases, AI acts as a catalyst, amplifying human abilities rather than replacing them.

Furthermore, this collaboration could pave the way for innovative solutions in critical areas such as environmental sustainability, education, and social inclusion. For example, powerful neural networks can analyze global climate patterns, assisting scientists in predicting and mitigating natural disasters. Personalized algorithms can tailor educational content to the specific needs of each student, fostering more effective and inclusive learning. These applications suggest that AI, far from being a destructive force, can serve as a powerful instrument to address some of the greatest challenges of our time.

However, for this vision to become reality, a strategic approach is required—one that goes beyond mere technological implementation. It is crucial to ensure that AI is developed and deployed ethically, respecting fundamental human rights and promoting collective well-being. This involves regulating harmful practices, such as the misuse of personal data or the indiscriminate automation of jobs, as well as investing in training programs that prepare people for the new demands of the labor market.

While the prospect of symbiotic fusion is hopeful, we cannot ignore the inherent risks of AI’s rapid evolution. As these technologies become more sophisticated, so too does the potential for misuse and unforeseen consequences. One of the greatest dangers lies in the concentration of power in the hands of a few entities, whether they be governments, multinational corporations, or criminal organizations.

Recent history has already provided concerning examples of this phenomenon. The manipulation of public opinion through algorithm-driven social media, mass surveillance enabled by facial recognition systems, and the use of AI-controlled military drones illustrate how this technology can be wielded in ways that undermine societal interests.

Another critical risk in AI development is the so-called “alignment problem.” Even if a machine is programmed with good intentions, there is always the possibility that it misinterprets its instructions or prioritizes objectives that conflict with human values. This issue becomes particularly relevant in the context of autonomous systems that make decisions without direct human intervention. Imagine, for instance, a self-driving car forced to choose between saving its passenger or a pedestrian in an unavoidable collision. How should such decisions be made, and who bears responsibility for the outcome?

These uncertainties raise legitimate concerns about humanity’s ability to maintain control over increasingly advanced technologies. The very notion of scientific progress is called into question when we realize that accumulated knowledge can be used both for humanity’s benefit and its detriment. The nuclear arms race during the Cold War serves as a sobering reminder of what can happen when science escapes moral oversight.

Whether the future holds symbiotic fusion or inevitable conflict, one thing is clear: our understanding of human identity must adapt to the new realities imposed by AI. This adjustment will not be easy, as it requires confronting profound questions about free will, the nature of consciousness, and the essence of individuality.

One of the most pressing challenges is reconciling our increasing technological dependence with the preservation of human dignity. While AI can significantly enhance quality of life, there is a risk of reducing humans to mere consumers of automated services. Without a conscious effort to safeguard the emotional and spiritual dimensions of human experience, we may end up creating a society where efficiency outweighs empathy, and interpersonal interactions are replaced by cold, impersonal digital interfaces.

On the other hand, this very transformation offers a unique opportunity to rediscover and redefine what it means to be human. By delegating mechanical and routine tasks to machines, we can focus on activities that truly enrich our existence—art, philosophy, emotional relationships, and civic engagement. AI can serve as a mirror, compelling us to reflect on our values and aspirations, encouraging us to cultivate what is genuinely unique about the human condition.

Ultimately, the fate of our relationship with AI will depend on the choices we make today. We can choose to view it as an existential threat, resisting the inevitable changes it brings, or we can embrace the challenge of reinventing our collective identity in a post-humanist era. The latter, though more daring, offers the possibility of building a future where technology and humanity coexist in harmony, complementing each other.

To achieve this, we must adopt a holistic approach that integrates scientific, ethical, philosophical, and sociological perspectives. It also requires an open, inclusive dialogue involving all sectors of society—from researchers and entrepreneurs to policymakers and ordinary citizens. After all, AI is not merely a technical tool; it is an expression of our collective imagination, a reflection of our ambitions and fears.

As we gaze toward the horizon, we see a world full of uncertainties but also immense possibilities. The future is not predetermined; it will be shaped by the decisions we make today. What kind of social contract do we wish to establish with AI? Will it be one of domination or cooperation? The answer to this question will determine not only the trajectory of technology but the very essence of our existence as a species.

Now is the time to embrace our historical responsibility and embark on this journey with courage, wisdom, and an unwavering commitment to the values that make human life worth living.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/a-sinfonia-do-amanha-tit…exao-seria ]

Since their invention, traditional computers have almost always relied on semiconductor chips that use binary “bits” of information represented as strings of 1’s and 0’s. While these chips have become increasingly powerful and simultaneously smaller, there is a physical limit to the amount of information that can be stored on this hardware. Quantum computers, by comparison, utilize “qubits” (quantum bits) to exploit the strange properties exhibited by subatomic particles, often at extremely cold temperatures.

Two qubits can hold four values at any given time, and each additional qubit doubles the number of states a machine can represent, translating to an exponential increase in calculating capability. This allows a quantum computer to process information at speeds and scales that make today’s supercomputers seem almost antiquated. Last December, for example, Google unveiled an experimental quantum computer system that researchers say takes just five minutes to finish a calculation that would take most supercomputers over 10 septillion years to complete—longer than the age of the universe as we understand it.
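To make that exponential scaling concrete, here is the standard textbook picture in generic notation (a brief sketch, not drawn from Google’s paper):

```latex
% A single qubit is a superposition of two basis states:
%   |psi> = a|0> + b|1>,  with |a|^2 + |b|^2 = 1.
% A register of n qubits occupies a 2^n-dimensional state space:
\[
  |\Psi\rangle \;=\; \sum_{x \in \{0,1\}^n} c_x\, |x\rangle ,
  \qquad \sum_{x} |c_x|^2 = 1 .
\]
% Two qubits therefore carry 2^2 = 4 amplitudes, three carry 8, and so on;
% tracking just 300 qubits classically would require 2^300 (roughly 10^90)
% amplitudes, far more numbers than there are atoms in the observable universe.
```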

But Google’s Quantum Processing Unit (QPU) is based on a different technology from Microsoft’s Majorana 1 design, detailed in a paper published on February 19 in the journal Nature. The result of over 17 years of design and research, Majorana 1 relies on what the company calls “topological qubits,” created by inducing topological superconductivity, a state of matter previously theorized but never directly observed.

By combining digital and analog quantum simulation into a new hybrid approach, scientists have already started to make fresh scientific discoveries using quantum computers.

In today’s AI news, backed by $200 million in funding at a $2 billion valuation, Scott Wu and his team at Cognition are building an AI tool that could upend the entire software development industry. Devin is an autonomous AI agent that, in theory, writes the code itself—no people involved—and can complete entire projects typically assigned to developers.

In other advancements, OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.

Then, xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability. Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months, powered by 100,000 Nvidia H100 GPUs.

And, large language models can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.
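To illustrate what training on “a small batch of well-curated examples” can look like in practice, here is a minimal, hypothetical fine-tuning sketch using the Hugging Face transformers and datasets libraries; the model name, data file, field names, and hyperparameters are placeholder assumptions, not details from the study.

```python
# Minimal sketch (not the study's code): fine-tune a small causal LM on a tiny,
# hand-curated reasoning dataset. Model name, data file, schema, and
# hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A few dozen carefully curated examples in JSONL, each with hypothetical
# "prompt" and "solution" fields.
dataset = load_dataset("json", data_files="curated_reasoning.jsonl")["train"]

def tokenize(example):
    # Concatenate prompt and worked solution into one training sequence.
    text = example["prompt"] + "\n" + example["solution"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="small-data-reasoner",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=1e-5,
        logging_steps=5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The design choice mirrors the study’s claim: when the base model is already capable, careful curation of a handful of examples can substitute for sheer dataset size.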

OpenAI’s new o1 model focuses on slower, more deliberate reasoning, much like how humans think, in order to solve complex problems. Then, join Turing Award laureate Yann LeCun, Chief AI Scientist at Meta and Professor at NYU, as he discusses with Link Ventures’ John Werner the future of artificial intelligence and how open-source development is driving innovation. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.

Time, by its very nature, is a paradox. We live anchored in the present, yet we are constantly traveling between the past and the future—through memories and aspirations alike. Technological advancements have accelerated this relationship with time, turning what was once impossible into a tangible reality. At the heart of this transformation lies Artificial Intelligence (AI), which, far from being just a tool, is becoming an extension of the human experience, redefining how we interact with the world.

In the past, automatic doors were the stuff of science fiction. Paper maps were essential for travel. Today, these have been replaced by smart sensors and navigation apps. The smartphone, a small device that fits in the palm of our hand, has become an extension of our minds, connecting us to the world instantly. Even its name reflects its evolution—from a mere mobile phone to a “smart” device, now infused with traces of intelligence, albeit artificial.

And it is in this landscape that AI takes center stage. The debate over its risks and benefits has been intense. Many fear a stark divide between humans and machines, as if they are destined for an inevitable clash. But what if, instead of adversaries, we saw technology as an ally? The fusion of human and machine is already underway, quietly shaping our daily lives.

When applied effectively, AI becomes a discreet assistant, capable of anticipating our needs and enhancing productivity. Studies suggest that by 2035, AI could double annual economic growth, transforming not only business but society as a whole. Naturally, some jobs will disappear, but new ones will emerge. History has shown that evolution is inevitable and that the future belongs to those who adapt.

But what about AI’s role in our personal lives? From music recommendations tailored to our mood to virtual assistants that complete our sentences before we do, AI is already recognizing behavioral patterns in remarkable ways. Through Machine Learning, computer systems do more than just store data—they learn from it, dynamically adjusting and improving. Deep Learning takes this concept even further, simulating human cognitive processes to categorize information and make decisions based on probabilities.

But what if the relationship between humans and machines could transcend time itself? What if we could leave behind an interactive digital legacy that lives on forever? This is where a revolutionary concept emerges: digital immortality.

ETER9 is a project that embodies this vision, exploring AI’s potential to preserve interactive memories, experiences, and conversations beyond physical life. Imagine a future where your great-grandchildren could “speak” with you, engaging with a digital presence that reflects your essence. More than just photos or videos, this would be a virtual entity that learns, adapts, and keeps individuality alive.

The truth is, whether we realize it or not, we are all being shaped by algorithms that influence our online behavior. Platforms like Facebook are designed to keep us engaged for as long as possible. But is this the right path? A balance must be found—a point where technology serves humanity rather than the other way around.

We don’t change the world through empty criticism. We change it through innovation and the courage to challenge the status quo. Surrounding ourselves with intelligent people is crucial; if we are the smartest in the room, perhaps it’s time to find a new room.

The future has always fascinated humanity. The unknown evokes fear, but it also drives progress. Many of history’s greatest inventions were once deemed impossible. But “impossible” is only a barrier until it is overcome.

Sometimes, it feels like we are living in the future before the world is ready. But maturity is required to absorb change. Knowing when to pause and when to move forward is essential.

And so, in a present that blends with the future, we arrive at the ultimate question:

What does it mean to be eternal?

Perhaps the answer lies in our ability to dream, create, and leave a legacy that transcends time.

After all, isn’t digital eternity our true journey through time?

__
Copyright © 2025, Henrique Jorge

Physicists have performed a groundbreaking simulation they say sheds new light on an elusive phenomenon that could determine the ultimate fate of the Universe.

Pioneering research in quantum field theory around 50 years ago proposed that the universe may be trapped in a false vacuum – meaning it appears stable but could in fact be on the verge of transitioning to an even more stable, true vacuum state. While this process could trigger a catastrophic change in the Universe’s structure, experts agree that predicting the timeline is difficult; it is likely to unfold over an astronomically long period, potentially spanning millions of years.
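For context, the textbook semiclassical picture of this process (due to Coleman, and not specific to the new simulation) is that decay proceeds through rare nucleation of bubbles of true vacuum, at a rate that is exponentially suppressed:

```latex
% Decay rate of a false vacuum per unit volume (Coleman, 1977):
\[
  \frac{\Gamma}{V} \;\simeq\; A\, e^{-S_E/\hbar},
\]
% where S_E is the Euclidean action of the "bounce" (critical-bubble) solution
% and A is a prefactor from fluctuations around it. Because S_E/\hbar can be
% enormous, the expected lifetime of a metastable vacuum can be astronomically
% long, which is why the timescale is so hard to pin down.
```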

In an international collaboration between three research institutions, the team report gaining valuable insights into false vacuum decay – a process linked to the origins of the cosmos and the behaviour of particles at the smallest scales. The collaboration was led by Professor Zlatko Papic, from the University of Leeds, and Dr Jaka Vodeb, from the Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich, Germany.