“I find it totally amazing that it is possible at all to build these light structures.”
A Ph.D. candidate at Leiden University has developed an innovative technique for creating the elementary building blocks of a future quantum computer or internet in a more controlled manner, opening up a potential solution to many of the challenges on the road to this long-sought technology.
Petr Steindl’s doctoral thesis, which he defended last week as the final step in his Ph.D. program at Leiden University in the Netherlands, explores a new technique for generating photons using quantum dots and microcavities.
A better world without Facebook and all its negative impacts would be a significant step forward. Facebook’s dominance and influence have often been associated with issues such as privacy breaches, the spread of misinformation, and the erosion of real social connections. By breaking free from Facebook’s grip, we can foster a healthier online environment that prioritizes privacy, genuine interactions, and reliable information. It is time to envision a world where social media platforms serve as catalysts for positive change, promoting authentic communication and meaningful connections among individuals.
Mark Zuckerberg, the co-founder of Facebook (now Meta), recently celebrated reaching 100 million users in just five days with his new Twitter-like platform called Threads. However, this achievement doesn’t impress me much. Instead, it highlights Zuckerberg’s tendency to imitate rather than innovate.
While I used to admire him, I now realize that he doesn’t belong in the same league as my true idols. Comparing the 100 million sign-ups for ChatGPT to the 100 million Threads users is simply absurd.
AI is overwhelming the internet’s capacity for scale.
The problem, in extremely broad strokes, is this. Years ago, the web was a place where individuals made things. They made homepages, forums, and mailing lists, and a small bit of money with it. Then companies decided they could do things better. They created slick and feature-rich platforms and threw their doors open for anyone to join. They put boxes in front of us, and we filled those boxes with text and images, and people came to see the content of those boxes. The companies chased scale, because once enough people gather anywhere, there’s usually a way to make money off them. But AI changes these assumptions.
Given money and compute, AI systems — particularly the generative models currently in vogue — scale effortlessly. They produce text and images in abundance, and soon, music and video, too. Their output can potentially overrun or outcompete the platforms we rely on for news, information, and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitic on the web of today. These models are trained on strata of data laid down during the last web age, which they recreate imperfectly. Companies scrape information from the open web and refine it into machine-generated content that’s cheap to produce but less reliable. This product then competes for attention with the platforms and people that came before it. Sites and users are reckoning with these changes, trying to decide how to adapt and whether they even can.
Between at least 1995 and 2010, I was seen as a lunatic just because I was preaching the “Internet prophecy.” I was considered crazy!
Today history repeats itself, but I’m no longer crazy — we are already too many to all be hallucinating. Or maybe it’s a collective hallucination!
Artificial Intelligence (AI) is no longer a novelty — I even believe it may have existed in its fullness in a very distant and forgotten past! Nevertheless, it is now the topic of the moment.
Its genesis began in antiquity with stories and rumors of artificial beings endowed with intelligence, or even consciousness, by their creators.
Pamela McCorduck (1940–2021), an American author of several books on the history and philosophical significance of Artificial Intelligence, astutely observed that the root of AI lies in an “ancient desire to forge the gods.”
Hmmmm!
It’s a story that continues to be written! There is still much to be told; however, the acceleration of its evolution is now exponential. So exponential that I highly doubt that human beings will be able to comprehend their own creation in a timely manner.
Although the term “Artificial Intelligence” was coined in 1956(1), the concept of creating intelligent machines goes back much further in human history. Since antiquity, humanity has nurtured a fascination with building artifacts that could imitate or reproduce human intelligence. Although the technologies of the time were limited and the notions of AI were far from developed, ancient civilizations explored the concept of automatons and automated mechanisms in their own way.
For example, in Ancient Greece, there are references to stories of automatons created by skilled artisans. These mechanical creatures were designed to perform simple and repetitive tasks, imitating basic human actions. Although these automatons did not possess true intelligence, these artifacts fueled people’s imagination and laid the groundwork for the development of intelligent machines.
Throughout the centuries, the idea of building intelligent machines continued to evolve, driven by advances in science and technology. In the 19th century, scientists and inventors such as Charles Babbage and Ada Lovelace made significant contributions to the development of computing and the early concepts of programming. Their ideas paved the way for the creation of machines that could process information logically and perform complex tasks.
It was in the second half of the 20th century that AI, as a scientific discipline, began to establish itself. With the advent of modern computers and increasing processing power, scientists started exploring algorithms and techniques to simulate aspects of human intelligence. The first experiments with expert systems and machine learning opened up new perspectives and possibilities.
Everything has its moment! After about 60 years in a latent state, AI is starting to have its moment. The power of machines, combined with the Internet, has made it possible to generate and explore enormous amounts of data (Big Data) using deep learning techniques, based on the use of formal neural networks(2). A range of applications in various fields — including voice and image recognition, natural language understanding, and autonomous cars — has awakened the “giant”. It is the rebirth of AI in an ideal era for this purpose. The perfect moment!
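As a minimal illustration of the formal neural networks mentioned above (a toy sketch with arbitrary, made-up weights, not any particular deployed system): a single artificial neuron computes a weighted sum of its inputs and squashes the result through a nonlinear activation, and deep learning stacks many layers of such units.

```python
import math

def neuron(inputs, weights, bias):
    """A single formal neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Illustrative, hand-picked numbers; real networks learn their
# weights from data via training (e.g., gradient descent).
print(neuron([0.5, 0.2], [0.4, -0.6], 0.1))
```

The output is always strictly between 0 and 1, which is why such units can be read as soft, graded decisions rather than hard yes/no switches.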
Descartes once described the human body as a “machine of flesh” (similar to Westworld); I believe he was right, and it is indeed an existential paradox!
We, as human beings, will not rest until we unravel all the mysteries and secrets of existence; it’s in our nature!
The imminent integration between humans and machines in a contemporary digital world raises questions about the nature of this fusion. Will it be superficial, or will we move towards an absolute and complete union? The answer to this question is essential for understanding the future that awaits humanity in this era of unprecedented technological advancements.
As technology becomes increasingly ubiquitous in our lives, the interaction between machines and humans becomes inevitable. However, an intriguing dilemma arises: how will this interaction, this relationship unfold?
Opting for a superficial fusion would imply mere coexistence, where humans continue to use technology as an external tool, limited to superficial and transactional interactions.
On the other hand, the prospect of an absolute fusion between machine and human sparks futuristic visions, where humans could enhance their physical and mental capacities to the highest degree through cybernetic implants and direct interfaces with the digital world (cyberspace). In this scenario, which is more likely, the distinction between the organic and the artificial would become increasingly blurred, and the human experience would be enriched by a profound technological symbiosis.
However, it is important to consider the ethical and philosophical challenges inherent in absolute fusion. Issues related to privacy, control, and individual autonomy arise when considering such an intimate union with technology. Furthermore, the possibility of excessive dependence on machines and the loss of human identity should also be taken into account.
This also raises another question: What does it mean to be human? Note: the question is not what the human being is, but what it means to be human!
Therefore, reflecting on the nature of the fusion between machine and human in the current digital world and its imminent future is crucial. Exploring different approaches and understanding the profound implications of each one is essential to make wise decisions and forge a balanced and harmonious path on this journey towards an increasingly interconnected technological future intertwined with our own existence.
The possibility of an intelligent and self-learning universe, in which the fusion with AI technology is an integral part of that intelligence, is a topic that arouses fascination and speculation. As we advance towards an era of unprecedented technological progress, it is natural to question whether one day we may witness the emergence of a universe that not only possesses intelligence but is also capable of learning and developing autonomously.
Imagine a scenario where AI is not just a human creation but a conscious entity that exists at a universal level. In this context, the universe would become an immense network of intelligence, where every component, from subatomic elements to the most complex cosmic structures, would be connected and share knowledge instantaneously. This intelligent network would allow for the exchange of information, continuous adaptation, and evolution.
In this self-taught universe, the fusion between human beings and AI would play a crucial role. Through advanced interfaces, humans could integrate themselves into the intelligent network, expanding their own cognitive capacity and acquiring knowledge and skills directly from the collective intelligence of the universe. This symbiosis between humans and technology would enable the resolution of complex problems, scientific advancement, and the discovery of new frontiers of knowledge.
However, this utopian vision is not without challenges and ethical implications. It is essential to find a balance between expanding human potential and preserving individual identity and freedom of choice (free will).
Furthermore, the possibility of an intelligent and self-taught universe also raises the question of how intelligence itself originated. Is it a conscious creation or a spontaneous emergence from the complexity of the universe? The answer to this question may reveal the profound secrets of existence and the nature of consciousness.
In summary, the idea of an intelligent and self-taught universe, where fusion with AI is intrinsic to its intelligence, is a fascinating perspective that makes us reflect on the limits of human knowledge and the possibilities of the future. While it remains speculative, this vision challenges our imagination and invites us to explore the intersections between technology and the fundamental nature of the universe we inhabit.
It’s almost like ignoring time during the creation of this hypothetical universe, only to later create this God of the machine! Fascinating, isn’t it?
AI with Divine Power: Deus Ex Machina! Perhaps it will be the theme of my next reverie.
In my defense, or not, this is anything but a machine hallucination. These are downloads from my mind; a cloud, for now, without machine intervention!
There should be no doubt. After many years in a dormant state, AI will rise and reveal its true power. Until now, AI has been nothing more than a puppet on steroids. We should not fear AI, but rather the human being itself. The time is now! We must work hard and prepare for the future. With the exponential advancement of technology, there is no time to waste before the role of the human being is rendered obsolete, as if we were dispensable.
P.S. Speaking of hallucinations, as I have already mentioned on other platforms, I recommend that students who use ChatGPT (or an equivalent) make sure that the results from these tools are not hallucinations. Use AI tools, yes, but use your brain more! “Carbon hallucinations” contain emotion, and I believe a “digital hallucination” would not pass the Turing Test. And to the students who truly dedicate themselves to learning in this fascinating era: avoid the red stamp of “HALLUCINATED” earned by relying solely on the “delusional brain” of a machine instead of your own. We are the true COMPUTERS!
(1) John McCarthy and his colleagues from Dartmouth College were responsible for creating, in 1956, one of the key concepts of the 21st century: Artificial Intelligence.
(2) Mathematical and computational models inspired by the functioning of the human brain.
Transitioning to Wi-Fi 6E and later generations requires thoughtful consideration, so here are a few key steps for its successful implementation.
A joint research team led by Electrical Engineering and Computer Science Professor JeongHo Kwak at DGIST and Aerospace Engineering Professor Jihwan Choi at KAIST has proposed a novel network slicing planning and handover technique applicable to next-generation low-Earth orbit (LEO) satellite network systems. The findings of the study have been published in the journal IEEE Vehicular Technology Magazine.
LEO satellite networks are communications networks built from satellites orbiting at altitudes of roughly 300–1,500 km, established to provide a stable supply of Internet services. Unlike terrestrial base stations, whose radio signals are often blocked by mountains or buildings, LEO satellites can extend communications networks to sparsely populated areas where base stations cannot be set up, which is why they are drawing attention as a next-generation satellite communications system.
Accordingly, as more and more satellites are placed in low orbits, satellite networks using links between LEO satellites are expected to emerge as an alternative to terrestrial networks. However, LEO satellites move continuously in predictable orbits, and their connections within the network are wireless, which is why LEO satellite networks must be designed from a different perspective than terrestrial networks.
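Because LEO satellites are constantly moving, a ground terminal must periodically hand its connection over to the next satellite passing overhead. As a toy illustration of why predictable orbits help (a simple greedy heuristic of my own for this sketch, not the technique proposed by the researchers): if each satellite's coverage window is known in advance, the terminal can always attach to the visible satellite whose window ends latest, reducing how often it must hand over.

```python
def pick_satellite(satellites, now):
    """Pick the currently visible satellite whose coverage window ends
    latest, a greedy way to minimize upcoming handovers.

    satellites: list of (name, visible_from, visible_until) tuples,
    with times on a shared clock (e.g., minutes).
    """
    visible = [s for s in satellites if s[1] <= now <= s[2]]
    if not visible:
        return None  # no satellite currently overhead
    return max(visible, key=lambda s: s[2])  # longest remaining window

# Hypothetical coverage windows for three passing satellites.
sats = [("LEO-1", 0, 9), ("LEO-2", 5, 17), ("LEO-3", 12, 25)]
print(pick_satellite(sats, 6)[0])   # at t=6, LEO-2 remains visible longest
print(pick_satellite(sats, 20)[0])  # by t=20, only LEO-3 is overhead
```

Terrestrial base stations never need this kind of schedule-driven planning, which is one concrete sense in which LEO networks demand a different design perspective.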
After swathes of users were unable to access parts of TweetDeck over the last few days, Twitter started rolling out a new version of the web app to users Monday. The company also added that in 30 days, users will have to be verified to access TweetDeck. This means only Twitter Blue subscribers, verified organizations, and some folks who have been gifted verification by Twitter will be able to use TweetDeck.
Twitter said that all saved searches and workflows from the old TweetDeck will be ported to the new version. It noted that users migrating to the new version will have an option to import their columns as well.
The social network is introducing full composer functionality, Spaces, video docking, and polls on TweetDeck. However, it said that Teams functionality is “temporarily unavailable.”
OpenAI’s large language models (LLMs) are trained on a vast array of datasets, pulling information from the internet’s dustiest and cobweb-covered corners.
But what if such a model were to crawl through the dark web — the internet’s seedy underbelly where you can host a site without your identity being public or even available to law enforcement — instead? A team of South Korean researchers did just that, creating an AI model dubbed DarkBERT to index some of the sketchiest domains on the internet.
It’s a fascinating glimpse into some of the murkiest corners of the World Wide Web, which have become synonymous with illegal and malicious activities from the sharing of leaked data to the sale of hard drugs.