
Jun 6, 2023

Intelligence Explosion — Part 2/3

Posted by in categories: big data, computing, disruptive technology, evolution, futurism, innovation, internet, machine learning, robotics/AI, singularity, supercomputing

Hallucination!

Can “hallucinations” generate an alternate world, prophesying falsehood?

As I write this article, NVIDIA (*) is surpassing Wall Street's expectations. The company, headquartered in Santa Clara, California, has just joined the exclusive trillion-dollar club, until now limited to five companies [Apple (2.7T), Microsoft (2.4T), Saudi Aramco (2T), Alphabet/Google (1.5T), and Amazon (1.2T)], as its shares rose nearly 25% in a single day! A clear sign of how the widespread use of Artificial Intelligence (AI) can dramatically reshape the technology sector.

Intel has announced an ambitious plan to develop scientific generative AI models with up to one trillion parameters. These models will be trained on various types of data, including general texts, code, and scientific information. In comparison, OpenAI's GPT-3 has 175 billion parameters (the size of GPT-4 has not yet been disclosed by OpenAI). The semiconductor company's main focus is to apply these AIs to areas such as biology, medicine, climate, cosmology, chemistry, and the development of new materials. To achieve this goal, Intel plans to launch a new supercomputer called Aurora, with processing capacity exceeding two EXAFLOPS (**), later this year.

The world's data centers will need to be transformed to cope with the revolution that AI technologies will bring.

The AI train has already started and, before reaching cruising speed, it will make several stops to allow all species of businesses, like Noah’s Ark, to come aboard. At this stage, it is crucial to have all “species” in the race in order to ensure a more democratic and diverse… Intelligence.

Once at cruising speed, we reach the point of no return. Wise choices and decisions are needed before that! Ultimately, the future of humanity is at stake.

OpenAI proposes the creation of an international body dedicated to the regulation of Artificial Intelligences. The organization believes that within the next ten years, AI systems may exceed expert skill levels in most domains. Faced with this scenario, there is a need to study the governance of a Superintelligence, which OpenAI considers the most powerful technology ever developed by humanity.

Therefore, the proposal is to establish an international body to address the challenges and ethical issues involved in the advancement of AIs, in order to ensure their responsible use and benefits for society as a whole.

It may seem like a hallucination, but it’s very real.

But today, in this 2nd part of the "Intelligence Explosion," I come to talk about another type of hallucination.

The Machine’s Hallucination!

The explosion of AI is one of the most significant phenomena of the digital era. I call it an explosion because of the speed at which it (finally) emerged after being in a state of suspension for so long. The global impact is impressive, indicating widespread acceptance, despite the usual resistance stemming from the fear of the unknown, inherent in human nature.

It is interesting to note the successive "sympathetic explosions" resulting from the shockwaves of the main explosions! Let us hope that this increase in kinetic energy does not cause the decomposition of the explosive (AI) itself.

Sympathetic explosions occur when a nearby explosive charge is struck by the shockwave of another detonation; the sudden increase in kinetic energy triggers the decomposition (detonation) of that second charge.

So far, it has been a controlled process of releasing large amounts of information, but when the shockwave intensifies, as in the Munroe-Neumann (shaped-charge) effect, things will heat up. A lot. Really!

In recent years, AI has become an increasingly important part of modern life, from product recommendation systems to personal assistants (what I have been trying to bring to light for over a decade with the ETER9 Project, which I called “digital counterparts”). Bill Gates himself recently stated that digital agents (counterparts) will radically change the Internet, thus transforming existing business models or established technologies.

The former CEO of Google, Eric Schmidt, made a similar point at the Collision Conference in 2022: "Humans will soon have a Second Self (counterpart) created by Artificial Intelligence."

So, I believe I’m not alone anymore!

The success of AI can be largely attributed to learning models such as Deep Learning, which can process large amounts of data and extract complex patterns. However, with the increasing complexity of these models, new challenges such as “hallucination” also arise.

Deep Learning models are a subset of AI algorithms that use neural networks to learn from large datasets. These models process information in successive layers, allowing the extraction of complex patterns and the creation of new knowledge. For example, a neural network trained on a large set of labeled cat images can learn to recognize cats in new images with high accuracy. However, these models also have limitations, such as a tendency toward hallucination.
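
To make the idea concrete, here is a minimal sketch of such a layered model. It assumes PyTorch, and the "cat / not-cat" data is just random tensors standing in for real labeled images; a real pipeline would load an actual dataset.

```python
# Minimal sketch of a layered image classifier (assumes PyTorch is installed).
# Random tensors stand in for labeled cat / not-cat images.
import torch
import torch.nn as nn

class TinyCatNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract visual patterns layer by layer.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: cat / not-cat

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCatNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: 8 RGB images of 64x64 pixels with random labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(10):  # a few gradient steps on the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```

With real images and labels in place of the random tensors, this same loop is, in essence, how the "cat recognizer" in the paragraph above would be trained.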

Hallucination is a phenomenon in which an AI model confidently generates information that does not exist, often because it was trained on a dataset with incomplete or inaccurate information. When exposed to new data, the model fills the gaps with incorrect assumptions. For example, if an AI model is trained on a set of cat images that never includes a black cat, it may "invent" one based on that incomplete information. This can lead to inaccurate or even dangerous results, especially in critical applications such as medical or security systems.

To mitigate hallucination, AI experts are exploring new learning techniques, such as adversarial learning, in which two AI models are trained simultaneously against each other (one generating outputs, the other judging them), creating a balance of power that exposes fabricated results rather than reinforcing them. Additionally, experts are working on interpretability methods, which allow users to understand how an AI model arrived at a specific decision, making it easier to identify and correct incorrect information.
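
The "two models in a balance of power" idea reads like the generator/discriminator pairing used in adversarial training (as in GANs); whether this alone curbs hallucination in practice is still an open question. Below is a minimal sketch of the loop, assuming PyTorch and using a simple one-dimensional Gaussian as the "real" data instead of text or images.

```python
# Minimal adversarial-training sketch (assumes PyTorch): a generator learns to produce
# samples resembling the real data while a discriminator learns to tell them apart.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: a Gaussian centered at 3.0, a stand-in for genuine content.
    return 3.0 + torch.randn(n, 1)

for step in range(500):
    # 1) Train the discriminator to separate real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(64, 4)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 4))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"generated sample mean: {G(torch.randn(1000, 4)).mean().item():.2f} (target ~3.0)")
```

The "balance of power" is visible in the two alternating steps: each model's progress forces the other to improve, so outputs that fool no one tend to be trained away.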

Despite the challenges of Machine Hallucination, AI continues to evolve and become increasingly important for modern society. Deep Learning models are being used in a wide range of applications, from personal assistants to autonomous cars. As AI continues to evolve, it is important for experts to continue exploring new techniques to improve the accuracy and reliability of models, so they can be safely used in critical applications.

I am convinced that hallucination will be temporary, a mere side effect of learning. Nonetheless, it is worth mentioning that in this machine learning process, hallucination represents the creation of data that never existed before, producing a paradox between what is real (true) and what is false (the hallucination, which is nevertheless real to the machine and to the subsequent user).

Let us never forget: as long as humans dominate AIs, creating and training them, there is no greater danger. But make no mistake, we are all contributing to the creation of Pandora’s Box. Will we resist its opening? Unlike Greek mythology, I want to believe that when opened, the box will not contain only “winds and storms,” but the essence of humanity, albeit ironically coming from the machine!

P.S. Regarding machine hallucinations, a small recommendation for students using ChatGPT (or equivalent): use it, but make sure the results are not hallucinations. What do I mean by this? Use AI tools, yes, but use your brain more!

I believe some teachers will create a new stamp, “HALLUCINATED” in red, to grade tests of students who rely solely on the “hallucinated brain” of a machine instead of their own brains.

(*) I suggest reading Part 1/3 of the "Intelligence Explosion" for a better understanding of my reference to NVIDIA.

(**) EXAFLOPS is a unit of measurement used to express the processing capacity of a computer, in this case a supercomputer, in terms of floating-point operations per second. The prefix "EXA" indicates a value equal to 10 to the 18th power, and "FLOPS" is an abbreviation for floating-point operations per second. One EXAFLOPS therefore corresponds to 10^18 floating-point operations performed every second.

Therefore, when we mention "two EXAFLOPS," we are talking about a supercomputer capable of performing approximately two quintillion (2 × 10^18) floating-point operations per second. This measure is used to evaluate the speed and performance of the supercomputer in computationally intensive calculations, such as complex simulations, climate modeling, large-scale data analysis, and other demanding workloads. Having a supercomputer with this capacity represents a significant advancement in processing power and opens possibilities for solving increasingly challenging scientific and technological problems.
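
For a feel of the scale, here is a short back-of-the-envelope calculation; the laptop figure below is an assumption for comparison, not a measurement.

```python
# Back-of-the-envelope arithmetic for the EXAFLOPS figures above.
EXA = 10**18
supercomputer_flops = 2 * EXA      # "two EXAFLOPS" = 2 x 10^18 operations per second
laptop_flops = 100 * 10**9         # assumed ~100 GFLOPS for an ordinary laptop

speedup = supercomputer_flops / laptop_flops  # also: laptop seconds per supercomputer second
print(f"Supercomputer: {supercomputer_flops:.1e} floating-point operations per second")
print(f"That is {speedup:,.0f} times the assumed laptop,")
print(f"so one second of its work is roughly {speedup / 86400:.0f} days of laptop time.")
```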

You can read this same article, in Portuguese, on Link To Leaders (https://linktoleaders.com/).
