
Scientists Claim To Have Discovered What Existed BEFORE The Beginning Of The Universe!

Non-scientific versions of the answer have invoked many gods and have been the basis of all religions and most philosophy since the beginning of recorded time.

Now a team of mathematicians from Canada and Egypt has used cutting-edge scientific theory and a mind-boggling set of equations to work out what preceded the universe in which we live.

In (very) simple terms, they applied the theories of the very small – the world of quantum mechanics – to the whole universe, as described by the general theory of relativity, and discovered that the universe basically goes through four different phases.

The Big Bang Never Happened — And There Might Be Traces Of An Earlier Universe, Scientist Claims

A physicist from the University of Campinas in Brazil isn’t a big fan of the idea that time started with a so-called Big Bang. Instead, Juliano César Silva Neves imagines a collapse followed by a sudden expansion, one that could even still carry the scars of a previous timeline.


The idea itself isn’t new, but Neves has used a fifty-year-old mathematical trick describing black holes to show how our Universe needn’t have had such a compact start to existence.

At first glance, our Universe doesn’t seem to have a lot in common with black holes. One is expanding space full of clumpy bits; the other is mass pulling at space so hard that even light has no hope of escape. But at the heart of both lies a concept known as a singularity – a volume of energy so infinitely dense, we can’t even begin to explain what’s going on inside it.

Discovering novel algorithms with AlphaTensor

Algorithms have helped mathematicians perform fundamental operations for thousands of years. The ancient Egyptians created an algorithm to multiply two numbers without requiring a multiplication table, and Greek mathematician Euclid described an algorithm to compute the greatest common divisor, which is still in use today.
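Euclid’s procedure is short enough to state in full; a minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder vanishes; the last nonzero value is the gcd.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```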

During the Islamic Golden Age, Persian mathematician Muhammad ibn Musa al-Khwarizmi designed new algorithms to solve linear and quadratic equations. In fact, al-Khwarizmi’s name, translated into Latin as Algoritmi, led to the term algorithm. But despite our familiarity with algorithms today – used throughout society from classroom algebra to cutting-edge scientific research – the process of discovering new algorithms is incredibly difficult, and an example of the amazing reasoning abilities of the human mind.

In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.
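The 50-year-old question traces back to Volker Strassen’s 1969 observation that two 2×2 matrices can be multiplied with seven scalar multiplications rather than the obvious eight. A minimal Python sketch of Strassen’s construction (the classical baseline AlphaTensor builds on, not an AlphaTensor-discovered algorithm):

```python
def strassen_2x2(A, B):
    # Multiply two 2x2 matrices with 7 scalar multiplications
    # (Strassen, 1969) instead of the 8 used by the schoolbook method.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```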

How Quantum Physics Leads to Decrypting Common Algorithms

The rise of quantum computing and its implications for current encryption standards are well known. But why exactly should quantum computers be especially adept at breaking encryption? The answer is a nifty bit of mathematical juggling called Shor’s algorithm. The question that remains is: what does this algorithm do that makes quantum computers so much better at cracking encryption? In this video, YouTuber minutephysics explains it in his traditional whiteboard cartoon style.

“Quantum computation has the potential to make it super, super easy to access encrypted data — like having a lightsaber you can use to cut through any lock or barrier, no matter how strong,” minutephysics says. “Shor’s algorithm is that lightsaber.”

According to the video, Shor’s algorithm works off the fact that for any number g that shares no factors with N, some power of g is one more than a multiple of N: g^p = m·N + 1. Rearranging gives g^p − 1 = m·N, and the left-hand side factors as (g^(p/2) − 1)(g^(p/2) + 1), two numbers that are very likely to share factors with N; taking greatest common divisors then recovers N’s factors. That would unlock the encryption (specifically RSA here, but the approach threatens some other schemes too), because RSA’s security rests on the difficulty of finding those factors.
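As a toy illustration of that arithmetic, here is the classical skeleton of the idea in Python, with the period found by brute-force search; the quantum speedup in Shor’s algorithm replaces exactly that search (function names here are ours, for illustration only):

```python
from math import gcd

def find_period(g: int, N: int) -> int:
    # Smallest p with g**p % N == 1 (brute force; the quantum part of
    # Shor's algorithm exists to find this period exponentially faster).
    p, x = 1, g % N
    while x != 1:
        x = (x * g) % N
        p += 1
    return p

def factor_via_period(N: int, g: int):
    # Demo only: assumes gcd(g, N) == 1 and an even period.
    p = find_period(g, N)
    if p % 2 == 1:
        return None  # odd period: retry with a different g
    half = pow(g, p // 2, N)
    return gcd(half - 1, N), gcd(half + 1, N)

print(factor_via_period(15, 7))  # -> (3, 5), the prime factors of 15
```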

Wiggling toward bio-inspired machine intelligence

Juncal Arbelaiz Mugica is a native of Spain, where octopus is a common menu item. However, Arbelaiz appreciates octopus and similar creatures in a different way, with her research into soft-robotics theory.

More than half of an octopus’ nerves are distributed through its eight arms, each of which has some degree of autonomy. This distributed sensing and information processing system intrigued Arbelaiz, who is researching how to design decentralized intelligence for human-made systems with embedded sensing and computation. At MIT, Arbelaiz is an applied math student who is working on the fundamentals of optimal distributed control and estimation in the final weeks before completing her PhD this fall.

She finds inspiration in the biological intelligence of invertebrates such as octopus and jellyfish, with the ultimate goal of designing novel control strategies for flexible “soft” robots that could operate in tight or delicate surroundings, for example as surgical tools or in search-and-rescue missions.

Posits, a New Kind of Number, Improves the Math of AI

Training the large neural networks behind many modern AI tools requires real computational might: for example, OpenAI’s most advanced language model, GPT-3, required an astounding million billion billion (roughly 10^24) operations to train, and cost about US $5 million in compute time. Engineers think they have figured out a way to ease the burden by using a different way of representing numbers.

Back in 2017, John Gustafson, then jointly appointed at A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.
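To make the format concrete, here is a toy decoder for 8-bit posits in Python, following the published posit layout (sign bit, variable-length “regime”, optional exponent bits, fraction). This is an illustrative sketch of ours, not code from the Madrid group or a standard-conformant implementation:

```python
def decode_posit8(bits: int, es: int = 0) -> float:
    # Decode an 8-bit posit with `es` exponent bits into a float.
    # Sketch only: truncated exponent fields (es > 0) are not padded
    # as the full standard requires.
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")  # "Not a Real" (NaR) in the posit standard
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF          # negative posits: two's complement
    rest = [(bits >> i) & 1 for i in range(6, -1, -1)]  # 7 bits after sign
    run = 1                             # regime: run of identical bits
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    k = run - 1 if rest[0] == 1 else -run
    remaining = rest[run + 1:]          # skip the regime terminator bit
    e = 0
    for b in remaining[:es]:            # exponent bits, if any
        e = (e << 1) | b
    f = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(remaining[es:]))
    useed = 2 ** (2 ** es)
    return sign * useed ** k * 2.0 ** e * (1.0 + f)

print(decode_posit8(0b01000000))  # -> 1.0
print(decode_posit8(0b01010000))  # -> 1.5
```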

Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware, showing that, bit-for-bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week’s IEEE Symposium on Computer Arithmetic.

A computational shortcut for neural networks

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.
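Their actual derivation concerns specific network architectures, but the flavor of “optimal solution without training” can be shown with a deliberately loose analogy in Python: for least-squares regression, the answer that gradient-descent training crawls toward can also be written down in closed form:

```python
import numpy as np

# Toy data: y = 3x + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=100)

# "Training": iterative gradient descent on the squared error
w, lr = 0.0, 0.1
for _ in range(200):
    grad = -2.0 * np.mean((y - w * X[:, 0]) * X[:, 0])
    w -= lr * grad

# No training needed: the least-squares optimum in closed form
w_star = np.linalg.lstsq(X, y, rcond=None)[0].item()

print(w, w_star)  # both are approximately 3.0
```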

Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.

Are We Living in a Simulation with David Chalmers [S3 Ep.12]

Welcome to another episode of Conversations with Coleman.

My guest today is David Chalmers. David is a professor of philosophy and neuroscience at NYU and the co-director of the NYU Center for Mind, Brain and Consciousness.

David just released a new book called “Reality+: Virtual Worlds and the Problems of Philosophy”, which we discuss in this episode. We also discuss whether we’re living in a simulation, the progress that’s been made in virtual reality, whether virtual worlds count as real, whether people would and should choose to live in a virtual world, and many other classic questions in the philosophy of mind and more.

#Ad.
The best way to learn anything is by doing it yourself. Learn interactively with Brilliant’s fun hands-on lessons in math, science, and computer science. Brilliant has lots of great courses for all ability and knowledge levels, so you’ll find something that interests you. Master all sorts of technical subjects, with topics ranging from Geometry to Classical Mechanics to Programming with Python to Cryptocurrency and much more.
Instead of just memorizing, Brilliant teaches you how to think about STEM by guiding you through fun problems. You’ll get practice with real problem solving, which helps you train your critical thinking and creative problem-solving skills. You’ll come to understand how STEM actually works, and how it’s relevant to your everyday life.
Head over to https://brilliant.org/CWC to get started with a free week of unlimited access to Brilliant’s interactive lessons. The first 200 listeners will also get 20% off an annual membership.

FOLLOW COLEMAN

YouTube — http://bit.ly/38kzium