
What the bleep is an exocortex and why should we care?

Ray Kurzweil

An exocortex can be accurately described as an external neocortex. Many people may have heard of the exocortex from Ray Kurzweil, but the idea is actually a bit older than Kurzweil's description. In his essay "As We May Think," the famed inventor Vannevar Bush describes a machine that could be used to record the collective memory of mankind. It is the first exocortex concept I could find, and he called this device the Memex.

The Memex would allow anyone to store all of the books and knowledge they gathered in their lifetime in a personal knowledge base. Unfortunately, we still do not have a Memex device that lets us store our own memories, even though many centralized organizations collect vast amounts of big data about our lives and put it in central databases. Google could be said to be building an exocortex today, but that exocortex is centralized, and while we can use it as an external memory, it conveniently has the feature (or bug) of allowing our individual memories and thoughts to be searched. Maybe it's time to build a decentralized exocortex that lets individuals own their own thoughts, their own search, their content, their data, and their digital selves?

Read more

Interesting: a data compression algorithm can be applied to detect quantum entanglement.


The next time you archive some files and compress them, you might think about the process a little differently. Researchers at the National University of Singapore have discovered that a common compression algorithm can be used to detect quantum entanglement. What makes this discovery so interesting is that it does not rely heavily on the assumption that the measured particles are independent and identically distributed.

If you measure a property of a particle and then measure the same property of another particle, in classical mechanics there is no reason for them to match except by pure chance. In quantum mechanics, though, the two particles can be entangled, such that the results will match each other. This follows from Bell's theorem, which is applied to test whether particles are in fact entangled. The catch is that the theorem is derived for pairs of particles, so many pairs have to be measured and the probability that they are entangled calculated. This is where the researchers' discovery comes into play: instead of calculating probabilities, the measurements can be fed into the open-source Lempel-Ziv-Markov chain algorithm (LZMA) to get their normalized compression difference. Compression algorithms work by finding patterns in data and encoding them more efficiently, and in this case they also pick up the correlations produced by quantum entanglement.
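Here is a minimal sketch of the underlying trick, assuming nothing about the researchers' actual pipeline: Python's standard-library LZMA compressor is used to compute a normalized compression distance between two measurement records, and records that are correlated compress better together than records that are independent. The bit strings below are random placeholders, not real measurement data.

```python
import lzma
import random

# Toy illustration of the idea, not the researchers' actual analysis:
# the normalized compression distance (NCD) between two byte strings is
# small when a compressor can exploit structure shared between them.

def compressed_size(data: bytes) -> int:
    """Size in bytes of the LZMA-compressed representation of data."""
    return len(lzma.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical measurement records: one is perfectly correlated with the
# first record, the other is statistically independent of it.
random.seed(0)
a = bytes(random.getrandbits(1) for _ in range(4096))
b_correlated = a
b_independent = bytes(random.getrandbits(1) for _ in range(4096))

print(ncd(a, b_correlated))    # small value: shared structure detected
print(ncd(a, b_independent))   # close to 1: no shared structure
```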

Using a weird phenomenon in which particles of light seem to travel at faster-than-light speeds, scientists have shown that waves of light can seem to travel backward in time.

The new experiment also shows other bizarre effects of light, such as pairs of images forming and annihilating each other.

Taken together, the results finally prove a century-old prediction made by British scientist and polymath Lord Rayleigh. The phenomenon, called time reversal, could allow researchers to develop ultra-high-speed cameras that can peer around corners and see through walls.

Read more

Ever notice how maps of the large-scale structure of the Universe look like maps of the brain or a Pollock painting?


On the grandest scale, our universe is a network of galaxies tied together by the force of gravity. Cosmic Web, a new effort led by cosmologists and designers at Northeastern’s Center for Complex Network Research, offers a roadmap toward understanding how all of those tremendous clusters of stars connect—and the visualizations are stunning.

The images below show us several hypothetical architectures for our universe, built from data on 24,000 galaxies. By varying the construction algorithm, the researchers have designed cosmic webs that link up in a number of different ways, based on the size, proximity, and relative velocities of individual galaxies. I call it God View.
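As a rough illustration of what "varying the construction algorithm" can mean (a toy sketch, not the Cosmic Web project's actual code), the snippet below links galaxies into a network whenever they fall within an assumed linking length; the positions and threshold are made-up placeholders.

```python
import itertools
import math

# Illustrative sketch: build a simple proximity web by linking any two
# galaxies that lie closer together than an assumed linking length.
# Positions are made-up 3D coordinates in megaparsecs.
galaxies = {
    "A": (0.0, 0.0, 0.0),
    "B": (1.2, 0.5, 0.0),
    "C": (10.0, 4.0, 2.0),
}

LINKING_LENGTH_MPC = 2.0  # assumed threshold; varying it changes the web

edges = [
    (a, b)
    for a, b in itertools.combinations(galaxies, 2)
    if math.dist(galaxies[a], galaxies[b]) <= LINKING_LENGTH_MPC
]

print(edges)  # [('A', 'B')] with the toy data above
```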

AI is hackable as long as its underpinning technology is still supported on legacy platforms and connected to legacy infrastructure. Only when the underpinning technology and network infrastructure are updated to quantum will we see a secure AI environment.


At MIT, machine learning specialists are training deep learning algorithms to spot cyber attacks. It may be AI’s ultimate test.

Read more

Newswise — The saying of philosopher René Descartes about what makes humans unique is beginning to sound hollow. "I think, therefore soon I am obsolete" seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an ongoing battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human's input. Recently, the world held its breath as Google's AlphaGo algorithm beat a professional player at the game of Go, an achievement demonstrating the explosive speed of development in machine capabilities.

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the prestigious science journal Nature.

Read more

The Direct Fusion Drive (DFD) concept provides game-changing propulsion and power capabilities that would revolutionize interplanetary travel. DFD is based on the Princeton Field-Reversed Configuration (PFRC) fusion reactor under development at the Princeton Plasma Physics Laboratory. The mission context we are proposing is delivery of a Pluto orbiter with a lander. The key objective of the proposal is to determine the feasibility of the proposed Pluto spacecraft using improved engine models. DFD provides high thrust to allow for reasonable transit times to Pluto while delivering substantial mass to orbit: 1000 kg delivered in 4 to 6 years. Since DFD provides power as well as propulsion in one integrated device, it will also provide as much as 2 MW of power to the payloads upon arrival. This enables high-bandwidth communication, powering of the lander from orbit, and radically expanded options for instrument design. The data acquired by New Horizons’ recent Pluto flyby is just a tiny fraction of the scientific data that could be generated from an orbiter and lander. We have evaluated the Pluto mission concept using the Lambert algorithm for maneuvers with rough estimates of the engine thrust and power. The acceleration times are sufficiently short for the Lambert approximation, i.e. impulsive burns, to have some validity. We have used fusion scaling laws to estimate the total mission mass and show that it would fit within the envelope of a Delta IV Heavy launch vehicle. Estimates of the amount of Helium 3 required to fuel the reactor are within available terrestrial stores.
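A quick back-of-the-envelope check of the impulsive-burn assumption can be done with placeholder numbers (the thrust, mass, and delta-v below are illustrative assumptions, not figures from the proposal): if the burn time is a small fraction of the multi-year transit, treating the maneuver as an instantaneous Lambert-style burn is reasonable.

```python
# Back-of-the-envelope check of the impulsive-burn approximation. All numbers
# are illustrative assumptions, not figures from the DFD proposal.
THRUST_N = 20.0            # assumed engine thrust
SPACECRAFT_MASS_KG = 5000  # assumed mass, held constant for simplicity
DELTA_V_M_S = 20_000       # assumed total maneuver delta-v
TRANSIT_TIME_S = 5 * 365.25 * 24 * 3600  # assumed ~5-year transit

# Burn time at constant thrust, ignoring propellant mass change.
burn_time_s = DELTA_V_M_S * SPACECRAFT_MASS_KG / THRUST_N

print(f"burn time:    {burn_time_s / 86400:.1f} days")
print(f"transit time: {TRANSIT_TIME_S / 86400:.1f} days")
print(f"burn fraction of transit: {burn_time_s / TRANSIT_TIME_S:.1%}")
```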

Read more

Scientists from ITMO University in Saint Petersburg, Russia have achieved secure quantum data transmission over longer distances (250 kilometers). Nice, and it should be a wake-up call to the US to advance its own efforts.


A group of scientists from ITMO University in Saint Petersburg, Russia has developed a novel approach to the construction of quantum communication systems for secure data exchange. The experimental device based on the results of the research is capable of transmitting single-photon quantum signals across distances of 250 kilometers or more, which is on par with other cutting-edge systems. The research paper was published in the Optics Express journal.

Information security is becoming more and more of a critical issue, not only for large companies, banks and defense enterprises, but even for small businesses and individual users. However, the data encryption algorithms we currently use to protect our data are imperfect: in the long term, their logic can be cracked. Regardless of how complex and intricate the algorithm is, getting around it is just a matter of time.

Unlike algorithm-based encryption, systems that protect information using the fundamental laws of quantum physics can make data transmission completely immune to hacker attacks in the future. Information in a quantum channel is carried by single photons that change irreversibly once an eavesdropper attempts to intercept them. Therefore, the legitimate users will instantly know about any kind of intervention.
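A toy BB84-style simulation (a sketch of the general principle, not ITMO's protocol or hardware) shows how this detection works: an intercept-and-resend eavesdropper unavoidably introduces errors into the sifted key, which the legitimate users can spot by comparing a sample of their bits.

```python
import random

# Toy BB84-style simulation, illustrating the principle only: an
# intercept-and-resend eavesdropper cannot avoid disturbing the photons,
# which shows up as errors in the sifted key.

def measure(bit, prep_basis, meas_basis):
    """Measured value: exact if bases match, otherwise a random outcome."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(n_photons=10_000, eavesdrop=False):
    errors = sifted = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        alice_basis = random.choice("+x")
        bob_basis = random.choice("+x")

        if eavesdrop:
            eve_basis = random.choice("+x")
            intercepted = measure(bit, alice_basis, eve_basis)   # Eve measures...
            bob_bit = measure(intercepted, eve_basis, bob_basis)  # ...and resends
        else:
            bob_bit = measure(bit, alice_basis, bob_basis)

        if alice_basis == bob_basis:  # only matching-basis rounds are kept
            sifted += 1
            errors += bob_bit != bit
    return errors / sifted

print("error rate without eavesdropper:", error_rate(eavesdrop=False))  # ~0%
print("error rate with eavesdropper:   ", error_rate(eavesdrop=True))   # ~25%
```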

Good question. The answer, as of today, depends on the "AI creator." AI today is entirely dependent on the design and development work of architects, engineers, and others, and on the algorithms and information it is exposed to. Granted, we have advanced this technology; however, it is still based on logical design and principles, nothing more.

BTW, here is an example to consider in this argument. If a bank buys a fully functional and autonomous AI, and audits (such as SOX) uncover that this AI solution was embezzling (as in another report two weeks ago describing an AI solution that stole money out of customer accounts), who is at fault? Who gets prosecuted? Who gets sued? The bank, the AI technology company, or both? We must be ready to address these kinds of situations soon; legislation and the courts are going to face some very interesting times in the near future, and consumers will probably take the brunt of the chaos.


A recent experiment in which an artificially intelligent chatbot became virulently racist highlights the challenges we could face if machines ever become superintelligent. As difficult as developing artificial intelligence might be, teaching our creations to be ethical is likely to be even more daunting.

Read more