
Ask an information architect, CDO, or data architect (enterprise or otherwise) and they will tell you they have always known that information and data are a basic staple, like electricity, and that they are glad folks are finally realizing it. So, the same view we apply to utilities as core to our infrastructure and survival should also apply to information. In fact, in some areas information can be even more important than electricity, when you consider that information can launch missiles, cure diseases, make you poor or wealthy, and take down a government or even a country.


What is information? Is it energy, matter, or something completely different? Although we take this word for granted in today’s world of fast Internet and digital media, this was not the case in 1948, when Claude Shannon laid the foundations of information theory. His landmark paper interpreted information in purely mathematical terms, a decision that dematerialized information forevermore. Not surprisingly, there are many nowadays who claim — rather unthinkingly — that human consciousness can be expressed as “pure information”, i.e. as something immaterial graced with digital immortality. And yet there is something fundamentally materialistic about information that we often ignore, although it stares us — literally — in the eye: the hardware that makes information happen.

As users we constantly interact with information via a machine of some kind, such as our laptop, smartphone or wearable. As developers or programmers we code via a computer terminal. As computer or network engineers we often have to wade through the sweltering heat of a server farm, or deal with the material properties of optical fibre or copper in our designs. Hardware and software are the fundamental ingredients of our digital world, both necessary not only in engineering information systems but in interacting with them as well. But this status quo is about to be massively disrupted by Artificial Intelligence.

A decade from now, the postmillennial youngsters of the late 2020s will find it hard to believe that once upon a time the world was full of computers, smartphones and tablets, and that people had to interact with these machines in order to access information or build information systems. For them, information will be more like electricity: always there, always available to power whatever they want to do. And this will be possible because artificial intelligence systems will manage information complexity so effectively that the right information can be delivered to the right person at the right time, almost instantly. So let’s see what that would mean, and how different it would be from what we have today.

Cambridge University spin-out Optalysys has been awarded a $350k grant for a 13-month project from the US Defense Advanced Research Projects Agency (DARPA). The project will see the company advance their research in developing and applying their optical co-processing technology to solving complex mathematical equations. These equations are relevant to large-scale scientific and engineering simulations such as weather prediction and aerodynamics.

The Optalysys technology is extremely energy efficient, using light rather than electricity to perform intensive mathematical calculations. The company aims to provide existing computer systems with massively boosted processing capabilities, eventually reaching exaFLOP rates (a billion billion, or 10^18, calculations per second). The technology operates at a fraction of the energy cost of conventional high-performance computers (HPCs) and has the potential to run orders of magnitude faster.

In April 2015, Optalysys announced that they had successfully built a scalable, lens-less optical processing prototype that can perform mathematical functions. Codenamed Project GALELEO, the device demonstrates that second-order derivatives and correlation pattern matching can be performed optically in a scalable design.
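
For readers wondering what it means to perform “correlation pattern matching” optically: an optical correlator exploits the fact that a lens physically computes a Fourier transform, and correlation in the spatial domain is just a pointwise product in the frequency domain. The NumPy sketch below shows the equivalent digital computation; it is an illustration of the underlying mathematics under my own assumptions, not Optalysys code, and every name and value in it is made up.

```python
import numpy as np

def cross_correlate_2d(image, template):
    """Correlation via the Fourier convolution theorem: the same
    mathematical operation an optical correlator performs with lenses
    and light instead of digital arithmetic."""
    # Zero-pad the template to the image size so the spectra align.
    padded = np.zeros_like(image)
    padded[:template.shape[0], :template.shape[1]] = template
    # Spatial-domain correlation = frequency-domain product with the
    # complex conjugate of the template spectrum.
    spectrum = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
    return np.real(np.fft.ifft2(spectrum))

# Toy example: locate a bright 3x3 patch hidden in a noisy 64x64 image.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (64, 64))
image[20:23, 40:43] += 1.0                      # embed the pattern
corr = cross_correlate_2d(image, np.ones((3, 3)))
print(np.unravel_index(np.argmax(corr), corr.shape))  # ~(20, 40)
```

The appeal of doing this optically is that the Fourier transform happens at the speed of light and at very low power, which is where the energy-efficiency claims above come from.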

Read more

I forgot Sony in the list of contact lens patents: Sony has a new camera contact patent. So we have Google, Huawei, and Samsung with AR and CPU patents, and Sony with patents on the camera. Now I’m waiting for announcements from Apple and my favorite, Microsoft.


Sony has joined Google and Samsung in the world of contact lens camera patents; Sony’s version also has zoom and aperture control built in.

Read more

Nice; however, I also see 3D printing, along with machine learning, becoming part of cosmetic procedures and surgeries.


With an ever-increasing volume of electronic data being collected by the healthcare system, researchers are exploring the use of machine learning—a subfield of artificial intelligence—to improve medical care and patient outcomes. An overview of machine learning and some of the ways it could contribute to advancements in plastic surgery are presented in a special topic article in the May issue of Plastic and Reconstructive Surgery®, the official medical journal of the American Society of Plastic Surgeons (ASPS).

“Machine learning has the potential to become a powerful tool in plastic surgery, allowing surgeons to harness complex clinical data to help guide key clinical decision-making,” write Dr. Jonathan Kanevsky of McGill University, Montreal, and colleagues. They highlight some key areas in which machine learning and “Big Data” could contribute to progress in plastic and reconstructive surgery.

Machine Learning Shows Promise in Plastic Surgery Research and Practice

Machine learning analyzes historical data to develop algorithms capable of knowledge acquisition. Dr. Kanevsky and coauthors write, “Machine learning has already been applied, with great success, to process large amounts of complex data in medicine and surgery.” Projects with healthcare applications include the IBM Watson Health cognitive computing system and the American College of Surgeons’ National Surgical Quality Improvement Program.
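
As a concrete, if simplified, illustration of “analyzing historical data to develop algorithms”: the sketch below fits a classifier on synthetic patient records to predict a postoperative outcome. The features, labels, and data are entirely hypothetical, invented for illustration, and are not drawn from the article or Dr. Kanevsky’s paper; it shows only the general shape of the technique.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: age, body-mass index, smoker flag,
# operative time in minutes (all synthetic).
X = np.column_stack([
    rng.integers(18, 80, n),
    rng.normal(27, 5, n),
    rng.integers(0, 2, n),
    rng.normal(120, 30, n),
])
# Hypothetical label: 1 = postoperative complication, 0 = none.
y = (0.02 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 0.5, n) > 1.6).astype(int)

# Train on "historical" cases, then evaluate on held-out cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```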

It’s not only Google: Huawei is working on AR contacts, and Samsung is making them as well. News from three weeks ago shows that Samsung has applied for a patent of its own.


Google has filed a patent for what sounds like a bionic eye.

A patent filed in 2014 and published Thursday describes a device that could correct vision without putting in contacts or wearing glasses every day.

But to insert the device, a person must undergo what sounds like a rather intrusive procedure.

Closing the instability gap.


(Phys.org)—It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors that it inevitably makes. Errors arise because of the constant interaction between the qubits and their environment, which can result in photon loss, which in turn causes the qubits to randomly flip to an incorrect state.

In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum techniques. Most of them work by repeatedly making measurements on the system to detect errors and then correcting those errors before they can proliferate. These approaches typically have a very large overhead, where a large portion of the computing power goes to correcting errors.
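
To make the overhead point concrete, here is a toy, purely classical simulation of the textbook three-qubit bit-flip repetition code, the simplest instance of the measure-and-correct approach described above (and precisely the kind of active scheme that Kapit’s passive method, described next, sets out to avoid). Three physical qubits protect a single logical bit, with majority voting standing in for the syndrome measurement; none of this is from Kapit’s paper.

```python
import random

random.seed(1)
P_FLIP = 0.1      # per-round bit-flip probability for each physical qubit
TRIALS = 100_000

def run_round():
    """Encode logical 0 as three physical bits, apply noise, then
    correct by majority vote (a stand-in for syndrome measurement,
    which locates the flipped qubit without reading the logical value)."""
    qubits = [0, 0, 0]
    qubits = [b ^ 1 if random.random() < P_FLIP else b for b in qubits]
    return 1 if sum(qubits) >= 2 else 0   # 1 = uncorrectable logical error

logical_rate = sum(run_round() for _ in range(TRIALS)) / TRIALS
print(f"physical error rate: {P_FLIP}")
print(f"logical error rate:  {logical_rate:.4f}")  # ~ 3p^2 - 2p^3 = 0.028
```

Even this minimal code spends three physical qubits (plus, on real hardware, ancilla qubits for the measurements) on every logical bit, which is the overhead the article refers to; full fault-tolerant schemes multiply that cost much further.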

In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.

Kurzweil, I, and others have been saying for a while now that devices will eventually be phased out. However, I do not believe the phase-out will be driven by AI; I believe it will be driven by how humans use and adopt next-generation technology. AI will be a supporting technology for humans, used in conjunction with AR, BMI (brain-machine interfaces), and so on.

My real question about the phasing out of devices is this: will we jump from the smartphone directly to BMI, or will we migrate from smartphones to AR contacts and glasses, and then eventually to BMI?…


(Bloomberg) — Forget personal computer doldrums and waning smartphone demand. Google thinks computers will one day cease being physical devices.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” Google Chief Executive Officer Sundar Pichai wrote Thursday in a letter to shareholders of parent Alphabet Inc. “Over time, the computer itself — whatever its form factor — will be an intelligent assistant helping you through your day.”

Instead of online information and activity happening mostly on the rectangular touch screens of smartphones, Pichai sees artificial intelligence powering increasingly formless computers. “We will move from mobile first to an AI first world,” he said.

Read more

I read this article and its complaints about the fragility of processing and storing information on a quantum computing platform. However, I suggest the writer review the news released two weeks ago about the new quantum data bus, highlighted by PC World, GizMag, etc., which is about to go live in the near future. Another article to consider is today’s Science Daily article on electron spin currents, which highlights how that technique effectively processes information.


Rare-earth materials are prime candidates for storing quantum information because their undesirable interaction with the environment is extremely weak. However, this lack of interaction also implies a very small response to light, making it hard to read and write data. Leiden physicists have now observed a record-high Purcell effect, which enhances the material’s interaction with light. The research was published April 25 in Nature Photonics (“Multidimensional Purcell effect in an ytterbium-doped ring resonator”).

Ordinary computers perform calculations with bits — ones and zeros. Quantum computers, on the other hand, use qubits. These information units are a superposition of 0 and 1; they represent a zero and a one simultaneously. This enables quantum computers to process information in a totally different way, making them exponentially faster for certain tasks, such as solving certain mathematical problems or breaking encryption.
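
Numerically, “a superposition of 0 and 1” means a qubit is described by two complex amplitudes whose squared magnitudes give the probabilities of each measurement outcome, and n qubits require 2^n amplitudes, which is where the exponential speedups come from. Here is a minimal NumPy sketch of just this bookkeeping (not a simulation of any real device):

```python
import numpy as np

# A qubit state is a normalized complex 2-vector: alpha|0> + beta|1>.
alpha = beta = 1 / np.sqrt(2)                   # equal superposition
state = np.array([alpha, beta], dtype=complex)
assert np.isclose(np.linalg.norm(state), 1.0)   # |alpha|^2 + |beta|^2 = 1

# Measurement yields 0 or 1 with these probabilities.
print(np.abs(state) ** 2)                       # [0.5 0.5]

# n qubits need 2**n amplitudes; n classical bits need only n values.
n = 30
print(f"{n} qubits span {2**n:,} amplitudes")   # 1,073,741,824
```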

Fragile.

The difficult part now is to actually build a quantum computer in real life. Rather than silicon transistors and memories, you need physical components that can process and store quantum information; otherwise the key to the whole idea is lost. But the problem with quantum systems is that they are always coupled to their environments to some degree, which makes them lose their quantum properties and become ‘classical’. Thermal noise, for example, can destroy the whole system. This makes quantum systems extremely fragile and hard to work with.

Read more