
During a recent United Nations meeting about emerging global risks, political representatives from around the world were warned about the threats posed by artificial intelligence and other future technologies.

The event, organized by Georgia’s UN representatives and the UN Interregional Crime and Justice Research Institute (UNICRI), was set up to foster discussion about the national and international security risks posed by new technologies, including chemical, biological, radiological, and nuclear (CBRN) materials.

The panel was also treated to a special discussion on the potential threats raised by artificial superintelligence—that is, AI whose capabilities greatly exceed those of humans. The purpose of the meeting, held on October 14, was to discuss the implications of emerging technologies, and how to proactively mitigate the risks.

Read more

Professor Nick Bostrom briefed political representatives from around the world on the national and international security risks posed by artificial intelligence and other future technologies at a UN event last week.

Professor Bostrom, Director of the Future of Humanity Institute, Oxford Martin School, was invited to speak at a special side event examining the challenges posed by chemical, biological, radiological and nuclear (CBRN) materials and weapons, held during the UN’s 2015 General Assembly meeting.

The event was organised by Georgia’s UN representatives, in collaboration with the United Nations Interregional Crime and Justice Research Institute (UNICRI), with the aim of understanding the implications of new technologies, ensuring responsible development and mitigating against misuse.

Read more

There are over 500 million hungry people in the world—but that number tells only part of the story. The other part is the size of the actual food shortfall. So how much food would we need to make up the gap? There’s now an exact number.

The latest International Food Security report is out, and the good news is that global food insecurity has been falling—and it’s projected to keep falling over the next 10 years. The bad news? It isn’t falling everywhere. Sub-Saharan Africa, in particular, is being shut out of these gains.

But how much food would it take to close the gap, so that every food-insecure person on the planet has access to 2,100 calories a day? The USDA has calculated a figure: 11.8 million tons of grain.
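For a rough sense of where a number like that comes from, the arithmetic can be sketched in a few lines. The grain energy density and the average per-person shortfall below are illustrative guesses, not the USDA’s actual inputs:

```python
# Back-of-envelope sketch of a food-gap calculation.
# All inputs are illustrative assumptions, not USDA figures.

CALORIES_PER_KG_GRAIN = 3500        # rough energy density of cereal grain
TARGET_KCAL_PER_DAY = 2100          # nutritional target cited in the report

def grain_gap_tons(people: int, avg_shortfall_kcal: float) -> float:
    """Metric tons of grain per year needed to close a daily calorie shortfall."""
    kcal_per_year = people * avg_shortfall_kcal * 365
    kg = kcal_per_year / CALORIES_PER_KG_GRAIN
    return kg / 1000  # kilograms -> metric tons

# e.g. 500 million people each short an average of ~220 kcal/day
gap = grain_gap_tons(500_000_000, 220)
print(f"{gap / 1e6:.1f} million tons")  # ~11.5 million tons
```

With those assumed inputs the answer lands in the same ballpark as the USDA’s 11.8 million tons; the real estimate, of course, uses country-by-country shortfall data rather than one global average.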

Read more

A piece I wrote recently about blockchain & AI, and how I see the Lifeboat Foundation as a crucial component in a bright future.


Blockchain technology could lead to an AI truly reminiscent of the human brain, with fewer of its frailties and more of its strengths. Just as a brain is not dictated by any single neuron, neither is the technology behind bitcoin. The advantage (and opportunity) here is that many nodes are bridged together to perform a single overall function, much as billions of neurons and synapses work in unison. If we set our sights on the grander vision, humanity could accomplish great things by using this technology to create a truly life-like artificial intelligence. At the same time, we need to keep in mind the dangers of such an intelligence being built upon a faultless system with no single point of failure.

Like any technology, this one has upsides and corresponding downsides. The advantages seem endless. Most relevantly, it makes it possible to build internet services without the weaknesses exploited in the TV show ‘Mr. Robot,’ in which a hacker group named “fsociety” breached numerous data centers and effectively destroyed every piece of data the company held, causing ramifications across all of society. Because blockchain technology avoids centralized data storage (every network user acts as a node that spreads the information), such a service is essentially impossible to take down. With no single weak point to target, a service in the right hands doesn’t go offline under heavy load, speeds up as more people use it, carries inherent privacy and security safeguards, and offers features that couldn’t be achieved with conventional technology. In the wrong hands, however, it could be outright devastating. Going forward, we must tread lightly and keep tabs on this technology, as it could run rampant and destroy society as we know it.
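The “no single targeted weak point” idea boils down to replication: if every node keeps a full copy, taking machines offline doesn’t erase the data. A toy sketch of just that replication idea (class and method names are invented for illustration; this is nothing like a real blockchain implementation):

```python
# Toy model: full replication removes the single point of failure.
# Every node stores a complete copy of the ledger; losing nodes
# does not lose the data.

class Node:
    def __init__(self, name):
        self.name = name
        self.ledger = []

class Network:
    def __init__(self, node_names):
        self.nodes = [Node(n) for n in node_names]

    def broadcast(self, record):
        # Every node stores its own copy -- no central data center.
        for node in self.nodes:
            node.ledger.append(record)

    def fail(self, name):
        # An attacker takes a node offline.
        self.nodes = [n for n in self.nodes if n.name != name]

    def read(self):
        # Any surviving node can serve the full history.
        return self.nodes[0].ledger if self.nodes else None

net = Network(["a", "b", "c", "d"])
net.broadcast("tx1")
net.broadcast("tx2")
net.fail("a")
net.fail("b")
print(net.read())  # ['tx1', 'tx2'] -- the data survives losing half the nodes
```

Contrast this with the show’s scenario: wipe one data center holding the only copy and the records are gone; here, an attacker would have to take down every node at once.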

Throughout the ages, society has experienced mass change; the difference now is that change has the power to wipe us out. Survival therefore demands that we evolve without destroying ourselves, and that won’t be a cakewalk. With our modern-day luxuries, we, as a species, think ourselves invincible, while in reality we’re just dressed-up monkeys operating shiny doomsday technology. Just as it was a challenge to cross the seas, to invent tools, and to harness electricity, the grandest challenges posed by the future (the ones defining our survival) are the most difficult to meet.

Read more

“Beyond implementation of quantum communication technologies, nanotube-based single photon sources could enable transformative quantum technologies including ultra-sensitive absorption measurements, sub-diffraction imaging, and linear quantum computing. The material has potential for photonic, plasmonic, optoelectronic, and quantum information science applications…”


In optical communication, critical information ranging from credit card numbers to national security data is transmitted in streams of laser pulses. However, information transmitted in this manner can be stolen by splitting a few photons (the quanta of light) off the laser pulse. This type of eavesdropping could be prevented by encoding bits of information on the quantum mechanical states (e.g. the polarization state) of single photons. The ability to generate single photons on demand holds the key to realizing such a communication scheme.
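The reason single-photon encoding resists eavesdropping is that a measurement in the wrong polarization basis yields a random result, so a copied photon doesn’t reliably reveal the bit. A toy model of that idea (illustrative only, not real optics):

```python
import random

# Toy model of encoding a bit on a single photon's polarization.
# Bases: '+' (horizontal/vertical) and 'x' (diagonal). Measuring in
# the wrong basis yields a 50/50 random outcome.

def measure(bit, prep_basis, meas_basis, rng):
    if prep_basis == meas_basis:
        return bit                 # correct basis: the bit is recovered exactly
    return rng.randint(0, 1)       # wrong basis: the outcome is random

rng = random.Random(0)
trials = 10_000
wrong_basis_hits = sum(measure(1, "+", "x", rng) for _ in range(trials))
print(f"wrong-basis success rate: {wrong_basis_hits / trials:.2f}")  # ~0.50
```

An eavesdropper who doesn’t know which basis was used can do no better than a coin flip on mismatched measurements, which is what makes the scheme detectable and the stolen bits unreliable.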

By demonstrating that incorporating pristine carbon nanotubes into a silicon dioxide (SiO2) matrix can create solitary oxygen dopant states capable of fluctuation-free, room-temperature single-photon emission, Los Alamos researchers revealed a new path toward on-demand single photon generation. Nature Nanotechnology published their findings.

Read more

China is set to complete the installation of the world’s longest quantum communication network, stretching 2,000 km (1,240 miles) from Beijing to Shanghai, by 2016, say scientists leading the project. Quantum communications technology is considered to be “unhackable” and allows data to be transferred at the speed of light.

By 2030, the Chinese network would be extended worldwide, the South China Morning Post reported. It would make the country the first major power to publish a detailed schedule to put the technology into extensive, large-scale use.

The development of quantum communications technology has accelerated in the last five years. It works by having two parties share a message encrypted with a secret key distributed using quantum particles, such as polarized photons. If a third person tries to intercept the photons by copying the secret key as it travels through the network, the eavesdropper will be revealed by the laws of quantum mechanics – which dictate that the act of interfering with the network disturbs the behaviour of the key in an unpredictable manner.
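That detection mechanism can be illustrated with a toy intercept-resend simulation: an eavesdropper who measures each photon in a randomly chosen basis and resends it disturbs the states, pushing the error rate on matched-basis rounds from roughly zero to roughly 25%. This is an illustrative sketch, not a physical model:

```python
import random

# Toy intercept-resend simulation in the style of quantum key
# distribution. Alice encodes random bits in random bases; Bob
# measures in random bases; only rounds where their bases match
# are kept. An eavesdropper ("Eve") who measures and resends
# introduces ~25% errors into those kept rounds.

def run(rounds, eve_present, rng):
    errors = matched = 0
    for _ in range(rounds):
        bit = rng.randint(0, 1)
        a_basis = rng.choice("+x")
        b_basis = rng.choice("+x")
        state = (bit, a_basis)
        if eve_present:
            e_basis = rng.choice("+x")
            e_bit = bit if e_basis == a_basis else rng.randint(0, 1)
            state = (e_bit, e_basis)          # Eve resends what she measured
        s_bit, s_basis = state
        b_bit = s_bit if b_basis == s_basis else rng.randint(0, 1)
        if a_basis == b_basis:                # only matched-basis rounds are kept
            matched += 1
            errors += b_bit != bit
    return errors / matched

rng = random.Random(1)
print(f"error rate, no Eve:   {run(20_000, False, rng):.2f}")  # 0.00
print(f"error rate, with Eve: {run(20_000, True, rng):.2f}")   # ~0.25
```

By comparing a small sample of their kept bits, the two parties can spot that elevated error rate and discard the compromised key before sending anything with it.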

Read more

Dr. Nils J. Nilsson spent almost a lifetime in the field of Artificial Intelligence (AI) before writing and publishing his book, The Quest for Artificial Intelligence (2009). I recently had the opportunity to speak with the former Stanford computer science professor, now retired at the age of 82, and reflect on the earlier accomplishments that have led to some of the current trends in AI, as well as the serious economic and security considerations that need to be made about AI as society moves ahead in the coming decades.

The Early AI that Powers Today’s Trends

One key contribution of early AI developments included rules-based expert systems, such as MYCIN, which was developed in the mid-1970s by Ted Shortliffe and colleagues at Stanford University. The information built into the diagnostic system was gleaned from medical diagnosticians, and the system would then ask questions based on that information. A person could then type in answers about a patient’s tests, symptoms, etc., and the program would then attempt to diagnose diseases and prescribe therapy.
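The rule-firing idea behind such systems can be sketched in a few lines. The rules, findings, and diagnoses below are invented for illustration and have nothing to do with MYCIN’s actual medical knowledge base:

```python
# Toy rules-based diagnosis in the spirit of expert systems like MYCIN.
# Each rule pairs a set of required findings with a conclusion.

RULES = [
    ({"fever", "stiff_neck"}, "possible meningitis"),
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"rash"}, "possible allergic reaction"),
]

def diagnose(findings):
    """Fire every rule whose conditions are all present in the findings."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]

print(diagnose({"fever", "cough", "rash"}))
# ['possible respiratory infection', 'possible allergic reaction']
```

MYCIN itself was far richer than this, attaching certainty factors to rules and asking follow-up questions interactively, but the core loop of matching rule conditions against entered findings is the same.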

“Bringing us more up to the future was the occurrence of huge databases (in the 1990s) — sometimes called big data — and the ability of computers to mine that data and find information and make inferences,” remarks Nils. This made possible the new work on face recognition, speech recognition, and language translation. “AI really had what might be called a take off at this time.” Both of these technologies also feed into the launch of IBM’s Watson Health, which combines advanced rules-based systems with big data capabilities and promises to give healthcare providers access to powerful tools in a cloud-based data sharing hub.

Work in neural networks, another catalyst, went through two phases: an earlier phase in the 1950s and 1960s and a later phase in the 1980s and 1990s. “The second phase (of neural networks) allowed…people to make changes in the connected strength in those networks and multiple layers, and this allowed neural networks that can steer and drive automobiles.” Those more primitive networks led to the cutting-edge work being done by today’s scientists in the self-driving automobile industry via companies like Tesla and Google.
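The phrase “changes in the connected strength” can be made concrete with the simplest possible case: a single artificial neuron whose weights are nudged by the classic perceptron rule until it computes an AND gate. A minimal illustrative sketch, not any particular historical system:

```python
# One artificial neuron trained by the perceptron rule to compute AND.
# "Connection strengths" are the weights, adjusted after each error.

def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        err = target - out                # nudge strengths toward the target
        weights[0] += 0.1 * err * x1
        weights[1] += 0.1 * err * x2
        bias += 0.1 * err

print([step(weights[0] * x1 + weights[1] * x2 + bias) for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

The second-phase breakthrough Nilsson refers to was extending this kind of weight adjustment through multiple layers (backpropagation), which is what made tasks like steering a vehicle from camera input learnable.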

Robotics was also being developed at Stanford in the 1950s and 1960s. A robot could look at its environment and determine the positions of objects, could be given tasks, and could then make a plan of action to achieve the goal. A built-in monitoring system allowed it to evaluate results, re-orient itself, and get back on track. These early robots used a Digital Equipment Corporation computer that was not nearly as powerful as the technology in a present-day wristwatch, let alone an autonomous drone.

All of these early technologies led to the trends in industry that are moving and shaking the global economy. An interesting difference worth noting, between the way that old and new technologies were primarily developed, has to do with the driving context. There is a well-known distinction between ‘demand-pull’ technology (using AI to solve existing problems) and ‘push’ technology orientations (AI that is developed from the top down, without necessarily meeting specific user needs).

In the earlier days of AI research, scientists and engineers leaned toward the latter ‘push’ method, as opposed to the ‘demand-pull’ strategy that is more prevalent today (though certainly both exist). Early AI scientists “really wanted to see how far they could leverage the technology”, remarks Nilsson, as they were fascinated by the basic techniques, but not yet sure about purpose.

AI Threats on the Horizon

Developing technologies pose an increasing threat to global security. People are, and will be, monitored more often (at airports, on street corners, etc.). There is also the issue of autonomous weapons, in particular planes or drones that are no longer guided the way they are now. The technology to let such machines make decisions autonomously is already available. This is already a pressing public issue to which the United States, “as defenders of democracy”, and other nations should pay careful attention, remarks Nilsson.

The theoretical concerns about AI’s threat to the existence of the human species, so readily covered in the media, are legitimate; however, Nils does not believe they are the only concern for the very near future. Instead, he suggests that AI poses other kinds of relevant threats about which we should also be thinking, such as risks to the existing economic system.

The effect of increasing automation on employment is one point for real consideration. Economists have argued that automation has occurred in the past and that such innovations have not prevented new startups; however, with the seemingly inevitable development of human-level cognitive AI, there are many more jobs (certain types of journalism, for example) that machines can also perform, and more quickly.

“Now, should we regard that as a threat, or should we regard that as ‘well, lots of people have jobs that they don’t like, why should we regard eliminating the need for people to work at those jobs as a threat?’”, says Nils. While increased automation will inevitably result in a decreased need for human labor, the question of what humans will do with more leisure time remains. They could spend that time doing more creative things, but this is a serious consideration for humankind.

Nilsson points out that nations will also need to reorient the economy. “The production of goods and services will certainly increase, but will be done by robots and automation; I think the big problem for us is to decide, ‘okay, how do we actually distribute these goods and services to people who aren’t earning a salary?’”

The Need for Real Solutions

What are the potential solutions? On autonomous weapons, Nils believes that there certainly needs to be international collaboration. The United Nations (UN), “which sometimes is not as effective as it should be”, needs to be heavily involved. Nilsson states the need for forming other alliances with NATO, the Chinese, and other rising governments; he’s not sure what to do about the Middle East, which is already a “hot spot.”

Regardless, Nilsson emphasizes that the United States needs to be able to lead the way and set better, well thought-out examples in how we use such technology. “Not only we, as a ‘defender of democracy’ should pay careful attention to and worry about (these threats), but other nations need to do so also.”

On the issue of employment, Nils points to the need to ask some tough questions, e.g. “are we going to have a policy of income distribution or reverse income taxes?” There is the problem of people’s ability to purchase, but also the very real question of what people will do with their time, a paradigm shift. “In the past, people didn’t have that worry because they had to work…we can’t very well go back to the old bread and circuses thing of the Romans,” he jests.

Nilsson believes that this will require the usual avenues of politics and think tanks, though one would assume a much more active and integrated citizenry as well. “One important part is the citizenry has to be more well-informed about these threats than it is at the moment.” Coming up with real solutions requires a real collaborative, multi-pronged effort, one that is all too often easier said than done.

Dropbox today announced that it has started to allow users to log in using USB keys as a universal second factor (U2F) of authentication.

U2F, a protocol promoted by the FIDO Alliance, isn’t the trendiest form of secure authentication for consumer web services. Far more common is two-factor authentication in which you sign in by entering a code sent to your phone. Dropbox already allows its users to do that, but now it’s gone further.

“After typing in your password, just insert your key into a USB port when you’re prompted, instead of typing in a six-digit code,” Dropbox’s Patrick Heim and Jay Patel wrote in a blog post today. “And unlike two-step with a phone, you’ll never have to worry about your battery going dead when you use a security key.”
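The idea behind a security key is challenge-response: the server sends a fresh random challenge, and the device proves possession of a secret without the user typing anything. Real U2F uses per-site public-key signatures registered with the server; the sketch below substitutes an HMAC over a shared secret so it stays self-contained, and all class and method names are illustrative:

```python
import hashlib
import hmac
import os

# Illustrative challenge-response sketch. NOTE: real U2F registers a
# *public* key with the server and the private key never leaves the
# device; the shared-secret HMAC here is a simplification.

class SecurityKey:
    def __init__(self):
        self._secret = os.urandom(32)        # stays on the device

    def register(self):
        return self._secret                  # U2F would share a public key instead

    def sign(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, registered_key_material):
        self._key = registered_key_material

    def challenge(self):
        return os.urandom(16)                # fresh and unguessable per login

    def verify(self, challenge, response):
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = SecurityKey()
server = Server(key.register())

c = server.challenge()
print(server.verify(c, key.sign(c)))     # True: the key answered the challenge
print(server.verify(c, b"\x00" * 32))    # False: a forged or replayed response fails
```

Because each challenge is fresh, a phished or replayed response is useless, which is the advantage over the six-digit codes the blog post mentions.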

Read more