Whether in the brain or in code, neural networks are shaping up to be one of the most critical areas of research in both neuroscience and computer science. An increasing amount of attention, funding, and development is being directed toward technologies that mimic the brain in both hardware and software, with the goal of creating more efficient, high-performance systems capable of advanced, fast learning.

One aspect of the push toward more scalable, efficient, and practical neural networks and deep learning frameworks that we have been tracking here at The Next Platform is how such systems might be implemented in research and enterprise over the next ten years. Based on the conversations that make their way into various pieces here, one of the missing elements for those eventual end users is a simpler training process: something that makes neural networks practically useful without the computational overhead and specialized systems that training requires today. Crucial, then, is a whittling down of how neural networks are trained and implemented. Not surprisingly, the key answers lie in the brain, and specifically in how it “trains” its own network, a process still not completely understood even by top neuroscientists.

In many senses, neural networks, cognitive hardware and software, and advances in new chip architectures are shaping up to be the next important platform. But there are still fundamental gaps between what we know about our own brains and what has been developed in software to mimic them, and those gaps are holding research back. Accordingly, the Intelligence Advanced Research Projects Activity (IARPA) in the U.S. is getting behind an effort spearheaded by Tai Sing Lee, a computer science professor at Carnegie Mellon University’s Center for the Neural Basis of Cognition, and researchers at Johns Hopkins University, among others, to make new connections between the brain’s neural function and how those same processes might map to neural networks and other computational frameworks. The project is called Machine Intelligence from Cortical Networks (MICrONS).
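
To make the training overhead concrete, here is a minimal, hypothetical sketch (not from the MICrONS project) of what “training” a neural network means in practice: a tiny two-layer network learning XOR by gradient descent with NumPy. Production deep learning repeats essentially this same loop over millions of parameters and examples, which is where the computational cost and specialized hardware come in.

```python
# Minimal illustrative sketch (not from the MICrONS project): training a
# tiny two-layer network on XOR with plain gradient descent. Real workloads
# repeat this loop over millions of parameters and examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 0.5                                  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)                   # hidden activations
    out = sigmoid(h @ W2)                 # network predictions

    # Backward pass: gradients of squared error through both layers
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h

    # Gradient descent update
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```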

Read more

New equation suggests there was no “Big Bang,” no beginning, and no singularity.


(Phys.org) —The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a “Big Bang” did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.
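
For context, the classical equation at stake can be sketched in a few lines. The block below shows the standard textbook Friedmann equation of general relativity (it is not the quantum-corrected equation from the new model, which modifies this classical picture): extrapolating the scale factor a(t) backward drives it to zero and the density to infinity, which is the singularity the new work aims to remove.

```latex
% Standard (classical) Friedmann equation for the cosmic scale factor a(t);
% textbook general relativity, not the paper's quantum-corrected form.
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}

% For a flat, matter-dominated universe (k = 0, \Lambda = 0) one has
% \rho \propto a^{-3}, so a(t) \propto t^{2/3}. Running time backward,
% a \to 0 and \rho \to \infty at t = 0: the Big Bang singularity that
% quantum correction terms are meant to avoid.
```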

How could AI disrupt the music and commercial media industries?


Artificial intelligence may be set to disrupt the world of live music. Using data-driven algorithms, AI would be able to calculate when and where artists should play, as well as streamline the currently deeply flawed means through which fans discover concerts happening in their area.

____________________________________________

Guest Post by Cortney Harding on Medium

We don’t live in a world that’s pinning the survival of humanity on Matthew McConaughey’s shoulders, but if it turns out the plot of the 2014 film Interstellar is true, then we live in a world with at least five dimensions. And that would mean that a ring-shaped black hole would, as scientists recently demonstrated, “break down” Einstein’s general theory of relativity. (And to think, the man was just coming off a phenomenal week.)

In a study published in Physical Review Letters, researchers from the UK simulated a black hole shaped like a thin ring (a form first posited by theoretical physicists in 2002) in a five-dimensional universe. In this universe, the black hole would bulge strangely, with stringy connections that become thinner as time passes. Eventually, those strings pinch off, like budding bacteria or drops breaking away from a stream of water, and form miniature black holes of their own.

This is wicked weird stuff, but we haven’t even touched on the most bizarre part. A black hole like this leads to what physicists call a “naked singularity,” where the equations that support general relativity — a foundational block of modern physics — stop making sense.

Read more

(Phys.org)—Researchers have designed and implemented an algorithm that solves computing problems using a strategy inspired by the way that an amoeba branches out to obtain resources. The new algorithm, called AmoebaSAT, can solve the satisfiability (SAT) problem—a difficult optimization problem with many practical applications—using orders of magnitude fewer steps than the number of steps required by one of the fastest conventional algorithms.

The researchers predict that the amoeba-inspired approach may offer several benefits, such as high efficiency, miniaturization, and low energy consumption, that could lead to a new computing paradigm for nanoscale, high-speed computing.

Led by Masashi Aono, Associate Principal Investigator at the Earth-Life Science Institute, Tokyo Institute of Technology, and at PRESTO, Japan Science and Technology Agency, the researchers have published a paper on the amoeba-inspired system in a recent issue of Nanotechnology.
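
For readers unfamiliar with SAT, the sketch below is a purely illustrative brute-force check on a tiny Boolean formula in conjunctive normal form. It is not the AmoebaSAT algorithm itself (whose amoeba-inspired update rules are described in the paper), but it shows what the satisfiability problem asks and why naive exhaustive search, which grows as 2^n with the number of variables, is exactly what heuristic solvers try to avoid.

```python
# Illustrative only: brute-force SAT on a tiny CNF formula. This is NOT
# AmoebaSAT; it just demonstrates the problem and the 2^n cost of
# exhaustive search that heuristic solvers aim to beat.
from itertools import product

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
# Positive integers are variables, negative integers are their negations.
clauses = [[1, -2], [2, 3], [-1, -3]]
num_vars = 3

def satisfies(assignment, clauses):
    """assignment maps a 1-based variable index to True/False."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

solutions = []
for bits in product([False, True], repeat=num_vars):
    assignment = {i + 1: bits[i] for i in range(num_vars)}
    if satisfies(assignment, clauses):
        solutions.append(assignment)

print(f"{len(solutions)} satisfying assignments out of {2 ** num_vars} checked")
```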

Read more

Actors and actresses will never have to worry about reading through pages of scripts to decide whether or not a role is worth their time; AI will do the work for them.


A version of this story first appeared in the Feb. 26 issue of The Hollywood Reporter magazine.

During his 12 years in UTA’s story department, Scott Foster estimates he read about 5,500 screenplays. “Even if it was the worst script ever, I had to read it cover to cover,” he says. So when Foster left the agency in 2013, he teamed with Portland, Ore.-based techie Brian Austin to create ScriptHop, an artificial intelligence system that manages the volume of screenplays that every agency and studio houses. “When I took over [at UTA], we were managing hundreds of thousands of scripts on a Word document,” says Foster, who also worked at Endeavor and Handprint before UTA. “The program began to eat itself and become corrupt because there was too much information to handle.” ScriptHop can read a script and do a complete character breakdown in four seconds, versus the roughly four man-hours required of a human reader. The tool, which launches Feb. 16, is free; it is a sample of the overall platform coming later in 2016, which will recommend screenplays as well as store and manage a company’s library for a subscription fee of $29.99 a month per user.

As for how exactly it works, Austin is staying mum. “There’s a lot of sauce in the secret sauce,” he says. Foster and Austin aren’t the first to create AI to analyze scripts. ScriptBook launched in 2015 as an algorithmic assessment to determine a script’s box-office potential. By contrast, ScriptHop is more akin to a Dewey Decimal System for film and TV. Say a manager needs to find a project for a 29-year-old male client who is 5 feet tall: ScriptHop will spit out the options quickly. “If you’re an agent looking for roles for minority clients, it’s hugely helpful,” says Foster. There’s also an emotional response dynamic (i.e., Oscar bait) that charts a character’s cathartic peaks and valleys as well as screen time and shooting days. So Meryl Streep can instantly find the best way to spend a one-month window between studio gigs. Either way, it appears that AI script reading is the future. The only question is what would ScriptHop make of Ex Machina’s Ava? “That would be an interesting character breakdown,” jokes Foster.
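
Since ScriptHop’s internals are undisclosed, the sketch below is purely hypothetical and is not the company’s method: a naive character breakdown that scans a conventionally formatted screenplay for scene headings and ALL-CAPS character cues, then tallies dialogue lines and scene appearances per character. A real system would go far beyond this, but it illustrates the kind of structured data an agent-facing search could filter on.

```python
# Purely hypothetical sketch; ScriptHop's actual method is undisclosed.
# Naive character breakdown for a conventionally formatted screenplay:
# scene headings start with INT./EXT., character cues are short ALL-CAPS
# lines, and the text under a cue is that character's dialogue.
import re
from collections import defaultdict

def character_breakdown(script_text):
    scene_count = 0
    stats = defaultdict(lambda: {"dialogue_lines": 0, "scenes": set()})
    current_cue = None

    for raw in script_text.splitlines():
        line = raw.strip()
        if re.match(r"^(INT\.|EXT\.)", line):                 # scene heading
            scene_count += 1
            current_cue = None
        elif re.fullmatch(r"[A-Z][A-Z .'\-]{1,30}", line):    # character cue
            current_cue = line.rstrip(".")
            stats[current_cue]["scenes"].add(scene_count)
        elif line and current_cue:                            # dialogue line
            stats[current_cue]["dialogue_lines"] += 1
        elif not line:                                        # blank ends speech
            current_cue = None

    return {name: {"dialogue_lines": d["dialogue_lines"],
                   "scenes": len(d["scenes"])}
            for name, d in stats.items()}

sample = """INT. OFFICE - DAY

AVA
Do you want to be my friend?

CALEB
Of course.
"""
print(character_breakdown(sample))
```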

Read more

Neural networks have become enormously successful – but we often don’t know how or why they work. Now, computer scientists are starting to peer inside their artificial minds.

A PENNY for ’em? Knowing what someone is thinking is crucial for understanding their behaviour. It’s the same with artificial intelligences. A new technique for taking snapshots of neural networks as they crunch through a problem will help us fathom how they work, leading to AIs that work better – and are more trustworthy.

In the last few years, deep-learning algorithms built on neural networks – multiple layers of interconnected artificial neurons – have driven breakthroughs in many areas of artificial intelligence, including natural language processing, image recognition, medical diagnoses and beating a professional human player at the game Go.

The trouble is that we don’t always know how they do it. A deep-learning system is a black box, says Nir Ben Zrihem at the Israel Institute of Technology in Haifa. “If it works, great. If it doesn’t, you’re screwed.”

Neural networks are more than the sum of their parts. They are built from many very simple components – the artificial neurons. “You can’t point to a specific area in the network and say all of the intelligence resides there,” says Zrihem. But the complexity of the connections means that it can be impossible to retrace the steps a deep-learning algorithm took to reach a given result. In such cases, the machine acts as an oracle and its results are taken on trust.
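
The “snapshot” idea itself is simple to illustrate. The sketch below is a generic, hypothetical example rather than the researchers’ specific tool: it runs a small random network forward in NumPy and records every layer’s activations, the raw material that visualization techniques then project or cluster to see what the network represents at each stage.

```python
# Generic illustration of taking "snapshots" of a network as it runs:
# record each layer's activations during the forward pass. Hypothetical
# sketch, not the specific tool described in the article.
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

# A small random network: 10 inputs -> 16 -> 8 -> 4 outputs.
weights = [rng.normal(size=(10, 16)),
           rng.normal(size=(16, 8)),
           rng.normal(size=(8, 4))]

def forward_with_snapshots(x, weights):
    """Return the network output plus a per-layer activation snapshot."""
    snapshots = []
    activation = x
    for W in weights:
        activation = relu(activation @ W)
        snapshots.append(activation.copy())   # freeze this layer's state
    return activation, snapshots

x = rng.normal(size=(1, 10))                  # one input example
output, snapshots = forward_with_snapshots(x, weights)

for i, snap in enumerate(snapshots, start=1):
    print(f"layer {i}: shape={snap.shape}, mean activation={snap.mean():.3f}")
```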

I must admit that this will be hard to do. Sure, I can code anything to come across as responding and interacting to questions, topics, and so on. Granted, logical and pragmatic decision making is based on the facts and information people have at a given point in time; but being human isn’t only a matter of algorithms and prescripted data. It also includes being spontaneous and, sometimes, thinking emotionally. Robots without the ability to be spontaneous and to think emotionally will not be human, and will lack the connection that humans need.


Some people worry that someday a robot – or a collective of robots – will turn on humans and physically hurt or plot against us.

The question, they say, is how can robots be taught morality?

There’s no user manual for good behavior. Or is there?

This is one that truly depends on the targeted audience. I still believe that the first solely owned and operated female robotics company will make billions.


Beyond correct pronunciation, there is the even larger challenge of correctly placing human qualities like inflection and emotion into speech. Linguists call this “prosody,” the ability to add correct stress, intonation or sentiment to spoken language.

Today, even with all the progress, it is not possible to completely represent rich emotions in human speech via artificial intelligence. The first experimental research results, gained by applying machine-learning algorithms to huge databases of human emotion embedded in speech, are just becoming available to speech scientists.

Synthesised speech is created in a variety of ways. The highest-quality techniques for natural-sounding speech begin with a human voice that is used to generate a database of parts and even subparts of speech spoken in many different ways. A human voice actor may spend from 10 hours to hundreds of hours, if not more, recording for each database.
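
As a rough illustration of how such a database of recorded parts gets used, the sketch below is a toy, hypothetical unit-selection step and not any particular vendor’s system: each candidate recording of a speech unit carries simple prosodic features (pitch and duration), and the synthesizer picks the sequence that best matches a prosody target while keeping the joins between neighboring units smooth.

```python
# Toy, hypothetical unit-selection sketch for concatenative synthesis:
# pick one recorded candidate per speech unit so that (a) each candidate
# matches the prosody target and (b) adjacent candidates join smoothly.
# Real systems use far richer features and dynamic-programming search.

# Each candidate recording: (pitch_hz, duration_ms)
candidates = {
    "HH": [(110, 60), (150, 55)],
    "EH": [(120, 90), (160, 80), (140, 85)],
    "L":  [(115, 70), (150, 65)],
    "OW": [(118, 120), (155, 110)],
}
unit_sequence = ["HH", "EH", "L", "OW"]
target = {"pitch_hz": 150, "duration_ms": 80}     # desired prosody

def target_cost(unit, tgt):
    """How far a candidate is from the requested pitch and duration."""
    return abs(unit[0] - tgt["pitch_hz"]) + abs(unit[1] - tgt["duration_ms"])

def join_cost(prev, cur):
    """Penalize pitch jumps at the boundary between two units."""
    return abs(prev[0] - cur[0])

def select_units(sequence, candidates, tgt):
    chosen, prev = [], None
    for name in sequence:                          # greedy left-to-right pick
        best = min(
            candidates[name],
            key=lambda u: target_cost(u, tgt) + (join_cost(prev, u) if prev else 0),
        )
        chosen.append((name, best))
        prev = best
    return chosen

for name, (pitch, dur) in select_units(unit_sequence, candidates, target):
    print(f"{name}: pitch={pitch} Hz, duration={dur} ms")
```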