Dan Faggella – Lifeboat News: The Blog https://lifeboat.com/blog – Safeguarding Humanity

Can Technology Help Fine-Tune Your Happiness? https://spanish.lifeboat.com/blog/2016/09/can-technology-help-fine-tune-your-happiness Mon, 19 Sep 2016

Given the demands of the modern world, many people find solace and relaxation when they disconnect from their smartphones, computers, and email. But what if you could improve your overall happiness simply by playing games on your phone? In a recent interview, tech entrepreneur and Happify co-founder Ofer Leidner said gamification can make people “happier,” and that the development of technology that improves well-being is only just beginning.

Image credit: x-bility.com

It should be noted that not just any game on your phone can help you live a happier, healthier life. Instead, Happify and other comparable platforms use science-based games to drive behavior and to help people learn skills for generally improving their outlook on life. It’s still gaming and gamification, but gaming done with a meaningful purpose.

“After telling us a little bit about themselves, we recommend a certain track, which is a topic around which (Happify users) want to build those skills for greater emotional fitness. We then prescribe for them a set of activities and interventions that have been transformed into an interactive app,” Leidner said. “You can do them on your phone, when you’re commuting, or you can do it at night. What we’re doing, in terms of the measurement of improved outcome, is we’re actually measuring them based on scientific event reports.”

Leidner said that Happify and other apps like it aren’t inventing the science, but rather translating existing interventions, studies, and research; this data suggests that an overall happiness index is determined by one’s ability to experience positive emotions and overall life satisfaction. While that may seem like a nebulous target, he said there’s plenty of research to back it up.

In addition to the behavioral science behind gamification technology, Leidner also cites neuroscientific evidence of its efficacy. Seeing notable changes in functional MRI brain scans as a result of gamification-driven behavior is what led him to build applications that could engage and benefit society on a grander scale.

Leidner acknowledges that happiness is a charged term that can mean many things to different people. But in a world with both objective and self-reported measurements, it’s what the user does with the feedback from those gaming measurements that will make the difference. “To give you a simple example, you will not be able to be happy if you’re sleep deprived. It doesn’t matter. Sleep deprivation is the number one technique to make you unhappy,” Leidner said. “I think the important thing is not just to measure, but what do you do with the measurements.”

Looking to the future, Leidner sees more gamification-driven applications and hardware on the horizon that will help people learn to live happier, healthier lives, including existing health applications like HealthKeep. Future app technologies will likely also collaborate with sensors in your body to help calibrate self-reported information (like mental state) with objective physical measurements, in turn improving recommended activities and better tailoring apps to enhance an individual user’s happiness and well-being.

Beyond that, Leidner also predicts that augmented and virtual reality will play a big role in improving people’s lives in the future. Such technologies, he said, will help people escape from their lives and emotions while helping them learn how to use more of their “mindfulness muscle”.

“There is a theme that says technology is not the way. If you want to live well and live happier lives, disconnect from technology, shut down your devices,” Leidner said. “We’re basically saying, ‘We’re not gonna’ shut down our devices. We’re just gonna’ turn the focus to apps and technology and services that help us create more meaningful lives. Augmented reality can play a very important role (in that) I think.” As with anything else, our new apps are a tool, one that can be used for ill or for good; eliminating this technology from our lives may not be realistic, but choosing how we use these technologies is within every free person’s realm of personal choice.

OpenBCI and the Future of EEG Technology https://spanish.lifeboat.com/blog/2016/08/openbci-and-the-future-of-eeg-technology Tue, 02 Aug 2016

Last year, OpenBCI burst onto the scene with a Kickstarter campaign to fund development of an open source brain-computer interface for makers. The company more than doubled its goal of raising $100,000 for its EEG platform and, as I write this, OpenBCI is preparing to ship its first run of finished products. Conor demos the technology in the link below:

OpenBCI Demo by Conor Russomanno

Recently, I had a chance to talk with OpenBCI co-founder Conor Russomanno to get his thoughts on how open source has changed the brain-computer interface (BCI) landscape and opened new opportunities in the present, and how it might affect future development opportunities as well.

“The one thing that we’re hoping to achieve with OpenBCI is to really lower the barrier of entry – both in terms of educational materials but also cost,” Russomanno said. “I think one really awesome implication is that, in a classroom or laboratory, where one research grade EEG system was used by a number of students, now the same amount of money could be used to outfit every student with their own device. And we’ve seen that in our customer base, as a huge proportion of our customers are students, graduate-level researchers and professors who want to use OpenBCI as a learning tool.”

Another exciting change OpenBCI is driving is an open source community that lets users and makers connect and share their knowledge to push the technology even further, Russomanno noted. In fact, OpenBCI is dedicating a fair chunk of its resources to building that community.

“Probably the quickest people to jump on the preorders and the Kickstarters were students and researchers who were already working with existing EEG devices. We are trying to get more people interested by creating a community, putting out instructional guides and making it more approachable.

“I like to think of what we’re doing with OpenBCI as Lego meets EEG. I think of what we’re building as not a finished product, but as a narrow building block. And we want the world to use these blocks to build the cool stuff,” he said.

While the success and acclaim OpenBCI has received in mainstream media has been exciting, Russomanno remains cautiously optimistic about the opportunities for further development of open source BCI. When I mentioned some of the farther-reaching future implications of BCI technologies, Conor brought the conversation back to the present, seeming less interested in faraway “what ifs” than in how the next step forward in research might be taken:

“I think it’s important to be realistic about what the technology is capable of,” he said. “There are still a lot of challenges and they’re not all going to be solved by the same company or by a single field of research. It’s important that people collaborate together, specialize and improve upon a small facet of the problem by sharing that information with someone else who has solved another small facet.

“What we’re trying to do with OpenBCI is to expose all of the weaknesses of the full system and say ‘Hey guys! Jump in! What can you do to improve this other piece?’”

Another gap Russomanno hopes open source BCI can bridge is the one between the enthusiastic expectations of the general public and the realistic limits of current technology. While an enthusiastic vision of BCI might involve telepathic control of technology or complete conscious “embodiment” in a robotic form, the current reality of BCI is less “far out.” Calibrating today’s external BCI devices is still a relatively slow process of attuning to individual brain patterns, and isn’t near “telepathic” levels, although some researchers have achieved significant control of devices and games with EEG headsets.

“I think many people would agree that the ‘Holy Grail’ of practical, wearable EEG is a sensor. Right now, it’s very difficult to acquire a strong EEG signal from outside the scalp because you’ve got a lot of things that produce ‘noise,’” he continued. “I’m not sure if it will ever happen, but the one problem that needs to be optimized is the electrode problem. We’ve broken out the header pins so you can attach any electrode on, so if that Holy Grail does get found in the next one or two years, hopefully you’ll just be able to plug it right into the OpenBCI board.

“On the other end of the spectrum,” Russomanno continued, “once you’ve got good spatial resolution, a high number of channels and a good quality of signal, what do you do with this data now that you’re collecting it? How do you classify this information to create a system that responds in a pre-determined way?

“That’s where software and research comes in. You’ve got electrical engineers that need to solve the electrode problem. But then you’ve got data analytics and programmers that need to work together to create algorithms that will classify massive amounts of data,” he noted.

Conor’s earlier comment about the interdisciplinary nature of BCI research starts to hit home, but he wasn’t done yet. After software challenges, there’s one more hurdle left for the full optimization of open source BCI, he added.

“Every brain is similar but every brain is unique. When it gets to that point where we’ve got enough systems producing enough data that it can be scaled cheaply from individual to individual, then it’s a matter of building an interface that’s user customizable that has enough flexibility to be able to refine its classification inputs to match the specific user.”
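The classification problem Russomanno describes can be sketched in a few lines. The toy pipeline below turns epochs of multi-channel EEG into band-power features and labels them with a nearest-centroid classifier. The sample rate, channel count, and band choices here are illustrative assumptions, not OpenBCI specifics, and a real system would also need filtering, artifact rejection, and the per-user calibration Russomanno mentions:

```python
import numpy as np

FS = 250  # assumed sample rate in Hz (illustrative, not an OpenBCI spec)

def band_power(signal, fs, lo, hi):
    """Mean spectral power of a 1-D signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def features(epoch, fs=FS):
    """Alpha (8-13 Hz) and beta (13-30 Hz) power per channel for one
    epoch of shape (n_channels, n_samples), flattened to a vector."""
    return np.array([[band_power(ch, fs, 8, 13),
                      band_power(ch, fs, 13, 30)] for ch in epoch]).ravel()

class NearestCentroid:
    """Toy classifier: label an epoch by the closest class-mean vector."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.labels = sorted(set(y))
        self.centroids = {c: X[y == c].mean(axis=0) for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

Trained on labeled epochs from two mental states, the classifier tags new epochs by their band-power profile; the per-user flexibility Russomanno describes amounts to refitting those centroids for each individual brain.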

Ultimately, Russomanno says the mission for OpenBCI is to make the technology more accessible and that, wherever open source BCI goes in the future, a community based on cooperation and collaboration will take it there.

With so much to work on, he’s aiming to facilitate the global conversation necessary to bring BCI to the next level, without funding it all in his own proprietary lab. If all brains are unique, then we’ll learn more about calibrating devices by testing and tinkering with people all over the world. Conor’s aim, however, is not just to use their heads as experiments, but to generate new hypotheses to test and ideas to explore — expanding the field for everyone.

“Putting our heads together” takes on multiple literal interpretations here, and that’s how he intends it.

Conor ended our chat with some practical advice for researchers and makers who want to help the cause: “The best way for people to join that community is to acquire the technology, try to figure out how to make it work, be vocal on the forums and keep spreading the open source wildfire.”

Virtuix Omni Provides Virtual Worlds in Which to Walk, Run, and Jump https://spanish.lifeboat.com/blog/2016/07/virtuix-omni-provides-virtual-worlds-in-which-to-walk-run-and-jump Sat, 16 Jul 2016

2016 has been called the year that virtual reality becomes a reality, as some of the most anticipated devices will become available on the consumer market. From Magic Leap (valued at over $1 billion) to Oculus (acquired by Facebook in 2014 for $2 billion), there’s plenty of interest in the market, and ample room for it.

Though virtual reality is often depicted as an experience for the recliner or gaming chair, a number of VR enthusiasts want us to rethink how we engage with the technology. Instead of sitting around, they’d like to get us moving and looking beyond the go-to medium of gaming. Developers of these new devices consider fields like military training and healthcare as valuable places for mobile virtual reality to be applied.

Leading this approach is Virtuix, a Kickstarter-funded VR company whose treadmill-like Omni device encourages users to get upright and active. Developing such a device felt like a natural evolution to Virtuix’s former product manager, Colton Jacobs.

“If I am walking in the virtual world my avatar is actually walking…”, he says, “…then the body’s natural reaction is, I want to stand up, I want to be walking with this avatar as if I’m actually there.”

Virtuix's Omni

Users first slip on a pair of Omni’s $60 slippery-bottomed shoes before strapping into a lower-body harness, which allows them to shuffle and slide their feet over the surface of the device. The device looks like a treadmill, but the track doesn’t actually move; rather, the shuffling and sliding motions register on the track and are translated into the VR program being used. Shuffle forward, and your character moves forward. Shuffle backward, and your character backpedals. The resulting movement looks somewhat awkward from the outside, but is reportedly a smooth and consistent experience for the user.
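The translation at work – reading foot slides on a stationary dish and converting them into in-game locomotion – can be caricatured in a few lines. Everything below (the gain, dead zone, and coordinate conventions) is invented for illustration and is not Virtuix’s actual tracking pipeline:

```python
import math

def foot_to_avatar_velocity(dx, dy, dt, gain=1.0, deadzone_speed=0.02):
    """Map a tracked foot displacement (dx, dy meters over dt seconds)
    on the dish to an avatar velocity in the virtual world.

    A foot sliding backward across the dish corresponds to a forward
    step, so the sign is flipped; displacements slower than the dead
    zone are treated as sensor jitter and ignored."""
    speed = math.hypot(dx, dy) / dt
    if speed < deadzone_speed:
        return (0.0, 0.0)
    return (-gain * dx / dt, -gain * dy / dt)
```

Under this toy mapping, a 5 cm backward slide over 0.1 s becomes a 0.5 m/s forward avatar velocity; faster shuffling simply scales up the analog speed, which is why the device needs no sprint button.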

Engadget had a chance to try out the device earlier this year and gave a positive review: “Even the easy mode that I tried – wherein you fire where you look rather than where the gun is pointing – had my heart pounding.” Testing games like Doom and Call of Duty, Engadget reporter Andrew Tarantola recounts his experience:

“The generic ‘base defense’ shooter game that I tried blew me away. Having to actually move, turn, look around and aim was incredibly immersive and added a completely new level of gameplay. There’s no more sprint button. If you want to run faster, then you really have to run faster. I was sweating by the end of my session — and grinning harder than I have in years.”

Meanwhile, Jacobs describes Omni as an immersive experience like few others. Where your standard, handheld controls allow you to manipulate your character’s movement by toggling a joystick, Omni facilitates movement by mimicking the user’s movement.

“You can walk, run, jump, turn around 360 degrees and act out your virtual avatar’s motions. We’re doing motion tracking ourselves to the lower body so we’re going to be able to tell what direction you’re walking in and how fast you’re moving, jumping, side stepping, things like that and translate that into analog movement in the virtual world.”

Though he acknowledges gaming as the default medium for immersive VR devices, Jacobs envisions Omni’s technology applied beyond gaming.

“The gaming market is a great first adopter market for us,” he says. “That being said, we’ve had multiple orders for military installations, West Point Navy installations, where they’re doing training simulation for some emergency and disaster training… instead of sending soldiers out to the field, they can actually just do training for various types of scenarios in the virtual world together.”

By getting soldiers to actually act out the movement, they’re able to simulate the physicality of real-world scenarios more realistically than can be done with a hand-held controller.

Virtuix likely won’t stop with entertainment and military training. According to Jacobs, the company has looked into healthcare — particularly physical therapy and recovery – as a field that would greatly benefit from a device like Omni.

“We’ve had multiple universities around the world actually, go ahead and order the Omni and they’re looking to test it for various different rehabilitation, also research. Stroke survivors is one of our big ones. A traditional treadmill can get someone up and walking in a straight line but a lot of the dangers that they have when they’re walking around in their home or just walking in general is when they have to turn corners. When they have to go around an object, that’s when they have a higher fall risk. Our system could potentially help to reactivate those muscles after a major neurological attack like a stroke or Alzheimer’s.”

Though virtual reality is already used in healthcare, some ask why there hasn’t been more innovation in this area from the virtual reality industry. Perhaps Omni could be a useful bridge in moving the technology forward.

Of course, an important component in such applications is ensuring that the system is safe. After all, the thought of wearing a head mounted display (HMD) while sliding around a track sounds like an accident waiting to happen.

“Safety is definitely a top priority,” Jacobs says. “You can’t actually see your surroundings while you’re in the Omni. You need to be strapped in, make sure you’re safe, but you’re also standing on a frictionless surface. You have to watch a video of this to really understand, it’s very hard to say in words.”

But so far, Omni has stood up to the gaming obstacles, which speaks to potential opportunities for use in healthcare-type settings.

“With our harness and our safety support structure, we’ve never had anyone fall on the Omni, especially in a high-intensity gaming situation; it’s even more intense when there’s zombies flying at you.”

Can A Brain-Machine Interface Help Train Your Mind? https://spanish.lifeboat.com/blog/2016/06/can-a-brain-machine-interface-help-train-your-mind Wed, 01 Jun 2016

Walk into any workout facility and, odds are, you’ll see plenty of people working with a personal fitness trainer. It’s common practice to hire a trainer who can help improve your physical fitness, but is it possible to find a trainer for better mental fitness? Entrepreneur Ariel Garten founded her company, InteraXon, around this very idea. Bolstered by new advances in non-invasive brain-machine interfaces (BMIs) that can help people practice ways to reduce stress and improve cognitive abilities, Garten believes this is just the beginning of a lucrative industry.

Garten’s company manufactures a BMI called the Muse, an EEG sensor headband that monitors occipital and temporal brain waves. According to Ariel, the goal of the device is to help people understand their mental processes while learning to calm and quiet their minds at any time, with the same convenience as carrying around an iPhone.

Image credit: www.choosemuse.com

“We don’t measure stress (with the Muse). What we’re actually measuring is a state of stable, focused attention,” Garten said. “When you hone your mind into a state of stable focused attention, what you’re able to do is resist the thoughts that you have and the distractions that you have. That helps you improve your cognitive function and attention. And, it also helps you decrease your stress, anxiety and all of the downstream physiological responses of that stress.”

According to Garten, when one is in a state of stable, focused attention, their brain-wave signatures are very similar to those seen in a calm, relaxed state. Reaching that state of stable, focused attention produces more alpha waves, which are also recorded when people are winding down to go to bed. Those alpha waves represent a shutting down of external sensory processing, which Ariel says amounts to better holding your focus.
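To make the alpha-wave idea concrete, here is a minimal sketch of how an app might turn raw EEG samples into a smoothed “calm” score: the fraction of spectral power in the 8–13 Hz alpha band, averaged over time. The sample rate, window length, and smoothing factor are assumptions for illustration; Muse’s actual signal processing is proprietary and far more involved:

```python
import numpy as np

FS = 256  # assumed sample rate in Hz, not a Muse hardware spec

def alpha_ratio(window, fs=FS):
    """Fraction of spectral power in the alpha band (8-13 Hz) for one
    window of raw EEG samples."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    total = psd[freqs > 0].sum()
    alpha = psd[(freqs >= 8) & (freqs <= 13)].sum()
    return alpha / total if total else 0.0

def calm_score(windows, smoothing=0.8):
    """Exponentially smoothed alpha ratio: the kind of slowly varying
    feedback signal a neurofeedback app could map to a sound or animation."""
    score = 0.0
    for w in windows:
        score = smoothing * score + (1 - smoothing) * alpha_ratio(w)
    return score
```

A rising score would indicate more alpha-dominant activity of the sort Garten describes; a real device must also reject eye-blink and muscle artifacts before any such score is meaningful.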

While it has parallels to meditation, Garten noted that BMI-based stable attention exercises can show one’s brain activity in real time. That feedback allows for deeper and faster learning, as well as the ability to maintain the practice or the exercise over time.

Much like the concept of muscle memory, once a user learns how to reach stable, focused attention, the Muse and its accompanying applications help train the user to be able to return to that state whenever it’s needed in their daily lives. Garten noted that a number of research studies have found focused attention exercises can also lead to increased gray matter in the brain, while decreasing anxiety and helping with depression, eating disorders, insomnia and more.

“In the next five years, you’re going to see a proliferation of these types of devices… simple, clean, and easy-to-use brain-sensing technology applications. What you’ll see is applications that let you play games directly with your mind and applications that let you understand and improve yourself,” she said. “We’re not at the point in technology where you can control stuff directly with your mind by reading a thought. That will happen someday… 15 to 20 years in the future.”

While we can look at changes in brain states right now, the future promises more responsive technology that can provide a much more detailed understanding of your brain’s function and use that information to support your interactions with your external environment.

“We’re going to be able to see applications and algorithms that understand you more effectively and are able to give you personalized insight based on you and your own brain and how it works, moment to moment to moment,” Garten said. “We’re going to see the hardware getting smaller, so that it fits into other devices you already wear. We’re also going to see greater accessibility and cross-platform integration with your favorite tools to get a more comprehensive picture of yourself.”

BMI technology that is minimally invasive but offers the user more personalized control certainly seems like a pragmatic first step towards broader acceptance of such technologies in the near future. While not part of the mainstream consumer market quite yet, Muse’s successes with its loyal customer base may point to real opportunity for similar products in the neurotechnology marketplace.

Recommendation Engines Yielding Stronger Predictions into Our Wants and Needs https://spanish.lifeboat.com/blog/2016/05/recommendation-engines-yielding-stronger-predictions-into-our-wants-and-needs Thu, 12 May 2016

If you’ve ever seen a “recommended item” on eBay or Amazon that was just what you were looking for (or maybe didn’t know you were looking for), it’s likely the suggestion was powered by a recommendation engine. In a recent interview, Raefer Gabriel, co-founder of machine learning startup Delvv, Inc., said these applications for recommendation engines and collaborative filtering algorithms are just the beginning of a powerful and broad-reaching technology.

Raefer Gabriel, Delvv, Inc.

Gabriel noted that content discovery services like Netflix, Pandora, and Spotify are the most familiar to people because of the way they seem to “speak” to one’s preferences in movies, games, and music. Their relatively narrow focus on entertainment is a common thread that has made them successful as constrained domains. The challenge lies in developing recommendation engines for unbounded domains, like the internet, where there is more or less unlimited information.

“Some of the more unbounded domains, like web content, have struggled a little bit more to make good use of the technology that’s out there. Because there is so much unbounded information, it is hard to represent well, and to match well with other kinds of things people are considering,” Gabriel said. “Most of the collaborative filtering algorithms are built around some kind of matrix factorization technique and they definitely tend to work better if you bound the domain.”
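The matrix factorization technique Gabriel mentions can be made concrete with a toy example. The sketch below factors a small user–item rating matrix into low-rank user and item vectors with stochastic gradient descent; the hyperparameters are arbitrary illustrations, not anything Delvv or a production recommender actually uses:

```python
import numpy as np

def factorize(ratings, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Factor a (users x items) matrix of observed ratings (0 = unobserved)
    into U (users x k) and V (items x k) so that U @ V.T approximates
    the observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    observed = [(u, i, r) for u in range(n_users)
                for i, r in enumerate(ratings[u]) if r > 0]
    for _ in range(steps):
        for u, i, r in observed:
            err = r - U[u] @ V[i]          # prediction error on this entry
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Two taste clusters: users 0-1 like items 0-1, users 2-3 like items 2-3.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
U, V = factorize(R)
pred = U @ V.T  # predicted ratings, including the unobserved entries
```

Each user’s and item’s k-dimensional vector lands near the vectors it co-occurs with, so the dot product `U[u] @ V[i]` predicts how user u would rate an unseen item i; this is the sense in which bounding the domain makes the matrix easier to fit.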

Of all the recommendation engines and collaborative filters on the web, Gabriel cites Amazon’s as the most ambitious. The eCommerce giant uses a number of strategies to make item-to-item recommendations, suggest complementary purchases, model user preferences, and more. The key to developing those recommendations is the data Amazon is able to feed into the algorithm initially; once it reaches a critical mass of data on user preferences, creating recommendations for new users becomes much easier.

“In order to handle those fresh users coming into the system, you need to have some way of modeling what their interest may be based on that first click that you’re able to extract out of them,” Gabriel said. “I think that intersection point between data warehousing and machine learning problems is actually a pretty critical intersection point, because machine learning doesn’t do much without data. So, you definitely need good systems to collect the data, good systems to manage the flow of data, and then good systems to apply models that you’ve built.”
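The fresh-user problem Gabriel describes – squeezing signal out of a single first click – is often handled by blending a global popularity prior with whatever co-occurrence evidence that click provides. The function below is a hypothetical sketch; all names and the blending weight are invented for illustration and are not any production system’s API:

```python
from collections import Counter

def cold_start_scores(first_click, click_log, item_counts, alpha=0.5):
    """Score items for a brand-new user from a single observed click.

    Blends a global popularity prior with co-click evidence: items often
    clicked in the same sessions as `first_click` get a boost.
    `click_log` is a list of sessions (lists of item ids); `item_counts`
    is a Counter of overall item popularity."""
    total = sum(item_counts.values()) or 1
    co = Counter()
    for session in click_log:
        if first_click in session:
            co.update(i for i in session if i != first_click)
    co_total = sum(co.values()) or 1
    items = set(item_counts) | set(co)
    return {i: alpha * item_counts[i] / total + (1 - alpha) * co[i] / co_total
            for i in items if i != first_click}
```

As more clicks accumulate, the co-occurrence term can be trusted more and the popularity prior faded out – which is exactly the data-pipeline dependence Gabriel points to: the model does nothing without systems collecting and managing that click data.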

Beyond consumer-oriented uses, Gabriel has seen recommendation engines and collaborative filter systems used in a narrow scope for medical applications and in manufacturing. In healthcare for example, he cited recommendations based on treatment preferences, doctor specialties, and other relevant decision-based suggestions; however, anything you can transform into a “model of relationships between items and item preferences” can map directly onto some form of recommendation engine or collaborative filter.

One of the most important elements that has driven the development of recommendation engines and collaborative filtering algorithms is the Netflix Prize, Gabriel said. The competition, which offered a $1 million prize to anyone who could design an algorithm that improved on Netflix’s proprietary recommendation engine by 10 percent, allowed entrants to use pieces of the company’s own user data to develop a better algorithm. The competition spurred a great deal of interest in the potential applications of collaborative filtering and recommendation engines, he said.

In addition, relatively easy access to abundant, cheap memory is another driving force behind the development of recommendation engines. An eCommerce company like Amazon, with millions of items, needs plenty of memory to store millions of different pieces of item and correlation data while also storing user data in potentially large blocks.

“You have to think about a lot of matrix data in memory. And it’s a matrix, because you’re looking at relationships between items and other items and, obviously, the problems that get interesting are ones where you have lots and lots of different items,” Gabriel said. “All of the fitting and the data storage does need quite a bit of memory to work with. Cheap and plentiful memory has been very helpful in the development of these things at the commercial scale.”
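Gabriel’s memory point can be illustrated with the standard trick for item–item matrices: store only the pairs actually observed together, so memory grows with the data rather than with the square of the catalog. A minimal sketch (the class and method names are invented for illustration):

```python
from collections import defaultdict

class SparseCooccurrence:
    """Item-item co-occurrence counts stored sparsely.

    A dense matrix over n items needs n*n cells even though most item
    pairs are never seen together; storing only observed pairs keeps
    memory proportional to the data, which is part of what makes
    commercial-scale recommenders feasible."""
    def __init__(self):
        self.counts = defaultdict(int)

    def add_basket(self, items):
        """Count every unordered pair of distinct items in one basket."""
        items = sorted(set(items))
        for idx, a in enumerate(items):
            for b in items[idx + 1:]:
                self.counts[(a, b)] += 1

    def count(self, a, b):
        return self.counts.get(tuple(sorted((a, b))), 0)
```

A dense matrix over a million-item catalog would need 10^12 cells; this map holds only the pairs that ever co-occur, trading a little lookup overhead for a huge memory saving.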

Looking forward, Gabriel sees recommendation engines and collaborative filtering systems evolving more toward predictive analytics and getting a handle on the unbounded domain of the internet. While those efforts may ultimately be driven by the Google Now platform, he foresees a time when recommendation-driven data will merge with search data to provide search results before you even search for them.

“I think there will be a lot more going on at that intersection between the search and recommendation space over the next couple years. It’s sort of inevitable,” Gabriel said. “You can look ahead to what someone is going to be searching for next, and you can certainly help refine and tune into the right information with less effort.”

While “mind-reading” search engines may still seem a bit like science fiction at present, the capabilities are evolving at a rapid pace, with predictive analytics leading the way.

Why Pessimistic Predictions For Future of AI May be More Hype than High Tech https://spanish.lifeboat.com/blog/2016/04/why-pessimistic-predictions-for-future-of-ai-may-be-more-hype-than-high-tech Tue, 12 Apr 2016

The growth of human and computer intelligence has triggered a barrage of dire predictions about the rise of superintelligence and the singularity. But some retain their skepticism, including Dr. Michael Shermer, a science historian and founding publisher of Skeptic Magazine.

Image: “I’m a skeptic not because I do not want to believe, but because I want to know.” – Michael Shermer

The reason so many rational people put forward hypotheses that are more hype than high tech, Shermer says, is that being smart and educated doesn’t protect anyone from believing in “weird things.” In fact, sometimes smart and educated people are better at rationalizing beliefs that they hold for not-so-rational reasons. The smarter and more educated you are, the better able you are to find evidence to support what you want to be true, suggests Shermer.

“This explains why Nobel Prize winners speak about areas they know nothing about with great confidence and are sure that they’re right. Just because they have this great confidence of being able to do that (is) a reminder that they’re more like lawyers than scientists in trying to marshal a case for their client,” Shermer said. “(Lawyers) just put together the evidence, as much as you can, in support of your client and get rid of the negative evidence. In science you’re not allowed to do that, you’re supposed to look at all the evidence, including the counter evidence to your theory.”

The root of many of these false hypotheses, Shermer believes, is religious. Using immortality as an example, Shermer said the desire to live forever has strong parallels to religious belief; yet while many now prophesy that technology will ensure we live forever, groups throughout history have made similar promises and left them unfulfilled.

“What we’d like to be true is not necessarily what is true, so the burden of proof is on them to go ahead and make the case. Like the cryonics people…they make certain claims that this or that technology is going to revive people that are frozen later…I hope they do it, but you’ve got to prove otherwise. You have to show that you can actually do that.”

Even if we do find a way to live forever, Shermer notes the negatives may outweigh the positives. It’s not just living longer that we want to achieve, but living longer at a high quality of life. There’s not much benefit in living to age 150, he adds, if one is bedridden for 20 or 30 years.

Instead, Shermer compares the process to the evolution of the automobile. While the flying cars promised by 1950s-era futurists haven’t come to pass, today’s automobile is dramatically smarter and safer than those made 50 or 60 years ago. Forward thinkers have had moments of lucid foresight, but humans also have a history of making technology predictions that don’t turn out to be realized. Often, as with the automobile, we don’t notice technological change because it happens incrementally, year by year.

“That’s what’s really happening with health and longevity. We’re just creeping up the ladder slowly but surely. We’ve seen hip replacements, organ transplants, better nutrition, exercise, and getting a better feel for what it takes to be healthy,” Shermer says. “The idea that we’re gonna’ have one big giant discovery made that’s going to change everything? I think that’s less likely than just small incremental things. A Utopian (society) where everybody gets to live forever and they’re infinitely happy and prosperous and so on? I think it’s unrealistic to think along those lines.”

Looking at the future of technology, Shermer is equally reluctant to buy into predictions of artificial intelligence taking over the world. “I think the concern about AI turning evil (and) this dystopian, science fiction perspective is again, not really grounded in reality. I’m an AI optimist, but I don’t think the AI pessimists have any good arguments,” Shermer said.

While we know, for the most part, which types of governments work well, we don’t have any similar precedent for complex AI systems. Humans will remain in control and, before we start passing laws and restrictions to curb AI out of fear, Shermer believes we should keep improving our computers and artificial intelligence to make life better, evaluating and taking action as these systems continue to evolve.

]]>
When Will Virtual Embodiment Take Shape in Mainstream Society? https://spanish.lifeboat.com/blog/2016/03/when-will-virtual-embodiment-take-shape-in-mainstream-society Tue, 08 Mar 2016 20:06:43 +0000 http://lifeboat.com/blog/?p=23243 Virtual and augmented reality is taking giant leaps every day, both in the mainstream and in research labs. In a recent TechEmergence interview, Biomedical Engineer and Founder of g.tec Medical Engineering Christopher Guger said the next big steps will be in brain-computer interfaces (BCIs) and embodiment.

Image credit: HCI International

If you’re unfamiliar with the term, embodiment is the moment when a person truly “feels” at one with a device controlled by their thoughts, sensing that device as a part of, or an extension of, themselves. While researchers are taking big strides toward that concept, Guger believes these are only baby steps toward what is to come.

While augmented or virtual reality can take us away for a brief period, Guger said true embodiment will require far more BCI development. There has been a lot of work recently in robotic embodiment using BCI.

“We have the robotic system, which is learning certain tasks. You can train the robotic system to pick up objects, to play a musical instrument and, after the robotic system has learned, you’re just giving the high-level command for the robotic system to do it for you,” he said. “This is like a human being, where you train yourself for a certain task and you have to learn it. You need your cortex and a lot of neurons to do the task. Sometimes, it’s pre-programmed and (sometimes) you’re just making the high-level decision to do it.”

Another tool at work in the study of embodiment is what Guger called “virtual avatars.” These avatars allow researchers to experiment with embodiment, revealing how avatars need to behave while also helping humans grow more comfortable with the concept of embodiment inside an avatar. Being at ease inside the avatar, he said, makes it easier to learn tasks and to train, or re-train, for specific functions.

As an example, Guger cited a stroke patient working to regain movement in his hand. Placing the patient into a virtual avatar, the patient can “see” the hand of the avatar moving in the same manner that he wants his own hand to move. This connection activates mirror neurons in the patient’s brain, which helps the brain rewire itself to regain a sense of the hand.

“We also do functional electrical stimulation (where) the hand is electrically stimulated, so you also get the same type of movement. This, altogether, has a very positive effect on the remobilization of the patient,” Guger said. “Your movement and the virtual movement, that’s all feeding back to the artificial systems in the cortex again and is affecting brain plasticity. This helps people learn to recover faster.”

One hurdle researchers are still working to overcome is the concept of “break in presence” (discussed in the article under the sub-heading ‘head-tracking module’): the moment when one’s immersion in a virtual reality world is interrupted by an outside influence, leading to the loss of embodiment. Avoiding that loss of embodiment, he said, is what researchers are striving for in order to make virtual reality a more effective technology.

Though Guger believes mainstream BCI use and true embodiment is still a ways off, other applications of BCI and embodiment are already happening in the medical field. In addition to helping stroke patients regain their mobility, there are BCI systems that allow doctors to do assessments of brain activity on coma patients, which provides some level of communication for both the patient and the family. Further, ALS patients are able to take advantage of BCI technology to improve their quality of life through virtual movement and communication.

“For the average person on the street, it’s very important that the BCI system is cheap and working, and it has to be faster or better compared to other devices that you might have,” he said. “The embodiment work shows that you can really be embodied in another device; this is only working if you are controlling it mentally, like the body is your own, because you don’t have to steer the keyboard or the mouse. It’s just your body and it’s doing what you want it to do. And then you gain something.” 

]]>
The Many Uses of Multi-Agent Intelligent Systems https://spanish.lifeboat.com/blog/2016/02/the-many-uses-of-multi-agent-intelligent-systems https://spanish.lifeboat.com/blog/2016/02/the-many-uses-of-multi-agent-intelligent-systems#comments Mon, 15 Feb 2016 21:19:35 +0000 http://lifeboat.com/blog/?p=22127 In professional cycling, it’s well known that a pack of 40 or 50 riders can ride faster and more efficiently than a single rider or small group. As such, you’ll often see cycling teams with different goals in a race work together to chase down a breakaway before the finish line.

This analogy is one way to think about collaborative multi-agent intelligent systems, which are poised to change the technology landscape for individuals, businesses, and governments, says Dr. Mehdi Dastani, a computer scientist at Utrecht University. The proliferation of these multi-agent systems could lead to significant systemic changes across society in the next decade.

Image credit: ResearchGate

“Multi-agent systems are basically a kind of distributed system with sets of software. A set can be very large. They are autonomous, they make their own decisions, they can perceive their environment,” Dastani said. “They can perceive other agents and they can communicate, collaborate or compete to get certain resources. A multi-agent system can be conceived as a set of individual softwares that interact.”

As a simple example of multi-agent systems, Dastani cited Internet mail servers, which connect with each other to exchange messages and packets of information. On a larger scale, he noted eBay’s online auctions, which use multi-agent systems to let a buyer find an item, enter a maximum price and then, if needed, raise the bid on the buyer’s behalf as the auction closes. Driverless cars are another good example of a multi-agent system, in which many pieces of software must communicate to make complicated decisions.
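The proxy-bidding behavior Dastani points to can be sketched as simple competing agents, each holding a private maximum and raising the standing bid by a fixed increment whenever it is outbid. This is only an illustrative model of the idea, not eBay’s actual implementation; the class, increment rule, and auction loop below are assumptions.

```python
class ProxyBidder:
    """Illustrative proxy-bidding agent (not eBay's actual algorithm)."""

    def __init__(self, max_price, increment=1.0):
        self.max_price = max_price    # the buyer's private ceiling
        self.increment = increment    # minimum raise over the standing bid

    def respond(self, current_bid, winning):
        """Return the agent's next bid, or None to stay put or drop out."""
        if winning:
            return None  # already the high bidder; don't bid against yourself
        next_bid = min(current_bid + self.increment, self.max_price)
        return next_bid if next_bid > current_bid else None


def run_auction(agents, opening_bid):
    """Let agents respond in turn until a full round passes with no new bid."""
    price, leader = opening_bid, None
    while True:
        raised = False
        for name, agent in agents.items():
            bid = agent.respond(price, leader == name)
            if bid is not None:
                price, leader, raised = bid, name, True
        if not raised:
            return leader, price


winner, price = run_auction(
    {"alice": ProxyBidder(10.0), "bob": ProxyBidder(7.0)}, opening_bid=1.0
)
print(winner, price)  # alice 8.0 — bob drops out one increment above his ceiling
```

Note the second-price flavor: the winner pays roughly the runner-up’s ceiling plus one increment, not her own maximum, which is the point of delegating bids to an agent.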

Dastani noted that multi-agent systems dovetail nicely with today’s artificial intelligence. In the early days of AI, intelligence was a property of one single entity of software that could, for example, understand human language or perceive visual inputs to make its decisions, interact, or perform an action. As multi-agent systems have been developed, those single agents interact and receive information from other agents that they may lack, which allows them to collectively create greater functionality and more intelligent behavior.

“When we consider (global) trade, we basically define a kind of interaction in terms of action. This way of interacting among individuals might make their market more efficient. Their products might get to market for a better price, as the amount of time (to produce them) might be reduced,” Dastani said. “When we get into multi-agent systems, we consider intelligence as sort of an emergent phenomena that can be very functional and have properties like optimal global decision or situations of state.” 

Other potential applications of multi-agent systems include designs for energy-efficient appliances, such as a washing machine that can contact an energy provider so that it operates during off-peak hours or a factory that wants to flatten out its peak energy use, he said. Municipal entities can also use multi-agent systems for planning, such as simulating traffic patterns to improve traffic efficiency.
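The off-peak washing machine can be thought of as one agent querying another (the provider’s) for a price schedule and choosing when to run. A minimal sketch, assuming a hypothetical two-tier tariff published by the provider’s agent:

```python
# Hypothetical hourly tariff a provider's agent might publish ($/kWh):
# peak pricing from 08:00 to 21:00, off-peak otherwise.
tariff = {hour: (0.30 if 8 <= hour < 21 else 0.12) for hour in range(24)}

def cheapest_start(tariff, duration):
    """Return the start hour minimizing total cost for a run of
    `duration` consecutive hours (wrapping past midnight)."""
    def cost(start):
        return sum(tariff[(start + i) % 24] for i in range(duration))
    return min(range(24), key=cost)

start = cheapest_start(tariff, duration=2)
print(start)  # 0 — the first fully off-peak two-hour window
```

Flattening a factory’s peak load is the same optimization with more appliances negotiating over the same schedule, which is where the multi-agent coordination Dastani describes comes in.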

Looking to the future, Dastani notes the parallels between multi-agent systems and Software as a Service (SaaS) computing, which could shed light on how multi-agent systems might evolve.  Just as SaaS combines various applications for on-demand use, multi-agent systems can combine functionalities of various software to provide more complex solutions. The key to those more complex interactions, he added, is to develop a system that will govern the interactions of multi-agent systems and overcome the inefficiencies that can be created on the path toward functionality.

“The idea is the optimal interaction that we can design or we can have. Nevertheless, that doesn’t mean that multi-agent systems are by definition, efficient,” Dastani said. “We can have many processes that communicate, make an enormous number of messages and use a huge amount of resources and they still can not have a sort of interesting functionality. The whole idea is, how can we understand and analyze the interactions? How can we decide which interaction is better than the other interactions or more efficient or more productive?”

]]>
https://spanish.lifeboat.com/blog/2016/02/the-many-uses-of-multi-agent-intelligent-systems/feed 1
How Technology Changes the Way We Diagnose and Treat Mental Illness https://spanish.lifeboat.com/blog/2016/01/how-technology-changes-the-way-we-diagnose-and-treat-mental-illness Tue, 26 Jan 2016 23:42:34 +0000 http://lifeboat.com/blog/?p=21219 As recently as 50 years ago, psychiatry lacked a scientific foundation; the medical community considered mental illness a disorder of the mind, and mental patients were written off as “sick in the head.” Progress has since yielded today’s modern imaging devices, which allow neuroscientists and psychiatrists to examine the brain of an individual suffering from a mental disorder and identify the best treatment options. In a recent interview, Columbia University Psychiatry Chair Dr. Jeffrey Lieberman stated that new research into understanding the mind is growing at an accelerated pace.

(iStock)

Lieberman noted that, just as Galileo couldn’t prove heliocentrism until he had a telescope, psychiatry lacked the technological sophistication, tools, and instruments necessary to get an understanding of the brain until the 1950s.  It wasn’t until the advent of psychopharmacology and neuroimaging, he said, that researchers could look inside the so-called black box that is the brain.

“(It began with) the CAT scan, magnetic resonance imaging (MRI) systems, positron emission tomography (PET scans) and then molecular genetics. Most recently, the burgeoning discipline of neuroscience and all of the methods within, beginning with molecular biology and progressing to optogenetics, this capacity has given researchers the ability to deconstruct the brain, understand its integral components, its mechanisms of action and how they underpin mental function and behavior,” Lieberman said. “The momentum that has built is almost like Moore’s law with computer chips, (and) you see this increasing power occurring with exponential sort of growth.”

Specifically, the use of MRIs and PET scans has allowed researchers to study the actual functional activity of different circuits and regions of the brain, Lieberman noted. Further, PET scans provided a look at the chemistry of the brain, which has allowed for the development of more sophisticated pathological theories. These measures, he said, were used to develop treatments while also allowing measurement of the effectiveness of both medication-based therapies and psychotherapies.

As an example, Lieberman cited the use of imaging in the treatment of post-traumatic stress disorder (PTSD). The disorder, a hyperarousal that chronically persists even in the absence of threatening stimulation, is treated through a method called desensitization. Over time, researchers have been able to fine-tune the desensitization therapies and treatments by accessing electronic images of the brain, which can show if there’s been a reduction in the activation of the affected amygdala.

Lieberman noted that despite progress in this area, technology has not replaced interaction with the individual patient; however, as technology continues to evolve, he expects the diagnoses of mental disorders to be refined.

“By the use of different technologies including genetics (and) imaging, including electrophysiological assessments, which are kind of EEG based, what we’ll have is one test that can confirm conditions that were previously defined by clinical description of systems,” Lieberman said. “I think, of all the disciplines that will do this, genetics will be the most informative.”

Just as genetics is currently used alongside anatomy and histology to diagnose cancer, Lieberman said the expanding field is helping researchers distinguish mental illness in individuals with certain genetic mutations. He expects that in the future, doctors will use “biochips” to routinely screen patients and provide a therapy targeted against the implicated gene or gene product. These chips will carry panels of genes known to be potentially associated with the risk for mental illness.

“Someone used the analogy of saying the way we treat depression now is as if you needed to put coolant into your car. Instead of putting it into the radiator, you just dump it on the engine,” he said. “So genetics will probably be the most powerful method to really tailor to the individual and use this technique of precision and personalized medicine.”

Lieberman also sees additional promise in magnetic stimulation, deep brain stimulation through the surgical implanting of electrodes, and optogenetics. Though he has plenty of optimism for these and other potential treatments for mental illness, much of their continued growth may hinge on government policy and budgets. Recent coverage of gun violence in the United States, and a public call for better means of screening individuals for mental health afflictions, may be an unfortunate catalyst in moving funding forward in this research arena. A recent article from the UK’s Telegraph discusses Google’s newfound interest in this research, with the former head of the US National Institute of Mental Health now in a position at Google Life Sciences.

“Science, technology and healthcare are doing very well, but when it comes to the governmental process, I think we’re in trouble,” he said. “A welcome development in this regard is President Obama’s Human Brain Initiative, which if you look at the description of it, (is) basically to develop new tools in neurotechnology that can really move forward in a powerful way of being able to measure the function of the brain. Not by single cells or single circuits, but by thousands or tens of thousands of cells and multiple circuits simultaneously. That’s what we need.”

]]>
Connecting The Dots to Get the Big Picture with Artificial Intelligence https://spanish.lifeboat.com/blog/2016/01/connecting-the-dots-to-get-the-big-picture-with-artificial-intelligence https://spanish.lifeboat.com/blog/2016/01/connecting-the-dots-to-get-the-big-picture-with-artificial-intelligence#comments Wed, 20 Jan 2016 00:24:14 +0000 http://lifeboat.com/blog/?p=20935 Ask the average passerby on the street to describe artificial intelligence and you’re apt to get answers like C-3PO and Apple’s Siri. But for those who follow AI developments on a regular basis and swim just below the surface of the broad field, the idea that the foreseeable AI future might be driven more by Big Data than by big discoveries is probably not a huge surprise. In a recent interview with Data Scientist and Entrepreneur Eyal Amir, we discussed how companies are using AI to connect the dots between data and innovation.

Image credit: Startup Leadership Program Chicago

According to Amir, the ability to connect disparate pieces of big data has quietly become a strong force in a number of industries. In advertising, for example, companies can now tease apart data to discern the basics of who you are, what you’re doing, and where you’re going, and tailor ads to you based on that information.

“What we need to understand is that, most of the time, the data is not actually available out there in the way we think that it is. So, for example I don’t know if a user is a man or woman. I don’t know what amounts of money she’s making every year. I don’t know where she’s working,” said Eyal. “There are a bunch of pieces of data out there, but they are all suggestive. (But) we can connect the dots and say, ‘she’s likely working in banking based on her contacts and friends.’ It’s big machines that are crunching this.”

Amir used the example of image recognition to illustrate how AI is connecting the dots to make inferences and facilitate commerce. Many computer programs can now detect the image of a man on a horse in a photograph. Yet many of them miss the fact that the image actually shows a statue of a man on a horse, not the man himself. This lack of precision in analyzing broad data is part of what’s keeping autonomous cars on the curb until the use of AI in commerce advances.

“You can connect the dots enough that you can create new applications, such as knowing where there is a parking spot available in the street. It doesn’t make financial sense to put sensors everywhere, so making those connections between a bunch of data sources leads to precise enough information that people are actually able to use,” Amir said. “Think about, ‘How long is the line at my coffee place down the street right now?’ or ‘Does this store have the shirt that I’m looking for?’  The information is not out there, but most companies don’t have a lot of incentive to put it out there for third parties. But there will be the ability to…infer a lot of that information.”

This greater ability to connect information and deliver more precise information through applications will come when everybody chooses to pool their information, Amir said. While he expects a fair bit of resistance to that concept, Amir predicts that there will ultimately be enough players working together to infer and share information; this approach may provide more benefits on an aggregate level, as compared to an individual company that might not have the same incentives to share.

As more data is collected and analyzed, another trend that Amir sees on the horizon is more autonomy being given to computers. Far from the dire predictions of runaway computers ruling the world, he sees a ‘supervised’ autonomy in which computers have the ability to perform tasks using knowledge that is out of reach for humans. Of course, this means developing a sense of trust and allowing the computer to make more choices for us.

“The same way that we would let our TiVo record things that are of interest to us, it would still record what we want, but maybe it would record some extras. The same goes with (re-stocking) my groceries every week,” he said. “There is this trend of ‘Internet of Things,’ which brings together information about the contents of your refrigerator, for example. Then your favorite grocery store would deliver what you need without you having to spend an extra hour (shopping) every week.” 

On the other hand, Amir does have some potential concerns about the future of artificial intelligence, comparable to what’s been voiced by Elon Musk and others. Yet he emphasizes that it’s not just the technology we should be concerned about.

“At the end, this will be AI controlled by market forces. I think the real risk is not the technology, but the combination of technology and market forces. That, together, poses some threats,” Amir said. “I don’t think that the computers themselves, in the foreseeable future, will terminate us because they want to. But they may terminate us because the hackers wanted to.”

]]>
https://spanish.lifeboat.com/blog/2016/01/connecting-the-dots-to-get-the-big-picture-with-artificial-intelligence/feed 1