Blog

Archive for the ‘ethics’ category: Page 61

Jan 17, 2016

Machine learning’s hand in touch-less, straight-through processing and beyond

Posted in categories: employment, ethics, robotics/AI

AI can easily replace much of back-office operations, and some front-office work, over time. As a result, governments and companies will need a massive joint social-support and displacement program to re-school and re-tool workers, and to financially support them and their families until they can be retrained for one of the existing jobs or one of the new careers that AI creates. A social obligation will be placed back on companies at a scale we have never seen before. With power and wealth truly comes a level of moral responsibility imposed by society.


Tradeshift CloudScan uses machine learning to create automatic mappings from image files and PDFs into a structured format such as UBL.
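To make the mapping step concrete, here is a minimal sketch, not Tradeshift's actual pipeline, of how text extracted from a scanned invoice might be mapped into a UBL-like XML structure. The regular-expression rules stand in for the learned model, and all field names and sample values are illustrative assumptions.

```python
# Toy sketch: map raw invoice text (e.g., OCR output from an image or PDF)
# into a minimal UBL-like XML document. The regex rules below stand in for
# the learned mapping; field names and patterns are illustrative assumptions.
import re
import xml.etree.ElementTree as ET

def extract_fields(raw_text):
    """Pull a few common invoice fields out of free text (illustrative only)."""
    patterns = {
        "ID": r"Invoice\s*(?:No\.?|Number)[:\s]+(\S+)",
        "IssueDate": r"Date[:\s]+(\d{4}-\d{2}-\d{2})",
        "PayableAmount": r"Total[:\s]+\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, raw_text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

def to_ubl_like_xml(fields):
    """Emit a minimal UBL-flavoured Invoice element from the extracted fields."""
    invoice = ET.Element("Invoice")
    for name, value in fields.items():
        ET.SubElement(invoice, name).text = value
    return ET.tostring(invoice, encoding="unicode")

if __name__ == "__main__":
    sample = "Invoice No: INV-1042\nDate: 2016-01-17\nTotal: $1,250.00"
    print(to_ubl_like_xml(extract_fields(sample)))
```

In a production system the hand-written patterns would be replaced by a trained extraction model, but the output step, emitting structured UBL-style fields, is the same idea.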

Read more

Jan 17, 2016

DIY gene-editing kit: Is it fun or scary?

Posted in categories: biotech/medical, ethics, food, genetics, habitats

Although the recent article announcing California scientist Josiah Zayner’s new do-it-yourself gene-editing kit for $120 sent shock waves across the industry, and raised anew the question of how we best put controls in place to ensure ethics and prevent a disaster or crisis, this genie truly is out of the bottle. Because Josiah created the kit easily in his own kitchen, it can be replicated by many others in their own homes. What we have to decide is how best to mitigate its impact. Black markets and exotic-animal collectors around the world will pay handsomely for this capability, raising the stakes that the most bizarre animals, deadly and non-deadly, will be created for their profit and amusement.


BURLINGAME, Calif. — On the kitchen table of his cramped apartment, Josiah Zayner is performing the feat that is transforming biology. In tiny vials, he’s cutting, pasting and stirring genes, as simply as mixing a vodka tonic. Next, he slides his new hybrid creations, living in petri dishes, onto a refrigerator shelf next to the vegetables. And he’s packaging and selling his DIY gene-editing technique for $120 so that everyone else can do it, too.

Read more

Dec 21, 2015

Inside OpenAI: Will Transparency Protect Us From Artificial Intelligence Run Amok?

Posted in categories: Elon Musk, ethics, finance, robotics/AI

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes not just to focus on developing digital intelligence, but also to guide research along an ethical route that, according to its inaugural blog post, “benefits humanity as a whole.”

Read more

Dec 17, 2015

Ethics on the near-future battlefield

Posted in categories: bioengineering, biotech/medical, cyborgs, ethics, food, genetics, military, neuroscience, robotics/AI

The US Army’s report visualizes augmented soldiers and killer robots.


The US Army’s recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” describes a number of future war scenarios that raise vexing ethical dilemmas. Among the many tactical developments envisioned by the authors, a group of experts brought together by the US Army Research Laboratory, three stand out as both plausible and fraught with moral challenges: augmented humans, directed-energy weapons, and autonomous killer robots. The first two technologies affect humans directly, and therefore present both military and medical ethical challenges. The third development, robots, would replace humans, and thus poses hard questions about implementing the law of war without any attending sense of justice.

Augmented humans. Drugs, brain-machine interfaces, neural prostheses, and genetic engineering are all technologies that may be used in the next few decades to enhance the fighting capability of soldiers, keep them alert, help them survive longer on less food, alleviate pain, and sharpen and strengthen their cognitive and physical capabilities. All raise serious ethical and bioethical difficulties.

Continue reading “Ethics on the near-future battlefield” »

Dec 16, 2015

Russia, China Building ‘Robot’ Army

Posted in categories: business, ethics, military, robotics/AI, security

Despite more than a thousand artificial-intelligence researchers signing an open letter this summer in an effort to ban autonomous weapons, Business Insider reports that China and Russia are in the process of creating self-sufficient killer robots, which in turn is putting pressure on the Pentagon to keep up.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” U.S. Deputy Secretary of Defense Robert Work said during a national security forum on Monday.

Work added, “[Gerasimov] said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.’”

Read more

Dec 15, 2015

When Will We Look Robots in the Eye?

Posted in categories: ethics, human trajectories, robotics/AI

In the various incarnations of Douglas Adams’ Hitchhiker’s Guide To The Galaxy, a sentient robot named Marvin the Paranoid Android serves on the starship Heart of Gold. Because he is never assigned tasks that challenge his massive intellect, Marvin is horribly depressed, always quite bored, and a burden to the humans and aliens around him. But he does write nice lullabies.

While Marvin is a fictional robot, Scholar and Author David Gunkel predicts that sentient robots will soon be a fact of life and that mankind needs to start thinking about how we’ll treat such machines, at present and in the future.

For Gunkel, the question is about moral standing and how we decide whether something does or does not have it. As an example, Gunkel notes that our children have moral standing, while a rock or a smartphone may not. From there, he said, the question becomes: where and how do we draw the line to decide who is inside and who is outside the moral community?

“Traditionally, the qualities for moral standing are things like rationality, sentience (and) the ability to use languages. Every entity that has these properties generally falls into the community of moral subjects,” Gunkel said. “The problem, over time, is that these properties have changed. They have not been consistent.”

Continue reading “When Will We Look Robots in the Eye?” »

Dec 14, 2015

Why Infosys is joining Elon Musk, Y Combinator and others in pledging $1 billion for OpenAI — By Harshith Mallya | YourStory

Posted in categories: education, ethics, open source, robotics/AI


“Our trust in complex systems stems mostly from understanding their predictability, whether it is nuclear reactors, lathe machines, or 18-wheelers; or of course, AI. If complex systems are not open to be used, extended, and learned about, they end up becoming yet another mysterious thing for us, ones that we end up praying to and mythifying. The more open we make AI, the better.”

Read more

Dec 7, 2015

Can The Existential Risk Of Artificial Intelligence Be Mitigated?

Posted in categories: ethics, existential risks, futurism, government, human trajectories, robotics/AI

It seems like every day we’re warned about a new, AI-related threat that could ultimately bring about the end of humanity. According to Author and Oxford Professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence those risks might surprise you.

Image Credit: TED

Bostrom defines an existential risk as one that threatens the extinction of Earth-originating intelligent life or the permanent and drastic destruction of our potential for future development, but he also notes that there is no single methodology applicable to all the different existential risks (as elaborated upon in more technical detail in this Future of Humanity Institute study). Rather, he considers assessing such risks an interdisciplinary endeavor.

“If you’re wondering about asteroids, we have telescopes we can study them with, and we can look at past crater impacts and derive hard statistical data on that,” he said. “We find that the risk of asteroids is extremely small, and likewise for a few of the other risks that arise from nature. But other really big existential risks are not in any direct way susceptible to this kind of rigorous quantification.”
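As a rough illustration of the kind of quantification Bostrom describes for natural risks, the sketch below turns a hypothetical crater record into an impact-rate estimate under a simple Poisson assumption; the counts and time spans are made-up placeholders, not real data.

```python
# Toy sketch: turn a crater record into a rough impact-rate estimate and the
# probability of at least one comparable impact in a given window, assuming a
# constant-rate Poisson process. All numbers below are illustrative, not real data.
import math

def impact_probability(n_impacts, record_years, window_years):
    """P(at least one impact in the window) under a constant-rate Poisson model."""
    rate = n_impacts / record_years          # estimated impacts per year
    return 1.0 - math.exp(-rate * window_years)

if __name__ == "__main__":
    # Hypothetical record: 2 large impacts over 1,000,000 years of crater evidence.
    p = impact_probability(n_impacts=2, record_years=1_000_000, window_years=100)
    print(f"Chance of at least one such impact in the next century: {p:.4%}")
```

Risks without such a historical record, which is the point of the quote, offer no analogous data to plug into a model like this.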

Continue reading “Can The Existential Risk Of Artificial Intelligence Be Mitigated?” »

Nov 30, 2015

Is Media Attention Hindering The Development Of Artificial Intelligence?

Posted in categories: ethics, robotics/AI

Image Credit: LinkedIn

As the line between tabloid media and mainstream media becomes more diffuse, news items such as Ebola, pit bulls, Deflategate, and Donald Trump can frequently generate a cocktail of public panic, scrutiny, and scorn before the news cycle moves on to the next sensational headline. According to Robotics Expert and self-proclaimed “Robot Psychiatrist” Dr. Joanne Pransky, the same phenomenon has happened in robotics, which can shape public perception and, by extension, the future development of robots and AI.

“The challenge, since robotics is just starting to come into the mainstream, is that most of the country is ignorant. So, if you believe what you read, then I think people have a very negative and inaccurate picture (of robotics),” Pransky said. “I spend a lot of time bashing negative headlines, such as ‘ROBOT KILLS HUMAN,’ when actually the human killed himself by not following proper safety standards. A lot of things are publicized about robotics, but there’s nothing about the robot in the article. It leads people on the wrong path.”

Continue reading “Is Media Attention Hindering The Development Of Artificial Intelligence?” »

Nov 27, 2015

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech

Posted in categories: business, ethics, robotics/AI

Russell also signed the letter, but he says his view is less apocalyptic. He says that, until now, the field of artificial intelligence has been singularly focused on giving robots the ability to make “high-quality” decisions.

“At the moment, we don’t know how to give the robot what you might call human values,” he says.

But Russell believes that as this problem becomes clearer, it’s only natural that people will start to focus their energy on solving it.

Read more

Page 61 of 82