Blog

Archive for the ‘information science’ category: Page 160

Nov 3, 2021

Trust The AI? You Decide

Posted in categories: biotech/medical, information science, robotics/AI

Trust in AI. If you’re a clinician or a physician, would you trust this AI?

Clearly, sepsis treatment deserves focused attention, which is what Epic gave it. But in doing so, the company raised several thorny questions. Should the model be recalibrated for each discrete implementation? Are its workings transparent? Should such algorithms publish a confidence level along with each prediction? Are humans sufficiently in the loop to ensure that the algorithm outputs are being interpreted and implem…
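Those questions are not merely rhetorical; recalibration and confidence reporting are concrete engineering steps. As a rough, hypothetical sketch (not Epic's model or pipeline, using synthetic stand-in data), a vendor-trained classifier can be recalibrated against a hospital's own patient population and report a calibrated probability alongside each alert:

```python
# Hypothetical sketch: recalibrate a vendor-trained classifier on local site
# data and report a calibrated probability with each alert.
# All data is synthetic; this is not Epic's sepsis model or pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV

# Stand-in for the vendor's training cohort and the local hospital's cohort.
X, y = make_classification(n_samples=6000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_vendor, X_site, y_vendor, y_site = train_test_split(X, y, test_size=1000, random_state=0)

base = GradientBoostingClassifier().fit(X_vendor, y_vendor)

# Recalibrate the frozen vendor model against the local population.
calibrated = CalibratedClassifierCV(base, cv="prefit", method="isotonic")
calibrated.fit(X_site, y_site)

# Publish a probability (confidence) alongside the binary alert.
for p in calibrated.predict_proba(X_site[:5])[:, 1]:
    print(f"sepsis alert: {p >= 0.5}, calibrated probability: {p:.2f}")
```

The point of the sketch is simply that per-site recalibration and a confidence output are cheap to add once local outcome data exists.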


Earlier this year, I wrote about fatal flaws in algorithms that were developed to mitigate the COVID-19 pandemic. Researchers found two general types of flaws. The first is that model makers used small data sets that didn't represent the universe of patients the models were intended to serve, leading to sample selection bias. The second is that modelers failed to disclose data sources, data-modeling techniques, and the potential for bias in either the input data or the algorithms used to train their models, leading to design-related bias. As a result of these fatal flaws, such algorithms were inarguably less effective than their developers had promised.
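To make the first flaw concrete, here is a minimal synthetic simulation (not drawn from any real COVID-19 model or dataset): a classifier trained on a narrow subgroup learns a pattern that happens to hold only there, and its performance collapses on the broader population it was meant to serve.

```python
# Synthetic illustration of sample selection bias: train on one small
# subgroup, then evaluate on the wider population the model was meant for.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
true_risk = rng.normal(0, 1, n)
outcome = (rng.random(n) < 1 / (1 + np.exp(-2 * true_risk))).astype(int)

# A feature that tracks the outcome only at the small training hospital
# (say, a local test-ordering habit) and is pure noise everywhere else.
at_training_site = rng.random(n) < 0.1
spurious = np.where(at_training_site,
                    outcome + rng.normal(0, 0.3, n),
                    rng.normal(0, 1, n))
noisy_risk = true_risk + rng.normal(0, 1.5, n)   # weak version of the real signal
X = np.column_stack([noisy_risk, spurious])

model = LogisticRegression().fit(X[at_training_site], outcome[at_training_site])

auc_site = roc_auc_score(outcome[at_training_site],
                         model.predict_proba(X[at_training_site])[:, 1])
auc_rest = roc_auc_score(outcome[~at_training_site],
                         model.predict_proba(X[~at_training_site])[:, 1])
print(f"AUC at the training site: {auc_site:.2f}")   # looks excellent
print(f"AUC everywhere else:      {auc_rest:.2f}")   # substantially worse
```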

Continue reading “Trust The AI? You Decide” »

Nov 2, 2021

Big data and predictive modelling for the opioid crisis: existing research and future potential

Posted in categories: information science, security

A need exists to accurately estimate overdose risk and improve understanding of how to deliver treatments and interventions in people with opioid use…


The Microsoft 365 Defender security research team discovered a new vulnerability in macOS that allows an attacker to bypass System Integrity Protection (SIP), a critical security feature in macOS that uses kernel permissions to limit the ability to modify critical system files. Microsoft explains that it also found a similar technique […].

Nov 2, 2021

Deep Reasoning: Is this the Next Era of AI?

Posted in categories: information science, robotics/AI

Deep learning is a technology representing the next era of machine learning, a new category of algorithms that has proved its power to mimic human skills simply by learning from examples. In conventional machine learning, programmers create the algorithms, which are then responsible for learning from data and making decisions based on it.
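As a minimal illustration of "learning through examples" (a generic sketch, not tied to any particular product), a small neural network can be handed labeled samples and left to work out the mapping on its own:

```python
# Minimal sketch of "learning through examples": a small neural network is
# given labeled samples and adjusts its own weights, rather than following
# rules a programmer wrote by hand.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)              # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                        # learning = fitting to labeled examples

print("accuracy on unseen digits:", net.score(X_test, y_test))
```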

Some AI experts say there will be a shift in AI trends. For instance, the late 1990s and early 2000s saw the rise of machine learning, neural networks gained popularity in the early 2010s, and reinforcement learning has come to prominence more recently.

Well, these are just a couple of the shifts we've experienced over the past years.

Nov 2, 2021

AI provides fast, accurate diagnosis of heart failure

Posted in categories: biotech/medical, information science, robotics/AI

A new algorithm created by researchers at Mount Sinai Hospital, New York, has learned to identify subtle changes in electrocardiograms (ECGs) to predict whether a patient is developing heart failure.
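The article does not include the Mount Sinai team's code, so the following is only a generic, hypothetical sketch of the kind of model such a system might use: a small one-dimensional convolutional network that scans a multi-lead ECG trace and outputs a heart-failure probability. The lead count, sampling rate, and layer sizes are illustrative assumptions.

```python
# Hedged, generic sketch (not the Mount Sinai model): a small 1-D CNN that
# maps a 12-lead ECG recording to a probability of heart failure.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                    # x: (batch, leads, samples)
        z = self.features(x).squeeze(-1)     # (batch, 64)
        return torch.sigmoid(self.head(z))   # probability of heart failure

# Ten seconds of ECG at 500 Hz for a batch of four synthetic patients.
dummy = torch.randn(4, 12, 5000)
print(ECGNet()(dummy).shape)                 # torch.Size([4, 1])
```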

Nov 2, 2021

The First Artificial Intelligence to Beat Humans at Everything!

Posted in categories: employment, information science, robotics/AI, singularity

Artificial intelligence is rapidly improving and has recently reached a point where it can outperform humans in several highly competitive job markets, including the media. OpenAI and Intel are working on advanced AI algorithms that are starting to understand the world in a way similar to how we experience it. These models include OpenAI's CLIP, Codex, GPT-4, and others, each of which excels at certain tasks. Now the goal is to combine them to improve their generality and perhaps create a real, working artificial general intelligence. Whether AI supremacy will arrive before the singularity is unclear, but one thing is for sure: AI and machine learning will take over many jobs in the very near future.
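CLIP, at least, is publicly available, so its language-guided view of images can be tried directly. A brief sketch using the openly released CLIP checkpoint on the Hugging Face hub (the image URL and candidate labels are arbitrary examples, not anything from the video):

```python
# Zero-shot image classification with OpenAI's CLIP via Hugging Face
# transformers. The image URL and candidate labels are arbitrary examples.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

Because the labels are plain text, the same model can classify against any set of categories without retraining, which is the "generality" the video is pointing at.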


TIMESTAMPS:
00:00 The Rise of AI Supremacy.
01:15 What Text-Generation AI is doing.
03:28 OpenAI is not open at all?
06:12 The Image AI: CLIP
08:52 Is AI taking over every job?
10:32 Last Words.

#ai #agi #intel

Nov 2, 2021

Could Big Data Beat Our Opioid Crisis?

Posted in categories: information science, robotics/AI, terrorism

Experts in the AI and Big Data sphere consider October 2021 to be a dark month. Their pessimism isn’t fueled by rapidly shortening days or chilly weather in much of the country—but rather by the grim news from Facebook on the effectiveness of AI in content moderation.

This is unexpected. The social media behemoth has long touted tech tools such as machine learning and Big Data as answers to its moderation woes. As CEO Mark Zuckerberg explained for CBS News, “The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all—including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global.”

Nov 1, 2021

Human Brain Project researchers demonstrate highly efficient deep learning on a spiking neuromorphic chip

Posted in categories: information science, robotics/AI



Scientists from Heidelberg and Bern have succeeded in training spiking neural networks to solve complex tasks with extreme energy efficiency. The advance was enabled by the BrainScaleS-2 neuromorphic platform, which can be accessed online as part of the EBRAINS research infrastructure.

Developing a machine that processes information as efficiently as the human brain has been a long-standing research goal towards true artificial intelligence. An interdisciplinary research team at Heidelberg University and the University of Bern led by Dr Mihai Petrovici is tackling this problem with the help of biologically-inspired artificial neural networks.
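BrainScaleS-2 implements its spiking neurons in analog hardware, but the basic unit can be illustrated in ordinary software. The following is a simplified leaky integrate-and-fire neuron in NumPy; it is an illustration of the concept only, not the BrainScaleS-2 circuit model or the team's training method.

```python
# Simplified software illustration of a leaky integrate-and-fire neuron,
# the kind of spiking unit that neuromorphic chips implement in hardware.
# Not the BrainScaleS-2 model or the Human Brain Project team's code.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """Integrate input current; emit a spike and reset when the threshold is crossed."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-(v - v_rest) + i_t)   # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)
            v = v_rest                           # reset membrane potential after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.5, size=1000)       # one second of noisy input at 1 kHz
print("spike count:", lif_neuron(current).sum())
```

Information is carried by discrete spikes rather than continuous activations, which is why such hardware can be so energy efficient: the circuit is mostly idle between events.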

Continue reading “Human Brain Project researchers demonstrate highly efficient deep learning on a spiking neuromorphic chip” »

Oct 31, 2021

How Can Facebook Algorithms Be Accountable to Users?

Posted in categories: information science, robotics/AI

This is the first installment of a blog series summarizing AI Theology's panel discussion on how to make Facebook algorithms accountable to users.

Oct 30, 2021

Precision Medicine Data Dive Shows “Water Pill” Could Potentially Be Repurposed To Treat Alzheimer’s

Posted in categories: biotech/medical, genetics, information science, life extension, neuroscience

A commonly available oral diuretic pill approved by the U.S. Food and Drug Administration may be a potential candidate for an Alzheimer’s disease treatment for those who are at genetic risk, according to findings published in Nature Aging. The research included analysis showing that those who took bumetanide — a commonly used and potent diuretic — had a significantly lower prevalence of Alzheimer’s disease compared to those not taking the drug. The study, funded by the National Institute on Aging (NIA), part of the National Institutes of Health, advances a precision medicine approach for individuals at greater risk of the disease because of their genetic makeup.

The research team analyzed information in databases of brain tissue samples and FDA-approved drugs, performed mouse and human cell experiments, and explored human population studies to identify bumetanide as a leading drug candidate that may potentially be repurposed to treat Alzheimer’s.

“Though further tests and clinical trials are needed, this research underscores the value of big data-driven tactics combined with more traditional scientific approaches to identify existing FDA-approved drugs as candidates for drug repurposing to treat Alzheimer’s disease,” said NIA Director Richard J. Hodes, M.D.
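The study's own pipeline isn't reproduced here, but the general "data dive" strategy it describes, matching disease-associated gene-expression changes against drug-induced changes to find compounds that push expression in the opposite direction, can be sketched in toy form. The values and drug names below are placeholders, not data from the paper.

```python
# Toy sketch of signature-reversal drug screening: rank drugs by how strongly
# their expression signature anti-correlates with a disease signature.
# Gene values and drug names are made-up placeholders, not study data.
import numpy as np
from scipy.stats import spearmanr

disease_signature = np.array([2.1, -1.4, 0.9, -2.3, 1.7])   # disease-vs-control log fold-changes

drug_signatures = {
    "drug_1": np.array([-1.9, 1.2, -0.7, 2.0, -1.5]),   # roughly reverses the signature
    "drug_2": np.array([0.3, 0.1, -0.2, 0.4, 0.0]),     # little effect
    "drug_3": np.array([1.8, -1.0, 1.1, -1.9, 1.4]),    # mimics the disease signature
}

# More negative correlation = stronger predicted reversal of the disease state.
ranked = sorted(
    (spearmanr(disease_signature, sig)[0], name) for name, sig in drug_signatures.items()
)
for rho, name in ranked:
    print(f"{name}: Spearman rho = {rho:+.2f}")
```

In the real study, candidates ranked this way were then checked in mouse and human cell experiments and in population health records before bumetanide emerged as the lead candidate.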

Oct 30, 2021

New Algorithms Give Digital Images More Realistic Color

Posted in categories: augmented reality, biotech/medical, computing, information science, virtual reality

In Optica, The Optical Society’s (OSA) journal for high-impact research, Qiu and colleagues describe a new approach for digitizing color. It can be applied to cameras and displays — including ones used for computers, televisions and mobile devices — and used to fine-tune the color of LED lighting.

“Our new approach can improve today’s commercially available displays or enhance the sense of reality for new technologies such as near-eye-displays for virtual reality and augmented reality glasses,” said Jiyong Wang, a member of the PAINT research team. “It can also be used to produce LED lighting for hospitals, tunnels, submarines and airplanes that precisely mimics natural sunlight. This can help regulate circadian rhythm in people who are lacking sun exposure, for example.”