Blog

Archive for the ‘information science’ category: Page 81

Apr 24, 2023

Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT

Posted by in categories: information science, robotics/AI, transportation

Like many people, you may have had your mind blown recently by the possibility of ChatGPT and other large language models like the new Bing or Google’s Bard.

For anyone who somehow hasn’t come across them — which is probably unlikely as ChatGPT is reportedly the fastest-growing app of all time — here’s a quick recap:

Continue reading “Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT” »

Apr 23, 2023

On theoretical justification of the forward–backward algorithm for the variational learning of Bayesian hidden Markov models

Posted by in categories: computing, information science

The hidden Markov model (HMM) [1, 2] is a powerful model for describing sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes that the system has K internal states and that the transitions between states form a Markov chain. The system state cannot be observed directly, so the hidden states and system parameters must be inferred from observations. Because of these latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is calculating the site marginal distributions and pairwise marginal distributions under the posterior distribution of the latent variables. The forward-backward algorithm was designed specifically to tackle this problem. Its derivation relies heavily on the HMM assumptions and on probabilistic relationships between quantities, and therefore requires the parameters in the posterior distribution to have explicit probabilistic meanings.

A Bayesian HMM [15-22] further imposes priors on the parameters of the HMM, and the resulting model is more robust; Bayesian HMMs have often been shown to outperform plain HMMs in applications. However, learning a Bayesian HMM is more challenging because the posterior distribution of the latent variables is intractable. Mean-field variational inference is therefore often used in the E-step of the EM algorithm, seeking an optimal approximation of the posterior distribution within a factorised family. The variational-inference iterations again involve computing site marginal distributions and pairwise marginal distributions given the joint distribution of the system-state indicator variables. Existing works [15-23] apply the forward-backward algorithm directly to obtain these values without justification. This is not theoretically sound, and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this case.

In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meanings. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMMs to prove the claim. The second proof is model-free: it derives the forward-backward algorithm in an entirely different way, relying not on HMM assumptions but merely on matrix techniques to rewrite the desired quantities. This derivation therefore shows that no probabilistic requirements need to be placed on the parameters of the forward-backward algorithm. In particular, it justifies that heuristically applying the forward-backward algorithm in the variational learning of a Bayesian HMM is theoretically sound and guaranteed to return the correct result.
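The site marginals the abstract refers to are exactly what a textbook forward-backward pass computes. Here is a minimal sketch in plain Python; the toy model, variable names, and the omission of numerical rescaling are our own simplifications, not the paper's.

```python
# Minimal forward-backward sketch for a K-state HMM (illustrative only;
# no rescaling, so suitable only for short sequences).

def forward_backward(pi, A, B, obs):
    """Return the site marginals p(z_t | x_1:T) for each time step.

    pi  : initial state distribution, length K
    A   : K x K transition matrix, A[i][j] = p(z_{t+1}=j | z_t=i)
    B   : K x M emission matrix,   B[i][o] = p(x_t=o  | z_t=i)
    obs : observation sequence of integer symbols
    """
    K, T = len(pi), len(obs)

    # Forward pass: alpha[t][k] = p(x_1:t, z_t = k).
    alpha = [[pi[k] * B[k][obs[0]] for k in range(K)]]
    for t in range(1, T):
        alpha.append([B[k][obs[t]] *
                      sum(alpha[t - 1][j] * A[j][k] for j in range(K))
                      for k in range(K)])

    # Backward pass: beta[t][k] = p(x_{t+1}:T | z_t = k).
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[k][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                       for j in range(K))
                   for k in range(K)]

    # Site marginals: gamma[t][k] = p(z_t = k | x_1:T), normalised per step.
    gamma = []
    for t in range(T):
        w = [alpha[t][k] * beta[t][k] for k in range(K)]
        s = sum(w)
        gamma.append([v / s for v in w])
    return gamma

# Toy 2-state example: state 0 tends to emit symbol 0, state 1 symbol 1.
gamma = forward_backward(pi=[0.6, 0.4],
                         A=[[0.7, 0.3], [0.4, 0.6]],
                         B=[[0.9, 0.1], [0.2, 0.8]],
                         obs=[0, 0, 1])
```

The pairwise marginals mentioned in the abstract follow the same pattern, with the un-normalised weight for a transition at time t being `alpha[t][i] * A[i][j] * B[j][obs[t+1]] * beta[t+1][j]`.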

Apr 23, 2023

Quantum circuit learning as a potential algorithm to predict experimental chemical properties

Posted by in categories: chemistry, information science, quantum physics

We introduce quantum circuit learning (QCL) as an emerging regression algorithm for chemo- and materials-informatics. The supervised model, operating on the rules of quantum mechanics, can process linear and smooth non-linear functions from small datasets (100 records). Compared with conventional algorithms, such as random forests, support vector machines, and linear regression, QCL can offer better predictions on some one-dimensional functions and experimental chemical databases. QCL could potentially make the virtual exploration of new molecules and materials more efficient through its superior prediction performance.

Apr 22, 2023

The Multiverse: Our Universe Is Suspiciously Unlikely to Exist—Unless It Is One of Many

Posted by in categories: alien life, information science, particle physics

But we expect that it’s in that first tiny fraction of a second that the key features of our universe were imprinted.

The conditions of the universe can be described through its “fundamental constants”—fixed quantities in nature, such as the gravitational constant (called G) or the speed of light (called c). There are about 30 of these, representing the sizes and strengths of parameters such as particle masses, forces, and the universe’s expansion. But our theories don’t explain what values these constants should have. Instead, we have to measure them and plug their values into our equations to accurately describe nature.
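"Measure them and plug their values into our equations" is quite literal. A minimal sketch, using rounded (CODATA-style) values for G and c and Newton's law of gravitation as the example equation:

```python
# Illustrative only: measured constants plugged into an equation.
# The numeric values are rounded, not exact CODATA figures.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gravitational_force(m1, m2, r):
    """Newton's law of gravitation: F = G * m1 * m2 / r^2, in newtons."""
    return G * m1 * m2 / r**2

# Earth-Moon attraction, with rounded masses and mean orbital distance:
F = gravitational_force(5.97e24, 7.35e22, 3.84e8)  # roughly 2e20 N
```

The point of the article is that nothing in the theory tells us why G is 6.674e-11 rather than some other number; the value is an experimental input.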

Continue reading “The Multiverse: Our Universe Is Suspiciously Unlikely to Exist—Unless It Is One of Many” »

Apr 21, 2023

Artificial intelligence has improved the first-ever real photo of a supermassive black hole 6.5 billion times heavier than the Sun

Posted by in categories: cosmology, information science, robotics/AI

In 2017, the Event Horizon Telescope (EHT) collaboration captured the data for the first-ever real image of a black hole. Six years later, artificial intelligence was able to improve the image.

Here’s What We Know

American scientists set out to sharpen the photo of the black hole. The original image shows something resembling a “fuzzy donut”. The researchers applied PRIMO, an algorithm based on machine learning, to enhance the image.

Apr 21, 2023

Giant orbital magnetic moment appears in a graphene quantum dot

Posted by in categories: computing, information science, particle physics, quantum physics

A giant orbital magnetic moment exists in graphene quantum dots, according to new work by physicists at the University of California, Santa Cruz in the US. As well as being of fundamental interest for studying systems with relativistic electrons – that is, those travelling at near-light speeds – the work could be important for quantum information science, since these moments could encode information.

Graphene, a sheet of carbon just one atom thick, has a number of unique electronic properties, many of which arise from the fact that it is a semiconductor with a zero-energy gap between its valence and conduction bands. Near where the two bands meet, the relationship between the energy and momentum of charge carriers (electrons and holes) in the material is described by the Dirac equation and resembles that of a photon, which is massless.

These bands, called Dirac cones, enable the charge carriers to travel through graphene at extremely high, “ultra-relativistic” speeds approaching that of light. This extremely high mobility means that graphene-based electronic devices such as transistors could be faster than any that exist today.
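The "photon-like" dispersion described above means a carrier's energy grows linearly with its wavevector, E = ħ·v_F·|k|, rather than quadratically as for a massive particle. A small sketch, where the Fermi velocity v_F ≈ 1×10⁶ m/s is the commonly quoted value for graphene (roughly c/300):

```python
# Sketch of graphene's linear Dirac-cone dispersion, E = hbar * v_F * |k|.
HBAR = 1.0546e-34   # reduced Planck constant, J*s
V_F = 1.0e6         # commonly quoted Fermi velocity in graphene, m/s
EV = 1.602e-19      # joules per electron-volt

def dirac_energy_ev(k):
    """Energy (in eV) of a graphene carrier with wavevector magnitude k (1/m).

    Unlike the quadratic E = (hbar*k)^2 / (2m) of a massive particle,
    the energy is linear in k, just as for a massless photon (E = hbar*c*k).
    """
    return HBAR * V_F * abs(k) / EV

# Doubling k doubles E -- the hallmark of a massless, linear dispersion.
e1 = dirac_energy_ev(1e9)   # k = 1 nm^-1, a bit over half an eV
e2 = dirac_energy_ev(2e9)
```

The absence of a mass term in this dispersion is what makes the carriers "ultra-relativistic" and underlies graphene's very high mobility.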

Apr 20, 2023

Is deep learning a necessary ingredient for artificial intelligence?

Posted by in categories: information science, robotics/AI, transportation

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to solve more complex classification tasks, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. This is the essential component of current deep learning algorithms. Deep learning improves performance on analytical and physical tasks without human intervention, and it lies behind everyday automation products such as emerging self-driving cars and chatbots.
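The one-layer Perceptron mentioned above fits in a few lines of code. This is a generic sketch of Rosenblatt-style learning on a toy task; the learning rate, epoch count, and AND-gate data are illustrative choices, not from the article.

```python
# A one-layer Perceptron in the spirit of Rosenblatt's original (illustrative).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so sign(w.x + b) matches each +1/-1 label."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Classic perceptron rule: update the weights only on a mistake.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Linearly separable toy task: logical AND on {0,1}^2.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
```

A single layer like this can only learn linearly separable problems (it famously fails on XOR), which is exactly the limitation that motivated the multi-layer feedforward architectures behind today's deep learning.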

The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity.

Continue reading “Is deep learning a necessary ingredient for artificial intelligence?” »

Apr 20, 2023

What’s AGI, and Why Are AI Experts Skeptical?

Posted by in categories: information science, robotics/AI

ChatGPT and other bots have revived conversations on artificial general intelligence. Scientists say algorithms won’t surpass you any time soon.

Apr 19, 2023

Algorithms Simulate Infinite Quantum System on Finite Quantum Computers

Posted by in categories: computing, information science, quantum physics

Note: this result dates from 2021.

Researchers say algorithms can simulate an infinite quantum system on finite quantum computers in interesting advance for quantum tech.

Apr 19, 2023

A defence of human uniqueness against AI encroachment, with Kenn Cukier

Posted by in categories: economics, employment, information science, robotics/AI, singularity

Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full “general intelligence”. And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.

Continue reading “A defence of human uniqueness against AI encroachment, with Kenn Cukier” »

Page 81 of 322