Blog

Archive for the ‘information science’ category: Page 131

Jun 28, 2022

Atomic quantum processors make their debut

Posted by in categories: computing, information science, particle physics, quantum physics

Two research groups demonstrate quantum algorithms using neutral atoms as qubits. Tim Wogan reports.

The first quantum processors that use neutral atoms as qubits have been produced independently by two US-based groups. The result offers the possibility of building quantum computers that could be easier to scale up than current devices.

Two technologies have dominated quantum computing so far, but they are not without issues. Superconducting qubits must be constructed individually, making it nearly impossible to fabricate identical copies, which reduces the probability of an operation producing the correct output – a measure known as “gate fidelity”. Moreover, each qubit must be cooled close to absolute zero. Trapped ions, on the other hand, have the advantage that every ion is guaranteed by the laws of quantum mechanics to be indistinguishable from every other. But while ions in a vacuum are relatively easy to isolate from thermal noise, they interact strongly with one another and so require electric fields to move them around.

Jun 26, 2022

The Next Generation Of IBM Quantum Computers

Posted by in categories: computing, information science, quantum physics

IBM is building accessible, scalable quantum computing by focusing on three pillars:

· Increasing qubit counts.

· Developing advanced quantum software that can abstract away infrastructure complexity and orchestrate quantum programs.

Continue reading “The Next Generation Of IBM Quantum Computers” »

Jun 26, 2022

‘Killer robots’ are coming. Is the US ready for the consequences?

Posted by in categories: information science, military, robotics/AI

🤖 Officially, they’re called “lethal autonomous weapons systems.” Colloquially, they’re called “killer robots.” Either way, you’re going to want to read about their future in warfare. 👇


The commander must also be prepared to justify his or her decision if and when the LAWS is wrong. As with the application of force by manned platforms, the commander assumes risk on behalf of his or her subordinates. In this case, a narrow, extensively tested algorithm with an extremely high level of certainty (for example, 99 percent or higher) should meet the threshold for a justified strike and absolve the commander of criminal accountability.

Lastly, LAWS must also be tested extensively in the most demanding possible training and exercise scenarios. The methods they use to make their lethal decisions—from identifying a target and confirming its identity to mitigating the risk of collateral damage—must be publicly released (along with statistics backing up their accuracy). Transparency is crucial to building public trust in LAWS, and confidence in their capabilities can only be built by proving their reliability through rigorous and extensive testing and analysis.

Continue reading “‘Killer robots’ are coming. Is the US ready for the consequences?” »

Jun 24, 2022

DeepMind Researchers Develop ‘BYOL-Explore’: A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks

Posted by in categories: information science, policy, robotics/AI


Reinforcement learning (RL) requires exploration of the environment. Exploration is even more critical when extrinsic rewards are sparse or difficult to obtain. In rich settings the environment is so large that visiting every state is impractical, so the agent must choose among many potentially useful exploration paths. The question, then, is: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model, a predictive model of specific knowledge about the world, and (ii) exploiting discrepancies between the world model’s predictions and experience to create intrinsic rewards.

An RL agent that maximizes these intrinsic rewards steers itself toward situations where the world model is unreliable or incomplete, generating new trajectories from which the world model can learn. In other words, the quality of the world model shapes the exploration policy, which in turn improves the world model by collecting new data. It may therefore be crucial to treat learning the world model and learning the exploration policy as one cohesive problem rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm whose appeal stems from its conceptual simplicity, generality, and excellent performance.
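
The core of this loop is simple enough to sketch in a few lines. Below is a minimal, generic illustration of curiosity as world-model prediction error; it is not DeepMind’s BYOL-Explore implementation (which works in a learned latent space with BYOL-style self-supervision), and the tiny linear model and function names are assumptions made for the example.

```python
import numpy as np

# Minimal curiosity-driven intrinsic-reward sketch (illustration only, not
# DeepMind's BYOL-Explore): the agent is rewarded wherever a learned world
# model predicts the next observation poorly.

class TinyWorldModel:
    """Linear next-observation predictor, standing in for a learned latent model."""
    def __init__(self, obs_dim, act_dim, lr=0.01):
        self.W = np.zeros((obs_dim, obs_dim + act_dim))
        self.lr = lr

    def predict(self, obs, act):
        return self.W @ np.concatenate([obs, act])

    def update(self, obs, act, next_obs):
        """One gradient step on the squared prediction error; returns that error."""
        x = np.concatenate([obs, act])
        err = self.predict(obs, act) - next_obs
        self.W -= self.lr * np.outer(err, x)
        return float(np.sum(err ** 2))

def intrinsic_reward(model, obs, act, next_obs):
    # Large error means the world model is "surprised", so exploring here is valuable.
    return model.update(obs, act, next_obs)
```

An agent would then maximize `extrinsic_reward + beta * intrinsic_reward(...)`, with `beta` controlling how strongly curiosity drives exploration; as the world model improves on familiar transitions, the bonus there shrinks and exploration moves on.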

Continue reading “DeepMind Researchers Develop ‘BYOL-Explore’: A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks” »

Jun 23, 2022

Github’s AI-Powered Copilot Can Make Developers’ Job Way Easier

Posted by in categories: information science, robotics/AI

Microsoft-owned GitHub is launching its Copilot AI tool today, which helps suggest lines of code to developers inside their code editor. GitHub originally teamed up with OpenAI last year to launch a preview of Copilot, and it’s generally available to all developers today. Priced at US$10 per month or US$100 a year, GitHub Copilot can suggest the next line of code as developers type in an integrated development environment (IDE) such as Visual Studio Code, Neovim, and the JetBrains IDEs. Copilot can suggest complete methods and complex algorithms, as well as boilerplate code and help with unit testing. More than 1.2 million developers signed up to use the GitHub Copilot preview over the past 12 months, and it will remain free for verified students and maintainers of popular open-source projects. In files where it’s enabled, GitHub says nearly 40 percent of code is now being written by Copilot.
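
To make the workflow concrete, here is a hypothetical example of the kind of completion being described (illustrative only, not actual Copilot output): the developer writes a comment and a function signature, and the assistant proposes a body.

```python
import re

# The developer types the comment and the signature; an assistant such as
# Copilot might then suggest the body (hypothetical suggestion shown below).

def is_valid_email(address: str) -> bool:
    """Return True if the string looks like a simple email address."""
    pattern = r"[\w.+-]+@[\w-]+\.[\w.-]+"
    return re.fullmatch(pattern, address) is not None

print(is_valid_email("ada@example.com"))   # True
print(is_valid_email("not-an-email"))      # False
```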

“Over the past year, we’ve continued to iterate and test workflows to help drive the ‘magic’ of Copilot,” Ryan J. Salva, VP of product at GitHub, told TechCrunch via email. “We not only used the preview to learn how people use GitHub Copilot but also to scale the service safely.”

“We specifically designed GitHub Copilot as an editor extension to make sure nothing gets in the way of what you’re doing,” GitHub CEO Thomas Dohmke says in a blog post. “GitHub Copilot distills the collective knowledge of the world’s developers into an editor extension that suggests code in real-time, to help you stay focused on what matters most: building great software.”

Jun 23, 2022

The Startup at the End of the Age: Creating True AI and instigating the Technological Singularity

Posted by in categories: augmented reality, biotech/medical, information science, mathematics, mobile phones, robotics/AI, singularity, supercomputing, virtual reality

The talk is provided on a Free/Donation basis. If you would like to support my work then you can paypal me at this link:
https://paypal.me/wai69
Or to support me longer term Patreon me at: https://www.patreon.com/waihtsang.

Unfortunately, my internet link went down during the second Q&A session at the end and the recording cut off. Shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++, and a whole load of other riveting and most interesting techie stuff. But thankfully the main part of the talk was recorded.

Continue reading “The Startup at the End of the Age: Creating True AI and instigating the Technological Singularity” »

Jun 21, 2022

Quantum Artificial Intelligence | My PhD at MIT

Posted by in categories: information science, quantum physics, robotics/AI, security

Algorithms, Shor’s Quantum Factoring Algorithm for breaking RSA Security, and the Future of Quantum Computing.

▬ In this video ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I talk about my PhD research at MIT in Quantum Artificial Intelligence. I also explain the basic concepts of quantum computers, and why they are superior to conventional computers for specific tasks. Prof. Peter Shor, the inventor of Shor’s algorithm and one of the founding fathers of Quantum Computing, kindly agreed to participate in this video.
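
As background on the Shor’s algorithm mentioned above: the quantum computer’s job is to find the period r of a^x mod N, and classical post-processing turns that period into factors. The sketch below is a toy illustration of that reduction; the brute-force period search is exactly the step a quantum computer would replace.

```python
from math import gcd
from random import randrange

def find_period(a, N):
    """Brute-force the order of a mod N -- the step Shor's quantum circuit speeds up."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period_finding(N):
    """Classical reduction from factoring N to finding the period of a^x mod N."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d, N // d                      # lucky guess already shares a factor
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2, N) - 1, N)
            if 1 < p < N:
                return p, N // p                  # nontrivial factors found

print(factor_via_period_finding(15))              # e.g. (3, 5)
```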

Continue reading “Quantum Artificial Intelligence | My PhD at MIT” »

Jun 20, 2022

Generative AI to Help Humans Create Hyperreal Population in Metaverse

Posted by in categories: augmented reality, blockchains, holograms, information science, internet, robotics/AI, virtual reality

In the coming years, everyone will get to watch the metaverse evolve toward immersive experiences in hyperreal virtual environments filled with avatars that look and sound exactly like us. Neal Stephenson’s Snow Crash describes a vast world full of amusement parks, houses, entertainment complexes, and worlds within themselves, all connected by a virtual street tens of thousands of miles long. For those still unfamiliar with the metaverse, it is a virtual world in which users can put on virtual reality goggles and steer a stylized version of themselves, known as an avatar, through virtual workplaces, entertainment venues, and other activities. The metaverse will be an immersive version of the internet with interactive features built on technologies such as virtual reality (VR), augmented reality (AR), 3D graphics, 5G, holograms, NFTs, blockchain, haptic sensors, and artificial intelligence (AI). To scale personalized content experiences to billions of people, one potential answer is generative AI, the process of using AI algorithms on existing data to create new content.

In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated assets and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models.
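
A toy example of that idea, with all names and parameters invented for illustration: a seed plus a simple smoothing rule produces terrain data algorithmically, so nothing has to be authored or stored by hand.

```python
import random

def generate_terrain_row(width, seed, smoothing=0.6):
    """Procedurally generate one row of terrain heights from a seed.

    The same seed always yields the same terrain, yet the data is created
    algorithmically rather than hand-authored -- the essence of procedural generation.
    """
    rng = random.Random(seed)
    heights = [rng.uniform(0.0, 1.0)]
    for _ in range(width - 1):
        # Blend the previous height with fresh randomness so neighbours vary smoothly.
        heights.append(smoothing * heights[-1] + (1 - smoothing) * rng.uniform(0.0, 1.0))
    return heights

def render(heights, levels=" .:-=+*#"):
    """Map heights in [0, 1) to ASCII shades for a quick visual check."""
    return "".join(levels[int(h * (len(levels) - 1))] for h in heights)

print(render(generate_terrain_row(64, seed=42)))
```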

Algorithmic difficulty adjustment is typically seen in Diablo-style RPGs and some roguelikes, which use instancing of in-game entities to create randomized items. Less frequently, it can be used to determine the relative difficulty of hand-designed content that is subsequently placed procedurally, as with the monster design in Unangband: the designer can rapidly create content but leaves it up to the game to determine how challenging that content is to overcome, and consequently where in the procedurally generated environment it will appear. Notably, the Touhou series of bullet-hell shooters uses algorithmic difficulty. Although players can only choose certain difficulty values, several community mods enable ramping the difficulty beyond the offered values.

Jun 20, 2022

Artificial intelligence has reached a threshold. And physics can help it break new ground

Posted by in categories: information science, physics, robotics/AI

For years, physicists have been making major advances and breakthroughs in the field using their minds as their primary tools. But what if artificial intelligence could help with these discoveries?

Last month, researchers at Duke University demonstrated that incorporating known physics into machine-learning algorithms could yield new discoveries about material properties, according to a press release from the institution. In a first-of-its-kind project, they constructed a machine-learning algorithm to deduce the properties of a class of engineered materials known as metamaterials and to determine how they interact with electromagnetic fields.
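
One common way to fold known physics into a learning objective, shown here as a generic sketch rather than the Duke group’s actual method, is to penalize predictions that violate a physical relation. The toy constraint below (reflectance plus transmittance of a lossless structure summing to one) and all variable names are assumptions made for the example.

```python
import numpy as np

# Generic physics-informed regression sketch (illustration only, not the Duke model):
# a tiny linear model predicts [reflectance, transmittance] from a design parameter,
# and the loss penalizes violations of the lossless constraint R + T = 1.

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=(200, 1))      # hypothetical design parameter
R_true = 0.8 * x[:, 0] ** 2                   # synthetic "measurements"
y = np.stack([R_true, 1.0 - R_true], axis=1)  # targets satisfy R + T = 1

W, b = np.zeros((1, 2)), np.zeros(2)
lr, lam, n = 0.1, 1.0, len(x)                 # lam weights the physics penalty

for _ in range(2000):
    pred = x @ W + b                          # shape (200, 2): [R_hat, T_hat]
    data_err = pred - y
    physics_err = pred.sum(axis=1) - 1.0      # violation of R_hat + T_hat = 1
    # Gradient of mean squared data loss plus lam * mean squared physics residual.
    grad = 2 * data_err / n + lam * 2 * physics_err[:, None] / n
    W -= lr * (x.T @ grad)
    b -= lr * grad.sum(axis=0)

print("mean physics violation:", np.abs((x @ W + b).sum(axis=1) - 1).mean())
```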

Jun 20, 2022

Google LIMoE — A Step Towards Goal Of A Single AI

Posted by in categories: information science, robotics/AI

Google announced a new technology called LIMoE that it says represents a step toward reaching Google’s goal of an AI architecture called Pathways.

Pathways is an AI architecture in which a single model can learn to do multiple tasks that are currently accomplished by employing multiple separate algorithms.

LIMoE is an acronym that stands for Learning Multiple Modalities with One Sparse Mixture-of-Experts Model. It’s a model that processes vision and text together.
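
A minimal sketch of the sparse mixture-of-experts idea behind such a model, using generic top-1 routing (this is not Google’s LIMoE code, and the sizes and names are invented for illustration): a router sends each token, whether it came from an image patch or a piece of text, to just one of several expert networks, so compute stays sparse as experts are added.

```python
import numpy as np

# Toy sparse Mixture-of-Experts layer with top-1 routing (illustration only,
# not Google's LIMoE). Each token is processed by exactly one expert.

rng = np.random.default_rng(0)
d_model, n_experts = 8, 4
router_w = rng.normal(size=(d_model, n_experts))                  # routing weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(tokens):
    """tokens: (n_tokens, d_model) embeddings -- image patches or text alike."""
    logits = tokens @ router_w                                     # (n_tokens, n_experts)
    chosen = logits.argmax(axis=1)                                 # top-1 expert per token
    gate = np.exp(logits - logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)                        # softmax gate values
    out = np.zeros_like(tokens)
    for e in range(n_experts):
        mask = chosen == e
        if mask.any():
            # Only the selected expert runs on its tokens, scaled by its gate weight.
            out[mask] = (tokens[mask] @ experts[e]) * gate[mask, e][:, None]
    return out

print(moe_layer(rng.normal(size=(5, d_model))).shape)              # (5, 8)
```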