
Reading vs. Playing on a Tablet: Do They Have Different Effects on the Brain?

The difference between the brains of children who read books (left) and children with more than 1 hour of daily screen time (right). In early childhood, more than 60 minutes of daily screen time is linked to vulnerability to emotional and attention disorders, and increasing screen time reduces brain connectivity in the language, visual, and intelligence centres compared with reading books.


The school bell rang long ago, but Danny is still sitting in his chair, trying to finish copying from the board. “Why is this process so hard? Why does it take me so much longer to read than it takes my friends?” Danny is frustrated. The more he tries to read faster, the harder it is for him to understand what he is reading. Around the time he finally finishes copying, his friends come back to class from the break. Like 10–15% of the children in the world, Danny has dyslexia. Dyslexia is defined as difficulty in reading accurately or quickly and, most of the time, it affects a person’s ability to understand what is read and to spell words correctly. The reading difficulty continues into adulthood and does not disappear, even though most adults with dyslexia find ways to “bypass” it, sometimes using text-to-speech software. Children and adults with dyslexia show different brain activity than people who are good readers: they have lower activity in the brain area responsible for vision and identification of words [1, 2] and in another brain area responsible for attention and recognition of errors during reading [3]. A question then arises: is this reading difficulty strange, or is it actually the ability to read that is magical? How did the human brain learn to read? And does the daily use of technology, which sometimes “bypasses” the need to make an effort to read, help us learn to read or make it more difficult? This article will discuss these subjects.

Reading is a relatively new human ability—about 5,000 years old. The Egyptians were among the first to use symbols to represent words within a spoken language, and they used drawings to transmit ideas via writing. As difficult as it is to draw each word in a language, it is still much easier to understand Egyptian hieroglyphs than to figure out what is written in an unfamiliar language. Today, 5,000 years later, we expect each child in first grade to immediately understand that the lines and circles that form letters have a unique sound corresponding to them. To do that, the brain has to rely on neural networks that were designed to perform other tasks, such as seeing, hearing, language comprehension, speech, attention, and concentration [4] (see Figure 1).


In recent years, neuroengineers have devised a number of new modalities for interfacing with the nervous system. Among these are optical stimulation, vibrational stimulation, and optogenetics. A newer and perhaps more promising technology is sonogenetics.

Sonogenetics, the use of focused ultrasound to control cells that have been made ultrasound-responsive via gene delivery, is moving from compelling papers to a potential platform strategy. From a neurotech commercialization standpoint, the significance of sonogenetics is less about a single lab trick and more about the emerging convergence of three capabilities: precise genetic targeting, durable and safe delivery, and field-robust ultrasound systems that work the first time outside the origin lab.

One commercial firm that may be exploiting this technology is Merge Labs. The startup recently made a big splash with a $250 million investment from Open AI and Sam Altman. While the company has not yet released its website and the technical personnel behind it have not been identified, it is rumored to be working on focused ultrasound implants and sonogenetic gene therapy. If Merge and its peers can validate durable expression, predictable dose–response, and reliable outside-the-lab bring-up, a first wave of indications will likely sit at the intersection of neurology, psychiatry, and rehabilitation, with longer-term spillover into human-machine interaction.

“Only a Question of Time” Does AI Mean We’re DOOMED? Plus Oz Pearlman Reads Piers Morgan’s Mind!


Two years ago, Elon Musk was among a thousand experts who signed an open letter demanding an urgent pause on the advancement of artificial intelligence because of risks concerning job losses, misinformation, and more.

But now Musk is spending a billion dollars a month to compete in an AI arms race, which is inflating the stock market to bursting point.

Amazon just laid off 14,000 workers in its ongoing AI pivot — so, are the worst fears of doomsaying experts already coming true?

Joining Piers Morgan to discuss are respected thinkers in this field: Dr Roman Yampolskiy, Dr Michio Kaku, Alex Smola and Avi Loeb.

Then: he has performed for presidents, billionaires and sports stars, but Oz Pearlman’s recent extraction of Joe Rogan’s PIN may have been his biggest hit yet.

POV: What You Would See During an AI Takeover

If you’ve ever wondered what exactly an AI attack, takeover, and/or extermination of humanity would look like and how it could occur in the VERY near future (or even if you haven’t), this is a video you really need to see.


Highly recommend the full book, which goes into way more detail: https://amzn.to/4qeJgFL

Detailed sources: https://docs.google.com/document/d/1o8N5hiV9dXsoi27RIA5-XVCh…sp=sharing.

Hey guys, I’m Drew. This video took literally months to finish, so if you liked it, I’d really appreciate a sub :)

I also post mid memes on twitter: https://twitter.com/PauseusMaximus.

Reasoning with Sampling: Your Base Model is Smarter Than You Think, by Aayush Karan and one other author

Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by posttraining large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to disentangling truly novel behaviors that emerge during RL but are not present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilities can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm leveraging the base models’ own likelihoods. Over different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match and even outperform those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL-posttraining. Crucially, our method does not require training, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains.
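The abstract's core idea, sampling from a "sharpened" distribution p(x)^α built from the base model's own likelihoods via an MCMC-style iterative procedure, can be made concrete with a toy sketch. The snippet below is not the authors' algorithm: it substitutes a tiny bigram model for an LLM and uses a plain Metropolis resampler, and the names (toy_loglik, sharpen_sample) and parameter values are hypothetical, chosen only to keep the example self-contained and runnable.

```python
import math
import random

VOCAB = ["a", "b", "c", "d"]
# Toy bigram log-probabilities standing in for a base model's token likelihoods.
BIGRAM = {(x, y): math.log(0.4 if x == y else 0.2) for x in VOCAB for y in VOCAB}

def toy_loglik(seq):
    """Log-likelihood of a token sequence under the toy bigram 'base model'."""
    return sum(BIGRAM[(seq[i - 1], seq[i])] for i in range(1, len(seq)))

def propose(seq):
    """Symmetric proposal: resample one position uniformly at random."""
    i = random.randrange(len(seq))
    new = list(seq)
    new[i] = random.choice(VOCAB)
    return new

def sharpen_sample(length=12, alpha=4.0, steps=2000):
    """Metropolis sampling from p(x)^alpha, using only the model's likelihoods."""
    seq = [random.choice(VOCAB) for _ in range(length)]
    ll = toy_loglik(seq)
    for _ in range(steps):
        cand = propose(seq)
        cand_ll = toy_loglik(cand)
        # Accept with probability min(1, (p(cand)/p(seq))**alpha).
        if math.log(random.random()) < alpha * (cand_ll - ll):
            seq, ll = cand, cand_ll
    return seq, ll

if __name__ == "__main__":
    sample, ll = sharpen_sample()
    print("sample:", "".join(sample), "log-likelihood:", round(ll, 2))
```

Raising alpha concentrates the sampler on high-likelihood sequences without any additional training, which is the intuition behind eliciting stronger reasoning from a base model by sampling alone.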

Active Matter Gets Solid

Coaxing tiny, self-propelled particles into cohesive structures suggests an approach for making micromachines inspired by living systems. Taking a step toward that goal, researchers have poked and prodded strands of material made from such “active” particles and measured their responses [1]. Understanding the mechanics of structures like these will be essential for the design of devices such as cilia sheets or autonomous microrobots that perform tasks in materials assembly or medicine.

Active matter refers to collections of objects that can move on their own via some energy-consuming process. For 15 years, researchers have studied active fluids that, for example, model the emergent behaviors typical of flocking birds or schooling fish. More recently, researchers have begun to explore active solids—semirigid structures made from active particles. These structures could, in principle, change their shapes in controlled ways or adapt their locomotion to suit their surroundings.

Jérémie Palacci of the Institute of Science and Technology Austria and his colleagues previously designed an active solid made from 2-µm-diameter particles submerged in water [2]. Each particle is a plastic sphere with a hematite cube fixed to its surface. When exposed to blue light, the hematite reacts with hydrogen peroxide in the water and emits the reaction products, a bit like an underwater jet.
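For a feel of how such self-propelled particles are commonly modeled, here is a minimal sketch of the generic active Brownian particle (ABP) picture: each particle swims at a fixed speed along a heading that slowly diffuses. This is an illustrative assumption, not the model used in the study; the function name simulate_abp and all parameter values are made up for the example.

```python
import math
import random

V0 = 1.0        # self-propulsion speed (arbitrary units)
D_R = 0.1       # rotational diffusion coefficient
DT = 0.01       # time step
STEPS = 5000

def simulate_abp():
    """Integrate one active Brownian particle: constant speed, diffusing heading."""
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(STEPS):
        # Translate along the current heading (the "underwater jet").
        x += V0 * math.cos(theta) * DT
        y += V0 * math.sin(theta) * DT
        # The heading wanders by rotational diffusion, giving a persistent random walk.
        theta += math.sqrt(2 * D_R * DT) * random.gauss(0.0, 1.0)
    return x, y

if __name__ == "__main__":
    print("final position:", simulate_abp())
```

Coupling many such particles through bonds or springs is, roughly speaking, what turns an active fluid into the kind of active solid described above.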
