
Vanderbilt University researchers, led by alumnus Bryan Gitschlag, have uncovered groundbreaking insights into the evolution of mitochondrial DNA (mtDNA). In their paper in Nature Communications titled “Multiple distinct evolutionary mechanisms govern the dynamics of selfish mitochondrial genomes in Caenorhabditis elegans,” the team reveals how selfish mtDNA, which can reduce the fitness of its host, manages to persist within cells through aggressive competition or by avoiding traditional selection pressures. The study combines mathematical models and experiments to explain the coexistence of selfish and cooperative mtDNA within the cell, offering new insights into the complex evolutionary dynamics of these essential cellular components.
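The tension the study describes, a within-cell replication advantage for selfish mtDNA pulling against an organism-level fitness cost, can be illustrated with a toy multilevel-selection simulation. This is a hypothetical sketch for intuition only: the model structure, parameter values, and function names below are invented here and are not the authors' mathematical model.

```python
import random

def simulate(generations=50, n_hosts=200, mt_per_cell=100,
             rep_advantage=1.5, fitness_cost=0.8):
    # Each host is tracked only as its fraction of selfish mtDNA.
    hosts = [0.1] * n_hosts
    for _ in range(generations):
        # Between-host selection: hosts carrying more selfish mtDNA
        # are less likely to reproduce.
        weights = [max(1e-9, 1 - fitness_cost * f) for f in hosts]
        parents = random.choices(hosts, weights=weights, k=n_hosts)
        new_hosts = []
        for f in parents:
            # Within-host selection: selfish genomes replicate faster.
            f = f * rep_advantage / (f * rep_advantage + (1 - f))
            # Random segregation at transmission introduces drift.
            f = sum(random.random() < f
                    for _ in range(mt_per_cell)) / mt_per_cell
            new_hosts.append(f)
        hosts = new_hosts
    return sum(hosts) / n_hosts

random.seed(42)
print(f"mean selfish fraction after 50 generations: {simulate():.2f}")
```

Depending on how the replication advantage and fitness cost are balanced, the selfish fraction can be purged, fix, or linger at intermediate levels, which is the kind of coexistence regime the paper sets out to explain.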

Gitschlag, an alumnus of Vanderbilt University, conducted the research while in the lab of Maulik Patel, assistant professor of biological sciences. He is now a postdoctoral researcher at Cold Spring Harbor Laboratory in David McCandlish’s lab. Gitschlag collaborated closely with fellow Patel Lab members, including James Held, a recent PhD graduate, and Claudia Pereira, a former staff member of the lab.



Recent explorations of unique geometric worlds reveal perplexing patterns, including the Fibonacci sequence and the golden ratio.
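One of the best-known of these patterns is that the ratios of consecutive Fibonacci numbers converge to the golden ratio φ = (1 + √5) / 2 ≈ 1.618. A few lines of code make the convergence visible:

```python
def fib_ratios(n):
    """Return the ratios b/a of the first n consecutive
    Fibonacci pairs, starting from (1, 1)."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + 5 ** 0.5) / 2  # the golden ratio
for r in fib_ratios(10):
    print(f"{r:.7f}  (error {abs(r - phi):.2e})")
```

The error shrinks roughly geometrically: each successive ratio overshoots and undershoots φ by a smaller amount.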

The large language models that have increasingly taken over the tech world are anything but "cheap." The most prominent LLMs, such as GPT-4, cost on the order of $100 million to build: legal fees for access to training data, computing power for what could be billions or trillions of parameters, the energy and water needed to fuel that computation, and the many developers writing the training algorithms that must run cycle after cycle so the machine will "learn."

But if a researcher needs a machine to handle a specialized task efficiently and doesn't have access to a large institution that offers generative AI tools, what other options are available? Say a parent wants to prepare their child for a difficult test and needs to show many worked examples of complicated math problems.

Building their own LLM is an onerous prospect, given the costs mentioned above, and direct use of big models like GPT-4 and Llama 3.1 may not be immediately suited to the complex logic and math their task requires.

Model grounded in biology reveals the tissue structures linked to the disorder. A researcher’s mathematical modeling approach for brain imaging analysis reveals links between genes, brain structure and autism.

A multi-university research team co-led by University of Virginia engineering professor Gustavo K. Rohde has developed a system that can spot genetic markers of autism in brain images with 89 to 95% accuracy.

Their findings suggest doctors may one day see, classify and treat autism and related neurological conditions with this method, without having to rely on, or wait for, behavioral cues. And that means this truly personalized medicine could result in earlier interventions.

To expand its GPT capabilities, OpenAI released its long-anticipated o1 model, along with a smaller, cheaper o1-mini version. Previously code-named Strawberry, the new models, the company says, can "reason through complex tasks and solve harder problems than previous models in science, coding, and math."

Although still a preview, OpenAI states this is the first release in the series, available in ChatGPT and through its API, with more to come.

The company says these models have been trained to "spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes."

This conversation between Max Tegmark and Joel Hellermark was recorded in April 2024 at Max Tegmark's MIT office. An edited version premiered at the Sana AI Summit on May 15, 2024, in Stockholm, Sweden.

Max Tegmark is a professor doing AI and physics research at MIT as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds, and Machines. He is also the president of the Future of Life Institute and the author of the New York Times bestselling books Life 3.0 and Our Mathematical Universe. Max’s unorthodox ideas have earned him the nickname “Mad Max.”

Joel Hellermark is the founder and CEO of Sana. An enterprising child, Joel taught himself to code in C at age 13 and founded his first company, a video recommendation technology, at 16. In 2021, Joel topped the Forbes 30 Under 30. This year, Sana was recognized on the Forbes AI 50 as one of the startups developing the most promising business use cases of artificial intelligence.

Timestamps:
From cosmos to AI (00:00:00)
Creating superhuman AI (00:05:00)
Superseding humans (00:09:32)
State of AI (00:12:15)
Self-improving models (00:16:17)
Human vs machine (00:18:49)
Gathering top minds (00:19:37)
The “bananas” box (00:24:20)
Future architecture (00:26:50)
AIs evaluating AIs (00:29:17)
Handling AI safety (00:35:41)
AI fooling humans? (00:40:11)
The utopia (00:42:17)
The meaning of life (00:43:40)


With only 6.6B active parameters, GRIN MoE achieves exceptionally good performance across a diverse set of tasks, particularly in coding and mathematics.

Microsoft releases GRIN😁 MoE

GRadient-INformed MoE

Demo: https://huggingface.co/spaces/GRIN-MoE-Demo/GRIN-MoE
Model: https://huggingface.co/microsoft/GRIN-MoE
GitHub:
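The "MoE" in GRIN stands for mixture-of-experts: a gate scores many expert sub-networks per token, but only a few run, which is why a model can perform well with just 6.6B active parameters. The sketch below shows generic sparse top-k routing for a single input vector; it is illustrative only and does not reproduce GRIN's gradient-informed routing, and all names and toy experts are invented here.

```python
import math
import random

def moe_forward(x, gate_w, experts, top_k=2):
    # Gate: one score per expert (dot product of x with that expert's
    # gate vector).
    logits = [sum(xi * wi for xi, wi in zip(x, col)) for col in gate_w]
    # Keep only the top-k scoring experts; the rest never execute.
    top = sorted(range(len(logits)), key=lambda i: logits[i])[-top_k:]
    # Softmax over the selected experts' scores.
    mx = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - mx) for i in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted mix of only the chosen experts' outputs.
    out = [0.0] * len(x)
    for p, i in zip(probs, top):
        for j, v in enumerate(experts[i](x)):
            out[j] += p * v
    return out

random.seed(0)
d, n_experts = 8, 4
gate_w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
# Each "expert" is a toy elementwise transform with its own scale.
experts = [lambda x, s=s: [s * v for v in x] for s in (0.5, 1.0, 1.5, 2.0)]

y = moe_forward([random.gauss(0, 1) for _ in range(d)], gate_w, experts)
print(len(y))  # 8
```

The compute saving comes from the `top` selection: with top_k=2 of 4 experts here (or a handful out of dozens in a production model), most expert parameters sit idle on any given token.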




Scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have shown that a type of qubit whose architecture is more amenable to mass production can perform comparably to qubits currently dominating the field. With a series of mathematical analyses, the scientists have provided a roadmap for simpler qubit fabrication that enables robust and reliable manufacturing of these quantum computer building blocks.