
The Future Of Space Tech & Innovation — Dr. Joel Mozer, Ph.D., Director of Science, Technology & Research, United States Space Force.
Dr. Joel Mozer is the Director of Science, Technology, and Research, United States Space Force (https://www.spaceforce.mil/).
Dr. Mozer holds a Ph.D. in physics and an M.S. in atmospheric science from the University of Arizona. He serves as the principal scientific advisor to the Commander and is the senior authority on all science and technology matters for an organization of approximately 11,000 space professionals worldwide, managing a global network of satellite command and control, communications, missile warning, and launch facilities. In this role, he works with other principals, operational commanders, combatant commands, and the acquisition and international communities to address cross-organizational science and technology issues and solutions.
Dr. Mozer represents USSF science and technology on decisions, high-level planning, and policy, building coalitions and alliances throughout the U.S. government, industry, academia, the international community, and other scientific and technology organizations.
Dr. Mozer entered government service in 1992 with the U.S. Air Force. Prior to his current assignment, he was Chief Space Experimentalist at the Air Force Research Laboratory's Space Vehicles Directorate. In that role, he managed AFRL's $40 million-per-year investment in research and development of experimental satellites and payloads and led a team of 100 engineers and scientists at Kirtland and Holloman Air Force Bases, New Mexico, all working to develop cost-effective ways to assemble, integrate, test, and fly novel spacecraft and systems, and to demonstrate new concepts for Department of Defense systems and missions. His specialization is space control and remote sensing: understanding the natural and man-made space environment and developing forecast tools that help warfighters, theater battle commanders, and other decision-makers mitigate risk.
Elon Musk has always said that Neuralink, the company he created in 2016 to build brain-computer interfaces, would do amazing things: Eventually, he says, it aims to allow humans to interact seamlessly with advanced artificial intelligence through thought alone. Along the way, it would help to cure people with spinal cord injuries and brain disorders ranging from Parkinson’s to schizophrenia.
Now the company is approaching a key test: a human clinical trial of its brain-computer interface (BCI). In December, Musk told a conference audience that “we hope to have this in our first humans” in 2022. In January, the company posted a job listing for a clinical trial director, an indication that it may be on track to meet Musk’s suggested timeline.
Musk has put the startup under unrelenting pressure to meet unrealistic timelines, these former employees say. “There was this top-down dissatisfaction with the pace of progress even though we were moving at unprecedented speeds,” says one former member of Neuralink’s technical staff, who worked at the company in 2019. “Still Elon was not satisfied.” Multiple staffers say company policy, dictated by Musk, forbade employees from faulting outside suppliers or vendors for a delay; the person who managed that relationship had to take responsibility for missed deadlines, even those outside their control.
Accelerating Research To Prevent & Cure Disease — Dr. Kevin Perrott, Ph.D., Founder & CEO, OpenCures; Co-Founder & Treasurer, SENS Research Foundation
Dr. Kevin Perrott, Ph.D. is Founder and CEO of OpenCures (https://opencures.org/), Adjunct Professor at the University of Alberta, Co-Founder and Advisor of Oisin Biotechnologies, President of the Global Healthspan Policy Institute, and Co-Founder and Treasurer of SENS Research Foundation.
Kevin is a successful entrepreneur and owner of the largest motorcycle and snowmobile dealership in Canada, Riverside Honda and Skidoo Sales in Edmonton, Alberta. Surviving cancer showed him firsthand the deficiencies of the current health technology development paradigm, in which the customer has almost no input into the development of their own health solutions. Armed with the realization that nothing is more valuable than health and the time to enjoy it with those you love, Kevin resolved to put his energies toward addressing those deficiencies.
In 2003, Kevin joined Aubrey de Grey and David Gobel to form the Methuselah Foundation, which offered competitive prizes for advances in longevity science. A few years later, he co-founded SENS Research Foundation with Aubrey de Grey and others to develop interventions able to repair, or render harmless, the damage that accumulates and underlies multiple age-related disorders. Most recently, Kevin formed OpenCures, a for-benefit corporation that helps individuals perform self-directed research, generate data, and focus the value of that data on the health solutions they are interested in.
Kevin has authored and co-authored several peer-reviewed articles, all focused on accelerating the science underlying the development of therapies for degenerative diseases. He is a co-founder of multiple non-profit and for-profit entities whose missions are aligned with that purpose.
Second, we need to be aware of the manifest biases and fallacies that magnify the weight humans put on potential losses relative to potential future gains. Because of these biases, humans often prefer preserving the status quo to pursuing activities that lead to future change, even when the expected (but risky) gains of change outweigh those of standing still. This preference for the status quo, together with neat narratives that oversimplify complex scenarios, can lead to overlooking (or ignoring) important information that is inconsistent with the currently accepted meme; Musk's continued optimism about autonomous vehicles, despite evidence that has led others to scale back their forecasts, may illustrate the point.
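As a rough illustration of the arithmetic behind this bias, the sketch below shows how a loss-averse decision-maker can reject a change whose expected value beats the status quo. It is not from the article; the payoffs are invented, and the loss weight of 2.25 is roughly Kahneman and Tversky's prospect-theory estimate, used here only for illustration.

```python
# Illustrative sketch: loss aversion can flip a choice that raw
# expected value would favor. All numbers are made up.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def loss_averse_value(outcomes, loss_weight=2.25):
    """Prospect-theory-style valuation: losses loom larger than gains."""
    return sum(p * (x if x >= 0 else loss_weight * x) for p, x in outcomes)

status_quo = [(1.0, 0.0)]                    # certain: nothing changes
risky_change = [(0.5, 100.0), (0.5, -60.0)]  # gamble with positive EV

print(expected_value(risky_change))     # +20.0: the change "should" win
print(loss_averse_value(risky_change))  # -17.5: weighted losses flip the choice
```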
The first and second points together lead to the third important consideration: the value of independently verified data, rather than forecasts and opinion, in determining the need for and appropriateness of policy interventions. Data is by nature historical, so pausing to collect it, rather than rushing to respond, is recommended.
To that end, we can use available data to test whether increasing use of AI is demonstrably affecting two key labor market performance indicators: labor productivity growth and multifactor productivity growth. If, as Keynes suggested, technological change driven by AI is enabling our means of economizing the use of labor to outrun the pace at which we can find new uses for labor, we would expect to see both statistics rising in the era dominated by AI. Yet as Figures 1 and 2 show, the exact opposite appears true across a wide range of OECD countries. Nor does the data suggest that other key labor market indicators have deteriorated with the advent of AI. Echoing Robert Solow's famous quip about the computer age, we see AI everywhere but in the productivity statistics.
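By way of illustration only, a minimal sketch of such a check follows. It assumes a local CSV export of OECD productivity series; the file name, column names, and the 2012 "AI era" cutoff are all hypothetical choices, not part of the original analysis.

```python
# Hypothetical sketch: compare average productivity growth before and
# after an (arbitrary) AI-era cutoff, per country.
import pandas as pd

# Assumed columns: country, year, labour_productivity_growth,
# multifactor_productivity_growth (illustrative schema, not a real OECD API).
df = pd.read_csv("oecd_productivity.csv")

AI_ERA_START = 2012
df["era"] = df["year"].apply(lambda y: "AI era" if y >= AI_ERA_START else "pre-AI")

summary = (
    df.groupby(["country", "era"])[
        ["labour_productivity_growth", "multifactor_productivity_growth"]
    ]
    .mean()
    .unstack("era")
)
print(summary)  # If Keynes's worry held, the AI-era columns should be higher.
```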
What is AI, really? Jeff Dean, the head of Google’s AI efforts, explains the underlying technology that enables artificial intelligence to do all sorts of things, from understanding language to diagnosing disease — and presents a roadmap for building better, more responsible systems that have a deeper understanding of the world. (Followed by a Q&A with Chris Anderson, head of TED.)
There is nothing inevitable about aging, or about its rate. California’s bristlecone pines are believed to live for 5,000 years, and there are long-lived mammals as well. Some marine creatures, including hydra, jellyfish, planarian worms, and coral, show no signs of aging at all. Certain human cells have immortal characteristics too: when a woman gives birth, she produces a baby that is “new”; her “germline” (reproduction-related) cells produce a child with no signs of age.
These and many other considerations combine with the unreasonable effectiveness of modern AI to lead some people to believe that significant advances in longevity are imminent. These advances probably cannot happen without the active participation of the wider pharmaceutical industry, and the acceptance by policy makers and regulators that aging is a disease, not just an unfortunate and inevitable component of the human condition. There is still considerable reluctance among major pharmaceutical companies to contemplate specific anti-aging therapeutic developments. But there are encouraging signs of this reluctance being challenged, especially at Novartis and AstraZeneca.
Beyond the pharma giants, Mellon reckons there are 255 companies which claim to be specifically targeting aging, of which 35 are listed on stock markets. But he thinks that only a minority of them are genuinely working to tackle aging, as opposed to one of the diseases it causes, like cancer, dementia, or heart disease. He likens the state of the longevity industry today to the internet industry of 20 years ago, when it was still in its dial-up phase, and downloading information (or, heaven forbid, images) was like sucking jelly through a straw. And although longevity will have such a massive impact on all of us that you might expect progress to be expedited, Mellon points out that the internet did not have to go through lengthy and expensive FDA trials at every step.
Russia’s central bank on Thursday proposed banning the use and mining of cryptocurrencies on Russian territory, citing threats to financial stability, citizens’ wellbeing and its monetary policy sovereignty.
The move is the latest in a global cryptocurrency crackdown as governments from Asia to the United States worry that privately operated and highly volatile digital currencies could undermine their control of financial and monetary systems.
Russia has argued for years against cryptocurrencies, saying they could be used in money laundering or to finance terrorism. It eventually gave them legal status in 2020 but banned their use as a means of payment.
For the past decade, AI has been quietly seeping into daily life, from facial recognition to digital assistants like Siri or Alexa. These largely unregulated uses of AI are highly lucrative for those who control them but are already causing real-world harms to those who are subjected to them: false arrests; health care discrimination; and a rise in pervasive surveillance that, in the case of policing, can disproportionately affect Black people and disadvantaged socioeconomic groups.
Gebru is a leading figure in a constellation of scholars, activists, regulators, and technologists collaborating to reshape ideas about what AI is and what it should be. Some of her fellow travelers remain in Big Tech, mobilizing those insights to push companies toward AI that is more ethical. Others, making policy on both sides of the Atlantic, are preparing new rules to set clearer limits on the companies benefiting most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased and to instead focus on power: who’s building AI, who benefits from it, and who gets to decide what its future looks like.
The day after our Zoom call, on the anniversary of her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group she hopes will grapple with how to make AI work for everyone. “We need to let people who are harmed by technology imagine the future that they want,” she says.
When Gebru was a teenager, war broke out between Ethiopia, where she had lived all her life, and Eritrea, where both of her parents were born. It became unsafe for her to remain in Addis Ababa, the Ethiopian capital. After a “miserable” experience with the U.S. asylum system, Gebru finally made it to Massachusetts as a refugee. Immediately, she began experiencing racism in the American school system, where even as a high-achieving teenager, she says, some teachers discriminated against her, trying to prevent her from taking certain AP classes. Years later, it was a pivotal experience with the police that put her on the path toward ethical technology. She recalls calling the cops after her friend, a Black woman, was assaulted in a bar. When they arrived, the police handcuffed Gebru’s friend and later put her in a cell. Charges over the assault were never filed, she says. “It was a blatant example of systemic racism.”
“No AI technology ‘where training or transactional data is known to be of poor quality, carry bias, or where the quality of such data is unknown’ should ever be considered for use, and thus should be deemed Extreme Risk, not High Risk. Any AI technology based on poor quality or biased data is inherently compromised.”
“No AI technology that assists in ‘identifying, categorizing, prioritizing or otherwise making decisions pertaining to members of the public’ should be deemed Low Risk. Automating such actions through technology, even with the inclusion of a human-in-the-loop, is an intrinsically risky activity, and should be categorized as such by the Policy.”
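Purely as an illustration of how these two recommendations compose into a decision rule, here is a minimal sketch; the tier names, fields, and function are hypothetical, not drawn from the actual policy text.

```python
# Illustrative sketch only: the two quoted recommendations as a rule.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "Low Risk"
    HIGH = "High Risk"
    EXTREME = "Extreme Risk"

@dataclass
class AISystem:
    data_quality_known: bool    # is the quality of training/transactional data known?
    data_poor_or_biased: bool   # is the data known to be poor quality or biased?
    decides_about_public: bool  # does it categorize/prioritize members of the public?

def assess(system: AISystem) -> Risk:
    # Rule 1: poor, biased, or unknown-quality data -> Extreme Risk, never deployable.
    if system.data_poor_or_biased or not system.data_quality_known:
        return Risk.EXTREME
    # Rule 2: decisions about the public can never be Low Risk,
    # even with a human in the loop.
    if system.decides_about_public:
        return Risk.HIGH
    return Risk.LOW

print(assess(AISystem(True, False, True)))  # Risk.HIGH
```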
AI technologies are impacting our everyday lives. The ethical risks of AI mean we should think beyond the bare bones of algorithmic fairness and bias in order to identify the full range of effects of AI technologies on safety, privacy, and society at large.