
Artificial Intelligence is everywhere in Europe.

While some are worried about its long-term impact, a team of researchers at the University of Technology in Vienna is working on responsible ways to use AI.

From industry to healthcare to the media and even the creative arts, artificial intelligence is already having an impact on our daily lives. It’s hailed by advocates as a gift to humanity, but others worry about the long-term effects on society.

Tesla began rolling out FSD version 12.5.6.3 on Saturday, a significant update to its Full Self-Driving (FSD) software that shifts the city-streets driving system to a single, end-to-end neural network model.

Last week, Tesla CEO Elon Musk said the company’s FSD technology “is now almost entirely AI.” In early October, Musk had stated that FSD “will soon exceed 10,000 miles between critical interventions, which is a year of driving for most people.”

He has frequently voiced concerns over the Biden administration’s approach to immigration and the economy, and claimed that free speech would be at risk under another Democratic presidency.

As one of the president-elect’s most important backers, the tech billionaire donated more than $119m (£92m) to fund a Super PAC aimed at re-electing Trump.

He also spent the last weeks before election day running a get-out-the-vote effort in the battleground states, which included a daily giveaway of $1m to voters in those states.

Australia has served up a Secure Innovation Placemat [PDF].

The wide variance in the documents is by design: each Five Eyes nation chose its own approach, although the campaign is a coordinated effort that is billed as “consistent and consolidated advice reflecting both the globalized and interconnected tech startup ecosystem as well as the global nature of the security threats startups face.” And everybody uses placemats.

Whether this advice will break through the “move fast and break things” culture that many startups nurture is anyone’s guess. The Register has reported on security and resilience troubles in the early years of Uber, Lyft, GitLab, and OpenAI.