
Archive for the ‘augmented reality’ category: Page 15

Jan 24, 2023

When Will We Upload Our Minds To Other Species?

Posted in categories: augmented reality, bioengineering, business, genetics, life extension, mathematics, robotics/AI, transhumanism

https://youtube.com/watch?v=3AQPgchedUw

This video explores aliens, mind uploading to other species (like in Avatar), genetic engineering, and future robots. Watch this next video about digital immortality: https://youtu.be/sZdWN9pbbew.
► Support This Channel: https://www.patreon.com/futurebusinesstech.
► Udacity: Up To 75% Off All Courses (Biggest Discount Ever): https://bit.ly/3j9pIRZ
► Brilliant: Learn Science And Math Interactively (20% Off): https://bit.ly/3HAznLL
► Jasper AI: Write 5x Faster With Artificial Intelligence: https://bit.ly/3MIPSYp.

SOURCES:
https://en.wikipedia.org/wiki/Eagle_eye
https://vcahospitals.com/know-your-pet/how-dogs-use-smell-to…has%20been, 10%2C000%20times%20better%20than%20people.
https://www.scientificamerican.com/article/small-animals-liv…ion-world/
https://en.wikipedia.org/wiki/Human_cloning.

Continue reading “When Will We Upload Our Minds To Other Species?” »

Jan 23, 2023

Microsoft has laid off entire teams behind Virtual Reality, Mixed Reality, and HoloLens

Posted in category: augmented reality

In the latest update on the massive Microsoft layoffs, it seems Redmond has gutted the teams behind HoloLens and Mixed Reality.

Jan 23, 2023

Maintaining Eye Contact in a Video Conference with NVIDIA Maxine

Posted in categories: augmented reality, robotics/AI

Maintaining eye contact is crucial to establishing engagement and trust in a conversation. This can be challenging in a video conference because it requires participants to look at the camera instead of the screen. The NVIDIA Maxine Eye Contact feature creates an in-person experience for virtual meetings. Powered by AI, Maxine Eye Contact directs your eyes to a centered position to maintain eye contact with your audience. Eye Contact is available to developers through the Maxine Augmented Reality SDK at https://developer.nvidia.com/maxine#ar-sdk.

Learn more about Maxine at https://developer.nvidia.com/maxine and all of NVIDIA’s AI solutions at https://www.nvidia.com/en-us/deep-learning-ai/products/solutions/
#AI #NVIDIA #Maxine
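In effect, Eye Contact is per-frame gaze redirection: the model estimates where the eyes are looking in each video frame and re-renders them so they appear centered on the camera. The snippet below is a conceptual sketch only, not the Maxine AR SDK (which is a C/C++ SDK); the redirect_gaze() placeholder and the OpenCV preview loop are assumptions added here just to show where such a per-frame correction step would sit in a webcam pipeline.

```python
# Conceptual sketch only; NOT the Maxine AR SDK (a C/C++ SDK).
# redirect_gaze() is a hypothetical placeholder for the AI model that
# re-renders the eye regions so they look into the camera.
import cv2

def redirect_gaze(frame):
    # Placeholder: a real gaze-redirection model would estimate where the
    # eyes are looking and re-synthesize them centered on the camera.
    return frame

cap = cv2.VideoCapture(0)                    # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("eye-contact preview", redirect_gaze(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```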

Jan 14, 2023

Goodbye VR Controllers, HELLO VR GLOVES! (Best of CES 2023)

Posted in categories: augmented reality, robotics/AI, virtual reality

Hello and welcome to… well… TWO-SDAY NEWSDAY! Your number one resource for the entire week’s worth of VR news! Today we have the conclusion of my CES VR coverage… and it’s certainly not the least! In the last video I covered mostly VR headsets, from HTC Vive’s newest XR Elite, to Shiftall and their FlipVR and MeganeX, to Pimax and their newest Portal and Crystal… today I am covering a BUNCH of amazing VR gloves, from Diver-X to bHaptics to AI Silk and Contact CI. In addition, I try the PSVR 2 (not at CES) and the Lynx R1, as well as some really awesome haptic suits! I also got to see some of the most impressive AR lenses available! An awesome and exciting episode. Hope you enjoyed!

DiverX:
https://www.kickstarter.com/projects/diver-x/contact-glove.

Continue reading “Goodbye VR Controllers, HELLO VR GLOVES! (Best of CES 2023)” »

Jan 10, 2023

AI Trains Fire Fighters — A Man’s Best Friend

Posted in categories: augmented reality, privacy, robotics/AI, transportation


Fire departments conducting “size up” training typically rely on whiteboard discussions, drives around neighborhoods and photo-based systems. New training technology will help firefighters train for different types of fires, hazardous-material situations, vehicle accidents, and incidents in residential and commercial buildings. An augmented reality training tool for firefighters, called Forge, uses artificial intelligence and biometric data to simulate real emergencies. Developed by Avrio Analytics, the system is designed to make sure that firefighters possess the communication, situational awareness and associated skills needed in emergencies.

“Biometric and performance data collected during training allows Forge’s AI to dynamically change the training based on the user’s cognitive load, such as providing more or less guidance to the individual or introducing new training complexity in real-time,” the company told govtech.com. “This allows for training sessions tailored to the ability of the individual.”
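To make the adaptive loop in that quote concrete, here is a toy sketch (not Avrio Analytics’ actual Forge system; the heart-rate proxy, thresholds, and guidance levels are invented for illustration) of how biometric data could drive difficulty and guidance in real time:

```python
# Illustrative sketch only, not Avrio Analytics' Forge.
# It mimics the idea above: use a biometric proxy for cognitive load to
# raise or lower scenario difficulty and guidance in real time.

def update_scenario(heart_rate_bpm, baseline_bpm, difficulty, guidance):
    """Return new (difficulty, guidance) given a crude cognitive-load proxy."""
    load = (heart_rate_bpm - baseline_bpm) / baseline_bpm
    if load > 0.4:                      # trainee appears overloaded
        difficulty = max(1, difficulty - 1)
        guidance = "high"               # give more prompts and hints
    elif load < 0.1:                    # trainee has spare capacity
        difficulty += 1                 # introduce new complexity
        guidance = "low"
    return difficulty, guidance

# Example: a trainee's heart rate spikes during a simulated flashover.
print(update_scenario(heart_rate_bpm=150, baseline_bpm=90,
                      difficulty=3, guidance="medium"))
# -> (2, 'high'): the sketch eases difficulty and adds guidance
```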

Jan 9, 2023

Apple’s mixed-reality headset could arrive this year

Posted in categories: augmented reality, futurism

According to a report from Bloomberg’s Mark Gurman, Apple is going to spend most of 2023 focusing on a brand new device — a mixed-reality headset that has been a work in progress for several years.

The new device could look like a pair of ski goggles, based on an earlier report from The Information. It will feature several cameras so that the device can track your movements in real time and see what’s happening in the real world.

Over the past few years, Apple CEO Tim Cook has stated several times that augmented reality is a promising technology. “I think the [AR] promise is even greater in the future. So it’s a critically important part of Apple’s future,” Cook told Kara Swisher back in 2021.

Jan 8, 2023

Mojo Vision puts contact lens production ‘on hold’ as it lays off 75% of staff

Posted in categories: augmented reality, biotech/medical, computing, economics

We’ve met with Mojo Vision for several CESes, watching the startup’s AR contact lenses develop, year by year. These sorts of things take a lot of time and money, of course — and these days it seems increasingly difficult to find either. Today, the California-based firm announced that it is “decelerating” work on the Mojo Lens, citing, “significant challenges in raising capital.”

In an announcement posted to its site, CEO Drew Perkins blames insurmountable headwinds, including the bad economy and the “yet-to-be proven market potential for advanced AR products,” for the company’s inability to raise the funding needed to keep the project afloat.

“Although we haven’t had the chance yet to see it ship and to reach its full potential in the marketplace, we have proven that what was once considered science fiction can be developed into a technical reality,” Perkins writes. “Even though the pursuit of our vision for Invisible Computing is on hold for now, we strongly believe that there will be a future market for Mojo Lens and expect to accelerate it when the time is right.”

Jan 7, 2023

TCL’s RayNeo X2 AR Glasses Live-Translate Conversations for Me

Posted in categories: augmented reality, virtual reality

TCL’s known for TVs. Now the company’s working on its own AR and VR hardware, too.

Jan 3, 2023

Muse: Text-To-Image Generation via Masked Generative Transformers

Posted in category: augmented reality

Presents Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being far more efficient than diffusion or autoregressive models. Project page: https://muse-model.github.io/ Abstract: https://arxiv.org/abs/2301.
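The efficiency claim comes from how Muse decodes: the image is represented as a grid of discrete tokens, and at each step the model predicts all masked tokens in parallel, keeps the most confident predictions, and re-masks the rest, so generation takes a small number of parallel steps rather than hundreds of diffusion steps or a token-by-token autoregressive pass. Below is a minimal sketch of that confidence-based parallel decoding loop, with a random stub standing in for the real text-conditioned transformer and VQ image decoder; it illustrates the general scheme, not the released Muse code.

```python
# Minimal sketch of masked parallel decoding (the scheme Muse builds on),
# with a random stub in place of the real text-conditioned transformer.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, NUM_TOKENS, STEPS, MASK = 8192, 256, 8, -1

def predict(tokens, prompt):
    """Stub transformer: returns (predicted_token, confidence) per position.
    A real model would condition on the text prompt and current tokens."""
    preds = rng.integers(0, VOCAB, size=len(tokens))
    conf = rng.random(len(tokens))
    return preds, conf

tokens = np.full(NUM_TOKENS, MASK)           # start fully masked
for step in range(STEPS):
    preds, conf = predict(tokens, prompt="a photo of a cat")
    masked = tokens == MASK
    # cosine schedule: fraction of tokens left masked after this step
    keep_masked = int(NUM_TOKENS * np.cos(np.pi / 2 * (step + 1) / STEPS))
    conf = np.where(masked, conf, np.inf)    # already-fixed tokens stay fixed
    order = np.argsort(conf)                 # least confident first
    tokens[masked] = preds[masked]           # tentatively fill every mask
    if keep_masked > 0:
        tokens[order[:keep_masked]] = MASK   # re-mask the least confident
# 'tokens' would now be decoded to an image by a VQ decoder.
print((tokens == MASK).sum(), "masked tokens remain")   # 0 at the end
```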

Dec 31, 2022

Direct observations of a complex coronal web driving highly structured slow solar wind (Nature Astronomy)

Posted in categories: augmented reality, solar power, space

Thus, our SUVI observations captured direct imprints and dynamics of this S-web in the middle corona. For instance, consider the wind streams presented in Fig. 1. Those outflows emerge when a pair of middle-coronal structures approach each other. By comparing the timing of these outflows in Supplementary Video 5, we found that the middle-coronal structures interact at the cusp of the southwest pseudostreamer. Similarly, wind streams in Supplementary Figs. 1–3 emerge from the cusps of the HCS. Models suggest that streamer and pseudostreamer cusps are sites of persistent reconnection [30,31]. The observed interaction and continual rearrangement of the coronal web features at these cusps are consistent with persistent reconnection, as predicted by S-web models. Although reconnection at streamer cusps in the middle corona has been inferred in other observational studies [32,33] and modelled in three dimensions [30,31], the observations presented here represent imaging signatures of coronal web dynamics and their direct and persistent effects. Our observations suggest that the coronal web is a direct manifestation of the full breadth of the S-web in the middle corona. The S-web reconnection dynamics modulate and drive the structure of the slow solar wind through prevalent reconnection [9,18].

A volume render of log Q highlights the boundaries of individual flux domains projected into the image plane, revealing the existence of substantial magnetic complexity within the CH–AR system (Fig. 3a and Supplementary Video 7). The ecliptic view of the 3D volume render of log Q, with the CH–AR system at the west limb, closely reproduces the elongated magnetic topological structures associated with the observed coronal web, confined to the northern and southern bright (pseudo-)streamers (Fig. 3b and Supplementary Video 8). The synthetic EUV emission from the inner to middle corona and the white-light emission in the extended corona (Fig. 3c) are in general agreement with the structures that we observed with the SUVI–LASCO combination (Fig. 1a). Moreover, the radial velocity sliced at 3 R⊙ over the large-scale HCS crossing and the pseudostreamer arcs in the MHD model also quantitatively agrees with the observed speeds of wind streams emerging from those topological features (Supplementary Figs. 4 and 6 and Supplementary Information). Thus, the observationally driven MHD model lends credence to our interpretation of the existence of a complex coronal web whose dynamics correlate with the release of wind streams.
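For context on the log Q renders discussed above (standard background, not stated in the excerpt): Q is the magnetic squashing factor of the field-line mapping (x, y) ↦ (X(x, y), Y(x, y)); it grows large where neighbouring field lines map to widely separated footpoints, which is what outlines the flux-domain boundaries in these volume renders. The usual definition is:

```latex
% Standard squashing-factor definition (background for "log Q" above).
Q = \frac{a^{2} + b^{2} + c^{2} + d^{2}}{\left| a d - b c \right|},
\qquad
a = \frac{\partial X}{\partial x},\quad
b = \frac{\partial X}{\partial y},\quad
c = \frac{\partial Y}{\partial x},\quad
d = \frac{\partial Y}{\partial y}.
```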

The long lifetime of the system allowed us to probe the region from a different viewpoint using the Sun-orbiting STEREO-A, which was roughly in quadrature with respect to the Sun–Earth line during the SUVI campaign (Methods and Extended Data Fig. 6). By combining data from the Solar Terrestrial Relations Observatory-Ahead’s (STEREO-A) extreme ultraviolet imager (EUVI) [34], outer visible-light coronagraph (COR-2) and the inner visible-light heliospheric imager (HI-1) [35], we found imprints of the complex coronal web over the CH–AR system extending into the heliosphere. Figure 4a and the associated Supplementary Video 9 demonstrate the close resemblance between highly structured slow solar wind streams escaping into the heliosphere and the S-web-driven wind streams that we observed with the SUVI and LASCO combination. Due to the lack of an extended field of view, the EUVI did not directly image the coronal web that we observed with SUVI, demonstrating that the SUVI extended field-of-view observations provide a crucial missing link between middle-coronal S-web dynamics and the highly structured slow solar wind observations.
