Blog

Archive for the ‘augmented reality’ category: Page 17

Sep 14, 2022

US military set to get first delivery from $22 billion Microsoft HoloLens deal

Posted by in categories: augmented reality, military

Microsoft’s augmented reality headset, the HoloLens, has been in the works for years now, but it’s been a while since we’ve heard any news. We were seeing demos of it way back in 2015, but Microsoft has been pretty quiet on the tech in recent years when it comes to a consumer release.

What we’ve heard tons about is Microsoft’s deal to supply the United States Army with HoloLens tech. We first got wind of the deal back in 2018, with talk of a $480 million contract to help “increase lethality” of combat missions. It wasn’t until 2021 that Microsoft officially signed a much pricier $22 billion contract with the Army to supply military-grade HoloLens units.

Sep 5, 2022

Apple Researchers Develop NeuMan: A Novel Computer Vision Framework that can Generate Neural Human Radiance Field from a Single Video

Posted by in categories: augmented reality, computing, mapping, neuroscience

Neural Radiance Fields (NeRF) greatly enhanced the quality of novel view synthesis when they were first introduced. The technique was originally proposed as a way to reconstruct a static scene from a series of posed photographs, but it has been swiftly extended to dynamic and uncalibrated scenes. With the help of sizable controlled datasets, recent work also focuses on animating these human radiance field models, broadening the application domain of radiance-field-based modeling to augmented reality experiences. In this study, the authors focus on the case where just one video is given. They aim to reconstruct the human and the static scene models and to enable novel-pose rendering of the person without the need for pricey multi-camera setups or manual annotations.

Neural Actor can render novel human poses, but it needs several videos. Even with the most recent improvements in NeRF techniques, this is far from a simple task: the NeRF models must be trained using many cameras, constant lighting and exposure, transparent backgrounds, and precise human geometry. According to the comparison in the paper, HyperNeRF cannot be controlled by human poses; it instead builds a dynamic scene from a single video. ST-NeRF uses many cameras to reconstruct each person with a time-dependent NeRF model, although editing is limited to changing the bounding box. HumanNeRF creates a human model from a single video with carefully annotated masks; however, it does not demonstrate generalization to novel poses.

With a model trained on a single video, Vid2Actor can produce new human poses, but it cannot model the surroundings. The authors address these issues by proposing NeuMan, a framework that reconstructs both the person and the scene from a single in-the-wild video and can then render novel human poses and novel viewpoints. The high-quality pose-driven rendering shown in Figure 1 is made possible by training NeRF models for both the human and the scene. From the moving camera’s video, they first estimate the camera poses, the sparse scene model, the depth maps, the human pose, the human shape, and the human masks.
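
To make that pipeline concrete, here is a minimal Python sketch of the two-model approach the summary describes, assuming hypothetical helper functions for each preprocessing step. It is a structural illustration only, not NeuMan’s actual code.

# Structural sketch of the single-video pipeline summarized above.
# Every helper here is a hypothetical placeholder (body elided with "...").

def estimate_camera_poses(frames): ...          # e.g. structure-from-motion
def reconstruct_sparse_scene(frames, cams): ...
def estimate_depth(frame): ...
def estimate_body_pose_and_shape(frame): ...    # SMPL-style parameters
def segment_human(frame): ...

def preprocess(frames):
    cams = estimate_camera_poses(frames)
    return {
        "cameras": cams,
        "sparse_scene": reconstruct_sparse_scene(frames, cams),
        "depth": [estimate_depth(f) for f in frames],
        "body": [estimate_body_pose_and_shape(f) for f in frames],
        "masks": [segment_human(f) for f in frames],
    }

def train_neuman(frames):
    data = preprocess(frames)
    # Two radiance fields are trained: one for the static scene (human pixels
    # masked out) and one for the human, learned in a canonical pose-normalized
    # space so the person can later be re-posed and rendered from new viewpoints.
    scene_nerf = ...   # fit a scene NeRF on background pixels
    human_nerf = ...   # fit a canonical human NeRF using body poses and masks
    return scene_nerf, human_nerf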

Sep 2, 2022

No VR or AR: A new pocket-size eyeglass will be just big screen experience in your eyes

Posted by in categories: augmented reality, computing, education, mobile phones, virtual reality

You need to wait till 2023 to get them though.

Lenovo has unveiled its T1 Glasses at its Tech Life 2022 event, promising to place a full HD video-watching experience right inside your pocket, according to a company press release.

Mobile computing devices have exploded in the past few years as gaming has become more intense, and various video streaming platforms have gathered steam. The computing power of smartphones and tablets has increased manifold. Whether you want to ambush other people in an online shooting game or sit back and watch a documentary in high-definition, a device in your pocket can help you do that with ease.

Continue reading “No VR or AR: A new pocket-size eyeglass will be just big screen experience in your eyes” »

Sep 1, 2022

Will AR Smart Glasses Replace Smartphones and Become our Personal Buddy Bots?

Posted by in categories: augmented reality, mobile phones, robotics/AI

When Steve Jobs unveiled the iPhone in 2007, no one understood at the time how disruptive that device would be to existing technology. Now with rumors of Apple launching their augmented reality (AR) smart glasses products next year, people are speculating about how disruptive this technology will be.

Since the iPhone is one of Apple’s primary revenue streams, the company may be cautious about releasing a product that could encroach on its own turf. However, as we’ll suggest below, it may not be an either/or situation for users.

Aug 31, 2022

Augmented Reality & Not Needing Physical Objects — Mark Zuckerberg & Joe Rogan

Posted by in categories: augmented reality, virtual reality

https://www.youtube.com/watch?v=Tgp_0FvKyyg

At the moment I think Meta VR gets laughed at, but this is a good explanation.


Clip from The Joe Rogan Experience #1863 with Mark Zuckerberg.
August 25th 2022

Continue reading “Augmented Reality & Not Needing Physical Objects — Mark Zuckerberg & Joe Rogan” »

Aug 28, 2022

A Case Study For The Industry: LG Investing In Metaverse

Posted by in categories: augmented reality, business, transportation, virtual reality

As the world increasingly embraces Web3, corporations are turning to metaverse applications to stay ahead of the curve. According to Verified Market Research, the metaverse market was valued at USD 27.21 billion in 2020 and is anticipated to reach USD 824.53 billion by 2030, expanding at a CAGR of 39.1 percent from 2022 to 2030. This is due to the increasing demand for AR/VR content and gaming and the need for more realistic and interactive training simulations.
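
As a rough sanity check, the growth rate implied by those two figures can be computed directly; the snippet below is illustrative arithmetic only and is not taken from the report.

# Implied compound annual growth rate from the market figures above.
start_usd_bn, end_usd_bn, years = 27.21, 824.53, 10   # 2020 -> 2030
cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"implied 2020-2030 CAGR: {cagr:.1%}")
# ~40.6% over the full decade, in the same ballpark as the 39.1%
# the report quotes for 2022-2030.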

These startups show Proof of Concept with a working product and clear value proposition for businesses and consumers.


Launch a corporate accelerator: Another way to increase your exposure to the Metaverse is to launch a corporate accelerator. This will give you access to a broader range of startups and help you build a more diverse portfolio. In addition, it will allow you to offer mentorship and resources to the startups you invest in.

Continue reading “A Case Study For The Industry: LG Investing In Metaverse” »

Aug 25, 2022

Deep Dive: Why 3D reconstruction may be the next tech disruptor

Posted by in categories: augmented reality, robotics/AI, virtual reality

Artificial intelligence (AI) systems must understand visual scenes in three dimensions to interpret the world around us. For that reason, images play an essential role in computer vision, significantly affecting quality and performance. Unlike widely available 2D data, 3D data is rich in scale and geometry information, giving machines a much richer understanding of their environment.

Data-driven 3D modeling, or 3D reconstruction, is a growing computer vision domain increasingly in demand from industries including augmented reality (AR) and virtual reality (VR). Rapid advances in implicit neural representation are also opening up exciting new possibilities for virtual reality experiences.
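
For readers unfamiliar with implicit neural representations, the core idea fits in a few lines of Python: geometry is stored as a function (here an untrained toy MLP) that maps any 3D coordinate to a value such as occupancy, rather than as a mesh or voxel grid. This is a generic, hypothetical illustration, not code from any system mentioned above.

# Toy implicit representation: a tiny MLP mapping 3D points to occupancy.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def occupancy(xyz):
    """xyz: (N, 3) query coordinates -> (N,) occupancy values in [0, 1]."""
    h = np.tanh(xyz @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

points = rng.uniform(-1.0, 1.0, size=(5, 3))
print(occupancy(points))   # training would fit these outputs to real 3D scans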

Aug 9, 2022

How image features influence reaction times

Posted by in categories: augmented reality, biotech/medical, neuroscience, virtual reality

It’s an everyday scenario: you’re driving down the highway when, out of the corner of your eye, you spot a car merging into your lane without signaling. How fast can your eyes react to that visual stimulus? Would it make a difference if the offending car were blue instead of green? And if the color green shortened that split-second period between the initial appearance of the stimulus and the moment the eye begins moving towards it (known to scientists as saccadic latency), could drivers benefit from an augmented reality overlay that made every merging vehicle green?

Qi Sun, a joint professor in Tandon’s Department of Computer Science and Engineering and the Center for Urban Science and Progress (CUSP), is collaborating with neuroscientists to find out.

He and his Ph.D. student Budmonde Duinkharjav—along with colleagues from Princeton, the University of North Carolina, and NVIDIA Research—recently authored the paper “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” presenting a model that predicts temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Inspired by neuroscience, the model could ultimately have great implications for telemedicine, e-sports, and any other arena in which AR and VR are leveraged.
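
The gist of such a model can be sketched in a few lines: simple image statistics go in, and a distribution over saccade latencies comes out. The features, coefficients, and log-normal form below are invented for illustration only; the paper’s actual model is a learned neural network fitted to eye-tracking data.

# Toy probabilistic "image statistics -> saccade latency distribution" model.
import numpy as np

def image_features(img):
    """img: 2-D luminance array in [0, 1] -> (mean luminance, RMS contrast)."""
    return img.mean(), img.std()

def latency_samples(img, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    mean_lum, contrast = image_features(img)
    # Hypothetical mapping: higher contrast -> shorter expected latency.
    mean_ms = 250.0 - 80.0 * contrast + 10.0 * (0.5 - mean_lum)
    spread = 25.0 / mean_ms
    # Latencies are right-skewed, so draw from a log-normal whose mean is mean_ms.
    mu = np.log(mean_ms) - 0.5 * spread ** 2
    return rng.lognormal(mean=mu, sigma=spread, size=n_samples)

patch = np.clip(np.random.default_rng(1).normal(0.5, 0.2, (64, 64)), 0.0, 1.0)
samples = latency_samples(patch)
print(f"predicted median saccade latency: {np.median(samples):.0f} ms")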

Aug 3, 2022

Augmented reality could be the future of paper books, according to new research

Posted by in categories: augmented reality, education, energy

“Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”

Power efficiency and pre-printed conductive paper are some of the new features that allow Surrey’s augmented books to be manufactured on a semi-industrial scale. With no wiring visible to the reader, the books let users trigger digital content with a simple gesture (such as a swipe of a finger or the turn of a page), which is then displayed on a nearby device.
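
A rough sketch of that interaction flow might look like the following, where the page IDs, gesture names, and the send_to_device stand-in are all invented for illustration and are not Surrey’s actual implementation:

# Hypothetical mapping from sensed page gestures to content on a nearby device.
CONTENT_MAP = {
    ("page_3", "swipe"):     "video:castle_tour.mp4",
    ("page_3", "page_turn"): "panel:chapter_4_intro",
    ("page_7", "swipe"):     "model3d:roman_aqueduct.glb",
}

def send_to_device(content):
    # Stand-in for pushing content to a paired phone or tablet.
    print(f"displaying {content} on the nearby device")

def on_gesture(page_id, gesture):
    """Map a gesture sensed by the conductive page to digital content."""
    content = CONTENT_MAP.get((page_id, gesture))
    if content is not None:
        send_to_device(content)
    return content

on_gesture("page_3", "swipe")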

Aug 2, 2022

Metaverse Headsets and Smart Glasses are the Next-gen Data Stealers

Posted by in categories: augmented reality, biotech/medical, internet, media & arts, privacy, robotics/AI, security, virtual reality

In a paper distributed via ArXiv, titled “Exploring the Unprecedented Privacy Risks of the Metaverse,” boffins at UC Berkeley in the US and the Technical University of Munich in Germany play-tested an “escape room” virtual reality (VR) game to better understand just how much data a potential attacker could access. Through a 30-person study of VR usage, the researchers – Vivek Nair (UCB), Gonzalo Munilla Garrido (TUM), and Dawn Song (UCB) – created a framework for assessing and analyzing potential privacy threats. They identified more than 25 examples of private data attributes available to potential attackers, some of which would be difficult or impossible to obtain from traditional mobile or web applications.

The metaverse that is rapidly becoming a part of our world has long been an essential part of the gaming community: interaction-based games like Second Life, Pokemon Go, and Minecraft have served as virtual social platforms for years. The founder of Second Life, Philip Rosedale, and many other security experts have lately been vocal about Meta’s impact on data privacy. Since the core concept is similar, the same kinds of data privacy issues can be expected to surface within Meta’s metaverse.

There has been a buzz in the tech market that by the end of 2022 the metaverse could revive AR/VR device shipments and take them as high as 14.19 million units, up from 9.86 million in 2021, a year-over-year increase of roughly 44 percent. The AR/VR device market is expected to boom despite component shortages and the difficulty of developing new technologies. The growth momentum will also be driven by the increased demand for remote interactivity stemming from the pandemic. But what will happen when these VR or metaverse headsets start stealing your precious data? Not just headsets but smart glasses too are prime suspects when it comes to privacy concerns.
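
To illustrate the kind of inference the Berkeley/TUM study warns about, the sketch below turns seemingly innocuous VR telemetry into personal attributes. The attribute list, ratios, and thresholds are invented examples, not the paper’s actual framework.

# Hypothetical example: inferring personal attributes from routine VR telemetry.
from dataclasses import dataclass

@dataclass
class VRTelemetry:
    headset_height_m: float   # standing eye height from head tracking
    arm_span_m: float         # controller-to-controller reach
    play_area_m2: float       # guardian/boundary size
    mean_reaction_s: float    # response time to in-game events

def infer_attributes(t: VRTelemetry) -> dict:
    return {
        # Anthropometrics fall out of raw tracking data almost directly
        # (eye height is roughly 94% of standing height).
        "approx_body_height_m": round(t.headset_height_m / 0.936, 2),
        "approx_wingspan_m": round(t.arm_span_m, 2),
        # Room size and reaction time hint at living situation and age/fitness.
        "large_living_space": t.play_area_m2 > 10.0,
        "slow_reactions": t.mean_reaction_s > 0.5,
    }

print(infer_attributes(VRTelemetry(1.62, 1.70, 6.5, 0.42)))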

Continue reading “Metaverse Headsets and Smart Glasses are the Next-gen Data Stealers” »

Page 17 of 66