
The tech giant wants to be known for more than social media’s ills.


Facebook is planning to change its company name next week to reflect its focus on building the metaverse, according to a source with direct knowledge of the matter.

The coming name change, which CEO Mark Zuckerberg plans to discuss at the company’s annual Connect conference on October 28th but could unveil sooner, is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entails. The rebrand would likely position the blue Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more. A spokesperson for Facebook declined to comment for this story.

Facebook already has more than 10,000 employees building consumer hardware like AR glasses that Zuckerberg believes will eventually be as ubiquitous as smartphones. In July, he told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.”

Facebook is pouring a lot of time and money into augmented reality, including building its own AR glasses with Ray-Ban. Right now, these gadgets can only record and share imagery, but what does the company think such devices will be used for in the future?

A new research project led by Facebook’s AI team suggests the scope of the company’s ambitions. It imagines AI systems that constantly analyze people’s lives using first-person video, recording what they see, do, and hear in order to help them with everyday tasks. Facebook’s researchers have outlined a series of skills they want these systems to develop, including “episodic memory” (answering questions like “where did I leave my keys?”) and “audio-visual diarization” (remembering who said what when).
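To make “episodic memory” concrete, here is a minimal, purely illustrative Python sketch: a log of timestamped object sightings from a wearable camera that answers “where did I leave my keys?” by returning the most recent sighting. The detection step is stubbed out, and all names and labels are assumptions for this example, not part of Facebook’s actual system.

```python
# Hypothetical sketch of an "episodic memory" index over first-person video.
# This is NOT Facebook's system: the object detector is stubbed out, and the
# labels below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    timestamp: float   # seconds into the wearer's video stream
    obj: str           # detected object label, e.g. "keys"
    location: str      # coarse scene label, e.g. "kitchen counter"

class EpisodicMemory:
    def __init__(self) -> None:
        self._log: List[Observation] = []

    def record(self, obs: Observation) -> None:
        """Append a detection that a real system would get from a vision model."""
        self._log.append(obs)

    def last_seen(self, obj: str) -> Optional[Observation]:
        """Answer 'where did I leave my X?' with the most recent sighting."""
        for obs in reversed(self._log):
            if obs.obj == obj:
                return obs
        return None

memory = EpisodicMemory()
memory.record(Observation(12.5, "keys", "hallway table"))
memory.record(Observation(340.0, "keys", "kitchen counter"))

hit = memory.last_seen("keys")
if hit:
    print(f"Last seen on the {hit.location} at t={hit.timestamp:.0f}s")
```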

Forward-looking: The service department can seem like the slowest part of a dealership, especially when it’s your car getting worked on. But Mercedes-Benz is infusing its dealerships with AR technology to speed up the diagnosis and repair of tricky and complex issues with its Virtual Remote Support, powered by Microsoft’s HoloLens 2 and Dynamics 365 Remote Assist. I visited one of Mercedes’ showcases last week to check out the implementation and get some hands-on time with the HoloLens 2.

When customers brought in a hard-to-solve problem, it was common for Mercedes-Benz to call in a “flying doctor,” an expert from HQ who would fly in and get hands-on with the issue. That could take days to arrange, not to mention the costs and environmental impact of flying these technical specialists all over the country. All the while, the customer is left without their luxury automobile.

While many shoppers choose a Mercedes for its styling and performance, it’s the service experience that shapes their future purchases. Long waits could push someone to change brands.



The waters are rising! Japan is using Augmented Reality to teach children about the dangers of flash floods 🌊


The Status of Science and Technology report is an important step in monitoring implementation of the Sendai Framework and an attempt to capture progress across geographies, stakeholders, and disciplines in applying science and technology to risk reduction in Asia-Pacific.

RENO, Nev. (KOLO) — Over the last 15 months, some of the world’s most advanced spinal care has taken place in Northern Nevada.

In July of 2020, Spine Nevada’s Dr. James Lynch was the first surgeon in the country to use Augmedics Xvision in a community hospital. He’s now the first to reach 100 cases using the cutting-edge technology, which essentially allows surgeons to look through a patient’s skin using a pre-loaded CT scan and an augmented reality headset.

“A long fusion that would’ve taken us an hour before can be done in about 15 minutes,” said Dr. Lynch. “The proof is in the pudding. The last 100 patients, most of them have done very well and benefited from this technology.”
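For a sense of the geometry underneath this kind of AR navigation, here is a minimal Python sketch, not Augmedics’ software: a rigid transform of the sort computed when registering a pre-operative CT scan to a headset’s tracking frame maps a landmark from CT coordinates to where the headset should render it. Every number is made up for illustration.

```python
# Minimal sketch of the core math behind see-through AR surgical navigation:
# a rigid 4x4 transform registers pre-operative CT coordinates to the
# headset's tracking frame. Values are illustrative, not from any real system.
import numpy as np

def rigid_transform(rotation_deg: float, translation_mm: list) -> np.ndarray:
    """Build a 4x4 homogeneous transform (rotation about z plus translation)."""
    theta = np.radians(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation_mm
    return T

# Registration computed from fiducial markers during setup (made-up values).
ct_to_headset = rigid_transform(rotation_deg=12.0,
                                translation_mm=[40.0, -15.0, 250.0])

# A vertebra landmark in CT millimetre coordinates (homogeneous form).
landmark_ct = np.array([102.0, 87.5, 33.0, 1.0])

# Where the headset should draw that landmark in its own frame.
landmark_headset = ct_to_headset @ landmark_ct
print(landmark_headset[:3])
```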

The University of Bristol is part of an international consortium of 13 universities that partnered with Facebook AI to advance egocentric perception. As a result of this initiative, we have built the world’s largest egocentric dataset using off-the-shelf, head-mounted cameras.


Progress in the fields of artificial intelligence (AI) and augmented reality (AR) requires learning from the same data humans process to perceive the world. Our eyes allow us to explore places, understand people, manipulate objects and enjoy activities—from the mundane act of opening a door to the exciting interaction of a game of football with friends.

Egocentric 4D Live Perception (Ego4D) is a massive-scale dataset that compiles 3,025 hours of footage from the wearable cameras of 855 participants in nine countries: the UK, India, Japan, Singapore, Saudi Arabia, Colombia, Rwanda, Italy, and the US. The data captures a wide range of activities from the “egocentric” perspective—that is, from the viewpoint of the person carrying out the activity. The University of Bristol is the only UK representative in this diverse and international effort, collecting 270 hours from 82 participants who captured footage of their chosen activities of daily living—such as practicing a musical instrument, gardening, grooming their pet, or assembling furniture.
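As a hypothetical illustration of how a researcher might slice a dataset like this by activity, the short Python sketch below filters a JSON clip manifest by scenario and totals the footage. The file name and field names (“videos”, “scenarios”, “duration_sec”) are assumptions made for this example, not the actual Ego4D schema.

```python
# Hypothetical sketch: filter an egocentric-video manifest by scenario.
# The manifest layout below is an assumption, not the real Ego4D schema.
import json

def clips_for_scenario(manifest_path: str, scenario: str) -> list:
    """Return all clip records whose scenario list contains `scenario`."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [
        clip for clip in manifest["videos"]
        if scenario in clip.get("scenarios", [])
    ]

gardening = clips_for_scenario("ego4d_manifest.json", "gardening")
total_hours = sum(clip["duration_sec"] for clip in gardening) / 3600
print(f"{len(gardening)} clips, {total_hours:.1f} hours of gardening footage")
```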

“In the not-too-distant future you could be wearing smart AR glasses that guide you through a recipe or how to fix your bike—they could even remind you where you left your keys,” said Principal Investigator at the University of Bristol and Professor of Computer Vision, Dima Damen.

Coastal artisanal fisheries, particularly those in developing countries, are facing a global crisis of overexploitation [1]. Artificial reefs (ARs), or human-made reefs [2], have been widely advocated by governmental and non-governmental conservation and management organizations for addressing these issues. Industries, particularly oil and gas, seeking to avoid the costs of removal or conventional disposal of used materials are often major advocates for deploying ARs. Yet major questions remain regarding the success of such efforts in the context of weak governance and poorly sustained international investment in AR development projects. There is frequent confusion over whether or not ARs should be fishing sites, and the precise goals of constructing them are often unclear, making it difficult to evaluate their success [3]. Over the last 40 years, both failed and successful AR implementation programs have been reported [4,5]. The main point of the present work is to underline the importance of governance and to address how social and management factors bear on AR “success”.

To improve fishery yields, it has been recommended that ARs be no-take areas (e.g., [2]). Yet most ARs were historically delineated as sites for fishing [4] and have rarely been implemented at large scales in or for no-take zones, even in countries with centuries of experience constructing ARs, such as Japan. In Japan, fishery authorities and local fishers use ARs to promote sustainable catches and to establish nursery grounds for target species [6]. However, fishery authorities and local fishery cooperatives in Japan hold extensive management authority over ARs. For example, fishing around ARs is usually limited to hook-and-line techniques, with net fishing rarely permitted in areas where the risk of entanglement in ARs is high. Furthermore, fishing gear and fishing seasons are often restricted around ARs in Japan during spawning. These practices are recognized for their effectiveness in maintaining good fishing performance and marine conservation in Japan and wherever else they have been implemented [7].

Attempts to transpose ARs to developing countries have, however, frequently ended in failure [8], particularly when project funding comes to an end [9]. It is therefore important to provide recommendations that improve the sustainability of AR deployments and help realize their biodiversity conservation and fisheries management goals. This is particularly important in developing countries, which are often characterized by poor governance. For fisheries scientists and marine ecologists, the effectiveness of ARs is primarily quantified by surveying fish populations on ARs. In particular, the question of whether ARs facilitate the “production” of new fish or merely attract the surrounding fish remains under debate [10,11,12]. Few studies have documented how ARs are managed, or the impacts of such management [8,13], despite the key importance of protecting no-take ARs from illegal fishing being repeatedly highlighted [2]. Mathematical models, implemented to set the optimal AR volume to maximize catches, suggest that, although attraction and production effects can modulate the response, the effect of ARs on fisheries mostly depends on governance options and efficiency [14]. Existing models show that fishing exclusively on ARs has consistently negative impacts on equilibrium catches. In comparison, ARs can have negative or positive impacts on catches when fishing occurs in the areas surrounding them, as a function of the magnitude of the AR attraction effect [14]. Whether or not ARs are managed as no-take areas influences these outcomes. For instance, on unmanaged ARs, overexploitation risk increases because fish become more accessible to fishing fleets. When fishing is banned on ARs, by contrast, the fish biomass concentrated near the AR rises, leading to a “spill-over” effect that enhances equilibrium catch in adjacent fishing areas [15].
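The intuition behind those modeling results can be reproduced with a deliberately simple two-patch simulation, a toy sketch rather than the published model cited above. Fish grow logistically in an open area and on an AR; the AR attracts fish and spills some back; and the AR either is or is not fished, with a higher catch rate on the reef when it is, since concentrated fish are easier to catch. All parameters are illustrative assumptions.

```python
# Toy two-patch model (NOT the published model cited above) of why fishing
# directly on an AR can depress long-run catch while a no-take AR can raise
# it through spill-over. All parameters are illustrative assumptions.
def equilibrium_catch(fish_on_ar: bool, years: int = 300) -> float:
    open_b, ar_b = 100.0, 20.0    # biomass in the open area / on the AR
    r, k_open, k_ar = 0.4, 150.0, 75.0
    effort_open = 0.25            # annual fishing mortality in the open area
    effort_ar = 0.5               # higher on the AR: concentrated fish are easier to catch
    catch = 0.0
    for _ in range(years):
        # Logistic growth in each patch.
        open_b += r * open_b * (1 - open_b / k_open)
        ar_b += r * ar_b * (1 - ar_b / k_ar)
        # Attraction: a fraction of open-water fish concentrates on the AR.
        moved = 0.015 * open_b
        open_b, ar_b = open_b - moved, ar_b + moved
        # Spill-over: some AR biomass leaks back to the open area.
        spill = 0.1 * ar_b
        ar_b, open_b = ar_b - spill, open_b + spill
        # Harvest: the AR is fished only if it is not a no-take zone.
        catch = effort_open * open_b + (effort_ar * ar_b if fish_on_ar else 0.0)
        open_b -= effort_open * open_b
        if fish_on_ar:
            ar_b -= effort_ar * ar_b
    return catch

print("AR open to fishing:", round(equilibrium_catch(True), 1))   # depressed catch
print("no-take AR        :", round(equilibrium_catch(False), 1))  # spill-over gain
```

With these made-up numbers, the no-take run settles at a markedly higher annual catch than the fished-AR run, echoing the spill-over argument above; the comparison, not the specific values, is the point.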

If you follow the latest trends in the tech industry, you probably know there’s been a fair amount of debate about what the next big thing is going to be. The odds-on favorite for many has been augmented reality (AR) glasses, while others point to fully autonomous cars, and a few are clinging to the potential of 5G. With the surprise debut of Amazon’s Astro a few weeks back, personal robotic devices and digital companions have also thrown their hats into the ring.

However, while there has been little agreement on exactly what the next thing is, there seems to be little disagreement that whatever it turns out to be, it will be somehow powered, enabled, or enhanced by artificial intelligence (AI). Indeed, the fact that AI and machine learning (ML) are our future seems to be a foregone conclusion.

Yet, if we make an honest assessment of where some of these technologies actually stand, functionally, versus initial expectations, it’s fair to argue that the results have been disappointing on many levels. In fact, if we extend that thought to what AI/ML were supposed to do for us overall, we come to a similarly disappointing conclusion.