
The Startup at the End of the Age: Creating True AI and instigating the Technological Singularity

The talk is provided on a Free/Donation basis. If you would like to support my work then you can paypal me at this link:
https://paypal.me/wai69
Or, to support me longer term, Patreon me at: https://www.patreon.com/waihtsang.

Unfortunately my internet link went down during the second Q&A session at the end and the recording cut off. A shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++ and a whole load of other riveting and most interesting techie stuff. But thankfully the main part of the talk was recorded.

TALK OVERVIEW
This talk is about the realization of the ideas behind the Fractal Brain theory and the unifying theory of life and intelligence discussed in the last Zoom talk, in the form of useful technology. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.

We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML that overcomes most of the central limitations of, and problems associated with, existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work, unlike existing artificial neural networks, which, though making sensational headlines, are impeded by severe limitations and are based on an out-of-date understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.

In the last Zoom talk, we discussed a great unification of scientific ideas relating to life and brain/mind science through the application of the mathematical idea of symmetry. In turn, the same symmetry approach leads to a unification of a mass of ideas relating to computer and information science. There has been talk in recent years of a ‘master algorithm’ of machine learning and AI. We’ll explain that it goes far deeper than that, and show that there exists a way of unifying into a single algorithm the most important fundamental algorithms in use in the world today: those relating to data compression, databases, search engines and existing AI/ML. Furthermore, and importantly, this algorithm is completely fractal, or scale invariant: the same algorithm that performs all these functions is able to run on a microcontroller unit (MCU), mobile phone, laptop and workstation, right up to a supercomputer.

The applications and utility of this new technology are endless. We will discuss the road map by which the theoretical ideas I’ve been discussing in the Zoom, academic and public talks over the past few years, and which I’ve written about in the Fractal Brain Theory book, will become practical technology, and how the Java/C/C++ code running on my workstation and mobile phones will become products and services.

Meta Reveals VR Headset Prototypes Designed to Make VR ‘Indistinguishable From Reality’

High Dynamic Range

Zuckerberg said that of the four key challenges he and Abrash overviewed, “the most important of these all is HDR.” To prove out the impact of HDR on the VR experience, the Display Systems Research team built another prototype, appropriately called Starburst. According to Meta it’s the first VR headset prototype (‘as …

Generative AI to Help Humans Create Hyperreal Population in Metaverse

In forthcoming years, everyone will get to observe how the metaverse evolves towards immersive experiences in hyperreal virtual environments filled with avatars that look and sound exactly like us. Neal Stephenson’s Snow Crash describes a vast world full of amusement parks, houses, entertainment complexes, and worlds within themselves, all connected by a virtual street tens of thousands of miles long. For those who are still not familiar with the metaverse, it is a virtual world in which users can put on virtual reality goggles and navigate a stylized version of themselves, known as an avatar, through virtual workplaces, entertainment venues, and other activities. The metaverse will be an immersive version of the internet with interactive features built on different technologies such as virtual reality (VR), augmented reality (AR), 3D graphics, 5G, holograms, NFTs, blockchain, haptic sensors, and artificial intelligence (AI). To scale personalized content experiences to billions of people, one potential answer is generative AI: the process of using AI algorithms on existing data to create new content.
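As a toy illustration of that closing definition (not any system mentioned in the article), the sketch below trains a character-level Markov chain on existing text and samples new text from it: an algorithm applied to existing data to create new content, at a vastly smaller scale than production generative AI.

```python
import random
from collections import defaultdict

# Toy "generative AI": a character-level Markov chain. It learns
# transition statistics from existing text, then samples new text
# from those statistics -- the essence of the definition above.

def train(text, order=3):
    transitions = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        transitions[context].append(text[i + order])
    return transitions

def generate(transitions, seed, length=80):
    out = seed  # seed must be `order` characters long
    for _ in range(length):
        choices = transitions.get(out[-len(seed):])
        if not choices:
            break
        out += random.choice(choices)
    return out

corpus = "the metaverse will be an immersive version of the internet " * 4
model = train(corpus)
print(generate(model, "the"))
```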

In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated assets and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models.
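As a minimal, hedged sketch of that definition, the snippet below generates a small grayscale “texture” using 2D value noise: a seeded pseudo-random lattice plus smooth interpolation. The scheme is illustrative only and not taken from any particular engine.

```python
import random

# Minimal procedural texture: 2D value noise. A seeded RNG assigns a
# fixed random value to each lattice point; bilinear interpolation
# between lattice values gives smooth variation -- a texture created
# algorithmically rather than by hand.

def lattice(ix, iy, seed=42):
    # hash of an int tuple is deterministic across runs, so the
    # texture is repeatable for a given seed
    return random.Random(hash((ix, iy, seed))).random()

def smoothstep(t):
    return t * t * (3 - 2 * t)  # ease curve: softer than linear blending

def value_noise(x, y):
    ix, iy = int(x), int(y)
    fx, fy = smoothstep(x - ix), smoothstep(y - iy)
    top = lattice(ix, iy) * (1 - fx) + lattice(ix + 1, iy) * fx
    bottom = lattice(ix, iy + 1) * (1 - fx) + lattice(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bottom * fy

# Render a tiny grayscale "texture" as ASCII shades.
shades = " .:-=+*#%"
for row in range(8):
    print("".join(shades[int(value_noise(col / 3, row / 3) * (len(shades) - 1))]
                  for col in range(8)))
```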

Algorithmic difficulty is typically seen in Diablo-style RPGs and some roguelikes, which use instancing of in-game entities to create randomized items. Less frequently, it can be used to determine the relative difficulty of hand-designed content that is subsequently placed procedurally, as can be seen with the monster design in Unangband. This lets the designer rapidly create content while leaving it up to the game to determine how challenging that content is to overcome, and consequently where in the procedurally generated environment it will appear. Notably, the Touhou series of bullet hell shooters uses algorithmic difficulty: though users are only allowed to choose certain difficulty values, several community mods enable ramping the difficulty beyond the offered values.
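A hedged sketch of that workflow, with entirely hypothetical names and formulas (not Unangband’s actual code): hand-designed monsters are scored by a difficulty heuristic, and the score decides the dungeon depth at which the generator places them.

```python
# Hypothetical illustration of "algorithmic difficulty": hand-designed
# content is scored automatically, and the score decides where the
# procedural generator places it. Names and formulas are invented here.

from dataclasses import dataclass

@dataclass
class Monster:
    name: str
    hp: int
    damage: int
    speed: float

def difficulty(m: Monster) -> float:
    # Toy heuristic: tougher, harder-hitting, faster monsters score higher.
    return m.hp * 0.5 + m.damage * 2.0 + m.speed * 10.0

def placement_depth(m: Monster, max_depth: int = 50, max_score: float = 200.0) -> int:
    # Map the difficulty score onto a dungeon depth band.
    return min(max_depth, max(1, round(difficulty(m) / max_score * max_depth)))

bestiary = [
    Monster("rat", hp=5, damage=1, speed=1.0),
    Monster("ogre", hp=60, damage=12, speed=0.8),
    Monster("lich", hp=90, damage=25, speed=1.2),
]

for m in bestiary:
    print(f"{m.name}: difficulty={difficulty(m):.0f}, "
          f"appears around depth {placement_depth(m)}")
```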

Apple’s AR glasses reportedly coming late 2024 along with second-gen VR headset

There’s a lot going on when it comes to Apple’s rumored mixed reality headset, which is expected to combine both AR and VR technologies into a single device. However, at the same time, the company has also been working on new AR glasses. According to Haitong Intl Tech Research analyst Jeff Pu, Apple’s AR glasses will be announced in late 2024.

In a note seen by 9to5Mac, Pu mentions that Luxshare will remain one of Apple’s main suppliers for devices coming between late 2022 and 2024. Among these, the analyst highlights products such as the Apple Watch Series 8, the iPhone 14, and Apple’s AR/VR headset. More than that, Pu believes Apple plans to introduce new AR glasses in the second half of 2024.

At this point, details about Apple’s AR glasses are scarce. What we do know so far is that, unlike Apple’s AR/VR headset, the new AR glasses will be highly dependent on the iPhone due to design limitations. Analyst Ming-Chi Kuo said in 2019 that the rumored “Apple Glasses” would act more as a display for the iPhone, similar to the first-generation Apple Watch.

Meta Reality Labs Research: Codec Avatars 2.0 Approaching Complete Realism with Custom Chip

Researchers at Meta Reality Labs report that their work on Codec Avatars 2.0 has reached a level where the avatars are approaching complete realism. The researchers created a prototype virtual reality headset with a custom-built accelerator chip specifically designed to handle the AI processing needed to render Meta’s photorealistic Codec Avatars on standalone headsets.

The prototype virtual reality avatars are driven by advanced machine learning techniques.

Meta first showcased its work on the sophisticated Codec Avatars back in March 2019. The avatars are powered by multiple neural networks and are generated via a special capture rig containing 171 cameras. Once generated, an avatar is driven in real time by a prototype virtual reality headset with five cameras: two internal cameras viewing each eye, and three external cameras viewing the lower face. It is thought that such advanced, photoreal avatars may one day replace video conferencing.
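The article describes this pipeline only at a high level. Below is a loose sketch of its encode/transmit/decode shape; all names, dimensions, and the linear stand-ins for the neural networks are hypothetical, not Meta’s actual architecture. The key point it illustrates is that only a compact per-frame code needs to travel between headset and renderer.

```python
# Loose sketch of the codec-avatar idea: encode sparse headset camera
# views into a compact latent "expression code", transmit it, then
# decode it into parameters that drive the avatar. Everything here is
# a hypothetical stand-in for the trained networks described above.

import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16          # compact code transmitted per frame
CAMERA_PIXELS = 64 * 64  # one tiny grayscale crop per headset camera
NUM_CAMERAS = 5          # two eye cameras + three lower-face cameras

# Random linear maps stand in for the trained encoder/decoder networks.
encoder = rng.normal(size=(LATENT_DIM, NUM_CAMERAS * CAMERA_PIXELS)) * 0.01
decoder = rng.normal(size=(10_000, LATENT_DIM)) * 0.01  # -> avatar vertex offsets

def encode(camera_frames: np.ndarray) -> np.ndarray:
    """Compress this frame's camera views into the expression code."""
    return encoder @ camera_frames.reshape(-1)

def decode(code: np.ndarray) -> np.ndarray:
    """Expand the code into per-vertex offsets that drive the avatar."""
    return decoder @ code

frames = rng.random((NUM_CAMERAS, 64, 64))   # fake sensor input
code = encode(frames)                        # tiny payload to transmit
offsets = decode(code)                       # applied to the avatar mesh
print(code.shape, offsets.shape)             # (16,) (10000,)
```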

Axon Announces TASER Drone Development to Address Mass Shootings

Remotely operated, non-lethal drones key in long-term plan to detect and stop mass shootings in less than 60 seconds

SCOTTSDALE, Ariz., June 2, 2022 /PRNewswire/ — Axon (NASDAQ: AXON), the global leader in connected public safety technologies, today announced it has formally begun development of a non-lethal, remotely operated TASER drone system as part of a long-term plan to stop mass shootings, and reaffirmed its commitment to public engagement and dialogue during the development process. The plan includes accelerating detection and improving real-time situational awareness of active shooter events, enhancing first responder effectiveness through VR training, and deploying remotely operated non-lethal drones capable of incapacitating an active shooter in less than 60 seconds.

OpenAI punished dev who used GPT-3 to ‘resurrect’ the dead — was this fair?

Predicting it now: in the 2030s there will be tons of this, and not just chatbots of dead people, but versions made to seem alive, 24/7, in VR worlds, the metaverse, whatever. There will probably be shops that cater to this and try to make it as close and realistic as possible; it will probably mostly be underground.


The recent case of a man making a chatbot based on his deceased fiancée raises ethical questions: Is this something we want?

How Soul Machines is making new-gen avatars life-like

In the not-too-distant future, many of us may routinely use 3D headsets to interact in the metaverse with virtual iterations of companies, friends, and life-like company assistants. These may include Lily from AT&T, Flo from Progressive, Jake from State Farm, and the Swami from CarShield. We’ll also be interacting with new friends like Nestlé‘s Cookie Coach, Ruth, the World Health Organization’s Digital Health worker Florence, and many others.

Creating digital characters for virtual reality apps and ecommerce is a fast-rising segment of IT. San Francisco-based Soul Machines, a company rooted in both the animation and artificial intelligence (AI) sectors, is jumping at the opportunity to create animated digital avatars to bolster interactions in the metaverse. Customers are much more likely to buy something when a familiar face — digital or human — is involved.

Investors, understandably, are hot on the idea. This week, the 6-year-old company revealed an infusion of series B financing ($70 million) led by new investor SoftBank Vision Fund 2, bringing the company’s total funding to $135 million to date.

Project CAMBRIA VR Headset — The First Live Demonstration!

Project Cambria, the next-generation standalone mixed reality headset, is coming out later this year.
“This demo was created using Presence Platform, which we built to help developers build mixed reality experiences that blend physical and virtual worlds. The demo, called “The World Beyond,” will be available on App Lab soon. It’s even better with full color passthrough and the other advanced technologies we’re adding to Project Cambria. More details soon.”
Let’s Get into it!

SUPPORT THE CHANNEL
► Become a VR Techy Patreon → https://www.patreon.com/TyrielWood.
► Become a Sponsor on YouTube → https://www.youtube.com/channel/UC5rM
► Get some VR Tech Merch → tyriel-wood.creator-spring.com.
► VRCovers HERE: https://vrcover.com/?itm=274
► VR Rock Prescription lens: www.vr-rock.com (5% discount code: vrtech)

FOLLOW ME ON:
► Twitter → https://twitter.com/Tyrielwood.
► Facebook → https://www.facebook.com/TyrielWoodVR
► Instagram → https://www.instagram.com/tyrielwoodvr.
► Join our Discord → https://discord.gg/nsSjXx4kfj.

A BIG THANKS to my VR Techy Patreons for their support:
Trond Hjerde, Dinesh Punni, Randy Leestma, yu gao, Daniel Nehring II, Alexandre Soares da Silva, Kiyoshi Akita, Tolino Marco, Infligo, VeryEvilShadow.

MUSIC
► My Music from Epidemic Sounds → https://www.epidemicsound.com/referral/kf0ycv.

#Meta #VRHeadset #Cambria.

The History and Science of Virtual Reality Headsets

You don’t even have to cover your mouth. Virtual reality has come a long way in recent years, creating unreal environments and unprecedented tactile experiences. However, researchers have struggled to create an adequate simulation of our most intimate touch sensations, such as kissing.


You would be forgiven if you thought that the current wave of virtual reality headsets was a modern phenomenon. There were obviously some awkward—and failed—attempts to capitalize on the virtual reality craze of the early 1990s, and for most people, this is as far back as virtual reality goes. The truth is that virtual reality is much, much older.

The science behind virtual reality was first explored in a practical sense as far back as the 1800s, but some could argue that it goes all the way back to Leonardo da Vinci and the first explorations of perspective in the paintings of his era. So how do virtual reality headsets work, and how come it took so long for them to become, well, a reality?

A virtual reality headset works because of a physiological concept known as stereopsis. You may not have heard the proper name, but you know about it; this refers to our ability to perceive depth because of the subtle horizontal differences in the image that each eye receives when we look at something.
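The geometry behind that: for a rectified stereo pair with focal length f (in pixels), baseline B between the two viewpoints, and horizontal disparity d (in pixels), the depth of the point is Z = fB/d. A small sketch with illustrative numbers (a roughly eye-like baseline, hypothetical focal length):

```python
# Depth from binocular disparity -- the geometry behind stereopsis.
# For a rectified stereo pair: Z = f * B / d, where f is focal length
# in pixels, B is the baseline between the two viewpoints, and d is
# the horizontal disparity in pixels. Numbers below are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point in front of viewer)")
    return focal_px * baseline_m / disparity_px

f = 1000.0   # focal length in pixels (hypothetical)
B = 0.063    # baseline in metres, roughly an interpupillary distance
for d in (40.0, 20.0, 10.0):   # larger disparity => nearer object
    print(f"disparity {d:4.0f}px  ->  depth {depth_from_disparity(f, B, d):.2f} m")
```

Notice how depth falls off as disparity grows: nearby objects produce large horizontal differences between the two eyes’ images, while distant objects produce almost none, which is why stereopsis contributes most to depth perception at close range.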
