Aug 4, 2024
How to access Chinese LLM chatbots across the world
Posted by Saúl Morales Rodriguéz in category: robotics/AI
Some models are available to users without Chinese phone numbers, while open-source platforms provide other workarounds.
A Mormon Transhumanist has built a chatbot trained on his entire collection of writings, social media posts, and presentations.
I’ve merged with artificial intelligence. Well, I’m working on it. And I’m excited to share the results with you.
Trained on everything that I’ve written publicly since 2000, he might be better at Mormon Transhumanism than I am.
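The post doesn’t describe how the chatbot was built; a common way to ground a chatbot in a personal corpus is retrieval-augmented generation rather than full fine-tuning. The sketch below is a minimal, hypothetical example of that approach: the `corpus/` directory, the chunk size, the embedding model choice, and the prompt wording are all assumptions, not details from the post.

```python
# Minimal sketch of retrieval-augmented chat over a personal text corpus.
# Assumptions (not from the post): the writings live as plain-text files under
# ./corpus/, and a local sentence-transformers model is used for embeddings.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Load and chunk the corpus into ~500-character passages.
chunks = []
for path in Path("corpus").glob("*.txt"):
    text = path.read_text(encoding="utf-8")
    chunks.extend(text[i : i + 500] for i in range(0, len(text), 500))

chunk_vectors = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 5) -> list[str]:
    """Return the k corpus chunks most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vectors @ q          # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def build_prompt(question: str) -> str:
    """Assemble a prompt asking an LLM to answer in the author's voice."""
    context = "\n---\n".join(retrieve(question))
    return (
        "Answer in the voice of the author, using only this context:\n"
        f"{context}\n\nQuestion: {question}"
    )

# The prompt would then be sent to whatever chat model backs the bot.
print(build_prompt("What is Mormon Transhumanism?"))
```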
An extremely cool application of large language models combined with other AI tools such as text-to-speech, speech-to-text, and image recognition and captioning models.
We created a robot tour guide using Spot integrated with ChatGPT and other AI models to explore the robotics applications of foundation models.
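The exact integration code behind the Spot tour guide isn’t published in this post; as a rough illustration of how such a pipeline can be wired together, the sketch below chains speech-to-text, a chat model, and text-to-speech using the OpenAI Python client. The model and voice choices are assumptions, and the robot-control side is stubbed out rather than shown against Boston Dynamics’ SDK.

```python
# Rough sketch of an LLM-driven tour-guide loop: speech-to-text -> chat model
# -> text-to-speech. The robot-specific pieces are omitted; driving Spot itself
# would go through Boston Dynamics' SDK, which is not shown here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a friendly robot tour guide. Answer visitors' questions briefly "
    "and suggest the next exhibit to walk to."
)

def transcribe(audio_path: str) -> str:
    """Convert a visitor's recorded question to text."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def answer(question: str) -> str:
    """Ask the chat model for a short tour-guide response."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def speak(text: str, out_path: str = "reply.mp3") -> str:
    """Render the reply as audio for the robot's speaker."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    speech.write_to_file(out_path)
    return out_path

# One turn of the loop: hear a question, think, answer aloud.
question = transcribe("visitor_question.wav")
audio_file = speak(answer(question))
print(f"Reply saved to {audio_file}")
```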
Joscha Bach meets with Ben Goertzel to discuss cognitive architectures, AGI, and conscious computers in another theolocution on TOE.
Continue reading “Joscha Bach Λ Ben Goertzel: Conscious AI, LLMs, AGI” »
Are you ready for artificial intelligence?
Curt’s “String Theory Iceberg”: https://youtu.be/X4PdPnQuwjY
Main episode with Bach and Goertzel (October 2023): https://youtu.be/xw7omaQ8SgA?list=PLZ7ikzmc6zlN6E8KrxcYCWQIHg2tfkqvR
Consider signing up for TOEmail at https://www.curtjaimungal.org
One of the significant challenges in AI research is the computational inefficiency in processing visual tokens in Vision Transformer (ViT) and Video Vision Transformer (ViViT) models. These models process all tokens with equal emphasis, overlooking the inherent redundancy in visual data, which results in high computational costs. Addressing this challenge is crucial for the deployment of AI models in real-world applications where computational resources are limited and real-time processing is essential.
Current approaches such as ViTs and Mixture-of-Experts (MoE) models have been effective at processing large-scale visual data but come with significant limitations. ViTs treat all tokens equally, leading to unnecessary computation. MoEs improve scalability by conditionally activating parts of the network, which keeps inference-time costs roughly constant, but they carry a larger parameter footprint and do not reduce computational cost unless tokens are skipped entirely. Additionally, these models often use experts with uniform computational capacity, limiting their ability to dynamically allocate resources based on token importance.
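As a concrete illustration of allocating compute by token importance rather than uniformly, the PyTorch sketch below lets a learned router score each visual token, sends only the top-k tokens through an expensive MLP block, and passes the rest through unchanged. It is a simplified, hypothetical example of conditional token computation, not the specific architecture of the work described above; the `TopKTokenBlock` name, capacity fraction, and dimensions are all assumptions.

```python
# Toy PyTorch sketch of importance-based conditional computation over visual
# tokens: a learned router scores each token, only the top-k tokens go through
# the heavy MLP, and the remaining tokens pass through via the residual path.
import torch
import torch.nn as nn

class TopKTokenBlock(nn.Module):
    def __init__(self, dim: int, capacity: float = 0.25):
        super().__init__()
        self.router = nn.Linear(dim, 1)          # scores each token's importance
        self.heavy_mlp = nn.Sequential(          # expensive per-token computation
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.capacity = capacity                 # fraction of tokens to process

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        batch, num_tokens, dim = x.shape
        k = max(1, int(self.capacity * num_tokens))

        scores = self.router(x).squeeze(-1)                  # (batch, num_tokens)
        weights = torch.sigmoid(scores)
        topk = torch.topk(scores, k, dim=1).indices          # (batch, k)

        # Gather only the selected tokens and run the heavy computation on them.
        idx = topk.unsqueeze(-1).expand(-1, -1, dim)
        selected = torch.gather(x, 1, idx)                   # (batch, k, dim)
        processed = self.heavy_mlp(selected)

        # Scale by the router weight so the routing decision stays differentiable,
        # then scatter results back; unselected tokens keep their input values.
        gate = torch.gather(weights, 1, topk).unsqueeze(-1)  # (batch, k, 1)
        out = x.clone()
        out.scatter_add_(1, idx, gate * processed)
        return out

# Example: 2 images, 196 patch tokens of width 384; only ~25% get the heavy MLP.
tokens = torch.randn(2, 196, 384)
block = TopKTokenBlock(dim=384)
print(block(tokens).shape)  # torch.Size([2, 196, 384])
```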
In the next 20 years, AI will eat software.
“We made it possible for the computer to write software by itself.”
—NVIDIA CEO Jensen Huang on the future of AI
Here they come… 🦾🤖
And this shows one of the many ways in which the Economic Singularity is rushing at us. The 🦾🤖 Bots are coming soon to a job near you.
NVIDIA unveiled a suite of services, models, and computing platforms designed to accelerate the development of humanoid robots globally. Key highlights include:
Continue reading “NVIDIA Accelerating the Future of AI & Humanoid Robots” »
Nvidia’s upcoming artificial intelligence chips will be delayed by three months or more due to design flaws, a snafu that could affect customers such as Meta Platforms, Google and Microsoft that have collectively ordered tens of billions of dollars worth of the chips, according to two people who help produce the chip and server hardware for it.
Nvidia this week told Microsoft, one of its biggest customers, and another large cloud provider about a delay involving the most advanced AI chip in its new Blackwell series of chips, according to a Microsoft employee and another person with direct knowledge.