The first node of the network is expected to come online in September.
Even the best large language models (LLMs) fail dramatically when it comes to simple logical questions. That is the conclusion of researchers from the Jülich Supercomputing Centre (JSC), the School of Electrical and Electronic Engineering at the University of Bristol, and the LAION AI laboratory.
In their paper posted to the arXiv preprint server, titled “Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models,” the scientists attest to a “dramatic breakdown of function and reasoning capabilities” in the tested state-of-the-art LLMs. They suggest that although language models have the latent ability to perform basic reasoning, they cannot access it robustly and consistently.
The authors of the study (Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti and Jenia Jitsev) call on “the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of the current generation of LLMs.” They also call for the development of standardized benchmarks that can uncover weaknesses in language models’ basic reasoning capabilities, since current tests have evidently failed to expose this serious deficit.
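The breakdown shows up on short common-sense questions; the paper’s running example is the “Alice in Wonderland” (AIW) problem: “Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?” (The answer is M + 1.) Below is a minimal sketch of how such a check could be scripted; the commented-out `query_model` call is a hypothetical placeholder for whatever LLM API is under test, not the paper’s code.

```python
# Sketch of an AIW-style reasoning check, following the problem family the
# paper describes. The correct answer is M + 1: each brother has Alice's
# sisters plus Alice herself as sisters. `query_model` is a hypothetical
# stand-in for an LLM call and is not part of the paper's code.

def aiw_prompt(n_brothers: int, m_sisters: int) -> str:
    return (f"Alice has {n_brothers} brothers and she also has "
            f"{m_sisters} sisters. How many sisters does Alice's brother have?")

def aiw_answer(m_sisters: int) -> int:
    # Alice's sisters plus Alice herself.
    return m_sisters + 1

def is_correct(model_reply: str, m_sisters: int) -> bool:
    # Naive scoring: accept the reply if the correct number appears in it.
    return str(aiw_answer(m_sisters)) in model_reply

prompt = aiw_prompt(3, 6)
print(prompt, "-> expected:", aiw_answer(6))
# reply = query_model(prompt)            # hypothetical LLM call
# print(is_correct(reply, m_sisters=6))
```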
Researchers from the RIKEN Center for Computational Science (Japan) and the Max Planck Institute for Evolutionary Biology (Germany) have published new findings on how social norms evolve over time. They simulated how norms promote different social behaviors, and how the norms themselves come and go. Because of the enormous number of possible norms, the simulations were run on RIKEN’s Fugaku, one of the world’s fastest supercomputers.
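As a rough illustration of what such a simulation involves, consider a donation game in which a social norm assigns reputations according to how players treat partners in good or bad standing. This is a toy sketch, not the authors’ model; the norm encoded here (“stern judging”) is just one point in the vast norm space the study explores on Fugaku.

```python
import random

# Toy donation-game simulation with a reputation norm (illustrative only).
# A norm maps (donor's observed action, recipient's reputation) to the
# donor's new reputation. "Stern judging": cooperating with the good or
# defecting against the bad earns a good reputation; anything else is bad.
NORM = {("C", "good"): "good", ("D", "good"): "bad",
        ("C", "bad"): "bad",  ("D", "bad"): "good"}

N, ROUNDS = 100, 10_000
reputation = ["good"] * N

def strategy(recipient_rep: str) -> str:
    # Discriminator strategy: cooperate only with good-reputation partners.
    return "C" if recipient_rep == "good" else "D"

cooperations = 0
for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    action = strategy(reputation[recipient])
    cooperations += action == "C"
    # With small probability the action is misperceived (assessment error).
    observed = action if random.random() > 0.01 else ("D" if action == "C" else "C")
    reputation[donor] = NORM[(observed, reputation[recipient])]

print(f"cooperation rate: {cooperations / ROUNDS:.2f}")
```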
Using supercomputers and satellite imagery, the researchers showed our planet “breathing.”
Researchers develop energy-efficient supercomputing with neural networks and charge density waves
Researchers are creating efficient systems using neural networks and charge density waves to reduce supercomputing’s massive energy use.
As we have alluded to numerous times when discussing the next AI trade, data centers will be the “factories of the future” in the age of AI.
That’s the contention of Chris Miller, the author of Chip War, who penned a recent opinion column for the Financial Times arguing that “chip wars” could very soon become “cloud wars.”
He points out that the strategic use of high-powered computing dates back to the Cold War, when the US allowed the USSR limited access to supercomputers for weather forecasting but not nuclear simulations.
Neuromorphic computers are devices that try to achieve reasoning capability by emulating the human brain. They use a different type of computer architecture, one that copies the physical characteristics and design principles of biological nervous systems. Although neuromorphic computation can be emulated in software, simulating it on classical computers is very inefficient, so purpose-built hardware is typically required.
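To make concrete what such hardware computes natively, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, a standard building block of spiking models, simulated in Python. The parameter values are illustrative and not taken from any particular neuromorphic system.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulated in discrete time.
# All parameter values are illustrative; neuromorphic chips implement
# dynamics like these directly in hardware instead of in a software loop.
dt = 1e-3          # time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest = -65e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -70e-3   # post-spike reset potential (V)
r_m = 1e7          # membrane resistance (ohm)

def simulate_lif(input_current: float, steps: int) -> list[float]:
    """Integrate the membrane potential and return the spike times (s)."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leak toward the resting potential, plus drive from the input current.
        dv = (-(v - v_rest) + r_m * input_current) / tau
        v += dv * dt
        if v >= v_thresh:       # threshold crossing: emit a spike
            spikes.append(t * dt)
            v = v_reset         # reset the membrane after the spike
    return spikes

spike_times = simulate_lif(2e-9, steps=1000)  # constant 2 nA drive for 1 s
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```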
The first neuromorphic computer at the scale of a full human brain is about to come online. Called DeepSouth, it is due to be completed in April 2024 at Western Sydney University. It should enable new research into how our brain actually functions, potentially leading to breakthroughs in how AI is created.
One important characteristic of this neuromorphic computer is that it is built from commodity hardware, specifically FPGAs. That makes the design much easier for other organizations to replicate. It also means that once AI starts self-improving, it could probably build new iterations of the hardware quite easily: because the design leverages existing digital technology, existing infrastructure can be reused instead of factories being built from the ground up. This might have implications for how quickly we develop AGI, and how quickly superintelligence arises.
A new supercomputer aims to closely mimic the human brain — it could help unlock the secrets of the mind and advance AI
https://theconversation.com/a-new-sup…
And this shows one of the many ways in which the Economic Singularity is rushing at us. The 🦾🤖 Bots are coming soon to a job near you.
NVIDIA unveiled a suite of services, models, and computing platforms designed to accelerate the development of humanoid robots globally. Key highlights include:
- NVIDIA NIM™ Microservices: These containers, powered by NVIDIA inference software, streamline simulation workflows and reduce deployment times. New AI microservices, MimicGen and Robocasa, bring generative physical AI to Isaac Sim™, which is built on NVIDIA Omniverse™.
- NVIDIA OSMO Orchestration Service: A cloud-native service that simplifies and scales robotics development workflows, cutting cycle times from months to under a week.
- AI-Enabled Teleoperation Workflow: Demonstrated at SIGGRAPH 2024, this workflow generates synthetic motion and perception data from a small number of human demonstrations, saving time and cost in training humanoid robots.
NVIDIA’s comprehensive approach includes building three computers to empower the world’s leading robot manufacturers: NVIDIA AI and DGX to train foundation models, Omniverse to simulate and enhance AIs in a physically based virtual environment, and Jetson Thor, an onboard robot supercomputer. The introduction of NVIDIA NIM microservices for robot-simulation generative AI further accelerates humanoid robot development.
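For a sense of how these microservices are consumed, here is a hedged sketch of calling a locally deployed NIM container over HTTP. LLM NIMs expose the OpenAI-compatible endpoint shown below; the robot-simulation microservices such as MimicGen define their own request schemas, which this sketch does not reproduce, and the host, port, and model id are placeholder assumptions.

```python
import requests

# Hedged sketch: NIM containers expose HTTP inference endpoints. LLM NIMs
# follow the OpenAI-compatible schema used here; robot-simulation NIMs
# (e.g., MimicGen) have their own schemas, which this does not reproduce.
# Host, port, and model id assume a local deployment and are placeholders.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "meta/llama3-8b-instruct",  # example NIM model id
    "messages": [
        {"role": "user", "content": "Summarize this teleoperation log."}
    ],
    "max_tokens": 128,
}
resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```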