
LLMs, Cellular Automata & the Brain—a conversation with Dugan Hammock of the Wolfram Institute

What do large language models, cellular automata, and the human brain have in common? In this polymath salon, I talk with Dugan Hammock of the Wolfram Institute about the deep links between these seemingly disparate fields.

Highlights include:

Computational Irreducibility: Why we can’t take shortcuts in complex systems—whether it’s a simple cellular automaton or a sophisticated LLM generating text.

The Power of Autoregression: How the simple, step-by-step process of predicting the next element can give rise to incredible complexity and human-like language.

The Nature of Thinking: Whether our own thought processes are fundamentally autoregressive and sequential, or if there’s a different, parallel mode of cognition at play.

Memory and Consciousness: The critical role of a system’s “memory” or history in shaping its future, and how this relates to our own awareness and sense of self.
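The irreducibility point can be made concrete with Wolfram's Rule 30 cellular automaton: as far as anyone knows, the only way to learn the state after n steps is to run all n steps, one at a time—much as an LLM must generate each token before the next. Below is a minimal sketch in Python; the ring topology and cell count are illustrative choices, not something from the conversation:

```python
def rule30_step(cells):
    """One update of Rule 30: new cell = left XOR (center OR right).
    The row is treated as a ring (periodic boundary)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def evolve(cells, steps):
    """Evolve the automaton step by step. Computational irreducibility
    means there is no known shortcut past this loop: each state depends
    on the one before it, just as each token depends on the prefix."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# Start from a single black cell in the middle of a 31-cell ring
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running it prints the familiar chaotic Rule 30 triangle growing outward from the single seed cell.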

NVIDIA Opens Portals to World of Robotics With New Omniverse Libraries, Cosmos Physical AI Models and AI Computing Infrastructure

NVIDIA today announced new NVIDIA Omniverse™ libraries and NVIDIA Cosmos™ world foundation models (WFMs) that accelerate the development and deployment of robotics solutions.

Meet IDEA: An AI assistant to help geoscientists explore Earth and beyond

A new artificial intelligence tool developed by researchers at the University of Hawai’i (UH) at Mānoa is making it easier for scientists to explore complex geoscience data—from tracking sea levels on Earth to analyzing atmospheric conditions on Mars.

Called the Intelligent Data Exploring Assistant (IDEA), the tool combines the power of large language models, like those used in ChatGPT, with scientific data, tailored instructions, and computing resources.

By simply providing questions in everyday language, researchers can ask IDEA to retrieve data, run analyses, generate plots, and even review its own results—opening up new possibilities for research, education, and scientific discovery.

Using NVIDIA TensorRT-LLM to run gpt-oss-20b

This notebook provides a step-by-step guide to optimizing gpt-oss models using NVIDIA’s TensorRT-LLM for high-performance inference. TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
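For context, TensorRT-LLM’s high-level Python `LLM` API looks roughly like the following. This is a sketch rather than the notebook’s exact code: the sampling settings are illustrative, and running it assumes a supported NVIDIA GPU with the `tensorrt_llm` package installed.

```python
from tensorrt_llm import LLM, SamplingParams

# Build (or load) an optimized inference engine for the model.
# Requires a supported NVIDIA GPU; engine build can take a while.
llm = LLM(model="openai/gpt-oss-20b")

# Illustrative sampling settings, not the notebook's exact values.
params = SamplingParams(max_tokens=64, temperature=0.7)

# Autoregressive generation: the runtime emits one token at a time,
# reusing the KV cache across steps for efficiency.
outputs = llm.generate(
    ["Summarize computational irreducibility in one sentence."],
    params,
)
print(outputs[0].outputs[0].text)
```

The same engine can then serve many requests, with the runtime batching and scheduling them across the GPU.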
