A new study in Nature Communications explores the dynamics of higher-order novelties, identifying fascinating patterns in how we combine existing elements to create novelty, potentially reshaping our understanding of human creativity and innovation.

Novelties—a common part of human life—refer to one of two things. The first is a discovery that is new to an individual, like a place, a song, or an artist. The second covers discoveries new to everyone, such as technological developments or new drugs.

The researchers in this study aimed to understand how both kinds of novelties emerge. The team was led by Prof. Vito Latora of Queen Mary University of London, who spoke to Phys.org about the work.

OpenAI, the company behind ChatGPT, says it has proof that the Chinese start-up DeepSeek used its technology to create a competing artificial intelligence model — fueling concerns about intellectual property theft in the fast-growing industry.

OpenAI believes DeepSeek, which was founded by math whiz Liang Wenfeng, used a process called “distillation,” which helps make smaller AI models perform better by learning from larger ones.

While this is common in AI development, OpenAI says DeepSeek may have broken its rules by using the technique to create its own AI system.
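The distillation technique mentioned above is typically framed as training a small "student" model to match the softened output distribution of a large "teacher" model. A minimal NumPy sketch of the standard distillation loss (a generic illustration, not OpenAI's or DeepSeek's actual training code; all names here are our own):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The student is trained to minimize this, so it learns to mimic the
    teacher's full output distribution rather than just its top answer.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 1.0, 0.5])

# A student that matches the teacher exactly incurs (near-)zero loss;
# a student that disagrees incurs a positive loss.
print(distillation_loss(teacher, teacher))
print(distillation_loss(np.array([0.5, 1.0, 4.0]), teacher) > 0)  # True
```

A higher temperature spreads probability mass over more classes, which is what lets the student learn from the teacher's "dark knowledge" about near-miss answers.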

Meanwhile, as DeepSeek makes AI cheaper—seemingly without cutting corners on quality—another group is wrestling with a different problem: designing tests that are hard enough for AI models. It's called "Humanity's Last Exam."

If you’re looking for a new reason to be nervous about artificial intelligence, try this: Some of the smartest humans in the world are struggling to create tests that AI systems can’t pass.

For years, AI systems were measured by giving new models a variety of standardized benchmark tests. Many of these tests consisted of challenging, SAT-calibre problems in areas like math, science and logic. Comparing the models’ scores over time served as a rough measure of AI progress.

Yale physicists have uncovered a sophisticated and previously unknown set of “modes” within the human ear, which impose crucial constraints on how the ear amplifies faint sounds, withstands loud noises, and distinguishes an astonishing range of sound frequencies.

By applying existing mathematical models to a generic mock-up of the cochlea—a spiral-shaped organ in the inner ear—the researchers revealed an additional layer of cochlear complexity. Their findings provide new insights into the remarkable capacity and precision of human hearing.

“We set out to understand how the ear can tune itself to detect faint sounds without becoming unstable and responding even in the absence of external sounds,” said Benjamin Machta, an assistant professor of physics in Yale’s Faculty of Arts and Science and co-senior author of a new study in the journal PRX Life. “But in getting to the bottom of this we stumbled onto a new set of low frequency mechanical modes that the cochlea likely supports.”

In a ground-breaking theoretical study, two physicists have identified a new class of quasiparticle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.

Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.

“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.

Mathematics and physics have long been regarded as the ultimate languages of the universe, but what if their structure resembles something much closer to home: our spoken and written languages? A recent study suggests that the mathematical equations used to describe physical laws follow a surprising pattern—a pattern that aligns with Zipf’s law, a principle from linguistics.

This discovery could reshape our understanding of how we conceptualize the universe and even how our brains work. Let’s explore the intriguing connection between the language of mathematics and the physical world.

What Is Zipf’s Law?
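In brief, Zipf's law says that in a ranked word-frequency list, the n-th most common word appears roughly 1/n as often as the most common one. A quick way to see the rank-frequency idea, using a toy corpus (the text and numbers here are purely illustrative; a real test of the law needs a large sample):

```python
from collections import Counter

# Toy corpus; any large text sample would do for a genuine test.
text = "the cat sat on the mat and the dog sat on the log"
counts = Counter(text.split())
ranked = counts.most_common()

# Under Zipf's law, frequency is roughly C / rank, so the product
# rank * frequency should stay approximately constant down the list.
for rank, (word, freq) in enumerate(ranked, start=1):
    print(rank, word, freq, rank * freq)
```

The study's claim is that symbols and operators in physics equations show a similar rank-frequency decay, which is what makes the analogy to natural language striking.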

In today’s AI news, Mukesh Ambani’s Reliance Industries is set to build the world’s largest data center in Jamnagar, Gujarat, according to a *Bloomberg News* report. The facility would dwarf the current largest data center, Microsoft’s 600-megawatt site in Virginia. The project could cost between $20 billion and $30 billion.

Meanwhile, ByteDance, the Beijing-based company behind Doubao, China’s most popular consumer-facing AI app, introduced its closed-source multimodal model Doubao 1.5 Pro, emphasizing a “resource-efficient” training approach that it said does not sacrifice performance.

And, OpenAI’s CEO Sam Altman announced that the free tier of ChatGPT will now use the o3-mini model, marking a significant shift in how the popular AI chatbot serves its user base. In the same tweet announcing the change, Altman revealed that paid subscribers to ChatGPT Plus and Pro plans will enjoy “tons of o3-mini usage,” giving people an incentive to move to a paid account with the company.

Then, researchers at Sakana AI, an AI research lab focusing on nature-inspired algorithms, have developed a self-adaptive language model that can learn new tasks without the need for fine-tuning. Called Transformer², the model uses mathematical tricks to align its weights with user requests during inference.
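The core idea reported for Transformer² can be pictured as decomposing a frozen weight matrix with a singular value decomposition and rescaling its singular values per task at inference time. A conceptual NumPy sketch of that weight-steering idea (our own simplified illustration, not Sakana AI's actual implementation; the adaptation vector `z` here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # stands in for a frozen pretrained weight matrix

# Decompose once: W = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(W)

# A hypothetical task-specific vector z rescales the singular values,
# steering the layer's behavior without retraining W itself.
z = np.array([1.2, 0.8, 1.0, 0.5])
W_adapted = U @ np.diag(s * z) @ Vt

# With z = 1 everywhere, the original matrix is recovered exactly.
W_same = U @ np.diag(s * np.ones(4)) @ Vt
print(np.allclose(W_same, W))  # True
```

Rescaling only the singular values keeps the adaptation lightweight: a layer of shape (m, n) needs just min(m, n) extra numbers per task instead of a full fine-tuned copy of the weights.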

In videos, Demis Hassabis, CEO of Google DeepMind, joins the Big Technology Podcast with Alex Kantrowitz to discuss the cutting edge of AI and where the research is heading. In this conversation, they cover the path to artificial general intelligence, how long it will take to get there, how to build world models, and much more.

Then, join IBM’s Meredith Mante as she takes you on a deep dive into Lag Llama, an open-source foundation model, and shows you how to harness its power for time series forecasting. Learn how to load and preprocess data, train a model, and evaluate its performance, gaining a deeper understanding of how to leverage Lag Llama for accurate predictions.

We close out with OpenAI CEO Sam Altman, along with OpenAI researchers and developers Yash Kumar, Casey Chu, and Reiichiro Nakano, as they introduce and demonstrate Operator, OpenAI’s new computer-use AI agent.

That’s all for today, but AI is moving fast; subscribe today to stay informed. Please don’t forget to vote for me in the Entrepreneur of Impact Competition today! Thank you for supporting me and my partners; it’s how I keep NNN free.