
Google Chrome’s new post-quantum cryptography may break TLS connections

Some Google Chrome users report having issues connecting to websites, servers, and firewalls after Chrome 124 was released last week with the new quantum-resistant X25519Kyber768 key encapsulation mechanism enabled by default.

Google started testing the post-quantum secure TLS key encapsulation mechanism in August and has now enabled it in the latest Chrome version for all users.

The new version utilizes the Kyber768 quantum-resistant key agreement algorithm for TLS 1.3 and QUIC connections to protect Chrome TLS traffic against quantum cryptanalysis.
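To illustrate the idea behind a hybrid key encapsulation mechanism such as X25519Kyber768, here is a minimal Python sketch that pairs a classical X25519 exchange with a Kyber768 encapsulation and concatenates the two shared secrets. It assumes the cryptography package and the oqs bindings from liboqs-python are installed; it is a conceptual illustration only, not Chrome's or BoringSSL's actual code path, and the real TLS construction feeds the combined secret into the TLS 1.3 key schedule rather than using it directly.

# Conceptual hybrid X25519 + Kyber768 key agreement (sketch only).
# Assumes: 'cryptography' >= 40 (for public_bytes_raw) and liboqs-python ('oqs').
# Newer liboqs builds may expose the algorithm as "ML-KEM-768" instead of "Kyber768".
import oqs
from cryptography.hazmat.primitives.asymmetric import x25519

# Client builds a hybrid key share: its X25519 public key followed by
# its Kyber768 public key.
client_x25519 = x25519.X25519PrivateKey.generate()
client_kyber = oqs.KeyEncapsulation("Kyber768")
key_share = (
    client_x25519.public_key().public_bytes_raw()   # 32 bytes
    + client_kyber.generate_keypair()               # Kyber768 public key (1184 bytes)
)

# Server completes both halves: an X25519 exchange and a Kyber768
# encapsulation against the client's public keys.
server_x25519 = x25519.X25519PrivateKey.generate()
x25519_secret = server_x25519.exchange(
    x25519.X25519PublicKey.from_public_bytes(key_share[:32]))
server_kyber = oqs.KeyEncapsulation("Kyber768")
kyber_ciphertext, kyber_secret = server_kyber.encap_secret(key_share[32:])

# The hybrid shared secret is the concatenation of both parts, so the
# connection stays secure as long as either component remains unbroken.
server_shared = x25519_secret + kyber_secret

# Client recovers the same secret from the server's X25519 public key
# and the Kyber768 ciphertext.
client_shared = (
    client_x25519.exchange(server_x25519.public_key())
    + client_kyber.decap_secret(kyber_ciphertext)
)
assert client_shared == server_shared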

Unveiling a new quantum frontier: Frequency-domain entanglement

Scientists have introduced a form of quantum entanglement known as frequency-domain photon number-path entanglement. This advance in quantum physics relies on an innovative tool called a frequency beam splitter, which can convert the frequency of an individual photon with 50% probability, playing the role that a conventional 50:50 beam splitter plays for spatial paths.

For years, the scientific community has delved into spatial-domain number-path entanglement, a key player in the realms of quantum metrology and information science.

This concept involves photons arranged in a special pattern, known as NOON states, in which all of the photons occupy one pathway or the other, never both. Such states enable applications like super-resolution imaging beyond classical limits, more sensitive quantum sensors, and quantum computing algorithms designed for tasks requiring exceptional phase sensitivity. A standard form of the state is sketched below.
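As a brief aside on notation (the textbook two-mode definition, not the specific states prepared in this work): a NOON state puts all N photons in one mode or the other, and the resulting N-fold amplification of a phase shift is what pushes the sensitivity from the shot-noise limit toward the Heisenberg limit.

% Two-mode NOON state: all N photons in mode a or all in mode b.
\[
  |\mathrm{NOON}\rangle = \frac{1}{\sqrt{2}}\left( |N\rangle_a |0\rangle_b + e^{iN\phi}\, |0\rangle_a |N\rangle_b \right)
\]
% A phase \phi on one mode appears as N\phi, so the smallest resolvable
% phase improves from the shot-noise limit to the Heisenberg limit:
\[
  \Delta\phi \sim \frac{1}{\sqrt{N}} \ \text{(shot noise)} \quad \longrightarrow \quad \Delta\phi \sim \frac{1}{N} \ \text{(Heisenberg)}
\]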

Quantum Computing Meets Genomics: The Dawn of Hyper-Fast DNA Analysis

A new project unites world-leading experts in quantum computing and genomics to develop new methods and algorithms to process biological data.

Researchers aim to harness quantum computing to speed up genomics, enhancing our understanding of DNA and driving advancements in personalized medicine.

A new collaboration has formed, uniting a world-leading interdisciplinary team with skills across quantum computing, genomics, and advanced algorithms. They aim to tackle one of the most challenging computational problems in genomic science: building, augmenting, and analyzing pangenomic datasets for large population samples. Their project sits at the frontiers of research in both biomedical science and quantum computing.

Navigating The Generative AI Divide: Open-Source Vs. Closed-Source Solutions

If you’re considering how your organization can use this revolutionary technology, one of the choices you will have to make is whether to go with open-source or closed-source (proprietary) tools, models and algorithms.

Why is this decision important? Well, each option offers advantages and disadvantages when it comes to customization, scalability, support and security.

In this article, we’ll explore the key differences and the pros and cons of each approach, and explain the factors to consider when deciding which is right for your organization.

Lights, camera, algorithm: How artificial intelligence is being used to make films

The “it” Mr Woodman is referring to is Sora, a new text-to-video AI model from OpenAI, the artificial intelligence research organisation behind viral chatbot ChatGPT.

Instead of using their broad technical skills in filmmaking, such as animation, to overcome obstacles in the process, Mr Woodman and his team relied only on the model to generate footage for them, shot by shot.

“We just continued generating and it was almost like post-production and production in the same breath,” says Patrick Cederberg, who also worked on the project.

Making AI more energy efficient with neuromorphic computing

CWI senior researcher Sander Bohté started working on neuromorphic computing as early as 1998, as a PhD student, when the subject was barely on the map. In recent years, Bohté and his CWI colleagues have achieved a number of algorithmic breakthroughs in spiking neural networks (SNNs) that finally make neuromorphic computing practical: in theory, many AI applications can become a hundred to a thousand times more energy-efficient. This means it will be possible to put much more AI into chips, allowing applications to run on a smartwatch or a smartphone. Examples are speech recognition, gesture recognition and the classification of electrocardiograms (ECGs).

“I am really grateful that CWI, and former group leader Han La Poutré in particular, gave me the opportunity to follow my interest, even though at the end of the 1990s neural networks and neuromorphic computing were quite unpopular”, says Bohté. “It was high-risk work for the long haul that is now bearing fruit.”

Spiking neural networks (SNNs) more closely resemble the biology of the brain: they process discrete pulses (spikes) instead of the continuous signals used in classical neural networks. Unfortunately, that also makes them mathematically much harder to handle, so for many years SNNs were limited to small numbers of neurons. Thanks to clever algorithmic solutions, Bohté and his colleagues have managed to scale the number of trainable spiking neurons, first to thousands in 2021 and then to tens of millions in 2023.
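For readers unfamiliar with spiking neurons, the sketch below shows a minimal leaky integrate-and-fire (LIF) neuron in Python with NumPy. It is the generic textbook model, not the CWI group's actual SNN training algorithms, and the parameter values are illustrative.

# Minimal leaky integrate-and-fire (LIF) neuron, to illustrate the
# pulse-based processing described above. Generic textbook model;
# parameters (tau, threshold) are arbitrary illustrative values.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return membrane voltage and spike train."""
    v = np.zeros(len(input_current))
    spikes = np.zeros(len(input_current), dtype=bool)
    for t in range(1, len(input_current)):
        # Leaky integration: voltage decays toward rest and integrates input.
        v[t] = v[t - 1] + (-v[t - 1] + input_current[t]) * (dt / tau)
        if v[t] >= v_thresh:          # threshold crossing -> emit a spike
            spikes[t] = True
            v[t] = v_reset            # reset the membrane after spiking
    return v, spikes

# Constant noisy drive: the neuron charges up and fires sparse, discrete
# spikes instead of emitting a continuous activation value every step.
rng = np.random.default_rng(0)
current = 1.5 + 0.3 * rng.standard_normal(1000)
voltage, spike_train = lif_neuron(current)
print(f"{spike_train.sum()} spikes in {len(current)} timesteps")

The energy argument follows from the output: the neuron emits sparse, discrete events rather than a continuous activation at every timestep, so neuromorphic hardware only has to do work when a spike actually occurs.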

Merging nuclear physics experiments and astronomical observations to advance equation-of-state research

Neutron stars and black holes are the final resting places of the most massive stars. When a supergiant star runs out of fuel, it expands and then rapidly collapses in on itself. This collapse creates a neutron star: an object with more mass than our sun crammed into a sphere roughly 13 to 18 miles wide. In such a heavily condensed stellar environment, most electrons combine with protons to make neutrons, resulting in a dense ball of matter consisting mainly of neutrons. Researchers try to understand the forces that control this process by creating dense matter in the laboratory, colliding neutron-rich nuclei and taking detailed measurements.
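Written out, the capture step described above is the standard electron-capture (inverse beta decay) reaction:

\[
  e^- + p \rightarrow n + \nu_e
\]

Each capture turns a proton into a neutron and releases an electron neutrino, which is why the collapsed remnant ends up overwhelmingly neutron-rich.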
