
A team of researchers puts a spin on Tetris and makes observations as people play the game.

We live in a world run by machines. They make important decisions for us, like whom to hire, who gets approved for a loan, and what content users see on social media. Machines and computer programs have an increasing influence over our lives, now more than ever as artificial intelligence (AI) makes inroads in new ways. And this influence goes far beyond the person directly interacting with the machine.


A Cornell University-led experiment in which two people play a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.

High-performance computing (HPC) has become an essential tool for processing large datasets and simulating nature’s most complex systems. However, researchers face difficulties in developing more intensive models because Moore’s Law—which states that computational power doubles every two years—is slowing, and memory bandwidth cannot keep pace with compute. But scientists can speed up simulations of complex systems by using compression algorithms running on AI hardware.

A team led by computer scientist Hatem Ltaief is tackling this problem head-on by employing hardware designed for artificial intelligence (AI) to help scientists make their code more efficient. In a paper published in the journal High Performance Computing, they report making simulations up to 150 times faster in the diverse fields of climate modeling, astronomy, seismic imaging, and wireless communications.

Previously, Ltaief and co-workers showed that many scientists were riding the wave of hardware development and “over-solving” their models, carrying out lots of unnecessary calculations.
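
The paper’s specific compression machinery isn’t reproduced here, but the general trade the team exploits, swapping unneeded precision for speed, can be illustrated with a low-rank approximation, one of the most common forms of data compression in HPC. The NumPy sketch below is an illustration only, with a made-up tolerance and test matrix rather than the team’s actual code: it truncates a matrix’s singular value decomposition to the smallest rank that still meets an accuracy target, so later computations touch far less data.

```python
import numpy as np

def compress_low_rank(A, tol=1e-4):
    """Approximate A with a truncated SVD, keeping only the singular values
    needed to reach the requested relative accuracy (illustrative only)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Count singular values below tol * ||A||_2 and drop them.
    n_small = int(np.searchsorted(s[::-1], tol * s[0]))
    rank = len(s) - n_small
    return U[:, :rank] * s[:rank], Vt[:rank, :]   # two thin factors instead of A

# Example: a smooth kernel matrix compresses extremely well.
x = np.linspace(0.0, 1.0, 2000)
A = np.exp(-np.abs(x[:, None] - x[None, :]))      # dense 2000 x 2000 matrix
Uc, Vc = compress_low_rank(A, tol=1e-6)
print("stored rank:", Uc.shape[1])                # far smaller than 2000
print("max error:", np.abs(A - Uc @ Vc).max())    # bounded by tol * ||A||_2
```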



No one will ever be able to see a purely mathematical construct such as a perfect sphere. But now, scientists using supercomputer simulations and atomic resolution microscopes have imaged the signatures of electron orbitals, which are defined by mathematical equations of quantum mechanics and predict where an atom’s electron is most likely to be.

Scientists at UT Austin, Princeton University, and ExxonMobil have directly observed the signatures of electron orbitals in two different transition-metal atoms, iron (Fe) and cobalt (Co), present in metal-phthalocyanines. Those signatures are apparent in the forces measured by atomic force microscopes, which often reflect the underlying orbitals and can be interpreted accordingly.
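
For context, the “mathematical equations” in question are orbital wavefunctions. In the textbook hydrogen-like case (a simplification of the transition-metal d orbitals actually probed here), the wavefunction separates into a radial and an angular factor, and its squared magnitude is the probability density whose shape is imprinted on the measured forces:

```latex
\psi_{n\ell m}(r,\theta,\phi) = R_{n\ell}(r)\, Y_{\ell}^{m}(\theta,\phi),
\qquad
|\psi_{n\ell m}(r,\theta,\phi)|^{2} = |R_{n\ell}(r)|^{2}\, |Y_{\ell}^{m}(\theta,\phi)|^{2}
```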

Their study was published in March 2023 as an Editors’ Highlight in the journal Nature Communications.

Out of all common refrains in the world of computing, the phrase “if only software would catch up with hardware” would probably rank pretty high. And yet, software does sometimes catch up with hardware. In fact, it seems that this time, software can go as far as unlocking quantum computations for classical computers. That’s according to researchers with the RIKEN Center for Quantum Computing, Japan, who have published work on an algorithm that significantly accelerates a specific quantum computing workload. More significantly, the workload itself, computing time evolution operators, has applications in condensed matter physics and quantum chemistry, two fields that can unlock new worlds within our own.
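
For reference, and independent of RIKEN’s particular algorithm (which this summary doesn’t spell out), a time evolution operator is the unitary that advances a quantum state under a Hamiltonian H, and the standard way to make it tractable is to split H into simpler local pieces via a Trotter product:

```latex
U(t) = e^{-iHt}, \qquad H = \sum_{j} H_{j}, \qquad
e^{-iHt} \approx \Bigl(\prod_{j} e^{-iH_{j}t/n}\Bigr)^{n}
\quad \text{with error } \mathcal{O}\!\left(t^{2}/n\right)
```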

Normally, an improved algorithm wouldn’t be completely out of the ordinary; updates are everywhere, after all. Every app update, software update, or firmware upgrade essentially brings revised code that either solves problems or improves performance (hopefully). And improved algorithms are nice, as anyone with a graphics card from either AMD or NVIDIA can attest. But let’s face it: we’re used to being disappointed with performance updates.

The feature image you see above was generated by an AI text-to-image rendering model called Stable Diffusion, which typically runs in the cloud via a web browser and is driven by data center servers with big power budgets and a ton of silicon horsepower. However, the image above was generated by Stable Diffusion running on a smartphone in airplane mode, with no connection to that cloud data center and no connectivity whatsoever. And the AI model rendering it was powered by a Qualcomm Snapdragon 8 Gen 2 mobile chip on a device that operates at under 7 watts or so.

It took Stable Diffusion only a few short phrases and 14.47 seconds to render this image.
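
Qualcomm’s on-device pipeline is proprietary, but the same model family can be run in a few lines with the open-source diffusers library on a desktop GPU. The sketch below only illustrates that workflow; the checkpoint name and prompt are placeholders, and this is not the quantized Snapdragon build described above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative desktop/GPU run of the open Stable Diffusion weights,
# not Qualcomm's quantized on-device pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed, commonly used checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photorealistic mountain landscape at sunset"   # placeholder prompt
image = pipe(prompt, num_inference_steps=20).images[0]
image.save("out.png")
```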


This is an example of a 540p-input-resolution image being upscaled to 4K resolution, which results in much cleaner lines, sharper textures, and a better overall experience. Though Qualcomm has a non-AI version of this available today, called Snapdragon GSR, someday in the future mobile enthusiast gamers are going to be treated to even better levels of image quality without sacrificing battery life, and with even higher frame rates.

This is just one example of gaming and media enhancement with pre-trained and quantized machine learning models, but you can quickly think of a myriad of applications that could benefit greatly, from recommendation engines to location-aware guidance, to computational photography techniques and more.
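
“Quantized” here means the network’s weights (and often its activations) are stored as low-precision integers instead of 32-bit floats, which cuts memory traffic and power. As a minimal, generic illustration rather than Qualcomm’s actual toolchain, PyTorch can apply post-training dynamic quantization to a toy model’s linear layers:

```python
import torch
import torch.nn as nn

# A toy model standing in for any pre-trained network.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Post-training dynamic quantization: Linear weights become int8;
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # same interface, smaller and cheaper to run
```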


With the launch of Large Language Models (LLMs) for Generative Artificial Intelligence (GenAI), the world has become both enamored of and concerned about the potential of AI. Holding a conversation, passing a test, developing a research paper, or writing software code are tremendous feats of AI, but they are only the beginning of what GenAI will be able to accomplish over the next few years. All this innovative capability comes at a high cost in terms of processing performance and power consumption. So, while the potential for AI may be limitless, physics and costs may ultimately be the boundaries.

Tirias Research forecasts that on the current course, generative AI data center server infrastructure plus operating costs will exceed $76 billion by 2028, with growth challenging the business models and profitability of emergent services such as search, content creation, and business automation incorporating GenAI. For perspective, this cost is more than twice the estimated annual operating cost of Amazon’s cloud service AWS, which today holds one third of the cloud infrastructure services market according to Tirias Research estimates.

This forecast incorporates an aggressive 4X improvement in hardware compute performance, but this gain is overrun by a 50X increase in processing workloads, even with a rapid rate of innovation around inference algorithms and their efficiency. Neural Networks (NNs) designed to run at scale will be even more highly optimized and will continue to improve over time, which will increase each server’s capacity. However, this improvement is countered by increasing usage, more demanding use cases, and more sophisticated models with orders of magnitude more parameters. The cost and scale of GenAI will demand innovation in optimizing NNs and is likely to push the computational load out from data centers to client devices like PCs and smartphones.
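
A rough way to read those numbers: if processing workloads grow about 50X while hardware performance improves about 4X over the same period, the net demand for compute per unit of hardware capability still rises by roughly an order of magnitude,

```latex
\frac{\text{workload growth}}{\text{hardware gain}} \approx \frac{50}{4} \approx 12.5\times
```

which is why the forecast lands at tens of billions of dollars despite aggressive efficiency gains.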

Technology As A Force For Good In People’s Lives — Dr. Emre Ozcan, PhD, VP, Global Head of Digital Health & Walid Mehanna, Group Data Officer And Senior Vice President, Merck KGaA, Darmstadt, Germany.


EPISODE DISCLAIMER — At any time during this episode when anyone says Merck, in any context, it shall always be referring to Merck KGaA, Darmstadt, Germany.

Dr. Emre Ozcan, Ph.D. is VP, Global Head of Digital Health, at Merck KGaA, Darmstadt, Germany (https://www.emdgroup.com/en), where he brings 15+ years of experience in biopharma, med-tech, and healthcare consulting, spanning strategy, research, marketing, and operations in several therapeutic areas. In his current role, he is accountable for the design and end-to-end delivery of digital health solutions that support Merck KGaA, Darmstadt, Germany franchise strategies and shape the architecture of the offering “around the drug,” including devices and diagnostics.

A novel protocol for quantum computers could reproduce the complex dynamics of quantum materials.

RIKEN researchers have created a hybrid quantum-computational algorithm that can efficiently calculate atomic-level interactions in complex materials. This innovation enables the use of smaller quantum computers or conventional ones to study condensed-matter physics and quantum chemistry, paving the way for new discoveries in these fields.

A quantum-computational algorithm that could be used to efficiently and accurately calculate atomic-level interactions in complex materials has been developed by RIKEN researchers. It has the potential to bring an unprecedented level of understanding to condensed-matter physics and quantum chemistry—an application of quantum computers first proposed by the brilliant physicist Richard Feynman in 1981.
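
The algorithm itself isn’t reproduced in this summary, so here is a generic, classical sketch of the underlying task: simulating dynamics means applying the time evolution operator to a state, and the usual compromise, on quantum and classical hardware alike, is a Trotter product of the Hamiltonian’s local terms. The NumPy/SciPy toy below compares exact and Trotterized evolution for a three-site Heisenberg chain; it illustrates the object being computed, not RIKEN’s hybrid method.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators and a 3-site nearest-neighbour Heisenberg chain.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def two_site(op_a, op_b, i, n=3):
    """Operator op_a on site i and op_b on site i+1, identity elsewhere."""
    mats = [I2] * n
    mats[i], mats[i + 1] = op_a, op_b
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

bonds = [sum(two_site(s, s, i) for s in (sx, sy, sz)) for i in range(2)]
H = sum(bonds)

t, n_steps = 1.0, 100
U_exact = expm(-1j * H * t)                       # full time evolution operator
step = np.eye(8, dtype=complex)
for h in bonds:                                   # first-order Trotter step
    step = step @ expm(-1j * h * t / n_steps)
U_trotter = np.linalg.matrix_power(step, n_steps)

print("Trotter error:", np.abs(U_exact - U_trotter).max())   # roughly O(t^2 / n)
```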

During its ongoing Think 2023 conference, IBM today announced an end-to-end solution to prepare organisations to adopt quantum-safe cryptography. Called Quantum Safe technology, it is a set of tools and capabilities that integrates IBM’s deep security expertise. Quantum-safe cryptography refers to cryptographic algorithms designed to resist attacks by both classical and quantum computers.

Under Quantum Safe technology, IBM is offering three capabilities. First is the Quantum Safe Explorer, which locates cryptographic assets, dependencies, and vulnerabilities and aggregates all potential risks in one central location. Next is the Quantum Safe Advisor, which allows the creation of a cryptographic inventory to prioritise risks. Lastly, the Quantum Safe Remediator lets organisations test quantum-safe remediation patterns and deploy quantum-safe solutions.
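
IBM hasn’t published the Explorer’s internals here, so purely as a hypothetical illustration of what “locating cryptographic assets” can mean in practice, the sketch below scans a source tree for references to quantum-vulnerable primitives (RSA, ECDH, and so on) and tallies them into a simple inventory; real tooling works from far richer signals than string matching.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical, simplified stand-in for a cryptographic inventory scan:
# flag source files that mention quantum-vulnerable primitives.
VULNERABLE = re.compile(r"\b(RSA|DSA|ECDSA|ECDH|Diffie-?Hellman)\b", re.IGNORECASE)

def scan(root: str) -> Counter:
    inventory = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".c", ".go", ".ts"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in VULNERABLE.findall(text):
            inventory[(match.upper(), str(path))] += 1
    return inventory

if __name__ == "__main__":
    for (algo, path), count in scan("./src").most_common():   # "./src" is a placeholder
        print(f"{algo:16} {count:4}  {path}")
```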

The company has also announced the IBM Quantum Safe Roadmap, which will serve as a guide for industries adopting quantum-safe technology. It is the company’s first blueprint to help companies deal with anticipated cryptographic standards and requirements and to protect their systems from vulnerabilities.

An algorithm that allows more precise forecasts of the positions and velocities of a beam’s distribution of particles as it passes through an accelerator has been developed by researchers with the Department of Energy (DOE) and the University of Chicago.

The linear accelerator at the DOE’s SLAC National Accelerator Laboratory fires bursts of close to one billion electrons, traveling at nearly light speed, through long metallic pipes to generate its particle beam. Located in Menlo Park, California, the facility, originally called the Stanford Linear Accelerator Center, has used its 3.2-kilometer accelerator since its construction in 1962 to propel electrons to energies as great as 50 gigaelectronvolts (GeV).

The powerful particle beam generated by SLAC’s linear accelerator is used to study everything from innovative materials to the behavior of molecules at the atomic scale, even though the beam itself remains somewhat mysterious: researchers have a hard time gauging what it looks like as it passes through the accelerator.
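
The DOE/UChicago algorithm isn’t reproduced in this summary, but the quantity it forecasts, each particle’s position and velocity (its phase-space coordinates), is conventionally propagated through accelerator elements with transfer matrices. The toy sketch below pushes a Gaussian bunch through a drift and a thin-lens quadrupole with made-up parameters, just to show what “predicting the beam’s distribution downstream” means at the simplest level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy electron bunch: rows are (x [m], x' [rad]) phase-space coordinates.
n_particles = 100_000
bunch = rng.multivariate_normal(
    mean=[0.0, 0.0],
    cov=[[1e-6, 0.0], [0.0, 1e-8]],   # made-up beam sizes, illustration only
    size=n_particles,
).T

def drift(L):
    """Transfer matrix for a field-free drift of length L (meters)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens focusing quadrupole with focal length f (meters)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Made-up beamline: drift 10 m -> quadrupole (f = 5 m) -> drift 5 m.
M = drift(5.0) @ thin_quad(5.0) @ drift(10.0)
bunch_out = M @ bunch

print("rms x before: %.3e m" % bunch[0].std())
print("rms x after:  %.3e m" % bunch_out[0].std())
```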