
Meta says its new AI supercomputer will be the world’s fastest

Meta has unveiled the first phase of a new AI supercomputer. Once the AI Research SuperCluster (RSC) is fully built out later this year, the company believes it will be the fastest AI supercomputer on the planet, capable of “performing at nearly 5 exaflops of mixed precision compute.”
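For readers unfamiliar with the metric, “mixed precision compute” means most arithmetic is done in a low-precision format such as FP16 while sums are accumulated at higher precision, which is how AI machines reach exaflop-class throughput. Below is a minimal NumPy sketch of the idea; the matrix sizes and values are illustrative only and have nothing to do with Meta’s actual workloads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs stored in half precision (FP16), as on AI accelerators.
a = rng.random((512, 512)).astype(np.float16)
b = rng.random((512, 512)).astype(np.float16)

# Pure FP16: multiply and accumulate entirely in half precision.
c_fp16 = a @ b

# Mixed precision: FP16 inputs, but the accumulation happens in FP32.
# (Emulated here by upcasting; FP16 products fit exactly in FP32, so this
# matches the "FP16 multiply, FP32 accumulate" done by tensor-core hardware.)
c_mixed = a.astype(np.float32) @ b.astype(np.float32)

# Double-precision reference for comparison.
c_ref = a.astype(np.float64) @ b.astype(np.float64)

print("max error, pure FP16 :", float(np.abs(c_fp16 - c_ref).max()))
print("max error, mixed     :", float(np.abs(c_mixed - c_ref).max()))
```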

The company says RSC will help researchers develop better AI models that can learn from trillions of examples. Among other things, the models will be able to build better augmented reality tools and “seamlessly analyze text, images and video together,” according to Meta. Much of this work is in service of its vision for the metaverse, in which it says AI-powered apps and products will have a key role.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together,” technical program manager Kevin Lee and software engineer Shubho Sengupta wrote.

The Human Brain-Scale AI Supercomputer Is Coming

What’s next? Human brain-scale AI.

Funded by the Slovakian government using funds allocated by the EU, the I4DI consortium is behind the initiative to build a 64 AI exaflop machine (that’s 64 billion billion AI operations per second) on our platform by the end of 2022. That would make Slovakia and the EU the first in history to deliver a human brain-scale AI supercomputer. Meanwhile, almost a dozen other countries are watching the project closely, with an interest in replicating the machine at home.

There are multiple approaches to achieving human brain-like AI, including machine learning, spiking neural networks such as SpiNNaker, neuromorphic computing, bio AI, explainable AI and general AI. Delivering human brain-scale AI across these approaches requires universal supercomputers with universal processors.
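To make one of those approaches concrete: spiking systems such as SpiNNaker simulate very large numbers of simple neuron models, of which the leaky integrate-and-fire neuron is the textbook example. The sketch below is that generic textbook model, not SpiNNaker’s implementation; the time step, threshold and injected current are arbitrary choices:

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest,
# integrates injected current, and emits a spike when it crosses a threshold.
dt, t_max = 1e-3, 0.5                      # 1 ms steps, 0.5 s of simulated time
tau, v_rest, v_thresh, v_reset = 20e-3, -65.0, -50.0, -65.0
r_m = 10.0                                 # membrane resistance (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    t = step * dt
    i_in = 2.0 if 0.1 <= t < 0.4 else 0.0  # inject current between 100 ms and 400 ms
    # Forward-Euler update of: tau * dv/dt = -(v - v_rest) + r_m * i_in
    v += (dt / tau) * (-(v - v_rest) + r_m * i_in)
    if v >= v_thresh:                      # threshold crossing: record a spike, reset
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes; first at {1e3 * spike_times[0]:.0f} ms"
      if spike_times else "no spikes")
```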

Quantum Computer With More Than 5,000 Qubits Launched

Official launch marks a milestone in the development of quantum computing in Europe.

A quantum annealer with more than 5,000 qubits has been put into operation at Forschungszentrum Jülich. The Jülich Supercomputing Centre (JSC) and D-Wave Systems, a leading provider of quantum computing systems, today launched the company’s first cloud-based quantum service outside North America. The new system is located at Jülich and will work closely with the supercomputers at JSC in the future. The annealing quantum computer is part of the Jülich UNified Infrastructure for Quantum computing (JUNIQ), which was established in autumn 2019 to provide researchers in Germany and Europe with access to various quantum systems.
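Annealing machines like this one are programmed by handing them a cost function (an Ising or QUBO problem) rather than a gate-level circuit. The sketch below uses D-Wave’s open-source Ocean tools on a toy two-spin problem, solving it locally with an exact solver; the commented lines show where a hardware sampler would be swapped in. Account setup, API tokens and any JUNIQ-specific access details are omitted, as they depend on how a given user reaches the Jülich system:

```python
import dimod

# Toy Ising problem: two spins with a ferromagnetic coupling, so the lowest-energy
# states are the two aligned configurations (+1, +1) and (-1, -1).
bqm = dimod.BinaryQuadraticModel(
    {"s1": 0.0, "s2": 0.0},      # linear biases
    {("s1", "s2"): -1.0},        # quadratic coupling
    0.0,                         # constant offset
    dimod.SPIN,
)

# Solve exactly on the local CPU first; this needs no hardware access.
print(dimod.ExactSolver().sample(bqm).first)

# To run on a real annealer, a hardware sampler would replace the exact solver:
#   from dwave.system import DWaveSampler, EmbeddingComposite
#   sampleset = EmbeddingComposite(DWaveSampler()).sample(bqm, num_reads=100)
# (requires an API token and access to a specific system, such as the one at Jülich)
```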

Light-matter interactions simulated on the world’s fastest supercomputer

Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, the usual computational approaches for modeling such interactions have limited usefulness and capability. Now, researchers from Japan have developed a technique that overcomes these limitations.

In a study published this month in The International Journal of High Performance Computing Applications, a research team led by the University of Tsukuba describes a highly efficient method for simulating light-matter interactions at the atomic scale.

What makes these interactions so difficult to simulate? One reason is that phenomena associated with the interactions encompass many areas of physics, involving both the propagation of light waves and the dynamics of electrons and ions in matter. Another reason is that such phenomena can cover a wide range of length and time scales.
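As a very rough illustration of the “propagation of light waves” half of the problem, the sketch below steps a one-dimensional electromagnetic pulse with the standard finite-difference time-domain (FDTD) scheme. It is a generic toy in normalized units, far simpler than the first-principles light-matter method described in the paper, and the grid size and pulse shape are arbitrary:

```python
import numpy as np

# 1D FDTD (Yee scheme) in vacuum, normalized units with Courant number 1:
# E and H live on staggered grids and leapfrog in time.
n, steps, src = 400, 300, 50
e = np.zeros(n)
h = np.zeros(n - 1)

for t in range(steps):
    h += np.diff(e)                            # update H from the curl of E
    e[1:-1] += np.diff(h)                      # update E from the curl of H
    e[src] += np.exp(-((t - 40) / 12) ** 2)    # inject a Gaussian pulse (soft source)

# The injected pulse splits in two and propagates away from the source cell.
print("peak field magnitude at cell", int(np.argmax(np.abs(e))))
```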

New Silicon Carbide Qubits Bring Us One Step Closer to Quantum Networks

Chromium defects in silicon carbide may provide a new platform for quantum information.

Quantum computers may be able to solve science problems that are impossible for today’s fastest conventional supercomputers. Quantum sensors may be able to measure signals that cannot be measured by today’s most sensitive sensors. Quantum bits (qubits) are the building blocks for these devices. Scientists are investigating several quantum systems for quantum computing and sensing applications. One such system, spin qubits, is based on controlling the orientation of an electron’s spin at defect sites in semiconductor materials. Defects can include small amounts of an element different from the material the semiconductor is mainly made of. Researchers recently demonstrated how to make high-quality spin qubits based on chromium defects in silicon carbide.
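Controlling “the orientation of an electron’s spin” in practice means driving coherent rotations of a two-level system, often measured as Rabi oscillations. The following sketch is plain textbook quantum mechanics for a resonantly driven spin, not a model of the chromium-defect experiments; the drive strength and time points are arbitrary:

```python
import numpy as np

# Resonant drive of a spin qubit in the rotating frame: H = (Omega / 2) * sigma_x.
# Starting in |0>, the excited-state population oscillates as sin^2(Omega * t / 2).
omega = 2 * np.pi * 1e6                       # Rabi frequency: 1 MHz drive (illustrative)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)

for t in np.linspace(0, 1e-6, 5):             # evolve for up to 1 microsecond
    # U = exp(-i H t) for H = (omega / 2) * sigma_x has a closed form:
    u = (np.cos(omega * t / 2) * np.eye(2)
         - 1j * np.sin(omega * t / 2) * sigma_x)
    p1 = abs((u @ psi0)[1]) ** 2              # population of |1>
    print(f"t = {t * 1e9:6.1f} ns  P(|1>) = {p1:.3f}")
```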

Supercomputing! The Purest Indicator of Structural Technological and Economic Progress (1H 2022)

How to check the trend of supercomputing progress, and why it is as close to a pure indicator of technological progress rates as one can find. The recent flattening of this trend points to a flattening of technological and economic progress more broadly, relative to long-term trendlines.

Top500.org chart : https://top500.org/statistics/perfdevel/
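One simple way to check the trend yourself is to take the performance-development numbers from the Top500 link above and fit a straight line to their logarithm: the slope gives the long-term exponential growth rate, and a flattening shows up as recent points falling below that line. The sketch below does exactly that, but with made-up placeholder values standing in for the real Top500 figures:

```python
import numpy as np

# Placeholder stand-ins for Top500 aggregate performance (petaflops); replace these
# with the real values from https://top500.org/statistics/perfdevel/
years = np.array([2008, 2010, 2012, 2014, 2016, 2018, 2020, 2022])
pflops = np.array([17, 44, 162, 309, 672, 1220, 2430, 4400])

# Fit log10(performance) vs year: the slope is the long-term exponential growth rate.
slope, intercept = np.polyfit(years, np.log10(pflops), 1)
print(f"fitted growth: x{10**slope:.2f} per year "
      f"(doubling every {np.log10(2) / slope:.1f} years)")

# A flattening shows up as recent residuals drifting negative (below the trendline).
residuals = np.log10(pflops) - (slope * years + intercept)
for y, r in zip(years, residuals):
    print(y, f"{r:+.2f}")
```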

#Supercomputing #EconomicGrowth #TechnologicalProgress #MooresLaw

Nanowire transistor with integrated memory to enable future supercomputers

For many years, a bottleneck in technological development has been how to get processors and memories to work faster together. Now, researchers at Lund University in Sweden have presented a new solution integrating a memory cell with a processor, which enables much faster calculations, as they happen in the memory circuit itself.

In an article in Nature Electronics, the researchers present a new configuration, in which a memory cell is integrated with a vertical transistor selector, all at the nanoscale. This brings improvements in scalability, speed and energy efficiency compared with current mass storage solutions.

The fundamental issue is that anything requiring large amounts of data to be processed, such as AI and machine learning, requires both speed and more capacity. For this to be successful, the memory and processor need to be as close to each other as possible. In addition, it must be possible to run the calculations in an energy-efficient manner, not least because current technology generates a great deal of heat under heavy loads.
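The idea that “calculations happen in the memory circuit itself” can be pictured as a crossbar of memory cells whose stored states double as matrix weights, so a matrix-vector product is produced where the data already sits instead of being shuttled over a bus. The NumPy sketch below shows that general principle only; it is not a model of the Lund device:

```python
import numpy as np

# In-memory computing, schematically: the memory array stores a weight matrix as
# cell conductances; applying an input voltage vector yields output currents that
# ARE the matrix-vector product (Ohm's law plus Kirchhoff's current law).
rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1.0, size=(4, 8))   # 4x8 array of memory cells
voltages = rng.uniform(0.0, 0.2, size=8)            # input voltages on the 8 columns

# Each of the 4 output lines sums the currents of its cells: i = G @ v, "in place".
currents = conductances @ voltages
print(currents)

# The conventional alternative moves both operands across a bus to a separate
# processor, which is exactly the processor-memory bottleneck described above.
```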

Newcomer Conduit Leverages Frontera to Understand SARS-CoV-2 ‘Budding’

I am happy to say that my recently published computational COVID-19 research has been featured in a major news article by HPCwire! I led this research as CTO of Conduit. My team utilized one of the world’s top supercomputers (Frontera) to study the mechanisms by which the coronavirus’s M proteins and E proteins facilitate budding, an understudied part of the SARS-CoV-2 life cycle. Our results may provide the foundation for new ways of designing antiviral treatments which interfere with budding. Thank you to Ryan Robinson (Conduit’s CEO) and my computational team: Ankush Singhal, Shafat M., David Hill, Jr., Tamer Elkholy, Kayode Ezike, and Ricky Williams.


Conduit, created by MIT graduate (and current CEO) Ryan Robinson, was founded in 2017. But it may not have been until a few years later, when the pandemic started, that Conduit found its true calling. While Conduit’s commercial division is busy developing a Covid-19 test called nanoSPLASH, its nonprofit arm was granted access to one of the most powerful supercomputers in the world, Frontera at the Texas Advanced Computing Center (TACC), to model the “budding” process of SARS-CoV-2.

Budding, the researchers explained, is how the virus’s genetic material is encapsulated in a spherical envelope, and the process is key to the virus’s ability to infect. Despite that, they say, it has hitherto been poorly understood.

The Conduit team, comprising Logan Thrasher Collins (CTO of Conduit), Tamer Elkholy, Shafat Mubin, David Hill, Ricky Williams, Kayode Ezike and Ankush Singhal, sought to change that, applying for an allocation from the White House-led Covid-19 High-Performance Computing Consortium to model the budding process on a supercomputer.

Bug in backup software results in loss of 77 terabytes of research data at Kyoto University

Computer maintenance workers at Kyoto University have announced that, due to an apparent bug in software used to back up research data, researchers using the university’s Hewlett Packard Cray computing system and its Lustre file system have lost approximately 77 terabytes of data. The team at the university’s Institute for Information Management and Communication posted a Failure Information page detailing what is known so far about the data loss.

The team, part of the university’s Information Department, Information Infrastructure Division (Supercomputing), reported that files in the /LARGE0 directory (on the DataDirect ExaScaler storage system) were lost during a system backup procedure. Some in the press have suggested that the problem arose from a faulty script that was supposed to delete only old, unneeded log files. The team noted that it was originally thought that approximately 100TB of files had been lost, but that figure has since been revised down to 77TB. They note also that the failure occurred on December 16 between the hours of 5:50 and 7 pm. Affected users were immediately notified via email. The team further notes that approximately 34 million files were lost and that they belonged to 14 known research groups. The team did not release the names of the research groups or say what sort of research they were conducting. They did note that data from another four groups appears to be restorable.
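The “faulty script that was supposed to delete only old, unneeded log files” description matches a classic failure mode: a cleanup job whose target path or age filter ends up broader than intended. The actual Kyoto script has not been published, so the Python routine below is purely hypothetical; it illustrates the kind of guardrails (refusing an empty or root path, matching a narrow file pattern, defaulting to a dry run) that contain this class of bug:

```python
import time
from pathlib import Path

def delete_old_logs(base_dir: str, days: int = 10, dry_run: bool = True) -> None:
    """Delete *.log files under base_dir older than `days`. Hypothetical example."""
    # Guardrail 1: never accept an empty or root path (an unset variable that
    # silently becomes "" or "/" is how cleanup scripts destroy the wrong data).
    if not base_dir or Path(base_dir).resolve() == Path("/"):
        raise ValueError(f"refusing to clean suspicious base_dir: {base_dir!r}")

    cutoff = time.time() - days * 86400
    for path in Path(base_dir).rglob("*.log"):          # Guardrail 2: narrow pattern
        if path.is_file() and path.stat().st_mtime < cutoff:
            if dry_run:                                  # Guardrail 3: dry-run first
                print("would delete:", path)
            else:
                path.unlink()

# delete_old_logs("/scratch/backup/logs", days=10, dry_run=True)  # hypothetical path
```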

Kyoto University Loses 77 Terabytes of Research Data After Supercomputer Backup Error

Unfortunately, some of the data is lost forever. 🧐

#engineering


A routine backup procedure meant to safeguard data of researchers at Kyoto University in Japan went awry and deleted 77 terabytes of data, Gizmodo reported. The incident occurred between December 14 and 16, first came to light on the 16th, and affected as many as 14 research groups at the university.

Supercomputers are the ultimate computing devices available to researchers as they try to answer complex questions on a range of topics, from molecular modeling and oil exploration to climate models and quantum mechanics, to name a few. Capable of performing on the order of a hundred quadrillion operations per second, these computers are expensive not only to build but also to operate, costing hundreds of dollars for every hour of operation.

According to Bleeping Computer, which originally reported the mishap, the university uses Cray supercomputers, with the top system employing 122,400 computing cores. The system’s memory, however, is limited to approximately 197 terabytes, so an ExaScaler data storage system is used; it can transfer 150 GB of data per second and store up to 24 petabytes of information.
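Taken together, those figures put the incident in perspective: at the quoted 150 GB/s, the lost 77 terabytes could in principle be streamed in under ten minutes, and they amount to well under one percent of the 24-petabyte store, so the loss matters because the bytes no longer exist anywhere, not because they were hard to move. The back-of-envelope arithmetic, assuming decimal units throughout:

```python
# Back-of-envelope numbers from the figures quoted above (decimal units assumed).
lost_tb = 77
transfer_gb_per_s = 150
capacity_pb = 24
files_lost = 34_000_000

seconds_to_stream = lost_tb * 1000 / transfer_gb_per_s   # 77 TB moved at 150 GB/s
share_of_capacity = lost_tb / (capacity_pb * 1000)       # fraction of the 24 PB store
avg_file_mb = lost_tb * 1_000_000 / files_lost           # mean size of a lost file

print(f"streaming 77 TB at 150 GB/s: ~{seconds_to_stream / 60:.1f} minutes")
print(f"share of the 24 PB store:    ~{share_of_capacity:.1%}")
print(f"average lost file size:      ~{avg_file_mb:.1f} MB")
```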