
Scientists Make Big Step Towards Making Antimatter Stand Still

Scientists have been able to trap antimatter particles using a combination of electric and magnetic fields. Antiprotons have been stored for over a year, while antimatter electrons (positrons) have been stored for shorter periods of time, due to their lower mass. In 2011, researchers at CERN announced that they had stored antihydrogen for over 1,000 seconds.

While scientists have been able to store and manipulate small quantities of antimatter, they have not been able to answer why antimatter is so rare in the universe. According to Einstein’s famous equation E = mc², energy should convert into matter and antimatter in equal quantities. And, immediately after the Big Bang, there was a lot of energy. Accordingly, we should see as much antimatter as matter in our universe, and yet we don’t. This is a pressing unsolved mystery of modern physics.

According to Einstein’s equations, as well as other modern theories of antimatter, antimatter should be exactly the same as ordinary matter, with only the electric charges reversed. Thus, antimatter hydrogen should emit light just like ordinary hydrogen does, and with exactly the same wavelengths. In fact, an experiment showing exactly this behavior was reported in early 2020. This was a triumph for current theories, but it meant that no explanation for the universe’s preference for matter was found.

Creating deeper defense against cyber attacks

To address the growing threat of cyberattacks on industrial control systems, a KAUST team including Fouzi Harrou and Wu Wang, led by Ying Sun, has developed an improved method for detecting malicious intrusions.

Internet-based industrial control systems are widely used to monitor and operate factories and critical infrastructure. In the past, these systems relied on expensive dedicated networks; however, moving them online has made them cheaper and easier to access. But it has also made them more vulnerable to attack, a danger that is growing alongside the increasing adoption of internet of things (IoT) technology.

Conventional security solutions, such as firewalls and antivirus software, are not appropriate for protecting industrial control systems because of those systems’ distinct specifications. Their sheer complexity also makes it hard for even the best algorithms to pick out the abnormal occurrences that might signal an intrusion.
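
The article does not include the team’s code, but as a rough sketch of the kind of unsupervised anomaly detection involved, here is a minimal example using scikit-learn’s IsolationForest on synthetic sensor readings. The feature names and data are invented for illustration; this is not the KAUST method.

```python
# Generic sketch of unsupervised intrusion detection on industrial sensor data.
# This is NOT the KAUST team's method; the features and readings below are
# invented purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal operation: (temperature, pressure, valve position) readings.
normal = rng.normal(loc=[70.0, 1.2, 0.5], scale=[2.0, 0.05, 0.1], size=(1000, 3))

# A handful of anomalous readings, e.g. from a spoofed sensor.
attacks = rng.normal(loc=[95.0, 0.6, 0.9], scale=[1.0, 0.05, 0.05], size=(10, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for outliers (flagged anomalies).
print((detector.predict(attacks) == -1).mean())  # fraction of attacks caught
```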

Negative Energy, Quantum Information and Causality — Adam Levine

Friends Lunch with a Member.

Topic: Negative Energy, Quantum Information and Causality.
Speaker: Adam Levine.
Date: November 19, 2021

Einstein’s equations of gravity show that too much negative energy can lead to causality violations and causal paradoxes such as the so-called “grandfather paradox.” In quantum mechanics, however, negative energies can arise from intrinsically quantum effects, such as the Casimir effect. Thus, it is not clear that gravity and quantum mechanics can be self-consistently combined. In this talk, Levine will discuss modern advances in understanding the connection between energy and causality in gravity and how quantum gravity avoids obvious paradoxes. He will also explore how this line of thought leads to new insights in quantum field theory, which governs particle physics.
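
For illustration, a standard textbook result (not part of the talk abstract itself): the Casimir energy per unit area between two ideal parallel conducting plates at separation a is negative, a concrete example of the quantum negative energy the talk refers to.

```latex
% Casimir energy per unit area between two ideal parallel conducting plates
% separated by a distance a (a standard quantum-field-theory result):
\frac{E}{A} = -\frac{\pi^{2}\hbar c}{720\,a^{3}}
```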

As a physicist, Adam Levine’s research aims to understand the structure of entanglement in quantum field theories and quantum gravity through the use of techniques from the study of conformal field theories, as well as quantum information theory and AdS/CFT. With support from the National Science Foundation, Adam is a long-term Member in the School of Natural Sciences. He received his Ph.D. from the University of California, Berkeley (2019), was a Graduate Fellow at the Kavli Institute for Theoretical Physics (2018) and a National Defense Science and Engineering Graduate Fellow (2017–2020), and received the Jeffrey Willick Memorial Award for Outstanding Scholarship in Astrophysics from Stanford University (2015).

What can Artificial Intelligence do?

✅ Instagram: https://www.instagram.com/pro_robots.

You are on the PRO Robots channel, and in this video we will talk about artificial intelligence: neural networks that mimic the brain’s structure, understand and assist one another, teach themselves, reimagine biological life forms, replace people in various jobs, and even cheat. What have neural networks learned lately? All the new skills and superpowers of artificial intelligence-based systems in one video!

0:00 In this video.
0:26 Isomorphic Labs.
1:14 Artificial intelligence trains robots.
2:01 MIT researchers’ algorithm teaches robots social skills.
2:45 AI adopts brain structure.
3:28 Revealing cause and effect relationships.
4:40 Miami Herald replaces fired journalist with bot.
5:26 Nvidia unveiled a neural network that creates animated 3D face models based on voice.
5:55 Sber presented code generation model based on ruGPT-3 neural network.
6:50 ruDALL-E multimodal neural network.
7:16 Cristofari Neo supercomputer for neural network training.

#prorobots #robots #robot #futuretechnologies #robotics.

More interesting and useful content:

✅ Elon Musk Innovation https://www.youtube.com/playlist?list=PLcyYMmVvkTuQ-8LO6CwGWbSCpWI2jJqCQ

Robots and AI assist in designing and building Swiss university’s ‘hanging gardens’

Architecture and construction have always been, rather quietly, at the bleeding edge of tech and materials trends. It’s no surprise, then, especially at a renowned technical university like ETH Zurich, to find a project utilizing AI and robotics in a new approach to these arts. The automated design and construction they are experimenting with show how homes and offices might be built a decade from now.

The project is a sort of huge sculptural planter, “hanging gardens” inspired by the legendary structures in the ancient city of Babylon. (Incidentally, it was my ancestor, Robert Koldewey, who excavated/looted the famous Ishtar Gate to the city.)

Begun in 2019, Semiramis (named after the legendary queen of Babylon) is a collaboration between human and AI designers. The general idea, of course, came from the creative minds of its creators, architecture professors Fabio Gramazio and Matthias Kohler. But the design was achieved by putting the basic requirements, such as size, the necessity of watering and the style of construction, through a set of computer models and machine learning algorithms.

Microsoft Research Introduces ‘Tutel’: A High-Performance MoE Library To Facilitate The Development Of Large-Scale DNN (Deep Neural Network) Models

Tutel is a high-performance mixture-of-experts (MoE) library developed by Microsoft researchers to aid in the development of large-scale DNN (Deep Neural Network) models. Tutel is highly optimized for the new Azure NDm A100 v4 series, and its diverse and flexible MoE algorithmic support allows developers across AI domains to execute MoE more easily and efficiently. For a single MoE layer, Tutel achieves an 8.49x speedup on an NDm A100 v4 node with 8 GPUs and a 2.75x speedup on 64 NDm A100 v4 nodes with 512 A100 GPUs, compared with state-of-the-art MoE implementations such as Meta’s Facebook AI Research Sequence-to-Sequence Toolkit (fairseq) in PyTorch.

Tutel delivers a more than 40% speedup for Meta’s 1.1 trillion–parameter MoE language model with 64 NDm A100 v4 nodes for end-to-end performance, thanks to optimization for all-to-all communication. When working on the Azure NDm A100 v4 cluster, Tutel delivers exceptional compatibility and comprehensive capabilities to assure outstanding performance. Tutel is free and open-source software that has been integrated into fairseq.

Tutel complements existing high-level MoE solutions such as fairseq and FastMoE by focusing on the optimization of MoE-specific computation and all-to-all communication, along with diverse and flexible algorithmic support for MoE. Tutel features a straightforward user interface that makes it simple to combine with other MoE systems. Developers can also use the Tutel interface to incorporate standalone MoE layers into their own DNN models from the ground up, taking advantage of the highly optimized state-of-the-art MoE features right away.
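
For a rough sense of the computation a single MoE layer performs, here is a minimal top-k gated MoE sketch in plain PyTorch. It is illustrative only: the class and parameter names are invented, it is not Tutel’s interface, and it omits the cross-GPU all-to-all token routing that Tutel actually optimizes.

```python
# Generic sketch of a top-k gated mixture-of-experts (MoE) layer in PyTorch.
# Illustrative only -- this is not Tutel's API. A real MoE library also
# handles the all-to-all exchange that routes tokens across GPUs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, model_dim=256, hidden_dim=512, num_experts=4, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(model_dim, num_experts)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(model_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, model_dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                              # x: (tokens, model_dim)
        scores = self.gate(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)     # route to top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens sent to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(8, 256)).shape)  # torch.Size([8, 256])
```

In practice, the payoff of a real MoE library comes from dispatching each token to its experts across many GPUs; that all-to-all exchange is exactly the communication step Tutel reports optimizing.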

Enhancing the workhorse: Artificial intelligence, hardware innovations boost confocal microscope’s performance

Since artificial intelligence pioneer Marvin Minsky patented the principle of confocal microscopy in 1957, it has become the workhorse standard in life science laboratories worldwide, due to its superior contrast over traditional wide-field microscopy. Yet confocal microscopes aren’t perfect. They boost resolution by imaging just a single in-focus point at a time, so it can take quite a while to scan an entire, delicate biological sample, exposing it to light doses that can be toxic.

To push confocal imaging to an unprecedented level of performance, a collaboration at the Marine Biological Laboratory (MBL) has invented a “kitchen sink” confocal platform that borrows solutions from other high-powered imaging systems, adds a unifying thread of “Deep Learning” artificial intelligence algorithms, and successfully improves the confocal’s volumetric resolution by more than 10-fold while simultaneously reducing phototoxicity. Their report on the technology, called “Multiview Confocal Super-Resolution Microscopy,” is published online this week in Nature.
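
The paper’s code is not reproduced here, but the underlying supervised restoration idea can be sketched: train a small network on matched pairs of low-quality and high-quality volumes, then apply it to new scans. The following minimal PyTorch sketch is a generic illustration with invented shapes and random placeholder data; it is not the MBL team’s published network.

```python
# Sketch of the supervised image-restoration idea behind AI-enhanced microscopy:
# train a small 3D CNN to map low-quality volumes to matched high-quality ones.
# Generic illustration only -- not the MBL team's published architecture.
import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):          # x: (batch, 1, depth, height, width)
        return self.net(x) + x     # residual: predict a correction to the input

model = TinyRestorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data: pairs of (noisy low-quality, registered high-quality) volumes.
noisy = torch.randn(2, 1, 16, 32, 32)
clean = torch.randn(2, 1, 16, 32, 32)

for step in range(3):              # tiny demo training loop
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
    print(step, loss.item())
```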

“Many labs have confocals, and if they can eke more performance out of them using these artificial intelligence algorithms, then they don’t have to invest in a whole new microscope. To me, that’s one of the best and most exciting reasons to adopt these AI methods,” said senior author and MBL Fellow Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering.

How AI Is Deepening Our Understanding of the Brain

Artificial neural networks are famously inspired by their biological counterparts. Yet compared to human brains, these algorithms are highly simplified, even “cartoonish.”

Can they teach us anything about how the brain works?

For a panel at the Society for Neuroscience annual meeting this month, the answer is yes. Deep learning wasn’t meant to model the brain. In fact, it contains elements that are biologically improbable, if not utterly impossible. But that’s not the point, argues the panel. By studying how deep learning algorithms perform, we can distill high-level theories for the brain’s processes—inspirations to be further tested in the lab.

China’s AI giant SenseTime readies Hong Kong IPO

One of China’s biggest AI solution providers, SenseTime, is a step closer to its initial public offering. SenseTime has received regulatory approval to list on the Hong Kong Stock Exchange, according to media reports. Founded in 2014, SenseTime was christened as one of China’s four “AI Dragons” alongside Megvii, CloudWalk, and Yitu. In the second half of the 2010s, their algorithms found much demand from businesses and governments hoping to turn real-life data into actionable insights. Cameras embedded with their AI models watch city streets 24 hours a day. Malls use their sensing solutions to track and predict crowds on the premises.

SenseTime’s three rivals have all mulled plans to sell shares either in mainland China or Hong Kong. Megvii is preparing to list on China’s Nasdaq-style STAR board after its HKEX application lapsed.

The window for China’s data-rich tech firms to list overseas has narrowed. Beijing is making it harder for companies with sensitive data to go public outside China. And regulators in the West are wary of facial recognition companies that could aid mass surveillance.

But in the past few years, China’s AI upstarts were sought after by investors all over the world. In 2018 alone, SenseTime racked up more than $2 billion in investment. To date, the company has raised a staggering $5.2 billion in funding through 12 rounds. Its biggest outside shareholders include SoftBank Vision Fund and Alibaba’s Taobao. For its flotation in Hong Kong, SenseTime plans to raise up to $2 billion, according to Reuters.


The Mathematical Structure of Particle Collisions Comes Into View

And that’s where physicists are getting stuck.

Zooming in to that hidden center involves virtual particles — quantum fluctuations that subtly influence each interaction’s outcome. The fleeting existence of the quark pair above, like many virtual events, is represented by a Feynman diagram with a closed “loop.” Loops confound physicists — they’re black boxes that introduce additional layers of infinite scenarios. To tally the possibilities implied by a loop, theorists must turn to a summing operation known as an integral. These integrals take on monstrous proportions in multi-loop Feynman diagrams, which come into play as researchers march down the line and fold in more complicated virtual interactions.
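
For a concrete sense of what these objects look like, here is the simplest example, the scalar one-loop “bubble” with external momentum p. This is a standard textbook expression added for illustration, not taken from the article: the loop momentum ℓ is unobserved, so every possible value must be summed over.

```latex
% Scalar one-loop "bubble" integral: the loop momentum \ell is unobserved,
% so all of its possible values are summed over via an integral.
I(p) = \int \frac{d^{4}\ell}{(2\pi)^{4}} \, \frac{1}{\ell^{2}\,(\ell+p)^{2}}
```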

Physicists have algorithms to compute the probabilities of no-loop and one-loop scenarios, but many two-loop collisions bring computers to their knees. This imposes a ceiling on predictive precision — and on how well physicists can understand what quantum theory says.