
Around the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For instance, in 2017, Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T, then overlaid a leopard skin pattern across the photo, generating a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but considered the modified image a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training algorithm will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability — all without external labels or supervision.
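To make the fill-in-the-blank idea concrete, the toy sketch below builds next-word training pairs directly from raw text, so the labels really do come from the data itself. The sentences, window size, and function name are made up for illustration; real large language models operate on subword tokens and train deep networks rather than printing pairs.

```python
# Toy illustration of self-supervised "next-word" training pairs.
# (Illustrative only: real language models tokenize subwords and train deep networks.)
corpus = [
    "the cat sat on the mat",
    "the dog chased the red ball",
]

def make_training_pairs(sentence, context_size=3):
    """Slide a window over the sentence: the first few words are the input,
    the following word is the label; no human annotation required."""
    words = sentence.split()
    pairs = []
    for i in range(context_size, len(words)):
        pairs.append((words[i - context_size:i], words[i]))
    return pairs

for sentence in corpus:
    for context, target in make_training_pairs(sentence):
        print(context, "->", target)
# e.g. ['the', 'cat', 'sat'] -> on
```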

Vanderbilt researchers have developed an active machine learning approach to predict the effects of tumor variants of unknown significance, or VUS, on sensitivity to chemotherapy. VUS, mutated bits of DNA with unknown impacts on cancer risk, are constantly being identified. The growing number of rare VUS makes it imperative for scientists to analyze them and determine the kind of cancer risk they impart.

Traditional prediction methods display limited power and accuracy for rare VUS. Even machine learning, an artificial intelligence tool that leverages data to “learn” and boost performance, falls short when classifying some VUS. Recent work by the lab of Walter Chazin, Chancellor’s Chair in Medicine and professor of biochemistry and chemistry, led by co-first authors and postdoctoral fellows Alexandra Blee and Bian Li, featured an active machine learning technique.

Active machine learning, like conventional machine learning, relies on training an algorithm with existing data, but it also feeds the algorithm new information between rounds of training. Chazin and his lab identified the VUS for which predictions were least certain, performed biochemical experiments on those VUS, and incorporated the resulting data into subsequent rounds of algorithm training. This allowed the model to continually improve its classification of VUS.
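The cycle described above resembles textbook active learning with uncertainty sampling. The sketch below illustrates that loop on synthetic data with a generic classifier; the dataset, model, batch size, and number of rounds are arbitrary assumptions, and the "experiment" step simply reveals held-out labels rather than running real biochemical assays.

```python
# Minimal sketch of an uncertainty-sampling active-learning loop (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-ins: a small labeled set plus a large pool of uncharacterized "variants".
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
labeled_idx = list(range(20))
pool_idx = list(range(20, 500))

model = RandomForestClassifier(random_state=0)
for round_num in range(5):
    model.fit(X[labeled_idx], y[labeled_idx])
    # Score the pool and find the samples the model is least certain about.
    proba = model.predict_proba(X[pool_idx])
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool_idx[i] for i in np.argsort(uncertainty)[-10:]]
    # "Run the experiment": here we simply reveal the held-out labels for those samples.
    labeled_idx.extend(query)
    pool_idx = [i for i in pool_idx if i not in query]
```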

In Einstein’s theory of general relativity, gravity arises when a massive object distorts the fabric of spacetime the way a ball sinks into a piece of stretched cloth. Solving Einstein’s equations by using quantities that apply across all space and time coordinates could enable physicists to eventually find their “white whale”: a quantum theory of gravity.

In a new article in The European Physical Journal H, Donald Salisbury from Austin College in Sherman, USA, explains how Peter Bergmann and Arthur Komar first proposed a way to get one step closer to this goal by using Hamilton-Jacobi techniques. These techniques arose in the study of particle motion, where they yield the complete set of trajectories from a single function of the particle's position and the constants of the motion.

Three of the four fundamental forces (the strong, weak, and electromagnetic forces) hold both in the ordinary world of our everyday experience, modeled by classical physics, and in the spooky world of quantum physics. Problems arise, though, when trying to apply the fourth force, gravity, to the quantum world. In the 1960s and 1970s, Peter Bergmann of Syracuse University, New York, and his associates recognized that in order to someday reconcile Einstein’s theory of general relativity with the quantum world, they needed to find quantities for determining events in space and time that applied across all frames of reference. They succeeded in doing this by using Hamilton-Jacobi techniques.
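For reference, the standard textbook Hamilton-Jacobi equation for a single particle, which packs the motion into one function S of position, time, and the constants of the motion, reads

\[
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0,
\]

where H is the Hamiltonian. A complete solution S(q, α, t), depending on constants of the motion α, generates the whole family of trajectories, which is the sense in which a single function captures the complete set of solutions.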

The US has retaken the top spot in the world supercomputer rankings with the exascale Frontier system at Oak Ridge National Laboratory (ORNL) in Tennessee.

The Frontier system’s score of 1.102 exaflop/s makes it “the most powerful supercomputer to ever exist” and “the first true exascale machine,” the Top 500 project said Monday in the announcement of its latest rankings. One exaflop/s (or exaflops) is 1 quintillion floating-point operations per second.

Frontier was more than twice as fast as a Japanese system that placed second in the rankings, which are based on the LINPACK benchmark that measures the “performance of a dedicated system for solving a dense system of linear equations.”
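As a rough back-of-the-envelope version of what such a benchmark measures, the snippet below times the solution of a dense linear system and divides an approximate operation count by the elapsed time. The matrix size and the use of NumPy are arbitrary choices for illustration; the official HPL/LINPACK benchmark is a far more elaborate code that runs across thousands of nodes.

```python
# Rough sketch of a LINPACK-style measurement: solve a dense linear system and
# estimate the floating-point rate. (Illustrative only; not the real HPL benchmark.)
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)      # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3     # leading-order operation count for LU factorization
print(f"~{flops / elapsed / 1e9:.1f} gigaflop/s on this machine")
```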

Machine-learning researchers make many decisions when designing new models. They decide how many layers to include in neural networks and what weights to give inputs at each node. The result of all this human decision-making is that complex models end up being “designed by intuition” rather than systematically, says Frank Hutter, head of the machine-learning lab at the University of Freiburg in Germany.

A growing field called automated machine learning, or autoML, aims to eliminate the guesswork. The idea is to have algorithms take over the decisions that researchers currently have to make when designing models. Ultimately, these techniques could make machine learning more accessible.
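A toy version of that idea is sketched below: a random search proposes candidate hidden-layer configurations and keeps whichever scores best in cross-validation. The dataset, search space, and budget here are arbitrary assumptions; production autoML systems use far more sophisticated search and evaluation strategies.

```python
# Toy autoML sketch: let a search procedure, not a human, pick the architecture.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

best_score, best_layers = 0.0, None
for trial in range(10):
    # Randomly propose an architecture: one to three hidden layers, 16-128 units each.
    layers = tuple(random.choice([16, 32, 64, 128])
                   for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=300, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_layers = score, layers

print("best architecture found:", best_layers, "accuracy:", round(best_score, 3))
```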

The material of the future could make an imaginative concept of the past real.


Brief history of the space elevator

Like most time-honored revolutionary ideas for space exploration, the space elevator can be traced to the Russian/Soviet rocket scientist Konstantin Tsiolkovsky (1857−1935). Considered the top contender for the title of “Father of Rocketry” (the other two contenders being Hermann Oberth and Robert Goddard), Tsiolkovsky developed the “Rocket Equation” and the design from which most modern rockets are derived. In his more adventurous musings, he proposed how humanity could build rotating Pinwheel Stations in space and a space elevator.

Many say that human beings have destroyed our planet; because of this, some people are endeavoring to save it with the help of artificial intelligence. Famine, animal extinction, and war may all be preventable one day with the help of technology.

The Age of A.I. is an eight-part documentary series hosted by Robert Downey Jr. covering the ways artificial intelligence, machine learning, and neural networks will change the world.

0:00 Poached
8:32 Deploying Cameras
11:47 Avoiding Mass Extinction
23:04 Plant Based Food
26:16 Protecting From Nature
36:06 Preventing Calamity
41:41 DARPA

When users want to send data over the internet faster than the network can handle, congestion can occur—the same way traffic congestion snarls the morning commute into a big city.

Computers and devices that transmit data over the internet break the data down into smaller packets and use a special algorithm to decide how fast to send those packets. These congestion-control algorithms seek to discover and fully utilize the available network capacity while sharing it fairly with other users on the same network, and they try to minimize the delay caused by data waiting in queues in the network.
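One classic family of such algorithms follows an additive-increase/multiplicative-decrease (AIMD) rule, sketched in simplified form below: grow the sending window slowly while packets get through, and cut it sharply when a loss signals congestion. The parameters and loss events here are made up for illustration, and newer algorithms such as BBR, mentioned below, work quite differently, modeling available bandwidth and round-trip time instead.

```python
# Simplified additive-increase / multiplicative-decrease (AIMD) congestion window.
# (Illustrative only; real TCP stacks and algorithms like BBR are far more involved.)
def aimd_window(events, increase=1.0, decrease=0.5, start=10.0):
    """Grow the window by a fixed amount per acknowledged round; halve it on loss."""
    window = start
    history = []
    for loss in events:          # True = congestion signal (loss), False = ack
        window = window * decrease if loss else window + increase
        history.append(window)
    return history

# Example: five good rounds, one loss, five more good rounds.
print(aimd_window([False] * 5 + [True] + [False] * 5))
```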

Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while controlling delays. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.