

Artificial intelligence algorithms have had a meteoric impact on protein structure prediction, such as when DeepMind’s AlphaFold2 predicted the structures of 200 million proteins. Now, David Baker and his team of biochemists at the University of Washington have taken protein-folding AI a step further. In a Nature publication from February 22, they outlined how they used AI to design tailor-made, functional proteins that they could synthesize and produce in live cells, creating new opportunities for protein engineering. Ali Madani, founder and CEO of Profluent, a company that uses other AI technology to design proteins, says this study “went the distance” in protein design and remarks that we’re now witnessing “the burgeoning of a new field.”

Proteins are made up of different combinations of amino acids linked together in folded chains, producing a boundless variety of 3D shapes. Predicting a protein’s 3D structure from its sequence alone is an impossible task for the human mind, owing to the many factors that govern protein folding, such as the length of the amino acid chain, how the protein interacts with other molecules, and the sugars added to its surface. Instead, for decades scientists have determined protein structures using experimental techniques such as X-ray crystallography, which can resolve protein folds in atomic detail by diffracting X-rays through crystallized protein. But such methods are expensive, time-consuming, and depend on skillful execution. Still, scientists using these techniques have managed to resolve thousands of protein structures, creating a wealth of data that could then be used to train AI algorithms to determine the structures of other proteins. DeepMind famously demonstrated that machine learning could predict a protein’s structure from its amino acid sequence with the AlphaFold system and then improved its accuracy by training AlphaFold2 on 170,000 protein structures.

A NASA-led research team used satellite imagery and artificial intelligence methods to map billions of discrete tree crowns down to a 50-cm scale. The images encompassed a large swath of arid northern Africa, from the Atlantic to the Red Sea. Allometric equations based on previous tree sampling allowed the researchers to convert imagery into estimates of tree wood, foliage, root size, and carbon sequestration.
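The allometric step can be sketched in a few lines: a power-law equation converts a measured quantity (here, crown area) into biomass, and a carbon fraction converts biomass into stored carbon. The coefficients and the 47% carbon fraction below are illustrative placeholders, not the values fitted in the NASA study.

```python
# Hypothetical sketch of an allometric conversion from detected tree crowns
# to carbon estimates. Coefficients a and b are illustrative only.

def crown_area_to_biomass_kg(crown_area_m2, a=2.2, b=1.1):
    """Power-law allometry: biomass = a * crown_area ** b."""
    return a * crown_area_m2 ** b

def biomass_to_carbon_kg(biomass_kg, carbon_fraction=0.47):
    """A rough, commonly used default: ~47% of dry biomass is carbon."""
    return biomass_kg * carbon_fraction

# The key difference from extrapolation: sum over every detected crown.
crown_areas_m2 = [12.5, 8.0, 30.2]  # one entry per individually mapped tree
total_carbon_kg = sum(
    biomass_to_carbon_kg(crown_area_to_biomass_kg(area))
    for area in crown_areas_m2
)
```

Because every tree is counted individually, the total is a direct sum rather than a sample-based extrapolation.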

The new NASA estimation, published in the journal Nature, was surprisingly low. While a typical estimate of a region’s tree stocks might rely on counting trees in small sample areas and extrapolating the results upward, the technique NASA demonstrated counts only the trees that are actually there, down to the individual tree. Jules Bayala and Meine van Noordwijk published a News & Views article in the same journal commenting on the NASA team’s work.

The researchers initially expected that counting every scattered tree, including trees in areas that previous models often represented as zero, would push the estimate upward. Instead, that gain was outweighed by large overestimations elsewhere in the earlier assessments. In previous satellite-based attempts, cropland and ground vegetation skewed the optical images; where radar was used, topography, wetlands, and irrigated areas affected the radar backscatter, predicting higher stocks than the current NASA estimations.

Short and sweet. Everyone needs a daily dose of Sabine.


Is science close to explaining everything about our universe? Physicist Sabine Hossenfelder reacts.


ALGORITHMS TURN PHOTO SNAPSHOTS INTO 3D VIDEO OR AN IMMERSIVE SPACE. The technique has been termed “Neural Radiance Fields.” Now Google wants to turn Google Maps into a gigantic 3D space. Three videos below demonstrate the method: 1) a simple demonstration, 2) Google’s immersive maps, and 3) using this principle to make dark, grainy photographs clear and immersive.

This technique is different from “time of flight” cameras, which build a 3D snapshot from the time light takes to travel to and from objects. Combined with that technology, and with a constellation of microsatellites as small as cell phones, one could eventually envision a new version of “Google Earth” with live, continual imaging of the whole planet.

2) https://www.youtube.com/watch?v=EUP5Fry24ao.

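The “time of flight” principle mentioned above reduces to one equation: a light pulse travels to the object and back, so the distance is the speed of light times the round-trip time, divided by two. A minimal sketch:

```python
# Minimal sketch of time-of-flight ranging: distance = c * t_round_trip / 2.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    """Convert a measured round-trip time into a one-way distance."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 20 nanoseconds indicates an object ~3 m away.
d = tof_distance_m(20e-9)
```

A ToF camera performs this computation per pixel, which is what yields a depth map in a single snapshot.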

Summary: Examining the cognitive abilities of the AI language model GPT-3, researchers found the algorithm can keep up with and even compete against humans in some areas but falls behind in others, owing to a lack of real-world experience and interaction.

Source: Max Planck Institute.

Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool.

“A machine… of intelligent design.”


A cybersecurity technique that shuffles network addresses like a blackjack dealer shuffles playing cards could effectively befuddle hackers gambling for control of a military jet, commercial airliner or spacecraft, according to new research. However, the research also shows these defenses must be designed to counter increasingly sophisticated algorithms used to break them.

Many aircraft, spacecraft and weapons systems have an onboard computer network known as military standard 1553, commonly referred to as MIL-STD-1553, or even just 1553. The network is a tried-and-true protocol for letting systems like radar, flight controls and the heads-up display talk to each other.
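The shuffling idea described above is a form of “moving target defense”: a 1553 bus supports 32 remote terminal addresses, and if legitimate endpoints derive the same pseudorandom remapping from a shared secret at each time epoch, they stay in sync while an attacker’s learned address map goes stale. The sketch below is an illustration of that concept, not Sandia’s implementation; the key, epoch scheme, and seed derivation are all assumptions.

```python
# Hypothetical moving-target-defense sketch for a 1553-style bus:
# deterministically shuffle the 32 remote terminal addresses per epoch.

import random

RT_ADDRESSES = list(range(32))  # MIL-STD-1553 allows 32 remote terminal addresses

def shuffled_address_map(shared_key: int, epoch: int) -> dict:
    """Both endpoints derive the same permutation from (key, epoch)."""
    rng = random.Random(shared_key * 1_000_003 + epoch)  # illustrative seed mix
    shuffled = RT_ADDRESSES[:]
    rng.shuffle(shuffled)
    return dict(zip(RT_ADDRESSES, shuffled))

# Two in-sync parties compute identical maps for the current epoch...
map_a = shuffled_address_map(shared_key=0xC0FFEE, epoch=7)
map_b = shuffled_address_map(shared_key=0xC0FFEE, epoch=7)
# ...while the next epoch yields a fresh mapping, invalidating stale knowledge.
map_next = shuffled_address_map(shared_key=0xC0FFEE, epoch=8)
```

The design point is that the shuffle is keyed and deterministic: no remapping messages cross the bus, so an eavesdropper without the key cannot predict where a terminal will "move" next.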

Securing these networks against attack is a national security imperative, said Chris Jenkins, a Sandia cybersecurity scientist. If a hacker were to take over 1553 midflight, he said, the pilot could lose control of critical aircraft systems, and the impact could be devastating.