
A neuromorphic computer that can simulate 8 million neurons is in the news. The term “neuromorphic” suggests a design that mimics the human brain. And neuromorphic computing? It is commonly described as the use of very-large-scale integration (VLSI) systems containing electronic analog circuits that imitate the neuro-biological architectures present in the nervous system.

This is where Intel steps in, and significantly so. The Loihi chip applies principles found in biological brains to computer architectures. The payoff for users is that the chip can process information up to 1,000 times faster and 10,000 times more efficiently than CPUs for specialized applications such as sparse coding, graph search and constraint-satisfaction problems.
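Loihi is a spiking neural-network chip, so its “neurons” integrate incoming signals over time and fire discrete spikes rather than computing a continuous output on every clock cycle. As a rough illustration of that principle (not of Loihi’s actual circuitry or parameter values), here is a minimal leaky integrate-and-fire neuron in Python; all constants are placeholders.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of spiking model
# that neuromorphic hardware implements in silicon. All constants are
# illustrative placeholders, not Loihi's actual parameters.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Integrate an input current over time and return a spike train."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Membrane potential leaks toward rest while integrating the input.
        v += (-(v - v_rest) + i_t) * (dt / tau)
        if v >= v_threshold:
            spikes.append(1)   # the neuron fires a spike...
            v = v_reset        # ...and resets its membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant driving current makes the neuron fire at a regular rate.
spike_train = simulate_lif([1.5] * 200)
print(sum(spike_train), "spikes in", len(spike_train), "time steps")
```

Because a neuron like this only does work when spikes arrive, large networks of them can be very sparse in their activity, which is a large part of where the efficiency claims for neuromorphic hardware come from.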

Intel’s news release on Monday was headlined “Intel’s Pohoiki Beach, a 64-Chip Neuromorphic System, Delivers Breakthrough Results in Research Tests.” Pohoiki Beach is the company’s latest neuromorphic system.

It appears that the physics of information holds the key to solving the Fermi Paradox: indications are that we most likely live in a “Syntellect Chrysalis” (our “second womb”) rather than in a “cosmic jungle.”

Within the next few decades, we’ll transcend our biology by leaving today’s organic Chrysalis behind: by leaving our second womb, our cradle, to speak in tropes.

This particular version of the “human universe” is what we “see” from within our dimensional cocoon. It is a construct of our minds and by no means represents objective reality “out there”; even our most advanced models, such as M-theory, are at best only approximations.

Peptides, among the fundamental building blocks of life, can be formed from the primitive precursors of amino acids under conditions similar to those expected on the primordial Earth, finds a new UCL study.

The findings, published in Nature, could be a missing piece of the puzzle of how life first formed.

“Peptides, which are chains of amino acids, are an absolutely essential element of all life on Earth. They form the fabric of proteins, which serve as catalysts for biological processes, but they themselves require enzymes to control their formation from amino acids,” explained the study’s lead author, Dr Matthew Powner (UCL Chemistry).

Despite their names, artificial intelligence technologies and their component systems, such as artificial neural networks, don’t have much to do with real brain science. I’m a professor of bioengineering and neurosciences interested in understanding how the brain works as a system – and how we can use that knowledge to design and engineer new machine learning models.
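To see how loose that resemblance is, here is a sketch of the kind of “neuron” used in artificial neural networks: a weighted sum of inputs passed through a nonlinearity, with none of the membrane dynamics or spike timing of its biological namesake. The inputs, weights and bias below are arbitrary placeholders.

```python
import math

# An "artificial neuron" as used in machine learning: a weighted sum of
# inputs pushed through a nonlinearity. The numbers below are arbitrary
# placeholders chosen only for illustration.
def artificial_neuron(inputs, weights, bias):
    pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-pre_activation))   # sigmoid output

output = artificial_neuron(inputs=[0.5, -1.2, 3.0],
                           weights=[0.8, 0.1, -0.4],
                           bias=0.2)
print(f"output: {output:.3f}")
```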

In recent decades, brain researchers have learned a huge amount about the physical connections in the brain and about how the nervous system routes information and processes it. But there is still a vast amount yet to be discovered.

At the same time, advances in algorithms, software and hardware have brought machine learning to previously unimagined levels of achievement. I and other researchers in the field, including a number of its leaders, have a growing sense that learning more about how the brain processes information could help programmers translate the concepts of thinking from the wet and squishy world of biology into all-new forms of machine learning in the digital world.