
Consciousness remains scientifically elusive because it constitutes layers upon layers of non-material emergence. Reverse-engineering our thinking should be done in terms of networks, modules, algorithms, and second-order emergence: meta-algorithms, or groups of modules. Neuronal circuits correlate with “immaterial” cognitive modules, and these cognitive algorithms, when activated, produce meta-algorithmic conscious awareness and phenomenal experience, in all at least two layers of emergence on top of “physical” neurons. Furthermore, consciousness represents certain transcendent aspects of projective ontology, according to the now widely accepted Holographic Principle.

#CyberneticTheoryofMind


There’s no shortage of workable theories of consciousness and its origins, each with their own merits and perspectives. We discuss the most relevant of them in the book in line with my own Cybernetic Theory of Mind that I’m currently developing. Interestingly, these leading theories, if metaphysically extended, in large part lend support to Cyberneticism and Digital Pantheism which may come into scientific vogue with the future cyberhumanity.

According to the Interface Theory of Perception developed by Donald Hoffman and the Biocentric theory of consciousness developed by Robert Lanza, any universe is essentially non-existent without a conscious observer. In both theories, conscious minds are required as primary building blocks for any universe to arise from the probabilistic domain into existence. But biological minds reveal just a snippet of the space of possible minds. Building on the tenets of Biocentrism, Cyberneticism goes further and includes all other possible conscious observers, such as artificially intelligent self-aware entities. Perhaps the extended theory could be dubbed ‘Noocentrism’.

Existence boils down to experience. No matter what ontological level a conscious entity finds herself at, she will be smack in the middle, between her transcendental realm and lower levels of organization. This is why I prefer the terms ‘Experiential Realism’ and ‘Pantheism’ to ‘Panentheism’, which some have suggested in regard to my philosophy.

Quantum simulators are a strange breed of systems for purposes that might seem a bit nebulous from the outset. These are often HPC clusters with fast interconnects and powerful server processors (although not usually equipped with accelerators) that run a literal simulation of how various quantum circuits function for design and testing of quantum hardware and algorithms. Quantum simulators do more than just test. They can also be used to emulate quantum problem solving and serve as a novel approach to tackling problems without all the quantum hardware complexity.
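The core of such a simulator can be sketched classically: a register of n qubits is a complex vector of length 2^n, and gates are unitary matrices applied to it. The following minimal statevector sketch (a toy illustration of the technique, not any vendor's actual simulator) prepares a two-qubit Bell state:

```python
import numpy as np

# Minimal statevector simulation: an n-qubit state is a complex vector
# of length 2**n; gates act on it as unitary matrices.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT, control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on the first qubit
state = CNOT @ state                            # entangle the pair

probs = np.abs(state) ** 2                      # Born-rule measurement probabilities
print(probs)                                    # ~[0.5, 0, 0, 0.5]: outcomes 00 or 11
```

The exponential size of the statevector (2^n amplitudes) is exactly why production simulators lean on big-memory HPC nodes and fast interconnects: each added qubit doubles the memory footprint.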

Despite the various uses, there’s only so much commercial demand for quantum simulators. Companies like IBM run their own internally, and for others, Atos/Bull has built simulators based on its big-memory Sequana systems, but these are, as one might imagine, niche machines for special purposes. Nonetheless, Nvidia sees enough opportunity in this arena to make an announcement at its GTC event about the performance of quantum simulators using the DGX A100 and its own custom-cooked quantum development software stack, called cuQuantum.

After all, it is probably important for Nvidia to have some kind of stake in quantum before (and if) it ever really takes off, especially in large-scale and scientific computing. What better way to get an insider view than to work with quantum hardware and software developers who are designing better codes and qubits via a benchmark and testing environment?

As content moderation continues to be a critical aspect of how social media platforms work — one that they may be pressured to get right, or at least do better in tackling — a startup that has built a set of data and image models to help with that, along with any other tasks that require automatically detecting objects or text, is announcing a big round of funding.

Hive has built a training data trove based on crowdsourced contributions from some 2 million people globally. That data powers a set of APIs that can automatically identify images of objects, words, and phrases, a process used not just in content moderation platforms but also in building algorithms for autonomous systems, back-office data processing, and more. The startup has now raised $85 million in a Series D round of funding that it has confirmed values it at $2 billion.

“At the heart of what we’re doing is building AI models that can help automate work that used to be manual,” said Kevin Guo, Hive’s co-founder and CEO. “We’ve heard about RPA and other workflow automation, and that is important too but what that has also established is that there are certain things that humans should not have to do that is very structural, but those systems can’t actually address a lot of other work that is unstructured.” Hive’s models help bring structure to that other work, and Guo claims they provide “near human level accuracy.”

AI systems can lead to race or gender discrimination.


The US Federal Trade Commission has warned companies against using biased artificial intelligence, saying they may break consumer protection laws. A new blog post notes that AI tools can reflect “troubling” racial and gender biases. If those tools are applied in areas like housing or employment, falsely advertised as unbiased, or trained on data that is gathered deceptively, the agency says it could intervene.

“In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver,” writes FTC attorney Elisa Jillson — particularly when promising decisions that don’t reflect racial or gender bias. “The result may be deception, discrimination — and an FTC law enforcement action.”

As Protocol points out, FTC chair Rebecca Slaughter recently called algorithm-based bias “an economic justice issue.” Slaughter and Jillson both mention that companies could be prosecuted under the Equal Credit Opportunity Act or the Fair Credit Reporting Act for biased and unfair AI-powered decisions, and unfair and deceptive practices could also fall under Section 5 of the FTC Act.

The more data collected, the better the results.


Understanding the genetics of complex diseases, especially those related to the genetic differences among ethnic groups, is essentially a big data problem. And researchers need more data.

1,000,000 genomes

To address the need for more data, the National Institutes of Health has started a program called All of Us. The project aims to collect genetic information, medical records, and health habits from surveys and wearables of more than a million people in the U.S. over the course of 10 years. It also has a goal of gathering more data from underrepresented minority groups to facilitate the study of health disparities. The All of Us project opened to public enrollment in 2018, and more than 270,000 people have contributed samples since. The project is continuing to recruit participants from all 50 states. Many academic laboratories and private companies are participating in this effort.

Developing Next Generation Artificial Intelligence To Serve Humanity — Dr. Patrick Bangert, Vice President of AI, Samsung SDS.


Dr. Patrick D. Bangert is Vice President of AI and heads the AI Engineering and AI Sciences teams at Samsung SDS. Samsung SDS, a subsidiary of the Samsung Group, provides information technology (IT) services and is active in research and development of emerging IT technologies such as artificial intelligence (AI), blockchain, the Internet of Things (IoT), and engineering outsourcing.

Dr. Bangert is responsible for the Brightics AI Accelerator, a distributed ML training and automated ML product, and for X.insights, a data center intelligence platform.

Among his other responsibilities, Dr. Bangert acts as a visionary for the future of AI at Samsung.

Before joining Samsung, Dr. Bangert spent 15 years as CEO at Algorithmica Technologies, a machine learning software company serving the chemicals and oil and gas industries. Prior to that, he was assistant professor of applied mathematics at Jacobs University in Germany, as well as a researcher at Los Alamos National Laboratory and NASA’s Jet Propulsion Laboratory.

Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that’s driving the push to automate some of the easier tasks that take up their time.

Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.

“Unless you’re really careful, a hacker can subtly manipulate inputs to these models to make them predict anything,” says Shashank Srikant, a graduate student in MIT’s Department of Electrical Engineering and Computer Science. “We’re trying to study and prevent that.”
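The kind of manipulation Srikant describes can be illustrated, in a deliberately simplified form, with a semantics-preserving perturbation: renaming identifiers leaves the program's behavior untouched while changing the token sequence a code model consumes, which is the surface such attacks exploit. The `total` function and the rename choices below are illustrative, not the actual attack from the MIT research:

```python
import ast

src = (
    "def total(xs):\n"
    "    acc = 0\n"
    "    for x in xs:\n"
    "        acc += x\n"
    "    return acc\n"
)

# Semantics-preserving perturbation: rename local identifiers.
# The function computes the same result, but a token-level code model
# now sees a different input sequence.
perturbed = src.replace("acc", "tmp_0").replace("xs", "arr")

ns1, ns2 = {}, {}
exec(compile(ast.parse(src), "<src>", "exec"), ns1)
exec(compile(ast.parse(perturbed), "<perturbed>", "exec"), ns2)

assert ns1["total"]([1, 2, 3]) == ns2["total"]([1, 2, 3]) == 6  # behavior unchanged
assert src != perturbed                                          # model input changed
```

Defenses studied in this line of work typically involve training on such perturbed variants (adversarial training) so the model's predictions become invariant to changes that do not alter program semantics.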