
Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.
“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.
Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to address multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception and robust motion planning.
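The article doesn't describe the team's actual algorithms, but the optimization at the heart of box packing is the classic bin-packing problem. A standard baseline heuristic, first-fit decreasing, can be sketched as follows (function and variable names are illustrative, not from the study):

```python
def first_fit_decreasing(items, bin_capacity):
    """Greedy 1-D bin packing: place each item, largest first,
    into the first bin with enough free space."""
    free = []    # remaining capacity of each open bin
    packed = []  # parallel list: items placed in each bin
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                free[i] -= item
                packed[i].append(item)
                break
        else:
            # no existing bin fits this item; open a new one
            free.append(bin_capacity - item)
            packed.append([item])
    return packed

boxes = [4, 8, 1, 4, 2, 1]
print(first_fit_decreasing(boxes, 10))  # → [[8, 2], [4, 4, 1, 1]]
```

Real robotic packing is far harder (3-D geometry, reachability, stability), but this one-dimensional version shows why sorting before packing helps: big items claim space first, and small items fill the gaps.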
The field has narrowed in the race to protect sensitive electronic information from the threat of quantum computers, which one day could render many of our current encryption methods obsolete.
As the latest step in its program to develop effective defenses, the National Institute of Standards and Technology (NIST) has winnowed the group of potential encryption tools—known as cryptographic algorithms—down to a bracket of 26. These algorithms are the ones NIST mathematicians and computer scientists consider to be the strongest candidates submitted to its Post-Quantum Cryptography Standardization project, whose goal is to create a set of standards for protecting electronic information from attack by the computers of both tomorrow and today.
“These 26 algorithms are the ones we are considering for potential standardization, and for the next 12 months we are requesting that the cryptography community focus on analyzing their performance,” said NIST mathematician Dustin Moody. “We want to get better data on how they will perform in the real world.”
China isn’t the only country with a draconian “social credit score” system — there’s one quite a bit like it operating in the U.S. Except that it’s being run by American businesses, not the government.
There’s plenty of evidence that retailers have been using a technique called “surveillance scoring” for decades, in which an algorithm assigns each consumer a secret score that determines the price they’re offered — for the same goods and services.
But the practice might be illegal: a California nonprofit called Consumer Education Foundation (CEF) filed a petition yesterday asking the Federal Trade Commission (FTC) to look into the shady practice.
Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness as that’s the real driver behind intelligent decisions.
Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.
Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have are artificial serfs — computers with the ability to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, yet devoid of any real intelligence. What’s needed is artificial awareness.
It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to.
For as smart as artificial intelligence systems seem to get, they’re still easily fooled by hackers who launch so-called adversarial attacks — cyberattacks that use subtly altered inputs to trick algorithms into misclassifying them, sometimes to disastrous ends.
To bolster AI’s defenses against these dangerous hacks, scientists at the Australian research agency CSIRO say in a press release they’ve created a sort of AI “vaccine” that trains algorithms on weak adversaries so they’re better prepared for the real thing — not entirely unlike how vaccines expose our immune systems to inert viruses so they can fight off infections in the future.
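The press release doesn't give CSIRO's exact method, but the general idea — adversarial training — is well established: perturb each input a small amount in the direction that most increases the model's loss (an FGSM-style "weak adversary"), then train on the perturbed data too. A minimal sketch with a toy logistic-regression model (all parameters and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy linearly separable task

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.05  # eps controls how "weak" the adversary is

def predict(X):
    """Logistic-regression probability of class 1."""
    return 1 / (1 + np.exp(-(X @ w + b)))

for _ in range(200):
    p = predict(X)
    # Gradient of the logistic loss w.r.t. the *inputs*: (p - y) * w
    grad_x = (p - y)[:, None] * w
    # FGSM-style perturbation: nudge inputs toward misclassification
    X_adv = X + eps * np.sign(grad_x)
    # "Vaccinate": train on clean and adversarial examples together
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p_mix = predict(X_mix)
    w -= lr * (X_mix.T @ (p_mix - y_mix)) / len(y_mix)
    b -= lr * (p_mix - y_mix).mean()

acc = ((predict(X) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The key design choice is `eps`: too small and the model never sees meaningful attacks; too large and clean accuracy suffers — the same trade-off the "weak adversaries" framing alludes to.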
Researchers at the University of Chicago published a novel technique for improving the reliability of quantum computers by accessing higher energy levels than traditionally considered. Most prior work in quantum computation deals with “qubits,” the quantum analogue of binary bits that encode either zero or one. The new work instead leverages “qutrits,” quantum analogues of three-level trits capable of representing zero, one or two.
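The appeal of qutrits is easy to see in state-space terms: n qubits span 2**n basis states while n qutrits span 3**n, so fewer three-level systems cover the same space. A back-of-the-envelope illustration (the numbers below are not from the paper):

```python
import math

def levels_needed(n_states, base):
    """Minimum number of base-level quantum systems needed
    to encode at least n_states distinct basis states."""
    return math.ceil(math.log(n_states, base))

# Encoding one million distinct basis states:
print(levels_needed(1_000_000, 2))  # qubits needed  → 20
print(levels_needed(1_000_000, 3))  # qutrits needed → 13
```

The same space thus takes roughly log3(2) ≈ 63% as many qutrits as qubits — the catch, which the reliability work addresses, is that higher energy levels are typically noisier and harder to control.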
The UChicago group worked alongside researchers based at Duke University. Both groups are part of the EPiQC (Enabling Practical-scale Quantum Computation) collaboration, an NSF Expedition in Computing. EPiQC’s interdisciplinary research spans from algorithm and software development to architecture and hardware design, with the ultimate goal of more quickly realizing the enormous potential of quantum computing for scientific discovery and computing innovation.