
We tend to think of AI as a monolithic entity, but it has actually developed along multiple branches. One of the main branches involves performing conventional calculations, but feeding the results into another layer of units that weigh inputs from multiple sources before performing their own calculations and passing those along. Another branch involves mimicking the behavior of biological neurons: many small units communicating in bursts of activity called spikes, each keeping track of its own history of past activity.
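
The two branches can be sketched in a few lines of code. This is a minimal illustration only; the function names, constants, and the leaky integrate-and-fire model used for the spiking unit are our choices, not anything taken from a specific chip or framework.

```python
import numpy as np

# Branch 1: a conventional "layered" unit -- weigh several inputs,
# apply a nonlinearity, and pass the result on to the next layer.
def dense_unit(inputs, weights, bias):
    return np.tanh(np.dot(weights, inputs) + bias)

# Branch 2: a spiking unit (leaky integrate-and-fire) -- it keeps a
# running membrane potential (its "history") and emits a spike only
# when that potential crosses a threshold.
def lif_step(potential, input_current, leak=0.9, threshold=1.0):
    potential = leak * potential + input_current
    spike = potential >= threshold
    if spike:
        potential = 0.0          # reset after firing
    return potential, spike

# Toy usage: one dense unit and one spiking unit driven over time.
x = np.array([0.2, 0.8, 0.5])
print("dense output:", dense_unit(x, np.array([0.4, 0.3, 0.2]), 0.1))

v = 0.0
for t, current in enumerate([0.3, 0.4, 0.5, 0.1]):
    v, fired = lif_step(v, current)
    print(f"t={t} potential={v:.2f} spike={fired}")
```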

Each of these, in turn, has different branches based on the structure of its layers and communication networks, the types of calculations performed, and so on. Rather than acting in a manner we would recognize as intelligent, many of these systems are very good at specialized problems, like pattern recognition or playing poker. And processors meant to accelerate this software can typically speed up only a subset of them.

That last division may have come to an end with the development of Tianjic by a large team of researchers primarily based in China. Tianjic is engineered so that its individual processing units can switch from spiking communications back to binary and perform a large range of calculations, in almost all cases faster and more efficiently than a GPU can. To demonstrate the chip’s abilities, the researchers threw together a self-driving bicycle that ran three different AI algorithms on a single chip simultaneously.

The research and development of neural networks is flourishing thanks to recent advancements in computational power, the discovery of new algorithms, and an increase in labelled data. Before the current explosion of activity in the space, the practical applications of neural networks were limited.

While much of the recent research has allowed for broad application, the heavy computational requirements of machine learning models still keep them from truly entering the mainstream. Now, emerging algorithms are on the cusp of pushing neural networks into more conventional applications through greatly increased efficiency.

Given that going viral on the Internet is often cyclical, it should come as no surprise that an app that made its debut in 2017 has once again surged in popularity. FaceApp applies various transformations to the image of any face, but the option that ages facial features has been especially popular. However, the fun has been accompanied by controversy: since biometric systems are replacing access passwords, is it wise to freely offer up our image and our personal data? The truth is that the face is ceasing to be as non-transferable as it once was, and in just a few years it could be easier to hack than a traditional password.

Our countenance is the most recognisable key to social relationships. We might have doubts when hearing a voice on the phone, but never when looking at the face of a familiar person. In the 1960s, a handful of pioneering researchers began training computers to recognise human faces, although it was not until the 1990s that this technology really began to take off. Facial recognition algorithms have improved to such an extent that since 1993 their error rate has been halved every two years. When it comes to recognising unfamiliar faces in laboratory experiments, today’s systems outperform human capabilities.
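
To get a sense of what that halving implies, here is a quick back-of-the-envelope calculation; the 2019 endpoint is an assumption based on the article's timeframe, not a figure from the source.

```python
# Halving the error rate every two years since 1993 compounds quickly.
years = 2019 - 1993          # assumed endpoint, for illustration only
halvings = years // 2
improvement = 2 ** halvings
print(f"{halvings} halvings -> error rate roughly 1/{improvement} of its 1993 level")
```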

Nowadays these systems are among the most widespread applications of Artificial Intelligence (AI). Every day, our laptops, smartphones and tablets greet us by name as they recognise our facial features, but at the same time, the uses of this technology have set off alarm bells over invasion of privacy. In China, the world leader in facial recognition systems, the pairing of this technology with surveillance cameras to identify even individual pedestrians has been viewed in the West as another step towards the Big Brother dystopia, the all-watching eye of the state that George Orwell portrayed in 1984.

Facebook has announced a breakthrough in its plan to create a device that allows people to type just by thinking.

It has funded a study that developed machine-learning algorithms capable of turning brain activity into speech.

It worked on epilepsy patients who had already had recording electrodes placed on their brains, ahead of surgery, to assess the origins of their seizures.

We study the condensation of closed string tachyons as a time-dependent process. In particular, we study tachyons whose wave functions are either space-filling or localized in a compact space, and whose masses are small in string units; our analysis is otherwise general and does not depend on any specific model. Using world-sheet methods, we calculate the equations of motion for the coupled tachyon-dilaton system, and show that the tachyon follows geodesic motion with respect to the Zamolodchikov metric, subject to a force proportional to its beta function and friction proportional to the time derivative of the dilaton.
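
Schematically, the equation of motion described above combines geodesic motion in the Zamolodchikov metric $G_{ij}$ with a dilaton friction term and a beta-function force. The coefficients $a$ and $b$ below are placeholders for convention-dependent constants not fixed here; this is a structural sketch of the sentence, not the paper's precise result.

$$
\ddot{T}^{i} \;+\; \Gamma^{i}_{jk}\,\dot{T}^{j}\dot{T}^{k} \;+\; a\,\dot{\Phi}\,\dot{T}^{i} \;=\; b\,\beta^{i}(T),
$$

where $\Gamma^{i}_{jk}$ are the Christoffel symbols of $G_{ij}$, $\Phi$ is the dilaton, and $\beta^{i}$ is the tachyon beta function.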


CTech – When chemistry Nobel laureate Michael Levitt met his wife two years ago, he didn’t know it would lead to a wonderful friendship with a young Israeli scientist. When Israeli scientist Shahar Barbash decided to found a startup with the aim of cutting down the time needed to develop new medicine, he didn’t know that a friend’s wedding would help him score a meeting with a man many want to meet but few do. But Levitt’s wife is an old friend of Barbash’s parents, and the rest, as they say, is history.

“One of the joys of being an old scientist is to encourage extraordinary young ones,” Levitt, an American-British-Israeli biophysicist and a professor at Stanford University since 1987, said in a recent interview with Calcalist. He might have met Barbash because his wife knew his family, but that alone would not be enough to make him go into business with someone, Levitt said. “I got on board because his vision excited me, even though I thought it would be very hard to realize.”

Abstract: The large, error-correcting quantum computers envisioned today could be decades away, yet experts are vigorously trying to come up with ways to use existing and near-term quantum processors to solve useful problems despite limitations due to errors or “noise.”

A key envisioned use is simulating molecular properties. In the long run, this could lead to advances in materials development and drug discovery, but not if noisy calculations confuse the results.

Now, a team of Virginia Tech chemistry and physics researchers has advanced quantum simulation by devising an algorithm that can more efficiently calculate the properties of molecules on a noisy quantum computer. Virginia Tech College of Science faculty members Ed Barnes, Sophia Economou, and Nick Mayhall recently published a paper in Nature Communications detailing the advance.
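
The article does not describe the algorithm itself, so as general background only, here is a toy sketch of the variational idea commonly used for molecular properties on noisy hardware: a short parameterised circuit prepares a trial state, the processor estimates its energy, and a classical optimiser updates the parameters. The Hamiltonian, ansatz, and numbers below are illustrative and are not the Virginia Tech team's method.

```python
import numpy as np

# Toy 2x2 Hamiltonian for a single qubit (illustrative numbers only).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def trial_state(theta):
    """Single-qubit Ry(theta) ansatz applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi|H|psi> for the trial state."""
    psi = trial_state(theta)
    return psi @ H @ psi

# Crude parameter sweep standing in for the classical optimiser.
thetas = np.linspace(0, 2 * np.pi, 200)
best = min(thetas, key=energy)
print(f"estimated ground-state energy: {energy(best):.4f}")
print(f"exact ground-state energy:     {np.linalg.eigvalsh(H)[0]:.4f}")
```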


Though some computer engineers claim to know what human consciousness is, many neuroscientists say that we’re nowhere close to understanding what it is — or its source.

In this video, bestselling author Douglas Rushkoff gives the “transhumanist myth” — the belief that A.I. will replace humans — a reality check. Is it hubristic to upload people’s minds to silicon chips, or re-create their consciousness with algorithms, when we still know so little about what it means to be human?

You can read more about Rushkoff’s perspective on this issue in his new book, Team Human.

Companies such as Microsoft and Google want to be central to the development of the thinking machine.


The decision by Microsoft to invest $1 billion in OpenAI, a company co-founded by Elon Musk, brings closer the time when machines threaten to replace humans in almost any task they perform today.

OpenAI, which was founded just four years ago, has pioneered a range of technologies that push the frontiers of massive data processing beyond the physical and computational limits that governed such developments for generations.

Now, with the investment from Microsoft, the pace of technological change is likely to accelerate rapidly. Today, Artificial Intelligence is at the level of what is known as ‘weak AI’, relying on humans to create the algorithms that crunch massive amounts of data to produce new and often predictive results. Artificial General Intelligence, or Strong AI, will herald a new era in which robots will essentially be able to think for themselves.

Researchers have designed a tile set of DNA molecules that can carry out robust reprogrammable computations to execute six-bit algorithms and perform a variety of simple tasks. The system, which works thanks to the self-assembly of DNA strands designed to fit together in different ways while executing the algorithm, is an important milestone in constructing a universal DNA-based computing device.

The new system makes use of DNA’s ability to be programmed through the arrangement of its molecules. Each strand of DNA consists of a backbone and four types of molecules known as nucleotide bases – adenine, thymine, cytosine, and guanine (A, T, C, and G) – that can be arranged in any order. This order represents information that can be used by biological cells or, as in this case, by artificially engineered DNA molecules. The A, T, C, and G have a natural tendency to pair up with their counterparts: A base pairs with T, and C pairs with G. And a sequence of bases pairs up with a complementary sequence: ATTAGCA pairs up with TGCTAAT (in the reverse orientation), for example.
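
As a small illustration of the pairing rule just described (the function name is ours, not from the paper), a strand binds the reverse complement of another strand because the two run in opposite directions:

```python
# Watson-Crick pairing: A <-> T, C <-> G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the strand that pairs with `strand` in the reverse orientation."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATTAGCA"))  # -> TGCTAAT, as in the example above
```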

The DNA tile.