
Whether we realize it or not, cryptography is the fundamental building block on which our digital lives are based. Without sufficient cryptography and the inherent trust it engenders, every aspect of the digital human condition we know and rely on today would never have come to fruition, much less continue to evolve at its current staggering pace. The internet, digital signatures, critical infrastructure, financial systems, and even the remote work that helped the world limp along during the recent global pandemic all rely on one critical assumption: that the encryption employed today is unbreakable by even the most powerful computers in existence. But what if that assumption were not only challenged but realistically compromised?

This is exactly what happened when Peter Shor proposed his algorithm in 1994, dubbed Shor’s Algorithm. The key to unlocking the encryption on which today’s digital security relies lies in finding the prime factors of large integers. While factoring is relatively simple for small integers with only a few digits, factoring integers with thousands of digits is another matter altogether. Shor proposed a polynomial-time quantum algorithm to solve this factoring problem. I’ll leave it to more qualified mathematicians to explain the theory behind the algorithm, but suffice it to say that, when coupled with a quantum computer, Shor’s Algorithm reduces the time it would take to factor these larger integers by many orders of magnitude.

Before Shor’s Algorithm, for example, the most powerful classical computer would take millions of years to find the prime factors of a 2048-bit composite integer. Even a quantum computer running without Shor’s Algorithm would take such an inordinate amount of time to accomplish the task as to render the approach useless to bad actors. With Shor’s Algorithm, the same factoring can potentially be accomplished in a matter of hours.
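To make the structure concrete, here is a minimal, purely classical sketch of how Shor’s algorithm reduces factoring to order finding. The brute-force find_order step below is the exponentially slow part that a quantum computer replaces with a polynomial-time routine; everything around it is ordinary arithmetic. This toy only works for small odd composites, and it is an illustration of the reduction, not a quantum implementation.

```python
# Classical scaffolding of Shor's algorithm: reduce factoring n to
# finding the multiplicative order r of a random base a modulo n.
from math import gcd
from random import randrange

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force; the quantum step)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Return a nontrivial factor of an odd composite n."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d              # lucky guess: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 1:
            continue              # need an even order; try another base
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue              # trivial square root of 1; try again
        return gcd(y - 1, n)      # a nontrivial factor of n

print(shor_factor(15))            # prints 3 or 5
```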

A new technological development at Tel Aviv University has made it possible for a robot to smell using a biological sensor. The sensor sends electrical signals in response to a nearby odor, which the robot can detect and interpret.

In this new study, the researchers successfully connected the biological sensor to an electronic system and, using a machine learning algorithm, were able to identify odors with a level of sensitivity 10,000 times higher than that of a commonly used electronic device. The researchers believe that in light of the success of their research, this technology may also be used in the future to identify explosives, drugs, diseases, and more.

The study was led by doctoral student Neta Shvil of Tel Aviv University’s Sagol School of Neuroscience, Dr. Ben Maoz of the Fleischman Faculty of Engineering and the Sagol School of Neuroscience, and Prof. Yossi Yovel and Prof. Amir Ayali of the School of Zoology and the Sagol School of Neuroscience. The results of the study were published in Biosensors and Bioelectronics.

Large language models (LLMs) are on fire, capturing public attention with their ability to provide seemingly impressive completions to user prompts (NYT coverage). They are a delicate combination of a radically simple algorithm with massive amounts of data and computing power. They are trained by playing a guess-the-next-word game with themselves over and over again. Each time, the model looks at a partial sentence and guesses the following word. If the guess is correct, the model updates its parameters to reinforce its confidence; otherwise, it learns from the error and makes a better guess next time.
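As a toy illustration of this guess-the-next-word loop (not how a real LLM is implemented; actual models use deep neural networks trained by gradient descent on vast corpora), the sketch below keeps a table of scores, guesses the next word from the current one, and nudges the scores up or down depending on whether the guess was right:

```python
# Toy next-word "training": guess the following word, then adjust
# the model's scores toward the word that actually appeared.
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate".split()
scores = defaultdict(lambda: defaultdict(float))  # prev word -> next-word scores

for epoch in range(5):
    for prev, actual in zip(corpus, corpus[1:]):
        table = scores[prev]
        guess = max(table, key=table.get) if table else None
        table[actual] += 1.0          # reinforce the word that really followed
        if guess is not None and guess != actual:
            table[guess] -= 0.5       # lower confidence in the wrong guess

# Greedy generation from the learned table.
word, out = "the", ["the"]
for _ in range(4):
    table = scores[word]
    if not table:
        break                         # no successor ever observed
    word = max(table, key=table.get)
    out.append(word)
print(" ".join(out))
```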

While the underlying training algorithm remains roughly the same, the recent increase in model and data size has brought about qualitatively new behaviors, such as writing basic code or solving logic puzzles.

How do these models achieve this kind of performance? Do they merely memorize training data and recite it back, or are they picking up the rules of English grammar and the syntax of the C language? Are they building something like an internal world model: an understandable model of the process producing the sequences?

Luwu Dynamics products are quadruped robot dogs with 12 degrees of freedom, designed to help teenagers learn artificial-intelligence programming. They are capable of omnidirectional movement, six-dimensional attitude control, attitude stabilization, and a variety of motion gaits. Internally, they are equipped with a 9-axis IMU, joint position sensors, and current sensors that feed back the robot’s attitude, joint angles, and torque for the internal algorithms and for secondary development. They can run offline AI functions such as face recognition, image classification, gesture recognition, speech recognition, audio analysis, and target tracking, and they support cross-platform graphical and Python programming.


JERUSALEM, Jan 19 (Reuters) — Israel’s chief rabbi has given a kosher stamp of approval this week to a company looking to sell steak grown from cow cells — while effectively taking the animal itself out of the equation.

Cultivated meat, grown from animal cells in a lab or manufacturing plant, has been getting a lot of attention as a way to sidestep the environmental toll of the meat industry and address concerns over animal welfare.

This method, however, has raised questions over religious restrictions, like kashrut in Judaism or Islam’s halal.

Why the recent surge in jaw-dropping announcements? Why do neutral atoms seem to be leapfrogging other qubit modalities? Keep reading to find out.

The table below highlights the companies working to build quantum computers using neutral atoms as qubits:

And as an added feature, I am writing this post to be “entangled” with the posts of Brian Siegelwax, a respected colleague and quantum algorithm designer. My focus will be on the hardware and corporate details of the companies involved, while Brian’s focus will be on actual implementation: what it is like to program on their platforms. Unfortunately, most of the systems created by the companies noted in this post are not yet available (other than QuEra’s), so I will update this post, along with links to Brian’s companion articles, as they become available.

Computers and information technologies were once hailed as a revolution in education. Their benefits are undeniable. They can provide students with far more information than a mere textbook. They can make educational resources more flexible, tailored to individual needs, and they can render interactions between students, parents, and teachers fast and convenient. And what would schools have done during the pandemic lockdowns without video conferencing?

The advent of AI chatbots and large language models such as OpenAI’s ChatGPT, launched last November, creates even more new opportunities. They can give students practice questions, answers, and feedback, and assess their work, lightening the load on teachers. Their interactive nature is more motivating to students than the imprecise and often confusing information dumps elicited by Google searches, and they can address specific questions.

The algorithm has no sense that “love” and “embrace” are semantically related.

Artificial neural networks that are inspired by natural nerve circuits in the human body give primates faster, more accurate control of brain-controlled prosthetic hands and fingers, researchers at the University of Michigan have shown. The finding could lead to more natural control over advanced prostheses for those dealing with the loss of a limb or paralysis.

The team of engineers and doctors found that a feed-forward neural network improved peak finger velocity by 45% during control of robotic fingers when compared to traditional algorithms not using neural networks. This overturned an assumption that more complex neural networks, like those used in other fields of machine learning, would be needed to achieve this level of performance improvement.

“This feed-forward network represents an older, simpler architecture—with information moving only in one direction, from input to output,” said Cindy Chestek, Ph.D., an associate professor of biomedical engineering at U-M and corresponding author of the paper in Nature Communications.
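To illustrate what “feed-forward” means here, the sketch below trains a one-hidden-layer network in which information flows strictly from input to output, with no recurrence. It is a generic example on synthetic data, not the U-M team’s actual decoder; the channel count, hidden size, and velocity targets are hypothetical stand-ins.

```python
# Minimal feed-forward regression: hypothetical firing rates -> finger velocities.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 96))             # 96 hypothetical recording channels
Y = np.tanh(X @ rng.normal(size=(96, 2)))  # 2 hypothetical finger velocities

W1 = rng.normal(scale=0.1, size=(96, 32))  # input -> hidden weights
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2))   # hidden -> output weights
b2 = np.zeros(2)
lr = 0.01

for step in range(2000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    pred = H @ W2 + b2                     # predicted velocities
    err = pred - Y
    # Backpropagate the mean-squared error through the two layers.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err**2).mean()))
```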

Making predictions is never easy, but it is widely agreed that cryptography will be altered by the advent of quantum computers.

Thirteen, 53, and 433. That’s the size of today’s quantum computers, measured in qubits.



In fact, the problems used for cryptography are so complex for our present algorithms and computers that the information exchange remains secure for all practical purposes: solving the problem and then hacking the protocol would take a ridiculous number of years. The most paradigmatic example of this approach is the RSA protocol (named for its inventors Ron Rivest, Adi Shamir, and Leonard Adleman), which today secures our information transmissions.
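To see the idea at toy scale, here is a round-trip through textbook RSA with deliberately tiny primes (real deployments use 2048-bit moduli and padding schemes omitted here). Anyone who could factor n = p*q would recover phi and hence the private key, which is exactly the capability Shor’s algorithm threatens to provide.

```python
# Toy RSA: key generation, encryption, decryption (illustration only).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120, computable only if you can factor n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: 2753 (Python 3.8+)

message = 65
cipher = pow(message, e, n)    # encrypt with the public key (e, n)
plain = pow(cipher, d, n)      # decrypt with the private key (d, n)
assert plain == message
print(cipher, plain)           # 2790 65
```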


Classical machine learning (ML) algorithms have proven to be powerful tools for a wide range of tasks, including image and speech recognition, natural language processing (NLP) and predictive modeling. However, classical algorithms are limited by the constraints of classical computing and can struggle to process large and complex datasets or to achieve high levels of accuracy and precision.

Enter quantum machine learning (QML).