
DALL-E 2 transformed the world of art in 2022.

The original DALL-E system was introduced in 2021; its successor, DALL-E 2, launched this year.



DALL-E and DALL-E 2 are machine-learning models created by OpenAI to produce images from natural-language descriptions. These text descriptions are known as prompts. The system can generate realistic images from nothing more than a description of the scene: a neural network turns short phrases provided by the user into accurate pictures, having “learned” how language maps to imagery from the image–text data supplied by its developers and users.
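As a concrete illustration of how a prompt maps to a generated image, here is a minimal sketch against OpenAI's image-generation endpoint. It assumes the pre-1.0 `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the prompt text is purely illustrative.

```python
import os
import openai  # assumes the pre-1.0 openai package

openai.api_key = os.environ["OPENAI_API_KEY"]

# A text prompt is the only input the model needs to render a scene.
prompt = "an astronaut riding a horse in a photorealistic style"

# Request a single 1024x1024 image from the image-generation endpoint.
response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")

# The API returns a URL pointing to the generated image.
print(response["data"][0]["url"])
```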

Researchers have developed a new all-optical method for driving multiple highly dense nanolaser arrays. The approach could enable chip-based optical communication links that process and move data faster than today’s electronic-based devices.

“The development of optical interconnects equipped with high-density nanolasers would improve information processing in the data centers that move information across the internet,” said research team leader Myung-Ki Kim from Korea University.

“This could allow streaming of ultra-high-definition movies, enable larger-scale interactive online encounters and games, accelerate the expansion of the Internet of Things and provide the fast connectivity needed for big data analytics.”

After six decades we have finally reached controlled fusion “ignition.” Here is how it works and what it means (and doesn’t mean):

At Lawrence Livermore National Laboratory (LLNL), the National Ignition Facility (NIF) starts with the Injection Laser System (ILS): a ytterbium-doped optical fiber laser (the Master Oscillator) produces a single, very low-power beam of 1,053-nanometer infrared light. This single beam is split among 48 Pre-Amplifier Modules (PAMs), each of which creates four beams (192 in total). Each PAM carries out a two-stage amplification process powered by xenon flash lamps.
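As a quick check on the beamline fan-out described above (a sketch only; stage gains are not given in the text), the arithmetic works out like this:

```python
# Beamline fan-out at NIF as described above: one master-oscillator beam
# feeds 48 Pre-Amplifier Modules, each producing 4 beams.
master_beams = 1
pams = 48
beams_per_pam = 4

total_beams = master_beams * pams * beams_per_pam
print(total_beams)  # 192, matching the 192-beam figure quoted above
```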


Self-coding and self-updating AI algorithms appear to be on the horizon. There is talk of Pitchfork AI, a top-secret Google Labs project that can independently write, refactor, and reuse both its own and other people’s code.

This type of AI has been discussed for a long time. DeepMind touched on it at the beginning of the year with AlphaCode, which, according to the company, writes programs at a competitive level, roughly on par with a mid-level developer. Since February, however, there has been little further news.

Deep-learning models have proven to be highly valuable tools for making predictions and solving real-world tasks that involve analyzing data. Despite their advantages, before they are deployed in real software and devices such as cell phones, these models require extensive training in physical data centers, which is both time- and energy-consuming.

Researchers at Texas A&M University, Rain Neuromorphics and Sandia National Laboratories have recently devised a new system for training deep-learning models more efficiently and at a larger scale. This system, introduced in a paper published in Nature Electronics, relies on new training algorithms and memristor crossbar arrays, hardware that can carry out many operations at once.
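To illustrate why crossbars are attractive for this workload, here is a minimal NumPy sketch of the analog matrix-vector multiplication a memristor crossbar performs in a single step: voltages applied to the rows combine with the device conductances so that each column current is a weighted sum (Ohm’s law plus Kirchhoff’s current law). The array sizes and values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductance of each memristor in a 4x3 crossbar (siemens); in hardware
# these values encode the weights of a neural-network layer.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Voltages applied to the 4 row lines (the layer's input activations).
V = rng.uniform(0.0, 0.2, size=4)

# Each column current is the sum over rows of V_i * G_ij -- the whole
# matrix-vector product happens in one analog step inside the crossbar.
I = V @ G

print("column currents (A):", I)
```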

“Most people associate AI with health monitoring in smart watches, face recognition in smart phones, etc., but most of AI, in terms of energy spent, entails the training of AI models to perform these tasks,” Suhas Kumar, the senior author of the study, told TechXplore.

Search Engine Optimization (SEO) is the process of optimizing the on-page and off-page factors that affect how high a web page ranks for a specific search term. It is a multi-faceted process that includes optimizing page-loading speed, developing a link-building strategy, and learning how to reverse engineer Google’s AI using computational thinking.

Computational thinking is an advanced type of analysis and problem-solving technique that computer programmers use when writing code and algorithms. Computational thinkers will seek the ground truth by breaking down a problem and analyzing it using first principles thinking.

Since Google does not release their secret sauce to anyone, we will rely on computational thinking. We will walk through some pivotal moments in Google’s history that shaped the algorithms that are used, and we will learn why this matters.

Keith Downing is a professor of Computer Science at the Norwegian University of Science and Technology, specializing in Artificial Intelligence and Artificial Life. He has a particular interest in evolutionary algorithms, whose applications range from the development of the Mars rover antenna and patented circuits to early driverless cars and even art. For computer scientists to learn from nature, he believes there needs to be a shift in our traditional ways of thinking.


Tournament selection, roulette selection, mutation, crossover: all processes used in genetic algorithms. Dr Alex Turner explains them using the Knapsack Problem.
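As a hedged sketch of how some of those operators fit together, here is a minimal genetic algorithm for a toy knapsack instance (tournament selection only; roulette selection is omitted). The item weights, values, population size, and rates are illustrative choices, not taken from the video.

```python
import random

random.seed(42)

# Toy knapsack instance: (weight, value) pairs and a capacity (illustrative numbers).
ITEMS = [(12, 4), (2, 2), (1, 2), (1, 1), (4, 10), (7, 7), (5, 3), (3, 5)]
CAPACITY = 15

def fitness(genome):
    """Total value of the selected items, or 0 if the weight limit is exceeded."""
    weight = sum(w for bit, (w, _) in zip(genome, ITEMS) if bit)
    value = sum(v for bit, (_, v) in zip(genome, ITEMS) if bit)
    return value if weight <= CAPACITY else 0

def tournament_select(population, k=3):
    """Pick k random individuals and return the fittest one."""
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    """Single-point crossover producing one child."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Random initial population of bit strings (1 = item included in the knapsack).
population = [[random.randint(0, 1) for _ in ITEMS] for _ in range(30)]

for generation in range(50):
    population = [
        mutate(crossover(tournament_select(population), tournament_select(population)))
        for _ in range(len(population))
    ]

best = max(population, key=fitness)
print("best genome:", best, "value:", fitness(best))
```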


In the cons column, quantum computers are hard to use, require a very controlled setup to operate, and have to contend with “decoherence,” the loss of their quantum state, which corrupts results. They’re also rare, expensive, and, for most tasks, far less efficient than a traditional computer.

Still, a lot of these issues can be offset by combining a quantum computer with a traditional computer, just as VTT has done. Researchers can create a hybrid algorithm that has LUMI, the traditional supercomputer, handle the parts it does best while handing off anything that could benefit from quantum computing to HELMI. LUMI can then integrate the results of HELMI’s quantum calculations, perform any additional calculations necessary or even send more calculations to HELMI, and return the complete results to the researchers.
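As a purely illustrative sketch of that hand-off pattern (not VTT’s actual software stack), the Python below uses hypothetical `run_on_lumi` and `run_on_helmi` helpers to show the classical supercomputer dispatching a sub-problem to the quantum machine and folding the result back into its own computation.

```python
# Hypothetical hybrid workflow sketch; the helper names and data shapes below
# are illustrative placeholders, not part of any real LUMI/HELMI API.

def run_on_lumi(stage, data):
    """Stand-in for a classical computation stage on the LUMI supercomputer."""
    if stage == "preprocess":
        # Pretend the classical side reduced the problem to a small circuit.
        return {"circuit": f"circuit_for({data})"}
    # Postprocessing: combine classical and quantum partial results.
    return {"answer": data}

def run_on_helmi(circuit):
    """Stand-in for submitting a quantum circuit to the HELMI quantum computer."""
    return {"measurements": [0, 1, 1, 0], "circuit": circuit}

def hybrid_algorithm(problem):
    # 1. LUMI handles the classical parts it does best.
    prepared = run_on_lumi("preprocess", problem)
    # 2. The piece that benefits from quantum hardware is handed off to HELMI.
    quantum_result = run_on_helmi(prepared["circuit"])
    # 3. LUMI integrates HELMI's result (and could send further circuits)
    #    before returning the complete answer to the researchers.
    return run_on_lumi("postprocess", {"classical": prepared, "quantum": quantum_result})

print(hybrid_algorithm("toy optimization problem"))
```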

Finland is now one of the few nations in the world with both a quantum computer and a supercomputer, and LUMI is the most powerful quantum-enabled supercomputer. While quantum computers are still a long way from being broadly commercially viable, these kinds of integrated research programs are likely to accelerate progress. VTT is currently developing a 20-qubit quantum computer, with a 50-qubit upgrade planned for 2024.

A large universal quantum computer is still an engineering dream, but machines designed to leverage quantum effects to solve specific classes of problems, such as D-Wave’s computers, are alive and well. Yet an unlikely rival could challenge these specialized machines: computers built from purposely noisy parts.

This week at the IEEE International Electron Device Meeting (IEDM 2022), engineers unveiled several advances that bring a large-scale probabilistic computer closer to reality than ever before.

Quantum computers are unrivaled for any algorithm that relies on quantum’s complex amplitudes. “But for problems where the numbers are positive, sometimes called stochastic problems, probabilistic computing could be quite competitive,” says Supriyo Datta, professor of electrical and computer engineering at Purdue University and one of the pioneers of probabilistic computing.
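To make “probabilistic computing” concrete, here is a small NumPy sketch of a network of p-bits, the noisy building block Datta’s group has described: each p-bit outputs a random ±1 whose bias is set by the weighted inputs it receives, so repeated updates draw samples from a distribution shaped by the couplings. The couplings, biases, and network size are illustrative, not taken from the IEDM papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric couplings between three p-bits and per-bit biases (illustrative values).
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.8],
              [-0.5, 0.8,  0.0]])
h = np.array([0.2, -0.1, 0.0])

# Start from a random +/-1 state.
m = rng.choice([-1, 1], size=3)

samples = []
for _ in range(5000):
    for i in range(3):
        # Input to p-bit i from its neighbours plus its bias.
        I_i = J[i] @ m + h[i]
        # A p-bit fires +1 with a probability that grows with its input:
        # m_i = sign(tanh(I_i) + uniform(-1, 1)).
        m[i] = 1 if np.tanh(I_i) + rng.uniform(-1, 1) > 0 else -1
    samples.append(m.copy())

# The empirical distribution over states approximates a Boltzmann-like law
# determined by J and h.
states, counts = np.unique(np.array(samples), axis=0, return_counts=True)
for s, c in zip(states, counts):
    print(s, c / len(samples))
```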