Maisa AI is built on the premise that enterprise automation requires accountable AI agents, not opaque black boxes.

Over the past few decades, computer scientists have developed increasingly sophisticated sensors and machine learning algorithms that allow computer systems to process and interpret images and videos. This capability, known as machine vision, is proving highly advantageous for the manufacturing and production of food products, drinks, electronics, and various other goods.
Machine vision could automate various tedious steps in industry and manufacturing, such as detecting defects, inspecting electronics, automotive parts, and other items, verifying labels and expiration dates, and sorting products into categories. A simple version of the first of these tasks is sketched below.
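As a concrete illustration, here is a minimal sketch of one classical (non-deep-learning) defect-inspection approach: diff a product image against a "golden" reference and report regions that deviate. The file names, threshold values, and the diff-and-threshold strategy itself are illustrative assumptions, not a description of any particular system.

```python
import cv2
import numpy as np

def find_defects(reference_path: str, sample_path: str,
                 diff_threshold: int = 40, min_area: int = 50):
    """Flag regions where a sample image deviates from a 'golden' reference.

    Assumes the sample is already aligned to the reference upstream;
    here we simply diff, threshold, and extract contours of the deviations.
    """
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread(sample_path, cv2.IMREAD_GRAYSCALE)

    # Pixel-wise absolute difference highlights deviations from the reference.
    diff = cv2.absdiff(ref, sample)

    # Keep only significant deviations.
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)

    # Group deviating pixels into candidate defect regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Ignore tiny blobs that are likely sensor noise.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

# Hypothetical image files for illustration.
defects = find_defects("golden_board.png", "inspected_board.png")
print(f"{len(defects)} candidate defect region(s): {defects}")
```

Production systems typically add alignment, lighting normalization, or a learned classifier on top of a pipeline like this, but the diff-and-threshold core captures the basic idea.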
While the sensors underpinning many existing machine vision systems are highly sophisticated, they typically do not process visual information with as much detail as the human retina (the light-sensitive tissue in the eye that processes visual signals).
Threat researchers have discovered what they describe as the first AI-powered ransomware, dubbed PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems.
The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts.
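To make that mechanism concrete, the sketch below shows the general pattern of generating a script at runtime by sending a hard-coded prompt to a locally served model over Ollama's REST API. This is a deliberately benign illustration of the technique, not PromptLock's actual code; the prompt here is a harmless placeholder.

```python
import requests

# Ollama's generate endpoint on its default local port.
OLLAMA_URL = "http://localhost:11434/api/generate"

# A hard-coded prompt, analogous in shape (but not in intent) to what the
# researchers describe: the model is asked to emit a Lua script at runtime.
PROMPT = "Write a Lua script that prints the current date and time."

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "gpt-oss:20b",  # the locally served model named in the report
        "prompt": PROMPT,
        "stream": False,         # return the full completion in one response
    },
    timeout=120,
)
response.raise_for_status()

# The generated Lua source arrives in the 'response' field of the JSON body.
lua_script = response.json()["response"]
print(lua_script)
```

Because the model runs locally, this pattern produces scripts that vary from run to run with no outbound API traffic, which is part of what makes the approach notable to defenders.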
High-frequency radio waves can wirelessly carry the vast amounts of data demanded by emerging technologies such as virtual reality, but as engineers push into the upper reaches of the radio spectrum, they are hitting walls. Literally.
Signals in ultrahigh-frequency bands are easily blocked by objects, so users can lose a transmission by walking between rooms or even passing a bookcase.
Now, researchers at Princeton Engineering have developed a machine-learning system that could allow ultrahigh-frequency transmissions to dodge those obstacles. In an article in Nature Communications, the researchers unveiled a system that shapes transmissions to avoid obstacles, coupled with a neural network that can rapidly adjust to a complex and dynamic environment.
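Paper specifics aside, the basic knob any such system must turn is the set of per-element phases on an antenna array. The toy sketch below shows classical beam steering on a uniform linear array; it is an illustrative baseline only, not the Princeton team's method, which shapes beams around obstacles and adapts them with a neural network.

```python
import numpy as np

n_elements = 16   # uniform linear array size (illustrative)
spacing = 0.5     # element spacing in wavelengths (half-wavelength)
steer_deg = 25.0  # desired beam direction, degrees from broadside

# Phase weights that align contributions from all elements toward steer_deg.
k = 2 * np.pi * spacing
idx = np.arange(n_elements)
weights = np.exp(-1j * k * idx * np.sin(np.radians(steer_deg)))

# Evaluate the resulting radiation pattern over all directions.
angles = np.radians(np.linspace(-90, 90, 721))
steering = np.exp(1j * k * np.outer(idx, np.sin(angles)))
pattern = np.abs(weights @ steering) / n_elements

peak = np.degrees(angles[np.argmax(pattern)])
print(f"Beam peak at {peak:.1f} degrees (target {steer_deg} degrees)")
```

In an adaptive system, something would have to recompute those weights as the environment changes; in the Princeton work, that role is played by a neural network reacting to the dynamic scene.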
Batteries, like humans, require medicine to function at their best. In battery technology, this medicine comes in the form of electrolyte additives, which enhance performance by forming stable interfaces, lowering resistance, and boosting energy capacity, improving both efficiency and longevity.
Finding the right electrolyte additive for a battery is much like prescribing the right medicine. With hundreds of candidates to consider, identifying the best additive for each battery is challenging, given the vast search space and the time-consuming nature of traditional experimental methods.
Researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory are using machine learning models to analyze known electrolyte additives and predict combinations that could improve battery performance. They trained models to forecast key battery metrics, such as resistance and energy capacity, and applied these models to suggest new additive combinations for testing.
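For a rough sense of that workflow, the sketch below trains a regressor on featurized additive data and then screens a large pool of untested combinations. It uses synthetic data and a generic random-forest model; the features, target, and model choice are assumptions for illustration, not Argonne's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row featurizes one additive combination
# (e.g., descriptors such as concentration or molecular properties).
# Real work would use measured descriptors and lab-tested battery metrics.
X = rng.uniform(0, 1, size=(300, 6))
resistance = (1.0 - 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2
              + rng.normal(0, 0.05, 300))

X_train, X_test, y_train, y_test = train_test_split(
    X, resistance, random_state=0)

# Train a regressor to forecast a key battery metric from additive features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

# Screen a large pool of untested candidate combinations and surface the
# ones the model predicts will have the lowest resistance.
candidates = rng.uniform(0, 1, size=(10_000, 6))
predicted = model.predict(candidates)
best = np.argsort(predicted)[:5]
print("Top candidates for lab testing:", best, predicted[best])
```

The payoff of this pattern is that the slow, expensive step (lab testing) is reserved for the handful of combinations the model ranks highest, rather than the full candidate pool.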