The accounting firm’s U.S. unit plans to integrate generative AI into internal workflows and help middle-market companies with AI strategies

The MRI shows a brain tumor in an unfavorable location, and a brain biopsy would carry high risks for the patient, who had consulted doctors because of double vision. Situations like this one prompted researchers at Charité—Universitätsmedizin Berlin to look for new diagnostic procedures. The result is an AI model.
The model makes use of specific characteristics in the genetic material of tumors, their epigenetic fingerprint, which can be obtained from cerebrospinal fluid, among other sources. As the team reports in the journal Nature Cancer, the new model classifies tumors quickly and very reliably.
Today, far more types of tumors are known than the organs from which they arise. Each tumor has its own characteristics: particular tissue features, growth rates and metabolic peculiarities. Nevertheless, tumor types with similar molecular characteristics can be grouped together, and the treatment of an individual patient's disease depends decisively on the tumor type.
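To make the general idea concrete, the sketch below shows what a classifier over molecular profiles can look like in principle: each tumor is represented as a vector of methylation-like values and mapped to a tumor class. Everything here, including the random-forest choice, the scikit-learn API, and the synthetic data, is an illustrative assumption and is not the Charité model described in the article.

```python
# Illustrative sketch only: a generic classifier over synthetic "methylation profiles",
# not the model reported in Nature Cancer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_sites, n_classes = 300, 500, 4      # made-up dimensions
X = rng.random((n_samples, n_sites))             # fake methylation values in [0, 1]
y = rng.integers(0, n_classes, size=n_samples)   # fake tumor-type labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# On purely random data this hovers around chance; with real profiles the
# accuracy would reflect how well the classes separate molecularly.
print("held-out accuracy:", clf.score(X_test, y_test))
```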
IN A NUTSHELL
🤖 Veho has partnered with RIVR to introduce wheeled-legged robots for parcel delivery in Austin, Texas.
🚀 The robots feature precision engineering and adaptive mobility to navigate complex urban environments.
🔗 This collaboration aims to enhance delivery efficiency while reducing physical strain on human drivers.
🌐 The initiative represents a major step
Pressure is on Apple to show it hasn’t lost its magic despite broken promises to ramp up iPhones with generative artificial intelligence (GenAI) as rivals race ahead with the technology.
Apple will showcase plans for its coveted devices and the software powering them at its annual Worldwide Developers Conference (WWDC) kicking off Monday in Silicon Valley.
The event comes a year after the tech titan said a suite of AI features it dubbed “Apple Intelligence” was heading for iPhones, including improvements to its much-criticized Siri voice assistant.
Whether you’re streaming a show, paying bills online or sending an email, each of these actions relies on computer programs that run behind the scenes. The process of writing computer programs is known as coding. Until recently, most computer code was written, at least originally, by human beings. But with the advent of generative artificial intelligence, that has begun to change.
Now, just as you can ask ChatGPT to spin up a recipe for a favorite dish or write a sonnet in the style of Lord Byron, you can ask generative AI tools to write computer code for you. Andrej Karpathy, an OpenAI co-founder who previously led AI efforts at Tesla, recently termed this “vibe coding.”
For complete beginners or nontechnical dreamers, writing code based on vibes—feelings rather than explicitly defined information—could feel like a superpower. You don’t need to master programming languages or complex data structures. A simple natural language prompt will do the trick.
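As a purely hypothetical illustration of what such a prompt might produce, asking an AI assistant to “write a Python script that renames every .jpg in a folder after the date it was last modified” could yield something along these lines; the folder name and the exact behavior are made up for the example, not taken from the article.

```python
# Hypothetical output of a "vibe coding" prompt asking to rename photos by date.
from datetime import datetime
from pathlib import Path

def rename_photos_by_date(folder: str) -> None:
    """Rename each .jpg in `folder` to YYYY-MM-DD_HH-MM-SS.jpg based on its mtime."""
    for path in Path(folder).glob("*.jpg"):
        stamp = datetime.fromtimestamp(path.stat().st_mtime)
        target = path.with_name(stamp.strftime("%Y-%m-%d_%H-%M-%S") + path.suffix)
        # Skip the rename if a file with the target name already exists.
        if not target.exists():
            path.rename(target)

if __name__ == "__main__":
    rename_photos_by_date("photos")  # "photos" is a placeholder folder name
```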
Empathy, the ability to understand what others are feeling and emotionally connect with their experiences, can be highly advantageous for humans, as it allows them to strengthen relationships and thrive in some professional settings. The development of tools for reliably measuring people’s empathy has thus been a key objective of many past psychology studies.
Most existing methods for measuring empathy rely on self-reports and questionnaires, such as the Interpersonal Reactivity Index (IRI), the Empathy Quotient (EQ) test and the Toronto Empathy Questionnaire (TEQ). Over the past few years, however, some scientists have been trying to develop alternative techniques for measuring empathy, some of which rely on machine learning algorithms or other computational models.
Researchers at Hong Kong Polytechnic University recently introduced a new machine learning-based video analytics framework that could be used to predict the empathy of people captured in video footage. Their framework, described in a preprint paper posted on SSRN, could prove to be a valuable tool for conducting organizational psychology research, as well as other empathy-related studies.
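Purely as an illustration of the general recipe behind video-based prediction, and not the authors' actual framework, a minimal sketch might pool per-frame features into one vector per clip and fit a regression against questionnaire scores; every name, size and number below is invented.

```python
# Generic illustration of video-based score prediction with synthetic data;
# this is not the framework described in the SSRN preprint.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_videos, n_frames, n_features = 100, 60, 32                 # made-up sizes
frame_feats = rng.random((n_videos, n_frames, n_features))   # fake per-frame features
clip_feats = frame_feats.mean(axis=1)                        # pool frames into one vector per clip
empathy_scores = rng.random(n_videos)                        # fake questionnaire scores (e.g., TEQ)

model = Ridge(alpha=1.0).fit(clip_feats, empathy_scores)
print(model.predict(clip_feats)[:5])                         # predicted scores for the first clips
```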
Progress is rarely linear, and AI is no exception.
As academics, independent developers, and the biggest tech companies in the world drive us closer to artificial general intelligence, a still-hypothetical form of intelligence that matches human capabilities, they've hit some roadblocks. Many emerging models are prone to hallucinations, misinformation, and simple errors.
Google CEO Sundar Pichai referred to this phase of AI as AJI, or “artificial jagged intelligence,” on a recent episode of Lex Fridman’s podcast.
New research from Caltech’s Center for Autonomous Systems and Technologies finds that robots that morph before landing are more robust