
Microsoft’s camera-based AI app solves your math problems

Microsoft has made several quirky and useful apps that can help you with daily problems, and its new app seeks to help you with math.

Microsoft Math Solver — available on both iOS and Android — can solve various math problems including quadratic equations, calculus, and statistics. The app can also show graphs for the equation to enhance your understanding of the subject.
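Microsoft has not published the app's internals, but the kind of result it returns for a quadratic is easy to illustrate. Below is a minimal sketch using SymPy (illustrative only, not Microsoft's code): it solves an example quadratic symbolically and prepares the corresponding graph.

```python
# Minimal sketch (not Microsoft's implementation): solving a quadratic
# symbolically with SymPy, the kind of result a math-solver app returns.
import sympy as sp

x = sp.symbols("x")
equation = sp.Eq(x**2 - 5*x + 6, 0)   # example quadratic: x^2 - 5x + 6 = 0

roots = sp.solve(equation, x)          # -> [2, 3]
print("roots:", roots)

# A graph of the left-hand side shows where it crosses zero, similar to the
# graphs the app displays for an equation (call .show() to display it).
plot = sp.plot(x**2 - 5*x + 6, (x, 0, 5), show=False)
```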

Big Questions: The Multiverse, Cosmological Neural Networks and “Space Noodles”

Ira Pastor, ideaXme life sciences ambassador and founder of Bioquark, interviews Dr. Vitaly Vanchurin, PhD, Associate Professor of Theoretical Physics and Cosmology, Swenson College of Science and Engineering, University of Minnesota (UMN).

Dr Vanchurin’s big questions and the tools we need to answer them:

“What is the origin of our Universe? What determines our vacuum and the cosmological constant that is responsible for the observed accelerated expansion of space? What determines the onset of structure formation and the birth of galaxies in our Universe? Our innate curiosity about our beginnings has been, since time immemorial, and still is, at the heart of every human being. This age-old question of our origin can now be addressed by applying the scientific method.”

Ira Pastor comments:

Today, we have a really exciting thought leader joining us on ideaXme who spends his time thinking about really BIG questions, questions like: What is the origin of our Universe? What’s behind the cosmological constant (in Albert Einstein’s field equations of general relativity) that is responsible for the accelerated expansion of space? What determines the onset of structure formation and the birth of galaxies in our Universe? And many other fascinating topics.

Dr. Vitaly Vanchurin is an Associate Professor of Theoretical Physics and Cosmology at the Swenson College of Science and Engineering, University of Minnesota (UMN).

Ventilator-Associated Pneumonia: Diagnosis, Treatment, and Prevention

While being treated for a life-threatening illness, critically ill patients commonly contract ventilator-associated pneumonia. This nosocomial infection increases morbidity, likely mortality, and the cost of health care. This article reviews the literature with regard to diagnosis, treatment, and prevention. It provides conclusions that can be implemented in practice, as well as an algorithm for the bedside clinician, and focuses on the controversies surrounding diagnostic tools and approaches, treatment plans, and prevention strategies.

Patients in the intensive care unit (ICU) are at risk for dying not only from their critical illness but also from secondary processes such as nosocomial infection. Pneumonia is the second most common nosocomial infection in critically ill patients, affecting 27% of all critically ill patients (170). Eighty-six percent of nosocomial pneumonias are associated with mechanical ventilation and are termed ventilator-associated pneumonia (VAP). Between 250,000 and 300,000 cases per year occur in the United States alone, which is an incidence rate of 5 to 10 cases per 1,000 hospital admissions (134, 170). The mortality attributable to VAP has been reported to range between 0 and 50% (10, 41, 43, 96, 161).
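A quick back-of-the-envelope check, using only the figures quoted above (not a calculation from the cited studies), shows how the reported case counts and incidence rate relate:

```python
# Back-of-the-envelope check (derived only from the figures quoted above):
# how many annual hospital admissions are implied by 250,000-300,000 VAP
# cases per year at an incidence of 5-10 cases per 1,000 admissions.
cases_low, cases_high = 250_000, 300_000
rate_low, rate_high = 5 / 1000, 10 / 1000   # cases per admission

admissions_min = cases_low / rate_high      # fewest admissions consistent with the figures
admissions_max = cases_high / rate_low      # most admissions consistent with the figures
print(f"implied admissions per year: {admissions_min:,.0f} to {admissions_max:,.0f}")
# -> roughly 25,000,000 to 60,000,000 admissions per year
```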

Neuroscience study finds ‘hidden’ thoughts in visual part of brain

How much control do you have over your thoughts? What if you were specifically told not to think of something—like a pink elephant?

A recent study led by UNSW psychologists has mapped what happens in the brain when a person tries to suppress a thought. The neuroscientists managed to ‘decode’ the complex brain activity using functional brain imaging (fMRI) and an imaging algorithm.

The findings suggest that even when a person succeeds in ignoring a thought, like the pink elephant, it can still exist in another part of the brain—without them being aware of it.
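The study’s actual analysis pipeline is described in the original paper; the sketch below only illustrates the general idea of multi-voxel pattern decoding, using simulated data and scikit-learn rather than the UNSW team’s methods or data.

```python
# Minimal sketch of multi-voxel pattern "decoding" with scikit-learn
# (illustrative only; simulated data, not the UNSW study's pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
labels = rng.integers(0, 2, n_trials)                        # 0 = thought A, 1 = thought B
pattern = rng.normal(size=n_voxels)                          # class-dependent voxel pattern
signal = np.outer(labels - 0.5, pattern)                     # weak signal per trial
voxels = signal + rng.normal(scale=3.0, size=(n_trials, n_voxels))  # noisy "fMRI" data

# If the classifier decodes the label above chance (0.5), the voxel pattern
# still carries information about the thought, even when it was "suppressed".
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, voxels, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```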

NASA to test precision automated landing system designed for the moon and Mars on upcoming Blue Origin mission

NASA is going to test a new precision landing system, designed for use on the tough terrain of the moon and Mars, for the first time during an upcoming mission of Blue Origin’s New Shepard reusable suborbital rocket. The “Safe and Precise Landing – Integrated Capabilities Evolution” (SPLICE) system is made up of a number of lasers, an optical camera, and a computer that processes all the data collected by the sensors using advanced algorithms. It works by spotting potential hazards and adjusting landing parameters on the fly to ensure a safe touchdown.

SPLICE will get a real-world test of three of its four primary subsystems during a New Shepard mission to be flown relatively soon. The company, founded by Jeff Bezos, typically returns its first-stage booster to Earth after making its trip to the very edge of space, but on this test of SPLICE, NASA’s automated landing technology will operate on board the vehicle the same way it would when approaching the surface of the moon or Mars. The elements tested will include “terrain relative navigation,” Doppler radar, and SPLICE’s descent and landing computer, while a fourth major system, lidar-based hazard detection, will be tested on future planned flights.

NASA already uses automated landing for its robotic exploration craft on the surfaces of other planets, including the Perseverance rover headed to Mars. But a lot of work goes into selecting a landing zone with a large area of unobstructed ground, free of any potential hazards, to ensure a safe touchdown. Existing systems can make some adjustments, but they are relatively limited in that regard.
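NASA has not published SPLICE’s algorithms in this announcement; the toy sketch below merely illustrates the general concept of hazard-aware retargeting, with a made-up elevation map and roughness threshold.

```python
# Toy sketch of hazard-aware landing-site selection (illustrative only;
# not NASA's SPLICE algorithms). A lidar-style elevation map is scanned for
# cells that are too rough, and the touchdown point is retargeted to the
# nearest safe cell.
import numpy as np

def pick_landing_cell(elevation, target, max_local_relief=0.3):
    """Return the safe cell (row, col) closest to the planned target."""
    rows, cols = elevation.shape
    safe = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            patch = elevation[r - 1:r + 2, c - 1:c + 2]
            if patch.max() - patch.min() <= max_local_relief:   # flat enough
                safe.append((r, c))
    if not safe:
        raise RuntimeError("no safe landing cell found")
    # Choose the safe cell closest to the originally planned target.
    return min(safe, key=lambda rc: (rc[0] - target[0]) ** 2 + (rc[1] - target[1]) ** 2)

rng = np.random.default_rng(1)
terrain = rng.normal(scale=0.05, size=(20, 20))   # synthetic elevation map (meters)
terrain[8:12, 8:12] += 2.0                        # a boulder near the planned target
print("retargeted landing cell:", pick_landing_cell(terrain, target=(10, 10)))
```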

Playing with Realistic Neural Talking Head Models

Researchers at the Samsung AI Center in Moscow (Russia) recently presented interesting work called Living Portraits: they brought the Mona Lisa and other subjects of photos and artworks to life using videos of real people. They presented a framework for meta-learning of adversarial generative models called “Few-Shot Adversarial Learning”.

You can read more about the details in the original paper.

Here we review this great implementation of the algorithm in PyTorch. The author of this implementation is Vincent Thévenin, a researcher at the De Vinci Innovation Center.
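For a feel of the moving parts, here is a highly simplified PyTorch sketch of the three networks the paper describes (an embedder, a generator, and a discriminator). The layer choices and tensor sizes are invented for illustration; this is not the paper’s architecture or the code from the linked implementation.

```python
# Highly simplified sketch of the few-shot talking-head setup (embedder,
# generator, discriminator); layer sizes are illustrative and this is not
# the paper's or the linked repository's actual architecture.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Maps a few (frame, landmark) pairs of one person to an identity embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, frames, landmarks):
        x = torch.cat([frames, landmarks], dim=1)      # (K, 6, H, W)
        return self.net(x).flatten(1).mean(dim=0)      # average over the K shots

class Generator(nn.Module):
    """Turns a target landmark image plus the identity embedding into a frame."""
    def __init__(self, dim=64):
        super().__init__()
        self.project = nn.Linear(dim, dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, landmark, embedding):
        e = self.project(embedding).view(1, -1, 1, 1).expand(
            landmark.size(0), -1, landmark.size(2), landmark.size(3))
        return self.net(torch.cat([landmark, e], dim=1))

class Discriminator(nn.Module):
    """Scores how realistic a frame looks for the given landmarks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, frame, landmark):
        return self.net(torch.cat([frame, landmark], dim=1)).mean()

# Few-shot setup: K reference frames of one person, then animate a new pose.
K, H, W = 4, 64, 64
ref_frames, ref_landmarks = torch.rand(K, 3, H, W), torch.rand(K, 3, H, W)
target_landmark = torch.rand(1, 3, H, W)

embedding = Embedder()(ref_frames, ref_landmarks)          # identity code
fake_frame = Generator()(target_landmark, embedding)       # synthesized frame
score = Discriminator()(fake_frame, target_landmark)       # realism score
print(fake_frame.shape, float(score))
```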

C-MIMI: Use of AI in radiology is evolving

September 14, 2020 — The use of artificial intelligence (AI) in radiology to aid in image interpretation tasks is evolving, but many of the old factors and concepts from the computer-aided detection (CAD) era still remain, according to a Sunday talk at the Conference on Machine Intelligence in Medical Imaging (C-MIMI).

A lot has changed as the new era of AI has emerged, such as faster computers, larger image datasets, and more advanced algorithms — including deep learning. Another thing that’s changed is the realization of additional reasons and means to incorporate AI into clinical practice, according to Maryellen Giger, PhD, of the University of Chicago. What’s more, AI is also being developed for a broader range of clinical questions, more imaging modalities, and more diseases, she said.

At the same time, many of the issues are the same as those faced in the era of CAD. There are the same clinical tasks of detection, diagnosis, and response assessment, as well as the same concern of “garbage in, garbage out,” she said. What’s more, there’s the same potential for off-label use of the software, and the same methods for statistical evaluations.

More laser power allows faster production of ultra-precise polymeric parts across 12 orders of magnitude

A high-power laser, an optimized optical pathway, a patented adaptive-resolution technology, and smart algorithms for laser scanning have enabled UpNano, a Vienna-based high-tech company, to deliver high-resolution 3D printing as never seen before.

“Parts with nano- and microscale can now be printed across 12 orders of magnitude—within times never achieved previously. This has been accomplished by UpNano, a spin-out of the TU Wien, which developed a high-end two-photon polymerization (2PP) 3D-printing system that can produce polymeric parts with a volume ranging from 10⁰ to 10¹² cubic micrometers. At the same time the printer allows for a nano- and microscale resolution,” the company said in a statement.

Recently the company demonstrated this remarkable capability by printing four models of the Eiffel Tower, ranging in size from 200 micrometers to 4 centimeters, with perfect representation of all minuscule structures, within 30 to 540 minutes. With this, 2PP 3D printing is ready for applications in R&D and industry that so far seemed impossible.
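Assuming the quoted lower bound of the volume range is 10⁰ cubic micrometers, the stated figures line up with the claimed span, as a quick check shows:

```python
# Quick arithmetic check of the quoted ranges (my reading of the figures
# above, not a calculation supplied by UpNano).
import math

vol_min_um3, vol_max_um3 = 1e0, 1e12          # quoted volume range in cubic micrometers
print(math.log10(vol_max_um3 / vol_min_um3))  # -> 12.0 orders of magnitude in volume

height_min_um, height_max_um = 200, 4e4       # Eiffel Tower models: 200 um to 4 cm
print(height_max_um / height_min_um)          # -> a factor of 200 in size
```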

The World’s First Living Machines

Teeny-tiny living robots made their world debut earlier this year. These microscopic organisms are composed entirely of frog stem cells, and, thanks to a special computer algorithm, they can take on different shapes and perform simple functions: crawling, traveling in circles, moving small objects — or even joining with other organic bots to collectively perform tasks.


The world’s first living robots may one day clean up our oceans.
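The actual xenobot design pipeline couples an evolutionary algorithm to a physics simulator and a far richer body representation; none of that is reproduced here. The toy sketch below only illustrates the general evolve, mutate, and select loop such an algorithm uses, with a made-up body encoding and fitness function.

```python
# Toy sketch of the evolve-and-select idea behind computer-designed organisms
# (illustrative only; the real pipeline uses a physics simulator and a much
# richer body representation and fitness measure).
import random

BODY_CELLS = 16                      # a design is a string of 16 cell types
GOAL = "skin" * 2 + "musc" * 2       # hypothetical target layout rewarding movement

def fitness(design):
    """Score a design by how many cells match the hypothetical target layout."""
    return sum(a == b for a, b in zip(design, GOAL))

def mutate(design):
    """Swap one randomly chosen cell for a random cell type."""
    i = random.randrange(len(design))
    return design[:i] + random.choice("skinmuc") + design[i + 1:]

random.seed(0)
population = ["".join(random.choice("skinmuc") for _ in range(BODY_CELLS))
              for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # keep the best designs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print("best design:", best, "fitness:", fitness(best))
```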