
The number of AI and, in particular, machine learning (ML) publications related to medical imaging has increased dramatically in recent years. A recent PubMed search using the MeSH keywords “artificial intelligence” and “radiology” yielded 5,369 papers in 2021, more than five times the number found in 2011. ML models are constantly being developed to improve healthcare efficiency and outcomes, spanning classification, semantic segmentation, object detection, and image generation. Numerous published reports in diagnostic radiology, for example, indicate that ML models can perform as well as or even better than medical experts in specific tasks, such as anomaly detection and pathology screening.

It is thus undeniable that, when used correctly, AI can assist radiologists and drastically reduce their labor. Despite the growing interest in developing ML models for medical imaging, significant challenges can limit such models’ practical applications or even predispose them to substantial bias. Data scarcity and data imbalance are two of these challenges. On the one hand, medical imaging datasets are frequently much smaller than natural image datasets such as ImageNet, and pooling institutional datasets or making them public may be impossible due to patient privacy concerns. On the other hand, even the medical imaging datasets that data scientists can access are often imbalanced.

In other words, the volume of medical imaging data for patients with rarer pathologies is significantly lower than for patients with common pathologies or for healthy people. Using insufficiently large or imbalanced datasets to train or evaluate an ML model may result in systemic biases in model performance. Synthetic image generation is one of the primary strategies to combat data scarcity and data imbalance, alongside the public release of deidentified medical imaging datasets and the endorsement of strategies such as federated learning, which enables ML model development on multi-institutional datasets without data sharing.
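To make the imbalance problem concrete, here is a minimal sketch, assuming PyTorch and a hypothetical two-class dataset (none of this comes from the article), of two standard mitigations: weighting the loss by inverse class frequency, and oversampling the rare class.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Hypothetical imbalanced dataset: 950 "normal" scans and 50 "pathology"
# scans, each represented here by a flattened 64-dim feature vector.
features = torch.randn(1000, 64)
labels = torch.cat([torch.zeros(950, dtype=torch.long),
                    torch.ones(50, dtype=torch.long)])
dataset = TensorDataset(features, labels)

# Mitigation 1: weight the loss inversely to class frequency, so errors
# on the rare class cost more.
class_counts = torch.bincount(labels).float()
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Mitigation 2: oversample the rare class so mini-batches are roughly balanced.
sample_weights = class_weights[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, y in loader:  # one pass over the resampled data
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Synthetic image generation pursues the same goal from the other direction: rather than reweighting existing examples, it adds new minority-class images to the training set.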

A new quantum random-access memory device reads and writes information using a chirped electromagnetic pulse and a superconducting resonator, making it significantly more hardware-efficient than previous devices.

Random-access memory (or RAM) is an integral part of a computer, acting as a short-term memory bank from which information can be quickly recalled. Applications on your phone or computer use RAM so that you can switch between tasks in the blink of an eye. Researchers working on building future quantum computers hope that such systems might one day operate with analogous quantum RAM elements, which they envision could speed up the execution of a quantum algorithm [1, 2] or increase the density of information storable in a quantum processor. Now James O’Sullivan of the London Centre for Nanotechnology and colleagues have taken an important step toward making quantum RAM a reality, demonstrating a hardware-efficient approach that uses chirped microwave pulses to store and retrieve quantum information in atomic spins [3].
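For intuition, a “chirped” pulse is simply one whose frequency is swept in time. Below is a minimal numerical sketch of a linear chirp; the frequencies, duration, and sampling rate are illustrative, not values from the experiment.

```python
import numpy as np

# Linear chirp: the instantaneous frequency sweeps from f0 to f1 over the
# pulse duration T, i.e. s(t) = sin(2*pi*(f0*t + 0.5*k*t^2)), k = (f1-f0)/T.
T = 1e-6                 # pulse duration: 1 microsecond (illustrative)
f0, f1 = 5.00e9, 5.10e9  # sweep across a 100 MHz band near 5 GHz
fs = 50e9                # sampling rate for the synthesized waveform
t = np.arange(0, T, 1 / fs)
k = (f1 - f0) / T
pulse = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))
```

Because the frequency is swept, a single pulse can address a band of spin-transition frequencies rather than one fixed resonance, one reason chirped drives are attractive for spin-ensemble memories.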

Just like quantum computers, experimental demonstrations of quantum memory devices are in their early days. One leading chip-based platform for quantum computation uses circuits made from superconducting metals. In this system, the central processing is done with superconducting qubits, which send and receive information via microwave photons. At present, however, no quantum memory device exists that can reliably store these photons for long periods. Luckily, scientists have a few ideas.

Artificial intelligence has long been a hot topic: a computer algorithm “learns” from examples that teach it what is “right” and what is “wrong.” Unlike a computer algorithm, the human brain works with neurons, the cells of the brain. These are trained and pass signals on to other neurons. This complex network of neurons and its connecting pathways, the synapses, controls our thoughts and actions.

Biological signals are much more diverse than those in conventional computers. For instance, neurons in a biological neural network communicate with ions, biomolecules, and neurotransmitters. More specifically, neurons communicate either chemically, by emitting messenger substances such as neurotransmitters, or electrically, via so-called “action potentials” or “spikes.”

Artificial neurons are a current area of research. Here, efficient communication between biology and electronics requires artificial neurons that realistically emulate the function of their biological counterparts, meaning artificial neurons capable of processing the diversity of signals that exist in biology. Until now, most artificial neurons have emulated their biological counterparts only electrically, without taking into account the wet biological environment, which consists of ions, biomolecules, and neurotransmitters.
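For readers unfamiliar with “spikes,” a minimal sketch of a leaky integrate-and-fire neuron (a standard textbook model; the parameters below are illustrative and not from the research described) shows how a cell integrates input current and fires an action potential once a threshold is crossed.

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane voltage v decays toward rest
# while integrating input current; crossing the threshold emits a "spike"
# and resets the voltage.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # potentials (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):          # simulate 200 ms
    i_in = 2.0 if step * dt > 50 else 0.0  # inject 2 nA after t = 50 ms
    v += dt / tau * (v_rest - v + r_m * i_in)
    if v >= v_thresh:                      # threshold crossed: fire a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
```

Real neurons are far richer than this electrical caricature, which is exactly the article’s point: most artificial neurons stop here, ignoring the chemical signaling that biology also uses.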

This time I come to talk about a new concept in this Age of Artificial Intelligence and the already insipid world of Social Networks. Initially, quite a few years ago, I named it “Counterpart” (long before the TV series “Counterpart” and “Black Mirror”, or even the movie “Transcendence”).

It was the essence of the ETER9 Project that was taking shape in my head.

Over the years, and with the evolution of technologies (and of the human being himself), the concept of “Counterpart” has kept improving; with each passing day, it makes more sense!

You can imagine starting at the beginning, evolving the Universe forward according to the laws of physics, and measuring those earliest signals and their imprints on the Universe to determine how it has expanded over time. Alternatively, you can imagine starting here and now, looking out at the distant objects as we see them receding from us, and then drawing conclusions as to how the Universe has expanded from that.

Both of these methods rely on the same laws of physics, the same underlying theory of gravity, the same cosmic ingredients, and even the same equations as one another. And yet, when we actually perform our observations and make those critical measurements, we get two completely different answers that don’t agree with one another. This is, in many ways, the most pressing cosmic conundrum of our time. But there’s still a possibility that no one is mistaken and everyone is doing the science right. The entire controversy over the expanding Universe could go away if just one new thing is true: if there was some form of “early dark energy” in the Universe. Here’s why so many people are compelled by the idea.
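One way to see why an early component could reconcile the two methods is through the Friedmann equation both rely on. Here is a sketch in standard notation; the exact form of the early-dark-energy term is model-dependent.

```latex
% Expansion rate vs. redshift z for a flat Universe, with an extra
% early dark energy term that is non-negligible only at early times:
\[
  H^2(z) = H_0^2 \left[ \Omega_m (1+z)^3 + \Omega_r (1+z)^4
         + \Omega_\Lambda + \Omega_{\mathrm{EDE}}(z) \right]
\]
```

A nonzero early-dark-energy contribution before recombination raises the early expansion rate and shrinks the sound horizon that the early-Universe method calibrates against, nudging its inferred expansion rate toward the late-Universe measurement.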

Cybersecurity researchers have uncovered 29 packages in the Python Package Index (PyPI), the official third-party software repository for the Python programming language, that aim to infect developers’ machines with malware called W4SP Stealer.

“The main attack seems to have started around October 12, 2022, slowly picking up steam to a concentrated effort around October 22,” software supply chain security company Phylum said in a report published this week.

The list of offending packages is as follows: typesutil, typestring, sutiltype, duonet, fatnoob, strinfer, pydprotect, incrivelsim, twyne, pyptext, installpy, faq, colorwin, requests-httpx, colorsama, shaasigma, stringe, felpesviadinho, cypress, pystyte, pyslyte, pystyle, pyurllib, algorithmic, oiu, iao, curlapi, type-color, and pyhints.
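As a quick defensive check, a developer could compare locally installed distributions against this list. Here is a minimal sketch; the package names are copied from the list above, and nothing else is assumed about them.

```python
import importlib.metadata

# Package names reported as malicious (copied from the list above).
W4SP_PACKAGES = {
    "typesutil", "typestring", "sutiltype", "duonet", "fatnoob", "strinfer",
    "pydprotect", "incrivelsim", "twyne", "pyptext", "installpy", "faq",
    "colorwin", "requests-httpx", "colorsama", "shaasigma", "stringe",
    "felpesviadinho", "cypress", "pystyte", "pyslyte", "pystyle", "pyurllib",
    "algorithmic", "oiu", "iao", "curlapi", "type-color", "pyhints",
}

# Flag any installed distribution whose name matches an offending package.
installed = {dist.metadata["Name"].lower()
             for dist in importlib.metadata.distributions()}
hits = sorted(installed & W4SP_PACKAGES)
if hits:
    print("WARNING: potentially malicious packages installed:", ", ".join(hits))
else:
    print("None of the 29 reported packages are installed.")
```

A name match is only a first-pass signal; any hit warrants removing the package and auditing the machine rather than simply uninstalling and moving on.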

Bias in AI systems is proving to be a major stumbling block in efforts to more broadly integrate the technology into our society.

A new initiative that will reward researchers for finding any prejudices in AI systems could help solve the problem.

The effort is modeled on the bug bounties that software companies pay to cybersecurity experts who alert them to potential security flaws in their products.
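To give a sense of what a reported “bias” might look like in practice, here is a toy audit sketch; the data, the model’s decision rule, and the choice of metric are all hypothetical, not from the initiative. It measures a demographic-parity gap: the difference in approval rates between two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy audit: binary model decisions for two demographic groups (synthetic).
group = rng.integers(0, 2, size=10_000)  # 0 = group A, 1 = group B
scores = rng.normal(loc=0.5 - 0.1 * group, scale=0.2, size=10_000)
approved = scores > 0.5                  # the model's decision rule

# Demographic parity difference: gap between the groups' approval rates.
rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
print(f"approval rate A={rate_a:.2%}, B={rate_b:.2%}, "
      f"gap={abs(rate_a - rate_b):.2%}")
```

A bounty hunter would submit evidence like this gap, plus the conditions that produce it, much as a security researcher submits a reproducible exploit.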

What is driving the multiverse theory? Are multiverse stories only a sticking-plaster solution to problems with the Big Bang theory? Leading thinkers Sabine Hossenfelder, Roger Penrose, and Michio Kaku debate.

00:00 Introduction.
02:22 Michio Kaku | Multiverse theory now dominates cosmology; it is unavoidable.
06:03 Sabine Hossenfelder | Believing in the multiverse is the logical equivalent to believing in God.
07:57 Roger Penrose | Universes are sequential and so are not independent worlds.
16:36 Theme 1 | Do scientific theories need to be testable?
28:45 Theme 2 | Are tales of the multiverse sticking-plaster solutions for a Big Bang theory in trouble?
42:49 Theme 3 | Will theories of the universe always be bound by untestable elements?

Multiverses are everywhere. Or at least the theory is. Everyone from physicists Stephen Hawking and Brian Greene to Marvel superheroes has shown support for the idea. But critics argue that not only is the multiverse improbable, it is also fantastical and fundamentally unscientific, since the theory can never be tested, a requirement that has defined science from its outset.

Should we reject the grand claims and leave multiverse theories to the pages of comic books? Are tales of the multiverse really sticking-plaster solutions for a Big Bang theory in trouble? Or should we take multiverse theory as seriously as its proponents do, and accept that modern science has moved beyond the bounds of experiment and into those of imagination?