Archive for the ‘information science’ category: Page 10

Jul 3, 2024

Genetic algorithm enables precise design of phononic crystals

Posted by in categories: computing, genetics, information science, nanotechnology, quantum physics

The advent of quantum computers promises to revolutionize computing by solving certain complex problems exponentially faster than classical computers. However, today’s quantum computers face challenges such as maintaining stability and transporting quantum information.

Phonons, which are quantized vibrations in periodic lattices, offer new ways to improve these systems by enhancing qubit interactions and providing more reliable information conversion. Phonons also facilitate better communication within quantum computers, allowing them to be interconnected in a network.

Nanophononic materials, which are artificial nanostructures with specific phononic properties, will be essential for next-generation quantum networking. However, designing phononic crystals with desired characteristics at the nano- and micro-scales remains challenging.
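The excerpt does not describe the algorithm itself, but the title points to a genetic algorithm as the design tool. As a rough, generic sketch of how such a design loop can work (the unit-cell parameterization and the "band-gap" fitness score below are invented placeholders, not the authors' actual method), candidate geometries are evolved toward a target phononic property:

```python
import random

# Hypothetical example: evolve a vector of unit-cell parameters
# (e.g., hole radii or layer thicknesses, normalized to [0, 1]) so that
# a surrogate "band-gap score" is maximized. The fitness function is a
# stand-in; a real design loop would call a phononic band-structure solver.

N_PARAMS = 8          # parameters describing one candidate unit cell
POP_SIZE = 40
GENERATIONS = 100
MUTATION_RATE = 0.1

def fitness(candidate):
    """Placeholder objective: reward alternating high/low contrast,
    loosely mimicking the impedance mismatch that opens band gaps."""
    return sum(abs(candidate[i] - candidate[i - 1]) for i in range(1, N_PARAMS))

def mutate(candidate):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) if random.random() < MUTATION_RATE else x
            for x in candidate]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]            # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best candidate:", [round(x, 2) for x in best], "score:", round(fitness(best), 3))
```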

Jul 1, 2024

Guillaume Verdon: Advancing Generative AI and Quantum Computing

Posted by in categories: information science, quantum physics, robotics/AI

What is the future of generative AI compute?

— The future of generative AI compute involves embedding AI algorithms into the physics of the world to push the limits of density, spatial efficiency, and speed for AI, creating a full stack of software and hardware specifically designed for AI from first principles.

Jul 1, 2024

The First Quantum Supercomputer is Here

Posted by in categories: information science, quantum physics, supercomputing

The first #Quantum #Supercomputers are here! Quantum-enabled supercomputing promises to shed light on new quantum algorithms, hardware innovations, and error mitigation schemes. Large collaborations in the field are kicking off between corporations and supercomputing centers, and companies such as NVIDIA, IBM, IQM, and QuEra are among the earliest to participate in these partnerships.


Jun 30, 2024

Like a Child, This Brain-Inspired AI Can Explain Its Reasoning

Posted by in categories: biotech/medical, information science, robotics/AI

But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.

To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.

The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read, a kind of CliffsNotes for programmers that explains, in plain English, the algorithm’s conclusions about the patterns it found in the data. It can also generate fully executable programming code to try out.
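The excerpt does not spell out the architecture, but the "condense information into hubs, then translate each hub into readable rules" idea can be sketched very roughly. In this illustrative stand-in (not the authors' method; the toy data and pipeline are invented), correlated features are pooled into a few hub scores, a shallow decision tree is fit on them, and the tree is printed as plain-text rules:

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy "diagnostic" dataset: 200 patients, 12 raw measurements.
X = rng.normal(size=(200, 12))
y = (X[:, 0] + X[:, 1] + X[:, 6] > 0).astype(int)   # ground-truth label rule for the toy data

# Step 1: condense raw features into a few "hubs" (here, 3 cluster averages).
hubs = FeatureAgglomeration(n_clusters=3)
X_hubs = hubs.fit_transform(X)

# Step 2: fit a small, inherently interpretable model on the hub scores.
tree = DecisionTreeClassifier(max_depth=2).fit(X_hubs, y)

# Step 3: emit human-readable "coding guidelines" describing the decision logic.
print(export_text(tree, feature_names=[f"hub_{i}" for i in range(3)]))
```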

Jun 30, 2024

Automated discovery of algorithms from data

Posted by in categories: information science, robotics/AI

Automated algorithm discovery has been difficult for artificial intelligence given the immense search space of possible functions. Here explainable neural networks are used to discover algorithms that outperform those designed by humans.
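As a loose illustration of reading an algorithm out of a model fitted to data (the paper uses explainable neural networks; the sparse least-squares version below only conveys the flavor, with an invented hidden rule):

```python
import numpy as np

# Generate input/output examples of an unknown rule, fit an interpretable
# linear model, and print the recovered expression.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2]          # hidden rule the model should recover

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
terms = [f"{c:+.2f}*x{i}" for i, c in enumerate(coeffs) if abs(c) > 1e-6]
print("discovered rule: y =", " ".join(terms))
```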

Jun 29, 2024

New computational microscopy technique provides more direct route to crisp images

Posted by in categories: biotech/medical, computing, information science

For hundreds of years, the clarity and magnification of microscopes were ultimately limited by the physical properties of their optical lenses. Microscope makers pushed those boundaries by making increasingly complicated and expensive stacks of lens elements. Still, scientists had to choose between high resolution with a small field of view on the one hand and low resolution with a large field of view on the other.

In 2013, a team of Caltech engineers introduced a technique called FPM (for Fourier ptychographic microscopy). This technology marked the advent of computational microscopy, the use of techniques that wed the sensing of conventional microscopes with computation that processes the detected information in new ways to create deeper, sharper images covering larger areas. FPM has since been widely adopted for its ability to acquire high-resolution images of samples while maintaining a large field of view using relatively inexpensive equipment.

Now the same lab has developed a new method that can outperform FPM in its ability to obtain images free of blurriness or distortion, even while taking fewer measurements. The new technique, described in a paper that appeared in the journal Nature Communications, could lead to advances in such areas as biomedical imaging, digital pathology, and drug screening.
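The article does not explain how FPM itself works, but the core idea is that each tilted LED illumination shifts the sample's spectrum, so a low-numerical-aperture objective captures a different patch of Fourier space; stitching those patches back together yields a wide-field, high-resolution image. A minimal forward-model sketch follows (the synthetic object and pupil size are invented, and the iterative reconstruction step is omitted):

```python
import numpy as np

# Minimal forward model behind Fourier ptychography: tilted illumination
# shifts the object's spectrum, and a small circular pupil (the low-NA
# objective) selects a different patch of Fourier space for each LED.

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
obj = np.exp(1j * 2 * np.pi * (np.cos(8 * np.pi * X) + np.sin(6 * np.pi * Y)))  # synthetic complex sample

pupil_radius = 20                      # pixels in Fourier space (illustrative low NA)
fy, fx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2, indexing="ij")
pupil = (fx**2 + fy**2) <= pupil_radius**2

def low_res_image(shift_x, shift_y):
    """Intensity image recorded when the LED tilt shifts the spectrum by (shift_x, shift_y)."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    shifted = np.roll(np.roll(spectrum, shift_y, axis=0), shift_x, axis=1)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(shifted * pupil))) ** 2

# Each LED angle yields a different low-resolution measurement of the same sample.
stack = [low_res_image(sx, sy) for sx in (-15, 0, 15) for sy in (-15, 0, 15)]
print(len(stack), "low-res images, each", stack[0].shape)
```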

Jun 28, 2024

Exploring the Emergent Abilities of Large Language Models

Posted by in categories: information science, physics

Emergence, a fascinating and complex concept, illuminates how intricate patterns and behaviors can spring from simple interactions. It’s akin to marveling at a symphony, where each individual note, simple in itself, contributes to a rich, complex musical experience far surpassing the sum of its parts. Although definitions of emergence vary across disciplines, they converge on a common theme: small quantitative changes in a system’s parameters can lead to significant qualitative transformations in its behavior. These qualitative shifts represent different “regimes” where the fundamental “rules of the game” (the underlying principles or equations governing the behavior) change dramatically.

To make this abstract concept more tangible, let’s explore relatable examples from various fields:

1. Physics: Phase Transitions: Emergence is vividly illustrated through phase transitions, like water turning into ice. Here, minor temperature changes (quantitative parameter) lead to a drastic change from liquid to solid (qualitative behavior). Each molecule behaves simply, but collectively they transition into a distinctly different state with new properties.
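A toy simulation can make this kind of abrupt, collective change concrete. Site percolation is used here purely as an analogy (it is neither the water-to-ice transition nor anything from the LLM discussion): the largest connected cluster stays negligible until the occupation probability crosses a threshold, then suddenly spans the grid:

```python
import numpy as np
from scipy.ndimage import label

# A small quantitative change in p produces a qualitative change in connectivity
# near the 2D site-percolation threshold (~0.59).

rng = np.random.default_rng(0)
N = 200

for p in (0.4, 0.5, 0.55, 0.6, 0.65, 0.7):
    grid = rng.random((N, N)) < p           # occupy each site with probability p
    labels, n = label(grid)                 # find connected clusters
    largest = np.bincount(labels.ravel())[1:].max() if n else 0
    print(f"p = {p:.2f}  largest cluster fraction = {largest / grid.sum():.2f}")
```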

Jun 27, 2024

The surprising behavior of black holes in an expanding universe

Posted by in categories: cosmology, information science, quantum physics

A physicist investigating black holes has found that, in an expanding universe, Einstein’s equations require that the rate of the universe’s expansion at the event horizon of every black hole must be a constant, the same for all black holes. In turn this means that the only energy at the event horizon is dark energy, the so-called cosmological constant. The study is published on the arXiv preprint server.

“Otherwise,” said Nikodem Popławski, a Distinguished Lecturer at the University of New Haven, “the pressure of matter and curvature of spacetime would have to be infinite at a horizon, but that is unphysical.”

Black holes are a fascinating topic because they are about the simplest things in the universe: their only properties are mass, electric charge and angular momentum (spin). Yet their simplicity gives rise to a fantastical property: an event horizon, a nonphysical surface surrounding the black hole at a critical distance, spherical in the simplest cases. Anything closer to the black hole, that is, anything inside the event horizon, can never escape the black hole.
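The article does not reproduce the key relation, but the summarized result can be restated schematically. If, at the event horizon, the matter and curvature terms of the Friedmann equation vanish and only the cosmological constant remains, the local expansion rate there is fixed to a single value independent of the particular black hole (a schematic restatement of the summary above, not the paper's derivation):

```latex
% Friedmann equation with matter density \rho, curvature k, and cosmological constant \Lambda.
% If only dark energy contributes at the horizon, the expansion rate there
% reduces to a constant set solely by \Lambda, hence the same for every black hole.
\[
  H^2 \;=\; \frac{8\pi G}{3}\,\rho \;+\; \frac{\Lambda c^2}{3} \;-\; \frac{k c^2}{a^2}
  \quad\Longrightarrow\quad
  H_{\text{horizon}} \;=\; \sqrt{\frac{\Lambda c^2}{3}}.
\]
```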

Jun 26, 2024

On quantum computing for artificial superintelligence

Posted by in categories: information science, quantum physics, robotics/AI

Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that quantum technologies enable hypercomputation. Fundamental limitations on information storage within finite spaces and the accessibility of information from quantum states constrain quantum computers from surpassing the Turing computing barrier.
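Two standard bounds make the "finite storage, finite accessibility" part of this argument concrete; citing them here is an assumption about which limits the authors have in mind, not a quotation from the paper:

```latex
% Holevo bound: the accessible classical information \chi from n qubits is at most n bits.
% Bekenstein bound: a region of radius R containing energy E stores at most a finite
% number of bits. Together, finite systems hold and yield only finite information,
% keeping quantum devices within the Turing-computable regime.
\[
  \chi \;\le\; n, \qquad
  I \;\le\; \frac{2\pi R E}{\hbar c \,\ln 2}.
\]
```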

Jun 26, 2024

AI Generated Content and Academic Journals

Posted by in categories: information science, policy, robotics/AI

What are good policy options for academic journals regarding the detection of AI generated content and publication decisions? As a group of associate editors of Dialectica note below, there are several issues involved, including the uncertain performance of AI detection tools and the risk that material checked by such tools is used for the further training of AIs. They’re interested in learning about what policies, if any, other journals have instituted in regard to these challenges and how they’re working, as well as other AI-related problems journals should have policies about. They write:

As associate editors of a philosophy journal, we face the challenge of dealing with content that we suspect was generated by AI. Just like plagiarized content, AI generated content is submitted under false claim of authorship. Among the unique challenges posed by AI, the following two are pertinent for journal editors.

First, there is the worry of feeding material to AI while attempting to minimize its impact. To the best of our knowledge, the only available method to check for AI generated content involves websites such as GPTZero. However, using such AI detectors differs from plagiarism software in running the risk of making copyrighted material available for the purposes of AI training, which eventually aids the development of a commercial product. We wonder whether using such software under these conditions is justifiable.

Second, there is the worry of delegating decisions to an algorithm the workings of which are opaque. Unlike plagiarized texts, texts generated by AI routinely do not stand in an obvious relation of resemblance to an original. This renders it extremely difficult to verify whether an article or part of an article was AI generated; the basis for refusing to consider an article on such grounds is therefore shaky at best. We wonder whether it is problematic to refuse to publish an article solely because the likelihood of its being generated by AI passes a specific threshold (say, 90%) according to a specific website.

We would be interested to learn about best practices adopted by other journals and about issues we may have neglected to consider. We especially appreciate the thoughts of fellow philosophers as well as members of other fields facing similar problems. — Aleks…
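One quick way to see why a fixed detection threshold is "shaky at best" is a base-rate calculation; all the numbers below are invented for illustration:

```python
# Illustrative base-rate arithmetic (all numbers invented): even a detector
# that flags 90% of AI-written submissions can be wrong most of the time
# if genuinely AI-written submissions are rare.

prevalence = 0.05            # assumed share of submissions that are AI generated
sensitivity = 0.90           # detector flags 90% of AI-written texts
false_positive_rate = 0.10   # detector also flags 10% of human-written texts

p_flag = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_ai_given_flag = sensitivity * prevalence / p_flag
print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")   # ~0.32 with these numbers
```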
