Archive for the ‘information science’ category: Page 3

Jun 15, 2024

Big data and deep learning for RNA biology

Posted by in categories: biotech/medical, information science, robotics/AI

This review spotlights the revolutionary role of deep learning (DL) in expanding our understanding of RNA, a fundamental biomolecule that shapes and regulates diverse phenotypes, including human diseases. Understanding the principles governing the functions of RNA is a key objective of current biology. Recently, big data produced via high-throughput experiments have been used to develop DL models aimed at analyzing and predicting RNA-related biological processes. This review emphasizes the role of public databases in providing these big data for training DL models, and the authors introduce the core DL concepts needed to train models on biological data. By extensively examining DL studies across various fields of RNA biology, the authors suggest how to better leverage DL for revealing novel biological knowledge and demonstrate the potential of DL in deciphering the complex biology of RNA.

This summary was initially drafted using artificial intelligence, then revised and fact-checked by the author.

Jun 15, 2024

AI Models Aid in Predicting Lung Cancer Risk

Posted by in categories: biotech/medical, information science, robotics/AI

Colin Jacobs, PhD, assistant professor in the Department of Medical Imaging at Radboud University Medical Center in Nijmegen, The Netherlands, and Kiran Vaidhya Venkadesh, a second-year PhD candidate with the Diagnostic Image Analysis Group at Radboud University Medical Center discuss their 2021 Radiology study, which used CT images from the National Lung Cancer Screening Trial (NLST) to train a deep learning algorithm to estimate the malignancy risk of lung nodules.

Jun 14, 2024

PBS Space Time

Posted by in categories: cosmology, information science, mathematics, physics, singularity

Viewers like you help make PBS (Thank you 😃). Support your local PBS Member Station here: https://to.pbs.org/DonateSPACE

Be sure to check out the Infinite Series episodes "Singularities Explained" and "How I Learned to Stop Worrying and Divide by Zero."


Jun 13, 2024

Researchers ask industry to develop signal processing algorithms for ship-tracking over-the-horizon radar

Posted by in categories: information science, transportation

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have issued a solicitation (DARPA-PA-23-03-11) for the Defense Applications of Innovative Remote Sensing (DAIRS) project.

Primary emphasis will be on the high-frequency (HF) band, nominally 4 to 15 MHz. Key applications in this frequency band are sky-wave over-the-horizon radar (SWOTHR) for aircraft, ship, and boat tracking; oceanographic SWOTHR; and sounding for ionospheric characterization.

Jun 12, 2024

America is the undisputed world leader in quantum computing even though China spends 8x more on the technology–but an own goal could soon erode U.S. dominance

Posted by in categories: business, cybercrime/malcode, economics, finance, government, information science, quantum physics, robotics/AI

When it comes to quantum computing, that chilling effect on research and development would enormously jeopardize U.S. national security. Our projects received ample funding from defense and intelligence agencies for good reason. Quantum computing may soon become the gold standard technology for codebreaking and defending large computer networks against cyberattacks.

Adopting the proposed march-in framework would also have major implications for our future economic stability. While still a nascent technology today, quantum computing’s ability to rapidly process huge volumes of data is set to revolutionize business in the coming decades. It may be the only way to capture the complexity needed for future AI and machine learning in, say, self-driving vehicles. It may enable companies to hone their supply chains and other logistical operations, such as manufacturing, with unprecedented precision. It may also transform finance by allowing portfolio managers to create new, superior investment algorithms and strategies.

Given the technology’s immense potential, it’s no mystery why China committed what is believed to be more than $15 billion in 2022 to develop its quantum computing capacity, more than double the budget for quantum computing of EU countries and eight times what the U.S. government plans to spend.

Jun 12, 2024

New algorithm discovers language just by watching videos

Posted by in categories: information science, robotics/AI

Mark Hamilton, an MIT Ph.D. student in electrical engineering and computer science and affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.”

Jun 12, 2024

What using artificial intelligence to help monitor surgery can teach us

Posted by in categories: biotech/medical, information science, robotics/AI

1. Privacy is important, but not always guaranteed. Grantcharov realized very quickly that the only way to get surgeons to use the black box was to make them feel protected from possible repercussions. He has designed the system to record actions but hide the identities of both patients and staff, even deleting all recordings within 30 days. His idea is that no individual should be punished for making a mistake.

The black boxes render each person in the recording anonymous; an algorithm distorts people’s voices and blurs out their faces, transforming them into shadowy, noir-like figures. So even if you know what happened, you can’t use it against an individual.

But this process is not perfect. Before 30-day-old recordings are automatically deleted, hospital administrators can still see the operating room number, the time of the operation, and the patient’s medical record number, so even if personnel are technically de-identified, they aren’t truly anonymous. The result is a sense that “Big Brother is watching,” says Christopher Mantyh, vice chair of clinical operations at Duke University Hospital, which has black boxes in seven operating rooms.
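The de-identification step described above (blur faces, distort voices, delete recordings after 30 days) can be sketched in a few lines. This is a toy illustration and not Grantcharov's actual pipeline: `blur_regions` pixelates caller-supplied bounding boxes (a real system would first run a face detector to find them), and `expired` implements the 30-day retention rule.

```python
import numpy as np
from datetime import datetime, timedelta

def blur_regions(frame, boxes, k=8):
    """Pixelate each (row, col, height, width) box with a crude k-by-k
    block filter. A stand-in for the face-blurring step; the voice
    distortion would be an analogous transform on the audio track."""
    out = frame.astype(float).copy()
    for r, c, h, w in boxes:
        patch = out[r:r + h, c:c + w]
        # downsample then upsample: a cheap, irreversible pixelation
        small = patch[::k, ::k]
        out[r:r + h, c:c + w] = np.repeat(
            np.repeat(small, k, axis=0), k, axis=1)[:h, :w]
    return out

def expired(recorded_at, now, retention_days=30):
    """Recordings older than the retention window must be deleted."""
    return now - recorded_at > timedelta(days=retention_days)
```

Note that pixelation inside the box destroys detail while pixels outside the box are untouched, which mirrors the article's point: the people are obscured, but surrounding metadata (room, time) can still identify them indirectly.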

Jun 11, 2024

A chain of copper and carbon atoms may be the thinnest metallic wire

Posted by in categories: computing, information science, nanotechnology, particle physics

While carbon nanotubes are the materials that have received most of the attention so far, they have proved very difficult to manufacture and control, so scientists are eager to find other compounds that could be used to create nanowires and nanotubes with equally interesting properties, but easier to handle.

So, Chiara Cignarella, Davide Campi and Nicola Marzari set out to parse known three-dimensional crystals, looking for those that, based on their structural properties, look like they could be easily “exfoliated,” essentially peeling away from them a stable 1-D structure. The same method has been successfully used in the past to study 2D materials, but this is the first application to their 1-D counterparts.

The researchers started from a collection of over 780,000 crystals, taken from various databases found in the literature and held together by van der Waals forces, the sort of weak interactions that happen when atoms are close enough for their electrons to overlap. Then they applied an algorithm that considered the spatial organization of their atoms looking for the ones that incorporated wire-like structures, and calculated how much energy would be necessary to separate that 1-D structure from the rest of the crystal.
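The screening loop described above, scanning crystals for substructures that look one-dimensional and then costing out their separation, can be illustrated with a toy geometric test. The PCA dominance heuristic, the threshold value, and the energy bookkeeping below are illustrative assumptions, not the authors' actual algorithm:

```python
import numpy as np

def is_wire_like(coords, dominance=10.0):
    """Heuristic 1-D test: treat a cluster of atomic positions as
    wire-like when the variance along its main axis dominates the
    other two principal axes."""
    centered = coords - coords.mean(axis=0)
    # principal-axis variances = eigenvalues of the covariance matrix
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    return evals[0] > dominance * max(evals[1], 1e-12)

def exfoliation_energy(e_crystal, e_wire, e_rest):
    """Toy bookkeeping for the energy needed to peel the 1-D structure
    out of the bulk: energy of the fragments minus the intact crystal."""
    return (e_wire + e_rest) - e_crystal
```

A straight chain of atoms passes the test while a compact 3-D cluster fails it; candidates that pass and have a small exfoliation energy would be kept for further study.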

Jun 10, 2024

ATLAS chases long-lived particles with the Higgs boson

Posted by in categories: cosmology, information science, particle physics, robotics/AI

The Higgs boson has an extremely short lifespan, living for about 10⁻²² seconds before decaying into other particles. For comparison, in that brief time light can only travel about the width of a small atomic nucleus. Scientists study the Higgs boson by detecting its decay products in particle collisions at the Large Hadron Collider. But what if the Higgs boson could also decay into unexpected new particles that are long-lived? What if these particles can travel a few centimeters through the detector before they decay? These long-lived particles (LLPs) could shed light on some of the universe’s biggest mysteries, such as the reason matter prevailed over antimatter in the early universe and the nature of dark matter. Searching for LLPs is extremely challenging because they rarely interact with matter, making them difficult to observe in a particle detector. However, their unusual signatures provide exciting prospects for discovery. Unlike particles that leave a continuous track, LLPs produce noticeable displacements between their production and decay points within the detector. Identifying such a signature requires dedicated algorithms. In a new study submitted to Physical Review Letters, ATLAS scientists used a new algorithm to search for LLPs produced in the decay of Higgs bosons.

Boosting sensitivity with a new algorithm

Figure 1: A comparison of the radial distributions of reconstructed displaced vertices in a simulated long-lived particle (LLP) sample using the legacy and new (updated) track reconstruction configurations. The circular markers represent reconstructed vertices matched to LLP decay vertices, and the dashed lines represent reconstructed vertices from background (non-LLP) decay vertices. (Image: ATLAS Collaboration/CERN)

Despite being critical to LLP searches, dedicated reconstruction algorithms were previously so resource-intensive that they could only be applied to less than 10% of all recorded ATLAS data. Recently, however, ATLAS scientists implemented a new “Large-Radius Tracking” (LRT) algorithm, which significantly speeds up the reconstruction of charged-particle trajectories in the ATLAS Inner Detector that do not point back to the primary proton-proton collision point, while drastically reducing backgrounds and random combinations of detector signals. The LRT algorithm is executed after the primary tracking iteration, using exclusively the detector hits (energy deposits from charged particles recorded in individual detector elements) not already assigned to primary tracks. As a result, ATLAS saw an enormous increase in the efficiency of identifying LLP decays (see Figure 1). The new algorithm also improved CPU processing time more than tenfold compared to the legacy implementation, and disk-space usage per event was reduced by more than a factor of 50. These improvements enabled physicists to fully integrate the LRT algorithm into the standard ATLAS event reconstruction chain. Now, every recorded collision event can be scrutinized for the presence of new LLPs, greatly enhancing the discovery potential of such signatures.

Physicists are searching for Higgs bosons decaying into new long-lived particles, which may leave a “displaced” signature in the ATLAS detector.

Exploring the dark with the Higgs boson

Figure 2: Observed 95% confidence limits on the decay of the Higgs boson to a pair of long-lived s particles that decay back to Standard-Model particles, shown as a function of the mean proper decay length of the long-lived particle. The observed limits for the Higgs Portal model from the previous ATLAS search are shown with dotted lines. (Image: ATLAS Collaboration/CERN)

In their new result, ATLAS scientists employed the LRT algorithm to search for LLPs that decay hadronically, leaving a distinct signature of one or more hadronic “jets” of particles originating at a position significantly displaced from the proton-proton collision point (a displaced vertex). Physicists also focused on the Higgs “portal” model, in which the Higgs boson mediates interactions with dark-matter particles through its coupling to a neutral boson s, resulting in exotic decays of the Higgs boson to a pair of long-lived s particles that decay into Standard-Model particles. The ATLAS team studied collision events with unique characteristics consistent with the production of the Higgs boson. The background processes that mimic the LLP signature are complex and challenging to model. To achieve good discrimination between signal and background processes, ATLAS physicists used a machine-learning algorithm trained to isolate events with jets arising from LLP decays. Complementary to this, a dedicated displaced-vertex reconstruction algorithm was used to pinpoint the origin of hadronic jets from LLP decays. This new search did not uncover any events featuring Higgs-boson decays to LLPs. It improves bounds on Higgs-boson decays to LLPs by a factor of 10 to 40 compared to the previous search using the exact same dataset (see Figure 2). For the first time at the LHC, bounds on exotic decays of the Higgs boson for low LLP masses (less than 16 GeV) have surpassed results from direct searches for exotic Higgs-boson decays to undetected states.

About the event display: A 13 TeV collision event recorded by the ATLAS experiment containing two decay vertices (blue circles) significantly displaced from the beam line, alongside “prompt,” non-displaced decay vertices (pink circles). The event characteristics are compatible with a Higgs boson produced in association with a Z boson (decaying to two electrons, indicated by green towers) and decaying into two LLPs (each decaying into two b-quarks). Tracks are shown in yellow, and jets are indicated by cones. The green and yellow blocks correspond to energy deposits in the electromagnetic and hadronic calorimeters, respectively. (Image: ATLAS Collaboration/CERN)

Learn more:
Search for light long-lived particles in proton-proton collisions at 13 TeV using displaced vertices in the ATLAS inner detector (submitted to PRL, arXiv:2403.15332)
Performance of the reconstruction of large impact parameter tracks in the inner detector of ATLAS (Eur. Phys. J. C 83 (2023) 1081, arXiv:2304.12867)
Search for exotic decays of the Higgs boson into long-lived particles in proton-proton collisions at 13 TeV using displaced vertices in the ATLAS inner detector (JHEP 11 (2021) 229, arXiv:2107.06092)
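The two-pass idea behind Large-Radius Tracking, running standard tracking first and then re-running on the leftover hits with looser pointing requirements, can be sketched with a toy 2-D model. The straight-line "tracks" and the impact-parameter cut below are illustrative stand-ins for the real ATLAS reconstruction, not its actual implementation:

```python
import numpy as np

def impact_parameter(hits):
    """Fit a straight line through 2-D hits and return its closest
    distance to the origin (standing in for the collision point)."""
    x, y = hits[:, 0], hits[:, 1]
    slope, intercept = np.polyfit(x, y, 1)
    # distance from (0, 0) to the line y = slope*x + intercept
    return abs(intercept) / np.hypot(slope, 1.0)

def two_pass_tracking(track_candidates, d0_max=1.0):
    """Toy LRT sketch: the primary pass accepts 'prompt' candidates that
    point back near the origin; the large-radius pass then runs only on
    the leftovers, which is what keeps the second pass cheap."""
    prompt, leftovers = [], []
    for hits in track_candidates:
        (prompt if impact_parameter(hits) <= d0_max else leftovers).append(hits)
    return prompt, leftovers
```

The key efficiency point from the article is captured by the structure: the expensive displaced-track pass only ever sees hits the primary pass did not claim.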

Jun 9, 2024

The Missing Piece: Combining Foundation Models and Open-Endedness for Artificial Superhuman Intelligence (ASI)

Posted by in categories: information science, robotics/AI

Recent advances in artificial intelligence, primarily driven by foundation models, have enabled impressive progress. However, achieving artificial general intelligence, which involves reaching human-level performance across various tasks, remains a significant challenge. A critical missing component is a formal description of what it would take for an autonomous system to self-improve towards increasingly creative and diverse discoveries without end, a “Cambrian explosion” of emergent capabilities, behaviors, and artifacts; the creation of open-ended, ever-self-improving AI remains elusive. This open-ended invention is how humans and society accumulate new knowledge and technology, making it essential for artificial superhuman intelligence.

DeepMind researchers propose a concrete formal definition of open-endedness in AI systems from the perspective of novelty and learnability. They illustrate a path towards achieving artificial superhuman intelligence (ASI) by developing open-ended systems built upon foundation models. These open-ended systems would be capable of making robust, relevant discoveries that are understandable and beneficial to humans. The researchers argue that such open-endedness, enabled by the combination of foundation models and open-ended algorithms, is an essential property for any ASI system to continuously expand its capabilities and knowledge in a way that can be utilized by humanity.

The researchers provide a formal definition of open-endedness from the perspective of an observer. An open-ended system produces a sequence of artifacts that are both novel and learnable. Novelty is defined as artifacts becoming increasingly unpredictable to the observer’s model over time. Learnability requires that conditioning on a longer history of past artifacts makes future artifacts more predictable. The observer uses a statistical model to predict future artifacts based on the history, judging the quality of predictions using a loss metric. Interestingness is represented by the observer’s choice of loss function, capturing which features they find useful to learn about. This formal definition quantifies the key intuition that an open-ended system endlessly generates artifacts that are both novel and meaningful to the observer.
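The novelty/learnability definition above lends itself to a small numerical sketch. Here the observer's statistical model is a simple linear extrapolator with squared-error loss, an illustrative choice rather than DeepMind's formalism: novelty is measured as rising loss under a fixed-length history, and learnability as falling loss when the conditioning history grows.

```python
import numpy as np

def observer_loss(history, target):
    """Observer's model: fit a line to the conditioning history and
    extrapolate one step; score the prediction with squared error
    (the observer's chosen loss function)."""
    if len(history) < 2:
        pred = history[-1]  # too little context: persistence forecast
    else:
        xs = np.arange(len(history))
        slope, intercept = np.polyfit(xs, history, 1)
        pred = slope * len(history) + intercept
    return float((target - pred) ** 2)

def novelty_curve(artifacts, h):
    """Loss over time with a fixed-length history window: a novel
    stream makes a bounded observer increasingly wrong."""
    return [observer_loss(artifacts[t - h:t], artifacts[t])
            for t in range(h, len(artifacts))]

def learnability_curve(artifacts, t):
    """Loss at a fixed time as the history grows: a learnable stream
    becomes more predictable with more context."""
    return [observer_loss(artifacts[t - h:t], artifacts[t])
            for h in range(1, t + 1)]
```

For example, an exponentially growing stream is novel to this observer (its fixed-window loss keeps climbing), while a linear stream is learnable (two or more history points make the next artifact exactly predictable); a system is open-ended relative to the observer only when its artifacts exhibit both properties at once.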
