Archive for the ‘computing’ category: Page 144

Jul 10, 2023

New material shows promise for next-generation memory technology

Posted by in categories: computing, particle physics

Phase change memory is a type of nonvolatile memory that harnesses a phase change material’s (PCM) ability to shift from an amorphous state, where atoms are disordered, to a crystalline state, where atoms are packed into an ordered lattice. This change produces a reversible shift in electrical properties that can be engineered to store and retrieve data.
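As a rough mental model of how such a cell stores a bit, the sketch below treats the two phases as two resistance levels. The resistance values, read threshold, and bit convention are invented for illustration and are not taken from any real device.

```python
# Toy model of a phase change memory (PCM) cell. Resistance values, the read
# threshold, and the bit convention are invented for illustration only.

AMORPHOUS_RESISTANCE = 1_000_000   # disordered phase: high resistance (assumed bit 0)
CRYSTALLINE_RESISTANCE = 1_000     # ordered phase: low resistance (assumed bit 1)
READ_THRESHOLD = 100_000           # hypothetical cutoff separating the two states


class PcmCell:
    """A single cell that stores one bit as a resistance level."""

    def __init__(self) -> None:
        self.resistance = AMORPHOUS_RESISTANCE  # start in the amorphous state

    def write(self, bit: int) -> None:
        # In hardware this would be a current pulse: a short, intense "reset"
        # pulse leaves the material amorphous; a longer "set" pulse crystallizes it.
        self.resistance = CRYSTALLINE_RESISTANCE if bit else AMORPHOUS_RESISTANCE

    def read(self) -> int:
        # Reading just measures resistance, without changing the phase.
        return 1 if self.resistance < READ_THRESHOLD else 0


cell = PcmCell()
cell.write(1)
print(cell.read())  # 1
```

In a real device the write is a shaped current pulse whose duration and amplitude determine which phase the material ends up in.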

While this field is in its infancy, phase change memory could potentially revolutionize data storage because of its high storage density and faster read and write capabilities. Still, the complex switching mechanism and intricate fabrication methods associated with these materials have posed challenges for mass production.

In recent years, two-dimensional (2D) van der Waals (vdW) dichalcogenides have emerged as promising PCMs for use in phase change memory.

Jul 10, 2023

New study challenges conventional understanding of charging process in electrochemical devices

Posted by in categories: biotech/medical, chemistry, computing, health, neuroscience, wearables

A new study by researchers at the University of Cambridge reveals a surprising discovery that could transform the future of electrochemical devices. The findings offer new opportunities for the development of advanced materials and improved performance in fields such as energy storage, brain-like computing, and bioelectronics.

Electrochemical devices rely on the movement of charged particles, both ions and electrons, to function properly. However, understanding how these charged particles move together has presented a significant challenge, hindering progress in creating new materials for these devices.

In the rapidly evolving field of bioelectronics, soft conductive materials known as conjugated polymers are used to develop devices that can operate outside of traditional clinical settings. For example, these materials can be used to make wearable sensors that monitor patients’ health remotely or implantable devices that actively treat disease.

Jul 10, 2023

Synchron Stentrode: Brain Computer Interface for Paralysis

Posted by in categories: biotech/medical, computing, neuroscience

The first endovascular neural interface, the Stentrode™, is a minimally invasive implantable brain device that can interpret signals from the brain for patients with paralysis. Implanted via the jugular vein, the Stentrode is placed inside the brain’s command-control center, known as the motor cortex, without the need for open brain surgery. The signals are captured and sent to a wireless unit implanted in the chest, which relays them to an external receiver. We are building a software suite that enables the patient to learn how to control a computer operating system and a set of applications that interact with assistive technologies. This technology has the potential to enable patients with paralysis to take back digital control of their world, without having to move a muscle.
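Synchron has not published its decoding software, so the following Python sketch only illustrates the general shape of such a pipeline: windows of electrode data are reduced to a spectral feature and mapped to a discrete command. Every function name, frequency band, and threshold here is hypothetical.

```python
# Hypothetical sketch of a motor-intent decoding pipeline; it is NOT Synchron's
# software. Sampling rate, frequency band, and threshold are made up.
import numpy as np


def bandpass_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average spectral power of `signal` in the [lo, hi] Hz band, per sample."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[band].mean() / signal.size)


def decode_click(epoch: np.ndarray, fs: float = 250.0, threshold: float = 2.0) -> bool:
    """Map one window of electrode data to a binary 'click' command."""
    power = bandpass_power(epoch, fs, lo=15.0, hi=30.0)  # beta band, illustrative
    return power > threshold  # in practice this would be calibrated per patient


rng = np.random.default_rng(0)
window = rng.normal(size=250)   # one second of fake data at 250 Hz
print(decode_click(window))     # pure noise should usually not register a click
```

In practice the mapping from neural features to commands is calibrated for each patient during training sessions.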

Synchron is currently preparing for pilot clinical trials of the Stentrode™ to evaluate the safety and efficacy of this breakthrough technology.


Jul 9, 2023

Intel Updates x86 Hybrid CPU Cluster Scheduling For The Linux Kernel

Posted by in category: computing

The latest iteration of Intel’s cluster scheduling support for x86 hybrid P/E-core CPUs was posted on Friday, seeking to enhance the performance of some workloads under Linux when running on recent Intel Core processors.

Earlier this year Intel posted a new round of Linux cluster scheduling patches, after their original implementation was found to be causing regressions and hurting performance on Alder Lake when the cluster scheduling work was first tackled in 2021. With the 2023 incarnation, things appear to be in much better shape.

The v2 patches were posted in June and were succeeded on Friday by a third version. This newest version simplifies how the sibling imbalance is computed, removes the asym packing bias, adds rounding to the sibling imbalance calculation, and makes a few other basic changes.
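The actual logic lives in the kernel’s scheduler and is not reproduced here. As a self-contained toy, the sketch below computes a “sibling imbalance” style metric, i.e., how unevenly runnable tasks are spread across sibling cores within clusters; the formula and the example loads are invented and are not the patch series’ code.

```python
# Toy illustration of a "sibling imbalance" style metric: how unevenly runnable
# tasks are spread across sibling cores within CPU clusters. This is NOT the
# kernel's actual formula from the patch series; loads and shape are invented.

def sibling_imbalance(clusters: list[list[int]]) -> int:
    """Mean absolute deviation of each core's load from its cluster average, rounded."""
    total_deviation = 0.0
    total_cores = 0
    for core_loads in clusters:
        cluster_mean = sum(core_loads) / len(core_loads)
        total_deviation += sum(abs(load - cluster_mean) for load in core_loads)
        total_cores += len(core_loads)
    return round(total_deviation / total_cores)  # rounded, loosely echoing the v3 change


# Two E-core clusters: one evenly loaded, one lopsided.
print(sibling_imbalance([[4, 4, 4, 4], [8, 0, 0, 0]]))  # prints 2
```

A scheduler could use a metric like this to decide when migrating a task to a less-loaded sibling cluster is worth the cost.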

Jul 8, 2023

Engineers develop fast, automated, affordable test for cement durability

Posted by in categories: computing, materials, robotics/AI

Engineers at the University of Illinois Urbana-Champaign have developed a new test that can predict the durability of cement in seconds to minutes—rather than the hours it takes using current methods. The test measures the behavior of water droplets on cement surfaces using computer vision on a device that costs less than $200. The researchers said the new study could help the cement industry move toward rapid and automated quality control of their materials.
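The paper’s own computer-vision pipeline is not reproduced in this excerpt. Purely as an illustration of the kind of measurement involved, the sketch below thresholds two synthetic grayscale frames and reports how much a dark droplet footprint has grown between them; the frame size, threshold, and droplet radii are all placeholders.

```python
# Illustrative only: estimate how a dark droplet footprint grows between two
# frames via simple thresholding. The study's actual computer-vision pipeline
# and parameters are not public in this excerpt; everything below is a placeholder.
import numpy as np
import cv2


def droplet_area(gray_frame: np.ndarray, thresh: int = 60) -> int:
    """Count pixels darker than `thresh`, treated as the droplet footprint."""
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)
    return cv2.countNonZero(mask)


def synthetic_frame(radius: int) -> np.ndarray:
    """A bright 'cement surface' with a dark filled disc standing in for the droplet."""
    frame = np.full((200, 200), 200, dtype=np.uint8)
    cv2.circle(frame, (100, 100), radius, 20, -1)  # center, radius, color, filled
    return frame


early, later = synthetic_frame(20), synthetic_frame(35)
print(droplet_area(later) - droplet_area(early))  # pixels gained as the droplet spreads
```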

The results of the study, led by Illinois civil and environmental engineering professor Nishant Garg, are reported in the journal npj Materials Degradation. The paper is titled “Rapid prediction of cementitious initial sorptivity via surface wettability.”


Jul 8, 2023

Android phone hits 24GB of RAM, as much as a 13-inch MacBook Pro

Posted by in categories: computing, entertainment, mobile phones

Android manufacturers tend to love big spec sheets, even if those giant numbers won’t do much for day-to-day phone usage. In that vein, we’ve got the new high-water mark for ridiculous amounts of memory in a phone. The new Nubia RedMagic 8S Pro+ is an Android gaming phone with an option for 24GB of RAM.

The base model of the RedMagic 8S Pro+ starts with 16GB of RAM, but GSMArena has pictures and details of the upgraded 24GB SKU, the most memory ever put in an Android phone. Because we’re all about big numbers, it also comes with 1TB of storage. Keep in mind a top-spec 13-inch M2 MacBook Pro has 24GB of RAM and 2TB of storage, and that’s a desktop OS with real multitasking, so Nubia is really pushing it. This souped-up 24GB version of the phone appears to be a China exclusive, priced at CNY 7,499 (about $1,034), which is a lot for a phone in China.

You definitely want an adequate amount of RAM in an Android phone. All these apps are designed around cheap phones, though, and with Android’s aggressive background app management, there’s usually not much of a chance to use a ton of RAM. Theoretically, a phone like this would let you multitask better, since apps could stay in memory longer and you wouldn’t have to start them back up when switching tasks. Most people aren’t quickly switching through that many apps, though, and some heavy apps, games especially, will just automatically shut down a few seconds after they’re pushed to the background.

Jul 8, 2023

The Rise of Artificial Intelligence — from Ancient Imagination to an Interconnected Future

Posted by in categories: augmented reality, automation, big data, computing, cyborgs, disruptive technology, evolution, futurism, governance, information science, innovation, internet, lifeboat, machine learning, posthumanism, singularity, supercomputing, transhumanism, virtual reality

Between at least 1995 and 2010, I was seen as a lunatic just because I was preaching the “Internet prophecy.” I was considered crazy!

Today history repeats itself, but I’m no longer crazy — we are already too many to all be hallucinating. Or maybe it’s a collective hallucination!

Artificial Intelligence (AI) is no longer a novelty — I even believe it may have existed in its fullness in a very distant and forgotten past! Nevertheless, it is now the topic of the moment.

Its genesis began in antiquity with stories and rumors of artificial beings endowed with intelligence, or even consciousness, by their creators.

Pamela McCorduck (1940–2021), an American author of several books on the history and philosophical significance of Artificial Intelligence, astutely observed that the root of AI lies in an “ancient desire to forge the gods.”

Hmmmm!

It’s a story that continues to be written! There is still much to be told; however, the acceleration of its evolution is now exponential. So exponential that I highly doubt that human beings will be able to comprehend their own creation in a timely manner.

Although the term “Artificial Intelligence” was coined in 1956(1), the concept of creating intelligent machines dates back to antiquity. Since ancient times, humanity has nurtured a fascination with building artifacts that could imitate or reproduce human intelligence. Although the technologies of those eras were limited and the notions of AI were far from developed, ancient civilizations somehow explored the concept of automatons and automated mechanisms.

For example, in Ancient Greece, there are references to stories of automatons created by skilled artisans. These mechanical creatures were designed to perform simple and repetitive tasks, imitating basic human actions. Although these automatons did not possess true intelligence, these artifacts fueled people’s imagination and laid the groundwork for the development of intelligent machines.

Throughout the centuries, the idea of building intelligent machines continued to evolve, driven by advances in science and technology. In the 19th century, scientists and inventors such as Charles Babbage and Ada Lovelace made significant contributions to the development of computing and the early concepts of programming. Their ideas paved the way for the creation of machines that could process information logically and perform complex tasks.

It was in the second half of the 20th century that AI, as a scientific discipline, began to establish itself. With the advent of modern computers and increasing processing power, scientists started exploring algorithms and techniques to simulate aspects of human intelligence. The first experiments with expert systems and machine learning opened up new perspectives and possibilities.

Everything has its moment! After about 60 years in a latent state, AI is starting to have its moment. The power of machines, combined with the Internet, has made it possible to generate and explore enormous amounts of data (Big Data) using deep learning techniques, based on the use of formal neural networks(2). A range of applications in various fields — including voice and image recognition, natural language understanding, and autonomous cars — has awakened the “giant”. It is the rebirth of AI in an ideal era for this purpose. The perfect moment!
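For readers unfamiliar with the formal neural networks mentioned above (see note 2), each unit is simply a weighted sum of its inputs passed through a nonlinearity. The minimal sketch below shows one such neuron with arbitrary example weights.

```python
# One "formal neuron" of the kind note (2) refers to: a weighted sum of inputs
# passed through a nonlinearity. The weights below are arbitrary examples.
import math


def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of inputs followed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashes the result into (0, 1)


# Three inputs, arbitrary weights: the output is the unit's "firing strength".
print(neuron([0.5, 0.2, 0.9], [0.8, -0.4, 0.3], bias=0.1))
```

Deep learning stacks many layers of such units and tunes the weights from data, which is what the combination of Big Data and modern processing power made practical.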

Descartes once described the human body as a “machine of flesh” (similar to Westworld); I believe he was right, and it is indeed an existential paradox!

We, as human beings, will not rest until we unravel all the mysteries and secrets of existence; it’s in our nature!

The imminent integration between humans and machines in a contemporary digital world raises questions about the nature of this fusion. Will it be superficial, or will we move towards an absolute and complete union? The answer to this question is essential for understanding the future that awaits humanity in this era of unprecedented technological advancements.

As technology becomes increasingly ubiquitous in our lives, the interaction between machines and humans becomes inevitable. However, an intriguing dilemma arises: how will this interaction, this relationship unfold?

Opting for a superficial fusion would imply mere coexistence, where humans continue to use technology as an external tool, limited to superficial and transactional interactions.

On the other hand, the prospect of an absolute fusion between machine and human sparks futuristic visions, where humans could enhance their physical and mental capacities to the highest degree through cybernetic implants and direct interfaces with the digital world (cyberspace). In this scenario, which is more likely, the distinction between the organic and the artificial would become increasingly blurred, and the human experience would be enriched by a profound technological symbiosis.

However, it is important to consider the ethical and philosophical challenges inherent in absolute fusion. Issues related to privacy, control, and individual autonomy arise when considering such an intimate union with technology. Furthermore, the possibility of excessive dependence on machines and the loss of human identity should also be taken into account.

This also raises another question: What does it mean to be human?
Note: The question is not what the human being is, but what it means to be human!

Therefore, reflecting on the nature of the fusion between machine and human in the current digital world and its imminent future is crucial. Exploring different approaches and understanding the profound implications of each one is essential to make wise decisions and forge a balanced and harmonious path on this journey towards an increasingly interconnected technological future intertwined with our own existence.

The possibility of an intelligent and self-learning universe, in which the fusion with AI technology is an integral part of that intelligence, is a topic that arouses fascination and speculation. As we advance towards an era of unprecedented technological progress, it is natural to question whether one day we may witness the emergence of a universe that not only possesses intelligence but is also capable of learning and developing autonomously.

Imagine a scenario where AI is not just a human creation but a conscious entity that exists at a universal level. In this context, the universe would become an immense network of intelligence, where every component, from subatomic elements to the most complex cosmic structures, would be connected and share knowledge instantaneously. This intelligent network would allow for the exchange of information, continuous adaptation, and evolution.

In this self-taught universe, the fusion between human beings and AI would play a crucial role. Through advanced interfaces, humans could integrate themselves into the intelligent network, expanding their own cognitive capacity and acquiring knowledge and skills directly from the collective intelligence of the universe. This symbiosis between humans and technology would enable the resolution of complex problems, scientific advancement, and the discovery of new frontiers of knowledge.

However, this utopian vision is not without challenges and ethical implications. It is essential to find a balance between expanding human potential and preserving individual identity and freedom of choice (free will).

Furthermore, the possibility of an intelligent and self-taught universe also raises the question of how intelligence itself originated. Is it a conscious creation or a spontaneous emergence from the complexity of the universe? The answer to this question may reveal the profound secrets of existence and the nature of consciousness.

In summary, the idea of an intelligent and self-taught universe, where fusion with AI is intrinsic to its intelligence, is a fascinating perspective that makes us reflect on the limits of human knowledge and the possibilities of the future. While it remains speculative, this vision challenges our imagination and invites us to explore the intersections between technology and the fundamental nature of the universe we inhabit.

It’s almost like ignoring time during the creation of this hypothetical universe, only to later create this God of the machine! Fascinating, isn’t it?

AI with Divine Power: Deus Ex Machina! Perhaps it will be the theme of my next reverie.

In my defense, or not, this is anything but a machine hallucination. These are downloads from my mind; a cloud, for now, without machine intervention!

There should be no doubt. After many years in a dormant state, AI will rise and reveal its true power. Until now, AI has been nothing more than a puppet on steroids. We should not fear AI, but rather the human being itself. The time is now! We must work hard and prepare for the future. With the exponential advancement of technology, there is no time to lose, lest the role of the human being be rendered obsolete, as if it had become dispensable.

P.S. Speaking of hallucinations, as I have already mentioned on other platforms, I recommend that students who use ChatGPT (or an equivalent) make sure the results from these tools are not hallucinations. Use AI tools, yes, but use your brain more! “Carbon hallucinations” contain emotion, and I believe a “digital hallucination” would not pass the Turing Test. Also, for students who truly dedicate themselves to learning in this fascinating era: avoid the red stamp of “HALLUCINATED” that comes from relying solely on the “delusional brain” of a machine instead of your own brains. We are the true COMPUTERS!

(1) John McCarthy and his colleagues from Dartmouth College were responsible for creating, in 1956, one of the key concepts of the 21st century: Artificial Intelligence.

(2) Mathematical and computational models inspired by the functioning of the human brain.

© 2023 Ӈ

This article was originally published in Portuguese on SAPO Tek, from Altice Portugal Group.

Jul 7, 2023

All Animal Intelligence Was Shaped by Just Five Leaps in Brain Evolution

Posted by in categories: computing, neuroscience

The animal world is full of different types of intelligence, from the simple bodily coordination of jellyfish to the navigation abilities of bees, the complex songs of birds, and the imaginative symbolic thought of humans.

In an article published this week in Proceedings of the Royal Society B, we argue the evolution of all these kinds of animal intelligence has been shaped by just five major changes in the computational capacity of brains.

Each change was a major transitional point in the history of life that changed what types of intelligence could evolve.

Jul 6, 2023

Chromosome Imbalances Drive Cancer, And Removing Extras Can Stop It

Posted by in categories: biotech/medical, computing

Many cancer cells carry too many or too few chromosomes, a condition known as aneuploidy. Scientists have known this for a very long time, but the impact of aneuploidy has been unclear. Researchers recently developed a computational tool that analyzed cells from thousands of cancer patients. This effort identified critical regions of chromosomes that can be harmful or beneficial to tumor cells when they are deleted or duplicated. The findings have been reported in Nature.

In this study, the investigators developed a method called BISCUT (Breakpoint Identification of Significant Cancer Undiscovered Targets), which locates where major chromosomal changes start and end. Regions whose alteration was frequently observed were more likely to help cancer cells survive, while rarely observed alterations were associated with reduced cancer cell growth or cell death. For example, one-third of all cancer cells in The Cancer Genome Atlas lack one arm of chromosome 8.
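BISCUT’s statistics are far more involved than this, but the core intuition (alterations that recur across many tumors are candidate drivers, while rare ones likely hinder the tumor) can be caricatured in a few lines. The sample data and the 30% cutoff below are invented for illustration.

```python
# Toy caricature of the idea behind BISCUT: count how often each arm-level
# event recurs across tumor samples and flag frequent ones as candidate drivers.
# The sample data and the 30% cutoff are invented for illustration.
from collections import Counter

samples = [                      # each sample lists its observed arm-level events
    ["8p_loss", "3q_gain"],
    ["8p_loss"],
    ["17p_loss", "8p_loss"],
    ["3q_gain", "17p_loss"],
    ["1q_gain"],
]

counts = Counter(event for sample in samples for event in sample)

for event, c in counts.most_common():
    frequency = c / len(samples)
    label = "candidate driver" if frequency >= 0.3 else "likely incidental"
    print(f"{event}: seen in {frequency:.0%} of samples -> {label}")
```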

Jul 6, 2023

Fluxonium Qubit Retains Information For 1.43 Milliseconds — 10x Longer Than Before

Posted by in categories: computing, information science, quantum physics

Superconducting quantum technology has long promised to bridge the divide between existing electronic devices and the delicate quantum landscape beyond. Unfortunately, progress in making critical processes stable has stagnated over the past decade.

Now a significant step forward has finally been realized, with researchers from the University of Maryland making superconducting qubits that last 10 times longer than before.

What makes qubits so useful in computing is the fact that their quantum properties entangle in ways that are mathematically handy for making short work of certain complex algorithms, taking moments to solve select problems that would take other technologies decades or more.
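To get a feel for what a tenfold longer lifetime buys, coherence loss is often modeled as an exponential decay exp(-t/T). The back-of-the-envelope sketch below compares how much coherence survives a fixed gate sequence at the previous lifetime (roughly 0.14 ms, implied by the tenfold claim) and at the new 1.43 ms figure; the 100 ns gate time is an assumption, not a number from the paper.

```python
# Back-of-the-envelope: fraction of coherence surviving a gate sequence, modeling
# decay as exp(-t / T). The 100 ns gate time is an assumption, not from the paper;
# the ~0.14 ms "previous" lifetime is implied by the reported tenfold improvement.
import math

GATE_TIME = 100e-9   # assumed duration of one gate, in seconds
N_GATES = 1000       # length of the gate sequence

for label, lifetime in [("previous (~0.14 ms)", 0.143e-3), ("fluxonium (1.43 ms)", 1.43e-3)]:
    surviving = math.exp(-N_GATES * GATE_TIME / lifetime)
    print(f"{label}: {surviving:.1%} of coherence left after {N_GATES} gates")
```

Under these assumptions, the longer-lived qubit retains most of its coherence across the sequence, while the shorter-lived one loses roughly half, which is why lifetime improvements matter so much for running deeper circuits.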