
Likewise, the company behind an app that can recommend your next TV binge, movie, podcast or book, is out today with its own entertainment-focused AI companion, Pix. Built using a combination of Likewise’s own customer data and technology from partner OpenAI, Pix can make entertainment recommendations and answer other questions via text message or email, through the Pix mobile app or website, or even by voice, using the remote with Pix’s TV app.

Founded in 2017 by former Microsoft communications chief Larry Cohen with financial backing from Bill Gates, the recommendations startup aims to offer an easy way for people to discover new TV shows, movies, books, podcasts and more, as well as follow other users and share lists of their favorites. While recommendations today are often baked into the streaming services and apps we use to play our entertainment, Likewise has nonetheless built a registered user base of more than 6 million, with over 2 million monthly active users.

To build Pix, the company combined around 600 million consumer data points and its machine learning algorithms with the natural language processing technology of OpenAI’s GPT-3.5 and GPT-4. The chatbot learns the preferences of the individual user and then provides them with personalized recommendations, much as Likewise itself does. In addition, the bot reaches out to users when new content becomes available that matches their interests.

Without full fault tolerance, quantum computers will never practically get past roughly 100 qubits; full fault tolerance, once achieved, opens up the possibility of billions of qubits and beyond. In a Wright brothers Kitty Hawk moment for quantum computing, a fully fault-tolerant algorithm has now been executed on real qubits. It involved only three qubits, but this had never been done on real qubits before.

This is the start of the fully fault-tolerant age of quantum computers. For quantum computing to deliver its promised disruption, full fault tolerance on real qubits was the essential milestone.
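The article does not say which algorithm was run, but the simplest building block of quantum error correction, the three-qubit bit-flip repetition code, shows why encoding one logical qubit into three physical ones helps. Below is a classical Monte Carlo sketch of majority-vote decoding (an illustration of the principle, not a simulation of the experiment described):

```python
import random

def logical_error_rate(p, trials=100_000, seed=0):
    """Estimate the logical error rate of a 3-bit repetition code
    under independent bit-flip noise of probability p per bit."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # Encode logical 0 as 000; flip each bit independently with prob p.
        flips = sum(rng.random() < p for _ in range(3))
        # Majority-vote decoding fails only when 2 or more bits flipped.
        if flips >= 2:
            errors += 1
    return errors / trials
```

With p = 0.1 the unprotected error rate is 0.10, while the encoded rate is roughly 3p² − 2p³ ≈ 0.028: redundancy plus correction suppresses errors, which is the core idea scaled up by fault-tolerant architectures.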

ROCHESTER, Minn. — A recent study based on real-world community patient data confirms the effectiveness of the Pooled Cohort Equation (PCE), developed by the American Heart Association and the American College of Cardiology in 2013. The PCE is used to estimate a person’s 10-year risk of developing clogged arteries, also known as atherosclerosis, and guide heart attack and stroke prevention efforts. Study findings are published in the Journal of the American College of Cardiology.

The new study highlights to patients and clinicians the continued reliability and effectiveness of the PCE as a tool for assessing cardiovascular risk, regardless of statin use to lower cholesterol.

The PCE serves as a shared decision-making tool that a clinician and patient can use to evaluate the patient’s current status in preventing atherosclerotic cardiovascular disease. The calculator takes inputs in the categories of gender, age, race, total cholesterol, HDL cholesterol, systolic blood pressure, treatment for high blood pressure, diabetes status, and smoking status.
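The published PCE is a Cox proportional-hazards model: each input is (log-)transformed, multiplied by a sex- and race-specific coefficient, summed, and converted to a 10-year risk via a baseline survival term. A sketch of that functional form follows; the coefficient and parameter values shown are made-up demonstration numbers, not the published ones (those appear in the 2013 ACC/AHA guideline):

```python
import math

def pce_risk(values, coefs, mean_term, baseline_survival):
    """Generic risk of the PCE's Cox-model form:
    10-year risk = 1 - S0 ** exp(sum_i beta_i * x_i - mean_term),
    where `values` maps predictor names (e.g. ln(age), smoker flag)
    to numbers and `coefs` maps the same names to coefficients."""
    score = sum(coefs[name] * x for name, x in values.items())
    return 1.0 - baseline_survival ** math.exp(score - mean_term)

# Illustrative, made-up numbers only -- not the published coefficients.
demo_values = {"ln_age": math.log(55), "smoker": 1}
demo_coefs = {"ln_age": 2.0, "smoker": 0.7}
risk = pce_risk(demo_values, demo_coefs,
                mean_term=8.5, baseline_survival=0.95)
```

The structure explains why the tool works across subgroups: the nonlinear pieces live in the baseline survival and mean terms, which are estimated separately for each sex and race group.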

For more information on liver cancer treatment or #YaleMedicine, visit: https://www.yalemedicine.org/stories/artificial-intelligence-liver-cancer.

With liver cancer on the rise (deaths rose 25% between 2006 and 2015, according to the CDC), doctors and researchers at the Yale Cancer Center are highly focused on finding new and better treatment options. A unique collaboration between Yale Medicine physicians and researchers and biomedical engineers from Yale’s School of Engineering uses artificial intelligence (AI) to pinpoint the specific treatment approach for each patient.

First, doctors need to understand as much as possible about a particular patient’s cancer. To this end, medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) are valuable tools for early detection, accurate diagnosis, and effective treatment of liver cancer. For every patient, physicians need to interpret and analyze these images, along with a multitude of other clinical data points, to make the treatment decisions likeliest to lead to a positive outcome. “There’s a lot of data that needs to be considered in terms of making a recommendation on how to manage a patient,” says Jeffrey Pollak, MD, Robert I. White, Jr. Professor of Radiology and Biomedical Imaging. “It can become quite complex.”

To help, researchers are developing AI tools that let doctors tackle that vast amount of data. In this video, Julius Chapiro, MD, PhD, explains how collaboration with biomedical engineers like Lawrence Staib, PhD, facilitated the development of specialized AI algorithms that can sift through patient information, recognize important patterns, and streamline the clinical decision-making process. The ultimate goal of this research is to bridge the gap between complex clinical data and patient care. “It’s an advanced tool, just like all the others in the physician’s toolkit,” says Dr. Staib. “But this one is based on algorithms instead of a stethoscope.”

Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking; for the machine, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems.
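A common way to build such a saliency map, which may differ in detail from Air-Guardian's actual method, is to measure how much a model's output changes as each input pixel is perturbed: pixels with a large effect are where the model is "looking." A minimal finite-difference sketch:

```python
def saliency_map(model, image, eps=1e-4):
    """Finite-difference saliency: perturb each pixel slightly and
    record how much the model's scalar output changes. Large values
    mark the regions the model is attending to. `image` is a 2-D
    list of floats; `model` maps such an image to a single score."""
    base = model(image)
    rows, cols = len(image), len(image[0])
    sal = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            image[r][c] += eps
            sal[r][c] = abs(model(image) - base) / eps
            image[r][c] -= eps  # restore the pixel
    return sal

# Toy model that only looks at the top-left pixel:
toy = lambda img: 3.0 * img[0][0]
sal = saliency_map(toy, [[0.5, 0.5], [0.5, 0.5]])
# sal[0][0] is large; every other entry is ~0.
```

In practice saliency is computed with gradients through the network rather than finite differences, but the interpretation is the same: a heat map of where attention is directed.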

It wouldn’t shock me if all the buzz around searching for the ‘locus of consciousness’ merely fine-tunes our grasp of how the brain is linked to consciousness, without actually revealing where consciousness comes from, because it’s not generated in the brain. Similarly, your smartphone doesn’t create the Internet or a cellular network; it just processes them. Networks of minds are a common occurrence throughout the natural world. What sets humans apart is the impending advent of a cybernetic connectivity explosion that could soon evolve into a form of synthetic telepathy, eventually leading to the rise of a unified, global consciousness: what could be termed the Syntellect Emergence.

#consciousness #phenomenology #cybernetics #cognition #neuroscience


In summary, the study of consciousness could be conceptualized through a variety of lenses: as a series of digital perceptual snapshots, as a cybernetic system with its feedback processes, as a grand theater, or perhaps even as a VIP section in a cosmological establishment of magnificent complexity. Today’s leading theories of consciousness are largely complementary, not mutually exclusive. These multiple perspectives not only contribute to philosophical discourse but also herald the dawn of new exploratory avenues, equally enthralling and challenging, in our understanding of consciousness.

In The Cybernetic Theory of Mind (2022), I expand on existing theories to propose conceptual models such as Noocentrism, Digital Presentism (D-Theory of Time), Experiential Realism, Ontological Holism, Multi-Ego Pantheistic Solipsism, and the Omega Singularity, which treat a non-local consciousness, or Universal Mind, as the substrate of objective reality. In search of God’s equation, we finally look upward for the source. What many religions call “God” is, in this view, an interdimensional being within the nested levels of complexity. Besides setting initial conditions for our universe, God speaks to us in the language of religion, spirituality, synchronicities and transcendental experiences.

One of the most well-established and disruptive uses for a future quantum computer is the ability to crack encryption. A new algorithm could significantly lower the barrier to achieving this.

Despite all the hype around quantum computing, there are still significant question marks around what quantum computers will actually be useful for. There are hopes they could accelerate everything from optimization processes to machine learning, but how much easier and faster they’ll be remains unclear in many cases.

One thing is pretty certain though: A sufficiently powerful quantum computer could render our leading cryptographic schemes worthless. While the mathematical puzzles underpinning them are virtually unsolvable by classical computers, they would be entirely tractable for a large enough quantum computer. That’s a problem because these schemes secure most of our information online.
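A toy illustration of why this matters: RSA's private key can be derived immediately once the public modulus is factored, and factoring is exactly what Shor's algorithm does efficiently on a large quantum computer. The tiny primes below (the standard textbook example) stand in for the hundreds-of-digits primes used in practice:

```python
# Toy RSA with tiny primes -- real keys use primes hundreds of digits long.
# Anyone who can factor n into p and q can compute phi and hence the
# private exponent d; classically that is infeasible at real key sizes,
# but Shor's algorithm on a large quantum computer would do it quickly.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # computable only if you know p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

msg = 65
cipher = pow(msg, e, n)    # encrypt with the public key
plain = pow(cipher, d, n)  # decrypt with the private key
assert plain == msg
```

Post-quantum schemes replace the factoring puzzle with problems (such as those based on lattices) for which no efficient quantum algorithm is known.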

A team led by Northwestern University researchers has developed the first artificial intelligence (AI) that can intelligently design robots from scratch.

To test the new AI, the researchers gave the system a simple prompt: design a robot that can walk across a flat surface. While it took nature billions of years to evolve the first walking species, the new algorithm compressed that process to lightning speed, designing a successfully walking robot in mere seconds.

But the AI program is not just fast. It also runs on a lightweight personal computer and designs wholly novel structures from scratch. This stands in sharp contrast to other AI systems, which often require energy-hungry supercomputers and colossally large datasets. And even after crunching all that data, those systems are tethered to the constraints of human creativity, only mimicking humans’ past works without an ability to generate new ideas.

Study math for long enough and you will likely have cursed Pythagoras’s name, or said “praise be to Pythagoras” if you’re a bit of a fan of triangles.

But while Pythagoras was an important historical figure in the development of mathematics, he did not figure out the equation most associated with him (a² + b² = c²). In fact, there is an ancient Babylonian tablet (with the catchy name IM 67118) which uses the Pythagorean theorem to solve for the length of a diagonal inside a rectangle. The tablet, likely used for teaching, dates from 1770 BCE, centuries before Pythagoras was born in around 570 BCE.
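A commonly cited reading of IM 67118 (assumed here; the article gives no numbers) poses a rectangle with diagonal 1.25 and area 0.75 and asks for the sides. The tablet's method amounts to combining the diagonal relation a² + b² = d² with the area ab = A, the Pythagorean theorem in action:

```python
import math

# Commonly cited reading of IM 67118: diagonal d = 1.25, area A = 0.75.
d, A = 1.25, 0.75
s = math.sqrt(d**2 + 2 * A)  # a + b, since (a+b)^2 = a^2 + b^2 + 2ab
t = math.sqrt(d**2 - 2 * A)  # a - b, since (a-b)^2 = a^2 + b^2 - 2ab
a, b = (s + t) / 2, (s - t) / 2
# Sides come out to a = 1.0 and b = 0.75, and a^2 + b^2 equals d^2.
```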

Another tablet from around 1800–1600 BCE has a square with labeled triangles inside. Translating the markings from base 60 – the counting system used by ancient Babylonians – showed that these ancient mathematicians were aware of the Pythagorean theorem (not called that, of course) as well as other advanced mathematical concepts.
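A tablet often identified with this description is YBC 7289, whose diagonal carries the base-60 digits 1;24,51,10 (an identification and reading assumed here, not stated in the article). Converting them shows how accurate the Babylonian value for the square root of 2 was:

```python
def from_base60(digits):
    """Convert sexagesimal digits [d0, d1, d2, ...] -- the whole part
    first, then successive sixtieths -- to a float."""
    return sum(d / 60**i for i, d in enumerate(digits))

# The diagonal marking commonly read on the YBC 7289 tablet:
approx_sqrt2 = from_base60([1, 24, 51, 10])
# 1 + 24/60 + 51/3600 + 10/216000 = 1.41421296..., within about
# 6e-7 of the true square root of 2 (1.41421356...).
```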

An artificial intelligence platform developed by an Israeli startup can reveal whether a patient is at risk of a heart attack by analyzing their routine chest CT scans.

Results from a new study testing Nanox.AI’s HealthCCSng algorithm on such scans found that 58 percent of patients unknowingly had moderate to severe levels of coronary artery calcium (CAC), or plaque.

CAC is the strongest predictor of future cardiac events, and measuring it typically subjects patients to an additional costly scan that is not normally covered by insurance companies.