Archive for the ‘cybercrime/malcode’ category: Page 47

Mar 28, 2023

Hacking phones remotely without touching via new inaudible ultrasound attack

Posted by in categories: cybercrime/malcode, mobile phones, robotics/AI

The Near-Ultrasound Inaudible Trojan, or NUIT, was developed by a team of researchers from the University of Texas at San Antonio and the University of Colorado Colorado Springs as a technique for covertly issuing malicious commands to voice assistants on smartphones and smart speakers.

If you watch YouTube videos on your smart TV, that television has a speaker, right? According to Guinevere Chen, associate professor and co-author of the NUIT paper, “the sound of NUIT malicious commands will be inaudible, and it can attack your mobile phone as well as communicate with your Google Assistant or Alexa devices. That can also happen in Zoom during meetings: if someone unmutes themselves, they can embed the attack signal to hack your phone, which is placed next to your computer during the meeting.”

Continue reading “Hacking phones remotely without touching via new inaudible ultrasound attack” »
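
As a rough illustration of what “inaudible” means here, the short Python sketch below estimates how much of a recording’s energy sits in the near-ultrasound band where NUIT-style commands would hide. The band edges (16–20 kHz), the 10% threshold, and the clip.wav filename are assumptions for illustration only, not values taken from the researchers’ work.

```python
# Illustrative sketch only: check an audio clip for energy in the near-ultrasound
# band (~16-20 kHz). Band edges, threshold, and file format are assumptions.
import numpy as np
from scipy.io import wavfile


def near_ultrasound_ratio(path, band=(16000.0, 20000.0)):
    rate, samples = wavfile.read(path)      # PCM WAV assumed
    if samples.ndim > 1:                    # mix down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    band_power = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total_power = spectrum.sum() + 1e-12
    return band_power / total_power         # fraction of energy in the band


if __name__ == "__main__":
    ratio = near_ultrasound_ratio("clip.wav")
    if ratio > 0.10:                        # arbitrary illustrative threshold
        print(f"Suspicious near-ultrasound content: {ratio:.1%} of signal energy")
    else:
        print(f"Near-ultrasound energy looks normal: {ratio:.1%}")
```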

Mar 28, 2023

Windows, Ubuntu, and VMware Workstation hacked on last day of Pwn2Own

Posted by in category: cybercrime/malcode

On the third day of the Pwn2Own hacking contest, security researchers were awarded $185,000 after demonstrating 5 zero-day exploits targeting Windows 11, Ubuntu Desktop, and the VMware Workstation virtualization software.

The highlight of the day was the Ubuntu Desktop operating system getting hacked three times by three different teams, although one of those entries was a collision, as the exploit was already known.

The three working Ubuntu zero-days were demoed by Kyle Zeng of ASU SEFCOM (a double-free bug), Mingi Cho of Theori (a use-after-free vulnerability), and Bien Pham (@bienpnn) of Qrious Security.

Mar 26, 2023

Ingenious Photosynthesis “Hack” Paves Way for Renewable Energy Breakthroughs

Posted by in categories: cybercrime/malcode, energy, sustainability

Researchers have ‘hacked’ the earliest stages of photosynthesis, the natural machinery that powers the vast majority of life on Earth, and discovered new ways to extract energy from the process. The finding could lead to new ways of generating clean fuel and renewable energy.

Mar 25, 2023

Gmail and Outlook users given ‘red alert’ over scary AI ‘hiding in your inbox’

Posted by in categories: cybercrime/malcode, robotics/AI

A nefarious use for AI: phishing emails.

Security experts have issued a warning over dangerous phishing emails put together by artificial intelligence.

The scams are convincing and help cybercriminals connect with victims before they attack, according to security site CSO.

Continue reading “Gmail and Outlook users given ‘red alert’ over scary AI ‘hiding in your inbox’” »

Mar 24, 2023

Scientist Reveals How to Escape Our Simulation

Posted by in category: cybercrime/malcode

Hack your way out of the wrong reality.

Mar 21, 2023

Google AI And Microsoft ChatGPT Are Not Our Biggest Security Risks, Warns Chess Legend Kasparov

Posted by in categories: biotech/medical, cybercrime/malcode, internet, robotics/AI, supercomputing

Amid a flurry of Google and Microsoft generative AI releases last week during SXSW, Garry Kasparov, a chess grandmaster, Avast Security Ambassador, and Chairman of the Human Rights Foundation, told me he is less concerned about ChatGPT hacking into home appliances than about users being duped by bad actors.

“People still have the monopoly on evil,” he warned, standing firm on thoughts he shared with me in 2019. Widely considered one of the greatest chess players of all time, Kasparov gained mythic status in the 1990s as world champion when he beat, and then was defeated by, IBM’s Deep Blue supercomputer.


Despite the rapid advancement of generative AI, chess legend Garry Kasparov, now an ambassador for the security firm Avast, explains why he doesn’t fear ChatGPT creating a virus to take down the Internet, but shares the concerns of Gen’s CTO that text-to-video deepfakes could warp our reality.

Continue reading “Google AI And Microsoft ChatGPT Are Not Our Biggest Security Risks, Warns Chess Legend Kasparov” »

Mar 19, 2023

OpenAI CEO cautions AI like ChatGPT could cause disinformation, cyber-attacks

Posted by in categories: cybercrime/malcode, robotics/AI

Society has a limited amount of time “to figure out how to react” and “regulate” AI, says Sam Altman.

OpenAI CEO Sam Altman has cautioned that his company’s artificial intelligence technology, ChatGPT, poses serious risks as it reshapes society.

He emphasized that regulators and society must be involved with the technology, according to an interview broadcast by ABC News on Thursday night.

Continue reading “OpenAI CEO cautions AI like ChatGPT could cause disinformation, cyber-attacks” »

Mar 16, 2023

Cryptojacking Group TeamTNT Suspected of Using Decoy Miner to Conceal Data Exfiltration

Posted by in categories: cryptocurrencies, cybercrime/malcode

The cryptojacking group known as TeamTNT is suspected to be behind a previously undiscovered strain of malware used to mine Monero cryptocurrency on compromised systems.

That’s according to Cado Security, which found the sample after Sysdig detailed a sophisticated attack known as SCARLETEEL that targeted containerized environments to ultimately steal proprietary data and software.

Specifically, the early phase of the attack chain involved the use of a cryptocurrency miner, which the cloud security firm suspects was deployed as a decoy to draw attention away from the data exfiltration.

Mar 15, 2023

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

Posted by in categories: cybercrime/malcode, internet, robotics/AI

As computing power and data grow, autonomous agents are becoming more capable. This makes it all the more important for humans to have some say over the policies agents learn and to verify that those policies align with their goals.

Currently, users either 1) hand-design reward functions for desired behaviors or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to be practical. Reward functions that strike a balance between competing goals are hard to design, and agents are vulnerable to reward hacking. Alternatively, a reward function can be learned from annotated examples, but enormous amounts of labeled data are needed to capture the subtleties of individual users’ tastes and objectives, which is expensive. Furthermore, the reward function must be redesigned, or the dataset re-collected, for a new user population with different goals.

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences, with an interface that is more natural than writing a reward function and a cost-effective way to specify those preferences using only a few examples. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at in-context learning with no or very few training examples. According to the researchers, LLMs are excellent in-context learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
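
The core idea of an LLM acting as a proxy reward function can be sketched roughly as follows: a few user-supplied examples of preferred behavior are packed into a prompt together with a description of the agent’s episode, and the model’s yes/no answer is mapped to a scalar reward. The prompt format, the llm callable, and the binary reward values below are illustrative assumptions, not the authors’ implementation.

```python
# Rough sketch of an LLM-as-proxy-reward-function call (assumptions only, not the
# paper's implementation). `llm` is any text-completion callable, e.g. a thin
# wrapper around whichever LLM API is available.
from typing import Callable, List


def llm_proxy_reward(
    llm: Callable[[str], str],
    preference_examples: List[str],   # few-shot examples of behavior the user likes
    trajectory_summary: str,          # text description of the agent's episode
) -> float:
    prompt = (
        "The user likes the following behaviors:\n"
        + "\n".join(f"- {ex}" for ex in preference_examples)
        + "\n\nDoes the behavior below match the user's preferences? "
          "Answer Yes or No.\n"
        + f"Behavior: {trajectory_summary}\nAnswer:"
    )
    answer = llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0   # binary proxy reward


if __name__ == "__main__":
    fake_llm = lambda prompt: "Yes"   # stand-in; replace with a real completion call
    r = llm_proxy_reward(fake_llm, ["stacks blocks neatly"], "the robot stacked three blocks")
    print("proxy reward:", r)
```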

Mar 13, 2023

AT&T data breach hits nine million customer accounts

Posted by in category: cybercrime/malcode

A third-party vendor hack exposed millions of AT&T customers’ account information, including names, phone numbers, and email addresses.

Page 47 of 212