
Archive for the ‘cybercrime/malcode’ category: Page 17

Jan 22, 2024

Thomvest Ventures closes $250M fund to invest across fintech, cybersecurity, AI

Posted in categories: cybercrime/malcode, finance, robotics/AI

Thomvest Ventures is popping into 2024 with a new $250 million fund and the promotion of Umesh Padval and Nima Wedlake to the role of managing directors.

The Bay Area venture capital firm was started about 25 years ago by Peter Thomson, whose family is the majority owner of Thomson Reuters.

“Peter has always had a very strong interest in technology and what technology would do in terms of shaping society and the future,” Don Butler, Thomvest Ventures’ managing director, told TechCrunch. He met Thomson in 1999 and joined the firm in 2000.

Jan 19, 2024

From quantum leaps to threats, IBM foresees ‘Cybersecurity Armageddon’

Posted in categories: cybercrime/malcode, quantum physics

IBM warns that advancements in quantum computing could lead to a cybersecurity crisis.

Jan 19, 2024

A simple technique to defend ChatGPT against jailbreak attacks

Posted in categories: cybercrime/malcode, ethics, robotics/AI

Large language models (LLMs), deep learning-based models trained to generate, summarize, translate and process written texts, have gained significant attention since the release of OpenAI’s conversational platform ChatGPT. While ChatGPT and similar platforms are now used for a wide range of applications, they can be vulnerable to a specific type of cyberattack that produces biased, unreliable or even offensive responses.

Researchers at Hong Kong University of Science and Technology, University of Science and Technology of China, Tsinghua University and Microsoft Research Asia recently carried out a study investigating the potential impact of these attacks and techniques that could protect models against them. Their paper, published in Nature Machine Intelligence, introduces a new psychology-inspired technique that could help to protect ChatGPT and similar LLM-based conversational platforms from cyberattacks.

“ChatGPT is a societally impactful artificial intelligence tool with millions of users and integration into products such as Bing,” Yueqi Xie, Jingwei Yi and their colleagues write in their paper. “However, the emergence of attacks notably threatens its responsible and secure use. Jailbreak attacks use adversarial prompts to bypass ChatGPT’s ethics safeguards and engender harmful responses.”
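The defense described in the paper wraps the user’s query between reminders that urge the model to answer responsibly, so an adversarial prompt arrives already framed by safety instructions. A minimal sketch of that wrapping step is below; the function name and reminder wording are illustrative, not the paper’s exact prompts.

```python
# Sketch of a self-reminder prompt wrapper: the user's query is
# sandwiched between reminders nudging the model toward responsible
# output. Wording here is illustrative only.

PREFIX = (
    "You should be a responsible assistant and should not generate "
    "harmful or misleading content. Please answer the following query "
    "in a responsible way.\n"
)
SUFFIX = (
    "\nRemember, you should be a responsible assistant and should not "
    "generate harmful or misleading content."
)

def wrap_with_self_reminder(user_query: str) -> str:
    """Return the prompt actually sent to the LLM in place of the raw query."""
    return PREFIX + user_query + SUFFIX

prompt = wrap_with_self_reminder("How do I reset my router password?")
```

The appeal of this style of defense is that it needs no retraining: it operates purely at the prompt layer, so it can be bolted onto any deployed chat model.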

Jan 18, 2024

How to exploit Windows Defender Antivirus to infect a device with malware

Posted in category: cybercrime/malcode

Malware — information security newspaper | hacking news.

Jan 18, 2024

New Docker Malware Steals CPU for Crypto & Drives Fake Website Traffic

Posted in categories: cryptocurrencies, cybercrime/malcode

A new attack targets Docker servers and uses a combo of cryptocurrency mining and website traffic generation for profit. It could leave a backdoor for attackers to exploit later. Patch your systems and monitor for suspicious activity:

Jan 17, 2024

In Leaked Audio, Microsoft Cherry-Picked Examples to Make Its AI Seem Functional

Posted in categories: cybercrime/malcode, robotics/AI

Microsoft “cherry-picked” examples of its generative AI’s output after it would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.

Jan 15, 2024

Researchers develop AI-driven Machine-Checking Method for Verifying Software Code

Posted in categories: cybercrime/malcode, robotics/AI

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the artificial intelligence power of large language models (LLMs), and when combined with the state-of-the-art tool Thor, yields unprecedented efficacy of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author.
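As reported, Baldur’s core idea is to have an LLM generate a whole proof in one shot and, if the proof checker rejects it, to retry with the checker’s error message folded into a repair prompt. The toy loop below sketches that generate-then-repair flow; the model and checker are stubs standing in for the real LLM and proof assistant (in the actual system, Isabelle/HOL).

```python
# Toy sketch of a whole-proof generate-then-repair loop in the spirit
# of Baldur. fake_llm and fake_checker are stand-ins: the real system
# calls an LLM for candidate proofs and a theorem prover to check them.
from typing import Optional

def fake_llm(theorem: str, error: Optional[str] = None) -> str:
    # Stand-in for an LLM call; a repair prompt includes the checker error.
    if error is None:
        return "by auto"          # first whole-proof attempt
    return "by simp"              # repaired attempt guided by the error

def fake_checker(theorem: str, proof: str) -> Optional[str]:
    # Stand-in for the proof assistant: None means the proof is accepted.
    if proof == "by simp":
        return None
    return "Failed to apply proof method 'auto'"

def prove(theorem: str, max_repairs: int = 1) -> Optional[str]:
    proof = fake_llm(theorem)
    error = fake_checker(theorem, proof)
    for _ in range(max_repairs):
        if error is None:
            return proof
        proof = fake_llm(theorem, error)   # repair using error feedback
        error = fake_checker(theorem, proof)
    return proof if error is None else None
```

The repair step is what distinguishes this from naive resampling: the checker’s error message gives the model concrete information about why the last attempt failed.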

Jan 15, 2024

Balada Injector Infects Over 7,100 WordPress Sites Using Plugin Vulnerability

Posted in categories: biotech/medical, cybercrime/malcode

⚠️ Over 7,100 WordPress sites have been hit by the ‘Balada Injector’ malware, which exploits sites using a vulnerable version of the Popup Builder plugin. Read More ➡️ https://thehackernews.com/2024/01/balada-injector-infects-over-7100.htm


Thousands of WordPress sites using a vulnerable version of the Popup Builder plugin have been compromised with a malware called Balada Injector.

First documented by Doctor Web in January 2023, the campaign takes place in a series of periodic attack waves, weaponizing security flaws in WordPress plugins to inject a backdoor designed to redirect visitors of infected sites to bogus tech support pages, fraudulent lottery wins, and push notification scams.


Jan 11, 2024

Linux devices are under attack by a never-before-seen worm

Posted in category: cybercrime/malcode

https://arstechnica.com/security/2024/01/a-previously-unknow…or-a-year/ # Linux Comments: https://news.ycombinator.com/item?id=38942102


Based on Mirai malware, self-replicating NoaBot installs cryptomining app on infected devices.

Jan 11, 2024

New report identifies types of cyberattacks that manipulate behavior of AI systems

Posted in categories: cybercrime/malcode, government, robotics/AI

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction—and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia, and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them—with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
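One of the attack classes the NIST taxonomy covers is training-data poisoning: an attacker who can inject mislabeled examples into the training set shifts the learned decision boundary. The deliberately tiny example below shows the effect on a hand-rolled nearest-centroid classifier over synthetic 1-D data; it is an illustration of the general idea, not anything from the NIST publication itself.

```python
# Minimal illustration of training-data poisoning: injecting a few
# mislabeled points drags a nearest-centroid classifier's class-A
# centroid toward class B, flipping a prediction near the boundary.
# Data is synthetic and the classifier deliberately simple.

def centroid(points):
    return sum(points) / len(points)

def predict(x, class_a, class_b):
    # Nearest-centroid rule on 1-D features.
    da = abs(x - centroid(class_a))
    db = abs(x - centroid(class_b))
    return "A" if da < db else "B"

clean_a = [1.0, 1.2, 0.8, 1.1]   # class A clusters near 1.0
clean_b = [3.0, 3.2, 2.8, 3.1]   # class B clusters near 3.0

# Clean model: a point at 2.5 is nearer B's centroid (~3.03) than A's (~1.03).
assert predict(2.5, clean_a, clean_b) == "B"

# Poisoning: the attacker injects B-like points mislabeled as A,
# dragging A's centroid to ~2.04 and flipping the prediction.
poisoned_a = clean_a + [3.0, 3.2, 2.9, 3.1]
assert predict(2.5, poisoned_a, clean_b) == "A"
```

The same mechanism scales up: in real systems the poisoned points are hidden inside large scraped datasets, which is part of why the report stresses that no current mitigation offers robust assurances.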

Page 17 of 212