Tag
#intel
For years, North Korea has been secretly placing young IT workers inside Western companies. With AI, their schemes are now more devious—and effective—than ever.
France accuses Russia’s APT28 hacking group (Fancy Bear) of targeting French government entities in a cyber espionage campaign.…
WordPress sites are under threat from a deceptive anti-malware plugin. Learn how this malware grants backdoor access, hides…
As the field of artificial intelligence (AI) continues to evolve at a rapid pace, new research has found how techniques that render the Model Context Protocol (MCP) susceptible to prompt injection attacks could be used to develop security tooling or identify malicious tools, according to a new report from Tenable. MCP, launched by Anthropic in November 2024, is a framework designed to connect
Google enhances cybersecurity with Agentic AI, launching Unified Security to fight zero-day exploits, enterprise threats, and credential-based attacks.…
Frankfurt am Main, Germany, 30th April 2025, CyberNewsWire
Meta on Tuesday announced LlamaFirewall, an open-source framework designed to secure artificial intelligence (AI) systems against emerging cyber risks such as prompt injection, jailbreaks, and insecure code, among others. The framework, the company said, incorporates three guardrails, including PromptGuard 2, Agent Alignment Checks, and CodeShield. PromptGuard 2 is designed to detect direct
Popular messaging app WhatsApp on Tuesday unveiled a new technology called Private Processing to enable artificial intelligence (AI) capabilities in a privacy-preserving manner. "Private Processing will allow users to leverage powerful optional AI features – like summarizing unread messages or editing help – while preserving WhatsApp's core privacy promise," the Meta-owned service said in a
WhatsApp's AI tools will use a new “Private Processing” system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats. But experts still see risks.
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety