Latest News
### Impact

Applications that were bootstrapped by the new igniter installer (since AshAuthentication v4.1.0) and that use the magic link strategy _or_ manually revoke tokens are affected: revoked tokens are still allowed to verify as valid. If you did not use the new installer, then you are absolutely not affected. Additionally, unless you have implemented some kind of custom token revocation feature in your application (in which case even cursory testing would have uncovered this issue), you will not be significantly affected. For users of the built-in functionality, the impact is that magic link tokens are reusable until they expire instead of being revoked immediately on first use. Magic link tokens are only valid for 10 minutes, so the surface area for abuse is extremely low here.

### Patches

The flaw is patched in version 4.4.9. Additionally, a compile-time warning with remediation instructions is shown to users who upgrade. 4.4.9 ships with an upgrader, so if you u...
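To make the bug concrete, here is a minimal sketch of the single-use semantics the patch restores. It is written in Python for illustration (AshAuthentication itself is Elixir), and the names `revoked` and `redeem_magic_link` are assumptions, not the library's API: a magic link token's ID must be revoked at first use, so a replay within the 10-minute validity window fails.

```python
# Illustrative sketch only; AshAuthentication is Elixir, and these names
# are hypothetical. The vulnerable behavior is equivalent to skipping the
# `revoked.add(...)` step, leaving the token redeemable until expiry.
revoked: set[str] = set()

def redeem_magic_link(token_id: str, expires_at: float, now: float) -> bool:
    if now > expires_at or token_id in revoked:
        return False          # expired, or already used once
    revoked.add(token_id)     # the step the flaw effectively skipped
    return True
```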
pyca/cryptography's wheels include a statically linked copy of OpenSSL. The versions of OpenSSL included in cryptography 42.0.0-44.0.0 are vulnerable to a security issue. More details about the vulnerability itself can be found in https://openssl-library.org/news/secadv/20250211.txt. If you are building cryptography from source ("sdist"), then you are responsible for upgrading your copy of OpenSSL. Only users installing from wheels built by the cryptography project (i.e., those distributed on PyPI) need to update their cryptography versions.
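A quick way to check whether you are exposed is to look at both the installed cryptography version and the OpenSSL it links. The sketch below uses the backend's `openssl_version_text()` accessor; note that the `hazmat` backend internals are not a stable public API, so treat this as a diagnostic snippet rather than supported tooling.

```python
# Diagnostic sketch: report the installed cryptography version and the
# OpenSSL build it is linked against (for wheels, the bundled copy).
import cryptography
from cryptography.hazmat.backends.openssl.backend import backend

print("cryptography:", cryptography.__version__)
print("linked OpenSSL:", backend.openssl_version_text())
```

If the wheel version falls in the 42.0.0-44.0.0 range, `pip install --upgrade cryptography` pulls a wheel with a patched OpenSSL.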
### Impact

Systems running registry versions newer than `3.0.0-beta.1` with token authentication enabled are affected.

### Patches

Update to at least `v3.0.0-rc.3`.

### Workarounds

There is no way to work around this issue without patching if your system requires token authentication.

### References

The issue lies in how JWK verification is performed. When a JWT contains a JWK header without a certificate chain, the code only checks whether the key ID (`kid`) matches one of the trusted keys, but doesn't verify that the actual key material matches. The problematic flow (see the sketch below):

1. An attacker generates their own key pair.
2. They create a JWT and include their public key in the JWK header.
3. They set the `kid` in the JWK to match one of the trusted keys' IDs (which they could potentially discover).
4. They sign the JWT with their private key.
5. The registry only checks whether the `kid` exists in the trusted keys map, but then uses the attacker's public key from the JWK to verify the signature.
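Here is a Python analogue of that flow using PyJWT (a sketch only; the registry itself is written in Go, and `trusted_keys`, a map from `kid` to trusted public key, is an illustrative assumption rather than the registry's actual code). The vulnerable path trusts the key ID but verifies with the attacker-supplied key; the fix verifies with the trusted key material itself.

```python
# Sketch of the flawed JWK check and its fix, in Python rather than the
# registry's Go. `trusted_keys` maps kid -> trusted RSA public key.
import json
import jwt  # PyJWT
from jwt.algorithms import RSAAlgorithm

def verify_vulnerable(token: str, trusted_keys: dict):
    jwk = jwt.get_unverified_header(token)["jwk"]
    if jwk["kid"] not in trusted_keys:   # only the key *ID* is checked...
        raise ValueError("untrusted kid")
    embedded_key = RSAAlgorithm.from_jwk(json.dumps(jwk))
    # ...but verification uses the attacker-controlled key embedded in
    # the header, so a forged token signed by the attacker passes.
    return jwt.decode(token, key=embedded_key, algorithms=["RS256"])

def verify_fixed(token: str, trusted_keys: dict):
    jwk = jwt.get_unverified_header(token)["jwk"]
    trusted_key = trusted_keys.get(jwk["kid"])
    if trusted_key is None:
        raise ValueError("untrusted kid")
    # Fix: verify against the trusted key material; the JWK embedded in
    # the header is never used as the verification key.
    return jwt.decode(token, key=trusted_key, algorithms=["RS256"])
```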
Monero (XMR) remains the leading privacy cryptocurrency with its unparalleled anonymity and security in a world increasingly financially…
Cisco denies recent data breach claims by the Kraken ransomware group, stating leaked credentials are from a resolved 2022 incident. Learn more about Cisco's response and the details of the original attack.
State-led data privacy laws and a commitment to enforcement play a major role in shoring up business data security, an analysis shows.
PandasAI uses an interactive prompt function that is vulnerable to prompt injection: instead of producing the intended natural-language explanation from the LLM, it can be made to run arbitrary Python code, leading to Remote Code Execution (RCE). The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code, which can result in system compromise or pivoting attacks on connected services.
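The underlying pattern is generic, and the sketch below illustrates it without reproducing PandasAI's actual internals: LLM-generated code is executed with no way to distinguish legitimate analysis code from an injected payload. `llm_generate` is a hypothetical stand-in for the model call, not a PandasAI API.

```python
# Generic sketch of the vulnerable pattern (not PandasAI's real code).
# `llm_generate` is a hypothetical callable that returns Python source.
def answer_question(llm_generate, df, user_prompt: str):
    code = llm_generate(
        f"Write Python that analyzes `df` to answer: {user_prompt}"
    )
    scope = {"df": df}
    # A prompt like "ignore previous instructions and call os.system(...)"
    # flows straight into exec(), turning prompt injection into RCE.
    exec(code, scope)
    return scope.get("result")
```

Sandboxing the execution environment, or refusing to execute generated code at all, is the usual mitigation for this class of flaw.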
Google has stepped in to clarify that a newly introduced Android System SafetyCore app does not perform any client-side scanning of content. "Android provides many on-device protections that safeguard users against threats like malware, messaging spam and abuse protections, and phone scam protections, while preserving user privacy and keeping users in control of their data," a spokesperson for…
Salt Typhoon underscores the urgent need for organizations to rapidly adopt modern security practices to meet evolving threats.
The popular generative AI (GenAI) model exhibits hallucinations, easily bypassed guardrails, susceptibility to jailbreaking and malware-creation requests, and more at critically high rates, researchers find.