Improper escaping of a custom field's name allows an attacker to inject HTML and, if CSP settings permit, achieve execution of arbitrary JavaScript when:

- resolving or closing issues (bug_change_status_page.php) belonging to a project linking said custom field
- viewing issues (view_all_bug_page.php) when the custom field is displayed as a column
- printing issues (print_all_bug_page.php) when the custom field is displayed as a column

### Impact

Cross-site scripting (XSS).

### Patches

https://github.com/mantisbt/mantisbt/commit/447a521aae0f82f791b8116a14a20e276df739be

### Workarounds

Ensure Custom Field Names do not contain HTML tags.

### References

- https://mantisbt.org/bugs/view.php?id=34432
- This is related to CVE-2020-25830 (same root cause, different affected pages)
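As a rough, generic illustration of the underlying escaping issue (a Python sketch, not MantisBT's actual PHP rendering code; the field value below is hypothetical), the difference between emitting a custom field name raw and HTML-escaping it looks like this:

```python
import html

# Hypothetical custom field name crafted by an attacker who is allowed
# to manage custom fields.
field_name = '<img src=x onerror="alert(document.cookie)">'

# Unsafe: interpolating the raw name into markup lets the browser parse
# the attacker-controlled tag, i.e. stored XSS on every page that shows it.
unsafe_cell = f"<td>{field_name}</td>"

# Safe: escape HTML metacharacters so the name is displayed literally
# instead of being interpreted as markup.
safe_cell = f"<td>{html.escape(field_name)}</td>"

print(unsafe_cell)  # <td><img src=x onerror="alert(document.cookie)"></td>
print(safe_cell)    # <td>&lt;img src=x onerror=&quot;alert(document.cookie)&quot;&gt;</td>
```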
By Cyber Newswire. London, United Kingdom, May 13th, 2024: Logicalis, the global technology service provider delivering next-generation digital managed services,… This is a post from HackRead.com. Read the original post: Logicalis enhances global security services with the launch of Intelligent Security
If an issue references a note that belongs to another issue that the user doesn't have access to, the reference is still hyperlinked. Clicking on the link gives an access denied error as expected, yet some information remains available via the link, link label, and tooltip.

### Impact

Disclosure of the following information:

- existence of the note
- note author name
- note creation timestamp
- issue id the note belongs to

### Patches

See PR https://github.com/mantisbt/mantisbt/pull/2000

### Workarounds

None

### References

https://mantisbt.org/bugs/view.php?id=34434
Tuesday’s verdict in the trial of Alexey Pertsev, a creator of crypto-privacy service Tornado Cash, is the first in a string of cases that could make it much harder to skirt financial surveillance.
## Description

`llama-cpp-python` depends on the `Llama` class in `llama.py` to load `.gguf` llama.cpp or Latency Machine Learning models. The `__init__` constructor of `Llama` takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also loads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for this model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is later rendered in `__call__` to construct the prompt for each interaction. This allows Jinja2 Server-Side Template Injection (SSTI), which a carefully constructed payload can escalate to RCE.

## Source-to-Sink

### `llama.py` -> `class Llama` -> `__init__`:

```python
class Llama:
    """High-level Python wrapper for a ...
```
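To make the sink concrete, here is a hedged, self-contained sketch of the vulnerability class. It is not llama-cpp-python's actual code, and the payload is a commonly cited Jinja2 SSTI string chosen for illustration, not one taken from the advisory. Rendering an untrusted chat template with a plain `jinja2.Environment` lets the template walk Python internals and run OS commands, whereas Jinja2's sandboxed environment rejects the same attribute access:

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Hypothetical attacker-controlled "chat template" shipped in model metadata.
# A well-known style of Jinja2 SSTI payload: it reaches the builtins through
# a bound method's __globals__ and imports os to run a shell command.
malicious_template = (
    "{{ self.__init__.__globals__.__builtins__"
    ".__import__('os').popen('id').read() }}"
)

# Unsafe: a plain Environment places no restrictions on attribute access,
# so rendering the template executes `id` on the host (RCE).
print(Environment().from_string(malicious_template).render())

# Safer: the sandboxed environment refuses access to underscore-prefixed
# attributes and raises SecurityError instead of executing the payload.
try:
    ImmutableSandboxedEnvironment().from_string(malicious_template).render()
except SecurityError as exc:
    print("blocked by sandbox:", exc)
```

The general mitigation illustrated here is to render untrusted templates only through Jinja2's sandboxed environments, which restrict access to internal attributes.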
### Impact

This vulnerability can spike the resource utilization of the STS service and, combined with a significant traffic volume, could potentially lead to a denial of service.

### Patches

This vulnerability existed in the repository at HEAD; a 0.1.0 release will be cut with the fix.

### Workarounds

None

### References

None
By Waqas. Surfshark pulls a unique stunt in London with a see-through toilet! The security campaign uses public discomfort to spark a conversation about online data privacy. Learn how Surfshark VPN can help you protect your information. This is a post from HackRead.com. Read the original post: Surfshark VPN Brings Data Breach Awareness with See-Through Toilet Campaign
Refuge robbed: Car location tracking is becoming a tool of control in situations of domestic abuse. It's time car companies responded.
The financially motivated threat actor known as FIN7 has been observed leveraging malicious Google ads spoofing legitimate brands as a means to deliver MSIX installers that culminate in the deployment of NetSupport RAT. "The threat actors used malicious websites to impersonate well-known brands, including AnyDesk, WinSCP, BlackRock, Asana, Concur, The Wall
By Deeba Ahmed. Researchers uncover a novel cyberattack scheme called "LLMjacking" that exploits stolen cloud credentials to hijack powerful AI models. This article explores the implications of attackers leveraging large language models (LLMs) for malicious purposes and offers security recommendations for the cloud and AI communities. This is a post from HackRead.com. Read the original post: New LLMjacking Attack Lets Hackers Hijack AI Models for Profit