Security Headlines: Latest CVEs

Source: GHSA
GHSA-885r-hhpr-cc9p: Jenkins Gogs Plugin uses non-constant time webhook token comparison

Jenkins Gogs Plugin 1.0.15 and earlier uses a non-constant time comparison function when checking whether the provided and expected webhook token are equal, potentially allowing attackers to use statistical methods to obtain a valid webhook token.
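A naive `==` comparison returns as soon as the first byte differs, so response timing reveals how many leading bytes of a guessed token are correct. A minimal sketch of the fix, using Python's standard-library `hmac.compare_digest` (the function name `verify_webhook_token` is illustrative, not from the plugin):

```python
import hmac

def verify_webhook_token(provided: str, expected: str) -> bool:
    # `provided == expected` short-circuits at the first mismatching byte,
    # leaking match length through timing. hmac.compare_digest takes time
    # independent of where the inputs differ, defeating that statistical
    # timing attack.
    return hmac.compare_digest(provided.encode(), expected.encode())
```

The same constant-time comparison primitive exists in most ecosystems (e.g. `MessageDigest.isEqual` in Java, which is what a Jenkins plugin would typically use).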

GHSA-mv77-fj63-q5w8: Stored XSS vulnerability in Jenkins GitHub Plugin

Jenkins GitHub Plugin 1.37.3 and earlier does not escape the GitHub project URL on the build page when showing changes, resulting in a stored cross-site scripting (XSS) vulnerability exploitable by attackers with Item/Configure permission.
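The root cause is embedding an attacker-controlled URL into HTML without escaping. A minimal sketch of the defense, using Python's standard-library `html.escape` (the function name `render_changes_link` is hypothetical, not the plugin's API):

```python
from html import escape

def render_changes_link(project_url: str) -> str:
    # A stored value like '"><script>...</script>' would break out of the
    # href attribute if inserted verbatim. Escaping &, <, > and quotes
    # keeps it inert text instead of markup.
    return '<a href="{0}">{0}</a>'.format(escape(project_url, quote=True))
```

The general rule: escape at output time for the context (here, an HTML attribute and element body), rather than trusting stored configuration values.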

GHSA-54f6-9mx9-86f7: SaToken privilege escalation vulnerability

An issue in Dromara SaToken versions 1.36.0 and earlier allows a remote attacker to escalate privileges via a crafted payload in the URL.

GHSA-w9vh-hv5g-7wmr: SaToken authentication bypass vulnerability

An issue in Dromara SaToken versions 1.3.50RC and earlier: when Spring dynamic controllers are used, a specially crafted request may cause an authentication bypass.

GHSA-fgq9-fc3q-vqmw: dom4j XML Entity Expansion vulnerability

An issue in dom4j's org.dom4j.io.SAXReader v2.1.4 and before allows a remote attacker to obtain sensitive information via the setFeature function.
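The hardening for this class of XML issue is to disable external entity resolution on the SAX parser before parsing untrusted input. A sketch of the equivalent configuration using Python's standard-library `xml.sax` (an analogue of calling `setFeature` on dom4j's `SAXReader`, not dom4j itself):

```python
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

def make_hardened_parser() -> xml.sax.xmlreader.XMLReader:
    parser = xml.sax.make_parser()
    # Refuse to resolve external general and parameter entities: the
    # channel an XXE payload uses to read local files or reach internal
    # network resources during parsing.
    parser.setFeature(feature_external_ges, False)
    parser.setFeature(feature_external_pes, False)
    return parser
```

In dom4j the same idea is expressed as `reader.setFeature("http://xml.org/sax/features/external-general-entities", false)` (and its parameter-entity counterpart) before any `read()` call on untrusted data.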

GHSA-7g24-qg88-p43q: jose4j uses weak cryptographic algorithm

jose4j before v0.9.3 allows attackers to set a low PBES2 iteration count of 1000 or less.
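Because the PBES2 iteration count (`p2c`) lives in the attacker-supplied JWE protected header, a consumer can enforce a floor before doing any key derivation. A stdlib-only sketch; the function name and the `MIN_PBES2_ITERATIONS` threshold are illustrative (jose4j 0.9.3 enforces its own default limits):

```python
import base64
import json

MIN_PBES2_ITERATIONS = 8192  # illustrative floor, not jose4j's exact value

def check_pbes2_header(protected_b64: str) -> None:
    # The JWE protected header is base64url-encoded JSON. "p2c" is the
    # PBES2 iteration count; an attacker can set it arbitrarily low to
    # weaken the derived key, so reject small values up front.
    pad = "=" * (-len(protected_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(protected_b64 + pad))
    alg = header.get("alg", "")
    if alg.startswith("PBES2") and header.get("p2c", 0) < MIN_PBES2_ITERATIONS:
        raise ValueError("PBES2 iteration count too low: %r" % header.get("p2c"))
```

In jose4j specifically, the fix in 0.9.3 was to apply such limits by default rather than accept whatever `p2c` the token declares.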

GHSA-mx47-h5fv-ghwh: light-oauth2 missing public key verification

light-oauth2 before version 2.1.27 obtains the public key without any verification. This could allow attackers to authenticate to the application with a crafted JWT token.
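The safe pattern is to resolve verification keys only from material provisioned out of band, never from anything the (attacker-controlled) token itself points at. A stdlib-only sketch of pinned key selection; the registry name, key id, and function are hypothetical, not light-oauth2's API:

```python
import base64
import json

# Hypothetical pinned registry: key id -> public key provisioned out of band.
TRUSTED_KEYS = {"key-2024": "-----BEGIN PUBLIC KEY-----..."}

def select_verification_key(token: str) -> str:
    # The JWT header is attacker-controlled, so a "kid" (or jku/x5u URL)
    # found there must never be fetched or trusted directly; it may only
    # index into keys we already hold.
    header_b64 = token.split(".")[0]
    pad = "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64 + pad))
    kid = header.get("kid")
    if kid not in TRUSTED_KEYS:
        raise ValueError("unknown key id: %r" % kid)
    return TRUSTED_KEYS[kid]
```

Signature verification then proceeds with the pinned key; a token minted under an attacker's key fails because its `kid` is not in the registry.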

GHSA-3j2f-58rq-g6p7: Sureness uses hardcoded key

Dromara Sureness before v1.0.8 was discovered to use a hardcoded key.
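The remediation for a hardcoded key is to load it from deployment configuration and fail fast when it is absent. A minimal sketch; the environment variable name is hypothetical, not Sureness's actual configuration key:

```python
import os

def load_signing_key(env_var: str = "SURENESS_SIGNING_KEY") -> str:
    # Hypothetical variable name. The point: the key lives in deployment
    # config (env, vault, secrets manager), never in source control, and a
    # missing key is a startup error rather than a silent fallback to a
    # baked-in default.
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError("signing key not configured: set %s" % env_var)
    return key
```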

GHSA-hrfv-mqp8-q5rw: Werkzeug DoS: High resource usage when parsing multipart/form-data containing a large part with CR/LF character at the beginning

Werkzeug's multipart parser must find a boundary that may span consecutive chunks, so parsing is based on looking for newline characters. Unfortunately, the code that looks for a partial boundary in the buffer is inefficient: if an uploaded file starts with CR or LF and is then followed by megabytes of data containing neither character, all of those bytes are appended chunk by chunk to an internal bytearray, and the boundary lookup is performed on the ever-growing buffer. This allows an attacker to cause a denial of service by sending crafted multipart data to any endpoint that parses it. The CPU time required can block worker processes from handling legitimate requests, and the RAM required can trigger an out-of-memory kill of the process. If many concurrent requests are sent continuously, this can exhaust or kill all available workers.
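The quadratic blowup comes from re-scanning the whole accumulated buffer on every chunk. A minimal sketch of the pattern and its fix (not Werkzeug's actual code): track how far the buffer has already been scanned and search only the newly appended tail, making the total work linear in the input size.

```python
def find_newline_streaming(chunks) -> int:
    # Buggy pattern: buf.find(b"\n") from offset 0 after every append is
    # O(n^2) over megabytes of newline-free data. Fixed pattern below:
    # remember the scanned prefix and search only new bytes.
    buf = bytearray()
    scanned = 0
    for chunk in chunks:
        buf += chunk
        idx = buf.find(b"\n", scanned)  # search only the unscanned tail
        if idx != -1:
            return idx
        scanned = len(buf)
    return -1
```

A complete fix would also cap the buffer size and reject oversized parts, so memory stays bounded even when no boundary ever arrives.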

GHSA-jq6c-r9xf-qxjm: dtale vulnerable to Remote Code Execution through the Custom Filter Input

### Impact

Users hosting D-Tale publicly can be vulnerable to remote code execution, allowing attackers to run malicious code on the server.

### Patches

Users should upgrade to version 3.7.0, where the "Custom Filter" input is turned off by default. You can find out more information on how to turn it back on [here](https://github.com/man-group/dtale#custom-filter).

### Workarounds

The only workaround for versions earlier than 3.7.0 is to host D-Tale only for trusted users.

### References

See the "Custom Filter" [documentation](https://github.com/man-group/dtale#custom-filter).