GHSA-mvj3-qrqh-cjvr: CometBFT PeerState JSON serialization deadlock

Impact

An internal modification to the way the struct PeerState is serialized to JSON introduced a deadlock when the new function MarshalJSON is called. This function can be called from two places:

  1. Via logs
    • Setting the consensus logging module to “debug” level (should not happen in production), and
    • Setting the log output format to JSON
  2. Via RPC dump_consensus_state

Case 1 above, which should not occur in production, will eventually cause most goroutines to hit the deadlock, effectively halting the node.

In case 2, only the data structures related to the first peer will be deadlocked, together with the thread(s) dealing with the RPC request(s). This means that only one of the channels of communication to the node’s peers will be blocked. Eventually the peer will time out and be excluded from the peer list (typically after 2 minutes). The goroutines involved in the deadlock will not be garbage collected, but they will not interfere with the system after the peer is excluded.

The theoretical worst case for case 2 is a network with only two validator nodes. In this case, each node maintains only one PeerState struct. If dump_consensus_state is called on either node (or both), the chain will halt until the peer connections time out, after which the nodes will reconnect (with different PeerState structs) and the chain will progress again. The same process can then be repeated.
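For context, dump_consensus_state is part of CometBFT's public RPC interface, so a single unauthenticated HTTP request is enough to exercise the affected serialization path. The request below is a sketch that assumes the default RPC listen address of 127.0.0.1:26657.

```sh
curl http://127.0.0.1:26657/dump_consensus_state
```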

As the number of nodes in a network increases, and thus the number of PeerState structs each node maintains, the likelihood of reproducing the disruption seen with 2 nodes decreases. Only the first PeerState struct will deadlock, not the others (the dump_consensus_state RPC accesses them in a for loop, so the deadlock at the first iteration means the remaining iterations of that loop are never reached).
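To make the failure mode concrete, here is a minimal, hypothetical Go sketch of this class of bug: a MarshalJSON method guarded by a non-reentrant mutex that is re-entered while the mutex is still held. The names and fields below are illustrative only and are not CometBFT's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// peerState stands in for a struct whose fields are guarded by a mutex.
type peerState struct {
	mtx    sync.Mutex
	Height int64
}

// MarshalJSON takes the mutex before reading the fields.
func (ps *peerState) MarshalJSON() ([]byte, error) {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	// BUG (illustrative): marshalling ps itself re-invokes this very
	// MarshalJSON method, which then blocks trying to re-take the mutex
	// this goroutine already holds. sync.Mutex is not reentrant, so the
	// call never returns.
	return json.Marshal(ps)
	// A common fix is to marshal a helper type that does not carry the
	// MarshalJSON method, e.g.:
	//   type peerStateJSON struct{ Height int64 }
	//   return json.Marshal(peerStateJSON{Height: ps.Height})
}

func main() {
	b, err := json.Marshal(&peerState{Height: 1}) // deadlocks here
	fmt.Println(string(b), err)                   // never reached
}
```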

This regression was introduced in versions v0.34.28 and v0.37.1, and will be fixed in v0.34.29 and v0.37.2.

Patches

The PR containing the fix and the corresponding issue are listed in the References below.

Workarounds

For case 1 (hitting the deadlock via logs)

  • either don’t set the log output to "json", leave at "plain",
  • or don’t set the consensus logging module to "debug", leave it at “info” or higher.

For case 2 (hitting the deadlock via RPC dump_consensus_state)

  • do not expose the dump_consensus_state RPC endpoint to the public internet (e.g., via rules in your nginx setup; a sketch is given below)
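As one possible shape of such a rule, the following nginx sketch blocks the endpoint at a reverse proxy while passing other RPC calls through. It assumes the node's RPC listens on the default 127.0.0.1:26657 and that nginx fronts it; the host name is hypothetical, so adapt the server block to your own setup.

```nginx
server {
    listen 80;
    server_name rpc.example.com;  # hypothetical host name

    # Refuse the vulnerable endpoint at the proxy.
    location /dump_consensus_state {
        return 403;
    }

    # Proxy the remaining RPC endpoints to the local node.
    location / {
        proxy_pass http://127.0.0.1:26657;
    }
}
```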

References

  • Issue that introduced the deadlock
  • Issue reporting the bug via logs
  • GHSA-mvj3-qrqh-cjvr
  • https://nvd.nist.gov/vuln/detail/CVE-2023-34450
  • cometbft/cometbft#524
  • cometbft/cometbft#863
  • cometbft/cometbft#865
