Discussion thread for single-slot finality

See: Paths toward single-slot finality - HackMD


> The “interface” between Casper FFG finalization and LMD GHOST fork choice is a source of significant complexity, leading to a number of attacks that have required fairly complicated patches to fix, with more weaknesses being regularly discovered. Single-slot finality offers an opportunity to create a cleaner relationship between a single-slot confirmation mechanism and the fork choice rule (which would only run in the ≥ 1/3 offline case). Other sources of complexity (eg. shuffling into fixed-size committees) could also be cut.

Overall excerpt goal: Reduce complexity.

Bringing attention to: fork choice rule (which would only run in the ≥ 1/3 offline case).

My thoughts: Merely keeping the fork choice rule implemented is itself a source of complexity (implementation, documentation, testing, attack surface, integration, etc.), even if it only runs in the exceptional case.

Short proposal: If Byzantine safety cannot be achieved within some timeout due to offline or unresponsive validators, is dynamically reducing the validator set and defaulting to Casper FFG an option? The trade-off is architectural and implementation simplicity over security, but software complexity also carries a large cost. Note that I have not read the “Combining GHOST and Casper” paper, so let me know if the answer lies there.
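To make the proposal concrete, here is a minimal sketch of what a timeout-and-reduce fallback could look like, assuming a simple vote-counting model. All names and the structure are hypothetical, not taken from any spec:

```python
# Hypothetical sketch of the short proposal above: if the 2/3 Casper FFG
# supermajority cannot be reached (e.g. too many validators offline at the
# timeout), dynamically reduce the active set to the responsive validators
# and continue with plain Casper FFG over that smaller set.

def supermajority(voted: int, total: int) -> bool:
    """Casper FFG requires >= 2/3 of the active set to vote."""
    return 3 * voted >= 2 * total

def finalize_with_fallback(active_set: set, votes: set):
    """active_set: validator ids; votes: ids that voted before the timeout.

    Returns the set the checkpoint was finalized with, or None if nobody
    responded (at which point some recovery mechanism must take over).
    """
    voted = active_set & votes
    if supermajority(len(voted), len(active_set)):
        return active_set  # normal path: finalized with the full set
    # Timeout path: shrink the set to the responders. This trades security
    # (a smaller honest set) for never having to run LMD GHOST at all.
    return voted or None
```

For example, with 10 validators and only 6 votes, the supermajority check fails and the set is reduced to the 6 responders, who then continue under FFG.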

> The fork choice rule (LMD GHOST) is only used in the exceptional case where a committee doesn’t confirm (this requires >1/4 to be offline or malicious).

What is the source of the “1/4” figure? (The excerpt above says the fork choice rule would only run in the ≥ 1/3 offline case.)

> Most of the time, validators could withdraw instantly.

Does this mean withdrawing staked funds, or exiting the validator role?

> The biggest problem that remains is signature aggregation. There are 131,072 validators making and sending signatures, and these need to be quickly combined into a single large aggregate signature.

What is the source of the load?

  1. Computation related to validation of each individual signature?
  2. Networking requirements related to flooding the network with > 100K non-trivially sized payloads?
  3. Computation related to signature aggregation?
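For source 3 specifically, my understanding is that BLS aggregation itself is cheap: one group addition per signature, with the expensive pairing checks deferred to verification. A toy sketch of that structure, using integers mod a prime as a stand-in for the elliptic-curve group (purely illustrative and not cryptographically secure, since discrete logs are trivial here):

```python
# Toy model of BLS-style aggregation. Integer arithmetic mod a prime
# stands in for curve-group operations; NOT secure, illustration only.
import random

P = 2**61 - 1      # toy group order (a Mersenne prime)
G = 5              # toy "generator"
H_M = 1234567      # toy hash-to-group of the message

def keygen():
    sk = random.randrange(1, P)
    pk = sk * G % P            # stand-in for pk = sk * G (scalar mult)
    return sk, pk

def sign(sk):
    return sk * H_M % P        # stand-in for sig = sk * H(m)

def aggregate(values):
    return sum(values) % P     # one group addition per input: O(n)

def verify_aggregate(agg_sig, agg_pk):
    # Stand-in for the pairing check e(sig, G) == e(pk, H(m)):
    # (sum sk_i * H_M) * G == (sum sk_i * G) * H_M in the toy group.
    return agg_sig * G % P == agg_pk * H_M % P

random.seed(0)
keys = [keygen() for _ in range(1000)]
agg_sig = aggregate(sign(sk) for sk, _ in keys)
agg_pk = aggregate(pk for _, pk in keys)
assert verify_aggregate(agg_sig, agg_pk)
```

If this model is roughly right, the dominant costs should be sources 1 and 2 (per-signature validation and network fan-in), not the aggregation arithmetic itself.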

Background: I’m asking this question for a personal reason: the next version of our team’s protocol will use HotStuff and will require threshold signatures that aggregate > 100K signatures. Theoretically, it looks to me like linear networking requirements with BLS signatures might be a viable solution here, but I’m looking to build more context.
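As a rough sanity check on the networking side, some back-of-envelope arithmetic assuming BLS12-381 sizes (96-byte G2 signatures, one participation bit per validator; the numbers are standard for that curve but the framing is my own):

```python
# Back-of-envelope bandwidth comparison for ~131K validators.
N_VALIDATORS = 131_072
SIG_BYTES = 96                        # BLS12-381 G2 signature size
BITFIELD_BYTES = N_VALIDATORS // 8    # one participation bit per validator

naive_bytes = N_VALIDATORS * SIG_BYTES            # everyone floods raw sigs
aggregated_bytes = SIG_BYTES + BITFIELD_BYTES     # one aggregate + bitfield

print(f"naive broadcast:      {naive_bytes / 2**20:.1f} MiB")
print(f"aggregate + bitfield: {aggregated_bytes / 2**10:.1f} KiB")
```

So a naive all-to-all flood is on the order of 12 MiB per slot, while a single aggregate plus a participation bitfield is around 16 KiB, which is why the aggregation pipeline (rather than raw signature size) is the bottleneck.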