Analyzing Rollups from the Celestia Perspective: Censorship Resistance and Liveness of 6 Variants
Author: NashQ, researcher at Celestia
Original title: Redefining Sequencers: Understanding the Aggregator and the Header Producer
Compiled by: Faust, Geek Web3
Translator's Note: To make the Rollup model easier to understand and analyze, Celestia researcher NashQ splits the Rollup sequencer into two logical entities: the aggregator and the header generator. He also divides transaction processing into three logical steps: inclusion, ordering, and execution.
Guided by this framework, the six important variants of sovereign Rollup become much clearer. NashQ discusses the censorship resistance and liveness of each Rollup variant in detail, as well as the minimum node configuration a user of each variant must run to reach a trust-minimized (i.e., trustless) state.
Although this article analyzes Rollups from Celestia's perspective, which differs from how the Ethereum community analyzes the Rollup model, the many interconnections between Ethereum Rollups and Celestia sovereign Rollups, as well as the latter's growing influence, make this article well worth reading for Ethereum enthusiasts too.
What is a Rollup?
A Rollup is a blockchain that publishes its "transaction data" to another blockchain and inherits its consensus and data availability.
Why did I deliberately write "transaction data" rather than "block"? Because there is a distinction between Rollup blocks and Rollup data: the most compact Rollups need only Rollup data, as in the first variant below.
A Rollup block is a data structure that represents the blockchain ledger at a certain block height. A Rollup block consists of Rollup data and a Rollup header. The Rollup data can be either a batch of transactions or the state changes produced by a batch of transactions.
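As a rough illustration of that split, the structures might be modeled like this (a minimal sketch in Python; the field names are my own assumptions, not those of any concrete Rollup):

```python
# Minimal sketch of the block structures described above (illustrative only).
from dataclasses import dataclass
from typing import List

@dataclass
class RollupData:
    # Either a batch of raw transactions, or the state changes they produce.
    transactions: List[bytes]

@dataclass
class RollupHeader:
    height: int
    data_commitment: bytes   # commitment to the rollup data in this block
    state_root: bytes        # commitment to the post-execution state

@dataclass
class RollupBlock:
    header: RollupHeader
    data: RollupData
```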
Variant 1: Pessimistic Rollup / Based Rollup
The easiest way to build a Rollup is to let users publish their transactions to another blockchain, which we will call the consensus and data availability layer; I will simply call it the DA layer below (Translator's Note: roughly what the Ethereum community usually calls Layer 1).
In the first Rollup variant I'm going to introduce, the nodes of the Rollup network must re-execute the Rollup transactions included on the DA layer to compute the final state of the ledger. This is the pessimistic Rollup!
A pessimistic Rollup is a Rollup that supports only full nodes, and these full nodes must re-execute every transaction in the Rollup ledger to check its validity.
But in that case, who acts as the Rollup's sequencer? In fact, aside from the Rollup's full nodes, no entity ever executes the transactions in the Rollup ledger. Generally speaking, a sequencer aggregates transaction data and generates a Rollup header. But the pessimistic Rollup described above has no Rollup header!
For the sake of discussion, we can split the sequencer into two logical entities: the aggregator and the header generator. To generate a Rollup header, you must first execute the transactions, complete the state transition, and then compute the corresponding header. The aggregator, by contrast, does not need to perform any state transition to carry out the aggregation step.
Sequencing is the process of "aggregation + Rollup header generation".
Aggregation is the step of collecting transaction data into a batch; a batch generally contains many transactions (Translator's Note: the batch is the part of the Rollup block other than the header).
Header generation is the process of creating a Rollup header. The Rollup header is metadata about the Rollup block, containing at minimum a commitment to the transaction data in the block (Translator's Note: the commitment here refers to a commitment to the correctness of the transaction-processing results).
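To make the two roles concrete, here is a minimal sketch of the split (the `execute` placeholder and the commitment scheme are assumptions for illustration, not any real Rollup's implementation):

```python
import hashlib
from typing import Dict, List

def commit(data: bytes) -> bytes:
    # Placeholder commitment: a plain hash stands in for a Merkle root or
    # vector commitment.
    return hashlib.sha256(data).digest()

def aggregate(mempool: List[bytes]) -> List[bytes]:
    # Aggregator: batch pending transactions. No execution or state needed.
    return list(mempool)

def execute(state: Dict[str, int], tx: bytes) -> Dict[str, int]:
    # Placeholder state-transition function; a real rollup's VM goes here.
    key = tx.decode(errors="ignore")[:8]
    new_state = dict(state)
    new_state[key] = new_state.get(key, 0) + 1
    return new_state

def produce_header(height: int, batch: List[bytes], state: Dict[str, int]) -> dict:
    # Header generator: must execute the batch to obtain the post-state root.
    for tx in batch:
        state = execute(state, tx)
    return {
        "height": height,
        "data_commitment": commit(b"".join(batch)),
        "state_root": commit(repr(sorted(state.items())).encode()),
    }
```

Note that `aggregate` never touches the state, while `produce_header` cannot be computed without executing every transaction; that asymmetry is exactly why the two roles can be separated.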
Viewed through this lens, it becomes clear who is responsible for each part of the Rollup. Consider the aggregator first: the pessimistic Rollup described above has no header generation step, and users publish their transactions directly to the DA layer, which means the DA layer network effectively acts as the aggregator.
Therefore, a pessimistic Rollup is a Rollup variant that delegates the aggregation step to the DA layer and has no sequencer of its own. This type of Rollup is sometimes called a "based Rollup".
A based Rollup has the same censorship resistance and liveness as its DA layer (liveness measures how quickly the system responds to user requests). For users of this type of Rollup to reach a trust-minimized state (the closest thing to trustless), they must run at least a light node of the DA layer network and a full node of the Rollup network.
Variant 2: Pessimistic aggregation using a shared aggregator
Let's discuss pessimistic aggregation using a shared aggregator. This idea was suggested by Evan Forbes in his forum post on shared sequencer design. Its key assumption is that the shared sequencer is the only canonical way to order transactions. Evan explains the benefits of shared sequencers this way:
"In order to achieve a user experience equivalent to Web2, shared sequencer can provide fast generation Soft Commitment (not very reliable guarantee). These Soft Commitment provide some guarantees about the final transaction order (that is, the commitment transaction order will not change), and the steps of updating the Rollup ledger status can be carried out in advance (but Finalize has not been completed yet).
Once the Rollup block data is confirmed and released to the base layer Base Layer (here it should refer to the DA layer), the status update of the Rollup ledger is finalized and finalized. "
The Rollup variant described above still belongs to the pessimistic category, because this kind of Rollup system contains only full nodes and no light nodes. Every Rollup node must execute all transactions to ensure the validity of ledger state updates. Because this type of Rollup has no light nodes, it needs neither a Rollup header nor a header generator. (Translator's Note: in general, a blockchain's light nodes do not synchronize complete blocks; they only receive block headers.)
Since there is no Rollup header generation step, this Rollup's shared sequencer does not need to execute transactions to update state (a prerequisite for generating headers); it only performs the aggregation of transaction data. So I prefer to call it a shared aggregator.
In this variant, to reach a trust-minimized state, Rollup users must run at least a DA layer light node + a light node of the shared aggregator network + a Rollup full node.
At this point, the aggregator header (not the Rollup header) published by the shared aggregator must be verified through the light nodes of the shared aggregator network. As mentioned above, the shared aggregator handles transaction ordering, and the aggregator header it publishes contains a cryptographic commitment to the batch it posted on the DA layer.
This way, a Rollup node operator can confirm that a batch retrieved from the DA layer was created by the shared aggregator and not by someone else.
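A minimal sketch of that check, assuming a simple hash-based commitment (both the commitment scheme and the field names are placeholders of my own):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class AggregatorHeader:
    batch_commitment: bytes   # commitment to the batch posted on the DA layer
    signature: bytes          # shared aggregator's signature over this header

def batch_matches_header(batch: bytes, header: AggregatorHeader) -> bool:
    # A Rollup full node pulls `batch` from the DA layer and checks it against
    # the commitment in the aggregator header obtained via the shared
    # aggregator network's light nodes. Signature verification is elided here.
    return hashlib.sha256(batch).digest() == header.batch_commitment
```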
Since the shared aggregator handles both inclusion and ordering, the Rollup's censorship resistance depends on it.
If L_ss denotes the liveness of the shared aggregator and L_da the liveness of the DA layer, then the liveness of this Rollup model is L = L_da && L_ss. In other words, if either component suffers a liveness failure, the Rollup does too.
For simplicity, I will treat liveness as a boolean value. If the shared aggregator fails, the Rollup cannot continue to operate. If the DA layer network fails, the shared aggregator can still provide soft commitments for Rollup blocks, but then the Rollup's properties depend entirely on the shared aggregator network, whose properties are usually far weaker than those of the original DA layer.
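Treating liveness as booleans, the composition for this variant can be written down directly (a trivial sketch of the formula above, nothing more):

```python
def liveness_variant2(l_da: bool, l_ss: bool) -> bool:
    # Variant 2: both the DA layer and the shared aggregator must be live.
    return l_da and l_ss

# Example: a live DA layer but a failed shared aggregator halts the rollup.
assert liveness_variant2(l_da=True, l_ss=False) is False
```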
Let's continue to explore the censorship resistance of the above Rollup scheme:
In this scheme, the DA layer cannot censor specific individual transactions (Translator's Note: transaction censorship usually means refusing to let certain transactions onto the chain); it can only censor the entire batch submitted by the shared aggregator (i.e., refuse to include that batch in the DA layer).
However, by the time the shared aggregator submits a transaction batch to the DA layer, the transactions in it have already been ordered, and the order between different batches has also been determined. Therefore, censorship at the DA layer accomplishes nothing except delaying the final confirmation of the Rollup ledger.
In summary, I take censorship resistance to mean ensuring that no single entity can control or manipulate the flow of information within the system, while liveness means keeping the system functional and available even in the presence of network outages and adversarial behavior. Although this conflicts with the current mainstream academic definitions, I will stick with the definitions as I have articulated them.
Variant 3: Pessimistic Rollup Combining a Based Rollup with a Shared Aggregator
Although the shared aggregator brings benefits to users and the community, we should avoid over-relying on it and allow users to fall back from the shared aggregator to the DA layer. We can combine the two Rollup variants introduced earlier, allowing users to submit transactions directly to the DA layer while a shared aggregator is also in use.
We assume that the final Rollup transaction ordering depends both on the ordering submitted by the shared aggregator and on the Rollup transactions that users submit directly into DA layer blocks. We call this the Rollup's fork-choice rule.
Aggregation happens in two steps here. First, the shared aggregator comes into play and aggregates some transactions. Then the DA layer aggregates both the batch submitted by the shared aggregator and the transactions submitted directly by users.
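One way such a fork-choice rule might look, assuming shared-aggregator batches take precedence within a DA block and direct user submissions follow in DA-layer inclusion order (this ordering policy is my assumption for illustration, not the article's specification):

```python
from typing import List, Tuple

# A DA-layer block, as seen by a Rollup full node, contains both batches posted
# by the shared aggregator and transactions users submitted directly.
def rollup_ordering(da_block: List[Tuple[str, bytes]]) -> List[bytes]:
    aggregator_txs: List[bytes] = []
    direct_txs: List[bytes] = []
    for source, payload in da_block:
        if source == "shared_aggregator_batch":
            aggregator_txs.append(payload)   # already ordered by the aggregator
        elif source == "direct_user_tx":
            direct_txs.append(payload)       # ordered by DA-layer inclusion
    # Assumed policy: aggregator batches first, then direct submissions.
    return aggregator_txs + direct_txs
```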
The censorship-resistance analysis becomes a bit more complicated here. DA layer nodes may censor the batch submitted by the shared aggregator until the next DA layer block is produced. Having seen the transaction data inside the batch, a DA layer node can extract MEV: it can use its own account to issue a front-running transaction, include it in the DA layer block first, and only then include the batch submitted by the Rollup's shared aggregator.
Obviously, the finality of transaction ordering guaranteed by the soft commitments of this third variant is more fragile than in the second variant: the shared aggregator effectively hands MEV over to the DA layer nodes. On this topic, I recommend the research talk on profitably exploiting censorship MEV.
Some designs have already emerged to reduce the ability of DA layer nodes to carry out such MEV extraction, for example a "reorganization window" that delays the execution of transactions that Rollup users submit directly to the DA layer. Sovereign Labs describes this in detail in their design proposal called Based Sequencing with Soft Confirmations, which introduces the concept of a "preferred sequencer".
Since the MEV problem depends on the aggregation scheme a Rollup chooses and on its fork-choice rule, some schemes leak no MEV to the DA layer while others leak some or all of it; but that is another topic.
As for liveness, this scheme has an advantage over schemes that only allow the shared aggregator to submit transactions to the DA layer: if the shared aggregator suffers a liveness failure, users can still submit transactions directly to the DA layer.
Finally, let's look at the minimum configuration for a trust-minimized Rollup user:
at least a DA layer light node + a shared aggregator light node + a Rollup full node.
The aggregator header published by the shared aggregator still needs to be verified, so that the Rollup full node can distinguish transaction batches according to the fork-choice rule.
Variant 4: Optimistic Based Rollup and Centralized Header Generator
Let's discuss a variant that combines a based optimistic Rollup with a centralized header generator. This design uses the DA layer to aggregate Rollup transactions, but introduces a centralized header generator to produce Rollup headers, which makes Rollup light nodes possible.
Rollup light nodes can indirectly check the validity of Rollup transactions via single-round fraud proofs. A light node optimistically trusts the Rollup header generator and finalizes only after the fraud-proof window expires, unless it first receives a fraud proof from an honest full node showing that the header generator submitted erroneous data.
I'm not going to go into the details of how single-round fraud proofs work, as that is beyond the scope of this article. Their advantage is that they can shorten the fraud-proof window from the usual 7 days; the exact value is yet to be determined, but it is an order of magnitude smaller than in traditional optimistic Rollups. Light nodes can obtain fraud proofs via the P2P network of Rollup full nodes without waiting for a subsequent dispute game, because all the evidence needed to judge the claim is contained in a single fraud proof.
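A minimal sketch of the light-node logic just described (the window length, timing model, and names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class PendingHeader:
    header_id: bytes
    received_at: float        # when the light node first saw this header
    fraud_proof_seen: bool    # set if a valid single-round fraud proof arrives

FRAUD_PROOF_WINDOW = 60 * 60  # hypothetical window length, in seconds

def light_node_view(pending: PendingHeader, now: float) -> str:
    if pending.fraud_proof_seen:
        return "rejected"                 # header proven invalid by a full node
    if now - pending.received_at >= FRAUD_PROOF_WINDOW:
        return "finalized"                # window expired with no fraud proof
    return "optimistically_accepted"      # soft confirmation only
```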
This Rollup model uses the DA layer as its aggregator and inherits its censorship resistance; the DA layer is responsible for including and ordering transactions. The centralized header generator reads the Rollup transaction ordering from the DA layer, builds the corresponding Rollup headers, and publishes the headers and state roots to the DA layer; these state roots are needed when constructing fraud proofs. In short, the aggregator includes and orders transactions, while the header generator executes them to update the state and obtain the state root.
Assume the DA layer (which here also acts as the Rollup's aggregator) is sufficiently decentralized and censorship-resistant, and that the header generator cannot change the Rollup transaction ordering published by the aggregator. Then decentralizing the header generator would only improve liveness; the Rollup's other properties would be the same as in the first variant, the based Rollup.
If the header generator suffers a liveness failure, so does the Rollup: light nodes can no longer follow the progress of the Rollup ledger, though full nodes still can. At that point, the Rollup of Variant 4 degenerates into the based Rollup of Variant 1. Clearly, the trust-minimized minimum configuration for Variant 4 is:
A DA layer light node + a Rollup light node.
Variant 5: Based ZK-Rollup and Decentralized Prover Market
We have discussed pessimistic Rollups (based Rollups) and optimistic Rollups; now it is time to consider ZK-Rollups. Recently Toghrul gave a talk on separating the aggregator (sequencer) from the header generator (prover), titled Sequencer-Prover Separation in Zero-Knowledge Rollups. In this model it is easier to handle publishing transactions as Rollup data rather than as state diffs, so I will focus on the former. Variant 5 is a based ZK-Rollup with a decentralized prover market.
By now, you should be familiar with how these Rollups work. Variant 5 delegates the aggregator role to the DA layer nodes, which handle including and ordering transactions. I'll quote Sovereign Labs' documentation, which explains the lifecycle of a transaction in Variant 5 well:
Users publish new data blocks to the L1 chain (the DA layer). Once these data blocks are finalized on the L1 chain, they are logically final (unchangeable). After L1 blocks are finalized (that is, they can no longer be rolled back), Rollup full nodes scan them, process all Rollup-related data blocks in order, and compute the latest Rollup state root. At that point, from the perspective of Rollup full nodes, those data blocks are final.
In this model, the header generator role is played by the decentralized prover market.
A prover node (a full node running inside a ZKVM) works much like an ordinary Rollup full node: it scans the DA layer chain and processes all Rollup transaction batches in order, then generates the corresponding zero-knowledge proof and publishes it to the DA layer chain. (If the Rollup system wants to incentivize provers, they must post their ZK proofs to the DA layer chain; otherwise it would be impossible to determine which prover submitted a proof first.) Once the ZK proof for a given transaction batch is posted on chain, that batch is finalized in the eyes of all Rollup nodes, including light nodes.
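A rough sketch of that lifecycle from the light node's point of view (the `verify_zk_proof` function is a stand-in for a real ZKVM verifier, not an actual API):

```python
from dataclasses import dataclass

@dataclass
class ZkProof:
    batch_commitment: bytes   # which DA-layer batch this proof covers
    old_state_root: bytes
    new_state_root: bytes
    proof_bytes: bytes

def verify_zk_proof(proof: ZkProof) -> bool:
    # Placeholder: in reality this calls the verifier for the rollup's
    # state-transition circuit.
    return len(proof.proof_bytes) > 0

def light_node_accepts(proof: ZkProof, trusted_state_root: bytes) -> bytes:
    # A light node only checks that the proof chains onto the state root it
    # already trusts; it never re-executes the batch itself.
    if proof.old_state_root != trusted_state_root:
        raise ValueError("proof does not extend the trusted state")
    if not verify_zk_proof(proof):
        raise ValueError("invalid proof")
    return proof.new_state_root   # the batch is now final for the light node
```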
Variant 5 has the same censorship resistance as the DA layer. The decentralized prover market cannot censor Rollup transactions, because the DA layer has already fixed the canonical transaction ordering; the header generator (here, the prover) is decentralized only to obtain better liveness and to create an incentive market.
Liveness here is L = L_da && L_pm, where L_pm is the liveness of the prover market. If the prover market's incentives are misaligned, or it suffers a liveness failure, Rollup light nodes cannot follow the chain's progress, but Rollup full nodes still can; for full nodes, this is just a fallback to the based/pessimistic Rollup. The minimum trust-minimized configuration here is the same as in the optimistic case, namely
DA layer light node + Rollup light node.
Variant 6: Hybrid Based Rollup + Centralized Optimistic Header Generator + Decentralized Prover
We still let DA layer nodes act as Rollup aggregators and delegate the work of including and ordering transactions to them.
As the figure below shows, both the ZK Rollup and the optimistic Rollup use the same ordered transaction batches on the DA layer as the source of the Rollup ledger. This is why we can use both proof systems at the same time: the ordered batches on the DA layer are themselves unaffected by the choice of proof system.
Let's talk about finality first. From a Rollup full node's perspective, once a DA layer block is finalized, the Rollup transaction batches it contains are finalized and cannot change. But we care more about finality from the light node's perspective. Assume the centralized header generator stakes some assets, signs the Rollup headers it produces, and submits the computed state roots to the DA layer.
As in Variant 4, light nodes optimistically trust the header generator, assume the headers it publishes are correct, and wait for fraud proofs from the full-node network. If the fraud-proof window ends and the full-node network has published no fraud proof, the Rollup block is finalized from the Rollup light node's perspective.
The key point is that if we can obtain a ZK proof, we don't have to wait for the fraud-proof window to end: beyond single-round fraud proofs, we can replace fraud proofs with ZK proofs and discard invalid headers produced by a malicious header generator!
When light nodes receive a ZK proof for a batch of Rollup transactions, the batch is finalized.
Now we have both fast soft commitments and fast finality.
Variant 6 still has the same censorship resistance as the DA layer, because it is based on the DA layer. For liveness, we have L = L_da && (L_op || L_pm), which means we have added a liveness guarantee: if either the centralized header generator or the decentralized prover market suffers a liveness failure, we can fall back to the other.
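Continuing the boolean liveness sketch from earlier, Variant 6's composition is simply the formula above written out (illustration only):

```python
def liveness_variant6(l_da: bool, l_op: bool, l_pm: bool) -> bool:
    # The DA layer must be live, but light-node progress survives as long as
    # either the optimistic header generator or the prover market is live.
    return l_da and (l_op or l_pm)

# Example: prover market down, header generator up -> light nodes still progress.
assert liveness_variant6(l_da=True, l_op=True, l_pm=False) is True
```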
In this variant, the minimum configuration for user trust minimization is:
One DA layer light node + one Rollup light node.
Summary:
The sequencer splits into two logical entities: the aggregator and the header generator.
We divide the sequencer's work into three logical steps: inclusion, ordering, and execution.
A pessimistic Rollup and a based Rollup are the same thing.
Depending on your needs, you can choose different aggregator and header generator schemes.
Each Rollup variant introduced in this post follows the same design pattern:
Finally, I have some thoughts. Please think about: