How L2 Gas Price Works

Batch overhead & limited resources of the batch

To process a batch, the ZKsync team has to pay for proving it, committing it to L1, etc. Processing a batch also involves some operational costs. We call all of these costs the “batch overhead”. It consists of two parts:

  • The L2 requirements for proving the circuits (denoted in L2 gas).
  • The L1 requirements for the proof verification as well as general batch processing (denoted in L1 gas).

We generally try to aggregate as many transactions as possible, and each transaction pays for the batch overhead in proportion to how close it brings the batch to being sealed, i.e. closed and prepared for proof verification and submission on L1. A transaction gets closer to sealing a batch by using the batch’s limited resources.

On Ethereum, the main reason for the existence of a batch gas limit is to keep the system decentralized and the load low: assuming the correct hardware, time is the only constraint a batch has to respect. In the case of ZKsync batches, there are several limited resources the batch has to manage:

  • Time. As on Ethereum, the batch should generally not take too long to close, to provide better UX. To represent the time needed, we use a batch gas limit; note that it is higher than the gas limit for a single transaction.
  • Slots for transactions. The bootloader has a limited number of slots for transactions, i.e. it cannot include more than a certain number of transactions per batch.
  • Memory of the bootloader. The bootloader needs to store each transaction’s ABI encoding in its memory, which fills it up. In practical terms, this serves as a penalty for transactions with large calldata or, in the case of custom accounts, large signatures.
  • Pubdata bytes. To fully benefit from storage diffs, i.e. the fact that changes to a single slot within the same batch need to be published only once, we publish all of the batch’s public data only after the entire batch has been processed. Right now, we publish all the data (the storage diffs, L2→L1 messages, etc.) in a single transaction at the end of the batch. Most nodes have a limit of 128kb per transaction, so this is the limit that each ZKsync batch should adhere to.

Each transaction pays for the batch overhead in proportion to how much of the resources above it consumes.

Note that before a transaction is executed, the system cannot know how much of these limited resources the transaction will consume, so we charge for the worst case and provide a refund at the end of the transaction.
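
To make the “proportional to resource consumption” idea concrete, here is a minimal sketch. It assumes that a transaction’s share of the batch overhead is driven by the largest fraction of any single limited resource it consumes; this is an illustration only, not the exact bootloader formula, and all limits below are placeholder values.

/// Illustration only: a transaction's share of the batch overhead, read as the
/// largest fraction of any limited batch resource that it consumes.
/// The limits below are placeholder values, not the production constants.
fn overhead_share(gas_used: u64, slots_used: u64, memory_used: u64, pubdata_used: u64) -> f64 {
    const MAX_GAS_PER_BATCH: u64 = 80_000_000; // batch gas limit (placeholder)
    const MAX_TXS_IN_BATCH: u64 = 10_000; // transaction slots (placeholder)
    const MAX_MEMORY_FOR_BATCH: u64 = 32 * 600_000; // bootloader memory, bytes (placeholder)
    const MAX_PUBDATA_PER_BATCH: u64 = 120_000; // pubdata limit, bytes (placeholder)

    [
        gas_used as f64 / MAX_GAS_PER_BATCH as f64,
        slots_used as f64 / MAX_TXS_IN_BATCH as f64,
        memory_used as f64 / MAX_MEMORY_FOR_BATCH as f64,
        pubdata_used as f64 / MAX_PUBDATA_PER_BATCH as f64,
    ]
    .into_iter()
    .fold(0.0, f64::max)
}

fn main() {
    // A transaction that uses 1% of the batch gas but 10% of the pubdata is
    // "10% of the way" to sealing the batch and pays overhead accordingly.
    let share = overhead_share(800_000, 1, 100_000, 12_000);
    println!("share of batch overhead: {share:.2}");
}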

MAX_TRANSACTION_GAS_LIMIT

MAX_TRANSACTION_GAS_LIMIT is the recommended maximal amount of gas that a transaction can spend on computation. However, in case the operator trusts the user, the operator may provide a trusted gas limit, i.e. a limit that exceeds MAX_TRANSACTION_GAS_LIMIT, assuming the operator knows what it is doing. This can be helpful in the case of a ZK Chain with different parameters.
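
As a sketch of this rule (a hypothetical helper, not actual bootloader code, with a placeholder value for the constant):

// Hypothetical helper: the gas cap applied to a transaction's computation,
// with an optional trusted limit provided by the operator.
fn effective_gas_cap(requested_gas_limit: u64, trusted_gas_limit: Option<u64>) -> u64 {
    const MAX_TRANSACTION_GAS_LIMIT: u64 = 80_000_000; // placeholder value

    // If the operator trusts the sender, the trusted limit may exceed
    // MAX_TRANSACTION_GAS_LIMIT; otherwise the recommended maximum applies.
    let cap = trusted_gas_limit.unwrap_or(MAX_TRANSACTION_GAS_LIMIT);
    requested_gas_limit.min(cap)
}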

Derivation of baseFee and gasPerPubdata

At the start of each batch, the operator provides the following two parameters:

  1. FAIR_L2_GAS_PRICE. This variable denotes the minimal L2 gas price that the operator is willing to accept. It is expected to cover the cost of proving/executing a single unit of zkEVM gas, the potential contribution of a single unit of gas towards sealing the batch, as well as congestion.
  2. FAIR_PUBDATA_PRICE, the price of a single pubdata byte in Wei. Similar to the variable above, it is expected to cover the cost of publishing a single byte as well as the potential contribution of a single pubdata byte towards sealing the batch.

In the descriptions above, by "contribution towards sealing the batch" we mean that if batches are most often closed because a certain resource (e.g. pubdata) runs out, then the price of that resource should include this cost.

The baseFee and gasPerPubdata are then calculated as:

baseFee := max(
    fairL2GasPrice,
    ceilDiv(fairPubdataPrice, MAX_L2_GAS_PER_PUBDATA())
)
gasPerPubdata := ceilDiv(fairPubdataPrice, baseFee)

While the way we charge for pubdata in theory allows for any gasPerPubdata, some SDKs expect a transaction's gasLimit to be a uint64 number, and we would prefer the gasLimit of transactions to stay within JS's safe "number" range in case someone uses the number type to denote gas. For this reason, we bound MAX_L2_GAS_PER_PUBDATA to 2^20 gas per pubdata byte. The number is chosen such that MAX_L2_GAS_PER_PUBDATA * 2^32 is a safe JS integer; the 2^32 part is the maximal value that the pubdata counter could in theory take. It is unrealistic that this value will ever appear under an honest operator, but it is needed just in case.

Note, however, that this means that under high L1 gas prices the total gasLimit may be larger than u32::MAX, and it is recommended that no more than 2^20 bytes of pubdata be published within a single transaction.
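
A quick sanity check of the arithmetic behind this bound (a sketch; MAX_L2_GAS_PER_PUBDATA and the pubdata counter bound are as stated above, and JS's Number.MAX_SAFE_INTEGER is 2^53 - 1):

fn main() {
    const MAX_L2_GAS_PER_PUBDATA: u64 = 1u64 << 20; // 2^20 gas per pubdata byte
    const MAX_PUBDATA_COUNTER: u64 = 1u64 << 32; // theoretical bound on the pubdata counter
    const JS_MAX_SAFE_INTEGER: u64 = (1u64 << 53) - 1; // Number.MAX_SAFE_INTEGER

    // 2^20 * 2^32 = 2^52, which stays below 2^53 - 1, so the worst-case
    // gasLimit still fits into a JS "number" without losing precision.
    assert!(MAX_L2_GAS_PER_PUBDATA * MAX_PUBDATA_COUNTER <= JS_MAX_SAFE_INTEGER);
    println!("2^20 * 2^32 = {}", MAX_L2_GAS_PER_PUBDATA * MAX_PUBDATA_COUNTER);
}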

Let's define the following constants:

  • BATCH_OVERHEAD_L1_GAS - The L1 gas overhead for a batch (proof verification, etc).
  • COMPUTE_OVERHEAD_PART - The constant that represents the possibility that a batch is sealed because of overuse of computation resources. It ranges from 0 to 1. If it is 0, the compute price does not depend on the cost of closing the batch. If it is 1, the gas limit per batch has to cover the entire cost of closing the batch.
  • MAX_GAS_PER_BATCH - The maximum amount of gas that can be used by the batch. This value is derived from the circuits' limitation per batch.
  • PUBDATA_OVERHEAD_PART - The constant that represents the possibility that a batch is sealed because of overuse of pubdata. It ranges from 0 to 1. If it is 0, the pubdata price does not depend on the cost of closing the batch. If it is 1, the pubdata limit per batch has to cover the entire cost of closing the batch.
  • MAX_PUBDATA_PER_BATCH - The maximum amount of pubdata that can be used by the batch. Note that if the calldata is used as pubdata, this variable should not exceed 128kb.

And the following fluctuating variables:

  • MINIMAL_L2_GAS_PRICE - The minimal acceptable L2 gas price, i.e. the price that should include the cost of computation/proving as well as potential premium for congestion.
  • PUBDATA_BYTE_ETH_PRICE - The minimal acceptable price in ETH per each byte of pubdata. It should generally be equal to the expected price of a single blob byte or calldata byte (depending on the approach used).

Then:

  1. FAIR_L2_GAS_PRICE = MINIMAL_L2_GAS_PRICE + COMPUTE_OVERHEAD_PART * BATCH_OVERHEAD_L1_GAS / MAX_GAS_PER_BATCH
  2. FAIR_PUBDATA_PRICE = PUBDATA_BYTE_ETH_PRICE + PUBDATA_OVERHEAD_PART * BATCH_OVERHEAD_L1_GAS / MAX_PUBDATA_PER_BATCH
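
Putting the pieces together, here is a minimal sketch of the whole derivation. All numeric values are placeholders chosen for illustration; the conversion of BATCH_OVERHEAD_L1_GAS into Wei via an assumed L1 gas price is left implicit in the formulas above and is made explicit here as an assumption.

// A minimal sketch of the derivation above, not the production fee model code.
// All numeric values are placeholders; the l1_gas_price conversion is an
// assumption made to turn the L1 gas overhead into Wei.

fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn main() {
    const MAX_L2_GAS_PER_PUBDATA: u64 = 1u64 << 20;

    // Fluctuating inputs (placeholder values, prices in Wei).
    let minimal_l2_gas_price: u64 = 100_000_000; // MINIMAL_L2_GAS_PRICE
    let pubdata_byte_eth_price: u64 = 500_000_000; // PUBDATA_BYTE_ETH_PRICE
    let l1_gas_price: u64 = 30_000_000_000; // assumed L1 gas price, Wei

    // Batch-level constants (placeholder values).
    let batch_overhead_l1_gas: u64 = 1_000_000; // BATCH_OVERHEAD_L1_GAS
    let max_gas_per_batch: u64 = 80_000_000; // MAX_GAS_PER_BATCH
    let max_pubdata_per_batch: u64 = 120_000; // MAX_PUBDATA_PER_BATCH
    let compute_overhead_part: f64 = 0.0; // COMPUTE_OVERHEAD_PART, in [0, 1]
    let pubdata_overhead_part: f64 = 1.0; // PUBDATA_OVERHEAD_PART, in [0, 1]

    // Batch overhead expressed in Wei (conversion assumed for this sketch).
    let batch_overhead_wei = batch_overhead_l1_gas * l1_gas_price;

    // 1. FAIR_L2_GAS_PRICE
    let fair_l2_gas_price = minimal_l2_gas_price
        + (compute_overhead_part * batch_overhead_wei as f64 / max_gas_per_batch as f64) as u64;
    // 2. FAIR_PUBDATA_PRICE
    let fair_pubdata_price = pubdata_byte_eth_price
        + (pubdata_overhead_part * batch_overhead_wei as f64 / max_pubdata_per_batch as f64) as u64;

    // baseFee and gasPerPubdata, as in the pseudocode earlier in this section.
    let base_fee = fair_l2_gas_price.max(ceil_div(fair_pubdata_price, MAX_L2_GAS_PER_PUBDATA));
    let gas_per_pubdata = ceil_div(fair_pubdata_price, base_fee);

    println!("fairL2GasPrice = {fair_l2_gas_price} Wei");
    println!("fairPubdataPrice = {fair_pubdata_price} Wei");
    println!("baseFee = {base_fee}, gasPerPubdata = {gas_per_pubdata}");
}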

For L1→L2 transactions, the MAX_GAS_PER_BATCH variable is equal to L2_TX_MAX_GAS_LIMIT (since this amount of gas is enough to publish the maximal amount of pubdata in the batch). Also, for additional security, for L1→L2 transactions COMPUTE_OVERHEAD_PART = PUBDATA_OVERHEAD_PART = 1, since we are not sure what exactly will be the reason for closing the batch. For L2 transactions, typically COMPUTE_OVERHEAD_PART = 0, since, unlike with L1→L2 transactions, in case of an attack the operator can simply censor bad transactions or increase the FAIR_L2_GAS_PRICE, and so the operator can use average values for better UX.

Note on operator’s responsibility

To reiterate, the formulas above are used on L1 for L1→L2 transactions to protect the operator from malicious transactions. For L2 transactions, however, it is solely the operator's responsibility to provide the correct values. It is designed this way to give ZK Stack operators more fine-grained control over the system (including Validiums, maybe Era on top of another L1, etc.).

This fee model also provides a very high degree of flexibility to the operator: if we find out that we earn too much from a certain component, we can simply amend how the fair L2 gas price and the fair pubdata price are generated (there is no further enforcement on the bootloader side).

In the long run, the consensus will ensure the correctness of these values on the main ZKsync Era (or maybe we’ll transition to a different system).

Overhead for transaction slot and memory

We also have a limit on the amount of memory that can be consumed within a batch, as well as on the number of transactions that can be included in it.

To simplify the codebase we've chosen the following constants:

  • TX_OVERHEAD_GAS = 10000 -- the overhead in gas for including a transaction into a batch.
  • TX_MEMORY_OVERHEAD_GAS = 10 -- the overhead for consuming a single byte of bootloader memory.

We've used roughly the following formulae to derive these values:

  1. TX_OVERHEAD_GAS = MAX_GAS_PER_BATCH / MAX_TXS_IN_BATCH. For L1→L2 transactions we used MAX_GAS_PER_BATCH = 80kk and MAX_TXS_IN_BATCH = 10k. MAX_GAS_PER_BATCH / MAX_TXS_IN_BATCH = 8k, but we decided to use the 10k value to better take into account the load on the operator from storing the information about the transaction.
  2. TX_MEMORY_OVERHEAD_GAS = MAX_GAS_PER_BATCH / MAX_MEMORY_FOR_BATCH. For L1→L2 transactions we used MAX_GAS_PER_BATCH = 80kk and MAX_MEMORY_FOR_BATCH = 32 * 600_000. MAX_GAS_PER_BATCH / MAX_MEMORY_FOR_BATCH ≈ 4, but we decided to use the 10 gas value to better take into account the load on the operator from storing the information about the transaction.
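
A quick check of the arithmetic above, using the numbers stated in this section (rounding up to 10k and 10 gas is the operator-side margin mentioned):

fn main() {
    let max_gas_per_batch: u64 = 80_000_000; // "80kk"
    let max_txs_in_batch: u64 = 10_000; // "10k"
    let max_memory_for_batch: u64 = 32 * 600_000; // 19_200_000 bytes

    // 80_000_000 / 10_000 = 8_000, rounded up to TX_OVERHEAD_GAS = 10_000.
    println!("per-slot overhead: {}", max_gas_per_batch / max_txs_in_batch);
    // 80_000_000 / 19_200_000 ≈ 4, rounded up to TX_MEMORY_OVERHEAD_GAS = 10.
    println!("per-byte overhead: {}", max_gas_per_batch / max_memory_for_batch);
}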

Future work will focus on removing the limit on the number of transaction slots completely, as well as on increasing the memory limit.

Note on L1→L2 transactions

The formulas above apply to L1→L2 transactions. However, note that gas_per_pubdata is still kept constant at 800. This means that a higher baseFee may be used for L1→L2 transactions to ensure that gas_per_pubdata remains at that value regardless of the price of pubdata.
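
As an illustration (a sketch only, assuming the same ceilDiv relation as in the pseudocode above), keeping gas_per_pubdata at 800 for L1→L2 transactions amounts to raising the baseFee whenever pubdata is expensive:

// Sketch only: the baseFee needed so that ceilDiv(fairPubdataPrice, baseFee)
// does not exceed the constant 800 used for L1→L2 transactions.
fn ceil_div(a: u64, b: u64) -> u64 {
    (a + b - 1) / b
}

fn l1_to_l2_base_fee(fair_l2_gas_price: u64, fair_pubdata_price: u64) -> u64 {
    const L1_TO_L2_GAS_PER_PUBDATA: u64 = 800;
    // Take the larger of the fair L2 gas price and the price needed to keep
    // gasPerPubdata at 800, however expensive pubdata becomes.
    fair_l2_gas_price.max(ceil_div(fair_pubdata_price, L1_TO_L2_GAS_PER_PUBDATA))
}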

Refunds

Note that the constants used in the fee model are probabilistic, i.e. we never know in advance the exact reason why a batch will be sealed. These constants are meant to cover the operator's expenses over a longer period, so we do not refund a transaction that was charged more overhead than the degree to which it actually brought the batch closer to being closed; these funds are used to cover transactions that did not pay in full for the limited batch resources they used.

Refunds for repeated writes

ZKsync Era is a state diff-based rollup, i.e. the pubdata is published not for transactions, but for storage changes. This means that whenever a user writes into a storage slot, it incurs a certain amount of pubdata. However, not all writes are equal:

  • If a slot has already been written to in one of the previous batches, it has received a short ID, which allows it to require less pubdata in the state diff.
  • Depending on the value written into a slot, various compression optimizations could be used, and we should reflect that too.
  • The slot may have already been written to in this batch, in which case we don’t have to charge anything for it.

You can read more about how we treat the pubdata here.
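
As a rough sketch of the three cases above (illustrative only; the byte counts are placeholders and do not reflect the actual state diff compression format), the pubdata charged for a storage write could be modelled like this:

// Illustration of the three cases above; the byte counts are placeholders and
// do not reflect the actual state diff compression format.
enum SlotHistory {
    WrittenEarlierInThisBatch, // already paid for in this batch
    WrittenInPreviousBatch,    // has a short ID from an earlier batch
    FreshSlot,                 // first write ever, full key must be published
}

fn pubdata_bytes_for_write(history: SlotHistory, compressed_value_len: usize) -> usize {
    match history {
        SlotHistory::WrittenEarlierInThisBatch => 0,
        SlotHistory::WrittenInPreviousBatch => 8 + compressed_value_len, // placeholder short-ID size
        SlotHistory::FreshSlot => 32 + compressed_value_len,             // placeholder full-key size
    }
}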

The important part here is that while such refunds are inlined (i.e. unlike the refunds for overhead they happen in place during execution and not after the whole transaction has been processed), they are enforced by the operator. Right now, the operator is the one who decides what refund to provide.

