Validium in ZK Stack
What is a DA Layer?
A Data Availability layer (or DA Layer) is a blockchain network optimized for storing data in a cost-effective way.
A DA layer does not prove the correctness of an L2 chain.
Often DA layers are used in tandem with Ethereum for consensus. A DA layer can prove on Ethereum that some specific data exists in its network. By extension, this means it can prove that data exists in an L2 blockchain that uses that DA layer.
The ZKsync protocol for the Elastic Network then uses the DA layer's contracts on Ethereum to verify the inclusion proofs it receives from the chain.
What data does it store?
The DA layer receives and stores pubdata of L1 batches from a ZKsync chain. Pubdata consists of the state diffs, L2 to L1 logs, L2 to L1 messages, and published bytecodes.
After the data is sent to the DA layer, a corresponding entry is created in the node's `data_availability` table.
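To make the shape of pubdata concrete, below is a minimal Rust sketch of the four components listed above. The type and field names are illustrative only and do not mirror the actual zksync-era definitions or encoding.

```rust
/// Illustrative grouping of the pubdata components published for an L1 batch.
/// Names and types are simplified for readability; they are not the actual
/// zksync-era definitions.
struct Pubdata {
    /// Compressed changes to the L2 state (key-value diffs).
    state_diffs: Vec<u8>,
    /// L2 -> L1 logs emitted during the batch.
    l2_to_l1_logs: Vec<Vec<u8>>,
    /// Long-form L2 -> L1 messages referenced by the logs.
    l2_to_l1_messages: Vec<Vec<u8>>,
    /// Bytecodes published during the batch.
    published_bytecodes: Vec<Vec<u8>>,
}

impl Pubdata {
    /// Concatenate all components into a single byte blob to hand to the DA
    /// layer (simplified: the real encoding also includes counts and lengths).
    fn to_bytes(&self) -> Vec<u8> {
        let mut out = Vec::new();
        out.extend_from_slice(&self.state_diffs);
        for log in &self.l2_to_l1_logs {
            out.extend_from_slice(log);
        }
        for msg in &self.l2_to_l1_messages {
            out.extend_from_slice(msg);
        }
        for code in &self.published_bytecodes {
            out.extend_from_slice(code);
        }
        out
    }
}
```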
Types of Validium
There are 3 different types of validiums based on the level of data availability they provide:
- Stage 0 - the simplest type of validium that only stores its pubdata in the database of the node(s) running the chain.
- Stage 1 - a validium that only sends its pubdata to the DA layer, but doesn’t verify its inclusion onchain.
- Stage 2 - a validium that sends its pubdata to the DA layer, and also verifies its inclusion on the L1, either by using verification bridges or ZK proofs directly.
ZK Stack will be integrated with multiple DA layers; the first ones on the list are Avail, Celestia, and EigenDA.
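For reference, the classification above can be pictured as a simple ordered enum. This is purely illustrative; it is not a type defined anywhere in the ZK Stack codebase.

```rust
/// Levels of data availability a Validium can provide (illustrative only).
enum ValidiumStage {
    /// Stage 0: pubdata is kept only in the database of the node(s) running the chain.
    LocalOnly,
    /// Stage 1: pubdata is sent to a DA layer, but its inclusion is not verified onchain.
    DispatchedToDa,
    /// Stage 2: pubdata is sent to a DA layer and its inclusion is verified on L1,
    /// either via verification bridges or ZK proofs.
    DispatchedAndVerified,
}
```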
How is DA validated?
The DA inclusion verification process consists of two steps, with a corresponding smart contract responsible for each of them:
- Committing to the pubdata, done by the `L2DAValidator` contract.
- Using the commitment and the data provided by the operator to verify that the pubdata was included in the DA layer, done by the `L1DAValidator` contract.
L2DAValidator
The commitment is passed from `L2DAValidator` to `L1DAValidator` via L2→L1 logs. This happens within the system contracts' execution, so the process can be trusted as it is covered by the ZK proof. The format of the commitment is arbitrary; it is up to the `L2DAValidator` to decide which one suits the specific DA layer best.
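As a concrete but simplified example, assume an `L2DAValidator` whose commitment is just the keccak256 hash of the full pubdata. The operator can then recompute the same commitment off-chain before dispatching the blob. This is only a sketch of one possible scheme; real validators may use a different, DA-layer-specific commitment format.

```rust
// Sketch: recompute a hypothetical "hash of full pubdata" commitment off-chain.
// Assumes the L2DAValidator commits to keccak256(pubdata); this is an
// illustrative scheme, not the one every DA layer uses.
use sha3::{Digest, Keccak256};

fn pubdata_commitment(pubdata: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update(pubdata);
    hasher.finalize().into()
}

fn main() {
    let pubdata = b"example pubdata bytes";
    let commitment = pubdata_commitment(pubdata);
    println!("commitment = 0x{}", hex::encode(commitment));
}
```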
L1DAValidator
The `IL1DAValidator` contract interface has one function, `checkDA`, which checks the data availability for the given batch input.
It returns a struct with the state diff hash, the hashes of the blobs on L1, and the commitments to the blobs on L1.
Separate contracts that implement this interface are deployed for each DA layer. These contracts are stateless and can't be proxies, meaning that they are immutable once deployed.
The `L1DAValidator` implementations may have different logic from DA layer to DA layer, but the ZKsync protocol only requires them to return whether the commitment and proof pair is valid.
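To summarize what the protocol expects from this layer, here is a conceptual Rust model of the check described above. It is not the Solidity interface itself: the struct, trait, and parameter names are invented for illustration and only mirror the description of `checkDA`'s inputs and outputs.

```rust
/// Output of a successful DA check, mirroring the description of `checkDA`:
/// the state diff hash plus the L1 blob hashes and blob commitments.
/// (Conceptual Rust model, not the actual Solidity types.)
struct DaCheckOutput {
    state_diff_hash: [u8; 32],
    blob_hashes: Vec<[u8; 32]>,
    blob_commitments: Vec<[u8; 32]>,
}

/// What the protocol requires from any DA-layer-specific validator:
/// decide whether the commitment and the operator-provided data form a valid pair.
trait L1DaValidator {
    fn check_da(
        &self,
        l2_da_commitment: [u8; 32],
        operator_da_input: &[u8],
    ) -> Result<DaCheckOutput, String>;
}
```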
Approaches
There are two approaches that can be used here: verification bridges and ZK proofs. The chain operator is able to change the `L1DAValidator` and `L2DAValidator` implementations, which allows applying updates and optimizations or completely changing the DA verification logic, e.g. when migrating to a different DA layer.
- Verification bridges: these are smart contracts that get updated with the state roots of specific DA layers. They utilize ZK proofs to verify the validity of the provided state roots. Later on, the verification bridges can be called from the `L1DAValidator` contract and normally use Merkle proofs to verify against the known state roots (see the sketch after this list).
- ZK proofs: similarly to the verification bridges, it's possible to utilize ZK proofs here, but directly in the `L1DAValidator`. If we go a step further and produce a ZK proof that:
  - completes the assertions that the verification bridges do to ensure the validity of the state root,
  - verifies that the pubdata commitment relates to the data included in the data root of some specific block, and
  - verifies that the data root is included in the state root,

  then it would be possible to verify only that proof, without additional checks against the bridges. This solution is especially relevant for DA layers whose commitments are inefficient to produce on the EVM.
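The Merkle-proof check against a known state root is a standard pattern; below is a minimal, self-contained Rust sketch of it. It assumes keccak256 hashing with a sorted-pair convention. The actual verification bridges may use different tree layouts and hashing rules, so treat this purely as an illustration of the idea.

```rust
// Minimal Merkle-proof verification sketch. Assumes a binary Merkle tree where
// parent = keccak256(left || right) with the pair sorted lexicographically;
// real verification bridges may use a different tree layout and hashing rule.
use sha3::{Digest, Keccak256};

fn keccak256(data: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update(data);
    hasher.finalize().into()
}

/// Returns true if `leaf` is proven to be part of the tree with root `root`
/// by the sibling hashes in `proof`.
fn verify_merkle_proof(leaf: [u8; 32], proof: &[[u8; 32]], root: [u8; 32]) -> bool {
    let mut node = leaf;
    for sibling in proof {
        let mut buf = [0u8; 64];
        // Sorted-pair convention: hash the smaller node first.
        if node <= *sibling {
            buf[..32].copy_from_slice(&node);
            buf[32..].copy_from_slice(sibling);
        } else {
            buf[..32].copy_from_slice(sibling);
            buf[32..].copy_from_slice(&node);
        }
        node = keccak256(&buf);
    }
    node == root
}
```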
Working with Validiums
Server-related details
Getting a bit deeper into the technicalities, the process of persisting the pubdata in the different DA layers is handled by two components, the DA dispatcher and the DA client:
- The DA client's role is to abstract access to the DA layer and generalize it to three functions: `dispatch_blob`, `ensure_finality` and `get_inclusion_data`. Clients are a part of the `zksync-era` repository, see the code. A simplified sketch of such an interface is shown after this list.
- The DA dispatcher is responsible for periodically fetching the pubdata from the database and calling the DA client to dispatch a blob, check that it was finalized, or get an inclusion proof for it. It also handles retries in case a client call fails with a retryable error.
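The three client functions above can be pictured as a small async trait. This is a simplified sketch under assumed type and trait names; the real trait lives in the zksync-era repository and differs in detail.

```rust
// Simplified sketch of a DA client interface with the three functions named
// above. Type and trait names here are assumptions for illustration only.
use async_trait::async_trait;

/// Identifier returned by the DA layer after a blob has been submitted.
struct DispatchResponse {
    blob_id: String,
}

/// Proof data that lets the L1DAValidator verify inclusion of the blob.
struct InclusionData {
    data: Vec<u8>,
}

#[async_trait]
trait DataAvailabilityClient {
    /// Submit the pubdata of one L1 batch to the DA layer.
    async fn dispatch_blob(
        &self,
        batch_number: u32,
        pubdata: Vec<u8>,
    ) -> Result<DispatchResponse, String>;

    /// Check whether a previously dispatched blob has been finalized by the DA layer.
    async fn ensure_finality(&self, blob_id: &str) -> Result<bool, String>;

    /// Fetch the inclusion proof for a finalized blob; `None` if it is not ready yet.
    async fn get_inclusion_data(&self, blob_id: &str) -> Result<Option<InclusionData>, String>;
}
```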
The DA-related information is stored in the `data_availability` table in the database.
A Validium's new batches can't be committed if their inclusion data in this table is `NULL`.
Configuration
The Validium chain is required to have the following configuration:
- Set the pubdata sending mode in `general.yaml`:

  ```yaml
  pubdata_sending_mode: CUSTOM
  ```

- Configure the DA dispatcher in `general.yaml` (reference values below):

  ```yaml
  da_dispatcher:
    polling_interval_ms: 5000
    max_rows_to_dispatch: 100
    max_retries: 5
    use_dummy_inclusion_data: false
  ```
- Add the DA client's config and secrets; see the FAQ for details.
All the configs can also be set using environment variables instead of `.yaml` files, but this approach is less preferred.
Deployed L1 DA Validators
Ethereum
|        | Mainnet | Sepolia |
| ------ | ------- | ------- |
| Rollup | 0x72213dfe8CA61B0A782970dCFebFb877778f9119 | 0xcc46b186bd4515fa996adf3c40344ed7d546a65b |
| NoDA   | 0x907b30407249949521Bf0c89A43558dae200146A | 0xacaf20636f1b57aa55f9cb1ac07d1fd9d8a84f00 |
| Avail  | 0x8f50d93B9955B285f787043B30B5F51D09bE0120 | 0x73D59FE232fCe421d1365D6A5BEEc49aCDE3d0d9 |
ZKsync Gateway
|        | Mainnet | Sepolia |
| ------ | ------- | ------- |
| Rollup | 0x595b8C88B9e5f3a4c596C3e81BE6e11D53Bb9200 | 0x931661BAdBc221bd686446F17A0EA40aa5196779 |
| NoDA   | 0xC6f5162f09bd1f6C5e3aBB8F8589B7A8E73CCa5C | 0xae56AB13f57225CefAF4A6D921Ec3C7d606450ea |
| Avail  | 0x23B5c45dfDcAe9EAD887d58201D5f7479f7d3214 | 0x23B5c45dfDcAe9EAD887d58201D5f7479f7d3214 |