Linea Plonk Verifier

1 Executive Summary

This report presents the results of our engagement with Linea to review the Linea Plonk Verifier.

The review was conducted from May 8, 2023 to June 23, 2023 by Rai Yang, Tejaswa Rastogi, and David Pearce, together with Joanne Fuller and Horacio Mijail from the formal verification team. A total of 120 person-days were spent.

The assessment team made a best-effort review of the verifier, as the team lacked a professional cryptographic background. We reviewed the verifier implementation against the Plonk paper and the Gnark prover implementation; in particular, we reviewed the BSB22 commitment custom gate added to the original Plonk gates, and discovered a substantial number of issues, including some of critical and high severity. We strongly recommend a thorough test of the verifier with the other components of Linea (e.g. sequencer, prover, etc.) on testnet after fixing the issues, before launching on the mainnet.

Given the lack of professional cryptographic review of the code and limited time, it is probable that the codebase contains additional high or critical-severity issues not identified in this review. Consequently, as the system evolves, a thorough and continuous assessment of the code for potential vulnerabilities is recommended.

2 Scope

Our review focused on the commit hash b3533f22f2a67a2c4c3e42bbf70f1c5c0af42336. The list of files in scope can be found in the Appendix.

2.1 Objectives

Together with the Linea team, we identified the following priorities for our review:

  1. Correctness of the Verifier’s implementation: consistent with the Plonk paper and the Gnark prover, and without unintended edge cases.
  2. The BSB22 commitment custom gate
  3. Proof soundness and completeness
  4. Identify known vulnerabilities particular to smart contract systems, as outlined in our Smart Contract Best Practices, and the Smart Contract Weakness Classification Registry.

3 Security Specification

This section describes, from a security perspective, the expected behavior of the system under audit. It is not a substitute for documentation. The purpose of this section is to identify specific security properties that were validated by the audit team.

3.1 Actors

The relevant actors are listed below with their respective abilities:

  • Prover: Proves the correctness of the L2 block data (transactions) and state submitted by the sequencer
  • Verifier: Verifies that the proof and the corresponding public inputs (L2 state and BSB22 commitment) are valid

3.2 Trust Model

Below is a non-exhaustive list of trust assumptions that we formulated while reviewing the system:

  • The CRS is generated by a proper Plonk trusted setup; in this case it is a toy CRS created by a single party, and it will be replaced by the CRS generated by Aztec’s Ignition trusted setup for the mainnet launch
  • The zkEVM circuit is correct
  • L1 ( Ethereum ) is secure

3.3 Security Properties

The following is a non-exhaustive list of security properties that were verified in this audit:

  • Proofs are sound and complete, i.e. an invalid proof will not pass verification and a valid proof will.

4 Findings

Each issue has an assigned severity:

  • Minor issues are subjective in nature. They are typically suggestions around best practices or readability. Code maintainers should use their own judgment as to whether to address such issues.
  • Medium issues are objective in nature but are not security vulnerabilities. These should be addressed unless there is a clear reason not to.
  • Major issues are security vulnerabilities that may not be directly exploitable or may require certain conditions in order to be exploited. All major issues should be addressed.
  • Critical issues are directly exploitable security vulnerabilities that need to be fixed.

4.1 No Proper Trusted Setup Critical  Acknowledged

Description

Linea uses the Plonk proof system, which needs a preprocessed CRS (Common Reference String) for proving and verification. The security of the Plonk system rests on a trusted setup ceremony to compute the CRS. The current verifier uses a CRS created by a single party, which requires fully trusting that party to delete the toxic waste (trapdoor); otherwise the trapdoor can be used to generate forged proofs, undermining the security of the entire system.

contracts/Verifier.sol:L29-L37

uint256 constant g2_srs_0_x_0 = 11559732032986387107991004021392285783925812861821192530917403151452391805634;
uint256 constant g2_srs_0_x_1 = 10857046999023057135944570762232829481370756359578518086990519993285655852781;
uint256 constant g2_srs_0_y_0 = 4082367875863433681332203403145435568316851327593401208105741076214120093531;
uint256 constant g2_srs_0_y_1 = 8495653923123431417604973247489272438418190587263600148770280649306958101930;

uint256 constant g2_srs_1_x_0 = 18469474764091300207969441002824674761417641526767908873143851616926597782709;
uint256 constant g2_srs_1_x_1 = 17691709543839494245591259280773972507311536864513996659348773884770927133474;
uint256 constant g2_srs_1_y_0 = 2799122126101651639961126614695310298819570600001757598712033559848160757380;
uint256 constant g2_srs_1_y_1 = 3054480525781015242495808388429905877188466478626784485318957932446534030175;

Recommendation

Conduct a proper MPC to generate the CRS, such as the Powers of Tau MPC, or use a trustworthy CRS generated by an existing audited trusted setup such as Aztec’s Ignition.

4.2 Broken Lagrange Polynomial Evaluation at Zeta Critical  Acknowledged

Description

The Verifier calculates the Lagrange polynomial at ζ with an efficient scheme: Lᵢ(ζ) = (ωⁱ/n) · (ζⁿ − 1)/(ζ − ωⁱ)

which has also been pointed out in the Plonk paper. However, the computation ignores the fact that ζ can also be a root of unity, which means ζⁿ − 1 will be 0 for any ζ that is a root of unity.

Thus, the formula will yield the Lagrange polynomial evaluation as 0, which is incorrect, because the defining property of the Lagrange polynomial is: Lⱼ(ωⁱ) = 1 if i = j, and 0 otherwise, for any ωⁱ in the domain H = {ωⁱ : 0 ≤ i < n} (n being the domain size).

Another way of calculating the Lagrange polynomial at zeta is: Lⱼ(ζ) = yⱼ · ∏_{0 ≤ m ≤ k, m ≠ j} (ζ − xₘ)/(xⱼ − xₘ) (k being the degree of the polynomial)

If we consider the same evaluation for ζ at the root of unity in the second formula, it will correctly satisfy the property of the Lagrange polynomial stated above.

Hence, the computation needs to be fixed to handle the case highlighted above.

The problematic instances can be found in functions:

  • compute_ith_lagrange_at_z
  • batch_compute_lagranges_at_z
  • compute_alpha_square_lagrange_0

Recommendation

Consider adopting a strategy to use the second formula for the computation of Lagrange Polynomial evaluation at ζ if ζ is a root of unity.
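
For illustration, the following is a minimal sketch (not the audited code) of such a guard, reusing the contract’s pow helper and verification-key constants; the parameter w stands for ωⁱ. On the domain, the second formula reduces to a direct evaluation: 1 if ζ = ωⁱ, and 0 otherwise.

function compute_ith_lagrange_at_z_safe(zeta, w, mPtr) -> res {
  let zn := pow(zeta, vk_domain_size, mPtr)                               // ζⁿ
  switch zn
  case 1 {
    // ζ is a root of unity: Lᵢ(ζ) = 1 iff ζ = ωⁱ, and 0 otherwise
    res := eq(zeta, w)
  }
  default {
    // the efficient formula: ωⁱ(ζⁿ − 1) / (n(ζ − ωⁱ))
    let num := mulmod(w, addmod(zn, sub(r_mod, 1), r_mod), r_mod)
    let den := mulmod(vk_domain_size, addmod(zeta, sub(r_mod, w), r_mod), r_mod)
    res := mulmod(num, pow(den, sub(r_mod, 2), mPtr), r_mod)              // Fermat inverse
  }
}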

4.3 Broken Logic for Modular Multiplicative Inverse Critical  Acknowledged

Description

The multiplicative inverse of an element α in a finite field F_{pⁿ} can be calculated as α^(pⁿ − 2). α can be any field element except 0 or the point at infinity.

This makes sense, as there exists no field element x such that 0 · x ≡ 1 mod p.

However, inverting 0 is allowed here, and it is calculated like any other field element. The computation does not revert, because 0 raised to any positive power modulo p yields 0.

Thus the calculation embodies a broken logic that defines the modular multiplicative inverse of 0 as 0.

Recommendation

The modular inverse of 0 and the point at infinity can introduce mathematical flaws throughout the system; this computation therefore requires the utmost attention. Revert whenever an inversion of 0 is attempted, as sketched below.
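
A minimal sketch of such a guard, assuming inversion via Fermat’s little theorem with the contract’s pow helper:

// revert instead of silently returning 0 as the "inverse" of 0
function inverse_safe(a, mPtr) -> inv {
  if iszero(mod(a, r_mod)) { revert(0, 0) }
  inv := pow(a, sub(r_mod, 2), mPtr)
}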

4.4 Missing Verification of Pairing Check Result Critical ✓ Fixed

Description

In the function batch_verify_multi_points, the SNARK pairing check is done by calling the pairing precompile: let l_success := staticcall(sub(gas(), 2000),8,mPtr,0x180,0x00,0x20). Only the execution status is stored in the final success state (state_success); the pairing check result, which is written to 0x00, is neither stored nor checked. This means that if the pairing check result is 0 (pairing check failed), the proof would still pass verification; e.g. an invalid proof with an incorrect proof element proof_openings_selector_commit_api_at_zeta would pass the pairing check. As a result, the SNARK pairing verification is broken.

Examples

contracts/Verifier.sol:L586-L588

let l_success := staticcall(sub(gas(), 2000),8,mPtr,0x180,0x00,0x20)
// l_success := true
mstore(add(state, state_success), and(l_success,mload(add(state, state_success))))

Another example is, if either of the following is sent as a point at infinity or (0,0) as (x,y) co-ordinate:

  • commitment to the opening proof polynomial Wz
  • commitment to the opening proof polynomial Wzw

The proof will still work, since the pairing result is not being checked.

Recommendation

Verify the pairing check result and fold it into the final success state after calling the pairing precompile.
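
A sketch of the fix, based on the excerpt above; the pairing precompile writes 1 to the output word (here 0x00) only when the pairing equation holds:

let l_success := staticcall(sub(gas(), 2000),8,mPtr,0x180,0x00,0x20)
// fold in both the call status and the pairing check result itself
l_success := and(l_success, mload(0x00))
mstore(add(state, state_success), and(l_success,mload(add(state, state_success))))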

4.5 Gas Griefing and Missing Return Status Check for staticcall(s) May Lead to Unexpected Outcomes Critical  Partially Addressed

Description

The gas supplied to the staticcall(s) is calculated by subtracting 2000 from the gas remaining at that point in time. However, if not provided with enough gas, a staticcall may fail and return no data, and execution will continue with the stale data previously located at the memory offset designated for the return data.

1- Predictable Derivation of Challenges

The function derive_gamma_beta_alpha_zeta is used to derive the challenge values gamma, beta, alpha, and zeta. These values are derived from the prover’s transcript by hashing defined parameters and are supposed to be unpredictable by either the prover or the verifier. The hash is computed with the SHA2-256 precompile. The values are considered unpredictable on the assumption that SHA2-256 acts as a random oracle and that it would be computationally infeasible for an attacker to find the pre-image of gamma. However, that assumption breaks if the precompile call silently fails.

Examples

contracts/Verifier.sol:L261

pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), size, mPtr, 0x20)) //0x1b -> 000.."gamma"

contracts/Verifier.sol:L269

pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1c), 0x24, mPtr, 0x20)) //0x1b -> 000.."gamma"

contracts/Verifier.sol:L279

pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), 0x65, mPtr, 0x20)) //0x1b -> 000.."gamma"

contracts/Verifier.sol:L293

pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1c), 0xe4, mPtr, 0x20))

contracts/Verifier.sol:L694

pop(staticcall(sub(gas(), 2000), 0x2, add(mPtr,start_input), size_input, add(state, state_gamma_kzg), 0x20))

If the staticcall(s) fail, the challenge values become predictable, which may help the prover forge proofs and launch other adversarial attacks.

2- Incorrect Exponentiation

Functions compute_ith_lagrange_at_z, compute_pi, and verify compute modular exponentiation by making a staticcall to the precompile modexp as:

contracts/Verifier.sol:L335

pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,0x00,0x20))

contracts/Verifier.sol:L441

pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,mPtr,0x20))

contracts/Verifier.sol:L889

pop(staticcall(sub(gas(), 2000),0x05,mPtr,0xc0,mPtr,0x20))

However, if not supplied with enough gas, the staticcall(s) will fail and return no result, and execution will continue with the stale data.

3- Incorrect Point Addition and Scalar Multiplication

contracts/Verifier.sol:L555

pop(staticcall(sub(gas(), 2000),7,folded_evals_commit,0x60,folded_evals_commit,0x40))

contracts/Verifier.sol:L847

let l_success := staticcall(sub(gas(), 2000),6,mPtr,0x80,dst,0x40)

contracts/Verifier.sol:L858

let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)

contracts/Verifier.sol:L868

let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,mPtr,0x40)

contracts/Verifier.sol:L871

l_success := and(l_success, staticcall(sub(gas(), 2000),6,mPtr,0x80,dst, 0x40))

For the same reason, point_add, point_mul, and point_acc_mul can return incorrect results. As a matter of fact, point_acc_mul will not revert even if the scalar multiplication fails in the first step, because the memory location specified for the return offset will still contain the old (x,y) coordinates of src, which are points on the curve; it will then proceed by incorrectly adding the (x,y) coordinates of dst to them.

However, it is not practical to conduct a gas-griefing attack on the staticcall(s) executed at the start of the top-level transaction: the attacker would need to pass an amount of gas low enough to make the staticcall fail, yet that amount would not be enough for the rest of the top-level transaction to avoid running out of gas. The attack can still be conducted on the staticcall(s) executed near the end of the top-level transaction.

Recommendation

  1. Check the return status of each staticcall and revert if any staticcall’s return status is 0 (see the sketch below).
  2. Also fix the comments attached to the staticcall(s); for instance, the function derive_beta says 0x1b -> 000.."gamma" while the memory pointer holds the ASCII value of the string beta.
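
A sketch of the first recommendation, applied to one of the hash calls above; the same pattern applies to the modexp, ECADD, and ECMUL calls:

// propagate precompile failures instead of pop()-ing the status away
let ok := staticcall(sub(gas(), 2000), 0x2, add(mPtr, 0x1b), size, mPtr, 0x20)
if iszero(ok) { revert(0, 0) }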

4.6 Verifier Doesn’t Support Zero or Multiple BSB22 Commitments Major  Acknowledged

Description

The verifier currently supports a single BSB22 commitment, since Gnark only supports a single Commit(..) call. With zero or multiple BSB22 commitments/Commit calls, the verifier would fail proof verification.

Examples

tmpl/template_verifier.go:L57-L58

uint256 constant vk_selector_commitments_commit_api_0_x = {{ (fpptr .Qcp.X).String }};
uint256 constant vk_selector_commitments_commit_api_0_y = {{ (fpptr .Qcp.Y).String }};

Recommendation

Add support for zero or multiple BSB22 commitments
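
For instance, if the verifying key exposed a slice of BSB22 commitments, the template could emit one constant pair per commitment. A hypothetical sketch, assuming .Qcp were changed to a slice:

{{ range $i, $qcp := .Qcp }}
uint256 constant vk_selector_commitments_commit_api_{{ $i }}_x = {{ (fpptr $qcp.X).String }};
uint256 constant vk_selector_commitments_commit_api_{{ $i }}_y = {{ (fpptr $qcp.Y).String }};
{{ end }}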

4.7 Missing Tests for Edge Cases Major

Description

There are no test cases for invalid proofs or public inputs, such as: proof elements not on the curve, proof elements that are points at infinity, all proof elements zero, a wrong proof element, a proof scalar element bigger than the scalar field modulus, a proof scalar element wrapping around the scalar field modulus, a public input greater than the scalar field modulus, and zero or multiple BSB22 commitments. There is only a test for a valid proof with one BSB22 commitment. Tests for all edge cases are crucial for checking proof soundness in a SNARK; missing them may result in missing critical bugs, e.g. issues 4.4 and 4.6.

Recommendation

Add missing test cases
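
For instance, a negative test of the following shape (a hypothetical harness, assuming Verify returns a bool) would have caught issue 4.4:

// flip one byte of a known-valid proof and require verification to fail
bytes memory tampered = validProof;
tampered[0x120] ^= 0x01;
require(!verifier.Verify(tampered, publicInputs), "tampered proof must not verify");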

4.8 Missing Scalar Field Range Check in Scalar Multiplication Major ✓ Fixed

Description

There is no field-element range check on the scalar-field proof elements (e.g. proof_l_at_zeta, proof_r_at_zeta, proof_o_at_zeta, proof_s1_at_zeta, proof_s2_at_zeta, proof_grand_product_at_zeta_omega), as required in step 2 of the verifier’s algorithm in the Plonk paper. The scalar multiplication functions point_mul and point_acc_mul call the ECMUL precompile which, according to EIP-196, verifies that the point P is on the curve and that P.x and P.y are less than the base field modulus; however, it does not check that the scalar s is less than the scalar field modulus. If s is greater than the scalar field modulus r_mod, the contract behaves in unintended ways: specifically, if a scalar-field proof element e is replaced by e + r_mod, the proof would still pass verification. In Plonk’s case, however, few attack vectors can be based on this kind of proof malleability.

Examples

contracts/Verifier.sol:L852-L873

function point_mul(dst,src,s, mPtr) {
  // let mPtr := add(mload(0x40), state_last_mem)
  let state := mload(0x40)
  mstore(mPtr,mload(src))
  mstore(add(mPtr,0x20),mload(add(src,0x20)))
  mstore(add(mPtr,0x40),s)
  let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,dst,0x40)
  mstore(add(state, state_success), and(l_success,mload(add(state, state_success))))
}

// dst <- dst + [s]src (Elliptic curve)
function point_acc_mul(dst,src,s, mPtr) {
  let state := mload(0x40)
  mstore(mPtr,mload(src))
  mstore(add(mPtr,0x20),mload(add(src,0x20)))
  mstore(add(mPtr,0x40),s)
  let l_success := staticcall(sub(gas(), 2000),7,mPtr,0x60,mPtr,0x40)
  mstore(add(mPtr,0x40),mload(dst))
  mstore(add(mPtr,0x60),mload(add(dst,0x20)))
  l_success := and(l_success, staticcall(sub(gas(), 2000),6,mPtr,0x80,dst, 0x40))
  mstore(add(state, state_success), and(l_success,mload(add(state, state_success))))
}

Recommendation

Add a scalar-field range check either in the scalar multiplication functions point_mul and point_acc_mul or on the scalar-field proof elements themselves.
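
A sketch of the guard, added at the top of point_mul and point_acc_mul:

// reject non-canonical scalars: s must be a reduced scalar-field element
if iszero(lt(s, r_mod)) { revert(0, 0) }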

4.9 Missing Public Inputs Range Check Major ✓ Fixed

Description

The public input is an array of uint256 numbers, and there is no check that each public input is less than the SNARK scalar field modulus r_mod, as required in step 3 of the verifier’s algorithm in the Plonk paper. Since the public inputs are involved in the computation of Pi in the Plonk gate, which is performed over the SNARK scalar field, the missing check could cause a scalar-field overflow and make the verification contract fail and revert. To prevent overflow and other unintended behavior, there should be a range check on the public inputs.

Examples

contracts/Verifier.sol:L470

function Verify(bytes memory proof, uint256[] memory public_inputs)

contracts/Verifier.sol:L367-L383

sum_pi_wo_api_commit(add(public_inputs,0x20), mload(public_inputs), zeta)
pi := mload(mload(0x40))

function sum_pi_wo_api_commit(ins, n, z) {
  let li := mload(0x40)
  batch_compute_lagranges_at_z(z, n, li)
  let res := 0
  let tmp := 0
  for {let i:=0} lt(i,n) {i:=add(i,1)}
  {
    tmp := mulmod(mload(li), mload(ins), r_mod)
    res := addmod(res, tmp, r_mod)
    li := add(li, 0x20)
    ins := add(ins, 0x20)
  }
  mstore(mload(0x40), res)
}

Recommendation

Add a range check for the public inputs: require(public_inputs[i] < r_mod, "public inputs greater than snark scalar field");
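
For example, at the start of Verify (using the names from the existing signature):

for (uint256 i = 0; i < public_inputs.length; i++) {
    require(public_inputs[i] < r_mod, "public inputs greater than snark scalar field");
}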

4.10 Missing Field Element Check and on Curve Point Check for Proof Elements Major  Acknowledged

Description

There are no prime-field element checks or on-curve point checks for the proof elements proof_l_com_x, proof_l_com_y, proof_r_com_x, proof_r_com_y, proof_o_com_x, proof_o_com_y, proof_h_0_x, proof_h_0_y, proof_h_1_x, proof_h_1_y, proof_h_2_x, proof_h_2_y, proof_batch_opening_at_zeta, proof_opening_at_zeta_omega, proof_selector_commit_api_commitment, as required in

step 1: Validate ([a]₁, [b]₁, [c]₁, [z]₁, [t_lo]₁, [t_mid]₁, [t_hi]₁, [W_ζ]₁, [W_ζω]₁) ∈ G₁⁹

of the verifier’s algorithm in the Plonk paper. Although the ECADD, ECMUL, and ECPAIRING precompiles do perform field-element and curve-point checks on these elements and revert on failure, a revert inside a precompile still consumes gas and yields no error information. It is better to check explicitly and revert with an error on failure, to prevent unintended behavior of the verification contract.

Recommendation

Add field-element, group-element, and curve-point checks for the proof elements, and revert if any check fails, as sketched below.
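
A sketch (Yul) of an explicit check for a BN254 G1 point; p_mod stands for the base field modulus, a hypothetical constant not currently defined in the contract:

function check_proof_point(x, y) {
  // coordinates must be canonical base-field elements
  if iszero(and(lt(x, p_mod), lt(y, p_mod))) { revert(0, 0) }
  // BN254 curve equation y² = x³ + 3; this also rejects (0,0), the point at infinity
  let rhs := addmod(mulmod(x, mulmod(x, x, p_mod), p_mod), 3, p_mod)
  if iszero(eq(mulmod(y, y, p_mod), rhs)) { revert(0, 0) }
}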

4.11 Missing Length Check for proof Major ✓ Fixed

Description

The Verify function has the following signature:

function Verify(bytes memory proof, uint256[] memory public_inputs)

Here, proof is a dynamically sized array of bytes (padded up to the nearest word). The function derive_gamma(aproof, pub_inputs) uses this array and makes assumptions about its length; specifically, that it is (vk_nb_commitments_commit_api * 3 * 0x20) + 0x360 bytes long (when including the initial length field of the bytes array). However, there is no check that the proof supplied in the calldata (which originates within ZkEvmV2, where it is loaded into memory) has the correct length. This could result in proof and pub_inputs overlapping in memory, leading to unintended consequences.

Also, if extra bytes are mistakenly appended to the proof, proof verification is unaffected, since the Verifier does not account for any extra bytes after the y coordinate of the last commitment. It does, however, make verification more expensive, as the extra data is still copied into memory.

Recommendation

Add an appropriate length check at some point in the pipeline to ensure this doesn’t cause any unintended problems.
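
A sketch of such a check at the start of Verify, derived from the sizing described above (the 0x360 figure includes the 0x20-byte length field, which proof.length excludes):

require(
    proof.length == vk_nb_commitments_commit_api * 3 * 0x20 + 0x340,
    "wrong proof length"
);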

4.12 Allowing Program Execution Even After a Failed Step May Lead to Unnecessary Wastage of Gas Medium

Description

The Verifier stores the result of computations obtained in different steps of Verifier algorithm. The result is stored at a designated memory location state_success by doing bitwise & with the previous result, and if the final result at the end of all the steps comes out to be 1 or true, it verifies the proof.

However, it makes no sense to continue with the remaining operations once any step has failed, as proof verification will fail anyway; continuing only wastes more gas for the zkEVM Operator.

The functions which update the state_success state are:

  • point_mul
  • point_add
  • point_acc_mul
  • verify_quotient_poly_eval_at_zeta
  • batch_verify_multi_points

Recommendation

It would be best to revert the moment any step fails.

4.13 Loading Arbitrary Data as Wire Commitments Medium  Acknowledged

Description

The function load_wire_commitments_commit_api, as its name suggests, loads wire commitments from the proof into the memory array wire_commitments. The array holds 2 values per commitment, i.e. its size is 2 * vk_nb_commitments_commit_api, which makes sense as these 2 values are the x & y coordinates of each commitment.

contracts/Verifier.sol:L453-L454

uint256[] memory wire_committed_commitments = new uint256[](2*vk_nb_commitments_commit_api);
load_wire_commitments_commit_api(wire_committed_commitments, proof);

Coming back to the function load_wire_commitments_commit_api: it extracts both the x & y coordinates of a commitment in a single iteration. However, the loop runs 2 * vk_nb_commitments_commit_api times, i.e. twice as many iterations as required. For instance, if there is 1 commitment, the loop runs twice: the first iteration picks up the actual coordinates, and the second picks up whatever arbitrary data follows in the proof (if passed) and loads it into memory. Although the data loaded in the extra iteration appears harmless, it still adds processing overhead.

contracts/Verifier.sol:L307

for {let i:=0} lt(i, mul(vk_nb_commitments_commit_api,2)) {i:=add(i,1)}

Recommendation

The number of iterations should be equal to the size of commitments, i.e., vk_nb_commitments_commit_api. So consider switching from:

for {let i:=0} lt(i, mul(vk_nb_commitments_commit_api,2)) {i:=add(i,1)}

to:

for {let i:=0} lt(i, vk_nb_commitments_commit_api) {i:=add(i,1)}

4.14 Broken Logic for Batch Inverting Field Elements Minor

Description

The function compute_pi calculates the public input polynomial evaluation at ζ, without the api commit, as: PI(ζ) = ∑_{i∈[ℓ]} wᵢ·Lᵢ(ζ)

The function first calculates: n⁻¹ · (ζⁿ − 1)

and then calls a function batch_invert to find the modular multiplicative inverses of ζ − ωⁱ for 0 ≤ i < n, where n is the number of public inputs and ω⁰ = 1.

The explanation of which is provided by the author as:

Ex: if ins = [a₀, a₁, a₂] it returns [a₀^{-1},a₁^{-1}, a₂^{-1}] (the aᵢ are on 32 bytes)
mPtr is the free memory to use.

It uses the following method (example with 3 elements):
* first compute [1, a₀, a₀a₁, a₀a₁a₂]
* compute u := (a₀a₁a₂)^{-1}
* compute a₂^{-1} = u*a₀a₁, replace u by a₂*u=(a₀a₁)^{-1}
* compute a₁^{-1} = u*a₀, replace u by a₁*u = a₀^{-1}
* a₀^{-1} = u

However, it does not account for the fact that the elements passed to the function can be 0, since ζ can be a root of unity. Because the function first calculates an aggregated inverse, even a single 0 element makes the aggregated inverse 0, and therefore all the individual inverses come out as 0, which is contrary to the desired logic of finding the individual inverses.

Recommendation

Consider fixing the computation as per the recommendations in issue 4.2
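
For reference, a self-contained Solidity sketch of Montgomery’s trick with an explicit zero guard (helper names are hypothetical; the single inversion uses the modexp precompile at 0x05):

function batchInvert(uint256[] memory a, uint256 p) internal view returns (uint256[] memory inv) {
    uint256 n = a.length;
    inv = new uint256[](n);
    uint256[] memory prefix = new uint256[](n + 1);
    prefix[0] = 1;
    for (uint256 i = 0; i < n; i++) {
        require(a[i] % p != 0, "cannot invert 0"); // the missing guard
        prefix[i + 1] = mulmod(prefix[i], a[i], p); // running products a₀·…·aᵢ
    }
    // a single Fermat inversion of the full product
    (bool ok, bytes memory out) = address(0x05).staticcall(abi.encode(32, 32, 32, prefix[n], p - 2, p));
    require(ok, "modexp failed");
    uint256 u = abi.decode(out, (uint256));
    // peel off one inverse per element, back to front
    for (uint256 i = n; i > 0; i--) {
        inv[i - 1] = mulmod(u, prefix[i - 1], p); // a_{i-1}⁻¹
        u = mulmod(u, a[i - 1], p);               // drop a_{i-1} from u
    }
}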

4.15 Unused State Fields Minor  Acknowledged

Description

There are three state fields which are declared but never used in the computation:

  • state_su and state_sv
  • state_alpha_square_lagrange_one

It is unclear whether these were intended to play a part.

Examples

contracts/Verifier.sol:L138-L140

// challenges related to KZG
uint256 constant state_sv = 0x80;
uint256 constant state_su = 0xa0;

contracts/Verifier.sol:L166

uint256 constant state_alpha_square_lagrange_one = 0x200;

Recommendation

Remove the unused fields, or document the purpose for which they are reserved.

4.16 Overwritten Assignment to state_success Minor ✓ Fixed

Description

In the function verify_quotient_poly_eval_at_zeta(aproof) we have the following two lines at the end:

mstore(add(state, state_success), mload(computed_quotient))
mstore(add(state, state_success),eq(mload(computed_quotient), mload(s2)))

In essence, this writes to the state_success variable twice; hence, the first assignment is lost. It is unclear what exactly was intended here, but presumably the first assignment can be removed.

Recommendation

Determine the correct course of action: either remove the first assignment, or combine the two values with an and(), as sketched below.
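
A sketch of the second option, folding the comparison into the accumulated success flag:

mstore(add(state, state_success), and(mload(add(state, state_success)), eq(mload(computed_quotient), mload(s2))))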

4.17 PlonkVerifier.abi Is Out of Sync Minor

Description

Running make solc creates a PlonkVerifier.abi file that is empty ([]). This seems correct for the existing Verifier.sol file. In contrast, the committed PlonkVerifier.abi refers to a function PrintUint256 that is not in the current Verifier.sol.

This probably means that the committed PlonkVerifier.abi was generated from a different Verifier.sol file. It would be good to make sure that the committed files are correct.

Possibly related: the .bin files generated (using 0.8.19+commit.7dd6d404.Darwin.appleclang) differ from the committed ones, so they are probably also out of sync. Otherwise, the exact compiler used should be documented.

Recommendation

Document the compiler used. Ensure that the committed files are consistent. Ensure that there is a single source of truth (1 solidity file + compiler is better than 1 solidity file + 1 bin file that can get out of sync).

4.18 Makefile: Target Order Minor

Description

The target all in the Makefile ostensibly wants to run the targets clean and solc in that order.

all: clean solc

However, prerequisites in GNU Make are not ordered, and they might even run in parallel. In this case, this could cause spurious behavior such as overwrite errors or files being deleted just after being created.

Recommendation

The Make way to ensure that the targets run one after the other is:

all: clean
	$(MAKE) solc

Also, all should be listed in the .PHONY targets.
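
For example:

.PHONY: all clean solc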

4.19 Floating Pragma

Description

The Verifier can be compiled with any minor version of the 0.8 compiler, which may lead to inconsistent behavior or unintended results due to bugs identified in specific compiler versions. For instance, an optimizer bug discovered in 0.8.13 affects unused memory/storage writes in inline assembly blocks, a pattern similar to the one followed by the Verifier. Although we have not observed a negative effect of the said bug, we still recommend working with a fixed compiler version to avoid potential compiler-specific issues.

contracts/Verifier.sol:L17

pragma solidity ^0.8.0;

Recommendation

The contract should be tested and compiled with a fixed compiler version
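
For example, pinning to the compiler version already used for the committed binaries (see issue 4.17):

pragma solidity 0.8.19;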

4.20 Deviation in Implementation From the Intended Approach/Plonk Paper & Other General Observations

Description

We have observed some deviations between the implementation of certain computations and the intended approach described by the author, as well as the Plonk paper. We have also observed some computations that require a thorough review.

1- The function compute_gamma_kzg defines the process for deriving an unpredictable value gamma, with the following inputs described by the author:

The process for deriving γ is the same as in derive_gamma but this time the inputs are
in this order (the [] means it's a commitment):
* ζ
* [H] ( = H₁ + ζᵐ⁺²*H₂ + ζ²⁽ᵐ⁺²⁾*H₃ )
* [Linearised polynomial]
* [L], [R], [O]
* [S₁] [S₂]
* [Pi_{i}] (wires associated with custom gates)
Then there are the purported evaluations of the previously committed polynomials:
* H(ζ)
* Linearised_polynomial(ζ)
* L(ζ), R(ζ), O(ζ), S₁(ζ), S₂(ζ)
* Pi_{i}(ζ)

However, instead of using [Pi_{i}] and Pi_{i}(ζ), i.e. the commitments to the custom gates’ wire values and their evaluations at ζ, the function uses [Qc_{i}] and Qc_{i}(ζ), which are the commitments to the custom gates’ selectors and their evaluations at ζ:

contracts/Verifier.sol:L669-L671

mstore(add(mPtr,offset), vk_selector_commitments_commit_api_0_x)
mstore(add(mPtr,add(offset, 0x20)), vk_selector_commitments_commit_api_0_y)
offset := add(offset, 0x40)

2- Approach to compute linearized polynomial

Compute the commitment to the linearized polynomial equal to
L(ζ)[Qₗ]+r(ζ)[Qᵣ]+R(ζ)L(ζ)[Qₘ]+O(ζ)[Qₒ]+[Qₖ]+Σᵢqc'ᵢ(ζ)[BsbCommitmentᵢ] +
α*( Z(μζ)(L(ζ)+β*S₁(ζ)+γ)*(R(ζ)+β*S₂(ζ)+γ)[S₃]-[Z](L(ζ)+β*id_{1}(ζ)+γ)*(R(ζ)+β*id_{2}(ζ)+γ)*(O(ζ)+β*id_{3}(ζ)+γ) ) +
α²*L₁(ζ)[Z]

Unlike the other assignment polynomials, where the input polynomial evaluation at ζ is multiplied by its respective selector commitment, for instance L(ζ)[Qₗ], in the case of custom gates it is computed in reverse: Qc_{i}(ζ)[Pi_{i}]

contracts/Verifier.sol:L726-L735

let commits_api_at_zeta := add(aproof, proof_openings_selector_commit_api_at_zeta)
let commits_api := add(aproof, add(proof_openings_selector_commit_api_at_zeta, mul(vk_nb_commitments_commit_api, 0x20)))
for {let i:=0} lt(i, vk_nb_commitments_commit_api) {i:=add(i,1)}
{
  mstore(mPtr, mload(commits_api))
  mstore(add(mPtr, 0x20), mload(add(commits_api, 0x20)))
  point_acc_mul(l_state_linearised_polynomial,mPtr,mload(commits_api_at_zeta),add(mPtr,0x40))
  commits_api_at_zeta := add(commits_api_at_zeta, 0x20)
  commits_api := add(commits_api, 0x40)
}

3- Approach to compute linearized polynomial and quotient polynomial evaluation at zeta

The signs of the parts of the expressions used to compute r̄ are reversed compared with the Plonk paper:

* s₁ = α*Z(μζ)(l(ζ)+β*s₁(ζ)+γ)*(r(ζ)+β*s₂(ζ)+γ)*β
* s₂ = -α*(l(ζ)+β*ζ+γ)*(r(ζ)+β*u*ζ+γ)*(o(ζ)+β*u²*ζ+γ) + α²*L₁(ζ)

And the sign of the corresponding term in the t̄ computation has been reversed as well: α·z̄ω·(l̄ + β·s̄σ₁ + γ)·(r̄ + β·s̄σ₂ + γ)·(ō + γ)

contracts/Verifier.sol:L832

mstore(computed_quotient, addmod(mload(computed_quotient), mload(s1), r_mod))

contracts/Verifier.sol:L782

s2 := sub(r_mod, s2)

Recommendation

The highlighted code sections need to be thoroughly reviewed.

4.21 Proof Size Can Be Reduced by Removing Linearization Polynomial Evaluation at Zeta

Description

The linearization polynomial evaluation at zeta (r̄) can be removed from the proof, as the verifier computes the linearization polynomial anyway. The verifier’s algorithm in the latest release of the Plonk paper has been updated accordingly. However, this change would require adjusting many code sections, so it is not recommended for the current version of the verifier, but it can certainly be considered for future updates.

4.22 Unnecessary Memory Read/Write Operations

Description

The function verify_quotient_poly_eval_at_zeta checks whether m̄ + PI(ζ) + α·z̄ω·(l̄ + β·s̄σ₁ + γ)·(r̄ + β·s̄σ₂ + γ)·(ō + γ) − L₁(ζ)·α² is equal to t̄·Z_H(ζ), where x̄ = x(ζ) and m̄ is the linearization polynomial evaluation at ζ.

However, instead of working directly with local assembly variables, the function uses the memory operations mload and mstore, which is unnecessary and adds gas overhead to the calculations.

For instance,

With memory:

let s1 := add(mload(0x40), state_last_mem)
mstore(s1, mulmod(mload(add(aproof,proof_s1_at_zeta)),mload(add(state, state_beta)), r_mod))
mstore(s1, addmod(mload(s1), mload(add(state, state_gamma)), r_mod))
mstore(s1, addmod(mload(s1), mload(add(aproof, proof_l_at_zeta)), r_mod))

Without memory:

let s1 := mulmod(mload(add(aproof,proof_s1_at_zeta)),mload(add(state, state_beta)), r_mod)
s1:= addmod(s1, mload(add(state, state_gamma)), r_mod)
s1:= addmod(s1, mload(add(aproof, proof_l_at_zeta)), r_mod)

Also, the function stores the computed quotient at the memory location of state_success as:

mstore(add(state, state_success), mload(computed_quotient))

This is unnecessary, as that location is logically reserved for the verification result, and the next line overwrites it with the desired comparison result anyway:

mstore(add(state, state_success),eq(mload(computed_quotient), mload(s2)))

Recommendation

The same calculations can be achieved using local assembly variables alone. Also, the unnecessary write of computed_quotient to state_success mentioned above can be removed.

4.23 Unused Evaluation of Z(x)-1 at Zeta ✓ Fixed

Description

The Solidity variable zeta_power_n_minus_one is defined at Line 361 and calculated at Lines 445 and 446. However, this variable does not appear to be used anywhere; instead, there are several places where the value is recalculated from scratch.

Examples

contracts/Verifier.sol:L361

uint256 zeta_power_n_minus_one;

contracts/Verifier.sol:L445-L446

zeta_power_n_minus_one := pow(zeta, vk_domain_size, mload(0x40))
zeta_power_n_minus_one := addmod(zeta_power_n_minus_one, sub(r_mod, 1), r_mod)

Recommendation

Presumably, this variable can be removed without any consequences.

4.24 Spurious Debugging Code

Description

There are two examples of debugging code left within the code base.

Examples

check := mload(add(mem, state_check_var))
mstore(add(state, state_check_var), acc_gamma)
mstore(add(state, state_check_var), mload(add(folded_quotients, 0x20)))

Recommendation

Remove debugging code prior to deployment.

4.25 Utils - Possible Code Improvements

Descriptions and Recommendations

The contract defines the function hash_fr to calculate field elements over the finite field F from a message hash, expanded with expand_msg. We found opportunities for a couple of code improvements that may enhance code readability.

A. The following code excerpt prepends a byte string of 64 zeroes for the padding Z_pad

contracts/Utils.sol:L39-L41

for (uint i=0; i<64; i++){
    tmp = abi.encodePacked(tmp, zero);
}

However, as this is a static value that need not be calculated dynamically, it can be simplified by predefining it as the initial value of tmp:

bytes memory tmp = hex'00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000';

B. The following code excerpt performs a XOR operation on the bytes32 hash values b0 and b1 in a for loop.

contracts/Utils.sol:L51-L56

tmp = abi.encodePacked(uint8(b0[0]) ^ uint8(b1[0]));
for (uint i=1; i<32; i++){
    tmp = abi.encodePacked(tmp, uint8(b0[i]) ^ uint8(b1[i]));
}

tmp = abi.encodePacked(tmp, uint8(2), dst, sizeDomain);

Instead of performing the XOR operation byte by byte, Solidity can XOR whole fixed-size byte strings at once, hence it can be simplified as:

bytes32 xorTmp = b0 ^ b1;
tmp = abi.encodePacked(xorTmp, uint8(2), dst, sizeDomain);

Appendix 1 — Formal Verification

As part of the internal audit conducted by Diligence for Linea, key aspects of the PLONK verifier were formally verified by the ConsenSys Trustworthy Smart Contracts (TSC) team. This appendix outlines the scope of the formal verification proof and the methodology taken in developing it.

DISCLAIMER. Whilst this proof makes specific correctness claims about key properties of the contract-under-audit (see below), it should not be misconstrued as indicating that the contract as a whole is correct. This proof does not constitute a total proof of correctness for the PLONK verifier contract.

A.1.1 Methodology

The proof in this repository was developed using the Dafny programming language. This system was chosen because of its familiarity to the TSC team. Dafny has seen reasonably wide use within various Universities and industrial organisations, such as Amazon Web Services and Microsoft. Dafny has also been used previously within ConsenSys to verify the Eth 2.0 Specification.

  • Dafny v3.13.0 was primarily used for this proof, along with the command-line option --disable-nonlinear-arithmetic.
  • Z3 v4.8.5 was selected as the automated theorem prover for this project. Note that this is not the default for Dafny v3.13.0.

Solidity Translation

An initial translation of the contract code from Solidity into Dafny was done using the custom-built Dolidity tool. This translation was relatively straightforward, given the limited subset of Solidity used in the contract. However, some aspects of the translation needed to be completed by hand; in particular, explicit memory allocation at the Solidity level (e.g. new uint256[](...)) required manual translation.

Dafny Model

A significantly simplified model of the EVM was used for this proof; for example, it ignores gas costs and contract storage (the latter because it is not used). The model of memory is word-oriented rather than byte-oriented, to reduce the amount of non-linear arithmetic. This suited the given contract well, as in the majority of cases it only uses word-aligned memory.

A.1.2 Outcomes

The primary outcomes from the formal verification of the PLONK verifier are:

  • Termination. Execution of Verify() has been shown to terminate as expected (subject to gas availability). There are no unexpected infinite loops.

  • Overflow / Underflow. Checks against integer overflow / underflow have been performed for all arithmetic operations. In all but one case, the absence of overflow / underflow was shown; an integer underflow can in fact occur within batch_invert(), but it is consistent with correct operation.

  • Immutable Inputs. It has been shown that execution of Verify() cannot (accidentally) modify the two input arrays (public_inputs and proof) in any way. NOTE: this is subject to the assumption that the sizes of these two arrays are passed correctly to the Verify() function (e.g. they are correctly sized, do not overlap, etc.).

  • State Modification. The state structure is held in memory above the free memory pointer. All functions which can modify this structure have been identified, along with the fields which they can modify. No accidental or invalid modifications to this structure have been detected.

  • State Reset. The state_success field of the state structure is considered a high-value data field. In particular, verification succeeds when this field holds a non-zero value at the end of Verify(). A key property for this field is that, having been assigned 0 (i.e. false), it should not be possible for any further execution to reset this field back to 1 (i.e. true). No violations of this property were detected.

  • Scratch Memory. Scratch memory is used in several places for different reasons: (1) it is used to setup input values for calls to precompiled contracts (e.g. in point_add(), etc); (2) it is used for holding temporary return values (e.g. derive_gamma()); (3) it is used to store chunks of temporary data (e.g. in batch_invert() and across compute_kzg() / fold_state()). Incorrect use of scratch memory could result in high-value data being accidentally overwritten. No invalid usages of scratch memory have been detected.

In addition, the batch_invert() function has been shown (through a mixture of verification and testing) to correctly implement “Montgomery’s Trick” for inverting an array of field elements.

Appendix 2 - Files in Scope

This audit covered the following files:

File                     SHA-1 hash
contracts/Utils.sol      6cd2a21ae3d2efe161bd0a672674d9e068ac1843
contracts/Verifier.sol   6f287c947e14fa90cfbf8330ff6b5e145531a271

Appendix 3 - Disclosure

ConsenSys Diligence (“CD”) typically receives compensation from one or more clients (the “Clients”) for performing the analysis contained in these reports (the “Reports”). The Reports may be distributed through other means, including via ConsenSys publications and other distributions.

The Reports are not an endorsement or indictment of any particular project or team, and the Reports do not guarantee the security of any particular project. This Report does not consider, and should not be interpreted as considering or having any bearing on, the potential economics of a token, token sale or any other product, service or other asset. Cryptographic tokens are emergent technologies and carry with them high levels of technical risk and uncertainty. No Report provides any warranty or representation to any Third-Party in any respect, including regarding the bugfree nature of code, the business model or proprietors of any such business model, and the legal compliance of any such business. No third party should rely on the Reports in any way, including for the purpose of making any decisions to buy or sell any token, product, service or other asset. Specifically, for the avoidance of doubt, this Report does not constitute investment advice, is not intended to be relied upon as investment advice, is not an endorsement of this project or team, and it is not a guarantee as to the absolute security of the project. CD owes no duty to any Third-Party by virtue of publishing these Reports.

PURPOSE OF REPORTS The Reports and the analysis described therein are created solely for Clients and published with their consent. The scope of our review is limited to a review of code and only the code we note as being within the scope of our review within this report. Any Solidity code itself presents unique and unquantifiable risks as the Solidity language itself remains under development and is subject to unknown risks and flaws. The review does not extend to the compiler layer, or any other areas beyond specified code that could present security risks. Cryptographic tokens are emergent technologies and carry with them high levels of technical risk and uncertainty. In some instances, we may perform penetration testing or infrastructure assessments depending on the scope of the particular engagement.

CD makes the Reports available to parties other than the Clients (i.e., “third parties”) – on its website. CD hopes that by making these analyses publicly available, it can help the blockchain ecosystem develop technical best practices in this rapidly evolving area of innovation.

LINKS TO OTHER WEB SITES FROM THIS WEB SITE You may, through hypertext or other computer links, gain access to web sites operated by persons other than ConsenSys and CD. Such hyperlinks are provided for your reference and convenience only, and are the exclusive responsibility of such web sites’ owners. You agree that ConsenSys and CD are not responsible for the content or operation of such Web sites, and that ConsenSys and CD shall have no liability to you or any other person or entity for the use of third party Web sites. Except as described below, a hyperlink from this web Site to another web site does not imply or mean that ConsenSys and CD endorses the content on that Web site or the operator or operations of that site. You are solely responsible for determining the extent to which you may use any content at any other web sites to which you link from the Reports. ConsenSys and CD assumes no responsibility for the use of third party software on the Web Site and shall have no liability whatsoever to any person or entity for the accuracy or completeness of any outcome generated by such software.

TIMELINESS OF CONTENT The content contained in the Reports is current as of the date appearing on the Report and is subject to change without notice, unless indicated otherwise by ConsenSys and CD.