Recent zk-SNARK constructions such as Marlin and PLONK that rely on polynomial commitment schemes are not inherently tied to R1CS constraints for specifying the statement to be proven. This is roughly because the verifier equation is checked "in the clear" on a random opening of the prover polynomials, rather than "in the exponent" using a pairing. We provide a framework for capturing more general and flexible constraints, which we call *turbo-PLONK programs*. We give an example of how this framework allows a more concise representation of fixed-base elliptic curve scalar multiplication (only 1 turbo-PLONK gate for each two input bits), a primitive useful for constructing Pedersen hashes. We also include benchmarks of proving time for scalar multiplication on the Grumpkin curve (a 255-bit curve embedded over BN254). Our results indicate a 2x improvement over Groth16.

A turbo-PLONK program $\mathscr{P}$ can be thought of as a sequence of states $v\in \mathbb{F}^w$, where the *state size* $w$ is chosen by the program designer. The proof size and prover run time increase linearly in $w$, and standard choices are 3 or 4.

A *valid execution trace* $T\in (\mathbb{F}^w)^t$ for $\mathscr{P}$ satisfies $\mathscr{P}$'s *transition gate* and *copy constraint*.

**transition gate** - a transition constraint of $\mathscr{P}$ is a $(2w+\ell)$-variate polynomial $P$ of degree at most $d$, where $d$ is the *degree* of $\mathscr{P}$ (recommended to be set to $w+1$), and $\ell$ is the *selector number*.

The transition gate of $\mathscr{P}$ consists of all the transition constraints together with $\ell$ *selector vectors* $q_1,\ldots,q_\ell\in \mathbb{F}^t$. $T$ is said to satisfy the transition gate if, for each transition constraint $P$ and each $i\in [t-1]$, $P(T_{i,1},\ldots,T_{i,w},T_{i+1,1},\ldots,T_{i+1,w},q_1(i),\ldots,q_\ell(i)) = 0$.
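As a concrete sketch of the definition above (the function name and representation are illustrative, not from any library), a transition gate check iterates over adjacent row pairs of the trace and evaluates each constraint polynomial on the current row, the next row, and the selector values at that row:

```python
# Hypothetical sketch of checking a trace against a transition gate.
# T is a list of t rows, each a list of w field elements (ints mod p);
# selectors is a list of ell selector vectors, each of length t;
# P is a Python function of 2w + ell arguments standing in for the
# transition constraint polynomial.

def check_transition_gate(T, selectors, P, p):
    """Return True iff P vanishes mod p on every adjacent pair of rows."""
    t = len(T)
    for i in range(t - 1):  # constraint couples row i with row i+1
        args = T[i] + T[i + 1] + [q[i] for q in selectors]
        if P(*args) % p != 0:
            return False
    return True
```

Note that each constraint is applied once per consecutive pair of rows, so the selector value $q_j(i)$ switches which relation is enforced at step $i$.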

**copy constraint** - this is a partition of $[t]\times[w]$ into disjoint sets $\{S_i\}$. $T$ is said to satisfy the copy constraint if $T_{i,j} = T_{i',j'}$ whenever $(i,j)$ and $(i',j')$ belong to the same set in the partition.

The copy constraint can be thought of as enabling memory access, since we can force a subsequent value in the execution trace to be equal to a preceding one.
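The check itself is straightforward; here is a hypothetical sketch (names are illustrative, and indices are 0-based in code versus 1-based in the text):

```python
# Hypothetical sketch: a copy constraint as a partition of trace cells.
# Each cell is addressed as a (row, column) pair, 0-based.

def check_copy_constraint(T, partition):
    """partition: iterable of sets of (row, column) index pairs.
    Return True iff all cells within each set carry the same value."""
    for cell_set in partition:
        values = {T[i][j] for (i, j) in cell_set}
        if len(values) > 1:
            return False
    return True
```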

Let's see how an arithmetic circuit can be represented in this format. Every row will correspond to a gate, and the row values will be the incoming and outgoing wire values of the gate; so to represent fan-in $k$ circuits we'll use $w=k+1$. For example, for fan-in 2 the row values $v_1,v_2,v_3$ will correspond to the left, right and output wire values of the gate associated with the row.

The transition constraints will be somewhat "degenerate" in the sense that we won't use the "next row" values $T_{i+1,1},\ldots,T_{i+1,w}$. They will check that the correct input/output relation holds inside the row, according to whether it's an addition or multiplication gate. This can be done with 3 selectors and one constraint polynomial $P$ of degree $w$. E.g., for fan-in 2 we use:

$P(q_L,q_R,q_M,x_1,x_2,x_3) := q_L\cdot x_1 + q_R\cdot x_2 + q_M \cdot x_1 x_2-x_3$

By setting $q_L(i)=q_R(i)=1, q_M(i)=0$ for an addition gate and $q_L(i)=q_R(i)=0, q_M(i)=1$ for a multiplication gate, we can see that the above equation checks the correct input/output relation in both cases.
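As a quick sanity check (a sketch, not library code), we can evaluate $P$ under both selector settings:

```python
# The fan-in-2 constraint polynomial from above, as a Python function.

def P(q_L, q_R, q_M, x1, x2, x3):
    return q_L * x1 + q_R * x2 + q_M * x1 * x2 - x3

# Addition gate: q_L = q_R = 1, q_M = 0 reduces P to x1 + x2 - x3.
assert P(1, 1, 0, 4, 7, 11) == 0
# Multiplication gate: q_L = q_R = 0, q_M = 1 reduces P to x1*x2 - x3.
assert P(0, 0, 1, 4, 7, 28) == 0
```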

The copy constraint will enforce the wiring of the circuit. For example, if the left wire of the 4th gate is the output wire of the second gate, we will enforce $T_{2,3} = T_{4,1}$.
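Putting the two pieces together, here is a hypothetical end-to-end sketch (all values illustrative) of the trace for the circuit $(a+b)\cdot c$: an addition gate feeding a multiplication gate, with the wiring enforced by equating the addition gate's output cell with the multiplication gate's left input cell:

```python
# Two-gate circuit (a + b) * c as an execution trace with w = 3.
a, b, c = 3, 4, 5
T = [
    [a, b, a + b],             # row 1: addition gate (q_L = q_R = 1, q_M = 0)
    [a + b, c, (a + b) * c],   # row 2: multiplication gate (q_M = 1)
]

# Wiring: the output wire of gate 1 is the left wire of gate 2, so the
# copy constraint forces these two trace cells to agree.
assert T[0][2] == T[1][0]
```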
