
Arrabiata: prepare constraints for cross-terms computation #2702

Merged
merged 5 commits on Dec 19, 2024
23 changes: 23 additions & 0 deletions arrabbiata/src/columns.rs
@@ -8,6 +8,8 @@ use std::{
};
use strum_macros::{EnumCount as EnumCountMacro, EnumIter};

use crate::NUMBER_OF_COLUMNS;

/// This enum represents the different gadgets that can be used in the circuit.
/// The selectors are defined at setup time, can take only the values `0` or
/// `1` and are public.
@@ -36,6 +38,27 @@ pub enum Column {
X(usize),
}

/// Convert a column to a usize. This is used by the library [mvpoly] when we
/// need to compute the cross-terms.
/// For now, only the private inputs and the public inputs are converted,
/// because the selectors might not need to be treated in the polynomial while
/// computing the cross-terms (FIXME: check this later, but it is most likely
/// the case).
///
/// Also, the [mvpoly::monomials] implementation of the trait [mvpoly::MVPoly]
/// will be used, and the mapping here is consistent with the one expected by
/// this implementation, i.e. we simply map to an increasing number starting at
/// 0, without any gap.
impl From<Column> for usize {
fn from(val: Column) -> usize {
match val {
Column::X(i) => i,
Column::PublicInput(i) => NUMBER_OF_COLUMNS + i,
Contributor
Doesn't having (at least) a full column for PI create an issue for the verifier's performance?
More importantly, this translates into the folding verifier, which will have to handle a big PI.
We can use the PlonK way of handling PI to avoid a PI that is linear in the number of rows.

Since this is not the topic of the PR, we can look at that later.

Member Author

Yes, I want to investigate other solutions to handle PI.

Column::Selector(_) => unimplemented!("Selectors are not supported. This method is supposed to be called only to compute the cross-term and an optimisation is in progress to avoid the inclusion of the selectors in the multi-variate polynomial."),
}
}
}
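To illustrate the flattening convention above, here is a minimal self-contained sketch. It uses a standalone enum and a made-up `NUMBER_OF_COLUMNS` value rather than the actual crate constant, and omits the unsupported `Selector` variant: private inputs occupy indices `0..NUMBER_OF_COLUMNS`, and public inputs follow immediately after, with no gap.

```rust
// Hypothetical stand-in for crate::NUMBER_OF_COLUMNS.
const NUMBER_OF_COLUMNS: usize = 15;

#[derive(Clone, Copy)]
enum Col {
    // Private witness column i.
    X(usize),
    // Public input column i, appended after all private columns.
    PublicInput(usize),
}

fn to_index(c: Col) -> usize {
    match c {
        Col::X(i) => i,
        Col::PublicInput(i) => NUMBER_OF_COLUMNS + i,
    }
}

fn main() {
    assert_eq!(to_index(Col::X(3)), 3);
    // Public inputs start right where private columns end: no gap.
    assert_eq!(to_index(Col::PublicInput(0)), 15);
    assert_eq!(to_index(Col::PublicInput(2)), 17);
}
```

The contiguous, gap-free mapping is what the monomial-based `MVPoly` representation expects for its variable indices.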

pub struct Challenges<F: Field> {
/// Challenge used to aggregate the constraints
pub alpha: F,
6 changes: 6 additions & 0 deletions arrabbiata/src/main.rs
@@ -101,6 +101,12 @@ pub fn main() {
// FIXME:
// Compute the accumulator for the permutation argument

// FIXME:
// Commit to the accumulator and absorb the commitment

// FIXME:
// Coin challenge α for combining the constraints

// FIXME:
// Compute the cross-terms

39 changes: 39 additions & 0 deletions mvpoly/src/lib.rs
@@ -254,6 +254,45 @@ pub trait MVPoly<F: PrimeField, const N: usize, const D: usize>:
u2: F,
) -> HashMap<usize, F>;

/// Compute the cross-terms of the given polynomial, scaled by the given
/// scalar.
///
/// More explicitly, given a polynomial `P(X1, ..., Xn)` and a scalar α, the
/// method computes the cross-terms of the polynomial `Q(X1, ..., Xn, α)
/// = α * P(X1, ..., Xn)`. For this reason, the method takes as input the
/// two different scalars `scalar1` and `scalar2`, as we are considering the
/// scaling factor as a variable.
///
/// This method is particularly useful when you need to compute a
/// (possibly random) combination of polynomials `P1(X1, ..., Xn), ...,
/// Pm(X1, ..., Xn)`, like when computing a quotient polynomial in the PlonK
/// PIOP, as the result is the sum of individual "scaled" polynomials:
/// ```text
/// Q(X_1, ..., X_n, α_1, ..., α_m) =
/// α_1 P1(X_1, ..., X_n) +
/// ...
/// α_m Pm(X_1, ..., X_n)
/// ```
///
/// The polynomial does not need to be homogeneous. For this reason, the
/// values `u1` and `u2` represent the extra variable that is used to make
/// the polynomial homogeneous.
///
/// The homogeneous degree is supposed to be the one defined by the type of
/// the polynomial `P`, i.e. `D`.
///
/// The output is a map of `D` values that represent the cross-terms
/// for each power of `r`.
Contributor

I'd add in the description of the function something like the following to summarize:
let res = compute_cross_terms_scaled(P, eval1, eval2, u1, u2, scalar1, scalar2)
let P'[X_1, ..., X_n, U, α] be P homogenised by U and scaled by α
Then, for all r,
P'(eval1, u1, scalar1) + r^{D+1} P'(eval2, u2, scalar2) + \sum_{i=1}^{D} r^i res[i] =
P'(eval1 + r eval2, u1 + r u2, scalar1 + r scalar2)

Member Author

Sure, I'll do it in a follow-up.

Member Author

Updated in the HackMD.

fn compute_cross_terms_scaled(
&self,
eval1: &[F; N],
eval2: &[F; N],
u1: F,
u2: F,
scalar1: F,
scalar2: F,
) -> HashMap<usize, F>;
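The identity sketched in the review comment above can be checked numerically. The following is a hedged sanity-check sketch using plain `i64` arithmetic instead of a prime field, and a tiny hand-picked polynomial `Q(X, U, A) = A·(X² + X·U)` (degree D = 3, already homogeneous in U for the base part) rather than the crate's `Sparse` type: expand `Q(e1 + r·e2, u1 + r·u2, s1 + r·s2)` and collect the powers of `r`.

```rust
// Q(X, U, A) = A * (X^2 + X*U): a scaled, homogenised toy polynomial.
fn q(x: i64, u: i64, a: i64) -> i64 {
    a * (x * x + x * u)
}

fn main() {
    let (e1, u1, s1) = (42i64, 1i64, 1i64);
    let (e2, u2, s2) = (42i64, 2i64, 2i64);
    // Base cross-term of P(X, U) = X^2 + X*U:
    //   P(e1 + r*e2, u1 + r*u2) = P(e1, u1) + r*c1 + r^2 * P(e2, u2)
    let c1 = 2 * e1 * e2 + e1 * u2 + e2 * u1;
    // Multiplying by (s1 + r*s2) shifts and combines the coefficients:
    let res1 = s1 * c1 + s2 * q(e1, u1, 1);
    let res2 = s2 * c1 + s1 * q(e2, u2, 1);
    // Check: Q(e1 + r*e2, u1 + r*u2, s1 + r*s2)
    //      = Q(e1,u1,s1) + r*res1 + r^2*res2 + r^3*Q(e2,u2,s2)  for all r.
    for r in 0..5i64 {
        let lhs = q(e1 + r * e2, u1 + r * u2, s1 + r * s2);
        let rhs = q(e1, u1, s1)
            + r * res1
            + r * r * res2
            + r * r * r * q(e2, u2, s2);
        assert_eq!(lhs, rhs);
    }
}
```

The two boundary coefficients (`r^0` and `r^3`) are the two scaled homogeneous evaluations, and `res1`/`res2` play the role of the map returned by `compute_cross_terms_scaled`.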

/// Modify the monomial in the polynomial to the new value `coeff`.
fn modify_monomial(&mut self, exponents: [usize; N], coeff: F);

61 changes: 51 additions & 10 deletions mvpoly/src/monomials.rs
@@ -1,3 +1,8 @@
use crate::{
prime,
utils::{compute_indices_nested_loop, naive_prime_factors, PrimeNumberGenerator},
MVPoly,
};
use ark_ff::{One, PrimeField, Zero};
use kimchi::circuits::{expr::Variable, gate::CurrOrNext};
use num_integer::binomial;
@@ -8,12 +13,6 @@ use std::{
ops::{Add, Mul, Neg, Sub},
};

use crate::{
prime,
utils::{compute_indices_nested_loop, naive_prime_factors, PrimeNumberGenerator},
MVPoly,
};

/// Represents a multivariate polynomial in `N` variables with coefficients in
/// `F`. The polynomial is represented as a sparse polynomial, where each
/// monomial is represented by a vector of `N` exponents.
@@ -472,6 +471,41 @@ impl<const N: usize, const D: usize, F: PrimeField> MVPoly<F, N, D> for Sparse<F
cross_terms_by_powers_of_r
}

fn compute_cross_terms_scaled(
&self,
eval1: &[F; N],
eval2: &[F; N],
u1: F,
u2: F,
scalar1: F,
scalar2: F,
) -> HashMap<usize, F> {
assert!(
D >= 2,
"The degree of the polynomial must be greater than or equal to 2"
);
let cross_terms = self.compute_cross_terms(eval1, eval2, u1, u2);

let mut res: HashMap<usize, F> = HashMap::new();
cross_terms.iter().for_each(|(power_r, coeff)| {
res.insert(*power_r, *coeff * scalar1);
});
cross_terms.iter().for_each(|(power_r, coeff)| {
res.entry(*power_r + 1)
.and_modify(|e| *e += *coeff * scalar2)
.or_insert(*coeff * scalar2);
});
let eval1_hom = self.homogeneous_eval(eval1, u1);
res.entry(1)
.and_modify(|e| *e += eval1_hom * scalar2)
.or_insert(eval1_hom * scalar2);
let eval2_hom = self.homogeneous_eval(eval2, u2);
res.entry(D)
.and_modify(|e| *e += eval2_hom * scalar1)
.or_insert(eval2_hom * scalar1);
res
}
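The body above follows a shift-and-add pattern: each unscaled cross-term `c[i]` contributes `scalar1 * c[i]` at power `i` and `scalar2 * c[i]` at power `i + 1`, and the two homogeneous evaluations are added as boundary terms at powers `1` and `D`. A minimal sketch of that pattern, with hypothetical `i64` coefficients in place of the crate's field types:

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical inputs: declared degree D = 3, unscaled cross-terms at
    // powers 1..=D-1, two scalars, and the two homogeneous evaluations.
    let d = 3usize;
    let cross_terms: HashMap<usize, i64> = HashMap::from([(1, 10), (2, 20)]);
    let (scalar1, scalar2) = (3i64, 5i64);
    let (eval1_hom, eval2_hom) = (7i64, 11i64);

    let mut res: HashMap<usize, i64> = HashMap::new();
    // Each c[p] lands at power p (times scalar1) and power p + 1 (times scalar2).
    for (p, c) in &cross_terms {
        *res.entry(*p).or_insert(0) += c * scalar1;
        *res.entry(*p + 1).or_insert(0) += c * scalar2;
    }
    // Boundary terms: eval1 enters at power 1, eval2 at power D.
    *res.entry(1).or_insert(0) += eval1_hom * scalar2;
    *res.entry(d).or_insert(0) += eval2_hom * scalar1;

    assert_eq!(res[&1], 10 * 3 + 7 * 5); // 65
    assert_eq!(res[&2], 20 * 3 + 10 * 5); // 110
    assert_eq!(res[&3], 20 * 5 + 11 * 3); // 133
}
```

The `entry(..).and_modify(..).or_insert(..)` chain in the diff and the `entry(..).or_insert(0)` form here are equivalent accumulations; the sketch only makes the shift structure explicit.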

fn modify_monomial(&mut self, exponents: [usize; N], coeff: F) {
self.monomials
.entry(exponents)
@@ -520,12 +554,19 @@ impl<F: PrimeField, const N: usize, const D: usize> From<F> for Sparse<F, N, D>
}
}

impl<F: PrimeField, const N: usize, const D: usize, const M: usize> From<Sparse<F, N, D>>
for Result<Sparse<F, M, D>, String>
impl<F: PrimeField, const N: usize, const D: usize, const M: usize, const D_PRIME: usize>
From<Sparse<F, N, D>> for Result<Sparse<F, M, D_PRIME>, String>
{
fn from(poly: Sparse<F, N, D>) -> Result<Sparse<F, M, D>, String> {
fn from(poly: Sparse<F, N, D>) -> Result<Sparse<F, M, D_PRIME>, String> {
if M < N {
return Err("The number of variables must be greater than N".to_string());
return Err(format!(
"The final number of variables {M} must be greater than or equal to {N}"
));
}
if D_PRIME < D {
return Err(format!(
"The final degree {D_PRIME} must be greater than or equal to the initial degree {D}"
));
}
let mut monomials = HashMap::new();
poly.monomials.iter().for_each(|(exponents, coeff)| {
Expand Down
12 changes: 12 additions & 0 deletions mvpoly/src/prime.rs
@@ -432,6 +432,18 @@ impl<F: PrimeField, const N: usize, const D: usize> MVPoly<F, N, D> for Dense<F,
unimplemented!()
}

fn compute_cross_terms_scaled(
&self,
_eval1: &[F; N],
_eval2: &[F; N],
_u1: F,
_u2: F,
_scalar1: F,
_scalar2: F,
) -> HashMap<usize, F> {
unimplemented!()
}

fn modify_monomial(&mut self, exponents: [usize; N], coeff: F) {
let mut prime_gen = PrimeNumberGenerator::new();
let primes = prime_gen.get_first_nth_primes(N);
92 changes: 92 additions & 0 deletions mvpoly/tests/monomials.rs
@@ -712,3 +712,95 @@ fn test_from_expr_ec_addition() {
assert_eq!(eval, exp_eval);
}
}

#[test]
fn test_cross_terms_fixed_polynomial_and_eval_homogeneous_degree_3() {
// X
let x = {
// We say it is of degree 2 for the cross-term computation
let mut x = Sparse::<Fp, 1, 2>::zero();
x.add_monomial([1], Fp::one());
x
};
// X * Y
let scaled_x = {
let scaling_var = {
let mut v = Sparse::<Fp, 2, 2>::zero();
v.add_monomial([0, 1], Fp::one());
v
};
let x: Sparse<Fp, 2, 2> = {
let x: Result<Sparse<Fp, 2, 2>, String> = x.clone().into();
x.unwrap()
};
x.clone() * scaling_var
};
// x1 = 42, α1 = 1
// x2 = 42, α2 = 2
let eval1: [Fp; 2] = [Fp::from(42), Fp::one()];
let eval2: [Fp; 2] = [Fp::from(42), Fp::one() + Fp::one()];
let u1 = Fp::one();
let u2 = Fp::one() + Fp::one();
let scalar1 = eval1[1];
let scalar2 = eval2[1];

let cross_terms_scaled_p1 = {
// When computing the cross-terms, the method supposes that the polynomial
// is of degree D - 1, so we homogenize to degree 3.
let scaled_x: Sparse<Fp, 2, 3> = {
let p: Result<Sparse<Fp, 2, 3>, String> = scaled_x.clone().into();
p.unwrap()
};
scaled_x.compute_cross_terms(&eval1, &eval2, u1, u2)
};
let cross_terms = {
let x: Sparse<Fp, 1, 2> = {
let x: Result<Sparse<Fp, 1, 2>, String> = x.clone().into();
x.unwrap()
};
x.compute_cross_terms_scaled(
&eval1[0..1].try_into().unwrap(),
&eval2[0..1].try_into().unwrap(),
u1,
u2,
scalar1,
scalar2,
)
};
assert_eq!(cross_terms, cross_terms_scaled_p1);
}

#[test]
fn test_cross_terms_scaled() {
let mut rng = o1_utils::tests::make_test_rng(None);
let p1 = unsafe { Sparse::<Fp, 4, 2>::random(&mut rng, None) };
let scaled_p1 = {
// Scaling variable is U. We do this by adding a new variable.
let scaling_variable: Sparse<Fp, 5, 3> = {
let mut p: Sparse<Fp, 5, 3> = Sparse::<Fp, 5, 3>::zero();
p.add_monomial([0, 0, 0, 0, 1], Fp::one());
p
};
// Simply transforming p1 in the expected degree and with the right
// number of variables
let p1 = {
let p1: Result<Sparse<Fp, 5, 3>, String> = p1.clone().into();
p1.unwrap()
};
scaling_variable.clone() * p1.clone()
};
let random_eval1: [Fp; 5] = std::array::from_fn(|_| Fp::rand(&mut rng));
let random_eval2: [Fp; 5] = std::array::from_fn(|_| Fp::rand(&mut rng));
let scalar1 = random_eval1[4];
let scalar2 = random_eval2[4];
let u1 = Fp::rand(&mut rng);
let u2 = Fp::rand(&mut rng);
let cross_terms = {
let eval1: [Fp; 4] = random_eval1[0..4].try_into().unwrap();
let eval2: [Fp; 4] = random_eval2[0..4].try_into().unwrap();
p1.compute_cross_terms_scaled(&eval1, &eval2, u1, u2, scalar1, scalar2)
};
let scaled_cross_terms = scaled_p1.compute_cross_terms(&random_eval1, &random_eval2, u1, u2);
assert_eq!(cross_terms, scaled_cross_terms);
}