# estimator.gb.AroraGB

class estimator.gb.AroraGB

__call__(params: LWEParameters, success_probability=0.99, omega=2, log_level=1, **kwds)

Arora-GB as described in [ICALP:AroGe11], [EPRINT:ACFP14].

Parameters:
• params – LWE parameters.

• success_probability – targeted success probability < 1.

• omega – linear algebra constant.
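The linear-algebra constant ω controls how the Macaulay-matrix dimension enters the running time: solving at degree of regularity `dreg` over `n` variables costs roughly `binom(n + dreg, dreg)^ω` operations. A back-of-the-envelope sketch of this rule of thumb (illustrative only, not the estimator's exact formula):

```python
from math import comb, log2

def gb_rop_log2(n, dreg, omega=2):
    # Rough log2 of the operation count for Groebner-basis linear algebra:
    # (#monomials of degree <= dreg in n variables) ** omega.
    monomials = comb(n + dreg, dreg)
    return omega * log2(monomials)

# For n=64, dreg=99 (the EXAMPLE below) this lands in the same
# ballpark as the reported rop value.
estimate = gb_rop_log2(64, 99, omega=2)
```

With `omega=2` this matches the default above; a smaller ω (e.g. 2.37 vs. 3 for naive elimination would go the other way) rescales the exponent linearly.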

Returns:

A cost dictionary

The returned cost dictionary has the following entries:

• `rop`: Total number of word operations (≈ CPU cycles).

• `m`: Number of samples consumed.

• `dreg`: The degree of regularity or “solving degree”.

• `t`: Polynomials of degree 2t + 1 are considered.

• `mem`: Total memory usage.
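The `t` entry reflects the core Arora-Ge trick: if the error of a sample ⟨a, s⟩ + e = c (mod q) is known to lie in {−t, …, t}, then ∏_{e=−t}^{t}(⟨a, s⟩ − c + e) ≡ 0 (mod q) is a noise-free polynomial equation of degree 2t + 1 in the secret. A toy sketch with hypothetical numbers:

```python
# Toy illustration (not the estimator's code): an LWE sample whose
# error lies in {-t, ..., t} yields a degree-(2t+1) polynomial that
# the true secret must satisfy.
q, t = 97, 1
a, s = [3, 5], [2, 7]   # hypothetical sample row and secret
e = 1                   # error with |e| <= t
c = (sum(x * y for x, y in zip(a, s)) + e) % q

def arora_ge_poly(a, s_cand, c, t, q):
    # prod_{err=-t}^{t} (<a, s_cand> - c + err) mod q: vanishes exactly
    # when the residual <a, s_cand> - c equals some err in {-t, ..., t}.
    r = (sum(x * y for x, y in zip(a, s_cand)) - c) % q
    prod = 1
    for err in range(-t, t + 1):
        prod = (prod * (r + err)) % q
    return prod

assert arora_ge_poly(a, s, c, t, q) == 0  # the true secret is a root
```

Collecting enough such equations and linearizing (or running a Groebner-basis algorithm) over the monomials up to the solving degree is what drives the `dreg`, `m`, and `rop` entries above.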

EXAMPLE:

```
>>> from estimator import *
>>> params = LWE.Parameters(n=64, q=7681, Xs=ND.DiscreteGaussian(3.0), Xe=ND.DiscreteGaussian(3.0), m=2**50)
>>> LWE.arora_gb(params)
rop: ≈2^307.1, m: ≈2^46.8, dreg: 99, t: 25, mem: ≈2^307.1, tag: arora-gb
```

TESTS:

```
>>> LWE.arora_gb(params.updated(m=2**120))
rop: ≈2^282.6, m: ≈2^101.1, dreg: 83, t: 36, mem: ≈2^282.6, tag: arora-gb
>>> LWE.arora_gb(params.updated(Xe=ND.UniformMod(7)))
rop: ≈2^60.6, dreg: 7, mem: ≈2^60.6, t: 3, m: ≈2^30.3, tag: arora-gb
>>> LWE.arora_gb(params.updated(Xe=ND.CenteredBinomial(8)))
rop: ≈2^122.3, dreg: 19, mem: ≈2^122.3, t: 8, m: ≈2^50.0, tag: arora-gb
>>> LWE.arora_gb(params.updated(Xs=ND.UniformMod(5), Xe=ND.CenteredBinomial(4), m=1024))
rop: ≈2^227.2, dreg: 54, mem: ≈2^227.2, t: 4, m: 1024, tag: arora-gb
>>> LWE.arora_gb(params.updated(Xs=ND.UniformMod(3), Xe=ND.CenteredBinomial(4), m=1024))
rop: ≈2^189.9, dreg: 39, mem: ≈2^189.9, t: 4, m: 1024, tag: arora-gb
>>> Xs, Xe = ND.SparseTernary(1024, 64, 0), ND.DiscreteGaussian(2**10)
>>> LWE.arora_gb(LWE.Parameters(n=1024, q=2**40, Xs=Xs, Xe=Xe))
rop: ≈2^inf, dreg: ≈2^inf, tag: arora-gb
```

[EPRINT:ACFP14] Martin R. Albrecht, Carlos Cid, Jean-Charles Faugère & Ludovic Perret (2014). Algebraic algorithms for LWE. Cryptology ePrint Archive, Report 2014/1018. https://eprint.iacr.org/2014/1018

[ICALP:AroGe11] Sanjeev Arora & Rong Ge (2011). New algorithms for learning in presence of errors. In L. Aceto, M. Henzinger & J. Sgall (Eds.), ICALP 2011, Part I (pp. 403–415). Springer, Heidelberg.

Methods

| Method | Description |
| --- | --- |
| `cost_Gaussian_like(params[, ...])` | Estimate cost using absolute bounds for secrets and Gaussian tail bounds for noise. |
| `cost_bounded(params[, success_probability, ...])` | Estimate cost using absolute bounds for secrets and noise. |
| `equations_for_secret(params)` | Return `(d, n)` tuple to encode that `n` equations of degree `d` are available from the LWE secret. |
| `ps_single(C)` | Probability that a Gaussian is within `C` standard deviations. |
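The "probability that a Gaussian is within C standard deviations" mentioned under Methods is the standard tail bound erf(C/√2) for a Gaussian sample around its mean. A minimal stdlib sketch (illustrative; the estimator may compute this differently, e.g. for the discrete Gaussian):

```python
from math import erf, sqrt

def gaussian_within(C):
    # Pr[|X - mu| <= C * sigma] for X ~ N(mu, sigma^2) is erf(C / sqrt(2)).
    return erf(C / sqrt(2.0))

one_sigma = gaussian_within(1.0)    # the familiar ~0.6827
three_sigma = gaussian_within(3.0)  # the familiar ~0.9973
```

This is what makes `cost_Gaussian_like` possible: truncating Gaussian noise to ±C standard deviations bounds its support (so Arora-Ge applies) at a quantifiable cost to the success probability.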