estimator.lwe_bkw.CodedBKW

class estimator.lwe_bkw.CodedBKW
__call__(params: estimator.lwe_parameters.LWEParameters, ntest=None, log_level=1)

Coded-BKW as described in [C:GuoJohSta15].

Parameters
  • params – LWE parameters

  • ntest – Number of coordinates to do hypothesis testing on.

Returns

A cost dictionary

The returned cost dictionary has the following entries:

  • rop: Total number of word operations (≈ CPU cycles).

  • b: BKW tables have size q^b.

  • t1: Number of plain BKW tables.

  • t2: Number of Coded-BKW tables.

  • ℓ: Hypothesis testing has tables of size q^{ℓ+1}.

  • #cod: Number of coding steps.

  • #top: Number of guessing steps (typically zero).

  • #test: Number of coordinates to do hypothesis testing on.

EXAMPLE:

>>> from sage.all import oo
>>> from estimator import *
>>> LWE.coded_bkw(LightSaber.updated(m=oo))
rop: ≈2^171.7, m: ≈2^159.4, mem: ≈2^160.4, b: 12, t1: 3, t2: 18, ℓ: 11, #cod: 423, #top: 1...
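
The returned object supports dictionary-style access, so the individual entries listed above can be read by key. A minimal sketch (plain Python rather than a doctest; the variable names are illustrative only):

from sage.all import oo
from estimator import *

cost = LWE.coded_bkw(LightSaber.updated(m=oo))
# table-size exponent and the number of plain (t1) and coded (t2) BKW tables
b, t1, t2 = cost["b"], cost["t1"], cost["t2"]
# sample and memory requirements reported above
samples_needed = cost["m"]
memory = cost["mem"]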

We may need to amplify the number of samples, which modifies the noise distribution:

>>> from sage.all import oo
>>> from estimator import *
>>> LightSaber
LWEParameters(n=512, q=8192, Xs=D(σ=1.58), Xe=D(σ=2.00), m=512, tag='LightSaber')
>>> cost = LWE.coded_bkw(LightSaber); cost
rop: ≈2^184.3, m: ≈2^172.2, mem: ≈2^173.2, b: 13, t1: 0, t2: 18, ℓ: 12, #cod: 456, #top: 0...
>>> cost["problem"]
LWEParameters(n=512, q=8192, Xs=D(σ=1.58), Xe=D(σ=10.39), m=..., tag='LightSaber')
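
The jump in error width from σ = 2.00 to σ ≈ 10.39 is consistent with the usual sample-amplification trick of summing a small number of independent samples, which scales the error standard deviation by the square root of the number of samples combined. A back-of-the-envelope sketch (an illustration only, not the estimator's internal routine; k = 27 is a hypothetical choice that happens to reproduce the width above):

from math import sqrt

sigma_e = 2.00                        # LightSaber's original error width (Xe above)
k = 27                                # hypothetical number of samples summed per fresh sample
sigma_amplified = sigma_e * sqrt(k)   # ≈ 10.39, matching the amplified Xe above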

TESTS:

>>> LWE.coded_bkw(schemes.TFHE630)
rop: ≈2^153.1, m: ≈2^139.4, mem: ≈2^132.6, b: 4, t1: 0, t2: 24, ℓ: 3, #cod: 552, #top: 0, #test: 78, ...

Note

See also [C:KirFou15].

Methods

  • N(i, sigma_set, b, q) – Return N_i for the i-th [N_i, b] linear code.

  • __init__()

  • b(params[, ntest, log_level])

  • cost(t2, b, ntest, params[, ...]) – Coded-BKW cost.

  • t1(ell, t2, b[, ntest]) – Compute t1 from N_i by observing that any N_i ≤ b gives no advantage over vanilla BKW, but the estimates for coded BKW always assume quantisation noise, which is too pessimistic for N_i ≤ b.

Attributes

  • ntest
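
For completeness, the estimate can also be produced by instantiating the class and calling it directly, rather than via the LWE.coded_bkw shortcut used above. A sketch, assuming the import path given in the page title and leaving ntest at its default:

from sage.all import oo
from estimator import *
from estimator.lwe_bkw import CodedBKW

# equivalent to LWE.coded_bkw(...); ntest=None lets the estimator choose the
# number of hypothesis-testing coordinates
cost = CodedBKW()(LightSaber.updated(m=oo), ntest=None)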