# estimator.lwe_bkw.CodedBKW

`class estimator.lwe_bkw.CodedBKW`

`__call__(params: LWEParameters, ntest=None, log_level=1)`

Coded-BKW as described in [C:GuoJohSta15].

Parameters:
• params – LWE parameters

• ntest – Number of coordinates to hypothesis test.

Returns:

A cost dictionary

The returned cost dictionary has the following entries:

• `rop`: Total number of word operations (≈ CPU cycles).

• `b`: BKW tables have size q^b.

• `t1`: Number of plain BKW tables.

• `t2`: Number of Coded-BKW tables.

• `ℓ`: Hypothesis testing has tables of size q^{ℓ+1}.

• `#cod`: Number of coding steps.

• `#top`: Number of guessing steps (typically zero).

• `#test`: Number of coordinates to do hypothesis testing on.
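
These entries are read off the returned cost object with dictionary-style access (as the `cost["problem"]` example below also shows); a minimal sketch, assuming the estimator is installed:

```
>>> from sage.all import oo, log
>>> from estimator import *
>>> cost = LWE.coded_bkw(schemes.LightSaber.updated(m=oo))
>>> cost["t1"] + cost["t2"]  # total number of reduction tables
21
>>> within_budget = float(log(cost["rop"], 2)) < 180  # compare cost against a security target
```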

EXAMPLE:

```
>>> from sage.all import oo
>>> from estimator import *
>>> LWE.coded_bkw(schemes.LightSaber.updated(m=oo))
rop: ≈2^171.7, m: ≈2^159.4, mem: ≈2^160.4, b: 12, t1: 3, t2: 18, ℓ: 11, #cod: 423, #top: 1...
```
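
The `ntest` parameter from the signature can also be fixed by hand rather than left to the internal search; a hedged sketch (the value 60 is an arbitrary illustration, and a manual choice is generally no better than the default):

```
>>> from sage.all import oo
>>> from estimator import *
>>> cost = LWE.coded_bkw(schemes.LightSaber.updated(m=oo), ntest=60)  # hypothetical ntest choice
```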

We may need to amplify the number of samples, which modifies the noise distribution:

```
>>> from sage.all import oo
>>> from estimator import *
>>> schemes.Kyber512
LWEParameters(n=512, q=3329, Xs=D(σ=1.22), Xe=D(σ=1.22), m=512, tag='Kyber 512')
>>> cost = LWE.coded_bkw(schemes.Kyber512); cost
rop: ≈2^178.8, m: ≈2^166.8, mem: ≈2^167.8, b: 14, t1: 0, t2: 16, ℓ: 13, #cod: 448, #top: 0, #test: 64, ...
>>> cost["problem"]
LWEParameters(n=512, q=3329, Xs=D(σ=1.22), Xe=D(σ=6.24), m=..., tag='Kyber 512')
```
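
The growth of σ in `Xe` is the price of amplification: new samples are built from sums of existing ones, which grows the noise by roughly the square root of the number of summands. A minimal check of that effect, assuming the distribution objects expose a `stddev` attribute:

```
>>> from estimator import *
>>> cost = LWE.coded_bkw(schemes.Kyber512)
>>> cost["problem"].Xe.stddev > schemes.Kyber512.Xe.stddev  # amplification increased the noise
True
```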

TESTS:

```
>>> LWE.coded_bkw(schemes.TFHE630)
rop: ≈2^144.7, m: ≈2^131.8, mem: ≈2^132.8, b: 4, t1: 0, t2: 27, ℓ: 3, #cod: 559, #top: 0, #test: 71, ...
```

Methods

| Method | Description |
| --- | --- |
| `N(i, sigma_set, b, q)` | Return N_i for the i-th [N_i, b] linear code. |
| `b(params[, ntest, log_level])` |  |
| `cost(t2, b, ntest, params[, ...])` | Coded-BKW cost. |
| `t1(ell, t2, b[, ntest])` | We compute t1 from N_i by observing that any N_i ≤ b gives no advantage over vanilla BKW, but the estimates for coded BKW always assume quantisation noise, which is too pessimistic for N_i ≤ b. |
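The `t1` rule can be read as a counting procedure over the code lengths N_i; a minimal sketch under that reading, with a hypothetical standalone `N` callable in place of the method above:

```
def t1_from_N(N, t2, b, sigma_set, q):
    """Count leading reduction steps whose [N_i, b] code has N_i <= b.

    Such steps give no advantage over plain BKW (the rule stated above),
    so they are performed as plain BKW tables instead.
    """
    t1 = 0
    for i in range(1, t2 + 1):
        if N(i, sigma_set, b, q) <= b:
            t1 += 1
        else:
            break  # assumes N_i is non-decreasing in i
    return t1
```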