
minimum distance algorithms for calculating code distance for Quantum LDPC codes #353

Open
Fe-r-oz opened this issue Sep 4, 2024 · 7 comments
Labels: enhancement (New feature or request)

Comments

@Fe-r-oz (Contributor) commented Sep 4, 2024

Is your feature request related to a problem? Please describe.

@esabo, pinging you as you are working on this! Please feel free to add more insights.

CodingTheory.jl has introduced a trellis algorithm that uses the MacWilliams identity to calculate the minimum distance. In the qdistrnd branch there is also a new QDistRnd algorithm, tentatively due to be released next month, for the same calculation. They build on Oscar (and, underneath it, Nemo and friends) for implementing minimum distance algorithms. For more details, see #14 (comment).

A question arose two days ago while designing the BBQLDPC code in #352: how does one find the code distance of a general quantum LDPC code? The difficulty is that there is no easy way to compute the combination ker(Hx) \ rowspace(Hz). Eric notes that the corresponding minimization, d = min { |v| : v ∈ ker(Hx) \ rs(Hz) }, is NP-complete in the classical setting and #P-hard in the quantum setting. We can compute ker(Hx) and rowspace(Hz) individually without trouble, but not their combination, so calculating the code distance of sophisticated codes requires dedicated minimum distance algorithms.

Bravyi used an integer linear programming (ILP) technique to calculate the minimum code distance. As Eric notes, "There’s not really a better way than standard enumeration-based algorithms like Brouwer-Zimmermann or probabilistic algorithms (under which the LP solution falls)". So documenting all the approaches, and using the ones that are already available, becomes an important correctness check.

Describe the solution you’d like

First, flesh out all the different approaches for calculating the code distance of quantum LDPC codes. This will require some literature review, but it will be helpful and provide a better perspective. The paper to start with is Fault-tolerant quantum computing with color codes, which is the paper Bravyi used for calculating the code distance. In it, the authors "present integer-program-based decoding algorithm for identifying the most likely error given the (possibly faulty) syndrome". Are there any ILP packages in Julia that can be leveraged? See Fastest open-licensed solver for integer problems for a general overview.
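
As one concrete possibility (a hedged sketch only, not code from this issue or from Bravyi's paper): the usual ILP formulation minimizes the weight of a binary vector x that commutes with every Z check (Hz·x ≡ 0 mod 2) while anticommuting with one chosen logical Z representative (lz·x ≡ 1 mod 2), with the mod-2 constraints linearized through auxiliary integer slack variables. In Julia this could be written with JuMP; the solver choice (HiGHS) and the helper name ilp_weight are illustrative assumptions.

# Hedged sketch: ILP bound on the X-distance of a CSS code.
# Hz is an m×n binary matrix, lz a length-n binary vector (one logical Z representative).
using JuMP, HiGHS

function ilp_weight(Hz::AbstractMatrix{<:Integer}, lz::AbstractVector{<:Integer})
    m, n = size(Hz)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:n], Bin)        # support of the candidate logical X operator
    @variable(model, s[1:m] >= 0, Int)   # slacks enforcing Hz*x ≡ 0 (mod 2)
    @variable(model, t >= 0, Int)        # slack enforcing lz·x ≡ 1 (mod 2)
    @constraint(model, [i in 1:m], sum(Hz[i, j] * x[j] for j in 1:n) == 2 * s[i])
    @constraint(model, sum(lz[j] * x[j] for j in 1:n) == 2 * t + 1)
    @objective(model, Min, sum(x))
    optimize!(model)
    return Int(round(objective_value(model)))
end

d_X is then the minimum of ilp_weight(Hz, lz_j) over a basis of logical Z operators lz_j, and d_Z follows by swapping the roles of Hx/Hz and lx/lz.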

Note that the regular code distance algorithm that BP+OSD uses already fails for n > 10, and even that is only for generic codes.

Try calculating the code distance using the minimum distance algorithms from CodingTheory.jl, as they are implemented on top of Oscar. Some of them are still in the dev phase and will tentatively be released towards the end of this month. Try the simpler quantum codes first. Check out minimum_distance from CodingTheory.jl. Since CodingTheory.jl has its own StabilizerCode representation, suppose you have your parity check matrix H, which you can obtain with stab_to_gf2(parity_checks(code)); then do the following:

stab = StabilizerCode(H)
minimum_distance(stab)

Make sure to wrap the binary check matrices with matrix(GF(2), H) first.
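
Putting those pieces together, a hedged end-to-end sketch could look like the following (Steane7 is only an illustrative code choice, and the exact StabilizerCode constructor signature should be checked against the CodingTheory.jl docs):

using QuantumClifford, QuantumClifford.ECC   # parity_checks, stab_to_gf2
using Oscar, CodingTheory                    # GF(2) matrices, StabilizerCode, minimum_distance

code = Steane7()
H_bits = stab_to_gf2(parity_checks(code))    # binary (symplectic) check matrix
H = matrix(GF(2), H_bits)                    # wrap as an Oscar GF(2) matrix, as noted above
stab = StabilizerCode(H)
d = minimum_distance(stab)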

In addition, the ILP approach used by Bravyi requires the logical operators of the CSS code. Given a CSS code, we currently have the functionality to call parity_check_x and parity_check_z, which correspond to hx and hz. But we do not have functionality to obtain lx and lz, the logical operators of a CSS code.

A small task would be to calculate the logical operators for any quantum code, and to compute and test them for all the quantum codes currently available. See https://pypi.org/project/bposd/, which provides examples of calculating the logical operators of the Steane code.
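
On the QuantumClifford side, one hedged way to obtain binary lx and lz today is via the tableau machinery mentioned later in this thread (MixedDestabilizer with logicalxview / logicalzview); whether the result matches the hx/hz/lx/lz conventions expected by BP+OSD-style tooling would still need to be checked:

using QuantumClifford, QuantumClifford.ECC

code = Steane7()                      # illustrative choice
md = MixedDestabilizer(parity_checks(code))
lx = stab_to_gf2(logicalxview(md))    # binary representation of the logical X operators
lz = stab_to_gf2(logicalzview(md))    # binary representation of the logical Z operators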

Describe alternatives you’ve considered

To be added soon.

Additional Context

Eric's PhD dissertation was on Trellis Decoding And Applications For Quantum Error Correction. In it, he makes an interesting remark: "We present a state-of-the-art algorithm for computing the minimum distance for any stabilizer code over a finite field of prime dimension. We also define a new weight enumerator for stabilizer codes over F_2 incorporating the phases of each stabilizer and provide a trellis-based algorithm to compute it." For binary quantum LDPC codes, this would be helpful in finding the code distance. Trellis algorithms are already present in CodingTheory.jl.

@Fe-r-oz added the enhancement (New feature or request) label on Sep 4, 2024
@Fe-r-oz (Contributor, Author) commented Sep 5, 2024

I have initiated an issue to monitor the progress of the new functionality described in paragraph three of the solution section, specifically concerning the calculation of the minimum distance for QLDPC codes: esabo/CodingTheory#27.
I am working on using minimum_distance from CodingTheory.jl for the code distance calculation of the bivariate bicycle quantum LDPC code (BBQLDPC) that was introduced in #352.

@esabo commented Sep 5, 2024

There are BZ-type algorithms as implemented in MAGMA. My implementation of these has not yet been pushed to a public branch and is relatively poor. Grassl published three minor updates to these algorithms last week, which I plan on adding. There are probabilistic algorithms, which for quantum are similar to QDistRnd. There is a paper coming out this month by a group in England about improvements to this algorithm, which I also plan on implementing. Typical in the field after Bravyi's work is using BP(-OSD) or LP to repeatedly decode and record the smallest logical seen. Then finally there is the trellis approach, which I have found able to handle codes of O(10k) qubits quite easily. There are several other classical approaches that have not yet been applied to the quantum setting, if they are even applicable at all. There has been little to no research on quantum minimum distance finding compared to the classical case, and there are clear directions in which to start.

Note that I have currently disabled the trellis code because I switched the entire library from using length-n vectors over GF(q^2) to symplectic length-2n vectors over GF(q), but the trellis requires the extension-field representation. I could potentially just call the symplectic_to_quadratic function at the beginning of every trellis function, but I discovered a new way of creating the trellises (Chapter 3 of my thesis) which blows the current code out of the water in terms of speed and simplicity, and I have not yet written a robust version of it. In the end, that entire trellis file will likely be replaced with this new method.

As you mentioned, I do strongly support the use of the MacWilliams identities in computing distances. If the dual is significantly smaller and you know its weight enumerator, you immediately know the distance of the code. The trellis makes computing the weight enumerator and the distance equally difficult, so it made sense to set it up to compute the smaller of the two objects.

@esabo commented Sep 5, 2024

Note that I have not implemented the LP solver yet. Feel free to do that and push it, haha. See the JuMP extension, where I use LP for classical LDPC codes, for an example of using the solver.

@esabo commented Sep 5, 2024

See also the function logicals for computing the logicals. Doing so automatically in every constructor is one of the reasons my package is slow for codes with several thousand qubits; the plan is to eventually support sparse matrices to fix this. I have currently disabled this for qudit codes due to a typo that I haven't gotten around to fixing yet, but there are three algorithms for this in the binary case. They are selected via the optional argument logs_alg in several functions.

@Krastanov (Member) commented

@esabo, a sidenote FYI: we have a relatively fast computation of logical operators (and it can still be made much faster):

julia> c1 = random_stabilizer(1000, 2000); # k=1000, n=2000 dense random code

julia> @time canonicalize_gott!(c1); # this can be made non-allocating and faster, but it has not been a priority
  0.045933 seconds (57.02 k allocations: 48.347 MiB, 1.54% gc time)

julia> c2 = random_stabilizer(1000, 2000);

julia> @time MixedDestabilizer(c2); # this is not optimized fully, can still be much faster
  0.135252 seconds (187.09 k allocations: 198.919 MiB, 2.48% gc time)

@esabo commented Sep 6, 2024

This is currently the default method in my package for computing this for binary codes. The canonical form is different for qudits. I would have to check, but I don't think ours allocates. The only way we could speed up our implementation of it is to remove it from Oscar and use pure Julia matrices. But then we would sacrifice the correctness and robustness of using a finite field library (Flint, via Oscar), and all algorithms would necessarily become more complicated by having to implement our own finite fields (which would not be faster than Flint).

@Krastanov (Member) commented

I do not have the benchmarks to back this up, but some of what you said is surprising to me.

  • I would have expected stabilizer tableaux algebra to be faster here because we have SIMD-ified special-purpose bit-packed arrays. I am sure Flint is heavily optimized, but the fact that they need to support any field might forbid some special-case optimizations we have. I should do some benchmarks -- if Flint is indeed faster, we should probably slowly transition to using it. Anyway, this does not have any bearing on your work, it was just idle curiosity of mine about what to do for QuantumClifford.

  • In QuantumClifford we keep track of phases, which computationally is about 50% of the work (but we can disable them).

  • For what it's worth, the operations we use often are not allocating -- we just have not bothered optimizing everything else yet. Here is a different type of canonicalization that is more representative of the performance we can achieve:

julia> c1 = random_stabilizer(1000, 2000);

julia> @time canonicalize!(c1);
  0.008481 seconds (1 allocation: 32 bytes)

Pardon the tangent I went on here. Anyway, for fancy ECC algebraic operations, I am looking forward to just depending on your library.
