SteaneReedMuller Code #251
Conversation
Had to close #249 because of the merge conflicts introduced by #250. The Gottesman subset is being tested with the official Gottesman test.
I was told that Stabilizer can accept two Bool arguments, with no need to pass a UInt8 argument, but the following happens when testing large stabilizer circuits with Stabilizer(extended_Hx, extended_Hz). SteaneReedMuller(1,3) through (1,7) don't have this error.
An easy way to check what constructors are available is:
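A minimal sketch of that check (assuming QuantumClifford is installed; the exact snippet from the original comment was not preserved):

```julia
# Assumes QuantumClifford is installed; illustrative only.
using QuantumClifford

methods(Stabilizer)   # lists every available constructor with its argument types

# The two-Bool-matrix form discussed in this thread: each row takes its X part
# from `xs` and its Z part from `zs`.
xs = Bool[1 1 0 0; 0 0 0 0]
zs = Bool[0 0 0 0; 0 0 1 1]
Stabilizer(xs, zs)
```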
I think you have been using method 5, but method 7 should be more convenient. Given your error message, it seems the problem is that somewhere a `Vector{Any}` is being created. Also, it is worrisome that this happens only for some sizes. It implies there might be some size-dependent type-instability, which is usually a bad sign.
The Hx and Hz are both Boolean matrices before they are passed down to the Stabilizer. I will check out how the `Vector{Any}` comes into play. I will also look into the tip about size-dependent type-instability.
On the master branch tests pass. You can disregard nightly/performance/downgrade/buildkite, but the rest should pass here. Let's keep this in draft form until the tests pass.
To run the tests locally, you can do:
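For example (a common approach; the exact command from the original comment was not preserved), from the root of the local clone:

```julia
# From the root of the QuantumClifford.jl checkout.
using Pkg
Pkg.activate(".")   # use the package's own project environment
Pkg.test()          # runs test/runtests.jl
```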
`code_k(c::...)` will give 0 at (4, 2) based on Eq. 2. Steane calls it the point where the "code size falls to 0." I am not sure how the base test handles k = 0 for frames or other calculations that require k. I've come to appreciate that even my mistakes uncover loopholes in the tests; the base test is more comprehensive now as a result.
As per Steane's paper, I have made sure that when k falls to 0 the constructor throws, since Steane points out that the method does not work in these cases. Kindly confirm this from the paper as well.
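A minimal, self-contained sketch of that guard (illustrative names only; `k_from_eq2` is a placeholder, not the actual formula from Steane's Eq. 2):

```julia
# Illustrative sketch only: `k_from_eq2` is a placeholder, NOT Steane's Eq. 2.
struct SRMParams
    t::Int
    r::Int
end

k_from_eq2(t, r) = r - t   # stand-in formula for illustration

function checked_params(t::Int, r::Int)
    k = k_from_eq2(t, r)
    k <= 0 && throw(ArgumentError("(t=$t, r=$r) gives k <= 0; the construction method does not apply"))
    return SRMParams(t, r)
end

checked_params(2, 5)    # fine, k > 0 under the placeholder formula
# checked_params(4, 2)  # would throw, mirroring the (4, 2) case mentioned above
```

The real implementation would of course use the k expression from the paper in place of the stand-in.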
This helps to identify the following cases:
I don't know precisely the definition of `code_s(c::...)`, so I didn't include it. Maybe it's the total number of syndrome bits? If there are improvements that can be made, or if further attention is required in the tests, please let me know.
Maybe it's better to include a sentence in the docstring about why the generator matrices are actually stabilizers, i.e. the same as parity-check tableaus for some values. This could prevent confusion if people see a parity-check tableau and ask why it is formed in the structure of a generator matrix, i.e. G = H, as follows:
"The method works for k > 0, and the generators for k < 0 are stabilisers of k > 0 codes." It would be helpful if I could add this line to the docstring.
As per the definition of ..., I guess `code_s` is the total number of rows of the stabilizer, since in CSS codes we have separate rows for Hz and Hx. This can be seen in different examples of CSS codes. In Steane's case, I have something like ..., since that tells us the total number of rows in this case. Edit: I don't think ...
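For concreteness, here is a small illustration of that reading (my assumption about what `code_s` means, not the package's definition): for a CSS code, the total number of stabilizer rows would be the X-type rows plus the Z-type rows.

```julia
# Illustrative helper only; `code_s_css` is NOT an actual API of the package.
code_s_css(Hx::AbstractMatrix{Bool}, Hz::AbstractMatrix{Bool}) = size(Hx, 1) + size(Hz, 1)

Hx = Bool[1 1 1 1 0 0 0; 1 1 0 0 1 1 0; 1 0 1 0 1 0 1]   # toy X-type checks (3 rows)
Hz = Bool[1 1 1 1 0 0 0; 1 1 0 0 1 1 0; 1 0 1 0 1 0 1]   # toy Z-type checks (3 rows)
code_s_css(Hx, Hz)   # == 6 rows in total
```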
Very sorry for not providing complete details, my bad. The problem was in the ... ; complete trace: ...
It might help to run the other ECC tests as well, to see what fails. The one about encoding & syndrome circuits is probably too involved, with too many pieces that work together, so it is difficult to debug. What other tests fail, and in particular, what are the specific failures? It is indeed quite probable that the issue is the k=0 codes. If it happens only for them, we should: (1) understand what specifically about the assumptions in the tests is wrong, and (2) filter out these codes only for these specific tests.
I would need to understand better what that means, but yeah, go ahead and add it. I will ask more questions when I get to reviewing this.
Every other test seems to work well. It seems to me that the issue is k = 0, and the way the code base is set up produces an error when k = 0. Please have a look at the CI tests; they show the same error that I get locally.
Breakdown after running each encoding and decoding test file separately:
- test_ecc: k < 0 error.
- test_ecc_codeproperties: I have not provided `code_s` in the file, as I don't know its definition.
- syndromes: ...
What does ... mean?
This list of properties: QuantumClifford.jl/src/QuantumClifford.jl, line 1082 (at commit 11ae38c).
I think I understand what k=0 is. I do not think it makes sense to have situations where k<0.
It is not obvious to me that the other tests work. If I am reading this right, all these tests are not executed because the test before them failed. https://github.com/QuantumSavory/QuantumClifford.jl/blob/master/test/runtests.jl#L62-L74
I phrased it wrong as a general statement. Testing individually, ...
I have been trying to ask whether we should include k = 0 for some time ;/ I even prepared an equation image for it! I will add a condition that excludes the k = 0 cases to avoid the cases which have the following error. I will make sure to verify against the Steane paper so that the algorithm is correct. Thanks for your comments! I will try to improve the work.
To clarify, k=0 can make perfect sense. I do not understand what k<0 is.
This k < 0 has been confusing for me as well ;/ I don't understand how 'generators for k < 0 are stabilizers of k > 0 codes.' Given my limited knowledge of the subject, as I am just starting out, I have not read how generator matrices are considered as stabilizer/parity matrices when k goes negative. The paper does not elaborate on this either.
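As a general background note (standard stabilizer-code counting, not something taken from the paper): an [[n, k]] stabilizer code specified by s independent generators satisfies

```latex
k = n - s \geq 0 ,
```

so a formula that returns k < 0 only signals that the constructed rows cannot all be independent stabilizer generators on n qubits.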
I am seeking review so that I can make it better.
The paper has one small paragraph about it, so I carefully added it to the docstring.
Please review this as well. Looking forward to your comments!
I will look into it sometime late next week. Ping me if I have not by then.
Removed unnecessary complexity. Removed the cases where the construction method fails, as per Steane, which makes it significantly simpler and easier to understand. Removed an unnecessary helper function.
Dear @Krastanov, I am looking forward to your comments and review!
Showing Table 1 from the Steane paper as a reference here.
There are a lot of code instances that are not tested for correctness. Adding these to the test runner would make it easier to know whether there are errors here. I have not done a very in-depth review of the codes being produced by this yet -- I will wait for the test runner to verify them first.
I made some minor suggestions on formatting. The docstring for the code confuses me a bit, but it seems like only a minor change is necessary there.
Overall this looks pretty good, but let's first see what the test runner says when given a larger family of codes.
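For reference, a hypothetical sketch of the kind of additions meant here, using the k > 0 instances identified later in this thread (the actual variable name and test-file layout may differ):

```julia
using QuantumClifford.ECC   # assumes this PR's branch, which defines SteaneReedMuller

# k > 0 instances mentioned in this thread that could be appended to the list of
# codes exercised by the generic ECC property tests.
more_codes = [
    SteaneReedMuller(1, 3),
    SteaneReedMuller(2, 5),
    SteaneReedMuller(2, 6),
    SteaneReedMuller(2, 7),
    SteaneReedMuller(3, 7),
]
```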
src/ecc/codes/steanereedmuller.jl
Outdated
@@ -0,0 +1,80 @@
"""The family of non-CSS Steane-Reed-Muller codes, as discovered by Steane in his 1999 paper [steane1999quantum](@cite). | |||
The construction method is effective for k > 0, and generators G = (G_x | G_z) for k < 0 serve as stabilizers (parity check matrices) H = (H_x | H_z) for k > 0 codes.
I do not understand this sentence. Also, k is not defined.
Also, you have not specified what arguments this constructor takes.
I have added the explanation provided by Steane with a bit more description, and I have also explained the parameters. I hope it makes sense now.
Please let me know whether the docstring makes sense now, or whether I should add more details. I tried to include what was provided in the Steane paper.
Thanks for your comments. I will work on the changes. Our limit is from 1 to 7 for both r and t, and most of the code instances, such as all of t = 5 given n and t = 4 given n, have k < 0, so those will give an argument error. I will check all the cases where k > 0, such as (2, 5), (2, 6), (2, 7), (3, 7). I will also form a larger table to see which values give k > 0 when t = 6 and t = 7.
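A hypothetical sketch of that scan (the `k_of` formula is a stand-in, not the paper's expression; only the structure of the enumeration is the point):

```julia
# Hypothetical scan; `k_of` is a stand-in, NOT the formula from the paper.
k_of(t, r) = r - t

table      = [(t, r, k_of(t, r)) for t in 1:7, r in 1:7]   # all (t, r) pairs with their k
positive_k = filter(x -> x[3] > 0, vec(table))             # keep only k > 0 instances
```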
Reminder: please do not push after every single small edit. Try to structure your pull request and do pushes only when something substantial is ready. One way to think about this: do not make pushes until you have run the tests locally.
Thanks for your comments. I have applied the code review suggestions, deleted the steane_convention helper function, and added a proper docstring. Locally, there is a problem with using LDPCDecoders 0.3.1, and building gives an error. The problem goes away by reverting to 0.3.0: ERROR: Unsatisfiable requirements detected for package LDPCDecoders [3c486d74]:
The latest version of LDPCDecoders is 0.3.1: https://juliahub.com/ui/Packages/General/LDPCDecoders. Your local environment was probably not updated to know about it.
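A likely fix (my assumption; the exact command in the original reply was not preserved) is to refresh the registry and then update:

```julia
using Pkg
Pkg.Registry.update()   # fetch the latest General registry
Pkg.update()            # lets the resolver pick up LDPCDecoders 0.3.1
```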
The CSS ...