From e2f35846c8f0e5a8698806a99eee86bebc940e0e Mon Sep 17 00:00:00 2001
From: Steven Silvester
Date: Tue, 31 Oct 2023 07:49:02 -0500
Subject: [PATCH] Undo vendor changes

---
 .pre-commit-config.yaml                       |   2 +
 .../github.com/klauspost/compress/README.md   |  88 ++++++-------
 .../klauspost/compress/fse/README.md          |  44 +++----
 .../klauspost/compress/huff0/README.md        |  46 +++----
 vendor/github.com/klauspost/compress/s2sx.mod |   1 +
 .../klauspost/compress/zstd/README.md         | 118 +++++++++---------
 .../github.com/montanaflynn/stats/.gitignore  |   2 +-
 .../montanaflynn/stats/CHANGELOG.md           |  16 +--
 vendor/github.com/montanaflynn/stats/Makefile |   8 +-
 vendor/github.com/youmark/pkcs8/LICENSE       |   2 +-
 vendor/github.com/youmark/pkcs8/README.md     |   1 +
 11 files changed, 166 insertions(+), 162 deletions(-)

diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 57a3d44cb3..f634d87abb 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -10,8 +10,10 @@ repos:
   - id: check-merge-conflict
   - id: check-json
   - id: end-of-file-fixer
+    exclude: ^vendor/
     exclude_types: [json,yaml]
   - id: trailing-whitespace
+    exclude: ^vendor/
     exclude_types: [json,yaml]

 - repo: https://github.com/executablebooks/mdformat
diff --git a/vendor/github.com/klauspost/compress/README.md b/vendor/github.com/klauspost/compress/README.md
index 95541e61f3..3429879eb6 100644
--- a/vendor/github.com/klauspost/compress/README.md
+++ b/vendor/github.com/klauspost/compress/README.md
@@ -27,7 +27,7 @@ This package provides various compression algorithms.
   * Add [snappy replacement package](https://github.com/klauspost/compress/tree/master/snappy).
   * zstd: Fix incorrect encoding in "best" mode [#415](https://github.com/klauspost/compress/pull/415)
-* Aug 3, 2021 (v1.13.3)
+* Aug 3, 2021 (v1.13.3)
   * zstd: Improve Best compression [#404](https://github.com/klauspost/compress/pull/404)
   * zstd: Fix WriteTo error forwarding [#411](https://github.com/klauspost/compress/pull/411)
   * gzhttp: Return http.HandlerFunc instead of http.Handler. Unlikely breaking change. [#406](https://github.com/klauspost/compress/pull/406)
@@ -49,7 +49,7 @@ This package provides various compression algorithms.
 * May 25, 2021 (v1.12.3)
   * deflate: Better/faster Huffman encoding [#374](https://github.com/klauspost/compress/pull/374)
   * deflate: Allocate less for history. [#375](https://github.com/klauspost/compress/pull/375)
-  * zstd: Forward read errors [#373](https://github.com/klauspost/compress/pull/373)
+  * zstd: Forward read errors [#373](https://github.com/klauspost/compress/pull/373)

 * Apr 27, 2021 (v1.12.2)
   * zstd: Improve better/best compression [#360](https://github.com/klauspost/compress/pull/360) [#364](https://github.com/klauspost/compress/pull/364) [#365](https://github.com/klauspost/compress/pull/365)
@@ -57,7 +57,7 @@ This package provides various compression algorithms.
   * deflate: Improve level 5+6 compression [#367](https://github.com/klauspost/compress/pull/367)
   * s2: Improve better/best compression [#358](https://github.com/klauspost/compress/pull/358) [#359](https://github.com/klauspost/compress/pull/358)
   * s2: Load after checking src limit on amd64. [#362](https://github.com/klauspost/compress/pull/362)
-  * s2sx: Limit max executable size [#368](https://github.com/klauspost/compress/pull/368)
+  * s2sx: Limit max executable size [#368](https://github.com/klauspost/compress/pull/368)

 * Apr 14, 2021 (v1.12.1)
   * snappy package removed. Upstream added as dependency.
@@ -70,7 +70,7 @@ This package provides various compression algorithms.
 See changes prior to v1.12.1
-
+
 * Mar 26, 2021 (v1.11.13)
   * zstd: Big speedup on small dictionary encodes [#344](https://github.com/klauspost/compress/pull/344) [#345](https://github.com/klauspost/compress/pull/345)
   * zstd: Add [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) encoder option [#336](https://github.com/klauspost/compress/pull/336)
@@ -92,7 +92,7 @@ This package provides various compression algorithms.
   * s2: Less upfront decoder allocation. [#322](https://github.com/klauspost/compress/pull/322)
   * zstd: Faster "compression" of incompressible data. [#314](https://github.com/klauspost/compress/pull/314)
   * zip: Fix zip64 headers. [#313](https://github.com/klauspost/compress/pull/313)
-
+
 * Jan 14, 2021 (v1.11.7)
   * Use Bytes() interface to get bytes across packages. [#309](https://github.com/klauspost/compress/pull/309)
   * s2: Add 'best' compression option. [#310](https://github.com/klauspost/compress/pull/310)
@@ -129,53 +129,53 @@ This package provides various compression algorithms.
 See changes prior to v1.11.0
-
-* July 8, 2020 (v1.10.11)
+
+* July 8, 2020 (v1.10.11)
   * zstd: Fix extra block when compressing with ReadFrom. [#278](https://github.com/klauspost/compress/pull/278)
   * huff0: Also populate compression table when reading decoding table. [#275](https://github.com/klauspost/compress/pull/275)
-
-* June 23, 2020 (v1.10.10)
+
+* June 23, 2020 (v1.10.10)
   * zstd: Skip entropy compression in fastest mode when no matches. [#270](https://github.com/klauspost/compress/pull/270)
-
-* June 16, 2020 (v1.10.9):
+
+* June 16, 2020 (v1.10.9):
   * zstd: API change for specifying dictionaries. See [#268](https://github.com/klauspost/compress/pull/268)
   * zip: update CreateHeaderRaw to handle zip64 fields. [#266](https://github.com/klauspost/compress/pull/266)
   * Fuzzit tests removed. The service has been purchased and is no longer available.
-
-* June 5, 2020 (v1.10.8):
+
+* June 5, 2020 (v1.10.8):
   * 1.15x faster zstd block decompression. [#265](https://github.com/klauspost/compress/pull/265)
-
-* June 1, 2020 (v1.10.7):
+
+* June 1, 2020 (v1.10.7):
   * Added zstd decompression [dictionary support](https://github.com/klauspost/compress/tree/master/zstd#dictionaries)
   * Increase zstd decompression speed up to 1.19x. [#259](https://github.com/klauspost/compress/pull/259)
   * Remove internal reset call in zstd compression and reduce allocations. [#263](https://github.com/klauspost/compress/pull/263)
-
-* May 21, 2020: (v1.10.6)
+
+* May 21, 2020: (v1.10.6)
   * zstd: Reduce allocations while decoding. [#258](https://github.com/klauspost/compress/pull/258), [#252](https://github.com/klauspost/compress/pull/252)
   * zstd: Stricter decompression checks.
-
+
 * April 12, 2020: (v1.10.5)
   * s2-commands: Flush output when receiving SIGINT. [#239](https://github.com/klauspost/compress/pull/239)
-
-* Apr 8, 2020: (v1.10.4)
+
+* Apr 8, 2020: (v1.10.4)
   * zstd: Minor/special case optimizations.
     [#251](https://github.com/klauspost/compress/pull/251), [#250](https://github.com/klauspost/compress/pull/250), [#249](https://github.com/klauspost/compress/pull/249), [#247](https://github.com/klauspost/compress/pull/247)
-* Mar 11, 2020: (v1.10.3)
+* Mar 11, 2020: (v1.10.3)
   * s2: Use S2 encoder in pure Go mode for Snappy output as well. [#245](https://github.com/klauspost/compress/pull/245)
   * s2: Fix pure Go block encoder. [#244](https://github.com/klauspost/compress/pull/244)
   * zstd: Added "better compression" mode. [#240](https://github.com/klauspost/compress/pull/240)
   * zstd: Improve speed of fastest compression mode by 5-10% [#241](https://github.com/klauspost/compress/pull/241)
   * zstd: Skip creating encoders when not needed. [#238](https://github.com/klauspost/compress/pull/238)
-
-* Feb 27, 2020: (v1.10.2)
+
+* Feb 27, 2020: (v1.10.2)
   * Close to 50% speedup in inflate (gzip/zip decompression). [#236](https://github.com/klauspost/compress/pull/236) [#234](https://github.com/klauspost/compress/pull/234) [#232](https://github.com/klauspost/compress/pull/232)
   * Reduce deflate level 1-6 memory usage up to 59%. [#227](https://github.com/klauspost/compress/pull/227)
-
+
 * Feb 18, 2020: (v1.10.1)
   * Fix zstd crash when resetting multiple times without sending data. [#226](https://github.com/klauspost/compress/pull/226)
   * deflate: Fix dictionary use on level 1-6. [#224](https://github.com/klauspost/compress/pull/224)
   * Remove deflate writer reference when closing. [#224](https://github.com/klauspost/compress/pull/224)
-
-* Feb 4, 2020: (v1.10.0)
+
+* Feb 4, 2020: (v1.10.0)
   * Add optional dictionary to [stateless deflate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc#StatelessDeflate). Breaking change, send `nil` for previous behaviour. [#216](https://github.com/klauspost/compress/pull/216)
   * Fix buffer overflow on repeated small block deflate.
     [#218](https://github.com/klauspost/compress/pull/218)
   * Allow copying content from an existing ZIP file without decompressing+compressing. [#214](https://github.com/klauspost/compress/pull/214)
@@ -187,7 +187,7 @@ This package provides various compression algorithms.
 See changes prior to v1.10.0

 * Jan 20,2020 (v1.9.8) Optimize gzip/deflate with better size estimates and faster table generation. [#207](https://github.com/klauspost/compress/pull/207) by [luyu6056](https://github.com/luyu6056), [#206](https://github.com/klauspost/compress/pull/206).
-* Jan 11, 2020: S2 Encode/Decode will use provided buffer if capacity is big enough. [#204](https://github.com/klauspost/compress/pull/204)
+* Jan 11, 2020: S2 Encode/Decode will use provided buffer if capacity is big enough. [#204](https://github.com/klauspost/compress/pull/204)
 * Jan 5, 2020: (v1.9.7) Fix another zstd regression in v1.9.5 - v1.9.6 removed.
 * Jan 4, 2020: (v1.9.6) Regression in v1.9.5 fixed causing corrupt zstd encodes in rare cases.
 * Jan 4, 2020: Faster IO in [s2c + s2d commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) compression/decompression. [#192](https://github.com/klauspost/compress/pull/192)
@@ -211,7 +211,7 @@ This package provides various compression algorithms.
 * Nov 10, 2019: Fix inconsistent error returned by zstd decoder.
 * Oct 28, 2019 (v1.9.1) ztsd: Fix crash when compressing blocks.
   [#174](https://github.com/klauspost/compress/pull/174)
 * Oct 24, 2019 (v1.9.0) zstd: Fix rare data corruption [#173](https://github.com/klauspost/compress/pull/173)
-* Oct 24, 2019 zstd: Fix huff0 out of buffer write [#171](https://github.com/klauspost/compress/pull/171) and always return errors [#172](https://github.com/klauspost/compress/pull/172)
+* Oct 24, 2019 zstd: Fix huff0 out of buffer write [#171](https://github.com/klauspost/compress/pull/171) and always return errors [#172](https://github.com/klauspost/compress/pull/172)
 * Oct 10, 2019: Big deflate rewrite, 30-40% faster with better compression [#105](https://github.com/klauspost/compress/pull/105)
@@ -229,7 +229,7 @@ This package provides various compression algorithms.
 * Sep 5, 2019: Lazy initialization of zstandard predefined en/decoder tables.
 * Aug 26, 2019: (v1.8.1) S2: 1-2% compression increase in "better" compression mode.
 * Aug 26, 2019: zstd: Check maximum size of Huffman 1X compressed literals while decoding.
-* Aug 24, 2019: (v1.8.0) Added [S2 compression](https://github.com/klauspost/compress/tree/master/s2#s2-compression), a high performance replacement for Snappy.
+* Aug 24, 2019: (v1.8.0) Added [S2 compression](https://github.com/klauspost/compress/tree/master/s2#s2-compression), a high performance replacement for Snappy.
 * Aug 21, 2019: (v1.7.6) Fixed minor issues found by fuzzer. One could lead to zstd not decompressing.
 * Aug 18, 2019: Add [fuzzit](https://fuzzit.dev/) continuous fuzzing.
 * Aug 14, 2019: zstd: Skip incompressible data 2x faster. [#147](https://github.com/klauspost/compress/pull/147)
@@ -260,14 +260,14 @@ This package provides various compression algorithms.
 * Jan 14, 2017: Reduce stack pressure due to array copies. See [Issue #18625](https://github.com/golang/go/issues/18625).
 * Oct 25, 2016: Level 2-4 have been rewritten and now offers significantly better performance than before.
 * Oct 20, 2016: Port zlib changes from Go 1.7 to fix zlib writer issue. Please update.
-* Oct 16, 2016: Go 1.7 changes merged. Apples to apples this package is a few percent faster, but has a significantly better balance between speed and compression per level.
+* Oct 16, 2016: Go 1.7 changes merged. Apples to apples this package is a few percent faster, but has a significantly better balance between speed and compression per level.
 * Mar 24, 2016: Always attempt Huffman encoding on level 4-7. This improves base 64 encoded data compression.
 * Mar 24, 2016: Small speedup for level 1-3.
 * Feb 19, 2016: Faster bit writer, level -2 is 15% faster, level 1 is 4% faster.
 * Feb 19, 2016: Handle small payloads faster in level 1-3.
 * Feb 19, 2016: Added faster level 2 + 3 compression modes.
 * Feb 19, 2016: [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/), so there is a more even progresssion in terms of compression. New default level is 5.
-* Feb 14, 2016: Snappy: Merge upstream changes.
+* Feb 14, 2016: Snappy: Merge upstream changes.
 * Feb 14, 2016: Snappy: Fix aggressive skipping.
 * Feb 14, 2016: Snappy: Update benchmark.
 * Feb 13, 2016: Deflate: Fixed assembler problem that could lead to sub-optimal compression.
@@ -312,19 +312,19 @@ The packages contains the same as the standard library, so you can use the godoc

 Currently there is only minor speedup on decompression (mostly CRC32 calculation).

-Memory usage is typically 1MB for a Writer. stdlib is in the same range.
-If you expect to have a lot of concurrently allocated Writers consider using
+Memory usage is typically 1MB for a Writer. stdlib is in the same range.
+If you expect to have a lot of concurrently allocated Writers consider using
 the stateless compress described below.

 # Stateless compression

-This package offers stateless compression as a special option for gzip/deflate.
+This package offers stateless compression as a special option for gzip/deflate.
 It will do compression but without maintaining any state between Write calls.
 This means there will be no memory kept between Write calls, but compression and speed will be suboptimal.

-This is only relevant in cases where you expect to run many thousands of compressors concurrently,
-but with very little activity. This is *not* intended for regular web servers serving individual requests.
+This is only relevant in cases where you expect to run many thousands of compressors concurrently,
+but with very little activity. This is *not* intended for regular web servers serving individual requests.

 Because of this, the size of actual Write calls will affect output size.
@@ -344,14 +344,14 @@ A `bufio.Writer` can of course be used to control write sizes. For example, to u
 	w := bufio.NewWriterSize(gzw, 4096)
 	defer w.Flush()
-
-	// Write to 'w'
+
+	// Write to 'w'
 ```
-This will only use up to 4KB in memory when the writer is idle.
+This will only use up to 4KB in memory when the writer is idle.

-Compression is almost always worse than the fastest compression level
-and each write will allocate (a little) memory.
+Compression is almost always worse than the fastest compression level
+and each write will allocate (a little) memory.

 # Performance Update 2018

@@ -386,7 +386,7 @@ Looking at level 6, this package is 88% faster, but will output about 6% more da

 This test is for typical data files stored on a server. In this case it is a collection of Go precompiled objects. They are very compressible.

-The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but is sacrificing quite a bit of compression.
+The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but is sacrificing quite a bit of compression.

 The standard library seems suboptimal on level 3 and 4 - offering both worse compression and speed than level 6 & 7 of this package respectively.

@@ -418,13 +418,13 @@ This is mainly a test of how good the algorithms are at detecting un-compressibl

 ## Huffman only compression

-This compression library adds a special compression level, named `HuffmanOnly`, which allows near linear time compression. This is done by completely disabling matching of previous data, and only reduce the number of bits to represent each character.
+This compression library adds a special compression level, named `HuffmanOnly`, which allows near linear time compression. This is done by completely disabling matching of previous data, and only reduce the number of bits to represent each character.
 This means that often used characters, like 'e' and ' ' (space) in text use the fewest bits to represent, and rare characters like '¤' takes more bits to represent.
 For more information see [wikipedia](https://en.wikipedia.org/wiki/Huffman_coding) or this nice [video](https://youtu.be/ZdooBTdW5bM).

 Since this type of compression has much less variance, the compression speed is mostly unaffected by the input data, and is usually more than *180MB/s* for a single core.

-The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%).
+The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%).

 The linear time compression can be used as a "better than nothing" mode, where you cannot risk the encoder to slow down on some content. For comparison, the size of the "Twain" text is *233460 bytes* (+29% vs. level 1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a 30% size increase for a 4 times speedup.

diff --git a/vendor/github.com/klauspost/compress/fse/README.md b/vendor/github.com/klauspost/compress/fse/README.md
index 68b7ce95fb..ea7324da67 100644
--- a/vendor/github.com/klauspost/compress/fse/README.md
+++ b/vendor/github.com/klauspost/compress/fse/README.md
@@ -1,14 +1,14 @@
 # Finite State Entropy

 This package provides Finite State Entropy encoding and decoding.
-
-Finite State Entropy (also referenced as [tANS](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS))
+
+Finite State Entropy (also referenced as [tANS](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS))
 encoding provides a fast near-optimal symbol encoding/decoding
 for byte blocks as implemented in [zstandard](https://github.com/facebook/zstd).
 This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
 This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
-but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
+but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.

 * [Godoc documentation](https://godoc.org/github.com/klauspost/compress/fse)

@@ -18,10 +18,10 @@ but it can be used as a secondary step to compressors (like Snappy) that does no

 # Usage

-This package provides a low level interface that allows to compress single independent blocks.
+This package provides a low level interface that allows to compress single independent blocks.

-Each block is separate, and there is no built in integrity checks.
-This means that the caller should keep track of block sizes and also do checksums if needed.
+Each block is separate, and there is no built in integrity checks.
+This means that the caller should keep track of block sizes and also do checksums if needed.

 Compressing a block is done via the [`Compress`](https://godoc.org/github.com/klauspost/compress/fse#Compress) function.
 You must provide input and will receive the output and maybe an error.

@@ -37,43 +37,43 @@ These error values can be returned:

 As can be seen above there are errors that will be returned even under normal operation so it is important to handle these.

-To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/fse#Scratch) object
-that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
-object can be used for both.
+To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/fse#Scratch) object
+that can be re-used for successive calls.
+Both compression and decompression accepts a `Scratch` object, and the same
+object can be used for both.
 Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this you must set the `Out` field in the scratch to nil.
 The same buffer is used for compression and decompression output.

 Decompressing is done by calling the [`Decompress`](https://godoc.org/github.com/klauspost/compress/fse#Decompress) function.
 You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back
-your input was likely corrupted.
+your input was likely corrupted.

-It is important to note that a successful decoding does *not* mean your output matches your original input.
+It is important to note that a successful decoding does *not* mean your output matches your original input.
 There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.

 For more detailed usage, see examples in the [godoc documentation](https://godoc.org/github.com/klauspost/compress/fse#pkg-examples).

 # Performance

-A lot of factors are affecting speed. Block sizes and compressibility of the material are primary factors.
-All compression functions are currently only running on the calling goroutine so only one core will be used per block.
+A lot of factors are affecting speed. Block sizes and compressibility of the material are primary factors.
+All compression functions are currently only running on the calling goroutine so only one core will be used per block.

 The compressor is significantly faster if symbols are kept as small as possible. The highest byte value of the input
-is used to reduce some of the processing, so if all your input is above byte value 64 for instance, it may be
-beneficial to transpose all your input values down by 64.
+is used to reduce some of the processing, so if all your input is above byte value 64 for instance, it may be
+beneficial to transpose all your input values down by 64.

-With moderate block sizes around 64k speed are typically 200MB/s per core for compression and
-around 300MB/s decompression speed.
+With moderate block sizes around 64k speed are typically 200MB/s per core for compression and
+around 300MB/s decompression speed.

-The same hardware typically does Huffman (deflate) encoding at 125MB/s and decompression at 100MB/s.
+The same hardware typically does Huffman (deflate) encoding at 125MB/s and decompression at 100MB/s.

 # Plans

-At one point, more internals will be exposed to facilitate more "expert" usage of the components.
+At one point, more internals will be exposed to facilitate more "expert" usage of the components.

-A streaming interface is also likely to be implemented. Likely compatible with [FSE stream format](https://github.com/Cyan4973/FiniteStateEntropy/blob/dev/programs/fileio.c#L261).
+A streaming interface is also likely to be implemented. Likely compatible with [FSE stream format](https://github.com/Cyan4973/FiniteStateEntropy/blob/dev/programs/fileio.c#L261).

 # Contributing

-Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
-changes will likely not be accepted. If in doubt open an issue before writing the PR.
+Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
+changes will likely not be accepted. If in doubt open an issue before writing the PR.
\ No newline at end of file
diff --git a/vendor/github.com/klauspost/compress/huff0/README.md b/vendor/github.com/klauspost/compress/huff0/README.md
index 64b21698f4..8b6e5c6638 100644
--- a/vendor/github.com/klauspost/compress/huff0/README.md
+++ b/vendor/github.com/klauspost/compress/huff0/README.md
@@ -1,14 +1,14 @@
 # Huff0 entropy compression

 This package provides Huff0 encoding and decoding as used in zstd.
-
-[Huff0](https://github.com/Cyan4973/FiniteStateEntropy#new-generation-entropy-coders),
-a Huffman codec designed for modern CPU, featuring OoO (Out of Order) operations on multiple ALU
+
+[Huff0](https://github.com/Cyan4973/FiniteStateEntropy#new-generation-entropy-coders),
+a Huffman codec designed for modern CPU, featuring OoO (Out of Order) operations on multiple ALU
 (Arithmetic Logic Unit), achieving extremely fast compression and decompression speeds.

 This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
 This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
-but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
+but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.

 * [Godoc documentation](https://godoc.org/github.com/klauspost/compress/huff0)

@@ -20,12 +20,12 @@ This ensures that most functionality is well tested.

 # Usage

-This package provides a low level interface that allows to compress single independent blocks.
+This package provides a low level interface that allows to compress single independent blocks.

-Each block is separate, and there is no built in integrity checks.
-This means that the caller should keep track of block sizes and also do checksums if needed.
+Each block is separate, and there is no built in integrity checks.
+This means that the caller should keep track of block sizes and also do checksums if needed.
-Compressing a block is done via the [`Compress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress1X) and
+Compressing a block is done via the [`Compress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress1X) and
 [`Compress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress4X) functions.
 You must provide input and will receive the output and maybe an error.

@@ -42,48 +42,48 @@ These error values can be returned:

 As can be seen above some of there are errors that will be returned even under normal operation so it is important to handle these.

-To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object
-that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
-object can be used for both.
+To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object
+that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
+object can be used for both.
 Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this you must set the `Out` field in the scratch to nil.
 The same buffer is used for compression and decompression output.

-The `Scratch` object will retain state that allows to re-use previous tables for encoding and decoding.
+The `Scratch` object will retain state that allows to re-use previous tables for encoding and decoding.

 ## Tables and re-use

-Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.
+Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.
-The Scratch object allows you to set a [`ReusePolicy`](https://godoc.org/github.com/klauspost/compress/huff0#ReusePolicy)
+The Scratch object allows you to set a [`ReusePolicy`](https://godoc.org/github.com/klauspost/compress/huff0#ReusePolicy)
 that controls this behaviour. See the documentation for details. This can be altered between each block.

 Do however note that this information is *not* stored in the output block and it is up to the users of the package to
 record whether [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable) should be called,
-based on the boolean reported back from the CompressXX call.
+based on the boolean reported back from the CompressXX call.

-If you want to store the table separate from the data, you can access them as `OutData` and `OutTable` on the
+If you want to store the table separate from the data, you can access them as `OutData` and `OutTable` on the
 [`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object.

 ## Decompressing

 The first part of decoding is to initialize the decoding table through [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable).
-This will initialize the decoding tables.
-You can supply the complete block to `ReadTable` and it will return the data part of the block
-which can be given to the decompressor.
+This will initialize the decoding tables.
+You can supply the complete block to `ReadTable` and it will return the data part of the block
+which can be given to the decompressor.

-Decompressing is done by calling the [`Decompress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress1X)
+Decompressing is done by calling the [`Decompress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress1X)
 or [`Decompress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress4X) function.
 For concurrently decompressing content with a fixed table a stateless [`Decoder`](https://godoc.org/github.com/klauspost/compress/huff0#Decoder) can be requested which will remain correct as long as the scratch is unchanged.
 The capacity of the provided slice indicates the expected output size.

 You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back
-your input was likely corrupted.
+your input was likely corrupted.

-It is important to note that a successful decoding does *not* mean your output matches your original input.
+It is important to note that a successful decoding does *not* mean your output matches your original input.
 There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.

 # Contributing

-Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
+Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
 changes will likely not be accepted. If in doubt open an issue before writing the PR.
diff --git a/vendor/github.com/klauspost/compress/s2sx.mod b/vendor/github.com/klauspost/compress/s2sx.mod
index b605e2d52b..2263853fca 100644
--- a/vendor/github.com/klauspost/compress/s2sx.mod
+++ b/vendor/github.com/klauspost/compress/s2sx.mod
@@ -1,3 +1,4 @@
 module github.com/klauspost/compress

 go 1.16
+
diff --git a/vendor/github.com/klauspost/compress/zstd/README.md b/vendor/github.com/klauspost/compress/zstd/README.md
index cb45dee4f5..c8f0f16fc1 100644
--- a/vendor/github.com/klauspost/compress/zstd/README.md
+++ b/vendor/github.com/klauspost/compress/zstd/README.md
@@ -1,12 +1,12 @@
-# zstd
+# zstd

-[Zstandard](https://facebook.github.io/zstd/) is a real-time compression algorithm, providing high compression ratios.
+[Zstandard](https://facebook.github.io/zstd/) is a real-time compression algorithm, providing high compression ratios.
 It offers a very wide range of compression / speed trade-off, while being backed by a very fast decoder.

-A high performance compression algorithm is implemented. For now focused on speed.
+A high performance compression algorithm is implemented. For now focused on speed.

-This package provides [compression](#Compressor) to and [decompression](#Decompressor) of Zstandard content.
+This package provides [compression](#Compressor) to and [decompression](#Decompressor) of Zstandard content.

-This package is pure Go and without use of "unsafe".
+This package is pure Go and without use of "unsafe".

 The `zstd` package is provided as open source software using a Go standard license.

@@ -20,25 +20,25 @@ Install using `go get -u github.com/klauspost/compress`. The package is located

 ## Compressor

-### Status:
+### Status:

-STABLE - there may always be subtle bugs, a wide variety of content has been tested and the library is actively
+STABLE - there may always be subtle bugs, a wide variety of content has been tested and the library is actively
 used by several projects. This library is being [fuzz-tested](https://github.com/klauspost/compress-fuzz) for all updates.

-There may still be specific combinations of data types/size/settings that could lead to edge cases,
-so as always, testing is recommended.
+There may still be specific combinations of data types/size/settings that could lead to edge cases,
+so as always, testing is recommended.

-For now, a high speed (fastest) and medium-fast (default) compressor has been implemented.
+For now, a high speed (fastest) and medium-fast (default) compressor has been implemented.

-* The "Fastest" compression ratio is roughly equivalent to zstd level 1.
+* The "Fastest" compression ratio is roughly equivalent to zstd level 1.
 * The "Default" compression ratio is roughly equivalent to zstd level 3 (default).
* The "Better" compression ratio is roughly equivalent to zstd level 7. * The "Best" compression ratio is roughly equivalent to zstd level 11. -In terms of speed, it is typically 2x as fast as the stdlib deflate/gzip in its fastest mode. +In terms of speed, it is typically 2x as fast as the stdlib deflate/gzip in its fastest mode. The compression ratio compared to stdlib is around level 3, but usually 3x as fast. - + ### Usage An Encoder can be used for either compressing a stream via the @@ -66,37 +66,37 @@ func Compress(in io.Reader, out io.Writer) error { ``` Now you can encode by writing data to `enc`. The output will be finished writing when `Close()` is called. -Even if your encode fails, you should still call `Close()` to release any resources that may be held up. +Even if your encode fails, you should still call `Close()` to release any resources that may be held up. The above is fine for big encodes. However, whenever possible try to *reuse* the writer. -To reuse the encoder, you can use the `Reset(io.Writer)` function to change to another output. -This will allow the encoder to reuse all resources and avoid wasteful allocations. +To reuse the encoder, you can use the `Reset(io.Writer)` function to change to another output. +This will allow the encoder to reuse all resources and avoid wasteful allocations. -Currently stream encoding has 'light' concurrency, meaning up to 2 goroutines can be working on part -of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is likely to change +Currently stream encoding has 'light' concurrency, meaning up to 2 goroutines can be working on part +of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is likely to change in the future. So if you want to limit concurrency for future updates, specify the concurrency you would like. -You can specify your desired compression level using `WithEncoderLevel()` option. 
Currently only pre-defined +You can specify your desired compression level using `WithEncoderLevel()` option. Currently only pre-defined compression settings can be specified. #### Future Compatibility Guarantees This will be an evolving project. When using this package it is important to note that both the compression efficiency and speed may change. -The goal will be to keep the default efficiency at the default zstd (level 3). -However the encoding should never be assumed to remain the same, +The goal will be to keep the default efficiency at the default zstd (level 3). +However the encoding should never be assumed to remain the same, and you should not use hashes of compressed output for similarity checks. The Encoder can be assumed to produce the same output from the exact same code version. -However, the may be modes in the future that break this, -although they will not be enabled without an explicit option. +However, the may be modes in the future that break this, +although they will not be enabled without an explicit option. This encoder is not designed to (and will probably never) output the exact same bitstream as the reference encoder. Also note, that the cgo decompressor currently does not [report all errors on invalid input](https://github.com/DataDog/zstd/issues/59), -[omits error checks](https://github.com/DataDog/zstd/issues/61), [ignores checksums](https://github.com/DataDog/zstd/issues/43) +[omits error checks](https://github.com/DataDog/zstd/issues/61), [ignores checksums](https://github.com/DataDog/zstd/issues/43) and seems to ignore concatenated streams, even though [it is part of the spec](https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frames). #### Blocks @@ -109,9 +109,9 @@ This function can be called concurrently, but each call will only run on a singl Encoded blocks can be concatenated and the result will be the combined input stream. 
Data compressed with EncodeAll can be decoded with the Decoder, using either a stream or `DecodeAll`. -Especially when encoding blocks you should take special care to reuse the encoder. -This will effectively make it run without allocations after a warmup period. -To make it run completely without allocations, supply a destination buffer with space for all content. +Especially when encoding blocks you should take special care to reuse the encoder. +This will effectively make it run without allocations after a warmup period. +To make it run completely without allocations, supply a destination buffer with space for all content. ```Go import "github.com/klauspost/compress/zstd" @@ -120,17 +120,17 @@ import "github.com/klauspost/compress/zstd" // For this operation type we supply a nil Reader. var encoder, _ = zstd.NewWriter(nil) -// Compress a buffer. +// Compress a buffer. // If you have a destination buffer, the allocation in the call can also be eliminated. func Compress(src []byte) []byte { return encoder.EncodeAll(src, make([]byte, 0, len(src))) -} +} ``` -You can control the maximum number of concurrent encodes using the `WithEncoderConcurrency(n)` +You can control the maximum number of concurrent encodes using the `WithEncoderConcurrency(n)` option when creating the writer. -Using the Encoder for both a stream and individual blocks concurrently is safe. +Using the Encoder for both a stream and individual blocks concurrently is safe. ### Performance @@ -256,13 +256,13 @@ nyc-taxi-data-10M.csv gzkp 1 3325605752 922257165 16780 189.00 Staus: STABLE - there may still be subtle bugs, but a wide variety of content has been tested. This library is being continuously [fuzz-tested](https://github.com/klauspost/compress-fuzz), -kindly supplied by [fuzzit.dev](https://fuzzit.dev/). -The main purpose of the fuzz testing is to ensure that it is not possible to crash the decoder, -or run it past its limits with ANY input provided. 
- +kindly supplied by [fuzzit.dev](https://fuzzit.dev/). +The main purpose of the fuzz testing is to ensure that it is not possible to crash the decoder, +or run it past its limits with ANY input provided. + ### Usage -The package has been designed for two main usages, big streams of data and smaller in-memory buffers. +The package has been designed for two main usages, big streams of data and smaller in-memory buffers. There are two main usages of the package for these. Both of them are accessed by creating a `Decoder`. For streaming use a simple setup could look like this: @@ -276,14 +276,14 @@ func Decompress(in io.Reader, out io.Writer) error { return err } defer d.Close() - + // Copy content... _, err = io.Copy(out, d) return err } ``` -It is important to use the "Close" function when you no longer need the Reader to stop running goroutines. +It is important to use the "Close" function when you no longer need the Reader to stop running goroutines. See "Allocation-less operation" below. For decoding buffers, it could look something like this: @@ -299,13 +299,13 @@ var decoder, _ = zstd.NewReader(nil) // so it will be allocated by the decoder. func Decompress(src []byte) ([]byte, error) { return decoder.DecodeAll(src, nil) -} +} ``` -Both of these cases should provide the functionality needed. -The decoder can be used for *concurrent* decompression of multiple buffers. -It will only allow a certain number of concurrent operations to run. -To tweak that yourself use the `WithDecoderConcurrency(n)` option when creating the decoder. +Both of these cases should provide the functionality needed. +The decoder can be used for *concurrent* decompression of multiple buffers. +It will only allow a certain number of concurrent operations to run. +To tweak that yourself use the `WithDecoderConcurrency(n)` option when creating the decoder. 
### Dictionaries @@ -323,24 +323,24 @@ When registering multiple dictionaries with the same ID, the last one will be us It is possible to use dictionaries when compressing data. -To enable a dictionary use `WithEncoderDict(dict []byte)`. Here only one dictionary will be used -and it will likely be used even if it doesn't improve compression. +To enable a dictionary use `WithEncoderDict(dict []byte)`. Here only one dictionary will be used +and it will likely be used even if it doesn't improve compression. The used dictionary must be used to decompress the content. -For any real gains, the dictionary should be built with similar data. +For any real gains, the dictionary should be built with similar data. If an unsuitable dictionary is used the output may be slightly larger than using no dictionary. Use the [zstd commandline tool](https://github.com/facebook/zstd/releases) to build a dictionary from sample data. -For information see [zstd dictionary information](https://github.com/facebook/zstd#the-case-for-small-data-compression). +For information see [zstd dictionary information](https://github.com/facebook/zstd#the-case-for-small-data-compression). -For now there is a fixed startup performance penalty for compressing content with dictionaries. -This will likely be improved over time. Just be aware to test performance when implementing. +For now there is a fixed startup performance penalty for compressing content with dictionaries. +This will likely be improved over time. Just be aware to test performance when implementing. ### Allocation-less operation -The decoder has been designed to operate without allocations after a warmup. +The decoder has been designed to operate without allocations after a warmup. -This means that you should *store* the decoder for best performance. +This means that you should *store* the decoder for best performance. To re-use a stream decoder, use the `Reset(r io.Reader) error` to switch to another stream. 
A decoder can safely be re-used even if the previous stream failed. @@ -350,7 +350,7 @@ So you *must* use this if you will no longer need the Reader. For decompressing smaller buffers a single decoder can be used. When decoding buffers, you can supply a destination slice with length 0 and your expected capacity. -In this case no unneeded allocations should be made. +In this case no unneeded allocations should be made. ### Concurrency @@ -368,14 +368,14 @@ So effectively this also means the decoder will "read ahead" and prepare data to Since "blocks" are quite dependent on the output of the previous block stream decoding will only have limited concurrency. In practice this means that concurrency is often limited to utilizing about 2 cores effectively. - - + + ### Benchmarks These are some examples of performance compared to [datadog cgo library](https://github.com/DataDog/zstd). -The first two are streaming decodes and the last are smaller inputs. - +The first two are streaming decodes and the last are smaller inputs. + ``` BenchmarkDecoderSilesia-8 3 385000067 ns/op 550.51 MB/s 5498 B/op 8 allocs/op BenchmarkDecoderSilesiaCgo-8 6 197666567 ns/op 1072.25 MB/s 270672 B/op 8 allocs/op @@ -422,18 +422,18 @@ While this isn't widely supported it can be useful for internal files. To support the compression and decompression of these files you must register a compressor and decompressor. It is highly recommended registering the (de)compressors on individual zip Reader/Writer and NOT -use the global registration functions. The main reason for this is that 2 registrations from +use the global registration functions. The main reason for this is that 2 registrations from different packages will result in a panic. It is a good idea to only have a single compressor and decompressor, since they can be used for multiple zip files concurrently, and using a single instance will allow reusing some resources. 
-See [this example](https://pkg.go.dev/github.com/klauspost/compress/zstd#example-ZipCompressor) for +See [this example](https://pkg.go.dev/github.com/klauspost/compress/zstd#example-ZipCompressor) for how to compress and decompress files inside zip archives. # Contributions -Contributions are always welcome. +Contributions are always welcome. For new features/fixes, remember to add tests and for performance enhancements include benchmarks. For general feedback and experience reports, feel free to open an issue or write me on [Twitter](https://twitter.com/sh0dan). diff --git a/vendor/github.com/montanaflynn/stats/.gitignore b/vendor/github.com/montanaflynn/stats/.gitignore index 5f6289b211..96b11286e5 100644 --- a/vendor/github.com/montanaflynn/stats/.gitignore +++ b/vendor/github.com/montanaflynn/stats/.gitignore @@ -1,2 +1,2 @@ coverage.out -.directory +.directory \ No newline at end of file diff --git a/vendor/github.com/montanaflynn/stats/CHANGELOG.md b/vendor/github.com/montanaflynn/stats/CHANGELOG.md index f1fe51467b..532f6ed3fd 100644 --- a/vendor/github.com/montanaflynn/stats/CHANGELOG.md +++ b/vendor/github.com/montanaflynn/stats/CHANGELOG.md @@ -54,11 +54,11 @@ Several functions were renamed in this release. 
They will still function but may - Add Nearest Rank method of calculating percentiles - Add errors for all functions - Add sample -- Add Linear, Exponential and Logarithmic Regression -- Add sample and population variance and deviation -- Add Percentile and Float64ToInt -- Add Round -- Add Standard deviation -- Add Sum -- Add Min and Ma- x -- Add Mean, Median and Mode +- Add Linear, Exponential and Logarithmic Regression +- Add sample and population variance and deviation +- Add Percentile and Float64ToInt +- Add Round +- Add Standard deviation +- Add Sum +- Add Min and Ma- x +- Add Mean, Median and Mode diff --git a/vendor/github.com/montanaflynn/stats/Makefile b/vendor/github.com/montanaflynn/stats/Makefile index 89236b6973..87844f485d 100644 --- a/vendor/github.com/montanaflynn/stats/Makefile +++ b/vendor/github.com/montanaflynn/stats/Makefile @@ -6,12 +6,12 @@ doc: webdoc: godoc -http=:44444 -format: +format: go fmt test: - go test -race - + go test -race + check: format test benchmark: @@ -24,6 +24,6 @@ coverage: lint: format go get github.com/alecthomas/gometalinter gometalinter --install - gometalinter + gometalinter default: lint test diff --git a/vendor/github.com/youmark/pkcs8/LICENSE b/vendor/github.com/youmark/pkcs8/LICENSE index b2fdfd5c8b..c939f44810 100644 --- a/vendor/github.com/youmark/pkcs8/LICENSE +++ b/vendor/github.com/youmark/pkcs8/LICENSE @@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. +SOFTWARE. 
\ No newline at end of file diff --git a/vendor/github.com/youmark/pkcs8/README.md b/vendor/github.com/youmark/pkcs8/README.md index 13b85f91bb..f2167dbfe7 100644 --- a/vendor/github.com/youmark/pkcs8/README.md +++ b/vendor/github.com/youmark/pkcs8/README.md @@ -18,3 +18,4 @@ This package depends on golang.org/x/crypto/pbkdf2 package. Use the following co ```text go get golang.org/x/crypto/pbkdf2 ``` +