
Add zstd compression #866

Draft · wants to merge 1 commit into master

Conversation

robUx4 (Contributor) commented Oct 24, 2024

Based on #423 but for the v5 document.

Keeping as draft because the use of a dictionary may not be usable in the wild (it may require state, which would make seeking impossible).

Also, as noted in #423 (comment), there is a header to the Zstandard stream that we should strip. In that case we should explain exactly what needs to be stripped.
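
For reference, a minimal sketch of what such stripping would have to recognize, assuming the header in question is the standard Zstandard frame prefix (RFC 8878); whether the spec should strip only the 4-byte magic or more of the frame header is exactly the open question here:

```c
#include <stddef.h>
#include <stdint.h>

/* Every Zstandard frame starts with a fixed 4-byte magic number,
 * followed by a variable-size frame header (RFC 8878). The magic is
 * the most obvious candidate for stripping. */
static int is_zstd_frame(const uint8_t *buf, size_t len)
{
    /* Magic number 0xFD2FB528, stored little-endian on the wire. */
    return len >= 4 &&
           buf[0] == 0x28 && buf[1] == 0xB5 &&
           buf[2] == 0x2F && buf[3] == 0xFD;
}
```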

Based on ietf-wg-cellar#423.

Co-authored-by: rcombs <rcombs@rcombs.me>
robUx4 (Contributor, Author) commented Oct 24, 2024

poke @rcombs

rcombs (Contributor) commented Oct 25, 2024

I would definitely want to spec out dictionary support; it should be entirely doable, though the implementation would be a little tricky on the encode side. Basically:

  • Take all the packets you're going to mux, in Matroska format (i.e. for subtitles, remove timestamp headers)
  • Pass them to ZDICT_optimizeTrainFromBuffer_cover or the like
  • Use the result as the dictionary, both in the matroska header and when compressing packets
  • Compress each packet independently, passing in the now-static dictionary

And then at demux-time, the packets are decompressed independently (same as with zlib today), with the dictionary from the header passed in as context.

In principle, this technique could be applied to zlib as well, and would probably yield similar gains; there just isn't a good standard library for producing zlib dictionaries (whereas the zstd library includes routines for it).
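
A minimal sketch of that flow against libzstd's stable API (using ZDICT_trainFromBuffer rather than the experimental ZDICT_optimizeTrainFromBuffer_cover; where exactly the dictionary blob lives in the Matroska header is this PR's open design question, so that part is an assumption):

```c
#include <zstd.h>
#include <zdict.h>
#include <stdlib.h>

/* Train a dictionary from all packets to be muxed. `samples` is the
 * packets concatenated back to back, `sizes[i]` the length of packet i.
 * The returned blob would be stored in the track header (presumably
 * ContentCompSettings; an assumption, since that mapping is exactly
 * what this PR has to define). */
static void *train_dictionary(const void *samples, const size_t *sizes,
                              unsigned nb_samples, size_t *dict_size)
{
    size_t capacity = 112640; /* ~110 KiB, zstd's usual dictionary budget */
    void *dict = malloc(capacity);
    if (!dict)
        return NULL;
    size_t n = ZDICT_trainFromBuffer(dict, capacity, samples, sizes, nb_samples);
    if (ZDICT_isError(n)) {
        free(dict);
        return NULL;
    }
    *dict_size = n;
    return dict;
}

/* Mux side: each packet is compressed independently against the
 * now-static dictionary, digested once into a ZSTD_CDict. */
static size_t compress_packet(ZSTD_CCtx *cctx, const ZSTD_CDict *cdict,
                              void *dst, size_t dst_cap,
                              const void *pkt, size_t pkt_size)
{
    return ZSTD_compress_usingCDict(cctx, dst, dst_cap, pkt, pkt_size, cdict);
}

/* Demux side: decompress each packet independently, with the dictionary
 * from the track header passed in as context. */
static size_t decompress_packet(ZSTD_DCtx *dctx, void *dst, size_t dst_cap,
                                const void *src, size_t src_size,
                                const void *dict, size_t dict_size)
{
    return ZSTD_decompress_usingDict(dctx, dst, dst_cap, src, src_size,
                                     dict, dict_size);
}
```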

robUx4 (Contributor, Author) commented Oct 25, 2024

We should probably prototype this on real-world files to see how it pans out. It could be done in libavformat, mkvtoolnix, or mkclean. (I probably won't have time in the coming days.)

My main concern is which parts could/should be stripped from the zstd bitstream and which parts need to be injected back when decoding. All of this should be properly documented.

rcombs (Contributor) commented Oct 25, 2024

Definitely agreed. I think it'd be tricky to do in lavf (which isn't really set up to do a 2-pass mux), so maybe mkvtoolnix would make more sense? I'm not familiar with that codebase, though.

Alternatively, I could build something in lavf that just generates the dictionary, and then separately add support for reading a pre-existing dictionary from disk? The latter is probably useful regardless; it'd be pretty straightforward to take a large sample of subtitle files and train a dictionary from them, which would likely outperform the default zstd/zlib ones (though not to the same extent as doing it per-mux).
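
For the pre-existing-dictionary path, the loading side would be straightforward. A sketch, where the file I/O helper is illustrative and only the ZSTD_* calls are real libzstd API (the zstd CLI can already produce such a dictionary file offline via its --train mode):

```c
#include <zstd.h>
#include <stdio.h>
#include <stdlib.h>

/* Load a pre-trained dictionary blob from disk and digest it once
 * for reuse across all packets. Error handling kept minimal. */
static ZSTD_CDict *load_cdict(const char *path, int level)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    void *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) {
        fclose(f);
        free(buf);
        return NULL;
    }
    fclose(f);
    ZSTD_CDict *cdict = ZSTD_createCDict(buf, (size_t)size, level);
    free(buf); /* ZSTD_createCDict keeps its own copy of the dictionary */
    return cdict;
}
```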

JeromeMartinez (Contributor) commented

Take all the packets you're going to mux,

There must be a fallback for the case where you don't have any packets before you start to mux (real-time streaming).

rcombs (Contributor) commented Oct 25, 2024

For streaming, you could either not use a dictionary, or use a pre-generated one.

mbunkus (Contributor) commented Oct 25, 2024

I think it'd be tricky to do in lavf (which isn't really set up to do a 2-pass mux), so maybe mkvtoolnix would make more sense

In general MKVToolNix (rather: mkvmerge) isn't set up for 2-pass muxes either; lots of global state exists. What could be done for testing/evaluation purposes is what was done back in the day & what you described above: modifying mkvmerge to do some kind of manual 2-pass process, with the first pass producing a file on disk that contains the statistics/dictionary, and the second pass reading said file. I would not want to have this type of manual invocation & external temporary file as a production mechanism, though.

What I thought about, & what might be slightly more doable, would be to buffer the first 5 MB of content that would be written out to disk, build the dictionary over the accumulated data, & then use that dictionary for both the buffered data & all subsequent frames. That would be quite a bit easier than implementing a full 2-pass system within a single invocation of mkvmerge, for sure.
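
A rough sketch of that buffered single-pass approach; all names and the packet bookkeeping are hypothetical, only the ZDICT_/ZSTD_ calls are real libzstd API:

```c
#include <zstd.h>
#include <zdict.h>
#include <stdlib.h>
#include <string.h>

#define TRAIN_THRESHOLD (5 * 1024 * 1024)  /* the 5 MB mentioned above */
#define DICT_CAPACITY   112640             /* ~110 KiB dictionary budget */

/* Zero-initialize before first use. */
struct dict_trainer {
    char       *samples;  /* buffered packets, concatenated */
    size_t     *sizes;    /* length of each buffered packet */
    size_t      total;
    unsigned    count;
    ZSTD_CDict *cdict;    /* NULL until the threshold is reached */
};

/* Feed one packet. Returns 0 while still buffering, 1 once the dictionary
 * has been trained (the caller then compresses the backlog and everything
 * after it with t->cdict), -1 on error. */
static int trainer_feed(struct dict_trainer *t, const void *pkt, size_t len)
{
    t->samples = realloc(t->samples, t->total + len);
    t->sizes   = realloc(t->sizes, (t->count + 1) * sizeof(*t->sizes));
    if (!t->samples || !t->sizes)
        return -1;
    memcpy(t->samples + t->total, pkt, len);
    t->sizes[t->count++] = len;
    t->total += len;
    if (t->total < TRAIN_THRESHOLD)
        return 0;  /* keep buffering; nothing has been written yet */

    void *dict = malloc(DICT_CAPACITY);
    if (!dict)
        return -1;
    size_t n = ZDICT_trainFromBuffer(dict, DICT_CAPACITY, t->samples,
                                     t->sizes, t->count);
    if (ZDICT_isError(n)) {
        free(dict);
        return -1;
    }
    t->cdict = ZSTD_createCDict(dict, n, ZSTD_CLEVEL_DEFAULT);
    free(dict);
    return t->cdict ? 1 : -1;
}
```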

rcombs (Contributor) commented Oct 25, 2024

Hmmm, the whole input subtitle file has to be loaded into memory in order to correctly implement e.g. ASS ReadOrder, doesn't it? Could a dictionary be trained off of that?

mbunkus (Contributor) commented Oct 25, 2024

In the case of mkvmerge that would only work if the subtitle file is a separate file, but not if it's a track multiplexed into one of the more complex containers (usually Matroska, MP4, MPEG-TS). In that case mkvmerge cannot simply read only the data belonging to the subtitle track; instead it would have to read the whole source file, making it a very costly operation.

robUx4 (Contributor, Author) commented Oct 26, 2024

I'll have a look. mkclean has a 2-pass --remux mode to handle as much header stripping as possible per track. It could be used to feed a dictionary as well. I can't promise when it will be done, though. In the end it's equivalent to adding full support for zstd.

rcombs (Contributor) commented Oct 26, 2024

I could imagine a world where lavf and mkvmerge add support for using a passed-in dictionary, and we separately make a tool that takes one or more input files and creates an appropriate dictionary (that could be done as a muxer in lavf, for instance). Then the user could either run the dictionary-generation tool as a pre-mux step, or generate a generic dictionary from a corpus of representative input files (e.g. for streaming).
