Very impressed #79
Replies: 6 comments
-
Plug-in support disappeared from the browsers: yes, because cuminas.jp dropped any plugin support a long time ago. The plugin for IE 8 still worked, but IE 8 was deprecated. One day (about 3 years ago), the Chrome plugin stopped working, and cuminas.jp took no action. So, JS is not fast, and everybody knows it. It's slow, but it works.
-
Thank you. I have read the source code of DjVuLibre, so I know who you are.

The acceptable user experience is achieved through the use of Web Workers, caching of pages, and conversion of raw pixels into PNG files. All operations on a djvu file run in a background thread. In the continuous scroll mode, all pages are converted into PNG; it takes 100-300 ms more, but a PNG is usually no bigger than 0.5 MB, while the raw pixels take about 30 MB. A PNG can also be scaled smoothly and quickly by the browser. The viewer then caches and precaches up to 30 pages in the continuous scroll mode, and 3 pages in the single page scroll mode, and tries to update them before they are shown to the user. So if a user reads pages sequentially, everything works well enough.

Browsers dropped support for all native plug-ins (the interface was called NPAPI). Plug-ins were able to extend the browser itself, e.g. actually render tags referencing djvu files as if those tags were natively supported, while extensions are just hidden persistent web pages with some additional privileges and APIs. Now there are only extensions. The DjVu.js extension looks for such tags and replaces them with the viewer dynamically; it mimics the behavior of the plug-in, but it's just standard DOM manipulation.

The only way right now to use C++ in the browser is to install a native app, install an extension, and then make the extension communicate with the native app: e.g. the extension could send a document to the app, the app renders it quickly and sends images back to the extension. Another option is to compile DjVuLibre into WebAssembly. However, in my experience there is no guarantee that it will work significantly faster than JS, and some JS layer will still be required. Once WebAssembly supports multithreading, it may become more viable.
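To make the "raw pixels to PNG in a worker" step concrete, here is a minimal sketch of the pattern, assuming an OffscreenCanvas-capable browser. It is not the actual DjVu.js code: `decodePageToRGBA()` is a hypothetical stand-in for the real decoder.

```js
// worker.js — sketch of "decode in a worker, ship a small PNG back to the main thread".
// decodePageToRGBA() is a placeholder, not part of DjVu.js's API.
function decodePageToRGBA(pageNumber, width, height) {
    const rgba = new Uint8ClampedArray(width * height * 4);
    rgba.fill(255); // blank white page as a stand-in for real decoding
    return rgba;
}

self.onmessage = async (e) => {
    const { pageNumber, width, height } = e.data;

    // Raw RGBA pixels for a page can easily reach tens of MB...
    const rgba = decodePageToRGBA(pageNumber, width, height);

    // ...so re-encode them as PNG without ever touching the main thread.
    const canvas = new OffscreenCanvas(width, height);
    canvas.getContext('2d').putImageData(new ImageData(rgba, width, height), 0, 0);
    const pngBlob = await canvas.convertToBlob({ type: 'image/png' });

    // The main thread receives a small blob it can display via URL.createObjectURL()
    // and let the browser scale smoothly.
    self.postMessage({ pageNumber, pngBlob });
};
```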
-
And it's worth mentioning that 3 years ago (October 2018) people from Cuminas contacted me and said that they were concerned by the fact that the native plug-in was no longer supported by the major browsers. They thought about it and discovered my project. The only feature it lacked, in their opinion, was support for indirect djvu files. At that time I had never worked with indirect (multi-file) djvu documents and thought they were not used at all. People from Cuminas provided me with their editor, which can convert any bundled djvu into an indirect one, and offered a donation. I managed to implement initial support for indirect djvu files within 3 days, and they paid for it. Actually, I would have implemented it for free too (though probably not very fast, as I usually do), but the donation, the first and the last :), was a pleasant surprise. Since then I have improved the support for indirect djvu files and added some major features like the continuous scroll mode. But the initial support was their contribution. And then it turned out that people do use indirect djvu, and this type of document isn't too rare. Thus, Cuminas contributed to this project, and did it for the public benefit.
-
I think the feeling that WASM is the same speed as JS comes from people who have tried both with the DOM, where the efficiency or inefficiency of any particular language is unnoticeable because DOM operations take most of the time. Why would so many JS frameworks using shadow DOM techniques have appeared if the DOM itself hardly needed to be optimized? WASM is a stack machine with all arguments fitting into static 8-64 bit types (<https://github.com/WebAssembly/design/blob/main/Semantics.md>), with no virtual table overhead and no dynamic-language overhead. If that's true, then I think compression techniques such as djvu (which employs many of them), working at the bit level, could be faster in WASM (@leonbottou can say if it's false/true/maybe true). It's just my guess; I did not measure it to prove or disprove it.
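For what it's worth, the guess is cheap to test: time the same bit-level inner loop once as plain JS and once as a WASM export compiled from equivalent C. A rough harness might look like the sketch below; `decode.wasm` and its `countSetBits(ptr, len)` export are hypothetical, standing in for any bit-twiddling kernel compiled with emscripten or similar.

```js
// Hypothetical micro-benchmark: the same bit-level kernel in JS vs. a WASM export.
function countSetBitsJS(bytes) {
    let n = 0;
    for (let i = 0; i < bytes.length; i++) {
        let b = bytes[i];
        while (b) { n += b & 1; b >>>= 1; } // the kind of bit-level work a decoder does constantly
    }
    return n;
}

async function benchmark() {
    // 4 MB of deterministic pseudo-random input
    const data = new Uint8Array(1 << 22);
    for (let i = 0; i < data.length; i++) data[i] = Math.imul(i, 2654435761) >>> 24;

    const t0 = performance.now();
    const jsResult = countSetBitsJS(data);
    console.log(`JS:   ${jsResult} bits set in ${(performance.now() - t0).toFixed(1)} ms`);

    // The WASM module is assumed to export its linear memory (large enough for the input)
    // and countSetBits(ptr, len), compiled from an equivalent C function.
    const { instance } = await WebAssembly.instantiateStreaming(fetch('decode.wasm'));
    const { memory, countSetBits } = instance.exports;
    // Offset 0 is used only for simplicity; a real harness would allocate memory properly.
    new Uint8Array(memory.buffer).set(data, 0);

    const t1 = performance.now();
    const wasmResult = countSetBits(0, data.length);
    console.log(`WASM: ${wasmResult} bits set in ${(performance.now() - t1).toFixed(1)} ms`);
}

benchmark();
```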
-
> Another option is to compile DjVuLibre into WebAssembly....

Somebody tried to compile djvulibre with emscripten and the result was very underwhelming. Much slower than this JS reimplementation.

> The acceptable user experience is achieved through the use of Web Workers, caching of pages, and conversion of raw pixels into PNG files. All operations on a djvu file run in a background thread. In the continuous scroll mode, all pages are converted into PNG; it takes 100-300 ms more.

This is also the way djview works. Machines were a lot slower and we had to use the whole gamut of predecoding, partial rendering, etc. This is part of the problem, in fact. All these optimizations have to be reconsidered in the new environment, that is, faster machines but very constrained access to the fast hardware. I find these capricious constraints very stifling. This is why I prefer the option of using an external native app. But the JS code still has to handle all the GUI and all the passing of data back and forth. In the end, it is not clear that this is faster.
- L.
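For reference, the "external native app" option corresponds to what browsers now call native messaging: the extension talks to a locally installed host program over stdio. Below is a minimal sketch of the extension side only, under stated assumptions: the host name `com.example.djvu_renderer` and the message schema are hypothetical, the real host would wrap DjVuLibre and be registered via a native messaging manifest, and the extension needs the "nativeMessaging" permission.

```js
// Extension-side sketch of the external-native-app route via native messaging.
// 'com.example.djvu_renderer' is a hypothetical host name.
const port = chrome.runtime.connectNative('com.example.djvu_renderer');

port.onMessage.addListener((msg) => {
    // The host sends back an already-rendered page, e.g. as base64-encoded PNG,
    // because native messaging payloads are JSON (and size-limited), not raw binary.
    if (msg.type === 'page') {
        const img = document.createElement('img');
        img.src = 'data:image/png;base64,' + msg.png;
        document.body.appendChild(img);
    }
});

port.onDisconnect.addListener(() => {
    console.warn('Native host missing or exited:', chrome.runtime.lastError?.message);
});

// Ask the native app to render one page of a document it can reach on disk.
port.postMessage({ type: 'render', path: '/tmp/book.djvu', page: 1, dpi: 150 });
```

Every rendered page still crosses the stdio/JSON boundary as text, which is exactly the back-and-forth cost that makes it unclear whether this route ends up faster in practice.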
-
If it was a public attempt, it would be interesting to see the measurement results: what they tried and what they got.
-
I am one of the original djvu authors. I was very sad when plug-in support disappeared from the browsers because I didn't think a JavaScript solution would be fast enough. Your project has proven me wrong, and this is very good news.