Hey .. Awesome work developing this project; I found it very useful and it saved me some work.. Thanks.. :)
Some background to this PR...
I've been searching around for a tool that would let me transform my personal collection of Evernote notes into a format that is easier to search and potentially easier to import into future services.
I then ran into problems processing my large export (~5GB) with the existing source, which uses Python's built-in XML parser; it was unable to finish without an exception breaking the process.
My first attempt was to adapt the code to the more robust lxml package, with its support for huge input and with "recover" enabled, but even though it did better, it also failed to process the whole file. Even the memory-efficient etree.iterparse() unfortunately ran into trouble.
Since I had no luck finding any other library that could successfully parse this enormous file, I instead chose to build a "hugexmlparser" module. It parses the file as a byte-level stream using yield, lets you set a maximum size to cater for potentially malformed or undesirably large attachments in the export, and is meant to keep going past the exceptions that would otherwise break the run. In some cases the parser finds malformed XML inside a note; there it tries to save as much as possible by escaping the content (to be dealt with at a later stage, which is better than nothing), and if an end tag is missing before a new (malformed?) element, it adds the missing end tag once it encounters the next start tag.
The code for the recovery process is a bit rough and there is certainly room for refactoring, but at the moment it seems to achieve what I wanted.
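To make the idea concrete, here is a minimal sketch of the kind of generator-based, byte-level streaming this relies on. It is not the hugexmlparser code from this PR: the function name `iter_notes`, the default size limit, and the assumption that notes are delimited by literal `<note>`/`</note>` tags are simplifications for illustration, and the escaping/end-tag recovery described above is left out.

```python
from typing import Iterator, Optional


def iter_notes(path: str,
               max_note_size: int = 50 * 1024 * 1024,
               chunk_size: int = 1024 * 1024) -> Iterator[bytes]:
    """Yield raw <note>...</note> blocks from a huge ENEX file, one at a time.

    The file is read as a plain byte stream so the whole document never has
    to fit in memory, and notes growing past max_note_size are dropped so a
    single oversized or runaway attachment cannot break the run.
    """
    start_tag, end_tag = b"<note>", b"</note>"
    buffer = b""
    current: Optional[bytearray] = None

    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buffer += chunk
            while True:
                if current is None:
                    start = buffer.find(start_tag)
                    if start == -1:
                        # keep a tail in case a start tag straddles the chunk boundary
                        buffer = buffer[-(len(start_tag) - 1):]
                        break
                    buffer = buffer[start:]
                    current = bytearray()
                end = buffer.find(end_tag)
                if end == -1:
                    # keep a tail in case the closing tag straddles the chunk boundary
                    keep = len(end_tag) - 1
                    current += buffer[:-keep]
                    buffer = buffer[-keep:]
                    if len(current) > max_note_size:
                        current = None  # abandon an oversized note
                    break
                current += buffer[: end + len(end_tag)]
                buffer = buffer[end + len(end_tag):]
                if len(current) <= max_note_size:
                    yield bytes(current)
                current = None
```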
With the above in place, the extracted notes are passed to a slightly modified version of save_note_recovery(), making sure the existing behaviour still works.
This is also exposed as a new recover-enex command in click, keeping the original options; a rough sketch follows below.
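For reference, a minimal sketch of how such a recover-enex command could be wired up with click. The option name, the default size, and the summary output are placeholders for illustration and do not mirror the actual options kept from the original command; `iter_notes` is the generator sketched above.

```python
import click

# iter_notes is the generator sketched above, not the PR's actual API
from hugexmlparser import iter_notes


@click.command(name="recover-enex")
@click.argument("enex_file", type=click.Path(exists=True, dir_okay=False))
@click.option("--max-note-size", default=50 * 1024 * 1024, show_default=True,
              help="Skip notes larger than this many bytes.")
def recover_enex(enex_file: str, max_note_size: int) -> None:
    """Recover notes from a huge or partially malformed ENEX export."""
    count = 0
    for _raw_note in iter_notes(enex_file, max_note_size=max_note_size):
        # a real command would hand each note on to something like
        # the adapted save_note_recovery() mentioned above
        count += 1
    click.echo(f"Recovered {count} notes from {enex_file}")


if __name__ == "__main__":
    recover_enex()
```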
A couple of new tests were added as well to exercise this command.
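As a rough idea of what such a test can look like with click's CliRunner (the ENEX content, import path, and assertions below are made up for illustration and build on the sketches above, not on the PR's actual tests):

```python
from click.testing import CliRunner

# recover_enex is the command sketched above; the import path is hypothetical
from recover_cli import recover_enex


def test_recover_enex_counts_notes(tmp_path):
    # a tiny, well-formed stand-in for a real ENEX export
    enex = tmp_path / "export.enex"
    enex.write_bytes(
        b"<?xml version='1.0'?><en-export>"
        b"<note><title>one</title></note>"
        b"<note><title>two</title></note>"
        b"</en-export>"
    )

    runner = CliRunner()
    result = runner.invoke(recover_enex, [str(enex)])

    assert result.exit_code == 0
    assert "Recovered 2 notes" in result.output
```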
This currently works for me, but I thought I might share it as a PR in case you find use for it yourself, or it turns out useful to others finding this repository.
As a second step .. when time allows, it would be nice to also be able to easily export from SQLite to formatted HTML/MD with attachments saved... but that might perhaps be better as a separate project ... or if you or someone else has something that could be shared to save some trouble, I would be interested ;-)