While readability has a reasonable test suite, it pales in comparison to Readability.js's test suite with its 124 test cases.
With the help of a small Elixir program, we're going to discover the main differences in behavior between us and them. This issue will serve as a meta-issue to keep track of all bug fixes and reasonable improvement opportunities.
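A minimal sketch of what such a comparison program could look like: it walks a directory laid out like Readability.js's `test-pages` (one folder per case with a `source.html` and an `expected.html` — the exact file names and layout here are assumptions), runs an extraction function over each source, and reports the cases that diverge. `extract_fun` is a placeholder for the real readability pipeline, so the sketch stays dependency-free.

```elixir
defmodule SuiteDiff do
  # Walk a Readability.js-style fixture directory and return the names of
  # the test cases where our extractor's output differs from the expected
  # output. `extract_fun` stands in for the actual readability pipeline.
  def run(test_pages_dir, extract_fun) do
    test_pages_dir
    |> File.ls!()
    |> Enum.sort()
    |> Enum.map(fn case_name ->
      source   = File.read!(Path.join([test_pages_dir, case_name, "source.html"]))
      expected = File.read!(Path.join([test_pages_dir, case_name, "expected.html"]))
      # Compare trimmed output; whitespace differences are rarely interesting.
      {case_name, String.trim(extract_fun.(source)) == String.trim(expected)}
    end)
    |> Enum.reject(fn {_name, ok?} -> ok? end)
    |> Enum.map(fn {name, _ok?} -> name end)
  end
end
```

In practice `extract_fun` would be something like `&(&1 |> Readability.article() |> Readability.readable_html())` (hedged — check the library's current API), and the returned list of failing case names is the worklist for this issue.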
The best approach would be to "steal" the test suite from Readability.js and try to make it pass with the Elixir version 🤔 It would require quite a lot of changes to the underlying implementation, but should be possible.
I agree - ideally, we would just use their test suite unchanged (or programmatically modified) as part of readability's test suite.
There would be a few challenges, though: for example, they wrap their results in `<div id="readability-page-1" class="page">`; they keep the full page title, which the current readability logic removes from the page; and `h1` in the input sometimes becomes `h2` in the output.
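The "programmatically modified" route could start with a normalization pass over the Readability.js expected outputs. A sketch of the two transformations named above — stripping the wrapper `div` and demoting `h1` to `h2` — using naive regex/string replacement (a simplistic illustration, not robust HTML handling; a real version would parse with something like Floki):

```elixir
defmodule FixtureNormalizer do
  # Strip the <div id="readability-page-1" class="page"> wrapper that
  # Readability.js adds around every result, including its closing tag.
  def strip_wrapper(html) do
    html
    |> String.replace(~s(<div id="readability-page-1" class="page">), "")
    |> String.replace(~r{</div>\s*$}, "")
    |> String.trim()
  end

  # Demote h1 headings to h2, mirroring what Readability.js sometimes
  # does to the input's h1 in its output.
  def demote_h1(html) do
    html
    |> String.replace("<h1", "<h2")
    |> String.replace("</h1>", "</h2>")
  end

  def normalize(html), do: html |> strip_wrapper() |> demote_h1()
end
```

With a pass like this applied to each `expected.html`, the remaining diffs against our output should mostly be genuine behavioral differences rather than formatting noise.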
My current approach was to use it as inspiration to find new bugs and edge cases, and then manually add (possibly adjusted) tests to improve our own test suite.