Travis currently checks whether update.py was run on the same day as the push. That doesn't actually tell us much: nothing in the check confirms the derivatives were generated properly, only that we expect they were.
Some alternative/additional checks we could do:
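One alternative along these lines would be to regenerate the derivatives in CI and compare them byte-for-byte against the committed copies. A minimal sketch, assuming the derivatives are plain files on disk (the function name and the idea of a recorded fingerprint are illustrative, not anything the repo does today):

```python
import hashlib
from pathlib import Path


def fingerprint(paths):
    """Hash a set of derivative files in a stable order, so that a fresh
    CI build can be compared byte-for-byte against the committed copies.
    (Hypothetical helper -- real derivative paths will differ.)"""
    digest = hashlib.sha256()
    for p in sorted(Path(p) for p in paths):
        digest.update(p.read_bytes())
    return digest.hexdigest()
```

CI would run update.py, recompute the fingerprint over the regenerated files, and fail the build if it differs from the fingerprint of the committed derivatives.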
In addition to basic unit testing, it would be super helpful if editors could suggest specific folios/entries for me to use as individual test cases, especially ones that are edge cases.
We cannot (and should not) generate the entire manuscript during CI, but having 5-10 good test cases would greatly improve our confidence that the output is correct.
To start with, I'm using a copy of folio 170r.
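If we go that route, each hand-picked folio could become a golden-file test: generate the derivative for, say, 170r, and compare it against a committed reference copy. A rough sketch; the helper name and file layout are my inventions, not the project's actual API:

```python
from pathlib import Path


def assert_matches_golden(generated: str, golden: Path) -> None:
    """Compare freshly generated output against a committed 'golden'
    reference file, failing with a readable message on any drift.
    (Hypothetical helper for illustration only.)"""
    expected = golden.read_text(encoding="utf-8")
    assert generated == expected, (
        f"{golden.name}: generated derivative no longer matches the "
        "committed reference; inspect the diff before updating the golden file"
    )
```

A test for folio 170r would then just call this with the freshly generated derivative and something like tests/golden/170r.html as the reference.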
Actually we shouldn't do this. At least not for unit testing. For now we will focus on ensuring that each component works according to specification, which is not guaranteed by hand-picking a few interesting entries.