Fix test failures and warnings from pytest within test suite #317

Merged · 17 commits · Oct 28, 2020

Conversation

brocksam (Contributor)

This PR addresses the test failures and warnings raised when running the test suite with pytest. The included changes are as follows:

  • Issue pytest failures #307 previously reported the failure of tests/test_ode.py::test_initial_parameters and attributed it to a too-tight absolute tolerance on the numerical result of the fit. This test now passes after loosening the acceptable absolute tolerance from 1e-8 to 5e-7.
  • DeprecationWarnings were being raised in a number of places, primarily from the now-deprecated auto-generation of names for Parameters and Variables, and from not using the newly-preferred connectivity_mappings approach when instantiating CallableNumericalModels. The test suite has been refactored to follow the new best-practice approach, or to wrap the deprecated call in a pytest.warns context manager where that makes more sense for the test (see the sketch after this list).
  • Small refactors to certain tests (specifically what used to be tests/test_general::test_model_callable and tests/test_minimize.py::test_minimize) where "TODO" comments suggested that splitting them into their constituent parts would make sense.
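
A minimal sketch of the first two patterns, with hypothetical function names and values that only illustrate the approach and are not taken from the PR diff:

import warnings

import pytest


def deprecated_autoname():
    # Stand-in for a code path that still emits the DeprecationWarning,
    # e.g. auto-generated Parameter/Variable names.
    warnings.warn("auto-generated names are deprecated", DeprecationWarning)
    return 42


def test_deprecated_path_is_flagged():
    # Where exercising the deprecated call is the point of the test, wrap it
    # in pytest.warns so the DeprecationWarning is asserted on, not printed.
    with pytest.warns(DeprecationWarning):
        result = deprecated_autoname()
    assert result == 42


def test_loosened_tolerance():
    # Mirrors the style of the tolerance change: accept a fit result to
    # within an absolute tolerance of 5e-7 instead of 1e-8.
    fitted_value = 1.00000042  # hypothetical fit result
    assert fitted_value == pytest.approx(1.0, abs=5e-7)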

@brocksam brocksam marked this pull request as draft September 23, 2020 11:43
@brocksam brocksam marked this pull request as ready for review September 23, 2020 11:52
@brocksam brocksam mentioned this pull request Sep 23, 2020
@pckroon pckroon left a comment (Collaborator)

This is fantastic, thanks so much!
You currently tagged test_general::test_model_callable_from_model_init with a skip because of the order of the resulting values. Could you instead fix that by simply sorting the result of the model call?
Other than that this LGTM!

Comment on lines +269 to +272
@pytest.mark.skip(reason=("Test fails a proportion of the time because `z_1`, "
                          "`z_2` and `z_3` are not necessarily ordered as "
                          "expected so the assert statements fail. `z_1` is "
                          "frequently equal to either 36 or 72."))
pckroon (Collaborator)

My suggestion for fixing this is to sort the values:
z_1, z_2, z_3 = sorted(model(3, 3, 2, 2))

brocksam (Contributor, Author)

@pckroon, thanks! Yup, the reason I marked it as skipped was that I didn't know how to fix it. I'll follow your suggestion, double-check that it fixes the problem, and then unmark the test.

pckroon (Collaborator)

Alternatively, we could consider this a bug. It's a little silly that when you create a model from a list you don't get the resulting values back in the same order, even if that's not the recommended way of creating a model.
@tBuLi Opinion?

brocksam (Contributor, Author)

FYI, simply applying sorted() as you suggested doesn't fix it. I'm looking into a bug fix for it now. Will keep you posted.

brocksam (Contributor, Author)

It seems the issue is coming from BaseModel._init_from_dict, which toposorts the values in the model and then stores this "sorted" ordering in its internal state. In the case of this test there is no preferred ordering of the three expressions in the model, as each only depends on a, b and x, all of which are independent of each other.

I agree it makes sense to have the resulting values returned in the same order in which they are supplied. One workaround would be to internally store in the BaseModel the order of the model's expressions as supplied, and then return the values according to this order. Would you like me to try to implement this as part of this PR, or should I leave this test marked as skipped, file an issue for this bug, and address it in a separate PR?
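
A rough sketch of that workaround idea, with purely hypothetical names (this is not symfit's actual BaseModel API): record the supplied ordering separately from the evaluation order and reorder the output at the end.

class OrderedOutputModel:
    # Hypothetical illustration only: remembers the order in which
    # expressions were supplied, independent of any topological sort.
    def __init__(self, expressions):
        self._original_order = list(expressions)
        self._eval_order = self._toposort(expressions)

    def _toposort(self, expressions):
        # Placeholder for a real dependency-based (topological) ordering.
        return sorted(expressions, key=str)

    def _evaluate(self, expr, kwargs):
        # Placeholder for the actual numerical evaluation of one component.
        return expr

    def __call__(self, **kwargs):
        # Evaluate in dependency order...
        results = {expr: self._evaluate(expr, kwargs) for expr in self._eval_order}
        # ...but hand the values back in the order they were supplied.
        return [results[expr] for expr in self._original_order]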

pckroon (Collaborator)

If you're willing to take a stab at fixing the underlying issue, that should be a separate PR (it makes review easier).
I think BaseModel is the right place to "remember" the order of the components. Note that the order in which we return them should be completely independent of the topological order.

brocksam (Contributor, Author)

I have opened an issue for this in #318. With that issue and the discussion above, do we agree that it is okay to leave this specific test marked as skipped and to resolve this review comment?

@pckroon pckroon left a comment (Collaborator)

With #318 this PR LGTM. I'll give @tBuLi a chance to go over it one more time before he hits merge.

pckroon (Collaborator) commented Oct 21, 2020

Ping @tBuLi
I'll merge this on Friday if you don't have any complaints.

@pckroon pckroon merged commit e7311f9 into tBuLi:master Oct 28, 2020
pckroon (Collaborator) commented Oct 28, 2020

Thanks again @brocksam, and sorry for the long wait.
