Provide an easier way to dump test results + results bag contents at the end of each test node #58
If one wants to harvest at the end of each test node, would you get the session from the request and then look up the test id? I really like @smarie's pytest-cases and pytest-harvest!
Thanks @joetristano for the kind words! Indeed, currently one way to do this is to create a fixture such as the following (the imports were not shown in the original snippet; the `get_pytest_status` import path below is an assumption, adjust it to your version of pytest-harvest):

```python
import json
from pathlib import Path

import pytest

# exact import path assumed; adjust to where get_pytest_status lives in your version
from pytest_harvest import get_pytest_status


@pytest.fixture
def my_results_dumper(results_bag, request):
    """A fixture to dump the results after each run, to a distinct file named after the test node id."""
    # Let the test execute
    yield results_bag

    # Grab the current test item
    item = request.node
    testnode_id = item.nodeid.split('::')[-1]  # get the test id

    # Store the test's status information in a dictionary
    (_, _), status_dct = get_pytest_status(item, durations_in_ms=False, current_request=request)
    result = dict(test_id=testnode_id,
                  status=status_dct.get("call")[0],
                  duration_ms=status_dct.get("call")[1])

    # Add all of the `results_bag` contents to this dictionary
    result.update(results_bag)

    # Finally dump as a json file (note: json.dump needs an open file object, not a file name)
    dest_folder = Path(".results")
    dest_folder.mkdir(exist_ok=True, parents=True)
    with open(dest_folder / ("%s.json" % testnode_id), "w") as f:
        json.dump(result, f, indent=4, default=str, sort_keys=True)
```

Something like this.
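For illustration, the file-writing step at the end of such a fixture can be factored into a standalone helper. `dump_result` below is a hypothetical name, not part of pytest-harvest; it only sketches the JSON-dump step:

```python
import json
from pathlib import Path


def dump_result(dest_folder, test_id, result):
    """Write one test's result dict to <dest_folder>/<test_id>.json and return the path."""
    dest_folder = Path(dest_folder)
    dest_folder.mkdir(exist_ok=True, parents=True)
    # Node ids can contain characters that are awkward in file names; replace the common ones
    safe_id = test_id.replace("::", "-").replace("/", "-")
    out_file = dest_folder / ("%s.json" % safe_id)
    with open(out_file, "w") as f:
        json.dump(result, f, indent=4, default=str, sort_keys=True)
    return out_file
```

The fixture would then end with a single call such as `dump_result(".results", testnode_id, result)`.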
In the context of benchmarks such as the example in https://smarie.github.io/pytest-patterns/examples/data_science_benchmark/, each test node's result is saved in the `results_bag` fixture, and later in `test_synthesis` the global `module_results_dct`/`module_results_df` fixture is used to collect everything. This is a bit sad when the objective is just to create results and store them in files (one file per test id) after each node.

Providing an easy way to hook after each test run and write results to JSON or CSV files would be much more convenient. Maybe even a command-line option?
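No such command-line option exists today; one workaround is to take the per-session synthesis returned by pytest-harvest's `get_session_synthesis_dct(session, flatten=True)` (a mapping of test id to a flat dict of columns) and write it out as CSV at the end of the session. The CSV-writing half of that idea is sketched below; `synthesis_to_csv` is a hypothetical helper, and the exact columns depend on what was harvested:

```python
import csv


def synthesis_to_csv(synthesis_dct, csv_path):
    """Flatten a {test_id: {column: value}} mapping into one CSV file."""
    # Collect the union of all columns, keeping first-seen order,
    # since different tests may have stored different results_bag keys
    columns = []
    for row in synthesis_dct.values():
        for key in row:
            if key not in columns:
                columns.append(key)
    with open(csv_path, "w", newline="") as f:
        # DictWriter fills missing cells with "" by default (restval)
        writer = csv.DictWriter(f, fieldnames=["test_id"] + columns)
        writer.writeheader()
        for test_id, row in synthesis_dct.items():
            writer.writerow(dict(row, test_id=test_id))
    return csv_path
```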