Implement a simple test filter #35

Merged
merged 4 commits · Jan 20, 2023
10 changes: 10 additions & 0 deletions README.md
@@ -38,6 +38,16 @@ python3 test-runner/wasi_test_runner.py
-r adapters/wasmtime.sh # path to a runtime adapter
```

Optionally, you can specify test cases to skip with the `--exclude-filter` option.

```bash
python3 test-runner/wasi_test_runner.py \
-t ./tests/assemblyscript/testsuite/ `# path to folders containing .wasm test files` \
./tests/c/testsuite/ \
--exclude-filter examples/skip.json \
-r adapters/wasmtime.sh # path to a runtime adapter
```

## Contributing

All contributions are very welcome. Contributors can help with:
5 changes: 5 additions & 0 deletions examples/skip.json
@@ -0,0 +1,5 @@
{
Collaborator:

I'm not sure we actually need such detailed filters for now. Initially I was thinking of the following requirements:

  • being able to do both: include and exclude tests (e.g. --filter-include ".*thread.*" --filter-exclude ".*open.*")
  • pass regular expressions so we can e.g. disable the whole testsuite (something like --filter-exclude "WASI C tests.*")

But I'm curious what are your thoughts on that.

Contributor Author:

my intention is to implement the simplest thing first. (regex etc. is more complex and advanced for my taste.)

also i think it makes sense to have a filter in a file because i guess it's almost static for a given runtime.

Collaborator:

Sure, I don't suggest implementing all that at once, but I'd like us to think a bit about the API before we move forward. The proposed API of the JSON file only allows for disabling tests, and I'm afraid that if we want to add more features (like the ones mentioned above) we'll either have to make backward-incompatible changes or make the API inconsistent. As I said, we don't have to implement everything in the first iteration, but at least having placeholders would probably simplify further development.

Contributor Author:

i feel the compatibility concern is not too important at this point.

also, your requirements are not clear to me.
if you explain it a bit more, maybe i can suggest a schema.

Collaborator:

Sure; so first of all, I think we should allow users to select both exclusion and inclusion filters (at the moment only exclusion is possible). Also, we might allow for wildcard/regex, but that could just be a feature request to implement later; just an idea (although feel free to come up with something different):

```
{
  "include": [], // if empty, all tests are included
  "exclude/ignore (whatever sounds better)": [] // if empty, none of the tests are excluded
}
```

An element in the array could be a full path to a test, e.g. `WASI C tests.clock_getres-monotonic`, so in the future we can implement wildcards to enable only the clock_getres tests with `WASI C tests.clock_getres*`.

As I said, this is just an example schema to explain what I mean; please let me know if it's clear what we try to achieve here and share your ideas.
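For illustration, the include/exclude idea sketched above could look like the following. This is a hypothetical sketch, not part of this PR: the `filter_spec` dict, the `should_skip` helper, and the use of full regex matching are all assumptions drawn from the discussion.

```python
import re

# Hypothetical filter spec following the schema sketched above.
filter_spec = {
    "include": [],                     # empty -> all tests are included
    "exclude": [r"WASI C tests\..*"],  # empty -> none of the tests are excluded
}

def should_skip(test_id: str) -> bool:
    include = filter_spec["include"]
    exclude = filter_spec["exclude"]
    # An empty include list means "include everything".
    included = not include or any(re.fullmatch(p, test_id) for p in include)
    excluded = any(re.fullmatch(p, test_id) for p in exclude)
    return not included or excluded

print(should_skip("WASI C tests.clock_getres-monotonic"))  # True
print(should_skip("AssemblyScript tests.random"))          # False
```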

Contributor Author:

my suggestion for this PR:

  • rename the `--filter` option to `--exclude-filter`
  • no changes to the schema
  • we can consider adding `--include-filter`, `--exclude-regex-filter`, etc. later when/if necessary.

Collaborator (@loganek, Jan 6, 2023):

> rename --filter option to --exclude-filter

Given we already define filters in the JSON file, I don't know if there's a need for two CLI parameters; could we instead have just one (`--filter`) and update the schema so it allows specifying both inclusion and exclusion? I don't have a strong opinion; just a thought, as I haven't seen them separate in other frameworks.

> no changes to schema

Not sure we actually need a nested structure here (testsuite->tests); if we implement regex/glob we could probably just have a list of strings? We can start with this one as it might seem a bit simpler, but once we update it we'll have to update all the consumers as well (not sure how many runtimes will have onboarded by then, though).

I'm also not 100% sure about file vs CLI parameter. Whereas I see the benefit you mentioned above, I think it doesn't matter that much for runtimes, as they'll have their own CI scripts that wrap the call to the tests. The benefit of having everything through the CLI is that developers can easily filter the tests they're currently working on without modifying a file (and risking committing it accidentally). Similar to my previous point, it is something we can change in the future, but depending on the number of adopters it might be easier or harder.

Contributor Author:

> rename --filter option to --exclude-filter
>
> Given we already define filters in the JSON file, I don't know if there's a need for having two cli parameters, or instead could we have just one (--filter) and update the schema so it allows for specifying both inclusion and exclusion - I don't have a strong opinion; just a thought, as I haven't seen them separate in other frameworks.

given that the current cli options allow specifying multiple testsuites, i thought it natural to accept multiple filters.

> no changes to schema
>
> Not sure if we actually need a nested structure here (testsuite->tests); if we implement regex/glob we could probably just have a list of strings? We can start with this one as it might seem a bit simpler but once we update it, we'll have to update all the consumers as well (not sure how many runtimes will onboard by then though).

i'm not the person who invented the testsuite->tests structure in this repo.
honestly speaking i don't think the testsuite concept makes much sense.

maybe we can use some kind of string match with test id, where test id is:

`test_id = testsuite + '/' + test`

> I'm also not 100% sure about file vs cli parameter. Whereas I see the benefit you mentioned above, I think it doesn't matter that much for runtimes as they'll have their own CI scripts that wrap around the call to the tests. The benefit of having everything through CLI is that developers can easily just filter some of the tests they're currently working on without modifying file (and risking committing it accidentally). Similar to my previous point, it is something we can change in the future, but depending on the number of adopters, it might be easier or harder.

as a developer, i find it easier to copy/modify a file than to tweak cli options. but i can understand you have a different preference.
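The test-id idea above could be sketched with simple shell-style glob matching. This is hypothetical (not code from this PR); the `make_test_id` helper and the example pattern are illustrative only.

```python
from fnmatch import fnmatchcase

# Hypothetical sketch: build "testsuite/test" ids and match skip
# patterns against them with shell-style wildcards.
def make_test_id(testsuite: str, test: str) -> str:
    return testsuite + "/" + test

skip_patterns = ["WASI C tests/clock_getres*"]  # illustrative pattern

def should_skip(test_id: str) -> bool:
    return any(fnmatchcase(test_id, pattern) for pattern in skip_patterns)

print(should_skip(make_test_id("WASI C tests", "clock_getres-monotonic")))  # True
print(should_skip(make_test_id("WASI C tests", "stat-dev-ino")))            # False
```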

Collaborator:

To unblock the change, let's move on with your suggestion here #35 (comment), we'll adapt it once we have more feedback.

Contributor Author:

> To unblock the change, let's move on with your suggestion here #35 (comment), we'll adapt it once we have more feedback.

done

  "WASI C tests": {
    "stat-dev-ino": "d_ino is known broken in this combination of engine and wasi-sdk version."
  }
}
1 change: 1 addition & 0 deletions test-runner/.pylintrc
@@ -3,5 +3,6 @@ disable=
C0114, # Missing module docstring
C0115, # Missing class docstring
C0116, # Missing function or method docstring
R0903, # Too few public methods
[FORMAT]
max-line-length=120
4 changes: 2 additions & 2 deletions test-runner/requirements/dev.txt
@@ -1,6 +1,6 @@
-r common.txt
flake8==5.0.4
-mypy==0.910
+mypy==0.991
pylint==2.14.3
pytest==6.2.5
-coverage==6.3.3
+coverage==6.3.3
14 changes: 12 additions & 2 deletions test-runner/tests/test_test_suite_runner.py
@@ -62,8 +62,12 @@ def test_runner_end_to_end() -> None:

    reporters = [Mock(), Mock()]

    filt = Mock()
    filt.should_skip.return_value = (False, None)
    filters = [filt]

    with patch("glob.glob", return_value=test_paths):
-        suite = tsr.run_tests_from_test_suite("my-path", runtime, validators, reporters)  # type: ignore
+        suite = tsr.run_tests_from_test_suite("my-path", runtime, validators, reporters, filters)  # type: ignore
Collaborator:

should we validate whether the filters were actually called?

Contributor Author:

done

Collaborator:

it'd also be good to validate the case when the filter requests a skip, and that the test is indeed skipped in that case


    # Assert manifest was read correctly
    assert suite.name == "test-suite"
@@ -91,9 +95,15 @@ def test_runner_end_to_end() -> None:
    for config, output in zip(expected_config, outputs):
        validator.assert_any_call(config, output)

    # Assert filter calls
    for filt in filters:
        assert filt.should_skip.call_count == 3
        for test_case in expected_test_cases:
            filt.should_skip.assert_any_call(suite.name, test_case.name)


@patch("os.path.exists", Mock(return_value=False))
def test_runner_should_use_path_for_name_if_manifest_does_not_exist() -> None:
-    suite = tsr.run_tests_from_test_suite("my-path", Mock(), [], [])
+    suite = tsr.run_tests_from_test_suite("my-path", Mock(), [], [], [])

    assert suite.name == "my-path"
15 changes: 15 additions & 0 deletions test-runner/wasi_test_runner/__main__.py
@@ -5,6 +5,8 @@

from .runtime_adapter import RuntimeAdapter
from .harness import run_all_tests
from .filters import TestFilter
from .filters import JSONTestExcludeFilter
from .reporters import TestReporter
from .reporters.console import ConsoleTestReporter
from .reporters.json import JSONTestReporter
@@ -23,6 +25,14 @@ def main() -> int:
        nargs="+",
        help="Locations of suites (directories with *.wasm test files).",
    )
    parser.add_argument(
        "-f",
        "--exclude-filter",
        required=False,
        nargs="+",
        default=[],
        help="Locations of test exclude filters (JSON files).",
    )
    parser.add_argument(
        "-r", "--runtime-adapter", required=True, help="Path to a runtime adapter."
    )
@@ -45,11 +55,16 @@

    validators: List[Validator] = [exit_code_validator, stdout_validator]

    filters: List[TestFilter] = []
    for filt in options.exclude_filter:
        filters.append(JSONTestExcludeFilter(filt))

    return run_all_tests(
        RuntimeAdapter(options.runtime_adapter),
        options.test_suite,
        validators,
        reporters,
        filters,
    )


30 changes: 30 additions & 0 deletions test-runner/wasi_test_runner/filters.py
@@ -0,0 +1,30 @@
from typing import Tuple, Union, Literal
from abc import ABC
from abc import abstractmethod

import json


class TestFilter(ABC):
    @abstractmethod
    def should_skip(
        self, test_suite_name: str, test_name: str
    ) -> Union[Tuple[Literal[True], str], Tuple[Literal[False], Literal[None]]]:
        pass


class JSONTestExcludeFilter(TestFilter):
    def __init__(self, filename: str) -> None:
        with open(filename, encoding="utf-8") as file:
            self.filter_dict = json.load(file)

    def should_skip(
        self, test_suite_name: str, test_name: str
    ) -> Union[Tuple[Literal[True], str], Tuple[Literal[False], Literal[None]]]:
        test_suite_filter = self.filter_dict.get(test_suite_name)
        if test_suite_filter is None:
            return False, None
        why = test_suite_filter.get(test_name)
        if why is not None:
            return True, why
        return False, None
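To see the filter's semantics end to end, here's a self-contained usage sketch. The lookup class mirrors `JSONTestExcludeFilter` above but is reimplemented inline so it runs without the `wasi_test_runner` package; the filter-file contents are shaped like `examples/skip.json`, with an illustrative reason string.

```python
import json
import tempfile

# Inline mirror of JSONTestExcludeFilter (without the TestFilter base class)
# so this sketch runs standalone.
class JSONTestExcludeFilter:
    def __init__(self, filename: str) -> None:
        with open(filename, encoding="utf-8") as file:
            self.filter_dict = json.load(file)

    def should_skip(self, test_suite_name: str, test_name: str):
        test_suite_filter = self.filter_dict.get(test_suite_name)
        if test_suite_filter is None:
            return False, None
        why = test_suite_filter.get(test_name)
        if why is not None:
            return True, why
        return False, None

# Write a filter file shaped like examples/skip.json and query it.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"WASI C tests": {"stat-dev-ino": "known broken"}}, f)
    path = f.name

filt = JSONTestExcludeFilter(path)
print(filt.should_skip("WASI C tests", "stat-dev-ino"))   # (True, 'known broken')
print(filt.should_skip("WASI C tests", "clock_getres"))   # (False, None)
print(filt.should_skip("Rust tests", "anything"))         # (False, None)
```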
4 changes: 3 additions & 1 deletion test-runner/wasi_test_runner/harness.py
@@ -1,5 +1,6 @@
from typing import List

from .filters import TestFilter
from .reporters import TestReporter
from .test_suite_runner import run_tests_from_test_suite
from .runtime_adapter import RuntimeAdapter
@@ -11,12 +12,13 @@ def run_all_tests(
    test_suite_paths: List[str],
    validators: List[Validator],
    reporters: List[TestReporter],
    filters: List[TestFilter],
) -> int:
    ret = 0

    for test_suite_path in test_suite_paths:
        test_suite = run_tests_from_test_suite(
-            test_suite_path, runtime, validators, reporters
+            test_suite_path, runtime, validators, reporters, filters,
        )
        for reporter in reporters:
            reporter.report_test_suite(test_suite)
1 change: 1 addition & 0 deletions test-runner/wasi_test_runner/reporters/console.py
@@ -54,6 +54,7 @@ def finalize(self, version: RuntimeVersion) -> None:
            print(f"  Total: {suite.test_count}")
            self._print_pass(f"  Passed: {suite.pass_count}")
            self._print_fail(f"  Failed: {suite.fail_count}")
+            self._print_skip(f"  Skipped: {suite.skip_count}")
            print("")

        print(
29 changes: 27 additions & 2 deletions test-runner/wasi_test_runner/test_suite_runner.py
@@ -8,6 +8,7 @@
from datetime import datetime
from typing import List, cast

from .filters import TestFilter
from .runtime_adapter import RuntimeAdapter
from .test_case import (
Result,
@@ -25,28 +26,52 @@ def run_tests_from_test_suite(
    runtime: RuntimeAdapter,
    validators: List[Validator],
    reporters: List[TestReporter],
    filters: List[TestFilter],
) -> TestSuite:
    test_cases: List[TestCase] = []
    test_start = datetime.now()

    _cleanup_test_output(test_suite_path)

    test_suite_name = _read_manifest(test_suite_path)

    for test_path in glob.glob(os.path.join(test_suite_path, "*.wasm")):
-        test_case = _execute_single_test(runtime, validators, test_path)
+        test_name = os.path.splitext(os.path.basename(test_path))[0]
+        for filt in filters:
+            # for now, just drop the skip reason string. it might be
+            # useful to make reporters report it.
+            skip, _ = filt.should_skip(test_suite_name, test_name)
+            if skip:
+                test_case = _skip_single_test(runtime, validators, test_path)
+                break
+        else:
+            test_case = _execute_single_test(runtime, validators, test_path)
        test_cases.append(test_case)
        for reporter in reporters:
            reporter.report_test(test_case)

    elapsed = (datetime.now() - test_start).total_seconds()

    return TestSuite(
-        name=_read_manifest(test_suite_path),
+        name=test_suite_name,
        time=test_start,
        duration_s=elapsed,
        test_cases=test_cases,
    )


def _skip_single_test(
    _runtime: RuntimeAdapter, _validators: List[Validator], test_path: str
) -> TestCase:
    config = _read_test_config(test_path)
    return TestCase(
        name=os.path.splitext(os.path.basename(test_path))[0],
        config=config,
        result=Result(output=Output(0, "", ""), is_executed=False, failures=[]),
Collaborator:

Should the output be optional? Or rather have a union of `output|skip_reason`, where the value is picked based on `is_executed`?

Contributor Author:

maybe. i'd like to postpone it for later PRs.

it would be nicer to be able to represent "timed out" as well. #42

Collaborator:

Alright, we can refactor that piece as part of #42 then.

        duration_s=0,
    )


def _execute_single_test(
    runtime: RuntimeAdapter, validators: List[Validator], test_path: str
) -> TestCase: