Add --durations to some pytest commands in CI workflows #622
Conversation
@@ -146,4 +146,4 @@ jobs:
       working-directory: ${{ env.LIBSHORTFIN_DIR }}
       run: |
         ctest --timeout 30 --output-on-failure --test-dir build
-        pytest -s
+        pytest -s --durations=10
Logs: https://github.com/nod-ai/shark-ai/actions/runs/12055697598/job/33616607357?pr=622#step:11:1630
============================= slowest 10 durations =============================
3.56s setup tests/invocation/mobilenet_program_test.py::test_invoke_mobilenet_single_per_fiber
1.79s call tests/invocation/mobilenet_program_test.py::test_invoke_mobilenet_parallel_per_call_explicit
1.43s setup tests/examples/fastapi_test.py::test_error_response
1.22s call tests/examples/async_test.py::test_async_process
0.64s setup tests/apps/sd/components/tokenizer_test.py::test_transformers_tokenizer
0.46s call tests/examples/async_test.py::test_async_queue
0.43s setup tests/invocation/vmfb_buffer_access_test.py::test_kvcache_noreturn[False]
0.42s setup tests/invocation/vmfb_buffer_access_test.py::test_kvcache_noreturn[True]
0.42s call tests/examples/async_test.py::test_async_basic_asyncio
0.41s call tests/invocation/vmfb_buffer_access_test.py::test_kvcache_noreturn[True]
@@ -69,7 +69,7 @@ jobs:
     - name: Run sharktank tests
       if: ${{ !cancelled() }}
       run: |
-        pytest -n 4 sharktank/
+        pytest -n 4 sharktank/ --durations=10
Logs: https://github.com/nod-ai/shark-ai/actions/runs/12055697584/job/33616607407?pr=622#step:6:201
============================= slowest 10 durations =============================
7.61s call tests/models/llama/moe_block_test.py::MoeBlockTest::test
6.67s call tests/examples/main_test.py::ShardingTests::testExportFfnNet
5.47s call tests/models/punet/sharded_resnet_block_with_iree_test.py::test_sharded_resnet_block_with_iree
5.11s call tests/models/llama/attention_test.py::AttentionBlockTest::test
3.82s call tests/models/llama/kv_cache_test.py::KVCacheTest::testDirectAndPagedKVCachePrefill
2.48s call tests/layers/paged_llama_attention_block_test.py::PagedLlamaAttentionBlockTest::testExportDecomposed
1.97s call tests/kernels/mmtfp_test.py::mmtfp_test::testExportDynamicDims
1.78s call tests/layers/paged_llama_attention_block_test.py::PagedLlamaAttentionBlockTest::testExportNondecomposed
1.55s call tests/layers/sharded_conv2d_with_iree_test.py::test_sharded_conv2d_with_iree
1.34s call tests/kernels/mmt_block_scaled_offset_q4_test.py::mmt_block_scaled_offset_q4_unsigned_test::testExportDynamicDims
   pytest \
     --with-t5-data \
-    sharktank/tests/models/t5/t5_test.py
+    sharktank/tests/models/t5/t5_test.py \
+    --durations=0
Logs: https://github.com/nod-ai/shark-ai/actions/runs/12055697584/job/33616608523?pr=622#step:6:547
============================== slowest durations ===============================
203.98s call tests/models/t5/t5_test.py::T5EncoderIreeTest::testV1_1CompareXxlIreeF32AgainstTorchEagerF32
112.76s call tests/models/t5/t5_test.py::T5EncoderIreeTest::testV1_1CompareXxlIreeBf16AgainstTorchEagerF32
26.04s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1XxlF32CompareTorchEagerAgainstHuggingFace
23.90s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1XxlBf16CompareTorchEagerAgainstHuggingFaceF32
20.66s call tests/models/t5/t5_test.py::T5EncoderIreeTest::testV1_1CompareSmallIreeF32AgainstTorchEagerF32
19.83s call tests/models/t5/t5_test.py::T5EncoderIreeTest::testV1_1CompareSmallIreeBf16AgainstTorchEagerF32
8.60s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testXxlBf16AgainstFluxGolden
5.76s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1SmallBf16CompareTorchEagerAgainstHuggingFace
5.21s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1SmallBf16CompareTorchEagerAgainstHuggingFaceF32
4.83s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1SmallF32CompareTorchEagerAgainstHuggingFace
2.01s call tests/models/t5/t5_test.py::T5EncoderEagerTest::testV1_1SmallCompareTorchEagerHuggingFaceBf16AgainstF32
0.04s call tests/models/t5/t5_test.py::T5LayerFFTest::testCompareAgainstTransformers_0
0.01s call tests/models/t5/t5_test.py::T5LayerFFTest::testCompareAgainstTransformers_2
0.01s call tests/models/t5/t5_test.py::T5AttentionTest::testCompareAgainstTransformers_1
0.01s call tests/models/t5/t5_test.py::T5AttentionTest::testCompareAgainstTransformers_0
(45 durations < 0.005s hidden. Use -vv to show these durations.)
Nice!
@ScottTodd seems like there are unit test failures after merging this commit: https://github.com/nod-ai/shark-ai/actions/runs/12058325320/job/33624798311
I don't see how that could be related 🤔 maybe re-run that workflow at an older commit to see if something about the deps changed?
I don't think it is related. It's more likely that this is caused by the latest IREE nightly releases being used (?).
This will help triage slow tests.
Docs:
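For intuition, the duration reports in the logs above are just the N largest recorded phase timings (setup/call/teardown), sorted slowest-first, where `--durations=0` means "no limit". A minimal sketch of that selection logic, using hypothetical timings rather than pytest's actual internals:

```python
# Sketch of how a --durations-style report is assembled: collect
# (seconds, phase, test_id) records, sort descending, keep the top n.
# The records below are made-up examples, not from the CI runs above.

def slowest_durations(records, n=10):
    """Return the n largest-duration records, slowest first.

    n == 0 means "report everything", mirroring pytest's --durations=0.
    """
    ordered = sorted(records, key=lambda r: r[0], reverse=True)
    return ordered if n == 0 else ordered[:n]

records = [
    (1.79, "call", "mobilenet_program_test.py::test_parallel"),
    (3.56, "setup", "mobilenet_program_test.py::test_single"),
    (0.42, "call", "async_test.py::test_basic"),
]

for seconds, phase, test_id in slowest_durations(records, n=2):
    print(f"{seconds:.2f}s {phase} {test_id}")
```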