[action] [PR:15969] Change the verify_in_flight_buffer_pkts to use ingress duthost's buffer size #16295
Merged
…er size. (sonic-net#15969)

Description of PR
Summary: The function verify_in_flight_buffer_pkts uses the egress duthost's buffer size to verify that the number of packets transmitted stays below the buffer size. That number is strongly influenced by the ingress buffer size when long links are in use, since HBM is used with a large XOFF threshold. Update this function to use the ingress DUT's buffer size.

Approach
What is the motivation for this PR?
How did you do it?
Updated the function to take ingress_duthost and egress_duthost instead of just duthost.
How did you verify/test it?
Ran it in my TB:

PASSES
test_pfc_pause_single_lossless_prio[multidut_port_info0-yy39top-lc4|3]
test_pfc_pause_single_lossless_prio[multidut_port_info0-yy39top-lc4|4]
test_pfc_pause_single_lossless_prio[multidut_port_info1-yy39top-lc4|3]
test_pfc_pause_single_lossless_prio[multidut_port_info1-yy39top-lc4|4]
test_pfc_pause_multi_lossless_prio[multidut_port_info0]
test_pfc_pause_multi_lossless_prio[multidut_port_info1]
test_pfc_pause_single_lossless_prio_reboot[multidut_port_info0-cold-yy39top-lc4|3]
test_pfc_pause_single_lossless_prio_reboot[multidut_port_info1-cold-yy39top-lc4|3]
test_pfc_pause_multi_lossless_prio_reboot[multidut_port_info0-cold]
test_pfc_pause_multi_lossless_prio_reboot[multidut_port_info1-cold]

generated xml file: /run_logs/ixia/buffer_size/2024-12-09-23-31-36/tr_2024-12-09-23-31-36.xml
INFO:root:Can not get Allure report URL. Please check logs
01:13:18 __init__.pytest_terminal_summary L0067 INFO | Can not get Allure report URL. Please check logs

short test summary info
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio[multidut_port_info0-yy39top-lc4|3]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio[multidut_port_info0-yy39top-lc4|4]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio[multidut_port_info1-yy39top-lc4|3]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio[multidut_port_info1-yy39top-lc4|4]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_multi_lossless_prio[multidut_port_info0]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_multi_lossless_prio[multidut_port_info1]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio_reboot[multidut_port_info0-cold-yy39top-lc4|3]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_single_lossless_prio_reboot[multidut_port_info1-cold-yy39top-lc4|3]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_multi_lossless_prio_reboot[multidut_port_info0-cold]
PASSED snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py::test_pfc_pause_multi_lossless_prio_reboot[multidut_port_info1-cold]
SKIPPED [2] snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py:139: Reboot type warm is not supported on cisco-8000 switches
SKIPPED [2] snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py:139: Reboot type fast is not supported on cisco-8000 switches
SKIPPED [2] snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py:199: Reboot type warm is not supported on cisco-8000 switches
SKIPPED [2] snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py:199: Reboot type fast is not supported on cisco-8000 switches

10 passed, 8 skipped, 14 warnings in 6099.48s (1:41:39)

Any platform specific information?
Co-authored-by: jianquanye@microsoft.com
/azp run
Original PR: #15969
Azure Pipelines successfully started running 1 pipeline(s).
Description of PR
Summary:
The function verify_in_flight_buffer_pkts uses the egress duthost's buffer size to verify that the number of packets transmitted stays below the buffer size. That number is strongly influenced by the ingress buffer size when long links are in use, since HBM is used with a large XOFF threshold. Update this function to use the ingress DUT's buffer size.
Type of change
Back port request
Approach
What is the motivation for this PR?
How did you do it?
Updated the function to take ingress_duthost and egress_duthost instead of just duthost.
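A minimal sketch of the change described above. The helper name get_lossless_buffer_size, the duthost objects, and the flow-metrics shape are illustrative stand-ins, not the actual sonic-mgmt implementation; the point is that the bound now comes from the ingress DUT instead of the egress DUT.

```python
from types import SimpleNamespace

def get_lossless_buffer_size(duthost):
    # Illustrative stub: in sonic-mgmt this would query the DUT's buffer
    # pool configuration; here the fake duthost object carries the value.
    return duthost.buffer_size

def verify_in_flight_buffer_pkts(ingress_duthost, egress_duthost,
                                 flow_metrics, pkt_size):
    """Check that in-flight traffic stays below the *ingress* DUT's buffer.

    With long links, the XOFF threshold is large and the ingress line
    card's HBM absorbs the in-flight packets, so the ingress buffer size
    is the relevant bound, not the egress one.
    """
    buffer_size = get_lossless_buffer_size(ingress_duthost)  # was egress
    tx_bytes = sum(m.frames_tx * pkt_size for m in flow_metrics)
    assert tx_bytes < buffer_size, (
        f"In-flight bytes {tx_bytes} exceed ingress buffer {buffer_size}")
    return tx_bytes, buffer_size
```

Callers that previously passed a single duthost would now pass both the ingress and egress DUTs, e.g. verify_in_flight_buffer_pkts(ingress_duthost, egress_duthost, metrics, pkt_size).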
How did you verify/test it?
Ran it in my testbed; see the pytest output above (10 passed, 8 skipped).
Any platform specific information?
@sdszhang, @auspham: Please let me know if this should apply only to cisco-8000, or to all platforms.