Change the verify_in_flight_buffer_pkts to use ingress duthost's buffer size #15969
Conversation
@@ -570,7 +572,7 @@ def verify_in_flight_buffer_pkts(duthost,
     data_flow_config = snappi_extra_params.traffic_flow_config.data_flow_config
     tx_frames_total = sum(metric.frames_tx for metric in flow_metrics if data_flow_config["flow_name"] in metric.name)
     tx_bytes_total = tx_frames_total * data_flow_config["flow_pkt_size"]
-    dut_buffer_size = get_lossless_buffer_size(host_ans=duthost)
+    dut_buffer_size = get_lossless_buffer_size(host_ans=ingress_duthost)
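For context, here is a minimal sketch of how the updated check reads after this change. Only the lines shown in the diff above are taken from the actual code; the surrounding signature and the final assertion style are assumptions for illustration.

def verify_in_flight_buffer_pkts(ingress_duthost, egress_duthost,
                                 flow_metrics, snappi_extra_params):
    data_flow_config = snappi_extra_params.traffic_flow_config.data_flow_config
    tx_frames_total = sum(metric.frames_tx for metric in flow_metrics
                          if data_flow_config["flow_name"] in metric.name)
    tx_bytes_total = tx_frames_total * data_flow_config["flow_pkt_size"]

    # Key change: size the in-flight check against the ingress DUT's lossless
    # buffer rather than the egress DUT's, since on long links the ingress
    # buffer (HBM with a large XOFF threshold) is what bounds in-flight data.
    dut_buffer_size = get_lossless_buffer_size(host_ans=ingress_duthost)

    # Assumed assertion style; the real helper may report failures differently.
    assert tx_bytes_total < dut_buffer_size, (
        "In-flight bytes {} exceed ingress lossless buffer size {}".format(
            tx_bytes_total, dut_buffer_size))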
I believe this change is for the Cisco platform only? Also, what if the egress port is a long link?
@sdszhang , I am not sure about it being Cisco-specific; we need to bring it up in the community.
In the reverse direction (short -> long), the buffer size is on the incoming side and will be 64 MB only.
Just checked in the community meeting; this is also applicable to the Nokia platform.
Change the verify_in_flight_buffer_pkts to use ingress duthost's buffer size. (sonic-net#15969)
Summary: verify_in_flight_buffer_pkts was using the egress duthost's buffer size to verify that the amount of transmitted data stays below the buffer size. That number is heavily influenced by the ingress buffer size when long links are in use, because HBM is used with a large XOFF threshold. This commit updates the function to take ingress_duthost and egress_duthost instead of just duthost, and to use the ingress DUT's buffer size.
Verification: ran snappi_tests/multidut/pfc/test_multidut_pfc_pause_lossless_with_snappi.py on a multi-DUT testbed. test_pfc_pause_single_lossless_prio, test_pfc_pause_multi_lossless_prio, test_pfc_pause_single_lossless_prio_reboot[cold] and test_pfc_pause_multi_lossless_prio_reboot[cold] passed for all parametrizations; warm and fast reboot variants were skipped because those reboot types are not supported on cisco-8000 switches. Overall: 10 passed, 8 skipped, 14 warnings in 6099.48s (1:41:39).
co-authorized by: jianquanye@microsoft.com
Cherry-pick PR to 202405: #16029
Cherry-pick PR to 202411: #16295
Description of PR
Summary:
The function verify_in_flight_buffer_pkts uses the egress duthost's buffer size to verify that the amount of transmitted data stays below the buffer size. That number is heavily influenced by the ingress buffer size when long links are in use, because HBM is used with a large XOFF threshold. This PR updates the function to use the ingress DUT's buffer size.
Type of change
Back port request
Approach
What is the motivation for this PR?
How did you do it?
Updated the function to take ingress_duthost and egress_duthost instead of just duthost.
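As a rough illustration of the resulting call-site change (a sketch; keyword names other than duthost, ingress_duthost and egress_duthost are assumptions, and the real tests may pass additional parameters):

# before:
#   verify_in_flight_buffer_pkts(duthost=duthost,
#                                flow_metrics=flow_metrics,
#                                snappi_extra_params=snappi_extra_params)
# after:
verify_in_flight_buffer_pkts(ingress_duthost=ingress_duthost,
                             egress_duthost=egress_duthost,
                             flow_metrics=flow_metrics,
                             snappi_extra_params=snappi_extra_params)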
How did you verify/test it?
Ran it in my TB: 10 passed, 8 skipped, 14 warnings in 6099.48s (1:41:39); the warm/fast reboot cases were skipped because those reboot types are not supported on cisco-8000 switches (full results are in the merged commit message above).
Any platform specific information?
@sdszhang , @auspham : Please let me know if this has to be only for cisco-8000, or is applicable to all platforms.