Replies: 7 comments 2 replies
-
I'd suggest you make sure you're installing into a completely clean, fresh virtualenv for testing.
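The clean-virtualenv suggestion can be sketched like this (the path is just an example; the venv is created with `--without-pip` only to keep the sketch offline — drop that flag to get pip and then install httpcore and pytest into it):

```shell
# Create a throwaway virtualenv so none of the system-wide plugins
# (the logs below list 30+ of them) leak into the test run.
python3 -m venv --without-pip /tmp/httpcore-test-venv

# The fresh interpreter sees no third-party packages at all:
/tmp/httpcore-test-venv/bin/python - <<'EOF'
import importlib.util
# In a clean venv, pytest (and every other plugin) is absent until you install it.
assert importlib.util.find_spec("pytest") is None
print("clean environment")
EOF

# Normally you would then continue with:
#   /tmp/httpcore-test-venv/bin/pip install httpcore pytest
#   /tmp/httpcore-test-venv/bin/python -m pytest -ra
```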
-
This ticket is not strictly about a bug in httpcore. If you don't see anything valuable in that output, feel free to close the ticket. And don't worry about building my rpm package with httpcore: I build that package in an environment where only what is specified in BuildRequires is installed, and of course pytest does not fail there 😀
-
Just started updating the .spec for 0.13.7 and I found one thing. To diagnose this kind of issue, https://github.com/mrbean-bremen/pytest-find-dependencies/ can be used. Examples:
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
Using --randomly-seed=2919291752
rootdir: /home/tkloczko/rpmbuild/BUILD/httpcore-0.13.7, configfile: setup.cfg
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, aspectlib-1.5.2, toolbox-0.5, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, xprocess-0.18.1, black-0.3.12, asyncio-0.15.1, subtests-0.5.0, isort-2.0.0, hypothesis-6.14.6, mock-3.6.1, profiling-1.7.0, randomly-3.8.0, Faker-8.12.1, nose2pytest-1.0.8, pyfakefs-4.5.1, tornado-0.8.1, twisted-1.13.3, aiohttp-0.3.0, localserver-0.5.0, anyio-3.3.1, trio-0.7.0
collected 200 items
tests/async_tests/test_interfaces.py F...........F....................F.F..........F........s............F.EEFFFFFFFFFFFFFFFFFFF............F..EF......... [ 58%]
tests/test_threadsafety.py .. [ 59%]
tests/async_tests/test_connection_pool.py .... [ 61%]
tests/sync_tests/test_connection_pool.py .... [ 63%]
tests/sync_tests/test_http11.py ...... [ 66%]
tests/async_tests/test_retries.py ...... [ 69%]
tests/test_exported_members.py . [ 70%]
tests/sync_tests/test_interfaces.py .F.........E..F...F.....F............ [ 88%]
tests/sync_tests/test_http2.py ... [ 90%]
tests/test_map_exceptions.py ... [ 91%]
tests/async_tests/test_http2.py FFF [ 93%]
tests/test_utils.py ... [ 94%]
tests/backend_tests/test_asyncio.py EE [ 95%]
tests/sync_tests/test_retries.py F.. [ 97%]
tests/async_tests/test_http11.py ...... [100%]
================================================================================== ERRORS ==================================================================================
______________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[asyncio-anyio] _______________________________________________
@pytest.fixture(scope="function")
def too_many_open_files_minus_one() -> typing.Iterator[None]:
# Fixture for test regression on https://github.com/encode/httpcore/issues/182
# Max number of descriptors chosen according to:
# See: https://man7.org/linux/man-pages/man2/select.2.html#top_of_page
# "To monitor file descriptors greater than 1023, use poll or epoll instead."
max_num_descriptors = 1023
files = []
while True:
> f = open("/dev/null")
E OSError: [Errno 24] Too many open files: '/dev/null'
tests/conftest.py:175: OSError
_______________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[asyncio-auto] _______________________________________________
@pytest.fixture(scope="function")
def too_many_open_files_minus_one() -> typing.Iterator[None]:
# Fixture for test regression on https://github.com/encode/httpcore/issues/182
# Max number of descriptors chosen according to:
# See: https://man7.org/linux/man-pages/man2/select.2.html#top_of_page
# "To monitor file descriptors greater than 1023, use poll or epoll instead."
max_num_descriptors = 1023
files = []
while True:
> f = open("/dev/null")
E OSError: [Errno 24] Too many open files: '/dev/null'
tests/conftest.py:175: OSError
________________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[trio-anyio] ________________________________________________
[..]
============================================================================= warnings summary =============================================================================
tests/async_tests/test_interfaces.py: 46 warnings
tests/async_tests/test_connection_pool.py: 4 warnings
tests/async_tests/test_retries.py: 3 warnings
tests/async_tests/test_http11.py: 6 warnings
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================= short test summary info ==========================================================================
SKIPPED [1] tests/async_tests/test_interfaces.py:323: The trio backend does not support local_address
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-anyio] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-auto] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-anyio] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/sync_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[sync] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/backend_tests/test_asyncio.py::TestSocketStream::TestIsReadable::test_returns_true_when_transport_has_no_socket - OSError: [Errno 24] Too many open files
ERROR tests/backend_tests/test_asyncio.py::TestSocketStream::TestIsReadable::test_returns_true_when_socket_is_readable - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-DEFAULT] - trio.TrioInternalError: in...
FAILED tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - trio.TrioInternalErro...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-TUNNEL_ONLY] - trio.TrioInternalError...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-TUNNEL_ONLY] - trio.TrioInternalError: ...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - trio.TrioInternalError: inte...
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-auto-4-1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[trio-True-TUNNEL_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-anyio-4-5] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-auto-4-5] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - OSError: [Errno 24] Too...
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[trio-anyio-url1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-anyio-True-60.0-expected_during_active1-expected_during_idle1] - OSError: [...
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[asyncio-anyio-FORWARD_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_https_request_reuse_connection[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_closing_http_request[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-auto-True-0.0-expected_during_active3-expected_during_idle3] - OSError: [Er...
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_https_request[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_cannot_reuse_dropped_connection[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[asyncio-True-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_cannot_reuse_dropped_connection[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-anyio-4-3] - httpcore.ConnectError: All connection attempts failed
FAILED tests/async_tests/test_interfaces.py::test_https_request[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - httpcore.ConnectError: [Errno...
FAILED tests/sync_tests/test_interfaces.py::test_http_proxy[sync-TUNNEL_ONLY] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - httpcore.ConnectError:...
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - httpcore.ConnectError: [...
FAILED tests/async_tests/test_http2.py::test_post_request - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_http2.py::test_request_with_missing_host_header - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_http2.py::test_get_request - OSError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_retries.py::test_retries_enabled - assert [0.0003035068...6286373138428] == []
==================================================== 35 failed, 158 passed, 1 skipped, 59 warnings, 6 errors in 28.97s =====================================================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
Using --randomly-seed=552270808
rootdir: /home/tkloczko/rpmbuild/BUILD/httpcore-0.13.7, configfile: setup.cfg
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, aspectlib-1.5.2, toolbox-0.5, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, xprocess-0.18.1, black-0.3.12, asyncio-0.15.1, subtests-0.5.0, isort-2.0.0, hypothesis-6.14.6, mock-3.6.1, profiling-1.7.0, randomly-3.8.0, Faker-8.12.1, nose2pytest-1.0.8, pyfakefs-4.5.1, tornado-0.8.1, twisted-1.13.3, aiohttp-0.3.0, localserver-0.5.0, anyio-3.3.1, trio-0.7.0
collected 200 items
tests/sync_tests/test_connection_pool.py .... [ 2%]
tests/async_tests/test_http2.py ... [ 3%]
tests/async_tests/test_retries.py ...... [ 6%]
tests/sync_tests/test_retries.py ... [ 8%]
tests/sync_tests/test_http11.py ...... [ 11%]
tests/backend_tests/test_asyncio.py .. [ 12%]
tests/test_utils.py ... [ 13%]
tests/test_threadsafety.py .. [ 14%]
tests/async_tests/test_http11.py ...... [ 17%]
tests/async_tests/test_connection_pool.py .... [ 19%]
tests/test_exported_members.py . [ 20%]
tests/sync_tests/test_http2.py ... [ 21%]
tests/async_tests/test_interfaces.py .........F......F....EFFFFFFFFFFFFFEFFFFFFFE..F.....F...FF..FFFs....F..FF.FFFFEFFFFFFFFFFFFFEFFF.FFFFFFFFFFFFFFFFFFFF [ 80%]
tests/sync_tests/test_interfaces.py FFFFE.FF..FFFF.FFFFFF..FF.FFFFFFFFF.. [ 98%]
tests/test_map_exceptions.py ... [100%]
================================================================================== ERRORS ==================================================================================
_______________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[asyncio-auto] _______________________________________________
[..]
============================================================================= warnings summary =============================================================================
tests/async_tests/test_http2.py: 3 warnings
tests/async_tests/test_retries.py: 3 warnings
tests/async_tests/test_http11.py: 6 warnings
tests/async_tests/test_connection_pool.py: 4 warnings
tests/async_tests/test_interfaces.py: 21 warnings
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================= short test summary info ==========================================================================
SKIPPED [1] tests/async_tests/test_interfaces.py:323: The trio backend does not support local_address
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-auto] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-auto] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_explicit_backend_name[asyncio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-anyio] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_explicit_backend_name[trio] - OSError: [Errno 24] Too many open files
ERROR tests/sync_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[sync] - OSError: [Errno 24] Too many open files: '/dev/null'
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-TUNNEL_ONLY] - trio.TrioInternalError...
FAILED tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_unix_domain_socket[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[asyncio-False-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[asyncio-True-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[asyncio-True-TUNNEL_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-TUNNEL_ONLY] - OSError: [Errno 24] Too ...
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-anyio-4-3] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[asyncio-anyio-FORWARD_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[asyncio-anyio-url1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[trio-anyio-url0] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[asyncio-auto-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-auto-False-0.0-expected_during_active2-expected_during_idle2] - OSError: [E...
FAILED tests/async_tests/test_interfaces.py::test_https_request_reuse_connection[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[asyncio-anyio-url0] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[trio-anyio-False-0.0-expected_during_active2-expected_during_idle2] - OSError: [Err...
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[trio-auto-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-auto-4-1] - httpcore.ConnectError: All connection attempts failed
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - trio.TrioInternalErro...
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[trio-auto-True-0.0-expected_during_active3-expected_during_idle3] - httpcore.Connec...
FAILED tests/async_tests/test_interfaces.py::test_closing_http_request[trio-auto] - httpcore.ConnectError: all attempts to connect to localhost:8002 failed
FAILED tests/async_tests/test_interfaces.py::test_http2_request[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[trio-False-DEFAULT] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-auto-4-5] - httpcore.ConnectError: all attempts to connect to localhos...
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[trio-anyio-DEFAULT] - httpcore.ConnectError: all attempts to connect to 127.0.0.1:8080 failed
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[trio-anyio-False-60.0-expected_during_active0-expected_during_idle0] - httpcore.Con...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - OSError: [Errno 24] Too...
FAILED tests/async_tests/test_interfaces.py::test_http_request_local_address[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_https_requests[asyncio-False-TUNNEL_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-anyio-4-1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http2_request[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_local_address[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[asyncio-anyio-TUNNEL_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_reuse_connection[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[trio-auto-False-0.0-expected_during_active2-expected_during_idle2] - OSError: [Errn...
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[trio-auto-url1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_cannot_reuse_dropped_connection[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request_unix_domain_socket[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[asyncio-auto-url1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-auto-False-60.0-expected_during_active0-expected_during_idle0] - OSError: [...
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-auto-4-3] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_closing_http_request[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-auto-4-3] - httpcore.ConnectError: All connection attempts failed
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-anyio-4-1] - httpcore.ConnectError: All connection attempts failed
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[trio-anyio-True-0.0-expected_during_active3-expected_during_idle3] - OSError: [Errn...
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-anyio-4-5] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-DEFAULT] - OSError: [Errno 24] Too ma...
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[trio-anyio-TUNNEL_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-anyio-4-5] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[asyncio-anyio-4-3] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[trio-auto-4-1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-auto-True-60.0-expected_during_active1-expected_during_idle1] - OSError: [E...
FAILED tests/async_tests/test_interfaces.py::test_https_request_reuse_connection[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-anyio-True-0.0-expected_during_active3-expected_during_idle3] - OSError: [E...
FAILED tests/async_tests/test_interfaces.py::test_closing_http_request[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_proxy[trio-auto-FORWARD_ONLY] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[trio-anyio-dns-resolution-failed] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_https_request[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_https_request[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_request_unsupported_protocol[trio-anyio-url1] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_http_request[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-anyio-False-0.0-expected_during_active2-expected_during_idle2] - OSError: [...
FAILED tests/async_tests/test_interfaces.py::test_connection_pool_get_connection_info[asyncio-anyio-True-60.0-expected_during_active1-expected_during_idle1] - OSError: [...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - OSError: [Errno 24] Too many...
FAILED tests/sync_tests/test_interfaces.py::test_connection_pool_get_connection_info[sync-True-0.0-expected_during_active3-expected_during_idle3] - httpcore.ConnectError...
FAILED tests/sync_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[sync-4-1] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[sync-4-3] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_request[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_explicit_backend_name - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_https_requests[False-DEFAULT] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_closing_http_request[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_connection_pool_get_connection_info[sync-True-60.0-expected_during_active1-expected_during_idle1] - httpcore.ConnectErro...
FAILED tests/sync_tests/test_interfaces.py::test_connection_pool_get_connection_info[sync-False-0.0-expected_during_active2-expected_during_idle2] - httpcore.ConnectErro...
FAILED tests/sync_tests/test_interfaces.py::test_http2_request[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_proxy[sync-FORWARD_ONLY] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - httpcore.ConnectError: [Errno...
FAILED tests/sync_tests/test_interfaces.py::test_http_proxy[sync-TUNNEL_ONLY] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - httpcore.ConnectError:...
FAILED tests/sync_tests/test_interfaces.py::test_https_request_reuse_connection[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_proxy[sync-DEFAULT] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_request_cannot_reuse_dropped_connection[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_max_keepalive_connections_handled_correctly[sync-4-5] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - httpcore.ConnectError: [...
FAILED tests/sync_tests/test_interfaces.py::test_proxy_https_requests[True-DEFAULT] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_https_requests[False-TUNNEL_ONLY] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_connection_pool_get_connection_info[sync-False-60.0-expected_during_active0-expected_during_idle0] - httpcore.ConnectErr...
FAILED tests/sync_tests/test_interfaces.py::test_http_request_local_address[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_request_unix_domain_socket[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_https_requests[True-TUNNEL_ONLY] - httpcore.ProxyError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_https_request[sync] - httpcore.ConnectError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_http_request_reuse_connection[sync] - httpcore.ConnectError: [Errno 24] Too many open files
===================================================== 99 failed, 94 passed, 1 skipped, 37 warnings, 6 errors in 34.33s =====================================================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
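For context on the `OSError: [Errno 24]` cascade: the `too_many_open_files_minus_one` fixture deliberately opens `/dev/null` until the process hits its file-descriptor limit, so any test that runs while descriptors are still scarce fails with EMFILE. A minimal standalone sketch of that mechanism (the helper name and the temporary limit below are ours, not httpcore's):

```python
import errno
import resource

def fd_exhaustion_errno(soft_limit: int = 128) -> int:
    """Open /dev/null until the soft RLIMIT_NOFILE trips; return the errno.

    Mirrors what httpcore's fixture does, but restores state afterwards.
    """
    old_soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Lower the soft limit so the demo trips quickly instead of having to
    # open thousands of descriptors first.
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard))
    files = []
    try:
        while True:
            files.append(open("/dev/null"))
    except OSError as exc:
        return exc.errno  # EMFILE, i.e. errno 24 on Linux
    finally:
        for f in files:
            f.close()
        resource.setrlimit(resource.RLIMIT_NOFILE, (old_soft, hard))
```

Calling `fd_exhaustion_errno()` returns `errno.EMFILE` (24 on Linux), matching the tracebacks above; what turns this into dozens of unrelated failures is the randomized test ordering, which is why pinning `--randomly-seed` or running with `-p no:randomly` changes which tests fail.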
-
Nevertheless, even with randomization disabled:
+ /usr/bin/pytest -ra -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/httpcore-0.13.7, configfile: setup.cfg
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, aspectlib-1.5.2, toolbox-0.5, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, xprocess-0.18.1, black-0.3.12, asyncio-0.15.1, subtests-0.5.0, isort-2.0.0, hypothesis-6.14.6, mock-3.6.1, profiling-1.7.0, Faker-8.12.1, nose2pytest-1.0.8, pyfakefs-4.5.1, tornado-0.8.1, twisted-1.13.3, aiohttp-0.3.0, localserver-0.5.0, anyio-3.3.1, trio-0.7.0
collected 200 items
tests/test_exported_members.py . [ 0%]
tests/test_map_exceptions.py ... [ 2%]
tests/test_threadsafety.py .. [ 3%]
tests/test_utils.py ... [ 4%]
tests/async_tests/test_connection_pool.py .... [ 6%]
tests/async_tests/test_http11.py ...... [ 9%]
tests/async_tests/test_http2.py ... [ 11%]
tests/async_tests/test_interfaces.py .................................................FFFFFF..s...........................................FEEEFFFFFFFFFFFF [ 69%]
tests/async_tests/test_retries.py EEEEEE [ 72%]
tests/backend_tests/test_asyncio.py EE [ 73%]
tests/sync_tests/test_connection_pool.py .... [ 75%]
tests/sync_tests/test_http11.py ...... [ 78%]
tests/sync_tests/test_http2.py ... [ 80%]
tests/sync_tests/test_interfaces.py .............FF..F...............E... [ 98%]
tests/sync_tests/test_retries.py F.. [100%]
================================================================================== ERRORS ==================================================================================
______________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[asyncio-anyio] _______________________________________________
@pytest.fixture(scope="function")
def too_many_open_files_minus_one() -> typing.Iterator[None]:
# Fixture for test regression on https://github.com/encode/httpcore/issues/182
# Max number of descriptors chosen according to:
# See: https://man7.org/linux/man-pages/man2/select.2.html#top_of_page
# "To monitor file descriptors greater than 1023, use poll or epoll instead."
max_num_descriptors = 1023
files = []
while True:
> f = open("/dev/null")
E OSError: [Errno 24] Too many open files: '/dev/null'
tests/conftest.py:175: OSError
________________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[trio-auto] _________________________________________________
(same too_many_open_files_minus_one fixture traceback as above: OSError: [Errno 24] Too many open files: '/dev/null')
________________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[trio-anyio] ________________________________________________
(same too_many_open_files_minus_one fixture traceback as above: OSError: [Errno 24] Too many open files: '/dev/null')
________________________________________________________________ ERROR at setup of test_no_retries[asyncio] ________________________________________________________________
fixturedef = <FixtureDef argname='event_loop' scope='function' baseid=''>, request = <SubRequest 'event_loop' for <Function test_no_retries[asyncio]>>
> ???
/usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:227: in event_loop
???
/usr/lib64/python3.8/asyncio/events.py:656: in new_event_loop
???
/usr/lib64/python3.8/asyncio/unix_events.py:54: in __init__
???
/usr/lib64/python3.8/asyncio/selector_events.py:58: in __init__
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f9cb8cfd160>
> ???
E OSError: [Errno 24] Too many open files
/usr/lib64/python3.8/selectors.py:349: OSError
_________________________________________________________________ ERROR at setup of test_no_retries[trio] __________________________________________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
_____________________________________________________________ ERROR at setup of test_retries_enabled[asyncio] ______________________________________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
_______________________________________________________________ ERROR at setup of test_retries_enabled[trio] _______________________________________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
_____________________________________________________________ ERROR at setup of test_retries_exceeded[asyncio] _____________________________________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
______________________________________________________________ ERROR at setup of test_retries_exceeded[trio] _______________________________________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
_____________________________________ ERROR at setup of TestSocketStream.TestIsReadable.test_returns_true_when_transport_has_no_socket _____________________________________
(same event_loop setup traceback as above: OSError: [Errno 24] Too many open files)
_______________________________________ ERROR at setup of TestSocketStream.TestIsReadable.test_returns_true_when_socket_is_readable ________________________________________
fixturedef = <FixtureDef argname='event_loop' scope='function' baseid=''>, request = <SubRequest 'event_loop' for <Function test_returns_true_when_socket_is_readable>>
@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
"""Adjust the event loop policy when an event loop is produced."""
if fixturedef.argname == "event_loop":
outcome = yield
> loop = outcome.get_result()
/usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:227: in event_loop
loop = asyncio.get_event_loop_policy().new_event_loop()
/usr/lib64/python3.8/asyncio/events.py:656: in new_event_loop
return self._loop_factory()
/usr/lib64/python3.8/asyncio/unix_events.py:54: in __init__
super().__init__(selector)
/usr/lib64/python3.8/asyncio/selector_events.py:58: in __init__
selector = selectors.DefaultSelector()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f9cb90bec40>
def __init__(self):
super().__init__()
> self._selector = self._selector_cls()
E OSError: [Errno 24] Too many open files
/usr/lib64/python3.8/selectors.py:349: OSError
___________________________________________________ ERROR at setup of test_broken_socket_detection_many_open_files[sync] ___________________________________________________
(same too_many_open_files_minus_one fixture traceback as above: OSError: [Errno 24] Too many open files: '/dev/null')
================================================================================= FAILURES =================================================================================
______________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] _______________________________________
runner = Runner(clock=SystemClock(offset=115838.96171863361), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<selec..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb8c90430>, pr... b'/'), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='DEFAULT')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
> runner.entry_queue.wakeup.wakeup_on_signals()
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2034:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <trio._core._wakeup_socketpair.WakeupSocketpair object at 0x7f9cb8c17fa0>
def wakeup_on_signals(self):
assert self.old_wakeup_fd is None
if not is_main_thread():
return
fd = self.write_sock.fileno()
if HAVE_WARN_ON_FULL_BUFFER:
self.old_wakeup_fd = signal.set_wakeup_fd(fd, warn_on_full_buffer=False)
else:
self.old_wakeup_fd = signal.set_wakeup_fd(fd)
if self.old_wakeup_fd != -1:
> warnings.warn(
RuntimeWarning(
"It looks like Trio's signal handling code might have "
"collided with another library you're using. If you're "
"running Trio in guest mode, then this might mean you "
"should set host_uses_signal_set_wakeup_fd=True. "
"Otherwise, file a bug on Trio and we'll help you figure "
"out what's going on."
)
)
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
runner = Runner(clock=SystemClock(offset=115838.96171863361), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<selec..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb8c90430>, pr... b'/'), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='DEFAULT')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
runner.entry_queue.wakeup.wakeup_on_signals()
if "before_run" in runner.instruments:
runner.instruments.call("before_run")
runner.clock.start_clock()
runner.init_task = runner.spawn_impl(
runner.init, (async_fn, args), None, "<init>", system_task=True
)
# You know how people talk about "event loops"? This 'while' loop right
# here is our event loop:
while runner.tasks:
if runner.runq:
timeout = 0
else:
deadline = runner.deadlines.next_deadline()
timeout = runner.clock.deadline_to_sleep_time(deadline)
timeout = min(max(0, timeout), _MAX_TIMEOUT)
idle_primed = None
if runner.waiting_for_idle:
cushion, _ = runner.waiting_for_idle.keys()[0]
if cushion < timeout:
timeout = cushion
idle_primed = IdlePrimedTypes.WAITING_FOR_IDLE
# We use 'elif' here because if there are tasks in
# wait_all_tasks_blocked, then those tasks will wake up without
# jumping the clock, so we don't need to autojump.
elif runner.clock_autojump_threshold < timeout:
timeout = runner.clock_autojump_threshold
idle_primed = IdlePrimedTypes.AUTOJUMP_CLOCK
if "before_io_wait" in runner.instruments:
runner.instruments.call("before_io_wait", timeout)
# Driver will call io_manager.get_events(timeout) and pass it back
# in through the yield
events = yield timeout
runner.io_manager.process_events(events)
if "after_io_wait" in runner.instruments:
runner.instruments.call("after_io_wait", timeout)
# Process cancellations due to deadline expiry
now = runner.clock.current_time()
if runner.deadlines.expire(now):
idle_primed = None
# idle_primed != None means: if the IO wait hit the timeout, and
# still nothing is happening, then we should start waking up
# wait_all_tasks_blocked tasks or autojump the clock. But there
# are some subtleties in defining "nothing is happening".
#
# 'not runner.runq' means that no tasks are currently runnable.
# 'not events' means that the last IO wait call hit its full
# timeout. These are very similar, and if idle_primed != None and
# we're running in regular mode then they always go together. But,
# in *guest* mode, they can happen independently, even when
# idle_primed=True:
#
# - runner.runq=empty and events=True: the host loop adjusted a
# deadline and that forced an IO wakeup before the timeout expired,
# even though no actual tasks were scheduled.
#
# - runner.runq=nonempty and events=False: the IO wait hit its
# timeout, but then some code in the host thread rescheduled a task
# before we got here.
#
# So we need to check both.
if idle_primed is not None and not runner.runq and not events:
if idle_primed is IdlePrimedTypes.WAITING_FOR_IDLE:
while runner.waiting_for_idle:
key, task = runner.waiting_for_idle.peekitem(0)
if key[0] == cushion:
del runner.waiting_for_idle[key]
runner.reschedule(task)
else:
break
else:
assert idle_primed is IdlePrimedTypes.AUTOJUMP_CLOCK
runner.clock._autojump()
# Process all runnable tasks, but only the ones that are already
# runnable now. Anything that becomes runnable during this cycle
# needs to wait until the next pass. This avoids various
# starvation issues by ensuring that there's never an unbounded
# delay between successive checks for I/O.
#
# Also, we randomize the order of each batch to avoid assumptions
# about scheduling order sneaking in. In the long run, I suspect
# we'll either (a) use strict FIFO ordering and document that for
# predictability/determinism, or (b) implement a more
# sophisticated scheduler (e.g. some variant of fair queueing),
# for better behavior under load. For now, this is the worst of
# both worlds - but it keeps our options open. (If we do decide to
# go all in on deterministic scheduling, then there are other
# things that will probably need to change too, like the deadlines
# tie-breaker and the non-deterministic ordering of
# task._notify_queues.)
batch = list(runner.runq)
runner.runq.clear()
if _ALLOW_DETERMINISTIC_SCHEDULING:
# We're running under Hypothesis, and pytest-trio has patched
# this in to make the scheduler deterministic and avoid flaky
# tests. It's not worth the (small) performance cost in normal
# operation, since we'll shuffle the list and _r is only
# seeded for tests.
batch.sort(key=lambda t: t._counter)
_r.shuffle(batch)
else:
# 50% chance of reversing the batch, this way each task
# can appear before/after any other task.
if _r.random() < 0.5:
batch.reverse()
while batch:
task = batch.pop()
GLOBAL_RUN_CONTEXT.task = task
if "before_task_step" in runner.instruments:
runner.instruments.call("before_task_step", task)
next_send_fn = task._next_send_fn
next_send = task._next_send
task._next_send_fn = task._next_send = None
final_outcome = None
try:
# We used to unwrap the Outcome object here and send/throw
# its contents in directly, but it turns out that .throw()
# is buggy, at least on CPython 3.6:
# https://bugs.python.org/issue29587
# https://bugs.python.org/issue29590
# So now we send in the Outcome object and unwrap it on the
# other side.
msg = task.context.run(next_send_fn, next_send)
except StopIteration as stop_iteration:
final_outcome = Value(stop_iteration.value)
except BaseException as task_exc:
# Store for later, removing uninteresting top frames: 1
# frame we always remove, because it's this function
# catching it, and then in addition we remove however many
# more Context.run adds.
tb = task_exc.__traceback__.tb_next
for _ in range(CONTEXT_RUN_TB_FRAMES):
tb = tb.tb_next
final_outcome = Error(task_exc.with_traceback(tb))
# Remove local refs so that e.g. cancelled coroutine locals
# are not kept alive by this frame until another exception
# comes along.
del tb
if final_outcome is not None:
# We can't call this directly inside the except: blocks
# above, because then the exceptions end up attaching
# themselves to other exceptions as __context__ in
# unwanted ways.
runner.task_exited(task, final_outcome)
# final_outcome may contain a traceback ref. It's not as
# crucial compared to the above, but this will allow more
# prompt release of resources in coroutine locals.
final_outcome = None
else:
task._schedule_points += 1
if msg is CancelShieldedCheckpoint:
runner.reschedule(task)
elif type(msg) is WaitTaskRescheduled:
task._cancel_points += 1
task._abort_func = msg.abort_func
# KI is "outside" all cancel scopes, so check for it
# before checking for regular cancellation:
if runner.ki_pending and task is runner.main_task:
task._attempt_delivery_of_pending_ki()
task._attempt_delivery_of_any_pending_cancel()
elif type(msg) is PermanentlyDetachCoroutineObject:
# Pretend the task just exited with the given outcome
runner.task_exited(task, msg.final_outcome)
else:
exc = TypeError(
"trio.run received unrecognized yield message {!r}. "
"Are you trying to use a library written for some "
"other framework like asyncio? That won't work "
"without some kind of compatibility shim.".format(msg)
)
# The foreign library probably doesn't adhere to our
# protocol of unwrapping whatever outcome gets sent in.
# Instead, we'll arrange to throw `exc` in directly,
# which works for at least asyncio and curio.
runner.reschedule(task, exc)
task._next_send_fn = task.coro.throw
# prevent long-lived reference
# TODO: develop test for this deletion
del msg
if "after_task_step" in runner.instruments:
runner.instruments.call("after_task_step", task)
del GLOBAL_RUN_CONTEXT.task
# prevent long-lived references
# TODO: develop test for these deletions
del task, next_send, next_send_fn
except GeneratorExit:
# The run-loop generator has been garbage collected without finishing
warnings.warn(
RuntimeWarning(
"Trio guest run got abandoned without properly finishing... "
"weird stuff might happen"
)
)
except TrioInternalError:
raise
except BaseException as exc:
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
____________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] ____________________________________
runner = Runner(clock=SystemClock(offset=197773.9476832302), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb8373dc0>, pr...), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='FORWARD_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
> runner.entry_queue.wakeup.wakeup_on_signals()
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2034:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <trio._core._wakeup_socketpair.WakeupSocketpair object at 0x7f9cb8251ac0>
def wakeup_on_signals(self):
assert self.old_wakeup_fd is None
if not is_main_thread():
return
fd = self.write_sock.fileno()
if HAVE_WARN_ON_FULL_BUFFER:
self.old_wakeup_fd = signal.set_wakeup_fd(fd, warn_on_full_buffer=False)
else:
self.old_wakeup_fd = signal.set_wakeup_fd(fd)
if self.old_wakeup_fd != -1:
> warnings.warn(
RuntimeWarning(
"It looks like Trio's signal handling code might have "
"collided with another library you're using. If you're "
"running Trio in guest mode, then this might mean you "
"should set host_uses_signal_set_wakeup_fd=True. "
"Otherwise, file a bug on Trio and we'll help you figure "
"out what's going on."
)
)
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
runner = Runner(clock=SystemClock(offset=197773.9476832302), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb8373dc0>, pr...), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='FORWARD_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
runner.entry_queue.wakeup.wakeup_on_signals()
if "before_run" in runner.instruments:
runner.instruments.call("before_run")
runner.clock.start_clock()
runner.init_task = runner.spawn_impl(
runner.init, (async_fn, args), None, "<init>", system_task=True
)
# You know how people talk about "event loops"? This 'while' loop right
# here is our event loop:
while runner.tasks:
if runner.runq:
timeout = 0
else:
deadline = runner.deadlines.next_deadline()
timeout = runner.clock.deadline_to_sleep_time(deadline)
timeout = min(max(0, timeout), _MAX_TIMEOUT)
idle_primed = None
if runner.waiting_for_idle:
cushion, _ = runner.waiting_for_idle.keys()[0]
if cushion < timeout:
timeout = cushion
idle_primed = IdlePrimedTypes.WAITING_FOR_IDLE
# We use 'elif' here because if there are tasks in
# wait_all_tasks_blocked, then those tasks will wake up without
# jumping the clock, so we don't need to autojump.
elif runner.clock_autojump_threshold < timeout:
timeout = runner.clock_autojump_threshold
idle_primed = IdlePrimedTypes.AUTOJUMP_CLOCK
if "before_io_wait" in runner.instruments:
runner.instruments.call("before_io_wait", timeout)
# Driver will call io_manager.get_events(timeout) and pass it back
# in through the yield
events = yield timeout
runner.io_manager.process_events(events)
if "after_io_wait" in runner.instruments:
runner.instruments.call("after_io_wait", timeout)
# Process cancellations due to deadline expiry
now = runner.clock.current_time()
if runner.deadlines.expire(now):
idle_primed = None
# idle_primed != None means: if the IO wait hit the timeout, and
# still nothing is happening, then we should start waking up
# wait_all_tasks_blocked tasks or autojump the clock. But there
# are some subtleties in defining "nothing is happening".
#
# 'not runner.runq' means that no tasks are currently runnable.
# 'not events' means that the last IO wait call hit its full
# timeout. These are very similar, and if idle_primed != None and
# we're running in regular mode then they always go together. But,
# in *guest* mode, they can happen independently, even when
# idle_primed=True:
#
# - runner.runq=empty and events=True: the host loop adjusted a
# deadline and that forced an IO wakeup before the timeout expired,
# even though no actual tasks were scheduled.
#
# - runner.runq=nonempty and events=False: the IO wait hit its
# timeout, but then some code in the host thread rescheduled a task
# before we got here.
#
# So we need to check both.
if idle_primed is not None and not runner.runq and not events:
if idle_primed is IdlePrimedTypes.WAITING_FOR_IDLE:
while runner.waiting_for_idle:
key, task = runner.waiting_for_idle.peekitem(0)
if key[0] == cushion:
del runner.waiting_for_idle[key]
runner.reschedule(task)
else:
break
else:
assert idle_primed is IdlePrimedTypes.AUTOJUMP_CLOCK
runner.clock._autojump()
# Process all runnable tasks, but only the ones that are already
# runnable now. Anything that becomes runnable during this cycle
# needs to wait until the next pass. This avoids various
# starvation issues by ensuring that there's never an unbounded
# delay between successive checks for I/O.
#
# Also, we randomize the order of each batch to avoid assumptions
# about scheduling order sneaking in. In the long run, I suspect
# we'll either (a) use strict FIFO ordering and document that for
# predictability/determinism, or (b) implement a more
# sophisticated scheduler (e.g. some variant of fair queueing),
# for better behavior under load. For now, this is the worst of
# both worlds - but it keeps our options open. (If we do decide to
# go all in on deterministic scheduling, then there are other
# things that will probably need to change too, like the deadlines
# tie-breaker and the non-deterministic ordering of
# task._notify_queues.)
batch = list(runner.runq)
runner.runq.clear()
if _ALLOW_DETERMINISTIC_SCHEDULING:
# We're running under Hypothesis, and pytest-trio has patched
# this in to make the scheduler deterministic and avoid flaky
# tests. It's not worth the (small) performance cost in normal
# operation, since we'll shuffle the list and _r is only
# seeded for tests.
batch.sort(key=lambda t: t._counter)
_r.shuffle(batch)
else:
# 50% chance of reversing the batch, this way each task
# can appear before/after any other task.
if _r.random() < 0.5:
batch.reverse()
while batch:
task = batch.pop()
GLOBAL_RUN_CONTEXT.task = task
if "before_task_step" in runner.instruments:
runner.instruments.call("before_task_step", task)
next_send_fn = task._next_send_fn
next_send = task._next_send
task._next_send_fn = task._next_send = None
final_outcome = None
try:
# We used to unwrap the Outcome object here and send/throw
# its contents in directly, but it turns out that .throw()
# is buggy, at least on CPython 3.6:
# https://bugs.python.org/issue29587
# https://bugs.python.org/issue29590
# So now we send in the Outcome object and unwrap it on the
# other side.
msg = task.context.run(next_send_fn, next_send)
except StopIteration as stop_iteration:
final_outcome = Value(stop_iteration.value)
except BaseException as task_exc:
# Store for later, removing uninteresting top frames: 1
# frame we always remove, because it's this function
# catching it, and then in addition we remove however many
# more Context.run adds.
tb = task_exc.__traceback__.tb_next
for _ in range(CONTEXT_RUN_TB_FRAMES):
tb = tb.tb_next
final_outcome = Error(task_exc.with_traceback(tb))
# Remove local refs so that e.g. cancelled coroutine locals
# are not kept alive by this frame until another exception
# comes along.
del tb
if final_outcome is not None:
# We can't call this directly inside the except: blocks
# above, because then the exceptions end up attaching
# themselves to other exceptions as __context__ in
# unwanted ways.
runner.task_exited(task, final_outcome)
# final_outcome may contain a traceback ref. It's not as
# crucial compared to the above, but this will allow more
# prompt release of resources in coroutine locals.
final_outcome = None
else:
task._schedule_points += 1
if msg is CancelShieldedCheckpoint:
runner.reschedule(task)
elif type(msg) is WaitTaskRescheduled:
task._cancel_points += 1
task._abort_func = msg.abort_func
# KI is "outside" all cancel scopes, so check for it
# before checking for regular cancellation:
if runner.ki_pending and task is runner.main_task:
task._attempt_delivery_of_pending_ki()
task._attempt_delivery_of_any_pending_cancel()
elif type(msg) is PermanentlyDetachCoroutineObject:
# Pretend the task just exited with the given outcome
runner.task_exited(task, msg.final_outcome)
else:
exc = TypeError(
"trio.run received unrecognized yield message {!r}. "
"Are you trying to use a library written for some "
"other framework like asyncio? That won't work "
"without some kind of compatibility shim.".format(msg)
)
# The foreign library probably doesn't adhere to our
# protocol of unwrapping whatever outcome gets sent in.
# Instead, we'll arrange to throw `exc` in directly,
# which works for at least asyncio and curio.
runner.reschedule(task, exc)
task._next_send_fn = task.coro.throw
# prevent long-lived reference
# TODO: develop test for this deletion
del msg
if "after_task_step" in runner.instruments:
runner.instruments.call("after_task_step", task)
del GLOBAL_RUN_CONTEXT.task
# prevent long-lived references
# TODO: develop test for these deletions
del task, next_send, next_send_fn
except GeneratorExit:
# The run-loop generator has been garbage collected without finishing
warnings.warn(
RuntimeWarning(
"Trio guest run got abandoned without properly finishing... "
"weird stuff might happen"
)
)
except TrioInternalError:
raise
except BaseException as exc:
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
____________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-TUNNEL_ONLY] _____________________________________
runner = Runner(clock=SystemClock(offset=63201.61027561586), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb83559d0>, pr...'), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='TUNNEL_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
> runner.entry_queue.wakeup.wakeup_on_signals()
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2034:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <trio._core._wakeup_socketpair.WakeupSocketpair object at 0x7f9cb8c239d0>
def wakeup_on_signals(self):
assert self.old_wakeup_fd is None
if not is_main_thread():
return
fd = self.write_sock.fileno()
if HAVE_WARN_ON_FULL_BUFFER:
self.old_wakeup_fd = signal.set_wakeup_fd(fd, warn_on_full_buffer=False)
else:
self.old_wakeup_fd = signal.set_wakeup_fd(fd)
if self.old_wakeup_fd != -1:
> warnings.warn(
RuntimeWarning(
"It looks like Trio's signal handling code might have "
"collided with another library you're using. If you're "
"running Trio in guest mode, then this might mean you "
"should set host_uses_signal_set_wakeup_fd=True. "
"Otherwise, file a bug on Trio and we'll help you figure "
"out what's going on."
)
)
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
runner = Runner(clock=SystemClock(offset=63201.61027561586), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb83559d0>, pr...'), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'http', port=80, proxy_mode='TUNNEL_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
runner.entry_queue.wakeup.wakeup_on_signals()
if "before_run" in runner.instruments:
runner.instruments.call("before_run")
runner.clock.start_clock()
runner.init_task = runner.spawn_impl(
runner.init, (async_fn, args), None, "<init>", system_task=True
)
# You know how people talk about "event loops"? This 'while' loop right
# here is our event loop:
while runner.tasks:
if runner.runq:
timeout = 0
else:
deadline = runner.deadlines.next_deadline()
timeout = runner.clock.deadline_to_sleep_time(deadline)
timeout = min(max(0, timeout), _MAX_TIMEOUT)
idle_primed = None
if runner.waiting_for_idle:
cushion, _ = runner.waiting_for_idle.keys()[0]
if cushion < timeout:
timeout = cushion
idle_primed = IdlePrimedTypes.WAITING_FOR_IDLE
# We use 'elif' here because if there are tasks in
# wait_all_tasks_blocked, then those tasks will wake up without
# jumping the clock, so we don't need to autojump.
elif runner.clock_autojump_threshold < timeout:
timeout = runner.clock_autojump_threshold
idle_primed = IdlePrimedTypes.AUTOJUMP_CLOCK
if "before_io_wait" in runner.instruments:
runner.instruments.call("before_io_wait", timeout)
# Driver will call io_manager.get_events(timeout) and pass it back
# in through the yield
events = yield timeout
runner.io_manager.process_events(events)
if "after_io_wait" in runner.instruments:
runner.instruments.call("after_io_wait", timeout)
# Process cancellations due to deadline expiry
now = runner.clock.current_time()
if runner.deadlines.expire(now):
idle_primed = None
# idle_primed != None means: if the IO wait hit the timeout, and
# still nothing is happening, then we should start waking up
# wait_all_tasks_blocked tasks or autojump the clock. But there
# are some subtleties in defining "nothing is happening".
#
# 'not runner.runq' means that no tasks are currently runnable.
# 'not events' means that the last IO wait call hit its full
# timeout. These are very similar, and if idle_primed != None and
# we're running in regular mode then they always go together. But,
    [... remainder of unrolled_run identical to the traceback above ...]
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
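For what it's worth, the RuntimeWarning in the traceback above is Trio detecting that `signal.set_wakeup_fd()` returned something other than -1, i.e. some other code in the process had already registered its own wakeup fd before `wakeup_on_signals()` ran. That would also explain why the failures are order-dependent under pytest-randomly: whether the colliding fd is still installed depends on which plugin/test ran first. A minimal stdlib-only sketch of the mechanism (this is not httpcore or trio code, just an illustration of `signal.set_wakeup_fd` semantics):

```python
import signal
import socket

# signal.set_wakeup_fd() returns the fd that was previously registered,
# or -1 if none was. Trio's wakeup_on_signals() treats any non -1 return
# value as a collision with another library.
a, b = socket.socketpair()
a.setblocking(False)  # set_wakeup_fd requires a non-blocking fd
prev = signal.set_wakeup_fd(a.fileno())  # -1 in a clean process

# A second framework installing its own wakeup fd now "collides":
c, d = socket.socketpair()
c.setblocking(False)
old = signal.set_wakeup_fd(c.fileno())
collided = old != -1  # True: a.fileno() was already registered

# Restore the original state and clean up.
signal.set_wakeup_fd(prev)
for s in (a, b, c, d):
    s.close()
```

In this run the colliding fd was presumably installed by one of the other async-related plugins loaded in the session (the plugin list includes pytest-tornado, pytest-twisted, pytest-aiohttp, ...), which is the kind of cross-plugin interaction that only shows up with certain collection orders.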
_____________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-DEFAULT] ______________________________________
runner = Runner(clock=SystemClock(offset=195401.2470424917), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb82764c0>, pr...'/'), server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'https', port=443, proxy_mode='DEFAULT')
args = (), host_uses_signal_set_wakeup_fd = False
    [... traceback identical to the previous failure ...]
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
    [... traceback identical to the previous failure ...]
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
___________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] ___________________________________
runner = Runner(clock=SystemClock(offset=150828.6110303711), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<select..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb808a8b0>, pr... server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'https', port=443, proxy_mode='FORWARD_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
    [... traceback identical to the previous failure ...]
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
    [... traceback identical to the previous failure ...]
except BaseException as task_exc:
# Store for later, removing uninteresting top frames: 1
# frame we always remove, because it's this function
# catching it, and then in addition we remove however many
# more Context.run adds.
tb = task_exc.__traceback__.tb_next
for _ in range(CONTEXT_RUN_TB_FRAMES):
tb = tb.tb_next
final_outcome = Error(task_exc.with_traceback(tb))
# Remove local refs so that e.g. cancelled coroutine locals
# are not kept alive by this frame until another exception
# comes along.
del tb
if final_outcome is not None:
# We can't call this directly inside the except: blocks
# above, because then the exceptions end up attaching
# themselves to other exceptions as __context__ in
# unwanted ways.
runner.task_exited(task, final_outcome)
# final_outcome may contain a traceback ref. It's not as
# crucial compared to the above, but this will allow more
# prompt release of resources in coroutine locals.
final_outcome = None
else:
task._schedule_points += 1
if msg is CancelShieldedCheckpoint:
runner.reschedule(task)
elif type(msg) is WaitTaskRescheduled:
task._cancel_points += 1
task._abort_func = msg.abort_func
# KI is "outside" all cancel scopes, so check for it
# before checking for regular cancellation:
if runner.ki_pending and task is runner.main_task:
task._attempt_delivery_of_pending_ki()
task._attempt_delivery_of_any_pending_cancel()
elif type(msg) is PermanentlyDetachCoroutineObject:
# Pretend the task just exited with the given outcome
runner.task_exited(task, msg.final_outcome)
else:
exc = TypeError(
"trio.run received unrecognized yield message {!r}. "
"Are you trying to use a library written for some "
"other framework like asyncio? That won't work "
"without some kind of compatibility shim.".format(msg)
)
# The foreign library probably doesn't adhere to our
# protocol of unwrapping whatever outcome gets sent in.
# Instead, we'll arrange to throw `exc` in directly,
# which works for at least asyncio and curio.
runner.reschedule(task, exc)
task._next_send_fn = task.coro.throw
# prevent long-lived reference
# TODO: develop test for this deletion
del msg
if "after_task_step" in runner.instruments:
runner.instruments.call("after_task_step", task)
del GLOBAL_RUN_CONTEXT.task
# prevent long-lived references
# TODO: develop test for these deletions
del task, next_send, next_send_fn
except GeneratorExit:
# The run-loop generator has been garbage collected without finishing
warnings.warn(
RuntimeWarning(
"Trio guest run got abandoned without properly finishing... "
"weird stuff might happen"
)
)
except TrioInternalError:
raise
except BaseException as exc:
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
___________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-TUNNEL_ONLY] ____________________________________
runner = Runner(clock=SystemClock(offset=111258.94132721354), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<selec..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb81f64c0>, pr..., server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'https', port=443, proxy_mode='TUNNEL_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
> runner.entry_queue.wakeup.wakeup_on_signals()
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2034:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <trio._core._wakeup_socketpair.WakeupSocketpair object at 0x7f9cb8361550>
def wakeup_on_signals(self):
assert self.old_wakeup_fd is None
if not is_main_thread():
return
fd = self.write_sock.fileno()
if HAVE_WARN_ON_FULL_BUFFER:
self.old_wakeup_fd = signal.set_wakeup_fd(fd, warn_on_full_buffer=False)
else:
self.old_wakeup_fd = signal.set_wakeup_fd(fd)
if self.old_wakeup_fd != -1:
> warnings.warn(
RuntimeWarning(
"It looks like Trio's signal handling code might have "
"collided with another library you're using. If you're "
"running Trio in guest mode, then this might mean you "
"should set host_uses_signal_set_wakeup_fd=True. "
"Otherwise, file a bug on Trio and we'll help you figure "
"out what's going on."
)
)
E RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning
The above exception was the direct cause of the following exception:
runner = Runner(clock=SystemClock(offset=111258.94132721354), instruments={'_all': {}}, io_manager=EpollIOManager(_epoll=<selec..._autojump_threshold=inf, is_guest=False, guest_tick_scheduled=False, ki_pending=False, waiting_for_idle=SortedDict({}))
async_fn = functools.partial(<function _trio_test_runner_factory.<locals>._bootstrap_fixtures_and_run_test at 0x7f9cb81f64c0>, pr..., server=<tests.utils.HypercornServer object at 0x7f9cb91689d0>, protocol=b'https', port=443, proxy_mode='TUNNEL_ONLY')
args = (), host_uses_signal_set_wakeup_fd = False
def unrolled_run(runner, async_fn, args, host_uses_signal_set_wakeup_fd=False):
locals()[LOCALS_KEY_KI_PROTECTION_ENABLED] = True
__tracebackhide__ = True
try:
if not host_uses_signal_set_wakeup_fd:
runner.entry_queue.wakeup.wakeup_on_signals()
if "before_run" in runner.instruments:
runner.instruments.call("before_run")
runner.clock.start_clock()
runner.init_task = runner.spawn_impl(
runner.init, (async_fn, args), None, "<init>", system_task=True
)
# You know how people talk about "event loops"? This 'while' loop right
# here is our event loop:
while runner.tasks:
if runner.runq:
timeout = 0
else:
deadline = runner.deadlines.next_deadline()
timeout = runner.clock.deadline_to_sleep_time(deadline)
timeout = min(max(0, timeout), _MAX_TIMEOUT)
idle_primed = None
if runner.waiting_for_idle:
cushion, _ = runner.waiting_for_idle.keys()[0]
if cushion < timeout:
timeout = cushion
idle_primed = IdlePrimedTypes.WAITING_FOR_IDLE
# We use 'elif' here because if there are tasks in
# wait_all_tasks_blocked, then those tasks will wake up without
# jumping the clock, so we don't need to autojump.
elif runner.clock_autojump_threshold < timeout:
timeout = runner.clock_autojump_threshold
idle_primed = IdlePrimedTypes.AUTOJUMP_CLOCK
if "before_io_wait" in runner.instruments:
runner.instruments.call("before_io_wait", timeout)
# Driver will call io_manager.get_events(timeout) and pass it back
# in through the yield
events = yield timeout
runner.io_manager.process_events(events)
if "after_io_wait" in runner.instruments:
runner.instruments.call("after_io_wait", timeout)
# Process cancellations due to deadline expiry
now = runner.clock.current_time()
if runner.deadlines.expire(now):
idle_primed = None
# idle_primed != None means: if the IO wait hit the timeout, and
# still nothing is happening, then we should start waking up
# wait_all_tasks_blocked tasks or autojump the clock. But there
# are some subtleties in defining "nothing is happening".
#
# 'not runner.runq' means that no tasks are currently runnable.
# 'not events' means that the last IO wait call hit its full
# timeout. These are very similar, and if idle_primed != None and
# we're running in regular mode then they always go together. But,
# in *guest* mode, they can happen independently, even when
# idle_primed=True:
#
# - runner.runq=empty and events=True: the host loop adjusted a
# deadline and that forced an IO wakeup before the timeout expired,
# even though no actual tasks were scheduled.
#
# - runner.runq=nonempty and events=False: the IO wait hit its
# timeout, but then some code in the host thread rescheduled a task
# before we got here.
#
# So we need to check both.
if idle_primed is not None and not runner.runq and not events:
if idle_primed is IdlePrimedTypes.WAITING_FOR_IDLE:
while runner.waiting_for_idle:
key, task = runner.waiting_for_idle.peekitem(0)
if key[0] == cushion:
del runner.waiting_for_idle[key]
runner.reschedule(task)
else:
break
else:
assert idle_primed is IdlePrimedTypes.AUTOJUMP_CLOCK
runner.clock._autojump()
# Process all runnable tasks, but only the ones that are already
# runnable now. Anything that becomes runnable during this cycle
# needs to wait until the next pass. This avoids various
# starvation issues by ensuring that there's never an unbounded
# delay between successive checks for I/O.
#
# Also, we randomize the order of each batch to avoid assumptions
# about scheduling order sneaking in. In the long run, I suspect
# we'll either (a) use strict FIFO ordering and document that for
# predictability/determinism, or (b) implement a more
# sophisticated scheduler (e.g. some variant of fair queueing),
# for better behavior under load. For now, this is the worst of
# both worlds - but it keeps our options open. (If we do decide to
# go all in on deterministic scheduling, then there are other
# things that will probably need to change too, like the deadlines
# tie-breaker and the non-deterministic ordering of
# task._notify_queues.)
batch = list(runner.runq)
runner.runq.clear()
if _ALLOW_DETERMINISTIC_SCHEDULING:
# We're running under Hypothesis, and pytest-trio has patched
# this in to make the scheduler deterministic and avoid flaky
# tests. It's not worth the (small) performance cost in normal
# operation, since we'll shuffle the list and _r is only
# seeded for tests.
batch.sort(key=lambda t: t._counter)
_r.shuffle(batch)
else:
# 50% chance of reversing the batch, this way each task
# can appear before/after any other task.
if _r.random() < 0.5:
batch.reverse()
while batch:
task = batch.pop()
GLOBAL_RUN_CONTEXT.task = task
if "before_task_step" in runner.instruments:
runner.instruments.call("before_task_step", task)
next_send_fn = task._next_send_fn
next_send = task._next_send
task._next_send_fn = task._next_send = None
final_outcome = None
try:
# We used to unwrap the Outcome object here and send/throw
# its contents in directly, but it turns out that .throw()
# is buggy, at least on CPython 3.6:
# https://bugs.python.org/issue29587
# https://bugs.python.org/issue29590
# So now we send in the Outcome object and unwrap it on the
# other side.
msg = task.context.run(next_send_fn, next_send)
except StopIteration as stop_iteration:
final_outcome = Value(stop_iteration.value)
except BaseException as task_exc:
# Store for later, removing uninteresting top frames: 1
# frame we always remove, because it's this function
# catching it, and then in addition we remove however many
# more Context.run adds.
tb = task_exc.__traceback__.tb_next
for _ in range(CONTEXT_RUN_TB_FRAMES):
tb = tb.tb_next
final_outcome = Error(task_exc.with_traceback(tb))
# Remove local refs so that e.g. cancelled coroutine locals
# are not kept alive by this frame until another exception
# comes along.
del tb
if final_outcome is not None:
# We can't call this directly inside the except: blocks
# above, because then the exceptions end up attaching
# themselves to other exceptions as __context__ in
# unwanted ways.
runner.task_exited(task, final_outcome)
# final_outcome may contain a traceback ref. It's not as
# crucial compared to the above, but this will allow more
# prompt release of resources in coroutine locals.
final_outcome = None
else:
task._schedule_points += 1
if msg is CancelShieldedCheckpoint:
runner.reschedule(task)
elif type(msg) is WaitTaskRescheduled:
task._cancel_points += 1
task._abort_func = msg.abort_func
# KI is "outside" all cancel scopes, so check for it
# before checking for regular cancellation:
if runner.ki_pending and task is runner.main_task:
task._attempt_delivery_of_pending_ki()
task._attempt_delivery_of_any_pending_cancel()
elif type(msg) is PermanentlyDetachCoroutineObject:
# Pretend the task just exited with the given outcome
runner.task_exited(task, msg.final_outcome)
else:
exc = TypeError(
"trio.run received unrecognized yield message {!r}. "
"Are you trying to use a library written for some "
"other framework like asyncio? That won't work "
"without some kind of compatibility shim.".format(msg)
)
# The foreign library probably doesn't adhere to our
# protocol of unwrapping whatever outcome gets sent in.
# Instead, we'll arrange to throw `exc` in directly,
# which works for at least asyncio and curio.
runner.reschedule(task, exc)
task._next_send_fn = task.coro.throw
# prevent long-lived reference
# TODO: develop test for this deletion
del msg
if "after_task_step" in runner.instruments:
runner.instruments.call("after_task_step", task)
del GLOBAL_RUN_CONTEXT.task
# prevent long-lived references
# TODO: develop test for these deletions
del task, next_send, next_send_fn
except GeneratorExit:
# The run-loop generator has been garbage collected without finishing
warnings.warn(
RuntimeWarning(
"Trio guest run got abandoned without properly finishing... "
"weird stuff might happen"
)
)
except TrioInternalError:
raise
except BaseException as exc:
> raise TrioInternalError("internal error in Trio - please file a bug!") from exc
E trio.TrioInternalError: internal error in Trio - please file a bug!
/usr/lib/python3.8/site-packages/trio/_core/_run.py:2244: TrioInternalError
________________________________________________________ test_broken_socket_detection_many_open_files[asyncio-auto] ________________________________________________________
pyfuncitem = <Function test_broken_socket_detection_many_open_files[asyncio-auto]>
> ???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:126:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/contextlib.py:113: in __enter__
???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:42: in get_runner
???
/usr/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:1849: in __init__
???
/usr/lib64/python3.8/asyncio/events.py:758: in new_event_loop
???
/usr/lib64/python3.8/asyncio/events.py:656: in new_event_loop
???
/usr/lib64/python3.8/asyncio/unix_events.py:54: in __init__
???
/usr/lib64/python3.8/asyncio/selector_events.py:61: in __init__
???
/usr/lib64/python3.8/asyncio/selector_events.py:108: in _make_self_pipe
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
family = <AddressFamily.AF_UNIX: 1>, type = <SocketKind.SOCK_STREAM: 1>, proto = 0
> ???
E OSError: [Errno 24] Too many open files
/usr/lib64/python3.8/socket.py:571: OSError
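The failure above (and the long run of identical ones that follows) is file-descriptor exhaustion: `[Errno 24]` is `EMFILE`, hit while each parametrized test creates a fresh event loop. One hedged workaround for a heavily parallel session like this (xdist workers, many loops) is to raise the soft `RLIMIT_NOFILE` up to the hard limit, which an unprivileged process is allowed to do:

```python
import resource

# Raise the soft open-files limit to the hard limit. If the hard limit
# is unlimited we leave the soft limit alone rather than guess a value.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = soft if hard == resource.RLIM_INFINITY else hard
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
new_soft, new_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

Equivalently, `ulimit -n <hard-limit>` in the shell before invoking `pytest`. This treats the symptom; if the tests genuinely leak sockets, the count will just climb to the new limit more slowly.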
_________________________________________________________ test_cannot_connect_tcp[asyncio-auto-connection-refused] _________________________________________________________
pyfuncitem = <Function test_cannot_connect_tcp[asyncio-auto-connection-refused]>
> ???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:126:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/contextlib.py:113: in __enter__
???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:42: in get_runner
???
/usr/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:1849: in __init__
???
/usr/lib64/python3.8/asyncio/events.py:758: in new_event_loop
???
/usr/lib64/python3.8/asyncio/events.py:656: in new_event_loop
???
/usr/lib64/python3.8/asyncio/unix_events.py:54: in __init__
???
/usr/lib64/python3.8/asyncio/selector_events.py:58: in __init__
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f9cb81bbe20>
> ???
E OSError: [Errno 24] Too many open files
/usr/lib64/python3.8/selectors.py:349: OSError
(The next three failures are identical to the traceback above, differing only in the test id: OSError: [Errno 24] Too many open files at /usr/lib64/python3.8/selectors.py:349 while creating the asyncio event loop.)
  test_cannot_connect_tcp[asyncio-auto-dns-resolution-failed]
  test_cannot_connect_tcp[asyncio-anyio-connection-refused]
  test_cannot_connect_tcp[asyncio-anyio-dns-resolution-failed]
__________________________________________________________ test_cannot_connect_tcp[trio-auto-connection-refused] ___________________________________________________________
pyfuncitem = <Function test_cannot_connect_tcp[trio-auto-connection-refused]>
> ???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.8/site-packages/anyio/_backends/_trio.py:777: in call
???
/usr/lib/python3.8/site-packages/trio/_core/_run.py:1995: in start_guest_run
runner = setup_runner(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
clock = SystemClock(offset=140163.1855713802), instruments = {'_all': {}}, restrict_keyboard_interrupt_to_checkpoints = False
def setup_runner(clock, instruments, restrict_keyboard_interrupt_to_checkpoints):
"""Create a Runner object and install it as the GLOBAL_RUN_CONTEXT."""
# It wouldn't be *hard* to support nested calls to run(), but I can't
# think of a single good reason for it, so let's be conservative for
# now:
if hasattr(GLOBAL_RUN_CONTEXT, "runner"):
raise RuntimeError("Attempted to call run() from inside a run()")
if clock is None:
clock = SystemClock()
instruments = Instruments(instruments)
> io_manager = TheIOManager()
E OSError: [Errno 24] Too many open files
/usr/lib/python3.8/site-packages/trio/_core/_run.py:1816: OSError
(The next three failures are identical to the trio traceback above, differing only in the test id: OSError: [Errno 24] Too many open files at /usr/lib/python3.8/site-packages/trio/_core/_run.py:1816 in TheIOManager().)
  test_cannot_connect_tcp[trio-auto-dns-resolution-failed]
  test_cannot_connect_tcp[trio-anyio-connection-refused]
  test_cannot_connect_tcp[trio-anyio-dns-resolution-failed]
__________________________________________________________________ test_cannot_connect_uds[asyncio-auto] ___________________________________________________________________
pyfuncitem = <Function test_cannot_connect_uds[asyncio-auto]>
> ???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:126:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/contextlib.py:113: in __enter__
???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:42: in get_runner
???
/usr/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:1849: in __init__
???
/usr/lib64/python3.8/asyncio/events.py:758: in new_event_loop
???
/usr/lib64/python3.8/asyncio/events.py:656: in new_event_loop
???
/usr/lib64/python3.8/asyncio/unix_events.py:54: in __init__
???
/usr/lib64/python3.8/asyncio/selector_events.py:58: in __init__
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f9cb8cb6f70>
> ???
E OSError: [Errno 24] Too many open files
/usr/lib64/python3.8/selectors.py:349: OSError
(Same "OSError: [Errno 24] Too many open files" tracebacks, elided:
  test_cannot_connect_uds[asyncio-anyio]  via /usr/lib64/python3.8/selectors.py:349
  test_cannot_connect_uds[trio-auto]      via /usr/lib/python3.8/site-packages/trio/_core/_run.py:1816)
___________________________________________________________________ test_cannot_connect_uds[trio-anyio] ____________________________________________________________________
pyfuncitem = <Function test_cannot_connect_uds[trio-anyio]>
> ???
/usr/lib/python3.8/site-packages/anyio/pytest_plugin.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.8/site-packages/anyio/_backends/_trio.py:777: in call
???
/usr/lib/python3.8/site-packages/trio/_core/_run.py:1995: in start_guest_run
runner = setup_runner(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
clock = SystemClock(offset=194116.53316786623), instruments = {'_all': {}}, restrict_keyboard_interrupt_to_checkpoints = False
def setup_runner(clock, instruments, restrict_keyboard_interrupt_to_checkpoints):
"""Create a Runner object and install it as the GLOBAL_RUN_CONTEXT."""
# It wouldn't be *hard* to support nested calls to run(), but I can't
# think of a single good reason for it, so let's be conservative for
# now:
if hasattr(GLOBAL_RUN_CONTEXT, "runner"):
raise RuntimeError("Attempted to call run() from inside a run()")
if clock is None:
clock = SystemClock()
instruments = Instruments(instruments)
> io_manager = TheIOManager()
E OSError: [Errno 24] Too many open files
/usr/lib/python3.8/site-packages/trio/_core/_run.py:1816: OSError
______________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] _______________________________________
proxy_server = (b'http', b'127.0.0.1', 8080, b'/'), server = <tests.utils.HypercornServer object at 0x7f9cb91689d0>, proxy_mode = 'DEFAULT', protocol = b'http', port = 80
@pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
@pytest.mark.parametrize("protocol,port", [(b"http", 80), (b"https", 443)])
# Filter out ssl module deprecation warnings and asyncio module resource warning,
# convert other warnings to errors.
@pytest.mark.filterwarnings("ignore:.*(SSLContext|PROTOCOL_TLS):DeprecationWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning:asyncio")
@pytest.mark.filterwarnings("error")
def test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool(
proxy_server: URL,
server: Server,
proxy_mode: str,
protocol: bytes,
port: int,
):
with httpcore.SyncHTTPProxy(proxy_server, proxy_mode=proxy_mode) as http:
for _ in range(100):
try:
> _ = http.handle_request(
method=b"GET",
url=(protocol, b"blockedhost.example.com", port, b"/"),
headers=[(b"host", b"blockedhost.example.com")],
stream=httpcore.ByteStream(b""),
extensions={},
)
tests/sync_tests/test_interfaces.py:309:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
httpcore/_sync/http_proxy.py:115: in handle_request
return self._forward_request(
httpcore/_sync/http_proxy.py:175: in _forward_request
) = connection.handle_request(
httpcore/_sync/connection.py:136: in handle_request
self.socket = self._open_socket(timeout)
httpcore/_sync/connection.py:163: in _open_socket
return self._backend.open_tcp_stream(
httpcore/_backends/sync.py:144: in open_tcp_stream
return SyncSocketStream(sock=sock)
/usr/lib64/python3.8/contextlib.py:131: in __exit__
self.gen.throw(type, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
map = {<class 'socket.timeout'>: <class 'httpcore.ConnectTimeout'>, <class 'OSError'>: <class 'httpcore.ConnectError'>}
@contextlib.contextmanager
def map_exceptions(map: Dict[Type[Exception], Type[Exception]]) -> Iterator[None]:
try:
yield
except Exception as exc: # noqa: PIE786
for from_exc, to_exc in map.items():
if isinstance(exc, from_exc):
> raise to_exc(exc) from None
E httpcore.ConnectError: [Errno 24] Too many open files
httpcore/_exceptions.py:12: ConnectError
____________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] ____________________________________
proxy_server = (b'http', b'127.0.0.1', 8080, b'/'), server = <tests.utils.HypercornServer object at 0x7f9cb91689d0>, proxy_mode = 'FORWARD_ONLY', protocol = b'http'
port = 80
@pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
@pytest.mark.parametrize("protocol,port", [(b"http", 80), (b"https", 443)])
# Filter out ssl module deprecation warnings and asyncio module resource warning,
# convert other warnings to errors.
@pytest.mark.filterwarnings("ignore:.*(SSLContext|PROTOCOL_TLS):DeprecationWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning:asyncio")
@pytest.mark.filterwarnings("error")
def test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool(
proxy_server: URL,
server: Server,
proxy_mode: str,
protocol: bytes,
port: int,
):
with httpcore.SyncHTTPProxy(proxy_server, proxy_mode=proxy_mode) as http:
for _ in range(100):
try:
> _ = http.handle_request(
method=b"GET",
url=(protocol, b"blockedhost.example.com", port, b"/"),
headers=[(b"host", b"blockedhost.example.com")],
stream=httpcore.ByteStream(b""),
extensions={},
)
tests/sync_tests/test_interfaces.py:309:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
httpcore/_sync/http_proxy.py:115: in handle_request
return self._forward_request(
httpcore/_sync/http_proxy.py:175: in _forward_request
) = connection.handle_request(
httpcore/_sync/connection.py:136: in handle_request
self.socket = self._open_socket(timeout)
httpcore/_sync/connection.py:163: in _open_socket
return self._backend.open_tcp_stream(
httpcore/_backends/sync.py:144: in open_tcp_stream
return SyncSocketStream(sock=sock)
/usr/lib64/python3.8/contextlib.py:131: in __exit__
self.gen.throw(type, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
map = {<class 'socket.timeout'>: <class 'httpcore.ConnectTimeout'>, <class 'OSError'>: <class 'httpcore.ConnectError'>}
@contextlib.contextmanager
def map_exceptions(map: Dict[Type[Exception], Type[Exception]]) -> Iterator[None]:
try:
yield
except Exception as exc: # noqa: PIE786
for from_exc, to_exc in map.items():
if isinstance(exc, from_exc):
> raise to_exc(exc) from None
E httpcore.ConnectError: [Errno 24] Too many open files
httpcore/_exceptions.py:12: ConnectError
___________________________________ test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] ___________________________________
proxy_server = (b'http', b'127.0.0.1', 8080, b'/'), server = <tests.utils.HypercornServer object at 0x7f9cb91689d0>, proxy_mode = 'FORWARD_ONLY', protocol = b'https'
port = 443
@pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
@pytest.mark.parametrize("protocol,port", [(b"http", 80), (b"https", 443)])
# Filter out ssl module deprecation warnings and asyncio module resource warning,
# convert other warnings to errors.
@pytest.mark.filterwarnings("ignore:.*(SSLContext|PROTOCOL_TLS):DeprecationWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning:asyncio")
@pytest.mark.filterwarnings("error")
def test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool(
proxy_server: URL,
server: Server,
proxy_mode: str,
protocol: bytes,
port: int,
):
with httpcore.SyncHTTPProxy(proxy_server, proxy_mode=proxy_mode) as http:
for _ in range(100):
try:
> _ = http.handle_request(
method=b"GET",
url=(protocol, b"blockedhost.example.com", port, b"/"),
headers=[(b"host", b"blockedhost.example.com")],
stream=httpcore.ByteStream(b""),
extensions={},
)
tests/sync_tests/test_interfaces.py:309:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
httpcore/_sync/http_proxy.py:115: in handle_request
return self._forward_request(
httpcore/_sync/http_proxy.py:175: in _forward_request
) = connection.handle_request(
httpcore/_sync/connection.py:136: in handle_request
self.socket = self._open_socket(timeout)
httpcore/_sync/connection.py:163: in _open_socket
return self._backend.open_tcp_stream(
httpcore/_backends/sync.py:144: in open_tcp_stream
return SyncSocketStream(sock=sock)
/usr/lib64/python3.8/contextlib.py:131: in __exit__
self.gen.throw(type, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
map = {<class 'socket.timeout'>: <class 'httpcore.ConnectTimeout'>, <class 'OSError'>: <class 'httpcore.ConnectError'>}
@contextlib.contextmanager
def map_exceptions(map: Dict[Type[Exception], Type[Exception]]) -> Iterator[None]:
try:
yield
except Exception as exc: # noqa: PIE786
for from_exc, to_exc in map.items():
if isinstance(exc, from_exc):
> raise to_exc(exc) from None
E httpcore.ConnectError: [Errno 24] Too many open files
httpcore/_exceptions.py:12: ConnectError
_____________________________________________________________________________ test_no_retries ______________________________________________________________________________
server = <tests.utils.HypercornServer object at 0x7f9cb91689d0>
> ???
tests/sync_tests/test_retries.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
httpcore/_sync/connection_pool.py:234: in handle_request
???
httpcore/_sync/connection.py:136: in handle_request
self.socket = self._open_socket(timeout)
httpcore/_sync/connection.py:163: in _open_socket
return self._backend.open_tcp_stream(
tests/sync_tests/test_retries.py:32: in open_tcp_stream
return super().open_tcp_stream(*args, **kwargs)
httpcore/_backends/sync.py:144: in open_tcp_stream
return SyncSocketStream(sock=sock)
/usr/lib64/python3.8/contextlib.py:131: in __exit__
self.gen.throw(type, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
map = {<class 'socket.timeout'>: <class 'httpcore.ConnectTimeout'>, <class 'OSError'>: <class 'httpcore.ConnectError'>}
@contextlib.contextmanager
def map_exceptions(map: Dict[Type[Exception], Type[Exception]]) -> Iterator[None]:
try:
yield
except Exception as exc: # noqa: PIE786
for from_exc, to_exc in map.items():
if isinstance(exc, from_exc):
> raise to_exc(exc) from None
E httpcore.ConnectError: [Errno 16] Device or resource busy
httpcore/_exceptions.py:12: ConnectError
============================================================================= warnings summary =============================================================================
tests/async_tests/test_connection_pool.py: 4 warnings
tests/async_tests/test_http11.py: 6 warnings
tests/async_tests/test_http2.py: 3 warnings
tests/async_tests/test_interfaces.py: 47 warnings
/usr/lib/python3.8/site-packages/trio/_core/_wakeup_socketpair.py:83: RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================= short test summary info ==========================================================================
SKIPPED [1] tests/async_tests/test_interfaces.py:323: The trio backend does not support local_address
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-anyio] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-auto] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[trio-anyio] - OSError: [Errno 24] Too many open files: '/dev/null'
ERROR tests/async_tests/test_retries.py::test_no_retries[asyncio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_retries.py::test_no_retries[trio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_retries.py::test_retries_enabled[asyncio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_retries.py::test_retries_enabled[trio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_retries.py::test_retries_exceeded[asyncio] - OSError: [Errno 24] Too many open files
ERROR tests/async_tests/test_retries.py::test_retries_exceeded[trio] - OSError: [Errno 24] Too many open files
ERROR tests/backend_tests/test_asyncio.py::TestSocketStream::TestIsReadable::test_returns_true_when_transport_has_no_socket - OSError: [Errno 24] Too many open files
ERROR tests/backend_tests/test_asyncio.py::TestSocketStream::TestIsReadable::test_returns_true_when_socket_is_readable - OSError: [Errno 24] Too many open files
ERROR tests/sync_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[sync] - OSError: [Errno 24] Too many open files: '/dev/null'
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - trio.TrioInternalError: inte...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - trio.TrioInternalError:...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-TUNNEL_ONLY] - trio.TrioInternalError: ...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-DEFAULT] - trio.TrioInternalError: in...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - trio.TrioInternalErro...
FAILED tests/async_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-TUNNEL_ONLY] - trio.TrioInternalError...
FAILED tests/async_tests/test_interfaces.py::test_broken_socket_detection_many_open_files[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[asyncio-auto-connection-refused] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[asyncio-auto-dns-resolution-failed] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[asyncio-anyio-connection-refused] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[asyncio-anyio-dns-resolution-failed] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[trio-auto-connection-refused] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[trio-auto-dns-resolution-failed] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[trio-anyio-connection-refused] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_tcp[trio-anyio-dns-resolution-failed] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[asyncio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[asyncio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[trio-auto] - OSError: [Errno 24] Too many open files
FAILED tests/async_tests/test_interfaces.py::test_cannot_connect_uds[trio-anyio] - OSError: [Errno 24] Too many open files
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-DEFAULT] - httpcore.ConnectError: [Errno...
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[http-80-FORWARD_ONLY] - httpcore.ConnectError: [...
FAILED tests/sync_tests/test_interfaces.py::test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool[https-443-FORWARD_ONLY] - httpcore.ConnectError:...
FAILED tests/sync_tests/test_retries.py::test_no_retries - httpcore.ConnectError: [Errno 16] Device or resource busy
==================================================== 23 failed, 164 passed, 1 skipped, 60 warnings, 12 errors in 22.41s ====================================================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'. |
Beta Was this translation helpful? Give feedback.
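Nearly every failure above is the same underlying condition: the test process exhausted its per-process open-file-descriptor limit (Errno 24, `EMFILE`). Before blaming httpcore, it is worth checking — and, where possible, raising — that limit in the build environment. A minimal sketch using only the standard library (the actual limit values are environment-dependent):

```python
import resource

# Inspect the current soft/hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
# Many build chroots default to a low soft limit (often 1024), which a
# socket-heavy test suite like this one can exhaust.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

The shell equivalent is checking `ulimit -n` (and `ulimit -Hn`) before invoking pytest.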
-
Just tested 0.14.1 and now pytest passes; however, it shows a strange call trace. + PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-httpcore-0.14.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-httpcore-0.14.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
rootdir: /home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1, configfile: setup.cfg
plugins: rerunfailures-9.1.1, cov-2.12.1, forked-1.3.0, xdist-2.3.0, flake8-1.0.7, shutil-1.7.0, virtualenv-1.7.0, trio-0.7.0, mock-3.6.1, timeout-2.0.1, anyio-3.3.1, tornado-0.8.1, asyncio-0.15.1, httpbin-1.0.0
collected 121 items
tests/test_api.py ... [ 2%]
tests/test_models.py ............. [ 13%]
tests/_async/test_connection.py ............ [ 23%]
tests/_async/test_connection_pool.py ................. [ 37%]
tests/_async/test_http11.py .............. [ 48%]
tests/_async/test_http2.py .............. [ 60%]
tests/_async/test_http_proxy.py ...... [ 65%]
tests/_async/test_integration.py ...... [ 70%]
tests/_sync/test_connection.py ...... [ 75%]
tests/_sync/test_connection_pool.py .......... [ 83%]
tests/_sync/test_http11.py ....... [ 89%]
tests/_sync/test_http2.py ....... [ 95%]
tests/_sync/test_http_proxy.py ... [ 97%]
tests/_sync/test_integration.py ... [100%]
============================================================================= warnings summary =============================================================================
tests/_sync/test_connection_pool.py::test_connection_pool_concurrency_same_domain_closing
/usr/lib/python3.8/site-packages/_pytest/threadexception.py:75: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-10
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 249, in _fire_event_triggered_transitions
new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
KeyError: <class 'h11._events.ConnectionClosed'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_exceptions.py", line 8, in map_exceptions
yield
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/http11.py", line 169, in _receive_event
event = self._h11_state.next_event()
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 443, in next_event
exc._reraise_as_remote_protocol_error()
File "/usr/lib/python3.8/site-packages/h11/_util.py", line 76, in _reraise_as_remote_protocol_error
raise self
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 427, in next_event
self._process_event(self.their_role, event)
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 242, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 238, in process_event
self._fire_event_triggered_transitions(role, event_type)
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 251, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/tests/_sync/test_connection_pool.py", line 322, in fetch
with pool.stream("GET", f"https://{domain}/") as response:
File "/usr/lib64/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/interfaces.py", line 73, in stream
response = self.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/connection_pool.py", line 248, in handle_request
raise exc
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/connection_pool.py", line 232, in handle_request
response = connection.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/connection.py", line 89, in handle_request
return self._connection.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/http11.py", line 102, in handle_request
raise exc
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/http11.py", line 81, in handle_request
) = self._receive_response_headers(**kwargs)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/http11.py", line 143, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_sync/http11.py", line 169, in _receive_event
event = self._h11_state.next_event()
File "/usr/lib64/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.1/httpcore/_exceptions.py", line 12, in map_exceptions
raise to_exc(exc)
httpcore.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
===================================================================== 121 passed, 1 warning in 26.33s ====================================================================== |
Beta Was this translation helpful? Give feedback.
-
Just tested the new 0.14.4 and it looks better :) + PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-httpcore-0.14.4-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-httpcore-0.14.4-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4, configfile: setup.cfg
plugins: httpbin-1.0.1, asyncio-0.16.0, trio-0.7.0, anyio-3.3.4
collected 130 items
tests/test_api.py ... [ 2%]
tests/test_models.py ............. [ 12%]
tests/_async/test_connection.py ............ [ 21%]
tests/_async/test_connection_pool.py ................... [ 36%]
tests/_async/test_http11.py .............. [ 46%]
tests/_async/test_http2.py ................ [ 59%]
tests/_async/test_http_proxy.py ........ [ 65%]
tests/_async/test_integration.py ...... [ 70%]
tests/_sync/test_connection.py ...... [ 74%]
tests/_sync/test_connection_pool.py ........... [ 83%]
tests/_sync/test_http11.py ....... [ 88%]
tests/_sync/test_http2.py ........ [ 94%]
tests/_sync/test_http_proxy.py .... [ 97%]
tests/_sync/test_integration.py 127.0.0.1 - - [05/Jan/2022 15:29:33] "GET / HTTP/1.1" 200 12144
... [100%]
============================================================================= warnings summary =============================================================================
tests/_sync/test_connection_pool.py::test_connection_pool_concurrency_same_domain_closing
/usr/lib/python3.8/site-packages/_pytest/threadexception.py:75: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-10
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 249, in _fire_event_triggered_transitions
new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
KeyError: <class 'h11._events.ConnectionClosed'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_exceptions.py", line 8, in map_exceptions
yield
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/http11.py", line 169, in _receive_event
event = self._h11_state.next_event()
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 443, in next_event
exc._reraise_as_remote_protocol_error()
File "/usr/lib/python3.8/site-packages/h11/_util.py", line 76, in _reraise_as_remote_protocol_error
raise self
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 427, in next_event
self._process_event(self.their_role, event)
File "/usr/lib/python3.8/site-packages/h11/_connection.py", line 242, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 238, in process_event
self._fire_event_triggered_transitions(role, event_type)
File "/usr/lib/python3.8/site-packages/h11/_state.py", line 251, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/tests/_sync/test_connection_pool.py", line 358, in fetch
with pool.stream("GET", f"https://{domain}/") as response:
File "/usr/lib64/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/interfaces.py", line 73, in stream
response = self.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/connection_pool.py", line 244, in handle_request
raise exc
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/connection_pool.py", line 228, in handle_request
response = connection.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/connection.py", line 90, in handle_request
return self._connection.handle_request(request)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/http11.py", line 102, in handle_request
raise exc
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/http11.py", line 81, in handle_request
) = self._receive_response_headers(**kwargs)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/http11.py", line 143, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_sync/http11.py", line 169, in _receive_event
event = self._h11_state.next_event()
File "/usr/lib64/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tkloczko/rpmbuild/BUILD/httpcore-0.14.4/httpcore/_exceptions.py", line 12, in map_exceptions
raise to_exc(exc)
httpcore.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
====================================================================== 130 passed, 1 warning in 9.12s ====================================================================== |
Beta Was this translation helpful? Give feedback.
-
With that commit I still see one warning in pytest ============================================================================= warnings summary =============================================================================
../../../../../usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:112
/usr/lib/python3.8/site-packages/pytest_asyncio/plugin.py:112: DeprecationWarning: The 'asyncio_mode' default value will change to 'strict' in future, please explicitly use 'asyncio_mode=strict' or 'asyncio_mode=auto' in pytest configuration file.
config.issue_config_time_warning(LEGACY_MODE, stacklevel=2)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
===================================================================== 130 passed, 1 warning in 10.24s ====================================================================== BTW any chance to restore sphinx support? 🤔 |
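As the deprecation message itself suggests, the warning goes away once the mode is pinned explicitly in the pytest configuration. A sketch against a setup.cfg-style config like the one this project uses (whether `strict` or `auto` is the right choice for this suite is an assumption to verify):

```ini
[tool:pytest]
asyncio_mode = strict
```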
Beta Was this translation helpful? Give feedback.
-
I know that I have installed a lot of pytest extensions which may sometimes produce false-positive results; however, I think that some of these reports are about steps missing from the testing procedure.
Some of those errors/warnings may actually point to real errors.
The full pytest log is in the attachment; here is just the summary info
python-httpcore.txt
Beta Was this translation helpful? Give feedback.