
test with latencies added in #668

Open
andrewgazelka opened this issue Dec 9, 2024 · 5 comments

@andrewgazelka
Collaborator

andrewgazelka commented Dec 9, 2024

especially for bows, breaking blocks, etc.

🤖
To simulate inconsistent latency in TCP connections on macOS and Linux, you can utilize tools like Toxiproxy and speedbump.

Toxiproxy is a TCP proxy designed for simulating network conditions, including latency, bandwidth constraints, and connection instability. It supports various platforms, including macOS and Linux. You can configure Toxiproxy to introduce specific network conditions to test how your application behaves under different scenarios.

speedbump is another tool that acts as a TCP proxy to simulate variable network latency. It allows you to add base latency and configure latency variations using different waveforms like sine, sawtooth, square, and triangle waves. This flexibility enables the simulation of fluctuating network conditions to observe their impact on your application.

Both tools are open-source and can be integrated into your testing environment to help assess and improve your application's resilience to network variability.
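As an illustration of the waveform idea, here is a minimal sketch (in Rust, since Hyperion is a Rust project) of a speedbump-style latency schedule: a base delay plus a sine-wave variation. The function name and parameters are hypothetical and not speedbump's actual interface; a real proxy would sleep for the computed duration before forwarding each chunk.

```rust
use std::time::Duration;

/// Base delay plus a sine-wave variation, sampled at time `t_s` (seconds).
/// Hypothetical helper, not speedbump's actual API.
fn latency_at(base_ms: f64, amplitude_ms: f64, period_s: f64, t_s: f64) -> Duration {
    let wave = (2.0 * std::f64::consts::PI * t_s / period_s).sin();
    Duration::from_secs_f64((base_ms + amplitude_ms * wave).max(0.0) / 1000.0)
}

fn main() {
    // 100 ms base with a 50 ms amplitude over a 10 s period: the injected
    // delay oscillates between 50 ms and 150 ms.
    for t in [0.0_f64, 2.5, 5.0, 7.5] {
        println!("t={t:>4}s -> {:?}", latency_at(100.0, 50.0, 10.0, t));
    }
}
```

Driving a connection through a delay schedule like this (instead of a fixed delay) is what exposes timing-sensitive bugs in bow charging and block breaking, since the latency the server observes keeps changing mid-action.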


linear bot commented Dec 9, 2024

@TestingPlant
Collaborator

We should also test with slow clients. Because many player movement packets are sent per tick, clients with slow internet download speeds (even temporarily) could end up getting kicked due to #313, but I want to test that first to make sure it's actually an issue.

@TestingPlant TestingPlant self-assigned this Dec 9, 2024
@andrewgazelka
Collaborator Author

note that #313 is actually good to some extent because we don't want to have to store a history of every single packet... I want to focus on the big issues, like whether we act differently than vanilla when a player has slow internet / high latency in ways we can easily improve upon. I think there are some issues with block breaking/placing at the moment.

@TestingPlant
Collaborator

After testing with my own home internet when connected to the test server, I'm occasionally getting:

2024-12-20T01:40:56.757973Z  WARN flushed packets to player in 2.982425216s
2024-12-20T01:41:00.050636Z  WARN flushed packets to player in 3.292486256s
2024-12-20T01:41:01.562223Z  WARN flushed packets to player in 1.511388514s
2024-12-20T01:41:03.005341Z  WARN flushed packets to player in 1.442882672s
2024-12-20T01:41:03.849048Z  WARN server_reader_loop:handle_flush:flush_task: Failed to send data to player: failed to send packet to player, channel is full: true
2024-12-20T01:41:03.849214Z  WARN handle_broadcast_local:broadcast_local_task: Failed to send data to player: failed to send packet to player: send to a closed channel

For context, my home ISP sometimes slows down a lot randomly and this issue causes keepalive timeouts in other Minecraft servers, so this isn't necessarily Hyperion's fault. However, having thousands of players plus entities would increase the bandwidth requirements, and if a player's ISP decides to slow down below that minimum bandwidth at any point during the event, they might get disconnected for being too slow.

I'm thinking of re-implementing droppable packets to solve this, but I'll try to solve #761 first so that running with several thousand bots works, which will let me test the effects of droppable packets.
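A sketch of what droppable packets could look like, using a bounded channel whose non-blocking `try_send` drops low-priority packets under backpressure instead of letting the queue fill until the "channel is full" error kicks the player. The `Packet` type, priorities, and tiny capacity are illustrative assumptions, not Hyperion's actual types.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

/// Illustrative packet classification; not Hyperion's actual types.
#[derive(Debug)]
enum Packet {
    Critical(&'static str),  // must arrive eventually (e.g. keepalive)
    Droppable(&'static str), // safe to skip under backpressure (e.g. movement)
}

fn main() {
    // Tiny bound to force backpressure in this demo; a real server would
    // size the per-player queue much larger.
    let (tx, rx) = sync_channel::<Packet>(2);

    let mut dropped = 0;
    for pkt in [
        Packet::Droppable("move#1"),
        Packet::Droppable("move#2"),
        Packet::Droppable("move#3"), // queue is full by now
    ] {
        // try_send never blocks: when the channel is full, the packet is
        // simply dropped instead of disconnecting the slow client.
        if let Err(TrySendError::Full(_)) = tx.try_send(pkt) {
            dropped += 1;
        }
    }

    // Critical packets use the blocking `send`, so they are never lost; here
    // the queue drains first, modelling the client catching up.
    let delivered = rx.try_iter().count();
    tx.send(Packet::Critical("keepalive")).expect("receiver alive");

    println!("delivered {delivered} droppable packets, dropped {dropped}");
    // → delivered 2 droppable packets, dropped 1
}
```

The key design choice is that dropping happens per packet class: movement updates are redundant (the next one supersedes the last), while keepalives and block changes are not, so only the former go through the lossy path.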

@andrewgazelka
Collaborator Author


Okay, I'm definitely fine with having droppable packets. My goal is to support around 30,000 players on a 16-core machine. That would mean 10,000 players on a 16-core machine should use about a third of the resources per core, and 1,000 players only about 1/30th (roughly speaking... I'm aware that sync time cost is constant, but this is the basis for extrapolating to around 30k players).
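The extrapolation above is linear, so it can be sanity-checked in a few lines (numbers taken from the comment; the 30,000-player capacity is the stated goal, not a measurement):

```rust
fn main() {
    // If 30,000 players saturate a 16-core machine, per-core utilization
    // scales linearly with player count under this (rough) model.
    let capacity = 30_000.0_f64;
    for players in [30_000.0, 10_000.0, 1_000.0] {
        let fraction = players / capacity;
        println!("{players:>7.0} players -> ~{:.1}% of each core", fraction * 100.0);
    }
}
```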

Development

No branches or pull requests

3 participants