
Lower RTT with Meteor.call #35

Open
nilnullzip opened this issue Jan 5, 2016 · 4 comments
@nilnullzip
I find that a simple Meteor.call ping runs with about half the RTT of TimeSync. To meteor.com, I'm seeing something like 150ms vs. 300ms for TimeSync; locally, I see 3ms vs. 25ms. I think the difference may be due to the DDP call going over the WebSocket. In fact, if I disable WebSockets locally, I see an RTT of 35ms with Meteor.call vs. TimeSync's 25ms. All of this was over a non-SSL connection.

Are there perhaps other advantages to the WebApp/HTTP method employed by TimeSync?

# Most recent round-trip time, in ms
ping_RTT = new ReactiveVar()

# Once a second, time a round trip through a simple 'ping' method
Meteor.setInterval ->
  t1 = Date.now()
  Meteor.call 'ping', t1, (e, t) ->
    t2 = Date.now()
    ping_RTT.set t2 - t1
, 1000
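
For reference, the server side of this isn't shown above; a minimal 'ping' method that just returns the server's clock (a sketch, not copied from my app) would be something like:

# Server: echo the current server time. Returning the timestamp also
# allows computing a clock offset, not just the RTT.
Meteor.methods
  ping: (t1) ->
    check t1, Number
    Date.now()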
@mizzao
Collaborator

mizzao commented Jan 5, 2016

Thanks for looking into this. I was using HTTP.call because I thought it lowered latency, but I hadn't considered that it would go over plain AJAX instead of the WebSocket. The first versions of this library did use Meteor.call, so this is definitely an interesting counterexample.

Would you be able to take some more data samples in the settings you described? I'd be happy to change to using Meteor.call - in fact, it would make things a lot simpler (e.g. #30, #31).
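
For the record, a Meteor.call-based sync step might look roughly like this; the 'sync.time' method name and the midpoint offset formula are just illustrative, not the library's current API:

# Hypothetical client-side sync step over Meteor.call (sketch only).
# The server would define the method as: 'sync.time': -> Date.now()
syncOffset = new ReactiveVar()

syncOnce = ->
  t1 = Date.now()
  Meteor.call 'sync.time', (e, serverTime) ->
    return if e
    rtt = Date.now() - t1
    # Assuming symmetric legs, the server's timestamp corresponds to
    # the midpoint of the round trip.
    syncOffset.set serverTime - (t1 + rtt / 2)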

@nilnullzip
Author

I'll try to get some numbers over SSL.

So a third possibility, after HTTP and Meteor.call, is a raw WebSocket. The concern I have with DDP is that a single WebSocket is multiplexed to serve multiple sources, which has to introduce queueing delays. A dedicated WebSocket would presumably work more freely, with the multiplexing of the channel being done at the OS level.
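
Something like this on the client, for instance; the /timesync endpoint is hypothetical and would need a matching server-side WebSocket handler (e.g. the 'ws' npm package attached to Meteor's HTTP server):

# Sketch of a dedicated (non-DDP) WebSocket ping. The '/timesync' path
# and its server-side handler are assumptions, not existing endpoints.
t1 = null
ws = new WebSocket "ws://#{location.host}/timesync"
ws.onopen = ->
  t1 = Date.now()
  ws.send 'ping'   # server is assumed to echo anything it receives
ws.onmessage = ->
  console.log "dedicated WS RTT: #{Date.now() - t1}ms"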

@mizzao
Collaborator

mizzao commented Jan 6, 2016

Yes, that's right, I remember now. When DDP traffic is heavy, the RTT can be strongly biased in one direction vs. the other, so the computed offset is inaccurate. As we have it now, there may be a little more latency since it's not over WS, but it's also not fighting with the rest of the DDP traffic.

A dedicated WS would still be handled at the application level (browser/node), not by the OS, but it might be a little more efficient. I wonder whether it would actually be worth it, though, just for this purpose.

@nilnullzip
Author

The dedicated WS would still have to go through Node's delay, but that's no worse than HTTP, and no worse than DDP.

However, the WS has lower latency than HTTP because it doesn't open a new TCP/IP connection on each request; the connection is already set up.

And the WS should have lower latency than DDP because its requests are not blocked by the DDP queue.

Another thing to consider is that only the lowest delay matters, not the longer ones. The best way to exploit this is to take multiple samples and keep the one with the shortest RTT. Perhaps you could get lucky with DDP that way, but the dedicated WebSocket is probably still the best approach, as its performance is independent of the DDP queue.
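
A minimal sketch of that take-the-minimum idea, reusing the 'ping' method from my first comment (the offset math again assumes symmetric legs):

# Keep the offset from whichever sample had the shortest RTT: the
# least-delayed sample is the least likely to have been queued.
best = rtt: Infinity, offset: 0

Meteor.setInterval ->
  t1 = Date.now()
  Meteor.call 'ping', t1, (e, serverTime) ->
    return if e
    rtt = Date.now() - t1
    if rtt < best.rtt
      best = rtt: rtt, offset: serverTime - (t1 + rtt / 2)
, 1000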
