Use of remote:prot=tcp on resource constrained devices #92

Open
phlash opened this issue Jul 27, 2021 · 4 comments
phlash (Member) commented Jul 27, 2021

Originally posted by @guruofquality in pothosware/SoapyFCDPP#13 (comment)

After testing with my own remoting solution, updated to use the direct buffer API, I discovered a couple of bugs:

How does it (remoting solution) compare when the protocol is set to tcp for soapy remote?
https://github.com/pothosware/SoapyRemote/wiki#remoteprot

SoapyRemote is trying to have headers with metadata and some kind of flow control. But if plain TCP is useful, I don't see why that couldn't be a mode in SoapyStreamEndpoint.cpp.

phlash (Member, Author) commented Jul 27, 2021

So I've now tried this (remote device: an OrangePi Zero LTS with a FUNcube Dongle Pro+ and the latest SoapyFCDPP driver; local device: my Lenovo E590 laptop, on WiFi to introduce some network jitter!). Result: stable for a few seconds, then it begins emitting many XRUN recoveries ("readStream recovered from.."), eventually stalling (Gqrx sees no more input).

I suspect this is due to the still quite small transfer size / period selected (1006 frames), whereby any network jitter destabilises the flow control. My own solution has no flow control and uses a larger transfer size / period (default 24000 frames); that trades away latency, of course, but it avoids these challenges.

For comparison, omitting remote:prot=tcp (and thus using UDP) results in a transfer size of 357 frames and constant dropped-packet reports ('S' appears on the client) when testing with SoapySDRUtil.
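
For a sense of the latency trade-off being described, here is a back-of-envelope sketch (not project code) that converts each transfer size mentioned above into time per transfer, assuming the FUNcube Dongle Pro+'s fixed 192 kHz sample rate:

```cpp
#include <cstdio>

// Rough per-transfer latency for each transfer size mentioned above,
// assuming the FCD Pro+'s fixed 192 kHz sample rate.
int main()
{
    const double sampleRate = 192000.0;
    const double frames[] = {357, 1006, 24000}; // UDP default, TCP default, my solution
    for (const double f : frames)
        std::printf("%6.0f frames -> %6.2f ms per transfer\n", f, 1e3 * f / sampleRate);
    return 0;
}
// 357 -> ~1.86 ms, 1006 -> ~5.24 ms, 24000 -> 125 ms
```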

guruofquality (Contributor) commented

@phlash Something has to be very broken with the flow control.

I pushed a branch to disable flow control, if that's worth trying.

What transfer size are you using? This is where the transfer size is defined: https://github.com/pothosware/SoapyRemote/blob/master/common/SoapyRemoteDefs.hpp#L91. It's currently 4096 because some platforms would bomb out on larger sizes, I think Apple and/or Windows. I think, though, that it could easily be increased on Linux.

The flow control window comes from the remote:window setting (https://github.com/pothosware/SoapyRemote/wiki#remotewindow), which supposedly resizes the socket buffer on the receive side so the kernel guarantees that much space; the window is just that divided by the transfer size. It's currently set to 42 MiB by default, which should allow something like 10K of these transfers before a flow-control response is needed.
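
For concreteness, the window arithmetic above works out as follows (a standalone sketch, not SoapyRemote's actual code; the setsockopt() call is just the standard POSIX way a receive buffer gets resized, shown here as illustration):

```cpp
#include <cstdio>
#include <sys/socket.h> // socket(), setsockopt(), SO_RCVBUF (POSIX)

int main()
{
    const unsigned long windowBytes   = 42UL * 1024 * 1024; // remote:window default, 42 MiB
    const unsigned long transferBytes = 4096;               // current default transfer size

    // transfers that fit in the window before the sender must
    // wait for a flow-control acknowledgement
    std::printf("transfers per window: %lu\n", windowBytes / transferBytes); // 10752

    // how a receive buffer is typically enlarged to match the window
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    const int rcvBuf = static_cast<int>(windowBytes);
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvBuf, sizeof(rcvBuf));
    return 0;
}
```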

phlash (Member, Author) commented Jul 28, 2021

@guruofquality I'll give that a try tomorrow. I'm only guessing that it's the flow control going wrong somewhere, as that seems to be the major difference between the approach taken here and my dumb 'let the kernel sort it out' approach (which is only possible when using TCP).

phlash (Member, Author) commented Jul 31, 2021

@guruofquality Flow control is exonerated: your test build has similar behaviour to unmodified SoapyRemote, as does my own code when a small period is specified for the ALSA buffer in SoapyFCDPP. It looks like any overrun/overflow is down to my small test CPU's inability to keep up when there is more task switching in general, e.g. if I enable TRACE logging with small ALSA periods I see overflows continuously, especially if that logging also goes over the network to the client.

I have made a couple of changes to my own solution that seem to help, and may be worth considering for SoapyRemote:

  • Using the direct buffer API to access the underlying driver; for SoapyFCDPP these are ALSA buffers in kernel space.
  • Writing from a mapped direct buffer straight to the TCP socket, thus avoiding any copy to/from user space; this halved CPU load 😄 (see the sketch below).
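
As an illustration of that zero-copy path, here is a minimal sketch using SoapySDR's direct buffer access API (error handling and socket setup are elided; the CS16 sample format and the connected TCP socket `sock` are assumptions for the example, not details from SoapyFCDPP):

```cpp
#include <SoapySDR/Device.hpp>
#include <sys/socket.h> // send()

// One iteration of the zero-copy pump: acquire the driver's own buffer
// (for SoapyFCDPP, ALSA buffers mapped from kernel space) and write it
// straight to the TCP socket, skipping the user-space copy that a
// readStream() call into a local buffer would incur.
void pumpOnce(SoapySDR::Device *dev, SoapySDR::Stream *stream, int sock)
{
    size_t handle = 0;
    const void *buffs[1] = {nullptr};
    int flags = 0;
    long long timeNs = 0;

    // blocks (up to the default timeout) until the driver has a filled buffer
    const int nElems = dev->acquireReadBuffer(stream, handle, buffs, flags, timeNs);
    if (nElems <= 0) return; // timeout or stream error

    // CS16: two 16-bit components per complex sample
    send(sock, buffs[0], size_t(nElems) * 2 * sizeof(short), 0);

    // hand the buffer back so the driver can refill it
    dev->releaseReadBuffer(stream, handle);
}
```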
