
0.2.0 fails to start: [haproxy.main()] Cannot raise FD limit to 8094, limit is 1024 #134

Closed
powerman opened this issue Aug 19, 2024 · 11 comments

Comments

@powerman

There is a related issue haproxy/haproxy#1866, except I'm using amd64.

0.1.2 works:

$ docker run -i -t --rm tecnativa/docker-socket-proxy:0.1.2
[WARNING] 231/063747 (1) : config : missing timeouts for backend 'docker-events'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 231/063747 (1) : Can't open global server state file '/var/lib/haproxy/server-state': No such file or directory
[WARNING] 231/063747 (1) : [haproxy.main()] Cannot raise FD limit to 8094, limit is 4096. This will fail in >= v2.3
Proxy dockerbackend started.
Proxy docker-events started.
Proxy dockerfrontend started.
[NOTICE] 231/063747 (1) : haproxy version is 2.2.32-4081d5a
[ALERT] 231/063747 (1) : [haproxy.main()] FD limit (4096) too low for maxconn=4000/maxsock=8094. Please raise 'ulimit-n' to 8094 or more to avoid any trouble.This will fail in >= v2.3
[NOTICE] 231/063747 (1) : New worker #1 (12) forked

0.2.0 does not work:

$ docker run -i -t --rm tecnativa/docker-socket-proxy:0.2.0
[NOTICE]   (1) : haproxy version is 3.0.2-a45a8e6
[WARNING]  (1) : config : missing timeouts for backend 'docker-events'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[NOTICE]   (1) : config : config: Can't open global server state file '/var/lib/haproxy/server-state': No such file or directory
[ALERT]    (1) : [haproxy.main()] Cannot raise FD limit to 8094, limit is 1024.
$
@powerman
Author

With no strict-limits added to haproxy.cfg.template, it starts:

[WARNING]  (1) : [haproxy.main()] Cannot raise FD limit to 8094, limit is 4096.
[ALERT]    (1) : [haproxy.main()] FD limit (4096) too low for maxconn=4000/maxsock=8094. Please raise 'ulimit-n' to 8094 or more to avoid any trouble.
[NOTICE]   (1) : New worker (12) forked
[NOTICE]   (1) : Loading success.

But at a glance it might make more sense to lower maxconn and maxsock to 1000 instead; that should be more than enough for the docker-socket-proxy use case.

@josep-tecnativa
Contributor

josep-tecnativa commented Aug 19, 2024

I prefer adding no strict-limits to haproxy.cfg. Could you take a look at PR #135? I think it would fix the issue.

@pedrobaeza
Member

pedrobaeza commented Sep 9, 2024

We have reverted to the previous version of HAProxy, so this problem shouldn't occur anymore.

@polarathene

For context, 1024 should always be the soft limit, but in many releases of Docker / containerd it is unfortunately infinity (a ridiculous number, well over a million or a billion IIRC depending on the host). 4096 is the default hard limit in the kernel; again, in Docker / containerd releases you may find the systemd service sets LimitNOFILE=infinity, which raises it way up (usually that's OK for the hard limit).

The soft limit should stay at 1024 and the software itself should raise the limit (not exceeding the hard limit) by as much as it requires at runtime. With systemd the defaults should be 1024:524288, which should be absolutely fine; you can check in a container with ulimit -Sn (soft limit) and ulimit -Hn (hard limit).
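
For example, a quick way to see what a fresh container gets (a minimal check; alpine is just an illustrative image, and this assumes its shell's ulimit builtin supports -S/-H, as busybox ash does):

$ docker run --rm alpine sh -c 'ulimit -Sn; ulimit -Hn'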

Docker from v25, I think, has fixed this in its systemd config files and will instead inherit the limits from the system (which in most cases on Linux will be the systemd defaults), and the container will inherit those. You can override this per container too if you need to.

Containerd 2.0 needs to be released before its related config is fixed too, and then a follow-up Docker release that upgrades to containerd 2.0. After that point it should be less of a problem to think about 😅


I am a bit curious about your environment as to why you were getting limits of 1024 and 4096? The error message doesn't seem to distinguish between soft limit and hard limit. Were you running via rootless or something other than Docker? (or an older version of Docker?)

@powerman
Author

I am a bit curious about your environment as to why you were getting limits of 1024 and 4096?

Nothing fancy (rootless/outdated/etc.), just no systemd, so 1024/4096 because the kernel defaults weren't modified. I'm using Gentoo Linux and runit as the boot/service manager.

@polarathene

I'm using Gentoo Linux and runit as the boot/service manager.

Oh... then just manage limits with runit config? There's not really anything a project can do when it tries to do the right thing but you forbid it.

While 1024 should remain the global default soft limit (due to some software still using select() syscalls IIRC, where a higher limit is incompatible), the 4096 hard limit is typically way too low. The systemd devs looked into it when they made the change around 2019: very few packages needed more than 300k, but as it's a hard limit, raising it to 524288 was fine.

There is some software that doesn't play nicely, like Envoy. Last I checked there was no documentation about its expectation of a high limit (over a million), and instead of handling it at runtime they expect you to have the soft limit raised before running Envoy. I think they usually deploy via containers, where they've been able to leverage the infinity bug, but that's soon to change 😅

So HAProxy was recently version bumped from 2.2 (July 2020 and EOL) to 3.0 (May 2024), and that was reverted due to a misconfigured niche environment? 🤔


FWIW, I'm assuming the Docker daemon / containerd will also be running with that 1024:4096 default? (since that's what the containers inherit IIRC)

That eventually becomes a problem, and it was why the limit was initially raised in 2014 for those services: operations like docker system prune and other maintenance tasks would fail if a lot of FDs had to be opened to perform them.

I believe they improved on that, minimizing the issue, but some other issue that was difficult to troubleshoot raised the limits further to infinity as a misunderstood fix (and prior to another systemd change that would let infinity map to 2^30 instead of 2^20, i.e. roughly 1000x more). I did some heavy investigation into all that and pushed hard to get it resolved, since the soft limit was causing various breakage / regressions in software that expected it to be 1024.

There is nothing to worry about in raising the hard limit to a reasonable amount like 524288 on your system. You can probably go much lower if you're not running databases and the like, but it shouldn't contribute to any resource overhead concerns (like it once did in the past IIRC). If the software still complains with the 1024 soft limit, however, I would raise that selectively for the service itself within the container, or when acceptable for the entire container (since that's easier). Well-behaved software should work smoothly with just the hard limit raised, though.
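
If you want to confirm what dockerd itself is running with before changing anything, something like this should do (assumes a Linux host with procfs, a single dockerd process, and, for the second command, util-linux's prlimit):

# Show the nofile limits of the running Docker daemon via procfs:
$ grep 'Max open files' /proc/$(pidof dockerd)/limits
# Or the same via util-linux, if prlimit is installed:
$ prlimit --nofile --pid "$(pidof dockerd)"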

@polarathene

polarathene commented Sep 14, 2024

So HAProxy was recently version bumped from 2.2 (July 2020 and EOL) to 3.0 (May 2024), and that was reverted due to a misconfigured niche environment? 🤔

Nope, the earlier comment about the revert lacked context; the revert was for another reason.

We have reverted to the previous version of HAProxy

Context (it'll be switched back to 3.0 when possible):


When that update lands, should you still have the limits issue, follow the advice above. You can probably explicitly configure the docker-socket-proxy container with --ulimit / ulimits if you don't want to raise the hard limit elsewhere for some reason.
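
As a sketch (untested; the 0.2.0 tag is just the HAProxy 3.0-based image from this thread, 8192 is simply a value comfortably above the maxsock=8094 from the log, and the usual socket volume / environment flags are omitted):

$ docker run -i -t --rm --ulimit nofile=1024:8192 tecnativa/docker-socket-proxy:0.2.0

This assumes the dockerd process's own hard nofile limit is at least that high.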

@powerman
Author

You can probably explicitly configure the docker-socket-proxy container with --ulimit / ulimits if you don't want to raise the hard limit elsewhere for some reason.

It turns out docker won't allow --ulimit nofile=… values greater than (or even equal to!) the hard nofile limit of the dockerd process, even if both dockerd and the container are running as root. So at least one runit service (docker itself) must be configured with an increased hard nofile limit to make it possible to give (some) containers a nofile limit greater than 4096.
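
For anyone else on runit, a minimal sketch of such a service script (the /etc/sv/docker/run path, the 524288 value, and the bare dockerd invocation are assumptions to adapt to your setup):

#!/bin/sh
# /etc/sv/docker/run (assumed path): raise only the hard nofile limit
# before exec'ing dockerd; the soft limit stays at the 1024 default.
exec 2>&1
ulimit -Hn 524288
exec dockerd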

@polarathene

It turns out docker won't allow --ulimit nofile=… values greater than (or even equal to!) the hard nofile limit of the dockerd process, even if both dockerd and the container are running as root.

What version of Docker are you running for reference?

I think when I looked into it a year or so ago I was able to lower the hard limit and possibly raise it above the dockerd process, but it might not have been via --ulimit (there were a few different areas to tweak it).

From memory, if the process spawns children they can use as many FDs as their soft limit allows (it doesn't count against parents or siblings in a process group AFAIK), but the hard limit could not be raised above what the parent had, only lowered further. However, I think containers were created without the daemon as the parent, which allowed raising the hard limit beyond the daemon's, but perhaps something has changed or I'm not recalling that correctly 😅

@powerman
Author

What version of Docker are you running for reference?

26.1.0

I think when I looked into it a year or so ago I was able to lower the hard limit and possibly raise it above the dockerd process

Sure, you can lower it, but you can't raise it.

but the hard limit could not be raised above what the parent had

root (a real one, outside of a container) can raise it above the parent's limit

@polarathene

root (a real one, outside of a container) can raise it above the parent's limit

It might be because I was using systemd instead of runit. IIRC the containers were being added into a separate systemd slice (that might have been a customization on my end at the time) from the one the daemon was operating in. Or it may have been via the /etc/docker/daemon.json ulimit setting rather than the container-specific ulimit config option. I'm fairly sure it was raised above the daemon's when it created the new container, but I could be wrong 😓
