Child processes are not reaped - growing number of zombie processes #144
Comments
Hi @vlcinsky, I understand that including process reaping in dockerize is desirable in some cases; however, I'd suggest considering using tini as your container's init process.
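For reference, wiring tini in as PID 1 looks roughly like this in a Dockerfile (a sketch only: the base image, script path, and dockerize arguments are placeholders, not taken from this issue):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache tini
# tini runs as PID 1, forwards signals, and reaps orphaned zombie children
ENTRYPOINT ["/sbin/tini", "--"]
# hypothetical command; replace with your actual dockerize invocation
CMD ["dockerize", "/app/fetch.sh"]
```

Alternatively, `docker run --init` (or `init: true` in a Compose file) injects Docker's bundled tini without changing the image at all.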
Thanks @JamesJJ for the nice hint. Still, the word "some" in that suggestion sounds odd to me: a utility that starts processes inside Docker should also clean them up. There are at least two possible resolutions:
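As background, the kind of process cleanup being discussed can be sketched as a SIGCHLD-driven reaper in the parent process (Python is used here purely for illustration; dockerize itself is written in Go):

```python
import os
import signal
import time

def reap_children(signum, frame):
    # Collect every exited child so none is left behind as a zombie.
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return  # this process has no children at all
        if pid == 0:
            return  # remaining children are still running

signal.signal(signal.SIGCHLD, reap_children)

# Demonstration: spawn a short-lived child and let the handler reap it.
child = os.fork()
if child == 0:
    os._exit(0)

time.sleep(0.5)  # give the child time to exit and the handler time to run

try:
    os.waitpid(child, os.WNOHANG)
    print("child still unreaped")
except ChildProcessError:
    print("child was reaped; no zombie remains")
```

A PID 1 process in a container has the extra duty of reaping children it never spawned itself (orphans are re-parented to it), which is exactly what tini does.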
These look the same to me. What am I missing?
@fiendish You are right; I put a note there. I cannot remember the exact fix, but it was related to reaping, and possibly to using bash or sh with `-c`. I would close this issue as improperly reported, but the core cause (missing child-process reaping) remains.
@vlcinsky I think this is a use-case problem. The way I see it, what you are trying to achieve looks like a sidecar container for a k8s pod. If that is the case, you might want to have a deep look into k8s CronJobs.
@mwmahlberg I will think about it. Anyway, we use Docker Swarm, where no sidecar is at hand. I would also like to see the log entries created by the script; I am afraid that with cron, these get hidden somewhere.
@vlcinsky That heavily depends on the log driver you choose and how you do your logging. If you run the image on its own via `docker run`, it runs a script that "logs" a "Hello, cronrunner!" to stdout, and you can view that in the logs as usual. (You need to wait about one minute until you see the first entry.)

Since I perfectly understand that logs are valuable event streams, let me suggest a more sophisticated setup. I strongly suggest the fluentd log driver: run a fluentd service in global mode and choose one (or more!) of the output plugins shipped with fluentd, or one (or more) of the gazillion third-party output plugins. Personally, I save the logs for review and long-term storage in InfluxDB (view them with Grafana; InfluxDB comes with retention policies, so logs are rotated out with about minute precision), send a duplicate of all non-debug logs via the syslog output plugin to OSSIM, and send certain errors (yes, there are all kinds of matchers) to Kafka. To make that setup as easy as possible, you should use some structured logging for the log messages.

You see, you do not need to be concerned about your logs. ;)
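For concreteness, the fluentd log-driver setup described above can be expressed in a Compose/Swarm stack file roughly like this (service and image names are placeholders; `logging.driver` and its options are the actual Docker keys):

```yaml
services:
  cronrunner:                    # placeholder service name
    image: example/cronrunner    # placeholder image
    logging:
      driver: fluentd
      options:
        fluentd-address: "fluentd:24224"
        tag: "cronrunner.{{.Name}}"
  fluentd:
    image: fluent/fluentd:v1.16-1
    deploy:
      mode: global               # one fluentd instance per Swarm node
    ports:
      - "24224:24224"
```

From there, fluentd's output plugins route the stream to InfluxDB, syslog, Kafka, or wherever else you need it.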
We used dockerize with a script that regularly fetches some external data.
The script is here: https://gitlab.com/tamtamresearch/cet/datahub/app-es-openlr/svc_es_openlr_doit/snippets/1908196
After a day we found that CPU usage (in %) was growing:
Apparently, we did one system reboot.
Researching the cause, we found that there were thousands of zombie processes, and the number was steadily growing.
The call in our Dockerfile looked like:
Changing it (note: the call I put into the original issue was mistakenly a simple copy of the previous one; I cannot remember exactly what the fix was, but I guess we used shell or bash with `-c`) resolved the problem: the number of zombie processes is now zero.
Conclusions
To me it seems that the `dockerize` script is not taking care of reaping child processes that have terminated, and, given the environment where it runs, nothing else does this important cleanup work.

It seems we hit a similar issue as the author of pull request #126.
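To make the diagnosis reproducible, here is a minimal Linux-only sketch (not the issue's actual script) showing that an exited child stays in the `Z` (zombie) state until its parent reaps it:

```python
import os
import time

# Fork a child that exits immediately; deliberately do NOT wait for it yet.
pid = os.fork()
if pid == 0:
    os._exit(0)

time.sleep(0.2)

# On Linux, /proc/<pid>/stat still exists: the child sits in state 'Z'
# (zombie) until the parent calls wait() and collects its exit status.
with open(f"/proc/{pid}/stat") as f:
    state = f.read().rsplit(")", 1)[1].split()[0]
print("state before reaping:", state)  # Z

# Reaping removes the zombie from the process table.
reaped, _status = os.waitpid(pid, 0)
print("reaped pid matches:", reaped == pid)
```

A parent that never calls `wait()`/`waitpid()` accumulates one such `Z` entry per exited child, which is exactly the growth pattern reported above.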