[Bug]: Log is full of "failed to start a background worker" #7602
Comments
@cheggerdev Thank you for the bug report. The error is generated when the scheduler cannot spawn a new job. So, can you provide some more information? In particular:
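As a general sketch (the maintainer's actual checklist is not preserved above), job definitions and their failure counts can be inspected through TimescaleDB's informational views:

    SELECT job_id, application_name, proc_name, schedule_interval
      FROM timescaledb_information.jobs;

    SELECT job_id, total_runs, total_failures, last_run_status
      FROM timescaledb_information.job_stats;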
Log from a fresh (re-)start:
Yes, I do.
Hi, I got more log output with
and the result:
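Neither the logging change nor the resulting output is shown here. A generic way to get more detail out of the PostgreSQL log (an assumption; the exact setting used in this thread is not shown) is to raise the log verbosity and reload the configuration:

    ALTER SYSTEM SET log_min_messages = 'debug1';
    SELECT pg_reload_conf();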
This line is generated when the scheduler cannot acquire a free background worker slot.
I am not sure there is a good way to check the number of slots or slot assignment through the SQL interface, but could you check
=> 42 most of the time
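The exact query requested above is not preserved, but a plausible equivalent (an assumption, not the maintainer's command) is counting backends by type, since background workers appear in pg_stat_activity:

    SELECT backend_type, count(*)
      FROM pg_stat_activity
     GROUP BY backend_type
     ORDER BY count(*) DESC;

A total close to max_worker_processes would mean all background worker slots are occupied, which is consistent with the launch failures.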
What type of bug is this?
Configuration
What subsystems and features are affected?
Background worker
What happened?
The TimescaleDB log is full of messages like:
zabbix-timescaledb-1 | 2025-01-19 09:55:28.045 UTC [37] WARNING: failed to launch job 3 "Job History Log Retention Policy [3]": failed to start a background worker
zabbix-timescaledb-1 | 2025-01-19 09:55:29.723 UTC [36] WARNING: failed to launch job 3 "Job History Log Retention Policy [3]": failed to start a background worker
Increasing the worker limits in the config file has no effect: the launch failures do not disappear.
max_worker_processes = 64 (increased from 32)
timescaledb.max_background_workers = 48 (increased from 8 to 16, then to 32, then to 48)
max_parallel_workers = 4 (number of CPUs)
show timescaledb.telemetry_level; => basic
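For reference, whether the raised limits actually took effect can be verified on the running server; note that both max_worker_processes and timescaledb.max_background_workers only change after a full server restart:

    show max_worker_processes;                  -- expected: 64
    show timescaledb.max_background_workers;    -- expected: 48
    show max_parallel_workers;                  -- expected: 4

The usual sizing guidance is that max_worker_processes should be at least timescaledb.max_background_workers + max_parallel_workers plus a few extra processes for PostgreSQL itself, which the values above satisfy.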
TimescaleDB version affected
docker-compose image tag latest-pg16
PostgreSQL version used
16
What operating system did you use?
Alpine Linux
What installation method did you use?
Docker
What platform did you run on?
Other
Relevant log output and stack trace
How can we reproduce the bug?