Describe the bug
The jobs/filter endpoint is hit whenever the jobs page is viewed, and its user_list query is unoptimised.
With a jids table of a few hundred thousand rows, it can bring the machine to a crawl.
The current code loads every single row into memory, JSON-parses each one in Python, and then collects the unique "user" values across every dict.
While this is fine in a dev environment, it can consume gigabytes of memory on a large database.
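A minimal sketch of the problematic pattern described above (the row shape and helper name here are illustrative, not Alcali's actual code; it assumes each row's "load" column holds a JSON blob with a "user" key, as in Salt's jids table):

```python
import json

# Hypothetical rows mimicking the jids table: the "load" column is a
# JSON-encoded text blob that happens to contain a "user" key.
rows = [
    {"jid": "20240101000000000001", "load": json.dumps({"user": "root", "fun": "test.ping"})},
    {"jid": "20240101000000000002", "load": json.dumps({"user": "deploy", "fun": "state.apply"})},
    {"jid": "20240101000000000003", "load": json.dumps({"user": "root", "fun": "test.ping"})},
]

def user_list(rows):
    # Every row is materialised and JSON-parsed in Python just to find
    # the distinct "user" values -- memory grows linearly with table size.
    return sorted({json.loads(r["load"]).get("user") for r in rows})

print(user_list(rows))  # -> ['deploy', 'root']
```

With a few hundred thousand rows, `rows` itself plus the parsed dicts is where the multi-gigabyte memory use comes from.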
To Reproduce
Create or import a very large jids table.
Load the jobs page in Alcali a few times and watch memory usage on the PostgreSQL and Alcali gunicorn (Python) processes.
Expected behavior
Viewing the jobs page should not consume multiple gigabytes of memory.
Additional context
I would recommend removing the user_list filter from this endpoint, or at least providing a config option to disable it.
The jids table does not have a username column to filter on by default, so any search over it is going to be CPU- and memory-intensive on larger databases.
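If the filter is kept, one possible mitigation is pushing the distinct-user extraction into the database so only the unique values reach Python. A sketch using SQLite's `json_extract` for portability (assuming the bundled JSON1 functions are available; on PostgreSQL the equivalent expression would be `load::json->>'user'`):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jids (jid TEXT PRIMARY KEY, load TEXT)")
conn.executemany(
    "INSERT INTO jids VALUES (?, ?)",
    [
        ("1", json.dumps({"user": "root"})),
        ("2", json.dumps({"user": "deploy"})),
        ("3", json.dumps({"user": "root"})),
    ],
)

# The database deduplicates; Python only ever sees the handful of
# distinct users instead of every row in the table.
users = sorted(
    row[0]
    for row in conn.execute("SELECT DISTINCT json_extract(load, '$.user') FROM jids")
)
print(users)  # -> ['deploy', 'root']
```

This still scans the table (there is no index on the extracted value by default), but it avoids materialising and JSON-parsing every row in the application process.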