Db cleanup fails after upgrade to Python 3.9 and Airflow 2.2.2 #121
Seems to be a duplicate of #117.
I think the issue is very similar, but I'm using Airflow 2.3.2 from Docker with Python 3.7. I'm getting the same error only for the following tasks: cleanup_TaskInstance, cleanup_BaseXCom, cleanup_TaskReschedule, cleanup_RenderedTaskInstanceFields. Other tasks finish successfully.
This seems to be related to columns that are missing from the db tables to be cleaned by those tasks. I fixed it by replacing the missing age-check column with a timestamp column that still exists on the model, and I also suggest making the corresponding change in the cleanup DAG's configuration, as in the sketch below.
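For illustration, a sketch of what that replacement could look like, assuming the DATABASE_OBJECTS list from the airflow-db-cleanup DAG (the entry fields and keep_last values here are assumptions; verify them against your copy of the DAG):

```python
# Sketch of a possible fix, not the project's official change: swap the
# age-check column for models whose execution_date column was removed in
# Airflow 2.2. The entry shape follows the DATABASE_OBJECTS convention
# used by the airflow-db-cleanup DAG; check field names in your version.
from airflow.models import TaskInstance, XCom

DATABASE_OBJECTS = [
    {
        "airflow_db_model": TaskInstance,
        # task_instance no longer has execution_date in Airflow >= 2.2,
        # so use a timestamp column that still exists on the model.
        "age_check_column": TaskInstance.start_date,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None,
    },
    {
        "airflow_db_model": XCom,
        "age_check_column": XCom.timestamp,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None,
    },
]
```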
I read through #117, and it looks like the comment from @PhilippDB makes sense and could be the solution. @e-compagno, please be aware of what @tylerwmarrs wrote: using start_date or end_date may select incorrect records that are not tied to records in the dag_run table, since the tables are linked by foreign-key constraints in the DDL.
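To make that concern concrete, here is a minimal sketch (my own, with an assumed 30-day cutoff) of a cleanup query that ages task instances by joining to dag_run, rather than filtering on task_instance timestamps alone:

```python
# Sketch: delete old task instances by joining to dag_run, so the age
# check uses dag_run.execution_date rather than a timestamp column on
# task_instance that may be NULL or missing. In Airflow >= 2.2,
# task_instance references dag_run via a composite (dag_id, run_id)
# foreign key, so this keeps deletions consistent with dag_run cleanup.
from datetime import timedelta

from airflow.models import DagRun, TaskInstance
from airflow.utils.session import provide_session
from airflow.utils.timezone import utcnow


@provide_session
def delete_old_task_instances(max_age_days=30, session=None):
    cutoff = utcnow() - timedelta(days=max_age_days)
    old_tis = (
        session.query(TaskInstance)
        .join(
            DagRun,
            (TaskInstance.dag_id == DagRun.dag_id)
            & (TaskInstance.run_id == DagRun.run_id),
        )
        .filter(DagRun.execution_date < cutoff)
        .all()
    )
    for ti in old_tis:
        session.delete(ti)
```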
GoogleCloudPlatform/python-docs-samples#7847 fixed it for me.
The problem is still present in version 2.4.1, and the Cloud Composer version does not fix it. Would the change suggested above fix the issue?
You can find the time columns in the model file. For example, for TaskInstance you can use queued_dttm or end_date.
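If you want to enumerate the candidate columns programmatically, here is a small sketch using SQLAlchemy's inspection API (my own, not part of the cleanup DAG):

```python
# Sketch: list datetime-typed columns on an Airflow model so you can pick
# a valid age_check_column (e.g. queued_dttm or end_date on TaskInstance).
import datetime

from sqlalchemy import inspect

from airflow.models import TaskInstance

for column in inspect(TaskInstance).columns:
    try:
        # Airflow's UtcDateTime is a TypeDecorator, so check python_type
        # rather than isinstance against sqlalchemy.DateTime.
        if column.type.python_type is datetime.datetime:
            print(column.name)
    except NotImplementedError:
        # Some custom column types do not implement python_type.
        pass
```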
That fixes the issue for me.
I have recently upgraded to Python 3.9 and Airflow 2.2.2. The following error has been occurring repeatedly since the upgrade. I've only changed the parameters indicated in the repo README and am running everything else the same. Other lib versions:
SQLAlchemy 1.4.1
Flask-SQLAlchemy 2.4.3
Can you please check the problem here?