
Running out of memory when pushing Andromeda tables to external databases using bulk import #256

Open · schuemie opened this issue on Oct 30, 2023 · 1 comment

@schuemie (Member) commented:

Noticed while trying to push the concept_relationship table to Postgres. It probably has to do with writing the Andromeda table to CSV prior to calling the PG bulk upload tool.

schuemie added the bug label on Oct 30, 2023
@schuemie (Member, Author) commented:

This is actually harder than I thought. insertTable() performs some operations on the data, such as (optionally) converting column names to snake_case and inferring column types for the table-creation SQL, that don't work out of the box on Andromeda tables. For now, insertTable() explicitly converts Andromeda tables to data frames and throws a warning when it does.

For larger tables, the best option for now is to do the batching outside of insertTable(), for example along the lines of the sketch below.
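A minimal, untested sketch of that batching, assuming an open DatabaseConnector connection (`connection`), the large table in `andromeda$conceptRelationship`, and a placeholder target schema (`"scratch"`). `Andromeda::batchApply()` hands each batch to the callback as a plain data frame, which `insertTable()` handles without the warning:

```r
library(DatabaseConnector)
library(Andromeda)

# Assumes 'connection' and 'andromeda' already exist; schema and
# table names below are placeholders for illustration only.
first <- TRUE
batchApply(
  andromeda$conceptRelationship,
  function(batch) {
    # 'batch' is a plain data frame, so insertTable can process it directly
    insertTable(
      connection = connection,
      databaseSchema = "scratch",      # placeholder target schema
      tableName = "concept_relationship",
      data = batch,
      dropTableIfExists = first,       # drop and recreate only for the
      createTable = first,             # first batch, then append
      camelCaseToSnakeCase = TRUE
    )
    first <<- FALSE
  },
  batchSize = 1e5                      # tune to available memory
)
```

The first batch drops and recreates the target table; subsequent batches only append, so the full table never has to fit in memory at once.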
