S3 and ADLS connectors allow setting credentials per bucket in the same Spark session. However, I am unable to find a similar config for GCS.
My use case is to read from a GCS bucket in a project and write to a GCS bucket in another project with a different set of credentials. Overriding the Hadoop conf just before writing causes failures due to race conditions in the token caching.
Is there a workaround to achieve this in Spark for GCS?
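For comparison, the per-bucket pattern that the S3A connector supports looks like the sketch below; this is the kind of configuration being asked for on the GCS side. The bucket names and key values are placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Illustration of S3A's documented per-bucket credential keys
// (fs.s3a.bucket.<bucket>.*); bucket names and secrets are placeholders.
val spark = SparkSession.builder().appName("per-bucket-creds").getOrCreate()
val hadoopConf = spark.sparkContext.hadoopConfiguration

// Credentials applied only to s3a://source-bucket/...
hadoopConf.set("fs.s3a.bucket.source-bucket.access.key", "SOURCE_ACCESS_KEY")
hadoopConf.set("fs.s3a.bucket.source-bucket.secret.key", "SOURCE_SECRET_KEY")

// Credentials applied only to s3a://target-bucket/...
hadoopConf.set("fs.s3a.bucket.target-bucket.access.key", "TARGET_ACCESS_KEY")
hadoopConf.set("fs.s3a.bucket.target-bucket.secret.key", "TARGET_SECRET_KEY")
```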
We resolved this by extending GoogleHadoopFileSystem and overriding its initialize method to read our custom auth properties and set the fs.gs.auth.* settings for the URI representing the GCS bucket.
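A minimal sketch of that approach, assuming a GCS connector version where GoogleHadoopFileSystem.initialize(URI, Configuration) can be overridden. The fs.gs.auth.override.<bucket>.keyfile property name is our own illustrative convention, and the exact fs.gs.auth.* keys it maps onto depend on the connector version in use.

```scala
import java.net.URI

import org.apache.hadoop.conf.Configuration
import com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem

// Sketch: resolve per-bucket credentials before the parent class
// builds its credential/token state, avoiding the race seen when
// mutating the shared Hadoop conf just before a write.
class PerBucketAuthGcsFileSystem extends GoogleHadoopFileSystem {

  override def initialize(path: URI, config: Configuration): Unit = {
    val bucket = path.getAuthority
    // Bucket-specific key file supplied by the application, e.g.
    // spark.hadoop.fs.gs.auth.override.<bucket>.keyfile=/path/to/key.json
    val keyFile = config.get(s"fs.gs.auth.override.$bucket.keyfile")
    if (keyFile != null) {
      // Redirect the connector's standard auth properties to the
      // per-bucket credentials (key names vary by connector version).
      config.set("fs.gs.auth.type", "SERVICE_ACCOUNT_JSON_KEYFILE")
      config.set("fs.gs.auth.service.account.json.keyfile", keyFile)
    }
    super.initialize(path, config)
  }
}
```

The custom class can then be registered in place of the stock implementation (for example via spark.hadoop.fs.gs.impl set to the custom class name), alongside one fs.gs.auth.override.<bucket>.keyfile entry per bucket.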