Spark dependencies pod failing on OpenShift #87
I am running Spark Dependencies as pods on OpenShift 3.6 using the Helm charts (we can't use operators). Currently, the collector and query pods are running OK with the example HotROD app. Jaeger is feeding into a separate Elasticsearch cluster for backend storage, but when I run the Spark dependencies job, the pod ends up in CrashLoopBackOff.

I see the following in the logs for the Spark pod:

What do these logs mean?

Comments

Looks like it's a Java version issue, as I was using a custom image with OpenJDK Java 11.

Then can we close this?
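As a quick sanity check for the Java-version mismatch mentioned in the comments, the major version can be parsed out of the first line of `java -version` output. This is a sketch under the assumption (suggested by the comment) that the job targets Java 8; the sample `version_line`, the `oc exec` command, and the pod-name placeholder are illustrative, not taken from this issue:

```shell
# Parse the major Java version from a `java -version` line and warn when it
# is newer than Java 8. Inside the cluster you could capture the line with
# something like:
#   oc exec <spark-pod> -- java -version 2>&1 | head -n 1
version_line='openjdk version "11.0.2" 2019-01-15'   # sample output (Java 11)

major=$(echo "$version_line" | sed -E 's/.*version "([0-9]+)(\.[0-9]+)*.*/\1/')

# Pre-9 JVMs report versions like "1.8.0_292", so a major of "1" means
# Java 8 or older.
if [ "$major" != "1" ] && [ "$major" -gt 8 ]; then
  echo "Java $major detected: rebuild the image on a Java 8 base"
fi
```

Running this against the sample line above prints the warning; against an `openjdk version "1.8.0_292"` line it prints nothing.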