Upgrade bazel-ios-fork to v2.10.2 #6

Draft
chenj-hub wants to merge 317 commits into bazel-ios-fork from jackies/upgrade-bazel-buildfarm-to-v2.10.2

Conversation

chenj-hub

Merging v2.10.2 of bazel-buildfarm into our bazel-ios-fork.

werkt and others added 30 commits October 5, 2023 23:06
A shard server is impractical without operation subscription and partition
subscription confirmation between servers and workers.
Failsafe execution is configuration that is likely not desired on
workers. This change removes the failsafe behavior from workers via
backplane config and relegates the setting of the failsafe boolean to
server config. If the option is restored for workers, it can be added to
worker configs so that configs may continue to be shared between workers
and servers while retaining independent addressability.
Internally driven metrics and scaling controls have low, if any, usage
rates. Prometheus has largely superseded independent publication of
metrics, and externally driven scaling is the norm. These modules have
been incomplete across cloud providers and, for the functional side of
AWS, bind us to Spring Boot. Remove them for the sake of reduced
dependencies and complexity.
Remove this unused OperationQueue feature, which is never invoked in any
usage.
Continue the loop while we have *not* matched successfully and avoid a
confusing inversion in getMatched()
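
A minimal sketch of the straightened loop, assuming a hypothetical matcher with a getMatched() accessor (names other than getMatched() are illustrative, not the actual worker code):

```java
// Hypothetical sketch: loop while we have *not* matched successfully,
// instead of negating state inside an inverted helper.
final class MatchLoop {
  private volatile boolean matched = false;

  boolean getMatched() {
    return matched;
  }

  void run() throws InterruptedException {
    // Continue while we have not matched successfully.
    while (!getMatched()) {
      matched = tryMatch();
    }
  }

  private boolean tryMatch() throws InterruptedException {
    Thread.sleep(10); // stand-in for waiting on an operation to match
    return true;
  }
}
```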
Distinguish the valid/unique/propagating methods of entry listening.
The only signal to a waiting match that will halt its current listen
loop for a valid unique operation is an interrupt.
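
Roughly, the listening contract looks like the sketch below: the listener blocks until a valid unique entry arrives, and only an interrupt can halt it early. The EntryListener name and queue-based shape are assumptions for illustration.

```java
// Hypothetical sketch: the only way to halt a waiting listen loop before a
// valid unique entry arrives is to interrupt the listening thread.
import java.util.concurrent.BlockingQueue;

final class EntryListener<T> {
  private final BlockingQueue<T> entries;

  EntryListener(BlockingQueue<T> entries) {
    this.entries = entries;
  }

  T listen() throws InterruptedException {
    for (;;) {
      T entry = entries.take(); // blocks; throws InterruptedException on interrupt
      if (isValidAndUnique(entry)) {
        return entry;
      }
      // invalid or duplicate entries are dropped and the loop continues
    }
  }

  private boolean isValidAndUnique(T entry) {
    return entry != null; // placeholder validity/uniqueness check
  }
}
```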
Distinguish target param with GRPC type storage from FILESYSTEM
definition
Reinstate prior usage of LoggingMain for safe shutdown, with added
release mechanism for interrupted processes. All invoked shutdowns are
graceful, with vastly improved shutdown speed for empty workers waiting
for pipeline stages.
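
A minimal sketch of that shutdown pattern, assuming a JVM shutdown hook plus a latch-style release for interrupted processes; GracefulMain and its methods are illustrative stand-ins, not the actual LoggingMain implementation:

```java
// Hypothetical sketch: register a shutdown hook that runs a graceful shutdown,
// and provide a release() so an interrupted process can unblock it early.
import java.util.concurrent.CountDownLatch;

abstract class GracefulMain {
  private final CountDownLatch released = new CountDownLatch(1);

  protected GracefulMain() {
    Runtime.getRuntime().addShutdownHook(new Thread(this::awaitRelease, "shutdown-hook"));
  }

  /** Called by the process once its pipeline stages have drained. */
  protected void release() {
    released.countDown();
  }

  private void awaitRelease() {
    try {
      onShutdown();     // graceful stop of stages, servers, etc.
      released.await(); // wait until the process reports it has wound down
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  protected abstract void onShutdown();
}
```

In this sketch, an interrupted or empty worker calls release() as soon as its stages drain, letting the hook return promptly instead of waiting out a timeout.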
Tiny code cleanup
Will include operation root and inform directory cache effectiveness.
Selecting realInputDirectories by regex permits flexible patterns that
can yield drastic improvements in directory reuse for specialized
deployments. runfiles in particular are hazardous expansions of
nearly-execroot in the case of bazel.

Care must be taken to match directories exclusively.
The entire input tree is traversed for matches against expanded paths
under the root, to allow for nested selection.
Each match thus costs the number of input directories.
Counterintuitively, OutputFiles are augmented to avoid the recursive
check for OutputDirectories which only applies to actual reported
results, resulting in a path match when creating the exec root.
Regex style is java.util.Pattern, and must match the full input
directory.
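
Since the regex style is java.util.Pattern and a pattern must match the full input directory, the selection check can be imagined as the sketch below; the RealInputDirectoryMatcher class and its configuration plumbing are assumptions for illustration:

```java
// Hypothetical sketch: select input directories whose full relative path
// matches one of the configured realInputDirectories patterns.
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

final class RealInputDirectoryMatcher {
  private final List<Pattern> patterns;

  RealInputDirectoryMatcher(List<String> realInputDirectories) {
    this.patterns =
        realInputDirectories.stream().map(Pattern::compile).collect(Collectors.toList());
  }

  /** True only when the directory path matches a pattern in its entirety. */
  boolean matches(String inputDirectory) {
    // Pattern.matcher(...).matches() requires a full match, not a substring hit.
    return patterns.stream().anyMatch(p -> p.matcher(inputDirectory).matches());
  }
}
```

With a pattern such as `.*\.runfiles`, only directories whose entire relative path matches are selected, which keeps substring collisions from pulling in unrelated trees.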
This will include the path to the missed directory and the operation
which required it.
Prevent adding duplicate realInputDirectories matches
Ensure that the last leg of the execution presents a directory, rather
than the parent, per OutputDirectory's stamping.
Support a `--redis_uri` command line option for start-up.
Bump from 0.0.6 -> 0.0.9
jasonschroeder-sfdc and others added 23 commits April 11, 2024 14:29
Since we declare a buildifier target in //BUILD, we can't make this a
dev dependency.
When a duplicate output stream is detected, we must signal the
writeWinner (because the write exists) and onInsert (because it was
inserted) for an output stream creation. If we're racing, we should be
eventually convergent, but this absolutely fixes a hang which occurs on
this sentinel stream's return into getOutput, where the future might
never be triggered otherwise.
Move future completion into the only scope it was actually missing from
- getOutput for Write, rather than inducing all `put` calls into posts
to backplane via onPut with duplicates.
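
As a rough sketch of that fix, the future is signaled and completed where the duplicate (sentinel) stream is handed back from getOutput, rather than on every put; WriteState and its callbacks below are hypothetical names:

```java
// Hypothetical sketch: when a duplicate output stream is detected, signal the
// winner/insert listeners and complete the future in getOutput itself, so a
// caller racing on the sentinel stream cannot hang waiting for it.
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.SettableFuture;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

final class WriteState {
  private final SettableFuture<Void> future = SettableFuture.create();
  private final Runnable writeWinner;
  private final Runnable onInsert;

  WriteState(Runnable writeWinner, Runnable onInsert) {
    this.writeWinner = writeWinner;
    this.onInsert = onInsert;
  }

  ListenableFuture<Void> getFuture() {
    return future;
  }

  OutputStream getOutput(boolean duplicate) {
    if (duplicate) {
      writeWinner.run(); // the write exists
      onInsert.run();    // and it was inserted
      future.set(null);  // complete here, not in every put()
    }
    return new ByteArrayOutputStream(); // stand-in for the real output stream
  }
}
```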
chenj-hub force-pushed the jackies/upgrade-bazel-buildfarm-to-v2.10.2 branch 2 times, most recently from eef5cb9 to d7d6604 on August 13, 2024 14:30
chenj-hub (Author) commented on Aug 13, 2024:

Resolved the git conflicts between v2.10.2 and the current bazel-ios-fork; the resolution is shown by running `git show 26cb1df068059b698a6a503194b2c88404f42ad6 --remerge-diff`:

commit 26cb1df068059b698a6a503194b2c88404f42ad6
Merge: 8bbaada0 ece844a1
Author: Jackie Springstead-Chen <jackies@squareup.com>
Date:   Tue Aug 13 16:36:14 2024 -0400

    Merge commit 'ece844a103eb0561a2bd1bf91129c12915a28ea8' into jackies/upgrade-bazel-buildfarm-to-v2.10.2

diff --git a/.bazelci/format.sh b/.bazelci/format.sh
remerge CONFLICT (content): Merge conflict in .bazelci/format.sh
index 6b896f9e..79436008 100755
--- a/.bazelci/format.sh
+++ b/.bazelci/format.sh
@@ -69,23 +69,6 @@ run_java_formatter () {
     java -jar $LOCAL_FORMATTER -i $files
 }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-run_proto_formatter () {
-    # Check whether any formatting changes need to be made.
-    # This is intended to be done by the CI.
-    if [[ "$@" == "--check" ]]
-    then
-        find $PWD -name '*.proto' -exec $BAZEL run $CLANG_FORMAT -- -i --dry-run --Werror {} +
-        handle_format_error_check
-        return
-    fi
-
-    # Fixes formatting issues
-    find $PWD -name '*.proto' -exec $BAZEL run $CLANG_FORMAT -- -i {} +
-}
-
-=======
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 run_buildifier () {
     $BAZEL run $BUILDIFIER -- -r > /dev/null 2>&1
 }
diff --git a/BUILD b/BUILD
remerge CONFLICT (content): Merge conflict in BUILD
index 02328b4f..a1f6749a 100644
--- a/BUILD
+++ b/BUILD
@@ -3,11 +3,7 @@ load("@rules_oci//oci:defs.bzl", "oci_image", "oci_image_index", "oci_push", "oc
 load("@rules_pkg//:pkg.bzl", "pkg_tar")
 load("@rules_pkg//pkg:mappings.bzl", "pkg_attributes", "pkg_files")
 load("//:jvm_flags.bzl", "server_jvm_flags", "worker_jvm_flags")
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-load("@rules_pkg//pkg:tar.bzl", "pkg_tar")
-=======
 load("//container:defs.bzl", "oci_image_env")
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 
 package(default_visibility = ["//visibility:public"])
 
@@ -50,21 +46,9 @@ DEFAULT_PACKAGE_DIR = "app/build_buildfarm"
 # operating systems.  We make a best effort and ensure they all work in the below images.
 pkg_tar(
     name = "execution_wrappers",
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-    data = [
-        ":as-nobody",
-        ":delay",
-        #":linux-sandbox.binary", # Darwin build is broken
-        ":macos-wrapper",
-        #":process-wrapper.binary", # Darwin build is broken
-        ":skip_sleep.binary",
-        ":skip_sleep.preload",
-        ":tini.binary",
-=======
     srcs = [
         ":exec-wrapper-files",
         ":exec-wrapper-helpers",
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     ],
     package_dir = DEFAULT_PACKAGE_DIR,
     tags = ["container"],
@@ -233,22 +217,6 @@ oci_image(
     ],
 )
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-oss_audit(
-    name = "buildfarm-shard-worker-audit",
-    src = "//src/main/java/build/buildfarm:buildfarm-shard-worker",
-    tags = ["audit"],
-)
-
-pkg_tar(
-    name = "buildfarm-shard-worker-tar",
-    srcs = [
-        "//examples:example_configs",
-        "//src/main/java/build/buildfarm:buildfarm-shard-worker_deploy.jar",
-        "//src/main/java/build/buildfarm:configs",
-    ],
-)
-=======
 [
     oci_image_index(
         name = "buildfarm-%s" % image,
@@ -291,4 +259,3 @@ pkg_tar(
         "worker",
     ]
 ]
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
diff --git a/defs.bzl b/defs.bzl
deleted file mode 100644
remerge CONFLICT (modify/delete): defs.bzl deleted in ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write) and modified in 8bbaada0 (Revert "Temporary fix to use the official Blake3 support").  Version 8bbaada0 (Revert "Temporary fix to use the official Blake3 support") of defs.bzl left in tree.
index 08f9a5c5..00000000
--- a/defs.bzl
+++ /dev/null
@@ -1,170 +0,0 @@
-"""
-buildfarm definitions that can be imported into other WORKSPACE files
-"""
-
-load("@rules_jvm_external//:defs.bzl", "maven_install")
-load("@remote_apis//:repository_rules.bzl", "switched_rules_by_language")
-load(
-    "@io_bazel_rules_docker//repositories:repositories.bzl",
-    container_repositories = "repositories",
-)
-load("@io_grpc_grpc_java//:repositories.bzl", "grpc_java_repositories")
-load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
-load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")
-load("@io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")
-load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
-
-IO_NETTY_MODULES = [
-    "buffer",
-    "codec",
-    "codec-http",
-    "codec-http2",
-    "codec-socks",
-    "common",
-    "handler",
-    "handler-proxy",
-    "resolver",
-    "transport",
-    "transport-native-epoll",
-    "transport-native-kqueue",
-    "transport-native-unix-common",
-]
-
-IO_GRPC_MODULES = [
-    "api",
-    "auth",
-    "core",
-    "context",
-    "netty",
-    "stub",
-    "protobuf",
-    "testing",
-    "services",
-    "netty-shaded",
-]
-
-COM_AWS_MODULES = [
-    "autoscaling",
-    "core",
-    "ec2",
-    "secretsmanager",
-    "sns",
-    "ssm",
-    "s3",
-]
-
-ORG_SPRING_MODULES = [
-    "spring-beans",
-    "spring-core",
-    "spring-context",
-    "spring-web",
-]
-
-ORG_SPRING_BOOT_MODULES = [
-    "spring-boot-autoconfigure",
-    "spring-boot",
-    "spring-boot-starter-web",
-    "spring-boot-starter-thymeleaf",
-]
-
-def buildfarm_init(name = "buildfarm"):
-    """
-    Initialize the WORKSPACE for buildfarm-related targets
-
-    Args:
-      name: the name of the repository
-    """
-    maven_install(
-        artifacts = ["com.amazonaws:aws-java-sdk-%s:1.11.729" % module for module in COM_AWS_MODULES] +
-                    [
-                        "com.fasterxml.jackson.core:jackson-databind:2.15.0",
-                        "com.github.ben-manes.caffeine:caffeine:2.9.0",
-                        "com.github.docker-java:docker-java:3.2.11",
-                        "com.github.jnr:jffi:1.2.16",
-                        "com.github.jnr:jffi:jar:native:1.2.16",
-                        "com.github.jnr:jnr-constants:0.9.9",
-                        "com.github.jnr:jnr-ffi:2.1.7",
-                        "com.github.jnr:jnr-posix:3.0.53",
-                        "com.github.pcj:google-options:1.0.0",
-                        "com.github.serceman:jnr-fuse:0.5.5",
-                        "com.github.luben:zstd-jni:1.5.5-7",
-                        "com.github.oshi:oshi-core:6.4.0",
-                        "com.google.auth:google-auth-library-credentials:0.9.1",
-                        "com.google.auth:google-auth-library-oauth2-http:0.9.1",
-                        "com.google.code.findbugs:jsr305:3.0.1",
-                        "com.google.code.gson:gson:2.9.0",
-                        "com.google.errorprone:error_prone_annotations:2.9.0",
-                        "com.google.errorprone:error_prone_core:0.92",
-                        "com.google.guava:failureaccess:1.0.1",
-                        "com.google.guava:guava:31.1-jre",
-                        "com.google.j2objc:j2objc-annotations:1.1",
-                        "com.google.jimfs:jimfs:1.1",
-                        "com.google.protobuf:protobuf-java-util:3.10.0",
-                        "com.google.protobuf:protobuf-java:3.10.0",
-                        "com.google.truth:truth:0.44",
-                        "org.slf4j:slf4j-simple:1.7.35",
-                        "com.googlecode.json-simple:json-simple:1.1.1",
-                        "com.jayway.jsonpath:json-path:2.4.0",
-                        "io.github.lognet:grpc-spring-boot-starter:4.5.4",
-                        "org.bouncycastle:bcprov-jdk15on:1.70",
-                        "net.jcip:jcip-annotations:1.0",
-                    ] + ["io.netty:netty-%s:4.1.90.Final" % module for module in IO_NETTY_MODULES] +
-                    ["io.grpc:grpc-%s:1.53.0" % module for module in IO_GRPC_MODULES] +
-                    [
-                        "io.prometheus:simpleclient:0.10.0",
-                        "io.prometheus:simpleclient_hotspot:0.10.0",
-                        "io.prometheus:simpleclient_httpserver:0.10.0",
-                        "junit:junit:4.13.1",
-                        "javax.annotation:javax.annotation-api:1.3.2",
-                        "net.javacrumbs.future-converter:future-converter-java8-guava:1.2.0",
-                        "org.apache.commons:commons-compress:1.21",
-                        "org.apache.commons:commons-pool2:2.9.0",
-                        "org.apache.commons:commons-lang3:3.12.0",
-                        "commons-io:commons-io:2.11.0",
-                        "me.dinowernli:java-grpc-prometheus:0.5.0",
-                        "org.apache.tomcat:annotations-api:6.0.53",
-                        "org.checkerframework:checker-qual:2.5.2",
-                        "org.mockito:mockito-core:2.25.0",
-                        "org.openjdk.jmh:jmh-core:1.23",
-                        "org.openjdk.jmh:jmh-generator-annprocess:1.23",
-                        "org.redisson:redisson:3.13.1",
-                    ] + ["org.springframework.boot:%s:2.7.4" % module for module in ORG_SPRING_BOOT_MODULES] +
-                    ["org.springframework:%s:5.3.23" % module for module in ORG_SPRING_MODULES] +
-                    [
-                        "org.threeten:threetenbp:1.3.3",
-                        "org.xerial:sqlite-jdbc:3.34.0",
-                        "org.jetbrains:annotations:16.0.2",
-                        "org.yaml:snakeyaml:2.0",
-                        "org.projectlombok:lombok:1.18.24",
-                    ],
-        generate_compat_repositories = True,
-        repositories = [
-            "https://repo.maven.apache.org/maven2",
-            "https://jcenter.bintray.com",
-        ],
-    )
-
-    switched_rules_by_language(
-        name = "bazel_remote_apis_imports",
-        java = True,
-    )
-
-    container_repositories()
-
-    protobuf_deps()
-
-    grpc_java_repositories()
-
-    k8s_repositories()
-
-    rules_pkg_dependencies()
-
-    native.bind(
-        name = "jar/redis/clients/jedis",
-        actual = "@jedis//jar",
-    )
-
-    llvm_toolchain(
-        name = "llvm_toolchain",
-        llvm_version = "16.0.0",
-    )
diff --git a/deps.bzl b/deps.bzl
deleted file mode 100644
remerge CONFLICT (modify/delete): deps.bzl deleted in ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write) and modified in 8bbaada0 (Revert "Temporary fix to use the official Blake3 support").  Version 8bbaada0 (Revert "Temporary fix to use the official Blake3 support") of deps.bzl left in tree.
index c7dc10fd..00000000
--- a/deps.bzl
+++ /dev/null
@@ -1,192 +0,0 @@
-"""
-buildfarm dependencies that can be imported into other WORKSPACE files
-"""
-
-load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive", "http_file", "http_jar")
-load("@bazel_tools//tools/build_defs/repo:utils.bzl", "maybe")
-
-RULES_JVM_EXTERNAL_TAG = "4.2"
-RULES_JVM_EXTERNAL_SHA = "cd1a77b7b02e8e008439ca76fd34f5b07aecb8c752961f9640dea15e9e5ba1ca"
-
-def archive_dependencies(third_party):
-    return [
-        {
-            "name": "platforms",
-            "urls": [
-                "https://mirror.bazel.build/github.com/bazelbuild/platforms/releases/download/0.0.6/platforms-0.0.6.tar.gz",
-                "https://github.com/bazelbuild/platforms/releases/download/0.0.6/platforms-0.0.6.tar.gz",
-            ],
-            "sha256": "5308fc1d8865406a49427ba24a9ab53087f17f5266a7aabbfc28823f3916e1ca",
-        },
-        {
-            "name": "rules_jvm_external",
-            "strip_prefix": "rules_jvm_external-%s" % RULES_JVM_EXTERNAL_TAG,
-            "sha256": RULES_JVM_EXTERNAL_SHA,
-            "url": "https://github.com/bazelbuild/rules_jvm_external/archive/%s.zip" % RULES_JVM_EXTERNAL_TAG,
-        },
-        {
-            "name": "rules_pkg",
-            "sha256": "8a298e832762eda1830597d64fe7db58178aa84cd5926d76d5b744d6558941c2",
-            "url": "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.7.0/rules_pkg-0.7.0.tar.gz",
-        },
-
-        # Kubernetes rules.  Useful for local development with tilt.
-        {
-            "name": "io_bazel_rules_k8s",
-            "strip_prefix": "rules_k8s-0.7",
-            "url": "https://github.com/bazelbuild/rules_k8s/archive/refs/tags/v0.7.tar.gz",
-            "sha256": "ce5b9bc0926681e2e7f2147b49096f143e6cbc783e71bc1d4f36ca76b00e6f4a",
-        },
-
-        # Needed for "well-known protos" and @com_google_protobuf//:protoc.
-        {
-            "name": "com_google_protobuf",
-            "sha256": "dd513a79c7d7e45cbaeaf7655289f78fd6b806e52dbbd7018ef4e3cf5cff697a",
-            "strip_prefix": "protobuf-3.15.8",
-            "urls": ["https://github.com/protocolbuffers/protobuf/archive/v3.15.8.zip"],
-        },
-        {
-            "name": "com_github_bazelbuild_buildtools",
-            "sha256": "a02ba93b96a8151b5d8d3466580f6c1f7e77212c4eb181cba53eb2cae7752a23",
-            "strip_prefix": "buildtools-3.5.0",
-            "urls": ["https://github.com/bazelbuild/buildtools/archive/3.5.0.tar.gz"],
-        },
-
-        # Needed for @grpc_java//compiler:grpc_java_plugin.
-        {
-            "name": "io_grpc_grpc_java",
-            "sha256": "78bf175f9a8fa23cda724bbef52ad9d0d555cdd1122bcb06484b91174f931239",
-            "strip_prefix": "grpc-java-1.54.1",
-            "urls": ["https://github.com/grpc/grpc-java/archive/v1.54.1.zip"],
-        },
-        {
-            "name": "rules_pkg",
-            "sha256": "335632735e625d408870ec3e361e192e99ef7462315caa887417f4d88c4c8fb8",
-            "urls": [
-                "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.9.0/rules_pkg-0.9.0.tar.gz",
-                "https://github.com/bazelbuild/rules_pkg/releases/download/0.9.0/rules_pkg-0.9.0.tar.gz",
-            ],
-        },
-        {
-            "name": "rules_license",
-            "sha256": "6157e1e68378532d0241ecd15d3c45f6e5cfd98fc10846045509fb2a7cc9e381",
-            "urls": [
-                "https://github.com/bazelbuild/rules_license/releases/download/0.0.4/rules_license-0.0.4.tar.gz",
-                "https://mirror.bazel.build/github.com/bazelbuild/rules_license/releases/download/0.0.4/rules_license-0.0.4.tar.gz",
-            ],
-        },
-
-        # The APIs that we implement.
-        {
-            "name": "googleapis",
-            "build_file": "%s:BUILD.googleapis" % third_party,
-            "patch_cmds": ["find google -name 'BUILD.bazel' -type f -delete"],
-            "patch_cmds_win": ["Remove-Item google -Recurse -Include *.bazel"],
-            "sha256": "745cb3c2e538e33a07e2e467a15228ccbecadc1337239f6740d57a74d9cdef81",
-            "strip_prefix": "googleapis-6598bb829c9e9a534be674649ffd1b4671a821f9",
-            "url": "https://github.com/googleapis/googleapis/archive/6598bb829c9e9a534be674649ffd1b4671a821f9.zip",
-        },
-        {
-            "name": "remote_apis",
-            "build_file": "%s:BUILD.remote_apis" % third_party,
-            "sha256": "e9a69cf51df14e20b7d3623ac9580bc8fb9275dda46305788e88eb768926b9c3",
-            "strip_prefix": "remote-apis-8f539af4b407a4f649707f9632fc2b715c9aa065",
-            "url": "https://github.com/bazelbuild/remote-apis/archive/8f539af4b407a4f649707f9632fc2b715c9aa065.zip",
-        },
-        {
-            "name": "rules_cc",
-            "sha256": "3d9e271e2876ba42e114c9b9bc51454e379cbf0ec9ef9d40e2ae4cec61a31b40",
-            "strip_prefix": "rules_cc-0.0.6",
-            "url": "https://github.com/bazelbuild/rules_cc/releases/download/0.0.6/rules_cc-0.0.6.tar.gz",
-        },
-
-        # Used to format proto files
-        {
-            "name": "com_grail_bazel_toolchain",
-            "sha256": "b2d168315dd0785f170b2b306b86e577c36e812b8f8b05568f9403141f2c24dd",
-            "strip_prefix": "toolchains_llvm-0.9",
-            "url": "https://github.com/grailbio/bazel-toolchain/archive/refs/tags/0.9.tar.gz",
-            "patch_args": ["-p1"],
-            "patches": ["%s:clang_toolchain.patch" % third_party],
-        },
-        {
-            "name": "io_bazel_rules_docker",
-            "sha256": "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
-            "urls": ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
-        },
-
-        # Bazel is referenced as a dependency so that buildfarm can access the linux-sandbox as a potential execution wrapper.
-        {
-            "name": "bazel",
-            "sha256": "06d3dbcba2286d45fc6479a87ccc649055821fc6da0c3c6801e73da780068397",
-            "strip_prefix": "bazel-6.0.0",
-            "urls": ["https://github.com/bazelbuild/bazel/archive/refs/tags/6.0.0.tar.gz"],
-            "patch_args": ["-p1"],
-            "patches": ["%s/bazel:bazel_visibility.patch" % third_party],
-        },
-
-        # Optional execution wrappers
-        {
-            "name": "skip_sleep",
-            "build_file": "%s:BUILD.skip_sleep" % third_party,
-            "sha256": "03980702e8e9b757df68aa26493ca4e8573770f15dd8a6684de728b9cb8549f1",
-            "strip_prefix": "TARDIS-f54fa4743e67763bb1ad77039b3d15be64e2e564",
-            "url": "https://github.com/Unilang/TARDIS/archive/f54fa4743e67763bb1ad77039b3d15be64e2e564.zip",
-        },
-        {
-            "name": "rules_oss_audit",
-            "sha256": "02962810bcf82d0c66f929ccc163423f53773b8b154574ca956345523243e70d",
-            "strip_prefix": "rules_oss_audit-1b2690cefd5a960c181e0d89bf3c076294a0e6f4",
-            "url": "https://github.com/vmware/rules_oss_audit/archive/1b2690cefd5a960c181e0d89bf3c076294a0e6f4.zip",
-        },
-    ]
-
-def buildfarm_dependencies(repository_name = "build_buildfarm"):
-    """
-    Define all 3rd party archive rules for buildfarm
-
-    Args:
-      repository_name: the name of the repository
-    """
-    third_party = "@%s//third_party" % repository_name
-    for dependency in archive_dependencies(third_party):
-        params = {}
-        params.update(**dependency)
-        name = params.pop("name")
-        maybe(http_archive, name, **params)
-
-    # Enhanced jedis 3.2.0 containing several convenience, performance, and
-    # robustness changes.
-    # Notable features include:
-    #   Cluster request pipelining, used for batching requests for operation
-    #   monitors and CAS index.
-    #   Blocking request (b* prefix) interruptibility, using client
-    #   connection reset.
-    #   Singleton-redis-as-cluster - support treating a non-clustered redis
-    #   endpoint as a cluster of 1 node.
-    # Other changes are redis version-forward treatment of spop and visibility
-    # into errors in cluster unreachable and cluster retry exhaustion.
-    # Details at https://github.com/werkt/jedis/releases/tag/3.2.0-594c20da20
-    maybe(
-        http_jar,
-        "jedis",
-        sha256 = "72c749c02b775c0371cfc8ebcf713032910b7c6f365d958c3c000838f43f6a65",
-        urls = [
-            "https://github.com/werkt/jedis/releases/download/3.2.0-594c20da20/jedis-3.2.0-594c20da20.jar",
-        ],
-    )
-
-    maybe(
-        http_jar,
-        "opentelemetry",
-        sha256 = "0523287984978c091be0d22a5c61f0bce8267eeafbbae58c98abaf99c9396832",
-        urls = [
-            "https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v1.11.0/opentelemetry-javaagent.jar",
-        ],
-    )
-
-    http_file(
-        name = "tini",
-        sha256 = "12d20136605531b09a2c2dac02ccee85e1b874eb322ef6baf7561cd93f93c855",
-        urls = ["https://github.com/krallin/tini/releases/download/v0.18.0/tini"],
-    )
diff --git a/src/main/java/build/buildfarm/cas/cfc/CASFileCache.java b/src/main/java/build/buildfarm/cas/cfc/CASFileCache.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/cas/cfc/CASFileCache.java
index f1951f43..6cdb9b75 100644
--- a/src/main/java/build/buildfarm/cas/cfc/CASFileCache.java
+++ b/src/main/java/build/buildfarm/cas/cfc/CASFileCache.java
@@ -363,25 +363,7 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     this.delegate = delegate;
     this.delegateSkipLoad = delegateSkipLoad;
     this.directoriesIndexDbName = directoriesIndexDbName;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
     this.keyReferences = Maps.newConcurrentMap();
-    if (publishTtlMetric) {
-      casTtl =
-          Histogram.build()
-              .name("cas_ttl_s")
-              .buckets(
-                  3600, // 1 hour
-                  21600, // 6 hours
-                  86400, // 1 day
-                  345600, // 4 days
-                  604800, // 1 week
-                  1210000 // 2 weeks
-                  )
-              .help("The amount of time CAS entries live on L1 storage before expiration (seconds)")
-              .register();
-    }
-=======
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 
     entryPathStrategy = new HexBucketEntryPathStrategy(root, hexBucketLevels);
 
@@ -1976,19 +1958,6 @@ public abstract class CASFileCache implements ContentAddressableStorage {
         || e instanceof ClosedByInterruptException;
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-  private int getLockedReferenceCount(Entry e) {
-    synchronized (this) {
-      Integer keyCt = keyReferences.get(e.key);
-      int refCt = e.referenceCount;
-      if (keyCt == null) {
-        return refCt;
-      } else {
-        // When the Entry is in an unreferenced sate state ( refCt == -1 ) -
-        // we don't want to subtract from this value
-        return keyCt + Math.min(Math.max(refCt, 0), 0);
-      }
-=======
   private Entry safeStorageInsertion(String key, Entry entry) {
     Lock lock;
     try {
@@ -2035,7 +2004,20 @@ public abstract class CASFileCache implements ContentAddressableStorage {
         }
       }
       lock.unlock();
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
+    }
+  }
+
+  private int getLockedReferenceCount(Entry e) {
+    synchronized (this) {
+      Integer keyCt = keyReferences.get(e.key);
+      int refCt = e.referenceCount;
+      if (keyCt == null) {
+        return refCt;
+      } else {
+        // When the Entry is in an unreferenced sate state ( refCt == -1 ) -
+        // we don't want to subtract from this value
+        return keyCt + Math.min(Math.max(refCt, 0), 0);
+      }
     }
   }
 
@@ -2231,7 +2213,6 @@ public abstract class CASFileCache implements ContentAddressableStorage {
   }
 
   private void removeFilePath(Path path) throws IOException {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
     if (!Files.exists(path)) {
       return;
     }
@@ -2242,19 +2223,10 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     }
 
     if (Files.isDirectory(temp)) {
-      log.log(Level.INFO, "removing existing directory " + path + " for fetch");
-      Directories.remove(temp);
+      log.log(Level.FINER, "removing existing directory " + path + " for fetch");
+      Directories.remove(temp, fileStore);
     } else {
       Files.delete(temp);
-=======
-    if (Files.exists(path)) {
-      if (Files.isDirectory(path)) {
-        log.log(Level.FINER, "removing existing directory " + path + " for fetch");
-        Directories.remove(path, fileStore);
-      } else {
-        Files.delete(path);
-      }
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     }
   }
 
@@ -2273,7 +2245,6 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     return directory;
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
   // Unlocks keys
   public void unlockKeys(Iterable<String> keys) throws IOException {
     synchronized (this) {
@@ -2389,10 +2360,7 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     }
   }
 
-  public ListenableFuture<Path> putDirectory(
-=======
   public ListenableFuture<PathResult> putDirectory(
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
       Digest digest, Map<Digest, Directory> directoriesIndex, ExecutorService service) {
     // Claim lock.
     // Claim the directory path so no other threads try to create/delete it.
@@ -2742,11 +2710,7 @@ public abstract class CASFileCache implements ContentAddressableStorage {
   private void copyExternalInput(Digest digest, CancellableOutputStream out)
       throws IOException, InterruptedException {
     Retrier retrier = new Retrier(Backoff.sequential(5), Retrier.DEFAULT_IS_RETRIABLE);
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-    log.log(Level.FINE, format("downloading %s", DigestUtil.toString(digest)));
-=======
     log.log(Level.FINER, format("downloading %s", DigestUtil.toString(digest)));
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     try {
       retrier.execute(
           () -> {
@@ -2944,7 +2908,6 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     }
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
   // Atomic file system mutation helpers
   //
   // Assuming the the file system has atomic renames.
@@ -2981,18 +2944,9 @@ public abstract class CASFileCache implements ContentAddressableStorage {
     }
   }
 
-  private void deleteExpiredKey(Path path) throws IOException {
-    // We don't want publishing the metric to delay the deletion of the file.
-    // We publish the metric only after the file has been deleted.
-    long createdTime = 0;
-    if (publishTtlMetric) {
-      createdTime = path.toFile().lastModified();
-    }
-=======
   private void deleteExpiredKey(String key) throws IOException {
-    Path path = getRemovingPath(key);
-    long createdTimeMs = Files.getLastModifiedTime(path).to(MILLISECONDS);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
+      Path path = getRemovingPath(key);
+      long createdTimeMs = Files.getLastModifiedTime(path).to(MILLISECONDS);
 
     deleteFilePath(path);
 
@@ -3030,7 +2984,6 @@ public abstract class CASFileCache implements ContentAddressableStorage {
                     (expiredEntry) -> {
                       String expiredKey = expiredEntry.key;
                       try {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
                         // asnyc and possible to have gotten a key by now
                         if (keyReferences.get(key) != null) {
                           log.log(
@@ -3039,12 +2992,8 @@ public abstract class CASFileCache implements ContentAddressableStorage {
                                   "CASFileCache::putImpl ignore deletion for %s expiration due to key reference",
                                   expiredKey));
                         } else {
-                          Path path = getPath(expiredKey);
-                          deleteExpiredKey(path);
+                          deleteExpiredKey(expiredKey);
                         }
-=======
-                        deleteExpiredKey(expiredKey);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
                       } catch (NoSuchFileException eNoEnt) {
                         log.log(
                             Level.SEVERE,
@@ -3260,16 +3209,10 @@ public abstract class CASFileCache implements ContentAddressableStorage {
         Entry existingEntry = null;
         boolean inserted = false;
         try {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
           log.log(Level.FINEST, "comitting " + key + " from " + writePath);
           Path cachePath = CASFileCache.this.getPath(key);
           CASFileCache.this.renamePath(writePath, cachePath);
-          existingEntry = storage.putIfAbsent(key, entry);
-=======
-          // acquire the key lock
-          Files.createLink(CASFileCache.this.getPath(key), writePath);
           existingEntry = safeStorageInsertion(key, entry);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
           inserted = existingEntry == null;
         } catch (FileAlreadyExistsException e) {
           log.log(Level.FINER, "file already exists for " + key + ", nonexistent entry will fail");
diff --git a/src/main/java/build/buildfarm/common/OperationFailer.java b/src/main/java/build/buildfarm/common/OperationFailer.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/common/OperationFailer.java
index 49843df1..cb52cc3c 100644
--- a/src/main/java/build/buildfarm/common/OperationFailer.java
+++ b/src/main/java/build/buildfarm/common/OperationFailer.java
@@ -20,14 +20,9 @@ import build.bazel.remote.execution.v2.ExecutionStage;
 import build.buildfarm.v1test.ExecuteEntry;
 import com.google.longrunning.Operation;
 import com.google.protobuf.Any;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-import com.google.rpc.PreconditionFailure;
-import io.grpc.Status.Code;
-import java.net.InetAddress;
-import com.google.common.base.Strings;
-=======
 import com.google.rpc.Status;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
+import com.google.common.base.Strings;
+import java.net.InetAddress;
 
 /**
  * @class OperationFailer
@@ -36,35 +31,22 @@ import com.google.rpc.Status;
  *     finished and failed.
  */
 public class OperationFailer {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-
   // Not great - consider using publicName if we upstream
   private static String hostname = null;
   private static String getHostname() {
-    if (!Strings.isNullOrEmpty(hostname)) {
+      if (!Strings.isNullOrEmpty(hostname)) {
+          return hostname;
+      }
+      try {
+          hostname = InetAddress.getLocalHost().getHostName();
+      } catch (Exception e) {
+          hostname = "_unknown_host_";
+      }
       return hostname;
-    }
-    try {
-      hostname = InetAddress.getLocalHost().getHostName();
-    } catch (Exception e) {
-      hostname = "_unknown_host_";
-    }
-    return hostname;
   }
 
-  public static Operation get(
-      Operation operation,
-      ExecuteEntry executeEntry,
-      String failureType,
-      String failureMessage,
-      String failureDetails) {
-    return operation
-        .toBuilder()
-        .setName(executeEntry.getOperationName())
-=======
   public static Operation get(Operation operation, ExecuteEntry executeEntry, Status status) {
     return operation.toBuilder()
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
         .setDone(true)
         .setName(executeEntry.getOperationName())
         .setMetadata(
@@ -82,27 +64,4 @@ public class OperationFailer {
         .setStage(stage)
         .build();
   }
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-
-  private static ExecuteResponse failResponse(
-      ExecuteEntry executeEntry, String failureType, String failureMessage, String failureDetails) {
-    PreconditionFailure.Builder preconditionFailureBuilder = PreconditionFailure.newBuilder();
-    preconditionFailureBuilder
-        .addViolationsBuilder()
-        .setType(failureType)
-        .setSubject(String.format("[%s] %s", OperationFailer.getHostname(), "blobs/" + DigestUtil.toString(executeEntry.getActionDigest())))
-        .setDescription(String.format("[%s] %s", OperationFailer.getHostname(), failureDetails));
-    PreconditionFailure preconditionFailure = preconditionFailureBuilder.build();
-
-    return ExecuteResponse.newBuilder()
-        .setStatus(
-            com.google.rpc.Status.newBuilder()
-                .setCode(Code.FAILED_PRECONDITION.value())
-                .setMessage(failureMessage)
-                .addDetails(Any.pack(preconditionFailure))
-                .build())
-        .build();
-  }
-=======
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 }
diff --git a/src/main/java/build/buildfarm/common/grpc/Retrier.java b/src/main/java/build/buildfarm/common/grpc/Retrier.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/common/grpc/Retrier.java
index 43de167f..3e81bbf5 100644
--- a/src/main/java/build/buildfarm/common/grpc/Retrier.java
+++ b/src/main/java/build/buildfarm/common/grpc/Retrier.java
@@ -100,17 +100,10 @@ public class Retrier {
 
     static Supplier<Backoff> sequential(int maxAttempts) {
       return exponential(
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-          /* initial=*/ Duration.ZERO,
-          /* max=*/ Duration.ZERO,
-          /* multiplier=*/ 1.1,
-          /* jitter=*/ 0.0,
-=======
           /* initial= */ Duration.ZERO,
           /* max= */ Duration.ZERO,
           /* multiplier= */ 1.1,
           /* jitter= */ 0.0,
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
           maxAttempts);
     }
 
diff --git a/src/main/java/build/buildfarm/common/redis/RedisClient.java b/src/main/java/build/buildfarm/common/redis/RedisClient.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/common/redis/RedisClient.java
index 40ef067d..dbceb4aa 100644
--- a/src/main/java/build/buildfarm/common/redis/RedisClient.java
+++ b/src/main/java/build/buildfarm/common/redis/RedisClient.java
@@ -24,15 +24,12 @@ import java.net.SocketException;
 import java.net.SocketTimeoutException;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.function.Consumer;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
+import redis.clients.jedis.UnifiedJedis;
+import redis.clients.jedis.exceptions.JedisClusterOperationException;
 import java.util.function.Supplier;
 import java.util.logging.Level;
 import lombok.extern.java.Log;
 import redis.clients.jedis.JedisCluster;
-=======
-import redis.clients.jedis.UnifiedJedis;
-import redis.clients.jedis.exceptions.JedisClusterOperationException;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 import redis.clients.jedis.exceptions.JedisConnectionException;
 import redis.clients.jedis.exceptions.JedisDataException;
 import redis.clients.jedis.exceptions.JedisException;
@@ -83,15 +80,10 @@ public class RedisClient implements Closeable {
     }
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
   // We store the factory in case we want to re-create the jedis client.
-  private Supplier<JedisCluster> jedisClusterFactory;
+  private Supplier<UnifiedJedis> unifiedJedisFactory;
 
-  // The jedis client.
-  private JedisCluster jedis;
-=======
   private final UnifiedJedis jedis;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 
   private boolean closed = false;
 
@@ -100,15 +92,15 @@ public class RedisClient implements Closeable {
   }
 
   public RedisClient(
-      Supplier<JedisCluster> jedisClusterFactory,
+      Supplier<UnifiedJedis> unifiedJedisFactory,
       int reconnectClientAttempts,
       int reconnectClientWaitDurationMs) {
     try {
-      this.jedis = jedisClusterFactory.get();
+      this.jedis = unifiedJedisFactory.get();
     } catch (Exception e) {
       log.log(Level.SEVERE, "Unable to establish redis client: " + e.toString());
     }
-    this.jedisClusterFactory = jedisClusterFactory;
+    this.unifiedJedisFactory = unifiedJedisFactory;
     this.reconnectClientAttempts = reconnectClientAttempts;
     this.reconnectClientWaitDurationMs = reconnectClientWaitDurationMs;
   }
@@ -205,7 +197,7 @@ public class RedisClient implements Closeable {
   private void rebuildJedisCluser() {
     try {
       log.log(Level.SEVERE, "Rebuilding redis client");
-      jedis = jedisClusterFactory.get();
+      jedis = unifiedJedisFactory.get();
     } catch (Exception e) {
       redisClientRebuildErrorCounter.inc();
       log.log(Level.SEVERE, "Failed to rebuild redis client");
diff --git a/src/main/java/build/buildfarm/instance/shard/RedisShardBackplane.java b/src/main/java/build/buildfarm/instance/shard/RedisShardBackplane.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/instance/shard/RedisShardBackplane.java
index 808614d3..22b61910 100644
--- a/src/main/java/build/buildfarm/instance/shard/RedisShardBackplane.java
+++ b/src/main/java/build/buildfarm/instance/shard/RedisShardBackplane.java
@@ -538,22 +538,18 @@ public class RedisShardBackplane implements Backplane {
     // Construct a single redis client to be used throughout the entire backplane.
     // We wish to avoid various synchronous and error handling issues that could occur when using
     // multiple clients.
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
     client =
         new RedisClient(
             jedisClusterFactory,
             configs.getBackplane().getReconnectClientAttempts(),
             configs.getBackplane().getReconnectClientWaitDurationMs());
-    // Create containers that make up the backplane
-    state = DistributedStateCreator.create(client);
-=======
+  // Create containers that make up the backplane
     start(new RedisClient(jedisClusterFactory.get()), clientPublicName);
-  }
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
+}
 
   private void start(RedisClient client, String clientPublicName) throws IOException {
-    // Create containers that make up the backplane
-    start(client, DistributedStateCreator.create(client), clientPublicName);
+      // Create containers that make up the backplane
+      start(client, DistributedStateCreator.create(client), clientPublicName);
   }
 
   @VisibleForTesting
diff --git a/src/main/java/build/buildfarm/instance/shard/Writes.java b/src/main/java/build/buildfarm/instance/shard/Writes.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/instance/shard/Writes.java
index 11433de5..d085107c 100644
--- a/src/main/java/build/buildfarm/instance/shard/Writes.java
+++ b/src/main/java/build/buildfarm/instance/shard/Writes.java
@@ -39,6 +39,7 @@ import java.io.IOException;
 import java.util.UUID;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
 
 class Writes {
   private final LoadingCache<BlobWriteKey, Instance> blobWriteInstances;
@@ -116,20 +117,22 @@ class Writes {
     }
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-  Writes(CacheLoader<BlobWriteKey, Instance> instanceSupplier) {
-    this(instanceSupplier, /* writeExpiresAfter=*/ 1);
-=======
   Writes(Supplier<Instance> instanceSupplier) {
     this(instanceSupplier, /* writeExpiresAfter= */ 1);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
   }
 
-  Writes(CacheLoader<BlobWriteKey, Instance> instanceSupplier, long writeExpiresAfter) {
+  Writes(Supplier<Instance> instanceSupplier, long writeExpiresAfter) {
     blobWriteInstances =
         CacheBuilder.newBuilder()
             .expireAfterWrite(writeExpiresAfter, TimeUnit.HOURS)
-            .build(instanceSupplier);
+            .build(
+                new CacheLoader<BlobWriteKey, Instance>() {
+                  @SuppressWarnings("NullableProblems")
+                  @Override
+                  public Instance load(BlobWriteKey key) {
+                    return instanceSupplier.get();
+                  }
+                });
   }
 
   public Write get(
diff --git a/src/main/java/build/buildfarm/worker/Pipeline.java b/src/main/java/build/buildfarm/worker/Pipeline.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/worker/Pipeline.java
index f3def5f9..a753cf3f 100644
--- a/src/main/java/build/buildfarm/worker/Pipeline.java
+++ b/src/main/java/build/buildfarm/worker/Pipeline.java
@@ -14,7 +14,6 @@
 
 package build.buildfarm.worker;
 
-import com.google.common.util.concurrent.SettableFuture;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
@@ -25,38 +24,26 @@ import lombok.extern.java.Log;
 @Log
 public class Pipeline {
   private final Map<PipelineStage, Thread> stageThreads;
-  private final PipelineStageThreadGroup stageThreadGroup;
   private final Map<PipelineStage, Integer> stageClosePriorities;
   private Thread joiningThread = null;
   private boolean closing = false;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-=======
 
   // FIXME ThreadGroup?
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 
   public Pipeline() {
     stageThreads = new HashMap<>();
     stageClosePriorities = new HashMap<>();
-    stageThreadGroup = new PipelineStageThreadGroup();
   }
 
   public void add(PipelineStage stage, int closePriority) {
-    stageThreads.put(stage, new Thread(stageThreadGroup, stage, stage.name()));
+    stageThreads.put(stage, new Thread(stage));
     if (closePriority < 0) {
       throw new IllegalArgumentException("closePriority cannot be negative");
     }
     stageClosePriorities.put(stage, closePriority);
   }
 
-  /**
-   * Start the pipeline.
-   *
-   * <p>You can provide callback which is invoked when any stage has an uncaught exception, for
-   * instance to shutdown the worker gracefully
-   */
-  public void start(SettableFuture<Void> uncaughtExceptionFuture) {
-    stageThreadGroup.setUncaughtExceptionFuture(uncaughtExceptionFuture);
+  public void start() {
     for (Thread stageThread : stageThreads.values()) {
       stageThread.start();
     }
diff --git a/src/main/java/build/buildfarm/worker/PipelineStage.java b/src/main/java/build/buildfarm/worker/PipelineStage.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/worker/PipelineStage.java
index b90fdad9..b33cf9ea 100644
--- a/src/main/java/build/buildfarm/worker/PipelineStage.java
+++ b/src/main/java/build/buildfarm/worker/PipelineStage.java
@@ -58,17 +58,8 @@ public abstract class PipelineStage implements Runnable {
 
   @Override
   public void run() {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-    try {
-      runInterruptible();
-    } catch (InterruptedException e) {
-      // ignore
-    } finally {
-      boolean wasInterrupted = Thread.interrupted();
-=======
     boolean keepRunningStage = true;
     while (keepRunningStage) {
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
       try {
         runInterruptible();
 
diff --git a/src/main/java/build/buildfarm/worker/shard/CFCExecFileSystem.java b/src/main/java/build/buildfarm/worker/shard/CFCExecFileSystem.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/worker/shard/CFCExecFileSystem.java
index f977b36f..bb667fcd 100644
--- a/src/main/java/build/buildfarm/worker/shard/CFCExecFileSystem.java
+++ b/src/main/java/build/buildfarm/worker/shard/CFCExecFileSystem.java
@@ -215,15 +215,11 @@ class CFCExecFileSystem implements ExecFileSystem {
           onKey.accept(key);
           if (digest.getSizeBytes() != 0) {
             try {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
               // Coordinated with the CAS - consider adding an API for safe path
               // access
               synchronized (fileCache) {
-                Files.createLink(filePath, fileCachePath);
+                Files.createLink(path, fileCachePath);
               }
-=======
-              Files.createLink(path, fileCachePath);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
             } catch (IOException e) {
               return immediateFailedFuture(e);
             }
@@ -303,25 +299,6 @@ class CFCExecFileSystem implements ExecFileSystem {
                     onKey,
                     inputDirectories));
       } else {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-        downloads =
-            concat(
-                downloads,
-                ImmutableList.of(
-                    transform(
-                        linkDirectory(dirPath, digest, directoriesIndex),
-                        (result) -> {
-                          // note: this could non-trivial make sync due to
-                          // the way decrementReferences is implemented.
-                          // we saw null entries in the built immutable list
-                          // without synchronization
-                          synchronized (inputDirectories) {
-                            inputDirectories.add(digest);
-                          }
-                          return null;
-                        },
-                        fetchService)));
-=======
         linkedDirectories.add(
             transform(
                 linkDirectory(dirPath, digest, directoriesIndex),
@@ -333,7 +310,6 @@ class CFCExecFileSystem implements ExecFileSystem {
                   return null;
                 },
                 fetchService));
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
       }
       if (Thread.currentThread().isInterrupted()) {
         break;
@@ -460,14 +436,6 @@ class CFCExecFileSystem implements ExecFileSystem {
     ImmutableList.Builder<String> inputFiles = new ImmutableList.Builder<>();
     ImmutableList.Builder<Digest> inputDirectories = new ImmutableList.Builder<>();
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-    // Get lock keys so we can increment them prior to downloading
-    // and no other threads can to create/delete during
-    // eviction or the invocation of fetchInputs
-    Iterable<String> lockedKeys =
-        fileCache.lockDirectoryKeys(execDir, inputRootDigest, directoriesIndex);
-
-=======
     Set<Path> linkedInputDirectories =
         ImmutableSet.copyOf(
             Iterables.transform(
@@ -476,7 +444,12 @@ class CFCExecFileSystem implements ExecFileSystem {
 
     log.log(
         Level.FINER, "ExecFileSystem::createExecDir(" + operationName + ") calling fetchInputs");
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
+    // Get lock keys so we can increment them prior to downloading
+    // and no other threads can to create/delete during
+    // eviction or the invocation of fetchInputs
+    Iterable<String> lockedKeys =
+        fileCache.lockDirectoryKeys(execDir, inputRootDigest, directoriesIndex);
+
     Iterable<ListenableFuture<Void>> fetchedFutures =
         fetchInputs(
             execDir,
@@ -528,12 +501,8 @@ class CFCExecFileSystem implements ExecFileSystem {
       if (!success) {
         log.log(Level.INFO, "Failed to create exec dir (" + operationName + "), cleaning up");
         fileCache.decrementReferences(inputFiles.build(), inputDirectories.build());
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
         fileCache.unlockKeys(lockedKeys);
-        Directories.remove(execDir);
-=======
         Directories.remove(execDir, fileStore);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
       }
     }
 
diff --git a/src/main/java/build/buildfarm/worker/shard/RemoteCasWriter.java b/src/main/java/build/buildfarm/worker/shard/RemoteCasWriter.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/worker/shard/RemoteCasWriter.java
index 2086f54d..0ba8db48 100644
--- a/src/main/java/build/buildfarm/worker/shard/RemoteCasWriter.java
+++ b/src/main/java/build/buildfarm/worker/shard/RemoteCasWriter.java
@@ -50,22 +50,13 @@ import lombok.extern.java.Log;
 
 @Log
 public class RemoteCasWriter implements CasWriter {
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-  private final Set<String> workerSet;
-=======
   private final Backplane backplane;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
   private final LoadingCache<String, Instance> workerStubs;
   private final Retrier retrier;
 
   public RemoteCasWriter(
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-      Set<String> workerSet, LoadingCache<String, Instance> workerStubs, Retrier retrier) {
-    this.workerSet = workerSet;
-=======
       Backplane backplane, LoadingCache<String, Instance> workerStubs, Retrier retrier) {
     this.backplane = backplane;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     this.workerStubs = workerStubs;
     this.retrier = retrier;
   }
@@ -86,11 +77,7 @@ public class RemoteCasWriter implements CasWriter {
       Throwable cause = e.getCause();
       Throwables.throwIfInstanceOf(cause, IOException.class);
       Throwables.throwIfUnchecked(cause);
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-      throw new RuntimeException(cause);
-=======
       throw new IOException(cause);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     }
   }
 
@@ -100,46 +87,6 @@ public class RemoteCasWriter implements CasWriter {
     String workerName = getRandomWorker();
     Write write = getCasMemberWrite(digest, digestFunction, workerName);
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-    try {
-      return streamIntoWriteFuture(in, write, digest).get();
-    } catch (ExecutionException e) {
-      Throwable cause = e.getCause();
-      Throwables.throwIfInstanceOf(cause, IOException.class);
-      // prevent a discard of this frame
-      Status status = Status.fromThrowable(cause);
-      throw status.asRuntimeException();
-    }
-  }
-
-  private Write getCasMemberWrite(
-      Digest digest, DigestFunction.Value digestFunction, String workerName) throws IOException {
-    Instance casMember = workerStub(workerName);
-
-    return casMember.getBlobWrite(
-        Compressor.Value.IDENTITY,
-        digest,
-        digestFunction,
-        UUID.randomUUID(),
-        RequestMetadata.getDefaultInstance());
-  }
-
-  @Override
-  public void insertBlob(Digest digest, DigestFunction.Value digestFunction, ByteString content)
-      throws IOException, InterruptedException {
-    insertBlobToCasMember(digest, digestFunction, content);
-  }
-
-  private void insertBlobToCasMember(Digest digest, DigestFunction.Value digestFunction, ByteString content)
-      throws IOException, InterruptedException {
-    try (InputStream in = content.newInput()) {
-      retrier.execute(() -> writeToCasMember(digest, digestFunction, in));
-    } catch (RetryException e) {
-      Throwable cause = e.getCause();
-      Throwables.throwIfInstanceOf(cause, IOException.class);
-      Throwables.throwIfUnchecked(cause);
-      throw new RuntimeException(cause);
-=======
     write.reset();
     try {
       return streamIntoWriteFuture(in, write, digest).get();
@@ -149,7 +96,6 @@ public class RemoteCasWriter implements CasWriter {
       // prevent a discard of this frame
       Status status = Status.fromThrowable(cause);
       throw new IOException(status.asException());
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     }
   }
 
diff --git a/src/main/java/build/buildfarm/worker/shard/Worker.java b/src/main/java/build/buildfarm/worker/shard/Worker.java
remerge CONFLICT (content): Merge conflict in src/main/java/build/buildfarm/worker/shard/Worker.java
index 0c4628fc..60f8cf34 100644
--- a/src/main/java/build/buildfarm/worker/shard/Worker.java
+++ b/src/main/java/build/buildfarm/worker/shard/Worker.java
@@ -21,7 +21,6 @@ import static build.buildfarm.common.io.Utils.getUser;
 import static com.google.common.base.Preconditions.checkArgument;
 import static com.google.common.base.Preconditions.checkState;
 import static java.util.concurrent.Executors.newSingleThreadExecutor;
-import static java.util.concurrent.Executors.newSingleThreadScheduledExecutor;
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static java.util.logging.Level.INFO;
 import static java.util.logging.Level.SEVERE;
@@ -42,10 +41,7 @@ import build.buildfarm.common.config.Cas;
 import build.buildfarm.common.config.GrpcMetrics;
 import build.buildfarm.common.grpc.Retrier;
 import build.buildfarm.common.grpc.Retrier.Backoff;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-=======
 import build.buildfarm.common.grpc.TracingMetadataUtils.ServerHeadersInterceptor;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 import build.buildfarm.common.services.ByteStreamService;
 import build.buildfarm.common.services.ContentAddressableStorageService;
 import build.buildfarm.instance.Instance;
@@ -66,11 +62,6 @@ import build.buildfarm.worker.SuperscalarPipelineStage;
 import build.buildfarm.worker.resources.LocalResourceSetUtils;
 import com.google.common.cache.LoadingCache;
 import com.google.common.collect.Lists;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-import com.google.common.util.concurrent.SettableFuture;
-import com.google.devtools.common.options.OptionsParsingException;
-=======
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 import com.google.longrunning.Operation;
 import com.google.protobuf.ByteString;
 import com.google.protobuf.Duration;
@@ -96,24 +87,11 @@ import java.util.Random;
 import java.util.UUID;
 import java.util.concurrent.Executor;
 import java.util.concurrent.ExecutorService;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.ScheduledFuture;
-=======
 import java.util.concurrent.atomic.AtomicBoolean;
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 import java.util.logging.Level;
 import javax.annotation.Nullable;
 import javax.naming.ConfigurationException;
 import lombok.extern.java.Log;
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.boot.SpringApplication;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-import org.springframework.context.ApplicationContext;
-import org.springframework.context.annotation.ComponentScan;
-=======
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
 
 @Log
 public final class Worker extends LoggingMain {
@@ -158,7 +136,6 @@ public final class Worker extends LoggingMain {
   private LoadingCache<String, Instance> workerStubs;
   private AtomicBoolean released = new AtomicBoolean(true);
 
-  @Autowired private ApplicationContext springContext;
   /**
    * The method will prepare the worker for graceful shutdown when the worker is ready. Note on
    * using stderr here instead of log. By the time this is called in PreDestroy, the log is no
@@ -206,46 +183,8 @@ public final class Worker extends LoggingMain {
     }
   }
 
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-  private void exitPostPipelineFailure() {
-    // Shutdown the worker if a pipeline fails. By means of the spring lifecycle
-    // hooks - e.g. the `PreDestroy` hook here - it will attempt to gracefully
-    // spin down the pipeline
-
-    // By calling these spring shutdown facilities; we're open to the risk that
-    // a subsystem may be hanging a criticial thread indeffinitly. Deadline the
-    // shutdown workflow to ensure we don't leave a zombie worker in this
-    // situation
-    ScheduledExecutorService shutdownDeadlineExecutor = newSingleThreadScheduledExecutor();
-
-    // This may be shorter than the action timeout; assume we have interrupted
-    // actions in a fatal uncaught exception.
-    int forceShutdownDeadline = 60;
-    ScheduledFuture<?> termFuture =
-        shutdownDeadlineExecutor.schedule(
-            new Runnable() {
-              public void run() {
-                log.log(
-                    Level.SEVERE,
-                    String.format(
-                        "Force terminating due to shutdown deadline exceeded (%d seconds)",
-                        forceShutdownDeadline));
-                System.exit(1);
-              }
-            },
-            forceShutdownDeadline,
-            SECONDS);
-
-    // Consider defining exit codes to better afford out of band instance
-    // recovery
-    int code = SpringApplication.exit(springContext, () -> 1);
-    termFuture.cancel(false);
-    shutdownDeadlineExecutor.shutdown();
-    System.exit(code);
-=======
   private Worker() {
     super("BuildFarmShardWorker");
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
   }
 
   private Operation stripOperation(Operation operation) {
@@ -645,11 +584,7 @@ public final class Worker extends LoggingMain {
     CasWriter writer;
     if (!configs.getWorker().getCapabilities().isCas()) {
       Retrier retrier = new Retrier(Backoff.sequential(5), Retrier.DEFAULT_IS_RETRIABLE);
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-      writer = new RemoteCasWriter(backplane.getStorageWorkers(), workerStubs, retrier);
-=======
       writer = new RemoteCasWriter(backplane, workerStubs, retrier);
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
     } else {
       writer = new LocalCasWriter(execFileSystem);
     }
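
For readability, the resolved CasWriter selection in Worker.java reads as follows once the conflict markers are dropped (assuming the ece844a1 side is kept, as the surrounding context lines indicate). The notable change is that RemoteCasWriter now receives the backplane itself rather than a one-time snapshot from backplane.getStorageWorkers(), presumably so the storage-worker set is resolved when writes happen instead of being captured at startup:

    CasWriter writer;
    if (!configs.getWorker().getCapabilities().isCas()) {
      Retrier retrier = new Retrier(Backoff.sequential(5), Retrier.DEFAULT_IS_RETRIABLE);
      // kept side: pass the backplane so storage workers are looked up per write
      writer = new RemoteCasWriter(backplane, workerStubs, retrier);
    } else {
      writer = new LocalCasWriter(execFileSystem);
    }
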
@@ -695,13 +630,7 @@ public final class Worker extends LoggingMain {
     PrometheusPublisher.startHttpServer(configs.getPrometheusPort());
     startFailsafeRegistration();
 
-    // Listen for pipeline unhandled exceptions
-    ExecutorService pipelineExceptionExecutor = newSingleThreadExecutor();
-    SettableFuture<Void> pipelineExceptionFuture = SettableFuture.create();
-    pipelineExceptionFuture.addListener(this::exitPostPipelineFailure, pipelineExceptionExecutor);
-
-    pipeline.start(pipelineExceptionFuture);
-
+    pipeline.start();
     healthCheckMetric.labels("start").inc();
     executionSlotsTotal.set(configs.getWorker().getExecuteStageWidth());
     inputFetchSlotsTotal.set(configs.getWorker().getInputFetchStageWidth());
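
Also worth flagging for review: the kept side drops exitPostPipelineFailure (the Spring-driven shutdown with its 60-second forced-exit deadline) together with the SettableFuture pipeline-exception listener, which appears to mean pipeline failures on this branch are no longer routed through SpringApplication.exit. The resolved tail of the startup hunk, with the conflict noise removed, reads:

    PrometheusPublisher.startHttpServer(configs.getPrometheusPort());
    startFailsafeRegistration();

    // kept side: start the pipeline directly, no failure future is wired up
    pipeline.start();
    healthCheckMetric.labels("start").inc();
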
diff --git a/src/test/java/build/buildfarm/cas/cfc/CASFileCacheTest.java b/src/test/java/build/buildfarm/cas/cfc/CASFileCacheTest.java
remerge CONFLICT (content): Merge conflict in src/test/java/build/buildfarm/cas/cfc/CASFileCacheTest.java
index ad3ef220..e24c1426 100644
--- a/src/test/java/build/buildfarm/cas/cfc/CASFileCacheTest.java
+++ b/src/test/java/build/buildfarm/cas/cfc/CASFileCacheTest.java
@@ -1112,23 +1112,6 @@ class CASFileCacheTest {
     CASFileCache flakyExternalCAS =
         new CASFileCache(
             root,
-<<<<<<< 8bbaada0 (Revert "Temporary fix to use the official Blake3 support")
-            /* maxSizeInBytes=*/ 1024,
-            /* maxEntrySizeInBytes=*/ 1024,
-            /* hexBucketLevels=*/ 1,
-            storeFileDirsIndexInMemory,
-            /* publishTtlMetric=*/ false,
-            /* execRootFallback=*/ false,
-            DIGEST_UTIL,
-            expireService,
-            /* accessRecorder=*/ directExecutor(),
-            storage,
-            /* directoriesIndexDbName=*/ ":memory:",
-            /* onPut=*/ digest -> {},
-            /* onExpire=*/ digests -> {},
-            /* delegate=*/ null,
-            /* delegateSkipLoad=*/ false) {
-=======
             /* maxSizeInBytes= */ 1024,
             /* maxEntrySizeInBytes= */ 1024,
             /* hexBucketLevels= */ 1,
@@ -1143,7 +1126,6 @@ class CASFileCacheTest {
             /* onExpire= */ digests -> {},
             /* delegate= */ null,
             /* delegateSkipLoad= */ false) {
->>>>>>> ece844a1 (Reduce DUPLICATE_OUTPUT_STREAM future to write)
           boolean throwUnavailable = true;
 
           @Override
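
For reviewers scanning the CASFileCacheTest hunk: the conflict appears to be whitespace-only, differing in the spacing of the inline parameter-name comments passed to the CASFileCache constructor. The kept ece844a1 side uses the spaced form, for example:

    /* maxSizeInBytes=*/ 1024,    // 8bbaada0 side, removed
    /* maxSizeInBytes= */ 1024,   // ece844a1 side, kept
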

chenj-hub force-pushed the jackies/upgrade-bazel-buildfarm-to-v2.10.2 branch from de650e7 to 64b71b2 on August 13, 2024 at 20:42