Bump TFLite Version #345

Merged 3 commits into google:main on Jul 8, 2024
Conversation

boomanaiden154
Collaborator

This patch bumps the TFLite version (along with versions of dependencies) so that we can get rid of the pthreadpool dependency.

@boomanaiden154 boomanaiden154 marked this pull request as ready for review July 4, 2024 06:49
@boomanaiden154 boomanaiden154 requested review from mtrofin and petrhosek and removed request for mtrofin July 4, 2024 06:49
@boomanaiden154
Collaborator Author

Maybe this isn't ready. I'm seeing some LLVM test failures:

********************
FAIL: LLVM :: CodeGen/MLRegAlloc/dev-mode-prio-logging.ll (582 of 24983)
******************** TEST 'LLVM :: CodeGen/MLRegAlloc/dev-mode-prio-logging.ll' FAILED ********************
Exit Code: 134

Command Output (stdout):
--
Writing output spec to /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel/output_spec.json.

--
Command Output (stderr):
--
RUN: at line 6: /llvm-project/build/bin/llc -o /dev/null -mtriple=x86_64-linux-unknown -regalloc=greedy    -regalloc-enable-priority-advisor=development    -regalloc-priority-training-log=/llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1    < /llvm-project/llvm/test/CodeGen/MLRegAlloc/Inputs/input.ll
+ /llvm-project/build/bin/llc -o /dev/null -mtriple=x86_64-linux-unknown -regalloc=greedy -regalloc-enable-priority-advisor=development -regalloc-priority-training-log=/llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1
RUN: at line 10: "/usr/bin/python3.10" /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/log_reader.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1 > /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1.readable
+ /usr/bin/python3.10 /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/log_reader.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1
RUN: at line 11: /llvm-project/build/bin/FileCheck --input-file /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1.readable /llvm-project/llvm/test/CodeGen/MLRegAlloc/dev-mode-prio-logging.ll --check-prefixes=CHECK,NOML
+ /llvm-project/build/bin/FileCheck --input-file /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1.readable /llvm-project/llvm/test/CodeGen/MLRegAlloc/dev-mode-prio-logging.ll --check-prefixes=CHECK,NOML
RUN: at line 12: diff /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1.readable /llvm-project/llvm/test/CodeGen/MLRegAlloc/Inputs/reference-prio-log-noml.txt
+ diff /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp1.readable /llvm-project/llvm/test/CodeGen/MLRegAlloc/Inputs/reference-prio-log-noml.txt
RUN: at line 14: rm -rf /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp && mkdir /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp
+ rm -rf /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp
+ mkdir /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp
RUN: at line 15: "/usr/bin/python3.10" /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/gen-regalloc-priority-test-model.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel
+ /usr/bin/python3.10 /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/gen-regalloc-priority-test-model.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel
2024-07-04 17:31:49.868359: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:31:49.934849: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:31:49.935581: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-04 17:31:51.274415: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
RUN: at line 16: "/usr/bin/python3.10" /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp
+ /usr/bin/python3.10 /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp
2024-07-04 17:31:55.289130: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:31:55.361753: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:31:55.362503: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-04 17:31:56.767185: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-07-04 17:31:59.944727: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:364] Ignored output_format.
2024-07-04 17:31:59.944897: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:367] Ignored drop_control_dependency.
2024-07-04 17:31:59.946121: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel
2024-07-04 17:31:59.946628: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2024-07-04 17:31:59.946705: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel
2024-07-04 17:31:59.949693: F tensorflow/tsl/platform/default/env.cc:74] Check failed: ret == 0 (11 vs. 0)Thread tf_Compute creation via pthread_create() failed.
/llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.script: line 10: 386571 Aborted                 "/usr/bin/python3.10" /llvm-project/llvm/test/CodeGen/MLRegAlloc/../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp_savedmodel /llvm-project/build/test/CodeGen/MLRegAlloc/Output/dev-mode-prio-logging.ll.tmp

--

********************
******************** TEST 'LLVM :: Transforms/Inline/ML/bounds-checks-rewards.ll' FAILED ********************
Exit Code: 134

Command Output (stdout):
--
Output model to: [/llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel]
The model will always return: 1
Writing output spec to /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel/output_spec.json.

--
Command Output (stderr):
--
RUN: at line 10: rm -rf /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp
+ rm -rf /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp
RUN: at line 11: rm -rf /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel
+ rm -rf /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel
RUN: at line 12: "/usr/bin/python3.10" /llvm-project/llvm/test/Transforms/Inline/ML/../../../../lib/Analysis/models/gen-inline-oz-test-model.py /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel
+ /usr/bin/python3.10 /llvm-project/llvm/test/Transforms/Inline/ML/../../../../lib/Analysis/models/gen-inline-oz-test-model.py /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel
2024-07-04 17:30:35.732872: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:30:35.804436: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:30:35.805205: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-04 17:30:37.138657: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
RUN: at line 13: "/usr/bin/python3.10" /llvm-project/llvm/test/Transforms/Inline/ML/../../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp
+ /usr/bin/python3.10 /llvm-project/llvm/test/Transforms/Inline/ML/../../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp
2024-07-04 17:30:40.305600: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:30:40.346275: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-04 17:30:40.346837: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-04 17:30:41.626710: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-07-04 17:30:44.001516: F tensorflow/tsl/platform/default/env.cc:74] Check failed: ret == 0 (11 vs. 0)Thread tf_Compute creation via pthread_create() failed.
/llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.script: line 10: 363961 Aborted                 "/usr/bin/python3.10" /llvm-project/llvm/test/Transforms/Inline/ML/../../../../lib/Analysis/models/saved-model-to-tflite.py /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp_savedmodel /llvm-project/build/test/Transforms/Inline/ML/Output/bounds-checks-rewards.ll.tmp

--

********************

@boomanaiden154
Collaborator Author

Never mind. That's a thread-limit issue on my cloudtop: it goes away when the tests are run individually, and it also reproduces with the previous version of build_tflite.sh.
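For context, the `Check failed: ret == 0 (11 vs. 0)` in the logs above is `pthread_create()` returning error 11 (`EAGAIN`), which on Linux usually means a per-user process/thread limit was exhausted. A minimal sketch for inspecting the relevant limits (the commands are standard, but exact values and the suggested limit below are machine-specific):

```shell
# EAGAIN (11) from pthread_create() usually means a per-user resource
# limit was exhausted. Show the soft limit on user processes/threads:
ulimit -u

# On Linux, the system-wide thread ceiling lives here (ignored elsewhere):
cat /proc/sys/kernel/threads-max 2>/dev/null || true

# To raise the soft limit for the current shell (value is illustrative):
#   ulimit -u 65535
```

Running many TensorFlow processes in parallel (as lit does) multiplies the thread count, which is why the failures disappear when the tests are run one at a time.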

@mtrofin
Collaborator

mtrofin commented Jul 8, 2024

can you remove the pthreadpool installation in the setup script?

@boomanaiden154
Collaborator Author

can you remove the pthreadpool installation in the setup script?

Yep. Removed.

@boomanaiden154 boomanaiden154 merged commit 3990d98 into google:main Jul 8, 2024
15 checks passed
@boomanaiden154 boomanaiden154 deleted the tflite-update-7-1-24 branch July 8, 2024 17:50