
How to distinguish between stock TensorFlow and Intel TensorFlow for 2.5.0 version quickly and conveniently. #89

Open
zhixingheyi-tian opened this issue Aug 26, 2021 · 5 comments

Comments

@zhixingheyi-tian

How can I quickly and conveniently distinguish between stock TensorFlow and Intel TensorFlow for version 2.5.0?

@dmsuehir @ashahba Could you please advise on this?

@dmsuehir
Contributor

@zhixingheyi-tian The documentation here has a section called "sanity check" with a sample script showing how to do this. Using the info from that script with pip install tensorflow==2.5.0 and pip install intel-tensorflow==2.5.0, I found that this worked:

from tensorflow.python.util import _pywrap_util_port
_pywrap_util_port.IsMklEnabled()

It returns True for intel-tensorflow and False for stock tensorflow.
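For convenience, a minimal self-contained version of that sanity check might look like the sketch below; it only uses the two calls quoted above, plus the TensorFlow version for context:

# Sanity-check sketch: report whether the installed TensorFlow build was
# compiled with MKL/oneDNN support (True for intel-tensorflow==2.5.0,
# False for stock tensorflow==2.5.0, per the comment above).
import tensorflow as tf
from tensorflow.python.util import _pywrap_util_port

print("TensorFlow version:", tf.__version__)
print("MKL/oneDNN enabled:", _pywrap_util_port.IsMklEnabled())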

@zhixingheyi-tian
Author

@dmsuehir
Thanks very much.

For stock TensorFlow 2.5.0, if I enable os.environ["TF_ENABLE_ONEDNN_OPTS"] = '1', is it completely equivalent to Intel TensorFlow 2.5.0, and does it include all of the optimizations from Intel TensorFlow?

Thanks
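For reference, this is how I am using the flag; a minimal sketch, assuming the variable is read when TensorFlow initializes and therefore has to be set before the import:

# Minimal sketch (stock TensorFlow 2.5.0): set TF_ENABLE_ONEDNN_OPTS before
# importing TensorFlow, since the flag is read when the library initializes.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf
print("TensorFlow version:", tf.__version__)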

@dmsuehir
Contributor

dmsuehir commented Sep 24, 2021

@zhixingheyi-tian There's a table in the documentation here under the section called "Differences between Intel Optimization for Tensorflow and official TensorFlow for running on Intel CPUs after v2.5" that compares Intel TensorFlow to stock TensorFlow. It also shows which features are enabled with TF_ENABLE_ONEDNN_OPTS. You'll be able to see from there that it's not completely equivalent.

ashahba pushed a commit that referenced this issue Apr 1, 2022
…#89)

* Add PyTorch SPR Mask R-CNN package specs, docs, and quickstart files

* Update build arg

* Doc updates for training

* Print status

* update error handling

* try to run without pretrained model for training

* Fix else

* Doc update

* Doc updates

* Regenerated dockerfiles

* Add new line at EOF
@PLEX-GR00T

PLEX-GR00T commented May 1, 2022

Does this work in Google Colab? I tried pip install intel-tensorflow in Google Colab, and it also shows True for

from tensorflow.python.util import _pywrap_util_port
_pywrap_util_port.IsMklEnabled()

But I no longer have GPU access in Colab afterwards. Why is that, and what should I do? It is slower than the usual Colab GPU speed.
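One quick way to see what the pip install changed is to check whether the currently installed build can still see a GPU at all; a small sketch using standard TensorFlow calls, nothing Colab-specific:

# Sketch: check whether the installed TensorFlow build has CUDA support and
# can see the Colab GPU. If the intel-tensorflow wheel (a CPU-focused build)
# replaced the stock GPU-enabled one, no GPUs should be listed here.
import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))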

@sramakintel
Contributor

@PLEX-GR00T and @zhixingheyi-tian, is your issue resolved?
