Unresolved symbol: _Z13get_global_idj on Intel GPU #102

Open
Darwin2011 opened this issue Jun 27, 2017 · 8 comments

@Darwin2011

Hello,

I am trying to follow your tutorial to build TensorFlow from the dev/intel_gpu branch. When I run a minimal test case in TensorFlow, it fails with the following error. Could you give me some suggestions on how to run it on Intel GPUs?

Unresolved symbol: _Z13get_global_idj
Aborting...

After some searching, I found that this issue is similar to codeplaysoftware/computecpp-sdk#19
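
The test I ran is essentially of this form (an illustrative sketch, not my exact script; the device string assumes the SYCL build exposes the GPU as /device:SYCL:0):

```python
# Illustrative minimal test, not the exact script; assumes the SYCL build
# registers the Intel GPU as /device:SYCL:0.
import tensorflow as tf

with tf.device('/device:SYCL:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # The kernel launched here is where "Unresolved symbol: _Z13get_global_idj" appears.
    print(sess.run(c))
```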

And the following is my OpenCL information.

Number of platforms:                             1
  Platform Profile:                              FULL_PROFILE
  Platform Version:                              OpenCL 1.2 beignet 1.2 (git-097365e)
  Platform Name:                                 Intel Gen OCL Driver
  Platform Vendor:                               Intel
  Platform Extensions:                           cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_3d_image_writes cl_khr_image2d_from_buffer cl_khr_spir cl_khr_icd cl_intel_accelerator cl_intel_subgroups


  Platform Name:                                 Intel Gen OCL Driver
Number of devices:                               1
  Device Type:                                   CL_DEVICE_TYPE_GPU
  Device ID:                                     32902
  Max compute units:                             72
  Max work items dimensions:                     3
    Max work items[0]:                           512
    Max work items[1]:                           512
    Max work items[2]:                           512
  Max work group size:                           512
  Preferred vector width char:                   16
  Preferred vector width short:                  8
  Preferred vector width int:                    4
  Preferred vector width long:                   2
  Preferred vector width float:                  4
  Preferred vector width double:                 0
  Native vector width char:                      8
  Native vector width short:                     8
  Native vector width int:                       4
  Native vector width long:                      2
  Native vector width float:                     4
  Native vector width double:                    2
  Max clock frequency:                           1000Mhz
  Address bits:                                  32
  Max memory allocation:                         3221225472
  Image support:                                 Yes
  Max number of images read arguments:           128
  Max number of images write arguments:          8
  Max image 2D width:                            8192
  Max image 2D height:                           8192
  Max image 3D width:                            8192
  Max image 3D height:                           8192
  Max image 3D depth:                            2048
  Max samplers within kernel:                    16
  Max size of kernel argument:                   1024
  Alignment (bits) of base address:              1024
  Minimum alignment (bytes) for any datatype:    128
  Single precision floating point capability
    Denorms:                                     No
    Quiet NaNs:                                  Yes
    Round to nearest even:                       Yes
    Round to zero:                               No
    Round to +ve and infinity:                   No
    IEEE754-2008 fused multiply-add:             No
  Cache type:                                    Read/Write
  Cache line size:                               64
  Cache size:                                    8192
  Global memory size:                            4294967296
  Constant buffer size:                          134217728
  Max number of constant args:                   8
  Local memory type:                             Global
  Local memory size:                             65536
  Error correction support:                      0
  Unified memory for Host and Device:            1
  Profiling timer resolution:                    80
  Device endianess:                              Little
  Available:                                     Yes
  Compiler available:                            Yes
  Execution capabilities:
    Execute OpenCL kernels:                      Yes
    Execute native function:                     Yes
  Queue properties:
    Out-of-Order:                                No
    Profiling :                                  Yes
  Platform ID:                                   0x7f234d9b4bc0
  Name:                                          Intel(R) HD Graphics Skylake Server GT4
  Vendor:                                        Intel
  Device OpenCL C version:                       OpenCL C 1.2 beignet 1.2 (git-097365e)
  Driver version:                                1.2
  Profile:                                       FULL_PROFILE
  Version:                                       OpenCL 1.2 beignet 1.2 (git-097365e)
  Extensions:                                    cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_3d_image_writes cl_khr_image2d_from_buffer cl_khr_spir cl_khr_icd cl_intel_accelerator cl_intel_subgroups cl_khr_fp16

And the following is the computecpp_info output.

ComputeCpp Info (CE 0.2.1)

********************************************************************************

Toolchain information:

GLIBC version: 2.19
GLIBCXX: 20150426
This version of libstdc++ is supported.

********************************************************************************


Device Info:

Discovered 1 devices matching:
  platform    : <any>
  device type : <any>

--------------------------------------------------------------------------------
Device 0:

  Device is supported                     : UNTESTED - Device not tested on this OS
  CL_DEVICE_NAME                          : Intel(R) HD Graphics Skylake Server GT4
  CL_DEVICE_VENDOR                        : Intel
  CL_DRIVER_VERSION                       : 1.2
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_GPU

If you encounter problems when using any of these OpenCL devices, please consult
this website for known issues:
https://computecpp.codeplay.com/releases/v0.2.1/platform-support-notes

Thanks.

lukeiwanski self-assigned this Jun 27, 2017
@lukeiwanski
Owner

Hi @Darwin2011 ,

Could you try adding the -m32 flag, as mentioned in codeplaysoftware/computecpp-sdk#19 (comment), to https://github.com/lukeiwanski/tensorflow/blob/master/third_party/sycl/crosstool/computecpp.tpl#L76?

We want to have a better way of dealing with the device and host pointer mismatch.
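
For illustration, the change in that template amounts to appending -m32 to the flags passed to the compute++ device compilation step, along these lines (a simplified, hypothetical sketch; the variable and function names are not the template's own):

```python
# Hypothetical, simplified sketch of the suggested change to
# third_party/sycl/crosstool/computecpp.tpl: pass -m32 to compute++ so the
# device-side pointers are 32-bit, matching beignet's 32-bit address space
# ("Address bits: 32" in the clinfo output above). Names are illustrative only.
import subprocess
import sys

def compile_device_code(args):
    flags = list(args)
    flags.append('-m32')  # force 32-bit device pointers
    return subprocess.call(['compute++'] + flags)

if __name__ == '__main__':
    sys.exit(compile_device_code(sys.argv[1:]))
```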

@lukeiwanski
Owner

Did that help?

@Darwin2011
Author

Sorry for the late reply.
I have tried to build TensorFlow with a 32-bit toolchain, but so far I still cannot make it work.

@DuncanMcBain
Collaborator

Hi @Darwin2011, what sort of errors are you getting when using -m32? This flag should instruct compute++ to compile for a 32-bit architecture, though you will need the corresponding 32-bit header files for the program to compile correctly. If you don't have them installed, the compilation will likely fail and the program will not work. If it compiles correctly but fails at runtime, I'll take another look and might be able to suggest some more possible fixes.
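
As a quick sanity check on the pointer-size mismatch, you can also query CL_DEVICE_ADDRESS_BITS directly, for example with pyopencl (not part of this build, just a convenient way to inspect the driver):

```python
# Inspect the address size each OpenCL device reports.
# pyopencl is used here only for inspection; it is not part of the
# TensorFlow/ComputeCpp build.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        bits = device.get_info(cl.device_info.ADDRESS_BITS)
        print('%s / %s: CL_DEVICE_ADDRESS_BITS = %d' % (platform.name, device.name, bits))
        # beignet reports 32 here, while a default TensorFlow build produces
        # 64-bit host pointers; that mismatch is what -m32 is meant to address.
```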

@lukeiwanski
Owner

@Darwin2011 is this still the case?

@DuncanMcBain
Collaborator

Well, since this issue is quite old, things have changed rather dramatically - we can offer you updated instructions if you'd like them.

@smilesun

smilesun commented May 22, 2018

Is there a step-by-step tutorial for building TensorFlow to work on Skylake GT2? I am eager to try it out on my computer. For example, which branch should I use? Are there other dependencies? Which bazel options should I use?

@DuncanMcBain
Collaborator

Hi @smilesun, you should use the branch dev/amd_gpu for your hardware with the most recent ComputeCpp release (v0.8.0). The instructions here will tell you how to build the branch (you can ignore the bits about AMD drivers, assuming yours are correctly set up already).
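
Once that is built and installed, a quick check that the SYCL device is actually registered looks like this (the 'SYCL' device type string assumes the naming used by those builds):

```python
# After installing the pip package built from dev/amd_gpu, list the local
# devices and check for a SYCL entry. The 'SYCL' device type string is an
# assumption based on the naming used by the SYCL builds of this era.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    print(d.name, d.device_type)

if not any(d.device_type == 'SYCL' for d in devices):
    print('No SYCL device registered; check the driver and build configuration.')
```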
