
Propeller slows down clang ~20% #181

Open
foxtran opened this issue Oct 26, 2023 · 5 comments

foxtran commented Oct 26, 2023

I have tried to reproduce the optimization of clang with Propeller.

After all the modifications described in #179 and #180, the modified https://github.com/google/autofdo/blob/master/propeller_optimize_clang.sh started to work on my machine.

Unfortunately, the results look very strange. Applying Propeller to clang slows it down by about 20%:

BASELINE (samples=3):

 Performance counter stats for 'bash -c numactl -C 0-75 ninja -j76 clang && ninja clean' (3 runs):

    29910755307348      instructions:u            #    1.19  insn per cycle           ( +-  0.00% )
    25108949747468      cycles:u                                                      ( +-  0.01% )
     1776069921406      L1-icache-misses:u                                            ( +-  0.00% )
        7994150086      iTLB-misses:u                                                 ( +-  0.01% )

           124.271 +- 0.242 seconds time elapsed  ( +-  0.19% )

PROPELLER (samples=3):

 Performance counter stats for 'bash -c numactl -C 0-75 ninja -j76 clang && ninja clean' (3 runs):

    30835896384964      instructions:u            #    1.01  insn per cycle           ( +-  0.00% )
    30547030491210      cycles:u                                                      ( +-  0.01% )
     2491256701229      L1-icache-misses:u                                            ( +-  0.00% )
        7456817103      iTLB-misses:u                                                 ( +-  0.05% )

          148.1088 +- 0.0623 seconds time elapsed  ( +-  0.04% )

I used numactl to pin threads to hardware cores. When I disabled pinning, the results improved slightly, but the gap between baseline and Propeller remained significant:

BASELINE (samples=5):

 Performance counter stats for 'bash -c ninja clang && ninja clean' (5 runs):

    29911672560885      instructions:u            #    0.73  insn per cycle           ( +-  0.00% )
    40762914939742      cycles:u                                                      ( +-  0.01% )
     2198963872412      L1-icache-misses:u                                            ( +-  0.01% )
       16606325255      iTLB-misses:u                                                 ( +-  0.05% )

           119.413 +- 0.212 seconds time elapsed  ( +-  0.18% )

PROPELLER (samples=5):

 Performance counter stats for 'bash -c ninja clang && ninja clean' (5 runs):

    30835273549813      instructions:u            #    0.63  insn per cycle           ( +-  0.00% )
    49008268336239      cycles:u                                                      ( +-  0.01% )
     3025079343587      L1-icache-misses:u                                            ( +-  0.02% )
       16944457932      iTLB-misses:u                                                 ( +-  0.03% )

           139.041 +- 0.250 seconds time elapsed  ( +-  0.18% )

With pinned threads, Propeller slightly decreased iTLB misses, while L1-icache misses increased by about 1.4x.
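As a sanity check, the slowdown implied by the pinned-run counters above can be recomputed with plain shell arithmetic (values copied verbatim from the perf output):

```shell
# Counters from the pinned runs above (samples=3); pure shell integer arithmetic.
baseline_cycles=25108949747468
propeller_cycles=30547030491210
baseline_l1i=1776069921406
propeller_l1i=2491256701229
# Cycle increase in percent (integer division truncates)
echo "cycle increase: $(( (propeller_cycles - baseline_cycles) * 100 / baseline_cycles ))%"
# → cycle increase: 21%
# L1-icache misses as a percentage of baseline
echo "L1i misses vs baseline: $(( propeller_l1i * 100 / baseline_l1i ))%"
# → L1i misses vs baseline: 140%
```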

Tested in a RAM disk. Processor:

$ lscpu

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              152
On-line CPU(s) list: 0-151
Thread(s) per core:  2
Core(s) per socket:  38
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               106
Model name:          Intel(R) Xeon(R) Platinum 8368 CPU @ 2.40GHz
Stepping:            6
CPU MHz:             3400.000
CPU max MHz:         3400.0000
CPU min MHz:         800.0000
BogoMIPS:            4800.00
Virtualization:      VT-x
L1d cache:           48K
L1i cache:           32K
L2 cache:            1280K
L3 cache:            58368K
NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150
NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151
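One thing worth noting in the map above: node0 holds the even CPU IDs and node1 the odd ones, so `numactl -C 0-75` spans both sockets rather than confining the build to one NUMA node. A sketch (my reading of the lscpu output above; `NODE0_CPUS` is a placeholder name) of how a node0-only pin could be built:

```shell
# node0 owns the even-numbered CPUs 0,2,...,150 (per the lscpu output above),
# so a node0-only pin needs the even-ID list rather than a contiguous range:
NODE0_CPUS=$(seq 0 2 150 | paste -sd, -)
echo "${NODE0_CPUS}" | cut -d, -f1-4    # → 0,2,4,6
# then: numactl -C "${NODE0_CPUS}" ninja -j76 ...   (or simply: numactl -N 0)
```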

Used OS:

$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.8 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.8 (Green Obsidian)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2029-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-8"
ROCKY_SUPPORT_PRODUCT_VERSION="8.8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"

Used linux kernel:

$ uname -a
Linux XXX.XXX.XXX.XXX 4.18.0-477.21.1.el8_8.x86_64 #1 SMP Tue Aug 8 21:30:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Gists:
with numactl:
https://gist.github.com/foxtran/b7fedfbb0bd036629448ce62d18bd7a6
without numactl:
https://gist.github.com/foxtran/fdc4abf8e2de127800f670b9edeeb9f2

Applied patches (with #179, #180):
for numactl:

--- propeller_optimize_clang.sh.orig	2023-10-26 16:54:07.550679311 +0900
+++ propeller_optimize_clang.sh	2023-10-25 20:10:02.862877767 +0900
@@ -30,25 +30,25 @@ PATH_TO_TRUNK_LLVM_BUILD=${BASE_PROPELLE
 PATH_TO_TRUNK_LLVM_INSTALL=${BASE_PROPELLER_CLANG_DIR}/trunk_llvm_install
 # Build Trunk LLVM
 mkdir -p ${PATH_TO_LLVM_SOURCES} && cd ${PATH_TO_LLVM_SOURCES}
-git clone git@github.com:llvm/llvm-project.git
+git clone -b release/17.x --single-branch https://github.com/llvm/llvm-project.git
 mkdir -p ${PATH_TO_TRUNK_LLVM_BUILD} && cd ${PATH_TO_TRUNK_LLVM_BUILD}
 cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD=X86 \
       -DCMAKE_INSTALL_PREFIX="${PATH_TO_TRUNK_LLVM_INSTALL}" \
       -DLLVM_ENABLE_RTTI=On -DLLVM_INCLUDE_TESTS=Off \
       -DLLVM_ENABLE_PROJECTS="clang;lld" ${PATH_TO_LLVM_SOURCES}/llvm-project/llvm
-ninja install
+numactl -C 0-75 ninja -j76 install

 #Build create_llvm_prof
 PATH_TO_CREATE_LLVM_PROF=${BASE_PROPELLER_CLANG_DIR}/create_llvm_prof_build
 mkdir -p ${PATH_TO_CREATE_LLVM_PROF} && cd ${PATH_TO_CREATE_LLVM_PROF}

-git clone --recursive git@github.com:google/autofdo.git
+git clone --recursive https://github.com/google/autofdo.git
 mkdir build && cd build
 cmake -G Ninja -DCMAKE_INSTALL_PREFIX="." \
       -DCMAKE_C_COMPILER="${PATH_TO_TRUNK_LLVM_INSTALL}/bin/clang" \
       -DCMAKE_CXX_COMPILER="${PATH_TO_TRUNK_LLVM_INSTALL}/bin/clang++" \
       -DLLVM_PATH="${PATH_TO_TRUNK_LLVM_INSTALL}" ../autofdo/
-ninja
+numactl -C 0-75 ninja -j76
 ls create_llvm_prof

 # Common CMAKE Flags
@@ -70,7 +70,7 @@ BASELINE_CC_LD_CMAKE_FLAGS=(
 PATH_TO_BASELINE_CLANG_BUILD=${BASE_PROPELLER_CLANG_DIR}/baseline_clang_build
 mkdir -p ${PATH_TO_BASELINE_CLANG_BUILD} && cd ${PATH_TO_BASELINE_CLANG_BUILD}
 cmake -G Ninja "${COMMON_CMAKE_FLAGS[@]}" "${BASELINE_CC_LD_CMAKE_FLAGS[@]}" ${PATH_TO_LLVM_SOURCES}/llvm-project/llvm
-ninja clang
+numactl -C 0-75 ninja -j76 clang

 # Labels CMAKE Flags
 LABELS_CC_LD_CMAKE_FLAGS=(
@@ -84,13 +84,13 @@ LABELS_CC_LD_CMAKE_FLAGS=(
 PATH_TO_LABELS_CLANG_BUILD=${BASE_PROPELLER_CLANG_DIR}/labels_clang_build
 mkdir -p ${PATH_TO_LABELS_CLANG_BUILD} && cd ${PATH_TO_LABELS_CLANG_BUILD}
 cmake -G Ninja "${COMMON_CMAKE_FLAGS[@]}" "${LABELS_CC_LD_CMAKE_FLAGS[@]}" ${PATH_TO_LLVM_SOURCES}/llvm-project/llvm
-ninja clang
+numactl -C 0-75 ninja -j76 clang

 # Set up Benchmarking and BUILD
 BENCHMARKING_CLANG_BUILD=${BASE_PROPELLER_CLANG_DIR}/benchmarking_clang_build
 mkdir -p ${BENCHMARKING_CLANG_BUILD} && cd ${BENCHMARKING_CLANG_BUILD}
 mkdir -p symlink_to_clang_binary && cd symlink_to_clang_binary
-CLANG_VERSION=$(sed -Ene 's!^CLANG_EXECUTABLE_VERSION:STRING=(.*)$!\1!p' ${PATH_TO_TRUNK_LLVM_BUILD}/CMakeCache.txt)
+CLANG_VERSION=$(sed -Ene 's!^CLANG_EXECUTABLE_VERSION:STRING=(.*)$!\1!p' ${PATH_TO_TRUNK_LLVM_BUILD}/CMakeCache.txt) #'
 ln -sf ${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang
 ln -sf ${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++

@@ -111,6 +111,7 @@ ls perf.data
 cd ${BENCHMARKING_CLANG_BUILD}
 ${PATH_TO_CREATE_LLVM_PROF}/build/create_llvm_prof --format=propeller \
   --binary=${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} \
+  --profiled_binary_name=${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} \
   --profile=perf.data --out=cluster.txt  --propeller_symorder=symorder.txt 2>/dev/null 1>/dev/null
 ls cluster.txt symorder.txt

@@ -126,7 +127,7 @@ PROPELLER_CC_LD_CMAKE_FLAGS=(
 PATH_TO_PROPELLER_CLANG_BUILD=${BASE_PROPELLER_CLANG_DIR}/propeller_build
 mkdir -p ${PATH_TO_PROPELLER_CLANG_BUILD} && cd ${PATH_TO_PROPELLER_CLANG_BUILD}
 cmake -G Ninja "${COMMON_CMAKE_FLAGS[@]}" "${PROPELLER_CC_LD_CMAKE_FLAGS[@]}" ${PATH_TO_LLVM_SOURCES}/llvm-project/llvm
-ninja clang
+numactl -C 0-75 ninja -j76 clang

 # Run comparison of baseline verus propeller optimized clang
 cd ${BENCHMARKING_CLANG_BUILD}/symlink_to_clang_binary
@@ -134,11 +135,11 @@ ln -sf ${PATH_TO_BASELINE_CLANG_BUILD}/b
 ln -sf ${PATH_TO_BASELINE_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++
 cd ..
 ninja clean
-perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja -j48 clang && ninja clean"
+perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "numactl -C 0-75 ninja -j76 clang && ninja clean"

 cd ${BENCHMARKING_CLANG_BUILD}/symlink_to_clang_binary
 ln -sf ${PATH_TO_PROPELLER_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang
 ln -sf ${PATH_TO_PROPELLER_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++
 cd ..
 ninja clean
-perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja -j48 clang && ninja clean"
+perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "numactl -C 0-75 ninja -j76 clang && ninja clean"

Without numactl:

--- propeller_optimize_clang.sh.orig	2023-10-26 16:54:07.550679311 +0900
+++ propeller_optimize_clang-nopins.sh	2023-10-25 20:23:21.399123785 +0900
@@ -30,7 +30,7 @@ PATH_TO_TRUNK_LLVM_BUILD=${BASE_PROPELLE
 PATH_TO_TRUNK_LLVM_INSTALL=${BASE_PROPELLER_CLANG_DIR}/trunk_llvm_install
 # Build Trunk LLVM
 mkdir -p ${PATH_TO_LLVM_SOURCES} && cd ${PATH_TO_LLVM_SOURCES}
-git clone git@github.com:llvm/llvm-project.git
+git clone -b release/17.x --single-branch https://github.com/llvm/llvm-project.git
 mkdir -p ${PATH_TO_TRUNK_LLVM_BUILD} && cd ${PATH_TO_TRUNK_LLVM_BUILD}
 cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD=X86 \
       -DCMAKE_INSTALL_PREFIX="${PATH_TO_TRUNK_LLVM_INSTALL}" \
@@ -42,7 +42,7 @@ ninja install
 PATH_TO_CREATE_LLVM_PROF=${BASE_PROPELLER_CLANG_DIR}/create_llvm_prof_build
 mkdir -p ${PATH_TO_CREATE_LLVM_PROF} && cd ${PATH_TO_CREATE_LLVM_PROF}

-git clone --recursive git@github.com:google/autofdo.git
+git clone --recursive https://github.com/google/autofdo.git
 mkdir build && cd build
 cmake -G Ninja -DCMAKE_INSTALL_PREFIX="." \
       -DCMAKE_C_COMPILER="${PATH_TO_TRUNK_LLVM_INSTALL}/bin/clang" \
@@ -90,7 +90,7 @@ ninja clang
 BENCHMARKING_CLANG_BUILD=${BASE_PROPELLER_CLANG_DIR}/benchmarking_clang_build
 mkdir -p ${BENCHMARKING_CLANG_BUILD} && cd ${BENCHMARKING_CLANG_BUILD}
 mkdir -p symlink_to_clang_binary && cd symlink_to_clang_binary
-CLANG_VERSION=$(sed -Ene 's!^CLANG_EXECUTABLE_VERSION:STRING=(.*)$!\1!p' ${PATH_TO_TRUNK_LLVM_BUILD}/CMakeCache.txt)
+CLANG_VERSION=$(sed -Ene 's!^CLANG_EXECUTABLE_VERSION:STRING=(.*)$!\1!p' ${PATH_TO_TRUNK_LLVM_BUILD}/CMakeCache.txt) #'
 ln -sf ${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang
 ln -sf ${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++

@@ -111,6 +111,7 @@ ls perf.data
 cd ${BENCHMARKING_CLANG_BUILD}
 ${PATH_TO_CREATE_LLVM_PROF}/build/create_llvm_prof --format=propeller \
   --binary=${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} \
+  --profiled_binary_name=${PATH_TO_LABELS_CLANG_BUILD}/bin/clang-${CLANG_VERSION} \
   --profile=perf.data --out=cluster.txt  --propeller_symorder=symorder.txt 2>/dev/null 1>/dev/null
 ls cluster.txt symorder.txt

@@ -134,11 +135,11 @@ ln -sf ${PATH_TO_BASELINE_CLANG_BUILD}/b
 ln -sf ${PATH_TO_BASELINE_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++
 cd ..
 ninja clean
-perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja -j48 clang && ninja clean"
+perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja clang && ninja clean"

 cd ${BENCHMARKING_CLANG_BUILD}/symlink_to_clang_binary
 ln -sf ${PATH_TO_PROPELLER_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang
 ln -sf ${PATH_TO_PROPELLER_CLANG_BUILD}/bin/clang-${CLANG_VERSION} clang++
 cd ..
 ninja clean
-perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja -j48 clang && ninja clean"
+perf stat -r5 -e instructions,cycles,L1-icache-misses,iTLB-misses -- bash -c "ninja clang && ninja clean"
@foxtran foxtran changed the title AutoFDO slow down clang ~20% AutoFDO slows down clang ~20% Oct 26, 2023
@foxtran foxtran changed the title AutoFDO slows down clang ~20% Propeller slows down clang ~20% Oct 26, 2023

foxtran commented Oct 28, 2023

Unfortunately, I reproduced this result on an AMD Ryzen 7 7700.

BASELINE:

Performance counter stats for 'bash -c ninja clang && ninja clean' (5 runs):

30,795,048,649,078      instructions                     #    0.78  insn per cycle              ( +-  0.00% )  (75.02%)
39,645,798,620,353      cycles                                                                  ( +-  0.01% )  (75.02%)
    82,058,692,208      L1-icache-misses                                                        ( +-  0.01% )  (75.02%)
    48,600,346,966      iTLB-misses                                                             ( +-  0.03% )  (75.03%)

          512.3711 +- 0.0459 seconds time elapsed  ( +-  0.01% )

PROPELLER:

Performance counter stats for 'bash -c ninja clang && ninja clean' (5 runs):

31,738,767,572,617      instructions                     #    0.62  insn per cycle              ( +-  0.00% )  (75.02%)
51,114,393,709,732      cycles                                                                  ( +-  0.02% )  (75.02%)
    93,464,053,895      L1-icache-misses                                                        ( +-  0.01% )  (75.02%)
    66,714,207,526      iTLB-misses                                                             ( +-  0.02% )  (75.01%)

          654.6605 +- 0.0541 seconds time elapsed  ( +-  0.01% )

Processor:

# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  BIOS Vendor ID:        Advanced Micro Devices, Inc.
  Model name:            AMD Ryzen 7 7700 8-Core Processor

Used OS:

# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Kernel:

Linux XXX.XXX.XXX.XXX 6.5.0-1-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.5.6-2.1 (2023-10-08) x86_64 GNU/Linux

Perf version:

# perf --version
perf version 6.5.6

@lifengxiang1025

llvm18 uses fixed MBB IDs (llvm/llvm-project@3d6841b), while autofdo has now reverted the code that supported fixed MBB IDs (ad3e924). I think that may be the reason.


zcfh commented Apr 9, 2024

Has this issue been resolved?
I used trunk to build (changing two places to fix the coredump, #190) and reproduced the results on an Intel machine.
Setting PATH_TO_TRUNK_LLVM_INSTALL=llvm17 is still slow. Does autofdo also need to switch to llvm17? @lifengxiang1025

BASELINE:

 Performance counter stats for 'bash -c ninja -j48 clang && ninja clean' (5 runs):

31,416,027,077,187      instructions              #    0.71  insn per cycle           ( +-  0.03% )  (95.99%)
44,129,300,535,322      cycles                                                        ( +-  0.03% )  (95.89%)
 2,417,617,615,381      L1-icache-misses                                              ( +-  0.06% )  (95.80%)
    18,663,312,138      iTLB-misses                                                   ( +-  0.05% )  (95.69%)

           353.482 +- 0.258 seconds time elapsed  ( +-  0.07% )

PROPELLER:

32,310,032,286,299      instructions              #    0.63  insn per cycle           ( +-  0.02% )  (96.32%)
51,216,471,339,669      cycles                                                        ( +-  0.04% )  (96.23%)
 3,250,199,372,974      L1-icache-misses                                              ( +-  0.05% )  (96.14%)
    20,012,345,482      iTLB-misses                                                   ( +-  0.05% )  (96.05%)

           406.699 +- 0.304 seconds time elapsed  ( +-  0.07% )

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz

@lifengxiang1025

Has this issue been resolved? Use trunk to build (changed two places to solve coredump #190 ), and reproduced the results on the Intel machine. And setting PATH_TO_TRUNK_LLVM_INSTALL=llvm17 is still slow. Does autofdo also need to switch to llvm17? @lifengxiang1025

I used this code snapshot (llvm/llvm-project@3d6841b) and Propeller seems to work well with llvm16 (I think llvm17 is OK too). The reason is:

llvm18 uses fixed MBB IDs (llvm/llvm-project@3d6841b), while autofdo has now reverted the code that supported fixed MBB IDs (ad3e924). I think that may be the reason.

@Patrick-ICT

Yes, my experimental results also suggest the root cause might be the MBB ID. The original control flow graph of the tested function is:

[Screenshot: original CFG of the tested function]

Using autoFDO with llvm19, the CFG became:
[Screenshot: CFG produced with llvm19]

We can see that some basic blocks are split (e.g., BB2 became BB2-1 and BB2-2), which is unnecessary because the instruction before the unconditional branch in BB2-1 is a call and there is no C++ exception-related code in BB2.

Besides, some basic blocks are hot while their only predecessor is not. For example, BB17 is identified as a hot BB but its predecessor BB16 is not; BB34 and BB41 show the same pattern. Likewise, some basic blocks are hot while their successor is not: BB37 and BB44 are identified as hot, but their successor BB45 is not. Again, there is no C++ exception code involved.

Then I switched to llvm17. However, only the function reordering seems functional. I also verified llvm17 with a hand-written C++ example: the cold/hot split is not functional there, while llvm19 worked for this small example.

Finally, I used llvm17 to generate the profile data and llvm19 to build the optimized binary. It works:
[Screenshot: CFG with llvm17-generated profile and llvm19-built binary]

I haven't checked if llvm17 contains the cold/hot split code. I will check it later.
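For clarity, the mixed-toolchain recipe described above (profile with llvm17, build with llvm19) amounts to feeding the llvm17-generated cluster.txt/symorder.txt into an llvm19 build via the same flags propeller_optimize_clang.sh already uses. A minimal sketch, with the assumption that cluster.txt and symorder.txt come from the llvm17 create_llvm_prof step and that the compiler used below is llvm19:

```shell
# Sketch only: flag set mirrors PROPELLER_CC_LD_CMAKE_FLAGS in
# propeller_optimize_clang.sh; cluster.txt/symorder.txt are assumed to
# have been produced by the llvm17 create_llvm_prof.
PROPELLER_CFLAGS="-funique-internal-linkage-names -fbasic-block-sections=list=${PWD}/cluster.txt"
PROPELLER_LDFLAGS="-Wl,--symbol-ordering-file=${PWD}/symorder.txt -Wl,--no-warn-symbol-ordering"
echo "CFLAGS : ${PROPELLER_CFLAGS}"
echo "LDFLAGS: ${PROPELLER_LDFLAGS}"
# These would then be passed to the llvm19 cmake configure step in place of
# the baseline CC/LD flags.
```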
