I ran `scons -j4` to compile the code, but the build seems to take forever on `TEMLogger.cpp`, and eventually, if I stop the build with Ctrl-C, I get this error:

```
g++: internal compiler error: Killed (program cc1plus)
```

What does it mean and how do I fix it?
Not sure exactly what it means, but it happens sometimes when building the model for the first time after a `scons --clean` or `make clean`, or after a fresh checkout from git/GitHub. (The "Killed (program cc1plus)" message usually means the kernel's out-of-memory killer terminated the compiler; four simultaneous compile jobs can exhaust the RAM on a small VM, which would explain why a serial build succeeds.) So far I have been able to fix it by simply making a serial build (`scons`), and then the problem goes away. For subsequent builds, before another clean, I can use `-j4` for a parallel build.
For example, here is one complete instance of the error:
```
[vagrant@localhost dvm-dos-tem] (devel)$ scons -j4
scons: Reading SConscript files ...
/usr/lib64/openmpi/bin/mpic++
scons: done reading SConscript files.
scons: Building targets ...
/usr/lib64/openmpi/bin/mpic++ -o src/TEM.o -c -Werror -ansi -g -fPIC -DBOOST_ALL_DYN_LINK -DGNU_FPE -m64 -DWITHMPI -I/usr/include -I/usr/include/openmpi-x86_64 -I/usr/include/jsoncpp -I~/usr/local/include src/TEM.cpp
/usr/lib64/openmpi/bin/mpic++ -o src/TEMUtilityFunctions.o -c -Werror -ansi -g -fPIC -DBOOST_ALL_DYN_LINK -DGNU_FPE -m64 -DWITHMPI -I/usr/include -I/usr/include/openmpi-x86_64 -I/usr/include/jsoncpp -I~/usr/local/include src/TEMUtilityFunctions.cpp
/usr/lib64/openmpi/bin/mpic++ -o src/CalController.o -c -Werror -ansi -g -fPIC -DBOOST_ALL_DYN_LINK -DGNU_FPE -m64 -DWITHMPI -I/usr/include -I/usr/include/openmpi-x86_64 -I/usr/include/jsoncpp -I~/usr/local/include src/CalController.cpp
/usr/lib64/openmpi/bin/mpic++ -o src/TEMLogger.o -c -Werror -ansi -g -fPIC -DBOOST_ALL_DYN_LINK -DGNU_FPE -m64 -DWITHMPI -I/usr/include -I/usr/include/openmpi-x86_64 -I/usr/include/jsoncpp -I~/usr/local/include src/TEMLogger.cpp
g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
scons: *** [src/TEM.o] Error 4
^[[A^Cscons: *** [src/TEMLogger.o] Build interrupted.
scons: *** [src/TEMUtilityFunctions.o] Build interrupted.
scons: *** [src/CalController.o] Build interrupted.
scons: Build interrupted.
scons: building terminated because of errors.
scons: writing .sconsign file.
```
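In short, the workaround is just the two commands already mentioned above, in this order:

```bash
# After 'scons --clean', 'make clean', or a fresh checkout,
# build serially first, so only one compile job runs at a time:
scons

# Subsequent incremental builds (before the next clean) can then
# go back to parallel:
scons -j4
```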
What is the difference between Cohort's array of `bd` objects and Cohort's `bdall` object? If "all" is usually used as a prefix/suffix for a value summed across PFTs, how does it make sense to have a `gppall` value for each PFT?
Turns out that the `gppall` for each PFT is the sum of `gpp` across the compartments (leaf, stem, root). Looking at `fluxes.h` and the dimensions of the members in `a2v` was helpful for remembering this.
So this is the GPP summed over leaf, stem, and root for PFT 3:

```
cohort.bd[3].m_a2v.gppall
```

while this is the GPP for all PFTs (presumably across all compartments too):

```
cohort.bdall.m_a2v.gppall
```
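To make the naming concrete, here is a minimal, self-contained sketch of the relationship. Note that `A2V`, `BgcData`, and `NUM_PFT` are hypothetical stand-ins, not the model's actual definitions, which live in headers like `fluxes.h`:

```cpp
#include <cstdio>

// Hypothetical mock of the data layout, just to illustrate how bd[] and
// bdall relate. The real type and constant names may differ.
const int NUM_PFT = 10;               // assumed number of PFTs

struct A2V {
  double gppall;                      // GPP summed over leaf/stem/root
};

struct BgcData {
  A2V m_a2v;
};

struct Cohort {
  BgcData bd[NUM_PFT];                // one entry per PFT
  BgcData bdall;                      // totals across all PFTs
};

int main() {
  Cohort cohort = {};
  // ... the model would fill cohort.bd[ip].m_a2v.gppall for each PFT
  // and cohort.bdall.m_a2v.gppall with the ecosystem-level total ...

  // The per-PFT gppall values (each already summed over compartments)
  // should add up to the total stored in bdall:
  double gpp_sum = 0.0;
  for (int ip = 0; ip < NUM_PFT; ++ip) {
    gpp_sum += cohort.bd[ip].m_a2v.gppall;  // one PFT, all compartments
  }
  std::printf("sum over PFTs: %g  vs  bdall total: %g\n",
              gpp_sum, cohort.bdall.m_a2v.gppall);
  return 0;
}
```

In other words, the same "all" suffix aggregates over a different dimension at each level: `gppall` within one `bd` entry means "all compartments", while `bdall` means "all PFTs".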