
Miscellaneous tips for the Linux terminal:

  • Long commands that you use frequently can be aliased to shorter, easier-to-remember names via ~/.bash_profile or ~/.bashrc. Add a line of the form alias newcommand='<insert full command here>' to either file (see the sketch after this list). (You can also use aliases to cover common misspellings of commands 👀.) An example .bashrc file copied from Pod, with some aliases, is in the TutorialFiles folder.
  • nohup <insert full command here> > filename.out & is amazingly useful for interactive commands that take a while to run. (Notably, the hadd command for combining ROOT files into training and testing subsets can take a while.) nohup stands for "no hangup", so your command will keep running even if your ssh session times out. The output that would normally be printed to your terminal is saved in filename.out, and the ampersand (&) works as usual, running the command in the background so you can continue to use the terminal. An example appears after this list.
  • Use scp username@server.hostname:/path/to/remote/file . or scp /path/to/local/file username@server.hostname:/path/to/destination to move files between your local storage and a remote server. The structure is similar to the cp command, so a period can stand for your current directory. For large files or many files, it's much better to use Globus.
  • ls -F | grep -v / | wc -l will show the number of files in a directory (not counting subdirectories).
  • hadd final.root file1.root file2.root will merge file1 and file2 into a new ROOT file. This command also works with wildcards, so you can use hadd final.root filename_run*.root to merge all ROOT files whose names start with "filename_run" into one. This is a ROOT tip rather than a Linux one -- a working install of ROOT is needed.
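
As referenced in the alias tip above, here is a minimal sketch of lines you might add to ~/.bashrc; the alias names and the target directory are made up for this example:

    # shorter name for a frequently used command
    alias sq='squeue -u your_username'
    # cover a common misspelling
    alias sl='ls'
    # jump to a directory you visit often (path is hypothetical)
    alias work='cd /path/to/your/analysis'

After editing, run source ~/.bashrc (or open a new terminal) for the aliases to take effect.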
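
Similarly, a sketch of the nohup pattern from the tip above, using hadd as the long-running example; the file names are placeholders:

    # merge many ROOT files in the background; keeps running if ssh times out
    nohup hadd combined.root filename_run*.root > hadd.out &
    # check on the saved terminal output later
    tail -f hadd.out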

Miscellaneous tips for Docker:

  • To use an environment variable inside the container, add -e YOURVARIABLE=value to the docker run command. For the LDMX framework, the docker run command is bundled into the ldmx alias. Search the ldmx-env.sh file for docker run and add your variable there (a sketch is shown below). Make sure to source ldmx-env.sh again after your changes.
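
For illustration, a hedged sketch of what passing an environment variable into a container looks like; the image name and other options here are placeholders, since the real docker run line (with all of its mounts and options) lives inside ldmx-env.sh:

    # pass YOURVARIABLE through to the container with -e
    docker run -it -e YOURVARIABLE=value <ldmx-image> <command>

Inside the container, the variable is then available as $YOURVARIABLE.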

Miscellaneous tips for Pod:

  • The slurm job files have many options that can be modified. (If you're using scripts to generate the job files, edit the options in the script so they apply to all job files.) Useful options for keeping track of your jobs are --mail-type=BEGIN,END,FAIL and --mail-user=username@ucsb.edu, which turn on email notifications: you will be emailed when your jobs begin running, finish, and/or fail. An example job-file header is shown after this list.
  • Use the resource appropriate for your job length. If it takes around an hour or less, you can run it interactively in the terminal. If it takes a few hours (up to around 5), you can use the short slurm queue with sbatch -p short. Anything longer should be submitted to the usual slurm queue with plain sbatch.
  • You can check your queue of jobs with squeue -u your_username.
  • If you notice an issue with one job in your queue, take note of the job ID and kill it with scancel <JOB_ID>.
  • If your jobs begin failing or you realize you've made a mistake, kill all of your jobs with scancel -u your_username. In theory this is the same as scancel --me, but I like the safety of explicitly typing my own username.
  • The version of singularity on Pod picks up environment variables automatically; you just need to set the variable with export YOURVARIABLE=value.
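
As referenced in the email-notification tip above, a minimal sketch of the #SBATCH header of a job file; the job name, time limit, and email address are placeholders:

    #!/bin/bash
    #SBATCH --job-name=my_job            # placeholder job name
    #SBATCH --time=02:00:00              # placeholder wall-clock limit
    #SBATCH -p short                     # optional: short queue for jobs up to ~5 hours
    #SBATCH --mail-type=BEGIN,END,FAIL   # email on start, completion, and failure
    #SBATCH --mail-user=username@ucsb.edu
    # the commands the job actually runs go below this header

Submit the file with sbatch (or sbatch -p short to pick the queue on the command line instead).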

Miscellaneous tips for SLAC:

  • Don't forget to include the -XY flag when logging in to SLAC! Without it, you won't be able to view the contents of ROOT files interactively through a TBrowser. (An example login command appears after this list.)
  • Use the -j flag when running ldmx make install to speed up the build. A recommended setting is -j2, but you can go higher; do keep it below -j10.
  • If treeMaker.py isn't cooperating when you try to submit batch jobs, it's probably because the container can't find cellmodule.txt, libFramework.so, or both. To work around this, open treeMaker.py and change lines 7 and 8 so that they use absolute file paths.
  • Use the bjobs command to monitor the status of all of your batch jobs. Alternatively, use bpeek <JOB_ID> to closely monitor the output of the job with the specified job ID.
  • If you're submitting a particularly large batch job and you need more time or memory, use the long queue and increase -W and -n appropriately. Do keep the number of cores you request reasonable. (An example submission is sketched after this list.)
  • If you absolutely must terminate all of your batch jobs, use bkill 0. Alternatively, use bkill <JOB_ID> to just kill the job with the specified job ID.
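
As referenced in the login tip above, a sketch of the login command; the hostname is a placeholder for whichever SLAC login node you use:

    # -X enables X11 forwarding and -Y trusts the connection,
    # so a TBrowser window opened remotely can display on your screen
    ssh -XY username@<slac-login-host>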
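
For the long-queue tip above, a sketch of a submission assuming jobs are sent with LSF's bsub (consistent with the bjobs/bpeek/bkill commands in this section); the time limit, core count, and command are placeholders:

    # -q long : use the long queue
    # -W 720  : wall-clock limit in minutes
    # -n 4    : number of cores -- keep this reasonable
    bsub -q long -W 720 -n 4 <insert full command here>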