From ea8a07ae0016cdd381443afd99ab5aa775a07c30 Mon Sep 17 00:00:00 2001
From: Konstantin Androsov
Date: Tue, 11 Jun 2024 12:34:12 +0200
Subject: [PATCH] Deployed 98589f0 with MkDocs version: 1.6.0

---
 404.html                  |  74 +++++
 analysis/index.html       | 659 ++++++++++++++++++++++++++++++++++++++
 hh_bbtautau/index.html    | 172 ++++++++--
 index.html                | 203 ++++++------
 search/search_index.json  |   2 +-
 sitemap.xml.gz            | Bin 127 -> 127 bytes
 stat_inference/index.html | 606 +++++++++++++++++++++++++++++++++++
 7 files changed, 1578 insertions(+), 138 deletions(-)
 create mode 100644 analysis/index.html
 create mode 100644 stat_inference/index.html

diff --git a/404.html b/404.html
index eb96218a..630c0fac 100644
--- a/404.html
+++ b/404.html
diff --git a/analysis/index.html b/analysis/index.html
new file mode 100644
index 00000000..8ffa37e2
--- /dev/null
+++ b/analysis/index.html

Analysis - FLAF

    Common analysis steps

Remarks:

  • Commands below assume that the ERA variable is set, e.g. ERA=Run2_2016. Alternatively, you can add ERA=Run2_2016; ... in front of each command.
  • The version argument allows producing different versions of the same task. In the commands below, --version dev is used for illustration purposes; you can replace it with your own version naming.
  • --workflow can be htcondor or local. It is recommended to develop and test locally and then switch to htcondor for production. In the examples below, --workflow local is used for illustration purposes.
  • When running on htcondor, it is recommended to add --transfer-logs to the command to transfer the logs to local storage.
  • The --customisations argument is used to pass custom parameters to the task in the form param1=value1,param2=value2,...
  • If you want to run only on a few files, you can specify the list of branches to run using the --branches argument, e.g. --branches 2,7-10,17.
  • To get the status, use --print-status N,K, where N is the depth for task dependencies and K is the depth for file dependencies, e.g. --print-status 3,1.
  • To remove task output, use --remove-output N,a, where N is the depth for task dependencies, e.g. --remove-output 0,a.
  • Several of these flags are combined in the sketch after this list.
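A minimal sketch combining several of the remarks above (the task name, version, and branch/depth numbers are illustrative, not prescribed):

    ERA=Run2_2016
    # run locally on a subset of the input files
    law run AnaTupleTask --period ${ERA} --workflow local --version dev --branches 2,7-10,17
    # then check the status of task and file dependencies
    law run AnaTupleTask --period ${ERA} --version dev --print-status 3,1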

Create input file list

    law run InputFileTask  --period ${ERA} --version dev

Create anaCache

    law run AnaCacheTask  --period ${ERA} --workflow local --version dev

Create anaTuple

    law run AnaTupleTask --period ${ERA} --workflow local --version dev
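Once the chain works locally, the same tasks can be submitted to the batch system; a possible production-style variant of the last step, using the flags described in the remarks above:

    law run AnaTupleTask --period ${ERA} --workflow htcondor --transfer-logs --version dev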
diff --git a/hh_bbtautau/index.html b/hh_bbtautau/index.html
index c53c5c76..87533a43 100644
--- a/hh_bbtautau/index.html
+++ b/hh_bbtautau/index.html

    Simple commands

    DeepTau 2p1

    AnaCache Production

-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run InputFileTask  --period Run2_${year} --version ${dir}
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaCacheTask  --period Run2_${year} --workflow htcondor --version ${dir} --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run InputFileTask  --period ${era} --version ${dir}
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaCacheTask  --period ${era} --workflow htcondor --version ${dir} --transfer-logs
     

    AnaTuple Production (AFTER AnaCacheTask)

-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run InputFileTask  --period Run2_${year} --version ${dir}
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaTupleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; mkdir /eos/user/v/vdamante/HH_bbtautau_resonant_Run2/${dir}/Run2_${year}/data
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run DataMergeTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run InputFileTask  --period ${era} --version ${dir}
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaTupleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; mkdir /eos/user/v/vdamante/HH_bbtautau_resonant_Run2/${dir}/${era}/data
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run DataMergeTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
     

    AnaCacheTuple Production (AFTER AnaTupleTask)

-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaCacheTupleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run AnaCacheTupleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
     

Histograms Production (AFTER AnaTupleTask but NOT NECESSARILY AnaCacheTupleTask)

-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HistProducerFileTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HistProducerSampleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  MergeTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HistProducerFileTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HistProducerSampleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  MergeTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs
     

Work in progress

-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  HistRebinnerTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs # This does something only for KinFit_m, currently
-year=2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  HaddMergedTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  HistRebinnerTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs # This does something only for KinFit_m, currently
+era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run  HaddMergedTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs
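To inspect how far the chain above has progressed, the status flag from the general analysis remarks can be applied to the final task (the depths 3,1 are illustrative):

    era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HaddMergedTask --period ${era} --version ${dir} --print-status 3,1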
     

    DeepTau 2p5

    AnaCache Production

-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run InputFileTask  --period Run2_${year} --version ${dir}
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaCacheTask  --period Run2_${year} --workflow htcondor --version ${dir} --transfer-logs --customisations deepTauVersion=2p5
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run InputFileTask  --period ${era} --version ${dir}
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaCacheTask  --period ${era} --workflow htcondor --version ${dir} --transfer-logs --customisations deepTauVersion=2p5
     

    AnaTuple Production (AFTER AnaCacheTask)

-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run InputFileTask  --period Run2_${year} --version ${dir}
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaTupleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; mkdir /eos/user/v/vdamante/HH_bbtautau_resonant_Run2/${dir}/Run2_${year}/data
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run DataMergeTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run InputFileTask  --period ${era} --version ${dir}
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaTupleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; mkdir /eos/user/v/vdamante/HH_bbtautau_resonant_Run2/${dir}/${era}/data
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run DataMergeTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5 # the deepTauVersion customisation may not be needed at this step, but it is usually added
     

    AnaCacheTuple Production (AFTER AnaTupleTask)

-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaCacheTupleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; mkdir -p /eos/home-k/kandroso/cms-hh-bbtautau/anaCache/Run2_${year}/data/${dir}
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run DataCacheMergeTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run AnaCacheTupleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; mkdir -p /eos/home-k/kandroso/cms-hh-bbtautau/anaCache/${era}/data/${dir}
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run DataCacheMergeTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs --customisations deepTauVersion=2p5
     

Histograms Production (AFTER AnaTupleTask but NOT NECESSARILY AnaCacheTupleTask)

-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run HistProducerFileTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run HistProducerSampleTask --period Run2_${year} --version ${dir} --workflow htcondor --transfer-logs
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  MergeTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run HistProducerFileTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run HistProducerSampleTask --period ${era} --version ${dir} --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  MergeTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs
     

Work in progress

-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  HistRebinnerTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs # This does something only for KinFit_m, currently
-year=2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  HaddMergedTask --period Run2_${year}  --version ${dir}  --workflow htcondor --transfer-logs
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  HistRebinnerTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs # This does something only for KinFit_m, currently
+era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run  HaddMergedTask --period ${era}  --version ${dir}  --workflow htcondor --transfer-logs
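If a step of the chain above needs to be redone, its previous outputs can be removed first; a sketch using the --remove-output flag from the general analysis remarks (the depth 0,a is illustrative):

    era=Run2_2016; dir=v8_deepTau2p5_onlyTauTau_HTT; law run HistProducerFileTask --period ${era} --version ${dir} --remove-output 0,a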
     

    Tips

1. For local production switch from --workflow htcondor to --workflow local
2. To produce specific branches add --branches X1,X2,..., where Xi are the branch numbers (both tips are combined in the sketch below)
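Both tips combined in a single command (the task, version, and branch numbers are only examples):

    era=Run2_2016; dir=v8_deepTau2p1_onlyTauTau_HTT; law run HistProducerFileTask --period ${era} --version ${dir} --workflow local --branches 1,2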

    How to run HHbtag training skim ntuple production

    python Studies/HHBTag/CreateTrainingSkim.py --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root --mass 350 --sample GluGluToBulkGraviton --year 2018 >& EventInfo.txt
    python Common/SaveHisto.txt --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root
diff --git a/index.html b/index.html
index a4966479..1cdd0d2c 100644
--- a/index.html
+++ b/index.html

    How to install
    git clone --recursive git@github.com:cms-flaf/Framework.git FLAF
     

  • Create a user customisation file config/user_custom.yaml. It should contain all user-specific modifications that you don't want to commit to the central repository. Below is an example of the minimal content of the file (replace USER_NAME and ANA_FOLDER with your values):

    fs_default:
        - 'T3_CH_CERNBOX:/store/user/USER_NAME/ANA_FOLDER/'
    fs_anaCache:
        - 'T3_CH_CERNBOX:/store/user/USER_NAME/ANA_FOLDER/'
    fs_anaTuple:
        - 'T3_CH_CERNBOX:/store/user/USER_NAME/ANA_FOLDER/'
    analysis_config_area: config/HH_bbtautau
    compute_unc_variations: true
    store_noncentral: true
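For illustration only, the same file with hypothetical values filled in (USER_NAME=jdoe, ANA_FOLDER=hh_bbtautau); the per-key comments are inferred from the key names and are not confirmed by the source:

    # remote storage used when no task-specific location is given
    fs_default:
        - 'T3_CH_CERNBOX:/store/user/jdoe/hh_bbtautau/'
    # storage for anaCache outputs
    fs_anaCache:
        - 'T3_CH_CERNBOX:/store/user/jdoe/hh_bbtautau/'
    # storage for anaTuple outputs
    fs_anaTuple:
        - 'T3_CH_CERNBOX:/store/user/jdoe/hh_bbtautau/'
    # analysis-specific configuration directory
    analysis_config_area: config/HH_bbtautau
    # also compute systematic uncertainty variations
    compute_unc_variations: true
    # keep non-central (shifted) objects in the outputs
    store_noncentral: true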

  • How to load environment

-Following command activates the framework environment:
-
-    source env.sh
-
-How to run limits
-
-1. As a temporary workaround, if you want to run multiple commands, to avoid delays to load the environment each time, run:
-
-       cmbEnv /bin/zsh # or /bin/bash
-
-   Alternatively, add cmbEnv in front of each command. E.g. cmbEnv python3 -c 'print("hello")'
-
-2. Create datacards.
-
-       python3 StatInference/dc_make/create_datacards.py --input PATH_TO_SHAPES  --output PATH_TO_CARDS --config PATH_TO_CONFIG
-
-   Available configurations:
-
-3. Run limits.
-
-       law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log
-
-   Hints:
-   • use --workflow htcondor to submit on HTCondor (by default it runs locally)
-   • add --remove-output 4,a,y to remove previous output files
-   • add --print-status 0 to get the status of the workflow (where 0 is a depth). Useful to get the output file name.
-   • for more details see the cms-hh inference documentation
+1. Following command activates the framework environment:
+
+       source env.sh
+
+2. For the new installation, or after you implement new law tasks, you need to update the law index:
+
+       law index --verbose
+
+3. Initialize the voms proxy:
+
+       voms-proxy-init -voms cms -rfc -valid 192:00
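Put together, a fresh shell on a new installation would run the three new steps above in order; a minimal sketch:

    source env.sh                                  # activate the framework environment
    law index --verbose                            # update the law task index
    voms-proxy-init -voms cms -rfc -valid 192:00   # initialize the voms proxy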

-How to run nanoAOD->nanoAOD skims production
-
-    law run CreateNanoSkims --version prod_v1 --periods 2016,2016APV,2017,2018 --ignore-missing-samples True
-
-How to run HHbtag training skim ntuple production
-
-    python Studies/HHBTag/CreateTrainingSkim.py --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root --mass 350 --sample GluGluToBulkGraviton --year 2018 >& EventInfo.txt
-    python Common/SaveHisto.txt --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root

 How to run Histogram production

 Please see the file all_commands.txt (to be updated)