Merge pull request #244 from ENCODE-DCC/dev
v2.0.1
leepc12 authored Nov 4, 2021
2 parents 43c10a6 + 769ca5a commit 796317e
Showing 5 changed files with 144 additions and 47 deletions.
18 changes: 16 additions & 2 deletions .circleci/config.yml
@@ -31,12 +31,20 @@ commands:
steps:
- run:
command: |
sudo apt-get update && sudo apt-get install software-properties-common git wget curl default-jre -y
sudo apt-get update && sudo apt-get install software-properties-common git wget curl -y
# install java 11
sudo add-apt-repository ppa:openjdk-r/ppa -y
sudo apt-get update && sudo apt-get install openjdk-11-jdk -y
# automatically set 11 as default java
sudo update-java-alternatives -a
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt-get update && sudo apt-get install python3.6 -y
sudo wget --no-check-certificate https://bootstrap.pypa.io/get-pip.py
sudo python3.6 get-pip.py
sudo ln -s /usr/bin/python3.6 /usr/local/bin/python3
pip3 install --upgrade pip
pip3 install caper google-cloud-storage
@@ -51,11 +59,17 @@ commands:
echo ${GCLOUD_SERVICE_ACCOUNT_SECRET_JSON} > tmp_secret_key.json
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/tmp_secret_key.json
# add docker image to input JSON
cat ${INPUT} | jq ".+{\"chip.docker\": \"${TAG}\"}" > input_with_docker.json
caper run ../../../chip.wdl \
--backend gcp --gcp-prj ${GOOGLE_PROJECT_ID} \
--gcp-service-account-key-json $PWD/tmp_secret_key.json \
--out-gcs-bucket ${CAPER_OUT_DIR} --tmp-gcs-bucket ${CAPER_TMP_DIR} \
-i ${INPUT} -m metadata.json --docker ${TAG}
-i input_with_docker.json -m metadata.json --docker ${TAG}
rm -f input_with_docker.json
res=$(jq '.outputs["chip.qc_json_ref_match"]' metadata.json)
[[ "$res" != true ]] && exit 100
81 changes: 61 additions & 20 deletions README.md
@@ -5,11 +5,33 @@

## Download new Caper>=2.0

New Caper is out. You need to update your Caper to work with the latest ENCODE ChIP-seq pipeline.
```bash
$ pip install caper --upgrade
```

## Local/HPC users and new Caper>=2.0

There are many changes for local/HPC backends: `local`, `slurm`, `sge`, `pbs`, and `lsf` (newly added). Make a backup of your current Caper configuration file `~/.caper/default.conf` and run `caper init` to reset/initialize the configuration file for your chosen backend. Then edit the configuration file and follow the instructions in it.
```bash
$ cd ~/.caper
$ cp default.conf default.conf.bak
$ caper init [YOUR_BACKEND]
```

In order to run a pipeline, you need to add one of the following flags to specify the environment in which each task runs: `--conda`, `--singularity`, or `--docker`. These flags are not required for cloud backend users (`aws` and `gcp`).
```bash
# for example
$ caper run ... --singularity
```

For Conda users, **RE-INSTALL THE PIPELINE'S CONDA ENVIRONMENTS AND DO NOT ACTIVATE A CONDA ENVIRONMENT BEFORE RUNNING PIPELINES**. Caper will internally call `conda run -n ENV_NAME CROMWELL_JOB_SCRIPT`. Just make sure that the pipeline's new Conda environments are correctly installed.
```bash
$ scripts/uninstall_conda_env.sh
$ scripts/install_conda_env.sh
```
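
For reference, the internal call that Caper makes looks like the following sketch; `encode-chip-seq-pipeline` is the pipeline's default environment name, and the job script path is a hypothetical placeholder for Cromwell's generated script:

```bash
# Illustration only: how Caper wraps a Cromwell job script in `conda run`
# (the script path is a placeholder, not a real file in this repo)
$ conda run -n encode-chip-seq-pipeline bash /path/to/cromwell_job_script.sh
```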


## Introduction
This ChIP-Seq pipeline is based on the ENCODE (phase-3) transcription factor and histone ChIP-seq pipeline specifications (by Anshul Kundaje) in [this Google doc](https://docs.google.com/document/d/1lG_Rd7fnYgRpSIqrIfuVlAz2dW1VaSQThzk836Db99c/edit#).

@@ -21,34 +21,43 @@

## Installation


1) Make sure that you have Python >= 3.6 (Caper does not work with Python 2). Install Caper and check that its version is >= 2.0.
```bash
$ python --version
$ pip install caper
$ caper -v
```
2) Make a backup of your Caper configuration file `~/.caper/default.conf` if you are upgrading from an old Caper (<2.0.0), then reset/initialize the configuration file. Read Caper's [README](https://github.com/ENCODE-DCC/caper/blob/master/README.md) carefully to choose a backend for your system, and follow the instructions in the configuration file.
```bash
# make a backup of ~/.caper/default.conf if you already have it
$ caper init [YOUR_BACKEND]

# then edit ~/.caper/default.conf
$ vi ~/.caper/default.conf
```

3) Git clone this pipeline.
> **IMPORTANT**: use `~/chip-seq-pipeline2/chip.wdl` as `[WDL]` in Caper's documentation.
```bash
$ cd
$ git clone https://github.com/ENCODE-DCC/chip-seq-pipeline2
```


4) (Optional for Conda) Install the pipeline's Conda environments if you don't have Singularity or Docker installed on your system. We recommend using Singularity instead of Conda. If you don't have Conda on your system, install [Miniconda3](https://docs.conda.io/en/latest/miniconda.html).
```bash
$ cd chip-seq-pipeline2
# uninstall old environments (<2.0.0)
$ bash scripts/uninstall_conda_env.sh
$ bash scripts/install_conda_env.sh
```

## Test run

@@ -63,10 +86,35 @@

# Or submit it as a leader job (with enough resources and a long enough time limit) to SLURM (Stanford Sherlock) with Singularity
# It will fail if you directly run the leader job on login nodes
$ sbatch -p [SLURM_PARTITION] -J [WORKFLOW_NAME] --export=ALL --mem 4G -t 4-0 --wrap "caper run chip.wdl -i https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.json --singularity"
```


## Running a pipeline on Terra/Anvil (using Dockstore)

Visit our pipeline repo on [Dockstore](https://dockstore.org/workflows/github.com/ENCODE-DCC/chip-seq-pipeline2). Click on `Terra` or `Anvil`. Follow Terra's instructions to create a workspace on Terra and add Terra's billing bot to your Google Cloud account.

Download this [test input JSON for Terra](https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.terra.json), upload it on Terra's UI, and then run the analysis.
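
If you prefer the command line, the same test input JSON can be fetched with any HTTP client before uploading it on Terra's UI; a minimal sketch:

```bash
# Download the Terra test input JSON, then upload it on Terra's UI
$ curl -L -O https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.terra.json
```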

If you want to use your own input JSON file, make sure that all files in it are on a Google Cloud Storage bucket (`gs://` URIs). Plain URLs will not work.
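
For example, local files can be staged on your own bucket with `gsutil` and then referenced by their `gs://` URIs; the bucket and file names below are hypothetical placeholders:

```bash
# Stage a local FASTQ on your own bucket (names are placeholders)
$ gsutil cp rep1.fastq.gz gs://YOUR_BUCKET/chip-seq/rep1.fastq.gz
# Then reference it in the input JSON:
#   "chip.fastqs_rep1_R1" : ["gs://YOUR_BUCKET/chip-seq/rep1.fastq.gz"]
```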


## Running a pipeline on DNAnexus (using Dockstore)

Sign up for a new account on [DNAnexus](https://platform.dnanexus.com/) and create a new project on either AWS or Azure. Visit our pipeline repo on [Dockstore](https://dockstore.org/workflows/github.com/ENCODE-DCC/chip-seq-pipeline2). Click on `DNAnexus` and choose a destination directory in your DNAnexus project. Click on `Submit` and visit DNAnexus. This submits a conversion job; you can check its status under `Monitor` in the DNAnexus UI.

Once the conversion is done, download one of the following input JSON files, according to the platform (AWS or Azure) chosen for your DNAnexus project:
- AWS: https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only_dx.json
- Azure: https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only_dx_azure.json

These input JSON files cannot be used directly; use them as a reference. Go to the destination directory on DNAnexus and click on the converted workflow `chip`. You will see input file boxes on the left-hand side of the task graph. Expand them and define the FASTQs (`fastq_repX_R1`) and `genome_tsv` as in the downloaded input JSON file. Then click on the `common` task box and define the other non-file pipeline parameters.


## Running a pipeline on DNAnexus (using our pre-built workflows)

See [this](docs/tutorial_dx_web.md) for details.



## Input JSON file

@@ -82,13 +130,6 @@

If you do not run the pipeline on Truwl, you can still share your use-case/job on the platform by getting in touch at [info@truwl.com](mailto:info@truwl.com) and providing your inputs.json file.

## How to organize outputs

Install [Croo](https://github.com/ENCODE-DCC/croo#installation). **You can skip this installation if you have installed the pipeline's Conda environment and activated it.** Make sure that you have python3 (>3.4.1) installed on your system. Find a `metadata.json` file in Caper's output directory.
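
A minimal sketch of organizing outputs with Croo, assuming `metadata.json` is the metadata file written by Caper for your workflow and the output directory name is a placeholder of your choice:

```bash
# Organize pipeline outputs into a human-friendly directory structure
$ croo metadata.json --out-dir organized_output
```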
54 changes: 29 additions & 25 deletions chip.wdl
@@ -7,16 +7,20 @@ struct RuntimeEnvironment {
}

workflow chip {
String pipeline_ver = 'v2.0.0'
String pipeline_ver = 'v2.0.1'

meta {
version: 'v2.0.0'
author: 'Jin wook Lee (leepc12@gmail.com) at ENCODE-DCC'
description: 'ENCODE TF/Histone ChIP-Seq pipeline'
version: 'v2.0.1'

author: 'Jin wook Lee'
email: 'leepc12@gmail.com'
description: 'ENCODE TF/Histone ChIP-Seq pipeline. See https://github.com/ENCODE-DCC/chip-seq-pipeline2 for more details. e.g. example input JSON for Terra/Anvil.'
organization: 'ENCODE DCC'

specification_document: 'https://docs.google.com/document/d/1lG_Rd7fnYgRpSIqrIfuVlAz2dW1VaSQThzk836Db99c/edit?usp=sharing'

default_docker: 'encodedcc/chip-seq-pipeline:v2.0.0'
default_singularity: 'library://leepc12/default/chip-seq-pipeline:v2.0.0'
default_docker: 'encodedcc/chip-seq-pipeline:v2.0.1'
default_singularity: 'library://leepc12/default/chip-seq-pipeline:v2.0.1'
croo_out_def: 'https://storage.googleapis.com/encode-pipeline-output-definition/chip.croo.v5.json'

parameter_group: {
@@ -67,8 +71,8 @@
}
input {
# group: runtime_environment
String docker = 'encodedcc/chip-seq-pipeline:v2.0.0'
String singularity = 'library://leepc12/default/chip-seq-pipeline:v2.0.0'
String docker = 'encodedcc/chip-seq-pipeline:v2.0.1'
String singularity = 'library://leepc12/default/chip-seq-pipeline:v2.0.1'
String conda = 'encode-chip-seq-pipeline'
String conda_macs2 = 'encode-chip-seq-pipeline-macs2'
String conda_spp = 'encode-chip-seq-pipeline-spp'
@@ -117,9 +121,9 @@
Array[File] bams = []
Array[File] nodup_bams = []
Array[File] tas = []
Array[File?] peaks = []
Array[File?] peaks_pr1 = []
Array[File?] peaks_pr2 = []
Array[File] peaks = []
Array[File] peaks_pr1 = []
Array[File] peaks_pr2 = []
File? peak_ppr1
File? peak_ppr2
File? peak_pooled
@@ -1703,7 +1707,7 @@
# we have all tas and ctl_tas (optional for histone chipseq) ready, let's call peaks
scatter(i in range(num_rep)) {
Boolean has_input_of_call_peak = defined(ta_[i])
Boolean has_output_of_call_peak = i<length(peaks) && defined(peaks[i])
Boolean has_output_of_call_peak = i<length(peaks)
if ( has_input_of_call_peak && !has_output_of_call_peak && !align_only_ ) {
call call_peak { input :
peak_caller = peak_caller_,
@@ -1748,7 +1752,7 @@
# call peaks on 1st pseudo replicated tagalign
Boolean has_input_of_call_peak_pr1 = defined(spr.ta_pr1[i])
Boolean has_output_of_call_peak_pr1 = i<length(peaks_pr1) && defined(peaks_pr1[i])
Boolean has_output_of_call_peak_pr1 = i<length(peaks_pr1)
if ( has_input_of_call_peak_pr1 && !has_output_of_call_peak_pr1 && !true_rep_only ) {
call call_peak as call_peak_pr1 { input :
peak_caller = peak_caller_,
@@ -1777,7 +1781,7 @@
# call peaks on 2nd pseudo replicated tagalign
Boolean has_input_of_call_peak_pr2 = defined(spr.ta_pr2[i])
Boolean has_output_of_call_peak_pr2 = i<length(peaks_pr2) && defined(peaks_pr2[i])
Boolean has_output_of_call_peak_pr2 = i<length(peaks_pr2)
if ( has_input_of_call_peak_pr2 && !has_output_of_call_peak_pr2 && !true_rep_only ) {
call call_peak as call_peak_pr2 { input :
peak_caller = peak_caller_,
@@ -2061,7 +2065,7 @@
call reproducibility as reproducibility_overlap { input :
prefix = 'overlap',
peaks = select_all(overlap.bfilt_overlap_peak),
peaks_pr = overlap_pr.bfilt_overlap_peak,
peaks_pr = if defined(overlap_pr.bfilt_overlap_peak) then select_first([overlap_pr.bfilt_overlap_peak]) else [],
peak_ppr = overlap_ppr.bfilt_overlap_peak,
peak_type = peak_type_,
chrsz = chrsz_,
@@ -2074,7 +2078,7 @@
call reproducibility as reproducibility_idr { input :
prefix = 'idr',
peaks = select_all(idr.bfilt_idr_peak),
peaks_pr = idr_pr.bfilt_idr_peak,
peaks_pr = if defined(idr_pr.bfilt_idr_peak) then select_first([idr_pr.bfilt_idr_peak]) else [],
peak_ppr = idr_ppr.bfilt_idr_peak,
peak_type = peak_type_,
chrsz = chrsz_,
@@ -2112,7 +2116,7 @@
ctl_lib_complexity_qcs = select_all(filter_ctl.lib_complexity_qc),
jsd_plot = jsd.plot,
jsd_qcs = jsd.jsd_qcs,
jsd_qcs = if defined(jsd.jsd_qcs) then select_first([jsd.jsd_qcs]) else [],
frip_qcs = select_all(call_peak.frip_qc),
frip_qcs_pr1 = select_all(call_peak_pr1.frip_qc),
@@ -2122,13 +2126,13 @@
frip_qc_ppr2 = call_peak_ppr2.frip_qc,
idr_plots = select_all(idr.idr_plot),
idr_plots_pr = idr_pr.idr_plot,
idr_plots_pr = if defined(idr_pr.idr_plot) then select_first([idr_pr.idr_plot]) else [],
idr_plot_ppr = idr_ppr.idr_plot,
frip_idr_qcs = select_all(idr.frip_qc),
frip_idr_qcs_pr = idr_pr.frip_qc,
frip_idr_qcs_pr = if defined(idr_pr.frip_qc) then select_first([idr_pr.frip_qc]) else [],
frip_idr_qc_ppr = idr_ppr.frip_qc,
frip_overlap_qcs = select_all(overlap.frip_qc),
frip_overlap_qcs_pr = overlap_pr.frip_qc,
frip_overlap_qcs_pr = if defined(overlap_pr.frip_qc) then select_first([overlap_pr.frip_qc]) else [],
frip_overlap_qc_ppr = overlap_ppr.frip_qc,
idr_reproducibility_qc = reproducibility_idr.reproducibility_qc,
overlap_reproducibility_qc = reproducibility_overlap.reproducibility_qc,
@@ -2945,7 +2949,7 @@ task reproducibility {
# in a sorted order. for example of 4 replicates,
# 1,2 1,3 1,4 2,3 2,4 3,4.
# x,y means peak file from rep-x vs rep-y
Array[File]? peaks_pr # peak files from pseudo replicates
Array[File] peaks_pr # peak files from pseudo replicates
File? peak_ppr # Peak file from pooled pseudo replicate.
String peak_type
File chrsz # 2-col chromosome sizes file
@@ -3060,9 +3064,9 @@ task qc_report {
Array[File] xcor_plots
Array[File] xcor_scores
File? jsd_plot
Array[File]? jsd_qcs
Array[File] jsd_qcs
Array[File] idr_plots
Array[File]? idr_plots_pr
Array[File] idr_plots_pr
File? idr_plot_ppr
Array[File] frip_qcs
Array[File] frip_qcs_pr1
@@ -3071,10 +3075,10 @@
File? frip_qc_ppr1
File? frip_qc_ppr2
Array[File] frip_idr_qcs
Array[File]? frip_idr_qcs_pr
Array[File] frip_idr_qcs_pr
File? frip_idr_qc_ppr
Array[File] frip_overlap_qcs
Array[File]? frip_overlap_qcs_pr
Array[File] frip_overlap_qcs_pr
File? frip_overlap_qc_ppr
File? idr_reproducibility_qc
File? overlap_reproducibility_qc
@@ -0,0 +1,15 @@
{
"chip.pipeline_type" : "tf",
"chip.genome_tsv" : "gs://encode-pipeline-genome-data/genome_tsv/v3/hg38_chr19_chrM.terra.tsv",
"chip.fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/rep1.subsampled.25.fastq.gz"
],
"chip.fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/rep2.subsampled.20.fastq.gz"
],
"chip.ctl_fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/ctl1.subsampled.25.fastq.gz"
],
"chip.ctl_fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI/fastq_subsampled/ctl2.subsampled.25.fastq.gz"
],
"chip.paired_end" : false,
"chip.title" : "ENCSR000DYI (subsampled 1/25, chr19_chrM only)",
"chip.description" : "CEBPB ChIP-seq on human A549 produced by the Snyder lab"
}
@@ -0,0 +1,23 @@
{
"chip.pipeline_type" : "tf",
"chip.genome_tsv" : "gs://encode-pipeline-genome-data/genome_tsv/v3/hg38_chr19_chrM.terra.tsv",
"chip.fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep1-R1.subsampled.50.fastq.gz"
],
"chip.fastqs_rep1_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep1-R2.subsampled.50.fastq.gz"
],
"chip.fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep2-R1.subsampled.50.fastq.gz"
],
"chip.fastqs_rep2_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/rep2-R2.subsampled.50.fastq.gz"
],
"chip.ctl_fastqs_rep1_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl1-R1.subsampled.80.fastq.gz"
],
"chip.ctl_fastqs_rep1_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl1-R2.subsampled.80.fastq.gz"
],
"chip.ctl_fastqs_rep2_R1" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl2-R1.subsampled.80.fastq.gz"
],
"chip.ctl_fastqs_rep2_R2" : ["gs://encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR936XTK/fastq_subsampled/ctl2-R2.subsampled.80.fastq.gz"
],
"chip.paired_end" : true,
"chip.title" : "ENCSR936XTK (subsampled 1/50, chr19 and chrM Only)",
"chip.description" : "ZNF143 ChIP-seq on human GM12878"
}
