orion edited this page Sep 23, 2024 · 21 revisions

Welcome!

Welcome to the wiki for the PRIME telescope photometry pipeline! These pages cover how to use the pipeline's scripts, how those scripts function, and other useful material, such as recommended index and config files. I'll do my best to explain everything about the pipeline, and will add and update pages over time as functionality improves!

To jump right into using the pipeline, start with the Quick Start Guide below! Then, for further information, see the master.py and multi_master.py section of the Scripts documentation (as explained below!)

Quick Start Guide

Setting up Access

To begin, if you're using the main pipeline on the PRIME computer, you'll need to access the computer via SSH or Microsoft Remote Desktop. I personally use Remote Desktop, as it makes it easy for me to examine scripts, files, and results visually. There is a Google Doc that details how to set up the VPN and access goddardpc01 (I won't link it here as it contains sensitive logins, but if you're in the PRIME ToO slack, it should be a bookmark under 'PRIME data retrieval'). Remember, the pipeline is stored on goddardpc01 specifically, so access that computer only.

Confirming an Observation

Confirm that you can access goddardpc01. Before moving on to the pipeline itself, we should first confirm the observation we want to process. It's good practice to verify that your target was observed on the date and in the filter you expect. You can verify by checking the PRIME ramp log at the link below:

http://www-ir.ess.sci.osaka-u.ac.jp/prime_staff/LOG/Ramp_LOG/

Simply navigate to your date and search for your field and filter.

For the purposes of this guide, let us assume a transient has been observed. We'll use GRB240825a as an example, observed by PRIME on 8/25/2024. We have the date; now we need the field number (OBJNAME in the log). If you're in the PRIME discord, navigate to the obs_request channel, where observation request CSVs are sent to the observers. Scroll until you find the CSV titled 'GRB240825a.csv'. Within it, find the ObjectName column: this corresponds to the field on the observation grid that should be observed, and is the field number we need. If you aren't currently in the PRIME discord, the field number for this GRB is field16359.

Let's process just the J band. So now we have the necessary date, field, and filter. Check the corresponding log to confirm this information is accurate (the observation should be there!). Once you've confirmed the observation was taken, we can move on.

Utilizing the Pipeline

Now we can finally start using the pipeline! Remote into goddardpc01 and open up WSL (it should be on the taskbar).

Currently, to get the pipeline ready for use, I utilize these commands:

     conda activate prime-photometrus
     cd /mnt/c/PycharmProjects/prime-photometry/photomitrus/
     export PYTHONPATH=.

Once you've entered these, we can use the script most often utilized in the pipeline: multi_master.py. It handles data download, processing, and stacking for one or more detectors (chips), all from a single command. The script takes several fields (such as date, filter, etc.) as input; for detailed explanations of every argument, see the Scripts section.

We first need to choose the parent directory where all the processing will take place. Preferably, it should be a new directory under the /mnt/d/PRIME_photometry_test_files/ path (this is where most observations are stored). For the sake of this guide, let's make the directory: /mnt/d/PRIME_photometry_test_files/pipeline_demo/.
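As a quick sketch, the parent directory can be created from WSL before running the pipeline. A temporary root is used below so the snippet runs anywhere; on goddardpc01 you would create the directory under /mnt/d/PRIME_photometry_test_files/ directly:

```shell
# Sketch: create a fresh parent directory for this pipeline run.
# ROOT stands in for /mnt/d/PRIME_photometry_test_files/ on goddardpc01.
ROOT=$(mktemp -d)
PARENT="$ROOT/pipeline_demo"
mkdir -p "$PARENT"
ls -ld "$PARENT"
```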

Let's run the pipeline only on chip 2, to save time and disk space. To run multi_master.py on this observation, use the command:

     python ./multi_master.py -shift -parent /mnt/d/PRIME_photometry_test_files/pipeline_demo/ -target field16359 -date 20240825 -filter J -chip 2

Once you run this command, you'll notice the script is quite verbose in WSL. This is good for monitoring progress.

Sections

Scripts:

Most of the wiki is about the scripts: how they function, what they produce, and how to use them. NOTE: Not every script currently has complete documentation, but it will be updated over time!

  • Master & Multi_master: The pipeline itself is run from either master.py or multi_master.py. For most use cases and normal observations multi_master.py is sufficient, though specific scenarios may require the extra knobs that master.py provides. In any case, the sections on these two scripts are the most important, though the other script documentation is useful for understanding how the master scripts run. To learn how to run things, I recommend first reading through multi_master.py (used the most), then master.py, and finally the other scripts.

  • Astrometry: This section details each of the 4 current astrometry scripts that can be used in the pipeline.

  • Photometry: This section details how photometry is generated for a chip's data, and what data products are produced.

  • Stack: This section details how the final pipeline step, stacking, works.

Installation instructions for PRIME Pipeline (taken from initial readme)

Clone the repo

  1. Navigate to the main page of this repo: https://github.com/Oriohno/prime-photometry. Then click the Code button and copy the HTTPS or SSH link.

  2. Open command line and change your current working directory (cd) to the place you wish to clone the repo.

  3. Finally, use the command 'git clone' as below to clone locally.

     git clone https://github.com/Oriohno/prime-photometry.git
    

Conda environment setup

  1. Begin by navigating to the .yml file included in this repo. cd to /prime-photometry/ for ease of using the command below.

  2. To create the conda environment, run the command below.

     conda env create -f prime-photometry.yml
    
  3. You can then activate the environment by using:

     conda activate prime-photometry
    

Index file installation

About index files

To utilize a key part of the pipeline, astrometry.net, you will need index files for the package to pull from. See these links for key details about index files and where to download them:

https://astrometry.net/doc/readme.html

http://data.astrometry.net/

For PRIME I've found the 4200 series useful: I downloaded those first and they've worked (mostly) just fine, being built off of 2MASS. Astrometry.net recommends downloading index files with sky-marks 10% to 100% of the size of your images. PRIME's individual detector images are in the 30-40' range, so I should download series 08 down to 01. I've done this for some areas, but have yet to complete coverage.

If you want to save time (and delay the inevitable), you can download index files only for the area of sky where your images are located. You can use these maps to determine which numbers correspond to which index files:

https://github.com/dstndstn/astrometry.net/blob/master/util/hp.png

https://github.com/dstndstn/astrometry.net/blob/master/util/hp2.png

It is also recommended to download the 4100 series and some of the 5200 series, though I haven't yet tested whether these work better for PRIME than the 4200 series.
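As a hedged sketch of the download step: the 4200-series files on data.astrometry.net follow the naming pattern index-42SS-HP.fits, where SS is the scale number and HP is the healpix tile read off the maps above (the larger scales, 08 and up, come as single all-sky files such as index-4208.fits). The tile number below (11) is a placeholder; the loop only builds a URL list, which you can then hand to wget:

```shell
# Sketch: list 4200-series index file URLs for scales 01-07 and one healpix
# tile. The tile number (11) is a placeholder -- read yours off the maps above.
BASE=http://data.astrometry.net/4200
for scale in 01 02 03 04 05 06 07; do
    echo "$BASE/index-42${scale}-11.fits"
done > index_urls.txt
cat index_urls.txt
# To actually download:  wget -nc -P ./astrometry_indexes -i index_urls.txt
```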

Installation placement

We must place the index files in their own directory, named whatever you like. Then find the 'astrometry.cfg' file, likely located where the astrometry python package is installed. It should look like the example below:

https://github.com/dstndstn/astrometry.net/blob/main/etc/astrometry.cfg

Under the comment "# In which directories should we search for indices?", add your index file directory after "add_path", like the example below:

    # In which directories should we search for indices?
    add_path ../../../indexes

Now, astrometry.net should be able to use your index files.