pfcon workflow example

Dan McPherson edited this page Aug 20, 2018 · 8 revisions

Workflow

Abstract

This page presents a sample workflow, demonstrating the JSON payload sent to pfcon and an asynchronous query of job status.

Preconditions

HOST_IP

You must export an environment variable, HOST_IP, in the terminal running pfurl and pfcon. On Linux this can be done with:

export HOST_IP=$(ip route | grep -v docker | awk '{if(NF==11) print $9}')
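The `ip route` parsing above is somewhat brittle across distributions (it depends on the column count of the output). As an alternative, and purely as a convenience that is not part of the pfcon tooling, the host's outbound address can be discovered with a short Python snippet:

```python
import socket

def host_ip() -> str:
    """Return the IP of the interface used for outbound traffic.

    A UDP connect() sends no packets, but it makes the OS choose the
    outbound interface, whose address we then read back.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No external route available; fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(host_ip())
```

The printed address can then be exported as HOST_IP in the usual way.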

The companion services, pfioh and pman, are assumed to be already running on some (typically remote) host and reachable over the network.

Start

Start pfcon on localhost with:

pfcon --forever --httpResponse

Run a sample plugin/app

Using pfurl, a typical start message to pfcon could be constructed as follows:

pfurl --verb POST --raw --http ${HOST_IP}:5005/api/v1/cmd \
      --httpResponseBodyParse \
      --jsonwrapper 'payload' --msg '
        {   "action": "coordinate",
            "threadAction":     true,
            "meta-store": {
                        "meta":         "meta-compute",
                        "key":          "jid"
            },
             
            "meta-data": {
                "remote": {
                        "key":          "%meta-store"
                },
                "localSource": {
                        "path":         "/home/rudolphpienaar/Pictures"
                },
                "localTarget": {
                        "path":         "/home/rudolphpienaar/tmp/Pictures",
                        "createDir":    true
                },
                "specialHandling": {
                        "op":           "plugin",
                        "cleanup":      true
                },
                "transport": {
                    "mechanism":    "compress",
                    "compress": {
                        "archive":  "zip",
                        "unpack":   true,
                        "cleanup":  true
                    }
                },
                "service":              "host"
            },

            "meta-compute":  {
                "cmd":      "$execshell $selfpath/$selfexec --sleepLength 0 --prefix out- /share/incoming /share/outgoing",
                "auid":     "rudolphpienaar",
                "jid":      "89",
                "threaded": true,
                "container":   {
                        "target": {
                            "image":            "fnndsc/pl-simpledsapp",
                            "cmdParse":         true
                        },
                        "manager": {
                            "image":            "fnndsc/swarm",
                            "app":              "swarm.py",
                            "env":  {
                                "meta-store":   "key",
                                "serviceType":  "docker",
                                "shareDir":     "%shareDir",
                                "serviceName":  "89"
                            }
                        }
                },
                "service":              "host"
            }
        }
'
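The same message can be assembled programmatically, which makes the payload's one piece of indirection easier to see: "meta-store" declares that the job ID lives at meta-compute["jid"], and the "%meta-store" token in meta-data["remote"] dereferences it, so the remote share is keyed by the same jid as the compute job. A minimal Python sketch follows (the helper name is my own, and the "container" block is elided for brevity; pfcon only sees the resulting JSON):

```python
import json

def coordinate_msg(jid: str, src: str, dst: str, cmd: str) -> dict:
    """Assemble the 'coordinate' message sent to pfcon above."""
    return {
        "action": "coordinate",
        "threadAction": True,
        # Declares where the job ID lives: meta-compute["jid"].
        "meta-store": {"meta": "meta-compute", "key": "jid"},
        "meta-data": {
            # "%meta-store" dereferences the declaration above.
            "remote": {"key": "%meta-store"},
            "localSource": {"path": src},
            "localTarget": {"path": dst, "createDir": True},
            "specialHandling": {"op": "plugin", "cleanup": True},
            "transport": {
                "mechanism": "compress",
                "compress": {"archive": "zip", "unpack": True, "cleanup": True},
            },
            "service": "host",
        },
        "meta-compute": {
            "cmd": cmd,
            "auid": "rudolphpienaar",
            "jid": jid,
            "threaded": True,
            "service": "host",
        },
    }

msg = coordinate_msg(
    "89",
    "/home/rudolphpienaar/Pictures",
    "/home/rudolphpienaar/tmp/Pictures",
    "$execshell $selfpath/$selfexec --sleepLength 0 --prefix out- "
    "/share/incoming /share/outgoing",
)
# --jsonwrapper 'payload' wraps the message exactly like this:
print(json.dumps({"payload": msg}, indent=4))
```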

Query the status of the job

pfurl --verb POST --raw \
      --http ${HOST_IP}:5005/api/v1/cmd \
      --httpResponseBodyParse --jsonwrapper 'payload' \
      --msg '
        {   "action":           "status",
            "threadAction":     false,
            "meta": {
                "remote": {
                        "key":          "testService"
                }
            }
        }' --quiet --jsonpprintindent 4
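Since the job runs asynchronously, the status query is typically issued repeatedly until the job reports completion. The query itself is small enough to build in a few lines of Python (a sketch for illustration; pfurl remains the supported client, and the "testService" key is taken from the example above):

```python
import json

def status_msg(job_key: str) -> dict:
    """Build the status query shown above for a given remote job key."""
    return {
        "action": "status",
        "threadAction": False,
        "meta": {"remote": {"key": job_key}},
    }

# Wrapped exactly as pfurl's --jsonwrapper 'payload' does:
print(json.dumps({"payload": status_msg("testService")}, indent=4))
```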

NB NB NB

After each workflow run, you must manually remove the service from the swarm manager and delete the shared directories on the remote server. Also remove the local target directory!

On remote

sudo rm -fr /home/tmp/share/key-simpledsapp-1 ; dss testService ; dks

On local

rm -fr /home/rudolphpienaar/tmp/Pictures