
Translate Metadata

Translations are written in jq. See the jq manual to learn more.
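A translation can be tried out directly with the jq command-line tool (a sketch, assuming the library file n5.jq and a saved attribute tree tree.json are in the working directory):

jq -L . 'include "n5"; walk( if isCosem then cosemToTransform else . end )' tree.json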

Practical examples

COSEM to canonical

With multiscales:

include "n5";
addPaths | walk ( if isCosem then cosemToTransform else . end ) | addAllMultiscales

If you know your n5 container contains only single-scale datasets, you can omit the addPaths and addAllMultiscales functions:

include "n5";
walk ( if isCosem then cosemToTransform else . end )

uses: addPaths, isCosem, cosemToTransform, addAllMultiscales

N5Viewer to canonical

With multiscales:

include "n5";
addPaths | walk ( if isN5V then n5vToTransform else . end ) | addAllMultiscales

If you know your n5 container contains only single-scale datasets, you can omit the addPaths and addAllMultiscales functions:

include "n5";
walk ( if isN5V then n5vToTransform else . end )

uses: addPaths, isN5V, n5vToTransform, addAllMultiscales

N5Viewer and COSEM to canonical

include "n5";
addPaths |  walk (
	if isCosem then cosemToTransform
	elif isN5V then n5vToTransform
	else . end )
| addAllMultiscales

ImageJ to canonical

include "n5";
walk (
	if isIJ then ijToTransform
	else . end )

uses: isIJ, ijToTransform

Clear and set metadata

Sets metadata for a single dataset ("my/dataset"), clearing any non-required attributes. clearAndSetMetadata creates a canonical metadata object with intensity range, spatial calibration, and color information.

def clearAndSetMetadata: clearDatasetMetadata
	| . + arrayUnitAxisToTransform( [4,0,0,0, 0,3,0,0, 0,0,2,0]; 
	     "mm"; 
	     axesFromLabels(["x","y","z"];"mm"))
	| . + intensityRange( 42; 1412 ) 
	| . + { "color" : rgbaColor( 0; 255; 0; 255 )};

addPaths | getSubTree ("my/dataset") |= ( .attributes |= clearAndSetMetadata)

uses built-in functions: clearDatasetMetadata, arrayUnitAxisToTransform, axesFromLabels, intensityRange, rgbaColor, addPaths, getSubTree

Tutorial

N5Viewer to canonical

This tutorial shows how to write, from scratch, a translation that converts n5-viewer style spatial metadata to the "canonical" metadata style, i.e. a function that adds a new attribute to any attribute object in the tree that has a pixelResolution field.

Input
{ 
    "pixelResolution": { 
        "dimensions": [8, 7, 6],
        "unit": "mm"
    }
}
Desired output
{ 
    "pixelResolution": { 
        "dimensions": [8, 7, 6],
        "unit": "mm"
    },
    "spatialTransform": {
        "transform":{
            "type": "affine",
            "affine": [8, 0, 0, 0, 0, 7, 0, 0, 0, 0, 6, 0]
        },
        "unit":"mm"
    }
}

We'll use the built-in jq functions walk, type, and has. The walk function applies a function to every part of the metadata. type returns the type of its input: one of "object", "array", "string", "number", "boolean", or "null". The has function checks for the existence of a field in an object.
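As a quick illustration on a made-up object (not part of the n5 library):

{"a": {"pixelResolution": 1}, "b": 2}
| walk( if type == "object" and has("pixelResolution") then . + {"found": true} else . end )

This evaluates to {"a": {"pixelResolution": 1, "found": true}, "b": 2}: walk visits every value in the tree, and only the object that has a pixelResolution field is modified.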

First let's write a function that checks if the input is an object and has the pixelResolution field. We'll call it isN5v for "is n5-viewer".

def isN5v: type == "object" and has("pixelResolution");

Next, let's write a function that builds a canonical transform object from the pixel resolution object. We'll use the n5 jq function arrayAndUnitToTransform (see below). To use it, we need to make a two-element array containing a flattened affine matrix and a spatial unit. For the above example, we want this output:

[ [8, 0, 0, 0, 0, 7, 0, 0, 0, 0, 6, 0],
"mm" ]

This produces the output we need:

[
  [ .pixelResolution.dimensions[0], 0, 0, 0,
     0, .pixelResolution.dimensions[1], 0, 0,
     0, 0, .pixelResolution.dimensions[2], 0 ],
  .pixelResolution.unit
]
  • .pixelResolution gets the pixelResolution object
  • .pixelResolution.dimensions gets the dimensions array
  • .pixelResolution.dimensions[0] gets the first value (0th index) of the dimensions array.
  • .pixelResolution.unit gets the value of the unit field in the pixelResolution object, as shown below
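Evaluated against the Input object at the top of this tutorial, these expressions give:

.pixelResolution.dimensions      # => [8, 7, 6]
.pixelResolution.dimensions[0]   # => 8
.pixelResolution.unit            # => "mm"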

This also works:

.pixelResolution | 
[ [ .dimensions[0], 0, 0, 0, 0, .dimensions[1], 0, 0, 0, 0, .dimensions[2], 0 ],
.unit ]

Here | is the pipe operator, which feeds the output of the operation on the left to the operation on the right. Finally, we need to pass the output of the above to the arrayAndUnitToTransform function, which we can simply do with another pipe:

def convert: 
    .pixelResolution | 
    [ [ .dimensions[0], 0, 0, 0, 0, .dimensions[1], 0, 0, 0, 0, .dimensions[2], 0 ],
      .unit ]
    | arrayAndUnitToTransform;

We also made this whole operation a function called convert. Its output will be a new object, which we want to add to the current attributes.

Finally, let's apply our new function repeatedly with walk, but only to the relevant parts of the metadata tree (where isN5v returns true).

walk( if isN5v then . + convert else . end )
  • . + convert merges the result of the convert function into the current object, as illustrated below.
  • else . end returns the current state of the tree wherever isN5v is false, and makes sure our translation only affects the relevant parts of the metadata tree.
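The + here is plain jq object addition, which merges two objects, for example:

{"a": 1} + {"b": 2}    # => {"a": 1, "b": 2}

So . + convert keeps the existing attributes and adds the new spatialTransform object alongside them.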

Putting it all together, we have:

def convert: .pixelResolution | 
    [ [ .dimensions[0], 0, 0, 0, 0, .dimensions[1], 0, 0, 0, 0, .dimensions[2], 0 ],
      .unit ] 
    | arrayAndUnitToTransform;
def isN5v: type == "object" and has("pixelResolution");

walk( if isN5v then . + convert else . end )

Quiz

Try writing a translation function that applies to the following metadata tree, i.e. a function that converts this input:

{
    "physicalScales": {
        "x":2.0,
        "y":3.0,
        "z":4.0
    },
    "physicalUnit": "cm"
}

to this output:

{
    "physicalScales": {
        "x": 2.0,
        "y": 3.0,
        "z": 4.0
    },
    "physicalUnit": "cm",
    "spatialTransform": {
        "transform": {
            "type": "affine",
            "affine": [2, 0, 0, 0, 0, 3, 0, 0, 0, 0, 4, 0]
        },
        "unit": "cm"
    }
}
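One possible solution, following the pattern from the tutorial above (a sketch only; the names isPhysical and convertPhysical are made up for this example, and arrayAndUnitToTransform is the built-in function described below):

def isPhysical: type == "object" and has("physicalScales") and has("physicalUnit");
def convertPhysical:
    [ [ .physicalScales.x, 0, 0, 0,
        0, .physicalScales.y, 0, 0,
        0, 0, .physicalScales.z, 0 ],
      .physicalUnit ]
    | arrayAndUnitToTransform;

walk( if isPhysical then . + convertPhysical else . end )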

Built-in functions

N5Viewer

isN5v

Returns true when called from a tree node that has metadata in the n5-viewer metadata dialect.

Example

Input 1:

{
    "pixelResolution": [1,2,3],
    "downsamplingFactors": [2,2,2]
}

Output 1: true

Input 2:

{
	"resolution": [1,2,3],
	"downsamplingFactors": [2,2,2]
}

Output 2: false

n5vToTransform

Adds a canonical transform object from the n5-viewer metadata dialect.

Example

Input:

{
    "pixelResolution": {
        "dimensions": [1, 2, 3],
        "unit": "um"
    },
    "downsamplingFactors": [2, 4, 8]
}

Output:

{
  "pixelResolution": {
    "dimensions": [ 1, 2, 3 ],
    "unit": "um"
  },
  "downsamplingFactors": [ 2, 4, 8 ],
  "transform": {
    "type": "affine",
    "affine": [ 2, 0, 0, 0.5, 0, 8, 0, 1.5, 0, 0, 24, 3.5 ]
  },
  "unit": "um"
}

COSEM

isCosem

Returns true when called from a tree node that has metadata in the COSEM metadata dialect.

Example

Input 1:

{
  "transform": {
    "axes": [ "z", "y", "x" ],
    "scale": [ 3, 2, 1 ],
    "translate": [ 0.3, 0.2, 0.1 ],
    "units": [ "mm", "mm", "mm" ]
  }
}

Output 1: true

Input 2:

{
    "pixelResolution": {
        "dimensions": [1, 2, 3],
        "unit": "um"
    },
    "downsamplingFactors": [2, 2, 2]
}

Output 2: false

cosemToTransform

Adds a canonical transform object from the COSEM metadata attributes.

Example

Input:

{
  "transform": {
    "axes": [ "z", "y", "x" ],
    "scale": [ 3, 2, 1 ],
    "translate": [ 0.3, 0.2, 0.1 ],
    "units": [ "mm", "mm", "mm" ],
    "axisIndexes": [ 2, 1, 0 ]
  }
}

Output:

{
  "transform": {
    "axes": [ "z", "y", "x" ],
    "scale": [ 3, 2, 1 ],
    "translate": [ 0.3, 0.2, 0.1 ],
    "units": [ "mm", "mm", "mm" ],
    "axisIndexes": [ 2, 1, 0 ]
  },
  "spatialTransform": {
    "transform": {
      "type": "affine",
      "affine": [ 1, 0, 0, 0.1, 0, 2, 0, 0.2, 0, 0, 3, 0.3 ]
    },
    "unit": "mm"
  }
}

OME-NGFF

See the OME-NGFF v0.3 specification.

Returns true when called from a tree node that has OME-NGFF multiscale metadata.

Example

Input:

{ "attributes: {
    "multiscales": [
        {
            "axes": [ "z", "y", "x" ],
            "datasets": [
                { "path": "s0" },
                { "path": "s1" },
                { "path": "s2" }
            ],
            "metadata": {
                "order": 0,
                "preserve_range": true,
                "scale": [ 0.5, 0.5, 0.5 ]
            },
            "name": "zyx",
            "type": "skimage.transform._warps.rescale",
            "version": "0.3"
        }
    ]
  },
  "children" : []
}

Output: true

omeNgffTransformsFromMultiscale

Given a multiscales object, returns a map from dataset names to pixel-to-physical transforms. This is useful because in some versions of the OME-NGFF specification, this information is not present in the dataset-level metadata attributes.

Example

Input:

{
  "axes": [ "z", "y", "x" ],
  "datasets": [
    { "path": "s0" },
    { "path": "s1" },
    { "path": "s2" }
  ],
  "metadata": {
    "order": 0,
    "preserve_range": true,
    "scale": [ 0.5, 0.5, 0.5 ]
  },
  "name": "zyx",
  "type": "skimage.transform._warps.rescale",
  "version": "0.3"
}

Output:

{
  "s0": {
    "transform": {
      "type": "scale",
      "scale": [ 0.5, 0.5, 0.5 ]
    }
  },
  "s1": {
    "transform": {
      "type": "scale",
      "scale": [ 0.25, 0.25, 0.25
      ]
    }
  },
  "s2": {
    "transform": {
      "type": "scale",
      "scale": [ 0.125, 0.125, 0.125
      ]
    }
  }
}

When called from a tree node that has OME-NGFF multiscale metadata, adds the appropriate transformation metadata to its child nodes, where the transformation is inferred from the multiscale metadata with omeNgffTransformsFromMultiscale.

Others

Returns true when called from a tree node that represents an n5 dataset.

Example

Input:

{ 
    "attributes": { 
        "dimensions": [8, 8],
        "dataType": "uint8"
    },
    "children" : {}
}

Output: true

isAttributes

Returns true when called from a tree node that represents the attributes of an n5 group or dataset.

Examples

Input 1:

{
    "attributes": {
        "dimensions": [8, 8],
        "dataType": "uint8"
    },
    "children" : {}
}

Output 1: false

Input 2:

{
	"dimensions": [8, 8],
	"dataType": "uint8"
}

Output 2: true

addPaths

Adds path variables into attribute objects throughout the tree. Useful for making local operations aware of their global location in the tree.

Example

Input:

{
  "attributes": {},
  "children": {
    "c0": {
      "attributes": {},
      "children": {
        "s0": {
          "attributes": { }
        },
        "s1": {
          "attributes": { }
        }
      }
    }
  }
}

Output:

{
  "attributes": {
    "path": ""
  },
  "children": {
    "c0": {
      "attributes": {
        "path": "c0"
      },
      "children": {
        "s0": {
          "attributes": {
            "path": "c0/s0"
          }
        },
        "s1": {
          "attributes": {
            "path": "c0/s1"
          }
        }
      }
    }
  }
}

arrayAndUnitToTransform

Creates a canonical spatialTransform object from a two-element array containing a flat affine transform and a spatial unit.

Example

Input:

[ [1, 2, 3, 4, 5, 6], "parsec"]

Output:

{
  "spatialTransform": {
    "transform": {
      "type": "affine",
      "affine": [1, 2, 3, 4, 5, 6]
    },
    "unit": "parsec"
  }
}

Returns the 2D identity matrix (homogeneous coordinates) as a flat array, i.e. [1,0,0, 0,1,0]

Returns the 3D identity matrix (homogeneous coordinates) as a flat array, i.e. [1,0,0,0, 0,1,0,0, 0,0,1,0]

Returns a 2D matrix (homogeneous coordinates) as a flat array, but replaces the diagonal elements with the elements of the argument.

Returns a 2D matrix (homogeneous coordinates) as a flat array, but replaces the translation elements with the elements of the argument.

Returns a 3D matrix (homogeneous coordinates) as a flat array, but replaces the diagonal elements with the elements of the argument.

Returns a 3D matrix (homogeneous coordinates) as a flat array, but replaces the translation elements with the elements of the argument.

arrMultiply

Elementwise array multiplication.

Example

Input:

[1, 2, 3] as $x | [4, 5, 6] as $y | arrMultiply( $x; $y )

Output:

[ 4, 10, 18 ]