The OwnTracks Recorder is a lightweight program for storing and accessing location data published via MQTT by the OwnTracks apps. It is a compiled program which is easily installed and operated even on low-end hardware, and it doesn't require an external database. It is also well suited to recording and storing the data you publish via our Hosted mode.
There are two main components: the recorder obtains, stores, and serves data, and the ocat command-line utility reads stored data in a variety of formats.
The recorder serves two purposes:
- It subscribes to an MQTT broker and awaits messages published from the OwnTracks apps, storing these in a particular fashion in what we call the store, which is basically a collection of files on the file system.
- It provides a Web server which serves static pages, a REST API you use to request data from the store, and a Websocket server. The distribution comes with a few examples of how to access the data through its HTTP interface (REST API). In particular, a table of last locations is available, as well as a live map which updates via the recorder's Websocket interface when location publishes are received. In addition, we provide maps with last points or tracks using the GeoJSON produced by the recorder.
Some examples of what the recorder's built-in HTTP server is capable of:
Retrieve the last position of a particular user. Note that we get the same data as reported by ocat.
$ curl http://127.0.0.2:8083/api/0/last -d user=demo -d device=iphone
[
{
"tst": 1440405601,
"acc": 10,
"_type": "location",
"alt": 262,
"lon": 13.60279820860699,
"vac": 6,
"vel": 18,
"lat": 51.06263391678321,
"cog": 82,
"tid": "NE",
"batt": 99,
"username": "demo",
"device": "iphone",
"topic": "owntracks/demo/iphone",
"ghash": "u31dmx9",
"cc": "DE",
"addr": "E40, 01156 Dresden, Germany"
}
]
By specifying a format parameter we can produce GeoJSON, say. Normally, the API retrieves the last 6 hours of data, but we can extend or limit this with the from and to parameters.
http://127.0.0.2:8083/map/index.html?user=demo&device=iphone&format=geojson&from=2014-01-01
In a suitable Web browser, the result is
If we change the format parameter of the previous URL to linestring, the result is
The recorder's Web server also provides a tabular display which shows the last position of devices, their address, country, etc. Some of the columns are sortable, you can search for users/devices and click on the address to have a map opened at the device's last location.
The recorder's built-in Websocket server updates a map as it receives publishes from the OwnTracks devices. Here's an example:
ocat is a CLI query program for data stored by the recorder: it prints data from storage in a variety of output formats:
- JSON
- GeoJSON (points)
- GeoJSON (line string)
- CSV
- GPX
The ocat utility accesses storage directly — it doesn’t use the recorder’s REST interface. ocat has a daunting number of options, some combinations of which make no sense at all.
Some example uses we consider useful:
- ocat --list shows which users are in storage.
- ocat --list --user jjolie shows devices for the specified user.
- ocat --user jjolie --device ipad prints JSON data for that user's device produced during the last 6 hours.
- ocat --last prints the LAST position of all users and devices. Can be combined with --user and --device.
- ocat ... --format csv produces CSV. Limit the fields you want extracted with --fields lat,lon,cc, for example.
- ocat ... --format xml produces XML. Limit the fields you want extracted with --fields lat,lon,cc, for example.
<?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet type='text/xsl' href='owntracks.xsl'?>
<owntracks>
<point>
<tst>1440395361</tst>
<acc>3000.000000</acc>
<alt>51</alt>
<lon>10.027857</lon>
<vac>29.000000</vac>
<vel>-1</vel>
<lat>52.378886</lat>
<cog>-1</cog>
<tid>NE</tid>
<batt>96</batt>
<ghash>u1r1upq</ghash>
<cc>DE</cc>
<addr>Heidecker Weg 86, 31275 Lehrte, Germany</addr>
<isorcv>2015-08-24T05:55:07Z</isorcv>
<isotst>2015-08-24T05:49:21Z</isotst>
</point>
...
</owntracks>
- ocat ... --limit 10 prints data for the current month, starting now and going backwards; only 10 locations will be printed. Generally, the --limit option reads the storage back to front, which makes no sense in some combinations.
Specifying --fields lat,tid,lon will request just those JSON elements from storage. (Note that doing so with GPX or GeoJSON output could render those formats useless if, say, lat is missing from the list of fields.)
The --from and --to options allow you to specify a UTC date and/or timestamp from which, respectively until which, data will be read. By default, the last 6 hours of data are produced. If --from is not specified, it therefore defaults to now minus 6 hours. If --to is not specified, it defaults to now. Dates and times must be specified as strings, and the following formats are recognized (an example follows the list):
%Y-%m-%dT%H:%M:%S
%Y-%m-%dT%H:%M
%Y-%m-%dT%H
%Y-%m-%d
%Y-%m
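For example, the following retrieves everything the demo user's iphone produced during August 2015 instead of only the last 6 hours (the date values are illustrative):
$ ocat --user demo --device iphone --from 2015-08-01 --to 2015-08-31T23:59:59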
The --limit option restricts the output to the last specified number of records. This is a bit of an "expensive" operation because we search the .rec files backwards (i.e. from end to beginning).
The recorder has been running for a while, and the OwnTracks apps have published data. Let us have a look at some of this data.
We obtain a list of users from the store:
$ ocat --list
{
"results": [
"demo"
]
}
From which devices has user demo published data?
$ ocat --list --user demo
{
"results": [
"iphone"
]
}
Where was demo's iphone last seen?
$ ocat --last --user demo --device iphone
[
{
"tst": 1440405601,
"acc": 10,
"_type": "location",
"alt": 262,
"lon": 13.60279820860699,
"vac": 6,
"vel": 18,
"lat": 51.06263391678321,
"cog": 82,
"tid": "NE",
"batt": 99,
"username": "demo",
"device": "iphone",
"topic": "owntracks/demo/iphone",
"ghash": "u31dmx9",
"cc": "DE",
"addr": "E40, 01156 Dresden, Germany"
}
]
Several things worth mentioning:
- The returned data structure is an array of JSON objects; had we omitted specifying a particular device or even a particular user, we would have obtained the last position of all this user's devices or of all users' devices, respectively.
- If you are familiar with the JSON data reported by the OwnTracks apps you'll notice that this JSON contains more information: this is provided on the fly by ocat and the REST API, e.g. from the reverse-geo cache the recorder maintains.
We can limit the number of returned elements; let's do this as CSV and limit the fields we are given:
$ ocat --user demo --device iphone --limit 4 --format csv --fields isotst,vel,addr
isotst,vel,addr
2015-08-24T08:40:01Z,18,E40, 01156 Dresden, Germany
2015-08-24T08:35:01Z,40,E40, 01723 Wilsdruff, Germany
2015-08-24T08:30:00Z,50,A14, 01683 Nossen, Germany
2015-08-24T08:24:59Z,40,A14, 04741 Roßwein, Germany
You will require:
- libmosquitto
- libcurl
- lmdb (included)
- optionally, Lua
Obtain the software, either as a package if available, via our Homebrew Tap on Mac OS X, directly as a clone of the repository, or as a tarball which you unpack. Copy the included config.mk.in file to config.mk and edit that: you specify the features or tweaks you need. (The file is commented.) Pay particular attention to the installation directory and the value of the store (STORAGEDEFAULT): that is where the recorder will store its files. DOCROOT is the root of the directory from which the recorder's HTTP server will serve files.
Type make and watch the fun. When that finishes, you should have at least two executable programs: ot-recorder, which is the recorder proper, and ocat. If you want, you can install these using make install, but this is not necessary: the programs will run from whichever directory you like if you add --doc-root ./docroot to the recorder options.
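The steps, summarized as a sketch (the choice of editor and whether to install are illustrative):
$ cp config.mk.in config.mk
$ vi config.mk                # set STORAGEDEFAULT, DOCROOT and optional features
$ make
$ sudo make install           # optional; the programs also run in place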
Ensure the LMDB databases are initialized by running the following command, which is safe to do even after an upgrade. (This initialization is non-destructive -- it will not delete any data.)
ot-recorder --initialize
Unless already provided by the package you installed, we recommend you create a shell script with which you henceforth launch the recorder (a sketch follows the list below). Note that you can have it subscribe to multiple topics, and you can launch several instances of the recorder (e.g. for distinct brokers) as long as you ensure:
- that each instance uses a distinct --storage
- that each instance uses a distinct --http-port (or 0 if you don't wish to provide HTTP support for a particular instance)
The recorder has, like ocat, a daunting number of options, most of which you will not require. Running either utility with the -h or --help switch will summarize their meanings. You can, for example, launch with a specific storage directory, disable the HTTP server, change its port, etc.
If you require authentication or TLS to connect to your MQTT broker, pay attention to the $OTR_ environment variables listed in the help.
Launch the recorder:
$ ./ot-recorder 'owntracks/#'
Publish a location from your OwnTracks app and you should see the recorder receive that on the console. If you haven't disabled Geo-lookups, you'll also see the address from which the publish originated.
The location message received by the recorder will be written to storage.
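At this point you can verify that the publish actually reached storage, for example with ocat or via the HTTP interface:
$ ocat --last
$ curl http://127.0.0.1:8083/api/0/monitor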
You have an account with our Hosted platform and you want to store the data published by your device and the devices you track. Proceed as follows:
- Download the StartCom ca-bundle.pem file to a directory of choice, and make a note of the path to that file.
- Create a small shell script modelled after the one hereafter (you can copy it from etc/hosted.sh) with which to launch the recorder.
- Launch that shell script to have the recorder connect to Hosted and subscribe to messages your OwnTracks apps publish via Hosted.
#!/bin/sh
export OTR_USER="username" # your OwnTracks Hosted username
export OTR_DEVICE="device" # one of your OwnTracks Hosted device names
export OTR_TOKEN="xab0x993z8tdw" # the Token corresponding to above pair
export OTR_CAFILE="/path/to/startcom-ca-bundle.pem"
ot-recorder --hosted "owntracks/#"
Note in particular the --hosted option: you specify neither a host name nor a port number; the recorder has those built-in, and it uses a specific clientID for the MQTT connection. Other than that, there is no difference between the recorder connecting to Hosted or to your private MQTT broker.
When the recorder has received a publish or two, visit it with your favorite Web browser by pointing it at http://127.0.0.1:8083.
We took a number of decisions for the design of the recorder and its utilities:
- Flat files. The filesystem is the database. Period. That's where everything is stored. It makes incremental backups, purging old data, and manipulation via the Unix toolset easy. (Admittedly, for fast lookups we employ LMDB as a cache, but the final word is in the filesystem.) We considered all manner of databases and decided to keep this as simple and lightweight as possible. You can however have the recorder send data to a database of your choosing, in addition to the file system it uses, by utilizing our embedded Lua hook.
- We wanted to store received data in the format it's published in. As this format is JSON, we store this raw payload in the .rec files. If we add an attribute to the JSON published by our apps, you have it right there. There's one slight exception: the monthly logs (the .rec files) have a leading timestamp and a relative topic; see below.
- File names are lower case. A user called JaNe with a device named myPHONe will be found in a file named jane/myphone.
. - All times are UTC (a.k.a. Zulu or GMT). We got sick and tired of converting stuff back and forth. It is up to the consumer of the data to convert to localtime if need be.
- The recorder does not provide authentication or authorization. Nothing at all. Zilch. Nada. Think about this before making it available on a publicly-accessible IP address. Or rather: don't think about it; just don't do it. You can of course place an HTTP proxy in front of the recorder to control access to it.
- ocat, the cat program for the recorder, uses the same back-end which is used by the API, though it accesses it directly (i.e. without resorting to HTTP).
As mentioned earlier, data is stored in files, and these files are relative to STORAGEDIR (compiled into the programs or specified as an option). In particular, the following directory structure can exist, whereby directories are created as needed by the recorder:
- cards/, optional, contains user cards which are published when either you or one of your trackers on Hosted adds a new device. This card is then stored here and used with, e.g., ocat --last to show a user's name and optional avatar.
- config/, optional, contains the JSON of a device configuration (.otrc) which was requested remotely via a dump command. Note that this will contain sensitive data.
- ghash/, unless disabled, reverse Geo data is collected into an LMDB database located in this directory.
- last/ contains the last location published by devices. E.g. Jane's last publish from her iPhone would be in last/jjolie/iphone/jjolie-iphone.json. The JSON payload contained therein is enhanced with the fields user, device, topic, and ghash.
- monitor, a file which contains a timestamp and the last received topic (see Monitoring below).
- msg/ contains messages received by the Messaging system.
- photos/, optional; contains the binary photos from a card.
- rec/, the recorder data proper. One subdirectory per user, and one subdirectory therein per device. Data files are named YYYY-MM.rec (e.g. 2015-08.rec for the data accumulated during the month of August 2015).
- waypoints/ contains a directory per user and device. Therein are individual files named by a timestamp with the JSON payload of published (i.e. shared) waypoints. The file names are timestamps because the tst of a waypoint is its key. If a user publishes all waypoints from a device (Publish Waypoints), the payload is stored in this directory as username-device.otrw. (Note that this is the JSON waypoints import format.)
You should definitely not modify or touch these files: they remain under the control of the recorder. You can, of course, remove old .rec files if they consume too much space.
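To illustrate, a store for the user jjolie with an iphone might look roughly like this (which directories actually appear depends on the features you use):
store/
   ghash/
   last/jjolie/iphone/jjolie-iphone.json
   monitor
   rec/jjolie/iphone/2015-08.rec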
If not disabled with the option --norevgeo, the recorder will attempt to perform a reverse-geo lookup on the location coordinates it obtains and store the result in an LMDB database. If a lookup is not possible, for example because you're over quota, the service isn't available, etc., the recorder keeps track of the coordinates which could not be resolved in a file named missing:
$ cat store/ghash/missing
u0tfsr3 48.292223 8.274535
u0m97hc 46.652733 7.868803
...
This can be used to subsequently obtain missed lookups.
We recommend you keep reverse-geo lookups enabled, as this data (country code cc and the location's address addr) is used by the example Web apps provided by the recorder to show where a particular device is. In addition, this cached data is used by the API (and ocat) when printing location data.
The precision with which reverse-geo lookups are performed is controlled with the --precision option to the recorder (and with the --precision option to ocat when you query for data). The default precision is compiled into the code (from config.mk). The higher the number, the more frequently lookups are performed; conversely, the lower the number, the fewer lookups are performed. For example, a precision of 1 means that points within an area of approximately 5,000 km by 5,000 km resolve to a single address, whereas a precision of 7 means that points within an area of approximately 150 m by 150 m resolve to one address. The recorder obtains a location publish, extracts the latitude and longitude, and then calculates the geohash string and truncates it to the configured precision. If the calculated geohash string can be found in our local LMDB cache, we consider the point cached; otherwise an actual reverse-geo lookup (via HTTP) is performed and the result is cached in LMDB under the geohash key.
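For example, to favour fewer lookups at the cost of coarser addresses, you might run the recorder with a lower precision and query with the same value (the value 5 is illustrative):
$ ot-recorder --precision 5 'owntracks/#'
$ ocat --last --precision 5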
As an example, let's assume Jane's device is at position (lat, lon) 48.879840, 2.323522, which resolves to the geohash string of length 7 u09whf7. We can visualize this and show what this looks like. (See also: visualizing geohash.)
Every location publish outside that very small blue square would mean another lookup. If, however, we lower the precision to, say, 5, a much larger area is covered
and a precision of 2 would mean that a very large part of France resolves to a single address:
The bottom line: if you run the recorder with just a few devices and want to know quite exactly where you've been, use a high precision (7 is probably good). If you, on the other hand, run recorder with many devices and are only interested in where a device was approximately, lower the precision; this also has the effect that fewer reverse-geo lookups will be performed in the Google infrastructure. (Also: respect their quotas!)
As hinted to above, the address data obtained through a reverse-geo lookup is stored in an embedded LMDB database, the content of which we can look at with
$ ocat --dump
u09whf7 {"cc":"FR","addr":"1 Rue de Saint-PĂ©tersbourg, 75008 Paris, France","tst":1445435622,"locality":"Paris"}
u09ey1r {"cc":"FR","addr":"D83, 91590 La Ferté-Alais, France","tst":1445435679,"locality":"La Ferté-Alais"}
The key to this data is the geohash string (here with an example of precision 7).
In order to monitor the recorder, whenever an MQTT message is received, a monitor file located relative to STORAGEDEFAULT is maintained. It contains a single line of text: the epoch timestamp and the last received topic, separated from each other by a space.
1439738692 owntracks/jjolie/ipad
If the recorder is built with WITH_PING (default), a location publish to owntracks/ping/ping (i.e. username is ping and device is ping) can be used to round-trip-test the recorder. For this particular username/device combination, the recorder will store the LAST position, but it will not keep a .rec file for it. This can be used to verify, say, via your favorite monitoring system, that the recorder is still operational.
After sending a pingping, you can query the REST interface to determine the difference in time. The contrib/ directory has an example Python program (ot-ping.py) which you can adapt as needed for use by Icinga or Nagios.
OK ot-recorder pingping at http://127.0.0.1:8085: 0 seconds difference
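A manual round trip, assuming mosquitto_pub is available and your broker runs locally without authentication (the payload values are illustrative), could look like this:
$ mosquitto_pub -t owntracks/ping/ping -m "{\"_type\":\"location\",\"tst\":$(date +%s),\"lat\":48.85,\"lon\":2.29,\"tid\":\"pp\"}"
$ curl http://127.0.0.1:8083/api/0/last -d user=ping -d device=ping
Comparing the tst in the response with the current time gives roughly the lag that ot-ping.py reports.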
The recorder has a built-in HTTP server with which it serves static files from either the compiled-in default DOCROOT directory or the one specified at run-time with the --doc-root option. Furthermore, it serves JSON data from the API end-point at /api/0/ and it has a built-in Websocket server for the live map.
The API basically serves the same data as ocat is able to produce.
The recorder's API provides most of the functions that are surfaced by ocat. GET and POST requests are supported, and if a username and device are needed, these can be passed in via X-Limit-User and X-Limit-Device headers as an alternative to GET or POST parameters. (From and To dates may also be specified as X-Limit-From and X-Limit-To respectively.)
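For example, the following two requests ask for the same thing, once with parameters and once with headers:
$ curl http://127.0.0.1:8083/api/0/last -d user=jjolie -d device=phone
$ curl -H 'X-Limit-User: jjolie' -H 'X-Limit-Device: phone' http://127.0.0.1:8083/api/0/last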
The API endpoint is at /api/0 and is followed by the verb.
The monitor endpoint returns the content of the monitor file as plain text.
curl 'http://127.0.0.1:8083/api/0/monitor'
1441962082 owntracks/jjolie/phone
The last endpoint returns a list of users' last positions. (Can be limited by user and device.)
curl http://127.0.0.1:8083/api/0/last [-d user=jjolie [-d device=phone]]
List users. If user is specified, lists that user's devices. If both user and device are specified, lists that device's .rec files.
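No example is shown here; assuming the verb is list (mirroring ocat --list), such requests might look like this:
$ curl http://127.0.0.1:8083/api/0/list
$ curl http://127.0.0.1:8083/api/0/list -d user=jjolie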
Here comes the actual data: the locations endpoint lists users' locations and requires both user and device. Output format is JSON unless a different format is given (json, geojson, and linestring are supported).
In order to limit the number of records returned, use limit, which causes a reverse search through the .rec files; this can be used to find the last N positions.
Date/time ranges may be specified as from and to with dates/times specified as described for ocat above.
curl http://127.0.0.1:8083/api/0/locations -d user=jpm -d device=5s
curl http://127.0.0.1:8083/api/0/locations -d user=jpm -d device=5s -d limit=1
curl http://127.0.0.1:8083/api/0/locations -d user=jpm -d device=5s -d format=geojson
curl http://127.0.0.1:8083/api/0/locations -d user=jpm -d device=5s -d from=2014-08-03
The q endpoint queries the geo cache for a particular lat and lon.
curl 'http://127.0.0.1:8083/api/0/q?lat=48.85833&lon=2.295'
{
"cc": "FR",
"addr": "9 Avenue Anatole France, 75007 Paris, France",
"tst": 1441984405
}
The reported timestamp was the time at which this cache entry was made. Note that this interface queries only -- it does not populate the cache.
Requires POST method and user. This is currently incomplete; it simply writes a key into LMDB consisting of "blockme user".
Requires the GET method and user, and will return the image/png 40x40px photograph of a user if available in STORAGEDIR/photos/, or a transparent 40x40 PNG with a black border otherwise.
If support for this is compiled in, the kill endpoint allows a client to remove data from storage. (Warning: any client can do this, as there is no authentication/authorization in the recorder!)
curl 'http://127.0.0.1:8083/api/0/kill?user=ngin&device=ojo'
{
"path": "s0/rec/ngin/ojo",
"status": "OK",
"last": "s0/last/ngin/ojo/ngin-ojo.json",
"killed": [
"2015-09.rec",
]
}
The response contains a list of removed .rec files, and file system operations are logged to syslog.
If the recorder is compiled with Lua support, a Lua script you provide is launched at startup. Lua is a powerful, fast, lightweight, embeddable scripting language. You can use this to process location publishes in any way you desire: your imagination (and Lua-scripting know-how) set the limits. Some examples:
- insert publishes into a database of your choice
- switch on the coffee machine when your OwnTracks device reports you're entering home (but see also mqttwarn)
- write a file with data in a format of your choice (see etc/example.lua)
Run the recorder with the path to your Lua script specified in its --lua-script option (there is no default). If the script cannot be loaded (e.g. because it cannot be read or contains syntax errors), the recorder unloads Lua and continues without your script.
If the Lua script can be loaded, it is automatically provided with a table variable called otr which contains the following members:
- otr.version is a read-only string with the recorder version (example: "0.3.2").
- otr.log(s) is a function which takes a string s which is logged to syslog at the recorder's facility and log level INFO.
- otr.strftime(fmt, t) is a function which takes a format string fmt (see strftime(3)) and an integer number of seconds t, and returns a string with the formatted UTC time. If t is 0 or negative, the current system time is used.
- otr.putdb(key, value) is a function which takes two strings k and v and stores them in the named LMDB database called luadb. This can be viewed with ocat --dump=luadb.
- otr.getdb(key) is a function which takes a single string key and returns the database value associated with that key, or nil if the key isn't stored.
Your Lua script must provide the following functions:
otr_init() is invoked at start of the recorder. If the function returns a non-zero value, the recorder unloads Lua and disables its processing; i.e. otr_hook() will not be invoked on location publishes.
otr_exit() is invoked when the recorder stops, which it doesn't really do unless you CTRL-C it or send it a SIGTERM signal.
otr_hook() is invoked at every location publish processed by the recorder. Your function is passed three arguments:
- topic is the topic published to (e.g. owntracks/jane/phone)
- type is the type of MQTT message. This is the _type in our JSON messages (e.g. location, cmd, transition, ...) or "unknown".
- location is a Lua table (associative array) with all the elements obtained in the JSON message. In the case of type being location, we also add the country code (cc) and the location's address (addr) unless reverse-geo lookups have been disabled in the recorder.
Assume the following small example Lua script in example.lua:
local file
function otr_init()
otr.log("example.lua starting; writing to /tmp/lua.out")
file = io.open("/tmp/lua.out", "a")
file:write("written by OwnTracks Recorder version " .. otr.version .. "\n")
end
function otr_hook(topic, _type, data)
local timestr = otr.strftime("It is %T in the year %Y", 0)
print("L: " .. topic .. " -> " .. _type)
file:write(timestr .. " " .. topic .. " lat=" .. data['lat'] .. data['addr'] .. "\n")
end
function otr_exit()
end
When the recorder is launched with --lua-script example.lua, it invokes otr_init(), which opens a file. Then, for each location received, it calls otr_hook(), which updates the file.
Assuming an OwnTracks device publishes this payload
{"cog":-1,"batt":-1,"lon":2.29513,"acc":5,"vel":-1,"vac":-1,"lat":48.85833,"t":"u","tst":1441984413,"alt":0,"_type":"location","tid":"JJ"}
the file /tmp/lua.out
would contain
written by OwnTracks Recorder version 0.3.0
It is 14:10:01 in the year 2015 owntracks/jane/phone lat=48.858339 Avenue Anatole France, 75007 Paris, France
An optional function you provide is called otr_putrec(u, d, s). If it exists, it is called with the current user in u, the device in d, and the payload (which for OwnTracks apps is JSON but for, e.g., Greenwich devices might not be) in the string s. If your function returns a non-zero value, the recorder will not write the .rec file for this publish.
After running otr_hook(), the recorder attempts to invoke a Lua function for each of the elements in the extended JSON. If, say, your Lua script contains a function called hooklet_lat, it will be invoked every time a lat is received as part of the JSON payload. Similarly with hooklet_addr, hooklet_cc, hooklet_tst, etc. These hooklets are invoked with the same parameters as otr_hook().
You define a hooklet function only if you're interested in expressly triggering on a particular JSON element.
The following environment variables control ocat's behaviour:
- OCAT_FORMAT can be set to the preferred output format. If unset, JSON is used. The --format option overrides this setting.
- OCAT_USERNAME can be set to the preferred username. The --user option overrides this environment variable.
- OCAT_DEVICE can be set to the preferred device name. The --device option overrides this environment variable.
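For example, to avoid repeating --user and --device on every invocation:
$ export OCAT_USERNAME=jjolie
$ export OCAT_DEVICE=phone
$ ocat --last
The last command should then behave like ocat --last --user jjolie --device phone.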
Running the recorder protected by an nginx or Apache server is possible and is the only recommended method if you want to serve data beyond localhost. This snippet shows how to do it, but you would also add authentication to that.
server {
listen 8080;
server_name 192.168.1.130;
location / {
root html;
index index.html index.htm;
}
# Proxy and upgrade Websocket connection
location /otr/ws {
rewrite ^/otr/(.*) /$1 break;
proxy_pass http://127.0.0.1:8084;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /otr/ {
proxy_pass http://127.0.0.1:8084/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Assuming you want to use Apache as a reverse proxy to the recorder, the following may get you started. This will hand URIs which begin with /otr/ to the Recorder.
# Websocket URL endpoint
# a2enmod proxy_wstunnel
ProxyPass /otr/ws ws://127.0.0.1:8083/ws keepalive=on retry=60
ProxyPassReverse /otr/ws ws://127.0.0.1:8083/ws keepalive=on
# Static files
ProxyPass /otr http://127.0.0.1:8083/
ProxyPassReverse /otr http://127.0.0.1:8083/
ocat --load and ocat --dump can be used to load and dump the LMDB database respectively. There is some support for loading/dumping named databases using --load=xx or --dump=xx to specify the name. Use the mdb utilities to actually perform backups of these. load expects key/value strings in pairs, separated by exactly one space. If the value is the string DELETE, the key is deleted from the database, which allows us to, say, remove a whole bunch of geohash prefixes in one go (but be careful doing this):
ocat --dump |
grep xxyz |
awk '{printf "%s DELETE\n", $1; }' |
ocat --load
This named LMDB database is keyed on topic name (owntracks/jane/phone). If the topic of an incoming message is found in the database, the tid member in the JSON payload is replaced by the value of this key.
On Debian or Ubuntu, for example, the build prerequisites can be installed with:
apt-get install build-essential linux-headers-$(uname -r) libcurl4-openssl-dev libmosquitto-dev liblua5.2-dev