Releases: ubirch/ubirch-client-go
v2.0.2
v2.0.1
ubirch cloud client [WIP]
This release contains breaking changes!
The client is now capable of running in a cluster.
We temporarily dropped support for file-based context management. To migrate an existing file-based context into a new database, run one instance of the client once with the `--migrate` flag. It will exit with exit code 0 if the migration was successful.
For the migration, the legacy 16 byte `secret` must be present in the configuration for decryption of the existing keys. Additionally, the new 32 byte `secret32` is needed for AES-256 encryption.
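A minimal sketch of such a migration run, assuming the client binary is invoked as `./ubirch-client` (the binary name and invocation are assumptions; only the `--migrate` flag and the exit code behaviour are stated above):

```sh
# Run a single instance once with the --migrate flag to move the existing
# file based context into the configured database.
# (binary name is an assumption; adjust it to your deployment)
./ubirch-client --migrate

# An exit code of 0 indicates that the migration was successful.
echo "migration exit code: $?"
```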
Changed:
- context management through a Postgres database. The DSN needs to be set through the configuration (see the configuration sketch after this list):
  `"DSN": { "user":"<user-name>", "password":"<password>", "host":"<host-name>", "database": "<database-name>" }`
  The port is fixed to `5432`; the resulting full data source name is `postgres://<user-name>:<password>@<host-name>:5432/<database-name>`.
- AES-256 private key encryption (requires the 32 byte secret `secret32` to be set through the configuration)
- identities are no longer loaded from the configuration on startup by default. To explicitly initialize identities from the configuration, start the client with the `--init-identities-conf` flag
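For illustration, a configuration fragment containing the values mentioned above might look roughly like this; the key names `DSN`, `secret` and `secret32` are taken from these notes, while the surrounding structure and the placeholder values are assumptions:

```json
{
  "DSN": {
    "user": "<user-name>",
    "password": "<password>",
    "host": "<host-name>",
    "database": "<database-name>"
  },
  "secret": "<legacy 16 byte secret, needed to decrypt existing keys during migration>",
  "secret32": "<new 32 byte secret, used for AES-256 key encryption>"
}
```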
Added:
- `/register` endpoint for identity registration. The new `registerAuth` must be set through the configuration and sent in the `X-Auth-Token` header with a registration request:

  ```sh
  curl localhost:8080/register \
    -X PUT \
    -H "X-Auth-Token: <registerAuth>" \
    -H "Content-Type: application/json" \
    -d '{"uuid":"<UUID>","password":"<the auth token from the ubirch thing api>"}' \
    -i
  ```
Removed:
- temporarily dropped support for file-based context management
- temporarily dropped support for injecting private keys through the configuration
v1.2.2
v1.2.1
Changed:
- [Quickfix] send CSRs in goroutines to speed up boot time
Removed:
- removed validity check for ubirch backend env configuration in order to support alternate environments
Fixed:
- fixed incorrect calculation of "requests per second overall" in loadtest
v1.2.0
Changed:
- concurrent handling of chaining requests between multiple identities
  Chaining is still handled synchronously for each identity, i.e. the concurrency only increases the throughput of requests from multiple identities. The chaining throughput for an individual identity is still about 3 rps.
- reduced the default request buffer size to 20 in order to keep the expected maximum waiting time for accepted requests under 10 seconds (calculated with a worst-case throughput of 2 rps)
Added:
- load test:
- test concurrent requests from multiple devices
- test overall throughput
- count failed requests and print summary
v1.1.10
Added:
- COSE service for the creation of ECDSA signed `COSE_Sign1` objects (see the request sketch after this list)
  - endpoints for JSON or CBOR encoded original data
  - endpoints for SHA256 hashes of a CBOR encoded signature structure (`Sig_structure`) for a COSE Single Signer Data Object (`COSE_Sign1`)
- support for hash anchoring without chaining (signed msgpack UPPs)
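Purely as an illustration of the request shape for the original-data endpoints: the path, method and header below are assumptions and not confirmed by these notes; only the existence of JSON/CBOR original-data and hash endpoints is stated above.

```sh
# Hypothetical request against a CBOR original-data endpoint
# (endpoint path and auth header are assumptions, not taken from these notes)
curl localhost:8080/<UUID>/cbor \
  -X POST \
  -H "X-Auth-Token: <the auth token for this identity>" \
  -H "Content-Type: application/cbor" \
  --data-binary @original-data.cbor \
  -i
```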
v1.1.9
Changed:
- reduced response timeout: the client will respond to HTTP requests within 20 seconds
- reduced shutdown timeout: the client will gracefully shut down within 30 seconds after receiving `SIGTERM`
- reduced request buffer size: with a request buffer size of 30 and an expected throughput of 3 rps, we can expect accepted requests to be processed after at most 10 seconds, or to be rejected immediately
v1.1.8
Changed:
- split up the protocol context into a key store and a signature store (resulting layout sketched after this list)
  - encrypted keys are stored in a `keys.json` file
  - signatures are stored in individual `<uuid>.bin` files in a `signatures` subfolder
  - backwards compatible: if present, the legacy `protocol.json` file will be transferred into the new structure at startup and then removed
- Dockerfile: upgrade to go version 1.16
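Roughly, the on-disk layout described above ends up looking like this (file and folder names as stated above; the base directory depends on the client configuration):

```
<protocol context directory>
├── keys.json           # encrypted keys
└── signatures/
    ├── <uuid-1>.bin    # one signature file per identity
    └── <uuid-2>.bin
```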
Added:
- load test checks the chain
- request buffer size is configurable via the `RequestBufferSize` attribute in `config.json` (see the example below)
- extended logging to include the response content if the response could not be sent
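For example, a `config.json` fragment setting the buffer size could look like this; the attribute name is taken from the note above, and the value 30 is just an example (it is the size mentioned in the v1.1.9 notes):

```json
{
  "RequestBufferSize": 30
}
```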
Removed:
- support for managing protocol context in DB has temporarily been removed (will be fixed soon)
Fixed:
- use a non-blocking function for sending error messages to the response channel
- changed the log level for non-critical errors to `warn`
v1.1.7
Changed:
- optimized the request buffer size based on load test results
  We currently see an average throughput of just over 3 rps. If a request times out after 55 seconds, the buffer should have a capacity of 150.
Fixed:
- do not block when trying to send a response if the receiver is already gone
- check whether the request context has already been canceled before chaining
Added:
- basic load test