feat(trace): implement otlp http backend #7
Conversation
Signed-off-by: GALLLASMILAN <gallas.milan@gmail.com>

Force-pushed from b58793d to 9cb14c6
src/app.ts (Outdated)

```typescript
// Custom parser for "application/x-protobuf"
app.addContentTypeParser(
  'application/x-protobuf',
  { parseAs: 'buffer' },
  traceProtobufBufferParser
);
```
It's weird that `traceProtobufBufferParser` parses anything protobuf.
I need to define the custom contentTypeParser, otherwise I get the following error: `OTLPExporterError: Unsupported Media Type at IncomingMessage.` The `traceProtobufBufferParser` applies the parsing logic only to the `v1/traces` route.
Can you utilize this somehow?

> As with the other APIs, `addContentTypeParser` is encapsulated in the scope in which it is declared. This means that if you declare it in the root scope it will be available everywhere, while if you declare it inside a plugin it will be available only in that scope and its children.
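To make the thread concrete, here is a minimal sketch of what a route-restricted protobuf parser callback could look like. This is an assumption, not the PR's actual code: the `RequestLike` shape, the error message, and the route check are all illustrative. Fastify's parser signature passes the request, the parsed payload (a buffer here, because of `parseAs: 'buffer'`), and a `done` callback.

```typescript
// Hedged sketch of a Fastify-style content type parser that only accepts
// application/x-protobuf bodies on the /v1/traces route (names assumed).
type ParserDone = (err: Error | null, body?: unknown) => void;

interface RequestLike {
  url: string;
}

function traceProtobufBufferParser(
  req: RequestLike,
  payload: Buffer,
  done: ParserDone
): void {
  // Only the OTLP traces route should receive protobuf bodies.
  if (!req.url.startsWith('/v1/traces')) {
    done(new Error('Unsupported Media Type for this route'));
    return;
  }
  // Pass the untouched buffer through; the route handler decodes it later.
  done(null, payload);
}
```

Per the quoted docs, an alternative to the in-function route check would be registering the parser inside a plugin, so it is scoped to that plugin's routes only.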
```typescript
export async function createTrace(traceBody: ExportTraceServiceRequest__Output): Promise<TraceDto> {
  const spans = [...traceBody.resourceSpans].flatMap((resourceSpan) => {
```
I think it would also be good to check the instrumentation scope as a guardrail against unwanted data.
done in f1485c7
Why the filter, wouldn't you rather fail if you get invalid data?
No; in case we send more data scopes to Observe, it should be able to filter out only the right one.
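The filtering discussed above could be sketched as follows. This is an assumption, not the PR's code: the type shapes are simplified from the OTLP trace model, and the expected scope name `bee-agent-framework` is a guess based on the project name.

```typescript
// Hedged sketch: keep only spans emitted by the framework's instrumentation
// scope, so other scopes sent to the same endpoint are ignored, not errors.
interface ScopeSpans {
  scope?: { name?: string };
  spans: { name: string }[];
}

interface ResourceSpan {
  scopeSpans: ScopeSpans[];
}

const EXPECTED_SCOPE = 'bee-agent-framework'; // assumed scope name

function filterFrameworkSpans(resourceSpans: ResourceSpan[]): { name: string }[] {
  return resourceSpans.flatMap((rs) =>
    rs.scopeSpans
      .filter((ss) => ss.scope?.name === EXPECTED_SCOPE)
      .flatMap((ss) => ss.spans)
  );
}
```

The design choice here matches the reply above: spans from unexpected scopes are silently dropped rather than failing the whole export request.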
Signed-off-by: GALLLASMILAN <gallas.milan@gmail.com>
Overall structure looks good.
```yaml
jobs:
  main:
    timeout-minutes: 20
    name: Lint & Build & Test
```
nitpick: I don't see any test :D
```diff
@@ -6,7 +6,8 @@
   "description": "Observability API server for bee-agent-framework",
   "type": "module",
   "scripts": {
-    "build": "rm -rf dist && tsc",
+    "build": "rm -rf dist && tsc && cp -R src/protos dist/protos",
+    "proto:generate": "proto-loader-gen-types --defaults --oneofs --longs=Number --enums=String --grpcLib=@grpc/grpc-js --outDir=./src/types/generated ./src/protos/*.proto && node scripts/add_js_extensions.js",
```
This is hacky; we already have a proper setup for generating proto types in the framework using tsup, please copy it from there:
https://github.com/i-am-bee/bee-agent-framework/blob/main/scripts/ibm_vllm_generate_protos
```javascript
const directory = path.resolve(__dirname, '../src/types/generated');

async function addJsExtensions(dir) {
```
not necessary with a proper tsup setup (see framework)
```diff
@@ -0,0 +1,81 @@
+// Copyright 2019, OpenTelemetry Authors
```
Why not use the http version, @opentelemetry/exporter-trace-otlp-http? We're getting rid of protobufs in as many places as possible.
Observe uses the HTTP proto right now. I generate the types from the proto files provided by the OpenTelemetry framework. We will use the same proto files for the gRPC server in the future.

The HTTP proto with the default port 4318 is only one part of the standard OTLP server. The second part is the gRPC server on port 4317, which should be implemented later.
The second thing is that I need to parse the protobuf into JSON.
Agreed offline that protofile type generation will be addressed later in a separate PR
New OTLP backend

I implemented a custom OTLP backend that supports the HTTP-proto OpenTelemetry format and saves the traces to Observe (MongoDB) and MLflow.

- See the docs/using-with-opentelemetry.md file for more info about integration with the OpenTelemetry stack.
- The new routes are under the `v1` prefix.
- I added a new system route with the base data about versions and used environment variables. It's hidden in the Swagger.
- I use the bee-agent-framework directly in the integration tests.
- TypeScript files are automatically generated from the proto files copied from https://github.com/open-telemetry/opentelemetry-proto/tree/main/opentelemetry/proto
- The `traces/:id` and `spans` routes return the trace by the `frameworkTraceId` provided from the framework context. It is the only ID that we are able to save in both places (Observe, bee-api) and then use to load the trace.
- We cannot process the POST `v1/traces` response right now; the trace will be sent via https://www.npmjs.com/package/@opentelemetry/sdk-node and the https://opentelemetry.io/docs/collector/

Release instructions

- Port `4318`.
- Add the `otlphttp` exporter to the `exporters` array in the `service.pipelines.traces/dev` section.
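As a rough illustration of the release steps, an OpenTelemetry Collector config with an `otlphttp` exporter wired into a `traces/dev` pipeline could look like the sketch below. The receiver block and the `http://observe:4318` endpoint hostname are assumptions for illustration; only the exporter name, the port, and the pipeline section name come from the instructions above.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # default OTLP HTTP port

exporters:
  otlphttp:
    endpoint: http://observe:4318   # assumed Observe backend address

service:
  pipelines:
    traces/dev:
      receivers: [otlp]
      exporters: [otlphttp]
```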