OpenTelemetry Tracing

Effect GraphQL provides first-class OpenTelemetry integration for distributed tracing. This allows you to trace GraphQL requests through your entire system, visualize performance bottlenecks, and debug issues in production.

Install the required packages:
npm install @effect-gql/opentelemetry @effect/opentelemetry @opentelemetry/api @opentelemetry/sdk-trace-base
  1. Add tracing to your schema

    import { GraphQLSchemaBuilder, query } from "@effect-gql/core"
    import { withTracing } from "@effect-gql/opentelemetry"
    import * as S from "effect/Schema"

    const builder = GraphQLSchemaBuilder.empty
      .query("users", {
        type: S.Array(UserSchema),
        resolve: () => userService.getAll()
      })
      .pipe(withTracing())
  2. Configure OpenTelemetry

    import { NodeSdk } from "@effect/opentelemetry"
    import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base"
    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"

    const TracingLayer = NodeSdk.layer(() => ({
      resource: { serviceName: "my-graphql-api" },
      spanProcessor: new BatchSpanProcessor(
        new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" })
      )
    }))
  3. Serve with tracing enabled

    import { Layer } from "effect"
    import { serve } from "@effect-gql/node"

    await serve(builder, TracingLayer.pipe(Layer.merge(serviceLayer)), {
      port: 4000
    })

When tracing is enabled, your GraphQL requests produce a span hierarchy like this:

graphql.http (HTTP request span)
├── graphql.parse (parsing phase)
├── graphql.validate (validation phase)
└── graphql.execute
    ├── graphql.resolve Query.users
    ├── graphql.resolve User.posts
    │   ├── graphql.resolve Post.author
    │   └── graphql.resolve Post.comments
    └── graphql.resolve User.followers

withTracing accepts separate option groups for the request-level tracing extension and the per-resolver tracing middleware:

withTracing({
  extension: {
    // Add trace ID to response extensions
    exposeTraceIdInResponse: true,
    // Security: don't include query text in spans
    includeQuery: false,
    // Security: don't include variables in spans
    includeVariables: false,
    // Add custom attributes to all spans
    customAttributes: {
      "service.version": "1.0.0",
      "deployment.environment": "production"
    }
  },
  resolver: {
    // Trace all resolvers (depth 0 = root fields)
    minDepth: 0,
    // Limit deep nesting
    maxDepth: 10,
    // Skip introspection queries
    excludePatterns: [/^Query\.__/],
    // Security: don't include resolver args
    includeArgs: false,
    // Include parent type in span name
    includeParentType: true
  }
})

Extension options:

Option                     Type                                         Default    Description
includeQuery               boolean                                      false      Include GraphQL query in span attributes
includeVariables           boolean                                      false      Include variables in span attributes
exposeTraceIdInResponse    boolean                                      false      Add traceId/spanId to response extensions
customAttributes           Record<string, string | number | boolean>    -          Custom attributes for all spans

Resolver middleware options:

Option               Type               Default    Description
minDepth             number             0          Minimum field depth to trace
maxDepth             number             Infinity   Maximum field depth to trace
excludePatterns      RegExp[]           []         Patterns to exclude (matched against Type.field)
includeArgs          boolean            false      Include resolver arguments in spans
includeParentType    boolean            true       Include parent type in span attributes
traceIntrospection   boolean            false      Trace introspection fields
spanNameGenerator    (info) => string   -          Custom span name function

Parse and validate spans include:

Attribute                           Description
graphql.document.name               Operation name
graphql.document.operation_count    Number of operations
graphql.validation.error_count      Validation error count

Each resolver span includes:

Attribute                Description
graphql.field.name       Field name
graphql.field.path       Full path (e.g., Query.users.0.name)
graphql.field.type       Return type
graphql.parent.type      Parent type name
graphql.operation.name   Operation name
error                    true if resolver failed
error.type               Error class name
error.message            Error message

For more control, add the extension and middleware individually:

import {
  tracingExtension,
  resolverTracingMiddleware
} from "@effect-gql/opentelemetry"

const builder = GraphQLSchemaBuilder.empty
  .extension(tracingExtension({
    exposeTraceIdInResponse: true
  }))
  .middleware(resolverTracingMiddleware({
    minDepth: 1, // Skip root fields
    spanNameGenerator: (info) =>
      `gql.${info.parentType.name}.${info.fieldName}`
  }))
  .query("users", { ... })

Extract trace context from incoming HTTP headers (W3C Trace Context):

import { Effect } from "effect"
import { extractTraceContext, parseTraceParent } from "@effect-gql/opentelemetry"

// Inside an HTTP handler effect
Effect.gen(function* () {
  const traceContext = yield* extractTraceContext
  if (traceContext) {
    console.log(`Continuing trace: ${traceContext.traceId}`)
    // The traced router automatically propagates this context
  }
})

For HTTP-level tracing with context propagation:

import { makeTracedGraphQLRouter } from "@effect-gql/opentelemetry"

const router = makeTracedGraphQLRouter(schema, serviceLayer, {
  path: "/graphql",
  graphiql: { path: "/graphiql" },
  rootSpanName: "graphql.http",
  rootSpanAttributes: {
    "service.name": "my-api"
  },
  propagateContext: true // Extract W3C trace context from headers
})
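
With propagateContext enabled, the router reads the standard W3C traceparent header from incoming requests, so an upstream caller can continue its own trace. A minimal sketch of such a caller follows; the endpoint, query, and the trace/span IDs are illustrative placeholders (a real caller would take them from its own tracer):

// Illustrative upstream caller forwarding W3C trace context.
// traceparent format: version-traceId-spanId-flags (placeholder values below).
const traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"

await fetch("http://localhost:4000/graphql", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    traceparent
  },
  body: JSON.stringify({ query: "{ users { id } }" })
})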

To view traces locally, you can run Jaeger:

  1. Start Jaeger

    docker run -d \
      -p 16686:16686 \
      -p 4318:4318 \
      jaegertracing/all-in-one:latest
  2. Configure OTLP exporter

    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"

    const exporter = new OTLPTraceExporter({
      url: "http://localhost:4318/v1/traces"
    })
  3. Open Jaeger UI

    Navigate to http://localhost:16686 and select your service.

To ship traces to a managed backend such as Grafana Tempo, point the OTLP exporter at its endpoint and supply any required auth headers:

const TracingLayer = NodeSdk.layer(() => ({
  resource: {
    serviceName: "graphql-api",
    serviceVersion: process.env.VERSION
  },
  spanProcessor: new BatchSpanProcessor(
    new OTLPTraceExporter({
      url: process.env.TEMPO_ENDPOINT,
      headers: {
        Authorization: `Bearer ${process.env.TEMPO_TOKEN}`
      }
    })
  )
}))

When exposeTraceIdInResponse is enabled, responses include trace info:

{
  "data": { "users": [...] },
  "extensions": {
    "tracing": {
      "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
      "spanId": "00f067aa0ba902b7"
    }
  }
}

This is useful for:

  • Correlating frontend errors with backend traces
  • Debugging specific requests
  • Support ticket investigation
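
For example, a client can read the trace ID from the response extensions and attach it to its own error reports. A minimal sketch; reportError is a placeholder for whatever error-reporting hook your frontend uses, and the query is illustrative:

// Illustrative client-side correlation of errors with backend traces.
const res = await fetch("/graphql", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query: "{ users { id } }" })
})
const payload = await res.json()
const traceId = payload.extensions?.tracing?.traceId

if (payload.errors) {
  // reportError is a hypothetical error-reporting hook.
  reportError(new Error("GraphQL request failed"), { traceId })
}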

If you enable includeQuery or includeVariables:

  • Ensure your tracing backend has appropriate access controls
  • Consider using span processors to redact sensitive fields
  • Review compliance requirements (GDPR, HIPAA, etc.)
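
One way to apply the redaction mentioned above is a custom span processor that scrubs sensitive attributes before handing spans to the exporter. Below is a minimal sketch; the attribute key names are assumptions (check the attributes your spans actually carry), and the class wraps whatever processor you would otherwise pass to NodeSdk.layer:

import type { Context } from "@opentelemetry/api"
import type { ReadableSpan, Span, SpanProcessor } from "@opentelemetry/sdk-trace-base"

// Assumed attribute keys for query text and variables; adjust to match your spans.
const SENSITIVE_KEYS = ["graphql.document.query", "graphql.document.variables"]

// Scrubs sensitive attributes from every finished span, then delegates to the
// wrapped processor (e.g. a BatchSpanProcessor).
class RedactingSpanProcessor implements SpanProcessor {
  constructor(private readonly inner: SpanProcessor) {}

  onStart(span: Span, parentContext: Context): void {
    this.inner.onStart(span, parentContext)
  }

  onEnd(span: ReadableSpan): void {
    for (const key of SENSITIVE_KEYS) {
      if (key in span.attributes) {
        span.attributes[key] = "[REDACTED]"
      }
    }
    this.inner.onEnd(span)
  }

  shutdown(): Promise<void> {
    return this.inner.shutdown()
  }

  forceFlush(): Promise<void> {
    return this.inner.forceFlush()
  }
}

Use it by wrapping the existing processor, for example spanProcessor: new RedactingSpanProcessor(new BatchSpanProcessor(exporter)).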

The tracing integration is designed for minimal overhead:

  • When disabled: Zero overhead - spans are Effect no-ops without a tracer
  • When enabled: ~0.1-0.5ms per span creation
  • Batching: Use BatchSpanProcessor to avoid blocking on export
  • Sampling: Configure sampling in your tracer for high-traffic services

For example, to sample 10% of traces in production while keeping full tracing in development:

import { AlwaysOnSampler, TraceIdRatioBasedSampler } from "@opentelemetry/sdk-trace-base"

// Sample 10% of traces in production
const sampler = process.env.NODE_ENV === "production"
  ? new TraceIdRatioBasedSampler(0.1)
  : new AlwaysOnSampler()

const TracingLayer = NodeSdk.layer(() => ({
  resource: { serviceName: "my-api" },
  sampler,
  spanProcessor: new BatchSpanProcessor(exporter)
}))
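
If upstream services already make sampling decisions, you can honor them and only ratio-sample traces that originate at this service by using OpenTelemetry's ParentBasedSampler in the same sampler slot:

import { ParentBasedSampler, TraceIdRatioBasedSampler } from "@opentelemetry/sdk-trace-base"

// Follow the parent's sampling decision when one exists; otherwise sample 10%
// of traces that start at this service.
const sampler = new ParentBasedSampler({
  root: new TraceIdRatioBasedSampler(0.1)
})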