DataLoader Integration

Effect GraphQL includes built-in DataLoader support to solve the N+1 query problem common in GraphQL APIs. DataLoaders batch multiple individual requests into single bulk queries and cache results within a request.

Consider this GraphQL query:

query {
  posts {        # 1 query to fetch posts
    author {     # N queries to fetch each author
      name
    }
  }
}

Without DataLoader, if you have 100 posts, this executes 101 database queries. With DataLoader, it’s just 2 queries: one for posts, one for all unique authors.
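
To see where the extra queries come from, here is a hypothetical naive version of the author field that issues one lookup per post (db.getUserById is assumed for illustration and is not part of the example API):

    // N+1 version: runs once per Post in the result set
    field("Post", "author", {
      type: UserSchema,
      resolve: (post) =>
        Effect.gen(function* () {
          const db = yield* Database
          // One database query per author => N extra queries for N posts
          return yield* db.getUserById(post.authorId) // assumed helper, for illustration only
        })
    })

The quick start below swaps this per-row lookup for a batched loader.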

  1. Define your loaders

     import { Loader } from "@effect-gql/core"

     const loaders = Loader.define({
       UserById: Loader.single<string, User>({
         batch: (ids) => db.getUsersByIds(ids),
         key: (user) => user.id
       })
     })

  2. Add the loader layer

     const serviceLayer = Layer.mergeAll(
       DatabaseLive,
       loaders.toLayer()
     )

  3. Use in resolvers

     field("Post", "author", {
       type: UserSchema,
       resolve: (post) => loaders.load("UserById", post.authorId)
     })

A single loader (Loader.single) maps one key to one value. Use it for to-one relationships like “post belongs to author”:

const loaders = Loader.define({
  UserById: Loader.single<string, User>({
    // Batch function receives all requested keys
    batch: (ids) =>
      Effect.gen(function* () {
        const db = yield* Database
        return yield* db.getUsersByIds(ids)
      }),
    // Key function extracts the ID from each returned value
    key: (user) => user.id
  })
})

Type Parameters:

  • K - The key type (typically string or number)
  • V - The value type (the entity being loaded)
  • R - Service requirements (inferred from the batch function)

How it works:

  1. Collect all load("UserById", id) calls in the current tick
  2. Call batch([id1, id2, id3, ...]) once
  3. Match returned values to keys using the key function
  4. Return individual values to each caller
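
For example, when several resolvers request users in the same tick of a request, the calls collapse into one batch (a sketch; the key values here are illustrative):

// Three concurrent load calls in the same request...
const users = yield* Effect.all(
  [
    loaders.load("UserById", "u1"),
    loaders.load("UserById", "u2"),
    loaders.load("UserById", "u1") // duplicate key, served from the per-request cache
  ],
  { concurrency: "unbounded" }
)
// ...trigger a single batch call with the unique keys, roughly:
//   batch(["u1", "u2"])
// Each caller then receives the User whose key matches its requested id.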

A grouped loader (Loader.grouped) maps one key to many values. Use it for one-to-many relationships like “author has many posts”:

const loaders = Loader.define({
  PostsByAuthorId: Loader.grouped<string, Post>({
    // Batch function returns all posts for the requested author IDs
    batch: (authorIds) =>
      Effect.gen(function* () {
        const db = yield* Database
        return yield* db.getPostsByAuthorIds(authorIds)
      }),
    // GroupBy function determines which key each value belongs to
    groupBy: (post) => post.authorId
  })
})

How it works:

  1. Collect all load("PostsByAuthorId", authorId) calls
  2. Call batch([authorId1, authorId2, ...]) once
  3. Group returned values by groupBy function
  4. Return arrays to each caller

A complete example that combines both loaders into a schema:

import { GraphQLSchemaBuilder, objectType, query, field, Loader } from "@effect-gql/core"
import { Effect, Context, Layer } from "effect"
import * as S from "effect/Schema"

// Schemas
const UserSchema = S.Struct({
  id: S.String,
  name: S.String,
  email: S.String
})

const PostSchema = S.Struct({
  id: S.String,
  title: S.String,
  content: S.String,
  authorId: S.String
})

// Derive TypeScript types from the schemas
type User = S.Schema.Type<typeof UserSchema>
type Post = S.Schema.Type<typeof PostSchema>

// Database service
class Database extends Context.Tag("Database")<Database, {
  getAllPosts: () => Effect.Effect<Post[]>
  getUsersByIds: (ids: readonly string[]) => Effect.Effect<User[]>
  getPostsByAuthorIds: (authorIds: readonly string[]) => Effect.Effect<Post[]>
}>() {}

// Define all loaders
const loaders = Loader.define({
  UserById: Loader.single<string, User>({
    batch: (ids) =>
      Effect.gen(function* () {
        const db = yield* Database
        return yield* db.getUsersByIds(ids)
      }),
    key: (user) => user.id
  }),
  PostsByAuthorId: Loader.grouped<string, Post>({
    batch: (authorIds) =>
      Effect.gen(function* () {
        const db = yield* Database
        return yield* db.getPostsByAuthorIds(authorIds)
      }),
    groupBy: (post) => post.authorId
  })
})

// Build schema
const builder = GraphQLSchemaBuilder.empty.pipe(
  // Object types
  objectType({ name: "User", schema: UserSchema }),
  objectType({ name: "Post", schema: PostSchema }),
  // Query
  query("posts", {
    type: S.Array(PostSchema),
    resolve: () =>
      Effect.gen(function* () {
        const db = yield* Database
        return yield* db.getAllPosts()
      })
  }),
  // Relational fields
  field("Post", "author", {
    type: UserSchema,
    resolve: (post) => loaders.load("UserById", post.authorId)
  }),
  field("User", "posts", {
    type: S.Array(PostSchema),
    resolve: (user) => loaders.load("PostsByAuthorId", user.id)
  })
)

// Create service layer with loaders
// (DatabaseLive is a Layer providing the Database service, not shown here)
const serviceLayer = Layer.mergeAll(
  DatabaseLive,
  loaders.toLayer()
)

Loader.define creates a named registry of loaders:

const loaders = Loader.define({
  // Each key becomes a loader name
  UserById: Loader.single({ ... }),
  PostsByAuthorId: Loader.grouped({ ... })
})

toLayer creates a Layer that provides fresh DataLoader instances. It should be constructed once per request:

const serviceLayer = loaders.toLayer()

loaders.load loads a single value for a key and returns an Effect:

// For single loaders: returns the value
const user = yield* loaders.load("UserById", "123")
// For grouped loaders: returns an array
const posts = yield* loaders.load("PostsByAuthorId", "123")

loaders.loadMany loads multiple values in a single batch:

const users = yield* loaders.loadMany("UserById", ["1", "2", "3"])
// Returns: [User, User, User]

loaders.use gives direct access to the underlying DataLoader instances:

const result = yield* loaders.use(async (instances) => {
  const user = await instances.UserById.load("123")
  return user
})

DataLoaders should be scoped to each request to ensure:

  1. Cache isolation: One user’s request doesn’t leak data to another
  2. Fresh data: Each request starts with empty cache
  3. Batching window: Batches only collect calls within a single request

Effect GraphQL handles this automatically when you use toLayer():

// Each request gets fresh loader instances
const serviceLayer = Layer.mergeAll(
  DatabaseLive,
  loaders.toLayer() // Fresh instances per request
)

Loader.mapByKey maps items to match the requested keys (useful in batch functions):

batch: (ids) =>
  Effect.gen(function* () {
    const db = yield* Database
    const users = yield* db.getUsersByIds(ids)
    // Ensure order matches the requested ids
    return Loader.mapByKey(ids, users, (user) => user.id)
  })

Loader.groupByKey groups items by key (useful for debugging or custom logic):

const grouped = Loader.groupByKey(
  ["alice", "bob"],
  posts,
  (post) => post.authorId
)
// Map { "alice" => [Post, Post], "bob" => [Post] }

Keep all loaders in one file for easy discovery:

// loaders.ts
export const loaders = Loader.define({
  UserById: Loader.single({ ... }),
  PostById: Loader.single({ ... }),
  CommentsByPostId: Loader.grouped({ ... }),
  // ... all loaders
})
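
The rest of the application can then import that single registry; the file path below is illustrative:

// schema.ts (illustrative)
import { loaders } from "./loaders"

field("Post", "author", {
  type: UserSchema,
  resolve: (post) => loaders.load("UserById", post.authorId)
})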

Define key types explicitly:

type UserId = string & { readonly _brand: "UserId" }

const loaders = Loader.define({
  UserById: Loader.single<UserId, User>({
    batch: (ids) => ...,
    key: (user) => user.id as UserId
  })
})
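
Assuming load is typed by each loader's key type, branding turns mixed-up IDs into compile-time errors instead of silent cache misses (PostId below is illustrative):

type PostId = string & { readonly _brand: "PostId" }

declare const userId: UserId
declare const postId: PostId

loaders.load("UserById", userId)    // OK: matches the UserId key type
// loaders.load("UserById", postId) // would not compile: PostId is not a UserId
// loaders.load("UserById", "123")  // would not compile: plain string is not a UserId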

Single loaders fail with an error when a key has no matching value. Handle missing entities gracefully:

field("Post", "author", {
  type: S.NullOr(UserSchema),
  resolve: (post) =>
    loaders.load("UserById", post.authorId).pipe(
      // Fall back to null when no user matches the key
      Effect.catchAll(() => Effect.succeed(null))
    )
})

Loaders can depend on other services:

const loaders = Loader.define({
  UserById: Loader.single<string, User>({
    batch: (ids) =>
      Effect.gen(function* () {
        const db = yield* Database
        const cache = yield* CacheService
        // Check cache first
        const cached = yield* cache.getMany(ids)
        const missing = ids.filter((id) => !cached.has(id))
        if (missing.length > 0) {
          const users = yield* db.getUsersByIds(missing)
          yield* cache.setMany(users.map((u) => [u.id, u]))
          return [...cached.values(), ...users]
        }
        return [...cached.values()]
      }),
    key: (user) => user.id
  })
})

Wrap the batch function to see what’s being batched:

const loaders = Loader.define({
  UserById: Loader.single<string, User>({
    batch: (ids) =>
      Effect.gen(function* () {
        console.log(`Loading users: ${ids.join(", ")}`)
        const db = yield* Database
        const users = yield* db.getUsersByIds(ids)
        console.log(`Loaded ${users.length} users`)
        return users
      }),
    key: (user) => user.id
  })
})

The underlying DataLoader instances don't expose statistics directly, but loaders.use gives you access to them so you can add your own instrumentation:

loaders.use(async (instances) => {
  const loader = instances.UserById
  // DataLoader doesn't expose stats directly,
  // but you can wrap load calls to track hits/misses
})
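
A minimal sketch of that wrapping approach, assuming you keep your own counters around loaders.load and the batch function (the stats object and loadUser helper are illustrative, not part of the API):

// Illustrative counters (module-level for brevity; scope them per request in practice)
const stats = { loads: 0, batches: 0, batchedKeys: 0 }

const loaders = Loader.define({
  UserById: Loader.single<string, User>({
    batch: (ids) =>
      Effect.gen(function* () {
        // Only cache misses reach the batch function
        stats.batches++
        stats.batchedKeys += ids.length
        const db = yield* Database
        return yield* db.getUsersByIds(ids)
      }),
    key: (user) => user.id
  })
})

// Wrap load calls to count every request, hit or miss
const loadUser = (id: string) =>
  Effect.suspend(() => {
    stats.loads++
    return loaders.load("UserById", id)
  })

// Approximate cache hits within a request:
//   hits ≈ stats.loads - stats.batchedKeys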