
TypeScript Patterns for AI Applications

Type-safe patterns for building robust AI-powered features with TypeScript, covering API wrappers, streaming, and error handling

TypeScript · Development · AI · Best Practices


TypeScript's type system is a superpower for AI applications. When you're dealing with probabilistic outputs, external API dependencies, and complex data transformations, types catch errors that would otherwise surface in production. Here are patterns I use daily.

Typed LLM Responses with Zod

Never trust raw LLM output. Parse and validate everything:

import { z } from "zod";
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";

// Define your expected structure
const ExtractedEntitySchema = z.object({
  entities: z.array(z.object({
    name: z.string(),
    type: z.enum(["person", "organization", "location", "concept"]),
    confidence: z.number().min(0).max(1)
  })),
  relationships: z.array(z.object({
    source: z.string(),
    target: z.string(),
    relation: z.string()
  }))
});

type ExtractedEntities = z.infer<typeof ExtractedEntitySchema>;

async function extractEntities(text: string): Promise<ExtractedEntities> {
  const { object } = await generateObject({
    model: openai("gpt-4o"),
    schema: ExtractedEntitySchema,
    prompt: `Extract entities and relationships from: ${text}`
  });
  
  return object; // Fully typed!
}

The schema serves double duty: runtime validation and compile-time types.
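If you'd rather not pull in a schema library, the same validate-then-type idea can be approximated with a hand-rolled type guard. A minimal sketch (`Entity`, `isEntity`, and `parseEntities` are illustrative names, not part of any SDK):

```typescript
interface Entity {
  name: string;
  type: "person" | "organization" | "location" | "concept";
  confidence: number;
}

// Type guard: narrows unknown JSON to Entity at runtime and compile time.
function isEntity(value: unknown): value is Entity {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === "string" &&
    ["person", "organization", "location", "concept"].includes(
      v.type as string
    ) &&
    typeof v.confidence === "number" &&
    v.confidence >= 0 &&
    v.confidence <= 1
  );
}

// Parse raw LLM output; throw if it doesn't match the expected shape.
function parseEntities(raw: string): Entity[] {
  const parsed: unknown = JSON.parse(raw);
  if (!Array.isArray(parsed) || !parsed.every(isEntity)) {
    throw new Error("LLM output failed validation");
  }
  return parsed;
}
```

This is more verbose than Zod and easy to let drift out of sync with the type, which is exactly why a schema library earns its keep, but it shows the principle with zero dependencies.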

Discriminated Unions for AI Results

AI operations have multiple failure modes. Model them explicitly:

type AIResult<T> = 
  | { status: "success"; data: T; tokensUsed: number }
  | { status: "rate_limited"; retryAfterMs: number }
  | { status: "content_filtered"; reason: string }
  | { status: "timeout"; elapsedMs: number }
  | { status: "error"; error: Error };

async function classifyText(text: string): Promise<AIResult<Classification>> {
  try {
    const response = await withTimeout(
      llm.classify(text),
      5000
    );
    
    return {
      status: "success",
      data: response.classification,
      tokensUsed: response.usage.total_tokens
    };
  } catch (e) {
    if (e instanceof RateLimitError) {
      return { status: "rate_limited", retryAfterMs: e.retryAfter };
    }
    if (e instanceof ContentFilterError) {
      return { status: "content_filtered", reason: e.reason };
    }
    if (e instanceof TimeoutError) {
      return { status: "timeout", elapsedMs: e.elapsed };
    }
    return { status: "error", error: e as Error };
  }
}

// Exhaustive handling - TypeScript ensures you handle all cases
function handleResult<T>(result: AIResult<T>): void {
  switch (result.status) {
    case "success":
      console.log("Got data:", result.data);
      break;
    case "rate_limited":
      scheduleRetry(result.retryAfterMs);
      break;
    case "content_filtered":
      logModeration(result.reason);
      break;
    case "timeout":
      metrics.recordTimeout(result.elapsedMs);
      break;
    case "error":
      reportError(result.error);
      break;
  }
}

Typed Streaming with AsyncIterables

Streaming responses need careful typing:

// A discriminated union, not optional fields: each variant
// carries exactly the data that makes sense for it.
type StreamChunk =
  | { type: "text"; content: string }
  | {
      type: "tool_call";
      toolCall: {
        name: string;
        arguments: Record<string, unknown>;
      };
    }
  | { type: "done" };

async function* streamChat(
  messages: Message[]
): AsyncIterable<StreamChunk> {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    stream: true
  });
  
  // Tool call arguments arrive as partial JSON fragments across
  // chunks, so accumulate them and parse only once the call completes.
  let toolName = "";
  let toolArgs = "";
  
  for await (const chunk of stream) {
    const choice = chunk.choices[0];
    const delta = choice?.delta;
    
    if (delta?.content) {
      yield { type: "text", content: delta.content };
    }
    
    const call = delta?.tool_calls?.[0];
    if (call) {
      toolName ||= call.function?.name ?? "";
      toolArgs += call.function?.arguments ?? "";
    }
    
    if (choice?.finish_reason === "tool_calls") {
      yield {
        type: "tool_call",
        toolCall: { name: toolName, arguments: JSON.parse(toolArgs) }
      };
    }
  }
  
  yield { type: "done" };
}
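On the consuming side, the discriminated `type` field makes folding a stream back into plain text straightforward. A minimal sketch with a mock stream (`collectText` and `mockStream` are illustrative helpers, not part of the OpenAI SDK):

```typescript
// (Same StreamChunk union as above, repeated so the sketch stands alone.)
type StreamChunk =
  | { type: "text"; content: string }
  | {
      type: "tool_call";
      toolCall: { name: string; arguments: Record<string, unknown> };
    }
  | { type: "done" };

// Accumulate only the text chunks; TypeScript narrows `chunk`
// inside the branch, so `chunk.content` is a plain string.
async function collectText(
  stream: AsyncIterable<StreamChunk>
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    if (chunk.type === "text") text += chunk.content;
  }
  return text;
}

// A mock stream for testing consumers without hitting the API.
async function* mockStream(): AsyncIterable<StreamChunk> {
  yield { type: "text", content: "Hello, " };
  yield { type: "text", content: "stream" };
  yield { type: "done" };
}
```

The same pattern works for any consumer: React hooks appending to state, a terminal renderer, or a test harness feeding in canned chunks.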

Generic Retry with Backoff

A reusable pattern for flaky AI APIs:

interface RetryConfig {
  maxAttempts: number;
  baseDelayMs: number;
  maxDelayMs: number;
}

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  config: RetryConfig,
  shouldRetry: (error: unknown) => boolean = () => true
): Promise<T> {
  let lastError: unknown;
  
  for (let attempt = 0; attempt < config.maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      
      if (!shouldRetry(e) || attempt === config.maxAttempts - 1) {
        throw e;
      }
      
      const delay = Math.min(
        config.baseDelayMs * Math.pow(2, attempt),
        config.maxDelayMs
      );
      
      await sleep(delay + Math.random() * 100); // Jitter
    }
  }
  
  throw lastError;
}

// Usage
const result = await withRetry(
  () => extractEntities(longDocument),
  { maxAttempts: 3, baseDelayMs: 1000, maxDelayMs: 10000 },
  (e) => e instanceof RateLimitError || e instanceof TimeoutError
);
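For intuition about what that config produces: before jitter, the delay for each attempt is `min(baseDelayMs * 2^attempt, maxDelayMs)`. A quick sketch (`backoffDelays` is a hypothetical helper mirroring the formula inside `withRetry`):

```typescript
// (Same RetryConfig as above, repeated so the sketch stands alone.)
interface RetryConfig {
  maxAttempts: number;
  baseDelayMs: number;
  maxDelayMs: number;
}

// The pre-jitter delay schedule: exponential growth capped at maxDelayMs.
function backoffDelays(config: RetryConfig): number[] {
  return Array.from({ length: config.maxAttempts }, (_, attempt) =>
    Math.min(config.baseDelayMs * Math.pow(2, attempt), config.maxDelayMs)
  );
}

// backoffDelays({ maxAttempts: 5, baseDelayMs: 1000, maxDelayMs: 10000 })
// → [1000, 2000, 4000, 8000, 10000]
```

The cap matters: without it, a fifth attempt would wait 16 seconds, which is usually longer than a caller is willing to block.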

These patterns share a philosophy: make invalid states unrepresentable, make error handling explicit, and let the type system guide correct usage. When AI behavior is unpredictable, your code shouldn't be.
