Definition

LangChain is a framework for developing applications powered by large language models (LLMs). It provides a standard interface for chains, many integrations with other tools, and end-to-end chains for common applications.

In 2026, LangChain is primarily used through LCEL (LangChain Expression Language), a declarative way to compose chains that supports streaming, async, and parallel execution out of the box.


The Core Model: LCEL

LCEL treats components as "Runnables" that can be composed into pipelines. In Python this uses the | (pipe) operator; in JavaScript/TypeScript, the equivalent is the .pipe() method.

const chain = prompt.pipe(model).pipe(outputParser);

Key Runnable Properties

  1. Invoke: Calls the chain on a single input (await chain.invoke(input)).
  2. Stream: Returns an async iterator of output chunks (for await (const chunk of await chain.stream(input))).
  3. Batch: Runs multiple inputs concurrently (chain.batch([input1, input2])).
  4. Transform: Accepts a stream of inputs and produces a stream of outputs, so streaming can flow through intermediate steps.
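The Runnable contract can be sketched in plain TypeScript. This is an illustration of the interface shape only, not LangChain's actual implementation (the real Runnable class lives in @langchain/core and does much more):

```typescript
// Minimal sketch of the Runnable contract (illustrative only).
class SimpleRunnable<In, Out> {
  constructor(private fn: (input: In) => Promise<Out>) {}

  // 1. Invoke: one input, one output.
  invoke(input: In): Promise<Out> {
    return this.fn(input);
  }

  // 2. Stream: yield output incrementally (here, a single chunk).
  async *stream(input: In): AsyncGenerator<Out> {
    yield await this.fn(input);
  }

  // 3. Batch: run multiple inputs concurrently.
  batch(inputs: In[]): Promise<Out[]> {
    return Promise.all(inputs.map((i) => this.fn(i)));
  }

  // Composition: the output of this Runnable feeds the next one.
  pipe<Next>(next: SimpleRunnable<Out, Next>): SimpleRunnable<In, Next> {
    return new SimpleRunnable((input: In) =>
      this.invoke(input).then((out) => next.invoke(out))
    );
  }
}

// Usage: a two-step "chain".
const double = new SimpleRunnable(async (n: number) => n * 2);
const label = new SimpleRunnable(async (n: number) => `result: ${n}`);
const chain = double.pipe(label);
// chain.invoke(21) resolves to "result: 42"
```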

Applied Tutorial: Simple Structured Extractor

This project builds a service that converts unstructured feedback into a validated JSON schema. This is a foundational "AI bridge" pattern: taking messy human input and turning it into something a deterministic system can use.

1. Environment Setup

npm install langchain @langchain/core @langchain/openai zod
export OPENAI_API_KEY=sk-...

2. Define the Schema

Use zod to define the contract. This ensures the LLM output is type-safe.

import { z } from "zod";

const FeedbackSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  summary: z.string().describe("A 10-word summary of the feedback"),
  issues: z.array(z.string()).describe("List of specific technical issues mentioned"),
  requires_followup: z.boolean()
});

type Feedback = z.infer<typeof FeedbackSchema>;
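For readers unfamiliar with z.infer, the schema above corresponds to the plain TypeScript type below, and the same contract can be checked at runtime with an ordinary type guard. This hand-rolled guard is only a sketch of what zod's safeParse does for you, with far better error reporting:

```typescript
// The shape z.infer<typeof FeedbackSchema> produces.
type FeedbackShape = {
  sentiment: "positive" | "negative" | "neutral";
  summary: string;
  issues: string[];
  requires_followup: boolean;
};

// Hand-rolled runtime check, equivalent in spirit to FeedbackSchema.safeParse().
function isFeedback(value: unknown): value is FeedbackShape {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ["positive", "negative", "neutral"].includes(v.sentiment as string) &&
    typeof v.summary === "string" &&
    Array.isArray(v.issues) &&
    v.issues.every((i) => typeof i === "string") &&
    typeof v.requires_followup === "boolean"
  );
}
```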

3. Initialize the Model

Bind the schema to the model to force structured output.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0
}).withStructuredOutput(FeedbackSchema);

4. Create the Prompt Template

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a professional feedback analyst. Extract structured data from the provided text."],
  ["user", "{text}"]
]);

5. Compose and Execute

const chain = prompt.pipe(model);

const result = await chain.invoke({
  text: "The login page keeps timing out on Chrome, but the dashboard looks great once I get in."
});

console.log(result);
/*
{
  sentiment: 'neutral',
  summary: 'Login timeout on Chrome; dashboard visual quality is high.',
  issues: ['Login timeout on Chrome'],
  requires_followup: true
}
*/

Why use LangChain for this?

While you can call the OpenAI API directly, LangChain provides:

  1. Protocol Uniformity: Switching to Anthropic or Gemini requires changing only one line of code.
  2. Observability: One environment variable (LANGSMITH_TRACING=true) gives you full traces of the prompt, model latency, and output.
  3. Composability: Adding a retrieval step (RAG) or a deterministic validation step is just another .pipe() call in the chain.
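To make the composability claim concrete, here is a deterministic post-processing step expressed as a plain function. In real LangChain code you would wrap it with RunnableLambda.from() (from @langchain/core/runnables) and append it to the chain; the business rule here is hypothetical:

```typescript
// Example Feedback shape, matching the tutorial's schema.
type Feedback = {
  sentiment: "positive" | "negative" | "neutral";
  summary: string;
  issues: string[];
  requires_followup: boolean;
};

// A deterministic validation step. Wrapped in RunnableLambda.from(enforceFollowupPolicy),
// it becomes one more step appended to the chain via .pipe().
function enforceFollowupPolicy(f: Feedback): Feedback {
  // Hypothetical business rule: negative feedback with concrete issues
  // must always be flagged for follow-up, regardless of what the model said.
  const mustFollowUp = f.sentiment === "negative" && f.issues.length > 0;
  return { ...f, requires_followup: f.requires_followup || mustFollowUp };
}
```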

Production Heuristics

1. Prefer LCEL over "Legacy" Chains

Avoid LLMChain or SequentialChain. They are harder to debug and lack the advanced streaming support of LCEL.

2. Bind Schemas Early

Use .withStructuredOutput() rather than parsing JSON strings manually. Models are significantly more reliable when the schema is part of the API call (Function Calling / Tool Use).
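For contrast, the alternative — asking the model for JSON in the prompt and parsing the string yourself — looks like this, and every step of it can fail silently. This is a sketch of the anti-pattern, not recommended code:

```typescript
// The manual-parsing anti-pattern that .withStructuredOutput() replaces.
function parseModelJson(raw: string): unknown {
  // Models often wrap JSON in markdown fences; strip them defensively.
  const cleaned = raw
    .replace(/^```(?:json)?\s*/m, "")
    .replace(/```\s*$/m, "")
    .trim();
  try {
    return JSON.parse(cleaned);
  } catch {
    // No schema enforcement, no typed result, no retry — just a null and a hope.
    return null;
  }
}
```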

3. Use ChatPromptTemplate

Never use string concatenation for prompts. It is prone to injection and makes it harder to manage multi-turn history.

4. Version your Prompts

Store prompts in a central registry (or LangSmith Hub) rather than hardcoding them. This allows prompt-only updates without re-deploying code.
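A minimal in-process registry illustrates the idea; LangSmith Hub provides a hosted version of the same thing. The prompt names and versions below are hypothetical:

```typescript
// Hypothetical versioned prompt registry.
// In production, back this with a database or LangSmith Hub rather than a Map.
const promptRegistry = new Map<string, string>([
  ["feedback-analyst@v1", "You are a feedback analyst. Extract structured data."],
  ["feedback-analyst@v2", "You are a professional feedback analyst. Extract structured data from the provided text."],
]);

function getPrompt(name: string, version = "v2"): string {
  const text = promptRegistry.get(`${name}@${version}`);
  if (!text) throw new Error(`Unknown prompt: ${name}@${version}`);
  // Promoting v1 -> v2 is a registry update, not a code deploy.
  return text;
}
```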


References

  1. LangChain Expression Language (LCEL) — js.langchain.com/docs/concepts/lcel/
  2. Structured Output Guide — js.langchain.com/docs/how_to/structured_output/
  3. LangSmith Observability — smith.langchain.com
  4. Zod Documentation — zod.dev