Definition
Deterministic logic is computation in which the same input always produces the same output under fixed, enumerable rules.
A deterministic-logic-suffices decision is the architectural choice to solve a problem with explicit code rather than a probabilistic model when the problem can be specified as rules.
The default is deterministic.
AI is added only when a deterministic specification is not viable.
Decision Criteria
Use deterministic logic when
- The decision can be expressed as enumerable rules.
- The same input must always produce the same output.
- Errors carry legal, financial, safety, or compliance cost.
- Auditability is required.
- The latency budget is tight (milliseconds, not the hundreds of milliseconds an LLM call costs).
- Operating cost must be predictable.
Use AI when
- The input is unstructured natural language, audio, or image.
- The decision involves judgment over ambiguity.
- Outputs are advisory, not authoritative.
- The cost of occasional error is tolerable.
Use hybrid when
- The deterministic core handles authority.
- The AI step handles unstructured input.
Hybrid is the dominant production pattern.
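The division of labor above can be sketched in a few lines. This is a hypothetical example (the refund domain, field names, and threshold are invented for illustration): an LLM step would only produce the structured `req` object from raw text; the decision itself stays in plain code.

```javascript
// Minimal sketch of the hybrid split (hypothetical domain and names).
// An LLM at the boundary turns an unstructured email into `req`;
// the deterministic core below holds all decision authority.
function decideRefund(req) {
  // Deterministic validation before any action on model output.
  if (!Number.isInteger(req.amountCents) || req.amountCents <= 0) {
    return { decision: "reject", reason: "invalid amount" };
  }
  // Deterministic authority: a fixed, auditable threshold.
  return req.amountCents <= 5000
    ? { decision: "auto-approve" }
    : { decision: "escalate" };
}
```

Same input, same output, every time; the probabilistic step never touches the threshold.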
Comparison
| Property | Deterministic | LLM |
|---|---|---|
| Repeatability | Exact | Approximate |
| Latency | Microseconds | Hundreds of ms |
| Cost per call | Negligible | Token-priced |
| Auditability | Direct | Trace-mediated |
| Explainability | Direct | Post-hoc |
| Failure mode | Bug | Hallucination |
| Test coverage | Achievable | Statistical |
Examples
Deterministic problems
- tax calculation
- shipping fee rules
- password validation
- discount eligibility
- invoice arithmetic
- access permission check
- compliance gate
- order status transition
- required field validation
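Any entry on this list reduces to explicit, enumerable checks. A sketch using password validation (the specific rules are illustrative, not a recommended policy):

```javascript
// Password validation as enumerable rules — no model needed.
// Each rule is explicit, individually testable, and repeatable.
function validatePassword(pw) {
  const errors = [];
  if (pw.length < 12) errors.push("too short");
  if (!/[A-Z]/.test(pw)) errors.push("missing uppercase");
  if (!/[0-9]/.test(pw)) errors.push("missing digit");
  return { valid: errors.length === 0, errors };
}
```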
Worked example
Rule:

```javascript
const shipping = total > 100 ? 0 : 9.99;
```

LLM-replaced version:

```javascript
// `llm.ask` returns free-form text, not a number — the type is unknown until runtime.
const shipping = await llm.ask(`Should shipping be free for $${total}?`);
```
The LLM version is slower, more expensive, non-deterministic, and harder to audit. It introduces a regression surface where none existed.
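The comparison table's "Test coverage: Achievable" row is concrete here: the deterministic rule can be fully verified at its boundary values with a handful of assertions, something the LLM version cannot offer.

```javascript
// The deterministic rule under test (same rule as above, as a function).
const shipping = (total) => (total > 100 ? 0 : 9.99);

// Boundary-value checks: exact, repeatable, and complete for this rule.
console.assert(shipping(100) === 9.99);   // at the threshold: not free
console.assert(shipping(100.01) === 0);   // just above the threshold: free
console.assert(shipping(0) === 9.99);     // far below: not free
```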
Hidden Costs of LLM Use
When LLMs replace deterministic logic, additional cost classes are introduced:
- token cost
- latency
- evaluation infrastructure
- prompt regression risk
- model drift
- vendor dependency
- prompt injection surface
- observability overhead
- fallback path complexity
- compliance documentation burden
These costs are not visible in prototypes.
They dominate in production.
Hybrid Architecture
The dominant 2026 production pattern:
deterministic core
+ LLM at boundaries
+ structured outputs
+ deterministic validators
+ deterministic execution
Example: invoice processing
Deterministic:
- approval thresholds
- payment limits
- vendor allowlist
- duplicate detection
- ledger writes
LLM:
- PDF field extraction
- anomaly summarization
- policy-question answering
The LLM produces structured data.
Deterministic logic acts on it.
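The invoice split can be sketched as follows. All names, limits, and the allowlist are hypothetical; the point is the shape: the LLM's only job is to produce `extracted`, and every gate that carries authority is plain code.

```javascript
// Hypothetical invoice gate: deterministic validators run on the LLM's
// structured output before any payment or ledger write occurs.
const VENDOR_ALLOWLIST = new Set(["acme-supply", "globex"]);
const PAYMENT_LIMIT_CENTS = 500000;

function gateInvoice(extracted, seenInvoiceIds) {
  if (!VENDOR_ALLOWLIST.has(extracted.vendorId)) return "reject: vendor not allowlisted";
  if (seenInvoiceIds.has(extracted.invoiceId)) return "reject: duplicate";
  if (extracted.amountCents > PAYMENT_LIMIT_CENTS) return "escalate: over payment limit";
  return "approve";
}
```

A mis-extraction by the LLM can cause a rejection or an escalation, but never an unauthorized payment.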
2026 Industry Cross-check
The deterministic-first heuristic is consistent with current published guidance.
Anthropic — Building Effective Agents
Anthropic distinguishes "workflows" (LLMs and tools orchestrated through predefined code paths) from "agents" (LLM-driven decisions about their own steps). Recommendation: prefer workflows for predictability and consistency. Use agents only when flexibility and model-driven decision-making are required.
Source: anthropic.com/research/building-effective-agents
OpenAI — Practical Guide to Building Agents
OpenAI recommends an escalation ladder: prompt → workflow → agent. Each step is justified only if the previous step is insufficient. Determinism is preferred where viable.
Microsoft — Azure AI Agent Design Patterns
Azure Architecture Center: use the lowest level of complexity that meets requirements. Multi-agent and autonomous systems add coordination overhead, latency, and failure modes. They are justified only when simpler architectures cannot meet the requirement.
Source: learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns
LangChain — LangGraph
LangGraph distinguishes graph-defined workflows from autonomous agent loops. Production deployment guidance favors graph-defined orchestration with human-in-the-loop checkpoints and durable state.
Source: docs.langchain.com/oss/python/langgraph/overview
NIST AI Risk Management Framework
NIST AI RMF requires traceability, measurability, and human oversight for AI components. Deterministic components satisfy these requirements directly. AI components require additional documentation, evaluation, and monitoring.
Source: nist.gov/itl/ai-risk-management-framework
EU AI Act
The EU AI Act, in its 2026 enforcement phase for high-risk systems, requires logging, human oversight, technical documentation, and conformity assessment. Deterministic substitution reduces the regulatory surface for any component where a rule-based implementation is feasible.
Operational Telemetry
Production patterns reported by observability vendors (LangSmith, Helicone, Braintrust, Arize, Vertex AI Eval) consistently show that deployed AI systems are deterministic pipelines with bounded LLM steps, not autonomous loops. Token cost, latency, and evaluation effort grow super-linearly with the number of LLM steps in a workflow.
Convergence
Across vendor guidance, regulatory frameworks, and operational telemetry, the 2026 consensus is:
- Default to deterministic.
- Add AI only at boundaries where determinism is infeasible.
- Validate AI output deterministically before action.
- Persist state deterministically.
- Gate authority deterministically.
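The "validate AI output deterministically before action" item can be sketched as a structural check on a model's raw reply. The expected shape here is hypothetical; a schema library would serve the same role.

```javascript
// Deterministic validation gate for untrusted model output (illustrative shape).
function validateExtraction(raw) {
  let data;
  try {
    data = JSON.parse(raw); // model output is untrusted text until proven otherwise
  } catch {
    return { ok: false, reason: "not JSON" };
  }
  if (typeof data.vendorId !== "string" || data.vendorId.length === 0)
    return { ok: false, reason: "missing vendorId" };
  if (!Number.isInteger(data.amountCents) || data.amountCents < 0)
    return { ok: false, reason: "bad amountCents" };
  return { ok: true, data };
}
```

Nothing downstream acts on the model's words; it acts only on data that has passed this gate.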
Decision Heuristics
Two questions determine the architecture.
1. Can the decision be expressed as a rule?
| Answer | Architecture |
|---|---|
| Yes | Deterministic |
| No | LLM-assisted |
| Partially | Hybrid |
2. Can the system tolerate an incorrect decision?
| Answer | Architecture |
|---|---|
| No | Deterministic core, AI advisory only |
| Yes | AI may decide, with monitoring |
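The two tables combine into a single decision function. This is a sketch of the heuristic only (the string labels are invented), not a substitute for judgment:

```javascript
// The two questions above, encoded directly.
// expressibleAsRule: "fully" | "partially" | "no"
// errorTolerable: boolean
function chooseArchitecture(expressibleAsRule, errorTolerable) {
  if (expressibleAsRule === "fully") return "deterministic";
  const base = expressibleAsRule === "partially" ? "hybrid" : "llm-assisted";
  return errorTolerable ? base : base + " (AI advisory only; deterministic core decides)";
}
```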
Anti-pattern
Common failure mode: using an LLM to perform a deterministic task because
- the team has access to an LLM API
- the demo is impressive
- the rule set is perceived as complex
- AI-first language exists in the organizational mandate
In each case, the deterministic alternative is simpler, cheaper, faster, more reliable, and more auditable.
The architectural decision is not "does the LLM work."
The architectural decision is "is the LLM necessary."
Removal Test
Before adding an LLM, run the removal test:
What changes if the LLM is removed?
If the answer is "nothing important," the LLM is decoration.
Decoration does not run production systems.
Summary
- Deterministic logic is the default.
- AI is added only where rules cannot be enumerated.
- Hybrid architectures are the 2026 production norm.
- Authority remains deterministic in all cases.
- The strongest AI architecture decision is identifying where AI must not be used.
References
- Anthropic — Building Effective Agents — anthropic.com/research/building-effective-agents
- Microsoft Azure Architecture Center — AI Agent Design Patterns — learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns
- LangChain — LangGraph overview — docs.langchain.com/oss/python/langgraph/overview
- NIST — AI Risk Management Framework — nist.gov/itl/ai-risk-management-framework
- OpenAI — A Practical Guide to Building Agents
- EU AI Act — high-risk system obligations, 2026 enforcement phase