Enterprise AI readiness is the degree to which an organization can articulate its goals, workflows, decisions, and economics with sufficient precision for AI systems to act on them.

Readiness is independent of model quality, vendor selection, headcount, and budget.

Readiness is a function of organizational self-knowledge expressed as durable, queryable artifacts.

Without readiness, AI deployment amplifies dysfunction. Cost grows. Output does not.


Required artifacts

A ready organization maintains a stable, queryable record of:

  1. Customer problem.
  2. Deficiency in existing solutions that the product addresses.
  3. Company goals.
  4. Metrics tied to each goal.
  5. Challenges blocking each goal.
  6. Strategies addressing each challenge.
  7. Projects implementing each strategy.
  8. Work performed within each project.
  9. People assigned to each unit of work.
  10. Cost of each unit of work.
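
A minimal sketch of this hierarchy as TypeScript types. The names and fields are illustrative assumptions, not a standard schema; the point is that each level links to the next and carries a stable identifier.

    // Illustrative types for the goal-to-cost chain; field names are assumptions.
    interface Goal {
      id: string;                 // stable identifier, persists across cycles
      statement: string;
      metrics: string[];          // 4. metrics tied to the goal
      challenges: Challenge[];    // 5. challenges blocking it
    }

    interface Challenge {
      id: string;
      description: string;
      strategies: Strategy[];     // 6. strategies addressing it
    }

    interface Strategy {
      id: string;
      description: string;
      projects: Project[];        // 7. projects implementing it
    }

    interface Project {
      id: string;
      name: string;
      work: WorkItem[];           // 8. work performed within it
    }

    interface WorkItem {
      id: string;
      description: string;
      assignees: string[];        // 9. people assigned
      costUsd: number;            // 10. cost of the unit of work
    }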

Each artifact must be:

  1. Owned by a named individual or team.
  2. Recorded in a canonical source.
  3. Version-controlled or auditable.
  4. Machine-readable where possible.

Canonical sources: Notion, Confluence, Coda, Backstage, Git-tracked Markdown.

Ownership tracking: Linear, Jira, Asana, Backstage service catalog.
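
One way to make the four requirements mechanically checkable, sketched with Zod (listed in the Node.js stack below). The envelope shape is an assumption:

    import { z } from "zod";

    // Sketch of an artifact envelope enforcing the four requirements.
    const Artifact = z.object({
      owner: z.string().min(1),        // 1. named individual or team
      canonicalUrl: z.string().url(),  // 2. location in the canonical source
      revision: z.string().min(1),     // 3. version identifier, e.g. a Git SHA
      body: z.unknown(),               // 4. machine-readable payload
    });

    // Parsing fails loudly when any requirement is missing:
    Artifact.parse({
      owner: "platform-team",
      canonicalUrl: "https://wiki.example.com/strategy/2025",
      revision: "3f2a91c",
      body: { statement: "Reduce churn by a third" },
    });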


Stability requirement

Stability across reporting cycles is itself a readiness signal.

A ready organization's documented strategy in Q1 is materially the same in Q4, with project-level evidence of execution between them.

Quarterly restatement of strategy without execution indicates absence of underlying structure. Retrieval and agent systems cannot index entities whose identifiers do not persist across cycles.

Practice: maintain strategy/ and goals/ directories under version control. Reference commit SHAs from project tickets and OKR tools.
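
A sketch of resolving a strategy document at a pinned commit, assuming a Git-tracked strategy/ directory; the helper and file path are hypothetical:

    import { execFileSync } from "node:child_process";

    // Hypothetical helper: read strategy/goals.md exactly as it existed at a
    // pinned SHA, so a ticket cites an immutable revision rather than "latest".
    function readStrategyAt(sha: string, path = "strategy/goals.md"): string {
      return execFileSync("git", ["show", `${sha}:${path}`], { encoding: "utf8" });
    }

    // A ticket or OKR entry stores only the SHA; the text is reproducible:
    const q1Strategy = readStrategyAt("3f2a91c");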


Workflow scoping

Readiness is scored per workflow, not per organization.

A workflow is ready if it satisfies:

  1. Observable inputs and outputs.
  2. A measurable success metric defined before deployment.
  3. A named owner with authority over the metric.
  4. A budget ceiling.
  5. A defined failure mode and rollback path.
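
The five criteria as a record type, a sketch whose field names are assumptions:

    // Illustrative readiness record for a single workflow.
    interface WorkflowReadiness {
      workflow: string;
      inputs: string[];           // 1. observable inputs
      outputs: string[];          //    and outputs
      successMetric: string;      // 2. defined before deployment
      owner: string;              // 3. authority over the metric
      budgetCeilingUsd: number;   // 4. hard spend limit
      failureMode: string;        // 5. what failure looks like
      rollbackPath: string;       //    and how to revert
    }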

A single ready workflow is sufficient to begin AI deployment. Organization-wide readiness is rarely the operating goal.


Metric specification

Metrics must be defined before architecture.

Each metric requires:

  1. Source of truth.
  2. Query that produces it.
  3. Update frequency.
  4. Owner.
  5. Threshold for success and failure.

Storage: BigQuery, Snowflake, Postgres, ClickHouse.

Definition layer: dbt, Cube, MetricFlow.

Surfacing: Looker, Hex, Mode, Metabase, Lightdash, PostHog.

Lineage: OpenLineage, Marquez, Dataplex.

Metrics without a stored query cannot gate deployment.
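
A sketch of a metric definition that satisfies all five requirements and embeds its query. The shape, names, and thresholds are illustrative; the query assumes a hypothetical BigQuery table:

    // Illustrative metric definition; the shape is an assumption.
    const ticketDeflectionRate = {
      sourceOfTruth: "bigquery://analytics.support.tickets", // 1.
      query: `
        SELECT SAFE_DIVIDE(COUNTIF(resolved_by = 'ai'), COUNT(*)) AS rate
        FROM analytics.support.tickets
        WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
      `,                                                     // 2.
      updateFrequency: "daily",                              // 3.
      owner: "support-ops",                                  // 4.
      thresholds: { success: 0.30, failure: 0.15 },          // 5.
    };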


Cost attribution

Cost instrumentation precedes AI call instrumentation: attribution must be in place before the first production request ships.

Per-call attribution requires:

  1. Tenant or workflow identifier on every LLM request.
  2. Token usage captured from the SDK response.
  3. Cost rollup keyed to the workflow.
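
A minimal sketch with the openai Node SDK. The pricing constants and the in-memory rollup are assumptions; real rates come from the vendor's price list, and the rollup would normally land in a warehouse:

    import OpenAI from "openai";

    const client = new OpenAI();

    // Assumed per-token rates, for illustration only.
    const USD_PER_INPUT_TOKEN = 2.5e-6;
    const USD_PER_OUTPUT_TOKEN = 1e-5;

    const costByWorkflow = new Map<string, number>(); // 3. rollup keyed to workflow

    async function taggedCompletion(workflowId: string, prompt: string) {
      const res = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
        user: workflowId, // 1. identifier attached to the request
      });
      const usage = res.usage!; // 2. token usage from the SDK response
      const cost =
        usage.prompt_tokens * USD_PER_INPUT_TOKEN +
        usage.completion_tokens * USD_PER_OUTPUT_TOKEN;
      costByWorkflow.set(workflowId, (costByWorkflow.get(workflowId) ?? 0) + cost);
      return res.choices[0].message.content;
    }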

SDKs exposing usage fields: OpenAI SDK, Anthropic SDK, Google Cloud Vertex AI SDK, Mistral SDK.

Per-request logging: Helicone, Langfuse, Braintrust, LangSmith.

Tracing: OpenTelemetry GenAI semantic conventions, Sentry tracing.
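
A sketch of one span carrying GenAI semantic-convention attributes via @opentelemetry/api. It assumes an OpenTelemetry SDK and exporter are configured elsewhere; workflow.id is a custom attribute, not part of the conventions:

    import { trace } from "@opentelemetry/api";

    const tracer = trace.getTracer("llm-workflows");

    // Record one LLM call as a span with GenAI semconv attributes.
    function recordLlmCall(workflowId: string, inputTokens: number, outputTokens: number) {
      const span = tracer.startSpan("chat gpt-4o");
      span.setAttribute("gen_ai.system", "openai");
      span.setAttribute("gen_ai.request.model", "gpt-4o");
      span.setAttribute("gen_ai.usage.input_tokens", inputTokens);
      span.setAttribute("gen_ai.usage.output_tokens", outputTokens);
      span.setAttribute("workflow.id", workflowId); // custom, not semconv
      span.end();
    }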

Infrastructure cost: BigQuery billing export, Google Cloud Cost Management, Cloud Run usage exports.

Vendor cost: Vendr, Cledara, Tropic.

Untracked cost invalidates ROI claims.


Failure modes of unready deployments

Observable patterns:

  1. Procurement without a target metric.
  2. Pilots that cannot be evaluated against a baseline.
  3. Quarterly vendor migration without controlled comparison.
  4. Increased operational variance rather than reduction.
  5. Cost growth without measurable output change.
  6. AI used to generate artifacts that the organization cannot consume (charts no one reads, summaries no workflow ingests).

The output is overhead, not leverage.


Readiness gap as roadmap

The documentation discipline required for AI readiness is independent of AI.

Closing the readiness gap reduces operational risk regardless of deployment.

Sequence:

  1. Catalog workflows. Tools: Backstage, internal wiki, workflows/ directory in a Git monorepo.
  2. Assign ownership. Tools: Linear teams, RACI tables in Notion, CODEOWNERS.
  3. Define metrics per workflow. Tools: dbt, Looker, Hex.
  4. Instrument cost. Tools: BigQuery billing export, Helicone, OpenTelemetry.
  5. Audit strategy stability across the last four reporting cycles.
  6. Identify the first workflow that satisfies all five readiness criteria.
  7. Deploy AI against that workflow.

Step 7 is gated by steps 1–6.
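
Reusing the WorkflowReadiness shape sketched under Workflow scoping, the gate can be made explicit. The catalog argument stands in for whatever step 1 produced:

    // Sketch: step 6 selects the first workflow passing all five criteria;
    // step 7 refuses to run without one.
    function isReady(w: WorkflowReadiness): boolean {
      return (
        w.inputs.length > 0 && w.outputs.length > 0 &&
        w.successMetric !== "" && w.owner !== "" &&
        w.budgetCeilingUsd > 0 &&
        w.failureMode !== "" && w.rollbackPath !== ""
      );
    }

    function firstReadyWorkflow(catalog: WorkflowReadiness[]): WorkflowReadiness {
      const ready = catalog.find(isReady);
      if (!ready) throw new Error("No ready workflow; steps 1-6 are incomplete.");
      return ready;
    }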


Reference stacks

Google Cloud

  • Catalog: Backstage on GKE, Dataplex.
  • Documentation: Notion, internal Cloud Storage + static site.
  • Metrics: BigQuery, Looker, dbt Cloud on Cloud Run.
  • Workflow execution: Cloud Workflows, Cloud Run jobs, Vertex AI Pipelines.
  • Cost: BigQuery billing export, Cloud Billing budgets, Cloud Monitoring alerts.
  • LLM calls: Vertex AI SDK with usage_metadata, Gemini API.
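
A sketch of reading usage_metadata with the @google-cloud/vertexai Node SDK; the project, region, and model name are placeholders:

    import { VertexAI } from "@google-cloud/vertexai";

    const vertex = new VertexAI({ project: "my-project", location: "us-central1" });
    const model = vertex.getGenerativeModel({ model: "gemini-1.5-pro" });

    // usageMetadata carries the token counts needed for the cost rollup.
    const result = await model.generateContent("Summarize this ticket: ...");
    const usage = result.response.usageMetadata;
    console.log(usage?.promptTokenCount, usage?.candidatesTokenCount, usage?.totalTokenCount);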

Node.js / TypeScript

  • Workflow orchestration: Temporal, XState, Inngest, Trigger.dev.
  • LLM clients: openai, @anthropic-ai/sdk, @google-cloud/vertexai, @mistralai/mistralai.
  • Schema validation: Zod, Valibot.
  • Tracing: OpenTelemetry JS, @vercel/otel.
  • Eval and logging: Langfuse, Braintrust, LangSmith, Helicone proxy.
  • Metrics: dbt + BigQuery, Lightdash, Hex notebooks.

Local / regulated

  • Models: Ollama, vLLM, Hugging Face Transformers.
  • Storage: Postgres, SQLite, DuckDB.
  • Orchestration: Node.js + Temporal self-hosted, Prefect.
  • Observability: Grafana, Prometheus, OpenTelemetry Collector.
  • Documentation: Git-tracked Markdown, MkDocs, Docusaurus.