How to Connect ChatGPT and Claude to Governed Business Metrics Without Direct Warehouse Access
At a glance
- Direct warehouse access is fast to prototype and poor for production governance.
- The safer pattern is model -> governed context layer -> approved query path -> warehouse.
- MCP makes that architecture more reusable because it standardizes how AI clients discover and call external tools and resources.
- OpenAI now documents MCP-based tools and connectors, and broader MCP adoption continues to grow across the AI ecosystem.
- The same governed metric layer can serve multiple AI clients, reducing vendor lock-in and integration drift.
- Kaelio auto-builds that governed context layer from your warehouse, semantic models, BI tools, and operational systems.
Reading time
5 minutes
Last reviewed
April 6, 2026
Topics
Business intelligence
By Luca Martial, CEO & Co-founder at Kaelio | Ex-Data Scientist | 2x founder in AI + Data | ex-CERN, ex-Dataiku
The easiest way to connect an LLM to company data is also the riskiest: point the model at the warehouse and hope prompts will do the rest. That approach breaks down quickly. The model sees physical tables instead of business definitions, permissions get flattened behind broad credentials, and every new AI client creates another custom integration to maintain.
There is a better pattern. Instead of exposing raw warehouse access, expose governed business metrics through a context layer that ChatGPT, Claude, or another AI client can call. The model should receive the business interface to your data, not the entire internal structure of the data warehouse.
Why Direct Warehouse Access Fails
When teams wire a model straight to Snowflake, BigQuery, or Databricks, they usually hit the same three problems.
1. Physical schema is not business context
Raw tables do not tell the model which revenue definition is canonical, which join path is approved, or how churn is segmented for the board deck. Even strong model reasoning cannot replace missing business structure.
2. Permissions get too broad
In practice, direct access often means one shared service credential or an execution path that is looser than the warehouse and BI paths humans normally use. That is exactly the opposite of what teams want for production analytics.
3. Every model vendor becomes a separate integration problem
One custom connector for ChatGPT. Another for Claude. Another for your internal agent framework. The query logic drifts, the permission handling drifts, and the sources cited by each client drift.
That is the integration tax MCP is trying to reduce.
What the Governed Architecture Looks Like
The production-safe pattern is straightforward:
- The user asks a question in ChatGPT, Claude, or another AI interface.
- The model calls a governed service through MCP or API.
- That service returns approved metrics, dimensions, filters, lineage, and constrained query tools.
- The warehouse query runs through governed execution paths.
- The final answer returns with source references or citations.
This architecture changes the model's job. Instead of inventing business logic from raw schema, it selects from governed business context.
That is why the context layer matters. It is not just a metadata store. It is the layer that converts your stack into something an AI system can consume safely.
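The request flow above can be sketched as a chain of narrow functions. Everything here is a hypothetical illustration, not a real API: the registry contents, function names, and the stubbed warehouse result are all assumptions standing in for a production governed service.

```python
from dataclasses import dataclass

# Hypothetical governed registry: metric name -> approved SQL and source reference.
GOVERNED_METRICS = {
    "arr": {
        "sql": "SELECT SUM(amount) FROM finance.recurring_revenue",
        "source": "dbt model: finance.recurring_revenue",
    }
}

@dataclass
class GovernedAnswer:
    value: float
    citation: str  # source reference returned with every answer

def run_governed_query(metric: str) -> GovernedAnswer:
    """The model selects from approved metrics; the service runs only the approved SQL."""
    if metric not in GOVERNED_METRICS:
        raise KeyError(f"{metric!r} is not an approved metric")
    entry = GOVERNED_METRICS[metric]
    # In production this would execute entry["sql"] through the warehouse's
    # governed execution path; here the result is stubbed for illustration.
    value = 1_250_000.0
    return GovernedAnswer(value=value, citation=entry["source"])

answer = run_governed_query("arr")
print(answer.value, "|", answer.citation)
```

The key design point is that the model never composes SQL against raw tables: it picks a governed entry, and the citation travels with the number.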
What MCP Changes
Model Context Protocol is important because it gives AI systems a standard way to work with external context providers and tools. Instead of building bespoke adapters for every model or app surface, teams can expose one governed endpoint and reuse it across clients.
That matters even more now that OpenAI documents how to build and connect remote MCP servers for ChatGPT apps and API integrations, and has highlighted support for remote MCP servers in the Responses API. The point is not that every deployment must use one protocol immediately. The point is that the ecosystem is converging on a standard way to deliver context.
For data teams, that is a major architectural improvement:
- one governed service
- multiple AI clients
- shared metric definitions
- less vendor-specific glue code
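To make the "one service, many clients" idea concrete: MCP is built on JSON-RPC, and its tool interface centers on two methods, `tools/list` for discovery and `tools/call` for invocation. The sketch below is a deliberately simplified stdlib dispatcher for those two methods; a real server would use an MCP SDK and speak full JSON-RPC over stdio or HTTP, and the `get_metric` tool and its payload are hypothetical.

```python
import json

# One governed tool surface, reusable by any MCP-compatible client.
TOOLS = {
    "get_metric": {
        "description": "Return an approved metric value with its source reference.",
        "handler": lambda args: {"metric": args["name"], "value": 42.0,
                                 "source": "governed semantic layer definition"},
    }
}

def handle_rpc(message: str) -> str:
    """Simplified dispatcher for the two MCP methods clients rely on:
    tools/list (discovery) and tools/call (invocation)."""
    req = json.loads(message)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        result = {"error": "method not supported in this sketch"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

print(handle_rpc('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
```

Because discovery and invocation are standardized, ChatGPT, Claude, or an internal agent can all consume the same endpoint without vendor-specific adapters.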
What the Model Should Receive Instead of Raw Tables
If you are designing the interface correctly, the model should receive things like:
- approved metric definitions from dbt or other governed systems
- business-friendly dimensions and filters
- valid join paths
- explanatory descriptions
- source references or lineage context
- access-constrained query tools
If your team already uses a semantic layer such as dbt Semantic Layer or Looker's modeling layer, those investments become upstream inputs into the governed interface. The model should not need to reconstruct them from scratch.
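One way to picture the interface is as a typed payload per metric. The field names below are illustrative assumptions, not a standard schema; the point is that every item in the list above maps to a concrete, machine-readable field the model can select from rather than reconstruct.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """Illustrative shape of one governed metric exposed to an AI client.
    Field names are assumptions, not a standard schema."""
    name: str
    description: str             # explanatory text the model can surface
    dimensions: list             # business-friendly slicing options
    valid_filters: list          # filters the model may apply
    join_paths: list             # approved join paths only
    lineage: str                 # source reference for citations
    allowed_roles: set = field(default_factory=set)  # access constraints

arr = MetricDefinition(
    name="arr",
    description="Annual recurring revenue, board-approved definition.",
    dimensions=["region", "segment"],
    valid_filters=["fiscal_year"],
    join_paths=["finance.contracts -> finance.recurring_revenue"],
    lineage="dbt: metric 'arr' in the governed semantic layer",
    allowed_roles={"finance_analyst"},
)
print(arr.name, "|", arr.lineage)
```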
ChatGPT and Claude Should Not Need Different Metric Logic
One of the biggest advantages of this pattern is architectural symmetry. Whether the end user asks in ChatGPT, Claude, or an internal assistant, the answer should come from the same governed business layer.
That means:
- the same metric definition for ARR
- the same access rules
- the same approved filters
- the same lineage and citation path
Without that shared layer, every client becomes its own semantics engine. That is how metric drift starts in AI deployments.
A Practical Rollout Pattern
If you want to connect production AI clients to business metrics, keep the rollout narrow and governed.
1. Start with approved metrics, not raw domains
Expose board-level or team-level KPIs first. Resist the urge to hand the model every table in the warehouse.
2. Keep the initial tool surface read-only
Read-only search, fetch, and governed query tools are much easier to secure and monitor than mixed read-write automation.
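A minimal sketch of such a guard is below. A string check like this is defense in depth only, and the patterns are an assumption for illustration; the real constraint should come from a read-only warehouse credential behind the tool.

```python
import re

# Accept only statements that begin as plain reads.
READ_ONLY_PATTERN = re.compile(r"^\s*(select|with)\b", re.IGNORECASE)
# Reject anything containing a write or DDL keyword.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|merge|truncate)\b",
    re.IGNORECASE,
)

def assert_read_only(sql: str) -> None:
    """Raise if the statement is not a plain read. The credential used to
    execute the query should itself be read-only; this check is a second layer."""
    if not READ_ONLY_PATTERN.match(sql) or FORBIDDEN.search(sql):
        raise PermissionError("governed tool surface is read-only")

assert_read_only("SELECT region, SUM(arr) FROM governed.arr GROUP BY region")
try:
    assert_read_only("DROP TABLE finance.contracts")
except PermissionError as e:
    print("blocked:", e)
```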
3. Preserve source visibility
The answer should make it easy to trace where the number came from. This is important for both trust and debugging.
4. Reuse your existing governance
NIST's AI RMF is clear that trustworthy AI requires ongoing governance and traceability. The easiest way to achieve that in analytics is to reuse the controls your data stack already supports rather than building new ones in the model layer.
Where Kaelio Fits
Kaelio auto-builds a governed context layer from your warehouse, semantic models, BI tools, and business knowledge. It can expose that context through MCP or API, so the same governed answer surface can be used by ChatGPT, Claude, or custom agents. That is the practical difference between "we connected a model to data" and "we deployed a governed AI analytics layer."
If your team wants faster self-serve analytics without raw warehouse exposure, that is the architecture to copy.
FAQ
Why not just give ChatGPT or Claude direct warehouse credentials?
Because raw warehouse access exposes physical schemas without enough business context. That leads to wrong joins, inconsistent metrics, broader-than-needed permissions, and brittle prompt engineering. A governed intermediary is safer and usually more accurate.
What role does MCP play in this architecture?
MCP gives models a standard way to connect to external tools and context providers. Instead of building a one-off integration for each model vendor, teams can expose a governed service once and reuse it across compatible clients.
Can ChatGPT and Claude both use the same governed metric layer?
Yes. That is one of the main advantages of a context-layer-plus-MCP approach. The same governed definitions can be served to multiple AI clients, reducing duplication and making vendor changes less disruptive. For more on that pattern, see Model Context Protocol and the future of governed AI data access.
What should the model receive instead of raw tables?
It should receive approved metrics, dimension definitions, valid filters, lineage or source references, and access-constrained query tools. The goal is to expose the business interface to the data, not the entire warehouse internals.
Sources
- https://modelcontextprotocol.io/docs/getting-started/intro
- https://modelcontextprotocol.io/specification/2025-06-18/architecture
- https://developers.openai.com/api/docs/guides/tools-connectors-mcp
- https://developers.openai.com/api/docs/guides/developer-mode
- https://developers.openai.com/api/docs/mcp
- https://openai.com/index/new-tools-and-features-in-the-responses-api/
- https://docs.getdbt.com/docs/build/metrics-overview
- https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl
- https://cloud.google.com/looker/docs/what-is-lookml
- https://www.nist.gov/document/about-nist-ai-rmf