Before AI asks the database: write a metrics contract

AI is making it easier to ask business data questions in plain language. That sounds liberating: “Which campaigns converted best?” or “Why is a competitor’s traffic growing?” But if the words behind the numbers are unclear, the answer is only a more polished guess. Before an AI agent asks the database, write a metrics contract.
What changed: AI is moving closer to decision data
Perplexity recently introduced a Snowflake connector for Perplexity Computer. It lets users ask plain-language questions against Snowflake data while admins control access, review SQL, and keep business definitions consistent. Perplexity also describes its own internal Slackbot handling up to 3,000 weekly Snowflake queries, built on a shared context layer that explains tables, key metrics, and common questions.
Source: Perplexity — Computer brings data science to every team
The same movement is visible in marketing and competitor analysis. Manus has expanded its Similarweb integration so that users can inspect keywords, referral sites, landing pages, popular pages, and other growth signals. This is no longer just “show traffic”; it is “explain why demand is moving.”
Source: Manus — Similarweb upgrade
On the tooling side, Google’s Genkit Middleware points in the same direction: agentic apps need controls for retries, fallbacks, observability, and human approval before risky tool calls run. “Agentic” here means a system that does not just answer in text but can use tools, retrieve data, and carry a task through several steps.
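The human-approval control described above can be sketched generically. The names below (approval_gate, RISKY_TOOLS, and the callbacks) are illustrative assumptions, not Genkit’s actual API; the point is only the pattern of pausing risky tool calls for a person.

```python
# A generic sketch of a human-approval gate around agent tool calls.
# All names here are illustrative, not Genkit's real API.

RISKY_TOOLS = {"update_prices", "export_customer_list", "write_to_db"}

def approval_gate(tool_name, args, run_tool, ask_human):
    """Run a tool call, but pause for human approval if it is risky."""
    if tool_name in RISKY_TOOLS:
        approved = ask_human(f"Agent wants to run {tool_name} with {args}. Allow?")
        if not approved:
            return {"status": "blocked", "tool": tool_name}
    return {"status": "ok", "result": run_tool(tool_name, args)}

# A read-only query passes through; a price change is stopped
# when the (simulated) human says no.
passed = approval_gate(
    "query_sales", {"region": "EU"},
    run_tool=lambda name, args: f"ran {name}",
    ask_human=lambda prompt: False,
)
print(passed)   # {'status': 'ok', 'result': 'ran query_sales'}

blocked = approval_gate(
    "update_prices", {"sku": "A-1", "delta": "+5%"},
    run_tool=lambda name, args: f"ran {name}",
    ask_human=lambda prompt: False,
)
print(blocked)  # {'status': 'blocked', 'tool': 'update_prices'}
```

The same wrapper is a natural place for the retries, fallbacks, and logging the article mentions: everything between the agent’s intent and the tool’s side effect flows through one checkpoint.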
Source: Google Developers Blog — Genkit Middleware
The problem is rarely the connector. It is the definition.
When a team connects AI to Snowflake, a CRM, ad platforms, or competitor data, the same questions appear:
- What does the metric mean? Is an “active customer” someone who logged in, paid, booked, opened an email, or had a meeting?
- Which source is authoritative? Is the answer based on warehouse data, finance data, manual spreadsheets, or third-party market data?
- Who is allowed to ask? Should everyone be able to ask about customer segments, margin, campaign cost, or support tickets?
- What caveats must the answer include? Are holidays, returns, trials, seasonality, small samples, or old data missing?
- What should AI never automate? An analysis may be fine, while a price change, a customer-list export, or a database update may require human review.
A metrics contract is a short, readable agreement between the business, the data owner, and the AI workflow. It says not only where the number lives, but how it may be used.
A simple metrics-contract template
Use this structure before letting an AI agent answer business questions:
- Metric: Name, business definition, and one example of an allowed question.
- Data source: System, table, report, Similarweb view, or other source the answer should rely on.
- Owner: The person allowed to change the definition when the business changes.
- Access: Roles that may ask, plus data that must never appear in AI answers.
- Allowed filters: Time, region, product, campaign, channel, or customer segment that may be used.
- Required caveats: Data freshness, known gaps, small samples, estimates, and limits of external data sources.
- Review rule: When SQL, report logic, or competitor interpretation must be reviewed by a person.
- Answer format: How the answer should show assumptions, sources, next steps, and uncertainty.
- Stop boundary: Decisions or actions the AI must never take without approval.
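One way to make the template concrete is to keep each contract as a small structured record. The sketch below is a minimal illustration: the field names mirror the list above, and the table name, owner address, and example values are invented placeholders, not real systems.

```python
from dataclasses import dataclass, field

@dataclass
class MetricsContract:
    """One metric, one contract. Fields mirror the template above."""
    metric: str
    definition: str
    example_question: str
    data_source: str
    owner: str
    allowed_roles: list = field(default_factory=list)
    allowed_filters: list = field(default_factory=list)
    required_caveats: list = field(default_factory=list)
    review_rule: str = "a person reviews SQL before results are shared"
    answer_format: str = "show assumptions, sources, next steps, uncertainty"
    stop_boundary: str = "never change data or budgets without approval"

# Illustrative example: the campaign-performance case from this article.
conversion = MetricsContract(
    metric="campaign_conversion_rate",
    definition="orders / ad clicks, returns excluded, per campaign",
    example_question="Which campaigns had the highest conversion rate?",
    data_source="warehouse.marketing.campaign_facts",  # placeholder table
    owner="marketing-analytics@company.example",       # placeholder owner
    allowed_roles=["marketing", "finance"],
    allowed_filters=["time", "region", "channel"],
    required_caveats=["7-day attribution window", "data is 1 day behind"],
)
print(conversion.metric)
```

Kept this way, the contract is short enough for the business owner to read, yet structured enough to feed into an AI workflow’s context layer.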
This is not bureaucracy for its own sake. It is what makes an AI answer useful enough to act on.
Three everyday examples
1. Support questions
Question: “What are the top drivers behind support tickets this month?”
The metrics contract should say which ticket types count, how duplicates are handled, whether chat and email are mixed, who may see customer examples, and when the answer should escalate the question to the support owner.
2. Campaign performance
Question: “Which campaigns had the highest conversion rate?”
Here, the contract needs to define conversion, attribution, period, included costs, whether returns are removed, and when AI may only suggest follow-up instead of budget changes.
3. Competitor analysis
Question: “Which keywords seem to drive the competitor’s growth?”
The contract should separate observed external signals from firm conclusions. Similarweb data can be highly useful, but it should be treated as a market indicator, not internal truth. The AI can suggest content angles, but it should not copy the competitor’s wording.
When Hammer can help
If you already have databases, reports, or market-intelligence tools but lack a safe path from question to decision, this fits a Tool Forge setup. Start with a metrics contract for three questions that actually affect daily work: one about customers, one about money, and one about capacity.
Hammer can help map data sources, write definitions, set review rules, and build a first data-agent workflow where the answer is traceable. The goal is not to connect as much data as possible. The goal is for the right person to get an answer that can be checked, understood, and used.
Your next step this week
Pick one question that usually gets sent to “the person who knows the data.” Write down:
- Which source the answer should use.
- What the metric means.
- Who owns the definition.
- Which caveats must always be visible.
- When a human must review the answer.
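Those five points can double as a pre-flight check. A hypothetical sketch, assuming the contract is kept as a plain record: the agent refuses to answer until every required field is filled in.

```python
# Pre-flight check: do not let the agent answer until the contract
# is complete. Field names are assumptions matching the list above.
REQUIRED_FIELDS = ["source", "definition", "owner", "caveats", "review_trigger"]

def ready_to_answer(contract: dict) -> tuple[bool, list]:
    """Return (ok, missing): ok only when every required field is non-empty."""
    missing = [f for f in REQUIRED_FIELDS if not contract.get(f)]
    return (len(missing) == 0, missing)

draft = {
    "source": "warehouse.support.tickets",  # placeholder source
    "definition": "open tickets, duplicates merged",
    "owner": "",  # still unassigned -> the agent must not answer yet
    "caveats": ["chat and email counted separately"],
    "review_trigger": "any answer naming a customer",
}
ok, missing = ready_to_answer(draft)
print(ok, missing)  # False ['owner']
```

A check this small is usually enough: the goal is not validation machinery, but a visible signal that the definition work is unfinished.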
Once that is clear, you can start testing AI properly. Not as a shortcut around control, but as a way to make control faster.


