If you survey the DBAs who say they use AI daily, the pattern looks the same: they ask Copilot or Claude to generate a query, paste the result into SSMS, run it, and tweak until the answer looks right. That is a useful workflow. It is also the least valuable thing you can do with a language model in a database role. The high-leverage move is the opposite: instead of asking the AI to write SQL, hand it the SQL and let it read.

Why generation is the obvious move and the lower-value one

Generation is satisfying. You type a sentence, the AI types twelve lines of T-SQL, you save five minutes. The output is binary — either it runs and produces the answer, or it doesn't.

The problem is that those five minutes are the cheap part of the job. Writing the original query was never the bottleneck. The bottleneck is everything that happens after: confirming the query is correct, understanding why one plan is fast and another isn't, knowing whether an index will help or hurt, deciphering what a stored procedure inherited from someone who left in 2012 actually does.

Generation skips all of that. Reading is where the value lives.

Four things AI reads better than most DBAs in a hurry

1. Execution plans. Paste the XML or graphical-plan-as-text into a model and ask "what is wrong with this plan?" A well-prompted local 14B model gets a plausible diagnosis on the first try most of the time. It will catch the implicit conversion in your join predicate that's killing your seek. It will notice the missing index suggestion the optimizer is whispering. It will spot a stale-stat scan you would have walked past on hour 11 of your day. Plan analysis is pattern recognition over a structured payload — exactly the kind of task LLMs handle well.
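The same red flags the model catches are sitting in the plan XML itself, so you can pre-screen before you paste. A minimal sketch, assuming SQL Server showplan XML (element names like `PlanAffectingConvert`, `MissingIndex`, and `RelOp` come from the standard showplan schema; the heuristics are illustrative, not exhaustive):

```python
import xml.etree.ElementTree as ET

def plan_red_flags(plan_xml: str) -> list[str]:
    """Cheap mechanical scan of a showplan XML: surface the obvious
    red flags so you can ask the model sharper follow-up questions."""
    flags = []
    for elem in ET.fromstring(plan_xml).iter():
        tag = elem.tag.split("}")[-1]  # drop the showplan namespace prefix
        if tag == "PlanAffectingConvert":
            # implicit conversions that block a seek are reported here
            flags.append(f"implicit conversion: {elem.get('Expression', '?')}")
        elif tag == "MissingIndex":
            flags.append(f"missing-index hint on {elem.get('Table', '?')}")
        elif tag == "RelOp" and "Scan" in (elem.get("PhysicalOp") or ""):
            # a scan is not always wrong, but it is always worth asking about
            flags.append(f"scan operator: {elem.get('PhysicalOp')}")
    return flags
```

Run it before pasting and open the chat with the plan plus what the dumb scan already flagged; the conversation starts from specifics instead of "what's wrong?"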

2. Stored procedure intent. Most production databases contain stored procedures nobody on the current team wrote. Ask the model: "Summarize what this procedure does in five sentences, then list the side effects." The output is a starting point, not a final answer, but it's a 30-second starting point versus your 30-minute one.
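Pulling every procedure body is one catalog query away, and wrapping each in the five-sentence ask is a few lines. A sketch (the catalog views `sys.procedures`, `sys.schemas`, and `OBJECT_DEFINITION` are standard SQL Server objects; the side-effect list in the prompt is illustrative):

```python
# T-SQL to pull every procedure body in the current database:
FETCH_PROCS = """
SELECT s.name + '.' + p.name AS proc_name,
       OBJECT_DEFINITION(p.object_id) AS body
FROM sys.procedures AS p
JOIN sys.schemas AS s ON s.schema_id = p.schema_id;
"""

def summary_prompt(proc_name: str, body: str) -> str:
    """Wrap one procedure body in the summarize-then-side-effects ask."""
    return (
        "Summarize what this procedure does in five sentences, then list "
        "the side effects: writes, DDL, dynamic SQL, linked-server calls.\n\n"
        f"-- {proc_name}\n{body}"
    )
```

Feed each prompt to whatever model you use, write the answers down, and spot-check a handful by hand before trusting any of them.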

3. Schema review. Hand it a CREATE TABLE script and a sample row. Ask "what's structurally wrong here?" The model catches missing indexes on foreign keys, columns that should be NOT NULL, datatype mismatches across joins, and naming inconsistencies. None of this is novel. All of it is faster than doing it by hand.
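Part of that checklist can be mechanized before the model ever sees the script. A rough pre-check, assuming the script fits a few regex heuristics (this is string matching, not a SQL parser, so misses are expected and the deeper read stays with the model):

```python
import re

def schema_smells(create_table_sql: str) -> list[str]:
    """Shallow structural checks on a CREATE TABLE script; catches only
    the mechanical cases, by design."""
    sql = create_table_sql.upper()
    smells = []
    # foreign key columns with no index declared anywhere in the script
    fk_cols = re.findall(r"FOREIGN KEY\s*\(\s*\[?(\w+)\]?\s*\)", sql)
    indexed = set(re.findall(r"\bINDEX\s+\S+\s*(?:ON\s+\S+\s*)?\(\s*\[?(\w+)", sql))
    for col in fk_cols:
        if col not in indexed:
            smells.append(f"FK column {col} has no index in this script")
    # MAX-typed columns: fine for blobs, a problem in joins and keys
    if re.search(r"N?VARCHAR\s*\(\s*MAX\s*\)", sql):
        smells.append("MAX-typed column present: confirm it is never joined on")
    return smells
```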

4. T-SQL critique. Paste a query and ask "review this." You get back a list of things a senior DBA might say in code review: NOLOCK hints that shouldn't be there, scalar UDFs that will tank parallelism, sp_executesql calls without explicit parameter typing. The model is opinionated, sometimes wrong, but useful as a first reviewer before a human looks at it.
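Two of those review items can be pre-flagged mechanically, so the first reviewer (model or human) spends attention on the harder judgments. A sketch using regex heuristics (a reliable scalar-UDF check needs a real parser, so that one stays with the model):

```python
import re

def tsql_pre_review(query: str) -> list[str]:
    """Flag the mechanical review items before anyone reads the query;
    everything subtler is the reader's job."""
    sql = query.upper()
    issues = []
    if "NOLOCK" in sql:
        issues.append("NOLOCK hint: dirty reads, usually shouldn't be there")
    # sp_executesql with a single argument has no typed parameter list
    for call in re.finditer(r"SP_EXECUTESQL\b([^;]*)", sql):
        if "," not in call.group(1):
            issues.append("sp_executesql without an explicit @params definition")
    return issues
```

Use the output the same way you use the model's: a checklist to walk, not a verdict to accept.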

The compound effect: AI as the always-available code reviewer

When reading becomes your default mode, what changes is the rate at which you fix things you would otherwise leave alone.

A query that's "fine" — runs in 200ms when the user expects 50ms — usually doesn't get optimized, because the cost of opening the plan and reading it carefully is higher than the value of saving 150ms. With AI as a reader, the cost drops by an order of magnitude. The 200ms query gets reviewed because reviewing it is now thirty seconds of work instead of fifteen minutes.

Multiply that across a thousand queries in a production workload and you have an environment where the marginal optimization actually happens. Not because the AI made you smarter; because it lowered the friction enough for you to do work you already knew how to do.

When generation is fine

There are places where AI-as-generator is the right tool: scaffolding a migration script, drafting a stored procedure from a spec, writing throwaway analytical queries against a data warehouse. Those are tasks where the cost of being wrong is low and the cost of being slow is high.

The line between generator and reader is the cost of failure. If the SQL is going to run in production against a regulated table, you want a reader. If the SQL is going to run once on a development database to confirm a hypothesis, the generator is fine.

How to start tomorrow

Three habits, in order of payoff:

  1. The plan-paste habit. Anytime a query takes longer than feels right, copy the actual execution plan into a chat with the model. Ask "what's the slow operator and why?" Don't accept the first answer; ask follow-up questions. Within a week you will catch issues you would have shrugged at.
  2. The procedure-summary habit. When you inherit a database, walk the stored procedure list and let the model summarize each in five sentences. Save the summaries to a wiki. The next on-call engineer will thank you.
  3. The query-review habit. Before merging a PR that touches T-SQL, paste the query and ask the model what is wrong with it. Use the output as a checklist, not a verdict.

None of this requires changing your stack, your tooling, or your security posture. It requires changing the default verb. From "write me" to "read this."

The longer-form argument for treating AI as a reasoning layer rather than a generation layer is in The Birth of Bob, particularly the chapters on what work an LLM is reliable at and where it falls down. The short version: trust it as a reviewer, verify it as a writer.