The Job Your Prompt Is Trying to Do

I’ve been thinking about why most people struggle to get useful output from AI.

It’s not that the models are bad. It’s not that prompting is hard. It’s that we’re asking the wrong question.

I see a lot of people treating AI like a search engine when they should be treating it like a hired hand.

There’s a concept from Harvard professor Clayton Christensen (and a stellar book by Tony Ulwick) called Jobs-to-be-Done (JTBD). The core idea is simple: people don’t buy products. They hire them to make progress in a specific situation.

The classic example is a milkshake. People weren’t buying milkshakes because they wanted dairy. They were hiring them to fill a boring commute, keep one hand free, and stay full until lunch.

Once you see it, you can’t unsee it. Every product, every tool, every decision is actually a job being filled.

And that includes your prompts.

Most people prompt like this:

“Summarize this document.”

That’s not a job. That’s a task. And tasks are vague.

The LLM doesn’t know:

  • Why you need this summary
  • Who it’s for
  • What you’re going to do with it
  • What “done” looks like

So it gives you a generic summary. Five bullet points. Competent. Useless.

Now try this:

“I’m about to enter a board meeting where I need to defend our Q4 spend. Summarize this financial doc and pull out the three biggest cost overruns that the CFO is likely to challenge me on, with one-sentence explanations I can use to justify each.”

See the difference?

You just hired the AI to do a real job. And when you hire it properly, it delivers.

If you want to stop wasting time on mediocre AI outputs, use this structure:

When [situation], I want to [motivation], so I can [expected outcome].

Example:

“When I’m reviewing a 50-page contract before signing, I want to identify hidden liabilities and ‘gotcha’ clauses, so I can decide whether to sign or send it back for redlines.”

This gives the AI three things:

  1. Situational context – The “semantic neighborhood” it should pull from
  2. Intent – What you’re actually trying to accomplish
  3. Success criteria – What “done” looks like

It’s not magic. It’s just clarity.
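
If you want to make the structure mechanical, here’s a minimal sketch of the template as a tiny Python helper. The function name `jtbd_prompt` and its fields are my own shorthand for illustration, not a standard API:

```python
# A minimal sketch of the JTBD prompt structure as a reusable template.
# The function name and its fields are illustrative, not a standard API.

def jtbd_prompt(situation: str, motivation: str, outcome: str) -> str:
    """Build a prompt that 'hires' the model for a specific job."""
    return (
        f"When {situation}, "
        f"I want to {motivation}, "
        f"so I can {outcome}."
    )

prompt = jtbd_prompt(
    situation="I'm reviewing a 50-page contract before signing",
    motivation="identify hidden liabilities and 'gotcha' clauses",
    outcome="decide whether to sign or send it back for redlines",
)
print(prompt)
# When I'm reviewing a 50-page contract before signing, I want to
# identify hidden liabilities and 'gotcha' clauses, so I can decide
# whether to sign or send it back for redlines.
```

The point isn’t the code. It’s that the template forces you to fill in all three slots before you hit send.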

We’re at an inflection point with AI-assisted work. Practices like vibe coding, agentic workflows, and multi-step reasoning are shifting the bottleneck from “can we build it?” to “should we build it this way?”

The people who win in this environment aren’t the ones who know the most tricks. They’re the ones who can articulate intent clearly and evaluate whether the output actually solves the problem. At least for now.

That’s a skill. And like software development, it’s a skill that’s becoming essential across disciplines – not just for “prompt engineers” or AI specialists.

This isn’t a silver bullet. JTBD prompting won’t fix a bad model, and it won’t make up for unclear thinking on your part.

But it will force you to ask better questions. It will make you think harder about what you’re actually trying to accomplish. And more often than not, that clarity alone will get you 80% of the way there.

The other 20%? That’s judgment. Experience. The kind of thing that comes from having actually done the work and watched things break.

I’m experimenting with treating every AI interaction as a “hire.” Not just prompts, but entire workflows – delegating authority, setting checkpoints, defining “done.”
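
For concreteness, here’s a rough sketch of what one of those “hires” might look like as a data structure. Every name in it (Hire, authority, checkpoints, done_when) is hypothetical – just a way to make the idea tangible, not a framework I’m shipping:

```python
from dataclasses import dataclass

# Hypothetical sketch: treating an AI workflow as a "hire" with explicit
# authority, checkpoints, and a definition of done. All names illustrative.

@dataclass
class Hire:
    job: str                # the JTBD statement: when / want / so I can
    authority: list[str]    # what the AI may do without asking
    checkpoints: list[str]  # where a human reviews before continuing
    done_when: list[str]    # explicit success criteria

contract_review = Hire(
    job=("When I'm reviewing a 50-page contract before signing, "
         "I want to identify hidden liabilities, "
         "so I can decide whether to sign or send it back for redlines."),
    authority=["read the document", "flag risky clauses",
               "draft redline comments"],
    checkpoints=["after the first pass of flagged clauses"],
    done_when=["every flagged clause has a one-sentence risk summary"],
)
```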

It’s early. I suspect JTBD won’t be the final form of how we work with LLMs – but I’m convinced that the people who see this as a skill to layer into their existing expertise will be the ones who shape what comes next.
