Prompt Engineering for Humans

31 Mar 2026 in Business

A while back, I was working on trello-cli, a CLI tool I maintain. I've been slowly adding commands as time allows, but this time I wanted to try out this new-fangled "agentic development" thing that everyone is talking about.

So I opened up my terminal, asked it to "Add support for showing card details" and let it churn away. Once it was done, I reviewed the output and, honestly, I was disappointed. It added a command that showed the board title, description, and labels. It was missing the card's due date, which is the most important information of all. Checklist support was nowhere to be seen.

I went to close my laptop, safe in the knowledge that AI wasn't coming for my job any time soon. Then I realised: the agent had done what I asked. It had added support for showing card details. I didn't specify which details I wanted to see.

So I gave it another go, this time assuming it didn't know anything about my intent:

Add a card:show command that accepts --board and --list as flags. Output the card title, description, due date, any attached checklists and labels. If any of these values do not exist, skip the header for that section. If checklists have more than 10 items, add a "+ X more" entry. Add an --all-details flag to show all of the information.

This time the output was fantastic. It was a command I could use immediately. What would have easily taken me an hour was finished in minutes. Nothing about the model changed. The only thing that changed was the context.
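The second prompt is specific enough that you could sketch its display rules directly in code. Here's a rough illustration (a hypothetical Python rendering; the `card` dict shape and the function name are mine for the example, not trello-cli's actual code):

```python
# Illustrative sketch of the card:show output rules from the prompt above.
# The card data structure here is hypothetical.

def render_card(card, all_details=False):
    lines = [card["title"]]

    # "If any of these values do not exist, skip the header for that section."
    if card.get("description"):
        lines += ["", "Description:", card["description"]]
    if card.get("due_date"):
        lines += ["", f"Due: {card['due_date']}"]

    for checklist in card.get("checklists", []):
        lines += ["", f"Checklist: {checklist['name']}"]
        items = checklist["items"]
        # "If checklists have more than 10 items, add a '+ X more' entry",
        # unless --all-details asked for everything.
        shown = items if all_details else items[:10]
        lines += [f"- {item}" for item in shown]
        if not all_details and len(items) > 10:
            lines.append(f"+ {len(items) - 10} more")

    if card.get("labels"):
        lines += ["", "Labels: " + ", ".join(card["labels"])]

    return "\n".join(lines)
```

Every branch in that sketch traces back to a sentence in the prompt. That's the point: the agent didn't have to guess any of it.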

The fastest way to get bad output from an AI system is to give it a vague prompt. It turns out the same is true for humans, too.

Management = Context

This is exactly what good managers do, or at least what they're supposed to do. Good managers, it turns out, spend a lot of time doing prompt engineering for humans.

Vague prompts produce vague results. From AI. From teams. From organisations.

Management isn't about assigning work. It’s about designing context: goals, constraints, expectations, and validation. When those things are clear, teams move quickly and independently. When they aren’t, people guess.

The one-sentence management problem

Managers often give direction the same way people write bad AI prompts: short, seemingly clear, and wildly underspecified.

Here's something that I said in a documentation project at work:

“All the information needed to achieve a task must be on a single page.”

To me this was a precise requirement. Users must be able to achieve their goal by reading the content on a single page. What I didn't realise is that I had a set of assumptions about what that meant.

Bad prompts to AI fail fast; bad prompts to humans fail expensively.

Reverse-engineering intent from an underspecified prompt takes a lot of time, and the result is usually missing the nuance of what was actually wanted.

The rewrite

In this case the team built something that met the requirement but didn't match my expectations. I could figure out how to achieve a use case from the page, but it was a painful experience: copying arcane configs into various files, where one careless read meant pasting the wrong thing into the wrong file and nothing working.

After some deliberation, I rephrased the requirements:

Users must be able to copy and paste down a page from top to bottom and be successful with their task. They must also have a way to validate that it's working.

The goal didn’t change. The context did.

Now we understand how the user behaves. We know what success looks like. We know how the user verifies the outcome.

Why constraints create autonomy

Managers sometimes hesitate to define constraints. They worry that too many rules will limit creativity. In practice, the opposite happens. Constraints remove ambiguity.

When teams understand the boundaries - what matters, what doesn’t, and how success will be measured - they stop waiting for clarification and start making decisions.

When projects struggle, the instinct is often to blame execution. The team misunderstood the task. The engineer built the wrong thing. The feature didn’t meet expectations.

Sometimes that’s true. But more often the real problem is simpler: the context wasn’t clear.

The context checklist

Before assigning work, ask yourself whether the prompt you’re giving your team includes five things:

The goal
What outcome are we trying to achieve?

The context
Why does this matter?

The constraints
What boundaries exist around time, scope, or resources?

The success criteria
What does good look like?

The validation mechanism
How will we know it works?

When these pieces are clear, the team doesn’t need constant clarification or supervision.

Specify, don't guess

AI didn’t introduce a new problem. It exposed an old one.

We’ve always relied on people to interpret vague instructions, fill in gaps, and guess at intent. That sometimes works on a small team. At scale, it breaks.

Prompt engineering is just a new name for something managers have always been responsible for: making sure the work is understandable before it begins.

The difference now is that there’s no illusion. Machines don’t guess what you meant. They reflect exactly what you said, and you have to deal with the consequences.