A Prompt Skill Is Product Logic, Not Just Better Wording

A prompt skill, in this context, is a reusable set of instructions that helps an AI system produce a better image prompt for a specific job.

Not a magic phrase.

Not a longer template.

A small piece of working knowledge that says: given this article, this image target, this model, and this stage of the workflow, create the right kind of prompt or edit instruction.

That distinction became important while working on pm_bot's image workflow.

The feature sits inside a daily publishing process. An article is being written or refined. Then the article needs supporting visuals. Sometimes that means an article image. Sometimes it means a hero image. Each target has its own purpose, framing, and level of polish.

So the job was not to build a clever image generator.

The job was to help the operator move from a draft article to a usable publishing asset without losing control of the work already done.

That is where the prompt skill started to take shape.

At first, too much meaning was packed into one action. Generate, regenerate, refine, and start over were close enough in the interface that the system could function, but it still taught the wrong mental model.

If I already have an image that is mostly right, I am not asking the system to forget it.

I am asking for a controlled change.

If I do not have an image yet, I need a strong first prompt.

If the whole direction is wrong, I may need a clean reset.

Those are not three labels for the same button. They are three different jobs.
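The three jobs can be made explicit in the product's own types instead of being overloaded onto one button. A minimal sketch in TypeScript, assuming illustrative names (this is not pm_bot's actual API):

```typescript
// Three distinct image actions, modeled as a discriminated union.
// Names are hypothetical, chosen to mirror the three jobs above.
type ImageAction =
  | { kind: "draft" }                        // no image yet: build a first prompt
  | { kind: "refine"; baseImageId: string }  // controlled change to the current image
  | { kind: "reset" };                       // deliberate new direction

// Decide which job applies given the current state.
function nextAction(
  currentImageId: string | null,
  wantsNewDirection: boolean,
): ImageAction {
  if (currentImageId === null) return { kind: "draft" };
  if (wantsNewDirection) return { kind: "reset" };
  return { kind: "refine", baseImageId: currentImageId };
}
```

Because the union is discriminated on `kind`, downstream code is forced to handle refinement and reset differently instead of treating them as one regenerate path.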

The first job is drafting.

Before an image exists, the product should help turn the article, the selected target, and the image model into a useful starting prompt. This is the familiar prompt-engineering part: subject, composition, constraints, mood, visual treatment, and output requirements.

But because this is a product workflow, the prompt cannot be an invisible instruction that disappears into an API call. It needs to be visible. It needs to be editable. It needs to be saved.
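A first-pass prompt built from those inputs might look like the following sketch. The field names and framing strings are assumptions for illustration, not pm_bot's real schema; the point is that the prompt is assembled from the article, the target, and the model, and returned as plain text the operator can see and edit:

```typescript
// Hypothetical inputs for a first-pass prompt (not pm_bot's real schema).
interface DraftInput {
  articleSummary: string;
  target: "article" | "hero"; // each target has its own framing and polish
  model: string;
}

// Assemble a visible, editable starting prompt from the article and target.
function draftPrompt(input: DraftInput): string {
  const framing =
    input.target === "hero"
      ? "wide hero banner, high polish, strong focal point"
      : "inline article illustration, clear subject, modest detail";
  return [
    `Subject: ${input.articleSummary}`,
    `Composition: ${framing}`,
    `Constraints: no text in image, publication-safe`,
    `Output: sized for the ${input.target} slot, model ${input.model}`,
  ].join("\n");
}
```

Returning a string, rather than firing an API call, is what keeps the prompt visible, editable, and savable.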

The second job is refinement.

Once an image exists, the operator is usually not asking for a brand-new concept. They are asking for a narrow change to the current visual. That means the generated image should become the base reference. The system should use the edit path and preserve the useful parts of the image unless the operator asks otherwise.

This changes the prompt skill's job.

It is no longer writing a first-pass generation prompt. It is writing careful edit instructions: keep this, change that, and do not disturb the parts that already work.
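An edit instruction of that shape can be sketched like this, with the existing image as the base reference. The field names are hypothetical:

```typescript
// An edit request references the existing image instead of starting fresh
// (field names are illustrative, not pm_bot's real types).
interface EditRequest {
  baseImageId: string; // the current image becomes the base reference
  keep: string[];      // the parts that already work
  change: string;      // the narrow requested change
}

// Write a careful edit instruction: keep this, change that, disturb nothing else.
function editInstruction(req: EditRequest): string {
  return [
    `Edit the existing image (${req.baseImageId}).`,
    `Keep unchanged: ${req.keep.join(", ")}.`,
    `Change only: ${req.change}.`,
    `Do not disturb any other part of the image.`,
  ].join(" ");
}
```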

The third job is starting over.

Sometimes the image is simply wrong. In that case, a fresh generation from the prompt is the right action. But it should be presented as a reset, not as refinement.

That small distinction creates trust. The operator should know whether the product is preserving the current concept or discarding it before pressing the button.

Once those jobs were separated, the prompt skill became more useful without becoming dramatically more complicated.

The backend could persist prompt drafts by image target. The UI could rehydrate the correct prompt when switching between article and hero images. Refinement could preserve lineage from the existing generated image instead of behaving like a blind overwrite. Fresh regeneration could remain available, but as an explicit concept reset.
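The persistence piece is small: a store keyed by image target, so switching between article and hero images rehydrates the right draft. A minimal in-memory sketch (pm_bot's actual storage layer presumably persists to a backend; this only shows the keying rule):

```typescript
// Persist prompt drafts per image target so switching targets
// rehydrates the correct prompt. In-memory sketch only.
type ImageTarget = "article" | "hero";

class PromptStore {
  private drafts = new Map<ImageTarget, string>();

  save(target: ImageTarget, prompt: string): void {
    this.drafts.set(target, prompt);
  }

  // Rehydrate the saved draft for a target, or empty if none exists yet.
  load(target: ImageTarget): string {
    return this.drafts.get(target) ?? "";
  }
}
```

Keying by target is what stops work on the hero prompt from silently overwriting the article prompt.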

The larger lesson is simple:

Prompt quality is partly a state-management problem.

A better prompt template helps. But it does not solve the product problem by itself. The product has to know what kind of request the operator is making. It has to know what state should be reused. It has to know what state should survive failure. And it has to make those boundaries clear enough that the operator can trust the next action.

That is why the failure rules mattered.

A failed refinement should not destroy the current saved image. A failed prompt draft should not wipe out the existing prompt. A failed generation should leave the latest valid asset intact.
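All three failure rules reduce to one pattern: commit new state only on success, and leave the last valid state untouched on failure. A sketch, where `generate` stands in for the real image call:

```typescript
// The asset state the workflow must protect (illustrative shape).
interface AssetState {
  imageUrl: string | null;
  prompt: string;
}

// Commit the result only when generation succeeds; a failure
// leaves the latest valid asset intact instead of wiping it.
function applyResult(state: AssetState, generate: () => string): AssetState {
  try {
    return { ...state, imageUrl: generate() };
  } catch {
    return state; // failed refinement: current saved image survives
  }
}
```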

If those rules are wrong, the workflow feels fragile even when the generated prompt is impressive.

There was a smaller frontend lesson in the same work. A React effect dependency warning mattered because the prompt textarea was tied to rehydration behavior. If the UI resets prompt state too broadly when the operator switches targets or when data reloads, the product starts to feel unreliable.
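The rehydration rule behind that effect can be stated as a pure decision, separate from the component: overwrite the textarea only when the target actually changed, never on an ordinary data refresh while the operator has unsaved edits. A sketch of the rule, not pm_bot's component:

```typescript
// Decide whether a reload should overwrite the prompt textarea.
// Hypothetical helper illustrating the rehydration rule.
function shouldRehydrate(
  prevTarget: string,
  nextTarget: string,
  operatorHasUnsavedEdits: boolean,
): boolean {
  // Switching targets should load that target's saved draft.
  if (prevTarget !== nextTarget) return true;
  // Same target: never clobber the operator's in-progress edits.
  return !operatorHasUnsavedEdits;
}
```

Keeping this decision pure also makes the React effect's dependency list honest: the effect depends on exactly the values the rule reads.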

The warning was small. The product issue was not.

Prompt UX is not only about the words in the box. It is also about when those words appear, when they persist, and when the product is allowed to change them.

This is the pattern I would keep using when turning prompt skills into product features.

Do not begin by asking how to make the prompt longer.

Begin by asking what job the user is trying to complete.

In this case, the job was to help a daily content workflow produce usable article and hero visuals without losing editable prompt work or existing image progress. Once that was clear, the implementation became clearer too:

Draft before an image exists.

Refine when the current image should be preserved.

Reset only when the operator deliberately wants a new direction.

Better prompting helped.

But the bigger improvement came from giving the prompt skill a proper place in the workflow.

-----------
If you find this content useful, please share it with this link: [https://patrickmichael.co.za/subscribe](https://patrickmichael.co.za/subscribe)
