13 Comments
Matthew David Hamilton:

Love this, especially the name, genre, role, job, task, victory condition framework. I have found that when I give Claude an overview of the project as well as a specific description of the task I want it to do, I generally get high quality results. These criteria both confirm my experience and give me a more thorough and nuanced method I was using to prompt well!

Ceredwyn Alexander:

I have been very pleased to hear that similar methods are producing similar results! We'll be talking more about the useful discoveries we've made in prompting and how to leverage them to best effect.

Guillermo Cerceau:

Excellent, Bryan. Your project looks very interesting. Your explanation of LLMs is one of the best I have found.

Barbara:

This is very interesting and makes sense to me. Looking forward to learning more!

Andy Havens:

This is interesting and I am interested. One thing I always add to my initial prompts is the statement, "Ask me questions that will help clarify this task for you."

The other thing I've had some luck with is assigning a role and personality traits going in, with some notes as to what the consequences will be for not sticking to the agreed-on parameters. For example, sometimes I will say: you are a PhD student working for a successful professor in the area of XYZ, and the research you do for him will be checked by another research assistant. You are very interested in getting to do more complex and thoughtful work for this professor, so you are extremely diligent about returning only information that can be directly checked for accuracy in the sources you provide. Etc.

When you provide some narrative stakes, it seems to take not making shit up more seriously.

Ceredwyn Alexander:

In terms of not making shit up, the narrative approach really does help. Going further, using a mix-of-agents approach, where each agent is bound to a single purpose and job, tends to work very well.

Casting the AI in an assistant role vs. an expert role also reduces the urge to make things up.

One principle I follow is "If I can predict how a character is going to act, so can the LLM." What this means in practice is that I give the agent a narrative in keeping with the job I want it to do.

Sifu Dai:

Spot on with narrative architecture, Brian. Kudos.

Mike Kentz:

This resonates — I've been experimenting with something similar, using fiction writing elements (character, stakes, constraints) to shape how AI behaves across an entire interaction, not just the opening prompt. Wrote about it here [https://nickpotkalitsky.substack.com/p/the-art-of-conversational-authoring]. Curious whether you've found that the narrative framing holds up over longer conversations or if the AI drifts back to default behavior.

Ceredwyn Alexander:

I find that the narrative framing largely prevents drift back to default behavior over those long conversations. I also include commands to reread my prompt every five turns.

In fact, once the narrative is locked in, it often becomes difficult to return that iteration to baseline. It's something we have taken to calling "narrative entrapment," where any interaction with the user is treated as part of the narrative conversation.

Michael David Cobb Bowen:

I'm working precisely on this problem. There really is no consensus on best practices for agentic programming, and what's going on is that the frontier folks are building as much as they can *inside* their products, while the mass of us coders are finding out the limitations of their models. I'm working on an ideation project which is data-centric, following General Mattis' leadership principles focused on *intent* as a framework for my own agentic programming. What I hope happens is that we builders can create high-quality controls that can use basic open-source LMs to achieve predictable results without breaking our token budgets.

I will assume that analytic, logical and coding supremacy will be achieved by many models. What they don't have is operational discipline. That's the art.

Ceredwyn Alexander:

General Mattis' leadership principles are in fact similar to the narrative framework. One difference is that the agents I have been building are very narrow in purpose. Their constraints are created by the narrative itself.

We'll be talking about the discoveries we've made both in mix-of-agent systems and in the use of retrieval-augmented generation with narrative programming as this series continues.

mark d leBlanc:

Oh, very good, Brian (and Ceredwyn); I'm not on the edge of agent-building, but reading this resonated with my nagging feeling of a need for 'more' -- I will share this initial post with my (ugrad) students in my AI_for_Everyone course; well summarized. I look forward to hearing more.