ChatGPT


Generative language models like ChatGPT present a fascinating and valuable opportunity to upgrade your daily work and personal tasks. Whether you're using them to quickly draft emails or to learn about a topic you're interested in, it's becoming clear that knowing how to ask an AI model for what you want is a very useful skill to possess. The better your prompt-writing skills, the better the output will be - and the more effective and efficient you'll be at using tools like ChatGPT.

Unless you're a developer messing about with the APIs, the only real interface you have to models like ChatGPT is the prompts you write. This guide therefore aims to give you the basic guidelines for crafting effective prompts that help you achieve your goals.

Note that while this guide has been written with ChatGPT in mind, the guidelines and ideas we present here are very likely to be equally useful when working with other generative language models.

General prompt crafting guidelines

Regardless of what you're trying to do with ChatGPT, there are some general guidelines you should stick to when coming up with prompts. They're based on observations of how models like ChatGPT tend to behave under normal conditions.


So unless you have experience with the specific type of prompt you're trying to write, we recommend you follow these guidelines. Once you feel confident that you know what you're doing, or that you've exhausted these guidelines, then feel free to experiment.

Be specific and descriptive
Chances are that whenever you're interacting with a model like ChatGPT, you're doing so with a goal in mind. There's something you're trying to achieve, and that should be made clear in your prompt.

It might seem obvious, but remember that each time you interact with a model, it only has the context you provide in your prompt. While you certainly know why you're asking what you're asking, the model doesn't. Details about the context, tone, constraints, and so on are all things you should include if they're relevant. Anything you don't specify is a hole you're leaving the model to fill in with whatever happens to fit.
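
If you do happen to be scripting your prompts rather than typing them into the chat window, here's a minimal sketch of the same idea using OpenAI's Python SDK. We're assuming the openai package (v1 or later), an OPENAI_API_KEY environment variable, and the "gpt-4o-mini" model name - swap in whichever model you actually use, and note that the client's name, the delay, and the other details in the prompt are purely illustrative. The same specific prompt works verbatim in the ChatGPT chat box.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt leaves tone, audience, and length for the model to guess.
vague_prompt = "Write an email about the delay."

# A specific prompt fills those holes in yourself; this is the one we send.
specific_prompt = (
    "Write a short, apologetic email (under 150 words) to a client named Alex, "
    "explaining that the website redesign will be delivered one week late due to a "
    "vendor issue, and offering a 15-minute call on Friday to walk through the new timeline."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever chat model you have access to
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```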

Break down tasks into chunks
While it may be tempting to come up with one massive prompt that produces the exact output you're looking for, it's often better to break complex questions or instructions down into chunks.

Generative models like ChatGPT are able to 'remember' some amount of context from the current interaction, so you can build up to an answer or an output using multiple, simpler prompts. Some research also suggests this approach can produce more accurate answers, as it appears to prime the model to respond in more cohesive ways.
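
For readers working through the API rather than the chat interface, the same chunking idea looks roughly like the sketch below. One caveat: the API itself doesn't remember anything between calls, so you carry the 'memory' forward by re-sending the earlier messages yourself (the ChatGPT interface does this for you). The model name and the example prompts are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: use whichever chat model you have access to

# Step 1: a simple prompt that gets the structure right first.
messages = [{"role": "user", "content": "List the five sections a product launch email should have."}]
first = client.chat.completions.create(model=MODEL, messages=messages)

# Carry the model's answer forward so the next step can build on it.
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Step 2: build on the outline instead of cramming everything into one giant prompt.
messages.append({"role": "user", "content": "Now draft just the subject line and the opening paragraph, in a friendly tone."})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```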

Model the output with examples
If you're trying to get the model to reply in a specific format, it can often be useful to provide a couple of examples with their respective answers. What you're doing is 'showing' the model what you're trying to achieve.

Again, research has been published on how this approach can improve the accuracy of certain types of prompts. It also has the added benefit that it's often far easier to simply model the output you're expecting than to describe it in words.
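
Here's a rough sketch of what 'showing' the model looks like through the API, using a couple of made-up input/output pairs before the real request (a pattern often called few-shot prompting). The format, the example notes, and the model name are all our own inventions - adjust them to your task. In the chat interface you can paste the same examples directly into your message.

```python
from openai import OpenAI

client = OpenAI()

# Two worked examples demonstrate the exact format we want,
# then the real input comes last.
messages = [
    {"role": "system", "content": "You turn raw meeting notes into one action item in the form: OWNER - TASK - DUE DATE."},
    {"role": "user", "content": "Notes: Sam said he'd send the budget draft before Friday."},
    {"role": "assistant", "content": "Sam - Send budget draft - Friday"},
    {"role": "user", "content": "Notes: Priya will book the venue once headcount is confirmed next week."},
    {"role": "assistant", "content": "Priya - Book venue - Next week"},
    {"role": "user", "content": "Notes: Lee agreed to update the onboarding doc by end of month."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected shape: "Lee - Update onboarding doc - End of month"
```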

Refine and experiment
Remember that prompts are not a one-shot type of deal. Don't get hung up on crafting the perfect prompt the first time around. Instead focus on getting the general idea down and testing that quickly.

It's far easier to refine and experiment with the parts of the prompt once you get your first output and see what tweaks you can make to improve it. Prompt engineering is a bit like playing tennis. Each time you serve the ball to your opponent you learn a little more about how they respond and you can develop strategies to exploit that to your benefit.

Crafting prompts based on intent

Now that we've gone over some general guidelines, let's move on to specific strategies you can apply based on the type of output you're trying to get.

Types of prompt intent

Broadly speaking, there are four types of goals or prompt intents: learning something, having a conversation, transforming data, and generating something. In the real world these blend together, and one type of intent can transition into another from one prompt to the next. Still, there's a clear benefit to understanding which approaches work best when trying to achieve a specific output.

The following sections provide guidelines for optimizing each type of intent. Keep in mind that these are only guidelines based on what seems to work best so far. You should feel free to experiment by taking approaches meant for one type of intent and using them for another.

Learning something

These approaches are best when you're using a model like ChatGPT to learn something. Thanks to the massive amount of data used to train these models, it's very likely the model will have some knowledge of the topic you're interested in.

Combine that knowledge with the ability to present it in multiple ways and at different levels of understanding and you essentially have an on-demand tutor for just about any topic.

Start broad then zoom-in
If you're learning about a topic that is completely new to you, start broad with your questions to get a general idea of the 'shape' of a topic and then dig deeper.

Even if the question you have is in an area you feel somewhat comfortable in, it's useful to zoom out for a second and make sure your question isn't based on an assumption that might be incorrect.

When learning something new, you're trying to fit that new information in with all the other knowledge you already possess. By zooming out and trying to understand how this new topic is related to other topics you're familiar with, you'll have an easier time assimilating it.

Some prompts for quickly building this scaffolding are:

"Give me a high-level overview of [topic] and explain where it fits within [broader field]."
"What are the main sub-areas of [topic] and how do they relate to each other?"
"Explain [topic] to me as if I were a complete beginner."

These and other prompts are available in the template we've provided with this guide - be sure to save it to your Workflowy account for easy access. And if you don't already have a Workflowy account, sign up - it's free.

Ask follow-up questions
You'll usually need to ask several follow-up questions when exploring a new topic. A large part of that comes down to the fact that you're charting new territory, so you're still finding your way around the new information. You're likely not sure yet what's important and what's not.

Good follow-up questions turn vague or abstract ideas into something more concrete. Each piece of information about a new topic or idea is like a single photograph of an object you've never seen before. The more photos of the object you have, taken from different angles and distances, the clearer the picture of it will be in your mind. This is what you're trying to achieve by asking follow-up questions.

Some prompts to help you do that are:

"Can you give me a concrete, everyday example of [concept]?"
"How is [concept] different from [related concept]?"
"What are the most common misconceptions about [concept]?"


Having a conversation

Models like ChatGPT let users interact with them via a chat-like interface, so it's natural to fall into this style of communication regardless of your actual intent. Whether you're chatting with the model out of curiosity or trying to learn or achieve something specific, there are some things you should keep in mind that will help you be more effective in your interactions.

Make use of personas
This is by far one of the most common ways to use models like ChatGPT. The idea is that by starting your interaction with a prompt that sets the model's point of view or frame of reference, you gain more control over how it will respond.

In practical terms this means you can ask the model to pretend to be an engineer, a writer, a productivity coach, and so on. Then when you chat, the model will form its responses in accordance with the point-of-view you've asked it to take.

This can be a useful approach for exploring topics and ideas when you're not sure what sort of questions to ask. This is also very useful when you're trying to get the model to produce some output in a very specific format like a programming language.

Here are some examples of personas you can use:

"Act as a senior software engineer and review the following code for readability and bugs."
"Act as a productivity coach and help me plan a realistic week around these commitments."
"Act as an experienced editor and critique the structure and tone of this article."

If you'd like a much larger list of over 140 persona prompts, check out our ChatGPT prompts template.
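
If you're using the API, a persona is typically set with a 'system' message that sits in front of the conversation; in the ChatGPT interface, simply make the persona the first thing you type. The sketch below is one possible setup - the coach persona, the wording, and the model name are illustrative assumptions, not a prescribed recipe.

```python
from openai import OpenAI

client = OpenAI()

# The system message sets the persona; every later reply is shaped by it.
messages = [
    {"role": "system", "content": (
        "You are a patient productivity coach. Keep answers practical, "
        "ask one clarifying question when my goal is vague, and avoid jargon."
    )},
    {"role": "user", "content": "I keep starting my week with no plan. Where should I begin?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```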

Provide context
Another useful way to shape your conversations is by providing context. You can ask the model to take certain background or constraints into consideration. In this case you're not explicitly telling the model how to respond, but rather drawing some boundaries around the conversation.

The idea is that by mentioning relevant topics, ideas, people, or concepts, you prime the model to incorporate them into the discussion. So while you might not want ChatGPT to reply like a therapist, you might still want it to consider commonly used cognitive behavioral therapy techniques in a discussion about issues you've been dealing with.

Some ways to include context in your conversations:

"Keep in mind that I'm a complete beginner with no statistics background."
"This is for a small nonprofit with a very limited budget."
"Please draw on commonly used cognitive behavioral therapy techniques in your answers."
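
As a rough sketch of how that context might be packaged in an API call, the background goes in before the actual question - exactly as you would do in the chat window. The scenario and wording below are invented for illustration, and the model name is an assumption.

```python
from openai import OpenAI

client = OpenAI()

# Background context primes the model without asking it to adopt a full persona.
prompt = (
    "Context: I've been working through cognitive behavioral therapy exercises focused on "
    "reframing negative thoughts. I'm not asking you to act as a therapist.\n\n"
    "With that in mind, help me think through why I keep putting off a difficult "
    "conversation with my manager."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whichever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```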

Transforming data

Another common use of these models is to take some input and transform it somehow. This could be summarizing some text, pointing out the key ideas, converting natural text into code, or rewriting text in another style or language, to name a few examples.

In this use case, it's likely you already have a good idea of the type of output you're looking for. You're not asking the model to get too creative, but simply to take the information you provide and modify or translate it. Two things to keep in mind here are to set constraints and to clearly state your desired result.

Make use of constraints
Specifying constraints is a good way to shape the model's output. These can cover aspects like the maximum length of the output, what language it should be in, what keywords it should include, what it should not do, and what information it should ignore.

Constraints provide an additional layer of shaping when prompting models. After you've provided all the context and basically set the scene that the model will use to generate its output, adding constraints cuts off paths you don't want the model to consider.

One way to make this easier on yourself is to clearly label each part of your request. Just as you would when submitting a request to a person, you can label the different parts and simply note which ones are constraints that should be respected.

Some examples of constraints you might use:

"Keep the summary under 100 words."
"Reply in Spanish."
"Make sure the description includes the keywords 'remote' and 'part-time'."
"Don't suggest any paid tools."
"Ignore the author biography at the end of the text."
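
Putting the labelling idea and the constraints together, a transformation request might be structured like the sketch below. The task, the constraints, and the model name are examples we've made up, and the same labelled text can be pasted straight into the chat window.

```python
from openai import OpenAI

client = OpenAI()

article = "..."  # paste the text you want transformed here

# Labelled sections keep the task, the constraints, and the input clearly separated.
prompt = f"""Task: Summarize the article below for a busy executive.

Constraints:
- Maximum 3 bullet points, each under 20 words.
- Plain English, no marketing language.
- Ignore the author biography at the end.

Article:
{article}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whichever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```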

Be clear about your end goal
Describing the desired output as part of the prompt is another useful technique. This is a simple way to point the model in the direction you want, and is an alternative to simply listing the constraints.

Prompt engineering is as much an art as it is a science, so while this technique seems similar to using constraints, it's worth remembering that having more tools in your toolkit when crafting prompts will give you more possibilities to explore.

In this approach, instead of listing out the constraints, you use natural language to describe the desired output.

Some examples of that might be:

"I'd like a short, friendly summary I can paste into a chat update for my team."
"Turn this into a polite, two-paragraph email that a non-technical client can follow."
"Give me the key points as a numbered list I can paste into my notes, with one action per line."


Generating something

Using models to generate some output is another one of their main uses: writing code, articles, poems, emails, diet plans, step-by-step problem solving, advice, songs and much more. This is possibly the use case with the most immediate potential for most users of ChatGPT. Chatting with the model and using it to learn are obviously valuable, but for most people, being able to produce just about any type of text or code in seconds is a game-changer in terms of productivity.

As we mentioned at the beginning of this guide, all of these guidelines can be used regardless of intent; we've simply grouped them where they seem to make the most sense. So things like providing context, setting constraints, and asking follow-ups are all worth doing when using a model to generate something.

Be specific
When using a model to generate something, rather than starting broad and then getting specific, it's best to clearly describe the output you're looking for from the beginning. Consider which aspects are worth mentioning to the model: the tone it should use, the scope it should cover, and any specific requirements about the format it should follow.

When describing all these elements, it's best to stick to natural language and avoid using unnecessarily complex jargon. The more straightforward your description of the task, the easier it will be for the model to parse and produce the desired output.

And some examples of broad vs specific prompts:

Broad: "Write a poem about autumn."
Specific: "Write a four-stanza rhyming poem about the first cold morning of autumn, in a warm, nostalgic tone."

Broad: "Write an email to my team."
Specific: "Write a short, upbeat email to my team announcing that the product launch has moved to next Tuesday, thanking them for the extra effort and listing the two things that still need to be finished."


Iterate on the output
One of the reasons it's important not to worry so much about making your prompt perfect the first time around is that tools like ChatGPT are great for iterating. Once you have some output and can review it, you're ready to either tweak the original prompt or build on the output you got.

In other words, it's a lot easier to work from an output that's 80% of the way to what you're looking for than to start from 0%. This tip is less about a specific technique and more about a mental shift. Once you have some output, you can apply any of the previous techniques to refine it further.

Even if the output misses the mark completely, it's often enough to explain what went wrong to steer the model toward a more suitable output.
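
For the API-inclined, iterating is just a matter of appending the model's draft and your correction to the conversation and asking again - the sketch below shows the shape of it (the bio request, the feedback, and the model name are invented for illustration). In the chat interface, you simply reply with the correction.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: use whichever chat model you have access to

# First pass: get a rough draft quickly.
messages = [{"role": "user", "content": "Draft a 100-word bio for a freelance data analyst."}]
draft = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Second pass: point out what missed the mark and let the model revise its own output.
messages.append({"role": "user", "content": (
    "Too formal, and you left out the five years of healthcare experience. "
    "Make it warmer and mention that."
)})
revised = client.chat.completions.create(model=MODEL, messages=messages)
print(revised.choices[0].message.content)
```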

Conclusion

We hope this guide has added a couple more tools to your toolbox when it comes to crafting good ChatGPT prompts. More large language models are likely to appear soon, so it's worth exploring these techniques and getting comfortable writing prompts now.

Far from a passing fad, these tools are here to stay. Those who take the time to learn how to use them to their fullest potential will have a significant advantage over those who don't.

And if you're curious about learning more advanced prompting techniques, we have a template you can use to get started.
