Best practices for prompt engineering with the OpenAI API
The official prompt engineering guide by OpenAI is usually the best place to start for
prompting tips.
Below we present a number of prompt formats we find work well, but feel free to explore different formats that may better fit your task.
Less effective ❌ :
Summarize the text below as a bullet point list of the most important points.

{text input here}
Better ✅ :
Summarize the text below as a bullet point list of the most important points.
Text: """
{text input here}
"""
Less effective ❌ :
Write a poem about OpenAI.
Better ✅ :
Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model) in the style of a {famous poet}
Less effective ❌ :
Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes.
Text: {text}
Show, and tell - the models respond better when shown specific format requirements.
This also makes it easier to programmatically parse out multiple outputs reliably.
Better ✅ :
Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes.
Desired format:
Company names: <comma_separated_list_of_company_names>
People names: -||-
Specific topics: -||-
General themes: -||-
Text: {text}
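Because the prompt pins down the exact output format, the reply can be parsed programmatically. The helper below is an illustrative sketch, not part of the article, and assumes the model reproduced the requested format exactly.

```python
# Illustrative sketch (not from the article): split each "Label: a, b, c" line of
# the model's reply into a dict of lists, assuming the model followed the format.
def parse_entities(reply: str) -> dict[str, list[str]]:
    parsed = {}
    for line in reply.splitlines():
        if ":" not in line:
            continue
        label, _, values = line.partition(":")
        parsed[label.strip()] = [v.strip() for v in values.split(",") if v.strip()]
    return parsed

reply = (
    "Company names: Stripe, OpenAI\n"
    "People names: -\n"
    "Specific topics: payment processing, language models\n"
    "General themes: developer tools"
)
print(parse_entities(reply)["Company names"])  # ['Stripe', 'OpenAI']
```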
Zero-shot ✅ :
Extract keywords from the below text.

Text: {text}
Keywords:
Few-shot ✅ :
Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
##
Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.
Keywords 2: OpenAI, language models, text processing, API.
##
Text 3: {text}
Keywords 3:
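One way to assemble that few-shot prompt in code is sketched below. The example texts are shortened, and using "##" as a stop sequence (so the model does not start a fourth example) is an assumption layered on the format above.

```python
# Sketch: build the few-shot keyword prompt and stop generation at the "##"
# separator so the model doesn't invent a "Text 4". The model name is an assumption.
from openai import OpenAI

client = OpenAI()

examples = [
    ("Stripe provides APIs that web developers can use to integrate payment "
     "processing into their websites and mobile applications.",
     "Stripe, payment processing, APIs, web developers, websites, mobile applications"),
    ("OpenAI has trained cutting-edge language models that are very good at "
     "understanding and generating text.",
     "OpenAI, language models, text processing, API"),
]

new_text = "..."  # the text you want keywords for

parts = ["Extract keywords from the corresponding texts below.\n"]
for i, (text, keywords) in enumerate(examples, start=1):
    parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}\n##")
parts.append(f"Text {len(examples) + 1}: {new_text}\nKeywords {len(examples) + 1}:")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{"role": "user", "content": "\n".join(parts)}],
    stop=["##"],          # don't let the model continue into another example
)
print(response.choices[0].message.content)
```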
Reduce "fluffy" and imprecise descriptions.

Less effective ❌ :
The description for this product should be fairly short, a few sentences only, and not too much more.

Better ✅ :
Use a 3 to 5 sentence paragraph to describe this product.
Instead of just saying what not to do, say what to do instead.

Less effective ❌ :
The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.

Better ✅ :
The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq.
In the code example below, adding "import" hints to the model that it should start writing in Python. (Similarly, "SELECT" is a good hint for the start of a SQL statement.)

Less effective ❌ :
# Write a simple python function that
# 1. Ask me for a number in mile
# 2. It converts miles to kilometers

Better ✅ :
# Write a simple python function that
# 1. Ask me for a number in mile
# 2. It converts miles to kilometers

import
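A rough sketch of how that leading word might be used in practice, with the legacy completions endpoint, where the model continues the prompt text directly (the model name is an assumption):

```python
# Sketch: end the prompt with "import" so the model continues with Python code.
# Uses the legacy completions endpoint; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    "# Write a simple python function that\n"
    "# 1. Ask me for a number in mile\n"
    "# 2. It converts miles to kilometers\n\n"
    "import"
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumption: a completions-style model
    prompt=prompt,
    max_tokens=200,
)
print(prompt + response.choices[0].text)
```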
Parameters
Generally, we find that model and temperature are the most commonly used parameters to
alter the model output.
1. model - Higher performance models are generally more expensive and may have
higher latency.
2. temperature - A measure of how often the model outputs a less likely token. The
higher the temperature, the more random (and usually creative) the output. This,
however, is not the same as "truthfulness". For most factual use cases, such as data
extraction and truthful Q&A, a temperature of 0 is best.
3. max_tokens (maximum length) - Does not control the length of the output, but sets a
hard cutoff for token generation. Ideally you won't hit this limit often, as the
model will stop either when it thinks it's finished or when it hits a stop sequence you
defined.
4. stop (stop sequences) - A set of characters (tokens) that, when generated, will cause
the text generation to stop.
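For reference, here is how those four parameters might appear together in one request; the specific values and model name are illustrative assumptions.

```python
# Sketch: the four parameters discussed above on a single chat completion call.
# The values and model name are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # 1. model: higher-performance models cost more and may add latency
    temperature=0,         # 2. temperature: 0 for factual tasks such as extraction or Q&A
    max_tokens=256,        # 3. max_tokens: hard cutoff on generated tokens, not a target length
    stop=["##"],           # 4. stop: generation halts if this sequence would be produced
    messages=[{"role": "user", "content": "Summarize these prompting best practices in three bullets."}],
)
print(response.choices[0].message.content)
```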