Excerpts from Best practices for prompt engineering with the OpenAI API | OpenAI Help Center:
How prompt engineering works
Due to the way OpenAI models are trained, there are specific prompt formats that work particularly well and lead to more useful model outputs.
The official prompt engineering guide by OpenAI is usually the best place to start for prompting tips.
Below we present a number of prompt formats we find work well, but feel free to explore other formats that may fit your task better.
Rules of Thumb and Examples
Note: “{text input here}” is a placeholder for actual text/context
1. Use the latest model
For best results, we generally recommend using the latest, most capable models. Newer models tend to be easier to prompt engineer.
2. Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context
Less effective ❌: the instruction buried after or in the middle of the context, with nothing separating the two.
Better ✅: the instruction first, with ### or """ delimiters fencing off the context.
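For illustration, a minimal sketch of the "instruction first, delimited context" pattern using the OpenAI Python SDK; the model name "gpt-4o" is a placeholder and the summarization prompt is just an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = "{text input here}"

# Instruction first, then triple quotes (or ###) to fence off the context.
prompt = (
    "Summarize the text below as a bullet point list of the most important points.\n\n"
    'Text: """\n'
    f"{article}\n"
    '"""'
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever current model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```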
3. Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc
Less effective ❌: a terse, open-ended request that leaves the model to guess at the desired outcome, length, format and style.
Better ✅: a request that spells out the context, the desired outcome, the length, the format and the style.
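A sketch contrasting a vague request with a specific one; the prompts are illustrative and the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# Vague: topic only, nothing about length, style or context.
vague = "Write a poem about OpenAI."

# Specific: topic, context, length and style are all stated.
specific = (
    "Write a short inspiring poem about OpenAI, focusing on the recent "
    "DALL-E product launch (DALL-E is a text-to-image ML model), "
    "in the style of a {famous poet}."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```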
4. Articulate the desired output format through examples
Less effective ❌: describing the desired format only in words.
Show, and tell - the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably.
Better ✅: spelling out the exact output format, with a placeholder for each field.
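A sketch of "show, and tell": the prompt spells out the exact format, which also makes the reply easy to parse; the prompt wording, field names and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

text = "{text input here}"

# Show the exact output format you want back, field by field.
prompt = (
    "Extract the important entities mentioned in the text below. "
    "First extract all company names, then all people names, "
    "then specific topics, and finally general overarching themes.\n\n"
    "Desired format:\n"
    "Company names: <comma_separated_list_of_company_names>\n"
    "People names: <comma_separated_list_of_people_names>\n"
    "Specific topics: <comma_separated_list_of_topics>\n"
    "General themes: <comma_separated_list_of_themes>\n\n"
    f"Text: {text}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)

# Because the format was shown explicitly, the reply splits cleanly into fields.
for line in response.choices[0].message.content.splitlines():
    if ":" in line:
        field, values = line.split(":", 1)
        print(field.strip(), "->", values.strip())
```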
5. Start with zero-shot, then few-shot; if neither of them works, then fine-tune
✅ Zero-shot
✅ Few-shot - provide a couple of examples
✅ Fine-tune: see the fine-tuning best practices guide.
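A sketch showing a zero-shot prompt and a few-shot version of the same keyword-extraction task; the example texts and model name are illustrative. Fine-tuning only becomes worth considering if neither variant gives acceptable results:

```python
from openai import OpenAI

client = OpenAI()

text = "{text input here}"

# Zero-shot: just ask.
zero_shot = f"Extract keywords from the below text.\n\nText: {text}\n\nKeywords:"

# Few-shot: show a couple of worked examples first, then the real input.
few_shot = (
    "Extract keywords from the corresponding texts below.\n\n"
    "Text 1: Stripe provides APIs that web developers can use to integrate "
    "payment processing into their websites and mobile applications.\n"
    "Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications\n"
    "##\n"
    "Text 2: OpenAI has trained cutting-edge language models that are very good "
    "at understanding and generating text.\n"
    "Keywords 2: OpenAI, language models, text understanding, text generation\n"
    "##\n"
    f"Text 3: {text}\n"
    "Keywords 3:"
)

for prompt in (zero_shot, few_shot):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```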
6. Reduce “fluffy” and imprecise descriptions
Less effective ❌: vague qualifiers such as “fairly short, a few sentences only, and not too much more.”
Better ✅: a concrete constraint such as “use a 3 to 5 sentence paragraph to describe this product.”
7. Instead of just saying what not to do, say what to do instead
Less effective ❌: a pile of prohibitions (“do not ask for the username or password, do not repeat yourself, …”).
Better ✅: a positive instruction describing the behaviour you want instead, e.g. directing the model to point the user to a help article rather than asking for credentials.
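A sketch of the "say what to do instead" pattern for a support-agent prompt; the wording, the help URL and the model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Positive instruction: state what the model should do instead of only listing prohibitions.
system_prompt = (
    "The following is a conversation between an Agent and a Customer. "
    "The agent will attempt to diagnose the problem and suggest a solution, "
    "while refraining from asking any questions related to PII. "
    "Instead of asking for PII such as a username or password, "
    "refer the user to the help article www.samplewebsite.com/help/faq."  # placeholder URL
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I can't log in to my account."},
    ],
)
print(response.choices[0].message.content)
```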
8. Code Generation Specific - Use “leading words” to nudge the model toward a particular pattern
Less effective ❌: a bare comment describing the function you want.
In the code example below, adding “import” hints to the model that it should start writing in Python. (Similarly, “SELECT” is a good hint for the start of a SQL statement.)
Better ✅: the same comment followed by the leading word “import”.
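A sketch of the leading-word trick: the prompt ends with “import”, nudging the model to continue with Python; the prompt and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Ending the prompt with a leading word ("import") steers the completion toward Python code.
prompt = (
    "# Write a simple python function that\n"
    "# 1. Asks me for a number in miles\n"
    "# 2. Converts miles to kilometers\n"
    "\n"
    "import"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```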
9. Use the Generate Anything feature
Developers can use the ‘Generate Anything’ feature to describe a task or expected natural language output and receive a tailored prompt.
Parameters
Generally, we find that model and temperature are the most commonly used parameters to alter the model output.
For other parameter descriptions see the API reference.
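For example, a minimal request that sets both parameters; the model name is a placeholder, and lower temperature values give more focused, repeatable output while higher values give more varied output:

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "Summarize the text below as a bullet point list of the most important points.\n\n"
    'Text: """\n{text input here}\n"""'
)

response = client.chat.completions.create(
    model="gpt-4o",   # model: which model produces the completion (placeholder name)
    temperature=0.2,  # temperature: 0-2; lower = more deterministic, higher = more varied
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```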
Related Articles
[Controlling the length of OpenAI model responses](https://help.openai.com/en/articles/5072518-controlling-the-length-of-openai-model-responses)
[Doing math with OpenAI models](https://help.openai.com/en/articles/6681258-doing-math-with-openai-models)
How do I use the OpenAI API with text in different languages?
[Function Calling in the OpenAI API](https://help.openai.com/en/articles/8555517-function-calling-in-the-openai-api)
[Prompt engineering best practices for ChatGPT](https://help.openai.com/en/articles/10032626-prompt-engineering-best-practices-for-chatgpt)