Writer has a more in-depth guide on Prompt Engineering that you can check out.
Prompt engineering is the process of creating and fine-tuning the prompts used to generate text from a large language model. This process is crucial for achieving the desired outcome and improving the model's performance.
Here are some best practices for prompt engineering with large language models:
- Define clear and concise prompts: Clearly define the task and the desired outcome, and keep the prompt short and focused. This helps the model stay on track and generate coherent, relevant outputs.
- Example prompt: “Generate a summary of [article title] by [author name]”.
- This prompt clearly defines the task for the model: generate a summary of a specific article. By keeping the prompt focused and concise, the model is less likely to wander off-topic and produce irrelevant outputs.
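A clear, focused prompt like the one above can be built from a template. This is a minimal sketch; the `build_summary_prompt` helper and its placeholder values are illustrative, not part of any particular SDK:

```python
# Hypothetical template for a clear, focused summarization prompt.
# The title and author values below are stand-ins for real article metadata.
SUMMARY_TEMPLATE = "Generate a summary of {title} by {author}."

def build_summary_prompt(title: str, author: str) -> str:
    """Fill the template with the article's title and author."""
    return SUMMARY_TEMPLATE.format(title=title, author=author)

prompt = build_summary_prompt("The Art of Prompting", "A. Writer")
print(prompt)
```

Keeping the task in a fixed template makes it easy to reuse the same concise wording across many articles.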
- Provide relevant context: Give the model the context and background information it needs to understand the prompt and generate accurate outputs.
- Example prompt: Generate an origin story about Santa Claus and make sure to include how Santa Claus met his reindeer.
- This prompt supplies the background information necessary to generate a coherent and accurate output. By mentioning the reindeer, the model can produce a story that aligns with the desired outcome.
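One way to make required context explicit is to append each must-include detail as an instruction. The helper below is a hypothetical sketch, not a library function:

```python
def build_prompt_with_context(task: str, required_details: list[str]) -> str:
    """Append each required detail to the task as an explicit instruction."""
    extras = " ".join(f"Make sure to include {d}." for d in required_details)
    return f"{task} {extras}".strip()

prompt = build_prompt_with_context(
    "Generate an origin story about Santa Claus.",
    ["how Santa Claus met his reindeer"],
)
print(prompt)
```

Listing the required details separately keeps the core task readable while still steering the model toward the context you care about.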
- Use appropriate tone and style: Choose the appropriate tone and style that aligns with the desired output and the target audience.
- Example prompt: Generate a persuasive argument against using plastic straws and use a formal tone.
- This example clearly defines the desired tone and style for the output. By using a formal tone, the model is better equipped to produce a persuasive argument that aligns with the desired outcome and target audience.
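Tone and style instructions can likewise be attached programmatically. A minimal sketch, with a hypothetical helper name:

```python
def with_tone(task: str, tone: str) -> str:
    """Make the desired tone an explicit part of the prompt."""
    return f"{task} Use a {tone} tone."

prompt = with_tone(
    "Generate a persuasive argument against using plastic straws.", "formal"
)
print(prompt)
```

Swapping the `tone` argument ("formal", "casual", "excited") lets you retarget the same task at different audiences.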
- Iterate and refine: Iterate on your prompts to improve the model's output. Continuously review the generations and adjust the wording of the prompt based on what you observe.
- Example prompt 1: Provide a product description using all of the product details provided below.
- Example prompt 2: Generate a product description in an excited tone using the details listed below.
- These are two formulations of the same request that sound similar to humans but can lead to quite different generations. This can happen because models have learned that the different formulations appear in very different contexts, so it is useful to try a range of prompts for the same problem.
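Trying several formulations can be organized as a small loop. In this sketch, `generate` and `score` are placeholders you would supply yourself (your generation call and your own quality metric); neither is a real library function:

```python
# Two formulations of the same request, from the examples above.
variants = [
    "Provide a product description using all of the product details provided below.",
    "Generate a product description in an excited tone using the details listed below.",
]

def best_variant(prompts, generate, score):
    """Run each prompt through `generate`, score the output,
    and return the prompt whose output scored highest."""
    scored = [(score(generate(p)), p) for p in prompts]
    return max(scored)[1]
```

Even a rough `score` function (keyword checks, length limits, a human rating) makes comparisons between formulations repeatable instead of ad hoc.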
- Illustrate the output: Provide a few examples of the types of generations you might want - this is called few-shot learning.
- Example output 1: Email subject line - Check out this new product!
- Example output 2: Email subject line - New product is on sale!
- These examples show the kinds of generations you want the model to produce. By illustrating the desired results, you make it more likely that the model produces outputs in a similar format and style.
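A few-shot prompt like this can be assembled by prefixing the instruction with the example outputs. The helper below is a hypothetical sketch using the subject-line examples above:

```python
def build_few_shot_prompt(instruction: str, example_outputs: list[str]) -> str:
    """Show the model sample outputs before asking for a new one."""
    shots = "\n".join(example_outputs)
    return f"{instruction}\nExamples:\n{shots}\nNew subject line:"

prompt = build_few_shot_prompt(
    "Write an email subject line announcing a new product.",
    [
        "Email subject line - Check out this new product!",
        "Email subject line - New product is on sale!",
    ],
)
print(prompt)
```

Ending the prompt with an open-ended cue ("New subject line:") invites the model to continue in the same pattern as the examples.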