...
- The available models
- The API limitations
- How to test the API & get an API key
- Which model is best to use & what pricing to expect
- How many tokens our prompt can use without triggering errors
GPT : Generative Pre-trained Transformer
As a language model based on the GPT-3 architecture, the GPT-3 API has some limitations, including a fixed maximum context length shared between prompt and completion, per-account rate limits, per-token pricing, and a training-data knowledge cutoff.
We definitely need to test how tight the token limit is for this example prompt itself.
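To sanity-check a prompt against the token budget before calling the API, a rough character-based heuristic is enough (roughly 4 characters per English token). This is a sketch, not an exact counter; for exact counts OpenAI's tiktoken library should be used. The function names and the 4,097-token limit for text-davinci-003 are stated here as assumptions to verify against the official model documentation.

```python
# Rough token-count estimate (heuristic: ~4 characters per English token).
# For exact counts use OpenAI's tiktoken library; this sketch avoids that
# dependency and is only meant for quick budget checks.
def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_budget(prompt: str, completion_tokens: int, limit: int = 4097) -> bool:
    """The context window (assumed 4,097 tokens for text-davinci-003) is
    shared between the prompt and the requested completion."""
    return rough_token_estimate(prompt) + completion_tokens <= limit
```

This lets us reject an over-long prompt locally instead of waiting for the API to return a context-length error.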
...
In order to test the API, we first need to get an API key:
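Once a key is available, a first test request can be sketched as below. The helper name is ours and nothing is actually sent; it only assembles the URL, headers, and JSON body for the completions endpoint, using the text-davinci-003 model mentioned later in this page.

```python
# Sketch of a minimal GPT-3 API request. Nothing is sent here; the function
# only builds the pieces so they can be inspected (or passed to an HTTP client).
import json

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(api_key, prompt, model="text-davinci-003", max_tokens=64):
    """Return (url, headers, body) for a completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return API_URL, headers, json.dumps(body)

if __name__ == "__main__":
    url, headers, body = build_completion_request("sk-...", "Say hello.")
    print(url)
```

The key is passed as a `Bearer` token in the `Authorization` header; it should come from an environment variable or secret store, never be hard-coded.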
Even with careful planning, it's important to be prepared for unexpected issues when using GPT-3 in your application. In some cases, the model may fail on a task, so it's helpful to consider what you can do to improve the reliability of your application. If your task involves logical reasoning or complexity, you may need to take additional steps to build more reliable prompts. For helpful suggestions, consult the Techniques to improve reliability guide. Overall, the recommendations revolve around giving clearer instructions, splitting complex tasks into simpler subtasks, and prompting the model to reason step by step before answering.
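One of those techniques, prompting the model to reason step by step, can be sketched as a small prompt wrapper. The function name and template wording are ours, not OpenAI's:

```python
# Wrap a question so the model reasons before answering (a common
# reliability technique for logical/multi-step tasks). Template is a sketch.
def make_step_by_step_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Let's think step by step, then state the final answer on its own line."
    )

print(make_step_by_step_prompt("If a train travels 60 miles in 1.5 hours, what is its speed?"))
```

The resulting prompt replaces the raw question in the request body; the final line of the completion can then be parsed as the answer.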
Follow this link to get more info.
How prompt engineering works

Due to the way the instruction-following models are trained or the data they are trained on, there are specific prompt formats that work particularly well and align better with the tasks at hand. Below we present a number of prompt formats we find work reliably well, but feel free to explore different formats, which may fit your task best.

Rules of Thumb and Examples (note: "{text input here}" is a placeholder for actual text/context):

1. Use the latest model. For best results, we generally recommend using the latest, most capable models. As of November 2022, the best options are the "text-davinci-003" model for text generation and the "code-davinci-002" model for code generation.

2. Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context.
   - Less effective ❌: Summarize the text below as a bullet point list of the most important points. {text input here}
   - Better ✅: Summarize the text below as a bullet point list of the most important points. Text: """{text input here}"""

3. Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc.
   - Less effective ❌: Write a poem about OpenAI.
   - Better ✅: Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model), in the style of a {famous poet}.

4. Articulate the desired output format through examples. Show, and tell: the models respond better when shown specific format requirements, and this also makes it easier to programmatically parse out multiple outputs reliably.
   - Less effective ❌: Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes. Text: {text}
   - Better ✅: Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes. Desired format: Company names: <comma_separated_list_of_company_names> People names: -||- Specific topics: -||- General themes: -||- Text: {text}

5. Start with zero-shot, then few-shot; if neither of them works, then fine-tune.
   - ✅ Zero-shot: Extract keywords from the below text. Text: {text} Keywords:
   - ✅ Few-shot (provide a couple of examples): Extract keywords from the corresponding texts below.
   - ✅ Fine-tune: see the fine-tune best practices guide.

6. Reduce "fluffy" and imprecise descriptions.
   - Less effective ❌: The description for this product should be fairly short, a few sentences only, and not too much more.
   - Better ✅: Use a 3 to 5 sentence paragraph to describe this product.

7. Instead of just saying what not to do, say what to do instead.
   - Less effective ❌: The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.
   - Better ✅: The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article.

8. Code generation specific: use "leading words" to nudge the model toward a particular pattern. Adding "import" at the end of the prompt hints to the model that it should start writing in Python (similarly, "SELECT" is a good hint for the start of a SQL statement).
   - Less effective ❌: # Write a simple python function that # 1. Ask me for a number in mile
   - Better ✅: the same comments, followed by a bare "import" line.

Parameters

Generally, we find that model and temperature are the most commonly used parameters to alter the model output. For other parameter descriptions see the API reference.
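How those two parameters sit in a request body can be sketched as below. The helper name is ours; nothing is sent. The rule of thumb (low temperature for factual tasks, higher for open-ended ones) is a common convention, not an official threshold.

```python
# Sketch: where `model` and `temperature` go in a completions request body.
# Lower temperature -> more deterministic output; higher -> more varied.
def completion_body(prompt, temperature, model="text-davinci-003"):
    return {
        "model": model,
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": 64,
    }

deterministic = completion_body("Summarize: ...", temperature=0.0)   # factual task
creative = completion_body("Write a poem about OpenAI.", temperature=0.9)  # open-ended
```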
GPT-3.5-turbo vs. text-davinci-003 vs. text-davinci-002:
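One practical difference to keep in mind when comparing these models: gpt-3.5-turbo is a chat model (it takes a list of role-tagged messages on /v1/chat/completions), while text-davinci-003 and text-davinci-002 are completion models (a plain prompt string on /v1/completions). The helper names below are ours and nothing is sent; they only show the two body shapes.

```python
# Sketch of the two request-body shapes for the models compared above.
def chat_body(user_text, model="gpt-3.5-turbo"):
    # Chat models take a list of {"role", "content"} messages.
    return {"model": model, "messages": [{"role": "user", "content": user_text}]}

def legacy_completion_body(prompt, model="text-davinci-003"):
    # Completion models take a single prompt string.
    return {"model": model, "prompt": prompt}
```

This matters for any comparison test: the same task has to be phrased once as a message list and once as a prompt string, so the wrapper code differs even if the task text is identical.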