As mentioned earlier, OpenAI provides several models that we can use to accomplish our specific tasks.
In this section, we'll dive deeper into:
- The available models
- The API limitations
- How to test this API
- How to get an API key
- What would be the best model to use & pricing to follow
- The number of tokens that should be used in our prompt to prevent errors
## The available models
### GPT-4

- Latest model.
- With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with greater accuracy.
- GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style.
- Following the research path from GPT, GPT-2, and GPT-3, the deep learning approach leverages more data and more computation to create increasingly sophisticated and capable language models.
- OpenAI spent six months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on OpenAI's internal evaluations.

Price list for the GPT-4 model:

| Context | Prompt | Completion |
|---|---|---|
| 8K context | $0.03 / 1K tokens | $0.06 / 1K tokens |
| 32K context | $0.06 / 1K tokens | $0.12 / 1K tokens |
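The pricing above can be turned into a quick cost estimate per request. A minimal sketch, with the per-1K-token rates copied from the table (the example token counts are illustrative):

```python
# Estimate GPT-4 request cost from the pricing table above
# (rates are USD per 1,000 tokens).
GPT4_PRICING = {
    "8k":  {"prompt": 0.03, "completion": 0.06},
    "32k": {"prompt": 0.06, "completion": 0.12},
}

def gpt4_cost(prompt_tokens: int, completion_tokens: int, context: str = "8k") -> float:
    """Cost in USD for one request, given token counts and context size."""
    rates = GPT4_PRICING[context]
    return (prompt_tokens / 1000) * rates["prompt"] + \
           (completion_tokens / 1000) * rates["completion"]

# e.g. a 1,500-token prompt with a 500-token completion on the 8K model:
print(f"${gpt4_cost(1500, 500):.3f}")  # prints $0.075
```

The same request on the 32K-context model would cost exactly twice as much, which is worth keeping in mind when deciding whether the longer context is really needed.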
GPT-4 models:

| Latest model | Description | Max tokens | Training data |
|---|---|---|---|
| gpt-4 | More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration. | 8,192 tokens | Up to Sep 2021 |
| gpt-4-0314 | Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three-month period ending on June 14th 2023. | 8,192 tokens | Up to Sep 2021 |
| gpt-4-32k | Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. | 32,768 tokens | Up to Sep 2021 |
| gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will only be supported for a three-month period ending on June 14th 2023. | 32,768 tokens | Up to Sep 2021 |
For many basic tasks, the difference between GPT-4 and GPT-3.5 models is not significant. However, in more complex reasoning situations, GPT-4 is much more capable than any of the previous models.

- Limitation: GPT-4 is currently in a limited beta and only accessible to those who have been granted access. To use this API, we need to join the waitlist and will get access when capacity becomes available.
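The max-token limits in the table above apply to the prompt and the completion combined, so it is worth checking a prompt against the model's context window before sending it. A minimal sketch, assuming a coarse 4-characters-per-token heuristic (exact counting requires the model's tokenizer, e.g. the tiktoken library):

```python
# Rough token-budget check before sending a prompt.
# Assumption: ~4 characters per token is a coarse heuristic for English text;
# exact counts require the model's tokenizer (e.g. the tiktoken library).

MODEL_MAX_TOKENS = {
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
    "gpt-3.5-turbo": 4096,
}

def estimate_tokens(text: str) -> int:
    """Coarse estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, model: str, completion_budget: int = 500) -> bool:
    """True if the estimated prompt tokens plus a reserved completion
    budget fit within the model's context window."""
    return estimate_tokens(prompt) + completion_budget <= MODEL_MAX_TOKENS[model]

print(fits_context("Translate this sentence to French.", "gpt-4"))
```

If the check fails, the options are shortening the prompt, reserving a smaller completion budget, or switching to a larger-context model such as gpt-4-32k.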
### GPT-3.5

GPT-3.5 models can understand and generate natural language or code. The most capable and cost-effective model in the GPT-3.5 family is gpt-3.5-turbo, which has been optimized for chat but works well for traditional completion tasks as well.

| Latest model | Description | Max tokens | Training data |
|---|---|---|---|
| gpt-3.5-turbo | Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration. | 4,096 tokens | Up to Sep 2021 |
| gpt-3.5-turbo-0301 | Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three-month period ending on June 1st 2023. | 4,096 tokens | Up to Sep 2021 |
| text-davinci-003 | Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text. | 4,097 tokens | Up to Jun 2021 |
| text-davinci-002 | Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning. | 4,097 tokens | Up to Jun 2021 |
| code-davinci-002 | Optimized for code-completion tasks. | 8,001 tokens | Up to Jun 2021 |
We recommend gpt-3.5-turbo over the other GPT-3.5 models because of its lower cost. Experimenting with gpt-3.5-turbo is a great way to find out what the API is capable of doing. After you have an idea of what you want to accomplish, you can stay with gpt-3.5-turbo or try another model and optimize around its capabilities.

Note: OpenAI models are non-deterministic, meaning that identical inputs can yield different outputs. Setting temperature to 0 will make the outputs mostly deterministic, but a small amount of variability may remain.
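A minimal sketch of a gpt-3.5-turbo chat request with temperature set to 0, as discussed in the note above. It assumes the `openai` Python package (pre-1.0 interface) and an `OPENAI_API_KEY` environment variable; the messages are illustrative:

```python
# Build a gpt-3.5-turbo chat request with temperature=0 for
# (mostly) deterministic output.
import os

request = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,  # reduces (but does not fully eliminate) output variability
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-3.5 family in one sentence."},
    ],
}

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is configured
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(**request)
    print(response["choices"][0]["message"]["content"])
```

Keeping the request as a plain dict like this makes it easy to swap in gpt-4 later without touching the rest of the code.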
## What would be the best model to use & pricing to follow
- We can use the GPT comparison tool that lets us run different models side-by-side to compare outputs, settings, and response times and then download the data into an Excel spreadsheet.
- Under the Examples section, we can benefit from the ready-made examples to choose what we want.
- For example, we can use text-davinci-003 for SQL translation (example screenshots omitted).
- Another example we can benefit from is reducing the email content before sending it in the prompt, using text-davinci-003. We still need to test how the number of tokens in this example prompt itself is limited (example screenshots omitted).
- For retrieving data from the email, we can try one of these examples, using text-davinci-003 (example screenshots omitted).
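The two email tasks above can be sketched as completion prompts for text-davinci-003. The prompt wording below is illustrative, not taken from the OpenAI examples page, and the sample email is made up:

```python
# Hedged sketch: completion prompts for summarizing an email and for
# extracting structured fields from it, aimed at text-davinci-003.
import os

def summarize_prompt(email_body: str) -> str:
    """Prompt asking the model to shorten an email before further processing."""
    return f"Summarize the following email in two sentences:\n\n{email_body}\n\nSummary:"

def extract_prompt(email_body: str) -> str:
    """Prompt asking the model to pull structured fields out of an email."""
    return (
        "Extract the sender name, requested action, and deadline from this email "
        f"as a JSON object:\n\n{email_body}\n\nJSON:"
    )

email = "Hi team, please send the Q3 report to Dana by Friday. Thanks, Alex"
print(summarize_prompt(email))

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is configured
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=extract_prompt(email),
        max_tokens=256,
        temperature=0,
    )
    print(completion["choices"][0]["text"].strip())
```

Summarizing first and then extracting from the summary is one way to keep the prompt within the 4,097-token limit of text-davinci-003 for long emails.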