How to test this API & get an API key

In order to test the API, we need to get an API Key:

  • OpenAI provides a free tier for the GPT-3 API that allows developers to experiment with the API and build small-scale applications without incurring any costs.

  • The free tier provides access to a limited number of API requests per month, after which you will need to upgrade to a paid plan to continue using the API.
  • It's important to note that the availability and terms of these resources may change over time, so it's best to check with OpenAI directly for the most up-to-date information on their programs and offerings.
  • Supported languages: OpenAI provides several SDKs and libraries for using the API in different programming languages, including Python, Node.js, Java, Ruby, and C#.
  • The OpenAI API uses API keys for authentication. Visit your API keys page to retrieve the API key you'll use in your requests.

  • Avoid exposing the API keys in your code or in public repositories; instead, store them in a secure location. Expose them to your application through environment variables or a secret management service so that you don't need to hard-code them in your codebase (see the sketch after the key-generation steps below). Read more in OpenAI's Best practices for API key safety.

  • I've found 2 ways to get an API Key:
    • Apply for access (GPT-4)

      Steps: 

      Note that in the new documentation, GPT-3 no longer appears under the Products section; GPT-4 does. So the steps below refer to GPT-4 instead of GPT-3.

      • Go to the OpenAI website: Visit the OpenAI website at https://openai.com/.
      • Click on "Products": From the OpenAI homepage, click on the "Products" tab located in the top navigation menu.
      • Click on "GPT-3": From the Products page, click on the "GPT-3" option to learn more about the API.
      • Click on "Apply for Access": Once you have reviewed the information about the GPT-3 API, click on the Join API waitlist button located on the GPT-3 page.
      • Fill out the application form: Fill out the application form with your personal and project information. You will need to provide information about your intended use of the API, as well as your technical expertise and experience.
      • Agree to the terms: Read and agree to the terms of the GPT-3 API access agreement.
      • Submit your application: Once you have completed the application form and agreed to the terms, submit your application for review.
      • Wait for approval: The review process can take several weeks, and not all applications are approved. If your application is approved, you will receive an email with instructions on how to set up your account and obtain an API key.
      • Set up your account: Follow the instructions in the email to set up your OpenAI account. You will need to create a password and verify your email address.
      • Obtain your API key: Once your account is set up, log in to the OpenAI developer dashboard (https://beta.openai.com/login/) and navigate to the API keys section. Here you will find your API key, which you can copy and use in your application.
    • Generate an API key (e.g. for text-davinci-003)

      1. Log into your OpenAI dashboard and click your profile icon at the top right.
      2. Go to View API Keys and click Create new secret key to generate your API secret key.

      Below is a way suggested by ChatGPT to generate an API key tied to a specific model (for example text-davinci-003), but I couldn't find this flow in the dashboard:

      1. Go to the OpenAI website at https://openai.com/.
      2. Click on the "Get Started for Free" button in the top right corner of the page.
      3. Sign up for an OpenAI account by providing your email address and a password.
      4. Once you've signed up, log in to your OpenAI account.
      5. Navigate to the API Keys section of your account dashboard.
      6. Create a new API key by clicking the "New API Key" button.
      7. Select the text-davinci-003 model from the dropdown menu of available models.
      8. Give your API key a name and description, if desired.
      9. Click the "Create API Key" button to generate your new API key.
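Once you have a secret key, a minimal way to expose it to your code without hard-coding it, as recommended above, is an environment variable. The sketch below is in Python and assumes the openai package is installed; OPENAI_API_KEY is also the variable the openai library looks for by default.

# Sketch: read the API key from an environment variable instead of hard-coding it.
# Set it in your shell first, e.g.:  export OPENAI_API_KEY="sk-..."
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the variable is not set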
Token limits
  • What are tokens and how to count them?
  • You can think of tokens as pieces of words, where 1,000 tokens is about 750 words.
  • The limit depends on the model used; for example, text-davinci-003 requests can use up to 4,097 tokens shared between the prompt and the completion. If your prompt is 4,000 tokens, your completion can be at most 97 tokens.
  • The limit is currently a technical limitation.
  • Solution: there are often creative ways to work within the limit, e.g. condensing the prompt or breaking the text into smaller pieces. Counting tokens up front helps (see the sketch below).
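To count tokens before sending a request, OpenAI's tiktoken library can be used (a small sketch; tiktoken is a separate package, installed with pip install tiktoken):

# Sketch: count the tokens a prompt will consume for text-davinci-003.
import tiktoken

encoding = tiktoken.encoding_for_model("text-davinci-003")
prompt = "Summarize the text below as a bullet point list of the most important points."
n_prompt_tokens = len(encoding.encode(prompt))
# With a 4,097-token limit, this is roughly how much room is left for the completion:
print(n_prompt_tokens, 4097 - n_prompt_tokens)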
Techniques for improving reliability around prompts

Even with careful planning, it's important to be prepared for unexpected issues when using GPT-3 in your application. In some cases, the model may fail on a task, so it's helpful to consider what you can do to improve the reliability of your application.

If your task involves logical reasoning or complexity, you may need to take additional steps to build more reliable prompts. For some helpful suggestions, consult our Techniques to improve reliability guide. Overall the recommendations revolve around:

    • Decomposing unreliable operations into smaller, more reliable operations (e.g., selection-inference prompting)
    • Using multiple steps or multiple relationships to make the system's reliability greater than any individual component (e.g., maieutic prompting)
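As an illustration of the first bullet, a task can be decomposed into two smaller API calls instead of one large prompt. The sketch below assumes the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable; the prompts and placeholders are illustrative only, in the spirit of selection-inference prompting rather than the exact recipe from the guide.

# Sketch: split "answer a question about a long text" into two smaller, more reliable calls.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def complete(prompt):
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=256, temperature=0
    )
    return response["choices"][0]["text"].strip()

document = "{text input here}"
question = "{question here}"

# Step 1 (selection): pull out only the passages relevant to the question.
relevant = complete(f"Text: {document}\n\nList the sentences relevant to answering: {question}")
# Step 2 (inference): answer from the selected passages instead of the whole text.
answer = complete(f"Context: {relevant}\n\nAnswer the question: {question}")
print(answer)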
API Data Usage Policies

  • Data usage policies can be found here.
  • API usage policies can be found here.
How to Use the OpenAI API

    The OpenAI API usage is simple and follows the conventional API consumption pattern: build a request body for your chosen model endpoint and send it with your API key.

    1. Install the openai package using pip: pip install openai. If using Node instead, you can do so using npm: npm install openai.
    2. Grab your API keys: Log into your OpenAI dashboard and click your profile icon at the top right. Go to View API Keys and click Create new secret key to generate your API secret key.
    3. Make API calls to your chosen model endpoints via a server-side language like Python or JavaScript (Node). Feed these to your custom APIs and test your endpoints.
    4. Then call your custom APIs from JavaScript frameworks like React, Vue, or Angular.
    5. Present data (user requests and model responses) in a visually appealing UI, and your app is ready for real-world use.
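Putting steps 1-3 together, a minimal server-side sketch in Python might look like the following. It assumes the pre-1.0 openai package and an OPENAI_API_KEY environment variable; the model, prompt, and parameter values are only examples.

# Sketch: build a request body for the completions endpoint and send it.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # step 2: key comes from an environment variable

# The request body: the model plus the prompt and sampling parameters.
request_body = {
    "model": "text-davinci-003",
    "prompt": "Summarize the text below as a bullet point list of the most important points.\n"
              'Text: """\n{text input here}\n"""',
    "max_tokens": 256,
    "temperature": 0,
}

response = openai.Completion.create(**request_body)  # step 3: call the model endpoint
print(response["choices"][0]["text"])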

Best practices for prompt engineering with OpenAI API

How prompt engineering works

Due to the way the instruction-following models are trained or the data they are trained on, there are specific prompt formats that work particularly well and align better with the tasks at hand. Below we present a number of prompt formats we find work reliably well, but feel free to explore different formats, which may fit your task best.

Rules of Thumb and Examples:

Note: the "{text input here}" is a placeholder for actual text/context

1. Use the latest model

For best results, we generally recommend using the latest, most capable models. As of November 2022, the best options are the “text-davinci-003” model for text generation, and the “code-davinci-002” model for code generation.

2. Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context

Less effective ❌:

Summarize the text below as a bullet point list of the most important points.
{text input here}

Better ✅:

Summarize the text below as a bullet point list of the most important points.
Text: """
{text input here}
"""

3. Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc.

Less effective ❌:

Write a poem about OpenAI.

Better ✅:

Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model) in the style of a {famous poet}

4. Articulate the desired output format through examples (example 1, example 2).

Less effective ❌:

Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes.
Text: {text}

Show and tell: the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably (see the parsing sketch after the example below).

Better ✅:

Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes
Desired format:
Company names: <comma_separated_list_of_company_names>
People names: -||-
Specific topics: -||-
General themes: -||-

Text: {text}
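Because the format is spelled out, the completion can then be parsed programmatically. The sketch below is illustrative: the sample completion text is an assumption about what the model might return, and the field names must match whatever your prompt requests.

# Sketch: parse a completion that follows the "Desired format" above.
completion_text = (
    "Company names: Stripe, OpenAI\n"
    "People names: -\n"
    "Specific topics: payment processing, language models\n"
    "General themes: developer APIs"
)

parsed = {}
for line in completion_text.splitlines():
    if ":" in line:
        field, _, values = line.partition(":")
        parsed[field.strip()] = [v.strip() for v in values.split(",") if v.strip()]

print(parsed["Company names"])  # ['Stripe', 'OpenAI']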

5. Start with zero-shot, then few-shot (example); if neither of them works, then fine-tune

✅ Zero-shot

Extract keywords from the below text.
Text: {text}

Keywords:

✅ Few-shot - provide a couple of examples

Extract keywords from the corresponding texts below.
Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
##
Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.
Keywords 2: OpenAI, language models, text processing, API.
##
Text 3: {text}
Keywords 3:

✅ Fine-tune: see fine-tune best practices here.
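The few-shot prompt above can also be assembled programmatically from example pairs, which makes it easy to add, remove, or swap examples. A small sketch (the helper name and structure are just one way to do it):

# Sketch: build the few-shot keyword-extraction prompt above from (text, keywords) pairs.
examples = [
    ("Stripe provides APIs that web developers can use to integrate payment processing "
     "into their websites and mobile applications.",
     "Stripe, payment processing, APIs, web developers, websites, mobile applications"),
    ("OpenAI has trained cutting-edge language models that are very good at understanding "
     "and generating text. Our API provides access to these models and can be used to solve "
     "virtually any task that involves processing language.",
     "OpenAI, language models, text processing, API"),
]

def build_prompt(new_text):
    parts = ["Extract keywords from the corresponding texts below."]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}\n##")
    parts.append(f"Text {len(examples) + 1}: {new_text}\nKeywords {len(examples) + 1}:")
    return "\n".join(parts)

print(build_prompt("{text}"))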

6. Reduce “fluffy” and imprecise descriptions

Less effective ❌:

The description for this product should be fairly short, a few sentences only, and not too much more.

Better ✅:

Use a 3 to 5 sentence paragraph to describe this product.

7. Instead of just saying what not to do, say what to do instead

Less effective ❌:

The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.
Customer: I can’t log in to my account.
Agent:

Better ✅:

The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article 
Customer: I can’t log in to my account.
Agent:

8. Code Generation Specific - Use “leading words” to nudge the model toward a particular pattern

Less effective ❌:

# Write a simple python function that

# 1. Ask me for a number in mile
# 2. It converts miles to kilometers

In the code example below, adding "import" hints to the model that it should start writing in Python. (Similarly, "SELECT" is a good hint for the start of a SQL statement.)

Better ✅:

# Write a simple python function that

# 1. Ask me for a number in mile
# 2. It converts miles to kilometers
 
import

Parameters

Generally, we find that model and temperature are the most commonly used parameters to alter the model output.

    • model: Higher performance models are more expensive and have higher latency.
    • temperature: A measure of how often the model outputs a less likely token. The higher the temperature, the more random (and usually creative) the output. This, however, is not the same as “truthfulness”. For most factual use cases, such as data extraction and truthful Q&A, a temperature of 0 is best.
    • max_tokens (maximum length): Does not control the length of the output, but acts as a hard cutoff for token generation. Ideally you won’t hit this limit often, as your model will stop either when it thinks it’s finished, or when it hits a stop sequence you defined.
    • stop (stop sequences):  A set of characters (tokens) that, when generated, will cause the text generation to stop.

For other parameter descriptions see the API reference.
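As a sketch of how these parameters appear in a request (again assuming the pre-1.0 openai package and an OPENAI_API_KEY environment variable; the prompt, values, and stop sequence are only illustrative):

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # higher-performance models cost more and have higher latency
    prompt="Q: What is the capital of France?\nA:",
    temperature=0,             # 0 for factual use cases such as data extraction and truthful Q&A
    max_tokens=64,             # hard cutoff on generated tokens, not a target length
    stop=["\nQ:"],             # stop as soon as the model starts a new question
)
print(response["choices"][0]["text"].strip())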

Production best practices

Follow this link to get more info.