Calculate how much you’re paying to use the OpenAI GPT API

Calculate how much you're paying for each call when using the OpenAI API

Most of you have probably heard of ChatGPT, the conversational engine built on top of OpenAI's GPT models. The underlying API was opened to the public and is now used by a growing number of web apps that abstract away the prompts to serve specific customer needs.

If you're using the OpenAI API on a daily basis, it's useful to know exactly how much you're paying for each text generation request, i.e. for each API call.

The OpenAI pricing model is based on the number of tokens processed. Both the tokens of the prompt and those of the output text are charged by OpenAI, at a rate of $0.02 per 1,000 tokens for the most advanced DaVinci model (1,000 tokens roughly correspond to 750 words).

So if your prompt consists of 150 tokens and the answer consists of 850 tokens, you'll pay for 1,000 tokens in total, i.e. $0.02.
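In Python, that back-of-the-envelope calculation looks like this:

```python
# Both prompt and completion tokens are billed at the same per-token rate.
prompt_tokens = 150
completion_tokens = 850
rate_per_1k = 0.02  # DaVinci rate in dollars per 1,000 tokens

cost = (prompt_tokens + completion_tokens) / 1000 * rate_per_1k
print(cost)  # 0.02
```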

How to keep a detailed record of all API costs using Python

The Usage page in your OpenAI account displays the charges per day and per 10-minute slot.

You might want a more granular view of your API expenditure, on a per-call or per-script-run basis.

Let me show you how to do this if you’re using Python to interact with the OpenAI API.

Let’s say that I want to ask GPT-3 to generate a real estate description based on a few pieces of information.

Here's what the code might look like for the Python function, followed by a call to the function (question), passing two variables (prompt and info).

You’ll notice that I’ve inserted a print(response) command in the function to print out the JSON response, presented in my second code frame. It helped me figure out how to access the different parts of the JSON response. You can of course comment out this command for subsequent runs.

import openai

def question(prompt, info):

  openai.api_key = "your api key"

  response = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"{prompt} {info}",
    temperature=0.7,
    max_tokens=250,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
  )

  print(response)

  answer = response["choices"][0]["text"]
  usage = response["usage"]["total_tokens"]

  return answer, usage

answer, usage = question("Write a 100-word real estate presentation based on those facts", "3 bedrooms, swimming pool, large garden, $1 million")

Here’s the JSON response returned by the OpenAI API. We’ll focus our attention on “text” and “total_tokens”.

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nThis stunning 3-bedroom property is the perfect place to call home, with its large garden and swimming pool. Located in a desirable area, it offers an ideal lifestyle with plenty of space for the whole family.\n\nThe home features a spacious kitchen and living area, complete with modern appliances. It also boasts a huge garden which is perfect for entertaining friends and family. The large swimming pool is a great place to cool off in the summer months.\n\nFor those looking for a luxurious lifestyle, this property is perfect. With a price tag of $1 million, you can rest assured that you are getting your money's worth. This is an opportunity not to be missed."
    }
  ],
  "created": 1676208247,
  "id": "xxxxxxxxxxxxxxx",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 138,
    "prompt_tokens": 24,
    "total_tokens": 162
  }
}

In the first frame above, you’ve seen that I accessed the text reply in the following way:

answer = response["choices"][0]["text"]
#if I print out the answer
print(answer)
#I get the raw text reply below (when printed, the \n\n sequences render as blank lines)
This stunning 3-bedroom property is the perfect place to call home, with its large garden and swimming pool. Located in a desirable area, it offers an ideal lifestyle with plenty of space for the whole family.\n\nThe home features a spacious kitchen and living area, complete with modern appliances. It also boasts a huge garden which is perfect for entertaining friends and family. The large swimming pool is a great place to cool off in the summer months.\n\nFor those looking for a luxurious lifestyle, this property is perfect. With a price tag of $1 million, you can rest assured that you are getting your money's worth. This is an opportunity not to be missed.

I accessed the token count in a similar way:

usage = response["usage"]["total_tokens"]

#I can print out the usage
print(usage)
#which gives me 162 for this example

Now that I have the usage in tokens, I can calculate the $ price, with a basic formula.

👉 Bear in mind that in Python 3 the / operator already returns a float, so the explicit float() conversion is mainly there to make the intent clear: the cost should include decimals, since most calls will cost far less than a dollar.

cost = float(usage)/1000*0.02
print(cost)
#which gives me a cost of $0.00324
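You can wrap this formula in a small helper so the rate isn't hard-coded at every call site. The per-1,000-token rates below are illustrative values for this sketch; check OpenAI's pricing page for the current figures.

```python
# Illustrative per-1,000-token rates; check OpenAI's pricing page for current values.
RATES_PER_1K = {
    "text-davinci-003": 0.02,
    "text-curie-001": 0.002,
}

def call_cost(total_tokens, model="text-davinci-003"):
    """Return the dollar cost of one API call, given its total token count."""
    return total_tokens / 1000 * RATES_PER_1K[model]

print(call_cost(162))  # cost of the 162-token example call above
```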

How to calculate the API cost for multiple calls inside the same Python script

If you're calling the OpenAI API multiple times in the same Python script, for instance in a for loop, you should initialize a usage variable before the loop and increment it by the call's token count at each iteration.

For instance, let’s say that we have a list of long paragraphs that we want to summarize.

usage = 0

paragraphs = ["long paragraph 1", "long paragraph 2", "long paragraph 3"]

for item in paragraphs:

  prompt = "Summarize the following text. TEXT: "
  info = item

  #first I call my function described above, passing "prompt" and "info"
  reply = question(prompt, info)

  #now I'm accessing the 2 elements returned by my function, using indexes (0 = the first element, 1 = the second element)
  reply_text = reply[0]
  reply_usage = reply[1]

  #finally I increment the usage count
  usage += reply_usage

#in the end, after the for loop, I calculate the cost based on the total usage across all iterations
cost = float(usage)/1000*0.02

print(cost)
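To see the accumulation logic in isolation, here's the same pattern with a stand-in for the question function. fake_question is a stub I'm using so the snippet runs without an API key; it pretends every call uses 50 tokens.

```python
# Stub standing in for the real question() function, so the cost
# accumulation can be run (and tested) without calling the API.
def fake_question(prompt, info):
    return ("summary of: " + info, 50)  # (answer, total tokens used)

usage = 0
paragraphs = ["long paragraph 1", "long paragraph 2", "long paragraph 3"]

for item in paragraphs:
    answer, tokens = fake_question("Summarize the following text. TEXT: ", item)
    usage += tokens  # increment the running token count

# after the loop: convert the total token count into dollars
cost = usage / 1000 * 0.02
print(usage)  # 150
```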

👉 If you have any other questions related to the OpenAI API, feel free to contact me.

🚀 Subscribe to my weekly newsletter packed with tips & tricks around AI, SEO, coding and smart automations

☕️ If you found this piece helpful, you can buy me a coffee