How much does ChatGPT cost me?
OpenAI, the organization behind GPT-3.5, GPT-4, and ChatGPT, initially offered ChatGPT for free during its research preview phase. Since then it has introduced paid plans, and API usage (which is what Suppbot uses) is billed per token.
For the most accurate and up-to-date figures, check the official OpenAI pricing page or contact OpenAI directly. OpenAI publishes its current prices and subscription options on its website, and your OpenAI account shows exactly what you have been charged.
Current prices (as of 09/2023)
GPT-3.5 Turbo
4K context -> Input: $0.0015 / 1K tokens, Output: $0.002 / 1K tokens
16K context -> Input: $0.003 / 1K tokens, Output: $0.004 / 1K tokens
GPT-4
8K context -> Input: $0.03 / 1K tokens, Output: $0.06 / 1K tokens
32K context -> Input: $0.06 / 1K tokens, Output: $0.12 / 1K tokens
As you can see, GPT-4 costs considerably more than GPT-3.5 Turbo: it is the newer and more capable model, generally produces higher-quality answers, and supports larger context windows, so it can process more text per request.
Each request made through Suppbot consumes a certain number of tokens. Your entire input, which includes all the text you provide, is counted as input tokens, and the response is counted as output tokens.
If you have the SUPPBOT PRO version and enable the option to save the requests, you can see the number of tokens used for each request entry. Additionally, in the Plugin Dashboard, you can monitor the current token usage. Of course, you can also view this information in your OpenAI account.
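If you want to estimate the token count of a prompt yourself, OpenAI's open-source tiktoken library tokenizes text the same way the models do. Here is a minimal sketch; the prompt text and model name are only examples, not what Suppbot actually sends:

```python
# pip install tiktoken
import tiktoken

# Pick the tokenizer that matches the model you call.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Hypothetical prompt text; in Suppbot this would be your company
# texts from the settings plus the customer's question.
prompt = "You are the support assistant for Example Corp. The customer asks: How do I reset my password?"

input_tokens = len(encoding.encode(prompt))
print(f"Input tokens: {input_tokens}")
```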

Price calculation example
Usually, the ratio between input and output tokens is around 9:1, meaning approximately 90% of the tokens processed are input tokens and 10% are output tokens. An average request (with detailed texts about your company in the Suppbot settings) uses about 1,500 tokens.
This means roughly 1,350 input tokens and 150 output tokens per request.
At the current prices for GPT-4 (8K context), this works out to a total cost of about $0.0495 per request.
(1,350 tokens / 1,000) * $0.03 = $0.0405
(150 tokens / 1,000) * $0.06 = $0.009
If each request incurs 1,350 input tokens and 150 output tokens, the cost per request breaks down as follows:
Total cost for 1,500 tokens (1,350 input tokens + 150 output tokens):
$0.0405 (for the 1,350 input tokens, at $0.03 per 1,000 tokens)
$0.009 (for the 150 output tokens, at $0.06 per 1,000 tokens)
Total: $0.0405 + $0.009 = $0.0495 per request of 1,500 tokens
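The same per-request calculation as a small Python sketch; the variable names are just for illustration, and the rates are the GPT-4 8K prices listed above:

```python
# GPT-4 (8K context) prices per 1,000 tokens, taken from the table above.
INPUT_PRICE_PER_1K = 0.03   # dollars
OUTPUT_PRICE_PER_1K = 0.06  # dollars

input_tokens = 1350
output_tokens = 150

cost_per_request = (input_tokens / 1000) * INPUT_PRICE_PER_1K \
                 + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K
print(f"Cost per request: ${cost_per_request:.4f}")  # -> $0.0495
```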
Example: Tokens for $10
Now we can calculate how many tokens you would get for $10:
Number of tokens for $10 = ($10 / $0.0495 per 1,500 tokens) * 1,500 tokens
This results in:
Number of tokens for $10 ≈ 202.02 * 1,500 tokens ≈ 303,030 tokens
So, you would get approximately 303,000 tokens for $10, based on the token prices above and the assumed number of input and output tokens per request. Please note that this is an estimate, and the actual cost may vary depending on OpenAI's exact prices and fees.
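The same estimate in Python, assuming the $0.0495 per 1,500-token request calculated above:

```python
budget = 10.00              # dollars
cost_per_request = 0.0495   # dollars per 1,500-token request (see above)
tokens_per_request = 1500

requests = budget / cost_per_request              # about 202.02
tokens_for_budget = requests * tokens_per_request
print(f"Tokens for ${budget:.0f}: {tokens_for_budget:,.0f}")  # -> about 303,030
```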
Example: Requests for $10
We already calculated that you would receive approximately 303,030 tokens for $10. Now we divide this by the number of tokens per request:
Number of requests = 303,030 tokens / 1,500 tokens per request ≈ 202 requests
So, you could make approximately 202 requests for $10, assuming each request consumes 1,350 input tokens and 150 output tokens and token prices remain constant. Please note that this is a rough estimate, and the actual number of requests may vary depending on OpenAI's exact token prices and fees.
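And the final division as a quick sketch; the 303,030 figure is the estimate from the previous example:

```python
tokens_for_budget = 303_030   # estimated tokens for $10 (from above)
tokens_per_request = 1500     # 1,350 input + 150 output tokens

requests = tokens_for_budget / tokens_per_request
print(f"Requests for $10: about {requests:.0f}")  # -> about 202
```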
As you can see, using AI to answer your queries is still very cost-effective, and if you use GPT-3.5 Turbo instead of GPT-4, the cost per request drops to a fraction of a cent.