
Feature request: Estimate token use/cost

Open fn5 opened this issue 2 years ago • 2 comments

A low priority idea. There is a slight delay between when tokens are used and when they show up on the usage page: https://platform.openai.com/account/usage

It would be good to know the number of tokens a query will require before it is sent, and/or the number of tokens it actually used.

Some ideas:

  • If a flag (confirmtokens) is set to true, the number of tokens is displayed after an input is entered, and a further confirmation (e.g. pressing Enter) is required to continue.
  • If a flag (showtokens) is set to true, the token usage and estimated cost of an input are displayed with the output, e.g. [Tokens used: XX Est. Cost $0.0002]
  • The values should adjust based on which model is selected

OpenAI does provide guidance on how token usage can be estimated, and the actual usage can also be retrieved from the API response: https://platform.openai.com/docs/guides/chat/introduction#:~:text=it%27s%20more%20difficult%20to%20count%20how%20many%20tokens

I can't immediately see pricing information exposed via the API; however, prices are published here: https://openai.com/pricing
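
For reference, a rough sketch of how the estimate could work, using tiktoken and the message-counting approach from the docs linked above. The per-message overhead constants are approximations that vary by model, and the price table is illustrative, not gptcli code:

```python
import tiktoken

def estimate_prompt_tokens(messages, model="gpt-3.5-turbo"):
    """Rough estimate of the tokens a list of chat messages will consume."""
    enc = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # approximate per-message formatting overhead
        for value in message.values():
            num_tokens += len(enc.encode(value))
    num_tokens += 3  # approximate priming tokens for the assistant's reply
    return num_tokens

# Illustrative prices ($ per 1K tokens); see https://openai.com/pricing for current values.
PRICE_PER_1K = {"gpt-3.5-turbo": 0.002, "gpt-4": 0.03, "gpt-4-32k": 0.12}

def estimate_cost(num_tokens, model="gpt-3.5-turbo"):
    return num_tokens / 1000 * PRICE_PER_1K.get(model, 0.0)
```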

fn5 avatar Mar 27 '23 03:03 fn5

Good idea! Token usage is a significant concern for online API calls, especially with the expensive models (e.g. $0.12 / 1K tokens for gpt-4-32k). I'd like to work on the showtokens feature first; for confirmtokens, setting a money threshold to trigger the confirmation might be better.
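
A minimal sketch of that threshold idea, assuming an `estimate_cost` helper like the one above; the function name and default threshold are hypothetical, not existing gptcli code:

```python
def confirm_if_expensive(estimated_cost: float, threshold: float = 0.05) -> bool:
    """Ask for confirmation only when the estimated cost exceeds the threshold (USD)."""
    if estimated_cost < threshold:
        return True
    answer = input(f"Estimated cost ${estimated_cost:.4f} exceeds ${threshold:.2f}. Continue? [y/N] ")
    return answer.strip().lower() == "y"
```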

evilpan avatar Mar 27 '23 05:03 evilpan

We're already calculating tokens, and we can get the pricing from OpenAI,

then calculate the cost.

Before each query is sent, we check the cost condition.

Add a flag to the argument parser (a sketch follows below).

How does this approach sound? If it looks fine, I can work on this issue @evilpan
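
A possible shape for the flags, assuming argparse is used; the option names here are illustrative, corresponding to showtokens and the cost-threshold idea above, not the actual gptcli arguments:

```python
import argparse

parser = argparse.ArgumentParser(prog="gptcli")
parser.add_argument("--show-tokens", action="store_true",
                    help="print token usage and estimated cost with each response")
parser.add_argument("--cost-threshold", type=float, default=None,
                    help="ask for confirmation when the estimated cost exceeds this amount (USD)")
args = parser.parse_args()
```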

SDcodehub avatar Jun 19 '23 11:06 SDcodehub