How AI tokens work

We use OpenAI to process all AI-related queries, so we count tokens in the same way that OpenAI does.

They say:
Tokens can be thought of as pieces of words. Before the API processes the prompts, the input is broken down into tokens. These tokens are not cut up exactly where the words start or end - tokens can include trailing spaces and even sub-words. Here are some helpful rules of thumb for understanding tokens in terms of lengths:

1 token ~= 4 chars in English
1 token ~= ¾ words
100 tokens ~= 75 words

Or

1-2 sentences ~= 30 tokens
1 paragraph ~= 100 tokens
1,500 words ~= 2048 tokens
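The rules of thumb above can be turned into a quick estimate. The sketch below is only an approximation based on those ratios; exact counts require OpenAI's own tokeniser (for example, their open-source tiktoken library):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb.

    This is an approximation only; exact counts come from OpenAI's own
    tokeniser (e.g. the open-source `tiktoken` library).
    """
    return max(1, round(len(text) / 4))


def estimate_tokens_from_words(text: str) -> int:
    """Alternative estimate using the ~100 tokens per 75 words rule of thumb."""
    word_count = len(text.split())
    return max(1, round(word_count * 100 / 75))
```

For example, `estimate_tokens("You miss 100% of the shots you don't take")` gives 10, close to the 11 tokens that OpenAI reports for that quote, which shows both how useful and how rough these rules of thumb are.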

To get additional context on how tokens stack up, consider this:

Wayne Gretzky’s quote "You miss 100% of the shots you don't take" contains 11 tokens.
OpenAI’s charter contains 476 tokens.
The transcript of the US Declaration of Independence contains 1,695 tokens.

To read more about this, see OpenAI's support article on tokens.

Please also bear in mind that there are a few other important things that affect the number of tokens used within your account:


Assistant

We use an OpenAI Assistant that we have built for Project.co to perform the actions within your account, such as creating projects, tasks and more.

This assistant requires written instructions about how it should operate as well as text detailing all the functions it can use. All of this information needs to be transmitted with each new message thread the AI Assistant creates.

This means that sending this information consumes tokens in addition to the input prompts and output responses of each thread.

So you will find that using the "Action" mode within the AI Assistant consumes more tokens than the input and output of your conversation alone.
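As a sketch of why Action mode costs more, the arithmetic below uses entirely made-up token counts (these are not Project.co's real instruction or function-spec sizes):

```python
# Hypothetical token counts for a single "Action" mode request.
# All figures are illustrative only, not Project.co's actual values.
INSTRUCTION_TOKENS = 900    # written instructions on how the assistant operates
FUNCTION_SPEC_TOKENS = 600  # text describing every function it can use
input_tokens = 50           # your prompt, e.g. "Create a project called Q3 Launch"
output_tokens = 40          # the assistant's reply

# In Action mode the instructions and function descriptions travel with the
# thread, so the total consumed exceeds input + output alone.
chat_only_total = input_tokens + output_tokens
action_mode_total = INSTRUCTION_TOKENS + FUNCTION_SPEC_TOKENS + chat_only_total

print(chat_only_total)    # 90
print(action_mode_total)  # 1590
```

The fixed overhead dominates short requests, which is why a one-line Action prompt can cost far more than the visible conversation suggests.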


Message thread

When you continue a message thread with multiple questions, all previous content in the thread is transmitted back to OpenAI for context. This is how the OpenAI system currently works. As a result, long conversations consume progressively more tokens, because the full conversation history is sent with each new message.
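To illustrate how resending the history compounds over a conversation, here is a simplified sketch (the per-message token counts are hypothetical):

```python
# Each turn retransmits the entire conversation history plus the new
# message, so total tokens sent grow much faster than the content itself.
# Per-message token counts below are hypothetical, for illustration only.
messages = [120, 80, 150, 95]  # tokens in each successive message

total_sent = 0
history = 0
for tokens in messages:
    total_sent += history + tokens  # this turn transmits all prior context too
    history += tokens               # the new message joins the history

print(history)     # 445 - tokens of actual conversation content
print(total_sent)  # 1115 - tokens actually transmitted across all turns
```

In this example the thread contains 445 tokens of content, but 1,115 tokens are transmitted in total, and the gap widens with every additional message.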

For this reason, we recommend starting each new request as a new conversation to minimise your token usage. That is also why we have implemented a feature that asks whether you want to continue the conversation or start a new one each time you open the AI Assistant.