Model Pricing and Credit Consumption Guide

June 6, 2025

Credit Consumption Explanation

Most of FeelFish's costs come from the compute consumed by large-model providers, which corresponds to AI token consumption. FeelFish supports mainstream models worldwide, and the same AI request can be served by different models. Because different models price tokens differently, FeelFish uses unified credits to measure compute consumption.

Each model therefore has its own credit price per token. Taking DeepSeek V3.2 as an example, every million input tokens (roughly corresponding to 100,000 words or more) consumes 28,000,000 credits, and every million output tokens consumes 42,000,000 credits. That is, each input token costs roughly 28 credits and each output token roughly 42 credits.
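As a sketch, the arithmetic above can be expressed as a small helper. The rates are the DeepSeek V3.2 figures quoted above; the function name is illustrative, not part of any FeelFish API:

```python
# Credit cost of a single request, using the DeepSeek V3.2 rates
# quoted above: 28 credits per input token, 42 per output token.
INPUT_CREDITS_PER_TOKEN = 28
OUTPUT_CREDITS_PER_TOKEN = 42

def request_credits(input_tokens: int, output_tokens: int) -> int:
    """Total credits consumed by one model request."""
    return (input_tokens * INPUT_CREDITS_PER_TOKEN
            + output_tokens * OUTPUT_CREDITS_PER_TOKEN)

# One million tokens in and one million tokens out:
print(request_credits(1_000_000, 1_000_000))  # 70000000
```

Output dominates the bill for this model: a million output tokens costs 1.5 times as much as a million input tokens.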

πŸ§‘β€πŸ’» Of course, when actually creating content, the Tokens consumed for creating content are uncertain, so the actual credit consumption during creation depends on the creator's creative approach. To avoid wasting credits, we recommend:

  • Use the auxiliary creation panel for day-to-day continuation and revision; it can output a large amount of content with minimal context.
  • When creating with intelligent agents, start a new conversation for each independent task, so that each task carries less historical-message context and consumes fewer credits.
  • Prefer model services with input caching (such as DeepSeek), so that historical messages consume fewer credits (for DeepSeek V3.2, historical messages consume one-tenth as many credits as new messages).
  • Choose an appropriate model for the job; for daily creation we recommend DeepSeek's latest models.

Model Credit Consumption Comparison

The following table compares the credit consumption of different models, all based on the official API pricing of the large-model providers. Actual consumption depends on the number of input and output tokens in each request to the model. The table shows only a subset of model prices and does not include the discounted prices offered by FeelFish cloud services.

To see the specific per-token credit consumption of all large models, please log in and visit Credit Consumption.

Model Name     | Credits per token (input/output) | Price Rating
DeepSeek-R1    | 55/219                           | ⭐⭐⭐ (Moderate)
DeepSeek-V3.2  | 28/42 ❤️ Recommended             | ⭐⭐ (Economical)
DeepSeek-V3.1  | 56/168                           | ⭐⭐ (Economical)
DeepSeek-V3    | 27/110                           | ⭐⭐ (Economical)
Kimi-K2        | 48/192                           | ⭐⭐ (Economical)
gpt-4.1        | 200/800                          | ⭐⭐⭐⭐ (Expensive)
gpt-4o         | 250/1000                         | ⭐⭐⭐⭐ (Expensive)
qwen-plus      | 11/28                            | ⭐ (Very Economical)
qwen-turbo     | 4/7                              | ⭐ (Very Economical)
gemini-2.5-pro | 125/1000                         | ⭐⭐⭐⭐ (Expensive)
grok-3         | 300/1500                         | ⭐⭐⭐⭐ (Expensive)
  • ⭐: Very Economical
  • ⭐⭐: Economical
  • ⭐⭐⭐: Moderate
  • ⭐⭐⭐⭐: Expensive
  • ⭐⭐⭐⭐⭐: Very Expensive
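To make the ratings concrete, here is a sketch that prices the same request on a few of the models above, using the table's per-token rates (the dictionary and function are illustrative, not a FeelFish API):

```python
# Per-token credit rates (input, output) taken from the table above.
RATES = {
    "DeepSeek-R1":   (55, 219),
    "DeepSeek-V3.2": (28, 42),
    "qwen-turbo":    (4, 7),
    "gpt-4o":        (250, 1000),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> int:
    inp, out = RATES[model]
    return input_tokens * inp + output_tokens * out

# The same request (3,000 tokens in, 1,000 tokens out) on each model:
for model in RATES:
    print(model, cost(model, 3_000, 1_000))
```

For this request, gpt-4o costs about 14 times as much as DeepSeek-V3.2 and about 90 times as much as qwen-turbo, which is why model choice matters so much for daily creation.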

About Historical Message Credit Consumption

When you create content with intelligent agents, each request sends all historical messages of the current conversation to the model. These messages count as model tokens and consume credits, so the longer you continue creating in the same conversation, the more credits the historical messages consume.

Of course, because large-model providers cache historical messages, their credit cost is relatively low. For DeepSeek V3.2, for example, historical messages consume one-tenth as many credits as new messages. Even so, if you keep creating in the same conversation, the cost of historical messages keeps growing.

We therefore recommend starting a new conversation after several dozen rounds, once an independent creative task is complete. For example, create each chapter, or a few chapters of a continuous plot, in one conversation; when that plot is finished, start a new conversation to continue creating. Before starting the new conversation, you can ask the intelligent agent to update the creative style, character, and setting information, producing a fresh intelligent context so that plot and style stay consistent in the new conversation.
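The growth described above can be sketched as follows. This assumes roughly 1,000 tokens are added per round and that cached history is billed at one-tenth the input rate (the DeepSeek V3.2 behaviour described in this guide); all figures are illustrative:

```python
# Credits spent on resending (cached) conversation history, assuming
# ~1,000 tokens added per round and cached history billed at
# one-tenth the fresh-input rate. Illustrative only.
INPUT_RATE = 28
CACHED_RATE = INPUT_RATE / 10
TOKENS_PER_ROUND = 1_000

def history_credits(rounds: int) -> float:
    """Total credits spent on historical tokens across all rounds."""
    total = 0.0
    history = 0
    for _ in range(rounds):
        total += history * CACHED_RATE  # resend accumulated history
        history += TOKENS_PER_ROUND     # this round joins the history
    return total

# One 40-round conversation vs. four separate 10-round conversations:
print(history_credits(40))      # 2184000.0
print(4 * history_credits(10))  # 504000.0
```

Because history cost grows roughly with the square of the conversation length, splitting one long conversation into several shorter ones cuts the history bill substantially, which is the reasoning behind the per-task conversation recommendation.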

Note that models are priced differently, and some model services do not cache historical messages. Visit /account/credits to view each model's specific credit consumption.

How to View Credit Consumption for Each Request

Starting with version 2.5.2, you can view the credit consumption of each request in the log shown in the status bar below the FeelFish client editor. The displayed figure does not account for historical-message caching, so it may be higher than the actual consumption; this will be fixed in version 2.7.3 and later. For actual consumption, refer to the changes in your account credits.