Model Pricing and Credit Consumption Guide
Credit Consumption Explanation
Most of FeelFish's costs come from the computing power consumed by large model providers, which corresponds to AI Token consumption. FeelFish supports mainstream models from around the world, and the same AI request can be served by different models. Because each model prices Tokens differently, FeelFish uses a unified credit system to measure computing power consumption.
Each model therefore has its own credit price per Token. Taking DeepSeek-V3.2 as an example, one million input Tokens (roughly 100,000 words or more) consume 28,000,000 credits, and one million output Tokens consume 42,000,000 credits. That works out to roughly 28 credits per Token sent (about one character) and 42 credits per Token returned.
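For example, the credit cost of a single request can be estimated by multiplying its input and output Token counts by the per-token rates listed in the table below. The sketch below only illustrates that arithmetic; the rate table and the `estimate_credits` helper are hypothetical and not part of any FeelFish API.

```python
# Rough credit-cost estimator based on the per-token rates listed in this guide.
# Illustrative only: the rate table and estimate_credits() are not an official FeelFish API.

CREDIT_RATES = {
    # model: (credits per input token, credits per output token)
    "DeepSeek-V3.2": (28, 42),
    "DeepSeek-R1": (55, 219),
    "gpt-4o": (250, 1000),
}

def estimate_credits(model: str, input_tokens: int, output_tokens: int) -> int:
    """Estimate credits consumed by one request: input and output are billed separately."""
    rate_in, rate_out = CREDIT_RATES[model]
    return input_tokens * rate_in + output_tokens * rate_out

# Example: a 2,000-token prompt with a 1,000-token reply on DeepSeek-V3.2
# costs 2,000 * 28 + 1,000 * 42 = 98,000 credits.
print(estimate_credits("DeepSeek-V3.2", 2_000, 1_000))  # 98000
```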
Of course, the number of Tokens consumed during actual creation is not fixed, so real credit consumption depends on how the creator works. To avoid wasting credits, we recommend:
- Use the auxiliary creation panel for day-to-day continuation and revision; it can produce a large amount of output with minimal context.
- When creating with intelligent agents, start a new conversation for each independent task, so each task carries less historical message context and therefore consumes fewer credits.
- Configure the intelligent agent's history-message processing rules to cap the number of historical messages at a reasonable maximum (see the sketch after this list).
- Choose an appropriate model for each task; for daily creation we recommend DeepSeek's latest models.
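To illustrate why capping historical messages saves credits, the sketch below estimates the input-side cost of one agent request with and without a history limit. The message sizes and the `input_credits` helper are hypothetical; only the 28-credits-per-input-Token rate for DeepSeek-V3.2 comes from this guide.

```python
# Illustrative sketch (not FeelFish code): capping history messages reduces the
# input Tokens, and therefore the credits, sent with each new agent request.

INPUT_RATE = 28  # credits per input token for DeepSeek-V3.2, from the table below

def input_credits(history_token_counts: list[int], new_message_tokens: int,
                  max_history_messages: int) -> int:
    """Credits spent on input for one request, keeping only the most recent history."""
    kept_history = history_token_counts[-max_history_messages:] if max_history_messages else []
    return (sum(kept_history) + new_message_tokens) * INPUT_RATE

history = [800, 1200, 950, 700, 1100]  # token sizes of earlier messages (hypothetical)

print(input_credits(history, new_message_tokens=500, max_history_messages=len(history)))  # 147000
print(input_credits(history, new_message_tokens=500, max_history_messages=2))             # 64400
```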
Model Credit Consumption Comparison
The following table compares the credit consumption of different models, based on the official API pricing of each large model provider. Actual consumption depends on the number of input and output Tokens in each request sent to the model. The table lists only some models and does not include the discounted prices offered by FeelFish cloud services.
For the exact credit consumption per Token of every supported model, please log in and visit the Credit Consumption page.
Model Name | Credits per token (input/output) | Price Rating |
---|---|---|
DeepSeek-R1 | 55/219 | ★★★ (Moderate) |
DeepSeek-V3.2 | 28/42 ❤️ Recommended | ★★ (Economical) |
DeepSeek-V3.1 | 56/168 | ★★ (Economical) |
DeepSeek-V3 | 27/110 | ★★ (Economical) |
Kimi-K2 | 48/192 | ★★ (Economical) |
gpt-4.1 | 200/800 | ★★★★ (Expensive) |
gpt-4o | 250/1000 | ★★★★ (Expensive) |
qwen-plus | 11/28 | ★ (Very Economical) |
qwen-turbo | 4/7 | ★ (Very Economical) |
gemini-2.5-pro | 125/1000 | ★★★★ (Expensive) |
grok-3 | 300/1500 | ★★★★ (Expensive) |
- ★: Very Economical
- ★★: Economical
- ★★★: Moderate
- ★★★★: Expensive
- ★★★★★: Very Expensive
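As a rough way to read the table, the sketch below estimates what one identical request (2,000 input Tokens and 1,000 output Tokens, an arbitrary assumption) would cost on each listed model and sorts the results from cheapest to most expensive. It is illustrative only, not an official calculator.

```python
# Illustrative comparison: estimated credits for the same request across the models
# in the table above. The request size (2,000 in / 1,000 out Tokens) is an assumption.

RATES = {  # model: (credits per input token, credits per output token)
    "qwen-turbo": (4, 7),
    "qwen-plus": (11, 28),
    "DeepSeek-V3": (27, 110),
    "DeepSeek-V3.2": (28, 42),
    "Kimi-K2": (48, 192),
    "DeepSeek-R1": (55, 219),
    "DeepSeek-V3.1": (56, 168),
    "gemini-2.5-pro": (125, 1000),
    "gpt-4.1": (200, 800),
    "gpt-4o": (250, 1000),
    "grok-3": (300, 1500),
}

IN_TOKENS, OUT_TOKENS = 2_000, 1_000

def request_cost(rates: tuple[int, int]) -> int:
    rate_in, rate_out = rates
    return rate_in * IN_TOKENS + rate_out * OUT_TOKENS

for model, rates in sorted(RATES.items(), key=lambda kv: request_cost(kv[1])):
    print(f"{model:<16} {request_cost(rates):>9,} credits")
# e.g. qwen-turbo comes to 15,000 credits, DeepSeek-V3.2 to 98,000, gpt-4o to 1,500,000.
```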