You can think of tokens as pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words.
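The rule of thumb above can be turned into a quick estimate. This is a sketch, not a tokenizer: the helper names are illustrative and real token counts vary by model.

```python
# Rough token estimates from the rule of thumb: ~4 characters or ~0.75 words
# per token for English text. Illustrative only; not a CogCache API.

def estimate_tokens(text: str) -> int:
    """Estimate token count from character length (~4 chars per token)."""
    return max(1, round(len(text) / 4))

def estimate_tokens_from_words(word_count: int) -> int:
    """Estimate token count from word count (~0.75 words per token)."""
    return max(1, round(word_count / 0.75))
```

For example, a 400-character paragraph works out to roughly 100 tokens under either estimate.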
We're currently offering GPT-4o, GPT-4o-mini, GPT-4-Turbo, GPT-3.5-Turbo, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 1B, Llama 3.2 3B, Llama 3.2 11B Vision, Mixtral 8x7B, Gemma 7B, and Gemma 2 9B.
Pricing is tiered by monthly spend. Base prices start with a built-in 25% discount to market prices, and the discount grows as spend increases, ranging from 25% to 40% depending on your monthly spend tier and the specific model used.
Potential savings combine the listed price discount with an additional average 20% savings from CogCache serving cached responses. Actual savings derived from cognitive caching can be lower or higher, depending on the use case.
The maximum listed price discount is 40% at the highest spend tier.
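As a worked example of how these figures combine, here is one plausible reading in which the average 20% cache savings applies on top of the already-discounted price. This is an illustrative sketch of the arithmetic, not a CogCache pricing API, and actual cache savings vary by use case as noted above.

```python
# Effective cost under a tier discount (25-40%) plus the additional
# average 20% savings from cached responses, applied multiplicatively.
# Illustrative calculation only; function name is hypothetical.

def effective_cost(market_price: float,
                   tier_discount: float,
                   cache_savings: float = 0.20) -> float:
    """Price after the listed discount, then cache savings on the remainder."""
    discounted = market_price * (1 - tier_discount)
    return discounted * (1 - cache_savings)
```

At the highest tier, a $100 market price becomes $100 × 0.60 × 0.80 = $48 under this reading, i.e. a combined saving of about 52%.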
If your monthly spend is over $50,000, contact our sales team.
No, a credit card is not required to use CogCache. You can start using CogCache immediately with a $20 credit.
Credits automatically refill based on the limits you set.
Private Endpoints create a secure, dedicated network interface that connects your AI application directly to CogCache through Azure's private backbone. This allows you to access CogCache's capabilities through your private virtual networks without exposing traffic to the public internet, ensuring enhanced security and compliance for sensitive AI workloads.
We monitor the total number of queries per day to our AI models.
Our system identifies repetitive queries and serves them from the cache instead of calling the LLM.
This reduction in direct LLM calls minimizes the computational load, leading to lower operational costs.
Over time, the cumulative savings from reduced LLM queries add up, driving down overall costs and improving efficiency.
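The flow described above can be sketched as a simple query cache: repeated queries hit the cache, and only cache misses incur an LLM call. The class and method names here are illustrative assumptions; CogCache's actual implementation is not described in this document.

```python
# Minimal sketch of the caching flow: repeated queries are served from the
# cache, reducing direct LLM calls. Names are hypothetical, not a CogCache API.
import hashlib

class QueryCache:
    def __init__(self):
        self._store = {}     # cache key -> cached response
        self.llm_calls = 0   # direct LLM calls avoided grow as hits accumulate

    def _key(self, query: str) -> str:
        # Normalize, then hash, so trivially identical queries share one entry.
        return hashlib.sha256(query.strip().lower().encode()).hexdigest()

    def answer(self, query: str, llm) -> str:
        key = self._key(query)
        if key not in self._store:
            self._store[key] = llm(query)  # cache miss: one real LLM call
            self.llm_calls += 1
        return self._store[key]            # cache hit: no LLM call
```

Asking the same question twice costs only one LLM call; it is those avoided calls that drive the cumulative savings described above.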