Reverse Proxy API
A GPT proxy with custom API keys.
From time to time, our servers may experience high usage, leading to slower response rates and occasional error responses on Free GPT. Rest assured, we are continuously working on increasing our server capacity and OpenAI credits to accommodate a growing number of users.
For users seeking uninterrupted access to GPT and ongoing support for new features, we offer custom API keys. With these keys, you can enjoy unhindered usage of GPT, enabling you to leverage its capabilities to the fullest with Free GPT Playground.
Why use a Reverse Proxy?
The OpenAI API is not free: it is billed according to usage. A free trial credit is available, but it expires three months after the OpenAI account is created. After that, users must purchase credit to continue using the API.
A reverse proxy API, in general, is a service that sits between the client and a web server. It handles requests and responses on behalf of the server, intercepting requests from the client, forwarding them to the appropriate backend server, and then sending the corresponding responses back to the client.
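In practice, a client talks to the proxy exactly as it would talk to the upstream API, only with a different base URL and key. The snippet below is a minimal sketch using the official OpenAI Python SDK (openai>=1.0); the proxy URL and key placeholder are assumptions, so substitute the values issued with your plan.

```python
# Minimal sketch, assuming an OpenAI-compatible reverse proxy endpoint.
# The base URL and API key below are placeholders, not real service values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-proxy.invalid/v1",  # hypothetical reverse proxy URL
    api_key="YOUR_CUSTOM_API_KEY",                # custom key issued by the proxy service
)

# The request goes to the proxy, which forwards it to the upstream OpenAI API
# and relays the response back to the client.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from behind a reverse proxy!"}],
)
print(response.choices[0].message.content)
```

Because only the base URL and key change, existing OpenAI client code can usually be pointed at the proxy without further modification.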
Multiple Models
Our Reverse Proxy APIs include a wide array of advanced AI models from different developers, ranging from GPT-3.5, known for its strong language understanding and generation, to the newer GPT-4-Turbo. The API also provides access to Midjourney and Stable Diffusion, each tailored for distinct AI image generation results. The LLaMA family, including LLaMA-2-70b, LLaMA-2-13b, and LLaMA-2-7b, offers scalable options for projects of different sizes. Additionally, our service includes the Code-LLaMA models, which are designed specifically to help with coding and programming tasks.
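Selecting a different model is just a matter of changing the `model` field in the request. The sketch below assumes the proxy exposes every model through the same OpenAI-compatible chat endpoint and that the identifiers match the credit table further down; both are assumptions on my part.

```python
# Sketch: trying the same prompt against several models via one proxy endpoint.
# Base URL, key, and the exact model identifiers are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-proxy.invalid/v1",  # hypothetical proxy URL
    api_key="YOUR_CUSTOM_API_KEY",
)

prompt = "Write a haiku about reverse proxies."

for model in ["gpt-3.5-turbo", "gpt-4-1106-preview", "llama-2-70b", "code-llama-34b"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```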
Pricing
Tier 1: $5.99
Tier 2: $9.99
Tier 3: $19.99
Credit Consumption Rules
Different models have different consumption rules:
| Model | Billing Mode | Credit Standard |
|---|---|---|
| gpt-3.5-turbo | Per Use | 10 credits/use |
| gpt-3.5-turbo-16k | Per Use | 10 credits/use |
| gpt-4 | Per Use | 600 credits/use |
| gpt-4-32k | Per Token | 2 credits/token |
| gpt-4-v | Per Use | 800 credits/use |
| gpt-4-all | Per Use | 1000 credits/use |
| gpt-4-dalle | Per Use | 1000 credits/use |
| gpt-4-1106-preview | Per Token | 1 credit/token |
| claude-1-100k | Per Use | 20 credits/use |
| claude-2-100k | Per Use | 20 credits/use |
| midjourney | Per Use | 1500 credits/use |
| mj | Per Use | 1500 credits/use |
| google-palm | Per Use | 10 credits/use |
| llama-2-70b | Per Use | 10 credits/use |
| llama-2-13b | Per Use | 10 credits/use |
| llama-2-7b | Per Use | 10 credits/use |
| code-llama-34b | Per Use | 10 credits/use |
| code-llama-13b | Per Use | 10 credits/use |
| code-llama-7b | Per Use | 10 credits/use |
| stable-diffusion | Per Use | 10 credits/use |
This table provides a simplified overview of the credit consumption for different AI models, focusing on the billing mode and the standard rate of credit usage.
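To illustrate how the two billing modes combine, here is a small sketch (an illustration, not part of the service) that estimates credit spend from the rates published above; only a few models are included for brevity.

```python
# Sketch: estimating credit consumption from the published table.
# Rates are copied from the table above; the helper itself is hypothetical.
PER_USE_CREDITS = {
    "gpt-3.5-turbo": 10,
    "gpt-4": 600,
    "gpt-4-v": 800,
    "claude-2-100k": 20,
    "midjourney": 1500,
    "stable-diffusion": 10,
}
PER_TOKEN_CREDITS = {
    "gpt-4-32k": 2,
    "gpt-4-1106-preview": 1,
}

def estimate_credits(model: str, calls: int = 0, tokens: int = 0) -> int:
    """Return credits consumed for the given number of calls (per-use models)
    or tokens (per-token models)."""
    if model in PER_USE_CREDITS:
        return PER_USE_CREDITS[model] * calls
    if model in PER_TOKEN_CREDITS:
        return PER_TOKEN_CREDITS[model] * tokens
    raise ValueError(f"Unknown model: {model}")

# Example: 50 gpt-3.5-turbo calls plus 2,000 tokens on gpt-4-1106-preview.
total = (estimate_credits("gpt-3.5-turbo", calls=50)
         + estimate_credits("gpt-4-1106-preview", tokens=2000))
print(f"Estimated credits: {total}")  # 50*10 + 2000*1 = 2500
```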