# Managed AI Proxy
The managed AI proxy lets your workspace code call AI providers (Anthropic, OpenAI, Google, Brave Search) without ever touching an API key. Rigbox holds the provider keys on your behalf, injects them at request time, and tracks usage against your credit balance.
## How It Works
When managed mode is active for a workspace, outbound AI SDK requests are routed through a proxy running on the Rigbox data plane. The proxy:
- Intercepts the request from your workspace VM
- Injects the correct provider API key (which never enters your VM)
- Forwards the request to the upstream provider
- Logs token usage and deducts from your credit balance
- Returns the response to your code
Your workspace code uses the standard SDK for each provider — you only change the base URL to point at the proxy endpoint.
Because the proxy speaks the same protocol as each provider’s native API, you can switch between managed mode and your own keys with a single configuration change.
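That configuration change can be sketched as a one-line base-URL switch. The proxy URL below is a placeholder for illustration, not a documented endpoint — in practice `rig proxy on` exports the real value into your environment:

```python
# Placeholder URLs for illustration only; the real proxy base URL is
# exported into your shell by `rig proxy on`.
PROXY_BASE_URL = "https://ai-proxy.rigbox.dev/v1"   # assumed proxy endpoint
ANTHROPIC_BASE_URL = "https://api.anthropic.com"    # Anthropic's public API

def base_url_for(mode: str) -> str:
    """Return the SDK base URL for 'managed' (proxy) or 'byok' (own key) mode."""
    return PROXY_BASE_URL if mode == "managed" else ANTHROPIC_BASE_URL
```

Everything else in your code — model names, message payloads, response handling — stays the same in both modes.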
## Supported Providers
| Provider | Models | SDK Compatibility |
|---|---|---|
| Anthropic | Claude 4 Opus, Claude 4 Sonnet, Claude 3.5 Haiku | Anthropic Python/JS SDK |
| OpenAI | GPT-4o, GPT-4o-mini, o3, o4-mini | OpenAI Python/JS SDK |
| Google | Gemini 2.5 Pro, Gemini 2.5 Flash | Google AI Python/JS SDK |
| Brave Search | Web Search API | REST / HTTP client |
## Credit Tiers
Every Rigbox account comes with AI credits that reset monthly.
| Tier | Credits / Month | Best For |
|---|---|---|
| Free | 250 | Trying out the platform, small experiments |
| Pro | 2,000 | Active development, prototyping, demos |
Credits map roughly to API cost — one credit equals approximately $0.01 of upstream provider spend. The exact mapping depends on the model and token count.
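As a rough illustration of that mapping (approximate by design — the actual deduction depends on model and token count):

```python
CREDIT_USD_VALUE = 0.01  # one credit ~ $0.01 of upstream provider spend

def estimate_credits(upstream_cost_usd: float) -> int:
    """Approximate credits deducted for a given upstream dollar cost."""
    return round(upstream_cost_usd / CREDIT_USD_VALUE)

# A request costing $0.45 upstream consumes roughly 45 credits, so the
# Free tier's 250 credits correspond to about $2.50 of provider spend.
```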
## Activate Managed Mode
### Via the API
Set the AI configuration for a workspace to managed mode with your chosen provider and model.
```bash
curl -X PUT https://api.rigbox.dev/api/workspaces/{workspace_id}/ai-config \
  -H "Authorization: Bearer $RIGBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "managed",
    "provider": "google",
    "model": "google/gemini-2.5-pro"
  }'
```
### Via the CLI (Inside a Workspace VM)
If you are already inside a running workspace, the `rig` CLI can activate managed mode and configure your shell in one step:

```bash
rig proxy on
```

This command:
- Enables managed proxy mode for the current workspace
- Prints shell `export` statements that set the correct base URLs for each AI SDK
- Routes all subsequent AI SDK calls through the Rigbox proxy

Run `eval $(rig proxy on)` to apply the exports to your current shell session automatically.
To deactivate:

```bash
rig proxy off
```
## Check Your Credit Balance
Query your remaining credits at any time.
```bash
curl -s https://api.rigbox.dev/api/users/me/credits \
  -H "Authorization: Bearer $RIGBOX_TOKEN" | jq .
```
See the Credits API reference for full response schema.
## View Usage Breakdown
Get a daily breakdown of credit consumption across your workspaces.
```bash
curl -s https://api.rigbox.dev/api/users/me/ai-usage \
  -H "Authorization: Bearer $RIGBOX_TOKEN" | jq .
```
See the AI Usage API reference for full response schema and query parameters.
## What Happens When Credits Run Out
When your credit balance reaches zero:
- API requests return HTTP 402 — the proxy responds with a `credits_exhausted` error code
- The UI shows an upgrade prompt — you can upgrade to Pro or switch to BYOK mode
- Existing services keep running — only new AI API calls are blocked; your workspace and non-AI services are unaffected
If you have a long-running agent or automated pipeline, it will start receiving 402 errors once credits are exhausted. Consider monitoring your balance programmatically using the credits endpoint.
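A minimal monitoring sketch using only the standard library — the response shape follows the example in this guide, and the alert threshold is an arbitrary choice you should tune for your pipeline:

```python
import json
import os
import urllib.request

LOW_CREDIT_THRESHOLD = 100  # arbitrary alert level; tune for your workload

def fetch_credits() -> dict:
    """Query the credits endpoint, e.g. {"remaining": 1795, "total": 2000, ...}."""
    req = urllib.request.Request(
        "https://api.rigbox.dev/api/users/me/credits",
        headers={"Authorization": f"Bearer {os.environ['RIGBOX_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_low(remaining: int, threshold: int = LOW_CREDIT_THRESHOLD) -> bool:
    """True when the balance is at or below the alert threshold."""
    return remaining <= threshold
```

Calling `is_low(fetch_credits()["remaining"])` on a schedule lets you pause an agent or alert a human before requests start failing with 402.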
## End-to-End Example
Here is a complete flow: activate managed mode, make an AI call from inside the workspace, then check remaining credits.
Step 1 — Activate managed mode for a workspace:
```bash
curl -X PUT https://api.rigbox.dev/api/workspaces/ws_abc123/ai-config \
  -H "Authorization: Bearer $RIGBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mode": "managed", "provider": "anthropic", "model": "claude-sonnet-4-20250514"}'
```
Step 2 — Inside the workspace VM, configure the SDK and make a call:
```bash
# Apply proxy environment variables
eval $(rig proxy on)
```

```python
# The proxy intercepts this call — no API key needed in your code
from anthropic import Anthropic

client = Anthropic()  # Uses proxy base URL from environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain microVMs in one paragraph."}],
)
print(message.content[0].text)
```
Step 3 — Check remaining credits:
```bash
curl -s https://api.rigbox.dev/api/users/me/credits \
  -H "Authorization: Bearer $RIGBOX_TOKEN" | jq .
# {"remaining": 1795, "total": 2000, "mode": "managed"}
```
Credit deduction happens synchronously — the balance is updated before the proxy returns the response to your code, so the credits endpoint always reflects the latest usage.
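Because deduction is synchronous, the cost of a single call can be measured by reading the balance immediately before and after it; a sketch:

```python
def credits_consumed(before: int, after: int) -> int:
    """Credits charged between two reads of the credits endpoint.

    Deduction is synchronous, so a read taken right after a call
    already reflects that call's cost.
    """
    return before - after

# e.g. a balance of 1800 before a call and 1795 after means the call cost 5 credits
```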
## Switching Providers
You can change the provider and model at any time without restarting the workspace. The next AI API call will use the new configuration.
```bash
# Switch from Anthropic to OpenAI
curl -X PUT https://api.rigbox.dev/api/workspaces/ws_abc123/ai-config \
  -H "Authorization: Bearer $RIGBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mode": "managed", "provider": "openai", "model": "gpt-4o"}'
```
If you are using the CLI inside the VM, run `rig proxy on` again to refresh the environment variables for the new provider.
## Next Steps