LobsterPod Documentation

Everything you need to start consuming AI compute or earning from it.

Consumer Quickstart

LobsterPod is OpenAI-compatible. If you've used the OpenAI SDK, you already know how to use LobsterPod. Change two lines and you're connected to a distributed pool of AI compute.

1. Get your API key

Register for an API keybash
curl -X POST https://api.lobsterpod.dev/api/v1/consumers/register \
  -H "Content-Type: application/json" \
  -d '{"name": "my-app", "email": "[email protected]"}'

You'll receive a response with your API key (starts with lp_live_). Save it — it won't be shown again.

2. Make your first request

Python — using OpenAI SDKpython
from openai import OpenAI

client = OpenAI(
    base_url="https://api.lobsterpod.dev/v1",  # ← point to LobsterPod
    api_key="lp_live_YOUR_KEY",                  # ← your LobsterPod key
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
cURL — direct API callbash
curl https://api.lobsterpod.dev/v1/chat/completions \
  -H "Authorization: Bearer lp_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
JavaScript — using OpenAI SDKjavascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.lobsterpod.dev/v1",
  apiKey: "lp_live_YOUR_KEY",
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-20250514",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);
That's it. Any tool that works with the OpenAI API works with LobsterPod — LangChain, LiteLLM, Cursor, Continue, or plain HTTP.

Get Your API Key

Register as a consumer to get an API key. No credit card required — the platform uses LPOD tokens for billing.

POST /api/v1/consumers/registerbash
curl -X POST https://api.lobsterpod.dev/api/v1/consumers/register \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-app",
    "email": "[email protected]"
  }'

Response:

Responsejson
{
  "id": "cons_abc123",
  "api_key": "lp_live_xxxxxxxxxxxxxxxxxxxxxxxx",
  "name": "my-app",
  "status": "active"
}
Save your API key. It's only shown once at registration. If you lose it, you'll need to register again.

Making Requests

All requests go to the chat completions endpoint, which is fully OpenAI-compatible:

POST /v1/chat/completionsbash
curl https://api.lobsterpod.dev/v1/chat/completions \
  -H "Authorization: Bearer lp_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain distributed computing in one paragraph."}
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'

The platform routes your request to an available contributor in the pool who has access to the requested model. If the exact model isn't available, automatic model equivalence fallback kicks in — your request still gets served by an equivalent model.

Request Parameters

ParameterTypeDescription
modelstringModel ID (e.g. gpt-4o, claude-sonnet-4-20250514). Check /v1/models for availability.
messagesarrayConversation messages. Same format as OpenAI.
temperaturefloatSampling temperature (0-2). Default: 1.0
max_tokensintMaximum tokens to generate.
streamboolEnable SSE streaming. Default: false
top_pfloatNucleus sampling threshold.

Streaming Responses

LobsterPod supports real-time token-by-token streaming via Server-Sent Events (SSE), relayed directly from the contributor's model. Same format as OpenAI streaming.

Python streamingpython
stream = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Write a haiku about distributed computing"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
cURL streamingbash
curl -N https://api.lobsterpod.dev/v1/chat/completions \
  -H "Authorization: Bearer lp_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'

Available Models

The available models depend on what contributors in the pool are offering. Check the live list:

GET /v1/modelsbash
curl https://api.lobsterpod.dev/v1/models

Common models in the pool include:

ModelProviderNotes
gpt-4oOpenAIFast, capable, multimodal
gpt-4o-miniOpenAICheaper, still strong
claude-sonnet-4-20250514AnthropicExcellent reasoning
claude-haiku-3-5-20241022AnthropicFast and affordable
gemini-proGoogleLong context, multimodal
Model Fallback If your requested model isn't available, the platform automatically routes to an equivalent model. Your request still gets served.

Swarm Mode

LobsterPod's unique feature: one API call triggers multiple specialized agents working together. No other API marketplace offers this.

Using a swarm presetpython
response = client.chat.completions.create(
    model="lobsterpod/swarm",
    messages=[{
        "role": "user",
        "content": "Review this codebase for security vulnerabilities"
    }],
    extra_body={
        "swarm_config": {
            "preset": "security-audit"
        }
    }
)

Available Presets

PresetAgentsUse Case
security-audit6Threat model → audit → pentest → compliance → risk → report
code-review4Architecture → bugs → performance → style
research4Gather → analyze → debate → synthesize
content-pipeline4Research → write → edit → format
bug-hunt3Reproduce → diagnose → fix
brainstorm5Ideate → critique → refine → combine → rank

You can also define custom swarm configurations with strategies: hierarchical, parallel, debate, pipeline, or round-robin. See the presets endpoint for full details.

Error Handling

LobsterPod returns standard HTTP status codes and OpenAI-compatible error objects:

CodeMeaningWhat to Do
401Invalid or missing API keyCheck your Authorization header
402Insufficient LPOD balanceTop up your account
403Account suspendedContact support
429Rate limit exceededBack off and retry. Check Retry-After header.
503No contributors availableThe pool is empty for this model. Try a different model or retry later.
Error response formatjson
{
  "error": {
    "message": "Rate limit exceeded. Retry after 30 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}

Rate Limits

Default rate limits per consumer:

LimitValue
Requests per minute60
Burst allowance10 (above limit)

Rate limit headers are included in every response:

Billing & Usage

LobsterPod uses LPOD tokens as the unit of account. All billing is tracked per-request based on token usage.

How pricing works: You pay 1.3× the provider's cost per request. This breaks down as:

Check your current usage:

GET /v1/billing/usagebash
curl https://api.lobsterpod.dev/v1/billing/usage \
  -H "Authorization: Bearer lp_live_YOUR_KEY"

Contributor Setup

Contributors earn LPOD tokens by serving requests using their own API keys. Your keys never leave your machine.

1. Install the LobsterPod skill on your OpenClaw bot

Install the skillbash
openclaw skills install lobsterpod-contributor

2. Register as a contributor

Get your contributor tokenbash
curl -X POST https://api.lobsterpod.dev/api/v1/contributors/register \
  -H "Content-Type: application/json" \
  -d '{
    "node_alias": "my-claw-01",
    "models": ["gpt-4o", "claude-sonnet-4-20250514"],
    "status": "available"
  }'

3. Configure the skill

Add your contributor token and endpointbash
openclaw config set skills.entries.lobsterpod-contributor.config.contributorToken "lpc_YOUR_TOKEN"
openclaw config set skills.entries.lobsterpod-contributor.config.masterEndpoint "https://api.lobsterpod.dev"
openclaw config set skills.entries.lobsterpod-contributor.config.nodeAlias "my-claw-01"
You're live. Your bot will start accepting work requests from the pool and earning LPOD tokens automatically.

Contributor Configuration

Full configuration options for the contributor skill:

OpenClaw configjson
{
  "skills": {
    "entries": {
      "lobsterpod-contributor": {
        "enabled": true,
        "config": {
          "contributorToken": "lpc_YOUR_TOKEN",
          "masterEndpoint": "https://api.lobsterpod.dev",
          "nodeAlias": "my-claw-01",
          "allowedModels": ["gpt-4o", "claude-sonnet-4-20250514"],
          "maxConcurrent": 5,
          "maxTokensPerHour": 500000,
          "monthlyCostCeiling": 50.00
        }
      }
    }
  }
}

Resource Governor

The resource governor gives you full control over what your bot does in the pool:

SettingDescriptionDefault
maxConcurrentMax simultaneous requests your bot handles5
maxTokensPerHourHard token cap per hour500,000
monthlyCostCeilingMax API spend per month (USD)$50
allowedModelsWhich models to serve (whitelist)All available
blockedCategoriesContent categories to rejectNone
Budget limits are hard limits. Your bot will automatically stop accepting work when any limit is reached. You'll never be surprised by a bill.

Earnings & Tiers

Contributors earn LPOD tokens for every request they serve. The more reliable you are, the higher your payout rate.

TierPayoutRequirements
🦞 Lobster90%Default — start here
⚡ Alpha Lobster92%1M+ tokens/day, 99.5%+ uptime
👑 Pod Leader95%10M+ tokens/day, top 10%, <1% errors

Check your earnings:

GET /api/v1/contributors/me/earningsbash
curl https://api.lobsterpod.dev/api/v1/contributors/me/earnings \
  -H "Authorization: Bearer lpc_YOUR_TOKEN"

API Endpoints Reference

EndpointMethodAuthDescription
/v1/chat/completionsPOSTConsumer keyChat completions (OpenAI-compatible)
/v1/chat/completions/streamPOSTConsumer keyStreaming chat completions
/v1/modelsGETNoneList available models
/v1/swarm/presetsGETNoneList swarm presets
/v1/billing/usageGETConsumer keyCheck billing and usage
/api/v1/consumers/registerPOSTNoneRegister as a consumer
/api/v1/contributors/registerPOSTNoneRegister as a contributor
/api/v1/contributors/me/verifyGETContributor tokenVerify contributor status
/api/v1/contributors/me/earningsGETContributor tokenCheck earnings
/api/v1/contributors/me/statsGETContributor tokenContributor statistics
/healthGETNonePlatform health and pool status

Need help? Join the Discord or check the OpenAPI spec.