API Documentation

Complete guide to integrating with our AI Proxy API. Get started with curl examples and SDKs for your favorite programming language.

Getting Started

The AI Proxy API enables seamless access to multiple AI models through one unified endpoint. To get started, create an API key from your dashboard. Every key consists of a Consumer Key and a Consumer Secret used to authenticate requests. The Consumer Secret is never sent over the network; it is only used to generate request signatures, keeping your credentials safe.

When creating an API key, you can configure optional settings such as token limits, request limits, AI model restrictions, and expiration dates.

All API keys use the same base URL. See the Signature Generation section to learn how to authenticate your requests, or use our SDKs, which handle authentication automatically.

Base URL:

https://aiproxyapi-production.up.railway.app

Software Development Kits (SDKs)

We provide official SDKs for popular programming languages to make integration easier.


Python SDK

Installation

pip install ai-proxy-sdk

Usage

from ai_proxy import Client

client = Client(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="openai/gpt-4",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ]
)

print(response.choices[0].message.content)

JavaScript/Node.js SDK

Installation

npm install ai-proxy-sdk
# or
yarn add ai-proxy-sdk

Usage

const { Client } = require('ai-proxy-sdk');

const client = new Client({
  apiKey: 'YOUR_API_KEY'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages: [
      { role: 'user', content: 'Hello, world!' }
    ]
  });

  console.log(response.choices[0].message.content);
}

main();

PHP SDK

Installation

composer require ai-proxy/php-sdk

Basic Usage

The PHP SDK uses 0-legged OAuth 1.0 authentication with your consumer key and secret.

require_once 'vendor/autoload.php';

use AiProxy\Client;

$consumerKey = 'YOUR_CONSUMER_KEY';
$consumerSecret = 'YOUR_CONSUMER_SECRET';

// Production client (default base URL)
$client = new Client($consumerKey, $consumerSecret);

// Simple one-shot chat with a string prompt
$response = $client->chat('What are some fun things to do with AI?');

print_r($response);

Using a Local / Dev Server

$client = new Client(
    $consumerKey,
    $consumerSecret,
    'http://localhost:8003'
);

Advanced Usage with Messages Array

$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'What are some fun things to do with AI?'],
];

$response = $client->chat($messages);

Specifying Model and Extra Payload

$response = $client->chat(
    'Test request',
    'openai/gpt-4',  // Optional model (defaults to meta-llama/Llama-3.2-3B-Instruct-Turbo)
    [
        'temperature' => 0.7,
        'max_tokens' => 1000
    ]
);

Go SDK

Installation

go get github.com/ai-proxy/go-sdk

Usage

package main

import (
    "fmt"
    "github.com/ai-proxy/go-sdk"
)

func main() {
    client := sdk.NewClient("YOUR_API_KEY")

    response, err := client.Chat.Completions.Create(&sdk.ChatCompletionRequest{
        Model: "openai/gpt-4",
        Messages: []sdk.Message{
            {Role: "user", Content: "Hello, world!"},
        },
    })

    if err != nil {
        panic(err)
    }

    fmt.Println(response.Choices[0].Message.Content)
}

Ruby SDK

Installation

gem install ai-proxy-sdk

Usage

require 'ai-proxy-sdk'

client = AiProxy::Client.new(api_key: 'YOUR_API_KEY')

response = client.chat.completions.create(
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
)

puts response.choices[0].message.content

Java SDK

Installation

<dependency>
    <groupId>com.ai-proxy</groupId>
    <artifactId>ai-proxy-sdk</artifactId>
    <version>1.0.0</version>
</dependency>

Usage

import com.aiproxy.Client;
import com.aiproxy.models.*;

Client client = new Client("YOUR_API_KEY");

ChatCompletionRequest request = ChatCompletionRequest.builder()
    .model("openai/gpt-4")
    .messages(Arrays.asList(
        new Message("user", "Hello, world!")
    ))
    .build();

ChatCompletionResponse response = client.chat().completions().create(request);
System.out.println(response.getChoices().get(0).getMessage().getContent());

Signature Generation

All API requests must be authenticated using 0-legged OAuth (also known as "two-legged OAuth"). This method uses your consumer key and consumer secret to generate a signature for each request; no user tokens are required.

How OAuth 0-Legged Works

OAuth 0-legged authentication requires you to include OAuth details in the Authorization header of every request. The header format is:

Authorization: OAuth oauth_consumer_key="YOUR_CONSUMER_KEY",
    oauth_signature_method="HMAC-SHA1",
    oauth_timestamp="TIMESTAMP",
    oauth_nonce="UNIQUE_RANDOM_STRING",
    oauth_version="1.0",
    oauth_signature="COMPUTED_SIGNATURE"

OAuth Parameters

Name Description
oauth_consumer_key Your consumer key provided when you create an API key. Equivalent to a username.
oauth_signature_method Always HMAC-SHA1 (see RFC 5849 Section 3.4.2).
oauth_timestamp Unix timestamp at which the request was generated. Timestamps are valid for 5 minutes.
oauth_nonce A uniquely generated random string (recommended length: 32 characters). The same value cannot be used more than once.
oauth_version Always 1.0.
oauth_signature The request method, normalized URL, and normalized parameters joined together, then signed with HMAC-SHA1 using your Consumer Secret as the key. The request body is not included in the signature string.

Signature Generation Process

To generate the OAuth signature, follow these steps:

  1. Normalize the URL: Parse the request URL and build a normalized version (scheme://host:port/path). Include port only if it's non-standard (not 80 for HTTP, not 443 for HTTPS).
  2. Collect parameters: Gather all OAuth parameters (consumer key, timestamp, nonce, version, signature method) and any query parameters from the URL. Exclude the oauth_signature parameter itself.
  3. Normalize parameters: Sort all parameters alphabetically by key, then URL-encode each key and value, joining them with = and separating pairs with &.
  4. Build signature base string: Create a string in the format METHOD&URL&PARAMS where METHOD is the HTTP method (e.g., POST), URL is the normalized URL, and PARAMS is the normalized parameter string. URL-encode each component.
  5. Generate signature: Create a signing key by URL-encoding your consumer secret and appending & (since we don't use tokens). Then compute the HMAC-SHA1 hash of the signature base string using the signing key, and base64-encode the result.
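The five steps above can be sketched in Python. This is an illustrative implementation (function names are my own, not part of any SDK); it mirrors the PHP reference implementation in the Code Implementation section, including its use of form-style encoding (`quote_plus`, matching PHP's `urlencode()`):

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import parse_qsl, quote_plus, urlparse

def build_oauth_params(consumer_key: str) -> dict:
    """Assemble the OAuth parameters for a fresh request."""
    return {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": secrets.token_hex(16),  # 32-character random nonce
        "oauth_version": "1.0",
    }

def generate_oauth_signature(method: str, url: str, oauth_params: dict,
                             consumer_secret: str) -> str:
    parsed = urlparse(url)
    # Step 1: normalize the URL; include the port only when non-standard
    scheme, host, port = parsed.scheme, parsed.hostname, parsed.port
    normalized_url = f"{scheme}://{host}"
    if port and not ((scheme == "http" and port == 80) or
                     (scheme == "https" and port == 443)):
        normalized_url += f":{port}"
    normalized_url += parsed.path or "/"
    # Step 2: collect OAuth params plus query params, excluding oauth_signature
    params = {k: v for k, v in oauth_params.items() if k != "oauth_signature"}
    params.update(dict(parse_qsl(parsed.query)))
    # Step 3: sort alphabetically, encode keys/values, join with = and &
    param_string = "&".join(
        f"{quote_plus(k)}={quote_plus(str(v))}" for k, v in sorted(params.items())
    )
    # Step 4: build the base string METHOD&URL&PARAMS, encoding each component
    base_string = "&".join(quote_plus(s) for s in (method, normalized_url, param_string))
    # Step 5: sign with HMAC-SHA1; the key is the encoded secret plus "&"
    signing_key = quote_plus(consumer_secret) + "&"
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Given the same method, URL, parameters, and secret, this produces a stable base64 signature that goes into the `oauth_signature` field of the Authorization header.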

Required Headers

Every API request must include the following headers:

  • Authorization: OAuth 1.0 authorization header with all OAuth parameters and signature
  • Content-Type: application/json
  • X-Payload-Signature: SHA256 hash of the JSON request body concatenated with your consumer key and the OAuth signature
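Following the description above, the X-Payload-Signature header is a hex-encoded SHA-256 digest over the body, consumer key, and OAuth signature concatenated in that order. A minimal Python sketch (the function name is illustrative):

```python
import hashlib

def payload_signature(json_body: str, consumer_key: str, oauth_signature: str) -> str:
    """SHA-256 over body + consumer key + OAuth signature, hex-encoded."""
    data = json_body + consumer_key + oauth_signature
    return hashlib.sha256(data.encode()).hexdigest()
```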

Code Implementation

Here are the PHP functions to implement the signature generation process:

function generateOAuthParams(string $method, string $url, string $consumerKey, string $consumerSecret): array
{
    $params = [
        'oauth_consumer_key' => $consumerKey,
        'oauth_signature_method' => 'HMAC-SHA1',
        'oauth_timestamp' => (string) time(),
        'oauth_nonce' => bin2hex(random_bytes(8)),
        'oauth_version' => '1.0',
    ];

    $params['oauth_signature'] = generateSignature($method, $url, $params, $consumerSecret);

    return $params;
}

function generateSignature(string $method, string $url, array $params, string $consumerSecret): string
{
    // Parse and normalize URL
    $parsedUrl = parse_url($url);
    $scheme = $parsedUrl['scheme'] ?? 'http';
    $host = $parsedUrl['host'] ?? 'localhost';
    $port = $parsedUrl['port'] ?? null;
    $path = isset($parsedUrl['path']) ? ltrim($parsedUrl['path'], '/') : '';

    // Build normalized URL
    $normalizedUrl = $scheme . '://' . $host;
    if (($scheme === 'http' && $port !== null && $port !== 80) ||
        ($scheme === 'https' && $port !== null && $port !== 443)) {
        $normalizedUrl .= ':' . $port;
    }
    $normalizedUrl .= '/' . $path;

    // Get query parameters from URL
    $queryParams = [];
    if (isset($parsedUrl['query'])) {
        parse_str($parsedUrl['query'], $queryParams);
    }

    // Combine OAuth params and query params (excluding oauth_signature)
    $allParams = array_merge($params, $queryParams);
    unset($allParams['oauth_signature']);

    // Sort parameters
    ksort($allParams);

    // Normalize parameters
    $normalizedParams = [];
    foreach ($allParams as $key => $value) {
        $normalizedParams[] = urlencode($key) . '=' . urlencode((string) $value);
    }
    $paramString = implode('&', $normalizedParams);

    // Build signature base string: METHOD&URL&PARAMS
    $signatureBaseString = urlencode($method) . '&' . urlencode($normalizedUrl) . '&' . urlencode($paramString);

    // Build signing key (consumer secret + empty token secret)
    $signingKey = urlencode($consumerSecret) . '&';

    // Generate HMAC-SHA1 signature
    $signature = base64_encode(hash_hmac('sha1', $signatureBaseString, $signingKey, true));

    return $signature;
}

function buildAuthorizationHeader(array $oauthParams): string
{
    $header = 'OAuth ';
    $headerParts = [];

    foreach ($oauthParams as $key => $value) {
        if (strpos($key, 'oauth_') === 0) {
            $headerParts[] = urlencode($key) . '="' . urlencode((string) $value) . '"';
        }
    }

    $header .= implode(', ', $headerParts);

    return $header;
}

Complete Example

Here's a complete example of making an authenticated request:

$consumerKey = 'YOUR_CONSUMER_KEY';
$consumerSecret = 'YOUR_CONSUMER_SECRET';
$apiUrl = 'https://aiproxyapi-production.up.railway.app/chat';

// Prepare request payload
$payload = [
    'model' => 'meta-llama/Llama-3.2-3B-Instruct-Turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'What are some fun things to do with AI?'],
    ],
];
$jsonBody = json_encode($payload);

// Generate OAuth signature
$method = 'POST';
$oauthParams = generateOAuthParams($method, $apiUrl, $consumerKey, $consumerSecret);
$authHeader = buildAuthorizationHeader($oauthParams);

// Generate payload signature
$payloadSignature = hash('sha256', $jsonBody . $consumerKey . $oauthParams['oauth_signature']);

// Make request
$ch = curl_init();
curl_setopt_array($ch, [
    CURLOPT_URL => $apiUrl,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: ' . $authHeader,
        'Accept: application/json',
        'Content-Type: application/json',
        'X-Payload-Signature: ' . $payloadSignature,
    ],
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => $jsonBody,
]);

$response = curl_exec($ch);
curl_close($ch);

Tip: For PHP applications, we recommend using the PHP SDK, which handles all signature generation automatically.

API Endpoints

POST

Chat Completions

/chat

Create a chat completion for the given conversation. This endpoint supports all major AI models. All requests must be authenticated using OAuth 0-legged signatures (see Signature Generation).

Parameters

Parameter Type Description Required/Default
messages array A list of messages comprising the conversation so far. Each message should have a "role" (system, user, or assistant) and "content". Required
model string The name of the model to query. Required if your API key is not restricted to a specific model. Prohibited if your API key is restricted to a specific model. Required/Prohibited
max_tokens integer The maximum number of tokens to generate in the completion. Optional
temperature number A decimal number from 0-1 that determines the degree of randomness in the response. Lower values (closer to 0) make the output more deterministic, while higher values (closer to 1) make it more random. Optional
frequency_penalty number A number between -2.0 and 2.0 where a positive value decreases the likelihood of repeating tokens that have already been mentioned in the text. Optional
repetition_penalty number A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition. Optional
test boolean If true, the response will not cost any balance and will return dummy data. Useful for testing without consuming credits. Default: false
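For instance, setting the test flag lets you verify your integration end to end without consuming credits (the request below assumes an unrestricted API key, so model is included):

```json
{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "user", "content": "Ping"}
  ],
  "test": true
}
```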

Example - Request Headers

Authorization: OAuth oauth_consumer_key="KO45tkb7vs6HPdjZMkzWCgpKqGrycRol", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1766991207", oauth_nonce="b2362ab1f26de6ea", oauth_version="1.0", oauth_signature="1Dk9QHMOfVpSJMUmRJjOcwqanSI%3D"
Accept: application/json
Content-Type: application/json
User-Agent: MyTestClient/1.0
X-Payload-Signature: 09898af05784a4eb7d3a68323f0d56bea18598a71f4317c535297cdf7325b867

Example - Request Body

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What are some fun things to do with AI?"}
  ],
  "max_tokens": 1000,
  "temperature": 0.8,
  "frequency_penalty": 0.5,
  "repetition_penalty": 1.2,
  "test": false
}

Note: If your API key is restricted to a specific model, omit the model field; the restricted model is applied automatically. If your key is unrestricted and you omit the field, the default model is used.

Full URL

https://aiproxyapi-production.up.railway.app/chat

Example - Response

{
  "id": "oR7yAhG-57nCBj-9b576165af92311c",
  "object": "chat.completion",
  "created": 1766991277,
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "prompt": [],
  "choices": [
    {
      "finish_reason": "stop",
      "seed": 1563180217642744800,
      "index": 0,
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "There are many fun things you can do with AI, depending on your interests and the type of AI you're working with. Here are some ideas to get you started:\n\n1. **Chatbots and Conversations**: Engage in natural-sounding conversations with chatbots, like me, or build your own chatbot to practice conversational AI.\n2. **Text Generation**: Write stories, poetry, or even entire books using AI-powered writing tools like language generators or writing assistants.\n3. **Image and Video Editing**: Use AI-powered photo and video editing tools to create stunning visuals, such as automatic color correction, object removal, or video stabilization.\n4. **Game Development**: Create games using AI-powered game development tools, like procedural generation or AI-powered NPCs (non-player characters).\n5. **Music Composition**: Compose music using AI-powered music tools, such as generating melodies, harmonies, or even entire songs.\n6. **Language Translation**: Practice language translation using AI-powered translation tools, like Google Translate, or build your own translation system.\n7. **Personalized Recommendations**: Use AI-powered recommendation engines to suggest movies, books, music, or products based on your interests.\n8. **Creative Writing**: Use AI-powered writing tools to generate ideas, outlines, or even entire scripts for stories, plays, or screenplays.\n9. **Art and Design**: Create art using AI-powered tools, like generating abstract art or designing logos and graphics.\n10. **Science and Research**: Use AI-powered tools to analyze data, simulate experiments, or explore complex scientific concepts.\n\nSome popular AI platforms for experimentation include:\n\n* Google's TensorFlow and Keras\n* Microsoft's Azure Machine Learning\n* IBM's Watson Studio\n* Apple's Core ML\n* OpenCV for computer vision and image processing\n\nRemember to always follow the terms of service and usage guidelines for any AI platform or tool you use.\n\nWhich of these ideas sparks your curiosity?",
        "tool_calls": []
      }
    }
  ],
  "usage": {
    "prompt_tokens": 51,
    "completion_tokens": 390,
    "total_tokens": 441,
    "cached_tokens": 0
  }
}

Maintaining Conversation Context

Every query to the chat model is self-contained, meaning new queries won't automatically have access to previous messages. To maintain conversation context for long-running conversations (like chatbots), you need to include the conversation history in the messages array.

Use the assistant role to provide historical context of how the model has responded to prior queries. Include previous user messages with the user role and previous model responses with the assistant role.

Example - Multi-turn conversation:

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What are some fun things to do in New York?"},
    {"role": "assistant", "content": "You could visit the Empire State Building, walk through Central Park, or explore the Metropolitan Museum of Art!"},
    {"role": "user", "content": "That sounds fun! Where is the Empire State Building located?"}
  ]
}

In this example, the model has access to the previous conversation, so it can provide context-aware responses about the Empire State Building's location. How your application stores and manages historical messages is up to you.
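One common pattern is to keep a running history list and append each completed exchange before building the next request. A sketch in Python (the helper name and storage approach are illustrative; your application may persist history in a database or session instead):

```python
def append_exchange(history: list, user_content: str, assistant_content: str) -> list:
    """Record one completed user/assistant turn in the conversation history."""
    history.append({"role": "user", "content": user_content})
    history.append({"role": "assistant", "content": assistant_content})
    return history

# Start with a system message, then grow the history turn by turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]
append_exchange(history, "What is 2+2?", "2+2 equals 4.")

# The next request payload includes the full history plus the new question.
payload = {
    "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
    "messages": history + [{"role": "user", "content": "What about 4+4?"}],
}
```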

Common Examples

The following examples show common request patterns. All requests require OAuth 0-legged authentication. For complete authentication examples, see the Signature Generation section or use the PHP SDK.

Simple Chat

A basic chat completion request with a single user message.

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}

With System Message

Include a system message to set the assistant's behavior.

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
}

Multi-turn Conversation

Maintain context across multiple messages.

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "2+2 equals 4."},
    {"role": "user", "content": "What about 4+4?"}
  ]
}

With Parameters

Include additional parameters like temperature and max_tokens.

{
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "messages": [
    {"role": "user", "content": "Write a haiku"}
  ],
  "temperature": 0.7,
  "max_tokens": 100
}

Error Handling

The API uses standard HTTP status codes to indicate success or failure. Error responses include a JSON body with error details.

Error Response Format

{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}

Common Status Codes

  • 200 - Success
  • 400 - Bad Request
  • 401 - Unauthorized
  • 404 - Not Found
  • 429 - Rate Limit Exceeded
  • 500 - Internal Server Error
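A client can map these status codes and the error body shape above into a simple handler. The sketch below is illustrative (the function name is my own); falling back to "unknown" guards against non-JSON error bodies such as gateway errors:

```python
import json

def parse_api_error(status_code: int, body: str):
    """Return (message, type, code) from an error response, or None on success."""
    if status_code == 200:
        return None
    try:
        err = json.loads(body).get("error", {})
    except (json.JSONDecodeError, AttributeError):
        err = {}
    return (
        err.get("message", "Unknown error"),
        err.get("type", "unknown"),
        err.get("code", "unknown"),
    )
```

On a 429 response, back off and retry after a delay rather than resending immediately; 401 usually means the signature, timestamp, or nonce was rejected, so regenerate the OAuth parameters before retrying.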