50+ AI Models · 99.9% Uptime · Minimum Added Latency

Scale · Security · Observability · Control

Ship safely from day one

Everything you need — without building your own gateway or maintaining custom models.

Key protection

Your API key is managed on our side and never exposed to the client. Rotate and revoke instantly from the dashboard.

Rate limiting

Per-user and per-endpoint limits stop abuse before it drives up your costs or degrades service for everyone else.
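On the client side, a rate-limited request typically surfaces as an HTTP 429 error. A minimal retry sketch, assuming the SDK's errors expose a `status` field (as most OpenAI-compatible SDKs do — check your SDK's error shape):

```javascript
// Retry a request with exponential backoff when the gateway rate-limits it.
// Assumes thrown errors carry a numeric `status` property (429 = rate limited).
async function withRetries(fn, { maxAttempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only retry rate-limit errors, and only while attempts remain.
      if (err.status !== 429 || attempt >= maxAttempts) throw err;
      // Backoff doubles each attempt: 500ms, 1000ms, 2000ms, ...
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Usage: `const response = await withRetries(() => client.chat.completions.create({ ... }))`.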

Realtime logs

Inspect requests, responses and errors in one place. Filter by user, endpoint or provider.

Model control

Switch models or providers without shipping a new app build. Useful for A/B tests.
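One way to keep model choice out of your app builds is to resolve the model name from config at request time. A sketch of a simple A/B split — the variant model and the parity-based bucketing below are illustrative assumptions, not a bitmesh feature:

```javascript
// Model names live in config, so an A/B switch is a config change,
// not an app release. The variant model id here is a hypothetical example.
const MODELS = {
  control: 'meta-llama/Llama-3.2-3B-Instruct-Turbo',
  variant: 'mistralai/Mistral-7B-Instruct-v0.3'
};

function pickModel(userId) {
  // Stable 50/50 split on user id parity; a real test would use
  // proper hashing so bucket assignment survives id reshuffles.
  return userId % 2 === 0 ? MODELS.control : MODELS.variant;
}
```

The chosen name is then passed as the `model` field of the request.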

Usage alerts

Get notified when spend or token usage crosses thresholds you set.

Team ready

Invite teammates, share services and manage production vs. staging keys.

Access leading AI models

We provide the models — one API key gives you access to OpenAI, Anthropic, Mistral, and more.

OpenAI · Anthropic · Mistral · Gemini · Together · Groq

Integrate in minutes

Point your existing OpenAI, Anthropic or other SDKs at our API. Get your API key from the dashboard — we provide the models, so no provider keys to manage.

  • Get your API key from the dashboard
  • One key to access all supported models
  • Log, throttle and control requests without touching client code
// Base URL: https://api.bitmesh.ai

const { Client } = require('bitmesh-ai');

const client = new Client({
  consumerKey: process.env.BITMESH_CONSUMER_KEY,
  consumerSecret: process.env.BITMESH_CONSUMER_SECRET
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'meta-llama/Llama-3.2-3B-Instruct-Turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What are some fun things to do with AI?' }
    ],
    max_tokens: 1000
  });

  console.log(response.content);
}

main();
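If you'd rather keep an existing OpenAI-compatible SDK, you can point it at the gateway by overriding its base URL. A sketch under assumptions: the `/v1` path suffix and the exact env var names should be confirmed against your dashboard:

```javascript
// Build the client options once so production vs. staging keys stay
// in config rather than scattered through the code.
function gatewayOptions(env = process.env) {
  return {
    apiKey: env.BITMESH_CONSUMER_KEY,        // your bitmesh key, not a provider key
    baseURL: 'https://api.bitmesh.ai/v1'     // assumed path; confirm in the dashboard
  };
}

// With the official `openai` package installed, usage would look like:
//   const OpenAI = require('openai');
//   const client = new OpenAI(gatewayOptions());
//   const response = await client.chat.completions.create({ ... });
```

Keeping the options in one function also makes it easy to swap environments by passing a different `env` object.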

FAQs

Still curious? Here are some answers to common questions.

Do I need to bring my own API keys?

No. We provide the models. Sign up, get your API key from the dashboard, and start building. No provider keys required.

What models do you offer?

We provide access to leading models from OpenAI, Anthropic, Mistral, Gemini, and more. One API key, one endpoint.

How much work is it to integrate?

Most teams can drop in the proxy URL and env var in under an hour. No infra to maintain.