
Llama 4 Maverick Instruct (17Bx128E)

meta-llama/llama-4-maverick-17b-128e-instruct-fp8
Provider: Meta
Type: Chat

Description

Llama 4 Maverick Instruct (17Bx128E) by Meta is a chat model that accepts text and image input. Main uses: small & fast workloads, function calling, and vision. Variant: Instruct.

Specifications

Context Length: 1,048,576 tokens
Variant: Instruct
Quantization: FP8
Parameters: 17B
Input Modalities: Text, Image
Output Modalities: Text
Main Use: Small & Fast, Function Calling, Vision
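Since the model accepts both text and image input, a chat message for a vision request carries an array of content parts rather than a plain string. The sketch below builds such a message, assuming the proxy mirrors the common OpenAI-style `text` / `image_url` content-part shape (an assumption; check the SDK's own docs):

```javascript
// Sketch: build a multimodal user message for a vision request.
// ASSUMPTION: the proxy accepts OpenAI-style content parts
// ({ type: 'text' } and { type: 'image_url' }); verify against the SDK docs.
function buildVisionMessage(prompt, imageUrl) {
  return {
    role: 'user',
    content: [
      { type: 'text', text: prompt },
      { type: 'image_url', image_url: { url: imageUrl } }
    ]
  };
}

// The resulting object slots directly into the `messages` array of a
// chat.completions.create call.
const msg = buildVisionMessage('Describe this image.', 'https://example.com/photo.png');
```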

Pricing

Input Cost: 0.54 credits / 1M tokens
Output Cost: 1.70 credits / 1M tokens
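The per-million-token rates above make cost estimation a simple linear calculation. A small helper (hypothetical, not part of the SDK) to estimate the credit cost of a request:

```javascript
// Rates from the pricing table above.
const INPUT_RATE = 0.54;  // credits per 1M input tokens
const OUTPUT_RATE = 1.70; // credits per 1M output tokens

// Estimate the credit cost of a request from its token counts.
function estimateCost(inputTokens, outputTokens) {
  return (inputTokens / 1e6) * INPUT_RATE + (outputTokens / 1e6) * OUTPUT_RATE;
}

// e.g. a 2,000-token prompt with a 500-token reply costs
// roughly 0.00108 + 0.00085 = ~0.00193 credits.
```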

Usage Example

const { Client } = require('ai-proxy-sdk');

const client = new Client({
  consumerKey: process.env.BITMESH_CONSUMER_KEY,
  consumerSecret: process.env.BITMESH_CONSUMER_SECRET,
  baseUrl: process.env.BITMESH_API_BASE_URL || 'https://api.bitmesh.ai'
});

// Top-level await is not available in CommonJS modules, so the request
// is wrapped in an async function.
async function main() {
  const response = await client.chat.completions.create({
    model: 'meta-llama/llama-4-maverick-17b-128e-instruct-fp8',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What are some fun things to do with AI?' }
    ],
    max_tokens: 1000
  });
  console.log(response.choices[0].message.content);
}

main().catch(console.error);
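Function calling is listed among the model's main uses. The sketch below declares one tool and dispatches a returned tool call to a local implementation, assuming the proxy follows the widely used OpenAI-style `tools` / `tool_calls` schema (an assumption; `get_weather` and its implementation are hypothetical):

```javascript
// Sketch: function calling, ASSUMING an OpenAI-style tools schema
// (not confirmed by this page). `get_weather` is a hypothetical tool.
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city']
      }
    }
  }
];

// Local implementations keyed by tool name.
const impls = {
  get_weather: ({ city }) => `Sunny in ${city}`
};

// Run one tool call returned by the model; the model sends arguments
// as a JSON string, so they must be parsed before calling the function.
function dispatchToolCall(toolCall) {
  const fn = impls[toolCall.function.name];
  const args = JSON.parse(toolCall.function.arguments);
  return fn(args);
}
```

The `tools` array would be passed alongside `messages` in `client.chat.completions.create({ model, messages, tools })`, and each entry of the response's `tool_calls` (if present) fed through `dispatchToolCall`, with the result sent back as a `tool` role message.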