Furtune API · v1

Nova Keys

No browser. No clicking. Just clean, authenticated requests — the way a coder prefers it. Nova Keys give your apps and scripts direct access to the Furtune API.

Guardian Only

What are Nova Keys?

Nova Keys are secret API tokens that give your code direct access to Furtune — no web app required. Build integrations, automate workflows, or embed the Furtune Family into your own projects. Nova built the interface; what you do with it is up to you.

  • OpenAI-compatible completions endpoint — works with existing tooling
  • Choose which cat (agent) to use per request via the model field in the request body
  • Streaming SSE responses by default, non-streaming available
  • Billed against your Aimo balance — same as using the app
  • Create multiple keys (e.g. one per project), revoke any time

Guardian Feature: Nova doesn't hand out keys to just anyone. Nova Keys are exclusive to Guardian tier members. If you're on Companion or Partner, visit furtune.app/plans to upgrade.

Getting Your Key

  1. Go to Settings → Security in the Furtune app.
  2. Scroll to the Nova Keys panel and click Create Key.
  3. Give your key a name (e.g. "My script" or "Home server") and confirm.
  4. Copy the key immediately — it starts with ft_ and is shown only once. Store it somewhere safe (e.g. an .env file or a secrets manager).

One-time reveal: Nova shows you the full key exactly once. After that, only the prefix (e.g. ft_abc1…) is visible in the UI. Copy it now. If you lose it, revoke and create a new one — no exceptions.

Authentication

One header. Every request. Pass your Nova Key as a Bearer token in Authorization and you're in:

Authorization: Bearer ft_your_nova_key_here

No cookies, no session, no OAuth dance. Any authenticated route recognizes the Bearer token and knows exactly who you are.

Keep it secret: Nova built a secure system — don't be the weak link. Never commit a key to git or paste it somewhere public. Store it as an environment variable: FURTUNE_API_KEY=ft_...
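
As a sketch, here's what an authenticated request looks like using only the Python standard library. The key is read from the environment rather than hard-coded; the request is built but not sent, so you can see exactly what goes over the wire:

```python
# Build an authenticated Furtune API request with the standard library.
# The Nova Key comes from the FURTUNE_API_KEY environment variable.
import os
import json
import urllib.request

API_BASE = "https://furtune.app/api"

def build_request(path: str, body: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request against the Furtune API."""
    return urllib.request.Request(
        API_BASE + path,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The key never appears in source code — only in the environment.
req = build_request(
    "/completions",
    {"model": "nova", "messages": [{"role": "user", "content": "Hi"}], "stream": False},
    os.environ.get("FURTUNE_API_KEY", "ft_placeholder"),
)
print(req.get_header("Authorization"))
```

Send it with `urllib.request.urlopen(req)` — or, more comfortably, use one of the OpenAI-compatible clients shown in Code Examples below.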

Completions Endpoint

This is the one endpoint you need. Send messages, get responses — stateless by design. Nothing is saved to the app, no history is loaded from it. You own the context; Nova handles the intelligence.

POST https://furtune.app/api/completions

The API is OpenAI-compatible. Any library that speaks the OpenAI chat format — Python's openai package, LangChain, LlamaIndex — works here with minimal changes. Just swap the base URL and add one extra header.

Request Headers

| Header | Value | Required |
| --- | --- | --- |
| Authorization | Bearer ft_your_key | Yes |
| Content-Type | application/json | Yes |

Choosing a cat via the model field

Which cat you talk to is set by the model field in the request body — not a custom header. Use the cat's slug exactly as configured in the admin panel (e.g. "nova", "fable", "rumi").

Why model? Each cat has a distinct personality, system prompt, and set of abilities. The model slug tells Nova exactly who you're calling. It follows the standard OpenAI convention, so any compatible library works without extra configuration.

Request Body

JSON body with the following fields:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | Required | The slug of the cat to use (e.g. "nova", "fable"). Found in the admin panel. |
| messages | Message[] | Required | Array of messages (at least one). The cat's system prompt is prepended automatically. |
| stream | boolean | true | Stream the response as SSE. Set to false for a single JSON response. |
| max_tokens | number | agent default | Maximum tokens in the response. Also accepted as max_completion_tokens. |
| temperature | number | agent default | Sampling temperature (0–2). Higher = more creative/random. |
| top_p | number | agent default | Nucleus sampling parameter. |
| tools | Tool[] | Optional | OpenAI-format tool definitions for function calling. |
| tool_choice | string \| object | "auto" | Tool selection strategy. Only used if tools is provided. |
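
Assembled, a typical request body looks like this (values illustrative; only model and messages are required):

```json
{
  "model": "nova",
  "messages": [
    { "role": "user", "content": "Summarize this paragraph for me." }
  ],
  "stream": false,
  "max_tokens": 256,
  "temperature": 0.7
}
```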

Message format

| Field | Type | Description |
| --- | --- | --- |
| role | "user" \| "assistant" \| "system" \| "tool" | Message author. |
| content | string \| null | Message text. |
| tool_calls | ToolCall[] | For assistant messages that made tool calls. |
| tool_call_id | string | For tool result messages. |
| name | string | For tool result messages, the tool name. |

System prompts: The cat's system prompt is prepended automatically — you don't need to include one. If you do send a system message, it slots in after the cat's own prompt. Nova's personality always comes first.
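
For tool use, messages follow the standard OpenAI shape. A sketch of one round trip — the get_weather tool here is hypothetical, not a built-in:

```json
[
  { "role": "user", "content": "What's the weather in Oslo?" },
  { "role": "assistant", "content": null,
    "tool_calls": [
      { "id": "call_1", "type": "function",
        "function": { "name": "get_weather", "arguments": "{\"city\": \"Oslo\"}" } }
    ] },
  { "role": "tool", "tool_call_id": "call_1", "name": "get_weather",
    "content": "{\"temp_c\": 4, \"conditions\": \"rain\"}" }
]
```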

Response Format

Streaming (default)

When stream: true (the default), the response is a text/event-stream with standard SSE chunks:

// Each chunk:
data: {"id":"chatcmpl-...","choices":[{"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-...","choices":[{"delta":{"content":"! How can I help?"},"finish_reason":null}]}

// Final chunk includes usage:
data: {"id":"chatcmpl-...","choices":[{"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":42,"completion_tokens":18,"total_tokens":60}}

data: [DONE]
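
If you're not using an OpenAI client, the stream is easy to parse by hand: keep the `data: ` lines, decode each as JSON, and stop at `[DONE]`. A minimal sketch over a captured stream:

```python
# Parse SSE chunks without an OpenAI client. In a real app the lines
# would come from the HTTP response (e.g. requests' iter_lines); here
# we use a captured stream so the logic is easy to follow.
import json

def extract_deltas(sse_lines):
    """Yield content fragments from 'data:' lines, stopping at [DONE]."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

stream = [
    'data: {"id":"chatcmpl-1","choices":[{"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-1","choices":[{"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-1","choices":[{"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]
print("".join(extract_deltas(stream)))  # Hello!
```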

Non-Streaming

When stream: false, the response is a single JSON object:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 18,
    "total_tokens": 60
  }
}

Error Reference

| Status | Code / Body | Meaning |
| --- | --- | --- |
| 400 | { "error": "Bad Request", "details": [...] } | Missing or invalid request body (e.g. no model, or an empty messages array). |
| 401 | { "error": "Unauthorized" } | No key provided, key is invalid, or key has been revoked. |
| 402 | { "error": "Insufficient Aimo", "code": "INSUFFICIENT_AIMO" } | Your Aimo balance is too low. Wait for your daily refill or top up. |
| 404 | { "error": "Not found", "details": ["Agent not found"] } | The model slug does not match any known cat. |
| 502 | { "error": "Upstream connection failed", "details": [...] } | Furtune couldn't reach the underlying AI provider. Retry after a moment. |

Handling low Aimo in code: Check for HTTP 402 with code === "INSUFFICIENT_AIMO". Nova is generous with daily refills — but she won't run on empty.
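
A minimal way to branch on that in code — a sketch that assumes only the status code and the "code" field from the table above:

```python
# Classify a failed response using the status code and the "code"
# field from the error body.
def is_out_of_aimo(status: int, body: dict) -> bool:
    """True when the request failed because the Aimo balance ran dry."""
    return status == 402 and body.get("code") == "INSUFFICIENT_AIMO"

print(is_out_of_aimo(402, {"error": "Insufficient Aimo", "code": "INSUFFICIENT_AIMO"}))  # True
print(is_out_of_aimo(401, {"error": "Unauthorized"}))  # False
```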

Code Examples

Replace ft_your_key and the model slug with your real values. Copy, paste, run.

Streaming (default)

cURL:

curl https://furtune.app/api/completions \
  -X POST \
  -H "Authorization: Bearer ft_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nova",
    "messages": [
      { "role": "user", "content": "Write a haiku about the ocean." }
    ],
    "stream": true
  }' \
  --no-buffer

Python:

from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

stream = client.chat.completions.create(
    model="nova",
    messages=[
        {"role": "user", "content": "Write a haiku about the ocean."}
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

TypeScript:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const stream = await client.chat.completions.create({
  model: "nova",
  messages: [
    { role: "user", content: "Write a haiku about the ocean." }
  ],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}

Non-Streaming

cURL:

curl https://furtune.app/api/completions \
  -X POST \
  -H "Authorization: Bearer ft_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nova",
    "messages": [
      { "role": "user", "content": "What is 2 + 2?" }
    ],
    "stream": false
  }'

Python:

from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

response = client.chat.completions.create(
    model="nova",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=False,
)

print(response.choices[0].message.content)

TypeScript:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const response = await client.chat.completions.create({
  model: "nova",
  messages: [{ role: "user", content: "What is 2 + 2?" }],
  stream: false,
});

console.log(response.choices[0].message.content);

Multi-turn conversation

Since the endpoint is stateless, you maintain conversation history yourself and send it with each request:

Python:

from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

history = []

while True:
    user_input = input("You: ")
    history.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="nova",
        messages=history,
        stream=False,
    )

    reply = response.choices[0].message.content
    print(f"Cat: {reply}")
    history.append({"role": "assistant", "content": reply})

TypeScript:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const history: OpenAI.Chat.ChatCompletionMessageParam[] = [];

async function chat(userMessage: string) {
  history.push({ role: "user", content: userMessage });

  const response = await client.chat.completions.create({
    model: "nova",
    messages: history,
    stream: false,
  });

  const reply = response.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: reply });
  return reply;
}

Aimo Billing

Every API call draws from the same Aimo balance you use in the app. Nova accounts for every token — no surprises on your balance. Billing runs in two phases:

  1. Pre-charge — A hold is placed on your Aimo when the request begins, ensuring you have sufficient balance.
  2. Settle — When the response finishes (or the stream ends), the actual Aimo cost is calculated from real token usage and deducted. Any excess from the pre-charge is returned.

Cancel a stream early and you're refunded — Nova only charges for what was actually generated.
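
As an illustration of the two phases — the amounts here are hypothetical, not real Aimo prices:

```python
# Hypothetical illustration of pre-charge/settle arithmetic.
# Real Aimo costs depend on actual token usage.
def settle(precharge: float, actual_cost: float) -> float:
    """Aimo returned to your balance when a request settles:
    the unused part of the hold."""
    return max(precharge - actual_cost, 0.0)

# A 10-Aimo hold, a response that actually cost 6: 4 returned.
print(settle(10.0, 6.0))  # 4.0
# A cancelled stream that only generated 1 Aimo's worth: 9 back.
print(settle(10.0, 1.0))  # 9.0
```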

Daily Aimo refill: Your Aimo refills every day. The API triggers it automatically on your first request of the day — no need to open the app. Nova keeps the lights on.

Key Management API

Prefer to manage keys from code? These endpoints let you list, create, and revoke Nova Keys programmatically. Note: they require a valid session (logged-in user), not a Nova Key itself.

List Keys

GET https://furtune.app/api/nova-keys
// Response
{
  "apiKeys": [
    {
      "id": "key-uuid",
      "name": "My MacBook",
      "start": "ft_abc1...",       // visible prefix only
      "lastRequest": "2026-03-29T10:00:00Z",
      "expiresAt": null,
      "createdAt": "2026-03-20T12:00:00Z"
    }
  ]
}

Create Key

POST https://furtune.app/api/nova-keys
// Request body
{ "name": "My MacBook" }

// Response — full key shown ONLY in this response
{ "key": "ft_full_key_shown_only_here", ... }

Revoke Key

DELETE https://furtune.app/api/nova-keys/{id}
// Response
{ "success": true }

FAQ

Do I need to include message history?

Yes. The /api/completions endpoint is stateless — it has no memory of previous requests. To have a multi-turn conversation, include the full history in the messages array with each call (user and assistant turns alternating).

What values can I pass for model?

The model field takes the cat's slug — a short, human-readable identifier configured in the admin panel (e.g. "nova", "fable", "rumi"). Each slug maps to a specific cat with its own system prompt, engine, and abilities. If the slug doesn't match any cat, you'll get a 404.

How many Nova Keys can I have?

There's no hard limit. Create one per app, script, or environment. You can revoke any key at any time from Settings → Security.

Do keys expire?

Keys don't expire by default (the expiresAt field will be null). You can revoke a key manually at any time.

Is conversation content saved?

No. The /api/completions endpoint does not persist messages to the database. Your conversation data stays in your own app.

Why am I getting a 402 error?

Your Aimo balance is insufficient. The daily refill runs automatically on your next API call, so if it's a new day your balance should refill. If you've exhausted your daily Aimo, wait for tomorrow's refill or check your tier limits in the app.