Quick Start Guide

5-Minute Integration

Get started with PromptGuard Platform in three simple steps. For most integrations, the only change is pointing your existing LLM client at the PromptGuard proxy and swapping in a PromptGuard API key.

Step 1: Get Your API Key

  1. Sign up for a PromptGuard account
  2. Navigate to Settings → API Keys
  3. Create a new API key and copy it
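
Treat the key like any other secret: keep it out of source control and load it from the environment at runtime. A minimal Python sketch, assuming you store the key in a PROMPTGUARD_API_KEY environment variable (the variable name here is just an example, not a platform requirement):

import os

# Read the PromptGuard key from the environment instead of hard-coding it.
# PROMPTGUARD_API_KEY is an example name, not mandated by the platform.
api_key = os.environ.get("PROMPTGUARD_API_KEY")
if not api_key:
    raise RuntimeError("Set PROMPTGUARD_API_KEY before starting the application")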

Step 2: Configure Your Application

Update your LLM client to use the PromptGuard proxy:

Python (OpenAI)

from openai import OpenAI

# Point the client at the PromptGuard proxy and use your PromptGuard API key.
# (The 1.x SDK ignores OPENAI_API_BASE; pass base_url or set OPENAI_BASE_URL instead.)
client = OpenAI(
    api_key="pg_YOUR_API_KEY",
    base_url="https://promptguard.neusta.de/api",
)

# Use the OpenAI API as normal - all requests are now protected
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

Node.js (OpenAI)

import OpenAI from 'openai';

// Point the client at the PromptGuard proxy and use your PromptGuard API key
const client = new OpenAI({
  apiKey: 'pg_YOUR_API_KEY',
  baseURL: 'https://promptguard.neusta.de/api',
});

// Use the OpenAI API as normal - all requests are now protected.
// (Top-level await requires an ES module: "type": "module" in package.json or a .mjs file.)
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Step 3: Monitor Your Security

  1. Open the PromptGuard Dashboard
  2. View real-time threat detection
  3. Configure policies and alerts
  4. Run red team tests

That's it!

Your LLM application is now protected with enterprise-grade security. All requests are scanned for threats before reaching your LLM provider.
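
If PromptGuard rejects a request, your client receives an API error instead of a model response. Exactly how a block is reported depends on your policy configuration, so treat the following Python sketch as an example under assumptions: it catches the OpenAI SDK's HTTP error class and assumes a blocked request surfaces as a non-2xx status from the proxy.

import openai
from openai import OpenAI

client = OpenAI(
    api_key="pg_YOUR_API_KEY",
    base_url="https://promptguard.neusta.de/api",
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except openai.APIStatusError as exc:
    # Assumption: a blocked request comes back as a non-2xx HTTP status.
    # Inspect the status and fall back however your application prefers.
    print(f"Request not completed (HTTP {exc.status_code}): {exc}")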

Next Steps