Everything you need to know about Velum, the Product Healing Agent — from setup to production.
Yes. Velum complements tools like Amplitude, Mixpanel, and PostHog. It sits on top of your existing stack, uses the same events you already track, and adds a healing layer: detecting hidden friction patterns and telling you what to fix first.
Any OpenAI-compatible API. Groq is the default (free tier available), but you can use OpenAI, Together, Mistral, Fireworks, or self-hosted models like Ollama and vLLM. Just set the provider and api_key in config. AI is optional — the core pattern detection is fully deterministic.
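As a rough illustration of the provider switch, a config might look like the sketch below. This is a hypothetical fragment: the file format and field names (`provider`, `api_key`, `base_url`, `model`) are assumptions for illustration, not Velum's documented schema — check the actual config reference for the real keys.

```yaml
# Hypothetical LLM config sketch — field names are illustrative.
llm:
  provider: groq            # or openai, together, mistral, fireworks
  api_key: ${GROQ_API_KEY}  # read from the environment, not committed
  # For self-hosted OpenAI-compatible servers (Ollama, vLLM),
  # point at your own endpoint instead:
  # base_url: http://localhost:11434/v1
  # model: llama3
```

Since the core pattern detection is deterministic, omitting this section entirely should still leave detection working — only the AI layers need a provider.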
Velum processes event batches per API call — send hundreds to thousands of events per request. The pipeline is stateless and fast: pattern detection is deterministic (no LLM calls), and AI layers only fire for unknown vocabulary and final summaries. PostgreSQL handles baseline storage. It's built for production traffic.
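A minimal client-side sketch of the batching pattern, using only the standard library. The endpoint URL, payload shape (`{"events": [...]}`), and batch size here are assumptions for illustration, not Velum's documented API:

```python
import json
from urllib import request

# Hypothetical endpoint — substitute your own deployment's URL.
VELUM_URL = "http://localhost:8000/analyze"

def batches(events, size=500):
    """Split an event stream into chunks, one POST request each."""
    for i in range(0, len(events), size):
        yield events[i:i + size]

def send_batch(batch):
    """POST one batch of raw JSON events and return the parsed response."""
    body = json.dumps({"events": batch}).encode("utf-8")
    req = request.Request(
        VELUM_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Because the pipeline is stateless, batches can be sent concurrently from multiple workers without coordination.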
When self-hosted, nothing leaves your network. Your events stay in your PostgreSQL database. The only external calls are to your configured LLM provider (Groq, OpenAI, etc.) for AI features — and those are optional. Velum has no telemetry, no analytics, no phone-home.
No. Velum is truly zero-config. Send any JSON events and the Context Enricher (Layer 0) auto-detects which field is the event name, user ID, timestamp, etc. Event properties are automatically classified as dimensions, targets, conditions, or measures.
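To give a feel for what Layer 0 does, here is a toy name-based heuristic for guessing field roles from a raw event. This is an illustrative sketch only — Velum's actual Context Enricher presumably also inspects values and types, and none of these heuristics are taken from its source:

```python
def detect_roles(event: dict) -> dict:
    """Toy heuristic: map roles (event name, user ID, timestamp)
    to the first field whose key name suggests that role."""
    roles = {}
    for key in event:
        k = key.lower()
        if "event" not in roles and any(t in k for t in ("event", "action")):
            roles["event"] = key
        elif "user" not in roles and any(t in k for t in ("user", "uid", "actor")):
            roles["user"] = key
        elif "timestamp" not in roles and any(t in k for t in ("time", "date")):
            roles["timestamp"] = key
    return roles
```

The point of zero-config is that this inference happens server-side on whatever JSON you send, so your client never declares a schema.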
Absolutely. Velum is a single POST endpoint that returns structured JSON — patterns, severity scores, confidence levels, and AI hypotheses. Your agent can call it, parse the response, and act on it: trigger alerts, file tickets, adjust feature flags, or feed results into other tools. No SDK, no UI interaction, no query language — just one API call and a machine-readable response.
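A sketch of the agent side: call the endpoint, then triage the structured response into actions. The response field names (`patterns`, `severity`, `name`) and the 0.7 threshold are assumptions chosen for illustration, not Velum's documented response schema:

```python
def triage(result: dict, threshold: float = 0.7) -> list:
    """Pick high-severity patterns from a Velum-style response,
    most severe first, so an agent can alert or file tickets on them."""
    hot = [
        p for p in result.get("patterns", [])
        if p.get("severity", 0.0) >= threshold
    ]
    hot.sort(key=lambda p: p["severity"], reverse=True)
    return [p["name"] for p in hot]
```

An agent loop is then just: POST events, `triage()` the JSON, and route each returned pattern to the right tool (alerting, ticketing, feature flags).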
MIT. Fully open source, free to use, modify, and deploy commercially. No usage limits, no feature gates.