VibeHunt
Back to browse
PromptBrake

PromptBrake

Security test LLM-powered API endpoints for prompt injection, jailbreaks, data leaks, tool abuse, and unsafe behavior. Get evidence-backed findings in minutes.

Visit

The tool runs automated security scans against LLM‑powered API endpoints, checking for prompt‑injection attempts, jailbreak style overrides, system‑prompt leaks, unsafe tool calls, data exposure, and output bypasses. It executes a suite of 13 predefined tests using more than 60 real‑world attack prompts, and returns PASS, WARN, or FAIL results together with the exact request and response evidence that triggered each finding. Scans complete in three to eight minutes and can be repeated after changes to prompts, models, or retrieval mechanisms.

It is aimed at development teams that ship AI features and need a quick, repeatable pre‑release gate for the endpoints they actually deploy. The service connects directly to OpenAI, Claude, Gemini, or any custom LLM API without requiring SDK changes or a new evaluation harness. All API keys are never stored, and the scans can be run on dev or staging keys.

What distinguishes the platform is its deterministic rule‑based scoring, which avoids relying on a secondary LLM for judgment, and its focus on providing concrete, actionable evidence that teams can use to remediate issues before release. The offering is experimental and can be tried for free without a credit‑card.

Reviews

Sign in to leave a review.

Loading reviews…

Similar apps