Grounded generation
The AI reads your OpenAPI spec, follows your project conventions, and outputs real TypeScript you can run and verify immediately — not a generic guess you have to rewrite.
Generated code uses Glubean workflow primitives — named steps, typed assertions, shared config — so the output looks like code your team wrote.
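A minimal sketch of what "named steps" and "typed assertions" can look like in practice. The `ctx.step` helper and the tiny runtime below are hypothetical stand-ins, inferred from the example on this page — they illustrate the pattern, not the real SDK surface.

```typescript
// Hypothetical shape of a workflow context with named steps and assertions.
type Ctx = {
  step: <T>(name: string, fn: () => Promise<T> | T) => Promise<T>;
  expect: (actual: number) => { toBeGreaterThan: (n: number) => void };
};

// A toy runtime so the sketch runs stand-alone (not the real SDK).
const ctx: Ctx = {
  async step(name, fn) {
    console.log(`step: ${name}`);
    return await fn();
  },
  expect: (actual) => ({
    toBeGreaterThan(n) {
      if (!(actual > n)) throw new Error(`expected ${actual} > ${n}`);
    },
  }),
};

async function checkoutFlow() {
  // Each step is named, so a trace can report exactly where a run failed.
  const cart = await ctx.step("add items", () => ({ items: 2, total: 40 }));
  const discounted = await ctx.step("apply promo", () => cart.total * 0.9);
  ctx.expect(discounted).toBeGreaterThan(0); // typed assertion on the step result
}

checkoutFlow();
```

The value of the pattern is that failures point at a named step rather than a line in a free-form script.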
Your spec is annotated with response shapes, error codes, and parameter constraints. Dual-spec generation strips internal noise so the AI reads only the endpoints that matter.
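The "dual-spec" idea can be sketched as a filter over an OpenAPI-style paths object. The `x-internal` extension flag used here is an assumption for illustration; real specs may mark internal routes differently.

```typescript
// Minimal OpenAPI-like shapes for the sketch.
type Operation = { summary?: string; "x-internal"?: boolean };
type Spec = { paths: Record<string, Record<string, Operation>> };

// Produce the public view of a spec: drop operations flagged as internal,
// and drop routes that have no public operations left.
function publicView(spec: Spec): Spec {
  const paths: Spec["paths"] = {};
  for (const [route, ops] of Object.entries(spec.paths)) {
    const visible = Object.fromEntries(
      Object.entries(ops).filter(([, op]) => !op["x-internal"]),
    );
    if (Object.keys(visible).length > 0) paths[route] = visible;
  }
  return { paths };
}

const spec: Spec = {
  paths: {
    "/checkout": { post: { summary: "Start checkout" } },
    "/internal/flags": { get: { summary: "Feature flags", "x-internal": true } },
  },
};

console.log(Object.keys(publicView(spec).paths)); // [ '/checkout' ]
```

The AI only ever sees the filtered view, so internal routes never leak into generated tests.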
A skill file teaches the AI your exact SDK API, auth patterns, assertion style, and conventions. Without it, output is roughly correct. With it, tests follow your team's patterns from the start.
MCP tools let the AI run the test, read structured failures, fix the code, and rerun — all in one chat turn. The AI validates its own output instead of hoping it works.
End to end
Describe what you need, Glubean pulls context from your project, generates real code, and you validate the result with traces instead of hope.
Prompt and context
“Verify checkout with promo code, tax calculation, and webhook delivery.”
Context fed into generation
Generated output
import { test } from "@glubean/sdk";

export const listUserRepos = test(
  { id: "github-list-repos", tags: ["explore"] },
  async (ctx) => {
    const username = ctx.vars.require("GITHUB_USERNAME");
    const token = ctx.vars.get("GITHUB_TOKEN");

    const headers: Record<string, string> = {
      Accept: "application/vnd.github+json",
    };
    if (token) {
      headers["Authorization"] = `Bearer ${token}`;
    }

    const data = await ctx.http
      .get(`https://api.github.com/users/${username}/repos`, { headers })
      .json();

    ctx.expect(data.length).toBeGreaterThan(0);
    ctx.log("Repos", data.slice(0, 3));
  },
);

What the AI did
Used your OpenAPI routes
Matched auth, cart, promo, and checkout paths from the project spec.
Generated SDK workflow code
Named steps and typed assertions, not free-form fetch calls.
Followed project conventions
Respected existing config helpers and file placement.
Ready to run immediately
Click play to see traces and verify the draft before committing it.
Prompt
Context goes in
Code
TypeScript comes out
Trace
Evidence closes the loop
Inside the extension
The extension keeps the feedback loop tight: generate a draft, run it, check the trace, fix what needs fixing, all in the same window.
Run one workflow, one file, or the whole workspace directly from the editor gutter.
Step through real code and inspect variables instead of guessing from logs.
Compare two runs with native diff to see exactly what changed in the workflow.
Use built-in validation patterns so generated code has real assertions behind it.
Switch between local, staging, and production contexts from the status bar without rebuilding the test.
Everything stays in TypeScript, so PR review, branching, and refactoring work like normal.
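Context switching can be sketched as variable resolution: the test reads its base URL from vars, and only the selected context changes. The `contexts` map, var names, and URLs below are illustrative assumptions.

```typescript
// Per-environment variable sets; only the selection changes, not the test.
const contexts: Record<string, Record<string, string>> = {
  local: { BASE_URL: "http://localhost:3000" },
  staging: { BASE_URL: "https://staging.example.com" },
  production: { BASE_URL: "https://api.example.com" },
};

// Resolve vars for the chosen context, failing loudly on missing values.
function resolveVars(context: string) {
  const vars = contexts[context] ?? {};
  return {
    require(name: string): string {
      const value = vars[name];
      if (value === undefined) {
        throw new Error(`missing var ${name} in context ${context}`);
      }
      return value;
    },
  };
}

// The same test code runs against any environment.
console.log(resolveVars("local").require("BASE_URL"));   // http://localhost:3000
console.log(resolveVars("staging").require("BASE_URL")); // https://staging.example.com
```

The test itself never hard-codes an environment, which is what makes the status-bar switch safe.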