A single-file, embeddable process modeller that collapses three traditions (BPMN modelling rigour, case-handling logic, and RPA-style system orchestration) into one mental model the operator actually thinks in: a process is a chain of artifacts. Each node has a type. You talk, the flow builds, and the AI argues for where each piece belongs.
A conversational process modeller delivered as a single embeddable HTML file. The operator describes the process in plain English — "a WhatsApp order comes in, we interpret it, log it, then a human reviews it and routes it" — and the studio builds the flow live, one artifact at a time, while the transcript argues for every choice it's making.
Every node is one of seven artifact types: Trigger, Task, Decision Gate, AI Agent, Human-in-the-Loop, System / Integration, and Output. That's deliberately fewer primitives than BPMN offers and more expressive than an RPA step list: enough to describe any real operational flow without needing a notation PhD to read it.
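To show how small that vocabulary is, here is one way the seven types could be modelled in TypeScript. This is a sketch, not the studio's actual schema; every name below is illustrative.

```ts
// Hypothetical data model for the seven artifact types.
// All names here are illustrative, not the studio's real schema.
type ArtifactType =
  | "trigger"            // what starts the flow, e.g. an inbound WhatsApp message
  | "task"               // a discrete unit of work
  | "decision_gate"      // a branch point with explicit conditions
  | "ai_agent"           // a step an AI model is responsible for
  | "human_in_the_loop"  // a step that waits on a person
  | "system"             // an external system or integration call
  | "output";            // what the flow ultimately produces

interface FlowNode {
  id: string;
  type: ArtifactType;
  label: string;      // operator-facing name, e.g. "Interpret order"
  rationale?: string; // why this node sits here, surfaced in the transcript
  next?: string[];    // downstream node ids; more than one only after a decision gate
}
```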
It runs fully client-side: one file you can pull up in a sales call, drop into a customer workshop, or embed in a discovery deck. No server, no install, no account.
Most "AI automation" conversations today happen at the tooling layer: point an agent at a CRM field, a ticket queue, an inbox. It automates one step. The process around it — who got the message, what triggered the next action, where the human still has to intervene — stays implicit.
This studio argues the other way. The unit of automation is the process, not the tool. The AI Agent is one node type out of seven, sitting in a flow next to Triggers, Systems, and Humans-in-the-Loop. That reframes the design question from "what can the agent do?" to "what should the agent be responsible for, given everything else the process is already doing?"
In practice: the operator sees exactly which steps an agent owns, which steps a system owns, and which steps stay with a person. Responsibility is a shape on the canvas, not an assumption in a vendor pitch.
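That responsibility view falls straight out of the node types. A minimal sketch, building on the hypothetical FlowNode shape above: fold the flow into buckets by who owns each step.

```ts
// Hypothetical helper, reusing the FlowNode sketch above.
// Ownership falls directly out of the node type; triggers, tasks,
// gates, and outputs are process structure rather than an owner's work.
type Owner = "agent" | "system" | "human" | "process";

function ownerOf(node: FlowNode): Owner {
  switch (node.type) {
    case "ai_agent":          return "agent";
    case "system":            return "system";
    case "human_in_the_loop": return "human";
    default:                  return "process";
  }
}

function responsibilityMap(flow: FlowNode[]): Record<Owner, string[]> {
  const map: Record<Owner, string[]> = { agent: [], system: [], human: [], process: [] };
  for (const node of flow) {
    map[ownerOf(node)].push(node.label);
  }
  return map;
}
```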
Enterprise AI buyers almost never struggle with the question "could AI do something here?" — the answer is always yes. They struggle with "where, specifically, and with what trade-off?"
A live process modeller answers that question while the customer is in the room. You describe the current-state flow; the studio draws it. You mark where the pain is; it proposes an agent node and tells you what the agent is taking on — and, just as importantly, what stays human. The transcript keeps the reasoning visible so nothing looks like magic.
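One way to picture that proposal step (again a sketch under assumed names, not the studio's actual format): when the operator marks a painful step, the AI could emit a structured proposal alongside the transcript entry, with the take-on and stays-human lists spelled out.

```ts
// Hypothetical proposal shape; every field name here is an assumption.
interface AgentProposal {
  replacesNodeId: string; // the current-state step the agent would take over
  agentTakesOn: string[]; // work that moves to the agent
  staysHuman: string[];   // work that explicitly does not
  rationale: string;      // shown verbatim in the transcript
}

// Illustrative proposal for the WhatsApp-order flow described earlier.
const proposal: AgentProposal = {
  replacesNodeId: "interpret-order",
  agentTakesOn: ["extract items and quantities", "log the order"],
  staysHuman: ["review ambiguous orders", "make the final routing call"],
  rationale: "Interpretation is high-volume and pattern-like; routing carries judgment.",
};
```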
The output isn't a generated diagram — it's a shared artifact the room just built together. That artifact is the most valuable thing a discovery call can produce: a concrete, node-by-node picture of where automation pays off, where it doesn't, and why.