Most BPM dashboards show you counts: how many instances are running, how many completed today. The Process Butterfly shows you something deeper - the flow dynamics of your process: how tasks move between states, where they pool, where they leak, and whether the system is converging toward completion or spiralling toward abandonment. It is a Markov chain drawn as an SVG, embedded directly in the profile of every business application you deploy on Priostack.
Enterprise architects invest months drawing BPMN diagrams that describe how work should flow. Operations teams track service-level metrics that describe what the system is doing right now. The two views rarely meet. A BPMN diagram lives in Visio or Camunda Modeler; a dashboard lives in Grafana or DataDog. The architect cannot see the model running. The operator cannot understand the metrics without the model. The result: when something goes wrong - a spike in blocked tasks, a rising abandonment rate - the people who designed the process are not looking at the data, and the people looking at the data cannot diagnose the cause.
Priostack was built to close that gap. When you deploy a BPMN or CMMN process on Priostack and link it to a business application, the application's profile page immediately shows you a live visualization of how tasks are actually flowing through that process. We call that visualization the Process Butterfly.
The Process Butterfly is an SVG diagram that models task flow as a discrete-time Markov chain. A Markov chain is a mathematical model of a system that moves between states according to fixed transition probabilities. In our case, the states are the lifecycle phases a task can occupy - Waiting, Assigned, Blocked, Abandoned - and the transition rates are percentages derived from the actual instance history of your linked process.
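To make the mechanics concrete, here is a minimal sketch of such a chain in Python. The state names mirror the butterfly's, but the rates are invented for illustration - they are not Priostack's defaults.

```python
import random

# A minimal discrete-time Markov chain: states plus fixed transition
# probabilities. State names mirror the butterfly; the rates here are
# invented purely for illustration.
transitions = {
    "Dispatcher": {"Waiting": 0.5, "Assigned": 0.5},
    "Waiting":    {"Dispatcher": 1.0},
    "Assigned":   {"Completed": 0.7, "Dispatcher": 0.3},
    "Completed":  {"Completed": 1.0},  # absorbing terminal state
}

def step(state):
    """Make one transition: pick the next state with probability equal to its rate."""
    targets = list(transitions[state])
    weights = [transitions[state][t] for t in targets]
    return random.choices(targets, weights=weights)[0]

random.seed(1)
state = "Dispatcher"
for _ in range(20):
    state = step(state)
print(state)  # after enough steps the chain is almost surely absorbed in "Completed"
```

The defining property is memorylessness: the next state depends only on the current state, which is exactly what the percentage labels on the butterfly's arcs encode.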
The name comes from the visual shape. Tasks enter from the left (New Tasks), flow through a central dispatcher, and are routed outward to four corner states: Waiting (upper-left), Assigned (upper-right), Blocked (lower-left), Abandoned (lower-right). The arcs between the center and the corners cross each other, forming an X - the wing pattern of a butterfly. To the right sits the Completion donut, the terminal outcome gauge.
Every task that enters a process linked to a Priostack application passes through some subset of the following six states. The butterfly renders all six simultaneously so you can see the current distribution of work across the entire lifecycle.
| State | Position in diagram | What it means |
|---|---|---|
| New Tasks | Left input | Tasks that have just arrived and not yet been accepted into the dispatcher queue. This is your inbound load. |
| Task Dispatcher | Center | The orchestration hub. Every task passes through here. Its count is the heartbeat of the process - it should be non-zero while work is flowing. |
| Waiting | Upper left | Tasks dispatched but not yet picked up by a worker or resource. A large, static Waiting pool signals an assignment bottleneck. |
| Assigned | Upper right | Tasks actively being worked on. This is your healthy active-work pool. Most tasks should leave this state via Completion, not via a return loop. |
| Blocked | Lower left | Tasks that have hit a dependency they cannot resolve - a missing data input, a required upstream task, an external system timeout. Blocked tasks cannot proceed without intervention. |
| Abandoned | Lower right | Tasks that were dropped, timed out, or explicitly cancelled. A non-zero Abandoned rate is expected in any real process. A rising Abandoned rate is a warning signal. |
The Completion donut to the right is not a state - it is the terminal outcome gauge. It shows what fraction of tasks that have left the Assigned state exited via completion vs. via abandonment. A healthy process should show 80%+ green in the donut.
Every arc in the butterfly carries a percentage label. These labels are Markov transition rates: out of all tasks currently in the source state, what fraction will move to the target state on the next transition. The rates leaving a given node account for 100% of its tasks - all tasks must go somewhere - though some labels are conditional rates (Assigned → Dispatcher, for example, is the fraction of non-completing tasks that return, not a fraction of all assigned tasks).
There are two families of arcs, visually distinguished by color: outbound arcs, which carry tasks from the dispatcher to the corner states, and return arcs, which bring tasks from the corner states back to the dispatcher or onward to Completion.
The default transition rates in Priostack represent a baseline process with moderate efficiency:
| Arc | Default rate | Interpretation |
|---|---|---|
| Dispatcher → Waiting | 40% | 40% of tasks go to a waiting queue before assignment. Lower is better - means more work gets assigned immediately. |
| Dispatcher → Assigned | 20% | Only 20% go directly into active work. A process optimised for immediate assignment should push this above 50%. |
| Dispatcher → Blocked | 20% | 20% hit a dependency immediately. This is a process design problem - blocked tasks at dispatch time indicate missing pre-conditions. |
| Dispatcher → Abandoned | 20% | 20% abandoned at dispatch. Typically caused by expired SLAs, missing input data, or routing failures. |
| Waiting → Dispatcher | 60% | 60% of waiting tasks return to dispatcher (re-assigned). 40% implicitly become blocked or abandoned - check your assignment logic. |
| Assigned → Completion | 65% | 65% of assigned tasks complete successfully. This is the single most important number in the diagram. |
| Assigned → Dispatcher | 80% | When assigned tasks do not complete, 80% are returned to the dispatcher rather than dropped. Good resilience. |
| Blocked → Dispatcher | 80% | 80% of blocked tasks eventually unblock and re-enter the dispatcher. Means your unblocking mechanisms (escalation, timeout, human intervention) are working. |
| Abandoned → Dispatcher | 15% | Only 15% of abandoned tasks are retried. This is expected - most abandoned tasks are genuinely terminal. |
| Abandoned → Completion | 5% | 5% of abandoned tasks reach partial completion. Rare but meaningful for compliance audit trails. |
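The table almost specifies a full Markov chain; a few remainders are left implicit. The sketch below fills those gaps with loudly marked assumptions purely to make the chain well-defined, then estimates the terminal completion share by simulation.

```python
import random

# Default rates from the table above. Every value marked ASSUMED is a
# gap-filler invented to make the chain well-defined, not a documented
# Priostack default.
transitions = {
    "Dispatcher": {"Waiting": 0.40, "Assigned": 0.20,
                   "Blocked": 0.20, "Abandoned": 0.20},
    # 60% re-enter the dispatcher; the remaining 40% split evenly
    # between Blocked and Abandoned (ASSUMED split).
    "Waiting":   {"Dispatcher": 0.60, "Blocked": 0.20, "Abandoned": 0.20},
    # 65% complete; of the 35% that do not, 80% return to the dispatcher
    # (0.35 * 0.80 = 0.28) and the rest are abandoned (ASSUMED).
    "Assigned":  {"Completed": 0.65, "Dispatcher": 0.28, "Abandoned": 0.07},
    # 80% unblock per tick; the rest retry on the next tick (ASSUMED self-loop).
    "Blocked":   {"Dispatcher": 0.80, "Blocked": 0.20},
    # 15% retried, 5% partially completed, 80% terminally dropped.
    "Abandoned": {"Dispatcher": 0.15, "Completed": 0.05, "Dropped": 0.80},
}

def outcome(rng: random.Random, start: str = "Dispatcher") -> str:
    """Follow one task until it reaches a terminal outcome."""
    state = start
    while state not in ("Completed", "Dropped"):
        targets = list(transitions[state])
        weights = [transitions[state][t] for t in targets]
        state = rng.choices(targets, weights=weights)[0]
    return state

rng = random.Random(42)
n = 100_000
completed = sum(outcome(rng) == "Completed" for _ in range(n))
print(f"estimated completion share: {completed / n:.1%}")
```

Under these particular gap-fillers the simulated completion share comes out well under the 80%+ a healthy donut shows - consistent with the table describing only a baseline of moderate efficiency. Change any assumed split and the number moves.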
The donut chart on the right side of the butterfly shows a single ratio: tasks that completed successfully (green) vs. tasks that ended in abandonment (red). It is derived from the terminal state distribution of the Markov chain - the stationary distribution across completion and abandonment outcomes.
A donut that is 85% green means that for every 100 tasks that enter your process, approximately 85 complete. The 15 that do not complete generate the red segment. For a support ticketing process this might be acceptable. For a payment processing workflow it is a critical failure rate requiring immediate investigation.
The donut updates in real time as Priostack's engine records instance completions. When you open an app profile after a deployment, the donut reflects the lifetime aggregate of all instances that have reached a terminal state.
The butterfly is designed to be read at a glance. You do not need to analyse every arc - the overall visual weight of the diagram tells the story.
If the Dispatcher node shows a high count relative to the corner nodes, the process is accumulating work in the routing layer rather than distributing it. This typically indicates resource starvation: not enough workers to pick up tasks, or assignment logic that is recirculating tasks rather than assigning them.
A healthy butterfly is asymmetric in a specific direction: Assigned (upper-right, green) should have a higher count than Blocked (lower-left, yellow) and much higher than Abandoned (lower-right, red). If the lower corners are heavier than the upper corners, work is failing faster than it is succeeding.
All arcs in the current implementation are drawn at the same stroke weight because the rates update frequently. However, the percentage labels immediately communicate dominance: a Dispatcher → Blocked rate above 30% is a red flag regardless of the overall task count.
Waiting tasks are neither progressing nor failing - they are queued. A large, stable Waiting count means your process is producing more tasks than your resources can consume. This is the operational equivalent of a full inbox: work is piling up. Monitor the Waiting/Assigned ratio - if Waiting consistently exceeds Assigned, you have a capacity problem, not a process design problem.
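These reading heuristics are mechanical enough to express as code. The snippet below is an illustrative sketch - the `counts`/`rates` shapes and every threshold are assumptions, not a Priostack API.

```python
# Illustrative health checks encoding the reading heuristics above.
# The `counts`/`rates` shapes and every threshold are assumptions for
# this sketch, not a Priostack API.
def butterfly_warnings(counts: dict, rates: dict) -> list:
    corners = ["Waiting", "Assigned", "Blocked", "Abandoned"]
    warnings = []
    # Work pooling in the routing layer suggests resource starvation.
    if counts["Dispatcher"] > max(counts[c] for c in corners):
        warnings.append("work pooling in the dispatcher")
    # Lower corners heavier than upper: failing faster than succeeding.
    if counts["Blocked"] + counts["Abandoned"] > counts["Waiting"] + counts["Assigned"]:
        warnings.append("lower corners heavier than upper")
    # Dispatcher -> Blocked above 30% is a red flag regardless of counts.
    if rates.get(("Dispatcher", "Blocked"), 0.0) > 0.30:
        warnings.append("Dispatcher to Blocked above 30%")
    # Single snapshot only; "consistently exceeds" really needs a trend.
    if counts["Waiting"] > counts["Assigned"]:
        warnings.append("Waiting exceeds Assigned: capacity problem")
    return warnings

print(butterfly_warnings(
    {"Dispatcher": 3, "Waiting": 40, "Assigned": 12, "Blocked": 5, "Abandoned": 2},
    {("Dispatcher", "Blocked"): 0.18},
))  # prints ['Waiting exceeds Assigned: capacity problem']
```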
Over time, butterfly patterns converge into recognisable shapes. Here are the most common:
Most flow goes: New Tasks → Dispatcher → Assigned → Completion. Waiting and Blocked are small. Abandoned is near-zero. The donut is 90%+ green. This pattern means your process design is correct, resources are sufficient, and input data is clean.
Blocked (lower-left) accumulates tasks faster than the Blocked → Dispatcher return arc can drain them. Often caused by a data dependency that is never satisfied - a required external API that is down, a human approval step where the approver has left the company, a timer event that never fires. Blocked tasks with a zero return rate are effectively dead weight. Priostack's incident system surfaces these automatically.
The Abandoned corner grows faster than Completion. Dispatcher → Abandoned rate is high. This pattern indicates either SLA timeouts are too aggressive, input validation is rejecting valid work, or the process is being fed with tasks it was never designed to handle (wrong channel, wrong data format, wrong tenant). Check the New Tasks source first.
Waiting (upper-left) has 10× the count of Assigned (upper-right). The Dispatcher is routing tasks to Waiting because no worker is available to claim them. The return arc Waiting → Dispatcher is high (tasks keep re-queueing), but the forward arc Dispatcher → Assigned stays low. This is a resource provisioning problem: add workers, increase concurrency limits, or reduce task inflow.
Assigned → Dispatcher return rate is high, but Assigned → Completion is low. Workers are picking up tasks and immediately returning them rather than completing them. This usually means a task is being assigned to the wrong worker type - the assignment logic is matching on the wrong capability or role. Review your task routing rules.
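As a rough illustration, the five shapes can be approximated by a snapshot classifier. The thresholds and the nicknames in the return strings are guesses of ours, not Priostack's detection logic, and real classification would track trends over time rather than a single snapshot.

```python
# A rough snapshot classifier for the recurring shapes above. Thresholds
# and pattern nicknames are illustrative guesses, not Priostack logic.
def classify(counts: dict, rates: dict) -> str:
    # Waiting dwarfs Assigned: no workers available to claim tasks.
    if counts["Waiting"] >= 10 * max(counts["Assigned"], 1):
        return "starvation: add workers or reduce inflow"
    # High return rate, low completion: tasks bounce between states.
    if (rates.get(("Assigned", "Dispatcher"), 0.0) > 0.5
            and rates.get(("Assigned", "Completion"), 0.0) < 0.3):
        return "ping-pong: review task routing rules"
    # Blocked pool grows faster than its return arc drains it.
    if (rates.get(("Blocked", "Dispatcher"), 0.0) < 0.2
            and counts["Blocked"] > counts["Assigned"]):
        return "blocked black hole: check unresolved dependencies"
    # Dispatch-time abandonment dominates.
    if rates.get(("Dispatcher", "Abandoned"), 0.0) > 0.3:
        return "abandonment spiral: check SLAs and input validation"
    return "healthy flow"

print(classify({"Waiting": 120, "Assigned": 10, "Blocked": 4, "Abandoned": 1}, {}))
# prints "starvation: add workers or reduce inflow"
```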
The butterfly is not a static picture. Every node - New Tasks, Dispatcher, Waiting, Assigned, Blocked, Abandoned - is clickable. Clicking a node opens an instance drill-down panel directly below the diagram. The panel lists the actual process instances currently in that state, filtered to the instances belonging to the linked process of the current business application.
For each instance you see its identifying details and its current lifecycle state.
This drill-down path - from aggregate health indicator to individual instance - collapses what would normally be three separate tools (dashboard, process list, instance inspector) into a single interaction. You see "3 tasks blocked," you click Blocked, you see which three instances they are, you click the unresponsive one and send a compensation signal. The entire diagnostic loop happens in the app profile without leaving the business context.
The Process Butterfly does not exist in isolation. It is one panel in the business application profile - a page that consolidates every aspect of a deployed application into a single view.
The profile page is structured as a progression, from the share panel through the butterfly to version history and settings.
The butterfly appears third because it is only meaningful after the application exists and a process is running. Positioning it before version history and settings reflects its importance: the operational health of a running process is more time-sensitive than a list of saved snapshots. But it is positioned after the share panel because sharing the app is the moment of maximum value - the butterfly is for ongoing operations, not for the launch celebration.
The deepest purpose of the Process Butterfly is not monitoring. Monitoring is the mechanism. The deeper purpose is to make the promise of "architecture to execution" legible.
An enterprise architect draws a BPMN model. They describe the task states, the routing logic, the exception paths. They submit that model to Priostack. The engine starts executing instances. And then - in the same platform where the model was submitted - a diagram appears that shows the model moving. Not the intended flow, but the actual flow. Not the design, but the behavior.
The transition rates in the butterfly are derived from actual execution data. If the architect designed a process where Dispatcher → Assigned should be 70%, but the butterfly shows 20%, that is immediate, unambiguous feedback: the model is not executing as designed. The gap between intention and behavior is visible, quantified, and navigable.
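That feedback loop can be pictured as a simple diff between designed and observed rates. The arcs, the numbers, and the ten-point tolerance below are illustrative, not a Priostack feature.

```python
# Sketch of design-vs-behavior feedback: diff the rates an architect
# intended against the rates the butterfly reports. Arcs, numbers, and
# the tolerance are illustrative.
designed = {("Dispatcher", "Assigned"): 0.70, ("Assigned", "Completion"): 0.85}
observed = {("Dispatcher", "Assigned"): 0.20, ("Assigned", "Completion"): 0.65}

TOLERANCE = 0.10  # flag deviations above ten percentage points (assumed)

for arc, want in designed.items():
    got = observed.get(arc, 0.0)
    if abs(got - want) > TOLERANCE:
        src, dst = arc
        print(f"{src} -> {dst}: designed {want:.0%}, observed {got:.0%}")
# prints, among others: "Dispatcher -> Assigned: designed 70%, observed 20%"
```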
This is the difference between a modeling tool and an execution platform. A modeling tool shows you the architecture. An execution platform shows you whether the architecture works. The Process Butterfly is the visual proof that Priostack is the latter.
Process execution is often treated as an IT concern separate from the business architecture that motivated it. The butterfly refuses that separation. It attaches the operational signal directly to the business object - the application - that a non-technical owner can understand and act on. The architect, the consultant, the product owner, and the operator are all looking at the same diagram. That shared view is not a feature. It is the product.