Runs
The Runs view shows all model comparison runs — batch jobs that re-run one or more AI models against a set of images. Use this view to track progress, inspect results, and clean up old runs.
Runs table
The main table lists comparison runs sorted by creation date, newest first. Up to 100 runs are shown.
| Column | Description |
|---|---|
| Checkbox | Select runs for bulk deletion. The header checkbox selects all. |
| Status | The current state of the run: pending, running, completed, or failed. |
| Created | When the run was created. |
| Models | The AI models included in this run, shown as badges. |
| Priority | The run’s priority level. |
| Progress | A progress bar showing how many images have been processed out of the total. |
| Actions | A View button to open the run detail page. |
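For reference, a run record backing this table might look roughly like the sketch below. The type and field names are illustrative assumptions, not the application's actual schema.

```ts
// Hypothetical shape of a comparison run as rendered in the Runs table.
// Field names are illustrative; the real data model may differ.
interface ComparisonRun {
  id: string;                            // Run ID shown on the detail page
  status: "pending" | "running" | "completed" | "failed";
  createdAt: string;                     // ISO timestamp for the Created column
  models: string[];                      // Model names rendered as badges
  priority: string;                      // Priority level
  processedCount: number;                // Images processed so far
  totalCount: number;                    // Total images in the run
}

// The Progress column is simply processed / total.
function progressPercent(run: ComparisonRun): number {
  return run.totalCount === 0 ? 0 : Math.round((run.processedCount / run.totalCount) * 100);
}
```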
Creating a new run
Comparison runs are created from the Home view:
- Filter and select images on the Home page.
- Click “Compare selected” in the filter pane to open the comparison page.
- Choose one or more models to run against the selected images.
- Set parameters — sleep between entries (the delay between processing one image and the next) and batch size.
- Click “Run Comparison” to start the run.
After creation, the run appears in the Runs table. You can navigate back here at any time to check its progress.
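The parameters collected in these steps map naturally onto a request payload. The sketch below shows what that could look like; the endpoint and field names are assumptions for illustration, not the documented API.

```ts
// Hypothetical payload for starting a comparison run; the endpoint and
// field names are assumptions, not the application's actual API.
interface CreateRunRequest {
  imageIds: string[];          // Images selected on the Home page
  models: string[];            // One or more models to run
  sleepBetweenEntries: number; // Delay (seconds) between processing images
  batchSize: number;           // Batch size parameter from the comparison page
}

async function createRun(payload: CreateRunRequest): Promise<{ runId: string }> {
  const res = await fetch("/api/runs", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Run creation failed: ${res.status}`);
  return res.json();
}
```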
Viewing run details
Click View on any run to open its detail page, which has two sections:
Run summary
| Field | Description |
|---|---|
| Run ID | A unique identifier for this run. |
| Models | The models used, shown as badges. |
| Inference time | Average and median inference time per model, with the number of samples measured. |
| Priority | The priority level of the run. |
| Status | Current state of the run. |
| Created | When the run was created. |
| Start time | When processing began, or “Immediate” if it started right away. |
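The inference-time field summarizes per-entry timings as an average and a median over the measured samples. A minimal sketch of that computation, with illustrative names, is shown below.

```ts
// Sketch of how the inference-time summary could be derived from
// per-entry timings in milliseconds (function name is illustrative).
function summarizeInferenceTimes(samplesMs: number[]) {
  const n = samplesMs.length;
  if (n === 0) return { average: 0, median: 0, samples: 0 };
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const average = samplesMs.reduce((sum, t) => sum + t, 0) / n;
  const median = n % 2 === 1
    ? sorted[(n - 1) / 2]
    : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  return { average, median, samples: n };
}
```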
Results table
The entries table shows each image processed in the run. Entries are sorted by how much the models disagree, so the cases with the largest disagreement appear first.
| Column | Description |
|---|---|
| Status | A colored dot: gray (pending), blue and pulsing (processing), green (completed), red (failed). |
| Difference | Whether the models agree. For classification: “All Match” (green) or “All Different” / “X/Y Agree” (red). For object detection: based on which classes were detected. For coordinate models: pixel distance between predicted locations. |
| Conf. diff | The confidence difference between models: low (green, under 10%), medium (yellow, 10–30%), or high (red, over 30%). |
| Model columns | One column per model showing its prediction for that image. |
| Error | Error details if the entry failed. |
| Actions | A View button to open the image in annotation view with the comparison results overlaid. |
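The Difference and Conf. diff columns follow simple rules: classification results are compared by label agreement, coordinate models by pixel distance, and confidence differences are bucketed at the 10% and 30% thresholds described above. The sketch below illustrates those rules; the function names are not the app's internal API.

```ts
// Classification: count how many predictions share the majority label.
function classificationAgreement(labels: string[]): string {
  const counts = new Map<string, number>();
  for (const label of labels) counts.set(label, (counts.get(label) ?? 0) + 1);
  const best = Math.max(...Array.from(counts.values()));
  if (best === labels.length) return "All Match";
  if (best === 1) return "All Different";
  return `${best}/${labels.length} Agree`;
}

// Coordinate models: pixel distance between two predicted locations.
function pixelDistance(a: { x: number; y: number }, b: { x: number; y: number }): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Conf. diff buckets, assuming confidences expressed on a 0–1 scale.
function confidenceDiffLevel(confidences: number[]): "low" | "medium" | "high" {
  const diff = Math.max(...confidences) - Math.min(...confidences);
  if (diff < 0.10) return "low";     // green, under 10%
  if (diff <= 0.30) return "medium"; // yellow, 10–30%
  return "high";                     // red, over 30%
}
```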
Auto-refresh
The detail page polls for updates every 5 seconds while the run is in progress. Auto-refresh stops automatically when the run completes or fails. You can toggle it off manually.
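A minimal sketch of this polling behaviour is shown below, assuming a hypothetical run endpoint; it stops on the terminal states and returns a function for the manual toggle.

```ts
// Poll the run every 5 seconds and stop once it completes or fails.
const POLL_INTERVAL_MS = 5_000;

function pollRun(runId: string, onUpdate: (run: { status: string }) => void) {
  const timer = setInterval(async () => {
    const run = await fetch(`/api/runs/${runId}`).then((r) => r.json()); // hypothetical endpoint
    onUpdate(run);
    if (run.status === "completed" || run.status === "failed") {
      clearInterval(timer); // auto-refresh stops on terminal states
    }
  }, POLL_INTERVAL_MS);
  return () => clearInterval(timer); // manual toggle-off
}
```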
Deleting runs
- Select runs using the checkboxes in the table.
- Click “Delete Selected” — the count of selected runs is shown on the button.
- Confirm the deletion. The button changes to “Confirm Delete”. Click it again to proceed, or click “Cancel” to back out.
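The confirm step is a simple two-click flow, sketched below with illustrative names; the `deleteRuns` call stands in for whatever the app actually does when the deletion proceeds.

```ts
// Two-step "Delete Selected" flow: first click arms the button,
// second click deletes, Cancel backs out.
let confirming = false;

async function onDeleteClick(selectedRunIds: string[], deleteRuns: (ids: string[]) => Promise<void>) {
  if (!confirming) {
    confirming = true;              // button now reads "Confirm Delete"
    return;
  }
  await deleteRuns(selectedRunIds); // second click performs the deletion
  confirming = false;
}

function onCancelClick() {
  confirming = false;               // back out without deleting
}
```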