# Statistics
The Statistics view gives you a visual overview of how predictions and annotations are distributed across your data over time. It helps you understand patterns, spot shifts, and monitor the overall state of your inspection data.
## UI walkthrough

The page is divided into two areas:
| Area | Description |
|---|---|
| Filter pane (left) | The same filter controls as the Home view. Use them to narrow the time range, rigs, cameras, or other criteria for the charts. |
| Charts (right) | A set of charts displaying different aspects of your data. |
## Available charts

Depending on your site configuration, the following charts may be available:
| Chart | What it shows |
|---|---|
| Classification labels over time | How classification prediction labels are distributed across time buckets. Select a model from the dropdown to focus on a specific model’s predictions. |
| Object detection labels over time | How detected object types are distributed over time. Select a model to focus on its detections. |
| Number of images | Total images, annotated images, and triggered images over time. |
| Image annotation distribution | A stacked bar chart showing how annotations are distributed across labels over time. |
## Chart controls

Each chart may include controls to adjust its display:
| Control | What it does |
|---|---|
| Y max / Y min | Set upper or lower bounds for the Y axis. |
| Chart type | Toggle between different chart types (line, bar). |
| Percentage view | Show values as percentages instead of absolute counts. |
| Accumulated view | Show cumulative totals over time instead of per-period counts. |
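The percentage and accumulated views are simple transforms of the per-period counts. The sketch below illustrates the idea; the label names and numbers are made up, and this is not the application's actual implementation:

```python
# Illustrative sketch of the "Percentage view" and "Accumulated view"
# transforms. The labels and counts below are hypothetical.
from itertools import accumulate

# Per-period counts for two labels across four time buckets.
counts = {"ok": [40, 30, 20, 10], "defect": [10, 10, 20, 40]}

def percentage_view(counts):
    """Convert per-period counts to each label's share of that period, in %."""
    totals = [sum(vals) for vals in zip(*counts.values())]
    return {
        label: [100 * v / t if t else 0 for v, t in zip(series, totals)]
        for label, series in counts.items()
    }

def accumulated_view(counts):
    """Convert per-period counts to running (cumulative) totals."""
    return {label: list(accumulate(series)) for label, series in counts.items()}

print(percentage_view(counts)["ok"])       # share of "ok" per bucket
print(accumulated_view(counts)["defect"])  # cumulative "defect" count
```

The percentage view is useful when total image volume varies between periods, since it keeps the label proportions comparable across buckets.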
## Exporting data

The Export Prediction Accuracy (CSV) button at the bottom of the page exports prediction accuracy data as a CSV file. This allows you to analyse the data in external tools such as spreadsheets.
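Beyond spreadsheets, the export can be processed with a few lines of script. Here is a minimal sketch of loading such a CSV; the column names (`timestamp`, `correct`, `total`) are assumptions for illustration, so check the header row of your actual export:

```python
# Hypothetical sketch: compute daily prediction accuracy from an exported CSV.
# Column names are assumed; inspect the real export's header before adapting.
import csv
import io

# Stand-in for open("prediction_accuracy.csv") with made-up sample rows.
sample = io.StringIO(
    "timestamp,correct,total\n"
    "2024-01-01,45,50\n"
    "2024-01-02,38,40\n"
)

accuracy_by_day = {}
for row in csv.DictReader(sample):
    accuracy_by_day[row["timestamp"]] = int(row["correct"]) / int(row["total"])

print(accuracy_by_day)
```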
## How to use statistics effectively

Statistics are most useful when you look at them with a specific question in mind:
- Is the model improving over time? Check whether the proportion of correct predictions increases across recent time periods.
- Are certain labels consistently wrong? Look for labels that appear frequently in predictions but rarely in annotations — or vice versa.
- Has something changed in production? A sudden shift in label distribution may indicate a change in the product or the inspection setup, not a model problem.
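A quick way to make the "has something changed?" question concrete is to compare each label's share of predictions between a baseline period and a recent one. This sketch uses hypothetical labels, counts, and a 10-percentage-point threshold chosen for illustration:

```python
# Hypothetical sketch: flag labels whose share of predictions shifted
# noticeably between a baseline period and the most recent one.
baseline = {"ok": 900, "scratch": 80, "dent": 20}  # made-up counts
recent = {"ok": 700, "scratch": 250, "dent": 50}

def shares(counts):
    """Each label's fraction of the period's total predictions."""
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

THRESHOLD = 0.10  # flag shifts larger than 10 percentage points

shifted = [
    label
    for label in baseline
    if abs(shares(recent)[label] - shares(baseline)[label]) > THRESHOLD
]
print(shifted)
```

A flagged label is a starting point for investigation, not a verdict: the shift may reflect a real change in the product line rather than a model regression.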
## What's next?

- Runs: Compare model versions with structured comparison runs.
- Thresholds: Adjust the confidence thresholds that affect how predictions are evaluated.
- How-To: Evaluate a model: A step-by-step guide to investigating model behaviour.