Understand how an existing model behaves
Once a model has been trained and deployed, its predictions start appearing alongside your images. This guide explains how to use EyVz to understand what the model gets right, what it gets wrong, and where it is uncertain.
What you are looking for
When reviewing a model, you want to answer three questions:
- Where does the model agree with human judgment? These are cases where the prediction matches what an annotator would choose.
- Where does the model disagree? These are errors — the model predicts one thing, but the correct answer is something else.
- Where is the model uncertain? These are predictions with low confidence, where the model is effectively guessing.
Reviewing predictions on the Home page
- Open the Home view and set appropriate time and camera filters.
- Show only predicted images. Set Show / hide predicted to “Predicted only” and click Apply filters.
- Scan the image grid. Each card shows predictions below the image thumbnail as label : value [confidence%]; for object detection models, bounding boxes are drawn directly on the image. (A parsing sketch of this format follows this list.)
- Look at confidence scores. High confidence (for example, 90% or above) means the model is fairly certain. Low confidence (below 70%) means it is not sure. Use the confidence min/max filter to isolate images in different confidence ranges.
- Compare predictions to annotations. For images that have been annotated, both the human label and the model prediction are shown on the card. Disagreements between the two indicate model errors.
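If you export predictions for scripted review, the on-card format described above is straightforward to parse. The following is a minimal sketch assuming the exported strings follow the same label : value [confidence%] pattern; that format is an assumption based on this guide, not a documented EyVz export.

```python
import re

# Hypothetical parser for the card format described in this guide:
#   label : value [confidence%]
# The exact exported string is an assumption, not a documented EyVz format.
PREDICTION_RE = re.compile(
    r"^\s*(?P<label>[^:]+?)\s*:\s*(?P<value>[^\[]+?)\s*\[(?P<conf>\d+(?:\.\d+)?)%\]\s*$"
)

def parse_prediction(text: str) -> tuple[str, str, float]:
    """Split 'Defect : Scratch [92%]' into ('Defect', 'Scratch', 0.92)."""
    match = PREDICTION_RE.match(text)
    if match is None:
        raise ValueError(f"unrecognized prediction format: {text!r}")
    return match["label"], match["value"], float(match["conf"]) / 100.0

print(parse_prediction("Defect : Scratch [92%]"))  # ('Defect', 'Scratch', 0.92)
```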
Using filters to find problems
Low-confidence images
- Set Confidence max to a value like 70.
- Click Apply filters.
- Browse the results. These are images where the model is least sure. Review them carefully — they often reveal what the model struggles with. (A scripted version of this triage is sketched below.)
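The same triage can be done in a script once predictions are exported. The sketch below assumes a hypothetical export shape (dicts with image, label, value, and confidence fields); EyVz's actual export format may differ.

```python
# Minimal triage sketch, assuming predictions were exported as dicts with
# "image", "label", "value", and "confidence" (0-1) fields. This export
# shape is hypothetical; check your actual EyVz export before relying on it.
predictions = [
    {"image": "cam1/0001.jpg", "label": "Defect", "value": "Scratch", "confidence": 0.55},
    {"image": "cam1/0002.jpg", "label": "Defect", "value": "None", "confidence": 0.97},
    {"image": "cam2/0003.jpg", "label": "Defect", "value": "Dent", "confidence": 0.42},
]

CONFIDENCE_MAX = 0.70  # mirrors setting "Confidence max" to 70 in the UI

# Keep only low-confidence predictions and review the least certain first.
low_confidence = sorted(
    (p for p in predictions if p["confidence"] <= CONFIDENCE_MAX),
    key=lambda p: p["confidence"],
)
for p in low_confidence:
    print(f'{p["image"]}: {p["label"]}:{p["value"]} [{p["confidence"]:.0%}]')
```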
Mismatches between predictions and annotations
- Set Show / hide annotated to “Annotated only” and Show / hide predicted to “Predicted only”.
- Click Apply filters.
- Examine cards where the prediction and annotation differ. Open these images in the annotation view for a closer look. (The comparison itself is sketched below.)
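For reference, the comparison the card view performs visually reduces to a dictionary check. The sketch below uses hypothetical per-image label dictionaries, not an EyVz API.

```python
# The mismatch check the UI performs visually, as a sketch over hypothetical
# per-image labels (not an EyVz API or export format).
annotations = {"cam1/0001.jpg": "Scratch", "cam1/0002.jpg": "None"}
predictions = {"cam1/0001.jpg": "Dent", "cam1/0002.jpg": "None"}

# An image is a mismatch when both labels exist and they differ.
mismatches = {
    image: (truth, predictions[image])
    for image, truth in annotations.items()
    if image in predictions and predictions[image] != truth
}

for image, (truth, predicted) in mismatches.items():
    print(f"{image}: annotated {truth!r}, predicted {predicted!r}")
```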
Filtering by specific prediction values
Use the Predictions filter to find all images where the model predicted a specific label and value. For example, enter Defect:Scratch to see every image the model classified as having a scratch. Then check whether those predictions look correct.
Use negation (!Defect:Scratch) to find images where the model did not predict a scratch — helpful for checking whether the model is missing real defects.
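Read as code, one plausible interpretation of this filter syntax is a membership test with optional negation. The function below is an illustrative re-implementation under that assumption, not EyVz's own matching logic.

```python
# One plausible reading of the Predictions filter, written out as code:
# "Defect:Scratch" keeps images carrying that prediction, and a leading "!"
# inverts the match. Illustrative only; not EyVz's actual matching code.
def matches_filter(image_predictions: list[tuple[str, str]], query: str) -> bool:
    negated = query.startswith("!")
    label, value = query.lstrip("!").split(":", 1)
    hit = (label, value) in image_predictions
    return hit != negated  # XOR: invert the result when the query is negated

preds = [("Defect", "Scratch"), ("Surface", "Matte")]
print(matches_filter(preds, "Defect:Scratch"))   # True
print(matches_filter(preds, "!Defect:Scratch"))  # False
print(matches_filter(preds, "!Defect:Dent"))     # True
```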
Adjusting the display threshold
The Only show prediction above threshold slider in the filter pane controls which predictions are shown on image cards. This is a display filter — it does not change the data, only what you see. (A sketch of this behavior follows the list below.)
- Set it low (for example, 10%) to see all predictions, including uncertain ones.
- Set it high (for example, 90%) to see only the predictions the model is most confident about.
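Conceptually, the slider behaves like the small function below: it filters what is rendered while leaving the stored predictions untouched. The data shapes here are hypothetical.

```python
# The display threshold hides low-confidence predictions from the card without
# touching the stored data. Sketch with hypothetical (label, value, confidence)
# tuples; the real rendering path is EyVz-internal.
predictions = [("Defect", "Scratch", 0.92), ("Defect", "Dent", 0.15)]

def visible(predictions, threshold):
    """Return only the predictions drawn on the card; the input is unchanged."""
    return [p for p in predictions if p[2] >= threshold]

print(visible(predictions, 0.10))  # low slider: both predictions shown
print(visible(predictions, 0.90))  # high slider: only the 92% prediction
print(predictions)                 # the underlying data is intact
```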
Opening images for closer inspection
Click any image thumbnail to open it in the annotation view (read-only). In this view you can:
- See the full-size image with zoom and image filters
- Toggle between Annotation and Prediction display modes
- Examine bounding boxes and confidence scores in detail
Building a mental model
After reviewing a range of images across different confidence levels, you should be able to say:
- “The model is reliable for X” — high confidence, consistently correct
- “The model struggles with Y” — frequent errors or low confidence on certain types of images
- “The model is uncertain about Z” — low confidence in a specific area, worth watching
This understanding helps you decide whether to trust the model, where to focus your annotation effort, and whether the confidence thresholds need adjusting.
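If you want numbers behind that mental model, a per-label summary of agreement rate and mean confidence makes the three statements above concrete. The sketch below runs over hypothetical (annotation, prediction, confidence) records you would assemble from your own review.

```python
from collections import defaultdict

# Hypothetical (annotation, prediction, confidence) records; in practice you
# would build these from your own EyVz review session or export.
records = [
    ("Scratch", "Scratch", 0.95),
    ("Scratch", "Dent", 0.60),
    ("None", "None", 0.98),
    ("None", "None", 0.91),
]

by_label = defaultdict(list)
for truth, predicted, confidence in records:
    by_label[truth].append((predicted == truth, confidence))

# "Reliable for X" shows up as high agreement with high mean confidence;
# "struggles with Y" as low agreement; "uncertain about Z" as low confidence.
for label, results in by_label.items():
    agreement = sum(ok for ok, _ in results) / len(results)
    mean_conf = sum(conf for _, conf in results) / len(results)
    print(f"{label}: {agreement:.0%} agreement, {mean_conf:.0%} mean confidence")
```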