How-To Guides

This chapter covers three core workflows. Each guide walks through a real task from start to finish using what the interface actually provides.

Annotate images manually to bootstrap the first model

When no model has been trained yet, you need to label images manually. These annotations become the training data for the first model.

  1. Open the Home view. You will see a grid of captured images with no predictions (since no model exists yet).

  2. Set the time range. In the filter pane, adjust the From and To fields to cover the period you want to annotate. Click Apply filters.

  3. Filter by rig and camera (optional). If you want to focus on images from a specific inspection rig or camera, add values to the Rig ID or Camera ID filters.

  4. Select images to annotate. You have two options:

    • Click Annotate displayed images to annotate everything currently shown in the grid.
    • Use the checkboxes on individual cards to pick specific images, then click Annotate selected images.
  5. Annotate each image. The annotation view opens with a queue of images. For each image:

    • Review the image carefully.
    • Select the appropriate label value for each annotation category using the radio buttons or controls provided.
    • If the image uses bounding boxes, draw or adjust boxes on the relevant areas.
    • Click to save and move to the next image.
  6. Navigate the queue. Use the navigation controls to move between images in your annotation queue. You can go forward and backward.

  7. Complete the batch. Once you have annotated all images in the queue, you return to the Home view. The annotated images now show your labels.
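The filtering in steps 2–3 can be sketched in code. This is purely illustrative: `CapturedImage` and `build_annotation_queue` are hypothetical names for this sketch, not Eydara's actual data model or API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CapturedImage:
    # Hypothetical record for one captured image.
    image_id: str
    captured_at: datetime
    rig_id: str
    camera_id: str
    labels: dict = field(default_factory=dict)  # filled in during annotation

def build_annotation_queue(images, start, end, rig_ids=None, camera_ids=None):
    """Mimic the filter pane: a From/To time range plus optional
    Rig ID and Camera ID filters (steps 2 and 3)."""
    queue = []
    for img in images:
        if not (start <= img.captured_at <= end):
            continue
        if rig_ids and img.rig_id not in rig_ids:
            continue
        if camera_ids and img.camera_id not in camera_ids:
            continue
        queue.append(img)
    return queue
```

Selecting "Annotate displayed images" then corresponds to queuing everything this filter returns, while per-card checkboxes queue a manual subset.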

Understand what a deployed model predicts

Once a model is deployed, you want to understand what it gets right, where it struggles, and how confident it is. Eydara gives you several tools for this.

  1. Open the Home view and make sure the time range covers a representative period.

  2. Filter to show only predicted images. Set Show/Hide predicted images to Predicted and click Apply filters. This removes images that have not been processed by a model.

  3. Review predictions at a glance. Scroll through the image grid and observe the predictions shown on each card. Look for patterns — are most predictions the same label? Are confidence scores generally high or low?

  4. Use the prediction threshold slider. Drag the Only show prediction above threshold (%) slider upward to hide low-confidence predictions from the cards. This reduces visual noise and lets you focus on what the model is most confident about.

  5. Filter by specific predictions. Use the Prediction filter to isolate images with a specific prediction, e.g. defect:scratch. This shows you all images the model classified that way.

  6. Check the uncertain zone. Set the Confidence (%) range to a low-to-mid range, such as 30–60%. These are the images where the model is least sure. Review them to understand where the model’s knowledge breaks down.

  7. Compare predictions to annotations. Set Show/Hide annotated images to Annotated. For annotated images, you can see both the human label and the model prediction on each card. Mismatches reveal where the model disagrees with human judgement.

  8. Use Statistics for the bigger picture. Switch to the Statistics view to see label distributions and trends over time. This helps you spot systematic patterns that are hard to see image by image.
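The threshold slider (step 4), the uncertain zone (step 6), and the annotation comparison (step 7) all reduce to simple filters over prediction records. The following sketch assumes a minimal dictionary shape with `confidence`, `annotation`, and `prediction` keys; Eydara's internal representation is not specified in this guide.

```python
def visible_predictions(predictions, min_confidence):
    """Step 4: hide predictions below the slider threshold."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

def uncertain_zone(predictions, low=0.30, high=0.60):
    """Step 6: isolate images where the model is least sure."""
    return [p for p in predictions if low <= p["confidence"] <= high]

def mismatches(cards):
    """Step 7: cards where the human label and the model prediction disagree."""
    return [c for c in cards if c["annotation"] != c["prediction"]]
```

Reviewing the output of `uncertain_zone` and `mismatches` together is often the fastest way to find candidate images for the next round of annotation.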

Verify that a new model version is better than the current one

When a new model is trained, you need to check whether it actually performs better than the current one before deploying it. Eydara’s comparison runs let you do this systematically.

  1. Select a representative set of images. In the Home view, use filters to find images that cover the range of conditions your model needs to handle — different products, defect types, lighting conditions, and so on. Select these images using the checkboxes or Select all.

  2. Start a comparison. Click Compare selected in the filter pane. On the comparison setup page, select the models you want to compare (the current model and the new candidate).

  3. Configure and create the run. Set the batch size and sleep duration if needed, then create the run. The run appears in the Runs view.

  4. Wait for completion. Monitor the run’s progress in the Runs view. The progress bar shows how many images have been processed.

  5. Open the results. Click on the completed run to see the comparison results.

  6. Sort by difference. In the results table, sort by the difference column. Entries with the largest difference are where the two models disagree most. Focus your review there.

  7. Review individual entries. Click through entries to see the detailed predictions from each model side by side. For each disagreement, ask: which model got it right?

  8. Check thresholds. Visit the Thresholds view to compare how the confidence distributions differ between the two models. A better model should generally show higher confidence on correct predictions and lower confidence on incorrect ones.

  9. Make a decision. Based on your review:

    • If the new model is clearly better across most cases, it can be promoted.
    • If results are mixed, investigate the cases where the new model is worse. Small regressions on rare cases might be acceptable; regressions on common or critical cases are not.
    • If the new model is not better, it should not be deployed. The comparison data can help guide the next training iteration.
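The "sort by difference" review in steps 6–7 can be sketched as a ranking over per-image disagreement scores. The scoring rule below (1.0 for a label mismatch, otherwise the confidence gap) is an assumption for illustration; Eydara's actual difference metric may be computed differently.

```python
def disagreement(pred_a, pred_b):
    """Per-image difference between two models' outputs.
    Assumed rule: maximal score when the labels differ,
    otherwise the absolute gap in confidence."""
    if pred_a["label"] != pred_b["label"]:
        return 1.0
    return abs(pred_a["confidence"] - pred_b["confidence"])

def rank_by_difference(results):
    """Sort comparison-run entries so the biggest disagreements
    come first -- these are the entries worth reviewing by hand."""
    return sorted(
        results,
        key=lambda r: disagreement(r["model_a"], r["model_b"]),
        reverse=True,
    )
```

Reviewing from the top of this ranking concentrates effort on the cases that actually decide whether the new model should be promoted.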