Glossary

Annotation
A human-provided label for an inspection image. When you annotate an image, you are recording what you see — for example, whether a defect is present and what type it is. Annotations are the ground truth that models learn from. Each image can have one set of annotations covering multiple labels.

Annotation config
A configuration that defines which labels apply to a given image position. Different cameras or image positions in a rig may inspect different parts of the product, so each has its own set of labels and options. Annotation configs also set the default values and confidence thresholds for each label. See also: Confidence threshold, Label.
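
To make the shape concrete, here is a minimal sketch of what such a config could contain, written as TypeScript-style types. Every field name is an illustrative assumption, not EyVz's actual schema:

    // Hypothetical annotation config shape; field names are assumptions
    // for illustration, not EyVz's actual schema.
    interface LabelConfig {
      name: string;                // e.g. "Surface quality"
      options: string[];           // e.g. ["Good", "Scratch", "Dent"]
      defaultValue: string;        // pre-selected when no usable prediction exists
      confidenceThreshold: number; // 0..1; minimum confidence to offer a suggestion
    }

    interface AnnotationConfig {
      imageId: string;             // the image position this config applies to
      labels: LabelConfig[];
    }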

Auto-refresh
A polling mode that refreshes the image grid every second to show newly captured images. Useful when monitoring a live production line. Can be toggled on or off in the filter pane on the Home page.

Bounding box
A rectangle drawn on an image to mark the location of a detected object. In object detection, both the model’s predictions and human annotations use bounding boxes to indicate where a defect or object appears. Each bounding box has a label, a value, and coordinates.
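
A minimal sketch of what a bounding box record could carry, assuming a pixel-based, top-left coordinate convention. The names are illustrative, not EyVz's actual data model:

    // Hypothetical bounding-box record; names and the coordinate
    // convention are assumptions for illustration.
    interface BoundingBox {
      label: string;  // the label category, e.g. "Surface quality"
      value: string;  // the assigned option, e.g. "Scratch"
      x: number;      // top-left corner, in pixels (assumed convention)
      y: number;
      width: number;
      height: number;
    }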

Bulk mode
An annotation workflow where you work through a queue of images one after another. The queue is built from selected or displayed images on the Home page. In bulk mode, submitting an annotation automatically advances to the next image in the queue.

Camera ID
An identifier for a specific camera within a rig. A rig may contain several cameras, each capturing a different angle or part of the product. The camera ID distinguishes between them.

Classification
A type of model prediction that assigns a single label to the entire image. For example, a classification model might predict whether an image shows a “good” or “defective” product. The prediction includes a confidence score indicating how certain the model is. See also: Object detection.

Compare run
A batch job that processes a set of images through one or more models and records the results. Compare runs are used to evaluate new model versions by running them side by side against the same images. Results are available in the Runs view.

Confidence score
A number between 0% and 100% that represents how certain the model is about a prediction. A high score (for example, 95%) means the model considers its prediction very likely to be correct. A low score (for example, 40%) means the model is uncertain. Confidence scores help you decide how much to trust a particular prediction, but they are not guarantees — a model can be confidently wrong.

Confidence threshold
The minimum confidence a prediction must have to be shown as a suggestion during annotation. If a model predicts a label with confidence below the threshold, the annotator sees the default value instead. Thresholds are configured per label in the Thresholds Statistics view. There is also a separate display threshold on the Home page that controls which predictions appear on image cards. See also: Default value, Thresholds Statistics.
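
A minimal sketch of this suggestion rule, assuming a prediction carries a value and a confidence. The names are illustrative:

    // Sketch of the suggestion rule: offer the model's prediction only
    // when its confidence clears the per-label threshold; otherwise fall
    // back to the configured default value. Names are illustrative.
    function suggestedValue(
      prediction: { value: string; confidence: number } | undefined,
      threshold: number,
      defaultValue: string,
    ): string {
      if (prediction !== undefined && prediction.confidence >= threshold) {
        return prediction.value; // strong enough to show as a suggestion
      }
      return defaultValue;       // below threshold, or no prediction at all
    }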

Custom tag
A user-defined label applied to images for organizational purposes. Tags are freeform text (for example, edge-case, reviewed, priority). You can add or remove tags from selected images on the Home page and filter by them.

Default value
The value pre-selected for an annotation label when no prediction exists or when the model’s prediction falls below the confidence threshold. Defaults are configured per label in the Thresholds Statistics view. They represent the most common or safest assumption for that label.

Frame trigger ID
An identifier that links all images captured from a single trigger event. When the production line triggers a capture, every camera in the rig fires simultaneously. All resulting images share the same frame trigger ID, making it possible to group images from the same physical item.
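
As an illustration of the grouping this makes possible, here is a sketch that buckets images by their shared frame trigger ID. The types are assumptions, not EyVz's API:

    // Sketch: grouping images captured from the same trigger event by
    // their shared frame trigger ID. Field names are assumptions.
    interface CapturedImage {
      frameTriggerId: string;
      cameraId: string;
    }

    function groupByTrigger(images: CapturedImage[]): Map<string, CapturedImage[]> {
      const groups = new Map<string, CapturedImage[]>();
      for (const img of images) {
        const group = groups.get(img.frameTriggerId) ?? [];
        group.push(img);
        groups.set(img.frameTriggerId, group);
      }
      return groups; // one entry per physical item that triggered a capture
    }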

Image ID
An identifier for the image position within a camera sequence. If a camera captures multiple images in a single sequence (for example, with different exposure settings), each position has a distinct image ID. The image ID determines which annotation config applies.

Inference result
The output produced when an AI model processes an image. It contains the model’s predictions: labels, confidence scores, and — for object detection models — bounding box locations. Each image can have inference results from multiple models.

Label
A category that annotators assign values to when labeling an image. For example, a label might be called “Surface quality” with options “Good”, “Scratch”, and “Dent”. Labels are defined in the system configuration and vary by image position. See also: Label map.

Label map
A mapping from internal numeric label IDs to human-readable names and colors. The label map translates what the system stores internally into what you see in the interface.
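
A minimal sketch of such a mapping; the IDs, names, and colors are invented examples, not real configuration:

    // Sketch of a label map: internal numeric IDs mapped to display
    // names and colors. All entries are invented examples.
    const labelMap: Record<number, { name: string; color: string }> = {
      0: { name: "Good",    color: "#2e7d32" },
      1: { name: "Scratch", color: "#f9a825" },
      2: { name: "Dent",    color: "#c62828" },
    };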

Model
A trained AI system that processes images and produces predictions. Models are identified by a model ID and may have multiple versions. Different models can be trained for different inspection tasks. In EyVz, you see model predictions alongside your images and can compare how different models perform.

Object detection
A type of model prediction that identifies and locates specific objects within an image using bounding boxes. Unlike classification, which labels the entire image, object detection marks where individual defects or objects appear. Each detection includes a class label, a confidence score, and bounding box coordinates. See also: Classification, Bounding box.

Prediction
The AI model’s output for an image. Distinct from an annotation, which is a human-verified label. Predictions include one or more labels with confidence scores. For classification models, predictions are shown as label : value [confidence%] on image cards. For object detection models, predictions are drawn as bounding boxes on the image.
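
A minimal sketch of that classification display format; the function and the example values are illustrative:

    // Sketch of the "label : value [confidence%]" format shown on image
    // cards for classification predictions. Names are illustrative.
    function formatPrediction(label: string, value: string, confidence: number): string {
      return `${label} : ${value} [${Math.round(confidence * 100)}%]`;
    }

    // formatPrediction("Surface quality", "Scratch", 0.95)
    //   -> "Surface quality : Scratch [95%]"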

Rig
A physical inspection station on the production line. A rig contains one or more cameras arranged to capture images of products as they pass. Each rig has a unique identifier (rig ID). Different rigs may inspect different parts of the production process.

Rig ID
The identifier for a specific rig. Used to filter images by which inspection station captured them.

Sequence ID
An identifier for the camera sequence configuration used during capture. A sequence defines how a camera captures images — including exposure settings and how many images to take per trigger.

Skip
An action that marks an image as skipped during annotation. Skipped images are hidden from future annotation queues by default (controlled by the “Hide skipped annotations” filter). Use this when an image is too ambiguous to label confidently. See also: Bulk mode.

Sticker number
The value of a barcode physically attached to the inspected item. When barcodes are read during capture, the sticker number links a physical product to its inspection images. Shown on image cards when available and configured.

Thresholds Statistics
A view in EyVz where you configure confidence thresholds and default values for each annotation label. Thresholds control when model predictions are strong enough to be offered as suggestions during annotation. See also: Confidence threshold, Default value.