Glossary
This glossary defines terms used throughout Eydara and its documentation. Definitions are written for domain experts — people who know their product, not the underlying technology.
Annotation
A label applied to an image by a human. Annotations represent expert judgement about what the image shows — for example, whether a product has a defect and what kind. Annotations are structured as label:value pairs, where the label is a category (such as defect) and the value is the specific determination (such as scratch). Annotations are used to train and improve the model. See also: Prediction.
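As a rough illustration of the label:value structure described above, an annotation could be represented like this. The field names are hypothetical, not Eydara's actual schema:

```python
# Illustrative annotation record; field names are assumptions, not Eydara's schema.
annotation = {
    "label": "defect",       # the category being judged
    "value": "scratch",      # the specific determination
    "annotated_by": "jane",  # tracked so images can be filtered by annotator
}

def format_annotation(a: dict) -> str:
    """Render an annotation as the label:value pair used in the interface."""
    return f"{a['label']}:{a['value']}"
```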
Annotated by
The user who provided the annotation for an image. This field is tracked so you can filter images by annotator — useful when reviewing annotations from different team members or when auditing annotation quality.
Auto-update
A toggle in the filter pane that, when enabled, causes the Home view to poll the database every second for new images and refresh the grid automatically. Useful during active production monitoring. Should be disabled when not needed, as it adds continuous database load.
Bounding box
A rectangle drawn on an image to mark the location of a specific object or region. Used in object detection tasks, where the model or the annotator needs to indicate not just what is in the image but where it is. Bounding boxes can be drawn, resized, and repositioned in the annotation view.
Camera ID
An identifier for a specific camera at an inspection rig. Each camera captures images from a particular angle or position. Filtering by camera ID lets you focus on images from one viewing angle.
Classification
A type of prediction where the model assigns a category to the entire image (or to a predefined region). For example, classifying an image as quality:ok or quality:reject. Unlike object detection, classification does not identify where something is — only what the overall assessment is. See also: Object detection.
Comparison run
A structured process where a set of images is sent through two or more models so their predictions can be compared side by side. Runs are created from the Home view and tracked in the Runs view. Each run records the predictions from each model for every image, making it possible to identify where models agree and where they differ. See also: Run.
Confidence score
A number between 0 and 1 that accompanies every model prediction. It represents how certain the model is about its prediction. A confidence of 0.95 means the model is quite sure; 0.30 means it is largely guessing. Confidence scores are not probabilities in a strict sense — they indicate relative certainty, not a guarantee. See also: Threshold.
Custom tags
User-defined labels that can be attached to images for organisation and filtering. Unlike annotations, tags do not represent expert judgement about the image content — they are metadata for workflow management. Examples might include tagging images for later review or marking them as part of a specific batch.
Default value
The annotation value that is pre-selected when you open an image for annotation. For each label, a default value is configured in the Thresholds view. This saves time during annotation, but you should always verify that the default is correct before confirming.
Filter pane
The panel on the left side of the Home and Statistics views that contains all filter controls. Filters narrow down which images appear in the grid. Changes take effect only when you click Apply filters. See also: Filtering documentation.
Filter profile
A saved set of filter values that can be loaded with one click. Filter profiles let you quickly switch between commonly used filter combinations without manually re-entering each value.
Frame trigger
An identifier that links an image capture event to a specific trigger signal — typically generated by a sensor detecting a product on the production line. Filtering by frame trigger lets you find all images associated with a specific trigger event.
Image ID
A sequence number identifying a specific image within a capture sequence. Different from the internal database identifier, the image ID corresponds to the position within the configured capture sequence.
Ingest
The pipeline stage that stores processed images and registers them in the system so they become visible in the interface. You do not interact with ingestion directly — it happens automatically after images are captured and preprocessed.
Model
A trained AI vision model that produces predictions for images. Each model has a unique identifier (Model ID) and may specialise in different types of inspection tasks. Multiple models can be deployed simultaneously, each analysing images for different purposes.
Model ID
A unique identifier for a specific trained model. Used to filter images by which model produced the predictions and to select models for comparison runs.
Negation
A filtering technique where clicking on an already-added filter value toggles it to an exclusion. Negated values are prefixed with ! and appear in red. When negated, the filter means “exclude images matching this value” rather than “include them.”
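The include/exclude semantics can be sketched as follows. This is a minimal illustration of how a filter list with `!`-prefixed values behaves, not Eydara's actual matching code:

```python
def matches(filter_values: list[str], image_value: str) -> bool:
    """Apply a filter list where values prefixed with '!' mean exclusion.

    Illustrative sketch only: an image is rejected if its value is
    negated, and otherwise accepted if the include list is empty or
    contains the value.
    """
    includes = [v for v in filter_values if not v.startswith("!")]
    excludes = [v[1:] for v in filter_values if v.startswith("!")]
    if image_value in excludes:
        return False
    return not includes or image_value in includes
```

For example, the filter list `["!scratch"]` accepts a `dent` image but rejects a `scratch` image.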
Object detection
A type of prediction where the model identifies specific objects within an image and marks their location with bounding boxes. Each detected object has a class label and a confidence score. Unlike classification, object detection tells you both what and where. See also: Classification.
Prediction
An output produced by a model for an image. Predictions indicate what the model thinks the image shows and how confident it is. Displayed as label:value [confidence%] on image cards. Predictions are the model’s best guess — they should always be verified against human judgement, especially for critical decisions. See also: Annotation, Confidence score.
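The label:value [confidence%] display format can be sketched like this. The exact rounding behaviour is an assumption:

```python
def format_prediction(label: str, value: str, confidence: float) -> str:
    """Render a prediction in the label:value [confidence%] style shown
    on image cards. Rounding to a whole percentage is an assumption."""
    return f"{label}:{value} [{round(confidence * 100)}%]"
```

So a quality:ok prediction with confidence 0.95 would render as `quality:ok [95%]`.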
Prediction threshold slider
A display-only control in the filter pane that hides predictions below a certain confidence level from image cards. Unlike the confidence range filter, it does not remove any images from the results — it only reduces visual clutter on the cards. See also: Confidence score.
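Conceptually, the slider filters what a single card displays, not which images are returned. A minimal sketch of that display-only behaviour, with a hypothetical prediction structure:

```python
def visible_predictions(predictions: list[dict], slider_level: float) -> list[dict]:
    """Keep only predictions at or above the slider level for display.

    Display-only sketch: the image itself stays in the results; its card
    simply shows fewer predictions. The dict shape is an assumption.
    """
    return [p for p in predictions if p["confidence"] >= slider_level]
```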
Rig
An inspection station consisting of one or more cameras positioned to photograph products as they pass through. Each rig has a unique Rig ID. Different rigs may photograph the same product from different angles or inspect different product types.
Rig ID
A unique identifier for an inspection rig. Used to filter images by which rig captured them.
Run
Short for comparison run. See: Comparison run.
Skipped annotation
An annotation status indicating that the annotator chose to skip the image rather than label it — typically because the image was unclear, damaged, or not suitable for annotation. By default, skipped images are hidden from the Home view.
Threshold
A configured confidence level below which a model prediction is treated as uncertain. Each annotation label can have its own threshold, set in the Thresholds view. Thresholds are a human decision about acceptable uncertainty — they do not change the model, but they change how predictions are interpreted and used. See also: Confidence score.
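Per-label threshold interpretation can be sketched as follows. The threshold values and the fallback for unconfigured labels are assumptions for illustration:

```python
# Hypothetical per-label thresholds, as would be set in the Thresholds view.
thresholds = {"defect": 0.80, "quality": 0.60}

def is_uncertain(label: str, confidence: float,
                 thresholds: dict[str, float], default: float = 0.5) -> bool:
    """A prediction below its label's threshold is treated as uncertain.

    The 0.5 fallback for labels with no configured threshold is an
    assumption; the model output itself is unchanged either way.
    """
    return confidence < thresholds.get(label, default)
```

With these example values, a defect prediction at 0.75 confidence is treated as uncertain, while a quality prediction at the same confidence is not.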