In-Sight ViDi Statistics Overview
Statistical analysis is at the core of In-Sight ViDi deep learning tools because it validates the software’s performance. This training video illustrates some of the most important statistical concepts that go into developing machine vision applications with In-Sight ViDi.
Things you will learn about in this video:
Ground truth. In a deep learning application, ground truth is the verified, correct information about each image. The software draws inferences, or predictions, and its accuracy is measured statistically against that ground truth.
Labels. In-Sight ViDi developers use labels to establish ground truth. Each label tells the software exactly what to look for in a digital image. The software uses labels to draw comparisons between different objects or features within the digital image.
Markings. This is where the inferences come into play. In-Sight ViDi tools use statistical analysis to infer the meaning of data generated by labels in multiple digital images. The software’s inferences are represented in graphics that help developers understand the application’s performance. The core of deep learning is to draw the most accurate inferences. To do that, the ground truth and labeling must be as accurate as possible.
Test/train split. Some of the digital images imported into In-Sight ViDi are used to train the application, while the remaining images are used to test its accuracy. Typically, the split is 50/50 and the test/training images are chosen at random.
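A random 50/50 split like the one described above can be sketched in a few lines of Python. This is a generic illustration, not In-Sight ViDi code; the function name and the fixed seed are assumptions made for the example.

```python
import random

def train_test_split(images, train_fraction=0.5, seed=42):
    """Randomly partition an image set into training and test subsets."""
    rng = random.Random(seed)        # fixed seed so the split is reproducible
    shuffled = list(images)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

images = [f"img_{i:03d}.png" for i in range(10)]
train, test = train_test_split(images)
```

Every image lands in exactly one of the two sets, so the test images are never seen during training.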
Positive/negative predictions. A deep-learning application must be able to predict whether the information in a digital photo matches the ground truth data generated by labeling. This process creates four outcomes: true positive, true negative, false positive, and false negative.
The video illustrates the point with an example of a woman asking her doctor if she is pregnant.
- True positive. The doctor rightly tells the woman she is pregnant.
- True negative. The doctor rightly tells the woman she is not pregnant.
- False positive. The doctor wrongly tells the woman she is pregnant.
- False negative. The doctor wrongly tells the woman she is not pregnant.
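The four outcomes in the doctor example reduce to two booleans: what is actually true (ground truth) and what is predicted. A minimal sketch, with names chosen for this example:

```python
def outcome(ground_truth: bool, prediction: bool) -> str:
    """Classify a single prediction against ground truth.

    ground_truth -- the actual condition (e.g. the woman is pregnant)
    prediction   -- what the predictor says (e.g. the doctor's answer)
    """
    if prediction and ground_truth:
        return "true positive"       # rightly says yes
    if not prediction and not ground_truth:
        return "true negative"       # rightly says no
    if prediction and not ground_truth:
        return "false positive"      # wrongly says yes
    return "false negative"          # wrongly says no
```

"True/false" describes whether the prediction was correct; "positive/negative" describes what was predicted.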
The software tallies these four outcomes to generate data that helps machine-vision developers fine-tune the accuracy of their applications. The goal is to train the software to maximize true outcomes and minimize false ones.
Production errors. In production environments, machine vision systems automate inspections to improve accuracy and efficiency. The goal is to reduce errors to as close to zero as possible, but a few errors inevitably appear. This is the related terminology:
- Overkill. A false positive (Type I) error: the inspection mistakenly rejects a part that has no defects.
- Escape. A false negative (Type II) error: the inspection fails to reject a defective part.
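Counting overkill and escapes is just counting false positives and false negatives, where "positive" means a defect was detected. A hedged sketch (the pair format and function name are assumptions for illustration):

```python
def count_errors(results):
    """Count production errors from (is_defective, predicted_defective) pairs.

    Overkill: a good part flagged as defective (false positive).
    Escape:   a defective part passed as good (false negative).
    """
    overkill = sum(1 for truth, pred in results if pred and not truth)
    escapes = sum(1 for truth, pred in results if truth and not pred)
    return overkill, escapes

# One good part wrongly rejected, one bad part missed, one bad part caught,
# one good part correctly passed.
inspections = [(False, True), (True, False), (True, True), (False, False)]
overkill, escapes = count_errors(inspections)
```

In practice the two error types carry different costs: overkill wastes good parts, while escapes ship defects to customers.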
Confusion matrix. This is a graphic illustrating the difference between ground truth and the application’s predictions. Ideally, the correct predictions form a diagonal line sloping downward from the top left of the matrix to the bottom right, with zeros everywhere else.
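A confusion matrix can be built by counting how often each ground-truth class pairs with each predicted class. The sketch below is generic Python, not the In-Sight ViDi implementation; rows are ground truth and columns are predictions, so correct predictions land on the diagonal.

```python
from collections import Counter

def confusion_matrix(truths, predictions, classes):
    """Rows = ground-truth class, columns = predicted class."""
    counts = Counter(zip(truths, predictions))
    return [[counts[(t, p)] for p in classes] for t in classes]

truths = ["good", "good", "defect", "defect"]
predictions = ["good", "defect", "defect", "defect"]
matrix = confusion_matrix(truths, predictions, ["good", "defect"])
# matrix[0][1] holds the one good part predicted as a defect.
```

Off-diagonal cells show exactly which classes the application confuses with each other, which is where the name comes from.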
Recall. This is the percentage of labeled features in digital images that are correctly identified. It’s calculated by dividing true positives by the sum of true positives and false negatives: TP/(TP+FN).
Precision. The percentage of detected features that match a labeled feature or class. It’s calculated by dividing true positives by the sum of true positives and false positives: TP/(TP+FP).
F Score. The harmonic mean of recall and precision, which combines the two into a single accuracy measure: 2 × (precision × recall)/(precision + recall).
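The three metrics above follow directly from the true-positive, false-positive, and false-negative counts. A minimal sketch (the function name and sample counts are made up for the example):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F score from outcome counts.

    precision = TP / (TP + FP)  -- of the detections made, how many were right
    recall    = TP / (TP + FN)  -- of the real features, how many were found
    f1        = harmonic mean of precision and recall
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 8 features found correctly, 2 spurious detections, 2 missed.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)
```

The harmonic mean punishes imbalance: a model with high recall but poor precision (or vice versa) scores a low F, so the metric rewards applications that are strong on both.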
Each of the In-Sight ViDi tools — ViDi Read, ViDi Check, and ViDi Detect — uses these statistical frameworks differently, but they all aim to make the most accurate predictions. The links below go to Help pages that further illustrate these points and support the topics covered in the video.