Anomaly Detection with the k-NN Model
November 15, 2021

Vinay Senthil

Finding a defective item among thousands of non-defective ones is an essential task across industries. For example, identifying a dented can on a production line manufacturing thousands of cans per minute or spotting a defective solar panel in a grid stretching miles are both crucial tasks. People often refer to this as “anomaly detection,” since you’re looking for something that’s not the way it should be.

Traditionally, this form of repetitive visual inspection has been time-consuming, demanding hours of human labor to process trillions of pixels. However, organizations have unlocked real business value and freed human creativity by integrating cutting-edge computer vision to augment human sight and supercharge anomaly detection workflows. In some cases, AI has improved manufacturing defect detection rates by 90% and can identify problems in a CT scan up to 150 times faster than a physician can. In fact, CrowdAI receives numerous requests from clients who want to deploy computer vision in their anomaly detection workflows.

Why anomaly detection is a unique computer vision problem

In anomaly detection, a learning algorithm studies a set of example media called the training set. Then, based on what it sees, the algorithm determines whether new images are “normal” or “anomalous.” This process is known as supervised learning, since we’re the ones flagging examples of anomalies for the algorithm to learn from. However, supervised learning will only take you so far. First, anomalies can take a virtually infinite number of forms, making it nearly impossible to teach a model to recognize every single irregularity that can occur in the real world. Second, a problem called class imbalance may occur: we have too many samples of media depicting “normal” objects and not enough “anomalous” examples, usually because anomalies are rare in practice.

At CrowdAI, our research team is experimenting with innovative model architectures that leverage unsupervised learning, which doesn’t require humans to label examples of anomalies in order for the model to learn. With our approach, we can form an understanding of what “normal” objects look like, then train a model that considers anything outside that description to be anomalous.

Warning: we’re moving into some technical territory!

Distance-Based Thresholding and Anomaly Detection

One unsupervised approach uses a k-Nearest Neighbors (k-NN) model, a distance-based thresholding technique, to determine if an object is anomalous or not. Here’s an overview of how that might work. 

Imagine we want to find faulty boxes on a production line. We would start by asking ourselves, “What makes a box a box?” Specifically, what features are so essential to the concept of a box that we’d only call a given object a box if they were present?

From individual features to a feature space

We answer this question by defining the essential features of a box. We then turn the range of possibilities for a particular feature (such as height) into coordinates that can be plotted on a graph. The result is what machine learning engineers call a feature space: an abstract representation of the original images on a graph. The closer two dots are to each other in the feature space, the more similar those images’ features (e.g., heights) are, indicating that the original boxes are similar. Distance-based thresholding uses this understanding of an object and its features to establish a standard for what counts as a “normal” object or box.
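As a minimal sketch of this idea (the box measurements and the two-feature height/width space below are made-up, illustrative values, not real production data), distance in the feature space directly reflects how similar two boxes are:

```python
import numpy as np

# Hypothetical feature vectors (height, width), in inches, for three boxes.
box_a = np.array([5.0, 7.0])
box_b = np.array([5.1, 6.9])   # nearly identical to box_a
box_c = np.array([8.0, 9.0])   # noticeably different

# Euclidean distance in feature space: smaller distance = more similar boxes.
print(np.linalg.norm(box_a - box_b))  # ~0.14 -> very similar
print(np.linalg.norm(box_a - box_c))  # ~3.61 -> dissimilar
```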

Remember, if we plot the features of all “normal” boxes, we will see a tightly-packed cluster of points in the feature space—they are near one another because they are similar. We then can classify any box whose features fall at least a certain distance away from this cluster as “anomalous”. 
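Here is one way that could look in code. This is a rough sketch rather than CrowdAI’s implementation: the feature values are invented, and the use of scikit-learn’s NearestNeighbors, the mean-distance score, and the threshold value are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical (height, width) features for boxes we know are "normal".
normal_boxes = np.array([
    [5.0, 7.0], [5.1, 6.9], [4.9, 7.1],
    [5.2, 7.0], [4.8, 6.8], [5.0, 7.2],
])

# Index the normal examples only -- no labeled anomalies are needed.
k = 3
knn = NearestNeighbors(n_neighbors=k).fit(normal_boxes)

def anomaly_score(box):
    """Mean distance from a new box to its k nearest normal neighbors."""
    distances, _ = knn.kneighbors([box])
    return distances.mean()

# Boxes far from the cluster of normal points get large scores.
print(anomaly_score([5.5, 7.0]))  # small score: sits near the cluster
print(anomaly_score([8.0, 9.0]))  # large score: far from the cluster
```

Any box whose score exceeds a chosen distance threshold would then be flagged as anomalous, which is exactly the rule worked through below.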

For example, the average height and width of the boxes in our data set could be 5 inches and 7 inches, respectively. We can then make a rule (or threshold) that any box more than one inch away from these averages is considered anomalous. With this threshold, we can now classify a box with features (x, y), where x is its height and y is its width. A box with (5.5, 7) is classified as “normal”, since it is within the one-inch threshold, whereas a box with (8, 9) would be deemed anomalous. Determining this tolerated range is the foundational concept behind distance-based thresholding and anomaly detection.
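Putting the numbers from this example into code (again, a toy sketch: the 5-by-7-inch average and one-inch threshold are just the illustrative values above, and Euclidean distance is one of several reasonable distance choices):

```python
import math

# Average "normal" box dimensions from the example above, in inches.
MEAN_HEIGHT, MEAN_WIDTH = 5.0, 7.0
THRESHOLD = 1.0  # tolerated distance from the average, in inches

def classify(height, width):
    # Euclidean distance from the average box in (height, width) space.
    distance = math.hypot(height - MEAN_HEIGHT, width - MEAN_WIDTH)
    return "anomalous" if distance > THRESHOLD else "normal"

print(classify(5.5, 7.0))  # 0.5 inches from the average -> "normal"
print(classify(8.0, 9.0))  # about 3.6 inches away       -> "anomalous"
```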

Below is an example of a workflow for anomaly detection in the CrowdAI Platform.

At CrowdAI, we believe that anomaly detection combined with unsupervised learning offers many advantages: greater enterprise agility through a reduced need for labeled images, shorter model training times, and minimal modifications to model architecture. We’re continuing to explore various anomaly detection models alongside several other techniques to help our clients quickly realize improved business outcomes.

For more deep-dives into anomaly detection and other exciting research in computer vision, follow our LinkedIn page, subscribe to our mailing list, and continue to read the “CrowdAI Research blog series”. 

