Video Annotation Best Practices with Annika Deurlington
In this post, we sit down with Annika Deurlington, Commercial Program Manager at CrowdAI. She shares best practices for video annotation and reveals hidden industry tricks.
You’ve invested time, effort, and money into your AI project, but it fails to advance out of the lab. Having data and models alone is not enough to jump-start an AI project. According to VentureBeat, only 13% of data science projects actually make it into production.
So how can you increase the odds that your projects will make it into production?
Our team at CrowdAI has spoken with or worked with dozens of teams across a wide array of industries and sectors. We’ve spoken with innovation teams who are motivated to understand how new technology works and are willing to spend the time to assess if and how it can benefit the enterprise. We’ve also worked with operational teams who see the pressing need for computer vision to maximize the value they get from their visual data, but who don’t have the data science support or know-how to put AI to work in a real-world environment.
Most importantly: both of these types of teams are interested in taking their organization’s use of visual media to the next level.
From our conversations, we’ve compiled a list of the problems that keep projects stuck in R&D and, more importantly, the solutions to them. Understanding why AI efforts fail to become operational is important, but learning how to move past those failures is even more so.
Typically, we’ve found that AI projects that fail to get out of the R&D phase do so for one or both of these reasons: they rely on whatever data happens to be on hand, or they fail to involve subject matter experts early.
The first reason is the “I already have some data on the shelf, so I should be fine, right?” problem, and we understand that impulse.
It’s true: you need data to build an AI model! But this hides a lot of nuance that you’ll need to consider if you ever want that model to be useful in the real world. AI models can be a bit like cars: they need maintenance and care if they’re to last a long time. Start thinking now about the production environment where your imagery is being collected today—cameras on a manufacturing line, CCTV, satellites. Is this the same source as the data you already have? If so, how common (or rare) is the object you’re looking for in that imagery?
Sure, you could train a model to work on your existing data, but without thinking now about the future, that model will likely only ever be useful in R&D.
The second reason is leaving subject matter experts (SMEs) out of the loop. SMEs are perhaps the most critical members of the team when it comes to taking AI into production. After all, it’s their workflow that we’re trying to make easier with AI!
We’ve found time and again that it’s mission-critical to involve the SMEs who best know the visual problem from the very beginning of the AI journey. With computer vision specifically, the goal is to create a model that can enhance the visual expertise these SMEs have spent years building. Their intuition for what counts as “good” or “bad” or “a problem” in an image or video is precisely what the model is attempting to learn. In essence, you’re creating AI to digitize their eyes and expertise, so they can focus on more complex tasks.
Though these obstacles have hindered many innovators from effectively using computer vision, we have some tips that can help you overcome them.
Over the next few blog posts, I’ve asked my team to dive deeper into these issues and share their best recommendations for moving past the roadblocks that keep your AI out of production.
(UPDATE!) Here are those articles: