AI & ML Model Maintenance


Managing Federal AI Initiatives Post-Production by Mitigating Model Drift

[Image: A phone pointed at a sign in Russian, translating the text in the image live.]


Deploying an AI Model is Only the Beginning

Once you have built your Machine Learning (ML) Model, you might wonder what comes next. You infused the model with a diverse set of training data, and it is achieving great results, but that does not mean it is time to move on to your next initiative.

In fact, deployment is only the beginning of your ML Model’s lifecycle.

There is a final, but continuous, phase of development: model monitoring and maintenance. Like a piece of machinery, AI Models need regular tune-ups and upgrades to continue meeting expectations. Without monitoring and maintenance, your model’s accuracy will degrade significantly over time.

You will need to monitor your model frequently post-deployment to ensure it continues to perform properly. Regular monitoring helps you anticipate when the model needs to be retrained to mitigate inaccuracies. Monitoring is most successful when the infrastructure for it is established at the outset of the project, as part of the overall project plan.

What is Model Drift?

[Figure: Model accuracy over time with and without retraining. Left: without retraining, accuracy steadily declines as drift accumulates. Right: continual retraining corrects that decline as soon as it begins.]

Initial training of a model requires data, generally historical data. Once deployed, the model remains static along with its training data, but in the real world, data is anything but static. Models operate in ever-changing settings, constantly encountering new situations. These changes cause degradation because the model can no longer reliably predict and interpret the unfamiliar data. This performance decline is called model drift, sometimes referred to as concept drift.
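To make drift concrete, here is a minimal, synthetic sketch (hypothetical data and model, not any system described in this article): a classifier is trained once on historical data and then evaluated on live data whose distribution keeps shifting away from what the model saw in training. Accuracy falls as the shift grows.

```python
# Hypothetical illustration of model drift: a model trained on static historical
# data loses accuracy as the live data distribution shifts away from it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)

def make_data(n, shift=0.0):
    """Two-class synthetic data; `shift` moves the class means, mimicking real-world change."""
    X = np.vstack([
        rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2)),  # class 0
        rng.normal(loc=2.0 - shift, scale=1.0, size=(n, 2)),  # class 1
    ])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train once on "historical" data.
X_train, y_train = make_data(1000)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on progressively shifted "live" data: accuracy degrades as drift grows.
for shift in (0.0, 0.5, 1.0, 1.5):
    X_live, y_live = make_data(1000, shift=shift)
    print(f"shift {shift:.1f} -> accuracy {accuracy_score(y_live, model.predict(X_live)):.2f}")
```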

Let’s walk through an example. The Army utilizes a talent management system to find the best potential candidates for future assignments. The system ensures that the right people fill the right positions by matching open opportunities with personnel who have the correct skillset.

These potential positions and training opportunities are constantly changing as new opportunities arise based on emerging global threats, technological advances, and an ever-evolving pool of soldiers. Each time a new job is created to develop a desired skillset (some recent examples include Space Force roles, UAV Operators, Data Scientists, and Technicians), the model needs to understand the potential connections with tangentially related skills to source the current service members best suited for new missions and emerging technology.

If the talent management system is not adequately updated and the model not retrained to understand the new dataset’s implications on personnel training and management, the system ceases to be useful in ensuring the right people fill the correct roles. This can mean time wasted in training soldiers with no prior background in the mission objective.

Why Model Maintenance is Critical

To solve the above problem, engineers infuse the model with new, accurate data on potential jobs and trainable skills, then help the model form connections with already present datasets. After this update, the model’s performance returns to its former precision.

In the talent management example above, drift detection has clearly defined opportunities for scheduling retraining: every time a new skillset or mission objective is created, the model needs to learn how that skillset relates to other relevant skills so it can keep optimizing personnel assignments. In other deployed models, external changes are more difficult to detect because the data shifts subtly. In those cases, model-specific metrics can be implemented to monitor the model’s performance.
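One widely used metric of this kind is the Population Stability Index (PSI), which compares the distribution of a feature (or of the model’s output scores) in production against the distribution seen at training time. The sketch below is a generic illustration; the 0.10 and 0.25 thresholds are common rules of thumb, not values tied to any system discussed here.

```python
# Population Stability Index (PSI): a simple drift metric comparing a live sample's
# distribution against the baseline distribution seen at training time.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline sample (`expected`) and a live sample (`actual`)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the baseline range so extreme values land in the edge bins.
    expected = np.clip(expected, edges[0], edges[-1])
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(seed=1)
baseline = rng.normal(0.0, 1.0, 10_000)   # e.g., model scores at training time
live = rng.normal(0.4, 1.2, 10_000)       # scores after the environment has changed

score = psi(baseline, live)
if score > 0.25:
    print(f"PSI = {score:.3f}: significant shift, retraining likely needed")
elif score > 0.10:
    print(f"PSI = {score:.3f}: moderate shift, investigate")
else:
    print(f"PSI = {score:.3f}: distribution looks stable")
```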

When to Retrain a Model

Each model needs to be analyzed individually when determining how often to retrain. Three factors control this rate: the model, the data, and the mission objectives. Depending on the task, the data may change rapidly or intermittently, and that rate of change dictates when to retrain. For example, the information gathered about COVID-19 is rapidly evolving, and as more data is acquired, the models tracking the virus need continuous updating. Other types of models live in less dynamic environments and do not need retraining as frequently.

There are two main approaches to retraining, and the choice is made on a case-by-case basis; both are sketched in code after the descriptions below.

Time-based: Retraining your model at preplanned, regular intervals, regardless of its performance. Since the model is retrained indiscriminately, it is essential to have a clear understanding of the average rate of change in your model’s variables. If the training intervals are too far apart, the model’s performance will suffer.

Continuous: Establishing a set of performance metrics and monitoring them to determine when retraining is needed, based on a predetermined maximum threshold for error and other bias metrics. To accomplish this, a comprehensive set of clearly defined measurements must be monitored regularly to accurately predict when model drift is occurring.
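As a rough illustration of the difference, here is a minimal sketch of the two triggers in code. The 30-day interval, the 8% error threshold, and the trigger functions themselves are hypothetical placeholders, not part of any particular MLOps product; you would substitute values and checks appropriate to your model and mission.

```python
# Hypothetical retraining triggers: time-based (fixed schedule) vs. continuous
# (performance-threshold based). Interval and threshold values are illustrative only.
from datetime import datetime, timedelta

RETRAIN_INTERVAL = timedelta(days=30)   # time-based: retrain every 30 days
MAX_ERROR_RATE = 0.08                   # continuous: retrain once average error exceeds 8%

def time_based_trigger(last_trained: datetime, now: datetime) -> bool:
    """Retrain on a fixed schedule, regardless of measured performance."""
    return now - last_trained >= RETRAIN_INTERVAL

def continuous_trigger(recent_error_rates: list) -> bool:
    """Retrain when monitored performance crosses a predefined threshold."""
    if not recent_error_rates:
        return False
    # Average over a recent window so one bad batch does not force a retrain.
    return sum(recent_error_rates) / len(recent_error_rates) > MAX_ERROR_RATE

# Example with made-up monitoring values:
print(time_based_trigger(datetime(2021, 1, 1), datetime(2021, 2, 15)))  # True: interval elapsed
print(continuous_trigger([0.05, 0.07, 0.12, 0.11]))                     # True: errors trending above 8%
```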

Your experience with the dataset and use case will help inform the retraining approach your team takes; you can also combine the two techniques. Modern ML training tools are more advanced than ever and can detect biases and failures quickly enough to allow for an appropriate response. We also recommend Human-Machine Teaming – employing human annotators to check the model’s performance and ensure precision.

How to Plan for Post-Production Success

[Image: A woman and a man collaborating at a computer, writing code.]

Retraining should not be an afterthought; it should be part of the overall project plan for any AI model you deploy. To build a successful retraining pipeline, begin preparing early by assembling a team and deciding which metrics will indicate that retraining is required. Things you will need include:

A dedicated MLOps team. MLOps for AI is like DevOps for software development: a collaboration of data scientists and production teams. This collaboration is a key departure from the traditional format where data scientists build the model and engineers maintain it with little interaction between teams after each part is complete.

If gathering a dedicated MLOps team is not possible, start by fostering lines of communication between the engineers and data scientists early in the process. Having the two teams work closely together helps create a well-rounded model maintenance plan: the data scientists know how the model was built and trained, and that knowledge improves the engineers’ decisions when producing and maintaining the models.

Efficiently implement customer feedback. The team in charge of model maintenance, whether an MLOps or engineering team, needs to not only communicate internally but also keep in constant communication with customer-facing teams. The customers using the models are extremely effective at identifying model errors and areas of underperformance. Efficiently relaying this information back to the production team allows for quicker model retraining and a better outcome for the customer’s AI initiative.

Ensure stakeholders understand the retraining process. As we have discussed, many people think deploying an AI Model is the end of the development process; executives and stakeholders are no exception. Make sure to educate these critical members on the importance of continuous retraining early in the project to ensure their full understanding of, and investment in, the post-deployment lifecycle, not just the development stages.

Develop the retraining pipeline. Now that your entire team understands the importance of retraining, it is time to sketch out a strategy prior to the model’s launch. This allows you ample time to procure the right tools – such as a team of Humans-in-the-Loop (HITL) to constantly validate the data – and develop a proper infrastructure to immediately begin the retraining process.
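To show what that infrastructure can look like at a high level, here is a hedged skeleton of a retraining pipeline written as plain Python. Every function name and body below is a placeholder standing in for your own data sources, HITL review tooling, training code, and deployment mechanism; it is a sketch of the shape of the pipeline, not a specific implementation.

```python
# Skeleton of a retraining pipeline; every function body is a stand-in placeholder.
import random

def collect_new_data():
    """Pull recent production inputs, customer feedback, and flagged errors."""
    return [{"input": f"sample-{i}"} for i in range(100)]              # placeholder

def validate_with_hitl(samples):
    """Route samples to human annotators (Humans-in-the-Loop) for labels/review."""
    return [{**s, "label": random.choice([0, 1])} for s in samples]    # placeholder

def retrain_model(labeled_data):
    """Retrain (or fine-tune) the model on the newly validated data."""
    return {"version": "candidate", "trained_on": len(labeled_data)}   # placeholder

def evaluate(model):
    """Score the candidate model on a held-out benchmark; return one metric."""
    return random.uniform(0.80, 0.95)                                  # placeholder

def deploy(model):
    """Promote the candidate model to production."""
    print(f"Deploying {model['version']} trained on {model['trained_on']} samples")

def run_retraining_cycle(current_score: float) -> float:
    """One end-to-end pass: gather data, validate, retrain, evaluate, maybe deploy."""
    labeled = validate_with_hitl(collect_new_data())
    candidate = retrain_model(labeled)
    candidate_score = evaluate(candidate)
    # Only replace the production model if the candidate actually improves on it.
    if candidate_score > current_score:
        deploy(candidate)
        return candidate_score
    return current_score

print(run_retraining_cycle(current_score=0.85))
```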

Once the model is ready, you can simultaneously launch the AI Model and begin the retraining pipeline. By preplanning your retraining strategy, your AI project will be set up for continued success.

How Figure Eight Federal Mitigates Model Drift

Figure Eight Federal is here to help you develop a retraining pipeline. With over 15 years of experience and a crowd of over a million annotators ready to train and monitor your data, we set your AI initiatives up for success from the very beginning. We have encountered the common failure modes and know how to mitigate them before they happen.

If you are ready to take your AI initiatives from pilot to production and beyond, schedule a demo today.

Get Started

Fully customizable AI solutions will help your organization work faster and with greater accuracy.