Adam Wenchel

Beyond Accuracy: Monitoring Models for Data Drift to Ensure Performance Over Time

To ensure that machine learning models continue to meet business goals, companies must monitor model performance constantly. Performance degradation over time can be detected by tracking changes in model accuracy and by measuring data drift. In this talk, we'll cover the basics of model accuracy and data drift, along with the most common metrics used for each analysis. We'll also cover how Arthur enables continuous monitoring of these critical signals for organizations with ML models in production.

Key Takeaways:

*What data drift is, how we can calculate it, and what types of data drift we need to look out for (e.g. multivariate vs. univariate drift, and different metrics for different model types); a brief illustration follows this list

*How to combine drift monitoring with other analyses, such as bias or accuracy metrics, to get a full view of model performance

*How to incorporate model monitoring tools into your AI stack to automatically detect performance degradation and achieve AI maturity
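
The talk abstract does not specify which drift metrics are covered. As an illustration only, here is a minimal sketch of one widely used univariate drift measure, the Population Stability Index (PSI), which compares a production feature's distribution against its training-time baseline. The function name, bin count, and epsilon are illustrative choices for this sketch, not part of the talk or of Arthur's platform.

```python
import numpy as np

def population_stability_index(reference, production, bins=10, eps=1e-6):
    """Return the PSI between a reference sample and a production sample of one feature."""
    reference = np.asarray(reference, dtype=float)
    production = np.asarray(production, dtype=float)

    # Bin edges come from the reference (training-time) distribution, so
    # production data is always scored against the original baseline.
    edges = np.histogram_bin_edges(reference, bins=bins)

    # Clip production values so anything outside the reference range falls
    # into the outermost bins instead of being dropped.
    production = np.clip(production, edges[0], edges[-1])

    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)

    # Convert counts to proportions; eps guards against log(0) for empty bins.
    ref_pct = ref_counts / ref_counts.sum() + eps
    prod_pct = prod_counts / prod_counts.sum() + eps

    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# Simulated example: a shifted production distribution yields a higher PSI.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
drifted_feature = rng.normal(0.5, 1.2, 10_000)

print(population_stability_index(train_feature, rng.normal(0.0, 1.0, 10_000)))  # near 0
print(population_stability_index(train_feature, drifted_feature))               # clearly elevated
```

A common rule of thumb treats PSI above roughly 0.2 as meaningful drift, though thresholds are application-specific; multivariate drift and categorical or unstructured features call for different metrics, which is the distinction the first takeaway draws.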

Adam co-founded Arthur and serves as its CEO. He has over 20 years of experience in AI, machine learning, and software development. Prior to founding Arthur, he founded and served as CEO of Anax Security, a DC-based startup focused on machine learning for large-scale defensive cybersecurity. After Anax’s acquisition by Capital One, Adam served as Capital One’s VP of AI & Data Innovation, leading transformative projects across the business. There, he helped bring AI observability, fairness, and explainability to high-value areas such as credit, user experience, cybersecurity, marketing, fraud and financial crimes monitoring, and operations automation. Adam holds a BS in Computer Science from the University of Maryland.
