As teams apply optimization earlier and more frequently in the modeling process, they develop high-performing models at a faster pace. This virtuous cycle increases the number of models that make it into production, which amplifies the impact of these models on the business. At the Deep Learning Summit in San Francisco, SigOpt will showcase its model optimization software and demonstrate how automated model tuning accelerates the model development process and amplifies the impact of models in production at scale.
We spoke to Nick Payton, B2B Marketing Lead at SigOpt, to learn more.
SigOpt’s mission is to empower experts. We provide a software solution that optimizes any model, whether machine learning, deep learning, simulation, or any other type. Our customers use this solution to automate the experimentation process during the model development phase and maximize model performance in production. We work with customers building all types of models, but have recently seen particularly strong growth in deep learning for enterprise applications and in high-dimensional simulations for finance and trading.
I am responsible for marketing and partnerships at SigOpt. Because our solution is specialized around the task of optimization and considered a best-in-class product for this challenge, we partner with most other AI companies, including Amazon, Google, NVIDIA, Intel, and H2O.ai.
Our solution makes it easy and reliable to use our proprietary set of Bayesian and other global optimization algorithms to tune any model. We have engineered this code to be robust across any model type, within any framework, built on any infrastructure, and at any scale.
From a user’s standpoint, we have abstracted away all of the complexity. Their experience is simply entering a few lines of code into their existing notebook, which, when executed, triggers the optimization loop that our algorithms run to identify the best configuration of parameters. They can track the progress of these experiments in a simple web dashboard, which also captures each parameter configuration so any model can easily be reproduced.
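The loop described above follows a common suggest/evaluate/report pattern. Here is a minimal, self-contained sketch of that pattern with a mock optimizer standing in for the hosted service; the class and function names are hypothetical, not SigOpt's actual client API, and the mock simply samples randomly where a real engine would use past observations.

```python
import random

random.seed(0)

class MockOptimizer:
    """Illustrative stand-in for a hosted optimization service."""

    def __init__(self, bounds):
        self.bounds = bounds    # {param_name: (low, high)}
        self.history = []       # recorded (params, value) pairs

    def suggest(self):
        # A real engine would model past observations; we sample uniformly.
        return {k: random.uniform(lo, hi) for k, (lo, hi) in self.bounds.items()}

    def observe(self, params, value):
        self.history.append((params, value))

    def best(self):
        return max(self.history, key=lambda pv: pv[1])

def train_and_evaluate(params):
    # Placeholder for the user's training run; returns the objective metric.
    return -(params["lr"] - 0.1) ** 2

# The "few lines of code" a user adds: suggest, evaluate, report, repeat.
opt = MockOptimizer({"lr": (0.001, 1.0)})
for _ in range(30):
    params = opt.suggest()
    opt.observe(params, train_and_evaluate(params))

best_params, best_value = opt.best()
```

Because every configuration and result is recorded in `history`, any run can be reproduced later, which mirrors the dashboard behavior described above.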
Behind the API, our algorithms are a bit more complex. Consistent with our mission, our optimization algorithms are capable of helping any expert solve any variant of this parameter optimization problem. SigOpt’s Optimization Engine is powered by a variety of proprietary Bayesian and global optimization algorithms that come from the “optimal learning” field of research. The goal of this approach is to search a model’s hyperparameter space to maximize a pre-defined objective metric or other model output. This Bayesian approach efficiently and intelligently trades off “exploration” and “exploitation” to uncover a globally optimal configuration of any model between 10x and 100x faster than traditional methods like random and grid search. We also pair our core optimization algorithms with advanced algorithms like multimetric and multitask optimization that supercharge this process to solve even the most complex model optimization challenges.
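The exploration/exploitation trade-off can be illustrated with a deliberately crude sketch of an upper-confidence-bound (UCB) search. The surrogate model here is naive by design: it predicts with the nearest observed point and treats distance to it as uncertainty. Real Bayesian optimization engines fit a probabilistic model such as a Gaussian process; this toy objective and the kappa parameter are assumptions for illustration only.

```python
def objective(x):
    # Toy 1D objective with its global maximum at x = 2.5.
    return -(x - 2.5) ** 2

# Seed with boundary evaluations: pure exploration, no prior knowledge.
observed = [(0.0, objective(0.0)), (5.0, objective(5.0))]

def ucb(x, kappa=1.0):
    # Crude surrogate: nearest observed value is the "mean", distance to
    # it is the "uncertainty". High values of either make x attractive.
    nearest_x, nearest_y = min(observed, key=lambda p: abs(p[0] - x))
    return nearest_y + kappa * abs(x - nearest_x)

candidates = [i * 5 / 200 for i in range(201)]   # grid over [0, 5]
for _ in range(20):
    x_next = max(candidates, key=ucb)   # trade off exploration vs. exploitation
    observed.append((x_next, objective(x_next)))

best_x, best_y = max(observed, key=lambda p: p[1])
```

Early picks land in unexplored regions (large distance bonus); as coverage improves, the search concentrates near the best observed values. Grid and random search, by contrast, spend their budget uniformly regardless of what has been learned.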
Consistent with our mission, we designed a solution that can be used by any practitioner. So before deciding it isn’t for you, I’d recommend you give it a try. As a black-box optimization solution, our algorithms can optimize any model. And our API is designed to be easy to use in any notebook.
That said, we have seen the most adoption among researchers and engineers who are either rapidly experimenting, using a variety of models or developing relatively complex (>5 dimensions) models. As the volume, variety or complexity of models grows, so does our value as a solution.
We have also found teams see a lot of value when they integrate our API directly in their modeling platform. Our solution helps them provide best-in-class optimization to their users without requiring the end-to-end model management lock-in that most other solutions require.
Given that I am on the business side, I have not had the pleasure of implementing much AI software. But our CTO, Patrick Hayes, explains in this webinar some of the methods to tackle lengthy training cycles.
There are attributes of nearly every market that make me excited to witness the upcoming transformations. We are seeing a great number of impressive innovations within our portfolio of users today that span a ridiculously broad array of fields.
But if I had to choose, I’d say the potential impact of AI on healthcare and transportation. I’ve worked most of my career in regulated industries with government policy implications for go-to-market strategies. There tend to be significant barriers to these markets, but they also contain some of the largest opportunities for transformation. And because they impact such large swaths of our economy in such systemic ways, they have the chance to impact every single person’s life.
As someone on the business rather than technical side of AI, my advice is of only limited value to your readers, especially as it relates to the skills required. There are two considerations, however, that come to mind.
First, consider whether you enjoy variety. AI is unique in its near universal applicability. There are AI use cases that promise to transform any industry. This variety of opportunities and endless potential for applications is energizing for some, but can be demotivating for others.
Second, consider whether you are comfortable with an uncertain, evolving industry. The best tools for deep learning from a year ago are no longer used by most practitioners today. The rapid pace of evolution for techniques, tools and solutions in this space creates the most opportunity for those who thrive in an uncertain environment. Many of the underlying mathematical principles remain relatively consistent, but their application, and the tools that enable it, are changing at a blistering pace.
If you are weighing a career, of course, we are hiring!
I look forward to getting additional user feedback on some of our features like multitask optimization that are designed with the deep learning user in mind. Multitask optimization is a relatively recent upgrade to our set of optimization algorithms that is particularly useful for “expensive” deep learning functions. This method uses cheap, partial-cost evaluations of a model to inexpensively learn about its full-cost behavior. Through this process, teams who use this solution are able to much more efficiently optimize models that take days to train. The key to this method, however, is that it shortens the time required to optimize a model without sacrificing performance. In trials our customers have run, they find better performing models in a fraction of the time required by other optimization techniques.
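The partial-cost idea can be sketched with a simple two-phase schedule: screen many configurations with short, cheap training runs, then spend the full training budget only on the most promising ones. This is a generic multi-fidelity sketch, not SigOpt's proprietary algorithm; the objective function, epoch counts, and budget split are all assumptions for illustration.

```python
import random

random.seed(0)

def evaluate(lr, epochs):
    # Stand-in for a training run: accuracy peaks near lr = 0.1 and
    # improves with more epochs; small noise models run-to-run variation.
    base = 1.0 - (lr - 0.1) ** 2
    return base * (1 - 0.5 / epochs) + random.gauss(0, 0.01)

FULL_EPOCHS = 50    # expensive, "full-cost" evaluation
CHEAP_EPOCHS = 5    # inexpensive, "partial-cost" evaluation

configs = [random.uniform(0.0, 1.0) for _ in range(32)]

# Phase 1: screen every configuration with a cheap partial evaluation.
screened = sorted(configs, key=lambda lr: evaluate(lr, CHEAP_EPOCHS),
                  reverse=True)

# Phase 2: spend the full training budget only on the top candidates.
finalists = screened[:4]
best_lr = max(finalists, key=lambda lr: evaluate(lr, FULL_EPOCHS))
```

In this toy setting the schedule costs 32 × 5 + 4 × 50 = 360 epoch-equivalents instead of 32 × 50 = 1,600 for full evaluations everywhere, which is the sense in which partial-cost sampling shortens optimization without giving up the final, full-cost comparison.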
Join the discussion with SigOpt and many other industry leaders on how we should be bridging the gap between the latest technological research advancements and real-world applications in business and society. Sign up here now to join the annual Deep Learning Summit in San Francisco.