Chris Moody

Data Scientist
Stitch Fix

Practical, Active, Interpretable & Deep Learning

I'll review applied deep learning techniques we use at Stitch Fix to understand our clients' personal style. Interpretable deep learning models are not only useful to scientists but also lead to better client experiences -- no one wants to interact with a black-box virtual assistant. We do this in several ways. We've extended factorization machines with variational techniques, which allow us to learn quickly by finding the most polarizing examples. And by enforcing sparsity in our models, we build RNNs and CNNs that reveal how they function. The result is a dynamic machine that learns quickly and challenges our clients' style.
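To make the "variational factorization machine plus polarizing examples" idea concrete, here is a minimal sketch (not Stitch Fix's actual code): client and item embeddings are treated as diagonal Gaussians, and the next item to ask a client about is the one whose predicted score has the highest variance. All names and the uncertainty-based selection rule are illustrative assumptions.

    # Hypothetical sketch: Gaussian ("variational") embeddings in a
    # factorization-machine-style model, with an active-learning step that
    # queries the most uncertain (most "polarizing") item.
    import numpy as np

    rng = np.random.default_rng(0)
    n_items, dim = 100, 8

    # Each embedding is a diagonal Gaussian: a mean vector and a variance vector.
    item_mu = rng.normal(size=(n_items, dim))
    item_var = np.full((n_items, dim), 0.5)
    client_mu = rng.normal(size=dim)
    client_var = np.full(dim, 0.5)

    def predictive_moments(c_mu, c_var, i_mu, i_var):
        """Mean and variance of the dot product of independent diagonal Gaussians."""
        mean = i_mu @ c_mu
        var = ((i_var * c_var).sum(axis=-1)
               + (i_mu ** 2 * c_var).sum(axis=-1)
               + (c_mu ** 2 * i_var).sum(axis=-1))
        return mean, var

    scores, uncertainty = predictive_moments(client_mu, client_var, item_mu, item_var)

    # Active-learning step: ask about the item the model is least sure about.
    next_item = int(np.argmax(uncertainty))
    print(f"query item {next_item}: score={scores[next_item]:.2f}, "
          f"var={uncertainty[next_item]:.2f}")

Asking about the highest-variance item is one standard active-learning heuristic; the talk covers how this plays out in production models.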

Chris Moody comes from a physics background (Caltech and UCSC) and is now a scientist at Stitch Fix's Data Labs. He has an avid interest in NLP and has dabbled in deep learning, variational methods, and Gaussian processes. He's contributed to the Chainer deep learning library (http://chainer.org/), added the super-fast Barnes-Hut version of t-SNE to scikit-learn (http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), and written one of the few sparse tensor factorization libraries in Python (https://github.com/stitchfix/ntflib). Lately he's been working on lda2vec (https://lda2vec.readthedocs.org/en/latest/).
