Word2vec is probably the most popular model for producing word embeddings. Word embeddings are a family of language modelling and feature learning techniques commonly used in Natural Language Processing to capture semantic similarities between linguistic items. A natural application of such models is automatic search engine query expansion. In this talk we will give an overview of word2vec and related models, and explore how these techniques can be used for automatic query expansion.
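As a rough illustration of the idea behind embedding-based query expansion, the sketch below ranks vocabulary terms by cosine similarity to a query term and returns the nearest ones as expansion candidates. The tiny hand-written vectors are purely illustrative stand-ins; a real system would use vectors trained by word2vec or a similar model.

```python
import math

# Toy embedding table (made-up 3-dimensional vectors for illustration only;
# real word2vec vectors are learned from a corpus and much higher-dimensional).
embeddings = {
    "laptop":   [0.9, 0.1, 0.0],
    "notebook": [0.85, 0.15, 0.05],
    "computer": [0.8, 0.2, 0.1],
    "banana":   [0.0, 0.9, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def expand_query(term, k=2):
    """Return the k vocabulary terms closest to the query term in embedding space."""
    q = embeddings[term]
    scored = [(other, cosine(q, embeddings[other]))
              for other in embeddings if other != term]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [word for word, _ in scored[:k]]

print(expand_query("laptop"))  # → ['notebook', 'computer']
```

In practice the same lookup is typically done with a trained model (e.g. a nearest-neighbour query over word2vec vectors), and the returned neighbours are appended to the original search query, often with a lower weight than the user's own terms.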
Kostas joined Argos in 2017 as a Lead Machine Learning Engineer. Prior to Argos, he worked at Royal Mail, Mailonline, Pearson and in research, where he was involved in a broad range of projects, from European FP6 research programmes to EdTech, analytics, search, and predictive modelling using machine learning and AI. He is interested in deep learning, distributed computing, optimisation, search, predictive analytics and Natural Language Processing.