Nick Ryder

Zero Shot Capabilities at Scale

One lofty goal of artificial intelligence research is developing systems that perform well on a variety of tasks without demonstrations, a capability called "zero-shot learning". By leveraging natural language signals and a deep understanding of how compute, data, and models interact, we've built systems with such capabilities across a variety of modalities. I will present three models with unprecedented zero-shot capabilities: GPT-3, CLIP, and Codex.

Nick Ryder is a research scientist at OpenAI focusing on both the engineering and science of scaling large language models. He has worked on projects such as GPT-3 and Codex. His primary research interests include scaling laws of generative modeling and distributed computing for large transformers.
