Find your next talent
19 Apr 2021 Full Time
It’s time to supercharge our efforts with new pipelines and tools for visualization, training, debugging, and testing! We want to build common tools that enable the wider research team to create such prototypes rapidly and perform detailed experimentation at scale.
You’ll come across unique challenges that combine state-of-the-art computer systems with cutting-edge AI algorithms, so some ML experience is required.
Responsibilities:
- Collaborate with researchers to implement and evaluate ML algorithms
- Act as the team's go-to person for building and scaling infrastructure and tools for research
- Join collaborative research projects that have built momentum and are looking to scale
- Report and present software developments, including status updates and results
- Architect and implement software libraries for research prototypes across the range of DeepMind research projects
- Identify and tackle problems within a research context
- Research products instead of prototypes, helping to drive the focus on scalability and usability in the wider organization
- Provide software design and programming support to research projects
- Challenge researchers and collaborators to maintain robust coding, design, and processes across research teams
San Francisco, US
1 Apr 2021 Full Time
Join the API team, a core group who work to bring OpenAI's technology to the world in partnership with other organizations.
We are looking for a self-starter engineer who loves building and running production systems. In this role, you will build the systems that power a breadth of production ML use cases. You’ll also work closely with and directly accelerate machine learning researchers, but don't need to be a machine learning expert yourself. We value people who can quickly obtain a deep technical understanding of new domains, and enjoy being self-directed and identifying the most important problems to solve.
We look for a track record of the following:
- Experience designing, implementing, and running production services.
- Comfort managing and monitoring infrastructure deployments.
- Willingness to debug problems across the stack, such as networking issues, performance problems, or memory leaks.
- Experience with high-performance computing or infrastructure orchestration tools is a bonus.
- While we don't require machine learning expertise, experience with high-performance computing tools, machine learning optimizations, and model scaling is a bonus. This could include experience with MPI, NCCL, CUDA kernels, model and parameter sharding, InfiniBand, and GPU hardware.

Ultimately, what matters most is a willingness to explore, understand systems, and learn how it all ties together. We are constantly discovering new capabilities in our models. Turning those discoveries into safe and performant production systems requires a generalist mindset and curiosity.
You might be a good fit if you:
- Are self-directed and enjoy figuring out the most important problem to work on.
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
- Know your way around a Unix shell.
- Build tools to accelerate your own workflows, but only when off-the-shelf solutions would not do.
- Have been a startup founder or an early-stage engineer.
- Enjoy a fast-paced work environment with tight feedback loops.