
REGISTRATION & LIGHT BREAKFAST


CURRENT LANDSCAPE


Marios-Eleftherios Fokaefs - Assistant Professor - École Polytechnique de Montréal
Intelligent Decisions for DevOps Processes: AI in the Service of Software Engineering
AI and Software Engineering have gone hand in hand throughout the history of Computer Science. AI algorithms have been implemented in exceptionally useful tools and products using well-established software processes. On the other hand, AI has frequently contributed smart analytics to support software development. We are now entering a new era for both disciplines, in which speed, volume, and accuracy are crucial challenges and the synergy between the two is as pertinent as ever. In this talk, I will explore this collaboration mainly from the perspective of Software Engineering and show examples of AI contributing smart methods and analytics to support modern software processes.
Marios Fokaefs is an Assistant Professor at École Polytechnique de Montréal, Canada. He holds a BSc in Applied Informatics from the University of Macedonia, Greece, and an MSc and a PhD in Software Engineering from the Department of Computing Science, University of Alberta, Canada. His research interests include DevOps, software evolution, change management, service-oriented architectures, cloud systems, the Internet of Things, and software engineering economics. Dr. Fokaefs is an IEEE Member and an IBM CAS Faculty member. His research has been supported and funded by IBM, AITF from Alberta, ORF, OCE and SOSCIP from Ontario, as well as the Natural Sciences and Engineering Research Council of Canada.

GAINING INSIGHTS FROM DATA
Kohsuke Kawaguchi - CloudBees
Wasted Gold Mine & What Data Can Do To DevOps
As CTO of CloudBees and the creator of Jenkins, I get to see a lot of real-world software development. Our automation of software development is sufficiently broad that it produces lots of data, but by and large most of that data is simply thrown away. Yet at the same time, management feels like it is flying blind because it has so little insight. In this talk, I will discuss how we collectively seem to miss the golden opportunity to improve the software development process itself based on data. In other words, learning is lacking at the organizational level, let alone "machine" learning!
Kohsuke Kawaguchi is the creator of Jenkins. He is a well-respected developer and popular speaker at industry and Jenkins community events. Kawaguchi’s sensibilities in creating Jenkins and his deep understanding of how to translate its capabilities into usable software have also had a major impact on CloudBees’ strategy as a company. Before joining CloudBees, Kawaguchi was with Sun Microsystems and Oracle, where he worked on a variety of projects and initiated the open source work that led to Jenkins.


INTRODUCTION TO MACHINE LEARNING
Boshika Tara - Capital One
Demystifying Neural Networks
Neural Networks are at the cutting edge of Machine Learning and Artificial Intelligence. The "black box" label often used to explain NNs can intimidate people with little experience in machine learning, statistics, or big data. There is a powerful synergy between DevOps and neural networks: NNs that use automatic feature extraction give exponential leverage in processing production monitoring data, identifying patterns or anomalies by analyzing application log events, and verifying production deployments.
In this talk I will walk you through how we can "demystify neural nets". Using open-source libraries like TensorFlow and Keras, I will demonstrate how one can build a neural network in no time. The specific focus of this presentation will be Long Short-Term Memory (LSTM) neural networks, which are extremely valuable for building models of time-series data such as system metrics and application logs. This presentation is geared towards folks interested in the field of ML, at both beginner and intermediate levels.
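As a taste of what the session covers, here is a minimal sketch of a Keras LSTM trained on a synthetic metric series; the sine-wave data, window size, and layer sizes are illustrative assumptions, not material from the talk.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for a system metric (e.g., CPU utilisation sampled over time).
series = np.sin(np.linspace(0, 100, 2000)) + np.random.normal(0, 0.1, 2000)

window = 30  # number of past observations used to predict the next one
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # LSTM expects (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Predict the next value; a large gap between prediction and observation
# could be flagged as an anomaly in monitoring or log-volume data.
next_value = model.predict(X[-1:], verbose=0)
```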
Boshika has four-plus years of experience working as a full-stack engineer in the San Francisco Bay Area and Los Angeles. She currently works for Commercial Tech at Capital One, where she is building microservices in Golang for commercial document migration. She is also part of the Commercial Tech agenda to apply machine learning to business cases related to data classification and data extraction. Boshika became fascinated by the field of data science and machine learning while doing bench research at Stanford University, where she was using ML algorithms to analyze large-scale genome sequencing data. She is also currently pursuing her Master's in Data Science at Johns Hopkins University.


COFFEE
Technical Presentations
EXPLORING THE RELATIONSHIP BETWEEN ML & DEVOPS


Omari S. Felix - DevOps Engineer - Capital One
Forecasting Risk using CI, CD and Machine Learning
The software development process does not have a single formula for success. Many factors influence this process, such as code management, quality analysis, and testing strategies, to name a few. Pinpointing the features that correlate with the success of a deployment is subjective to a developer's personal style of software engineering and testing. I will be presenting a project that implemented a machine learning component to analyze build features and gauge them against a pool of build records from a project and organization perspective.
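A minimal sketch of the general idea, assuming hypothetical build features and labels (the actual features and model used in the project are not specified in the abstract):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented historical build records; real features would come from CI,
# code management, and quality-analysis tooling.
builds = pd.DataFrame({
    "files_changed":   [3, 40, 7, 120, 5, 60],
    "test_coverage":   [0.82, 0.55, 0.90, 0.40, 0.88, 0.61],
    "static_warnings": [1, 14, 0, 30, 2, 9],
    "deploy_failed":   [0, 1, 0, 1, 0, 1],  # outcome of the past deployment
})

X = builds.drop(columns="deploy_failed")
y = builds["deploy_failed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
# Estimated probability of failure can be surfaced as a risk score in the pipeline.
risk_scores = model.predict_proba(X_test)[:, 1]
```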
Omari S. Felix is a DevOps Engineer at Capital One, where he implements DevOps tools and guides teams on techniques designed to improve their software development and delivery. He graduated from Virginia State University and attended N.C. A&T State University. He has experience in multiple technology spaces, including mobile development, automation, and machine learning, to name a few.


Pooyan Jamshidi - Assistant Professor - University of South Carolina
Machine Learning Meets DevOps
Today's mandate for faster business innovation, faster response to changes in the market, and faster development of new products demands a new paradigm for software development. DevOps is a set of practices that aims to decrease the time between changing a system in Development, transferring the change to the Operations environment, and exploiting the Operations data back in Development. DevOps practices typically rely on large amounts of data coming from Operations. The amount of data depends on the architectural style, the underlying development technologies, and the deployment infrastructure. However, in order to make effective decisions in Development, e.g., architecture changes in continuous delivery pipelines, the big data coming from Operations has to be processed efficiently. In a situation where data streams are increasingly large-scale, dynamic, and heterogeneous, mathematical and algorithmic creativity is required to bring statistical methodology to bear. Statistical machine learning can fill the gap between Operations and Development with more efficient analytical techniques. Such techniques can provide deeper knowledge and can uncover the underlying patterns in the operational data, e.g., to detect anomalies in operation or to detect performance anti-patterns. This knowledge can be very practical if detected in time to refactor the development artifacts, including code, architecture, and deployment.
In this talk, I will start by motivating the necessity of data-driven analytics for generating feedback to Dev from Ops, based on my previous experience with large-scale big data systems in industry. I will present our recent work on configuration tuning of big data software, where we primarily applied Bayesian Optimization and Gaussian Processes to effectively find optimum configurations. I will also talk about transfer learning to exploit complementary and cheap information (e.g., past measurements in a continuous delivery pipeline regarding early versions of the system) to enable learning accurate models efficiently and at considerably less cost. Results show that despite the high cost of measurement on the real system, learning performance models can become surprisingly cheap as long as certain properties are reused across environments. In the second half of the talk, I will present empirical evidence that lays a foundation for a theory explaining why and when transfer learning works, by showing the similarities of performance behavior across environments. I will present observations of the impacts of environmental changes (such as changes to hardware, workload, and software versions, which are predominant in DevOps) for a selected set of configurable systems from different domains to identify the key elements that can be exploited for transfer learning. These observations demonstrate a promising path for building efficient, reliable, and dependable software systems.
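To make the configuration-tuning idea concrete, here is a minimal sketch using Gaussian-process-based Bayesian optimization from scikit-optimize; the configuration options and the synthetic latency function are invented for illustration and are not the systems or models from the talk.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical configuration space for a big data system.
space = [
    Integer(1, 64, name="executor_cores"),
    Real(0.1, 0.9, name="memory_fraction"),
]

def measure_latency(params):
    # In practice: deploy the configuration, run a benchmark, return the metric.
    cores, mem_frac = params
    return 0.01 * (cores - 16) ** 2 + 10 * (mem_frac - 0.6) ** 2

# The Gaussian process surrogate decides which configuration to try next,
# keeping the number of expensive measurements small.
result = gp_minimize(measure_latency, space, n_calls=25, random_state=0)
print("best configuration:", result.x, "estimated latency:", result.fun)
```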
Pooyan Jamshidi is an Assistant Professor at the University of South Carolina. Prior to his current position, he was a research associate at Carnegie Mellon University (2016-2018) and Imperial College London (2014-2016), where he primarily worked on transfer learning for performance analysis of highly configurable systems, including robotics and big data systems. He holds a Ph.D. from Dublin City University (2010-2014). Pooyan's general research interests are at the intersection of software engineering, systems, and machine learning, and his focus is primarily in the areas of distributed machine learning. Pooyan spent 7 years in the software industry before his PhD.


SCALING MACHINE LEARNING FOR DEVOPS


Diego Oppenheimer - Founder & CEO - Algorithmia
Deploying Scalable ML Models in the Enterprise
After massive investments in collecting and cleaning data and training machine learning models, enterprises discover the big challenges of deploying models to production and managing their growing portfolio of ML models. This talk will cover the strategic and technical hurdles each company must overcome and the best practices we've developed while deploying over 5,000 ML models for 75,000 engineers.
Diego Oppenheimer, founder and CEO of Algorithmia, is an entrepreneur and product developer with an extensive background in all things data. Prior to founding Algorithmia, he designed, managed, and shipped some of Microsoft's most used data analysis products, including Excel, Power Pivot, SQL Server, and Power BI. Diego holds a Bachelor's degree in Information Systems and a Master's degree in Business Intelligence and Data Analytics from Carnegie Mellon University.



LUNCH
IMPLEMENTATION IN PRACTICE
Practical Examples


Nicolas Brousse - Director, Operations Engineering - Adobe
Improving Adobe Experience Cloud Services Dependability with Machine Learning
Adobe Experience Cloud is a collection of best-in-class solutions for marketing, analytics, advertising, and commerce, all integrated on a cloud platform for a single experience system of record. The Adobe Experience Cloud SRE team works hand-in-hand with the Product and Engineering teams to build dependable services. In this presentation you will learn how the team leverages Adobe's artificial intelligence and machine learning engine to, first, build predictive auto-scaling and self-healing services and, second, provide insight and automate risk classification of production changes to reduce the impact on service availability.
This talk will discuss work and findings by Adobe's SREs using LSTM, ETS, linear regression, boosted trees, multi-layer perceptrons, and support vector machine models.
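As a rough illustration of forecast-driven auto-scaling (not Adobe's actual pipeline), the sketch below fits an ETS model to a synthetic hourly request-rate series with statsmodels and turns the forecast into an instance count; the seasonality, data, and scaling rule are all assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic hourly request rate for one week with a daily cycle.
hours = np.arange(24 * 7)
rate = pd.Series(
    1000 + 300 * np.sin(hours * 2 * np.pi / 24) + np.random.normal(0, 20, len(hours))
)

# Exponential smoothing (ETS) with additive trend and daily seasonality.
fit = ExponentialSmoothing(rate, trend="add", seasonal="add", seasonal_periods=24).fit()
forecast = fit.forecast(6)  # next six hours

# Naive capacity rule purely for illustration: one instance per 200 requests/hour.
desired_instances = int(np.ceil(forecast.max() / 200))
print(desired_instances)
```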
Nicolas Brousse, a Cloud Technology Leader, became Director of Operations Engineering at Adobe (NASDAQ: ADBE) after the acquisition of TubeMogul (NASDAQ: TUBE). As TubeMogul's sixth employee and first operations hire, Nicolas has built and grown Adobe/TubeMogul's infrastructure over the past ten years from several machines to over eight thousand servers that handle ±350 billion requests per day for clients like Allstate, Chrysler, Heineken and Hotels.com. Adept at adapting quickly to ongoing business needs and constraints, Nicolas leads a global team of site reliability engineers, cloud engineers, software engineers, security engineers, and database architects that build, manage, and monitor Adobe Advertising Cloud's infrastructure 24/7 and adhere to "DevOps" methodology. https://nicolas.brousse.info/




Binwei Yang - Principal Engineer - Walmart Labs
Large Scale Inference Generation Using Jenkins/Blue Ocean and GPU Cluster
For our ecommerce use cases, we have several catalogs at the scale of O(10^6) items for which we need to generate embeddings on a daily basis. The catalogs are constantly expanding and being updated. These embeddings could be created from TensorFlow models trained with catalog images and/or metadata for the items. For different use cases, the TensorFlow models and/or the catalogs are different, and we need a common platform for large-scale inference generation. We are a very small team of 2 data scientists and 1 engineer. We have access to a GPU cluster with different types of Nvidia GPUs, and these GPU servers are used primarily for training purposes.
Among the challenges we face are: selecting idle GPU servers with matching Docker containers for them; distributing the workload of inference generation to idle GPU servers; automating the integration tests for the model server and API server (model version matching the embeddings, as well as supported by the Docker containers in production); and working with Walmart infrastructure for Docker containers (security constraints, no nvidia-docker on GPU servers, etc.).
Our solution consists of a Jenkins cluster with all the GPU servers added as worker nodes. We use a parallel Blue Ocean pipeline to distribute the workload. For the various Nvidia GPUs, we maintain Docker containers that support the corresponding CUDA and Nvidia driver versions. We take this approach because it is easy to add new GPU servers and to upgrade the Docker images for GPU servers. We use object storage for both models and embeddings, which are stored in the same subfolders, in order to ensure that the generated embeddings and the TensorFlow model are ready for deployment together.
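The inference-generation step itself can be pictured roughly as below; the model path, catalog batch file, and directory layout are placeholders standing in for the versioned object-storage structure described above, not Walmart's actual setup.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("models/v42")      # placeholder model version path
item_features = np.load("catalog/batch_0001.npy")     # placeholder pre-processed item batch

# Batch inference on a GPU worker node.
embeddings = model.predict(item_features, batch_size=256)

# Store embeddings under the same versioned prefix as the model so that the
# model and its embeddings are always deployed together.
np.save("models/v42/embeddings/batch_0001.npy", embeddings)
```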
In my talk, I will also touch on the DevOps aspects: do's and don'ts of building Docker images (keeping the images lean, learnings from building a TensorFlow model server subject to enterprise constraints, etc.) and deployment to Kubernetes (using init containers, rolling updates, etc.).
Future directions we are exploring:
TensorRT Inference Server; Kubernetes on Nvidia GPUs; Google TPU for inference.
Binwei Yang is an engineer, a hacker, and a hustler. He is passionate about becoming a lifelong learner and equipping youth in underserved communities with high-tech skills. Binwei graduated from the University of Southern California with a Master's in Computer Engineering and a Ph.D. in Physics, and has more than 20 years of professional experience creating massively scalable customer-facing applications. He currently works on computer vision as a principal engineer for Walmart Labs.

Bishnu Nayak - FixStream
AIOps Solution Enables Self-Healing Hybrid IT
Enterprises running their digital business services in hybrid, heterogeneous IT environments require visibility and proactive insights into problem areas. Operational data volumes and the number of data sources have increased tremendously in modern data centers, further worsening the problem of visibility and correlation. Artificial Intelligence for IT Operations (AIOps) leverages big data, machine learning, and AI techniques to deliver proactive and predictive insights, recommendations, and remediation. By combining the power of auto-discovery and correlation with ML and AI, an AIOps solution suppresses events to reduce noise for faster root cause analysis, predicts business incidents, detects multivariate anomaly scenarios, and gives enterprises top-down visibility from business KPIs to application to infrastructure. This talk will discuss how an AIOps solution modernizes and unifies IT operations using auto-discovery, correlation, machine learning, and AI techniques.
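A toy sketch of the multivariate-anomaly piece, using an isolation forest over invented operational metrics; this is a generic illustration, not FixStream's algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: cpu_util, latency_ms, error_rate. Mostly normal behaviour plus a few outliers.
normal = rng.normal([0.4, 120.0, 0.01], [0.1, 20.0, 0.005], size=(500, 3))
outliers = rng.normal([0.95, 800.0, 0.20], [0.02, 50.0, 0.05], size=(5, 3))
metrics = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(metrics)
flags = detector.predict(metrics)  # -1 marks points that look anomalous across all metrics
```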
Bishnu is the CTO of FixStream, an emerging AIOps startup in Silicon Valley, where he guides FixStream's overall technology, product strategy and roadmap, R&D, and engagement with technology partners, enterprise customers, and service providers. Prior to FixStream, Bishnu spent thirteen years at AT&T in various executive architect and strategist positions, where he led technology programs in the domains of cloud, enterprise architecture, big data, mobile, DevOps, etc. Bishnu has a flair for all things cloud technology, big data, ML, and APIs. He is very passionate about cutting-edge technologies and is very active in various industry-recognized communities, blogs, and forums. Bishnu is an active member of the Forbes Technology Council (ForbesTechCouncil.com), where he has been actively participating and contributing since 2016.



COFFEE
OPTIMIZING DEVOPS


Chris Corriere - Senior DevOps Advocate - SJ Technologies
The Worst Game of Telephone Ever
Telephone is the party game where a phrase is whispered from the first player to the last until it is spoken out loud to reveal how much the message has changed in transit. Data scientists often create models in R, Excel, or even with pencil on paper. What happens when these models and algorithms are handed off to a development team to implement? How often is the final product actually doing what was originally intended? Is this the worst game of telephone ever? Can DevOps practices help avoid these problems? What can data science and machine learning do for DevOps? Please join Chris Corriere, a DevOps advocate with SJ Technologies, as he shares stories about failure, success, trust, communication, Nash games, automation, and machine learning.
Topics include: MapReduce for Prioritizing a Backlog; The Prisoners' Dilemma & The Stag Hunt; The Difference between Complicated & Complex Domains; Moving from Maps to Models; DevOps Dojo Practices for Machine Learning.
Chris Corriere has been working with data, phones, networks, and writing software for over twenty years. His background in mathematics and engineering has allowed him to adapt to new and industry-specific technologies and provided many unique consulting opportunities. As a DevOps professional, Chris is committed to culture, automation, learning, sharing, and having a good time while getting work done. Chris is currently a Senior DevOps Advocate with SJ Technologies focused on dojo practices, mapping, and complexity science.



PANEL: From Theory to Practice: Making Machine Learning in DevOps a Reality
Anurag Bihani - Schlumberger
Anurag is currently working as a Cloud Software Engineer on the Cloud Infrastructure Team at the Geophysics Technology Centre, Schlumberger. His work entails developing and maintaining scalable backend services and CI/CD pipelines. Before this, Anurag earned his Master's in Computer Science from the University of Florida and his Bachelor's in Computer Engineering from the University of Pune. His other research interests include information security, machine learning, and IoT. When he's not developing code, Anurag likes to travel and explore; he's an avid photographer and tries to combine his hobbies as a Google Maps Local Guide.

Adam McMurchie - NatWest
Adam McMurchie is the Lead Cloud Data Engineer at NatWest. He was previously a leader in DevOps and an AI expert working on the bank's SAO platform, at the forefront of technology development in finance. With broad exposure to a range of technologies, Adam drives an ethos of simplification and cloud agnosticism and specialises in spotting the next trends in fintech. Additionally, Adam has a background in science, with a physics degree specialising in neurocomputing, and is a polyglot linguist and seasoned translator. Adam has pooled these skills to deliver full-stack novel solutions, from TensorFlow-driven mobile apps to personalized banking chatbots. Adam also develops apps designed around the ethos of social utility, including flood/storm reporting, EV vehicle bay monitoring, and preservation of endangered languages.


David Pierce - USAA
David Pierce is a Senior Software Engineer passionate about bridging analytics and engineering by developing comprehensive data strategies on heterogeneous infrastructure. By building modular data pipelines and treating infrastructure as a software problem, data SMEs can leverage automated machine learning and self-healing data to continually evolve their applications and solutions through self-service tools.


Giulia Toti - University of Houston
Giulia Toti is an assistant instructional faculty member at the University of Houston. She offers several core classes in the CS curriculum and a selection of courses designed in collaboration with the Hewlett Packard Enterprise Data Science Institute to increase the presence of machine learning in the curriculum.
Giulia obtained a PhD in computer science from the University of Houston in 2016. Upon graduation, she joined the Addictions Department of King's College London as a postdoctoral researcher. There, she worked on mining large electronic health record databases and on the development of risk prediction models. In Fall 2017 Giulia returned to the University of Houston as instructional faculty. Besides teaching, she works to promote research among undergraduates and is one of the mentors for the Summer Research Experience for Undergraduates (REU) program. She is also a member of the National Center for Women in Information Technology (NCWIT) and is involved in increasing women's representation in the department.


Diego Oppenheimer - Algorithmia
Diego Oppenheimer, founder and CEO of Algorithmia, is an entrepreneur and product developer with an extensive background in all things data. Prior to founding Algorithmia, he designed, managed, and shipped some of Microsoft's most used data analysis products, including Excel, Power Pivot, SQL Server, and Power BI. Diego holds a Bachelor's degree in Information Systems and a Master's degree in Business Intelligence and Data Analytics from Carnegie Mellon University.



CONVERSATION & DRINKS

DOORS OPEN


OPERATIONALIZING DATA


Faiyadh Shahid - Research Engineer - EmbodyVR
AI DevOps for large-scale 3D Audio experiences
Machine learning is a science that involves learning from data and deriving inferences from it. In industry, this process can become increasingly chaotic and complex over time as the number of models increases. Often there are multiple data scientists working in isolated environments, trying multiple machine learning approaches and experiments, who then go on to produce fragmented results. Furthermore, data products based on machine learning may involve several machine learning components. Under this scenario, it is an incredibly daunting task to track and evaluate experiments in order to select the best workflows to put into production.
We will describe an end-to-end framework that tracks code, data, and model simultaneously, specifically applied to the field of immersive 3D audio. This framework is generalizable and can be used to automate evaluation and optimization of the best-performing machine learning workflows for large-scale deployment.
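The talk's framework itself is not public, but the core idea of tracking code, data, and model together can be sketched with a generic experiment tracker such as MLflow; the choice of MLflow and every identifier below are assumptions made for illustration, not the speakers' tooling.

```python
import mlflow

with mlflow.start_run(run_name="audio-lstm-experiment"):       # placeholder experiment name
    mlflow.log_param("git_commit", "abc1234")                   # code version (placeholder)
    mlflow.log_param("dataset_version", "audio-corpus-v7")      # data version (placeholder)
    mlflow.log_param("learning_rate", 1e-3)
    # ... train the model here ...
    mlflow.log_metric("validation_loss", 0.042)                 # placeholder result
    mlflow.log_artifact("model.h5")                             # trained model file (placeholder)
```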
Faiyadh Shahid graduated from the University of Southern California and Texas A&M University with a specialization in Electrical Engineering, with the highest distinction. His internships at MathWorks and the Canon Information and Imaging Institute shaped his passion for software development and rigorous test automation practices. After his Master's, Faiyadh joined EmbodyVR as the second employee, in the role of Research Engineer. While contributing to the development of novel machine learning and signal processing algorithms, Faiyadh built the entire backend infrastructure that currently supports multiple gaming and entertainment clients. He has two patents and two publications to his name. He loves to begin each day with a piece of chocolate cake!



Jyotsna Chatradhi - Principal Software Engineer - Broadcom
Harnessing AI and Machine Learning for Self-Driven IT Ops
The thought of a self-driven data center often conjures up visions from classic sci-fi flicks like WarGames or Tron. The reality is that self-driven IT ops is all about augmenting the operator, not replacing them. This session takes a behind-the-scenes look at how machine learning and operational intelligence are changing how data centers are managed. We'll explore how your IT ops team can do more with less and increase efficiency by: proactively detecting when something is going wrong sooner; automatically correlating alert data from many sources to deliver more actionable insights; and automating corrective action based on learned behavior.
Jyotsna Chatradhi is a Principal Software Engineer at Broadcom, currently part of the engineering team for the Mainframe Operational Intelligence product. Previously she worked for companies including Whamtech, Thomson Reuters, and Citrix. She has more than 12 years of deep full-stack expertise, with a focus on Angular, big data analysis, distributed programming, and machine learning. She volunteers for many organizations, including the Grace Hopper Conference and Boys & Girls Clubs of America, on the importance of STEM education. Jyotsna received her Master's in computer science from IIITB. She has been recognized for her work and has published 5 patents in the area of supply chain and management. She was also nominated for Women in Open Source for the year 2017. In her spare time she is a machine learning evangelist at Broadcom and an F45 enthusiast. Jyotsna tweets at @jyotsnac.



COFFEE

INTELLIGENT PIPELINES
Juni Mukherjee - CloudBees
5 Plumbing Tips For A Smart Pipeline
Pipelines are process-as-code and leverage software gates that need to open for artifacts to be promoted from Dev to Staging to Production. This presentation gives five tips on how to design pipelines and their gates to infuse intelligence into them and, most importantly, explains why and how this technology can change our lives for the better. The presentation will also touch on continuous governance and how it will drive smart pipelines to become the heart of the next revolution in intelligent automation.
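For readers new to the idea, a software gate can be as simple as the sketch below; the metric names and thresholds are invented for illustration, and a "smart" gate would replace the fixed thresholds with learned ones.

```python
def gate_is_open(metrics: dict) -> bool:
    """Decide whether an artifact may be promoted to the next stage."""
    checks = [
        metrics["unit_test_pass_rate"] >= 1.0,
        metrics["code_coverage"] >= 0.80,
        metrics["critical_vulnerabilities"] == 0,
        metrics["p95_latency_ms"] <= 250,
    ]
    return all(checks)

# Hypothetical measurements collected for one build.
build_metrics = {
    "unit_test_pass_rate": 1.0,
    "code_coverage": 0.84,
    "critical_vulnerabilities": 0,
    "p95_latency_ms": 210,
}

print("promote artifact" if gate_is_open(build_metrics) else "block promotion and notify")
```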
Juni is a Product Marketer at CloudBees and a thought citizen in the DevSecOps and Continuous Everything space. She has helped organizations build continuous delivery pipelines and would love to solve the problems that plague our industry today. She has authored a couple of books. Juni has worked for tech companies where she led software engineering projects that help teams improve time to market. She has worked across diverse domains like identity, security, media, advertisement, retail, camera, phone, banking, and insurance. She leverages her experience in graduate research and software architecture to design and build modular and scalable platforms that improve the velocity and productivity of engineering teams. She designs metrics and dashboards to objectively measure behavior, to drive the organization's vision, and to help teams focus on solving software delivery problems, starting with the highest ROI.


MACHINE LEARNING & THE CLOUD


Chandni Sharma - Cloud Engineer - Google
KubeFlow - Machine learning with Kubernetes
Chandni is a Cloud Engineer on the Google Cloud team. In this role, she focuses on AI/ML, big data, blockchain, and Kubernetes for various customer use cases. Previously she worked with NTT Data (Global Fortune ranking 55) in big data and data science. Chandni has a Master's from Northeastern University, where she focused on data science within information systems.

CI, CD & MACHINE LEARNING


Vilas Veeraraghavan - Director of Engineering - Walmart Labs
Using Data to Infer Deployment Profiles for Cloud Applications
At Walmart there are hundreds of products under each technology pillar. These typically consist of microservices or applications that are deployed into the hybrid cloud (public and private). Our challenge is to increase the velocity of application teams by helping them figure out the right profile of machines to deploy to, and at what scale. For teams that deploy to the public cloud, the cloud itself is also a variable. Empirical data has shown us that certain workloads may perform better in different clouds. We actively measure these behaviors and collect data from both steady-state deployments and applications under stress during peak traffic loads. All of these assessments are done as part of the CI/CD process to ensure that the deployment geographies are constantly updated based on evidence from data. In this talk I will showcase our exploration of the data measurements that help us create a training model to infer the best deployment profile for apps based on constraints.
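One way to picture the training model mentioned above is a simple classifier over measured workload characteristics; the features, profile labels, and data here are illustrative assumptions rather than Walmart's actual signals.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented history of applications and the deployment profile that measured best.
history = pd.DataFrame({
    "cpu_bound_ratio":   [0.90, 0.20, 0.80, 0.30, 0.85, 0.25],
    "peak_rps":          [500, 4000, 800, 3500, 600, 5000],
    "memory_gb_per_req": [0.10, 0.80, 0.20, 0.90, 0.15, 0.70],
    "best_profile":      ["compute-private", "memory-public", "compute-private",
                          "memory-public", "compute-private", "memory-public"],
})

X = history.drop(columns="best_profile")
y = history["best_profile"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Recommend a profile for a newly measured application.
new_app = pd.DataFrame([{"cpu_bound_ratio": 0.75, "peak_rps": 900, "memory_gb_per_req": 0.20}])
print(model.predict(new_app))
```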
Vilas joined Walmart Labs in 2017 and leads the teams responsible for the testing and deployment pipelines for eCommerce and Stores. Prior to joining Walmart Labs, he had long stints at Comcast and Netflix, where he wore many hats as automation, performance, and failure testing lead.


Haresh Chudgar - Spotify
CI/CD for Machine Learning Systems
Machine learning systems are notoriously difficult to productionize. Lots of moving parts means lots of failure modes: are you serving the right model? Are you applying the same feature transformation logic in the service as you did during training? Did yesterday's retrained model deploy correctly? How can you make sure your model accuracy doesn't drift over time? This talk will step through these pain points, and more, and walk through how we adapted classic CI/CD techniques to solve them at Spotify.
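One of the pain points, training/serving feature skew, can be guarded against with a CI check along these lines; the transform and test values are invented for illustration and are not Spotify's code.

```python
import math

def training_transform(play_count: int) -> float:
    # Feature logic used when building the training set.
    return math.log1p(play_count)

def serving_transform(play_count: int) -> float:
    # Feature logic used inside the serving path; must stay in sync with training.
    return math.log1p(play_count)

def test_feature_parity():
    # Run in CI so a divergence between training and serving code fails the build.
    for value in [0, 1, 10, 1000]:
        assert abs(training_transform(value) - serving_transform(value)) < 1e-9

test_feature_parity()
```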
I am a hybrid engineer dabbling in machine learning, infrastructure, backend services, UI, and embedded systems. I have been at Spotify for over a year, working on the data engineering team, and have recently shifted my focus to ML infrastructure, working to reduce the time to market of an ML model and provide stricter guarantees. Prior to Spotify, I worked at Samsung R&D Institute Bangalore on the sensors team, prototyping sensor-based applications for the Galaxy line of smartphones and smartwatches. I graduated from UMass Amherst with a degree in Computer Science specializing in Machine Learning.


LUNCH
DEVOPS FOR MACHINE LEARNING


Adam McMurchie - Lead Cloud Data Engineer - NatWest
Dynamic Banking - A Five-Year Roadmap to Excellence in AI
Five years ago Adam spoke here at ReWork on the future of AI in banking, laying out a roadmap of things to come: previewing China's Smile to Pay (which shortly after ballooned to 400 million users), smart contracts, personalised banking, and other initiatives that have since come to fruition. That said, Adam also warned of the grave consequences of failure and of significant practical and regulatory challenges to come, which closely describes the quagmire we now find ourselves in.
Examples range from Alexa telling a 10-year-old girl to touch a live plug with a penny, to regulatory breaches such as Uber test-driving autonomous cars without state permission and running six red lights as a result.
Finance hasn't been spared the controversy either, with multiple institutions losing significant wealth to poorly deployed AI and to colossal investment wastage from backing the wrong AI sectors.
In this talk Adam will share key insights and learnings on how organisations can better traverse these pitfalls by designing, building, and deploying finance-driven AI that is both sustainable and cost effective. He will also lay out key milestones for future-proofing against imminent threats of supply chain failures and global talent shortages, so that we can navigate the next five years of AI in finance with confidence.
Adam McMurchie is the Lead Cloud Data Engineer at NatWest. He was previously a leader in DevOps and an AI expert working on the bank's SAO platform, at the forefront of technology development in finance. With broad exposure to a range of technologies, Adam drives an ethos of simplification and cloud agnosticism and specialises in spotting the next trends in fintech. Additionally, Adam has a background in science, with a physics degree specialising in neurocomputing, and is a polyglot linguist and seasoned translator. Adam has pooled these skills to deliver full-stack novel solutions, from TensorFlow-driven mobile apps to personalized banking chatbots. Adam also develops apps designed around the ethos of social utility, including flood/storm reporting, EV vehicle bay monitoring, and preservation of endangered languages.




Praveen Hirsave - Senior Director Cloud Platform Engineering - HomeAway.com
MLOps - DevOps for Machine Learning
Praveen is on a mission to revolutionize travel through the power of technology. He is an experienced Senior Director of Cloud Platform Engineering at HomeAway with a demonstrated history of working with emergent technologies. He is skilled in software development, DevOps, agile, big data, streaming platforms, machine learning, IT strategy, cloud, and mobile applications. Prior to HomeAway, Praveen worked as a Senior Software Development Manager at IBM, where he managed multiple development teams involved in a big data backend running Hadoop, Aster Data (Teradata), Oracle, and DB2, as well as managing Digital Recommendations, Benchmark, UI, and Mobile solutions.


PANEL: Challenges & Opportunities of Investing in Machine Learning
Clint Wheelock - Tractica
Clint Wheelock is the founder and managing director of Tractica. He leads all research operations at the firm, including management of its analyst team as well as client interactions and consulting engagements. His personal research focuses on artificial intelligence and user interface technologies.
Wheelock has an extensive background in market intelligence focused on emerging technologies. Most recently, he was founder and president of Pike Research, a leading market intelligence firm focused on the global clean technology industry, which was acquired by Navigant Consulting, after which Wheelock led the rebranded Navigant Research business as its managing director. In this role, he managed all aspects of company operations, including research, sales, marketing, finance, and operations. Prior to forming Pike Research, Wheelock was chief research officer at ABI Research, vice president at the NPD Group, and research director at In-Stat. Previous positions also include senior product management and strategic marketing roles at Qwest Communications and Verizon Communications, as well as prior experience in management consulting and private investment banking. Wheelock holds an MBA from the University of Dallas and a BA from Washington & Lee University.


Lynn Calvo - GM Financial
Driving AI Innovation with Machine Learning in the Enterprise
GM Financial, the wholly-owned captive finance subsidiary of General Motors, will discuss their AI journey with Machine Learning, Deep Learning, and Natural Language Processing. In this session, GM Financial will discuss their challenges, technology choices, and initial successes:
· Addressing a wide range of Machine Learning use cases, from credit risk analysis to improving customer experience
· Implementing multiple different tools (including TensorFlow™, Apache Spark™, Apache Kafka®, and Cloudera®) for different business needs
· Deploying a multi-tenant hybrid cloud environment with containers, automation, and GPU-enabled infrastructure
Gain insights from this enterprise case study, and get perspective on Kubernetes® and other game-changing technology developments.
Lynn Calvo is the AVP of Emerging Data Technology at GM Financial, where he introduced big data analytics and machine learning capability into the lines of business and continues to work on enabling its application to multiple AI use cases. Lynn is a recognized inventor on three US patents. He holds a master's in Computer Science and is a former IT consultant who served Fortune 50 clients with big data, data center, and security next-generation research. Lynn's forward thinking and depth of knowledge allow him to skillfully leverage his 30 years of results-oriented IT experience into a winning model for GM Financial.

Aaron Blythe - NAIC
Aaron Blythe is genuinely curious. He loves taking things apart, understanding them, and making them better. He has created software for over 20 years. Aaron is the lead organizer of the Kansas City DevOps Meetup and the DevOpsDayKC conference. He is currently working on his Master's degree in Data Science through the University of Illinois. You can find more about him at aaronblythe.com.


Juni Mukherjee - CloudBees
Juni is a Product Marketer at CloudBees and a thought citizen in the DevSecOps and Continuous Everything space. She has helped organizations build continuous delivery pipelines and would love to solve the problems that plague our industry today. She has authored a couple of books. Juni has worked for tech companies where she led software engineering projects that help teams improve time to market. She has worked across diverse domains like identity, security, media, advertisement, retail, camera, phone, banking, and insurance. She leverages her experience in graduate research and software architecture to design and build modular and scalable platforms that improve the velocity and productivity of engineering teams. She designs metrics and dashboards to objectively measure behavior, to drive the organization's vision, and to help teams focus on solving software delivery problems, starting with the highest ROI.



END OF SUMMIT

Ask the Experts During the Coffee Break - NETWORKING
Networking & Ask the Experts During the Coffee Break

Infrastructure Slowing Your AI Projects? Nuts & Bolts of AI-Ready Infrastructure - WORKSHOP
Workshop with Dave Logan, Pure Storage

Predicting & Preventing Outages with Machine Learning Forecasting - DEEP DIVE
Workshop with Myra Haubrich and Sharath M, Adobe

Education Corner: Exploring STEM & Data Science in Houston - NETWORKING
Networking Break with Rising Stars & Local Tech Programs

Tips & Tricks from Leading Investors in AI with Q&A - NETWORKING
Get Your Questions Answered by Leading VCs

Designing Ethical AI Solutions - WORKSHOP
Workshop with Greg Adams, Accenture