• TIMES IN PDT

  • 08:00

    WELCOME & OPENING REMARKS - 8am PDT | 11am EDT | 4pm BST

  • APPLICATIONS OF AI

  • 08:10
    Manan Sagar

    Trust Me, I’m a Robot – The Future of AI and Automation in Insurance

    Manan Sagar - CTO - Fujitsu

    Technology is changing the way we live and consume services. Insurance has perhaps been the slowest sector to react to these changes, but it is apparent that data is starting to drive personalisation and precision in insurance. "Real-time risk management" enabled by the subscription model is, in the very near future, going to become mainstream in personal insurance. Advances in data science, coupled with a fourfold increase in the number of sensors, will lead to a seismic shift from the traditional insurance model of "protection" to one of "prevention".

    Key Takeaways: 1) Data is enabling a shift in the insurance model from protection to prevention. 2) While automation will help reduce processing costs, AI will provide deep insights and the ability to predict. 3) AI will enable insurance costs to be viewed as a "service charge" rather than an "annual tax".

    Manan is a highly experienced insurance professional and a pragmatic business leader. He previously led Lockton's Singapore business, where he delivered organisation-wide changes, and then went on to manage one of the largest acquisitions in the insurance industry. A Chartered Accountant by profession and now a technologist by trait, Manan is well regarded for his thought leadership. His career has spanned the Americas, EMEA and Australasia. As Fujitsu's Insurance CTO, Manan is responsible for defining the innovation strategy for the insurance sector. In this role he is a strategic advisor to the insurance sector on digital transformation and connected technology solutions.

  • 08:35
    Clara Bove-Ziemann

    How to Explain an ML Prediction to Non-Expert Users?

    Clara Bove-Ziemann - Design+ ML Researcher - AXA

    PRESENTATION: How to Explain an ML Prediction to Non-Expert Users?

    Machine learning has provided new business opportunities in the insurance industry, but its adoption is for now limited by the difficulty of explaining the rationale behind its predictions. In our latest research, we explore how to enhance one type of explanation, extracted from an interpretability method called local feature importance, for non-expert users. We propose design principles for presenting these explanations to non-expert users and are applying them to a car insurance smart-pricing interface. We present preliminary observations collected during a pilot study using an online A/B test to measure objective understanding, perceived understanding and perceived usefulness of our designed explanations.
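The local feature importance explanations described above can be illustrated with a minimal sketch. Note this is a toy perturbation-based approach over an invented pricing function, not AXA's actual method: nudge each feature and record how much the prediction moves.

```python
# Toy local feature importance by single-feature perturbation.
# The pricing model, features, and coefficients are illustrative only.

def price_model(features):
    # Hypothetical car-insurance premium: base plus weighted risk factors.
    return (200.0
            + 12.0 * features["driver_age_risk"]
            + 45.0 * features["past_claims"]
            + 0.08 * features["annual_mileage"])

def local_importance(model, features, delta=1.0):
    """Score each feature by how much nudging it shifts the prediction."""
    baseline = model(features)
    scores = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        scores[name] = model(perturbed) - baseline
    return scores

customer = {"driver_age_risk": 3.0, "past_claims": 1.0, "annual_mileage": 9000.0}
scores = local_importance(price_model, customer)
# Rank features by absolute impact for a non-expert-facing explanation.
ranked = sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Ranked this way, the output can feed a plain-language explanation such as "your past claims affected your price most".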

    ROUNDTABLE: Challenges of ML Interpretability and Business Opportunities

    The rise of Machine Learning (ML) has provided new business opportunities in the insurance industry. ML can, for instance, help improve pricing strategies, fraud detection, claim management or the overall customer experience. Yet its adoption is for now limited by the difficulty of explaining the rationale behind ML predictions. What can be explained from ML models? What do people need explained to them? How should explanations be presented? These are some of the challenges we want to address in ML interpretability.

    I am currently working as a Researcher at AXA and am a PhD Candidate at Laboratoire Informatique de Paris 6 (LIP6). I conduct research on eXplainable AI (XAI) and User Experience (UX) in machine learning. I graduated with a Master's degree in Design in 2015 and worked for several years as a User Experience Designer in various fields before starting research on Human+AI interactions.

  • 09:00
    Alessandro Bonaita

    Analytics & AI Acceleration Programme

    Alessandro Bonaita - Group Head of Data Science - Generali Group

    Many organizations have come to recognize that their future success will depend on data and AI capabilities. Even in insurance, artificial intelligence is increasingly being integrated into core business strategies. Since expectations are high and companies are investing heavily in the area, finding the right drivers for this digital transformation is crucial to guaranteeing the success of an AI strategy. People, value and execution are three pillars that have been shown to successfully drive an AI acceleration programme in complex organizations such as insurers.

    Key Takeaways: 1) To accelerate the adoption of AI in a complex organization you first need a clear AI strategy: people (not technology!) are where you have to invest most. 2) Committing business lines with a solid business-case analysis is fundamental to aligning plans and creating tangible value. 3) Complex organizations typically have different levels of internal AI maturity: a well-defined AI strategy guarantees execution through an adaptable but consistent approach.

    Alessandro is an artificial intelligence director with more than 15 years of experience in data and analytics, from hands-on data science consulting, to setting up international analytics teams and offshore hubs, to designing Analytics & AI strategy and data strategy at HQ level for global companies. He has served as an AI leader in multinational companies such as American Express, SAS and RCS Mediagroup. He is also a former deputy Privacy Officer and is passionate about AI ethics, working closely with international academic institutions on research in this area.

  • 09:25

    COFFEE & NETWORKING BREAK: MEET WITH ATTENDEES VIRTUALLY FOR 1:1 CONVERSATIONS

  • CUTTING EDGE TOOLS & TECHNIQUES

  • 09:35
    Nataliya Le Vine

    Developing Early Warning System to Identify Relevant Events in Unstructured Data

    Nataliya Le Vine - Lead Data Scientist - Swiss Re

    Swiss Re is a leading player in the global reinsurance sector. Its role is to anticipate, understand and price risk in order to help insurers manage their risks and absorb some of their biggest losses. As one way to stay ahead of the curve and provide thought leadership to its clients, Swiss Re is developing an early warning expert community platform based around big data and natural language processing. The platform is intended to work on the front lines, to detect events that have the potential to change our view on risk drivers and to help us make business decisions in shorter timescales.

    Key Takeaways: 1) Traditional methods of identifying relevant events become unreliable when information volume rapidly increases; 2) Uncoordinated views pose a challenge to taking proactive and strategic action to manage risks; 3) An early warning expert community platform leverages new data techniques to identify relevant signals and helps integrate experts in a more joined-up process.

    Nataliya Le Vine is a data scientist at the Advanced Analytics Center of Excellence at Swiss Re, bringing machine learning and AI to drive the technology transformation in insurance. Over the last decade, she has worked in academia, tech and insurance in both EMEA and the Americas, with core expertise in predictive modeling and machine learning.

  • 10:00
    Julie Wall

    Detecting Deception & Tackling Insurance Fraud Using Conversational AI

    Julie Wall - Reader in Computer Science - University of East London

    We are developing an explainable pipeline that will identify and justify the behavioural elements of a fraudulent claim during a telephone report of an insured loss. To detect the behavioural features of speech for deception detection, we have curated a robust set of acoustic and linguistic markers that potentially indicate deception in a conversation. Using statistical measures and machine learning approaches, we are investigating the detection of these linguistic markers in the right context. The explainable pipeline means that the output of the decision-making element of the system will provide transparent decision explainability, overcoming the "black-box" challenge of traditional AI systems.
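A simplified sketch of the marker-extraction idea above: count occurrences of candidate linguistic markers in an utterance and combine them into a score. The marker lists and weights here are invented for demonstration; the research described uses a curated, validated marker set and trained models.

```python
# Illustrative linguistic-marker extraction for deception scoring.
# Marker sets and weights are hypothetical, for demonstration only.

HEDGES = {"maybe", "possibly", "perhaps", "somehow", "honestly"}
FILLERS = {"um", "uh", "like", "basically"}

def extract_markers(utterance):
    """Return per-word rates of simple candidate deception markers."""
    words = utterance.lower().split()
    n = max(len(words), 1)
    return {
        "hedge_rate": sum(w in HEDGES for w in words) / n,
        "filler_rate": sum(w in FILLERS for w in words) / n,
        "first_person_rate": sum(w in {"i", "me", "my"} for w in words) / n,
    }

def deception_score(markers, weights=None):
    # Linear score as a stand-in for a trained classifier.
    weights = weights or {"hedge_rate": 2.0, "filler_rate": 1.5,
                          "first_person_rate": -1.0}
    return sum(weights[k] * v for k, v in markers.items())

claim = "Honestly um I maybe left the car unlocked basically"
m = extract_markers(claim)
score = deception_score(m)
```

Because each marker rate is inspected individually, the same features that drive the score can be surfaced to a human reviewer, which is the explainability property the pipeline aims for.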

    Dr Julie Wall is a Reader in Computer Science, Director of Impact and Innovation for the School of Architecture, Computing and Engineering and leads the Intelligent Systems Research Group at the University of East London. Her current research focuses on developing machine learning and deep learning approaches for speech enhancement, natural language processing and natural language understanding and she maintains collaborative R&D links with industry. This has led to the successful acceptance of two Innovate UK grants with a combined total value of £2,273,177. Since starting her PhD in 2006, Julie has been exploring the overarching research area of designing intelligent systems for processing and modelling temporal data. This primarily involves investigating the architectures and learning algorithms of neural networks for a variety of data sources.

    https://www.uel.ac.uk/research/intelligent-systems

  • 10:25

    BREAKOUT SESSIONS: ROUNDTABLE DISCUSSIONS WITH SPEAKERS

  • Nataliya Le Vine

    ROUNDTABLE: Introduction to Swiss Re’s Risk Resilience Platform

    Nataliya Le Vine - Lead Data Scientist - Swiss Re

  • Annie Xue

    Host

    Annie Xue - Head Global L&H Product Risk Data Service - Swiss Re

    Annie leads the global product team of Swiss Re’s Risk Data Service platform along with various client risk management services for L&H insurers.

    A recent example of Annie's work is with the Swiss Re Risk Resilience Center. The ambition is to join the forces of academia, insurers, and data partners to solve Covid-related problems that are most relevant to the insurance industry and beyond.

    In a past life, she gained over a decade of experience in reserving, in-force management and experience studies. Annie is an eternal optimist and brings unparalleled energy and passion to her work.

  • Brian Alexander

    ROUNDTABLE: Defining a Governance Framework Covering the Appropriate Level of Automation/Human Control

    Brian Alexander - CEO North America - Omina Technologies

    PRESENTATION: Reduce Compliance Risks with Trustworthy AI

    Increasing complexity in the regulatory environment, rates of regulatory change and need for accountability are driving new compliance risks for financial services companies. Trustworthy AI reduces compliance risks while balancing human control and oversight with accountability. AI can automatically identify relevant regulatory changes and predict the impact to the organization (e.g., business units, policies, controls, products/services, contracts). Regulatory changes can be routed to impacted business units with compliance risk indicators/ratings and impact predictions. The AI solution can predict and automatically take actions necessary to maintain compliance, which can be accepted or overridden by the business unit.

    ROUNDTABLE: Defining a Governance Framework Covering the Appropriate Level of Automation/Human Control

    Points of Discussion: 1) Defining compliance and regulatory risks; 2) Identifying and rating regulatory/organizational changes; 3) Defining compliance risk treatment strategies; 4) Including user feedback to improve AI-enabled compliance management.

    Brian is responsible for the strategy and management of Omina Technologies US. He has a B.S. Mech. Eng. and a J.D. Brian has spent the last 25 years working with technology companies and new technologies in capacities ranging from legal advisor to executive to investor. While working in private law practice, Brian represented technology organizations in intellectual property, regulatory and litigation matters. Brian also has advised technology, financial services, energy/utilities and healthcare companies regarding corporate risk and compliance. Prior to joining Omina, Brian worked for C2C with responsibilities including internal legal advisory, corporate strategy and software development, as well as client management on numerous projects covering diverse risk/compliance matters such as information/cybersecurity and data privacy.

  • James Fort

    ROUNDTABLE: Solving Retail Challenges with Computer Vision

    James Fort - Senior Product Manager, Computer Vision - Unity

    PRESENTATION: Power Up Your Visual AI with Synthetic Data

    Computer vision is rapidly changing the retail landscape, both for the customer experience and for in-store day-to-day logistics such as inventory monitoring, brand logo detection, shopper behavior analysis and autonomous checkout. Traditional methods of training models with real-world data are becoming a big bottleneck to faster deployment of these vision models. Learn how machine learning and computer vision engineers are using Unity to get faster, cheaper and less biased access to high-quality synthetic training data, accelerating model deployment.

    Key Takeaways: 1) Computer vision is becoming essential in retail with applications ranging from planogram verification to inventory monitoring to cashier-less checkout. 2) Labeled data is critical to computer vision but the traditional approach of using real-world training data is expensive, time-consuming, and often insufficient for training a production-level system. In contrast, synthetic datasets are less expensive, faster to produce, perfectly labeled, and tailored with the end application in mind. 3) Unity has technology to produce synthetic datasets with structured environments and randomizations that lead to robust model performance. This presentation shows samples from a Unity retail-oriented dataset that you can download.

    ROUNDTABLE: Solving Retail Challenges with Computer Vision

    Join Unity computer vision experts and peers to discuss the rapidly growing field of computer vision, how it is impacting the retail world, and some of the challenges associated with deploying computer vision in retail. This is a freeform session where you can come to the table with your questions, and we will have an engaging and interactive conversation around those topics. You can also use this time to talk with the Unity team about using synthetic data for training computer vision models and dig deeper into customer stories and proof points around synthetic data.

    James has 14 years of experience building and applying simulation and artificial intelligence technologies. He started his career in the simulation brand at Dassault Systèmes, where he worked on mechanical simulation solutions for automotive and aerospace customers. He spent several years managing the delivery of natural language systems for Alexa at Amazon. He has worked as a product manager in the AI organization at Unity since 2019 focusing on Unity Simulation and Unity’s solutions for computer vision and is excited about the next frontiers in AI.

  • Kevin Kim

    ROUNDTABLE: Brainstorming AI Design Principles – Implementation & Theory

    Kevin Kim - Data Scientist - Nasdaq

    AI Beyond Pattern Recognition: Decision Making Systems

    While machine learning and artificial intelligence technologies are now advanced enough to outperform humans in a variety of tasks, how we make decisions with models varies by practitioner. Reinforcement learning is promising, but it is limited to adversarial settings: in the vernacular, situations where decisions directly impact the environment. Without figuring out how AI systems can make good decisions in environments they cannot influence, we may forever be stuck in a limbo of pattern recognition, prediction, and analytics. What if we could develop a "theory" of AI decision making? Can we view different decision-making situations as a set of engineering systems? Can we define key components of an AI decision maker? Answering such questions would enable us to design AI systems in a modular fashion, much like how we design many industrial goods such as cars. We may even be able to develop industry standards and manuals on how to design AI decision makers. Using actual use cases and other potential real-world applications in both financial and non-financial settings as examples, a systems view of decision-making AI is proposed, and ways to design and build such systems are explored.

    Key Takeaways: 1) AI systems are far more valuable making decisions than making simple predictions and pattern recognitions

    2) We need a “theory of design” for AI decision making systems: For AI to become a trusted part of decision making both in and out of industry, we need to understand and generalize components of AI systems and define what it means to be “robust”.

    3) This starts with looking at actual use cases, identifying similarities, and quantifying key parameters – so we may use standard design techniques to design AI systems.

    Kevin is a data scientist with strong interest in an interdisciplinary approach that combines artificial intelligence, operations research, systems engineering, economics, and quantitative finance. He is a member of Nasdaq’s Machine Intelligence Lab, a team dedicated to using AI to improve capital markets. At Nasdaq, he has worked on projects that cover topics such as alternative data, capital market operations, financial surveillance, and portfolio management. His most recent interest is in developing a procedure for designing robust and fail-proof decision-making AI systems. Kevin holds a Bachelor’s degree in Computer Science from Washington University in St. Louis.

  • Vladimir Teodosiev

    ROUNDTABLE: AI: Speech Recognition in Financial Services - Making a Significant Contribution to the UK Economy

    Vladimir Teodosiev - Sales Manager - Nuance Communications

    PRESENTATION: Speech Recognition: Comprehensive Insights in Real-World AI for Business Leaders

    Learn how to streamline your workload, boost efficiency and automate processes with Dragon Speech Recognition. This short live tutorial will show you how to overcome challenges such as documentation and real-time collaboration across teams, and how to get key business insights that will help you perform better as a business and improve your services to customers.

    ROUNDTABLE: AI & Speech Recognition in Financial Services

    Economies are responding to many challenges (e.g. Brexit and the Covid-19 pandemic). Businesses need to find ways to remain both competitive and sustainable. As markets and profitability wax and wane under these influences, the financial services sector faces real and present pressures. How can we advance speech recognition in this time, and consequently build better products & provide better services?

    Vladimir Teodosiev is a Sales Manager at Nuance with extensive experience in creating and delivering value to high-profile private and public organisations. Commercially savvy and with a strong interest in technology, he is highly customer-focused and results-driven, supporting businesses in developing best practices that facilitate efficient document creation.

  • 10:45

    COFFEE BREAK

  • APPLIED AI IN INSURANCE

  • 10:55
    Clara Castellanos Lopez

    NLP for Claims Management

    Clara Castellanos Lopez - Senior Data Scientist - QBE Europe

    QBE Insurance Group is one of the world's leading insurers and reinsurers, with operations in 31 countries worldwide. Covering multiple lines such as marine, motor and casualty, amongst others, the diversity of its claims is wide. Exploiting the richness of claims data can help to process claims faster and gain deeper insight. In this talk, I will discuss how some NLP techniques can be used to this end.

    Key Takeaways:

    1) Insurance companies have a lot of unstructured data which has traditionally been difficult to tap into. 2) NLP techniques can be used to deliver business value and drive greater customer satisfaction. 3) Contextual embeddings adapted specifically to your data will improve the quality of your model performance.
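As a minimal sketch of tapping unstructured claims text, the example below routes a free-text claim description to a line of business using bag-of-words similarity. The categories and prototype texts are invented for illustration; as the takeaways note, a production system would use contextual embeddings adapted to the insurer's own data rather than raw word counts.

```python
# Toy claims-text triage via bag-of-words cosine similarity.
# Categories and prototype claims are hypothetical examples.
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts for a lowercased text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical prototypes for two lines of business.
PROTOTYPES = {
    "motor": bow("car collision rear bumper damage vehicle"),
    "marine": bow("cargo vessel hull water damage shipment"),
}

def route_claim(text):
    """Assign a claim description to the most similar category."""
    scores = {cat: cosine(bow(text), proto) for cat, proto in PROTOTYPES.items()}
    return max(scores, key=scores.get)
```

Swapping the `bow` representation for a contextual embedding of the claim text, while keeping the nearest-prototype routing, is one way the third takeaway's point about adapted embeddings could slot into the same pipeline.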

    Clara Castellanos Lopez is a Senior Data Scientist at QBE Insurance Group. She works with the claims teams, building machine learning algorithms to provide data-driven insights and automated solutions. Clara has been working in industry since 2014, with experience in oil and gas, retail and insurance. She has a master's degree in Applied Mathematics and a PhD in Geophysics from Universite de Nice Cote d'Azur.

  • 11:20
    Connan Snider

    Just Auto Insurance: Making Insurance Affordable, Convenient, and Fair for Everyone

    Connan Snider - Head of Data Science - Just Auto Insurance

    Just Auto Insurance is using data on how you drive to innovate on risk assessment and pricing of auto insurance. By using telematics data, gleaned from your phone and other devices, combined with advanced ML/AI tools, Just is able to assess the probability that a customer will have an accident while reducing the number of painful (and often discriminatory) background questions required by traditional insurers. By designing dynamic pricing systems and customer feedback tools around this telematics data, Just is able to provide customers with information and incentives that actually help reduce that risk.
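The telematics-based risk assessment described above can be sketched as a logistic model over phone-derived driving features. The features and coefficients below are invented for illustration; an insurer would fit such a model to historical claims data rather than hand-set the weights.

```python
# Illustrative telematics accident-risk score via a logistic model.
# Features and coefficients are hypothetical, for demonstration only.
import math

def accident_probability(hard_brakes_per_100mi, night_miles_frac,
                         avg_mph_over_limit):
    """Map driving-behaviour features to an accident probability in (0, 1)."""
    z = (-3.0
         + 0.15 * hard_brakes_per_100mi
         + 1.2 * night_miles_frac
         + 0.08 * avg_mph_over_limit)
    return 1.0 / (1.0 + math.exp(-z))

# A cautious driver versus a riskier one.
cautious = accident_probability(2, 0.05, 0)
risky = accident_probability(12, 0.40, 10)
```

Because each coefficient attaches to an observable behaviour, the same model can power the customer feedback loop the abstract mentions, e.g. showing a driver how fewer hard brakes would lower their score.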

    Connan Snider is the Head of Data Science at Just Auto Insurance. Prior to joining Just, he worked on the Marketplace team at Uber developing pricing and incentive algorithms and contributing to projects ranging from marketing spend optimization to autonomous fleet planning. Before that he was an Assistant Professor in the Economics Department at UCLA. Connan has a B.S. in Mathematics and Economics from Ohio State University and a PhD in Economics from the University of Minnesota.

  • 11:45

    MAKE CONNECTIONS: MEET WITH ATTENDEES VIRTUALLY FOR 1:1 CONVERSATIONS & GROUP DISCUSSIONS

  • 12:15

    END OF SUMMIT

  • THIS SCHEDULE TAKES PLACE ON DAY 1
