NLU & Computer Vision
The explosion of user-generated content and the endlessly growing interactions between organizations and customers, regulators and companies, federal institutions and citizens, and employees and employers are creating both Big Content problems and new business opportunities. Individuals express their needs, suggestions, and concerns across a wide variety of content types that can help organizations make optimal data-driven decisions. At the same time, the amount of available content can be overwhelming and is no longer possible for Content Managers to monitor. Some of this content might quickly prove harmful to the organization, creating serious issues, including legal consequences: discriminatory, sexist, or racist language used in emails or corporate social media, along with inappropriate text or images, has no place in the workplace or in digital communities. In this session, I'll give an overview of how Natural Language Understanding (NLU) provides a way to analyze, identify, and group these opportunities and risks through automated classification, named-entity extraction, and the analysis of subjectivity, tonality, emotions, and intentions within textual content, and how NLU combined with Computer Vision applications can identify high-risk content.
Key Takeaways:
* The boundary between NLP and NLU is not that obvious
* Voice of CCE and Risk Assessment are great examples of Text Analytics business use cases
* NLP alone is not enough for text analytics; other technologies, such as Computer Vision, are needed
Robert Kapitan is the Lead Product Manager at OpenText for the AI & Analytics content analytics platform, Magellan Text Mining. Robert has been working with Text Mining and Content Analytics applications for over 20 years, helping to build software solutions that understand human language. He holds an M.A. in Theoretical Linguistics and a PhD in Cognitive Semantics.