How can we use AI to debunk 'Fake News'?

"Fake News" has recently become a buzzword thanks to social media, and with consistent breakthroughs and progressions in AI, is this the answer to debunking the rumors online?

Unfortunately, the general public is drawn to controversy and outlandish stories over the somewhat mundane news we might otherwise be served. When these more compelling stories are published, whether or not they are true, they spread at an alarming pace. People are compelled to share the information, and so fake news becomes "fact" and the public is misinformed. Take an April Fools' joke, for example: 23andMe teamed up with Lexus to announce 'Genetic Select, the world’s first service that uses human genetics to match you with the car of your genes.' The video announcing the product gained 1,837,253 views on YouTube in less than a week, demonstrating the power of the internet.

It's important to educate the general public, and we need to be made aware of the news, but as Rumman Chowdhury, Principal Partner at Accenture, explained: 'it’s difficult because we need a hook to get people involved, but what is that hook? If we say ‘Robot beats human’ people are intrigued, but if you say ‘AI helps human’ people don’t care as much! How can we democratize it better?'

How can we use AI to debunk fake news?
Elena Kochkina, a Computer Science PhD student at the University of Warwick, is currently working on algorithms that identify and debunk rumors on Twitter, drawing on users' expressed emotions and the rich data found on social media to help researchers mine public opinion. With a background in natural language processing, Elena works across several areas, including rumor stance and veracity classification, predicting well-being from heterogeneous user-generated data, and target-dependent sentiment recognition. The circulation of rumors is not only misleading; it also presents real risks when taken as a source of reliable news. Detecting rumorous content matters because the spread of false information can sway important decisions and even move stock markets. Elena explains that her work on rumor stance classification is an important step towards rumor verification, as claims that attract a lot of scepticism among users are more likely to be proven false later.
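Rumor stance classification is commonly framed as a four-way labelling problem over replies to a source tweet: support, deny, query, or comment. As a minimal, hypothetical sketch of that framing (the toy replies, labels, and the TF-IDF + logistic-regression pipeline here are illustrative assumptions, not Elena's actual system), it might look like:

```python
# Toy four-way stance classifier for replies to a rumorous tweet.
# NOTE: the tiny dataset and the TF-IDF + logistic regression pipeline
# are illustrative assumptions, not the model described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled replies to a rumorous source tweet.
replies = [
    "Confirmed, I saw this reported elsewhere too",   # support
    "This is fake, the photo is from 2012",           # deny
    "Is there any source for this claim?",            # query
    "Wow, crazy times we live in",                    # comment
    "Yes, the official statement backs this up",      # support
    "Debunked already, stop spreading it",            # deny
    "Where did you hear this?",                       # query
    "Thoughts and prayers",                           # comment
]
stances = ["support", "deny", "query", "comment"] * 2

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(replies, stances)

# A flood of "deny"/"query" stances is the kind of scepticism signal
# that rumor verification can build on.
print(clf.predict(["Any evidence for this at all?"]))
```

In practice the signal would come from many replies aggregated per rumor, but the core idea is the same: the distribution of stances among users hints at whether the claim will later be verified or debunked.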

Elena's work proposes an LSTM-based sequential model that achieves state-of-the-art results on this task by modeling the conversational structure of tweets.

The task of automatically assessing well-being using smartphones and online social media is also becoming crucially important as a way to help individuals self-monitor their mental health. In this work, a multiple kernel learning approach is proposed as a mental health predictor, trained on heterogeneous (text and smartphone) user-generated data. The results show the effectiveness of the proposed model, and sequential approaches for time-series modeling (i.e., LSTMs) are suggested as future work.

Opinion mining is usually achieved by determining the overall sentiment expressed. However, such an approach is limited when it comes to inferring the sentiment towards specific targets, since a single social media post may express a different sentiment towards each of the targets it mentions.
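The conversational modeling behind the LSTM approach can be sketched roughly as follows: the model reads one conversation branch (a source tweet followed by its chain of replies) tweet by tweet and emits a stance label for each. This is a hedged illustration of the general idea only; the dimensions, the random placeholder "tweet embeddings", and the linear classification head are assumptions, not the published architecture:

```python
# Sketch: an LSTM reads one conversation branch (source tweet followed
# by its chain of replies) and predicts a stance label for each tweet.
# All dimensions and the random "tweet embeddings" are illustrative.
import torch
import torch.nn as nn

EMBED_DIM, HIDDEN_DIM = 32, 64
NUM_STANCES = 4  # support / deny / query / comment

class BranchLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, NUM_STANCES)

    def forward(self, branch):            # branch: (batch, tweets, EMBED_DIM)
        states, _ = self.lstm(branch)     # hidden state after each tweet
        return self.head(states)          # stance logits per tweet

# One branch of 5 tweets, each pre-encoded as a 32-dim vector.
branch = torch.randn(1, 5, EMBED_DIM)
logits = BranchLSTM()(branch)
print(logits.shape)  # torch.Size([1, 5, 4])
```

Because the LSTM carries state forward through the branch, each tweet's predicted stance can depend on the tweets that preceded it, which is what "modeling the conversational structure" buys over classifying each tweet in isolation.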
At the Deep Learning Summit in London, we spoke with Elena on the Women in AI Podcast, where she shared more about her work; you can listen to the podcast here.
