
A sentiment analysis experiment with Twitter and the Zuma Spear controversy

I had some fun this week with the raging debate over Brett Murray’s artwork, which showed President Jacob Zuma with his private bits a-dangling. It provided the perfect opportunity to have a go at using something called “sentiment analysis” to get some insight into an unfolding story.

Right! What the heck is “sentiment analysis” and how on earth could this possibly be used in journalism?

[Image: The LA Times Sentimeter]

The back-of-envelope description of “sentiment analysis” or “opinion analysis” is simply machine recognition of the emotional intent behind a piece of text, classifying it as either “positive” or “negative” or, in some cases, as neutral. For a fuller explanation see this Wikipedia entry.

As this story began to heat up last week, I spent the weekend tinkering with some code to see if it would be possible to mine Twitter in real time and collect as many of these tweets as possible for analysis. I then recalled something I had read about sentiment analysis and how it could be done in Python, the computer language I am familiar with, using the Natural Language Toolkit (NLTK) library.

I figured it might be useful to grab these tweets in real time, classify them by mood and then see what kind of story might be generated from such an exercise.

Writing the code to suck the tweets out of the ether and into a database was no problem, but the mood analysis was another story.
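For the curious, here is roughly what the scraping half looks like. This is a minimal sketch rather than my actual script: I am assuming the tweepy library (using the StreamListener pattern from older versions of that library) for Twitter’s streaming API, and the keyword list and table layout are purely illustrative.

```python
import sqlite3
import tweepy  # assumed here as the client for Twitter's streaming API

# Illustrative keyword list and schema, not the ones used for the story
KEYWORDS = ["spear", "zuma", "brett murray", "goodman gallery"]

db = sqlite3.connect("tweets.db")
db.execute("""CREATE TABLE IF NOT EXISTS tweets (
                  id INTEGER PRIMARY KEY,
                  created_at TEXT,
                  user_location TEXT,
                  text TEXT,
                  sentiment TEXT)""")  # filled in later by the classifier

class TweetSaver(tweepy.StreamListener):
    def on_status(self, status):
        # Store each matching tweet in the database as it arrives
        db.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?, ?, ?, NULL)",
                   (status.id, str(status.created_at),
                    status.user.location or "", status.text))
        db.commit()

    def on_error(self, status_code):
        return status_code != 420  # stop if Twitter tells us to back off

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
tweepy.Stream(auth, TweetSaver()).filter(track=KEYWORDS)  # blocks and collects
```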

I wrote some code based on some examples I found on the ‘net but rapidly realised that the classification would not approach any kind of decent accuracy without a significant “corpus” of tweets to train the software on.
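To see why the corpus matters, here is a toy sketch of training NLTK’s naive Bayes classifier. The technique is sound; the problem is that a hand-labelled training set this tiny (the example tweets below are invented) gives it almost nothing to generalise from.

```python
import nltk

# A deliberately tiny, invented hand-labelled training set --
# far too small for decent accuracy, which is exactly the point
train = [
    ("this painting is a disgrace", "negative"),
    ("what an insult to the president", "negative"),
    ("brilliant satire, well done", "positive"),
    ("i love this artwork", "positive"),
]

def features(text):
    # Simple bag-of-words features: which words are present
    return {word: True for word in text.lower().split()}

classifier = nltk.NaiveBayesClassifier.train(
    [(features(text), label) for text, label in train])

print(classifier.classify(features("brilliant artwork")))  # -> 'positive'
```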

Eventually I came across Chatterbox, an excellent API with Python implementations which is specifically designed to classify social network messages. Chatterbox was developed at, and is a commercial spin-off from, Queen Mary, University of London (home to the UK’s leading linguistics research facility), and was created using an Intel supercomputer to crunch more than 100 million tweets as part of developing its “conversation model”. It claims an 80% accuracy rate, which is significant considering that research suggests humans only agree on the categorisation of a sentiment 70% of the time.
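I won’t reproduce Chatterbox’s actual interface here, but to show the shape of calling any sentiment-classification web API of this kind, a hypothetical sketch looks something like this. The endpoint, parameter names and response fields below are placeholders I have made up, not Chatterbox’s real API.

```python
import requests

# Hypothetical endpoint, parameters and response fields, purely to show
# the shape of such a call -- not Chatterbox's real interface
API_URL = "https://api.example.com/sentiment"
API_KEY = "your-key-here"

def classify(text):
    resp = requests.post(API_URL, data={"apikey": API_KEY, "text": text})
    resp.raise_for_status()
    result = resp.json()  # e.g. {"sentiment": "positive", "score": 0.8}
    return result["sentiment"]

print(classify("The gallery was right to show the painting"))
```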

Now, with Chatterbox and the tweet-scraper working together we were ready to go for it on Monday morning.

I ran my script for significant periods over three days and by late on Wednesday had amassed some 20,000 tweets in a database, all of which had been categorised by Chatterbox. After Brett Murray’s picture was attacked in the Goodman Gallery on Tuesday things went crazy, and I scraped more than 7,000 tweets in the single hour following that incident.
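Spotting that surge is just a matter of grouping the stored tweets by hour. Against a table like the one in the earlier sketch (column names still illustrative), something like:

```python
import sqlite3

db = sqlite3.connect("tweets.db")
# created_at is stored as "YYYY-MM-DD HH:MM:SS" in the earlier sketch,
# so its first 13 characters identify the hour
for hour, n in db.execute(
        """SELECT substr(created_at, 1, 13) AS hour, COUNT(*)
           FROM tweets GROUP BY hour ORDER BY hour"""):
    print(hour, n)
```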

The volume got so great that I was on track to smash through Chatterbox’s 15,500 free API calls per month within a day, so on the off-chance I sent a mail to Chatterbox asking if they could assist. I was blown away when Chatterbox co-creator Stuart Battersby got in touch and kindly helped me work around the problem.

So, what were we able to say from our exercise? Read the story in Rapport, with an English version here. (The Afrikaans version is the better one, thanks to a great edit by my colleague Pieter Malan, who runs the paper’s Weekliks section.)

In a nutshell, what this project allowed us to do was document the incredible polarisation that took place on Twitter as this story developed.

We could see, for example, how in Johannesburg positive and negative tweets, which were running at roughly 1:1 before the painting was “defaced”, surged to a 7:1 ratio of positive to negative after it was “attacked”.

We could see the incredible shift in KwaZulu-Natal (also a stronghold of President Zuma) from negative to positive along the same timeline, and so on.
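The ratios themselves fall out of a simple grouped count. Here is a sketch against the same illustrative table, with a crude location match and a placeholder cut-off time; the real analysis needed more careful geo-handling than this.

```python
import sqlite3

# Placeholder cut-off, not the actual time of the incident
ATTACK_TIME = "2012-05-22 12:00:00"

db = sqlite3.connect("tweets.db")

def pos_neg_ratio(location, before):
    # Ratio of positive to negative tweets for one region, before or
    # after the cut-off; 'positive'/'negative' labels are illustrative
    op = "<" if before else ">="
    counts = dict(db.execute(
        "SELECT sentiment, COUNT(*) FROM tweets"
        " WHERE user_location LIKE ? AND created_at " + op + " ?"
        " GROUP BY sentiment", ("%" + location + "%", ATTACK_TIME)))
    return counts.get("positive", 0) / max(counts.get("negative", 0), 1)

for region in ("Johannesburg", "KwaZulu"):
    print(region, "before:", pos_neg_ratio(region, True),
          "after:", pos_neg_ratio(region, False))
```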

What is clear is that there was an alarming hardening of sentiment as this story unfolded and it would certainly add strength to the idea that some moderating leadership was urgently required.

The project was fascinating to me and I will definitely try and use this more often.

The only other examples I can find of this being used by the media are the LA Times, which used it to gauge public sentiment about Oscar nominees against actual winners, and Politico, which appears to be using it in a partnership with Facebook to look at the Republican presidential campaign. I’d love to know of other media examples that I might look at.

I also include a link to the full dataset (160MB) we created, as I think others may like to use it to crunch other questions. There is certainly potential for some interesting visualisations.
Beware though that, while the dataset is a pretty decent sample, it is by no means perfect. The tweets were pulled based on a number of keywords, so plenty will have slipped through the net, and my script did not run 24 hours a day for practical, logistical and technological reasons; in total it captures around 28-32 hours’ worth of Twitter “coverage” over the period.

Sentiment analysis has plenty of critics but I think it has potential in journalism.
