Is the code open source?
Unfortunately the code isn't open source. There are a few tutorials with open source code that have similar implementations to ours:
The data has been processed so that the emoticons are stripped off. It is in a regular CSV format. Each record has 6 fields:
- 0 - the polarity of the tweet (0 = negative, 2 = neutral, 4 = positive)
- 1 - the id of the tweet (2087)
- 2 - the date of the tweet (Sat May 16 23:58:44 UTC 2009)
- 3 - the query (lyx). If there is no query, then this value is NO_QUERY.
- 4 - the user that tweeted (robotickilldozr)
- 5 - the text of the tweet (Lyx is cool)
If you use this data, please cite Sentiment140 as your source.
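As a minimal sketch of reading this format, the snippet below parses one record with Python's standard csv module. The field names and the sample row are taken from the descriptions above; in practice you would pass the downloaded file to csv.reader instead of an in-memory string.

```python
import csv
import io

# The 6 fields of the data file, in order (names are ours, for convenience).
FIELDS = ["polarity", "id", "date", "query", "user", "text"]

# Sample row built from the example values in the field descriptions above.
sample = '"4","2087","Sat May 16 23:58:44 UTC 2009","lyx","robotickilldozr","Lyx is cool"\n'

def parse_tweets(lines):
    """Yield one dict per tweet, keyed by the field names above."""
    for row in csv.reader(lines):
        yield dict(zip(FIELDS, row))

tweet = next(parse_tweets(io.StringIO(sample)))
print(tweet["polarity"], tweet["user"], tweet["text"])
```

Note that the polarity values arrive as strings ("0", "2", "4"), so convert them with int() before using them as labels.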
Where is the tweet corpus for Spanish?
Unfortunately, we do not provide the Spanish data set yet.
What did you use to build this?
We built this using the following technologies:
- Twitter API
- Amazon EC2 (for the backend)
- Google Visualization API (for the annotated timeline)
- Google Charts API (for the pie and bar charts)
- Google Sites (for this documentation)
- Google Spreadsheets (for our feedback form)
- Blogger (for our blog)
- Google Gadgets
- Google Analytics
Thank you Twitter, Amazon, and Google.
Want to discuss ideas?
We have a special forum for discussing ideas here:
Do you have any project ideas?
If you are new to the field of sentiment analysis, we recommend reading the following by Pang and Lee:
There are still many unsolved problems in sentiment analysis. If you're interested, you can help us by working on one of the problems below.
- Building a classifier for subjective vs. objective tweets. We've focused mostly on classifying positive vs. negative correctly. We haven't looked at classifying tweets with sentiment vs. no sentiment very closely.
- Handling negation. Words like no, not, and never are difficult to handle properly.
- Relevant papers:
- Isaac G. Councill, Ryan McDonald, and Leonid Velikovich. 2010. What's great and what's not: learning to classify the scope of negation for improved sentiment analysis. [pdf]
- Potts, Christopher. 2010. On the negativity of negation. [pdf]
- Handling comparisons. Our bag-of-words model doesn't handle comparisons well. For example, given the phrase "Stanford is better than Berkeley", our model would consider the tweet positive for both Stanford and Berkeley, because it doesn't take into account each entity's relation to "better".
- The "aboutness" problem. Given a tweet, automatically detect if the sentiment is towards an entity.
- about the term [Google]: "I love Google."
- not about the term [Google]: "You should Google that."
- Relevant papers:
- Target-dependent Twitter Sentiment Classification [pdf]
- Detecting context switches. Sometimes tweets contain two different ideas, and it would be good to be able to segment them. Here's an example: "Just chomped my way through a massive apple, was pretty tasty. Now for work. Business revision."
- Building an accurate parser for tweets. Dependency parsers, like the Stanford Parser, don't handle ungrammatical text very well because they were trained on corpora like the Wall Street Journal. It would be great to develop a parser that can handle informal text better.
- Sarcasm detection.
- Topic classification for tweets.
- Tag clouds. Given a list of positive and negative tweets, what are the most meaningful words to put in a tag cloud?
- Applying sentiment analysis to Facebook messages. Facebook messages don't have the same character limitations as Twitter, so it's unclear if our methodology would work on Facebook messages.
- Internationalization. We focus only on English sentences, but Twitter has many international users. It should be possible to use our approach to classify sentiment in other languages.
- Sentiment as it relates to religion. Please contact Greg Troxell (gtroxell65 [at] gmail.com) if you're interested in this.
If you are interested in machine learning and natural language processing, you may be interested in the following books: