[Python] Sentiment analysis with NLTK

NLTK is a powerful Python library for text analysis. A common use case is performing sentiment analysis on a piece of text.

While the configuration can go very deep, here is a sample application to analyse the sentiment of online posts written in English. The first time you execute your program, NLTK will ask you to download the data packages it needs; for this example those are the stopwords corpus and the VADER lexicon.

 import nltk  
 from nltk.sentiment import SentimentIntensityAnalyzer  
 #run these once to fetch the required NLTK data  
 #nltk.download("stopwords")  
 #nltk.download("vader_lexicon")  
 #common words that have no impact on the overall analysis  
 stopwords = nltk.corpus.stopwords.words("english")  
 sia = SentimentIntensityAnalyzer()  
 #strip the STRING text down to relevant words only  
 #stopwords are provided in lowercase, so compare lowercased words  
 text = " ".join(word for word in STRING.split() if word.lower() not in stopwords)  
 #overall sentiment of this post, a compound score in the range [-1, 1]  
 sentiment = sia.polarity_scores(text)["compound"]  
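The compound score ranges from -1 (most negative) to 1 (most positive). A common convention for bucketing it uses a ±0.05 threshold; this is just a sketch, and the cutoffs are something you may want to tune for your data:

```python
def classify(compound):
    # thresholds commonly used with VADER; adjust for your use case
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(classify(0.7))  # positive
```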

An honorable mention goes to the pyenchant library, which uses Enchant spellchecker dictionaries and lets you check whether a string is a word in a given language. Additionally, it will match words with unusual casing:

 import enchant  
 english_words = enchant.Dict("en_US")  
 if english_words.check(STRING):  
  #STRING is an English word  
  ...  

[Python] Access Twitter tweets with Tweepy

To programmatically access Twitter tweets, you can use Python's Tweepy library.

You must login to Twitter, then apply for developer access. The process takes a while, since you will need to wait for Twitter to approve your request (and maybe reply to their emails with additional info); in general, for academic/testing purposes, approval takes about a week.

Once you're set, it is simply a matter of:

 import tweepy  
 twitter_auth = tweepy.AppAuthHandler(TWITTER_KEY, TWITTER_SECRET)  
 #twitter limits the polling frequency (about 900 requests/15 minutes), so we slow down automatically  
 twitter_api = tweepy.API(twitter_auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)  
 #default is 2 weeks of data; the optional since_id parameter is the ID of the last tweet you already have, polling resumes after it  
 #we are preparing the request here, not yet executing it  
 if since_id is None:  
  cur = tweepy.Cursor(twitter_api.search, count=TWEET_POLL_LIMIT, include_entities=False, result_type="recent", q=YOUR_HASHTAGS)  
 else:  
  cur = tweepy.Cursor(twitter_api.search, count=TWEET_POLL_LIMIT, include_entities=False, result_type="recent", q=YOUR_HASHTAGS, since_id=since_id)  
 #the tweets are retrieved here; pass a number to items() to cap the total, e.g. cur.items(TWEET_LIMIT)  
 #if we go over our quota, wait_on_rate_limit stalls us until the next window and wait_on_rate_limit_notify logs it  
 tweets = cur.items()  
 for tweet in tweets:  
  #do something with tweet  
  ...  


Some things to note:

Your query rate will likely be limited; currently the free tier has a quota of 900 requests per 15 minutes. If you go over that quota, your app will stall and can only resume after the quota is reset.


The output of the poll is a collection of tweets, each exposing its raw JSON; some notable fields:

  • id: unique ID of this tweet
  • created_at: a string representation of the datetime when this tweet was created
  • text: the body of the tweet
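As an illustration of the shape of that JSON, here is a stand-in dict mimicking a tweet's `_json` attribute; the values are invented placeholders, not real data:

```python
# stand-in for tweet._json, with placeholder values for illustration
tweet_json = {
    "id": 123456789,
    "created_at": "Wed Oct 10 20:19:24 +0000 2018",
    "text": "RT @someone: an example tweet body",
}

tweet_id = tweet_json["id"]
body = tweet_json["text"]
```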


Strangely, the created_at value is NOT an epoch but a human-readable string; to normalize it into a date string you can:

 from datetime import datetime  
 #date is the created_at string from the tweet  
 parsed = datetime.strftime(datetime.strptime(date, '%a %b %d %H:%M:%S +0000 %Y'), '%Y-%m-%d %H:%M:%S')  

And to convert that to epoch you can:

 import ciso8601  
 epoch = int(ciso8601.parse_datetime(parsed).timestamp())  
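The two steps can also be collapsed into one without extra libraries, since strptime understands the +0000 offset via the %z directive; the date below is just an example value:

```python
from datetime import datetime

created_at = "Wed Oct 10 20:19:24 +0000 2018"  # example created_at value
dt = datetime.strptime(created_at, "%a %b %d %H:%M:%S %z %Y")
epoch = int(dt.timestamp())
print(epoch)  # 1539202764
```

Because %z makes the datetime timezone-aware, the resulting epoch does not depend on the machine's local timezone.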

To verify if a tweet is a retweet you can check if it starts with RT @:

 tweet._json["text"].startswith("RT @")  

[Python] Access Reddit posts and comments with PRAW

To programmatically access posts and comments from a Reddit subreddit, you can use Python's PRAW library.

You must login to Reddit, then register your application to get a client ID and secret.

Once you're set, it is simply a matter of:

 import praw  
 reddit = praw.Reddit(  
  client_id=CLIENT_ID,  
  client_secret=CLIENT_SECRET,  
  user_agent=USER_AGENT,  
 )  
 submissions = reddit.subreddit("SUBREDDIT_NAME").new(limit=POST_LIMIT)  
 for submission in submissions:  
  for comment in submission.comments:  
   #do something with comment  
   ...  

Some things to note:

A post is represented by a submission object, some interesting attributes are:

  • created_utc: the UTC epoch when this post was created
  • author
  • score: all posts start with score 1 and it changes based on user votes
  • title
  • selftext: the body of the post


To verify if a post was deleted there is unfortunately no simple way, but in general you can consider this rule to be valid:

 submission.author is None or not submission.is_robot_indexable or submission.selftext == "[deleted]" or submission.selftext == "[removed]"  

Remember, however, that the [deleted] and [removed] strings are language dependent, so depending on your settings you might not get them in English.
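The rule can be wrapped in a small helper. The submission argument only needs the three attributes used, so this sketch runs against any object that carries them (the helper name is mine, not part of PRAW):

```python
def is_deleted(submission):
    # heuristic: author gone, post no longer indexable, or placeholder body
    return (
        submission.author is None
        or not submission.is_robot_indexable
        or submission.selftext in ("[deleted]", "[removed]")
    )
```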

A comment is represented by a comment object, its body is in the body attribute.

To verify if a comment was posted by a moderator (or a moderator bot), you can check:

 comment.distinguished == "moderator"  


[Python] Download file from URL

Here is a little code snippet useful for downloading files from a specific URL and saving them on the filesystem:

 import urllib.request  
 import shutil  
 with urllib.request.urlopen(URL_TO_FILE) as response, open(FILE_NAME, 'wb') as f:  
   shutil.copyfileobj(response, f)  

Opening the file in wb mode will create or OVERWRITE the file in binary mode.
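To illustrate the overwrite behaviour, and how to opt out of it: mode 'x' refuses to touch an existing file. The temporary path here is only for the demo:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")

with open(path, "wb") as f:   # creates the file
    f.write(b"first")
with open(path, "wb") as f:   # wb silently overwrites
    f.write(b"second")

try:
    with open(path, "xb") as f:  # xb raises if the file already exists
        f.write(b"third")
except FileExistsError:
    pass

with open(path, "rb") as f:
    print(f.read())  # b'second'
```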


Tesla Model 3 1000km test review

I was considering buying an electric car and Tesla is an obvious choice, so I decided to rent a Model 3 for a weekend and drove it about 1000km in order to evaluate it.

I am from Genova, Italy but live in Zurich, Switzerland, so my use case is a car that lets me enjoy weekend/day trips in Switzerland and can occasionally bring me to Genova comfortably, without charging stops adding too much time (more than 30 minutes) to the total trip.

With a common ICE car, the fastest one-way trip without any rest/fuel stop takes about 5 to 5.5 hours; on average it takes 6 to 6.5 hours due to queues (damn Gotthard), and rarely 7+ hours.

So what about an EV? Here is my review after a 960km round trip in two days.


[bash] Extract all distinct lines from file

Using a combination of cut and uniq, it is possible to extract all distinct values of the first ;-separated field from a given file. Note that uniq only collapses adjacent duplicates:

 cut -d\; -f1 file.txt | uniq  

To deduplicate across the whole file regardless of line order, sort first (or use sort -u):

 cut -d\; -f1 file.txt | sort | uniq  

[bash] Extract characters before marker from all lines in a file with sed

Using sed, it is possible to quickly keep only the characters before a given marker (dropping the marker and everything after it) on every line of a file:

 sed 's/MARKER.*//' file.txt  

For example, the line fooMARKERbar becomes foo.