You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.


Assignment 3

In this assignment you will explore text message data and create models to predict if a message is spam or not.

In [1]:
import pandas as pd
import numpy as np

spam_data = pd.read_csv('spam.csv')

spam_data['target'] = np.where(spam_data['target']=='spam',1,0)
spam_data.head(10)
Out[1]:
text target
0 Go until jurong point, crazy.. Available only ... 0
1 Ok lar... Joking wif u oni... 0
2 Free entry in 2 a wkly comp to win FA Cup fina... 1
3 U dun say so early hor... U c already then say... 0
4 Nah I don't think he goes to usf, he lives aro... 0
5 FreeMsg Hey there darling it's been 3 week's n... 1
6 Even my brother is not like to speak with me. ... 0
7 As per your request 'Melle Melle (Oru Minnamin... 0
8 WINNER!! As a valued network customer you have... 1
9 Had your mobile 11 months or more? U R entitle... 1
In [2]:
from sklearn.model_selection import train_test_split


X_train, X_test, y_train, y_test = train_test_split(spam_data['text'], 
                                                    spam_data['target'], 
                                                    random_state=0)
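
Note: with no test_size argument, train_test_split holds out 25% of the rows for testing. The call above is therefore equivalent to the explicit form below:

X_train, X_test, y_train, y_test = train_test_split(spam_data['text'],
                                                    spam_data['target'],
                                                    test_size=0.25,  # the scikit-learn default
                                                    random_state=0)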

Question 1

What percentage of the documents in spam_data are spam?

This function should return a float, the percent value (i.e. $ratio \times 100$).

In [3]:
def answer_one():
    # Fraction of messages labeled spam, expressed as a percent
    return len(spam_data[spam_data['target'] == 1]) / len(spam_data) * 100
In [4]:
answer_one()
Out[4]:
13.406317300789663
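
Since the target is already 0/1-encoded, the same percentage is just the column mean scaled by 100; a quick equivalent check:

spam_data['target'].mean() * 100  # fraction of spam messages, as a percent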

Question 2

Fit the training data X_train using a Count Vectorizer with default parameters.

What is the longest token in the vocabulary?

This function should return a string.

In [5]:
from sklearn.feature_extraction.text import CountVectorizer

def answer_two():
    vect = CountVectorizer().fit(X_train)
    # Return the vocabulary entry with the most characters
    return max(vect.get_feature_names(), key=len)
In [6]:
answer_two()
Out[6]:
'com1win150ppmx3age16subscription'
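
Note: get_feature_names was removed in scikit-learn 1.2. On newer versions the same vocabulary list comes from the replacement accessor:

tokens = vect.get_feature_names_out()  # scikit-learn >= 1.0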

Question 3

Fit and transform the training data X_train using a Count Vectorizer with default parameters.

Next, fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1. Find the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [7]:
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score

def answer_three():
    vect = CountVectorizer().fit(X_train)
    X_train_vectorized = vect.transform(X_train)
    
    model = MultinomialNB(alpha=0.1)
    model.fit(X_train_vectorized, y_train)
    
    predictions = model.predict(vect.transform(X_test))
    
    return roc_auc_score(y_test, predictions)
In [8]:
answer_three()
Out[8]:
0.97208121827411165
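
Note that answer_three computes AUC over the hard 0/1 predictions. roc_auc_score more commonly takes continuous scores; a sketch of that variant, reusing the fitted vect and model from inside answer_three:

probs = model.predict_proba(vect.transform(X_test))[:, 1]  # P(spam) for each message
roc_auc_score(y_test, probs)  # ranks messages by score instead of thresholding them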

Question 4

Fit and transform the training data X_train using a Tfidf Vectorizer with default parameters.

What 20 features have the smallest tf-idf and what 20 have the largest tf-idf?

Put these features in two series, each sorted by tf-idf value and then alphabetically by feature name. The index of each series should be the feature names, and the data should be the tf-idf values.

The series of 20 features with the smallest tf-idfs should be sorted smallest first; the series of 20 features with the largest tf-idfs should be sorted largest first.

This function should return a tuple of two series (smallest tf-idfs series, largest tf-idfs series).

In [9]:
from sklearn.feature_extraction.text import TfidfVectorizer

def answer_four():
    vect = TfidfVectorizer().fit(X_train)
    X_train_vectorized = vect.transform(X_train)
    feature_names = np.array(vect.get_feature_names())
    
    # Highest tf-idf each feature reaches in any training document
    max_tfidf = X_train_vectorized.max(0).toarray()[0]
    sorted_tfidf_index = max_tfidf.argsort()
    
    # 20 smallest values; alphabetize the features tied at the minimum
    small_index = feature_names[sorted_tfidf_index[:20]]
    small_value = np.sort(max_tfidf)[:20]
    small_final_index = np.concatenate((np.sort(small_index[small_value == min(small_value)]),
                                        small_index[small_value != min(small_value)]))
    
    # 20 largest values; alphabetize the features tied at the maximum
    large_index = feature_names[sorted_tfidf_index[:-21:-1]]
    large_value = np.sort(max_tfidf)[:-21:-1]
    large_final_index = np.concatenate((np.sort(large_index[large_value == max(large_value)]),
                                        large_index[large_value != max(large_value)]))
    
    small = pd.Series(small_value, index=small_final_index)
    large = pd.Series(large_value, index=large_final_index)
    
    return (small, large)
In [10]:
answer_four()
Out[10]:
(aaniye          0.074475
 athletic        0.074475
 chef            0.074475
 companion       0.074475
 courageous      0.074475
 dependable      0.074475
 determined      0.074475
 exterminator    0.074475
 healer          0.074475
 listener        0.074475
 organizer       0.074475
 pest            0.074475
 psychiatrist    0.074475
 psychologist    0.074475
 pudunga         0.074475
 stylist         0.074475
 sympathetic     0.074475
 venaam          0.074475
 diwali          0.091250
 mornings        0.091250
 dtype: float64, 146tf150p    1.000000
 645          1.000000
 anything     1.000000
 anytime      1.000000
 beerage      1.000000
 done         1.000000
 er           1.000000
 havent       1.000000
 home         1.000000
 lei          1.000000
 nite         1.000000
 ok           1.000000
 okie         1.000000
 thank        1.000000
 thanx        1.000000
 too          1.000000
 where        1.000000
 yup          1.000000
 tick         0.980166
 blank        0.932702
 dtype: float64)
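
The concatenation in answer_four alphabetizes only the features tied at the minimum or maximum value. A more general sketch that breaks every tf-idf tie alphabetically, assuming the X_train_vectorized matrix and feature_names array from inside answer_four:

max_tfidf = X_train_vectorized.max(0).toarray()[0]  # max tf-idf per feature
df = pd.DataFrame({'feature': feature_names, 'tfidf': max_tfidf})
smallest = df.sort_values(['tfidf', 'feature'], ascending=[True, True])[:20].set_index('feature')['tfidf']
largest = df.sort_values(['tfidf', 'feature'], ascending=[False, True])[:20].set_index('feature')['tfidf']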

Question 5

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 3.

Then fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1 and compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [11]:
def answer_five():
    
    vect = TfidfVectorizer(min_df=3).fit(X_train)
    X_train_vectorized = vect.transform(X_train)
    
    model = MultinomialNB(alpha=0.1)
    model.fit(X_train_vectorized, y_train)
    
    predictions = model.predict(vect.transform(X_test))
    
    return roc_auc_score(y_test, predictions)
In [12]:
answer_five()
Out[12]:
0.94162436548223349

Question 6

What is the average length of documents (number of characters) for not spam and spam documents?

This function should return a tuple (average length not spam, average length spam).

In [13]:
def answer_six():
    # Mean number of characters per message, split by class
    length_nspam = np.mean(list(map(len, spam_data['text'][spam_data.target == 0])))
    length_spam = np.mean(list(map(len, spam_data['text'][spam_data.target == 1])))
    return (length_nspam, length_spam)
In [14]:
answer_six()
Out[14]:
(71.023626943005183, 138.8661311914324)
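
The same two averages fall out of one vectorized groupby, since str.len gives per-message character counts:

spam_data['text'].str.len().groupby(spam_data['target']).mean()  # index 0 = not spam, 1 = spam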



The following function has been provided to help you combine new features into the training data:

In [15]:
def add_feature(X, feature_to_add):
    """
    Returns sparse feature matrix with added feature.
    feature_to_add can also be a list of features.
    """
    from scipy.sparse import csr_matrix, hstack
    return hstack([X, csr_matrix(feature_to_add).T], 'csr')
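
For example, appending each document's character count as one extra column (an illustrative call, assuming a fitted vectorizer named vect):

X_aug = add_feature(vect.transform(X_train), X_train.str.len())
# X_aug is the document-term matrix with one additional column stacked on the right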

Question 7

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5.

Using this document-term matrix and an additional feature, the length of document (number of characters), fit a Support Vector Classification model with regularization C=10000. Then compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [16]:
from sklearn.svm import SVC

def answer_seven():
    vect = TfidfVectorizer(min_df=5).fit(X_train)
    X_train_vectorized = vect.transform(X_train)
    X_train_vectorized_length = add_feature(X_train_vectorized, X_train.str.len())
    
    X_test_vectorized = vect.transform(X_test)
    X_test_vectorized_length = add_feature(X_test_vectorized, X_test.str.len())
    
    model = SVC(C=10000)
    model.fit(X_train_vectorized_length, y_train)
    predictions = model.predict(X_test_vectorized_length)    
    
    return roc_auc_score(y_test, predictions)
In [17]:
answer_seven()
Out[17]:
0.95813668234215565
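
As in Question 3, the AUC here is computed on thresholded predictions. SVC only exposes predict_proba when constructed with probability=True, but its decision_function returns continuous margins that roc_auc_score accepts directly; a sketch reusing answer_seven's fitted model:

scores = model.decision_function(X_test_vectorized_length)  # signed distance from the margin
roc_auc_score(y_test, scores)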

Question 8

What is the average number of digits per document for not spam and spam documents?

This function should return a tuple (average # digits not spam, average # digits spam).

In [18]:
def answer_eight():
    import re
    # Digit characters found in each message, split by class
    spam = [re.findall("[0-9]", i) for i in spam_data['text'][spam_data.target == 1]]
    nspam = [re.findall("[0-9]", i) for i in spam_data['text'][spam_data.target == 0]]
    
    avg_spam = np.mean(list(map(len, spam)))
    avg_nspam = np.mean(list(map(len, nspam)))
    return (avg_nspam, avg_spam)
In [19]:
answer_eight()
Out[19]:
(0.29927461139896372, 15.759036144578314)
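
pandas' vectorized string methods give the same averages without the explicit loop (assuming only ASCII digits appear in the corpus, \d matches the same characters as [0-9]); the identical pattern with r'\W' answers Question 10 below:

digit_counts = spam_data['text'].str.count(r'\d')  # digits per message
digit_counts.groupby(spam_data['target']).mean()   # index 0 = not spam, 1 = spam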

Question 9

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using word n-grams from n=1 to n=3 (unigrams, bigrams, and trigrams).

Using this document-term matrix and the following additional features:

  • the length of document (number of characters)
  • number of digits per document

fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [20]:
from sklearn.linear_model import LogisticRegression

def answer_nine():
    vect = TfidfVectorizer(min_df=5, ngram_range=(1, 3)).fit(X_train)
    
    # Document-term matrix plus two appended columns: character length and digit count
    X_train_vectorized = vect.transform(X_train)
    X_train_vectorized_length = add_feature(X_train_vectorized, X_train.str.len())
    num_digits_X_train = X_train.apply(lambda x: sum(a.isdigit() for a in x))
    X_train_vectorized_final = add_feature(X_train_vectorized_length, num_digits_X_train)
    
    X_test_vectorized = vect.transform(X_test)
    X_test_vectorized_length = add_feature(X_test_vectorized, X_test.str.len())
    num_digits_X_test = X_test.apply(lambda x: sum(a.isdigit() for a in x))
    X_test_vectorized_final = add_feature(X_test_vectorized_length, num_digits_X_test)
    
    model = LogisticRegression(C=100)
    model.fit(X_train_vectorized_final, y_train)
    
    predictions = model.predict(X_test_vectorized_final)
    
    return roc_auc_score(y_test, predictions)
In [21]:
answer_nine()
Out[21]:
0.96533283533945646

Question 10

What is the average number of non-word characters (anything other than a letter, digit or underscore) per document for not spam and spam documents?

Hint: Use \w and \W character classes

This function should return a tuple (average # non-word characters not spam, average # non-word characters spam).

In [22]:
def answer_ten():
    import re
    # Non-word characters (\W = anything but a letter, digit, or underscore) per message
    spam = [re.findall(r"\W", i) for i in spam_data['text'][spam_data.target == 1]]
    nspam = [re.findall(r"\W", i) for i in spam_data['text'][spam_data.target == 0]]
    
    avg_spam = np.mean(list(map(len, spam)))
    avg_nspam = np.mean(list(map(len, nspam)))
    
    return (avg_nspam, avg_spam)
In [23]:
answer_ten()
Out[23]:
(17.291813471502589, 29.041499330655956)

Question 11

Fit and transform the training data X_train using a Count Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using character n-grams from n=2 to n=5.

To tell Count Vectorizer to use character n-grams pass in analyzer='char_wb' which creates character n-grams only from text inside word boundaries. This should make the model more robust to spelling mistakes.

Using this document-term matrix and the following additional features:

  • the length of document (number of characters)
  • number of digits per document
  • number of non-word characters (anything other than a letter, digit or underscore.)

fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.

Also find the 10 smallest and 10 largest coefficients from the model and return them along with the AUC score in a tuple.

The list of 10 smallest coefficients should be sorted smallest first, the list of 10 largest coefficients should be sorted largest first.

The three features that were added to the document-term matrix should have the following names, should they appear in the list of coefficients: ['length_of_doc', 'digit_count', 'non_word_char_count']

This function should return a tuple (AUC score as a float, smallest coefs list, largest coefs list).

In [24]:
def answer_eleven():
    from sklearn.feature_extraction.text import CountVectorizer
    vectorizer = CountVectorizer(min_df=5, analyzer='char_wb', ngram_range=(2, 5))
    X_train_transformed = vectorizer.fit_transform(X_train)
    
    # Three extra columns: character length, digit count, non-word-character count
    X_train_transformed_with_length = add_feature(X_train_transformed,
                                                  [X_train.str.len(),
                                                   X_train.apply(lambda x: sum(a.isdigit() for a in x)),
                                                   X_train.str.findall(r'\W').str.len()])

    X_test_transformed = vectorizer.transform(X_test)
    X_test_transformed_with_length = add_feature(X_test_transformed,
                                                 [X_test.str.len(),
                                                  X_test.apply(lambda x: sum(a.isdigit() for a in x)),
                                                  X_test.str.findall(r'\W').str.len()])
    
    
    model = LogisticRegression(C=100)
    model.fit(X_train_transformed_with_length,y_train)
    
    predictions = model.predict(X_test_transformed_with_length)
    feature_names = np.array(vectorizer.get_feature_names() + ['length_of_doc', 'digit_count', 'non_word_char_count'])
    
    # argsort ascending: most negative coefficients first, most positive last
    sorted_coef_index = model.coef_[0].argsort()
    small_coefficient = list(feature_names[sorted_coef_index[:10]])
    large_coefficient = list(feature_names[sorted_coef_index[:-11:-1]])
    
    return roc_auc_score(y_test, predictions), small_coefficient, large_coefficient
In [25]:
answer_eleven()
Out[25]:
(0.97885931107074342,
 ['. ', '..', '? ', ' i', ' y', ' go', ':)', ' h', 'go', ' m'],
 ['digit_count', 'ne', 'ia', 'co', 'xt', ' ch', 'mob', ' x', 'ww', 'ar'])
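
To see what analyzer='char_wb' actually produces, here is a minimal illustrative run: each word is padded with a space on either side before the character n-grams are extracted, which is why the features tolerate in-word misspellings:

demo = CountVectorizer(analyzer='char_wb', ngram_range=(2, 2)).fit(['spam'])
demo.get_feature_names()  # [' s', 'am', 'm ', 'pa', 'sp'] -- the bigrams of ' spam '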