Thursday, May 24, 2018

Vector Space, the Final Frontier

Given vectors that represent words, how do we construct sentences? Do we add the vectors? Do we find centroids? Do we normalize before, after or not at all?

In fact, can we even say we are dealing with a vector space?

Remember, a vector space has the following 8 properties:
  • Identity elements (for both addition and scalar multiplication)
  • Distributivity (of scalar multiplication over vector addition and over scalar addition)
  • Commutativity and associativity of addition, plus an additive inverse
  • Compatibility of scalar multiplication: (ab)v = a(bv)
But as pointed out on StackOverflow:

"The commutativity property of vector addition does not always hold in semantics. Therefore, this property shouldn't (always) hold in the embedding space either. Thus, the embedding space should not be called a vector space.

E.g. attempt to treat semantic composition as vector addition in the vector space:

v(rescue dog) = v(rescue) + v(dog) (a dog which is trained to rescue people)

v(dog rescue) = v(dog) + v(rescue) (the operation of saving a dog)

The phrases "rescue dog" and "dog rescue" mean different things, but in our hypothetical vector space, they would (incorrectly) have the same vector representation (due to commutativity).

Similarly for the associativity property."

At the same time, erroneous assumptions are not necessarily unacceptable (as the post points out). It's just a high-bias model.
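The commutativity problem is easy to demonstrate with toy vectors (invented here purely for illustration):

```python
# Hypothetical 3-dimensional word vectors, invented for illustration.
v_rescue = [0.9, 0.1, 0.3]
v_dog    = [0.2, 0.8, 0.5]

def add(u, v):
    """Compose a phrase vector by element-wise addition."""
    return [a + b for a, b in zip(u, v)]

# "rescue dog" and "dog rescue" mean different things,
# but vector addition cannot tell them apart:
assert add(v_rescue, v_dog) == add(v_dog, v_rescue)
```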

Different Spaces

For fun, I tried the vectors that Word2Vec gave me. Now, there is no reason I could think of why the vectors this algorithm gives me for words should be used to form a sentence. But the results were surprising.

Description                             Accuracy (%)
Raw vector addition                     81.0
Normalized vector addition              27.9
Raw vector centroids                    7
Raw vector addition then normalizing    7

That is, adding together Word2Vec generated word vectors to make a sentence meant my neural net produced decent (but not stellar) results.
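For concreteness, the composition strategies in the table can be sketched in pure Python (the toy 2-dimensional vectors are invented; real Word2Vec vectors would have hundreds of dimensions):

```python
import math

def add_vectors(vectors):
    """Raw addition: sum the word vectors element-wise."""
    return [sum(xs) for xs in zip(*vectors)]

def centroid(vectors):
    """Centroid: the element-wise mean of the word vectors."""
    total = add_vectors(vectors)
    return [x / len(vectors) for x in total]

def normalize(v):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Toy 2-dimensional "word vectors" for a 3-word sentence.
sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

print(add_vectors(sentence))             # sum of the word vectors
print(centroid(sentence))                # same direction, smaller magnitude
print(normalize(add_vectors(sentence)))  # unit length, same direction
```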

More promising was combining vectors from category space. The results looked like this:

Category Space
Description                                   Accuracy (%)
Normalized vector addition                    94.0
Normalized vector centroids                   92.1
Adding unnormalized vectors                   6.2
Normalized vector addition then normalizing   5.3

Finally, concatenating the word vectors for a sentence (truncating if there were more than 10 words per text and padding if there were fewer) and feeding the result into an ANN produced an accuracy of 94.2%. Naive Bayes and Random Forest gave similar results (92.3% and 93.3% respectively).
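That concatenate-truncate-pad scheme can be sketched in pure Python (the 10-word cap comes from the post; the 3-dimensional toy vectors are invented):

```python
def concat_fixed(word_vectors, max_words=10, dim=3):
    """Concatenate word vectors, truncating to max_words words and
    zero-padding shorter sentences, so every sentence yields a
    feature vector of the same length (max_words * dim)."""
    clipped = word_vectors[:max_words]
    padded = clipped + [[0.0] * dim] * (max_words - len(clipped))
    return [x for v in padded for x in v]

# Two toy sentences with 3-dimensional word vectors.
short = [[1.0, 2.0, 3.0]]                    # 1 word  -> padded
long_ = [[float(i)] * 3 for i in range(12)]  # 12 words -> truncated

assert len(concat_fixed(short)) == 30
assert len(concat_fixed(long_)) == 30
```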

Note: multiplying each vector by a factor that was between 0.5 and 1.5 made no difference to the accuracy of an ANN. Weights will simply change accordingly.

It seems like an ANN can take data in pretty much any format (although it's better when we're using variations on the category space rather than TF-IDF).


When I asked a Data Science PhD I work with which technique I should use (adding vectors, finding centroids, concatenating vectors, etc), his answer was: "Yes".

Wednesday, May 23, 2018

Making the gradient

Gradient Descent is perhaps the simplest means of finding minima. The maths is pretty easy. We are trying to minimize the sum of squared errors of our target value and our algorithm's output. Let J(w) be our cost function which is a function of weights w and is based on the SSE. Then

J(w) = (1/2) Σ_i (target_i - output_i)^2

where i is the index of a sample in our dataset and we divide by 2 for convenience (it will soon disappear).

Say that

f(w) = target - output = y - w^T . x

then using the chain rule, we can calculate:

dJ / dw = (dJ / df) . (df / dw)

if η is our step value, then Δw_j, the value by which we adjust w_j, is

Δw_j ≈ -η (∂J / ∂w_j)
     = -η Σ_i (target_i - output_i)(-x_ij)
     = η Σ_i (target_i - output_i) x_ij

the negative coming from the fact that we want to work in the opposite direction of the gradient as we are finding the minimum. As you can see, it cancels out the leading negative and note, too, that the divide-by-2 has also disappeared.

Now, the difference between GD and Stochastic Gradient Descent comes in how we implement the algorithm. In GD, we calculate wj for all i before moving to the next position. In SGD, we calculate wj for each sample, i, and then move on (see Quora).
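The two update schemes can be sketched in a few lines of Python. This is a toy illustration of the update rule derived above, not anyone's production code: the data, the learning rate η = 0.05 and the iteration count are all invented.

```python
def predict(w, x):
    """Linear model output: w . x"""
    return sum(wj * xj for wj, xj in zip(w, x))

def gd_step(w, xs, ys, eta):
    """Batch GD: accumulate the gradient over ALL samples, then update once."""
    grad = [0.0] * len(w)
    for x, y in zip(xs, ys):
        err = y - predict(w, x)      # (target_i - output_i)
        for j in range(len(w)):
            grad[j] += err * x[j]    # η Σ_i (target_i - output_i) x_ij
    return [wj + eta * gj for wj, gj in zip(w, grad)]

def sgd_step(w, xs, ys, eta):
    """SGD: update the weights after EVERY sample."""
    for x, y in zip(xs, ys):
        err = y - predict(w, x)
        w = [wj + eta * err * xj for wj, xj in zip(w, x)]
    return w

# Toy data generated by w = (1, 2); the first feature acts as a bias.
xs = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
ys = [3.0, 5.0, 7.0]

w_gd, w_sgd = [0.0, 0.0], [0.0, 0.0]
for _ in range(2000):
    w_gd = gd_step(w_gd, xs, ys, eta=0.05)
    w_sgd = sgd_step(w_sgd, xs, ys, eta=0.05)
print(w_gd, w_sgd)   # both approach (1, 2); SGD gets there via noisier updates
```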

Note that in TensorFlow you can achieve SGD by using a mini-batch size of 1 (StackOverflow).

Also note "It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize" (from here).

Gradient Descent vs. Newton's Methods

"Gradient descent maximizes a function using knowledge of its derivative. Newton's method, a root finding algorithm, maximizes a function using knowledge of its second derivative" (from StackOverflow).

"More people should be using Newton's method in machine learning... Of course, Newton's method will not help you with L1 or other similar compressed sensing/sparsity promoting penalty functions, since they lack the required smoothness" (ibid). The post includes a good discussion about the best way to find minima and how GD compares to Newton's method.

LBFGS and Spark

Spark's default solver for a MultilayerPerceptronClassifier uses lbfgs (a memory efficient version of BFGS) which at least for my dataset is both fast and accurate. The accuracy looks like:

Algorithm   Iterations   Batch Size     Accuracy (%)
gd          1000         1 (ie, SGD)    88.5

For my configuration (10 executors each with 12gb of memory and 3 cores plus a driver with 20gb of memory), l-bfgs was several times faster.

Interestingly, "there used to be l-BFGS implemented in standard TensorFlow but it was deleted because it never got robust enough. Because tensorflow distributed framework is quite low-level so people sometimes get surprised by how much work it takes to get something robust" (GitHub).

Monday, May 7, 2018

Data Wrangling in Textual Classification

You get out what you put in

Since all my machine learning models were giving me similar but modest accuracy, I looked at playing with the data that is fed into them. This made an enormous difference to the accuracy achieved.

Just a reminder of my results, the top five models were giving an accuracy of about 81 +/- 5%. They all used a TF-IDF to create a DTM (document-term matrix) where "the entire document is thus a feature vector, and each feature vector corresponds to a point in a vector space" (StackOverflow).

There was very little improvement using bagging (although, admittedly, I am told that as a rule of thumb one should use as many different models in bagging as one has classes and this I did not do).

The great leap forward came in representing the training data instead as a term-class matrix. In each case, we used a bag-of-words approach on the corpus ("bag-of-words refers to what kind of information you can extract from a document, namely unigram words" - ibid).


Firstly, I tried using Spark's Word2Vec implementation, a word-embedding technique: "a means of building a low-dimensional vector representation from corpus of text, which preserves the contextual similarity of words" (Quora).

[If you read much about Word2Vec, you will see the expression distributed vector used frequently. "In distributed representations of words and paragraphs, the information about the word is distributed all along the vector. So, every position in the vector may be non-zero for a given word.  Contrast this with the one-hot encoding of words, where the representation of a word is all 0s except for a 1 in one position for that word" (Quora).]

Vered Shwartz says: "It was common to apply dimensionality reduction to the word co-occurrence matrix (count based vectors) to get low-dimensional word vectors. They perform similarly or slightly worse than word embeddings (it depends who you ask...)". She then cites two different papers that found radically different results.

I tried using Word2Vec embeddings on a Spark neural net for the "20 Newsgroups" data but only got the monkey score (5.02%) with lots of "StrongWolfeLineSearch: Encountered bad values in function evaluation. Decreasing step size to ..." WARNings originating in optimize.LBFGS code. I do not yet know why it is complaining (UPDATE: I was normalizing the resulting sentence vector even though the word vectors had already been normalized)

Interestingly, Random Forest gave me 82.6% on exactly the same data. Nice, but not a massive improvement, so I concluded using Word2Vec did not significantly affect the results.

Class-Term Matrix

Using a matrix where the rows are the classes rather than the documents and the columns are still word vectors (representing the probability of this word being in a given class) resulted in much better predictions. Spark's neural nets, random forest and naive Bayes all achieved an accuracy of about 93 +/- 0.5%.

Note, the words were counted manually without using Spark's IDF class. Because of a lack of collisions using this technique, about 200 more words were captured (about 2% extra) but this did not seem to make any difference. The difference appears to entirely be due to adding the word vectors (with only 20 elements) together for each sentence in the vector space model.

Also note that the columns (ie, the terms) were all normalized (rather than scaled or standardized; see here for the differences). That is, they represented the (frequentist's) probability of that term being in any given class. If these vectors were not normalized (ie, if they remained the count of that term being in any given class) then the accuracy of the neural net dropped back down to about 81%.
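The class-term construction with column normalization can be sketched in pure Python (the two-class toy corpus is invented for illustration; the real matrix had 20 classes):

```python
from collections import Counter

# Rows are classes, columns are terms; each COLUMN is normalized so
# its entries are the probability of the term appearing in each class.
corpus = [
    ("sport",    "ball match goal ball"),
    ("politics", "vote election vote goal"),
]

counts = {label: Counter(text.split()) for label, text in corpus}
vocab = sorted({term for c in counts.values() for term in c})

def column(term):
    """The term's count in each class, normalized to sum to 1."""
    raw = [counts[label][term] for label, _ in corpus]
    total = sum(raw)
    return [x / total for x in raw]

matrix = {term: column(term) for term in vocab}
print(matrix["ball"])  # [1.0, 0.0]: "ball" only ever appears in "sport"
print(matrix["goal"])  # [0.5, 0.5]: "goal" appears once in each class
```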

Finally, using this representation of the feature vectors led to the KMeans algorithm giving more sensible results (an average error per cluster of 86.4%).


It was explained to me (by a machine learning PhD) that the improvement in accuracy that comes with a term-class matrix is a result of using an input model that better approximates what you want as an output. "TF-IDF combines the importance-per-document and the importance-per-corpus introducing other information not necessarily supporting your classification problem directly," he said. "TF-IDF was developed for Information Retrieval."

Tuesday, May 1, 2018

Wood for the trees

I'd heard Random Forests generally give good results but the accuracy I was seeing was bad. After some head scratching, things improved enormously.

What are Random Forests?

"A random forest is simply a collection of decision trees whose results are aggregated into one final result. Their ability to limit overfitting without substantially increasing error due to bias is why they are such powerful models" (from TowardsDataScience).

Spark's RandomForestClassifier

The matrix for the 20 Newsgroups data was generated using TF-IDF. Interestingly, if all data is used, the accuracy from Spark's RF implementation is the monkey score (roughly 5%).

However, if we insist that a word must appear in at least 20 documents the accuracy was approximately 25%.

With minDocFreq=1, changing the Random Forest hyperparameters achieved only a limited increase in accuracy; setting the number-of-trees hyperparameter to [10, 100, 1000] gave accuracies of [10.3, 27.3, 38.3]%.

Removing columns with no non-zero values in them (as outlined here) managed to increase the accuracy to between 42% and 48%. But no matter what hyperparameter tuning I did, there it stayed.

What using minDocFreq=25 and removing empty columns had in common was that both resulted in TF-IDF matrices that were less sparse.

"There is no [Random Forest] implementation for sparse data in R. Partially because RF does not fit very well on this type of problem -- bagging and suboptimal selection of splits may waste most of the model insight on zero-only areas.  Try some kernel method or better think of converting your data into some more lush representation with some descriptors (or use some dimensionality reduction method)." (from StackOverflow).

So, let's go all the way and use Singular Value Decomposition. This reduced my TF-IDF matrix from about 10,000 features to a mere 400. This figure of 400 was found by a manual binary search that attempted to maximize the accuracy. Interestingly, the eigenvalue "elbow" could be seen at around this point in the scree plot.

After this (and some more cross validation that said my tree size should be about 190), accuracy was reaching about 81%.
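The reduction step can be sketched with numpy's SVD (a hand-rolled illustration, not the Spark code used in the post; the matrix here is random noise and k=40 is arbitrary, standing in for the 400 found by binary search):

```python
import numpy as np

# Reduce a (documents x terms) TF-IDF matrix with a truncated SVD,
# keeping only the top k singular values/vectors.
rng = np.random.default_rng(0)
tfidf = rng.random((100, 1000))   # 100 documents, 1000 terms

k = 40
U, s, Vt = np.linalg.svd(tfidf, full_matrices=False)
reduced = U[:, :k] * s[:k]        # documents in the k-dimensional space

assert reduced.shape == (100, 40)
```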

A Matter of Interpretation

One very nice thing about tree models is that they're easy to interpret compared to, say, neural nets.

Spark gives a very nice method (featureImportances) that tells you which features were the most important when generating the model.

Gradient Boosted Trees

Note that Spark's facility for boosting trees is currently limited. In Spark, "GBTs do not yet support multiclass classification. For multiclass problems, please use decision trees or Random Forests." (from the docs).

Friday, April 13, 2018

She's a model

...but is she looking good? (With apologies to Kraftwerk).

"No single algorithm performs best or worst. This is wisdom known to machine learning practitioners, but difficult to grasp for beginners in the field. There is no silver bullet and you must test a suite of algorithms on a given dataset to see what works best." (MachineLearningMastery).

This blog goes on to quote from a report that compares models and data sets: "no one ML algorithm performs best across all 165 datasets. For example, there are 9 datasets for which Multinomial NB performs as well as or better than Gradient Tree Boosting, despite being the overall worst- and best-ranked algorithms, respectively".

Another comparison is made here as a data scientist reproduces part of her PhD using modern tools. Interestingly, Naive Bayes this time is a close second to SVMs (a result echoed here when using Python).

For my part, I took the "20 Newsgroups" data set and fed it into Knime's Document Classification Example. (I like this data set as it's close to my proprietary data set). With almost no change, my results for the "subjects" part of the data was:

Decision Tree   78%
Naive Bayes     70%

Boosting generally reduced all models' accuracy by 5-10%. So did removing just the punctuation rather than the more sophisticated massaging of the data done in the example.

Interestingly, the results on the full corpus (not just the subject text) were only about half as good. This could be because Knime could not store everything in memory.

Note that this Knime example only uses the top words in the corpus as the TF-IDF matrix would be far too big to fit into memory otherwise. Here, Spark has the advantage of being able to easily process the full matrix (Naive Bayes for instance scores about 85% there). So, these results should constitute a minimum of what we can achieve in Spark.

In Spark, I first cleared the text of all punctuation with the code in this StackOverflow suggestion, that is:

s.replaceAll("""[\p{Punct}&&[^.]]""", "")

and then ran the ML pipeline of [Tokenizer, StopWordsRemover, NGram, HashingTF, IDF, Normalizer]. The results looked like:

NaiveBayes                       85.1%   Very fast (1 minute or less)
MultilayerPerceptronClassifier   80.6%   Took about 30 minutes with layer sizes (262144, 100, 80, 20); 262144 is the (uncompressed) TF-IDF vector size
RandomForestClassifier           79.6%   For numTrees=190, maxDepth=20 but after SVD with n=400
Logistic Regression              76.1%   SVD with n=400
DeepLearning4J on Spark          72.9%   After 227 epochs taking 6 hours on 15 executors with 1 core each
RandomForestClassifier           53.7%   For numTrees=190, maxDepth=30
RandomForestClassifier           48%     For numTrees=1000
GBTRegressor                     -       "Note: GBTs do not yet support multiclass classification"
LinearSVC                        -       "LinearSVC only supports binary classification."

The table is a little unfair as I spent a disproportionate amount of time tuning the models. And, as ever, YMMV (your mileage may vary). These are the results for my data, yours will probably look very different.

Interestingly, creating a hand-made ensemble of NaiveBayes, MultilayerPerceptronClassifier and RandomForestClassifier didn't improve matters. The result on these 3 models trained on the same data and voting on the test data gave an accuracy of 81.0%.

Finally, there were two algorithms that I've mentioned before that were not part of this work but I'll include them for completeness:


Ensemble Cast

So, taking five Spark models (LinearSVC, NaiveBayes, MultilayerPerceptron, RandomForestClassifier and LogisticRegression), we can take the results and, using joinWith and map, weave the DataFrames together and let them vote on which category a given subject should be in.

Unfortunately, this roll-your-own bagging did not provide significantly better results. The overall accuracy of 86.029% was a mere 0.022% better than the best standalone model.

Wednesday, April 11, 2018

Python crib sheet #1

Some notes I've been making as I learn Python.


The __init__.py file is "automatically executed by Python the first time a package or subpackage is loaded. This permits whatever package initialization you may desire. Python requires that a directory contain an __init__.py file before it can be recognized as a package. This prevents directories containing miscellaneous Python code from being accidentally imported as if they defined a package.

"The second point is probably the more important. For many packages, you won’t need to put anything in the package’s __init__.py file—just make sure an empty __init__.py file is present." [1]

"Note that there’s no recursive importing of names with a from ... import * statement." [1]

"A two-level hierarchy should be able to effectively handle all but a few of the rest. As written in the Zen of Python, by Tim Peters, “Flat is better than nested.”" [1]


In general, single underscores are a convention, double has a meaning (name mangling).
"Names, in a class, with a leading underscore are simply to indicate to other programmers that the attribute or method is intended to be private. However, nothing special is done with the name itself.

"Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped...  Name mangling is intended to give classes an easy way to define “private” instance variables and methods, without having to worry about instance variables defined by derived classes, or mucking with instance variables by code outside the class. Note that the mangling rules are designed mostly to avoid accidents; it still is possible for a determined soul to access or modify a variable that is considered private." (StackOverflow)
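A minimal demonstration of both conventions (the class and attribute names are invented):

```python
class Account:
    def __init__(self):
        self._hint = "private by convention"   # single underscore: NOT mangled
        self.__secret = "mangled"              # becomes _Account__secret

a = Account()
assert a._hint == "private by convention"
assert a._Account__secret == "mangled"   # the mangled name is still reachable
# a.__secret outside the class would raise AttributeError
```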

A special example is the __getitem__ special method attribute. "A solution is to use the __getitem__ special method attribute, which you can define in any user-defined class, to enable instances of that class to respond to list access syntax and semantics." [1]
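For example, a (made-up) class whose instances respond to indexing by computing a value on demand:

```python
class Squares:
    """Respond to list-style access by computing squares on demand."""
    def __getitem__(self, index):
        return index * index

s = Squares()
assert s[3] == 9
assert s[10] == 100
```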


"Python offers four kinds of numbers: integers, floats, complex numbers, and Booleans.

"An integer constant is written as an integer—0, –11, +33, 123456—and has unlimited range, restricted only by the resources of your machine.

"A float can be written with a decimal point or using scientific notation: 3.14, –2E-8, 2.718281828. The precision of these values is governed by the underlying machine but is typically equal to double (64-bit) types in C.

"Complex numbers are probably of limited interest and are discussed separately later in the section. Booleans are either True or False and behave identically to 1 and 0 except for their string representations." [1, p40]

Pythonic manipulation of lists

For flatten, see this StackOverflow answer.

flat_list = [item for sublist in l for item in sublist]

which means:

flat_list = []
for sublist in l:
    for item in sublist:
        flat_list.append(item)

where l is the structure we want to flatten.

Note that the += you'd find in Scala is not the same in Python. From StackOverflow:

"+ is closer in meaning to extend than to append... the methods work in-place: extend is actually like += - in fact, it has exactly the same behavior as += except that it can accept any iterable, while += can only take another list."
Given a value x that we want i times, a repeating list can be created with [x] * i.


"The pass statement does nothing particular but can act as a placeholder" (StackOverflow). It allows the code to parse, acting much like ??? in Scala (although Scala's ??? throws NotImplementedError when evaluated, whereas pass simply does nothing).
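For example (names invented):

```python
def todo():
    pass  # body to be written later; calling it simply returns None

class Placeholder:
    pass  # an empty class body is also legal with pass

assert todo() is None
```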

[1] The Quick Python Book, Second Edition

Monday, April 9, 2018

Cross validation in Spark

What it is

"k-fold cross-validation relies on quarantining subsets of the training data during the learning process... k-fold CV begins by randomly splitting the data into k disjoint subsets, called folds (typical choices for k are 5, 10, or 20). For each fold, a model is trained on all the data except the data from that fold and is subsequently used to generate predictions for the data from that fold.  After all k-folds are cycled through, the predictions for each fold are aggregated and compared to the true target variable to assess accuracy" [1]


Spark's  "CrossValidator begins by splitting the dataset into a set of folds which are used as separate training and test datasets. E.g., with k=3 folds, CrossValidator will generate 3 (training, test) dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing" (from the documentation).
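The fold mechanics can be sketched in a few lines of Python (a hand-rolled illustration, not Spark's implementation):

```python
def k_folds(data, k):
    """Yield (train, test) pairs: each fold is held out once while
    the remaining k-1 folds form the training set."""
    folds = [data[i::k] for i in range(k)]   # k disjoint subsets
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(9))
for train, test in k_folds(data, k=3):
    assert len(test) == 3                    # 1/3 of the data for testing
    assert sorted(train + test) == data      # every sample used exactly once
```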


val nb        = new NaiveBayes("nb")
val pipeline  = new Pipeline().setStages(Array(tokenizer, remover, ngram, hashingTF, idf, nb))
val evaluator = new MulticlassClassificationEvaluator().setLabelCol(LABEL).setPredictionCol("prediction").setMetricName("accuracy")
val paramGrid = new ParamGridBuilder().addGrid(nb.smoothing, Array(100.0, 10.0, 1.0, 0.1, 0.01, 0.001)).addGrid(idf.minDocFreq, Array(1, 2, 4, 8, 16, 32)).build()
val cv        = new CrossValidator().setEstimator(pipeline).setEvaluator(evaluator).setEstimatorParamMaps(paramGrid).setNumFolds(5)
val fitted    = cv.fit(trainingData) // trainingData: the labelled DataFrame (name assumed)
val metrics   = fitted.avgMetrics

where tokenizer, remover, ngram, hashingTF and idf are instances of Spark's Tokenizer, StopWordsRemover, NGram, HashingTF and IDF.

Running this on the Subject text of the 20 Newsgroup data set yielded the optimized hyperparameters of 1 document for a word to be significant and a smoothing value of 0.1 for regularization leading to 77.1% accuracy.

Running this on all the text of the 20 Newsgroup data yielded values of 10.0 for smoothing and 4 for minDocFreq giving an optimized accuracy of nearly 88%.

Those results in tabular form:

Data           Smoothing   minDocFreq   Accuracy
subject only   0.1         1            77.1%
all text       10.0        4            87.9%

Interestingly, the range over the results for all smoothing hyperparameters was typically less than 6% but the range of results over all minDocFreq was as much as 60%. For this data and this model at least the rather unexceptional conclusion is that you can increase accuracy more from improving feature engineering than model tuning.

(Note: NGram.n was set to 2. After some more CV, I found it was best leaving it as 1. Then, the "subject only" accuracy was 85.1% and the "all text" accuracy was 89.4%).


Happily, Spark has parallel cross validation as of 2.3.0. See TrainValidationSplit.setParallelism(...) - it has a @Since("2.3.0"). This should improve performance. Using 10 executors with 30gb of memory and 2 cores each, CV on the full data set could take 20 minutes or so.


For logs on what TrainValidationSplit is doing, run:

scala> sc.setLogLevel("DEBUG")

This can be irritating so change it back to ERROR when you're done.

[1] Real World Machine Learning (sample here).