library in a hurry, I've made some miscellaneous notes.

The basics
You will have been taught at school that the joint probability of A and B is:

P(A ∩ B) = P(A) P(B)
iff A and B are independent (see here).

Depending on your interpretation of probability theory, it is axiomatic that the relationship between the joint probability and the conditional probability is:

P(A ∩ B) = P(A|B) P(B)
These will come in useful.

__CRFs__
"A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure." [1]

We consider the distribution over

V = X ∪ Y
where:

- V are the random variables
- X are the observed inputs
- Y are the outputs we'd like to predict

"The main idea is to represent a distribution over a large number of random variables by a product of local functions [Ψ_{A}] that each depend on only a small number of variables." [1]

__The Local Function__
The local function has the form:

Ψ_{A}(x_{A}, y_{A}) = exp { Σ_{k} θ_{A,k} f_{A,k}(x_{A}, y_{A}) }
where f_{A,k} is the k-th feature of a feature vector and θ_{A,k} is its weight.
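As a sketch, a local function is just an exponentiated weighted sum of feature values (the weights and features below are invented for illustration):

```python
import math

def local_function(theta, features):
    """Psi_A(x_A, y_A) = exp(sum_k theta_k * f_k(x_A, y_A)).

    `features` holds the already-evaluated f_A,k values for this factor;
    `theta` holds the corresponding weights theta_A,k.
    """
    return math.exp(sum(t * f for t, f in zip(theta, features)))

# Hypothetical weights and binary feature values for one factor:
psi = local_function([0.5, -1.0], [1.0, 0.0])  # exp(0.5) ≈ 1.6487
```

Note that the result is always positive, whatever the weights, which is exactly the property a factor needs (see the next sections).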

__Graphical Models__
"Traditionally, graphical models have been used to represent the joint probability distribution p(y, x) ... But modeling the joint distribution can lead to difficulties ... because it requires modeling the distribution p(x), which can include complex dependencies [that] can lead to intractable models. A solution to this problem is to directly model the conditional distribution p(y|x). " [1]

__Undirected Graphical Model__
For subsets of variables A ⊂ V, we define an undirected graphical model as the set of all distributions that can be written as:

p(x, y) = (1/Z) ∏_{A} Ψ_{A}(x_{A}, y_{A})
for any choice of factors, F = {Ψ_{A}}, where:

- Ψ_{A} is a function υ^{n} → ℜ^{+}, where υ is the set of values v can take
- Z is the partition function that normalizes the values such that:

Z = Σ_{x,y} ∏_{A} Ψ_{A}(x_{A}, y_{A})
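To make Z concrete, here is a toy sketch that brute-forces the partition function for two binary variables and a single (invented) factor:

```python
import itertools
import math

def psi(x, y):
    # An arbitrary positive factor over one x and one y (values invented).
    return math.exp(0.5 * x + 1.0 * y - 0.3 * x * y)

# Brute-force Z: sum the (product of) factors over every assignment.
Z = sum(psi(x, y) for x, y in itertools.product([0, 1], repeat=2))

def p(x, y):
    return psi(x, y) / Z

# After dividing by Z, the values form a proper distribution:
total = sum(p(x, y) for x, y in itertools.product([0, 1], repeat=2))  # 1.0
```

This brute-force sum is exponential in the number of variables, which is why real implementations exploit the graph structure instead.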
__Factor Graph__
This undirected graphical model can be represented as a *factor graph*. "A factor graph is a bipartite graph G = (V, F, E)", that is, a graph with two disjoint sets of vertices. One set is the variables, v_{s} ∈ V; the other is the factors, Ψ_{A} ∈ F. All edges are between these two sets, and an edge exists only if the variable v_{s} is an argument of the factor Ψ_{A}.

__Directed Model (aka Bayesian Network)__
This is based on a directed graph G = (V, E) and is a family of distributions (aka "model") that factorize as:

p(x,y) = ∏_{v∈V} p(v|π(v))
where π(v) are the parents of v in the graph G.
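A minimal sketch of that factorization for a three-node chain A → B → C, where π(B) = {A} and π(C) = {B} (the conditional probability tables are invented):

```python
# Conditional probability tables for binary variables (invented values).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (b, a)
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5}  # key: (c, b)

def joint(a, b, c):
    # p(a, b, c) = p(a) * p(b|a) * p(c|b): each node conditioned on its parents.
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

# Because each table is locally normalized, the joint sums to one
# with no explicit partition function needed:
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

That absence of a global Z is the practical payoff of the directed factorization compared with the undirected one above.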

__Naive Bayesian Classifier__
As a concrete example, let's look at Mallet's implementation of Naive Bayes.

Firstly, what makes Naive Bayes naive? "Naive Bayes is a simple multiclass classification algorithm with the assumption of *independence* between every pair of features." (from here).

We ask ourselves: given the data, what is the probability that the classification is C? Or, in math-speak, what is p(C|D)?

Now, the data, D, is made up of lots of little data points, { d_{1}, d_{2}, d_{3}, ... }. And given that little equation at the top of this post, if all these data points are *independent* then:

p(D|C) = p({ d_{1}, d_{2}, d_{3}, ... } | C) = p(d_{1}|C) p(d_{2}|C) p(d_{3}|C) ...
Mallet's Java code for this is surprisingly easy and there is a JUnit test that demonstrates it. In the test, there is a dictionary of all the words in a corpus. It's a small dictionary of the words { win, puck, team, speech, vote }.

We create two vectors that represent the weightings for these words if a document relates to politics or sport. Not surprisingly, {speech, vote, win} have a higher weighting for politics and {team, puck, win} have a higher weighting for sports, but all words have a nominal value of 1 added (before normalization) in each vector. This is Laplace smoothing and it ensures the maths doesn't blow up when we try to take the log of zero.
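A sketch of that add-one step, using the same five-word dictionary (the raw counts are invented, and this is not Mallet's actual API):

```python
import math

vocab = ["win", "puck", "team", "speech", "vote"]
# Invented raw word counts for the 'politics' class; note 'puck' never appears.
raw_counts = {"win": 2, "puck": 0, "team": 1, "speech": 5, "vote": 4}

# Laplace (add-one) smoothing: every word gets a nominal count of 1,
# so no p(word|class) is zero and taking its log never blows up.
smoothed = {w: raw_counts[w] + 1 for w in vocab}
total = sum(smoothed.values())
p_word_given_class = {w: c / total for w, c in smoothed.items()}

log_p_puck = math.log(p_word_given_class["puck"])  # finite, thanks to smoothing
```

Without the +1, `math.log(0)` would raise a `ValueError` the first time an unseen word turned up in a document.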

Note that these weightings for each category are *by definition* p(D|C), that is, the probability of the data given a classification.

This being Bayes, we must have a prior. We simply assume an incoming document is equally likely to be about sports or politics, since there is no evidence to the contrary.

Now, a feature vector comes in with {speech, win} equally weighted. To which class should it go?

The code effectively:

- Calculates an inner product between the feature vector and (the logarithm of) each class's vector, then adds (the logarithm of) the bias. This is the multiplication in the Local Function section above.
- Subtracts the maximum result from all results. This appears to handle rounding errors, and it drops out of the equation when we normalize (see below).
- Takes the exponential of each result.
- Normalizes by dividing by the sum of these exponentials. This is the partition function we saw above, Z.
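Those four steps can be sketched like this (the class weights are invented, and this is a simplification, not Mallet's actual code):

```python
import math

# Log class-conditional probabilities for { win, puck, team, speech, vote }
# (invented, already smoothed and normalized per class).
log_weights = {
    "politics": [math.log(p) for p in [0.25, 0.05, 0.05, 0.35, 0.30]],
    "sports":   [math.log(p) for p in [0.30, 0.30, 0.30, 0.05, 0.05]],
}
log_prior = math.log(0.5)  # sports and politics equally likely a priori

def classify(feature_vector):
    # 1. Inner product with each class's log weights, plus the log prior.
    scores = {c: sum(f * w for f, w in zip(feature_vector, ws)) + log_prior
              for c, ws in log_weights.items()}
    # 2. Subtract the maximum score (cancels when we normalize, but keeps
    #    the exponentials from underflowing).
    m = max(scores.values())
    # 3. Exponentiate each shifted score.
    exp_scores = {c: math.exp(s - m) for c, s in scores.items()}
    # 4. Normalize by the partition function Z.
    Z = sum(exp_scores.values())
    return {c: e / Z for c, e in exp_scores.items()}

# An incoming document with {win, speech} equally weighted:
posterior = classify([1, 0, 0, 1, 0])  # politics should win
```

With these made-up weights, {speech, win} scores log(0.25 × 0.35) for politics against log(0.30 × 0.05) for sports, so politics comes out on top, as we'd hope.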

The most probable is the class we're looking for.

But why does the code do this? Bayes' theorem gives us:

p(C|D) = p(D|C) p(C) / p(D)
= prior x p(D|C) / Z
= prior x p(d_{1}|C) p(d_{2}|C) p(d_{3}|C) ... / Z
now, if we let:

λ_{y} = ln(prior)
λ_{y,j} = ln(p(d_{j}|C_{y}))
then

p(C|D) = (e^{λ_{y}} ∏_{j=1}^{K} e^{λ_{y,j} d_{j}}) / Z = e^{λ_{y} + Σ_{j=1}^{K} λ_{y,j} d_{j}} / Z
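As a quick numerical sanity check (with invented probabilities and binary features d_{j}, ignoring the common 1/Z factor), the product form and the exponentiated-sum form agree:

```python
import math

prior = 0.5
# Invented p(d_j | C) values for K = 3 features, with feature vector d.
p_d_given_c = [0.2, 0.7, 0.4]
d = [1, 0, 1]

# lambda_y = ln(prior), lambda_{y,j} = ln(p(d_j | C_y)), as in the post.
lam_y = math.log(prior)
lam_yj = [math.log(p) for p in p_d_given_c]

# prior * product of p(d_j|C)^{d_j} ...
product_form = prior * math.prod(p ** dj for p, dj in zip(p_d_given_c, d))
# ... equals exp(lambda_y + sum_j lambda_{y,j} d_j).
sum_form = math.exp(lam_y + sum(l * dj for l, dj in zip(lam_yj, d)))
```

Both come out as 0.5 × 0.2 × 0.4 = 0.04, which is why the code can work entirely in log space and only exponentiate at the end.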