Saturday, February 23, 2013

Return of the Language Lawyer

I've been playing with other languages recently. It's been fun, but some terms keep popping up that are worth noting.

Functional Programming

Dan Grossman of the University of Washington emphasises that functional programming:

  • Avoids mutation
  • Uses functions as values

In addition, it "often uses lots of recursions. You often use a style that is more mathematical than other styles of programming." [2] Although it is not sufficient to say that it is anything that is not OOP.

First Class Functions

Are "a situation where functions can appear and be used wherever we use other values like numbers and lists and strings. Functions can be argument to other functions, combined with tuples and so on" [2]

First Order Functions

The most common use of First Class Functions is as an argument/result of another function. This other function is called a higher-order function. [2] All other functions are called first order functions [3].
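For illustration (my example, not Grossman's), here is a higher-order function written in Java 8-style lambda syntax: twice both takes a function as an argument and returns a function as its result.

import java.util.function.IntUnaryOperator;

// twice is higher-order: it takes a function and returns a function
static IntUnaryOperator twice(IntUnaryOperator f) {
    return n -> f.applyAsInt(f.applyAsInt(n));
}

int seven = twice(n -> n + 3).applyAsInt(1); // ((1 + 3) + 3) = 7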

Lexically scoped

"We use the environment where the function was defined not the environment in which it is being called" [2].


Closures


"The language keeps around those old environments so that it can implement lexical scope properly... So, a function value has two parts ... the code part [and] the environment that was current when the function was defined... This thing is called a closure." [2]


Total Order

"This is a binary relation ≤ that satisfies:

Antisymmetry: if v ≤ w and w ≤ v then v = w
Transitivity: if v ≤ w and w ≤ v then v ≤ x
Totality: either v ≤ w or w ≤ v or both

Note: the <= for double is not a total order." [1] Eg,

        assertTrue(1d <= 1d);
        assertTrue(0.1d <= 0.1d);
        assertTrue(Double.NaN <= Double.NaN); // fails! NaN <= NaN is false
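By contrast, Double.compare does define a total order: NaN is treated as equal to itself and greater than every other value, including positive infinity. For example:

        assertTrue(Double.compare(Double.NaN, Double.NaN) == 0);
        assertTrue(Double.compare(Double.NaN, Double.POSITIVE_INFINITY) > 0);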


[1] Prof Robert Sedgewick, Princeton University.
[2] Prof Dan Grossman, Coursera.
[3] Wikipedia.

Fundamental Algorithms - Crib Sheet #1


Selection Sort

Description
"First, find the smallest item in the array and exchange it with the first entry... Then, find the next smallest item and exchange it with the second entry. Continues in this way until the entire array is sorted." [1]

"Each exchange puts an item into its final position, so the number of exchanges is N."

Efficiency
Uses ~ (N^2)/2 compares (imagine an NxN table of compares; only half of the cells would indicate a comparison) and N exchanges.

Notes
 Running time is insensitive to input. "It takes about as long to run selection sort for an array that is already in order ... as it does for a randomly-ordered array!"

Sample Code

// Exchange a[i] with the smallest entry in a[i+1..N-1]
for (int i = 0 ; i < a.length ; i++) {
    int min = i;
    for (int j = i+1 ; j < a.length ; j++) {
        if (less(a[j], a[min])) min = j;
    }
    exch(a, i, min); // exchange elements i and min of a[]
}
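The less and exch helpers assumed by the snippets in this post follow the convention in [1]:

// is v less than w?
private static boolean less(Comparable v, Comparable w) {
    return v.compareTo(w) < 0;
}

// swap a[i] and a[j]
private static void exch(Comparable[] a, int i, int j) {
    Comparable t = a[i];
    a[i] = a[j];
    a[j] = t;
}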


Insertion Sort

Description
"The algorithm that people often use to sort bridge hands is to consider the cards one at a time, inserting each into its proper place among those already considered.

Notes: if the "entries are already in order (or nearly order), then insertion sort is much, much faster".
"Slow for large unordered arrays because the only exchange it does involve adjacent entries, so items can move through he array only one place at a time"

Efficiency
Uses ~ (N^2)/4 compares and ~ (N^2)/4 exchanges on average,
~ (N^2)/2 compares and ~ (N^2)/2 exchanges in the worst case,
and N-1 compares and 0 exchanges in the best case.

Sample Code

// insert a[i] among a[i-1], a[i-2], a[i-3], ...
for (int i = 1 ; i < a.length ; i++) {
    for (int j = i ; j > 0 && less(a[j], a[j-1]); j--)
        exch(a, j, j-1);
}

Shellsort

Description
A "simple extension to insertion sort that gains speed by allowing exchanges of array entries that are far apart to produce partially sorted arrays that can be efficiently sorted, eventually by insertion sort".

"Rearrange the array to give it the property that every hth entry (starting anywhere) yields a sorted subsequence. Such an array is said to be h-sorted."

Example: take an array, 7-sort it, 3-sort it then finally 1-sort it.

"The code is the same as the insertion sort except that we go backward through the array we skip by h instead of just by 1... Once you 1-sort that an insertion sort so you always got an ordered result." [2]

Efficiency
Depends on the step length. "The worst-case number of compares used by shellsort with 3x+1 increments is O(N^(3/2))."

Notes
"A g-sorted array remains g-sorted after h-sorting it."

"Fast unless the array is huge. Tiny, fixed footprint for code (used in embedded systems)." [2]

Sample Code

int h = 1;
while (h < a.length/3 ) h = 3*h + 1; // 1, 4, 13, 40, 121, 364, 1093 ...
while (h >= 1) { // h-sort the array
    for (int i = h ; i < a.length ; i++) { 
        // insert a[i] among a[i-h], a[i-2*h], a[i-3*h], ...
        for (int j = i ; j >= h && less(a[j], a[j-h]) ; j -= h)
            exch(a, j, j-h);
    }
    h = h/3;
}

Mergesort

Description
"To sort an array, divide it into two halves, sort the two halves (recursively), and then merge the results."

It is a recursive, divide-and-conquer algorithm.

Efficiency
Sorts any array of N items in time proportional to N log N.

Notes
Disadvantage: uses extra space ~ N as merging requires an auxiliary array.

Used in Arrays.sort() for arrays of objects.

Stability: "this sort is guaranteed to be stable: equal elements will not be reordered as a result of the sort." - Collections.sort().


Sample Code

void sort(Comparable[] a, int lo, int hi) {
    if (hi <= lo) return;
    int mid = lo + (hi - lo)/2;
    sort(a, lo, mid);
    sort(a, mid+1, hi);
    merge(a, lo, mid, hi);
}

The merge method looks like the following (where aux is an auxiliary array the same length as a):

// copy a[lo..hi] into the auxiliary array
for (int k = lo ; k <= hi ; k++) aux[k] = a[k];

int i = lo, j = mid + 1;
for (int k = lo ; k <= hi ; k++) {
    if      (i > mid)              a[k] = aux[j++]; // left half exhausted
    else if (j > hi)               a[k] = aux[i++]; // right half exhausted
    else if (less(aux[j], aux[i])) a[k] = aux[j++]; // right entry smaller
    else                           a[k] = aux[i++]; // left entry smaller or equal (keeps the sort stable)
}


[1] Algorithms, Sedgewick and Wayne.
[2] Sedgewick, Coursera.

Friday, February 22, 2013

Java Security Crib Sheet


Certificates

"A certificate is a statement, issued by one person, that the public key of another person has a certain value. Essentially, a certificate is a signed public key. Marian creates the certificate by placing some information about her, some information about Will, and Will's public key value into a file. She then signs the file with her own private key. Robin Hood (or anyone else) can download this certificate and verify it using Marian's public key. Robin Hood trusts Marian, so he also now has a trustworthy copy of Will's public key, which he can use to verify files signed by Will.

"To verify a certificate, you need a public key.To verify a public key, you need a certificate. Essentially, one certificate can be verified by another, which is verified by another and so forth. This is called certificate chaining. The chain can't be infinite, so where does it start? The certificate chain starts with a certificate whose issuer and subject are the same. Usually such a certificate is issued by a Certificate Authority."

KeyStore

"A KeyStore is a handy box that holds keys and certificates. One KeyStore contains all the information a single person (or application, or identity) needs for authentication. Usually, you have two distinct uses for authentication

 - You need to prove to others who you are
 - You need to make sure that other people are legitimate

"In the first case you can use a private key to sign data. A certificate that contains the matching public key can be used to prove your identity... The private key is used to sign data; the certificates can be presented as credentials backing up the signature.

"In the second case, you can use other people's certificates to prove to yourself that they are who they say they are."

Message Digest

"A message digest is a special number calculated from a set of input data. If you are familiar with hash functions, it will help you to know that a message digest is a lot like a hash value, except longer. Message digests are sometimes called secure hash functions or cryptographic has functions."

MACs

"A Message Authentication Code (MAC), for example, is basically a message digest with an associated key. It produces a short value based on both its input data and the key. In theory, only someone with the same key can produce the same MAC from the same input data."

"MACs differ from digital signatures as MAC values are both generated and verified using the same secret key." [1]

Signing Files

"Another approach to authentication comes from the combination of a message digest and an asymmetric cipher. If Marian encrypts the message digest with her private key, Robin Hood can download the encrypted message digest, decrypt it using Marian's public key, and compare the message digest to one that he computes from the downloaded file. If they match, then he can be sure that the file is correct."

"The encrypted digest is called a signature; Marian has signed the file."

All quotes taken from Java Cryptography, Jonathan Knudsen, except:

[1] Wikipedia 

Saturday, February 16, 2013

What's the point?

Floating point is notoriously tricksy but this little gotcha was new to me.

Unlike normal maths, floats are not associative under addition. For instance:

        System.out.println(((1.0f + 0.05f) + 0.05f) == (1.0f + (0.05f + 0.05f)));

gives:

false

as most programmers might know. This is because, although the maximum and minimum positive floats are something like 3.4028235E38 and 1.4E-45 respectively, not every number in between can be represented. For instance, "the decimal fraction 0.1 cannot be precisely represented in binary" [1].

OK, so much you should know. But here's the gotcha. Adding a series of numbers that can be represented with floating points might give you a different answer if you go from the last to the first rather than the first to the last.

To demonstrate, let's make an array with the first value much larger than the others.

        private float[] createFloatArray() {
                int     number          = 101;
                float[] floats          = new float[number];
                float   lowValue        = 1f;
                for (int i = 1 ; i < floats.length ; i++) {
                        floats[i] = lowValue;
                }
                floats[0] = 1E8f - (number * lowValue);
                return floats;
        }

Then we add up the floats in the array from left to right:

        private float totalLeftToRight(float[] floats) {
                float totalLeftToRight = 0;
                for (int i = 0 ; i < floats.length ; i++) {
                        totalLeftToRight += floats[i];
                }
                System.out.println("Total (left to right): " + new BigDecimal(totalLeftToRight));
                return totalLeftToRight;
        }

Then compare that answer to adding the array right to left (similar code but not worth printing here) and you get:


Total (left to right): 99999896
Total (right to left): 100000000

The reason is that, added together, all the small numbers are significant when they are then added to the large number. Individually, though, they are too small to make a difference.

As the value of a floating point number gets bigger, the gaps between the numbers that can be represented also get bigger.

Note how (1E8 - 101) cannot be expressed as a float either. It comes to 99999896 rather than 99999899.

[1] Roedy Green has a good description of floating points here.


Oops! Where did my memory go?

Most Java programmers know the typical amount of memory used by primitives (ints take 4 bytes, doubles 8 etc) but what about objects?

Well, primitive arrays take up 24 bytes plus N times the size of the primitive (where N is the number of elements) [1].

Objects typically have an overhead of 16 bytes and are padded to the nearest 8 bytes. Their references to other objects typically take 8 bytes each. Furthermore, each inner class needs 8 bytes for its pointer to the enclosing class [1].
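For example (my arithmetic, using the figures above), a linked-list node holding an int would cost:

class Node {    // 16 bytes of object overhead
    int value;  //  4 bytes
    Node next;  //  8 bytes for the reference
}               // = 28 bytes, padded to 32 bytes per instance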

Pointers can be compressed. The reason we might want to do this is that "memory is pretty cheap, but these days bandwidth and cache is in short supply" [2].

[1] Prof Robert Sedgewick, Coursera.
[2] CompressedOops.

Tuesday, February 12, 2013

Thrashing and Amortization

Two new terms whose definitions I've learned today in the context of Collections:

Amortisation

You might read comments in the JavaDocs such as this one for ArrayList:

"The add operation runs in amortized constant time, that is, adding n elements requires O(n) time."

without worrying too much about what it means. In the case of any collection backed by an array, this means that the array must be resized when it fills up - an expensive operation - but that ordinary insertions are essentially free (adding an element to an array is much more efficient than adding it to, say, a linked list).

So, when the total number of elements goes from N to 2 x N, the array has to copy its N existing elements into the new, resized array. But since those N elements were added for "free", the averaged cost per insertion is a constant*

"Every time you hit a power of 2 you take that many array accesses but in a sense you have already paid for them by putting those items on the stack. That's called amortised analysis, when you consider the total cost averaged over all operations." - Prof Rob Sedgewick, Coursera.

Thrashing

Thrashing is a pathological phenomenon that can occur when the underlying array resizes at some threshold X. If an element is added taking the size to X+1, the underlying array is enlarged; if an element is then removed leaving X-1, the underlying array is shrunk again.

"If a client calls push, pop, push, pop then it's going to be doubling, halving, doubling, halving creating arrays on every operation taking time proportional to N on every operation and therefore quadratic time for everything. The efficient solution is to wait for the array to get one quarter full before you halve it... There won't be another resizing operation until it gets totally full or half again full. So the invariant of that is that the array is always between 25% or 100% full and that every time time you resize you've already paid for it in an amortised sense." - Prof Rob Sedgewick, Coursera.


*  Note: this math is a simplification but you get the idea. A more accurate analysis is that the work of adding N elements is (of course) N array accesses plus the copying done by the resizes (2 + 4 + 8 + ... + N). But remember, 2 + 4 + 8 + 16 + ... + N = 2N - 2, and we ignore the -2 for large N. Therefore, the total cost is actually ~3N and so the amortised cost per insertion is the constant 3.

Sunday, February 3, 2013

Functional Programming - Crib Sheet #2

Continuing in my introduction to Functional Programming, here are some concepts I have encountered.

Tail Recursive Optimization

"Certain recursive calls can be replaced by iterations. When the last statement executed in a procedure body is a recursive call of the same procedure, the call is said to be tail recursive... We can speed up a program by replacing tail recursion by iteration. For a procedure without parameters, a tail-recursive call can be simply replaced by a jump to the beginning of the procedure."
- Compilers, Principles, Techniques and Tools - Aho, Sethi and Ullman
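As a sketch in Java (my example; note that the JVM itself does not perform this optimisation), here is a tail-recursive sum and the iteration a compiler could replace it with:

// Tail recursive: the recursive call is the last thing the method does.
static long sum(long n, long acc) {
    if (n == 0) return acc;
    return sum(n - 1, acc + n);
}

// The equivalent iteration: the call becomes a jump back to the top.
static long sumIteratively(long n) {
    long acc = 0;
    while (n != 0) {
        acc = acc + n;
        n = n - 1;
    }
    return acc;
}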

Options

There are no nulls in functional languages like ML. Instead, there is a type called option that is either

SOME x

or

NONE

In the Java library, TotallyLazy, Options are modelled by a superclass Option and subclasses None and Some.

Option defines a get method that returns the value x for a Some, while None throws an Exception. Given an Option when you don't know which it is, you can use methods like getOrNull, which don't throw exceptions even if your Option is a None.
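A quick sketch of how that looks (from memory of the TotallyLazy API, so details may be off):

import com.googlecode.totallylazy.Option;

Option<String> present = Option.some("outlaw");
Option<String> absent  = Option.none();

present.get();       // "outlaw"
absent.getOrNull();  // null, rather than an Exception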

Currying

Currying is a "closure idiom... it's a new way to deal with with conceptually multi-argument functions. It takes one argument and return a function that takes the next argument. It will still be able to use the first argument because it will be in the environment."

In ML it looks like this:

"Example:


val sorted3 = fn x => fn y => fn z =>  z >= y andalso y >= x
val t1 = ((sorted3 7) 9) 11

Calling (sorted3 7) returns a closure with:

  • Code fn y => fn z => z >= y andalso y >= x 
  • Environment maps x to 7

Calling that closure with 9 returns a closure with:

  • Code fn z => z >= y andalso y >= x  
  • Environment maps x to 7, y to 9

Calling that closure with 11 returns true"


- Prof Dan Grossman, Coursera.
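The same example rendered in Java 8-style lambda syntax (my translation, not Grossman's) chains one-argument functions:

import java.util.function.Function;

Function<Integer, Function<Integer, Function<Integer, Boolean>>> sorted3 =
        x -> y -> z -> z >= y && y >= x;

boolean t1 = sorted3.apply(7).apply(9).apply(11); // true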

Pattern Matching

Nothing to do with the String pattern matching you find in Perl and Java, this is about how the path to follow through some code is determined by the type of a certain object.

An example in ML looks like:


case all_except_option(s, substitutions) of
    NONE => []
  | SOME x => x


where either an empty list, [], or x is returned depending on the type of the Option (see above).

There is a reflective version in the TotallyLazy library here (and its test, which may prove enlightening, here). This will guide which code block is executed given the type of an argument. Ideally, you'd do this using normal polymorphism. But this utility proves very useful if you don't have control of the classes that determine what functionality is to be invoked - say, if they belong to somebody else's library.

Lazy Evaluation

"Lazy evaluation is a method of delaying expression evaluation which avoids multiple evaluation of the same expression. Thus, it combines the advantages of normal order and applicative order evaluation. With lazy evaluation, an expression is evaluated when its value is needed, that is, when it appears in the function position in a function application. However, after evaluation, all copies of that expression are updated with the new value."
- An Introduction to Functional Programming Through Lambda Calculus - Greg Michaelson
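A hypothetical memoising wrapper in Java illustrates the idea: the expression is evaluated at most once, on first demand, and the result is then reused.

import java.util.function.Supplier;

static <T> Supplier<T> lazy(final Supplier<T> expression) {
    return new Supplier<T>() {
        private T value;
        private boolean evaluated;

        public T get() {
            if (!evaluated) {             // evaluate only when first needed
                value = expression.get();
                evaluated = true;         // ...and remember the result
            }
            return value;
        }
    };
}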

Versioning in a multi-app environment

I was recently published at IBM's developerWorks (see article) - woo hoo! But one thing I omitted was the matter of versioning.

Our floor is made up of many teams with many apps, some depending on the binaries of others. What makes our life harder is one team breaking another team's build when they change a library that is shared.

One conclusion we've drawn is never use Maven SNAPSHOTs. A build should always work or (if there is a genuine error) always fail. It should never be non-deterministic.

If shared code changes, other teams should be told in the daily stand-up that is attended by team representatives. Then, those dependent teams can change their versions in their sandbox and check it in when the code has been reconciled.

In short: if nothing has changed, builds should be absolutely predictable.