Sunday, January 23, 2011

Capital Markets Business Terminology

OK, this isn't tech. I'm writing this so I remember in a few years what it is I did. This follows chats with the guys closest to the business.

Equity Capital Markets

Ways to raise money and the stages:

Private placement
  • Pre-IPO.
  • Fewer than 500 investors, in accordance with SEC rules.
IPO
  • Registering on an exchange.
  • Publicly tradable.
Green shoe option
  • The right of the underwriter(s) to sell additional stock.
  • Stabilizes the price.
Follow on
  • Post-IPO when the company still has equity to sell.
  • Non-publicly traded.
Rights Issue
  • Diluting stock.
  • Current stock-holders have the right to buy more.
  • They can sell this right.

  • All orders controlled by the syndicate
  • Orders controlled by any one bank within the syndicate.
Block trader
  • Very secretive.
  • Relies heavily on network of clients.
  • Operates outside of market hours so as not to spook the market.
  • Deals with large amounts (~1% of company).
  • Offers a sizable discount on market price.
  • Essentially selling "warrants". That is to say, an agreement to exchange the equity through the exchange when the market next opens.
  • Doesn't want the salespeople to know anything!

Debt Capital Markets

There are two types of salesperson:

Institutional
  • Institutional investors' share of the deal.
  • ~98% of demand.
PWM (Private Wealth Management)
  • Retail investors.
  • More legal restrictions involved in selling to these investors (eg, conflicts of interest).
  • Investment bank buys the bond at booking time and sells slices of it.
  • This applies only if the investment bank is handling the booking and delivery.
  • Alternative form of payment
  • Unrelated bond from a different issuer.
Soft Allocation
  • What a bank would like a syndicate member to get.
  • This is applicable when said bank is not the book runner.
Duration Manager
  • Very desirable role for an investment bank.
  • Can see the hedges and swaps of other counter-parties. This is valuable market data.
  • Role is assigned by the issuer.

Badly tuned Hibernate queries gobbling tempdb space

Due to some poorly performing HQL, our Sybase server (12.5) was running out of tempdb space.

Debugging the issue has been something of a problem because of the way Sybase manages this odd database.

What is tempdb?

"tempdb needs to be big enough to handle the following processes for every concurrent Adaptive Server user:
  • Worktables for merge joins
  • Worktables that are created for distinct, group by, and order by, for reformatting, and for the OR strategy, and for materializing some views and subqueries"

From the Sybase manual.

Since our HQL was using the distinct keyword, we were filling up the tempdb database.

A naive way of analyzing which query was taking up all this space is offered by Sybase themselves here. Basically, the idea is to drop the distinct keyword from your query and select the results into a temp table. For example:
select city
into #tempcity
from authors
"You could use this query to create a temporary table, and then use sp_spaceused," they advise. That is:

use tempdb
sp_spaceused #tempcity

What I had difficulty finding was that this is not the whole story. It will only tell you the size of the data retrieved and not the whole amount of tempdb used. For instance, you might only get 10 rows of just a single integer each if you performed a distinct select on an ID. But if the same query has lots of complicated joins, you're actually using up a lot more tempdb space than this as the DB builds worktables with which to process the join relationships.

Our DBA sent me this email:

"Definitely the number of pages read in a single scan of a worktable is an absolute indicator of the size of that table (since the worktable is created on the fly and has n pages that are scanned once). The dataserver will have a 2K page size which means the table is n x 2Kb in size."

So, one way to find the size of these queries is to enable resource limits and see at what point your query breaks. For instance:

sp_configure "allow resource limits", 1
sp_add_resource_limit sa, NULL, 'at all times', tempdb_space, 5, 2, 3, 1


This sp_add_resource_limit stored procedure is saying for user sa connecting via an undefined application (NULL), at all times, limit the space in tempdb to 5 pages. In the event of any limit being exceeded (2), abort the transaction (3) for that query (1). See here for the full details of how to execute this SP.

Then, execute your query and see if it fails because of the limited resource. You may see something like this:

com.sybase.jdbc3.jdbc.SybSQLException: Exceeded tempdb space limit of 5 pages.

SQLWarning: ErrorCode: 3618 SQLState: 01ZZZ --- Transaction has been aborted.
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0.214, SQL query: 0.214, Building output: 0

Even a query that returns no rows whatsoever but that does a lot of joining may hit this limit.

By the way, when you're finished, run:

sp_drop_resource_limit sa, NULL

Piper at the Gates of Dawn

What's the best way to run integration tests for separate, discrete services?

In an attempt to decouple the stack, a service has been pulled out into a separate project. To run the integration tests of this discrete project, the stack needs to be brought up. We'd like to automate this so we used Java's ProcessBuilder thus:

ProcessBuilder processBuilder = new ProcessBuilder(command);

Process process = processBuilder.start();

for each module that needs to start up. We'd wait for some character sequence like "Module XXX started!" before starting the next.

All was going well until somebody changed the logging file in the module that we start up this way. The (RMI) thread that serviced our request was hanging and we saw this in JConsole:

Name: RMI TCP Connection(2)-

Total blocked: 0 Total waited: 0

[The stack trace itself did not survive; it showed the thread stuck in a native method beneath a Hibernate logging call, holding several locks.]
In this version of Hibernate (3.2.6) this line was just a System.out.println call. Why on earth would this cause the thread to block forever?

The thread was stuck in native code so was there something wrong with my JVM (1.6.0_20 on Windows)? Would looking at file handles show some sort of contention? Would it be OK on a *Nix machine?

Putting a breakpoint in this Hibernate code only confused me further. The first time Hibernate hits this System.out.println call, it is executed without problem. It's only the second time that the thread hangs forever.

When something works fine for a while and then stops working for good, it suggests a resource is being exhausted. It took some Googling before I realized that the pipe's buffers were filling up - but not until the second execution of System.out.println: the first call fills the pipe's buffer, and the second blocks forever, waiting for space that nothing is reading to free.

When you start another process via the JVM, you can't then ignore it. It may send data to an output stream (for example, just printing to the console). Some thread needs to drain this stream even if it does nothing with the data - the notion of a StreamGobbler.
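A minimal sketch of such a gobbler. The class below and the choice of `java -version` as the child process are mine, not from the original code; the point is simply that each output stream gets its own draining thread:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class StreamGobblerDemo {

    // Drains an InputStream on its own thread so the child process
    // never blocks writing to a full pipe buffer.
    static class StreamGobbler extends Thread {
        private final InputStream in;

        StreamGobbler(InputStream in) {
            this.in = in;
        }

        @Override
        public void run() {
            try {
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                while (reader.readLine() != null) {
                    // Discard (or log) the child's output - the point is to keep reading.
                }
            } catch (IOException ignored) {
                // Stream closes when the process exits.
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Launch a child JVM; any chatty process would do.
        String java = System.getProperty("java.home") + "/bin/java";
        Process process = new ProcessBuilder(java, "-version").start();

        // Gobble both stdout and stderr - a verbose child can hang on either.
        new StreamGobbler(process.getInputStream()).start();
        new StreamGobbler(process.getErrorStream()).start();

        int exitCode = process.waitFor();
        System.out.println("child exited with " + exitCode);
    }
}
```

Without the two gobbler threads, a sufficiently verbose child would eventually block exactly as described above.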

Obvious in retrospect but not immediately apparent when you forget and you're confronted by this behaviour.

Tuesday, January 18, 2011

Java producing native code

I rather like this.

Watch Java convert byte code to native code by downloading JDK 7 from here. Make sure you download the DEBUG file.

Then, run your code with something like:

/home/henryp/Tools/JDK_1_7_0_debug/jdk1.7.0/fastdebug/bin/java -XX:+PrintOptoAssembly -XX:CompileThreshold=5 -server -cp ./bin com.henryp.lang.ThreadWaitingMain

And watch glorious native code being generated.

The magic is in the -XX:+PrintOptoAssembly flag. You also need -server but the -XX:CompileThreshold=5 is optional. It just says how many times the code needs to be executed before the JIT compiler kicks in.

You can see the atomic Java classes (eg, AtomicInteger) being reduced to little more than cmpxchg instructions (on x86 architectures at least) since they use compare-and-swap semantics of the underlying hardware to achieve their ends. See this excellent article for more information.
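A small program to feed to those flags. The class name is mine; run it with -XX:+PrintOptoAssembly as above and, once the loop is hot, the incrementAndGet retry loop should show up as little more than a lock cmpxchg on x86:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    public static void main(String[] args) throws InterruptedException {
        final AtomicInteger counter = new AtomicInteger();

        // Two threads race to increment. incrementAndGet spins on a
        // compare-and-swap until its expected value wins, so no update is lost.
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    counter.incrementAndGet();
                }
            }
        };

        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();

        // Lock-free, yet no increments are lost.
        System.out.println("counter = " + counter.get());
    }
}
```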

Sunday, January 9, 2011

JVM optimization

Want a quick and easy way to make your Java apps run faster? Upgrade your JVM.

Last year, I was using JDK 1.6.0_05 to run our app. I monitored it with YourKit and saw my application threads being blocked by the Reference Handler thread on the monitor of class java.lang.ref.Reference$Lock.

Upon upgrading to JDK 1.6.0_20, there was none of this particular contention. What's more, the app was running about 25% faster without me doing anything!

So, what was causing this contention?

In one sample, the JVM ran for less than 9 minutes and was only doing something “interesting” for about 2 of those, yet the total time spent contending for this lock was about 111 seconds. This time was spread across roughly 50-60 threads, and YourKit was saying that there were about 50 different methods that appeared to be attempting to attain the lock. Of the handful of methods I looked at, I couldn’t see anything in the code that was obviously suspicious. I mean, it was literally blocking in a simple get method!

Let's see what happens when the JDK blocks. Let's take this code:

package com.henryp.lang;

public class ThreadWaitingMain {

    public static void main(String[] args) {
        final Object lock = new Object();
        System.out.println("\nlock.hashCode = " + Integer.toHexString(lock.hashCode()) + "(" + lock.hashCode() + ")");

        Thread firstThread = new Thread(createLockContender(lock));
        firstThread.setName("Thread no. 1");
        firstThread.start();

        Thread otherThread = new Thread(createLockContender(lock));
        otherThread.setName("Thread no. 2");
        otherThread.start();

        System.out.println("Main thread finishing");
    }

    private static Runnable createLockContender(final Object lock) {
        return new Runnable() {

            public void run() {
                while (true) {
                    synchronized (lock) {
                        try {
                            System.out.println(Thread.currentThread().getId() + ":" + Thread.currentThread().getName() + ": Have lock. About to sleep.");
                            Thread.sleep(1000);
                            System.out.println(Thread.currentThread().getId() + ":" + Thread.currentThread().getName() + ": Relinquishing lock...");
                            lock.wait(1000);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
            }
        };
    }
}
Let's run it.

lock.hashCode = 1172e08(18296328)
Main thread finishing


Which PID is it?

[henryp@vmwareFedoraII Test]$ ps aux | grep java
henryp 12148 0.9 0.9 399372 10028 pts/6 Sl+ 22:13 0:00 /usr/java/jdk1.6.0_18/bin/java -server -Dfile.encoding=UTF-8 -classpath /home/henryp/workspace/Test/bin com.henryp.lang.ThreadWaitingMain
henryp 12168 0.0 0.0 4204 704 pts/3 S+ 22:14 0:00 grep java

OK, it's 12148.

By the way, running:

[henryp@vmwareFedoraII Test]$ ps -eLf | grep java

gives us not just the processes but all of the threads:

henryp 12148 11567 12148 0 12 22:13 pts/6 00:00:00 /usr/java/jdk1.6.0_18/bin/java -server -Dfile.encoding=UTF-8 -classpath /home/henryp/workspace/Test/bin com.henryp.lang.ThreadWaitingMain
henryp 12148 11567 12154 0 12 22:13 pts/6 00:00:00 /usr/java/jdk1.6.0_18/bin/java -server -Dfile.encoding=UTF-8 -classpath /home/henryp/workspace/Test/bin com.henryp.lang.ThreadWaitingMain
henryp 12148 11567 12155 0 12 22:13 pts/6 00:00:00 /usr/java/jdk1.6.0_18/bin/java -server -Dfile.encoding=UTF-8 -classpath /home/henryp/workspace/Test/bin com.henryp.lang.ThreadWaitingMain


But which ones are ours?

[henryp@vmwareFedoraII Test]$ jstack 12148 | grep "Thread no."
"Thread no. 2" prio=10 tid=0xb6a7d400 nid=0x2f84 waiting on condition [0x9f388000]
"Thread no. 1" prio=10 tid=0xb6a7bc00 nid=0x2f83 in Object.wait() [0x9f3d9000]

Very useful is our old friend jstack. Right, let's convert those hex values into decimal:

[henryp@vmwareFedoraII Test]$ printf "%d\n" 0x2f84
12164
[henryp@vmwareFedoraII Test]$ printf "%d\n" 0x2f83
12163

Now, let's see which Linux kernel commands are being called by these threads:

[henryp@vmwareFedoraII openjdk7]$ strace -p 12163 > ~/strace_12163.txt 2>&1 &
[henryp@vmwareFedoraII openjdk7]$ strace -p 12164 > ~/strace_12164.txt 2>&1 &

Let's tail one of these:

[henryp@vmwareFedoraII ~]$ tail -f strace_12164.txt
clock_gettime(CLOCK_MONOTONIC, {16852, 857324137}) = 0
clock_gettime(CLOCK_MONOTONIC, {16852, 857391406}) = 0
gettimeofday({1294611927, 576256}, NULL) = 0
clock_gettime(CLOCK_REALTIME, {1294611927, 576322753}) = 0
futex(0xb6a7e9a4, FUTEX_WAIT_PRIVATE, 1, {0, 999933247}) = -1 ETIMEDOUT (Connection timed out)
futex(0xb6a7e228, FUTEX_WAKE_PRIVATE, 1) = 0
clock_gettime(CLOCK_MONOTONIC, {16853, 859389474}) = 0
futex(0x9f80090c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x9f800908, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0xb6a7c828, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0xb6a7e96c, FUTEX_WAIT_PRIVATE, 689, NULL) = 0
futex(0xb6a7e028, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
futex(0xb6a7e028, FUTEX_WAKE_PRIVATE, 1) = 0

And it's this futex - "fast userspace mutex" - that we're interested in. It's how Linux does its locking.

But that's where the story ends as I am not a Linux guru and need to do more research. So, I'm afraid this is an incomplete post. I wrote it to keep a record of what I am working on in my spare time.

Saturday, January 8, 2011

An Ontology of Coupling

Any programmer worth his coffee knows about coupling. So, why do we often see tightly coupled programs?

Well, "coupling is unavoidable. What we are most interested in when exploring coupling within IT automation is how close this relationship actually is or should be" [1].

What is coupling? "Coupling is a measure of interconnection among modules in a program structure" according to Pressman [2].

Let's be pedantic and ask: what is a module? Pressman again: "Software is divided into separately named and addressable components, called modules, that are integrated to satisfy problem requirements" [3].

[ASIDE: Pressman makes a point about why this is important by expressing human frailty in mathematical terms: "Let C(x) be a function that defines the perceived complexity of a problem x ... For two problems p1 and p2 ... [an] interesting characteristic has been uncovered through experimentation in human problem solving. That is C(p1 + p2) > C(p1) + C(p2)". That is, human comprehension of complexity can be modeled as a non-linear system.]

The reason I am being pedantic is that most programmers think about coupling between classes. But it could be between any parts of the code base since the term module is sufficiently ambiguous.

Case in point: there was a requirement for different Spring beans in our application depending on whether the app was running in London or New York. The developer decided to write two Spring config XML files each describing the beans required. Which was loaded depended on a command line switch that explicitly stated the file name.

Although this worked, it added unnecessary coupling. All the start-up shell scripts across Dev, QA and Prod in London and New York had to change.

And that's the problem with too much coupling in enterprise applications. It doesn't break the compile and the application will work. It's just costly to maintain.

Another solution was to define one of our classes as a factory in the Spring XML config. This class could determine in which city it was running and return an object of the correct class (both NY and London classes had the same interface).

This is a better solution since the many (admittedly disorganized) start-up scripts did not require changing.
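The factory approach can be sketched in plain Java. The names below (RegionServiceFactory, TradingCalendar, the REGION property) are invented for illustration - in the real system this was a Spring factory bean and the region came from the environment:

```java
public class RegionServiceFactory {

    interface TradingCalendar {
        String marketOpen();
    }

    static class LondonCalendar implements TradingCalendar {
        public String marketOpen() {
            return "08:00 GMT";
        }
    }

    static class NewYorkCalendar implements TradingCalendar {
        public String marketOpen() {
            return "09:30 EST";
        }
    }

    // The factory, not the start-up script, decides which implementation
    // to use. Callers depend only on the TradingCalendar interface.
    static TradingCalendar calendarFor(String region) {
        if ("NY".equals(region)) {
            return new NewYorkCalendar();
        }
        return new LondonCalendar();
    }

    public static void main(String[] args) {
        // Default to London if no region is configured.
        String region = System.getProperty("REGION", "LDN");
        System.out.println(calendarFor(region).marketOpen());
    }
}
```

The key property: adding a third region touches only the factory, not the many scripts that launch the application.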

Gradations of Coupling

Pressman's scale of coupling looks like this (ordered from loose to tight):

Data coupling - Simple arguments are passed between modules.

Stamp coupling - Data structures are passed.

Control coupling - "A variable that controls decisions in a subordinate or superordinate module" is passed.

External coupling - "When modules are tied to an environment external to software. For example I/O couples a module to specific devices, formats and communication protocols".

Common coupling - "When a number of modules reference a global data area".

Content coupling - One module makes use of data or control information maintained within the boundary of another module.

If the original solution had passed a flag describing which city the software was running in, it would be an example of control coupling. But by passing the file name of the XML, this was content coupling; the shell script had to know not only that the app was storing beans in an auxiliary file but how Spring loaded this file (relative to the original Spring config file as it happens).

Steve McConnell calls this last and most insidious kind of coupling "Semantic Coupling".

"Semantic coupling is dangerous because changing code in the used module can break code in the using module in ways that are completely undetectable by the compiler. When code like this breaks, it breaks in subtle ways that seem unrelated to the change made in the used module, which turns debugging into a Sisyphean task." [4]

Another example of this in my current workplace is an entitlement system that takes key/value pairs to determine the access rights of a user. The problem is: what if the system is updated and needs more or different key/value pairs? Since the change is not detected by the compiler and calling code breaks in subtle ways, this is a reasonable example of McConnell's Semantic Coupling.

I would argue that a better solution would be to pass the unique ID of the user and just let the service work out what was needed.
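The contrast can be sketched as follows. Everything here is hypothetical (the method names, the "role" and "desk" keys, the hard-coded lookup standing in for the entitlement store) - it only illustrates the shape of the two APIs:

```java
import java.util.Map;

public class EntitlementsDemo {

    // Semantically coupled: every caller must know exactly which keys the
    // entitlement system expects, and the compiler cannot check them.
    static boolean canTradeByAttributes(Map<String, String> attributes) {
        return "TRADER".equals(attributes.get("role"))
            && "LDN".equals(attributes.get("desk"));
    }

    // Looser: callers pass only a user ID and the service works out the rest.
    static boolean canTrade(String userId) {
        // Stand-in for a real query against the entitlement store.
        return "user42".equals(userId);
    }

    public static void main(String[] args) {
        System.out.println(canTrade("user42"));
        System.out.println(canTrade("user99"));
    }
}
```

If the entitlement rules change, only the service changes in the second style; in the first, every call site silently starts passing the wrong attributes.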


Semantic coupling seems to be not as well known as it should be. Most programmers are familiar with syntactic coupling (change an interface and the class that depends on it may break). Semantic coupling is much more subtle and even hardened engineers get it wrong.

[1] SOA Principles of Service Design (p165), Thomas Erl
[2] Software Engineering: A Practitioner's Approach (p375), Fourth Edition, Roger S Pressman
[3] ibid, p365
[4] Code Complete, Second Edition (p32), Steve McConnell