NetKernel News Volume 2 Issue 19

March 4th 2011

What's new this week?

Catch up on last week's news here

Repository Updates

The following updates are available in both the NKEE and NKSE repositories...

  • database-relational 1.9.1
  • http-server 2.3.1
    • Changed POST parameter encoding detection; Jetty 7 now assumes a default of UTF-8, whereas previous versions assumed ISO-8859-1
    • Logger updated to log to an independent log.
  • nkse-dev-tools 1.29.1
    • Request Trace and Class Trace tools now show space versions to differentiate between multiple versions of the same module in a single system.
    • Grammar's Kitchen - added workaround for Windows bug of expanding line breaks after each round trip of grammar form submission. Windows doesn't consistently round trip textarea content.
    • New Module Wizard refresh (see below)
  • wink-wiki 1.14.1
    • Fixed a bug in the javascript version selection in the editor diff form.

The following update is available in the NKEE repository...

  • nkee-arp 1.4.1
    • Fixed bug where packages with : (colon) in name would break "generate repository" on certain OS filesystems.

Tools Refresh: New Module Wizard

Ordinarily I create modules by hand from scratch. But I know it can be useful to have a tool to take care of the boilerplate. Last week I had reason to use the New Module Wizard and discovered I didn't like it very much. So I spent an hour tidying it up. The result is included as part of a number of tool updates in the nkse-dev-tools package.

The notable new features of the refreshed New Module Wizard are:

  • Option for "no-script" to create a vanilla module (my most common use-case).
  • Rootspaces now have a uri and a human readable descriptive name.
  • Documentation resources go in their own rootspace to keep development space clean (best practice)
  • Mapper templates now have tidier formatting.
  • "SimpleDynamicImportHook" only declared as a resource in the development space if you choose to attach to a fulcrum.
  • Unit tests create basic stub if "no-script" chosen.
  • Better CSS on the final page to highlight the links that let you quickly go and explore the newly created module with the other system tools.

FWIW, the other tool refreshes were to the Request and Class Trace tools so that they now show the module version number, which allows you to differentiate multiple generations of the same module should you have more than one version commissioned.

Grammar's Kitchen has been fixed to prevent the roundtrip of a grammar on Windows from exploding with newline whitespace! Yep, Windows doesn't consistently round trip <textarea> - it sends \r\n when a form is submitted, but on receiving \r\n as content of a textarea it renders it as two line breaks, and then sends you back \r\n\r\n !! So if you roundtrip the same string it gets ever-expanding whitespace line separation. This is consistent across all tested browsers on Windows.

So if you ever need to "roundtrip" a <textarea>, always strip the \r on the inbound form submission to ensure you have platform-neutral text. cf. all my other rants about resource consistency for my feelings on this! (Thanks to René Luyckx for reporting this).
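In Java, the inbound normalisation is a one-liner. Here's a minimal sketch - the class and method names are illustrative, not part of any NetKernel API:

```java
public class TextareaNormalize
{
    //Strip carriage returns from inbound form text so that repeated
    //round trips of a <textarea> on Windows don't accumulate line breaks
    public static String normalize(String aText)
    {
        return aText.replace("\r", "");
    }
}
```

Apply this to every submitted textarea value before storing or re-rendering it, and the text stays platform neutral.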

*New* sqlTransactionOverlay

There's a new sqlTransactionOverlay in the mod:db module. It enables you to create wrapped transactional ROC address spaces. After updating from the repositories the documentation is here...


There's a detailed description of how it works below, together with a walkthrough of how it's implemented and how you can easily customise the code to use the pattern it implements for arbitrary transaction managers.

But first confession time...

Background Story

Sometimes you realise you've been living with ROC for so long that you are taking for granted stuff that might not be apparent to other people. For example, last week I was asked how you could manage database transactions in an ROC solution.

The assumption I was guilty of making was that a user starting to explore ROC would rapidly see that all state, including configuration state, is a resource and may be requested in the ROC domain. So for example, transport configuration, client configuration etc etc are resources and may be either routed to static configuration files or, since they're resolved and requested in the ROC domain, to an arbitrary endpoint that can dynamically compute the resource representation (configuration state).

The same pattern holds true for the RDBMS tools, active:sqlQuery, active:sqlUpdate etc etc. The configuration resource that these tools require is a ConnectionPool resource which is requested (and transrepted) in the ROC domain.

If you look at the Javadoc for the ConnectionPoolAspect (link requires that your javadoc is built in the distribution) you'll see it provides the ability to be a transactional connection pool with the method getTransactedConnection().

In my mind I'd put one and one together and, with my being too long in ROC land, had assumed that the next step - of providing a transactional connection resource for the RDBMS tools - was "just a matter of you setting up the architecture".

And so it is (read this to the end and then think how it could be done with the existing pluggable-overlay). However, expecting people to be able to compose a transactional ROC address space from the Lego of basic components was my crime of complacency.

Self-flagellation aside, we can at least make it drop dead simple for you by providing the necessary dozen lines of code in a reusable component. Hence the update to mod:db, which now incorporates a really simple transaction overlay.

And here's what it does (taken from the reference documentation)...


The transaction overlay is used to wrap the space in which your transactional active:sqlXXXX requests will be made. It is entirely open what services you implement in this space. All SQL statements that you make in this space will occur within a managed transaction.

The architectural pattern for deploying the transaction overlay is shown below...

The essential requirement is that the overlay is provided with the resource identifier of the connection pool resource as its configuration parameter - for illustration let's call this "res:/RDBMSConfig".

When a request is made to your wrapped transactional process, located inside the wrapped space, the following steps are taken...

  1. The sqlTransactionOverlay intercepts the request.
  2. It issues a second request for a transactional JDBC connection from the res:/RDBMSConfig connection resource specified in its configuration.
  3. It constructs a transient injected space and inserts this into the request scope of the intercepted request for your service (the small inserted cloud in the diagram).
  4. The transacted connection is placed as a resource in this injected space with the same identifier res:/RDBMSConfig as the connection pool resource - effectively masking the external configuration by the transient local transacted resource.
  5. Your service is then invoked with the request that was intercepted.
  6. Any request you make to an active:sqlXXXX service should specify the configuration argument and this must be the same identifier as was used to configure the transaction overlay. It follows that when your request to the active:sqlXXXX tool is made, it will use the transacted connection obtained from the inserted transient transaction space.

Finally, your service completes and returns a response. The transaction overlay intercepts the service's response; if the service was successful, the transaction is committed and the response is relayed back to the original requestor. If the service, or any sub-request within the space, throws an exception, the sqlTransactionOverlay issues a rollback on the database transaction and re-throws the exception to the original requestor of the service.

So that's how it works - it's pretty simple to wrap your service space to make the SQL requests inside it transactional. An example of how you'd configure your own module to use the overlay is provided in the RDBMS docs.
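As an illustrative sketch only - the prototype id and parameter wiring here follow the usual standard-module declaration conventions, so check the RDBMS docs for the definitive form - wrapping a space in your module.xml might look like:

```xml
<overlay>
  <prototype>sqlTransactionOverlay</prototype>
  <configuration>res:/RDBMSConfig</configuration>
  <space>
    <!--Your transactional services live here; any active:sqlXXXX
        request made inside this space runs in the managed transaction-->
    <import>
      <uri>urn:org:netkernel:mod:db</uri>
    </import>
  </space>
</overlay>
```

The key point is that the <configuration> value is the same identifier ("res:/RDBMSConfig") that your services pass as the configuration argument to the active:sqlXXXX tools.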

But this is a powerful pattern which you might like to customize, so here's how it's done...

How To Implement your own Transactional Overlay

The sqlTransactionOverlay implements the entire pattern in just a dozen lines of code. It's very simple to take the sqlTransactionOverlay source code, located in the mod:db module, and customize it to implement your own transaction overlay to work with an arbitrary transaction manager (e.g. a message queue).

Here's the source code listing with a step-by-step commentary...

package org.netkernel.rdbms.endpoint;

import org.netkernel.layer0.nkf.*;
import org.netkernel.layer0.urii.ValueSpace;
import org.netkernel.module.standard.endpoint.TransparentOverlayImpl;
import org.netkernel.rdbms.representation.ConnectionPoolAspect;
import org.netkernel.rdbms.representation.IAspectDBConnectionPool;

public class SQLTransactionOverlay extends TransparentOverlayImpl
{
    public void onRequest(String elementId, INKFRequestContext aContext) throws Exception
    {   //Get the identifier of the RDBMS connection pool resource
        String configIdentifier=(String)getParameter("configuration");
        //Source and implicitly transrept the ConnectionPool
        ConnectionPoolAspect connectionPool=aContext.source(configIdentifier,ConnectionPoolAspect.class);
        //Get the transacted connection pool
        IAspectDBConnectionPool transactedConnectionPool=connectionPool.getTransactedConnection();
        //Construct a value space for inserting into the request scope
        ValueSpace vs=new ValueSpace(1);
        //Place the transacted connection resource into the space with the
        //*same* identifier as the external connection pool resource
        vs.put(configIdentifier, transactedConnectionPool, null);
        //Clone the intercepted request
        INKFRequest requestOut=aContext.getThisRequest().getIssuableClone();
        //Attach the inserted space to the request scope of the cloned request
        requestOut.injectRequestScope(vs);
        boolean commit=false;
        try
        {   //Issue the request (to the user's service)
            INKFResponseReadOnly<?> respIn=aContext.issueRequestForResponse(requestOut);
            //Relay the response
            aContext.createResponseFrom(respIn);
            //Commit the SQL transaction
            //(commit()/rollback() shown here are assumed from the
            //transacted connection pool's javadoc)
            transactedConnectionPool.commit();
            commit=true;
        }
        finally
        {   if (!commit)
            {   //An exception occurred and will be thrown to the outer
                //requestor. Rollback the transaction.
                transactedConnectionPool.rollback();
            }
        }
    }
}

The pattern can actually be more generally thought of as "resource masking by transient contextual space-injection" (or "switching horses mid-stream"!). You could use the pattern for all sorts of things, but notably you could readily use it with an alternative transaction manager, maybe obtained with JNDI, or other direct APIs for your transaction system. Inside this code, "it's just Java".

Finally, the extra knowledge you need to declare your own prototype overlay...

Declaring a Prototype

Typically with overlays you want to construct an instance as an endpoint that declaratively references the id of a prototype. To set your implementation up as a prototype, you would register your implementation class in your library space using a prototype declaration. Here's how it's done in the urn:org:netkernel:mod:db space...

  <prototype>
    <id>sqlTransactionOverlay</id>
    <class>org.netkernel.rdbms.endpoint.SQLTransactionOverlay</class>
    <parameter name="configuration" desc="Connection pool that is going to be transacted" type="string" min="0" default="res:/etc/ConfigRDBMS.xml" />
    <parameter name="space" type="space" />
  </prototype>

Notice how it specifies both the configuration parameter and a space parameter - yes, this is the space that gets wrapped by the overlay, and it is actually just a resource referenced as a parameter! You can declare your overlay prototype with other parameters if required.

Who Teaches the Teacher? Lessons on Dynamic Systems

Last week I was out of the office, visiting with Colruyt (the leading supermarket chain in Belgium) and providing a training/consulting session on NetKernel and ROC at Steria Benelux (Steria is a very large pan-European IT professional services group).

I've always thought that to be a good teacher, you have to listen to questions and really think about the answers. A simple question can often challenge assumptions and help you gain a clearer perspective on things. So as much as I was explaining NetKernel and ROC, I was also listening, learning and questioning myself.

You'll have inferred from the number of tweaks and tool refreshes this week that I brought home some valuable insights. Nearly all of the refreshes were very minor to implement - on the order of "single lines of code" changes - but that's not to dismiss them as insignificant. For example, the sqlTransactionOverlay is a very powerful component; we should have provided it years ago. But it needed a simple and perceptive question, in the context of the Belgian Ministry of Finance's tax portal, to highlight the fact that it wasn't there already.

So you see several small technical enhancements this week. But then there are the non-technical things you learn...

Maintaining Equilibrium

I learned something that I take for granted but that I need to spend more time explaining.

Over the course of three days I observed that smart classical coders bring their own assumptions and working practices to NK/ROC. Of course, we all do whenever we learn something new, but one particular thing stood out.

I sense that there is a pre-assumption that it is expensive and time consuming to make changes to an enterprise system. For example, it's not uncommon for build-deploy-test cycles to take on the order of 10 minutes.

Whether conscious or subconscious, the conventional working practice is to make as many changes as possible during the development phase, in order to "justify the downtime" of the build-deploy-test verification cycle.

The problem with this assumption when you move to NetKernel is that it fails to recognize that NetKernel and the ROC address spaces of your solution are a "live dynamic system".

Loose Coupling? Pah! No Coupling...

An ROC system is not simply "loosely coupled". In real terms, it is not coupled at all! The relation between one software endpoint and another is instantaneously bound for each and every request, and only during that request (just like the Web).

The job of an ROC developer/architect is to shape the spacial/architectural context so that when a request is made, the binding is resolved as required. This is the new idea which we call "separating architecture from code".

It turns out, and as the guys on the course last week will attest, there are actually relatively few basic building blocks needed to create very sophisticated architectural structure for your code. Things like fileset, endpoint declarations, import (both static and dynamic) and overlays (the most powerful unit of architecture) with the mapper as the most common overlay embodiment.

Now here's the thing. Whenever you modify a space by introducing one of these building blocks, it is instantaneously deployed - the next request in the system will be resolved (bound) by the new architectural change. No downtime. Instant.

This effectively eliminates "build-deploy" from the traditional software cycle time. With ROC you have very short, effectively instant, "change-verify" cycles.

Now the thing to bear in mind is that in ROC we are talking about two distinct focuses of development. There is the development of architectural structure (modification and composition of spacial components), which is distinct (and largely new for a classical coder), and there is the development of the "business-logic" that goes inside an endpoint, i.e. the code that manipulates and computes the resource state (this is familiar territory).


So here's the insight I gained last week.

I observed that people would default to making several untested changes to their spaces before verifying the consequences. The Pavlovian conditioning of classical software: do a lot to justify the downtime.

I saw that this default behaviour was the root of most of their bugs: a simple error in the spacial composition, introduced by one small but incorrect change, can't easily be isolated or reverted since it gets compounded with the effects of the additional changes.

Change to spacial architecture is best done incrementally.

In ROC change is cheap - but spacial change is invariably very powerful. Why? Because spacial change applies to the relation between entire sets of resources. You are conducting infinities of monkeys.

So the trick is to understand that a live dynamic ROC spacial architecture defines the boundary conditions within which the instantaneous dynamic equilibrium of a live system will operate when requests are made. With this in mind, and with deployment being instant, it really pays to verify frequently.

Change Verification Techniques

When you're new to ROC I teach people to verify every architectural change (edits to module.xml). There are a number of tools that let you get into a simple change-verify cycle.

The zeroth tool is to look at NetKernel's output in stdout and/or the logs. Every time a dynamic module.xml is saved, the module is decommissioned and recommissioned. All modules are validated when commissioned, and reports of any misconfiguration are sent to the log, which you can see in stdout or view in the log viewer. This is your canary.

Assuming you've not just got something like unparsable XML, the first tool to get familiar with is the "Space Explorer". This gives you an instantaneous view of your spacial structure - if you don't see your endpoints here then neither will a request when it is issued.

The "Request Trace" tool lets you resolve and execute probe requests to a space you are developing and gives a scalpel-like confirmation that your assumptions about the resource set you are constructing are valid.

The XUnit test tool allows you to take a request (which you might have used in the Request Trace tool) and put it into an ROC test suite - a set of requests that will be issued against your emerging spacial development. The test suite lets you then introduce additional constraints (assertions) to validate the spacial boundary conditions, such as response time, representation form, transreption routes etc.
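For illustration, a minimal test declaration might look like the sketch below. The request identifier is a placeholder, and the assertion tags shown are only a small subset - see the XUnit documentation for the full assertion vocabulary:

```xml
<testlist>
  <test name="service returns expected representation">
    <request>
      <identifier>res:/my/service/example</identifier>
    </request>
    <assert>
      <!--Assert on representation value and response time-->
      <stringEquals>expected result</stringEquals>
      <maxTime>500</maxTime>
    </assert>
  </test>
</testlist>
```

Each test issues its request against your space and fails fast if an assertion doesn't hold - exactly the short change-verify loop described above.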

Lastly, the Visualizer gives you a complete view of both the resolution and execution of a request - this is the transitory binding state of the ROC system. You can combine this tool with the unit test tool - turn it on and then use the unit test tool to issue a particular request for visualisation.


So, if you're new to ROC, it pays to remember:

  1. Architectural change is cheap but powerful
  2. Change is instantaneous - verification is instantaneous
  3. Verify early, verify often.
  4. Reverting a change is also cheap and instantaneous.

A stitch in time saves nine.

Of course, very soon you start to gain experience and see patterns and can confidently make more than one change before verifying. But it still pays to check frequently.

In fact, I'm working on a commentary showing a real over-the-shoulder example of a typical ROC practitioner's development cycle. I'll have something substantial for you in the next newsletter - but as a taste of next week, I just worked out that in the initial 30-minutes of my development exercise I verified my spacial changes 13 times. That's about once every 2 minutes. Of course the initial phase of development is when you are doing most spacial changes, so this drops off massively when moving to the main development phase of a project.

So to conclude, I sense that this is a cultural rather than a technical matter. Ironically, once a change in the spatial architecture has been made and verified, it is typically incredibly robust and will outlive the system it is deployed on. A paradox!

To show I'm not just making this stuff up, I spotted this tweet in the twittersphere, that illustrates what I mean...

@databliss Websphere cycle time (make a change, deploy, run test) was 10 min. Switched to #netkernel, time is now 8 seconds.

For the record, on my current mid-tier laptop (more of which below), the recommission cycle is about 1 second.

NFJS Article - NetKernel: Concurrency Inside

Brian Sletten has an article in the No Fluff Just Stuff Magazine this month. It's a very nice story about his experience of discovering and understanding the linear scaling of NetKernel. The magazine is subscription-only at the moment - if you have access you'll find the article in the March 2011 edition, Vol III Issue 1, on pages 25-30.

Here's a quote I enjoyed... "[When first moving the system to NetKernel] I was astonished to see a 4x throughput increase on the same hardware without changing a single line of code."

Although I also like this too...

"There was no reason these steps could not be run in parallel, but because of the underlying complexities (native code, licensing and thread safety issues, the need to rendezvous after each one finished, etc.) it was less trivial than it might have been. The team had attempted this goal with the previous environment for months and had never gotten it working predictably ... [With NetKernel] I was done in ten minutes."

Sorry for the teasers if you can't get hold of it. However, Brian says he'll be able to release it publicly after an exclusive embargo period, so as soon as it's available we'll post a link.

Incidentally I got around to looking at the performance of my new laptop with the nk-perf tool (install via Apposite to test your own system). Recall that this is a bog standard mid-range laptop with a dual core Intel Core i3 370M, with hyperthreading, so my OS sees it as logically quad core. Here's the scaling chart I get with nk-perf...

The NK scaling discussion (cited by Brian in the article) explains the details of this diagram, if you've read it you'll recognise this graph is showing I have a highly linear NK stack.

It's interesting that I'm running on the same Linux kernel and Java as was used on the AMD 8-core server we show in the NK scaling discussion. It therefore appears that, all things being equal, the dominant non-linearity in that earlier test was the particular AMD hardware layer (the Intel 8-core server has a profile much like that shown above).

Either way, as Brian reveals to the world - NetKernel and the ROC abstraction is highly linear. If you run nk-perf and don't see this clean linear response curve then you need to look below NK at your combination of Hardware, OS and Java.

Incidentally, I get a kick out of this: notice the very slight curve in the response time (green line) as we get to 3 and 4 concurrent requests. It seems that hyperthreading is approximately equivalent to a true core, but we don't see the sharp point of inflection that we see with true cores, which shows there is a little loss of linearity. So a hyperthreaded load is not exactly equivalent to a true core. Only a truly linear software system like NetKernel could reveal these subtleties.

Et Tu, Brute?

Julius Caesar knew developers need to be extra careful at this time of year. "Beware the IDEs of March", he said. But fear not, it's not long until April now, when we'll be revealing NetKernel's new Composite Development Environment (CDE) at the conference.†

This is just one of the many reasons not to miss NKWest2011, Fort Collins, Colorado, USA, 12th-14th April. Find out all about it and book your place today to avoid disappointment...

We are hearing tragic tales of political back-stabbing in ROC development teams. People will stop at nothing to be the ones chosen to attend. Et Tu, Tony?

Watch your backs.

† Julius was actually not much of a programmer, unlike his best mate Pontius Pilate, who, if Monty Python's Life of Brian is a credible source, was into Lisp. (These and more terrible puns at the conference - also I will be tap dancing.)

Have a great weekend.


Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research

© 2008-2011, 1060 Research Limited