NetKernel News Volume 1 Issue 28

May 14th 2010

What's new this week?

Repository Updates

  • kernel: A very rare potential bug in the handling of busy endpoints with asynchronous requests has been fixed.
  • layer0: Fixed NPE in NKF async request handler if exception is thrown in NKFTransport. Change to Pass-by-Value space to support NKP.
  • docs: The presentation of dynamically generated documentation driven from endpoint metadata has been improved.

NetKernel Enterprise Edition 4.1.0 preview 5 with NKP

We're finally ready to hand over a build of NKEE featuring the NetKernel Protocol client/server infrastructure. You can download NKEE-4.1.0-preview5 here... (requires registration)

If you've not been following the progress reports, here's a very quick summary of what NKP offers (taken from docs http://localhost:1060/book/view/book:mod:nkp/)

"NetKernel Protocol (NKP) is designed to transparently relay requests between NetKernel instances.

NKP supports long-lived transparent bridging of NetKernel address spaces - you might consider this to be something like ROC's generalized equivalent of NFS or Samba, only it goes beyond "file sharing" to support the full richness of the ROC abstraction.

In addition NKP supports instantaneous "oneshot" stateless connections between client and server. You could consider this mode as a generalization of the stateless REST/HTTP client-server pattern. As with long-lived connections, a oneshot connection fully embodies the richness of the ROC abstraction."
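As a rough mental model of those two connection styles (purely illustrative Python, not the real NKP API - the class and URI names here are made up), the difference is whether the remote space becomes part of local resolution or is addressed for a single request:

```python
# Toy sketch: bridged vs oneshot connections, modelled as resource
# lookups against a local or a remote "address space".

class Space:
    """A trivial stand-in for a ROC address space: URI -> resource."""
    def __init__(self, resources):
        self.resources = resources

    def source(self, uri):
        return self.resources[uri]

class BridgedSpace(Space):
    """Long-lived bridge: the remote space appears inside local resolution."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote

    def source(self, uri):
        # Try locally first, then fall through to the bridged remote space.
        try:
            return self.local.source(uri)
        except KeyError:
            return self.remote.source(uri)

def oneshot(remote, uri):
    """Stateless 'oneshot': one request, one response, no standing bridge."""
    return remote.source(uri)

server = Space({"res:/greeting": "Hello from the server"})
client = BridgedSpace(Space({"res:/local": "local resource"}), server)

print(client.source("res:/greeting"))    # resolved transparently via the bridge
print(oneshot(server, "res:/greeting"))  # single stateless request
```

The point of the bridged form is that the client never needs to know which resources are remote - resolution just spans both spaces.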

We'll soon be adding some detailed tutorials and demos (such as how to use NKP for a flexible lightweight transactional message bus). In the meantime, to get you started there are a bunch of packages you can install to see some worked examples and use cases. [To install, download the nkp.jar file and use Apposite's "Upload package" feature].

NKP unit tests:

This package contains a comprehensive set of unit tests. It locally instantiates many servers and shows both bridged and oneshot styles of client connection...

Notably the last two tests make remote oneshot requests to a public NKP ping server we have set up on "" running on the default port 10600. We'll keep this up all the time so you can always call it to test your setup/networking etc.

The ping server has the following endpoints:

  • SOURCE requests return a BinaryStream containing "Ping"
  • SOURCE requests return the operand resource as a BinaryStream - if the operand is in the requestor's address space you must enable the client to permit server-side requests with <exposeRequestScope>

To see the implementation details of these ping services, or to set up your own internal NKP ping server, the source is available here...

Lastly a simple demo of a threadless asynchronous pubsub (hub-spoke) pattern is available here...

The pubsub demo sets up a hub that will listen for subscribe requests. Once subscribed a spoke can publish messages to the hub. The hub relays the message to all subscribers in the form of a subrequest into their address spaces.
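The hub's relay logic can be sketched as follows (illustrative Python; in the real demo the "deliver" callbacks are of course NKP subrequests into each subscriber's address space, not local function calls):

```python
# Toy sketch of the hub-spoke relay: the hub tracks subscribers and
# relays each published message to every one of them.

class Hub:
    def __init__(self):
        self.subscribers = {}  # spoke name -> delivery into that spoke's space

    def subscribe(self, name, deliver):
        self.subscribers[name] = deliver

    def unsubscribe(self, name):
        self.subscribers.pop(name, None)

    def publish(self, sender, message):
        # Relay the message to all subscribers (including the sender).
        for name, deliver in self.subscribers.items():
            deliver(sender, message)

# Two spokes, each with its own inbox standing in for its address space.
inbox_a, inbox_b = [], []
hub = Hub()
hub.subscribe("spokeA", lambda s, m: inbox_a.append((s, m)))
hub.subscribe("spokeB", lambda s, m: inbox_b.append((s, m)))
hub.publish("spokeA", "hello")
```

Both inboxes receive the relayed message; the hub itself holds no message state beyond the subscriber list.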

The demo has two independent spokes, each with a simple AJAX UI from which you can subscribe/unsubscribe and send messages. After you install the demo the HTTP UI will be found here...


Because the UI is stateless we have implemented a simple stateful message buffer in the client-side space which is mapped to receive inbound messages and which is also polled by the AJAX clients for the UI message list view updates.
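A minimal sketch of such a polled message buffer (illustrative Python; the names and the id-based polling scheme are hypothetical, not taken from the demo's implementation) might look like this:

```python
from collections import deque

class MessageBuffer:
    """Client-side buffer: inbound messages land here and the stateless
    AJAX UI polls it for anything it hasn't yet displayed."""
    def __init__(self, capacity=100):
        self.messages = deque(maxlen=capacity)  # bounded: old messages drop off
        self.next_id = 0

    def receive(self, message):
        self.messages.append((self.next_id, message))
        self.next_id += 1

    def poll(self, since_id=-1):
        # Return only the messages the UI hasn't seen yet.
        return [(i, m) for i, m in self.messages if i > since_id]

buf = MessageBuffer()
buf.receive("first")
buf.receive("second")
fresh = buf.poll(since_id=0)  # the UI already saw message 0
```

The bounded deque keeps the buffer safe against an absent or slow poller - which is exactly the kind of state management concern a more sophisticated spoke would elaborate on.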

Clearly, a more sophisticated client spoke would use a more elaborate state management arrangement.

[What follows is a detailed discussion of how this pattern is done. You might want to skip over it to the "NKP Patterns" section below if this is too much detail for you.]

An interesting thing to note is that the pubsub demo uses something I'm calling the "envelope pattern". The subscribe request from the client side is handled asynchronously on the server side. The subscribe request is "held on to" for the lifetime of the subscription (no response is issued).

When a publish request is received, the context of the envelope subscription request allows the hub server to issue requests back into each subscriber's client-side context (ie the subscription context). Only when an explicit request to unsubscribe is received does the hub finally release the subscription request and issue a response. The subscription request thus acts as a "contextual envelope" within which the hub-spoke relationship can issue asynchronous bi-directional requests.
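The envelope pattern can be sketched with a held, unanswered subscribe request (illustrative Python, using asyncio futures in place of NKP's asynchronous requests - note that holding the request open blocks no thread):

```python
import asyncio

class EnvelopeHub:
    """The subscribe request is held open (no response issued) until
    unsubscribe; while it is open, the hub can call back into the
    subscriber's context."""
    def __init__(self):
        self.envelopes = {}  # name -> (deliver callback, release future)

    async def subscribe(self, name, deliver):
        release = asyncio.get_running_loop().create_future()
        self.envelopes[name] = (deliver, release)
        await release          # hold the request open; no thread is tied up
        return "unsubscribed"  # the response is issued only now

    def publish(self, message):
        # Relay within each open envelope, back into the subscriber's context.
        for deliver, _release in self.envelopes.values():
            deliver(message)

    def unsubscribe(self, name):
        _deliver, release = self.envelopes.pop(name)
        release.set_result(None)  # releases the held subscribe request

async def main():
    hub = EnvelopeHub()
    inbox = []
    sub = asyncio.ensure_future(hub.subscribe("spoke", inbox.append))
    await asyncio.sleep(0)   # let the subscribe request start and be held
    hub.publish("hello")     # relayed inside the open envelope
    hub.unsubscribe("spoke") # envelope released, response finally issued
    return await sub, inbox

result = asyncio.run(main())
```

Because each envelope is just a pending future, not a parked thread, hundreds of subscribers cost next to nothing - which is the scaling behaviour the stability tests demonstrate.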

Patterns like this can be achieved without concern for system resource limits since the underlying architecture is asynchronous and NKP-mediated requests do not tie up threads. A simple pubsub unit test is also included in the NKP stability unit tests and shows this scaling to hundreds of subscribers with no thread limitation.

NKP Patterns

We're excited about NKP and can foresee many deep consequences for true cloud architectures. Yet at the same time it's a bit odd for us - since the patterns that are so exciting for distributed cloud systems (see below) are, from the ROC architecture point of view, actually nothing new at all. We use these patterns all the time within everyday NK solutions.

For example, consider how NetKernel offers "language runtimes". A language runtime is an accessor which receives as an argument the code which it should execute. The code is a resource and is SOURCEd from the runtime requestor's context. The upshot is that a language runtime is a stateless engine which does not require preconfigured imports - it will execute what it's told to and shares the instantaneous transient context (address space / scope) of the requestor.

The interesting consequence of this pattern is that if the code to be executed includes further resource requests, the language runtime will construct and issue those requests *also* in the context of the requestor.
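A toy sketch of this (illustrative Python only - eval stands in for the real runtime, and the URIs are made up) shows how a subrequest made by the script also resolves in the requestor's context:

```python
class Context:
    """Stand-in for a request context: the requestor's address space."""
    def __init__(self, resources):
        self.resources = resources

    def source(self, uri):
        return self.resources[uri]

def language_runtime(context, operator_uri):
    """A stateless 'runtime': SOURCE the script from the *requestor's*
    context, then execute it with that same context in scope, so any
    further requests the script makes also resolve there."""
    script = context.source(operator_uri)
    return eval(script, {"context": context})

# Both the script and the data it requests live in the requestor's space;
# the runtime itself holds no preconfigured imports or state.
ctx = Context({
    "res:/script": "context.source('res:/data') * 2",
    "res:/data": 21,
})
result = language_runtime(ctx, "res:/script")  # the script's subrequest resolves in ctx
```

The runtime never needed to know where "res:/data" lives - the script's subrequest simply inherited the requestor's context.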

To add intrigue to the pattern, in the instantaneous context of the runtime and requestor, nothing prevents the code from requesting resources provided from the runtime's implementing address space. So you can have hybrid solutions that "mix in" requestor and runtime implementation spaces on the fly.

The language runtime pattern is used for all of NetKernel's supported languages such as active:groovy, active:javascript, active:java etc. It's also frequently used whenever you implement a DSL, such as the wiki documentation engine. For example, in the ROC architectures tutorial, the discussion of the wiki's pluggable macro engine offers a specific deconstruction of the pattern with a real application's use case...

As you can see, the dynamic contextual address space of requestor and endpoint is a powerful pattern that frequently crops up in NK applications, more often than not without you even thinking about it (eg it's what you're doing when you make a request into the httpRequest: address space).

OK, having established the background, let's get back to NKP. If you install the NKP unit tests you'll see the 4th test is called "Remote Groovy with script in SuperStack". It shows that what was previously considered a localized ROC pattern can now be distributed with NKP.

The test shows the active:groovy language runtime on the server side executing a script SOURCEd from the client-side's request scope. If you look at the implementation you'll see that it is exactly the same as when you do this locally. (NKP is just ROC between systems - old is the new new).

But let's think about this from the perspective of cloud computing (I really hesitate to use the c-word since, as with so many technology trends, it is tainted by the ridiculous shrieking of vendors jumping on bandwagons. It's undeniable that the economic advantage of virtualized on-demand system instances is compelling. But so much of this c-word market is traditional software running on more flexible server configurations, with vendor tools and services offering system administration. Little (if any) of it has to do with "computing").

Anyway, with that off my chest, consider test 4 from the true "cloud computing" perspective. It is a demonstration of an architectural pattern in which your code resources may be part of your client-side application state whilst the CPU load is farmed off to a "cloud language runtime". Now that's real transparent cloud computing.

OK. The discussion so far is in the classic area of application for distributed computing: scaling up computation by partitioning it over a cluster of computation engines. But what about the other dimension of a distributed resource oriented "cloud"? Namely, that what we just saw was ROC/NKP letting us create instantaneous converged address spaces.

Think about it. We can now create solutions which are truly a sea (should that be sky?) of convergent overlapping clouds of computational state. I believe that while on-demand scaling is a good thing, the real mother lode is in the dynamically adaptive systems this leads to.

To get a better view of this, I promised Brad that I'd talk about a neat and simple pattern that NKP enables. It's a variant of what's just been discussed...

Imagine a client (C) which establishes a bridged NKP address space to a server (S1). Let's assume that we've set up the client and server so that trust is established (ie the client has been authenticated and the connection is encrypted with TLS etc). Therefore C's distributed address space spans S1 and can be considered a "trusted context". As with the language runtime examples, a request from C to S1 would allow (during that request's lifetime) S1 to issue requests back into C's address space.

So far this is nothing you've not seen already; it's the same as the macro engine diagram in the ROC architectures book (take another look at the first diagram in this static copy)...

Now, what if C's address space also had an NKP bridged connection to another server S2, so that C contains the union of two trusted contexts, to both S1 and S2 at the same time? (Referring to the diagram, imagine the 3 spaces shown each existing on different servers.)

If you're following this, you should be able to see that we've just established a pattern that allows S1 to issue requests to S2. To the ROC architect it's the same pattern we've used locally over and over. But to a distributed systems architect it's a transparent, dynamic, distributed 3-party trusted resource space! Imagine the possibilities for cloud application architectures.
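To make the scope mechanics concrete, here is a toy model (illustrative Python only - none of this is the NKP/NKF API, and all names are invented) in which S1's endpoint reaches a resource held on S2 purely through C's request scope:

```python
class Space:
    """Toy model of address spaces plus a request scope chain: a request
    carries its requestor's space, so an endpoint can resolve back
    through it."""
    def __init__(self, name, resources=None, imports=()):
        self.name = name
        self.resources = dict(resources or {})
        self.imports = list(imports)

    def source(self, uri, scope=()):
        if uri in self.resources:
            value = self.resources[uri]
            # Endpoints are callables executed with the requestor's scope.
            return value(scope) if callable(value) else value
        for space in self.imports:
            try:
                return space.source(uri, scope)
            except KeyError:
                pass
        raise KeyError(uri)

# S2 holds a resource; S1 holds an endpoint; C bridges both.
s2 = Space("S2", {"res:/s2-data": "data held on S2"})
s1 = Space("S1", {"res:/combine":
                  lambda scope: "S1 saw: " + scope[0].source("res:/s2-data")})
c = Space("C", imports=[s1, s2])

# C's request carries C itself in its scope, so S1's endpoint resolves
# back through C -- and thereby, transparently, on to S2.
result = c.source("res:/combine", scope=(c,))
```

S1 never addresses S2 directly; the 3-party relationship exists only for the lifetime of C's request, which is what makes the converged trusted space transient and dynamic.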

Obviously there are many practicalities you need to address for how you would establish trust and introduce constraints to guarantee integrity for this pattern. But these are easily covered with a suitable set of NK's architectural components. The NKP docs provide a discussion of the existing NK technologies you can use for trust, security, authentication and constraint (trust boundaries).

So you see why it feels a bit weird for us today. Same old ROC. Brand new, real cloud computing anyone?

Bay Area Visit

I'm in the Bay Area from tomorrow - if anyone fancies meeting up for discussion, cod-philosophizing or just silent staring into the bottom of a beer glass, drop me a line or tweet me @pjr1060


We received some really excellent, detailed and constructive feedback from Jeff Rogers earlier this week. We're actively working to take this on board, to improve several aspects of the way the system is presented and to cover topics that were missing.

If you come across stuff that doesn't make sense, needs more coverage, can be improved etc, please take the time to let us know. We take all feedback on board and try to turn round improvements in realtime to make things better for everyone.

Have a great weekend. Maybe see some of you soon.


Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research

© 2008-2011, 1060 Research Limited