NetKernel News Volume 1 Issue 29

May 24th 2010

What's new this week?

  • Updates for layer1 and lang-dpml.
  • Feature request update.
  • Observations from the field.

Repository Updates

layer1: Adds a "PrimitiveSerializer" transreptor which serializes the following primitive types to UTF-8 encoded string values...

this.declareFromRepresentation(IIdentifier.class);
this.declareFromRepresentation(Byte.class);
this.declareFromRepresentation(Short.class);
this.declareFromRepresentation(Integer.class);
this.declareFromRepresentation(Long.class);
this.declareFromRepresentation(Float.class);
this.declareFromRepresentation(Double.class);
this.declareFromRepresentation(Boolean.class);

This fills a gap when relaying basic types over NKP, but it also means that basic web apps and the development tools' browser output will now get human-readable representations.
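
As a quick sketch of what this enables (the resource identifier here is hypothetical, and the snippet assumes it runs inside an NKF endpoint where context is the request context): an endpoint that responds with, say, a java.lang.Long can now be requested as a String by any space importing layer1, and the transreptor is engaged transparently...

//Illustrative only: "res:/data/count" is a hypothetical resource whose
//endpoint responds with a java.lang.Long representation
Long count = context.source("res:/data/count", Long.class);

//With layer1 imported the same resource can also be requested as a String -
//the PrimitiveSerializer transreptor serializes the Long to UTF-8 text
String countAsText = context.source("res:/data/count", String.class);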

lang-dpml: Now hides the scope of arguments so that they are visible only to the script whose request they belong to. This prevents "scope leakage" under recursive calls. [Thanks to Grégoire Colbert for tracking this down in his complex DPML processes and taking the time to report back.]

Feature Requests Update

One of the key feature requests we received recently from Jeff Rogers was for a centralized listing of components (similar to that available with NK3).

The NK4 documentation is much more dynamic than NK3's, and books are generally published as self-contained structures associated with each library. While search gives you some cross-cutting view, it presupposes that you know at least a little about what you're looking for. Jeff pointed out that his most common use case is being able to browse across components in a "chocolate box" view.

We've taken this on board and have now implemented a component view to complement the existing views. In order to publish into this doc view we've added an extra "category" tag to the document declaration. So, for example, to get your document to appear as an "accessor" in the centralized accessor list you'd add...

<category>doc accessor</category>

Other tags include: overlay, representation, transport, runtime.
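
For orientation, a tagged document declaration in a module's Docs.xml might look something like this (the id, title, description and uri are hypothetical; the category element is the new part)...

<doc>
  <id>doc:mymodule:myaccessor</id>
  <title>MyAccessor</title>
  <desc>Transforms foo resources into bar representations</desc>
  <uri>res:/resources/doc/myaccessor.txt</uri>
  <category>doc accessor</category>
</doc>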

While we've got this technically implemented, we've not yet shipped it. The reason is that, now that we can see this central view, it's clear that we need to put some effort into unifying the content styles and naming conventions and, where necessary, providing links back into the more specific book context of a tool.

So expect the results of this work to be available real soon. Since it affects so many disparate modules, it's likely we'll ship it all as a 4.1.1 release. Watch this space.

Other stuff slated for attention includes:

A new architectural component, a "fallback mapper", which will allow a matched set of resources to be routed to multiple sets of potential "providers" based upon grammars and declarative requests.

An example application domain for this component is to accept a request for /foo/bar/xxxx and progressively attempt to provide it from a static endpoint (eg /foo/bar/xxxx.html), then a dynamic implementation (eg /foo/bar/xxxx.groovy), with ultimately a default fallback (eg /foo/bar/default.txt).

While implementing this pattern "by hand" with your own endpoint would be pretty straightforward (see the sketch below), it makes sense to provide a standard component that optimizes the use of core utility components such as declarative requests.
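
To make the pattern concrete, here is a minimal hand-rolled sketch in Java. The class name, the "path" argument and the candidate resource identifiers are all hypothetical, and the standard component will do this declaratively rather than in code...

import org.netkernel.layer0.nkf.INKFRequestContext;
import org.netkernel.module.standard.endpoint.StandardAccessorImpl;

public class FallbackAccessor extends StandardAccessorImpl
{   public void onSource(INKFRequestContext context) throws Exception
    {   //The mapped "path" argument, e.g. "xxxx" from a request for /foo/bar/xxxx
        String path = context.source("arg:path", String.class);
        String[] candidates =
        {   "res:/foo/bar/" + path + ".html",                          //static provider
            "active:groovy+operator@res:/foo/bar/" + path + ".groovy"  //dynamic provider
        };
        Object representation = null;
        for (String candidate : candidates)
        {   try
            {   representation = context.source(candidate);
                break;  //first provider that responds wins
            }
            catch (Exception e)
            {   //candidate could not be resolved or failed - try the next provider
            }
        }
        if (representation == null)
        {   representation = context.source("res:/foo/bar/default.txt"); //ultimate fallback
        }
        context.createResponseFrom(representation);
    }
}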

Let us know if you've come across other similar patterns that you think should be generalized into an architectural component.

Observations from the field

I just got back from a trip to Si-valley (which is why this newsletter is a couple of days later than normal). It was great to spend "quality time" on site with a great team at a hard-core site-license adopter of NK.

For corporate confidentiality reasons I can't disclose any details but I can pass on some general observations I picked up along the way.

These guys have been using NK in serious ways for mission-critical parts of their infrastructure for getting on for five years. They started with NK3 and are now increasingly adopting NK4.

We go back years so I can reveal I got some "affectionate teasing" from Gary Sole, who gleefully recounted how (a few years ago) they had a notice-board caption competition featuring an X-Ray photo of a sword swallower. The winning entry was: "Still not as painful as learning NetKernel"!

With regard to NK3 that is fair comment, but I'm sure the team would agree that the four years of effort we've put into NK4 have smoothed the learning curve a lot. That's not to dismiss the sentiment, though: there are some practical tips I observe and teach in the classes that you can use to break through any initial feelings of disconnectedness...

When coming to NetKernel for the first time, you need to take on board that it is a completely decoupled system. That is, there is no long-term binding between one software endpoint and another. Each request for a resource is instantaneously resolved and issued to a target endpoint - the relationship is reformed each and every time.

Therefore, in order to have somewhere to hang your hat, just as when establishing a web application, you have to be prepared to probe the address space with test requests as you start to structure it. The "request resolution and trace tool" found in the "developer" control panel is there for just this purpose. It lets you issue probe requests into any address space in your system. You will want to use it frequently to test your assumptions about the resolvability of the resources you are architecting your solution around.

To start with, you should use the tool's "Resolve" button to test if your request identifier will reach the endpoint you expect (this is a bit like making a HEAD request in REST).

Once the space resolution is structured right, the next step is to actually issue the request for evaluation, to see that the code gets fired up and does something. In the beginning this could be as simple as returning a dummy string resource representation, just to show things are happening.
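
A first cut really can be trivial - something like the following Java stub, where the class name and response text are purely illustrative...

import org.netkernel.layer0.nkf.INKFRequestContext;
import org.netkernel.module.standard.endpoint.StandardAccessorImpl;

public class StubAccessor extends StandardAccessorImpl
{   public void onSource(INKFRequestContext context) throws Exception
    {   //Return a dummy string representation so the trace tool shows a live response
        context.createResponseFrom("Hello from StubAccessor");
    }
}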

As you start to implement the internals of the endpoint, you'd expect to fill out how it handles the arguments it receives, perhaps making sub-requests to other services (which you'd also probe with the trace tool, to make sure your import assumptions are correct). If you're working in Java, you can use regular Java debugging inside the endpoint to step through this code. If you're using a scripting language you can use println() or copious context.log() output to pin down your assumptions.

Once the spatial relations of your application's address space are established and tested with trace-tool probe requests, you can move on to constrain those relations for long-term stability - for example, by adding unit test requests. You can think of unit tests on NK as a set of managed probe requests with assertions on the response and response metadata.
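
For a flavour of what such a managed probe looks like (the identifier and asserted value here are hypothetical, and the exact schema is covered in the XUnit documentation), a declarative test is roughly of this shape...

<test name="welcome resource resolves and returns the expected greeting">
  <request>
    <identifier>res:/foo/bar/welcome</identifier>
  </request>
  <assert>
    <stringEquals>Hello from StubAccessor</stringEquals>
  </assert>
</test>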

Moving from the fluid, decoupled starting conditions, through a process of tool use to solidify the spatial context, lets you quickly progress from the large-scale view of the address space into the detailed and solid landscaping of the internal coding.

So you might now be saying - why bother - I can build stuff with early-bound and familiar object-oriented techniques. For sure. But that is to focus your comparison on the "construction phase", which we observe constitutes only about 20% of a typical ROC system's lifecycle.

In practice an ROC solution developer actually spends 70% of the time in composition/recomposition of resources and transformations - that is, constructing and making requests for resources/services. A further 10% is spent on introducing and applying constraints (like validation, security boundaries etc), non-functional requirements (like logging, audit, performance monitoring etc) and architectural engineering (like throttles, load-balancing, fallback patterns etc).

So my conjecture (backed by observation at home and in the field) is that some initial, simple and iterative use of probe techniques in the construction phase yields long-term value over the remaining 80% of a real system's lifecycle, which pays back in the architectural flexibility that takes ROC to a whole new level of power, scaling and elegance.

If you have any background or experience in electrical engineering you might see strong parallels with this process. Imagine trying to develop an electronic system without using an oscilloscope or logic analyser (cf. NK's request probe and visualizer tools)!

[Of course I would say all this, wouldn't I! I'd also say, if you find yourself in the self-taught sword-swallower camp and want to become a fire-breathing ROC architectural acrobat, persuade someone with a budget to buy you some training - it's very cost-effective and a heck of a lot of fun.]


I hope you had a good weekend. We'll resume our regular schedule this week!

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.
