
NetKernel News Volume 2 Issue 21

March 25th 2011

What's new this week?

Catch up on last week's news here

Repository Updates

The following update is available in the NKEE repository...

  • nkee-sshd 1.8.1
    • Update to the OpenSSH PKI authenticator endpoint so that it tries to detect OS X and looks for .ssh/authorized_keys under /Users/xxxx/ instead of the standard Unix path /home/xxxx/

The following updates are available in the NKEE and NKSE repositories...

  • http-client 2.4.1
    • Improved the error handling for proxy settings - thanks to Jay Myers of bestbuy.com for reporting this.
  • wink 1.15.1
    • Fixes to work around XHTML empty-element issues on Firefox 4 and Chrome.

On Compound Grammars

I had a conversation with Jared Dunne at Findlaw earlier in the week. He was porting a long-standing NK3 system over to NK4 and wanted some advice on using the mapper to route logical requests for an aliased and curried interface through to the regular active:sqlXXXX RDBMS tools. So, for example, going from

active:syncQuery+operand@[ ... ]

to

active:sqlQuery+configuration@config-foobar+operand@[ ... ]

It turns out he had several very similar interfaces, each mapped to a corresponding active:sqlXXXX tool. He got it working by adding a mapper entry - a grammar/request pair - for each interface.
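For the avoidance of doubt, here's roughly the shape of one such entry - a sketch against the example identifiers above, not Jared's actual declarations (config-foobar is just the value from the example):

  <endpoint>
    <grammar>
      <active>
        <identifier>active:syncQuery</identifier>
        <argument name="operand"/>
      </active>
    </grammar>
    <request>
      <identifier>active:sqlQuery</identifier>
      <argument name="configuration">config-foobar</argument>
      <!-- relay the logical operand straight through to the physical tool -->
      <argument name="operand">arg:operand</argument>
    </request>
  </endpoint>

One of these per logical interface and the job's done.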

But the next question he raised deserves to be highlighted in full. The natural inclination of a coder is to avoid cut-and-paste and to look to minimize the boilerplate. Also, coming from NK3, you get to be pretty handy with regexes, so creating an OR'd pattern is second nature.

So Jared began to experiment with the full BNF grammar to pattern-match the significant part (the XXXX bit) and simply relay it in a constructed mapped request. He recognised this could all be done with one mapper endpoint declaration.

While this inclination is natural, I had to say that it's no longer considered a good idea and is not recommended practice.

Resource Oriented Metadata

At first this might seem odd, and it goes against a coder's training. But there's a good reason. Whenever you declare an endpoint (either virtually, in the mapper, or with a rootspace endpoint declaration), that endpoint has a bunch of metadata associated with it.

Most obviously it has its grammar, which expresses the unique resolvable resource set managed by the endpoint. Plus a bunch of other stuff, like a description, a human-readable name, etc. It also has an id, defined by an <id>foo</id> tag (or created automatically by the space if not specified).

The id is significant. You see, the id is the resource identifier for the metadata of the endpoint. Yes - everything's a resource - even the metadata of an endpoint!

If you issue a META request for "meta:foo" - you will SOURCE the rich and extensible metadata associated with that endpoint.

Which includes the grammar...
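To make that concrete, a declaration carrying such metadata looks something like this sketch (the name and description values are just illustrative):

  <endpoint>
    <!-- the id is itself a resource identifier: a META request for "meta:foo"
         will SOURCE this endpoint's metadata, grammar included -->
    <id>foo</id>
    <name>Foobar Sync Query</name>
    <description>Logical query interface onto the foobar data set</description>
    <grammar>
      <active>
        <identifier>active:syncQuery</identifier>
        <argument name="operand"/>
      </active>
    </grammar>
    <request>
      <identifier>active:sqlQuery</identifier>
      <argument name="configuration">config-foobar</argument>
      <argument name="operand">arg:operand</argument>
    </request>
  </endpoint>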

Grammar La Ma Ding Dong

The NK4 grammars aren't just a uni-directional pattern-matching language. They are actually bi-directional and can be used to construct a request, populated with appropriate arguments, that will be resolved by the endpoint whose grammar you have. It's like a reverse wormhole in ROC space.

This little bit of NK4 magic has been a core part of NK4 since its release. We use it in several tools, notably the space explorer. But it's about to become even more powerful, since NK4's resource oriented metadata is the foundation upon which the upcoming NetKernel Composite Development Environment (nCoDE) rests.

Take my word for it

To cut to the chase: if you compound your grammars, the metadata for your logical endpoints is not unique and the endpoints you are defining are ambiguous at the metadata level. The upshot is that they will not be uniquely defined and so will not be presented as drag-and-drop components for use in compositional processes.

Until we release the tool in three weeks, you'll have to take my word for it that you'll really, really want your endpoints to automagically pop up in the nCoDE palettes. So it will pay not to compound your logical endpoint declarations (i.e. grammars etc.).

Of course nothing stops you mapping multiple logical endpoints to the same physical endpoint instantiation - you can compound inside the physical domain all you like.

Take my word for it this week. You're not being profligate to implement a distinct endpoint declaration for each distinct logical endpoint. You'll reap the reward down the line, I promise.

Of course, none of what I've just said actually matters to the first-order ROC domain. It's only a higher-level concern if you're looking to fully utilise the ROC development tooling. To the first-order ROC domain, a request is a request and an endpoint can define one, many or infinite resource sets.

Over the Shoulder view of the ROC Development Process - Part 2

Last week I talked about the design, process and division of labour considerations to develop a typical ROC solution. This week I've posted another installment of the over-the-shoulder development notes...

http://resources.1060research.com/docs/2011/03/steria-training-project-part2.zip

The commentary-2.txt file included gives a detailed, warts-and-all, step-by-step view of one of the teams. This week it focuses on the data-layer team and shows a typical pattern for setting up the RDBMS connection pool resource.

I recommend you glance over the notes to get the context for the discussion that follows.

RDBMS Configuration - It's just another resource

It's kind of self-evident, but a data layer typically needs to talk to an external data persistence mechanism. We'll consider the RDBMS accessors in mod-db, but the following applies no matter what system-of-record you happen to be using.

The RDBMS accessors require a configuration resource which, when requested, can be transrepted to a JDBC connection pool. (Here are the docs for the active:sqlQuery tool, which either needs an explicit configuration argument or tries to find one in the context space by requesting res:/etc/ConfigRDBMS.xml.)

So the pattern here is that the context within which the tool is used will have a resolvable resource which, when requested, will provide the correct form of connection pool for the system-of-record.

It would be very simple to start off by having res:/etc/ConfigRDBMS.xml be implemented by a static file bound to the space with <fileset>. But a static configuration is really a poor design when you think about the nature of the development process.
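For illustration, that static starting point is just a fileset binding plus the file itself - the connection-pool element names below are only indicative, so check the mod-db documentation for the exact format the accessors expect:

  <!-- in the rootspace: bind the static file -->
  <fileset>
    <regex>res:/etc/ConfigRDBMS.xml</regex>
  </fileset>

  <!-- res:/etc/ConfigRDBMS.xml itself (element names indicative only) -->
  <config>
    <rdbms>
      <jdbcDriver>org.h2.Driver</jdbcDriver>
      <jdbcConnection>jdbc:h2:mem:devdb</jdbcConnection>
      <user>sa</user>
      <password></password>
      <poolSize>4</poolSize>
    </rdbms>
  </config>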

For any serious project, there is never one database. There's probably a developer's personal development RDBMS, maybe on their development machine, or a dev data server. There's almost certainly a test database. There's probably some form of staging database. And of course, there's the real database which you don't get to touch until you've proven yourself worthy on all the preliminary databases used in the project phases.

So we have the situation where, if we used static files, we'd either have to fork our module for each deployment platform or, probably even worse, have a generic module that needed configuring to be deployed on each platform. Either way, this is no way to carry on. We want seamless and consistent deployment on all systems.

With this in mind, today's over-the-shoulder view shows how I used a common pattern that gets us where we want to be.

The RDBMS config is just a resource. The active:sqlXXXX tools have no interest in what it is, where it comes from or what it does. To them it is just necessary resource state. With this in mind we recognise that we have complete freedom to map res:/etc/ConfigRDBMS.xml to any implementation endpoint we like.

In the example above, I set up a mapper to map it to the execution of a groovy script (a sketch of this mapping follows the list below). I also put this away inside its own ROC space - it's another black box. I can hand this box to a sub-member of the data team; their job: to solve the usual ROC engineering problem statement:

  • What resource am I promising? Answer: an RDBMS config that is contextually relevant to the current deployment platform.
  • What resources do I need to deliver it? Answer: the connection settings for each database.
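Here's the shape of that mapping as a sketch - the script path is hypothetical and the space contents are pared down to the essentials:

  <mapper>
    <config>
      <endpoint>
        <grammar>res:/etc/ConfigRDBMS.xml</grammar>
        <request>
          <identifier>active:groovy</identifier>
          <argument name="operator">res:/resources/scripts/configRDBMS.groovy</argument>
        </request>
      </endpoint>
    </config>
    <space>
      <fileset>
        <regex>res:/resources/scripts/.*</regex>
      </fileset>
      <import>
        <uri>urn:org:netkernel:lang:groovy</uri>
      </import>
    </space>
  </mapper>

Anything in the data team's space that requests res:/etc/ConfigRDBMS.xml resolves here; how the script answers is nobody else's business.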

Bootstrapping Trick

You'll see in the commentary that once I've got the mapping in place to the groovy execution, I realise I can pull a cheap trick. At this point in the project lifecycle we don't have an external system-of-record defined. We're playing with the data model while we compose the system. So what I can do is quickly hack up an in-memory database using H2.

I also know something else. The database config is just a resource. When it gets requested it'll be cached like anything else, so my implementation code will run once and then never need to run again - the connection resource will be cached. I can use this to my advantage and have my code do some extra work for me: it can set up the RDBMS too, and it'll do it the first and only time the code gets run.
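Here's a minimal sketch of the kind of script that sits behind the mapping - the schema, structure and expiry choice are illustrative, not the literal code from the commentary:

  // configRDBMS.groovy - serves res:/etc/ConfigRDBMS.xml
  import groovy.sql.Sql

  url="jdbc:h2:mem:devdb;DB_CLOSE_DELAY=-1"   // in-memory H2, kept alive between connections

  // The cheap trick: bootstrap the schema here. Because the response below gets cached,
  // this only runs the first time the config resource is requested.
  db=Sql.newInstance(url, "sa", "", "org.h2.Driver")
  db.execute("CREATE TABLE IF NOT EXISTS book (id INT PRIMARY KEY, title VARCHAR(128))")
  db.close()

  // The configuration resource itself (element names indicative - see the mod-db docs)
  config="""<config>
    <rdbms>
      <jdbcDriver>org.h2.Driver</jdbcDriver>
      <jdbcConnection>${url}</jdbcConnection>
      <user>sa</user>
      <password></password>
      <poolSize>4</poolSize>
    </rdbms>
  </config>"""

  resp=context.createResponseFrom(config)
  resp.setExpiry(resp.EXPIRY_NEVER)   // cache it - this code never needs to run again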

The nice thing about this arrangement is that even though we're at a very early stage of the project, I can give this module to the other teams and they'll each get an automatically set-up and configured DB - it's a black box.

Selecting Connections

So the architecture I've put in place is going to serve us well going forward. What will inevitably happen is that we'll sort out the database schema and end up with half a dozen different RDBMS servers for the different phases of the development lifecycle.

What will happen is that the secondary rootspace I created for the configuration will be refactored into its own module: the "RDBMS Config Module". It'll have exactly the same structure, and the import for the data team stays exactly the same, but it will become independently manageable and deployable on its own terms and to its own timescales.

The other thing that will happen is that the developer responsible for the connections will change the implementation. They'll modify the code to become a "resource selector". The code will determine which system it is deployed on (maybe using the hostname, or any other appropriate degree of freedom - MAC address?) and will then have a means of constructing the correct RDBMS configuration resource for that specific platform. Maybe they have different static files for each host?
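Something like this shape, say (the hostnames and config paths are purely hypothetical):

  // Select a per-platform RDBMS config by hostname and relay it as res:/etc/ConfigRDBMS.xml
  host=java.net.InetAddress.getLocalHost().getHostName().toLowerCase()

  configs=[ "dev-box"     : "res:/resources/rdbms/config-dev.xml",
            "test-server" : "res:/resources/rdbms/config-test.xml",
            "staging-01"  : "res:/resources/rdbms/config-staging.xml" ]

  uri=configs[host] ?: "res:/resources/rdbms/config-dev.xml"   // fall back to the dev settings

  // SOURCE the selected static config and return it as this endpoint's representation
  context.createResponseFrom(context.source(uri))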

One solution is to have a system-of-record for connections: a database or service containing the DB configs for each system. This is a more general and more flexible design than the slow and nasty world of JNDI and J2EE connection management.

But the key point is that the core development team don't give a damn (they never did). It's the "connection monkey's" responsibility. All anyone in the data team needs to know is that no matter what platform their solution is deployed on, it will always be using the right system-of-record. This is why ROC offers a contextually consistent model for computer systems.

General Case: Emergent Sophistication

There's nothing specific about this pattern. It's a very typical pattern in ROC. Everything's a resource - even configuration state (or code!). Delegate the responsibility and allow the context to ensure consistency. Concentrate on your local black box and let someone else take care of their stuff.

This is the recipe for emergent complexity. But I always think "emergent complexity" sounds negative - it's really "emergent sophistication".

Limited Edition Geek Chic: NetKernel ROCit Scientist

ROC soft wear

You've probably heard that ROC has crossed over into high-end couture and is the must-have look in Paris this season?

As Mr Bowie said, "It's loud and tasteless and I've seen it before - beep beep", but don't let that put you off.

We've released a limited edition range of "NetKernel ROCit Scientist" designs, available here...

http://www.printfection.com/netkernel-rocit-scientist

The logo is in tastefully subtle shades and reads "NetKernel ROCit Scientist" in faded green, black and blue, with the word "resourceful" in faded black beneath. There's a range of styles and t-shirt colours available.

These items will be available on a strictly limited basis: when one hundred representations have been reified, they will be unresolvable forever more.

Don't miss out: like Theseus, you too can be a King of the Greeks geeks.

Discount

The store is operated by printfection.com, who, like Cafepress, obviously cream the profit from these on-demand merchandise models. However, if you fancy any of these items, they've offered a discount code...

Coupon Code: NowOpen$5
Discount: $5 off a subtotal of $25+
Valid: Today, March 25th through March 27th

Who knows, if demand is good, we might really go to town on the Autumn collection!

Gathering of the Clan

There's a little under three weeks until NKWest2011, Fort Collins, Colorado, USA, 12th-14th April. It's still not too late to secure your place...

http://www.1060research.com/conference/NKWest2011/

The conference is preceded by a one-day bootcamp which provides a rapid immersion in NetKernel/ROC and is already well subscribed with new ROCers, so don't worry, you won't be alone.

The conference will provide a broad range of in-depth content, including the first public release of the NetKernel Composite Development Environment (nCoDE).

We gave a sneak peek to Brian Sletten a couple of weeks ago and the next day he tweeted...

"I knew it was coming, but they still blew my mind".

Really, it's that good. My advice: beg, borrow or steal your way to Fort Collins.

Incidentally, if you are planning on coming but haven't got around to booking yet, I strongly recommend you secure a room at the conference hotel. Our block reservation has now been opened up to the general public, so rooms are first come, first served.

Have you noticed my new-found marketing talent? Last week it was training courses, this week it's t-shirts and conference propaganda. At this rate we're going to make Apple look timid and reserved.


Have a great weekend,

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.


NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


© 2008-2011, 1060 Research Limited