
NetKernel News Volume 1 Issue 30

May 28th 2010

What's new this week?

NKSE/NKEE v4.1.1

As we recently discussed in the newsletter, we've been undertaking a comprehensive review of documentation. A new centralized, cross-cutting view of all currently installed system components is now provided; from a running distribution it is available here:

http://localhost:1060/book/view/book:mod:reference/

For reference there is also a static copy on the reference documentation server:

http://docs.netkernel.org/book/view/book:mod:reference/

As we mentioned, providing a centralized view of the individual books published by modules put pressure on us to ensure better consistency across all the system libraries (both pre-installed and those installable as repository packages). We have therefore updated pretty much every published module.

With so many changes across so many pieces, we decided it would be more convenient all round to cut a whole new 4.1.1 distribution and repository.

So please take the time to grab a copy from the download servers.

Downloads

NKSE http://download.netkernel.org/nkse/ NKEE https://cs.1060research.com/csp/download/ (Registration Required)

There were no significant technical updates to the core infrastructure, but several representations and resource models that were previously hidden are now surfaced with reference docs.

Notable items include an explanation of the SAX pipeline architecture of xml-core, the XMLToBean capabilities which provide transreption to/from Java Beans, and the JSON object models and transreptors, which are now more visible in the mod-json library. Many low-level layer0 and Standard Module components now have documentation published into the representation/endpoint views, and several previously hidden tools in the ext-security library are now visible.

We also did some consistency work on (and detailed documentation of) how the various HTTP client tools deal with local cacheability of received resource representations.

We also updated the documentation publishing guide to explain how you can get your own docs to appear in the new views...

http://localhost:1060/book/view/book:system:admin/doc:sysadmin:guide:doc:editing

Standard Module Schema

As with the docs, when you're too close to something you need others to show you things in a fresh light. So too with the syntax and structure of the standard module's module.xml declaration. Several people have recently asked for a schema to help with IDE completion etc.

So the 4.1.1 distros now include a guide providing schemas for various standard infrastructures. The first entry is a flexible and tolerant RELAX NG schema for module.xml.

To get it, from a copy of NKSE or NKEE v4.1.1, look here...

http://localhost:1060/book/view/book:coremeta/doc:coremeta:schema:standard-module

The schema includes definitions for both Grammars ("ref_grammar") and Declarative Requests ("ref_declreq") as well as the built-in endpoints that are common to the Standard module (mapper, overlay, import etc etc).

Please let us know if this covers the use cases you've told us about.

Parameters and Arguments

One of the challenges of developing a schema for module.xml is that endpoints are user-generated and can be parameterized with arbitrary and rich parameter values. Therefore the schema is fairly rigidly structured for common cases (eg the mapper's config parameter or the accessor declaration) but falls back to loosely tolerant behaviour when it encounters arbitrary user endpoint declarations.

So, this raises the question, what the heck is a parameter and what's the difference between parameters and arguments?

If you've done anything dynamic (either with Java endpoints or dynamically mapped language runtimes), you'll have used arguments. When a request is issued to an endpoint the endpoint's grammar is used to match the request, but it is also used to break down the full identifier token into "sub-tokens".

For example this request...

res:/user/peter/rodgers

could be resolved by this grammar...

<grammar>res:/user/
  <group name="first">
    <regex type="alphanum" />
  </group>/
  <group name="last">
    <regex type="alphanum" />
  </group>
</grammar>

[Try it in the grammar's kitchen to see the named "first" and "last" parts]
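If you're used to regular expressions, the named groups behave much like regex named capture groups. Here's a rough stand-in for the grammar above (illustrative only; NetKernel's grammar matcher is its own machinery, not a regex):

```python
import re

# Regex analogy for the grammar: two alphanumeric parts named
# "first" and "last", separated by "/".
pattern = re.compile(r"res:/user/(?P<first>[A-Za-z0-9]+)/(?P<last>[A-Za-z0-9]+)")

m = pattern.match("res:/user/peter/rodgers")
print(m.group("first"))  # peter
print(m.group("last"))   # rodgers
```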

The sub-tokens (or parts) of a grammar are named and can be accessed inside the code of an endpoint using the context object. For example my endpoint could get the last name with...

last=context.getThisRequest().getArgumentValue("last")

The advantage of this is that the code and its mapping are decoupled - the grammar can evolve or be completely reimplemented and the code is not impacted.

Notice that the NKF API calls these pieces of the identifier "arguments". But in this example we've not really seen them used in the normal sense of the term: an argument to a function.

So when does a fragment of identifier become more like our traditional view of a function argument? Answer: when it is a reference to some other resource. For example, consider the active URI scheme...

active:base+argName@...some...resource...+...

The active URI looks somewhat like a function call. [To the NetKernel kernel it's actually just another opaque token, only the grammar actually cares about parsing and splitting out its name@... arguments]
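The splitting-out of name@value pairs can be sketched in a few lines. This is a toy illustration (the function and example values are invented for this sketch; in NetKernel only the endpoint's grammar does this parsing, not the kernel):

```python
def parse_active_uri(token):
    """Toy sketch of pulling name@value arguments out of an
    active: URI.  Assumes no "+" inside argument values."""
    body = token[len("active:"):]
    head, *args = body.split("+")
    arguments = {}
    for arg in args:
        name, _, value = arg.partition("@")
        arguments[name] = value
    return head, arguments

head, args = parse_active_uri("active:base+importantFile@file:/home/pjr/hello.txt")
print(head)                   # base
print(args["importantFile"])  # file:/home/pjr/hello.txt
```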

However, inside your code you can treat these arguments as resource references: the name is a local reference that can be dereferenced.

For example say we have...

active:base+importantFile@file:/home/pjr/hello.txt

My NKF endpoint code can do the following...

importantFile=context.getThisRequest().getArgumentValue("importantFile")

This variable now contains the String value "file:/home/pjr/hello.txt" so I can now SOURCE that resource...

file=context.source(importantFile)

OK, that's it taken step by step. But more often than not an endpoint will want to interact with the resources referenced by its arguments (either sourcing them or referencing them in sub-requests). So NKF has a trick: it lets you dereference a named argument using the arg: scheme.

For example here's the previous example in one line...

file=context.source("arg:importantFile")

When you use the arg: scheme you are saying to NKF "we know this argument is a resource identifier so dereference it". It's very similar to dereferencing a pointer with *x in C.
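The pointer analogy can be made concrete with a two-level lookup. This is a loose sketch, not NetKernel's resolution machinery, and all the names and values are invented for illustration:

```python
# The address space maps identifiers to representations; the
# request's arguments map argument names to identifiers.
space = {"file:/home/pjr/hello.txt": "Hello World"}
arguments = {"importantFile": "file:/home/pjr/hello.txt"}

def source(identifier):
    # "arg:" means: first dereference the argument to get the
    # real identifier -- much like *x dereferences a pointer in C
    if identifier.startswith("arg:"):
        identifier = arguments[identifier[len("arg:"):]]
    return space[identifier]

print(source("arg:importantFile"))          # Hello World
print(source("file:/home/pjr/hello.txt"))   # Hello World
```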

Equally you can relay an identifier in a sub-request...

req=context.createRequest("active:uppercase")
req.addArgument("operand", "arg:importantFile")
uppercasefile=context.issueRequest(req);

is the same as...

importantFile=context.getThisRequest().getArgumentValue("importantFile")
req=context.createRequest("active:uppercase")
req.addArgument("operand", importantFile)
uppercasefile=context.issueRequest(req);

OK, if you've done any playing with NK this stuff will have come up in various tutorials etc. But hopefully this succinctly explains what we mean by an argument - it's a part of the request identifier, and it becomes powerful when we can also treat it as a resource identifier in its own right. [One last point: notice that in the absence of any other grammar NKF will construct active URIs as its "house-red" requests. We'll explain in another newsletter how you can change this if you need to]

So what's a parameter?

Well, frequently an endpoint requires resource state in order to bootstrap itself to do whatever it does. Think of a transport's configuration, an import endpoint, etc.

A parameter is a resource reference which an endpoint is able to reference during its postCommission() lifecycle.

You maybe didn't realise it but all the built-in endpoints of the standard module use parameters.

For example, if you've played with the mapper, you'll know it requires two parameters, "config" and "space". Both of these are passed to the mapper endpoint during its postCommission() phase. The mapper endpoint can make requests for these resources back into the ROC address space, even as it is booting itself. NK's physical endpoints aren't just exit ramps from the ROC domain - even when setting themselves up, they can go back up into the logical ROC address space to request other resources.

If you look at the postCommission() interface of any Standard endpoint you'll see it receives a context object.

So just like handling requests during the regular lifecycle, the endpoint can receive the parameters and make requests for them. To make life really easy NKF does the same trick as for "arg:". You can dereference parameters like this...

config=context.source("param:config");
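The bootstrap pattern can be caricatured like this. All the names here are invented for the sketch; a real NKF endpoint's postCommission() receives a context object and issues requests through it rather than reading a dictionary:

```python
class MapperSketch:
    """Toy model of an endpoint that sources its parameters from
    the address space while it boots."""

    def __init__(self, space):
        self.space = space  # stand-in for the ROC address space

    def post_commission(self):
        # even during bootstrap the endpoint can request resources
        # back from the logical address space
        self.config = self.space["param:config"]
        self.mapping_space = self.space["param:space"]

m = MapperSketch({"param:config": "<config/>", "param:space": "<space/>"})
m.post_commission()
print(m.config)  # <config/>
```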

OK, all this is nice, consistent and hopefully comprehensible. Why did we go to all this trouble? Why didn't we just have endpoints internally hard-code their state (aaargh), use global system properties (Out! Leave the room and never come back for even suggesting it), have a central UDDI-style or Windows-style registry (You can leave and you can take your friend with you!) or, at best, provide static declarative configuration state (just about tolerable but do try harder)?

Because everything is a resource. Even postCommission() parameter state. If a parameter's state is a referenceable resource, we can obtain it from arbitrary dynamic resource providers.

For example, say I have a mapper, and I want the logical endpoints (mappings) that it implements to be controlled by another process. No problem...

<mapper>
  <config>active:dynamicMappings</config>
  <space> ... </space>
</mapper>

Where somewhere in my spatial context I'll implement an endpoint to provide "active:dynamicMappings". There are many, many use cases this could be valuable for, but here's a dumb one: maybe I'll only provide mappings between 1am and 2am - so my potentially sensitive data backup services literally only exist in a narrow window of the day.
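That time-windowed provider is trivial to express. Here's a hypothetical sketch (the function name, return strings and window are all invented; a real implementation would be an endpoint returning a mapper config representation):

```python
from datetime import time

def dynamic_mappings(now):
    """Hypothetical provider for active:dynamicMappings: the
    backup mappings only exist between 01:00 and 02:00."""
    if time(1, 0) <= now < time(2, 0):
        return "<config><!-- backup service mappings --></config>"
    # empty config: outside the window the services don't resolve
    return "<config/>"

print(dynamic_mappings(time(1, 30)))   # <config><!-- backup service mappings --></config>
print(dynamic_mappings(time(12, 0)))   # <config/>
```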

There's more to parameters than this: they can be typed and automatically transrepted, and you can specify min and max occurrences, default values when they're not supplied, etc. If you want to see this in practice, open up any of the library modules (eg lang-groovy) and look at a prototype definition.

So, long story, but now you know how and why we are able to make all of the configuration state of the core stuff in module.xml behave dynamically and why the question "can we please have a schema?" is not a trivial demand. We're using ROC to implement the ROC address space! It's turtles all the way down.

One final thing, you'll find in the new documentation that there is a lot more detail on endpoint parameters.


Have a great weekend.

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


© 2008-2011, 1060 Research Limited