NetKernel News Volume 6 Issue 11
November 27th 2015
- Repository Updates
- JSON Resource Model Enhancements
- *NEW* JSON Recursion Language
- Introducing ROC to a Microservices Crowd
Repository Updates
NOTE: As announced previously, Java 6 is now end-of-life; since January 2015 all modules are built for, and target, Java 7. Do not attempt to install updates if you are still running Java 6.
The following updates are available in the NKEE and NKSE repositories...
- http-server-3.4.1
- RESTOverlay: refined determination of the target endpoint when resolution is potentially ambiguous.
- json-core-1.10.1
- Updated to use the latest org.json.* library. Added DeterminateJSON to deal with the ambiguity between JSONObject and JSONArray when transrepting (parse/serialize).
- json-extra-1.1.1
- NEW library of tools built over json-core (see below)
- lang-javascript-1.7.1
- Updated to use Rhino 1.7.7 - many improvements, including a native JSON object.
- module-standard-1.66.1
- Refinement to Representation and Prototype class declarations to trim accidental whitespace.
- The gradle plugin has been updated with a fix to the deployModuleXXXX task to ensure the generated modules.d/ entry is well formed on Windows. Thanks to Brian Sletten for fixing this.
JSON Resource Model Enhancements
JSON is a popular representation format, presumably because it enables client-side Javascripters to work with it seamlessly, without having to switch context.
However, gains in perceived end-consumer convenience are not achieved without compromises elsewhere - most notably, JSON imposes restrictions when it comes to composition.
Composability
By the nature of ROC, we understand that it is a common requirement to take two or more resources and combine them into a new composite resource. All resource oriented architectures have this requirement - on the Web these are called mashups.
Composition is powerful because, as we must acknowledge, a perfect normalized API does not exist - the world never quite fits, or it moves on and changes. Often the thing we want is a composite of other resources.
We also know, from other engineering disciplines, that composites offer brand new possibilities. Take some iron and add some carbon and you have the composite we call steel. Steel has properties that are vastly superior to either of its components.
In engineering, we understand that composing things results in things that are more valuable than the sum of the parts.
Composability is an important property of a representation resource model.
JSON's Composability
JSON, being a hybrid of a map and an array, is not ideal for composition. For a detailed discussion and evaluation of arrays, maps and trees, please take a look at my On Data Structures article.
In summary, an array is challenging to compose with, since its ordering imposes a significant overhead. Equally, maps are a challenge, since map keys must be unique: composition risks key collisions, so the only way to ensure lossless composition is to build maps of maps.
Since JSON is both map-like and array-like, it cannot avoid these challenges when it comes to creating composite representations. What this really means is that someone who wants to combine two JSON resources will probably end up writing some code. This contrasts sharply with the experience of combining trees - trees can often be slapped together with no code at all.
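To make the key-collision point concrete, here is a minimal sketch in plain JavaScript (the resource contents are invented purely for illustration; nothing here is NetKernel-specific)...
// Two JSON resources that happen to share the key "id"
var a = JSON.parse('{"id": 1, "color": "red"}');
var b = JSON.parse('{"id": 2, "name": "Saab"}');

// Naive merge: copy b's entries onto a
for (var k in b) { a[k] = b[k]; }
// a is now {"id":2,"color":"red","name":"Saab"} - a's original "id" has been silently lost

// Lossless alternative: a map of maps, keyed by source
var composite = {
  "a": JSON.parse('{"id": 1, "color": "red"}'),
  "b": JSON.parse('{"id": 2, "name": "Saab"}')
};
The lossless form forces an extra level of keys on every consumer - which is exactly the compromise the tools below aim to make painless.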
JSON Slicing and Compositing Tools
For the reasons outlined above, I have historically not spent much time on JSON, preferring the seamless composability offered by HDS. However, there's a lot of it about and we are nothing if not pragmatic. Today, as part of a general update to the whole JSON tool set, we have released the urn:org:netkernel:json:extra module (installable from Apposite as json-extra).
Currently there are two new tools in json-extra, both aimed at addressing the challenge of JSON composition.
The first tool is JSONRL, a new recursive composition runtime similar in design to the XRL, HRL and TRL family of languages. The details of its design and examples of use are quoted from the docs below...
(You'll notice that, because of JSON's composability constraints, we have to use a "request-KEY" to "KEY" trick as a compromise to get recursive composability.)
*NEW* JSON Recursion Language
active:jsonrl is a recursive composition language for JSON. It belongs to the family of recursive composition runtimes that includes XRL, HRL and TRL.
The general principle of the *RL family is that declarative resource requests embedded in a representation provide links which are recursively evaluated and the resulting representation is substituted into the primary resource structure (much like an HTML page provides linked references that are requested and composited into the final Web page).
The power of the *RL family is that the requests support the full ROC request model - meaning that embedded requests can implement both pull and push state transfer patterns.
JSONRL Requests
JSON is fundamentally a map data structure - which, unlike XML, imposes certain constraints on request declaration and the resulting substitution - for example, no two entries in a JSON document can share the same name (key).
To deal with this limitation, declarative requests must be marked with the reserved prefix request- ("request" followed by a dash).
Here is an example...
Say we have two JSON resources...
res:/car.json
[ {"id": 10, "color": "silver", "name": "Volvo"}, {"id": 11, "color": "red", "name": "Saab"}, {"id": 12, "color": "red", "name": "Peugeot"}, {"id": 13, "color": "yellow", "name": "Porsche"} ]
and res:/bike.json
[ {"id": 20, "color": "black", "name": "Cannondale"}, {"id": 21, "color": "red", "name": "Shimano"} ]
Now say we also have a JSONRL template called res:/vehicles.json containing the following embedded JSONRL requests...
{ "request-car" : "res:/car.json", "request-bike" : "res:/bike.json" }
If we request the evaluation of the vehicles.json JSONRL template...
req=context.createRequest("active:jsonrl") req.addArgument("operator","res:/vehicles.json") rep=context.issueRequest(req)
then the resulting composite representation is...
{ "car": [ {"id": 10, "color": "silver", "name": "Volvo"}, {"id": 11, "color": "red", "name": "Saab"}, {"id": 12, "color": "red", "name": "Peugeot"}, {"id": 13, "color": "yellow", "name": "Porsche"} ], "bike": [ {"id": 20, "color": "black", "name": "Cannondale"}, {"id": 21, "color": "red", "name": "Shimano"} ] }
Notice that "request-car" has been replaced and the JSON map now has the entry "car" (the "request-" prefix is removed and the name following the prefix becomes the JSON map key).
Maps and Arrays
JSONRL supports both JSONObject (map) and JSONArray structures - for the primary template, for the included composites, and for the location of "request-XXX" references.
Clearly it is imperative that any requested resource is JSON (or is transreptable to JSONObject or JSONArray).
Basic Request Syntax
If the value associated with a "request-xxx" map entry is a string, then the string is treated as a declarative request. Since JSON does not support multi-line strings, the string is interpreted using the abbreviated declarative request syntax.
For example, the following request shows the full capability of the abbreviated syntax...
{ "request-GroovyExample" : "active:groovy operator res:/somescript.gy operand res:/foo" }
Advanced Request Syntax
If the value associated with a "request-xxx" map entry is a map, then the "identifier" entry is taken as the abbreviated declarative request and the additional options "async" and "terminate" are also supported.
For example, this issues the same request as the example above, but this time it is performed asynchronously...
{ "request-GroovyExample" : { "identifier" : "active:groovy operator res:/somescript.gy operand res:/foo", "async": true } }
The power of asynchronous evaluation can be seen when dealing with high-latency requests. For example, the following JSONRL performs a mashup (composition) of three different JSON test microservices...
{ "request-Time" : { "identifier" : "http://date.jsontest.com/", "async" : true }, "request-Headers" : { "identifier" : "http://headers.jsontest.com/", "async" : true }, "request-EchoJSON" : { "identifier" : "http://echo.jsontest.com/key/value/one/two", "async" : true } }
The remote jsontest.com endpoints are not very fast: requested sequentially, this composition takes about 800ms; requested asynchronously in parallel (with "async" : true), it takes about 200ms.
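For reference, evaluating that mashup is just another active:jsonrl request - the parallelism happens inside the runtime, not in the calling code. A minimal sketch, assuming the template above is addressable as res:/mashup.json (an illustrative identifier)...
req=context.createRequest("active:jsonrl")
// the three "async" sub-requests are issued in parallel by the JSONRL runtime
req.addArgument("operator","res:/mashup.json")
rep=context.issueRequest(req)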
Recursion
As with all *RL languages, JSONRL is recursive. That is, by default the requested JSON is itself evaluated, and any "request-XXX" entries it contains are in turn evaluated. Recursion continues until all request references have been discovered.
Sometimes it can be important (e.g. for security when compositing external JSON resources) to force the explicit termination of recursion for a given request. Termination requires that you use the advanced request syntax (above) and specify the "terminate": true map entry.
For example, here is the three-way mashup again, but this time each included remote microservice result is not recursed...
{ "request-Time" : { "identifier" : "http://date.jsontest.com/", "async" : true, "terminate": true }, "request-Headers" : { "identifier" : "http://headers.jsontest.com/", "async" : true, "terminate": true }, "request-EchoJSON" : { "identifier" : "http://echo.jsontest.com/key/value/one/two", "async" : true, "terminate": true } }
Tests / Examples
You can get our XUnit test module here...
This provides a detailed set of examples.
JSONPath
Often we want to compose things, but rather than combining two whole resources we need "a bit of this with a bit of that".
In order to do this we need a simple way to extract a subset of a resource. We could write code, but it's usually much better practice to be able to specify the identity of a subset and have a tool that slices out the identified subset.
In the XML world we are familiar with XPath - and the various tools we provide to slice and dice XML based on XPath structures. The power of paths is also extensively exploited in the design of the HDS data structure.
So, to complement active:jsonrl, we need a similar tool to slice JSON up. Enter active:jsonpath, the second new tool in json-extra.
Here's the docs with examples...
active:jsonpath enables JSONPath expressions to be applied to a JSON resource.
JSONPath is similar to XPath in that it allows for simple path expressions to determine specific locations in a JSON structure.
The power of path expressions is that they allow you to subset a given structure and extract it (or simply assert that it is present).
Example
There are many examples of JSONPath expressions on the JSONPath home page. Here is a simple example of how we can use the active:jsonpath service to extract part of a JSON structure...
Say we have a resource called res:/transport.json that looks like...
{ "car": [ {"id": 10, "color": "silver", "name": "Volvo"}, {"id": 11, "color": "red", "name": "Saab"}, {"id": 12, "color": "red", "name": "Peugeot"}, {"id": 13, "color": "yellow", "name": "Porsche"} ], "bike": [ {"id": 20, "color": "black", "name": "Cannondale"}, {"id": 21, "color": "red", "name": "Shimano"} ] }
The following would extract only the bike data...
req=context.createRequest("active:jsonpath") req.addArgument("operand", "res:/transport.json") req.addArgumentByValue("operator", "$.bike") rep=context.issueRequest(req)
The resulting representation would be...
{ "bike": [ {"id": 20, "color": "black", "name": "Cannondale"}, {"id": 21, "color": "red", "name": "Shimano"} ] }
XUnit Assert
We have also provided an XUnit <jsonpath> assert, so that when writing tests that return JSON representations you can make path-like assertions on the result. Details are provided here.
Javascript Update
Finally, we also updated lang-javascript to use the latest Rhino 1.7.7 JavaScript engine for the active:javascript runtime. There are many improvements but, given the JSON theme, the most useful is that there is now a native JSON object (just like in browser-side JavaScript). So if you really, really want to write code to deal with JSON, using server-side JavaScript and the JSON object makes it very smooth.
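As a quick illustration, here is a minimal active:javascript sketch using the native JSON object. It assumes the res:/transport.json resource from the JSONPath example above is resolvable in the script's address space and can be sourced as a String...
// Source the JSON text and parse it with the native JSON object
var json = context.source("res:/transport.json", java.lang.String);
var transport = JSON.parse(json);

// From here it's plain JavaScript - e.g. pick out the red bikes
var redBikes = transport.bike.filter(function (b) { return b.color === "red"; });

// Serialize back to JSON text for the response
context.createResponseFrom(JSON.stringify(redBikes));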
Introducing ROC to a Microservices Crowd
Apologies for the high latency on newsletters recently - I've been slammed with travel. In the last three weeks I've given ROC presentations in Stockholm, London and Tel Aviv.
The audiences for these talks were immersed in the context of microservices. It's been very useful for me to sample the microservices Zeitgeist.
The good news is that the motivation behind microservices is well intentioned. People generally perceive that increasing the granularity of systems is "a good thing". Also good is that people are open and exploring many different avenues - this is a grass-roots movement, not the top-down mandates we saw in SOA.
However, that being said, I sense that there are some default memes taking root - and default memes risk transforming into religious dogmas. (Hey, I've just been doing evangelical work, taking the message of ROC to the "Holy Land" - I have first-hand experience of where that can lead!)
Fueled by the excitement of containerisation (Docker, LXC, LXD, Marathon etc etc etc) one such meme, in danger of becoming dogma, is: "Thou shalt deploy one microservice per container". This is frequently followed by "Thou shalt have one database per microservice".
Err, if you're Netflix with huge traffic per endpoint then this can make sense. But if you're just interested in breaking up a monolithic enterprise architecture this is, frankly, insane.
The reason becomes clear when we consider the total solution complexity. Here's a diagram that illustrates what I mean...
On the left-hand side we have monoliths. We know that monoliths have high software complexity, but we overlook that, being a single lump, they have very low deployment complexity. If we blindly follow the microservices memes we end up at the right-hand extreme: every microservice is essentially very simple, but to deliver the total functionality we need many microservices, and deploying and managing many microservices is very complex.
We need to trade off complexities: simple endpoints but complex deployment vs complex software but simple deployment.
Now, if you're familiar with NetKernel and ROC, you know that our isolated Web-like address spaces and modular partitioning allow us to choose a position anywhere along the spectrum from monolith to microservice. It is not a black-and-white religious decision. It's engineering, and we can find an engineering balance anywhere we choose along the monolith-microservice spectrum. (Of course ROC doesn't allow us to create monoliths at all - everything is a resource and solutions are decoupled composite architectures over resources - but you know what I mean.)
Videos
Below are two videos of the talks I gave in London and Tel Aviv.
In the talks you'll see that I try to contextualize and address one potentially bad thing I saw in the microservices community: it is human nature that people default to trying to understand these systems by reverting to code (in my talks: "Apples") - this is the path back to "services" and the reinvention of all the contrivances we saw in SOA in the 2000s.
What I was trying to convey was that if we fail to understand the Web for what it is (a federated resource oriented architecture) we will miss the opportunity once again...
The Video of my London Mucon talk is here...
ROC at Mucon London - November 2015
You may need to sign up for a Skillsmatter account to view it (sorry about this - but they're a good bunch and do good stuff).
My Tel Aviv talk will be available as a full video of my presentation once the camera and screen capture have been edited, but in the meantime you get the full story without seeing me ("a bonus", as my kids would say). The screen capture is available as a video...
Summary
So what's the summary of my experience, and what's the message for the ROC community? Microservices is a good thing - it makes introducing ROC a lot easier for us. ROC is a level of thinking several steps beyond where the microservices world currently is, but they are on the right track - provided we can guide them to see that it's about Resources and not Code.
I have also come away with a very clear perspective on how we are uniquely positioned to explain the transition we are seeing. As I say in conclusion in the talks: we are moving from 50 years in which we could rely on the deterministic encoding of Turing machines (coding) to a new world of non-deterministic composition of resources (Unix-like microservice sequencing is the first baby step towards ROC). Everything changes - but in ROC we understand this change better than anyone...
Have a great weekend!
Comments
Please feel free to comment on the NetKernel Forum
Follow on Twitter:
@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff
To subscribe for news and alerts
Join the NetKernel Portal to get news, announcements and extra features.