
NetKernel News Volume 5 Issue 4
March 7th 2014
 Repository Updates
 RDF Jena Update
 HDS2 Preview: Update
 On Computation's Upper Limit
 The Resource Oriented Architectural Style
Repository Updates
The following update is available in the NKEE and NKSE repositories
 rdfjena 1.11.1
 Updated to support JSONLD and TRIG (see below)
Please note updates are published in the 5.2.1 repository.
RDF Jena Update
The RDF Jena library has been updated. Tom Geudens submitted an enhancement that provides an active:jRDFParseJSONLD accessor which parses the JSONLD serialization of RDF. A matched JSONLD serializer is also provided. Thanks Tom!
In addition, while we were incorporating this update, we've also added support for TRIG, to coincide with its status moving to W3C recommendation.
TRIG is an extension of the Turtle serialization format of RDF. It enables multiple RDF graphs to be expressed in a single document.
To incorporate TRIG's support of collections of RDF graphs we've had to extend the representation model a little.
Up to now, the rdfjena resource model has been implemented around IRepJenaModel representations, with each tool being able to consume and/or produce this representation.
TRIG's support of multiple graphs in a single representation has required us to extend the library's representations and provide IRepJenaDataset.
This is the representation that active:jRDFParseTRIG will produce (and the corresponding serializer consumes).
It provides a wrapper over an underlying Jena Dataset, from which you can access any of the collection's Models.
To date, this extension to the representations is limited to just the TRIG parser and serializer; if you want to pass a Model from a Dataset to other tools then you'll need to take care of wrapping it in an IRepJenaModel manually.
Philosophically you can see that we cannot provide a transreptor from Dataset to Model, since this is not an isomorphic transformation. It is a set-reducing transform (Dataset → Model: a many-to-one transform) and furthermore it requires you to identify which Model in the set you wish to access.
As we get feedback on this new capability we'll add some accessors to deal with the common transformation patterns that come up in everyday use, so please let us know how you get on.
HDS2 Preview: Update
The development of HDS continues. Here's the latest news from Tony... (updated downloads are available below).
I have been using HDS2 quite extensively with some real-world development in the last couple of weeks. This has led to me finding a few limitations/bugs in the way that JXPath was integrated behind the scenes. These have now been fixed, so consistent results are now obtained with some more obscure/complex XPath expressions. Additionally I've refined some naming and added a few more convenience methods, thereby simplifying usage. Here is a summary of the changes:
 HDSContextFactory has been renamed to HDS and all of the method names have been shortened.
 [IHDSMutableContext, IHDSContext] have been renamed to [IHDSContext, IHDSReadOnlyContext] to reflect that most usage of the API is with a mutable HDS rather than a read-only one.
 Added getRootXPath() and getCursorXPath() methods to IHDSContext which return a canonical XPath to the root node of the context and the position of the cursor respectively.
 Added createIfNotExists(xpath) method to IHDSContext for tolerantly ensuring a certain structural element is there and building it if not.
 Added variants of the getFirstNode() and getFirstValue() methods that will return null rather than throw exceptions. These methods are called getFirstNodeOrNull() and getFirstValueOrNull().
Download
You can download the module here. At the moment this module is just for testing. When we've had some feedback we'll decide whether to make this a standalone module or just incorporate it as an update to layer1.
mod.hds20140307.jar - mod.hds temporary module
test.layer120140307.jar - tests for layer1 including hds2 tests
On Computation's Upper Limit
On Wednesday I was invited to present ROC to the British Computer Society in London. It was a great evening and, from the reactions I received afterwards, it appears we made quite an impact.
(Earlier in the day, Tom Gilb, the godfather of agile and a Fellow of the BCS, gave me a public reprimand. He'd invited me to give a lunchtime preview of the talk to the attendees at one of his Architecture Engineering seminars, which by coincidence was at the BCS earlier the same day. Afterwards, when I'd left and his course resumed, he told his class, to Tony's amusement: "ROC is revolutionary. Peter needs to be less modest and learn to blow his own trumpet".)
So, heeding Tom's admonishment, let's rephrase that:
For: "it appears we made quite an impact."
Read: "we hit it out of the park!"
In planning a talk for a prestigious audience you want to make sure you're master of all your facts.
Also, like Tom, I like to measure stuff and I like to know the practical and theoretical limits within which you are working.
Maybe I had a vision of the ghost of Alan Turing haunting me in the corridor, or maybe I thought that, like in a PhD viva, I could be asked to go back to first principles, but I woke up the day before having spent the night dreaming about the theoretical limits of computation.
Just in case I'd been asked "Where does ROC stand with respect to the theoretical limits of computation?" (Of course I wasn't! But you can't be too careful.)
Those of you with a long memory will recall I've been here before. Back in 2011 I wrote a couple of articles that discussed the relationship between computational state and theoretical computation limits. They use a light-hearted theme ("the infinite monkey question") but they actually point to some hard-core aspects of ROC, such as the imperative of knowing the identity of what you've computed...
It might be worth a recap. The first, longer, article is called Infinite Monkeys; the second builds on the first to consider the Fastest Possible Computer and also defines a new unit, the PMU, to quantify extreme computation (light-hearted but also earnestly serious).
So I felt confident that I'd given some serious thought to this before, but I had a nagging feeling that this was a discussion innately concerned with an ultimate computer: one in which the computation was reaching a limit of parallelisation.
I must have realised in my sleep that I didn't have any sense of the computational limit of a single processor, because when I woke I had worked out a way to determine an upper bound.
So I thought I'd share it...
Fastest Processor
The Turing machine is the archetypal computing device. So to work out a general upper bound, why not consider the Turing machine's upper bound...
To start, we need to understand that things get faster as they get smaller. So let's shrink each cell of the Turing tape to the size of a single atom. An atom is (give or take) one Ångström in diameter, which is 10^{-10} m. So our optimal Turing machine would consist of an array of atoms where each atom holds the state of one cell.
Next we shrink the tape head down so it can read the state of an atom (you can do this with a scanning tunnelling microscope for example).
Now for a really fast Turing machine we need to be able to move the head as fast as possible. How fast is that? Well, nothing can go faster than the speed of light (c). So let's imagine we have a Turing tape head that moves at the speed of light, which is 3x10^{8} m/s.
So how many operations can the theoretical machine do in one second? Simple: it's the number of cells you can visit in one second with a lightspeed head, which is just the speed divided by the size of a cell: 10^{8}/10^{-10} = 10^{18} operations/second.
(What happened to the 3? Well, in engineering approximation it's accepted practice to just worry about the order of magnitude; it's an educated guess, so we can get away with rounding off the factor of 3 as irrelevant detail.)
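The whole back-of-the-envelope estimate can be reproduced in a few lines of Python (a sketch using the round figures from the text):

```python
import math

# Order-of-magnitude estimate of the fastest possible Turing machine:
# a lightspeed tape head sweeping over atom-sized cells.
c = 3e8            # speed of light, m/s
cell_size = 1e-10  # one atom is roughly one Angstrom across, m

# Cells visited per second = head speed / cell size
ops_per_second = c / cell_size
print(f"{ops_per_second:.0e} ops/sec")  # 3e+18 ops/sec

# Rounding off the factor of 3 leaves just the order of magnitude
print(round(math.log10(ops_per_second)))  # 18
```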
We have an upper bound on the number of operations per second of the Turing machine of 10^{18} ops/sec. But now we can generalize this statement. In fact, we can say that this is a fair estimate of the upper bound of any computer no matter how it is physically embodied. Here's why...
You could say that the size of an atom is arbitrary: couldn't we build a smaller tape (a smaller unit of state) with something smaller, like a subatomic particle, an electron say?
Well, yes, you could in theory, but when you get really small then you start to run into the Heisenberg limit: you might know something's state but you wouldn't know where precisely it is.
An atom is a very good unit since it's about the smallest thing which has the property: we can reliably measure it and know where it is. As with the Infinite Monkeys' copy of Hamlet, having a tiny Turing tape but not knowing where it is, is a very poor engineering solution!
Equally you might say: but a Turing head (especially an STM tip) can't move at that speed for real!
Well, yes, that's true. So we can think of other physical ways of acting like the head, using electronics or optics or spintronics. It turns out our approximation is still a good upper bound, since the speed at which any head (or beam or field or spin transition) can propagate, no matter what physical phenomena we use, will always have an upper limit of c.
Any physical mechanism to read the state of a computational bit can't be faster than c. Or put another way, the Nyquist sampling period of the universe is the time it takes light to cross an atom, which is 10^{-18} sec. Or if you prefer it as a rate, 10^{18} Hz. Anything that changes faster than that can't be computed: we just found another engineering limit!
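The same two numbers give the sampling period directly (a quick sketch):

```python
c = 3e8        # speed of light, m/s
atom = 1e-10   # atomic diameter, m

# Time for light to cross one atom: the fastest any bit of state can be read
sample_period = atom / c         # ~3.3e-19 s, i.e. ~10^-18 s
sample_rate = 1 / sample_period  # ~3e18 Hz, i.e. ~10^18 Hz
print(f"{sample_period:.1e} s -> {sample_rate:.1e} Hz")
```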
So we've approached the problem by considering Turing's original mechanical machine, but we soon discover that we'll hit the same limit no matter which way we actually physically build it. There's an underlying physical limit to any engine of computation of ~10^{18} operations/second.
So now we can ask engineering questions like: How close are we today to the limit?
Well, today our CPUs run at GHz frequencies, so they run at about 10^{9} operations/second. But let's also account for the fact that our machines are running much more complex instruction sets than the primitive Turing operation. Let's say that on average a single modern operation could be coded up on a Turing machine in about 100 cells.
It follows that a single modern CPU is doing approximately 10^{11} Turing-tape-equivalent operations/second. Therefore we can see that our current technology is between 1 and 10 million times below the theoretical absolute limit.
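That comparison is simple arithmetic; here is a sketch, assuming the article's round figures of a 1 GHz clock and 100 Turing cells per modern instruction:

```python
clock_hz = 1e9      # a modern (2014) CPU clock, Hz
cells_per_op = 100  # guessed Turing-cell cost of one modern instruction
turing_equiv_ops = clock_hz * cells_per_op  # 1e11 Turing-equivalent ops/sec

limit = 1e18        # the ~10^18 ops/sec theoretical bound
gap = limit / turing_equiv_ops
print(f"{gap:.0e}")  # 1e+07: about 10 million; with the rounded-off
                     # factor of 3, call it 1 to 10 million times
```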
As I said, I didn't get asked this question at the BCS, but at least I was prepared!
But here's something nice to consider. Back in the late 1940s, when the very first computers were emerging, the state of the art in electronics would have meant that they had clock frequencies of about 1 kHz. Which, by a happy coincidence of symmetry, means that our GHz computers today are about 1 to 10 million times faster.
Could the computer pioneers have imagined machines that were 1 million times more powerful? I don't know, but they must have guessed they would get a lot faster if they'd done a similar engineering estimate.
So can we expect that in another 70 years our computers will be approaching the theoretical limit? Nope: it's purely theoretical. The other constraints of rate of energy transfer, signal-to-noise (i.e. entropy, I got it in again!) and just plain quantum uncertainty will limit us much sooner. We may have to resort to using alternative Planck Monkey Universes if we want to go much faster.
The Resource Oriented Architectural Style
Tony and I have been engaged in some intricate discussions about ROC and the ways to present it so that we can help classical software developers transition off the plane of code up to the new dimension(s) of spatially resolved resources.
To capture his thoughts, Tony has written an excellent note that clearly articulates how ROC extends beyond REST. If you know REST and want to get a sense of ROC then this is required reading...
The Resource Oriented Architectural Style, Tony Butterfield, March 2014
Have a great weekend.
Comments
Please feel free to comment on the NetKernel Forum
Follow on Twitter:
 @pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hardcore stuff
To subscribe for news and alerts
Join the NetKernel Portal to get news, announcements and extra features.