
NetKernel News Volume 1 Issue 41

August 13th 2010

What's new this week?

Catch up on last week's news here

NKEE Heads Up

We've been doing extensive QA on the NKEE system. I was hoping to announce the RTM build today but we want to attend to a few last-minute details, so please bear with us a little longer. If things go to plan, we'll release the final gold build as soon as possible next week.

NKSE Repository Updates

The following updates are available in the NKSE and NKEE RC2 repositories.

Core

  • layer0 1.36.1
    • Added support for ILongReadableBinaryStreamRepresentation supporting content lengths greater than Integer.MAX_VALUE
  • layer1 1.16.1
    • FileRepresentation now supports Long Binary Stream for very large file sizes.
  • module-standard 1.23.1
    • Reverted a change so that space metadata is invalidated on decommissioned modules, ensuring dependent spaces are updated when new versions with different interfaces are deployed
  • nkse-cron 1.9.1
    • Fixed potential NPE during startup caused by introspecting its state before it is completely initialised

Applications

  • wink 1.9.1.nkp.jar
    • Fixed export of scripts so that they go through the public channel.
  • wink-slinki 1.4.1
    • Fixed import of scripts

NetKernel ssh daemon

Available now in the NKEE RC2 repository is a brand new nkee-sshd package that provides a full OpenSSH-compatible SSH daemon for NetKernel. It supports the standard SSH modes including...

  • ssh shell
  • ssh remote command execution
  • scp file transfer

Here's a shell login using public key auth to NK sshd running on port 8022...

pjr@hp6715b:$ ssh -p 8022 -i test-key hp6715b
    ---------------------------------------------
    NetKernel SSH Server v1.0 on hp6715b
    Welcome: pjr
    Logged in at: Tue Aug 10 19:56:25 BST 2010
    ---------------------------------------------    

pjr@hp6715b:NK$  
The NK in the shell prompt indicates you're connected to a NetKernel sshd.

There are many uses for this new capability (a few command-line sketches follow the list):
  • Secure shell control and management of your applications.
  • Integrating a NetKernel-hosted process into a native pipeline.
  • Downloading the result of an NK process to a file.
  • Uploading a file to store or replace resource state on the NK server.
  • Using NK as a management tool for your native OS.
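
To make those concrete, here's roughly what the standard OpenSSH client invocations would look like against the NK sshd on port 8022. Treat these as hypothetical sketches: the key, hostname and remote targets are placeholders rather than examples taken from the package documentation.

    pjr@hp6715b:$ ssh -p 8022 -i test-key hp6715b '<remote NK command>'
    pjr@hp6715b:$ scp -P 8022 -i test-key hp6715b:'<remote resource>' result.xml
    pjr@hp6715b:$ scp -P 8022 -i test-key state.xml hp6715b:'<remote resource>'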

For full details and example use, here's a slinki presentation*.

Install

You can install from the NKEE RC2 repository (update Apposite and find nkee-sshd and nkee-sshd-fulcrum). However, nkee-sshd is part of the Enterprise-licensed components and so needs a license key. If you're keen to play with this and can't wait for it to be part of the NKEE gold release next week, you should just download a new copy of the RC2 build from the portal (RC2 now ships with the necessary license)...

https://cs.1060research.com/csp/download/ (registration required)

*I'm starting to love the power of slinki: it took 5 minutes to pull together a set of slides by repurposing the module documentation. Excuse the pistachio colour scheme - I think the whole shell thing is getting to me.

An Independent View: What is this NetKernel thing Anyway?

Last week Darren Cruse gave a presentation entitled What is this NetKernel thing Anyway? at the Lambda Lounge St. Louis.

He kindly told us about it and has made his slides available for download...

http://www.slideshare.net/darrencruse/whats-this-netkernel-thing-anyway

It's interesting to hear an outside, independent voice. He makes some great points; I read through them and thought "why the hell aren't we saying this stuff". It's funny too; a highlight for me was the "real made up conversation".

[Apparently Darren just set up a blog where he promises more of the same.]

NetKernel News Volume 1 Anthology

The last 40+ weeks of news articles have now been ported to the wiki. I usually write the news in a frantic last-thing-on-Friday kind of scramble, so in porting the content to the wiki I preserved a literal reproduction and, other than some heading styles, I resisted the temptation to apply any retrospective editorial.

As you all know, much of what I say is complete crap. However sometimes I raise my game and produce nuggets of pure drivel. Here are some highlights, should you wish to be reminded...

P v NP

Finally some interesting news that maybe, just maybe, ROC offers some perspective on. The word on the Computer Science street is that Vinay Deolalikar of HP Labs may (or may not) have proved P ≠ NP.

There's a layman's article on the BBC News. For the hardcore, here's the Deolalikar Paper (PDF) [it was under a lot of load and wasn't up last time I tried - oh please, let's keep above any innuendos about Mark Hurd].

So what the heck has this got to do with ROC? Well, there are some interesting consequences of exploring the complexity of algorithms when they are recast into the extrinsically recursive ROC address space. [By extrinsically recursive, we mean an algorithmic function that invokes itself by requesting a resource identifier in the ROC address space that resolves back to the same endpoint.]

Here's a little module to play with that gives concrete examples of extrinsic recursive implementations of the Fibonacci double-recursion algorithm and the Ackermann Function...

http://resources.1060research.com/docs/2010/08/roc-pvnp-1.0.0.nkp.jar

(This is an NK package, so to install it use the Package upload button in Apposite.)

After installation, the demo's web interface is here. It lets you try computing fib(n) using both an intrinsic (classical recursive) and an extrinsic embodiment of the double-recursion algorithm. There's a very old draft paper we wrote about the performance characteristics of this here. The headline is that classical recursion is exponential, while the extrinsic resource-oriented form is linear.

Now that sounds hard to believe. But fortunately NK allows us to visualize the internal "structure" of an algorithm.

To prove it, if you've got NKEE, you can use the upload capability of the EE visualizer to view this visualizer trace. (If you don't have NKEE, you can view a static copy here) *

What you'll see is a triangular pattern where each fib(x) request is resolved to the Fibonacci endpoint all the way down to fib(0) at which point, all of a sudden, the recursive resource requests start finding "we've been here before" and the algorithm linearly returns, hitting the cache all the way back.

So this is novel, but I don't have the time to argue whether it is a known result. Some might argue that memoization within a classical recursive function will produce the same result. (The key point here is that we didn't have to rewrite the function - it's the extrinsic context in which the algorithm is running that changes its order!).
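
For contrast, here's a minimal plain-Java sketch (not the demo's ROC source) showing the classical double recursion next to a cached variant, where a map plays the role the NK resource cache plays extrinsically. The difference is that here the memoization has to be written into the code, whereas in ROC the function is untouched and the cache sits outside it.

    import java.util.HashMap;
    import java.util.Map;

    public class FibSketch {

        // Classical double recursion: exponential - fib(36) costs tens of
        // millions of calls.
        static long fibNaive(int n) {
            return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
        }

        // The same algorithm with a cache: each distinct fib(k) is computed
        // exactly once, so fib(36) needs only ~37 computations - a rough
        // analogue of "hitting the cache all the way back" in the trace.
        static final Map<Integer, Long> cache = new HashMap<>();

        static long fibCached(int n) {
            if (n < 2) return n;
            Long hit = cache.get(n);
            if (hit != null) return hit;          // "we've been here before"
            long value = fibCached(n - 1) + fibCached(n - 2);
            cache.put(n, value);
            return value;
        }

        public static void main(String[] args) {
            System.out.println(fibNaive(36));     // slow
            System.out.println(fibCached(36));    // effectively instant
        }
    }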

Let's leave the fib discussion for now. Instead let's look at the next result, which I think illustrates why ROC can offer some perspective on the PvNP discussion...

Ackermann Function

Included with the demo module is an ROC extrinsic recursive implementation of the Ackermann Function. This innocuous little beast goes like this...

A(m,n)  ->  n+1                   if m=0
        ->  A(m-1, 1)             if m>0 and n=0
        ->  A(m-1, A(m, n-1))     if m>0 and n>0
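
As a reference point, here's that definition transcribed directly into Java as ordinary (intrinsic) recursion - just a sketch of the classical form, not the ROC demo code.

    public class AckermannSketch {

        // Direct transcription of the definition above. Fine for tiny m,
        // but the call tree and stack depth explode very quickly.
        static long ackermann(long m, long n) {
            if (m == 0) return n + 1;
            if (n == 0) return ackermann(m - 1, 1);
            return ackermann(m - 1, ackermann(m, n - 1));
        }

        public static void main(String[] args) {
            // A(3,6) is only 509, yet evaluating it naively already takes
            // well over a hundred thousand recursive calls.
            System.out.println(ackermann(3, 6));
        }
    }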

On first look, nothing too much to get excited about? But don't be deceived, this is a wild-eyed, spittle-flecked nutter of an algorithm. It's not exponential, oh no, it's tetrational. Meaning it recurses faster than a recursive ferret down a greased rabbit warren on a toroidal planet (me, struggle for a metaphor, never).
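
To put some numbers on that, the standard closed forms for small m (which follow directly from the definition above) show the jump from linear to exponential to a tower of exponentials:

    A(1,n) = n + 2
    A(2,n) = 2n + 3
    A(3,n) = 2^(n+3) - 3        e.g. A(3,6) = 2^9 - 3 = 509
    A(4,n) = 2^2^...^2 - 3      a tower of n+3 twos; A(4,2) already has 19,729 digits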

If you've installed the demo you can try it out here. Warning: keep m small and try increasing n a little (but still small). Oh yeah, you'll have to increase the Kernel's Max Request Depth, which is an in-built safety valve NK provides. For the following test you'll need at least a depth of 300 compared with the default of 40.

As with fib, here's an example visualizer trace that you can upload to examine the structure of the algorithm for A(3,6). (You can see a static copy here.) Be warned: these visualizations will make your browser sweat.

So what are we looking at? I guess first I'd better explain how the Ackermann algorithm has been implemented using extrinsic recursion. The key thing with this embodiment is that it is written to be purely asynchronous. If it were synchronous, all the threads in the Bayeux tapestry wouldn't be enough to probe even a tiny part of the A(m,n) space.

Now, because it's async, the implementation is somewhat non-obvious. (In the demo source you'll actually find both a sync and an async implementation to compare styles.) The gist is that a request for A(m,n) makes sub-requests for A(m-1, 1) and A(m-1, A(m,n-1)). The trick we pull is to recognize that if we were to fully expand the resource identifiers, there would not be enough atoms in the universe to write them down (see the expansion section of the Wikipedia article for examples). So we've used a recursive identifier notation, and the endpoint recursively issues the requests for each m and n part of the identifier. For example, the basic identifier is A:m,n so we can have A:3,6 but we can also have A:1,A:2,25. (FWIW, this is probably the first ever example of recursive identifiers in an extrinsic ROC algorithm.)
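
To make the identifier notation concrete, here's a small plain-Java sketch of the idea - emphatically not the demo's source and not the NetKernel API, just an illustration under the assumption that identifiers look like A:m,n where m and n are either numbers or further A:... identifiers. A map stands in for the ROC resource cache, so any identifier that has been resolved before is served straight back.

    import java.math.BigInteger;
    import java.util.HashMap;
    import java.util.Map;

    public class AckermannIdentifierSketch {

        // Stand-in for the ROC resource cache: identifier -> value.
        static final Map<String, BigInteger> cache = new HashMap<>();

        static BigInteger resolve(String identifier) {
            BigInteger hit = cache.get(identifier);
            if (hit != null) return hit;                 // "from cache"
            BigInteger value;
            if (identifier.startsWith("A:")) {
                String args = identifier.substring(2);
                int comma = topLevelComma(args);         // split m from n
                BigInteger m = resolve(args.substring(0, comma));
                BigInteger n = resolve(args.substring(comma + 1));
                if (m.signum() == 0) {
                    value = n.add(BigInteger.ONE);
                } else if (n.signum() == 0) {
                    value = resolve("A:" + m.subtract(BigInteger.ONE) + ",1");
                } else {
                    BigInteger inner = resolve("A:" + m + "," + n.subtract(BigInteger.ONE));
                    value = resolve("A:" + m.subtract(BigInteger.ONE) + "," + inner);
                }
            } else {
                value = new BigInteger(identifier);      // a literal number
            }
            cache.put(identifier, value);
            return value;
        }

        // Find the comma separating m from n: each nested "A:" owns exactly
        // one comma of its own, so let the A's we pass absorb commas first.
        static int topLevelComma(String s) {
            int owed = 0;
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c == 'A') owed++;
                else if (c == ',') {
                    if (owed == 0) return i;
                    owed--;
                }
            }
            throw new IllegalArgumentException("malformed arguments: " + s);
        }

        public static void main(String[] args) {
            System.out.println(resolve("A:3,6"));        // 509
            System.out.println(resolve("A:1,A:2,25"));   // A(1, A(2,25)) = 55
        }
    }

Because the cache is keyed on the identifier itself, repeated requests such as A:2,k are only ever computed once, which is exactly what the "from cache" entries in the visualizer trace correspond to.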

Whether or not you followed the explanation, it's still worth taking a "look inside the algorithm" with the visualizer trace.

Looking at the trace, you'll at first see something a bit like fib(n). The function starts to walk out through the foothills of the A(m,n) space. After a while it starts to step back down again. "Phew", you might think, "we're getting somewhere". But oh no, it starts to race up steeper hills, sometimes falling back down jaggy paths, swooping out further with each cycle. Eventually, after we get to a tipping point, we start to come back towards base camp and, exhausted, we find the value of A(m,n). (For a kind of animated view, grab your browser's vertical scroll bar and drag it from top to bottom to watch the algorithm move.)

Interesting to see perhaps. But even more interesting is the little detail I omitted. Did you spot that as we step back down the jaggy paths, sometimes we set off uphill again only to find a path that we've already walked? In your browser window search for, or highlight, "from cache".

So our extrinsic recursive Ackermann is finding and lopping off massive portions of the space that a local recursive implementation would have to revisit. ROC is thrashing classical again!

On first impression one might think that ROC is fundamentally changing the order. But although it is yielding dramatically more efficient computation, if we start to go out in m (the real killer axis) we very quickly run out of steam.

What we are seeing, and the thing that sheds light on P v NP, is that as we increase m,n, the spatial extent of resources that need to be visited, even by our extrinsic recursive ROC algorithm, grows much, much faster than any knowledge we can accumulate about it. Extrinsic recursion saves a lot of extraneous computation, but it cannot keep up with the tetrationally expanding horizon of the unknown.

It's my gut feeling (in Maths they call it a "conjecture") that this is the characteristic that distinguishes NP from P. The Ackermann Function is not the only nutter. I have extrinsic recursive algorithms for the NP-complete Subset Sum (Knapsack Problem) and they show exactly the same "expanding horizon of the unknown" characteristic. If I had the time (and mental horsepower), this would be the direction I'd take to explore a proof of P ≠ NP.

I hope Vinay's proof holds up. It is important stuff.

Results

Incidentally, here's a fairly meaningless comparison, but running "grep -o cache xxx.html|wc -w" on the two HTML visualizer traces shows the ratio of endpoint executions to cache hits.

Function    Invocations    Cache Hits
fib(36)     36             36
A:3,6       1536           259

FWIW, if you turn off the NK cache (set its max size to zero), you turn the algorithms into effectively their classical equivalents. With no cache I tried A(3,6), but the request depth of 300 became too small, and then I had so many concurrent requests in the system that I ran out of memory. Those lopped-off branches really did save a lot.

* Yes, we know no limits in how low we will stoop to point out the extra capabilities of using NKEE!

An unpublished result is that the order of NP algorithms in extrinsic recursive form is affected by the statistical distribution of the input data, ranging from polynomial to exponential as the entropy of the data is increased. This indicates why, empirically, we see that real-world problems on an ROC system often yield better results than the theoretical limits would suggest.

This leads to a whole other philosophical can of worms about the nature and role of approximation in computation, and how we can quite readily set up ROC architectures that give approximate solutions. (Just because it's NP doesn't mean an approximate answer isn't useful - when was the last time you swore at your satnav for failing to compute the precise NP-complete travelling salesman route plan?)


Have a great weekend!

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research

