
NetKernel News Volume 1 Issue 35

July 9th 2010

What's new this week?

  • Release of NetKernel Enterprise Edition 4.1.1 Preview 6.
  • Various tool enhancements and endpoint metadata improvements in the core infrastructure.
  • No Fluff Just Stuff covers NK/ROC.
  • Scheduled talk 26th July Skillsmatter London "NK and the Resource Oriented Cloud".
  • "ROC: Step away from the Turing tape", thoughts on recent community discussion on the state of computer science.
  • Summer training specials.

NKEE 4.1.1 Preview 6

We are closing in on the final features of NKEE and have cut a preview 6 build. This is a full distribution with a corresponding new apposite repository. To grab a copy you can download from:

https://cs.1060research.com/csp/download/ (registration required)

The following features are updated/new in preview 6...

nkee-dev-tools: System health monitor now supports acknowledgement and dismissal of reports.

nkee-architecture: Profiler endpoint with custom state viewer in the space explorer. Virtual host endpoint.

nkee-deadlock-detector: Documentation added.

nkp: Now supports default handling for the nkp:// and nkps:// schemes. active:NKPRequest manages concurrent establishment of the same configuration. Now supports configurable file-backed buffering of large representations.

nkee-docs: Enterprise features now have more documentation.

NKEE 4.1.1 preview 6 also includes the following general updates to the NKSE 4.1.1 core...

NKSE Repository Updates

layer0: misc low level updates for endpoint metadata.

standard-module: update to the configured overlay base class to stop badly behaved endpoints (i.e. those always expiring metadata) from slowing system startup/restart.

system-core: added new system metadata with bidirectional indexing between prototypes and their instantiated instances.

nkse-visualizer: Now uses active:xsltc to speed rendering of very large visualizations.

nkse-dev-tools: Space explorer now shows the bidirectional linkage between prototypes and their instantiated instances, and allows prototype instances to expose custom state viewers/managers plugged into the space explorer.

nkse-doc-content: various corrections and enhancements

No Fluff Just Stuff

If you're US based you'll no doubt have heard of the No Fluff Just Stuff technology roadshow. For the non-US folk, it's the 21st Century's equivalent of Buffalo Bill's Wild West Circus, only with Twitter & laptops instead of guns-n-horses*

Brian Sletten (bare-back ROC specialist) is an ever-present fixture and has more air miles than Richard Branson. This weekend he's in Salt Lake City and will be giving some deep coverage of NetKernel, including a *hot off the press* demo of NK Protocol. If the NFJS tour is near you, it's well worth the time to take it in. Brian's non-stop schedule is here...

http://www.nofluffjuststuff.com/conference/speaker/brian_sletten

Jeremy Deane is also increasing his presence on the tour and is giving a range of NetKernel-based talks including Resource Oriented ESB & Resource Oriented Concurrent Processing:

http://www.nofluffjuststuff.com/conference/speaker/jeremy_deane

You get both for the price of one when they overlap in Boston, MA in early September.

Recent versions of his presentations are available online and include links to downloadable NK demos too.

Jeremy also published an article in the NFJS Magazine June 2010 edition on RO Concurrent Processing (requires subscription):

http://www.nofluffjuststuff.com/home/magazine_subscribe

[* Most of the time, there was of course the notorious J2EE horse flogging incident in 2006]

NetKernel and the Resource Oriented Cloud

For those on the East side of the Atlantic, I'll be presenting the latest on NK, the NK Protocol and ROC at SkillsMatter in London on the 26th July. The full title of the talk is...

"In the Brain of Peter Rodgers, NetKernel and the Resource Oriented Cloud"

which, as @bsletten was twittering, sounds pretty scary (or makes me a prime candidate for a "No Stuff Just Fluff" tour!)

It's a free evening event, but you need to book your place in advance:

http://skillsmatter.com/event/cloud-grid/netkernel-and-the-resource-oriented-cloud

If you're not able to make this in person then I will be giving an online web-meeting run-through in advance on Wednesday 21st July:

  • Europe Mainland: 5pm
  • UK: 4pm
  • US Eastern: 11am
  • US Pacific: 8am

Ping an email to me so we can plan capacity and send you the web-meeting details.

ROC: Step away from the Turing tape

You might have caught wind of this thoughtful blog post by "Uncle Bob" Martin earlier in the week...

http://blog.objectmentor.com/articles/2010/07/05/software-calculus-the-missing-abstraction

In which he points out that the conventional software world might not be the whole picture - his metaphor is that software today is like mathematics was, when it only conceived of algebra and yet whole new vistas opened with the dawn of calculus. His sentiment appears to have resonated with the wider community.

He ain't wrong. Nor is he the first to feel this. In the early 2000s, the philosopher Toby Ord, a graduate student of Jack Copeland, offered a similar view in his review article "Hypercomputation: computing more than the Turing machine"...

http://arxiv.org/pdf/math/0209332

...where, in the introduction, he says:

"In many ways, the present theory of computation is in a similar position to that of geometry in the 18th and 19th Centuries. In 1817 one of the most eminent mathematicians, Carl Friedrich Gauss, became convinced that the fifth of Euclid's axioms was independent of the other four and not self evident, so he began to study the consequences of dropping it. However, even Gauss was too fearful of the reactions of the rest of the mathematical community to publish this result. Eight years later, János Bolyai published his independent work on the study of geometry without Euclid's fifth axiom and generated a storm of controversy to last many years."

So, why am I pointing these references out to you? Well, apart from being cajoled to "pipe up" by my friend Mr B. Sletten, because I've been thinking about this stuff for a long time. To get to where we are with NK and ROC, we've been embracing some heretical beliefs for over ten years.

There is no room to provide the detailed background, and besides we all have real work to do, but for the sake of at least tantalizing you, here are some of the cornerstones...

  • Information is Platonic - all possible information resources are "out there".
  • Software development is the art of introducing and constraining context in order to reify (make real) tangible instances of the Platonic resource set.
  • Identity is relative to context and may be traded - identity shrinks as context is constrained.
  • There is an innate duality between identity and representation.

To the software community, perhaps the biggest heresy is:

There is an infinite set of possible Turing engines with a corresponding set of valid code (computable identifiers), known respectively as "languages" and "programmes". Ultimately, nobody who uses the solution cares. Put another way: there are no prizes in the "My language is better than yours" competition - in fact there are no judges (only priests).

So, in Uncle Bob's analogy, language is the "Algebra" of the classical software paradigm used in the mainstream today, in that we solve information problems by working within a given Turing dialect. One kind of Turing engine, one representational form for code and ultimately a world that maps into itself. You might call this a "Turing monoculture".

The Church-Turing thesis states that there's nothing inherently missing from a single Turing framework. Any given Turing machine can compute exactly the same as any other. But "living on tape" leaves no room for externalities and emergent state to react with and influence the context.

As Ord points out, misconceptions have grown up about computability and Turing completeness. Misconceptions that Turing never held; he was well aware that a Turing machine can only compute what a mechanistic mathematician can, with no scope for intuitive leaps. By which I would interpret "intuitive leap" as meaning: there is no scope for accumulated prior knowledge to be applied to shortcut the route to future required state.

In his review, Ord does a good job of summarizing a range of theoretical Hypercomputers. That is, computational models that *are* able to compute more than the Turing machine. [I'll let you Google "hypercomputing" for examples of real world systems that in everyday life cannot be "Turing machine computed" and yet have been reified somehow. A good example is how some simple curves have no computable first order derivative and yet you can draw representations of them.]

For example, what if you had what Ord calls a "Turing Machine with Initial Inscription", i.e. a Turing machine that contained within itself a lookup table for all possible input states that the machine could ever experience in production. Impossible, yes. But this machine would actually never have to "compute" anything. It would have the answer in advance for every external state change.
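The idea can be made concrete with a toy sketch. Here a finite, precomputed table stands in for the (impossible) infinite initial inscription; the function being "inscribed" is an arbitrary stand-in, not anything from NetKernel:

```python
# Toy illustration of Ord's "Turing Machine with Initial Inscription":
# if every possible input were already paired with its answer, the
# "machine" would never compute at run time, only look up. A finite
# table over a known input domain is only an analogy for the
# (impossible) infinite inscription.

def precompute_inscription(inputs):
    """Build the lookup table ahead of time, before the machine runs."""
    return {n: n * n for n in inputs}  # stand-in for any computation

INSCRIPTION = precompute_inscription(range(1000))

def machine(n):
    # No computation here -- the answer was inscribed in advance.
    return INSCRIPTION[n]

print(machine(12))  # 144
```

The interesting property is that `machine` does no work at all for any input it was built for; all the "energy" was spent ahead of time.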

So, where does ROC come in? ROC is a two-phase contextual abstraction for computation. The first phase is resolution - taking a resource identifier and using the context to either find a representation or, if not present, to find a Turing engine able to reify the resource. The second is the classical execution of code - but the twist is that during the execution you can, and invariably do, step back into the address space and issue further requests, which results in a cascade of the two-phase cycle.
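The two-phase cycle described above can be sketched in a few lines. This is a hypothetical illustration only: the names (`Space`, `request`, `res:/...`) are invented for the sketch and are not the real NetKernel API.

```python
# Hypothetical sketch of the two-phase ROC cycle described in the text.
# Phase 1: resolve an identifier in a context (address space) to either
# an existing representation or an endpoint able to reify it.
# Phase 2: execute the endpoint; execution may itself step back into the
# space and issue further requests, cascading the two-phase cycle.

class Space:
    def __init__(self):
        self.representations = {}   # already-reified resources
        self.endpoints = {}         # identifier -> callable(space)

    def request(self, identifier):
        # Phase 1: resolution within the context.
        if identifier in self.representations:
            return self.representations[identifier]
        endpoint = self.endpoints[identifier]
        # Phase 2: execution, which may re-enter the space.
        value = endpoint(self)
        self.representations[identifier] = value  # retain reified state
        return value

space = Space()
space.representations["res:/greeting"] = "hello"
# This endpoint issues a sub-request mid-execution: the cascade.
space.endpoints["res:/shout"] = lambda s: s.request("res:/greeting").upper()
print(space.request("res:/shout"))  # HELLO
```

Note how the second request for `res:/shout` would resolve straight to a representation without executing any code at all - resolution, not execution, becomes the primary operation.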

Using Ord's metaphor of Gauss and the Euclidean axioms, ROC relaxes the "axiom of code execution" and allows the foundational informational fabric of identity and context to take their place as first order citizens.

NetKernel is an embodiment of an ROC computation environment and it is, of course, ultimately represented as an inscription on a Turing tape (you can't escape technological reality). However a big trick that we pull with NetKernel is to have figured out that you can use knowledge of the thermodynamics and the statistical distributions of real world problems to trade the impossibly-infinite spacial extent of the Hypercomputation engines, for a local relativistic knowledge of resolvable context, computational energy and relative entropy of the representational state.
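A crude way to illustrate that trade: a bounded cache keeps only the statistically "hot" representations, so a finite store approximates the infinite initial inscription for real-world workloads. Here `functools.lru_cache` is merely a stand-in for NetKernel's far richer dependency-aware cache, and the workload function is invented for the sketch:

```python
# Illustrative sketch of trading infinite space for bounded space plus
# energy and time: compute once (spend energy), then serve from a
# finite, statistically managed store. This is an analogy, not
# NetKernel's actual caching machinery.
from functools import lru_cache

@lru_cache(maxsize=256)   # finite space bound: cold entries are evicted
def reify(identifier):
    # Stand-in for an expensive reification (spending "energy").
    return sum(i * i for i in range(identifier))

reify(100)                       # computed once
reify(100)                       # now a lookup, like the pre-inscribed machine
print(reify.cache_info().hits)   # 1
```

Because real request distributions are heavily skewed, a small bound like this captures most of the benefit of the impossible infinite table - the "dynamic equilibrium" the text describes.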

If you really dig down into how NetKernel is working you'll find that it is like a hybrid emulation of the "Oracle-machine" (Turing's own 1936 two-phase computational model), a "Turing Machine with Initial Inscriptions" and "Coupled Turing Machines". I say "emulation" since by necessity it is confined within a T-machine. However we can make it work since we are bounding the infinite state space requirements of Hypercomputers by trading space with energy and time to achieve a dynamic-equilibrium.

You tell me whether NetKernel is the "calculus" step from the "algebra" of today's classical software. But I do know that NetKernel allows you to jump off any given Turing tape, peek at the dynamic partial multi-dimensional "Oracle tape" and jump back onto another, different-dialect, Turing tape. Each time you jump off the tape you get the opportunity to allow the identity and contextuality axioms of the Platonic information reality to join the abstraction.

By making software the art of defining context, ROC is able to present solutions that are not bound within the uni-dimensional solution space of a single language. So you can approach a local approximation to a normalized representation of the software solution. Which is a long winded way of explaining why people repeatedly tell us "I can't believe how little code I write on NetKernel".

If this made sense you may also be starting to see how we can justify the claim that ROC allows architecture and code to be treated independently. Architecture is just another way of saying structural context (the domain of phase one of ROC). Code is the classical "algebra" (the domain of phase two). Ultimately this is important to us all, since the entanglement of architecture with code is the limiting factor on the economics of real-world solutions. As I've said before: classical software is too damned expensive, too damned brittle and too damned small-scale for the information problems we could be tackling.

In summary, the heretical idea behind ROC/NetKernel is that software is not about the "algebra" of languages running on Turing machines, it's about the "calculus" of obtaining information (state) by controlling and manipulating context.

Besides which, changing job title from "Software Developer" to "Director of Context" sounds so much cooler.

Summer Special Training Offer

The 33% training discount is still available...

If not now, with all the fabulous things you can do with NK clouds via NKP, then when will be the right time to get your black belt in NK? It takes just 2-3 days of instructor-led training for an existing Java developer/architect to become fully productive building scalable ROC solutions on NetKernel. So, to give you a leg up the learning curve, we're offering a Summer Special 33% discount on training.

Ping an email titled "NetKernel Summer Special" to services@1060research.com if you want to find out more.


Have a great weekend.

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


© 2008-2011, 1060 Research Limited