NetKernel News Volume 4 Issue 23 - On Empiricism - Part 4, HTTP Asserts, Collected Works of ROC

NetKernel News Volume 4 Issue 23

September 27th 2013

Catch up on last week's news here, or see the full volume index.

Repository Updates

The following update is available in the NKEE and NKSE repositories

  • http-client-2.14.1
    • Fixed bug where active:httpPUT requests were not returning the HTTP response headers as an internal NK response header
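
      For illustration, here's a minimal Groovy sketch of picking those headers up after a PUT (the url and body values are invented; the header name is the one used in the asserts later in this newsletter)...

      // Issue a PUT and read back the HTTP response headers, which the
      // fixed http-client now returns as an internal NK response header
      req=context.createRequest("active:httpPut")
      req.addArgumentByValue("url", "http://localhost:8080/foo/")
      req.addArgumentByValue("body", "<baa/>")
      resp=context.issueRequestForResponse(req)
      headers=resp.getHeader("HTTP_ACCESSOR_RESPONSE_HEADERS_METADATA")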

NEW: Collected Works

I've been writing between 30 and 50 newsletters a year for the past four or five years. Each letter contains topical news items, but I also usually try to include a deeper article. What I've been doing with these articles is attempting to write a definitive book on Resource Oriented Computing (in slow motion).

I know personally that there is some good stuff in here, since I often resort to searching back (search box at the top of the page) and sending a link to something or other when people ask me a particular question.

However, up until now you were left to your own masochistic tendencies if you wanted to go back and read through all the material published here.

Well, starting today, Tom Mueck has dug deep and stepped up to the role of "Editor of the definitive collection of ROC/NetKernel articles" (aka "Masochist-in-Chief"). He has selected important articles and series from the newsletters, and has also collected material published elsewhere by the wider NetKernel team/community.

I'm delighted to announce that the first release (v1.1.1) of this substantial collection is available today as a PDF download:

Download Collection v1.1.1

The collection is loosely structured and can be browsed and dipped into without having to read everything.

Please let us have your feedback, and feel free to share this PDF with your colleagues, friends, peers and anyone you feel would be interested to hear about the Resource Oriented Computing revolution.

To keep the size manageable, some material is not included in the collection. For example, I regularly provide an article with technical tips or worked examples - these are yet to be gathered up, so you may still want to browse back through the article listing we also maintain, or just walk through the volume index.

Thanks Tom. And thanks to you, the ROC community, the readers of these newsletters, for giving me feedback that you're out there and that this stuff is interesting enough for you to keep reading it. There's still plenty more to cover...

Tony, Tony, we need more butterflies...

TIP: HTTP Asserts

The NetKernel XUnit framework is resource oriented and allows you to easily declare custom asserts. Here are a couple of potentially useful little asserts I used this week, which you can cut and paste into your own unit test declarations.

The first one provides a <responseCode> assertion on HTTP responses. (The definition is at the top with an example of its use in a test shown below)...

<testlist>
  <!--Declare a <responseCode> assertion endpoint to test http response code of REST channels-->
  <assertDefinition name="responseCode">
    <identifier>active:groovy</identifier>
    <argument name="operand">arg:test:response</argument>
    <argument name="code">arg:test:tagValue</argument>
    <argument name="operator">
      <literal type="string"> resp=context.source("arg:operand"); code=resp.getHeader("HTTP_ACCESSOR_STATUS_CODE_METADATA") testcode=Integer.parseInt(context.source("arg:code")); context.createResponseFrom(code==testcode); </literal>
    </argument>
  </assertDefinition>
  <!--Example test using these asserts-->
  <test name="POST Hello World">
    <request>
      <identifier>active:httpPost</identifier>
      <argument name="url">http://localhost:8080/foo/</argument>
      <argument name="body">
        <baa />
      </argument>
    </request>
    <assert>
      <responseCode>200</responseCode>
    </assert>
  </test> ...
</testlist>

The second one provides an <httpHeader> assertion on HTTP response headers. (The definition is at the top with an example of its use in a test shown below)...

<testlist>
  <!--Declare a <httpHeader> assertion endpoint to test http Headers-->
  <assertDefinition name="httpHeader">
    <identifier>active:groovy</identifier>
    <argument name="operand">arg:test:response</argument>
    <argument name="header">arg:test:tagValue</argument>
    <argument name="operator">
      <literal type="string"> import org.netkernel.layer0.representation.* resp=context.source("arg:operand") headers=resp.getHeader("HTTP_ACCESSOR_RESPONSE_HEADERS_METADATA") testHeader=context.source("arg:header", IHDSNode.class) name=testHeader.getFirstValue("//name") value=testHeader.getFirstValue("//value") h=headers.getFirstValue("//"+name) context.createResponseFrom(value.equals(h)); </literal>
    </argument>
  </assertDefinition>
  <!--Example test using these asserts-->
  <test name="POST Hello World">
    <request>
      <identifier>active:httpPost</identifier>
      <argument name="url">http://localhost:8080/foo/</argument>
      <argument name="body">
        <baa />
      </argument>
    </request>
    <assert>
      <httpHeader>
        <literal type="hds">
          <header>
            <name>Location</name>
            <value>http://localhost:8080/foo/12345</value>
          </header>
        </literal>
      </httpHeader>
    </assert>
  </test> ...
</testlist>

It would probably be a good idea to create a collection of custom asserts and publish them as a library module. If you have asserts you'd like to share please send them in and we'll start a collection (it seems to be the week for starting collections).
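
To get the ball rolling, here's one more cut from the same cloth - a minimal, untested sketch of a <mimetype> assertion. It assumes that the response sourced from arg:test:response exposes getMimeType() alongside the getHeader() used in the asserts above...

<assertDefinition name="mimetype">
  <identifier>active:groovy</identifier>
  <argument name="operand">arg:test:response</argument>
  <argument name="mime">arg:test:tagValue</argument>
  <argument name="operator">
    <literal type="string">
      resp=context.source("arg:operand")
      mime=context.source("arg:mime")
      context.createResponseFrom(mime.equals(resp.getMimeType()))
    </literal>
  </argument>
</assertDefinition>

It would then be used in a test's <assert> block as, for example, <mimetype>text/html</mimetype>.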

On Empiricism - Part 4

Last time we heard how the early Twentieth Century was a period of cataclysmic shocks. A brutal, unrelenting assault that left the Age of Empiricism in tatters... or did it?

In this part we shall finally move our attention to Information Technology and contemplate how the death of Empiricism came to be overlooked...

Modern Computing is Conceived

Modern Computing was conceived in the late 1920's and early 1930's.

History is a fickle discipline and we often simplify to make it easier to tell a story. It would be easy to say that "Computing was invented by Alan Turing" (I know you've been waiting for him to turn up since part 1). In fact, modern computing and the formal mathematical concept of computability were conceived independently and approximately simultaneously by both Alan Turing and Alonzo Church.

The former is famous because his conceptual model of the Turing Machine is extremely simple to grasp, and ultimately, is a small step away from how a physical computer actually works. I suppose he earns the credit as the "Father of Computing" because he stuck around after the conception and actually helped deliver the baby and changed its nappies (diapers).

In parallel, Church devised the lambda-calculus, a much more "Mathsy" approach which, as is now well understood, is exactly equivalent in expressiveness to a Turing machine.

So, Computability has two fathers, but what does it actually mean?

The defining statement of computing is the Church-Turing Thesis, which I shall express as:

Any calculation that can be performed by a human being following an algorithmic procedure can be performed by a Turing Machine.

Sometimes this is simplistically, and wrongly, expressed as "anything that can be calculated, can be calculated by a Turing Machine", but as we've learned from Gödel, this massively over-eggs the pudding.

At this point we can probably fold up the intervening 80 years and move straight up to the modern day. After all, that's still what we do in IT now, isn't it? We spend our days "constructing algorithmic procedures that can be performed by Turing Machines"...

"What are you doing?"
"I am constructing an algorithmic procedure that can be performed by a Turing Machine"

...is a long-winded thing to tell your manager, so we generally shorten it to "coding". We say, "I am writing code".

It seems like Empiricism didn't die after all? Could it be that Coding is to Empiricism what the Birds are to Dinosaurs?

It certainly feels like our building materials (languages: imperative, functional; procedural, object oriented) and our working practices (test driven development) would wish this to be so.

Modern computing has been a great success. It's all-pervasive. It's a multi-billion dollar global activity. It feels a lot like the empirical age at the end of the 19th century.

If history tells us anything, then IT is ripe for an empirical catastrophe.

Or it would be, if IT's catastrophe hadn't already happened (twice) while we weren't paying attention...

Computing's Empirical Catastrophe

We have heard how Gödel pulled down the pillars of mathematics. His incompleteness theorem was a devastating shock - large in direct proportion to the millennial depth of the mathematical foundations it undermined.

The weird thing is Computation has had just as serious a shock. It's just that Computation's empirical catastrophe came while computing was, as yet, unborn. Even weirder, the creator of computing, Alan Turing, was also its destroyer...

Turing discovered that he could easily construct algorithms (encodings on the Turing Tape) that would result in the Turing Machine running forever. If you were a child of the 70's/80's, no doubt you too quickly discovered this BASIC truth...

10 GOTO 10

Unlike you or me, playing with our first programs, Turing put some mathematical rigour into this discovery. In doing so he joined Gödel as a destroyer of worlds (just as with buses, you wait millennia for an incompleteness theorem to come along and then, within a matter of years, two come along at once).

Turing proved that there is an infinitely large set of programs (Turing Machine encodings/algorithmic procedures) for which it is impossible to prove in advance whether they will ever end (halt). In Computing this is called the Halting Problem and it is, in fact, exactly mathematically equivalent to Gödel's incompleteness theorem. Turing knew this, his mathematician peers knew it; put simply...

Computing is incomplete.

Shocked? No, I thought not. It was a long time ago. Before the first computer had even been built. Before programming languages, operating systems, microprocessors... We seem to rub along just fine. Who cares?

The definition of computing lets us happily stick to our profession of "constructing an algorithmic procedure that can be performed by a Turing Machine". Never mind that the Halting Problem tells us there is an innate limit to the complexity of any empirical procedure we can devise. We can sense this by recognising that the ceiling of complexity necessarily limits us to "those algorithms which run in short enough time that we can verify that they halt".
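
If you want to feel the force of it, here is the skeleton of Turing's argument sketched as Groovy-flavoured pseudocode. It assumes a hypothetical oracle halts(p, i) that decides whether program p halts on input i - the whole point being that no such oracle can exist...

// Suppose we had an oracle that could decide, in advance,
// whether any program p halts when run on input i:
//    halts(p, i) -> true if p(i) eventually finishes
// Then we could write this mischievous program...
def paradox = { p ->
    if (halts(p, p)) {
        while (true) { }   // the oracle says p(p) halts... so loop forever
    }
    // the oracle says p(p) loops forever... so halt immediately
}
// Now ask the oracle about paradox(paradox): it halts if and only
// if the oracle says it doesn't. Contradiction - no halts() oracle
// can be constructed, and the Halting Problem is undecidable.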

Computing's Second Catastrophe

Smash my world once and I don't notice, more fool you, smash my world twice and I still don't notice, more fool me...

We might be forgiven by our Mathematical colleagues for not being moved to tears by the Halting Problem. It was all a long time ago, before our discipline had even been born. So how do we reconcile the catastrophic events of the 1960's? Step forward Gregory Chaitin and Andrei Kolmogorov...

By the 1960's computing was really taking off. It was proving quite useful and there was money to be made: the CEO of IBM predicted the world might need at least a hundred computers. The first generation of programming languages was maturing. Things looked great - we were all set for the birth of operating systems and the microprocessor revolution - the forward march to the future was on an unstoppable track...

But those pesky mathematicians started asking difficult questions again. Questions like, "what's the shortest possible program to compute any given problem?", which raises a harder question: "how do we determine how complex any program is?".

Kolmogorov answered this last question by defining a measurement of the complexity of an algorithm based upon its string encoding, and by showing that any minimal representation of a complex problem must have the same entropy as a random representation - if it weren't completely random, there would exist a smaller algorithmic representation able to represent it. This follows from the simple observation that "non-randomness" really means "there is a recipe to generate" something. (You see, I'm not the only one to bang on about entropy in representations.)
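
To make that concrete, here's a made-up Groovy illustration (the strings and lengths are invented for the example)...

// A highly "non-random" 2,000 character string has a short recipe:
regular = "ab" * 1000     // one tiny expression generates all of it
// Its Kolmogorov complexity is roughly the length of the recipe,
// not the length of the string.
//
// A genuinely random 2,000 character string admits no such recipe;
// the shortest program that prints it must contain the string itself:
//    println "k4x9q2...f7"    // ~2,000 characters of literal data
// Its complexity is essentially its own length - maximum entropy.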

Meanwhile, a little like the precocious Gödel a generation earlier, Gregory Chaitin, a kid of twenty-something, wheeled his own clockwork wheelbarrow out and blew a great big hole in the fabric of computing...

By taking a similar approach to Gödel and exploiting the Berry paradox of set theory, Chaitin presented a new incompleteness theorem, which in its mathematical form is pretty hard to understand:

For every formalized theory of arithmetic there is a finite constant c
such that the theory in question cannot prove any particular number to
have Kolmogorov complexity larger than c.

Generally, and somewhat controversially, this is interpreted as: you can only prove something of a given complexity if your axioms have at least the same complexity.

Shocked? No, I thought not.

But you should be. You should be really really shocked. Irrespective of the detailed interpretation, there are direct and immediate corollaries we should care about profoundly:

For a problem of a given complexity, you cannot know if you've discovered
the shortest program to solve it.

And it follows...

You can never know if the language you are using is the most elegant for a given problem

And it follows...

There are no special programming languages: there are only problems for which
they "seem to fit" and equally problems for which "they don't".

You cannot predict beforehand if you're using the "best language". To a fundamental degree, based upon first principles: software is a faith-based discipline.

On that bombshell we shall pause. Next time, we'll consider where that leaves test driven development, and what happens if we reconsider our definition of computability - if we, like our scientific peers, relax Empiricism and admit relativity and probabilistic approximation...


Have a great weekend.

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research

