NetKernel News Volume 5 Issue 5 - ROC Value Proposition Whitepaper, ROC Saves Money Comparative Measurement of Software Energy Costs, Asynchronous Request Patterns

NetKernel News Volume 5 Issue 5

March 21st 2014

Catch up on last week's news here, or see full volume index.

Repository Updates

The following updates are available in the NKEE and NKSE repositories

  • demo-addressbook 1.2.1
• Now uses bootstrap instead of the ugly pre-CSS look and feel it's had since 1999.
  • html5-frameworks 1.3.1
    • Includes recent updates to the various HTML5 libraries.
  • module-standard 1.63.1
• Imports in a mapper config now have trailing whitespace trimmed, to prevent automated formatting from breaking mapper imports. Thanks to Richard Smith for reporting this.
  • nkperf 1.5.1
    • Fixed cache test to be more realistic

The following update is available in the NKEE repository

  • cache-ee 1.7.1
    • Fixed corner case where integer rounding of extremely high volumes of very cheap cache items could result in a transient unbounded cache size (triggered by the artificial nkperf cache test, see above)

Please note updates are published in the 5.2.1 repository.

ROC Value Proposition Whitepaper - Enterprise Software Systems at Web-Scale

Several of our partners have suggested that it would be helpful for them to have material available that would help them introduce and make the case for ROC to their customers - material that tells the story and provides the context to establish why a proposal to use NetKernel deserves fair consideration.

Tom Mueck has been absorbing the various stakeholders' perspectives and has started to prepare a series of whitepapers. We plan to share these as they become ready and, with your feedback, intend to build a broad library of content.

For example, we are working on a paper that articulates the green-computing benefits of ROC - as Tony shows below, NetKernel enables energy efficient, self-optimising software that automatically eliminates redundant CPU cycles... but that paper is not quite ready.

Today we've published the first result of Tom's efforts, a paper that makes the case for the ROC value proposition of Web-Scale enterprise software. Please download it, let us know your feedback and by all means share this so that together we can start to show how the ROC revolution offers huge long-term benefit.

It's time to tear down the saw-tooth software cycle and step up to "web-scale".

ROC Saves Money - Comparative Measurement of Software Energy Costs

Tony has done some quantitative measurements of the real energy savings of using ROC. In this article, he outlines his method and the results and provides concrete evidence with reproducible measurements to back it up...

http://durablescope.blogspot.co.uk/2014/03/reducing-power-consumption-with-roc.html

The headline numbers are that, relative to a classical system, NetKernel will save at least 40% of the power. In data center terms, that could equate to tens or hundreds of thousands of dollars in savings per year...

Still wondering why you should consider using ROC? You do the math... and this is just operational savings - see Tom's paper for the long-term view of payback. It really is time to tear down the saw-tooth.

On Asynchronous Request Patterns

It's very simple to parallelize execution by making asynchronous requests in NetKernel. Here's how we would convert a synchronous request to a parallel asynchronous request...

//Synchronous
INKFRequest req=aContext.createRequest("active:foo");
Object rep=aContext.issueRequest(req);

//Asynchronous
INKFRequest req=aContext.createRequest("active:foo");
INKFAsyncRequestHandle handle=aContext.issueAsyncRequest(req);

//Do something else here while we wait for the async response to complete
//La La La...

//OK now we're ready to see if we have a response to our request...
Object rep = handle.join();

It's as simple as just saying "do this async and give me a handle to the request so I can get the result in my own sweet time".

The NetKernel scheduler takes care of all of the underlying thread management for us, so we can delegate away the asynchronous complexity.

This simple pattern is the most common you'll encounter. It is typically used either to get extra work done while waiting for something to happen in parallel, or to do a map-reduce style parallel fan-out - issuing many sub-requests in parallel and then joining them before "reducing" the responses.
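The NKF calls above need a running kernel, but the shape of the fan-out/join is the same as with plain Java futures. Here is a self-contained sketch of the pattern using java.util.concurrent.CompletableFuture, where subRequest() stands in for an asynchronous NetKernel sub-request - the names here are illustrative, not part of the NKF API...

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class FanOutJoin {
    //Simulated sub-request: in NKF this would be
    //aContext.issueAsyncRequest(aContext.createRequest("active:foo"))
    static CompletableFuture<Integer> subRequest(int n) {
        return CompletableFuture.supplyAsync(() -> n * n);
    }

    //Fan out one sub-request per input in parallel, then join and "reduce"
    public static int fanOutAndReduce(List<Integer> inputs) {
        List<CompletableFuture<Integer>> handles = inputs.stream()
                .map(FanOutJoin::subRequest)
                .collect(Collectors.toList());
        //Join each handle (like handle.join() in NKF), then reduce by summing
        return handles.stream().mapToInt(CompletableFuture::join).sum();
    }

    public static void main(String[] args) {
        System.out.println(fanOutAndReduce(List.of(1, 2, 3, 4))); // prints 30
    }
}
```

The key point is that all the sub-requests are in flight before the first join is called - the joins then simply collect the results as they complete.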

Async Listener Pattern

There's also an alternative pattern which we can use if we think things are going to take a while and the current thread could be given back to the kernel to do work for somebody else.

We can use a callback pattern where we give the INKFAsyncRequestHandle a reference to an INKFAsyncRequestListener callback interface, which looks something like this...

//Asynchronous Callback Pattern
INKFRequest req=aContext.createRequest("active:foo");
INKFAsyncRequestHandle handle=aContext.issueAsyncRequest(req);
handle.setListener(new AsyncListener());
aContext.setNoResponse();

Where our AsyncListener would implement INKFAsyncRequestListener and might look like this...

public class AsyncListener implements INKFAsyncRequestListener
{
//Other methods not shown...

    public void receiveResponse(
            INKFResponseReadOnly response,
            INKFRequestContext aContext) throws Exception
    {   //Here we can do something with the response before
        //computing a final response representation for the original NK request.
        Object representation=foo;
        aContext.createResponseFrom(representation);
    }
}

The important thing to note here is that we don't need to block our initial thread waiting on the response to the sub-request. We can call aContext.setNoResponse(), which returns the initial request's thread to the kernel to be scheduled to carry on "doing useful work" for any other current requests in the system.

Eventually, when the async request finally completes, the callback interface is invoked with a thread and (significantly, as we'll see below) a new INKFRequestContext.

At this point you can resume whatever needs to be done, but ultimately you must make sure to issue a response - this response is the one to the original request (remember, we decoupled ourselves by calling setNoResponse()). It's dead simple though, since we simply use the new INKFRequestContext to issue a response - this new context knows that it is responding to the original request.

It's pretty simple stuff - the only thing you have to be aware of is that you have taken on the responsibility to make sure that, at some point, the initial request gets a response set - otherwise the endpoint that originally requested you will wait forever!

So managing asynchronous execution is really dead simple - the only warning is that it's so simple you can throw your brain away and go overboard by "parallelizing everything".

Remember, whether you're doing things sync or async, you won't get any more work done unless you have idle cores available. The best use of async patterns is often when a request depends on some externality (like a complex DB call or a remote service request) - in which case letting the system do other work while you wait is a "good thing".
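To make that concrete, here is a small self-contained Java sketch (no NKF classes; Thread.sleep stands in for the slow externality, and the names are illustrative). While the simulated external call is in flight, the current thread gets other useful work done, and we only join when we actually need the external result...

```java
import java.util.concurrent.CompletableFuture;

public class AsyncWhileWaiting {
    //Simulated slow external call (e.g. a complex DB query or remote service)
    static CompletableFuture<String> slowExternalCall() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(100); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "db-rows";
        });
    }

    public static String process() {
        //Fire the external call and carry on without blocking
        CompletableFuture<String> external = slowExternalCall();
        //...do useful local work here while the external call is in flight...
        String local = "local-result";
        //Join only when we actually need the external response
        return local + "+" + external.join();
    }

    public static void main(String[] args) {
        System.out.println(process()); // prints local-result+db-rows
    }
}
```

Because the local work overlaps the wait, the elapsed time is close to the slower of the two activities rather than their sum - which is exactly the win the async patterns above give you inside NetKernel.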

Non-Blocking "Client-Endpoint"

The examples shown above cover 95% of the cases where you might need to use asynchronous processing. You can see that all of these involve making asynchronous sub-requests and then dealing with the response(s). However, there is another, much rarer async pattern that also comes up.

This is the case of an endpoint which makes non-blocking "client" interactions (often network requests using NIO).

Here's the scenario: We are dealing with a NetKernel request. We make a non-blocking external call and while that call goes out we can return our kernel thread back to do useful work until the NIO call comes back.

This pattern is used extensively in NKP (which is a completely asynchronous non-blocking protocol - which is why it scales extremely well) but it also came up this week in discussion with Tom Geudens who is playing with a new set of non-blocking async HTTP client endpoints (more on which at a future point).

Here's the gist of Tom's requirement...

An endpoint responds to a SOURCE request, makes an external async non-blocking request and, when ready, expects to get a callback from the given technology. In the meantime, we can return the request thread back to the kernel to "do good work" by calling setNoResponse() (just like the example above for the internal async request listener pattern).

public void onSource(INKFRequestContext aContext) throws Exception
{
        CloseableHttpAsyncClient httpclient = HttpAsyncClients.createDefault();
        httpclient.start();
        HttpGet request = new HttpGet("http://foo.com/path/path");
        httpclient.execute(request, new FutureCallbackImpl(aContext, request, httpclient));
        aContext.setNoResponse();
}

In this case the HTTP client technology uses a FutureCallback interface which it will invoke when the NIO HTTP request gets a response...

class FutureCallbackImpl implements FutureCallback<HttpResponse>
{
    //failed() and cancelled() callbacks not shown...
    INKFRequestContext mContext;
    HttpRequest mRequest;
    CloseableHttpAsyncClient mClient;
    
    public FutureCallbackImpl(
            INKFRequestContext aContext,
            HttpRequest aRequest,
            CloseableHttpAsyncClient aClient)
    {   mContext=aContext;
        mRequest=aRequest;
        mClient=aClient;
    }

    public void completed(HttpResponse hResponse)
    {   try
        {   INKFResponse response = mContext.createResponseFrom(getResponseInner(hResponse));
            response.setExpiry(INKFResponse.EXPIRY_ALWAYS);
            issueAsyncResponse(response);
        }
        catch(Exception e)
        {   issueAsyncResponse(mContext.createResponseFrom(e));
        }
    }

    private void issueAsyncResponse(INKFResponse response)
    {   IResponse kResponse=((NKFResponseImpl)response).getKernelResponse();
        ((NKFContextImpl)mContext).handleAsyncResponse(kResponse);
    }
}

At this point we need to do our duty and issue a response so that the original request doesn't wait forever.

Just as with the INKFAsyncRequestListener we need to use the context to issue a response to the original NK request. But there's a big subtlety here - this is not a callback from the kernel after an async request comes back - this is a callback from the external technology and we need to tell the kernel that we have a response for the original request.

Unfortunately all we have is a reference to the original INKFRequestContext - the one upon which we setNoResponse().

How do we use this to tell the kernel we now have a response?

Well, for this special corner case we need to go slightly closer to the metal of the kernel. We can set a response on the (supposedly) complete INKFRequestContext by going down to the underlying NKFContextImpl like this...

//Create a response on the context
INKFResponse response = mContext.createResponseFrom(foo);
//Obtain its underlying IResponse at the kernel level
IResponse kResponse=((NKFResponseImpl)response).getKernelResponse();
//Set this as an async response via the low level NKFContextImpl
((NKFContextImpl)mContext).handleAsyncResponse(kResponse);

Not the most beautiful thing - but still only a couple of lines of code and pretty cool that this scenario is not really any more complex than for internal asynchronous endpoint patterns.

I don't have time now - but the symmetrical case of an async transport endpoint is also very simple and is covered by the NKF API - remind me to tell you that story another day.


Have a great weekend.

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


WiNK
© 2008-2011, 1060 Research Limited