
NetKernel News Volume 3 Issue 12

February 3rd 2012

What's new this week?

Catch up on last week's news here

Repository Updates

The following updates are available in the NKSE and NKEE repositories...

  • demo-golden-thread 1.2.1
    • Added import of xml-core so that first request doesn't go up superstack.
  • http-client 2.8.1
    • Fixed an NPE if entity content is not set by remote server. Thanks to Nicolas Spilman for discovering this.
  • lang-trl 1.4.1
    • Rewritten with optimisations and new asynchronous request support (see below)
  • nkse-doc-content 1.41.1
    • Added section covering overlay and how it differs from (or, really, is the same as) . Thanks to Tanmay Sinha for raising this.
  • nkse-visualizer 1.16.1
    • Fixed a potential NPE when examining request details of a request originating over NKP.

The following update is available in the NKEE repository...

  • nkp-1.10.1.nkp.jar
    • Fixed a potential ClassCastException if a client request had a PRIORITY header.

*NEW* active:trl goes async

The active:trl runtime has been rewritten. An update to lang-trl is in the repositories.

The first change is a more efficient pattern of internal string substitution: substitutions are now made in a single pass into a StringBuffer, rather than by repeated string copying.
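A minimal sketch of this kind of single-pass substitution in plain Java (the ${name} token syntax and the helper names here are illustrative, not TRL's actual implementation):

```java
import java.util.Map;

public class SubstitutionSketch {
    // Single-pass substitution into a StringBuilder: the template is scanned
    // once, literal runs are appended directly, and each token is resolved as
    // it is encountered. This avoids the intermediate String copies that
    // repeated String.replace() calls would create.
    static String substitute(String template, Map<String, String> values) {
        StringBuilder out = new StringBuilder(template.length());
        int i = 0;
        while (i < template.length()) {
            int start = template.indexOf("${", i);
            if (start < 0) {                       // no more tokens
                out.append(template, i, template.length());
                break;
            }
            int end = template.indexOf('}', start);
            if (end < 0) {                         // unterminated token: copy verbatim
                out.append(template, i, template.length());
                break;
            }
            out.append(template, i, start);        // literal run before the token
            String key = template.substring(start + 2, end);
            out.append(values.getOrDefault(key, ""));
            i = end + 1;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(substitute("Hello ${name}!", Map.of("name", "World")));
    }
}
```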

The bigger change is that it now supports a $a{...} flag, which causes the embedded request to be issued asynchronously. This means you can fan out any long-lived requests asynchronously while the main parts of the template carry on being built.

As with a regular request, the result is recursively evaluated, so further nested requests can be made. Obviously the initial async request has to return before its recursive requests are issued; causality is not violated.
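The fan-out pattern can be sketched in plain Java (this is an analogy using CompletableFuture, not NetKernel's actual async engine or TRL's implementation): long-lived sub-requests are started up front, the rest of the template is assembled in the meantime, and each result is joined in only at the point the template needs it.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncFanOutSketch {
    // Stand-in for a long-lived embedded request (e.g. a slow service call).
    static String slowRequest(String id) {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "<result id='" + id + "'/>";
    }

    static String build() {
        // Fan out the long-lived requests asynchronously up front...
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> slowRequest("a"));
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> slowRequest("b"));

        // ...while the main parts of the template carry on being built.
        StringBuilder out = new StringBuilder("<doc><static>built while requests run</static>");

        // Join each result only at the point the template needs it.
        return out.append(a.join()).append(b.join()).append("</doc>").toString();
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```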

The internal async engine is quite simple, so you can expect us to review it and add the same capability to the XRL runtime.

Tom Latest

Here's the latest from Tom. Please also don't forget: it would be really great if you could lend Tom some time to read and review the current draft of the Practical NetKernel Book.


Coming from Egypt in my last entry, we tour ancient Rome today. You can find the entry at:

If you've missed the previous two entries:

The challenges in those two are still open, so get your entry in soon!

As always, ideas, remarks and entries for the challenges are most welcome at practical (dot) netkernel (at) gmail (dot) com

Threading and Asynchronous Clients

I recently returned from an extended trip to the US - hence the news downtime and today's hasty efforts (next week I've got a good one teed up - I promise!).

I had the pleasure to spend a day with Gary Sole at Findlaw.com. Amongst many interesting discussions, we covered an important area that needs to be shared more broadly. Gary has a large set of NK-based services which provides a load-balanced caching-normalisation layer in front of a primary corporate database server. As is good practice he has a throttle after his transports which ensures that the load into the system is always managed.

Internally, he had the default handful of kernel threads picking off scheduled requests. Again, this is standard configuration: you can't do more work than you have CPUs, so you only need a relatively small thread pool in the kernel to get optimum throughput.

But here's the problem he was seeing. The out-of-the-box settings assume NK will be required to do some work - they're tuned to optimize CPU processing in a loaded system. But it turns out that some of the SQL queries to the DB can be disproportionately slow. So while the system generally handles a steady, continuous throughput, sometimes you get a long-lived request on the SQL accessors. Unfortunately, due to the way JDBC is written, the open DB connection blocks the requesting thread.

Mostly this is OK - you just have one less thread to do the regular work with. But sometimes reality hits and you get several long-lived requests all at once. Too many, and you have no threads free to pull new requests off the throttle. Although the system will eventually start moving again, any requests waiting at the throttle will stall - even though the CPU, and NK itself, is doing nothing at all.

Now, if you are a careful organisation, it pays to have probe requests monitoring the liveness of your services. Those probe requests expect to see the service, but in this regime they are liable to face a potentially long wait - and if it's too long, maybe they panic and start raising alarms? This is what would periodically happen to Gary.

The problem here is that the outbound client request (to the DB, but it could be to any system of record or service) causes the request thread to block. In order to carry on handling the regular traffic and tolerate these long-lived blocked threads too, you have to increase the kernel thread pool. Ordinarily, if you're actually computing state in an ROC system, this is a non-optimal thing to do. But the CPU in this case is completely idle, so we'll take the modest extra memory footprint in order to retain the quality of service.

The rule of thumb: set the kernel thread pool to be at least as large as the maximum number of potentially long-lived requests, add some more for the CPU cores, then for engineering expediency double the number you came up with (CPU use is not our optimising criterion, so this is not an issue). Meanwhile, and counter-intuitively, you also need to loosen the front-end throttle. Make sure it admits at least as many requests as you might see under the peak load of long-lived requests, then make it wider still so that the regular steady-state requests can continue to get in. A rule of thumb is to guess your worst-case long-lived peak load and make the throttle 1.5x as wide as that.
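As a worked example of these rules of thumb (the numbers - 20 worst-case long-lived requests, 4 cores - are hypothetical, and the helper names are mine, not a NetKernel API):

```java
public class SizingExample {
    // Kernel thread pool: long-lived requests + CPU cores, then doubled
    // for engineering expediency. CPU use is not the optimising criterion.
    static int kernelThreads(int maxLongLivedRequests, int cpuCores) {
        return (maxLongLivedRequests + cpuCores) * 2;
    }

    // Front-end throttle: 1.5x the worst-case long-lived peak, so that
    // steady-state requests can still get in alongside the slow ones.
    static int throttleWidth(int worstCaseLongLivedPeak) {
        return (int) Math.ceil(worstCaseLongLivedPeak * 1.5);
    }

    public static void main(String[] args) {
        System.out.println("kernel threads: " + kernelThreads(20, 4)); // (20+4)*2 = 48
        System.out.println("throttle width: " + throttleWidth(20));    // 20*1.5  = 30
    }
}
```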

Non-Blocking IO

This scenario would not occur if the underlying client implementations were designed to be asynchronous. JDBC drivers are blocking and, other than lobbying your DB vendor for a more modern architecture, there's not much you can do about that. But in fact most Java client libraries block on IO - this includes the Apache HTTP client, which is used by the active:httpXXXX accessors. [Although Jetty now includes an http-client library using underlying NIO sockets, which would be a good candidate for a new async set of REST client tools (watch this space).]
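Until client libraries go non-blocking, one pragmatic pattern (an illustrative sketch in plain Java, not NetKernel's internals) is to confine blocking calls to a dedicated executor sized for the worst-case long-lived load, so the main worker threads are never tied up:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingClientOffload {
    // Dedicated pool sized for the worst-case number of concurrent
    // long-lived blocking calls (e.g. slow JDBC queries).
    static final ExecutorService BLOCKING_IO_POOL = Executors.newFixedThreadPool(16);

    // Simulated blocking client call (stands in for a slow JDBC query).
    static String slowQuery(String sql) {
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "rows-for:" + sql;
    }

    // Issue the blocking call on the dedicated pool; the caller's thread is
    // freed immediately and can go back to handling regular work.
    static CompletableFuture<String> queryAsync(String sql) {
        return CompletableFuture.supplyAsync(() -> slowQuery(sql), BLOCKING_IO_POOL);
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = queryAsync("SELECT 1");
        // ...the caller carries on with other work here...
        System.out.println(f.join());
        BLOCKING_IO_POOL.shutdown();
    }
}
```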

We anticipated this limitation, and its implication of forcing artificial physical-level constraints onto the logical ROC model, when we developed the NetKernel Protocol (NKP). (You'll recall NKP allows the ROC address space to seamlessly span the underlying physical NK instances - you can issue (and receive) requests to a space and not worry that it's on another server.)

NKP is a completely asynchronous architecture. The transport side uses NIO to monitor the network socket(s); it receives the inbound request and issues it asynchronously into the ROC domain - no thread ever blocks. Meanwhile, the client side issues a request over to the server side; again this is non-blocking, and the requesting thread is freed to go back into the ROC domain to carry on working. Only when the wire-level response comes in is it issued back into the ROC domain to reconnect with the initiating request.

Both sides of NKP are asynchronous and both couple to the logical ROC level - which means it's impossible to get IO bottlenecks when using NKP. And this also holds for superstack requests coming back from the requested server side - NKP is symmetric and non-blocking both ways around.

In the future we can only hope that the designers of client libraries will understand the needs of asynchronous architectures and use non-blocking designs to couple to the network protocol.

Better Jokes

My brother thought the cheese jokes last time were pretty poor. He said this was better...

Why is a broken drum kit a great gift? It's unbeatable.

My family live up in Yorkshire, so I prefer this...

I met a dyslexic Yorkshireman the other day. He was wearing a cat flap.


Have a great weekend...

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


WiNK
© 2008-2011, 1060 Research Limited