
NetKernel News Volume 3 Issue 47

December 7th 2012

What's new this week?

Catch up on last week's news here

Repository Updates

No updates this week.

HTTP Log Configuration

Earlier in the week, we had a useful discussion with Keith Treague at Findlaw. He discovered that when he added the RequestLogHandler to the Jetty handler chain it always logged 200 responses, no matter what the underlying HTTP response code actually was.

It turns out that the commented example in the Fulcrum configurations, which worked with Jetty 6, is no longer a valid pattern for Jetty 7. Instead, the working configuration is to place the NetKernelHandler and the RequestLogHandler together in a HandlerCollection.
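For reference, the broad shape of such a Jetty 7 configuration is sketched below. This is illustrative only - the NetKernelHandler class name is a placeholder and the log path is made up; the tested configuration is the one in the forum thread:

```xml
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Set name="handler">
    <New class="org.eclipse.jetty.server.handler.HandlerCollection">
      <Set name="handlers">
        <Array type="org.eclipse.jetty.server.Handler">
          <!-- NetKernel's handler first; class name is a placeholder -->
          <Item><New class="NetKernelHandler"/></Item>
          <!-- the request log as a sibling, rather than wrapping NetKernel -->
          <Item>
            <New class="org.eclipse.jetty.server.handler.RequestLogHandler">
              <Set name="requestLog">
                <New class="org.eclipse.jetty.server.NCSARequestLog">
                  <Arg>logs/yyyy_mm_dd.request.log</Arg>
                </New>
              </Set>
            </New>
          </Item>
        </Array>
      </Set>
    </New>
  </Set>
</Configure>
```

The intent is that the RequestLogHandler observes the response as NetKernel actually completed it, rather than logging its own default 200.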

The details and an example configuration can be found in this forum thread.

Thanks Keith for raising this.

Next Year's Articles

The last couple of weeks have been very busy and so attention was diverted from newsletters. Two weeks ago we were on standby over the Black Friday holiday, while last weekend and earlier this week we held an informal gathering of NetKernel customers in Brussels. Both turned out to be a lot of fun and very successful.

These events gave me the excuse to put aside the newsletter writing - which was a good thing since the stories around ROC Analysis and Design had reached a natural conclusion.

It being the run-in period for the holiday season, I doubt the next few weeks are a good time to start a new series. However, I've taken on board feedback from the weekend and have some ideas for the new year. Here's a list of the broad themes I think we can cover...

  • Architectural Systems Patterns
    • There are a number of recurrent large-scale patterns that apply across many diverse vertical applications. I'm not talking about low-level code - but large scale system engineering. I plan to use diagrams to discuss these and the engineering levers available to build balanced solutions (see below).
  • Tool documentation and examples.
    • It's clear that we need to work through the individual tools and ensure that each has comprehensive documentation covering purpose, usage and examples. One way to force this to happen is to commit publicly to writing about a specific tool in the newsletter each week, then take the result and ship it as part of the library documentation for that tool.
  • Cut-and-pasteable projects.
    • Several people have said they'd like more small, self-contained snippets that can be cut and pasted - to experiment and play with. Again, this is good for me to hear and lends itself to bite-sized chunks.

But of course, please let me know if there are specific topics you'd like to see covered.

On Balance

I've recently found myself using the term "balance" when discussing NK systems. Tony said, "you should explain what you mean cos I don't think it's a common expression in software development"... so here goes...

In some ways it's easier to understand balance by looking at examples of imbalance. For example, you know you're not balanced when:

  • Rates of requests exceed capacity to service the requests.
  • In a thread-per-request system you have many more runnable threads than cores.
  • The number of asynchronous requests is greater than available cores (unless your async requests are I/O bound).
  • Requests to a system of record are greater than its capacity.
  • The latency of a service you depend upon exceeds the latency you promised to deliver from your own service.
  • Maximum configured cache size is greater than available memory.
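The threads-vs-cores bullets can be made quantitative with the classic pool-sizing rule of thumb from Goetz's Java Concurrency in Practice: threads ≈ cores × utilisation × (1 + wait/compute). A minimal sketch, with purely illustrative numbers:

```python
# Rule-of-thumb thread-pool sizing: threads = cores * utilisation *
# (1 + wait_time / compute_time). All numbers below are illustrative
# assumptions, not NetKernel defaults.

def pool_size(cores, target_utilisation, wait_ms, compute_ms):
    """Threads needed to keep `cores` busy at `target_utilisation`
    when each request waits wait_ms (I/O) per compute_ms of CPU."""
    return round(cores * target_utilisation * (1.0 + wait_ms / compute_ms))

# CPU-bound work: one thread per core is the balanced choice.
print(pool_size(cores=8, target_utilisation=1.0, wait_ms=0, compute_ms=10))   # 8

# I/O-bound work (90ms waiting per 10ms computing): the balanced pool is
# much larger - the I/O-bound caveat in the list above.
print(pool_size(cores=8, target_utilisation=1.0, wait_ms=90, compute_ms=10))  # 80
```

The point is that "how many threads" is not a fixed number but a function of the wait/compute ratio - exactly the kind of lever the balance discussion below is about.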

So we all kind of instinctively know what imbalance is. What is balance?

Balance is the art of applying constraint to ensure that a system's operation satisfies a set of core compromises.

Balance is about making a system sit within a bounded comfort zone. It's about ensuring expectations are satisfied and about not being surprised.

Attaining a balance is fundamentally about selecting from a set of (usually orthogonal) engineering levers. Examples include:

  • Latency vs Throughput
  • Memory vs Latency
  • Hardware Cost vs Performance

Tuning is deciding which orthogonal engineering property should be maximised - or rather, which sets of engineering properties should be adjusted together to provide a balanced solution.

The list so far describes physical properties, where balance is bounded by physical limits (devices have finite capability), but any engineering project must also balance the economics. Engineering solutions face strong pressure to be "cost effective".

It's interesting that the pressure for cost-effectiveness in information systems is usually focused on the production platform, since this is the physical system that does the work. But there are other aspects of balance which can have a much more significant effect on the economics of a solution and, as we shall see, feed directly back into decisions about its physical characteristics.

For example, what about these engineering factors?

  • Data consistency vs Simplicity of design
  • Risk vs Liability

These are examples of balanced compromises that can have huge impacts on the cost effectiveness of a solution.

The ideas are a little abstract but here's an example that comes up in every business. You have a set of information - most of that information is long lived - but some aspect of it is highly volatile. It might be price, it might be stock levels, it might be currently queued support requests.

The question is - do you engineer your system to track the volatile state? Or, put another way, is it essential to provide consistency between the view you present to the user and the volatile item, or is approximate consistency sufficient?

It's surprising how often in software design this sort of engineering question gets buried in the code. It's even more interesting when you realise that the answer is not a physical-level software engineering problem at all. It's really one of Risk vs Liability.

That is, what is the exposure of the business when using approximation? Can the liability be made small enough that the approximation allows an elegant balance to be achieved in the engineered system?

Here's a concrete example. Should the price shown on a website strictly reflect the price of the item as set in the system of record by that department's product manager?

  • If the price is wrong for a whole day, that's bad: any price-sensitive customer seeing too high a price may be put off a purchase.
  • If the price is different for only a few minutes, then perhaps a few potential customers may be put off - but the lost opportunity can be calculated and used to decide how many minutes is an acceptable balance.
  • What if the price changes between choosing an item and committing to buy it? If there is no chance the customer will find out, then any reputational damage is manageable and the legal terms of service can state that the price at the time of selection is the offer. If the change is a significant percentage of the original (either up or down), then the customer might well be either annoyed or delighted when told the new price, so it might be worth reconciling the price at this point. This then becomes an opportunity to increase customer satisfaction: a price that has gone down can be heralded as an additional saving, while a price that has gone up can be discounted back to the originally selected price (provided the volatility tracking keeps the number of customers in this distribution tail, and hence the lost revenue, small - again, this assessment feeds into the decision about the balance of the system).
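The "lost opportunity can be calculated" step above can be sketched as a back-of-envelope model. Every name and number here is a hypothetical assumption; the point is only that the staleness window becomes a parameter you can cost:

```python
# Hypothetical lost-opportunity model for a price-staleness window.
# All parameters are assumptions to be replaced with real business data.

def lost_revenue_per_day(price_changes_per_day, staleness_minutes,
                         visitors_per_minute, conversion_loss_per_stale_view,
                         avg_order_value):
    """Expected daily revenue lost to customers who saw a stale price."""
    stale_views = price_changes_per_day * staleness_minutes * visitors_per_minute
    return stale_views * conversion_loss_per_stale_view * avg_order_value

# e.g. 4 price changes/day, a 5-minute window, 20 visitors/min,
# 1% of stale views costing a sale, £40 average order:
print(lost_revenue_per_day(4, 5, 20, 0.01, 40.0))
```

With numbers like these in hand, "how many minutes is acceptable?" becomes a costed trade, not a gut feeling.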

Clearly a system that allows some tolerance in the displayed price offers an acceptable balance of Risk vs Liability and can be considered - leading to a much, much simpler software solution.

Of course there are even tricks to pull within the solution too. For example you would seek to engineer a composite system such that the final view to the user was a composition of the static stable dataset (the stable resource) and the late-bound volatile state (the transient resource). And naturally the composite would then become "locally stable" in the solution within the timescale of the volatility of the transient resource.
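As a minimal sketch of that composition (plain Python, not the NetKernel API; the names and the TTL are assumptions), the stable catalogue entry and the TTL-cached volatile price might be composed like this:

```python
import time

# Minimal sketch - NOT the NetKernel API - of composing a stable resource
# with a late-bound volatile one. The composite is "locally stable": it
# can be cached for the staleness tolerance agreed for the price.

STABLE_CATALOGUE = {"sku-1": {"name": "Widget", "description": "A widget"}}

PRICE_TTL_SECONDS = 300.0   # the agreed staleness tolerance (assumption)
_price_cache = {}           # sku -> (price, fetched_at)

def fetch_price_from_system_of_record(sku):
    """Stand-in for the real (hypothetical) pricing-service call."""
    return 9.99

def current_price(sku, now=None):
    """Return the cached price unless it is older than the tolerance."""
    now = time.time() if now is None else now
    cached = _price_cache.get(sku)
    if cached is None or now - cached[1] > PRICE_TTL_SECONDS:
        _price_cache[sku] = (fetch_price_from_system_of_record(sku), now)
    return _price_cache[sku][0]

def product_view(sku):
    """Compose the stable catalogue entry with the transient price."""
    view = dict(STABLE_CATALOGUE[sku])
    view["price"] = current_price(sku)
    return view

print(product_view("sku-1"))
```

The view is recomputed only when the cached price ages past the tolerance, so within each window the composite behaves as a stable, cacheable resource.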

The point here is that stepping away from the code and thinking about resource state and its consistency makes a direct and dramatic contribution to the cost-effectiveness of an engineered system.

It is exponentially expensive to build and maintain a system that over-constrains data consistency - and, as we saw in the example, over-constraining may even mean a missed marketing opportunity to increase customer satisfaction and maximise the net annual performance of the system (e.g. loyalty).

The ultimate expression of balance is Occam's Razor, which can be paraphrased as:

Keep it simple - but no simpler


Have a great weekend.

Comments

Please feel free to comment on the NetKernel Forum

Follow on Twitter:

@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff

To subscribe for news and alerts

Join the NetKernel Portal to get news, announcements and extra features.

NetKernel will ROC your world

Download now
NetKernel, ROC, Resource Oriented Computing are registered trademarks of 1060 Research


© 2008-2011, 1060 Research Limited