|
NetKernel News Volume 2 Issue 15
February 4th 2011
What's new this week?
- Repository Updates
- Asynchronous Transports - A lesson from the real world
- NKEE, Apposite and Corporate Firewalls
- Preparing for IPv6
- Representations and Evolvable, Stable ROC Solutions
Catch up on last week's news here
Repository Updates
The following update is available in both the NKSE and NKEE repositories...
- http-client 2.2.1
- Updated to use Apache 4.1.x client with built-in NTLM auth.
The following update is available in the NKEE repository...
- nkee-dev-tools 0.18.1
- Control Panel gatekeeper tweak to enhance caching - makes tools even snappier.
Asynchronous Transports - A lesson from the real world
Last week I mentioned that NKEE provides an asynchronous Jetty handler "NetKernelHandlerAsync" and I said...
"You literally just swap out NKSE's NetKernelHandler with this NetKernelHandlerAsync - they're fully equivalent in every way other than their internal operational model."
Which is true. But no sooner do you make a glib public pronouncement than the real world comes and bites a chunk out of your buttock.
I got a ping from Andrew Hallam, a long-time NK user from Sydney, Australia. He was having problems downloading a copy of NKEE - it always seemed to get to 16MB and then stop.
This was very strange and unreproducible.
After some digging around, it turned out that the download always stopped after exactly two minutes. Hmmm. Timeout. A quick check of the portal's HTTPServerConfig.xml showed that it was using the Async Handler with the default timeout - two minutes. Ah ha!
The NKEE distro is pretty small - only 18MB - and comfortably downloads in a few tens of seconds on a typical connection, but Australia is a long way away. We get used to thinking of the internet world as instant and small. Sometimes it's good to be reminded that data takes time to move.
So we set a slightly longer timeout and the problem disappeared.
But the lesson to take away is that when using the Async handler you do need to consider what the longest likely representation transfer is going to be - which includes factoring in a worst-case network transfer rate.
The timeout is required because the HTTP connection is suspended using a continuation, with no blocking thread - to ensure you can't be killed by a permanent denial-of-service attack, the handler will always resume and close the connection after a maximum timeout.
It's worth noting that by using the Async handler you can introduce some level of layer-7 protection against "trickle-DoS" attacks, i.e. the class of attack that holds open a live connection but deliberately limits the bandwidth in order to occupy a server thread. Configuring the async timeout lets you chop such connections after an unreasonable time.
Anyway, if you're exploring an async stack on the front-end and doing large file transfers - bear this in mind.
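To make the timeout concrete, here's a little sketch - not NKEE's NetKernelHandlerAsync itself, just the equivalent suspend-with-timeout idiom expressed with the standard Servlet 3.0 async API - showing where the timeout lives and why it has to cover your worst-case transfer...

// Sketch of the suspend-with-timeout pattern using the standard Servlet 3.0
// async API. NKEE's NetKernelHandlerAsync is built on Jetty internals; this
// only illustrates where the timeout bites and why it must cover the
// worst-case representation transfer.
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class AsyncTimeoutSketch extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Suspend the request - no thread is blocked while the work is done.
        final AsyncContext ctx = req.startAsync();

        // The container resumes and closes the connection if we exceed this,
        // so size it for the largest representation over the slowest link.
        ctx.setTimeout(10 * 60 * 1000L); // e.g. 10 minutes rather than 2

        ctx.start(new Runnable() {
            public void run() {
                try {
                    ctx.getResponse().getWriter().println("...large representation...");
                } catch (IOException e) {
                    // swallowed for the sketch
                } finally {
                    ctx.complete(); // resume and complete the HTTP exchange
                }
            }
        });
    }
}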
NKEE, Apposite and Corporate Firewalls
On a somewhat related matter, Carl Conradie has helped us solve a really annoying scenario we were seeing with NKEE syncing Apposite to the repositories through a corporate proxy. It seems that we were being too clever for our own good with the way we'd set up the https:// NKEE repository.
We have our own NK Certificate Authority and for the NK services we have cut our own SSL certs - NK has a copy of our root CA certificate public key and uses this with its SSL socket factory in Apposite. All this works just fine and seamlessly.
Until you meet a proxy that decides it knows best about trusted root CAs and refuses to proxy https: connections to unknown Certificate Authorities. At which point Apposite is not able to synchronize - which means the small download size of the NKEE distro becomes rather limiting when it can't tap the library of repository packages available online. Worse yet, it can't get system updates to make itself current after install.
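For the curious, the general technique in play here is trust-store pinning against a private root CA. A sketch of that technique - not the actual Apposite client code, and the file names are purely illustrative - looks something like this...

// Sketch only: load a private root CA's public certificate into an in-memory
// trust store and build an SSLSocketFactory that trusts certs signed by it.
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class PrivateRootCATrust {
    static SSLSocketFactory socketFactoryFor(String caCertPath) throws Exception {
        // Read the CA's public certificate (X.509, PEM or DER encoded)
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        InputStream in = new FileInputStream(caCertPath);
        X509Certificate caCert;
        try {
            caCert = (X509Certificate) cf.generateCertificate(in);
        } finally {
            in.close();
        }

        // Place it in an empty in-memory keystore as a trusted entry
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        ks.setCertificateEntry("private-root-ca", caCert);

        // Build an SSLContext whose trust managers trust only that keystore
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx.getSocketFactory();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and cert path, just to show the wiring
        HttpsURLConnection conn =
            (HttpsURLConnection) new URL("https://example.org/").openConnection();
        conn.setSSLSocketFactory(socketFactoryFor("root-ca.pem"));
        System.out.println("HTTP " + conn.getResponseCode());
    }
}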
As I said, we were being too clever for our own good - but we were also relying on the documentation of the Java 5 JDK. If you read the keytool docs you'll find a list of the root CAs that are preconfigured with Java in the jre/lib/security/cacerts keystore.
It's a small list, and since Java 5 first shipped most of the providers have merged or gone away. (Incidentally, we still refer to the Java 5 docs since we ensure NK remains compatible with Java 5 - in enterprises it can take a long time to trust new JVMs in production.)
Well, the moral is don't trust what you read. Go and look for yourself. A quick run of
keytool -list -v -keystore cacerts
reveals that incremental releases of Java have progressively added many new root CAs to the recognised list. The Java 5 keytool docs are way out of date. You can now use SSL certs from a much wider set of suppliers and the SSL socket factory will happily trust them, provided they're in the cacerts list.
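If you'd rather look from inside Java than shell out to keytool, here's a quick sketch that lists every trusted root the runtime will accept - the cacerts location under java.home and the stock "changeit" password are the default JDK conventions...

// Sketch: list the trusted root CA entries in the JVM's bundled cacerts
// keystore. java.home points at the JRE, where the trust store lives under
// lib/security/cacerts; "changeit" is the stock default password.
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class ListTrustedRoots {
    public static void main(String[] args) throws Exception {
        File cacerts = new File(System.getProperty("java.home"),
                                "lib/security/cacerts");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        InputStream in = new FileInputStream(cacerts);
        try {
            ks.load(in, "changeit".toCharArray());
        } finally {
            in.close();
        }
        System.out.println(ks.size() + " trusted roots in " + cacerts);
        for (Enumeration<String> e = ks.aliases(); e.hasMoreElements();) {
            System.out.println("  " + e.nextElement());
        }
    }
}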
The upshot of this tedious story is that we swapped the SSL cert on the NKEE Apposite repo to one with a well-known root (FWIW, GoDaddy). The corporate proxy is happy, and NKEE Apposite now negotiates the very meanest of proxies.
Thanks to Carl for painstakingly helping to work out the solution to this jigsaw.
If you've had trouble getting NKEE to sync inside your corporate proxy - try now. Hopefully we've got a path that works. If it still doesn't, please let us know - we want Apposite to be smooth and easy.
Preparing for IPv6
The end of the world is nigh. The last block of IPv4 addresses has gone and, like Mad Max, we're into the post-apocalypse phase. (Figure left: graph showing the progressive depletion of the IPv4 address pool)
It can only be a matter of days now until you find your resident IT team, ties round their heads, Nobo whiteboard marker drawn on their faces, setting fire to their Herman Miller furniture, ready to fight to the death just to get that one recycled IP that came up at the 'co-lo'. The wild and haunted look in their eyes, as familiar as ever, but joined now by real fear.
By such threads civilization doth hang. But calm yourselves; like the millennium bug, we'll get through it. Our salvation is already upon us. For we hath been given IPv6. And lo, it is good...
With such thoughts racing through my mind, and kicking myself for not following my instinct to put IPv4 addresses on the kids' birth certificates (to my wife: "Yes, I know 74.50.52.185 is a bit of a mouthful, but she'll be grateful when she can auction it to get through college"), I thought it appropriate to do a little IPv6 preparation - at least to validate those pieces of layer-7 that we can assume some responsibility for.
Validating IPv6 Readiness
It's very unlikely that your existing network infrastructure is running IPv6 - not least because a transition period of IPv4 and IPv6 cohabitation has been planned. We will undoubtedly see a period of dual IPv4 and IPv6 servers, with clients progressively migrating to full IPv6 and transitional "6-over-4" tunneling for impedance matching.
You can easily check just what your client-side pathway looks like with this site
As I said, I doubt you have an IPv6 route yet. But we can anticipate that there will be technical and economic pressure for the server side to transition first - which we'll see this summer on World IPv6 Day, when large players switch on parallel IPv6 access to their services.
For the most part, and for us layer-7 ROC architects, this ought not to be anything major. It will require that your server hosting is able to give you an IPv6 network, and that your server has an IPv6 address with a DNS entry, etc.
But the good news is that all modern operating systems come with IPv6 networking enabled out of the box.
I can't tell you how to check this on Windows, but there's a very simple check on *nix. Just type "ifconfig" and you'll see your network interfaces...
eth0      Link encap:Ethernet  HWaddr 4c:ed:de:72:01:1f
          inet addr:192.168.200.150  Bcast:192.168.200.255  Mask:255.255.255.0
          inet6 addr: fe80::4eed:deff:fe72:11f/64 Scope:Link
The tell-tale is the inet6 addr entry. It's showing that you've got IPv6 networking in your kernel.
Without any external IPv6 infrastructure, this value is basically derived from the MAC address of the NIC (see the similarities between HWaddr and inet6 addr?). This in itself is actually a very interesting observation. IPv6 is a big address space, a really, really big address space. It follows, as you see with my eth0 device, that it's feasible for every device to construct its own unique IPv6 address and, with it, go directly onto the public internet.
IPv6 is a world with no need for NAT!
Interesting, and maybe a little bit scary when you think that the consequence of this could be that all devices have to manage their own firewalls. Of course this is only one potential scenario - don't think corporate or personal firewalls will go away. But nevertheless it's this possibility of direct access to the internet that makes IPv6 interesting and full of opportunity.
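Incidentally, if you want the same check as ifconfig from Java itself - which also covers the Windows case I ducked above - a minimal sketch is...

// Sketch: a cross-platform equivalent of the ifconfig check above, listing
// any IPv6 addresses bound to the machine's network interfaces.
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class ListIPv6Addresses {
    public static void main(String[] args) throws Exception {
        Enumeration<NetworkInterface> nics = NetworkInterface.getNetworkInterfaces();
        while (nics.hasMoreElements()) {
            NetworkInterface nic = nics.nextElement();
            Enumeration<InetAddress> addrs = nic.getInetAddresses();
            while (addrs.hasMoreElements()) {
                InetAddress addr = addrs.nextElement();
                if (addr instanceof Inet6Address) {
                    System.out.println(nic.getName() + "  inet6 addr: "
                        + addr.getHostAddress());
                }
            }
        }
    }
}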
Test It Yourself
So how can you test your server-side application stack's readiness if your external network is not IPv6 enabled yet? Fortunately, your OS is probably already running IPv6 and up in layer-7 that's all we need.
On *nix, take a look at your /etc/hosts. You should see a line like this...
::1 ip6-localhost ip6-loopback
This is the local IPv6 hosts entry for "ip6-localhost", the equivalent of IPv4's "localhost". You'll have to ask your Windows admin team what the equivalent is on Windows - I've been on Linux for ten years now, but the last time I knew anything about it, Windows had something called "lmhosts" (I vaguely remember - if anyone has better knowledge let me know and I'll edit this entry with the Win and/or Mac details too).
Anyway, with this piece of knowledge, it follows that you can try NetKernel over IPv6 just by hitting http://ip6-localhost:1060/ ...
It works! NetKernel and its HTTP stack is IPv6 ready!
This oughtn't to be too much of a surprise since Java has been IPv6-ready since version 1.4. But it's still nice to know that it "just works".
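If you'd rather prove it from code than from a browser, here's a minimal sketch - assuming the stock fulcrum on port 1060 and the ip6-localhost hosts entry above...

// Sketch: the same loopback test from Java. Assumes the default NetKernel
// fulcrum on port 1060 and the ip6-localhost entry in /etc/hosts.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class IPv6LoopbackTest {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://ip6-localhost:1060/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP " + conn.getResponseCode() + " from " + url.getHost());
        conn.disconnect();
    }
}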
Looking to the future, you no doubt also want to know the status of the NetKernel Protocol (NKP) since, as will be covered at NKWest2011, it allows the ROC abstraction to seamlessly span the cloud.
I converted all our NKP unit tests to use ip6-localhost addresses and am very happy to report that these all passed first time too. Here's the log of our first "Hello NKP" test...
F 11:57:45 NKPClientEnd~ NKPClient(tcp://ip6-localhost:10603) send request res:/stest/hello.txt 2
F 11:57:45 NKPServerEnd~ NKPServer(tcp://ip6-localhost:10603) received request 0:0:0:0:0:0:0:1 res:/stest/hello.txt 2
F 11:57:45 NKPServerEnd~ Send response 2
F 11:57:45 NKPClientEnd~ Receive response 2
I 11:57:45 TestEngineEn~ Test Complete NKP success 13
Notice the NKPServer-side log is recording the client's address in its full IPv6 form, 0:0:0:0:0:0:0:1 (128-bit, colon-separated hex quads - for various reasons explained here, IPv6 has an alternate shorthand syntax, so the ip6-localhost address of ::1 is the same as the 0:0:0:0:0:0:0:1 in the log).
(Incidentally, if anyone knows what the URL form of a raw IPv6 address is then let me know - if you try this in your browser, http://0:0:0:0:0:0:0:1:1060/, it really won't like the colons. There must be some escaping?)
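For reference, RFC 2732 defines the form: a raw IPv6 literal in a URL goes inside square brackets so its colons aren't mistaken for the port separator - e.g. http://[::1]:1060/ - and Java's URI class parses that form directly, as this tiny sketch shows...

// Per RFC 2732, a raw IPv6 literal in a URL is wrapped in square brackets so
// its colons are not confused with the port separator.
import java.net.URI;

public class IPv6UrlForm {
    public static void main(String[] args) {
        URI u = URI.create("http://[::1]:1060/");
        System.out.println("host=" + u.getHost() + " port=" + u.getPort());
    }
}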
So put down the Nobo, unknot the tie-bandana, douse the Herman Miller. Everything is going to be all right.
Representations and Evolvable, Stable ROC Solutions
When I was writing last week's article on resource stability, I knew I was using an implicit assumption. It's actually a topic I have planned to cover in depth for some time. I can summarise the long-term experience behind it in one very brief sentence...
ROC solutions maximise their evolvability and stability when you adopt general data models for your representations.
This is a potentially huge topic and needs a series of articles to discuss. It is something I plan to go into at the conference.
The problem with this subject is that it is probably the area that is most contrary to the approach taught by the classical Object-Oriented software perspective.
The other problem is that it has no hard boundaries. You can use classical domain-specific value objects as your representations if you wish.
There are no hard boundaries because one of the cornerstones of ROC is that any resource may have any number of representational forms. Information is independent of type.
So what I want to discuss is not an absolute. It's a perspective born of experience.
We observe that general data structures - ArrayLists, XML, JSON and, especially if you look at NK's own tooling, HDS - seem to provide a good balance of stability, extensibility and "transreptability".
I actually have a pretty good sense of why this happens to be the case - but I've got to sit down and do a proper job of setting out the requirements and the nature of the boundary between ROC's logical and physical domains, and ultimately connecting it with the discussion of resources as sets, which I introduced in the language runtime articles late last year.
So if you don't mind - I'll hold off diving into this today. But be aware that doing justice to this topic has been on my mind for the best part of five years.
Sorry for the teaser. But if this meant anything to you, then you might want to consider general data structures in your solutions. It will pay off over the long term.
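To give just a flavour of what "general" means here - a hand-waving Java sketch, emphatically not NK's HDS API - consider the same order resource held as a schema-light nested structure...

// Hand-waving sketch (not NK's HDS API): an "order" representation held as a
// general nested structure. A later endpoint can add fields without breaking
// consumers that walk the structure generically.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GeneralRepresentationSketch {
    public static void main(String[] args) {
        Map<String, Object> order = new HashMap<String, Object>();
        order.put("id", "A-1001");
        order.put("customer", "ACME");

        List<Map<String, Object>> lines = new ArrayList<Map<String, Object>>();
        Map<String, Object> line = new HashMap<String, Object>();
        line.put("sku", "widget-9");
        line.put("qty", 3);
        lines.add(line);
        order.put("lines", lines);

        // Version 2 adds a field; consumers that walk the structure
        // generically keep working without recompilation.
        order.put("currency", "USD");

        System.out.println(order);
    }
}

The point is simply that a representation which grows a field doesn't force every consumer's compiled type to change - the fuller argument will have to wait for the conference.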
NetKernel West 2011
What? You've not had enough of the subtle plugs all the way through the articles? Need another reminder?
The conference is going to be fantastic. Find out all about it here...
http://www.1060research.com/conference/NKWest2011/
Have a great weekend.
Comments
Please feel free to comment on the NetKernel Forum
Follow on Twitter:
@pjr1060 for day-to-day NK/ROC updates
@netkernel for announcements
@tab1060 for the hard-core stuff
To subscribe for news and alerts
Join the NetKernel Portal to get news, announcements and extra features.