Operations and other mysteries

Andrew Cowie is a long-time Linux engineer and Open Source advocate, repentant Java developer, Haskell aficionado, and GNOME hacker!

Professionally, Andrew has consulted in IT operations and business leadership, helping people remove the bottlenecks in their processes so they can run their technology more effectively.

He is currently Head of Engineering at Anchor Systems, working to develop the next generation of utility computing infrastructure and tooling.

Contact...

Twitter @afcowie

Google Plus +Andrew Cowie

Email 0x5CB48AEA

RSS Feed /andrew

Inaugurating the Haskell Sessions

There’s an interesting spectrum of events in the technical space. Conferences are the mainstay obviously; usually very intense and high-calibre thanks to the hard work of papers committees and of course the presenters themselves. You become invigorated hearing the experiences and results of other people, sharing ideas in the hallway, and of course the opportunity to vehemently explain why vi is better than emacs over drinks in the bar later is essential to the progress of science.

For a given community, though, conferences are relatively infrequent; often only once a year for a given country (linux.conf.au, Australia’s annual Linux conference, say) and sometimes only once a year globally (a notable example in computing is ICFP, the international functional programming conference, with numerous co-located symposiums and events taking advantage of the fact that it’s the one event everyone turns up at).

More locally, those in major cities are able to organize monthly meetups, networking events, user groups, and the like. Which are fun; lovely to see friends and continue to build relationships with people you’d otherwise only see once a year.

Finally there are hackfests, often on the order of a weekend in duration. They tend to draw people in from a wide area, and sometimes are in an unusual milieu; Peter Miller’s CodeCon camping and hacking weekends are infamous in the Sydney Linux community; rent a small quiet generator, find a state forest, set up tents and hack on whatever code you want to for a few days. Blissful.

The local monthly events are the most common, though. Typically two or three people offer to give presentations to an audience of 30-50 people. And while hearing talks on a range of topics is invaluable, having so many smart people sitting passively in the room seems a lost opportunity.

For a while now I’ve been musing whether perhaps there is something between meetups and hackfests. Wouldn’t it be cool to get a bunch of people together, put a problem on the board, and then for an hour go around the room debating whether the problem is even the right question to be asking, and what different approaches might be taken to tackle it? Something short, relatively focused, and pragmatic; rather than being a presentation of results, a consideration of approaches. If we do it in a bit of a rotation, each time with one person tasked with framing the question, then over time participants each get the benefit of bringing the collective firepower of the group to bear on one of the problems they’re working on.

Needs a name. Seminar? No, that’s what university departments do. Symposium? Too grand. Something more informal, impromptu, but organized. You know, like a jazz jam session. Ah, there we go: gonna call these sessions.

It might be nice to complement the monthly functional programming meetup (fp-syd) that happens in Sydney with something a little more interactive and Haskell-focused. And as I’m looking to improve the depth of experience with Haskell in the Engineering group at Anchor, this seemed like a nice fit. So we’re holding the first of the Haskell Sessions tomorrow at 2pm, at Anchor’s office in Sydney.

Here’s one to start us off:

Industrial code has to interact with the outside world, often relying on external dependencies such as databases, network services, or even filesystem operations. We’re used to being able to separate pure code from code with side-effects, but what abstractions can we use to isolate the dependent code from the rest of the program logic?

I know we’re not the first ones to blunder into this; I’ve heard plenty of people discussing it. So I’m going to hand around a single page with the type signatures of the functions involved at various points, freshly sharpen some whiteboard markers, and we’ll see what ideas come to light!
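
To seed the whiteboard, here is one strawman (not a conclusion, and the names KeyValueStore and rename are invented purely for illustration): hide the external dependency behind a typeclass, so the program logic is written against an interface and only the instances at the edges touch IO.

{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.State (State, gets, modify)
import qualified Data.Map as Map

-- The external dependency, abstracted behind an interface.
class Monad m => KeyValueStore m where
    lookupValue :: String -> m (Maybe String)
    storeValue  :: String -> String -> m ()

-- Program logic is written against the interface and never mentions IO.
rename :: KeyValueStore m => String -> String -> m Bool
rename old new = do
    v <- lookupValue old
    case v of
        Nothing -> return False
        Just x  -> storeValue new x >> return True

-- A pure, in-memory instance, handy for tests.
instance KeyValueStore (State (Map.Map String String)) where
    lookupValue k   = gets (Map.lookup k)
    storeValue  k v = modify (Map.insert k v)

-- The production instance would be an IO-backed newtype that talks to the
-- real database; only that instance performs side-effects.

Explicit records of functions and free monads interpreted at the edge are the other usual suspects to weigh against this.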

If you’re interested in coming, drop me a line.

AfC

Strong eventual consistency

Most people will have seen the “Call Me Maybe” series of blog posts (so named for the song by Carly Rae Jepsen) about data loss in the face of network partition. Midway through the last post in the series is what is almost an off-the-cuff comment, but I think it’s everything:

“Consistency is a property of your data, not of your nodes.”

We tend to get overwhelmed with replication configurations, high-availability solutions, sharding strategies, and worrying about how a given database will react under various failure modes.

And yet the essential truth is that we’re so busy worrying about what’s stored on disk that we forget we don’t care about the consistency of what’s on disk. We need to care about the consistency of our data. It’s easy for a misbehaving program to write garbage, but not to worry: we’re absolutely certain that garbage is consistently replicated across the cluster. Yeah, well done there.

So the much bigger challenge in high-availability distributed systems is making sure we have sane rules for propagating changes so that we can have a safe view of our data.


About 10 years ago I was working with a Java based object-oriented database (which is a grandiose name for what was as much a disk-backed datastore as anything else, but if you’re morbidly curious about what sort of API such a beast would have, you can read about db4o in a series of posts I wrote about it). It was surprisingly easy to use, and came along at a time when I was prepared to do just about anything to escape the object-relational mapping hell.

db4o got significant adoption in embedded devices, where zero administration is a necessity and where developers don’t want to deal with the machinery of a full-scale RDBMS just to store e.g. configuration parameters. But surprise: it wasn’t long before users started asking for replication features. Now, usually when you hear that term you think of master/slave replication done at the database engine level in a high-availability setup. In this case, however, they had disconnected devices re-establishing connectivity to enterprise datastores, and because of that you had to cope with significant conflicts when it came time to synchronize.

Because the data model was articulated in terms of Java code (to a naive first approximation, you were just storing Java objects), the data model lived in the same place as the application code, domain layer, and validation logic. This meant that when it came time to cope with those conflicts, the natural place to do that was in the same Java code. This was interesting, because for just about every other database engine out there, data is opaque. Oh, sure, RDBMSes have types (though the fact that there are people who think VARCHAR(256) actually tells you anything useful remains a source of wonder; alas, I digress), but if you have a high-availability configuration and you’ve allowed concurrent activity during a network partition, then you have to deal with diverged replicas and thus have to merge them. The database doesn’t know what to do; how could it? No: consistency is a property of your data, not the datastore; the rules to decide how to synchronize are a business decision, so where better to put them than in the business logic?

Peter Miller suggests the example of booking flights: multiple passengers can end up allocated the same seat on an oversold flight, but the decision about who gets which seat happens at check-in and conflict resolution is a business one made by the airline staff, not the database.
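
A toy rendering of that idea in code, with entirely made-up types: the datastore simply keeps both diverged records, and a business-level rule decides who keeps the seat.

import Data.Time (UTCTime)

-- Two diverged replicas each recorded a booking for the same seat; the
-- datastore keeps both, and a business rule decides the outcome.
data Booking = Booking
    { passenger :: String
    , seat      :: String
    , bookedAt  :: UTCTime
    } deriving (Show, Eq)

data Resolution
    = Keep Booking
    | Reseat Booking        -- staff find this passenger another seat
    deriving (Show, Eq)

-- Earliest booking wins; the later one is re-seated at check-in. The point
-- is that this rule lives in application code, not in database machinery.
resolve :: Booking -> Booking -> (Resolution, Resolution)
resolve a b
    | bookedAt a <= bookedAt b = (Keep a, Reseat b)
    | otherwise                = (Keep b, Reseat a)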


Throughout the Jepsen posts, you’ll see occasional mention of “CRDTs” as an alternative to the problems of attempting to achieve simultaneous write safety in a distributed system. Finding out just what a CRDT is took a bit more doing than I would have expected; hence wanting to write this post.

Convergent and Commutative Replicated Data Types

It’s easy to have Consistency when you impose synchronous access to your data. But the locks needed to give that property don’t scale to distributed systems; you need to have data that can cope with delay. The idea of self-healing systems has been around for a while, but there hasn’t been much formal study of what data types meet these requirements. If you’re at all interested, I’d encourage you to have a read of “A comprehensive study of Convergent and Commutative Replicated Data Types” by Shapiro, Preguiça, Baquero, and Zawirski.
http://hal.inria.fr/docs/00/55/55/88/PDF/techreport.pdf

They use set notation and a form of pseudocode to describe the different data types, which makes the read a bit more serious than it needs to be, but having had my head buried in this paper for a few days I can say the effort has paid off. They articulate the conditions under which replicas will converge: for a state-based system, the requirement is that the datatype form a join semilattice under its merge function; for an operation-based one (aka the command pattern to us programmer types), the requirement is that manipulations of the datatype be commutative. [They also show the two formulations are equivalent, which is handy.]

Here’s a schematic illustration of a state-based convergent replicated data type, from the paper:

state-based CRDT

The idea being that if you have a merge function, then it doesn’t matter where a state change is made; it will eventually make its way to all replicas.
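
To make that concrete, here is a hedged sketch (with invented names, not code from the paper) of the simplest state-based example, a grow-only counter: the state is a vector of per-replica counts, and merge is the element-wise maximum.

import qualified Data.Map.Strict as Map

-- Each replica increments only its own slot; the counter's value is the
-- sum of all slots.
type ReplicaId = String

newtype GCounter = GCounter (Map.Map ReplicaId Int)
    deriving (Show, Eq)

increment :: ReplicaId -> GCounter -> GCounter
increment r (GCounter m) = GCounter (Map.insertWith (+) r 1 m)

value :: GCounter -> Int
value (GCounter m) = sum (Map.elems m)

-- The merge function is the join on the lattice of vectors: element-wise
-- maximum. It is commutative, associative, and idempotent, so it doesn't
-- matter in what order (or how often) replicas exchange state.
merge :: GCounter -> GCounter -> GCounter
merge (GCounter a) (GCounter b) = GCounter (Map.unionWith max a b)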

Which raises the topic of eventual consistency. Anyone who has worked with Amazon S3 has discovered (the hard way, inevitably) that mutating an existing value has wildly undefined behaviour as to when other readers will see that change. CRDTs, on the other hand, exhibit “strong eventual consistency” (or perhaps better, “strong eventual convergence”, as Murat Demirbas put it in his analysis of the topic), whereby the propagation behaviour is well defined.

The surface area you can use one of these data types on is limited. Because access to the data type is not synchronous, and no consensus protocol is used to maintain the appearance of a single entity, you cannot, by definition, enforce a global invariant. So you can track all the additions and subtractions to an integer (summing the like and dislike clicks on a page, for example); addition commutes, and eventually all the operations will end up being applied to all the replicas. What you can’t do is something like enforce that the value never goes below zero (an account balance, say), because two machines with the value at 1 could simultaneously apply a -1 operation, breaking the invariant once those operations propagate. If this seems a bit hypothetical, consider the well-documented shopping cart problem encountered by a certain major global online bookseller: delete a book from your cart, and sure enough, five minutes later it’s back again. A classic case of the failure mode encountered by distributed key-value stores.
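
A two-line illustration of why the account-balance invariant fails even though the arithmetic itself commutes:

-- Operation-based updates to a counter commute, so delivery order and
-- interleaving don't matter...
applyOps :: [Int] -> Int -> Int
applyOps ops balance = balance + sum ops

-- ...but a "never below zero" rule can't be checked locally. Two replicas
-- that each see a balance of 1 will both accept a -1; once both operations
-- propagate, every replica converges on -1 and the invariant is gone.
broken :: Int
broken = applyOps [-1, -1] 1    -- evaluates to -1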

At first you’d think that this limitation would seriously cramp your style, or that there wouldn’t be any real-world data types that meet these requirements, but it turns out there are. The significant contribution of the paper is that they come up with a formal definition of what a CRDT needs to look like, then explore around a bit and show a number of different datatypes that do meet the requirements.

The paper also includes an impressive reference list & discussion of prior art in the space, so it’s worth a read. There’s also “Conflict-free Replicated Data Types” by the same authors which formalizes SEC.
http://pagesperso-systeme.lip6.fr/Marc.Shapiro/papers/CRDTs_SSS-2011.pdf


Back to the effect of network partitions on data safety:

What about Ceph?

Good question.

What I would be interested in now is how Ceph’s various inter-related pieces hold up in the face of the sort of aggressive network partition testing conducted in the Jepsen survey. A recent blog article about how the Ceph monitor services have re-implemented their use of Paxos struck me as describing something extraordinarily complicated. “One Paxos to rule them all”? Oh dear.

I’m doing a back-of-the-envelope examination, but I think I already know the answer: you’re not going to get a write acknowledged until it is durably stored — which is Consistency. Ceph is a complex system, and parts of it can be offline while others continue to provide service. So you’d have to break it down to the provision of a single piece of mutable data before you could study the Availability of the system properly. I’d love to find someone who would like us to do a real analysis using the Jepsen techniques; it would be interesting to see.

But this all reminds us why we’re interested in CRDTs in the first place: systems where you can build synchronous communication (or an external appearance thereof, courtesy of consensus protocols used internally) to achieve Consistency are in essence limited to highly controlled clusters in an individual data center. Most real-world systems involve components distributed across geographic, temporal, and logical distances, and that means you must take into account the limitations of the speed of information propagation. While most people immediately think of the light-speed problem, it applies just as much to any distributed environment; and in any real-world information system we need to serve clients concurrently, and that means the technique of using a CRDT where possible might very well be worth the effort.

AfC

We all wish we knew what we were doing on Google Plus

Public Service Announcement

I have a number of colleagues who are die-hard Facebook users but who, due to relentless assimilation by the evangelizing hegemonistic swarm that is the Big G, have been forced to try Google+ for the first time. I’ve noticed them all struggling with similar incongruities.

Posts are not chat channels

The biggest difference is controlling distribution; for instance, the following scenario is common:

George Jones shared a photo of their “business” trip with all his friends!

Andrew Cowie writes a comment praising the beach and sunset in said photo.

George Jones replies “Hey, yeah, it’s great. So I heard you were in Europe last week?”

What George doesn’t seem to realize is that he just asked that question not of me but of the thirty people he shared the original post with. Which is probably not what he had in mind. I’ve run into this with my parents a fair bit; Dad keeps commenting personally on my public posts. Not sure he quite realizes several thousand people will see his remark :)

Circles aren’t as useful as they seem

Which brings me to posting publicly vs sharing with a given circle or circles. Most of the people I know gave up on circles and are just publishing most things they write as “public” — which makes Google+ posts a long-hand version of Twitter. I certainly am followed by tons of people who aren’t in my circles, so they only see my posts if I hit “public”, which is annoying: I don’t really want to bombard my family with my professional and technology posts. But there’s no “public except this circle” visibility setting, so if I want a wider audience for my general posts, I’m sorta stuck with it. This leads to a much lower signal-to-noise ratio for my friends (the people I care about the most!) for the dubious benefit of writing to people I don’t know, and also leads to the aforementioned friends and family thinking they have to comment personally on such posts.

Posts are not really a communication channel

Using Hangouts for casual 1:1 chat is much easier than trying to conduct chat in the comments of a formal post. Someone commenting on a post does raise it to the top of your stream, but when that happens it’s not obvious that a comment on that post is actually the continuation of a personal discussion; all you see is “The post about the Muppets has a new comment!”. Yeah, I bet.

Meanwhile, after years of being a disaster zone, Google has finally merged GTalk, Google Video, the former Google Hangouts, the in-browser Chat sidebar, Gmail chat, the Android G+ app messenger, and lord knows what else under the banner “Hangouts”. So it’s unified now, which is a big advance, and at last you can rather seamlessly and in a device independent way switch between chat and video. This is very awesome.¹

Name prefixing considered useful

If you are going to reply to someone in a comment stream on a (public or otherwise) post, you might consider prefacing the comment with the person’s G+ username; that way a) they’ll [likely] get a notification and b) it’s obvious you’re speaking to that person and not to everyone.

“Hey +Andrew Cowie, I’m glad you like the picture. Heard you passed through Europe last week. Pity we didn’t quite connect. Catch you next trip!”

Or so.

Build it and they will [be forced to] come

Google Plus has been a hodge-podge since the beginning, but it’s also evident that they’re working really hard to improve the integration between services (there’s an interesting read over at The Verge about “cleaning up the mess”). I don’t want to seem too enthusiastic about it, because frankly it’s absurd that they didn’t have this wired tight before they launched in the first place. For me the fact that Hangouts are now an integrated messaging system is a watershed; I can only hope this model of cross-functional team collaboration helps Google improve other areas of their services so desperately in need of some QA.

No say me too

Last thought for people new to Google+: it’s really quite unnecessary to post a comment that says “Thanks”, “Me too”, or “I agree”. Not sure why so many people do; that’s what the “+1” button is for. You’d think people would get that, seeing as how it works identically to the “Like” button on Facebook (via +Calum Benson).

AfC

Afterword

  1. I should make it clear that I’m incredibly frustrated that Google has killed off GTalk, or, more to the point, that the new implementation of Hangouts is both proprietary (not using the XMPP open standard like GTalk did) and closed (they aren’t supporting external clients or server-to-server federation; it’s nice that I get a bling notification on my phone using their app, but on my desktop I have a really good presence framework [yeay, telepathy] which is now completely unable to integrate with Google’s services). This kind of closed behaviour represents one of the things that is unacceptable about Facebook, and is a massive step backward on the part of Google. I like that Hangouts (finally) actually work, and I’ll use them for a while, but we will happily use Video-over-Jabber within our company and with anyone who is (or whose employer is) competent enough to run their own federated XMPP server. The fact that Google just dropped GTalk/XMPP without telling anyone is just another example of their disregard for their users, like terminating Reader was. So no surprise, but every incentive to find alternative and better services. Google should think about how well alienating power users has worked out for Microsoft.

http-streams 0.5.0 released

I’ve done some internal work on my http-streams package. Quite a number of bug fixes, which I’m pleased about, but two significant qualitative improvements as well.

First, we have rewritten the “chunked” transfer-encoding logic. The existing code would accept chunks from the server and feed them, as received, up to the user. The problem with this is that the server is the one deciding the chunk size, and that means you can end up being handed multi-megabyte ByteStrings. Not exactly streaming I/O. So I’ve hacked that logic so that it yields bites of at most 32 kB until it has iterated through the supplied chunk, then moves on to the next. A slight increase in code complexity internally, but much smoother streaming behaviour for people using the library.
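
The internal change itself isn’t shown here, but the general technique is easy to sketch with the public io-streams API (capChunks is a made-up name, not the library’s actual code):

import qualified Data.ByteString as S
import           System.IO.Streams (InputStream)
import qualified System.IO.Streams as Streams

-- Wrap an InputStream so that no chunk handed to the consumer exceeds the
-- given size; an oversized chunk is split and the remainder pushed back
-- onto the source with unRead, to be picked up on the next read.
capChunks :: Int -> InputStream S.ByteString -> IO (InputStream S.ByteString)
capChunks limit input = Streams.makeInputStream go
  where
    go = do
        m <- Streams.read input
        case m of
            Nothing -> return Nothing
            Just x
                | S.length x <= limit -> return (Just x)
                | otherwise -> do
                    let (now, rest) = S.splitAt limit x
                    Streams.unRead rest input
                    return (Just now)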

Secondly I’ve brought in the highly tuned HTTP header parsing code from Gregory Collins’s new snap-server. Our parser was already pretty fast, but this gave us a 13% performance improvement. Nice.

We changed the types in the openConnection functions; Hostname and Port are ByteString and Word16 now, so there’s an API version bump to 0.5.0. Literals will continue to work so most people shouldn’t be affected.

AfC

http-streams 0.4.0 released

Quick update to http-streams, making a requested API change to the signature of the buildRequest function as well as pushing out some bug fixes and performance improvements.

You no longer need to pass the Connection object when composing a Request, meaning you can prepare it before opening the connection to the target web server. The required HTTP 1.1 Host: header is added when sendRequest is called, i.e. when the request is written to the server. If you need to see the value of the Host: field that will be sent (say, when debugging) you can call the getHostname function.
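
A minimal sketch of the new ordering (the host and path are placeholders): the Request is built first, and the Connection only opened when you are ready to send.

{-# LANGUAGE OverloadedStrings #-}

import Network.Http.Client

-- The Request is composed before any Connection exists; the Host: header
-- is filled in later, when sendRequest writes the request to the server.
main :: IO ()
main = do
    q <- buildRequest $ do
        http GET "/"
        setAccept "text/plain"

    c <- openConnection "www.example.com" 80
    sendRequest c q emptyBody
    receiveResponse c debugHandler
    closeConnection c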

I’ve added an “API Change Log” to the README file on GitHub, and the blog post introducing http-streams has been updated to reflect the signature change.

Thanks to Jeffrey Chu for his contributions and to Gregory Collins for his advice on performance improvement; this second release is about 9% faster than the original.

AfC

An HTTP client in Haskell using io-streams

An HTTP client

I’m pleased to announce http-streams, an HTTP client library for Haskell, using the Snap Framework’s new io-streams library to handle the streaming I/O.

Back and there again

I’ve been doing a lot of work lately using Haskell to reprocess data from various back-end web services and then present fragments of that information in the specific form needed to drive client-side visualizations. Nothing unusual about that; on one edge of your program you have a web server, and on the other you’re making onward calls to further servers. Another project involves meshes of agents talking to other agents; again, nothing extreme; just a server daemon responding to requests and in turn making its own requests of others. Fairly common in any environment built on (and in turn offering) RESTful APIs.

I’m doing my HTTP server work with the fantastic Snap Framework; it’s a lightweight and decently performing web server library with a nice API. To go with that I needed a web client library, but there the choices are less inspiring.

Those working in Yesod have the powerful http-conduit package, but I didn’t find it all that easy to work with. I soon found myself writing a wrapper around it just so I could use types and an API that made more sense to me.

Because I was happily writing web apps using Snap, I thought it would be cool to write a client library that would use the same types. After much discussion with Gregory Collins and others, it became clear that trying to reuse the Request and Response types from snap-core wasn’t going to be possible. But there was a significant amount of code in Snap’s test suite, notably almost an entire HTTP client implementation. Having used Snap.Test to build test code for some of my own web service APIs, I knew there was some useful material there, and that gave me a useful starting point.

Streaming I/O

One of the exciting things about Haskell is the collaborative way that boundaries are pushed. From the beginnings in iteratee and enumerator, the development of streaming I/O libraries such as conduit and pipes has been phenomenal.

The Snap web server made heavy use of the original iteratee/enumerator paradigm; when I talked to some of the contributors in #snapframework about whether they were planning to upgrade to one of the newer streaming I/O libraries, I discovered from Greg that he and Gabriel were quietly working on a re-write of the internals of the server, based on their experiences doing heavy I/O in production.

This new library is io-streams, aimed at being a pragmatic implementation of some of the impressive theoretical work from the other streaming libraries. io-streams’s design makes the assumption that you’re working in … IO, which seems to have allowed them to make some significant optimizations. The API is really clean, and my early benchmarks were promising indeed.

That was when I realized that being compatible with Snap was less about the Request and Response types and far more about being able to smoothly pass through request and response bodies — in other words, tightly integrating with the streaming I/O library used to power the web server.

http-streams, then, is an HTTP client library built to leverage and in turn expose an API based on the capabilities of io-streams.

A simple example

We’ll make a GET request of http://kernel.operationaldynamics.com:58080/time (which is just a tiny web app that returns the current UTC time). The basic http-streams API is pretty straightforward:

10 
11 {-# LANGUAGE OverloadedStrings #-}
23 
24 import System.IO.Streams (InputStream, OutputStream, stdout)
25 import qualified System.IO.Streams as Streams
26 import Network.Http.Client
27 
28 main :: IO ()
29 main = do
30     c <- openConnection "kernel.operationaldynamics.com" 58080
31 
32     q <- buildRequest $ do
33         http GET "/time"
34         setAccept "text/plain"
35 
36     sendRequest c q emptyBody
37 
38     receiveResponse c (\p i -> do
39         Streams.connect i stdout)
40 
41     closeConnection c
42 

which results in

Sun 24 Feb 13, 11:57:10.765Z

Open connection

Going through that in a bit more detail, given that single import and some code running in IO, we start by opening a connection to the appropriate host and port:

30     c <- openConnection "kernel.operationaldynamics.com" 58080

Create Request object

Then you can build up the request you need:

32     q <- buildRequest $ do
33         http GET "/time"
34         setAccept "text/plain"

that happens in a nice little state monad called RequestBuilder with a number of simple functions to set various headers.

Having built the Request object we can have a look at what the outbound request would look like over the wire, if you’re interested. Doing:

35     putStr $ show q

would have printed out:

GET /time HTTP/1.1
Host: kernel.operationaldynamics.com:58080
User-Agent: http-streams/0.3.0
Accept-Encoding: gzip
Accept: text/plain

Send request

Making the request is a simple call to sendRequest. It takes the Connection, a Request object, and a function of type

   (OutputStream Builder -> IO α)

which is where we start seeing the System.IO.Streams types from io-streams. If you’re doing a PUT or POST you write a function where you are handed the OutputStream and can write whatever content you want to it. Here, however, we’re just doing a normal GET request which by definition has no request body so we can use emptyBody, a predefined function of that type which simply returns without sending any body content. So:

36     sendRequest c q emptyBody

gets us what we want. If we were doing a PUT or POST with a request body, we’d write to the OutputStream in our body function. It’s an OutputStream of Builders as a fairly significant optimization: the library will end up chunking and sending over an underlying OutputStream ByteString which is wrapped around the socket, but building up the ByteString(s) first in a Builder reduces allocation overhead when smacking together all the small strings that the request headers are composed of; taken together, it often means requests will go out in a single sendto(2) system call.
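
For comparison, here is a hedged sketch of what a body function might look like for a PUT; the URL and payload are invented, and this assumes the Builder in question is blaze-builder’s, which is what io-streams worked with at the time.

{-# LANGUAGE OverloadedStrings #-}

import Blaze.ByteString.Builder (fromByteString)
import Network.Http.Client
import qualified System.IO.Streams as Streams

-- A PUT: the body function is handed the OutputStream Builder and writes
-- whatever content it likes to it before returning.
main :: IO ()
main = do
    c <- openConnection "www.example.com" 80
    q <- buildRequest $ do
        http PUT "/resource"

    sendRequest c q (\o ->
        Streams.write (Just (fromByteString "hello, world\n")) o)

    receiveResponse c debugHandler
    closeConnection c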

Read response

To read the reply from the server you make a call to receiveResponse. Like sendRequest, you pass the Connection and a function to handle the entity body, this time one which will read the response bytes. Its type is

   (Response -> InputStream ByteString -> IO β)

This is where things get interesting. We can use the Response object to find out the status code of the response, read various headers, and deal with the reply accordingly. Perhaps all we care about is the status code:

42 statusHandler :: Response -> InputStream ByteString -> IO ()
43 statusHandler p i = do
44     case getStatusCode p of
45         200 -> return ()
46         _   -> error "Bad server!"

The response body is available through the InputStream, which is where we take advantage of the streaming I/O coming down from the server. For instance, if you didn’t trust the server’s Content-Length header and wanted to count the length of the response yourself:

42 countHandler :: Response -> InputStream ByteString -> IO Int
43 countHandler p i1 = do
44     go 0 i1
45   where
46     go !acc i = do
47         xm <- Streams.read i
48         case xm of
49             Just x  -> go (acc + S.length x) i
50             Nothing -> return acc

Ok, that’s pretty contrived, but it shows the basic idea: when you read from an InputStream a it’s a sequence of Maybe a; when you get Nothing, the input is finished. Realistic usage of io-streams is a bit more idiomatic; the library offers a large range of functions for manipulating streams, many of which are wrappers that build up more refined streams from lower-level raw ones. In this case, we could do the counting trick using countInput, which gives you an action to tell you how many bytes it saw:

42 countHandler2 p i1 = do
43     (i2, getCount) <- Streams.countInput i1
44 
45     Streams.skipToEof i2
46 
47     len <- getCount
48     return len

For our example, however, we don’t need anything nearly so fancy; you can of course use the in-line lambda function we showed originally. If you also wanted to spit the response headers out to stdout, Response has a useful Show instance:

38     receiveResponse c (\p i -> do
39         putStr $ show p
40         Streams.connect i stdout)

which is, incidentally, exactly what the predefined debugHandler function does:

38     receiveResponse c debugHandler

either way, when we run this code, it will print out:

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Sun, 24 Feb 2013 11:57:10 GMT
Server: Snap/0.9.2.4
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/plain

Sun 24 Feb 13, 11:57:10.765Z

Obviously you don’t normally need to print the headers like that, but they can certainly be useful for testing.

Close connection

Finally we close the connection to the server:

41     closeConnection c

And that’s it!

More advanced modes of operation are supported. You can reuse the same connection, of course, and you can also pipeline requests [sending a series of requests followed by reading the corresponding responses in order]. And meanwhile the library goes to some trouble to make sure you don’t violate the invariants of HTTP; you can’t read more bytes than the response contains, but if you read less than the length of the response, the remainder of the response will be consumed for you.

Don’t forget to use the conveniences before you go

The above is simple, and if you need to refine anything about the request then you’re encouraged to use the underlying API directly. However, as often as not you just need to make a request of a URL and grab the response. Ok:

61 main :: IO ()
62 main = do
63     x <- get "http://www.haskell.org/" concatHandler
64     S.putStrLn x

The get function is just a wrapper around composing a GET request using the basic API, and concatHandler is a utility handler that takes the entire response body and returns it as a single ByteString — which somewhat defeats the purpose of “streaming” I/O, but often that’s all you want.

There are put and post convenience functions as well. They take a function for specifying the request body and a handler function for the response. For example:

66     put "http://s3.example.com/" (fileBody "fozzie.jpg") handler

this time using fileBody, another of the pre-defined entity body functions.

Finally, for the ever-present not-going-to-die-anytime-soon application/x-www-form-urlencoded POST request — everyone’s favourite — we have postForm:

67     postForm "https://jobs.example.com/" [("name","Kermit"),("role","Stagehand")] handler

Secure connections

I’ve also completely neglected to mention SSL support and error handling until now. Secure connections are supported using openssl; if you’re working in the convenience API you can just request an https:// URL as shown above; in the underlying API you call openConnectionSSL instead of openConnection. As for error handling, a major feature of io-streams is that you leverage the existing Control.Exception mechanisms from base; the short version is that you can just wrap bracket around the whole thing for any exception handling you might need — that’s what the convenience functions do, and there’s a withConnection function which automates this for you if you want.
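
A minimal sketch of the withConnection route, assuming it takes the connection-opening action and the function to run with the resulting Connection (host and path are placeholders):

{-# LANGUAGE OverloadedStrings #-}

import Network.Http.Client

-- The connection is opened, handed to the action, and closed again even
-- if an exception escapes, courtesy of bracket underneath.
main :: IO ()
main =
    withConnection (openConnection "www.example.com" 80) $ \c -> do
        q <- buildRequest $ do
            http GET "/"
            setAccept "text/html"
        sendRequest c q emptyBody
        receiveResponse c debugHandler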

Status

I’m pretty happy with the http-streams API at this point, and it’s pretty much feature complete. A fair bit of profiling has been done, and the code is reasonably sound. Benchmarks against other HTTP clients are favourable.

After a few years working in Haskell this is my first go at implementing a library as opposed to just writing applications. There’s a lot I’ve had to learn about writing good library code, and I’ve really appreciated working with Gregory Collins as we’ve fleshed out this API together. Thanks also to Erik de Castro Lopo, Joey Hess, Johan Tibell, and Herbert Valerio Riedel for their review and comments.

You can find the API documentation for Network.Http.Client here (until Hackage generates the docs) and the source code at GitHub.

AfC

Updates

  1. Code snippets updated to reflect API change made to buildRequest as of v0.4.0. You no longer need to pass the Connection object when building a Request.

Railway Signalling

I’ve long been interested in railways. Not because I’m a “foamer” (UK parlance — apparently some people foam at the mouth when they get the chance to watch passenger trains move, or so the railway employees would have it) or a “railfan” (the US term — Is that supposed to be like “sportsfan”? I mean, just because I want to take a photo that has a train in it doesn’t make me a weirdo, does it? Apparently), but for the same reason that engineers tend to be interested in almost everything: how does it all work?

Model of a searchlight railway signal
Not bad for a model railroad!

One part of real-world railways that is fascinating is the signalling necessary to make operations safe and efficient. It’s beguiling to an engineer in no small part because, by design, you can’t infer the behaviour of the entire system just by watching the signals that go by as you’re on a train: automated signalling isn’t just about local conditions, but about the relationships between track conditions and the locations of trains across vast distances. The relevant Wikipedia pages have never been much help, either. As an unrequited model railroader, I’ve seen plenty of articles about modelling signals, and even descriptions of CTC machines and Train Orders, but still precious little about how signalling systems, as a whole, work. So I’ve long been curious.

A few days ago I came across a fantastic reference by one Carsten Lundsten about how signalling is done in North America. I’ve been engrossed. It appears the site was written somewhat for a European audience, but as far as I can tell it’s pretty informative for a Canadian and American one, too.

Rather than blathering on about which rule number a signal represents or what speed limits are, these documents concentrate on how signalling systems protect trains and how they have improved over time to provide greater automation and flexibility. If you’re interested, definitely start with Basics of North American Signaling and Safety principles.

Absolute Block Control, double track, through to Centralized Traffic Control, double track
Want to know what this means?

Be sure to make your way through to the page about Absolute Permissive Block signalling — by far and away the best explanation for APB and how APB is different than ABS I’ve ever seen, and I’ve been casually researching this for years.

Opposing signals clearing in a section of main track under control of an Absolute Permissive Block signalling system

Enjoy, all ye “train geeks”.

AfC

Model photos from Richard Stallard’s site about his Marbelup Valley railway.
Signal plant diagrams from Carsten Lundsten’s site, as above.

Defining OS specific code in Haskell

I’ve got an ugly piece of Haskell code:

213 baselineContextSSL :: IO SSLContext
214 baselineContextSSL = do
215     ctx <- SSL.context    -- a completely blank context
216     contextSetDefaultCiphers ctx
217 #if defined __MACOSX__
218     contextSetVerificationMode ctx VerifyNone
219 #elif defined __WIN32__
220     contextSetVerificationMode ctx VerifyNone
221 #else
222     contextSetCADirectory ctx "/etc/ssl/certs"
223     contextSetVerificationMode ctx $
224         VerifyPeer True True Nothing
225 #endif
226     return ctx

this being necessary because the non-free operating systems don’t store their X.509 certificates in a place where openssl can reliably discover them. This sounds eminently solvable at lower levels, but that’s not really my immediate problem; after all, this sort of thing is what #ifdefs are for. The problem is needing to get an appropriate symbol defined based on what OS you’re using.

I naively assumed there would be __LINUX__ and __MACOSX__ and __WIN32__ macros already defined by GHC because, well, that’s just the sort of wishful thinking that powers the universe.

When I asked the haskell-cafe mailing list for suggestions, Krzysztof Skrzętnicki said that I could define the symbol in my project’s .cabal file. Nice, but problematic because you’re not always building using Cabal; you might be working in ghci, you might be using a proper Makefile to build your code, etc. Then Henk-Jan van Tuyl pointed out that you can get at the Cabal logic care of Distribution.System. Hey, that’s cool! But that would imply depending on and linking the Cabal library into your production binary. That’s bad enough, but the even bigger objection is that binaries aren’t portable, so what’s the point of having a binary that — at runtime! — asks what operating system it’s on? No; I’d rather find that out at build time and then let the C pre-processor include only the relevant code.

This feels simple and an appropriate use of CPP; even the symbol names look just about like what I would have expected (stackoverflow said so, must be true). Just need to get the right symbol defined at build time. But how?

Build Types

Running cabal install, one sees all kinds of packages building, and I’d definitely noticed some interesting things happen: some packages fire off what is obviously an autoconf-generated ./configure script; others seem to use ghci or runghc to dynamically interpret a small Haskell program. So it’s obviously do-able, but as is often the case with Haskell it’s not immediately apparent where to get started.

Lots of libraries available on Hackage come with a top-level Setup.hs. Whenever I’d looked in one all I’d seen is:

  1 import Distribution.Simple
  2 main = defaultMain

which rather rapidly gave me the impression that this was a legacy of older modules, since running:

$ cabal configure
$ cabal build
$ cabal install

on a project without a Setup.hs apparently just Does The Right Thing™.

It turns out there’s a reason for this. In a project’s .cabal file, there’s a field build-type that everyone seems to define, and of course we’re told to just set this to “Simple”:

 27 build-type:          Simple

what else would it be? Well, the answer to that is that “Simple” is not the default; “Custom” is (really? weird). And a custom build is one where Cabal will compile and invoke Setup.hs when cabal configure is called.

Ahh.

When you look in the documentation of the Cabal library (note, this is different from the cabal-install package which makes the cabal executable we end up running) Distribution.Simple indeed has defaultMain but it has friends. The interesting one is defaultMainWithHooks which takes this monster as its argument; sure enough, there are pre-conf, post-conf, pre-build, post-build, and so on; each one is a function which you can easily override.

 20 main :: IO ()
 21 main = defaultMainWithHooks $ simpleUserHooks {
 22        postConf = configure
 23     }
 24 
 25 configure :: Args -> ConfigFlags -> PackageDescription -> LocalBuildInfo -> IO ()
 26 configure _ _ _ _ = do
 27     ...

yeay for functions as first class objects. From there it was a simple matter to write some code in my configure function to call Distribution.System’s buildOS and write out a config.h file with the necessary #define I wanted:

  1 #define __LINUX__
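
Not the complete Setup.hs I ended up with (that’s linked at the end of this post), but a minimal sketch of the shape such a hook can take; the mapping of buildOS values to symbols is the obvious one:

import Distribution.PackageDescription (PackageDescription)
import Distribution.Simple
import Distribution.Simple.LocalBuildInfo (LocalBuildInfo)
import Distribution.Simple.Setup (ConfigFlags)
import Distribution.System (OS (..), buildOS)

main :: IO ()
main = defaultMainWithHooks $ simpleUserHooks {
        postConf = configure
    }

-- Ask Cabal which platform we're building on and write the corresponding
-- symbol out to config.h for the C pre-processor to pick up.
configure :: Args -> ConfigFlags -> PackageDescription -> LocalBuildInfo -> IO ()
configure _ _ _ _ = do
    let symbol = case buildOS of
            OSX     -> "__MACOSX__"
            Windows -> "__WIN32__"
            _       -> "__LINUX__"
    writeFile "config.h" ("#define " ++ symbol ++ "\n")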

Include Paths

We’re not quite done yet. As soon as you want to #include something, you have to start caring about include paths. It would appear the compiler, by default, looks in the same directory as the file it is compiling. Fair enough, but I don’t really want to put config.h somewhere deep in the src/Network/Http/ tree; I want to put it in the project’s top-level directory, commonly known as ., also known as “where I’m editing and running everything from”. So you have to add a -I"." option to ghc invocations in your Makefiles, and your .cabal file needs to be told in its own way:

 61 library
 62   include-dirs:      .

and as for ghci, it turns out you can put a .ghci in your sources:

  1 :set -XOverloadedStrings
  2 :set +m
  3 :set -isrc:tests
  4 :set -I.

and if you put that in your project root directory, running ghci there will work without having to specify all that tedious nonsense on the command line.

The final catch is that you have to be very specific about where you put the #include directive in your source file. Put it at the top? Won’t work. After the pragmas? You’d think. Following the module statement? Nope. It would appear that it strictly has to go after the imports and before any real code. Line 65:

 47 import Data.Monoid (Monoid (..), (<>))
 48 import qualified Data.Text as T
 49 import qualified Data.Text.Encoding as T
 50 import Data.Typeable (Typeable)
 51 import GHC.Exts
 52 import GHC.Word (Word8 (..))
 53 import Network.URI (URI (..), URIAuth (..), parseURI)
 64 
 65 #include "config.h"
 66 
 67 type URL = ByteString
 68 
 69 --
 70 -- | Given a URL, work out whether it is normal or secure, and then
 71 -- open the connection to the webserver including setting the
 72 -- appropriate default port if one was not specified in the URL. This
 73 -- is what powers the convenience API, but you may find it useful in
 74 -- composing your own similar functions.
 75 --
 76 establishConnection :: URL -> IO (Connection)
 77 establishConnection r' = do
 78     ...

You get the idea.

Choices

Several people wrote to discourage this practice, arguing that conditional code is the wrong approach to portability. I disagree, but you may well have a simple piece of code being run dynamically that would do well enough just making the choice at runtime; I’d be more comfortable with that if the OS algebraic data type was in base somewhere; linking Cabal in seems rather heavy. Others tried to say that needing to do this at all is openssl’s fault and that I should be using something else. Perhaps, and I don’t doubt that we’ll give tls a try at some point. But for now, openssl is battle-tested crypto and the hsopenssl package is a nice language binding and heavily used in production.

Meanwhile I think I’ve come up with a nice technique for defining things to drive conditional compilation. You can see the complete Setup.hs I wrote here; it figures out which platform you’re on and writes the .h file accordingly. If you have need to do simple portability conditionals, you might give it a try.

AfC

Integrating Vim and GPG

Quite frequently, I need to take a quick textual note, but when the content is sensitive, even just transiently, well, some things shouldn’t be left around on disk in plain text. Now before you pipe up with “but I encrypt my home directory”, keep in mind that that only protects the data against being read in the event your machine is stolen; if something gets onto your system while it’s powered up and you’re logged in, the file is there to read.

So for a while my workflow there has been the following rather tedious sequence:

$ vi document.txt
$ gpg --encrypt --armour \
    -r andrew@operationaldynamics.com \
    -o document.asc document.txt
$ rm document.txt
$

and later on, to view or edit the file,

$ gpg --decrypt -o document.txt document.asc 
$ view document.txt
$ rm document.txt

(yes yes, I could use default behaviour for a few things there, but GPG has a bad habit of doing things that you’re not expecting; applying the principle of least surprise seems a reasonable defensive measure, but fine, ok

$ gpg < document.asc

indeed works. Pedants, the lot of you).

Obviously this is tedious and, worse, error prone; don’t be overwriting the wrong file, now. Far more seriously, you have the plain text file sitting around while you’re working on it, which from an operational security standpoint is completely unacceptable.

vim plugin

I began to wonder if there was a better way of doing this, and sure enough, via the voluminous Vim website I eventually found my way to this delightful gem: https://github.com/jamessan/vim-gnupg by James McCoy.

Since it might not be obvious, to install it you can do the following: grab a copy of the code,

$ cd ~/src/
$ mkdir vim-gnupg
$ cd vim-gnupg/
$ git clone git://github.com/jamessan/vim-gnupg.git github
$ cd github/
$ cd plugin/
$ ls

Where you will see one gnupg.vim. To make Vim use it, you need to put it somewhere Vim will see it, so symlink it into your home directory:

$ mkdir ~/.vim
$ mkdir ~/.vim/plugin
$ cd ~/.vim/plugin/
$ ln -s ~/src/vim-gnupg/github/plugin/gnupg.vim .
$

Of course have a look at what’s in that file; this is crypto and it’s important to have confidence that the implementation is sane. Turns out that the gnupg.vim plugin is “just” Vim configuration commands, though there are some pretty amazing contortions. People give Emacs a bad rap for complexity, but whoa. :). The fact you can do all that in Vim is, er, staggering.

Anyway, after all that, it Just Works™. I give my filename a .asc suffix, and ta-da:

$ vi document.asc

the plugin decrypts, lets me edit clear text in memory, and then re-encrypts before writing back to disk. Nice! For a new file, it prompts for the target address (which is one’s own email for personal use) and then it’s on its way. [If you’re instead using symmetrical encryption, I see no way around creating an empty file with gpg first, but other than that, it works as you’d expect]. Doing all of this on a GNOME 3 system, you already have a gpg-agent running, so you get all the sexy entry dialogs and proper passphrase caching.

I’m hoping a few people in-the-know will have a look at this and vet that the plugin is doing the right thing, but all in all this seems a rather promising solution for quickly editing encrypted files.

Now if we can just convince Gedit to do the same.

AfC

java-gnome 4.1.2 released

This post is an extract of the release note from the NEWS file which you can read online … or in the sources from Bazaar.


java-gnome 4.1.2 (30 Aug 2012)

Applications don’t stand idly by.

After a bit of a break, we’re back with a second release in the 4.1 series covering GNOME 3 and its libraries.

Application for Unique

The significant change in this release is the introduction of GtkApplication, the new mechanism providing for unique instances of applications. This replaces the use of libunique for this purpose, which GNOME has deprecated and asked us to remove.

Thanks to Guillaume Mazoyer for having done the grunt work figuring out how the underlying GApplication mechanism worked. Our coverage begins in the Application class.

Idle time

The new Application coverage doesn’t work with java-gnome’s multi-thread safety because GTK itself is not going to be thread safe anymore. This is a huge step backward, but has been coming for a while, and despite our intense disappointment about it all, java-gnome will now be like every other GUI toolkit out there: not thread safe.

If you’re working from another thread and need to update your GTK widgets, you must do so from within the main loop. To get there, you add an idle handler which will get a callback from the main thread at some future point. We’ve exposed that as Glib.idleAdd(); you put your callback in an instance of the Handler interface.

As with signal handlers, you have to be careful to return from your callback as soon as possible; you’re blocking the main loop while that code is running.

Miscellaneous improvements

Other than this, we’ve accumulated a number of fixes and improvements over the past months. Improvements to radio buttons, coverage of GtkSwitch, fixes to Assistant, preliminary treatment of StyleContext, and improvements to SourceView, FileChooser, and more. Compliments to Guillaume Mazoyer, Georgios Migdos, and Alexander Boström for their contributions.

java-gnome builds correctly when using Java 7. The minimum supported version of the runtime is Java 6. This release depends on GTK 3.4.

AfC


You can download java-gnome’s sources from ftp.gnome.org, or easily check out a branch from mainline:

$ bzr checkout bzr://research.operationaldynamics.com/bzr/java-gnome/mainline java-gnome

though if you’re going to do that you’re best off following the instructions in the HACKING guidelines.

AfC


Material on this site copyright © 2002-2014 Operational Dynamics Consulting Pty Ltd, unless otherwise noted. All rights reserved. Not for redistribution or attribution without permission in writing. All times UTC

We make this service available to our staff and colleagues in order to promote the discourse of ideas, especially as it relates to the development of Open Source worldwide. Blog entries on this site, however, are the musings of the authors as individuals and do not represent the views of Operational Dynamics.