Upgrading to Precise

The latest release of Ubuntu, version 12.04 aka Precise, has a lot of updates we’ve been waiting on for a while — GNOME 3.4, GHC 7.4.1, and a huge stack of bugfixes. On the desktop side, quite a number of Linux kernel vs X video modes vs suspend glitches have gone away. That’s fantastic. During most of Oneiric, my laptop was freezing and needing a hard reset at least once a day. Tedious. So I’m quite pleased to report that running Precise, Linux 3.2, gdm, and GNOME 3.4, things are vastly more stable.

Getting upgraded to Precise, however, has not been a pleasant experience.

First we’ve had unattended-upgrades overriding any configuration stating “no automatic upgrades”. The number of non-technical friends who were set to “security updates only” calling in to ask why a “big upgrade” happened and why their computers no longer work has been staggering. Needless to say we nuked unattended-upgrades from all of our systems in a hurry, but for those people it was already too late.

Several desktop upgrades failed half-way through because dpkg suddenly had unresolved symbol errors. Fortunately I was able to work out which library binary was missing and manually copy it in from another machine, which was enough to get the package system working again. Hardly auspicious.

The server side was fraught with difficulty. You cannot yet upgrade from Lucid to Precise. It breaks horribly.

E: Could not perform immediate configuration on 'python-minimal'. Please
see man 5 apt.conf under APT::Immediate-Configure for details. (2)

Brutal. I tried working around it on one system by manually using dpkg, but that just led me into recursive dependency hell:

# cd /var/cache/apt/archives
# dpkg -r libc6-i686
# dpkg -i libc6_2.15-0ubuntu10_i386.deb
# dpkg -i libc-bin_2.15-0ubuntu10_i386.deb
# dpkg -i multiarch-support_2.15-0ubuntu10_i386.deb
# dpkg -i xz-utils_5.1.1alpha+20110809-3_i386.deb
# dpkg -i liblzma5_5.1.1alpha+20110809-3_i386.deb
# dpkg -i dpkg_1.16.1.2ubuntu7_i386.deb
# apt-get dist-upgrade

Huh. That actually worked on one system. But not on another; it still slammed into the python-minimal failure. For that machine I couldn’t mess around, so I gave up and did a re-install from scratch. That’s not always feasible and certainly isn’t desirable; if I wanted to be blowing systems away all the time and re-installing them I’d be running Red Hat.

Anyway, I then located this bug about being unable to upgrade (what the hell kind of QA did these people do before “releasing”?) where, very helpfully, Stefano Rivera suggested a magic incantation that gets you past this:

# apt-get install -o APT::Immediate-Configure=false -f apt python-minimal
# apt-get dist-upgrade

(I had tried something very close to this, but didn’t think of doing both apt and python-minimal. Also, it hadn’t occurred to me to use -f. Ahh. For some reason one always sees apt-get -f install on its own, never apt-get -f install whatever-package-name.)

Ta-da.

AfC

Using inotify to trigger builds

Having switched from Eclipse (which nicely takes care of building your project for you) to working in gVim (which does nothing of the sort), it’s a bit tedious to have to keep switching from the editor’s window to a terminal in the right directory to whack Up then Enter to run make again.

I know about inotify, a capability in the Linux kernel to watch files for changes, but I hadn’t realized there was a way to use it from the command line. Turns out there is! Erik de Castro Lopo pointed me to a program called inotifywait that he was using in a little shell script to build his Haskell code. Erik had it set up to run make if one of the files he’d listed on the script’s command line changed.

Saving isn’t what you think it is

I wanted to see if I could expand the scope in a few ways. For one thing, if you had inotifywait running on a defined list of files and you created a new source file, that wouldn’t trigger a build because the new file wasn’t being watched by inotify. So I had a poke at Erik’s script.

Testing showed that the script was working, but not quite for the reason we thought. It was watching for the ‘modify’ event, but actually catching a non-zero exit code. That was strange; I was expecting a normal 0, not error 1. It turns out 1 is the exit code for cases such as the file you were watching being deleted. Huh? All I did was save a file!

Of course, that’s not what many programs do when saving. To avoid the risk of destroying your original file in the event of having an I/O error when overwriting it, most editors do NOT modify your file in place; they write a new copy and then atomically rename it over the original. Other programs move your original to a backup name and write a new file. Either way, you’ve usually got a new inode. Most of the time.
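
You can see the dance in miniature from the shell. This sketch imitates the write-then-rename style of save; the filename is hypothetical, and a real editor writes the new buffer contents rather than copying anything:

    # write the complete new version to a temporary file first, then
    # atomically rename it over the original; the original is never left
    # half-written, but document.txt ends up with a new inode
    printf 'new contents\n' > document.txt.tmp.$$
    mv document.txt.tmp.$$ document.txt

Run ls -i document.txt before and after and you’ll see the inode number change, which is exactly why inotify considers the watched file to be gone.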

And that makes the exact event(s) to listen for tricky to pin down. At first glance ‘modify’ seemed a reasonable choice, but as we’ve just seen it turns out not to be much use, and meanwhile you end up triggering on changes to transient garbage like Vim’s swap files — certainly not what you want kicking off a build. Given that a new file is being made, I then tried watching for the ‘create’ event, but it’s swamped with noise too. Finally, you want touching a file to result in a rebuild, and touching doesn’t involve a ‘create’ event at all.

It turns out that saving (however done) and touching a file have in common that at some point in the sequence your file (or its backup) will be opened and then closed for writing. inotify has a ‘close_write’ event (complementing ‘close_nowrite’) so that’s the one to watch for.

If you want to experiment with figuring all this out yourself, try doing:

$ inotifywait -m .

and then use your editor and build tools as usual. It’s pretty interesting. The inotifywait(1) program is part of the 'inotify-tools' package on Debian-based Linux systems.
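
For example, a bare-bones watch-and-rebuild loop using that event might look like the following. This is only a sketch, assuming inotify-tools is installed and that recursively watching the current directory is good enough:

    #!/bin/sh
    # block until something under the current directory is written and
    # closed, run make, then go back to waiting; Ctrl+C to stop
    while inotifywait -q -r -e close_write .; do
        make
    done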

Resurrection, Tron style

Triggering a build automatically is brilliant, but only half the equation; inevitably you want to run the program after building it. It gets harder; if the thing you’re hacking on is a service then, having run it, you’ve got to kill it off and restart it to find out if your code change fixed the problem. How many times have you been frustrated that your bugfix hasn’t taken only to realize you’ve forgotten to restart the thing you’re testing? Running the server manually in yet another terminal window and then killing it and restarting it — over and over — is quite a pain. So why not have that triggered as a result of the inotify driven build as well?

Managing concurrent tasks is harder than it should be. Bash has “job control”, of course, and we’re well used to using it in interactive terminals:

$ ./program
^Z
[1]+  Stopped     ./program
$ bg
[1]+ ./program &
$ jobs
[1]+  Running     ./program &
$ kill %1
[1]+  Terminated  ./program
$

It’s one thing to run something and then abort it, but it’s another thing entirely to have a script that runs it and then kills it off in reaction to a subsequent event. Job control is lovely when you’re interactive but for various reasons is problematic to use in a shell script (though, if you really want to, see set -m). You can keep it simple, however: assuming for a moment you have just one program that needs running to test whatever-it-is-you’re-working-on, you can simply capture the process id and use that:

    #!/bin/sh

    ./program &
    PID="$!"

Then you can later, in response to whatever stimuli, do:

    kill $PID

Said stimulus is, of course, our blocking call to inotifywait, returning because a file has been saved.

GOTO 10

Do a build. If it succeeds, run the specified program in the background then block waiting for an inotify ‘close_write’ event. When that happens, kill the program and loop back to the beginning. Easy, right? Sure. That’s why it took me all day.
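
In shell terms the core of it boils down to something like this sketch. It is illustrative only, not the published script; the real inotifymake also handles make targets and the -- separator described below:

    #!/bin/sh
    # rough sketch: build, run in the background, wait for a save,
    # kill the program, repeat

    PROGRAM="$1"
    PID=""

    while true; do
        if make; then
            "$PROGRAM" &
            PID="$!"
        fi
        inotifywait -q -r -e close_write .    # block until a file is saved
        if [ -n "$PID" ]; then
            kill "$PID" 2>/dev/null
            PID=""
        fi
    done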

I called it inotifymake. Usage is simple; throw it in ~/bin then:

$ cd ~/src/project/branch/
$ inotifymake -- ./program

make does its thing, building program as its result
program runs
waiting…

change detected!

kill program
make does its thing, build failed :(
waiting…

change detected!

make does its thing, rebuilding program as a result
program runs
waiting…

the nice part being that the server (or whatever it is) isn’t left running while the build is borked; http://localhost:8000/ simply isn’t answering. Yeah.

[The “--” back there separates make targets from the name of the executable you’re running, so you can do inotifymake test or inotifymake test -- ./program if you like]

So yes, that was a lot of effort for not a lot of script, but this is something I’ve wanted for a long time, and seems pretty good so far. I’m sure it could be improved; frankly if it needed to be any more rigorous I’d rewrite it as a proper C program, but in the mean time, this will do. Feedback welcome if you’ve got any ideas; branch is there if you want it.

AfC

Taming gVim on a GNOME Desktop

I love using GEdit to write text documents like blog posts marked up in Markdown. I’ve been using it extensively to write technical documentation for a while now. It’s a lovely text file editor.

Programming with GEdit is a bit more open to critique. Sure, source code is text files, but as programmers we expect a fair bit of our editor. I’ve been writing my C code in vi for like 25 years. When I returned to Java after a few years in dot-com land I (with great scepticism) began using Eclipse. IDEs are for wimps, after all. Boy was I wrong about that. Once I got the hang of its demented way of doing things, I was blown away by the acceleration from code completion, hierarchy navigation, and most of all context-appropriate popups of JavaDoc. Hard to leave that behind.

Lately, however, I’ve been doing a lot of work with Haskell and JavaScript, and that ain’t happening in Eclipse. Since I use GEdit for so much else, I thought I’d give it a whirl for programming in other-than-Java. Didn’t work out so well. I mean, it’s almost there; I tried a bunch of plugins but it seems a bit of a crap shoot. To make development easier I’d certainly need e.g. ctags, but there was nothing packaged.

You’re probably asking why I’m not content to just use Vim from a terminal; after all, I’ve been doing so for years. But I’ve begun to have a bit of a backlash against running applications in terminal windows: command lines are for doing Linux things, but once you run an editor or email client or whatever in a terminal, your productivity on the Desktop goes to hell. The whole premise of Alt+Tab (we won’t even talk about the bizarre GNOME Shell Alt+` business) is switching between applications, and having both $ prompts and programs in the same type of window blows that up.

Vim, however, has long had a GUI version called gVim, and when running it shows up as an independent application. So, for the hell of it, I gave it a try.

Cut and Paste

Immediately I went bananas because copy and paste didn’t work like they should. Yes, this is vi; yank-yank, baby. But as gVim it’s also a GUI, and we’ve all pretty much learnt that if you’ve got a white canvas in front of you, Ctrl+C and Ctrl+V are going to work. So much so that I have gnome-terminal rejigged to make Ctrl+Shift+C result in SIGINT, leaving Ctrl+C for copy. Consistency in user interaction is everything.

There’s an entire page on the Vim Tips Wiki devoted to using y, d, and P to yank or delete then put. No kidding. But just as I was about to give up I found, buried at the bottom, advice to add:

    source $VIMRUNTIME/mswin.vim

to your .vimrc. The script effects some behaviour changes which, among other things, make selection, cut, copy, and paste work like a normal GtkTextView widget. Hooray!

As for the rest of the GUI, I’ve gone to a lot of trouble to tame it. You need to make quite a number of changes to gVim’s default GUI settings; it’s all a bit cumbersome, and the gVim menus (which at first seem like such a great idea!) don’t actually give you much help. Figuring these customizations out took a lot of wading through the wiki and, worse, the voluminous internal help documentation; it’s pretty arcane.

Cursor

In particular, I want an unblinking vertical bar as a cursor, to match the desktop-wide setting properly picked up by every other GNOME app (here I had to force it manually):

    set guicursor=a:ver1,a:blinkon0

See the documentation for 'guicursor' for other possibilities.

Mouse pointer

Also for consistency, I needed to get standard behaviour for the mouse pointer (in particular, it’s supposed to be an I-beam when over text; the ability to change the pointer depending on which mode you’re in is interesting, but it is jarring when compared to, well, everything else on the Desktop):

    set mouseshape=n:beam,ve:beam,sd:updown

The documentation for 'mouseshape' describes enough permutations to keep even the most discerning customization freak happy.

Window dressing

The remaining settings are certainly personal, but you’ll want to pick a default window size that makes decent use of your screen real estate:

    if has("gui_running")
        set lines=45 columns=95
    endif

You need to guard with that if block because otherwise running vim from your command line will resize your terminal (!) and that’s annoying.

You can set the editor font from the menu, but to make it stick across program invocations you need it in .vimrc as shown here; finally, the 'guioptions' at the end disables tearoff menus (t) and turns off the toolbar (T):

    set guifont=DejaVu\ Sans\ Mono\ 11
    set guioptions-=tT

Syntax colouring

I normally use Vim in a terminal with a black background, but for some reason I don’t much like the colour set chosen. Forcing the change to 'light' makes for a nicely different set, but since I run gVim with a white background to be consistent with other GUI programs, I had to do a bit of tweaking to the colours used for syntax highlighting:

    set background=light
    highlight Constant ctermfg=Blue guifg=DarkBlue
    highlight String ctermfg=Blue cterm=bold guifg=DarkBlue gui=bold
    highlight Comment ctermfg=Grey guifg=DarkGrey

Hopefully that does it. As I said, the new Vim wiki is full of an insane amount of information, but since Vim is so powerful it can, ironically, be hard to find what you need to fine tune things just the way you want them. So if you’re a power Vim or gVim user, why don’t you blog about your .vimrc settings?

Still not sure if using gVim is going to be a good idea; the fact that despite all this hackery the editor canvas is still not a GNOME app with behaviour matching the standards of the entire rest of the Desktop is going to make me crazy. Hopefully I can keep things straight in my head; frankly I’d rather be using GEdit but it is nice to have consistency between using Vim to do sysadmin work and using gVim to write code.

AfC

My sound hardware didn’t vanish, honest

I’ve been having intermittent problems with sound not working. Usually restarting (ie, killing) PulseAudio has done the trick but today it was even worse; the sound hardware mysteriously vanished from the Sound Settings capplet. Bog knows what’s up with that, but buried in “Sound Troubleshooting” I found “Getting ALSA to work after suspend / hibernate” which contains this nugget:

The alsa “force-reload” command will kill all running programs using the sound driver so the driver itself is able to be restarted.

Huh. Didn’t know about that one. But seems reasonable, and sure enough,

$ /sbin/alsa force-reload

did the trick.

That wiki page goes on to detail adding a script to /etc/pm/sleep.d to carry this out after every resume. That seems excessive; I know that sometimes drivers don’t work or hardware doesn’t reset after the computer has been suspended or hibernated, but in my case the behaviour is only intermittent, and seems related to having docked (or not), having used external USB headphones (or not), and having played something with Flash (which seems to circumvent PulseAudio. Bad). Anyway, one certainly doesn’t want to kill all one’s audio-using programs just because the machine suspended! But as a workaround for whatever it is that’s wrong today, nice.
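
For reference, the hook the wiki suggests amounts to something like this; the filename is hypothetical, and pm-utils passes resume or thaw as the first argument on wake-up:

    #!/bin/sh
    # hypothetical /etc/pm/sleep.d/99-alsa-reload, per the wiki page;
    # note that this kills every program using the sound driver on
    # every single resume, which is why it seems excessive to me
    case "$1" in
        resume|thaw)
            /sbin/alsa force-reload
            ;;
    esac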

AfC

Learning Haskell

In the land of computer programming, newer has almost always meant better. Java was newer than C, and better, right? Python was better than Perl. Duh, Ruby is better than everything, so they’d tell you. But wait, Twitter is written in Scala. Guess that must be the new hotness, eh?

Haskell has been around for quite a while; somehow I had it in my head that it was outdated and only for computer science work. After all, there are always crazy weirdos out there in academia working on obscure research languages — at least, that’s the perspective from industry. After all, we’re the ones getting real work done. All you’re doing is sequencing the human genome. We invented Java 2 Enterprise Edition. Take that, ivory tower.

The newness bias is strong, which is why I was poleaxed to find people I respect like Erik de Castro Lopo and Conrad Parker working hard in, of all things, Haskell. And now they’re encouraging me to program in it, too (surely, cats and dogs are sleeping together this night). On their recommendation I’ve been learning a bit, and much to my surprise, it turns out Haskell is vibrant, improving, and really cutting edge.

The next thing

I get the impression that people are tired of being told that the some cool new thing makes everything else they’ve been doing irrelevant. Yet many professional programmers (and worse, fanboy script kiddies) are always looking to the next big thing, the next cool language. Often the very people you respect about a topic have already moved on to something else (there’s a book deal in it for them if they can write it fast enough).

But still; technology is constantly changing and there’s always pressure to be doing the latest and greatest. I try my best to resist this sort of thing, just in the interest of actually getting anything done. Not always easy, and the opposite trap is to adopt a bunker mentality whereby you defend what you’re doing against all comers. Not much learning going on there either.

There is, however, a difference between the latest new thing and learning something new.

One of the best things about being active in open source is the opportunity to meet people who you can look up to and learn from. I may know a thing or two about operations and crisis and such, but my techie friends and colleagues are my mentors when it comes to software development and computer engineering. One thing they have taught me over the years is the value of setting out deliberately to “stretch” your mind: specifically, experimenting with a new programming language that is not your day-to-day working environment, but something that will force you to learn new ways of looking at problems. These guys are professionals; they recognize that whatever your working language(s) are, you’re going to keep using them because you get things done there. It’s not about being seduced by the latest cool project that some popular blogger would have you believe is the be-all-and-end-all. Rather, in stretching, you might be able to bring ideas back to your main work and just might improve thereby. I think there is wisdom there.

Should I attempt to learn Haskell?

I’ve had an eye on functional programming for a while now; who hasn’t? Not being from a formal computer science or mathematics background — (“damnit Jim, I’m an engineer, not an English major” when called upon to defend my atrocious spelling) — the whole “omigod, like, everything is a function and that’s like, totally cool” mantra isn’t quite as compelling by itself as it might be. But lots of people I respect have been going on about functional programming for a while now, and it seemed a good direction to stretch. So I asked: which language should I learn?

My colleagues suggested Haskell for a number of reasons. That cutting-edge research was happening there, and that increasingly powerful things were being implemented in the compiler and runtime as a result, sounded interesting. That Haskell is a pure functional language (I didn’t yet know what that meant, but that’s beside the point) would really force me to learn a functional way of looking at things, as opposed to languages where you can do functional things but can easily escape those constraints; pragmatic, perhaps, but since the idea was to learn something new, that made Haskell sound good rather than this being perceived as a limitation. Finally, they claimed that you could express problems concisely (brevity good, though not if it’s so dense that it’s write-only).

Considering a new language (or, within a language, considering various competing frameworks for web applications, graphical user interface, production deployment, etc) my sense is that when we look at such things we are all fairly quick to judge, based on our own private aesthetic. Does it look clean? Can I do things I need to do with this easily? How do the authors conceive of the problem space? (in web programming especially, a given framework will make some things easy and other things nigh on impossible; you need to know what world-view you’re buying into).

I don’t know about you, but elegance by itself and in the abstract is not sufficient. Elegance is probably the most highly valued characteristic of good engineering design, but it must be coupled with practicality. In other words, does the design get the job done? So before I was willing to invest time learning Haskell, I wanted to at least have some idea that I’d be able to use it for something more than just academic curiosity.

Incidentally, I’m not sure the Haskell community does itself many favours by glorying in how clever you can be in the language; the implied corollary is that you can’t do anything without being exceedingly clever about it. If true, that would be tedious. I get the humour of the commentary that as we gain experience we tend to overcomplicate things, as seen in the many different ways there are to express a factorial function. But I saw that article linked from almost every thread about how clever you can be with Haskell; is that the sort of thing you want to use as an introduction for newcomers? Given the syntax is so different from what people are used to in mainstream C-derived programming languages, the code there just looks like mush. The fact that people who studied mathematics are doing theorem proving in the language is fascinating, but the tone is very elevated as a result. A high bar for a newcomer — even a professional with 25 years’ programming experience — to face.

It became clear pretty fast that I wouldn’t have the faintest idea what I was looking at, but I still tried to get a sense of what using Haskell would be like. Search on phrases like “haskell performance”, “haskell in production”, “commercial use of haskell”, “haskell vs scala”, and so on; you get more than enough highly partisan discussion. It’s quick to see that people love the language. It’s a little harder to find evidence of it being used in anger, but eventually I came across pages like Haskell in Industry and Haskell Weekly News which have lots of interesting links. That pretty much convinced me it’d be worth giving it a go.

A brief introduction

So here I am, struggling away learning Haskell. I guess I’d have to say I’m still a bit dubious, but the wonderful beginner tutorial called Learn You A Haskell For Great Good (No Starch Press) has cute illustrations. :) The other major starting point is Real World Haskell (O’Reilly). You can flip through it online as well, but really, once you get the idea, I think you’ll agree it’s worth having both in hard copy.

Somewhere along the way my investigations landed me on discussion of something called “software transactional memory” as an approach to concurrency. Having been a Java person for quite some years, I’m quite comfortable with multi-threading [and exceptionally tired of the rants from people who insist that you should only write single threaded programs], but I’m also aware that concurrency can be hard to get right and that solving bugs can be nasty. The idea of applying the database notion of transactions to memory access is fascinating. Reading about STM led me to this (short, language agnostic) keynote given at OSCON 2007 by one Simon Peyton-Jones, an engaging speaker and one of the original authors of GHC. Watching the video, I heard him mention that he had done an “introduction to Haskell” earlier in the conference. Huh. Sure enough, linked from here, are his slides and the video they took.

Watching the tutorial implies a non-trivial investment in time, and a bit of care to manually track the slides with him as he is presenting, but viewing it all the way through was a very rewarding experience. By the time I watched this I’d already read Learn You A Haskell and a goodly chunk of Real World Haskell, but if anything that made it even more fascinating; I suppose I was able to concentrate more on what he was saying about why things in Haskell are the way they are.

I was quite looking forward to seeing how he would introduce I/O to an audience of beginners; like every other neophyte I’m grinding through learning what “monads” are and how they enable pure functional programming to coexist with side effects. Peyton-Jones’s discussion of IO turns up towards the end (part 2 at :54:36), when this definition went up on a slide:

type IO a = World -> (a, World)

accompanied by this description:

“You can think of it as a function that takes a World to a pair of a and a new World … a rather egocentric functional programmer’s view of things in which your function is center of the universe, and the entire world sort of goes in one side of your function, gets modified a bit by your function, and emerges, in a purely functional way, in a freshly minted world which comes out the other…”

“Oh, so that’s a metaphor?” asked one of his audience.

“Yes. The world does not actually disappear into your laptop. But you can think of it that way if you like.”

Ha. :)

Isolation and reusability

A moment ago I mentioned practicality. The most practical thing going these days is the web problem, i.e. using a language and its toolchain to do web programming. Ok, so what web frameworks are there for Haskell? Turns out there are a few; two newer ones in particular are Yesod and the Snap Framework. Their raw performance as web servers looks very impressive, but the real question is how does writing web pages & application logic go down? Yesod’s approach, called “Hamlet”, doesn’t do much for me. I can see why type safety across the various pieces making up a web page would be something you’d aspire to, but it ain’t happening (expecting designers to embed their work in a pseudo-but-not-actually HTML language has been tried before. Frequently. And it’s been a bust every time). Snap, on the other hand, has something called “Heist”. Templates are pure HTML, and when you need to drop in programmatically generated snippets you do so with a custom tag that gets substituted in at runtime. That’s alright. As for writing said markup from within code, there’s a different project called “Blaze” which looks easy enough to use.

Reading a thread about Haskell web programming, I saw explicit acknowledgement on the part of framework authors from all sides that it would be possible to mix and match, at least in theory. If you like Yesod’s web server but would rather use Snap’s Heist template engine, you could probably do so. You’d be in for all the glue code, and you’d need to know what you’re about, but this still raises an interesting point.

A big deal with Haskell — and one of the core premises of programming in a functional language that emphasizes purity and modularity — is that you can rely on code from other libraries not to interfere with your code. It’s more than just “no global variables”; pure functions are self contained, and when there are side effects (as captured in IO and other monads) they are explicitly marked and segregated from pure code. In IT we’ve talked about reusable code for a long time, and we’ve all struggled with it: the sad reality is that in most languages, when you call something you have few guarantees that nothing else is going to happen over and above what you’ve asked for. The notion of a language and its runtime explicitly going out of its way to inhibit this sort of thing is appealing.

Hello web, er world

Grandiose notions aside, I wanted to see if I could write something that felt “clean”, even if I’m not yet proficient in the language. I mentioned above that I liked the look of Snap. So, I roughed out some simple exercises of what using the basic API would be like. The fact that I am brand new at Haskell of course meant it took a lot longer than it should have! That’s ok, I learnt a few things along the way. I’ll probably blog separately about it, but after an essay about elegance and pragmatism, I thought I should close with some code. The program is just a little ditty that echoes your HTTP request headers back to you, running there. You can decide for yourself if the source is aesthetically pleasing; ’tis a personal matter. I think it’s ok, though I’m not for a moment saying that it’s “good” style or anything. I will say that with Haskell I’ve already noticed that what looks deceptively simple often takes a lot of futzing to get the types right — but I’ve also noticed that when something does finally compile, it tends to be very close to being done. Huh.

So here I am freely admitting that I was quite wrong about Haskell. It’s been a bit of a struggle getting started, and I’m still a bit sceptical about the syntax, but I think the idea of leveraging Haskell shows promise, especially for server-side work.

AfC

A good GNOME 3 Experience

I’ve been using GNOME 3 full time for over 9 months, and I find it quite usable. I’ve had to learn some new usage patterns, but I don’t see that as a negative. It’s a new piece of software, so I’m doing my best to use it the way it’s designed to be used.

Sure, it’s different from GNOME 2. It’s vastly different. But it is a new UI paradigm. The GNOME 2 experience was over 9 years old, and largely based on the experience inherited from old Windows 95 mixed with a bit of CDE. There were so many things that the GNOME hackers wanted to do — and lots of things all the UI studies said needed changing — that the old pattern simply couldn’t support them.

Still, a lot of people are upset. Surprise. Most recently it’s been people running Debian Testing who have just found that their distro has migrated its packages from GNOME 2.32 to GNOME 3.x. Distros like Ubuntu have been shipping GNOME 2.32 for ages, but it has been well over 2 years since anyone actually worked on that code. It’s wonderful that nothing has changed for you in all that time [a true Debian Stable experience!] but I think it’s a bit odd not to expect that something that was widely advertised as being such a different user experience is … different.

What I find annoying about these conversations is that if these same people had gone and bought an Apple laptop with Mac OS X on it, they would quite reasonably be working through learning how to use a new Desktop and not complaining about it at all. But here we are admonishing the GNOME hackers for having the temerity to do something new and different.

Installing

I went to some trouble to run GNOME 3 on Ubuntu Linux during the Natty cycle; that was a bit of work, but I needed to be current. Now with Oneiric things are mostly up to date. GNOME 3.0 was indeed a bit of a mess, but then so was GNOME 2.0. The recently released 3.2 is a big improvement, and the list of things targeted at 3.4 looks like it will improve matters further.

I’m now running GNOME 3 on a freshly built Ubuntu Oneiric system; I just did a “command line” install of Ubuntu and then installed gdm, gnome-shell, xserver-xorg and friends. Working great, and not having installed gnome-desktop saved me a huge amount of baggage. Of course a normal Oneiric desktop install and then similarly installing and switching to gnome-shell would work fine too; either way you probably want to enable the ppa:gnome3-team/gnome3 PPA.
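
For the record, on a fresh command-line install the sequence amounts to something like this; the exact package selection is from memory, so treat it as a sketch and adjust to taste:

# add-apt-repository ppa:gnome3-team/gnome3
# apt-get update
# apt-get install xserver-xorg gdm gnome-shell
# apt-get dist-upgrade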

Launchers

One thing I do recommend is mapping (say) CapsLock as an additional “Hyper” key and then Caps + F1 .. Caps + F12 as launchers. I have the epiphany browser on F1, evolution on F2, my IRC client on F3 and so on. Setting up Caps + Z to run gnome-terminal --window means you can pop a term easily from anywhere. You do the mapping in:

    System Settings → Keyboard Layout → Layout tab → Options...

and can set up launchers via:

    System Settings → Keyboard → Shortcuts tab → "Custom Shortcuts" → `[+]` button

(you’d think that’d all just be in one capplet, but anyway)

Not that my choices matter, per se, but to give you an idea:

    Accelerator   Launches                  Description
    Caps + F1     epiphany                  Web browser (primary)
    Caps + F2     evolution                 Email client
    Caps + F3     pidgin                    IRC client
    Caps + F4     empathy                   Jabber client
    Caps + F5     firefox                   Web browser (alternate)
    Caps + F6     shotwell                  Photo manager
    Caps + F7     slashtime                 Timezone utility
    Caps + F8     rhythmbox                 Music player
    Caps + F9     eclipse                   Java IDE
    Caps + F10    devhelp                   GTK documentation
    Caps + F11    gucharmap                 Unicode character picker
    Caps + F12    gedit                     Text editor
    Caps + Z      gnome-terminal --window   New terminal window

That means I only use the Overview’s lookup mechanism (ie typing Win, T, R, A… in this case looking for the Project Hamster time tracker) for outlying applications. The rest of the time it’s Caps + F12 and bang, I’ve got GEdit in front of me.

Of course you can also set up the things you use the most on the “Dash” (I think that’s what they call it) as favourites. I’ve actually stopped doing that (I gather the original design didn’t have favourites at all); I prefer to have it as an alternative view of things that are actually running.

Extensions

People love plugin architectures, but they’re quite the anti-pattern; over and above the software maintenance headache (evolving upstream constantly breaks APIs used by plugins, for one example; the nightmare of packaging plugins safely being another), before long you get people installing things with contradictory behaviour which completely trash the whole experience that your program was designed to have in the first place.

Case in point: it didn’t take long after the extension mechanism built into gnome-shell was discovered for people to start using it to implement … GNOME 2. Gawd.

That is certainly not my recommendation; as I wrote above, the point of GNOME 3 and its new shell is to enable a new mode of interaction. Still, everyone has got their itches and annoyances, and so for my friends who can’t live without their GNOME 2 features, I thought I’d point out a few things.

There is a collection of GNOME Shell Extensions, some of which appear to be packaged, e.g. gnome-shell-extensions-drive-menu for a plugin which gives you some kind of menu when removable devices are inserted. I’m not quite sure what the point of that is; the shell already puts something in the tray when you’ve got removable media. Whatever floats your boat, I guess. Out in the wild are a bunch more. The charmingly named GNOME Shell Frippery extensions by Ron Yorston are a bunch of plugins to recreate GNOME 2 features. Most are things I wouldn’t touch with a ten-foot pole (a bottom panel? Who needs it? Yo, hit the Win key to activate the Overview and you see everything!).

My personal itch was wanting to have 4 fixed workspaces. The “Auto Move Workspaces” plugin from gnome-shell-extensions was close (and would be interesting if its experience and UI were properly integrated into the primary shell experience), but the “Static Workspaces” plugin from gnome-shell-frippery did exactly the trick. Now I have four fixed workspaces and I can get to them with Caps + 1 .. Caps + 4. Hurrah.

You install the plugin by dropping the Static_Workspaces@rmy.pobox.com/ directory into ~/.local/share/gnome-shell/extensions/, then restarting the Shell via Alt + F2, R, and then firing up gnome-tweak-tool and activating the extension:

    Advanced Settings → Shell Extension tab → switch "Static Workspaces Extension" to "On"
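
In shell terms, dropping the extension in place looks something like this, run from wherever you unpacked the Frippery tarball:

$ mkdir -p ~/.local/share/gnome-shell/extensions
$ cp -r Static_Workspaces@rmy.pobox.com ~/.local/share/gnome-shell/extensions/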

Hopefully someone will Debian package gnome-shell-frippery soon.

Not quite properly integrated

Having to create custom launchers and fiddle around with plugins just to get things working? “Properly integrated” this ain’t, and that’s my fault. I respect the team hacking on GNOME 3, and I know they’re working hard to create a solid experience. I feel dirty having to poke and tear their work apart. Hopefully over the next few release cycles things like this will be pulled into the core and given the polish and refined experience that have always been what we’ve tried to achieve in GNOME. What would be really brilliant, though, would be a way to capture and export these customizations. Especially launchers; setting that up on new machines is a pain and it’d be lovely to be able to make it happen via a package. Hm.

AfC

Restoring Server to Server Jabber capability in Openfire 3.7.0

We upgraded our Jabber server from Openfire 3.6.4 to 3.7.0, but suddenly it wasn’t talking to others who had self-signed certificates (despite having been told that was ok; in such circumstances Jabber offers “dialback” to weakly establish TLS connectivity). It was breaking with messages like:

 Error trying to connect to remote server: net.au (DNS lookup: net.au:5269)

in the logs. Took a while to isolate that (Java stack traces in server logs, how I hate thee), but that led me to this discussion about the problem.

Turns out to be, more or less, this issue.

Back on the original thread, a comment by Gordon Messmer does the trick; he isolated the specific problem and provided a patch which you can reverse-apply. I had already downloaded and installed the 3.7.1 nightly build .deb (which did not fix the problem), but with the patch in hand I was able to follow the advice of “Devil” to rebuild the server and replace the openfire.jar manually.
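
If you’ve not reverse-applied a patch before, it’s just patch -R, something like the following; the filename here is obviously hypothetical, and the -p level depends on how the patch was generated. After that you rebuild and drop the resulting openfire.jar over the installed one:

$ patch -p0 -R < s2s-dialback.patch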

Obviously I will rebuild the .deb in question shortly, but I can confirm that reverting that specific change does restore server to server functionality with people you used to be able to connect to fine.

Meanwhile 3.7.1, alpha though it may be, does seem to have a fair few fixes in it. We’ll keep that for now.

AfC

Mounting a kvm image on a host system

Needed to mount a KVM disk image on a host system without booting the guest. I’ve used kvm-nbd in the past, but that’s more for exporting a filesystem from a remote server (and, in any event, we now use Sheepdog for distributing VM images around).

Found a number of references to using losetup but wanted something simpler; doing the loop part automatically has been a “take for granted” thing for years.

It turns out that if your image is in kvm-img’s “raw” format already then you can pretty much access it directly. We found this article, dated 2009, which shows you can do it pretty quickly; assuming that the partition is the first (or only) one in the disk image:

# mount -o ro,loop,offset=32256 dwarf.raw /mnt
#

which does work!

I’m a bit unclear about the offset number; where did that come from?

An earlier post mentioned something called kpartx. Given a disk, it will identify partitions and make them available as devices via the device mapper. Neat. Hadn’t run into that one before.

This comment on Linux Questions suggested using kpartx directly as follows:

# kpartx -a dwarf.raw 
# mount -o ro /dev/mapper/loop0p1 /mnt
#

Nice.

Incidentally, later in that thread is mention of how to calculate offsets using sfdisk -l; however, that doesn’t help if you don’t already have the disk available in /dev. But you can use the very same kpartx to get the start sector:

# kpartx -l dwarf.raw 
loop0p1 : 0 16777089 /dev/loop0 63
#

Ah ha; 63 sectors × 512 byte block size is 32256. So now we know where they got the number from; adjust your own mounts accordingly. :)
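
Or, letting the shell do the arithmetic for your own image (the start sector is the last number on the kpartx -l line):

$ echo $((63 * 512))
32256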

AfC

Comments:

  1. Pádraig Brady wrote in, saying “I find partx is overkill. I’ve been using the following script for years; lomount.sh dwarf.raw 1 /mnt is how you might use it.” Interesting use of fdisk, there.

Fix your icons package

Updating today, suddenly a whole bunch of icons were broken; GNOME Shell was presenting everything with a generic application diamond as its icon. Bah.

I noticed gnome-icon-theme was one of the packages that bumped. It turns out that package is now only a limited subset of the GNOME icon set and that you have to install gnome-icon-theme-full to get the icons you actually need. Bah², but now everything is back the way it should be.
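
In other words, the fix was simply:

# apt-get install gnome-icon-theme-full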

AfC

java-gnome 4.1.1 released

This post is an extract of the release note from the NEWS file which you can read online … or in the sources from Bazaar.


java-gnome 4.1.1 (11 Jul 2011)

To bump or not to bump; that is the question

This is the first release in the 4.1 series, introducing coverage of the GNOME 3 series of libraries, notably GTK 3.0. There was a fairly significant API change from GTK 2.x to 3.x, and we’ve done our best to accommodate it.

Drawing with Cairo, which you were already doing

The biggest change has to do with drawing; if you have a custom widget (ie, a DrawingArea) then you have to put your Cairo drawing code in a handler for the Widget.Draw signal rather than what used to be Widget.ExposeEvent. Since java-gnome has only ever exposed drawing via Cairo, this change will be transparent to most developers using the library.

Other significant changes include colours: instead of the former Color class there’s now RGBA; you use this in calls in the override...() family instead of the modify...() family; for example, see Widget’s overrideColor().

Orientation is allowed now

Widgets that had abstract base classes and then concrete horizontal and vertical subclasses can now all be instantiated directly with an Orientation parameter. The most notable example is Box’s <init>() (the idea is to replace VBox and HBox, which upstream is going to do away with). Others are Paned and various Range subclasses such as Scrollbar. Separator, Toolbar, and ProgressBar now implement Orientable as well.

There’s actually a new layout Container, however. Replacing Box and Table is Grid. Grid is optimized for GTK’s new height-for-width geometry management and should be used in preference to other Containers.

The ComboBox API was rearranged somewhat. The text-only type is now ComboBoxText; the former ComboBoxEntry is gone, replaced by a property on ComboBox. This is somewhat counter-intuitive since the behaviour of the Widget is so dramatically different when in this mode (ie, it looks like a ComboBoxEntry; funny, that).

Other improvements

It’s been some months since our last release, and although most of the work has focused on refactoring to present GTK 3.0, there have been numerous other improvements. Cairo in particular has seen some refinement in the area of Pattern and Filter handling thanks to Will Temperley, and coverage of additional TextView and TextTag properties, notably relating to paragraph spacing and padding.

Thanks to Kenneth Prugh, Serkan Kaba, and Guillaume Mazoyer for their help porting java-gnome to GNOME 3.


You can download java-gnome’s sources from ftp.gnome.org, or easily checkout a branch from mainline:

$ bzr checkout bzr://research.operationaldynamics.com/bzr/java-gnome/mainline java-gnome

though if you’re going to do that you’re best off following the instructions in the HACKING guidelines.

AfC