Integrating Vim and GPG

Quite frequently I need to take a quick textual note, but when the content is sensitive, even just transiently, some things shouldn’t be left lying around on disk in plain text. Now before you pipe up with “but I encrypt my home directory”, keep in mind that that only protects the data against being read in the event your machine is stolen; if something gets onto your system while it’s powered up and you’re logged in, the file is there to be read.

So for a while my workflow there has been the following rather tedious sequence:

$ vi document.txt
$ gpg --encrypt --armour \
    -r andrew@operationaldynamics.com \
    -o document.asc document.txt
$ rm document.txt
$

and later on, to view or edit the file,

$ gpg --decrypt -o document.txt document.asc 
$ view document.txt
$ rm document.txt

(yes yes, I could use default behaviour for a few things there, but GPG has a bad habit of doing things that you’re not expecting; applying the principle of least surprise seems a reasonable defensive measure, but fine, ok

$ gpg < document.asc

indeed works. Pedants, the lot of you).

Obviously this is tedious, and worse, error prone; don’t be overwriting the wrong file, now. Far more seriously, you have the plain text file sitting around while you’re working on it, which from an operational security standpoint is completely unacceptable.

vim plugin

I began to wonder if there was a better way of doing this, and sure enough, via the voluminous Vim website I eventually found my way to this delightful gem: https://github.com/jamessan/vim-gnupg by James McCoy.

Since it might not be obvious, to install it you can do the following: grab a copy of the code,

$ cd ~/src/
$ mkdir vim-gnupg
$ cd vim-gnupg/
$ git clone git://github.com/jamessan/vim-gnupg.git github
$ cd github/
$ cd plugin/
$ ls

There you will see a single file, gnupg.vim. To make Vim use it, you need to put it somewhere Vim will see it, so symlink it into your home directory:

$ mkdir ~/.vim
$ mkdir ~/.vim/plugin
$ cd ~/.vim/plugin/
$ ln -s ~/src/vim-gnupg/github/plugin/gnupg.vim .
$

Of course, have a look at what’s in that file; this is crypto and it’s important to have confidence that the implementation is sane. Turns out that the gnupg.vim plugin is “just” Vim configuration commands, though there are some pretty amazing contortions. People give Emacs a bad rap for complexity, but whoa. :). The fact that you can do all that in Vim is, er, staggering.

Anyway, after all that, it Just Works™. I give my filename a .asc suffix, and ta-da:

$ vi document.asc

the plugin decrypts, lets me edit clear text in memory, and then re-encrypts before writing back to disk. Nice! For a new file, it prompts for the target address (which is one’s own email for personal use) and then it’s on its way. [If you’re instead using symmetric encryption, I see no way around creating an empty file with gpg first, but other than that, it works as you’d expect.] Doing all of this on a GNOME 3 system, you already have a gpg-agent running, so you get all the sexy entry dialogs and proper passphrase caching.
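If you do want the symmetric case, the workaround amounts to something like this first (a sketch; the filenames are just the same example as above):

$ touch document.txt
$ gpg --symmetric --armour -o document.asc document.txt
$ rm document.txt
$ vi document.asc

From there the plugin carries on as described above.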

I’m hoping a few people in-the-know will have a look at this and vet that this plugin is doing the right thing, but all in all this seems a rather promising solution for quickly editing encrypted files.

Now if we can just convince Gedit to do the same.

AfC

java-gnome 4.1.2 released

This post is an extract of the release note from the NEWS file which you can read online … or in the sources from Bazaar.


java-gnome 4.1.2 (30 Aug 2012)

Applications don’t stand idly by.

After a bit of a break, we’re back with a second release in the 4.1 series covering GNOME 3 and its libraries.

Application for Unique

The significant change in this release is the introduction of GtkApplication, the new mechanism providing for unique instances of applications. This replaces the use of libunique for this purpose, which GNOME has deprecated and asked us to remove.

Thanks to Guillaume Mazoyer for having done the grunt work figuring out how the underlying GApplication mechanism worked. Our coverage begins in the Application class.

Idle time

The new Application coverage doesn’t work with java-gnome’s multi-thread safety because GTK itself is not going to be thread safe anymore. This is a huge step backward, but has been coming for a while, and despite our intense disappointment about it all, java-gnome will now be like every other GUI toolkit out there: not thread safe.

If you’re working from another thread and need to update your GTK widgets, you must do so from within the main loop. To get there, you add an idle handler which will get a callback from the main thread at some future point. We’ve exposed that as Glib.idleAdd(); you put your callback in an instance of the Handler interface.

As with signal handlers, you have to be careful to return from your callback as soon as possible; you’re blocking the main loop while that code is running.

Miscellaneous improvements

Other than this, we’ve accumulated a number of fixes and improvements over the past months. Improvements to radio buttons, coverage of GtkSwitch, fixes to Assistant, preliminary treatment of StyleContext, and improvements to SourceView, FileChooser, and more. Compliments to Guillaume Mazoyer, Georgios Migdos, and Alexander Boström for their contributions.

java-gnome builds correctly when using Java 7. The minimum supported version of the runtime is Java 6. This release depends on GTK 3.4.

AfC


You can download java-gnome’s sources from ftp.gnome.org, or easily check out a branch from mainline:

$ bzr checkout bzr://research.operationaldynamics.com/bzr/java-gnome/mainline java-gnome

though if you’re going to do that you’re best off following the instructions in the HACKING guidelines.

AfC

Testing RESTful APIs the not-quite-as-hard way

Last week I wrote briefly about using wget and curl to test RESTful interfaces. A couple people at CERN wrote in to suggest I look at a tool they were quite happy with, called httpie.

I’m impressed. It seems to strike a lovely balance between expressiveness and simplicity. What’s especially brilliant is that it’s written for the common case of needing to customize headers and set specific parameters; you can do it straight off the command line. For what I was doing last week:

$ http GET http://localhost:8000/muppet/6 Accept:application/json
...

sets the Accept header in your request; sending data is unbelievably easy. Want to post to a form? -f gets you URL encoding, and meanwhile you just set parameters on the command line:

$ http -f POST http://localhost:8000/ name="Kermit" title="le Frog"
POST / HTTP/1.1
Accept: */*
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: localhost:8000
User-Agent: HTTPie/0.2.7

name=Kermit&title=le+Frog
...

Nice.

If you’re sending JSON it does things like set the Content-Type and Accept headers to what they should be by simply specifying -j (which sensibly is the default if you POST or PUT and have name=value pairs). And, -v gets you both request and response headers; if you’re testing at this level you usually want to see both. Good show.

$ http -v -j GET http://localhost:8000/muppet/6
GET /muppet/6 HTTP/1.1
Accept: application/json
Accept-Encoding: gzip
Host: localhost:8000
User-Agent: HTTPie/0.2.7


HTTP/1.1 200 OK
Cache-Control: max-age=42
Content-Encoding: gzip
Content-Type: application/json
Date: Thu, 09 Aug 2012 03:52:27 GMT
Server: Snap/0.9.1
Transfer-Encoding: chunked
Vary: Accept-Encoding

{
    "name": "Fozzie"
    "title": "Bear"
}
$

Speaking of bears, I’m afraid to say it turned out to be quite the bear getting httpie installed on Ubuntu. I had to backport pygments, requests, python-oauthlib, and pycrypto from Quantal to Precise, and meanwhile the httpie package in Quantal was only 0.1.6; upstream is at 0.2.7 and moving at a rapid clip. I finally managed to get through dependency hell; if you want to try httpie you can add my network tools PPA as ppa:afcowie/network. I had to make one change to httpie: the default compression header in python-requests is

Accept-Encoding: identity, deflate, compress, gzip

which is a bit silly; for one thing, if the server isn’t willing to use any of the encodings then it’ll just respond with a normal uncompressed entity, so you don’t need identity. More importantly, listing deflate and compress before gzip is inadvisable; some servers interpret the order encodings are specified in as an order of preference, and lord knows the intersecting set of servers and clients that actually get deflate right is vanishingly small. So,

Accept-Encoding: gzip

seems more than sufficient as a default; you can always change it on the command line if you have to for testing. Full documentation is at GitHub; that said, once it’s installed, http --help will tell you everything you’d like to know.
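For instance, overriding it for a one-off test is just another Header:value argument (same hypothetical local server as above):

$ http GET http://localhost:8000/muppet/6 Accept-Encoding:deflate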

AfC

Testing RESTful APIs the hard way

RESTful APIs tend to be written for use by other programs, but sometimes you just want to do some testing from the command line. This has a surprising number of gotchas; using curl or wget is harder than it should be.

Wget

Wget is the old standby, right? Does everything you’d ever want it to. A bit of minor tweaking to get it not to blab about what it’s resolving (-q), and meanwhile telling it to just print the entity retrieved to stdout rather than saving it to a file (-O -), is easy enough. Finally, I generally like to see the response headers from the server I’m talking to (-S) so as to check that caching and entity tags are being set correctly:

$ wget -S -q -O - http://server.example.com/resource/1
  HTTP/1.1 200 OK
  Transfer-Encoding: chunked
  Date: Fri, 27 Jul 2012 04:49:17 GMT
  Content-Type: text/plain
Hello world

So far so good.

The thing is, when doing RESTful work all you’re really doing is just exercising the HTTP spec, admittedly somewhat adroitly. So you need to be able to indicate things like the media type you’re looking for. Strangely, there’s no command line option offered by Wget for that; you have to specify the header manually:

$ wget -S -q --header "Accept: application/json" -O - http://server.example.com/resource/1
  HTTP/1.1 200 OK
  Date: Fri, 27 Jul 2012 04:55:50 GMT
  Cache-Control: max-age=42
  Content-Type: application/json
  Content-Length: 27
{
    "text": "Hello world"
}

Cumbersome, but that’s what we wanted. Great.

Now to update. This web service wants you to use HTTP PUT to change an existing resource. So we’ll just figure out how to do that. Reading the man page. Hm, nope; Wget is a downloader. Ok, that’s what it said it was, but I’d really come to think of it as a general purpose tool; it does support sending form data up in a POST request with its --post-file option. I figured PUT would just be lurking in a hidden corner. Silly me.
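For completeness, the POST support that is there looks something like this (same example server as above; --post-data is the inline sibling of --post-file):

$ wget -S -q -O - --post-data "name=value" http://server.example.com/resource/1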

Curl

Ok, how about Curl? Doing a GET is dead easy. Turn on response headers for diagnostic purposes (-i), and Curl writes to stdout by default, so:

$ curl -i http://server.example.com/resource/1
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Fri, 27 Jul 2012 05:11:00 GMT
Content-Type: text/plain

Hello world

but yup, we’ve still got to mess around manually supplying the MIME type we want; at least the option (-H) is a bit tighter:

$ curl -i -H "Accept: application/json" http://server.example.com/resource/678
HTTP/1.1 200 OK
Date: Fri, 27 Jul 2012 05:12:32 GMT
Cache-Control: max-age=42
Content-Type: application/json
Content-Length: 27

{
    "text": "Hello world"
}

Good start. Ok, what about our update? It’s not obvious from the curl man page, but to PUT data with curl, you have to manually specify the HTTP method to be used (-X) and then (it turns out) you use the same -d parameter as you would if you were transmitting with POST:

$ curl -X PUT -d name=value http://server.example.com/resource/1
$

That’s nice, except that when you’re PUTting you generally are not sending “application/x-www-form-urlencoded” name/value pairs; you’re sending actual content. You can tell Curl to pull from a file:

$ curl -X PUT -d @filename.data http://server.example.com/resource/1

or (at last) from stdin, like you’d actually expect of a proper command line program:

$ curl -X PUT -d @- http://server.example.com/resource/1
You are here.
And then you aren't.
^D
$

That was great, except that I found all my newlines getting stripped! I looked in the database, and the content was:

You are here. And then you aren't.

Bah.

After writing some tests server-side to make sure it wasn’t my code or the datastore at fault, along with finally resorting to hexdump -C to find out what was going on, I finally discovered that my trusty \n weren’t being stripped, they were being converted to \r. Yes, that’s right, mind-numbingly, Curl performs newline conversion by default. Why oh why would it do that?

Anyway, it turns out that -d is short for --data-ascii; the workaround is to use --data-binary:

$ curl -X PUT --data-binary @- http://server.example.com/resource/1

“oh,” he says, underwhelmed. But it gets better; for reasons I don’t yet understand, Curl gets confused by EOF (as indicated by typing Ctrl+D in the terminal). Not sure what’s up with that, but trusty 40-year-old cat knows what to do, so use it as a front end:

$ cat | curl -X PUT --data-binary @- http://server.example.com/resource/1
Goodbye
^D
$

The other thing missing is the MIME type you’re sending; if for example you’re sending a representation in JSON, you’ll need a header saying so:

$ cat |
    curl -X PUT --data-binary @- \
    -H "Content-Type: application/json" http://server.example.com/resource/1
{
    "text": "Goodbye cruel world"
}
^D
$

which is all a bit tedious. Needless to say I’ve stuck that in a shell script called (with utmost respect to libwww-perl) PUT, taking content type as an argument:

$ ./PUT "application/json" http://localhost:8000/resource/1 < filename.json
HTTP/1.1 204 Updated
Server: Snap/0.9.1
Date: Fri, 27 Jul 2012 04:53:53 GMT
Cache-Control: no-cache
Content-Length: 0
$

Ah, that’s more like it.
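The script itself isn’t reproduced in the post, but a minimal sketch of such a wrapper, assuming it does nothing more than front-end the curl invocation above, might be:

    #!/bin/sh
    # PUT <content-type> <url>, with the body read from stdin.
    TYPE="$1"
    URL="$2"

    # cat front-ends curl to sidestep the EOF confusion noted above;
    # -i shows the response headers.
    cat | curl -i -X PUT --data-binary @- -H "Content-Type: $TYPE" "$URL"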

AfC

foss.in is back

Choose one

I first visited India in 2004 for what was then called “Linux Bangalore”. I had put in three talk proposals: a business-y one about the frictions encountered deploying open source in the enterprise, a social policy one about our experiences running linux.conf.au and the Linux Australia organization, and a technical tutorial about rapid application development in GNOME (didn’t everyone give a Glade tutorial in those years?). I put in three talks because I didn’t really quite know what they were focusing on and (assuming that they’d want me at all) wanted to offer them some options.

Silly me. They chose all three. :)

Eek!

And thus began my adventure with the conference that in subsequent years was renamed to “foss.in” and despite many evolutions, has remained one of the most fascinating and engaging conferences I’ve been able to attend.

FOSS.IN

Going to a country with a different culture can be confronting; I have not been better treated by any group of organizers than the outstanding way they took care of their speakers at that first conference.

One of the amazing things about the conference is that it has evolved. That’s not always easy; you cheer when it grows along a path you support, but then you are shocked when some of the people involved go and do something that seems incompatible with what you thought the conference was about. That’s pretty common with any endeavour, though it’s amazing how wrapped up we sometimes get about such things.

Burn out & back

The foss.in organizers have made the classic mistake of being good at what they do, and they’ve burned out a few times. But there’s no better indication that they’re open source people for-real: they can’t let it go. After a hiatus of a few years, the conference is back!

I have no idea whether I’ll be able to go this year or not; the conference is well known now and there are a lot more domestic speakers vying for slots. And that’s excellent; one of the driving motivations of the conference when it was first founded was to promote Linux and Open Source within India. Not just getting people to use it, but encouraging people to create open source technology too.

It’s some years later now, and I’m hoping a whole new generation of people can join the community at foss.in. Whether you’re involved in Linux, contributing to an already established open source project, or (most importantly!) working away on your own without realizing there are like-minded people out there, I can think of nothing better to help you further your cause than spending some time with this amazing crowd. You don’t have to be a speaker; open source isn’t only about writing code, and like many community-run events it depends on its enthusiastic volunteers; if you’ve got some time and are interested, I’m sure they would appreciate your help this year.

AfC

Update
The CfP is out.

Complaining about GNOME is a new national sport

Bloody hell. GNOME hackers, can someone sit Linus down and get him sorted so he stops whinging?

I mean, we all know GNOME 3 and its Shell have some rough edges. Given that computer users since the stone age have been averse to change, it’s not surprising that people complained about GNOME 3.x being different than GNOME 2.x (actually, more to the point, being different than Windows 95. How dare they?). Even though we believe in what we’re doing, we’re up against it for having shipped a desktop that imposes workflow and user experience changes on the aforementioned change-averse hypothetical user.

BC comic strip

Surely, however, the negative PR impact of Linus constantly complaining about how he’s having such a hard time using GNOME exceeds what it would cost the GNOME Foundation to send somebody over to the Linux Foundation to help him out? Oh well, too late now.

Meanwhile, I certainly do agree that extensions.gnome.org is completely useless if the first thing you see on 3.4 release day is “your web browser version isn’t new enough”. It’s not just Fedora; running Epiphany 3.4 here on a current Ubuntu system, same problem. On the other hand, if you add ppa:gnome3-team/gnome3 and ppa:webupd8team/gnome3 to a system running the current Ubuntu release you can completely ditch Unity. You get an up-to-date GNOME 3.4 that works great, and thanks to webupd8 packaging extensions, you get a fair degree of customization over the experience.

Yeah, there is still lots of room for improvement, and of course there are design decisions that make you scratch your head, but come on, it’s not all bad.

AfC

Upgrading to Precise

The latest release of Ubuntu, version 12.04 aka Precise, has a lot of updates we’ve been waiting on for a while — GNOME 3.4, Haskell 7.4.1, and a huge stack of bugfixes. On the desktop side, quite a number of Linux kernel vs X video modes vs suspend glitches have gone away. That’s fantastic. During most of Oneiric, my laptop was freezing and needing a hard reset at least once a day. Tedious. So I’m quite pleased to report that running Precise, Linux 3.2, gdm, and GNOME 3.4, things are vastly more stable.

Getting upgraded to Precise, however, has not been a pleasant experience.

First we’ve had unattended-upgrades overwriting any configuration stating “no automatic upgrades”. The number of non-technical friends who were set to “security updates only” calling in wondering why a “big upgrade” happened and now their computers don’t work has been staggering. Needless to say we nuked unattended-upgrades from all of our systems in a hurry, but for those people it was already too late.
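Getting rid of it is at least a one-liner (the package name as shipped in Precise):

$ sudo apt-get purge unattended-upgrades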

Several desktop upgrades failed half-way through because dpkg suddenly had unresolved symbol errors. Fortunately I was able to work out the missing library binary and manually copy it in from another machine, which was enough to get the package system working. Hardly auspicious.

Server side was fraught with difficulty. You cannot yet upgrade from Lucid to Precise. It breaks horribly.

E: Could not perform immediate configuration on 'python-minimal'. Please
see man 5 apt.conf under APT::Immediate-Configure for details. (2)

Brutal. I tried working around it on one system by manually using dpkg, but that just led me into recursive dependency hell:

# cd /var/cache/apt/archives
# dpkg -r libc6-i686
# dpkg -i libc6_2.15-0ubuntu10_i386.deb
# dpkg -i libc-bin_2.15-0ubuntu10_i386.deb
# dpkg -i multiarch-support_2.15-0ubuntu10_i386.deb
# dpkg -i xz-utils_5.1.1alpha+20110809-3_i386.deb
# dpkg -i liblzma5_5.1.1alpha+20110809-3_i386.deb
# dpkg -i dpkg_1.16.1.2ubuntu7_i386.deb
# apt-get dist-upgrade

Huh. That actually worked on one system. But not on another. Still slammed into the python-minimal failure. For that machine I couldn’t mess around, so I gave up and did a re-install from scratch. That’s not always feasible and certainly isn’t desirable; if I wanted to be blowing systems away all the time and re-installing them I’d be running Red Hat.

Anyway, I then located this bug about being unable to upgrade (what the hell kind of QA did these people do before “releasing”?) where, very helpfully, Stefano Rivera suggested a magic incantation that gets you past this:

# apt-get install -o APT::Immediate-Configure=false -f apt python-minimal
# apt-get dist-upgrade

(I had tried something very close to this, but didn’t think of doing both apt and python-minimal. Also, it hadn’t occurred to me to use -f. Ahh. For some reason one always sees apt-get -f install, not apt-get -f install whatever-package-name.)

Ta-da.

AfC

Using inotify to trigger builds

Having switched from Eclipse (which nicely takes care of building your project for you) to working in gVim (which does nothing of the sort), it’s a bit tedious to have to keep switching from the editor’s window to a terminal in the right directory to whack Up then Enter to run make again.

I knew about inotify, a capability in the Linux kernel to watch files for changes, but I hadn’t realized there was a way to use it from the command line. Turns out there is! Erik de Castro Lopo pointed me to a program called inotifywatch that he was using in a little shell script to build his Haskell code. Erik had it set up to run make if one of the files he’d listed on the script’s command line changed.
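I haven’t reproduced Erik’s script, but the shape of it is presumably something along these lines (a guess on my part, using inotifywait from the same package):

    #!/bin/sh
    # Rebuild whenever one of the files named on the command line changes.
    while true; do
        inotifywait -e modify "$@"
        make
    done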

Saving isn’t what you think it is

I wanted to see if I could expand the scope in a few ways. For one thing, if you had inotifywatch running on a defined list of files and you created a new source file, it wouldn’t trigger a build because it wasn’t being watched by inotify. So I had a poke at Erik’s script.

Testing showed that the script was working, but not quite for the reason that we thought. It was watching for the ‘modify’ event, but actually catching a non-zero exit code. That’s strange; I was expecting a normal 0 not error 1. Turns out 1 is the exit code in situations when the file you were watching is deleted. Huh? All that I did was save a file!

Of course, that’s not what many programs do when saving. To avoid the risk of destroying your original file in the event of having an I/O error when overwriting it, most editors do NOT modify your file in place; they write a new copy and then atomically rename it over the original. Other programs move your original to a backup name and write a new file. Either way, you’ve usually got a new inode. Most of the time.

And that makes the exact event(s) to listen for tricky to pin down. At first glance the ‘modify’ one seemed a reasonable choice, but as we’ve just seen that turns out not to be much use, and meanwhile you end up triggering due to changes made to transient garbage like Vim’s swap files, which is certainly not what you want triggering a build. Given that a new file is being made, I then tried watching for the ‘create’ event, but it’s overrun with noise too. Finally, you want touching a file to result in a rebuild, and that doesn’t involve a ‘create’ event.

It turns out that saving (however done) and touching a file have in common that at some point in the sequence your file (or its backup) will be opened and then closed for writing. inotify has a ‘close_write’ event (complementing ‘close_nowrite’), so that’s the one to watch for.

If you want to experiment with figuring all this yourself, try doing:

$ inotifywait -m .

and then use your editor and build tools as usual. It’s pretty interesting. The inotifywait(1) program is part of the 'inotify-tools' package on Debian-based Linux systems.
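To narrow the stream down to just the event in question, something like this does the trick (the exclude pattern for Vim swap files is merely an example):

$ inotifywait -m -e close_write --exclude '\.swp$' .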

Resurrection, Tron style

Triggering a build automatically is brilliant, but only half the equation; inevitably you want to run the program after building it. It gets harder; if the thing you’re hacking on is a service then, having run it, you’ve got to kill it off and restart it to find out if your code change fixed the problem. How many times have you been frustrated that your bugfix hasn’t taken only to realize you’ve forgotten to restart the thing you’re testing? Running the server manually in yet another terminal window and then killing it and restarting it — over and over — is quite a pain. So why not have that triggered as a result of the inotify driven build as well?

Managing concurrent tasks is harder than it should be. Bash has “job control”, of course, and we’re well used to using it in interactive terminals:

$ ./program
^Z
$

$ bg
[1] 13796
$ jobs
[1] Running    ./program
$

$ kill %1
[1] Terminated ./program
$

It’s one thing to run something and then abort it, but it’s another thing entirely to have a script that runs it and then kills it off in reaction to a subsequent event. Job control is lovely when you’re interactive but for various reasons is problematic to use in a shell script (though, if you really want to, see set -m). You can keep it simple, however: assuming for a moment you have just one program that needs running to test whatever-it-is-you’re-working-on, you can simply capture the process id and use that:

    #!/bin/sh

    ./program &
    PID="$!"

Then you can later, in response to whatever stimuli, do:

    kill $PID

Said stimulus is, of course, our blocking call to inotifywait, returning because a file has been saved.

GOTO 10

Do a build. If it succeeds, run the specified program in the background then block waiting for an inotify ‘close_write’ event. When that happens, kill the program and loop back to the beginning. Easy, right? Sure. That’s why it took me all day.
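In shell terms the shape of that loop is roughly the following; this is a sketch only, not the actual script, which does proper argument handling and is rather more careful about what it watches:

    #!/bin/sh
    # Sketch: build, run in the background, wait for a save, kill, repeat.
    PROGRAM="$1"
    PID=""

    while true; do
        if make; then
            "$PROGRAM" &
            PID="$!"
        fi
        inotifywait -q -r -e close_write .
        if [ -n "$PID" ]; then
            kill "$PID" 2>/dev/null
            PID=""
        fi
    done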

I called it inotifymake. Usage is simple; throw it in ~/bin then:

$ cd ~/src/project/branch/
$ inotifymake -- ./program

make does its thing, building program as its result
program runs
waiting…

change detected!

kill program
make does its thing, build failed :(
waiting…

change detected!

make does its thing, rebuilding program as a result
program runs
waiting…

the nice part being that the server or whatever isn’t running in the middle when the build is borked; http://localhost:8000/ isn’t answering. Yeah.

[The “--” back there separates make targets from the name of the executable you’re running, so you can do inotifymake test or inotifymake test -- ./program if you like]

So yes, that was a lot of effort for not a lot of script, but this is something I’ve wanted for a long time, and seems pretty good so far. I’m sure it could be improved; frankly if it needed to be any more rigorous I’d rewrite it as a proper C program, but in the mean time, this will do. Feedback welcome if you’ve got any ideas; branch is there if you want it.

AfC

Taming gVim on a GNOME Desktop

I love using GEdit to write text documents like blog posts marked up in Markdown. I’ve been using it extensively to write technical documentation for a while now. It’s a lovely text file editor.

Programming with GEdit is a bit more subject to critique. Sure, source code is text files, but as programmers we expect a fair bit of our editor. I’ve been writing my C code in vi for like 25 years. When I returned to Java after a few years in dot-com land I (with great scepticism) began using Eclipse. IDEs are for wimps, after all. Boy was I wrong about that. Once I got the hang of its demented way of doing things, I was blown away by the acceleration that code completion, hierarchy navigation, and most of all context-appropriate popups of JavaDoc gave me. Hard to leave that behind.

Lately, however, I’ve been doing a lot of work with Haskell and JavaScript, and that ain’t happening in Eclipse. Since I use GEdit for so much else, I thought I’d give it a whirl for programming in other-than-Java. Didn’t work out so well. I mean, it’s almost there; I tried a bunch of plugins but it seems a bit of a crap shoot. To make development easier I’d certainly need e.g. ctags, but there was nothing packaged.

You’re probably asking why I’m not content to just use Vim from a terminal; after all, been doing so for years. I’ve begun to have a bit of a backlash against running applications in terminal windows; command lines are for doing Linux things, but once you run an editor or email client or whatever in a terminal then suddenly your productivity on the Desktop goes to hell; the whole premise of Alt+Tab (we won’t even talk about the bizarre GNOME Shell Alt+` business) is switching between applications but having both $ and programs in the same type of window blows that up.

Vim, however, has long had a GUI version called gVim, and when running it shows up as an independent application. So, for the hell of it, I gave it a try.

Cut and Paste

Immediately I went bananas because copy and paste didn’t work like they should. Yes this is vi; yank-yank, baby. But as gVim it’s also a GUI, and we’ve all pretty much learnt that if you’ve got a white canvas in front of you, Ctrl+C and Ctrl+V are going to work. So much so that I have gnome-terminal rejigged to make Ctrl+Shift+C result in SIGINT, leaving Ctrl+C for copy. Consistency in user interaction is everything.

There’s an entire page on the Vim Tips Wiki devoted to using y, d, and P to yank or delete then put. No kidding. But just as I was about to give up I found, buried at the bottom, advice to add:

    source $VIMRUNTIME/mswin.vim

to your .vimrc. The script effects some behaviour changes which, among other things, make selection, cut, copy, and paste work like a normal GtkTextView widget. Hooray!

As for the rest of the GUI, I’ve gone to a lot of trouble to tame it. You need to make quite a number of changes to gVim’s default GUI settings; it’s all a bit cumbersome, and the gVim menus (which at first seem like such a great idea!) don’t actually give you much help. Figuring these customizations out took a lot of wading through the wiki and, worse, the voluminous internal help documentation; it’s pretty arcane.

Cursor

In particular, I want an unblinking vertical bar as a cursor (to match the desktop-wide setting properly picked up by every other GNOME app; here I had to force it manually):

    set guicursor=a:ver1,a:blinkon0

See the documentation for 'guicursor' for other possibilities.

Mouse pointer

Also for consistency, I needed to get standard behaviour for the mouse pointer (in particular, it’s supposed to be an I-beam when over text; the ability to change the pointer depending on which mode you’re in is interesting, but it is jarring when compared to, well, everything else on the Desktop):

    set mouseshape=n:beam,ve:beam,sd:updown

The documentation for 'mouseshape' describes enough permutations to keep even the most discerning customization freak happy.

Window dressing

The remaining settings are certainly personal, but you’ll want to pick a default window size that makes decent use of your screen real estate:

    if has("gui_running")
        set lines=45 columns=95
    endif

You need to guard with that if block because otherwise running vim from your command line will resize your terminal (!) and that’s annoying.

You can set the editor font from the menu, but to make it stick across program invocations you need it in .vimrc as shown here; finally, the 'guioptions' at the end disables tearoff menus (t) and turns off the toolbar (T):

    set guifont=DejaVu\ Sans\ Mono\ 11
    set guioptions-=tT

Syntax colouring

I normally use Vim in a terminal with a black background, but for some reason I don’t much like the colour set chosen. Forcing the change to 'light' makes for a nicely different set, but since I run gVim with a white background to be consistent with other GUI programs, I had to do a bit of tweaking to the colours used for syntax highlighting:

    set background=light
    highlight Constant ctermfg=Blue guifg=DarkBlue
    highlight String ctermfg=Blue cterm=bold guifg=DarkBlue gui=bold
    highlight Comment ctermfg=Grey guifg=DarkGrey

Hopefully that does it. As I said, the new Vim wiki is full of an insane amount of information, but since Vim is so powerful it can, ironically, be hard to find what you need to fine tune things just the way you want them. So if you’re a power Vim or gVim user, why don’t you blog about your .vimrc settings?

Still not sure if using gVim is going to be a good idea; the fact that despite all this hackery the editor canvas is still not a GNOME app with behaviour matching the standards of the entire rest of the Desktop is going to make me crazy. Hopefully I can keep things straight in my head; frankly I’d rather be using GEdit but it is nice to have consistency between using Vim to do sysadmin work and using gVim to write code.

AfC

My sound hardware didn’t vanish, honest

I’ve been having intermittent problems with sound not working. Usually restarting (ie, killing) PulseAudio has done the trick but today it was even worse; the sound hardware mysteriously vanished from the Sound Settings capplet. Bog knows what’s up with that, but buried in “Sound Troubleshooting” I found “Getting ALSA to work after suspend / hibernate” which contains this nugget:

The alsa “force-reload” command will kill all running programs using the sound driver so the driver itself is able to be restarted.

Huh. Didn’t know about that one. But seems reasonable, and sure enough,

$ /sbin/alsa force-reload

did the trick.

That wiki page goes on to detail adding a script to /etc/pm/sleep.d to carry this out after every resume. That seems excessive; I know that sometimes drivers don’t work or hardware doesn’t reset after the computer has been suspended or hibernated, but in my case the behaviour is only intermittent, and seems related to having docked (or not), having used an external USB headphone (or not), and having played something with Flash (which seems to circumvent PulseAudio. Bad). Anyway, one certainly doesn’t want to kill all one’s audio-using programs just because you suspended! But as a workaround for whatever it is that’s wrong today, nice.

AfC