Of white hats and black hats

Paul Drain is a security professional with considerable experience reviewing, patching and packaging the Linux kernel for deployment, having done so for Red Hat for many years. He specialises in making sense of unfamiliar code and troubleshooting deployment problems.

Contact: Twitter @onepercentfunk · Google Plus +Paul Drain · Email 0x691A36C8 · RSS /paul

PSA: mod_security v2 v. the Flash Uploader in WordPress

Recently, I’ve seen a lot of:

<IfModule mod_security.c>
<Files async-upload.php>
 SecFilterEngine Off
 SecFilterScanPOST Off
</Files>
</IfModule>

offered as the solution to “My Flash Upload option no longer works with WordPress”. Of course, if you’re using version 2 of mod_security, then as per the Migration Matrix page, the correct way to completely disable mod_security for the Flash uploader is:

<IfModule mod_security2.c>
<Files async-upload.php>
 SecRuleEngine Off
 SecRequestBodyAccess Off
 SecResponseBodyAccess Off
</Files>
</IfModule>

It would appear there’s not a lot of mod_security v2 information as it relates to WordPress — and given issues with the handling of async-upload.php have recently started appearing on the interschnitzel, I thought I’d put this here in case it’s of assistance to anyone else.
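(If you want to check the change has taken effect, a POST along the following lines should now reach WordPress rather than being rejected by mod_security. The URL, test.jpg and the form field name here are all illustrative, so adjust to taste:

$ curl -s -o /dev/null -w '%{http_code}\n' -F 'async-upload=@test.jpg' http://example.com/wp-admin/async-upload.php

A 4xx from WordPress itself, such as a login or permissions error, is fine; the point is that the POST body reaches PHP instead of drawing a 403/406 from the filter engine.)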

Ubuntu 12.04 LTS v. the Acer Aspire 57xx Optimus Technology

The first two candidates for upgrading to Ubuntu 12.04 were my Acer Aspire Laptops, both previously running a quite heavily customised version of 10.04.4, with numerous PPAs (ubuntu-audio-dev and ubuntu-x-swat being the main ones relating to hardware).

People often ask me at trade shows & conferences, “Why do you use 10.04.x, why not 11.10, or [insert-release-here-of-something-else]?” — and they usually get the response “because I need them to work”, which is not a criticism of the quality of more recent releases, more a “that’s what LTSes are for” kind of response.

So, 12.04 being the +1 for LTSes, I decided to upgrade — I’d been itching to try out GNOME 3.4 on more modern hardware — and the improvements to Unity (Ubuntu’s own desktop environment) sounded quite promising.

Which wasn’t as difficult as it previously had been, I’m happy to point out — except that both my 5750G and 5742G have nVidia “Optimus” technology powering their graphics, which meant I was presented with the following screen after my install:

[Screenshot: yes, it's a black screen -- in real life, it has a flashing cursor :)]

So, an ALT-F’x’ — a login, a sudo and a couple of add-apt-repository commands later and:

[Screenshot: the standard desktop after the Bumblebee packages were added to Precise. Um, that looks a lot better -- to change the resolution, optirun & nvidia-settings work a treat.]

and, to make sure the nVidia GPU actually does something:

#  optirun nvidia-settings -c :8
[Screenshot: a terminal window running optirun on the :8 console. Using nvidia-settings via optirun allows you to alter Power Management and Display Size settings.]

So, the magic to make this work is:

# add-apt-repository ppa:bumblebee/stable
# apt-get update
# apt-get -f install bumblebee bumblebee-nvidia

Now, if you’re using the standard Ubuntu packages, you should be able to restart your machine and you’ll be back up and running — however, my 5750G and Precise’s nVidia packages didn’t play nicely, so I upgraded to the nvidia-current-updates package and rebooted, and then everything ran according to plan.
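(To confirm the discrete GPU is actually doing the rendering, and assuming the mesa-utils package is installed so glxinfo is available, compare:

$ glxinfo | grep -i "opengl renderer"
$ optirun glxinfo | grep -i "opengl renderer"

The first should name the Intel chip, the second the nVidia one.)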

For some reason — maybe because there were some 200 updates to do on the 5742G — the post-installation step of the package didn’t add my user to the bumblebee group, so I had to then do:

# usermod -a -G bumblebee paul

(where paul is obviously whatever your username is :) )

… then logged out and back in, and the optirun-based commands worked perfectly.
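(Once you’ve logged back in, a quick way to confirm the change took:

$ id -nG | grep -w bumblebee

If that prints nothing, the new group membership hasn’t been picked up yet.)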

Not Unified: Removing Unity from Ubuntu 12.04 LTS

Before I begin: if you would just like to remove “as much of Unity as possible” without reading my personal upgrade woes — please skip to the end :)

So, I’ve almost succeeded in upgrading most of my personal boxes from Ubuntu 11.10 and 10.04.4 to the new Ubuntu release.

Oh, by almost and upgrading — I mean “back up to an external drive, re-format, re-install, set up the new configuration via a script, then copy everything back from the external drive” — but that’s just me, having used Linux in one way or another since 1993 and having seen ‘upgrades’ fail in a wide variety of funny (or not so funny) ways.

I should point out that I did try upgrading this time around — one server and one desktop (out of 9 machines and several VMs on those machines) — just to see what would happen, and, well, let’s just say Andrew covered things a lot better than I did.

Now, unlike Andrew — who seems to enjoy building things from the bottom up — I prefer to install whatever the distribution gives me, then tweak it to do what I need in order to get things working, which meant I was presented with a brand new Unity desktop.

Well, I wasn’t at first — that’s actually covered in another article — as the two machines I did first both have hybrid graphics systems by nVidia referred to as “Optimus”, which pair a discrete nVidia GPU with an integrated Intel one — so what I actually got was, um, not much, actually — but I’ll save you the read here and move on with the purpose of this post …

Skipping forward 9 or so hours:

I didn’t like what I saw.

To be fair, I’m not a one-desktop-to-rule-them-all purist either — as a firm believer in “just use the best thing for the job”, I’ve used, developed, packaged, advocated and supported desktops all over my F/OSS life: CDE, KDE (1, 3), GNOME (0.x -> 3.0), Openbox, GNUstep; I used to tweak FVWM & TWM in the real early days — and before TWM, it was OS/2 and doing cool stuff with it.

But I digress…

I tried using it — for three whole days I tried to use it as my default desktop, which comprises approximately a VM, 12 or so terminal windows, a web browser and a music player — plus or minus a Jabber client.

I added “lenses”, had crashes.

I removed items, then added new ones to the dock, had crashes.

I fired up the first set of Terminals I usually used, then used the Window switcher to move to another virtual desktop, had crashes.

I tried Firefox and couldn’t see a global menu as suggested; instead, I saw the outside of the menu (aka. a 1px border) with speckled contents — then found DRM & hangcheck errors in dmesg from the Intel-based GPU in the machine.

Slept on it:

Tried using LibreOffice to write up a document and examine a spreadsheet, and found everything too cramped — then used the interschnitzel to discover LibreOffice doesn’t actually support the global menu out of the box … so I did the requisite:

# apt-get install lo-menubar

and tried again, only to discover the global menu had wandered off like it had in Firefox and the messages in dmesg had returned.

Tried Ubuntu-2D, which didn’t start — it instead popped up a box saying my X configuration was broken.

[Screenshot: the "Broken X Configuration" dialogue -- "This is what it sounds like, when Paul cries."]

But in all seriousness, what does an "everyday Ubuntu user" do when presented with this?

Needless to say, I clicked the Reconfigure Graphics option, which presented me with a black screen (ie. X crashed back to TTY1). A reboot later, I tried the Run in low-graphics mode for just one session option, which also presented me with a black, blank screen. So, being somewhat used to X crashes caused by broken setups in days gone by, I went looking for my X configuration — then for any reference to xorg.conf — then for any help on the subject — then proceeded to swear, quite a bit.

… moving along …

I thought, “I’m over this, I wonder if GNOME Shell works on this setup?”

# add-apt-repository ppa:gnome3-team/gnome3 && apt-get update
# apt-get -f install gnome-shell gnome-tweak-tool gnome-session-fallback

The first two packages give you GNOME Shell and gnome-tweak-tool for customising your setup; gnome-session-fallback installs Ubuntu’s “classic GNOME desktop” — just in case.

Minutes later, I was in GNOME 3.4, in my browser, answering my mail.

Moments later, hangcheck bit me again, forcing a reboot, but as I said earlier, in my case, that’s a different issue.

So, having rebooted and selected GNOME as my default session from the greeter, I was ready to use my machine — and I thought:

If I’m never going to use Unity, because it doesn’t actually appear to work, why don’t I get rid of it?

Turns out, that’s a little harder than it looks — there are numerous examples of ways to do it on the interschnitzel, but all of them ended up either:

  1. Breaking the Desktop (either by claiming X was broken and things were running in low-resolution mode, or by experiencing crashes in software that was linked to the Unity libraries themselves).

  2. Removing Applications you might actually need (including, for example, Nautilus and GNOME Shell itself, as well as Brasero, Rhythmbox, Totem & other GNOME-y related goodness) because they’ve been linked with Unity libraries at either the packaging level, buildtime, or runtime.

(yes, 2 == 1, when you think about it).

In addition, a number of the sites I found recommend getting rid of the indicator-applets packages — which breaks the fallback session quite severely too, so we don’t want to do that either.

The Juice Is Loose

So, without further delay — aside from saying: running these commands, as they are written here, in the order I have them listed, works for me and has been well tested on my 12.04 LTS installs, but if they hose your desktop, delete your information, or eat your dog, I take no responsibility for any of it — operator beware and all that:

Log out of Unity, ALT-Fx to your favourite TTY, log in, sudo to root and run:

# apt-get --yes purge unity unity-2d unity-2d-places unity-2d-panel unity-2d-spread 
# apt-get --yes purge unity-asset-pool unity-services unity-lens-* unity-scope-*
# apt-get --yes purge liboverlay-scrollbar*
# apt-get --yes purge appmenu-gtk appmenu-gtk3 appmenu-qt
# apt-get --yes purge firefox-globalmenu thunderbird-globalmenu
# apt-get --yes purge unity-2d-common unity-common
# apt-get --yes purge libunity-misc4 libunity-core-5*

To be fair, the commands above don’t remove all of Unity and friends — there are parts you need for various things:

If you remove:

  • unity-greeter — X will fail to start, claiming you are running in Low Resolution mode and telling you to fix your configuration, troubleshoot, or log in to a console instead. I thought this might be because of the Optimus setup on my primary machine, but it can be reproduced in VirtualBox and on another physical desktop where 3D actually works — investigation into this is ongoing.

  • indicator-* — the Classic GNOME Desktop will break — the indicator applets on the top right-hand side will be removed, but clicking on the locations where they should have been will cause your session to crash.

(This list will be updated with any others I find, or people e-mail/tweet me about, but as of 21-05-2012 is correct.)

When you return, no Unity options should be present in your session options — and once you’ve chosen your new environment of choice, you will be Unity-less, and all of the GNOME-based applications on your desktop should continue to operate correctly too.
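(For the suspicious, a quick sanity check that nothing Unity-flavoured survived the purge:

# dpkg -l | grep -i unity

Anything still listed should be one of the packages deliberately kept, unity-greeter for instance, or a candidate for closer inspection.)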

DDRescue Survival Mode

note: This post is more for my own future reference than anything else, but I figure it might help others out in a jam, so I’m posting it here — Paul.

Recently, I was asked to attempt to recover an NTFS-based drive that had developed the “Click Of Death” — in a laptop that moves around a bit, such a thing is not uncommon, but I always forget the ddrescue invocations that work ‘most reliably’ for me when I’m on a remote machine, so I’m documenting them for completeness.

Firstly, back up the MBR / partition table (really, really useful on NTFS-based machines that fail):

dd if=/dev/sdX of=/media/working-drive/mbr.code bs=512 count=1
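(Not part of the original recipe, but a habit of mine: if sfdisk is available, keep a human-readable copy of the partition layout as well:

sfdisk -d /dev/sdX > /media/working-drive/partition-table.sfdisk

It can later be replayed onto the new drive with sfdisk /dev/sdY < partition-table.sfdisk.)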

Then, presuming the destination drive is as large as, or larger than, the source one, run:

# pass one: grab everything that reads cleanly, without splitting the error areas
ddrescue --no-split /dev/sdX /media/working-drive/backup_cdrive.img /media/working-drive/backup_cdrive.log

# pass two: go back over the bad areas with direct disc access, retrying each up to 9 times
ddrescue --direct --preallocate --max-retries=9 /dev/sdX /media/working-drive/backup_cdrive.img /media/working-drive/backup_cdrive.log

# pass three: re-mark the failed blocks as untrimmed and try them one last time
ddrescue --direct --preallocate --retrim --max-retries=9 /dev/sdX /media/working-drive/backup_cdrive.img /media/working-drive/backup_cdrive.log

Then, when you’ve checked your images for errors with a tool like ‘testdisk’ or ‘sectrecover’ or any commercial one you may have on hand, the recovery process is:

  1. Partition the new drive.
  2. Restore Images
  3. Run: dd if=/media/working-drive/mbr.code of=/dev/sdY bs=446 count=1

“Restore Images” in this case, can be:

  1. something physical, like: e2fsck -f /media/working-drive/backup_cdrive.img && dd if=/media/working-drive/backup_cdrive.img of=/dev/sdY[1-100]
  2. or, something virtual, like: e2fsck -f /media/working-drive/backup_cdrive.img && VBoxManage convertfromraw backup_cdrive.img backup_cdrive.vdi --format VDI

(substituting the appropriate filesystem checker for e2fsck — ntfsfix, say — if the image holds an NTFS filesystem, as it did in this case)

(And, for those wondering — the 446-byte copy is because the new drive is probably not the same as the old one, so we do the partitioning manually and recover only the MBR boot code, not the whole sector — which consists of 446 bytes of boot code, a 64-byte partition table and a 2-byte signature.)

HTML Formatting, Blockquotes, Paragraphs & You

Since I’ve been blogging here, one thing has continually frustrated me about the WordPress interface — the fact that the editor will always, automatically, insert a br tag inside blockquote and code tags, making code, HTML fragments and other configuration examples rather annoying to post.

So, I went looking for a solution — as most of my posts here will have code examples :)

As it turns out, the wpautop() function documented in the WordPress Codex already has the ability to turn this behaviour off as part of its design — and because I didn’t want to get rid of the function altogether, it was easier to craft my own wrapper.

So, in functions.php — it’s a case of:

  1. Removing the existing filter.
  2. Adding our own filter function that passes false as the $br parameter.
  3. Adding our new filter.

Which looks like:

remove_filter( 'the_content', 'wpautop' );
remove_filter( 'the_excerpt', 'wpautop' );

function wpautop_fixed($str) {
 return wpautop($str, false);
}

add_filter( 'the_content', 'wpautop_fixed' );
add_filter( 'the_excerpt', 'wpautop_fixed' );

Problem solved — code lines don’t break anymore, and the amount of extra HTML I have to add to get the standard editor (or indeed, the uber-cool Markdown on Save Improved plugin we use here) to format things correctly is minimised.

Evolution, Databases, Grief.

Recently, Evolution on my Ubuntu Oneiric desktop popped up a dialogue stating:

Database Disk Image Is Malformed

Which caused it to not index anything in any of the folders I had listed in my IMAP setup — restarting, using evolution --force-shutdown and various other solutions found on the interschnitzel had no effect; however, a slightly modified version of this page worked a treat.

Slightly modified, because Evolution 3.x and beyond on Ubuntu uses ~/.local/share/evolution/mail for its mail storage — so the correct sequence of events to fix this problem became:

sudo apt-get -f install sqlite3

Then:

cd ~/.local/share/evolution/mail
for i in $(find . -name folders.db); do
  echo "Rebuilding Table $i";
  sqlite3 "$i" "pragma integrity_check;";
done

Which turned:

Rebuilding Table ./imap/paul@recovered-mail/folders.db
*** in database main ***
On tree page 11 cell 0: 2nd reference to page 173
On tree page 11 cell 1: 2nd reference to page 174
On tree page 11 cell 2: 2nd reference to page 450
On tree page 11 cell 3: 2nd reference to page 711
On tree page 11 cell 4: 2nd reference to page 924
On tree page 1060 cell 0: 2nd reference to page 805
On tree page 1060 cell 1: 2nd reference to page 849
On tree page 1060 cell 2: 2nd reference to page 921
On tree page 1060 cell 3: 2nd reference to page 851
On tree page 1060 cell 4: 2nd reference to page 911
On tree page 1060 cell 5: 2nd reference to page 850
On tree page 1060 cell 6: 2nd reference to page 848
Page 1067: btreeInitPage() returns error code 7
Page 1069: btreeInitPage() returns error code 11
Error: database disk image is malformed

Into:

Rebuilding Table ./imap/paul@recovered-mail/folders.db
ok

Of course, one needs to make sure the databases aren’t being used at the time — and, at least under Oneiric, evolution --force-shutdown tends to be a bit strange, so you might need to manually kill processes such as evolution-alarm-notify before starting this process.
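(And if integrity_check keeps reporting errors, the standard SQLite fallback, sketched here rather than something I needed this time, is to dump whatever is readable and rebuild the database from it:

cd ~/.local/share/evolution/mail/imap/paul@recovered-mail
sqlite3 folders.db ".dump" | sqlite3 folders-new.db
mv folders.db folders-broken.db && mv folders-new.db folders.db

This assumes the dump itself completes; Evolution should rebuild whatever summary data is still missing on its next start.)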

Upstream, Downstream and … What is it exactly?

Talking with Peter the other evening about kernel development teams (if you’ve been following along here throughout October, you’ll have seen that’s been the bulk of my month), we wondered:

“What is it called when you’re doing your [kernel] development outside of any sanctioned tree, but other developers with the same vein/idea are *also* taking ideas / code from your tree?”

In the Linux world, that’s not mainline — because that’s Linus’s domain — and, in the Google Android case, Android != Mainline, so it’s not Android / Google either.

Indeed, it’s not “upstream” either — as we’ve seen, Google often does not have changes that are made in external trees — there’s a reason a Qualcomm tree exists that specific vendors pull chipset changes from, for example.

It’s not “downstream” in that same vein — as individual products commonly have either differing hardware, or differing versions of the same code on a device-by-device basis.

So, the two of us got onto something else — and I suddenly thought:

Sidestream

and Peter & I mulled it over for a few sentences and thought, yes, that’s more like it — after all, “sidestream” implies:

Development done in parallel with versions upstream (Android Versions, in this case), but not included in upstream, not cherry-picked by upstream.

but also

Development used by downstream (Mobile Vendors, in this case) to provide updates and fixes to individual products — but changes to those files (by the vendors) are not necessarily sent back to either upstream or sidestream repositories.

Taking the Qualcomm example, changes are taken from there and cherry-picked into the ARM “mainline”, but new developments are used and tested there for a suitable amount of time before this happens.

Depending on when and if new Android kernel releases are frozen, the “upstream” code may not include these changes (for obvious reasons).

Vendors who require fixes for the frozen drivers in their “upstream” code, can then cherry-pick or take verbatim changes from the Qualcomm “sidestream” tree when required.

Thoughts? Could it catch on as a new buzzword for external kernel development? ;)

The choice of a fix?

As any Open Source enthusiast knows, our ecosystem is built using layers — there’s the kernel, the platform, then the application; each of these serves a clearly different purpose and, usually, parts at the bottom (the kernel) expose required parts of themselves to things further up the stack.

This, of course, provides different levels of tuning and optimisation — kernels have the ability to use /proc or sysfs to allow userspace tuning; the GNOME platform has things like dconf, gconf2 and gsettings to allow programs like gnome-tweak-tool to function for “power” users, as well as the standard control panel for “normal” users; and of course, individual programs allow settings to be customised via the Edit / Preferences menu.

TCP/IP, as part of this ecosystem, is no exception. There are numerous examples of how to configure the TCP/IP stack, from academia, research departments, distributions, systems integrators and individuals. Most, if not all, of these pages discuss using the sysctl program to control the /proc infrastructure in order to make changes to the TCP/IP stack — and this is the way it should be.
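(The two interfaces really are views of the same thing. For instance:

# sysctl net.ipv4.tcp_rmem
# cat /proc/sys/net/ipv4/tcp_rmem

Both print the same minimum, default and maximum receive-buffer sizes.)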

All of this is a long-winded lead-in to a seemingly innocuous issue I discovered while working with one of our teams on some mobile kernel analysis work recently — one which, following my last post here, made me wonder:

Why?

When comparing kernels across an n x k x t space, one-line changes that occur in a single tree usually stand out like a deer in the headlights — and this one typified the sort of question we all eventually had in the end.

Consider:

--- a/net/ipv4/tcp_output.c      2011-10-04 00:00:00.000000000 +0000
+++ b/net/ipv4/tcp_output.c      2011-03-25 00:00:00.000000000 +0000
@@ -243,6 +243,8 @@ void tcp_select_initial_window(int __spa
 	else if (*rcv_wnd > init_cwnd * mss)
 		*rcv_wnd = init_cwnd * mss;
 	}
+	/* Lock the initial TCP window size to 64K */
+	*rcv_wnd = 64240;
 
 	/* Set the clamp no higher than max representable value */
 	(*window_clamp) = min(65535U << (*rcv_wscale), *window_clamp);

Once again, Why?

Especially when:

/sbin/sysctl -w net.ipv4.tcp_rmem="64240 64240 [MAX]"

and:

/sbin/sysctl -w net.ipv4.tcp_wmem="64240 64240 [MAX]"

(Where tcp_{r/w}mem takes the minimum, default and maximum values respectively.)

Run from userspace (for example, from the init.rc file), or set by patching the Android sysctl.conf file, these would have done the same thing the code above does — but would have allowed tuning by product teams if required.
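(For completeness, the persistent form of the same thing, with [MAX] still being whatever the platform team decides, is a couple of lines in /etc/sysctl.conf or the Android equivalent:

net.ipv4.tcp_rmem = 64240 64240 [MAX]
net.ipv4.tcp_wmem = 64240 64240 [MAX]

These are applied at boot, or immediately via sysctl -p.)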

A good quote I saw during the course of my investigations into the Why question was:

There is an argument for not applying any optimisation to the TCP connection between a webserver and a mobile phone. The features of a TCP connection can only be negotiated when the initial connection is established. If you move from an area with 3G reception to an area only offering 2.5G, or vice versa, the optimisations you may have done for the 3G connection may cause terrible performance on the 2.5G connection, due to the differences in the network characteristics. Assuming that a connection will always be on the same type of network technology means that you could fall into the pitfall of premature optimisation.

Could such a fix have been made because the production team once again had an internal testing issue that was resolved by doing such a thing? Was it simply the easiest way of interoperating with difficult client operating systems such as Windows XP? Or was it done for another reason?

Indeed, it may have been done because often the kernel team in these situations is a completely different entity to those creating the platform — and asking the platform team to ‘tweak‘ these settings may have been more difficult than making a one-liner fix in the kernel.

Also, from viewing this one-line change, we do not know:

  • Whether Selective Acknowledgment (SACK) was enabled as part of this vendor’s platform code (a reading of most GPRS optimisation guides available on the web, including RFC 3481, suggests it should be).
  • Whether TCP/IP Window Scaling (RFC 1323) was switched on and supported by default.
  • Whether TCP ECN (RFC 3168) was switched on and supported by default.
  • Whether the cell towers (which actually do the grunt of the work, worldwide), as well as the intermediary networks these particular devices live on (which the vendor has no control over at all), have TCP/IP Header Compression (RFC 1144) turned OFF.

as well as a number of other things about this device and its networking functionality.

It’s the wrong part of the stack to “fix” this type of issue though, for sure — and we can tell that not just because we’ve (that’s the Operational Dynamics “we” here) got experience with the way this type of thing should be fixed, but because, looking back at our n x k x t space from before, no other team (even within the same organisation) chose to fix their particular device the same way.

The more I look at it, the more I think the use of the Think-aloud Protocol in “external” kernel development would be an interesting thing to investigate.

A tale of two mobile device TTYs.

Recently, I’ve been tinkering with mobile phone code — specifically Android-based mobile code. It’s had a dual purpose, but it’s also had an enlightening effect on exactly why homebrew mobile modding communities (such as the very well constructed CyanogenMod) actually exist.

One of the alterations I ran across while attempting to unbrick a friend’s HTC-based mobile phone was very interesting. (The friend is not a technical person; their warranty had expired, and a failed ‘Software Upgrade’ on a Win32-based workstation had caused their phone to cease being a phone.)

Consider the following code from HTC’s own htc_headset_mgr.c file (~ line 775 depending on your Android Kernel Revision):

static DEVICE_ACCESSORY_ATTR(tty, 0666, tty_flag_show, tty_flag_store);

htc_headset_mgr.c is, for want of a better description, the manager code for devices such as a wired headset; the code it controls (specifically, htc_headset.c) does not set permissions for the sysfs files or device nodes it creates — so presumably, that’s what this line does.

It should be noted that world-writable sysfs entries in device code are hardly new, nor is HTC the only vendor with these types of issues, and there are ongoing attempts to fix this from members of the Openwall project and others, some more successful than others — but with the gamut of drivers out there, some using DAC, some using capabilities and some using neither, that’s hardly surprising.

This driver is one that uses neither — presumably because it was written without consideration for ever appearing outside the HTC tree (ie. upstream), or because the developers didn’t consider capabilities worth their time to implement.

CyanogenMod fixes this with:

static DEVICE_ACCESSORY_ATTR(tty, 0644, tty_flag_show, tty_flag_store);

Which fixes the obvious incorrectness, but the question remains:

Why?

  • Why would the headset require rw+ access to the allocated TTY (and, presumably, to any other device this management module pertains to) in the first place?
  • Why does a development team, potentially shipping millions of devices featuring this driver worldwide, *not* implement _basic_ capabilities handling or, failing that, at least some form of DAC in their code?
  • Why does a homebrew modding community take the time to find and “fix” this?
  • Why does the upstream production team not apply these changes, after they’ve been found in the wild by another community?

It may have been an oversight, it may have been a hack to get a testcase to pass internally that was never corrected, it may have just been a production team’s time-crunch deadline (every development team has those, after all), or it could be a lack of suitable training and information.

Still, it did make me wonder exactly which other drivers are being shipped with consumer electronics that have the same issues — and, if all this stuff is being done outside sanctioned upstream trees — the kind of issues those of us looking in from the outside are just “not privileged enough” to see.
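(If you’re curious about your own device, a crude but effective way to hunt for similar issues, assuming you can get a shell on it and find is available, is:

# find /sys -type f -perm -0002 2>/dev/null

which lists every world-writable file the kernel has exposed under /sys.)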

Headers, and Lions, and Tigers, and Explorers (oh my.)

It seems a lot of my time of late is spent debugging, testing and fixing web technologies — mostly from a security perspective, or a performance one — but occasionally it’s more involved, helping our people fix issues at runtime in the wild.

So, when a Redmine installation starts looking miserable — a site that renders badly in Internet Explorer — and we debug it to the point where we can confidently point at IE’s “Compatibility Mode”, how do we fix this?

… and moreover, how do we fix this! (without the client needing to edit code to add the oft-used X-UA-Compatible meta string everywhere, in other words.)

It’s not that it’s hard, in reality — it just requires a little thinking from left field ;)

Sites like StackOverflow have a few examples of how to do this, but I’d like to suggest a better one, which uses the setenvif and headers modules and a neat little tweak to make things ‘just work(tm)’ on earlier browsers — and it’s both .htaccess and vhost compatible.

<IfModule mod_setenvif.c>
<IfModule mod_headers.c>
BrowserMatch \bMSIE\s[89] good-versions
BrowserMatch \bMSIE\s[67] bad-versions
Header set X-UA-Compatible "IE=IE9,IE=8" env=good-versions
Header set X-UA-Compatible "IE=EmulateIE7,chrome=1" env=bad-versions
</IfModule>
</IfModule>

Which says:

  • The BrowserMatch lines match user-agent strings for IE9/IE8 and for IE7/IE6 respectively, and apply the good-versions and bad-versions environment variables to them.
  • The X-UA-Compatible header is then set according to those variables, with degrading version numbering (ie. you apply from the highest supported version of IE down to the lowest, in that order).
  • Finally, the bad-versions line also appends Google Chrome Frame to the end of its header, so browsers like IE6 are asked if they would like to use that before your code resorts to browser-based CSS hacks and other IE-related workarounds.

The list of X-UA-Compatible tags is listed here.

If you need support for IE 10 / Windows 8 Development Editions, feel free to add something like:

BrowserMatch \bMSIE\s10 new-versions
Header set X-UA-Compatible "IE=Edge" env=new-versions

To your Apache configuration / .htaccess file in the relevant spots — that should keep you covered for future versions of the code :)
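(To check the headers are coming out as intended, with example.com standing in for your own host, something along these lines does the trick:

$ curl -sI -A 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)' http://example.com/ | grep X-UA-Compatible

For an IE6 user-agent, that should print the EmulateIE7 / chrome=1 variant.)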

