Bargate security overhaul

Bargate is a web application that lets users access their files on SMB/CIFS servers within the corporate network. It connects to SMB/CIFS servers on the user’s behalf and authenticates as the user too. To do this it needs their password every time the user loads a page and thus connects to the back-end SMB server.

The existing design

The existing security design of Bargate is predicated on the belief that the server should not be trusted to store the user’s password. If it stores the user’s password then any break-in to the server / web application could obtain the list of stored users’ passwords. Encrypting them, whilst making an attack slightly more difficult, doesn’t solve the underlying problem, since the application needs the decryption key stored alongside them in order to use them. An attacker could steal both in nearly all circumstances.

Bargate thus stores the password in the user’s session, which is client side (stored in cookies). The password is first encrypted using AES-256 in CFB mode, then put in the session, and the session is signed by itsdangerous before being put into a cookie for the user. The encryption/decryption key for the AES step is stored in the Bargate configuration.

The danger in this design is:

  • The encrypted password is sent across the network on every request (even if it is over SSL)
  • The encrypted password is stored in the cookie and thus on a myriad of end user devices, for perhaps up to 30 days (depending on session lifetime)
  • If an attacker gains access to the ENCRYPT_KEY (stored on the server) they can decrypt the password stored on any end user device, recovering the user’s actual password

This design was chosen, of course, because storing the password on the server, with or without encryption, is even worse. It would also mean any flaw in Bargate that allowed attackers to steal a user’s session would work without first having to compromise the end user’s device, as is the case today. Today, if there are any flaws like that in the code they are relatively innocuous, as the attacker won’t have the encrypted password and thus won’t be able to access any systems.

The new design

What we want to achieve is quite simple:

  • The Bargate server, if attacked, can’t be used to steal user passwords (i.e. don’t store users’ passwords in plain text, and don’t store them encrypted if the encryption key is known to the application)
  • The end user device, if attacked, can’t be used to get the user password or even the encrypted password
  • The user’s password or encrypted password should not be sent over the wire on every request, only at log on time

The password of course has to be stored somewhere, but it does not have to be stored in plain text, and the place where it is stored does not have to hold the encryption key. That is how it works today – it’s stored on the client, which doesn’t have the encryption key – but this has several downsides. Instead the new Bargate authentication system will store the password encrypted on the server, but encrypted with a key stored in the user’s session, thus reversing the design.

This means:

  • Passwords are no longer encrypted using the same encryption key for every user, each session has a unique encryption key.
  • The end user device does not store the password in any form, which allows the deploying company/group/user to focus on server security rather than end user device security (especially important in the age of BYOD).
  • Attacking the end user’s device gives the attacker no useful information. If you get access to the per-user/session encryption key stored on the client this key only decrypts an encrypted password the client never has and never will have.
  • The encrypted password is not sent over the network on each request
  • The decryption key sent over the network on each request is itself encrypted by a key known only by the server, so it is useless to an attacker eavesdropping on the connection (if they had broken TLS).

The new design in detail

  1. The end user logs into Bargate by sending their username and password over TLS
  2. Bargate checks the username and password via LDAP, Kerberos or “SMB”
  3. Bargate generates a 32-byte (256-bit) session encryption key for the user
  4. Bargate encrypts the user’s password using the session encryption key and stores it on the server (most likely in Redis with an expiration)
  5. Bargate encrypts the session encryption key using ENCRYPT_KEY (a Bargate config option) and stores it in the user’s session. Bargate does not retain the plaintext session encryption key anywhere.
  6. The user’s browser saves the encrypted decryption key in the browser’s cookie storage
  7. The user’s browser is redirected to view a file server
  8. The user’s browser presents the encrypted decryption key to the server as a cookie over TLS
  9. Bargate decrypts the decryption key using ENCRYPT_KEY
  10. Bargate uses the resulting decryption key to decrypt the password stored in Redis
  11. Bargate uses the decrypted password to authenticate to the SMB server on the user’s behalf
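The flow above can be sketched in a few lines of Python. Everything here is illustrative: the names are hypothetical, a dict stands in for Redis, and the keystream function is a toy stand-in for AES-256 (it is not a secure cipher). The point is only to show where each key lives.

```python
# Architectural sketch of the new login flow (hypothetical names).
import hashlib
import os

ENCRYPT_KEY = os.urandom(32)   # server-side config secret (Bargate's ENCRYPT_KEY)
server_store = {}              # stands in for Redis (real code would set an expiration)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256-derived keystream.
    Illustration only -- the real design uses AES-256."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def log_in(username: str, password: str) -> bytes:
    session_key = os.urandom(32)                     # step 3: per-session key
    # Step 4: only the *encrypted* password lands on the server.
    server_store[username] = keystream_xor(session_key, password.encode())
    # Step 5: only the *encrypted* session key goes back to the client cookie.
    return keystream_xor(ENCRYPT_KEY, session_key)

def password_for_request(username: str, cookie_blob: bytes) -> str:
    session_key = keystream_xor(ENCRYPT_KEY, cookie_blob)   # step 9
    # Step 10: decrypt the stored password, ready for the SMB connection.
    return keystream_xor(session_key, server_store[username]).decode()

cookie = log_in("alice", "hunter2")
assert password_for_request("alice", cookie) == "hunter2"
```

Note the properties this gives you: the server holds ciphertext it cannot decrypt alone, and the client holds a wrapped key that is useless without both ENCRYPT_KEY and the server-side ciphertext.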

Remaining attack vectors

There are two remaining attack vectors.

  • Session hijacking
    • An attacker can still take session cookies off a client and then use them. This threat is reduced with TLS and HttpOnly cookies, but an attacker could still get to them. This is a generic problem with web applications, however. Adding restrictions to lock sessions to an IP address is an option, but can be disruptive and is of limited benefit.
  • Attacker with access to both the server and client
    • If the attacker has compromised both ends then, well, the game is over anyway.

A replacement for nss_updatedb: nis2db

In 2011 the glibc project released version 2.15, which dropped support for using Berkeley DB based database files as a source of user/group information (well, any name service switch information). Instead the “db” backend of the name service switch (NSS) is now provided by a simple glibc-specific db file format.

This means the nss_updatedb tool, which I have used for years to provide network-free user/group information on Linux machines, no longer works on modern Linux systems. The tool generated BDB files that glibc’s nss db module simply does not support anymore. All Linux systems using glibc 2.15 or later are affected by this change.

To restore the functionality I need – pulling user and group information out of NIS and placing it in db files that glibc can read – I have written “nis2db”, a really simple Python script which reads from NIS and uses the “makedb” command shipped with glibc.
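The core of the idea fits in a few lines. This is a hedged sketch rather than the real nis2db code: the key/value format below (a “.name” key and an “=uid” key per passwd entry) matches what glibc’s own db Makefile generates for the passwd database, as far as I can tell, and the function names are mine.

```python
# Sketch: turn NIS passwd data into glibc "makedb" input (hypothetical names).
import subprocess

def passwd_to_makedb_input(lines):
    """Convert passwd(5)-format lines into makedb key/value input:
    one ".name" key and one "=uid" key per entry, each followed by the
    full original colon-delimited line."""
    out = []
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split(":")
        out.append(".%s %s" % (fields[0], line))   # lookup by user name
        out.append("=%s %s" % (fields[2], line))   # lookup by numeric uid
    return "\n".join(out) + "\n"

def build_passwd_db(outfile="/var/db/passwd.db"):
    # "ypcat passwd" dumps the NIS passwd map; "makedb" ships with glibc.
    # Requires a working NIS client, so this part is environment-dependent.
    nis = subprocess.run(["ypcat", "passwd"], capture_output=True,
                         text=True, check=True)
    data = passwd_to_makedb_input(nis.stdout.splitlines())
    subprocess.run(["makedb", "-o", outfile, "-"],
                   input=data, text=True, check=True)
```

With `passwd: files db` in nsswitch.conf, glibc then resolves users from the generated file with no network access at all.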

The tool is available now and is open source:


WARNING. This post is extremely personal and many will probably find it shocking to hear some of these things from somebody who is rarely so open about them.

When I was a kid I loved many things, but I don’t really remember doing these things in a social way. Having a twin sister, who mostly spoke for me, meant I didn’t really have to learn how to make friends and go out and about a lot. What I did love was playing with Lego in my bedroom, cycling, or watching Star Trek. Maybe I loved Star Trek due to escapism, but I think I loved it because it showed me a world I wasn’t afraid of. Oh to have Beverly Crusher a few decks away when I got ill, or colleagues who could fix any problem. Oh to be in a world with absolute tolerance, beautiful morality and a sense of strength and community.

Perhaps this was ill-preparation for the world I really lived in. Many have written that Star Trek became so popular for many people because it showed a future we can look forward to and build towards. It was a positive future. It gave people hope, hope for those living in a complicated scary world. Well I was quite well insulated from that world. I didn’t see any of the bad bits of the world at that age – I just saw Star Trek. The reality of this murky, dirty world was kept far away from my life – until it was thrust upon me upon leaving home.

Star Trek: The Next Generation’s fourth-season episode “Legacy” was a gold-standard Star Trek episode. It combined action, a good story and drama with an underlying moral lesson about trust. At the start of the episode Riker tries to trick Data, the emotionless android, in a game of cards, but he does not succeed. By the end of the episode Data has been deceived and betrayed by somebody he came to trust. He thus returns to Riker to ask for advice.

“It is curious I was so easily misled”. Riker responds: “In all trust there is the possibility of betrayal. I’m not sure you were prepared for that.” Data asks: “Were you prepared, sir?” The response: “I don’t think anybody ever is”. Data then forms a somewhat logical conclusion: “Then it is better not to trust?”

Riker counters: “Without trust there is no friendship, no closeness, none of the emotional bonds that make us who we are”. This confuses Data: “And yet you put yourself at risk?” Riker responds, “Every single time”, whilst smiling.

I like to think the lesson of this episode is to trust, despite the possible consequences. I’ve lived my life by this maxim. I trust people, probably too easily. I’ve never set out to use people or really to ever go after what I want, instead respecting others and often placing others before my own wishes. This has burned me. Every time. Life has burnt me over and over again.

The first time I trusted somebody with my heart was my first relationship at University. It was like being reborn. Walking along holding hands. Just holding hands. Incredible. I never thought I could ever feel so happy. It ended with being punched by the guy, and being dumped weeks later – indeed a few days after Christmas. Every relationship I’ve had since has followed a similar pattern – I get punched, and left for dead. Never physically of course, just emotionally, not that it makes it any better.

You know, it isn’t easy growing up gay. I don’t think anybody wants to be gay, despite the comfort this fallacy brings to those who don’t understand or can’t fathom love between members of the same gender. Growing up as something you don’t want to be isn’t easy. Growing up gay when the world is full of people hating same-sex attraction isn’t fun. It doesn’t matter if your parents tell you Elton John is gay and that’s OK; the fact that they have to tell you it’s OK tells you that it really, really isn’t. Kids aren’t stupid and often learn the thing you didn’t even say.

Ultimately it doesn’t matter that gay men and women can get married now, that we are protected by law (as equals that is), it doesn’t matter, because we’re still living in a 95% heterosexual world where knowing who is gay and who isn’t is virtually impossible and you’re surrounded by people who just do not understand, even if today they want to understand. Anyway, back to the story.

I knew I was different when I was very young. I remember dancing to music in my bedroom – when I was still playing with Lego – to some music on my little radio on a Saturday morning. Of course, nobody knew I sometimes danced like that. I remember stopping suddenly and wondering “hmm, maybe I’m gay.”. I don’t recall attaching any negativity to the statement then. I remember dismissing it as “Probably… but that’s years ahead!” before returning to my Lego. There was no dislike of this conclusion.

Years later I remember arguing with my mother over something, about me not being happy or being normal or something. I don’t recall. To make it clear my mother wasn’t saying I wasn’t normal and I cannot recall why we were arguing, and I don’t place any blame on my mother at all. What I do recall is my mum worrying about me and saying something along the lines of “I bet you don’t even play with yourself”, referring to masturbation. I must have been 14 or 15 at this time. I worked hard to be as private as possible, always away from people, and always on my computer. I worked hard to hide any hint of puberty, hating the idea of growing up. I worked especially hard to never let my parents know I had learned to masturbate many years before. Many.

I mention this because, I suspect now like many adolescent boys, we learnt to do this before we had sexual attraction. I knew what it was though, I knew it was sexual, and so when I would do this I would think about girls of course. Boys are meant to like girls right? So that’s what I did. I remember “going through” all the girls in my class and then running out of girls – i.e. I’d think about a different girl each day. Then I ran out of girls and started thinking about boys. It didn’t bother me. I didn’t care. It wasn’t really like I was saying I was gay or straight. I wasn’t actually attracted to any of them – girls or boys.

Either way, for years until that argument with my mum I thought of it (masturbation) as something really bad. Although we had sex education at school, nobody ever really said that masturbating was normal. Or OK. Or allowed! I mean, is it OK? Many people would still claim it isn’t. It just isn’t talked about. I remember keeping a diary – much of which I still have somewhere – hating on myself for doing it and wanting to reduce how often, or stop entirely. Sadly arguing with my mum didn’t really help; even though she was trying to, I was far too practised at hiding everything and keeping it all locked up. The funny thing is that the first time I did it I was so proud of myself, but I didn’t tell my parents. I’m sure that 99.999% of kids don’t, but still, I wasn’t down on myself about it at the start. So why was I years later?

What I do remember well from this time is the first time I had a crush on somebody. A boy in my English class, the first week of year 8, mentioned sex out loud, via some euphemism I can barely remember. We were both 12-year-old lads. I remember being awoken from my usual mode of ignoring everybody else but the teacher. I felt a surge of attraction towards him. From then on I knew for sure what I liked. Guys.

For some reason I tried every day for years after that to not be gay. But why? My parents certainly were not homophobic or anti-gay. I admit I did have to watch, when I was 13, my then best friend be beaten up in the street for being gay. Perhaps that was the turning point. Talking to police about the attack. Listening to my classmates ask me why on earth I would be hanging out with a fag (actual word used, among others). Why wasn’t I scared? I just wasn’t. He was my friend. Why did I care if he was gay? Why did they care that he was?! This was all overheard by teachers. Of course, Section 28 prevented them from saying that there was nothing wrong with me being friends with a gay person. Did I tell my parents? Oh no. Apparently I was shocked when I first found out he was gay, not that I remember it. I already knew I was by then, though, so perhaps I was just acting. I don’t recall.

School though was very overtly homophobic. All kids were, especially the ones who later turned out to be gay. The worst were the girls in my class who were homophobic. They weren’t homophobic towards me, but didn’t understand why I walked to and from school with a kid who was out as gay. The fact that he lived 5 minutes from my house didn’t seem to matter, he was gay, I shouldn’t be seen with him.

So I tried hard to be anything but gay. I even dated a girl when she asked me out. I remember thinking for years I had to be anything but gay. I didn’t understand why I liked guys my age – 13-year-olds. For the longest time I thought that because I liked boys my own age – 13-year-old boys – it meant I was a paedophile. I even thought that it was preferable to being gay. As a kid you’re always told how awful it is to be queer and how you’d never want to be one. But kids don’t talk about paedophiles. Paedophiles were the old men, so we didn’t accuse each other of that – but we did accuse each other of being gay. It was the standard insult.

It was only in my mid-20s that I realised those “don’t trust a stranger” warnings we got as young children were about protecting us from them. Adults didn’t talk about them. Children didn’t know about them. It was only barely discussed. So in my mind it was preferable to being gay; after all, I had anti-gay hatred shoved in my face 24/7. It was so silly and so naive, but nobody was helping me. Nobody was there to tell me it was OK to like boys my age.

Of course as I grew older the guys I liked grew with me. Nobody ever told me that it would happen that way. I assumed what I liked then would be who I liked forever. These things are important but they aren’t the things we teach our kids.

I eventually of course had to face the reality of who I liked, which led to the first relationship I mentioned earlier, but this was at age 18 and hundreds of miles away from home and with nobody I could really trust or tell.

After that first relationship’s poor ending, a second occurred, where again I was burned. That time I fell in love. So much so that I can still reconnect to those memories and feel that love today – 12 years later. It feels just as incredible now as it did then. It was the first time I’d been intimate with a person and it meant something more than just physical joy (then again, since I’d only had sex with one other guy before, I’m not really sure it meant that much in hindsight). This time I was left with an anxiety disorder and several years of rejecting people I liked because I was afraid of being hurt again. Spending a year in America didn’t really help either.

Today I feel like this pattern has been repeated over and over ever since my first relationship. The last twelve years of my life are a long cycle of trust, honesty, commitment and effort on my part, and betrayal on the other side. My friends tell me I’m a nice, attractive, loving guy, and yet the people I have trusted the most have also hurt me the most. I no longer believe Riker was right. Perhaps Data was right; perhaps it is better not to trust?

I don’t really know what I’ve done to deserve the situation I now live in. I’m 30, past my prime for having the fun, youthful, sexually exuberant standard gay lifestyle. I’ve wasted years in relationships where I was giving everything and getting absolutely nothing back. I’ve wasted the last few years getting attached to and falling for the most terrible man I’ve ever met, and living with the consequences of such serious and wounding betrayal and treatment has kept me from the life I want.

I don’t know how to get my life back on track. Mental illness (due to the fallout of trusting somebody) and its resulting need for medication, and its resulting gaining of weight, has left me feeling and looking old, tired, and overweight. I might be recovering from my illness, but I don’t know how to recover my life. I don’t know how to pick myself up again and be the physically active and happy person I once was.

I’m tired, and lacking in hope. I don’t know how to fix myself, I don’t know how to face another day working in a place I loathe, and I don’t know how I can ever, in a million years, trust anybody again with any emotion or love. Ever. One of my favourite TV shows is Frasier, and in one episode Daphne muses about her parents’ failed marriage:

Daphne: I can't believe this is really happening.  
I mean, maybe I'm naive, but I always thought love 
would save the day.
Harry: Well, you know, we all think that when we're 
young. But then life beats us around a bit and you 
learn to dream a little smaller.

I’m not old, but I consider myself to be of a mature age now. And I can’t disagree with Harry one bit. Life has beaten me around. A lot (I can’t quite add enough emphasis to that). I don’t feel like I deserved much of any of it. Yet it still happened to me and somehow I am required to struggle on. I’m somehow required to believe I’ll meet somebody better and somehow I’m meant to believe that everything will turn out for the best. Well, evidence so far suggests that it won’t just turn out for the best.

I’m not sure what I’m living for any more and I’m not sure what motivates me to even get out of bed in the morning. I just hope I keep finding the willpower to do it, despite not wanting to, but it gets harder every single day. Perhaps, in the end, it is better not to trust.

Britain: For the Love of God, Please Stop David Cameron

Originally posted on Benjamin Studebaker:

On May 7 (this Thursday), Britain has a general election. I care deeply about British politics–I did my BA over there and will return to do my PhD there this fall. But more importantly, David Cameron’s government has managed the country’s economy with stunning fecklessness, and I couldn’t live with myself if I didn’t do my part to point this out.


Kerberos protected NFS with Active Directory and the PAC

For years I’ve been trying to use Active Directory’s Kerberos implementation to set up secure NFS4. This is where NFS4 is configured to require Kerberos credentials, so that only a user with a valid Kerberos ticket (i.e. they authenticated to Active Directory) can access their relevant files. This is in stark contrast to NFS with AUTH_SYS – where certain IP addresses are essentially given full access.

The advantage of using NFS4/krb5 is that a protected NFS4 file share can be exported to whatever IP addresses you like, safe in the knowledge that only authenticated users can access their files. They have to authenticate with Kerberos first (i.e. to Active Directory) before they can access their files – and only their files. It also solves the ‘root squash’ problem – root cannot access everybody else’s files.

However, in the past we were using Windows Server 2003 as our domain controllers – we only upgraded to Server 2012 R2 a few months ago. Since upgrading we could finally mount (connect to) NFS4 shares protected by Kerberos. Once mounted, however, users could not use their Kerberos tickets to access their files – permission denied was returned. The logs showed no errors. It was a head-banging-against-a-brick-wall moment.

Everything should have been working, until I discovered an obscure article suggesting that our users were in too many groups. Sure enough, thanks to some odd internal practices relating to software groups and SharePoint, our users were in literally hundreds of groups – but why would this break NFS4? It’s because, as ever, Active Directory isn’t what I’d call a standard Kerberos implementation. Active Directory uses an optional RFC 4120 field called ‘AuthorizationData’. It fills this with a Microsoft-only ‘thing’ called the Privilege Attribute Certificate, or ‘PAC’. It contains all sorts of additional information such as groups, SIDs, password caches, etc. It’s essential to Microsoft servers – but NFS4 doesn’t need it; NFS4 doesn’t send group information.

The good news is you can instruct AD not to send PAC information for your NFS4 server. The procedure is very simple:

  • In the Active Directory Users and Computers tool, select View -> Advanced Features.

  • Open the computer object properties of the NFS4 server (i.e. find the computer object for your NFS4 server)

  • Select the Attribute Editor tab

  • Edit the “userAccountControl” attribute

  • The original value will probably be 4096 and be displayed as “0x1000 = (WORKSTATION_TRUST_ACCOUNT)”. Don’t worry if it isn’t, however.

  • Add 33554432 to the value field.

  • Click OK

  • Ensure the stored value shows “0x2000000 = (NO_AUTH_DATA_REQUIRED)”
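The edit above is just setting one extra bit in a flags field. A quick sketch of the arithmetic (constant names taken from the dialog strings above):

```python
# userAccountControl is a bit field; the procedure above sets one extra flag.
WORKSTATION_TRUST_ACCOUNT = 0x00001000  # decimal 4096, the usual starting value
NO_AUTH_DATA_REQUIRED = 0x02000000      # decimal 33554432, suppresses the PAC

original = WORKSTATION_TRUST_ACCOUNT
updated = original | NO_AUTH_DATA_REQUIRED  # same as "add 33554432" here

assert updated == 4096 + 33554432 == 33558528
assert updated & NO_AUTH_DATA_REQUIRED      # the new flag is set
assert updated & WORKSTATION_TRUST_ACCOUNT  # the original flag is preserved
```

Adding 33554432 and OR-ing in the bit are equivalent only when the flag isn’t already set – which is why the dialog should show both flags afterwards, not a doubled value.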

Once this is done the PAC won’t be added to the Kerberos ticket. This should then allow users to access NFS4 mounts and manage their files – and prevent anybody else managing their files!

In RHEL7 you should not need to do this, as the ‘svcgssd’ service has been replaced with a new daemon – gss-proxy. This software was written specifically to cope with the huge size of Active Directory Kerberos tickets. Sadly I don’t have a RHEL7 system (yet) to prove this. I will update this blog post when I do!

Filestore Web Access – or how I fell in love with programming again

When I was 16 I wrote a little ‘CMS’ or website content management system called IonPanel. It was pretty awful – it was written in PHP and MySQL, was probably terribly insecure, and I mostly programmed it on Windows using IIS. It was however terribly exciting to write, and rather popular for a little while. Searching for the right string on Google would find hundreds upon hundreds of websites running the software, and it was open source! Lots of people contributed to it. Several of my friends wrote little CMS packages, but none were as popular as IonPanel, and none as fast and feature packed. I was very proud of it. Sadly it died of the second-system effect when I attempted to rewrite it for version ‘2.0’. A beta was launched, but then I went to University, I started realising how terrible PHP was, and I gave up. IonPanel slowly died. As time passed I longed for that time again – when I was writing code daily on an open source project that lots of people were using.

Since then I’ve written lots of code for lots of people but nothing has captivated me like IonPanel did – until now – twelve years later. A year or so ago I got the idea of writing a web interface to the University’s file storage platform. I’d recently got into Python and wanted to find a CIFS/SMB library I could use from Python. I found one – albeit badly documented and barely used – and wrote an application around it. Today that application has grown into something I’m extremely proud of. Enter ‘Filestore Web Access’.

Filestore Web Access allows all university students and staff to access their personal and shared files from a modern web browser anywhere in the world. Until I created FWA getting access to files away from the University’s standard desktops was quite difficult, unless you knew how to use SSH!

At the time of writing, it’s looking really rather good, here it is in two different themes:

[Screenshots: FWA in the default and ‘flatly’ themes]

The responsive design (thanks to Twitter Bootstrap, and a lot of extra code) causes it to work great on mobile:

[Screenshots: FWA’s responsive mobile view]

And the new login screen with changing backgrounds I’m especially proud of:

[Screenshots: the login screen with its changing backgrounds]


I intend to write more about FWA in the next few days and weeks. Until then you can take a look at even more screenshots!

You can also view the project page on GitHub:


Docker is a whale which carries containers on its back

See, it’s a whale! With containers! On its back! Like Discworld, but a whale instead of a turtle.

Ever since I first played with User Mode Linux (UML) back in the days of Linux 2.4 I’ve been working with virtualisation, normally being involved in server virtualisation activities wherever I’ve worked. The project I’m leading right now at Southampton is the conversion of our entire physical server estate to virtual on VMware.

Despite living and breathing these technologies I’ve never actually liked x86 virtualisation. It is a terrible waste of code and processor time. It virtualises the entire hardware platform as if the guest OS were actually running on real physical hardware – but why? And even this isn’t entirely true any more – in all modern virtualisation products the guest OS is fully aware it’s being virtualised, with tonnes of ‘tools’ and ‘drivers’ running to facilitate communication between guest and hypervisor. It’s thus a hybrid – a mess of different approaches and compromises.

I entirely blame Microsoft for the growth of this odd x86 virtualisation market. Outside of the x86 world IBM and Sun created hardware level virtualisation and OS-level virtualisation, but in x86 land, because of the proprietary and slow-moving nature of Windows, vendors sprang up creating the x86 hybrid virtualisation model – part hardware, part software. It meant you could run Windows inside a virtualised container and make better use of hardware – at the cost of enormous overheads and massive duplication of data. One of the most ridiculous things from an architecture perspective is every x86 VM solution emulating a PC BIOS or UEFI firmware instance for every guest. Whatever for!

So for a long time I’ve been hoping that OS-level virtualisation would eventually assert itself and become the dominant form of virtualisation. I think it hasn’t because Microsoft joined the x86 virtualisation party with Hyper-V and rushed off to compete with VMware, and so the market has carried on down this odd virtualisation path. Architecturally there will always be a place for this type of virtualisation, but the vast majority of servers and virtual desktops don’t need it. They don’t need to pretend to be running on real hardware. They don’t need to talk to a fake BIOS. Clearly the x86 virtualisation vendors think this too, as each new generation of product has mixed in more ‘paravirtualised’ components – to improve performance and cut down on duplication.

So what’s the alternative? Real OS-level virtualisation! There is lots to choose from too. Solaris has Zones/Containers. FreeBSD has jails. AIX has WPARs. HP-UX has HP-UX Containers. Linux predictably has lots to choose from: OpenVZ, VServer, lmctfy and LXC to name a few (and predictably, until recently, none were in the upstream kernel). LXC is the one everybody was talking about. The idea was to put acceptable OS-level virtualisation components into the kernel, rather than just taking OpenVZ and shoving it in, which would have ended badly and never been accepted. Because of this LXC has taken a long time to write, and it has somewhat lost its ‘new! exciting!’ sheen.

LXC remains, however, the right architectural way to do virtualisation. In LXC, and all the other OS-level technologies, the host’s kernel is shared and is used by the guest container. No hardware is virtualised. No kernel is virtualised – only the userland components are. The host’s kernel is still doing all the work, and that is what the guest operating system uses as its kernel. This eliminates all the useless overheads and allows for easy sharing of userland components too – so you don’t have to install the same operating system N times for N virtualised guests.

Sadly everybody’s experience with LXC for the past few years was along the lines of “oooh, that sounds awesome! Is it ready yet?”, and usually the answer was “not yet… nearly!”. All that changed last month, though, as LXC 1.0 was released and became ‘production ready’. Yay! All we needed now, I thought, was for all the Linux shops to switch away from bulky full-fat x86 hypervisors and start moving to LXC. Instead, by the time LXC 1.0 was released, something else had come along and stolen the show.

Enter Docker. Now, Docker actually is LXC. Without LXC, Docker wouldn’t exist. But Docker extends LXC. It’s the pudding on top which makes it into a platform literally everybody is talking about. Docker is not about virtualising servers, it’s about containerising applications, using LXC underneath. The Docker project says the aim is to “easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.”

So when I realised Docker was getting massive traction I was displeased, because I wanted LXC to get this traction, and Docker was stealing the show. However, I had missed the point. Docker is revolutionary. I wanted LXC to kill all the waste between the hardware and the server operating system’s userland components – the parts that are my day job. Docker wants to kill that waste, and all the waste in the userland of the operating system as well – the parts I hadn’t considered being a problem.

For years vendors and open source projects have produced applications, released them, and asked an IT department to install and maintain operating systems, install and maintain prerequisite software, and then install and configure the application. Then, usually, another team in the organisation actually runs and maintains the application. Docker has the potential to kill all of that waste. In the new world order the vendor writes the code, creates a container with all the prerequisite OS and userland components (except for the Linux kernel itself), and then releases the container. The customer only has to load the container and then use the application.
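Sketched as commands, that split between vendor and customer might look like this (the image name and registry account are hypothetical; the commands themselves are standard Docker CLI):

```shell
# Vendor side: build the application container from a Dockerfile and publish it.
docker build -t examplevendor/crm-app:1.0 .
docker push examplevendor/crm-app:1.0

# Customer side: no OS install, no dependency chasing.
docker pull examplevendor/crm-app:1.0
docker run -d -p 80:8000 examplevendor/crm-app:1.0
```

The customer’s entire “installation procedure” is the last two commands; everything between the kernel and the application arrives inside the container.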

The result is, in effect, the fairly well established “virtual appliances” seen in VMware/KVM/Hyper-V land, but with all the x86 hypervisor waste removed.

This has many benefits:

  • The software vendor doesn’t have to maintain a full operating system that is expected to work on any number of virtualisation solutions and different fake hardware models. They only have to target LXC, with the host kernel doing all the difficult work.
  • The software vendor can pick and choose whatever userland components they need and properly and fully integrate the application with the userland OS.
  • The software vendor takes care of patching the userland OS and the application. The patching process is integrated. No more OS patches breaking the app. No more OS patching for the IT department to do.
  • The customer IT department’s work is radically reduced. They only have to deploy the container image – a very easy procedure – and within seconds have a fully set up, ready-to-use application.
  • An end to dependencies, prerequisites, compatibility issues, lengthy installations, and incorrectly configured operating systems and applications.
  • And all the benefits of LXC – low overheads, high performance, and an end to the duplication of the same operating system.
  • An end to having to upgrade and move applications just because the guest server operating system has reached end of life – even when the application hasn’t.

So, today’s IT platforms probably consist of:

  • A farm of physical servers running a hypervisor platform like VMware or KVM
  • Hundreds if not thousands of virtual machines running only 2-3 different operating system flavours (e.g. RHEL5/6 or Windows Server 2008/2012), with a small number of VMs (<10%) running exotic things
  • Teams of infrastructure people maintaining the guest operating systems using OS-level management systems such as RHN, Landscape, Puppet, Chef, Cfengine, Runit, etc., and spending a lot of time patching and maintaining operating systems
  • Teams of application people, usually without root (or, even worse, with root), having an uneasy relationship with the infrastructure teams, installing applications, patching them (or, more likely, not patching them) and maintaining them

If Docker catches on the way I’d like it to (beyond what even the Docker project envisaged) then I think we’d see:

  • A farm of physical servers running an LXC hypervisor Linux OS
  • Hundreds if not thousands of Docker containers containing whatever the vendor application needs.
  • Teams of application people using the vendor-supplied web interfaces to manage the applications, patching them using vendor patching systems which integrate all the components fully, or simply upgrading stateless Docker instances to the latest version

It seems that this vision is already a reality: CoreOS envisages applications packaged as Docker containers, with CoreOS as the minimalist platform hypervisor underneath. The IT department’s sole job would be to install CoreOS onto hardware and then load Docker containers as needed from vendors, open source projects, and internal software development teams.

This is all very new and cutting edge. Docker 0.9 was only released a few weeks ago. CoreOS’s latest version is a major change. Other exciting areas of development with Docker are plans to let you swap out LXC and use OpenVZ, Solaris Zones or FreeBSD jails instead, thus opening Docker up to Solaris and BSD too. This is a very exciting new frontier which, if successful, will totally rewrite how the IT world works. I can’t wait to see what happens next.