Life is not fair

I loathe the phrase. As a statement of fact it doesn’t really make much sense: life can’t be fair or unfair, it just is what it is. People rarely utter the phrase to support somebody, and it is not supportive anyway. More often people use it as a justification. “Well, life isn’t fair.” It’s a common phrase for parents to utter to their children, and “life is awful, so don’t complain” seems to be the implication.

If you were to disagree with my assertion above, then logically we use the phrase “life isn’t fair” to explain that the world is not a fair place, and so we must expect to be treated unfairly and badly in our lives, to expect misfortune and bad luck, and to expect other people not to treat us justly.

Are we really saying that, though? We also get told we must follow the rule of law, we must do what our teachers say, we must do what our managers say, we should treat others as we expect to be treated, and we should be fair to others. Aren’t these two positions in conflict with each other? The phrase “life isn’t fair” as a justification only works if we also teach, or ‘allow’, people to be unfair to others.

I don’t believe in a “just world”. I don’t believe karma is real or that a divine entity will make adjustments to make life fair. I do believe that actions have consequences, but not according to some grand plan of fairness; actions and reactions are just what they are, and just happen, in many cases randomly.

Despite all of this I go about each day with a genuine, deeply felt sense that I have been treated badly, undeservedly. I do not deserve to be treated as I am. Given how much time I spend caring about everybody else, given how much effort I put into things that benefit others, it’s not fair, right?

Well, I believe this because I was taught that we must all be kind to each other, we must honour each other, be nice, and not be selfish. Quite frankly this is utter bullshit. I have nothing but anger and contempt for the actions of the people who taught me this tripe. It’s not true. The world isn’t a fair place, as they would blindly tell me when I was mistreated, yet they would enforce their moral views on me anyway. I’m required to care about others and not be selfish, but when others don’t extend the same to me, all that is left is an empty statement of agreement and a useless retort – life isn’t fair.

One of the most important lessons I’ve learnt through counselling is that our life is ours alone and we should not let others control us; instead, we must seek to achieve what we want and in most cases put our needs before others’. All of this is in stark contrast to what most of us are told as children, when we’re taught that we should put others first.

Life isn’t fair, so why should I act fairly all the time? Why exactly must I accept that life is shit, and yet, feel guilty if I do what I want rather than what others want? Why exactly am I expected to not do things others don’t like, or act the way others want me to act, when nobody ever does that for me?

What really bothers me is that in life other people act unfairly, and I have to accept this, because life isn’t fair, yet I’m not allowed to act unfairly myself; and on top of this, I’m not even allowed to express my frustration about other people’s actions. No. I must not do this. I must be quiet, not cause a problem, and censor myself, because, after all, life isn’t fair.

In researching this blog post I read a lot of articles about fairness and the reality of the world; one of my favourites was “Your broken idea of fairness”. I don’t agree with all of it, and I don’t have a broken idea of fairness because, as you have just read, I know the world isn’t fair. What I think is important is his rule number one: life is a competition. It isn’t meant to be fair, and if you believed all that crap growing up about sharing, fairness, etc., then you were gullible. Life is about getting what you want over others.

I think the most important part of the article is where it talks about how other people’s morality is forced onto us as children:

People like to invent moral authority. It’s why we have referees in sports games and judges in courtrooms: we have an innate sense of right and wrong, and we expect the world to comply. Our parents tell us this. Our teachers teach us this. Be a good boy, and have some candy.

But reality is indifferent. You studied hard, but you failed the exam. You worked hard, but you didn’t get promoted. You love her, but she won’t return your calls.

Life isn’t fair. But don’t let others tell you to act fairly while simultaneously justifying the world by saying it isn’t fair. Do what you want. Be you. Say it like it is, and realise that others are competing with you. It’s very unlikely they will place you before them, so don’t place them before you.

Bargate security overhaul

Bargate is a web application that lets users access their files on SMB/CIFS servers within the corporate network. It connects to SMB/CIFS servers on behalf of the user and authenticates on their behalf as well. To do this it needs their password every time the user loads a page, since each page load connects to the back-end SMB server.

The existing design

The existing security design of Bargate is predicated on the belief that the server should not be trusted to store the user’s password. If it stores the user’s password then any break-in to the server / web application could obtain the stored users’ passwords. Encrypting them, whilst making an attack slightly more difficult, doesn’t solve the underlying problem, since the application needs the decryption key stored alongside in order to use them. An attacker could steal both in nearly all circumstances.

Bargate thus stores the password in the user’s session, which is client side (stored in cookies). It is encrypted first using AES 256-bit CFB, then put in the session, and the session is signed by itsdangerous before being put into a cookie for the user. The encryption/decryption key for the AES step is stored in the Bargate configuration.
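
In outline, that encrypt-then-store step looks something like the sketch below. This is a minimal illustration assuming PyCryptodome for the AES step; the function names and the IV/base64 handling are my own, not bargate’s actual code:

    # Hypothetical sketch of the existing scheme; bargate's real code may differ.
    import base64
    import os

    from Crypto.Cipher import AES  # PyCryptodome

    def encrypt_password(encrypt_key, password):
        """Encrypt the password with AES-256-CFB under the app-wide key.

        encrypt_key must be 32 bytes (256 bits), taken from the bargate config.
        """
        iv = os.urandom(AES.block_size)  # fresh random IV per encryption
        cipher = AES.new(encrypt_key, AES.MODE_CFB, iv)
        return base64.b64encode(iv + cipher.encrypt(password.encode()))

    def decrypt_password(encrypt_key, blob):
        """Reverse of encrypt_password: recover the plain-text password."""
        raw = base64.b64decode(blob)
        iv, ciphertext = raw[:AES.block_size], raw[AES.block_size:]
        cipher = AES.new(encrypt_key, AES.MODE_CFB, iv)
        return cipher.decrypt(ciphertext).decode()

The encrypted blob then goes into the session, and itsdangerous signs the whole session before it is sent to the browser as a cookie.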

The dangers in this design are:

  • The encrypted password is sent across the network on every request (even if it is over SSL)
  • The encrypted password is stored in the cookie and thus on a myriad of end user devices, for perhaps up to 30 days (depending on session lifetime)
  • If an attacker gains access to the ENCRYPT_KEY (stored on the server), they can decrypt the password stored on any end-user device, obtaining the user’s actual password

This design was chosen of course because storing the password on the server, with or without encryption, is even worse. It would also mean that any flaw in Bargate allowing attackers to steal a user’s session would work without first having to compromise the end user’s device, as is the case today. Today, if there are any flaws like that in the code, they are innocuous: the attacker won’t have the encrypted password, and thus won’t be able to access any systems.

The new design

What we want to achieve is quite simple:

  • The bargate server, if attacked, can’t be used to steal user passwords (i.e. don’t store users’ passwords in plain text, and don’t store them encrypted if the encryption key is known by the application)
  • The end user device, if attacked, can’t be used to get the user password or even the encrypted password
  • The user’s password or encrypted password should not be sent over the wire on every request, only at log on time

The password of course has to be stored somewhere, but it does not have to be stored in plain text, and the place where it is stored does not have to have the encryption key. That is how it works today – it’s stored on the client, which doesn’t have the encryption key – but this has several downsides. Instead the new Bargate authentication system will store the password encrypted on the server, but encrypted with a key stored in the user’s session, thus reversing the design.

This means:

  • Passwords are no longer encrypted using the same encryption key for every user; each session has a unique encryption key.
  • The end user device does not store the password in any form, which allows the deploying company/group/user to focus on server security rather than end user device security (especially important in the age of BYOD).
  • Attacking the end user’s device gives the attacker no useful information. Even if you get at the per-session encryption key stored on the client, that key only decrypts an encrypted password which the client never has and never will have.
  • The encrypted password is not sent over the network on each request
  • The decryption key sent over the network on each request is itself encrypted by a key known only by the server, so it is useless to an attacker eavesdropping on the connection (if they had broken TLS).

The new design in detail

  1. The end user logs into Bargate by sending their username and password over TLS
  2. Bargate checks the username and password via LDAP, Kerberos or “SMB”
  3. Bargate generates a 32-byte (256-bit) session encryption key for the user
  4. Bargate encrypts the user’s password using the session encryption key and stores it on the server (most likely in Redis with an expiration)
  5. Bargate encrypts the session encryption key using ENCRYPT_KEY (a bargate config option) and stores it in the user’s session. Bargate itself does not keep a copy of the session encryption key.
  6. The user’s browser saves the encrypted decryption key in the browser’s cookie storage
  7. The user’s browser is redirected to view a file server
  8. The user’s browser presents the encrypted decryption key to the server as a cookie over TLS
  9. Bargate decrypts the decryption key using ENCRYPT_KEY
  10. Bargate uses the resulting decryption key to decrypt the password stored in Redis
  11. Bargate uses the decrypted password to authenticate to the SMB server on the user’s behalf (a code sketch of steps 3–5 and 9–10 follows below)
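
To make the flow concrete, here is a hedged sketch of steps 3–5 and 9–10 in Python. It assumes PyCryptodome and redis-py; the Redis key names, the 30-minute expiry and the helper names are my own illustration, not bargate’s actual implementation:

    import os

    import redis
    from Crypto.Cipher import AES

    r = redis.StrictRedis()
    ENCRYPT_KEY = b"\x00" * 32  # placeholder: the real 32-byte key comes from config

    def _encrypt(key, plaintext):
        iv = os.urandom(AES.block_size)
        return iv + AES.new(key, AES.MODE_CFB, iv).encrypt(plaintext)

    def _decrypt(key, blob):
        iv, ciphertext = blob[:AES.block_size], blob[AES.block_size:]
        return AES.new(key, AES.MODE_CFB, iv).decrypt(ciphertext)

    def on_login(session_id, password, session):
        session_key = os.urandom(32)                 # step 3: per-session 256-bit key
        r.setex("bargate:pw:%s" % session_id,        # step 4: encrypted password
                30 * 60,                             # lives in Redis with an expiry
                _encrypt(session_key, password.encode()))
        # step 5: only an ENCRYPT_KEY-wrapped copy of the key goes to the client
        session["key"] = _encrypt(ENCRYPT_KEY, session_key)

    def password_for_request(session_id, session):
        session_key = _decrypt(ENCRYPT_KEY, session["key"])    # step 9
        blob = r.get("bargate:pw:%s" % session_id)
        return _decrypt(session_key, blob).decode()            # step 10

Note how neither side ever holds everything: the client holds only a wrapped key it cannot unwrap, and the server holds only a ciphertext whose key it throws away at login.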

Remaining attack vectors

There are two remaining attack vectors.

  • Session hijacking
    • An attacker can still take session cookies off a client and then use them. This threat is reduced with TLS and HTTP-only cookies, but an attacker could still get to them; this is a generic problem with web applications, however. Adding restrictions to lock sessions to an IP address is an option, but can be disruptive and is of limited benefit. (A sketch of the cookie mitigations follows this list.)
  • Attacker with access to both the server and client
    • If the attacker has compromised both ends, well, you know, the game’s over anyway.
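
As a hedged illustration of the TLS and HTTP-only cookie mitigations mentioned above: bargate’s sessions are itsdangerous-signed cookie sessions in the style of Flask, where the usual hardening settings look like this (a sketch, assuming a Flask deployment; pick lifetimes to suit your own risk appetite):

    # Illustrative Flask configuration values, not bargate's shipped defaults.
    SESSION_COOKIE_SECURE = True        # never send the session cookie without TLS
    SESSION_COOKIE_HTTPONLY = True      # keep the cookie out of reach of JavaScript
    PERMANENT_SESSION_LIFETIME = 1800   # short lifetime (seconds) limits replay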

A replacement for nss_updatedb: nis2db

In 2011 the glibc project released version 2.15, which dropped support for using Berkeley DB based database files as a source of user/group information (well, any name service switch information). Instead, the “db” backend of the name service switch (NSS) is now provided by a simple glibc-specific db file format.

This means the nss_updatedb tool, which I have used for years to provide network-free user/group information on Linux machines, no longer works on modern Linux systems. The tool generates BDB files that glibc’s nss db module no longer supports. All Linux systems using glibc 2.15 or later are affected by this change.

To restore the functionality I need – pulling user and group information out of NIS and placing it in db files that glibc can read – I have written “nis2db”, a really simple Python script which reads from NIS and uses the “makedb” command shipped with glibc.
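
The core of the approach fits in a few lines. Below is a hedged sketch, not the script itself: it assumes Python’s standard ‘nis’ module and the “.name” / “=uid” key convention that glibc’s own /var/db/Makefile uses when feeding makedb:

    # Hypothetical sketch of the nis2db approach; the real script may differ.
    import subprocess

    import nis  # Python's standard NIS bindings

    def build_passwd_db(output="/var/db/passwd.db"):
        lines = []
        for name, entry in nis.cat("passwd.byname").items():
            uid = entry.split(":")[2]
            # glibc's nss_db looks passwd entries up by ".name" and "=uid" keys
            lines.append(".%s %s" % (name, entry))
            lines.append("=%s %s" % (uid, entry))
        # "-" tells makedb to read the key/value lines from standard input
        subprocess.run(["makedb", "-o", output, "-"],
                       input="\n".join(lines).encode(), check=True)

    build_passwd_db()

The same pattern works for group.byname and the other NIS maps glibc cares about.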

The tool is available now and is open source:

Britain: For the Love of God, Please Stop David Cameron

Originally posted on Benjamin Studebaker:

On May 7 (this Thursday), Britain has a general election. I care deeply about British politics–I did my BA over there and will return to do my PhD there this fall. But more importantly, David Cameron’s government has managed the country’s economy with stunning fecklessness, and I couldn’t live with myself if I didn’t do my part to point this out.


Kerberos protected NFS with Active Directory and the PAC

For years I’ve been trying to use Active Directory’s Kerberos implementation for setting up secure NFS4. This is where NFS4 is configured to require Kerberos tickets, making sure that only a user with a valid Kerberos ticket (i.e. one who has authenticated to Active Directory) can access their relevant files. This is in stark contrast to NFS with AUTH_SYS – where certain IP addresses are essentially given full access.

The advantage of using NFS4/krb5 is that a protected NFS4 file share can be shared out to whatever IP address you like, safe in the knowledge that only authenticated users can access their files. They have to authenticate with Kerberos first (i.e. to Active Directory) before they can access their files – and only their files. It also solves the ‘root squash’ problem – root cannot access everybody else’s files.

However, in the past we were using Windows Server 2003 as our domain controllers – we only upgraded to Server 2012 R2 a few months ago. Since upgrading we could finally mount (connect to) NFS4 shares protected by Kerberos. Once mounted, however, users could not use their Kerberos tickets to access their files – permission denied was returned. The logs showed no errors. It was a head-banging-against-a-brick-wall moment.

Everything should have been working, until I discovered an obscure article suggesting that our users were in too many groups. Sure enough, thanks to some odd internal practices relating to software groups and SharePoint, our users were in literally hundreds of groups – but why would this break NFS4? It’s because, as ever, Active Directory isn’t what I’d call a standard Kerberos implementation. Active Directory uses an optional RFC 4120 field called ‘AuthorizationData’ and fills it with a Microsoft-only ‘thing’ called the Privilege Attribute Certificate, or ‘PAC’. It contains all sorts of additional information such as groups, SIDs, password caches, etc. With hundreds of groups the PAC bloats the Kerberos ticket to a size the NFS server’s GSS machinery can’t cope with. The PAC is essential to Microsoft servers – but NFS4 doesn’t need it; NFS4 doesn’t send group information.

The good news is you can instruct AD not to send PAC information for your NFS4 server. The procedure is very simple:

  • In the Active Directory Users and Computers tool, select View -> Advanced Features.

  • Open the computer object properties of the NFS4 server (i.e. find the computer object for your NFS4 server)

  • Select the Attribute Editor tab

  • Edit the “userAccountControl” attribute

  • The original value will probably be 4096 and be displayed as “0x1000 = (WORKSTATION_TRUST_ACCOUNT)”. Don’t worry if it isn’t that however.

  • Add 33554432 (0x2000000, the NO_AUTH_DATA_REQUIRED flag) to the value field.

  • Click OK

  • Ensure the stored value now includes “0x2000000 = (NO_AUTH_DATA_REQUIRED)” – for an original value of 4096 the new value is 33558528, i.e. 0x2001000 (a scripted version of this change is sketched below)
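
If you would rather script the change (for example across several NFS servers), here is a hedged sketch using the ldap3 Python library; the server, credentials and DN are placeholders for your own environment:

    # Hypothetical sketch: set NO_AUTH_DATA_REQUIRED via LDAP instead of the GUI.
    from ldap3 import MODIFY_REPLACE, Connection, Server

    NO_AUTH_DATA_REQUIRED = 0x2000000  # decimal 33554432

    conn = Connection(Server("dc.example.com"), user="EXAMPLE\\admin",
                      password="secret", auto_bind=True)

    dn = "CN=nfs-server,CN=Computers,DC=example,DC=com"  # your NFS4 server object
    conn.search(dn, "(objectClass=computer)", attributes=["userAccountControl"])
    uac = int(conn.entries[0].userAccountControl.value)

    # OR the flag in, preserving whatever flags are already set
    conn.modify(dn, {"userAccountControl":
                     [(MODIFY_REPLACE, [uac | NO_AUTH_DATA_REQUIRED])]})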

Once this is done the PAC won’t be added to the Kerberos ticket. This should then allow users to access NFS4 mounts and manage their files – and prevent anybody else managing their files!

In RHEL7 you should not need to do this, as the ‘svcgssd’ service has been replaced with a new daemon – the gss-proxy. This software has been written specifically to cope with the huge size of Active Directory Kerberos tickets. Sadly I don’t have a RHEL7 system (yet) to prove this. I will update this blog post when I do!

Filestore Web Access – or how I fell in love with programming again

When I was 16 I wrote a little ‘CMS’, or website content management system, called IonPanel. It was pretty awful – it was written in PHP and MySQL, was probably terribly insecure, and I mostly programmed it on Windows using IIS. It was, however, terribly exciting to write, and rather popular for a little while. Searching for the right string on Google would find hundreds upon hundreds of websites running the software – and it was open source! Lots of people contributed to it. Several of my friends wrote little CMS packages, but none were as popular as IonPanel, and none as fast and feature-packed. I was very proud of it. Sadly it died of the second-system effect when I attempted to re-write it for version ‘2.0’. A beta was launched, but then I went to University, I started realising how terrible PHP was, and I gave up. IonPanel slowly died. As time passed I longed for that time again – when I was writing code daily on an open source project that lots of people were using.

Since then I’ve written lots of code for lots of people but nothing has captivated me like IonPanel did – until now – twelve years later. A year or so ago I got the idea of writing a web interface to the University’s file storage platform. I’d recently got into Python and wanted to find a CIFS/SMB library I could use from Python. I found one – albeit badly documented and barely used – and wrote an application around it. Today that application has grown into something I’m extremely proud of. Enter ‘Filestore Web Access’.

Filestore Web Access allows all university students and staff to access their personal and shared files from a modern web browser anywhere in the world. Until I created FWA, getting access to files away from the University’s standard desktops was quite difficult, unless you knew how to use SSH!
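
To give a flavour of what FWA does underneath, here is a hedged sketch using pysmb, one Python SMB/CIFS library (not necessarily the one FWA uses); the hostnames, share and account are made up, and FWA’s real code is considerably more involved:

    from smb.SMBConnection import SMBConnection

    conn = SMBConnection("username", "password",
                         "fwa-client",           # our NetBIOS name
                         "fileserver",           # the server's NetBIOS name
                         domain="UNIVERSITY", use_ntlm_v2=True,
                         is_direct_tcp=True)     # SMB over TCP port 445

    if conn.connect("fileserver.example.ac.uk", 445):
        # list the contents of the user's home directory on the 'users' share
        for f in conn.listPath("users", "/username"):
            print(f.filename, f.file_size, f.isDirectory)
        conn.close()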

At the time of writing it’s looking really rather good; here it is in two different themes:

[Screenshots: FWA in two themes, including the ‘flatly’ theme]

The responsive design (thanks to Twitter Bootstrap, and a lot of extra code) causes it to work great on mobile:

[Screenshots: FWA’s responsive mobile view]

And here is the new login screen with changing backgrounds, which I’m especially proud of:

[Screenshots: the login screen with changing backgrounds]


I intend to write more about FWA in the next few days and weeks. Until then you can take a look at even more screenshots!

You can also view the project page on GitHub:


Docker is a whale which carries containers on its back

See, it’s a whale! With containers! On its back! Like Discworld, but with a whale instead of a turtle.

Ever since I first played with User Mode Linux (UML) back in the days of Linux 2.4 I’ve been working with virtualisation, normally being involved in server virtualisation activities wherever I’ve worked. The project I’m leading right now at Southampton is the conversion of our entire physical server estate to virtual on VMware.

Despite living and breathing these technologies I’ve never actually liked x86 virtualisation. It is a terrible waste of code and processor time. It virtualises the entire hardware platform as if the guest OS were actually running on real physical hardware – but why? Even this isn’t entirely true anymore: in all modern virtualisation products the guest OS is fully aware it’s being virtualised, with tonnes of ‘tools’ and ‘drivers’ running to facilitate communication between guest and hypervisor. It’s thus a hybrid – a mess of different approaches and compromises.

I entirely blame Microsoft for the growth of this odd x86 virtualisation market. Outside of the x86 world IBM and Sun created hardware-level virtualisation and OS-level virtualisation, but in x86 land, because of the proprietary and slow-moving nature of Windows, vendors sprang up creating the x86 hybrid virtualisation model – part hardware, part software. It meant you could run Windows inside a virtualised container and make better use of hardware – at the cost of enormous overheads and massive duplication of data. One of the most ridiculous things from an architecture perspective is every x86 VM solution emulating a PC BIOS or UEFI firmware instance for every guest. Whatever for!

So for a long time I’ve been hoping that “OS-level” virtualisation would eventually assert itself and become the dominant form of virtualisation. I think it hasn’t because Microsoft joined the x86 virtualisation party – buying Connectix’s Virtual PC technology and later shipping Hyper-V – and rushed off to compete with VMware, so the market has carried on down this odd virtualisation path. Architecturally there will always be a place for this type of virtualisation, but the vast majority of servers and virtual desktops don’t need it. They don’t need to pretend to be running on real hardware. They don’t need to talk to a fake BIOS. Clearly the x86 virtualisation vendors think this too, as each new generation of product has mixed more ‘paravirtualised’ components into the product – to improve performance and cut down on duplication.

So what’s the alternative? Real OS-level virtualisation! There is plenty to choose from, too. Solaris has Zones/Containers. FreeBSD has jails. AIX has WPARs. HP-UX has HP-UX Containers. Linux predictably has lots to choose from: OpenVZ, VServer, lmctfy and LXC to name a few (and predictably, until recently, none were in the upstream kernel). LXC is the one everybody was talking about. The idea was to put acceptable OS-level virtualisation components into the kernel, rather than just taking OpenVZ and shoving it in, which would have ended badly and never been accepted. LXC has therefore taken a long time to write, and has somewhat lost its ‘new! exciting!’ sheen.

LXC remains, however, the right architectural way to do virtualisation. In LXC, and all the other OS-level technologies, the host’s kernel is shared and is used by the guest container. No hardware is virtualised. No kernel is virtualised – only the userland components are. The host’s kernel does all the work, and that’s what the guest operating system uses as its kernel. This eliminates all the useless overheads and allows for easy sharing of userland components too – so you don’t have to install the same operating system N times for N virtualised guests.

Sadly everybody’s experience with LXC for the past few years was along the lines of “oooh, that sounds awesome! Is it ready yet?”, and usually the answer was “not yet… nearly!”. All that changed last month, though, as LXC 1.0 was released and became ‘production ready’. Yay! All we needed now, I thought, was for all the Linux shops to switch away from bulky full-fat x86 hypervisors and start moving to LXC. Instead, by the time LXC 1.0 was released, something else had come along and stolen the show.

Enter Docker. Now, Docker actually is LXC. Without LXC, Docker wouldn’t exist. But Docker extends LXC. It’s the pudding on top which makes it into a platform literally everybody is talking about. Docker is not about virtualising servers; it’s about containerising applications, using LXC underneath. The Docker project says that the aim is to “easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.”

So when I realised Docker was getting massive traction I was displeased, because I wanted LXC to get this traction, and Docker was stealing the show. However, I had missed the point. Docker is revolutionary. I wanted LXC to kill all the waste between the hardware and the server operating system’s userland components – the parts that are my day job. Docker wants to kill that waste, and all the waste in the userland of the operating system as well – the parts I hadn’t considered being a problem.

For years vendors and open source projects have produced applications, released them, and asked an IT department to install and maintain operating systems, install and maintain prerequisite software, and then install and configure the application. Then, usually, another team in the organisation actually runs and maintains the application. Docker has the potential to kill all of that waste. In the new world order the vendor writes the code, creates a container with all the prerequisite OS and userland components (except for the Linux kernel itself), and then releases the container. The customer only has to load the container and then use the application.

It is essentially the fairly well-established “virtual appliance” model seen in VMware/KVM/Hyper-V land, but with all the x86 hypervisor waste removed.

This has many benefits:

  • The software vendor doesn’t have to maintain a full operating system that is expected to work on any number of virtualisation solutions and different fake hardware models. They only have to target LXC, with the host kernel doing all the difficult work.
  • The software vendor can pick and choose whatever userland components they need and properly and fully integrate the application with the userland OS.
  • The software vendor takes care of patching the userland OS and the application. The patching process is integrated. No more OS patches breaking the app. No more OS patching for the IT department to do.
  • The customer IT department’s work is radically and significantly reduced. They only have to deploy the container image – a very easy procedure – and within seconds have a fully set up and ready to use application.
  • An end to dependencies, prerequisites, compatibility issues, lengthy installations, and incorrectly configured operating systems and applications
  • All the benefits of LXC – low overheads, high performance, and an end to duplicating the same operating system
  • An end to having to upgrade and move applications because the guest server operating system is end of life – even when the application isn’t

So, today’s IT platforms probably consist of:

  • A farm of physical servers running a hypervisor platform like VMware or KVM
  • Hundreds if not thousands of virtual machines running only 2-3 different operating system flavours (e.g. RHEL5/6 or Windows Server 2008/2012), with a small number of VMs (<10%) running more exotic things
  • Teams of infrastructure people maintaining the guest operating systems and using OS-level management systems such as RHN, Landscape, Puppet, Chef, Cfengine, Runit, etc and spending a lot of time patching and maintaining operating systems.
  • Teams of application people, usually without root, or even worse with root, having an uneasy relationship with infrastructure teams, installing applications and patching them (or probably not patching them) and maintaining them.

If Docker catches on the way I’d like it to (beyond what even the Docker project envisaged) then I think we’d see:

  • A farm of physical servers running an LXC hypervisor Linux OS
  • Hundreds if not thousands of Docker containers containing whatever the vendor application needs.
  • Teams of application people using the vendor-supplied web interfaces to manage the applications, patching them using vendor patching systems which integrate all the components fully, or just upgrading stateless Docker instances to the latest version.

It seems that this vision is already a reality: CoreOS envisages applications packaged as Docker containers, with CoreOS as the minimalist platform hypervisor underneath. The IT department’s sole job would be to install CoreOS onto hardware and then load Docker containers as needed from vendors, open source projects, and internal software development teams.

This is all very new and cutting edge. Docker 0.9 was only released a few weeks ago. CoreOS’s latest version is a major change. Other exciting areas of development include plans to let you swap out LXC and use OpenVZ, Solaris Zones or FreeBSD jails instead, thus opening Docker up to Solaris and BSD too. This is a very exciting new frontier which, if successful, will totally rewrite how the IT world works. I can’t wait to see what happens next.