Kerberos-protected NFS with Active Directory and the PAC

For years I’ve been trying to use Active Directory’s Kerberos implementation to set up secure NFS4, where NFS4 is configured to require Kerberos tickets so that only a user with a valid ticket (i.e. one who has authenticated to Active Directory) can access their files. This is in stark contrast to NFS with AUTH_SYS – where certain IP addresses are essentially given full access.

The advantage of NFS4/krb5 is that you can share out a protected NFS4 file share to whatever IP addresses you like, safe in the knowledge that only authenticated users can access their files. They have to authenticate with Kerberos first (i.e. against Active Directory) before they can access their files – and only their files. It also solves the ‘root squash’ problem – root cannot access everybody else’s files.
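
For context, here’s roughly what the setup looks like – a hedged sketch with placeholder paths and hostnames, not our real configuration:

# /etc/exports on the NFS4 server - sec=krb5 requires a valid Kerberos ticket
# (krb5i adds integrity protection, krb5p adds encryption as well)
/srv/share    *(rw,sec=krb5)

# On a client (the hostname and paths are placeholders)
mount -t nfs4 -o sec=krb5 nfs.example.com:/srv/share /mnt/share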

However, in the past we were using Windows Server 2003 as our domain controllers – we only upgraded to Server 2012R2 a few months ago. Since upgrading we could finally mount (connect to) NFS4 shares protected by Kerberos. Once mounted, however, users could not use their Kerberos ticket to access their files – permission denied was returned. The logs showed no errors. It was a head-banging-against-a-brick-wall moment.

Everything should have been working, until I discovered an obscure article suggesting that our users were in too many groups. Sure enough, thanks to some odd internal practices relating to software groups and SharePoint, our users were in literally hundreds of groups – but why would this break NFS4? It’s because, as ever, Active Directory isn’t what I’d call a standard Kerberos implementation. Active Directory uses an optional RFC 4120 field called ‘AuthorizationData’ and fills it with a Microsoft-only ‘thing’ called the Privilege Attribute Certificate, or ‘PAC’. It contains all sorts of additional information such as groups, SIDs, password caches, etc. It’s essential to Microsoft servers – but NFS4 doesn’t need it; NFS4 doesn’t send group information.

The good news is you can instruct AD not to send PAC information for your NFS4 server. The procedure is very simple:

  • In the Active Directory Users and Computers tool, select View -> Advanced Features

  • Open the computer object properties of the NFS4 server (i.e. find the computer object for your NFS4 server)

  • Select the Attribute Editor tab

  • Edit the “userAccountControl” attribute

  • The original value will probably be 4096, displayed as “0x1000 = (WORKSTATION_TRUST_ACCOUNT)”. Don’t worry if it isn’t, however.

  • Add 33554432 to the value field.

  • Click OK

  • Ensure the stored value now shows “0x2000000 = (NO_AUTH_DATA_REQUIRED)” among the flags

Once this is done the PAC won’t be added to the Kerberos ticket. This should then allow users to access NFS4 mounts and manage their files – and prevent anybody else managing their files!
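
If you’d rather script this than click through the GUI, the same bit can be set over LDAP. Here’s a minimal sketch using the Python ldap3 module – the domain controller, credentials, base DN and computer account name are all placeholders:

from ldap3 import Server, Connection, MODIFY_REPLACE

NO_AUTH_DATA_REQUIRED = 0x2000000  # the same 33554432 flag as above

# Placeholder DC, credentials and search base
conn = Connection(Server('dc.example.com'), user='EXAMPLE\\admin',
                  password='changeme', auto_bind=True)

# Find the NFS server's computer object (note the trailing $)
conn.search('dc=example,dc=com', '(sAMAccountName=nfsserver$)',
            attributes=['userAccountControl'])
entry = conn.entries[0]
uac = int(entry.userAccountControl.value)

# OR in the flag if it isn't already set
if not uac & NO_AUTH_DATA_REQUIRED:
    conn.modify(entry.entry_dn,
                {'userAccountControl': [(MODIFY_REPLACE, [uac | NO_AUTH_DATA_REQUIRED])]})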

In RHEL7 you should not need to do this, as the ‘svcgssd’ service has been replaced with a new daemon – gss-proxy. This software has been written specifically to cope with the huge size of Active Directory Kerberos tickets. Sadly I don’t have a RHEL7 system (yet) to prove this. I will update this blog post when I do!

Filestore Web Access – or how I fell in love with programming again

When I was 16 I wrote a little ‘CMS’, or website content management system, called IonPanel. It was pretty awful – it was written in PHP and MySQL, was probably terribly insecure, and I mostly programmed it on Windows using IIS. It was, however, terribly exciting to write, and rather popular for a little while. Searching for the right string on Google would find hundreds upon hundreds of websites running the software – and it was open source! Lots of people contributed to it. Several of my friends wrote little CMS packages, but none were as popular as IonPanel, and none as fast and feature-packed. I was very proud of it. Sadly it died of the second-system effect when I attempted to rewrite it for version ‘2.0’. A beta was launched, but then I went to University, I started realising how terrible PHP was, and I gave up. IonPanel slowly died. As time passed I longed for that time again – when I was writing code daily on an open source project that lots of people were using.

Since then I’ve written lots of code for lots of people but nothing has captivated me like IonPanel did – until now – twelve years later. A year or so ago I got the idea of writing a web interface to the University’s file storage platform. I’d recently got into Python and wanted to find a CIFS/SMB library I could use from Python. I found one – albeit badly documented and barely used – and wrote an application around it. Today that application has grown into something I’m extremely proud of. Enter ‘Filestore Web Access’.

Filestore Web Access allows all university students and staff to access their personal and shared files from a modern web browser anywhere in the world. Until I created FWA getting access to files away from the University’s standard desktops was quite difficult, unless you knew how to use SSH!

At the time of writing it’s looking really rather good. Here it is in two different themes:

[Screenshots: FWA in the default and Flatly themes]

The responsive design (thanks to Twitter Bootstrap, and a lot of extra code) means it works great on mobile:

[Screenshots: FWA’s responsive design on mobile]

And I’m especially proud of the new login screen with its changing backgrounds:

[Screenshots: the login screen with its changing backgrounds]

 

I intend to write more about FWA in the next few days and weeks. Until then you can take a look at even more screenshots!

You can also view the project page on GitHub: https://divad.github.io/bargate/

 

Docker is a whale which carries containers on its back

[Image: the Docker logo]

See, it’s a whale! With containers! On its back! Like Discworld, but with a whale instead of a turtle.

Ever since I first played with User Mode Linux (UML) back in the days of Linux 2.4 I’ve been working with virtualisation, normally being involved in server virtualisation activities wherever I’ve worked. The project I’m leading right now at Southampton is the conversion of our entire physical server estate to virtual on VMware.

Despite living and breathing these technologies I’ve never actually liked x86 virtualisation. It is a terrible waste of code and processor time. It virtualises the entire hardware platform as if the guest OS were actually running on real physical hardware – but why? And even this isn’t entirely true anymore – in all modern virtualisation products the guest OS is fully aware it is being virtualised, with tonnes of ‘tools’ and ‘drivers’ facilitating communication between guest and hypervisor. It’s thus a hybrid – a mess of different approaches and compromises.

I entirely blame Microsoft for the growth of this odd x86 virtualisation market. Outside of the x86 world, IBM and Sun created hardware-level and OS-level virtualisation, but in x86 land, because of the proprietary and slow-moving nature of Windows, vendors sprang up creating the x86 hybrid virtualisation model – part hardware, part software. It meant you could run Windows inside a virtualised container and make better use of hardware – at the cost of enormous overheads and massive duplication of data. One of the most ridiculous things from an architecture perspective is every x86 VM solution emulating a PC BIOS or UEFI firmware instance for every guest. Whatever for!

So for a long time I’ve been hoping that OS-level virtualisation would eventually assert itself and become the dominant form of virtualisation. I think it hasn’t because Microsoft joined the x86 virtualisation party with Hyper-V and rushed off to compete with VMware, and so the market has carried on down this odd virtualisation path. Architecturally there will always be a place for this type of virtualisation, but the vast majority of servers and virtual desktops don’t need it. They don’t need to pretend to be running on real hardware. They don’t need to talk to a fake BIOS. Clearly the x86 virtualisation vendors think this too, as each new generation of product has mixed more ‘paravirtualised’ components into the product – to improve performance and cut down on duplication.

So what’s the alternative? Real OS-level virtualisation! There are lots to choose from, too. Solaris has Zones/Containers. FreeBSD has jails. AIX has WPARs. HP-UX has HP-UX Containers. Linux predictably has lots to choose from: OpenVZ, VServer, lmctfy and LXC to name a few (and, predictably, until recently none were in the upstream kernel). LXC is the one everybody was talking about. The idea was to put acceptable OS-level virtualisation components into the mainline kernel, rather than just taking OpenVZ and shoving it in, which would have ended badly and never been accepted. LXC has therefore taken a long time to write, and has somewhat lost its ‘new! exciting!’ sheen.

LXC remains, however, the architecturally right way to do virtualisation. In LXC, and all the other OS-level technologies, the host’s kernel is shared and used by the guest container. No hardware is virtualised. No kernel is virtualised – only the userland components are. The host’s kernel does all the work and is what the guest operating system uses as its kernel. This eliminates all the useless overheads and allows for easy sharing of userland components too – so you don’t have to install the same operating system N times for N virtualised guests.

Sadly, everybody’s experience with LXC for the past few years was along the lines of “oooh, that sounds awesome! Is it ready yet?”, and usually the answer was “not yet… nearly!”. All that changed last month, though, as LXC 1.0 was released and became ‘production ready’. Yay! All we needed now, I thought, was for all the Linux shops to switch away from bulky full-fat x86 hypervisors and start moving to LXC. Instead, by the time LXC 1.0 was released, something else had come along and stolen the show.

Enter Docker. Now, Docker actually is LXC – without LXC, Docker wouldn’t exist. But Docker extends LXC. It’s the pudding on top which makes it into a platform literally everybody is talking about. Docker is not about virtualising servers; it’s about containerising applications, using LXC underneath. The Docker project says the aim is to “easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.”

So when I realised Docker was getting massive traction I was displeased, because I wanted LXC to get this traction, and Docker was stealing the show. However, I had missed the point. Docker is revolutionary. I wanted LXC to kill all the waste between the hardware and the server operating system’s userland components – the parts that are my day job. Docker wants to kill that waste, and all the waste in the userland of the operating system as well – the parts I hadn’t considered being a problem.

For years vendors and open source projects have produced applications, released them, and asked an IT department to install and maintain the operating system, install and maintain the prerequisite software, and then install and configure the application. Then, usually, another team in the organisation actually runs and maintains the application. Docker has the potential to kill all of that waste. In the new world order the vendor writes the code, creates a container with all the prerequisite OS and userland components (except for the Linux kernel itself), and then releases the container. The customer only has to load the container and then use the application.

It is, then, akin to the fairly well established ‘virtual appliances’ seen in VMware/KVM/Hyper-V land, but with all the x86 hypervisor waste removed.

This has many benefits:

  • The software vendor doesn’t have to maintain a full operating system that is expected to work on any number of virtualisation solutions and different fake hardware models. They only have to target LXC, with the host kernel doing all the difficult work.
  • The software vendor can pick and choose whatever userland components they need and properly and fully integrate the application with the userland OS.
  • The software vendor takes care of patching the userland OS and the application. The patching process is integrated. No more OS patches breaking the app. No more OS patching for the IT department to do.
  • The customer IT department’s work is radically and significantly reduced. They only have to deploy the container image – a very easy procedure – and within seconds have a fully set up and ready to use application.
  • An end to dependencies, prerequisites, compatibility issues, lengthy installations, and incorrectly configured operating systems and applications
  • All the benefits of LXC – low overheads, high performance, and an end to the duplication of the same operating system
  • An end to having to upgrade and move applications because the guest server operating system is end of life – even if the application isn’t

So, today’s IT platforms probably consist of:

  • A farm of physical servers running a hypervisor platform like VMware or KVM
  • Hundreds if not thousands of virtual machines running only 2-3 different operating system flavours (e.g. RHEL5/6 or Windows Server 2008/2012), with a small number of VMs (<10%) running exotic different things
  • Teams of infrastructure people maintaining the guest operating systems using OS-level management systems such as RHN, Landscape, Puppet, Chef, CFEngine, runit, etc., and spending a lot of time patching and maintaining operating systems
  • Teams of application people, usually without root, or even worse with root, having an uneasy relationship with infrastructure teams, installing applications and patching them (or probably not patching them) and maintaining them.

If Docker catches on the way I’d like it to (beyond what even the Docker project envisaged) then I think we’d see:

  • A farm of physical servers running an LXC hypervisor Linux OS
  • Hundreds if not thousands of Docker containers containing whatever the vendor application needs.
  • Teams of application people using the vendor supplied web-interfaces to manage the applications, patching them using vendor patching systems which integrate all the components fully, or just by upgrading stateless docker instances to the latest version.

It seems that this vision is already a reality: https://coreos.com/. CoreOS envisages applications packaged as Docker containers, with CoreOS as the minimalist platform hypervisor underneath. The IT department’s sole job would be to install CoreOS onto hardware and then load Docker containers as needed from vendors, open source projects, and internal software development teams.

This is all very new and cutting edge. Docker 0.9 was only released a few weeks ago. CoreOS’s latest version is a major change. Other exciting areas of development with Docker are plans to let you swap out LXC and use OpenVZ or Solaris Zones or FreeBSD jails instead, thus opening Docker up to Solaris and BSD too. This is a very exciting new frontier which, if successful, will totally re-write how the IT world works. I can’t wait to see what happens next.

Performing HTTP SSL server certificate validation from Python or Perl

Update: Since writing this post I’ve switched to using Python Requests, which is a much better way of achieving verified SSL connections.
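
For reference, a minimal sketch of that approach (the URL and CA bundle path are placeholders – with verify left at its default of True, Requests uses its own bundled CA certificates instead):

import requests

# verify= takes either True (use the bundled CA store) or a path to a CA bundle
r = requests.get("https://secure.example.com/",
                 verify="/path/to/certificate-chain-bundle.crt")
r.raise_for_status()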

SSL/TLS, love it or hate it, is the backbone of nearly all online communication. These days most network protocols are written atop HTTP and then wrapped inside TLS (HTTPS) to provide encryption. But HTTPS also provides verification and trust via certificates. In this way you can ensure not only that you are sending your data in an encrypted fashion, but that you are talking to the real server and not a rogue one.

You would think, given the prevalence of systems written using HTTPS as the underlying protocol, that writing an HTTPS client in Python or Perl would be easy, and that all the complex security and verification would be done for you. Sadly not. Simply opening HTTPS connections from Python or Perl is extremely easy – you can use urllib2 in Python and LWP in Perl. Both provide encryption – but certificate verification? Not so easy.

Perl is best placed here because the current version of LWP (any version from 6.00) performs certificate validation. When you connect to an SSL HTTP server it will validate the certificate and ensure that the CN of the certificate matches the hostname you specified. For more details, see the LWP docs for SSL opts. This feature is relatively recent, however – it was only released in March 2011. As such, nearly all current stable Linux distributions ship an older LWP.

Previous releases could be made – with some difficulty – to validate the certificate, but not to verify that the CN of the certificate matched the hostname the code was connecting to. That wasn’t all that useful, of course, because as long as the server had any valid SSL certificate trusted by the client it would work. So for LWP connections using SSL, make sure you update your module to LWP 6 or later – you can do this easily even on old versions of Perl.

What about Python? Surely Python’s famous ease of use and great built-in modules mean it just works? Alas, no. The situation is even worse. As the urllib2 documentation for Python 2.7 says, HTTPS connections do not perform certificate verification. Python 3.0 and 3.1 don’t support it either. Python 3.2 or later, thankfully, does support SSL certificate verification – it has new options for specifying the CA certificate details, and it even does hostname/CN checking. But Python 3.2 is very new as well – first released in February 2011 – and is in virtually no Linux distributions, certainly no stable or enterprise releases.
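
If you do happen to have Python 3.2 or later, a minimal sketch looks something like this (the URL and CA bundle path are placeholders; cafile is the 3.2-era interface, later superseded by passing an ssl context):

import urllib.request

# Python 3.2+: pass a CA bundle and urllib will verify the certificate
# and check the hostname against the CN for you
response = urllib.request.urlopen("https://secure.example.com/",
                                  cafile="/path/to/certificate-chain-bundle.crt")
print(response.status)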

However, there is another option. Many programs written in Python do perform proper SSL certificate checking. A little poking around in the source of some Red Hat utilities revealed the answer – they don’t use urllib, they use libcurl / pycurl instead. The Python curl module supports SSL verification and hostname verification, and as such it is used by virtually all the Python tools which need to talk over HTTPS. Sadly, many other programs don’t use it and were probably written with urllib in the belief that it verified certificates.

Sadly, pycurl is really badly documented, so if you want to find out how to do it, here is a little example:

import pycurl
curl = pycurl.Curl()
curl.setopt(pycurl.URL, "https://secure.domain.com/")
curl.setopt(pycurl.SSL_VERIFYPEER, 1)
curl.setopt(pycurl.SSL_VERIFYHOST, 2)
curl.setopt(pycurl.CAINFO, "/path/to/certificate-chain-bundle.crt")
curl.perform()

The “SSL_VERIFYPEER” flag means that cURL will check the validity of the certificate against the certificate chain / root CA certificates. The “SSL_VERIFYHOST” flag means that cURL will check that the certificate CN matches the hostname you connected to. The latter option must be set to 2 – see the libcurl documentation on SSL_VERIFYHOST for more information.

PXE booting to a Windows WDS boot server from a Linux DHCP server

The University of Southampton is deploying a new Infoblox-based DHCP and DNS service. Underneath, Infoblox is just a custom version of ISC DHCPD and ISC BIND. We also have a Windows-based WDS (Windows Deployment Services) network build server which we want to use as the PXE build server. If you’re not using IP helpers, which most people aren’t, this can be tricky.

The solution is to configure the “next-server” / PXE boot server IP address in Infoblox. As far as I can tell this seems to set DHCP option 66 – the TFTP boot server IP address. However you also need to set the boot filename – which seems to be DHCP option 67. If you’re using a Linux TFTP server and syslinux/pxelinux that filename should be set to ‘pxelinux.0’. The Windows TFTP server of course requires a different path.

It seems that path is ‘/boot/x86/wdsnbp.com’, except Windows uses backslashes, and just putting that path in does not work – you’ll end up with a TFTP access violation error. Instead you need to use backslashes, but escape them, as ISC DHCPD will otherwise think you’re using an escape sequence. You also need to terminate the filename with a NUL character.
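
For illustration, here’s a hedged sketch of what the equivalent raw ISC DHCPD settings would look like (the server address is a placeholder, and the trailing NUL is written here as an octal escape – in Infoblox these values are entered via the GUI rather than edited in dhcpd.conf):

# Hypothetical dhcpd.conf fragment for PXE booting clients to WDS
next-server 192.0.2.10;                 # the WDS/TFTP server (option 66)
filename "boot\\x86\\wdsnbp.com\000";   # the boot file (option 67) - escaped
                                        # backslashes plus a terminating NUL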

You’ll thus need something like this in your Infoblox GUI:

[Screenshot: the Infoblox GUI with the boot server and boot file options set]

And then, voilà, you should be able to PXE boot a client onto a Windows Deployment Services PXE server from an Infoblox DHCP server.

Logging detailed information when Flask deals with exceptions

I use the Flask web development framework, written in Python, as my tool of choice for writing web applications. It’s simple, lightweight, easy and well documented. It makes writing web applications a breeze – freeing me to write the important stuff.

Flask integrates with the standard Python APIs wherever possible and takes care of most things for you. When something goes badly wrong in your code, Flask catches the exception and sends it through Python’s logging framework. This is very useful – in addition to telling the user, Flask can be configured to send logging to a file and to e-mail, so I get to hear about it even if a user doesn’t report a fault. This is where tools like Sentry also fit in – although I don’t use them.

The problem is that default logging, whilst dumping a full stack trace, does not add any Flask or web-specific details. The log format only contains the default log record attributes:

Message type:       ERROR
Location:           /usr/lib/python2.6/site-packages/flask/app.py:1306
Module:             app
Function:           log_exception
Time:               2013-05-03 15:35:57,170
Logger Name:        bargate
Process ID:         1877

Message:

Exception on /login [POST]

The worst part of the above is the lack of relevant information, but also that, because the exception is handled by several wrapper functions, the ‘Location’, ‘Function’ and ‘Module’ parts are all irrelevant – they refer to the function dealing with the exception rather than where it was raised.

Either way, I wanted to extend the log format (the e-mails generated) when exceptions occur. A thread on the Flask mailing list led me toward subclassing parts of Python’s logging classes (as suggested by other developers), but this led me astray. After extensive reading of the ‘logging’ documentation and thinking about the problem in detail, I realised this was nonsense – there was no way to inject additional custom attributes into exception logs like that.

The way to do it is to inject custom data into a log record with the “extra=” keyword argument when calling “logger.error”. The problem, of course, is that code you write within Flask doesn’t (generally) call logger.error at all – Flask does this for you within app.log_exception. The solution then is deliciously simple – and a recommended way of using Flask: subclass Flask itself.

Thus I subclassed Flask like so:

class BargateFlask(Flask):
        def log_exception(self, exc_info):
                """Logs an exception.  This is called by :meth:`handle_exception`
                if debugging is disabled and right before the handler is called.
                This implementation logs the exception as error on the
                :attr:`logger` but sends extra information such as the remote IP
                address, the username, etc.

                .. versionadded:: 0.8
                """

                if 'username' in session:
                        usr = session['username']
                else:
                        usr = 'Not logged in'

                self.logger.error("""Path:                 %s 
HTTP Method:          %s
Client IP Address:    %s
User Agent:           %s
User Platform:        %s
User Browser:         %s
User Browser Version: %s
Username:             %s
""" % (
                        request.path,
                        request.method,
                        request.remote_addr,
                        request.user_agent.string,
                        request.user_agent.platform,
                        request.user_agent.browser,
                        request.user_agent.version,
                        usr,

                ), exc_info=exc_info)
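
For completeness, here’s a minimal sketch of how such a subclass might be wired up to e-mail alerts (the mail host, addresses and subject are placeholders, not the real Bargate configuration):

import logging
from logging.handlers import SMTPHandler

app = BargateFlask(__name__)

# Placeholder SMTP settings - adjust to suit your environment
mail_handler = SMTPHandler(mailhost='smtp.example.com',
                           fromaddr='fwa-noreply@example.com',
                           toaddrs=['admin@example.com'],
                           subject='Bargate error')
mail_handler.setLevel(logging.ERROR)
app.logger.addHandler(mail_handler)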

In log_exception above I decided not to use the extra= keyword argument and modify the Formatter object when setting up e-mail alerts, as that would mean every call to app.logger.error would require me to send the same extra= flags. Instead I added the information I wanted via the message parameter (which achieves the same end result). This way, error e-mails are much more useful:

A fatal error occured in Bargate.

Message type:       ERROR
Location:           /data/fwa/bargate/__init__.py:104
Module:             __init__
Function:           log_exception
Time:               2013-05-05 09:43:59,942
Logger Name:        bargate
Process ID:         12141

Further Details:

Path:                 / 
HTTP Method:          POST
Client IP Address:    152.78.130.147
User Agent:           Wget/1.11.4 Red Hat modified
User Platform:        None
User Browser:         None
User Browser Version: None
Username:             Not logged in