Monday, 20 March 2023

SELinux sucks II

Previously I wrote about SELinux being bad for security on the grounds that it consumed too much of your time for too little benefit. But I recently discovered it actively undermining the security of hosts I am responsible for.

The biggest wins for security are education and regular patching. So ensuring there is a patching mechanism is at the top of my list when building a new host. While the advice from NIST still reads like something written in the 1970s, NCSC and others endorse automated patching. I've not had any issues with Ubuntu/Debian across hundreds of machines running automated patching for several years - but I'm still cautious about how this is implemented.
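
For comparison, on Debian/Ubuntu automated patching amounts to installing the unattended-upgrades package and a couple of apt settings - something like this (the stock file name is shown; check what your release ships):

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";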

I recently inherited some CentOS 7 boxes which did not have automatic patching. While there is a RHEL package, yum-cron, it has quite a history of issues. I would also need to break the packaged files to ensure that my live boxes did not update at the same time as the dev/test ones. I appreciate that delaying the updates to live might still mean a different patch set gets deployed than is on dev/test....but it is a risk which can be reduced. So I just added a simple cron job to run `yum -y update`, logging its stdout and stderr.
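
The job is a one-liner - something along these lines, with a schedule and log path of your choosing (mine here are illustrative):

    # /etc/cron.d/auto-update - patch nightly, keep stdout/stderr for the post-mortem
    30 4 * * * root /usr/bin/yum -y update >>/var/log/yum-auto-update.log 2>&1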

I thought all was good. Cron job was firing. Yum was happily reporting that there was nothing to do. Job done. What's next?

Only it was not working. 

Updates were available. They were not installed. Yum said "No packages marked for update".

Running an update via an interactive session worked fine.

The culprit? SELinux.

It took a bit of experimentation to find this. On a dev system sitting idle (other than my ssh session), SELinux was generating so much noise that the default log rotation config only allowed for 40 minutes of history. By setting up a more frequent cron job I was able to capture the failures in audit.log. So yum was being blocked by SELinux but reporting no errors. In fairness, the absence of errors from yum was not the fault of SELinux....however yum is now produced/maintained by Red Hat.
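
If you want to check for the same thing on your own boxes, the denials show up as AVC records in the audit log. Something like this will surface them (the grep just narrows things down to yum):

    # AVC denials from the recent past
    ausearch -m avc -ts recent | grep -i yum

    # or search the raw log directly
    grep 'avc.*denied' /var/log/audit/audit.log | grep -i yum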

OK....so maybe there was a reason for installing yum-cron after all? So I installed it (I do find it surprising that it EXPECTS manual intervention before it will actually work). Configured it. And nothing. `yum list updates` says there are lots of packages to update. The cron job fires. Yum-cron writes no logs. No updates are installed. I amended the yum-cron config to send an email when patching has been attempted - I'll monitor and see how it goes.
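
For reference, the changes live in /etc/yum/yum-cron.conf and amount to roughly this (the addresses are placeholders):

    [commands]
    # apply updates rather than merely downloading/reporting them
    apply_updates = yes

    [emitters]
    emit_via = email

    [email]
    email_from = yum-cron@example.com
    email_to = root@example.com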

This was an interesting journey. In addition to the issues with RHEL yum-cron, I discovered that CentOS yum-cron has issues all of its own.

So to summarize....

  • SELinux blocks yum when it is run from cron
  • yum does not report these failures
  • CentOS yum-cron can't be limited to installing security fixes only




Friday, 27 January 2023

Usable Memcache

I like simple.

Memcache is simple.

But sometimes it's maybe too simple.

I needed a substrate for holding session data which was accessible from more than one host (and potentially more than one application). Given memcache's lack of persistence I had a look at Couchbase....and found something even more bewildering than Oracle! Redis looked like it wouldn't take me weeks to get running, but while it is frequently touted as a replacement for memcache, I couldn't find any documentation stating whether it has a binary-compatible memcache interface. So back to memcache.
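
As an aside: if the application happens to be PHP, sessions can be pointed at memcache with configuration alone - this assumes the pecl memcached extension is installed:

    ; php.ini - keep sessions in memcache (here a local instance on 11211)
    session.save_handler = memcached
    session.save_path = "127.0.0.1:11211"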

First problem: I want high availability - that means more than one instance. Oh, C(r)AP! Sharding is easy enough, but replication needs a bit more work. A bit of digging and I found mcrouter which, along with haproxy, means I can have load balancing and failover. But recovery is still missing.
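
The mcrouter side is worth a quick sketch. This follows the replicated-pool pattern from the mcrouter wiki - the server addresses are made up - with reads failing over between instances while writes go to all of them synchronously:

    {
      "pools": {
        "A": { "servers": ["10.0.0.1:11211", "10.0.0.2:11211"] }
      },
      "route": {
        "type": "OperationSelectorRoute",
        "operation_policies": {
          "get": "FailoverRoute|Pool|A",
          "set": "AllSyncRoute|Pool|A",
          "add": "AllSyncRoute|Pool|A",
          "delete": "AllSyncRoute|Pool|A"
        },
        "default_policy": "PoolRoute|A"
      }
    }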

The memcache distribution comes with a Perl script which will copy memcache data from another instance. However, when I tested it, I found the copy operation was very lossy - I was only getting around 70% of the data across to the new instance (HIGHLY variable) when hammering the source with a 1:1 mix of updates and gets. A bit disappointing - but understandable really. Maintaining a consistent list of all known items would add a lot of complexity and create performance problems.

After a bit of reading I implemented and tested my own script using lru_crawler. Although still not perfect, in testing it was achieving >99%, again while the source was getting hosed. This is now available at https://github.com/symcbean/mcseed/blob/main/mcseed.php 
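
The mechanism underneath is memcache's lru_crawler metadump command, which streams one line of metadata per key; the script walks that list, fetching each item from the source and writing it to the target (that's the broad approach, anyway). You can see the raw dump with nothing more than nc:

    # ask a running instance for its key metadata (one line per key)
    printf 'lru_crawler metadump all\r\nquit\r\n' | nc 127.0.0.1 11211

    # output lines look like:
    #   key=somekey exp=1679300000 la=1679299000 cas=12 fetch=yes cls=5 size=68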


I disabled the packaged systemd unit file and created my own which uses a shell script (sketched after the list below) to:

  1. block external access to the memcache and mcrouter ports
  2. start the memcache binary
  3. run mcseed to populate the cache
  4. allow incoming traffic to the mcrouter and memcache ports
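
Something like the following - the ports, paths and choice of iptables are my assumptions for illustration, and the mcseed invocation is schematic:

    #!/bin/sh
    # startup wrapper: keep the cache ports closed until the cache is warm
    MC_PORT=11211       # memcache (illustrative)
    MCR_PORT=11311      # mcrouter (illustrative)

    # 1. block external access (loopback stays open for seeding)
    iptables -I INPUT -p tcp --dport $MC_PORT ! -i lo -j REJECT
    iptables -I INPUT -p tcp --dport $MCR_PORT ! -i lo -j REJECT

    # 2. start the memcache binary
    memcached -d -u memcached -p $MC_PORT

    # 3. seed the cache from a surviving peer (arguments omitted here)
    php /usr/local/bin/mcseed.php

    # 4. re-open the ports
    iptables -D INPUT -p tcp --dport $MC_PORT ! -i lo -j REJECT
    iptables -D INPUT -p tcp --dport $MCR_PORT ! -i lo -j REJECT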

Job done.