Distributed Firewall Configuration Part 1: Salt and Pillars
For a long time now, I've recommended a distributed firewall solution and preached its benefits: increased security, distributed load, improved DDoS resistance, and system-by-system fine tuning. But these benefits only come if the system is maintained appropriately, and that can be a pretty big challenge. I've been asked several times how I manage distributed firewall rules. In my experience, using an automation engine (in our case Salt) and a firewall management utility (in our case ferm) has made the rules easy to manage, distribute, and, most importantly, understand.
The first challenge is around organization. The simpler your organization schema is, the less documentation you have to write and the easier it is for other people to pick it up. It also improves performance, because packets are compared against fewer rules before being accepted or rejected. Security, automation, monitoring, scripting: my mantra is layers, and that is definitely in play here. We have multiple layers of organization.
IP Allocation
The first layer of organization is the IP numbering of the hosts. I assigned each task a block of IP addresses large enough for it to scale, but small enough that it is not wasteful. Tasks include: web server, SIE Remote Access server, administrative machines, and customer boxes. Most tasks required few enough hosts to fit in a /29, but a few required a /28. Note that these are merely logical designations and not IP subnets, so you don't lose three usable IPs out of every /29. This allows us to be more specifically secure, in that we can tie firewall rules to allow (or deny) types of servers access to specific systems without needing a rule for each server.
Our IP organization continues with the class of server: dev, qa, staging, or production. Each class gets a subsection of the task block. In most cases prod gets half, with qa, staging, and dev splitting the remainder. That means that, for most tasks, production sits inside a /30 for the purposes of the firewall configuration. This allows production to be locked down so that only the production automation engine can touch production systems, and only the right production systems can hit the production databases.
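For illustration, a single task block might be carved up like this; the addresses and the exact split are invented for this example, and they are logical ranges rather than routed subnets:

# Hypothetical example only: one task block, subdivided by class.
web:
  task-block: 10.0.20.0/29   # whole block reserved for the web task
  prod:       10.0.20.0/30   # production gets half of the block
  qa:         10.0.20.4/31   # qa and staging share the next slice
  dev:        10.0.20.6/31   # dev takes the remainder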
Salt Automation
The second layer of organization is in our Salt automation. Salt has two built-in classification systems: pillars and grains. We use pillars, since they are defined on the server side, which makes management a little simpler. We create pillars to track datacenter, rack number, and physical location; these may not seem important until you have a remote hands technician plug a server into the wrong switch. We also use pillars to track dev, qa, staging, and production status, as well as the task to which a server is assigned. Salt deploys the appropriate software and firewall rules in a uniform, centralized manner, which significantly reduces the manual labor and improves consistency. It simply doesn't forget one of the rules.
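As a rough sketch, the pillar data for a single minion might look something like this; the key names and values are illustrative assumptions, not our exact schema:

# Illustrative pillar sketch only; key names and values are assumptions.
datacenter: dc1
rack: 12
location: 'row 3, U24'
class: production
task: web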
We continue the organization down to the naming of the files. Where it is possible to use custom configuration file names, we append a code indicating that the file is centrally managed; if a file is temporarily independently managed pending automation, that is also indicated in the file name. We also put a note at the top of the file as a gentle reminder that if things aren't fixed the right way, they won't stay fixed.
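For example, a centrally managed file might carry a header along these lines; the exact wording and the SM- code are just one possible convention:

# SM-services.conf -- centrally managed by salt (the SM- prefix marks managed files).
# Local edits will be overwritten on the next highstate; fix the salt source instead.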
Also important to organize are the personnel groups. This won't necessarily be by role: you may have a group of support staff and developers that needs access to one group of servers and only that group, a QA team that only needs access to some services on that group and several others, an operations team that needs access to all the servers, and a team of DBAs that only needs access to the database servers. You may even want to divide access up between production, quality assurance, staging, and development servers. Having your personnel divided up by role and task from a server access perspective allows you to simplify securing your servers against unnecessary internal threats as well.
Monitoring
The final layer is our monitoring system, where we use host groups to watch the services expected on each system and make sure everything is running properly. The host group organization pretty closely matches the task organization in Salt. This organization allows a smaller staff to watch more servers with greater reliability, security, and uptime.
The Details
Now that we have the baseline laid out, we can get into actually implementing the distributed firewall. I'm going to assume, for the sake of sanity, that you have cleaned out all your previous firewall configurations, or that you're using a hardware firewall. If that's not the case, I'd suggest clearing or disabling the old firewall at the same time as pulling in the new configuration. Also, start on non-production systems. You're going to break things; let's figure out what before we get to production.
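If you do need to retire an existing host firewall as part of the rollout, a small Salt state can handle that alongside the ferm deployment. The sketch below assumes a firewalld-based system; the state ID and service name are examples, not part of the configuration shown later in this article:

# Illustrative sketch: stop and disable a pre-existing firewalld service so it
# does not fight with ferm. Adjust the service name to match your distribution.
disable-old-firewall:
  service.dead:
    - name: firewalld
    - enable: False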
A couple of notes about the Salt configuration. Some of this may not make sense until you've read through the ferm configuration section, so I would strongly recommend reading through all of it a couple of times and getting comfortable with it before you begin working with it. This is far from the only way to do it, so feel free to adjust things to your needs. I'm not going to go through the initial Salt installation in this document, as it's a bit outside the scope, and many people will apply these concepts with their current automation engine. But I will offer a couple of recommendations from what we've learned the hard way: use multiple masters, and use git as a back end.
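As a sketch of those two recommendations, the relevant settings look roughly like the following; the hostnames and repository URL are placeholders, and gitfs additionally requires a Python git library (such as pygit2) on the master:

# /etc/salt/master -- serve states from git instead of the local file root (placeholder URL).
fileserver_backend:
  - gitfs
gitfs_remotes:
  - https://git.example.com/salt-states.git

# /etc/salt/minion -- point minions at more than one master (placeholder hostnames).
master:
  - saltmaster1.example.com
  - saltmaster2.example.com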
REMINDER: Do this on testing systems. Do not start with production. Murphy’s Law will make you regret it.
In your file root, you'll create a folder for each of the applications you'll install, with an init.sls and the files that you'll be providing. Throughout this document we'll be looking at a file root tree that looks like this:
top.sls
ferm/init.sls
ferm/ferm.conf
ferm/SM-base.conf
ferm/SM-services.conf
ferm/SM-internal-all.conf
ferm/SM-internal-web.conf
ferm/SM-internal-monitor.conf
ferm/SM-trusted-all.conf
ferm/SM-trusted-dba.conf
ferm/SM-trusted-ops.conf
The top.sls will contain the subdirectories to run:

top.sls:
base:
  '*':
    - ferm
And those point to the init.sls to run:

ferm/init.sls:
ferm:
  pkg:
    - installed
  service:
    - running
    - enable: True
    - watch:
      - pkg: ferm
      - file: /etc/ferm/ferm.conf
      - file: /etc/ferm/conf.d/*

/etc/ferm/ferm.conf:
  file.managed:
    - source: salt://ferm/ferm.conf
    - user: root
    - group: root
    - mode: 644

/etc/ferm/conf.d:
  file.directory:
    - user: root
    - group: root
    - mode: 755
    - makedirs: True

/etc/ferm/conf.d/SM-base.conf:
  file.managed:
    - source: salt://ferm/SM-base.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d

{% if salt['pillar.get']('services') %}
/etc/ferm/conf.d/SM-services.conf:
  file.managed:
    - source: salt://ferm/SM-services.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-services.conf:
  file.absent
{% endif %}

/etc/ferm/conf.d/SM-internal-all.conf:
  file.managed:
    - source: salt://ferm/SM-internal-all.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d

{% if salt['pillar.get']('services:internal-web') %}
/etc/ferm/conf.d/SM-internal-web.conf:
  file.managed:
    - source: salt://ferm/SM-internal-web.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-internal-web.conf:
  file.absent
{% endif %}

{% if salt['pillar.get']('services:internal-monitor') %}
/etc/ferm/conf.d/SM-internal-monitor.conf:
  file.managed:
    - source: salt://ferm/SM-internal-monitor.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-internal-monitor.conf:
  file.absent
{% endif %}

/etc/ferm/conf.d/SM-trusted-all.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-all.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d

{% if salt['pillar.get']('services:trusted-dba') %}
/etc/ferm/conf.d/SM-trusted-dba.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-dba.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-trusted-dba.conf:
  file.absent
{% endif %}

{% if salt['pillar.get']('services:trusted-ops') %}
/etc/ferm/conf.d/SM-trusted-ops.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-ops.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-trusted-ops.conf:
  file.absent
{% endif %}
Breaking down the above: it installs ferm, then configures the service to restart if Salt changes any configuration file or updates the package.
It puts ferm.conf in place, makes the folder /etc/ferm/conf.d, and then in that folder places SM-base.conf, SM-internal-all.conf, and SM-trusted-all.conf. If there are any services at all, it puts in SM-services.conf, and then if any of those services belong in SM-trusted-ops.conf or SM-internal-web.conf (or the other role-specific files), it'll build and place the correct file(s). There is more information on that process under the ferm configuration, later.
You'll also end up in the pillar root with a structure that looks like this:
top.sls
services/public/http
services/public/https
services/internal-all/http
services/internal-all/https
services/internal-web/postgres
services/internal-monitor/snmp
services/internal-monitor/postgres
services/trusted-all/icmp
services/trusted-dba/postgres
services/trusted-ops/postgres
services/trusted-ops/ssh
services/trusted-ops/snmp
Inside the top.sls is where you will define which systems get which pillars.

top.sls:
base:
  'web1.dc1.mydomain.local':
    - services/public/http
    - services/public/https
    - services/trusted-ops/snmp
    - services/internal-monitor/snmp
  'db1.dc1.mydomain.local':
    - services/internal-web/postgres
    - services/trusted-ops/postgres
    - services/trusted-ops/snmp
    - services/trusted-dba/postgres
    - services/internal-monitor/postgres
    - services/internal-monitor/snmp
Inside each of those is an init.sls which defines the pillar structure.

services/public/http/init.sls:
services:
  public:
    http:
      protocol: tcp
      port: 80
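Following the same pattern, services/trusted-ops/ssh/init.sls would look something like this; the protocol and port values here are my assumption, mirroring the http example above:

# Illustrative only: same structure as the http pillar, with assumed values for ssh.
services:
  trusted-ops:
    ssh:
      protocol: tcp
      port: 22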
Once you have the structure completely built out and your organization standard codified into Salt, you'll need to actually configure ferm before you can verify that everything works. We'll cover that in the next article.
Travis Hall is a System Administrator for Farsight Security, Inc.
Read the next part in this series: Distributed Firewall Configuration Part 2: Ferm Configuration with Salt