
As I spin up more and more VMs on my Proxmox hypervisor, I've been getting tired of installing Splunk forwarders on each system. Going out to the Splunk site to grab the wget command, then pulling up the Universal Forwarder manual to remember the steps to get it running, got to be annoying. So, I decided to automate that process using Ansible. I started scratching the surface of what Ansible is capable of in my Alert Framework Development project, but this turned out to be a much more classic automation project, and I learned a lot along the way.

I’m using the following playbook to push out and install the forwarder to single hosts. I’m targeting those hosts through the ansible-playbook command that I’ll run after creating a new VM.

Directory/Playbook Structure:

For anyone looking for more information on the structure of Ansible playbooks, or how to organize your directories for a playbook, I’d highly recommend this link.

This is the directory structure I ended up with:

playbooks/
	Splunk_UF_Install/
		main.yml
		roles/
			common/
				files/
					splunkforwarder-7.2.3-06d57c595b80-Linux-arm.tgz
					splunkforwarder-7.2.3-06d57c595b80-Linux-x86_64.tgz
					user-seed.conf
				tasks/
					main.yml
				vars/
					uf_user_password.yml
 

Breaking this down, we have the main.yml at the playbook’s base:

---
- name: Install Linux Universal Forwarder
  hosts: '{{ host }}'
  vars:
    splunk_working_directory: '/tmp/Splunk/'
    splunk_uf_file_centos: 'splunkforwarder-7.2.3-06d57c595b80-Linux-x86_64.tgz'
    splunk_uf_file_arm: 'splunkforwarder-7.2.3-06d57c595b80-Linux-arm.tgz'
  roles:
    - common
...

This is the part of the playbook that gets called by the ansible-playbook command (see here). It requires a variable to be passed to it to specify the host to deploy the forwarder to. It also handles some basic variable assignments and determines the roles to run, in this case, ‘common’.

Within the common role, we have our files that we’ll need, as well as some additional encrypted variables, and lastly the tasks we’ll need to run in order to get the forwarder installed. Let’s jump into the tasks we’re running, and throughout I’ll reference back to the other involved files so we can keep the flow of information straight.

Tasks – main.yml

We start off by including a variable file from our vars/ directory (more on that later).

---
- include_vars: uf_user_password.yml
 

The next three tasks check how much work is left to do: whether the Splunk forwarder package has already been copied over, and whether Splunk is already installed to the /opt/splunkforwarder/ directory. If either of those is true, it will affect which of the upcoming tasks we skip or execute.

- name: Check if Splunk already installed
  stat:
    path: '/opt/splunkforwarder'
  register: splunk_path_present

- name: Check if Splunk forwarder package already transferred - CentOS
  stat:
    path: "{{splunk_working_directory}}{{splunk_uf_file_centos}}"
  register: splunk_installer_present
  when:
    - ansible_facts['distribution'] == "CentOS"

- name: Check if Splunk forwarder package already transferred - Raspberry Pi
  stat:
    path: "{{splunk_working_directory}}{{splunk_uf_file_arm}}"
  register: splunk_installer_present
  when:
    - ansible_facts['distribution'] != "CentOS"
 

For each of these, we're using the register functionality in Ansible to make note of whether the path exists and whether the forwarder package is already sitting in the working directory. We're also determining which forwarder package to look for based on the Linux distribution that's found.
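If you ever want to sanity-check what one of these registered variables actually contains, a throwaway debug task will print it out. This isn't part of my playbook, just a quick sketch you could drop in temporarily:

- name: Show what the stat check registered (temporary debugging aid only)
  debug:
    var: splunk_path_present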

In my lab environment, I’m only running CentOS7 and Raspbian operating systems, which makes it easy to determine which version of the forwarder package we’ll require. If I end up running some different version of Linux one day, I’ll need to slightly rework the checks.
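If that day comes, one option would be to key off the CPU architecture fact instead of the distribution name and collapse the duplicated checks into one. This is just a sketch I haven't run in my lab, and splunk_uf_file is a new variable I'm introducing purely for illustration:

- name: Pick the forwarder package based on CPU architecture (sketch only)
  set_fact:
    splunk_uf_file: "{{ splunk_uf_file_arm if ansible_facts['architecture'] is match('arm') else splunk_uf_file_centos }}"

- name: Check if Splunk forwarder package already transferred
  stat:
    path: "{{ splunk_working_directory }}{{ splunk_uf_file }}"
  register: splunk_installer_present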

Next, assuming we don’t already have the forwarder copied over or Splunk installed, we use the copy module to transfer the applicable forwarder package over to the working directory declared in the playbook’s main.yml, /tmp/Splunk/. Notice that we’re copying over different versions of the forwarder, based on the type of OS, and again those file variables are set by the playbook’s main.yml.

- name: Copy forwarder over to system - CentOS
  copy: 
    src: /etc/ansible/playbooks/Splunk_UF_Install/roles/common/files/{{splunk_uf_file_centos}}
    dest: "{{splunk_working_directory}}"
    owner: cmbusse
    group: cmbusse
    mode: 0644
  when:
    - splunk_installer_present.stat.exists == false
    - splunk_path_present.stat.exists == false
    - ansible_facts['distribution'] == "CentOS"

- name: Copy forwarder over to system - Raspberry Pi
  copy:
    src: /etc/ansible/playbooks/Splunk_UF_Install/roles/common/files/{{splunk_uf_file_arm}}
    dest: "{{splunk_working_directory}}"
    owner: cmbusse
    group: cmbusse
    mode: 0644
  when:
    - splunk_installer_present.stat.exists == false
    - splunk_path_present.stat.exists == false
    - ansible_facts['distribution'] != "CentOS"
 

Once we have the forwarder package copied over, it's time to untar it to the appropriate directory. This is done using the command module:

- name: Untar Splunk UF Package - CentOS
  command: tar xzf {{splunk_working_directory}}{{splunk_uf_file_centos}} -C /opt
  args:
    warn: False
  become: yes
  become_method: sudo
  when:
    - splunk_path_present.stat.exists == false
    - ansible_facts['distribution'] == "CentOS"

- name: Untar Splunk UF Package - Raspberry Pi
  command: tar xzf {{splunk_working_directory}}{{splunk_uf_file_arm}} -C /opt
  args:
    warn: False
  become: yes
  become_method: sudo
  when:
    - splunk_path_present.stat.exists == false
    - ansible_facts['distribution'] != "CentOS"
 
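As an aside, Ansible's built-in unarchive module could likely do this in a single task instead of shelling out to tar. I stuck with the command module here, but a sketch of the alternative (shown with the CentOS package) would look something like this:

- name: Untar Splunk UF Package (unarchive alternative, not what I'm running)
  unarchive:
    src: "{{ splunk_working_directory }}{{ splunk_uf_file_centos }}"
    dest: /opt
    remote_src: yes   # the tarball already lives on the target, not the control node
  become: yes
  become_method: sudo
  when:
    - splunk_path_present.stat.exists == false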

In the newest versions of the universal forwarder, we need to create a user before we’re able to start the forwarder. Gone are the days of just providing admin:changeme and continuing with the install. To account for this, Splunk has a nifty new way of configuring the admin (or any other user’s) password at install.

If you copy a file called 'user-seed.conf' into the $SPLUNK_HOME/etc/system/local/ directory, Splunk will consume it the first time it runs and use the provided credentials to create your local Splunk user. However, we don't want those credentials just sitting out in the open in the playbook. Luckily, Ansible supports encrypting files with Ansible Vault for use cases just like this.

Start by creating a file called user-seed.conf within your /files/ directory in your playbook:

user-seed.conf:
[user-info]
USERNAME = admin
PASSWORD = <password>

Then, simply run the command:

ansible-vault encrypt user-seed.conf

Ansible will prompt you for a password that you will need in order to work with this file in the future, so make sure it is one you will remember. The user-seed.conf file is now fully encrypted, and worthless to someone looking to snoop around. However, when Ansible makes use of this file in any of your plays (such as a copy), it will check the password you supply against the one used to encrypt the file and, on success, decrypt the file back to plaintext.

This way, our local copy always remains encrypted, but when it is time to deploy the forwarder, Ansible handles the decryption and the Splunk installation continues. Once Splunk starts for the first time, the file is consumed, and all record of the user credentials on the remote machine is gone. This was surprisingly easy to set up, and I was very happy with how smoothly it worked.
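And if you ever need to double-check or rotate those credentials later, ansible-vault can open the encrypted file for you without you having to decrypt it by hand:

ansible-vault view user-seed.conf    # print the decrypted contents to the terminal
ansible-vault edit user-seed.conf    # edit the contents, re-encrypting on save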

So, here we see the user-seed.conf file being copied over to the proper directory, in much the same way the forwarder package was previously:

- name: Copy user-seed.conf to Forwarder Location
  copy:
    src: /etc/ansible/playbooks/Splunk_UF_Install/roles/common/files/user-seed.conf
    dest: /opt/splunkforwarder/etc/system/local/
    owner: cmbusse
    group: cmbusse
    mode: 0644
  become: yes
  become_method: sudo
 

Now that we have the forwarder unpacked and the user-seed.conf file in place, it's time to start the forwarder. This is done with a simple command that starts it and accepts the license:

- name: Start Splunk Forwarder
  command: /opt/splunkforwarder/bin/splunk start --accept-license
  become: yes
  become_method: sudo
 

With Splunk up and running, we’ll want to connect this server up to the deployment server so it can receive whatever apps we want to push to it. This is done through another simple command module. However, in order to add the deployment server, we need to authenticate with the local Splunk user. We can do this on the command line, but we again run into the problem of credentials being stored in the clear. This is where the “include_vars: uf_user_password.yml” comes into play.

Much like we created an encrypted configuration file earlier, you can encrypt any of your standard variable files. Simply create a YAML document with the password for the local Splunk user, encrypt it, and when the playbook runs, Ansible will decrypt it and pass the password along cleanly. That include_vars from the very beginning of the tasks is what pulls in the password for the local Splunk user, {{uf_user_password}}:

- name: Set Deployment Server
  command: /opt/splunkforwarder/bin/splunk set deploy-poll -auth admin:{{uf_user_password}} https://splunk.bussenet.com:8089
  become: yes
  become_method: sudo
 
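For completeness, the uf_user_password.yml file itself is nothing fancy. Before encryption it's just a one-line YAML variable definition (with the real password in place of the placeholder, of course), which then gets locked down the same way user-seed.conf was:

uf_user_password: <password>

ansible-vault encrypt uf_user_password.yml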

Finally, we run the remaining configuration commands required to set up forwarding, enable start on boot, and restart the forwarder to ensure all changes are active:

- name: Set forward-server
  command: /opt/splunkforwarder/bin/splunk add forward-server -auth admin:{{uf_user_password}} splunk.bussenet.com:9997
  become: yes
  become_method: sudo

- name: Enable Boot Start
  command: /opt/splunkforwarder/bin/splunk enable boot-start
  become: yes
  become_method: sudo

- name: Restart Splunk forwarder
  command: /opt/splunkforwarder/bin/splunk restart
  become: yes
  become_method: sudo
...

And that’s the whole playbook!

Demo:

The command we run specifies the playbook in question, has flags to prompt the user to enter both the sudo credentials and the vault credentials, and specifies the host to target:

ansible-playbook --ask-become-pass --ask-vault-pass /etc/ansible/playbooks/Splunk_UF_Install/main.yml --extra-vars "host=10.0.0.185"

After plugging in the sudo credentials (which allow the playbook's user to perform sudo commands during the run) and the vault credentials (which allow the playbook to decrypt the encrypted files), Ansible runs through the above tasks.
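One detail worth calling out, as an assumption about the setup rather than something shown above: because the play targets hosts: '{{ host }}', the value passed in still has to match an entry in the Ansible inventory. A minimal sketch of what that entry might look like, with a made-up group name:

# /etc/ansible/hosts
[lab]
10.0.0.185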

The machine I'm targeting in this example is a CentOS7 box, so you can see how Ansible skips over any Raspberry Pi-specific tasks:

Everything properly comes back as either 'ok' for tasks that ran without making changes, or 'changed' for tasks that had to modify the system.

Conclusion

By running the playbook, we've gone from a completely Splunk-less box to having a forwarder installed, configured, running, and receiving deployed apps:

Before long, we also have logs streaming into Splunk:

Overall, I'm very happy with this project; it ended up exactly where I wanted it to. If you have any comments or suggestions, or would just like to chat, feel free to drop a comment below.
