Over the last few weeks I started to play with Elasticsearch, Fluentd and Kibana, and I wrote some documentation to help deploy them easily.

As you may know, I’m an Ansible fan, so I made Ansible playbooks to deploy a complete infrastructure (server and clients). They will deploy this kind of architecture:

[Diagram: Elasticsearch / Kibana / Fluentd architecture]

On the client side, Fluentd collects syslog and Nginx logs and sends them to the Fluentd server. On the server side, a Fluentd receiver gets the data from the clients and pushes it into Elasticsearch. To keep things simple, Elasticsearch runs on the same server as the Fluentd receiver, and Kibana is installed there as well. An automatic purge of old indices can be configured directly from the playbook using the Curator tool.

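To give an idea of what ends up on the server, the receiver side roughly boils down to a Fluentd configuration like the one below. This is a simplified sketch rather than the exact template shipped in the roles: 24224 is simply Fluentd's default forward port, and the Elasticsearch output assumes the fluent-plugin-elasticsearch plugin with Logstash-style daily indices.

# Receive events forwarded by the Fluentd clients (default forward port)
<source>
  type forward
  port 24224
  bind 0.0.0.0
</source>

# Push everything into the local Elasticsearch instance, using
# Logstash-style daily indices (logstash-YYYY.MM.DD) that Kibana
# and Curator both understand
<match **>
  type elasticsearch
  host localhost
  port 9200
  logstash_format true
  flush_interval 10s
</match>
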
You can find those playbooks on my GitHub or on Ansible Galaxy.

Is it complex to configure these Ansible playbooks? I did my best to make the setup as quick and as customizable as possible (for my context and needs). The first thing you need is a playbook like this (site.yml):

site.yml:

# All machines
- name: fluent-clients
  hosts: all
  user: root
  roles:
    - fluentd
  vars_files:
    - "group_vars/fluentd_client.yml"

# Log
- name: log servers
  hosts: logs
  user: root
  roles:
    - elasticsearch
    - role: nginx
      nginx_sites:
        - server:
           file_name: kibana.domain.com
           server_name: kibana.domain.com
           listen: 80
           root: /usr/share/nginx/www/kibana/src
           location1: {name: /, try_files: "$uri $uri/ /index.html"}
    - kibana
    - fluentd
  vars_files:
    - "host_vars/kibana.yml"

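The playbook above targets a logs group for the server part, so your Ansible inventory has to define it. A minimal inventory could look like the following (the hostnames are just examples to adapt to your environment):

# Fluentd clients (matched by "hosts: all")
[clients]
web1.domain.com
web2.domain.com

# Log server running the Fluentd receiver, Elasticsearch and Kibana ("hosts: logs")
[logs]
kibana.domain.com
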
Then adapt the mandatory options to your needs for the Fluentd clients (fluentd_client.yml):

fluentd_client.yml:

## Fluentd client
# FQDN of the Fluentd server the clients forward their logs to
fluentd_server_fqdn: kibana.domain.com
# Address of elasticsearch used by fluentd
es_fqdn: localhost
es_port: 9200
# Syslog plugin
fluentd_plugin_syslog_ip: 127.0.0.1
fluentd_plugin_syslog_port: 5140

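Roughly speaking, those variables translate into a client-side Fluentd configuration along these lines. Again, this is a simplified sketch and not the exact template from the role: the tags and the Nginx log path are only illustrative, and 24224 is assumed as the default forward port.

# Local syslog input (point rsyslog at this address/port)
<source>
  type syslog
  bind 127.0.0.1
  port 5140
  tag system
</source>

# Tail the default Nginx access log
<source>
  type tail
  path /var/log/nginx/access.log
  pos_file /var/lib/fluentd/nginx-access.pos
  format nginx
  tag nginx.access
</source>

# Forward everything to the Fluentd receiver
<match **>
  type forward
  <server>
    host kibana.domain.com
    port 24224
  </server>
</match>
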
And like this for the log server (kibana.yml):

kibana.yml:

## Fluentd server
# Address of elasticsearch used by fluentd
es_fqdn: localhost
es_port: 9200
# If this machine should forward to Elasticsearch
forward_to_es: True
# Curator tool
install_curator: True
curator_max_keep_days: 90
# Head plugin
install_head: True
# ElasticHQ plugin
install_eshq: False
# Marvel plugin
install_marvel: True

## Kibana
# URL address to reach kibana
dns_url_kibana: kibana.domain.com
# Folder to store Kibana
kibana_path: /usr/share/nginx/www/kibana
# Kibana version from GitHub tag
kibana_tag_version: v3.1.0

## Elasticsearch
# Elasticsearch version from debian repository
es_version: 1.3

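Assuming the Elasticsearch output uses Logstash-style daily indices as in the sketch above, curator_max_keep_days simply tells Curator to drop indices older than the given number of days. Once logs start flowing, you can check that the daily indices show up as expected (Elasticsearch listening on localhost:9200, as configured):

$ curl 'http://localhost:9200/_cat/indices?v'
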
Then launch the Ansible playbook; it will install everything by itself:

$ ansible-playbook site.yml

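If your inventory file is not in Ansible's default location, point the playbook at it explicitly (here using the hypothetical hosts file sketched earlier):

$ ansible-playbook -i hosts site.yml
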
At the end, the Kibana web interface will be ready and all your Fluentd clients will forward their syslog and default Nginx logs to Elasticsearch.

Note: I’ve used my own Nginx role, but you can use any other one to serve the Kibana interface, since Kibana is just an AngularJS application.