Ok! Hopefully we’re now at a point where we have logs with timestamps, tons of information, all structured in a recognizable way.
Let’s now pump our logs into an Elasticsearch database, so that we can start searching, filtering, and distilling information out of them!
Elasticsearch, work your magic please! Right?
When people talk about Elasticsearch, they often seem to mean a single piece of software. Strictly speaking, though, “Elasticsearch” is only the database.
To get data in and out of that database, you’ll need multiple software components working together, forming an Elastic Stack.
To get from a load of separate log files to one searchable overview, we need to execute a couple of steps: ship the logs, parse them into structured data, store them, and search and visualize them. For each step, we need a specific component (see the sketch below).
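As a rough map of which component does what, here’s a minimal Docker Compose sketch of the services involved. It’s based on the standard Elastic Stack line-up (Filebeat, Logstash, Elasticsearch, Kibana), so the service names and versions are illustrative and not necessarily identical to what’s in my repository:

version: "3"
services:

  filebeat:                  # reads the raw log files and ships them onwards
    image: "docker.elastic.co/beats/filebeat:6.6.0"

  logstash:                  # parses each log line into structured fields
    image: "docker.elastic.co/logstash/logstash:6.6.0"

  elasticsearch:             # stores and indexes the structured data
    image: "docker.elastic.co/elasticsearch/elasticsearch:6.6.0"
    environment:
      - discovery.type=single-node   # run as a single-node cluster for this demo set-up

  kibana:                    # lets you search the data and build (real-time) graphs
    image: "docker.elastic.co/kibana/kibana:6.6.0"

In short: Filebeat ships, Logstash parses, Elasticsearch stores and indexes, and Kibana visualizes.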
At the end of this series, you'll have:
- mapped the meaningful parts of your logs in a database,
- the ability to easily create (real-time) graphs, allowing you to visualize patterns.
When we have a look at the official docs, we’ll see that Elastic provides all of these components itself.
To set up the Elastic Stack software, I’m using Docker and Docker Compose. The reason: Docker makes it very easy to reproduce the same set-up. If you have Docker and Docker Compose installed, reproducing my setup is simply a matter of cloning the Git repository and executing “docker-compose up”.
If you cannot or do not want to use Docker, or want to install the Elastic Stack software manually, everything in this article still applies. It’s just a matter of putting the configuration files in the right place.
You can find an example of a Docker and Docker Compose setup on my GitHub.
All I need to do is:
git clone https://github.com/stainii/ElasticSearchForUnionVMS
cd ElasticSearchForUnionVMS
docker-compose up -d
You can find all necessary config files in the repository.
With Docker, I boot up every module and give the containers access to the config files.
An example section of my docker-compose.yml file:
...
  filebeat:
    image: "docker.elastic.co/beats/filebeat:6.6.0"
    volumes:
      # here, I map the config
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      # here, I give Filebeat access to the logs
      - /app/logs/:/app/logs/
      # Filebeat's data will be synced with my local computer.
      # When the Filebeat container gets killed, I don't lose the data.
      - ./filebeat/data:/usr/share/filebeat/data
    networks:
      - elk
...
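To give you a first idea of what such a mapped config file contains, here’s a minimal filebeat.yml sketch. It assumes the logs live under /app/logs/ (the directory mounted above) and that Filebeat ships them to a Logstash service on the elk network; the output host and port are illustrative, not necessarily what my repository uses:

filebeat.inputs:
  - type: log                      # tail plain log files
    enabled: true
    paths:
      - /app/logs/*.log            # the directory mounted into the container

output.logstash:
  hosts: ["logstash:5044"]         # illustrative: a Logstash service named "logstash" on the elk network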
Ok… so I’ve provided example config files. But what’s in them, exactly? And how do you tune them to your needs?
In the next parts of this series, we’ll go deeper into configuring every module.