Elastic Stack installation on CentOS 7
Preparation
- CentOS 7 64-bit with 4 GB RAM
- Open the ports for Elastic in the firewall (a command to verify the rules follows after the list):
1. sudo firewall-cmd --zone=public --add-port=9200/tcp --permanent # Elasticsearch HTTP
2. sudo firewall-cmd --zone=public --add-port=9300/tcp --permanent # Elasticsearch TCP transport
3. sudo firewall-cmd --reload
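To confirm that both rules are active, the open ports can be listed. This check is not part of the original steps and only assumes that firewalld is running:
sudo firewall-cmd --zone=public --list-ports # should show 9200/tcp and 9300/tcp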
Key
- Meaning of the colors in this guide:
- elasticsearch.repo → text written in red is a free-form name you can choose yourself, whereas text written in black is fixed and must not be changed.
- Code examples appear in gray boxes.
- An HTTP link together with a curl or import command is executed in the VM/Moba; without curl it can be opened in any web browser.
- This guide assumes a preinstalled CentOS 7. Directories and commands may vary from OS to OS; where the directory/command is CentOS 7 specific it is shown as blue text, otherwise as orange text.
If you are not working on CentOS 7, these links will help:
https://www.elastic.co/guide/en/logstash/7.6/dir-layout.html
https://www.elastic.co/guide/en/logstash/7.6/running-logstash.html
Step 1 - Install Java/OpenJDK
Java version 8 or 11 is required for the installation:
yum install java-1.8.0-openjdk-devel -y
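If Java 11 is preferred over Java 8, the corresponding OpenJDK package can be installed instead; the package name below is the one provided by the CentOS 7 repositories:
yum install java-11-openjdk-devel -y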
Step 2 - Check if installed correctly
java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
Step 3 - Creating the Elastic Repo
First, create a file named elasticsearch.repo in the directory /etc/yum.repos.d/ and copy the following into it (a non-interactive way to create the file is sketched after the listing):
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
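As a suggested shortcut (any text editor works just as well), the repo file can be created non-interactively with a heredoc, and the Elastic GPG key can be imported so that yum is able to verify the package signatures:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch # import the signing key
sudo tee /etc/yum.repos.d/elasticsearch.repo > /dev/null <<'EOF'
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF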
After that the following command is executed:
yum install --enablerepo=elasticsearch elasticsearch -y
Step 4 - Starting Elasticsearch with systemd
For Elasticsearch to start automatically at boot, the following commands must be executed:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
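Whether autostart is actually enabled can be verified afterwards; the expected output is "enabled":
systemctl is-enabled elasticsearch.service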
Step 5 - Starting and stopping Elasticsearch manually (Optional)
After that, Elasticsearch can be started or stopped with the following commands:
sudo systemctl start elasticsearch.service # start
sudo systemctl stop elasticsearch.service # stop
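A quick way to see whether the service is currently running (not part of the original steps) is the systemd status command:
sudo systemctl status elasticsearch.service # shows "active (running)" once started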
Step 6 - Verify that Elasticsearch has been started
Elasticsearch does not print any feedback when it starts. Only if there are files in its log directory was it installed and started correctly:
ls /var/log/elasticsearch/
Alternatively you can look in journalctl:
journalctl --unit elasticsearch
May 06 04:14:36 localhost.localdomain systemd[1]: Started Elasticsearch.
Step 7 - Check if everything has worked out
By sending an HTTP request to port 9200 on localhost, you can verify that everything works:
curl -X GET "localhost:9200/"
{
  "name" : "localhost.localdomain",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "2CQpHaNnTs6mW3ntb65Z7A",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
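If you also want to see the cluster state, the cluster health API gives a compact summary; on a single node a status of green or yellow is fine:
curl -X GET "localhost:9200/_cluster/health?pretty"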
Step 8 - Assign a host IP to Elastic
By default, Elasticsearch binds to localhost as the host IP. This should be changed in the file /etc/elasticsearch/elasticsearch.yml:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: <IP of the VM on which Elastic was installed>
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
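After changing the configuration, Elasticsearch must be restarted; the check from step 7 can then be repeated against the new address. The IP below is only the example address used in step 9, replace it with the IP of your VM:
sudo systemctl restart elasticsearch.service
curl -X GET "192.168.20.100:9200/"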
Step 9 - Interesting commands
Create index logs:
curl -X PUT "192.168.20.100:9200/logs" -H 'Content-Type: application/json' -d'{ "settings" : { "index" : { } }}'
Delete index logs:
curl -XDELETE "192.168.20.100:9200/logs"
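Two more commands that are often useful while experimenting (they use the same example IP as above): list all indices and index a test document into logs:
curl -X GET "192.168.20.100:9200/_cat/indices?v" # list all indices
curl -X POST "192.168.20.100:9200/logs/_doc" -H 'Content-Type: application/json' -d'{"message": "test entry"}' # index a sample document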
To be on the safe side, here is the default Elasticsearch configuration, which can be found in the file /etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true