
ELK Configuration

Installation- Quick Configuration

For a quick installation of LogStash, we need to download LogStash, ElasticSearch, Kibana and Java 8. The broker module (Redis) will not be used.

Download the following RPMs from the ElasticSearch website; LogStash, ElasticSearch and Kibana are all available there.

  • LogStash (v6.2.3 as of this writing)
  • ElasticSearch (v6.2.3 as of this writing)
  • Kibana (v6.2.3 as of this writing)
  • Oracle JDK 1.8.0_144
Log Message Flow Diagram (figure: LogStash-AWS-Generic).

Dependencies

• JDK 1.8.x (final version or later)
• The Node.js runtime and build libraries for Kibana

ElasticSearch Configurations

ElasticSearch is used as the primary storage for all log data. Its core is the Lucene search engine, which provides fast, clustered indexing of the logs.

Download the RPM and install it using sudo rpm -i elasticsearch-xx.rpm. The installation places the ElasticSearch folder into /usr/share/elasticsearch/

Change directory to /usr/share/elasticsearch/
Open the ./bin/elasticsearch launch script and set the JVM environment variables:
## ES_HEAP_SIZE — sets both the minimum and maximum heap to one value (recommended alternative to the -Xms/-Xmx flags below)

ES_JAVA_OPTS="-Xms256m -Xmx512m"
JAVA_HOME=/usr/java/jdk1.8.0_144

In order to install specific ElasticSearch visualization plugins, set JAVA_HOME in the ./bin/plugin script as well.

JAVA_HOME=/usr/java/jdk1.8.0_144

Now configure the following parameters in the /etc/elasticsearch/elasticsearch.yml file. (sudo permission might be required to edit this file)

### Cluster Configurations
cluster.name: elasticsearch
node.name: "MasterDB"
node.master: true
# Allow this node to store data (enabled by default):
node.data: true
node.rack: rack314
### Number of default shards and replicas
index.number_of_shards: 5
index.number_of_replicas: 0
## Path to data files
path.data: /usr/share/elasticsearch/data
# Network binding (local ElasticSearch binding)
network.bind_host: 192.168.1.69
network.publish_host: 192.168.1.69
network.host: 192.168.1.69
# Set a custom port for node-to-node communication (9300 by default):
transport.tcp.port: 9300
transport.tcp.compress: true
# Set a custom port to listen for HTTP traffic:
http.port: 9200

Once the above values are set correctly (the IP addresses in particular), save the file. For better visibility of the running JVM, copy the java binary to javaelastic and set that name in the ./bin/elasticsearch file (as follows).

if [ -x "$JAVA_HOME/bin/javaelastic" ]; then
 JAVA="$JAVA_HOME/bin/javaelastic"
else
 JAVA=`which javaelastic`
fi

Create the ElasticSearch data folder:

$> sudo mkdir /usr/share/elasticsearch/data
$> sudo chown -Rf elasticsearch:elasticsearch /usr/share/elasticsearch/data

To start and stop the service:

$> sudo service elasticsearch start
$> sudo service elasticsearch stop
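Once the service is up, the node should answer on the HTTP port configured above. A minimal sanity check might look like the following; the extract_status helper and the sample JSON are illustrative only, and the IP is the one assumed from elasticsearch.yml:

```shell
# Shape of a GET /_cluster/health response (sample, for illustration):
SAMPLE='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'

# Pull the "status" field (green/yellow/red) out of the JSON:
extract_status() { echo "$1" | sed 's/.*"status":"\([a-z]*\)".*/\1/'; }

extract_status "$SAMPLE"   # prints: green

# Against the live node from elasticsearch.yml above:
#   curl -s http://192.168.1.69:9200/_cluster/health
```

A status of green or yellow means the node is serving; red indicates unassigned primary shards.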

ElasticSearch Plugins (further reading)

https://blog.codecentric.de/en/2014/03/elasticsearch-monitoring-and-management-plugins/

Kibana Configuration

Before we configure LogStash, we need to complete the visualization tool (Kibana). Kibana runs on Node.js, so install the Node.js runtime along with the native build toolchain:

$> sudo yum install nodejs*
$> sudo yum install gcc-c*

Now download and extract the kibana6.2.3.tgz file to the /data/ folder, then point the extracted kibana6.2.3 configuration at the ElasticSearch IP.

Configure the Kibana

First edit the kibana.yml file and enable the following lines, making the appropriate IP address changes:

server.port: 5601
server.host: "{{Kibana-Server-IP}}"
elasticsearch.url: "http://{{ElasticServer-IP}}:9200"

Start the Kibana service and point your browser to http://[kibana_IP]:5601/
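A quick reachability check before opening the browser; the probe below is a sketch, and the IP is a hypothetical stand-in for your {{Kibana-Server-IP}}:

```shell
KIBANA_URL="http://192.168.1.70:5601"   # hypothetical Kibana host; use your own
# Kibana exposes a status endpoint at /api/status:
if command -v curl >/dev/null 2>&1 && curl -s -m 2 "${KIBANA_URL}/api/status" >/dev/null 2>&1; then
  echo "kibana reachable at ${KIBANA_URL}"
else
  echo "kibana not reachable at ${KIBANA_URL} (start the service first)"
fi
```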

The LogStash configuration is as follows. LogStash itself combines and processes the file inputs and logging sockets. The following instructions will guide you through the setup and configuration; create any folders or files that are not present.
LogStash Listener
There are two ways of installing LogStash: from the tar ball, or from the RPM. We will use the standard RPM installation in the following example.
Using the standard RPM (installation files get copied to the /usr/share folder)

  1. Download the RPM file from the ElasticSearch website.
  2. Install the downloaded RPM (sudo rpm -i logstash-6.x.x.rpm).
  3. Copy the following configuration block to /etc/logstash/conf.d/lumberjack.conf
The alternative option is to download the tar ball and extract/configure it manually.
#For simple file based logging.
input {
 file {
  type => "apache" 
  path => "/data/apache/logs/server.log"
 }

## For SocketAppender, listen on 4477
 tcp {
  type => "tomcat"
  port => 4477
 }
}

##### Multiline filter needed for Log4J multiline exceptions ######
## Note: on LogStash 5.x and later this filter plugin must be installed
## separately (or use the multiline codec on the input instead).
filter {
  multiline {
   patterns_dir => "/etc/logstash/patterns"
   pattern => "(^%{TOMCAT_DATESTAMP})|(^%{CATALINA_DATESTAMP})"
   negate => true
   what => "previous"
  }
}

## Output to your ElasticSearch IP (LogStash 2.x and later use hosts => [...];
## older releases used separate protocol/host/port settings)
output {
 elasticsearch {
  index => "log-%{type}-%{+YYYY.MM.dd}"
  hosts => ["192.168.30.14:9200"]
 }
}

Save this lumberjack.conf file with the correct ElasticSearch host IP. The type field we set in the input{} section becomes part of the index name; separating indexes by type makes them easier to manage.

For example: certain indexes can be deleted after a few days, while others can be kept for months.
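Such a retention policy can be sketched as a nightly cleanup job; the 7-day window and the curl call below are illustrative assumptions, not part of LogStash:

```shell
RETENTION_DAYS=7
# Index names follow log-<type>-YYYY.MM.dd, so the cutoff date maps directly
# to the index name to delete (GNU date):
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)
echo "would delete index: log-apache-${CUTOFF}"
# Against the live cluster (uncomment to actually delete):
#   curl -XDELETE "http://192.168.30.14:9200/log-apache-${CUTOFF}"
```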

Now add the following two lines to /etc/logstash/patterns/grok-patterns and save the file (create the /patterns folder if needed):

CATALINA_DATESTAMP %{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM)
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}

An index name is constructed from the format string in lumberjack.conf.

Example:

"log-%{type}-%{+YYYY.MM.dd}" translates into either "log-apache-2015.11.17" or "log-tomcat-2015.11.17", depending on which input produced the event.

If you have installed LogStash using the RPM, use the following commands to stop and start it:

$> sudo service logstash stop
$> sudo service logstash start
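Before starting the service, the pipeline file can be validated; the --config.test_and_exit flag applies to LogStash 5.x and later, and the guard below simply skips the check where LogStash is not installed:

```shell
CONF=/etc/logstash/conf.d/lumberjack.conf
LS=/usr/share/logstash/bin/logstash
if [ -x "$LS" ]; then
  # Parse the pipeline and exit; reports syntax errors without starting.
  "$LS" -f "$CONF" --config.test_and_exit
else
  echo "logstash not found at $LS (run this on the LogStash host)"
fi
```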

The preferred choice is to have JDK 1.8 installed in /usr/java/. Create a javalogstashprod executable in your /usr/java/jdk1.8.x.x/bin folder:

$> cp /usr/java/jdk1.8.x/bin/java /usr/java/jdk1.8.x/bin/javalogstashprod
$> cp /usr/java/jdk1.8.x/bin/java /usr/java/jdk1.8.x/bin/javalogstashcons

Open /etc/init.d/logstash and locate the program= variable for the LogStash installation path. Change the Java executable name in the [install-path]/logstash/bin/logstash.lib.sh file to reflect the new name, as shown below.

setup_java() {
  if [ -z "$JAVACMD" ] ; then
    if [ -n "$JAVA_HOME" ] ; then
      JAVACMD="$JAVA_HOME/bin/javalogstashprod"
    else
      JAVACMD="javalogstashprod"
    fi
  fi
}

Log4J Configurations

Apart from reading log files from disk, LogStash can ingest TCP sockets as well. The following configuration is required on the JBoss / Tomcat servers; the SocketAppender below sends log messages to the LogStash listener, which forwards them on to ElasticSearch.

<appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
 <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
 <param name="RemoteHost" value="192.168.1.13"/>
 <param name="Port" value="4477"/>
 <param name="ReconnectionDelay" value="10000"/>
 <param name="Threshold" value="INFO"/>
  <layout class="org.apache.log4j.PatternLayout">
   <param name="ConversionPattern" value="%d %-5r %-5p [%c] (%t:%x) %m%n"/>
  </layout>
</appender>
<root>
  <appender-ref ref="CONSOLE"/>
  <priority value="INFO" />
  <appender-ref ref="FILE"/>
  <appender-ref ref="LOGSTASH"/>
</root>

The 192.168.1.13 IP is the LogStash server, which listens on port 4477.
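A quick end-to-end check is to push a hand-written log line into the listener port; nc (netcat) and the sample message below are assumptions, and the probe skips itself if the listener is unreachable:

```shell
LOGSTASH_HOST=192.168.1.13   # the LogStash listener above
MSG='2015-11-17 10:00:00,123 INFO  [com.example.Smoke] (main:) smoke test'
if command -v nc >/dev/null 2>&1 && nc -z -w 2 "$LOGSTASH_HOST" 4477 2>/dev/null; then
  echo "$MSG" | nc -w 2 "$LOGSTASH_HOST" 4477
  echo "sent; look for a log-tomcat-* entry in Kibana"
else
  echo "listener not reachable; skipping smoke test"
fi
```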

  1. Start ElasticSearch first, as it is the primary database for this setup.
  2. Start Kibana and check for any errors. If it connects, Kibana will prompt you to select the appropriate index.
  3. Start LogStash. Loading all the plugins takes about a minute.
  4. Kibana should now begin to show the logs (after you have selected the index).
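The sequence above can be sketched as a small start script; the echo keeps it a dry run, so drop it to actually invoke the service commands:

```shell
# Dependency order: storage first, then the UI, then the ingest pipeline.
for svc in elasticsearch kibana logstash; do
  echo "sudo service $svc start"
done
```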

If you have any questions, please feel free to contact me shawnbrito@gmail.com

 

About Shawn Brito

I'm a guy searching for amazing new things to do and adventures to go on, travelling across Sri Lanka and video blogging.
