Monday 30 December 2013

Logstash for Weblogic - Part III - Using GROK patterns

This post explains the grok filter, which gives more flexibility in parsing and analyzing the logs. WebLogic SOA logs contain information like severity, host details, composite details and timestamps, and this information becomes much more useful when we use Logstash as a centralized logging solution across multiple environments.

Using grok, we can parse unstructured log data into something structured and queryable. Logstash ships with about 120 predefined patterns, which are available here: https://github.com/logstash/logstash/tree/v1.3.2/patterns

Documentation for the grok filter itself is available here: http://logstash.net/docs/1.3.2/filters/grok
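For orientation, grok patterns are referenced as %{PATTERNNAME:fieldname}; for example, %{WORD:severity} matches a single word in the log line and stores it in a field named severity. (This is generic grok syntax, not anything WebLogic-specific.)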

The picture below explains the structure of the WebLogic admin log and how we can identify the pattern to parse it using grok.


This is the format of the admin logs; you can work out your own pattern using the 120 predefined patterns at https://github.com/logstash/logstash/tree/v1.3.2/patterns

"####<%{DATA:wls_timestamp}> <%{WORD:severity}> <%{DATA:wls_topic}> %{HOST:hostname}> <(%{WORD:server})?> %{GREEDYDATA:logmessage}"

The uppercase names (DATA, WORD, HOST, GREEDYDATA) are predefined grok patterns. When a log entry matches, the text captured by each pattern is indexed under the field name that follows the colon (wls_timestamp, severity, wls_topic, hostname, server and logmessage).
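To make this concrete, here is a made-up admin log entry of that shape (the values are invented for illustration, not taken from a real log) together with the fields the pattern would extract from it:

####<Dec 30, 2013 10:15:22 AM GMT> <Warning> <Diagnostics> <soahost01> <AdminServer> <Unable to read the log file>

wls_timestamp = Dec 30, 2013 10:15:22 AM GMT
severity      = Warning
wls_topic     = Diagnostics
hostname      = soahost01
server        = AdminServer
logmessage    = <Unable to read the log file>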

Use the config below to apply the grok filter and indexing; you can adapt it for different logs. You also need to include the multiline filter so that multi-line Java exceptions are kept together, as discussed in the previous post.

input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
  }
}

filter {
  # Collapse multi-line entries: every WebLogic log entry starts with ####,
  # so any line that does not is appended to the previous event.
  multiline {
    type => "ADMdomainlog"
    pattern => "^####"
    negate => true
    what => "previous"
  }
  # Parse each entry into named fields and tag it with a friendly log name.
  grok {
    type => "ADMdomainlog"
    pattern => ["####<%{DATA:wls_timestamp}> <%{WORD:severity}> <%{DATA:wls_topic}> <%{HOST:hostname}> <(%{WORD:server})?> %{GREEDYDATA:logmessage}"]
    add_field => ["Log", "Admin Domain Log"]
  }
}

output {
  elasticsearch { embedded => true }
}

Run Logstash and open Kibana to view the logs; you can see the new fields created by your grok pattern and can even filter on them, as shown in the screenshots below.
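For example (the exact values depend on your own logs), a query such as severity:"Error" or Log:"Admin Domain Log" in the Kibana search bar filters events on the fields created by the grok pattern and the add_field setting above.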





Grok adds a lot of flexibility in analyzing the logs, and we can use it even more effectively once we define our own dashboard in Kibana, which will be discussed in further posts...

Thursday 26 December 2013

Logstash for Weblogic - Part II - Using multiline filters

Normally our server logs look like the one below, so if you use the config file from the first post, each line of a single log event is captured as a separate event, which leads to confusion..!!




There is a way to overcome this problem in Logstash by using a filter called multiline. This filter collapses multi-line messages from a single source into one event; the goal is to join multi-line messages from files into a single event, for example a Java exception and its stack trace.

The general syntax of the multiline filter is:

filter {
  multiline {
    type => "type"
    pattern => "pattern, a regexp"
    negate => boolean
    what => "previous" or "next"
  }
}

Here, the 'regexp' should match whatever you believe indicates the start of a log entry. In our case every WebLogic log entry starts with ####, so we match on ^####; the ^ anchor means the line starts with ####.

The 'what' must be "previous" or "next" and indicates the relation to the multi-line event. Here we use "previous" because the continuation lines belong to the previous event.
The 'negate' can be "true" or "false" (defaults to false). With negate set to true, any line that does not match ^#### is treated as part of the previous event.



You need to add the filter between the input and output sections, as in the config file below.

input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
  }
}

filter {
  multiline {
    type => "ADMdomainlog"
    pattern => "^####"
    negate => true
    what => "previous"
  }
}

output {
  elasticsearch { embedded => true }
}
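To illustrate what the multiline filter does, take an invented entry like the one below (the exception is made up for illustration):

####<Dec 26, 2013 11:05:10 AM GMT> <Error> <WebLogicServer> <soahost01> <AdminServer> <Exception occurred>
java.lang.NullPointerException
        at com.example.Foo.bar(Foo.java:42)

Without the filter these would be stored as three separate events; with it, only the first line matches ^####, so the two stack trace lines are appended to the previous line and the whole exception becomes a single event.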




Save this file as sample.conf. Run Logstash again and feed the log file with your sample entries; you can now see that the whole Java exception log entry is captured as a single event. This simple multiline filter helps to solve the problem.

View in Kibana:



Please contact me in case of any doubts. In the next post I will share the grok filter, which gives more flexibility in analyzing the logs.

Wednesday 25 December 2013

Logstash for Weblogic and SOA Part - I

For those who wonder what Logstash is: it is an open source tool for managing events and logs. It can be used to collect logs, parse them and store them for later use (for example, for searching). Speaking of searching, Logstash comes with a web interface for searching and drilling into all of your logs.

All the information required to deploy Logstash is available on the official website (http://logstash.net/), but this post shares my experience and learnings from using it in our project, especially for analyzing WebLogic and SOA logs...

Step 0:
  1. Download https://download.elasticsearch.org/logstash/logstash/logstash-1.2.2-flatjar.jar
  2. It works well on Windows as well, so no worries for a POC
  3. This one is very important; I spent hours before it clicked. By default Logstash reads only live or new log lines, not the existing content of the files in your log folders. There is a way to read those existing logs, which is shared at the end of this post.
  4. Logstash ships with Elasticsearch as the storage engine and Kibana to display the parsed logs
Another important thing in Logstash is the configuration file:

  1. The configuration file is a simple .conf file that tells Logstash how to read and parse the logs
  2. It primarily deals with inputs, outputs and filters
    1. Inputs define where Logstash reads from, for example a file path to the logs
    2. Outputs define where the parsed logs are sent and how they are viewed (filters sit in between and transform the events)
Each of these sections configures plugins; all available plugins are listed at http://logstash.net/docs/1.2.2/ under the plugin documentation.

To configure Logstash on a standalone server, create a simple configuration file like the one sketched below.
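A minimal sketch of such a file, consistent with the stdin and file inputs and the embedded Elasticsearch output used elsewhere in this series (adjust the log path to your own environment), looks like this:

input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
  }
}

output {
  elasticsearch { embedded => true }
}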



Save it as sample.conf. Now we need to run Logstash using the syntax below, which is very simple. Before that, if you are trying this on Windows, set JAVA_HOME in your environment variables; on Linux you may need to run with the full Java path. Both examples are provided below.

java -jar logstash-1.2.2-flatjar.jar agent -f sample.conf  --- For Windows

/u01/dev/fmwadm/Middleware/jrockit_160_29_D1.2.0-10/jre/bin/java -jar logstash-1.2.2-flatjar.jar agent -f sample.conf --- For Linux

Make sure that you have placed an empty soa_domain.log file in the path mentioned in the config. After executing this command, paste your log entries into the soa_domain.log file so that Logstash picks them up as new entries :).
These logs are captured as events and stored in Elasticsearch; to view them in a UI, Kibana is used. To achieve this, stop the above command with Ctrl+C and execute the command below, which starts Logstash along with the Kibana web service.

java -jar logstash-1.2.2-flatjar.jar agent -f sample.conf -- web





The Kibana web interface uses port 9292 by default, so make sure nothing else is running on that port. Open http://localhost:9292/ to bring up Kibana. There are many options in Kibana which I will share in a separate post.



For now, use the default dashboard and view your logs, which are parsed with default fields like _id, _index, type, @timestamp and @version.



Now you can search the logs as you wish...

The same steps work on Linux or Unix as well.
We do face a problem when a single log event spans multiple lines (for example a Java exception with its stack trace); that is where we need a filter, which I will share in the next post.
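Regarding the note in Step 0 about existing logs: one way to read them from the beginning, assuming the file input in your Logstash version supports the start_position and sincedb_path options, is sketched below. Treat it as an illustration rather than an exact recipe.

input {
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
    # Read the file from the top instead of only tailing new lines
    start_position => "beginning"
    # Hypothetical location; the sincedb file records how far each log has been read,
    # so point it somewhere disposable while testing
    sincedb_path => "D:/Logstash/sincedb"
  }
}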




The purpose of this post is to give a basic introduction to Logstash and to show how it can be used for analyzing WebLogic logs... Feel free to contact me if you face any difficulties.





Monday 23 December 2013

Hello World :)

Hello All, 

With 5-plus years of experience in SOA, primarily in support, I have faced a lot of critical problems that required real research to solve, but I tend to forget them in a short span of time because of other problems :).
So I decided to write them down somewhere for my own reference, and if it is in a blog, it may be useful for others looking for the same piece of information as well.
My first post is on Logstash, a recent POC that I tried, and I am drafting it now..
All the details are my personal views and are in no way related to Oracle's views or anyone else's.

Thanks for visiting this blog, keep visiting and happy learning:)