Tuesday 8 July 2014

OSB 11g - MQ Connection or Transport missing in SBConsole (How to add or fix this?)

This post may be helpful for someone who is a beginner in OSB. 

Those who are interested in learning OSB should install WebLogic and OSB 11g on a local system for hands-on practice. There are several tutorials available, and I am also planning to write a series of posts for those who are looking for quick hands-on learning. I recently got the opportunity to explore OSB 11g.

This post assumes that you are familiar with basic OSB terminology such as proxy services, business services and transports.





One such transport is MQ; OSB supports access to IBM WebSphere MQ through the MQ transport.

But after installing OSB, you may not find the MQ connection under Resources, or the mq protocol on the transport configuration page, in SBConsole.

The reason is that the MQ libraries are not bundled with the Oracle Service Bus installer, so we need to make sure a supported version of the MQ client library is available in our environment.

Jar Name : com.ibm.mq.jar
Download Link : http://www.java2s.com/Code/JarDownload/com.ibm/com.ibm.mq.jar.zip

Steps to add: 

1. Stop the domain servers (or just the Admin server in the case of a development installation)
2. Copy com.ibm.mq.jar to the Domain Home\osb_domain\lib directory
3. Start the server(s)

Now go to Resources and you can create an MQ connection; while creating proxy services, you can also select the mq protocol on the transport configuration page.
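If you want to sanity-check the jar outside OSB first, the small standalone sketch below connects to a queue manager using the WebSphere MQ base Java classes from the same com.ibm.mq.jar. This is only an illustration, not part of the OSB configuration; the host, port, channel and queue manager name are placeholders for your own MQ setup.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class MQConnectionCheck {
    public static void main(String[] args) {
        // Placeholder connection details - replace with your MQ server values
        MQEnvironment.hostname = "mq-host";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "SYSTEM.DEF.SVRCONN";
        try {
            // Connects to the named queue manager over the client channel above
            MQQueueManager qmgr = new MQQueueManager("QMGR1");
            System.out.println("Connected to queue manager: " + qmgr.isConnected());
            qmgr.disconnect();
        } catch (MQException e) {
            // Completion and reason codes map the failure to the MQ documentation
            System.out.println("MQ error: CC=" + e.completionCode + " RC=" + e.reasonCode);
        }
    }
}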

Cheers..

Tuesday 24 June 2014

SOA 11g - Useful SOA Infra SQL queries for Oracle SOA Production Support - Part III (Get SOA Payload from Back-end Tables (SOA Infra))

This post will be helpful when there is a need to reprocess a high number of instances after a server or data source outage.

Query to retrieve the payload from back end tables:

select Document
from (select Document
      from Xml_Document
      where 1 = 1
      and Document_Id in (select document_id
                          from document_ci_ref
                          where cikey in (select B.Cikey
                                          from Composite_Instance A, Cube_Instance B
                                          where A.Ecid = B.Ecid
                                          and Nvl(A.Conversation_Id,'A') = Nvl(B.Conversation_Id,'A')
                                          and Id = #Composite Instance ID))
      order by doc_partition_date) a
where rownum <= 1;

You can write a Java program that automates the reprocessing job:


  • Connect to the SOA infra database
  • Retrieve the payload using the above query
  • Convert the blob to a string
  • Add a SOAP envelope to the retrieved payload
  • Generate the endpoint URL using values from the CUBE_INSTANCE table for this particular composite
  • Use HTTP POST to invoke the composite
The above steps are just one way to automate the reprocessing; you can use this query and automate it in any other way as well. A sketch of the first two steps is shown below.
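This is a minimal sketch, assuming the Oracle JDBC thin driver is on the classpath; the JDBC URL, schema user and password are placeholders for your SOA infra data source, and the composite instance ID is passed as a bind variable in place of #Composite Instance ID.

import java.sql.Blob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PayloadFetcher {

    // Same query as above, with the composite instance ID as a bind variable
    private static final String PAYLOAD_QUERY =
          "select Document from (select Document from Xml_Document "
        + " where 1 = 1 and Document_Id in (select document_id from document_ci_ref "
        + "   where cikey in (select B.Cikey from Composite_Instance A, Cube_Instance B "
        + "     where A.Ecid = B.Ecid "
        + "     and Nvl(A.Conversation_Id,'A') = Nvl(B.Conversation_Id,'A') "
        + "     and Id = ?)) "
        + " order by doc_partition_date) a where rownum <= 1";

    public static Connection connect() throws SQLException {
        // Placeholder SOA infra connection details - replace with your own
        return DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "DEV_SOAINFRA", "password");
    }

    // Returns the DOCUMENT blob for one composite instance ID, or null if not found.
    // Read the blob (see the conversion snippet below) before closing the connection.
    public static Blob fetchPayload(Connection con, long compositeInstanceId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(PAYLOAD_QUERY)) {
            ps.setLong(1, compositeInstanceId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getBlob("DOCUMENT") : null;
            }
        }
    }
}

The blob returned here is what the conversion snippet below works on; the snippet casts the result set to oracle.jdbc.OracleResultSet and calls getBLOB instead of the standard getBlob.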

Code snippet to convert the blob to a string:


import java.io.StringWriter;
import java.sql.Blob;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import oracle.xml.binxml.BinXMLDecoder;
import oracle.xml.binxml.BinXMLStream;
import oracle.xml.parser.v2.XMLDocument;
import oracle.xml.scalable.InfosetReader;

// payloadDetail is the OracleResultSet returned by the payload query; proc is a
// BinXMLProcessor and domimpl an XMLDOMImplementation created beforehand (Oracle XDK).
Blob blob = payloadDetail.getBLOB("DOCUMENT");

// Decode the binary XML stored in XML_DOCUMENT into a DOM document
BinXMLStream inpbin = proc.createBinXMLStream(blob);
BinXMLDecoder dec = inpbin.getDecoder();
InfosetReader xmlreader = dec.getReader();
XMLDocument doc = (XMLDocument) domimpl.createDocument(xmlreader);
//doc.print(System.out);

// Serialize the DOM document into a String
TransformerFactory tf = TransformerFactory.newInstance();
Transformer transformer = tf.newTransformer();
StringWriter writer = new StringWriter();
transformer.transform(new DOMSource(doc), new StreamResult(writer));
String output = writer.getBuffer().toString();


Use this and write your process accordingly. A rough sketch of the remaining steps (SOAP envelope, endpoint URL and HTTP POST) follows.
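The sketch below carries a few assumptions: the /soa-infra/services/<partition>/<composite>!<revision>/<service> endpoint pattern is built from CUBE_INSTANCE columns (DOMAIN_NAME, COMPOSITE_NAME, COMPOSITE_REVISION), while the host, port, service name and SOAPAction are placeholders you should take from the composite's WSDL or the EM test page.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CompositeReprocessor {

    // Wrap the payload extracted from XML_DOCUMENT in a SOAP 1.1 envelope.
    // Assumption: the stored document is the original request body of the composite.
    static String wrapInSoapEnvelope(String payload) {
        return "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soapenv:Body>" + payload + "</soapenv:Body>"
             + "</soapenv:Envelope>";
    }

    // Build the endpoint from CUBE_INSTANCE values (DOMAIN_NAME = partition,
    // COMPOSITE_NAME, COMPOSITE_REVISION); host, port and service name are placeholders.
    static String endpointUrl(String host, int port, String partition,
                              String composite, String revision, String service) {
        return "http://" + host + ":" + port + "/soa-infra/services/"
               + partition + "/" + composite + "!" + revision + "/" + service;
    }

    // Post the envelope to the composite endpoint and return the HTTP status code.
    static int invokeComposite(String endpoint, String envelope) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        con.setRequestProperty("SOAPAction", "process"); // placeholder operation name
        try (OutputStream out = con.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }
        return con.getResponseCode();
    }
}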

Friday 20 June 2014

SOA 11g - Useful SOA Infra SQL queries for Oracle SOA Production Support - Part II (Queries and their usage)

Query to find the count of instances for individual composites :

SELECT count(*),SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1,INSTR(SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1),'!')-1)
FROM COMPOSITE_INSTANCE WHERE
CREATED_TIME > TRUNC(SYSDATE)-7 AND CREATED_TIME < TRUNC(SYSDATE)
GROUP BY SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1,INSTR(SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1),'!')-1);


Normally COMPOSITE_DN looks like XX_Test/HelloWorld!1.0*soa_36243828-42da-4952-9579-f58c0e913705, so to extract just the composite name we can use the expression below; for the example above it returns HelloWorld (the text between the '/' and the '!').

"SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1,INSTR(SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1),'!')-1)"

This query returns the instance count for all composites and all states between two timestamps.
Filters can be added or modified to get the results you need.

How to find the instance ID if we have composite sensors in our composites:

SELECT Ci.*,
  number_value Delivery_ID
FROM composite_sensor_value csv,
  (SELECT *
  FROM COMPOSITE_INSTANCE
  WHERE CREATED_TIME > TRUNC(SYSDATE)-7
  AND CREATED_TIME  < TRUNC(SYSDATE)
  AND SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1,INSTR(SUBSTR(COMPOSITE_DN,INSTR(COMPOSITE_DN,'/')+1),'!')-1) IN ('%Composite_Name%')
    --AND STATE <> '1'
  )CI
WHERE CSV.COMPOSITE_INSTANCE_ID = CI.ID
AND SENSOR_NAME                 = '%Sensor_Name%'
AND NUMBER_VALUE                = '%Sensor_Values%'
ORDER BY CREATED_TIME;

Add or modify filters to get the results as expected. 

If you are not sure about the data type of the sensor value, use substr(clob_value, instr(clob_value,'>',1,1)+1, instr(clob_value,'</',1,1)-1) on CLOB_VALUE to filter or select the values.

If you want to get the details for a particular state, read this post to understand the states and their descriptions.

Query to get the count of instances on an hourly basis:

SELECT TO_CHAR(TRUNC(ci.CREATED_TIME, 'HH'), 'DD-MON-YYYY HH24:MI:SS'), COUNT (*) FROM COMPOSITE_INSTANCE CI
WHERE CREATED_TIME > TO_DATE('19-JUN-2014 08:35:10', 'DD-MON-YYYY HH24:MI:SS') AND
CREATED_TIME < sysdate
GROUP BY TO_CHAR (TRUNC (CI.CREATED_TIME, 'HH'), 'DD-MON-YYYY HH24:MI:SS')
ORDER BY TO_CHAR (TRUNC (CI.CREATED_TIME, 'HH'), 'DD-MON-YYYY HH24:MI:SS');


Modify the CREATED_TIME filter accordingly to get the expected results.
You can add a COMPOSITE_DN filter to get the hourly count of instances for one particular composite.

Query to get the count of instances in each state:

SELECT COUNT(*),
  DECODE(cube_instance.STATE, 0, 'Initiated', 1, 'Running', 2, 'Suspended', 3, 'Faulted', 4, 'Closed Pending', 5, 'Closed Completed', 6, 'Closed Faulted', 7, 'Closed Cancelled', 8, 'Closed Aborted', 9, 'Closed Stale', 10,'Closed Rolled Back','unknown') state
FROM CUBE_INSTANCE
WHERE creation_date > TRUNC(sysdate)-1
AND creation_date   < sysdate
GROUP BY STATE;


If you want the same details broken down by composite:

SELECT COUNT(*),
  DECODE(CUBE_INSTANCE.STATE, 0, 'Initiated', 1, 'Running', 2, 'Suspended', 3, 'Faulted', 4, 'Closed Pending', 5, 'Closed Completed', 6, 'Closed Faulted', 7, 'Closed Cancelled', 8, 'Closed Aborted', 9, 'Closed Stale', 10,'Closed Rolled Back','unknown') STATE ,
  COMPOSITE_NAME
FROM CUBE_INSTANCE
WHERE CREATION_DATE > TRUNC(sysdate)-1
AND CREATION_DATE   < SYSDATE
GROUP BY STATE,
  COMPOSITE_NAME
ORDER BY COMPOSITE_NAME;


The above queries cover the major tables and the possible joins, so you can rewrite them as per your requirement and get the expected results quickly.

Caution - From my experience, while running queries against a production SOA infra schema, make sure you restrict the number of records to search by adding a date filter; otherwise the query may throw a temp tablespace error.

SOA 11g - Useful SOA Infra SQL queries for Oracle SOA Production Support - Part I (List of tables and their states)

Those who handle Oracle SOA support will know how difficult it is to locate an instance using composite sensors in the Enterprise Manager console. It eats our time and patience :)

But this can be avoided by using SOA infra tables. 

We all know that transaction data is spread across multiple SOA infra tables, and if we leverage them with proper joins, the life of a support person becomes much easier.

The tables below are the important ones used to locate instances.

CUBE_INSTANCE
COMPOSITE_INSTANCE
COMPOSITE_SENSOR_VALUE

Composite Instance:

Below are the important columns in the COMPOSITE_INSTANCE table.

ECID: Unique value for one end-to-end transaction, shared across several tables
ID: Instance ID that is visible in the Enterprise Manager console
PARENT_ID: Null for the initiator composite; a child composite holds the ID of its parent composite
COMPOSITE_DN: Composite name along with the partition name
STATE: State of the instance (completed, running, etc.), explained in detail below
CREATED_TIME: Creation time of the instance

List of states and their descriptions:


State   Description
0       Running
1       Completed
2       Running with faults
3       Completed with faults
4       Running with recovery required
5       Completed with recovery required
6       Running with faults and recovery required
7       Completed with faults and recovery required
8       Running with suspended
9       Completed with suspended
10      Running with faults and suspended
11      Completed with faults and suspended
12      Running with recovery required and suspended
13      Completed with recovery required and suspended
14      Running with faults, recovery required, and suspended
15      Completed with faults, recovery required, and suspended
16      Running with terminated
17      Completed with terminated
18      Running with faults and terminated
19      Completed with faults and terminated
20      Running with recovery required and terminated
21      Completed with recovery required and terminated
22      Running with faults, recovery required, and terminated
23      Completed with faults, recovery required, and terminated
24      Running with suspended and terminated
25      Completed with suspended and terminated
26      Running with faulted, suspended, and terminated
27      Completed with faulted, suspended, and terminated
28      Running with recovery required, suspended, and terminated
29      Completed with recovery required, suspended, and terminated
30      Running with faulted, recovery required, suspended, and terminated
31      Completed with faulted, recovery required, suspended, and terminated
32      Unknown
64      -


Any value in the range of 32 to 63 indicates that the composite instance state has not been enabled, but the instance state is updated for faults, aborts, etc.

Cube Instance:

CIKEY: Unique value for an instance
CREATION_DATE: Creation date
STATE: State of the instance
STATUS: Status of the instance
ECID: Unique hexadecimal ID for an end-to-end transaction
CMPST_ID: Instance ID that is visible in the Enterprise Manager console (the value of the ID column in the COMPOSITE_INSTANCE table)
COMPOSITE_NAME: Name of the composite
DOMAIN_NAME: Partition name
COMPONENT_NAME: Name of the component
COMPOSITE_REVISION: Revision number of the composite


CUBE_INSTANCE States
State   Description
0       STATE_INITIATED
1       STATE_OPEN_RUNNING
2       STATE_OPEN_SUSPENDED
3       STATE_OPEN_FAULTED
4       STATE_CLOSED_PENDING_CANCEL
5       STATE_CLOSED_COMPLETED
6       STATE_CLOSED_FAULTED
7       STATE_CLOSED_CANCELLED
8       STATE_CLOSED_ABORTED
9       STATE_CLOSED_STALE
10      STATE_CLOSED_ROLLED_BACK


Composite Sensor Values:

COMPOSITE_INSTANCE_ID: Instance ID that is visible in the EM console (the value of ID in the COMPOSITE_INSTANCE table)
COMPONENT_NAME: Name of the component
SENSOR_NAME: Name of the sensor
CPST_PARTITION_DATE: Creation date
STRING_VALUE: Sensor value as a string
NUMBER_VALUE: Sensor value as a number
DATE_VALUE: Sensor value as a date
CLOB_VALUE: Sensor value as a CLOB
BLOB_VALUE: Sensor value as a BLOB

In the following posts, we shall see the different queries that we use day to day for support purposes.

Thursday 19 June 2014

Logstash for Weblogic - Part IV - Working across clustered environments

This is where Logstash acts as a real champ!

Logstash can be installed on any number of servers, where it acts as a shipper; all the logs are accumulated in a single place and can be used for real-time monitoring.
For this, additional components like Redis and Elasticsearch are used.

Logstash can act as a log collector, processor, shipper and indexer.
On the servers where it reads the logs as events (as configured in the conf file), Logstash acts as a collector and shipper.
On the main server, it acts as an indexer and processor.

Refer to the previous posts for more details: Post 1, Post 2, Post 3.

Logstash ships with Kibana, a dashboard for data visualization.

Redis acts as a broker on the main server: it receives the captured events shipped from the other servers (where Logstash runs as a shipper).

Elasticsearch acts as persistent storage and a search medium for all captured events.


The flow chart below is self-explanatory.


Now I shall explain the steps to configure this across servers.

For easy understanding, assume we have SOA Server1 and SOA Server2; SOA Server1 acts as the main server, and logs are captured as events on SOA Server2 and shipped to SOA Server1. You can scale this out to as many servers as you need.

Redis and Elasticsearch need to be downloaded and running before any shipper starts shipping events, so in our case they need to be installed on SOA Server1.

Redis:
  • Download Redis and extract the source.
  • Run the server: src/redis-server --loglevel verbose
You will get output as shown in the picture below.


Elasticsearch:

  • Download the latest version of Elasticsearch.
  • Execute the command: bin/elasticsearch -f

Logstash as Shipper:

Run Logstash as a shipper on SOA Server2.
In the config, make sure the output is directed to the Redis instance running on Server1.

Like this,



Save this as, say, shipper_server2.conf and run Logstash using the command below.

java -jar logstash-1.2.0-flatjar.jar agent -f shipper_server2.conf

Now check your Redis output; the "clients connected" value will be 2, which indicates that the shipper has successfully connected to the broker.

Once the indexer (described below) is also running, all the captured events will be stored in Elasticsearch.

Logstash as Indexer:

To run Logstash as an indexer, make sure the input is read from Redis.
The list details used in the shipper's Redis output will be reused here in the input.

The conf file will look like the one below.


  
Save this as indexer.conf and run it using the command below.

java -jar logstash-1.2.0-flatjar.jar agent -f indexer.conf

Kibana:

Use a separate terminal to run Kibana with the command below.

java -jar logstash-1.2.0-flatjar.jar web

Open Kibana using this URL: http://localhost:9292

Logstash is modular enough that each of the above components can be installed on a dedicated server to distribute the load; just point the host settings at the right machines.

Monday 30 December 2013

Logstash for weblogic - Part III - Using GROK patterns

This post explains the grok filter, which gives more flexibility in parsing and analyzing the logs. WebLogic SOA logs contain information such as severity, host details, composite details and timestamps, and this information becomes even more useful when we use Logstash as a centralized logging solution across multiple environments.

Using grok, we can parse unstructured log data into something structured and queryable. Logstash ships with about 120 patterns, available at this link: https://github.com/logstash/logstash/tree/v1.3.2/patterns

General information about the grok filter is available at this link: http://logstash.net/docs/1.3.2/filters/grok

The picture below explains the structure of the admin log in WebLogic and how we can identify the pattern to parse it using grok.


This is the format of the admin logs, and you can build your own pattern using the 120 Logstash patterns listed at https://github.com/logstash/logstash/tree/v1.3.2/patterns

"####<%{DATA:wls_timestamp}> <%{WORD:severity}> <%{DATA:wls_topic}> <%{HOST:hostname}> <(%{WORD:server})?> %{GREEDYDATA:logmessage}"

The uppercase names (DATA, WORD, HOST, GREEDYDATA) are built-in Logstash patterns. When a log entry matches them, the captured data is indexed under the field name that follows the colon (wls_timestamp, severity, wls_topic, hostname, server, logmessage).

Please use the config below to apply the grok filter and indexing; you can adapt it for different logs. You need to include the multiline filter as well, so that multi-line Java exceptions are collapsed into one event; using the multiline filter is discussed in this post.

input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
  }
}

filter {
  multiline {
    type => "ADMdomainlog"
    pattern => "^####"
    negate => true
    what => "previous"
  }
  grok {
    type => "ADMdomainlog"
    pattern => ["####<%{DATA:wls_timestamp}> <%{WORD:severity}> <%{DATA:wls_topic}> <%{HOST:hostname}> <(%{WORD:server})?> %{GREEDYDATA:logmessage}"]
    add_field => ["Log", "Admin Domain Log"]
  }
}

output {
  elasticsearch { embedded => true }
}

Run Logstash and open Kibana to view the logs; you can see new fields created as per your grok pattern, and you can filter on those fields as shown in the screenshots below.





Grok adds a lot of flexibility in analyzing the logs, and we can use it even more effectively when we define our own dashboards in Kibana, which will be discussed in further posts...

Thursday 26 December 2013

Logstash for Weblogic - Part II - Using multiline filters

Normally our server logs look like the one below, so if you use the config file mentioned in the first post, you will see that every line of a single log event is captured as a separate event, which leads to confusion!




There is a way to overcome this problem in Logstash by using a filter called multiline. This filter collapses multi-line messages from a single source into one event; the goal is to allow joining of multi-line messages, for example a Java exception and its stack trace, into a single event.

The general syntax of the multiline filter is:

filter {
  multiline {
    type => "type"
    pattern => "pattern, a regexp"
    negate => boolean
    what => "previous" or "next"
  }
}

Here, 'regexp' should match what you believe to be an indicator that the line is part of a multi-line event. In our case we match entries that start with ####, which is common to all WebLogic logs; the ^ anchor means the entry starts with ####.

The 'what' must be "previous" or "next" and indicates the relation to the multi-line event. Here we use "previous", since the continuation lines belong to the previous log entry.
The 'negate' can be "true" or "false" (default false). With negate set to true, every line that does not match the pattern is merged into the previous event.

You need to add the multiline filter between the input and output sections, like in the config file below.

input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "ADMdomainlog"
    path => [ "D:/Logstash/Log/soa_domain.log" ]
  }
}

filter {
  multiline {
    type => "ADMdomainlog"
    pattern => "^####"
    negate => true
    what => "previous"
  }
}

output {
  elasticsearch { embedded => true }
}




Save this file as, say, sample.conf, then run Logstash and feed it your sample logs. You can now see that the whole Java exception, including its stack trace, is captured as a single event; this simple multiline filter solves the problem.

View in Kibana:

Please contact me in case of any doubts. In the next post, I will share grok filters, which give more flexibility in analyzing the logs.