Wednesday, 17 August 2016
Adding multiple JARs as dependencies
<dependency>
  <groupId>foo</groupId>
  <artifactId>foo</artifactId>
  <version>1.0</version>
  <scope>system</scope>
  <systemPath>${basedir}/lib/*.jar</systemPath>
</dependency>
Note: Maven does not expand wildcards in <systemPath>, so a system-scoped dependency cannot actually pull in multiple JARs this way. The addjars-maven-plugin below handles that case:
<plugin>
  <groupId>com.googlecode.addjars-maven-plugin</groupId>
  <artifactId>addjars-maven-plugin</artifactId>
  <version>1.0.2</version>
  <executions>
    <execution>
      <goals>
        <goal>add-jars</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <directory>${basedir}/../buildtools/lib</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
Wednesday, 10 August 2016
Listing of the elements directly under the POM's project element
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- The Basics -->
  <groupId>...</groupId>
  <artifactId>...</artifactId>
  <version>...</version>
  <packaging>...</packaging>
  <dependencies>...</dependencies>
  <parent>...</parent>
  <dependencyManagement>...</dependencyManagement>
  <modules>...</modules>
  <properties>...</properties>

  <!-- Build Settings -->
  <build>...</build>
  <reporting>...</reporting>

  <!-- More Project Information -->
  <name>...</name>
  <description>...</description>
  <url>...</url>
  <inceptionYear>...</inceptionYear>
  <licenses>...</licenses>
  <organization>...</organization>
  <developers>...</developers>
  <contributors>...</contributors>

  <!-- Environment Settings -->
  <issueManagement>...</issueManagement>
  <ciManagement>...</ciManagement>
  <mailingLists>...</mailingLists>
  <scm>...</scm>
  <prerequisites>...</prerequisites>
  <repositories>...</repositories>
  <pluginRepositories>...</pluginRepositories>
  <distributionManagement>...</distributionManagement>
  <profiles>...</profiles>
</project>
Saturday, 6 August 2016
Elasticsearch
====================================================
Kibana
====================================================
Kibana is an open source analytics and visualization platform designed to work with Elasticsearch.
You use Kibana to search, view, and interact with data stored in Elasticsearch indices.
You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.
Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.
Kibana is a reporting tool.
Discover :
========
https://www.elastic.co/guide/en/kibana/current/discover.html#discover
You can interactively explore your data from the Discover page.
You have access to every document in every index that matches the selected index pattern.
You can also see the number of documents that match the search query and get field value statistics.
If a time field is configured for the selected index pattern, the distribution of documents over time is displayed in a histogram at the top of the page.
indices :
=======
C:\Users\vnemalik\Documents\001096043\soft\elasticsearch-2.1.1\bin>elasticsearch.bat
[2016-02-10 15:11:58,272][WARN ][bootstrap ] unable to install syscall filter: syscall filtering not supported for OS: 'Windows 7'
[2016-02-10 15:11:59,024][INFO ][node ] [node-1] version[2.1.1], pid[6112], build[40e2c53/2015-12-15T13:05:55Z]
[2016-02-10 15:11:59,025][INFO ][node ] [node-1] initializing ...
[2016-02-10 15:11:59,135][INFO ][plugins ] [node-1] loaded [], sites []
[2016-02-10 15:11:59,247][INFO ][env ] [node-1] using [1] data paths, mounts [[OS (C:)]], net usable_space [245gb], net total_space [297.7gb], spins? [unknown], types [NTFS]
[2016-02-10 15:12:04,088][INFO ][node ] [node-1] initialized
[2016-02-10 15:12:04,088][INFO ][node ] [node-1] starting ...
[2016-02-10 15:12:04,281][INFO ][transport ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-02-10 15:12:04,313][INFO ][discovery ] [node-1] xyz/jg-IhVS2Qx-c5dN8ge9VBg
[2016-02-10 15:12:08,349][INFO ][cluster.service ] [node-1] new_master {node-1}{jg-IhVS2Qx-c5dN8ge9VBg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-10 15:12:08,376][INFO ][http ] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-02-10 15:12:08,376][INFO ][node ] [node-1] started
[2016-02-10 15:12:08,756][INFO ][gateway ] [node-1] recovered [6] indices into cluster_state
Time Filter :
===========
The Time Filter restricts the search results to a specific time period.
Searching ( Elasticsearch Query DSL/Lucene query syntax ) :
=========================================================
status:200
status:[400 TO 499] - Lucene query syntax
status:[400 TO 499] AND (extension:php OR extension:html) - Lucene query syntax
Automatically Refreshing the Page / Refresh Interval :
====================================================
You can configure a refresh interval to automatically refresh the page with the latest index data. This periodically resubmits the search query.
Filtering By Field :
==================
You can filter the search results to display only those documents that contain a particular value in a field.
To add a positive filter, click the Positive Filter button. This filters out documents that don't contain that value in the field.
To add a negative filter, click the Negative Filter button. This excludes documents that contain that value in the field.
Viewing Document Data :
=====================
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
Kibana reads the document data from Elasticsearch and displays the document fields in a table. The table contains a row for each field that contains the name of the field, add filter buttons, and the field value.
meta-fields :
===========
meta-fields include the document’s _index, _type, _id, and _source fields.
Creating Indices:
================
Creating indices using Logstash
Creating indices using curl
curl -XPUT http://localhost:9200/twitter5
curl -XPUT 'http://localhost:9200/twitter10/' -d '{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    }
  }
}'
The create index API
--------------------
curl -XPUT 'http://localhost:9200/twitter10/' -d '{ "settings" : { "index" : { "number_of_shards" : 3, "number_of_replicas" : 2 } } }'
The create index API allows you to provide a set of one or more mappings:
---------------------------------------------------------------------
curl -XPOST localhost:9200/test -d '{ "settings" : { "number_of_shards" : 1 }, "mappings" : { "type1" : { "_source" : { "enabled" : false }, "properties" : { "field1" : { "type" : "string", "index" : "not_analyzed" } } } } }'
curl -XPUT localhost:9200/test -d '{ "creation_date" : 1407751337000 }'
curl -XDELETE 'http://localhost:9200/twitter/'
curl -XGET 'http://localhost:9200/twitter/'
The get index API can also be applied to more than one index, or on all indices by using _all or * as index.
curl -XGET 'http://localhost:9200/twitter/_settings,_mappings' (the available features are _settings, _mappings, _warmers and _aliases)
Does the index exist:
curl -XHEAD -i 'http://localhost:9200/twitter'
Closing/Opening indexes :
curl -XPOST 'localhost:9200/my_index/_close'
curl -XPOST 'localhost:9200/my_index/_open'
PUT Mapping:
===========
1) Creates an index called twitter with the message field in the tweet mapping type.
curl -XPUT http://localhost:9200/twitter11 -d '{ "mappings": { "tweet": { "properties": { "message": { "type": "string" } } } } }'
2) Uses the PUT mapping API to add a new mapping type called user.
curl -XPUT http://localhost:9200/twitter11/_mapping/user -d '{ "properties": { "name": { "type": "string" } } }' - Not working
3) Uses the PUT mapping API to add a new field called user_name to the tweet mapping type.
curl -XPUT http://localhost:9200/twitter11/_mapping/tweet11 -d '{ "properties": { "user_name": { "type": "string" } } }'
Kibana Search
=============
SubmitterId = "BS321GRACEZI" OR TransactionID = "8900145433765010"
"Transaction ID = 8900145433765010" AND "SUBMITTER ID = BS321GRACEZI"
LogLevel:DEBUG AND JavaClass:EDIEligibilityBO
"Transaction ID: 8900145433765010" AND "SUBMITTER ID: BS321GRACEZI" AND "B2B Error Code: 0"
TransactionID = [ 8220143989361570 TO 8900145433765010 ] AND "Submitter ID = BS321GRACEZI"
TransactionID = [ 8220143989361570 TO 8900145433765010 ] AND ("Submitter ID = BS321GRACEZI" OR "Submitter ID = B00099999800")
NOT "Submitter ID = BS321GRACEZI"
====================================================
Logstash
====================================================
bin/logstash -e 'input { stdin { } } output { stdout {} }'
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html - Show Logstash design
input {
  file {
    path => "/path/to/logstash-tutorial.log"
    start_position => "beginning"
  }
}
The default behavior of the file input plugin is to monitor a file for new information, in a manner similar to the UNIX tail -f command. To change this default behavior and process the entire file, we need to specify the position where Logstash starts processing the file.
To verify your configuration, run the following command:
bin/logstash -f first-pipeline.conf --configtest
curl -XGET http://localhost:9200/logstash-2016.02.10/_search?q=response=200
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }
  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}
path => [ "/var/log/messages", "/var/log/*.log" ]
path => "/data/mysql/mysql.log"
output {
  file {
    path => "/var/log/%{type}.%{+yyyy.MM.dd.HH}"
  }
}
input {
  file {
    path => "/tmp/*_log"
  }
}
http://localhost:9200/twitter/_settings/_index/
http://localhost:9200/logstash-*/_settings/_index
http://localhost:9200/logstash-2016.02.10/_settings/
match => { "message" => "%{COMBINEDAPACHELOG}"}
match => { "message" => "google"}
match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
geoip {
  source => "clientip"
}
==============================================================================================================================================================
Elastic Search
==============================================================================================================================================================
To see all the mappings related to each index
---------------------------------------------
"@timestamp":{"type":"date","format":"dateOptionalTime"}
if [type] == "b2b_field_mapping" { } -??
Elasticsearch provides APIs for indexing, searching, and modifying your data.
There are a few concepts that are core to Elasticsearch. Understanding these concepts from the outset will tremendously help ease the learning process.
Near Realtime (NRT) :
===================
Elasticsearch is a near real time search platform. What this means is there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.
Cluster:
=======
A cluster is a collection of one or more nodes (servers) that together holds your entire data and provides federated indexing and search capabilities across all nodes. A cluster is identified by a unique name which by default is "elasticsearch". This name is important because a node can only be part of a cluster if the node is set up to join the cluster by its name.
Node:
=====
A node is a single server that is part of your cluster, stores your data, and participates in the cluster’s indexing and search capabilities. Just like a cluster, a node is identified by a name which by default is a random Marvel character name that is assigned to the node at startup.
Index:
======
An index is a collection of documents that have somewhat similar characteristics. For example, you can have an index for customer data, another index for a product catalog, and yet another index for order data. An index is identified by a name (that must be all lowercase) and this name is used to refer to the index when performing indexing, search, update, and delete operations against the documents in it.
Within an index/type, you can store as many documents as you want. Note that although a document physically resides in an index, a document actually must be indexed/assigned to a type inside an index.
Used to check if the index (indices) exists or not.
curl -XHEAD -i 'http://localhost:9200/twitter'
Points to Note:
==============
logstash -f b2bLog.conf --log C:/KibanaElasticSearch/StageVersion/logstash-2.3.2/logstash.log &
Elasticsearch is hosted on Maven Central. ( http://search.maven.org/#search|ga|1|a%3A%22elasticsearch%22 )
//grok condition
if '"B2B Error Code"' not in [kvpairs] {
  json {
    source => "kvpairs"
    remove_field => [ "kvpairs" ]
    add_field => { "Transaction_Status" => "UNSUCCESSFUL,Error Code Not Found" }
  }
}
// Grok match string for b2b log
match => { "message" => "\[%{LOGLEVEL:LogLevel}\] %{MONTHDAY:Date} %{MONTH:Month} %{YEAR:Year} %{TIME:Timestamp} - %{DATA:JavaClass} %{DATA:JavaMethod}- %{GREEDYDATA:CorrelationID}: %{GREEDYDATA:kvpairs}"}
match => { "message" => "\[%{LOGLEVEL:LogLevel}\] %{B2B_DATE:timestamp} - %{DATA:JavaClass} %{DATA:JavaMethod}- %{GREEDYDATA:CorrelationID}: %{GREEDYDATA:kvpairs}"}
// To list all the mappings associated with each index - GET operation: http://localhost:9200/_all/_mapping?pretty=1
// To list a single template - GET operation: http://localhost:9200/_template/logstash?pretty
// To list all the templates available: http://localhost:9200/_template/
Elasticsearch DSL (Domain Specific Language).
index => "logstash-gpuz-%{+YYYY.MM.dd}"
"format": "yyyy-MM-dd HH:mm:ss"
Set manage_template => false if you want to manage the template outside of Logstash.
Disable the option Use event times to create index names and put the index name instead of the pattern (tests).
Default for number_of_replicas is 1 (i.e. one replica for each primary shard)
curl -XGET 'http://localhost:9200/twitter/_settings,_mappings' - get api for index
The above command will only return the settings and mappings for the index called twitter.
The available features are _settings, _mappings, _warmers and _aliases.
1) Installing the Sense plugin for Kibana
kibana.bat plugin --install elastic/sense
2) Another way: download and add the plugin
https://download.elasticsearch.org/elastic/sense/sense-latest.tar.gz
https://download.elastic.co/elastic/sense/sense-latest.tar.gz - latest
$ bin/kibana plugin -i sense -u file:///PATH_TO_SENSE_TAR_FILE
https://www.elastic.co/guide/en/sense/current/installing.html
Two ways to override the existing Logstash template:
1) manage_template => true
template_overwrite => true
template_name => "b2btemplate"
template => "C:/Users/vnemalik/Documents/001096043/soft/logstash-2.1.1/templates/automap.json"
{
"template": "logstash-*",
"settings": {
"number_of_shards" : 1
},
"mappings": {
"b2bkibana": {
"_all": {
"enabled": true
},
"properties": {
"@timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"@version": {
"type": "string"
},
"CorrelationID": {
"type": "string",
"index": "not_analyzed"
},
"Submitter ID": {
"type": "string",
"index": "not_analyzed"
},
"Transaction Type": {
"type": "long",
"index": "not_analyzed"
},
"Transaction Version": {
"type": "string",
"index": "not_analyzed"
},
"Transaction Mode": {
"type": "string",
"index": "not_analyzed"
},
"Transaction ID": {
"type": "long",
"index": "not_analyzed"
},
"ServiceTypeCode": {
"type": "long",
"index": "not_analyzed"
},
"Payer ID": {
"type": "long",
"index": "not_analyzed"
},
"Service invoked": {
"type": "string",
"index": "not_analyzed"
},
"Service type": {
"type": "string",
"index": "not_analyzed"
},
"<statusMessageLevel>": {
"type": "string",
"index": "not_analyzed"
},
"<serviceCallStatus>": {
"type": "string",
"index": "not_analyzed"
},
"<messageType>": {
"type": "string",
"index": "not_analyzed"
},
"<statusMessage>": {
"type": "string",
"index": "not_analyzed"
},
"System ID": {
"type": "string",
"index": "not_analyzed"
},
"Source Code for Coverage": {
"type": "string",
"index": "not_analyzed"
},
"Claim System Type Code for Coverage": {
"type": "string",
"index": "not_analyzed"
},
"Eligibility System Type Code for Coverage": {
"type": "string",
"index": "not_analyzed"
},
"Coverage Type": {
"type": "string",
"index": "not_analyzed"
},
"Vendored Coverage": {
"type": "string",
"index": "not_analyzed"
},
"Vendor Name": {
"type": "string",
"index": "not_analyzed"
},
"Source Code": {
"type": "string",
"index": "not_analyzed"
},
"Claims System Type Code": {
"type": "string",
"index": "not_analyzed"
},
"Eligiblity System Type Code": {
"type": "string",
"index": "not_analyzed"
},
"B2B Error Code": {
"type": "string",
"index": "not_analyzed"
},
"AAA03": {
"type": "string",
"index": "not_analyzed"
},
"AAA04": {
"type": "string",
"index": "not_analyzed"
},
"JavaMethod": {
"type": "string",
"index": "not_analyzed"
},
"JavaClass": {
"type": "string",
"index": "not_analyzed"
},
"LogLevel": {
"type": "string",
"index": "not_analyzed"
},
"Date": {
"type": "date",
"index": "not_analyzed"
}
}
}
}
}
// Output section that skips events whose grok parse failed
output {
  if [type] == "apache-access" {
    if "_grokparsefailure" not in [tags] {
      elasticsearch {
      }
    }
  }
}
2) Either from curl, Fiddler Web Debugger, or the Sense tab of Kibana
edidashboardtemplate
Installing the aggregate plugin:
C:\Users\vnemalik\Documents\001096043\b2b\ElasticSearch_POC\testing_nodes\logstash-2.1.1\bin>plugin install logstash-filter-aggregate
// Elastic box
sudo su -c "sh elasticsearch" -s /bin/sh aneela1
sudo -b su -c "sh elasticsearch" -s /bin/sh aneela1
sudo -b su -c "sh kibana" -s /bin/sh aneela1
Start - nohup bin/kibana &
Stop - kill -9 <pid>
now-1w/w
apsrs3723 - Initial Stage Dashboard
apsrs3726 - that's the latest version and also has NAS connected to it - it's the stage server in the DMZ!
We have our stage servers (apsp8705, apsp9016) connected to apsrs3926 (Linux server) via a NAS share.
From apsp8705 (AIX), logs (x12logs, processlogs) are shipped to apsrs3926 (Linux, /b2b_lt/elastic), where the ES stack is installed.
Useful Links:
============
https://www.youtube.com/watch?v=60UsHHsKyN4
https://www.youtube.com/watch?v=U3m0jKygAqU
http://code972.com/blog/2015/02/80-elasticsearch-one-tip-a-day-managing-index-mappings-like-a-pro
https://www.timroes.de/
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
https://discuss.elastic.co/t/cannot-get-my-template-to-work/27150/15 - good one, templates
https://discuss.elastic.co/t/confused-about-how-to-use-raw-fields-and-not-analyze-string-fields/28106
http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html
http://cookbook.logstash.net/recipes/cisco-asa/ -?
Do you want this? The mutate filter below will change all the double quotes to single quotes.
filter {
  mutate {
    gsub => ["message", "\"", "'"]
  }
}
mutate {
  gsub => ['message', '\"', '`']
}
match => { "message" => "(?m)\[%{LOGLEVEL:LogLevel}\] %{B2B_DATE:editimestamp} - %{DATA:JavaClass} %{DATA:JavaMethod}- %{GREEDYDATA:CorrelationID}: %{GREEDYDATA:kvpairs}"}
Timestamp issue:
https://discuss.elastic.co/t/how-to-set-timestamp-timezone/28401/16
Thursday, 4 August 2016
Protocol
DICT -
DNS - Application Layer
FILE -
FTP - Application Layer
FTPS -
GOPHER -
HTTP -
HTTPS -
IMAP -
IMAPS -
LDAP -
LDAPS -
POP3 -
POP3S -
RTMP -
RTSP -
SCP -
SFTP -
SMTP - Application Layer
SMTPS -
SIP - Application Layer
TELNET - Application Layer
TFTP -
TELNET - TELNET is a two-way communication protocol which allows connecting to a remote machine and running applications on it.
FTP - FTP (File Transfer Protocol) allows file transfer among computers connected over a network. It is reliable, simple and efficient.
SMTP - SMTP (Simple Mail Transfer Protocol) is used to transport electronic mail between a source and a destination, directed via a route.
DNS - DNS (Domain Name System) resolves a textual address (hostname) into an IP address for hosts connected over a network.
Note:
Please refer to Computer Networks, which talks in detail about TCP/IP, the OSI layered architecture, and the protocols bound to each layer. It also talks about the merits and demerits of both approaches.
http://www.studytonight.com/computer-networks/
Understanding the OSI reference model and TCP/IP
TCP/IP - Transmission Control Protocol / Internet Protocol
It is the network model used in the current Internet architecture. Protocols are sets of rules which govern every possible communication over a network. These protocols describe the movement of data between the source and the destination over the internet.
The overall idea was to allow one application on one computer to talk to (send data packets to) another application running on a different computer.
TCP/IP is real, it exists. The internet works with the TCP/IP model of networking.
The OSI model is just a guide. It's a specification; it doesn't exist anywhere as a concrete implementation.
The OSI model was developed to figure out how to get all these computers, networks and operating systems to talk to each other.
SIP - Introduction
Signaling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975, which is used to set up and tear down most of the world's public switched telephone network (PSTN) telephone calls.
Mobicents is an open source VoIP platform written in Java that helps create, deploy, and manage services and applications integrating voice, video and data across a range of IP and legacy communications networks.
https://github.com/RestComm/sipunit - sip unit test
Sunday, 31 July 2016
Java Basics - Understanding core concepts
C - can write programs - no OOPS concepts
C++ - partial OOPS concepts
Java -
.NET - a framework - C# (comparable to core Java); Java - a programming language
C# and Java were developed based on OOPS concepts
PHP - developed based on OOPS concepts - Facebook
Python - developed based on OOPS concepts
OOPS (concept) - Object Oriented Programming languages - Java, .NET, Python, PHP
Structured programming - C, C++ (partial OOPS)
OOPS - IPE (Inheritance, Polymorphism and Encapsulation)
Access specifiers - private, public, protected and default
J2SE - Standard Edition 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8 - core Java - Sun - Oracle (present)
J2EE - Enterprise Edition - J2EE API (Servlets, JSP) compliance - advanced Java - Sun - Oracle (present)
Servers developed on the J2EE API - Apache - Tomcat, JBoss AS, IBM - WebSphere AS, WebLogic AS
JDBC API - databases - Oracle, MySQL, Sybase, DB2, etc.
Sun released APIs like J2SE, J2EE and J2ME - since Oracle acquired Sun, all these APIs are now maintained by Oracle.
JDK (Java Development Kit) - an SDK (Software Development Kit) - the JDK is used to compile your .java files into .class files using the javac command.
Ex: javac HelloWorld.java -> HelloWorld.class
The javac command is nothing but javac.exe, which resides in the JDK; we also call it the Java compiler.
javac is responsible for generating byte code (.class files).
JRE - Java Runtime Environment - the JRE is nothing but the collection of libraries (JARs) required to run Java byte code, in other words to run .class files:
java HelloWorld(.class)
While running Java code (byte code) you use the java command. It's nothing but java.exe, available as part of the JRE bundle.
Every version of Java provides both a JDK and a JRE.
JDK - compile time
JRE - run time
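To tie the two commands together, the smallest complete example:
// HelloWorld.java - compile with: javac HelloWorld.java ; run with: java HelloWorld
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}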
J2ME - Mobile Edition - only for basic mobiles, not for smartphones - Sun - Oracle (present)
Android API on top of J2SE - .apk - only for smartphones - released by Google
Core Java, JDBC, Servlets, JSP
Spring, Hibernate and Struts - Java frameworks from third-party companies, developed on top of the J2SE and J2EE APIs to make developers' lives easier.
Path and CLASSPATH are environment variables.
Path - any executable you want to run from anywhere must have its directory listed in the Path environment variable.
All Windows programs are .exe executables.
By default you can run an executable only from the physical directory where it is located.
If you want to run it from any directory, add its directory to the Path variable.
Eg: typical path variable value - echo %PATH%
C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\windows\des_tools;C:\Program Files (x86)\WebEx\Productivity Tools;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\AppSense\Application Manager\Agent\Plugins\EcaRulesEngine\;C:\Program Files\Java\jdk1.8.0_45\bin;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Java\jre7\bin;C:\Users\vnemalik\Documents\001096043\soft\apache-maven-3.2.5\bin
CLASSPATH - Classpath is also an environment variable, but it is specific to Java - only for Java developers.
You keep JAR file paths in the classpath variable so your application can refer to those classes/JARs.
The classpath is needed when your application depends on an external library that you need to reference; keep that dependent library as a JAR file on the classpath.
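As a sketch of using the classpath (greeter.jar and com.example.Greeter are hypothetical names, used only for illustration):
// Suppose the Greeter class ships inside greeter.jar (a hypothetical library).
// Compile: javac -cp greeter.jar Main.java
// Run:     java -cp greeter.jar;. Main    (on Linux/Unix use ':' instead of ';')
public class Main {
    public static void main(String[] args) {
        // Without greeter.jar on the classpath, both commands fail with
        // "cannot find symbol" / NoClassDefFoundError.
        System.out.println(new com.example.Greeter().greet("world"));
    }
}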
OOPS - very important - OOPS is nothing but a set of guidelines, a specification or standard, for software application development.
Inheritance - a new class acquires the properties and behavior of an existing class.
Polymorphism - poly means many, morphism means forms - many forms.
An application or an object playing different roles at different times, i.e. exhibiting different behavior at different times, is nothing but polymorphism.
Encapsulation - data and the methods that operate on it are bound together, hiding the internal state behind methods.
OOPS gave a solution to problems faced in earlier procedural languages.
OOPS is a solution for the problems faced by developers in the C and C++ languages.
Understand all the OOPS concepts with Java examples thoroughly.
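A small sketch putting all three together (the Account/SavingsAccount names are made up for illustration):
// Encapsulation: state is private and exposed only through methods.
class Account {
    private double balance;                    // hidden internal state
    public void deposit(double amt) { balance += amt; }
    public double getBalance() { return balance; }
}

// Inheritance: SavingsAccount acquires Account's fields and methods.
class SavingsAccount extends Account {
    // Polymorphism: same method name, subclass-specific behavior.
    @Override
    public void deposit(double amt) { super.deposit(amt * 1.01); } // 1% bonus
}

public class OopsDemo {
    public static void main(String[] args) {
        Account acc = new SavingsAccount();    // one reference type, many forms
        acc.deposit(100);                      // runs the SavingsAccount version
        System.out.println(acc.getBalance()); // prints 101.0
    }
}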
Java Interview :-
Core java:
-----------
- OOPS concepts
- Class
- Object
- Understanding Object.java class, which is a super class of all the classes in java
- packages
- inheritance with java example
- polymorphism with java example
- encapsulation with java example
- Abstract classes
- Interfaces
- exception handling with java example - both checked and unchecked exceptions.
- Understanding the difference between Error and Exception
- Threads - basics - Thread class and Runnable interface - how many ways we can create threads, what is the default thread in java .. etc
- JDBC - How would you connect to any database using the Java API - understanding the DriverManager class (see the sketch after this list)
- Java Collections - List , Set , Map , ArrayList , HashMap , HashSet .. etc - pros and cons of using a specific type of collection object.
- Java Generics - a new feature added from jdk 1.5
- New features introduced in each version of java from 1.5 to 1.8
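A minimal JDBC sketch for the DriverManager item above (the URL and credentials are placeholders for your own database, and the driver JAR must be on the classpath):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - point these at your own database.
        String url = "jdbc:mysql://localhost:3306/testdb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // prints 1
            }
        }
    }
}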
Advanced Java:
----------
Servlet programming
JSP - Java Server Pages
Understanding web applications and enterprise applications
Must have knowledge of at least two application servers, for instance Tomcat and JBoss
How to deploy an application and work with it
EJB - not required for you
Books:
https://havealookonenglish.files.wordpress.com/2015/12/head-first-programming.pdf
Note:
Try a test/certification from these sites. I've heard companies hire freshers based on the scores they get on these sites. Have a look at these once.
https://www.myamcat.com/
https://www.elitmus.com/
Monday, 25 July 2016
maven-eclipse-plugin
<!-- download source code in Eclipse, best practice -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-eclipse-plugin</artifactId>
  <version>2.9</version>
  <configuration>
    <downloadSources>true</downloadSources>
    <downloadJavadocs>false</downloadJavadocs>
  </configuration>
</plugin>
Tuesday, 19 July 2016
The static in-project repository solution
<repository>
  <id>repo</id>
  <releases>
    <enabled>true</enabled>
    <checksumPolicy>ignore</checksumPolicy>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
  <url>file://${project.basedir}/repo</url>
</repository>
Use Maven to install to project repo:
mvn install:install-file -DlocalRepositoryPath=repo -DcreateChecksum=true -Dpackaging=jar -Dfile=[your-jar] -DgroupId=[...] -DartifactId=[...] -Dversion=[...]
<repository>
  <id>repo</id>
  <url>file://${project.basedir}/repo</url>
</repository>
<dependency>
  <groupId>org.swinglabs</groupId>
  <artifactId>swingx</artifactId>
  <version>0.9.3</version>
  <scope>system</scope>
  <systemPath>${project.basedir}/lib/swingx-0.9.3.jar</systemPath>
</dependency>
Registering your repository in pom.xml:
<repository>
  <id>ProjectRepo</id>
  <name>ProjectRepo</name>
  <url>file://${project.basedir}/libs</url>
</repository>
Tuesday, 31 May 2016
Key Points
- CentOS (the community version of Red Hat Enterprise Linux)
- We can use elevated admin rights rather than full admin rights, since getting admin rights is not easy in an office environment.
- If you keep one project as a dependency of another project, you don't need to import its files into the other project to use them.
Friday, 1 April 2016
JBoss clustering
1) Create a server group for the full-ha profile
Configure clustering and check how two WARs exchange information - a must; it's easy to configure clustering in JBoss.
Create a group, and under it create two hosts. Deploy a WAR file on the group so that it gets deployed on both servers.
Add the <distributable/> tag to the web.xml of the WAR you've deployed to the group.
Now check the consoles, logs and the request count for the web application from the GUI to ensure both WARs share data with each other.
Thursday, 31 March 2016
SSL - Secure Sockets Layer
Encryption Algorithm
Asymmetric encryption: (one way encryption, low in performance)
Public key
Private key
Symmetric encryption: (high speed)
icici ----> <-----public------ca(certifying authority)
Every site gets certified by a CA, which performs background checks and gives a clean chit to the site. When the browser sees the CA's digital signature, it treats the site as secure, and we can trust that our transactions with the site will be genuine and authenticated.
All the browser needs is to recognize the CA (Verizon).
A cross mark in the address bar tells us that the site is not registered with a CA and may be harmful.
Implementing SSL
1) Key store -> public/private key
2) public/private key generation
jdk -> key store
3) On your jboss SSL connector
4) Point your SSL connector to keystore.
1) C:\Users\edi5752>keytool -genkey -alias optum -keyalg RSA -keystore C:\Users\edi5752\Documents\vamsi\optum.keystore
Enter keystore password:
Keystore password is too short - must be at least 6 characters
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: vamshi krishna
What is the name of your organizational unit?
[Unknown]: optum
What is the name of your organization?
[Unknown]: uhg
What is the name of your City or Locality?
[Unknown]: hyderabad
What is the name of your State or Province?
[Unknown]: india
What is the two-letter country code for this unit?
[Unknown]: 91
Is CN=vamshi krishna, OU=optum, O=uhg, L=hyderabad, ST=india, C=91 correct?
[no]: yes
Enter key password for <optum>
(RETURN if same as keystore password):
Re-enter new password:
C:\Users\edi5752>
Right now JBoss won't have any connector for HTTPS, so we need to wire this connector into a subsystem in JBoss.
<subsystem xmlns="urn:jboss:domain:web:2.2" default-virtual-server="default-host" native="false">
  <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
  <connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" enabled="true">
    <ssl name="optum-ssl" key-alias="optum" password="123456" certificate-key-file="C:\Users\edi5752\Documents\vamsi\optum.keystore"/>
  </connector>
  <virtual-server name="default-host" enable-welcome-root="true">
    <alias name="localhost"/>
    <alias name="example.com"/>
  </virtual-server>
</subsystem>
This applies if you are running your standalone server in full mode.
Understand what Layer 7 is, and understand what an Auth code is, etc.
OSI model:
7. Application layer
6. Presentation layer
5. Session layer
4. Transport layer
3. Network layer
2. Data link layer
1. Physical layer
EJB3.x Deployment in JBoss
EJB - Enterprise Java Beans
=================
Session Beans - Synchronous - meant for writing business logic
Entity Beans - Synchronous - DAL - replaced by JPA/Hibernate
MDB (Message Driven Beans) - Asynchronous
Messaging - Two components talking to each other.
Messaging is of two types:
Synchronous - request/reply mechanism; tightly coupled
Asynchronous - a fire-and-forget kind of mechanism. It doesn't really wait for a response; instead it leaves a listener, and whenever the response comes in as a message, a thread takes care of it.
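As a sketch of that listener model, a message-driven bean lets the container invoke your code on a pooled thread whenever a message arrives (the bean class and queue name below are made-up examples):
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Fire and forget: the sender never waits; the container calls onMessage()
// whenever a message lands on the mapped queue.
@MessageDriven(mappedName = "jms/testQueue")
public class OrderListener implements MessageListener {
    public void onMessage(Message msg) {
        try {
            if (msg instanceof TextMessage) {
                System.out.println("received: " + ((TextMessage) msg).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}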
MOM - Message Oriented Middleware products:
WebSphere MQ/MB - IBM
ActiveMQ - Apache
Apollo MQ - Apache
RabbitMQ
HornetQ - JBoss / Red Hat (implemented as the messaging subsystem, but only available in the full and full-ha profiles)
Types of asynchronous messaging:
Point-to-Point messaging - you work with the original copy of the message; queues need to be configured.
Publish/Subscribe messaging - multiple publishers, multiple subscribers.
A copy of the message is sent to every subscriber.
Eg: Twitter - if AB posts a tweet, all of the followers get a copy of the tweet.
Two components implemented in different languages can still talk to each other.
1) Declarative transactions/security (everything via XML in terms of configuration)
2) Multilayered architecture
Spring is far better when it comes to the declarative transactions and multilayered architecture discussed above.
If you go with Spring, Apache (Tomcat) is just enough, free of cost.
Session Beans :
JNDI names
EJB client (JVM one) ----> EJB (JBoss JVM)
You can call from one JVM to another JVM.
EJB: Remote interface (call to an external JVM); Local interface (within the same JVM) - much better performance.
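A minimal EJB3 sketch of the two views (the bean and interface names are made up):
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
interface EligibilityRemote { String check(String memberId); }

@Local
interface EligibilityLocal { String check(String memberId); }

// One bean can expose both views: remote calls cross JVMs and pay
// serialization/network cost; local calls are plain in-JVM calls,
// hence the better performance.
@Stateless
public class EligibilityBean implements EligibilityRemote, EligibilityLocal {
    public String check(String memberId) { return "ACTIVE"; }
}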
JMS - Java Message Service
JMS is used to connect to MOM products.
A Java application needs to connect to a MOM product to put or retrieve a message.
JMS API - Java API
Java application ---> JMS API ---> [MOM product to put/retrieve messages]
Your application uses either the vendor-provided API or the JMS API to talk to the MOM product.
1) Use the vendor API for messaging (full utilization of messaging, but hard binding to that product; it will be hard to migrate to a different MQ later)
2) Use the JMS API (limited use of messaging, but it will be easy to migrate to a different MQ)
Follow these steps to create a JMS application:
1) Look up a Connection Factory
2) Create a Destination - a Queue or a Topic
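Putting those two steps together, a minimal JMS 1.1 send sketch (the JNDI names are placeholders - they depend on how your server is configured, and the lookup assumes a jndi.properties pointing at it):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSendDemo {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // 1) Connection Factory (placeholder JNDI name)
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        // 2) Destination - a Queue (point-to-point) or a Topic (publish/subscribe)
        Queue queue = (Queue) ctx.lookup("jms/testQueue");
        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage("hello");
            producer.send(msg); // fire and forget - no reply expected
        } finally {
            con.close();
        }
    }
}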