
Posts

Showing posts from 2018

Data Analytics: Watching and Alerting on real-time changing data in Elasticsearch using Kibana and SentiNL

In the previous post, we set up the ELK stack and ran data analytics on application events and logs. In this post, we will discuss how you can watch real-time application events that are being persisted in the Elasticsearch index and raise alerts if a watcher condition is breached, using SentiNL (a Kibana plugin). A few examples of alerts for application events (see previous posts) are: the same user logged in from different IP addresses; different users logged in from the same IP address; PermissionFailures in the last 15 minutes; a particular kind of exception in the last 15 minutes/hour/day. Watching and alerting on an Elasticsearch index in Kibana There are many plugins available for watching and alerting on an Elasticsearch index in Kibana, e.g. X-Pack and SentiNL. X-Pack is a paid extension provided by elastic.co which provides security, alerting, monitoring, reporting and graph capabilities. SentiNL is a free extension provided by siren.io which provides alerting and reporting function…
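To make one of these alert conditions concrete, the query a watcher would evaluate for "PermissionFailures in the last 15 minutes" might look like the sketch below; the index pattern and the eventType field are assumptions for illustration, not the post's actual mapping.

# index pattern and field names below are hypothetical
curl -X GET "localhost:9200/app-events-*/_search" -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "eventType": "PermissionFailure" } },
        { "range": { "@timestamp": { "gte": "now-15m" } } }
      ]
    }
  }
}'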

Running data analytics on application events and logs using Elasticsearch, Logstash and Kibana

In this post, we will learn how to use Elasticsearch, Logstash and Kibana for running analytics on application events and logs. Firstly, I will install all these applications on my local machine. Installations You can read my previous posts on how to install Elasticsearch, Logstash, Kibana and Filebeat on your local machine. Basic configuration I hope by now you have installed Elasticsearch, Logstash, Kibana and Filebeat on your system. Now, let's do a few basic configurations required to be able to run analytics on application events and logs. Elasticsearch Open the elasticsearch.yml file in the [ELASTICSEARCH_INSTALLATION_DIR]/config folder and add these properties to it:
cluster.name: gauravbytes-event-analyzer
node.name: node-1
The cluster name is used by Elasticsearch nodes to form a cluster. The node name within a cluster needs to be unique. We are running only a single instance of Elasticsearch on our local machine. But, in a production-grade setup there will be master nodes, data nodes a…
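For reference, the master/data roles mentioned at the end are also set in elasticsearch.yml; below is a minimal sketch of such a node configuration, with illustrative values that are not taken from the post.

cluster.name: gauravbytes-event-analyzer
node.name: node-1
node.master: true        # eligible to be elected as the master node
node.data: true          # stores shards and serves data-related operations
network.host: 127.0.0.1  # illustrative bind address
http.port: 9200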

Java 8 - default and static methods in interfaces

Java 8 introduced default and static methods in interfaces. These features allow us to add new functionality to interfaces without breaking the existing contract for implementing classes. How do we define default and static methods? A default method has the default keyword and a static method has the static keyword in the method signature.
public interface InterfaceA {
    double someMethodA();

    default double someDefaultMethodB() {
        // some default implementation
    }

    static void someStaticMethodC() {
        // helper method implementation
    }
}
A few important points about default methods: You can inherit the default method. You can redeclare the default method, essentially making it abstract. You can redefine the default method (equivalent to overriding). Why do we need default and static methods? Consider an existing Expression interface with existing implementations like ConstantExpression, BinaryExpression, DivisionExpression and so on. Now, you want to add new functionalit…
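To see how an implementing class consumes these methods, here is a minimal, self-contained sketch; the method bodies and the ConcreteA/Demo classes are illustrative additions, not code from the post.

// A fleshed-out version of the post's InterfaceA, with illustrative bodies
public interface InterfaceA {
    double someMethodA();

    default double someDefaultMethodB() {
        return someMethodA() * 2;     // a default method can call the abstract method
    }

    static void someStaticMethodC() {
        System.out.println("helper"); // static helper, belongs to the interface itself
    }
}

// The implementing class only provides the abstract method;
// the default method is inherited as-is.
class ConcreteA implements InterfaceA {
    @Override
    public double someMethodA() {
        return 21.0; // illustrative value
    }
}

class Demo {
    public static void main(String[] args) {
        InterfaceA a = new ConcreteA();
        System.out.println(a.someMethodA());        // 21.0
        System.out.println(a.someDefaultMethodB()); // 42.0 (inherited default)
        InterfaceA.someStaticMethodC();             // static methods are called on the interface
    }
}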

Installing Filebeat

Filebeat Filebeat is a lightweight log shipper. It is installed as an agent, listens to your predefined set of log files and locations, and forwards them to your choice of sink (Logstash, Elasticsearch, a database, etc.). Installation
deb
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-amd64.deb
sudo dpkg -i filebeat-6.3.2-amd64.deb
rpm
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-x86_64.rpm
sudo rpm -vi filebeat-6.3.2-x86_64.rpm
mac
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-darwin-x86_64.tar.gz
tar xzvf filebeat-6.3.2-darwin-x86_64.tar.gz
docker
docker pull docker.elastic.co/beats/filebeat:6.3.2
Windows
Download Filebeat from the official website and do the following configurations. 1) Extract the zip file to your choice of location, e.g. C:\Program Files. 2) Rename the filebeat- -windows directory to Filebeat. 3) Open a PowerShell prompt as an Administrator (right…
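After installation, Filebeat is pointed at log files through filebeat.yml; below is a minimal sketch in which the log path and the Logstash host are assumptions for illustration.

filebeat.inputs:               # called filebeat.prospectors in releases before 6.3
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log   # hypothetical application log location
output.logstash:
  hosts: ["localhost:5044"]    # forward events to Logstash instead of Elasticsearch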

Installing Kibana

Kibana Kibana is a visualization dashboard for Elasticsearch; you can choose from many available chart types like graphs, pie, bar, histogram, etc., or view real-time textual data, and gain meaningful analytics. Installation Installing Kibana directly from tar files For Linux installation
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-linux-x86_64.tar.gz
shasum -a 512 kibana-6.2.3-linux-x86_64.tar.gz
tar -xzf kibana-6.2.3-linux-x86_64.tar.gz
cd kibana-6.2.3-linux-x86_64/
For Windows installation
// Download Kibana
https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-windows-x86_64.zip
// running Kibana
bin\kibana.bat
Installation from packages Debian package installation
// Import elastic PGP key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
// install https transport module
sudo apt-get install apt-transport-https
// save repository definition
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" |…
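Once installed, Kibana is pointed at Elasticsearch via config/kibana.yml; a minimal sketch for a local 6.x setup, with illustrative default values:

server.port: 5601                          # port Kibana listens on
server.host: "localhost"
elasticsearch.url: "http://localhost:9200" # the Elasticsearch instance to visualize (6.x setting)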

Installing Logstash

Logstash Logstash is a data processing pipeline which ingests data simultaneously from multiple data sources, transforms it and sends it to a different `stash`, i.e. Elasticsearch, Redis, a database, a REST endpoint, etc. For example: ingesting log files, then cleaning and transforming them into machine- and human-readable formats. There are three components in Logstash, i.e. inputs, filters and outputs. Inputs They ingest data of any kind, shape and size. For example: logs, AWS metrics, instance health metrics, etc. Filters Logstash filters parse each event, build a structure, enrich the data in the event and also transform it to the desired form. For example: enriching geo-location from an IP using the GeoIP filter, anonymizing PII information in events, transforming unstructured data to structured data using grok filters, etc. Outputs This is the sink layer. There are many output plugins, e.g. Elasticsearch, email, Slack, Datadog, database persistence, etc. Installing Logstash As of writing, Logstash (6.2.3) r…
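To make the three components concrete, a minimal pipeline configuration might look like the sketch below; the file name, port, grok pattern and index name are assumptions for illustration.

# app-events.conf (hypothetical file name), run with: bin/logstash -f app-events.conf
input {
  beats {
    port => 5044                    # receive events shipped by Filebeat
  }
}
filter {
  grok {
    # illustrative pattern for lines like "2018-07-20T10:15:30 ERROR something failed"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-events-%{+YYYY.MM.dd}"
  }
}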

Elasticsearch setup and configuration

What is Elasticsearch? Elasticsearch is a highly scalable, broadly distributed, open-source full-text search and analytics engine. You can search, store and index big volumes of data in very near real-time. It internally uses Apache Lucene for indexing and storing data. Below are a few use cases for it. Product search for an e-commerce website. Collecting application logs and transaction data and analyzing them for trends and anomalies. Indexing instance metrics (health, stats), doing analytics and creating alerts for instance health at regular intervals. Analytics/business-intelligence applications. Elasticsearch basic concepts We will be using a few terms while talking about Elasticsearch. Let's see the basic building blocks of Elasticsearch. Near real-time Elasticsearch is near real-time, which means the time (latency) between the indexing of a document and its availability for searching is very small. Cluster It is a collection of one or multiple nodes (servers) that together h…
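As a quick illustration of indexing and near real-time search over the REST API; the index and field names below are made up for the example.

# index a document (hypothetical index and fields)
curl -X PUT "localhost:9200/products/_doc/1" -H 'Content-Type: application/json' -d '{ "name": "phone", "price": 199 }'

# after the refresh interval (roughly a second), it becomes searchable
curl -X GET "localhost:9200/products/_search?q=name:phone"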

Apache Ignite - Internals

We have learnt about What is Apache Ignite?, setting up Apache Ignite and a few quick examples in the last few posts. In this post, we will deep dive into the core Apache Ignite classes and discuss the following internals: core classes; lifecycle events; client and server mode; thread pool configurations; asynchronous support in Ignite; resource injection. Core classes Whenever you interact with Apache Ignite in an application, you will always encounter the Ignite interface and the Ignition class. Ignition is the main entry point to create an Ignite node. This class provides various methods to start a grid node in the network topology.
// Starting with default configuration
Ignite igniteWithDefaultConfig = Ignition.start();
// Ignite with Spring configuration XML file
Ignite igniteWithSpringCfgXMLFile = Ignition.start("/path_to_spring_configuration_xml.xml");
// Ignite with Java-based configuration
IgniteConfiguration icfg = ...;
Ignite igniteWithJavaConfigurat…
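For the client and server mode item listed above, here is a minimal sketch of starting this JVM as a client node; the cache name is hypothetical and only added for illustration.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ClientModeExample {
    public static void main(String[] args) {
        // Nodes start in server mode by default; switch this JVM to client mode
        Ignition.setClientMode(true);

        try (Ignite client = Ignition.start()) {
            // The client node joins the topology but does not store cache data
            IgniteCache<Integer, String> cache = client.getOrCreateCache("myCache"); // hypothetical cache name
            cache.put(1, "Hello from a client node");
            System.out.println(cache.get(1));
        }
    }
}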

Apache Ignite - Examples on Data grid, compute grid, service grid and executing SQL queries

In this article, we will show a few examples of using Apache Ignite as a compute grid, data grid and service grid, and of executing SQL queries on Apache Ignite. These are basic examples and use the basic API available. There will be a few posts in the near future which explain the available APIs in the compute grid, service grid and data grid. Ignite SQL Example Apache Ignite comes with JDBC thin driver support to execute SQL queries on the in-memory data grid. In the example below, we will create tables, insert data into tables and get data from tables. I will assume that you are running Apache Ignite on your local environment; otherwise, please read the setup guide for running the Apache Ignite server. Creating Tables
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement();) { //line 1
    stmt.executeUpdate("CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"");…
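Continuing the JDBC thin-driver example, inserting and reading rows back is plain JDBC; a short sketch with made-up city values:

// requires java.sql imports (DriverManager, Connection, PreparedStatement, Statement, ResultSet)
// Insert data (values are illustrative)
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     PreparedStatement ps = conn.prepareStatement("INSERT INTO City (id, name) VALUES (?, ?)")) {
    ps.setLong(1, 1L);
    ps.setString(2, "London");
    ps.executeUpdate();
}

// Query data back
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT id, name FROM City")) {
    while (rs.next()) {
        System.out.println(rs.getLong("id") + " -> " + rs.getString("name"));
    }
}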

Apache Ignite - Setup guide

In this post, we will discuss setting up Apache Ignite. Installation You can download Apache Ignite from its official site. You can download the binary, the sources, Docker or cloud images, or use Maven. There is also third-party support from GridGain. Steps for binary installation This is a pretty straightforward installation. Download the binary from the website. You can optionally set the installation path as IGNITE_HOME. To run Ignite as a server, you need to run the below command in a terminal.
/bin/ignite.bat // if it is Windows
/bin/ignite.sh  // if it is Linux
The above command will run Ignite with the default configuration file under $IGNITE_HOME/config/default-config.xml. You can pass your own configuration file with the following command:
/bin/ignite.sh config/ignite-config.xml
Steps for building from sources If you would like to build everything from sources, then follow the steps listed below.
# Unpack the source package
$ unzip -q apache-ignite-{version}-src.zip
$ cd ap…
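Besides passing an XML file, a node can also be started from a Java-based configuration; a minimal sketch in which the instance name and the peer class loading flag are illustrative choices, not from the post:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("gauravbytes-node"); // arbitrary, illustrative instance name
        cfg.setPeerClassLoadingEnabled(true);          // optional: ship compute closures to remote nodes

        // Starts a server node with the programmatic configuration
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Topology size: " + ignite.cluster().nodes().size());
    }
}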