
Posts

Showing posts matching the search for java

React Ecosystem: Server-side rendering with Next.js

In the previous post , we created a BlogPost application with React and managed global state with Redux. We will extend the same application and introduce Next.js for server-side rendering. The big benefits of using Next.js are pre-rendering of pages along with automatic code-splitting, static site export and CSS-in-JS. Next.js functions Next.js exposes three functions for data fetching: getStaticProps , getStaticPaths and getServerSideProps . The first two functions are used for static generation and the last, getServerSideProps , is used for server-side rendering. Static generation means the HTML is generated at build time, whereas in server-side rendering the HTML is generated on each request. Adding required libraries Run npm i --save next @types/next from the root of the project to add the required libraries for this example. Update the following commands under scripts in package.json. "dev": "next dev", "star

Spring JDBC RowMapper example

In this post, we will discuss what RowMapper is and how to use it when writing JDBC code with the Spring JDBC module. What is RowMapper? It is an interface of the Spring JDBC module used by JdbcTemplate to map rows of a java.sql.ResultSet . It is typically used when you query data. Example usage of RowMapper Let's first create a RowMapper which can map products. class ProductRowMapper implements RowMapper<Product> { @Override public Product mapRow(ResultSet rs, int rowNum) throws SQLException { Product product = new Product(); product.setId(rs.getInt("id")); product.setName(rs.getString("name")); product.setDescription(rs.getString("description")); product.setCategory(rs.getString("category")); return product; } } Now, we will use this ProductRowMapper in #queryForObject of JdbcTemplate . Product product = jdbcTemplate.queryForObject("select * from product where id=1", new Prod
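To sketch how this mapper plugs into JdbcTemplate beyond the one-liner above, here is a minimal DAO (the DataSource wiring and method names are assumptions based on the excerpt, not the post's exact code):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ProductDao {
    private final JdbcTemplate jdbcTemplate;

    public ProductDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // queryForObject expects exactly one row; a bind variable avoids SQL injection
    public Product findById(int id) {
        return jdbcTemplate.queryForObject(
                "select * from product where id = ?", new ProductRowMapper(), id);
    }

    // query maps every row of the result set using the same RowMapper
    public List<Product> findAll() {
        return jdbcTemplate.query("select * from product", new ProductRowMapper());
    }
}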

Running data analytics on application events and logs using Elasticsearch, Logstash and Kibana

In this post, we will learn how to use Elasticsearch, Logstash and Kibana for running analytics on application events and logs. Firstly, I will install all these applications on my local machine. Installations You can read my previous posts on how to install Elasticsearch , Logstash , Kibana and Filebeat on your local machine. Basic configuration I hope by now you have installed Elasticsearch, Logstash, Kibana and Filebeat on your system. Now, let's do a few basic configurations required to be able to run analytics on application events and logs. Elasticsearch Open the elasticsearch.yml file in the [ELASTICSEARCH_INSTALLATION_DIR]/config folder and add these properties to it. cluster.name: gauravbytes-event-analyzer node.name: node-1 The cluster name is used by Elasticsearch nodes to form a cluster. The node name within a cluster needs to be unique. We are running only a single instance of Elasticsearch on our local machine. But, in a production-grade setup there will be master nodes, data nodes a

Java 8 - default and static methods in interfaces

Java 8 introduced default and static methods in interfaces. These features allow us to add new functionality to interfaces without breaking the existing contract for implementing classes. How do we define default and static methods? A default method has the default keyword and a static method has the static keyword in its method signature. public interface InterfaceA { double someMethodA(); default double someDefaultMethodB() { // some default implementation } static void someStaticMethodC() { //helper method implementation } } A few important points about default methods You can inherit the default method. You can redeclare the default method, essentially making it abstract . You can redefine the default method (equivalent to overriding). Why do we need default and static methods? Consider an existing Expression interface with existing implementations like ConstantExpression , BinaryExpression , DivisionExpression and so on. Now, you want to add new functionalit
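As a sketch of both flavours in one interface (this Expression is illustrative, not necessarily the post's version; its single abstract method also makes it usable as a lambda target):

interface Expression {
    double evaluate();

    // default method: inherited by all implementations unless overridden
    default String print() {
        return "Expression(" + evaluate() + ")";
    }

    // static method: a helper that belongs to the interface itself
    static Expression constant(double value) {
        return () -> value;
    }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Expression two = Expression.constant(2.0);
        System.out.println(two.print()); // prints Expression(2.0)
    }
}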

Installing Logstash

Logstash Logstash is a data-processing pipeline which ingests data simultaneously from multiple data sources, transforms it and sends it to a different `stash` i.e. Elasticsearch, Redis, a database, a REST endpoint etc. For example: ingesting log files, then cleaning and transforming them into machine- and human-readable formats. There are three components in Logstash i.e. Inputs, Filters and Outputs. Inputs It ingests data of any kind, shape and size. For example: logs, AWS metrics, instance health metrics etc. Filters Logstash filters parse each event, build a structure, enrich the data in the event and also transform it to the desired form. For example: enriching geo-location from an IP using the GEO-IP filter, anonymizing PII information in events, transforming unstructured data to structured data using GROK filters etc. Outputs This is the sink layer. There are many output plugins i.e. Elasticsearch, Email, Slack, Datadog, database persistence etc. Installing Logstash As of writing, Logstash (6.2.3) r

Elasticsearch setup and configuration

What is Elasticsearch? Elasticsearch is a highly scalable, broadly distributed, open-source full-text search and analytics engine. You can search, store and index big volumes of data in very near real-time. It internally uses Apache Lucene for indexing and storing data. Below are a few use cases for it. Product search for an e-commerce website Collecting application logs and transaction data and analyzing them for trends and anomalies. Indexing instance metrics (health, stats), doing analytics and creating alerts for instance health at regular intervals. For analytics/ business-intelligence applications Elasticsearch basic concepts We will be using a few terminologies while talking about Elasticsearch. Let's see the basic building blocks of Elasticsearch. Near real-time Elasticsearch is near real-time. What it means is that there is only a small latency between the indexing of a document and its availability for searching. Cluster It is a collection of one or multiple nodes (servers) that together h

Apache Ignite - Internals

We have learnt about What is Apache Ignite? , Setting up Apache Ignite and a few quick examples in the last few posts. In this post, we will deep-dive into core Apache Ignite classes and discuss the following internals. Core classes Lifecycle events Client and Server mode Thread pool configurations Asynchronous support in Ignite Resource injection Core classes Whenever you interact with Apache Ignite in an application, you will always encounter the Ignite interface and the Ignition class. Ignition is the main entry point to create an Ignite node. This class provides various methods to start a grid node in the network topology. // Starting with default configuration Ignite igniteWithDefaultConfig = Ignition.start(); // Ignite with Spring configuration xml file Ignite igniteWithSpringCfgXMLFile = Ignition.start("/path_to_spring_configuration_xml.xml"); // ignite with java based configuration IgniteConfiguration icfg = ...; Ignite igniteWithJavaConfigurat
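Completing the Java-based configuration route from the truncated snippet, a minimal sketch might look like this (the instance name and client-mode choice are illustrative, assuming Ignite 2.x APIs):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgniteJavaConfigDemo {
    public static void main(String[] args) {
        IgniteConfiguration icfg = new IgniteConfiguration();
        icfg.setIgniteInstanceName("demo-node"); // name of this node in the topology
        icfg.setClientMode(false);               // start as a server node

        try (Ignite ignite = Ignition.start(icfg)) {
            System.out.println("Nodes in topology: " + ignite.cluster().nodes().size());
        }
    }
}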

Introduction to Apache Ignite

This is an introduction series to Apache Ignite. We will discuss Apache Ignite, its features, and its usage as an in-memory data grid, compute grid, distributed cache, near real-time cache and persistent distributed database. What is Ignite? It is an in-memory compute platform . It is an in-memory data grid . Durable , strongly consistent and highly available. It provides the option to run SQL-like queries on the cache (with a JDBC API to support this). Durable memory Apache Ignite is a memory-centric platform based on a durable memory architecture. It allows you to store and process data in memory (RAM) and on disk (if Ignite native persistence is enabled). When Ignite native persistence is enabled, it will treat disk as the superset of data, which is capable of surviving crashes and restarts. In-memory features RAM is always treated as the first memory tier, and all the processing happens there. It has the following characteristics. Off-heap based: All the data and indexes are stored outs
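A minimal sketch of enabling native persistence, assuming Ignite 2.x APIs (not code from the post itself):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgnitePersistenceDemo {
    public static void main(String[] args) {
        // RAM stays the first tier; the default data region also persists to disk
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true); // persistence-enabled clusters start inactive
        }
    }
}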

Spring Core - PropertyPlaceholderConfigurer example

In this post, we will externalize the properties used in the application into a property file and will use PropertyPlaceholderConfigurer to resolve the placeholders at application startup time. Java configuration for PropertyPlaceholderConfigurer @Configuration public class AppConfig { @Bean public PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() { PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer = new PropertySourcesPlaceholderConfigurer(); propertySourcesPlaceholderConfigurer.setLocations(new ClassPathResource("application-db.properties")); //propertySourcesPlaceholderConfigurer.setIgnoreUnresolvablePlaceholders(true); //propertySourcesPlaceholderConfigurer.setIgnoreResourceNotFound(true); return propertySourcesPlaceholderConfigurer; } } We created an object of PropertySourcesPlaceholderConfigurer and set the locations to search. In this example we used ClassPathResource to resolve the properti
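Once the configurer is registered, placeholders can be injected with @Value. A small sketch (the property keys db.url and db.username are hypothetical, assumed to live in application-db.properties):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DbSettings {

    // resolved at startup by the PropertySourcesPlaceholderConfigurer above
    @Value("${db.url}")
    private String url;

    @Value("${db.username}")
    private String username;

    public String getUrl() { return url; }
    public String getUsername() { return username; }
}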

Spring Security: Digest Authentication example

In this post, we will discuss Digest Authentication with Spring Security. You can also read my previous post on Basic Authentication with Spring Security . What is Digest Authentication? This authentication method makes use of a hashing algorithm to hash the password (called a password hash) entered by the user before sending it to the server. This, obviously, makes it much safer than the basic authentication method, in which the user’s password travels in plain text (or base64 encoded) and can be easily read by whoever intercepts it. There are many such hashing algorithms in Java which can prove really effective for password security, such as the MD5, SHA, BCrypt, SCrypt and PBKDF2WithHmacSHA1 algorithms. Please remember that once this password hash is generated and stored in the database, you cannot convert it back to the original password. Each time a user logs into the application, you have to regenerate the password hash and match it with the hash stored in the database. So, if user
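To illustrate just the hashing idea with the plain JDK (this is a sketch of HTTP digest's username:realm:password hash, not the post's Spring Security configuration; the username, realm and password are made up):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestSketch {
    public static void main(String[] args) throws Exception {
        // digest auth hashes username:realm:password instead of sending the password
        String ha1Input = "john:MyAppRealm:secret";
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(ha1Input.getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // hex-encode each byte
        }
        System.out.println(hex); // this hash is what the server stores and compares
    }
}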

Spring Boot - Restful webservices with Jersey

In the previous posts, we have created a Spring Boot QuickStart , customized the embedded server and properties and run specific code after the Spring Boot application starts . Now in this post, we will create RESTful webservices with Jersey deployed on Undertow as a Spring Boot application. Adding dependencies in pom.xml We will add spring-boot-starter-parent as the parent of our Maven-based project. The added benefit of this is version management for Spring dependencies. <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.0.RELEASE</version> </parent> Adding the spring-boot-starter-jersey dependency This will add/configure the Jersey-related dependencies. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jersey</artifactId> </dependency> Adding the spring-boot-starter-undertow depend
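A minimal sketch of the Jersey side that such a setup would serve (the resource path and class names are illustrative, using the javax.ws.rs APIs of that Spring Boot generation):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Path("/greetings")
class GreetingResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from Jersey";
    }
}

// Spring Boot auto-configures Jersey around any ResourceConfig bean it finds
@Component
class JerseyConfig extends ResourceConfig {
    JerseyConfig() {
        register(GreetingResource.class);
    }
}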

Spring Boot - A quick start

In this post, we will create a simple Spring Boot application which will run on embedded Apache Tomcat. What is Spring Boot? Spring Boot helps in creating stand-alone, production-grade applications easily with minimum fuss. It is an opinionated view of the Spring framework and other third-party libraries which believes in convention-based setup. Let's start building the Spring Boot application. Adding dependencies in pom.xml We will first add spring-boot-starter-parent as the parent of our Maven-based project. <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.1.RELEASE</version> </parent> The benefit of adding spring-boot-starter-parent is that dependency version management becomes easy. You can omit the required version on a dependency; it will pick the one configured in the parent pom or from the starter poms. Also, it conveniently sets up the build r
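For context, the entry point of such an application is a single annotated class (a standard sketch; the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// enables auto-configuration and component scanning in one annotation
@SpringBootApplication
public class QuickStartApplication {
    public static void main(String[] args) {
        SpringApplication.run(QuickStartApplication.class, args);
    }
}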

Spring Core - @Import annotation

In this post, we will learn about the @Import annotation and its usage. You can see my previous post on how to create a simple Spring Core project. What is the @Import annotation and its usage? The @Import annotation is equivalent to the <import/> element in Spring XML configuration. It helps in splitting a single Java-based configuration file into small, modular, maintainable and component-based configurations. Let's see it with an example. @Configuration @Import(value = { DBConfig.class, WelcomeGbConfig.class }) public class HelloGbAppConfig { } In the above code snippet, we are importing two different configuration files, viz. DBConfig and WelcomeGbConfig , into the application-level configuration file HelloGbAppConfig . The above code is equivalent to the Spring XML-based configuration below. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/bean
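To sketch how the composed configuration is bootstrapped (GreetingService and the main class here are illustrative stand-ins for the post's actual beans):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class HelloGbApp {
    public static void main(String[] args) {
        // only the top-level configuration is registered; @Import pulls in the rest
        try (AnnotationConfigApplicationContext context =
                     new AnnotationConfigApplicationContext(HelloGbAppConfig.class)) {
            System.out.println(context.getBean(GreetingService.class));
        }
    }
}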

Spring Core - Java configuration example

In this post, we will create a Spring context and will register a bean via a Java configuration file. You can see my previous post on how to create a simple Spring Core project. What is the @Configuration annotation? The @Configuration annotation indicates that a class declares one or more bean methods which the Spring container can process to generate bean definitions at runtime. Also, the @Bean annotation is used at method level to signify that the method's return value will be registered as a bean in the Spring context. Let's create a quick configuration class. @Configuration public class WelcomeGbConfig { @Bean GreetingService greetingService() { return new GreetingService(); } } Now, we will create the Spring context as follows. // using try with resources so that this context closes automatically try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext( WelcomeGbConfig.class)) { GreetingService greetingService = context.getBean(GreetingService.class); greetingService.gre
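For completeness, here is a minimal GreetingService the configuration above could register (the post's actual class may differ):

public class GreetingService {
    public void greet() {
        System.out.println("Hello from a Spring-managed bean");
    }
}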

Spring Core - A quick start

In this post, we will create a Spring context and will get a bean object from it. What is the Spring context? The Spring context is also termed the Spring IoC container, which is responsible for instantiating, configuring and assembling the beans by reading configuration metadata from XML, Java annotations and/or Java code in configuration files. Technologies used Spring 4.3.6.RELEASE, Maven Compiler 3.6.0 and Java 1.8 We will first create a simple Maven project. You can select maven-archetype-quickstart as the archetype. Adding dependencies in pom.xml We will add spring-framework-bom in the dependency management. <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-framework-bom</artifactId> <version>4.3.6.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </depe
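Once the dependencies resolve, fetching a bean from an XML-configured context looks like this (the file name applicationContext.xml and the GreetingService bean are assumptions for illustration):

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class App {
    public static void main(String[] args) {
        // the context is a Closeable in Spring 4, so try-with-resources works
        try (ClassPathXmlApplicationContext context =
                     new ClassPathXmlApplicationContext("applicationContext.xml")) {
            GreetingService service = context.getBean(GreetingService.class);
            service.greet();
        }
    }
}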

Java 8 - Method references

This article is in continuation of my other posts on Functional Interfaces , static and default methods and Lambda expressions . Method references are a special form of Lambda expression . When your lambda expression does nothing other than invoke existing behaviour (a method), you can achieve the same by referring to it by name. :: is used to refer to a method. The method's type arguments are inferred from the context in which it is used. Types of method references Static method reference Instance method reference of a particular object Instance method reference of an arbitrary object of a particular type Constructor reference Static method reference When you refer to a static method of the containing class, e.g. ClassName::someStaticMethodName class MethodReferenceExample { public static int compareByAge(Employee first, Employee second) { return Integer.compare(first.age, second.age); } } Comparator<Employee> compareByAge = MethodReferenceExample::compareByAge; Instanc
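A compact sketch covering two of the listed flavours (the names list is made up for illustration):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MethodReferenceDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Bob", "alice", "Carol");

        // instance method reference of an arbitrary object of a particular type
        names.sort(String::compareToIgnoreCase);

        // static method reference, equivalent to (a, b) -> Integer.compare(a.length(), b.length())
        Comparator<String> byLength = Comparator.comparingInt(String::length);

        System.out.println(names);                               // [alice, Bob, Carol]
        System.out.println(names.stream().max(byLength).get());  // a longest name
    }
}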

Java 8 - Streams in Action

In this post, we will cover the following topics. What are Streams? What is a pipeline? Key points to remember for Streams. How to create Streams? What are Streams? Java 8 introduced the new package java.util.stream which contains classes to perform SQL-like operations on elements. A Stream is a sequence of elements on which you can perform aggregate operations (reduction, filtering, mapping, average, min, max etc.). It is not a data structure that stores elements like a collection, but carries values from a source through a pipeline , often computed lazily. What is a pipeline? A pipeline is a sequence of aggregate operations on the source. It has the following components. A source: collections, a generator function, an array, an I/O channel etc. Zero or more intermediate operations: filter, map, sequential, sorted, distinct, limit, flatMap, parallel etc. Intermediate operations return/produce a stream. A terminal operation: forEach, reduction, noneMatch, allMatch, c
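A short pipeline putting the three components together (the numbers are arbitrary sample data):

import java.util.Arrays;
import java.util.List;

public class StreamPipelineDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(3, 1, 4, 1, 5, 9, 2, 6);

        // source -> intermediate operations (lazy) -> terminal operation
        int sumOfEvenSquares = numbers.stream()
                .filter(n -> n % 2 == 0)  // intermediate: keep even numbers
                .map(n -> n * n)          // intermediate: square them
                .reduce(0, Integer::sum); // terminal: reduce to a single value

        System.out.println(sumOfEvenSquares); // 4*4 + 2*2 + 6*6 = 56
    }
}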

Java 8 - Aggregate operations on Streams

This post is in continuation of my earlier posts on Streams . In this post we will discuss aggregate operations on Streams. Aggregate operations on Streams You can perform intermediate and terminal operations on Streams. Intermediate operations result in a new stream, are lazily evaluated, and will only run when a terminal operation is called. persons.stream().filter(p -> p.getGender() == Gender.MALE).forEach(System.out::println); In the snippet above, filter() doesn't start filtering immediately but creates a new stream. It will only start when a terminal operation is called, in the above case forEach() . Intermediate operations There are many intermediate operations that you can perform on Streams. Some of them are filter() , distinct() , sorted() , limit() , parallel() , sequential() , map() , flatMap() . filter() operation This takes the Predicate functional interface as an argument, and the output stream of this operation will have only those elements which pass th
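To make the laziness visible without the post's Person/Gender model, a self-contained sketch with a plain list of names (the sample data is made up):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class AggregateOpsDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Amit", "Ben", "Asha", "Carl", "Anya");

        // nothing runs until collect(), the terminal operation, is invoked
        List<String> aNames = names.stream()
                .filter(n -> n.startsWith("A")) // intermediate, lazy
                .sorted()                       // intermediate, lazy
                .collect(Collectors.toList());  // terminal, triggers the pipeline

        System.out.println(aNames); // [Amit, Anya, Asha]
    }
}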

Java 8 - Lambda expressions

In this post, we will cover the following topics. What are Lambda expressions? Syntax for Lambda expressions. How to define a no-parameter Lambda expression? How to define a single/multi-parameter Lambda expression? How to return a value from a Lambda expression? Accessing local variables in a Lambda expression. Target typing in Lambda expressions. What are Lambda expressions? Lambda expressions are the first step of Java towards functional programming. Lambda expressions enable us to treat functionality as method arguments and to express instances of single-method classes more compactly. Syntax for Lambda expressions A lambda has three parts: a comma-separated list of formal parameters enclosed in parentheses; the arrow token -> ; and a body of the expression (which may or may not return a value). (param) -> { System.out.println(param); } Lambda expressions can only be used where the types they are matched to are functional interfaces . How to define a no-parameter Lambda expression? If the la
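The syntax variants described above, side by side in a runnable sketch (the functional interfaces chosen are just convenient JDK ones):

import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class LambdaSyntaxDemo {
    public static void main(String[] args) {
        // no parameter: empty parentheses are required
        Supplier<String> hello = () -> "hello";

        // single parameter: parentheses are optional
        Consumer<String> print = s -> System.out.println(s);

        // multiple parameters with an explicit return inside a block body
        BiFunction<Integer, Integer, Integer> add = (a, b) -> {
            return a + b;
        };

        print.accept(hello.get() + " " + add.apply(1, 2)); // prints: hello 3
    }
}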