Iterating Collections API

Java 8 introduced a new way of iterating over the Collections API. The API is retrofitted with a #forEach method, which accepts a Consumer in the case of a Collection and a BiConsumer in the case of a Map.

Consumer

Java 8 introduced the new package java.util.function, which includes the Consumer interface. It represents an operation that accepts one argument and returns no result.
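For example, a minimal sketch (not from the original post) of creating and invoking a Consumer; it requires import java.util.function.Consumer:

Consumer<String> printer = message -> System.out.println(message);
printer.accept("Hello, Consumer"); // prints "Hello, Consumer"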

Before Java 8, you would have used a for loop, an enhanced for loop, and/or an Iterator to iterate over collections.

List<Employee> employees = EmployeeStub.getEmployees();
Iterator<Employee> employeeItr = employees.iterator();
Employee employee;
while (employeeItr.hasNext()) {
  employee = employeeItr.next();
  System.out.println(employee);
}

In Java 8, you can write a Consumer and pass its reference to the #forEach method to perform an operation on every item of the Collection.

// fetch employees from Stub
List<Employee> employees = EmployeeStub.getEmployees();
// create a consumer on employee
Consumer<Employee> consolePrinter = System.out::println;
// use List's retrofitted method for iteration on employees and consume it
employees.forEach(consolePrinter);

Or, as a one-liner:

employees.forEach(System.out::println);

Before Java 8, you would have iterated a Map as:

Map<Long, Employee> idToEmployeeMap = EmployeeStub.getEmployeeAsMap();
for (Map.Entry<Long, Employee> entry : idToEmployeeMap.entrySet()) {
  System.out.println(entry.getKey() + " : " + entry.getValue());
}

In Java 8, you can write a BiConsumer and pass its reference to the #forEach method to perform an operation on every entry of the Map.

BiConsumer<Long, Employee> employeeBiConsumer = (id, employee) -> System.out.println(id + " : " + employee);
Map<Long, Employee> idToEmployeeMap = EmployeeStub.getEmployeeAsMap();
idToEmployeeMap.forEach(employeeBiConsumer);

Or, as a one-liner:

idToEmployeeMap.forEach((id, employee) -> System.out.println(id + " : " + employee));

This is how we can benefit from the newly introduced methods for iteration. I hope you found this post informative. You can get the full example on Github.

In this post, we will cover the following items.

  • What is java.util.function.Predicate?
  • How to filter data with Predicates?
  • Predicate chaining.

Java 8 introduced many new features like the Streams API, lambdas, functional interfaces, default methods in interfaces, and many more.

Today, we will discuss the Predicate interface added in the java.util.function package and its usage in filtering in-memory data.

What is java.util.function.Predicate?

Predicate is like a condition checker: it accepts one argument of type T and returns a boolean value.

It's a functional interface whose functional method is test(T), where T is the type parameter.

@FunctionalInterface
interface Predicate<T> {
  public boolean test(T t);
}

How to filter data with Predicates?

Consider that we have a collection of employees and we want to filter them based on age, sex, salary, or any combination of these. We can do that with Predicate.

Let's understand this with one short example.

class Employee {
  private long id;
  private String firstName;
  private String lastName;
  private int age;
  private Sex sex;
  private int salary;

  // getters, constructor, hashCode, equals, toString
}
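
The Sex type referenced above is assumed here to be a simple enum (a sketch; the actual definition is in the example code on Github):

enum Sex {
  MALE, FEMALE
}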

Defining predicates for filtering

Predicate<Employee> male = e -> e.getSex() == Sex.MALE;
Predicate<Employee> female = e -> e.getSex() == Sex.FEMALE;
Predicate<Employee> ageLessThan30 = e -> e.getAge() < 30;
Predicate<Employee> salaryLessThan20 = e -> e.getSalary() < 20000;
Predicate<Employee> salaryGreaterThan25 = e -> e.getSalary() > 25000;

Filtering employees with Predicates

employees.stream().filter(male).collect(Collectors.toList());
employees.stream().filter(female).collect(Collectors.toList());
employees.stream().filter(ageLessThan30).collect(Collectors.toList());
employees.stream().filter(salaryLessThan20).collect(Collectors.toList());

Here, the employees reference is of type java.util.List.

The Collections framework is retrofitted for the Streams API and now has stream() and parallelStream() methods, along with a few other additions. The filter() method is defined on Stream. We are streaming the employees collection, filtering it based on the Predicate, and then collecting the result as a java.util.List.
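The same filtering could also run on a parallel stream; a minimal sketch (not from the original post):

// parallelStream() may split the work across multiple threads
employees.parallelStream().filter(male).collect(Collectors.toList());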

Predicate chaining

java.util.function.Predicate has three default methods. Two of them, and(Predicate<? super T> other) and or(Predicate<? super T> other), are used for predicate chaining; the third is negate().
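A small sketch of negate(), reusing the male predicate defined earlier:

// matches every employee for whom the male predicate returns false
Predicate<Employee> notMale = male.negate();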

Filtering employees with multiple predicates

Let's say we want to filter the collection of employees with multiple conditions, like

  • all male employees with a salary less than 20k.
  • all female employees with a salary greater than 25k.
  • all male employees with a salary either less than 20k or greater than 25k.

Let's understand this with a quick example.

Defining predicates

Predicate<Employee> male = e -> e.getSex() == Sex.MALE;
Predicate<Employee> female = e -> e.getSex() == Sex.FEMALE;
Predicate<Employee> ageLessThan30 = e -> e.getAge() < 30;
Predicate<Employee> salaryLessThan20 = e -> e.getSalary() < 20000;
Predicate<Employee> salaryGreaterThan25 = e -> e.getSalary() > 25000;
Predicate<Employee> salaryLessThan20OrGreaterThan25 = salaryLessThan20.or(salaryGreaterThan25);

Predicate<Employee> allMaleSalaryLessThan20 = male.and(salaryLessThan20);
Predicate<Employee> allMaleAgeLessThan30 = male.and(ageLessThan30);
Predicate<Employee> allFemaleSalaryGreaterThan25 = female.and(salaryGreaterThan25);

Predicate<Employee> allMaleSalaryLessThan20OrGreaterThan25 = male.and(salaryLessThan20OrGreaterThan25);

Line 1 => Predicate testing that an employee is male

Line 2 => Predicate testing that an employee is female

Line 3 => Predicate testing that an employee's age is less than 30

Line 4 => Predicate testing that an employee's salary is less than 20000

Line 8 => Predicate testing that an employee is male and has a salary less than 20000

Line 10 => Predicate testing that an employee is female and has a salary greater than 25000

Line 12 => Predicate testing that an employee is male and has a salary either less than 20000 or greater than 25000

Filtering employees with predicate chaining

employees.stream().filter(allMaleSalaryLessThan20).collect(Collectors.toList());
employees.stream().filter(allMaleAgeLessThan30).collect(Collectors.toList());
employees.stream().filter(allFemaleSalaryGreaterThan25).collect(Collectors.toList());
employees.stream().filter(allMaleSalaryLessThan20OrGreaterThan25).collect(Collectors.toList());

This is how we can use Predicate to filter in-memory data. I hope you find this post informative and helpful. You can get the full example code on Github.

java.util.function package

Java 8 introduced the new java.util.function package with many functional interfaces. They can be divided into four categories.

  • Predicate
  • Consumer
  • Function
  • Supplier

Predicate

It represents a boolean-valued function of one argument. It is a functional interface with the functional method test(T), where T is the type parameter.

You can see the usage here.

Consumer

It represents an operation that accepts argument(s), returns no result, and operates via side effects. Java 8 introduced many variants of Consumer (for example BiConsumer and IntConsumer).

You can see the usage of Consumer here.
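As a small illustrative wrap-up of the four categories (a sketch, not from the original post), here is one lambda per category:

Predicate<String> isEmpty = s -> s.isEmpty();           // one argument in, boolean out
Consumer<String> printer = s -> System.out.println(s);  // one argument in, nothing out
Function<String, Integer> length = s -> s.length();     // one argument in, one result out
Supplier<String> greeting = () -> "Hello";               // no argument in, one result out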

Spring 4.3 - @GetMapping, @PostMapping, @PutMapping and @DeleteMapping

There are some new improvements in Spring Boot 1.4 and Spring 4.3 which lead to better readability and a better use of annotations, particularly for HTTP request methods.

We usually map the GET, PUT, POST and DELETE HTTP methods in a REST controller in the following way.

@RestController
@RequestMapping("/api/employees")
public class EmployeeController {

  @RequestMapping
  public ResponseEntity<List<Employee>> getAll() {
    return ResponseEntity.ok(Collections.emptyList());
  }

  @RequestMapping("/{employeeId}")
  public ResponseEntity<Employee> findById(@PathVariable Long employeeId) {
    return ResponseEntity.ok(EmployeeStub.findById(employeeId));
  }

  @RequestMapping(method = RequestMethod.POST)
  public ResponseEntity<Employee> addEmployee(@RequestBody Employee employee) {
    return ResponseEntity.ok(EmployeeStub.addEmployee(employee));
  }

  @RequestMapping(method = RequestMethod.PUT)
  public ResponseEntity<Employee> updateEmployee(@RequestBody Employee employee) {
    return ResponseEntity.ok(EmployeeStub.updateEmployee(employee));
  }

  @RequestMapping(path = "/{employeeId}", method = RequestMethod.DELETE)
  public ResponseEntity<Employee> deleteEmployee(@PathVariable Long employeeId) {
    return ResponseEntity.ok(EmployeeStub.deleteEmployee(employeeId));
  }
}

But with Spring Framework 4.3 and Spring Boot 1.4, we have new annotations to map the HTTP methods.

  • GET -> @GetMapping
  • PUT -> @PutMapping
  • POST -> @PostMapping
  • DELETE -> @DeleteMapping
  • PATCH -> @PatchMapping

/**
 * 
 * @author Gaurav Rai Mazra
 *
 */
@RestController
@RequestMapping("/api/employees")
public class EmployeeController {

  @GetMapping
  public ResponseEntity<List<Employee>> getAll() {
    return ResponseEntity.ok(Collections.emptyList());
  }

  @GetMapping("/{employeeId}")
  public ResponseEntity<Employee> findById(@PathVariable Long employeeId) {
    return ResponseEntity.ok(EmployeeStub.findById(employeeId));
  }

  @PostMapping
  public ResponseEntity<Employee> addEmployee(@RequestBody Employee employee) {
    return ResponseEntity.ok(EmployeeStub.addEmployee(employee));
  }

  @PutMapping
  public ResponseEntity<Employee> updateEmployee(@RequestBody Employee employee) {
    return ResponseEntity.ok(EmployeeStub.updateEmployee(employee));
  }

  @DeleteMapping(path = "/{employeeId}")
  public ResponseEntity<Employee> deleteEmployee(@PathVariable Long employeeId) {
    return ResponseEntity.ok(EmployeeStub.deleteEmployee(employeeId));
  }
}

These annotations have improved the readability of the code. I hope you find this post helpful. You can get the full example code on Github.

This post is in continuation of my older post on the Single Responsibility principle. At that time, I provided a solution where we refactored FileParser: the validation logic was moved to FileValidationUtils, and the Parser interface was composed with various implementations, viz. CSVFileParser, XMLFileParser and JsonFileParser (a sort of Strategy design pattern). You can get hold of the old code on Github.

This was roughly 2 years ago :).

I thought of improving this code further. We can completely remove FileValidationUtils by making the following change to the Parser interface.

public interface Parser {
  public void parse(File file);

  public FileType getFileType();

  public default boolean canParse(File file) {
    return Objects.nonNull(file) && file.getName().endsWith(getFileType().getExtension());
  }
}

public class FileParser {
  private Parser parser;

  public FileParser(Parser parser) {
    this.parser = parser;
  }

  public void setParser(Parser parser) {
    this.parser = parser;
  }

  public void parseFile(File file) {
    if (parser.canParse(file)) {
      parser.parse(file);
    }
  }
}

We introduced a default method in the Parser interface (a Java 8 feature) which checks whether the file can be parsed; it can be overridden by the concrete implementations. You can check the full example code on Github.
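For illustration only, a concrete implementation such as the CSVFileParser mentioned above might look roughly like this (the parsing body and the FileType.CSV constant are assumptions, not the actual repository code):

public class CSVFileParser implements Parser {
  @Override
  public void parse(File file) {
    // the real CSV parsing logic goes here
    System.out.println("Parsing CSV file: " + file.getName());
  }

  @Override
  public FileType getFileType() {
    return FileType.CSV; // assumed enum constant whose extension is ".csv"
  }

  // canParse(File) is inherited from the Parser interface's default method
}

It could then be used as new FileParser(new CSVFileParser()).parseFile(file).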

This post is in continuation of my previous post, Apache Avro - Introduction. In this post, we will discuss generating classes from an Avro schema.

How to generate classes from an Apache Avro schema?

There are two ways to generate Avro classes from a schema.

  • Programmatically, using the SpecificCompiler
  • Using the Avro Maven plugin

Consider we have the following schema in "src/main/avro":

{
  "type" : "record",
  "name" : "Employee",
  "namespace" : "com.gauravbytes.avro",
  "doc" : "Schema to hold employee object",
  "fields" : [{
    "name" : "firstName",
    "type" : "string"
  },
  {
    "name" : "lastName",
    "type" : "string"
  }, 
  {
    "name" : "sex", 
    "type" : {
      "name" : "SEX",
      "type" : "enum",
      "symbols" : ["MALE", "FEMALE"]
    }
  }]
}

Programmatically generating classes

Classes can be generated for a schema using the SpecificCompiler.

public class PragmaticSchemaGeneration {
  private static final Logger LOGGER = LoggerFactory.getLogger(PragmaticSchemaGeneration.class);

  public static void main(String[] args) {
    try {
      SpecificCompiler compiler = new SpecificCompiler(new Schema.Parser().parse(new File("src/main/avro/employee.avsc")));
      compiler.compileToDestination(new File("src/main/avro"), new File("src/main/java"));
    } catch (IOException e) {
      LOGGER.error("Exception occurred parsing schema: ", e);
    }
  }
}

At line number 6, we create the object of SpecificCompiler. It has two constructors: one takes a Protocol as an argument and the other takes a Schema.

Using the Maven plugin to generate classes

There is a Maven plugin that can generate the classes for you. You need to add the following configuration to your pom.xml.

<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>${avro.version}</version>
  <executions>
    <execution>
      <id>schemas</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
        <goal>protocol</goal>
        <goal>idl-protocol</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
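
With this configuration, the classes are generated during the generate-sources phase, so running that phase directly (or any later phase such as compile) produces them under src/main/java:

mvn generate-sources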

This is how we can generate classes from an Avro schema. I hope you find this post informative and helpful. You can find the full project on Github.

In this post, we will discuss the following items:

  • What is Apache Avro?
  • What is Avro schema and how to define it?
  • Serialization in Apache Avro.

What is Apache Avro?

"Apache Avro is data serialization library" That's it, huh. This is what you will see when you open their official page.Apache Avro is:

  • A schema-based data serialization library.
  • An RPC framework (support).
  • Rich data structures (primitive types include null, string, number and boolean; complex types include record, array, map, etc.).
  • A compact, fast, binary data format.

What is Avro schema and how to define it?

Apache Avro's serialization concept is based on schemas. When you write data, the schema is written along with it. When you read data, the schema is always present. The schema stored along with the data makes it fully self-describing.

A schema is a representation of an Avro datum (record). Schema types are of two kinds: primitive and complex.

Primitive types

These are the basic types supported by Avro: null, boolean, int, long, float, double, bytes and string. One quick example:

{"type": "string"}

Complex types

Apache Avro supports six complex types: record, enum, array, map, union and fixed.

RECORD

Record uses the type name "record" and has the following attributes.

  • name: A JSON string, providing the name of the record (required).
  • namespace: A JSON string that qualifies the name.
  • doc: A JSON string providing documentation for the record.
  • aliases: A JSON array, providing alternate names for the record.
  • fields: A JSON array, listing fields (required). Each field has its own attributes:
    • name: A JSON string, providing the name of the field (required).
    • type: A JSON object, defining a schema or record definition (required).
    • doc: A JSON string, providing documentation for the field.
    • default: A default value for the field, used when reading instances that lack the field.

{
  "type": "record",
  "name": "Node",
  "aliases": ["SinglyLinkedNodes"],
  "fields" : [
    {"name": "value", "type": "string"},
    {"name": "next", "type": ["null", "Node"]}
  ]
}

ENUM

Enum uses the type "enum" and supports the attributes name, namespace, aliases, doc and symbols (a JSON array).

{ 
  "type": "enum",
  "name": "Move",
  "symbols" : ["LEFT", "RIGHT", "UP", "DOWN"]
}

ARRAYS

Array uses the type "array" and supports a single attribute, items.

{"type": "array", "items": "string"}
MAPS

Map uses the type "map" and supports one attribute, values. Map keys are assumed to be of type string.

{"type": "map", "values": "long"}
UNIONS

Unions are represented by a JSON array such as ["null", "string"], which means the value type can be either null or string.

FIXED

Fixed uses the type "fixed" and supports two attributes: name and size.

{"type": "fixed", "size": 16, "name": "md5"}

Serialization in Apache Avro

Apache Avro data is always serialized with its schema. It supports two types of encoding: binary and JSON. You can read more about serialization in their official specification and/or see the example usage here.
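As a rough sketch of the binary encoding using Avro's generic API (the schema path and field values are assumptions, reusing the employee schema from the previous post; the classes come from org.apache.avro, org.apache.avro.generic and org.apache.avro.io):

// parse the schema and build a generic record that conforms to it
Schema schema = new Schema.Parser().parse(new File("src/main/avro/employee.avsc"));
GenericRecord employee = new GenericData.Record(schema);
employee.put("firstName", "John");
employee.put("lastName", "Doe");
employee.put("sex", new GenericData.EnumSymbol(schema.getField("sex").schema(), "MALE"));

// serialize the record with the binary encoding
ByteArrayOutputStream out = new ByteArrayOutputStream();
DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(employee, encoder);
encoder.flush();
// out.toByteArray() now holds the binary-encoded employee record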