This post is in continuation to my previous posts on Apache Avro - Introduction, Apache Avro - Generating classes from Schema and Apache Avro - Serialization.

In this post, we will share insights on using Apache Avro as RPC framework.

To use Apache Avro as an RPC framework, we first need to define a protocol. Before going into the depth of this topic, let's discuss what a protocol is.

Avro protocols describe RPC interfaces. Like schemas, they are defined as JSON.

A protocol has the following attributes:

  • protocol: a string, the name of the protocol (required).
  • namespace: an optional string that qualifies the name.
  • types: an optional list of definitions of named types (like record, enum, fixed and errors).
  • messages: an optional JSON object whose keys are the method names of the protocol and whose values are objects whose attributes are described below. No two messages may have the same name.

Further, a message has the following attributes:

  • request: a list of named, typed parameter schemas.
  • response: a response schema.
  • errors: an optional union of declared error schemas.

Let's define a simple protocol to exchange an email message between client and server.

{
  "namespace": "com.gauravbytes.avro",
  "protocol": "EmailSender",
   "types": [{
     "name": "EmailMessage", "type": "record",
     "fields": [{
       "name": "to",
       "type": "string"
     },
     {
       "name": "from",
       "type": "string"
     },
     {
       "name": "body",
       "type": "string"
     }]
   }],
   "messages": {
     "send": {
       "request": [{"name": "email", "type": "EmailMessage"}],
       "response": "string"
     }
   }
}

Here, the protocol defines an interface EmailSender with a single message send, which takes an EmailMessage as its request and returns a string response.
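
If you save this protocol as, for example, src/main/avro/emailSender.avpr (the file name is my assumption), the avro-maven-plugin's protocol goal, covered in a later post on this page, will generate the EmailSender interface and the EmailMessage class from it.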

We have created a mock implementation of the generated EmailSender interface:

public class EmailSenderImpl implements EmailSender {
  @Override
  public CharSequence send(EmailMessage email) throws AvroRemoteException {
    return email.toString();
  }
}

Now, we create a server; Apache Avro uses Netty for this.

Server server = new NettyServer(new SpecificResponder(EmailSender.class, new EmailSenderImpl()),
    new InetSocketAddress(65333));

Now, we create a client which sends requests to the server.

NettyTransceiver client = new NettyTransceiver(new InetSocketAddress(65333));
// client code - attach to the server and send a message
EmailSender proxy = SpecificRequestor.getClient(EmailSender.class, client);
logger.info("Client built, got proxy");

// fill in the Message record and send it
EmailMessage message = new EmailMessage();
message.setTo(new Utf8(args[0]));
message.setFrom(new Utf8(args[1]));
message.setBody(new Utf8(args[2]));
logger.info("Calling proxy.send with message: {} ", message.toString());
logger.info("Result: {}", proxy.send(message));
// cleanup
client.close();
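
A side note on cleanup: the snippet above closes the client transceiver; in a long-running application you would eventually stop the server as well. If memory serves, the org.apache.avro.ipc.Server interface exposes a close() method for exactly that.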

This is how we can use Apache Avro as RPC framework. I hope you found this article useful. You can download the full example code from Github.

I did a comparison of Java's default serialization and Apache Avro serialization of data, and the results were astonishing.

You can read my older posts for Java serialization process and Apache Avro Serialization.

Apache Avro consumed 15-20 times less memory to store the serialized data. I created a class with three fields (two String fields and one enum) and serialized it with both Avro and Java.

The memory used by Avro was 14 bytes, whereas Java used 231 bytes (length of the resulting byte[]).
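
For reference, here is a minimal, self-contained sketch of how such a comparison can be reproduced. The Person class, its field values and the use of Avro's reflection API are my assumptions for illustration; the exact class from the original test may differ slightly.

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

import org.apache.avro.Schema;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;

public class SerializedSizeComparison {
  public enum Sex { MALE, FEMALE }

  // a simple class with two String fields and one enum, mirroring the test setup
  public static class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    String firstName = "Gaurav";
    String lastName = "Mazra";
    Sex sex = Sex.MALE;
  }

  public static void main(String[] args) throws Exception {
    Person person = new Person();

    // Java default serialization: class metadata + signature + field values
    ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
      oos.writeObject(person);
    }

    // Avro binary serialization: only the field values are written
    Schema schema = ReflectData.get().getSchema(Person.class);
    ByteArrayOutputStream avroBytes = new ByteArrayOutputStream();
    Encoder encoder = EncoderFactory.get().binaryEncoder(avroBytes, null);
    new ReflectDatumWriter<Person>(schema).write(person, encoder);
    encoder.flush();

    System.out.println("Java: " + javaBytes.size() + " bytes");
    System.out.println("Avro: " + avroBytes.size() + " bytes");
  }
}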

Why Avro generates fewer bytes

Java Serialization

The default serialization mechanism for an object writes the class of the object, the class signature, and the values of all non-transient and non-static fields. References to other objects (except in transient or static fields) cause those objects to be written also. Multiple references to a single object are encoded using a reference sharing mechanism so that graphs of objects can be restored to the same shape as when the original was written.

Apache Avro

Avro writes only the schema (as a string) and the data of the class being serialized. There is no per-field overhead of writing the class of the object and the class signature, as in Java. Also, the fields are serialized in a pre-determined order.

You can find the full Java example on github.

Avro can't handle circular references and throws java.lang.StackOverflowError, whereas Java's default serialization can handle them (example code for Avro and example code for Java serialization). Another observation is that Avro has no direct way of defining inheritance in the schema (classes), while Java's default serialization supports inheritance, with its own constraints: every superclass in the hierarchy must either implement the Serializable interface or have an accessible no-args constructor (otherwise serialization fails at runtime, e.g. with java.io.InvalidClassException).
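
To make the circular-reference point concrete, here is a small sketch (class and field names are mine, for illustration) showing Java's reference-sharing mechanism coping with a cycle that would send an Avro datum writer into infinite recursion:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class CircularReferenceDemo {
  static class Node implements Serializable {
    private static final long serialVersionUID = 1L;
    String value;
    Node next;
    Node(String value) { this.value = value; }
  }

  public static void main(String[] args) throws Exception {
    // build a two-node cycle: a -> b -> a
    Node a = new Node("a");
    Node b = new Node("b");
    a.next = b;
    b.next = a;

    // Java serialization records back-references instead of recursing forever
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
      oos.writeObject(a);
    }

    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(baos.toByteArray()))) {
      Node copy = (Node) ois.readObject();
      // the cycle is restored: copy.next.next is the very same object as copy
      System.out.println(copy.next.next == copy); // prints true
    }
  }
}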

You can also view my other posts on Avro.

This post is in continuation of my earlier posts on Apache Avro - Introduction and Apache Avro - Generating classes from Schema.

In this post, we will discuss reading (deserialization) and writing (serialization) of Avro-generated classes.

"Apache Avro™ is a data serialization system." We use DatumReader<T> and DatumWriter<T> for de-serialization and serialization of data, respectively.

Apache Avro formats

Apache Avro supports two formats, JSON and Binary.

Let's move to an example using the JSON format.

Employee employee = Employee.newBuilder().setFirstName("Gaurav").setLastName("Mazra").setSex(SEX.MALE).build();

DatumWriter<Employee> employeeWriter = new SpecificDatumWriter<>(Employee.class);
byte[] data;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
  Encoder jsonEncoder = EncoderFactory.get().jsonEncoder(Employee.getClassSchema(), baos);
  employeeWriter.write(employee, jsonEncoder);
  jsonEncoder.flush();
  data = baos.toByteArray();
}
  
// serialized data
System.out.println(new String(data));
  
DatumReader<Employee> employeeReader = new SpecificDatumReader<>(Employee.class);
Decoder decoder = DecoderFactory.get().jsonDecoder(Employee.getClassSchema(), new String(data));
employee = employeeReader.read(null, decoder);
//data after deserialization
System.out.println(employee);

Explanation on the way :)

First, we create an object of the Employee class (Avro-generated).

Next, we create a SpecificDatumWriter<T>, which implements DatumWriter<T>. Other implementations of DatumWriter also exist, viz. GenericDatumWriter and ReflectDatumWriter.

We then create a JsonEncoder, passing the schema and the OutputStream where we want the serialized data; in our case, it is an in-memory ByteArrayOutputStream.

Next, we call the write method on the DatumWriter with the object and the encoder, and flush the JsonEncoder. Internally, the flush is delegated to the OutputStream passed to the JsonEncoder.

On the reading side, we create a SpecificDatumReader<T>, which implements DatumReader<T>. Again, other implementations of DatumReader exist, viz. GenericDatumReader and ReflectDatumReader.

Finally, we create a JsonDecoder, passing the schema and the input String which will be deserialized.
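
Before moving on, here is a brief sketch of the generic variant mentioned above. It is my illustration, not part of the original example: it works against the schema directly (the employee.avsc path is assumed), so no generated Employee class is needed.

Schema schema = new Schema.Parser().parse(new File("src/main/avro/employee.avsc"));

// build the record dynamically against the schema
GenericRecord employee = new GenericData.Record(schema);
employee.put("firstName", "Gaurav");
employee.put("lastName", "Mazra");
employee.put("sex", new GenericData.EnumSymbol(schema.getField("sex").schema(), "MALE"));

DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
byte[] data;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
  Encoder jsonEncoder = EncoderFactory.get().jsonEncoder(schema, baos);
  writer.write(employee, jsonEncoder);
  jsonEncoder.flush();
  data = baos.toByteArray();
}

DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
GenericRecord result = reader.read(null, DecoderFactory.get().jsonDecoder(schema, new String(data)));
System.out.println(result);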

Let's move to a serialization and deserialization example with the binary format.

Employee employee = Employee.newBuilder().setFirstName("Gaurav").setLastName("Mazra").setSex(SEX.MALE).build();

DatumWriter<Employee> employeeWriter = new SpecificDatumWriter<>(Employee.class);
byte[] data;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
  Encoder binaryEncoder = EncoderFactory.get().binaryEncoder(baos, null);
  employeeWriter.write(employee, binaryEncoder);
  binaryEncoder.flush();
  data = baos.toByteArray();
}
  
// serialized data (raw bytes)
System.out.println(Arrays.toString(data));
  
DatumReader<Employee> employeeReader = new SpecificDatumReader<>(Employee.class);
Decoder binaryDecoder = DecoderFactory.get().binaryDecoder(data, null);
employee = employeeReader.read(null, binaryDecoder);
//data after deserialization
System.out.println(employee);

The example is the same as before, except that we create a BinaryEncoder and a BinaryDecoder instead of the JSON variants.
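
Also worth mentioning, though not part of this example: when persisting to files, Avro's container-file API embeds the schema in the file header, so the file is self-describing. A minimal sketch (the employees.avro file name is my assumption):

// write: the schema is stored once in the file header
try (DataFileWriter<Employee> fileWriter = new DataFileWriter<>(new SpecificDatumWriter<>(Employee.class))) {
  fileWriter.create(Employee.getClassSchema(), new File("employees.avro"));
  fileWriter.append(employee);
}

// read: the schema is read back from the file itself
try (DataFileReader<Employee> fileReader =
    new DataFileReader<>(new File("employees.avro"), new SpecificDatumReader<>(Employee.class))) {
  for (Employee e : fileReader) {
    System.out.println(e);
  }
}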

This is how we can serialize and deserialize data with Apache Avro. I hope you found this article informative and useful. You can find the full example on Github.

This post is in continuation of my previous post on Apache Avro - Introduction. In this post, we will discuss generating classes from a schema.

How to generate classes from an Avro schema?

There are two ways to generate Avro classes from a schema:

  • Programmatically generating classes
  • Using the Avro Maven plugin

Consider that we have the following schema in src/main/avro:

{
  "type" : "record",
  "name" : "Employee",
  "namespace" : "com.gauravbytes.avro",
  "doc" : "Schema to hold employee object",
  "fields" : [{
    "name" : "firstName",
    "type" : "string"
  },
  {
    "name" : "lastName",
    "type" : "string"
  }, 
  {
    "name" : "sex", 
    "type" : {
      "name" : "SEX",
      "type" : "enum",
      "symbols" : ["MALE", "FEMALE"]
    }
  }]
}

Programmatically generating classes

Classes can be generated for a schema using SpecificCompiler:

import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.compiler.specific.SpecificCompiler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PragmaticSchemaGeneration {
  private static final Logger LOGGER = LoggerFactory.getLogger(PragmaticSchemaGeneration.class);

  public static void main(String[] args) {
    try {
      // parse the .avsc file and compile it to Java source files under src/main/java
      SpecificCompiler compiler = new SpecificCompiler(new Schema.Parser().parse(new File("src/main/avro/employee.avsc")));
      compiler.compileToDestination(new File("src/main/avro"), new File("src/main/java"));
    } catch (IOException e) {
      LOGGER.error("Exception occurred parsing schema: ", e);
    }
  }
}

In main, we create the object of SpecificCompiler. It has two constructors: one takes a Protocol as an argument and the other takes a Schema.
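
For the Protocol variant, the call looks much the same; a one-line sketch (the emailSender.avpr file name is assumed):

SpecificCompiler compiler = new SpecificCompiler(Protocol.parse(new File("src/main/avro/emailSender.avpr")));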

Using the Maven plugin to generate classes

There is a Maven plugin (avro-maven-plugin) which can generate the classes for you. You need to add the following configuration to your pom.xml:

<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>${avro.version}</version>
  <executions>
    <execution>
      <id>schemas</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
        <goal>protocol</goal>
        <goal>idl-protocol</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
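
With this configuration in place, the classes are generated during Maven's generate-sources phase (running mvn generate-sources, or any later phase such as mvn compile, triggers it). The schema, protocol and idl-protocol goals process .avsc, .avpr and .avdl files respectively from the configured source directory.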

This is how we can generate classes from Avro schema. I hope you find this post informative and helpful. You can find the full project on Github.

In this post, we will discuss the following items:

  • What is Apache Avro?
  • What is Avro schema and how to define it?
  • Serialization in Apache Avro.

What is Apache Avro?

"Apache Avro is data serialization library" That's it, huh. This is what you will see when you open their official page.Apache Avro is:

  • A schema-based data serialization library.
  • An RPC framework (support).
  • Rich data structures (primitive types include null, string, number and boolean; complex types include record, array, map, etc.).
  • A compact, fast and binary data format.

What is Avro schema and how to define it?

Apache Avro's serialization concept is based on schemas. When you write data, the schema is written along with it; when you read data, the schema is therefore always present. The schema traveling with the data makes it fully self-describing.

A schema is the representation of an Avro datum (record). Schemas come in two kinds: primitive and complex.

Primitive types

These are the basic types supported by Avro: null, boolean, int, long, float, double, bytes and string. One quick example:

{"type": "string"}

Complex types

Apache Avro supports six complex types: record, enum, array, map, union and fixed.

RECORD

A record uses the type name "record" and has the following attributes:

  • name: a JSON string, providing the name of the record (required).
  • namespace: a JSON string that qualifies the name.
  • doc: a JSON string, providing documentation for the record.
  • aliases: a JSON array, providing alternate names for the record.
  • fields: a JSON array, listing fields (required). Each field has its own attributes:
    • name: a JSON string, providing the name of the field (required).
    • type: a schema definition (required).
    • doc: a JSON string, providing documentation for the field.
    • default: a default value for the field, used when a datum is missing a value for it.

{
  "type": "record",
  "name": "Node",
  "aliases": ["SinglyLinkedNodes"],
  "fields" : [
    {"name": "value", "type": "string"},
    {"name": "next", "type": ["null", "Node"]}
  ]
}
ENUM

An enum uses the type "enum" and supports the attributes name, namespace, aliases, doc and symbols (a JSON array of names):

{ 
  "type": "enum",
  "name": "Move",
  "symbols" : ["LEFT", "RIGHT", "UP", "DOWN"]
}
ARRAYS

An array uses the type "array" and supports a single attribute, items:

{"type": "array", "items": "string"}
MAPS

A map uses the type "map" and supports one attribute, values. Map keys are assumed to be strings:

{"type": "map", "values": "long"}
UNIONS

Unions are represented as JSON arrays. For example, ["null", "string"] means the value type may be either null or string.

FIXED

Fixed uses the type "fixed" and supports two required attributes, name and size:

{"type": "fixed", "size": 16, "name": "md5"}

Serialization in Apache Avro

Apache Avro data is always serialized with its schema. Avro supports two types of encoding: binary and JSON. You can read more about serialization in the official specification, and/or see the example usage here.