In this post, we will discuss the following topics.

  • What is object cloning in Java?
  • How to implement cloning in Java?
  • What is the Cloneable interface and what is its main flaw?
  • Shallow cloning.
  • Deep cloning.
  • Alternatives to cloning.

What is object cloning in Java?

Object cloning is the process of creating an exact copy of an object. To do this, there are two prerequisites in Java:

1) implement the Cloneable interface

2) and override the clone() method defined in the java.lang.Object class.

The clone() method is defined in the java.lang.Object class with protected access. You need to widen its visibility to public when overriding it, so that the method can be called from outside the class that overrides it.

General contract of clone()

Creates and returns a copy of this object. The precise meaning of "copy" may depend on the class of the object. The general intent is that, for any object x, the expression:

x.clone() != x will be true, and that the expression x.clone().getClass() == x.getClass() will be true, but these are not absolute requirements.

While it is typically the case that x.clone().equals(x) will be true, this is not an absolute requirement.

No constructors are called on the cloned object.
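
These points of the contract can be observed directly. Below is a minimal sketch; the Point class is a hypothetical example, not part of the snippets later in this post.

```java
// Minimal Cloneable class to observe the clone() contract (hypothetical example)
class Point implements Cloneable {
    int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public Point clone() {
        try {
            // super.clone() copies fields directly; no constructor runs here
            return (Point) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

public class CloneContractDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = p.clone();
        System.out.println(p != q);                       // true: distinct objects
        System.out.println(p.getClass() == q.getClass()); // true: same runtime class
    }
}
```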

What is the Cloneable interface and what is its main flaw?

Cloneable is a marker interface. Its main flaw is that it lacks a clone() method. Normally, an interface defines a contract for its implementing classes, but Cloneable merely changes the behaviour of the Object class's clone(): if a class implements Cloneable, calling clone() returns a field-by-field copy of that object; otherwise it throws CloneNotSupportedException.
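
To see the flaw in action, here is a small sketch (the NotCloneable class is hypothetical) that overrides clone() without implementing the marker interface:

```java
// Overrides clone() but does NOT implement Cloneable, so Object.clone() refuses
class NotCloneable {
    @Override
    public NotCloneable clone() throws CloneNotSupportedException {
        return (NotCloneable) super.clone();
    }
}

public class CloneableFlawDemo {
    public static void main(String[] args) {
        try {
            new NotCloneable().clone();
            System.out.println("cloned");
        } catch (CloneNotSupportedException e) {
            // Object.clone() checks for the marker interface at runtime
            System.out.println("CloneNotSupportedException");
        }
    }
}
```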

Shallow Cloning

Shallow cloning is a field-by-field copy of the object. The implementation is straightforward: implement the Cloneable interface and override clone() from the Object class.

Let's see this by code snippet.

public class Address implements Cloneable {
  private String streetAddress;
  private String city;
  private String state;
 
  public Address(String streetAddress, String city, String state) {
    this.streetAddress = streetAddress;
    this.city = city;
    this.state = state;
  }
  //getters/ setters/ toString/ hashCode/ equals

  @Override
  public Address clone() throws CloneNotSupportedException {
    // All fields are immutable Strings, so the shallow copy from super.clone() is sufficient
    return (Address)super.clone();
  }
}

public class Employee implements Cloneable {
  private long id;
  private String name;
  private Address address;
 
  public Employee(long id, String name, Address address) {
    this.id = id;
    this.name = name;
    this.address = address;
  }
  // getters/ setters/ toString/ hashCode/ equals

  @Override
  public Employee clone() throws CloneNotSupportedException {
    return (Employee)super.clone();
  }
}
public class ShallowCloneExample {
  public static void main(String[] args) throws CloneNotSupportedException {
    Employee employee = new Employee(1, "Gaurav", new Address("Sector 37C", "Chandigarh", "India"));
    Employee shallowClone = employee.clone();
    employee.getAddress().setCity("Hoshiarpur");
    // a fully independent copy would print false, but this prints true:
    // both objects share the same Address instance
    System.out.println(employee.getAddress().equals(shallowClone.getAddress()));
  }
}

In the above code snippet, the Employee class's clone() calls super.clone(), i.e. the Object class's clone(). The problem with shallow cloning appears when your class is composed of mutable object references: if you change such a referenced object through the shallow clone, the change is reflected in the original object as well, and vice versa.

When can we use shallow cloning?

If all the instance fields are either primitives or immutable object types, then shallow cloning is sufficient and recommended.

Deep cloning

Deep cloning means recursively cloning every mutable instance field of the object. Let's understand this with an example.

public class Employee implements Cloneable {
  private long id;
  private String name;
  private Address address;
 
  public Employee(long id, String name, Address address) {
    this.id = id;
    this.name = name;
    this.address = address;
  }

  // getter/setters/toString/ equals/ hashCode

  @Override
  public Employee clone() throws CloneNotSupportedException {
    Employee cloned = (Employee) super.clone();
  
    if (this.address == null) {
      cloned.setAddress(null);
    }
    else {
      cloned.setAddress(this.address.clone());
    }
  
    return cloned;
  }
}


public class Address implements Cloneable {
  private String streetAddress;
  private String city;
  private String state;
 
  public Address(String streetAddress, String city, String state) {
    this.streetAddress = streetAddress;
    this.city = city;
    this.state = state;
  }

  // getters/setters/toString/ hashCode/ equals

  @Override
  public Address clone() throws CloneNotSupportedException {
    // All fields are immutable Strings, so the shallow copy from super.clone() is sufficient
    return (Address)super.clone();
  }
}
public class DeepCloneExample {
  public static void main(String[] args) throws CloneNotSupportedException {
    Employee employee = new Employee(1, "Gaurav", new Address("Sector 37C", "Chandigarh", "India"));
    Employee deepClone = employee.clone();
    employee.getAddress().setCity("Hoshiarpur");
    // prints false: the deep clone has its own Address copy
    System.out.println(employee.getAddress().equals(deepClone.getAddress()));
  }
}

In the above code snippet, the Employee class holds a reference to the Address class, which is mutable. We changed the implementation of clone() to recursively call clone() on every mutable object reference, so that the clone gets new objects for those fields as well.

Some important points to remember

final fields cannot be re-assigned inside clone(), which limits cloning.
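
In particular, a final mutable field cannot be deep-cloned with this approach, because clone() cannot give the copy its own instance. A sketch (the Tags class is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class with a final mutable field
class Tags implements Cloneable {
    final List<String> values = new ArrayList<>();

    @Override
    public Tags clone() throws CloneNotSupportedException {
        Tags cloned = (Tags) super.clone();
        // cloned.values = new ArrayList<>(values); // does not compile: values is final
        return cloned; // original and clone are stuck sharing the same list
    }
}

public class FinalFieldCloneDemo {
    public static void main(String[] args) throws CloneNotSupportedException {
        Tags original = new Tags();
        original.values.add("java");
        Tags cloned = original.clone();
        cloned.values.add("cloning");
        System.out.println(original.values); // [java, cloning]: the list is shared
    }
}
```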

Use a covariant return type when overriding clone().

If you are implementing cloning for a thread-safe class, then you need to synchronize clone() just like the other methods that touch shared state.
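
A sketch of this, assuming a simple hypothetical thread-safe counter:

```java
// Thread-safe class: clone() takes the same lock as the mutators,
// so the copy sees a consistent snapshot of the state
class Counter implements Cloneable {
    private int count;

    public synchronized void increment() { count++; }

    public synchronized int get() { return count; }

    @Override
    public synchronized Counter clone() {
        try {
            return (Counter) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

public class SynchronizedCloneDemo {
    public static void main(String[] args) {
        Counter counter = new Counter();
        counter.increment();
        counter.increment();
        Counter snapshot = counter.clone();
        counter.increment();
        System.out.println(snapshot.get()); // 2: snapshot is unaffected by later updates
    }
}
```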

Alternatives to cloning

We can use a copy constructor and/or a static factory method as alternatives to cloning. The advantage of this approach over cloning is that it doesn't demand unenforceable, thinly documented conventions. It doesn't conflict with proper use of final fields, it doesn't throw unnecessary checked exceptions, and it doesn't require casts.

public Employee(Employee employeeToCopy)
public static Employee newInstance(Employee employeeToCopy)
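
A sketch of how these could look for the Employee/Address example above (constructors and field names assumed from the earlier snippets):

```java
class Address {
    private String streetAddress;
    private String city;
    private String state;

    public Address(String streetAddress, String city, String state) {
        this.streetAddress = streetAddress;
        this.city = city;
        this.state = state;
    }

    // copy constructor: no Cloneable, no cast, no checked exception
    public Address(Address toCopy) {
        this(toCopy.streetAddress, toCopy.city, toCopy.state);
    }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
}

class Employee {
    private long id;
    private String name;
    private Address address;

    public Employee(long id, String name, Address address) {
        this.id = id;
        this.name = name;
        this.address = address;
    }

    // copy constructor: deep-copies the mutable Address field
    public Employee(Employee toCopy) {
        this(toCopy.id, toCopy.name, new Address(toCopy.address));
    }

    // static factory alternative, delegating to the copy constructor
    public static Employee newInstance(Employee toCopy) {
        return new Employee(toCopy);
    }

    public Address getAddress() { return address; }
}

public class CopyConstructorExample {
    public static void main(String[] args) {
        Employee original = new Employee(1, "Gaurav",
                new Address("Sector 37C", "Chandigarh", "India"));
        Employee copy = new Employee(original);
        original.getAddress().setCity("Hoshiarpur");
        System.out.println(copy.getAddress().getCity()); // Chandigarh: copy is independent
    }
}
```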

This is how we clone objects in Java. I hope this post is informative and explains the pitfalls of cloning well. You can find the example code used above on GitHub.

Open-Closed Principle

  • Bertrand Meyer coined the term Open-Closed Principle in his 1988 book Object-Oriented Software Construction.
  • Software entities like classes, modules and functions should be "open for extension but closed for modification".
  • It is a generic principle. Consider it when writing your classes: when you need to extend their behaviour, you should not have to change the class, only extend it.
  • For classes, the Open-Closed Principle can be ensured by the use of abstract classes and/or interfaces, with concrete classes implementing their behaviour. This enforces having concrete classes implement the abstract classes/interfaces instead of changing them.
  • Particular cases where this principle is applied are the Template Method design pattern and the Strategy design pattern.

We want to draw different kinds of shapes. For this, we wrote a generic class ImageEditor which can draw shapes. See the code snippet below.

package javawithgaurav.openclose;

/**
 * 
 * @author Gaurav Rai Mazra
 * <a href="www.javawithgaurav.blogspot.in">Click here to view more</a>
 */
//Abstract base class for the different types of Shape
abstract class Shape {
    public static final int TYPE_RECTANGLE = 1;
    public static final int TYPE_SQUARE = 2;
    
    private int type;
    
    Shape (int type) {
        this.type = type;
    }
    
    public int getType() {
        return type;
    }
}

//Rectangle
class Rectangle extends Shape {
    Rectangle () {
        super(TYPE_RECTANGLE);
    }
}

//Square
class Square extends Shape {
    Square () {
        super(TYPE_SQUARE);
    }
}
package javawithgaurav.openclose;

/**
 * 
 * @author Gaurav Rai Mazra
 * <a href="www.javawithgaurav.blogspot.in">Click here to view more</a>
 */

public class ImageEditor
{
    public void drawShape (Shape s) {
        final int shapeType = s.getType();
        //Based on shape type draw shapes code
        if (shapeType == Shape.TYPE_RECTANGLE) {
           drawRectangle(s); 
        }
        else if (shapeType == Shape.TYPE_SQUARE) {
            drawSquare(s);
        }
    }
    
    private void drawRectangle(Shape s) {
        // Logic to draw Rectangle
    }
    
    private void drawSquare(Shape s) {
        // logic to draw Square
    }
}

Looking at the above code snippet, we see no problem. We have Shape as an abstract class and its concrete implementations Rectangle and Square. And we have the ImageEditor class, which exposes only drawShape() to draw a shape of any type, hiding the methods that draw a specific shape like Rectangle or Square. At least we are using abstraction, encapsulation, hiding and other OOP features (pun intended).

Problem with above structure

What if we are required to add a new shape, say Polygon? Will our ImageEditor be able to draw it? The answer is no.

We need to change the ImageEditor class to support this behaviour, which means we need to modify it. And if we modify it, we need to unit test it again: one change can raise other issues in the class.

So, how can we ensure that ImageEditor is closed for modification?

Good approach

We will change it according to OCP, and also SRP. Let's do it in the code snippet below.

package javawithgaurav.openclose;

/**
 * 
 * @author Gaurav Rai Mazra
 * <a href="www.javawithgaurav.blogspot.in">Click here to view more</a>
 */
//Abstract base class for the different types of Shape
abstract class Shape {
  private String name;

  Shape (String name) {
      this.name = name;
  }
  
  public String getName() {
      return name;
  }
  
  abstract void draw();
}

//Rectangle
class Rectangle extends Shape {
  Rectangle () {
      super("Rectangle");
  }
  
  @Override
  public void draw() {
      //logic to draw RECTANGLE
  }
}

//Square
class Square extends Shape {
  Square () {
      super("Square");
  }
  
  @Override
  public void draw() {
      //logic to draw SQUARE
  }
}
package javawithgaurav.openclose;

/**
 * 
 * @author Gaurav Rai Mazra
 * <a href="www.javawithgaurav.blogspot.in">Click here to view more</a>
 */

public class ImageEditor
{
    public void drawShape (Shape s) {
        s.draw();
    }
    // other methods related to editing image goes here
}

We declared the responsibility to draw in Shape, but made it abstract so that every concrete class must define how to draw itself.

In the ImageEditor class, drawShape() delegates the call to draw to the Shape itself. With this change we can draw any kind of shape, and if in future a new shape has to be drawn, we don't have to modify ImageEditor.
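
To illustrate, adding a hypothetical Polygon shape now requires no change to ImageEditor. The refactored Shape and ImageEditor are restated minimally so the snippet is self-contained:

```java
// Minimal restatement of the refactored design
abstract class Shape {
    private String name;

    Shape(String name) { this.name = name; }

    public String getName() { return name; }

    abstract void draw();
}

// New shape added purely by extension; ImageEditor is untouched
class Polygon extends Shape {
    Polygon() { super("Polygon"); }

    @Override
    void draw() { System.out.println("drawing " + getName()); }
}

class ImageEditor {
    public void drawShape(Shape s) {
        s.draw(); // delegation: closed for modification, open for extension
    }
}

public class OcpExtensionDemo {
    public static void main(String[] args) {
        new ImageEditor().drawShape(new Polygon()); // prints: drawing Polygon
    }
}
```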

Single Responsibility Principle

  • The Single Responsibility Principle was introduced by Tom DeMarco in his 1979 book "Structured Analysis and Systems Specification". Robert Martin reinterpreted the concept and defined a responsibility as a reason to change.
  • A class should have only one reason to change.
  • In this context, a responsibility is considered a reason to change. This principle states that if we have two reasons to change a class, we have to split the functionality into two classes. Each class will handle only one responsibility, and in future, if we need to make a change, we make it in the class which handles it. When we make a change in a class that has multiple responsibilities, the change might affect the other functionality of that class.

Let's see this by example.

package com.gauravbytes.srp.parser;

import java.io.File;

class FileParser {
 
 public void parseFile(File file) {
  // parse file logic for xml, csv, json data in files
  if (isValidFile(file, FileType.CSV, FileType.XML, FileType.JSON)) {
   //parsing logic starts
  }
 }
 
 private boolean isValidFile(File file, FileType... types) {
  if(file == null || types == null || types.length == 0)
   return false;
  
  String fileName = file.getName().toLowerCase();
  for (FileType type : types) {
   if (fileName.endsWith(type.getExtension()))
    return true;
  }
  
  return false;
 }
 
}
package com.gauravbytes.srp.parser;

enum FileType { 
 
 CSV(".csv"), XML(".xml"), JSON(".json"), PDF(".pdf"), RICHTEXT(".rtf"), TXT(".txt");
 
 private String extension;
 
 private FileType (String extension) {
  this.extension = extension;
 }
 
 public String getExtension() {
  return this.extension;
 }
}

The FileParser class parses csv, xml and json files and generates data from them. It also has a method to validate the file first. This FileParser is doing more than one thing:

- It is validating the files.

- It is parsing csv, json and xml files.

In future, if we want to parse text files, rtf files and so on, then we need to change this class. Also, if we want to change how we validate files, we need to change this class too. This leads to problems like unit testing the class again, because one change can affect the existing functionality, and so on.

In the above example, suppose I want to change the strategy used to parse xml files. Let's say it previously used a DOM parser, but I want to use a SAX parser due to a change in requirements or because of larger file sizes. Then I need to change this class again. The same can happen with json or csv parsing.

We can avoid multiple reasons for change in the FileParser class by introducing a separate class for validating files, modifying the structure of the class, and introducing new classes which each have the specific responsibility of parsing a specific type of file.

The new and improved structure for the class will look something like this.

1. FileParser.java

In this class, we removed the file-validation method that was present earlier and placed it in FileValidationUtils, taking the extra validation responsibility away from FileParser.

We also removed the code which actually parses the file depending on its type (xml, csv or json). Now, if we need to change our xml reading/parsing logic, that can be done without changing the FileParser class. The solution is to divide the responsibility of parsing a specific type among respective classes; those classes may then have other specific methods to parse those files.

In our solution, we created the interface Parser and then created specific classes to handle XML, CSV and JSON parsing: CSVFileParser, JsonFileParser and XmlFileParser.

We used a composition relation in FileParser and provided a setter so that the parsing strategy can be changed at any time, while the functionality of FileParser itself never changes.

package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public class FileParser {
 private Parser parser;
 
 public FileParser(Parser parser) {
  this.parser = parser;
 }
 
 public void setParser(Parser parser) {
  this.parser = parser;
 }
 
 public void parseFile(File file) {
  if (FileValidationUtils.isValidFile(file, parser.getFileType())) {
   parser.parse(file);
  }
 }
}
package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public class FileValidationUtils {
 
 private FileValidationUtils() {
  
 }
 
 public static boolean isValidFile (File file, FileType... types) {
  if (file == null || types == null || types.length == 0)
   return false;
  
  String fileName = file.getName().toLowerCase();
  for (FileType type : types) {
   if (fileName.endsWith(type.getExtension()))
    return true;
  }
  
  return false;
 }
 
 public static boolean isValidFile (File file, FileType type) {
  if (file == null || type == null)
   return false;
  
  String fileName = file.getName().toLowerCase();
  if (fileName.endsWith(type.getExtension()))
   return true;
  
  return false;
 }
}
package com.gauravbytes.good.srp.parser;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public enum FileType { 
 
 CSV(".csv"), XML(".xml"), JSON(".json"), PDF(".pdf"), RICHTEXT(".rtf"), TXT(".txt");
 
 private String extension;
 
 private FileType (String extension) {
  this.extension = extension;
 }
 
 public String getExtension() {
  return this.extension;
 }
}
package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public interface Parser {
 //method to parse file
 public void parse(File file);
 
 // return filetype to validate
 public FileType getFileType();
}
package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public class CSVFileParser implements Parser {

 @Override
 public void parse(File file) {
  //logic to parse CSV file goes here
 }

 @Override
 public FileType getFileType() {
  return FileType.CSV;
 }

}
package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public class XmlFileParser implements Parser {

 @Override
 public void parse(File file) {
  // logic to parse xml file
 }

 @Override
 public FileType getFileType() {
  return FileType.XML;
 }

}
package com.gauravbytes.good.srp.parser;

import java.io.File;

/**
 * @author Gaurav Rai Mazra
 * 
 */
public class JsonFileParser implements Parser {

 @Override
 public void parse(File file) {
  // Logic to parse json file
 }

 @Override
 public FileType getFileType() {
  return FileType.JSON;
 }

}
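
A short usage sketch of the refactored design. The parser bodies are trimmed to prints, and a plain extension check stands in for FileValidationUtils and FileType, so the snippet is self-contained:

```java
import java.io.File;

// Trimmed restatement of the refactored design for the usage demo
interface Parser {
    void parse(File file);
    String extension(); // stands in for FileType in the full example
}

class XmlFileParser implements Parser {
    public void parse(File file) { System.out.println("parsing xml: " + file.getName()); }
    public String extension() { return ".xml"; }
}

class JsonFileParser implements Parser {
    public void parse(File file) { System.out.println("parsing json: " + file.getName()); }
    public String extension() { return ".json"; }
}

class FileParser {
    private Parser parser;

    FileParser(Parser parser) { this.parser = parser; }

    void setParser(Parser parser) { this.parser = parser; }

    void parseFile(File file) {
        // validation responsibility: just an extension check in this sketch
        if (file.getName().toLowerCase().endsWith(parser.extension())) {
            parser.parse(file);
        }
    }
}

public class SrpUsageDemo {
    public static void main(String[] args) {
        FileParser fileParser = new FileParser(new XmlFileParser());
        fileParser.parseFile(new File("employees.xml"));  // delegated to XmlFileParser
        fileParser.setParser(new JsonFileParser());       // swap strategy at runtime
        fileParser.parseFile(new File("employees.json")); // delegated to JsonFileParser
    }
}
```

Each parser class now has a single reason to change, and FileParser changes for none of them.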

Benefits of SRP

Organized code: By following SRP, we organize the code into well-defined classes. Every class has its own single purpose and a single reason to change.

Less fragile: When a class has more than one reason to change, it is more fragile. One change may lead to unexpected behaviour or problems in other places, which we may not discover until a later stage of the project.

Low coupling: More functionality in a single class means higher coupling (and lower cohesion). In our example, coupling is lowered through the composition relation.

Easier refactoring: If we want to change behaviour, we can do so by setting another parser type, as in our example.

Maintainability, testability and easier debugging are the other benefits of following SRP in class design.

This is how we can gain long-term benefits from SRP. You can find the example code on GitHub.

What are Software design principles?

  • Software design principles represent a set of guidelines that help us avoid a bad design.

  • They are associated with Robert Martin, who gathered them in "Agile Software Development: Principles, Patterns, and Practices".
  • According to Robert Martin, there are 3 important characteristics of a bad design that should be avoided:
    • Rigidity: It is hard to change because every change affects too many other parts of the system.
    • Fragility: When you make a change, unexpected parts of the system break.
    • Immobility: It is hard to reuse in another application because it can't be disentangled from the current application.

What does SOLID stand for?

  • S: Single Responsibility Principle
  • O: Open-Closed Principle
  • L: Liskov Substitution Principle
  • I: Interface Segregation Principle
  • D: Dependency Inversion Principle

This article is about indexing and searching documents with Apache Lucene version 4.7. Before jumping to the example and explanation, let's see what Apache Lucene is.

Introduction to Apache Lucene

Lucene is a high-performance, scalable information retrieval (IR) library. IR refers to the process of searching for documents, information within documents, or metadata about documents. Lucene lets you add searching capabilities to your application. [ref. Apache Lucene in Action Second edition covers Apache Lucene v3.0]

The main reason for the popularity of Lucene is its simplicity. You don't require in-depth knowledge of the indexing and searching process to get started with Lucene. You can start by learning the handful of classes that actually do the indexing and searching in Lucene. The latest released version is 4.7, while books are only available up to v3.0.

Important note

Lucene is not a ready-to-use application like a file-search program, web crawler or search engine. It is a software toolkit or library that you can use to build your own search application or libraries. There are many frameworks built on top of the Lucene Core API for searching.

Libraries and Environment used
  • Eclipse Kepler
  • JDK 1.7
  • lucene-core-4.7.2.jar
  • lucene-queryparser-4.7.2.jar
  • lucene-demo-4.7.2.jar
  • lucene-analyzers-common-4.7.2.jar

Indexing with Lucene

Let's jump to the indexing process in Lucene with an example, and then we will explain the classes used and their purpose.

1. IndexerTest is the class used to run the demo.

package lucene.indexer;

import java.io.File;
import java.io.FileFilter;

/**
 * @author Gaurav Rai Mazra
 */
public class IndexerTest {
 
 public static void main(String[] args) throws Exception {
  String indexDir = "index";
  String dataDir = "dir";
  
  long start = System.currentTimeMillis();
  final IndexingHelper indexHelper = new IndexingHelper(indexDir);
  int numIndexed;
 
  try {
   numIndexed = indexHelper.index(dataDir, new TextFilesFilter());
  }
  finally {
   indexHelper.close();
  }
  
  long end = System.currentTimeMillis();
  System.out.println("Indexing " + numIndexed + " files took " + (end - start) + " milliseconds");
 }
}

// class filters only .txt files for indexing
class TextFilesFilter implements FileFilter {
 @Override
 public boolean accept(File pathname) {
  return pathname.getName().toLowerCase().endsWith(".txt");
 }
}

2. The IndexingHelper class shows how the indexing is done.

package lucene.indexer;

import java.io.File;
import java.io.FileFilter;
import java.io.FileReader;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

/**
 * @author Gaurav Rai Mazra
 */
public class IndexingHelper {
 //class which actually creates and maintain the indexes in the file
 private IndexWriter indexWriter;
 
 public IndexingHelper(String indexDir) throws Exception {
  //To represent actual directory
  Directory directory = FSDirectory.open(new File(indexDir));
  //Holds configuration required in creation of IndexWriter object
  IndexWriterConfig indexWriterConfig = new IndexWriterConfig(Version.LUCENE_47, new StandardAnalyzer(Version.LUCENE_47));
  indexWriter = new IndexWriter(directory, indexWriterConfig);
 }
 
 public void close() throws IOException {
  indexWriter.close();
 }
 
 // exposed method to index files 
 public int index(String dataDir, FileFilter fileFilter) throws Exception {
  File[] files = new File(dataDir).listFiles();
  for (File f : files)
  {
   if (!f.isDirectory() && !f.isHidden() && f.exists() && f.canRead() && (fileFilter == null || fileFilter.accept(f)))
    indexFile(f);
  } 
  
  return indexWriter.numDocs();
 }
 
 private void indexFile(File f) throws Exception {
  System.out.println("  " + f.getCanonicalPath());
  Document doc = getDocument(f);
  indexWriter.addDocument(doc);
 }

 private Document getDocument(File f) throws Exception {
   // class used by Lucene's IndexWriter and IndexReader to store and retrieve indexed data
  Document document = new Document();
  document.add(new TextField("contents", new FileReader(f)));
  document.add(new StringField("filename", f.getName(), Field.Store.YES));
  document.add(new StringField("fullpath", f.getCanonicalPath(), Field.Store.YES));
  return document;
 }
}

In IndexingHelper class, we have used following classes of Lucene library for indexing .txt files.

  • IndexWriter class.
  • IndexWriterConfig class.
  • Directory class.
  • FSDirectory class.
  • Document class.

Explanation

1. IndexWriter: It is the central component of the indexing process. This class creates a new index or opens an existing one, and adds, removes and updates documents in the index. It has one public constructor, which takes a Directory object and an IndexWriterConfig object as parameters.

This class exposes methods to add Document objects to the index.

It also exposes methods for deleting documents from the index, as well as informative methods like numDocs(), which returns the number of documents in the index, including documents not yet flushed to disk but not counting deletions.

2. IndexWriterConfig: It holds the configuration required to create an IndexWriter object. It has one public constructor which takes two parameters: a Version enum value, i.e. the Lucene version, for compatibility; and an Analyzer object. Analyzer itself is an abstract class, but it has many implementations like WhitespaceAnalyzer, StandardAnalyzer etc., which help in analyzing tokens; it is used in the analysis process.

3. Directory: The Directory class represents the location of a Lucene index. It is an abstract class with many different concrete implementations, and no single implementation is best suited for every computer architecture. Hence, use the FSDirectory.open() factory method, which picks the best concrete Directory implementation available for your platform.

4. Analyzer: Before any text is indexed, it is passed to an Analyzer, which extracts the tokens that should be indexed out of that text; the rest is eliminated.

5. Document: The Document class represents a collection of Fields. It is a chunk of data which we want to index and make retrievable at a later time.

6. Field: Each document has one or more fields, and each field has a name and a corresponding value. Most of the generic Field constructors are deprecated; it is preferable to use the existing subclasses of Field such as IntField, LongField, FloatField, DoubleField, BinaryDocValuesField, NumericDocValuesField, SortedDocValuesField, StringField, TextField and StoredField.

Searching with Lucene

Let's jump to searching with Lucene, and then we will explain the classes used.

package lucene.searcher;

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

/**
 * @author Gaurav Rai Mazra
 */
public class SearcherTest {

 public static void main(String[] args) throws IOException, ParseException {
  String indexDir = "index";
  String q = "direwolf";
  
  search(indexDir, q);
 }
 
 //Search in lucene index
 private static void search(String indexDir, String q) throws IOException, ParseException {
  //get a directory to search from
  Directory directory = FSDirectory.open(new File(indexDir));
  // get reader to read directory
  IndexReader indexReader = DirectoryReader.open(directory);
  //create indexSearcher
  IndexSearcher is = new IndexSearcher(indexReader);
  // Create analyzer to analyse documents
  Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47); 
  //create query parser
  QueryParser queryParser = new QueryParser(Version.LUCENE_47, "contents", analyzer);
  //get query
  Query query = queryParser.parse(q);
  
  //Query query1 = new TermQuery(new Term("contents", q));

  long start = System.currentTimeMillis();
  //hit query
  TopDocs hits = is.search(query, 10);
  long end = System.currentTimeMillis();
  
  System.err.println("Found " + hits.totalHits + " document(s) in " + (end-start) + " milliseconds");
  for (ScoreDoc scoreDoc : hits.scoreDocs)
  {
   Document document = is.doc(scoreDoc.doc);
   System.out.println(document.get("fullpath"));
  }
 }
}

Explanation

1. IndexReader: This is an abstract class providing an interface for accessing an index. To get a concrete implementation, the helper class DirectoryReader is used: its open() method, passed a Directory reference, returns an IndexReader object.

2. IndexSearcher: IndexSearcher is used to search data which has been indexed by IndexWriter. You can think of IndexSearcher as a class which opens the index in read-only mode. It requires an IndexReader instance for its construction, and it exposes methods for searching and for retrieving documents.

3. QueryParser: This class is used to parse a query string and generate a Query object from it.

4. Query: It is an abstract class representing the query to be used in searching. There are many concrete subclasses, like TermQuery, BooleanQuery, PhraseQuery etc. It contains several utility methods, one of which is setBoost(float).

5. TopDocs: It represents the hits returned by the search method of IndexSearcher. It has one public constructor which takes three parameters: int totalHits, ScoreDoc[] scoreDocs and float maxScore. Each ScoreDoc contains the score and document id of a matching document.