We have learnt about What is Apache Ignite?, Setting up Apache Ignite and a few quick examples in the last few posts. In this post, we will deep dive into the core Ignite classes and discuss the following internals.

  • Core classes
  • Lifecycle events
  • Client and Server mode
  • Thread pools configurations
  • Asynchronous support in Ignite
  • Resource injection

Core classes

Whenever you interact with Apache Ignite in an application, you will always encounter the Ignite interface and the Ignition class. Ignition is the main entry point to create an Ignite node. This class provides various methods to start a grid node in the network topology.

// Starting with default configuration
Ignite igniteWithDefaultConfig = Ignition.start();

// Ignite with Spring configuration xml file
Ignite igniteWithSpringCfgXMLFile = Ignition.start("/path_to_spring_configuration_xml.xml");

// Ignite with Java-based configuration
IgniteConfiguration icfg = ...;
Ignite igniteWithJavaConfiguration = Ignition.start(icfg);

There are other useful methods in the Ignition class as well, which we will discuss below. The Ignite interface provides control over a node. It has various methods to interact with the data grid, service grid, compute grid, scheduler and many more.

Lifecycle events

Apache Ignite provides four lifecycle events: BEFORE_NODE_START, AFTER_NODE_START, BEFORE_NODE_STOP and AFTER_NODE_STOP. It provides hooks to tap into these events. You need to implement LifecycleBean and set the implementation in the Ignite configuration, as shown below.

class IgniteLifecycleEventListener implements LifecycleBean {

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
        String message;
        switch (evt) {
            case BEFORE_NODE_START:
                message = "before_node_start event is called!";
                break;
            case AFTER_NODE_START:
                message = "after_node_start event is called!";
                break;
            case BEFORE_NODE_STOP:
                message = "before_node_stop event is called!";
                break;
            case AFTER_NODE_STOP:
                message = "after_node_stop event is called!";
                break;
            default:
                message = "Unknown event";
                break;
        }
        System.out.println(message);
    }
}
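
To activate this listener, register it in the node configuration before starting Ignite. A minimal sketch:

IgniteConfiguration cfg = new IgniteConfiguration();
// Register one or more lifecycle beans; Ignite invokes them around node start and stop.
cfg.setLifecycleBeans(new IgniteLifecycleEventListener());

try (Ignite ignite = Ignition.start(cfg)) {
    // BEFORE_NODE_START and AFTER_NODE_START are fired during start,
    // BEFORE_NODE_STOP and AFTER_NODE_STOP when the node shuts down.
}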

Client and Server mode

An Apache Ignite node can run in client or server mode. Server nodes participate in computing, caching, the data grid, the service grid, etc., while client nodes are the way to interact with server nodes for near-real-time caching, transactions, computing and service grid functionality. You need to explicitly define the client or server mode.

IgniteConfiguration.setClientMode(...);

Ignition.setClientMode(...);
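
For example, to start a node in client mode (a small sketch; the two approaches are alternatives):

// Option 1: set client mode on the node configuration
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);

// Option 2: set a default used by nodes started afterwards via Ignition.start()
// Ignition.setClientMode(true);

try (Ignite clientNode = Ignition.start(cfg)) {
    // Prints true for client nodes, false for server nodes
    System.out.println("Client mode: " + clientNode.cluster().localNode().isClient());
}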

Thread pool configurations

System thread pool

It processes all cache-related operations except SQL and some other types of queries, and also handles the cancellation of compute tasks.

IgniteConfiguration.setSystemThreadPoolSize(...);
// By default, the pool size equals max(8, total number of cores)

Public thread pool

All computations are received and processed by this thread pool.

IgniteConfiguration.setPublicThreadPoolSize(...);
// By default, the pool size equals max(8, total number of cores)

Queries pool

Handles SQL and scan queries executed across the cluster.

IgniteConfiguration.setQueryThreadPoolSize(...);
// By default, the pool size equals max(8, total number of cores)

Services Pool

Handles service-grid calls.

IgniteConfiguration.setServiceThreadPoolSize(...);
// By default, the pool size equals max(8, total number of cores)

Striped Pool

Accelerates basic cache operations and transactions by spreading execution across multiple stripes that do not contend with each other.

IgniteConfiguration.setStripedPoolSize(...);
// By default, the pool size equals max(8, total number of cores)

Data stream pool

Used for data streaming operations (IgniteDataStreamer).

IgniteConfiguration.setDataStreamerThreadPoolSize(...);
// By default, the pool size equals max(8, total number of cores)
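
Putting these together, tuning several pools on a single configuration could look roughly like the sketch below (the sizes are purely illustrative; the defaults are usually a reasonable starting point):

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setSystemThreadPoolSize(16);
cfg.setPublicThreadPoolSize(16);
cfg.setQueryThreadPoolSize(8);
cfg.setServiceThreadPoolSize(8);
cfg.setStripedPoolSize(8);
cfg.setDataStreamerThreadPoolSize(8);

Ignite ignite = Ignition.start(cfg);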

Custom thread pool

You can also define your own custom thread pools, which are used by the compute grid. For example, if you want to run another compute task synchronously from inside a compute task and also want to avoid deadlocks, you can execute the nested task in a custom thread pool, as shown below.

IgniteConfiguration icfg = ...;
icfg.setExecutorConfiguration(new ExecutorConfiguration("myCustomThreadPool").setSize(16));
class InternalTask implements IgniteRunnable {
    private static final long serialVersionUID = 5169676352276118235L;
    
    @Override
    public void run() {
        System.out.println("Internal task executed!");
    }
}

class OuterTask implements IgniteRunnable {
    private static final long serialVersionUID = 602712410415356484L;

    @IgniteInstanceResource
    private Ignite ignite;
  
    @Override
    public void run() {
        System.out.println("Ignite Outer task!");
        ignite.compute().withExecutor("myCustomThreadPool").run(new InternalTask());
    }
  
}

// Ignite main example class
IgniteConfiguration icfg = defaultIgniteCfg("custom-thread-pool-grid");
icfg.setExecutorConfiguration(new ExecutorConfiguration("myCustomThreadPool").setSize(16));
  
try (Ignite ignite = Ignition.start(icfg)) {
    ignite.compute().run(new OuterTask());
}

Asynchronous support in Ignite

The Ignite API comes with both synchronous and asynchronous support. Asynchronous calls return an IgniteFuture or one of its implementations. You can call the blocking get method to get the value, or add a listener (IgniteInClosure) which will be executed as soon as the IgniteFuture has a result.

IgniteCompute compute = ignite.compute();
IgniteFuture<String> fut = compute.callAsync(() -> "Hello from Callable");
// blocking call
String result = fut.get();
// add a listener to the future, which is executed as soon as the future has a result
fut.listen(f -> System.out.println(f.get()));

If the IgniteFuture already has the result of the asynchronous operation by the time the IgniteInClosure is passed to the listen or chain method, the closure is executed synchronously in the calling thread. Otherwise, the closure is executed when the asynchronous operation finishes. The closure is called from the system thread pool for asynchronous cache operations, or from the public thread pool for compute operations, so it is recommended to avoid calling cache or compute operations from the closure to prevent deadlocks due to thread starvation.
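
Apart from listen, you can also chain futures to transform the result; a small sketch reusing the compute instance from above:

// chain() returns a new future whose value is derived from the original one
IgniteFuture<String> greetingFut = compute.callAsync(() -> "Hello from Callable");
IgniteFuture<Integer> lengthFut = greetingFut.chain(f -> f.get().length());
System.out.println("Greeting length: " + lengthFut.get());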

Resource Injection

Ignite supports dependency injection of pre-defined resources which can be used in tasks, jobs, closures or SPIs. It supports both field-based and method-based injection (a method-based injection sketch follows the resource list below).

IgniteRunnable task = new IgniteRunnable() {
    private static final long serialVersionUID = 787726700536869271L;

    @IgniteInstanceResource
    private transient Ignite ignite;
    
    @Override
    public void run() {
        System.out.println("Hello Gaurav Bytes from: " + ignite.name());
     
    }
};

In the above example code, we have used the @IgniteInstanceResource annotation to inject the current Ignite instance into the IgniteRunnable object. There are other pre-defined resources that you can inject into jobs, tasks, closures and SPIs.

  • @IgniteInstanceResource: injects the current instance of the Ignite API
  • @CacheNameResource: injects the grid cache name provided by CacheConfiguration.getName()
  • @CacheStoreSessionResource: injects the CacheStoreSession instance
  • @LoadBalancerResource: injects the ComputeLoadBalancer instance for load balancing
  • @SpringApplicationContextResource: injects Spring's ApplicationContext

Apart from these, there are a few other resources, like TaskContinuousMapperResource, TaskSessionResource, SpringResource, ServiceResource and JobContextResource.
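
As mentioned above, resources can also be injected through setter methods; a minimal sketch:

IgniteRunnable methodInjectedTask = new IgniteRunnable() {
    private transient Ignite ignite;

    // Ignite calls this setter and injects the current instance before run() executes
    @IgniteInstanceResource
    public void setIgnite(Ignite ignite) {
        this.ignite = ignite;
    }

    @Override
    public void run() {
        System.out.println("Injected via setter, node: " + ignite.name());
    }
};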

In this article, we will show a few examples of using Apache Ignite as a compute grid, data grid and service grid, and of executing SQL queries on Apache Ignite. These are basic examples which use the basic API available. There will be a few posts in the near future explaining the available API in the compute grid, service grid and data grid.

Ignite SQL Example

Apache Ignite comes with JDBC thin driver support to execute SQL queries on the in-memory data grid. In the example below, we will create tables, insert data into the tables and read data from them. I assume you are running Apache Ignite in your local environment; otherwise, please read the setup guide for running an Apache Ignite server.
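
Depending on your JDBC setup, you may need to load the thin driver class explicitly before opening connections; a small sketch:

// Register the Ignite JDBC thin driver (optional on JDBC 4+ when the driver jar is on the classpath)
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

// The thin driver connects to a node's JDBC port (10800 by default)
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
    // execute statements here
}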

Creating Tables
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement()) {
    //line 1
    stmt.executeUpdate("CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\"");

    //line 2
    stmt.executeUpdate("CREATE TABLE Person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id)) WITH \"backups=1, affinityKey=city_id\"");

    stmt.executeUpdate("CREATE INDEX idx_city_name ON City (name)");

    stmt.executeUpdate("CREATE INDEX idx_person_name ON Person (name)");
}

In line 1, we are creating a City table with CacheMode as REPLICATED, which means it will be replicated across the whole cluster. There are three possible values for CacheMode: LOCAL, REPLICATED and PARTITIONED. We will discuss these in detail later.

In line 2, we are creating a Person table. You might have noticed affinityKey being used. The purpose of affinityKey is to collocate related data on the same node.

Inserting data in tables
try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO City (id, name) VALUES (?, ?)")) {

    stmt.setLong(1, 1L);
    stmt.setString(2, "Forest Hill");
    stmt.executeUpdate();

    stmt.setLong(1, 2L);
    stmt.setString(2, "Denver");
    stmt.executeUpdate();

    stmt.setLong(1, 3L);
    stmt.setString(2, "St. Petersburg");
    stmt.executeUpdate();
}

try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person (id, name, city_id) VALUES (?, ?, ?)")) {

    stmt.setLong(1, 1L);
    stmt.setString(2, "John Doe");
    stmt.setLong(3, 3L);
    stmt.executeUpdate();

    stmt.setLong(1, 2L);
    stmt.setString(2, "Jane Roe");
    stmt.setLong(3, 2L);
    stmt.executeUpdate();

    stmt.setLong(1, 3L);
    stmt.setString(2, "Mary Major");
    stmt.setLong(3, 1L);
    stmt.executeUpdate();

    stmt.setLong(1, 4L);
    stmt.setString(2, "Richard Miles");
    stmt.setLong(3, 2L);
    stmt.executeUpdate();
}

Querying data from tables
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement()) {
    try (ResultSet rs = stmt.executeQuery("SELECT p.name, c.name FROM Person p, City c WHERE p.city_id = c.id")) {
        while (rs.next()) {
            System.out.println(rs.getString(1) + ", " + rs.getString(2));
        }
    }
}

You can find the full example code here.

Ignite Compute Grid Example

In this example, we will use Ignite's compute grid to fetch data.

try (Ignite ignite = Ignition.start(defaultIgniteCfg("cache-reading-compute-engine"))) {
    long cityId = 1;

    ignite.compute().affinityCall("SQL_PUBLIC_CITY", cityId, new IgniteCallable<List<String>>() {
        private static final long serialVersionUID = -131151815825938052L;

        @IgniteInstanceResource
        private Ignite currentIgniteInstance;

        @Override
        public List<String> call() throws Exception {
            List<String> names = new ArrayList<>();
            IgniteCache<BinaryObject, BinaryObject> personCache = currentIgniteInstance.cache("SQL_PUBLIC_PERSON").withKeepBinary();
       
            IgniteBiPredicate<BinaryObject, BinaryObject> filter = (BinaryObject key, BinaryObject value) -> {
                return key.hasField("CITY_ID") && key.<Long>field("CITY_ID") == cityId;
            };

            ScanQuery<BinaryObject, BinaryObject> query = new ScanQuery<>(filter);

            try (QueryCursor<Entry<BinaryObject, BinaryObject>> cursor = personCache.query(query)) {
                Iterator<Entry<BinaryObject, BinaryObject>> itr = cursor.iterator();

                while (itr.hasNext()) {
                    Entry<BinaryObject, BinaryObject> cache = itr.next();
                    names.add(cache.getValue().<String>field("NAME"));
                }

            }
            return names;
        }
    }).forEach(System.out::println);
}

In this example, we are getting the list of persons residing in the same city. We call the compute grid with an affinityCall on the SQL_PUBLIC_CITY cache, using cityId as the affinity key, and pass the IgniteCallable task. In the IgniteCallable task, we have @IgniteInstanceResource, which will be injected by the Ignite server running this task.

Ignite Data Grid Example

This example shows the usage of Ignite as an in-memory data grid.

try (Ignite ignite = Ignition.start(defaultIgniteCfg("ignite-data-grid"))) {
    IgniteCache<Integer, String> personCache = ignite.getOrCreateCache("personCache");
    for (int i = 0; i < 10; i++) {
        personCache.put(i, "Gaurav " + i);
    }
   
    for (int i = 0; i < 10; i++) {
        System.out.println(personCache.get(i));
    }
}
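
In the example above the cache is created with defaults. If you want control over the cache mode, number of backups and so on, you can pass a CacheConfiguration instead; a sketch (it assumes an already started ignite instance):

CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("personCache");
cacheCfg.setCacheMode(CacheMode.PARTITIONED); // LOCAL, REPLICATED or PARTITIONED
cacheCfg.setBackups(1);                       // one backup copy per partition

IgniteCache<Integer, String> personCache = ignite.getOrCreateCache(cacheCfg);
personCache.put(1, "Gaurav 1");
System.out.println(personCache.get(1));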

Ignite Service grid example

interface TimeService extends Service {
    public LocalDateTime currentDateTime();
}
 
static class TimeServiceImpl implements TimeService {
    private static final long serialVersionUID = 3977097368864906176L;

    @Override
    public void cancel(ServiceContext ctx) {
        System.out.println("Service is cancelled!");
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        System.out.println("Service is initialized!");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        System.out.println("Service is deployed!");
    }

    @Override
    public LocalDateTime currentDateTime() {
        return LocalDateTime.now();
    }
}

try (Ignite ignite = Ignition.start(defaultIgniteCfg("ignite-service-grid"))) {
    ignite.services().deployClusterSingleton("timeServiceImpl", new TimeServiceImpl());
   
    TimeService timeService = ignite.services().service("timeServiceImpl");
   
    System.out.println("Current time is: " + timeService.currentDateTime());
}

If you want to deploy a service on the grid, it should implement the Service interface. Also, service grid deployments are not zero-deployment: you need to put the compiled jars on the Ignite server instances and then restart those instances as well.
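
Note that ignite.services().service(...) returns a locally deployed instance and can return null on nodes where the singleton is not deployed. In that case you can obtain a remote proxy instead; a small sketch:

// false = non-sticky proxy, calls may be routed to any node hosting the service
TimeService timeServiceProxy = ignite.services().serviceProxy("timeServiceImpl", TimeService.class, false);
System.out.println("Current time is: " + timeServiceProxy.currentDateTime());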

In this post, we will discuss setting up Apache Ignite.

Installation

You can download Apache Ignite from its official site. It is available as a binary, as sources, as Docker or cloud images, and through Maven. There is also third-party support from GridGain.

Steps for binary installation

The binary installation is pretty straightforward. Download the binary from the website. You can optionally set the installation path as IGNITE_HOME. To run Ignite as a server, run the command below in a terminal.

bin/ignite.bat // If it is Windows
bin/ignite.sh  // If it is Linux

The above command runs Ignite with the default configuration file, $IGNITE_HOME/config/default-config.xml. You can pass your own configuration file with the following command:

bin/ignite.sh config/ignite-config.xml

Steps for building from sources

If you would rather build everything from sources, then follow the steps listed below.

# Unpack the source package
$ unzip -q apache-ignite-{version}-src.zip
$ cd apache-ignite-{version}-src
 
# Build In-Memory Data Fabric release (without LGPL dependencies)
$ mvn clean package -DskipTests
 
# Build In-Memory Data Fabric release (with LGPL dependencies)
$ mvn clean package -DskipTests -Prelease,lgpl
 
# Build In-Memory Hadoop Accelerator release
# (optionally specify version of hadoop to use)
$ mvn clean package -DskipTests -Dignite.edition=hadoop [-Dhadoop.version=X.X.X]

Steps for maven

For Maven, you just need to add the dependencies to your project. Ignite has integrations with many other libraries, and almost all of them are optional. The only mandatory one is ignite-core. You can add ignite-spring for configuring Ignite with Spring XML-like configuration and ignite-indexing for SQL querying.

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>

You can download the Docker image or cloud AMI from this link.

This is an introductory series on Apache Ignite. We will discuss Apache Ignite, its features, and its usage as an in-memory data grid, compute grid, distributed cache, near-real-time cache and persistent distributed database.

What is Ignite?

  • It is an in-memory compute platform.
  • It is an in-memory data grid.
  • It is durable, strongly consistent and highly available.
  • It provides the option to run SQL-like queries on the cache (with a JDBC API to support this).

Durable memory

Apache Ignite is a memory-centric platform based on a durable memory architecture. It allows you to store and process data in memory (RAM) and on disk (if Ignite native persistence is enabled). When native persistence is enabled, the disk is treated as a superset of the data, which is capable of surviving crashes and restarts.

In-memory features

RAM is always treated as the first memory tier; all the processing happens there. It has the following characteristics.

  • Off-heap based: all data and indexes are stored outside the Java heap, which helps in processing petabytes of data.
  • Since all data and indexes are off-heap, noticeable GC pauses are removed; the application code remains the only possible source of stop-the-world pauses.
  • It has predictable memory usage. You can configure memory usage with MemoryConfiguration (see the sketch after this list).
  • It uses memory as efficiently as possible and runs defragmentation routines in the background.
  • Data and indexes on disk and in memory are stored in the same page format, which improves performance and avoids unnecessary data format conversions.
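
As a rough illustration of memory tuning (this assumes the Ignite 2.1/2.2 MemoryConfiguration and MemoryPolicyConfiguration API; newer versions use DataStorageConfiguration instead, and the sizes below are illustrative):

// Define a memory policy capped at 512 MB
MemoryPolicyConfiguration plcCfg = new MemoryPolicyConfiguration();
plcCfg.setName("default-policy");
plcCfg.setInitialSize(256L * 1024 * 1024);
plcCfg.setMaxSize(512L * 1024 * 1024);

MemoryConfiguration memCfg = new MemoryConfiguration();
memCfg.setMemoryPolicies(plcCfg);
memCfg.setDefaultMemoryPolicyName("default-policy");

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setMemoryConfiguration(memCfg);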

Persistence features

Here are a few high-level persistence features.

  • Persistence to disk is optional; you can enable or disable it (see the sketch after this list).
  • It provides data resiliency. If persistence is enabled, the full dataset is stored on physical disk and survives cluster restarts and crashes.
  • SQL queries can be executed on the full dataset.
  • Cluster restarts are instantaneous; in-memory data is cached automatically.
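
A rough sketch of enabling native persistence (this assumes the Ignite 2.1/2.2 PersistentStoreConfiguration API; later versions enable persistence per data region on DataStorageConfiguration):

IgniteConfiguration cfg = new IgniteConfiguration();
// Enable Ignite native persistence so data and indexes are also written to disk
cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

Ignite ignite = Ignition.start(cfg);
// With persistence enabled the cluster starts inactive and must be activated explicitly
ignite.active(true);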

In this post, we will use Spring Security to handle form-based authentication. You can also read my previous posts on Basic Authentication and Digest Authentication.

Technologies/ Frameworks used

Spring Boot, Spring Security, Thymeleaf, AngularJS, Bootstrap

Adding dependencies in pom.xml

In this example, we will use Spring Boot, Spring Security, Undertow and Thymeleaf, and will add their starters as shown below.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
      <exclusion>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
  </dependency>
  <dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-springsecurity4</artifactId>
    <version>2.1.2.RELEASE</version>
  </dependency>
</dependencies>

Spring Security Configurations

We will extend the WebSecurityConfigurerAdapter class, which is a convenient base class for creating a WebSecurityConfigurer.

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable()
        .authorizeRequests()
            .antMatchers("/static/**", "/", "/index", "/bower_components/**").permitAll()
            .anyRequest().authenticated()
            .and()
        .formLogin()
            .loginPage("/login")
            .permitAll()
            .and()
        .logout()
            .permitAll();
  }
 
  @Bean
  public UserDetailsService userDetailsService() {
    InMemoryUserDetailsManager manager = new InMemoryUserDetailsManager();
    manager.createUser(User.withUsername("gaurav").password("s3cr3t").roles("USER").build());
    return manager;
  }
 
  @Bean
  SpringSecurityDialect securityDialect() {
    return new SpringSecurityDialect();
  }
}

The @EnableWebSecurity annotation enables Spring Security. We have overridden the configure method and configured the security. In the code above, we have disabled CSRF support (it is enabled by default). We allow requests to /, /index, the /static folder and its sub-folders, and the bower_components folder and its sub-folders without authentication; all other requests must be authenticated. We use /login as our login page for authentication.

In the above code snippet, we are also registering the UserDetailsService. When we enable web security in Spring, it expects a bean of type UserDetailsService, which is used to load UserDetails. For example purposes, I am using the InMemoryUserDetailsManager provided by Spring.

MVC configuration

@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {
  @Override
  public void addViewControllers(ViewControllerRegistry registry) {
    registry.addViewController("/viewUsers").setViewName("viewUsers");
    registry.addViewController("/index").setViewName("index");
    registry.addViewController("/").setViewName("index");
    registry.addViewController("/login").setViewName("login");
  }
}

In the above configuration, we are registering view controllers and setting their view names. This is all the configuration we need to do to enable Spring Security. You can find the full working project, including the HTML files, on Github.

In this post, we will externalize the properties used in the application into a property file and use PropertySourcesPlaceholderConfigurer to resolve the placeholders at application startup time.

Java Configuration for PropertySourcesPlaceholderConfigurer

@Configuration
public class AppConfig {

  @Bean
  public PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer = new PropertySourcesPlaceholderConfigurer();
    propertySourcesPlaceholderConfigurer.setLocations(new ClassPathResource("application-db.properties"));
    //propertySourcesPlaceholderConfigurer.setIgnoreUnresolvablePlaceholders(true);
    //propertySourcesPlaceholderConfigurer.setIgnoreResourceNotFound(true);
    return propertySourcesPlaceholderConfigurer;
  }
}

We created an object of PropertySourcesPlaceholderConfigurer and set the locations to search. In this example, we used ClassPathResource to resolve the properties file from the classpath. You can also use a file-based Resource, which needs the absolute path of the file.
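
For example, a file-based resource could be set like this (the path is illustrative):

// Resolve the properties file from an absolute path instead of the classpath
propertySourcesPlaceholderConfigurer.setLocations(new FileSystemResource("/opt/myapp/config/application-db.properties"));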

DBProperties file

@Configuration
public class DBProperties {
 
  @Value("${db.username}")
  private String userName;
 
  @Value("${db.password}")
  private String password;
 
  @Value("${db.url}")
  private String url;

  //getters for instance fields
}

We used the @Value annotation to resolve the placeholders.

Testing the configuration

public class Main {
  private static final Logger logger = Logger.getLogger(Main.class.getName());
 
  public static void main(String[] args) {
    try (ConfigurableApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class, DBProperties.class);) {
      DBProperties dbProperties = context.getBean(DBProperties.class);
      logger.info("This is dbProperties: " + dbProperties.toString());
    }
  }
}

For testing, we created an AnnotationConfigApplicationContext, got the DBProperties bean from it and logged it using the Logger. This is a simple way to externalize the configuration properties from the framework configuration. You can also get the full example code from Github.

In this post, we will discuss Digest Authentication with Spring Security. You can also read my previous post on Basic Authentication with Spring Security.

What is Digest Authentication?

  • This authentication method applies a hashing algorithm to the password entered by the user (producing a password hash) before sending it to the server. This obviously makes it much safer than the basic authentication method, in which the user's password travels in plain text (or base64 encoded) and can easily be read by whoever intercepts it.
  • There are many such hashing algorithms in Java which can prove really effective for password security, such as MD5, SHA, BCrypt, SCrypt and PBKDF2WithHmacSHA1.
  • Please remember that once this password hash is generated and stored in the database, you cannot convert it back to the original password. Each time a user logs into the application, you have to regenerate the password hash and match it with the hash stored in the database. So, if a user forgets the password, you will have to send a temporary password and ask the user to change it to a new one. Well, it's a common trend nowadays.

Let's start building a simple Spring Boot application with Digest Authentication using Spring Security.

Adding dependencies in pom.xml

We will use spring-boot-starter-security as maven dependency for Spring Security.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Digest related Java Configuration

@Bean
DigestAuthenticationFilter digestFilter(DigestAuthenticationEntryPoint digestAuthenticationEntryPoint, UserCache digestUserCache, UserDetailsService userDetailsService) {
  DigestAuthenticationFilter filter = new DigestAuthenticationFilter();
  filter.setAuthenticationEntryPoint(digestAuthenticationEntryPoint);
  filter.setUserDetailsService(userDetailsService);
  filter.setUserCache(digestUserCache);
  return filter;
}
 
@Bean
UserCache digestUserCache() throws Exception {
  return new SpringCacheBasedUserCache(new ConcurrentMapCache("digestUserCache"));
}
 
@Bean
DigestAuthenticationEntryPoint digestAuthenticationEntry() {
  DigestAuthenticationEntryPoint digestAuthenticationEntry = new DigestAuthenticationEntryPoint();
  digestAuthenticationEntry.setRealmName("GAURAVBYTES.COM");
  digestAuthenticationEntry.setKey("GRM");
  digestAuthenticationEntry.setNonceValiditySeconds(60);
  return digestAuthenticationEntry;
}

You need to register the DigestAuthenticationFilter in your Spring context. DigestAuthenticationFilter requires a DigestAuthenticationEntryPoint and a UserDetailsService to authenticate the user.
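
The filter and entry point also need to be wired into the security filter chain. A sketch of what the configure(HttpSecurity) method might look like, assuming digestFilter and digestAuthenticationEntryPoint refer to the beans defined above (injected as fields or parameters):

@Override
protected void configure(HttpSecurity http) throws Exception {
  http.csrf().disable()
      .authorizeRequests()
          .anyRequest().authenticated()
          .and()
      // add the digest filter and let the digest entry point issue the challenge
      .addFilter(digestFilter)
      .exceptionHandling()
          .authenticationEntryPoint(digestAuthenticationEntryPoint);
}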

The purpose of the DigestAuthenticationEntryPoint is to send the valid nonce back to the user if authentication fails or to enforce the authentication.

The purpose of the UserDetailsService is to provide UserDetails, like the password and the list of roles for the user. UserDetailsService is an interface. I have implemented it with DummyUserDetailsService, which loads the details for any passed username, but you can restrict it to a few users or make it database-backed. One thing to remember is that the password needs to be in plain text here. You can also use InMemoryUserDetailsManager to store the handful of users, configured either through Java configuration or with XML-based configuration, that can access your application.
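
For reference, a minimal sketch of what such a permissive UserDetailsService might look like (the class name and hard-coded plain-text password are illustrative only):

class DummyUserDetailsService implements UserDetailsService {
    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        // Accept any username and return fixed plain-text credentials with a single role
        return new User(username, "pwd", AuthorityUtils.createAuthorityList("ROLE_USER"));
    }
}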

In the example, I have also used caching for UserDetails. I have used SpringCacheBasedUserCache, and the underlying cache is a ConcurrentMapCache. You can use any other caching solution.

Running the example

You can download the example code from Github. I will be using Postman to run the example. Here are the steps you need to follow.

1. Open Postman and enter the URL (localhost:8082).

2. Click on the Authorization tab below the URL and select Digest Auth from the Type dropdown.

3. Enter the username (gaurav), realm (GAURAVBYTES.COM), password (pwd) and algorithm (MD5), and leave the nonce empty. Click the Send button.

4. You will get 401 Unauthorized as the response.

5. If you look at the response headers, you will see the "WWW-Authenticate" header. Copy the value of the nonce field and enter it in the nonce text field.

6. Click the Send button. Voila!!! You get a valid response.

This is how we implement Digest Authentication with Spring Security. I hope you find this post informative and helpful.