Spring Boot with Elasticsearch

Elasticsearch setup

  • Installation
The latest version can be downloaded from the following link:
https://www.elastic.co/downloads/elasticsearch

The server can be started by executing elasticsearch.bat (on Windows).
Access http://localhost:9200 and you should see output similar to this:

{
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "LPD9wz2cSqOrzB9hUYCfGA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
  • Phonetic Plugin Installation
An Elasticsearch installation ships with a few default plugins, but based on our business requirements we can add more.
For our use-case, we will install the phonetic analysis plugin, which lets Elasticsearch match phonetically similar-sounding words.


elasticsearch-plugin install analysis-phonetic
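The node has to be restarted after installing the plugin. To confirm it was picked up, the _cat plugins API can be queried (shown here in the same Postman style as the requests below; the node name in the response will differ per setup):

URL: localhost:9200/_cat/plugins?v
Method: GET

The response should list analysis-phonetic with its version for every node in the cluster.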
  • Creating Index
As per the Elasticsearch glossary, an index is:
"An optimized collection of JSON documents. Each document is a collection of fields, the key-value pairs that contain your data."

Elasticsearch Glossary

With respect to traditional database terminology, an index can be thought of as a database schema that can contain a number of tables.
For our use-case, we will create an employee index that stores documents related to employees.
Indexes can be created using the REST endpoints exposed by Elasticsearch; I am using Postman to access them.


URL: localhost:9200/employee_index?include_type_name=true
Method: PUT
Header: Content-type:application/json
Body:
{
  "aliases": {},
  "mappings": {
    "employee": {
      "properties": {
        "id": {
          "type": "keyword"
        },
        "firstName": {
          "type": "text",
          "analyzer": "my_analyzer"
        },
        "lastName": {
          "type": "text",
          "analyzer": "my_analyzer"
        },
        "salary": {
          "type": "double"
        },
        "createdDate": {
          "type": "date"
        },
        "employeeType": {
          "type": "text"
        },
        "contractType": {
          "type": "nested",
          "properties": {
            "id": {
              "type": "keyword"
            },
            "name": {
              "type": "keyword"
            }
          }
        }
      }
    }
  },

  "settings": {
    "number_of_shards": "2",
    "number_of_replicas": "1",

    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "phonemas"]
        }
      },
      "filter": {
        "phonemas": {
          "type": "phonetic",
          "encoder": "metaphone",
          "replace": "false"
        }
      }
    }
  }
}
The index definition consists of the mapping for the employee entity, a few settings related to shards and replicas, and, most importantly, the definition of our custom analyzer, where we tell Elasticsearch to apply the phonetic filter on top of the standard tokenizer and lowercase filter.

Updated file: employee_index.json

Additional note: From Elasticsearch version 7 onwards, mapping types are deprecated and the index mapping should no longer contain a type name. Since we still use the mapping type employee here, we append include_type_name=true to the index creation URL to make this work.
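Once the index exists, the custom analyzer can be sanity-checked with the _analyze API before wiring up any application code. Analyzing two differently spelled but similar-sounding names should yield the same phonetic token (the exact tokens depend on the metaphone encoder):

URL: localhost:9200/employee_index/_analyze
Method: POST
Header: Content-type:application/json
Body:
{
  "analyzer": "my_analyzer",
  "text": "Stephen"
}

Repeating the call with "Steven" should return the same phonetic token (alongside the original lowercased token, since we set replace to false), which is exactly what makes the phonetic matching work later on.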


Spring Boot Application setup

We start by creating a Spring Boot project using Spring Initializr with the following dependencies:

Spring boot Init
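The screenshot above shows the Spring Initializr selection. In pom.xml terms the relevant entries roughly look like this (a Maven sketch; versions come from the Spring Boot parent, and the Swagger/validation bits may differ in the actual repository):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<!-- Lombok for @Data / @EqualsAndHashCode on the DTOs -->
<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <optional>true</optional>
</dependency>
<!-- Bean Validation annotations (@NotNull, @NotBlank); older Boot versions pull these in via the web starter -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-validation</artifactId>
</dependency>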


Entity and Repo

@Data
@EqualsAndHashCode(callSuper = true)
@Document(indexName = "employee_index", type = "employee")
public class EmployeeDTO extends BaseAuditDTO implements Serializable {

  private static final long serialVersionUID = 544120913140064729L;

  @Id
  private String id;

  @NotNull(message = "First Name cannot be null")
  @NotBlank(message = "First Name cannot be Empty")
  private String firstName;

  @NotNull(message = "Last Name cannot be null")
  @NotBlank(message = "Last Name cannot be Empty")
  private String lastName;

  private Double salary;

  @NotNull(message = "Contract Type cannot be null")
  private ContractTypeDTO contractType;

  @NotNull(message = "Employee Type cannot be null")
  private EmployeeType employeeType;

  private String imagePath;
}

import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import com.prashant.elasticsearch.dto.EmployeeDTO;

public interface EmployeeESRepo extends ElasticsearchRepository<EmployeeDTO, String> {

}

Note: Here we declare the id type as String. Because of this, Spring Data Elasticsearch will map the _id field in Elasticsearch to the id field of the entity.
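Beyond the inherited CRUD and search methods, Spring Data Elasticsearch also supports derived query methods on this interface. The sample project does not necessarily declare any, but a finder on the analyzed firstName field could look like this:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import com.prashant.elasticsearch.dto.EmployeeDTO;

public interface EmployeeESRepo extends ElasticsearchRepository<EmployeeDTO, String> {

  // Derived query on the analyzed firstName field; because my_analyzer is also
  // applied at search time, "Steven" should match documents stored as "Stephen".
  Page<EmployeeDTO> findByFirstName(String firstName, Pageable pageable);
}

This is handy for simple lookups; the dynamic, criteria-based search shown later in the post goes through NativeSearchQuery instead.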

Elasticsearch integration

The next logical step is to integrate Elasticsearch with our Spring Boot application.

application.yml

server:
  port: ${SERVICE_PORT:7001}
  
elasticsearch:
  clustername: elasticsearch
  #host: <aws-elastic-search-service-url>
  host: http://localhost:9200
  # transport port (default 9300), only read by the test-profile TransportClient configuration
  port: 9300
  indexName: employee_index

EsConfigTransportClient.java

This class defines the connection settings for connecting to an Elasticsearch cluster. (Note: the TransportClient is deprecated and talks to the cluster over the transport port, 9300 by default, so we only use it for the test profile.)
@Profile("test")
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.prashant.elasticsearch.es.repo")
public class EsConfigTransportClient {
  @Value("${elasticsearch.host}")
  private String EsHost;

  @Value("${elasticsearch.port}")
  private int EsPort;

  @Value("${elasticsearch.clustername}")
  private String EsClusterName;

  @Bean
  public Client client() throws Exception {

    Settings esSettings = Settings.builder()
      .put("cluster.name", EsClusterName)
      .build();

    TransportClient client = new PreBuiltTransportClient(esSettings);
    client.addTransportAddress(new TransportAddress(InetAddress.getByName(EsHost), EsPort));
    return client;

  }

  @Bean
  public ElasticsearchOperations elasticsearchTemplate() throws Exception {
    return new ElasticsearchTemplate(client(), new CustomEntityMapper());
  }

}

EsConfigHighLevelClient.java

This class defines the connection settings for connecting to an Elasticsearch cluster via the RestHighLevelClient and is active for all non-test profiles.
@Profile("!test")
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.prashant.elasticsearch.es.repo")
public class EsConfigHighLevelClient extends AbstractElasticsearchConfiguration {
  @Value("${elasticsearch.host}")
  private String EsHost;

  @Value("${elasticsearch.clustername}")
  private String EsClusterName;

  @Override
  @Bean
  public RestHighLevelClient elasticsearchClient() {

    RestHighLevelClient client = new RestHighLevelClient(
      RestClient.builder(HttpHost.create(EsHost)));

    return client;
  }

}

Note: we define the URL and port for Elasticsearch in the host property, e.g. http://localhost:9200, which is set in application.yml.
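Which of the two configurations kicks in is controlled purely by the active Spring profile: a test annotated with @ActiveProfiles("test") gets the TransportClient wiring, everything else gets the RestHighLevelClient. A minimal sketch of such a test (the class name and JUnit 5 usage are my assumptions, not taken from the repository):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

import com.prashant.elasticsearch.es.repo.EmployeeESRepo;

@SpringBootTest
@ActiveProfiles("test") // activates EsConfigTransportClient instead of EsConfigHighLevelClient
class EmployeeESRepoIT {

  @Autowired
  private EmployeeESRepo employeeESRepo;

  @Test
  void repositoryIsWiredAgainstTheTestClient() {
    // If the context starts, the profile-specific client configuration was applied.
    assertNotNull(employeeESRepo);
  }
}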


EmployeeESService.java
The service class exposes operations to create, update, delete and search employee documents in Elasticsearch.
@Service
public class EmployeeESService {

  private final EmployeeESRepo employeeESRepo;

  @Autowired
  public EmployeeESService(EmployeeESRepo employeeESRepo) {
    this.employeeESRepo = employeeESRepo;
  }

  public EmployeeDTO saveEmployee(EmployeeDTO employee) {
    return employeeESRepo.save(employee);
  }

  public EmployeeDTO findById(String id) {
    return employeeESRepo.findById(id).orElseThrow(() -> new ResourceNotFoundException("Employee", id));
  }

  public Page<EmployeeDTO> findAll(Pageable pageable) {
    return employeeESRepo.findAll(pageable);
  }

  public void deleteEmployee(String id) {
    EmployeeDTO employee = findById(id);
    employeeESRepo.delete(employee);
  }

  /**
   * Returns a pageable response based on the given search criteria.
   * @param esSearchFilter the filter conditions to apply
   * @param pageable paging information (may be null)
   * @return Page<EmployeeDTO>
   */
  public Page<EmployeeDTO> searchEmployeeByCriteria(ESSearchFilter esSearchFilter, Pageable pageable) {
    QueryBuilder query = FilterBuilderHelper.build(esSearchFilter);
    NativeSearchQuery nativeSearchQuery = null;
    if (null != pageable) {
      nativeSearchQuery = new NativeSearchQueryBuilder().withPageable(pageable).withQuery(query).build();
    } else {
      nativeSearchQuery = new NativeSearchQueryBuilder().withQuery(query).build();
    }

    return employeeESRepo.search(nativeSearchQuery);

  }
}
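For completeness, indexing a document through the service boils down to a couple of calls. The sketch below is hypothetical (a CommandLineRunner is used just to have somewhere to run it; the EmployeeType constant and the ContractTypeDTO setters are assumptions about classes whose internals are not shown in this post):

// Hypothetical demo bean, e.g. declared inside the @SpringBootApplication class.
@Bean
CommandLineRunner seedEmployeeIndex(EmployeeESService employeeESService) {
  return args -> {
    EmployeeDTO employee = new EmployeeDTO();
    employee.setId("emp-1");                          // becomes the _id of the document
    employee.setFirstName("Stephen");
    employee.setLastName("Hawking");
    employee.setSalary(90000.0);
    employee.setEmployeeType(EmployeeType.PERMANENT); // assumed enum constant
    ContractTypeDTO contract = new ContractTypeDTO(); // assumed no-args constructor + Lombok setters
    contract.setId("ct-1");
    contract.setName("FULL_TIME");
    employee.setContractType(contract);

    employeeESService.saveEmployee(employee);         // indexes the document into employee_index
    employeeESService.findById("emp-1");              // reads it back via the _id
  };
}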

Search Filter Builders:

These are utility classes that help create the QueryBuilder objects used to interact with Elasticsearch. They are invoked from the EmployeeESService while building the native search query.
public class FilterBuilderHelper {
  public static QueryBuilder build(ESSearchFilter esSearchFilter) {
    BoolQueryBuilder boolParentQueryBuilder = new BoolQueryBuilder();

    Map<String, List<ESFilterCondition>> filtersByFieldName = esSearchFilter.getConditions().stream().collect(Collectors.groupingBy(ESFilterCondition::getFieldName));

    for (Map.Entry<String, List<ESFilterCondition>> entry : filtersByFieldName.entrySet()) {
      List<ESFilterCondition> fieldConditions = entry.getValue();
      String fieldName = entry.getKey();

      if (fieldName.contains(".")) {
        // nested search. First identify the root element
        BoolQueryBuilder boolFieldQueryBuilder = new BoolQueryBuilder();
        int lastIndex = fieldName.lastIndexOf(".");
        String rootPath = fieldName.substring(0, lastIndex);
        for (ESFilterCondition condition : fieldConditions) {
          boolFieldQueryBuilder.should(QueryBuilderHelper.prepareQueryCondition(condition));
        }
        NestedQueryBuilder nestedQuery = new NestedQueryBuilder(rootPath, boolFieldQueryBuilder, ScoreMode.None);
        boolParentQueryBuilder.must(nestedQuery);

      } else {
        BoolQueryBuilder boolFieldQueryBuilder = new BoolQueryBuilder();
        for (ESFilterCondition condition : fieldConditions) {
          boolFieldQueryBuilder.should(QueryBuilderHelper.prepareQueryCondition(condition));
        }
        boolParentQueryBuilder.must(boolFieldQueryBuilder);
      }

    }

    return boolParentQueryBuilder;
  }
}

public class QueryBuilderHelper {
  public static QueryBuilder prepareQueryCondition(ESFilterCondition condition) {
    QueryBuilder queryStringBuilder;
    switch (condition.getOperation()) {
      case EQ:
        queryStringBuilder = QueryBuilders.matchQuery(condition.getFieldName(), condition.getValue1());
        break;
      case GT:
        queryStringBuilder = QueryBuilders.rangeQuery(condition.getFieldName()).gt(condition.getValue1());
        break;
      case LT:
        queryStringBuilder = QueryBuilders.rangeQuery(condition.getFieldName()).lt(condition.getValue1());
        break;
      case GTE:
        queryStringBuilder = QueryBuilders.rangeQuery(condition.getFieldName()).gte(condition.getValue1());
        break;
      case LTE:
        queryStringBuilder = QueryBuilders.rangeQuery(condition.getFieldName()).lte(condition.getValue1());
        break;
      case BETWEEN:
        queryStringBuilder = QueryBuilders.rangeQuery(condition.getFieldName()).from(condition.getValue1()).to(condition.getValue2());
        break;
      case LIKE:
        queryStringBuilder = QueryBuilders.queryStringQuery("*" + QueryParser.escape(condition.getValue1()) + "*").field(condition.getFieldName());
        break;
      case STARTS_WITH:
        queryStringBuilder = QueryBuilders.prefixQuery(condition.getFieldName(), QueryParser.escape(condition.getValue1()));
        break;
      case REGEX:
        queryStringBuilder = QueryBuilders.regexpQuery(condition.getFieldName(), condition.getValue1());
        break;
      default:
        throw new IllegalArgumentException("Unsupported operation: " + condition.getOperation());
    }
    return queryStringBuilder;
  }
}
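To see how the two helpers fit together, consider a filter with one condition on firstName (operation EQ, value "Stiven") and one on the nested field contractType.name (EQ, "FULL_TIME"). FilterBuilderHelper turns this into a bool query that, serialized, looks roughly like this (simplified; default match-query options omitted):

{
  "bool": {
    "must": [
      {
        "bool": {
          "should": [
            { "match": { "firstName": { "query": "Stiven" } } }
          ]
        }
      },
      {
        "nested": {
          "path": "contractType",
          "score_mode": "none",
          "query": {
            "bool": {
              "should": [
                { "match": { "contractType.name": { "query": "FULL_TIME" } } }
              ]
            }
          }
        }
      }
    ]
  }
}

Because firstName is indexed and searched with my_analyzer, the match on "Stiven" is analyzed phonetically as well and should also hit employees stored as "Stephen" or "Steven".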

Swagger and REST Endpoints

  • Swagger URL:
http://localhost:7001/swagger-ui.html

Source Code

The source code and the README can be found on GitHub.
The README also contains sample data-creation and data-search JSON which can be used via the Swagger endpoints.

Front end:
The front-end code has been developed using Angular and can also be found on GitHub.

Ensure you have npm and the Angular CLI installed.
Run npm install and then ng serve to start the front-end application. It will be available at http://localhost:4200.

AWS Migration

The entire stack can be deployed on AWS. Please find details in the following post.
