How to manage Open Distro Elasticsearch cluster using REST API service call with Postman tool

Test Environment

Fedora 32

Open Distro Elasticsearch

Open Distro for Elasticsearch is used to index and analyze large datasets. It is primarily used for log analytics, real-time application monitoring, clickstream analytics, and as a search backend.

In this article we will carry out some basic operations on an Elasticsearch cluster using REST API calls. To create and manage our REST API requests we will use Postman, a collaboration platform for API development.

If you prefer video, there is a YouTube video covering the same step-by-step procedure outlined below.

Procedure

Step1: Download the Postman API client

As a first step, download the Postman application tar file from the URL mentioned below.

URL - https://www.postman.com/downloads/
File - Postman-linux-x64-8.7.0.tar.gz
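
If you prefer to download from the command line instead of the browser, something like the following should work. Note that the dl.pstmn.io URL below is an assumption based on Postman's generic "latest Linux x64" download link and may serve a newer version than 8.7.0.

# Hypothetical command-line download of the Postman tarball
wget -O /home/admin/middleware/software/Postman-linux-x64-8.7.0.tar.gz "https://dl.pstmn.io/download/latest/linux64"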

Step2: Extract the gunzip file

Once downloaded, let's extract the package to a folder of our choice.

tar -xzvf /home/admin/middleware/software/Postman-linux-x64-8.7.0.tar.gz -C .

Step3: Configure the Desktop icon by creating the below file

Now, let's create the Postman.desktop file below to add a shortcut icon that launches the Postman application we just extracted. Make sure to update the Exec and Icon lines with the correct path where the Postman executable was extracted. Create the file in your home directory at the location specified below.

File: /home/admin/.local/share/applications/Postman.desktop

[Desktop Entry]
Encoding=UTF-8
Name=Postman
Exec=/home/admin/middleware/Stack/Postman/app/Postman %U
Icon=/home/admin/middleware/Stack/Postman/app/resources/app/assets/icon.png
Terminal=false
Type=Application
Categories=Development;
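
Optionally, you can validate the desktop entry and refresh the application database so the icon shows up immediately. This assumes the desktop-file-utils package is installed, which it is by default on Fedora.

# Validate the desktop entry and refresh the local application database
desktop-file-validate /home/admin/.local/share/applications/Postman.desktop
update-desktop-database /home/admin/.local/share/applications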

Step4: Launch the Elasticsearch and Kibana services using the docker compose file

Here is a sample docker-compose file, which you can get from the Open Distro for Elasticsearch documentation, to launch the Elasticsearch and Kibana services.

File: docker-compose.yml

version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - /apps/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node2
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - /apps/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net

#volumes:
#  odfe-data1:
#  odfe-data2:

networks:
  odfe-net:
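
Before starting the containers, note that Elasticsearch generally requires the kernel setting vm.max_map_count to be at least 262144; with the default host value the nodes may fail to start. A quick way to raise it on the Docker host is shown below (the sysctl.d file name is just an example).

# Raise vm.max_map_count for Elasticsearch (runtime only)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf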

Start up the Elasticsearch and Kibana Docker services as shown below.

docker-compose up -d
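
Once the containers are up, you can quickly confirm that both Elasticsearch nodes and Kibana started cleanly before moving on.

# List the services defined in the compose file and their state
docker-compose ps

# Follow the logs of the first Elasticsearch node if anything looks wrong
docker-compose logs -f odfe-node1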

Now that we have our Elasticsearch service up and running, let's carry out some basic operations using REST API calls, either with the curl command or with the Postman tool.

Step5: Get the Elasticsearch cluster details

Here let’s query for the cluster details.

curl -X GET 'https://fedser32.stack.com:9200/' -u admin:admin@1234 --insecure

Output:

{
  "name" : "odfe-node1",
  "cluster_name" : "odfe-cluster",
  "cluster_uuid" : "5GOEtg12S6qM5eaBkmzUXg",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
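
A closely related call is the cluster health endpoint, which reports the overall status (green, yellow or red), the number of nodes and the shard allocation. It can be queried in the same way:

curl -X GET 'https://fedser32.stack.com:9200/_cluster/health?pretty' -u admin:admin@1234 --insecure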

Step6: Get the Elasticsearch node details

Now let’s get the list of nodes available in the cluster.

curl -X GET 'https://fedser32.stack.com:9200/_cat/nodes?v' -u admin:admin@1234 --insecure

Output:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.18.0.2           13          97   5    0.51    0.51     0.45 dimr      -      odfe-node1
172.18.0.4            3          97   5    0.51    0.51     0.45 dimr      *      odfe-node2
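
The '*' in the master column tells us which node is currently the elected master. If you only want that piece of information, the _cat/master endpoint returns it directly:

curl -X GET 'https://fedser32.stack.com:9200/_cat/master?v' -u admin:admin@1234 --insecure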

Step7: Get the list of installed plugins in the Elasticsearch cluster

Here let’s get the list of installed plugins that are available for use in the cluster.

curl -X GET 'https://localhost:9200/_cat/plugins?v' -u admin:admin@1234 --insecure

Output:

name       component                       version
odfe-node1 opendistro-anomaly-detection    1.12.0.0
odfe-node1 opendistro-job-scheduler        1.12.0.0
odfe-node1 opendistro-knn                  1.12.0.0
odfe-node1 opendistro-reports-scheduler    1.12.0.0
odfe-node1 opendistro_alerting             1.12.0.2
odfe-node1 opendistro_index_management     1.12.0.1
odfe-node1 opendistro_performance_analyzer 1.12.0.0
odfe-node1 opendistro_security             1.12.0.0
odfe-node1 opendistro_sql                  1.12.0.0
odfe-node2 opendistro-anomaly-detection    1.12.0.0
odfe-node2 opendistro-job-scheduler        1.12.0.0
odfe-node2 opendistro-knn                  1.12.0.0
odfe-node2 opendistro-reports-scheduler    1.12.0.0
odfe-node2 opendistro_alerting             1.12.0.2
odfe-node2 opendistro_index_management     1.12.0.1
odfe-node2 opendistro_performance_analyzer 1.12.0.0
odfe-node2 opendistro_security             1.12.0.0
odfe-node2 opendistro_sql                  1.12.0.0

Step8: Index a document in the Elasticsearch cluster

Let's now index a single JSON document as shown below.

File: singledoc.json

{
  "title": "The Wind Rises",
  "release_date": "2013-07-20"
}

curl -X PUT -H 'Content-Type: application/json' 'https://fedser32.stack.com:9200/movies/_doc/3?pretty' -u admin:admin@1234 --insecure -d @singledoc.json

Output:

{
  "_index" : "movies",
  "_type" : "_doc",
  "_id" : "3",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "_seq_no" : 12,
  "_primary_term" : 7
}
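
To fetch the document back by its ID rather than searching for it, the document GET endpoint can be used:

curl -X GET 'https://fedser32.stack.com:9200/movies/_doc/3?pretty' -u admin:admin@1234 --insecure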

Step9: Search for the document

Once the indexing has completed, we can search for the document as shown below.

curl -X GET 'https://fedser32.stack.com:9200/movies/_search?q=Wind&pretty' -u admin:admin@1234 --insecure

Output:

{
  "took" : 24,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 0.7361701,
    "hits" : [
      {
        "_index" : "movies",
        "_type" : "_doc",
        "_id" : "3",
        "_score" : 0.7361701,
        "_source" : {
          "title" : "The Wind Rises",
          "release_date" : "2013-07-20"
        }
      }
    ]
  }
}
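
The q= parameter above is a URI search. The same search can be written as a JSON request body using the query DSL, which is also the form you would paste into the body of a Postman request. A minimal equivalent match query looks like this:

curl -X GET -H 'Content-Type: application/json' 'https://fedser32.stack.com:9200/movies/_search?pretty' -u admin:admin@1234 --insecure -d '{ "query": { "match": { "title": "Wind" } } }'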

Step10: Delete the document previously indexed

Here let's delete the document we indexed earlier. In this operation I have not used the pretty parameter, which formats the JSON response in a human-readable way.

curl -X DELETE 'https://fedser32.stack.com:9200/movies/_doc/3' -u admin:admin@1234 --insecure

Output:

{"_index":"movies","_type":"_doc","_id":"3","_version":3,"result":"deleted","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":13,"_primary_term":7}

Step11: Index bulk documents

Here let's bulk index a set of JSON documents using the Bulk API, as shown below.
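
The bulkdoc.json file itself is not reproduced here. The Bulk API expects newline-delimited JSON, where each action line is followed by the document source and the body ends with a trailing newline. Based on the response below (two documents indexed into the movies index with IDs 1 and 2), a minimal sketch with hypothetical titles could look like this. Because each action line names the movies index explicitly, the documents land in movies even though the request URL targets the data index.

File: bulkdoc.json (sample content)

{ "index": { "_index": "movies", "_id": "1" } }
{ "title": "Castle in the Sky", "release_date": "1986-08-02" }
{ "index": { "_index": "movies", "_id": "2" } }
{ "title": "My Neighbor Totoro", "release_date": "1988-04-16" }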

curl -X POST -H 'Content-Type: application/json' 'https://fedser32.stack.com:9200/data/_bulk' -u admin:admin@1234 --insecure --data-binary @bulkdoc.json

Output:

{"took":8,"errors":false,"items":[{"index":{"_index":"movies","_type":"_doc","_id":"1","_version":7,"result":"updated","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":16,"_primary_term":7,"status":200}},{"index":{"_index":"movies","_type":"_doc","_id":"2","_version":5,"result":"updated","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":17,"_primary_term":7,"status":200}}]}

These are some of the basic operations that we can carry out using REST API calls on an Elasticsearch cluster. You can go through the complete list of operations supported by the Elasticsearch REST API in the official documentation.

Hope you enjoyed reading this article. Thank you.