How to enable audit logging in Elasticsearch
In this article we will see how to enable audit logging in Elasticsearch. We will set up a three-node Elasticsearch cluster with Kibana and enable the audit setting using the configuration file.
Test Environment
Fedora 37 server
Docker
Docker compose
What is Auditing
Auditing is a way to keep track of security-related events such as authentication failures, data access events, security configuration updates, or any activity attempted without the permissions granted via roles and role mappings.
Audit Logging Settings
There are two types of audit logging settings that can be updated in an Elasticsearch cluster.

Static audit settings | Must be set in the elasticsearch.yml configuration file; changes take effect only after a node restart |
Dynamic audit settings | Can be updated at runtime using the cluster update settings API |
We need to make sure these settings are applied on each of the cluster nodes. Please note that if we are running Elasticsearch in Docker, we can apply the static audit settings by passing them as environment variables.
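To illustrate the dynamic variant, the set of logged event types can be changed at runtime through the cluster update settings API. This is a sketch assuming a cluster reachable on localhost:9200 and the elastic password from the .env file used later in this article; adjust credentials and TLS options to your setup:

```shell
# Exclude the chatty access_granted events at runtime. The events.exclude
# setting is dynamic, so no node restart is needed; note that
# xpack.security.audit.enabled itself is static and must still be set in
# elasticsearch.yml or as a container environment variable.
curl -s -X PUT -k -u elastic:admin@1234 \
  -H "Content-Type: application/json" \
  https://localhost:9200/_cluster/settings \
  -d '{"persistent":{"xpack.security.audit.logfile.events.exclude":["access_granted"]}}'
```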
Procedure
Step 1: Ensure Docker and Docker Compose Are Installed
As a first step, ensure that you have Docker and Docker Compose installed on your system. You can follow the official Docker documentation to install these tools.
[admin@fedser ~]$ docker -v
Docker version 20.10.22, build 3a2c30b
[admin@fedser ~]$ docker-compose -v
docker-compose version 1.29.2, build unknown
Step 2: Set the Kernel Parameter
vm.max_map_count is a Linux kernel parameter that defines the maximum number of memory map areas a process may have. Elasticsearch requires it to be at least 262144, so we need to set this parameter on the host system.
[admin@fedser elasticsearch-docker]$ cat /etc/sysctl.conf
...
vm.max_map_count=262144
[admin@fedser elasticsearch-docker]$ sudo sysctl -p
vm.max_map_count = 262144
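Editing /etc/sysctl.conf makes the change persistent across reboots; to apply it to the running kernel immediately, the value can also be set one-off as sketched below:

```shell
# Apply the value to the running kernel without touching /etc/sysctl.conf
# (sysctl -p, shown above, instead reloads the value from that file):
sudo sysctl -w vm.max_map_count=262144
# Verify the effective value:
cat /proc/sys/vm/max_map_count
```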
Step 3: Create the Environment File
We need to create the ".env" file below with the following properties set. The audit feature is not available under the free (basic) subscription, so to enable auditing we can use the trial version (or purchase a subscription) by setting LICENSE to trial in the file as shown below.
[admin@fedser elasticsearch-docker]$ cat .env
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=admin@1234
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=admin@1234
# Version of Elastic products
STACK_VERSION=8.6.0
# Set the cluster name
CLUSTER_NAME=elastic-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
#LICENSE=basic
LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
Step 4: Create the Docker Compose File
This is the default docker-compose.yml file from the Elasticsearch documentation. We need to add "- xpack.security.audit.enabled=true" to the environment section of each cluster node service to enable audit logging in Elasticsearch.
By default, the following audit events are logged: access_denied, access_granted, anonymous_access_denied, authentication_failed, connection_denied, tampered_request, run_as_denied, run_as_granted and security_config_change.
[admin@fedser elasticsearch-docker]$ cat docker-compose.yml
version: "2.2"
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - xpack.security.audit.enabled=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - xpack.security.audit.enabled=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - xpack.security.audit.enabled=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
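Before starting the stack, it is worth confirming that the file parses and that the audit flag reached all three node definitions. A quick check, assuming the file is saved as docker-compose.yml in the current directory:

```shell
# Validate the compose file syntax (prints nothing on success):
docker-compose config -q
# Count the audit flags; we expect 3, one per Elasticsearch node service:
grep -c "xpack.security.audit.enabled=true" docker-compose.yml
```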
Step 5: Start the Elasticsearch Cluster
Start the Elasticsearch and Kibana services using the docker-compose file we created, as shown below.
[admin@fedser elasticsearch-docker]$ docker-compose up -d
Creating network "elasticsearch-docker_default" with the default driver
Creating volume "elasticsearch-docker_certs" with local driver
Creating volume "elasticsearch-docker_esdata01" with local driver
Creating volume "elasticsearch-docker_esdata02" with local driver
Creating volume "elasticsearch-docker_esdata03" with local driver
Creating volume "elasticsearch-docker_kibanadata" with local driver
Creating elasticsearch-docker_setup_1 ... done
Creating elasticsearch-docker_es01_1 ... done
Creating elasticsearch-docker_es02_1 ... done
Creating elasticsearch-docker_es03_1 ... done
Creating elasticsearch-docker_kibana_1 ... done
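Once the containers report healthy, the cluster can be queried from the host. This is a quick sketch using the elastic password from the .env file; -k skips certificate verification because the generated CA sits inside the "certs" Docker volume (extract it and use --cacert for a proper check):

```shell
# List container status; the setup container exits once passwords are set:
docker-compose ps
# Cluster health should report "green" (or "yellow" while shards allocate):
curl -sk -u elastic:admin@1234 https://localhost:9200/_cluster/health?pretty
```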
Now we can validate that the audit logs are being written in JSON format as shown below.
[admin@fedser elasticsearch-docker]$ docker-compose logs -f
...
es01_1 | {"type":"audit", "timestamp":"2023-01-15T09:37:14,922+0000", "cluster.uuid":"ZAFz4rMOT32Xm8oKaij8hg", "node.id":"1HcmNBq2TqWyNsYgLUFUtQ", "event.type":"rest", "event.action":"anonymous_access_denied", "origin.type":"rest", "origin.address":"127.0.0.1:59796", "url.path":"/", "request.method":"GET", "request.id":"z8OBgKZ7SNi6r_8vpXTtsg"}
es01_1 | {"type":"audit", "timestamp":"2023-01-15T09:37:16,776+0000", "cluster.uuid":"ZAFz4rMOT32Xm8oKaij8hg", "node.id":"1HcmNBq2TqWyNsYgLUFUtQ", "event.type":"transport", "event.action":"access_granted", "authentication.type":"REALM", "user.name":"kibana_system", "user.realm":"reserved", "user.roles":["kibana_system"], "origin.type":"rest", "origin.address":"172.19.0.6:45912", "request.id":"JQwjtpj3QOqgQJRyhBHcIw", "action":"cluster:monitor/xpack/ml/data_frame/analytics/get", "request.name":"Request", "opaque_id":"unknownId", "trace.id":"f502e1f4c7341945a666936e4bfbb4ed"}
es01_1 | {"type":"audit", "timestamp":"2023-01-15T09:37:17,058+0000", "cluster.uuid":"ZAFz4rMOT32Xm8oKaij8hg", "node.id":"1HcmNBq2TqWyNsYgLUFUtQ", "event.type":"transport", "event.action":"access_granted", "authentication.type":"REALM", "user.name":"kibana_system", "user.realm":"reserved", "user.roles":["kibana_system"], "origin.type":"rest", "origin.address":"172.19.0.6:49688", "request.id":"6SmsHgGlRHe1yYj4buhdWw", "action":"indices:data/read/get", "request.name":"GetRequest", "indices":[".kibana_8.6.0"], "opaque_id":"unknownId", "trace.id":"f502e1f4c7341945a666936e4bfbb4ed"}
es01_1 | {"type":"audit", "timestamp":"2023-01-15T09:37:17,115+0000", "cluster.uuid":"ZAFz4rMOT32Xm8oKaij8hg", "node.id":"1HcmNBq2TqWyNsYgLUFUtQ", "event.type":"transport", "event.action":"access_granted", "authentication.type":"REALM", "user.name":"kibana_system", "user.realm":"reserved", "user.roles":["kibana_system"], "origin.type":"rest", "origin.address":"172.19.0.6:49714", "request.id":"7ff0zEO3RWe4E-NAuRyL5g", "action":"indices:data/write/index", "request.name":"IndexRequest", "indices":[".kibana_task_manager_8.6.0"], "opaque_id":"unknownId", "trace.id":"f502e1f4c7341945a666936e4bfbb4ed"}
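Since every audit record carries "type":"audit", the audit stream is easy to separate from the regular server logs. For example, a sketch filtering the logs of the es01 service:

```shell
# Follow only the audit records emitted by the first node;
# add a second grep (e.g. for "access_denied") to narrow by event action:
docker-compose logs -f es01 | grep '"type":"audit"'
```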
Hope you enjoyed reading this article. Thank you.