ECK Elasticsearch and Kibana on K8S
Notes on deploying Elasticsearch and Kibana on Kubernetes with ECK.
First, set up an NFS server following my link below.
Then set up the folders for Elasticsearch: navigate to /mnt/nfs_share, create the folder elk, and inside it create elk0.
cd /mnt/nfs_share
mkdir elk
cd elk
mkdir elk0
Set the permissions and change the owner to nobody:nogroup:
chmod 777 elk0
sudo chown nobody:nogroup elk0
This is my repo with the deployment manifests.
Next, apply the YAML files to deploy the Elastic Operator, Elasticsearch, and Kibana. You must change the NFS server IP to your own (in pv.yml).
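As a rough sketch of what pv.yml can contain (not the exact file from my repo): the PV name, capacity, and storage class below are placeholder assumptions; only nfs.server and nfs.path need to match your setup, where the path is the folder created above.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-pv0                      # placeholder name
spec:
  capacity:
    storage: 10Gi                    # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs              # placeholder storage class
  nfs:
    server: 192.168.1.100            # <-- change to your NFS server IP
    path: /mnt/nfs_share/elk/elk0    # the folder created earlier
```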
Forward the port and access Kibana:
kubectl port-forward svc/kibana-kb-http 5601:5601 --namespace=elk
Open https://localhost:5601/ in your browser.
Log in with the user elastic.
The password is stored in a Kubernetes secret. You can run the following command to get yours:
kubectl -n elk get secret elastic-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
Select Discover in the left menu.
You can see that the Filebeat agent has shipped logs to Elasticsearch.
You can configure which namespaces Filebeat ships logs from in filebeat-agent.yml; see the sketch below.
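For illustration, a namespace filter can use the standard Filebeat Kubernetes autodiscover provider. This is an assumption about the shape of the config, not a copy of filebeat-agent.yml, and the namespace my-app is a placeholder:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: my-app   # placeholder: ship logs from this namespace only
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
```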
Next, we will ship logs from another cluster.
Create your new cluster. In my case, I created cluster-2, which runs the Filebeat agent and ships logs to cluster-1, which runs Kibana and Elasticsearch.
The following article covers shipping logs from another K8S cluster with the agent running as a DaemonSet:
https://computingforgeeks.com/ship-kubernetes-logs-to-external-elasticsearch/
This is my repo for the Filebeat agent:
https://gitlab.com/krisadas/argocd-test/-/tree/main/filebeat
You must change the Elasticsearch endpoint to the external IP of your first cluster.
Put both the cluster-1 external IP and the Elasticsearch password into your YAML file.
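This assumes Elasticsearch on cluster-1 already has an external IP. One way to get one with ECK (my assumption, not necessarily how it was done here) is to set the HTTP service type to LoadBalancer in the Elasticsearch resource; the version and node count below are placeholders:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic        # matches the elastic-es-elastic-user secret name above
  namespace: elk
spec:
  version: 8.5.0       # placeholder version
  nodeSets:
    - name: default
      count: 1         # placeholder node count
  http:
    service:
      spec:
        type: LoadBalancer   # requires a cloud or bare-metal load balancer
```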
I use the option ssl.verification_mode: none.
This is not recommended for production; it is here only to show how to connect Filebeat to Elasticsearch.
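As a sketch of where these values land in the Filebeat config (the host IP and password are placeholders; the manifest in my repo may inject them through environment variables instead):

```yaml
output.elasticsearch:
  hosts: ["https://203.0.113.10:9200"]   # <-- cluster-1 external IP (placeholder)
  username: "elastic"
  password: "<elastic-user-password>"    # value from the elastic-es-elastic-user secret
  ssl.verification_mode: none            # demo only; not recommended for production
```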
Apply your YAML to the new cluster, then open Kibana on cluster-1.
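For example, assuming your kubeconfig has a context for the new cluster (the context name cluster-2 is a placeholder) and the manifests live in the repo's filebeat folder:

```shell
kubectl config use-context cluster-2   # placeholder context name
kubectl apply -f filebeat/
```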
You will see the logs in the Discover menu. In the agent.name field you can see that they come from both cluster-1 and cluster-2.