Building a Cost-Effective ELK Stack for Centralized Logging

April 6, 2025


If your organization has budget constraints, purchasing licensed products such as Splunk for your logging infrastructure may not be feasible. Fortunately, a powerful open-source alternative exists: ELK (Elasticsearch, Logstash, and Kibana). ELK provides robust logging and visualization capabilities.

At a startup where I worked, cost minimization was a priority, so I implemented ELK for logging.

In this article, I'll guide you through setting up and configuring the free version of the ELK stack on GCP using Terraform and Ansible. However, the same instructions can be followed to deploy it on other cloud platforms such as AWS and Azure.

Why Choose ELK?

After thorough research, I decided to implement the ELK stack on GCP using virtual machines (VMs) for logging because of its ease of use, rich dashboards, and straightforward setup process. While I could have deployed it on a GKE cluster, I opted for VMs at the time for various reasons.

Elasticsearch is an open-source search and analytics engine that allows you to collect and analyze logs from multiple sources, including IoT devices, application servers, web servers, and cloud services. The ELK stack consists of the following components:

  • Elasticsearch – Stores and indexes log data

  • Logstash – Filters and formats logs before ingestion

  • Kibana – Provides a graphical user interface (GUI) for searching and visualizing logs

  • Filebeat – A lightweight log shipper installed as an agent on machines that generate logs

Figure 1

Prerequisites

Before setting up ELK, ensure you have the following:

  • A cloud account (Google Cloud, AWS, or Azure). This guide uses GCP.

  • Terraform and Ansible installed on your local machine.

  • Proper authentication configured between your local machine and the cloud provider (Google Cloud or any other) with the required access permissions for Terraform and Ansible (a minimal provider configuration sketch follows this list).
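
For example, on GCP you can authenticate Terraform with gcloud auth application-default login (or a service account key) and point the Google provider at your project. A minimal provider block might look like the sketch below; the project ID and region are placeholders:

provider "google" {
  project = "nonprod-infra-monitoring"  # placeholder; use your own project ID
  region  = "us-central1"
}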

Part 1: ELK Infrastructure Setup Using Terraform on GCP

The ELK stack consists of various nodes, each serving a specific function to enhance scalability and failover:

  • Master nodes – Manage cluster operations and indexing.

  • Data nodes – Store and index log data for search and analysis.

  • Kibana node – Provides a GUI for log visualization and analytics.

  • Logstash node – Filters, transforms, and ingests logs from various sources.

While all of these functions can be combined on a single node, separating them in a production environment improves scalability and fault tolerance, depending on the workload.

Create the following files in a folder where you plan to run the Terraform code, or clone my Git repository, which contains all of the code: GitHub – pradeep-gaddamidi/ELK.

1. create_elk_instances.tf

locals {
  config = var.environment_config[terraform.workspace]
  instances = [for key, value in local.config.nodes : {
    name = key
    machine_type = (
      can(regex("master_.*", value)) ? local.config.master_machine_type :
      can(regex("kibana_.*", value)) ? local.config.kibana_machine_type :
      can(regex("logstash_.*", value)) ? local.config.logstash_machine_type :
      local.config.node_machine_type
    )
    zone = (
      can(regex(".*_zoneb", value)) ? local.config.region_zones[1] :
      can(regex(".*_zonec", value)) ? local.config.region_zones[2] :
      local.config.region_zones[0]
    )
    network_tags         = local.config.network_tags
    ssh_keys             = local.config.ssh_keys
    static_ip_name       = key           # Modify or leave null as needed
    service_account_name = "elastic"     # Modify or leave null as needed
    disk_name            = key           # Modify or leave null as needed
    disk_type            = "pd-standard" # Modify as needed
    disk_size = (
      can(regex("master_.*", value)) ? local.config.master_disk_size :
      can(regex("kibana_.*", value)) ? local.config.kibana_disk_size :
      can(regex("logstash_.*", value)) ? local.config.logstash_disk_size :
      local.config.node_disk_size
    )
    disk_zone = (
      can(regex(".*_zoneb", value)) ? local.config.region_zones[1] :
      can(regex(".*_zonec", value)) ? local.config.region_zones[2] :
      local.config.region_zones[0]
    )
    disk_project = local.config.project_name
  }]
}

module "gcp_instance" {
  source                = "../../modules/gcp_custom_instance"
  gce_image             = local.config.gce_image
  subnet                = local.config.subnet
  region                = local.config.region  # Provide only when creating static IPs
  instances             = local.instances
  use_common_service_account = local.config.use_common_service_account # Provide only when creating a common service account across all the instances
}

2. variables.tf 

variable "environment_config" {
  description = "Configuration per environment"
  type = map(object({
    project_name         = string
    region               = string
    region_zones         = list(string)
    master_machine_type  = string
    node_machine_type    = string
    kibana_machine_type  = string
    logstash_machine_type= string
    network_tags         = list(string)
    network              = string
    subnet               = string
    gce_image            = string
    ca_bucket_location   = string
    backup_bucket        = string
    master_disk_size     = number
    node_disk_size       = number
    kibana_disk_size     = number
    logstash_disk_size   = number
    use_common_service_account = bool
    machine_access_scopes= list(string)
    nodes                = map(string)
    ssh_keys             = list(string)
  }))
  default = {
    nonprod = {
      project_name         = "nonprod-infra-monitoring"
      region               = "us-central1"
      region_zones         = ["us-central1-a", "us-central1-b"]
      master_machine_type  = "n1-standard-2"
      node_machine_type    = "n1-standard-2"
      kibana_machine_type  = "n1-standard-2"
      logstash_machine_type= "n1-standard-2"
      network_tags         = ["elastic", "nonprod"]
      network              = "projects/nonprod-networking/global/networks/nonprod-vpc"
      subnet               = "projects/nonprod-networking/regions/us-central1/subnetworks/nonprod-sub01"
      gce_image            = "debian-cloud/debian-12"
      ca_bucket_location   = "nonprod-elastic-certificates"
      backup_bucket        = "nonprod-elastic-backup"
      master_disk_size     = 100
      node_disk_size       = 510
      kibana_disk_size     = 100
      logstash_disk_size   = 100
      use_common_service_account = true
      machine_access_scopes = ["cloud-platform"]
      ssh_keys              = []
      nodes = {
        "nonprod-elastic-master-node1" = "master_zonea"
        "nonprod-elastic-data-node1"   = "data_zonea"
        "nonprod-elastic-data-node2"   = "data_zoneb"
        "nonprod-elastic-kibana"       = "kibana_zonea"
        "nonprod-elastic-logstash"     = "logstash_zonea"
      }
    }
    prod = {
      project_name         = "prod-infra-monitoring"
      region               = "us-central1"
      region_zones         = ["us-central1-a", "us-central1-b", "us-central1-c"]
      master_machine_type  = "n2-standard-2"
      node_machine_type    = "n2-highmem-4"
      kibana_machine_type  = "n2-standard-2"
      logstash_machine_type= "n2-standard-2"
      network_tags         = ["elastic", "prod"]
      network              = "projects/prod-networking/global/networks/prod-vpc"
      subnet               = "projects/prod-networking/regions/us-central1/subnetworks/prod-sub01"
      gce_image            = "debian-cloud/debian-12"
      ca_bucket_location   = "prod-elastic-certificates"
      backup_bucket        = "prod-elastic-backup"
      master_disk_size     = 100
      node_disk_size       = 3000
      kibana_disk_size     = 100
      logstash_disk_size   = 100
      use_common_service_account = true
      machine_access_scopes = ["cloud-platform"]
      ssh_keys              = []
      nodes = {
        "elastic-master-node1" = "master_zonea"
        "elastic-master-node2" = "master_zoneb"
        "elastic-master-node3" = "master_zonec"
        "elastic-data-node1"   = "data_zonea"
        "elastic-data-node2"   = "data_zonea"
        "elastic-data-node3"   = "data_zoneb"
        "elastic-data-node4"   = "data_zoneb"
        "elastic-data-node5"   = "data_zonea"
        "elastic-data-node6"   = "data_zoneb"
        "elastic-kibana"       = "kibana_zonea"
        "elastic-logstash"     = "logstash_zonea"
        "elastic-logstash2"    = "logstash_zoneb"
        "elastic-logstash3"    = "logstash_zonec"
      }
    }
  }
}


I've created a custom module to provision GCP instances and used it in the create_elk_instances.tf file. However, you can also use GCP's official Terraform module to create VM instances.

module "gcp_instance" {
  source                = "./modules/gcp_custom_instance"


The ./modules/gcp_custom_instance folder must contain the files gcp_custom_vm.tf and variables_custom.tf.


Below is the code for my custom module:

3. gcp_custom_vm.tf

locals {
  common_service_account_email = var.use_common_service_account ? google_service_account.common_service_account[0].email : null
}

resource "google_compute_instance" "google-compute-instance" {
  for_each = { for index, inst in var.instances : inst.name => inst }
  name         = each.value.name
  machine_type = each.value.machine_type
  zone         = each.value.zone
#  allow_stopping_for_update = true
  tags         = each.value.network_tags
  metadata = {
    ssh-keys = join("\n", each.value.ssh_keys)
  }

  boot_disk {
    initialize_params {
      image = var.gce_image
    }
  }

  network_interface {
    subnetwork = var.subnet
    network_ip = each.value.static_ip_name != null ? google_compute_address.static_ips[each.value.static_ip_name].address : null
  }

  dynamic "service_account" {
    for_each = each.value.service_account_name != null ? [1] : []
    content {
      scopes = var.machine_access_scopes
      email  = var.use_common_service_account ? google_service_account.common_service_account[0].email : google_service_account.individual_service_account[each.value.name].email
    }
  }

  dynamic "attached_disk" {
    for_each = each.value.disk_name != null ? [1] : []
    content {
      source      = google_compute_disk.google-compute-disk[each.value.disk_name].self_link
      device_name = "${each.value.disk_name}-data"
      mode        = "READ_WRITE"
    }
  }

}


resource "google_compute_disk" "google-compute-disk" {
  for_each = { for index, inst in var.instances : inst.disk_name => inst if inst.disk_name != null }

  name    = "${each.value.disk_name}-data"
  type    = each.value.disk_type
  size    = each.value.disk_size
  zone    = each.value.disk_zone
  project = each.value.disk_project
}

resource "google_service_account" "common_service_account" {
  count        = var.use_common_service_account ? 1 : 0
  account_id   = var.use_common_service_account ? lookup(var.instances[0], "service_account_name", null) : null
  display_name = "Service Account"
}

resource "google_service_account" "individual_service_account" {
  for_each     = { for index, inst in var.instances : inst.service_account_name => inst if inst.service_account_name != null && !var.use_common_service_account }

  account_id   = each.value.service_account_name
  display_name = "Service account for ${each.value.name}"
}

resource "google_compute_address" "static_ips" {
  # Only include instances that have static_ip_name defined
  for_each = { for index, inst in var.instances : inst.static_ip_name => inst if inst.static_ip_name != null }

  name         = each.value.static_ip_name
  address_type = "INTERNAL"
  region       = var.region
  subnetwork   = var.subnet
}

output "common_service_account_email" {
  value       = local.common_service_account_email
  description = "The email of the common service account"
}

4. variables_custom.tf

variable "instances" {
  description = "List of instance configurations"
  type = list(object({
    name                 = string
    machine_type         = string
    zone                 = string
    network_tags         = optional(list(string))
    ssh_keys             = optional(list(string))
    static_ip_name       = optional(string)
    service_account_name = optional(string)
    disk_name            = optional(string)
    disk_type            = optional(string)
    disk_size            = optional(number)
    disk_zone            = optional(string)
    disk_project         = optional(string)
  }))
}

variable "gce_image" {
  description = "GCE image for the instances"
  type        = string
  default     = "debian-cloud/debian-12"
}

variable "subnet" {
  description = "Subnet for the network"
  type        = string
}

variable "region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}

variable "use_common_service_account" {
  description = "Flag to determine if a common service account should be used for all instances"
  type        = bool
  default     = false
}

variable "machine_access_scopes" {
  description = "Scopes for machine access"
  type        = list(string)
  default     = ["cloud-platform"]
}


Assign permissions to the service accounts created earlier in the code:

locals {
  bucket_config = var.environment_config[terraform.workspace]
}

resource "google_storage_bucket_iam_binding" "elastic-backup" {
  bucket  = local.bucket_config.backup_bucket
  role    = "roles/storage.objectAdmin"
  members = local.config.use_common_service_account ? ["serviceAccount:${module.gcp_instance.common_service_account_email}"] : []
}

resource "google_storage_bucket_iam_binding" "elastic-certs" {
  bucket  = local.bucket_config.ca_bucket_location
  role    = "roles/storage.objectViewer"
  members = local.config.use_common_service_account ? ["serviceAccount:${module.gcp_instance.common_service_account_email}"] : []
}


Create the GCP buckets used for certificates and Elasticsearch backups:

resource "google_storage_bucket" "elastic-backup" {
  name          = local.bucket_config.backup_bucket
  location      = "US"
  storage_class = "STANDARD"

  uniform_bucket_level_access = true
}

resource "google_storage_bucket" "elastic-certs" {
  name          = local.bucket_config.ca_bucket_location
  location      = "US"
  storage_class = "STANDARD"

  uniform_bucket_level_access = true
}

 

You can use the Terraform commands below to create the above resources:

terraform workspace select nonprod (if you use workspaces)
terraform init
terraform plan
terraform apply

You can add new nodes as needed by updating the variables, i.e., adding new entries to the nodes section of the file and re-running the Terraform code. This will provision the new data nodes automatically. Now that the ELK infrastructure is set up, the next step is to install and configure the ELK software.
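
For example, to add a third data node to the nonprod environment, you could extend its nodes map as shown below (the new node name is illustrative) and re-run terraform apply:

nodes = {
  "nonprod-elastic-master-node1" = "master_zonea"
  "nonprod-elastic-data-node1"   = "data_zonea"
  "nonprod-elastic-data-node2"   = "data_zoneb"
  "nonprod-elastic-data-node3"   = "data_zonea"   # newly added data node
  "nonprod-elastic-kibana"       = "kibana_zonea"
  "nonprod-elastic-logstash"     = "logstash_zonea"
}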

Part 2: Configure the ELK Infrastructure Using Ansible

Prerequisites

1. The certificate generation required for secure communication between the various Elastic nodes can be automated. However, I chose to generate the certificates manually by following the ELK guides.

Once the certificates are generated, stage them in the GCP bucket elastic-certificates.
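
As a rough sketch (assuming an Elasticsearch 8.x installation is available to run the tool, and that you unzip the archive produced by the http command), the certificates can be generated with elasticsearch-certutil and then staged with gsutil; the file names below match those referenced later in the configuration files:

# Generate a CA (elastic-stack-ca.p12), transport certificates (elastic-certificates.p12),
# and HTTP certificates (http.p12 plus elasticsearch-ca.pem, delivered inside a zip archive)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
/usr/share/elasticsearch/bin/elasticsearch-certutil http

# Stage the certificates in the GCS bucket used by the Ansible playbook
gsutil cp elastic-certificates.p12 http.p12 elasticsearch-ca.pem gs://nonprod-elastic-certificates/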

2. Make sure your Ansible hosts file is organized as below (a sample inventory follows this list):

  • All data and master nodes are grouped under the elastic section

  • Kibana nodes under the kibana section

  • Logstash nodes under logstash

  • Data nodes under data

  • Master nodes under master
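
A minimal INI-style inventory reflecting this grouping might look like the sketch below (the IP addresses are placeholders for your node IPs):

[master]
10.x.x.x

[data]
10.x.x.x
10.x.x.x

[elastic:children]
master
data

[kibana]
10.x.x.x

[logstash]
10.x.x.x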

Create the following files in a folder where you plan to run the Ansible playbook. Then, execute the Ansible playbook below to install and configure ELK.
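
Assuming the inventory file is saved as hosts (as in the sample above), the playbook can be run with:

ansible-playbook -i hosts ansible.yaml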

ansible.yaml

---
- name: Install Elasticsearch pre-reqs on Debian
  hosts: all
  become: yes
  tasks:
    - name: Update apt repository
      apt:
        update_cache: yes

    - name: Install default-jre
      apt:
        name:
          - default-jre
        state: present

    - name: Add Elasticsearch GPG key
      apt_key:
        url: https://artifacts.elastic.co/GPG-KEY-elasticsearch
        state: present

    - name: Install apt-transport-https
      apt:
        name: apt-transport-https
        state: present

    - name: Add Elasticsearch repository
      apt_repository:
        repo: "deb https://artifacts.elastic.co/packages/8.x/apt stable main"
        state: present
        filename: elastic-8.x

    - name: Update apt repository
      apt:
        update_cache: yes

- name: Install Elasticsearch on Debian
  hosts: elastic
  become: yes
  tasks:
    - name: Install Elasticsearch
      apt:
        name: elasticsearch=8.11.2
        state: present
    - name: Enable Elasticsearch service
      ansible.builtin.systemd:
        name: elasticsearch.service
        enabled: yes

- name: Install Kibana on Debian
  hosts: kibana
  become: yes
  tasks:
    - name: Install Kibana
      apt:
        name: kibana=8.11.2
        state: present
    - name: Enable kibana service
      ansible.builtin.systemd:
        name: kibana.service
        enabled: yes

- name: Install logstash on Debian
  hosts: logstash
  become: yes
  tasks:
    - name: Install logstash
      apt:
        name: logstash=1:8.11.2-1
        state: present
    - name: Enable logstash service
      ansible.builtin.systemd:
        name: logstash.service
        enabled: yes

- name: Copy the kibana.yml configuration file to the kibana nodes
  hosts: kibana
  become: yes
  tasks:
    - name: Copy a kibana.yml file
      template:
        src: "{{ playbook_dir }}/files/kibana.j2"
        dest: /etc/kibana/kibana.yml

- name: Copy the pipelines.yml configuration file to the logstash nodes
  hosts: logstash
  become: yes
  tasks:
    - name: Copy a logstash pipelines.yml file
      template:
        src: "{{ playbook_dir }}/files/logstash.j2"
        dest: /etc/logstash/conf.d/pipelines.conf

- name: Copy the elasticsearch_node.yml configuration file to the data nodes
  hosts: data
  gather_facts: yes
  become: yes
  tasks:
    - name: Get zone info from the metadata server
      ansible.builtin.uri:
        url: http://metadata.google.internal/computeMetadata/v1/instance/zone
        method: GET
        return_content: yes  # Ensures that the content is returned
        headers:
          Metadata-Flavor: "Google"
      register: zone_info
      check_mode: no
    - name: Extract the zone name
      set_fact:
        zone_name: "{{ zone_info.content.split('/')[-1] }}"
    - name: Copy an elasticsearch_node.yml file
      template:
        src: "{{ playbook_dir }}/files/elasticsearch_node.j2"
        dest: /etc/elasticsearch/elasticsearch.yml

- name: Copy the elasticsearch_master.yml configuration file to the master nodes
  hosts: master
  gather_facts: yes
  become: yes
  tasks:
    - name: Copy an elasticsearch_master.yml file
      template:
        src: "{{ playbook_dir }}/files/elasticsearch_master.j2"
        dest: /etc/elasticsearch/elasticsearch.yml

- name: Download the certificates from the GCS bucket
  hosts: elastic
  become: yes
  tasks:
    - name: certificates
      command: gsutil cp gs://nonprod-elastic-certificates/* /etc/elasticsearch/certs

- name: Download the certificates from the GCS bucket
  hosts: kibana
  become: yes
  tasks:
    - name: certificates
      command: gsutil cp gs://nonprod-elastic-certificates/elasticsearch-ca.pem /etc/kibana

- name: Download the certificates from the GCS bucket
  hosts: logstash
  become: yes
  tasks:
    - name: certificates
      command: gsutil cp gs://nonprod-elastic-certificates/elasticsearch-ca.pem /usr/share/logstash/pipeline/elasticsearch-ca.pem

The configuration files required by the Ansible playbook should be placed in the files directory. The expected files are listed below:
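
Assuming an inventory file named hosts, the playbook directory would then look roughly like this:

.
├── ansible.yaml
├── hosts
└── files
    ├── elasticsearch_master.j2
    ├── elasticsearch_node.j2
    ├── kibana.j2
    └── logstash.j2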

1. elasticsearch_master.j2

node.name: {{ ansible_default_ipv4.address }}
node.roles: [ master ]
discovery.seed_hosts:
 - 10.x.x.x
 - 10.x.x.x
 - 10.x.x.x
#cluster.initial_master_nodes:
# - 10.x.x.x
# - 10.x.x.x
# - 10.x.x.x
network.host: {{ ansible_default_ipv4.address }}
cluster.name: prod-monitoring
path:
  data: /mnt/disks/elasticsearch
  logs: /var/log/elasticsearch
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: us-central1-a,us-central1-b
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/http.p12
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.audit.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.license.self_generated.type: basic


A few points to note about the above Elasticsearch master node configuration:
  1. We are using a basic (free) license, not a premium one.

  2. When Ansible runs on the master node, it automatically fills in the IPv4 address of the master node by default.

  3. Uncomment cluster.initial_master_nodes only when creating the cluster for the first time.

  4. Security is enabled between:

    • Master nodes using xpack.security.transport.ssl.enabled
    • Data nodes and Kibana/Logstash using xpack.security.http.ssl.enabled

2. elasticsearch_node.j2

node.name: {{ ansible_default_ipv4.address }}
node.roles: [ data, transform, ingest ]
discovery.seed_hosts:
 - 10.x.x.x
 - 10.x.x.x
 - 10.x.x.x
#cluster.initial_master_nodes:
# - 10.x.x.x
# - 10.x.x.x
# - 10.x.x.x
network.host: {{ ansible_default_ipv4.address }}
cluster.name: prod-monitoring
path:
  data: /mnt/disks/elasticsearch
  logs: /var/log/elasticsearch
node.attr.zone: {{ zone_name }}
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/http.p12
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.audit.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.license.self_generated.type: basic

3. kibana.j2

elasticsearch.hosts: ["https://10.x.x.x:9200","https://10.x.x.x:9200","https://10.x.x.x:9200","https://10.x.x.x:9200"]
server.name: kibana
server.host: {{ ansible_default_ipv4.address }}
server.port: 443
elasticsearch.username: 'kibana_system'
elasticsearch.password: 'somepassxxxxx'
elasticsearch.ssl.certificateAuthorities: ['/etc/kibana/elasticsearch-ca.pem']
elasticsearch.ssl.verificationMode: 'certificate'
server.ssl.enabled: true
server.ssl.certificate: /etc/ssl/kibana/kibana-cert.crt
server.ssl.key: /etc/ssl/kibana/kibana-key.key
server.publicBaseUrl: https://elastic.company.xyz
xpack.encryptedSavedObjects.encryptionKey: zxy123f1318d633817xyz1234
xpack.reporting.encryptionKey: 1xfsyc4ad24176a902f2xyz123
xpack.security.encryptionKey: cskcjsn60e148a70308d39dxyz123
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid

4. logstash.j2

input {
    beats {
        port => 5044
    }

    tcp {
        port => 50000
    }

    tcp {
        port => 5000
        codec => "line"
        type => "syslog"
    }

    http {
        port => 5050
    }

    google_pubsub {
        type => "pubsub"
        project_id => "my-project-123"
        topic => "cloud_functions_logs"
        subscription => "cloud_functions_logs-sub"
###        json_key_file => "/etc/logstash/keys/logstash-sa.json"
        codec => "json"
    }

    google_pubsub {
        type => "pubsub"
        project_id => "my-project-123"
        topic => "cloud_run_logs"
        subscription => "cloud_run_logs-sub"
###        json_key_file => "/etc/logstash/keys/logstash-sa.json"
        codec => "json"
    }
}

filter {
    grok {
        match => { "message" => "^%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
    }

    date {
        match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        target => "@timestamp"
    }

    kv {
        field_split => " "
        value_split => "="
    }

    mutate {
        remove_field => [ "timestamp" ]
        convert => { "pid" => "integer" }
    }
}

### Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => ["https://10.x.x.x:9200","https://10.x.x.x:9200","https://10.x.x.x:9200","https://10.x.x.x:9200"]
        user => "logstash_writer"
        password => "mypassxyz"
        index => "logs-my-index-%{+yyyy.MM.dd}"
        action => "create"
        ssl => true
        cacert => '/usr/share/logstash/pipeline/elasticsearch-ca.pem'
    }
}

A few points to note about the above Logstash configuration:

  • In the Logstash configuration above, we use various filters, such as grok, date, kv, and mutate, to match and modify incoming logs. Adjust them according to your needs.

  • In both kibana.j2 and logstash.j2, for "elasticsearch.hosts", you can specify all data nodes as a list, allowing requests to be distributed across them in round-robin fashion. Alternatively, configure an internal load balancer with the data nodes as the backend and provide just the load balancer's IP.

  • Ensure that the index and the logstash_writer user are created via the Kibana console (a sample role and user creation request follows this list). Additionally, configure the necessary indices to ingest data from other sources like Filebeat and assign proper permissions to the respective users.

  • Data can be ingested into Elasticsearch via Logstash, allowing for any necessary filtering, or it can be sent directly to data nodes using agents like Filebeat.

  • If you are storing any of the above .j2 Jinja files in a Git repository and they contain sensitive information, encrypt them using ansible-vault. Refer to the Ansible documentation to learn more about ansible-vault.
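
As a sketch of that user setup, a writer role and the logstash_writer user can be created from Kibana's Dev Tools console with requests like the ones below; the role name, index pattern, and privileges are assumptions you should tailor to your own indices:

POST /_security/role/logstash_writer_role
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["create", "create_index", "write", "auto_configure"]
    }
  ]
}

POST /_security/user/logstash_writer
{
  "password": "mypassxyz",
  "roles": ["logstash_writer_role"]
}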

Here is the Filebeat configuration if you want to ship logs directly from Docker applications. You can also use it to ship logs from any other applications.

filebeat.conf

logging.json: true
logging.level: info
logging.metrics.enabled: false

setup.kibana.host: ${KIBANA_HOST}
setup.ilm.enabled: true

output.elasticsearch:
  hosts: ${ELASTIC_HOST}
  indices:
    - index: "audit-%{+yyyy.MM.dd}"
      when.has_fields: ["_audit"]
    - index: "logs-%{+yyyy.MM.dd}"
      when.has_fields: ["app", "env"]
    - index: "invalid-stream-%{+yyyy.MM.dd}"
      when.has_fields: ["error.data", "error.message"]

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log

processors:
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true


Once ELK is set up, you can configure data backups, called snapshots, to the 'elastic-backup' GCS bucket via the Kibana console.
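
For reference, the bucket can also be registered as a snapshot repository from Kibana's Dev Tools console with a request like the one below (the repository name is illustrative; depending on your Elasticsearch version, you may additionally need to add a GCS service account key to the Elasticsearch keystore, even though the VM service account already has objectAdmin access to the bucket through the earlier Terraform IAM binding):

PUT _snapshot/elastic-backup
{
  "type": "gcs",
  "settings": {
    "bucket": "prod-elastic-backup"
  }
}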

Conclusion

Figure 2

With data being ingested from various sources, such as Filebeat, into the Elasticsearch cluster, you can access Kibana's UI to search logs (Figure 2), create visualizations, monitor logs, and set up alerts effectively.

By installing and configuring the open-source ELK stack, you can significantly reduce licensing costs while paying only for the GCP infrastructure you use. Terraform and Ansible automation help you get up and running quickly and allow you to scale easily with minimal effort.

Good luck! Feel free to connect with me on LinkedIn.