Managing Elastic Cloud deployments with Terraform streamlines infrastructure management, brings your Elasticsearch stack under Infrastructure as Code (IaC), and ensures reproducibility. In this article, we’ll walk through how to:
- Set up your Elastic Cloud deployment using Terraform
- Generate and manage API keys securely
- Add advanced scaling options
Prerequisites
Before you begin:
- An Elastic Cloud account (elastic.co)
- Terraform installed (terraform.io/downloads)
- An Elastic API key (you can generate one from your Elastic Cloud console)
Project Structure
We’ll organize our Terraform configuration into multiple files for better readability:
terraform/
├── main.tf
├── state.tf
├── provider.tf
├── variables.tf
├── terraform.tfvars
variables.tf
Define the sensitive input variables used for authentication.
variable "elastic_api_key" {
  description = "API key for Elastic Cloud"
  type        = string
  sensitive   = true
}

variable "elasticsearch_password" {
  description = "Password for the elastic user"
  type        = string
  sensitive   = true
}
main.tf
Generate an API key using the elasticstack provider.
resource "elasticstack_elasticsearch_security_api_key" "api_key" {
  name = "terraform-api-key"
  role_descriptors = jsonencode({
    "custom-role" = {
      cluster = ["all"]
      index = [{
        names      = ["*"]
        privileges = ["all"]
      }]
    }
  })
}
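If other tooling needs the generated key, you can expose it as a sensitive output. This is a minimal sketch; the `encoded` attribute name is an assumption based on recent versions of the elasticstack provider, so verify it against the version you pin:

```hcl
# Expose the generated API key (base64-encoded "id:api_key" pair).
# "encoded" is assumed to be exported by this resource; check the
# elasticstack provider docs for your version.
output "terraform_api_key" {
  value     = elasticstack_elasticsearch_security_api_key.api_key.encoded
  sensitive = true
}
```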
state.tf
This defines the actual Elastic Cloud deployment.
resource "ec_deployment" "elasticsearch" {
  name                   = "rahulranjan"
  region                 = "gcp-us-central1"
  version                = "8.17.3"
  deployment_template_id = "gcp-storage-optimized"

  elasticsearch = {
    hot = {
      size        = "4g"
      zone_count  = 2
      autoscaling = {}
    }
  }

  kibana = {
    size       = "1g"
    zone_count = 1
  }
}
- deployment_template_id: Choose based on your use case. “gcp-storage-optimized” is a good fit for hot-warm tier setups.
- hot: Defines the topology of Elasticsearch’s hot tier.
- kibana: Automatically links Kibana to the deployment.
➕ Advanced Scaling Example
Add a warm node tier and a machine learning (ML) node:
elasticsearch = {
  hot = {
    size        = "4g"
    zone_count  = 2
    autoscaling = {}
  }
  warm = {
    size        = "2g"
    zone_count  = 1
    autoscaling = {}
  }
  ml = {
    size        = "1g"
    zone_count  = 1
    autoscaling = {}
  }
}
- Warm Node: Used for less frequently accessed data, helps in cost optimization.
- ML Node: Required for anomaly detection, machine learning jobs, and advanced monitoring.
- Autoscaling: Allows Elastic to scale resources dynamically based on usage patterns.
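The empty `autoscaling = {}` block enables the deployment template’s defaults. As a hedged sketch, recent versions of the ec provider also let you cap how far a tier can grow; the `max_size` attribute below is an assumption to verify against your provider version:

```hcl
# Hot tier with an explicit autoscaling ceiling. "max_size" is
# assumed from the ec provider's ec_deployment schema; Elastic
# will not grow the tier beyond this size.
hot = {
  size       = "4g"
  zone_count = 2
  autoscaling = {
    max_size = "8g"
  }
}
```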
provider.tf
Configure the ec provider with API key authentication, and the elasticstack provider with the elastic user’s credentials.
terraform {
  required_providers {
    ec = {
      source  = "elastic/ec"
      version = "~> 0.12.2"
    }
    elasticstack = {
      source  = "elastic/elasticstack"
      version = "~> 0.11.4"
    }
  }
}

provider "ec" {
  apikey = var.elastic_api_key
}

provider "elasticstack" {
  elasticsearch {
    username  = "elastic"
    password  = var.elasticsearch_password
    endpoints = [ec_deployment.elasticsearch.elasticsearch.https_endpoint]
  }
}
terraform.tfvars
Store variable values securely (don’t commit this file).
elastic_api_key = "your_elastic_cloud_api_key"
elasticsearch_password = "your_elasticsearch_password"
Or set environment variables (avoid hardcoding secrets). Note the variable names must match the declarations in variables.tf:
export TF_VAR_elastic_api_key="your_actual_elastic_cloud_api_key"
export TF_VAR_elasticsearch_password="your_elasticsearch_password"
Commands to Run
Initialize and apply your Terraform configuration:
terraform init
terraform plan
terraform apply
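After `terraform apply`, you’ll typically want the endpoints of the new deployment. Here’s a minimal sketch of an `outputs.tf` you could add; the attribute paths follow the ec provider’s 0.12.x schema and should be checked against your pinned version:

```hcl
output "elasticsearch_endpoint" {
  value = ec_deployment.elasticsearch.elasticsearch.https_endpoint
}

output "kibana_endpoint" {
  value = ec_deployment.elasticsearch.kibana.https_endpoint
}

# Auto-generated password for the elastic superuser.
output "elastic_password" {
  value     = ec_deployment.elasticsearch.elasticsearch_password
  sensitive = true
}
```

Run `terraform output` to print the non-sensitive values once the apply completes.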
By using Terraform with Elastic Cloud:
- You gain reproducibility and automation
- API keys are securely managed
- Infra setup becomes a breeze, not a bottleneck
- Scaling is as easy as editing your Terraform files
This modular setup also makes it easier to scale or modify your deployment in the future.
Conclusion:
Infrastructure as code is a powerful paradigm. When coupled with Elastic Cloud and Terraform, it allows engineers to quickly spin up robust, scalable environments.
The project code is available on my GitHub.