Terraform Integration Guide (AWS)


This document walks you through integrating Lucidity's block storage AutoScaler with your existing Terraform-managed AWS infrastructure.

Prerequisites

Before beginning the integration process, ensure you have the following:

  • Necessary permissions to modify Terraform files and execute configurations.

Integration Overview

Integrating Lucidity with your Terraform setup lets you manage block storage more efficiently, with features such as automated scaling based on disk utilization and enhanced monitoring. This document covers multiple scenarios, including management of EBS volumes both attached inline to EC2 instances and defined as separate resources.

Scenario 1: Integrating Lucidity with EBS Volumes Attached to EC2 Instances

Overview

In this scenario, we cover an AWS EC2 instance with EBS volumes attached inline via Terraform, and how this setup is modified once Lucidity is integrated to manage the volumes.

Before Lucidity Integration

Initially, the EC2 instance is configured with EBS volumes, with the basic setup ensuring volumes are attached directly to the instance and tagged accordingly. The following Terraform configuration demonstrates this setup:

resource "aws_instance" "example" {
 ami           = "ami-0cdad8f13c46c8fe6"
 instance_type = "t3.micro"
 key_name      = "example-key"
 ebs_block_device {
   device_name           = "/dev/sdh"
   volume_size           = 50
   delete_on_termination = true
 }
 tags = {
   Name        = "MyExampleInstance"
   Environment = "Production"
 }
}

In this configuration, any change to the instance or its volumes would be directly managed by Terraform, including updates to tags and volume settings.

After Lucidity Integration

After integrating Lucidity, the following modifications to the Terraform configuration delegate management of specific attributes, such as tags and EBS volume lifecycle changes, to Lucidity. This ensures that Lucidity's automated processes can manage these aspects without Terraform attempting to revert them. Here is how the configuration changes:

resource "aws_instance" "example" {
 ami           = "ami-0cdad8f13c46c8fe6"
 instance_type = "t3.micro"
 key_name      = "example-key"
 ebs_block_device {
   device_name           = "/dev/sdh"
   volume_size           = 50
   delete_on_termination = true
 }
 tags = {
   Name        = "MyExampleInstance"
   Environment = "Production"
 }
 lifecycle {
   ignore_changes = [
     ebs_block_device,  # Lucidity manages volume lifecycle
     tags,              # Tags are managed outside of Terraform
   ]
 }
}
provider "aws" {
 region = "us-west-2"
 ignore_tags {
   keys = ["ManagedByLucidity"]
 }
}

Key Changes:

  • Lifecycle Management: The ignore_changes attribute is added to the lifecycle block of the EC2 resource. This tells Terraform to ignore any changes to the EBS volumes and tags, which are managed by Lucidity.

  • Provider Configuration: The AWS provider configuration is enhanced with ignore_tags. This directs Terraform not to manage tags that are specified by Lucidity, preventing conflicts between manual changes and Terraform's state management.
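If Lucidity applies several tags that share a common prefix, the provider's ignore_tags block also accepts key_prefixes, which can be combined with, or used instead of, exact keys. A sketch, assuming a hypothetical lucidity: prefix (adjust to the tag keys Lucidity actually applies):

```hcl
provider "aws" {
  region = "us-west-2"
  ignore_tags {
    keys         = ["ManagedByLucidity"]
    key_prefixes = ["lucidity:"]  # assumed prefix, not confirmed by Lucidity's docs
  }
}
```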

Scenario 2: Integrating Lucidity with Externally Attached EBS Volumes

Overview

In this scenario, we'll cover how to configure external EBS volumes (not directly attached upon instance creation) and the necessary modifications to manage these volumes effectively with Lucidity after integration. Managing volumes externally allows more flexibility in storage management and can help with configurations where volumes may need to be detached or reattached without affecting the instance lifecycle.

Before Lucidity Integration

Initially, EBS volumes are defined as separate resources and attached to EC2 instances through attachment specifications in Terraform. Here's how you might define this in your Terraform script before integrating with Lucidity:

resource "aws_ebs_volume" "example_volume_0" {
 availability_zone = "us-west-2a"
 size              = 50
 type              = "gp2"
}
resource "aws_volume_attachment" "example_attach_0" {
 device_name = "/dev/sdh"
 volume_id   = aws_ebs_volume.example_volume_0.id
 instance_id = aws_instance.example.id
}
resource "aws_ebs_volume" "example_volume_1" {
 availability_zone = "us-west-2a"
 size              = 50
 type              = "gp2"
}
resource "aws_volume_attachment" "example_attach_1" {
 device_name = "/dev/sdi"
 volume_id   = aws_ebs_volume.example_volume_1.id
 instance_id = aws_instance.example.id
}
resource "aws_instance" "example" {
 ami           = "ami-0cdad8f13c46c8fe6"
 instance_type = "t3.micro"
}
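Defining each volume and its attachment as a separate resource duplicates configuration. The same setup can be sketched more compactly with for_each; the device names and local value name below are illustrative assumptions:

```hcl
locals {
  data_disks = {
    "/dev/sdh" = { size = 50, type = "gp2" }
    "/dev/sdi" = { size = 50, type = "gp2" }
  }
}

resource "aws_ebs_volume" "data" {
  for_each          = local.data_disks
  availability_zone = "us-west-2a"
  size              = each.value.size
  type              = each.value.type
}

resource "aws_volume_attachment" "data" {
  for_each    = local.data_disks
  device_name = each.key  # device name doubles as the map key
  volume_id   = aws_ebs_volume.data[each.key].id
  instance_id = aws_instance.example.id
}
```

This also makes it easier to drop a single disk from Terraform management later: removing one map entry removes exactly one volume and its attachment.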

After Lucidity Integration

After Lucidity is integrated, the following modifications ensure that changes to tags and volume_tags are ignored by Terraform. This prevents Terraform from attempting to manage or revert these properties, allowing Lucidity to handle them:

resource "aws_ebs_volume" "example_volume_1" {
  availability_zone = "us-west-2a"
  size              = 50
  type              = "gp2"
  tags = {
    "ManagedBy" = "Lucidity"
  }
  lifecycle {
    ignore_changes = [size, type, tags]  # Lucidity manages scaling and tagging
  }
}
resource "aws_volume_attachment" "example_attach_1" {
  device_name = "/dev/sdi"
  volume_id   = aws_ebs_volume.example_volume_1.id
  instance_id = aws_instance.example.id
}
resource "aws_instance" "example" {
  ami           = "ami-0cdad8f13c46c8fe6"
  instance_type = "t3.micro"
  tags = {
    "ManagedBy" = "Lucidity"
  }
  lifecycle {
    ignore_changes = [tags, volume_tags]  # Tags applied by Lucidity stay untouched
  }
}

Key Changes:

  • EBS Volume Resource: A tags property labels the volume as managed by Lucidity, and a lifecycle block ignores changes to size, type, and tags so that Terraform does not revert Lucidity's scaling actions.

  • EC2 Instance Resource: A lifecycle block ignores changes to tags and volume_tags. This ensures that tag management carried out externally by Lucidity is not interfered with by Terraform.

  • Removed Resources: example_volume_0 and its attachment no longer appear in the configuration; they are now managed entirely by Lucidity and must also be removed from Terraform state (see State Management below).

Key Operations After Lucidity Integration for Scenario 2:

  1. Adding the ignore_changes Block:

    • Users must manually add the ignore_changes directive to the Terraform configuration for both the EBS volume and its attachment. This prevents Terraform from attempting to manage or revert tags and volume attributes that are handled by Lucidity or other external processes.

  2. State Management:

  • Users need to manually remove the EBS volume and volume attachment blocks from the Terraform configuration files when these resources are no longer managed by Terraform (perhaps because they are now managed by Lucidity).

  • Users must also remove the corresponding state file data using Terraform commands:

    terraform state rm aws_volume_attachment.example_attach_0

    terraform state rm aws_ebs_volume.example_volume_0

  • As an alternative to manually editing the state file and Terraform configuration, users can use the following command to refresh the state based on the actual infrastructure, effectively accepting any changes made outside of Terraform:

    terraform apply -refresh-only -auto-approve
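On Terraform v1.7 or later, a removed block is a declarative alternative to terraform state rm: it drops the resources from state on the next apply without destroying the underlying infrastructure. A sketch for the first volume above:

```hcl
removed {
  from = aws_volume_attachment.example_attach_0
  lifecycle {
    destroy = false
  }
}

removed {
  from = aws_ebs_volume.example_volume_0
  lifecycle {
    destroy = false  # keep the volume in AWS; only forget it in Terraform state
  }
}
```

Unlike manual state surgery, this approach is reviewable in version control and applies atomically with the rest of the plan.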

Benefits Post-Integration

  • No Issues with Auto Scaling: Once these steps are completed, users should not encounter any issues when Lucidity’s AutoScaler performs scaling actions such as expanding or shrinking disks. The ignore_changes block will prevent Terraform from interfering with these dynamic changes.

Scenario 3: Managing EBS Volumes with Modules and Variable Files

Overview

In this scenario, Terraform modules, together with variable files, are used to create and manage EBS volumes separately from EC2 instances. When Lucidity is integrated, it takes over some management aspects, and the configuration must be adjusted to accommodate this change.

Before Lucidity Integration

Initially, development teams use the modules by setting the required variables in terraform.tfvars files. An example of this setup might look like:

module "ebs_volumes" {
 source        = "./modules/ebs_volumes"
 volume_config = var.volume_details
}
# In terraform.tfvars file
volume_details = {
 "disk0" = {
   size = 8
   type = "gp2"
 },
 "disk1" = {
   size = 16
   type = "gp2"
 }
}
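The ./modules/ebs_volumes module referenced above is assumed to declare its inputs roughly as follows; this is a sketch of the expected shape, and the actual module may differ:

```hcl
# modules/ebs_volumes/variables.tf (assumed interface)
variable "volume_config" {
  description = "Map of EBS volumes to create, keyed by a logical disk name"
  type = map(object({
    size = number
    type = string
  }))
}

variable "availability_zone" {
  type    = string
  default = "us-west-2a"  # assumed default so callers need not pass it
}
```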

After Lucidity Integration

After integrating with Lucidity, adjust the module and update the variable files to reflect the changes:

module "ebs_volumes" {
 source        = "./modules/ebs_volumes"
 volume_config = var.volume_details
}
# Updated terraform.tfvars file
volume_details = {
 "disk0" = {
   size = 8      # disk1 has been removed; it is now managed by Lucidity
   type = "gp2"
 }
}
# Assuming the ebs_volumes module defines resources like this:
resource "aws_ebs_volume" "example" {
 for_each          = var.volume_config
 size              = each.value.size
 type              = each.value.type
 availability_zone = var.availability_zone
 tags = {
   "ManagedBy" = "Lucidity"
 }
 lifecycle {
   ignore_changes = [size, tags]  # Lucidity manages volume size and tags
 }
}

Key Changes

  • Variable File Adjustments: The terraform.tfvars file is modified to reflect only the resources that need to be managed by Terraform, aligning with Lucidity's scope of management.

  • Module Adaptation: Within the module, the lifecycle block is added to the EBS volume resource definitions to ignore changes in size and tags, as these might now be managed by Lucidity.

  • State Management: As Lucidity manages certain aspects of the volumes, Terraform's state file may need to be updated accordingly to reflect the current infrastructure accurately.

Key Operations After Lucidity Integration

Once Lucidity is integrated:

  1. Modify the Variable File:

    • Update terraform.tfvars to match the resources and parameters managed by Terraform.

  2. Update the Module Configuration:

    • If needed, add or modify lifecycle blocks within your modules to prevent Terraform from attempting to manage aspects now handled by Lucidity.

  3. State Reconciliation:

    • Use terraform apply -refresh-only -auto-approve to update the Terraform state if Lucidity has made changes to the resources.

Scenario 4: Creating and Onboarding New Partitions in AWS with Lucidity

Overview

This scenario explains how AWS users can automate the creation of new EBS volumes and directly manage them through Lucidity. By modifying the existing Terraform script to include parameters for the partition name, instance ID, disk type, and other AWS-specific settings such as KMS keys, users can leverage an API call to Lucidity's dashboard to automatically manage these new volumes.

Terraform Script Configuration

Users will adjust their Terraform scripts to facilitate the automated creation and onboarding of new partitions to be managed by Lucidity. This involves making an API call that triggers the Lucidity dashboard backend to handle these operations.

Here is a detailed Terraform script adapted for AWS:

resource "null_resource" "create_new_mount_instance" {
  provisioner "local-exec" {
    command = <<-EOT
      #!/bin/bash
      uri="http://<dashboardurl>/api/v1/partition/create"
      headers=(
          -H "Authorization: secretkey"
          -H "X-Authtype: auth_key"
          -H "X-Tenants: <tenantId>"
          -H "X-Tenant: <tenantId>"
          -H "accept: */*"
          -H "Content-Type: application/json"
          -H "access-id: accesskey"
      )
      body=$(jq -n \
          --arg diskType "<diskType>" \
          --arg instance "<instanceid>" \
          --arg partition "J" \
          --arg tenant "<tenantId>" \
          --arg awsKmsKeyId "<awsKmsKeyId>" \
          '{
              diskType: $diskType,
              instance: $instance,
              partition: $partition,
              tenant: $tenant,
              awsKmsKeyId: $awsKmsKeyId
          }')
      # Shell expansions with braces are escaped as $$ so Terraform
      # does not treat them as its own interpolations.
      curl -X POST "$${headers[@]}" -d "$body" "$uri"
    EOT
  }

  triggers = {
    always_run = timestamp()  # re-run the provisioner on every apply
  }
}

Parameters and Options:

  • diskType: Specify the disk type such as gp3, io1, sc1, based on performance needs.

  • awsKmsKeyId: For securing data at rest, specify the AWS KMS key ID if encryption is required. If left empty, it can default to a predefined setting.

  • instance, tenant, and partition: Essential fields to define the specific resources being managed.
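Rather than hard-coding the placeholder values, the script's inputs can be passed in as Terraform variables and interpolated into the local-exec command. A sketch; the variable names below are assumptions for illustration, not part of Lucidity's API:

```hcl
variable "lucidity_dashboard_url" {
  type = string
}

variable "disk_type" {
  type    = string
  default = "gp3"
}

variable "aws_kms_key_id" {
  type    = string
  default = ""  # empty falls back to the dashboard's predefined setting
}
```

Inside the heredoc, reference these as ${var.disk_type} and so on; any literal shell expansion such as ${headers[@]} must be escaped as $${headers[@]} so Terraform does not treat it as an interpolation.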

Integration Process:

Running the Terraform script initiates an API call to Lucidity’s backend, automatically creating the new partition within the designated disk pool and immediately placing it under Lucidity's management system. This integration allows for automated management functions like capacity scaling based on actual usage.