
How to Import Existing Resources In Terraform for HIPAA Compliance

Reading time: 11 min

Why would you want to import existing cloud resources into Terraform? Well, there are many reasons.

I once joined as a Lead DevOps Engineer on a healthcare project developed outside the US. Since 2018, multiple teams had supported the project, fueled by coffee and bad decisions. Eventually, the founders sold the startup to a new owner and drove their Tesla Roadsters off into the sunset.

Suddenly, our top priority became entering the US market. This is where HIPAA certification comes into play, together with infrastructure as code and yours truly. 

In this article, I’ll show how to generate Terraform from existing resources, make your infrastructure idempotent, and automate its deployment. 

I’ll also explain how to make sure that any future changes don’t come with a billion-dollar AWS bill.

So, let’s first…

Create a high-level Terraform import plan

There are many reasons to import existing resources into Terraform. It can help your product complete HIPAA certification, win a new round of funding, acquire enterprise-level customers, or implement new features with less effort. 

When I started working on the project, it had some serious issues with quality and security. HIPAA compliance was a distant dream. At that point, importing several hundred AWS resources into Terraform seemed like too much work. However, migrating to infrastructure as code standardizes all environments, reduces the cost of future changes, and improves scalability.

Terraform import of existing resources was a logical decision. 

However, we had next to no documentation available. The only thing that existed was a Production environment deployed in an AWS account. The team had to analyze and rework legacy undocumented code without downtime.

Let’s start with a high-level plan to estimate the Terraform import timeline and any extra QA/Dev resources needed:

  1. Check access to systems, accounts, repositories, etc.
  2. Create a list of resources involved in the work of the application.
  3. Build a diagram of your current architecture and its ideal state, if necessary.
  4. Consider possible options for infrastructure import:
    1. Use utilities like TerraCognita.
    2. Describe and import infrastructure using the Terraform import command.
    3. Describe and import infrastructure using the Terraform import block.
  5. Evaluate the result of importing the current structure in Terraform and deploying it to a separate account – a side-by-side inspection.
  6. Prepare for changes and their implementation in Production:
    1. Full migration to a new infrastructure.
    2. Migration within the current infrastructure.

The process is almost the same for Microsoft Azure and GCP. It doesn’t matter if you have manual deployments, CloudOps, developer scripts/frameworks, or even CloudFormation. All that’s required is full access to your cloud account — or, at least, read-only permission for both the GUI console and the API/CLI. 
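Before committing to the import, it's worth verifying step 1 of the plan with a couple of standard read-only AWS CLI calls (a quick sketch; any describe-level permission is enough):

```
# Confirm which account and principal your credentials resolve to
aws sts get-caller-identity

# A read-only describe call verifies that API/CLI access is sufficient
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'
```

If both commands succeed against the right account, you have the access needed for the inventory and import steps below.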

There are three options to choose from when you want to import existing AWS resources into Terraform. Each has advantages and drawbacks. 

Option #1: tools like TerraCognita

TerraCognita works with all major cloud providers, including AWS, GCP, and Azure (via the AzureRM provider).

It allows you to automatically create tfstate and tf files for most typical resources. For example, here’s the command for importing an AWS EC2 instance from our test environment.

terracognita aws \
  --aws-access-key "${AWS_ACCESS_KEY_ID}" \
  --aws-secret-access-key "${AWS_SECRET_ACCESS_KEY}" \
  --aws-default-region "${AWS_DEFAULT_REGION}" \
  --tfstate "test.tfstate" \
  --hcl "test.tf" \
  --include aws_instance

And this is an example of a test.tf file generated by TerraCognita:

provider "aws" {}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 1.0"
}

resource "aws_instance" "test" {
  tags = {
    "Name" = "test-instance"
  }
  tags_all = {
    "Name" = "test-instance"
  }
  ami                         = "ami-058b428b3b45defec"
  associate_public_ip_address = true
  availability_zone           = "us-east-1a"
  capacity_reservation_specification {
    capacity_reservation_preference = "open"
  }

  iam_instance_profile                 = "test-instance-profile"
  instance_initiated_shutdown_behavior = "stop"
  instance_type                        = "t4g.small"
  key_name                             = "test-instance"
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
    instance_metadata_tags      = "disabled"
  }
  private_ip = "10.0.4.67"
  root_block_device {
    delete_on_termination = true
    encrypted             = true
    iops                  = 3000
    kms_key_id            = "arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-567890000001"
    throughput            = 125
    volume_size           = 32
    volume_type           = "gp3"
  }

  source_dest_check      = true
  subnet_id              = "subnet-01234567878133141"
  tenancy                = "default"
  user_data              = ""
  vpc_security_group_ids = ["sg-01111112112333355"]
}

The command above generates the Terraform state and code for the resource. The next step is to substitute hardcoded values with expressions and refactor the resource to make it easier to use and maintain. That’s all.
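For example, a generated hardcoded subnet ID can be swapped for a data source lookup so the code works across environments. A minimal sketch, assuming the subnet carries a hypothetical Name tag:

```hcl
# Look up the subnet by tag instead of hardcoding its ID
data "aws_subnet" "test" {
  filter {
    name   = "tag:Name"
    values = ["test-subnet"] # hypothetical tag value
  }
}

resource "aws_instance" "test" {
  # ...other generated arguments stay as-is...
  subnet_id = data.aws_subnet.test.id
}
```

After a change like this, terraform plan should still show zero differences, which confirms the refactoring was purely cosmetic.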

TerraCognita is a neat option if you’re willing to work with third-party tools. However, it lacked support for some newer AWS features we needed. 

TerraCognita pros

➕Automatic generation of Terraform files.

➕Ability to filter resources you want to import.

➕Support for all three major cloud providers.

TerraCognita cons

➖Limited number of resource types available for import.

➖Difficulty in editing TF files with a complex infrastructure when TerraCognita substitutes values with randomly named constants. 

➖Complex or overly wordy results are possible.

➖May lack the latest AWS features or Terraform features.


Option #2: Terraform Import command

This approach requires you to describe the resource in code. Here’s a Terraform import example:

resource "aws_instance" "test_import" {
  ami           = "ami-058b428b3b45defec"
  instance_type = "t4g.small"
  tags = {}
}

Next, let’s take a look at the aws_instance resource documentation.

The “Import” section states that successfully importing a resource requires an AWS EC2 instance ID. Executing terraform import aws_instance.test_import i-011122233344455aa provides the following output:

aws_instance.test_import: Importing from ID "i-011122233344455aa"...
aws_instance.test_import: Import prepared!
  Prepared aws_instance for import
aws_instance.test_import: Refreshing state... [id=i-011122233344455aa]
Import successful!

The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.

After this, running terraform plan shows a diff that you need to add to the Terraform code manually.

 ~ resource "aws_instance" "test_import" {
        id                                   = "i-011122233344455aa"
      ~ tags                                 = {
          - "Name"            = "test-instance" -> null
        }
      ~ tags_all                             = {
          - "Name"            = "test-instance" -> null
        }
      + user_data_replace_on_change          = false
        # (31 unchanged attributes hidden)

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

So, you need to alter the Terraform code as follows:

resource "aws_instance" "test_import" {
  ami           = "ami-058b428b3b45defec"
  instance_type = "t4g.small"
  tags = {
    "Name" = "test-instance"
  }
  tags_all = {
    "Name" = "test-instance"
  }
}

Now running the terraform plan results in zero differences.

Plan: 0 to add, 0 to change, 0 to destroy.

The “Terraform Import” command is the best choice if you only have a couple of cloud resources. Our team, on the other hand, had hundreds of resources. 

Terraform import command pros

➕Requires minimum effort to import resources.

➕Works with any resource type.

➕Best for importing a few resources.

Terraform import command cons

➖Requires manual insertion of the resource definition into a .tf file after the import.

➖Needs a separate run for each resource. Cumbersome for a large infrastructure.
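One way to soften the “separate run for each resource” drawback is to script the runs from a simple inventory. A minimal sketch, assuming a hypothetical two-column file that maps Terraform addresses to cloud IDs (both IDs below are made up):

```shell
# inventory.txt maps Terraform resource addresses to cloud resource IDs
cat > inventory.txt <<'EOF'
aws_instance.web i-011122233344455aa
aws_instance.worker i-022233344455566bb
EOF

# Emit one `terraform import` invocation per line; review the file,
# then execute it against your real state
while read -r address id; do
  echo "terraform import ${address} ${id}"
done < inventory.txt > import-commands.sh

cat import-commands.sh
```

You still need to write the resource blocks by hand, but at least the repetitive CLI runs are generated and reviewable in one place.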


Option #3: Terraform Import Block

This is a new option that works with Terraform v1.5.0 and later. How do you use the Terraform import block? For our test AWS EC2 instance resource, write the following code block:

import {
  to = aws_instance.test_import
  id = "i-011122233344455aa"
}

resource "aws_instance" "test_import" {
  ami           = "ami-058b428b3b45defec"
  instance_type = "t4g.small"
}

Let’s run terraform plan. This is the output you should get:

aws_instance.test_import: Preparing import... [id=i-011122233344455aa]
aws_instance.test_import: Refreshing state... [id=i-011122233344455aa]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_instance.test_import will be updated in-place
  # (imported from "i-011122233344455aa")
  ~ resource "aws_instance" "test_import" {
        ami                                  = "ami-058b428b3b45defec"
        arn                                  = "arn:aws:ec2:us-east-1:111111111111:instance/i-011122233344455aa"
        associate_public_ip_address          = true
        availability_zone                    = "us-east-1a"
        cpu_core_count                       = 2
        cpu_threads_per_core                 = 1
        disable_api_stop                     = false
        disable_api_termination              = false
        ebs_optimized                        = false
        get_password_data                    = false
        hibernation                          = false
        iam_instance_profile                 = "test-instance-instance-profile"
        id                                   = "i-011122233344455aa"
        instance_initiated_shutdown_behavior = "stop"
        instance_state                       = "running"
        instance_type                        = "t4g.small"
        ipv6_address_count                   = 0
        ipv6_addresses                       = []
        key_name                             = "test-instance"
        monitoring                           = false
        placement_partition_number           = 0
        primary_network_interface_id         = "eni-00edfc7fa5ae0cd96"
        private_dns                          = "ip-10-0-4-67.ec2.internal"
        private_ip                           = "10.0.4.67"
        public_dns                           = "ec2-44-205-173-9.compute-1.amazonaws.com"
        public_ip                            = "44.205.173.9"
        secondary_private_ips                = []
        security_groups                      = []
        source_dest_check                    = true
        subnet_id                            = "subnet-055b6c23e78c33e4f"
      ~ tags                                 = {
          - "Name"            = "test-instance" -> null
        }
      ~ tags_all                             = {
          - "Name"            = "test-instance" -> null
        }
        tenancy                              = "default"

      + user_data_replace_on_change          = false
        vpc_security_group_ids               = [
            "sg-01111112112333355",
        ]

        capacity_reservation_specification {
            capacity_reservation_preference = "open"
        }

        cpu_options {
            core_count       = 2
            threads_per_core = 1
        }

        credit_specification {
            cpu_credits = "unlimited"
        }

        enclave_options {
            enabled = false
        }

        maintenance_options {
            auto_recovery = "default"
        }

        metadata_options {
            http_endpoint               = "enabled"
            http_protocol_ipv6          = "disabled"
            http_put_response_hop_limit = 1
            http_tokens                 = "required"
            instance_metadata_tags      = "disabled"
        }

        private_dns_name_options {
            enable_resource_name_dns_a_record    = false
            enable_resource_name_dns_aaaa_record = false
            hostname_type                        = "ip-name"
        }

        root_block_device {
            delete_on_termination = true
            device_name           = "/dev/sda1"
            encrypted             = true
            iops                  = 3000
            kms_key_id            = "arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-567890000001"
            tags                  = {}
            tags_all              = {}
            throughput            = 125
            volume_id             = "vol-11111111111111111"
            volume_size           = 32
            volume_type           = "gp3"
        }
    }

Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.

The next step is to make changes to the code, as shown by Terraform: 

import {
  to = aws_instance.test_import
  id = "i-011122233344455aa"
}

resource "aws_instance" "test_import" {
  ami           = "ami-058b428b3b45defec"
  instance_type = "t4g.small"
  tags = {
    "Name" = "test-instance"
  }
  tags_all = {
    "Name" = "test-instance"
  }
}

Now, run terraform apply to get the following output:

aws_instance.test_import: Importing... [id=i-011122233344455aa]
aws_instance.test_import: Import complete [id=i-011122233344455aa]

Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.

Well done! Now, you can manage this instance directly from Terraform.
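If you would rather not write the resource arguments by hand at all, Terraform v1.5+ can also generate the configuration for import blocks that have no matching resource yet. The feature started out as experimental, so review the generated file before committing it:

```
terraform plan -generate-config-out=generated.tf
```

Terraform writes a resource block into generated.tf for every unresolved import block; you can then refactor the generated code into your modules.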

Terraform import block pros

➕Defines import procedures in the Terraform configuration itself, making them part of your version-controlled infrastructure code.

➕Supports bulk imports. Makes it easier to work with multiple resources with the help of modules or scripts.

➕Best for importing numerous similar resources with repeatable code blocks.
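To illustrate the bulk-import point: newer Terraform releases (v1.7+) let you expand import blocks with for_each, so one block covers a whole map of resources. A sketch with hypothetical instance IDs:

```hcl
locals {
  # Hypothetical map of instance names to their AWS IDs
  instances = {
    web    = "i-011122233344455aa"
    worker = "i-022233344455566bb"
  }
}

import {
  for_each = local.instances
  to       = aws_instance.imported[each.key]
  id       = each.value
}

resource "aws_instance" "imported" {
  for_each      = local.instances
  ami           = "ami-058b428b3b45defec"
  instance_type = "t4g.small"
}
```

A single terraform apply then imports every entry in the map.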

Terraform import block cons

➖The syntax may be less intuitive, especially for Terraform beginners.

➖Older Terraform versions don’t support the import block at all, and some releases restrict the features available in it (for example, for_each expansion requires v1.7+).


Evaluate the Terraform import results

Now that you know how to import AWS resources into Terraform, let’s evaluate the results.

Create an exact copy of your infrastructure in a separate account. Let’s name it the “Test environment” and check its security. You can then apply a HIPAA compliance configuration using AWS Config and its HIPAA conformance pack.

AWS Security Hub is another helpful service. You can use it to apply the AWS Foundational Security Best Practices v1.0.0 (FSBP) standard or the CIS AWS Foundations Benchmark to improve security. Console utilities like tfsec are also useful for checking your code for security issues locally.

The next step is to fix all errors and issues in the Test environment by editing Terraform code.

To help you in this process, a colleague has recently written an in-depth guide to HIPAA compliance for startups. It includes the initial audit instructions, recommendations for basic fixes like S3 event notifications and SNS topic encryption, as well as advanced manipulations with S3 buckets, backups and logs, KMS keys, and security groups.

After fixing all HIPAA compliance issues, retest everything to make sure it works as needed. You can then automatically apply the changes to the Production environment. The security score will be identical for both environments.

Test any changes on the Test environment first as the product evolves. After passing the tfsec, AWS Config, and AWS Security Hub assessments, apply the changes to Production.

Terraform allows you to complete all the necessary steps in just a few minutes. It also shows exactly what was changed and where to find it. 

Prevent your costs from spinning out of control

With Terraform code, you can spin up as many environments as needed and destroy them just as easily. This is handy for reducing cloud costs.

For example, you can destroy the test environment after applying the HIPAA compliance changes and making sure everything works correctly. Need to make any new changes? Just spin up another environment and remove it once you’re done. Got a pause in development? Delete your dev and stage environments to save costs.

You can also update CI/CD pipelines to dynamically create test automation (TA) environments that are deleted after testing. With manual QA, you can create environments just before a release and remove them after go-live.
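A hedged sketch of what such a pipeline step could look like, using Terraform workspaces (the pipeline variable and tfvars file names are hypothetical):

```
# Hypothetical CI step: a throwaway environment per pipeline run
terraform workspace new "test-${CI_PIPELINE_ID}"
terraform apply -auto-approve -var-file="test.tfvars"

# ...run the automated test suite against this environment...

# Tear it down to stop the spend, then clean up the workspace
terraform destroy -auto-approve -var-file="test.tfvars"
terraform workspace select default
terraform workspace delete "test-${CI_PIPELINE_ID}"
```

Because the whole environment comes from code, the destroy step is safe to automate: the next run recreates an identical copy.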

This is just one of the many ways to reduce your AWS bill we discussed in our cloud cost optimization guide. So, check it out if that’s a priority for your project. 


Conclusions

Importing existing infrastructure to Terraform might seem daunting and time-consuming. However, it allows you to easily manage the infrastructure, make updates, and comply with HIPAA and other regulations at a lower cost. 

There are three options for importing existing AWS resources into Terraform. Third-party tools like TerraCognita can help you automatically generate TF files and filter the imported resources. However, such tools make it difficult to edit the files and don’t support some resource types.  

The Terraform import command, on the other hand, doesn’t have such limitations. It is great for importing a couple of resources with minimum effort. As your infrastructure grows in size, it becomes less practical. It requires a separate run for each resource.

The import block in Terraform is the newest option, working with Terraform v1.5.0 and onwards. The block allows you to make the import procedure part of your infrastructure code. It supports mass import and works great with larger infrastructures. 

However, the method is less intuitive for new users. So, if you have any questions on how to import existing resources in Terraform, need help with HIPAA compliance or cloud infrastructure, don’t hesitate to contact us.
