Terraform refactoring - Extract module
After working with Terraform in your organization or team for a few weeks or months, you will notice best practices and conventions that you want to apply consistently to commonly used resources.
Terraform modules are the perfect tool for this: you can store them in the same repository, or download them from an S3 or GCS bucket to reuse them across multiple projects.
Creating an instance and an external IP address
Let’s start by creating some example resources that we can use to walk through the “Extract Module” process.
variable "project" {}
provider "google" {}
provider "google-beta" {}
resource "google_compute_address" "main_application" {
provider = google-beta
project = var.project
name = "main-application"
region = "europe-west1"
labels = {
"department" = "finance"
}
}
resource "google_compute_instance" "main_application" {
project = var.project
name = "main-application"
machine_type = "f1-micro"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "opensuse-cloud/opensuse-leap-15-5-v20230607-x86-64"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.main_application.address
}
}
labels = {
"department" = "finance"
}
}
I recommend that you run terraform init && terraform apply to deploy the above code in your environment, so that you can follow along with the next steps in the post.
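Since the project variable has no default, Terraform will prompt you for a value; you can also pass it on the command line. The project ID below is a placeholder for your own:

terraform init
terraform apply -var "project=my-gcp-project"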
The need for a module
As you can see in the example code, we’ve added labels to our IP address and Compute instance. Rather than have every team use a different labeling structure, we would like to encapsulate this logic in a module and require all developers to specify the “department” label whenever they create a resource.
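If you want to go one step further than merely requiring the label, the module we are about to create can also restrict the label to a fixed set of values with a validation block. A minimal sketch, with an invented list of departments for illustration:

variable "department" {
  type        = string
  description = "Department that owns the resources; applied as a label."

  validation {
    # Hypothetical allow-list; replace with your organization's departments.
    condition     = contains(["finance", "engineering", "marketing"], var.department)
    error_message = "The department must be one of: finance, engineering, marketing."
  }
}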
Creating the module
Let’s create a new directory, “brytecode-compute-instance,” and move the module’s logic into it.
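The project layout then looks like this, with the module directory sitting next to the root main.tf:

.
├── main.tf
└── brytecode-compute-instance/
    └── main.tf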
The code of brytecode-compute-instance/main.tf is shown below.
variable "project" {}
variable "department" {}
resource "google_compute_address" "address" {
provider = google-beta
project = var.project
name = "main-application"
region = "europe-west1"
labels = {
"department" = var.department
}
}
resource "google_compute_instance" "instance" {
project = var.project
name = "main-application"
machine_type = "f1-micro"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "opensuse-cloud/opensuse-leap-15-5-v20230607-x86-64"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.address.address
}
}
labels = {
"department" = var.department
}
}
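To keep the example focused, the module exposes no outputs. In a real module you would likely add at least one so that callers can reference the created resources; for example, a sketch that surfaces the external IP (the output name nat_ip is my own choice):

output "nat_ip" {
  description = "The external IP address attached to the instance."
  value       = google_compute_address.address.address
}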
In our project’s main.tf file, we can now call this module to create a Compute Engine instance and an external IP.
variable "project" {}
provider "google" {}
provider "google-beta" {}
module "main_application" {
source = "./brytecode-compute-instance"
project = var.project
department = "finance"
}
Run terraform init to install the new module, and then run terraform plan to review the changes Terraform intends to make.
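terraform init should report that it discovered the new local module, with output along these lines:

Initializing modules...
- main_application in brytecode-compute-instance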
...
Plan: 2 to add, 0 to change, 2 to destroy.
...
Terraform plans to destroy our existing resources and recreate them. Their resource addresses changed when we extracted them into a module: each address is now prefixed with the module name.
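Concretely, the state still tracks the resources under their old addresses, while the configuration now defines them under module-prefixed ones:

google_compute_address.main_application  -> module.main_application.google_compute_address.address
google_compute_instance.main_application -> module.main_application.google_compute_instance.instance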
As explained in the article on renaming Terraform resources, we need to tell Terraform that the resources already in the state file are the same as those in our code. We do this by adding moved blocks that map each old resource address to its new one. After adding the moved blocks, main.tf looks like this:
variable "project" {}
provider "google" {}
provider "google-beta" {}
module "main_application" {
source = "./brytecode-compute-instance"
project = var.project
department = "finance"
}
moved {
from = google_compute_address.main_application
to = module.main_application.google_compute_address.address
}
moved {
from = google_compute_instance.main_application
to = module.main_application.google_compute_instance.instance
}
When you run terraform plan again, you will see that Terraform now correctly identifies the resources and proposes only a state update, destroying nothing. Submit these changes to your version control repository to have them applied on a subsequent deployment of your application.
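For reference, the plan output should now contain “has moved to” notices instead of a destroy-and-create, along these lines (abbreviated; the exact formatting depends on your Terraform version):

...
  # google_compute_address.main_application has moved to
  #   module.main_application.google_compute_address.address
...
Plan: 0 to add, 0 to change, 0 to destroy.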