
How to Build an AWS ECS Cluster with Terraform and an Nginx Container Image

In this article, we'll discuss how to run the Nginx Docker image on an ECS cluster using Terraform. This involves defining an ECS service with a task definition that specifies the Nginx Docker image, and creating the ECS cluster itself with the Terraform AWS provider.

The scenario at hand involves deploying a Docker container with an Nginx image, which requires two key steps. Firstly, we need to pull the Nginx image from the Docker registry, which can be easily accomplished using Docker commands such as ‘docker pull’. Once we have the image, the next step is to create an ECS cluster and deploy the Nginx image using Terraform, a popular infrastructure-as-code tool.
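If you want to sanity-check the image locally before involving ECS, the pull looks like this (assuming Docker is installed; at deploy time, ECS pulls the image from the registry on our behalf, so this step is optional):

```shell
$ docker pull nginx:1.23.1   # same image tag the task definition uses later
$ docker image ls nginx      # confirm the image arrived locally
```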

Using Terraform, we can define an ECS service with a task definition that specifies the Nginx Docker image and create the ECS cluster with the AWS provider. This allows us to easily automate the process of deploying and managing the Nginx container, as well as scaling the cluster as needed.

Overall, by leveraging Terraform and Docker, we can quickly and efficiently deploy Nginx in an ECS cluster, helping to streamline our deployment processes and ensure our applications are up and running smoothly.

Prerequisites for Nginx Docker container deployment with Terraform and ECS on AWS

Before we can deploy a Docker container with an Nginx image using Terraform and an ECS cluster, there are several prerequisites that need to be met.

Firstly, we need to have an AWS account with proper permissions that allow us to create and manage an ECS cluster. This includes permissions for managing IAM roles and policies, creating VPCs, subnets, and security groups, and provisioning EC2 instances and other AWS resources.

In addition to the AWS account, we also need an IDE of our choice with the AWS CLI installed and configured. This will enable us to interact with our AWS resources directly from the command line, including managing the ECS cluster, creating and updating IAM roles and policies, and provisioning and managing other AWS resources.
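Before going further, it's worth confirming that the CLI is talking to the right account (assuming credentials have already been set up):

```shell
$ aws configure                 # stores the access key, secret key, and default region
$ aws sts get-caller-identity   # prints the AWS account and IAM identity in use
```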

We also need to have Docker installed on our system, which will allow us to pull down and manage the Nginx image that we’ll be deploying in our ECS cluster. This can be easily installed on a variety of operating systems and is available for free on the Docker website.

Finally, we need to have Terraform installed, which is the tool that we’ll be using to automate the deployment and management of our ECS cluster and Nginx container. Terraform is an open-source infrastructure-as-code tool that allows us to define and manage our infrastructure as code, making it easy to provision, manage, and update our resources in a scalable and repeatable way.

What does Amazon ECS stand for and how does it work?

Amazon ECS is a container management service offered by AWS that provides a scalable and efficient way to run and manage containerized applications. With ECS, users can easily deploy and manage Docker containers on a cluster of virtual machines, allowing them to quickly scale and manage their applications as needed. Additionally, ECS offers a number of advanced features, such as load balancing, auto-scaling, and task scheduling, making it an ideal choice for large-scale container deployments.

How does Terraform work and what are its key features?

Terraform is an open-source infrastructure-as-code tool designed by HashiCorp that enables users to define and manage their infrastructure through declarative configuration files. By using Terraform, users can easily build, provision, and modify cloud-based resources such as virtual machines, load balancers, and databases in a repeatable and scalable way. The tool is cloud-agnostic, meaning it can be used with multiple cloud providers and services, and offers a wide range of advanced features such as resource graph visualization, automated plan approval, and dependency management.

To get started, we’ll create a new directory called “ECS-cluster” and move into it. From here, we’ll break down the next steps into separate sections, each corresponding to the creation of specific resources required to complete the project. These sections will be represented as individual files that can be created with the command mcedit <name>.tf. To proceed, simply copy and paste the relevant code from each section below into its corresponding file.
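The directory layout described above can be created up front; here is a minimal sketch (mcedit is just one editor choice, any editor works):

```shell
# Create the project directory and the empty Terraform files
# used in the sections below, then work from inside it.
mkdir -p ECS-cluster
touch ECS-cluster/terraform.tfvars ECS-cluster/provider.tf ECS-cluster/variables.tf ECS-cluster/vpc.tf ECS-cluster/ecs-sg.tf ECS-cluster/main.tf
ls ECS-cluster
```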


Securing our Sensitive Data

To ensure the security of our sensitive data, we’ll need to input our access and secret access keys into this file. This information will be used to authenticate our requests and ensure that we have the necessary permissions to access and modify our AWS resources.

Now create your first Terraform variables file <terraform.tfvars> with the following content.

aws_access_key = "<your-access-key>"
aws_secret_key = "<your-secret-key>"
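Since <terraform.tfvars> now holds real credentials, a common safeguard (an assumption about your workflow, adjust to your team's conventions) is to keep it and the local state files out of version control:

```shell
# Ignore the secrets file and local Terraform state/artifacts.
printf '%s\n' 'terraform.tfvars' '.terraform/' '*.tfstate' '*.tfstate.backup' > .gitignore
cat .gitignore
```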

In this section, we'll define our providers in <provider.tf>, including Docker and AWS, which will be used to interact with and manage our resources. By referencing these providers in our Terraform configuration, we can easily configure and manage the various services and resources needed to deploy our infrastructure in a repeatable and scalable way.

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.56.0"
    }
  }
}

# Configuring AWS as the provider
provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

# Configuring Docker as the provider
provider "docker" {}

Here, we’ll define the values for our resources, making it possible to reuse our code later on. By defining these values in <variables.tf>, we can make our Terraform configuration more modular and scalable, allowing us to quickly and easily update and modify our resources as needed. This also helps to ensure consistency across our infrastructure and avoids duplicating code or hardcoding values.

# Provider configuration variables
variable "aws_region" {
  description = "AWS region to use"
  type        = string
  default     = "eu-central-1"
}

variable "aws_access_key" {
  description = "AWS access key"
  type        = string
  sensitive   = true
}

variable "aws_secret_key" {
  description = "AWS secret access key"
  type        = string
  sensitive   = true
}

# VPC configuration variables
variable "vpc_cidr" {
  description = "CIDR block for main VPC"
  type        = string
  default     = "10.0.0.0/16" # any private /16 works here
}

# Availability zones configuration variable
variable "availability_zones" {
  description = "AWS availability zones to use"
  type        = list(string)
  default     = ["eu-central-1a"]
}

Now we create the file <vpc.tf> which defines the network infrastructure required for our ECS cluster deployment, including a VPC, public subnet, internet gateway, and routing table. By creating these resources, we can establish connectivity to the internet and access the Docker registry to pull the necessary images. Terraform allows us to automate this process and ensure consistency and scalability in our network infrastructure, making it easier to manage and maintain over time.

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = "main"
  }
}

resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, 1) // Takes the second /24 of the VPC's /16, e.g. 10.0.1.0/24
  availability_zone       = var.availability_zones[0] // Use the first availability zone by default
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "igw"
  }
}

resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0" // Default route to the internet gateway
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "public-route-table"
  }
}

resource "aws_route_table_association" "public_subnet_association" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_route_table.id
}

Now we create a security group in the file <ecs-sg.tf> to allow HTTP traffic to reach our ECS cluster, which is crucial for viewing our Nginx test page later in the project. By enabling traffic to pass through the security group and reach our cluster, we can ensure that our application functions correctly. With Terraform, we can automate this process and ensure that our security group is correctly configured and meets our specific project requirements. This will make it easier to manage and maintain our infrastructure over time and enable us to scale our application with ease.

resource "aws_security_group" "ecs_sg" {
  name        = "ecs-sg"
  description = "Security group for ECS cluster"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "Allow HTTP traffic"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

To complete our infrastructure, we’ll create our ECS cluster in <main.tf> and define any other resources or arguments required for the cluster to run. With Terraform, we can automate this process and ensure that our ECS cluster is correctly configured and meets our specific project requirements. This will make it easier to deploy and manage our infrastructure over time, allowing us to focus on developing and scaling our application.

# Creating an ECS cluster
resource "aws_ecs_cluster" "cluster" {
  name = "cluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

# Creating an ECS task definition
resource "aws_ecs_task_definition" "task" {
  family                   = "service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE", "EC2"]
  cpu                      = 512
  memory                   = 2048

  container_definitions = jsonencode([
    {
      name : "nginx",
      image : "nginx:1.23.1",
      cpu : 512,
      memory : 2048,
      essential : true,
      portMappings : [
        {
          containerPort : 80,
          hostPort : 80
        }
      ]
    }
  ])
}

# Creating an ECS service
resource "aws_ecs_service" "service" {
  name             = "service"
  cluster          = aws_ecs_cluster.cluster.id
  task_definition  = aws_ecs_task_definition.task.arn
  desired_count    = 1
  launch_type      = "FARGATE"
  platform_version = "LATEST"

  network_configuration {
    assign_public_ip = true
    security_groups  = [aws_security_group.ecs_sg.id]
    subnets          = [aws_subnet.public_subnet.id]
  }

  lifecycle {
    ignore_changes = [task_definition]
  }
}
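As an optional addition (not part of the original file set), an <outputs.tf> can surface useful identifiers after an apply; the output names here are illustrative:

output "cluster_name" {
  description = "Name of the ECS cluster"
  value       = aws_ecs_cluster.cluster.name
}

output "service_name" {
  description = "Name of the ECS service"
  value       = aws_ecs_service.service.name
}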

To apply the infrastructure changes we've made using our .tf files, we use the Terraform command-line interface. With Terraform, we can easily manage and update our infrastructure in a consistent and repeatable way. We begin by initializing the working directory:

$ terraform init


After writing the Terraform code, the next step is to initialize the current working directory and configure the backend with the providers we provided earlier in the <provider.tf> file. This will set up the necessary infrastructure for us to deploy our code. Once the directory is initialized, we can then use a command to fix the formatting of our <.tf> files and ensure that everything is properly aligned and easy to read. Proper formatting and organization of our code will make it more maintainable over time.

$ terraform fmt


We can ensure that our Terraform configuration files are correct by validating them. This command checks the syntax and configuration of our code against the provider’s schema and confirms whether it’s ready for deployment. By validating our configuration files, we can catch any potential issues early and prevent problems when deploying our infrastructure.

$ terraform validate


To preview the changes that will be made to our infrastructure once we apply our Terraform code, we can generate an execution plan. This execution plan will provide a detailed breakdown of the resources that will be created, modified, or deleted. Additionally, it will show any errors or warnings that may be present in our code. By reviewing the execution plan, we can catch any issues and ensure that our infrastructure is properly configured before we apply the changes.

$ terraform plan


Once we have reviewed the changes that Terraform will make to our infrastructure, we can confirm the modifications by applying the execution plan. This will initiate the process of creating, modifying, or deleting the resources specified in our code. It’s essential to verify everything is set up correctly before proceeding because these changes can impact our infrastructure. Once we’ve confirmed the changes, we can apply them and wait for Terraform to complete the process.

$ terraform apply --auto-approve
  • Using the --auto-approve flag enables us to skip the interactive approval prompt. This flag can be useful when we are sure of the changes we are making to our infrastructure and want to apply them without any delay. However, it's essential to verify that everything is correctly configured before using this flag, because it can result in irreversible changes that negatively impact our infrastructure.


Terraform builds our infrastructure in a specific order once we apply the configuration files, and it displays how many resources it created in the process. This is because Terraform configurations are declarative, which means we define the desired state, and Terraform handles the ordering of resources or variables. Once we have applied the configuration files, we should verify that everything has been created successfully by checking the AWS Console.

Excellent, now that our ECS cluster is up and running, it's time to test our containerized application. To do so, we need to access the Nginx webserver and make sure everything is working as expected. We can start by selecting the cluster we just created in the Amazon ECS console.


Subsequently, we can navigate to the Tasks tab on the Amazon ECS console and click on the task that we just created. This will give us details about the task, including its status, task definition, launch type, and more. By selecting the task, we can view more information about the container that was launched, such as its name, status, and CPU and memory usage. This information will allow us to verify that our container was successfully launched and is running as expected.
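The same task details can be read from the command line (assuming the AWS CLI is configured, and using the cluster and service names from the Terraform code above):

```shell
$ aws ecs list-tasks --cluster cluster --service-name service
$ aws ecs describe-tasks --cluster cluster --tasks <task-arn>
```

The task ARN to pass to describe-tasks comes from the list-tasks output.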


Finally, we can view our web page by copying the task's Public IP into a new browser tab. Seeing the Nginx welcome page confirms that our ECS cluster is functioning correctly and that the Nginx container image was successfully pulled from the Docker registry. We can now relax knowing that we have successfully deployed a Docker container with an Nginx image to our ECS cluster.


Here comes the best part of this project …

Now, it’s time for the destroy phase, where we will remove all the resources created by Terraform and ensure that we don’t incur any unnecessary costs.

$ terraform destroy --auto-approve

By running this command, Terraform will begin the process of dismantling all the resources we have created throughout this project, ensuring that no traces are left behind. This is a crucial step in avoiding any unwanted or unintended charges and maintaining a clean and organized infrastructure. The process will only take a few moments, and once it’s completed, we can be sure that our environment is back to its original state.


And with that, we have completed the creation of our ECS cluster using Terraform. Through this project, we’ve accomplished a lot, including pulling down an Nginx image from the Docker registry and setting up our cluster to run the container on AWS. The process has been educational, and we have learned how to create and configure our infrastructure using code, specifically Terraform.
