Provision Ubuntu 20.04 in AWS with Terraform and install Nginx with Ansible

In this article, I have used two Ubuntu 20.04 EC2 instances within AWS, namely a Web Server and an Ansible Server. The Web Server is deployed using Terraform, while the Ansible Server is configured manually to demonstrate the Ansible installation and playbook setup.

Before digging in, let's get a brief idea of what Terraform and Ansible actually are.

What is Terraform?

Terraform (TF) is an Infrastructure as Code (IaC) tool that uses a declarative approach¹ to provision and orchestrate servers. TF specializes in building infrastructure from scratch and maintaining its state, rather than configuring software on top of existing infrastructure, which is where Ansible comes into the picture.

E.g.: Terraform can be used to provision a complete server from scratch.

¹ You give instructions to the program and let it determine the steps needed to get there. The focus is on the result rather than on how to reach it.

What is Ansible?

Ansible is a Configuration Management (CM) tool that uses a procedural approach² to bring a particular configuration to its desired state. Ansible specializes in managing configurations on already existing infrastructure.

E.g.: Ansible can be used to install Nginx and manage its configuration inside an already provisioned server.

² It defines a particular sequence of actions that must be followed to get the desired result.

Now, let’s move on with the task.

Task 01: Web Server deployment using Terraform

Prerequisites:

  • Terraform needs to be installed. If it is not installed already, you can get it from the official Terraform downloads page.
  • An AWS account and access keys. For more details on account creation, see "How do I create and activate a new AWS account?" in the AWS Knowledge Center; further information on access keys can be found in the AWS IAM documentation.
  • An AWS profile set up locally (as described in the AWS CLI configuration docs), which will be used in this Terraform example. A minimal sketch is shown below.
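For reference, the profile setup in ~/.aws/credentials could look like this (the values below are placeholders, not real credentials):

[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>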

Okay, once we have all the above, we are good to go.

There are two main files used in the TF config, as below:

  • main.tf: contains the main set of configurations for the module.
  • variables.tf: contains the variable definitions for the module.

Firstly, we will look at the main.tf file.

provider "aws" { profile = "default"
region = "us-east-1"
}

Initially, the provider block for AWS has to be declared. Here, I have used the default profile and the us-east-1 region.

resource "aws_vpc" "customer_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "customer vpc"
}
}

In this step, a VPC is created with the label customer_vpc and the CIDR block 10.0.0.0/16.

resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.customer_vpc.id
cidr_block = var.public_subnet_cidr_block
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "public subnet"
}
}
resource "aws_subnet" "private_subnet" {
vpc_id = aws_vpc.customer_vpc.id
cidr_block = var.private_subnet_cidr_block
availability_zone = "us-east-1b"
tags = {
Name = "private subnet"
}
}

Next, two subnets (public and private) are created as depicted above. As we can see, the cidr_block of both subnets has been assigned a variable. These variables are defined in variables.tf, which we will look at later.

resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.customer_vpc.id
tags = {
Name = "customer_vpc_igw"
}
}

Since we have a public subnet, it is required to add an Internet Gateway for internet access.

resource "aws_route_table" "public_rt" {
vpc_id = aws_vpc.customer_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}tags = {
Name = "public route table"
}
}
resource "aws_route_table_association" "customer_vpc_us-east-1a_public" {
subnet_id = aws_subnet.public_subnet.id
route_table_id = aws_route_table.public_rt.id
}

In this step, a route table is created and associated with the public subnet.

resource "aws_security_group" "allow_http_ssh" {
name = "allow_http_ssh_sg"
description = "Allow HTTP,SSH inbound connections"
vpc_id = aws_vpc.customer_vpc.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
cidr_blocks = ["0.0.0.0/0"]
from_port = 8
to_port = 0
protocol = "icmp"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow HTTP SSH Traffic"
}
}

With the aws_security_group resource, firewall (security group) rules are declared. According to this configuration, HTTP, SSH and ping traffic will be allowed to this instance from anywhere. SSH is required for communication with the Ansible Server, which we will look at in the next section. For ICMP rules, from_port holds the ICMP type and to_port the ICMP code, so they were set to 8 and 0 respectively to allow only ping (Echo Request).

resource "aws_instance" "web_server" {
ami = var.instance_ami
instance_type = var.instance_type
availability_zone = "us-east-1a"
subnet_id = aws_subnet.public_subnet.id
key_name = "web_server_key"
vpc_security_group_ids = [aws_security_group.allow_http_ssh.id]
associate_public_ip_address = true
tags = {
Name = "Web Server"
}
}

Now, it is time to set up the EC2 instance using the aws_instance resource type. The instance type, availability zone, security group and SSH key name are defined here.
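Note that key_name refers to an EC2 key pair that must already exist in the region. If you prefer, the key pair can be managed in Terraform as well; a minimal sketch could look like this (the local path to the public key is a hypothetical example, adjust it to your own):

resource "aws_key_pair" "web_server_key" {
  key_name   = "web_server_key"  # must match the key_name used on the instance
  # pathexpand() resolves the leading "~"; the path shown is a placeholder
  public_key = file(pathexpand("~/.ssh/web_server_key.pub"))
}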

output "server_public_ip" {
value = aws_instance.web_server.public_ip
}

Finally, an output block is added to print the public IP of the EC2 instance, so we don't need to check the AWS Management Console to get it :)
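Once the configuration has been applied, the same value can also be retrieved at any time with:

$ terraform output server_public_ip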

With this, the main.tf file is complete. Now let's move on to the variables.tf file.

variable "public_subnet_cidr_block" {
description = "public_subnet_cidr_block"
type = string
default = "10.0.1.0/24"
}
variable "private_subnet_cidr_block" {
description = "private_subnet_cidr_block"
type = string
default = "10.0.2.0/24"
}
variable "instance_ami" {
description = "instance_ami"
type = string
default = "ami-08d4ac5b634553e16"
}
variable "instance_type" {
description = "instance_type"
type = string
default = "t2.micro"
}

I have defined variables for public_subnet_cidr_block, private_subnet_cidr_block, instance_ami and instance_type inside the variables.tf file and referenced them in the main.tf configuration.
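One advantage of defining variables this way is that the defaults can be overridden at apply time without touching the code. For instance (the t3.micro value here is only illustrative):

$ terraform apply -var="instance_type=t3.micro"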

With this, the TF configuration is complete. Now we have to execute the code and let TF provision the server for us.

The TF commands used are as follows:

  • terraform init : initializes the Terraform environment and configs.
  • terraform plan : shows the overall execution plan with the number of additions, changes and removals. It is always better to run this command first and see what the code will do.
  • terraform apply : actually applies the changes. If yes is entered, the changes take effect.
  • terraform destroy : destroys all applied changes.

Task 02: Set up Ansible server and deploy Nginx

Prerequisite:

  • An Ubuntu 20.04 instance needs to be provisioned in the same subnet as the Web Server.

To begin with, Ansible has to be installed using the commands below.

$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
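Once the installation completes, it can be confirmed with:

$ ansible --version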

Note: Since Ansible uses SSH to communicate with its hosts, we should be able to SSH to the Web Server from the Ansible Server. To do that, an SSH key needs to be generated and shared with the remote host (Web Server). The command below shows how. The -t option defines the algorithm, and the key size is specified with the -b option.

$ ssh-keygen -t rsa -b 4096

Once the command has executed successfully, two files will be added to the .ssh folder: the private key in id_rsa and the public key in id_rsa.pub. To communicate with the remote server (Web Server), the public key (id_rsa.pub) has to be shared with it. However, because AWS instances disable password logins by default, we will get an error (Permission denied (publickey)) if we try to copy the public key using the ssh-copy-id command. Therefore, we either have to manually copy and paste the key into the remote host (as sketched below), or temporarily enable PasswordAuthentication in /etc/ssh/sshd_config to make it possible to transfer the key using ssh-copy-id.
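If you choose the manual route, it could look like this: print the public key on the Ansible Server and append it to the authorized_keys file on the Web Server (the key material shown is a placeholder):

$ cat ~/.ssh/id_rsa.pub   # on the Ansible Server; copy the output
$ echo "ssh-rsa AAAA...placeholder... ubuntu@ansible-server" >> ~/.ssh/authorized_keys   # on the Web Server

If you go with the PasswordAuthentication route instead, the steps are as follows.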

$ sudo vi /etc/ssh/sshd_config
PasswordAuthentication yes    # find this line and change the value to yes

Save the file and exit. Restart the SSH service for the changes to take effect.

$ sudo systemctl restart sshd

After that, set a password for the ubuntu user.

$ sudo passwd ubuntu

Once we have completed the above steps, we can simply run the command below to copy id_rsa.pub to the remote server. -i specifies the path of the identity file to be used. When prompted, enter the password we set in the previous step.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@172.31.89.199

Once the key has been transferred successfully, it is always better to check the connectivity first and then disable PasswordAuthentication again. From now on, you should be able to SSH to the remote host without being asked for a password.
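For example:

$ ssh ubuntu@172.31.89.199

If the login succeeds without a password prompt, set PasswordAuthentication back to no in /etc/ssh/sshd_config and run sudo systemctl restart sshd again.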

Now, the host IP of the remote server (Web Server) has to be added to the /etc/ansible/hosts file. For this demo, I have added it under a [webservers] group for simplicity, as shown below.
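The relevant part of /etc/ansible/hosts could look like this (using the Web Server's IP address, 172.31.89.199 in this demo):

[webservers]
172.31.89.199

Connectivity to the group can then be verified with Ansible's built-in ping module:

$ ansible webservers -m ping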

Next, a playbook needs to be configured with the steps required to install Nginx on the Web Server.

Create a playbook with any name. For the purposes of this article, I have named it nginx-install.yml and created it inside my home directory; it looks like this:

---
- name: nginx install and start service
  hosts: webservers
  become: true
  tasks:
    - name: install nginx
      ansible.builtin.apt:
        update_cache: yes
        name: nginx
        state: latest
    - name: start nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: yes

The file must be in YAML format, hence it starts with three hyphens, and the indentation must be followed carefully. There are two tasks defined: one for the installation and one to start the service. Ansible's built-in modules are used here accordingly, e.g. ansible.builtin.apt and ansible.builtin.service.

Before actually running the playbook, it is always advisable to check for errors and validate the syntax.

  • $ ansible-playbook nginx-install.yml --syntax-check : to check for any syntax errors.
  • $ ansible-playbook nginx-install.yml --check : to simulate the changes and validate the execution. It will surface any errors before we actually run the playbook. The output for our scenario is as below:
PLAY [nginx install and start service] *****************************************

TASK [Gathering Facts] *********************************************************
ok: [172.31.89.199]

TASK [install nginx] ***********************************************************
changed: [172.31.89.199]

TASK [start nginx] *************************************************************
changed: [172.31.89.199]

PLAY RECAP *********************************************************************
172.31.89.199 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Great, no errors so far and we are in safe hands :) Now we can go ahead and run the playbook for real.

$ ansible-playbook nginx-install.yml

Once the execution has finished, we can check the status by browsing to the public IP of the Web Server. It should show the welcome page of Nginx.
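If you prefer the terminal, the same check can be done with curl from any machine (replace the placeholder with the server_public_ip value printed by Terraform):

$ curl -I http://<server_public_ip>

A 200 OK response served by nginx confirms the installation.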

Nginx welcome page

This concludes the server provisioning using Terraform and the Nginx configuration using Ansible. I sincerely hope this helps you tackle anything similar.

Thank you for reading and stay safe!
