Alexa Skills Are Fun

Alexa devices have revolutionized the way we interact with one another and with our surroundings. While smartphones paved the way for apps, Alexa devices give us immense power through Alexa skills that could well influence our lifestyles in the near future. I use Echo devices to wake me up in the morning, manage my schedule, and set reminders for meetings, shopping, conference calls, entertainment, intercom announcements, and the list just goes on. Home automation is definitely an area I’m focusing on.

Home automation with Alexa skills is a breeze, and it eases life for many people like me who are physically disabled. I recently ventured into learning and developing Alexa skills, and it has been absolute fun. Unlike my earlier posts on cloud, I intend to begin here with the basics, so anyone interested in joining me can quickly start developing with a solid fundamental understanding. Let's begin by analysing a sample Alexa invocation phrase.

An Alexa invocation phrase typically consists of five parts:

Wake Word : This triggers voice interaction with the Alexa device. Only a few wake words are supported as of now, such as Amazon, Computer and Echo, apart from Alexa itself.

Launch Word : Used to invoke a skill; it can be ask, get, open, start and so on.

Skill / Invocation Name : The invocation name identifies the actual custom skill we will develop. Flash briefing and smart home skills do not require an invocation name.

Utterance : The utterance describes the action the skill should perform, such as to read, to sing, to play and so on.

Slot Values : These are the variable pieces of information within an utterance that Alexa captures and passes to the skill. Utterances and slot values should be relevant to the skill. For example, in "Alexa, ask Daily Horoscope to read the horoscope for Leo", Alexa is the wake word, ask is the launch word, Daily Horoscope is the invocation name, read the horoscope is the utterance, and Leo is a slot value.

With this basic understanding in place, we will design a voice interaction model in the next article. Bye for now!

Orchestrate Multiple Environments with GCP

The purpose of this series of posts on Terraform with GCP is to accomplish more with less. Here we optimize our templates to bring up multiple environments across multiple projects in GCP. The approach below helps spin up multiple instances with minimal effort by introducing .tfvars files into our templates.

Use case: I have two projects, gcp-homecompany-qa and gcp-homecompany-dev, and we need to create compute instances on GCP with Terraform. Let's get on with it.

The folder structure is as follows:

---/gce/
-- firewall.tf
-- httpd_install.sh
-- main.tf
-- output.tf
-- provider.tf
-- variables.tf
-- app-dev.tfvars
-- app-qa.tfvars

The variables.tf file is used to declare the variables and to assign a few default values.

variable "test_servers" { 

type = list(any)
}

variable "disk_zone" { default = "" }
variable "disk_type" { default = "" }
variable "disk_name" { default = "" }
variable "disk_size" { default = "" }
variable "project_id" { default = "" }
variable "credentials_file" { default = "" }
variable "path" { default = "/home/admin/gcp_credentials/keys" }

The values for these variables are assigned in the respective .tfvars files, so here we create two .tfvars files to spin up two environments, Dev and QA. The two files are defined as below:

app-dev.tfvars

credentials_file = "gcp-homecompany-dev-key.json"
project_id       = "gcp-homecompany-dev"

test_servers = [
  {
    id                    = 1
    compute_instance_name = "demo1"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  },
  {
    id                    = 2
    compute_instance_name = "demo2"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  },
  {
    id                    = 3
    compute_instance_name = "demo3"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  }
]

disk_zone = "us-east1-b"
disk_type = "pd-ssd"
disk_name = "additional volume disk"
disk_size = "150"

app-qa.tfvars

credentials_file = "gcp-homecompany-qa-key.json"
project_id       = "gcp-homecompany-qa"

test_servers = [
  {
    id                    = 1
    compute_instance_name = "demo1"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  },
  {
    id                    = 2
    compute_instance_name = "demo2"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  },
  {
    id                    = 3
    compute_instance_name = "demo3"
    compute_machine_type  = "e2-standard-2"
    compute_image         = "centos-8"
    compute_network       = "home-network"
    compute_subnet        = "home-sub-subnetwork"
    compute_zone          = "us-central1-a"
    compute_size          = "100"
  }
]

disk_zone = "us-east1-b"
disk_type = "pd-ssd"
disk_name = "additional volume disk"
disk_size = "150"

In the above .tfvars files we populate the list test_servers with three Google compute instances each. To iterate through this list as key-value pairs, we use the for_each meta-argument together with a for expression in the template below. The following changes are made to our main.tf file:

resource "google_compute_instance" "test_instance" {

for_each = { for test_instance in var.test_servers : test_instance.id => test_instance }

name = each.value.compute_instance_name

machine_type = each.value.compute_machine_type

zone = each.value.compute_zone

metadata_startup_script = "${file("httpd_install.sh")}"

can_ip_forward = "false"



// tags = ["",""]

description = "This is our virtual machines"
tags = ["allow-http","allow-https"]
boot_disk {
initialize_params {
image = each.value.compute_image
size = each.value.compute_size
}
}


network_interface {
network = each.value.compute_network
subnetwork = each.value.compute_subnet
access_config {
// Ephemeral IP
}
}

service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}

The for_each meta-argument assigns values to the arguments from the list's key-value pairs, and we can now test the above template. Generalizing the network, subnet and load balancer related parts is something I will cover in future articles.
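
One thing to note: the disk_zone, disk_type, disk_name and disk_size values set in the .tfvars files are not consumed by the instance resource above, so they would presumably feed a separate persistent disk resource. A minimal sketch of such a resource (my assumption, not part of the original template; also note that GCP disk names only allow lowercase letters, digits and hyphens, so a value like "additional volume disk" would need adjusting) could look like this:

resource "google_compute_disk" "additional" {
  name = var.disk_name # must contain only lowercase letters, digits and hyphens
  type = var.disk_type
  zone = var.disk_zone
  size = var.disk_size
}

Either way, the template can now be tested per environment with the command below: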

terraform apply -var-file=app-<env>.tfvars

The above command creates compute instances depending on the .tfvars file passed while applying.
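
Since the instances are now created with for_each (which produces a map keyed by id) rather than count, output.tf has to iterate over that map; a small sketch of what it could look like:

output "instance_names" {
  value = [for instance in google_compute_instance.test_instance : instance.name]
}

output "machine_types" {
  value = [for instance in google_compute_instance.test_instance : instance.machine_type]
}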

Making Terraform Dynamic with Interpolation

Continuing from the previous post, we will introduce interpolation, flow control and looping. We will split main.tf into separate files, each holding specific definitions for creating resources in GCP. We start with the provider.tf file, which holds the provider configuration.

provider.tf

variable "path" {  default = "/home/vagrant/gcp_credentials/keys" }

provider "google" {
project = "triple-virtue-271517"
version = "~> 3.38.0"
region = "us-central1"
zone = "us-central1-a"
credentials = "${file("${var.path}/triple-virtue.json")}"

}

Firewall rules can be defined in a separate file, firewall.tf, as below:

firewall.tf

resource "google_compute_firewall" "allow-http-port" {
name = "allow-http-port"
network = "default"

allow {
protocol = "tcp"
ports = ["80"]
}

target_tags = ["allow-http"]

}

resource "google_compute_firewall" "allow-https-port" {
name = "allow-https-port"
network = "default"

allow {
protocol = "tcp"
ports = ["443"]
}

target_tags = ["allow-https"]

}
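
As a side note, the two rules above differ only in port and tag, so they could also be generated from a single resource driven by a variable and count. This is only a sketch of the idea, not something used in the rest of this post:

variable "web_rules" {
  default = [
    { name = "allow-http-port", port = "80", tag = "allow-http" },
    { name = "allow-https-port", port = "443", tag = "allow-https" },
  ]
}

resource "google_compute_firewall" "web" {
  count   = length(var.web_rules)
  name    = var.web_rules[count.index].name
  network = "default"

  allow {
    protocol = "tcp"
    ports    = [var.web_rules[count.index].port]
  }

  target_tags = [var.web_rules[count.index].tag]
}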

Interpolation in Terraform helps assign values to variables, which lets us dynamically manage the provisioning of resources in cloud environments. Here we create a variables.tf file that defines the variables used in the script.

variables.tf

variable "image" {  default = "centos-8" }
variable "machine_type" { default = "n1-standard-2" }
variable "name_count" { default = ["server-1","server-2","server-3"]}
variable "environment" { default = "production" }
variable "machine_type_dev" { default = "n1-standard-1" }
variable "machine_count" { default = "1" }
variable "machine_size" { default = "20" }

We then create a separate file, httpd_install.sh, which installs the web server on the compute instances.

httpd_install.sh

sudo yum update -y
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd

Now let's define main.tf, which brings together the interpolation, the firewall rules and the script that installs the Apache web server.

main.tf

resource "google_compute_instance" "default" {

count = length(var.name_count)
name = "list-${count.index+1}"
machine_type = var.environment != "production" ? var.machine_type : var.machine_type_dev
metadata_startup_script = "${file("httpd_install.sh")}"


can_ip_forward = "false"
description = "This is our virtual machines"

tags = ["allow-http","allow-https"]




boot_disk {
initialize_params {
image = var.image
size = var.machine_size
}
}


network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}

metadata = {
size = "20"
foo = "bar"
}

}

By carefully observing main.tf, we can see that these lines refer to the variables defined in variables.tf:

count        = length(var.name_count)
name         = "list-${count.index + 1}"
machine_type = var.environment == "production" ? var.machine_type : var.machine_type_dev

These lines also demonstrate looping and flow control: we loop to create three compute instances, and the conditional picks the production machine type because the environment variable defaults to "production". Below is a clear example of Terraform interpolation, referring to the image and machine_size defined in variables.tf:

boot_disk {
    initialize_params {
        image = var.image
        size = var.machine_size
    }
}

The line below triggers the installation of the Apache web server via the httpd_install.sh script:

metadata_startup_script = "${file("httpd_install.sh")}"

The output.tf then looks like below:

output.tf

output "machine_type" {
value = "${google_compute_instance.test_instance[*].machine_type}"
}

output "name" {
value = "${google_compute_instance.test_instance[*].name}"
}

The overall set of files created for this post is as below:

---/gce/
-- firewall.tf
-- httpd_install.sh
-- main.tf
-- output.tf
-- provider.tf
-- variables.tf

The results of the above experiments are as below:

Beginning Terraform with GCP

This and the next few posts will demonstrate how to adopt IaC best practices without unnecessary complexity. But first, a simple Terraform script to provision resources on GCP: we will bring up a VM instance running an Apache web server in the Google Cloud Platform public cloud. We start with a single main.tf that holds all the configuration and the resources to provision and orchestrate in GCP.

Let's first define the provider configuration:

provider "google" {

project = "triple-virtue-271517"
version = "~> 3.38.0"
region = "us-central1"
zone = "us-central1-a"
credentials = "${file("${var.path}/cloud-access.json")}"

}

The path variable points to the location of the access key for the GCP project, as below:

variable "path" { default = "/home/vagrant/gcp_credentials/keys" }

Let's define the firewall rules on the default network:

resource "google_compute_firewall" "allow-http-port" {
name = "allow-http-port"
network = "default"

allow {
protocol = "tcp"
ports = ["80"]
}

target_tags = ["allow-http"]

}

resource "google_compute_firewall" "allow-https-port" {
name = "allow-https-port"
network = "default"

allow {
protocol = "tcp"
ports = ["443"]
}

target_tags = ["allow-https"]

}

The target_tags defined here are then referenced by the resources (VM instances) that need these firewall rules to open the HTTP and HTTPS ports.

Next we define the code to provision a VM instance and attach it to the default network with the firewall rules above:

resource "google_compute_instance" "test_instance" {

name = "demo-01"
machine_type = "e2-standard-2"
zone = "us-central1-a"
metadata_startup_script = <<-EOF
sudo yum update
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl start httpd
sudo systemctl enable httpd
EOF

tags = ["allow-http","allow-https"]

boot_disk {

initialize_params{

image = "centos-8"
size = "100"


}
}

network_interface {
network = "default"

access_config {
// Ephemeral IP
}
}

service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}

}

The metadata_startup_script installs the web server while the VM instance is being provisioned, and the access_config block inside network_interface assigns an ephemeral public IP to the instance.
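
If you want that ephemeral public IP printed after provisioning, an extra output can expose it; a small sketch (my addition, not part of the original script):

output "public_ip" {
  value = "${google_compute_instance.test_instance.network_interface[0].access_config[0].nat_ip}"
}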

Now, putting it all together:

main.tf

variable "path" {  default = "/home/vagrant/gcp_credentials/keys" }

provider "google" {
project = "triple-virtue-271517"
version = "~> 3.38.0"
region = "us-central1"
zone = "us-central1-a"
credentials = "${file("${var.path}/cloud-access.json")}"

}

resource "google_compute_firewall" "allow-http-port" {
name = "allow-http-port"
network = "default"

allow {
protocol = "tcp"
ports = ["80"]
}

target_tags = ["allow-http"]

}

resource "google_compute_firewall" "allow-https-port" {
name = "allow-https-port"
network = "default"

allow {
protocol = "tcp"
ports = ["443"]
}

target_tags = ["allow-https"]

}


resource "google_compute_instance" "test_instance" {

name = "demo-01"
machine_type = "e2-standard-2"
zone = "us-central1-a"
metadata_startup_script = <<-EOF
sudo yum update
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl start httpd
sudo systemctl enable httpd
EOF

tags = ["allow-http","allow-https"]

boot_disk {

initialize_params{

image = "centos-8"
size = "100"


}
}

network_interface {
network = "default"

access_config {
// Ephemeral IP
}
}

service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}

}

output "machine_type" {
value = "${google_compute_instance.test_instance.machine_type}"
}

output "name" {
value = "${google_compute_instance.test_instance.name}"
}

Below are the results of the above resources created in GCP.

The server instance created in the vm console

The Apache webserver running in that instance

In the next part we will refine the above script further by splitting it into different files and using Terraform interpolation.

Provisioning Jenkins with Vagrant

We will now also install a Jenkins instance while initializing a Vagrant box. The refined Vagrantfile and the related Jenkins installation script are below.

jenkins.sh should be at the same level as the Vagrantfile.

sudo yum install -y epel-release
sudo yum -y update
sudo yum install -y net-tools
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import http://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y git
sudo yum install -y java-1.8.0-openjdk.x86_64
sudo yum -y install jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
echo "Installing Jenkins Plugins"
JENKINSPWD=`sudo cat /var/lib/jenkins/secrets/initialAdminPassword`
echo $JENKINSPWD

Our refined Vagrantfile is as below:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.insert_key = false
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end

  # Master VM
  config.vm.define "cicd-admin-server" do |cicd_admin_server|
    cicd_admin_server.vm.hostname = "master.server.dev"
    cicd_admin_server.vm.box = "bento/centos-7.8"
    cicd_admin_server.vm.network :private_network, ip: "192.168.60.10"
    cicd_admin_server.vm.provision "shell" do |shell|
      shell.path = "jenkins.sh"
    end
  end

  # Node VM - 1
  config.vm.define "node1" do |node1|
    node1.vm.hostname = "node1.server.dev"
    node1.vm.box = "bento/centos-7.8"
    node1.vm.network :private_network, ip: "192.168.60.12"
  end

  # Node VM - 2
  config.vm.define "node2" do |node2|
    node2.vm.hostname = "node2.server.dev"
    node2.vm.box = "bento/centos-7.8"
    node2.vm.network :private_network, ip: "192.168.60.14"
  end
end

Now try to hit the URL: http://192.168.60.10:8080/

Use the password that is printed while Vagrant is spinning up the VM; if you missed it in the console output, it can also be read from /var/lib/jenkins/secrets/initialAdminPassword inside the VM (for example via vagrant ssh cicd-admin-server).

Gathering the Building Blocks!!

Ninety-nine percent of the time, our work revolves around surfing the internet in search of solutions for day-to-day hassles. Most of the content I come across deals with basic DevOps and cloud orchestration solutions, but for more complex problems there seems to be no single website that satisfies an SRE's thirst for best practices that can be applied professionally in daily work.

This site intends to do just that. I prefer to dive straight into the points that matter. I wish to start right from setting up a development environment, which will then be used to build more complex solutions in the future. At the time of starting this blog, my development environment looks like this:

Machine: A modest HP Pavilion, i7 10th Gen processor, Windows 10 with 16 GB of RAM and a 512 GB SSD.

Virtualization: VirtualBox + Vagrant with CentOS 7.8 boxes. (Don't worry too much about the versions; any recent version is more than sufficient.) For more information on installing and setting up VirtualBox and Vagrant, check the links below:

VirtualBox: https://www.virtualbox.org/manual/ch02.html

Vagrant: https://www.vagrantup.com/docs/installation

Vagrant Images: https://app.vagrantup.com/bento/boxes/centos-7.8

GitHub: https://github.com/manjunathrreddy/

The Vagrantfile to provision VMs locally is as below:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.insert_key = false
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end

  # Master VM
  config.vm.define "cicd-admin-server" do |cicd_admin_server|
    cicd_admin_server.vm.hostname = "master.server.dev"
    cicd_admin_server.vm.box = "bento/centos-7.8"
    cicd_admin_server.vm.network :private_network, ip: "192.168.60.10"
  end

  # Node VM - 1
  config.vm.define "node1" do |node1|
    node1.vm.hostname = "node1.server.dev"
    node1.vm.box = "bento/centos-7.8"
    node1.vm.network :private_network, ip: "192.168.60.12"
  end

  # Node VM - 2
  config.vm.define "node2" do |node2|
    node2.vm.hostname = "node2.server.dev"
    node2.vm.box = "bento/centos-7.8"
    node2.vm.network :private_network, ip: "192.168.60.14"
  end
end

The above Vagrant script creates a master CI/CD server for all admin and cloud orchestration work, and two nodes that act as worker machines for load sharing.

This Vagrantfile will be refined further for specific use cases in future posts. From the directory where the Vagrantfile is present, trigger the command below:

$ vagrant up cicd-admin-server node1 node2

This brings all the VMs to life!

Now, if you are wondering about Docker and Kubernetes, that's a different ecosystem for microservices. I prefer using one of the managed services (preferably GKE).