Handling DOM Elements From link_to remote: true Callback

This is a documentation on how to handle the response from a link_to remote: true API call and manipulate the DOM with minimal Javascript code.

Motivation

In the past, I used Javascript to add a click listener to a button element in order to make a jquery.ajax() API call to my Rails server.

A typical use case would be to delete a row in a list. While I could make a RESTful delete request to the Rails server and have it reload the page with the updated list, this UX flow does not work out in some cases.

Hence, I had to use the jquery.ajax() way to work things out. I did not use the link_to with remote: true helper because I thought there was no way for me to listen for the response and react to it.

The only way I could listen for the response and handle the DOM element thereafter, without having to reload the page, was to use the success callback of jquery.ajax(). Or so I thought.

The Magic

A better way is to use the link_to remote: true helper to render the element the Rails way, without fuss. The key step is to add a Javascript listener on the element.

<!-- page.html.erb -->
<div class="parent-row" data-model-id="<%= @model.id %>">
  <%= link_to 'DELETE', model_path(@model), method: :delete, class: 'delete-model', data: { confirm: 'Are you sure?' }, remote: true %>
</div>

// page.js
$(document).on('ajax:success', '.delete-model', event => {
  // rails-ujs passes [data, status, xhr] in event.detail
  const [response, status, xhr] = event.detail;
  // remove the row wrapping the deleted model and surface the server's message
  $(`.parent-row[data-model-id="${response.model_id}"]`).remove();
  alert(response.message);
});

The listener listens for the ajax:success event on the element that made the API call. When the event fires, it executes the code in its callback function. In this callback, we receive the data passed from the backend, which we can use to remove the DOM element as required.

Note that you might not want to register the $(document).on() listener in code that runs on every turbolinks page visit, as the same listener will then be attached multiple times. A sketch of the pitfall follows below, and a particular use case is documented here.
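
A minimal sketch of the pitfall and one way to avoid it (handleSuccess is a hypothetical handler function):

// the pitfall: this registration runs on every turbolinks visit, stacking up duplicate handlers
document.addEventListener('turbolinks:load', () => {
  $(document).on('ajax:success', '.delete-model', handleSuccess);
});

// one way to avoid it: register the delegated listener once, outside of turbolinks:load
$(document).on('ajax:success', '.delete-model', handleSuccess);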

We can add an ajax:error listener as well to handle errors, as sketched below.
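
A minimal sketch of such a listener (the response_message key is an assumption about what the backend returns in its error payload):

// page.js
$(document).on('ajax:error', '.delete-model', event => {
  const [response, status, xhr] = event.detail;
  // fall back to a generic message if the backend did not provide one
  alert((response && response.response_message) || 'Something went wrong. Please try again.');
});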

The Advantages

This is a no-hassle method of writing code in Ruby (well, for the rendering of the element at least). The old way that I used, the jquery.ajax() method, requires more tear-inducing Javascript code to conjure. For a full stack Rails developer, that is not the most welcome prospect.

On top of that, rendering the HTML element with a Rails helper lets us make use of the various other Rails helpers to supercharge our development speed.

URL route helpers resolve the actual RESTful route to call with ease. Since the path is dynamically interpolated, no code change is required should a route change.

We can also still take advantage of rails-ujs, which has some handy features commonly needed during development. In the example above, I added a data-confirm attribute. This will trigger rails-ujs to ask for confirmation before proceeding with the request, and to gracefully abort the operation should the user cancel the confirmation.
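
If you need to hook into that confirmation flow yourself, rails-ujs also fires confirm events. A minimal sketch, assuming the confirm:complete event whose detail carries the user's answer:

// a minimal sketch; the logging is purely illustrative
$(document).on('confirm:complete', '.delete-model', event => {
  // event.detail[0] is true when the user accepted the confirmation dialog
  const [accepted] = event.detail;
  if (!accepted) {
    console.log('Delete cancelled by the user');
  }
});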

Using rails-ujs requires proper setup in the new Rails 6 version. Check out my article on how to properly set up Rails 6 with Bootstrap and, of course, integrate the new rails-ujs in its brand new frontend paradigm running on webpack.

Conclusion

Utilizing Rails helpers as much as possible plays to the key strength of the Rails framework, which is rapid development. This method of listening to remote API calls and acting on the response allows exactly that.

AWS Lambda and API Gateway Integration With Terraform To Collect Emails

This is a documentation on creating a service that collects emails. It runs on serverless technology, utilizing AWS Lambda and API Gateway. It is also made easy to deploy to the cloud with infrastructure as code via Terraform, in a plug-and-play fashion.

Motivation

Often, I have to make static websites that are not completely static, because they require a backend to collect emails. While 3rd party services like Mailchimp and SendGrid have their own SDKs to support easy integration for email collection, we have to worry about hitting the limits of their packages and plans. This translates to stress for developers, as we have to find a solution quickly and properly. And if it happens on a weekend or a Friday, which somehow is always the case since more people are surfing the net then, the intensity is amplified.

For a new website, it is very hard to gauge the traffic and thus the plan required for the 3rd party service. This poses difficulties when budgeting for the project. Underutilizing the service also translates to unnecessary cost. The best kind of plan for such a website is, in my opinion, a pay-as-you-go model, and that can be achieved by integrating with cloud providers like AWS.

Technology Stack

AWS Lambda

Enter AWS Lambda, where you only pay for what you use. You do not need to fork out money at the start of your project; you just pay for how much you use, relieving you of the worry of wasting money on idle resources. In fact, cost only becomes an issue once you approach 1 million potential users signing up with their email every month, because AWS Lambda includes 1 million free requests every month before it starts charging. That is highly unlikely for a new website, which means you now have a backend for your static website for free.

API Gateway

For the serverless function, that is AWS Lambda, to be reachable from the Internet via an API, we need API Gateway. It exposes the serverless function to the World Wide Web with an HTTPS endpoint, running on the transport layer security (TLS) protocol to uphold security by default. This allows your websites to use the serverless function via API calls.

Terraform

To set up the infrastructure, the usual way is to navigate the AWS management console, deploy the required AWS resources and link them up. This can be a challenge if you are not familiar with the required configurations. Not only does this translate to a loss of precious time debugging issues, time developers could otherwise have spent with their loved ones (and challenging the meme below), it also leads to frustration.

While frustration is part and parcel of life as a programmer, we can avoid some of it with our knowledge of code. This is where Terraform enters the fray. It is infrastructure as code: you write the configuration of the infrastructure once and can deploy it multiple times, without having to go through the whole forest of the AWS console each time. This means you do not need to remember every single step, and you do not need to deal with surprise bugs because you forgot one of them or, worse, made a spelling error.

Programming is like magic. You write very specific instructions in arcane languages to invoke commands, and if you get it even a little bit wrong you risk unleashing demons and destroying everything.

— Diana Carrier (@artemis_134) June 23, 2018

Since the infrastructure blueprint is in code, we can leverage version control with git and work together to improve the code base along the way, without fear of being unable to roll back to the previous working configuration.

Terraform Files

I will start off with the Terraform files required to set up the infrastructure for deploying the code. Let’s begin with the place to store our emails.

The database – AWS DynamoDB

I will store the collected emails in AWS’s own noSQL database, DynamoDB. It is a fast, simply structured and schemaless storage which fits my use case very aptly.

It allows fast, simultaneous writes, so there is no fear of race conditions from a spike in the volume of signups during a PR event promoting the product and getting people to leave their emails on the website.

Since it is schemaless, we can easily collect new user details on top of their emails along the way, without having to migrate or fiddle with the structure of the database. With proper metaprogramming, you do not need to touch the backend code either, leaving only the frontend work of adding the new text fields for data collection.

For the sake of argument, we could also use a traditional relational database management system (RDBMS) for this project. It is queried with SQL, a language that most, if not all, developers who have ever touched a database would know. There is no need for fancy noSQL in this simple project. In addition, the chances of leveraging the scaling advantage of noSQL over SQL databases are low, because you need a lot of traffic for that to become a worry, and for a new website that is highly unlikely.

However, highly influenced by cost, I am still sticking with DynamoDB in this case. With AWS RDS hosting a managed relational database, the cheapest MySQL instance already goes for around 20 USD a month, compared to the pay-as-you-go model that DynamoDB employs. On top of that, DynamoDB has a generous amount of free usage and storage under its free tier, and that free tier does not last for just the first 12 months after signup but forever, unlike its RDS counterpart. We will probably NOT incur any cost using DynamoDB unless the marketing for your new website is brilliant.

resource "aws_dynamodb_table" "main" {
  name = "${var.project_name}-dynamodb_table"
  billing_mode = "PROVISIONED"
  read_capacity = var.dynamodb-read_capacity
  write_capacity = var.dynamodb-write_capacity
  hash_key = "email"

  attribute {
    name = "email"
    type = "S"
  }
}

Provisioning the database is the simplest part. I am using Terraform variables to set the number of read and write capacity units required, as well as the table name, for robustness’ sake.

I have set the billing mode to “PROVISIONED” for simplicity’s sake. After all, I am not expecting any insane burst of traffic for a site that is not popular. Even if one comes, maybe due to some incredible promotion at a hugely popular event, I do not expect the load to require scaling the read and write capacities of the database. It is going to be a quick write of a few bytes.

On top of that, provisioned capacity means less configuration is needed for the permissions to autoscale the capacities of the database. That can take some time to configure, and since it is outside the scope of this article, I will stick with the “PROVISIONED” billing mode.

The hash_key, or “partition key” in other definitions, is analogous to the primary key in a SQL database table. It requires specific details under the attribute property. You can specify the range_key, or “sort key”, here if you require one; remember to add an attribute block to describe it as well.

Other attributes that are neither the partition key nor the sort key do not need an attribute block in this file. You can simply write them to the database and they will register. After all, this is a schemaless database.

On top of that, it is a fully managed database, so it comes with all the goodies like backup and version maintenance to spare developers from all these chores.
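
To make the schemaless point concrete, here is a minimal sketch using the Node.js AWS SDK (the same client the Lambda function below uses); the first_name attribute is purely illustrative and needs no change to the Terraform above:

// a minimal sketch; first_name is illustrative and was never declared in the Terraform file
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10', region: 'eu-west-1' });

ddb.putItem({
  TableName: 'todo-project-dynamodb_table',
  Item: {
    email: { S: 'test@test.com' },  // the declared hash key
    first_name: { S: 'Vic' }        // undeclared attribute; DynamoDB accepts it as-is
  }
}, (err, data) => console.log(err || data));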

The backend – AWS Lambda

Next is the lambda function. It is written in Javascript using Nodejs. The file below is the configuration file to set up the required infrastructure. Let’s dive into it.

resource "aws_lambda_function" "main" {
  filename = var.zipfile_name
  function_name = "${var.project_name}"
  role = aws_iam_role.main.arn
  handler = "index.handler"

  source_code_hash = "${filebase64sha256("${var.zipfile_name}")}"

  runtime = "nodejs12.x"
}

resource "aws_iam_role" "main" {
  name = "${var.project_name}-iam_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "main" {
  name = "main"
  path = "/"
  description = "IAM policy for lambda to write to dynamodb table and logging"

  policy = templatefile("${path.module}/lambda_policy.tmpl", { dynamodb_arn = aws_dynamodb_table.main.arn })
}

resource "aws_iam_role_policy_attachment" "main" {
  role = "${aws_iam_role.main.name}"
  policy_arn = "${aws_iam_policy.main.arn}"
}

resource "aws_lambda_permission" "main" {
  statement_id = "AllowExecutionFromAPIGateway"
  action = "lambda:InvokeFunction"
  function_name = aws_lambda_function.main.function_name
  principal = "apigateway.amazonaws.com"

  source_arn = "${aws_api_gateway_rest_api.main.execution_arn}/*/*/*"
}

The backend code is uploaded as a zip file, and its base64 hash is used to detect changes to the source code. The code needs to be compressed and zipped before this step. We will see how to automate this process later.

This lambda function will need the permissions to write to the dynamoDB table. This is done using

  • aws_iam_role to establish trust between the 2 AWS services
  • aws_iam_policy to give the lambda function permission to access the database resource and perform the PutItem action. Details of the policy are interpolated via a template file, which we will go through later
  • aws_iam_role_policy_attachment to bind the aws_iam_role to the aws_iam_policy on the lambda function
  • aws_lambda_permission to allow API Gateway to integrate with the lambda function and invoke it

The template file for the aws_iam_policy is shown below. It lists the actions that the lambda function is permitted to perform on the specified dynamodb table. It also contains the permissions for the lambda function to push logs to AWS Cloudwatch. These logging permissions are the defaults for a lambda function; this template adds the DynamoDB permissions on top of them. Note the interpolated dynamodb_arn variable, which justifies using a template file instead of hardcoding the whole policy in the main Terraform file, for robustness’ sake.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "dynamodb:PutItem",
      "Resource": "${dynamodb_arn}",
      "Effect": "Allow"
    },
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}

The API layer – AWS API Gateway

API Gateway is required to expose the lambda function for consumption by servers and websites via a URL endpoint. The endpoint will be served over HTTPS, which requires some extra configuration as documented below.

resource "aws_api_gateway_rest_api" "main" {
  name = var.project_name
}

resource "aws_api_gateway_resource" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  parent_id = aws_api_gateway_rest_api.main.root_resource_id
  path_part = "email"
}

resource "aws_api_gateway_integration" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  integration_http_method = aws_api_gateway_method.main.http_method
  type = "AWS_PROXY"
  uri = aws_lambda_function.main.invoke_arn
}

resource "aws_api_gateway_integration_response" "main" {
  depends_on = [aws_api_gateway_integration.main]

  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  status_code = aws_api_gateway_method_response.main.status_code
}

resource "aws_api_gateway_method" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_deployment" "main" {
  depends_on = [
    "aws_api_gateway_integration_response.main",
    "aws_api_gateway_method_response.main",
  ]
  rest_api_id = aws_api_gateway_rest_api.main.id
}

resource "aws_api_gateway_method_settings" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  stage_name = aws_api_gateway_stage.main.stage_name
  
  # settings not working when specifying the single method
  # refer to: https://github.com/hashicorp/terraform/issues/15119
  method_path = "*/*"

  settings {
    throttling_rate_limit = 5
    throttling_burst_limit = 10
  }
}

resource "aws_api_gateway_stage" "main" {
  stage_name = var.stage
  rest_api_id = aws_api_gateway_rest_api.main.id
  deployment_id = aws_api_gateway_deployment.main.id
}

resource "aws_api_gateway_method_response" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  status_code = "200"
}

output "endpoint" {
  value = "${aws_api_gateway_stage.main.invoke_url}${aws_api_gateway_resource.main.path}"
}

So let’s break it down.

The aws_api_gateway_rest_api represents the project in its entirety.

The aws_api_gateway_resource refers to each api route of this project, and there is only 1 in this case.

I have set up only 1 stage environment (aws_api_gateway_stage) for this project, using a Terraform variable. You can set up different stages to differentiate the staging and production environments.

The aws_api_gateway_stage is associated with an aws_api_gateway_method_settings that sets the throttling rate of the API to prevent spam and overloading. For the method_path property, the wildcard route is used so the settings apply to all routes instead of the only API route that was created. It is trivial in this case, but the reason for picking this “easy” route is simply a bug. If I were to specify the exact route, which is in the form {resource_path}/{http_method}, the throttling settings would not propagate. It was documented here on GitHub but was not properly resolved. Leaving it here for now.

The aws_api_gateway_deployment configures the deployment of the API. Note the depends_on attribute that was assigned. This explicit dependency is critical to ensure the deployment is called into effect after all the necessary resources have been provisioned.

The aws_api_gateway_integration configuration sets the integration type to lambda proxy, using the POST HTTP method without any authorization, as specified by the aws_api_gateway_method configuration. Lambda proxy allows us to handle the request like we would in a typical web application backend framework: the full request object is passed to the lambda function, and API Gateway plays no part in mapping any of the request parameters. API Gateway mapping has great potential for integrating interfaces properly, but for our use case it is not necessary. I find this article does a great job of explaining the API Gateway features with easy-to-consume information and summaries, like a gameshark guide book written by the half-blood prince. Do take a look to understand AWS API Gateway better.
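
To illustrate, the event object a proxy-integrated lambda receives looks roughly like this (an abridged sketch; the values are illustrative):

// an abridged sketch of the Lambda proxy event; values are illustrative
exports.handler = async (event) => {
  console.log(event.httpMethod); // "POST"
  console.log(event.path);       // "/email"
  console.log(event.headers);    // the request headers, exactly as the client sent them
  console.log(event.body);       // the raw request body as a string, e.g. '{"email":"test@test.com"}'
};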

The aws_api_gateway_integration_response is responsible for handling the response from the lambda function. This is where we can make changes to the headers returned from the lambda function using the response_parameters property, which is not used in this case. This is also the place to map and transform the response data from the backend to fit the desired data structure using the response_templates property.

The aws_api_gateway_method_response is where we can filter what response headers and data from aws_api_gateway_integration_response to pass on to the caller.

The transformation and mapping of headers and data from the backend (i.e. the lambda function) in aws_api_gateway_integration_response, and the filtering of headers and data before passing them to the frontend in aws_api_gateway_method_response, are not needed in this sample application. They are just good knowledge to have. There are 2 reasons why we do not need them here.

First, in a bit, we will go through the frontend, which will make an API call that is a simple request. A simple request does not require a preflight request, which is an API call made by browsers prior to the actual API call; simple requests are deemed safe since they use only standard CORS-safelisted request headers. In the event that one does need a preflight request, because one is not making a simple request, we would need to set up another API route that transforms the headers returned from the backend and allows the relevant headers to be passed on to the frontend for this preflight request. This allows the frontend website to overcome the CORS policy enabled by default in modern browsers. It would mean configuring a new set of aws_api_gateway_rest_api, aws_api_gateway_integration, aws_api_gateway_method, aws_api_gateway_integration_response and aws_api_gateway_method_response just for this preflight request. Things can get complicated here, so I will leave it out of this article. If you still need to implement CORS, [this gist](https://gist.github.com/keeth/6bf8b67c82f9a085e03ecbb289a859d6) is a good reference.

Second, we are using lambda proxy integration, so the full response from the lambda will be passed to the front end and mapped automatically, provided the response from the lambda code is properly formatted. Refer to this documentation for more details on it.

Lastly, the output resource will print the value of the endpoint of the API for us to integrate into our frontend.

The Admin Stuff

This file contains the details needed to set up Terraform and the variables we are using. The provider’s region attribute here is hardcoded, which should ideally not be the case; I have yet to figure out how to make this dynamic and robust. The names with the todo- prefix should be changed to fit your project.

We are using an S3 bucket as the Terraform backend to hold the state of the infrastructure provisioned by Terraform. Creation of the bucket will be automated via a script that we will go through in the section on deployment.

provider "aws" {
  version = "~> 2.24"
  region = "eu-west-1"
}

terraform {
  required_version = "~> 0.12.0"
  backend "s3" {
    bucket = "todo-project-tfstate"
    key = "terraform.tfstate"
    region = "eu-west-1"
  }
}

variable "project_name" {
  type = string
  default = "todo-project"
}

variable "region" {
  type = string
  default = "eu-west-1"
}

variable "stage" {
  type = string
  default = "todo-stage"
}

variable "zipfile_name" {
  type = string
  default = "todo-project.zip"
}

variable "dynamodb-read_capacity" {
  type = number
  default = 1
}

variable "dynamodb-write_capacity" {
  type = number
  default = 1
}

The Application

Here is the application code, written in Nodejs. It is a simple write to DynamoDB with basic error handling. It takes in only 1 parameter, the email. This code can definitely be improved by allowing more parameters to be written to the database dynamically, so that the same code base can be used for a site that collects the first and last name of the user as well as another site that collects the date of birth. I will leave that as a future personal quest (a rough sketch follows after the handler code below).

// Load the AWS SDK for Node.js
const AWS = require('aws-sdk');

// Set the region 
AWS.config.update({region: 'eu-west-1'});

// Create the DynamoDB service object
const ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});

exports.handler = async (event) => {
  console.log(JSON.stringify(event, null, 2));
  const params = {
    TableName: 'todo-project-dynamodb_table',
    Item: {
      'email' : {S: JSON.parse(event.body).email}
    }
  };

  // Call DynamoDB to add the item to the table (promise-based, so the handler can await the result)
  try {
    const result = await ddb.putItem(params).promise();
    console.log("Result", result);
    const response = {
      statusCode: 204,
      headers: {
        "Access-Control-Allow-Origin" : "*",
      },
    };
    return response;
  } catch(err) {
    console.log(err);
    const response = {
      statusCode: 500,
      headers: {
        "Access-Control-Allow-Origin" : "*",
      },
      body: JSON.stringify({ error: err.message }),
    };
    return response;
  }
};

One thing to note here is the need to return the Access-Control-Allow-Origin header in the response. The response also has to follow a particular, but straightforward and common, format for the lambda proxy integration with API Gateway to work. This lets API Gateway map the response properly to its method response and return it to the frontend websites, overcoming the CORS policy implemented by modern browsers.
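
As for the dynamic-parameters improvement mentioned earlier, a rough sketch could map whatever fields the frontend submits straight into DynamoDB attributes. This is a hypothetical extension, not part of the deployed code, and it assumes the same AWS SDK setup as above:

// a hypothetical extension: map every field in the request body to a DynamoDB attribute
// assumes the same ddb client (new AWS.DynamoDB(...)) as the handler above
exports.handler = async (event) => {
  const payload = JSON.parse(event.body);

  // build { email: {S: '...'}, first_name: {S: '...'}, ... } from whatever was submitted
  const item = Object.entries(payload).reduce((acc, [key, value]) => {
    acc[key] = { S: String(value) };
    return acc;
  }, {});

  try {
    await ddb.putItem({ TableName: 'todo-project-dynamodb_table', Item: item }).promise();
    return { statusCode: 204, headers: { 'Access-Control-Allow-Origin': '*' } };
  } catch (err) {
    return {
      statusCode: 500,
      headers: { 'Access-Control-Allow-Origin': '*' },
      body: JSON.stringify({ error: err.message }),
    };
  }
};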

Deployment

I will be using 3 Ruby scripts for deployment-related tasks, namely init.rb, apply.rb and destroy.rb, together with a helper service object, get_aws_profile.rb, for the deployment process.

Let’s take a look at them.

get_aws_profile.rb

# get_aws_profile.rb

class GetAwsProfile
  def self.call
    aws_profile = "todo-aws_profile"

    begin
      aws_access_key_id = `aws --profile #{aws_profile} configure get aws_access_key_id`.chomp
      abort('') if aws_access_key_id.empty?

      aws_secret_access_key = `aws --profile #{aws_profile} configure get aws_secret_access_key`.chomp
      abort('') if aws_secret_access_key.empty?
    rescue Errno::ENOENT => e
      abort("Make sure you have aws cli installed. Refer to https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html for more information.")
    end

    p "AWS_ACCESS_KEY_ID = #{aws_access_key_id}"
    p "AWS_SECRET_ACCESS_KEY = #{aws_secret_access_key}"

    [aws_profile, aws_access_key_id, aws_secret_access_key]
  end
end

This is a helper that gets the aws_access_key_id and aws_secret_access_key for use in the scripts. Note that it uses the aws cli to obtain the keys, so the cli has to be installed on your local machine before running the scripts. It also assumes you are using a named profile to hold your credentials.

I don’t really like this setup since it requires these prerequisites, but that can be solved in the future.

init.rb

The first script to run is init.rb.

init.rb creates the S3 bucket to be used as the Terraform backend. The head_bucket call checks for the presence of this bucket and throws an exception if the bucket does not exist. The rescue block, if triggered, will then create the missing bucket.

The initialization process on terraform is run via its docker image.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'aws-sdk-s3', '~> 1'
end

require './get_aws_profile.rb'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

s3_client = Aws::S3::Client.new(
  access_key_id: aws_access_key_id,
  secret_access_key: aws_secret_access_key,
  region: 'eu-west-1'
)

begin
  s3_client.head_bucket({
    bucket: 'todo-project-tfstate',
    use_accelerate_endpoint: false
  })
rescue StandardError
  s3_client.create_bucket(
    bucket: 'todo-project-tfstate',
    create_bucket_configuration: {
      location_constraint: 'eu-west-1'
    }
  )
end

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  init`

puts response

apply.rb

Once initialized, the next script to run is apply.rb.

Prior to applying the Terraform infrastructure, the backend code is packaged into a zip file. After application, the zip file is deleted for housekeeping.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'rubyzip', '>= 1.0.0'
end

require './get_aws_profile.rb'
require 'zip'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

folder = Dir.pwd
input_filenames = ['index.js']
zipfile_name = File.join(Dir.pwd, 'todo-project.zip')

File.delete(zipfile_name) if File.exist?(zipfile_name)

Zip::File.open(zipfile_name, Zip::File::CREATE) do |zipfile|
  input_filenames.each do |filename|
    zipfile.add(filename, File.join(folder, filename))
  end
end

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  apply -auto-approve`

puts response

File.delete(zipfile_name) if File.exist?(zipfile_name)

With this, the API is now deployed and can be called from any website. We will go through a sample frontend integration in a bit.

destroy.rb

Once you are done with the project or are in the process of debugging, the destroy script will remove all the resources deployed. It will also remove the S3 backend that was created outside of Terraform.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'aws-sdk-s3', '~> 1'
end

require './get_aws_profile.rb'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  destroy -auto-approve`

puts response

s3_client = Aws::S3::Client.new(
  access_key_id: aws_access_key_id,
  secret_access_key: aws_secret_access_key,
  region: 'eu-west-1'
)

begin
  s3_client.head_bucket({
    bucket: 'todo-project-tfstate',
    use_accelerate_endpoint: false
  })

  s3_client.delete_object({
    bucket:  'todo-project-tfstate',
    key: 'terraform.tfstate', 
  })
  s3_client.delete_bucket(bucket: 'todo-project-tfstate')
rescue StandardError
  puts "todo-project-tfstate S3 bucket already destroyed."
end

Sample Frontend Integration

<!DOCTYPE html>
<html>
<head>
  <script
  src="https://code.jquery.com/jquery-3.4.1.min.js"
  integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo="
  crossorigin="anonymous"></script>
</head>
<body>

  <h2>HTML Forms</h2>

  <form id="form">
    <label for="email">Email:</label><br>
    <input type="text" id="email" name="email" value="test@test.com"><br>
    <input type="submit" value="Submit">
  </form>

  <script type="text/javascript">
    $( "#form" ).submit(function(event) {
      event.preventDefault();

      $.ajax({
        type: "POST",
        url: "https://todo-endpoint.execute-api.eu-west-1.amazonaws.com/todo-stage/email",
        data: JSON.stringify({
          email: $('#email').val()
        }),
        success: function(data, textStatus, jqXHR) {
          debugger
        },
        error: function(jqXHR, textStatus, errorThrown) {
          debugger
        }
      });
    });
  </script>

</body>
</html>

Above is a simple html web page that has the email prefilled for demonstration purposes. The form will submit via jquery.ajax() using default settings so as not to trigger the need for a preflight request.

You will see that the email is added to the DynamoDB table, and the logs of the lambda function are recorded in AWS Cloudwatch.

Conclusion

This exercise helped me understand how Lambda integrates with API Gateway, as well as the immense potential of the latter as a robust middleware. In addition, I got to understand preflight requests and CORS better, as well as the jquery.ajax() function.

The project is saved in this repository for future reference.

How To Restrict File Search In Sublime Based On Project

This is a documentation on how to restrict text search to within specific directories per project you are working on in Sublime.

You may often find it annoying that a simple text search looks in folders that are not part of your source code. While you can easily flip the switch in the user settings of the Sublime text editor, that might not be ideal if you wear multiple hats like me and work with different frameworks.

One framework’s trash may be another’s treasure. Folders that are considered junk in one framework might be important in another. And if we happen to work on these frameworks simultaneously, we would have to constantly flip the switch on and off as we jump between the projects.

If you are using sublime text because the framework you are working on is simple enough to manage and you do not want the computation-heavy indexing and compilation process to be running in the background constantly, this is an article for you to boost your productivity.

The User Setting Way

The commonly documented way of configuring your Sublime editor is to tweak the configuration option in the user settings. Using restricted file search as an example, we can simply press CMD + , (assuming you are on a mac) to call up the user settings file in the Sublime editor.

Next add this setting under the Preferences.sublime-settings file as shown:

{
  ...
  "binary_file_patterns": [
    "node_modules/*",
    "public/packs/",
    "public/assets/",
    "public/packs-test/",
    "tmp/*",
    "*.jpg",
    "*.jpeg",
    "*.png",
    "*.gif",
    "*.ttf",
    "*.tga",
    "*.dds",
    "*.ico",
    "*.eot",
    "*.pdf",
    "*.swf",
    "*.jar",
    "*.zip"
  ]
  ...
}

The binary_file_patterns option instructs Sublime to treat these files as binary files. Binary files are not readable by humans, hence they are not considered by the Goto Anything or Find in Files functions by default.

The first five entries (node_modules, public/packs, public/assets, public/packs-test and tmp) are folders that we want excluded from the search process.

The rest refer to specific files based on their extensions.

This setup is what I typically use for my Rails projects.

The Project-specific Way

The better alternative is to apply the setting at the project level so that we do not overlap the settings between projects.

Save the project as a sublime project via File > Save As…

If moving the mouse is not your thing, you can simply create a file in the root directory with the .sublime-project extension.

Next, add the same binary_file_patterns setting shown previously under the folders key, as shown:

{
  "folders":
  [
    {
      "path": ".",
      "binary_file_patterns": [
        ...
      ]
    }
  ],
}

The path key is required. It is added by default when we save our project as a sublime project. More information on its function and purpose can be found here.

Next, close sublime. This time, open our project via the sublime project file that you have created.

Make a search now and you will realise that you are no longer searching in the folders that you have no interest in. The search process is also much faster than before because the program goes through fewer files and ignores the bulkier ones.

Hallelujah!

Housekeeping

Once we create the .sublime-project file, another file with the extension .sublime-workspace will also be created. The latter contains user-specific data, and you will not want to share it with other developers who may be working on the same source code as you. Add this file to your .gitignore file to achieve this.

Setup Bootstrap In Rails 6 With Webpacker For Development And Production

This is a documentation on how to set up Bootstrap 4 in Rails 6 using Webpacker. As the framework shifts away from Sprockets and the asset pipeline to embrace webpack, the dominant way of handling frontend affairs in the Javascript world, we have to adapt along.
The way to set up a css framework to bootstrap your application has undergone a revamp, and this article covers the essential steps.

Pre-requisites

This article assumes you have set up all the tools required for a typical Rails 6 application.

The main extra tool you will need, compared to previous versions of Rails, is the yarn package manager. You can install yarn on your computer in various ways based on your preference and your OS.

We will not be covering it in this article.

Setting Up Bootstrap

With the shift in paradigm of handling frontend assets, we no longer install frontend libraries using gems. In the past, these gems were merely wrappers around the Javascript libraries and files, which presented a number of problems.

First, the latest changes in the Javascript world will take some time to propagate into the Rails realm.

Second, having an intermediate wrapper increases the potential points of failure during the wrapping process.

Third, we are really dependent on the angels who are working on these wrappers. If they do not update the gems frequently, we are stuck with the old features. This can be frustrating if you are waiting for a certain bug fix or a new feature that is already available in the latest release.

To install bootstrap, run this command.

yarn add bootstrap jquery popper.js

This command will install the latest bootstrap package from the yarn registry and add its dependency entry and version to your package.json file. jquery and popper.js are libraries that Bootstrap depends on, especially in the Javascript department.

The JS And CSS Files

The main Javascript file, application.js should now reside in the app/javascript/packs folder. This is because Webpacker will now look for all the javascript files in this directory to compile. This is the default setting for Webpacker.

Of course, you can go ahead and change the configuration to your liking. However, keep in mind that Rails promotes convention over configuration. This implies that, as much as possible, methodologies and practices should follow a certain default unless deviation is absolutely necessary. This has multiple advantages. My favorite is the portability of code among fellow Rails developers: developers can easily understand the flow of logic and where to find bugs because things are where they are expected to be. This cuts down development time and cost greatly.

The application.js file should look like this:

require("@rails/ujs").start()
require("turbolinks").start()
require("@rails/activestorage").start()
require("channels")
require("bootstrap")

// stylesheets
require("../stylesheets/main.scss")

The first 4 require statements are the defaults already present in the file.

The require("bootstrap") line adds the Bootstrap Javascript library.

The last require adds your custom stylesheet. This file can be placed anywhere. In the above example, the path is relative to where application.js is, so the file lives at app/javascript/stylesheets/main.scss.

Next, we import the Bootstrap stylesheet files in the main stylesheet file.

@import "bootstrap/scss/bootstrap";

Note that we are importing files from the node_modules folder, and not a bootstrap folder placed in the relative path of the current directory of the main stylesheet file.

Also, you do not need the ~ in front of the path to signify that it is from the node_modules folder like you would usually do for other non-Rails project using webpack. The tilde alias in webpack is a default webpack configuration that will resolve to the node_modules folder. While it will still work here, it is not required as the node_modules folder is already configured as part of the search paths that webpack will look for when resolving the modules.

Now, you may be wondering how the Bootstrap library will work without importing any of its dependencies, namely popper.js and jquery. We will come to that in a minute. Before that, let’s look at the views.

The Views

Now, we need to add the javascript and stylesheet files to the page. Following convention in this example, we add them to the application.html.erb layout so that the Bootstrap framework can be accessed on all pages. These lines of code go in the head section of the layout template.

<%= stylesheet_pack_tag 'application' %>
<%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %>

There are a number of things that differ from the old implementation.

The stylesheet_pack_tag line adds the path of the stylesheets that Webpacker will compile. Note that this only happens if the extract_css option is set to true in the webpacker.yml file. More on this later.

As you can see, there is no more stylesheet_link_tag. In the past, that helper would fetch files from the public/assets folder, into which the asset pipeline compiled stylesheets and javascript files with additional pre- and post-processing. Now, everything is done by webpack.

Here is what’s happening.

Webpack looks at application.js and finds the stylesheet files included in it. Then, using a combination of webpack loaders, it knows how to compile and translate the scss syntax, the url paths of assets used, etc. into a css file that the browser can read and apply.

These loaders are already included and configured by Webpacker. However, there are many loaders out there that are not included by default. They tend to be less conventionally used and will require manual intervention on your side.

One example is using Ruby code inside your javascript files. This requires rails-erb-loader, which will “teach” webpack to understand the erb syntax. The implementation involves a number of steps, one of which is to append this loader to the webpack environment.js configuration file; a rough sketch follows below. Thankfully, the community has deemed this a common enough use case that there is, at least, a task that comes with the Webpacker gem to set it up easily.
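
Roughly what that configuration could look like, assuming rails-erb-loader has been added via yarn; the Webpacker installer task generates something similar, and the exact options may differ by version:

// config/webpack/environment.js: a rough sketch, assuming rails-erb-loader is installed
const { environment } = require('@rails/webpacker')

// teach webpack to run .erb files through Rails before the other loaders
environment.loaders.prepend('erb', {
  test: /\.erb$/,
  enforce: 'pre',
  exclude: /node_modules/,
  use: [{
    loader: 'rails-erb-loader',
    options: {
      runner: 'bin/rails runner'
    }
  }]
})

module.exports = environment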

The compilation process mentioned above, however, is not applied in the development environment by default. This is due to the extract_css setting in the webpacker.yml file. More about this and its implications in a bit.

Note that stylesheet_link_tag still works for assets you place in the app/assets folder. However, as Rails moves away from the old Sprockets and asset pipeline convention, this is expected to be deprecated in the future.

The Webpacker Configuration File

Lastly, we need to add the dependencies of bootstrap. This takes place in the config/webpack/environment.js file.

const { environment } = require('@rails/webpacker')
const webpack = require('webpack')

environment.plugins.append('Provide', new webpack.ProvidePlugin({
  $: 'jquery',
  jQuery: 'jquery',
  Popper: ['popper.js', 'default']
}))

module.exports = environment

As you can see, we are utilising webpack’s ProvidePlugin to make the dependency libraries available in all the javascript packs instead of having to import them everywhere.

This is just an example of how we can import files with Webpack in Rails. And in this case, especially for jQuery, it makes a lot of sense as there is a high chance that we will be using it in other javascript files.

The extract_css Option

There is one last point I would like to touch on. That is the extract_css option in the config/webpacker.yml file.

When set to true, webpack will compile the stylesheet files that were imported into the javascript files into external standalone stylesheets. These compiled files will then be added into the views via the stylesheet_pack_tag helper method as mentioned earlier.

In comparison, when set to false, the stylesheets are not compiled into standalone files. Instead, they are added to the view as a blob at runtime by the relevant javascript file. This takes place only after the javascript file has been completely downloaded by the browser.

In development mode, the conventional setting for the extract_css option is false, and this has quite a significant implication on how the website will behave.

One, there might be a flash of unstyled content (FOUC) when the page loads, because the javascript files are loaded asynchronously. This is unlike css files, which are blocking resources that pause the rendering of the website until they have been downloaded. With asynchronous loading, the website continues rendering while the javascript file downloads; only once it is fully downloaded is the css blob computed and inserted into the html source code. If the web page renders before this occurs, the styles are not yet present, and FOUC occurs.

Two, the stylesheet_pack_tag is not needed in the development environment under the default setting. Things will seem to work fine until the code is pushed to the production environment, where the extract_css option is set to true, desirably and by default. So make sure to use the stylesheet_pack_tag helper even if things seem to work fine without it in development.

Conclusion

At this point, the application should be running with Bootstrap in place. Do test out how it differs in the production environment as compared to development.

JWT With Refresh Token Using Devise And Doorkeeper Without Authorization

This is a documentation on setting up the authentication system of a rails project in a primarily API environment.

Rails is essentially a framework for bootstrapping applications in the web environment, so its support for APIs is lacking. One missing piece is an off-the-shelf authentication system that fits both the API and web environments in the same monolith application.

The Devise gem, while hugely popular and established as the de facto authentication gem in the Rails world, does not come with an authentication system fit for interaction via APIs. The main reason is that it relies on cookies, which are strictly a browser feature.

To overcome this, we often have to couple it with other gems to leverage its scaffolded features for user authentication.

In this article, we will use the Doorkeeper and Devise combination to provide authentication using JSON Web Tokens (JWT), the modern-day best practice for authentication via APIs.

But let us first understand what kind of authentication system we are building and why we choose Doorkeeper.

The Example Authentication System

Now, as a disclaimer, there are many ways to setup an authentication system.

One such consideration is the devise-jwt gem, which serves as a direct replacement to the cookies for your APIs. It is simple to implement and allows you to choose from multiple strategies to expire your token. Except that it does not come with a refresh token.

This implies that the token will expire and the user will have to log in again. If your application requires such security, you can consider that gem instead.

However, in this article, the authentication system I would like to set up is one that lets the user log in with a JWT that expires; upon expiry, the frontend can use a refresh token to get a new JWT without having the user log in again. This allows the user to stay logged in without compromising security excessively.

Why do we need to ensure the JWT expires?

Security Considerations Using JWT

Allowing users to stay logged in permanently is the standard user flow for many applications nowadays. The easiest way to implement this is to never expire the JWT. However, that is a recipe for disaster. It is akin to passing your password around when making API requests: the moment it gets compromised, malicious attackers have all the time in the world to explore your account and plan their attacks, leaving the users all the time in the world to say their prayers.

We thus have to enforce expiry on the JWT at the very least. The way to accomplish that without forcing the user to log in again is to use a refresh token.

A refresh token stays on the local machine for the whole of its lifetime, or until the user actively logs out. This allows the access token, which is dispatched out into the wild wild west otherwise known as the Internet, to at least expire within a certain period of time. When it expires, the frontend can use the refresh token to get a new access token, allowing the user to continue the current session as though he or she were still logged in. So even if the access token gets compromised in the world beyond the walls, the potential damage is reduced.

This mechanism is codified in a standard known as OAuth. There are many libraries out there that already implement it, and it is widely adopted by many of the software products we use, like Google accounts, Facebook and Twitter.

However, while this works for authenticating with these external providers, it has a crucial step that we do not want when implementing our own in-house authentication system (I am referring to the old school email and password login): the authorization step.

Some of us may have come across such a request when trying to sign up with an app via Facebook.

While this feature is absolutely essential in the OAuth protocol, it presents an awkwardness when we want to leverage OAuth libraries to implement JWT with refresh tokens for our in-house authentication.

The Awkwardness Of OAuth

Just to make sure we are on the same page, here is a summary of the points that lead up to this awkwardness.

First, we need to make the tokens expire for security reasons.

Second, refresh tokens are here to the rescue, and they are used in the OAuth protocol.

Third, unfortunately, OAuth requires an authorization step, which an in-house authentication system does not need.

Last, we cannot leverage the various OAuth implementations out there to implement a JWT with refresh token without hacking these libraries to somehow sidestep the authorization step.

Hacking Doorkeeper

The OAuth library that we will be using is Doorkeeper. Its wiki page already has a section on skipping the authorization step, which certainly signals the demand for such an implementation. However, there are some points missing from this implementation and this article will try to cover more of them. These steps are highly influenced by this blog post.

First, install doorkeeper and its migration files, following its instructions.

rails g doorkeeper:install
rails g doorkeeper:migration

Changes To The Migration Files

Edit the migration file like this.

# frozen_string_literal: true

class CreateDoorkeeperTables < ActiveRecord::Migration[6.0]
  def change
    create_table :oauth_access_tokens do |t|
      t.references :resource_owner, index: true
      t.integer :application_id
      t.text :token, null: false
      t.string :refresh_token
      t.integer :expires_in
      t.datetime :revoked_at
      t.datetime :created_at, null: false
      t.string :scopes
    end

    # required to allow model.destroy to work
    create_table :oauth_access_grants do |t|
      t.references :resource_owner, null: false
      t.integer :application_id
      t.string   :token, null: false
      t.integer  :expires_in, null: false
      t.text     :redirect_uri, null: false
      t.datetime :created_at, null: false
      t.datetime :revoked_at
      t.string   :scopes, null: false, default: ''
    end

    # ensure a valid reference to the resource owner's table
    add_foreign_key :oauth_access_tokens, :users, column: :resource_owner_id
  end
end

Compared to the originally generated copy of the migration file, we have removed the oauth_applications table, which refers to the application that we want to grant permission to in the authorization step. Since we are skipping the authorization, there is no need to have this unused table.

Next we have changed

t.references :application, null: false

into

t.integer :application_id

Since the table is no longer present, we cannot use the references helper and need to resort to specifying the basic data type. We keep this column in the database, even though we have deleted the applications table, because Doorkeeper uses this attribute while running its operations. Without it, an error will occur along the lines of “column not found”.

In fact, we also do not need the oauth_access_grants table, which is the bridge between the oauth_access_tokens table and the oauth_applications table. It records which token authorized which application. However, without it, an error will be thrown when destroying a user record from the database. If you do not have such a feature, feel free to remove this table as well.

Lastly, only keep the foreign key implementation on oauth_access_tokens and change the model name according to whatever you have named your model.

Changes To The Initializer File

Edit the configuration in the doorkeeper initializer file as such:

# frozen_string_literal: true

Doorkeeper.configure do
  ...
  resource_owner_from_credentials do |routes|
    user = User.find_for_database_authentication(email: params[:email])
    request.env['warden'].set_user(user, scope: :user, store: false)
    user
  end
  ...
  use_refresh_token
  ...
  grant_flows %w[password]
  ...
  skip_authorization do
    true
  end
  ...
  api_only
  base_controller 'ActionController::API'
end

We are essentially following this documentation on their wiki, but with some additional content and some slight changes, to implement an authentication flow whereby the token is returned in exchange for the credentials of the resource owner, in this case the user’s email and password.

The resource_owner_from_credentials block is the main implementation.

In it, we instruct Doorkeeper to use the Devise method find_for_database_authentication to authenticate the correct user. This method uses the underlying warden gem in Devise to do its authentication magic. It will, however, save the user in the session, which can be a problem when we check for sessions at the controller level. More on this later. We undo this on the next line, where we instruct warden to set the user only for the current request and not store it in the session, as documented here.

Uncomment use_refresh_token to ensure a refresh token is generated on login.

The grant_flows %w[password] line is for older versions of Doorkeeper at 2.1+. More information is in the above mentioned wiki page.

The skip_authorization block instructs Doorkeeper to skip the authorization step.

Setting the mode to api_only can help optimize the application to a certain extent. For example, it skips forgery protection checks that are not necessary in an API environment, which reduces computational requirements and latency.

Lastly, I am explicitly setting the base controller to ActionController::API instead of the default ActionController::Base, although this should already be the case once the mode is set to api_only.

Controller Level

Devise comes with a helper method, current_user (or whatever your model is named), to access the currently authenticated resource. This, however, will return nil in the current implementation because the underlying method will not work. The underlying method, taken from the source code, is:

def current_#{mapping}
  @current_#{mapping} ||= warden.authenticate(scope: :#{mapping})
end

With reference to this stackoverflow answer, we will modify it to look like this:

def current_user
  @current_user ||= if doorkeeper_token
                      User.find(doorkeeper_token.resource_owner_id)
                    else
                      warden.authenticate(scope: :user, store: false)
                    end
end

We have essentially overwritten Devise’s default implementation to look up the “current_user” using the doorkeeper_token first, and fall back on the default implementation otherwise. The fallback is useful in the event that our application still uses the traditional login method via a web browser. Feel free to remove it if you are not going to have any such requests coming from a web browser. And of course, remember to handle the scenario of a nil doorkeeper_token.

Last but not least, implement the authorization check on the correct routes and actions in the Doorkeeper::TokensController via the before_action callback, like you would when using Devise alone.

before_action :doorkeeper_authorize!

Custom Controller

I personally have some custom code that I want to add to all my APIs so that when the frontend consumes them, it is not left stunned by responses with differing JSON structures.

I keep a response_code and a response_message in all my APIs for the frontend to react accordingly and trigger the desired UX flow.

Here is how I modify my controller. Let’s start off with some modification to the Doorkeeper modules.

module Doorkeeper
  module OAuth
    class TokenResponse
      def body
        {
          # copied
          "access_token" => token.plaintext_token,
          "token_type" => token.token_type,
          "expires_in" => token.expires_in_seconds,
          "refresh_token" => token.plaintext_refresh_token,
          "scope" => token.scopes_string,
          "created_at" => token.created_at.to_i,
          # custom
          response_code: 'custom.success.default',
          response_message: I18n.t('custom.success.default')
        }.reject { |_, value| value.blank? }
      end
    end
  end
end

Here, I modify the response from Doorkeeper to add in my required keys. I am using I18n to handle the custom messages and prepare the application for a global audience.

Next, the error response. By default, Doorkeeper returns the keys error and error_description. That is different from what I want. I will overwrite it totally.

module Doorkeeper
  module OAuth
    class ErrorResponse
      # overwrite, do not use default error and error_description key
      def body
        {
          response_code: "doorkeeper.errors.messages.#{name}",
          response_message: description,
          state: state
        }
      end
    end
  end
end


name, description and state are accessible variables in the default class. I integrate them into my custom API response for standardization purposes.
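
For example, an invalid login under the password grant would then produce a body along these lines (the exact name and description come from Doorkeeper's locale files, so treat this as an approximation):

{
  "response_code": "doorkeeper.errors.messages.invalid_grant",
  "response_message": "The provided authorization grant is invalid, expired, revoked, or does not match the request.",
  "state": null
}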

Now the controller. There are 3 main methods: login, refresh and logout. Let’s go through them.

module Api
  module V1
    class TokensController < Doorkeeper::TokensController
      before_action :doorkeeper_authorize!, only: [:logout]

      def login
        user = User.find_for_database_authentication(email: params[:email])

        case
        when user.nil? || !user.valid_password?(params[:password])
          response_code = 'devise.failure.invalid'
          render json: {
            response_code: response_code,
            response_message: I18n.t(response_code)
          }, status: 400
        when user&.inactive_message == :unconfirmed
          response_code = 'devise.failure.unconfirmed'
          render json: {
            response_code: response_code,
            response_message: I18n.t(response_code)
          }, status: 400
        when !user.active_for_authentication?
          create
        else
          create
        end
      end

      def refresh
        create
      end

      def logout
        # Follow doorkeeper-5.1.0 revoke method, different from the latest code on the repo on 6 Sept 2019

        params[:token] = access_token

        revoke_token if authorized?
        response_code = 'custom.success.default'
        render json: {
          response_code: response_code,
          response_message: I18n.t(response_code)
        }, status: 200
      end

      private

      def access_token
        pattern = /^Bearer /
        header = request.headers['Authorization']
        header.gsub(pattern, '') if header && header.match(pattern)
      end
    end
  end
end

Firstly, I am applying the doorkeeper_authorize! callback on the logout method only as that is the only method that will require the user to be logged in.

The login method largely follows what we defined in the initializer file under the resource_owner_from_credentials block. The modification here is to define specific error scenarios and their respective response_code. For the scenarios that are of no interest to me, I leave it to the catch-all case and return what is now the default modified ErrorResponse.

The second case in particular is specific to my project. I allow admin users to create the users, and have a flag (created_by_admin_and_authenticated) to differentiate them.

  • nil means the user registered normally
  • false means they are created by the admin user, but have yet to authenticate with the email that our server sent out to them
  • true means they are created by admin user and have also authenticated their email address

I will force users who are created by admin users but have yet to authenticate via email to reset their password, leveraging on what Devise has already provided with its password module.

Note: there is definitely much to be optimized here. For example, the find_for_database_authentication method is called twice for a successful user login, once in this custom controller and once more in the default Doorkeeper::TokensController create method.

The refresh method to refresh the access_token is practically the same as the default create method, but I am overriding it here because I use ApiPie to add documentation to the routes. For those not familiar with ApiPie, we define the required parameters, headers, etc. above line 31 to document the refresh method. Renaming the route in the process also gives the frontend developers I work with an API they are more familiar with.
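
For reference, here is a sketch of how the routes could point at this custom controller; the path names are my own choice and not mandated by Doorkeeper:

# config/routes.rb (sketch; path names are assumptions)
namespace :api do
  namespace :v1 do
    post 'login',   to: 'tokens#login'
    post 'refresh', to: 'tokens#refresh'
    post 'logout',  to: 'tokens#logout'
  end
end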

The logout method makes use of the revoke_token method, according to its source code, to revoke the JWT.

In my application, I require my frontend to add the JWT token in the Authorization header instead of a parameter in the request body based on convention. Doorkeeper, on the other hand, expects the token to be present in the params. To overcome this, I created the custom private access_token method to get the token in the header that the front end has placed in their requests. That token is then placed in the params object behind the key named token as Doorkeeper would have expected. Doorkeeper can then do its thing without having to modify any of its internal workings.

Since the revoke_token method provided by Doorkeeper will make use of the token key in the params, I will first use the private access_token method to extract the JWT token from the Authorization header. Then add it as the value to the token key of the params variable.

The logout method is required for the frontend to dispose of the current access token for security purposes. I also use it to remove the users' device tokens so that they do not receive push notifications after logging out.

Login Request

{
	"email": "user1@test.com",
	"password": "user1@test.com",
	"grant_type": "password"
}


A login request will have these keys. In particular, the grant_type strategy used should be password.
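
For comparison, a refresh request uses the refresh_token grant type instead, along these lines:

{
	"refresh_token": "<the refresh_token returned at login>",
	"grant_type": "refresh_token"
}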

Conclusion

You should be able to log in with the correct credentials using the default Doorkeeper::TokensController and access your controllers with the correct resource, just like how you would when using Devise alone. Otherwise, you can have your custom controller inherit from it and customise the authentication routes, as I have demonstrated.

Hope this was helpful!

How To Setup A Standard AWS VPC With Terraform

This is a documentation on how to setup the standard virtual private network (VPC) in AWS with the basic security configurations using Terraform.

In general, I classify the basics as having the servers and databases in the private subnets, and having a bastion server for remote access. There is definitely much room to improve from this setup and certainly much more in the realms beyond my knowledge. However, as a start, this is, at the very least, essential for a production environment.

Personally, I have an AWS Certified Solutions Architect (Associate) certificate to my name, but like most of the engineering university graduates out there who have forgotten how to do dy/dx or what the hell L’Hôpital’s rule is, I have all but forgotten the exact steps to recreate such an environment.

AWS Associate Solutions Architect | vic-l

As a saving grace 😅, I should say that I do know how to set it up, just that I do not have it at the tip of my fingers. I would not get it right the first time, but given time I will eventually set it up correctly.

This is true whenever I set up an environment for new projects. Debugging the setup can be time consuming and frustrating. It is not efficient and is probably one of the key reasons why infrastructure as code (IaC) has become a trending topic in recent years.

Provisioning these infrastructures using code implies:

  • version control on code and, in turn, infrastructural changes made by members of the development team
  • easily reproducible infrastructures
  • automation

One of the frontrunners in this industry is Terraform. All that is required are the configurations written in files ending with the “tf” extension placed in the same directory.

The VPC

Start by provisioning the VPC.

We set the CIDR block to provide the maximum number of private IP addresses that an AWS VPC allows. This implies that you can have up to 65,536 AWS resources in your VPC, assuming each of them requires a private IP address for communication purposes.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # 65536 ip addresses

  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

The variables project_name and env can be placed in a separate .tf file, as long as it is in the same directory when Terraform eventually runs to apply the changes.
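
As a minimal sketch, such a variables file could look like this (the default values are just placeholders):

# variables.tf
variable "project_name" {
  default = "myproject"
}

variable "env" {
  default = "staging"
}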

The Gateways

Next, we setup the Internet gateway (IGW) and NAT gateway (NGW).

The IGW allows for resources in the public subnets to communicate with the outside Internet.

The NGW does the same thing, but for the resources in the private subnets. Sometimes, these resources need to reach the Internet, for example to download packages for updates. This is in direct conflict with the security requirements that placed them in the private subnets in the first place. The NGW balances these 2 requirements.

# IGW
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

resource "aws_route_table" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "igw-${var.project_name}${var.env}"
  }
}

resource "aws_route" "igw" {
  route_table_id = aws_route_table.igw.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.main.id
}

# NGW
resource "aws_route_table" "ngw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "ngw-${var.project_name}${var.env}"
  }
}

resource "aws_route" "ngw" {
  route_table_id = aws_route_table.ngw.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id = aws_nat_gateway.main.id
}

### NOTE ###
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id = aws_subnet.public-ap-southeast-1a.id

  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

Both gateways need to be associated to their respective aws_route_table via an aws_route that will route out to everywhere on the Internet, as indicated by the 0.0.0.0/0 CIDR block.

The NGW requires some additional setup.

First, a NAT gateway requires an elastic IP address due to the way it is engineered. I will not pretend that I know its inner workings well enough to tell you why a static IP address is required, but I do know we can easily provision one using Terraform.

This static IP address will also come in useful if your private instances need to make API calls to third party sources that require the instances' IP address for whitelisting purposes. The outgoing requests from the private instances will bear the IP address of the NGW.

In addition, a NAT gateway needs to be placed in one of the public subnets in order to communicate with the Internet. As you can see, we have made an implicit dependency on the aws_subnet resources, which we will define later. Terraform will ensure the NAT gateway is created after the subnets are set up.

The Subnets

Now, let’s setup the subnets.

We will set up 1 public and 1 private subnet in each availability zone that the region provides. I will be using the ap-southeast-1 (Singapore) region. That will be a total of 6 subnets to provision, as there are 3 availability zones in this region.

#### public 1a
resource "aws_subnet" "public-ap-southeast-1a" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.100.0/24"
  availability_zone_id = "apse1-az2"

  tags = {
    Name = "public-ap-southeast-1a-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "public-ap-southeast-1a" {
  subnet_id = aws_subnet.public-ap-southeast-1a.id
  route_table_id = aws_route_table.igw.id
}

#### public 1b
resource "aws_subnet" "public-ap-southeast-1b" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.101.0/24"
  availability_zone_id = "apse1-az1"

  tags = {
    Name = "public-ap-southeast-1b-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "public-ap-southeast-1b" {
  subnet_id = aws_subnet.public-ap-southeast-1b.id
  route_table_id = aws_route_table.igw.id
}

#### public 1c
resource "aws_subnet" "public-ap-southeast-1c" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.102.0/24"
  availability_zone_id = "apse1-az3"

  tags = {
    Name = "public-ap-southeast-1c-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "public-ap-southeast-1c" {
  subnet_id = aws_subnet.public-ap-southeast-1c.id
  route_table_id = aws_route_table.igw.id
}

#### private 1a
resource "aws_subnet" "private-ap-southeast-1a" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
  availability_zone_id = "apse1-az2"

  tags = {
    Name = "private-ap-southeast-1a-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "private-ap-southeast-1a" {
  subnet_id = aws_subnet.private-ap-southeast-1a.id
  route_table_id = aws_route_table.ngw.id
}

#### private 1b
resource "aws_subnet" "private-ap-southeast-1b" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
  availability_zone_id = "apse1-az1"

  tags = {
    Name = "private-ap-southeast-1b-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "private-ap-southeast-1b" {
  subnet_id = aws_subnet.private-ap-southeast-1b.id
  route_table_id = aws_route_table.ngw.id
}

#### private 1c
resource "aws_subnet" "private-ap-southeast-1c" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.3.0/24"
  availability_zone_id = "apse1-az3"

  tags = {
    Name = "private-ap-southeast-1c-${var.project_name}${var.env}"
  }
}

resource "aws_route_table_association" "private-ap-southeast-1c" {
  subnet_id = aws_subnet.private-ap-southeast-1c.id
  route_table_id = aws_route_table.ngw.id
}

Amidst this long snippet of configuration for the subnets, it is essentially a repeat of the same resource associations.

For the public subnets, they are assigned the CIDR blocks 10.0.100.0/24, 10.0.101.0/24 and 10.0.102.0/24 respectively. Each will have up to 256 IP addresses to house AWS resources that require an IP address. Taking the first public subnet as an example, its addresses range from 10.0.100.0 to 10.0.100.255.

For the private subnets, they occupy the CIDR blocks 10.0.1.0/24, 10.0.2.0/24 and 10.0.3.0/24 respectively.

To be exact, there will be fewer than 256 usable addresses per subnet, as AWS reserves a handful of IP addresses in every subnet. Of course, you can provision more or fewer IP addresses per subnet with the correct subnet mask.

Each subnet is associated to different availability zones via the availability_zone_id to spread out the resources across the region.

Each public subnet is also associated to the aws_route_table that is related to the IGW, while each private subnet is associated to the aws_route_table related to the NGW.

The Database

Next, we set up the database. We will provision the database using RDS and place it in the private subnets for security purposes.

At this point of time, I must admit that I do not know if this is the best way to setup the database. I personally have a lot of questions on how the infrastructure will change when the application scales eventually, especially for the database. How will the database be sharded into different regions to serve a global audience? How do the database sync across the different regions? These are side quests that I will have to pursue in the future.

For now, a single instance in a private subnet.

resource "aws_db_instance" "main" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  identifier = "rds-${var.project_name}${var.env}"
  name = "something"
  username = "something"
  password = "something"

  skip_final_snapshot = false
  # notes time of creation of rds.tf file
  final_snapshot_identifier = "rds-${var.project_name}${var.env}-1573454102"

  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name = aws_db_subnet_group.main.id

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name = "rds-${var.project_name}${var.env}"
  }
}

resource "aws_db_subnet_group" "main" {
  name = "db-private-subnets"
  subnet_ids = [
    aws_subnet.private-ap-southeast-1a.id,
    aws_subnet.private-ap-southeast-1b.id,
    aws_subnet.private-ap-southeast-1c.id
  ]

  tags = {
    Name = "subnet-group-${var.project_name}${var.env}"
  }
}

As you can see, we can review the full configuration for the database in code, as compared to having to navigate around the AWS management console to piece the puzzle together. We can easily know the size of the database instance we have provisioned as well as its credentials (ok, this is debatable, since we may not want to commit sensitive data in our code).

In this configuration, I ensured that the database will produce a final snapshot in the event it gets destroyed.

Access to the database will be guarded by an aws_security_group that will be defined later.

The database is also associated to the aws_db_subnet_group resource. This resource consists of all the private subnets that we provisioned. This creates an implicit dependency on these subnets, ensuring that the database will only be created after the subnets are created. This also tells AWS to place the database in the custom VPC that the subnets exist in.

I also ensured the database will not be destroyed by Terraform accidentally using the lifecycle configuration.

The Bastion

The bastion server allows us to access the servers and the database instance in the private subnets. We will provision the bastion inside the public subnet.

resource "aws_instance" "bastion" {
  ami = "ami-061eb2b23f9f8839c"
  associate_public_ip_address = true
  instance_type = "t2.nano"
  subnet_id = aws_subnet.public-ap-southeast-1a.id
  vpc_security_group_ids = [aws_security_group.bastion.id]
  key_name = aws_key_pair.main.key_name

  tags = {
    Name = "bastion-${var.project_name}${var.env}"
  }
}

resource "aws_key_pair" "main" {
  key_name = "${var.project_name}-${var.env}"
  public_key = "ssh-rsa something"
}


output "bastion_public_ip" {
  value = aws_instance.bastion.public_ip
}

I am using an Ubuntu 18.04 LTS image to set up the bastion instance. Note that the AMI id will differ from region to region, even for the same operating system. The images below show the difference in the AMI id between the Singapore and Tokyo regions.

ubuntu ami in ap-southeast-1| vic-l
ubuntu ami in tokyo region | vic-l

I will mainly use the bastion to tunnel commands to the private subnets. Hence, there is no need for much computation power. The cheapest and smallest instance size of t2.nano is chosen.

It is associated to a public subnet that we created. Any subnet will work, but make sure it is public as we need to be able to connect to it.

Its security group will be defined later.

All EC2 instances in AWS can be given an aws_key_pair. We can generate a custom key pair using the ssh-keygen command, or use the default ssh key on the local machine so that we can ssh into the bastion easily without having to specify the identity file each time.
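
If you go the ssh-keygen route, a sketch of the commands would be as follows (the file name and comment are arbitrary); the content of the generated .pub file is what goes into the public_key attribute above:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/bastion_key -C "bastion"
cat ~/.ssh/bastion_key.pub # paste this into public_key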

Then, there is the output block. After Terraform has completed its magic, it will output values defined in these output blocks. In this case, the public ip address of the bastion server will be shown on the terminal, making it easy for us to obtain the endpoint.
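
You can also retrieve it again later, without re-applying, by querying the state:

terraform output bastion_public_ip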

The Security Groups

Lastly, the connection is not complete without setting up the security groups that guard the traffic going in and out of the resources. This was the bane of my AWS Solutions Architect journey. With the required configurations spelled out in code, instead of console steps that exist only in memory, Terraform has helped me greatly to further understand this feature.

There are a total of 3 aws_security_group resources to be created, representing the bastion, the instances and the database respectively. Each of them has its own set of inbound and/or outbound rules, named “ingress” and “egress” in Terraform terms, that are configured separately.

While you can configure the inbound and outbound rules together within the resource block of the respective aws_security_group, I would recommend against that. Doing so results in tight coupling between the security groups, especially if one of the aws_security_group_rule entries points to another aws_security_group as its source. This becomes problematic when we eventually make changes to the security groups because, for example, one cannot be destroyed while another security group that depends on it is meant to stay.

And the frustrating thing is that Terraform, or maybe the underlying AWS API, does not indicate the error. In fact, it takes forever to destroy security groups that are created this way, only to fail after making us wait for a long time, which makes debugging needlessly tedious.

There are many issues on Github mentioning this and related problems, like this one. It has to do with what has been termed “enforced dependencies”, which Terraform currently has no mechanism to handle.

By decoupling the aws_security_group and their respective aws_security_group_rule into separate resources, we will give Terraform and ourselves an easier time removing and making changes to the security groups in the future.

Bastion

Let’s see how we can configure Terraform to set up the security groups. We start off with the security group for the bastion server. We will make 3 rules for it.

# bastion
resource "aws_security_group" "bastion" {
  name = "${var.project_name}${var.env}-bastion"
  description = "For bastion server ${var.env}"
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

resource "aws_security_group_rule" "ssh-bastion-world" {
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  # Please restrict your ingress to only necessary IPs and ports.
  # Opening to 0.0.0.0/0 can lead to security vulnerabilities
  # You may want to set a fixed ip address if you have a static ip
  security_group_id = aws_security_group.bastion.id
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "ssh-bastion-web_server" {
  type = "egress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  security_group_id = aws_security_group.bastion.id
  source_security_group_id = aws_security_group.web_server.id
}

resource "aws_security_group_rule" "mysql-bastion-rds" {
  type = "egress"
  from_port = 3306
  to_port = 3306
  protocol = "tcp"
  security_group_id = aws_security_group.bastion.id
  source_security_group_id = aws_security_group.rds.id
}

The first is an ingress rule to allow us to ssh into the bastion from wherever we are. Of course, this is not ideal, as it means anyone from anywhere can attempt to ssh into it. We should scope it to the IP address we work from, be it home or the office. However, in my case, as a digital nomad, the IP address I work with changes so often as I move around that it makes more sense to open it up to the world. I took a calculated risk here. Please don’t try this at home.

The second is an egress rule that allow the bastion instance to ssh into the web servers in the private subnets. The source of this rule is set as the aws_security_group of the web servers.

The third rule is another outbound rule to allow the bastion to communicate with the database. Since I am using mysql as the database engine, the port used is 3306. This allows us to run database operations on the isolated database instance in the private subnet via the bastion, over the correct port, securely.

Web Servers

Next up are the security groups for your web servers. The only rule required is the ingress rule that allows the bastion to ssh into them over port 22.

resource "aws_security_group" "web_server" {
  name = "${var.project_name}${var.env}-web-servers"
  description = "For Web servers ${var.env}"
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

resource "aws_security_group_rule" "ssh-web_server-bastion" {
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  security_group_id = aws_security_group.web_server.id
  source_security_group_id = aws_security_group.bastion.id
}
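
With the bastion's egress rule and this ingress rule in place, one way to reach a web server in the private subnet is to jump through the bastion, assuming the web servers were launched with the same key pair (the key path and IP addresses below are placeholders):

# load the key into the agent so both hops can use it, then jump via the bastion
ssh-add ~/.ssh/bastion_key
ssh -J ubuntu@<bastion_public_ip> ubuntu@<web_server_private_ip>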

RDS

Lastly, the RDS instance. It consists of 2 rules.

resource "aws_security_group" "rds" {
    name = "rds-${var.project_name}${var.env}"
    description = "For RDS ${var.env}"

vpc_id = aws_vpc.main.id
  tags = {
    Name = "${var.project_name}${var.env}"
  }
}

resource "aws_security_group_rule" "mysql-rds-web_server" {
  type = "ingress"
  from_port = 3306
  to_port = 3306
  protocol = "tcp"
  security_group_id = aws_security_group.rds.id
  source_security_group_id = aws_security_group.web_server.id
}

resource "aws_security_group_rule" "mysql-rds-bastion" {
  type = "ingress"
  from_port = 3306
  to_port = 3306
  protocol = "tcp"
  security_group_id = aws_security_group.rds.id
  source_security_group_id = aws_security_group.bastion.id
}

The first is of course to open up port 3306 to allow requests from the web servers to reach the database to run the application.

The second is to allow the bastion to communicate over port 3306. Previously, we defined the egress rule on the bastion server itself to connect out to the RDS instance. Now, this ingress rule allows the incoming requests from the bastion server to reach the RDS instance instead of being blocked off.

Terraform Apply

These resources can be defined in a single terraform file or split across multiple files with the tf extension, as long as they are in the same directory.

If you are using docker to run terraform, you can do a volume mount of the current directory into the workspace of the docker container and apply the infrastructure!
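
A sketch of what that could look like, assuming the official hashicorp/terraform image (pin the tag to the Terraform version you actually use):

# AWS credentials are passed through from the host environment
docker run --rm -it \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  -v "$(pwd)":/workspace -w /workspace \
  hashicorp/terraform:0.12.29 init

docker run --rm -it \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  -v "$(pwd)":/workspace -w /workspace \
  hashicorp/terraform:0.12.29 apply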

Improvements

We can harden the security of this setup further by, for example, configuring the Network Access Control List (NACL, or Network ACL). In this setup, the default allows all inbound and outbound traffic for all the resources. However, this is beyond the scope of this article.

What’s Next

Note that I did not provision any EC2 instances where my application will run. At this point, feel free to provision the EC2 instances for the web servers just like the bastion server, but associate them with the private subnets.

For me, I favor AWS Elastic Beanstalk to handle the deployment. What I have done so far is only the provisioning of the infrastructure. Hence, in my case, instead of defining the EC2 instances, I will define an Elastic Beanstalk environment to host my Rails application and configure it to use this VPC to leverage all of the security above.

How To Add Datatables To Webpacker In Rails

This is a documentation of using datatables with the latest version of Rails (6.0 at the time of writing) that uses webpacker as the default Javascript compiler.

I found some difficulty looking for documentation on integrating this in the new Rails, away from the land of Sprockets.

Hopefully this can help you, and my future self, when I come back to understand what I did to make my code work instead of leaving it to God.

Only God Knows | Vic-l

Datatables and custom styling

Datatables ships with its core files and some default styling packages for major CSS frameworks like Bootstrap and Foundation. Taking Bootstrap 4 as the example framework, install the packages below using yarn.

"datatables.net": "^1.10.19",
"datatables.net-bs4": "^1.10.19",

These are the latest versions at the time of writing. They will be added to the package.json file.
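
In other words, run:

yarn add datatables.net datatables.net-bs4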

Require Datatables

In your javascript file (place it in application.js for now), require the file and initialize datatables.

require("datatables.net")
require('datatables.net-bs4')
require("datatables.net-bs4/css/dataTables.bootstrap4.min.css")

const dataTables = [];

document.addEventListener("turbolinks:load", () => {
  if (dataTables.length === 0 && $('.data-table').length !== 0) {
    $('.data-table').each((_, element) => {
      dataTables.push($(element).DataTable({
        pageLength: 50
      }));
    });
  }
});

document.addEventListener("turbolinks:before-cache", () => {
  while (dataTables.length !== 0) {
    dataTables.pop().destroy();
  }
});

Let me explain what each line does.

On line 1, we import the core datatables js file. It adds the standard search and sorting functions of datatables, as well as wires up any of your custom configurations.

On line 2, we import the javascript that works with Bootstrap 4 elements. It adds elements to the web page, for example the pagination feature, using common Bootstrap classes like row and col-*.

On line 3, we import the custom css file that is required by the datatables javascript but is not present in the default Bootstrap stylings. Yes, we are importing a CSS file in a javascript file. Webpack will compile this javascript file into the public/packs folder and take care of loading the css into the webpage, albeit via javascript. Note that if you set the extract_css option to true in the webpacker configuration, webpacker will compile the css into a standalone file instead of loading it as part of the javascript code. Hence, you will need to rely on stylesheet_pack_tag to load the css file in the page for the styling to work.

Line 5 is where we declare a dataTables array variable to be accessed within this module, that is, this script. This is a critical step for DataTables to play well with Rails in a turbolinks powered environment. The role of this variable is to store all instances of the tables that have been initialized.

The next 2 blocks of code add 2 listeners to the DOM.

The first triggers the DataTable() function on the desired elements that bear the class data-table. This sets up the pagination, search, sorting and other functionalities that make datatables so powerful and simple to use on your table elements. This occurs on the turbolinks:load event, which fires when the url changes and the page loads. Each element is initialized and stored in the dataTables array variable, which the second listener will reference.

The second listener destroys each of the dataTable instances stored in the namesake variable, if any are present. It is triggered during the turbolinks:before-cache event, which takes place when the page navigates away. This step is crucial to remove the elements that were added when the datatables script was evaluated, like the search bar and the pagination elements. If this is not done, there will be extra elements appearing on the webpage when the user navigates back through the browser history, as mentioned in this Github issue.

NOTE that it is important NOT to name the class of your elements as “dataTable” as they will get destroyed in the process. If that happens, when the user navigates forward and back again or vice versa, the element will not be picked up and the dataTable() function will not be executed. Kudos to Philip for his comment.

Optimizing

Not every page has a table that you would like to initialize the datatable functionalities on. You should only require this code in the pages that need it. In this way, the initial load time of your pages will be reduced by not downloading extra files that you will not use, which would otherwise affect the page speed of innocent pages.

This means downloading 2 files instead of one, which can affect page speed due to having to make 2 requests instead of 1. However, the resultant overheads from the http requests are unavailing, considering these are javascript files that are not render-blocking resources, and they can be loaded asynchronously to mitigate it.

Put the above code snippet in another js file in app/javascript/packs. Webpacker will pick up this file as another entry point and compile the js asset that you can add separately.

Call this js file in the pages that require it as such:

= javascript_pack_tag 'custom/datatables', 'data-turbolinks-track': 'reload'
= stylesheet_pack_tag 'custom/datatables'

Once again, you will find stylesheet_pack_tag useful only if you have enabled extract_css in the webpacker configuration. It will be responsible for loading the compiled (or ‘extracted’, in this context) css file.

Playing Well With Turbolinks

# application.html.slim
head
  = yield :javascript_in_head
body
  = yield

# specific/page.html.slim
body
  - content_for :javascript_in_head do
    = javascript_pack_tag 'my-datatables-scripts', 'data-turbolinks-track': 'reload'

Be careful of where you load this javascript file. Make sure to load it in the head html element, because turbolinks will only handle javascript files loaded in the head and not in the body html element.

To do this on a per-page basis, use the content_for helper.

Rails will insert the javascript file for the given page at line 3, within the head section of the page's layout, ensuring that turbolinks performs its hooks on this javascript file as well.

Why Is Render JSON And Return Not Working In My Controllers

Recently, I stumbled upon an unexpected error when I was refactoring my code to follow the style guide of rubocop, Ruby's very own linter.

The and/or style guide recommends that the logical operator && be used in place of and. When I did that, my controllers started to break my tests.

Return JSON Did Not In Fact Return

I have a controller action that looks like this. It does a check on the current user and returns early with a custom json if the condition is met, instead of continuing the propagation to look for the view file corresponding to the action.

def show
  if current_user.one_piece_fan?
    render json: {
      message: 'My nakama!'
    } and return
  end
end

This works fine, but when I changed the and to &&, this custom json was not returned. What went wrong?

&& vs and

There is a very subtle difference between the two. It is their precedence order.

In a line of code consisting of multiple operations, precedence plays a part in deciding what gets evaluated first. This turns out to be fairly crucial for ruby, which has stripped itself of non-human-friendly characters like brackets and semicolons. Let's take a look at the code below.

secrets of "and" | vic-l
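
For those who cannot see the image, the snippet below approximates the irb session in the screenshot:

s = true and false   # parsed as (s = true) and false; the line evaluates to false
s                    # => true
s = true && false    # parsed as s = (true && false)
s                    # => false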

Did that surprise you?

Well, this is all because of precedence.

In the first line of code, and has a lower precedence than the assignment operator =, hence the assignment takes place before the logical AND operation with false is carried out. In other words, this is how it would actually look if ruby still had its clothes on.

(s = true) && false

Hence, the false value returned from this line of code refers to the result of the && operation. And when one of the operands is false, the result will be false.

multiply by 0 | vic-l

As for the third line of code, it works just like how most people would commonly interpret it. The result of the && operation is assigned to the s variable, and its subsequent false value is what the variable s now holds.

Returning Early In Controllers

So back to the case of controllers.

class ApplicationController < ActionController::API
  def render_and
    render json: {
      message: 'Using "and return"'
    } and return
  end

  def render_amp
    render json: {
      message: 'Using "&& return"'
    } && return
  end

  def render_amp_with_brackets
    render(json: {
      message: 'Using "&& return" with brackets'
    }) && return
  end
end

Based on the snippet above, the actions render_and and render_amp_with_brackets will work just like how you would have expected. They call render, return early, and stop the controller from propagating further.

As for the render_amp method, the && operation binds more tightly than render's argument list, so what gets evaluated is the && operation between the hash and return. Essentially, it looks like this.

render({ ... } && return)

The hash is truthy, so the return is evaluated while the argument is being built, and the action exits before render is ever called. Since nothing was rendered, the controller propagates further and carries out its search for the view corresponding to the action.

Final Thoughts

I hope this has helped us understand our ands and &&s better!

Credits to this stackoverflow answer.

Reopen And Add Methods To Models In Ruby Gems

This is a documentation on how to add class and instance methods to models that are defined inside ruby gems, something projects often need.

In a recent project that I am working on, I found a particular need to add images to a tagging gem. The purpose of the gem is taxonomy, and the final UI draft has different images allocated to each of the categories (or should I say tags). We decided to have the rails backend handle the image and tag association. Hence, the ideal way to handle this would be to modify the models in the tagging gem to hold an image as well upon creation.

Project Specifics

The tagging gem that I am using is, contrary to the more popular and senior ActsAsTaggableOn gem, the Gutentag gem.

The reason I use the latter instead of the former is that the former did not support the new ActiveRecord 6 when I was working on the project. It returns erroneous results and throws errors due to deprecated ActiveModel methods in its normal usage, for example.

The alternative I found is Gutentag. It supports Rails 6 and the contributors are actively resolving issues, keeping its issue count at 0 at the time of writing. I found it reliable and it does its main job well, which is to provide the tagging module.

The only thing it lacks for this particular project is an image associated with each tag. This is where I would need to hack it.

I want to add an image to each tag using ActiveStorage via the has_one_attached method, and also a custom instance method that will return the tag's name and image url.

The Rationale

The way I am doing it is to create a module that defines the relevant methods, and have the Gutentag::Tag model include this custom module. I will include it during the initialization phase. This will require some workarounds because we are accessing the ActiveStorage and ActiveModel/ActiveRecord railties during the initialization phase, where these railties are not loaded yet.

The Extension Module

Kudos to this answer on stackoverflow, define the extension module as such:

# lib/extensions/gutentag.rb
# frozen_string_literal: true

module Extensions
  module Gutentag
    extend ActiveSupport::Concern
    
    included do
      has_one_attached :image
    end

    def json_attributes
      custom_attributes = attributes.dup
      custom_attributes.delete 'created_at'
      custom_attributes.delete 'updated_at'
      custom_attributes.delete 'taggings_count'
      custom_attributes.delete 'id'

      # add image path based on the service used
      if Rails.env.test? || Rails.env.development?
        ActiveStorage::Current.set(host: 'http://localhost:3000') do
          custom_attributes['image'] = self.image.attached? ? self.image.service_url : nil
        end
      else
        custom_attributes['image'] = self.image.attached? ? self.image.service_url : nil
      end

      custom_attributes
    end
  end
end

I modified it slightly with the use of ActiveSupport::Concern to do it the Rails 6 way. This helps to resolve module dependencies gracefully.

In this extension, I attached an image to the module using ActiveStorage‘s has_one_attached class method, which will ultimately be applied to the Gutentag::Tag model.

I also defined the instance method json_attributes which will return only the name and the image url in the resultant tag when called. It is used in the api response when frontend clients are retrieving the list of tags for example.

The Initialization

The code will be added to the original Gutentag initializer file under config/initializers/gutentag.rb.

# frozen_string_literal: true

require 'extensions/gutentag'

Gutentag.normaliser = lambda { |value| value.to_s }

Rails.application.config.to_prepare do
  begin
    if ActiveRecord::Base.connection.table_exists?(:gutentag_tags)
      Gutentag::Tag.include Extensions::Gutentag
    end
  rescue ActiveRecord::NoDatabaseError
  end
end

The extension file is imported in line 3.

Line 5 is one of the original Gutentag configuration options provided by the gem. It is specific to my project and is trivial in relation to the topic of this article. I am leaving it here to show that other Gutentag configuration changes will co-exist with this custom module of mine.

Line 10 is the main line of code to execute. It adds the module to the Gutentag::Tag model, which is defined inside the source code of the Gutentag gem. However, as you can see, it is wrapped in a number of guards. Not doing so will result in errors.

Here is why.

As we are going to involve the ActiveRecord and ActiveSupport railties, which have not been initialized yet during the default rails initialization phase, we need to ensure we run the code after they have been loaded.

Rails has 5 initialization events. The first initialization event to fire off after all railties are loaded is to_prepare, hence we define the code inside its block.

Since we are interacting with an ActiveRecord model during the initialization phase, it is possible that the table has not been created yet. In other words, the Gutentag tables migration may not have been executed, resulting in errors about the table not existing. An if conditional check is done to prevent this error.

I am not handling the else condition because, under normal circumstances, after the proper migration has been executed, this will not happen. A possible scenario where it would happen is during rake tasks to create or migrate the database, neither of which uses the new methods at all.

A non-existing table is not the only thing we have to guard against when dealing with railties during the initialization process. A non-existing database is also a probable scenario, for example during the rails db:create step. Hence, we rescue the ActiveRecord::NoDatabaseError error to silence it. As this is often the only scenario that will happen, I will not handle the exception further in the rescue block.

Usage

Now we can use it in our application. For instance, I can seed some default tags with images attached to them as shown:

# db/seeds.rb
# frozen_string_literal: true

p 'Creating Tags'
[
  'Luffy',
  'Zoro',
  'Usopp',
  'Sanji',
  'Nami',
  'Chopper',
  'Robin',
  'Frankie',
  'Brooks',
].each do |name|
  tag = Gutentag::Tag.create!(name: name)
  tag.image.attach(io: File.open("#{Rails.root.join('app', 'assets', 'images')}/#{name}_avatar.jpg"), filename: "#{name}_image.jpg")
  tag.save!
end
p 'Tags created'

Then in my api response for listing the tags, I can use the json_attributes method as such:

# app/controllers/api/v1/tags_controller.rb
# frozen_string_literal: true

module Api
  module V1
    class TagsController < Api::BaseController
      def index
        @tags = Gutentag::Tag.order(:name)
      end
    end
  end
end


# app/views/api/v1/tags/index.json.jbuilder
json.tags do
  json.array! @tags do |tag|
    json.merge! tag.json_attributes
  end
end
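
The response then comes out with only the attributes we kept, roughly in this shape (the image url depends on your ActiveStorage service configuration, so treat it as an approximation):

{
  "tags": [
    { "name": "Luffy", "image": "http://localhost:3000/rails/active_storage/..." },
    { "name": "Zoro", "image": "http://localhost:3000/rails/active_storage/..." }
  ]
}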

How To Change Or Add New SSH Key for EC2

This is a documentation of how to change or add a new ssh key for your EC2 instance if you have lost, and maybe compromised, your private key.

The gist of it is to add a new key pair to the disk volume of the EC2 instance. Pretty straightforward! But how can you do that when you cannot ssh into the EC2 instance without the private key you just lost? You will need to attach the root volume of the EC2 instance to another temporary EC2 instance, which you can access with a new key pair, and add the new key pair to the original volume from there.

Summon the NewKeyPair!

First, create a new key pair. You can either generate a private and public key pair on your own and import the public one into the AWS console, or create it from the AWS management console and download the private key that they generated for you thereafter. Should you go for the latter, make sure your browser is not blocking the download.

Blocked download | vic-l

For the rest of the article, the new key pair will be referred to as NewKeyPair, and the old key pair LostKeyPair.

Retire The Veteran

Stop your old instance. Do not terminate!

NOTE: Your instance's root volume needs to be EBS-backed and not an instance store, as instance store volumes are ephemeral. They do not persist the data after power down.

Once it has successfully stopped, you will realise that its volume remains attached. That’s EBS for you!

We will come to detaching it in a while. For now, spin up a new server.

Katon: Summon-The-New-Server-Jutsu

Launch a new server with the NewKeyPair. This is a temporary server and can be any of the linux distribution.

Detach The Old Volume

In the volumes page, select the old instance volume and select Detach as shown. There should be no error unless your old instance is still in the process of shutting down.

Detach old EBS Volume | vic-l

Once it is detached, you will observe that its status has changed to available and its Attachment Information will become blank. Now it is freeee! Time to attach it to the new server and receive its new key pair.

Attach To New Instance

Attach the root volume to the new instance as shown.

Then select the device to mount on.

attach volume device | vic-l

I will set /dev/sdf as suggested. The other devices are reserved for the root volume (/dev/sda) and instance store volumes (/dev/sd[b-e]). More information on device naming in AWS EC2 can be found here.

Run the command lsblk to see the new volume attached. Note that the linux kernel has changed my device name from sdf to xvdf, as noted in the warning callout in the image above.

lsblk | vic-l

Mounting The Volume

You will not be able to use the volume right away after attaching it without mounting it in the system. Mounting tells the EC2 instance how to access this new device via its directory tree. This will require setting up a mount point. Run the commands below.

sudo mkdir /mnt/tempvol
sudo mount /dev/xvdf1 /mnt/tempvol

These commands will mount the root of the device to the directory named /mnt/tempvol. You can change directory into the volume and see that it contains content from your old server.

From the image above, you can see that the authorized_keys file containing the old public key is placed in /home/ubuntu/.ssh directory relative to the mount point. The new public key pair exist in the /home/ubuntu/.ssh directory in the absolute path, which exist in the root volume of the new instance.

Adding The NewKeyPair To The Old Volume

Eventually, we want to use the new key pair to access the old server, with the content of the old volume, just like the good old times. To do that, add the NewKeyPair to the ssh folder of the old volume.

Realise that this is only possible because we attached and mounted the old volume onto a new instance that we can access, thanks to the fresh setup and new key pair.
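
Concretely, on the new instance, the command is something along these lines, assuming Ubuntu AMIs on both instances and the old volume mounted at /mnt/tempvol:

cat ~/.ssh/authorized_keys >> /mnt/tempvol/home/ubuntu/.ssh/authorized_keys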

I have used the append operation, >>, instead of the overwrite operation, which is a single >. This is not strictly necessary. It is up to you to decide whether you want to get rid of the old key pair, depending on your situation.

If you lost your old key pair, feel free to overwrite it. There is no point hoarding it, and Marie Kondo can’t help you declutter software.

Attaching The Volume Back

Next, you can shut down your new server and attach your old volume back to the old instance. Remember, the EC2 instance will not be deleted and is still available if you chose stop instead of terminate when shutting it down initially.

attach volume back | vic-l

Attach the volume, this time, to /dev/sda1.

attach to sda1 | vic-l

The reason to attach it at /dev/sda1 is that we need to give the instance back its root volume for its boot operations. If we were to attach it to another device, we would see this error when starting the server, because no root volume is detected.

error starting old instance | vic-l

Back To The Past

Now you can try to connect to your old instance after it has started up.

NOTE: you will see that the connection instructions still mention the LostKeyPair. Even if you have overwritten your ssh key pair with the new one, these outdated instructions will persist. Of course, you should connect with your NewKeyPair.

Connect to old instance again | vic-l

To ascertain that the new key is in place, head down to the ~/.ssh directory and see that the changes you made on the new instance, via attaching and mounting the old instance's volume, have persisted.

Now, we have successfully added a new key pair to the old instance, and we can use it to ssh into the old instance from now on, even though we had lost the original key pair that was used to create it.