How To Use HTML Validation On Flatpickr

My go-to date picker JavaScript library is flatpickr. It has a decent UI that can be customized easily, is lightweight, and is straightforward to implement. There is one particular flaw, however, that I seek to address in this article: integrating it with basic HTML validation.

Motivation

I want to be able to use basic HTML validation on my date inputs. A date input with a simple required attribute should trigger the HTML validation if it is empty upon form submission.

I would also like to prevent users from entering free text into the text field, which is meant for date input.

However, flatpickr comes with a flaw that would demand a hack to make it work the way I want.

The Read Only Problem

The cause of the flaw is that flatpickr prevents manual typing by slapping a readonly attribute onto the input. A readonly attribute causes the input to elude the grasp of basic HTML validation during form submission, because HTML validation ignores readonly, disabled and hidden elements.
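
You can see this for yourself in the browser console. In this rough sketch, the same empty required input passes validation the moment it becomes readonly:

// An empty required input fails constraint validation...
const input = document.createElement('input');
input.required = true;
console.log(input.checkValidity()); // false

// ...but a readonly input is barred from constraint validation entirely.
input.readOnly = true;
console.log(input.checkValidity()); // true
console.log(input.willValidate);    // false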

The Catch-22 Solution

The immediate solution is to initialize the flatpickr instance with the option allowInput set to true. However, this will allow users to enter free text in the date input, which is exactly what I would like to guard against.

The Hack

Hence, my solution here is to implement a listener that will toggle the input’s readonly attribute when the date picker dropdown is active, and toggle it back when the user has moved on from date picking.

The Code

flatpickr("[data-behavior='flatpickr']", {
  altInput: true,
  altFormat: 'F j, Y',
  dateFormat: 'Y-m-d',
  allowInput: true,
  onOpen: function(selectedDates, dateStr, instance) {
    $(instance.altInput).prop('readonly', true);
  },
  onClose: function(selectedDates, dateStr, instance) {
    $(instance.altInput).prop('readonly', false);
    $(instance.altInput).blur();
  }
});

Line 1 initializes the flatpickr instance on elements specified by the selector.

Lines 2 to 4 are my custom configuration options. I am placing them here to demonstrate the hack in a non-default scenario.

Line 2 indicates my intention to display my date values in an alternative format.

Line 3 indicates the alternative format to display in.

Line 4 indicates the date format that will eventually be submitted to the form’s action endpoint.

These configurations will tell flatpickr to create 2 inputs.

One is hidden and meant to be submitted to the backend. This input will inherit the attributes of the original input element that matter during submission, like the name and id. Let’s call this the hidden input.

The other input is meant for display purposes. The value in it will be shown in the desired alternative format. And since it is for display, it will inherit the relevant attributes from the original input element that matter for display, for example the style, class and, in particular, the readonly and required attributes. Let’s call this the display input.

Line 5 removes the default readonly attribute that flatpickr will place on its target input elements.

Lines 6 to 8 define a function that is triggered when the dropdown opens. It finds the display input and sets it to readonly. This prevents users from entering any text that would mess up the input value.

Line 10 undoes the effect of line 7 in the likely scenario where the user has finished picking a date and the dropdown closes.

Line 11 handles the case where the input cursor remains on the date input after the date picker dropdown is deactivated. One scenario where this can happen is when the user presses the escape key while the dropdown is active. When that happens, the readonly attribute is gone and the user is able to enter any text in the input. The blur() call prevents this.

How To Setup And Debug Google App Script

This is a quick start guide on setting up Google App Script and using it in G Suite applications. Mainly, I will go through Google Sheets and Google Drive since they are the most widely used services.

Motivation

This is meant to remove a layer of repetitive, mundane housekeeping tasks via automation. In my side hustle, we recently shifted our files from Dropbox to Google Drive, giving us the ability to automate our tasks with Google App Scripts.

Setup

Head to your Google App Scripts console. There are many functions you can explore here. I will touch on just 3: projects, executions and triggers.

Every project holds the script to execute based on a trigger. It is up to you to decide whether a project should uphold the Single Responsibility Principle or do multiple tasks as part of a bigger singular operation.

Clicking on the ‘New Project’ button near the top left of the screen will bring you to the code editor where you can code your script.

In a script, you can define any number of functions. The exact function to run can be chosen during the trigger selection process. The usual setup is to have a main function, and the other functions are helpers.
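
For instance, a project could be organized like this rough sketch, where main is the entry point you attach the trigger to and everything else is a helper it calls:

function main() {
  var message = buildGreeting('App Script');
  Logger.log(message);
}

// Helper function, not meant to be wired to any trigger directly.
function buildGreeting(name) {
  return 'Hello, ' + name + '!';
}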

Debug

The most important thing in coding is debugging. After all, it makes up most of our time as software developers.

In App Script, there is no luxury of debugging the code interactively. You will need to rely on the usual logger. To print a log, write the code as such:

function main() {
  Logger.log('Hello World!');
}

There are 2 ways to view the logs.

First, within the script editor, go to View -> Logs. A dialog box will pop up to show you the logs. The shortcut on Mac is Cmd + Enter.

Second, from the “My Executions” page back in the App Script console. I personally prefer looking at the logs there because it shows not only the output but also other details, like whether the script is even running in the first place.

Click on the play arrow button to execute the function for debugging purposes.

Triggers

In the App Script editor, click on the clock icon to bring yourself to the function’s triggers page.

Add a trigger by clicking the bottom right button. A dialog box will pop up and you can select the conditions of the trigger.

You can select the function in the script to run, as mentioned earlier, and the type of trigger. For Google Sheets, there are more options for triggering the script. More on that later.

Basics

Before you begin, it is good to know that every entity in G Suite has an id. This id is the gibberish string that you see in the URL of an opened Google Sheets, Google Docs, or even a folder in Google Drive. It is highlighted in yellow in the image below.

Google Docs ID

Knowing the id of the particular file or folder allows you to carry out your operations without having to write code to search for it. Then again, you can also search for them by name.
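
As a quick illustration, here is a sketch of both approaches using DriveApp; the id and file name below are placeholders:

function findFiles() {
  // Direct lookup when you already know the id (copied from the URL).
  var fileById = DriveApp.getFileById('YOUR_FILE_ID');
  Logger.log(fileById.getName());

  // Searching by name returns an iterator, since names are not unique.
  var matches = DriveApp.getFilesByName('Quarterly Report');
  while (matches.hasNext()) {
    Logger.log(matches.next().getId());
  }
}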

The entities that can be called and utilized in the App Script are documented here. Let’s take a look at DriveApp for example.

Example With Google Drive On Time Driven Trigger

Let’s say you want to remove editors you previously gave access to in your files. You want them to be removed when the files are moved into a particular folder.

function removeEditors() {
  var folder = DriveApp.getFolderById(ID_OF_FOLDER);

  iterateFiles(folder);
}

function iterateFiles(folder) {
  var files = folder.getFiles();
  while (files.hasNext()) {
    var file = files.next();
    Logger.log(`Looking at file ${file.getName()}`);

    var editors = file.getEditors();
    editors.forEach(function(editor) {
      var email = editor.getEmail();
      var approvedEmails = [
        'luffy@gmail.com',
        'zoro@gmail.com',
        'sanji@gmail.com',
        'usopp@gmail.com',
        'nami@gmail.com',
        'chopper@gmail.com',
        'robin@gmail.com',
        'franky@gmail.com',
        'brooks@gmail.com',
        'jinbei@gmail.com'
      ];

      if (!approvedEmails.includes(email)) {
        Logger.log(`Removing editor: ${email} from ${file.getName()}`);
        file.removeEditor(email);
      }
    })
  }
}

Given the ID_OF_FOLDER, this script will iterate through all the files in that folder, and for each file it will check whether its editors’ emails are in the list of approved emails. Editors whose emails are not in the list are stripped of the editor role on the file.

Note the way the files variable is looped. The loop is carried out as long as hasNext() returns true, and the next file in line is retrieved via next(). This is different from how the editors are looped with forEach, which is the more usual way developers iterate over collections in JavaScript.

The last step is to set up the time-driven trigger to your liking.
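
Alternatively, if you prefer creating the trigger in code rather than through the console UI, a sketch like this should work; run it once manually and it registers a recurring time-driven trigger (the hourly interval is just an example):

function createTimeDrivenTrigger() {
  // Runs removeEditors every hour.
  ScriptApp.newTrigger('removeEditors')
    .timeBased()
    .everyHours(1)
    .create();
}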

The code, unfortunately, cannot be optimized by checking the file’s getLastUpdated() timestamp to spare the need to loop through editors for files that were moved into this folder eons ago. This is because moving a file into a different folder does not update its last-updated timestamp.

Albeit a small inconvenience, I do expect more features and attributes to be developed in App Script in the future for us to utilize and optimize our code. Until then, I sincerely hope that it stays free, as it is right now, without any usage limitation or tiering. In fact, I hope it stays free forever!

Example With Google Sheet OnEdit Trigger

Let’s look at another example with Google Sheet. In a spreadsheet, we want to compile the values entered in a sheet into another sheet in the same spreadsheet in real time. Simple and straightforward. Let’s see how we can work on it.

Before that, take note of an extremely crucial step. Make sure the spreadsheet is a Google Sheet. If you uploaded a Microsoft Excel sheet and opened it using Google Spreadsheet, you will not be able to run any App Script until you convert it to a Google Sheet. If this applies to you, go to File -> Save as Google Sheet as shown below to make this necessary change.

Save as Google Sheets

The function is as shown below.

function compile(event) {
  var row = event.range.getRow();
  var column = event.range.getColumn();
  var value = event.range.getValue();
  var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
  var targetSheet = spreadsheet.getSheetByName("Target Sheet");
  targetSheet.getRange(row, column).setValue(value * 2);
}

We get the row and column that the user was editing on, whichever sheet that was, and multiply its value by 2 before saving it in the same row and column in the desired sheet with the name Target Sheet.
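
One refinement worth considering, sketched below under the assumption that you only want to mirror edits coming from the other sheets: skip the event early when the user is editing the Target Sheet itself, so its values do not get doubled in place.

function compile(event) {
  // Ignore edits made directly on the target sheet.
  if (event.range.getSheet().getName() === "Target Sheet") return;

  var row = event.range.getRow();
  var column = event.range.getColumn();
  var value = event.range.getValue();
  var targetSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Target Sheet");
  targetSheet.getRange(row, column).setValue(value * 2);
}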

Now to add the trigger. In the script editor, click on the clock icon. It will bring you to the triggers of this project. Click on the button to add trigger on the bottom right of the page. Under Select event source, you will now see a new option From spreadsheet, and when you select it, you can select the event type to kickstart the function. We are looking for the onEdit event type for this case.

As you can see, this project is associated with only 1 spreadsheet – the spreadsheet it was created from. Additionally, each spreadsheet can have only 1 such project as well.

Hence, you cannot have the same script running on different spreadsheets, at least not if you set it up this way. There may be another way to overcome this restriction, but I have yet to explore it.

Potential

The potential of App Script does not end here. Other than scripts, you can even code HTML views to present visual dashboards based on real-time changes.

On top of that, you can publish your own App Scripts, as well as use the scripts other developers have made. This forms a community that can supercharge automation to increase productivity.

Conclusion

Make use of App Script and automate away!

Handling DOM Elements From link_to remote: true Callback

This is a documentation of how to handle the response from a link_to remote: true API call and manipulate the DOM with minimal Javascript code.

Motivation

In the past, I used JavaScript to add a click listener on a button element in order to make a jquery.ajax() API call to my Rails server.

A typical use case would be to delete a row in a list. While I can make a resource delete request to the Rails server and it will reload the page with the new list the RESTful way, this UX flow does not work out in some cases.

Hence, I had to use the jquery.ajax() way to work things out. I did not use the link_to helper with remote: true because I thought there was no way for me to listen for the response and react.

The only way I could listen for the callback and handle the DOM element thereafter, without having to reload the page, was to use the success callback of jquery.ajax(). Or so I thought.

The Magic

A better way is to use the link_to remote: true helper to render the element without fuss, the Rails way. The key step is to add a JavaScript listener on the element.

<!-- page.html.erb -->
<div class="parent-row" data-model-id="<%= @model.id %>">
  <%= link_to 'DELETE', model_path(@model), method: :delete, class: 'delete-model', data: { confirm: 'Are you sure?' }, remote: true %>
</div>
// page.js
$(document).on('ajax:success', '.delete-model', event => {
  const [response, status, xhr] = event.detail;
  $(`.parent-row[data-model-id="${response.model_id}"]`).remove();
  alert(response.message);
});

The listener listens for the ajax:success callback on the element that made the API call. When triggered by the event, it executes the block of code in its callback function. In this callback function, we receive the data passed from the backend, which we can use to remove the DOM element as required.

Note that you might not want to use $(document).on() in a turbolinks environment, as the listener will be added every time the page changes. A particular use case is documented here.
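
One defensive pattern, if you do end up binding in a Turbolinks context, is to namespace the handler and unbind any previous copy before re-attaching; a sketch:

// Namespacing the event keeps the binding idempotent across repeated
// script evaluations triggered by Turbolinks page visits.
$(document)
  .off('ajax:success.deleteModel')
  .on('ajax:success.deleteModel', '.delete-model', (event) => {
    const [response] = event.detail;
    $(`.parent-row[data-model-id="${response.model_id}"]`).remove();
  });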

We can add an ajax:error listener as well to handle errors.
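
A minimal sketch of such a listener, assuming the backend responds with a JSON error message, could look like this:

$(document).on('ajax:error', '.delete-model', (event) => {
  const [response, status, xhr] = event.detail;
  // Fall back to a generic message if the backend did not provide one.
  alert((response && response.message) || 'Something went wrong. Please try again.');
});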

The Advantages

javascript crying | vic-l

This is a no-hassle method of writing code in Ruby (well, for the rendering of the element at least). The old way I used, with the jquery.ajax() method, requires more tear-inducing JavaScript code to conjure. For a full stack Rails developer, it is not the most welcome.

On top of that, using Rails helper to render out the HTML element allows us to make use of the various Rails helpers to supercharge our development speed.

URL route helpers parse out the actual RESTful route to call with ease. Since the route is dynamically interpolated, no code change is required should there be a route change.

We can also still take advantage of rails-ujs, which has some handy features commonly needed for development. In the example above, I added a data-confirm attribute. This will trigger rails-ujs to ask for confirmation before proceeding with the request, and gracefully abort the operation should the user cancel the confirmation.

This will require proper setup with the new Rails 6 version. Check out my article on how to properly set up Rails 6 with Bootstrap, and of course, integrate the new rails-ujs in its brand new frontend paradigm running on webpack.

Conclusion

Utilizing Rails helpers as much as possible brings out the strength of the Rails framework even more, which is rapid development. This method of listening to remote API calls and acting on the response allows exactly that.

AWS Lambda and API Gateway Integration With Terraform To Collect Emails

This is a documentation on creating a service that collects emails. It runs on serverless technology, utilizing AWS Lambda and API Gateway. It is also made easy to deploy to the cloud with infrastructure as code via Terraform, in a plug-and-play fashion.

Motivation

Often, I have to make static websites that are not exactly completely static, because they require a backend to collect emails. While 3rd party services like Mailchimp and SendGrid have their own SDKs to support easy integration for email collection, we have to worry about hitting the limits of their packages and plans. This translates to stress for developers, as we have to find a solution quickly and properly. If this happens on a weekend or a Friday, and somehow that is always the case since more people are surfing the net then, the intensity is amplified.

For a new website, it is very hard to gauge the traffic, and thus the plan required for the 3rd party service. This poses difficulties when budgeting for the project. Underutilizing the service also translates to unnecessary cost. The best kind of plan for such a website is a pay-as-you-go model, in my opinion, and that can be achieved by integrating with cloud providers like AWS.

Technology Stack

AWS Lambda

Enter AWS Lambda, where you only pay for what you use. You do not need to fork out money at the start of your project. Instead, you just pay for how much you use, relieving you of the worry of wasting money on resources you are not using. In fact, cost only becomes an issue once you hit 1 million potential users signing up with their email every month, because AWS Lambda comes with 1 million free requests every month before it starts charging. That is highly unlikely for a new website, which means you now have a backend for your static website for free.

API Gateway

For the serverless function, that is AWS Lambda, to connect to the Internet via an API, we need API Gateway. It exposes the serverless function to the World Wide Web with an HTTPS endpoint, served over the encrypted transport layer security (TLS) protocol to uphold security by default. This allows your websites to use the serverless function via API calls.

Terraform

To set up the infrastructure, the usual way is to navigate the AWS management console, deploy the required AWS resources and link them up. This can be a challenge if you are not familiar with the required configurations. Not only will this translate to a loss of precious time debugging these issues, time that developers could otherwise have spent with their loved ones to challenge the meme below, but it will also lead to frustration.

While frustration is part and parcel of life as a programmer, we can also avoid it with our knowledge of code. Here is where Terraform enters the fray. It is infrastructure as code: you write the configuration of the infrastructure once and you can deploy it multiple times without having to go through the whole forest of the AWS console each time. This means you do not need to remember every single step, and you do not need to deal with surprise bugs because you forgot one of them, or worse, had a spelling error.

Programming is like magic. You write very specific instructions in arcane languages to invoke commands, and if you get it even a little bit wrong you risk unleashing demons and destroying everything.

— Diana Carrier (@artemis_134) June 23, 2018

Since the blueprint of the infrastructure is in code, we can leverage version control features with git, and work together to improve the code base along the way without fear of not being able to roll back to the previous working configuration.

Terraform Files

I will start off with the Terraform files required to set up the infrastructure that deploys the code. Let’s start with the place to store our emails.

The database – AWS DynamoDB

I will store the emails collected in AWS’s own noSQL database DynamoDB. This is a fast, simply structured and schemaless storage which fits my use case very aptly.

It allows fast and simultaneous writes at high speed, so there is no fear of race conditions from a spike in the volume of signups during a PR event promoting the product and getting people to leave their emails at the website.

Since it is schemaless, we can easily add new details of the users that we would like to collect on top of their emails along the way, without having to migrate or fiddle with the structure of the database. With proper metaprogramming, you do not need to touch the backend code either, leaving only the frontend to work on adding the new text fields for data collection.

For the sake of argument, we could also use a traditional relational database management system (RDBMS) for this project. It is queried with SQL, a language most, if not all, developers who have ever touched a database would know. There is no need for fancy noSQL in this simple project. In addition, the chances of leveraging the scaling advantage of noSQL over SQL databases are low, because you will need a lot of traffic for that to become a worry. For a new website, that is highly unlikely to happen.

However, highly influenced by the cost, I am still sticking with DynamoDB in this case. To set up an AWS RDS instance to host a managed relational database, the cheapest MySQL database already goes for around 20 USD a month, compared to the pay-as-you-go model that DynamoDB employs. On top of that, DynamoDB has a generous amount of free usage and storage under its free tier. This free tier does not last for just the first 12 months after signup, but forever, unlike its RDS counterpart. You probably will NOT incur any cost using DynamoDB unless your marketing is brilliant for your new website.

resource "aws_dynamodb_table" "main" {
  name = "${var.project_name}-dynamodb_table"
  billing_mode = "PROVISIONED"
  read_capacity = var.dynamodb-read_capacity
  write_capacity = var.dynamodb-write_capacity
  hash_key = "email"

  attribute {
    name = "email"
    type = "S"
  }
}

Provisioning the database is the simplest part. I am using Terraform variables to substitute in the number of read and write units required, as well as the table name, for robustness’ sake.

I have set the billing mode to “PROVISIONED” for simplicity’s sake. After all, I am not expecting any insane burst of traffic for a site that is not popular. Even if it does happen, maybe due to some incredible promotion at some hugely popular event, I do not expect the load to require me to scale the read and write capacities of the database. It is going to be a quick write of a few bytes.

On top of that, provisioned capacity means fewer configurations are needed for the permissions to autoscale the capacities of the database. That can take some time to configure, and since it is outside the topic of this article, I will stick to the “PROVISIONED” billing mode.

The hash_key, or “partition key” in other definitions, is analogous to the primary key in a SQL database table. It requires specific details under the attribute property. You can specify a range_key, or “sort key”, here if you require one, and remember to add an attribute to describe it as well.

Other attributes that are neither the partition key nor the sort key need not have an attribute property in this file. You can simply write them to the database and they will register. After all, this is a schemaless database.

On top of that, it is a fully managed database, so it comes with all the goodies like backup and version maintenance to spare developers from all these chores.

The backend – AWS Lambda

Next is the lambda function. It is written in JavaScript on Node.js. The file below is the configuration file to set up the infrastructure required. Let’s dive into it.

resource "aws_lambda_function" "main" {
  filename = var.zipfile_name
  function_name = "${var.project_name}"
  role = aws_iam_role.main.arn
  handler = "index.handler"

  source_code_hash = "${filebase64sha256("${var.zipfile_name}")}"

  runtime = "nodejs12.x"
}

resource "aws_iam_role" "main" {
  name = "${var.project_name}-iam_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "main" {
  name = "main"
  path = "/"
  description = "IAM policy for lambda to write to dynamodb table and logging"

  policy = templatefile("${path.module}/lambda_policy.tmpl", { dynamodb_arn = aws_dynamodb_table.main.arn })
}

resource "aws_iam_role_policy_attachment" "main" {
  role = "${aws_iam_role.main.name}"
  policy_arn = "${aws_iam_policy.main.arn}"
}

resource "aws_lambda_permission" "main" {
  statement_id = "AllowExecutionFromAPIGateway"
  action = "lambda:InvokeFunction"
  function_name = aws_lambda_function.main.function_name
  principal = "apigateway.amazonaws.com"

  source_arn = "${aws_api_gateway_rest_api.main.execution_arn}/*/*/*"
}

Uploading of the backend code uses the base64 hash of the zip file of the code. The code needs to be compressed and zipped before this step. We will see how to automate this process later.

This lambda function will need the permissions to write to the dynamoDB table. This is done using

  • aws_iam_role to establish trust between the 2 AWS services
  • aws_iam_policy to give the lambda function permission to access the database resource and perform the PutItem action. Details of the policy are interpolated via a template file, which we will go through later
  • aws_iam_role_policy_attachment to bind the aws_iam_role to the aws_iam_policy on the lambda function
  • aws_lambda_permission to allow API Gateway to integrate with the lambda function and invoke it

The template file for the aws_iam_policy is shown below. It lists the actions that the lambda function is permitted to perform on the specified DynamoDB table. It also contains the permissions for the lambda function to push logs to AWS CloudWatch. By the way, these logging permissions are the default permissions for a lambda function, and this template adds the DynamoDB permissions on top of them. Note the dynamodb_arn variable that is interpolated, which justifies the use of a template file instead of hardcoding the whole policy in the main Terraform file, for robustness’ sake.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "dynamodb:PutItem",
      "Resource": "${dynamodb_arn}",
      "Effect": "Allow"
    },
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}

The API layer – AWS API Gateway

API Gateway is required to expose the lambda function so it can be consumed by servers and websites via a URL endpoint. The endpoint will be served over HTTPS, which requires some extra configuration as documented below.

resource "aws_api_gateway_rest_api" "main" {
  name = var.project_name
}

resource "aws_api_gateway_resource" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  parent_id = aws_api_gateway_rest_api.main.root_resource_id
  path_part = "email"
}

resource "aws_api_gateway_integration" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  integration_http_method = aws_api_gateway_method.main.http_method
  type = "AWS_PROXY"
  uri = aws_lambda_function.main.invoke_arn
}

resource "aws_api_gateway_integration_response" "main" {
  depends_on = [aws_api_gateway_integration.main]

  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  status_code = aws_api_gateway_method_response.main.status_code
}

resource "aws_api_gateway_method" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_deployment" "main" {
  depends_on = [
    "aws_api_gateway_integration_response.main",
    "aws_api_gateway_method_response.main",
  ]
  rest_api_id = aws_api_gateway_rest_api.main.id
}

resource "aws_api_gateway_method_settings" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  stage_name = aws_api_gateway_stage.main.stage_name
  
  # settings not working when specifying the single method
  # refer to: https://github.com/hashicorp/terraform/issues/15119
  method_path = "*/*"

  settings {
    throttling_rate_limit = 5
    throttling_burst_limit = 10
  }
}

resource "aws_api_gateway_stage" "main" {
  stage_name = var.stage
  rest_api_id = aws_api_gateway_rest_api.main.id
  deployment_id = aws_api_gateway_deployment.main.id
}

resource "aws_api_gateway_method_response" "main" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  resource_id = aws_api_gateway_resource.main.id
  http_method = aws_api_gateway_method.main.http_method
  status_code = "200"
}

output "endpoint" {
  value = "${aws_api_gateway_stage.main.invoke_url}${aws_api_gateway_resource.main.path}"
}

So let’s break it down.

The aws_api_gateway_rest_api represents the project in its entirety.

The aws_api_gateway_resource refers to each api route of this project, and there is only 1 in this case.

I have set up only 1 stage environment of aws_api_gateway_stage for this project, using a Terraform variable. You can set up different stages to differentiate the staging and production environments.

The aws_api_gateway_stage is associated with an aws_api_gateway_method_settings that sets the throttling rate of the API to prevent spam and overloading. For the method_path property, the wildcard route is used to apply the settings to all routes instead of the only API route that was created. It is trivial in this case, but the reason for picking this “easy” route is simply a bug. If I were to specify the exact route, which is in the form of {resource_path}/{http_method}, the settings on the throttling rate would not propagate. It was documented here on GitHub but was not properly resolved. Leaving it here for now.

The aws_api_gateway_deployment configures the deployment of the API. Note the depends_on attribute that was assigned. This explicit dependency is critical to ensure the deployment is called into effect after all the necessary resources have been provisioned.

The aws_api_gateway_integration configuration sets the integration to Lambda proxy, using the POST HTTP method without any authorization, as specified by the aws_api_gateway_method configuration. Lambda proxy allows us to handle the request like how we would in a typical web application backend framework. The full request object is passed to the lambda function and API Gateway plays no part in mapping any of the request parameters. The API Gateway mapping has great potential to integrate interfaces properly, but for our use case, it is not necessary. I find this article doing a great job in explaining the API Gateway features with easy to consume information and summary, like a gameshark guide book written by the half-blood prince. Do take a look to understand AWS API Gateway better.
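
For reference, this is roughly what the handler sees with Lambda proxy integration; a trimmed sketch of the event (the exact fields are documented by AWS):

exports.handler = async (event) => {
  // Trimmed example of the proxy event passed through by API Gateway:
  // {
  //   "httpMethod": "POST",
  //   "path": "/email",
  //   "queryStringParameters": null,
  //   "body": "{\"email\":\"test@test.com\"}",
  //   "isBase64Encoded": false
  // }
  const { email } = JSON.parse(event.body); // hence the JSON.parse in the lambda code later
  console.log(email);
  return { statusCode: 204, headers: { 'Access-Control-Allow-Origin': '*' } };
};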

The aws_api_gateway_integration_response is responsible for handling the response from the lambda function. This is where we can make changes to the headers returned from the lambda function using the response_parameters property, which is not used in this case. This is also the place to map and transform the response data from the backend to fit the desired data structure using the response_templates property.

The aws_api_gateway_method_response is where we can filter what response headers and data from aws_api_gateway_integration_response to pass on to the caller.

The transforming and mapping of the headers and data from the backend (i.e. the lambda function) in aws_api_gateway_integration_response, and the filtering of headers and data before passing to the frontend in aws_api_gateway_method_response, are not needed in this sample application. It is just good knowledge to have. There are 2 reasons why we do not need them here.

First, in a bit, we will go through the frontend, which will make an API call that is a simple request. A simple request does not require a preflight request, which is an API call made by browsers prior to the actual API call; simple requests are deemed safe since they use standard CORS-safelisted request headers. In the event that one does need a preflight request, because one is not making a simple request, we will need to set up another API route that transforms the headers returned from the backend and allows the relevant headers to be passed on to the frontend for this preflight request. This will allow the frontend website to overcome the CORS policy enabled by default in modern browsers. It will mean configuring a new set of aws_api_gateway_rest_api, aws_api_gateway_integration, aws_api_gateway_method, aws_api_gateway_integration_response and aws_api_gateway_method_response just for this preflight request. Things can get complicated here, so I will leave it out of this article. If you still need to implement CORS, [this gist](https://gist.github.com/keeth/6bf8b67c82f9a085e03ecbb289a859d6) is a good reference.

Second, we are using lambda proxy integration, so the full response from the lambda will be passed to the front end and mapped automatically, provided the response from the lambda code is properly formatted. Refer to this documentation for more details on it.

Lastly, the output resource will print the value of the endpoint of the API for us to integrate into our frontend.

The Admin Stuff

This file contains the details that we need to set up Terraform and the variables we are using. The provider’s region attribute here is hardcoded, which should ideally not be the case. I have yet to figure out how to make this dynamic and robust. The names with the todo- prefix should be changed to fit the project.

We are using an S3 bucket as the Terraform backend to hold the state of the infrastructure provisioned by Terraform. Creation of the bucket will be automated via a script that we will go through in the section on deployment.

provider "aws" {
  version = "~> 2.24"
  region = "eu-west-1"
}

terraform {
  required_version = "~> 0.12.0"
  backend "s3" {
    bucket = "todo-project-tfstate"
    key = "terraform.tfstate"
    region = "eu-west-1"
  }
}

variable "project_name" {
  type = string
  default = "todo-project"
}

variable "region" {
  type = string
  default = "eu-west-1"
}

variable "stage" {
  type = string
  default = "todo-stage"
}

variable "zipfile_name" {
  type = string
  default = "todo-project.zip"
}

variable "dynamodb-read_capacity" {
  type = number
  default = 1
}

variable "dynamodb-write_capacity" {
  type = number
  default = 1
}

The Application

Here is the application code, written in Node.js. It is a simple write to DynamoDB with basic error handling. It takes in only 1 parameter, the email. This code can definitely be improved by allowing more parameters to be written to the database in a dynamic way, so that the same code base can be used for a site that collects the first and last name of the user, as well as another site that collects the date of birth of the user. I will leave that as a future personal quest, though I sketch one possible direction after the code below.

// Load the AWS SDK for Node.js
const AWS = require('aws-sdk');

// Set the region 
AWS.config.update({region: 'eu-west-1'});

// Create the DynamoDB service object
const ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});

exports.handler = async (event) => {
  console.log(JSON.stringify(event, null, 2));
  const params = {
    TableName: 'todo-project-dynamodb_table',
    Item: {
      'email' : {S: JSON.parse(event.body).email}
    }
  };

  // Call DynamoDB to add the item to the table, awaiting the promise so the
  // lambda does not return before the write completes.
  try {
    const result = await ddb.putItem(params).promise();
    console.log("Result", result);
    const response = {
      statusCode: 204,
      headers: {
        "Access-Control-Allow-Origin" : "*",
      },
    };
    return response;
  } catch(err) {
    console.log(err);
    const response = {
      statusCode: 500,
      headers: {
        "Access-Control-Allow-Origin" : "*",
      },
      body: JSON.stringify({ error: err.message }),
    };
    return response;
  }
};

A thing to note here is the need to return the Access-Control-Allow-Origin header in the response. The response also has to follow a particular, but straightforward and common, format in order for the Lambda proxy integration with API Gateway to work. This maps the response properly to the API Gateway method response and returns it to the frontend websites, overcoming the CORS policy implemented by modern browsers.
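
Coming back to the earlier point about accepting more parameters dynamically: one possible sketch, an assumption of mine rather than what the project currently does, is to map every field in the request body into a string attribute, so collecting a new field only needs a frontend change:

const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });

exports.handler = async (event) => {
  const body = JSON.parse(event.body);

  // Turn every submitted field into a DynamoDB string attribute.
  const item = {};
  Object.keys(body).forEach((key) => {
    item[key] = { S: String(body[key]) };
  });

  await ddb.putItem({ TableName: 'todo-project-dynamodb_table', Item: item }).promise();

  return {
    statusCode: 204,
    headers: { 'Access-Control-Allow-Origin': '*' },
  };
};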

Deployment

I will be using 3 Ruby scripts for deployment-related tasks, namely init.rb, apply.rb and destroy.rb, as well as a helper service object, get_aws_profile.rb, for the deployment process.

Let’s take a look at them.

get_aws_profile.rb

# get_aws_profile.rb

class GetAwsProfile
  def self.call
    aws_profile = "todo-aws_profile"

    begin
      aws_access_key_id = `aws --profile #{aws_profile} configure get aws_access_key_id`.chomp
      abort('') if aws_access_key_id.empty?

      aws_secret_access_key = `aws --profile #{aws_profile} configure get aws_secret_access_key`.chomp
      abort('') if aws_secret_access_key.empty?
    rescue Errno::ENOENT => e
      abort("Make sure you have aws cli installed. Refer to https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html for more information.")
    end

    p "AWS_ACCESS_KEY_ID = #{aws_access_key_id}"
    p "AWS_SECRET_ACCESS_KEY = #{aws_secret_access_key}"

    [aws_profile, aws_access_key_id, aws_secret_access_key]
  end
end

This is a helper method that will get the aws_access_key_id and the aws_secret_access_key for use in the scripts. Note that it uses the aws cli to obtain the keys, so the cli has to be installed on your local machine before running the scripts. It also assumes you are using named profiles to hold your credentials.

I don’t really like this setup since it requires these prerequisites, but that can be solved in the future.

init.rb

The first script to run is init.rb.

The init.rb will create the S3 bucket to be used as the terraform backend. Line 20 checks for the presence of this bucket and throws an exception if the bucket does not exist. The rescue block, if triggered, will create the non-existent bucket.

The initialization process on terraform is run via its docker image.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'aws-sdk-s3', '~> 1'
end

require './get_aws_profile.rb'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

s3_client = Aws::S3::Client.new(
  access_key_id: aws_access_key_id,
  secret_access_key: aws_secret_access_key,
  region: 'eu-west-1'
)

begin
  s3_client.head_bucket({
    bucket: 'todo-project-tfstate',
    use_accelerate_endpoint: false
  })
rescue StandardError
  s3_client.create_bucket(
    bucket: 'todo-project-tfstate',
    create_bucket_configuration: {
      location_constraint: 'eu-west-1'
    }
  )
end

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  init`

puts response

apply.rb

Once initialized, the next script to run is apply.rb.

Prior to applying the Terraform infrastructure, the backend code is packaged into a zip file. After the apply step, the zip file is deleted for housekeeping.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'rubyzip', '>= 1.0.0'
end

require './get_aws_profile.rb'
require 'zip'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

folder = Dir.pwd
input_filenames = ['index.js']
zipfile_name = File.join(Dir.pwd, 'todo-project.zip')

File.delete(zipfile_name) if File.exist?(zipfile_name)

Zip::File.open(zipfile_name, Zip::File::CREATE) do |zipfile|
  input_filenames.each do |filename|
    zipfile.add(filename, File.join(folder, filename))
  end
end

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  apply -auto-approve`

puts response

File.delete(zipfile_name) if File.exist?(zipfile_name)

With this, the API is now deployed and can be called from any website. We will go through a sample frontend integration in a bit.

destroy.rb

Once you are done with the project, or are in the process of debugging, the destroy script will remove all the resources deployed. It will also remove the S3 backend bucket that was created outside of Terraform.

require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'pry'
  gem 'aws-sdk-s3', '~> 1'
end

require './get_aws_profile.rb'

aws_profile, aws_access_key_id, aws_secret_access_key = GetAwsProfile.call

response = `docker run \
  --rm \
  --env AWS_ACCESS_KEY_ID=#{aws_access_key_id} \
  --env AWS_SECRET_ACCESS_KEY=#{aws_secret_access_key} \
  -v #{Dir.pwd}:/workspace \
  -w /workspace \
  -it \
  hashicorp/terraform:0.12.12 \
  destroy -auto-approve`

puts response

s3_client = Aws::S3::Client.new(
  access_key_id: aws_access_key_id,
  secret_access_key: aws_secret_access_key,
  region: 'eu-west-1'
)

begin
  s3_client.head_bucket({
    bucket: 'todo-project-tfstate',
    use_accelerate_endpoint: false
  })

  s3_client.delete_object({
    bucket:  'todo-project-tfstate',
    key: 'terraform.tfstate', 
  })
  s3_client.delete_bucket(bucket: 'todo-project-tfstate')
rescue StandardError
  puts "todo-project-tfstate S3 bucket already destroyed."
end

Sample Frontend Integration

<!DOCTYPE html>
<html>
<head>
  <script
  src="https://code.jquery.com/jquery-3.4.1.min.js"
  integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo="
  crossorigin="anonymous"></script>
</head>
<body>

  <h2>HTML Forms</h2>

  <form id="form">
    <label for="email">Email:</label><br>
    <input type="text" id="email" name="email" value="test@test.com"><br>
    <input type="submit" value="Submit">
  </form>

  <script type="text/javascript">
    $( "#form" ).submit(function(event) {
      event.preventDefault();

      $.ajax({
        type: "POST",
        url: "https://todo-endpoint.execute-api.eu-west-1.amazonaws.com/todo-stage/email",
        data: JSON.stringify({
          email: $('#email').val()
        }),
        success: function(data, textStatus, jqXHR) {
          debugger
        },
        error: function(jqXHR, textStatus, errorThrown) {
          debugger
        }
      });
    });
  </script>

</body>
</html>

Above is a simple HTML web page that has the email prefilled for demonstration purposes. The form submits via jquery.ajax() using default settings so as not to trigger the need for a preflight request.

You will see the email added to the DynamoDB table, and the logs of the lambda function recorded in AWS CloudWatch.

Conclusion

This exercise helped me understand how Lambda is integrated with API Gateway, as well as the immense potential of the latter as a robust middleware. In addition, I got to understand preflight requests and CORS better, as well as the jquery.ajax() function.

The project is saved in this repository for future reference.

Implementing Eslint In Sublime With Airbnb Style Guide

This is a documentation on how to set up ESLint on the Sublime Text editor, bootstrapped with the style guide set up by Airbnb.

Why Is Eslinting Necessary

Linting your JavaScript code catches syntax errors, and possibly some runtime errors, while you are coding. It helps you debug faster, putting things like a missing ; or wrong closures out of the way. No more rolling your eyes and reducing your lifespan over things that shouldn’t matter.

eyeroll | vic-l

It will also be helpful if you are working together with other developers. It helps keep the flavor of the JavaScript code the same across the code base, which can speed up development work and make it enjoyable, at the very least.

Ok, maybe a teeny weeny bit better since we are talking about JavaScript here.

javascript crying | vic-l

Why Airbnb Style Guide

Linting can also be a good teacher. We can follow style guides, or more technically eslint configurations, set up by other senpais with telling experience or hailing from reputable tech companies, and when we write code that does not adhere to what they deem best practice, we can learn why and change accordingly.

Google, for instance, has its own JavaScript style guide, which comes with a shareable eslint config that we can just plug and play into our codebase.

However, I would personally suggest Airbnb’s style guide because of its well-documented reasons for implementing each rule. In its style guide page, it states the reason for implementing certain rules, as well as gives examples of good and bad code. Here is an image to better explain this point.

Airbnb javascript style guide | vic-l

This is an accumulation of a wealth of knowledge from brilliant programmers that is made easily accessible to us.

Sometimes, we may even learn some quirks of JavaScript that we never knew existed because we have not experienced the problem before. It is like time traveling to the future by leveraging the past lessons of those before us.

Installing eslinting

Start off by installing the packages required for eslinting with Airbnb style.

npm i -D eslint \
    babel-eslint \
    eslint-config-airbnb \
    eslint-plugin-import \
    eslint-plugin-jsx-a11y \
    eslint-plugin-react \
    eslint-plugin-react-hooks

I will attempt to explain what each line does.

-D flag

The -D flag installs the packages as development dependencies for the project at hand. It makes sense to install them under the project instead of globally because the same configurations can be shared with other developers working on the same project. And they are installed only as development dependencies since they are only required during development work.

eslint

The eslint package is the main package that handles the linting on vanilla JavaScript, and vanilla JavaScript only. It is crucial, in the case of JavaScript, to understand the significance of vanilla JavaScript, and that will require a bit of a history lesson.

babel-eslint

JavaScript is a fairly senior language. It is one of the pioneers of computing languages, which means it inevitably made some bad mistakes in its syntax design. Over the years, people, or should I say geniuses, became unhappy about it and came up with more efficient ways to write it.
CoffeeScript is one example. Take a JavaScript function defined like this:

var greet = function(name) {
  return console.log(`Hello, ${name}`);
};

CoffeeScript and its gang of caffeine addicts decided to shrink the amount of code required with the use of indentation. The same definition can be written in CoffeeScript like this:

greet = (name) ->
  console.log "Hello, #{name}"

TypeScript is another example, and its side of the camp stresses the need for JavaScript variables to be type safe, the lack of which has caused many of the JavaScript errors throughout its history. The same function defined in TypeScript looks like this:

var greet = function (name: string): void {
  console.log(`Hello ${name}`);
}

The official standard for JavaScript, however, is the ECMAScript specification, or ES for short (hope that answers the naming of the eslint package). The de facto compiler that keeps up with it is Babel. The specification has undergone multiple improvements over the years and new standards were iterated. Babel has evolved along with it. The same function can be written with Babel as such:

var greet = (name) => {
  console.log(`Hello ${name}`);
}

All these various compilers mean that we need a different set of linting rules depending on the compiler you are using, in order to catch the corresponding syntax errors. The Airbnb style guide uses Babel and follows the official modern JavaScript syntax. Hence, we are installing the babel-eslint package.

eslint-config-airbnb

This package consists of all the rules that the engineers at Airbnb have deemed best practices and which their company follows.

eslint-plugin-import

Next is the eslint-plugin-import package, which supports linting of file imports using ECMAScript modules. While the previously mentioned packages watch out for syntax errors in your code, this package looks out for erroneous file imports. These errors may be due to forgetting to export modules that exist in another file, or a wrong spelling in the file name to be imported.
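
For example, assuming a hypothetical ./utils module that exports only parseDate, the plugin would flag mistakes like the ones commented out below at lint time:

import { parseDate } from './utils';      // fine: the named export exists
// import { formatDate } from './utils';  // import/named: formatDate is not exported by ./utils
// import helpers from './utls';          // import/no-unresolved: the path does not resolve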

The Other Packages

The other packages are React specific. Airbnb uses React as the main JavaScript framework for its frontend. Hence, its default linting config requires the remaining packages to work, namely eslint-plugin-react, eslint-plugin-react-hooks, and eslint-plugin-jsx-a11y. Without them, you will get missing dependency errors.

Note that the config also requires the eslint and eslint-plugin-import packages to work, but since they are not specific to React and are good-to-have tools for development work in general, I explained a little more about them above.

If you are not using React, but still want to use the Airbnb style guide, you will be interested in the base configuration in the eslint-config-airbnb-base package.

The .eslintrc

What we have done so far is only download the linting packages. To utilise them, some configuration needs to be set up, and I will be setting it up using .eslintrc. There are a number of different ways to set up the configuration.

Write the .eslintrc as such:

{
  "parser": "babel-eslint",
  "extends": "airbnb",
  "rules": {
    "camelcase": "off",
  }
}

The parser option specifies that we are going to write the latest ECMAScript syntax and use Babel as the compiler, which is what the rules in the Airbnb configuration expect. More parser options can be found here.

The extends option tells the linter to use the rules set up by the Airbnb configuration. The rules will be empty if this is not specified, and you will essentially be writing code with just the recommended settings from the babel-eslint package.

The rules option allows you to overwrite rules that might not fit your workflow. I added the example of ignoring warnings on non camel case variables, which I am using right now. I do not follow the camel case practice when naming JavaScript variables because I am working with a Ruby on Rails backend. The Rails framework advocates snake case variable naming, which is different from JavaScript’s camel case convention. Hence, I decided to keep the same casing for the variable naming so that it is easy to receive data from the API responses, as well as to package requests to send to the backend.

Text Editor Linter

Lastly, we need a package in the text editor you are using that reads the eslint packages and configurations and highlights the relevant syntax errors accordingly. For the Sublime Text editor, you need to install these packages:

SublimeLinter
SublimeLinter-eslint

The steps to install sublime text packages can be easily achieved with the Sublime Package Control.

Restarting Sublime Text

The last step is to restart your Sublime Text editor. Open a file with the js extension, and start writing some erroneous code to see the linting in effect!