Reopen And Add Methods To Models In Ruby Gems

This is a documentation of how to add class and instance methods to models that are defined in ruby gems, a need that surfaces fairly often.

In a recent project that I was working on, I found a particular need for adding images to a tagging gem. The purpose of the gem is taxonomy, and the final UI draft has a different image allocated to each category (or should I say tag). We decided to have the rails backend handle the image and tag association. Hence, the ideal way to handle this would be to modify the models in the tagging gem to hold their images as well upon creation.

Project Specifics

The tagging gem that I am using is not the more popular and senior ActsAsTaggableOn gem, but the Gutentag gem.

The reason I use the latter instead of the former is that ActsAsTaggableOn did not support the new ActiveRecord 6 when I was working on the project. It returned erroneous results and threw errors in normal usage due to a deprecated ActiveModel method, for example.

The alternative I found is Gutentag. It supports Rails 6 and the contributors are actively resolving issues, keeping its issue count at 0 at the time of writing. I found it reliable and it does its main job well, which is to provide the tagging module.

The only thing it lacks for this particular project is an image to associate with for each tag. Here is where I would need to hack it.

I want to add an image to each tag using ActiveStorage via the has_one_attached method, and also a custom instance method that will return the tag’s name and image url.

The Rationale

The way I am doing it is to create a module that defines the relevant methods, and have the Gutentag::Tag model include this custom module during the initialization phase. This will require some workarounds, because we are accessing the ActiveStorage and ActiveModel/ActiveRecord railties during the initialization phase, where these railties are not loaded yet.

The Extension Module

Kudos to this answer on Stack Overflow. Following it, define the extension module as such:

# lib/extensions/gutentag.rb
# frozen_string_literal: true

module Extensions
  module Gutentag
    extend ActiveSupport::Concern

    included do
      has_one_attached :image
    end

    def json_attributes
      custom_attributes = attributes.dup
      custom_attributes.delete 'created_at'
      custom_attributes.delete 'updated_at'
      custom_attributes.delete 'taggings_count'
      custom_attributes.delete 'id'

      # add image path based on the storage service used
      if Rails.env.test? || Rails.env.development?
        ActiveStorage::Current.set(host: 'http://localhost:3000') do
          custom_attributes['image'] = image.attached? ? image.service_url : nil
        end
      else
        custom_attributes['image'] = image.attached? ? image.service_url : nil
      end

      custom_attributes
    end
  end
end

I modified it slightly with the use of ActiveSupport::Concern to do it the Rails 6 way. This helps to resolve module dependencies gracefully.

In this extension, I declared an image attachment in the module using ActiveStorage’s has_one_attached class method, which will ultimately be applied to the Gutentag::Tag model.

I also defined the instance method json_attributes, which returns only the name and the image url of the tag when called. It is used in the api response when frontend clients are retrieving the list of tags, for example.

The Initialization

The code will be added to the original Gutentag initializer file under config/initializers/gutentag.rb.

# frozen_string_literal: true

require 'extensions/gutentag'

Gutentag.normaliser = lambda { |value| value.to_s }

Rails.application.config.to_prepare do
  begin
    if ActiveRecord::Base.connection.table_exists?(:gutentag_tags)
      Gutentag::Tag.include Extensions::Gutentag
    end
  rescue ActiveRecord::NoDatabaseError
    # the database has not been created yet, e.g. during rails db:create
  end
end

The extension file is required on line 3.

Line 5 is one of the original Gutentag configuration options. It is specific to my project and trivial in relation to the topic of this article. I am leaving it here to show that other Gutentag configuration changes will co-exist with this custom module of mine.

Line 10 is the main line of code to execute. It adds the module to the Gutentag::Tag model, which is defined inside the source code of the Gutentag gem. However, as you can see, it is wrapped in a number of guards. Not doing so will result in errors.

Here is why.

As we are going to involve the ActiveRecord and ActiveSupport railties, which have not been initialized yet during the default rails initialization phase, we need to ensure we run the code after they have been loaded.

Rails has 5 initialization events. The first initialization event to fire off after all railties are loaded is to_prepare, hence we run the code inside its block.

Since we are interacting with an ActiveRecord model during the initialization phase, it is possible that the table has not been created. In other words, the Gutentag tables migration has not been executed, resulting in errors about the table not existing. An if conditional check is done to prevent this error.

I am not handling the else condition, as under normal circumstances, after the proper migration has been executed, this will not happen. A possible scenario where it would happen is during rake tasks that create or migrate the database, neither of which uses the new methods at all.

A non-existing table is not the only thing we have to guard against when dealing with railties during the initialization process. A non-existing database is also a probable scenario, for example during the rails db:create step. Hence, we rescue the ActiveRecord::NoDatabaseError error to silence it. As this is often the only scenario that will happen, I do not handle the exception further in the rescue block.


Now we can use it in our application. For instance, I can seed some default tags with images attached to them as shown:

# db/seeds.rb
# frozen_string_literal: true

p 'Creating Tags'
%w[...].each do |name|
  tag = Gutentag::Tag.create!(name: name)
  tag.image.attach(
    io: File.open("#{Rails.root.join('app', 'assets', 'images')}/#{name}_avatar.jpg"),
    filename: "#{name}_image.jpg"
  )
end
p 'Tags created'

Then in my api response for listing the tags, I can use the json_attributes method as such:

# app/controllers/api/v1/tags_controller.rb
# frozen_string_literal: true

module Api
  module V1
    class TagsController < Api::BaseController
      def index
        @tags = Gutentag::Tag.order(:name)
      end
    end
  end
end

# app/views/api/v1/tags/index.json.jbuilder
json.tags do
  json.array! @tags do |tag|
    json.merge! tag.json_attributes
  end
end
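
For illustration, a response from this index endpoint would then take a shape along these lines. The tag names and the url below are made up, and the exact url depends on the storage service in use:

```json
{
  "tags": [
    { "name": "nature", "image": "http://localhost:3000/rails/active_storage/blobs/..." },
    { "name": "city", "image": null }
  ]
}
```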

How To Change Or Add New SSH Key for EC2

This is a documentation of how to change or add a new ssh key for your EC2 instance if you have lost, and maybe compromised, your private key.

The gist of it is to add a new key pair to the disk volume of the EC2 instance. Pretty straightforward! But how can you do that when you are unable to ssh into the EC2 instance without the private key you just lost? You will need to attach the root volume of the EC2 instance to another temporary EC2 instance, which you can access with a new key pair, and add the new key pair to the original volume from there.

Summon the NewKeyPair!

First, create a new key pair. You can either generate a private and public key pair on your own and import the public one into the AWS console, or create it from the AWS management console and download the private key that they generated for you thereafter. Should you go for the latter, make sure your browser is not blocking the download.

Blocked download | vic-l

For the rest of the article, the new key pair will be referred to as NewKeyPair, and the old key pair LostKeyPair.

Retire The Veteran

Stop your old instance. Do not terminate!

NOTE: Your instance root volume needs to be EBS backed and not instance store, as instance store volumes are ephemeral. They do not persist data after power down.

Once it has successfully stopped, you will realise that its volume remains attached. That’s EBS for you!

We will come to detaching it in a while. For now, spin up a new server.

Katon: Summon-The-New-Server-Jutsu

Launch a new server with the NewKeyPair. This is a temporary server and can be any linux distribution.

Detach The Old Volume

In the volumes page, select the old instance volume and select Detach as shown. There should be no error unless your old instance is still in the process of shutting down.

Detach old EBS Volume | vic-l

Once it is detached, you will observe that its status has changed to available and its Attachment Information will become blank. Now it is freeee! Time to attach it to the new server and receive its new key pair.

Attach To New Instance

Attach the root volume to the new instance as shown.

Then select the device to attach it to.

attach volume device | vic-l

I will set /dev/sdf as suggested. The other devices are reserved for the root volume (/dev/sda) and instance store volumes (/dev/sd[b-e]). More information on device naming in AWS EC2 can be found here.

Run the command lsblk to see the new volume mounted. Note that the linux kernel has changed my device name from sdf to xvdf, as noted in the warning callout in the image above.

lsblk | vic-l

Mounting The Volume

You would not be able to use the volume right away after attaching it without mounting it in the system. Mounting tells the EC2 instance how to access this new device via its directory tree, which requires setting up a mount point. Run the commands below.

sudo mkdir /mnt/tempvol
sudo mount /dev/xvdf1 /mnt/tempvol

These commands will mount the root of the device to the directory named /mnt/tempvol. You can change directory into the volume and see that it contains content from your old server.

From the image above, you can see that the authorized_keys file containing the old public key sits in the /home/ubuntu/.ssh directory relative to the mount point. The new public key exists in the /home/ubuntu/.ssh directory in the absolute path, which resides on the root volume of the new instance.

Adding The NewKeyPair To The Old Volume

Eventually, we want to use the new key pair to access the old server, with the content of the old volume, just like the good old times. To do that, add the NewKeyPair to the ssh folder of the old volume.

Realise that this is now possible because we attached and mounted the old volume to a new instance, which we can access thanks to its fresh setup and key pair creation.

I have used the append operation, >>, instead of the overwrite operation, which is a single >. This is not strictly necessary. It is up to you to decide whether to get rid of the old key pair, depending on your situation.

If you lost your old key pair, feel free to overwrite it. There is no point hoarding it, and Marie Kondo can’t help you declutter software.
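
The append step itself can be sketched as a small shell function. The paths in the comments are assumptions based on the mount point used earlier:

```shell
# Sketch of the key-copying step, run on the temporary instance.
# new_keys: the temporary instance's own key file,
#   e.g. /home/ubuntu/.ssh/authorized_keys
# old_keys: the file on the mounted old volume,
#   e.g. /mnt/tempvol/home/ubuntu/.ssh/authorized_keys
append_new_key() {
  new_keys="$1"
  old_keys="$2"
  # >> appends, keeping any existing keys; a single > would overwrite them instead
  cat "$new_keys" >> "$old_keys"
}
```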

Attaching The Volume Back

Next, you can shut down your new server and attach your old volume back to the old instance. Remember, the EC2 instance will not be deleted and is still available if you chose stop instead of terminate when shutting it down initially.

attach volume back | vic-l
Attach the volume, this time, to /dev/sda1.
attach to sda1 | vic-l

The reason to attach it at /dev/sda1 is that we need to give the instance back its root volume for its boot operations. If we were to attach it to another device, you will see this error when starting the server, because no root volume is detected.

error starting old instance | vic-l

Back To The Past

Now you can try to connect to your old instance after it has started up.

NOTE: you will see that the connection instruction still mentions the LostKeyPair. Even if you have overwritten your ssh key pair with the new one, this wrong instruction will persist. Of course, you should connect with your NewKeyPair.

Connect to old instance again | vic-l

To ascertain that you have the old public keys, head down to the ~/.ssh directory and see that the changes you made on the new instance, via attaching and mounting the old instance’s volume, have persisted.

Now, we have successfully added a new key pair to the old instance, and we can use it to ssh into the old instance from now on, even though we had lost the original key pair that was used to create it.

Implementing Eslint In Sublime With Airbnb Style Guide

This is a documentation of how to set up eslinting on the Sublime Text editor, bootstrapped with the style guide set up by Airbnb.

Why Is Eslinting Necessary

Linting your javascript code catches syntax errors, and possibly some runtime errors, while you are coding. It helps you debug faster, putting things like a missing ; or wrong closures out of the way of the work. No more rolling your eyes and reducing your lifespan over things that shouldn’t matter.

eyeroll | vic-l

It will also be helpful if we are working together with other developers. It helps to keep the flavor of the JavaScript code the same across the code base, which can speed up development work and make it enjoyable, at the very least.

Ok, maybe a teeny weeny bit better since we are talking about JavaScript here.

javascript crying | vic-l

Why Airbnb Style Guide

Linting can also be a good teacher. We can follow style guides, or more technically eslint configurations, set up by other senpais with telling experience or hailing from reputable tech companies, and when we write code that does not adhere to what they deem best practice, we can learn why and change accordingly.

Google, for instance, has its own JavaScript style guide, which comes with a shareable eslint config that we can just plug and play into our codebase.

However, I would personally suggest Airbnb’s style guide because of its well-documented reasons for implementing each rule. Its style guide page states the reason for enforcing certain rules, and gives examples of good and bad code. Here is an image to better explain this point.

Airbnb javascript style guide | vic-l

This is an accumulation of a wealth of knowledge from brilliant programmers that is made easily accessible to us.

Sometimes, we may even learn some quirks of JavaScript that we never knew existed because we have not experienced the problem before. It is like time traveling to the future by leveraging the past lessons of those before us.

Installing eslinting

Start off by installing the packages required for eslinting with Airbnb style.

npm i -D eslint babel-eslint eslint-config-airbnb eslint-plugin-import eslint-plugin-react eslint-plugin-react-hooks eslint-plugin-jsx-a11y

I will attempt to explain what each part does.

-D flag

The -D flag installs the packages as development dependencies for the project at hand. It makes sense to install them under the project instead of globally because the same configurations can be shared with other developers working on the same project. And they are installed only as development dependencies, as they are only required during development work.


eslint

The eslint package is the main package that will handle the linting of Vanilla JavaScript, and Vanilla JavaScript only. It is crucial, for the case of JavaScript, to understand the significance of Vanilla JavaScript, and that will require a bit of a history lesson.


babel-eslint

JavaScript is a fairly senior language. It is one of the pioneers of computing languages, which meant that it inevitably made some mistakes in its syntax design. Over the years, people, or should I say geniuses, were unhappy about it and have come up with more efficient ways to write it.

CoffeeScript is one example. Take a JavaScript function that looks like this:

var greet = function(name) {
  return console.log(`Hello, ${name}`);
};

CoffeeScript and its gang of caffeine addicts decided to shrink the amount of code required with the use of indentations. The same definition can be written in CoffeeScript like this:

greet = (name) ->
  console.log "Hello, #{name}"

TypeScript is another example, and its side of the camp stresses the need for JavaScript variables to be type safe, the lack of which has caused many JavaScript errors throughout its history. The same function defined in TypeScript looks like this:

var greet = function (name: string): void {
  console.log(`Hello ${name}`);
};

The official standard for JavaScript, however, is the ECMAScript specification, or ES for short (hope that answers the naming of the eslint package), and the de facto compiler for its newer syntax is Babel. The specification has undergone multiple improvements over the years and new standards were iterated. Babel has evolved with it. The same function can be written with Babel as such:

var greet = (name) => {
  console.log(`Hello ${name}`);
};

All these various compilers mean that we need a different set of linting rules depending on the compiler you are using, to catch the corresponding syntax errors. The Airbnb style guide uses Babel and follows the official modern JavaScript syntax. Hence we are installing the babel-eslint package.


eslint-config-airbnb

This package consists of all the rules that the engineers at Airbnb have deemed best practices and which their company follows.


eslint-plugin-import

Next is the eslint-plugin-import package, which supports linting of file imports using ECMAScript. While the previously mentioned packages watch out for syntax errors in your code, this package looks out for erroneous file imports. These errors may be due to forgetting to export modules that exist in another file, or a wrong spelling in the file name to be imported.

The Other Packages

The other packages are react specific. Airbnb uses react as the main JavaScript framework for their front end. Hence their default linting config requires the remaining packages to work, namely eslint-plugin-react, eslint-plugin-react-hooks, and eslint-plugin-jsx-a11y. Without them, you will get missing dependency errors.

Note that it also requires the eslint and eslint-plugin-import packages to work, but since they are not specific to react and are good-to-have tools for development work, I explained a little more about them above.

If you are not using react, but still want to use the Airbnb style guide, you will be interested in the base configurations in the eslint-config-airbnb-base package.

The .eslintrc

What we have done so far is only download the linting packages. To utilise them, some configurations need to be set up, and I will be doing so using .eslintrc. There are a number of different ways to set up the configurations.

Write the .eslintrc as such:

{
  "parser": "babel-eslint",
  "extends": "airbnb",
  "rules": {
    "camelcase": "off"
  }
}

The parser option specifies that we are going to write the latest ECMAScript syntax compiled with Babel, which is what the rules in the Airbnb configuration expect. More parser options can be found here.

The extends option tells the linter to use the rules set up by the Airbnb configuration. The rules will be empty if this is not specified, and you will essentially be writing code with the recommended settings of the babel-eslint package.

The rules option allows you to overwrite rules that might not fit your workflow. I added the example for ignoring warnings on non camel case variables, which I am using right now. I do not follow the camel case practice when naming JavaScript variables because I am working with a Ruby On Rails backend. The Rails framework advocates snake case variable naming, which is different from that of JavaScript. Hence, I decided to keep the same casing for the variable naming so that it is easy to receive data from the API responses, as well as to package requests to send to the backend.
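
As an aside, since there are several ways to supply the configuration, the same settings can also live in package.json under the eslintConfig key instead of a separate .eslintrc file:

```json
{
  "eslintConfig": {
    "parser": "babel-eslint",
    "extends": "airbnb",
    "rules": {
      "camelcase": "off"
    }
  }
}
```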

Text Editor Linter

Lastly, we need a package on the text editor you are using to read the eslint packages and configurations, and highlight the relevant syntax errors accordingly. For the Sublime Text editor, these are the SublimeLinter package and its SublimeLinter-eslint plugin.

The steps to install Sublime Text packages can be easily achieved with the Sublime Package Control.

Restarting Sublime Text

The last step is to restart your Sublime Text editor. Open a file with the js extension, and start writing some erroneous code to see the linting in effect!

Terraform With Docker

This is a documentation of how to use Terraform with Docker to provision cloud resources, mainly using AWS as the provider. It contains tips on certain practices that I personally deem best practices, for various reasons.

It will revolve around these 3 commands:

docker run -v `pwd`:/workspace -w /workspace hashicorp/terraform:0.12.9 init
docker run -v `pwd`:/workspace -w /workspace hashicorp/terraform:0.12.9 apply
docker run -v `pwd`:/workspace -w /workspace hashicorp/terraform:0.12.9 destroy

The Terraform image comes with the entrypoint command terraform, so we append the subcommands init, apply and destroy respectively.

The Flags

The most straightforward way to run Terraform on docker is to do a docker run with a volume mount connecting the directory where the terraform files are to the working directory in the docker container. Assuming the current working directory is where the files are, we can simply run the command.

The -v option mounts your current working directory into the container’s /workspace directory.

The -w flag creates the /workspace directory and sets it as the new working directory, overwriting the terraform image’s original.

To verify this, we can run the command below and see that the current working directory in the container is in fact /workspace.

docker run -v `pwd`:/workspace -w /workspace --entrypoint /bin/sh hashicorp/terraform:0.12.9 -c pwd

Over here we are overwriting the default entrypoint command of Terraform to run a shell command.
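
Typing the full docker run invocation for every subcommand gets tedious quickly. A small wrapper function helps; the name tf is my own choice, and the image tag is pinned to the same version used throughout this article:

```shell
# Hypothetical convenience wrapper around the dockerized terraform binary.
# Mounts the current directory as /workspace and forwards all arguments.
tf() {
  docker run -v "$(pwd)":/workspace -w /workspace hashicorp/terraform:0.12.9 "$@"
}
```

With this in place, the three commands above reduce to tf init, tf apply and tf destroy.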

terraform init

The init command will download the necessary provider files and modules required for the execution to the working directory in the container. And due to the volume mount, the files will be reflected in the current working directory on the local machine. The files would be downloaded to the folder .terraform.

It is paramount to have these files downloaded to the current directory because on subsequent runs they will not need to be downloaded again, since they are persisted and available.

terraform apply

apply is the command to deploy the resources to the cloud.

If you do not use a Terraform backend, the tfstate file that holds all the information for the provisioning will be written to the working directory, and in turn to the current directory.

If you do use a Terraform backend, there will be no tfstate file written locally. It will be written to the backend that you specified. However, in the event that a Terraform deployment fails, you will have an errored.tfstate file written to the working directory. This errored.tfstate file is extremely important for keeping track of the state of the provisioned environment in the event of failures.

A possible scenario is losing connectivity. I encountered that when I was traveling in some remote areas of Brazil. The volume mount saved my life. I am pretty surprised that there is not much documentation on this in the official terraform docs. I tried googling the query below but there is no result.

site: "errored.tfstate"

Without the errored.tfstate file, undesirable duplicate resources may be created on subsequent deployments. In other cases, subsequent deployment itself might fail due to having resources of the same identification that is prohibited in AWS, which otherwise would not have occurred due to the out-of-sync state.

To update the state, we can run the command

docker run -v `pwd`:/workspace -w /workspace hashicorp/terraform:0.12.9 state push errored.tfstate

Apart from the errored.tfstate file, the tf log file that you specify will also be written, which may be used for debugging your terraform deployments.
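
To actually get that log file out of the container, Terraform's TF_LOG and TF_LOG_PATH environment variables have to be passed in explicitly, with the log path pointed into the mounted volume. A sketch, where the log level and file name are my own picks:

```shell
# Build the dockerized terraform command with debug logging enabled.
# TF_LOG sets verbosity (TRACE, DEBUG, INFO, WARN or ERROR); TF_LOG_PATH points
# the log file at /workspace so it lands in the mounted local directory.
TF_DEBUG_CMD="docker run -e TF_LOG=DEBUG -e TF_LOG_PATH=/workspace/terraform.log -v $(pwd):/workspace -w /workspace hashicorp/terraform:0.12.9"
# then run, for example: $TF_DEBUG_CMD apply
```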

terraform destroy

Lastly, destroy is the command to remove the resources from the cloud.