PHANTOM DEV

Pushing your Go app to production

I recently built Pulse Loyalty, a loyalty API for developers. The idea is to help app developers (mobile, browser extensions, etc.) quickly integrate some form of a loyalty program for their users.

I decided to build the app using Go, with React for the frontend. I made a couple of posts on the decisions I made and the tools I chose while developing the product. A few days back I pushed the app into production, taking it into a semi-live preview mode for app developers to test out.

I decided to deploy the app to Vultr (referral link with free credits), purely because it gave me free credits on sign-up and let me see how the product turned out with little investment on my end.

I went with a single-host Docker deployment using docker-compose. I didn't want to make the deployment process overly complicated. The docker-compose file spins up MySQL and runs the app using the image I pushed to my private Docker Hub registry.

I initially wanted to serve the React frontend using the Go web server, which is pretty straightforward and works. The downside was that I wanted to split the domains: pulseloyalty.com for the static site and api.pulseloyalty.com for the APIs. I also wanted SSL configured. Doing all of that in the Go server seemed like too much work, so I decided to quickly pull in Caddy. Caddy is an interesting product; I first came across it during one of the PHP conference talks by Kai Hendry, and it was pretty insane. They have since released v2.

With all of this in hand, my Caddyfile looked like:


api.pulseloyalty.com {
    reverse_proxy * localhost:8011
}

pulseloyalty.com {
    root * /home/kaboom/web/web/build
    try_files {path} /index.html
    file_server
}

With this in hand, a simple caddy start gets the server running and provisions SSL certificates via Let's Encrypt. You need to make sure port 80 is open for the provisioning to happen without a problem.

One of the problems I came across, which I didn't anticipate, was running migrations in production. Since I launched the MySQL server on the private network and it isn't exposed to the host, to work around this I would spin up a temporary container with mysql-client, connect it to the same network, and run the migrations from there. I was using goose for this. It was not that bad, but I can improve it.

{% github github.com/pressly/goose %}
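The temporary-container dance looks roughly like this. This is a sketch, not my exact command: the network name (compose derives it from the project directory), the goose image, and the credentials in the DSN are all placeholders.

```shell
# Run goose from a throwaway container attached to the compose network.
# "myapp_api" is the compose-generated network name; the image and the
# user/pass/db values in the DSN are placeholders -- substitute your own.
docker run --rm \
  --network myapp_api \
  -v "$(pwd)/migrations:/migrations" \
  ghcr.io/pressly/goose \
  -dir /migrations mysql "user:pass@tcp(api-mysql:3306)/app?parseTime=true" up
```

Because the container joins the same network, it can reach MySQL by its container name (api-mysql) even though no port is published to the host.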

The docker-compose file I used looked something like this:

version: "3.1"

networks:
  api:

services:
  db__service:
    image: mysql:8.0
    container_name: api-mysql
    restart: unless-stopped
    tty: true
    environment:
      MYSQL_USER: *****
      MYSQL_PASSWORD: *****
      MYSQL_ROOT_PASSWORD: ****
      SERVICE_TAGS: dev
      SERVICE_NAME: api-mysql
    volumes:
      - data-volume:/var/lib/mysql
    networks:
      - api
  adminer:
    image: adminer:latest
    container_name: db_client
    ports:
      - "9090:8080"
    networks:
      - api
  api__service:
    image: gayanhewa/pulseloyalty
    container_name: api-service
    ports:
      - "8080:8080"
    depends_on:
      - db__service
    networks:
      - api
volumes:
  data-volume:

I decided to hook up Adminer so I can spin it up and down on demand and access the DB using a GUI :D
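Spinning it up and down on demand is just a matter of targeting that one service from the compose file above:

```shell
# Bring only the Adminer service up, then stop it again when done.
docker-compose up -d adminer
docker-compose stop adminer
```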

To sum it up, the experience was fine, not out of this world. But that's because I don't have the tooling in place. I found this much faster and easier than having to spin up a LAMP stack. I'd guess the experience is much the same for anyone doing a single-host Docker deployment.

A few things that caught me off guard:

  1. docker-compose and managing secrets for production is a pain. I am still digging around; Docker Swarm has native integration with secrets management. Maybe docker-compose was not the best choice for this.

  2. Vultr servers block all ports by default except 22. There is the ability to manage the firewall using the UI, but for some reason the changes didn't propagate to my server. I had to open port 80 using Uncomplicated Firewall (ufw) on the server itself.
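For reference, the Swarm-native secrets support mentioned in point 1 looks roughly like this. This is a sketch rather than my actual setup: it deploys with docker stack deploy instead of docker-compose up, and the file path and secret name are hypothetical. The _FILE environment variable convention is supported by the official mysql image.

```
version: "3.1"
services:
  db:
    image: mysql:8.0
    environment:
      # The official mysql image reads the root password from this file,
      # which Swarm mounts from the named secret.
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password
secrets:
  db_root_password:
    file: ./db_root_password.txt
```

This keeps the password out of the compose file and out of the container's environment, which is exactly the gap I hit with plain docker-compose.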

To conclude, this is just my experience deploying Go web apps on a less complicated setup. For some reason it felt much more difficult than pushing to k8s.
