<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Erik Minkel]]></title><description><![CDATA[On entrepreneurship and web development.]]></description><link>https://www.erikminkel.com/</link><image><url>https://www.erikminkel.com/favicon.png</url><title>Erik Minkel</title><link>https://www.erikminkel.com/</link></image><generator>Ghost 5.69</generator><lastBuildDate>Tue, 28 Apr 2026 12:54:58 GMT</lastBuildDate><atom:link href="https://www.erikminkel.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Deploy to Mac mini with Kamal]]></title><description><![CDATA[Deploy your Ruby on Rails applications with Kamal to a Mac mini.]]></description><link>https://www.erikminkel.com/2026/01/27/deploy-to-mac-mini-with-kamal/</link><guid isPermaLink="false">697941a549a100a673f4decd</guid><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[kamal]]></category><category><![CDATA[macos]]></category><category><![CDATA[tahoe]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Tue, 27 Jan 2026 23:03:42 GMT</pubDate><content:encoded><![CDATA[<p>If you&apos;re a developer, or on a team with a bunch of developers, you might have some old hardware lying around that&apos;s still useful. In my case, I&apos;ve demoted my old M2 Mac mini to my &quot;server&quot; closet. It&apos;s been lightly used at best since I put it in there, but recently I decided to push some play projects to it.<br><br>That took me down the path of deploying an application directly to it via Kamal. To set the stage: it&apos;s an M2 Pro with a fair amount of RAM, and I recently installed the hideous Tahoe upgrade and enabled remote SSH access (that&apos;s honestly the only reason I installed Tahoe). 
<br><br>It&apos;s running zsh and OrbStack (as a Docker replacement).<br><br>Some minor things I ran into:<br><br>Kamal runs ssh in non-interactive mode, as it should, so the <code>kamal setup</code> command failed with &quot;docker not available&quot;.<br><br>To fix this, I added a ~/.zshenv file with <code>export PATH=/usr/local/bin:$PATH</code>; this puts docker on the PATH and makes it accessible. </p><p>Then I hit issues with the <code>cp</code> command, which I found covered in an older Kamal repository issue. The macOS and Linux flavors of these utilities differ. To get around the error, we install <code>coreutils</code> with Homebrew. (I understand there are some intricacies with the GNU tools; however, this is purely a sandbox.) Then update ~/.zshenv with another line: <code>export PATH=/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH</code><br><br>After overcoming these two small issues, I was able to successfully deploy (a relatively simple app) in about 10 seconds.</p>]]></content:encoded></item><item><title><![CDATA[Production SQLite powered by Litestream with Rails 8]]></title><description><![CDATA[<p>Let&apos;s get into it. Ruby on Rails 8 and SQLite in production. Words that only a couple of years ago would have gotten you laughed out of the room. I&apos;ve been leaning on SQLite since 2018, with a lot of folks doing great work in the Ruby community around</p>]]></description><link>https://www.erikminkel.com/2025/12/31/production-sqlite-powered-by-litestream-with-rails-8/</link><guid isPermaLink="false">69558b9f49a100a673f4de92</guid><category><![CDATA[rails 8]]></category><category><![CDATA[kamal]]></category><category><![CDATA[sqlite]]></category><category><![CDATA[litestream]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Wed, 31 Dec 2025 21:12:44 GMT</pubDate><content:encoded><![CDATA[<p>Let&apos;s get into it. Ruby on Rails 8 and SQLite in production. Words that only a couple of years ago would have gotten you laughed out of the room. 
I&apos;ve been leaning on SQLite since 2018, with a lot of folks doing great work in the Ruby community around it.<br><br>Here&apos;s a plan for redundancy utilizing Kamal and <a href="https://litestream.io/reference?ref=erikminkel.com" rel="noreferrer">Litestream</a>. If you haven&apos;t heard of Litestream before, it&apos;s a tool that lets you continuously replicate your SQLite database. It comes with batteries-included commands to replicate and restore!<br><br>Follow one of their excellent sources of documentation to replicate to your preferred service. I&apos;m using S3 (<a href="https://litestream.io/guides/s3/?ref=erikminkel.com">https://litestream.io/guides/s3/</a>).</p><p>As far as Kamal goes, I&apos;m using an accessory service that runs Litestream&apos;s provided Docker container (<a href="https://hub.docker.com/r/litestream/litestream?ref=erikminkel.com">https://hub.docker.com/r/litestream/litestream</a>).</p><p>You&apos;ll want to add a <code>config/litestream.yml</code> file to your Rails application with the following contents; the keys are interpolated by Litestream at runtime, so we need to ensure they&apos;re included in our secrets (<a href="https://litestream.io/reference/config/?ref=erikminkel.com#auto-read-environment-variables">https://litestream.io/reference/config/#auto-read-environment-variables</a>):</p><pre><code>dbs:
  - path: /data/production.sqlite3
    replica:
      type: s3
      path: db # this is the path in your bucket
      bucket: your-bucket
      region: your-region
      access-key-id: ${AWS_ACCESS_KEY_ID}
      secret-access-key: ${AWS_SECRET_ACCESS_KEY}
</code></pre>
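<p>Before wiring this into Kamal, you can sanity-check the config by running the same image by hand. This is only a sketch: the volume name, mounted paths, and exported AWS variables are assumptions mirroring the config above, and the image is assumed to read <code>/etc/litestream.yml</code> by default.</p><pre><code class="language-bash">$ docker run --rm \
    -v &quot;$PWD/config/litestream.yml:/etc/litestream.yml&quot; \
    -v &quot;storage:/data&quot; \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
    litestream/litestream replicate</code></pre><p>If the credentials and bucket are right, replication log lines should appear almost immediately.</p>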
<p>Once we have this set up, we can move on to the <code>config/deploy.yml</code> for Kamal:</p><pre><code>volumes:
  - &quot;storage:/rails/storage&quot;
  # - /host:/path/on/container
  
accessories:
  litestream:
    host: host-ip
    image: litestream/litestream
    env:
      secret:
        - LITESTREAM_ACCESS_KEY_ID
        - LITESTREAM_SECRET_ACCESS_KEY
    cmd: &quot;replicate&quot;
    files:
      - config/litestream.yml:/etc/litestream.yml
    volumes:
      - &quot;storage:/data&quot;
</code></pre>
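<p>Kamal resolves the two secret names above from your local secrets store. Assuming Kamal 2&apos;s <code>.kamal/secrets</code> file, the mapping might look like the sketch below; adapt it to however you actually manage secrets. (Litestream also reads the <code>LITESTREAM_*</code> variables natively as default S3 credentials, per the reference linked earlier.)</p><pre><code># .kamal/secrets
LITESTREAM_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
LITESTREAM_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY</code></pre>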
<p>I&apos;ve included the application&apos;s overall volume setup in the Kamal file because it matters here: the accessory needs that volume to exist, and it mounts the same <code>storage</code> volume at <code>/data</code> for the Litestream container.<br><br>Once you have these committed and run <code>kamal accessory reboot litestream</code>, you should be able to <code>tail</code> the Litestream container&apos;s logs and see the connection and replication to your S3 bucket.</p><pre><code>time=2025-12-31T00:00:00.380Z level=INFO msg=&quot;snapshot complete&quot; txid=000000000000001e size=1170108
time=2025-12-31T03:15:00.226Z level=INFO msg=&quot;compaction complete&quot; level=1 txid.min=000000000000001f txid.max=0000000000000020 size=368
time=2025-12-31T03:15:00.299Z level=INFO msg=&quot;compaction complete&quot; level=2 txid.min=000000000000001f txid.max=0000000000000020 size=368
time=2025-12-31T04:00:00.215Z level=INFO msg=&quot;compaction complete&quot; level=3 txid.min=000000000000001f txid.max=0000000000000020 size=368
time=2025-12-31T04:38:30.231Z level=INFO msg=&quot;compaction complete&quot; level=1 txid.min=0000000000000021 txid.max=0000000000000022 size=1121
</code></pre>
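<p>Should you ever need to recover, Litestream&apos;s restore command pulls the database back down from the replica. A rough sketch, run from inside the accessory container (<code>/data/restored.sqlite3</code> is a hypothetical output path; verify the flags against the Litestream version you&apos;re running):</p><pre><code class="language-bash">$ litestream restore -config /etc/litestream.yml -o /data/restored.sqlite3 /data/production.sqlite3</code></pre>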
]]></content:encoded></item><item><title><![CDATA[Easy Postgres backups]]></title><description><![CDATA[How to use Kamal to back up and restore your Postgres database]]></description><link>https://www.erikminkel.com/2024/06/24/easy-postgres-backups/</link><guid isPermaLink="false">663089681d421e2453fd79ad</guid><category><![CDATA[kamal]]></category><category><![CDATA[Ruby on Rails]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Mon, 24 Jun 2024 19:18:57 GMT</pubDate><content:encoded><![CDATA[<p>An often overlooked area when you&apos;re standing up your application infrastructure is what to do in the face of disaster. How will you recover from data loss? What can you do now that your future self will thank you for later? In this post, I&apos;ll cover a way to help mitigate loss and aid in getting back online quickly. Kamal helps you stand up the majority of your application, fast. You can do everything in this post with a small VPS.<br><br>This post will go over two different ways to manage PostgreSQL backups with Kamal. Your recovery ends up being a simple <code>psql</code> command run from your database container.</p><p>Both of the Docker projects we&apos;ll be taking advantage of take a similar approach: run on a schedule to create a backup. Taking it a bit further with Docker volumes and networks gives us an easier restoration in the case of database loss.</p><h3 id="example-1-local-backups">Example 1: Local Backups</h3><p>In the first example, we&apos;ll utilize a Docker image (<a href="https://github.com/prodrigestivill/docker-postgres-backup-local?ref=erikminkel.com" rel="noreferrer">postgres-backup-local</a>) that backs up on your schedule and keeps the backups on your network share. When keeping files and data around in Docker, it&apos;s important to work with volumes. 
If you&apos;re connecting to a file share locally, a good starting point is the Docker documentation on <a href="https://docs.docker.com/storage/volumes/?ref=erikminkel.com#use-a-volume-driver" rel="noreferrer">volume drivers</a>.</p><p>Note: you may need to install <code>nfs-common</code> on your virtual machine if you&apos;re using Linux.</p><p>Once you&apos;ve figured out where you need to connect, and you have the proper credentials and networking path aligned, you can create a new Docker volume similar to the one below. This example creates an NFS Docker volume with the name <strong>volume-name.</strong></p><pre><code class="language-bash">$ docker volume create --driver local --opt type=nfs --opt o=addr=IP-ADDRESS,rw --opt device=:/fileshare/path volume-name</code></pre><p>In your Kamal <strong>deploy.yml</strong> configuration file, you&apos;ll want to add a section under the accessories key, with your desired name. I&apos;ve configured this accessory as <strong>db-backups</strong>.</p><pre><code class="language-yaml">accessories:
  db-backups:
    image: prodrigestivill/postgres-backup-local
    roles:
      - web
    env:
      clear:
        SCHEDULE: &apos;@daily&apos;
        POSTGRES_USER: postgres
        BACKUP_KEEP_DAYS: 7
        BACKUP_KEEP_WEEKS: 4
        BACKUP_KEEP_MONTHS: 6
        POSTGRES_DB: my-db
        POSTGRES_HOST: local-ip
      secret:
        - POSTGRES_PASSWORD
    volumes:
      - volume-name:/backups
    options:
      &quot;user&quot;: &quot;postgres:postgres&quot;</code></pre><p>Hint: a few items to point out in the config above. We&apos;re setting our volume to be mounted at the path /backups inside the container running our accessory. Your POSTGRES_USER should be set to whatever your database user is for the environment you&apos;re backing up.</p><p>You&apos;ll need to push your newly configured environment variables to the server.</p><pre><code class="language-bash">$ kamal env push</code></pre><p>Boot the db-backups accessory. The result will be a container you can shell into, running backups on the schedule defined in your ENV variables.</p><pre><code class="language-bash">$ kamal accessory boot db-backups</code></pre><p>You should immediately see a file in the /backups/last directory, which is the result of the last run of backup.sh (found on the container). Keep in mind that /backups is the volume linked to your network share. If you don&apos;t see a Postgres backup file, double-check that you can successfully connect to the accessory container and run <code>bash backup.sh</code> manually.</p><h3 id="restore-database-from-backup">Restore database from backup</h3><p>To restore the database, simply log in to the accessory that contains the link to the backups. (<code>docker ps</code> to find the container ID)</p><pre><code class="language-bash">$ docker exec --tty --interactive $CONTAINER_ID /bin/sh -c &quot;zcat /backups/last/$MY_DB-latest.sql.gz | psql --username=postgres --dbname=$MY_DB -W&quot;</code></pre><p>You can also mount the volume we created earlier for backups on your Postgres database accessory. 
Then you can run the restore command above from your Postgres container.</p><h2 id="example-2-cloud-backups">Example 2: Cloud Backups</h2><p>In the second example, we&apos;ll utilize an object storage volume and the Docker image (<a href="https://github.com/eeshugerman/postgres-backup-s3?ref=erikminkel.com" rel="noreferrer">eeshugerman/postgres-backup-s3</a>) to take backups on a schedule and push them to S3. This project has some nice features, so be sure to check out the linked GitHub repository.</p><p>In your <strong>deploy.yml </strong>configuration file, we&apos;ll want to create a new key that describes our S3 Postgres backup. </p><pre><code class="language-deploy.yml">accessories:
  s3-pgbackup:
    image: eeshugerman/postgres-backup-s3:15
    roles:
      - web
    env:
      clear:
        SCHEDULE: &apos;@midnight&apos;
        BACKUP_KEEP_DAYS: 14
      secret:
        - S3_REGION
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
        - S3_BUCKET
        - S3_PREFIX
        - POSTGRES_HOST
        - POSTGRES_DATABASE
        - POSTGRES_USER
        - POSTGRES_PASSWORD</code></pre><p>You&apos;ll need to have set secret environment variables for S3 and Postgres and push those via Kamal:</p><pre><code class="language-bash">$ kamal env push</code></pre><p>In the deploy.yml example above, you can see I have the image tagged to version 15 of Postgres; you can use any version from 12 to 16. We&apos;re explicitly telling this accessory that it will operate on our web server and sending the schedule and some rules along with it. Check the <a href="https://pkg.go.dev/github.com/robfig/cron?ref=erikminkel.com#hdr-Predefined_schedules" rel="noreferrer">documentation</a> for how the SCHEDULE variable works. </p><p>Boot the accessory.</p><pre><code class="language-bash">$ kamal accessory boot s3-pgbackup</code></pre><p>I have not had the chance to dive into <a href="https://docs.docker.com/storage/volumes/?ref=erikminkel.com#block-storage-devices" rel="noreferrer">adding a block storage device to Docker</a> yet. I&apos;m sure it&apos;s possible, which would let the restore work similarly to the first example. (I&apos;ll revisit and update the post if I do.)</p><p>Set a reminder to double-check that the container is running backups on schedule at your S3 location.</p><p>That&apos;s all for now!</p>]]></content:encoded></item><item><title><![CDATA[Deploy an app with SQLite, ActiveStorage and Kamal]]></title><description><![CDATA[<p>SQLite as a single or small development team database for Ruby on Rails projects ended the year as all the rage. 
For in-depth action in this area, follow <a href="https://fractaledmind.github.io/?ref=erikminkel.com" rel="noreferrer">Stephen Margheim</a>; he&apos;s doing a TON of work for the Ruby on Rails community in this realm.</p><p>This article</p>]]></description><link>https://www.erikminkel.com/2024/01/04/deploy-an-app-kamal-sqlite-activestorage/</link><guid isPermaLink="false">6595ab6a1d421e2453fd7934</guid><category><![CDATA[kamal]]></category><category><![CDATA[activestorage]]></category><category><![CDATA[rails]]></category><category><![CDATA[sqlite]]></category><category><![CDATA[local storage]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:00:52 GMT</pubDate><content:encoded><![CDATA[<p>SQLite as a single or small development team database for Ruby on Rails projects ended the year as all the rage. For in-depth action in this area, follow <a href="https://fractaledmind.github.io/?ref=erikminkel.com" rel="noreferrer">Stephen Margheim</a>; he&apos;s doing a TON of work for the Ruby on Rails community in this realm.</p><p>This article is meant as a reminder for myself, but I figured it may help someone out on the internet. I recently updated an old project where I utilized SQLite in production and configured it for deployment with <a href="https://kamal-deploy.org/?ref=erikminkel.com" rel="noreferrer">Kamal</a>.</p><p>This posed a few different problems that anyone with this setup will need to solve for:</p><ul><li>Persisting the SQLite file on the host machine, so we can restart, destroy, or rebuild the container and still have our data.</li><li>Connecting ActiveStorage to a long-lived file system that, again, persists across container restarts, rebuilds, etc.</li></ul><p>First, SSH into your target VM on whatever VPS provider you prefer. We&apos;ll create two directories at the top level of the file system. (You can place them wherever you&apos;d like.) 
I put them here for easy access later.</p><pre><code class="language-sh">$ mkdir /db</code></pre><pre><code class="language-sh">$ mkdir /storage</code></pre><p>Then we&apos;ll need to change the permissions of these directories so the user within our container can access them. <a href="https://medium.com/@nielssj/docker-volumes-and-file-system-permissions-772c1aee23ca?ref=erikminkel.com" rel="noreferrer">You can read more about volumes, Docker, and permissions in containers here</a>. I&apos;ve done this by changing the ownership of the directories to the user with ID 1000, as that&apos;s the <code>rails</code> user in our application&apos;s container. You may also need to modify the permissions on the <code>storage</code> folder with: <code>sudo chmod -R 775 /storage</code></p><pre><code class="language-sh">$ chown 1000:1000 /db /storage</code></pre><p>A simplified Kamal deploy file should look as follows:</p><pre><code class="language-deploy.yml"># Name of your application. Used to uniquely configure containers.
service: web-app

# Name of the container image.
image: my-image

# Deploy to these servers.
servers:
  web:
    hosts:
      - MY_HOST_IP

volumes:
  # host path:container path
  - &quot;/db:/rails/sqlite&quot;
  - &quot;/storage:/rails/storage&quot;

# Credentials for your image host.
registry:
  # Specify the registry server, if you&apos;re not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: kamal

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .env).
# Remember to run `kamal env push` after making changes!
env:
  secret:
    - RAILS_MASTER_KEY


# Configure builder setup.
builder:
  secrets:
    - RAILS_MASTER_KEY</code></pre><p>The main thing to point out here is that we&apos;re using volumes over Kamal&apos;s concept of <a href="https://kamal-deploy.org/docs/configuration?ref=erikminkel.com" rel="noreferrer">files or directories</a>. This means you could put your stage database and your production database in the same folder and mount the container in either environment on the same server if you wanted.</p><p>For data persistence and recovery, you&apos;d want to set up a way to offload your <code>/db</code> and <code>/storage</code> directories.</p><p>The configuration for Active Storage is simple: check that your environment file sets <code>config.active_storage.service = :local</code> and be sure you have an entry in your <code>storage.yml</code> file similar to:</p><pre><code class="language-config/storage.yml">local:
  service: Disk
  root: &lt;%= Rails.root.join(&quot;storage&quot;) %&gt;</code></pre><p>That&apos;s it, <code>kamal deploy</code> to your heart&apos;s content.</p>]]></content:encoded></item><item><title><![CDATA[Deploy your Rails 7 app with Kamal and CI/CD]]></title><description><![CDATA[We'll run through an example utilizing the Kamal gem to deploy our Ruby on Rails application to a server with BitBucket Pipelines.]]></description><link>https://www.erikminkel.com/2023/10/18/deploy-your-rails-7-app-with-kamal-and-ci-cd/</link><guid isPermaLink="false">6530bd3534a7670c7caab549</guid><category><![CDATA[rails]]></category><category><![CDATA[kamal]]></category><category><![CDATA[deployment]]></category><category><![CDATA[github]]></category><category><![CDATA[gitlab]]></category><category><![CDATA[bitbucket]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Wed, 18 Oct 2023 00:25:22 GMT</pubDate><content:encoded><![CDATA[<p>In this post I&apos;ll attempt to cover the process of setting up your Git repository to deploy your Ruby on Rails application with the Kamal deployment tool. Unless you&apos;ve been stuck in full code sprint mode, I&apos;m sure you&apos;ve heard of Kamal by now.</p><p>I&apos;ve used GitLab and BitBucket Pipelines to successfully configure this in the recent past. Your mileage may vary with GitHub, as I haven&apos;t had a need to deploy with it yet. I&apos;ve linked a post below that might help you deploy with Kamal and GitHub Actions. As with most solutions, this is a simple example to get you deployed. 
Your environment, development process and application conditions will vary.</p><figure class="kg-card kg-image-card"><img src="https://www.erikminkel.com/content/images/2023/10/Screenshot-2023-10-17-at-2.23.07-PM.png" class="kg-image" alt loading="lazy" width="2000" height="1109" srcset="https://www.erikminkel.com/content/images/size/w600/2023/10/Screenshot-2023-10-17-at-2.23.07-PM.png 600w, https://www.erikminkel.com/content/images/size/w1000/2023/10/Screenshot-2023-10-17-at-2.23.07-PM.png 1000w, https://www.erikminkel.com/content/images/size/w1600/2023/10/Screenshot-2023-10-17-at-2.23.07-PM.png 1600w, https://www.erikminkel.com/content/images/2023/10/Screenshot-2023-10-17-at-2.23.07-PM.png 2132w" sizes="(min-width: 720px) 720px"></figure><div class="kg-card kg-callout-card kg-callout-card-green"><div class="kg-callout-emoji">&#x1F449;</div><div class="kg-callout-text">What you&apos;ll need:<br>1. A virtual machine instance<br>2. Git repository with your code pushed<br>3. Access to configure the repository<br>4. Kamal configured and pushing to your virtual machine</div></div><h2 id="configure-ssh-keys">Configure SSH Keys</h2><p>Once you have a repository for your code, you&apos;ll need to configure the SSH keys that allow the CI/CD workers access to your server. BitBucket makes this relatively easy: they create an SSH key for you as well as allow you to add a known_hosts entry. Go to <strong>Repository Settings &gt; SSH Keys </strong>from your repository. <a href="https://support.atlassian.com/bitbucket-cloud/docs/set-up-pipelines-ssh-keys-on-linux/?ref=erikminkel.com#Update-the-known-hosts">Follow the instructions on screen.</a> You&apos;ll add the generated public key to the authorized_keys on your server (staging/production).</p><h2 id="configure-environment-variables">Configure Environment Variables</h2><p>Next you&apos;ll want to enter your environment variables for the specific stage of deployment. 
If you go to <strong>Repository Settings &gt; Deployments</strong> you&apos;ll see three default environments. Enter the variables so they match what you need for Kamal to run. For instance, I set up these variables as a mix of clear-text and secret values:</p><pre><code>RAILS_MASTER_KEY=
POSTGRES_PASSWORD=
REDIS_URL=
DB_HOST=
RAILS_ENV=
</code></pre>
<p>The above key-value pairs will be used when Kamal generates the env files for your application containers. On the BitBucket side, <code>$RAILS_MASTER_KEY</code> will evaluate to that value when the Pipeline runs. If the value was entered as a secret, the value will be masked in the logs with the key name. This usually happens when running Kamal&apos;s envify command; alternatively, you can also use the Kamal env command to manage the env files and push them up to your virtual machine.</p><h2 id="create-or-update-bitbucket-pipelineyml">Create or update bitbucket-pipelines.yml</h2><p>Time to configure the Pipeline YAML file. As it always seems to go, each vendor has its own flavor of CI/CD configuration. There are differences between the platforms, and one may not fit your needs. Below I&apos;ve entered a simple Pipeline which is manually triggered on our repository. When we push to the main branch, the Pipeline will run and wait for us to confirm the deploy. This is where your development process comes in; you can slot this deploy anywhere based on your own Pipeline rules.</p><p>From top to bottom, we&apos;re starting off by having the overall Pipeline use the Ruby image. We need Docker included to run our Kamal commands, and this setting is specific to how BitBucket allows Docker in their Pipelines. After that we create a cache for bundler. (This may be unnecessary.)</p><p>Within the pipelines key in the yaml, you&apos;ll see our named Deployment to Staging step. We reference the cache, set the deployment type, and set the trigger to manual. The script portion is where all the action happens. We&apos;ll need to tell BitBucket to enable Docker BuildKit. 
After that, since the Kamal gem has dependencies that require a few system-level packages, you&apos;ll see we add <code>apt-get install -y ca-certificates curl gnupg openssh-client build-essential git</code> to ensure we have everything we need to <code>gem install kamal</code>.<br></p><pre><code># bitbucket-pipelines.yml

image: ruby:3.2.2-slim

options:
  docker: true

definitions:
  caches:
    bundler-cache:
      key:
        files:
          - Gemfile.lock
      path: vendor/bundle

pipelines:
  default:
    - step:
        name: &apos;Deployment to Staging&apos;
        caches:
          - bundler-cache
        deployment: staging
        trigger: &apos;manual&apos;
        script:
          - export DOCKER_BUILDKIT=1
          - apt-get update &amp;&amp; apt-get install -y ca-certificates curl gnupg openssh-client build-essential git
          - gem install kamal
          - kamal envify -d staging
          - kamal deploy -d staging
</code></pre>
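<p>For reference, the <code>kamal envify -d staging</code> step in the script above renders an ERB template into the staging env file. A hypothetical <code>.env.staging.erb</code> covering the variables we configured might look like:</p><pre><code># .env.staging.erb (hypothetical)
RAILS_MASTER_KEY=&lt;%= ENV[&quot;RAILS_MASTER_KEY&quot;] %&gt;
POSTGRES_PASSWORD=&lt;%= ENV[&quot;POSTGRES_PASSWORD&quot;] %&gt;
REDIS_URL=&lt;%= ENV[&quot;REDIS_URL&quot;] %&gt;
DB_HOST=&lt;%= ENV[&quot;DB_HOST&quot;] %&gt;
RAILS_ENV=staging</code></pre>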
<p>Let the Kamal fun begin. We&apos;ll focus on the second-to-last line, <code>kamal envify -d staging</code>. When we run envify, Kamal looks up the variable values we set for the deployment on BitBucket, since that&apos;s the context we&apos;re in. It will generate the <code>.env</code> file (<code>.env.staging</code> in this case) based on your <code>.env.*.erb</code> template. Be sure to have that file included in your project. Envify then follows the dotenv gem&apos;s hierarchy and priority (<a href="https://github.com/bkeepers/dotenv?ref=erikminkel.com#what-other-env-files-can-i-use">https://github.com/bkeepers/dotenv#what-other-env-files-can-i-use</a>). You&apos;ll see SSH connections happening and files uploading to your virtual machine. You can log in to your server and verify the env values are set as expected; just check the <code>.kamal</code> folder on the server. You can even modify this file to fix a container that&apos;s failing to start, if you have to. Just know that it could change after any envify or env push command.</p><p>After that, the Kamal deploy command will go through the normal build process, push to your registry, and then do the reverse: the pull-latest-and-deploy dance. After a successful deploy you&apos;ll have the new version of your application available on your virtual machine.</p><h2 id="some-parting-notes-for-debugging">Some parting notes for debugging</h2><p>If you see a failure happen, check the commands that Kamal is running. You can likely reproduce exactly where the failure is happening and backtrack to the issue. More than likely, an env variable will be the culprit, or a value expected at application load time won&apos;t be present. </p><p>You can log in to your server and run the images stored there. <code>docker exec -it CONTAINER_ID /bin/bash</code> is a helpful one to get in and poke around with what Kamal is trying to run. 
<code>printenv</code> will output the environment variables in the container. If the container isn&apos;t running, use <code>docker run -it IMAGE_ID /bin/bash</code>. You can also output the logs of running containers with the <code>docker logs CONTAINER_ID</code> command.</p><p></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://jetrockets.com/blog/how-to-use-basecamp-s-kamal-with-aws-and-github?ref=erikminkel.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How To Use Basecamp&#x2019;s Kamal With AWS and GitHub</div><div class="kg-bookmark-description">Learn how to use Basecamp&#x2019;s Kamal to deploy Rails application to AWS with GitHub Actions</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://jetrockets.com/apple-touch-icon.png" alt><span class="kg-bookmark-author">JetRockets</span><span class="kg-bookmark-publisher">Igor Alexandrov</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.jetrockets.com/post/201/og_image/711a4a008cf6c0a2758d9d99b4e28fa4.png" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Using Kamal to host multiple Apps on a single server]]></title><description><![CDATA[Use the Kamal deployment tool to host multiple Applications on a single server instance]]></description><link>https://www.erikminkel.com/2023/09/29/using-kamal-to-host-multiple-apps-on-a-single-server/</link><guid isPermaLink="false">6530bd3534a7670c7caab548</guid><category><![CDATA[kamal]]></category><category><![CDATA[rails]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[letsencrypt]]></category><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Fri, 29 Sep 2023 05:30:33 GMT</pubDate><content:encoded><![CDATA[<p>In this post I&apos;ll outline a way you can host multiple Ruby on Rails applications on a single server. 
This is one way to achieve that goal; there may be better ways. </p><p>I understand that many developers may not carry over to the dark side of DevOps things. Comprehending all that Kamal is can be a daunting task; you should have some knowledge of Traefik and Docker.<br><br>In this example, we&apos;ll have two Rails applications; luckily, Rails 7.1.0.beta1 was recently released with all the new goodness. Both applications in this example are exactly the same: SQLite3, esbuild, vanilla out-of-the-box configuration. I&apos;m using Kamal version 1.0.0. The end goal for each was a green response at the route <code>/up</code>.</p><p><em>Want to run your own <a href="https://www.erikminkel.com/2023/09/25/run-your-own-docker-registry-s3-docker-compose/"><em>Docker registry instance for Kamal</em></a>? Check out my post about it.</em></p><p>Here&apos;s a bit of a visual of how things are set up on the single instance:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.erikminkel.com/content/images/2023/09/Screenshot-2023-09-28-at-7.04.20-PM.png" class="kg-image" alt loading="lazy" width="1706" height="1206" srcset="https://www.erikminkel.com/content/images/size/w600/2023/09/Screenshot-2023-09-28-at-7.04.20-PM.png 600w, https://www.erikminkel.com/content/images/size/w1000/2023/09/Screenshot-2023-09-28-at-7.04.20-PM.png 1000w, https://www.erikminkel.com/content/images/size/w1600/2023/09/Screenshot-2023-09-28-at-7.04.20-PM.png 1600w, https://www.erikminkel.com/content/images/2023/09/Screenshot-2023-09-28-at-7.04.20-PM.png 1706w" sizes="(min-width: 1200px) 1200px"></figure><p>Kamal commands our server instance through the <code>deploy.yml</code> file we create. 
If this instance is located in the cloud, you&apos;ll have the SSH key local to your machine; Kamal simply uses that to log in as root and run commands for you.</p><p>Kamal&apos;s <code>kamal server</code> command will bootstrap the server with what&apos;s necessary; at the base of it all are docker and curl, with the Traefik container being the reverse proxy we need to route traffic in to each container.</p><p>This setup is a bit odd in that the first project you run the <code>kamal init</code> command in will carry some of the Traefik configuration we need for SSL. <strong><em>If you&apos;re hosting a bunch of small projects and don&apos;t care if there is overlap into another project, then you can continue.</em></strong></p><p>After you&apos;ve set up the server instance with docker, you can log in to it and create a <code>/letsencrypt</code> directory containing an <code>acme.json</code> file with the contents <code>{}</code>; you can then chmod it 600.</p><h2 id="site-1-configuration">Site 1 Configuration</h2><pre><code># deploy.yml

service: site1

image: my-registry/site1

# Deploy to these servers.
servers:
  web:
    hosts:
      - yourip
    options:
      &quot;add-host&quot;: host.docker.internal:host-gateway
    labels:
      traefik.http.routers.rails_recipes.entrypoints: websecure
      traefik.http.routers.rails_recipes.rule: Host(`rails.recipes`)
      traefik.http.routers.rails_recipes.tls.certresolver: letsencrypt

# Credentials for your image host.
registry:
  # Specify the registry server, if you&apos;re not using Docker Hub
  server: registry.digitalocean.com
  username: deploy

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

traefik:
  options:
    publish:
      - &quot;443:443&quot;
    volume:
      - &quot;/letsencrypt/acme.json:/letsencrypt/acme.json&quot;
  args:
    entryPoints.web.address: &quot;:80&quot;
    entryPoints.websecure.address: &quot;:443&quot;
    entryPoints.web.http.redirections.entryPoint.to: websecure
    entryPoints.web.http.redirections.entryPoint.scheme: https
    entryPoints.web.http.redirections.entrypoint.permanent: true
    certificatesResolvers.letsencrypt.acme.email: &quot;you@youremail&quot;
    certificatesResolvers.letsencrypt.acme.storage: &quot;/letsencrypt/acme.json&quot;
    certificatesResolvers.letsencrypt.acme.httpchallenge: true
    certificatesResolvers.letsencrypt.acme.httpchallenge.entrypoint: web
</code></pre>
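<p>The <code>acme.json</code> setup mentioned above can be sketched as a few shell commands run on the server. Paths match the Traefik volume mount in the config; <code>ACME_DIR</code> is just an override knob I&apos;ve added so you can try the commands somewhere harmless first:</p>

```shell
# Create the ACME storage file Traefik mounts at /letsencrypt/acme.json.
# ACME_DIR is overridable so these commands can be tried outside the server.
ACME_DIR="${ACME_DIR:-/letsencrypt}"
mkdir -p "$ACME_DIR"
# Seed it with an empty JSON object; Traefik fills in certificates later.
echo '{}' > "$ACME_DIR/acme.json"
# Traefik refuses ACME storage that other users can read.
chmod 600 "$ACME_DIR/acme.json"
```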
<p>After you have this in your <code>deploy.yml</code>, save it. Be sure your env variables are up to date with <code>kamal env push</code>. You&apos;ll notice the naming conventions Kamal uses for your services; this is what gives you the flexibility to deploy multiple applications.</p><p>Run <code>kamal deploy</code> to push this application up to the server.</p><p>If you haven&apos;t configured Traefik yet, you&apos;ll likely want to run <code>kamal traefik restart</code>, or <code>kamal traefik reboot</code> if it&apos;s not picking up your changes. Give LetsEncrypt a little time to catch up with the certificate.</p><h2 id="site-2-configuration">Site 2 Configuration</h2><pre><code># deploy.yml

service: site2

image: my-registry/site2

servers:
  web:
    hosts:
      - yourip
    options:
      &quot;add-host&quot;: host.docker.internal:host-gateway
    labels:
      traefik.http.routers.site2-web.entrypoints: websecure
      traefik.http.routers.site2-web.rule: Host(`changelog.lol`)
      traefik.http.routers.site2-web.tls.certresolver: letsencrypt

registry:
  # Specify the registry server, if you&apos;re not using Docker Hub
  server: registry.digitalocean.com
  username: deploy

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD
</code></pre>
<p>The same goes for this site: be sure your env variables are up to date and push them. They&apos;ll end up in the <code>.kamal/env/roles</code> directory on your server instance; you can SSH in and check them out there, too.</p><p>Now all you need to do is run <code>kamal deploy</code> from your other application&apos;s directory and Kamal will deploy this app for you and start the container. Since Traefik is already running, it will be notified of the changes via the labels we&apos;re pushing. You can also watch this with the docker logs command from the server.</p><p>I hope this helps you with the general concepts, or with debugging your own deployment.</p><p>This is dedicated to someone who kept asking me how to do this.</p><p>Well, you&apos;ve made it this far. If you happen to be a developer, check out my <a href="https://fidget.so/?ref=erikminkel.com">in-app feedback tool </a>that I&apos;m hacking on.</p>]]></content:encoded></item><item><title><![CDATA[Run your own docker registry]]></title><description><![CDATA[<p>I had a need arise to run a configurable instance of the docker registry to host images built for the <a href="https://kamal-deploy.org/?ref=erikminkel.com">Kamal deployment</a> tool. 
In this post, I&apos;ll describe the setup of running your own instance of the docker registry.</p><p>We&apos;ll utilize docker compose for this setup,</p>]]></description><link>https://www.erikminkel.com/2023/09/25/run-your-own-docker-registry-s3-docker-compose/</link><guid isPermaLink="false">6530bd3534a7670c7caab547</guid><category><![CDATA[docker]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[rails]]></category><category><![CDATA[kamal]]></category><category><![CDATA[docker-registry]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Mon, 25 Sep 2023 22:10:34 GMT</pubDate><content:encoded><![CDATA[<p>I had a need arise to run a configurable instance of the docker registry to host images built for the <a href="https://kamal-deploy.org/?ref=erikminkel.com">Kamal deployment</a> tool. In this post, I&apos;ll describe the setup of running your own instance of the docker registry.</p><p>We&apos;ll utilize docker compose for this setup, as well as <a href="https://traefik.io/?ref=erikminkel.com">Traefik</a> to proxy our requests, provide SSL over LetsEncrypt and whitelist our internal IPs.</p><p>First you&apos;ll want to spin up a virtual machine instance, droplet, EC2, whatever you prefer. For this service it shouldn&apos;t need much, I&apos;ve given it a 1gb and 1vCPU instance.<br><br>Be sure you have the latest version of docker installed on this instance.</p><h2 id="configuring-traefik">Configuring Traefik</h2><p>We&apos;ll begin by configuring Traefik. Create a docker-compose.yml file. You&apos;ll likely need to create a network in docker as well (simply run <code>docker network create traefik</code>)</p><pre><code>version: &apos;3&apos;

services:
  traefik:
    image: traefik
    ports:
    - &quot;80:80&quot;
    - &quot;443:443&quot;
    command:
     - &quot;--log.level=INFO&quot;
     - &quot;--providers.docker=true&quot;
     - &quot;--providers.docker.exposedbydefault=false&quot;
     - &quot;--entrypoints.web.address=:80&quot;
     - &quot;--entrypoints.websecure.address=:443&quot;
     - &quot;--certificatesresolvers.le.acme.httpchallenge=true&quot;
     - &quot;--certificatesresolvers.le.acme.httpchallenge.entrypoint=web&quot;
     - &quot;--certificatesresolvers.le.acme.caserver=https://acme-v02.api.letsencrypt.org/directory&quot;
     - &quot;--certificatesresolvers.le.acme.email=you@yourdomain.com&quot;
     - &quot;--certificatesresolvers.le.acme.storage=acme.json&quot;
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./acme.json:/acme.json
    networks:
     - traefik
 
networks:
  traefik:
    external: true
</code></pre>
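<p>Before bringing this up, the host needs the external docker network and the <code>acme.json</code> file referenced in the volumes section. A quick sketch of that one-time prep, run from the directory holding your docker-compose.yml:</p>

```shell
# One-time host prep for the Traefik compose file above.
docker network create traefik 2>/dev/null || true  # no-op if it already exists
# acme.json must exist before Traefik starts; an empty JSON object is a safe seed.
echo '{}' > acme.json
chmod 600 acme.json
```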
<p>From top to bottom, this file passes command line configuration options to the Traefik container: setting the log level, and telling Traefik there is a provider at the docker socket but not to expose any of our other services unless we say so. Further on, we&apos;re leaving ports 80 and 443 open and accessible from anywhere. These are known as entry points in Traefik, and we can define routes and additional services on them as you&apos;ll see later. After that we get into setting up the <a href="https://doc.traefik.io/traefik/https/acme/?ref=erikminkel.com">LetsEncrypt certificate resolvers</a>. This uses the most basic challenge; you can view the Traefik documentation for other scenarios.<br><br>The volumes section identifies the docker socket that Traefik should know about. The acme.json file is one you&apos;ll need to create within the directory that holds your docker-compose.yml file. You may need to change its contents to <code>{}</code> if an empty string causes issues.<br><br>If you were able to follow along this far, congratulations, you have a server that can accept connections on 80 and 443. To get that hooked up to your domain, you&apos;ll need to change your domain&apos;s DNS to point to this instance. You should probably do that now.</p><h2 id="docker-registry-setup">Docker Registry Setup</h2><p>Let&apos;s get the Docker registry container set up.<br><br>Add an extra service entry in your docker-compose.yml file. We&apos;ll go over each of the configuration options below. If you&apos;re okay hosting the repository images on this instance, you can just set the data directory environment variable <code>REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data</code>, point it at a local volume <code>/data</code>, and be on your way; however, I&apos;m going to set up S3 as the storage provider for this example.</p><pre><code>registry:
    restart: always
    image: registry:latest
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: accesskey
      REGISTRY_STORAGE_S3_SECRETKEY: secretkey
      REGISTRY_STORAGE_S3_BUCKET: bucket-name
      REGISTRY_STORAGE_S3_REGION: region-1
      REGISTRY_HEALTH_STORAGEDRIVER_ENABLED: false
    volumes:
      - ./auth:/auth
    labels:
      - &quot;traefik.enable=true&quot;
      - &quot;traefik.http.routers.registry.rule=Host(`yourdomain.com`)&quot;
      - &quot;traefik.http.routers.registry.tls=true&quot;
      - &quot;traefik.http.routers.registry.tls.certresolver=le&quot;
      - &quot;traefik.http.routers.registry.entrypoints=websecure&quot;
      # You can remove the line below to make accessible publicly
      - &quot;traefik.http.middlewares.private-network.ipwhitelist.sourcerange=comma,delimited,ips,here,to,allow&quot;
      - &quot;traefik.http.routers.registry.middlewares=private-network@docker&quot;
      - &quot;traefik.http.services.registry.loadbalancer.server.port=5000&quot;
    networks:
     - traefik
</code></pre>
<p>First we need to install htpasswd and set up the password file on our instance. Install apache2-utils; it includes what we need to generate our authorization information: run <code>sudo apt install apache2-utils</code>.<br><br>Generate your login with <code>htpasswd -Bc registry.password your_user</code>, replacing <code>your_user</code> with however you&apos;d like to identify the login. The command will prompt you for the password you&apos;d like to use, and the hash is stored in the registry.password file. You can verify the password was set correctly with <code>htpasswd -v registry.password your_user</code>, which will prompt for the password and report whether it matches.<br><br>As you can see, we&apos;re setting up some environment variables; be sure your pathing matches up. You may need to create the auth directory and push the file you just created into it.</p><h2 id="s3-setup">S3 Setup</h2><p>Do the Amazon S3 bucket configuration dance, and once you&apos;re done, bring the required environment variable values back with you and plug them into your docker-compose.yml.</p><p>When you&apos;re configuring the IAM user or group, here&apos;s a policy that might help you. I had a bit of an issue understanding what was required; it seems the S3 storage driver needs a special listing ability, and the example policy file is inaccessible or missing in the official Docker Registry repo.</p><pre><code>{
	&quot;Version&quot;: &quot;2012-10-17&quot;,
	&quot;Statement&quot;: [
		{
			&quot;Effect&quot;: &quot;Allow&quot;,
			&quot;Action&quot;: [
				&quot;s3:ListAllMyBuckets&quot;
			],
			&quot;Resource&quot;: &quot;arn:aws:s3:::*&quot;
		},
		{
			&quot;Effect&quot;: &quot;Allow&quot;,
			&quot;Action&quot;: [
				&quot;s3:ListBucket&quot;,
				&quot;s3:GetBucketLocation&quot;,
				&quot;s3:ListBucketMultipartUploads&quot;
			],
			&quot;Resource&quot;: &quot;arn:aws:s3:::your-bucket&quot;
		},
		{
			&quot;Effect&quot;: &quot;Allow&quot;,
			&quot;Action&quot;: [
				&quot;s3:PutObject&quot;,
				&quot;s3:GetObject&quot;,
				&quot;s3:DeleteObject&quot;,
				&quot;s3:ListMultipartUploadParts&quot;,
				&quot;s3:AbortMultipartUpload&quot;
			],
			&quot;Resource&quot;: &quot;arn:aws:s3:::your-bucket/*&quot;
		}
	]
}
</code></pre>
<p>We&apos;re coming to the final explanation of the remaining lines of the docker-compose.yml file. The labels under the registry service, from top to bottom, tell Traefik to be aware of this container, then set up routers based on your domain and the LetsEncrypt certificate resolver we set up earlier. The middlewares can be configured to only allow your internal cloud network resources to access your registry as an extra security precaution; you can comment out or remove that line altogether. Lastly, we tell Traefik we&apos;re running our service on port 5000, where the Docker registry listens by default.</p><h2 id="final-thoughts">Final Thoughts</h2><p>You should be up and running with a <code>docker compose up -d</code> command. You can attempt to log in to your registry from another server or your own terminal with <code>docker login registry.yourdomain.com</code>.</p><p>You can view the <a href="https://gist.github.com/eminkel/c08c48b800423d0b071618463eca9413?ref=erikminkel.com">full gist here</a>.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Up and running with BridgetownRB and Tailwind CSS]]></title><description><![CDATA[<p>There are a few articles highlighting how to get going with the Ruby-powered static site generator <a href="https://bridgetownrb.com/?ref=erikminkel.com">BridgetownRb</a>. 
While they may have worked initially when written, for whatever reason, they throw an asset compilation error due to the webpack configuration.</p><p>Below we&apos;ll install the needed Ruby gems and JavaScript</p>]]></description><link>https://www.erikminkel.com/2020/12/09/up-and-running-with-bridgetownrb-and-tailwind-css/</link><guid isPermaLink="false">6530bd3534a7670c7caab545</guid><category><![CDATA[bridgetown]]></category><category><![CDATA[ruby]]></category><category><![CDATA[tailwind]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Wed, 09 Dec 2020 20:10:51 GMT</pubDate><content:encoded><![CDATA[<p>There are a few articles highlighting how to get going with the Ruby-powered static site generator <a href="https://bridgetownrb.com/?ref=erikminkel.com">BridgetownRb</a>. While they may have worked initially when written, for whatever reason, they throw an asset compilation error due to the webpack configuration.</p><p>Below we&apos;ll install the needed Ruby gems and JavaScript packages necessary to get you designing out a simple static site with Bridgetown and <a href="https://www.tailwindcss.com/?ref=erikminkel.com">TailwindCSS</a>.</p><p>Step 1: Install the gems</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ gem install bundler bridgetown -N
</code></pre>
<!--kg-card-end: markdown--><p>Step 2: Create a project with Bridgetown</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ bridgetown new my-cool-static-project
</code></pre>
<!--kg-card-end: markdown--><p>Step 3: Install JavaScript packages</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ yarn add -D tailwindcss postcss-import postcss-loader autoprefixer postcss
</code></pre>
<!--kg-card-end: markdown--><p>Step 4: Configure postcss.config.js file (put it in the root of your Bridgetown project)</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">module.exports = {
  plugins: [
    require(&quot;postcss-import&quot;)({
      path: &quot;frontend/styles&quot;,
      plugins: [],
    }),
    require(&quot;tailwindcss&quot;),
    require(&quot;autoprefixer&quot;),
  ],
};
</code></pre>
<!--kg-card-end: markdown--><p>Step 5: Update webpack.config.js file</p><p>Your webpack.config.js css configuration should read as follows (we just need to add postcss-loader after css-loader):</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">{
  test: /\.(s[ac]|c)ss$/,
    use: [
      MiniCssExtractPlugin.loader,
      &quot;css-loader&quot;,
      &quot;postcss-loader&quot;,
      {
        loader: &quot;sass-loader&quot;,
        options: {
          sassOptions: {
            includePaths: [path.resolve(__dirname, &quot;src/_components&quot;)],
          },
        },
      },
    ],
}
</code></pre>
<!--kg-card-end: markdown--><p>Step 6: Add TailwindCSS includes to your frontend/styles/index.scss</p><!--kg-card-begin: markdown--><pre><code class="language-scss">@import &quot;tailwindcss/base&quot;;
@import &quot;tailwindcss/components&quot;;

// Your classes here

@import &quot;tailwindcss/utilities&quot;;
</code></pre>
<!--kg-card-end: markdown--><p>Step 7: Run yarn start</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ yarn start
</code></pre>
<!--kg-card-end: markdown--><p>You should be in business, continue your design and build out of your static site. Be sure to check the <a href="https://www.bridgetownrb.com/docs/?ref=erikminkel.com">excellent documentation for Bridgetown</a>.</p><p></p><p>Check out Andrew&apos;s guide for further deployment assistance.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://dev.to/andrewmcodes/build-and-deploy-a-static-site-with-ruby-bridgetown-tailwindcss-and-netlify-3934?ref=erikminkel.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Build and deploy a static site with Ruby, Bridgetown, TailwindCSS, and Netlify</div><div class="kg-bookmark-description">Demo Repository Demo Website What is Bridgetown According to their website, Bridgetown...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://res.cloudinary.com/practicaldev/image/fetch/s--t7tVouP9--/c_limit,f_png,fl_progressive,q_80,w_192/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/devlogo-pwa-512.png" alt><span class="kg-bookmark-author">DEV Community</span><span class="kg-bookmark-publisher">Andrew Mason</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9v60QlZA--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/xctbps1usj2v5ege60hu.jpg" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Querying sqlite3 JSON columns in Rails]]></title><description><![CDATA[<p>This post is an effort to provide the next developer searching for answers with some guidance on using sqlite with the JSON1 extension. When it comes to querying JSON columns in Ruby on Rails the internet didn&apos;t seem to have any applied examples. 
</p><p>Here is some of my</p>]]></description><link>https://www.erikminkel.com/2020/11/13/query-sqlite3-json-columns-in-rails/</link><guid isPermaLink="false">6530bd3534a7670c7caab544</guid><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[sqlite]]></category><category><![CDATA[json column]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Fri, 13 Nov 2020 01:07:25 GMT</pubDate><content:encoded><![CDATA[<p>This post is an effort to provide the next developer searching for answers with some guidance on using sqlite with the JSON1 extension. When it comes to querying JSON columns in Ruby on Rails, the internet didn&apos;t seem to have any applied examples.</p><p>Here are some of my notes on searching JSON column data with ActiveRecord.</p><p>You should be running a newer version of sqlite that has the JSON1 extension compiled in by default. If your data isn&apos;t saving as JSON you&apos;ll know pretty fast, since you&apos;ll find yourself parsing it into JSON on the server side.</p><p>The JSON data structure for a project I&apos;m working on looks a bit like below. This is stored in a column named <em>details</em>.</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
    &quot;contact&quot;:
    {
        &quot;email&quot;:&quot;user@email.com&quot;
    },
    &quot;pageDetails&quot;:
    {
        &quot;userAgent&quot;:&quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:82.0) Gecko/20100101 Firefox/82.0&quot;,
        &quot;pageUrl&quot;:&quot;http://localhost:4000/&quot;,
        &quot;currentDateTime&quot;:&quot;2020-11-12T07:00:04.681Z&quot;,
        &quot;ipAddress&quot;:&quot;127.0.0.1&quot;
    }
}
</code></pre>
<!--kg-card-end: markdown--><p>To query this you can use the JSON1 extension&apos;s functions within ActiveRecord:</p><!--kg-card-begin: markdown--><pre><code class="language-ruby">Contact.where(&quot;json_extract(contacts.details, &apos;$.contact.email&apos;) like &apos;%user@email.com&apos;&quot;)
</code></pre>
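<p>The same query can be sanity-checked outside Rails with the sqlite3 CLI. Table and column names mirror the ActiveRecord example above; the database file is a throwaway I made up for the demo:</p>

```shell
# Reproduce the ActiveRecord json_extract query with the sqlite3 CLI
# against a throwaway database.
sqlite3 /tmp/contacts_demo.db <<'SQL'
DROP TABLE IF EXISTS contacts;
CREATE TABLE contacts (id INTEGER PRIMARY KEY, details TEXT);
INSERT INTO contacts (details) VALUES
  ('{"contact":{"email":"user@email.com"},"pageDetails":{"ipAddress":"127.0.0.1"}}');
SELECT json_extract(details, '$.contact.email')
FROM contacts
WHERE json_extract(details, '$.contact.email') LIKE '%user@email.com';
SQL
```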
<!--kg-card-end: markdown--><p>I will update this page as I run into more querying requirements on JSON columns in sqlite.</p><p><a href="https://sqlite.org/json1.html?ref=erikminkel.com">For more information on JSON1 sqlite extension.</a></p>]]></content:encoded></item><item><title><![CDATA[Rails 5.2, webpacker and vue-loader v15+]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Just a quick fix for vue-loader v15+</p>
<p>If you&apos;re developing in Rails in 2018, you&apos;re likely using a webpack and a front end framework. This will help you get your vue-loader going from complaining about needing a special loader to compile, back to working.</p>
<p>In your</p>]]></description><link>https://www.erikminkel.com/2018/05/21/rails-5-2-webpacker-and-vue-loader-v15/</link><guid isPermaLink="false">6530bd3534a7670c7caab53b</guid><category><![CDATA[rails 5]]></category><category><![CDATA[webpacker]]></category><category><![CDATA[webpack]]></category><category><![CDATA[vue]]></category><category><![CDATA[vue-loader]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Mon, 21 May 2018 19:15:30 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Just a quick fix for vue-loader v15+</p>
<p>If you&apos;re developing in Rails in 2018, you&apos;re likely using a webpack and a front end framework. This will help you get your vue-loader going from complaining about needing a special loader to compile, back to working.</p>
<p>In your project open up /app/config/webpack/environment.js</p>
<pre><code>const { environment } = require(&apos;@rails/webpacker&apos;)
const vue = require(&apos;./loaders/vue&apos;)

environment.loaders.append(&apos;vue&apos;, vue)
module.exports = environment

</code></pre>
<p>You&apos;ll probably see something like above.</p>
<p>You&apos;ll want to be sure to run <code>yarn add vue-loader</code> and double check in your package.json that your version is v15 or above.</p>
<p>Add these few lines to your environment.js and you&apos;ll be back to compiling:</p>
<pre><code>const { environment } = require(&apos;@rails/webpacker&apos;)
const VueLoaderPlugin = require(&apos;vue-loader/lib/plugin&apos;)
const vue = require(&apos;./loaders/vue&apos;)

environment.loaders.append(&apos;vue&apos;, vue)
environment.plugins.append(&apos;VueLoaderPlugin&apos;, new VueLoaderPlugin())
module.exports = environment
</code></pre>
<p>Further reading: <a href="https://github.com/rails/webpacker/blob/master/docs/webpack.md?ref=erikminkel.com#plugins">https://github.com/rails/webpacker/blob/master/docs/webpack.md#plugins</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Streaming payments on Stellar]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>While putting together an understanding of the Stellar SDK and developing a <strong>platform for smart contracts</strong> (which I&apos;ll announce later); I&apos;ll be blogging excerpts of how to interact with the Stellar network through the JavaScript SDK as I discover more.</p>
<p>Currently, I&apos;m working through</p>]]></description><link>https://www.erikminkel.com/2018/05/06/streaming-payments-on-stellar/</link><guid isPermaLink="false">6530bd3534a7670c7caab53a</guid><category><![CDATA[stellar]]></category><category><![CDATA[lumens]]></category><category><![CDATA[blockchain]]></category><category><![CDATA[stellar-sdk]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Sun, 06 May 2018 05:16:01 GMT</pubDate><media:content url="https://www.erikminkel.com/content/images/2018/05/pexels-photo-813269.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://www.erikminkel.com/content/images/2018/05/pexels-photo-813269.jpeg" alt="Streaming payments on Stellar"><p>While putting together an understanding of the Stellar SDK and developing a <strong>platform for smart contracts</strong> (which I&apos;ll announce later); I&apos;ll be blogging excerpts of how to interact with the Stellar network through the JavaScript SDK as I discover more.</p>
<p>Currently, I&apos;m working through a process of initializing an account on the network that behaves as a pure smart contract account (will be used for setting up access, rules and executing a binding contract on the public ledger). Initially, I felt it would make most logical sense to fund it after the user has decided on how the contract will execute and we can easily calculate fees and totals. However, due to some technical restrictions the account must be created first before being able to create and sign valid transaction envelopes (I&apos;ll make a follow up later).</p>
<h3 id="thecurrentuserexperienceis">The current user experience is:</h3>
<ol>
<li>User lands on smart contract creation page</li>
<li>A new keypair is generated and shown to the user</li>
<li>The user is required to fund the account with 2 XLM</li>
<li>Request the user enter their funding public key address</li>
<li>A status indicator should display so the user knows we&apos;re watching and waiting for the payment</li>
</ol>
<h3 id="whatimbuilding">What I&apos;m building</h3>
<p><img src="https://www.erikminkel.com/content/images/2018/05/mainstay_funding_process.gif" alt="Streaming payments on Stellar" loading="lazy"></p>
<p>That&apos;s a bit long-winded, but it clearly outlines what interfaces will be necessary for the code I&apos;m about to share below.</p>
<p>Let&apos;s make the assumption that we have received the funding public key address from our user.</p>
<p>The user has sent a transaction over the network for our desired 2 XLM to initialize the smart contract.</p>
<p>We&apos;d like to watch for this transaction over the Stellar network and we&apos;re in luck because the Stellar JavaScript SDK has a streaming capability.</p>
<pre><code>import Stellar from &apos;stellar-sdk&apos;
// Set the server
const server = new Stellar.Server(&apos;https://horizon-testnet.stellar.org&apos;)
Stellar.Network.useTestNetwork()

let funding_account = &quot;&quot; // The funding account&apos;s public key; it must be an existing, valid account on the network

let stream = server.payments()
    .forAccount(funding_account)
    .cursor(&apos;now&apos;)
    .stream({
        onmessage: function(message) {
            if (message.type == &quot;create_account&quot; &amp;&amp;  message.account == app.contract.public_key) {
                // Set your own code after we find our create account event
                stream()
            }
        }
     })
</code></pre>
<p>The code above will (in our case, on the test network) take the funding address from our user and watch for a &quot;create_account&quot; event on it. Since we now know this information, along with our smart contract account&apos;s public key, we can further verify this is the proper individual interacting with our system.</p>
<p>The system will respond with the following JSON.</p>
<pre><code>{&quot;_links&quot;:
    {&quot;self&quot;:{&quot;href&quot;:&quot;https://horizon-testnet.stellar.org/operations/37862971392684033&quot;},
    &quot;transaction&quot;:{&quot;href&quot;:&quot;https://horizon-testnet.stellar.org/transactions/0ad5c77ffe9796f4ac96e9536bf93e54e3641efe0f2a66d7a3be2f1340f86a39&quot;},
    &quot;effects&quot;:{&quot;href&quot;:&quot;https://horizon-testnet.stellar.org/operations/37862971392684033/effects&quot;},
    &quot;succeeds&quot;:{&quot;href&quot;:&quot;https://horizon-testnet.stellar.org/effects?order=desc\u0026cursor=37862971392684033&quot;},
    &quot;precedes&quot;:{&quot;href&quot;:&quot;https://horizon-testnet.stellar.org/effects?order=asc\u0026cursor=37862971392684033&quot;}},
  &quot;id&quot;:&quot;37862971392684033&quot;,
  &quot;paging_token&quot;:&quot;37862971392684033&quot;,
  &quot;source_account&quot;:&quot;GDUKSHZ2JFC4AYMHWIQTSMSRWT2H2GDLH6BG35UAMZPIYKBIRUC5JK5C&quot;,
  &quot;type&quot;:&quot;create_account&quot;,
  &quot;type_i&quot;:0,
  &quot;created_at&quot;:&quot;2018-05-06T04:38:12Z&quot;,
  &quot;transaction_hash&quot;:&quot;0ad5c77ffe9796f4ac96e9536bf93e54e3641efe0f2a66d7a3be2f1340f86a39&quot;,
  &quot;starting_balance&quot;:&quot;2.0000000&quot;,
  &quot;funder&quot;:&quot;GDUKSHZ2JFC4AYMHWIQTSMSRWT2H2GDLH6BG35UAMZPIYKBIRUC5JK5C&quot;,
  &quot;account&quot;:&quot;GC4L3V7NKMAN5BG6DVHJZLGVGPRYLBCJOHHIDFLL6S7N3ORJTCDA25E4&quot;
  }	
</code></pre>
<p>As you can see, we get a wealth of information back at our disposal to make further decisions.</p>
<p>If you have questions comment below. Also interested to see what you&apos;re working on.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Setting up Rails + webpacker on Docker Compose]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this, hopefully short post, I want to detail how I&apos;m using <code>docker-compose</code> to run webpacker in a Rails 5.1+ environment.</p>
<p>You&apos;ll need to run through the <a href="https://docs.docker.com/compose/install/?ref=erikminkel.com">docker-compose installation</a> procedures yourself.</p>
<p>Docker paired with docker-compose is an easy way to get a mock production environment</p>]]></description><link>https://www.erikminkel.com/2017/08/01/setting-up-rails-webpack-on-docker/</link><guid isPermaLink="false">6530bd3534a7670c7caab538</guid><category><![CDATA[rails]]></category><category><![CDATA[rails 5]]></category><category><![CDATA[docker]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[webpacker]]></category><category><![CDATA[webpack]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Tue, 01 Aug 2017 20:05:38 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this, hopefully short post, I want to detail how I&apos;m using <code>docker-compose</code> to run webpacker in a Rails 5.1+ environment.</p>
<p>You&apos;ll need to run through the <a href="https://docs.docker.com/compose/install/?ref=erikminkel.com">docker-compose installation</a> procedures yourself.</p>
<p>Docker paired with docker-compose is an easy way to get a mock production environment spun up really quickly. This assists in developing against a real world-like production environment.</p>
<p>The file below is what I&apos;m using for the Dockerfile. It essentially pulls from the latest ruby Docker image, gets some updates and packages we need, and installs Node and <a href="https://yarnpkg.com/en/docs/install?ref=erikminkel.com">yarn</a> for package management (not the yarn rubygem, that thing is the devil).</p>
<pre><code class="language-text">FROM ruby:latest

RUN apt-get update -qq &amp;&amp; apt-get install -y build-essential apt-transport-https apt-utils

# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev

# for a JS runtime
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs

# for yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo &quot;deb https://dl.yarnpkg.com/debian/ stable main&quot; | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq &amp;&amp; apt-get install -y yarn

ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

ADD Gemfile* $APP_HOME/
RUN bundle install

ADD . $APP_HOME
</code></pre>
<p>And a simplified production environment setup for our docker-compose YAML file.</p>
<pre><code class="language-yaml">version: &apos;2&apos;
services:
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=T0pS3cretP@ssword
    ports:
      - &quot;3307:3306&quot;

  web:
    build: .
    command: rails s --port 4000 --binding 0.0.0.0
    ports:
      - &quot;4000:4000&quot;
    links:
      - db
    volumes:
      - .:/app
</code></pre>
<p>If you haven&apos;t built your environment run <code>docker-compose build web</code>. This will process through the Dockerfile and install all the required packages. After all of the system level packages are installed it will run through and install all your gems.</p>
<p>Due to a security issue with earlier versions of the webpacker gem, you need to pass the <code>--public LOCALDOCKERIP</code> option when running <code>bin/webpack-dev-server</code>.</p>
<pre><code class="language-text">bin/webpack-dev-server --public LOCALDOCKERIP:8080
</code></pre>
<p>To retrieve your docker instance&apos;s IP, log in to the instance with <code>docker-compose exec NAMEFROMDOCKERCOMPOSEYML bash</code> and run <code>ip addr</code>.</p>
<p>Then all you need to do is run <code>docker-compose up web</code> (replace <code>web</code> with your web frontend name) to spin up all of the instances for the environment. Once that is up, open a new terminal tab, run <code>docker-compose exec web bash</code>, and then run the webpack-dev-server command above to compile and serve your assets.</p>
<p>This can be improved to run when you bring up your environment and will be what I tackle next when I get a moment. Leave any improvements below in the comments. Now, back to webpack.</p>
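<p>As a rough sketch of that improvement (the service name, IP, and port here are assumptions to adjust for your setup), you could add a dedicated webpack service under <code>services:</code> in the docker-compose file so the dev server comes up alongside everything else:</p>
<pre><code class="language-yaml">  # hypothetical service, nested under services: next to db and web
  webpack:
    build: .
    command: bin/webpack-dev-server --public LOCALDOCKERIP:8080
    ports:
      - &quot;8080:8080&quot;
    volumes:
      - .:/app
</code></pre>
<p>With that in place, a plain <code>docker-compose up</code> should start the database, the Rails server, and the asset server in one shot.</p>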
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Apartment gem, tenant databases and MySQL]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Oh, the fun it is.</p>
<p>I&apos;m working on an application that will use the apartment gem and I&apos;ve been needing to drop and recreate the database. I still prefer to work with MySQL and haven&apos;t jumped ship to the popular Postgres. MySQL doesn&apos;</p>]]></description><link>https://www.erikminkel.com/2017/01/01/apartment-gem-tenant-databases-and-mysql/</link><guid isPermaLink="false">6530bd3534a7670c7caab535</guid><category><![CDATA[rails]]></category><category><![CDATA[apartment]]></category><category><![CDATA[rails 5]]></category><category><![CDATA[apartment 1.2]]></category><category><![CDATA[mysql]]></category><category><![CDATA[tenant database]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Sun, 01 Jan 2017 23:22:23 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Oh, the fun it is.</p>
<p>I&apos;m working on an application that will use the apartment gem and I&apos;ve been needing to drop and recreate the database. I still prefer to work with MySQL and haven&apos;t jumped ship to the popular Postgres. MySQL doesn&apos;t seem to have as much support in the Apartment gem as Postgres does.</p>
<p>The out of the box functionality for running <code>rails db:reset</code> will simply drop and recreate your development and test environment databases and not touch any Apartment generated tenant databases that may have been created while developing and testing.</p>
<p>I decided to create a rake task, since I&apos;ve found myself dropping the tenant databases plenty of times over.</p>
<p>Create a file in <code>lib/tasks</code>, I named it <code>drop_tenants.rake</code>.</p>
<p>Within the rake file add:</p>
<pre><code>namespace :apartment do
  desc &apos;drop current tenants before performing database reset&apos;
  task drop_tenants: :environment do
    client = Mysql2::Client.new(host: &apos;localhost&apos;, username: &apos;root&apos;, password: &apos;password&apos;)

    # one tenant database exists per user subdomain
    User.pluck(:subdomain).each do |db|
      client.query(&quot;DROP DATABASE IF EXISTS `#{db}`;&quot;)
      puts &quot;Dropped #{db}&quot;
    end

    client.close
  end
end
</code></pre>
<p>This gives you the command <code>rake apartment:drop_tenants</code>, which you can prepend when you reset: simply run <code>rake apartment:drop_tenants db:reset</code> and you&apos;re good to go!</p>
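<p>If you&apos;d rather not have to remember to prepend the task, you can make it a prerequisite of <code>db:reset</code> with Rake&apos;s <code>enhance</code>. Here&apos;s a minimal standalone sketch of the mechanism; the task bodies are just stand-ins for the real ones, and in a Rails app the single <code>enhance</code> line at the bottom is all you&apos;d add to the .rake file:</p>
<pre><code class="language-ruby">require 'rake'
extend Rake::DSL # only needed outside a Rakefile

# Stand-in tasks; in a Rails app db:reset and apartment:drop_tenants already exist
namespace :apartment do
  task :drop_tenants do
    puts 'dropping tenants'
  end
end

namespace :db do
  task :reset do
    puts 'resetting'
  end
end

# enhance adds apartment:drop_tenants as a prerequisite,
# so it runs automatically whenever db:reset is invoked
Rake::Task['db:reset'].enhance(['apartment:drop_tenants'])
</code></pre>
<p>After that, a plain <code>rake db:reset</code> drops the tenants first, every time.</p>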
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Aloha Lawns: Developing with SEO in mind]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Disclaimer: I&apos;m not an SEO expert, I just develop things. I&apos;m speaking with past experience growing and ranking local service businesses.</p>
<p>I&apos;ve never understood the idea of businesses touting they pay &quot;$$$&quot; per month for SEO. From my experience your ranking moves up</p>]]></description><link>https://www.erikminkel.com/2016/05/29/aloha-lawns-developing-with-seo-in-mind/</link><guid isPermaLink="false">6530bd3534a7670c7caab533</guid><category><![CDATA[entrepreneurship]]></category><category><![CDATA[alohalawns]]></category><category><![CDATA[SEO]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Sun, 29 May 2016 06:01:18 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Disclaimer: I&apos;m not an SEO expert, I just develop things. I&apos;m speaking with past experience growing and ranking local service businesses.</p>
<p>I&apos;ve never understood the idea of businesses touting they pay &quot;$$$&quot; per month for SEO. From my experience your ranking moves up organically as your site becomes more set in your target industry and region. For instance, if the search rankings are wide open, with few competitors for specific keywords, you could come in with almost any domain and likely land on the first page for targeted keywords in 6-10 months with the proper content marketing and website structure.</p>
<p>Here is how I&apos;m planning to bake that in from the get go on Aloha Lawns. If you visit the site you can see the basic link structure starting to take shape. It is important to keep your URL end points keyword happy.</p>
<ul>
<li>/services/lawn-mowing</li>
<li>/services/shrub-trimming</li>
<li>/services/lawn-fertilization</li>
</ul>
<p>The URL structure for our services will allow the bot to see multiple keywords associated with our domain name (&quot;aloha lawns lawn mowing&quot;, for example). The bot will pick up keywords in the URL as well as check our title and meta tags AND our page content for keywords. I&apos;m sure you&apos;ve clicked a result because you saw there were multiple keywords bolded on the results page for a specific search listing. However, a majority of our searches will likely be LOCATION keyword specific, so this alone won&apos;t get us in front of potential customers for our service areas.</p>
<p>The title tags should also be unique for each of our pages. Something as simple as &quot;Aloha Lawns | Lawn Mowing&quot;, &quot;Aloha Lawns | Shrub Trimming&quot;, etc. will suffice. If you wanted to stuff in there a region, you could. (&quot;Aloha Lawns | Hawaii Lawn Mowing | Lawn Mowing&quot;)</p>
<p>You get the point. Each of these pages will have proper meta descriptions sprinkled with the target keywords for where we want to get ranked. It&apos;s important that you come up with a unique meta description for each page. Of course, these service pages will also have useful information about each service offering, with proper copy and an action the user can take to get their service booked.</p>
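<p>Since this is a Rails app, one simple sketch for getting unique titles per page (the view paths and copy here are purely illustrative) is a <code>content_for</code> block in each service view, rendered from the layout:</p>
<pre><code class="language-erb">&lt;%# app/views/services/lawn_mowing.html.erb %&gt;
&lt;% content_for :title, &apos;Aloha Lawns | Lawn Mowing&apos; %&gt;

&lt;%# app/views/layouts/application.html.erb %&gt;
&lt;title&gt;&lt;%= content_for?(:title) ? yield(:title) : &apos;Aloha Lawns&apos; %&gt;&lt;/title&gt;
</code></pre>
<p>The layout falls back to a default title for any page that doesn&apos;t set one.</p>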
<p>Key things when building and ranking your site in no particular order:</p>
<ol>
<li>Sitemap, create and submit a sitemap to Google Webmaster Tools</li>
<li>Ensure proper meta keywords and descriptions; unique descriptions are best</li>
<li>Have as many keywords in your URL structure as make logical sense</li>
<li>Be sure to have some keywords throughout the content on your pages you want to get ranked</li>
<li>List your business on Google Places/Bing Places and get verified</li>
<li>Stuff your pages with Schema Local Business mark-up wherever you list your location address</li>
<li>Be sure your pages&apos; title tags are unique</li>
<li>Responsive and mobile friendly first</li>
</ol>
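<p>For the Schema Local Business mark-up mentioned above, here&apos;s a hedged example of what it could look like as JSON-LD (the name is ours, but the address values are placeholders to swap for your real location):</p>
<pre><code class="language-html">&lt;script type=&quot;application/ld+json&quot;&gt;
{
  &quot;@context&quot;: &quot;https://schema.org&quot;,
  &quot;@type&quot;: &quot;LocalBusiness&quot;,
  &quot;name&quot;: &quot;Aloha Lawns&quot;,
  &quot;address&quot;: {
    &quot;@type&quot;: &quot;PostalAddress&quot;,
    &quot;addressLocality&quot;: &quot;Honolulu&quot;,
    &quot;addressRegion&quot;: &quot;HI&quot;
  }
}
&lt;/script&gt;
</code></pre>
<p>Drop a block like this on every page that lists your address so the bot can tie the business to its location.</p>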
<p>Once you have these things in order, you simply ping the search engines to crawl your site. And the waiting game begins. This really is a long term investment that pays off down the road, we&apos;re measuring in months, not weeks.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Aloha Lawns: Designing Contractor Onboarding]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is going to be a short and sweet update. I had some time to configure a multi-step form for contractor onboarding today. It was designed to gather pertinent information from those looking to add additional work to their already existing businesses. It needs a bit more polishing, but you</p>]]></description><link>https://www.erikminkel.com/2016/05/20/aloha-lawns-contractor-onboarding/</link><guid isPermaLink="false">6530bd3534a7670c7caab534</guid><category><![CDATA[Ruby on Rails]]></category><category><![CDATA[alohalawns]]></category><category><![CDATA[contractor marketplace]]></category><dc:creator><![CDATA[Erik Minkel]]></dc:creator><pubDate>Fri, 20 May 2016 07:07:04 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is going to be a short and sweet update. I had some time to configure a multi-step form for contractor onboarding today. It was designed to gather pertinent information from those looking to add additional work to their already existing businesses. It needs a bit more polishing, but you can get the general idea of how it works from the animation below. I apologize for the flashing screen, my Chrome does that for some reason.</p>
<p><img src="http://i.imgur.com/ywGQyVL.gif" alt="Contractor Onboarding" loading="lazy"></p>
<p>This lays the groundwork for functionality we&apos;ll develop later: we can log in to an administrative area and vet our applicants wanting contractor work. If we approve them, they are emailed and can log in and link a Stripe account for payments. With the data we collect here we can then group contractors by specialties (skills) and assign the right people to the right work. Also note we are collecting regional preferences, so we can assign them to a service region and offer work in their area.</p>
<p>That is all for today.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>