Here at Ghost Pi, we've been running Ghost natively on our Raspberry Pi or ASUS Tinker Board using the Ghost CLI and a series of manual commands. Whilst this is satisfying once up and running, updating Ghost can be a little nerve-wracking because the Raspberry Pi is not an officially supported stack.

What if we told you there was an even easier way to install Ghost on Raspbian Buster for Raspberry Pi or on Armbian on ASUS Tinker Board using Docker?

Getting Ghost running via Docker is nothing new, as outlined by the exceptionally knowledgeable Alex Ellis from OpenFaaS, but now that Alex has moved on to bigger and better things with his OpenFaaS venture, we wanted to provide a more recent update on how to achieve this, including running multiple blogs.

Docker

We won't spend too much time explaining what Docker is, as there is already a vast amount of information about it. Simply put, Docker is a platform that lets you run software packages in containers that are isolated from your operating system. The beauty of Docker is that if something doesn't quite work, it won't have any nasty side effects on your Raspberry Pi (or ASUS Tinker Board).

If you haven't already installed Docker on your Pi or Tinker Board, then simply run:

pi@raspberrypi:~ $ curl -sSL https://get.docker.com | sh

This will download and run Docker's install script on your device. Afterwards, you may need to add your user to the docker group, which is done simply:

pi@raspberrypi:~ $ sudo usermod -aG docker [user_name]

So in the example above, if your user is boo then you would run:

pi@raspberrypi:~ $ sudo usermod -aG docker boo

To finish, you need to log out and log back in. If you have connected to your Pi or Tinker Board via SSH, that means:

pi@raspberrypi:~ $ exit

Then log back in as normal.
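To confirm the group change took effect, you can check your groups and run a quick throwaway container (hello-world is Docker's official test image and has an ARM build):

```shell
# "docker" should appear in the list of groups for your user
groups | tr ' ' '\n' | grep -x docker

# Optional end-to-end check: pulls and runs Docker's tiny test image,
# removing the container once it exits
docker run --rm hello-world
```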

One blog or two?

There are many guides online on how to run Ghost on Raspberry Pi using Docker, but they typically work for just one blog. What if you wanted to host more than one blog on your Raspberry Pi or ASUS Tinker Board? Here at Ghost Pi, we run two blogs - obviously this one, but also Simply Archer, which belongs to the wife of the author, Wesley.

Thankfully, using Docker to do this is quite simple once you get your head around it.

For the purpose of keeping this guide as short as possible, we'll be assuming you have at least two domains configured to point towards your public IP address, with the appropriate DNS records in place. In a nutshell: each domain should have an A record pointing to your public IP address (if you don't know what that is, then IP Chicken is a quick way to find out), and your router should forward port 443 (for HTTPS) to your Raspberry Pi's internal network IP address (e.g. 192.168.0.10).

As we'll be using Let's Encrypt with DNS-based validation to create our SSL certificates, forwarding port 80 for HTTP is not required.
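One way to sanity-check the DNS side is with dig (blogone.com and blogtwo.com are the placeholder domains used later in this guide; ifconfig.me is one of several services that echo back your public IP):

```shell
# Each domain's A record should return your public IP address
dig +short A blogone.com
dig +short A blogtwo.com

# Compare the answers against your current public IP
curl -s https://ifconfig.me
```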

Clone our GitHub repository

To save copying and pasting reams of code, we've made a GitHub repository available so you can get the basis of what's needed to run two Ghost blogs on your Raspberry Pi. Simply clone on your device with:

pi@raspberrypi:~ $ git clone https://github.com/raspberrycoulis/docker-ghost-letsencrypt-nginx.git

This contains a Docker Compose YAML file, an NGINX configuration file called default, a cloudflare.ini configuration file and a hidden .env file. Everything will be located in the docker-ghost-letsencrypt-nginx folder unless you specified your own location when cloning.
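You can verify the clone by listing the folder contents; the -a flag is needed, otherwise the hidden .env file won't show up:

```shell
cd docker-ghost-letsencrypt-nginx
# -a includes dotfiles such as .env in the listing
ls -a
```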

Next, you'll need to make a few tweaks to our code for your installation to run correctly, namely:

  1. Add your details to the hidden .env file
  2. Provide your Cloudflare credentials in the cloudflare.ini file
  3. Update default to use your two domains for your dual blogs
  4. Update docker-compose.yml to use your two domains for your dual blogs. Update: This is no longer needed because, in an update to this post, we've added the details to the .env file instead.

This may sound a little daunting, but our code has been designed to be as hands-off as possible!

Cloudflare and Let's Encrypt

SSL should always be used, and with Let's Encrypt providing free certificates, there is no excuse not to nowadays. To save a lot of manual DNS changes for Let's Encrypt to issue certificates, we'll be using the Cloudflare API to verify ours. You'll need your Cloudflare Global API key (this guide on the Cloudflare support site explains how to find it) and your Cloudflare email address (the one used when logging in), then add them to your cloudflare.ini file - i.e.:

# Instructions: https://github.com/certbot/certbot/blob/master/certbot-dns-cloudflare/certbot_dns_cloudflare/__init__.py#L20

# With global api key:
dns_cloudflare_email = [ADD HERE]
dns_cloudflare_api_key = abcdefghijklmnopqrstuvwxyz1234567890

# With token (comment out both lines above and uncomment below):
#dns_cloudflare_api_token = 0123456789abcdef0123456789abcdef01234567

When done, this will allow our Let's Encrypt Docker container to use your Cloudflare account to issue the SSL certificates without any further involvement.
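Since cloudflare.ini now holds account credentials, it's worth locking its permissions down; certbot's dns-cloudflare plugin will warn about credential files that other users can read:

```shell
# Restrict the credentials file to your user only
chmod 600 cloudflare.ini

# The permissions column should now read -rw-------
ls -l cloudflare.ini
```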

Environment variables

Again, to save adding lots of code manually, you can use environment variables within Docker, which is super helpful. In our .env file (the . before env means the file is hidden by default), substitute the relevant parts to match your preferences.

For example purposes, we'll use blogone.com and blogtwo.com and the blog names of My-First-Blog and My-Second-Blog:

FIRST_DOMAIN=blogone.com
FIRST_BLOG_NAME=My-First-Blog
SECOND_DOMAIN=blogtwo.com
SECOND_BLOG_NAME=My-Second-Blog
[email protected]
PUID=1000
PGID=1000

The PUID and PGID tell Docker which user to run the containers as. You can find your PUID and PGID by logging in (via SSH) to your Pi as the user you want and typing id at the command prompt.
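For example, a default Raspbian user typically looks like this (the uid/gid of 1000 is just the common default for the first user; yours may differ):

```shell
# Prints uid, gid and group memberships for the current user,
# e.g. uid=1000(pi) gid=1000(pi) groups=1000(pi),998(docker),...
id

# Or grab just the numbers for PUID and PGID respectively
id -u
id -g
```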

Now that our .env file has been configured, the docker-compose.yml file will know what to do, with the exception of a few minor tweaks, which follow next.

docker-compose.yml

This file tells Docker Compose what to do and how to build our containers. It would be helpful at this point to install Docker Compose if you haven't already done so, which is as easy as:

pi@raspberrypi:~ $ sudo apt-get update && sudo apt-get install docker-compose -y

After a minute or two, you should be back at the command prompt and Docker Compose should be installed.

Previously, you would have needed to edit the docker-compose.yml file to replace a few bits - essentially anywhere you saw [BLOG ONE] or [BLOG TWO].

UPDATE: Shortly after posting this guide, we had a sudden realisation... Why not include more variables for the blog names in the .env file so that we can avoid the hassle of editing another file? So we did!

The blog names are now automatically picked up by the docker-compose.yml file from the .env file you updated earlier!

So when you look at the docker-compose.yml file, it should look like this now:

  ghost-${FIRST_BLOG_NAME}:
    image: ghost:alpine
    restart: unless-stopped
    container_name: ${FIRST_BLOG_NAME}
    networks:
      - ghost-blogs
    environment:
      - url=https://${FIRST_DOMAIN}
      - server__host=0.0.0.0
      - server__port=2368
      - imageOptimization__resize=false
    volumes:
      - ./${FIRST_BLOG_NAME}/content:/var/lib/ghost/content
    depends_on:
      - letsencrypt

  ghost-${SECOND_BLOG_NAME}:
    image: ghost:alpine
    restart: unless-stopped
    container_name: ${SECOND_BLOG_NAME}
    networks:
      - ghost-blogs
    environment:
      - url=https://${SECOND_DOMAIN}
      - server__host=0.0.0.0
      - server__port=2369
      - imageOptimization__resize=false
    volumes:
      - ./${SECOND_BLOG_NAME}/content:/var/lib/ghost/content
    depends_on:
      - letsencrypt

And when our variables from the .env file are passed through, the Docker engine will see it as follows (note: you won't actually see the file rewritten like this; it's just an example to help you understand):

  ghost-My-First-Blog:
    image: ghost:alpine
    restart: unless-stopped
    container_name: My-First-Blog
    networks:
      - ghost-blogs
    environment:
      - url=https://blogone.com
      - server__host=0.0.0.0
      - server__port=2368
      - imageOptimization__resize=false
    volumes:
      - ./My-First-Blog/content:/var/lib/ghost/content
    depends_on:
      - letsencrypt
  
  ghost-My-Second-Blog:
    image: ghost:alpine
    restart: unless-stopped
    container_name: My-Second-Blog
    networks:
      - ghost-blogs
    environment:
      - url=https://blogtwo.com
      - server__host=0.0.0.0
      - server__port=2369
      - imageOptimization__resize=false
    volumes:
      - ./My-Second-Blog/content:/var/lib/ghost/content
    depends_on:
      - letsencrypt

Note: You cannot have spaces in names in the docker-compose.yml file, so either keep each blog name as one word, or use hyphens (-) or underscores (_) if necessary! This applies to the blog names you set in .env too, as they are used as container names.

Caution! Leave the dollar sign curly brackets ${} as they are because these are used for the .env file that we have already tweaked!
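If you'd like to see the resolved file with your .env values substituted in, Docker Compose can print it for you; run this from the project folder:

```shell
cd ~/docker-ghost-letsencrypt-nginx
# Prints docker-compose.yml with every ${...} replaced by its value
# from .env - a handy way to spot typos before starting anything
docker-compose config
```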

NGINX

The last thing we need to tweak is the default file, which is used by the NGINX element within the Let's Encrypt container. This file pretty much directs the web traffic to the relevant blog, so anybody visiting blogone.com will see the Ghost blog for blogone.com and vice versa.

Unfortunately, the .env variables do not get passed to the default file here, so we have to add them manually.

This time, look for the square brackets [] and anything containing [BLOG ONE] or [BLOG TWO] and substitute accordingly. There are 7 instances of [BLOG ONE] and 3 instances of [BLOG TWO] in the default file.

You may wonder why the ssl_certificate path is the same for both blogs. Well, this is because our Let's Encrypt container has issued a single SSL certificate that covers both domains, because of how we configured it.
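Once the certificate has been issued, you can confirm that it covers both domains by listing its Subject Alternative Names with openssl (the path below is an assumption; adjust it to wherever your Let's Encrypt volume stores the live certificates):

```shell
# Lists the DNS names the certificate is valid for - both domains
# should appear in the subjectAltName extension
openssl x509 -in ./letsencrypt/live/blogone.com/fullchain.pem \
  -noout -ext subjectAltName
```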

Start your containers

Now we're all set to fire up our Docker containers:


To do this, simply run the following at the command prompt in the docker-ghost-letsencrypt-nginx folder:

pi@raspberrypi:~/docker-ghost-letsencrypt-nginx $ docker-compose up

The first time we fire up our containers, it will take a little longer than normal and you should see various logs on the screen (don't worry - once it's done, we'll restart the containers in daemon mode). Initially, the Let's Encrypt container will generate the DHPARAM files for encryption, which can take several minutes - please let it finish, otherwise you'll have to delete the created folders and start again!

If all goes well, you should see success messages on the log, including something along the lines of "server ready" and the URLs for each Ghost blog. Once you are at this point, there is one final step...

First, we need to stop our containers by pressing CTRL+C, then restart them in daemon mode:

pi@raspberrypi:~/docker-ghost-letsencrypt-nginx $ docker-compose up -d

Yep, it's that simple - a -d flag. You should see a few success messages before being returned to the command prompt.
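With the containers now detached, docker-compose can still show you what's running and stream the logs; run these from the project folder:

```shell
# Each container's STATUS column should read "Up ..."
docker-compose ps

# Follow the most recent log output (Ctrl+C stops following -
# it does not stop the containers)
docker-compose logs -f --tail=50
```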

Set up each Ghost blog

Now all that's left is to set up your Ghost blogs! You should be able to access each blog using the domains you specified (i.e. blogone.com and blogtwo.com respectively), although they will look identical at this stage! Create each Ghost blog by visiting its admin dashboard - blogone.com/ghost or blogtwo.com/ghost - and following the wizard.

Cloudflare and SSL

If you are using Cloudflare for additional protection, you'll need to set your SSL setting to Full to ensure you do not receive any SSL errors.

That's it! You should now have two Ghost blogs running via Docker on your Raspberry Pi or ASUS Tinker Board!