Static Sites with Hugo, Azure Blob Storage and Cloudflare Workers

If you have followed me over the years you will likely know I am a huge fan of static sites and that Hugo (written in Go by Steve Francia) is my favorite static site generator. I also like working on content in Markdown in Visual Studio Code (@code). In fact, this site is powered by Hugo, Azure Blob Storage and Cloudflare Workers exactly as covered in this post.

Why static sites? In a world where publishing platforms run the gamut from Twitter and Medium to Ghost or WordPress (either hosted or on your own servers) we have a smorgasbord to choose from. I am a firm believer that if we are writing quality content it should be hosted on our own domain, on a platform we can control or easily swap out. Your words are wasted by Scott Hanselman, from back in 2012, outlines some of the reasons this matters. Static sites give power users control, extensibility, portability, scalability and peace of mind at the lowest cost.

Hugo

Hugo is incredibly fast, cross-platform and ships as a single binary with no dependencies. However, you may be familiar with or prefer another generator such as Jekyll or Octopress (Ruby), Hexo (JavaScript) or Pelican (Python). Once we have a static site we can host it anywhere. My colleague Matt Fisher (@bacongobbler) recently posted “Blogging for Pennies on Azure” which is similar to this post but leverages Azure Blob Storage with Azure CDN instead of Cloudflare Workers. Third parties like Netlify are also very popular and used by projects like Kubernetes for their docs. Later in this post I’ll cover why I believe Cloudflare Workers provides us the best of both worlds.

For many people the hardest part about setting up Hugo will be choosing a theme from themes.gohugo.io. I am going to save some time by choosing two of my favorites, Hemingway2 and Black & Light. Theme templates are easy to customize and use Go’s powerful html/template and text/template libraries.

The commands we use to create our site are included below. Here are some of the key files and folders we’ll have:

  • /config.toml is where we define our url, title, theme and other important site settings.
  • /themes/ contains our themes.
  • /content/ is for content in Markdown format.
  • /static/ is for static files such as images, robots.txt, etc.
  • /public/ is where the generated static site will be published.
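For reference, a minimal config.toml for a site like this might look as follows (the values are illustrative; baseURL and title should of course match your own site):

```toml
# Minimal, illustrative config.toml; adjust the values for your own site
baseURL = "https://aaronmsft.com/"
languageCode = "en-us"
title = "Aaron|MSFT"
theme = "hemingway2"
```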

I like to do my work in a Docker container (Dockerfile) which contains Alpine Linux, hugo, bash, curl, vim and git. I mount my current directory in /pwd/ and expose port 1313 for hugo. However, this is not required. We can safely ignore the docker commands and install Hugo locally (i.e. brew install hugo on macOS) via its Quick Start and/or Install Hugo. Additionally, we can run git clone https://github.com/aaronmsft/hello-hugo.git if we prefer to skip the hugo new site, git clone, and hugo new post commands below.

Note: all snippets and code below are available under github.com/aaronmsft/aaronmsft-com/.

# 1. Hugo + Docker
# ----------------

docker build -t hugo .

docker run --rm -v `pwd`:/pwd/ -p 1313:1313/tcp -it hugo

cd /pwd/

hugo new site aaronmsft-com

cd /pwd/aaronmsft-com/themes/

git clone https://gitlab.com/beli3ver/hemingway2.git

git clone https://github.com/davidhampgonsalves/hugo-black-and-light-theme

cd /pwd/aaronmsft-com/

echo 'theme = "hemingway2"' >> config.toml

hugo new posts/hello-hugo.md

vi content/posts/hello-hugo.md

cd /pwd/aaronmsft-com/

# test our site, view edits live
hugo server -D --bind "0.0.0.0"

# build site
hugo

hugo server ... makes our site available at http://localhost:1313, where we can edit our posts under content/ and see changes in real time. Once complete, we build our site with hugo and the public/ folder will contain our static site ready to go.

We could run the above commands almost anywhere: on our local machine, in a Docker container (as I have above), in Azure Cloud Shell, etc. I like to commit my entire site to a git repository. Not only does this become the single source of truth for our content, but it also enables us to automatically build and deploy our site (for free!) using a CI/CD platform, which I will cover in a future post.

Azure Blob Storage

Next we will use Azure Blob Storage to host our static site on Azure. In addition to the Azure Portal, we have the cross-platform Azure CLI (az), available via our free Cloud Shell in the Azure Portal or shell.azure.com. There are also SDKs for every major language, including Go, and higher-level tools like blobporter. We also have the cross-platform Storage Explorer, available on Windows, macOS and Linux. Finally, it is very hard to beat the cost of Azure Blob Storage at only $0.0184 per GB of storage, $0.004 per 10,000 read operations, and bandwidth that starts at $0.087 per GB (Zone 1) after the first 5 GB.

Let’s get started. With bash and the Azure CLI snippets below we’ll create a Blob storage account in Azure, authenticate to it, and create a container for our static content (we also have a quickstart that covers this). We then upload our static site using the az storage blob upload-batch command. The az storage blob delete-batch command can also be used if we need to delete (rather than overwrite) files, and we can also store our AZURE_STORAGE_CONNECTION_STRING for later use if necessary.

# 2. Azure Blob Storage
# ---------------------

RESOURCE_GROUP='180300-static'
STORAGE_ACCOUNT='180300static' # this needs to be globally unique
STORAGE_CONTAINER='aaronmsft-com'

az group create -n $RESOURCE_GROUP -l eastus

az storage account create -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT \
    --location eastus --sku Standard_LRS

export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT | jq -r .connectionString)

az storage container create -n $STORAGE_CONTAINER --public-access container

cd aaronmsft-com/
# az storage blob delete-batch --source $STORAGE_CONTAINER
az storage blob upload-batch --source public/ --destination $STORAGE_CONTAINER

If we were only using our Azure Blob Storage account for static assets instead of an entire site or blog (which requires default document, custom error document, redirects for SEO, etc), we could just configure a custom domain and we’d be done. However, we can super-charge the functionality of Azure Blob Storage, as well as integrate other services, using the brand new Cloudflare Workers.
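As an aside, that custom domain mapping is a one-liner with the Azure CLI. A sketch using the names from this post; it assumes a CNAME record already points your domain at the blob endpoint:

```shell
RESOURCE_GROUP='180300-static'
STORAGE_ACCOUNT='180300static'

# Requires an existing CNAME, e.g. www.aaronmsft.com -> 180300static.blob.core.windows.net
az storage account update -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT \
    --custom-domain www.aaronmsft.com
```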

Cloudflare Workers

Many people know and love Cloudflare for its DNS, content delivery and security functionality that includes everything from DDoS and bot protection to painless SSL. Troy Hunt has been a long time user of Cloudflare for his personal blog as well as Have I been pwned which is also powered by Azure App Service and Azure Functions. It has made his latest project, Pwned Passwords, incredibly fast and cost-effective to run.

Cloudflare Workers, launched this month, takes this to the next level by enabling us to execute code on Cloudflare’s edge using JavaScript and the Service Workers API. In their own words:

Cloudflare Workers derive their name from Web Workers, and more specifically Service Workers, the W3C standard API for scripts that run in the background in a web browser and intercept HTTP requests. Cloudflare Workers are written against the same standard API, but run on Cloudflare’s servers, not in a browser.

This is an incredibly powerful concept that extends far beyond running our static site. I’m far from the only one excited about Workers.

How much do they cost? $5/month covers our first 50 million requests, across all domains, and we are billed $0.50 per million requests thereafter. Each request can consume 5-50 milliseconds of CPU time depending on Cloudflare plan (5ms on the Free plan), up to 15 seconds of real time (which does not limit long-running requests), and 128MB of memory at any given time. In short, it’s a bargain. Take a quick look at the docs and try them out on the Cloudflare Workers Playground.
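To make that pricing concrete, here is a small sketch of the billing model as described above (my own illustration, not an official calculator):

```javascript
// Sketch of the Workers pricing described above:
// $5/month includes the first 50 million requests; $0.50 per additional million.
function monthlyCostUSD(requests) {
    const includedRequests = 50e6
    const baseFee = 5.0
    if (requests <= includedRequests) return baseFee
    const extraMillions = Math.ceil((requests - includedRequests) / 1e6)
    return baseFee + 0.5 * extraMillions
}

console.log(monthlyCostUSD(10e6))   // 5  (within the included 50 million)
console.log(monthlyCostUSD(100e6))  // 30 (5 + 50 extra million * $0.50)
```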

Once you are ready to go, set up your domain with Cloudflare and point records for your domain to your Blob Storage account (e.g. aaronmsft.com and www.aaronmsft.com -> 180300static.blob.core.windows.net in my case). Then open Workers under your domain and click “Launch Editor”.

By adding just a few lines to Cloudflare’s “hello world” example we can route traffic back to our Azure Blob Storage account, include a default document (index.html) for any URL ending with a trailing slash, and support a custom 404 page (see basic.js):

addEventListener('fetch', event => {
    event.respondWith(fetchAndLog(event.request))
})

/**
 * Fetch and log a given request object
 * @param {Request} request
 */
async function fetchAndLog(request) {
    let src = new URL(request.url)
    let dst = new URL('https://180300static.blob.core.windows.net/aaronmsft-com')
    dst.pathname += src.pathname
    if (src.pathname.endsWith('/')) {
        dst.pathname += 'index.html'
    }
    console.log('dst:', dst.toString())

    let response = await fetch(dst, request)
    if (response.status === 404) {
        return new Response('<html><body><h1>We\'re sorry, this page was not found.</h1><p><a href="https://xkcd.com/1969/"><img src="https://imgs.xkcd.com/comics/not_available_2x.png" /></a></p></body></html>',
            { status: 404, statusText: 'Not found', headers: { 'Content-Type': 'text/html' } })
    }
    return response
}

We can extend the above to support the modification of streaming responses (see streaming.js) which is also exceptionally powerful. I initially thought this was required for all streaming responses, but Kenton Varda, the architect behind Cloudflare Workers, very helpfully pointed out:

You only need to use explicit streaming code if you want to modify the body content in a streaming way. If you’re just passing it through verbatim, it will stream automatically.
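For illustration only (this is not the streaming.js from the repository), a body-modifying pass-through might look like the sketch below, built on the TransformStream and TextEncoder globals that Workers provide. The names appendFooter and pump are my own:

```javascript
// Append a footer to an HTML response without buffering the whole body.
async function appendFooter(response, footer) {
    const { readable, writable } = new TransformStream()
    pump(response.body, writable, footer) // runs in the background
    return new Response(readable, response) // copies status and headers
}

async function pump(body, writable, footer) {
    const reader = body.getReader()
    const writer = writable.getWriter()
    for (;;) {
        const { value, done } = await reader.read()
        if (done) break
        await writer.write(value) // pass existing chunks through verbatim
    }
    await writer.write(new TextEncoder().encode(footer))
    await writer.close()
}
```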

After testing our script via “Update Preview” we can click “Save” on the “Script” tab, followed by “Routes” > “Add Route” to add a route with a pattern similar to *aaronmsft.com/*. This will set our Worker live immediately and traffic will begin serving from Azure Blob Storage. I even create my workers and manage my routes via cURL and the Workers’ Configuration API.
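For example, uploading a script from the command line looks roughly like this. The endpoint path and auth headers are as I understand the Workers Configuration API docs at the time of writing, so check the current documentation, and substitute your own zone ID and credentials:

```shell
ZONE_ID='your_zone_id'

# Upload (or replace) the worker script for a zone
curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/workers/script" \
    -H "X-Auth-Email: $CF_EMAIL" \
    -H "X-Auth-Key: $CF_API_KEY" \
    -H "Content-Type: application/javascript" \
    --data-binary @basic.js
```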

Now that we are up and running, we can see that Cloudflare Workers provides us:

  • Complete control of every request hitting our domain. No NGINX required! I may have a URL shortener or APIs hosted by Azure Functions or elsewhere, a dynamic site hosted on Azure App Service, or Containers on Azure Container Instances, Web Apps for Containers or Kubernetes-powered Azure Container Service (AKS).
  • Free and painless SSL for my domain and bandwidth served from Cloudflare free of charge. For every cached request I save on both operations and bandwidth (the “$0.004 per 10,000 operations … $0.087 per GB”).
  • The ability to serve sites per-container inside a storage account and scale out to many hundreds or thousands of sites rather than needing one storage account per domain.
  • The Routes functionality, which enables Cloudflare Workers to handle only the requests we need them for. Perhaps I have a /static/ container that only holds images that should be served and cached. I can exclude that path and save the $0.50 per million requests for those assets.

I look forward to sharing some more advanced scenarios in future posts (including the addition of serverless compute to our domain). Questions or feedback welcome! Feel free to reach out any time on twitter via @as_w, and my DMs are always open.

© Aaron|MSFT ~ "My Software Fixes Things" ~ @as_w ~ aka.ms/aaronw