Purge Cloudflare Cache with PowerShell & Azure DevOps

Originally, when I set up my blog on static website hosting in Azure Storage, I went with Azure CDN because I wanted to have an SSL endpoint. As of right now, you cannot use a custom SSL certificate with static website hosting unless you use Azure CDN.

But then I hit another snag: you cannot redirect the root domain, and redirects in general were a bit too complex for a simple project like this one. There is a UserVoice suggestion for this feature, but it remains under review. This started to bother me because I wanted to do redirects with SSL but couldn’t, since I could not install the certificate generated by Azure CDN on the Linux VM I decided to do the rewrites on. All my previous articles were at https://hammertime.tech, breaking all of my links from places like Google and Bing, or, even worse, resulting in the dreaded SSL error.

Fast forward a bit to Cloudflare. In a previous life I used their free plan to protect against attacks, but that free plan also offers 3 free redirects AND a free SSL endpoint. This meant I could now redirect https://hammertime.tech to www., where my CNAME is set up for Azure Storage. It also meant I could redirect a bunch of my other domains, like hammertimetech.com, and have all my URLs work how I wanted. Nobody else will probably notice this, but it’s one of those things I wanted in place - I am weird like that!

But I digress. Now that I am on Cloudflare, I wanted to make sure that the Cloudflare CDN cache has my latest content whenever I publish. This is probably overkill, because Cloudflare seems to pick up new content fairly quickly on its own, but at least at the end of my release pipeline I’ll know everything is published on the CDN.

This guide assumes a few things are already done.

  • You’ve set up a Cloudflare account and your domain is pointed to their name servers for hosting. You can find instructions in Cloudflare 101.
  • You’ve already set up your CNAME to point to Azure Storage (or another endpoint).
  • You have obtained your API Token for the v4 API.
  • You already have a release pipeline set up; see my previous article for more information.
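With those pieces in place, the purge itself is a single call to the v4 API. Below is a minimal sketch of the kind of script you can drop into a PowerShell task at the end of a release pipeline; CLOUDFLARE_ZONE_ID and CLOUDFLARE_API_TOKEN are placeholder variable names of my own, so substitute whatever your pipeline uses.

# Purge everything in the Cloudflare cache for a zone via the v4 API.
# CLOUDFLARE_ZONE_ID and CLOUDFLARE_API_TOKEN are placeholder pipeline variables.
$zoneId = $env:CLOUDFLARE_ZONE_ID
$body = @{ purge_everything = $true } | ConvertTo-Json

$params = @{
    Method  = "Post"
    Uri     = "https://api.cloudflare.com/client/v4/zones/$zoneId/purge_cache"
    Headers = @{
        "Authorization" = "Bearer $($env:CLOUDFLARE_API_TOKEN)"
        "Content-Type"  = "application/json"
    }
    Body    = $body
}
$response = Invoke-RestMethod @params

# The v4 API reports success in the response body; fail the task if it did not.
if (-not $response.success) {
    throw "Cloudflare cache purge failed: $($response.errors | ConvertTo-Json -Depth 5)"
}

Throwing when success comes back false means a purge problem fails the release instead of letting it finish silently.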

Using Azure DevOps to deploy Hexo

One of the things that I wanted to be able to do was set up a full CI/CD pipeline for my blog. As someone once said, “If it’s worth doing, it’s worth overdoing,” and it was in that spirit that I configured Azure DevOps to deploy Hexo.

Overview

First off, a short intro to Azure DevOps. Previously known as Visual Studio Team Services (VSTS), it is a service that allows you to manage code repositories, boards, pipelines (which we’ll cover here), and even test plans. You can also use Azure Pipelines with GitHub, and while I did not do that here because I wanted all my repos to be private, there is no reason you cannot use GitHub as your source repo - especially since two weeks ago they announced free private repos.

This article will cover the following (a rough YAML sketch of the build steps follows the list):

  1. Configuring your build in Azure Pipelines
    • Install npm
    • Install hexo-cli
    • Generate the static site
    • Publish the build artifacts
    • Enable continuous integration
  2. Configuring your release in Azure Pipelines
    • Select your source artifacts
    • Enable AzureBlob file copy to copy content to $web
  3. Running your first build!
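I built mine in the classic editor, but for reference the build half of that list maps roughly onto the YAML below. Treat it as a sketch: the task versions, Node.js version, and artifact name are my assumptions rather than an export of the actual pipeline.

# azure-pipelines.yml - a rough sketch of the build steps listed above
trigger:
  - master                        # enables continuous integration

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0              # installs Node.js (npm comes with it)
    inputs:
      versionSpec: '10.x'

  - script: npm install -g hexo-cli
    displayName: 'Install hexo-cli'

  - script: |
      npm install
      hexo generate
    displayName: 'Generate the static site'

  - task: PublishBuildArtifacts@1 # publishes ./public for the release to copy
    inputs:
      pathToPublish: 'public'
      artifactName: 'site'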

If you haven’t used Azure DevOps before, there is an article covering how to get started.

Prerequisites

I will be assuming that a few things are already in place for the purposes of this guide.

  1. You’ll need to get Hexo configured locally and committed to a repo. I covered how to get Hexo running in the Windows Subsystem for Linux in a previous article.
  2. Your site is committed to an Azure DevOps or GitHub repository.
  3. An Azure Storage account is provisioned with static website hosting enabled. This process would likely work for other services, but this article will explain how to do it with Azure; if you still need to enable the feature, there is a one-line CLI sketch just below this list.
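For completeness, static website hosting can be switched on with a single Azure CLI call. A minimal sketch, assuming a storage account named mystorageaccount (substitute your own) and the usual index and error documents:

az storage blob service-properties update \
    --account-name mystorageaccount \
    --static-website \
    --index-document index.html \
    --404-document 404.html

Enabling the feature is what creates the $web container that the release pipeline copies the generated site into.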

Using WSL and Node.js to build and manage Hexo

I had initially intended to publish this as part of my Azure DevOps entry, but it felt like it needed its own dedicated article. For my Hexo workflow I decided I wanted to use the Windows Subsystem for Linux, from here on referred to as WSL. This works best for me because I am primarily a Windows user and I also have a fair bit of experience with Linux.

Getting Started

The first thing you’ll need to do is get Hexo configured locally so that you can generate the static content for later deployment, and also run your local server to test your content. The basics on how to configure Hexo are available on their website; I’ll cover how to do it on WSL specifically.

If you have not already, get WSL installed and running; step-by-step instructions are available over on Microsoft Docs. The instructions below are specifically for the Ubuntu distro and assume a fresh install from the Microsoft Store.
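If WSL itself is not enabled yet, the short version of those docs, at the time of writing, is a single command from an elevated PowerShell prompt, followed by installing Ubuntu from the Store:

# Run from an elevated PowerShell prompt; reboot when prompted.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux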

Getting npm and Node.js installed

You’ll need git for a variety of reasons, so let’s just get it installed first along with all of the prerequisites.

Install Git

sudo apt-get install git-core

Install NVM & Node.js

This uses Node Version Manager (nvm) by creationix on GitHub. There are many ways to install Node.js; I went with the one from the Hexo documentation.

Download and run the installation script:

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

Load NVM if it isn’t already loaded:

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

Finally, install Node.js:

nvm install stable

Install npm

In this step we’ll install the node package manager. npm is the package manager for Node.js that will then allow you to install Hexo, plugins, and anything else you’d like from the node ecosystem. (If you installed Node.js through nvm above, a copy of npm already ships with it; the apt package below simply makes sure it is present.)

sudo apt install npm

Install Hexo

At this point we are ready to install hexo-cli, which will allow you to finally build out a Hexo environment.

npm install -g hexo-cli

Initialize a new blog

With WSL you can actually navigate to the Windows file system, which means you can utilize tools like Visual Studio Code to edit your articles, templates, and configuration files. For the purposes of this article, let’s say we want to get Hexo running in c:\hexo.

Note:
You can actually customize the mount point noted below. For example, you could change it to /c/hexo instead of /mnt/c/hexo by modifying wsl.conf. You can also use these steps if you find that your Windows drives are not mounted inside WSL.

[automount]
root = /
options = "metadata"

Full documentation on the settings available in wsl.conf can be found here.

Now initialize a new site in that location:

hexo init /mnt/c/hexo
cd /mnt/c/hexo
npm install

You will then have the following folder structure:

.
├── _config.yml
├── package.json
├── scaffolds
├── source
|   ├── _drafts
|   └── _posts
└── themes

That’s it! Now you are ready to configure your blog and start authoring content. Documentation for editing your site configuration can be found here.
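To give you a taste, these are a few of the top-level _config.yml values most people touch first; everything below is a placeholder rather than anything specific to my site:

# _config.yml - basic site settings (placeholder values)
title: My Hexo Blog
author: Your Name
url: https://www.example.com
theme: landscape   # the default theme that ships with hexo init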

Finishing Up

Now that you have the basics in place, you can commit the directory to GitHub or Azure DevOps and begin authoring posts, and even installing custom themes if you want. Below are a few links to get you started:

  • Writing Content - This explains how to write new posts and pages, as well as create drafts.
  • Miscellaneous Commands - This contains documentation on how to generate a static site or launch a local server to test your newly created content; the most common of these commands are summarized below.
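For quick reference, the day-to-day loop with those commands looks something like this (the post title is just an example):

hexo new "My First Post"   # create a new post under source/_posts
hexo server                # preview locally at http://localhost:4000
hexo clean                 # clear generated files and the cache
hexo generate              # build the static site into ./public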

I hope this article was helpful! You are now well on your way to publishing a Hexo blog in the wild. In my next article I will cover how to automatically deploy Hexo using an Azure Pipeline.

Why I chose Hexo for my Technical Blog

When the time came to relaunch my blog, I spent a fair amount of time researching options. I wanted something that worked well for me and didn’t require too much infrastructure. I also wanted to keep costs fairly low and not take on a large monthly recurring cost.

  • Ideally, I wanted static content that I could host in an Azure Storage account or some sort of static website hosting.
  • I wanted something that felt comfortable for me. I spend a fair amount of time authoring articles in Markdown and it has really grown on me over the past few years.
  • An application with a fair amount of plugin and theme options; I don’t really want to get involved in creating a custom template. A reasonably sized community was a plus.

The Competitors

I evaluated a variety of different options before settling on Hexo. I’m sure there are more options out there, but this is what I looked at.

  • DocFX - I’ve had some experience with DocFX in the past and honestly, it is what I really wanted to use. But the lack of theme selection and the fact that it’s primarily focused on documentation meant it was not a great fit for what I wanted to do. It is the engine behind Microsoft Docs, and it does that job really well, so I will continue to monitor the project.
  • Hugo - A close second, Hugo is very similar to Hexo. It is built on Go rather than Node.js. It generates a static site that you can publish in much the same way. The configuration seemed a little bit more involved, but in the end it was almost a coin flip. Now that my content is in Markdown and ready to roll I may still give Hugo a try to compare and contrast.
  • Ghost - Their hosted platform pricing was a bit steep for me. And while they do offer self-hosting, the configuration seemed fairly complex and the requirements were also steep. I also don’t really need the fancy admin interface.
  • Medium - This is where my blog was before. It is a great platform for getting your content out there, but it’s not really a great platform for the true technical blogging that I wanted to do. I do still intend to post opinion pieces, but not enough that Medium’s core benefit would be that useful.
  • WordPress - Obviously this option is one of the largest out there, and there are definitely a ton of options for plugins and themes. And I have experience with it in the past. But for me it felt like using a bulldozer to solve something a shovel could handle.

Private Cloud Isn’t Dead Yet

I’ve been hearing about the imminent demise of on-premises workloads and “Private Clouds” ever since the Public Cloud became a thing. Most recently I’ve seen quite a few articles about how their days are numbered.

I disagree.

In my role as a Product Architect for Microsoft Hyper-V and Cloud Platform, my team and I deal with traditional virtualization, but we also dabble quite a bit in Public Clouds, mostly Microsoft Azure. We try to make sure we utilize every service that we can to provide a streamlined product offering for our customers. So I live in both worlds, and I often advise customers and architect environments that take advantage of each. I’ll be speaking primarily from the Microsoft side of things.

First off, private clouds still provide significant value and can lower your overall IT spend. Your benefits include being able to fully manage your entire stack from the networking on up. You’re in control of compliance and how everything operates. There are still quite a few reasons to use traditional virtualization, for things like legacy workloads and, most of all, control. You also have full access to all the capacity you purchased up front, and whenever you need that capacity it’s available to you right away. It certainly is not hyperscale, but it is guaranteed capacity within your data center.

Public clouds are advancing at a rapid pace; in some cases new features are added hourly. They are starting to become MUCH friendlier to compliance requirements like PCI, FedRAMP, and many others. And as a result, it is becoming more appealing to put your workload in Azure or AWS.
