Using WSL and Node.js to build and manage Hexo

I had initially intended to publish this as part of my Azure DevOps entry, but it felt like it needed its own dedicated article. For my Hexo workflow I decided to use the Windows Subsystem for Linux, from here on referred to as WSL. This works best for me because I am primarily a Windows user and also have a fair bit of experience with Linux.

Getting Started

The first thing you’ll need to do is get Hexo configured locally so that you can generate the static content for later deployment and run a local server to test your content. The basics of configuring Hexo are available on their website; I’ll cover how to do it on WSL specifically.

If you have not already, get WSL installed and running. Step-by-step instructions are available over on Microsoft Docs.

The instructions below assume a fresh install of the Ubuntu distro from the Microsoft Store.

Getting npm and Node.js installed

You’ll need Git for a variety of reasons, so let’s just get it installed first along with its prerequisites.

Install Git

sudo apt-get install git-core
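
If apt cannot find the package on a brand new install, refresh the package index and retry; either way you can confirm Git is ready to go:

sudo apt-get update   # refresh the package index if the install fails
git --version         # confirm Git installed successfully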

Install NVM & Node.js

This uses Node Version Manager (nvm) by creationix on GitHub. There are many ways to install Node.js; I went with the approach recommended in the Hexo documentation.

Download and run the installation script:

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

Load nvm if it isn’t already loaded (the install script normally appends these lines to your shell profile, so new terminals pick them up automatically):

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

Finally, install Node.js:

nvm install stable
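
You can verify that nvm installed Node.js successfully and that it is the active version:

nvm ls            # list the versions nvm has installed
node --version    # print the active Node.js version
npm --version     # npm ships bundled with Node.js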

Install npm

In this step we’ll install the Node package manager. npm is a package manager for Node.js that will allow you to install Hexo, plugins, and anything else you’d like to add.

sudo apt install npm
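
One caveat worth flagging: if you installed Node.js through nvm, it already bundles its own npm, and the apt-packaged copy can end up shadowing it. A quick way to check which npm your shell is actually resolving:

which npm         # should point under ~/.nvm when using the nvm-managed Node.js
npm --version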

Install Hexo

At this point we are ready to install the Hexo CLI, which will allow you to finally scaffold a Hexo environment.

npm install -g hexo-cli
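
Before moving on you can confirm the CLI landed on your path:

hexo version      # prints version information for the CLI and its environment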

Initialize a new blog

With WSL you can actually navigate to the Windows file system, which means you can use tools like Visual Studio Code to edit your articles, templates, and configuration files. For the purposes of this article, let’s say we want to get Hexo running in C:\hexo.

Note:
You can customize the mount point used below. For example, you could change it to /c/hexo instead of /mnt/c/hexo by modifying wsl.conf. These settings also help if you find that your Windows drives are not mounted inside WSL at all.

[automount]
root = /
options = "metadata"

Full documentation on settings available in wsl.conf is available here.
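
Keep in mind that wsl.conf is only read when the distro starts, so changes won’t apply until the WSL instance restarts. Depending on your Windows build, you can force that from a PowerShell or Command Prompt window:

wsl --terminate Ubuntu    # or: wsl --shutdown on newer builds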

Now initialize a new site in that location:

hexo init /mnt/c/hexo
cd /mnt/c/hexo
npm install

You will then have the following folder structure:

.
├── _config.yml
├── package.json
├── scaffolds
├── source
|   ├── _drafts
|   └── _posts
└── themes

That’s it! Now you are ready to configure your blog and start authoring content. Documentation for editing your site configuration can be found here.
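
To give you a feel for that configuration, here is a trimmed excerpt of the _config.yml that hexo init generates; the values shown are placeholders you’ll want to change to match your own site:

title: Hammer Time Tech Blog
author: Your Name
language: en
timezone: America/Chicago
url: https://example.com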

Finishing Up

Now that you have the basics in place you can commit the directory to GitHub or Azure DevOps and begin authoring posts, and even install custom themes if you want. Below are a few links to get you started:

  • Writing Content - This explains how to write new posts and pages as well as create drafts.
  • Miscellaneous Commands - This contains documentation on how to generate a static site or launch a local server to test your newly created content; see the quick reference below.
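
For quick reference, the day-to-day commands from those docs look like this:

hexo new draft "My First Post"   # create a draft in source/_drafts
hexo new post "My First Post"    # or create a post in source/_posts
hexo generate                    # build the static site into the public folder
hexo server                      # serve the site locally at http://localhost:4000
hexo clean                       # clear the generated files and the cache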

I hope this article was helpful! You are now well on your way to publishing a Hexo blog in the wild. In my next article I will cover how to automatically deploy Hexo using an Azure Pipeline.

Why I chose Hexo for my Technical Blog

When the time came to relaunch my blog I spent a fair amount of time researching options. I wanted something that worked well for me and didn’t require too much infrastructure, and I wanted to keep costs low without taking on a large monthly recurring bill.

  • Ideally, I wanted static content that I could host in an Azure Storage account or some sort of static website hosting.
  • I wanted something that felt comfortable for me. I spend a fair amount of time authoring articles in Markdown and it has really grown on me over the past few years.
  • An application with a fair number of plugin and theme options; I didn’t really want to get involved in creating a custom template. A reasonably sized community was a plus.

The Competitors

I evaluated a variety of different options before settling on Hexo. I’m sure there are more options out there, but this is what I looked at.

  • DocFX - I’ve had some experience with DocFX in the past and honestly, it is what I really wanted to use. But the lack of theme selection and the fact it’s primarily focused on documentation meant it was not a great fit for what I wanted to do. It is the engine behind Microsoft Docs and it does that really well so I will continue to monitor the project.
  • Hugo - A close second, Hugo is very similar to Hexo. It is built on Go rather than Node.js. It generates a static site that you can publish in much the same way. The configuration seemed a little bit more involved, but in the end it was almost a coin flip. Now that my content is in Markdown and ready to roll I may still give Hugo a try to compare and contrast.
  • Ghost - Their hosted platform pricing was a bit steep for me. And while they do offer self-hosting, the configuration seemed fairly complex and the requirements were also steep. I also don’t really need the fancy admin interface.
  • Medium - This is where my blog was before. It is a great platform for getting your content out there, but it’s not really a great platform for the true technical blogging that I wanted to do. I do still intend to post opinion pieces… but not enough that Medium’s core benefit would be that useful.
  • WordPress - Obviously this option is one of the largest out there, there are definitely a ton of options for plugins and themes, and I have experience with it from the past. But for me it felt like using a bulldozer to do a job a shovel could handle.

Tech Blog Return!

I am happy to announce the return of the Hammer Time Tech Blog! I know you couldn’t wait for this to happen…

Recently I’ve been wanting to get back into blogging from a more technical perspective, with some how-tos and lessons learned over my career. Previously the blog was hosted on Medium, but that didn’t feel like the right… medium (harhar?) for what I’d like to be talking about these days.

There will still be some opinion posts and articles about things that I have on my mind. But in addition there will be more technical articles where I explain how I solved a problem or post script snippets that helped me out. I also hope to add more to my GitHub, which is sorely lacking in original content.

The first article coming soon will be how to use Azure DevOps to create a full CI/CD pipeline for publishing a Hexo site to Azure Storage using the new static website hosting feature. I’ll also explain my reasoning for moving to Hexo when there’s no shortage of blogging applications out there on the Internet.

I am really looking forward to sharing more knowledge in 2019!

Private Cloud Isn’t Dead Yet

I’ve been hearing about the imminent demise of on-premises workloads and “Private Clouds” ever since the Public Cloud became a thing. Most recently I’ve seen quite a few articles about how their days are numbered.

I disagree.

In my role as a Product Architect for Microsoft Hyper-V and Cloud Platform, my team and I deal with traditional virtualization, but we also dabble quite a bit in Public Clouds, mostly Microsoft Azure. We try to make sure we utilize every service that we can to provide a streamlined product offering for our customers. So I live in both of these worlds, and I often advise customers and architect environments that take advantage of each. I’ll be speaking primarily from the Microsoft side of things.

First off, private clouds still provide significant value and can lower your overall IT spend. Your benefits include being able to fully manage your entire stack, from the networking on up, and you’re in control of compliance and how everything operates. There are still quite a few reasons to use traditional virtualization, such as legacy workloads and, most of all, control. You also have full access to all the capacity you purchased up front, and when you aren’t using that capacity it’s available to you right away. It certainly is not hyperscale, but it is guaranteed capacity within your data center.

Public clouds are advancing at a rapid pace; in some cases new features are added hourly. They are becoming MUCH friendlier to compliance requirements like PCI, FedRAMP, and many others, and as a result it is becoming more appealing to put your workload in Azure or AWS.

Hyper-V Best Practices

When deploying Windows Server as a hypervisor host for virtual machine workloads, there are a variety of best practices that should be put in place. Running Hyper-V on Windows comes with special considerations, for example Virtual Machine Queues and Hyper-V-specific hotfixes. This blog entry serves as an overview of important checks you should ensure are in place.

General Host Considerations

Starting with your base operating system, you should strongly consider Windows Server Core. Server Core has a much lower attack surface, lower resource utilization, and reduced patching requirements. This is important when running a VM workload as it frees up additional resources for your virtual machines, and the reduced patching allows for greater availability of tenant workloads.

Additionally, consider running minimal roles and features on your host operating system. IIS, for example, should never be installed on a host, as it uses valuable resources and increases your attack surface. If your host will be clustered, the Failover Clustering feature and Multipath I/O will obviously need to be installed to access shared storage. Any anti-virus installed on the host should be configured to ignore Hyper-V-specific files such as VHD, AVHD, VHDX, and VSV files.

Drivers and Firmware

Ensure that all of the latest drivers and firmware for your hardware are installed on the host operating system. This is critical because the inbox drivers included with Windows are not intended for long-term use. Many of the features that Hyper-V takes advantage of at the networking layer require the latest and greatest network card drivers. We have seen outdated drivers and firmware cause reduced performance and instability on Hyper-V hosts. Hyper-V adds a layer of complexity which is oftentimes more sensitive to driver and firmware versions.

If you’re clustering Hyper-V, put a process in place to ensure that every single one of your nodes is running the same driver revisions and software updates.
