Tech Blog Return!

I am happy to announce the return of the Hammer Time Tech Blog! I know you couldn’t wait for this to happen…

Recently I’ve been wanting to get back into blogging from a more technical perspective, with some how-tos and lessons learned over my career. Previously the blog was hosted on Medium, but that didn’t feel like the right… medium (har har) for what I’d like to be talking about these days.

There will still be some opinion posts and articles about things that I have on my mind. But in addition to that, there will be more technical articles where I explain how I solved a problem or post some script snippets that helped me out. I also hope to add more to my GitHub, which is sorely lacking in original content.

The first article, coming soon, will cover how to use Azure DevOps to create a full CI/CD pipeline for publishing a Hexo site to Azure Storage using the new static website hosting feature. I’ll also explain my reasoning for moving to Hexo when there’s no shortage of blogging applications out there on the Internet.
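As a small preview, here’s roughly what the final publish step looks like in PowerShell with the Az module. This is a sketch, not the pipeline itself: the resource group and storage account names are placeholders, and it assumes the site has already been built with hexo generate and that you’re signed in via Connect-AzAccount.

# Enable static website hosting and upload the generated Hexo site.
# "my-blog-rg" and "myblogstorage" are placeholder names for this sketch.
$account = Get-AzStorageAccount -ResourceGroupName "my-blog-rg" -Name "myblogstorage"
$ctx = $account.Context

# Static website content is served out of the special $web container.
Enable-AzStorageStaticWebsite -Context $ctx -IndexDocument "index.html" -ErrorDocument404Path "404.html"

# Push everything Hexo generated into the public/ folder up to $web.
# (A real pipeline would likely use AzCopy, which also infers content types.)
$root = (Resolve-Path ".\public").Path
Get-ChildItem -Path $root -Recurse -File | ForEach-Object {
    $blobName = $_.FullName.Substring($root.Length + 1).Replace("\", "/")
    Set-AzStorageBlobContent -Context $ctx -Container '$web' -File $_.FullName -Blob $blobName -Force
}

The full article will wire this up as an Azure DevOps pipeline rather than a hand-run script.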

I am really looking forward to sharing more knowledge in 2019!

Private Cloud Isn’t Dead Yet

I’ve been hearing about the imminent demise of on-premises workloads and “Private Clouds” ever since the Public Cloud became a thing. Most recently, I’ve seen quite a few articles claiming their days are numbered.

I disagree.

In my role as a Product Architect for Microsoft Hyper-V and Cloud Platform, my team and I deal with traditional virtualization, but we also dabble quite a bit in public clouds, mostly Microsoft Azure. We try to make sure we utilize every service we can to provide a streamlined product offering for our customers. So I live in both worlds, and I often advise customers and architect environments that take advantage of each. I’ll be speaking primarily from the Microsoft side of things.

First off, private clouds still provide significant value and can lower your overall IT spend. You get to fully manage your entire stack, from the networking on up, and you’re in control of compliance and how everything operates. There are still quite a few reasons to use traditional virtualization, including legacy workloads and, most of all, control. You also have full access to all the capacity you purchased up front; when you aren’t using that capacity, it’s available to you right away. It certainly is not hyperscale, but it is guaranteed capacity within your data center.

Public clouds are advancing at a rapid pace; in some cases new features are added hourly. They are becoming MUCH friendlier to compliance requirements like PCI, FedRAMP, and many others. As a result, it is becoming more appealing to put your workloads in Azure or AWS.

Hyper-V Best Practices

When deploying Windows Server as a hypervisor host for virtual machine workloads, there are a variety of best practices that should be put in place. Running Hyper-V on Windows has special considerations of its own, such as Virtual Machine Queues (VMQ) and Hyper-V-specific hotfixes. This blog entry serves as an overview of important checks you should ensure are in place.
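To give a concrete taste of the kind of checks I mean, both of those examples can be inspected from PowerShell. These are read-only queries, so they’re safe to run on a production host:

# Show VMQ capability and state for the host's physical network adapters.
Get-NetAdapterVmq | Format-Table -AutoSize

# List the most recently installed updates, useful when verifying
# that a particular Hyper-V hotfix has actually landed on the host.
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 10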

General Host Considerations

Starting with your base operating system, you should strongly consider Windows Server Core. Server Core has a much smaller attack surface, as well as lower resource utilization and reduced patching requirements. This matters when running a VM workload because it frees up additional resources for your virtual machines, and the reduced patching allows for greater availability of tenant workloads.
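For reference, here’s a minimal sketch of enabling the Hyper-V role on a Server Core installation (drop the -Restart switch if you’d rather reboot on your own schedule):

# Install the Hyper-V role and its PowerShell management tools, then reboot.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart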

Additionally, consider running minimal roles and features on your host operating system. IIS, for example, should never be installed on a host, as it uses valuable resources and increases your attack surface. If your host will be clustered, the Failover Clustering feature and Multipath I/O will obviously need to be installed to access shared storage. Any anti-virus installed on the host should be configured to ignore Hyper-V-specific files such as VHD, VHDX, AVHD, and VSV files; a sketch of both steps follows below.
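Both items can be handled from PowerShell. The exclusion example assumes Windows Defender is your anti-virus; other products have their own mechanisms for configuring exclusions:

# Clustered hosts: add the Failover Clustering feature and Multipath I/O.
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools

# Windows Defender example: append Hyper-V file types to the exclusion list.
# (Add-MpPreference appends; Set-MpPreference would replace the whole list.)
Add-MpPreference -ExclusionExtension @(".vhd", ".vhdx", ".avhd", ".avhdx", ".vsv")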

Drivers and Firmware

Ensure that the latest drivers and firmware for your hardware are installed on the host operating system. This is critical because the inbox drivers included with Windows are not intended for long-term production use, and many of the networking features that Hyper-V takes advantage of require up-to-date network card drivers. We have seen outdated drivers and firmware cause reduced performance and instability on Hyper-V hosts. Hyper-V adds a layer of complexity that is often more sensitive to driver and firmware versions.

If you’re clustering Hyper-V, put a process in place to be sure that every single one of your nodes is running the same driver revisions and software updates; a quick way to audit this is sketched below.
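This read-only snippet compares network driver versions across all cluster nodes. It assumes PowerShell remoting is enabled and that you’re running it from a machine with the Failover Clustering tools installed:

# Gather NIC driver versions from every node in the cluster.
$nodes = (Get-ClusterNode).Name
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-NetAdapter -Physical |
        Select-Object Name, InterfaceDescription, DriverVersion, DriverDate
} | Sort-Object PSComputerName |
    Format-Table PSComputerName, Name, InterfaceDescription, DriverVersion, DriverDate -AutoSize

Any node whose driver version differs from its peers is a candidate for remediation before it causes a hard-to-diagnose cluster issue.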

Taking Evidence Seriously: Russian Hacking

One of the reasons I decided to start writing again was to put my experience in the field to good use, mainly to explain the sometimes confusing world of computers, technology, and networking through real-world analogies. I hadn’t intended this to be political in any way, but with what’s going on out there today, that’s what I’ve primarily been writing about.

So let’s talk about the recent hacking news, based on real-world evidence and not hyperbole.

Unless you were living under a rock for the last half of 2016, you are aware that there was a massive leak of Democratic National Committee e-mails from key players in the Democratic Party. A large number of public- and private-sector officials have determined that these hacks came from the Russian government. There are certain people who think this can’t be true, and while much of what occurred is classified, there is still plenty of public evidence readily available.

So how do they know it was the Russians? Much like the investigation of a crime such as murder, the perpetrators leave behind key evidence: DNA, fingerprints, fibers from their clothes, pieces of skin, and so on. In a hacking attempt, successful or not, they also leave behind a signature. The tools used to infiltrate any system, whether government or private, were created somewhere. I’m speaking of things like malware and pieces of code created for the very specific purpose of gathering and extracting information from a target. The perpetrators will continue to use these toolsets across multiple attacks over time, iterating on and improving them, but the basis is still there: a signature, if you will.

Why We Need to Take Online Communities Seriously

Recently I got into an argument in some Facebook comments (yeah, I know) about online communities. I’ve been thinking about it a lot lately, with everything going on in this country.

“It’s the Internet. You get a group of people together under a shroud of anonymity and guess what happens? Stupidity that shouldn’t be taken seriously.”

I’ve been involved in a variety of online communities, and I’ve even been at the helm of a few. I know that they can be fickle beasts with all sorts of crazy personalities, trolls, and debate. It can be very easy to shrug off comments or threats made by someone on the Internet with exactly that attitude, but having been the target of online harassment over the years, I can tell you it isn’t anything that should be shrugged off. I’ve seen toxic people celebrated and rewarded time and time again when they do nothing but blurt out vitriol and racism. I’ve seen those same people bring others into their cause and have those people do the harassment for them; it’s the modern-day version of a henchman.

I say that it should be taken very seriously for a number of reasons.

Just this past weekend an armed man decided to waltz into a pizza shop to investigate a conspiracy theory that is a complete and utter lie.

Every single day, teens are bullied online, and far too many have even taken their own lives because of cyberbullying.
