Why following best practices in Azure is a good idea

Over my time I have seen so many Azure solutions built in ways that are contrary to agreed best practices. Why does this happen? Typically, it is because people bring old concepts and methodologies to new environments like Azure. Yes, many of the fundamentals are the same. Things like TCP/IP and networking work just as they do on premises, but other things are very, very different.

One of the key differences when it comes to storage with Azure Virtual Machines (VMs) is the disk topology. When you spin up an Azure VM you typically get two drives, C: and D:. C: is the boot partition and holds the operating system, while D: is a temporary or caching disk whose contents can be wiped whenever the VM is rebooted, resized or moved to another host.


Above you can see an example of the disk topology from an Azure machine. You will see that D: has the label ‘Temporary Storage’.


A closer look at D: reveals the contents shown above.


If you look at the contents of the warning file, you see the above. Note the first line, which says (in capitals):


Why am I emphasising this? I can’t tell you the number of people I have seen bring previous practices to Azure and put production data (such as Active Directory databases) onto this temporary drive because ‘this is the way they have always done it’. That, unfortunately, is only going to end in tears.

Best practice in Azure is to always add dedicated data disks to your VMs and start the drive lettering from F:. Yes, there is an additional cost for adding data disks, but that cost is small compared to the flexibility you gain.
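As a sketch of what this looks like when scripted, here is how you might create and attach a fresh managed data disk with the Azure CLI. The resource group, VM and disk names below are placeholders for your own environment:

```shell
# Placeholder names: myResourceGroup, myVM and myVM-data01 are examples only.
# Create a new 256 GiB managed data disk and attach it to the VM in one step.
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myVM-data01 \
  --new \
  --size-gb 256
```

Once attached, you initialise and format the disk inside the guest and assign it a letter from F: onwards, leaving C: and D: well alone.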

Case in point. I have a nested virtualisation server running in Azure that hosts a number of machines for testing. This machine has two data disks striped together for storage capacity and performance. Striping is another change from the ‘de facto’ approach that I’ll look at in an upcoming article.

Unfortunately, when I applied some recent Windows updates the machine decided it no longer wanted to boot. I tried all the usual troubleshooting tips to get the system to boot, but to no avail.


I therefore went into the disk configuration of the failed machine and ‘detached’ the existing data disks which, as you can see, can be done from the Azure portal, although there are also PowerShell commands to accomplish this.
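The same detach step can be scripted. A minimal Azure CLI sketch, again using placeholder names for the resource group, failed VM and disks:

```shell
# Placeholder names: substitute your own resource group, VM and disk names.
# Detach both data disks from the failed VM; the managed disks themselves
# are not deleted, they simply become unattached resources.
for disk in myVM-data01 myVM-data02; do
  az vm disk detach \
    --resource-group myResourceGroup \
    --vm-name myFailedVM \
    --name "$disk"
done
```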

With the data disks ‘freed’ from the original failed machine, I proceeded to create a new virtual machine to mirror the failed host. I then went to the disks area of the new machine and selected the option to add a data disk. However, instead of creating a new, clean disk, I elected to use existing disks and selected the ones I had detached from the failed original.
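The attach step can likewise be scripted. A sketch with the Azure CLI, assuming the same placeholder names as above and a replacement VM called myNewVM:

```shell
# Placeholder names: substitute your own resource group, VM and disk names.
# Attach the previously detached managed disks to the replacement VM.
# Because --new is omitted, each named disk must already exist.
for disk in myVM-data01 myVM-data02; do
  az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name myNewVM \
    --name "$disk"
done
```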

When I then looked at the new machine with the existing disks attached, I found that the striping configuration was already in place and needed no further work. All I had to do was restore the virtual machines that were on the data disks using Hyper-V Manager. All really simple.

If I had installed everything on the C: drive, I would have lost the lot and would have needed to rebuild every virtual machine in that Hyper-V environment from scratch. That would have cost me a lot of time, whereas the total recovery time here was only a matter of minutes. That’s a BIG difference!

The moral of this tale is that a new environment like Azure operates in a different manner from previous technologies. It is generally not appropriate to bring old practices to a new environment without taking the time to understand the best practices for that environment. Doing things the same old way just because this is ‘the way it’s always done’ can lead to a lot of pain and heartache. On the contrary, when you take the time to understand any new environment and follow best practices for it, things tend to be much easier, as the above hopefully illustrates. This applies as much to Azure as it does to Office 365. New technologies need new approaches and new best practices.

In summary, please, oh please, DON’T put your production data on C: or D: with Azure virtual machines.
