Need to Know podcast–Episode 230

We welcome back our co-host, Brenton Johnson, after his extended break. We catch up on all the news and events from the Microsoft Cloud. This is followed by an interview with Dave Sobel from MSPRadio.com talking about the transformation that many MSPs need to undertake to continue to succeed in light of the constantly changing cloud landscape.

This episode was recorded using Microsoft Teams and produced with Camtasia 2019.

Take a listen and let us know what you think – feedback@needtoknow.cloud

You can listen directly to this episode at:

https://ciaops.podbean.com/e/episode-230-dave-sobel/

Subscribe via iTunes at:

https://itunes.apple.com/au/podcast/ciaops-need-to-know-podcasts/id406891445?mt=2

The podcast is also available on Stitcher at:

http://www.stitcher.com/podcast/ciaops/need-to-know-podcast?refid=stpr

Don’t forget to give the show a rating, and send us any feedback or suggestions you may have for the show.

Resources

@mspradionews

@contactbrenton

@directorcia

Bye, bye basic auth

Updates to Threat Protection Reports

Microsoft’s Surface success

Microsoft’s failure to renew SSL certificates

User enrollment in Intune

Training modules for IT Pros

Introducing Conditional Access for Office 365

SharePoint next steps

Need to Know podcast–Episode 229

FAQ podcasts are shorter and more focused on a particular topic. In this episode I’ll talk about how you should be implementing Azure with every Microsoft 365 environment you create.

Take a listen and let us know what you think – feedback@needtoknow.cloud

You can listen directly to this episode at:

https://ciaops.podbean.com/e/episode-229-deploy-microsoft-365-and-azure-together/

Subscribe via iTunes at:

https://itunes.apple.com/au/podcast/ciaops-need-to-know-podcasts/id406891445?mt=2

The podcast is also available on Stitcher at:

http://www.stitcher.com/podcast/ciaops/need-to-know-podcast?refid=stpr

Don’t forget to give the show a rating, and send us any feedback or suggestions you may have for the show.

Resources

FAQ 5 – Deploy Microsoft 365 and Azure together

CIAOPS Patron Community

Setting Archive Tier on Azure storage

In my article

Moving to the Cloud – Part 2

I spoke about using Azure Archive storage as a good location for long term data retention. To configure this, you basically set up a storage account as usual and initially configure it as ‘Cool’ storage (since you can’t create Archive storage directly). You then upload files there (typically using Azure Storage Explorer). The final piece of the puzzle is to change the access tier from ‘Cool’ to ‘Archive’ by right-clicking on the item.

image

You can do the same using Azure Storage Explorer.

The challenge comes when you want to change more than a single file at a time.

image

You’ll see that you no longer get the option to set a tier once you have two or more items selected. The same happens in Azure Storage Explorer.

Thanks to Marc Kean, who pointed me in the right direction, the solution lies in changing this programmatically. Marc has a script on his site and I found another on GitHub as well, but I decided to write my own anyway, which you’ll find here:

https://github.com/directorcia/Azure/blob/master/az-blob-tierset.ps1

With mine, you’ll need to set the following variables at the top of the script first:

$storageaccountname = "<your storage account name here>"

$storageresourcegroup = "<your storage account resource group name here>"

$storagetier = "<your desired storage tier level here>" # Hot, Cool or Archive

You’ll also need to connect to your Azure account beforehand, which you can do with this script of mine:

https://github.com/directorcia/Azure/blob/master/az-connect.ps1

My script will first get the storage account via:

$storageaccount = Get-AzStorageAccount -Name $storageaccountname -ResourceGroupName $storageresourcegroup

then get the access key for that account via:

$key = (Get-AzStorageAccountKey -ResourceGroupName $storageaccount.ResourceGroupName -Name $storageaccount.StorageAccountName).Value[0]

then get the context via:

$context = New-AzStorageContext -StorageAccountName $storageaccount.StorageAccountName -StorageAccountKey $key

and finally get the containers via:

$storagecontainers = Get-AzStorageContainer -Context $context

It will then build an array of all the blobs in each container and cycle through these items, changing their tier level via:

$blob.icloudblob.SetStandardBlobTier($StorageTier)

This effectively changes all the items in the container to the tier level you select, which is why I like to set up containers for specific tiers rather than intermingling data.
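Putting all of those pieces together, the overall flow of the script looks something like this (a minimal sketch based on the snippets above, assuming the Az PowerShell modules are installed and you have already connected to Azure):

$storageaccountname = "<your storage account name here>"
$storageresourcegroup = "<your storage account resource group name here>"
$storagetier = "Archive" # Hot, Cool or Archive

# Get the storage account, its access key and a context to work with
$storageaccount = Get-AzStorageAccount -Name $storageaccountname -ResourceGroupName $storageresourcegroup
$key = (Get-AzStorageAccountKey -ResourceGroupName $storageaccount.ResourceGroupName -Name $storageaccount.StorageAccountName).Value[0]
$context = New-AzStorageContext -StorageAccountName $storageaccount.StorageAccountName -StorageAccountKey $key

# Walk every container and change the tier of every blob inside it
$storagecontainers = Get-AzStorageContainer -Context $context
foreach ($container in $storagecontainers) {
    $blobs = Get-AzStorageBlob -Container $container.Name -Context $context
    foreach ($blob in $blobs) {
        $blob.ICloudBlob.SetStandardBlobTier($storagetier)
    }
}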

Just remember to run this script AFTER you upload your files, to swap them to the cheaper Archive tier. You could also use it to swap them back at a later stage if you need to.

Azure AD Domain Services Cloud only user passwords

I have been creating a Windows Virtual Desktop (WVD) environment for internal testing. I’ll be sharing the process and tricks soon, but this was an issue with Azure AD Domain Services that I really didn’t know about until someone pointed it out to me. I am eternally grateful to gerry_1974 on the Microsoft Tech Community for the information that led to the resolution. I thought I’d share it here so others can avoid the oversight I made and not get as frustrated as I did.

I recently wrote about setting up Azure AD Domain Services for a cloud only environment:

Moving to the Cloud – Part 3

The reason I needed to do this was to support my planned “cloud only” WVD test environment. Azure AD Domain Services is basically designed to create an ‘old style’ domain that WVD host machines connect to. That will change down the track, but for now WVD needs a traditional AD. Since I did not have an existing on premises domain, I planned to use Azure AD Domain Services.

After eventually getting things working (more about that soon), I was able to successfully log in to my WVD environment with a user who didn’t have Multi Factor Authentication (MFA) enabled. I then tried a user with MFA and received:

image

The remote computer that you are trying to connect to requires Network Level Authentication (NLA), but your Windows domain controller cannot be contacted to perform NLA. If you are an administrator on the remote computer, you can disable NLA by using the options on the Remote tab of the System Properties dialog box.

I put the issue down to being about MFA but as it turned out, I was so wrong!

When you have cloud only users with Azure AD Domain Services, no password hashes in a format suitable for NT LAN Manager (NTLM) authentication are generated automatically! To force this generation, cloud only users are required to change their password, per:

Enable user accounts for Azure DS

which says:

The steps to generate and store these password hashes are different for cloud-only user accounts created in Azure AD versus user accounts that are synchronized from your on-premises directory using Azure AD Connect. A cloud-only user account is an account that was created in your Azure AD directory using either the Azure portal or Azure AD PowerShell cmdlets. These user accounts aren’t synchronized from an on-premises directory.

and most importantly:

For cloud-only user accounts, users must change their passwords before they can use Azure AD DS. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Azure AD.

After having this brought to my attention, I understand why it is so, but I would also say this could be a very painful process if you have a lot of users wanting access to something like WVD.

Thus, this is another little configuration tip to remember if you are setting up a cloud only environment that utilises Azure AD Domain Services. Before users can use services that depend on Azure AD Domain Services (like Windows Virtual Desktop), they need to change their password so the NTLM password hash can be generated.
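If you want to see which accounts are affected, one quick check is to list the cloud only users, i.e. those with no ImmutableId, which indicates they weren’t synchronised from on premises. A minimal sketch, assuming the AzureAD PowerShell module and an existing Connect-AzureAD session:

# List cloud only users (no ImmutableId = not synced via Azure AD Connect).
# These accounts will need a password change before they can authenticate
# against Azure AD Domain Services.
Get-AzureADUser -All $true | Where-Object { -not $_.ImmutableId } |
    Select-Object DisplayName, UserPrincipalName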

Ignite 2019 sessions on YouTube

Not everyone is able to get to Microsoft Ignite, me included, for various reasons. Microsoft, to their credit, live streams and records the sessions. Eventually, these sessions make their way onto YouTube, which is my preferred viewing platform. However, what is missing is a catalogue of links to each session.

image

As in previous years:

Ignite 2017 sessions on YouTube

Ignite 2018 sessions on YouTube

I have started building this index and making it available on my GitHub:

Ignite session 2019 on YouTube

Please note, not all the sessions are there as yet. I add them as I discover them throughout the year.

Of course, if you have a link to a session that I don’t have up there yet, please send it along so I can add it and we can all benefit.

Thanks again to Microsoft for doing this and uploading the sessions to YouTube. They are a great source of learning and allow people like me who couldn’t get to Ignite to work through the content.

Moving to the Cloud–Part 3

This is part 3 of a multi part examination of moving to the Microsoft cloud. If you missed the earlier episodes, you’ll find them here:

Moving to the Cloud  – Part 1

which covered off setting up a site to site VPN to Azure and

Moving to the Cloud – Part 2

which looked at creating traditional ‘drive mapped’ storage as PaaS.

It is now time to consider identity. We need to know where a user’s identity will live in this new environment, because there are a few options. Traditionally, a user’s identity has lived on premises in a local domain controller (DC) inside Active Directory (AD). With the advent of the cloud, we now have Azure Active Directory (AAD) as an option as well. It is important to remember that Azure Active Directory (AAD) is NOT identical to on premises Active Directory (AD), per:

image

What this means is that native Azure AD (AAD) can’t do some things that on premises Active Directory (AD) can. Much of that is legacy services like Group Policy, machine joins, etc. Windows 10 machines can be joined to Azure AD (AAD) directly, but legacy systems, like Windows 7 and 8, and Windows Servers, can’t be. That’s right: as we stand today, even the latest Windows Server cannot be joined directly to AAD the way it can be joined to an AD on premises.

Thus, if you have legacy services and devices, as well as Windows Servers, that you want to remain part of your environment, you are going to need to select an identity model that supports traditional domain joins. I will also point out that, as of today (this is changing in the future), if you want to implement Windows Virtual Desktop (WVD), you will also need a traditional AD to join those machines to. However, if you have no devices that require legacy services, for example if your environment is totally Windows 10 Pro based with no servers (on premises or in Azure IaaS), then all you will need is Azure AD.

Thus, not everyone can jump directly to AAD immediately. Most will have to transition through some form of hybrid arrangement that supports both AAD and AD in the interim. However, most transitions ultimately aim to eliminate on premises infrastructure to limit costs such as patching and updating physical servers. That is what we are aiming for in this scenario.

In a migration from a traditional on premises environment with a domain controller (DC) and AD we now have a number of options when it comes to identity in the cloud.

1. You can maintain the on premises domain controller and AD, while using Azure AD Connect to synchronise (i.e. copy) the users’ identities to AAD. It is important to note that the identity in Azure is a COPY and the primary identity remains on premises in the local AD. This is still the case if you implement things like password write back, which is part of Azure AD P1 and Microsoft 365 Business. Having the user’s primary identity on premises means this is where you need to go to make changes and updates.

2. You can swing the domain controller from on premises to Azure IaaS. This basically means setting up a new VM in the Azure VNET that has already been created, joining it to the existing on premises domain across the VPN, then using DCPromo to make it a domain controller. To make it the ‘primary’ domain controller, you swing across the domain infrastructure roles via the following PowerShell:

Move-ADDirectoryServerOperationMasterRole -Identity "Target-DC" -OperationMasterRole SchemaMaster,RIDMaster,InfrastructureMaster,DomainNamingMaster,PDCEmulator

and then DCPromo the original on premises domain controller out and remove it altogether (a quick way to verify the roles have moved is sketched after this list). This way you now have your domain controller and AD on a VM in Azure IaaS, working with machines both in the Azure VNET and on premises thanks to the site to site VPN established earlier (told you it would be handy!). In essence, this is like picking up the domain controller hardware and moving it to a new location. Nothing else changes: the workstations remain on the same domain, group policy is unaffected, etc. The downside is that you still need to patch and update the new domain controller VM in Azure, but the maintenance and flexibility are superior now that it is in Azure IaaS.

3. You replace the on premises domain with Azure AD Domain Services. Think of this as a cloud domain controller as a service; a domain controller as PaaS. When you use Azure AD Domain Services, Microsoft spins up two load balanced domain controller VMs and connects them directly to AAD, so the users there now appear in the PaaS domain controllers. Using Azure AD Domain Services removes the burden of having to patch, update and scale domain controllers for your environment. It also gives you a traditional AD environment you can connect things like servers to. However, there are some trade-offs. When you use Azure AD Domain Services you must start a new domain. This means you can’t swing an existing domain across onto it, like you can in option 2 above, which in turn means detaching all your legacy devices, like servers, from the original domain and re-attaching them to the new one. You also get limited functionality with traditional AD services like Group Policy. You should see Azure AD Domain Services as a transitionary step, not an end point.
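As an aside for option 2, once you have moved the roles you can quickly verify where they now sit with the ActiveDirectory RSAT module (a minimal sketch):

# Check which DCs now hold the domain and forest FSMO roles
Get-ADDomain | Select-Object InfrastructureMaster, RIDMaster, PDCEmulator
Get-ADForest | Select-Object DomainNamingMaster, SchemaMaster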

With all that in mind, you need to decide what works best for your environment, now and in the future. Most environments I see want to eliminate on premises domain controller hardware as soon as possible and not replicate it going forward. That desire therefore means a migration to PaaS using Azure AD Domain Services.

The first step in this process is to ensure that all your users are in Azure AD. The assumption here is that you have already set up your Microsoft 365 environment and the users are configured in Azure AD. If you are retaining an on premises domain controller, you’ll need to have set up Azure AD Connect to copy the user identities to Azure AD. Azure AD is where Azure AD Domain Services will draw its identities from when it is installed, so the users need to be there first. Once the users appear in Azure AD, the next step is to set up Azure AD Domain Services. You can think of a traditional on premises domain controller as being roughly equivalent to Azure AD combined with Azure AD Domain Services.

Setting up Azure AD Domain Services is done via the Azure portal.

image

Log in as a global administrator, locate Azure AD Domain Services and select it.

image

You’ll most likely find that no services are as yet configured. Select the Add option from the menu across the top as shown above.

image

You then need to complete the details. Here we face an interesting question: what should we call this new ‘traditional’ managed domain we are about to create with Azure AD Domain Services? Should it be the same as what is already being used in Azure AD?

image

image

How you configure this is totally up to you. There is guidance, as shown above, which can be found at:

Active Directory: Best Practices for Internal Domain and Network Names

In this case I have decided to go for a sub-domain, as recommended, and prefix the new Azure AD Domain Services domain with ‘ds’, i.e. ds.domain.com.

image

With all the options completed, select Next – Networking to continue.

image

Unfortunately, as you can see above, you can’t configure Azure AD Domain Services on a subnet that has service endpoints. You’ll hit this if you have configured your Azure storage to use private endpoints, as we have and as was previously recommended.

If so, you can select the Manage link below this box, simply add a new subnet to your Azure VNET, and then use that to connect Azure AD Domain Services to.

image

Before you continue to Administration, ensure that you are adding Azure AD Domain Services to your existing Azure VNET; the default is to create a new VNET, which is NOT what you want here. You want to connect it to the existing VNET you established previously.

When you have selected your existing Azure VNET and a suitable subnet, select the Next – Administration button to continue.

image

Here you’ll need to decide which users will be administrators for the domain. From the documentation:

What can an AAD DC Admin do?

Only Microsoft has domain admin and enterprise rights on the managed domain. AAD DC Admins can do the following:

  • Admins can use remote desktop to connect remotely to domain-joined machines

  • Admins can join computers to the domain

  • Admins are in the administration group on each domain-joined machine

Considerations for the AAD DC Administrators group

  • Pick group members for the AAD DC Administrators group that have these needs:

    • Users that need special administrative permissions and are joined to the domain

    • Users that need to join computers to the domain
  • Do not change the name of the AAD DC Administrators group. This will cause all AAD DC Admins to lose their privileges.

The defaults will be your global administrators plus the members of a special group called AAD DC Administrators that will be created. So, you can simply add any Azure AD user to this group and they will have admin privileges in the Azure AD Domain Services environment going forward.
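As an illustration, adding a member to that group can be scripted with the AzureAD PowerShell module along these lines (a sketch; the UPN shown is a placeholder):

# Add a user to the AAD DC Administrators group created by Azure AD DS.
# Assumes Connect-AzureAD has been run; the UPN below is a placeholder.
$group = Get-AzureADGroup -SearchString "AAD DC Administrators"
$user = Get-AzureADUser -ObjectId "admin@yourdomain.com"
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId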

You can of course configure these permissions any way you wish, but generally the defaults are fine, so select the Next – Synchronization button to continue.

image

The final question is whether you wish to have all, or a subset, of your Azure AD users synchronised into the Azure AD Domain Services environment. In most cases you’ll want all users, so ensure that option is selected and press the Review + create button to continue.

image
image

You should now see all your settings. Importantly, note the box at the bottom about consenting to store NTLM and Kerberos password hashes in Azure AD. These older protocols have potential security concerns, and having the hashes stored somewhere other than a domain controller is something you need to be aware of. Generally there won’t be any issues, but make sure you understand what that last box means for your security posture.

Press the Create button when complete.

image

You’ll then receive the above warning about which configuration options can’t be changed after the fact. Once you have reviewed this and wish to proceed, select the OK button.

Your deployment into Azure will then commence. This process generally takes around an hour.

image

You should see the above message when complete and if you select Go to resource you’ll see:

image

You’ll note that it still says Deploying here, so you’ll need to wait a little longer until that process is complete.

image

In about another 15 minutes you should see that the domain is fully deployed, as shown above. Note that two domain controllers have been allocated automatically; in this case they are 10.0.1.5 and 10.0.1.6 on the subnet into which Azure AD Domain Services was deployed. You can select from a number of menu options on the left, but the service is pretty basic. Most times you’ll only need to look at the Activity log here from now on.

Can you actually manage the domain controllers like you can on premises? Yes, somewhat. To do that you’ll need to download and install the:

Remote Server Administration Tools for Windows 10

on a Windows 10 workstation that can access these domain controllers.
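As a side note, on Windows 10 1809 and later RSAT ships as a Feature on Demand rather than a separate download, so you can likely install the AD tools from an elevated PowerShell prompt like this (a sketch):

# See which RSAT features on demand are available and their install state
Get-WindowsCapability -Online -Name "Rsat*" | Select-Object Name, State

# Install the Active Directory administration tools
Add-WindowsCapability -Online -Name "Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0"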

image

You can then use that to view your domain in the ‘traditional way’ as shown above.

Thus, with Azure AD Domain Services deployed, you have a ‘traditional’ domain but without infrastructure and with your Azure AD users in there as well.

In summary, the options around identity here are thus:

1. Primary = local AD, Secondary = none (which can be linked to Azure via a VPN)

2. Primary = Azure AD, Secondary = none (no on premises infrastructure like servers to worry about)

3. Primary = local AD, Secondary = Azure AD (thanks to Azure AD Connect, but need a VPN again to connect to Azure IaaS)

4. Primary = Azure AD, Secondary = Azure AD Domain Services (which can be linked back to on premises via a VPN)

In this case, we’ll be going with Option 4. You can see, however, that a VPN is going to be required for options 1, 3 and 4. That’s why one of the first steps in this series was to set one up.

With all that now configured, let’s look at the costs involved. The costs here will vary with the identity solution you select. If you stay with an on premises domain controller only, you will need a site to site VPN to resources in Azure. The costing for this has been covered previously:

Moving to the Cloud  – Part 1

and equates to around AU$36 per month with less than 5GB of traffic inbound to Azure. The Azure AD Connect software you use to synchronise user identities to Azure AD is free.

If you move the domain controller to a virtual machine in Azure, there will be the cost of that virtual machine (compute + disk storage). The cost will therefore vary greatly with the VM type you select. I’ll be covering more about VM options in this migration in an upcoming article, but for now let’s keep it simple and say we use an A2v2 Standard VM (4GB RAM, 20GB HDD) for a single role as just a domain controller. The cost for that is around AU$76 per month. If you also still have on premises infrastructure, like Windows Servers, that needs access to the domain, then you’ll also need a site to site VPN to communicate with the domain controller VM in Azure IaaS. Thus, to move the domain controller to Azure IaaS and still allow access to on premises infrastructure, the cost would be around AU$112 per month (Azure VM + VPN). Of course, if you can migrate all your on premises server infrastructure to Azure IaaS, you probably wouldn’t need the VPN, but there would then be the cost of the additional infrastructure in Azure. Balanced against this cost in Azure IaaS is the saving in local hardware, power, etc.

Again, let’s keep it simple for now and say we want to maintain on premises infrastructure but have a dedicated domain controller in Azure IaaS so the one on premises can be de-commissioned. That means the costs would be AU$112 per month for a domain controller in Azure IaaS and a VPN back to on premises.

Finally, the last identity option is to use the Azure PaaS service, Azure AD Domain Services, which means no infrastructure at all but also means we need to start with a new ‘clean’ domain, separate from the existing on premises one. The costs of this Azure PaaS service can be found at:

Azure Active Directory Domain Services pricing

which reveals:

image

For smaller directories (<25,000 objects) the cost is going to be a flat AU$150 per month. Remember, when equating costs here, there are no VMs to back up or operating systems to patch because it is PaaS. This is a domain controller as a service, and Microsoft takes care of all the infrastructure “stuff” for you as part of that service. Of course, if you need on premises infrastructure to access Azure AD Domain Services, you’ll again need a site to site VPN to get there. If all your infrastructure is cloud based, then no site to site VPN is required. However, in this scenario we still want access to on premises infrastructure, so the costs would be AU$186 per month (Azure AD Domain Services + VPN).

In summary then, the configuration options/costs will be:

Option 1. Retain on premises AD = AU$36 per month

Option 2. Move domain controller to Azure IaaS = AU$112 per month (estimated typical cost)

Option 3. Migrate domain controller to Azure PaaS = AU$186 per month

Going forward we’ll be selecting Option 3, because we are aiming to minimise the amount of infrastructure to be maintained and we want to move to PaaS as soon as possible. That means the total cost of the migration so far is:

1. Site to Site VPN = AU$36

2. Storage = AU$107

3. Identity (PaaS) = AU$150

Total maximum infrastructure cost to date = AU$293 per month

This means we have:

1. Eliminated the old on premises domain controller (hardware, patching, backup, power, etc. costs)

2. Connected on premises infrastructure to Azure AD (via Azure AD Domain Services and the VPN)

3. Mapped tiered storage locations for things like archiving, profiles, etc. that are PaaS

4. Everything needed to build out a Windows Virtual Desktop environment

The next item that we’ll focus on is setting up a Windows Virtual Desktop environment as we now have all the components in place to achieve that.

Moving to the Cloud–Part 2

This is part of a multi part examination of the options for moving to the Microsoft cloud. If you missed the first episode, you’ll find it here:

Moving to the Cloud  – Part 1

which covered off setting up a site to site VPN to Azure.

The next piece of the puzzle that we’ll add here is storage.

Storage in the Microsoft cloud comes in many forms: SharePoint, Teams, OneDrive for Business and Azure. We’ll get to the Microsoft 365 side of things like SharePoint, Teams and OneDrive later, but to start off with we want to take advantage of the site to site VPN that was set up in Part 1.

In Azure there are three different access tiers of storage: hot, cool and archive. They vary by access speed and cost; the slower the access, the cheaper it is. Hot is the fastest access, followed by cool, then archive. You can read more about this here:

Azure Blob storage: hot, cool, and archive access tiers

The other variable with Azure storage is the performance tier: standard or premium. You can read more here:

Introduction to Azure storage

Basically, the standard performance tier uses HDD while premium uses SSD. Apart from performance, the major difference is how the storage cost is calculated. With the standard tier, you are only billed for the space you consume BUT you are also billed for access (read, write, delete) operations. With premium, you are billed immediately for the total capacity of the storage you allocate BUT you are not billed for any access operations.

So the key metrics you need to keep in mind when designing a storage solution in Azure are, firstly, the access tier (hot, cool or archive), then the performance tier (standard or premium), and the capacity you desire for each. You may find some combinations are unavailable, so check out the document linked above for more details on what is available with all these options.

The easiest approach to Azure storage is to create an Azure SMB share and map it directly on a workstation, which I have previously detailed here:

Creating an Azure SMB Share

as well as an overview on pricing:

Clarification on Azure SMB file share transactions
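For reference, mapping such a share to a drive letter typically looks something like the following sketch; the account name, share name and key shown are placeholders for your own values:

# A sketch of mapping an Azure SMB file share to a drive letter.
# <storageaccount> and <sharename> are placeholders for your own values.
$acct = "<storageaccount>"
$key = "<your storage account key here>"

# Persist the credentials so the mapping survives reboots
cmd.exe /C "cmdkey /add:`"$acct.file.core.windows.net`" /user:`"Azure\$acct`" /pass:`"$key`""

# Map the share to drive Z:
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$acct.file.core.windows.net\<sharename>" -Persist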

Azure SMB files currently only supports the hot and cool tiers. You can use archive storage, but only via blob access, not SMB files. So what good are all of these, you may ask? Well, if you read my article:

Data discovery done right

You’ll find that I recommend dividing up your data into items to be deleted, items to be archived and items to be migrated.

So we need to ask ourselves: what data makes sense where?

Let’s start with Azure archive storage. What makes sense in here, given that Azure archive storage is aimed at replacing traditional long term storage (think tape drives)? Into this you want to put data that you aren’t going to access very often and that doesn’t make sense going into Teams, SharePoint and OneDrive. What sort of data doesn’t make sense going into SharePoint? Data that can’t be indexed, such as large image files without text, Outlook PST backups, and custom file types SharePoint indexing doesn’t support (think some types of CAD files and other third party file types). In my case, Azure archive storage is a great repository for those PST backups I’ve accumulated over the years.

Here is the guidance from Microsoft:

  • Hot – Optimized for storing data that is accessed frequently.

  • Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

  • Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).

We now repeat this exercise with the cool tier; remember that this tier directly supports Azure SMB files. So, what makes sense here? There is obviously no hard and fast rule but, again, what doesn’t make sense going into SharePoint? Data that can’t be indexed, is typically large, is accessed more often than archive data but still not that often, AND that you want accessible via a mapped drive letter. In my case, the data that springs to mind is my desktop utility apps (like robocopy), ISO images (of old versions of SharePoint Server I keep in case I need to do a migration) and copies of my podcast recordings in MP3 format.

We repeat this again for the hot tier, which is the fastest and most expensive storage option. Initially I’m going to place the user profile data here when I get around to configuring Windows Virtual Desktop (WVD) in this environment. That needs to be quick; most other current data files I have will go into Microsoft 365. Being the most expensive tier of storage, I want to keep this as small as possible and only put data here that REALLY makes sense.

You don’t have to use all three tiers as I do. You can always add more storage later if you need to, but I’d recommend you work out what capacity you want for each tier and then implement it. For me, I’m going for 100GB archive, 100GB cool and 50GB hot as a starting point. Your capacities will obviously vary depending on how much data you plan to put in each location. That’s why you need to have some idea where all your data is going to go BEFORE you set all this stuff up. Some will go to Azure, some will go to Microsoft 365, some will be deleted and so on.

As for performance tiers, I’m going to stick with standard across all storage accounts for now to keep costs down and only pay for the capacity I actually use.

Let’s now look at some costs by using the Azure pricing calculator:

image

I’ll firstly work out the price for each tier based on 1TB total storage, for comparison between the tiers and with SharePoint and OneDrive for Business.

All the storage calculations are in AU$, out of the Australian East data center, on the standard performance tier and locally redundant unless otherwise stated.

You can see that 1TB of archive storage is only AU$2.05, but it ain’t that simple.

image

There are other operations, as you can see above, that need to be taken into account. I have adjusted these to what I believe makes sense for this example, but as you can see, variations here can significantly alter the price (especially the read operations).

The estimated total for 1TB of archive storage on the standard performance tier = AU$27.05 per month.

Now, as a comparison, if I change the performance tier to Premium I get:

image

The price of the storage goes way up, while the price of operations goes way down. So, if your storage sees a huge number of operations, premium may work out cheaper; for larger capacities with modest operations, the standard tier is your best option for minimising costs.

The estimated total for 1TB of archive storage on the premium performance tier = AU$224.22 per month.

Basically 10x the cost of the standard tier.

In my case, I don’t need 1TB of storage; I only want 100GB.

image

When I now estimate 100GB of archive storage, the cost of just the storage falls by 10x (as expected) to AU$0.20. Don’t forget, however, the storage operations, which remain the same. So, my storage cost went down but my operation costs remained the same. Thus,

The estimated total for my 100GB of archive storage on the standard performance tier = AU$25.95 per month.

While premium is:

image

The estimated total for my 100GB of archive storage on the premium performance tier = AU$22.78 per month.

As outlined before, as a general rule of thumb with archive storage, the premium performance tier is better value when the storage capacity is low and the data operations are high. Once the capacity increases with premium performance, the price ramps up.

So why would I recommend staying with the standard performance tier? Although I ‘estimate’ that my archive will be small, I want the flexibility to grow the capacity if I need it. Remember that we don’t set a storage capacity quota for blob storage; it can just grow as needed, and the bigger the storage capacity, the more it will cost me if I go premium. Given that storage capacity here is more important than working with the data, I want the cheapest storage cost I can get as the data capacity increases. Thus, I’ll stick with the standard performance tier. Also, remember that I’m estimating: when my storage reaches 100GB I’ll be billed AU$25.95 per month, but until I reach that capacity, and the fewer operations I do on files there, the cheaper this storage will be. I therefore expect my ‘real world’ costs to be much less than this AU$25.95 figure over time.

Let’s now look at the next two storage locations, which will be Azure SMB file shares.

Unfortunately, the pricing calculator doesn’t allow us to easily calculate the price for an SMB share on the cool access tier (Azure SMB files doesn’t currently support the archive tier). However, the pricing is only an estimate, and I know that if I place it on the cool access tier it will be cheaper anyway, so I’m going to keep it simple.

image

Thus, for reference:

The estimated total for 1TB of SMB file storage on the standard performance tier = AU$106.58 per month.

remembering that for the standard tier we need to take into account the cost of operations as shown.

and for Premium:

image

The estimated total for 1TB of SMB file storage on the premium performance tier = AU$348.00 per month.

With premium storage you don’t need to worry about operations; however, don’t forget that if you go premium you’ll be paying for the total allocated capacity no matter how much you are actually using. Thus, I’ll again be sticking with standard storage.

So, for my 50GB Azure SMB files hot tier I calculate the following:

image

The estimated total for my 50GB of hot SMB file storage on the standard performance tier = AU$32.40 per month.

Now, how can I get an idea of what the cool SMB file price will be? Although it is not quite this simple, I’m going to use a ratio from:

Azure Block blob pricing

image

So, by my super rough rule of thumb maths I get:

cool/hot = 0.02060/0.0275 = 0.75

Thus, cool storage is roughly 75% the cost of hot storage.

The estimated total for my 100GB of cool SMB file storage on the standard performance tier = AU$32.40 per month x 2 x 0.75 = AU$48.60 per month

The 2x is because the hot price I have is only for 50GB, and I want 100GB of cool storage.
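The same back-of-envelope maths as a few lines of PowerShell, using the illustrative per-GB block blob prices shown above:

# Rough ratio of cool to hot block blob per-GB pricing (prices from above)
$hotPerGB = 0.0275
$coolPerGB = 0.0206
$ratio = $coolPerGB / $hotPerGB      # roughly 0.75

# Scale the 50GB hot estimate to 100GB, then apply the cool ratio
$hot50GB = 32.40
$cool100GB = $hot50GB * 2 * $ratio   # roughly AU$48.60 per month
$cool100GB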

In summary then, I will create 3 x storage repositories for my data:

– 100GB blob archive storage = AU$25.95 per month

– 100GB SMB file cool storage = AU$48.60 per month

– 50GB SMB file hot storage = AU$32.40 per month

250GB total storage estimated cost = AU$106.95 per month

Again remember, this is my estimated MAXIMUM cost; I expect it to be much lower until the data capacities actually reach these levels.

Now that I have the costs, how do I actually go about using these storage locations?

Because archive storage is blob storage, I’ll need to access it via something like Azure Storage Explorer, as I can’t easily use Windows Explorer. I’m not expecting all users to work with this data, so Azure Storage Explorer will work fine for the select few who need to upload and manipulate it.

As for the SMB file cool and hot storage, I’m going to map these to two drives across my VPN, as I have detailed previously:

Azure file storage private endpoints

This means they’ll just appear as drive letters on workstations, and I can copy data up there from anything local, like a file server. The great thing is that these Azure SMB file shares are only available across the VPN and not directly from elsewhere, as the article shows. That can be changed if desired, but for now that’s the way I’ll leave it. I can also potentially get to these locations via Azure Storage Explorer if I need to. The flexibility of the cloud.

So far we now have:

– Site to Site VPN to Azure (<5GB egress from / unlimited ingress to Azure) = AU$36.08 per month

– 100GB blob archive storage = AU$25.95 per month

– 100GB SMB file cool storage (mapped to Y: Drive) = AU$48.60 per month

– 50GB SMB file hot storage (mapped to Z: Drive) = AU$32.40 per month

Total maximum infrastructure cost to date = AU$143.03 per month

So we now have in place the ability to start shifting data that doesn’t make sense going into Microsoft 365 SharePoint, Teams and OneDrive for Business. Each of the three new storage locations has its advantages and disadvantages. That is why I created them all: to give me maximum flexibility at minimum cost.

We’ll continue to build from here in upcoming articles. Stay tuned.

Optimising Azure OMS data ingestion

image

Every month when I receive my Azure bill I take a careful look at it to see if there is anything I can optimise. This month I saw that the top cost was my Log Analytics workspace, as you can see above. This was no surprise, because it basically represents the amount of data that has been ingested from my remote workstations into Azure Sentinel for analysis.

image

Looking at Azure Sentinel, I can see that I am bringing in more performance logs than security events per day. Now the question is, am I really getting value from that much performance logging? Probably not, so I want to turn it down a notch, not ingest quite so much and, hopefully, save a few dollars.

image

To do this, I’ll need to log into the Azure Portal and then go to Log Analytics workspaces.

image

I’ll then need to select Advanced settings from the menu on the left.

image

The first thing I checked, under Data > Windows Event Logs, was that I’m only capturing errors from the Application and System logs for the devices, which I was.

image

Next I went to Windows Performance Counters and adjusted the sample interval. I have increased it to every 10 minutes for now to see what difference that makes. I could also remove or add performance counters here if I wanted, but I want to work with the current baseline.
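If you’d rather check this from a script than click through the portal, you can query the Usage table in the workspace to see which data types are driving ingestion. A sketch, assuming the Az.OperationalInsights module; the workspace ID shown is a placeholder:

# Show which data types consumed the most ingestion over the last 31 days.
# "<your workspace id here>" is a placeholder for the workspace GUID.
$query = "Usage | where TimeGenerated > ago(31d) | summarize TotalMB = sum(Quantity) by DataType | order by TotalMB desc"
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<your workspace id here>" -Query $query
$result.Results | Format-Table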

With all that done, I’ll wait and see what the cost differences are in next month’s invoice and adjust again if necessary.