Keep calm and Twitter

Generally, the cloud is pretty reliable. However, it is not perfect and there will be downtimes and outages. Just because you move your information to the cloud doesn’t mean that you abdicate your responsibilities for it. Disaster planning is as important in the cloud as it is on prem.

image

The first place to check if you are having issues that you believe are related to Microsoft 365 is the Microsoft 365 Service Health page shown above, which can be found at:

https://portal.office.com/adminportal/home#/servicehealth

Of course, if you are unable to access your tenant for any reason, then you’ll have to try another resource.

image

Your next port of call should be the Office 365 status page, shown above and found here:

https://status.office365.com/

This is fairly generic and also just links back to the Service Health in your own portal. However, there may be information here about any wider scale issues, so it is always worthwhile checking.

image

Next, you should follow the @MSFT365Status Twitter account as shown above. Here you’ll find information posted on infrastructure outside Microsoft’s own. You can also communicate with this account if you need to.

image

You can also find an Azure Status page at:

https://status.azure.com/en-us/status

Given that many Microsoft 365 services are built on Azure, it is another area that may give you some insight.

image

There is also an Azure support Twitter account, @azuresupport, that posts information concerning issues and that you can also interact with if you need to.

There are also numerous third party services that will track whether a web site is active.
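You can also roll a very simple check of your own. Here’s a minimal PowerShell sketch that just tests whether a status page responds; the URL is the Office 365 status page mentioned in this article, so swap in whatever endpoint matters to you:

```powershell
# Minimal reachability check for a status page.
# Assumption: you only care whether the page answers at all, not its content.
try {
    $response = Invoke-WebRequest -Uri 'https://status.office365.com/' -UseBasicParsing -TimeoutSec 10
    Write-Output "Status page reachable (HTTP $($response.StatusCode))"
}
catch {
    Write-Output "Status page unreachable: $($_.Exception.Message)"
}
```

Obviously this tells you nothing about the health of individual services, but it is a quick way to confirm whether the problem is at your end or theirs.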

Finally, a good approach is also to do a search across Twitter to see whether others are having similar issues. People tend to be pretty vocal on social media when they are inconvenienced, so that should be a source of both good and bad information.

As Noah knows, you prepare for the flood BEFORE it rains. In the event of cloud issues, how will you know the extent of the issues and where will you get good information? For me, the major source has typically been Twitter. You do have to filter those results a tad to get helpful information there, but that is the nature of social media.

In short, you need a plan. Take my advice and start monitoring Twitter to get a better idea of what might be happening beyond your own screen.

Another great security add on for Microsoft 365

Previously, I have spoken about Cloud App Security being a ‘must have’ add on for any Microsoft 365 environment:

A great security add on for Microsoft 365

I now believe that the next ‘must have’ security add on you should integrate with your tenant is Azure Sentinel.

image

In a nutshell, Azure Sentinel will allow you to monitor, alert and report on all your logs from just about any location, whether on prem or in the cloud.

image

Once you have created the Sentinel service and assigned it a log workspace, the first place to go is to the Connectors option as shown above.

Here you can connect up your services. There is a huge range of options from Office 365, Azure, on prem and third parties like AWS. At a minimum I would suggest you connect up your Azure and Office 365 services.

image

Next, go to the Analytics option, then select Rule templates from those available. These rules are basically queries across your data sources from your connectors. Add in the rules that make the most sense for your environment.

image

As you create these rules you will be stepped through a wizard as shown above.

image

The Set rule logic step allows you to define the rule based on the data being received. You will notice there are lots of options. The great thing about using the templates is that this is already done for you but you can certainly modify these or create your own.

image

The real power of Azure Sentinel lies in the Automated response step shown above. Here you define what actions will be taken when an alert is generated by the rule. This means that you can have something automatically execute when an alert happens. This could be a remediation process, advanced alerting and more. This allows the response to a threat to be immediate and customisable.

image

Next, go into the Workbook options as shown and then the Templates area and add all the options that make sense.

image

A workbook is basically an interactive dashboard where you can graphically query and report on data as shown above.

image

When rules are triggered they will appear as Incidents that you investigate as shown above.

image

You’ll be able to explore incidents in greater depth using the graphical explorer as shown above.

image

Good security is about being pro-active and Azure Sentinel gives you this via the Hunting option as shown above. This allows you to run standard queries against the data to discover items that may need further investigation and analysis. Note the option highlighted here that allows you to Run all queries at the touch of a button. This is yet another hugely powerful option as you can now ‘hunt’ across all your information so quickly. Show me another tool that can do this for both cloud and on prem?

image

There are lots more features, but by now you are probably wondering what the costs are. As you can see from above, they are based on storage, and you can reserve a storage size to suit your needs. However, you can also opt, as I have, for a pay as you go option.

image

This means the Azure Sentinel cost to analyse all my data is AUD$3.99 per GB of data and

image

on the pay as you go plan I also need to factor in data ingestion, which is shown above in AUD$. Note that you get 5GB of data ingestion free per month. After that, I’d be paying AUD$4.586 per GB.

image

As you can see from the above usage figures I am nowhere near the 5GB ingestion limit, so all I am currently paying for is Azure Sentinel analysis.

The amount of data you ingest and analyse will depend on the services you connect as well as things like data retention periods. All of these can be adjusted to suit your needs. There are also many other Azure pricing tools you can use to control your spend. However, if you are concerned about running up an excessive bill, just connect a few services and scale from there.
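To get a feel for the numbers, here is a rough back-of-the-envelope sketch in PowerShell using the AUD pay-as-you-go figures quoted above (AUD$3.99 per GB for Sentinel analysis and AUD$4.586 per GB for ingestion after the 5GB free tier). These rates are the ones current at the time of writing; always check the Azure pricing page for your region:

```powershell
# Rough monthly cost estimate for Azure Sentinel on pay-as-you-go.
# Assumes: AUD$3.99/GB analysis (no free tier), AUD$4.586/GB ingestion
# after 5GB free per month. Verify against current Azure pricing.
function Get-SentinelCostEstimate {
    param([double]$GbPerMonth)
    $analysis  = $GbPerMonth * 3.99
    $ingestion = [Math]::Max(0, $GbPerMonth - 5) * 4.586
    [PSCustomObject]@{
        GbPerMonth   = $GbPerMonth
        AnalysisAUD  = [Math]::Round($analysis, 2)
        IngestionAUD = [Math]::Round($ingestion, 2)
        TotalAUD     = [Math]::Round($analysis + $ingestion, 2)
    }
}

Get-SentinelCostEstimate -GbPerMonth 3   # under the free ingestion tier
Get-SentinelCostEstimate -GbPerMonth 10  # ingestion charges kick in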

In my case, I have logs from Microsoft 365 cloud services, Azure, on premises machine monitoring, Defender ATP and more all going into Sentinel. Basically, everything I can send is going in there, and the costs remain low.

I have always maintained that when you sell Microsoft 365, you should also sell an Azure subscription:

Deploy Office 365 and Azure together

Azure Sentinel is yet further confirmation that you should be doing this to add greater functionality and security to your environment. I will be spending more time deep diving into Azure Sentinel so make sure you stay tuned.

Allowing extensions with Edge Baseline

image

One of the handy things that Microsoft has now enabled is the ability to control the modern Edge browser (i.e. the one based on Chromium) via policy and services like Intune. In fact, if you visit Intune and look for Security Baseline you’ll find a new Microsoft Edge Baseline policy as shown above.

image

There are lots of great settings you can enforce by using this baseline to create a policy as you can see above.

I enabled the policy without making any changes initially so I could determine the impact, if any. It turns out that the default baseline actually disables any and all existing browser extensions you may have and also prevents you from adding new extensions.

I understand that this approach makes your environment more secure, but I really can’t live without both the LastPass and GetPocket extensions.

image

Unfortunately, by default with the baseline policy, these got blocked as you see above. This meant that I needed to adjust the policy.

image

As it turned out, you need to set the option:

Control which extensions can be installed = Not Configured

Just disabling and removing other options didn’t seem to do the trick.
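If you want to confirm on the workstation itself what extension policy has actually landed, you can peek at the registry. This is a sketch based on the documented Edge policy locations; what you see will depend on how your Intune policy is configured:

```powershell
# Check whether an Edge extension blocklist policy is applied locally.
# HKLM:\SOFTWARE\Policies\Microsoft\Edge is the documented Edge policy path.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Edge\ExtensionInstallBlocklist'
if (Test-Path $key) {
    # Each numbered value is a blocked extension ID ('*' means block all)
    Get-ItemProperty -Path $key | Select-Object -Property * -ExcludeProperty PS*
}
else {
    Write-Output 'No extension blocklist policy found locally.'
}
```

This can save a lot of guesswork when you are trying to work out whether a policy change has synced down yet.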

image

After making that change and forcing the updated policy to sync to the workstation, I was back in business as you see above. I didn’t need to do anything in the browser; the previously disabled extensions were re-enabled automatically.

Enabling extensions is the only change I have made to the default baseline policy so far, and now everything is working as expected and is more secure, which I like.

I’d like the option to select ‘approved’ extensions so the baseline policy could be applied in total. Hopefully, that feature will make an appearance in the policy soon as I think many will want it. However, this is a quick and easy way to lock down the new Edge browser and another reason why, like me, you should make it your primary browser.

Changing client Log Analytics workspaces

I have been using Azure Log Analytics solutions for a while now to do things like report on client machine changes, updates, inventory, security and so on. However, I wanted to change my workspace for these clients from one Azure tenant to another.

image

I was thinking that I’d have to go into the registry and change the workspace ID and key, but when I searched the registry there were far too many entries. Turns out you don’t need to do that at all! All you need to do is go to the Control Panel and find the Microsoft Monitoring Agent as shown above.

image

When you run that you’ll see any workspaces you are currently joined to. You can Edit or Remove what is there.

image

Then you can add a new workspace as shown above.

image

All you then need to do is plug in the Workspace ID and Key from the new workspace and you are away.

I also discovered that you can configure the agent to report to multiple workspaces, even in different tenants if you want. That makes things really easy.
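If you have more than a couple of machines to change, the same steps can be scripted via the Microsoft Monitoring Agent’s documented COM object, AgentConfigManager.MgmtSvcCfg. Here’s a sketch; the workspace ID and key placeholders are just that, so substitute your own values:

```powershell
# Script the workspace change via the MMA configuration COM object.
# Run elevated on the machine where the agent is installed.
$agent = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

# List the workspaces the agent currently reports to
$agent.GetCloudWorkspaces() | Select-Object workspaceId, ConnectionStatus

# Add the new workspace (placeholders - use your own ID and key),
# then reload so the agent picks up the change
$agent.AddCloudWorkspace('<new-workspace-id>', '<new-workspace-key>')
$agent.ReloadConfiguration()

# To stop reporting to an old workspace:
# $agent.RemoveCloudWorkspace('<old-workspace-id>')
```

Because the agent supports multiple workspaces, you can add the new one, confirm data is flowing, and only then remove the old one.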

How easy is that?

Security is shared responsibility

The good and bad thing about the Internet is that we are all now pretty much connected to each other all the time. The growth in attacks by bad actors continues to expand and become ever more sophisticated.

One of the ways I have suggested to make yourself that little bit more secure is to brand your Microsoft 365 tenant. I wrote an article on how to do this:

Office 365 branding using Azure Resource Manager

Why this makes you a little bit more secure is that most phishing attacks are generic and take you to an unbranded, generic Microsoft 365 login page. Thus, having your own branding on your tenant will hopefully get users to stop and think before giving up their credentials to malicious sites. Yes, I know it is not foolproof, but every little bit helps.

It was, however, only a matter of time before the bad actors worked out how to get around this, as has now been brought to my attention.

image

image

As you can see from the above, I am getting my tenant branding displayed even though the URL is not the Microsoft Online URL!

image

You can see the attack does have a flaw on a large screen as shown above, but I’m sure it will fool most people.

So, how can I make sure Microsoft knows about this? I can use my (real) Microsoft 365 Admin portal to report the URL.

image

I go to the Microsoft 365 Security Center and select Submissions from under the Threat Management section. You then select +New submission as shown.

image

You then simply complete the details for what you wish to tell Microsoft about and select the Submit button at the bottom.

image

You should get a confirmation like shown above.

image

You should then also be able to see your submission at the bottom of the screen. Just make sure you select the correct query options and results to see this.

You can read more about this at:

How to submit suspected spam, phish, URLs, and files to Microsoft for Office 365 scanning

Security is never an exact science. Attackers work hard to bypass the barriers you configure, so you should never be complacent. However, you should also share what you find with people like Microsoft to help them harden their systems against attack and thereby help others.

We are all in this together, so let’s work together to make it a safer place for all.

Legitimate non-MFA protection

image

There is no doubt that multi-factor authentication (MFA) is a great thing for the majority of accounts. It is probably the best protection against account compromise. However, are there times when perhaps MFA doesn’t make sense?

Security is about minimising, not eliminating, risk. This means that it is a compromise and never an absolute. Unfortunately, MFA is a technology and all technologies can fail. If MFA failed, or was unavailable for any reason, then accounts would be unavailable. This could be rather a bad thing if such an issue persisted for an extended period of time.

In such a situation, it would be nice to be able to turn off MFA for accounts, so users could get back to work, and re-enable it when the MFA service is restored. However, today’s best practice is to have all accounts, especially administrators, protected by MFA.

A good example is the baseline policies that are provided for free in Microsoft 365 as shown above.

image

You’ll see that such a policy requires MFA for all admin roles.

image

And yes, there is a risk for admin accounts that don’t have it enabled but there is also a risk if it is enabled and MFA is not working for some reason.

The challenge I have with these types of policies is that they are absolute. It is either on (good) or off (bad), and in the complex world of security that is not the case.

I for one suggest there is a case for a ‘break glass’ administration account, with no MFA, that can be used in the contingency that MFA is unavailable to get into accounts and re-configure them if needed. Such an account, although it has no MFA, is protected by a very long and complex pass phrase along with other measures. Most importantly, it is locked away and never used, except in case of emergency. There should also be additional reporting on this account, so its actions are better scrutinised.
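For what it’s worth, generating that very long pass phrase doesn’t need any special tooling. Here’s one quick sketch; 64 characters from the printable ASCII range is an arbitrary choice, so adjust length and character set to your own policy:

```powershell
# Generate a long random password for a 'break glass' account.
# 64 characters drawn from printable ASCII (33-126); adjust to taste.
$chars  = [char[]](33..126)
$random = 1..64 | ForEach-Object { Get-Random -InputObject $chars }
-join $random
```

Record the result somewhere appropriately secure and offline, since this account is the fallback when everything else is unavailable.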

Unfortunately, taking such an approach means that you can’t apply such absolute policies. It also means that you won’t be assessed as well in things like Secure Score. However, I think such an approach is more prudent than locking everything under MFA.

As I said initially, security is a compromise; however, it would be nice to see the ability to make at least one exception to the current absolute approach, because service unavailability can be just as impactful as account compromise for many businesses.

Retrieving credentials securely with PowerShell

In a recent article I highlighted how you can securely save credentials from PowerShell to a local file using the Export-Clixml command:

Saving credentials securely with PowerShell

The idea with saving credentials securely is that you can now get to them quickly and easily. Just as easily in fact as embedding them into your PowerShell (which is a major no-no). So how do you do that?

You basically use the Import-Clixml command like so:

$clientidcreds = Import-Clixml -Path .\clientid.xml

to retrieve them. This will open clientid.xml in the current directory, read in the encrypted values (username and password) and store them in the variable $clientidcreds.

Now $clientidcreds.password is a secure string, which means it can’t easily be used as a normal string in PowerShell. No problemo, now just run the command:

$clientid = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($clientIdcreds.password))

and $clientid will have the plain text value you initially saved and exported to the secure XML file.

This is pretty neat, eh? It allows you to securely save items such as OAuth and API keys in a secure file on your machine and then recall them quickly and easily with the above commands and use them in your PowerShell code.
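As an aside, there is a shorter route to the same plain text. PSCredential objects expose a GetNetworkCredential() method, which saves you the Marshal gymnastics:

```powershell
# Simpler alternative: retrieve the plain text password from the
# saved credential without the Marshal/BSTR calls.
$clientidcreds = Import-Clixml -Path .\clientid.xml
$clientid = $clientidcreds.GetNetworkCredential().Password
```

Either approach works; just remember that once the value is in a plain string variable it is no longer protected, so clear it when you are done.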

Saving credentials securely with PowerShell

There are times when you want to securely save and retrieve information in PowerShell. Saving things like passwords and other credentials to plain text is not a good idea at all. To avoid that, you can use the Secure string feature of PowerShell. The most common way to do this is via the command:

$Secure = Read-Host -AsSecureString

This creates a secure string. After you enter the command, any characters that you type are converted into a secure string and then saved in the $secure variable. With this command, the characters you enter are not displayed on the screen.

image

Because the $secure variable contains a secure string, PowerShell displays only the System.Security.SecureString text when you try and view it. So the information to be secured is now saved as a protected variable called $secure in PowerShell. How can this now be written securely to a file so that it can be re-used later and still remain protected, even on the disk?

You can use the Export-Clixml command; a valuable use of this on Windows computers is to export credentials and secure strings securely as XML.

Thus, a better way to capture the value you want to save securely is probably via:

$Secure = Get-Credential -Credential ClientID

image

This will prompt you for the information as shown above. You will note that the User name field has already been completed thanks to the -Credential parameter.

This will then give you a variable with a username (here ClientID) and a secure string that is a PowerShell credential.

You can then save the information via:

$Secure | Export-Clixml -Path .\clientid.xml

image

If Export-Clixml is used to save that variable to a file (here clientid.xml), it will save it as shown above. You will note that the Password field is encrypted. This is where the secure information is kept, which is great, since it is now encrypted on disk.

The other great thing about using Export-Clixml is that:

The Export-Clixml cmdlet encrypts credential objects by using the Windows Data Protection API. The encryption ensures that only your user account on only that computer can decrypt the contents of the credential object. The exported CLIXML file can’t be used on a different computer or by a different user.

image

Thus, if the file with the saved and encrypted information is copied and used by another login on the same machine or on a different machine, you get the above result. Basically, it can’t be decrypted.

Of course, this isn’t perfect, but it does mean that once you have saved the information using the above technique, the only way it can be decrypted is via the same login on the same machine. This means you don’t need to have secure values saved as plain text inside scripts or in unprotected files on disk that can be copied and used anywhere. With this technique, information saved to a file is encrypted and cannot be used by any other user or on any other machine. Thus, if someone got hold of the file, the information couldn’t be viewed or decrypted, and access would be denied.
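Putting the save and retrieve halves together, the whole technique described above is only a few lines. The names ClientID and clientid.xml are just the examples used in this article:

```powershell
# End-to-end sketch: save a value encrypted with DPAPI, then read it back.
# Decryption only works for the same user on the same machine.

# --- Save (run once) ---
$secure = Get-Credential -Credential ClientID   # prompts for the value
$secure | Export-Clixml -Path .\clientid.xml    # password field is encrypted

# --- Retrieve (in a later script) ---
$creds = Import-Clixml -Path .\clientid.xml
$plain = $creds.GetNetworkCredential().Password  # plain text, use with care
```

That is all there is to it: one prompt up front, and from then on your scripts can pull the secret from disk without ever holding it in plain text in the script itself.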

Hopefully, that should allow you to develop more secure PowerShell scripting.