Power Platform Community Monthly Webinar – November 2021


Join us for our monthly Power Platform webinar where we share the latest news and updates from the Microsoft Power Platform plus take a deeper dive into Power Virtual Agents.

You can register now via:


If you wish to join our community and be part of the regular discussion and participation on the Microsoft Power Platform you can join via:


(look for the Power Platform option here to join us).
We look forward to seeing you on the webinar.

Add TAXII threat intelligence feeds to Azure Sentinel


There are public threat intelligence feeds available that Azure Sentinel can take advantage of. Think of these as providing information about entities that represent threats, such as compromised IP addresses, botnet domains and so on. Typically, these feeds will support the TAXII connector inside Azure Sentinel.
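To make the idea concrete, here is a minimal, hand-built sketch of the kind of STIX 2 indicator object such a feed typically serves (Sentinel stores these as threat indicators). Every value below is a hypothetical placeholder, not real threat data:

```python
# A minimal, hand-built example of a STIX 2 indicator, the kind of object a
# TAXII threat feed serves and Sentinel stores as a threat indicator.
# Every value here is a hypothetical placeholder, not real threat data.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "created": "2021-11-01T00:00:00.000Z",
    "valid_from": "2021-11-01T00:00:00Z",
    "pattern": "[ipv4-addr:value = '']",  # RFC 5737 documentation address
    "pattern_type": "stix",
    "labels": ["malicious-activity"],
}

# The pattern is what gets matched against activity in your environment.
print(indicator["pattern"])  # prints [ipv4-addr:value = '']
```

The pattern field is the part Sentinel ultimately matches against signals from your environment.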

Select the Data connectors option from the Azure Sentinel menu on the left. Next search for TAXII. Finally, select Threat Intelligence as shown above, then the Open connector page in the lower right.


On the right hand side of the page, you see the Configuration area as shown above. Here you’ll require information on the following items:

– API root URL

– Collection ID

– Username

– Password

If you have a look at the bottom of this article:

Connect Azure Sentinel to STIX/TAXII threat intelligence feeds

you’ll find the details for the free Anomali Limo threat feed, which is:

– API root URL = https://limo.anomali.com/api/v1/taxii2/feeds/

– Collection ID = see list in article (e.g. 107, 135, 136, etc.)

For the username and password of all these feeds use:



Complete the details for each Collection ID, as shown above. When you have completed each feed, select the Add button.


The feed will be validated and if successful, an alert will confirm this at the top of the page as shown above.


The entered feed will appear in a list at the bottom of the page. At this point, feed information will start flowing into your environment depending on the Polling Frequency you selected for the feed.


Once the feed has started to import data, select Threat intelligence from the main Sentinel menu as shown above. You should see a range of entries. These entries can now be utilised throughout your Sentinel environment. This is detailed here:

Work with threat indicators in Azure Sentinel


This data is stored in a table called ThreatIntelligenceIndicator as shown above, which can be used directly in hunting queries.

Keep in mind that any threat indicator data incurs an ingestion and data storage cost. However, this is not a great amount and the value they provide is well worth that minor cost. You can track threat indicator costs using workbooks that Sentinel provides. You can add more feeds if you wish and details about what is available can be found at:

Threat intelligence integration in Azure Sentinel

Having additional threat data provides more signal information for Sentinel when it examines your environment. If information from these threat indicators is detected in your environment, then alerts will be generated. For a small ingestion and storage cost, having these threat indicators flow into your Sentinel environment provides a huge amount of value and increases your security.

Intelligence not Information


I use the above diagram to help people understand where they should be investing human capital when it comes to security. I see too many people who are responsible for security focused at the Information (top and widest) level of the above diagram.

The Information level is a constant deluge of independent and uncorrelated signals. At this level I would suggest that probably 95% or more of these signals are benign or should be ignored. Thus, if you are investing precious human capital at this level, you are wasting 95% or more of it.

The Information level in today’s security environment is where the machine (aka software) provides the greatest return on investment. This is because the machine can constantly evaluate every signal that arrives, impartially, consistently and tirelessly. It also doesn’t care that 95% of the signals it evaluates have little or no value. It can also do this 24/7/365. It will continue to do this faster and faster with the passage of time.

The Policy level can take these raw signals and produce results to better secure the organisation. For example, a Data Loss Prevention (DLP) policy can evaluate the usage of a document and its contents, then determine whether to allow or block access. The machine can’t create the DLP policy, but it can very effectively evaluate it and take action. The human adds value to the equation by creating the policy the machine implements.

The Condition level can further use policies, like Conditional Access (CA), based on multiple signals, i.e. where a device is connecting from, what information it wants to access, who the user requesting it is and so on, to determine whether access should be granted. Once again, the machine doesn’t craft the policy but evaluates and enforces it constantly. And again, the human adds value to the equation by creating the policy against which the machine evaluates all the combined signals.

Hopefully, you can see my argument here, that the further down the triangle you go, the more effectively human capital is utilised. Conversely, the further up the triangle the more efficient it is for the machine. At the Events level, services like Microsoft Cloud App Security (MCAS) align signals into a format that is much easier for a human to digest and evaluate. Here the machine looks up signals such as IP locations and usages automatically to provide even more data for human assessment.

The machine can thus digest the raw information, then use techniques such as Artificial Intelligence (AI) and Machine Learning (ML) to refine the information and make it more relevant. That is, add value. This allows the human to apply what they are best at, on the highest quality information, not the lowest. The precious human analysis effort is deployed where it has the most impact, in a pool of refined and relevant information that has been culled of low quality results.

I would suggest that the relevancy of signals at the Intelligence level, using tools like Azure Sentinel, is much greater than the mere 5% I suggested as a benchmark at the Information layer. But even if it were just 5%, the value of that 5% is far higher, because the total value of the signals at this level is much, much greater than at the layers above and there are far fewer of them to examine. If the human has the same amount of time and cognitive load to invest at any level, doing so at the Intelligence level allows them to spend far more time on each individual item. And as anyone who produces quality work will tell you, a quality output requires an investment of time.

As with unread email items in an inbox, many people love to make themselves feel important by pointing to how many emails they are receiving. The number of emails you receive or have accumulated is totally irrelevant! What is important is the VALUE of the information, NOT the quantity. So it also is with security. Overwhelming yourself with signals from many different systems doesn’t equate to better security. If anything, it introduces greater fatigue, distraction and inconsistency, leading to much poorer security.

We live in a world with more information coming at it daily than at any point in history. Tomorrow there will be even more, and so on. That growth is only going to accelerate. You cannot approach this modern environment with old approaches such as drowning yourself in low value signals. There are simply too many, and at some point nothing more gets processed due to overwhelm. The smart move is to use technology efficiently. Put it to work on the repetitive and mundane tasks that humans are not good at, or simply don’t like doing. Move down the levels until you have systems that give you intelligence rather than swamping you in a sea of information. After all, isn’t NOT doing this just a self-imposed DDOS (distributed denial of service) attack?

CIAOPS Need to Know Microsoft 365 Webinar – October


Join me for the free monthly CIAOPS Need to Know webinar. Along with all the Microsoft Cloud news we’ll be taking a look at using eDiscovery and Content search in your environment.

Shortly after registering you should receive an automated email from Microsoft Teams confirming your registration, including all the event details as well as a calendar invite! Yeah Teams webinars.

You can register for the regular monthly webinar here:

October Webinar Registrations

The details are:

CIAOPS Need to Know Webinar – October 2021
Friday 29th of October 2021
11.00am – 12.00pm Sydney Time

All sessions are recorded and posted to the CIAOPS Academy.

The CIAOPS Need to Know Webinars are free to attend but if you want to receive the recording of the session you need to sign up as a CIAOPS patron which you can do here:


or purchase them individually at:


Also feel free at any stage to email me directly via director@ciaops.com with your webinar topic suggestions.

I’d also appreciate you sharing information about this webinar with anyone you feel may benefit from the session and I look forward to seeing you there.

Implementing Windows Defender Application Control (WDAC)–Part 3

This post is part of a series focused on Windows Defender Application Control (WDAC). The previous article can be found here:

Understanding Policy Rules

In this article I’ll continue looking at the XML used to create WDAC policies. Specifically, I’ll focus on the EKU block.


If you open up the XML policy file that we have been working through so far, you’ll effectively find just a placeholder for EKUs as shown above.


If you look at another, more complete, WDAC policy, you’ll see that the EKU block is populated as shown above. The block reads like:

    <EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification -" />
    <EKU ID="ID_EKU_ELAM" Value="010A2B0601040182373D0401" FriendlyName="Early Launch Antimalware Driver -" />
    <EKU ID="ID_EKU_HAL_EXT" Value="010a2b0601040182373d0501" FriendlyName="HAL Extension -" />
    <EKU ID="ID_EKU_WHQL" Value="010A2B0601040182370A0305" FriendlyName="Windows Hardware Driver Verification -" />
    <EKU ID="ID_EKU_STORE" Value="010a2b0601040182374c0301" FriendlyName="Windows Store EKU - Windows Store" />
    <EKU ID="ID_EKU_RT_EXT" Value="010a2b0601040182370a0315" FriendlyName="Windows RT Verification -" />
    <EKU ID="ID_EKU_DCODEGEN" Value="010A2B0601040182374C0501" FriendlyName="Dynamic Code Generation EKU -" />
    <EKU ID="ID_EKU_AM" Value="010a2b0601040182374c0b01" FriendlyName="AntiMalware EKU -" />

I am no expert, but in essence this is telling the WDAC policy about trusted Microsoft certificates for the environment.

To simplify let’s look at:

<EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification -" />

From what I understand, this refers to a capability of PKI-style certificates. Trusted certificates are used to sign each file on a Windows 10 device to ensure it is original and has not been tampered with.

The Object Identifier (OID) number helps identify who the certificate is from. If you look at this article:

Object Identifiers (OID) in PKI

you’ll learn that a certificate whose OID begins with the Microsoft prefix is from Microsoft, and that this specific certificate OID is for Windows System Component Verification.


If we now dig into a typical Windows system file that we want to ensure is secure:


and examine that file’s properties via Digital Signatures, Details, View Certificate as shown above, we see that this certificate can be used for:

– Ensuring the software came from the software publisher

– Protecting the software from alteration after publication

Which is exactly what functionality we are after.


If we now look at the certificate Details, then select the field Enhanced Key Usage (EKU) as shown above we see:

Windows System Component Verification, which matches what we found in the EKU block in the WDAC policy in the lower box.

I will say that I am no expert on how certificates work and how they exactly interact with file verification, but all we need to know, in essence, is that the EKUs in the WDAC XML file tell the policy which certificates to trust when evaluating whether to trust a file. If the file in question is signed with a certificate that is in the EKU list, then that file will be trusted. This makes it easy to trust a large number of files from Microsoft, which is good, as we need to trust Windows system files for the device to boot.

<EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification -" />

Returning to the EKU line in question from the WDAC policy file, we note that:


This is, as I again understand it, the internal Microsoft identification for the certificate in question. EKU instances have a “Value” attribute consisting of an encoded OID. The process for this Object Identifier (OID) encoding is detailed here:

Object Identifier

which, I must say, is very complex. Luckily, I found this PowerShell function:


which you can use to convert. You first need to change the leading 01 to an 06; as best I can tell, this is because a DER-encoded OID begins with the tag byte 06, so the substitution gives the decoder the structure it expects. Thus, to view all the Object IDs using the PowerShell function above you can use the following code:


Which provides the following result:


Note: the last entry in the list is the AntiMalware EKU.
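Out of interest, the conversion that the linked PowerShell function performs can be sketched in Python. This is my own illustrative implementation of standard base-128 OID decoding, not the linked function itself; it simply skips the leading tag byte rather than rewriting 01 to 06:

```python
# Decode the "Value" attribute of a WDAC EKU entry into a dotted OID string.
# The value is a hex string: one tag byte (01 in the policy files, 06 in
# standard DER), one length byte, then base-128 encoded OID components.

def wdac_eku_value_to_oid(value_hex: str) -> str:
    data = bytes.fromhex(value_hex)
    body = data[2:2 + data[1]]             # skip the tag and length bytes
    parts = [body[0] // 40, body[0] % 40]  # first byte packs the first two arcs
    n = 0
    for b in body[1:]:
        n = (n << 7) | (b & 0x7F)          # accumulate 7 bits per byte
        if not b & 0x80:                   # a clear high bit ends the component
            parts.append(n)
            n = 0
    return ".".join(str(p) for p in parts)

# The ID_EKU_WINDOWS value from the EKU block earlier:
print(wdac_eku_value_to_oid("010A2B0601040182370A0306"))  # prints 1.3.6.1.4.1.311.10.3.6
```

The 1.3.6.1.4.1.311 prefix that comes out is Microsoft’s registered OID arc, which matches what the OID article referenced earlier describes.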

You can select which EKUs you wish to include in the WDAC XML file, but in this case I will include them all.


The best way to add these to the policy, from what I have found so far, is simply to edit the XML and add it. After you do this the modified policy XML file will appear like shown above.

Most of WDAC relies on certificates and verification of signing. I will readily admit that I don’t have a full appreciation of how the world of certificates works, but I hope you, like me, are satisfied enough with what I have detailed here.

So, in summary, the EKU block in the WDAC policy specifies known certificates from Microsoft that are used to sign the Windows system files we want to trust on the device. Thus, by trusting those certificates we can trust files signed by them. Using the EKU block in the policy allows us to do this for many Microsoft system files quickly and easily, and is why, as a best practice, we should include it in the policy.

The next block in the XML policy to focus on will be covered in the next article in the series:

Part 4 – Specifying File Rules

Implementing Windows Defender Application Control (WDAC)–Part 2

This post is part of a series focused on Windows Defender Application Control (WDAC). The previous article can be found here:


In this article I’m going to start looking at the XML you use to create policies.

WDAC policies are composed using XML format. You can view a number of example policies on any Windows 10 device by navigating to:


and looking at the file I’ll be starting this process with:



The first item I want to examine is the Rules block, contained within the <Rules></Rules> tags, as shown above.

Information about the available rules is contained here:

Policy rule options

As a best practice, I would suggest the following rule options should be set:

0 Enabled:UMCI – Enabling this rule option validates user mode executables and scripts.

2 Required:WHQL – Enabling this rule requires that every executed driver is WHQL signed and removes legacy driver support. Kernel drivers built for Windows 10 should be WHQL certified.

3 Enabled:Audit Mode – Instructs WDAC to log information about applications, binaries, and scripts that would have been blocked if the policy was enforced. To enforce the policy rather than just have it log requires removing this option. That will come later in our process, so for now we’ll only be logging the results of the policy.

4 Disabled:Flight Signing – WDAC policies will not trust flightroot-signed binaries. This option would be used by organizations that only want to run released binaries, not pre-release Windows builds. In short, we don’t want to support Windows Insider builds with our policy.

6 Enabled:Unsigned System Integrity Policy – Allows the policy to remain unsigned. When this option is removed, the policy must be signed and the certificates that are trusted for future policy updates must be identified in the UpdatePolicySigners section.

8 Required:EV Signers – This rule requires that drivers must be WHQL (Windows Hardware Quality Labs) signed and have been submitted by a partner with an Extended Verification (EV) certificate. All Windows 10 and Windows 11 drivers will meet this requirement.

9 Enabled:Advanced Boot Options Menu – Setting this rule option allows the F8 menu to appear to physically present users. This is a handy recovery option but may be a security concern if physical access to the device is available for an attacker.

10 Enabled:Boot Audit on Failure – Used when the WDAC policy is in enforcement mode. When a driver fails during startup, the WDAC policy will be placed in audit mode so that Windows will load.

12 Required:Enforce Store Applications –  WDAC policies will also apply to Universal Windows applications.

13 Enabled:Managed Installer – Automatically allow applications installed by a managed installer. For more information, see Authorize apps deployed with a WDAC managed installer

14 Enabled:Intelligent Security Graph Authorization – Automatically allow applications with “known good” reputation as defined by Microsoft’s Intelligent Security Graph (ISG).

15 Enabled:Invalidate EAs on Reboot – When the Intelligent Security Graph (above)  is used, WDAC sets an extended file attribute that indicates that the file was authorized to run. This option will cause WDAC to periodically revalidate the reputation for files that were authorized by the ISG.

16 Enabled:Update Policy No Reboot – allow future WDAC policy updates to apply without requiring a system reboot.

Since we are going to be modifying the WDAC policy, best practice is to take a copy of the example start point XML file and modify that. Thus, take a copy of:


and use this going forward.

There are two methods of adjusting this new policy. You can directly edit the XML file or you can make the modifications using PowerShell.

To add a policy rule option add the appropriate rule setting encapsulated by <Rule></Rule> and <Option></Option> tags within the <Rules></Rules> global tag like so:



  <Rule>
    <Option>Enabled:Audit Mode</Option>
  </Rule>




and like what is shown above.
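If you want to sanity-check which rule options a policy file contains, the same structure can also be read programmatically. Here is a small standard-library Python sketch of my own, run against a cut-down, hypothetical policy fragment in the same shape (urn:schemas-microsoft-com:sipolicy is, as I understand it, the namespace WDAC policy files declare):

```python
# List the rule options present in a WDAC policy's <Rules> block.
# The policy fragment below is a cut-down, hypothetical sample in the same
# shape as the real files; point ET.parse at an actual policy file to use it.
import xml.etree.ElementTree as ET

sample_policy = """
<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <Rules>
    <Rule><Option>Enabled:UMCI</Option></Rule>
    <Rule><Option>Enabled:Audit Mode</Option></Rule>
    <Rule><Option>Required:WHQL</Option></Rule>
  </Rules>
</SiPolicy>
"""

ns = {"si": "urn:schemas-microsoft-com:sipolicy"}
root = ET.fromstring(sample_policy)
options = [o.text for o in root.findall("si:Rules/si:Rule/si:Option", ns)]
print(options)  # prints ['Enabled:UMCI', 'Enabled:Audit Mode', 'Required:WHQL']
```

This is handy as a quick check that an edit (whether made by hand or by PowerShell) actually landed in the file.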

You can also make the same updates using PowerShell. The new, copied XML prior to modification appears like:


with five rules. To add the additional rules detailed above use the following:

Set-RuleOption -FilePath ".\newaudit.xml" -Option 0
Set-RuleOption -FilePath ".\newaudit.xml" -Option 2
Set-RuleOption -FilePath ".\newaudit.xml" -Option 3
Set-RuleOption -FilePath ".\newaudit.xml" -Option 4
Set-RuleOption -FilePath ".\newaudit.xml" -Option 6
Set-RuleOption -FilePath ".\newaudit.xml" -Option 8
Set-RuleOption -FilePath ".\newaudit.xml" -Option 10
Set-RuleOption -FilePath ".\newaudit.xml" -Option 12
Set-RuleOption -FilePath ".\newaudit.xml" -Option 13
Set-RuleOption -FilePath ".\newaudit.xml" -Option 14
Set-RuleOption -FilePath ".\newaudit.xml" -Option 15
Set-RuleOption -FilePath ".\newaudit.xml" -Option 16

where newaudit.xml is the policy file being built. The numbers for each option correspond to those listed in:

Policy rule options


If you open the XML policy file you should now see all the rule options have been added as shown above (13 in all).

In summary then, we have taken an example policy provided by Microsoft and started to modify it to suit our needs. This modification can either be done by directly editing the XML file or using PowerShell commands. I’d suggest PowerShell is a better way because you can save all the commands together in a script and re-run it if you wish.

With this first modification of the policy complete we’ll next look at how to work with the EKU area in the XML file.

Part 3 – Understanding EKUs