Providing feedback on user reported messages

Hopefully, you are aware that Microsoft 365 provides users the ability to report a suspected email. I have spoken about this here:

Improved security is a shared responsibility

image

What you may not be aware of is that these submissions can be viewed and actioned in the Microsoft Security Center:

https://security.microsoft.com

under the Submissions menu option as shown above.

You may also not be aware that there are further actions you can take in here:

image

You can provide feedback directly to the user about their submission using the Mark as and notify option as shown above.

image

Doing so will send the user an email, like that shown above, providing feedback about their submission. This provides important reinforcement, both encouraging users to remain vigilant and helping them better identify threats.

image

You’ll also find actions you can take on that message that will provide feedback directly to Microsoft, as shown above.

image

Even better, if you go into Policies & rules | Threat Policies | User submissions you are able to customise what is sent to the user, both before and after reporting, as shown above.

For more information on these capabilities visit:

Admin review for reported messages

Getting users involved in security is important. Part of that is providing them feedback and recognition of their contribution, no matter how small. Using these capabilities for reported messages, you are able to do that quickly and easily.

Add TAXII threat intelligence feeds to Azure Sentinel

image

There are public threat intelligence feeds available that Azure Sentinel can take advantage of. Think of these as providing information about entities that represent threats, such as compromised IP addresses, botnet domains and so on. Typically, these feeds will support the TAXII connector inside Azure Sentinel.
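To make this concrete, here is a small Python sketch showing the kind of STIX 2.0 indicator object a TAXII feed typically delivers, and how the threat entity can be pulled out of its pattern. The indicator itself is made-up sample data for illustration, not output from a real feed:

```python
import json
import re

# A simplified STIX 2.0 indicator, similar in shape to what a
# TAXII feed delivers (hypothetical sample data for illustration)
indicator = json.loads("""{
  "type": "indicator",
  "labels": ["malicious-activity"],
  "pattern": "[ipv4-addr:value = '198.51.100.7']"
}""")

# Pull the observable type and value out of the STIX pattern
match = re.search(r"\[(\S+)\s*=\s*'([^']+)'\]", indicator["pattern"])
obs_type, value = match.groups()
print(obs_type, value)  # ipv4-addr:value 198.51.100.7
```

Connectors like the one described below do this kind of extraction for you at scale; the point here is simply what the raw signal looks like.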

Select the Data connectors option from the Azure Sentinel menu on the left. Next search for TAXII. Finally, select Threat Intelligence as shown above, then the Open connector page in the lower right.

image

On the right hand side of the page, you see the Configuration area as shown above. Here you’ll require information on the following items:

– API root URL

– Collection ID

– Username

– Password

If you have a look at the bottom of this article:

Connect Azure Sentinel to STIX/TAXII threat intelligence feeds

you’ll find the details for the free Anomali Limo threat feed, which is:

– API root URL = https://limo.anomali.com/api/v1/taxii2/feeds/

– Collection ID = see list in article (e.g. 107, 135, 136, etc.)

For the username and password of all these feeds use:

guest

image

Complete the details for each Collection ID, as shown above. When you have completed each feed, select the Add button.

image

The feed will be validated and if successful, an alert will confirm this at the top of the page as shown above.

image

The entered feed will appear in a list at the bottom of the page. At this point, feed information will start flowing into your environment depending on the Polling Frequency you selected for the feed.

image

Once the feed has started to import data, select Threat intelligence from the main Sentinel menu as shown above. You should see a range of entries. These entries can now be utilised throughout your Sentinel environment. This is detailed here:

Work with threat indicators in Azure Sentinel

image

This data is stored in a table called ThreatIntelligenceIndicator as shown above, which can be used directly in hunting queries.

Keep in mind that any threat indicator data incurs an ingestion and data storage cost. However, this is not a great amount and the value they provide is well worth that minor cost. You can track threat indicator costs using workbooks that Sentinel provides. You can add more feeds if you wish and details about what is available can be found at:

Threat intelligence integration in Azure Sentinel

Having additional threat data provides more signal information for Sentinel when it examines your environment. If information from these threat indicators is detected in your environment, then alerts will be generated. For a small ingest and storage cost, having these threat indicators flow into your Sentinel environment provides a huge amount of value and increases your security.

Intelligence not Information

image

I use the above diagram to help people understand where they should be investing human capital when it comes to security. I see too many people who are responsible for security focused at the Information (top and widest) level of the above diagram.

The Information level is a constant deluge of independent and uncorrelated signals. At this level I would suggest that probably 95% or more of these signals are benign or should be ignored. Thus, if you are investing precious human capital at this level, you are wasting 95% of that or more.

The Information level in today’s security environment is where the machine (aka software) provides the greatest return on investment. This is because the machine can constantly evaluate every signal that arrives, impartially, consistently and tirelessly. It also doesn’t care that 95% of the signals it evaluates have little or no value. It can also do this 24/7/365, and it will continue to do this faster and faster with the passage of time.

The Policy level can take these raw signals and produce results to better secure the organisation. For example, a Data Loss Prevention (DLP) policy can evaluate the usage of a document and its contents, then determine whether to allow or block access. The machine can’t create the DLP policy, but it can very effectively evaluate it and take action. The human adds value to the equation by creating the policy the machine implements.

The Condition level can further use policies, like Conditional Access (CA), based on multiple signals (i.e. where a device is connecting from, what information it wants access to, who the requesting user is, and so on) to then determine whether access should be granted. Once again, the machine doesn’t craft the policy but evaluates and enforces it constantly. Once again, the human adds value to the equation by creating the policy the machine evaluates all the combined signals against.

Hopefully, you can see my argument here: the further down the triangle you go, the more effectively human capital is utilised. Conversely, the further up the triangle, the more efficient it is to use the machine. At the Events level, services like Microsoft Cloud App Security (MCAS) align signals into a format that is much easier for a human to digest and evaluate. Here the machine looks up signals such as IP locations and usage automatically to provide even more data for human assessment.

The machine can thus digest the raw information, then use techniques such as Artificial Intelligence (AI) and Machine Learning (ML) to refine the information and make it more relevant. That is, add value. This allows the human to apply what they are best at to the highest quality information, not the lowest. The precious human analysis effort is deployed where it has the most impact: in a pool of refined and relevant information that has been culled of low quality results.

I would suggest that the relevancy of signals at the Intelligence level, using tools like Azure Sentinel, is much greater than the mere 5% I suggested as a benchmark at the Information layer. But even if it was just 5%, the value of that 5% is far higher, because the total value of the signals at this level is much, much greater than at lower levels and there are far fewer of them to examine. If the human has the same amount of time and cognitive load to invest at any level, doing so at the Intelligence level allows them to spend far more time on each individual item. Anyone who produces quality output will tell you that it requires an investment of time.

As with unread email items in an inbox, many people love to make themselves feel important by pointing to how many emails they are receiving. The number of emails you receive or have accumulated is totally irrelevant! What is important is the VALUE of the information, NOT the quantity. So it also is with security. Overwhelming yourself with signals from many different systems doesn’t equate to better security. If anything, it introduces greater fatigue, distraction and inconsistency, leading to much poorer security.

We live in a world that has more information coming at it daily than there has ever been in history. Tomorrow there will be even more, and so on and so on. That growth is only going to accelerate. You cannot approach this modern environment with old approaches such as drowning yourself in low value signals. There are simply too many, and at some point nothing more gets processed due to overwhelm. The smart move is to use technology efficiently. Put it to work on the repetitive and mundane tasks that humans are not good at, or like doing even less. Move down the levels until you have systems that give you intelligence rather than swamping you in a sea of information. After all, isn’t NOT doing this just a self imposed DDOS (distributed denial of service) attack?

Implementing Windows Defender Application Control (WDAC)–Part 3

This post is part of a series focused on Windows Defender Application Control (WDAC). The previous article can be found here:

Understanding Policy Rules

In this article I’ll continue looking at the XML used to create WDAC policies. Specifically, I’ll focus on the EKU block.

image

If you open up the XML policy file that we have been working through so far, you’ll effectively find just a placeholder for EKUs as shown above.

image

If you look at another, more complete, WDAC policy, you’ll see that the EKU block is populated as shown above. The block reads like:

<EKUs>
    <EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification – 1.3.6.1.4.1.311.10.3.6" />
    <EKU ID="ID_EKU_ELAM" Value="010A2B0601040182373D0401" FriendlyName="Early Launch Antimalware Driver – 1.3.6.1.4.1.311.61.4.1" />
    <EKU ID="ID_EKU_HAL_EXT" Value="010a2b0601040182373d0501" FriendlyName="HAL Extension – 1.3.6.1.4.1.311.61.5.1" />
    <EKU ID="ID_EKU_WHQL" Value="010A2B0601040182370A0305" FriendlyName="Windows Hardware Driver Verification – 1.3.6.1.4.1.311.10.3.5" />
    <EKU ID="ID_EKU_STORE" Value="010a2b0601040182374c0301" FriendlyName="Windows Store EKU – 1.3.6.1.4.1.311.76.3.1 Windows Store" />
    <EKU ID="ID_EKU_RT_EXT" Value="010a2b0601040182370a0315" FriendlyName="Windows RT Verification – 1.3.6.1.4.1.311.10.3.21" />
    <EKU ID="ID_EKU_DCODEGEN" Value="010A2B0601040182374C0501" FriendlyName="Dynamic Code Generation EKU – 1.3.6.1.4.1.311.76.5.1" />
    <EKU ID="ID_EKU_AM" Value="010a2b0601040182374c0b01" FriendlyName="AntiMalware EKU – 1.3.6.1.4.1.311.76.11.1" />
  </EKUs>

I am no expert, but in essence this is telling the WDAC policy about trusted Microsoft certificates for the environment.

To simplify let’s look at:

<EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification – 1.3.6.1.4.1.311.10.3.6" />

From what I understand, this refers to PKI-style certificate capability. Trusted certificates are used to sign each file on a Windows 10 device to ensure it is original and has not been tampered with.

The Object Identifier (OID) number, 1.3.6.1.4.1.311.10.3.6, helps identify who the certificate is from. If you look at this article:

Object Identifiers (OID) in PKI

you’ll learn that an OID that starts with 1.3.6.1.4.1.311 is from Microsoft, and that the specific OID 1.3.6.1.4.1.311.10.3.6 is for Windows System Component Verification.

image

If we now dig into a typical Windows system file that we want to ensure is secure:

c:\windows\system32\kernel32.dll

and examine that file’s properties (Digital Signatures, Details, View Certificate) as shown above, we see that this certificate can be used to:

– Ensure the software came from the software publisher

– Protect the software from alteration after publication

Which is exactly the functionality we are after.

image

If we now look at the certificate Details, then select the field Enhanced Key Usage (EKU) as shown above we see:

Windows System Component Verification (1.3.6.1.4.1.311.10.3.6) which matches what we found in the EKU block in the WDAC policy in the lower box.

I will say that I am no expert on certificates and how exactly they interact with file verification, but all we need to know, in essence, is that the EKUs in the WDAC XML file tell the policy which certificates to trust when evaluating whether to trust a file. If the file in question is signed with a certificate that is in the EKU list, then that file will be trusted. This makes it easy to trust a large number of files from Microsoft, which is good, as we need to trust Windows system files for the device to boot.

<EKU ID="ID_EKU_WINDOWS" Value="010A2B0601040182370A0306" FriendlyName="Windows System Component Verification – 1.3.6.1.4.1.311.10.3.6" />

Returning to the EKU line in question from the WDAC policy file, we note that:

Value="010A2B0601040182370A0306"

This is, as I again understand it, the internal Microsoft identification for the certificate in question. EKU instances have a “Value” attribute consisting of an encoded OID. The process for this Object Identifier (OID) encoding is detailed here:

Object Identifier

which, I must say, is very complex. Luckily, I found this PowerShell function:

https://gist.github.com/mattifestation/5bdcdbadfc4070f9191705853c5481da

which you can use to convert. Note that you need to change the leading 01 to an 06 first: as far as I can tell, 06 is the standard tag byte for an OID in DER encoding, and WDAC policy values replace it with 01. Thus, to view all the Object IDs using the PowerShell function above you can use the following code:
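For illustration, here is a rough Python sketch of the same encoding in the forward direction. This is my own sketch, not part of any official tooling: it DER-encodes a dotted OID and then swaps the standard 06 OID tag byte for the 01 that WDAC policy values begin with:

```python
def wdac_eku_value(oid: str) -> str:
    """DER-encode a dotted OID, then replace the standard 0x06 OID
    tag byte with the 0x01 that WDAC policy EKU values use."""
    arcs = [int(p) for p in oid.split(".")]
    body = bytearray([arcs[0] * 40 + arcs[1]])  # first two arcs share one byte
    for arc in arcs[2:]:
        # base-128 encoding, high bit set on every byte except the last
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    der = bytes([0x06, len(body)]) + bytes(body)  # tag, length, content
    return "01" + der.hex().upper()[2:]           # swap tag 06 -> 01

print(wdac_eku_value("1.3.6.1.4.1.311.10.3.6"))  # 010A2B0601040182370A0306
```

The output matches the Value attribute for Windows System Component Verification in the EKU block earlier, which is a handy sanity check when adding your own entries.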

image

Which provides the following result:

image

Note: 1.3.6.1.4.1.311.76.11.1 = AntiMalware EKU

You can select which EKUs you wish to include in the WDAC XML file, but in this case I will include them all.

image

The best way to add these to the policy, from what I have found so far, is simply to edit the XML and add them. After you do this, the modified policy XML file will appear as shown above.

Most of WDAC relies on certificates and verification of signing. I will readily admit that I don’t have a full appreciation for how the world of certificates works, but I hope you, like me, are satisfied enough with what I have detailed here.

So, in summary, the EKU block in the WDAC policy specifies known certificates from Microsoft that are used to sign the Windows system files we want to trust on the device. By trusting those certificates, we can trust the files signed by them. Using the EKU block in the policy allows us to do this for many Microsoft system files quickly and easily, which is why, as a best practice, we should include it in the policy.

The next block in the XML policy to focus on will be covered in the next article in the series:

Part 4 – Specifying File Rules

Implementing Windows Defender Application Control (WDAC)–Part 2

This post is part of a series focused on Windows Defender Application Control (WDAC). The previous article can be found here:

Introduction

In this article I’m going to start looking at the XML you use to create policies.

WDAC policies are composed using XML format. You can view a number of example policies on any Windows 10 device by navigating to:

C:\Windows\schemas\CodeIntegrity\ExamplePolicies\

and looking at the file I’ll be starting this process with:

denyallaudit.xml

image

The first item I want to examine is the Rules block: the <Rules></Rules> tags and everything in between, as shown above.

Information about the available rules is contained here:

Policy rule options

As a best practice, I would suggest the following rule options should be set:

0 Enabled:UMCI – Enabling this rule option validates user mode executables and scripts.

2 Required:WHQL – Enabling this rule requires that every executed driver is WHQL signed and removes legacy driver support. Kernel drivers built for Windows 10 should be WHQL certified.

3 Enabled:Audit Mode – Instructs WDAC to log information about applications, binaries, and scripts that would have been blocked if the policy was enforced. Enforcing the policy, rather than just logging, requires removing this option. That will come later in our process, so for now we’ll only be logging the results of the policy.

4 Disabled:Flight Signing – WDAC policies will not trust flightroot-signed binaries. This option would be used by organizations that only want to run released binaries, not pre-release Windows builds. In short, we don’t want our policy to support Windows Insider builds.

6 Enabled:Unsigned System Integrity Policy – Allows the policy to remain unsigned. When this option is removed, the policy must be signed and the certificates that are trusted for future policy updates must be identified in the UpdatePolicySigners section.

8 Required:EV Signers – This rule requires that drivers must be WHQL (Windows Hardware Quality Labs) signed and have been submitted by a partner with an Extended Validation (EV) certificate. All Windows 10 and Windows 11 drivers will meet this requirement.

9 Enabled:Advanced Boot Options Menu – Setting this rule option allows the F8 menu to appear to physically present users. This is a handy recovery option but may be a security concern if physical access to the device is available for an attacker.

10 Enabled:Boot Audit on Failure – Used when the WDAC policy is in enforcement mode. When a driver fails during startup, the WDAC policy will be placed in audit mode so that Windows will load.

12 Required:Enforce Store Applications –  WDAC policies will also apply to Universal Windows applications.

13 Enabled:Managed Installer – Automatically allow applications installed by a managed installer. For more information, see Authorize apps deployed with a WDAC managed installer

14 Enabled:Intelligent Security Graph Authorization – Automatically allow applications with “known good” reputation as defined by Microsoft’s Intelligent Security Graph (ISG).

15 Enabled:Invalidate EAs on Reboot – When the Intelligent Security Graph (above)  is used, WDAC sets an extended file attribute that indicates that the file was authorized to run. This option will cause WDAC to periodically revalidate the reputation for files that were authorized by the ISG.

16 Enabled:Update Policy No Reboot – Allows future WDAC policy updates to apply without requiring a system reboot.

Since we are going to be modifying the WDAC policy, best practice is to take a copy of the example start point XML file and modify that. Thus, take a copy of:

C:\Windows\schemas\CodeIntegrity\ExamplePolicies\denyallaudit.xml

and use this going forward.

There are two methods of adjusting this new policy. You can directly edit the XML file or you can make the modifications using PowerShell.

To add a policy rule option, add the appropriate rule setting encapsulated by <Rule></Rule> and <Option></Option> tags within the <Rules></Rules> global tag like so:

<Rules>

  <Rule>

    <Option>Enabled:Audit Mode</Option>

  </Rule>

</Rules>

image

and like what is shown above.

You can also make the same updates using PowerShell. The new, copied XML prior to modification appears like:

image

with five rules. To add the additional rule options detailed above, use the following:

Set-RuleOption -FilePath ".\newaudit.xml" -Option 0
Set-RuleOption -FilePath ".\newaudit.xml" -Option 2
Set-RuleOption -FilePath ".\newaudit.xml" -Option 3
Set-RuleOption -FilePath ".\newaudit.xml" -Option 4
Set-RuleOption -FilePath ".\newaudit.xml" -Option 6
Set-RuleOption -FilePath ".\newaudit.xml" -Option 8
Set-RuleOption -FilePath ".\newaudit.xml" -Option 10
Set-RuleOption -FilePath ".\newaudit.xml" -Option 12
Set-RuleOption -FilePath ".\newaudit.xml" -Option 13
Set-RuleOption -FilePath ".\newaudit.xml" -Option 14
Set-RuleOption -FilePath ".\newaudit.xml" -Option 15
Set-RuleOption -FilePath ".\newaudit.xml" -Option 16

where newaudit.xml is the policy file being built. The numbers for each option correspond to the:

Policy rule options

image

If you open the XML policy file you should now see all the rule options have been added as shown above (13 in all).
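Rather than eyeballing the XML, you can also list the rule options programmatically. Here is an illustrative Python sketch run against a cut-down policy fragment (the fragment is made up for this example, though the namespace is the one WDAC policy files declare); pointing the same query at your real policy file via ET.parse would let you confirm the count:

```python
import xml.etree.ElementTree as ET

# Cut-down WDAC policy fragment for illustration (not a full policy)
policy = """<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <Rules>
    <Rule><Option>Enabled:UMCI</Option></Rule>
    <Rule><Option>Enabled:Audit Mode</Option></Rule>
    <Rule><Option>Disabled:Flight Signing</Option></Rule>
  </Rules>
</SiPolicy>"""

# WDAC policies live in this XML namespace, so map a prefix for queries
ns = {"si": "urn:schemas-microsoft-com:sipolicy"}
root = ET.fromstring(policy)
options = [o.text for o in root.findall("si:Rules/si:Rule/si:Option", ns)]
print(len(options), options)
```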

In summary then, we have taken an example policy provided by Microsoft and started to modify it to suit our needs. This modification can either be done by directly editing the XML file or using PowerShell commands. I’d suggest PowerShell is a better way because you can save all the commands together in a script and re-run it if you wish.

With this first modification of the policy complete we’ll next look at how to work with the EKU area in the XML file.

Part 3 – Understanding EKUs

Implementing Windows Defender Application Control (WDAC)–Part 1

Windows Defender Application Control (WDAC) is a technology built into Windows 10 that allows you to control which applications execute on the device.

image

WDAC also allows you to control which drivers are allowed to run and is thus, a very powerful security measure that many should consider implementing. A typical WDAC blocking message is shown above.

Microsoft also has an older application whitelisting technology known as AppLocker. Here is the recommendation from Microsoft when choosing between the two technologies:

“Generally, it is recommended that customers, who are able to implement application control using WDAC rather than AppLocker, do so. WDAC is undergoing continual improvements, and will be getting added support from Microsoft management platforms. Although AppLocker will continue to receive security fixes, it will not undergo new feature improvements.”

You can deploy AppLocker and WDAC together if you wish, and thus the best practice recommendation from Microsoft is:

“As a best practice, you should enforce WDAC at the most restrictive level possible for your organization, and then you can use AppLocker to further fine-tune the restrictions.”

There is also a good side-by-side feature comparison here:

Windows Defender Application Control and AppLocker feature availability

So, WDAC it is!

WDAC policies apply to the managed computer as a whole and affect all users of the device. WDAC rules can be defined based on:

– Attributes of the codesigning certificate(s) used to sign an app and its binaries

– Attributes of the app’s binaries that come from the signed metadata for the files, such as Original Filename and version, or the hash of the file

– The reputation of the app as determined by Microsoft’s Intelligent Security Graph

– The identity of the process that initiated the installation of the app and its binaries (managed installer)

– The path from which the app or file is launched (beginning with Windows 10 version 1903)

– The process that launched the app or binary

WDAC policies are composed using XML. You can view a number of example policies on any Windows 10 device by navigating to:

C:\Windows\schemas\CodeIntegrity\ExamplePolicies\

and looking at the file I’ll be starting this process with:

denyallaudit.xml

and building from there.

Before we get too much further along I need to give you this warning. Application whitelisting is a lot of work to implement and maintain. The more variations (i.e. third party software) you have floating around the environment, the more challenging it is to implement. Also, remember that application whitelisting of ANY form places restrictions on user productivity. The things you do with tools like WDAC have the potential to severely restrict users’ ability to do their job. This can result in the ‘local villagers’ pitching up at your door with burning effigies, sharp weapons, menacing looks and foul language if you are not careful. I’ll give you my best practices for reducing the pain and suffering, but be prepared for this to be a journey rather than a set and forget update.

Before anything else, I would suggest that you conduct a software audit of your environment. You should know what applications are being run by users; however, I’ll guarantee that you won’t know them all. This should not preclude you from at least making an attempt to catalogue what you have. Doing so will save you a lot of pain in the long run.

Next, you’ll start creating a default WDAC policy in audit mode to see what is actually happening and what issues may be faced. This policy will then be adjusted along the way to accommodate any required inclusions (typically third party software). Once that stage has been completed, the policy will be flipped from audit to enforcement mode to actually start preventing unknown applications from running.

That’s the plan for this upcoming series of WDAC articles. I hope you’ll follow along and, with the knowledge I share, look to implement WDAC in your own environment.

Part 2 – Understanding a basic WDAC policy

Restricting user file downloads in SharePoint Online

https://www.youtube.com/watch?v=9NIcw5jghyA

There are situations with SharePoint Online where businesses wish to restrict users from downloading files. Unfortunately, this can’t be done at a document library level, but it can be done at a user level provided you have licenses for Conditional Access.

Conditional Access is a feature of Azure AD P1 and is included in SKUs like Microsoft 365 Business Premium. The above video takes you through the steps of configuring an appropriate Conditional Access policy in your environment to prevent downloads. The policy can be targeted at specific users and expanded to include other Microsoft 365 cloud services if desired.

Allow contribute with no delete in SharePoint

By default, users in SharePoint can create, modify and delete files. This deletion capability also extends to files that they may not have created. This is the default nature of collaboration.

However, there are times when businesses may want an environment that allows collaboration but prevents user deletion of a file from say, a document library.

In this video:

https://www.youtube.com/watch?v=_qbWVGwdNV4

I’ll show you how to set that up. In essence, an administrator needs to create a new custom permission level in SharePoint that excludes the delete capability. They then need to assign that new permission level to the location and users where they wish to apply this. The great thing is that once it is set up, it can be used across the whole site collection.