Remediating local administrators with proactive remediations

Like last week, this week is all about proactive remediations, a feature of Endpoint Analytics. As mentioned last week, proactive remediations are script packages that can detect common issues and remediate those issues if needed. All of that before the user even realizes that there is an issue. Those remediations can help with reducing support calls. The strength is that the remediations can be anything to address potential issues, as long as it can be addressed by using PowerShell. Each script package contains a detection script and a remediation script and that script package is deployed through Microsoft Intune. For deploying script packages, Microsoft Intune relies on the Intune Management Extension (IME).

To show the real power of proactive remediations, I’ll further develop the local administrators example of last week. Even in this modern world, local administrators are still a hot item. Last week the focus was on the detection part of proactive remediations and in this post I’ll focus on the remediation part of proactive remediations. I’ll start this post by constructing the remediation script, followed by creating the script package. I’ll end this post by verifying the results. Last week the focus for the results was locally on the device and in this post I’ll focus on the results in Microsoft Intune. This week I’ll skip the important requirements, as I’ve already documented them last week.

Constructing the remediation script

The first step is constructing the remediation script. That script should remediate the faulty configuration that was detected by the detection script. In this example, which is shown below, the remediation script is focused on a scenario in which the user of the device is a local administrator and should remain a local administrator. To fulfill that scenario the remediation script will remove all accounts, with the exception of the default Administrator and the currently logged-on user, from the Administrators group. After removing those accounts, the script will add the correct accounts to the Administrators group. That will make sure that only the verified and correct accounts are members of the Administrators group on the local device. When the adjustment to the members of the Administrators group is successful, the script will return an exit code of 0 and when the adjustment fails, the script will return an exit code of 1. The last line of output, before the exit code, will be displayed in the logs and can be useful information when verifying and troubleshooting the remediation script. Before using the script, make sure to adjust the array with administrator accounts to the correct list of administrators. That required adjustment is mentioned in the script example below.

#Define variables
$currentUser = (Get-CimInstance Win32_ComputerSystem).Username -replace '.*\\'
$localAdministrators = @("[YourGlobalAdminRoleSid]","[YourDeviceAdminRoleSid]") #Adjust to your local administrators

try {
    #Bind to the local Administrators group by using ADSI
    $administratorsGroup = ([ADSI]"WinNT://$env:COMPUTERNAME").psbase.children.find("Administrators")
    $administratorsGroupMembers = $administratorsGroup.psbase.invoke("Members")

    #Remove every member, except the default Administrator and the currently logged-on user
    foreach ($administratorsGroupMember in $administratorsGroupMembers) {
        $administrator = $administratorsGroupMember.GetType().InvokeMember('Name','GetProperty',$null,$administratorsGroupMember,$null)
        if (($administrator -ne "Administrator") -and ($administrator -ne $currentUser)) {
            $administratorsGroup.Remove("WinNT://$administrator")
            Write-Host "Successfully removed $administrator from Administrators group"
        }
    }

    #Add the configured local administrators
    foreach ($localAdministrator in $localAdministrators) {
        $administratorsGroup.Add("WinNT://$localAdministrator")
        Write-Host "Successfully added $localAdministrator to Administrators group"
    }

    Write-Host "Successfully remediated the local administrators"
    exit 0
}
catch {
    $errorMessage = $_.Exception.Message
    Write-Error $errorMessage
    exit 1
}

Note: Just like last week I’m relying on ADSI for making the required adjustments.

Creating the script package

The second step is creating the script package. The script package can be created by using the proactive remediations functionality of Endpoint Analytics. That functionality can be used to schedule scripts to detect specific configurations and, when that configuration is faulty, to run scripts that remediate the configuration. The following six steps walk through the process of creating a script package, with a detection script and a remediation script. That will create a script package to detect the currently configured accounts of the Administrators group and to remediate any faulty configuration that is detected. That script package can be scheduled to either perform the detection and remediation only once, or on a recurring schedule. The latter will make sure that it can actually be used for creating some baseline configurations, which might sound familiar when previously, or still, using Configuration Manager. The idea is the same, just a bit more simplistic and easier to use.

  1. Open the Microsoft Endpoint Manager admin center portal and navigate to Reports > Endpoint analytics (Preview) > Proactive remediations to open the Endpoint analytics (Preview) | Proactive remediations blade
  2. On the Endpoint analytics (Preview) | Proactive remediations blade, click Create script package to open the Create custom script wizard
  3. On the Basics page, provide the following information and click Next
  • Name: Provide a valid name for the custom script package
  • Description: (Optional) Provide a valid description for the custom script package
  • Publisher: Provide a valid publisher for the custom script package
  • Version: [Greyed out]
  4. On the Settings page, provide the following information and click Next
  • Detection script file: Select the detection script that was created last week
  • Detection script: [Greyed out]
  • Remediation script file: Select the created remediation script
  • Remediation script: [Greyed out]
  • Run this script using the logged-on credentials: No
  • Enforce script signature check: No
  • Run script in 64-bit PowerShell: No

Important: This configuration will run these scripts while bypassing the execution policy. Together with the information of my previous post, this could enable an administrative user to edit the script before it runs (and without the script being checked before running).

  5. On the Assignments page, provide the following information and click Next
  • Assign to: Select the group to assign to; when selecting multiple groups, a separate line with its own schedule configuration will appear for each group
  • Schedule: Click on the three dots to open the schedule configuration. This enables the administrator to configure a recurrence frequency for the script package. This can be Once, Daily, or Hourly.
    • When selecting Once, a specific date and time should be configured
    • When selecting Daily, a frequency and daily time should be configured
    • When selecting Hourly, a frequency should be configured
  6. On the Review + create page, verify the information and click Create
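
Note: The six steps above use the portal, but the same script package can also be created programmatically. The sketch below is a minimal illustration, not part of the original walkthrough, that posts the script package to the Microsoft Graph beta endpoint for proactive remediations (deviceHealthScripts). The file names and the display name are placeholders, the token is assumed to come from your own app registration or authentication module, and assigning the package with its schedule would still be a separate call (the assign action on the created script).

#A minimal sketch: create the script package via Microsoft Graph (beta)
#Assumption: $token holds a valid access token with DeviceManagementConfiguration.ReadWrite.All
$detection = [Convert]::ToBase64String([IO.File]::ReadAllBytes(".\DetectLocalAdministrators.ps1")) #Placeholder file name
$remediation = [Convert]::ToBase64String([IO.File]::ReadAllBytes(".\RemediateLocalAdministrators.ps1")) #Placeholder file name
$body = @{
    displayName              = "Local administrators" #Placeholder name
    description              = "Detect and remediate the local administrators"
    publisher                = "Your publisher" #Placeholder publisher
    runAsAccount             = "system" #Matches 'Run this script using the logged-on credentials: No'
    enforceSignatureCheck    = $false
    runAs32Bit               = $true #Matches 'Run script in 64-bit PowerShell: No'
    detectionScriptContent   = $detection
    remediationScriptContent = $remediation
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceHealthScripts" -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } -Body $body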

Verifying the results

When verifying the results, in Microsoft Intune, the first place to look is the proactive remediations overview. That provides an overview of all the different script packages that are deployed within the organization. That overview provides an easy first method to monitor the detection and remediation results of all the different deployed script packages. That overview is shown below and is available via Reports > Endpoint analytics > Proactive remediations.

When more details are required about a specific script package, the script package overview is the best place to look. That provides an overview that is specifically related to the status of a specific script package. That overview provides a clear bar chart for the status of the detection and remediation script of the script package. Besides that it also shows a nice daily remediation trend. That daily trend provides an overview of how often that specific situation is remediated. That overview is shown below and is available via Reports > Endpoint analytics > Proactive remediations > [Specific script package] > Overview.

When even more details are required about a specific script package on a specific device, the script package device status overview is the best place to look. That provides a clear overview about status details of a script package on the different devices. That overview includes standard information about the device and the last sync time, but also includes useful information that was part of the output of the detection script. That overview is shown below and is available via Reports > Endpoint analytics > Proactive remediations > [Specific script package] > Device status.
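
Note: The same per-device information can also be retrieved programmatically. The snippet below is a heavily hedged sketch that queries the run states of a script package via the Microsoft Graph beta endpoint (deviceHealthScripts/{id}/deviceRunStates); the relationship and property names are based on the beta resource and might change, and the script package id is a placeholder.

#A hedged sketch: read the per-device results of a script package via Microsoft Graph (beta)
#Assumption: $token holds a valid access token and $scriptId contains the id of the script package
$uri = "https://graph.microsoft.com/beta/deviceManagement/deviceHealthScripts/$scriptId/deviceRunStates"
(Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }).value | Select-Object detectionState, remediationState, lastStateUpdateDateTime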

More information

For more information about Endpoint Analytics and Proactive Remediations, refer to the following docs

Detecting local administrators with proactive remediations

This week is all about proactive remediations, which is a feature of Endpoint Analytics. Proactive remediations are script packages that can detect common issues and remediate those issues if needed. All of that before the user even realizes that there is an issue. Those remediations can help with reducing support calls. The strength is that the remediations can be anything to address potential issues, as long as it can be addressed by using PowerShell. Each script package contains a detection script and a remediation script and that script package is deployed through Microsoft Intune. For deploying script packages, Microsoft Intune relies on the Intune Management Extension (IME).

To show the power of proactive remediations, I’ll use local administrators as an example. I did something similar a long time ago for Configuration Manager and I still get many questions around that subject on my post about managing local administrators. Even in this modern world, local administrators are still a hot item. In this post I’ll focus on the detection part of proactive remediations and the remediation part itself might only be a week away. I’ll start this post by constructing the detection script, followed by creating the script package. I’ll end this post by verifying the results locally on the device.

Important prerequisites

Before actually starting with Endpoint Analytics it’s important to have the different prerequisites in place. That means the correct licenses and the correct configuration.

  • The device must be running Windows 10 Enterprise, Professional, or Education
  • The devices must be enrolled into Endpoint Analytics
  • The devices must be Azure AD joined or hybrid Azure AD joined and meet one of the following conditions:
    • the device is managed by Intune.
    • the device is a co-managed device running Windows 10, version 1903 or later.
    • the device is a co-managed device running a pre-1903 version of Windows 10 with the Client apps workload pointed to Intune
  • The user must be licensed for Endpoint Analytics, which is included in Enterprise Mobility + Security E3 or higher and Microsoft 365 Enterprise E3 or higher. When not using the latter, additional licensing of one of the following is required:
    • Windows 10 Enterprise E3 or E5
    • Windows 10 Education A3 or A5
    • Windows Virtual Desktop Access E3 or E5

Constructing the detection script

The first step is constructing the detection script. That script should detect whether the device is in the correct configuration. In this example, the detection script should detect the correct number of local administrators and also the correct local administrator users. For a successful detection of the correct (number of) local administrators, the script should return an exit code of 0 and for a failed detection of the (number of) local administrators, the script should return an exit code of 1. That failure will eventually trigger the remediation script. More about that, next week. Besides that, it’s good to know that the last line of output, before the exit code, will also be displayed in the logs. That can be helpful when verifying and troubleshooting the detection script. The following example detection script will verify the number of local administrators and, once the number matches, it will verify the local administrator users against the configured list of local administrators. Make sure to adjust that list with the correct local administrators and make sure to adjust the number of local administrators. The required adjustments are mentioned in the script example below.

#Define variables
$localAdministrators = @()
$memberCount = 0
$numberLocalAdministrators = 4 #Adjust to your number of administrators

try {
    $currentUser = (Get-CimInstance Win32_ComputerSystem).Username -replace '.*\\'
    $administratorsGroup = ([ADSI]"WinNT://$env:COMPUTERNAME").psbase.children.find("Administrators")
    $administratorsGroupMembers = $administratorsGroup.psbase.invoke("Members")
    foreach ($administrator in $administratorsGroupMembers) {
        $localAdministrators += $administrator.GetType().InvokeMember('Name','GetProperty',$null,$administrator,$null)
    }
    if ($localAdministrators.Count -eq $numberLocalAdministrators) {
        foreach ($localAdministrator in $localAdministrators) {
            switch ($localAdministrator) {
                #Adjust to your local administrators
                "Administrator" { $memberCount = $memberCount + 1; break; }
                "$currentUser" { $memberCount = $memberCount + 1; break; }
                "[YourGlobalAdminRoleSid]" { $memberCount = $memberCount + 1; break; }
                "[YourDeviceAdminRoleSid]" { $memberCount = $memberCount + 1; break; }
                default {
                    Write-Host "The found local administrators are no match"
                    exit 1
                }
            }
        }
        if ($memberCount -eq $numberLocalAdministrators) {
            Write-Host "The found local administrators are a match"
            exit 0
        }
    }
    else {
        Write-Host "The number of local administrators doesn't match"
        exit 1
    }
}
catch {
    $errorMessage = $_.Exception.Message
    Write-Error $errorMessage
    exit 1
}

Note: Compared to my previous post, I prefer to use ADSI over WMI and Get-LocalGroupMember. WMI was often too slow and the mentioned cmdlet doesn’t really like SIDs yet.

Creating the script package

The second step is creating the script package. The script package can be created by using the relatively new functionality of Endpoint Analytics, which is proactive remediations. That functionality can be used to basically schedule scripts to detect specific configurations and, if needed, remediate the configuration. The following six steps walk through the process of creating a script package, with a focus on the detection script. That will create a script package to detect the local administrators. That script package can be scheduled to either perform the detection (and remediation) only once, or on a recurring schedule. The latter will make sure that it can actually be used for creating some baseline configurations, which might sound familiar when previously, or still, using Configuration Manager. The idea is the same, just a bit more simplistic and easier to use.

  1. Open the Microsoft Endpoint Manager admin center portal and navigate to Reports > Endpoint analytics (Preview) > Proactive remediations to open the Endpoint analytics (Preview) | Proactive remediations blade
  2. On the Endpoint analytics (Preview) | Proactive remediations blade, click Create script package to open the Create custom script wizard
  3. On the Basics page, provide the following information and click Next
  • Name: Provide a valid name for the custom script package
  • Description: (Optional) Provide a valid description for the custom script package
  • Publisher: Provide a valid publisher for the custom script package
  • Version: [Greyed out]
  4. On the Settings page, provide the following information and click Next
  • Detection script file: Select the created detection script
  • Detection script: [Greyed out]
  • Remediation script file: (Optional) More about the remediation script file next week
  • Remediation script: (Optional) More about the remediation script next week
  • Run this script using the logged-on credentials: No
  • Enforce script signature check: No
  • Run script in 64-bit PowerShell: No
  5. On the Assignments page, provide the following information and click Next
  • Assign to: Select the group to assign to; when selecting multiple groups, a separate line with its own schedule configuration will appear for each group
  • Schedule: Click on the three dots to open the schedule configuration. This enables the administrator to configure a recurrence frequency for the script package. This can be Once, Daily, or Hourly.
    • When selecting Once, a specific date and time should be configured
    • When selecting Daily, a frequency and daily time should be configured
    • When selecting Hourly, a frequency should be configured
  6. On the Review + create page, verify the information and click Create

Verifying the results

For the detection of the local administrators, I’ll focus mainly on the information locally on the device. Next week, when adding the remediation, the focus will be on Microsoft Intune when looking at the results. As it’s the IME that handles the execution of the script packages, the information related to the execution is also logged in the IME-related log files. The most interesting information can be found in the IntuneManagementExtension.log. That log file contains the execution information of the script packages and every related line in the log has a prefix of [HS]. That prefix refers to HealthScripts, which is shown in the different logs and file locations. When looking at the [HS] prefix throughout the log, it’s divided into the following two categories:

  1. Scheduler – Looking at the log, the Scheduler is responsible for requesting and scheduling the script packages. Below, in Figure 3, is an example of the Scheduler that is scheduling the script package for detecting local administrators. The highlighted sections show information about the user, the policy and the schedule. All of that information is referenced below in more logs, file locations and the registry.
  2. Runner – Looking at the log, the Runner is responsible for executing and reporting about the script packages. Below, in Figure 4, is an example of the Runner that is actually executing the scheduled policy for detecting local administrators. The highlighted sections show information about the user, the policy, the schedule, the result and the output. All that information is referenced above in the script, and below in the file location and the registry.
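
Tip: To quickly find those Scheduler and Runner entries, the log can simply be filtered on the [HS] prefix. A small helper, using the default location of the IME log:

#Filter the IME log for the proactive remediations ([HS]) entries
$imeLog = "$env:ProgramData\Microsoft\IntuneManagementExtension\Logs\IntuneManagementExtension.log"
Select-String -Path $imeLog -Pattern '\[HS\]' | Select-Object -Last 50 | ForEach-Object { $_.Line }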

The script package is stored locally in the HealthScripts folder in the IMECache folder. That folder contains a folder, as shown below in Figure 5, that corresponds with the policy that was highlighted in the IntuneManagementExtension.log. That folder contains a script named detect and a script named remediate and both of those scripts correspond to the scripts of the script package. When the script package is only used for detection, the remediate script will be empty.

In the end all of the most important information is stored in the registry, in the Scripts key in HKLM\SOFTWARE\Microsoft\IntuneManagementExtension\SideCarPolicies. That key contains a key with information about the execution of the script package and that information is stored below a key of the user and the policy. That key contains the latest execution information and is shown below in Figure 6.

The Scripts key also contains a key with information about the result of the script package and that information is stored below a key of the user and the policy. That key contains the result information that is reported and is shown in Figure 7.

The actual content of the Result value is shown below. That information contains the policy, the user, the status and the output. All the information that was referenced earlier in the script, log, file location and registry.

{ 
    "PolicyId":"176b61ca-59e6-4af1-b852-13a3102c5578",
    "UserId":"dedbba8f-f4f1-475c-a30b-895d87d77e9b",
    "PolicyHash":null,
    "Result":3,
    "ResultDetails":null,
    "InternalVersion":1,
    "ErrorCode":0,
    "ResultType":1,
    "PreRemediationDetectScriptOutput":"The found local administrators are a match",
    "PreRemediationDetectScriptError":null,
    "RemediationScriptErrorDetails":null,
    "PostRemediationDetectScriptOutput":null,
    "PostRemediationDetectScriptError":null,
    "RemediationStatus":4,
    "Info": {
        "RemediationExitCode":0,
        "FirstDetectExitCode":0,
        "LastDetectExitCode":0,
        "ErrorDetails":null
    }
}
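
To read that reported information directly on the device, the Scripts key can simply be walked for Result values and the JSON can be converted. The snippet below is a small helper based on the registry structure described above; the property names match the example output shown above.

#Walk the Scripts key and convert every Result value from JSON
$scriptsKey = "HKLM:\SOFTWARE\Microsoft\IntuneManagementExtension\SideCarPolicies\Scripts"
Get-ChildItem -Path $scriptsKey -Recurse | ForEach-Object {
    $result = (Get-ItemProperty -Path $_.PSPath -ErrorAction SilentlyContinue).Result
    if ($result) { $result | ConvertFrom-Json }
} | Select-Object PolicyId, UserId, Result, RemediationStatus, PreRemediationDetectScriptOutput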

More information

For more information about Endpoint Analytics and Proactive Remediations, refer to the following docs

Supporting the unsupported platforms

This week is all about supporting the unsupported platforms. More specifically, working with the limitations of the platforms that are unsupported by (parts of) the Microsoft 365 solution. Those platforms are Chrome OS and the different Linux distributions. Often those platforms are around in an organization during the introduction of a Microsoft 365 solution. In many components of the Microsoft 365 solution, those platforms are currently not supported. Think about Microsoft 365 Apps for Enterprise, Microsoft Intune, Conditional Access and so on. Basically nothing is really working and/or supported on those platforms at this moment. From that perspective Chrome OS is maybe even worse than the different Linux distributions. That doesn’t mean that there is no story at all. In this post, I want to start with an introduction to the actual challenge, followed by going through the most common options that are available to address that challenge. The scope for this post is limited to Microsoft 365 E3 license functionalities.

An introduction

When introducing a Microsoft 365 solution, the strength is the completeness of the solution and the security and management capabilities that it brings. Especially the security of the solution goes so far that it’s almost a closed solution. For that, think about Conditional Access, the front door protection towards the company apps and data. Basically the first line to protect the access to the company apps and data. However, that’s also where the first challenge arises. Conditional Access, the protection at the front door, doesn’t support Chrome OS and the different Linux distributions. That means that it’s still possible to protect at our front door, but it’s not possible to take everything into account. It’s not possible to require anything of those platforms, besides multi-factor authentication (MFA) for the user using that platform. That’s not always sufficient. In the ideal world an organization wants to make sure that either the app or the device that is used to access the company apps and data is managed. However, that’s currently simply not supported for all platforms. That’s also where the second challenge arises, as Chrome OS and the different Linux distributions are not supported by Microsoft Intune. So, it wouldn’t even be possible to get those platforms managed. And I could go on in that chain, by looking at Microsoft 365 Apps for Enterprise (with the exception of Chrome OS devices that are capable of using Android apps) and all the other different components of the Microsoft 365 solution, but the result would be the same for Chrome OS and the different Linux distributions. Of course there are small exceptions, like the Microsoft Teams app and Microsoft Defender ATP for specific Linux distributions (in that case an E3 license is not sufficient), but those platforms are simply not a good fit in a Microsoft 365 solution at this moment. Especially when focusing on functionalities that are available within a Microsoft 365 E3 license.

The options

However, that doesn’t mean that there is completely no place for Chrome OS and the different Linux distributions in a Microsoft 365 solution. It will just not be the complete experience as on Windows, Android, macOS and iOS devices. Let’s have a look at the most common options that are available for those platforms, within the Microsoft 365 E3 license functionalities, and let’s go high-over through those options.

  1. Block access for unsupported platforms
  2. Provide access based on the location of the devices
  3. Provide limited access to SharePoint Online and Exchange Online
  4. Provide access via Windows Virtual Desktop
  5. Provide access via a Virtual Machine

Option 1 – Block access for unsupported platforms

When an organization has strict requirements regarding the management of the device and/or app that the user is using for accessing company apps and data (for example to limit the possibility of losing data, or having company data on personal devices), the Chrome OS devices and the different Linux distributions can’t be part of the Microsoft 365 solution. In that case the best option might be to completely block access to company apps and data, by using Conditional Access. Even though Conditional Access doesn’t support those platforms, Conditional Access can be used to block those platforms. The idea is similar to what I’ve described here, which is to create a Conditional Access policy that will block access to all cloud apps and is assigned to all platforms with the exception of the platforms that the organization does want to allow. The main difference is that the UI changed a little bit over the last year.

This option will completely block access to company apps and data on Chrome OS devices and on the different Linux distributions (see Figure 1 for an example).
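
Note: For those who prefer to script such a policy, below is a minimal sketch that creates the blocking policy through the Microsoft Graph conditional access API. The policy name, the list of allowed platforms and the report-only state are assumptions to adjust to your own situation, and the token is assumed to come from your own app registration or authentication module. Starting in report-only mode is simply a safe way to verify the impact before enforcing the block.

#A minimal sketch: block all platforms except the explicitly allowed ones
#Assumption: $token holds a valid access token with Policy.ReadWrite.ConditionalAccess
$policy = @{
    displayName = "Block access for unsupported platforms" #Placeholder name
    state       = "enabledForReportingButNotEnforced" #Start in report-only mode
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
        platforms    = @{
            includePlatforms = @("all")
            excludePlatforms = @("android","iOS","windows","macOS") #Platforms that remain allowed
        }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("block")
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } -Body $policy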

Option 2 – Provide access based on the location of the devices

When an organization has separately managed Chrome OS devices and/or different managed Linux distributions and can pinpoint those devices to a specific location, it can be an option to exclude those devices based on their location. The idea is similar to what I’ve described here, just the UI and experience changed over the years. This option could be an addition to the Conditional Access policy described in the first option. In this case there will be an exclusion for unsupported platforms coming from a specific location. The only thing that should be kept in mind is that it’s not possible to control which devices will actually be used by the user when accessing company apps and data on that specific location. We simply can’t differentiate between a separately managed Chrome OS device and a personal Chrome OS device from that specific location. That risk should be assessed.

This option will provide the user with full access to company apps and data on Chrome OS devices and on the different Linux distributions (see Figure 2 for an example).

Option 3 – Provide limited access to SharePoint Online and Exchange Online

When an organization is trying to find a balance between security and usability, it might be an option to allow a limited experience via the browser. That limited experience is only available for Exchange Online and SharePoint Online and provides the user with the ability to at least perform some basic activities on company apps and data, without the major risk of losing data or getting data on personal devices. This experience can be created across all platforms, whether those platforms are supported by the different Microsoft 365 components, or not. The idea is similar to what I’ve described here, and something similar is also applicable to Exchange Online. If needed, the limited experience can be further limited to a true read-only experience. In addition, it’s even possible to differentiate the behavior between SharePoint sites as explained here.

This option will provide the user with limited access to company apps and data on Chrome OS devices and on the different Linux distributions (see Figure 3 for an example).
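
Note: One of the building blocks of this limited experience, on the SharePoint Online side, is the tenant-wide app enforced restrictions setting. As a small sketch, that setting can also be configured from the SharePoint Online Management Shell; the admin URL is a placeholder for the own tenant and the accompanying Conditional Access configuration is described in the referenced post.

#Set the tenant-wide behavior for unmanaged devices to limited, web-only access
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com" #Placeholder tenant
Set-SPOTenant -ConditionalAccessPolicy AllowLimitedAccess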

Option 4 – Provide access via Windows Virtual Desktop

When an organization has strict requirements regarding the management of the device and/or app that the user is using for accessing company apps and data (for example to limit the possibility of losing data), Windows Virtual Desktop (WVD) might also be an option on basically any platform. WVD is the exception when talking about cross-platform availability. The web client should work with any HTML5-capable browser and is officially supported on the default browsers of Chrome OS (Google Chrome) and the different Linux distributions (Mozilla Firefox). The good thing is that it’s possible to make sure that WVD is part of the Conditional Access policy, by either making sure WVD is hybrid Azure AD joined or by excluding WVD based on its location. Those are both valid options, when configured correctly, for making sure that only WVD can access company apps and data. For basic account protection it’s still possible to use Conditional Access to require MFA on any platform that is used by the user for accessing WVD.

This option would provide the user with full access to company apps and data within WVD on Chrome OS devices and on the different Linux distributions (see Figure 4 for an example).

Option 5 – Provide access via a Virtual Machine

When an organization has more advanced users, a last resort could also be to use a Virtual Machine (VM). That VM can exist on the host, can enroll into Microsoft Intune and can be compliant with the company policy. That would enable the user to access company apps and data, without the need for the administrator to create holes in the Conditional Access policy. The VM will be managed and secured in a similar way as any other physical device. This option is mainly applicable to the different Linux distributions.

My conclusion

Even though there are some unsupported platforms within most of the components of a Microsoft 365 solution, that doesn’t mean that an organization can’t use those platforms at all. I wouldn’t advise an organization to choose Chrome OS devices and/or different Linux distributions when choosing a Microsoft 365 solution, simply because of the currently limited support within that solution. However, when devices with those platforms are already within the organization, there are still options to somehow provide some support for those platforms. That’s the main point of this post. It’s just not a perfect story, yet.

Deploy Microsoft Defender Application Control policies without forcing a reboot

This week is all about Microsoft Defender Application Control (MDAC). More specifically, about configuring MDAC policies on Windows 10 devices by using Microsoft Intune without forcing a reboot. MDAC, often still referred to as Windows Defender Application Control (WDAC), restricts application usage by using a feature that was previously already known as configurable Code Integrity (CI) policies. To make the history lesson complete, configurable CI policies were one of the two main components of Windows Defender Device Guard (WDDG). History aside, CI policies help with protecting Windows 10 devices by checking apps based on the attributes of the code signing certificates and the app binaries, the reputation of the app, the identity of the process that initiated the installation (managed installer) and the path from which the app is launched. In this post I won’t focus on how MDAC technically works, but I want to focus on creating a custom MDAC policy and deploying that policy by using Microsoft Intune, without triggering a reboot. The same steps are actually applicable to deploying any custom MDAC policy by using Microsoft Intune. I’ll end this post by having a look at the end-user experience.

Create Code Integrity policy

The first action is to create a custom MDAC policy, which was formerly known as a Code Integrity policy. However, as a lot in the configuration is still referring to Code Integrity, or CI, I’ll keep referring to it in this post as a Code Integrity policy. Luckily, Windows already contains a few example policies that can be used as a starting point (in a folder named CodeIntegrity). As this post is not focused on constructing a custom Code Integrity policy, I’ll use DefaultWindows_Enforced.xml as my custom Code Integrity policy. That policy enforces the rules that are necessary to ensure that Windows, 3rd party hardware and software kernel drivers, and Windows Store apps will run and is also used as the basis for all Microsoft Endpoint Manager (MEM) policies.
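
As a small sketch of that starting point: the example policies typically ship in the ExamplePolicies folder under the Windows CodeIntegrity schemas folder, and copying one to a working folder keeps the shipped examples untouched. The working folder below is just an example.

#Copy the example policy to a working folder, so the shipped examples stay untouched
$workingFolder = "C:\Temp\MDAC" #Example working folder
New-Item -Path $workingFolder -ItemType Directory -Force | Out-Null
Copy-Item -Path "$env:windir\schemas\CodeIntegrity\ExamplePolicies\DefaultWindows_Enforced.xml" -Destination $workingFolder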

PowerShell can be used to make all kinds of adjustments to a Code Integrity policy (the .xml policy file), by using the ConfigCI module. From that module the Set-RuleOption cmdlet can be used to modify the rule options in a Code Integrity policy. The configured rule options appear under the Rules property in the .xml policy file. Currently there are 19 different rule options that can be configured and those rule options are documented here. For this post the most important rule option is rule option 16. That rule option can be used to allow future updates to the Code Integrity policy without requiring a system reboot. Below is an example of how to add rule option 16 to the Code Integrity policy. Using that same example with the -Delete parameter will remove the no-reboot information again.

Set-RuleOption -FilePath .\DefaultWindows_Enforced.xml -Option 16

Below in Figure 1, with number 2, is an example of the information that will be added to the .xml policy file, after adding rule option 16 to the Code Integrity policy.

Note: The detailed reader might notice that I’ve removed some default rule options in Figure 1 that are normally already configured by default. That is correct, because I wanted Figure 1 to focus on the specific settings of this post.

Transform Code Integrity policy

The second action is to transform the Code Integrity policy, so it can be distributed by using Microsoft Intune. To distribute the Code Integrity policy, it must be converted from a .xml policy file to a .bin file. From the earlier mentioned PowerShell module, the ConvertFrom-CIPolicy cmdlet can be used to convert a Code Integrity policy into a binary format. That binary version of the policy can be installed on Windows 10 devices and can be distributed via Microsoft Intune. Below is an example of how to convert the .xml policy file.

ConvertFrom-CIPolicy -XmlFilePath ".\DefaultWindows_Enforced.xml" -BinaryFilePath "DefaultWindows.bin"

Distribute Code Integrity policy

The third action is to distribute the Code Integrity policy by using Microsoft Intune. To distribute the binary version of the Code Integrity policy, a custom device configuration profile can be used. That requires the correct OMA-URI.

Construct OMA-URI

To distribute a custom Code Integrity policy, the ApplicationControl CSP can be used. This CSP was added with Windows 10, version 1903, and provides extended diagnostics capabilities, support for multiple policies and support for rebootless policy deployment. The latter is the main difference with the AppLocker CSP. Unlike the AppLocker CSP, the ApplicationControl CSP detects the presence of the no-reboot option. The following OMA-URI can be used: ./Vendor/MSFT/ApplicationControl/Policies/{PolicyID}/Policy. In that OMA-URI, the PolicyID should actually be an existing value and not a self-generated value, like with most other policies that are configured. In this case the PolicyID should be the PolicyID of the Code Integrity policy. That PolicyID can be found in the .xml policy file, as shown in Figure 1, with number 1, and should be used without the curly brackets. For this example that means that the following OMA-URI can be used: ./Vendor/MSFT/ApplicationControl/Policies/A244370E-44C9-4C06-B551-F6016E563076/Policy.
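
To avoid manually copying that value, the PolicyID can also be read from the .xml policy file with a few lines of PowerShell. A small sketch, assuming the policy uses the multiple policy format (which carries the PolicyID element) and the file name used in this post:

#Read the PolicyID from the .xml policy file and construct the OMA-URI
[xml]$policyXml = Get-Content -Path ".\DefaultWindows_Enforced.xml"
$policyId = $policyXml.SiPolicy.PolicyID.Trim('{','}')
"./Vendor/MSFT/ApplicationControl/Policies/$policyId/Policy"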

Create custom device configuration policy

To actually distribute a custom Code Integrity policy, Microsoft Intune can be used to configure the constructed OMA-URI on Windows 10 devices. The following nine steps walk through the process of creating a new custom device configuration profile that configures a single OMA-URI setting.

  1. Open the Microsoft Endpoint Manager admin center portal and navigate to Devices > Windows > Configuration profiles to open the Windows | Configuration profiles blade
  2. On the Windows | Configuration profiles blade, click Create profile to open the Create a profile page
  3. On the Create a profile page, provide the following information and click Create to open the Custom wizard
  • Platform: Windows 10 and later
  • Profile type: Custom
  4. On the Basics page, provide the following information and click Next
  • Name: Provide a valid name for the custom device configuration profile
  • Description: (Optional) Provide a valid description for the custom device configuration profile
  5. On the Configuration settings page, click Add to open the Add Row page. On the Add Row page, provide the following information and click Add (and click Next back on the Configuration settings page)
  • Name: Provide a valid name for the OMA-URI setting
  • Description: (Optional) Provide a valid description for the OMA-URI setting
  • OMA-URI: ./Vendor/MSFT/ApplicationControl/Policies/A244370E-44C9-4C06-B551-F6016E563076/Policy
  • Data type: Select Base64 (file)
  • Value: Select the created binary file
  6. On the Scope tags page, configure the applicable scopes and click Next
  7. On the Assignments page, configure the assignment and click Next
  8. On the Applicability rules page, configure the applicability rules (think about the existence of this CSP for version 1903 and later) and click Next
  9. On the Review + create page, verify the configuration and click Create
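
Note: The same custom profile can also be created programmatically. Below is a hedged sketch that posts a windows10CustomConfiguration with a base64 encoded OMA-URI value to Microsoft Graph; treat the exact property usage as an illustration rather than a tested sample, and the token is assumed to come from your own app registration or authentication module.

#A hedged sketch: create the custom device configuration profile via Microsoft Graph (beta)
#Assumption: $token holds a valid access token with DeviceManagementConfiguration.ReadWrite.All
$body = @{
    "@odata.type" = "#microsoft.graph.windows10CustomConfiguration"
    displayName   = "MDAC - DefaultWindows (rebootless)" #Placeholder name
    omaSettings   = @(@{
        "@odata.type" = "#microsoft.graph.omaSettingBase64"
        displayName   = "ApplicationControl policy"
        omaUri        = "./Vendor/MSFT/ApplicationControl/Policies/A244370E-44C9-4C06-B551-F6016E563076/Policy"
        fileName      = "DefaultWindows.bin"
        value         = [Convert]::ToBase64String([IO.File]::ReadAllBytes(".\DefaultWindows.bin"))
    })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceConfigurations" -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } -Body $body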

End-user experience

Now let’s end this post by having a look at the end-user experience, once the Code Integrity policy is distributed and applied to the Windows 10 device of the user. The first thing that the user might notice is that the device doesn’t request a reboot. When the user now wants to start an application that doesn’t comply with the configured Code Integrity policy, the user will be prevented from starting the application. Figure 3 shows an example of a user that wants to start an application that was manually installed and the user receives a clear message that the app is blocked by Windows Defender Application Control.

More information

For more information about deploying WDAC policies, refer to the docs about deploying Windows Defender Application Control policies by using Microsoft Intune.