Syncing User Data from G Suite to Active Directory with Python and LDAP

Hey there folks! It’s been a long while since I’ve had some time to write a blog post, but happy to be back. Over the last year I’ve moved to a new city and started a new position, so I’ve spent most of my time learning many new things. Some of that learning came with a project where we needed to sync user data from a G Suite domain to Active Directory. Surprisingly, a quick Google search for this led to several results with the question of how to do this, but not many (if any) results involving the solution. Most results offered a solution for the opposite (syncing AD to G Suite). Google even built a tool to help with this, called Google Cloud Directory Sync. This is likely because most people are using Active Directory as their IdP, and G Suite purely as an email/collaboration/productivity suite, and thus they want to sync passwords or user attributes from Active Directory over. But what if G Suite is the IdP, and you’re not using Active Directory for that purpose?

PowerShell or Python?

My primary development environment these days is made up of Python 3 and AWS Lambda to run the code. I still dabble with MacAdmin things here and there, but I am primarily responsible for automating internal (or external SaaS) tools together. However, I have some experience with writing PowerShell scripts for minor Windows automation tasks. So while I have some familiarity, and am actually quite impressed with PowerShell for the most part, my comfort zone is in Python.

The Challenges of PowerShell

If you’re like me, you probably instantly thought PowerShell was the go-to tool to perform this task. I knew it had Active Directory modules that would probably allow us to write only a few lines of code; there is even a cmdlet called New-ADUser that seems to fit the bill. However, the issue with this approach was: how were we going to run it in AWS Lambda? AWS does have PowerShell Core support, but not being as familiar with PowerShell, I wasn’t sure a) if it supported all the modules/cmdlets I would need, and b) how we would go about connecting to the Domain Controllers, as I think most of those cmdlets assume a domain-bound system is running them. The other problem then became: if we go with PowerShell, how do we interact with the Google Admin API?

Surprisingly, someone had done some pretty extensive work to bring most (if not all) of the Google Admin API calls over to PowerShell and made the module PSGSuite. This looked (and still looks) very cool and very promising. But again, my unfamiliarity with PowerShell just made this task feel a bit daunting and like I wouldn’t be able to get what I needed done in a timely fashion.

The Challenges of Python

The challenges of Python were far fewer to start with. For one thing, I know how to deploy and set up a Python-based AWS Lambda. Secondly, I know the language well enough and work with the Google API nearly daily, so it felt like a very low bar for entry. The challenge with Python was how we were going to talk to AD. The first few Google results brought up things like pyAD, a Python library built specifically on top of the ADSI interfaces, which are only supported on Windows. There was another package called active_directory that also seemed like it could do the job, but again was built for Windows. It wasn’t until I read a few forums that someone suggested using the ldap3 Python library to talk with AD over LDAP. It was at this point that I did an (almost literal) :facepalm:. I had worked with Active Directory far more extensively in previous jobs, and had done some AD automation via Python and LDAP. The difference, however, was that I was always retrieving data. It never occurred to me that, of course, this might also work to write data…


The Code

Once I had played a bit with the LDAP3 library, and confirmed that indeed I could manipulate objects in AD via Python, this became a much easier and somewhat more fun task.

The basic flow of the code would be:

  1. Pull all users from G Suite
  2. Pull all users from our target OU in Active Directory
  3. Iterate through our G Suite users and create ldap-friendly formatted JSON objects with the information we cared about (First name, Last name, UPN, etc.)
  4. Check that the user wasn’t already in the list of users returned from Active Directory
  5. Create user in Active Directory
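To make step 3 concrete, here is a minimal sketch of flattening a G Suite user record (as returned by the Admin SDK users list) into an LDAP-friendly attribute dictionary. The attribute selection and the DOMAIN constant are illustrative assumptions, not the exact code from the Gist:

```python
## Sketch of step 3: flatten a G Suite user record into the LDAP
## attributes we care about. Field names follow the Admin SDK "users"
## resource; DOMAIN is a placeholder for your AD UPN suffix.

DOMAIN = 'example.com'

def gsuite_user_to_ldap_attrs(gsuite_user):
    first = gsuite_user['name']['givenName']
    last = gsuite_user['name']['familyName']
    email = gsuite_user['primaryEmail']
    username = email.split('@')[0]
    return {
        'givenName': first,
        'sn': last,
        'displayName': '%s %s' % (first, last),
        'sAMAccountName': username,
        'userPrincipalName': '%s@%s' % (username, DOMAIN),
        'mail': email,
    }
```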

As the G Suite API is very well documented, I’m not going to include the code on how to authenticate and retrieve users with it. But I have documented the code that performs the LDAP bits into a Gist that you can find here:

NOTE: Not all of this code may make sense for your environment. For example, the create_username function strips dashes (-) out of emails when making the username for AD, which may not be desired in your org. Please read ALL of the code first before using this in a production system.
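For illustration, a create_username-style helper as described in the note above might be as simple as this (a hedged guess at the behavior, not the Gist’s exact code):

```python
## Hypothetical sketch: take the local part of the email address and
## strip dashes, as described above. Review whether this matches your
## org's username rules before reusing it.

def create_username(email):
    local_part = email.split('@')[0]
    return local_part.replace('-', '')
```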

Interesting Lessons Learned

Microsoft has some constraints in place where certain actions are not allowed over an unencrypted connection (i.e. just LDAP, not LDAPS). This means that you cannot create or modify a user without the connection being over either TLS or SSL via LDAPS. Simple enough, and we should all be using SSL everywhere we can as it is.
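With ldap3, an encrypted connection can be set up something like this sketch (host and credentials are placeholders; the ldap3 import is deferred so the snippet reads standalone):

```python
## Minimal LDAPS connection sketch with ldap3; certificate validation is
## enabled, so the DC's certificate must be trusted by the client.

def ldaps_connection(host, bind_user, bind_password):
    import ssl
    from ldap3 import Server, Connection, Tls
    tls_config = Tls(validate=ssl.CERT_REQUIRED)
    server = Server(host, port=636, use_ssl=True, tls=tls_config)
    ## auto_bind=True raises immediately if the TLS handshake or bind fails
    return Connection(server, user=bind_user, password=bind_password, auto_bind=True)
```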

The hardest part of this whole process was not creating the users in Active Directory, but creating them as enabled accounts. We already discussed needing an encrypted connection to create or modify users, but I could not for the life of me get a user to be created and enabled. I confirmed I was setting a password that complied with AD’s password policy, and that all the necessary bits were set correctly, but still the users were created as Disabled. It wasn’t until I tried to manually enable a user via the GUI that I got a prompt saying a user could not be enabled without a password set. Ok 🤔 but I know I’m sending the password in the LDAP payload, so what’s the issue? Turns out, Microsoft requires a very specific encoding when setting the unicodePwd LDAP attribute: UTF-16-LE. You’ll find the same by looking through the ldap3 source code at how the ad_modify_password method works.
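The encoding trick itself is tiny: the value for unicodePwd must be the password wrapped in double quotes and then encoded as UTF-16-LE. A minimal sketch:

```python
## unicodePwd must be the quoted password encoded as UTF-16-LE; this
## mirrors what ldap3's ad_modify_password helper does internally.

def encode_ad_password(password):
    return ('"%s"' % password).encode('utf-16-le')
```

Passing the result as the unicodePwd attribute value (along with a userAccountControl of 512, the normal enabled-account flag) is what typically yields an account that is created enabled.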

Once that was in place, bingo, bango, bongo – we were in business!

We were now able to quickly and automatically provision new Active Directory user accounts, enabled by default, with random passwords.
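For the random passwords, Python 3’s secrets module makes this easy; the length and the complexity check below are assumptions to tune against your own AD password policy:

```python
import secrets
import string

def random_password(length=24):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = ''.join(secrets.choice(alphabet) for _ in range(length))
        ## crude complexity check so a typical AD password policy is satisfied
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)):
            return candidate
```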


I hope this helps someone else out and makes some project a bit easier for you! Feel free to comment or let me know if something isn’t working for you; while I’m in no way an LDAP/AD/G Suite expert, I’m happy to try and help.


Moving Beyond Scheduled Jobs to Event-Driven Workflows

I have written before about some of the internal tools we are writing to try and automate certain tasks as well as serve as a bit of glue code between our internal systems. We tend to schedule these scripts to run either using Cron, or a personal favorite, jobber. Overall, this has been a tried and true approach, and we are in fact making great headway in automating several tasks, some smaller, some larger. However, these kinds of automated, scheduled jobs often come with unnecessary overhead. You can build in checks to not repeat operations if they have already occurred, but you still have to write and run the check itself, and when that means possibly iterating over thousands or tens of thousands of machines/records/files, etc., it starts to add up. In other circumstances you may not even write the check and perhaps just make the change over and over again, whether it was needed or not.

Event-driven Workflows

I had seen “event-driven workflow” in several marketing campaigns for products up until this point, but never really understood what that meant. Turns out, it’s not too complex. Essentially, “event-driven” or “events” really just means HTTP callbacks (webhooks), which are typically HTTP POST requests that the application makes when an event happens. Instead of you having to ask the application a question, it can simply provide the answer, allowing you to instantly take action. Since we are working with AirWatch internally, we’ll be using that in our examples.

AirWatch has what they call “Event Notifications” (more on this later), which can “react” to events happening in your AirWatch environment, such as a device enrolling, being deleted, or even just a device asset number change. When these events happen, AirWatch can trigger an Event Notification (an HTTP callback) and make an HTTP POST request to any defined URL. With Event Notifications, we no longer have to do the hard work of pulling all records or devices and iterating over them looking for status changes or new devices; we can simply let the application tell us that something happened, and take the appropriate actions!

This is not to say scheduled jobs don’t still have their place in automation workflows (they most certainly do), but utilizing event-driven workflows allows for far quicker reactions to events, and saves us some CPU time by only taking the actions needed at that time, for that event.

AirWatch’s Event Notifications

As mentioned above, AirWatch is what we now use internally, and lucky for us, it has Event Notifications built into the product. These can be found and enabled under:
Groups and Settings –> All Settings –> System –> Advanced –> API –> Event Notifications


Once at that screen, click the Add Rule button and enter the URL that you want the notifications to be sent to (for testing, I used a handy little request-inspection website). You can also include authentication if the URL destination requires it. Then you choose the format you want the notification sent in, which I set to JSON, as I think it’s easier to parse and read.


Once you have those details filled out, you can then choose what Events should create Notifications. They are pretty self-explanatory as to what triggers them, so we’re not going to break them down here. What we will talk about, as it will become important later, is the Event ID each of them has included in their Event Notification. They are, in order:

AirWatch Event                       Event ID
Device Enrollment                    148
Device Unenrolled (Enterprise Wipe)  39
Device Wipe                          25
Device Compromised Status Change     178
Device Compliance Status Change      184
Device Delete                        662
Device Attribute Change              (no ID, as it is not an actual event)
    Asset Number                     641
    Device Friendly Name             642
    Organizational Group ID          218
    User Email Address               643
    Ownership                        165
    Operating System                 163
    Phone Number                     645
    Device MCC                       646

There is a lot more information sent in the notifications, but the Event IDs appear to be consistent across devices and OGs, so they should be something we can reliably look for to know which event has occurred.
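Since the Event IDs are stable, a small dispatch table keeps the listener tidy as you handle more events. The handler bodies below are placeholders for illustration:

```python
## Route incoming notifications by EventId instead of a pile of
## if/elif branches. Handler names here are illustrative only.

def on_enrollment(data):
    return 'enrolled %s' % data.get('SerialNumber')

def on_device_delete(data):
    return 'deleted %s' % data.get('SerialNumber')

EVENT_HANDLERS = {
    148: on_enrollment,     ## Device Enrollment
    662: on_device_delete,  ## Device Delete
}

def handle_event(data):
    handler = EVENT_HANDLERS.get(data.get('EventId'))
    if handler is None:
        return 'ignored event %s' % data.get('EventId')
    return handler(data)
```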

Writing a Simple Python Script to Listen for HTTP Callbacks

As Python is the language I’m most familiar with at this point, and because my goal was to turn my existing scheduled scripts into event-driven scripts, I wanted a way to listen for the event notifications in Python. With some quick Googling, it looked like there were probably a few options, but I decided to go with Flask for this project.

So before getting started, and if it’s not already installed, run the following command to install Flask on your machine:

pip install flask

Now we can use Flask to listen for HTTP POSTs by running the following code:


import json
from flask import Flask, request

app = Flask(__name__)


@app.route('/', methods=['POST'])
def main():
    data = json.loads(request.data)
    print data
    return "OK"

if __name__ == '__main__':
    app.run()
This code quite simply listens (currently only on localhost, i.e. for any HTTP POSTs. When one is received, it parses the data passed in as JSON, and then prints that data out.

In order to have the same script listen on your host’s actual IP address, you can change the app.run() line to app.run(host=''). It should be noted, however, that the code above requires no authentication to send data to it, so be cautious before running this on an internet-facing server or anything.
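One cheap mitigation, since AirWatch lets you attach authentication to the rule on its side, is to require a shared token and compare it in constant time. The header name and token below are made up for the sketch:

```python
import hmac

SHARED_TOKEN = 'change-me'  ## placeholder; load from a config/secret store in practice

def is_authorized(headers):
    supplied = headers.get('X-Auth-Token', '')
    return hmac.compare_digest(supplied, SHARED_TOKEN)
```

In the Flask handler, you would check is_authorized(request.headers) before parsing the body, and return a 401 otherwise.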

With AirWatch, this ends up spitting out some data like so:

{
  "EventId": 641,
  "EventType": "Asset Number",
  "DeviceId": 556,
  "DeviceFriendlyName": "My Device",
  "EnrollmentEmailAddress": "",
  "EnrollmentUserName": "username",
  "EventTime": "/Date(1525979516323)/",
  "EnrollmentStatus": "Enrolled",
  "CompromisedStatus": "",
  "CompromisedTimeStamp": "/Date(1525979516330)/",
  "ComplianceStatus": "Compliant",
  "PhoneNumber": "",
  "Udid": "3465FEB3BD615931A073832628A6D022",
  "SerialNumber": "C12345678910",
  "MACAddress": "012345678910",
  "DeviceIMEI": "3465FEB3-BD61-5931-A073-123456789",
  "EnrollmentUserId": 345,
  "AssetNumber": "09876543",
  "Platform": "AppleOsX",
  "OperatingSystem": "10.13.4",
  "Ownership": "CorporateDedicated",
  "SIMMCC": "",
  "CurrentMCC": "",
  "OrganizationGroupName": "Macs"
}

This output is from a notification about an Asset Number change. In a previous post I wrote about how we’re trying to automate creating Munki manifests with client details. One of the pieces of that script for us is to update the Asset Number in AirWatch with the asset number from our internal property database. We use this asset number for several different things, so it’s important that it be the actual asset tag as defined in the property DB. With the event notification, we can instantly see if an asset number was updated, and then quickly check to see if it matches the real asset number, and if not, change it to the proper number.

Side Note: We currently have set this up now in our environment, and while testing, within about 2 seconds of changing the asset tag to something incorrect, it is back to being the actual number again… it’s pretty amazing.

So let’s expand on our Python code above to add some logic that could catch an event like this, and then take the appropriate actions:


import requests
import json
from flask import Flask, request

known_good_asset = '12345678'
airwatch_update_url = ''  ## your AirWatch device update endpoint (URL elided here), with a %s for the device serial
request_headers = {}  ## your AirWatch API headers (aw-tenant-code, Authorization, etc.)

def updateAssetTag(device_serial, good_asset_number):
    update_asset = requests.put(airwatch_update_url % device_serial, headers=request_headers, data={'AssetNumber': '%s' % good_asset_number})
    if update_asset.status_code == 204:
        print "Device Asset tag updated successfully for [%s]" % device_serial
    else:
        print "Unable to update device asset tag"

app = Flask(__name__)


@app.route('/', methods=['POST'])
def main():
    data = json.loads(request.data)
    try:
        ## We can pull out the key fields that matter to us from the event
        event_id = data['EventId']
        device_serial = data['SerialNumber']
    except KeyError:
        return "Data received was not in the format expected"

    if event_id == 641:
        ## We need to assign the device_asset here as opposed to above,
        ## because the AssetNumber is not passed with all event notifications
        device_asset = data['AssetNumber']
        if not device_asset == known_good_asset:
            updateAssetTag(device_serial, known_good_asset)

    return "OK"

if __name__ == '__main__':
    app.run()
This is clearly a bit of pseudo-code as it would always set the asset to a defined string for every single device that had an asset number change, but hopefully the logic is clear.

While a fairly simple example, it 1) shows just how quickly you can react to events taking place in your environment, and 2) is far less intensive than iterating over 1000 machines, checking the asset number for each device, updating the asset if one is out of sync, all while doing this on a schedule over and over again. Using Event Notifications, we can see a change for a specific device, look up the info just for that device, make the change for just that device, and then go back and quietly wait for the next event notification.

We are already beginning to build better workflows around these Event Notifications, such as using AirWatch as our authority on device status. Meaning, if a device is removed from AirWatch, we want to remove it from all of our other systems (i.e. Munki, MunkiReport, Chef, etc.). With the “Device Deleted” event, we can monitor for the removal of devices, and then instantly remove that device from all other systems to ensure we are not holding onto crufty data.

We are also looking at possibly using the notifications to allow for more granularity, and more options, for things like AirWatch’s compliance policies. At the moment, AirWatch has fairly limited capabilities when a device becomes Non-Compliant. But with event notifications, we could write the code to do whatever we wanted, whether that be to trigger an action, move the device to a more locked down OG, send an Install Application command, etc. The options become far greater and allow you to grow your environment beyond what a product may offer out of the box!

As always, thanks for reading and happy automating!


Managing macOS Software Updates with the AirWatch Agent and Chef


As I’ve discussed before, I work in a high-compliance organization, meaning that when OS updates are released, we need to be able to test them, roll the updates out to customers, and then ensure their successful installation. Up until recently, we had been using LANrev for Mac management and patching, which had the interesting ability to run the softwareupdate utility on a client machine, grab the resulting update package(s), and then upload them to the server. With this method, we would deploy the OS update package like any other to the devices that required it, after it had been vetted and approved internally. And overall it worked well. Since the update was treated as a standard package, the typical install status and reporting in LANrev worked the same way. We could audit failures, successes, etc., and repush the update as needed. This, however, became less and less stable over time, specifically starting with 10.12, where the updates would never install successfully on clients. Once we moved to AirWatch, I was happy to find that they also had an OS update mechanism in place. Their agent could download and install updates, also using the softwareupdate utility (more on that later), and interact with the VMware AirWatch Agent GUI to show prompts to customers and alert them that they needed to reboot, among other things.

Setting Up Software Updates in AirWatch

In order to utilize the AirWatch Agent for Software Updates, you need to create a “Software Update” profile in AirWatch for macOS.

This profile specifies things like:

  • Update Source – This can be pointed at Apple’s catalogs, or an internal SUS
  • How to install updates and what updates to install – This has options like “Install Updates Automatically,” or “Download updates in the background,” or “Check for updates only.” It also specifies whether macOS beta updates should be allowed, or app updates should be installed by the Agent.
  • Schedule – This allows you to schedule how often to check for software updates.
  • Restart – This allows specifying whether or not the agent should restart after updates are installed (for those that require a reboot), and should the customer be given a grace period before the reboot is forced.

Once you have those settings in place, you can push that profile to the client, and two things should happen:

  • A Software Update profile should get installed with any customizations to things like the Software Update Server
  • A launch daemon and plist file should end up on the device which are used by the AirWatch agent

The Problem

So why talk about any of this in the first place? AirWatch does make this quite easy to set up, so is a blog post really necessary? Probably not… But we recently noticed that devices were not being updated even though the Software Update profile was on the system. This unfortunately meant that if new updates were made available, the devices might see them in the App Store like normal, but we couldn’t ensure their installation, and thus our devices’ compliance. The other issue is that since AirWatch does some under-the-hood magic when setting this profile up with regards to the agent actually enforcing the updates, there was no indication from our console that anything had gone wrong. AirWatch saw that the profile was reported as installed on the device, so why would it think anything was wrong?

Digging In

Once I realized we had a bit of an issue with the launch daemon not being present on the systems, the first thing I did was open a ticket with AirWatch to report it!

I started thinking of other ways we could automate software updates, and now that we have a Chef infrastructure set up, I figured that should be pretty easy. We would just need to set up a launch daemon that calls the softwareupdate utility, and then the next time the customer rebooted their machine, the latest updates would all get installed.

No, don’t do this. This does not offer us very good compliance, as we know customers often go days, weeks, maybe months, maybe only once something stops working, before rebooting their machines. AirWatch has also done all of the hard work for us in having their agent be able to alert a customer, set deferral times, and enforce the reboot if needed, and I wanted to make sure those efforts didn’t go wasted!

I started looking at what was happening on a device that had the Software Update profile installed, and found that on devices that were being successfully updated, a launch daemon was present that was not on other systems. The launch daemon was called com.airwatch.AWSoftwareUpdateScheduler.plist and looked something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.airwatch.AWSoftwareUpdateScheduler</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/Application Support/AirWatch/AWSoftwareUpdateScheduler</string>
    </array>
    <!-- StartInterval mirrors the check interval set in the Software Update
         profile; the value shown here is illustrative -->
    <key>StartInterval</key>
    <integer>86400</integer>
</dict>
</plist>
This declares two important things, 1) the binary to run (AWSoftwareUpdateScheduler), and 2) the interval at which to run that binary, which corresponded to the software update check interval we set in our Software Update profile.

In calling this binary manually and monitoring the logs it spits out, it became clear that at its core it was in fact just calling Apple’s softwareupdate utility. But it also opened a socket to the AirWatch agent binary in order to show the reboot prompts to the user, and could essentially wait idle for an extended time and then reboot the machine, again after notifying the user. The interesting thing was that when calling this binary on a device that had the Software Update profile installed and a macOS update available, but did not have the launch daemon present, it would run the softwareupdate utility and the update would get installed/staged, but that would be it. There would be no GUI prompt or anything. Running it again, same thing: it would just re-stage the update, but no GUI prompts.

This led me to begin looking for other pieces or files that might be on properly configured devices, but not on problem devices. This is when I discovered the Scheduler.plist, which contains all of the settings that you specify when setting up the Software Update profile in the AirWatch console. This file lives under /Library/Application Support/AirWatch/Data/

The Scheduler.plist file looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- each key corresponds to a setting from the Software Update profile;
         most keys are elided here, with gracePeriod shown as one example -->
    <key>gracePeriod</key>
    <integer>7200</integer>
</dict>
</plist>

Most of these keys are self explanatory and correspond directly to a setting in the Software Update profile we setup earlier.

With that new knowledge, I copied that file to a problem device, re-ran the AWSoftwareUpdateScheduler tool, and lo and behold, when the update was installed/staged, the GUI prompt appeared! This means it should be pretty easy to utilize the AirWatch agent for prompts to our users, while setting the settings and ensuring they are present on the devices using Chef! I would just need to deploy a launch daemon, which with Chef is done using the launchd resource, and put the Scheduler.plist file on disk, which can be done using the cookbook_file resource.

The other interesting thing I found while looking into this is that the Scheduler.plist file contains a key called gracePeriod, which was set to 7200 seconds for us. This corresponds to the Grace Period set in the Software Update profile, which (annoyingly) has a max value of 2 hours. This is actually something we had wanted to extend further, but since it wasn’t in the GUI, we didn’t think it would be possible. But now that we were creating the Scheduler.plist with Chef, maybe we could make the initial reboot grace period longer, something like 8 hours? And wouldn’t ya know it, after setting the gracePeriod key to 28800 seconds and running AWSoftwareUpdateScheduler, I received a prompt reflecting the full 8-hour grace period.

This then would mean that you should be able to get far more granular with just about all of the keys in the Scheduler.plist if managing it directly (i.e. not using the AirWatch console). You could add more deferrals than the GUI allows, get more granular with time in between deferrals, etc.

Wrapping Up

I did eventually open a support case with AirWatch, since this needs to be fixed in the end, as I’m sure many customers rely on it. But this was a fun way to learn a bit more about the underlying “technology” AirWatch is using to perform and enforce macOS updates, and it was once again a reminder of how awesome having a configuration management tool is. Yes, this all could have been done with relatively straightforward bash scripts and deployed via a package or something, but this way it is centrally managed and we can ensure its compliance on the system.

As always, thanks for reading!


Generating Pre-Populated Munki Manifests Utilizing the AirWatch API


While recently setting up Munki in our environment, I was doing some Googling and asking for advice on how to handle client manifests. I saw several folks recommending the per-device/per-user manifest method (examples here and here), but from our point of view it was hard to see the need for per-client manifests, as the majority of our software/patches are deployed to all equipment. But trusting that these folks knew what they were talking about, and knowing that they had used Munki for far longer than we had, we decided to follow the advice of those smarter than us. We chose to utilize the site_default manifest that Munki defaults to as our primary manifest that all general software would be assigned to. We then made it an included_manifest in the per-device manifests. This way, we would have the granularity to make certain pieces of software available to specific devices as needed, without having to assign every device every piece of standard software. And if a client manifest somehow got deleted or was unavailable, Munki would default to site_default anyway, and the client state really wouldn’t change much.
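Under this scheme, a freshly generated per-device manifest might look something like the plist below (the display name, user, and catalog values are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>production</string>
    </array>
    <key>included_manifests</key>
    <array>
        <string>site_default</string>
    </array>
    <key>display_name</key>
    <string>MH01234567MACLT</string>
    <key>user</key>
    <string>jane.doe</string>
    <key>managed_installs</key>
    <array/>
</dict>
</plist>
```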

Once we decided to go this route, we wanted a way to automatically create and pre-populate these client manifests. The idea being that if a customer asked for a piece of software to be made available to their machine(s), we wouldn’t have to ask them their device serial number and go create the manifest, it would already exist. We also wanted a way to have some of the “pretty” information filled out, such as the DisplayName (which we would set to the device hostname) and User (which we would set to the Assigned User in AirWatch), since looking through 1000 manifests named after Serial Numbers isn’t as easy as looking for a customer’s name or device asset tag. The question then became “How can we gather this information in an automated fashion?” We have a property database that holds accountable users and their devices, so perhaps this is straightforward! Alas, it was not. We have several fellows and volunteers that work for us, and since they are not technically contractors or government employees, they cannot directly be “accountable” for a government asset. Typically, this results in their supervisor being listed as the accountable user. So while that option would work, it would not be very accurate, and would be harder to manage down the line.


If you just want to look at the script, it can be found as a Gist on my GitHub here:

AirWatch’s Robust API

AirWatch has an incredibly robust API built into their product. I mean, huge. Knowing this, I thought it should be pretty easy to get a device’s Assigned User, Serial Number, and Device Friendly Name (the device name in the AirWatch DB). Then, with the information pulled from AirWatch, we could create per-client Munki manifests.

Side Note: Before going further, if you are unfamiliar with the AirWatch API or how to use and access it, I’d recommend checking out Ben Toms’ nice “Getting Started” post here. I originally wrote the script simply wrapping the curl command, but as Ben points out, the requests library is awesome and made things incredibly easy. So if you are not already using it, I’d recommend installing it before trying to work with AirWatch’s API (at least when using Python).

Ok, so on to the actual API calls. We first wanted to start by just pulling all of the devices in AirWatch, and their associated information. A call like that would go something like this:


import requests

###### AirWatch Variables ######
airwatch_server = '' ## Enter your AirWatch server here, for example
b64auth = '' ## Base 64 Encoded AirWatch Username and Password here
aw_tenant_code = '' ## Enter the AirWatch API Key from your server here
request_headers = {'aw-tenant-code':'%s' % aw_tenant_code, 'Accept':'application/json', 'Authorization':'Basic %s' % b64auth}
###### AirWatch Variables ######

all_devices = requests.get('%s/api/mdm/devices/search' % airwatch_server, headers=request_headers)

try:
    parsed_device_output = all_devices.json()
except ValueError:
    print "The API call failed."

This will return a parsed dictionary into the variable parsed_device_output for all of your currently enrolled devices, with which you can begin to access all of the juicy bits of info that AirWatch stores. From here, it should be very straightforward to extract the keys we want (Friendly Name, Assigned User, etc.) and use that to generate the Munki manifests for our devices.

The Upstaging “Default Staging User”

Due to the fact that we are migrating devices from our previous system into AirWatch, the vast majority of them are “Staged,” because we are enrolling the devices into AirWatch by manually pushing the AirWatch Enrollment profile. Therefore, until our users reboot or log out, their devices are enrolled in AirWatch as the “Default Staging User.” Obviously that’s not what we want, so we needed to add a little condition in our script to make sure that the enrolled user is in fact real. That would go something like this:


## We are going to extract the device fields we care about
## (SerialNumber, DeviceFriendlyName, etc.) from the
## "parsed_device_output" variable we got earlier and assign it to
## a new list called "airwatch_devices"

airwatch_devices = []

for i in xrange(0, len(parsed_device_output['Devices'])):
    client_dict = {}
    client_dict['SerialNumber'] = parsed_device_output['Devices'][i]['SerialNumber']
    client_dict['FriendlyName'] = parsed_device_output['Devices'][i]['DeviceFriendlyName']
    client_dict['AssetNumber'] = parsed_device_output['Devices'][i]['AssetNumber']
    client_dict['Username'] = parsed_device_output['Devices'][i]['UserName']
    airwatch_devices.append(client_dict)

for device in airwatch_devices:
    if 'Default Staging User' in device['Username']:
        print "Device [%s] has not been fully provisioned yet, skipping manifest creation" % device['SerialNumber']
        continue

One slight downside to this method is that the client manifest won’t be generated until the user information is updated, and as we know, some people don’t reboot very frequently, so it could take a while. But it doesn’t matter much for us, since the client manifest will just be inheriting the site_default manifest, which Munki on the device defaults to anyway without a device manifest.

Unfriendly “FriendlyName[s]”

The next hurdle we had to overcome was the "DeviceFriendlyName" that AirWatch was returning. We use a standardized naming convention on all of our machines: MH + 8-digit asset tag + MAC + LT or DT (for laptop or desktop, respectively). This results in a device name like MH01234567MACLT.
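That convention is rigid enough to validate mechanically. A small sketch (the pattern and function name are my own, derived from the convention described above):

```python
import re

# MH + 8-digit asset tag + "MAC" + LT/DT, per the naming convention above
NAME_PATTERN = re.compile(r'^MH\d{8}MAC(LT|DT)$')

def is_valid_friendly_name(name):
    """Return True when a DeviceFriendlyName matches our convention."""
    return bool(NAME_PATTERN.match(name))
```

A check like this could be slotted in alongside the other pre-flight conditions if you wanted to skip devices whose names haven't been normalized yet.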

However, up until this point, all of our devices had AirWatch's default Asset Tag, which is the same as the device's UDID: some crazy long alphanumeric string. Since we have a property database with this information, we were able to add a function to our script that queries the property database and makes a PUT request to AirWatch to update the device's Asset Tag (this snippet is not included in the code on GitHub, as it relies on internal resources that are likely not relevant to others).
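Although that internal snippet isn't shared, the AirWatch side of it might look roughly like this. Note that the endpoint path and payload key here are my assumptions based on AirWatch's REST API conventions, not a copy of our code; check your console's API documentation (requests would also carry the usual aw-tenant-code header and Basic auth):

```python
import json

def build_asset_tag_update(api_host, serial, asset_tag):
    """Return the URL and JSON body for a device Asset Tag update.

    Hypothetical sketch: verify the endpoint and key names against
    your AirWatch REST API docs before using.
    """
    url = 'https://{}/api/mdm/devices/serialnumber/{}'.format(api_host, serial)
    body = json.dumps({'AssetNumber': asset_tag})
    return url, body
```

The returned URL and body would then be handed to whatever HTTP client the rest of the script already uses for its GET requests.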

With regards to the script, we just needed to add another conditional below the one we just created above, that will make sure the Asset tag was updated:

    elif len(device['AssetNumber']) < 8:
        print "Device [%s] Asset tag has not been updated yet, skipping manifest creation" % device['SerialNumber']

These are all basically just simple checks the script does before creating the manifest on the server. If these checks didn't matter to you, or didn't apply, they could easily be changed or completely removed.

Side Note: I did not show this code, but in the script on GitHub, there is also a check to make sure that the manifest does not already exist on the server, so that we don’t unnecessarily generate one or overwrite anything.

Generating the Manifest

Now that we have all the pieces we need, and have checked to make sure that the device is in the state we want, we can confidently create the device manifest with the necessary bits of info.

We can add the following code to actually create the manifest for us:

        print "\tCreating a manifest for device."
        manifest_template = {}
        manifest_template['catalogs'] = ['production']
        manifest_template['included_manifests'] = ['site_default']
        manifest_template['managed_installs'] = []
        manifest_template['optional_installs'] = []
        manifest_template['display_name'] = device['FriendlyName']
        manifest_template['user'] = device['Username']
        plistlib.writePlist(manifest_template, '%s/%s' % (manifests_dir, device['SerialNumber']))


And there you have it, a way to dynamically generate client manifests for Munki, utilizing AirWatch’s API!

We are hoping to expand upon this idea to make even more use of AirWatch’s API. We would like to do something to the effect of allowing a customer to choose a piece of licensed software that they want on their device, that information would then get sent to AirWatch into something like a Custom Attribute, and then manifest_generator could look and add that piece of software to the client’s manifest.
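As a sketch of that idea (everything here is hypothetical: the attribute values, the mapping, and the item names):

```python
# Hypothetical mapping from an AirWatch Custom Attribute value to the
# Munki item names a customer's request would translate into.
OPTIONAL_SOFTWARE = {
    'design-suite': ['DesignAppPro', 'VectorToolCC'],
    'office': ['OfficeSuite'],
}

def add_requested_software(manifest, custom_attribute_value):
    """Append the items a customer requested to the manifest's managed_installs."""
    for item in OPTIONAL_SOFTWARE.get(custom_attribute_value, []):
        if item not in manifest['managed_installs']:
            manifest['managed_installs'].append(item)
    return manifest
```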

As always, thanks for reading!


Remotely Approving UAMDM

With the release of 10.13.2, Apple introduced a new “feature” called User Approved Mobile Device Management Enrollment (UAMDM), which withholds certain privileges from the managing MDM until manual action, by the device user, “Approves” the right to those privileges. Apple also made it quite difficult to perform this approval remotely, with the intent that the user of the machine would have to in fact agree to these extra capabilities. This poses an issue for MacAdmins who are managing fleets of Apple devices, especially those who may not have all of their devices in a centralized location, may not have an MDM setup, or may not have DEP for devices coming in, even if they do have an MDM setup.

As I’ve talked about previously, we’re in the middle of an MDM migration over to AirWatch. Because of this, we have (selfishly) been telling our customers to hold off on upgrading their machines to 10.13.x. In our defense, up until 10.13.2, it was primarily due to stability and security concerns. But at this point, we really are just trying to have them wait so that we can avoid UAMDM troubles once we are ready to enroll their machines into our new MDM.

We are also performing our migration in as much of an automated way as possible, which means installing the MDM profile directly on the machines via a package. This means that for any 10.13 machines that are already in our fleet, we will need to figure out a way to click that pesky “Approve…” button in order to reap all of the MDM goodness that we have at our finger tips.

Taking from what I learned about trying to automate the AirWatch location services in this post, I decided to see if we could do the same kind of thing here, by using AppleScript to send button clicks on our behalf in order to approve UAMDM. From that post we can recall that fully automating this process is essentially impossible, since we can't authorize Script Editor to have the "Accessibility" access it needs to send button clicks. But it might just allow us to at least approve UAMDM remotely, which is a heck of a lot easier and faster than sneaker-netting to all of our 10.13 machines, especially since we are a geographically distributed organization. Side note: Have an intern? This could be a great project if you have lots of non-UAMDM machines in your fleet!


If you just want to get the script and give it a shot, you can find it on my GitHub here along with the basic instructions:

Giving Ourselves over to Script Editor

As mentioned above, before we do anything further, we might as well go ahead and grant Script Editor the necessary permissions it needs in order to help us. I also verified that this can be done remotely using Screen Sharing.

  1. Open System Preferences –> Security & Privacy
  2. Select the Accessibility option in the left column
    Screen Shot 2018-02-18 at 2.09.06 PM.png
  3. Click the plus (+) button to choose the app we want to allow, which in this case is under /Applications/Utilities/Script Editor.app
    Screen Shot 2018-02-18 at 2.09.18 PM.png
  4. Wonderful! We should now see that Script Editor has the necessary permissions to move forward!
    Screen Shot 2018-02-18 at 2.09.23 PM.png

Don’t Forget the “…”!

Now that Script Editor is authorized to have some additional rights to our machine, we need to start the process of finding out where the “Approve…” button is in the context of the UI. Upon enrolling a 10.13.2 (or higher) device into an MDM via something like a package or using the profiles command, if you open the “Profiles” preference pane, you will see the following screen advising you that not all MDM functionality is yet available for the device.
Screen Shot 2018-02-18 at 1.59.56 PM.png

This, ladies and gentlemen, is UAMDM. As most already know, trying to click that “Approve…” button via something like ARD or Screen Sharing results in the following alert: Profiles cannot be approved while using remote or automated input methods.
Screen Shot 2018-02-18 at 1.54.55 PM.png

Well, we’ll see about that.
well see.gif

Let’s load up Accessibility Inspector and take a look at the UI hierarchy.
Side Note: There are instructions on finding and using this tool in the post about Automating Location Services, so I will not be covering them here.

Using Accessibility Inspector, if we choose the “Approve” button in the Profiles preference pane, we can get the hierarchy of visual attributes, which we will need to write our AppleScript.
Accessibility Inspector.png
We can quickly see a few things that will be important. 1. The “Approve” button is actually “Approve…”, with an ellipsis included at the end. 2. It is a nested attribute in a “scroll area” that has no name or description, which may make this just a little harder. Other than those important little details, we can see that, as expected, the scroll area is nested within the “Profiles” window, which is nested in the “System Preferences” application. Great, now we have the necessary components to write our script!

Again, borrowing from what we learned previously, we know that we have to send the button click that we are trying to automate to the “System Events” process, and then from there to the actual application, in this case “System Preferences”. Therefore, we know the starting of the script should look something like this:
Screen Shot 2018-02-18 at 2.48.35 PM.png
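The screenshot isn't reproduced here as text, but the skeleton it shows is roughly this (a reconstruction, not a copy of the original script):

```applescript
tell application "System Events"
	tell process "System Preferences"
		-- the window, scroll area, and button get filled in below
	end tell
end tell
```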

From there, we just need to fill in the juicy bits, including the window (“Profiles”), the scroll area (no description), and the button (“Approve…”). However, sending the button click to the scroll area proved more difficult than I expected. I didn’t know what to call it when trying to “talk” to it, and went through several iterations. Turns out, it’s easier to just ask the application itself what to call it. We can do this by using a get command within AppleScript and asking for the UI elements within a window. So the next iteration of the AppleScript looked like this:
Screen Shot 2018-02-18 at 2.53.27 PM.png

This spit out a result like so:
Screen Shot 2018-02-18 at 2.54.52 PM.png

I’ve highlighted the two “scroll areas” that the command found. But we still don’t know which one is which. There are two scroll areas in the “Profiles” preference pane, the one on the left that lists all of the profiles installed, and the one on the right, which typically shows the description of the profile, as well as what settings the profile manages. You would think that “scroll area 1” would be the left column, based on how we read left to right, and therefore count as we go. Turns out, not so. If we do another get of the scroll areas to see what kind of UI elements they contain, we can try to figure out which scroll area corresponds to which column in the preference pane. Let’s first query scroll area 1 and see what it contains:
Screen Shot 2018-02-18 at 3.01.46 PM.png

And would you look at that, scroll area 1 is actually the right side scroll view, containing the attributes about the profile, including our “Approve…” button. So this is great news, we now know exactly where we need to click the button, with regards to what is nested in where. This leaves us with a script looking like so:
Screen Shot 2018-02-18 at 3.04.15 PM.png
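In text form, that click ends up as something like the following (again a reconstruction based on the hierarchy above; note the literal ellipsis in the button name):

```applescript
tell application "System Events"
	tell process "System Preferences"
		click button "Approve…" of scroll area 1 of window "Profiles"
	end tell
end tell
```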

And would you believe it, this worked! Apple tried to get in our way and add a second prompt to really make sure the user wanted to give their soul over to the MDM, but using the same techniques we did above, we were able to overcome that quite quickly:
Screen Shot 2018-02-18 at 3.06.06 PM.png

The final script ended up looking like this:
Screen Shot 2018-02-18 at 3.07.17 PM.png

Putting It All to the Test, Remotely!

I have reworked the script a few times now to try and handle a few different scenarios and still work. Below is a screenshot of the end result of the script at this time:
Screen Shot 2018-02-19 at 1.11.11 PM.png

So now that we’ve got ourselves a bona fide working AppleScript that will approve our UAMDM for us, does it work remotely? I opened up Screen Sharing and opened a remote session into the VM I was working on, held my breath, and clicked the Play button in Script Editor…

It worked! This may not be the cleanest, nicest, fanciest, or really even good way of doing this, but it is a way. You could theoretically take this method and remote into (remember that intern I recommended?) every non-UAMDM machine you have in your fleet, and get them “UAMDM’d”!

One of these days, I might actually try to learn the Objective-C ways of doing these kinds of things, but for now, AppleScript will have to do!


Creating a DEP VM using Parallels Desktop

Not sure who made the decision, but at some point in the past, my org decided to standardize on Parallels Desktop instead of VMware Fusion. Overall this is fine, but I have found Parallels struggles a bit more with preboot things, such as FileVault. The other thing Parallels (or really its community) lacks is guides and tools for MacAdmin tasks. Most admins I’ve seen, or guides I’ve found, use VMware Fusion for their macOS VM testing. Since I don’t have a license for that, but do have Parallels at my disposal, that’s what we had to make work.

We have just recently started acquiring DEP-enrolled Macs, and with that, wanted to put our planned workflow for provisioning a new machine to the test. I knew other admins tested DEP with VMs, as I’d seen chatter about it on Twitter and the MacAdmins Slack. But I’d be lying if I said I knew how to create a DEP VM, and thus we were using a physical device and just wiping it… repeatedly. After about the 10th wipe, and literally a full day of (barely) testing, we came to the conclusion:

“There has to be a better way!”

Based on a few guides I found online, and particularly this straightforward one by Ross Derewianko, I realized it should be quite easy: just set the VM’s Serial Number and Hardware Model.

To jump straight to the instructions, see the Creating the VM section below.

Setting the Device Hardware Model

A Parallels VM’s config file is slightly different from VMware Fusion’s, and we aren’t able to set the hardware model in the config.pvs file like you can in Fusion’s .vmx file. Instead, we have to set a “boot flag” in the VM’s settings. The key for this flag is:


An example of this would be:


Setting the Device Serial Number

The config.pvs method

Unlike the Hardware Model, you can set the Serial Number in the config.pvs file. This file is located within the VM, so you have to find the VM location (normally under ~/Parallels/VM_NAME.pvm/). Right-click the VM and choose “Show Package Contents” and then open the config.pvs file into your desired text editor. The “SerialNumber” key is within the “General” key.
Screen Shot 2018-02-09 at 1.08.29 PM.png

The boot flag method

You can also set the Serial Number via a boot flag, similar to setting the hardware model. The key for this flag is:


Creating the VM

Unlike some of the instructions I saw for Fusion, I was not able to change the settings of a pre-existing Parallels VM to make it into a DEP-enrolled VM. Therefore, the instructions below are how to create a new VM with the Serial Number and Hardware Model set to spoof a DEP device. NOTE: You will need a downloaded copy of a macOS Installer on your device before proceeding.

  1. Begin by opening Parallels Desktop and choose to create a new VM
  2. Choose the Install Windows or another OS from a DVD or image file
    Screen Shot 2018-02-09 at 12.29.03 PM.png
  3.  Choose the macOS installer you downloaded previously.
    Screen Shot 2018-02-09 at 12.30.34 PM.png
  4. Continue through the prompts to create a bootable disk image file, and choose where to save it on your device.
    Screen Shot 2018-02-09 at 12.31.38 PM.png
  5. In the Name and Location window, be sure to check the box at the bottom that says Customize settings before installation. This is crucial, as this is how we’ll set the boot flags mentioned above before the machine is ever provisioned
    Screen Shot 2018-02-09 at 12.39.43 PM.png
  6. Once the VM Configuration screen appears, switch to the Hardware tab and select the Boot Order option.
    Screen Shot 2018-02-09 at 12.40.33 PM.png
  7. Expand the Advanced Settings and in the Boot flags text box, enter the necessary pieces as documented above. For example:

    Screen Shot 2018-02-09 at 12.41.53 PM.png

  8. You can now close the Configuration window and Continue provisioning the VM.


After successfully setting the two required DEP pieces, if we boot our VM we should see it bring us to the “Remote Management” screen!
Screen Shot 2018-02-08 at 5.15.04 PM.png


Automating the Enablement of App Location Services (and failing…)

In moving over to AirWatch, we were poking around with the location-tracking features. As all of our equipment is Government Furnished, we try to keep a close, big-brother eye on it. We currently use another tool that shall remain nameless, but have been dealing with insane battery drains for the past few months that we’re all but certain it has introduced. We’re talking brand-new machines that should be getting 7-10 hours consistently dying after 2. Therefore, we were looking for alternatives, and hoping AirWatch would fit the bill. It provides the ability to track device locations, lock devices via MDM, and wipe them, all things we use our current product for.

Enabling Location Tracking in AirWatch

In order to turn on Location Tracking in AirWatch, you have to dig into the Settings a bit. It is under Groups & Settings –> All Settings –> Devices & Users –> Apple –> Apple macOS –> Agent Settings. In that section, you will see an option for Location with a checkbox to “Collect Location Data”.

Screen Shot 2018-02-06 at 12.06.02 PM.png

Enabling Location Tracking Manually on macOS

Once you’ve enabled Location Tracking in AirWatch, and the Agent on the Mac syncs, you will see a very persistent popup appear (persistent assuming you don’t just enable Location Services for the app).

Screen Shot 2018-02-06 at 12.02.15 PM.png

If you click “OK” in this window, System Preferences will automatically open and specifically will open to the Privacy tab of the Security & Privacy preference pane.

Screen Shot 2018-02-06 at 1.19.14 PM.png

This poses an issue for us for a few reasons. One is that we try to be as quiet as possible with regards to alerts to our customers. We don’t want to interfere with their work, nor do we want to show them random popups that, from a user’s perspective, could look like phishing or being “hacked,” thus generating an IT ticket. The other part of the issue is that even if we were OK with our customers seeing this prompt, and having them be responsible for enabling Location Services, granting an app Location Services abilities requires administrative privileges, which the majority of our users do not have.

Trying Our Hardest to Automate This Process

Ok, so now that we know the problem, we need to figure out not only how to fix it, but how to fix it automatically. Of course we could have our techs do this during our imaging process, or sneaker-net to 1000 machines… but neither of those is ideal. We like automation because, as my colleague often says, “we’re lazy, and we like it.”

It starts with Python

I first began by looking at how others had done similar things, namely how Clayton Burlison had done this with his open source tool pinpoint. I knew that he worked out how to not only enable Location Services if it was currently disabled, but also how to add Python to be an app or a service that could utilize Location Services, all programmatically. Now, a side note here, because I can see people saying “So why not just use pinpoint, you mentioned you were using MunkiReport previously?” And that is true, we considered it, tested it, and liked it. However, as I’ve written about previously, we try to cram as many things into our top level product, AirWatch, before moving on to a new/alternative solution. Since AirWatch has the ability to do Location tracking, I wanted to utilize it.

Back to the code. Per Mr. Burlison’s pinpoint code, you can add an app to the “Approved” Location Services apps database (located at “/private/var/db/locationd/clients.plist”). I’m not going to dive into how to achieve this, but you can see the code here. We used the same code, but switched the relevant bits around to use the AirWatch Agent’s values, like so:

domain = 'com.airwatch.mac.agent'
bundle_path = '/Applications/VMware AirWatch Agent.app'
executable_path = '{}/Contents/MacOS/VMware AirWatch Agent'.format(bundle_path)
requirement = 'anchor apple generic and identifier \\"com.airwatch.mac.agent\\" and (certificate leaf[field.1.2.840.113635.] /* exists */ or certificate 1[field.1.2.840.113635.] /* exists */ and certificate leaf[field.1.2.840.113635.] /* exists */ and certificate leaf[subject.OU] = S2ZMFGQM93)'
auth_plist = {
    'Authorized': True,
    'BundleId': domain,
    'BundlePath': bundle_path,
    'Executable': executable_path,
    'Hide': 0,
    'Registered': "",
    'Requirement': requirement,
    'Whitelisted': False,
}
Now we have a Python script that can add the AirWatch Agent to the Location Services approved-apps database. Hooray! Let’s run it!

Screen Shot 2018-02-06 at 2.20.33 PM.png

Uh oh… ok… so now we have a new prompt. Seems like Apple is being clever here. This appears to be a way that Apple is preventing apps from being able to just insert themselves into the approved Location Services database without the user knowing about, and explicitly approving it. This is frustrating for us, but at the same time, a good move on Apple’s part. Otherwise, if we were a bad actor, we could run this type of code on any machine we wanted (assuming we had access) and start tracking the device’s location without any user acknowledgement. So why doesn’t pinpoint show this prompt? My guess is because pinpoint is using the native Python binary on disk, which is signed by Apple themselves, and thus likely has some extra entitlements to be able to “bypass” this extra check.

But it ends with AppleScript

Alright, so how do we get around this? Well, my first thought was to see how I could simulate clicking the “Allow” button programmatically. I knew this had to be possible, and some quick Googling turned up results about doing this with AppleScript. I thought it would be as easy as sending the “Enter” keystroke, since the “Allow” button is the default selection. This is the code that will just send an “Enter” key via Script Editor.
Screen Shot 2018-02-06 at 2.26.14 PM.png

We want to call this script from within our current Python script, which is what causes the prompt to appear in the first place. So if we time it correctly, the sequence should be: the prompt appears, the Python script calls the AppleScript, and the AppleScript sends the Enter key, “Allow[ing]” Location tracking!
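Glued together, the Python side can launch osascript in the background just before writing the clients.plist entry, so the AppleScript is already running when the prompt appears. A sketch (the script path and helper names here are hypothetical):

```python
import subprocess

# Hypothetical location of the compiled AppleScript from above
ALLOW_SCRIPT = '/usr/local/bin/allow_location.scpt'

def build_osascript_cmd(script_path):
    """Build the argv for running an AppleScript via osascript."""
    return ['osascript', script_path]

def launch_approver(script_path=ALLOW_SCRIPT):
    # Popen rather than check_call so the AppleScript runs
    # asynchronously while we go on to trigger the Location prompt.
    return subprocess.Popen(build_osascript_cmd(script_path))
```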

One problem (for now): this does indeed send the Enter key, but to the app that has focus at that time. Interestingly enough, the prompt to “Allow” the location tracking does not grab focus, at least consistently enough to trust that this would always work. So what has focus when our script runs? I added this bit of Python code (found on StackExchange) to my script right before and after we tried to authorize the app in the DB.

from AppKit import NSWorkspace
import time

# Poll the frontmost app repeatedly so we can see what actually has
# focus while the Location prompt is on screen
for i in range(100):
    activeAppName = NSWorkspace.sharedWorkspace().activeApplication()['NSApplicationName']
    print activeAppName
    time.sleep(0.1)

To my surprise, iTerm (or Terminal, if it were run there) had focus the whole time, even though visually it appeared that the Location prompt did. Bummer, that’s not going to work. After much Googling, I came across an interesting Apple Developer guide called the Mac Automation Scripting Guide. It talks about a tool in Xcode called the “Accessibility Inspector,” which allows you to see all the attributes of an interface. This sounded promising, because in searching for things like “how to send keystroke to specific app,” I was finding results, but none that worked. It’s made more complicated by the fact that you can’t send a keystroke directly to an app; you have to send it to “System Events”. So perhaps if we had a way to know exactly what window, and exactly what button, we needed to send the Enter key to, we could achieve this. I loaded up Accessibility Inspector while the prompt was in view. Accessibility Inspector has a button that looks like a crosshair or target icon called “Start inspection follow points,” which, when activated, will begin to show the attributes of the view that you click.
Screen Shot 2018-02-06 at 2.46.46 PM.png

Once that’s activated, we can click the “Allow” button, which places the target icon on it and shows all of the attributes associated with the button in the Inspector.
Screen Shot 2018-02-06 at 2.52.04 PM.png

We primarily care about the Hierarchy, because that’s what will tell us where we need to send the “Enter” keystroke. From here, we see that “CoreLocationAgent” is in fact the application (or process) presenting the prompt. We can also see that the button is labeled “Allow.” It took some trial and error, but with that information, we were able to script, in Script Editor, sending the “Enter” key to the Location prompt reliably. It looks like this:
Screen Shot 2018-02-06 at 2.55.10 PM.png

And, it worked! Almost. Upon running this manually the first time, we received the following error: “System Events got an error: Script Editor is not allowed assistive access.”
Screen Shot 2018-02-06 at 2.55.58 PM.png

We had to authorize Script Editor to have some extra control over our machine. We can do this by going into System Preferences –> Security & Privacy –> Privacy –> Accessibility and granting Script Editor access to “control your computer.”
Screen Shot 2018-02-06 at 2.57.51 PM.png

With that done, let’s try again (though we’ll need to come back to that, because that in itself is another manual step that we don’t want/can’t have or it defeats the whole purpose).

Hey hey, success! We’re one step closer! Now we need to conquer what is hopefully the last piece: being able to add the osascript binary to the approved “Accessibility” apps like we did for Script Editor, since that’s how our Python script has to call the AppleScript we wrote (some ugly script-inception going on). We need to find out what happens when we authorize an app to have the “Accessibility” access we granted Script Editor, to see if we can do the same for osascript. Side note: I did try to do this via the GUI just for testing’s sake, but System Preferences showed it as grayed out and wouldn’t let me add it.

Therefore, we were left with the brute-force option: adding it directly to the database where this information is stored, similar to what we did with Location Services. Using the trusty (soon to be dead because it’s 32-bit), we can see that the file that’s changed upon adding or removing an app from the Accessibility prefPane is called TCC.db, under “/Library/Application Support/”. That happens to be a SQLite database, so if we load it up in DB Browser for SQLite, we can peruse some of the data, and we quickly see some app identifiers in the “client” column of the “access” table.
Screen Shot 2018-02-06 at 3.29.39 PM.png

Lo and behold, in there we see the “” entry, so we know we’re in the right place. We also see a “prompt_count” column, which I assume is how Apple tries to track and be sure that indeed, a prompt was shown to the user and approved. So can we fudge this? Maybe just add the entry manually either via the GUI, or command line? Turns out, NOPE! Some of you probably already knew this, or perhaps you guessed it early on, but that file is SIP protected. If we run an ls -O on the file, we can see the restricted attribute is present.

$ ls -laO /Library/Application\ Support/
-rw-r--r-- 1 root wheel restricted 57344 Feb 6 15:28 /Library/Application Support/

What does this mean? Effectively, we’re SOL. Apple was smart, and I’m ok with it. I certainly don’t want applications to be able to just insert themselves into having full control over my system without me knowing about it and giving explicit permission.

But as an admin, this stinks… we’ll have to figure out whether we still want to use AirWatch’s location tracking and, if so, how to go about enabling it. For the time being, though, it ain’t gonna happen on the down low.

I will say, all in all, this was a fun project, and I learned some things, especially about Accessibility Inspector. I can truly see that being incredibly useful knowledge when there are future times we want to automate some tasks that may involve clicks or keystrokes.