Redmond, we have a problem. The "Client Task Sequence" template, when used with Configuration Manager 2012 SP1, does not work well when UDI is enabled and we want to BitLocker our devices.
When we are performing a User Driven Installation, the MDT 2012 Update 1 template makes use of the ConfigMgr "Pre-provision BitLocker" step, which adequately pre-provisions BitLocker on the Operating System drive on a 'used space only' basis. This is a good thing. Later on in the Task Sequence we have an MDT-specific step that uses the ZTIBde.wsf script, which SHOULD configure and enable the protectors – but you may find that this step fails with an exit code of 6767. Tracing this error through the ZTIBde.wsf script leads to a failure in the function that enables the BitLocker protectors. So what is going wrong?
Let us first understand that ZTIBde.wsf was designed primarily for MDT deployments, with ConfigMgr integration added as an afterthought. ON ITS OWN, the ZTIBde.wsf script is capable of pre-provisioning BitLocker in Windows PE AND ALSO, later in the OS phase, configuring and enabling the protectors. It's a great little script. In the MDT Client Task Sequence template, however, ZTIBde.wsf is not used within the Windows PE phase to pre-provision BitLocker; the built-in ConfigMgr function is used instead. The ConfigMgr function will pre-provision BitLocker, but it does not then set the variable that the ZTIBde.wsf pre-provisioning function would – a variable that is needed and consumed later by ZTIBde.wsf when it comes to enabling the BitLocker protectors in the OS phase.
The ZTIBde.wsf script has a section which checks whether BitLocker is enabled on the drive (IsBDE = TRUE) and whether the drive has NOT been pre-provisioned earlier (IsBDEPreProvisioned <> TRUE). This test accommodates the refresh scenario, where the existing suspended protectors of a BitLocker drive can simply be turned back on without having to configure any. In the absence of the IsBDEPreProvisioned variable (or if it does not equal TRUE), this section of code causes the script to skip the configuration of the required protectors (specified in the UDI wizard) and simply try to enable any existing protectors – of which there are none – resulting in the error 6767 spat out by the EnableProtectors function.
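To make the failure path concrete, here is a minimal sketch of the decision described above – in Python rather than the actual VBScript, with function and variable names modelled on the behaviour of ZTIBde.wsf, so treat it as an illustration only:

```python
# Simplified sketch (NOT the real ZTIBde.wsf code) of the protector-enable
# decision taken in the OS phase. Names are modelled on the variables above.

def enable_bitlocker(is_bde, is_bde_preprovisioned, existing_protectors):
    """Return a status string mirroring the script's observed behaviour."""
    if is_bde and not is_bde_preprovisioned:
        # Script assumes a refresh scenario: just re-enable existing protectors.
        if not existing_protectors:
            return "error 6767"  # EnableProtectors fails: nothing to enable
        return "protectors re-enabled"
    # New or flagged-as-pre-provisioned drive: configure the UDI-chosen
    # protectors first, then enable them.
    return "protectors configured and enabled"

# ConfigMgr pre-provisioned the drive (IsBDE = TRUE) but never set
# IsBDEPreProvisioned, so the script takes the refresh path and fails:
print(enable_bitlocker(True, False, []))   # error 6767
# When ZTIBde.wsf itself does the pre-provisioning, the variable is set:
print(enable_bitlocker(True, True, []))    # protectors configured and enabled
```

The sketch shows why the fix options below all revolve around keeping the pre-provision and enable steps in the same "camp".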
So what do we need to do about this? Well, you could amend the script – but that wouldn't be supported by Microsoft, and your modifications could well be wiped out by any future revision (which may or may not address its shortcomings). There are a few (non-exhaustive) alternatives that I could propose:
- If you are using the UDI Wizard AND you want the BitLocker options to come through from it, then you should replace the "Pre-provision BitLocker" step in the task sequence that uses the ConfigMgr function with the ZTIBde.wsf script – the very same script used later on to "Enable BitLocker". This keeps the pre-provisioning process and the Enable BitLocker process solely in the MDT camp. However, if you want the Recovery Key to be backed up to Active Directory, you will need to use a script afterwards to do that, or implement MBAM.
- If you are using the UDI Wizard AND are removing the BitLocker configuration page but still want BitLocker enabled, then replace the "Enable BitLocker" step in the task sequence with the ConfigMgr function – you can specify the protectors used and also escrow the Recovery Key to Active Directory automatically.
- If you are NOT using the UDI Wizard but want to implement BitLocker in your Task Sequence, then stick with the ConfigMgr step for pre-provisioning BitLocker and replace the ZTIBde.wsf method for enabling BitLocker with the ConfigMgr function – for the same reasons as above. In this scenario, you would need to ensure that your partitioning layout is suitable for BitLocker.
In the ConfigMgr world, one must learn patience – in particular when installing hotfixes, service packs and cumulative updates. It is quite common for the installer GUI to complete and leave you under the false impression that your environment is ready to go again, when in fact the installer has triggered additional tasks which the SMS/ConfigMgr component manager needs to handle.
I would certainly recommend using trace32/cmtrace to watch sitecomp.log when you are performing updates to your environment, as this will show you whether the component manager is initiating a re-installation of certain site components following an update. You may see a flurry of activity mentioning the re-installation of components and the SMS_SERVER_BOOTSTRAP service. Once this log has settled back down, you can think about returning your environment to normal usage.
I was working in my Hyper-V lab this morning trying to PXE boot a client VM into a ConfigMgr Task Sequence but somehow things had just stopped working, overnight. SMSPXE.log was showing me this:
[TSMESSAGING] AsyncCallback(): WINHTTP_CALLBACK_STATUS_SECURE_FAILURE Encountered
[TSMESSAGING] : WINHTTP_CALLBACK_STATUS_FLAG_CERT_REV_FAILED is set
sending with winhttp failed; 80072f8f
Failed to get information for MP: https://CON-CM1.contoso.local. 80072f8f.
PXE::DB_InitializeTransport failed; 0x80004005
Unspecified error (Error: 80004005; Source: Windows)
My MPControl.log had also, within minutes, gone from this (working):
>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync succeeded for port 443 with status code 200, text: OK
to this (broken):
>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync failed for port 443 with status code 403, text: Forbidden
So what happened here?
First things first, I wanted to isolate whether this was a problem with the Management Point component or with the PKI setup – so I simply set the Management Point role to run as HTTP only. Within minutes I was seeing a working Management Point in the MPControl.log – so it was certificate related.
I looked on my Windows Server 2008 R2 Certificate Authority and there were no certificate revocations. Maybe the client certificate is a bit screwed up, I thought – so I deleted the Client Authentication certificate from the Personal store on the Management Point and tried to request a new one from the CA, but received a failure stating that the Certificate Revocation Server was unavailable. Weird. A quick visit back to the CA, a stop and restart of the CA service, and the request from the MP went through fine.
I changed the Management Point back to HTTPS and again within a few minutes I was seeing a working Management Point again in the MPControl.log.
Just goes to show that it isn’t always (actually, it isn’t USUALLY) Configuration Manager that is to blame when things aren’t working correctly.
I love manufacturers who stubbornly refuse to conform to industry standards for driver and software deployment. ATI and NVIDIA are two such culprits, who make the installation of drivers for their products using widely used deployment tools a royal pain in the arse. The driver .inf files can easily be extracted from the vendor-supplied software; however, when installed using driver injection and Plug and Play during Windows Setup, they are not 'completely' installed, and if the first user of the system is not an administrator they will receive a prompt for elevation to complete the install. This is unacceptable, guys!
So, we have to work with the vendor-supplied drivers in the format they were provided, using whatever silent/unattended methods the vendor offers. ATI do not make this particularly easy with their Catalyst drivers, as they use an installer technology called 'Monet' – nope, I'd never heard of it either. There also seem to be multiple ways to start the installation routine: Setup.exe, ATISetup.exe and InstallManagerApp.exe – so which do we use?
After several hours of mucking around trying to get one of these to install the drivers during a task sequence, I can proudly put my name to a command line that actually works! Create a standard package that contains the extracted files from the vendor-supplied install files, then create an 'install.cmd' file that contains the following:
"%~dp0Bin64\InstallManagerApp.exe" /UNATTENDED_INSTALL:"%~dp0Packages\Drivers" /AUTOACCEPT_ALL /ON_REBOOT_MESSAGE:NO /FORCE_CLOSE_WHEN_DONE /FORCE_HIDE_FIRST_RUN
Create a program that runs the 'install.cmd' file (Run Hidden, Whether or not a user is logged on, Allow TS Deployment) and add this as an 'Install Package' step to your Task Sequence. You should enable the 'Continue On Error' option on this step, as the ATI installer will exit with a non-zero exit code even if the drivers install successfully.
In the command line above, I am choosing to install only the drivers and not the associated 'crap' that comes with them – but if you want more than just the drivers, amend the /UNATTENDED_INSTALL option and remove the '\Drivers' from the end of the path.
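Outside of a Task Sequence there is no 'Continue On Error' checkbox, so the same forgiveness has to live in your wrapper script. Here's a hedged sketch in Python – the wrapper name is mine, and treating every non-zero exit code as success is an assumption based on the installer behaviour described above (narrow it to known-benign codes if you manage to identify them):

```python
# Hypothetical wrapper reproducing the 'Continue On Error' behaviour in
# script form: run the installer, log its raw exit code, but always report
# success back to the deployment agent.
import subprocess
import sys

def run_ignoring_exit_code(cmd):
    """Run cmd (a list of program + args), report its raw exit code,
    but always return 0 so the deployment is not marked as failed."""
    raw = subprocess.call(cmd)
    if raw != 0:
        print(f"Installer exited with {raw}; treating as success", file=sys.stderr)
    return 0

# Example invocation (paths as per the ATI command line above):
# run_ignoring_exit_code([r"Bin64\InstallManagerApp.exe",
#                         r"/UNATTENDED_INSTALL:Packages\Drivers",
#                         "/AUTOACCEPT_ALL"])
```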
Even though the ConfigMgr 2012 client is supposedly 64bit now, the issue of 64bit file system redirection is still very much a problem during Task Sequence and even regular package/program deployments when we want to copy things to the 'native' "%ProgramFiles%" or "%WinDir%\System32". File system redirection kicks in and we are magically transported to the 32bit "Program Files (x86)" or "Windows\SysWOW64". Boo hiss. I even found a log entry in the client-side execmgr.log which clearly states:
Running "C:\Windows\ccmcache\3\CopyFiles-Temp.cmd" with 32bitLauncher
Why? Why? Why?
In the Task Sequence we can easily get around this problem by using a "Run Command Line" step instead of an Install Package step: we reference our package and command line, and tick the box to "Disable 64-bit file system redirection". This is all well and good, but what about deployments outside of the Task Sequence? There simply isn't an equivalent option, so we need to build it into the script/batch file we are trying to run. My borderline obsessive-compulsiveness dictates that the single solution must 'just work' on both 32bit and 64bit operating systems.
The most elegant way I have found of doing this – one that doesn't involve re-writing sections of your batch file to cater for the various operating environments – takes advantage of the fact that the 'native' 64bit command processor (system32\cmd.exe) can be invoked from within any 32bit command processor running on a 64bit OS. Here's a snippet of code you simply add to the very top of your existing batch files…
IF "%PROCESSOR_ARCHITEW6432%"=="" GOTO native
%SystemRoot%\Sysnative\cmd.exe /c %0 %*
Exit
:native
<your script starts here>
What this does is first detect whether the batch file is being run from within a 32bit command interpreter on a 64bit OS – if it isn't, the code jumps to the ":native" label and your script continues as usual. If we are in a 32bit command interpreter on a 64bit OS, the second line invokes the 'native' 64bit command processor and re-launches this very same batch file. The condition at the top is then evaluated again by the 64bit command processor, which this time jumps straight to ":native" (as we are now running the 64bit cmd.exe on a 64bit OS), and your script happily continues without being subject to redirection. Neat!
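The same detection works from any language, not just batch: on a 64bit OS the PROCESSOR_ARCHITEW6432 variable is only defined inside a 32bit process, and the virtual 'Sysnative' alias reaches the real System32. A sketch of the path selection as a pure function (the environment is passed in as a dict so the logic is testable anywhere; the function name is mine):

```python
import os

def native_cmd(env):
    """Return the path of the non-redirected cmd.exe for this process."""
    windir = env.get("SystemRoot", r"C:\Windows")
    if env.get("PROCESSOR_ARCHITEW6432"):
        # 32bit process on a 64bit OS: System32 is redirected to SysWOW64,
        # so we have to go via the Sysnative alias to reach the real thing.
        return windir + r"\Sysnative\cmd.exe"
    # Native process (32bit OS, or already a 64bit process): System32 is real.
    return windir + r"\System32\cmd.exe"

print(native_cmd(os.environ))
```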
I have found that the MDT 2012 Update 1 Client Task Sequence template is missing a crucial step in the later stages of the task sequence which is needed to restore captured data from the ConfigMgr State Migration Point when working under the REPLACE scenario – and we need to add this back in.
You simply need to add a "Request State Store" step just above the new "Connect to State Store" step (which appears just to authenticate against the UNC path), and add a condition on the step to run only if the Task Sequence Variable "USMTLOCAL" not equals TRUE.
Also add a "Release State Store" step below the "Restore User State" task, again with the same condition as above.
Here's a great little integration gem I found and just had to share, which highlights a good relationship between the Task Sequence "Install Software Updates" step and the Software Center.
As part of a corporate Software Updates process it is commonplace to first advertise your new set of monthly updates to a deployment-testing or pilot set of systems; however, you don't necessarily want to automate or enforce the deployment of these updates. You may want to install them at your own leisure or in a more controlled or selective fashion, which the Software Center is great for – but sadly we don't have any multi-select ability, nor can we queue them up. If you press the 'Install All Required' button in the Software Center, this only applies to 'Required' updates – not updates deployed as 'Available' – so you will see a screen similar to the one below:
So what can we do if we just want to install all remaining/available updates in one go, without clicking on each one and waiting for each install to finish? Enter the Task Sequence!
Simply create a custom Task Sequence with just the step to “Install Software Updates” and deploy this to the collection you use for Software Updates deployment testing / pilot. When this deployment arrives at your system, you can run it and rather nicely it will trigger the installation of all of those Software Updates that you can see in your Software Center – with the download/install/restart status of each being shown right there in front of you.
This is a great little piece of integration between components – something which I find a little lacking between the Application Catalog and the Software Center.
The only downside is that a reboot will be triggered automatically if one or more of the updates requires a restart – you just need to be aware of that. This is why my Task Sequence deployment is named accordingly, to let the user know exactly what they are clicking on.
Advanced Format (AF or 512e) drives are out there, often fitted randomly from one model to the next. I won’t go into the technicalities of what they are all about as Google will tell you this, but what I will tell you is that their presence can slow down the deployment rate on an affected system.
Firstly, if you are not sure whether your system is equipped with an AF drive (DELL include a bright orange note with the system; HP just seem to sneak them in), you can download and run the following tool in the OS or in WindowsPE:
The tool will tell you if an AF drive is fitted and also if the partitions are ‘aligned’.
When we use ConfigMgr and WindowsPE boot images to deploy systems with AF drives, you may notice quite a slowdown, especially in the "Apply Operating System Image" Task Sequence step. There is a patch to be downloaded from Microsoft that should be installed within a fully patched Windows 7 SP1 OS image, AND we must also incorporate this patch into our ConfigMgr boot images:
Once you have obtained the x86 and amd64 versions you can follow my guide below on how to update BOTH of your ConfigMgr boot images. We will do the x86 boot image first and you just need to repeat the process for the amd64 image.
You should have the Windows Automated Installation Kit (WAIK) installed. You can undertake this task on the ConfigMgr server, as it will have the WAIK installed. From your Start Menu, find the Microsoft Windows AIK\Deployment Tools Command Prompt and run it as Administrator.
Create the following structure on a drive of your choice with a good few GB free (where X = YourDriveLetter):
X:\WinPE
X:\WinPE\mount
X:\WinPE\patches
Copy both of the downloaded Windows6.1-KB982018-v3-x64.msu and Windows6.1-KB982018-v3-x86.msu to X:\WinPE\patches. It does not matter that they are together as the patch injection process is clever enough to pick the right one.
Copy the boot.wim file (ignore the boot.xxx12345.wim) from <ConfigMgrInstallDir>\OSD\boot\i386 to X:\WinPE
From the Deployment Tools Command Prompt, run the following commands (replacing X:\ as appropriate):
DISM /Mount-Wim /WimFile:X:\WinPE\boot.wim /MountDir:X:\WinPE\mount /Index:1
DISM /Image:X:\WinPE\mount /Add-Package /PackagePath:X:\WinPE\patches
DISM /Unmount-Wim /MountDir:X:\WinPE\mount /Commit
Rename the existing <ConfigMgrInstallDir>\OSD\boot\i386 .wim file to .old and copy up your replacement boot.wim from X:\WinPE.
From the ConfigMgr Console, right-click the x86 Boot Image and choose Update Distribution Points. This will take our new boot.wim, re-integrate the ConfigMgr components, your modifications and drivers, and re-send it out to the DPs.
If all went successfully, the size increases on the boot.wim files should be roughly as follows:
i386 boot.wim should increase by approx 10MB
x64 boot.wim should increase by approx 12MB
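Before copying the .wim back, a quick sanity check that the injection took is to compare the file sizes against the figures above. A throwaway sketch (the function name and example paths are mine):

```python
import os

def delta_mb(before_path, after_path):
    """Size difference between two files, rounded to whole megabytes."""
    return round((os.path.getsize(after_path) - os.path.getsize(before_path))
                 / (1024 * 1024))

# e.g. delta_mb(r"X:\WinPE\boot.old.wim", r"X:\WinPE\boot.wim")
# should come out at roughly 10 for the i386 image and 12 for the x64 image.
```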
With a patched WindowsPE boot image, the "Apply Operating System Image" Task Sequence step, when run on an AF drive, should speed up considerably. The speed increases I observed on an HP TouchSmart system were as follows:
BEFORE: AF Drive non-patched took 28 minutes for the “Apply Operating System Image” step to complete (inc. download)
AFTER: AF Drive patched took 13 minutes for the “Apply Operating System Image” step to complete (inc. download)
So you can see, the step time was more than halved – there are significant time savings to be made with a properly patched WindowsPE boot image on an AF-drive-equipped system.
Oddly, when an AF-equipped drive is partitioned and formatted using the boot images generated by ConfigMgr 2012, the Dell Alignment Tool states that the partitions are aligned correctly – yet without the patch the disk performance is still poor.
I use a task sequence to perform the initial build of my Windows 7 reference PC, install Microsoft Office 2010 and then capture it manually. I had been doing this for quite some time using ConfigMgr 2007 without any issues; however, I was seeing an error with ConfigMgr 2012 when it tried to install Microsoft Office 2010 as an "Application" (as opposed to a traditional "Package"). The SMSTS.LOG on the failed build would record an 'unspecified error' 0x80004005.
I finally figured out, after a lot of head scratching and forum posting, that the SMSMP switch needs to be used in the “Setup Windows and ConfigMgr” step of your reference build task sequence. This is a very similar issue to what we had/have with ConfigMgr 2007 and Software Updates during OSD.
This problem does not occur when using a Task Sequence to install Applications to a system that has been domain-joined during the Windows Setup phase. This makes sense, as a domain-joined client has no problem locating the Management Point – it can query Active Directory. A WORKGROUP system, on the other hand, needs to be spoon-fed the name of the Management Point – even though it is already communicating with a defined Management Point (as set by the boot media).
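For reference, the extra entry in the Installation properties box of the "Setup Windows and ConfigMgr" step would look something like the following – the FQDN here is the lab Management Point from the logs above, so substitute your own:

```
SMSMP=CON-CM1.contoso.local
```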
I encountered a problem earlier this week provisioning a new ConfigMgr 2012 environment with a State Migration Point (SMP). I am used to just installing the SMP role and pointing it at a drive letter to use and letting it create the folder structure it needs. However, I think there may be an issue with ConfigMgr 2012 with this method as it seems to want to modify the permissions of the folder that you specify, and if this happens to be the drive root then we see some problems.
When attempting to install the SMP role and specifying the drive root as a target, the SMPMGR.LOG records the following errors continually:
Call to HttpSendRequestSync failed for port 80 with status code 500, text: Internal Server Error
Health check request failed, status code is 500, 'Internal Server Error'.
What seems to happen is that the SMP role setup routine modifies the permissions of the target folder and then has insufficient permissions to create the necessary MigrationStatus folder within the SMPSTOREF_12345678$ folder. Oddly, this issue does not occur if you use the root of the drive on which ConfigMgr is installed as the target.
So in summary, you must create a sub-folder (maybe called SMP or SMSSMP) on the drive you intend to use for your SMP role and use that path as the target, never the drive root itself.