Wednesday, September 30, 2015

Deterministic Rebuild of Windows 10 Laptop



Yesterday, I attempted to rebuild a box while verifying SHA1 hashes and signatures at every point, with Secure Boot enabled from the start.

It turns out to be surprisingly difficult to rebuild a Windows 10 machine in a deterministic way. Using the Microsoft-supplied tools, the generated ISO files are unique and cannot be verified out of band.

Here's the method I wound up using.

1. Acquire the official ISO for the Windows 10 version you are installing.

2. Sign in to a Microsoft account and browse to MSDN subscriber downloads: https://msdn.microsoft.com/subscriptions/securedownloads/

Locate the version you're installing and verify that its listed SHA1 hash matches that of the ISO you downloaded. You can view the hashes even if you don't have an MSDN subscription.

3. Copy the ISO file to a USB key (you might need to format it as NTFS) and boot the laptop from a Linux USB stick (I used Kali).

4. Burn the ISO to a DVD using Brasero or another Linux burning tool with verification enabled. This turned out to be the only way this process would work. Using a USB key resulted in the Windows installer failing to find the hard disk, and there was no way to verify that the contents of a Windows installer USB key matched the ISO.

5. Reset the laptop's BIOS to defaults and verify that Secure Boot is enabled.

6. Boot from the DVD and install Windows as normal, without any network connection. Disable all Microsoft telemetry except SmartScreen as you go through the install process.

7. Log in for the first time, attach a network cable, perform a Windows update.

8. Open Edge. Download the Sysinternals Suite from https://technet.microsoft.com/en-us/sysinternals/bb842062 and extract it to system32.

9. As you download and reinstall your applications, verify their integrity using "sigcheck -h -v". Check that they do not have any reported infections on VirusTotal, and perform both Google and Bing searches for the SHA1 hashes. Anything you typically install on a base OS should already be in VirusTotal. If it isn't, and you can't verify the hash using Google or Bing, you may have a problem.
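The out-of-band hash check in step 9 can also be done with a few lines of Python; this is just a sketch, and the installer filename and expected hash passed on the command line are placeholders, not values from this post:

```python
import hashlib
import sys

def sha1_of(path, chunk_size=1 << 20):
    """Compute the SHA1 of a file, reading in chunks to cap memory use."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__" and len(sys.argv) == 3:
    # Usage: python verify.py <installer> <expected-sha1>
    if sha1_of(sys.argv[1]) == sys.argv[2].lower():
        print("OK: hash matches the published value")
    else:
        print("MISMATCH: do not run this installer")
```

This only confirms the file matches a hash found through a separate channel; it says nothing about whether that hash itself is trustworthy, which is why step 9 also cross-checks VirusTotal and two search engines.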

I ran into issues with the following apps:

Chrome 64 bit installer. The version that came from https://www.google.com/chrome/browser/desktop/index.html did not have a verifiable sha1 hash.

Google Drive installer. As above.

PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html. It aggravates me to no end that Simon Tatham refuses to deploy HTTPS with pinning. VirusTotal reported 2 infections in the version of PuTTY currently being distributed from there. These are probably false positives, but nevertheless I downloaded a version of PuttyTray from https://puttytray.goeswhere.com/ and verified the SHA1 and GPG signature. This version reported 0 infections on VirusTotal.

Throughout the process above I've run on the following assumptions:

1. I have to trust the manufacturer of my hardware that it hasn't been backdoored in the UEFI or elsewhere.
2. I have to trust Microsoft, as the provider of my OS, not to install any backdoors and to patch vulnerabilities in a timely fashion.
3. I have to trust that Google hasn't been compromised to the level where it will serve malicious executables.
4. I can't defend myself against nation state adversaries with the resources to plant binaries in a way that won't be detected using the methods above. I rely on the combined efforts of the community, Microsoft, Google, and all the vendors who participate in VirusTotal to offer me some basic level of assurance that I have done all I can to ensure I don't get compromised.

By documenting this process, anyone else can follow it and point out flaws, and next time I do a rebuild I'll be able to do things better.

Beyond the steps above I've also configured AppLocker with the default file hash rules, and my day-to-day use is with an unprivileged account.



Wednesday, September 5, 2012

Exchange Online coexistence with generic SMTP mail

Recently we moved a client from MDaemon to Exchange Online. Because the individual users' setups were complex affairs, with lots of users attached and lots of PSTs, we wanted to move them individually rather than trying to do it all in one hit.

The approach suggested by Microsoft is to forward mail to the .onmicrosoft.com accounts as they are created. However, this doesn't handle return mail from the cloud to the on-premise server.

The way I solved it was this:

1. Create the organisation account customer.onmicrosoft.com in the Exchange cloud, specify it as shared, and add customer.com.
2. Set up the on-premise mail server with an alias domain of smtp.customer.com. As there was already an A record for this, there was no need to change any DNS.
3. Create all accounts in the cloud, specifying that they forward to username@smtp.customer.com. Set customer.com as the default sending address.
4. As accounts are moved to the cloud, set a forward from their account on the on-premise server to username@customer.onmicrosoft.com. Delete the forward from the Exchange cloud account.

In this way we could move individual mailboxes without compromising their ability to email everyone else in the organisation. Once the migration is complete, we can change the MX records to point to the cloud and remove the on-premise server.
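The cloud side of steps 3 and 4 can be sketched with the Exchange management shell's Set-Mailbox cmdlet; the mailbox name here is a placeholder, and the on-premise forward was configured in MDaemon's own account settings rather than via PowerShell:

```powershell
# Step 3, cloud side: while jsmith is still on-premise, have the cloud
# mailbox forward back through the smtp.customer.com alias domain.
Set-Mailbox -Identity "jsmith@customer.com" `
    -ForwardingSmtpAddress "jsmith@smtp.customer.com"

# Step 4, after migrating jsmith: drop the cloud-side forward so mail
# stays in the cloud mailbox. The on-premise account then gets a forward
# to jsmith@customer.onmicrosoft.com (set in MDaemon itself).
Set-Mailbox -Identity "jsmith@customer.com" -ForwardingSmtpAddress $null
```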


Tuesday, February 14, 2012

Mass creating users and Exchange mailboxes from a CSV using PowerShell

During a swing migration to a new domain I had to quickly create a ton of users with their mailboxes to enable importing their PST files.

I found this script somewhere on the net, but it had some bugs, so I cleaned it up.
Note that the first line of the CSV needs labels for each column, like:
fn,ln,dispname,alias,upn

where upn is the user's email address.


Function ReadCSV {
    Param([string]$fileName)
    # Emit one hashtable of user properties per CSV row.
    $users = Import-Csv $fileName
    foreach ($user in $users) {
        $ht = @{
            'givenName'          = $user.fn
            'sn'                 = $user.ln
            'displayName'        = $user.dispname
            'alias'              = $user.alias
            'samAccountName'     = $user.alias
            'userPrincipalName'  = $user.upn
            'database'           = 'Mailbox Database'
            'organizationalUnit' = 'OU=Company,OU=Users,OU=Yoyodyne,DC=yoyodyne,DC=local'
            'name'               = ($user.fn + " " + $user.ln)
        }
        Write-Output $ht
    }
}

Function CreateUser {
    Param($userInfo)
    # Every user gets the same initial password; change it before running.
    $secureString = ConvertTo-SecureString "User!123" -AsPlainText -Force
    New-Mailbox -Name $userInfo['name'] `
        -Alias $userInfo['alias'] -UserPrincipalName $userInfo['userPrincipalName'] `
        -SamAccountName $userInfo['alias'] -Database $userInfo['database'] `
        -FirstName $userInfo['givenName'] -LastName $userInfo['sn'] `
        -OrganizationalUnit $userInfo['organizationalUnit'] `
        -DisplayName $userInfo['displayName'] -Password $secureString -ResetPasswordOnNextLogon $false
}

Function CreateMailbox {
    # Accepts the hashtables emitted by ReadCSV on the pipeline.
    PROCESS {
        CreateUser $_
    }
}

ReadCSV users.csv | CreateMailbox

Tuesday, December 13, 2011

Manually merging snapshots in differencing VHDs using DiskPart

Yesterday I had to move a stopped VM off a Hyper-V server that had run out of disk space. The server had 2 ongoing snapshots dating back to 2009. The snapshots had to be merged before I could restart the server and it wasn't immediately obvious how to do it.

Eventually I rediscovered DiskPart, which is a front end to the Logical Disk Manager in Windows. I'd used it before for special partition trickery on USB keys; in Windows 7 and Server 2008 it includes vdisk (VHD) functionality as well.

To merge the snapshots, first I selected the last snapshot in the chain, then told DiskPart to merge it with its parent. Differencing VHDs store both a relative and an absolute path to their parent, so the disk structure needs to be approximately the same:

DISKPART> select vdisk file=d:\vm\SERVERNAME\Snapshots\3209A29B-8693-45CC-A9E4-444A9CE3B245\3209A29B-8693-45CC-A9E4-444A9CE3B245.avhd
DISKPART> merge vdisk depth=2

I used depth=2 because there were two snapshots in my chain. It took a while for the merge to complete during which the file size on the parent VHD was incrementing, and then a little longer for the disk management service to release its lock on the files, but after that I was able to delete the snapshot files and boot the VHDs in a new VM on the new server.

Monday, July 25, 2011

Updating BIOS from USB using a multi image set

I just had to update an IBM ServeRAID BIOS using the boot floppy set, as there was no ISO provided. There were 4 floppy images.

First I had to merge the 4 images into one, but their contents would never fit in a single floppy image. So I made a custom image using this method:

1. Open Image 1 in WinImage
2. Image -> Change Format
3. select Custom Format
4. Total number of sectors: 11520

This gave me an image with about 6 MB of space. Then I was able to extract the other three images and Inject their contents into the root directory of the first.
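As a sanity check on the sector count in step 4: with the standard 512-byte sector, 11520 works out to exactly four standard floppies' worth of space (my arithmetic, sketched in Python):

```python
SECTOR_SIZE = 512       # bytes per sector in a FAT floppy image
FLOPPY_SECTORS = 2880   # sectors in a standard 1.44 MB floppy
SECTORS = 11520         # the custom total entered in WinImage

print(SECTORS // FLOPPY_SECTORS)   # prints 4 -- exactly four floppies' worth
print(SECTORS * SECTOR_SIZE)       # prints 5898240 bytes, roughly 5.6 MiB
```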

To boot the new image, I extracted memdisk from the syslinux distribution onto my usb key. I then booted from grub2 using the following:

grub2> linux16 /boot/memdisk
grub2> initrd16 /iso/ibm-firmware.ima
grub2> boot

The firmware upgrade then went without a hitch.

Monday, June 13, 2011

BackTrack USB guide updated

BackTrack 5 still (inexplicably) doesn't possess the iso-scan magic necessary to make a multi-boot USB key.

I've updated my guide here.

Tuesday, May 10, 2011

Simple rate limit on Juniper SRX

Here's how to apply a simple rate limit to an interface on JunOS 10.2 (SRX):

root@labsrx# show interfaces ge-0/0/1
unit 0 {
    family inet {
        filter {
            input download-limit;
            output upload-limit;
        }
        dhcp {
            client-identifier ascii labsrx;
        }
    }
}

root@labsrx# show firewall
policer rate-limit {
    filter-specific;
    if-exceeding {
        bandwidth-limit 10m;
        burst-size-limit 1m;
    }
    then discard;
}
filter upload-limit {
    term limit-up {
        from {
            source-address {
                192.168.1.0/24;
            }
        }
        then policer rate-limit;
    }

    term accept_all {
        then accept;
    }
}
filter download-limit {
    term limit-down {
        from {
            destination-address {
                192.168.1.0/24;
            }
        }
        then policer rate-limit;
    }

    term accept_all {
        then accept;
    }
}