The Big Tenant-to-Tenant Migration

As you may know, I worked for the Nordic part of the Thomas Cook Group. I was the O365 admin for a tenant with over 30,000 user accounts, and we ran the Azure AD Connect service for the entire group and had just moved to pass-through authentication with Seamless SSO. It was sometimes a royal pain to work in such a large company, where even a minor change could take weeks to implement and to get approved by everyone involved.

As you may also know, the Thomas Cook Group filed for bankruptcy in October last year, with no advance warning or any indication of what was going to happen next. For our part, we realised that (if the company survived) we would most likely be moving our Nordic business to a new O365 tenant, so we began planning for that. Over the next few months a lot happened. The Nordic part of the group was sold off and started a new company, NLTG, while the old group shut down all parts of its business, except the German part, which was backed by its government and survived (more on that later).

When we got back after the Christmas break we were given a clear order to evacuate the tenant before the end of February. Since we were now a separate company and legal entity we were no longer entitled to share the old tenant, which, even though it makes sense, pretty much lit a torch under our asses to get this done now. We realised it wouldn’t be a pretty or smooth operation; as I recall saying, “this is going to take a sledgehammer, not a scalpel!”. Fortunately I’m very used to sledgehammering my way to results. Yep, thinking back to that SharePoint upgrade that was all over the place!

So there we were, less than 8 weeks to pull off a migration with 3,000 users, 5,500 mailboxes, 10TB of SharePoint data, 8TB of OneDrive data and 12TB of Exchange data. And this is how it went…

Identities: The building block of any good tenant is the identities. When we first planned for the migration, the idea was to have a new on-prem AD that would be fed by… well, that’s irrelevant since there was no time for that. The only way forward was to use our existing on-prem AD. But the problem was that Microsoft doesn’t support syncing your on-prem identities to two tenants. Why? I have no idea – I fully get why you wouldn’t want that in a production environment (since a UPN domain can only be valid in one tenant), but for a migration like this it would have solved a lot of headaches if we were allowed to do it that way. But nope, and we really wanted to keep Microsoft support for this. We also had to retain our e-mail domains, since we’re heavily dependent on the brand, which is almost as Swedish as Ikea, at least in Sweden. That presented us with the first big problem – pre-populating the new tenant with 3,000 user objects so we could start copying the data, and then, when it was time to migrate, playing around with the UPN domains so that the matching would work. The first step was creating the 3,000 users as cloud-only “onmicrosoft” accounts. This was done with PowerShell: export as much info on the users as possible (including UsageLocation and PreferredLanguage, since we’re an org with offices from Thailand to Mexico!) and then use PowerShell to recreate the users as closely as possible. Another step we had to take here was setting up a filter in Azure AD Connect that would only sync a user to one tenant depending on the value of an extension attribute. That way we could make sure no user was synced to both tenants at the same time. It did take a lot of tinkering to get that logic working, but fortunately Microsoft has documented how to do attribute filtering, so thanks for that.
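
For reference, a minimal sketch of that export/recreate step, assuming the MSOnline module and a Connect-MsolService session against the relevant tenant (file names, the temporary UPN suffix and the exact attribute list are just examples – we exported a lot more):

# Run against the OLD tenant: export the basics for every user.
Get-MsolUser -All |
    Select-Object UserPrincipalName, DisplayName, FirstName, LastName, UsageLocation, PreferredLanguage |
    Export-Csv C:\temp\old-tenant-users.csv -NoTypeInformation

# Run against the NEW tenant: recreate them as cloud-only "onmicrosoft" accounts.
Import-Csv C:\temp\old-tenant-users.csv | ForEach-Object {
    $newUpn = "{0}@newtenant.onmicrosoft.com" -f $_.UserPrincipalName.Split("@")[0]
    New-MsolUser -UserPrincipalName $newUpn -DisplayName $_.DisplayName -FirstName $_.FirstName `
        -LastName $_.LastName -UsageLocation $_.UsageLocation -PreferredLanguage $_.PreferredLanguage
}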

Authentication: Remember how I said we’d just gone over to PTA with Seamless SSO for the old tenant? Well, this little thing meant that as long as users were logging in to the old tenant (which we knew the German company would) we couldn’t use that setup for our users, since Seamless SSO is based around a computer object in the AD forest with a Kerberos key that’s tied to the tenant! If we set up Seamless SSO for our new tenant, that would roll the key on the computer object and the old tenant’s users wouldn’t be able to log in anymore! To solve this we did a “quick and dirty” setup of a temporary AD FS for our users, routed per domain. This was a surprisingly easy thing to do in Windows Server 2019, but it was an added “gotcha!” in this entire scenario!
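
For reference (an observation about the setup, not a required step), the computer account in question is AZUREADSSOACC; a quick way to see when its Kerberos key was last set:

# AZUREADSSOACC is the computer account Seamless SSO uses; its password/Kerberos key
# is tied to one tenant, and rolling it from another tenant breaks sign-in for the first.
Import-Module ActiveDirectory
Get-ADComputer AZUREADSSOACC -Properties PasswordLastSet | Select-Object Name, PasswordLastSet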

SharePoint: The first problem with SharePoint was determining which sites were relevant to keep and which weren’t. Our entire SharePoint was well over 20TB, so we had to make sure to only copy over sites we knew were relevant to the Nordic business. But there’s no way of determining that without going through all the underlying permissions and groups to see whether “our” users are working on the site or not. It’s not like you can ask SharePoint to “give me all the sites that any user with the UPN domain @domain.se is working on”. Or maybe there is, I just didn’t have the knowledge to write that PowerShell at the time. Once that was done we used ShareGate to migrate all the SharePoint data. The biggest fear was that it wouldn’t be able to match the old identities with the new ones – but it did! I’m pretty sure it went by DisplayName to match them, but we’re just very, very thankful it worked, because that would have been a mess to sort out otherwise. The biggest issue I had with ShareGate was how unpredictable it was when it came to incremental copies, which we drove through PowerShell. We split it up across 4 different servers with about 80 sites per server. Sometimes it could complete them all in 2 hours, sometimes it took 8 hours for one server, sometimes longer. During the weekend of the actual move it took well over 12 hours to complete, which caused me a bit of unnecessary stress.
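
With hindsight, something along these lines would have answered that question – a rough sketch, assuming the SharePoint Online Management Shell and a Connect-SPOService session, and note that it only catches users added directly to a site, not everyone hiding behind nested AD groups:

# List every site collection that has at least one user with the given UPN suffix.
$suffix = "@domain.se"   # the UPN suffix we cared about
$nordicSites = foreach ($site in Get-SPOSite -Limit All) {
    $users = Get-SPOUser -Site $site.Url -Limit All -ErrorAction SilentlyContinue
    if ($users | Where-Object { $_.LoginName -like "*$suffix" }) { $site.Url }
}
$nordicSites | Out-File C:\temp\sites-to-migrate.txt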

OneDrive: Since we already had a pretty nice master list of the users we would be migrating, it was pretty easy to set up a CSV file mapping “old OneDrive -> new OneDrive” that we then used ShareGate to copy. That went pretty nicely, although there were some instances of data not being copied over, so we had to sort that out after the switch when people were missing a few files. Other than that the issue was the same as above – it was very unpredictable and I had to mess around with the queues on the weekend of the switch. We had one incident of a user’s OneDrive being almost empty, but looking back at the old OneDrive it was empty too. Our theory is that his OneDrive client must have been paused, so we had to send that computer to the lab for data recovery – but that’s not ShareGate’s fault one bit!
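
Building that mapping file is basically just string work against the master list. A minimal sketch (tenant names and CSV columns are assumptions), using the standard OneDrive URL pattern where dots and @ in the UPN become underscores:

# Assumes masterlist.csv has OldUPN and NewUPN columns.
Import-Csv C:\temp\masterlist.csv | ForEach-Object {
    [pscustomobject]@{
        SourceUrl      = "https://oldtenant-my.sharepoint.com/personal/{0}" -f ($_.OldUPN -replace '[.@]', '_')
        DestinationUrl = "https://newtenant-my.sharepoint.com/personal/{0}" -f ($_.NewUPN -replace '[.@]', '_')
    }
} | Export-Csv C:\temp\onedrive-mapping.csv -NoTypeInformation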

Exchange: Oh joy! I was in charge of the Nordic business moving from on-prem to Online 3 years ago, so I wasn’t looking forward to another move at all. We did a quick check around for which tool to use (with our extremely limited budget – our company had gone bankrupt and we were still getting back on our feet!) and ended up with CodeTwo, which was by far the cheapest alternative. But as the saying goes, “you get what you pay for”, and in this instance we paid for software to move data from Mailbox A in Tenant X to Mailbox A in Tenant Y. And it did that job without much of an issue. There were still a lot of things to sort out around the move (like transport rules and conference rooms), but the big issue was just moving all the data. The biggest problem I had with the software was that it didn’t have a CSV import function when moving tenant-to-tenant! When moving on-prem to tenant that wasn’t an issue, but tenant to tenant, well, the only way to enter a mailbox was to actually enter it manually! So we spent days entering 5,500 mailboxes and matching them with their new mailboxes. A simple CSV import would have saved us days of work. My next issue with the software was that once we were up to about 800 mailboxes per server on 7 servers, the UI really slowed down. At the end it was so slow that when you started a queue for an incremental copy the UI would stop responding, and you didn’t even know it was working until it was done and the UI popped back to life.
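
Exporting a matching list from the old tenant is at least straightforward; a sketch of the kind of export that helps with the manual matching (Exchange Online PowerShell, file name assumed):

# Dump every mailbox with its primary address and type, for the "old -> new" matching list.
Get-Mailbox -ResultSize Unlimited |
    Select-Object DisplayName, PrimarySmtpAddress, RecipientTypeDetails |
    Sort-Object DisplayName |
    Export-Csv C:\temp\old-tenant-mailboxes.csv -NoTypeInformation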

Teams: Now, Teams was the most interesting bit. Because Teams is built on so many technologies it was difficult to do a proper Teams migration. No matter how far we looked we just couldn’t find a tool that would migrate a Team with its channels/chats and also take the entire underlying SharePoint site! If you had other document libraries or data on the SharePoint site, that was lost if you migrated the Team. But if you migrated the SharePoint site, you lost the data in Teams that wasn’t in the default document library! So we made the choice to migrate the SharePoint sites, since no one should have posted anything business-critical in a chat in a channel in Teams. Fortunately ShareGate comes with the ability to recreate O365 groups, so all the groups got recreated and we only had to turn the ones that had been Teams back into Teams manually; that was it. But there was a bit of “unexpected behaviour” in ShareGate when it came to legacy sites (migrated from on-prem) that now had an O365 group: it simply wouldn’t recognise them as O365 groups or O365 Group sites and created them as legacy sites in the new tenant regardless. But that was easy enough to handle afterwards.
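
That manual “turn the group back into a Team” step can be scripted too; a small sketch assuming the MicrosoftTeams module and a CSV of the group ids that used to be Teams:

Connect-MicrosoftTeams
# New-Team -GroupId promotes an existing O365 group to a Team instead of creating a new one.
Import-Csv C:\temp\groups-that-were-teams.csv | ForEach-Object {
    New-Team -GroupId $_.GroupId
}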

Licenses: This was another headache, but fortunately not mine! Since our old license agreement with Microsoft was tied to our old company we couldn’t use it. And since our company was brand new we had no credit rating anywhere, so Microsoft couldn’t just hand us 3,000 licenses and hope we’d pay. After a lot of back and forth we managed to get the licenses in place well enough to start the migration and begin copying all that data. But there was still the matter of a support contract with Microsoft. There were a lot of options floating around for different support alternatives, but in the end we agreed on a Premier support deal with Microsoft. Even though the paperwork got sorted and we were told on Friday, January 31st, that everything was done and we now had Premier support with MSFT, it turns out that, like a lot of things in O365, it can sometimes take a day or two for the wheels to turn – and you’ll see how critical this became for us.

Additional headache: One headache we had was that we’re not only running a normal business, we’re also running an airline. And the pilots must be able to check their e-mail for any notices and warnings from the aviation authorities before takeoff. This may include things like “this aircraft model isn’t flight worthy so don’t fly this aircraft model” and “Iran just shot down a civilian aircraft, avoid their airspace”. Things like that are absolutely critical for the pilots to check, so saying “e-mail will be down for a day” is completely unacceptable from that perspective. And we were supposed to retain all the e-mail domains, and a domain can only exist in one tenant at a time. So we had to figure out a way to move their accounts and e-mail domain as quickly as possible to avoid any flight delays because their e-mail isn’t working. (Spoiler – their e-mail was down for 90 minutes.)

The plan: The best plan we came up with was to start an incremental copy of all the SharePoint/OneDrive data first thing on the morning of Saturday, February 1st. Then at about 18:00 CET we’d set automatic forwarding on everyone’s mailbox in the old tenant, forwarding every mail to their new mailbox on the “onmicrosoft” address. That way we were guaranteed no mail would go missing in case of bad timing. Then we’d run an incremental copy of all mailboxes. We had done this in plenty of tests and it only took 2 hours, so we planned to start with the most important domain for our airline at 21:00 CET, then, when that was done, continue with the largest domain we had (about 800 users) and work our way through our list of about 10 domains.
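
The forwarding step is roughly a loop over the master list, something like this (Exchange Online PowerShell against the old tenant; the CSV columns are assumptions):

# Forward everything to the new "onmicrosoft" address; DeliverToMailboxAndForward keeps a copy in the old mailbox.
Import-Csv C:\temp\masterlist.csv | ForEach-Object {
    Set-Mailbox -Identity $_.OldPrimarySmtpAddress `
        -ForwardingSmtpAddress $_.NewOnMicrosoftAddress `
        -DeliverToMailboxAndForward $true
}
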
The switch itself consisted of a lot of steps, since we weren’t allowed to sync an on-prem object to two tenants:

  • The first step was to change the UPN domain of the users on-prem to newtenant.onmicrosoft.com and let that sync to the old tenant. Since that domain didn’t exist in the old tenant, the users were given an oldtenant.onmicrosoft.com UPN domain instead, which was crucial: we knew we would end up having to restore users from the recycle bin, and that would be problematic if they still had their old UPN domain, which would no longer exist in the tenant. (A rough sketch of this kind of on-prem UPN swap is shown after the list.)
  • The second step was removing them from the sync to the old tenant in Azure AD Connect and changing the extension attribute so they would sync to the new tenant instead. This resulted in all users being put into the recycle bin in the old tenant, while in the new tenant everyone matched properly as long as the UPN matched perfectly between on-prem and the new tenant. They were then automatically converted to “synced from on-prem” users in Azure AD.
  • Thirdly, we removed the domain from the old tenant and added it to the new tenant. Even though this is a straightforward process once you’ve made sure no objects are using the domain anymore, I feared this step the most since I’ve previously had a lot of issues removing a domain like this. Then, of course, we had to set that domain to federated so it would use AD FS.
  • Lastly we would change the UPN of the user back to their original UPN on-prem and let that sync to the new tenant which now had the new domain and everything was set.
    When we did this with our test domains (of about 20 users each) the entire process took an hour, so we felt pretty comfortable we’d be done at about 3-4 on Sunday morning and could get some sleep before the users woke up to check their phones, saw the “error signing you in” message and started calling.
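
For reference, a rough sketch of how the on-prem UPN swap in steps 1 and 4 can be scripted (the suffixes are examples), here moving everyone on a given e-mail domain over to the temporary onmicrosoft.com suffix:

Import-Module ActiveDirectory
$oldSuffix  = "@domain.se"                        # the domain being moved this round
$tempSuffix = "@newtenant.onmicrosoft.com"        # temporary suffix during the switch
Get-ADUser -Filter "UserPrincipalName -like '*$oldSuffix'" | ForEach-Object {
    $newUpn = $_.UserPrincipalName.Split("@")[0] + $tempSuffix
    Set-ADUser -Identity $_ -UserPrincipalName $newUpn
}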

But… “no plan of operations extends with any certainty beyond the first contact with the main hostile force“.

How it played out: I woke up early on Saturday (at about 5) to start the incremental copy of all the SharePoint/OneDrive data. Unfortunately ShareGate was a bit unpredictable in its behaviour, so I had to move sites around in the queues to make it before 18:00, but make it I did. Then I ran the PowerShell to set the automatic forwarding and started the incremental copy of the mail. The team (4 engineers, 1 external SME/contractor and the project manager) met up at the office at about 20:00 in the evening for pizza and a last “go/no go” check of everything. At 21:00 I started with our airline domain, and by 22:30 it was all done; every user had the proper UPN, license, login, everything was good to go. And that’s when it started – the operations team in our airline said they couldn’t access their emails in the Outlook app on their phones or computers. We had of course verified that it worked through the O365 portal, so we knew everything worked. After troubleshooting this for about an hour we decided to log a Severity A case with Microsoft (at 23:30); one of us would work on that case and the rest would continue with the other domains. That work came to a halt when one of our largest domains wouldn’t be removed from the old tenant. No user had it in their UPN, no recipient used the domain, nothing. But the domain never got deleted, it was stuck in “pending”. So another Severity A case to Microsoft (at about 00:30), and we proceeded with the next domain. At about 02 in the morning that domain did eventually go away by itself, and we thought everything was good when our airline operations team (whose responsibility it is to keep the planes flying, so I have the utmost respect for them and their challenges!) wanted us to do a rollback and try again at a later date. We spent about an hour arguing with them that a rollback wouldn’t solve this issue and that we didn’t have time to try again next week since we had to evacuate the old tenant. Another argument was that this was a client issue, the mails were accessible through the web, and we could get Microsoft to solve the client issue afterwards. Fortunately we were able to convince them to proceed, but by now it was 03 in the morning, I had been working for 22 hours straight and I had no energy left, so I tried sleeping for a bit. After 2 hours I woke up to cheers because the Outlook clients in our airline had started to work, so the biggest issue we had was solved and we could keep going with the remaining few domains.
At about lunchtime on Sunday we were done with all domains and users and started the clean-up job: on-prem systems no one knew about with EWS configured against the old tenant that no longer worked, and so on. That continued for days.

So where was Microsoft in all this? As I mentioned, our Premier support deal with them got activated the day before the switch. But that hadn’t replicated to all of Microsoft’s systems, so it was a big challenge even to get them to accept a Sev A case from us. We did get two cases registered as Sev A cases during the switch, but they didn’t help us with either of them. The first case was about Outlook clients no longer being able to connect. Many blogs on many sites on the Internet say “when moving to a new tenant this may take a few hours”. In our case we were already several hours in, and when creating new users we were able to connect to them immediately, but not to the ones that had been switched, and we couldn’t see why. This started to resolve itself after about 6 hours. And it wasn’t thanks to Microsoft doing anything on their side, because they called me at about 5:30 on the Sunday morning to say “sorry, but we still haven’t been able to find an engineer to work on this case”. The other case was the domain that wasn’t getting deleted. They called back on that issue as well, after it had resolved itself, to ask us to verify the domain name, because according to what they were seeing the domain was no longer in the old tenant – so they obviously hadn’t done anything on their end in that case either.

Lessons learned:

  • PowerShell and CSV files rule! If we hadn’t had proper master files with all the user data, this would have been a lot more difficult.
  • Switching over 1,000 mailboxes from one tenant to another really can take up to 6 hours before all of Exchange Online knows what hit it and the clients can connect again.
  • Azure AD Connect is very powerful and “smart” in how it matches users.
  • Information, user communication and support are vital! In our case we started informing users right away that this was coming, and we staffed up extra support on the Monday to get the business up and running after the big switch – and that was really needed.
  • You can get away with buying cheaper “off the shelf” products rather than more expensive ones, but expect to have to work around their flaws and shortcomings. Do you want to pay twice the amount for a more expensive solution, or sacrifice a few days of manual work for your staff?
  • Test-test-test and test again just to be sure.

Checking AD FS Federation & Certificate Status

SCENARIO
You’re managing a large O365 tenant with an AD FS service (or several), and the certificates are expiring and need replacing.

PROBLEM
The main problem is that there is no good way of telling AD FS to act only on the domains that are actually federated with it – it will just assume it has them all, which can lead to complications.

SOLUTION
I wrote this little script because I wanted to know:
a) the domains that are federated to this AD FS service,
b) the domains that are NOT federated to this AD FS service, and
c) the domains that haven’t refreshed the signing certificate.
The script, which must be executed on the AD FS server in an elevated PowerShell, will first check the URL of the local AD FS service and then go through every federated domain in your tenant to see which ones match; for those that match it will also check the certificate. That way you know exactly which domains to look at.

It writes everything to the console but also to a handful of text files in the C:\temp directory. And if you feel brave enough, you can uncomment the Update-MsolFederatedDomain command to actually update the federation information.

It also assumes you are already connected to the MSOnline service (Connect-MsolService).

Start-Transcript c:\temp\msolfederation_check_log.txt
# Getting the local AD FS server address:
$stsaddress = ""
$stsaddress = (Get-AdfsEndpoint -AddressPath /adfs/ls/).FullUrl
$stsaddress = $stsaddress -replace "https://","" -replace "/adfs/ls/",""
write-host "The local AD FS address is $stsaddress"
$federateddomains = Get-MsolDomain | where{$_.authentication -eq "Federated"}
foreach($feddomain in $federateddomains)
{
# Clearing the variables
$certmatch = ""
$feddomainname=""
$fedinfo=""
$fedinfosts=""
# Setting the domainname of this domain
$feddomainname=$feddomain.name
if($feddomain.rootdomain)
	{
		write-host -ForegroundColor Yellow "$feddomainname is a subdomain, skipping check"
		$feddomainname >> "C:\temp\FedDomains - Subdomains.txt"
	}
else
	{
	write-host -NoNewline "Checking Federation for domain $feddomainname..."
	# Getting federation information for this domain
	$fedinfo = Get-MsolFederationProperty -domainname $feddomainname -ErrorAction SilentlyContinue
	if($fedinfo)
		{
			# Getting the STS info for this domain that can be in either two of the resulting array
			if($fedinfo.source[0] -eq "Microsoft Office 365") { $fedinfosts = $fedinfo.tokensigningcertificate[0].subject }
			if($fedinfo.source[1] -eq "Microsoft Office 365") { $fedinfosts = $fedinfo.tokensigningcertificate[1].subject }
			# Now we check if the thumbprints match
			if($fedinfo.tokensigningcertificate[0].Thumbprint -eq $fedinfo.tokensigningcertificate[1].Thumbprint) { $certmatch = "1" } else { $certmatch = "" }
			if($fedinfosts -like "*$stsaddress*")
				{
					write-host -NoNewLine " Federated to "
					write-host -NonewLine -foregroundcolor Green "this AD FS service"
					if($certmatch -eq "")
						{
							write-host -ForegroundColor Red " but certificates do not match!!!"
							# You could try to execute the below command to update the Federation information # if you feel safe in this.
							# Update-MsolFederatedDomain -DomainName $feddomainname -SupportMultipleDomain
							$feddomainname >> "C:\temp\FedDomains - ADFS Match - Cert Mismatch.txt"
						}
					else
						{
							write-host -ForegroundColor Green " and certificates do match."
							$feddomainname >> "C:\temp\FedDomains - ADFS Match - Cert Match.txt"
						}
					$feddomainname >> "C:\temp\FedDomains - domains federated to this ADFS.txt"
				}
			else
				{
					write-host -NoNewLine " Federated to "
					write-host -foregroundcolor Yellow "another AD FS instance"
					$feddomainname >> "C:\temp\FedDomains - ADFS Mismatch.txt"
				}
		}
	}
}
Stop-Transcript

New-MSOLUserPrincipalName

SCENARIO
You’re changing the e-mail domain of a user, or even a bunch of users. After that you also need to set their UPNs to reflect the change.

PROBLEM
The problem is that the Azure AD Connect service doesn’t currently support changing the UPN domain of an object that is already synced! So you have to run a PowerShell command to change it in the cloud. And it gets even more complicated, because you can’t change a UPN from one federated domain to another without going via an “unfederated” domain first.

SOLUTION
Enter New-MSOLUserPrincipalName, a function that takes the user with the current UPN ($UserPrincipalName), changes it to a temporary UPN with the domain suffix “@[your tenant].onmicrosoft.com”, and then changes it to the new UPN ($NewUserPrincipalName).

function New-MSOLUserPrincipalName {
  param (
    [string]$UserPrincipalName,
    [string]$NewUserPrincipalName
  )
  # Step 1: move the user to a temporary UPN in the tenant's default onmicrosoft.com
  # domain (replace the placeholder with your own tenant name).
  $TempUPN = "{0}@[your tenantname].onmicrosoft.com" -f $UserPrincipalName.Split("@")[0]
  Set-MsolUserPrincipalName -UserPrincipalName $UserPrincipalName -NewUserPrincipalName $TempUPN | Out-Null
  # Step 2: move the user from the temporary UPN to the final new UPN.
  Set-MsolUserPrincipalName -UserPrincipalName $TempUPN -NewUserPrincipalName $NewUserPrincipalName
  Write-Output -InputObject "Successfully changed UPN from $UserPrincipalName to $NewUserPrincipalName"
}

Thanx to Johan Dahlbom for this one!
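
If you need to run it in bulk, a minimal usage sketch (assuming a CSV with OldUPN and NewUPN columns) could look like this:

Import-Csv C:\temp\upn-changes.csv | ForEach-Object {
    New-MSOLUserPrincipalName -UserPrincipalName $_.OldUPN -NewUserPrincipalName $_.NewUPN
}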

Download PS1 from Dropbox


Upgrading AD FS 2012R2 to 2016

SCENARIO
You have a working ADFS farm running version 3 on Windows 2012R2 and want to upgrade to ADFS 2016 delivered in Windows Server 2016.

PROBLEM
The problem is that, if you ask Microsoft, this is a very straightforward “next-next-finish” process; the only TechNet article I found about it makes it look pretty simple. But that article was written for Windows Internal Database (there is now also one for a SQL cluster backend). You’ll also notice at the bottom that it was written for the Technical Preview of Windows Server 2016, and it assumes you have no AD group policies that may break stuff! So there are still a lot of things that can, and will, go wrong if you follow that procedure.

SOLUTION
There really isn’t one single solution, since there are so many issues you may run into, but I managed to work through them all. Here are my comments on the TechNet article and where things went wrong for me:
2) It’s never shown in this screenshot but it is in the next one – you have to choose to join an existing farm; the default option is creating a new farm, which is a totally different thing!
But even after going through the setup process successfully, after patching and rebooting I got error 1297: “A privilege that the service requires to function properly does not exist in the service account configuration. You may use the Services Microsoft Management Console (MMC) snap-in (services.msc) and the Local Security Settings MMC snap-in (secpol.msc) to view the service configuration and the account configuration.” As it turns out, this is a policy issue with the Windows Server 2016 security baseline that limits who and what can “Log on as a service” and “Generate security audits”. Creating an override policy and adding the service account running the AD FS service solved this issue for me! (Thanks to https://blogs.technet.microsoft.com/pie/2015/09/04/adfs-refuses-to-start-error-1297/)
3) This is actually very important later on – knowing which server is primary and which is not!
4) and 5) These are confirmed as not required if you’re running a SQL cluster backend. However, you still need to check later which server is primary and which is not.
6) This entire PowerShell was just wrong and not accepted at all, at least in my environment! You’re much better off starting the Remote Access Management console and running the wizard from there. This lets you choose the certificate in a dropdown without knowing the thumbprint. But this is where I ran into problems, and lots of them!

The first problem I had when configuring the WAP was connectivity, resulting in the error “An error occurred when attempting to establish a trust relationship with the federation service. Error: Unable to connect to the remote server”. This was first due to a physical firewall, then the local firewall policy settings, and in the end the fact that the service itself was down! So this was basically a lot of network issues, not the biggest thing in the world.
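
Nothing fancy is needed to verify those basics before re-running the wizard; a hedged sketch (the federation service name is a placeholder):

# From the WAP server: can we even reach the federation service on 443?
Test-NetConnection -ComputerName sts.contoso.com -Port 443
# On the internal AD FS server: is the service actually running?
Get-Service adfssrv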

With that out of the way I ran into the next problem, which caused me a lot of headache – “An error occurred when attempting to establish a trust relationship with the federation service. Error: Unauthorized. Verify that the service account has administrative access on the target Federation Server.”! The account that the WAP uses to connect to the internal AD FS server has to be a local user and local admin account on the internal AD FS server (since the WAP server shouldn’t be a member of the same domain as the internal AD FS servers). The problem is that there is a group policy baseline for Windows Server 2016 that denies network logon for all local users (“Deny access to this computer from the network”)! That caused the error, since the account wasn’t allowed to log in through anything but the console. Setting that policy to only “Guest” should be enough here.

After getting that problem solved I got the next error – “An error occurred when attempting to establish a trust relationship with the federation service. Error: Internal Server Error”. Looking at all the logs and events, I couldn’t figure out what the hell was causing this. Well, as it turns out, it was related to steps 4 and 5, which you shouldn’t have done if you’re running a SQL backend! When you point to the internal AD FS service address (the web address sts.xxxxxxx.com) you’re supposed to use a hosts file to control that and point it to the load-balanced IP address. Well, when I did that I always ended up on a server that was NOT the primary computer, and therefore I couldn’t add the WAP! When I changed the hosts file to point directly to the IP of a server that was the primary computer for the farm, it worked! Just remember to change this back, since you don’t want the WAP servers pointing at one specific AD FS server.
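
If you want to script the temporary hosts-file pinning (and the clean-up afterwards), something like this works – the IP and name are placeholders:

$hosts = "$env:windir\System32\drivers\etc\hosts"
# Pin the federation service name to one specific farm node while configuring the WAP...
Add-Content -Path $hosts -Value "10.0.0.11`tsts.contoso.com"
# ...and remember to take the line out again once the trust is established:
# (Get-Content $hosts) | Where-Object { $_ -notmatch 'sts\.contoso\.com' } | Set-Content $hosts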

That is as far as I’ve gotten as the rest of the upgrade involves upgrading the forest and domain schema which I’m really not ready to do.

Bulk Converting Domains To Federated

SCENARIO
You’re the administrator of an Exchange environment with lots of domains registered over the years for whatever reasons, as an example different business units with different e-mail domains. You’ve added them all to the Azure AD and verified them but now you need to tie them to the AD Federation Service (ADFS).

PROBLEM
The problem is that it takes a lot of time to first sort out which domains are verified and then federate them – a very tedious task.

SOLUTION
The solution is to export all your domains into a CSV file (a single column named DomainName, listing the domain names, is fine), then run this script: it will import the CSV file and, for every entry, check whether the domain is verified and, if so, federate it with AD FS. Remember to run this on the AD FS server, and PowerShell needs to be launched as administrator!

#
# Written by : Kristoffer Strom ([email protected])
# Date: 2017-02-08
#
# Let's begin by importing the file. Change the filename "CSV_FILENAME.csv" to whatever you see fit.
$domains = Import-Csv CSV_FILENAME.csv
# And now we iterate through every entry
foreach ($domain in $domains)
{
  # Getting the status of the domain
  $domainstatus = Get-MsolDomain -DomainName $domain.DomainName
  # If it's already federated we just say so and move on to the next one
  if($domainstatus.Authentication -eq "Federated") { Write-Host -ForegroundColor Yellow "$($domain.DomainName) is already federated." }
    # If it's verified we federate it
    ElseIf($domainstatus.Status -eq "Verified") { Convert-MsolDomainToFederated -DomainName $domain.DomainName -SupportMultipleDomain:$true; Write-Host -ForegroundColor Green "$($domain.DomainName) changed to federated" }
    # Or if it's not verified or doesn't exist we write this error
    ElseIf($domainstatus.Status -ne "Verified") { Write-Host -ForegroundColor Red "$($domain.DomainName) is not verified or does not exist in the tenant." }
}
# End of iteration

OPTIONAL
You could replace the import of the CSV file with reading all the UPN suffixes from your domain. If you’ve done your job for a proper O365 migration and made sure all UPNs match the e-mail addresses, then all e-mail domains should exist as UPN suffixes. If you want to do that, replace the line “$domains = Import-Csv CSV_FILENAME.csv” with the two lines below (note that the entries are then plain strings, so also change $domain.DomainName in the script to just $domain):

$ADForest = Get-ADForest
$domains = $ADForest.UPNSuffixes

Another option is to do a Get-MsolDomain and filter on “Verified” domains only. But beware, this will tie all verified domains to your AD FS – be sure you really want that! If you do, replace the “$domains =” statement with this (the returned objects expose .Name rather than .DomainName, so adjust the script accordingly):

$domains = Get-MsolDomain -Status Verified

This script can easily be converted into one that does the initial adding of the domains, but since every domain added gets a verification code back, doing that in bulk is less than ideal.

Download PS1 from Dropbox
