Tips on granular migrations with the Migration Tools for Azure DevOps

As you know, I am a huge fan of Martin Hinshelwood's Migration Tools for Azure DevOps. I've been using them for the past few months, and I've put together a list of common situations you are likely to face.

Throttling - it is going to happen!

You cannot do anything about it - you are hitting a cloud service, so it is inevitable that you are going to get throttled given the irregular shape and sheer volume of the data you are moving. What I can tell you is that if you are migrating an average number of Work Items (in the low thousands, I reckon) you are very likely to hit throttling with the LinkMigrationContext processor, because of the load it generates on the service.

Correct use of the ReflectedWorkItemID field

I experimented a fair bit with it, and the solution is to have it on both ends for the best outcome. Also remember that custom field names in Azure DevOps Services are unique across the organisation, so don't be tempted to create the ReflectedWorkItemID field in a process used by a single project when you will need to re-use it across the board.
Always create a custom process first - to be used as a starting point for migrated projects, carrying that field - and then apply it to the target project with whatever further customisation you need.
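To make that concrete, here is a minimal sketch of the endpoint configuration, assuming the classic JSON layout of the Migration Tools (property names have moved around between versions) and placeholder organisation, project and field reference names:

```json
{
  "Source": {
    "Collection": "https://dev.azure.com/source-org/",
    "Project": "SourceProject",
    "ReflectedWorkItemIDFieldName": "Custom.ReflectedWorkItemId"
  },
  "Target": {
    "Collection": "https://dev.azure.com/target-org/",
    "Project": "TargetProject",
    "ReflectedWorkItemIDFieldName": "Custom.ReflectedWorkItemId"
  }
}
```

The point is that the same field reference name is configured on both ends, so the tool can stamp the source ID on migrated items and find them again on subsequent runs.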

Split your migrations into core and non-core processors

When do you need your users to be away from the source system? When can they start using the target system? These questions are going to pop up sooner rather than later. 
In my opinion, if you are performing a Work Item migration, users can start working once Areas/Iterations, Work Items and relationship links have been migrated. Why? Because unless you have someone who is really into their attachments, those are the staples of the Work Item Tracking pillar of Azure DevOps.
Every project is different, of course. But these are my notes, so they are skewed by my personal experience 😊
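As a sketch of what that split might look like in the configuration file - the processor names below follow the classic v1 configuration classes and are worth double-checking against the version you are running, and the comments are just illustrative:

```json
{
  "Processors": [
    // Core run: enough for users to resume work on the target.
    { "$type": "NodeStructuresMigrationConfig", "Enabled": true },
    { "$type": "WorkItemMigrationConfig", "Enabled": true },
    { "$type": "LinkMigrationConfig", "Enabled": true },
    // Non-core run: flip these to true in a later pass, once people are unblocked.
    { "$type": "AttachementExportMigrationConfig", "Enabled": false },
    { "$type": "AttachementImportMigrationConfig", "Enabled": false }
  ]
}
```

Run the core processors first, hand the target project over to the users, then re-run the tool with the non-core processors enabled.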

Identify what needs to be migrated right now and what can wait

If you have too much data to move and you cannot afford the downtime, you need to change your scope. A feasible approach is to migrate what is currently active first, meaning people can start working right away. Once that is done, you can start batching up all the closed items - remember that the WorkItemMigrationContext uses WIQL behind the scenes to identify what is going to be moved, so scoping each batch is just a matter of adjusting the query.
Doing this makes sure that everything will eventually be migrated, but without the time pressure of user downtime. It all comes down to coordination.
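As a sketch, assuming the QueryBit property of the classic WorkItemMigrationConfig (the exact property name varies by version) and Agile process state names - adjust both to your setup:

```json
{
  "$type": "WorkItemMigrationConfig",
  "Enabled": true,
  // First pass: active items only, so people can resume work quickly.
  "QueryBit": "AND [System.State] NOT IN ('Closed', 'Removed')"
}
```

For the later batches, flip the condition to `AND [System.State] IN ('Closed', 'Removed')` and re-run the tool; the ReflectedWorkItemID field is what lets it recognise items that have already been moved.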