Windows Blog: “Data, insights and listening to improve the customer experience”
Yesterday, Rob Mauceri and Jane Liles published a white paper on the Windows Blog describing how Microsoft uses telemetry to decide whether a patch is ready for deployment:
We approach each release with a straightforward question, “Is this Windows update ready for customers?” This is a question we ask for every build and every update of Windows, and it’s intended to confirm that automated and manual testing has occurred before we evaluate quality via diagnostic data and feedback-based metrics. After a build passes the initial quality gates and is ready for the next stages of evaluation, we measure quality based on the diagnostic data and feedback from our own engineers who aggressively self-host Windows to discover potential problems. We look for stability and improved quality in the data generated from internal testing, and only then do we consider releasing the build to Windows Insiders, after which we review the data again, looking specifically for failures.
In other words, MS looks at the telemetry from its dogfood runs and, if everything looks copacetic, the Insiders get it.
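If you want a concrete picture of what a "go/no-go on telemetry" gate might look like, here's a minimal sketch. To be clear, this is not Microsoft's actual pipeline; the metric names, thresholds, and the helper itself are all hypothetical, pieced together from what the quoted passage describes (diagnostic data from self-hosting engineers, a check for failures before a build gets promoted to the next ring).

```python
from dataclasses import dataclass

# Hypothetical telemetry summary for one build. The fields and thresholds are
# illustrative only; they are not Microsoft's actual metrics or criteria.
@dataclass
class BuildTelemetry:
    build_id: str
    devices_reporting: int    # devices that sent diagnostic data for this build
    crash_rate: float         # crashes per device per day
    rollback_rate: float      # fraction of installs that were rolled back
    blocking_feedback: int    # open feedback reports flagged as blocking

def ready_for_next_ring(t: BuildTelemetry,
                        min_devices: int = 10_000,
                        max_crash_rate: float = 0.02,
                        max_rollback_rate: float = 0.01) -> bool:
    """Return True only if the build clears every gate."""
    if t.devices_reporting < min_devices:
        return False    # not enough data to judge stability yet
    if t.crash_rate > max_crash_rate:
        return False    # stability regression in the field
    if t.rollback_rate > max_rollback_rate:
        return False    # installs are failing and being backed out
    if t.blocking_feedback > 0:
        return False    # known blocking issues still open
    return True

if __name__ == "__main__":
    dogfood = BuildTelemetry("18362.1", devices_reporting=25_000,
                             crash_rate=0.004, rollback_rate=0.002,
                             blocking_feedback=0)
    print("Promote to Insiders?", ready_for_next_ring(dogfood))
```

The catch, of course, is that a gate like this is only as good as the data flowing into it and the thresholds someone chooses, which brings us to the obvious questions.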
I’m not going to snark about it (you folks can do that better than I). It’s obvious that the people involved have advanced tools at their disposal, they’re good at what they do, and they know the statistical analysis cold.
But you have to ask yourself… If the model’s so great, why did Destiny 2 and CoD get hit so badly last week?
Why do we continue to get solid, acknowledged bugs with almost every Windows patch on Patch Tuesday?
And… how on earth did Win10 version 1809 get let out of its cage?