Last month, Microsoft officially launched Windows Phone 7.8, the update for Windows Phone 7.5 devices that adds new Live Tile functionality along with a few fixes and improvements. As is common with many Microsoft releases, the update follows a staggered rollout schedule: a small group of users is offered the update first, and if everything goes well, it is offered more broadly.
That’s all fine unless things don’t go as planned, as is apparently the case with the 7.8 update. At the heart of the problem is a set of issues with Live Tiles, as explained here by Windows Phone Central. It’s commendable that Microsoft has a system in place to keep problems from spreading widely, and that it appears intent on fixing them. However, the intricacies of rolling out an update across a broad range of devices in the wild, compounded by having to deal with a number of different carriers, each with its own requirements and update schedules, can make for quite a mess of mixed messages and bad PR when anything goes wrong.
In the “old days,” it was common to run beta tests with a small but representative set of users, releasing updates to that test group to catch showstopper bugs before the product shipped. That model has gone out of style at Microsoft: it was expensive to set up and maintain, and the quality of feedback was often low, as many beta testers opted in not to rigorously test the software but to get their hands on the latest bits, and perhaps score some free software in the process.
Then along came automatic feedback mechanisms like Microsoft’s Watson, the Customer Experience Improvement Program, and various forms of error reporting, and with them a new philosophy of large-scale testing. Beta testing was out; graduated rollouts coupled with automated error reporting were in. A small slice of the installed base is offered an update, the feedback data is checked for problems, and the rollout continues based on that automated feedback (supplemented by anecdotal and human feedback).
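The gating logic behind a graduated rollout can be sketched in a few lines: advance to a larger slice of the user base only while automated error reports stay below a threshold, and halt otherwise. The stage percentages, threshold, and function name below are purely illustrative assumptions, not a description of Microsoft's actual system.

```python
# Hypothetical sketch of a graduated rollout gate. Each stage offers the
# update to a larger percentage of the installed base; the rollout only
# advances while automated error reports stay under a threshold.
# All names and numbers here are illustrative, not Microsoft's real values.

ERROR_RATE_THRESHOLD = 0.02  # halt if more than 2% of installs report errors
STAGES = [1, 5, 25, 100]     # percent of the user base offered the update


def next_stage(current_pct, reports, installs):
    """Return the next rollout percentage, or None to halt the rollout.

    reports  -- automated error reports received from the current stage
    installs -- devices that have received the update so far
    """
    error_rate = reports / installs if installs else 0.0
    if error_rate > ERROR_RATE_THRESHOLD:
        return None  # pause the rollout; engineers investigate the reports
    for pct in STAGES:
        if pct > current_pct:
            return pct
    return current_pct  # already at full rollout


# A healthy stage advances to the next slice...
print(next_stage(1, reports=3, installs=1000))   # 5
# ...while a spike in error reports halts the rollout.
print(next_stage(5, reports=80, installs=1000))  # None
```

The appeal of this model is visible even in the sketch: the decision to expand is mechanical and data-driven, which is exactly why a bad build can slip through if the early-stage signal is too small or too slow to surface a problem.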
For Microsoft, this new rollout method offered potentially much higher-quality feedback, but it also introduced a new set of problems. Instead of foisting a potentially buggy update on a group of dedicated beta testers who expected less-than-perfect software, the new model requires that paying customers act as guinea pigs. For perhaps the first time, customers who paid full price, and who never knowingly agreed to use less than fully tested software, were being asked to test it for bugs. So when something goes wrong, instead of beta testers excited to offer feedback, Microsoft faces paying customers who are angry and upset that the products they paid for no longer work as expected.
Both approaches have their good and bad sides, but perhaps an at least partial return to the beta testing model is in order. If Microsoft continues to test software through graduated rollouts, the onus is on it to hold itself to a higher standard before rolling out untested software, even to small groups. Paying customers aren’t beta testers, and shouldn’t be treated as such.