@eniko In the late '90s, Microsoft was worried about threats to its desktop monopoly. The biggest came from the web, because people who could replace a Windows app with a web app could then replace Windows with Macs, Linux, BeOS (which was briefly poised to be the next big thing), and so on. Windows is not a great OS. Few people buy Windows because they want Windows; they buy Windows to run Windows apps.
Microsoft undermined Netscape in various ways, including bundling IE4 with Windows and building a load of Windows-only things into IE.
Java was another threat. Microsoft built J++, which was a good implementation of Java (faster than Sun's) but also bundled a load of Windows-only extensions. In particular, it didn't clearly mark those extensions (which Java implementations were required to do), so it was easy to think you were writing a portable Java app when you were actually writing a Windows app. Sun sued Microsoft over this, leaving Microsoft with good VM technology that it couldn't put in products (the lawsuit wasn't settled until a year or so after .NET was released, but the writing was on the wall long before).
At the same time, Intel was telling everyone that Itanium was the future and x86 was going to die. An architecture transition risked undermining the Windows monopoly, because moving to a new architecture would reduce the value of legacy apps (if you're emulating the ISA anyway, you might as well emulate a legacy OS and run your shiny new apps on a different system). Having an architecture-agnostic executable format would have made such a move easier.
Finally, looking as if it could transition to a different architecture gave Microsoft more leverage in talks with Intel. This made it easier to get features that Windows wanted into the x86 architecture (which is part of the reason x86 is so awful now; the Windows team does not have good taste when it comes to proposing architectural features).