In a perfect world, there would be no problem running your new system in these environments. However, inconsistencies do occur. Products from companies as large as Microsoft are tested so widely before commercial release that issues such as machine or product incompatibility are addressed either internally or during the beta test cycle. (Indeed, the various flavors of Windows currently available contain the occasional piece of code that detects that it is running on a specific piece of hardware or with a specific piece of software and makes allowances accordingly.) One such inconsistency can be attributed to executable file versions. For example, different versions of the WINSOCK.DLL file are available from different manufacturers. Only one of them can be in the Windows System or System32 directory at any time, and if it's not the one you're expecting, problems will occur.
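The "first copy wins" behavior described above can be sketched with a short search-path simulation. This is an illustrative sketch in Python, not Windows' actual (and more involved) DLL search order; the function name `find_dll` and the temporary "vendor" directories are hypothetical.

```python
import os
import tempfile

def find_dll(name, search_dirs):
    """Return the path of the first copy of `name` found on the
    search path, mimicking the way the first matching DLL is loaded
    and any later copies are silently ignored."""
    for directory in search_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Two temporary "install" directories, each holding its own
# WINSOCK.DLL stand-in, as if two vendors had shipped one.
with tempfile.TemporaryDirectory() as vendor_a, \
     tempfile.TemporaryDirectory() as vendor_b:
    for d in (vendor_a, vendor_b):
        with open(os.path.join(d, "WINSOCK.DLL"), "w") as f:
            f.write(d)  # mark each copy with its own directory

    # Whichever directory comes first on the path "wins"; the other
    # copy is never consulted, however much your code expects it.
    winner = find_dll("WINSOCK.DLL", [vendor_a, vendor_b])
    assert winner == os.path.join(vendor_a, "WINSOCK.DLL")
```

The point is that the loser is ignored without any error: your application simply gets whichever version happens to be found first.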
Another problem that can arise in some companies, as incredible as it seems, is that key Windows components can be removed from the corporate installation to recover disk space. Many large corporations made a massive investment in PC hardware back when a 486/25 with 4 MB of RAM and a 340 MB hard disk was a good specification. These machines, now upgraded to 16 MB of RAM, might still have the original hard disks installed, so disk space will be at a premium. This is less of a problem nowadays, given the relative cheapness of more powerful machines, so if your organization doesn't suffer from this situation, all is well; it is nevertheless a common problem out there. I am aware of one organization, for example, that issued a list of files that could be "safely" deleted to recover a bit of disk space. Apart from the games and the help files for programs such as Terminal and the Object Packager (ever use that? me neither), the list also included MMSYSTEM.DLL, a key component of the multimedia system. In those days (Windows 3.1), very few users had any multimedia requirements, so the problem went unnoticed for a while. The fix was obviously quite straightforward, but it would still have caused problems. If your attitude is "Well, that's not my problem," you are wrong. You need to be aware of anything that is going to prevent your system from running properly at your company, and if a show-stopping bug is not discovered until after the rollout, you'll be the one who looks bad, no matter whom you try to blame.
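One practical defense against the deleted-component scenario is a preflight check run before (or during) rollout that verifies the files your system depends on are actually present. The sketch below is hypothetical: the `REQUIRED_FILES` list and `missing_components` function are illustrative names, and a real check would also verify versions, not just existence.

```python
import os
import tempfile

# Hypothetical dependency list; on a real Windows 3.1 machine this
# would include MMSYSTEM.DLL among other system components.
REQUIRED_FILES = ["MMSYSTEM.DLL", "WINSOCK.DLL"]

def missing_components(system_dir, required=REQUIRED_FILES):
    """Return the required files that are absent from system_dir."""
    return [name for name in required
            if not os.path.isfile(os.path.join(system_dir, name))]

with tempfile.TemporaryDirectory() as system_dir:
    # Simulate a machine where MMSYSTEM.DLL has been "safely" deleted
    # to recover disk space, but WINSOCK.DLL survives.
    open(os.path.join(system_dir, "WINSOCK.DLL"), "w").close()
    missing = missing_components(system_dir)

print(missing)  # ['MMSYSTEM.DLL']
```

A check like this, run on a representative sample of the corporate desktops, turns a mystery support call into a line item on a report before the rollout begins.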
A Final Word of Caution
And now for the bad news: once you have completed the testing, your application or component will probably still have bugs in it. This is the nature of software development; the true aim of testing is, unfortunately, to reduce the number of bugs to a small enough number that they do not detract from the usefulness and feel-good factor of the product. This includes the absence of "showstopper" bugs; there is still no excuse for shipping something with that degree of imperfection. In running through the testing cycles, you will have reduced the number of apparent bugs to zero; at least, everything should work OK. However, users are going to do things to your system that you would never have imagined, and this will give rise to problems from time to time. They might even trigger the occasional failure that apparently cannot be repeated. It does happen, and the cause is most typically that the pressures (commercial or otherwise) on the project management team to deliver become so strong that the team succumbs and rushes the product out before it's ready. The team then finds that the users come back with complaints about the stability of the product. Sometimes you just can't win.