There’s a real cultural difference between *NIX (Linux, Solaris, etc.) administrators and their Windows counterparts. To Windows administrators, when one application makes some unrelated part of the system go daffy, it’s the application’s fault. To Unix administrators, when one application makes some unrelated part of the system go daffy, it’s either the system’s fault (and considered a serious bug) or the administrator’s fault (and considered a big fat clue that you are doing something dumb or n00bish).
In other words, UNIX applications are expected to be stable enough to withstand hissy fits from other UNIX applications. A word processor or media player should never be able to crash your X11 display (“desktop”), only itself. Likewise, your X11 display should never be able to crash your display manager, only itself.
The mentality is the same as one finds in network applications development: a website should never be able to crash your web browser—and to have one do so is considered a serious problem. Likewise, an incoming request should never be able to crash your web service. On Unix, that mentality is applied to every component of the system (at least by admins).
I think part of this is expectations: Unix admins train themselves to expect the lower-level components to be rock-solid (partly experience, partly propaganda), whereas Windows admins have trained themselves to expect those components’ Windows equivalents not to work—because for around 15 years that wasn’t an unreasonable expectation.
It’s also why Unix admins still sneer at Windows, even though Server 2003 and XP Pro are nearly as stable today as Solaris was a decade ago.