The notion of stochastic stability is used in game-theoretic learning to characterize which joint actions of players occur with high probability in the long run. This paper examines the impact of two types of errors on stochastic stability: i) small unstructured uncertainty in the game parameters and ii) slow time variations of the game parameters. In the first case, we derive a continuity result that bounds the effects of small uncertainties. In the second case, we show that game play tracks the drifting stochastically stable states when the time variations are sufficiently slow. The analysis is carried out in terms of Markov chains and hence applies to a variety of game-theoretic learning rules; we illustrate the approach on the widely studied rule of log-linear learning. Finally, the results are applied, in both simulation and laboratory experiments, to distributed area coverage with mobile robots.
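As an illustrative sketch (not part of the paper's contribution), log-linear learning can be simulated directly: at each step a uniformly random player revises its action, choosing action a with probability proportional to exp(U_i(a, a_-i)/tau), where tau is a temperature parameter. The two-action coordination game, payoffs, temperature, and step count below are assumptions chosen for illustration only.

```python
import math
import random

def log_linear_learning(utility, n_players, n_actions, tau=0.1, steps=50000, seed=0):
    """Simulate log-linear learning: at each step one uniformly random player
    revises, picking action a with probability proportional to
    exp(utility(i, (a, a_-i)) / tau)."""
    rng = random.Random(seed)
    joint = [rng.randrange(n_actions) for _ in range(n_players)]
    visits = {}  # empirical counts of visited joint actions
    for _ in range(steps):
        i = rng.randrange(n_players)
        weights = []
        for a in range(n_actions):
            trial = list(joint)
            trial[i] = a  # payoff from unilaterally switching to action a
            weights.append(math.exp(utility(i, trial) / tau))
        joint[i] = rng.choices(range(n_actions), weights=weights)[0]
        key = tuple(joint)
        visits[key] = visits.get(key, 0) + 1
    return visits

# Hypothetical 2-player coordination (potential) game: both players earn 1.0
# by coordinating on action 0, 0.6 by coordinating on action 1, else 0.
def coord_utility(i, joint):
    if joint[0] == joint[1]:
        return 1.0 if joint[0] == 0 else 0.6
    return 0.0

visits = log_linear_learning(coord_utility, n_players=2, n_actions=2)
most_visited = max(visits, key=visits.get)
print(most_visited)
```

At low temperature the empirical distribution concentrates on the stochastically stable state, here the potential maximizer where both players coordinate on action 0; raising tau spreads play across the other joint actions.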