We present a modification to evolutionary on-line learning of cooperative behavior, based on a special action ``learn'', that improves the performance of a new agent replacing, in an experienced team, an old agent with somewhat different abilities. The general idea is to make use of the strategy of the old agent, obtained either directly or from the model the other team members built of it, as a seed strategy that focuses the on-line learning process. This way, the flexibility of on-line learning is retained, while the new agent is much less prone to ``beginner'' mistakes that may prevent the team goal from being achieved. Experiments with rather different variants of the pursuit game show that our method allows new agents to overcome the difference in abilities rather quickly, so that team performance is much better than when the new agent starts learning from scratch.
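The seeding idea can be illustrated with a minimal toy sketch: a generational evolutionary loop whose initial population is built from mutated copies of the old agent's strategy rather than from random strategies. Everything here is hypothetical (strategies as weight vectors, the `fitness` function, the population size and mutation rate); it is not the paper's actual setup, only an illustration of population seeding.

```python
import random

# Hypothetical target behavior the team implicitly requires (not part of the paper).
TARGET = [0.8, 0.1, 0.6, 0.3]

def fitness(strategy):
    # Higher is better: negative squared error to the target behavior.
    return -sum((s - t) ** 2 for s, t in zip(strategy, TARGET))

def mutate(strategy, rate=0.1):
    # Gaussian perturbation of each weight.
    return [s + random.gauss(0, rate) for s in strategy]

def evolve(population, generations=5):
    # Simple truncation selection: keep the best half, refill with mutants.
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

random.seed(0)
size = 20

# Learning from scratch: random initial strategies.
scratch = [[random.random() for _ in range(4)] for _ in range(size)]

# Seeded learning: mutated copies of the old agent's (slightly off) strategy.
old_strategy = [0.7, 0.2, 0.5, 0.4]
seeded = [mutate(old_strategy) for _ in range(size)]

best_scratch = evolve(scratch)
best_seeded = evolve(seeded)
```

Because selection keeps the best individuals each generation, the seeded run starts in a good region of strategy space and retains the flexibility to adapt the inherited strategy to the new agent's different abilities.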