Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources

¹University of Helsinki   ²Technical University of Darmstadt

*Corresponding author: shaoxiong.ji@tu-darmstadt.de

Abstract

Large Language Models (LLMs) exhibit significant disparities in performance across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, although the relative effectiveness of monolingual, bilingual, and code-augmented data strategies remains unclear. This study systematically evaluates 36 CPT configurations involving three multilingual base models and 30+ languages categorized as altruistic, selfish, and stagnant, spanning various resource levels. Our findings reveal three major insights: (1) Bilingual CPT improves multilingual classification but often causes language-mixing issues during generation. (2) Including programming code data during CPT consistently enhances multilingual classification accuracy, particularly benefiting low-resource languages, but introduces a trade-off by slightly degrading generation quality. (3) Contrary to prior work, a language's impact on cross-lingual transfer often deviates substantially from its assigned classification: languages classified as altruistic often negatively affect related languages, selfish languages show conditional and configuration-dependent behavior, and stagnant languages demonstrate surprising adaptability under certain CPT conditions. These nuanced interactions emphasize the complexity of multilingual representation learning and underscore the importance of systematic studies on generalizable language classification to inform future multilingual CPT strategies.
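To make the data-mixing setup concrete, here is a minimal sketch of interleaving monolingual, bilingual, and code data into a single CPT stream with the Hugging Face `datasets` library. The example texts, language pair, and mixing proportions are illustrative assumptions, not the corpora or ratios used in the paper.

```python
# Minimal sketch of the data-mixing idea behind bilingual + code CPT.
# The texts, language pair, and mixing proportions below are placeholders,
# not the corpora or ratios used in the paper.
from datasets import Dataset, interleave_datasets

mono = Dataset.from_dict({"text": ["Tämä on yksikielinen esimerkkilause."]})   # monolingual data
bi   = Dataset.from_dict({"text": ["This is a sentence.\tTämä on lause."]})    # bilingual (translation pair)
code = Dataset.from_dict({"text": ["def add(a, b):\n    return a + b"]})       # programming code

# Interleave the three sources into one CPT stream with a fixed sampling ratio.
cpt_mix = interleave_datasets(
    [mono, bi, code],
    probabilities=[0.5, 0.3, 0.2],   # placeholder mixing proportions
    seed=42,
    stopping_strategy="all_exhausted",
)

for example in cpt_mix:
    print(example["text"][:60])
```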

| πŸ€— Model Code | πŸ“ Model Description |
|---------------|----------------------|
| πŸ€— L3-Mono-Alt | πŸ¦™ Llama-3.1-8B CPT on monolingual data for altruistic languages |
| πŸ€— L3-Mono-Sel | πŸ¦™ Llama-3.1-8B CPT on monolingual data for selfish languages |
| πŸ€— L3-Mono-Stag | πŸ¦™ Llama-3.1-8B CPT on monolingual data for stagnant languages |
| πŸ€— L3-Bi-Alt | πŸ¦™ Llama-3.1-8B CPT on bilingual data for altruistic languages |
| πŸ€— L3-Bi-Sel | πŸ¦™ Llama-3.1-8B CPT on bilingual data for selfish languages |
| πŸ€— L3-Bi-Stag | πŸ¦™ Llama-3.1-8B CPT on bilingual data for stagnant languages |
| πŸ€— L3-Mono-Code-Alt | πŸ¦™ Llama-3.1-8B CPT on monolingual+code data for altruistic languages |
| πŸ€— L3-Mono-Code-Sel | πŸ¦™ Llama-3.1-8B CPT on monolingual+code data for selfish languages |
| πŸ€— L3-Mono-Code-Stag | πŸ¦™ Llama-3.1-8B CPT on monolingual+code data for stagnant languages |
| πŸ€— L3-Bi-Code-Alt | πŸ¦™ Llama-3.1-8B CPT on bilingual+code data for altruistic languages |
| πŸ€— L3-Bi-Code-Sel | πŸ¦™ Llama-3.1-8B CPT on bilingual+code data for selfish languages |
| πŸ€— L3-Bi-Code-Stag | πŸ¦™ Llama-3.1-8B CPT on bilingual+code data for stagnant languages |
| πŸ€— L2-Mono-Alt | πŸ¦™ Llama-2-7B CPT on monolingual data for altruistic languages |
| πŸ€— L2-Mono-Sel | πŸ¦™ Llama-2-7B CPT on monolingual data for selfish languages |
| πŸ€— L2-Mono-Stag | πŸ¦™ Llama-2-7B CPT on monolingual data for stagnant languages |
| πŸ€— L2-Bi-Alt | πŸ¦™ Llama-2-7B CPT on bilingual data for altruistic languages |
| πŸ€— L2-Bi-Sel | πŸ¦™ Llama-2-7B CPT on bilingual data for selfish languages |
| πŸ€— L2-Bi-Stag | πŸ¦™ Llama-2-7B CPT on bilingual data for stagnant languages |
| πŸ€— L2-Mono-Code-Alt | πŸ¦™ Llama-2-7B CPT on monolingual+code data for altruistic languages |
| πŸ€— L2-Mono-Code-Sel | πŸ¦™ Llama-2-7B CPT on monolingual+code data for selfish languages |
| πŸ€— L2-Mono-Code-Stag | πŸ¦™ Llama-2-7B CPT on monolingual+code data for stagnant languages |
| πŸ€— L2-Bi-Code-Alt | πŸ¦™ Llama-2-7B CPT on bilingual+code data for altruistic languages |
| πŸ€— L2-Bi-Code-Sel | πŸ¦™ Llama-2-7B CPT on bilingual+code data for selfish languages |
| πŸ€— L2-Bi-Code-Stag | πŸ¦™ Llama-2-7B CPT on bilingual+code data for stagnant languages |
| πŸ€— V7-Mono-Alt | Viking-7B CPT on monolingual data for altruistic languages |
| πŸ€— V7-Mono-Sel | Viking-7B CPT on monolingual data for selfish languages |
| πŸ€— V7-Mono-Stag | Viking-7B CPT on monolingual data for stagnant languages |
| πŸ€— V7-Bi-Alt | Viking-7B CPT on bilingual data for altruistic languages |
| πŸ€— V7-Bi-Sel | Viking-7B CPT on bilingual data for selfish languages |
| πŸ€— V7-Bi-Stag | Viking-7B CPT on bilingual data for stagnant languages |
| πŸ€— V7-Mono-Code-Alt | Viking-7B CPT on monolingual+code data for altruistic languages |
| πŸ€— V7-Mono-Code-Sel | Viking-7B CPT on monolingual+code data for selfish languages |
| πŸ€— V7-Mono-Code-Stag | Viking-7B CPT on monolingual+code data for stagnant languages |
| πŸ€— V7-Bi-Code-Alt | Viking-7B CPT on bilingual+code data for altruistic languages |
| πŸ€— V7-Bi-Code-Sel | Viking-7B CPT on bilingual+code data for selfish languages |
| πŸ€— V7-Bi-Code-Stag | Viking-7B CPT on bilingual+code data for stagnant languages |
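A minimal usage sketch for one of the released checkpoints is below. The Hugging Face repo ID is a placeholder assumption; substitute the actual ID from the linked πŸ€— model card above.

```python
# Minimal sketch: load one of the CPT checkpoints with Transformers.
# "ORG/L3-Mono-Code-Alt" is a placeholder repo ID; use the actual ID
# from the linked πŸ€— model card above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ORG/L3-Mono-Code-Alt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Translate to Finnish: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```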
| πŸ€— Dataset Name | πŸ“ Description |
|-----------------|----------------|
| πŸ€— Stagnant | Stagnant language data. |
| πŸ€— Selfish | Selfish language data. |
| πŸ€— Altruistic | Altruistic language data. |
| πŸ€— Code | Programming code data. |
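The CPT corpora can likewise be streamed with the `datasets` library; again, the repo ID below is a placeholder for the linked πŸ€— dataset.

```python
# Minimal sketch: stream one of the CPT corpora with πŸ€— Datasets.
# "ORG/stagnant" is a placeholder repo ID; use the actual ID from the
# linked πŸ€— dataset card above.
from datasets import load_dataset

dataset = load_dataset("ORG/stagnant", split="train", streaming=True)
for example in dataset.take(3):
    print(example)
```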

BibTeX


@article{MixCPT,
    title={Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources}, 
    author={Zihao Li and Shaoxiong Ji and Hengyu Luo and JΓΆrg Tiedemann},
    year={2025},
    journal={arXiv preprint arXiv:2504.04152},
    url={https://arxiv.org/abs/2504.04152}, 
}