On the Impact of Cross-Domain Data on German Language Models

Quick Facts

  • Additional Authors

    A. Dada, A. Chen, C. Peng, K. E. Smith, A. Idrissi Yaghir, C. M. Seibold, J. Li, D. Truhn, J. Egger, J. Bian, J. Kleesiek, Y. Wu

  • Publication

    • 2023
    • Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
  • Organizational Unit

  • Subject Areas

    • Applied Computer Science
  • Research Focus Areas

    • Medical Informatics (MI)
  • Format

Conference Paper

Citation

A. Dada, A. Chen, C. Peng, K. E. Smith, A. Idrissi Yaghir, C. M. Seibold, J. Li, C. M. Friedrich, D. Truhn, J. Egger, J. Bian, J. Kleesiek, and Y. Wu, “On the Impact of Cross-Domain Data on German Language Models,” in Findings of the Association for Computational Linguistics: EMNLP 2023, 2023, pp. 13801–13813 [Online]. Available: https://aclanthology.org/2023.findings-emnlp.922/

Abstract

Traditionally, large language models have been trained on either general web crawls or domain-specific data. However, recent successes of generative large language models have shed light on the benefits of cross-domain datasets. To examine the significance of prioritizing data diversity over quality, we present a German dataset comprising texts from five domains, along with another dataset aimed at containing high-quality data. By training a series of models ranging between 122M and 750M parameters on both datasets, we conduct a comprehensive benchmark on multiple downstream tasks. Our findings demonstrate that the models trained on the cross-domain dataset outperform those trained on quality data alone, leading to improvements of up to 4.45% over the previous state of the art.
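
As a rough illustration of the kind of comparison the abstract describes, the sketch below contrasts two pretrained German causal language models by perplexity on a held-out German sentence, using the Hugging Face transformers library. This is a minimal sketch, not the authors' evaluation setup: the model identifiers are placeholders, not the checkpoints released with the paper, and the paper's actual benchmark covers multiple downstream tasks rather than perplexity alone.

    # Minimal sketch (not the authors' code): compare two pretrained German
    # causal LMs -- e.g. one trained on cross-domain data, one on
    # quality-filtered data -- by perplexity on a held-out German sentence.
    # The model names below are placeholders for illustration only.
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def perplexity(model_name: str, text: str) -> float:
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        model.eval()
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing labels makes the model return the mean token cross-entropy.
            out = model(**enc, labels=enc["input_ids"])
        return math.exp(out.loss.item())

    sample = "Große Sprachmodelle werden traditionell auf Webdaten trainiert."
    for name in ("org/german-lm-crossdomain", "org/german-lm-quality"):  # placeholders
        print(name, perplexity(name, sample))

Lower perplexity on representative held-out text is only a proxy; the paper's claim rests on downstream-task performance, so a faithful reproduction would fine-tune and evaluate on those benchmarks instead.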
