New publication: "Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review"

Published on 4 September 2024, updated on 9 October 2024

Publication by Laurence Dierickx

Abstract
The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Between the excitement over the availability of AI systems that no longer require programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. As LLMs pose ethical challenges, how can professional fact-checking mitigate the risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three key complementary mitigation strategies related to education, ethics and professional practice.

Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review | SpringerLink
Date of publication
31 August 2024