The dark side of LLM-powered chatbots: Misinformation, biases, content moderation challenges in political information retrieval
2025 (English). In: Selected Papers of Internet Research, SPIR, ISSN 2162-3317, Vol. 2024, no. AoIR2024. Article in journal (Refereed). Published.
Abstract [en]
This study investigates the impact of Large Language Model (LLM)-based chatbots, specifically in the context of political information retrieval, using the 2024 Taiwan presidential election as a case study. With the rapid integration of LLMs into search engines like Google and Microsoft Bing, concerns about information quality, algorithmic gatekeeping, biases, and content moderation have emerged. This research aims to (1) assess the alignment of AI chatbot responses with factual political information, (2) examine the adherence of chatbots to algorithmic norms and impartiality ideals, (3) investigate the factuality and transparency of chatbot-sourced synopses, and (4) explore the universality of chatbot gatekeeping across different languages within the same geopolitical context. Adopting a case study methodology and prompting method, the study analyzes responses from Microsoft’s LLM-powered search engine chatbot, Copilot, in five languages (English, Traditional Chinese, Simplified Chinese, German, Swedish). The findings reveal significant discrepancies in content accuracy, source citation, and response behavior across languages. Notably, Copilot demonstrated a higher rate of factual errors in Traditional Chinese while performing better in Simplified Chinese. The study also highlights problematic referencing behaviors and a tendency to prioritize certain types of sources, such as Wikipedia, over legitimate news outlets. These results underscore the need for enhanced transparency, thoughtful design, and vigilant content moderation in AI technologies, especially during politically sensitive events. Addressing these issues is crucial for ensuring high-quality information delivery and maintaining algorithmic accountability in the evolving landscape of AI-driven communication platforms.
Place, publisher, year, edition, pages
2025. Vol. 2024, no. AoIR2024
Keywords [en]
Algorithmic gatekeeping, comparative studies, algorithm auditing, generative information retrieval
National Category
Media and Communication Studies
Research subject
Media and Communication Studies
Identifiers
URN: urn:nbn:se:kau:diva-103165
DOI: 10.5210/spir.v2024i0.13977
OAI: oai:DiVA.org:kau-103165
DiVA, id: diva2:1937762
Conference
AoIR2024: The 25th Annual Conference of the Association of Internet Researchers. Sheffield, UK. 30 Oct - 2 Nov 2024.
Note
Selected Papers of #AoIR2024: The 25th Annual Conference of the Association of Internet Researchers. Sheffield, UK, 30 Oct - 2 Nov 2024.
2025-02-14 / 2025-02-14 / 2025-10-16. Bibliographically approved.