How generative artificial intelligence portrays science: Interviewing ChatGPT from the perspective of different audience segments.

Publication date: Sep 29, 2024

Generative artificial intelligence in general, and ChatGPT in particular, have risen in importance. ChatGPT is widely known and increasingly used as an information source on a range of topics, including science. It is therefore relevant to examine how ChatGPT portrays science and science-related issues, yet research on this question is lacking. Hence, we simulate “interviews” with ChatGPT and reconstruct how it presents science, science communication, scientific misbehavior, and controversial scientific issues. Combining qualitative and quantitative content analysis, we find that ChatGPT generally portrays science largely as the STEM disciplines, in a positivist-empiricist way, and in a positive light. When comparing ChatGPT’s responses across different simulated user profiles and between the GPT-3.5 and GPT-4 versions, we find similarities in substance: the scientific consensus on questions such as climate change, COVID-19 vaccinations, or astrology is consistently conveyed. Beyond these similarities, however, pronounced differences emerge in the personalization of responses to different user profiles and between GPT-3.5 and GPT-4.
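The study does not publish its interview scripts, so the following is only a rough, illustrative sketch of how such simulated “interviews” could be posed to GPT-3.5 and GPT-4 under different user profiles via the OpenAI API. The profile wording, the example question, and the model identifiers are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' code): ask the same science-related
# question of GPT-3.5 and GPT-4 while varying a simulated user profile, mirroring
# the "interview" design described in the abstract. Profiles and question are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profiles = {
    "science-enthusiast": "You are answering a user who follows science news closely.",
    "science-skeptic": "You are answering a user who distrusts scientific institutions.",
}
question = "Is climate change caused by humans?"

for model in ("gpt-3.5-turbo", "gpt-4"):
    for label, persona in profiles.items():
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": question},
            ],
        )
        # Print the first part of each answer for a side-by-side comparison.
        print(model, label, response.choices[0].message.content[:200])
```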

Concepts: Covid, Interviews, Misbehavior, Science, Vaccinations

Keywords: generative artificial intelligence, human–machine communication, large language models, representations of science, science communication, segmentation analysis, talking with machines

Semantics

Type    | Source | Name
disease | MESH   | information source
disease | MESH   | COVID-19

Original Article
