The advent of new technologies has always spurred questions about changes in journalism – its content, its means of production, and its consumption. A quite recent development in the realm of digital journalism is software-generated content, i.e. automatically produced content. Companies such as Automated Insights offer a service that, in the company's own words, “humanizes big data sets by spotting patterns, trends and key insights and describing those findings in plain English that is indistinguishable from that produced by a human writer” (Automated Insights, 2012).
This paper seeks to investigate how readers perceive software-generated content in relation to similar content written by a journalist. This is investigated through the following empirical research questions:
RQ1 – How is software-generated content perceived by readers with regard to overall quality and credibility?
RQ2 – Is software-generated content discernible from similar content written by human journalists?
The study utilizes an experimental methodology in which respondents were presented with news articles that were either written by a journalist or generated by software. The respondents were then asked to answer questions about how they perceived the article: its overall quality, credibility, objectivity, and so on. The paper presents the results of a first small-scale study. They indicate that software-generated content is perceived as, for example, descriptive, boring, and objective, but not necessarily discernible from content written by journalists.
The paper discusses the results of the study and their implications for journalism practice.
Keywords: computational journalism, algorithmic journalism, software-generated content, automated content