The Internet Archive and the Socio-Technical Construction of Historical Facts
Slides from an open lecture hosted by the Centre for Internet Studies, Aarhus University, 31 May 2018.
For the full paper, please refer to Anat Ben-David & Adam Amram (2018) The Internet Archive and the socio-technical construction of historical facts, Internet Histories, 2:1-2, 179-201, DOI: 10.1080/24701475.2018.1455412
Abstract:
After years of stabilisation in the reputation of the web as a reliable source of knowledge, recent events surrounding the 2016 presidential election in the United States have brought with them new questions about the epistemology and ontology of online materials: how do we know whether to trust an online source? What tools do we have to distinguish fact from fake? What are the knowledge processes behind the generation of what we see on the screen? Among the various knowledge devices of the World Wide Web, the Internet Archive's Wayback Machine (IAWM) is considered one of the last reliable non-commercial initiatives, committed to providing universal access to archived snapshots of historical websites as they were captured in real time.
Yet what are the epistemic processes behind the generation of archived snapshots as facts? This talk aims to strengthen the ontological status of archived websites as evidence by debunking the view of the Wayback Machine as a monolithic device. It argues that rather than resulting from an arbitrary capturing of snapshots by bots and crawlers, historical knowledge on the Wayback Machine is generated by an entangled and iterative system composed of proactive human contributions, routinely operated crawls and a reification of external, crowd-sourced knowledge devices. Together, these turn the IAWM into a repository whose knowledge of the past is potentially surplus: it harbours information that was unknown to each of the contributing actors at the time and place of archiving.
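For readers who want to inspect archived snapshots themselves, the Internet Archive exposes a public availability API that returns the capture of a URL closest to a requested date. The following is a minimal sketch, not drawn from the paper; it uses Python with the requests library, and the URL and timestamp are purely illustrative.

```python
import requests

# Minimal sketch: query the Internet Archive's public availability API for
# the archived snapshot of a URL closest to a timestamp (YYYYMMDDhhmmss).
def closest_snapshot(url, timestamp):
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    resp.raise_for_status()
    # The nearest capture is nested under archived_snapshots.closest.
    return resp.json().get("archived_snapshots", {}).get("closest")

# Illustrative example: a capture of example.com near the 2016 US election.
snap = closest_snapshot("example.com", "20161108")
if snap:
    print(snap["timestamp"], snap["url"])
```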