Visualization in the
Age of Computerization
Digitalization and computerization are now pervasive in science. This has
deep consequences for our understanding of scientific knowledge and of the
scientific process, and challenges longstanding assumptions and traditional
frameworks of thinking of scientific knowledge. Digital media and compu-
tational processes challenge our conception of the way in which perception
and cognition work in science, of the objectivity of science, and the nature
of scientific objects. They bring about new relationships between science,
art and other visual media, and new ways of practicing science and orga-
nizing scientific work, especially as new visual media are being adopted by
science studies scholars in their own practice. This volume reflects on how
scientists use images in the computerization age, and how digital technolo-
gies are affecting the study of science.
Annamaria Carusi is Associate Professor in Philosophy of Medical Science
and Technology at the University of Copenhagen.
Aud Sissel Hoel is Associate Professor in Visual Communication at the
Norwegian University of Science and Technology.
Timothy Webmoor is Assistant Professor adjunct in the Department of
Anthropology, University of Colorado, Boulder.
Steve Woolgar is Chair of Marketing and Head of Science and Technology
Studies at Said Business School, University of Oxford.
Routledge Studies in Science, Technology and Society
1 Science and the Media
Alternative Routes in Scientific
Communication
Massimiano Bucchi
2 Animals, Disease and Human
Society
Human-Animal Relations and the
Rise of Veterinary Medicine
Joanna Swabe
3 Transnational Environmental
Policy
The Ozone Layer
Reiner Grundmann
4 Biology and Political Science
Robert H. Blank and Samuel M.
Hines, Jr.
5 Technoculture and Critical
Theory
In the Service of the Machine?
Simon Cooper
6 Biomedicine as Culture
Instrumental Practices,
Technoscientific Knowledge, and
New Modes of Life
Edited by Regula Valérie Burri
and Joseph Dumit
7 Journalism, Science and Society
Science Communication between
News and Public Relations
Edited by Martin W. Bauer and
Massimiano Bucchi
8 Science Images and Popular
Images of Science
Edited by Bernd Hüppauf and
Peter Weingart
9 Wind Power and Power Politics
International Perspectives
Edited by Peter A. Strachan,
David Lal and David Toke
10 Global Public Health Vigilance
Creating a World on Alert
Lorna Weir and Eric Mykhalovskiy
11 Rethinking Disability
Bodies, Senses, and Things
Michael Schillmeier
12 Biometrics
Bodies, Technologies, Biopolitics
Joseph Pugliese
13 Wired and Mobilizing
Social Movements, New
Technology, and Electoral Politics
Victoria Carty
14 The Politics of Bioethics
Alan Petersen
15 The Culture of Science
How the Public Relates to Science
Across the Globe
Edited by Martin W. Bauer, Rajesh
Shukla and Nick Allum
16 Internet and Surveillance
The Challenges of Web 2.0 and
Social Media
Edited by Christian Fuchs, Kees
Boersma, Anders Albrechtslund
and Marisol Sandoval
17 The Good Life in a Technological
Age
Edited by Philip Brey, Adam
Briggle and Edward Spence
18 The Social Life of
Nanotechnology
Edited by Barbara Herr Harthorn
and John W. Mohr
19 Video Surveillance and Social
Control in a Comparative
Perspective
Edited by Fredrika Björklund and
Ola Svenonius
20 The Digital Evolution of an
American Identity
C. Waite
21 Nuclear Disaster at Fukushima
Daiichi
Social, Political and
Environmental Issues
Edited by Richard Hindmarsh
22 Internet and Emotions
Edited by Tova Benski and Eran
Fisher
23 Critique, Social Media and the
Information Society
Edited by Christian Fuchs and
Marisol Sandoval
24 Commodified Bodies
Organ Transplantation and the
Organ Trade
Oliver Decker
25 Information Communication
Technology and Social
Transformation
A Social and Historical
Perspective
Hugh F. Cline
26 Visualization in the Age of
Computerization
Edited by Annamaria Carusi, Aud
Sissel Hoel, Timothy Webmoor and
Steve Woolgar
Visualization in the
Age of Computerization
Edited by
Annamaria Carusi, Aud Sissel Hoel,
Timothy Webmoor and Steve Woolgar
NEW YORK LONDON
First published 2015
by Routledge
711 Third Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Routledge is an imprint of the Taylor & Francis Group,
an informa business
© 2015 Taylor & Francis
The right of the editors to be identified as the authors of the editorial
material, and of the authors for their individual chapters, has been asserted
in accordance with sections 77 and 78 of the Copyright, Designs and
Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or
utilised in any form or by any electronic, mechanical, or other means, now
known or hereafter invented, including photocopying and recording, or in
any information storage or retrieval system, without permission in writing
from the publishers.
Trademark Notice: Product or corporate names may be trademarks or
registered trademarks, and are used only for identification and explanation
without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Visualization in the age of computerization / edited by Annamaria Carusi,
Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar.
pages cm. — (Routledge studies in science, technology and society
; 26)
Includes bibliographical references and index.
1. Information visualization. 2. Digital images. 3. Visualization.
4. Computers and civilization. I. Carusi, Annamaria.
QA76.9.I52V574 2014
303.48'34—dc23
2013051326
ISBN13: 978-0-415-81445-4 (hbk)
ISBN13: 978-0-203-06697-3 (ebk)
Typeset in Sabon
by IBT Global.
SFI-01234
SFI label applies to the text stock
Contents
List of Figures
Introduction 1
ANNAMARIA CARUSI, AUD SISSEL HOEL,
TIMOTHY WEBMOOR AND STEVE WOOLGAR
PART I
Visualization in the Age of Computerization
1 Algorithmic Alchemy, or the Work of Code
in the Age of Computerized Visualization 19
TIMOTHY WEBMOOR
2 From Spade-Work to Screen-Work:
New Forms of Archaeological Discovery in Digital Space 40
MATT EDGEWORTH
3 British Columbia Mapped: Geology, Indigeneity
and Land in the Age of Digital Cartography 59
TOM SCHILLING
4 Redistributing Representational Work:
Tracing a Material Multidisciplinary Link 77
DAVID RIBES
5 Making the Strange Familiar:
Nanotechnology Images and Their Imagined Futures 97
MICHAEL LYNCH AND KATHRYN DE RIDDER-VIGNONE
6 Objectivity and Representative Practices
across Artistic and Scientific Visualization 118
CHIARA AMBROSIO
7 Brains, Windows and Coordinate Systems 145
ANNAMARIA CARUSI AND AUD SISSEL HOEL
8 A Four-Dimensional Cinema: Computer Graphics,
Higher Dimensions and the Geometrical Imagination 170
ALMA STEINGART
PART II
Doing Visual Work in Science Studies
9 Visual STS 197
PETER GALISON
10 Expanding the Visual Registers of STS 226
TORBEN ELGAARD JENSEN, ANDERS KRISTIAN MUNK,
ANDERS KOED MADSEN AND ANDREAS BIRKBAK
11 Mapping Networks:
Learning From the Epistemology of the “Natives” 231
ALBENA YANEVA
12 If Visual STS Is the Answer, What Is the Question? 237
ANNE BEAULIEU
13 Visual Science Studies: Always Already Materialist 243
LISA CARTWRIGHT
Contributors 269
Index 273
Figures
1.1 Lines of code in the programming language C++ (on right)
rendering the visualization (on left) of a London transport
model. 21
1.2 Cached MySQL database. 28
2.1 The use of iPads at Pompeii excavations, 2010. 46
4.1 Two examples of Marie’s work in the application of texture-
mapping to a single surface. Which is more effective? 80
4.2 A texture-map used in one of Marie’s experimental systems. 84
5.1 “Quantum Corral” (1993). 103
5.2 Nanocar models and STM image. 110
6.1 Bernard Siegfried Albinus. 124
6.2 Bernard Siegfried Albinus. 125
6.3 Alfred Stieglitz, The Steerage, 1907. 132
6.4 Martin John Callanan, 2009. 135
8.1 On the left (1a) is a still from Banchoff and Strauss’s first
film, showing a projection of the flat torus into three-space,
which divides the space into two congruent halves. On the
right (1b) is a later rendering of the same projection with
color and shading. 175
8.2 On the top (2a & 2b) are two images from The Hypercube
Projections and Slicing. Below (2c & 2d) are two images
from Complex Functions Graph. 177
8.3 Two versions of the Veronese surface. 180
9.1 Still from Primate. 207
9.2 Dimitri Mugianis. 209
9.3 Still from Leviathan. 210
9.4 Still from Secrecy. 216
11.1 The dynamic network mapping of the process of design
and construction of the 2012 London Olympics Stadium. 235
Introduction
Annamaria Carusi, Aud Sissel Hoel,
Timothy Webmoor and Steve Woolgar
The pervasive computerization of imaging and visualizing challenges us to
question what changes accompany computerized imagery and whether, for
instance, it is poised to transform science and society as thoroughly as the
printing press and engraving techniques changed image reproduction (Eisen-
stein 1980; Rudwick 1976), or as radically as photography altered the con-
ditions of human sense perception and the aura of works of art (Benjamin
[1936] 1999). Whereas some scholars discern continuity in representational
form from Renaissance single-point perspective to cinematic and digital
arrays, from Alberti’s windows to Microsoft (Crary 1990; Manovich 2001;
Friedberg 2006; Daston and Galison 2007), other scholars understand our
immersion in imagery as heralding a “visual turn” with the engagement of
knowledge in contemporary culture (Rheingold 1992; Mitchell 1994; Staf-
ford 1996). James Elkins suggests that visual literacy spans the specialized
disciplines of the academy (Elkins 2007), while Robert Horn or Thomas
West claim that “visual thinking” is primary and intuitive (Horn 1998;
West 2004). Computational high-tech may be enabling the reclamation of
old visual talents devalued by word-bound modernist thought.
When we organized the Visualization in the Age of Computerization
conference in Oxford in 2011, we were motivated by curiosity regarding
what specific effects these innovations were having and whether claims
regarding new and more effective visualizing techniques and transformed
modes of visual thinking were being borne out. A further motivation was
the conviction that the exploration of the field would be enriched by input
from the broad range of science studies, including sociological, philosophi-
cal, historical and visual studies approaches. The papers presented at the
conference and the further discussion and debate that they engendered
have gone some way in providing a set of crossdisciplinary and sometimes
interdisciplinary perspectives on the ways in which the field of visualiza-
tion has continued to unfold. This volume gathers together a sample of
contributions from the conference, with others being presented in the
special issue of Interdisciplinary Science Reviews, titled “Computational
Picturing.”1
Other interventions at the conference have been published else-
where, and since that time, the domain has garnered the interest of a great
many scholars across the disciplines. New publications in the area include
Bartscherer and Coover (2011), Yaneva (2012), Wouters et al. (2013) and
Coopmans et al. (2014). The texts gathered in this volume each address the
ways in which the increasing prevalence of computational means of imag-
ing and visualizing is having an impact on practices and understandings of
“making visible,” as well as of the objects visualized. We note five overlap-
ping themes in considerations of the ways in which the modes of visualiza-
tion associated with digital methods are making a difference in areas that
have traditionally been of interest to studies of the visual in science.
First, a difference is being made to the deployment of perception and
cognition in visual evidence. Traditional ideas about cognition have long
been rejected in favor of an understanding of interpretation in terms of
in situ material practices. It is thus recognized that mentalistic precepts
such as “recognizing patterns,” “identifying relationships,” “assessing fit
and correspondence,” etc. are better treated as idealized depictions of the
activities involved in generating, managing and dealing with representa-
tions. In science and technology studies (STS), visual studies, history of
science and related disciplines, the focus on material practices resulted
in a widespread acknowledgment of the muddled and contingent efforts
involved in the practical activities of making sense of the visual culture
of science (Lynch and Woolgar 1990; Cartwright 1995; Galison 1997).
Yet the advent of new modes of computerized visualization has seen the
reemergence of a mentalistic cognitivism, with visualizations assigned
the role of “cognitive aids” (Bederson and Schneiderman 2003; Card,
Mackinley and Schneiderman 1999; McCormick, DeFanti and Brown
1987; Ware 2000; see Arias-Hernandez, Green and Fisher 2012 for a
review). However, alongside the reemerging cognitivist discourse, there
are other claims that accentuate the cognitive import of visualizations—
that is, their active role in forming and shaping cognition—while simulta-
neously pointing to the complex embedding of perception and cognition
in material settings (Ellenbogen 2008; Carusi 2008; Hoel 2012). As Anne
Beaulieu points out in her contribution toward the end of this volume,
even if the term “visual” relies on the perceptual category of sight, vision
is networked and witnessed by way of computational tools that take
primarily nonoptical forms. Lisa Cartwright too calls attention to the
“always already” material aspects of the visible, pointing out the extent to
which a materialist philosophy has been developed in feminist approaches
to images, imaging and visualization. The cognitive import of visualiza-
tions is emphasized in the chapter by Matt Edgeworth in his discussion
of discovery through screen-work in archaeology, and in the chapter by
Tom Schilling in the context of antagonistic communities using maps
or data sets as visual arguments. Alma Steingart, likewise, underscores
the cognitive import of visualizations by tracking the way that computer
graphics animations were put to use by mathematicians as generative of
novel theories. In fact, most contributions to this volume lay stress upon
the cognitive import of visualizations, by pointing to their transformative
and performative roles in vision and knowledge.
A second change is in the understanding of objectivity in the method-
ological sense of what counts as an objective scientific claim. Shifts in obser-
vational practice are related to fluctuations as to what counts as “good”
vision with respect to the type of vision that is believed to deliver objectivity.
However, there are diverse notions of objectivity in the domain mediated
by computational visualization. Scholars have described the involvement
of technologies in science in terms of the historical flux of epistemic vir-
tues in science, tracing a trajectory from idealized images that required
intervention and specialized craftsmanship to more mechanical forms of
recording. A later hybridization of these virtues has been claimed to occur
when instruments and, later, computers allowed active manipulation versus
passive observation (Hacking 1983; Galison 1997). Other researchers sug-
gest that the computerization of images accounts for an epistemic devalu-
ation of visualizations in favor of their mathematical manipulation that
the digital (binary) format allows (Beaulieu 2001). This raises questions
of objectivity in science, but not without also raising questions about the
objects that are perceived through different modes of visualization (Hoel
2011; Carusi 2012). In this volume, Chiara Ambrosio shows that notions of
scientific objectivity have been shaped in a constant dialogue with artistic
representation. She points to the importance of contextualizing prevalent
ideas of truth in science by attending to the roles of artists in shaping these
ideas as well as challenging them. Annamaria Carusi and Aud Sissel Hoel
develop an approach to visualizations in neuroscience that accentuates
the generative dimension of vision, developing Maurice Merleau-Ponty’s
notion of “system of equivalences” as an alternative to approaches in terms
of subjectivity or objectivity. The generative roles of visualizations, or,
more precisely, of digital mapping practices, are also emphasized in Albena
Yaneva’s contribution, as well as in the contribution by Torben Elgaard Jen-
sen, Anders Kristian Munk, Anders Koed Madsen and Andreas Birkbak.
A third area where we are seeing shifts is in the ontology of what counts
as a scientific object. The notion of what constitutes a scientific object has
received a great deal of attention for quite some time in science studies
broadly (philosophy, history and sociology). However, we are currently see-
ing a resurgence of interest in the ontology of scientific objects (Barad 2007;
Bennett 2010; Brown 2003; Harman 2009; Latour 2005; Olsen et al. 2012;
Woolgar and Lezaun 2013; among many others; see Trentmann 2009 for
an overview). As Cartwright points out in her chapter, a new preoccupation
with ontology in science studies has been continuous with the digitaliza-
tion of medicine, science and publication. It may seem that this is because
computational objects demand a different sort of ontology: They are often
seen as virtual as opposed to physical objects, and their features are often
seen as in some sense secondary to the features of “actual” physical objects
that they are assumed to mimic. However, the distinction between virtual
and physical objects is problematic (Rogers 2009; Carusi 2011; Hoel and
van der Tuin 2013), and is inadequate to the working objects handled and
investigated by the sciences. This is highlighted when attention is paid to
the mode of visualization implied by digitalization and computerization. In
this volume, Timothy Webmoor observes that the formerly distinct steps of
crafting visualizations are woven together through the relational ontology
of “codework.” A relational ontology is also key to the approach developed
by Carusi and Hoel, as it is to other contributions that emphasize the gen-
erative aspect of visualizations. Steingart, for example, points out that the
events portrayed in the mathematical films she has studied are not indexes of
observable natural events, but events that become phenomena at all only to
the extent that they are graphically represented and animated. However, the
ontological implications of visualization are frequently obscured: Michael
Lynch and Kathryn de Ridder-Vignone, in their study of nanotechnology
images, note that by resorting to visual analogies and artistic conventions
for naturalistic portrayal of objects and scenery, nano-images represent a
missed opportunity to challenge conventional, naturalistic ontology.
In terms of the ontology of visual matter, some visual culturalists and
anthropologists (e.g., Gell 1998; Pinney 1990, 2004) have suggested that tak-
ing ontology seriously entails reorienting our epistemic toolbox for explain-
ing images, or suspending our desire for epistemic explanation. These include
“scaling-out” strategies that move away from the visual object to emphasize
historical contingency, partiality or construction through documenting an
object’s many sociotechnical relationships. A different mode of engagement
may be localizing at the scale of the visual object to get at the “otherness,”
“fetishization” and inherent qualities that make certain visual imagery work
(Harman 2011; Henare, Holbraad and Wastell 2007; Pinney 1990; Webmoor
2013; topics also discussed by Beaulieu in this volume). Deploying media to
register ontological competencies, rather than to serve epistemic goals, means
developing object-oriented metrologies, or new methods of “measuring”
materiality (Webmoor 2014). This tactic would in part strive for the affective
appreciation of visual objects, and it may approximate the collaboratively
produced imagery of artists and scientists. Hence, we view the increase in
collaborations between science and art, or alternatively, between science/
visual studies and art, as a fourth area of change. Certainly, as has been well
documented, there is a long history of productive interchange between scien-
tists and artists (Stafford 1991; Jones and Galison 1998; Kemp 2006). Fur-
ther, the introduction of digital and computational technologies has opened
a burgeoning field of explorations into embodied experience and technology,
challenging boundaries between technoscience, activism and contemporary
art (Jones 2006; da Costa and Philip 2008). In fact, one of the things that we
were aiming for with the conference was to encourage interdisciplinary dia-
logue and debate between different approaches to science and image studies,
and particularly to get more conversations and collaborations going between
STS and the humanities, as well as between science/visual studies and creative
practitioners. To attain the second goal, artists and designers, including Ali-
son Munro, Gordana Novakovic and Alan Blackwell, were invited to present
their work at the conference. In this volume, relations between science and
art are dealt with by several contributors. Ambrosio discusses three histori-
cal and contemporary examples, where science has crossed paths with art,
arguing that an adequate understanding of the shifting notions of objectiv-
ity requires taking into account the variegated influences of art. Carusi and
Hoel show the extent to which a scientific practice such as neuroimaging
shares characteristics of the process of painting, while David Ribes shows
how today’s visualization researchers apply insights from visual perception
and artistic techniques to design in order to ensure a more efficient visualiza-
tion of data. The chapter by Lynch and de Ridder-Vignone also testifies to the
interplay of science and art, not only through the reliance of nano-images on
established artistic conventions, but also due to the way that nano-images are
often produced and displayed as art, circulated through online “galleries.”
A fifth change relates to the way that computational tools bring about
modifications to practice, both in settings where visualizations are devel-
oped and used in research, and in settings where scholars study these visu-
alizing practices. A series of studies have documented the manner in which
visually inscribing phenomena makes up “the everyday” of science work
(Latour and Woolgar 1979; Lynch and Woolgar 1990; Cartwright 1995;
Galison 1997; Knuuttila, Merz and Mattila 2006), suggesting that much
of what enables science to function is making things visible through the
assembling of inscriptions for strategic purposes. However, whereas previ-
ously researchers investigating these visual inscription processes made their
observations in “wet-labs” or physical sites of production, today they often
confront visualizations assembled entirely in the computer or in web-based
“spaces” (Merz 2006; Neuhaus and Webmoor 2012). Several contributors
explore how tasks with computational tools of visualization are reconfigur-
ing organizational and management practices in research settings. Ribes,
focusing on visualization researchers who both publish research findings
and develop visualization software, shows how, through visual input, tech-
nology takes on a leading role in sustaining collaborations across disci-
plines, which no longer depend solely on human-to-human links. Webmoor
ethnographically documents changes to the political economy of academia
that are attendant with the cocreation of data, code and visual outputs by
researchers, software programmers and lab directors. He poses this reflexive
question: What changes for academics in terms of promotion, assignation
of publishing credit and determination of “impact factors” will accompany
the increasing online publishing of research outputs, or visualizations, in
digital format? Modifications to practice due to new computational tools
are also a key topic in Edgeworth's chapter, which tracks the transition from
spade-work to screen-work in archaeology, as it is in Schilling’s chapter,
which points to the blurring of boundaries between producers and users in
today’s mapmaking practices.
When it comes to the modifications to the second setting, to the way that
today’s scholars go about investigating the practices of science, Peter Gali-
son’s contribution to this volume challenges scholars to do visual work and
not only to study it. Galison’s chapter, however, is of a different kind than
the preceding chapters, being set up as a debate contribution advocating film
as a research tool in science and technology studies—or what he refers to
as “second-order visual STS.” Upon receiving this contribution, we had the
idea of inviting scholars of the visual culture of science to respond, or alter-
natively, to write up their own position pieces on the topic of visual STS.
Thus, whereas the first part of the book consists of a series of (in Galison’s
terms) first-order VSTS contributions, the second part consists of a series
of debate contributions (varying in length from short pieces to full length
papers) that discuss the idea of visual STS, and with that, the idea of doing
visual work in science studies (second-order VSTS). This was particularly
appropriate since, as the book was being finalized, we noted that the call for
submissions for the 2013 Annual Meeting of the Society for Social Studies
of Science included a request for “sessions and papers using ‘new media’ or
other forms of new presentation,” and that there would be “special sessions
on movies and videos where the main item submitted will be a movie or
video.”2
This initiative has been followed up by the announcement of a new
annual film festival, Ethnografilm, the first of which was to be held in Paris
in April 2014. The expressed purpose of the film festival is to promote filmmaking
as a “legitimate enterprise for academic credit on a par with peer reviewed
articles and books.”3
Clearly, science/visual studies scholars are themselves
experiencing the challenge of digital technologies to their own research and
presentation practice, and are more than willing to experiment with using
the visual as well as studying it. As many of the contributors to the sec-
ond part of the book discuss, conducting research through filmmaking or
other visual means is not unprecedented—Galison himself discusses visual
anthropology, and Cartwright discusses the use of film by researchers such
as Christina Lammer. However, digital tools make movies and videos avail-
able to a far wider range of scholars (consider Galison’s discussion of the
extremely time-consuming editing process in predigital filming), and also
make other techniques that have been used in the past—such as mapping and
tracing networks—more available. This is not only a question of availability:
Digitalization also transforms these techniques and media, and the mode
of knowledge attained through them, as is clearly seen in the discussion of
mapping in the responses of Jensen et al. and Yaneva. Besides, the increasing
availability of digital video challenges science studies scholars not to fall back
into what Cartwright terms an “anachronistic realism”, but to deploy new
modes of reflexivity in their use of the medium.
With the call for second-order VSTS, and the merging of crafts formerly
divided between the arts and sciences, we note an optimistic potential for
the engagement with visualizations in the age of computerization. With
computerized forms of capture, rendering, distribution and retrieval of
scholarly information, the repertoires of science as well as science/visual
studies expand well beyond the historical trajectory of what paper-based
media permit. As several contributors urge, visual media with their unique
affordances supplement paper-based media and allow complementary and
richer portrayals of the practices of science. Further, with this expansion
come new forms of knowledge. Certainly, this volume cannot exhaust a field
already acknowledged for its inventiveness of new tools and techniques,
and is but an initial exploration of some of its central challenges that will
no doubt continue to elicit the attention of science studies scholars. What
we hope to have shown by gathering together these studies of computerized
visualization is that the visual dimension of visualization warrants
attention in its own right, and not only as an appendage of digitalization.
OVERVIEW
The present volume is divided into two parts. The first part consists of eight
chapters that examine the transformative roles of visualizations across dis-
ciplines and research domains. All of these chapters are based on presenta-
tions made at the 2011 conference in Oxford. The second part considers
instead the use of the visual as a medium in science/visual studies, and con-
sists of two full-length chapters and three short position pieces. The con-
tributors to this part were also affiliated with the Oxford conference in one
way or another: as keynote, invited keynote, respondent or participants.
The following paragraphs provide a brief overview of all contributions.
Timothy Webmoor, in his chapter “Algorithmic Alchemy, or the Work
of Code in the Age of Computerized Visualization,” offers an ethnography
of an academic-cum-commercial visualization lab in London. Work with
and reliance upon code is integral to computerized visualization. Yet work
with code is like Nigel Thrift’s “technological unconscious” (Thrift 2004).
Until quite recently it has remained in the black boxes of our computerized
devices: integral to our many mundane and scientific pursuits, yet little
understood. Close ethnographic description of how code works suggests
some novelties with respect to the tradition of examining representation in
science and technology studies. Webmoor argues that formerly separated
or often sequential tasks, principally data sourcing, programming and visu-
alizing, are now woven together in what researchers do. This means that
previously separated roles, such as those of researcher and programmer,
increasingly converge in what he terms “codework,” an activity resembling
reiterative knot-making. Outputs or visualizations, particularly with the
“mashed-up” web-based visualizations he studies, are held only provision-
ally like a knot before they are redone, adjusted, updated or taken down.
Nevertheless, codework demonstrates vitality precisely because it con-
founds conventional schemes of accountability and governance. It plays on
a tension between creativity and containment.
8 Carusi, Sissel Hoel, Webmoor and Woolgar
Matt Edgeworth’s “From Spade-Work to Screen-Work: New Forms of
Archaeological Discovery in Digital Space” undertakes an ethnography of
practices in archaeology involving the integration of digital tools. A pre-
sentation of his own intellectual development in terms of deploying digital
tools as an archaeologist parallels and grants a candor to his empirical
observations surrounding changes to a field that is often caricatured as
rather a-technological due to the down-and-dirty conditions of archaeo-
logical fieldwork. Among other new tools of archaeology, Edgeworth
underscores the embodiment and tacit skill involved with imaging tech-
nologies, particularly those of Google Earth and LiDAR satellite images.
He makes the case that screen-work, identifying and interpreting archaeo-
logical features on the computer screen, entails discovery in the quintes-
sential archaeological sense, that of excavating pertinent details from the
“mess” or inundation of visual information. He proceeds to ask whether
these modes of discovery are drastically different, or whether the shift to
digital techniques is a step-wise progression in the adoption of new tools.
In moving the locus of archaeological discovery toward the archaeological
office space, Edgeworth brings up a fundamental issue of identity for “the
discipline of the spade” in the digital age.
In his chapter “British Columbia Mapped: Geology, Indigeneity and
Land in the Age of Digital Cartography,” Tom Schilling offers a detailed
consideration of the practices, processes and implications of digital map-
ping by exploration geologists and Aboriginal First Nations in British
Columbia, Canada. In both cases the communities produce maps that enter
the public domain with explicit imperatives reflecting their economic and
political interests. Exploration geologists use digital mapping tools to shape
economic development, furthering their search for new mineral prospects
in the Canadian Rocky Mountains. First Nations, on their side, produce
digital maps to amplify claims to political sovereignty, by developing data
layers consisting of ethnographic data. The chapter also explores the speci-
ficities of digital cartography compared to its paper-based predecessors.
While both digital and paper-based cartography are invariably political,
digital mapping tools are distinguished by their manipulability: By invit-
ing improvisational reconstruction they challenge the distinction between
producers and users of maps. However, as Schilling’s case studies make
clear, these practices have yet to fulfill the ideals of collaborative sharing of
data and democratic participation. Further, as community-assembled data
sets are taken up by others, the meaning and origins of databases become
increasingly obscured.
The contribution by David Ribes, “Redistributing Representational
Work: Tracing a Material Multidisciplinary Link,” turns our atten-
tion to the way in which scientists’ reliance on visualization software
results in a redistribution of labor in terms of knowledge production,
with computer scientists playing a key intermediary role in the process.
The chapter follows the productive work of one computer scientist as she
built visualization tools for the sciences and medicine using techniques
from experimental psychology. He shows that the methods and findings
of the computer scientist have two trajectories: On one hand, they are
documented in scholarly publications, where their strengths and weak-
nesses are discussed; on the other, research outcomes also independently
inform the production of future visualization tools, and become incorpo-
rated into a process of scientific knowledge production in another field.
Ribes explores the gap between these two trajectories, showing that as
visualization software comes to be used and reused in different contexts,
multidisciplinarity is loosened from human-to-human links, and instead
becomes embedded in the technology itself.
Michael Lynch and Kathryn de Ridder-Vignone, in their chapter “Mak-
ing the Strange Familiar: Nanotechnology Images and Their Imagined
Futures,” examine different types of images of nanoscale phenomena.
Images play a prominent role in the multidisciplinary field of nanoscience
and nanotechnology; and to an even greater extent than in other research
areas images are closely bound to the promotion and public interface of
the field. Examining nano-images circulated through online image galler-
ies, press releases and other public forums, Lynch and de Ridder-Vignone
make the observation that, even if they portray techno-scientific futures
that challenge the viewer’s imagination, nano-images resort to classic artis-
tic conventions and naturalistic portrayals in order to make nanoscale phe-
nomena sensible and intelligible—pursuing the strategy of “making the
strange familiar.” This may seem ironic, since the measurements of scan-
ning-tunneling microscopes are not inherently visual, and the nanoscale
is well below the minimum wavelengths of visible light. Nonetheless, the
conventions used take different forms in different contexts, and the chapter
proceeds to undertake an inventory of nano-images that brings out their
distinct modes of presentation and their distinct combinations of imagina-
tion and realism.
With the chapter by Chiara Ambrosio, “Objectivity and Representative
Practices across Artistic and Scientific Visualization,” we turn to the his-
tory of objectivity in art and science, and specifically to the way in which
they are interrelated. She shows that scientific objectivity has constantly
crossed paths with the history of artistic representation, from which it has
received some powerful challenges. Her aim is twofold: firstly to show the
way in which artists have crucially contributed to shaping the history of
objectivity; and secondly to challenge philosophical accounts of repre-
sentation that not only are ahistorical but also narrowly focus on science
decontextualized from its conversations with art. Ambrosio’s discussion of
three case studies from eighteenth-century science illustration, nineteenth-
century photography and twenty-first-century data visualization highlights
the importance of placing current computational tools and technologies in
a historical context, which encompasses art and science. She proposes a
historically grounded and pragmatic view of “representative practices,” to
account for the key boundary areas in which art and science have comple-
mented each other, and will continue to do so in the age of computerization.
Annamaria Carusi and Aud Sissel Hoel, in their chapter “Brains, Win-
dows and Coordinate Systems,” develop an account of neuroimaging that
conceives brain imaging methods as at once formative and revealing of
neurophenomena. Starting with a critical discussion of two metaphors that
are often evoked in the context of neuroimaging, the “window” and the
“view from nowhere,” they propose an approach that goes beyond con-
trasts between transparency and opacity, or between complete and partial
perspectives. Focusing on the way brain images and visualizations are used
to convey the spatiality of the brain, neuroimaging is brought into juxta-
position with painting, which has a long history of grappling with space.
Drawing on Merleau-Ponty’s discussion of painting in “Eye and Mind,”
where he sets forth an integrated account of vision, images, objects and
space, Carusi and Hoel argue that the handling and understanding of space
in neuroimaging involve the establishment of a “system of equivalences”
in the terms of Merleau-Ponty. Accentuating the generative dimension of
images and visualizations, the notion of seeing according to a system of
equivalences offers a conceptual and analytic tool that opens a new line of
inquiry into scientific vision.
In her chapter “A Four-Dimensional Cinema: Computer Graphics,
Higher Dimensions and the Geometrical Imagination,” Alma Steingart
investigates the way that computer graphic animation has reconfigured
mathematicians’ visual culture and transformed mathematical practice by
providing a new way of engaging with mathematical objects and theories.
Tracking one of the earliest cases in which computer graphics technology
was applied to mathematical work, the films depicting four-dimensional
surfaces by Thomas Banchoff and Charles Strauss, Steingart argues that
computer graphics did more than simply represent mathematical phenom-
ena. By transforming mathematical objects previously known only through
formulas and abstract argument into perceptible events accessible to direct
visual investigation, computer graphic animation became a new way of
producing mathematical knowledge. Computer graphics became a tool for
posing new problems and exploring new solutions, portraying events that
would not be accessible except through their graphic representation and
animation. It became an observational tool that allowed mathematicians
to see higher dimensions, and hence a tool for cultivating and training their
geometrical imagination.
The focus on film as a mode of discovery makes a nice transition to
the next chapter, by Peter Galison, which introduces the second part of
the book. Galison challenges science studies to use the visual as well as
to study it, and delineates the contours of an emerging visual science and
technology studies or VSTS in a contribution with two parts: a theoret-
ical reflection on VSTS, and a description of his own experience doing
science studies through the medium of film, in such projects as Ultimate
Weapon: The H-Bomb Dilemma (Hogan 2000) and Secrecy (Galison and
Moss 2008). Galison draws a distinction between first-order VSTS, which
continues to study the uses that scientists make of the visual, and second-
order VSTS, which uses the visual as the medium in which it conducts
and conveys its own research. He proposes that exploring the potential for
second-order VSTS is a logical further development of what he holds up as
the key accomplishment of science studies in the last thirty years: develop-
ing localization as a counter to global claims about universal norms and
transhistorical markers of demarcation.
In their collective piece, “Expanding the Visual Registers of STS,” Tor-
ben Elgaard Jensen, Anders Kristian Munk, Anders Koed Madsen and
Andreas Birkbak respond to Galison’s call to expand the visual research
repertoire by advocating an even larger expansion: Why stop at filmmaking
when there are also other visual practices that could be taken up by second-
order VSTS? Focusing in particular on the practice of making digital maps,
they argue that the take-up of maps by second-order VSTS would have to
be accompanied by a conceptual rethinking of maps, including a discussion
of the way maps structure argumentation. They also pick up on Galison’s
observations concerning the affectivity of film and video by suggesting that
maps also leave the viewer affected through his or her own unique mode
of engagement.
In her response, “Mapping Networks: Learning from the Epistemology
of the ‘Natives,’” Albena Yaneva starts out by pointing to the role of eth-
nographic images in shaping the fieldworker’s explanations and arguments.
Further, with the introduction of digital mapping tools into the STS tool
box with large projects such as the controversy mapping collaborative proj-
ect MACOSPOL, a shift to second-order VSTS has already taken place.
Emphasizing the performative force of mapping, Yaneva argues that map-
ping is not a way of illustrating but a way of generating and deploying
knowledge. In her own fieldwork, which followed the everyday visual work
of artists, technicians, curators, architects and urban planners, Yaneva
experimented with swapping tools with the “natives,” learning a lot from
their indigenous visual techniques. Drawing on this “organic” development
of second-order visual methods gained through ethnographic intimacy, she
suggests borrowing epistemologies and methods from the “natives.”
In her piece “If Visual STS Is the Answer, What Is the Question?” Anne
Beaulieu outlines her position on the topic of visual STS. She starts out
by countering claims that STS has always been visual or that attending to
the visual amounts to fetishizing it. Defending the continued relevance of
talking about a “visual turn” in STS, Beaulieu lists five reasons why the
question of VSTS becomes interesting if one attends to its specifics: Images
are an emergent entity that remains in flux; visual practices are just as
boring as other material practices studied by STS; observation takes many
forms and must be learned; vision and images are to an increasing extent
networked; and attending to the visual can serve to expand the toolkit of
STS by drawing on resources from disciplines such as media studies, film
studies, art history and feminist critiques of visuality.
The concluding contribution to this volume, Lisa Cartwright’s “Visual
Science Studies: Always Already Materialist,” demonstrates the importance
of approaching the visual in an interdisciplinary manner. Cartwright points
out the attention paid to the materiality of the visual that was a hallmark of
Marxist materialist and feminist visual studies from the late 1970s onwards.
The chapter highlights the specific contributions of materialist and feminist
visual studies in addressing subjectivity and embodiment. The focus of visual
studies on materiality does not result in a disavowal of or skepticism toward
the visual. Cartwright reminds us that the visual turn of science studies since
the 1990s coincided with the digital turn in science and technology, includ-
ing the use of digital media in the fieldwork of science studies scholars; here
the author calls for a more reflexive use of the medium. The chapter con-
cludes with a detailed discussion of two films by the sociologist and videogra-
pher Christina Lammer—Hand Movie 1 and 2—showing how they present
a third way, which neither reduces things to their meanings in the visual, nor
reduces images to the things and processes around them; instead, they bring
out a more productive way of handling visuality and materiality.
NOTES
1. Volume 37, Number 1, March 2012. All the contributions published in the
special issue are freely available for download at the following link: http://
www.ingentaconnect.com/content/maney/isr/2012/00000037/00000001.
Last accessed 4th November, 2013.
2. http://www.4sonline.org/meeting. Last accessed 4th November, 2013.
3. For more information about the film festival, see: http://ethnografilm.org.
Last accessed 4th November, 2013.
REFERENCES
Arias-Hernandez, Richard, Tera M. Green and Brian Fisher. 2012. “From Cogni-
tive Amplifiers to Cognitive Prostheses: Understandings of the Material Basis of
Cognition in Visual Analytics.” In “Computational Picturing,” edited by Anna-
maria Carusi, Aud Sissel Hoel and Timothy Webmoor. Special issue. Interdisci-
plinary Science Reviews 37 (1): 4–18.
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the
Entanglement of Matter and Meaning. Durham, NC: Duke University Press.
Bartscherer, Thomas, and Roderick Coover, eds. 2011. Switching Codes: Thinking
through Digital Technology in the Humanities and Arts. Chicago: University
of Chicago Press.
Beaulieu, Anne. 2001. “Voxels in the Brain: Neuroscience, Informatics and Chang-
ing Notions of Objectivity.” Social Studies of Science 31 (5): 635–680.
Bederson, Benjamin B., and Ben Shneiderman. 2003. The Craft of Information
Visualization: Readings and Reflections. Amsterdam: Morgan Kaufmann.
Bennett, Jane. 2010. Vibrant Matter: A Political Ecology of Things. Durham, NC:
Duke University Press.
Benjamin, Walter. (1936) 1999. “The Work of Art in the Age of Mechanical Repro-
duction.” In Visual Culture: The Reader, edited by Jessica Evans and Stuart
Hall, 61–71. London: SAGE.
Brown, Bill. 2003. A Sense of Things: The Object Matter of American Literature.
Chicago: University of Chicago Press.
Card, Stuart K., Jock Mackinlay and Ben Shneiderman. 1999. Readings in Information
Visualization: Using Vision to Think. San Francisco: Morgan Kaufmann.
Cartwright, Lisa. 1995. Screening the Body: Tracing Medicine’s Visual Culture.
Minneapolis: University of Minnesota Press.
Carusi, Annamaria. 2008. “Scientific Visualisations and Aesthetic Grounds for
Trust.” Ethics and Information Technology 10: 243–254.
Carusi, Annamaria. 2011. “Trust in the Virtual/Physical Interworld.” In Trust and
Virtual Worlds: Contemporary Perspectives, edited by Charles Ess and May
Thorseth, 103–119. New York: Peter Lang.
Carusi, Annamaria. 2012. “Making the Visual Visible in Philosophy of Science.”
Spontaneous Generations 6 (1): 106–114.
Carusi, Annamaria, Aud Sissel Hoel and Timothy Webmoor, eds. 2012. “Com-
putational Picturing.” Special issue. Interdisciplinary Science Reviews 37 (1).
Coopmans, Catelijne, Janet Vertesi, Michael Lynch and Steve Woolgar, eds. 2014.
Representation in Scientific Practice Revisited. Cambridge, MA: MIT Press.
Crary, Jonathan. 1990. Techniques of the Observer: On Vision and Modernity in
the Nineteenth Century. Cambridge, MA: MIT Press.
da Costa, Beatriz, and Kavita Philip, eds. 2008. Tactical Biopolitics: Art, Activ-
ism, and Technoscience. Cambridge, MA: MIT Press.
Daston, Lorraine, and Peter Galison. 2007. Objectivity. New York: Zone Books.
Eisenstein, Elizabeth. 1980. The Printing Press as an Agent of Change: Commu-
nications and Cultural Transformations in Early Modern Europe. Cambridge:
Cambridge University Press.
Elkins, James. 2007. Visual Practices across the University. Munich: Wilhelm Fink
Verlag.
Ellenbogen, Josh. 2008. “Camera and Mind.” Representations 101 (1): 86–115.
Friedberg, Anne. 2006. The Virtual Window: From Alberti to Microsoft. Cam-
bridge, MA: MIT Press.
Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics.
Chicago: University of Chicago Press.
Galison, Peter, and Rob Moss. 2008. Secrecy. Redacted Pictures. DVD.
Gell, Alfred. 1998. Art and Agency: An Anthropological Theory. Oxford:
Clarendon.
Hacking, Ian. 1983. Representing and Intervening: Introductory Topics in the
Philosophy of Natural Science. Cambridge: Cambridge University Press.
Harman, Graham. 2009. Prince of Networks: Bruno Latour and Metaphysics.
Melbourne: Re.Press.
Harman, Graham. 2011. “On the Undermining of Objects: Grant, Bruno and
Radical Philosophy.” In The Speculative Turn: Continental Materialism and
Realism, edited by Levi R. Bryant, Nick Srnicek and Graham Harman, 21–40.
Melbourne: Re.Press.
Henare, Amiria, Martin Holbraad and Sari Wastell. 2007. “Introduction.” In Think-
ing through Things: Theorising Artefacts in Ethnographic Perspective, edited by
Amiria Henare, Martin Holbraad and Sari Wastell, 1–31. London: Routledge.
Hoel, Aud Sissel. 2011. “Thinking ‘Difference’ Differently: Cassirer versus
Derrida on Symbolic Mediation.” Synthese 179 (1): 75–91.
Hoel, Aud Sissel. 2012. “Technics of Thinking.” In Ernst Cassirer on Form and
Technology: Contemporary Readings, edited by Aud Sissel Hoel and Ingvild
Folkvord, 65–91. Basingstoke: Palgrave Macmillan.
Hoel, Aud Sissel, and Iris van der Tuin. 2013. “The Ontological Force of Technic-
ity: Reading Cassirer and Simondon Diffractively.” Philosophy and Technology
26 (2): 187–202.
Hogan, Pamela. 2000. Ultimate Weapon: The H-Bomb Dilemma. Superbomb
Documentary Production Company. DVD.
Horn, Robert. 1998. Visual Language: Global Communication for the 21st Cen-
tury. San Francisco: MacroVU.
Jones, Caroline A., ed. 2006. Sensorium: Embodied Experience, Technology and
Contemporary Art. Cambridge, MA: MIT Press.
Jones, Caroline A., and Peter Galison, eds. 1998. Picturing Science, Producing
Art. New York: Routledge.
Kemp, Martin. 2006. Seen | Unseen: Art, Science, and Intuition from Leonardo to
the Hubble Telescope. Oxford: Oxford University Press.
Knuuttila, Tarja, Martina Merz and Erika Mattila. 2006. “Editorial.” In “Com-
puter Models and Simulations in Scientific Practice,” edited by Tarja Knuuttila,
Martina Merz and Erika Mattila. Special issue, Science Studies 19 (1): 3–11.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory Life: The Social Construc-
tion of Scientific Facts. Beverly Hills: SAGE.
Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Net-
work-Theory. Oxford: Oxford University Press.
Lynch, Michael, and Steve Woolgar, eds. 1990. Representation in Scientific Prac-
tice. Cambridge, MA: MIT Press.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: MIT Press.
McCormick, Bruce, Thomas DeFanti and Maxine Brown, eds. 1987. “Visualiza-
tion in Scientific Computing.” Special issue, Computer Graphics 21 (6).
Merz, Martina. 2006. “Locating the Dry Lab on the Lab Map.” In Simulation:
Pragmatic Construction of Reality, edited by Johannes Lenhard, Günter. Küp-
pers and Terry Shinn, 155–172. Dordrecht: Springer.
Mitchell, W. J. T. 1994. “The Pictorial Turn.” In Picture Theory: Essays on Verbal
and Visual Representation, 11–34. Chicago: University of Chicago Press.
Neuhaus, Fabian, and Timothy Webmoor. 2012. “Agile Ethics for Massified Research
and Visualisation.” Information, Communication and Society 15 (1): 43–65.
Olsen, Bjørnar, Michael Shanks, Timothy Webmoor and Christopher Witmore.
2012. Archaeology: The Discipline of Things. Berkeley: University of California
Press.
Pinney, Christopher. 1990. “The Quick and the Dead: Images, Time, and Truth.”
Visual Anthropology Review 6 (2): 42–54.
Pinney, Christopher. 2004. “Photos of the Gods”: The Printed Image and Political
Struggle in India. London: Reaktion Books.
Rheingold, Howard. 1992. Virtual Reality. New York: Simon and Schuster.
Rogers, Richard. 2009. The End of the Virtual: Digital Methods. Amsterdam:
Vossiuspers.
Rudwick, Martin J. S. 1976. “The Emergence of a Visual Language for Geological
Science 1760–1840.” History of Science 14 (3): 149–195.
Stafford, Barbara Maria. 1991. Body Criticism: Imaging the Unseen in Enlighten-
ment Art and Medicine. Cambridge, MA: MIT Press.
Stafford, Barbara Maria. 1996. Good Looking: Essays on the Virtue of Images.
Cambridge, MA: MIT Press.
Thrift, Nigel. 2004. “Remembering the Technological Unconscious by Fore-
grounding Knowledges of Position.” Environment and Planning D: Society and
Space 22 (1): 175–190.
Trentmann, Frank. 2009. “Materiality in the Future of History: Things, Practices,
and Politics.” Journal of British Studies 48: 283–307.
Ware, Colin. 2000. Information Visualization: Perception for Design. San
Francisco: Morgan Kaufmann.
Webmoor, Timothy. 2013. “STS, Symmetry, Archaeology.” In The Oxford Hand-
book of the Archaeology of the Contemporary World, edited by Paul Graves-
Brown, Rodney Harrison and Angela Piccini, 105–120. Oxford: Oxford
University Press.
Webmoor, Timothy. 2014. “Object-Oriented Metrologies of Care and the Proxi-
mate Ruin of Building 500.” In Ruin Memories: Materialities, Aesthetics and
the Archaeology of the Recent Past, edited by Bjørnar Olsen and Þóra Péturs-
dóttir, 462–485. London: Routledge.
West, Thomas G. 2004. Thinking Like Einstein: Returning to Our Roots with
the Emerging Revolution in Computer Information Visualization. New York:
Prometheus Books.
Woolgar, Steve, and Javier Lezaun, eds. 2013. “A Turn to Ontology?” Special issue,
Social Studies of Science 43 (3).
Wouters, Paul, Anne Beaulieu, Andrea Scharnhorst and Sally Wyatt, eds. 2013.
Virtual Knowledge: Experimenting in the Humanities and the Social Sciences.
Cambridge, MA: MIT Press.
Yaneva, Albena. 2012. Mapping Controversies in Architecture. Farnham: Ashgate.
Part I
Visualization in the Age of Computerization
1 Algorithmic Alchemy, or the
Work of Code in the Age of
Computerized Visualization
Timothy Webmoor
INTRODUCTION
“I’m doing something very dangerous right now” (Informant 1, July
8, 2010).
“Yah, now is not a good time for me!” (Informant 2, July 8, 2010).
Ethnographic silence can speak volumes. Despite prompts from the anthro-
pologist, the dialogue dried up. Falling back on observation, the two infor-
mants were rapidly, if calmly, moving between their multiple program
windows on their multiple computer displays. I had been observing this
customary activity of coding visualizations for nearly a month now—a
visual multitasking that is so characteristic of the post–Microsoft Windows
age (Friedberg 2006). Looking around the open plan office setting, every-
one was huddled in front of a workstation. Unlike ethnographic work in
“wet labs,” where the setting and activities at hand differ from the anthro-
pologist’s own cubicled site of production, this dry lab (Merz 2006) seemed
so mundane and familiar, as a fellow office worker and computer user, that
admittedly I had little idea of how to gain any analytic purchase as their
resident anthropologist. What was interesting about what these researchers
and programmers were doing?
It wasn’t until I had moved on to questioning some of the PhD researchers
in the cramped backroom that I overheard a remark revealing the importance
of what the two informants were doing: “Well done! So it’s live?” (Director,
July 8, 2010). The two programmers had launched a web-based survey
program, where visitors to the site could create structured questionnaires
for other visitors to answer. The responses would then be compiled for
each visitor and integrated with geo-locational information obtained from
browser-based statistics to display results spatially on a map. It was part
of what this laboratory was well known for: mashing up and visualizing
crowd-sourced and other “open” data. While focused upon the UK, within
a few months the platform had received over 25,000 visitors from around
the world. The interface looked deceptively simple, even comical given that
a cartoon giraffe graced the splash page as a mascot of sorts. However, the
reticence the day of its launch was due to the sheer labor of coding to dis-
simulate the complicated operations allowing such an “open” visualization.
“If you’re going to allow the world to ask the rest of the world anything,
it is actually quite complicated” (Director, June 11, 2010). As one of the
programmers later explained his shifting of attention between paper notes,
notes left in the code, and the code itself, “I have to keep abreast of what
I’ve done because an error . . . (pause) . . . altering the back channel infra-
structure goes live on a website” (Informant 1, July 21, 2010).
Being familiar with basic HTML, a type of code or, more precisely, a
markup language often used to render websites, I knew the two programmers
were writing and debugging code that day. Alphanumeric lines, full of sym-
bols and incongruous capitalizations and spacing, were recognizable enough;
at the lab this lingua franca was everywhere apparent and their multiple
screens were full of lines of code. Indeed, there were “1,000 lines just for the
web page view itself [of the web-based survey platform]” (Informant 2, June
6, 2011)—that is, for the window on the screen to correctly size, place and
frame the visualized data. Yet when looking at what they were doing there
was little to see, per se. There was no large image on their screens to anchor
my visual attention—a link between the programming they were engrossed
in and what it was displaying, or, more accurately, the dispersed data the
code was locating, compiling and rendering in the visual register.
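What a fragment of such view code does, sizing, placing and framing the window in which data appear, can be suggested by a short hypothetical JavaScript sketch. The function name, the header height and the margins below are illustrative assumptions, not the platform’s actual code:

```javascript
// Hypothetical sketch: compute the pixel rectangle in which a web-based
// visualization panel should be drawn, given the browser window size and
// the chrome (header, margins) around it. Real view code multiplies this
// kind of layout arithmetic across hundreds of elements.
function framePanel(windowWidth, windowHeight, opts) {
  const { headerHeight = 60, margin = 10 } = opts || {};
  return {
    x: margin,
    y: headerHeight + margin,
    width: Math.max(0, windowWidth - 2 * margin),
    height: Math.max(0, windowHeight - headerHeight - 2 * margin),
  };
}

// Example: a 1024x768 browser window with default chrome.
const rect = framePanel(1024, 768);
console.log(rect); // { x: 10, y: 70, width: 1004, height: 688 }
```

Multiplied over every sized, placed and framed element of a page, such arithmetic hints at why a thousand lines can go into the web page view alone.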
The web-based survey and visualization platform was just one of several
visualizing platforms that the laboratory was working on. For other types
of completed visualizations rendering large amounts of data—for example,
a London transport model based upon publicly available data from the
Transport for London (TfL) authority—you might have much more coding:
“This is about 6,000 lines of code, for this visualization [of London traf-
fic]” (Informant 3, June 10, 2011) (Figure 1.1).
Over the course of roughly a year during which I regularly visited the
visualization laboratory in London, I witnessed the process of launching
many such web-based visualizations, some from the initial designing ses-
sions around the whiteboards to the critical launches. A core orientation of
the research lab was a desire to make the increasingly large amounts of data
in digital form accessible to scholars and the interested public. Much infor-
mation has become widely available from government authorities (such as
the TfL) through mandates to make collected digital data publicly available,
for instance, through the 2000 Freedom of Information Act in the UK, or
the 1996 Electronic Freedom of Information Act Amendments in the US.
Massive quantities are also being generated through our everyday engage-
ments with the Internet. Of course, the tracking of our clicks, “likes,”
visited pages, search keywords, browsing patterns, and even email con-
tent has been exploited by Internet marketing and service companies since
the commercialization of the Internet in the late 1990s (see Neuhaus and
Webmoor 2012 on research with social media). Yet embedded within an
academic institution, this laboratory was at the “bleeding edge” of harvest-
ing and visualizing such open databases and other traces left on the Inter-
net for scholarly purposes. Operating in a radically new arena for potential
research, there is a growing discussion in the social sciences and digital
humanities over how to adequately and ethically data mine our “digital
heritage” (Webmoor 2008; Bredl, Hünniger and Jensen 2012; Giglietto,
Rossi and Bennato 2012). Irrespective of what sources of data were being
rendered into the visual register by programmers and researchers at this
lab, I constantly found myself asking for a visual counterpart to their inces-
sant coding: “Can you show me what the code is doing?” I needed to see
what they were up to.
CODEWORK
Code, or specifically working with code as a software programmer, has
often been portrayed as a complicated and arcane activity. Broadly defined,
code is “[a]ny system of symbols and rules for expressing information or
instructions in a form usable by a computer or other machine for processing
or transmitting information” (OED 2013).

Figure 1.1 Lines of code in the programming language C++ (on right) rendering the
visualization (on left) of a London transport model (Informant 3, June 10, 2011).

Of course, like language, there are many forms of code: C++, JavaScript,
PHP, Python—to name a few more common ones discussed later.
confers on programmers a perceived image of possessing inscrutable and
potent abilities to get computers to comply.1
Part nerd, part hero, program-
mers and specifically hacker culture have been celebrated in cyberpunk
literature and cinema for being mysterious and libertarian.2
The gothic
sensibility informing such portrayals reinforces a darkened and distanced
view of working with code.
The academic study of code, particularly from the science and technol-
ogy studies (STS) perspective of exhibiting what innervates the quintessen-
tial “black boxes” that are our computing devices, has only recently been
pursued ethnographically (e.g., Coleman and Golub 2008; Demazière,
Horn and Zune 2007; Kelty 2008). Oftentimes, however, such studies scale
out from code, from a consideration of its performative role in generat-
ing computerized outputs, to discuss the identity and social practices of
code workers. More closely examining working with code has received
less attention (though see Brooker, Greiffenhagen and Sharrock 2011;
Rooksby, Martin and Rouncefield’s 2006 ethnomethodological study).
Sterne (2003) addresses the absent presence of code and software more
generally in academic work and suggests it is due to the analytic challenge
that code presents. The reasons for this relate to my own ethnographic
encounter. It is boring. It is also nonindexical of visual outputs (at least to
the untrained eye unfamiliar with “reading” code; see Rooksby, Martin
and Rouncefield 2006). In other words, code, like Thrift’s (2004) “tech-
nological unconscious,” tends to recede from immediate attention into
infrastructural systems sustaining and enabling topics and practices of
concern. Adrian Mackenzie, in his excellent study Cutting Code (2006,
2), describes how software is felt to be intangible and immaterial, and for
this reason it is often on the fringe of academic and commercial analyses
of digital media. He is surely right to bemoan not taking code seriously,
downplayed as it is in favor of supposed higher-order gestalt shifts in cul-
ture (“convergence”), political economy (“digital democracy” and “radical
sharing”) and globalization (“network society”). No doubt the “technical
practices of programming interlace with cultural practices” (ibid., 4), with
the shaping and reshaping of sociality, forms of collectivity and ideas of
selfhood; what Manovich (2001, 45) termed “trans-coding” (e.g., Ghosh
2005; Himanen, Torvalds and Castells 2002; Lessig 2004; Weber 2004; for
academic impacts see Bartscherer and Coover 2011). However, these larger
order processes have dominated analyses of the significance involving the
ubiquity of computer code.
Boring and analytically slippery, code is also highly ambiguous. The
Oxford Dictionary of Computing (1996) offers no less than 113 techni-
cal terms that use the word “code” in the domain of computer science and
information technology. So despite acknowledging the question “Why is it
hard to pin down what software is?” (2006, 19), Mackenzie, a sociologist
of science, admirably takes up the summons in his work. For Mackenzie,
code confounds normative concepts in the humanities and social sciences.
It simply does not sit still long enough to be easily assigned to conventional
explanatory categories, to be labeled as object or practice, representation
or signified, agent or effect, process or event. He calls this “the shifting
status of code” (ibid., 18). Mackenzie’s useful approach is to stitch together
code with agency. Based upon Alfred Gell’s (1998) innovative anthropo-
logical analysis of art and agency, Mackenzie (2006, 2005) pursues an
understanding of software and code in terms of its performative capacity.
“Code itself is structured as a distribution of agency” (2006, 19). To string
together what he sees as distributed events involving code’s agency, he takes
up another anthropologist’s methodological injunction to pursue “multi-
sited ethnography” (Marcus 1995). In terms of how code is made to travel,
distributed globally across information and communication technologies
(ICTs) and networked servers as a mutable mobile (cf. Latour 1986), this
approach permits Mackenzie to follow (the action of) code and offer one of
the first non-technical considerations of its importance in the blood flow of
contemporary science, commerce and society.
Given its mobility, mutability, its slippery states, code can usefully be
studied through such network approaches. Yet I am sympathetic with recent
moves within ethnography to reassess the importance of locality and resist
the tendency (post-globalization) to scale out (see Candea 2009; Falzon
2009; in STS see Lynch 1993). While there is of course a vast infrastructural
network that supports the work code performs in terms of the “final” visu-
alizations, which will be discussed with respect to “middle-ware”, most of
the work involving code happens in definite local settings—in this case, in a
mid-sized visualization and research laboratory in central London.
Describing how code works and what it does for the “hackers” of com-
puterized visualizations will help ground the larger order studies of cul-
tural impacts of computerization, as well as complement the more detailed
research into the effects of computerization on scientific practices. I am,
therefore, going to pass over the much studied effects of software in the
workplace (e.g., Flowers 1996; Hughes and Cotterell 2002) and focus upon
when technology is the work (Grint and Woolgar 1997; Hine 2006). Stay-
ing close to code entails unpacking what occurs at the multiple screens
on programmers’ computers. Like a summer holiday spent at home, it is
mundane and a little boring to “stay local,” but like the launch of the new
web-based open survey visualizer that tense day, there are all the same
quite complex operations taking place with code.
With the computerization of data and visualizations, the work with code
weaves together many formerly distinct roles. This workflow wraps together
the practices of: sourcing data to be visualized; programming to trans-
form and render data visually; visualizing as a supposed final stage. I term
these activities “codework.” Merging often sequential stages involved with
the generation of visual outputs, I highlight how proliferating web-based
visualizations challenge analytic models oriented by paper-based media.
Computerized visualizations, such as those in this case study, are open-
ended. They require constant care in the form of coding in order to be sus-
tained on the Internet. Moreover, they are open in terms of their continuing
influence in a feedback cycle that plays into both the sourcing of data and
the programming involved to render the data visually.
Codework, as an emergent and distinct form of practice in scientific
research involving visualization, also blends several sets of binary catego-
ries often deployed in visual studies: private/public, visible/invisible, mate-
rial/immaterial. While these are of interest, I focus upon the manner in
which code confounds the binary of creativity/containment and discuss the
implications for the political economy of similar visualization labs and the
accountability of codework. For clarity, I partially parse these categories
and activities in what follows.
SOURCING/PROGRAMMING/VISUALIZING
The July 6, 2010 edition of The Guardian, a very popular UK newspaper,
featured a map of London on the second page compiled from “tweets,”
or posts to the microblogging service Twitter. It resembled a topographic
map in that it visually depicted the density of tweeting activity around the
city by using classic hypsometric shading from landscape representations.
Given that there was a mountain over Soho, it was apparent to anyone
familiar with the city that the data being visualized were not topographi-
cal. The caption read, “London’s Twitterscape: Mapping the City Tweet by
Tweet.”3
It was good publicity for the lab. The visualizations were a spin-off
or side project of one of the PhD researchers. As he stated, it was an experi-
ment “to get at the social physics of large cities through Twitter activity”
(Informant 4, July 7, 2010). It was one of my earliest visits to the lab at a
time when research deploying emergent social media such as Facebook,
Flickr, Foursquare and Twitter, or other online sources such as Wikipedia
editing activity, was in its infancy (see Viégas and Wattenberg 2004 as an
early example). Indeed, many of these now popular online services did not
exist before 2006.
Sourcing the data to visualize from these online platforms is not particu-
larly difficult. It does, however, take an understanding of how the data are
encoded, how they might be “mined” and made portable with appropriate
programming, and whether the information will be amenable to visual-
izing. These programming skills are driven by a creative acumen; knowing
where to look online for information relevant to research and/or commer-
cial interests, and whether it might provide interesting and useful, or at
least aesthetic, visualizations. Creative sourcing and programming are nec-
essary crafts of codework.
Many online services, such as Twitter, provide data through an appli-
cation programming interface (API). Doing so allows third-party devel-
opers to provide “bolt-on” applications, and this extensibility benefits
the service provider through increased usage. “There’s an app for that!”:
It is much like developing “apps” for Apple iTunes or Google Android.
Importantly, though, sourcing these APIs or “scraping” web pages for
data to visualize does require programming, and both the format of the
data and the programming language(s) involved heavily determine the
“final” visualizations themselves.
In the case of Twitter, the company streams “live” a whole set of infor-
mation bundled with every tweet. Most of this information, such as Internet
protocol (IP) location, browser or service used to tweet, link to user profile
and sometimes latitude and longitude coordinates (with a 5–15 m accu-
racy), is not apparent to users of the service.4
The researchers at the London
lab applied to Twitter to download this open data from their development
site.5
The format is key for what types of visualization will be possible,
or at least how much translation of the data by code will be required. For
instance, as discussed later, many open data sources are already encoded
in a spatial format like Keyhole Markup Language (KML) for display in
industry standard analytic software and mapping platforms (for Google
Earth or in ESRI’s geographic information system [GIS] programs such as
ArcGIS). Twitter, like most social media companies, government institu-
tions and scientific organizations, formats its data as comma-separated val-
ues (CSV). For simplicity and portability across programming languages,
this format for organizing data has become the de facto standard
for open datasets. Information is arranged in tabular format much like an
Excel or Numbers spreadsheet, and values are separated by either a comma
or a tab (tab-separated values [TSV] format is a variant of CSV).
The London lab logged a week’s worth of tweets for various metropolises.
This amounted to raw data tables containing about 150,000 tweets and
over 1.5 million discrete data points for each city. Such massive datasets
could be visualized based upon various criteria—for instance, semanti-
cally for word frequency or patterning. Importantly, being in CSV format
means that the Twitter data are highly mutable by a wide range of pro-
gramming languages, and therefore there are a number of possible paths
to visualization.
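The portability of CSV across programming languages can be made concrete with a minimal sketch using Python's standard csv module; the column layout and sample tweets here are invented for illustration, not drawn from the lab's datasets.

```python
import csv
import io

# A hypothetical excerpt of a logged tweet table in CSV format:
# one row per tweet, with timestamp, coordinates and text.
raw = io.StringIO(
    "timestamp,lat,lon,text\n"
    "2010-07-01T08:15:00,51.5136,-0.1365,Morning coffee in Soho\n"
    "2010-07-01T08:17:42,51.5074,-0.1278,Stuck at Trafalgar Square\n"
)

# DictReader turns each row into a plain dictionary, mutable by
# whatever language-specific processing follows.
rows = list(csv.DictReader(raw))

for row in rows:
    print(row["timestamp"], float(row["lat"]), float(row["lon"]))
```

Because the parsed rows become ordinary data structures, the same table can be handed on to any of the possible paths to visualization.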
The researcher was interested in tying such Twitter “landscapes” into his
larger PhD research involving travel patterns in the city of London. Spatial
and temporal aspects were therefore most important, and a program was
written to mine the data for spatial coordinates within a certain radius of
the urban centers, as well as for time stamps. When and where a Twitter
user sent a tweet could then be plotted. Once aggregated, the visualizations
indicated patterns of use in terms of diurnal and weekly activity. They also
suggested varying usage around the cities of concern.
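A program of the sort just described, mining tweet records for spatial coordinates within a radius of an urban center and for time stamps, might be sketched as follows; the haversine test, the sample records and the chosen center are my illustration, not the researcher's actual code.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius in km
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical tweet records: (timestamp, latitude, longitude).
tweets = [
    ("2010-07-01T08:15:00", 51.5136, -0.1365),  # central London
    ("2010-07-01T09:02:10", 51.7520, -1.2577),  # Oxford, well outside
]

CENTRE = (51.5074, -0.1278)  # a stand-in urban centre
RADIUS_KM = 20.0

# Keep only tweets inside the radius, with parsed time stamps,
# ready to be plotted by when and where they were sent.
kept = [
    (datetime.fromisoformat(ts), lat, lon)
    for ts, lat, lon in tweets
    if haversine_km(lat, lon, *CENTRE) <= RADIUS_KM
]
```

Aggregating such filtered records by hour or weekday would yield the diurnal and weekly patterns of activity the visualizations indicated.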
Sourcing data packaged as CSV files was common for the laboratory.
Indeed, in step with the growing online, open data repositories, where much
data are user-generated or at least user-contributed, the lab was developing a
visualizing middle-ware program to display spatial data based upon Open-
StreetMap and OpenLayers.6
“A place to share maps and compare data
visually,” as their website states, is the goal. Unlike either large commercial
companies that typically fund and manage server farms where these large
datasets are uploaded, such as Google’s Spreadsheets or Yahoo!’s Pipes, or
commercially funded research groups like IBM’s ManyEyes,7
this lab was
providing a visualizing tool, or “visualizer,” that would fetch data which
was located remotely. Otherwise, despite the recent purchase of three new
Dell stacked servers located in a closet in the hallway, there simply was not
enough storage space for the modest lab to host the huge datasets. Described
as “a service for researchers,” the web-based platform would, for example,
“serve up on an ad hoc basis a visualization of the dataset whenever a query
was made by a user” (Informant 1, November 10, 2010). As he went on to
unpack the operation, when a user selected a set of statistics (e.g., crime
statistics) to display over a selected region (e.g., the UK), there were a host
of operations that had to transpire rapidly, and all hinged upon the web-
based platform and the coding the researcher had written and “debugged”
(and rewritten). These operations might be thought of as putting together
layers consisting of different information and different formats in order to
build a complexly laminar visualization. As the informant described, the
programming allows the “fusing of CSV data [like census data] with geo-
spatial data [coordinates] on a tile by tile basis [portions of the map viewed
on the screen] . . . this involves three servers and six websites” (Informant
1, November 10, 2010). Data, plug-ins and the program code itself were
variously dispersed and are pulled together to form the visualization on a
user’s screen—hence the “middle” or go-between function of the software
package developed by the lab and later released as GMapCreator. For pro-
gramming web-based visualizations, code is coordination work.
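The "fusing" the informant describes can be caricatured as a join: statistics keyed by region are matched to geospatial coordinates before anything is drawn. A minimal sketch, with invented region codes and figures:

```python
import csv
import io

# Hypothetical census-style CSV, keyed by region code.
stats_csv = io.StringIO(
    "region,crime_rate\n"
    "E09000001,4.2\n"
    "E09000007,6.8\n"
)
stats = {row["region"]: float(row["crime_rate"])
         for row in csv.DictReader(stats_csv)}

# Hypothetical geospatial lookup: region code -> centroid (lat, lon).
centroids = {
    "E09000001": (51.5155, -0.0922),  # City of London
    "E09000007": (51.5466, -0.1589),  # Camden
}

# The "fusing": one record per region, combining tabular and spatial
# data, of the kind a map renderer could then draw tile by tile.
fused = [
    {"region": code, "lat": lat, "lon": lon, "crime_rate": stats[code]}
    for code, (lat, lon) in centroids.items()
    if code in stats
]
```

In the lab's platform this joining happened on demand across several servers and websites; the sketch collapses that coordination work into a single comprehension.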
In addition to the growing online datasets stored primarily as CSV files,
many government agencies now provide their stored data in accessible
digital repositories. The UK’s data.london.gov.uk was another popular
archive where the visualization lab and other researchers were obtaining
UK census data. This was variously fetched and displayed as a dynamic,
interactive map through the lab’s web-based platform. Twenty-seven
months after the initial launch of the visualizing platform, 885 datas-
ets had been “uploaded” to the map visualizer (or more precisely shared
through linking to a remote server where the datasets were physically
stored) and there were “about 10,000 users for the program” (Informant
1, November 10, 2010).
Other data was being sourced by the lab through scraping. While
the “ready-made” data stream from Twitter’s API or other online data
repository requires some initial coding to obtain the data—for instance,
writing a program to log data for a specified period of time as in the case
of the Twitterscapes—web-scraping typically requires more involved code-
work. Frequently, the data scraped must go through a series of steps all
defined by code in order to be useful; and in many cases converted into a
standard data format such as CSV.
Let’s consider another example from the visualization lab. One of the
programmers was interested in getting at “the social demography of,
and city dynamics relating to, bike share usage” (Informant 5, Octo-
ber 19, 2010). Bike share or bike rentals programs had recently become
quite popular in international cities. London’s own bike sharing scheme,
known as the Barclay’s Cycle Hire after its principal commercial con-
tributor, was launched on June 30, 2010, and represents a quite large
and well-utilized example. Visualizing and making publicly accessible
the status of the bike share schemes piggybacks off of the data that the
corporate managers of the various schemes collect for operational pur-
poses. The number of bikes at each docking station (London has over
560 stations) is updated electronically every three minutes. The stages
involved begin with the programmer using the “view source” feature in
the web browser Firefox to de-visualize the “front end” of the commer-
cial websites in order to assess what types of information are encoded.
The “back end” or source code of the scheme’s website showed several
types of information that could be rendered spatially. Specifically, time
stamps, dock location and number of bicycles were data he thought could
be harvested from these websites. With visualizations in mind, he was
confident “that they would be something cool to put out there” (Infor-
mant 5, September 23, 2010). He wrote code in Python to parse the
information scraped to narrow in on geographic coordinates and num-
ber of bicycles. Using a MySQL database to store the information, the
Python program pulled the selected data into CSV format by removing
extraneous information (such as HTML markup). A cron job was
written to schedule how frequently the web scraping took place. The
programmer then aggregated the information scraped for each individual
bike station to scale up to the entire system or city. To visualize the data,
he used JavaScript to make it compatible with many web-based map dis-
plays, such as Google Maps. In this case, he used OpenStreetMap with
(near) real time information displayed for ninety-nine cities worldwide
(Figure 1.2). Finally, he used Google’s Visualization API8
to generate and
embed the charts and graphs. Several months after the world bike shar-
ing schemes visualizer went live, the programmer related how he was
contacted by the corporate managers of four different international cities
asking him “to take down the visualizations . . . likely because it made
apparent which schemes were being underutilized and poorly managed”
(Informant 7, October 27, 2010).
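The pipeline just described (scraping the page source, parsing out station data, normalizing to CSV) can be sketched with the Python standard library alone. The markup fragment and field names below are invented; the programmer's actual Python, MySQL and cron setup is not reproduced here.

```python
import csv
import io
import re

# Invented fragment of a bike-share page's source, of the sort the
# "view source" feature exposes: station id, coordinates, bike count.
page = """
<div class="station" id="1" data-lat="51.529" data-lon="-0.109">bikes: 12</div>
<div class="station" id="2" data-lat="51.506" data-lon="-0.142">bikes: 3</div>
"""

# Parse the markup for the fields of interest (a real scraper would
# use a proper HTML parser; a regular expression suffices here).
pattern = re.compile(
    r'id="(\d+)" data-lat="([-\d.]+)" data-lon="([-\d.]+)">bikes: (\d+)'
)

# Normalize the scraped values into CSV, stripping extraneous markup.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["station", "lat", "lon", "bikes"])
for station, lat, lon, bikes in pattern.findall(page):
    writer.writerow([station, lat, lon, bikes])

csv_text = out.getvalue()
```

Scheduled every few minutes and accumulated in a database, snapshots like this are what the aggregation and JavaScript display stages then work from.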
SOURCING/PROGRAMMING/VISUALIZING
Visualizations crafted by the lab were clearly having a form of public “impact”
and response. While not necessarily the type of impact credited within the
political economy of academia, the lab was all the same successful in attract-
ing more funding based upon a reputation of expanding public engagement
with its web presence. As the director remarked, “We are flush with grant
money right now . . . we are chasing the impact” (October 19, 2010).
Not dissimilarly to other studies of scientific visualizations (e.g., Beau-
lieu 2001; Lynch and Edgerton 1988; Lynch and Ridder-Vignone this
volume), there was a need to pursue public outreach achieved through
accessible visualizations. At the same time, the researchers and program-
mers themselves de-emphasized such “final” visualizations. Of course, I
overheard several informants speaking approvingly of the visual styles
used by colleagues in the lab. “He has good visualizations” (Informant
7, October 27, 2010) was a typical, if infrequent, response to my queries
about their fellow researchers. More often, though, were the comments:
Figure 1.2 Cached MySQL database consisting of number of bicycles and time
stamps “scraped” from websites (on left) with the near real time visualization of the
data in JavaScript (on right; in this case London’s bicycle share scheme). Bubbles
indicate location of bicycle docks, with size proportional to the number of bicycles
currently available. Colored lines show movement of bicycles between dock stations
(Informant 5, October 19, 2010).
“He has clean code” (Informant 5, October 27, 2010); or “the style of
programming is very important . . . everything around the actual algo-
rithm, even the commenting [leaving lines within the code prefaced by ‘//’
so as not to be readable/executable by the computer] is very important”
(Informant 2, June 10, 2011). More than visualizations or data, the pro-
gramming skill that made both possible was identified as the guarantor of
reliability in web-based visualizations.
“Code is king” assertions contrast with previous studies where research-
ers engaging visualizations stated greater confidence in, or preference for,
the supposed raw data visualized. In this laboratory, the programmers and
researchers tended to hold a conflicted opinion of data. Ingenuity in find-
ing sources of data to visualize was admired, such as with the bike share
scheme or the Twitter maps. Equally, the acts of researchers who leveraged
open data initiatives to amass new repositories, such as making requests
through the Freedom of Information Act to the Transport for London (TfL)
to release transportation statistics, were approvingly mentioned (e.g., Infor-
mant 5, November 10, 2010). Yet once sourced, the data tended to recede
into the background or bedrock as a substrate to be worked upon through
programming skill.
Everyone has their preferred way of programming, and preferred style
of programming . . . The data is much more clearly defined. The struc-
ture of the data and how you interface, or interact, with it. Whereas
programming is so much more complex. It’s not so easy . . . maybe
sounds like it’s isolated. But it’s really hundreds of lines of code. (Infor-
mant 3, June 10, 2011)
Like this informant, many felt that the data were fairly “static and
closed,” and for this reason were less problematic and, consequently, less
interesting to work with. One programmer explained, with reference to
the prevalence of digital data in CSV format, that you need to “use a
base-set that was ubiquitous . . . because web-based visualizations were
changing constantly . . . with new versions being released every week or
so” (Informant 1, June 13, 2011). Such a perception of data as a relatively
stable “base” was often declared in contrast to the code to “mash up” the
data and render it readable by the many changing platforms and plug-
ins. As a go-between, code has to be dynamic to ensure the data remains
compatible with the perpetually changing platforms for visualizing. Pro-
grammers often bemoaned how much time they spent updating their code
to keep their visualizations and platforms live on the Internet. Informant
1 (November 10, 2010) explained how he was “developing a point-click
interface to make it easier to use the site [the mapping visualizer that
the lab hosted], but it requires much more KML [a standard geospatial
markup language] to make it compatible with Google Maps and Open-
StreetMap [which the site used to display the maps].” When Google or
another API provider updated their software, the programmers often had
to update the code for their visualizations accordingly.
Perhaps unsurprisingly, activities that required skill were more highly
regarded. These preferences fed into the treatment of data vis-à-vis code
in terms of lab management. There was much informal sharing of datasets
that had already been sourced, sharing links to where relevant data might
be found, or otherwise discussing interesting and pertinent information for
one another’s research projects. To make sharing and collaborating with
data easier and more accountable, the lab was beginning to set up a data
repository using Trac, an open source project management program, on a
central server. When asked about a code repository, the informant identi-
fied the need, but admitted that a “data repository was much less of a prob-
lem” (Informant 5, October 13, 2010). Instead, despite “revision control
being essential to programming . . . and taught in the 2nd year of software
engineering,” it was largely left to the individual programmer to create a
“code repository so that [you] can back up to where [the software] was
working . . . like a safety net” (Informant 2, March 16, 2011).
Updating the felicitous phrase describing Victorian sex, code was every-
where admired, spoken about and inherent to the survival of the lab, but
never shared. This incongruous observation prompted a conversation later
in the fieldwork:
TW: You don’t share code, you share data?
Informant: Mostly data, not so much code. The thing with data, once you
load it and you have your algorithms, you can start to transform
the data in different ways. You can also (pause) . . . quite often
you download data in a certain format, and you load it, and your
programming transforms it in different ways. Then it becomes
useful . . . With C++ there is no boundary to what you can do.
TW: So you can do whatever with the data. So you do that by writing
new code? Do you write a bit of code in C++?
Informant: I do it all the time. It’s how I spend my days. (Informant 3, June
10, 2011)
Programming was held in esteem for two principal reasons. First, as a
creative skill it was seen as proprietary (see Graham 2004 on the paral-
lels between artists and programmers). It was not something that could
be straightforwardly taught, but as a craft it was learned on the job; and
programmers strove to improve this skill in order to write the cleanest,
most minimalist code. This fostered a definite peer awareness and review
of code. These interpersonal dynamics fed into the larger management and
economy of the lab already mentioned. This lab was chasing the impact.
To do so they were creating “fast visualizations.” In addition, as discussed
with respect to the mapping visualizer, web-based visualizations rely upon
middle-ware or go-between programs gathering distributed data and
rendering it on an ad hoc basis on the computer screen. Given the larger
infrastructural medium, the lab’s visualizations needed constant coding
and recoding in order to maintain operability with the rapidly changing
software platforms and APIs that they were tied to. Each researcher who
began a project had to quickly and on an ad hoc basis develop code that
could render source data visually. Intended for rapid results but not neces-
sarily long-term sustainability, the only reasonable way to manage such
projects was to minimize confusion and the potential for working at cross-
purposes by leaving the coding to individuals. At a smaller research lab
where programming happens, this was feasible. At larger, corporate labora-
tories, the many tasks of codework are often broken up in a Fordist manner
among several specialists: software engineers, network engineers and graphic
designers, all of whom may be involved in creating web-based visualizing
platforms. Bolted together by teams, such coding is more standardized,
minimalist and therefore more compatible and fungible.
In contrast, the London lab found itself in a double-bind of sorts. They
clearly knew they needed some form of accountability of the code produced
by the lab: “Revision control is essential for the lab” (Informant 2, March
2, 2011). At the same time, the success of the lab in garnering financial sup-
port, the “soft funding” it was entirely dependent upon, was largely due “to
the creative mix of code and tools” and the “hands-off management style”
(Director, March 2, 2011). Individual researchers were therefore largely left
to practice their own codework.
Secondly, in addition to being a highly creative and skilled craft, writing
code, and specifically the code itself, was seen as dynamic and potent in
relation to data. Mackenzie discusses the agency of code in general terms
with respect to its ability to have effects at a distance (2005, 2006)—for
instance, forging a shared sense of identity among the internationally based
software engineers who develop the open source software Unix. For this
lab, the software they were writing had agency in the very definite sense
of transforming data. You want to “build an algorithm so you have good
performance” (Informant 6, October 13, 2010). Unlike code more gener-
ally, which includes human-readable comments and other instructions, an
algorithm is a subset or specific type of code that is expressly written for a
computer to perform a specified function in a defined manner. The defin-
ing function of many algorithms in computer programming is the ability
to manipulate data. For instance, as the informant describes below, a basic
operation of algorithms is precisely the transformation of data from one
format (e.g., the markup language KML) to a different format (e.g., C++).
An algorithm performs a well-defined task. Like sorting a series of
numbers, for example. Like a starting point [for the] input of data and
output of data in different formats. You can do everything with source
code. You can write algorithms, but you can do all other kinds of stuff
as well. (Informant 3, June 10, 2011)
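An algorithm in this sense, a well-defined transformation from input to output of data in different formats, can be as small as the following converter, which pulls coordinates out of a KML-style fragment into plain tuples; the fragment is invented for illustration.

```python
import xml.etree.ElementTree as ET

# Invented KML-style fragment: each Point stores "lon,lat" text.
kml = """<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark><Point><coordinates>-0.1278,51.5074</coordinates></Point></Placemark>
  <Placemark><Point><coordinates>-0.0922,51.5155</coordinates></Point></Placemark>
</kml>"""

NS = {"k": "http://www.opengis.net/kml/2.2"}

def kml_points(text):
    """Well-defined task: KML in, list of (lat, lon) floats out."""
    root = ET.fromstring(text)
    points = []
    for node in root.findall(".//k:coordinates", NS):
        lon, lat = node.text.strip().split(",")
        points.append((float(lat), float(lon)))
    return points

points = kml_points(kml)
```

A sorting routine or a format converter of this kind performs exactly the sort of bounded, specified task the informant distinguishes from source code at large.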
Code as transitive revealed the conflicting view of data as being less than
stable. In fact, during many conversations, code’s potency was estimated in
fairly direct relation to how malleable it rendered data.
TW: You transform these two sources of data that you have into XML
files, so that you can look at them in ArcGIS?
Informant: From ArcGIS you understand how the data, how these things
are supposed to be connected. So then you can connect the net-
work in the way it’s supposed to be connected, in your connec-
tion structure that you have defined in your code, in C++.
TW: Does it have to be in C++, or is that a standard for running simu-
lations? . . . So you have to transform this data in C++, let me
get at this, for a couple of reasons: One, you are familiar with
it; two, it’s going to be much faster when you make queries [as
opposed to ArcGIS]. Anything else?
Informant: Well, I’m familiar with the code. (Informant 6, October 13,
2010)
Code’s purpose was to translate and to work with data. Yet different types
of code worked on data differently. Part of this was due to personal pref-
erence and background. What certain programmers could do with code
depended upon their familiarity with it. For this reason, depending upon the
programmer, certain code was asserted to be “top end” or “higher order.”
This implies that the coding language is more generalized and so could be
written to perform many types of tasks. It also means, however, that the code
must be programmed much more extensively. Whatever code was preferred,
the acknowledgment that it transformed and manipulated data rarely led to
discussion of a potential corollary: that data were somehow “constructed”
or may become flawed. Part of this has to do with the data asserting a mea-
sure of independence from code. More specifically, the format of the data
partially determines the type of visualization pursued and so constrains to a
certain degree the coding deployed. More emphasis was, however, given to
code’s neutral operation upon data. It was held to merely translate the data’s
format for “readability” across the networked computers and programs in
order to render the visualization. Put another way, code transforms metadata,
not data. The view that code transformed without corruption became most
apparent when researchers discussed the feedback role of visualization in
examining and correcting the original datasets.
SOURCING/PROGRAMMING/VISUALIZING
Most visualizations mashed up by the lab were not finalized outputs. Just
as the format of the sourced data influenced the programming required
and the choices for visualization, the visualizations themselves recursively
Algorithmic Alchemy 33
looped back into this process of codework. Several informants flipped the
usual expectation that visualizations were end products in the information
chain by discussing their integral role at the beginning of the process.
It’s a necessary first step to try and visualize certain facets of the infor-
mation. Because it does give you a really quick way of orienting your
research. You can see certain patterns straightaway . . . you literally do
need to see the big picture sometimes. It informed my research com-
pletely . . . There are some themes in your research. But you still, (pause)
the visualization informs your trajectory . . . Is part of the trajectory
of your research . . . You draw on that . . . because you see something.
(Informant 8, October 27, 2010)
For this informant, deploying “sample” visualizations allowed him to
identify patterns or other salient details in an otherwise enormous corpus
of data. He was researching with Foursquare, a social media service that
personalizes information based upon location. As he stated, he was “min-
ing their API to harvest 300,000,000 records” (Informant 8, October 27,
2010). Awash in data, he needed to reduce the complexity in order to iden-
tify and anchor research problems. Much like the lab’s resident anthropolo-
gist, he needed a visual counterpart to what was otherwise undifferentiated
and unfamiliar clutter on the computer screen.
More than suggesting what to do with mined data, another researcher
noted with pride that the working visualizations actually identified flaws
in the data. He was coding with open data from Transport for London
and the UK Ordnance Survey. Much of this entails merging traffic flow
volume (TfL data) with geospatial coordinates (Ordnance data) to create
large visuals consisting of lines (roadways, paths) and nodes (intersections).
Responding to a question about accuracy in the original data and worries
about creating errors through transforming the CSV files into C++, he dis-
cussed the “bridge problem.” This was an instance where the visualization
he began to run on the small scale actually pinpointed an error in the data.
Running simulations of a small set of nodes and lines, he visually noticed
traffic moving across where no roadway had been labeled. After inspecting
a topographic map, he concluded that the Ordnance Survey had not plotted
an overpass where it should have been.
TW: When you say you know exactly what is going on with the algo-
rithm, does that mean you can visualize each particular node
that is involved in this network?
Informant: Yes, once you program you can debug it where you can stop
where the algorithm is running, you can stop it at anytime and
see what kind of data it is reading and processing . . . you can
monitor it, how it behaves, what it is doing. And once you check
that and you are happy with that, you go on to a larger scale
34 Timothy Webmoor
. . . once you see what it is giving you. (Informant 6, October
13, 2010)
Running visualizations allowed the researcher to feel confident in the reli-
ability of his data as he aggregated (eventually) to the scale of the UK.
Whether visualizations were used by the lab researchers at the beginning
of the process of codework to orient their investigations, or throughout the
process to periodically corroborate the data, visualizations as final outputs
were sometimes expected. Where this was the case, especially with respect
to paper-based visualizations, many at the lab were resigned to their necessity
but skeptical of the quality. Several felt paper was an inappropriate medium
for visualizations developed to be web-based and dynamic. Yet they openly
acknowledged the need for such publication within the confines of aca-
demia’s political economy.
Online publishing has to be critical. For me, my research is mainly
online. And that’s a real problem as a PhD student. Getting published
is important. And the more you publish online independently the less
scope you have to publish papers . . . still my motivation is to publish
online, particularly when it’s dynamic . . . You might have a very, very
interesting dynamic visualization that reveals a lot and its impact in
the academic community might be limited . . . the main constraint is
that these are printed quite small, that’s why I have to kind of tweak
the visuals. So they make sense when they are reproduced so small.
Because I look at them at twenty inches, on a huge screen. That’s the
main difference, really. There’s a lot of fine-grained stuff in there.
(Informant 8, October 27, 2010)
Set within academic strictures of both promotion and funding, the lab’s
researchers found themselves needing to generate fast visualizations, while
at the same time “freezing” them, or translating and reducing them, to fit
traditional print.
CONCLUDING DISCUSSION: CODEWORK AND
RIGHT WRITING IN SCIENTIFIC PRODUCTION
Given the way codework weaves together the many activities happening at
the visualization lab, the demands of academic publication to assign defi-
nite credit became an arena for contestation. This is because programming,
as a mode of writing and a skill integral to all of these activities, abrades
against a tradition of hierarchical assignation of authorship going back to
pre-Modern science (Shapin 1989). This tradition would restrict the role of
software programming along the lines of Shapin’s “invisible technicians.”
Writing code is not the right type of writing.
Exploring the Variety of Random
Documents with Different Content
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it,
give it away or re-use it under the terms of the Project
Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country
where you are located before using this eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is
derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of
the copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is
posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute
this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must,
at no additional cost, fee or expense to the user, provide a copy,
a means of exporting a copy, or a means of obtaining a copy
upon request, of the work in its original “Plain Vanilla ASCII” or
other form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or
providing access to or distributing Project Gutenberg™
electronic works provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project
Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite these
efforts, Project Gutenberg™ electronic works, and the medium
on which they may be stored, may contain “Defects,” such as,
but not limited to, incomplete, inaccurate or corrupt data,
transcription errors, a copyright or other intellectual property
infringement, a defective or damaged disk or other medium, a
computer virus, or computer codes that damage or cannot be
read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except
for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU AGREE
THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT
EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE
THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.
1.F.4. Except for the limited right of replacement or refund set
forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you
do or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status
by the Internal Revenue Service. The Foundation’s EIN or
federal tax identification number is 64-6221541. Contributions
to the Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.
The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or determine
the status of compliance for any particular state visit
www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.
Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.
Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.
Welcome to Our Bookstore - The Ultimate Destination for Book Lovers
Are you passionate about books and eager to explore new worlds of
knowledge? At our website, we offer a vast collection of books that
cater to every interest and age group. From classic literature to
specialized publications, self-help books, and children’s stories, we
have it all! Each book is a gateway to new adventures, helping you
expand your knowledge and nourish your soul
Experience Convenient and Enjoyable Book Shopping Our website is more
than just an online bookstore—it’s a bridge connecting readers to the
timeless values of culture and wisdom. With a sleek and user-friendly
interface and a smart search system, you can find your favorite books
quickly and easily. Enjoy special promotions, fast home delivery, and
a seamless shopping experience that saves you time and enhances your
love for reading.
Let us accompany you on the journey of exploring knowledge and
personal growth!
ebookgate.com

More Related Content

PDF
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
PDF
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
PDF
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
PDF
Developers Dilemma The Secret World Of Videogame Creators Casey Odonnell
PDF
Instrumental Community Probe Microscopy And The Path To Nanotechnology Cyrus ...
PDF
Media technologies essays on communication materiality and society 1st Editio...
PDF
Eden Medina Ed Ivan Da Costa Marques Ed Christina Holmes Ed Edited By Eden Me...
PDF
Media Technologies 1st Edition Tarleton Gillespie
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
Visualization in the Age of Computerization 1st Edition Annamaria Carusi
Developers Dilemma The Secret World Of Videogame Creators Casey Odonnell
Instrumental Community Probe Microscopy And The Path To Nanotechnology Cyrus ...
Media technologies essays on communication materiality and society 1st Editio...
Eden Medina Ed Ivan Da Costa Marques Ed Christina Holmes Ed Edited By Eden Me...
Media Technologies 1st Edition Tarleton Gillespie

Similar to Visualization in the Age of Computerization 1st Edition Annamaria Carusi (20)

PDF
Technology and Globalisation 2nd Edition David Pretel
PDF
Culture Technology And The Image Techniques Of Engaging With Visual Culture 1...
PDF
Media technologies essays on communication materiality and society 1st Editio...
PDF
Seamless Computing Notes
PDF
Between Reason And Experience Essays In Technology And Modernity New Andrew F...
PDF
Between Reason And Experience Essays In Technology And Modernity New Andrew F...
PDF
Media Technologies Essays On Communication Materiality And Society Gillespie
PDF
Digitizing Identities Doing Identity In A Networked World Irma Van Der Ploeg
PPT
The Future Of The Image Part 2
PDF
Producing Power The Pre Chernobyl History of the Soviet Nuclear Industry Sonj...
PDF
Media Ecologies David Gee Reader In Digital Media Matthew Fuller
PPTX
Digital sustainability 2019
PDF
Contingency and Plasticity in Everyday Technologies Media Philosophy 4th Edi...
PDF
Between Reason and Experience Essays in Technology and Modernity 1st Edition ...
PDF
Between Reason and Experience Essays in Technology and Modernity 1st Edition ...
PDF
Producing Power The Pre Chernobyl History of the Soviet Nuclear Industry Sonj...
PDF
World Economic Forum Tipping Points Report
DOCX
Topic Impact of Bring Your Device (BYOD) at workplace Analys.docx
PDF
The Digital Innovation Race Conceptualizing the Emerging New World Order 1st ...
PDF
You Want the Future? You Can't Handle The Future
Technology and Globalisation 2nd Edition David Pretel
Culture Technology And The Image Techniques Of Engaging With Visual Culture 1...
Media technologies essays on communication materiality and society 1st Editio...
Seamless Computing Notes
Between Reason And Experience Essays In Technology And Modernity New Andrew F...
Between Reason And Experience Essays In Technology And Modernity New Andrew F...
Media Technologies Essays On Communication Materiality And Society Gillespie
Digitizing Identities Doing Identity In A Networked World Irma Van Der Ploeg
The Future Of The Image Part 2
Producing Power The Pre Chernobyl History of the Soviet Nuclear Industry Sonj...
Media Ecologies David Gee Reader In Digital Media Matthew Fuller
Digital sustainability 2019
Contingency and Plasticity in Everyday Technologies Media Philosophy 4th Edi...
Between Reason and Experience Essays in Technology and Modernity 1st Edition ...
Between Reason and Experience Essays in Technology and Modernity 1st Edition ...
Producing Power The Pre Chernobyl History of the Soviet Nuclear Industry Sonj...
World Economic Forum Tipping Points Report
Topic Impact of Bring Your Device (BYOD) at workplace Analys.docx
The Digital Innovation Race Conceptualizing the Emerging New World Order 1st ...
You Want the Future? You Can't Handle The Future
Ad

Recently uploaded (20)

PDF
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
PDF
Practical Manual AGRO-233 Principles and Practices of Natural Farming
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PDF
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
PDF
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
PDF
AI-driven educational solutions for real-life interventions in the Philippine...
PPTX
Virtual and Augmented Reality in Current Scenario
PDF
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
PDF
My India Quiz Book_20210205121199924.pdf
PDF
FORM 1 BIOLOGY MIND MAPS and their schemes
DOC
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
PDF
David L Page_DCI Research Study Journey_how Methodology can inform one's prac...
PPTX
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
PDF
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PPTX
TNA_Presentation-1-Final(SAVE)) (1).pptx
PDF
Computing-Curriculum for Schools in Ghana
PPTX
Introduction to pro and eukaryotes and differences.pptx
PDF
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
Practical Manual AGRO-233 Principles and Practices of Natural Farming
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
AI-driven educational solutions for real-life interventions in the Philippine...
Virtual and Augmented Reality in Current Scenario
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
My India Quiz Book_20210205121199924.pdf
FORM 1 BIOLOGY MIND MAPS and their schemes
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
David L Page_DCI Research Study Journey_how Methodology can inform one's prac...
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
TNA_Presentation-1-Final(SAVE)) (1).pptx
Computing-Curriculum for Schools in Ghana
Introduction to pro and eukaryotes and differences.pptx
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf
Ad

Visualization in the Age of Computerization 1st Edition Annamaria Carusi

  • 1. Instant Ebook Access, One Click Away – Begin at ebookgate.com Visualization in the Age of Computerization 1st Edition Annamaria Carusi https://ebookgate.com/product/visualization-in-the-age-of- computerization-1st-edition-annamaria-carusi/ OR CLICK BUTTON DOWLOAD EBOOK Get Instant Ebook Downloads – Browse at https://ebookgate.com Click here to visit ebookgate.com and download ebook now
  • 2. Instant digital products (PDF, ePub, MOBI) available Download now and explore formats that suit you... Psychology of Sustainability and Sustainable Development in Organizations First Edition Annamaria Di Fabio https://ebookgate.com/product/psychology-of-sustainability-and- sustainable-development-in-organizations-first-edition-annamaria-di- fabio/ ebookgate.com Horror in the Age of Steam Tales of Terror in the Victorian Age of Transitions 1st Edition Carroll Clayton Savant https://ebookgate.com/product/horror-in-the-age-of-steam-tales-of- terror-in-the-victorian-age-of-transitions-1st-edition-carroll- clayton-savant/ ebookgate.com The fall of language in the age of English Carpenter https://ebookgate.com/product/the-fall-of-language-in-the-age-of- english-carpenter/ ebookgate.com Politics in the Age of Austerity 1st Edition Wolfgang Streeck https://ebookgate.com/product/politics-in-the-age-of-austerity-1st- edition-wolfgang-streeck/ ebookgate.com
  • 3. Hope in the Age of Anxiety 1st Edition Anthony Scioli https://ebookgate.com/product/hope-in-the-age-of-anxiety-1st-edition- anthony-scioli/ ebookgate.com Art in the Age of Emergence 1st Edition Michael Pearce https://ebookgate.com/product/art-in-the-age-of-emergence-1st-edition- michael-pearce/ ebookgate.com Race in the Age of Obama 1st Edition Donald Cunnigen https://ebookgate.com/product/race-in-the-age-of-obama-1st-edition- donald-cunnigen/ ebookgate.com Representing Humanity in the Age of Enlightenment 1st Edition Alexander Cook https://ebookgate.com/product/representing-humanity-in-the-age-of- enlightenment-1st-edition-alexander-cook/ ebookgate.com The unfinished Enlightenment description in the age of the encyclopedia 1st Edition Stalnaker https://ebookgate.com/product/the-unfinished-enlightenment- description-in-the-age-of-the-encyclopedia-1st-edition-stalnaker/ ebookgate.com
  • 6. Visualization in the Age of Computerization Digitalization and computerization are now pervasive in science. This has deep consequences for our understanding of scientific knowledge and of the scientific process, and challenges longstanding assumptions and traditional frameworks of thinking of scientific knowledge. Digital media and compu- tational processes challenge our conception of the way in which perception and cognition work in science, of the objectivity of science, and the nature of scientific objects. They bring about new relationships between science, art and other visual media, and new ways of practicing science and orga- nizing scientific work, especially as new visual media are being adopted by science studies scholars in their own practice. This volume reflects on how scientists use images in the computerization age, and how digital technolo- gies are affecting the study of science. Annamaria Carusi is Associate Professor in Philosophy of Medical Science and Technology at the University of Copenhagen. Aud Sissel Hoel is Associate Professor in Visual Communication at the Norwegian University of Science and Technology. Timothy Webmoor is Assistant Professor adjunct in the Department of Anthropology, University of Colorado, Boulder. Steve Woolgar is Chair of Marketing and Head of Science and Technology Studies at Said Business School, University of Oxford.
  • 7. Routledge Studies in Science, Technology and Society 1 Science and the Media Alternative Routes in Scienti¿c Communication Massimiano Bucchi 2 Animals, Disease and Human Society Human-Animal Relations and the Rise of Veterinary Medicine Joanna Swabe 3 Transnational Environmental Policy The Ozone Layer Reiner Grundmann 4 Biology and Political Science Robert H. Blank and Samuel M. Hines, Jr. 5 Technoculture and Critical Theory In the Service of the Machine? Simon Cooper 6 Biomedicine as Culture Instrumental Practices, Technoscienti¿c Knowledge, and New Modes of Life Edited by Regula Valérie Burri and Joseph Dumit 7 Journalism, Science and Society Science Communication between News and Public Relations Edited by Martin W. Bauer and Massimiano Bucchi 8 Science Images and Popular Images of Science Edited by Bernd Hüppauf and Peter Weingart 9 Wind Power and Power Politics International Perspectives Edited by Peter A. Strachan, David Lal and David Toke 10 Global Public Health Vigilance Creating a World on Alert Lorna Weir and Eric Mykhalovskiy 11 Rethinking Disability Bodies, Senses, and Things Michael Schillmeier 12 Biometrics Bodies, Technologies, Biopolitics Joseph Pugliese 13 Wired and Mobilizing Social Movements, New Technology, and Electoral Politics Victoria Carty 14 The Politics of Bioethics Alan Petersen 15 The Culture of Science How the Public Relates to Science Across the Globe Edited by Martin W. Bauer, Rajesh Shukla and Nick Allum
  • 8. 16 Internet and Surveillance The Challenges of Web 2.0 and Social Media Edited by Christian Fuchs, Kees Boersma, Anders Albrechtslund and Marisol Sandoval 17 The Good Life in a Technological Age Edited by Philip Brey, Adam Briggle and Edward Spence 18 The Social Life of Nanotechnology Edited by Barbara Herr Harthorn and John W. Mohr 19 Video Surveillance and Social Control in a Comparative Perspective Edited by Fredrika Björklund and Ola Svenonius 20 The Digital Evolution of an American Identity C. Waite 21 Nuclear Disaster at Fukushima Daiichi Social, Political and Environmental Issues Edited by Richard Hindmarsh 22 Internet and Emotions Edited by Tova Benski and Eran Fisher 23 Critique, Social Media and the Information Society Edited by Christian Fuchs and Marisol Sandoval 24 Commodi¿ed Bodies Organ Transplantation and the Organ Trade Oliver Decker 25 Information Communication Technology and Social Transformation A Social and Historical Perspective Hugh F. Cline 26 Visualization in the Age of Computerization Edited by Annamaria Carusi, Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar
Visualization in the Age of Computerization
Edited by Annamaria Carusi, Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar
New York and London
First published 2015 by Routledge, 711 Third Avenue, New York, NY 10017, and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN.

Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2015 Taylor & Francis

The right of the editors to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Visualization in the age of computerization / edited by Annamaria Carusi, Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar.
pages cm. — (Routledge studies in science, technology and society; 26)
Includes bibliographical references and index.
1. Information visualization. 2. Digital images. 3. Visualization. 4. Computers and civilization. I. Carusi, Annamaria.
QA76.9.I52V574 2014
303.48'34—dc23
2013051326

ISBN13: 978-0-415-81445-4 (hbk)
ISBN13: 978-0-203-06697-3 (ebk)

Typeset in Sabon by IBT Global. SFI label applies to the text stock.
Contents

List of Figures

Introduction (Annamaria Carusi, Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar) 1

PART I: Visualization in the Age of Computerization

1. Algorithmic Alchemy, or the Work of Code in the Age of Computerized Visualization (Timothy Webmoor) 19
2. From Spade-Work to Screen-Work: New Forms of Archaeological Discovery in Digital Space (Matt Edgeworth) 40
3. British Columbia Mapped: Geology, Indigeneity and Land in the Age of Digital Cartography (Tom Schilling) 59
4. Redistributing Representational Work: Tracing a Material Multidisciplinary Link (David Ribes) 77
5. Making the Strange Familiar: Nanotechnology Images and Their Imagined Futures (Michael Lynch and Kathryn de Ridder-Vignone) 97
6. Objectivity and Representative Practices across Artistic and Scientific Visualization (Chiara Ambrosio) 118
7. Brains, Windows and Coordinate Systems (Annamaria Carusi and Aud Sissel Hoel) 145
8. A Four-Dimensional Cinema: Computer Graphics, Higher Dimensions and the Geometrical Imagination (Alma Steingart) 170

PART II: Doing Visual Work in Science Studies

9. Visual STS (Peter Galison) 197
10. Expanding the Visual Registers of STS (Torben Elgaard Jensen, Anders Kristian Munk, Anders Koed Madsen and Andreas Birkbak) 226
11. Mapping Networks: Learning From the Epistemology of the "Natives" (Albena Yaneva) 231
12. Visual STS Is the Answer, What Is the Question? (Anne Beaulieu) 237
13. Visual Science Studies: Always Already Materialist (Lisa Cartwright) 243

Contributors 269
Index 273
Figures

1.1 Lines of code in the programming language C++ (on right) rendering the visualization (on left) of a London transport model. 21
1.2 Cached MySQL database. 28
2.1 The use of iPads at Pompeii excavations, 2010. 46
4.1 Two examples of Marie's work in the application of texture-mapping to a single surface. Which is more effective? 80
4.2 A texture-map used in one of Marie's experimental systems. 84
5.1 "Quantum Corral" (1993). 103
5.2 Nanocar models and STM image. 110
6.1 Bernard Siegfried Albinus. 124
6.2 Bernard Siegfried Albinus. 125
6.3 Alfred Stieglitz, The Steerage, 1907. 132
6.4 Martin John Callanan, 2009. 135
8.1 On the left (1a) is a still from Banchoff and Strauss's first film, showing a projection of the flat torus into three-space, which divides the space into two congruent halves. On the right (1b) is a later rendering of the same projection with color and shading. 175
8.2 On the top (2a & 2b) are two images from The Hypercube: Projections and Slicing. Below (2c & 2d) are two images from Complex Functions Graph. 177
8.3 Two versions of the Veronese surface. 180
9.1 Still from Primate. 207
9.2 Dimitri Mugianis. 209
9.3 Still from Leviathan. 210
9.4 Still from Secrecy. 216
11.1 The dynamic network mapping of the process of design and construction of the 2012 London Olympics Stadium. 235
Introduction

Annamaria Carusi, Aud Sissel Hoel, Timothy Webmoor and Steve Woolgar

The pervasive computerization of imaging and visualizing challenges us to question what changes accompany computerized imagery and whether, for instance, it is poised to transform science and society as thoroughly as the printing press and engraving techniques changed image reproduction (Eisenstein 1980; Rudwick 1976), or as radically as photography altered the conditions of human sense perception and the aura of works of art (Benjamin [1936] 1999). Whereas some scholars discern continuity in representational form from Renaissance single-point perspective to cinematic and digital arrays, from Alberti's windows to Microsoft (Crary 1990; Manovich 2001; Friedberg 2006; Daston and Galison 2007), other scholars understand our immersion in imagery as heralding a "visual turn" with the engagement of knowledge in contemporary culture (Rheingold 1992; Mitchell 1994; Stafford 1996). James Elkins suggests that visual literacy spans the specialized disciplines of the academy (Elkins 2007), while Robert Horn and Thomas West claim that "visual thinking" is primary and intuitive (Horn 1998; West 2004). Computational high-tech may be enabling the reclamation of old visual talents devalued by word-bound modernist thought.

When we organized the Visualization in the Age of Computerization conference in Oxford in 2011, we were motivated by curiosity regarding what specific effects these innovations were having and whether claims regarding new and more effective visualizing techniques and transformed modes of visual thinking were being borne out. A further motivation was the conviction that the exploration of the field would be enriched by input from the broad range of science studies, including sociological, philosophical, historical and visual studies approaches.
The papers presented at the conference and the further discussion and debate that they engendered have gone some way in providing a set of crossdisciplinary and sometimes interdisciplinary perspectives on the ways in which the field of visualization has continued to unfold. This volume gathers together a sample of contributions from the conference, with others being presented in the special issue of Interdisciplinary Science Reviews, titled "Computational Picturing."1 Other interventions at the conference have been published elsewhere, and since that time, the domain has garnered the interest of a great
many scholars across the disciplines. New publications in the area include Bartscherer and Coover (2011), Yaneva (2012), Wouters et al. (2013) and Coopmans et al. (2014). The texts gathered in this volume each address the ways in which the increasing prevalence of computational means of imaging and visualizing is having an impact on practices and understandings of "making visible," as well as of the objects visualized. We note five overlapping themes in considerations of the ways in which the modes of visualization associated with digital methods are making a difference in areas that have traditionally been of interest to studies of the visual in science.

First, a difference is being made to the deployment of perception and cognition in visual evidence. Traditional ideas about cognition have long been rejected in favor of an understanding of interpretation in terms of in situ material practices. It is thus recognized that mentalistic precepts such as "recognizing patterns," "identifying relationships," "assessing fit and correspondence," etc. are better treated as idealized depictions of the activities involved in generating, managing and dealing with representations. In science and technology studies (STS), visual studies, history of science and related disciplines, the focus on material practices resulted in a widespread acknowledgment of the muddled and contingent efforts involved in the practical activities of making sense of the visual culture of science (Lynch and Woolgar 1990; Cartwright 1995; Galison 1997). Yet the advent of new modes of computerized visualization has seen the reemergence of a mentalistic cognitivism, with visualizations assigned the role of "cognitive aids" (Bederson and Schneiderman 2003; Card, Mackinley and Schneiderman 1999; McCormick, DeFanti and Brown 1987; Ware 2000; see Arias-Hernandez, Green and Fisher 2012 for a review).
However, alongside the reemerging cognitivist discourse, there are other claims that accentuate the cognitive import of visualizations—that is, their active role in forming and shaping cognition—while simultaneously pointing to the complex embedding of perception and cognition in material settings (Ellenbogen 2008; Carusi 2008; Hoel 2012). As Anne Beaulieu points out in her contribution toward the end of this volume, even if the term "visual" relies on the perceptual category of sight, vision is networked and witnessed by way of computational tools that take primarily nonoptical forms. Lisa Cartwright too calls attention to the "always already" material aspects of the visible, pointing out the extent to which a materialist philosophy has been developed in feminist approaches to images, imaging and visualization. The cognitive import of visualizations is emphasized in the chapter by Matt Edgeworth in his discussion of discovery through screen-work in archaeology, and in the chapter by Tom Schilling in the context of antagonistic communities using maps or data sets as visual arguments. Alma Steingart, likewise, underscores the cognitive import of visualizations by tracking the way that computer graphics animations were put to use by mathematicians as generative of novel theories. In fact, most contributions to this volume lay stress upon
the cognitive import of visualizations, by pointing to their transformative and performative roles in vision and knowledge.

A second change is in the understanding of objectivity in the methodological sense of what counts as an objective scientific claim. Shifts in observational practice are related to fluctuations as to what counts as "good" vision with respect to the type of vision that is believed to deliver objectivity. However, there are diverse notions of objectivity in the domain mediated by computational visualization. Scholars have described the involvement of technologies in science in terms of the historical flux of epistemic virtues in science, tracing a trajectory from idealized images that required intervention and specialized craftsmanship to more mechanical forms of recording. A later hybridization of these virtues has been claimed to occur when instruments and, later, computers allowed active manipulation versus passive observation (Hacking 1983; Galison 1997). Other researchers suggest that the computerization of images accounts for an epistemic devaluation of visualizations in favor of their mathematical manipulation that the digital (binary) format allows (Beaulieu 2001). This raises questions of objectivity in science, but not without also raising questions about the objects that are perceived through different modes of visualization (Hoel 2011; Carusi 2012). In this volume, Chiara Ambrosio shows that notions of scientific objectivity have been shaped in a constant dialogue with artistic representation. She points to the importance of contextualizing prevalent ideas of truth in science by attending to the roles of artists in shaping these ideas as well as challenging them.
Annamaria Carusi and Aud Sissel Hoel develop an approach to visualizations in neuroscience that accentuates the generative dimension of vision, developing Maurice Merleau-Ponty's notion of "system of equivalences" as an alternative to approaches in terms of subjectivity or objectivity. The generative roles of visualizations, or, more precisely, of digital mapping practices, are also emphasized in Albena Yaneva's contribution, as well as in the contribution by Torben Elgaard Jensen, Anders Kristian Munk, Anders Koed Madsen and Andreas Birkbak.

A third area where we are seeing shifts is in the ontology of what counts as a scientific object. The notion of what constitutes a scientific object has received a great deal of attention for quite some time in science studies broadly (philosophy, history and sociology). However, we are currently seeing a resurgence of interest in the ontology of scientific objects (Barad 2007; Bennett 2010; Brown 2003; Harman 2009; Latour 2005; Olsen et al. 2012; Woolgar and Lezaun 2013; among many others; see Trentmann 2009 for an overview). As Cartwright points out in her chapter, a new preoccupation with ontology in science studies has been continuous with the digitalization of medicine, science and publication. It may seem that this is because computational objects demand a different sort of ontology: They are often seen as virtual as opposed to physical objects, and their features are often seen as in some sense secondary to the features of "actual" physical objects that they are assumed to mimic. However, the distinction between virtual
and physical objects is problematic (Rogers 2009; Carusi 2011; Hoel and van der Tuin 2013), and is inadequate to the working objects handled and investigated by the sciences. This is highlighted when attention is paid to the mode of visualization implied by digitalization and computerization. In this volume, Timothy Webmoor observes that the formerly distinct steps of crafting visualizations are woven together through the relational ontology of "codework." A relational ontology is also key to the approach developed by Carusi and Hoel, as it is to other contributions that emphasize the generative aspect of visualizations. Steingart, for example, points out that the events portrayed in the mathematical films she has studied are not indexes of observable natural events, but events that become phenomena at all only to the extent that they are graphically represented and animated. However, the ontological implications of visualization are frequently obscured: Michael Lynch and Kathryn de Ridder-Vignone, in their study of nanotechnology images, note that by resorting to visual analogies and artistic conventions for naturalistic portrayal of objects and scenery, nano-images represent a missed opportunity to challenge conventional, naturalistic ontology.

In terms of the ontology of visual matter, some visual culturalists and anthropologists (e.g., Gell 1998; Pinney 1990, 2004) have suggested that taking ontology seriously entails reorienting our epistemic toolbox for explaining images, or suspending our desire for epistemic explanation. These include "scaling-out" strategies that move away from the visual object to emphasize historical contingency, partiality or construction through documenting an object's many sociotechnical relationships.
A different mode of engagement may be localizing at the scale of the visual object to get at the "otherness," "fetishization" and inherent qualities that make certain visual imagery work (Harman 2011; Henare, Holbraad and Wastell 2007; Pinney 1990; Webmoor 2013; topics also discussed by Beaulieu in this volume). Deploying media to register ontological competencies, rather than to serve epistemic goals, means developing object-oriented metrologies, or new methods of "measuring" materiality (Webmoor 2014). This tactic would in part strive for the affective appreciation of visual objects, and it may approximate the collaboratively produced imagery of artists and scientists.

Hence, we view the increase in collaborations between science and art, or alternatively, between science/visual studies and art, as a fourth area of change. Certainly, as has been well documented, there is a long history of productive interchange between scientists and artists (Stafford 1991; Jones and Galison 1998; Kemp 2006). Further, the introduction of digital and computational technologies has opened a burgeoning field of explorations into embodied experience and technology, challenging boundaries between technoscience, activism and contemporary art (Jones 2006; da Costa and Philip 2008). In fact, one of the things that we were aiming for with the conference was to encourage interdisciplinary dialogue and debate between different approaches to science and image studies, and particularly to get more conversations and collaborations going between STS and the humanities, as well as between science/visual studies and creative
practitioners. To attain the second goal, artists and designers, including Alison Munro, Gordana Novakovic and Alan Blackwell, were invited to present their work at the conference. In this volume, relations between science and art are dealt with by several contributors. Ambrosio discusses three historical and contemporary examples, where science has crossed paths with art, arguing that an adequate understanding of the shifting notions of objectivity requires taking into account the variegated influences of art. Carusi and Hoel show the extent to which a scientific practice such as neuroimaging shares characteristics of the process of painting, while David Ribes shows how today's visualization researchers apply insights from visual perception and artistic techniques to design in order to ensure a more efficient visualization of data. The chapter by Lynch and de Ridder-Vignone also testifies to the interplay of science and art, not only through the reliance of nano-images on established artistic conventions, but also due to the way that nano-images are often produced and displayed as art, circulated through online "galleries."

A fifth change relates to the way that computational tools bring about modifications to practice, both in settings where visualizations are developed and used in research, and in settings where scholars study these visualizing practices. A series of studies have documented the manner in which visually inscribing phenomena makes up "the everyday" of science work (Latour and Woolgar 1979; Lynch and Woolgar 1990; Cartwright 1995; Galison 1997; Knuuttila, Merz and Mattila 2006), suggesting that much of what enables science to function is making things visible through the assembling of inscriptions for strategic purposes.
However, whereas previously researchers investigating these visual inscription processes made their observations in "wet-labs" or physical sites of production, today they often confront visualizations assembled entirely in the computer or in web-based "spaces" (Merz 2006; Neuhaus and Webmoor 2012). Several contributors explore how tasks with computational tools of visualization are reconfiguring organizational and management practices in research settings. Ribes, focusing on visualization researchers who both publish research findings and develop visualization software, shows how, through visual input, technology takes on a leading role in sustaining collaborations across disciplines, which no longer depend solely on human-to-human links. Webmoor ethnographically documents changes to the political economy of academia that are attendant with the cocreation of data, code and visual outputs by researchers, software programmers and lab directors. He poses this reflexive question: What changes for academics in terms of promotion, assignation of publishing credit and determination of "impact factors" will accompany the increasing online publishing of research outputs, or visualizations, in digital format? Modifications to practice due to new computational tools are also a key topic in Edgeworth's chapter, which tracks the transition from spade-work to screen-work in archaeology, as they are in Schilling's chapter, which points to the blurring of boundaries between producers and users in today's mapmaking practices.
When it comes to the modifications to the second setting, to the way that today's scholars go about investigating the practices of science, Peter Galison's contribution to this volume challenges scholars to do visual work and not only to study it. Galison's chapter, however, is of a different kind than the preceding chapters, being set up as a debate contribution advocating film as a research tool in science and technology studies—or what he refers to as "second-order visual STS." Upon receiving this contribution, we had the idea of inviting scholars of the visual culture of science to respond, or alternatively, to write up their own position pieces on the topic of visual STS. Thus, whereas the first part of the book consists of a series of (in Galison's terms) first-order VSTS contributions, the second part consists of a series of debate contributions (varying in length from short pieces to full-length papers) that discuss the idea of visual STS, and with that, the idea of doing visual work in science studies (second-order VSTS). This was particularly appropriate since, as the book was being finalized, we noted that the call for submissions for the 2013 Annual Meeting of the Society for Social Studies of Science included a request for "sessions and papers using 'new media' or other forms of new presentation," and that there would be "special sessions on movies and videos where the main item submitted will be a movie or video."2 This initiative has been followed up by the announcement of a new annual film festival, Ethnografilm, the first of which is to be held in Paris in April 2014.
The expressed purpose of the film festival is to promote filmmaking as a "legitimate enterprise for academic credit on a par with peer reviewed articles and books."3 Clearly, science/visual studies scholars are themselves experiencing the challenge of digital technologies to their own research and presentation practice, and are more than willing to experiment with using the visual as well as studying it. As many of the contributors to the second part of the book discuss, conducting research through filmmaking or other visual means is not unprecedented—Galison himself discusses visual anthropology, and Cartwright discusses the use of film by researchers such as Christina Lammer. However, digital tools make movies and videos available to a far wider range of scholars (consider Galison's discussion of the extremely time-consuming editing process in predigital filming), and also make other techniques that have been used in the past—such as mapping and tracing networks—more available. This is not only a question of availability: Digitalization also transforms these techniques and media, and the mode of knowledge attained through them, as is clearly seen in the discussion of mapping in the responses of Jensen et al. and Yaneva. Besides, the increasing availability of digital video challenges science studies scholars not to fall back into what Cartwright terms an "anachronistic realism," but to deploy new modes of reflexivity in their use of the medium.

With the call for second-order VSTS, and the merging of crafts formerly divided between the arts and sciences, we note an optimistic potential for the engagement with visualizations in the age of computerization. With computerized forms of capture, rendering, distribution and retrieval of
scholarly information, the repertoires of science as well as science/visual studies expand well beyond the historical trajectory of what paper-based media permit. As several contributors urge, visual media with their unique affordances supplement paper-based media and allow complementary and richer portrayals of the practices of science. Further, with this expansion come new forms of knowledge. Certainly, this volume cannot exhaust a field already acknowledged for its inventiveness of new tools and techniques, and is but an initial exploration of some of its central challenges that will no doubt continue to elicit the attention of science studies scholars. What we hope to have achieved by gathering together these studies of computerized visualization is to show that the visual dimension of visualization warrants attention in its own right, and not only as an appendage of digitalization.

OVERVIEW

The present volume is divided into two parts. The first part consists of eight chapters that examine the transformative roles of visualizations across disciplines and research domains. All of these chapters are based on presentations made at the 2011 conference in Oxford. The second part considers instead the use of the visual as a medium in science/visual studies, and consists of two full-length chapters and three short position pieces. The contributors to this part were also affiliated with the Oxford conference in one way or another: as keynote, invited keynote, respondent or participants. The following paragraphs provide a brief overview of all contributions.

Timothy Webmoor, in his chapter "Algorithmic Alchemy, or the Work of Code in the Age of Computerized Visualization," offers an ethnography of an academic-cum-commercial visualization lab in London. Work with and reliance upon code is integral to computerized visualization. Yet work with code is like Nigel Thrift's "technological unconscious" (Thrift 2004).
Until quite recently it has remained in the black boxes of our computerized devices: integral to our many mundane and scientific pursuits, yet little understood. Close ethnographic description of how code works suggests some novelties with respect to the tradition of examining representation in science and technology studies. Webmoor argues that formerly separated or often sequential tasks, principally data sourcing, programming and visualizing, are now woven together in what researchers do. This means that previously separated roles, such as those of researcher and programmer, increasingly converge in what he terms "codework," an activity resembling reiterative knot-making. Outputs or visualizations, particularly with the "mashed-up" web-based visualizations he studies, are held only provisionally like a knot before they are redone, adjusted, updated or taken down. Nevertheless, codework demonstrates vitality precisely because it confounds conventional schemes of accountability and governance. It plays on a tension between creativity and containment.
Matt Edgeworth's "From Spade-Work to Screen-Work: New Forms of Archaeological Discovery in Digital Space" undertakes an ethnography of practices in archaeology involving the integration of digital tools. A presentation of his own intellectual development in terms of deploying digital tools as an archaeologist parallels and grants a candor to his empirical observations surrounding changes to a field that is often caricatured as rather a-technological due to the down-and-dirty conditions of archaeological fieldwork. Among other new tools of archaeology, Edgeworth underscores the embodiment and tacit skill involved with imaging technologies, particularly those of Google Earth and LiDAR satellite images. He makes the case that screen-work, identifying and interpreting archaeological features on the computer screen, entails discovery in the quintessential archaeological sense, that of excavating pertinent details from the "mess" or inundation of visual information. He proceeds to ask whether these modes of discovery are drastically different, or whether the shift to digital techniques is a step-wise progression in the adoption of new tools. In moving the locus of archaeological discovery toward the archaeological office space, Edgeworth brings up a fundamental issue of identity for "the discipline of the spade" in the digital age.

In his chapter "British Columbia Mapped: Geology, Indigeneity and Land in the Age of Digital Cartography," Tom Schilling offers a detailed consideration of the practices, processes and implications of digital mapping by exploration geologists and Aboriginal First Nations in British Columbia, Canada. In both cases the communities produce maps that enter the public domain with explicit imperatives reflecting their economic and political interests.
Exploration geologists use digital mapping tools to shape economic development, furthering their search for new mineral prospects in the Canadian Rocky Mountains. First Nations, on their side, produce digital maps to amplify claims to political sovereignty, by developing data layers consisting of ethnographic data. The chapter also explores the specificities of digital cartography compared to its paper-based predecessors. While both digital and paper-based cartography are invariably political, digital mapping tools are distinguished by their manipulability: By inviting improvisational reconstruction they challenge the distinction between producers and users of maps. However, as Schilling's case studies make clear, these practices have yet to fulfill the ideals of collaborative sharing of data and democratic participation. Further, as community-assembled data sets are taken up by others, the meaning and origins of databases become increasingly obscured.

The contribution by David Ribes, "Redistributing Representational Work: Tracing a Material Multidisciplinary Link," turns our attention to the way in which scientists' reliance on visualization software results in a redistribution of labor in terms of knowledge production, with computer scientists playing a key intermediary role in the process. The chapter follows the productive work of one computer scientist as she
built visualization tools for the sciences and medicine using techniques from experimental psychology. He shows that the methods and findings of the computer scientists have two trajectories: On one hand, they are documented in scholarly publications, where their strengths and weaknesses are discussed; on the other, research outcomes also independently inform the production of future visualization tools, and become incorporated into a process of scientific knowledge production in another field. Ribes explores the gap between these two trajectories, showing that as visualization software comes to be used and reused in different contexts, multidisciplinarity is loosened from human-to-human links, and instead becomes embedded in the technology itself.

Michael Lynch and Kathryn de Ridder-Vignone, in their chapter "Making the Strange Familiar: Nanotechnology Images and Their Imagined Futures," examine different types of images of nanoscale phenomena. Images play a prominent role in the multidisciplinary field of nanoscience and nanotechnology; and to an even greater extent than in other research areas images are closely bound to the promotion and public interface of the field. Examining nano-images circulated through online image galleries, press releases and other public forums, Lynch and de Ridder-Vignone make the observation that, even if they portray techno-scientific futures that challenge the viewer's imagination, nano-images resort to classic artistic conventions and naturalistic portrayals in order to make nanoscale phenomena sensible and intelligible—pursuing the strategy of "making the strange familiar." This may seem ironic, since the measurements of scanning-tunneling microscopes are not inherently visual, and the nanoscale is well below the minimum wavelengths of visible light.
Nonetheless, the conventions used take different forms in different contexts, and the chapter proceeds to undertake an inventory of nano-images that brings out their distinct modes of presentation and their distinct combinations of imagination and realism. With the chapter by Chiara Ambrosio, “Objectivity and Representative Practices across Artistic and Scientific Visualization,” we turn to the history of objectivity in art and science, and specifically to the way in which they are interrelated. She shows that scientific objectivity has constantly crossed paths with the history of artistic representation, from which it has received some powerful challenges. Her aim is twofold: firstly to show the way in which artists have crucially contributed to shaping the history of objectivity; and secondly to challenge philosophical accounts of representation that not only are ahistorical but also narrowly focus on science decontextualized from its conversations with art. Ambrosio’s discussion of three case studies from eighteenth-century science illustration, nineteenth-century photography and twenty-first-century data visualization highlights the importance of placing current computational tools and technologies in a historical context, which encompasses art and science. She proposes a historically grounded and pragmatic view of “representative practices,” to
account for the key boundary areas in which art and science have complemented each other, and will continue to do so in the age of computerization. Annamaria Carusi and Aud Sissel Hoel, in their chapter “Brains, Windows and Coordinate Systems,” develop an account of neuroimaging that conceives brain imaging methods as at once formative and revealing of neurophenomena. Starting with a critical discussion of two metaphors that are often evoked in the context of neuroimaging, the “window” and the “view from nowhere,” they propose an approach that goes beyond contrasts between transparency and opacity, or between complete and partial perspectives. Focusing on the way brain images and visualizations are used to convey the spatiality of the brain, neuroimaging is brought into juxtaposition with painting, which has a long history of grappling with space. Drawing on Merleau-Ponty’s discussion of painting in “Eye and Mind,” where he sets forth an integrated account of vision, images, objects and space, Carusi and Hoel argue that the handling and understanding of space in neuroimaging involve the establishment of a “system of equivalences” in the terms of Merleau-Ponty. Accentuating the generative dimension of images and visualizations, the notion of seeing according to a system of equivalences offers a conceptual and analytic tool that opens a new line of inquiry into scientific vision. In her chapter “A Four-Dimensional Cinema: Computer Graphics, Higher Dimensions and the Geometrical Imagination,” Alma Steingart investigates the way that computer graphic animation has reconfigured mathematicians’ visual culture and transformed mathematical practice by providing a new way of engaging with mathematical objects and theories.
Tracking one of the earliest cases in which computer graphics technology was applied to mathematical work, the films depicting four-dimensional surfaces by Thomas Banchoff and Charles Strauss, Steingart argues that computer graphics did more than simply represent mathematical phenomena. By transforming mathematical objects previously known only through formulas and abstract argument into perceptible events accessible to direct visual investigation, computer graphic animation became a new way of producing mathematical knowledge. Computer graphics became a tool for posing new problems and exploring new solutions, portraying events that would not be accessible except through their graphic representation and animation. It became an observational tool that allowed mathematicians to see higher dimensions, and hence a tool for cultivating and training their geometrical imagination. The focus on film as a mode of discovery makes a nice transition to the next chapter, by Peter Galison, which introduces the second part of the book. Galison challenges science studies to use the visual as well as to study it, and delineates the contours of an emerging visual science and technology studies or VSTS in a contribution with two parts: a theoretical reflection on VSTS, and a description of his own experience doing science studies through the medium of film, in such projects as Ultimate
Weapon: The H-Bomb Dilemma (Hogan 2000) and Secrecy (Galison and Moss 2008). Galison draws a distinction between first-order VSTS, which continues to study the uses that scientists make of the visual, and second-order VSTS, which uses the visual as the medium in which it conducts and conveys its own research. He proposes that exploring the potential for second-order VSTS is a logical further development of what he holds up as the key accomplishment of science studies in the last thirty years: developing localization as a counter to global claims about universal norms and transhistorical markers of demarcation. In their collective piece, “Expanding the Visual Registers of STS,” Torben Elgaard Jensen, Anders Kristian Munk, Anders Koed Madsen and Andreas Birkbak respond to Galison’s call to expand the visual research repertoire by advising an even larger expansion: Why stop at filmmaking when there are also other visual practices that could be taken up by second-order VSTS? Focusing in particular on the practice of making digital maps, they argue that the take-up of maps by second-order VSTS would have to be accompanied by a conceptual rethinking of maps, including a discussion of the way maps structure argumentation. They also pick up on Galison’s observations concerning the affectivity of film and video by suggesting that maps also leave the viewer affected through his or her own unique mode of engagement. In her response, “Mapping Networks: Learning from the Epistemology of the ‘Natives,’” Albena Yaneva starts out by pointing to the role of ethnographic images in shaping the fieldworker’s explanations and arguments. Further, with the introduction of digital mapping tools into the STS toolbox with large projects such as the controversy mapping collaborative project MACOSPOL, a shift to second-order VSTS has already taken place.
Emphasizing the performative force of mapping, Yaneva argues that mapping is not a way of illustrating but a way of generating and deploying knowledge. In her own fieldwork, which followed the everyday visual work of artists, technicians, curators, architects and urban planners, Yaneva experimented with swapping tools with the “natives,” learning a lot from their indigenous visual techniques. Drawing on this “organic” development of second-order visual methods gained through ethnographic intimacy, she suggests borrowing epistemologies and methods from the “natives.” In her piece “If Visual STS Is the Answer, What Is the Question?” Anne Beaulieu outlines her position on the topic of visual STS. She starts out by countering claims that STS has always been visual or that attending to the visual is equal to fetishizing it. Defending the continued relevance of talking about a “visual turn” in STS, Beaulieu lists five reasons why the question of VSTS becomes interesting if one attends to its specifics: Images are an emergent entity that remains in flux; visual practices are just as boring as other material practices studied by STS; observation takes many forms and must be learned; vision and images are to an increasing extent networked; and attending to the visual can serve to expand the toolkit of
STS by drawing on resources from disciplines such as media studies, film studies, art history and feminist critiques of visuality. The concluding contribution to this volume, Lisa Cartwright’s “Visual Science Studies: Always Already Materialist,” demonstrates the importance of approaching the visual in an interdisciplinary manner. Cartwright points out the attention paid to the materiality of the visual that was a hallmark of Marxist materialist and feminist visual studies from the late 1970s onwards. The chapter highlights the specific contributions of materialist and feminist visual studies in addressing subjectivity and embodiment. The focus of visual studies on materiality does not result in a disavowal of or skepticism toward the visual. Cartwright reminds us that the visual turn of science studies since the 1990s coincided with the digital turn in science and technology, including the use of digital media in the fieldwork of science studies scholars; here the author calls for a more reflexive use of the medium. The chapter concludes with a detailed discussion of two films by the sociologist and videographer Christina Lammer—Hand Movie 1 and 2—showing how they present a third way, which neither reduces things to their meanings in the visual, nor reduces images to the things and processes around them; instead, they bring out a more productive way of handling visuality and materiality.

NOTES

1. Volume 37, Number 1, March 2012. All the contributions published in the special issue are freely available for download at the following link: http://www.ingentaconnect.com/content/maney/isr/2012/00000037/00000001. Last accessed 4th November, 2013.
2. http://www.4sonline.org/meeting. Last accessed 4th November, 2013.
3. For more information about the film festival, see: http://ethnografilm.org. Last accessed 4th November, 2013.

REFERENCES

Arias-Hernandez, Richard, Tera M. Green and Brian Fisher.
2012. “From Cognitive Amplifiers to Cognitive Prostheses: Understandings of the Material Basis of Cognition in Visual Analytics.” In “Computational Picturing,” edited by Annamaria Carusi, Aud Sissel Hoel and Timothy Webmoor. Special issue, Interdisciplinary Science Reviews 37 (1): 4–18.
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press.
Bartscherer, Thomas, and Roderick Coover, eds. 2011. Switching Codes: Thinking through Digital Technology in the Humanities and Arts. Chicago: University of Chicago Press.
Beaulieu, Anne. 2001. “Voxels in the Brain: Neuroscience, Informatics and Changing Notions of Objectivity.” Social Studies of Science 31 (5): 635–680.
Bederson, Benjamin B., and Ben Shneiderman. 2003. The Craft of Information Visualization: Readings and Reflections. Amsterdam: Morgan Kaufmann.
Bennett, Jane. 2010. Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke University Press.
Benjamin, Walter. (1936) 1999. “The Work of Art in the Age of Mechanical Reproduction.” In Visual Culture: The Reader, edited by Jessica Evans and Stuart Hall, 61–71. London: SAGE.
Brown, Bill. 2003. A Sense of Things: The Object Matter of American Literature. Chicago: University of Chicago Press.
Card, Stuart C., Jock Mackinlay and Ben Shneiderman. 1999. Readings in Information Visualization: Using Vision to Think. San Francisco: Morgan Kaufmann.
Cartwright, Lisa. 1995. Screening the Body: Tracing Medicine’s Visual Culture. Minneapolis: University of Minnesota Press.
Carusi, Annamaria. 2008. “Scientific Visualisations and Aesthetic Grounds for Trust.” Ethics and Information Technology 10: 243–254.
Carusi, Annamaria. 2011. “Trust in the Virtual/Physical Interworld.” In Trust and Virtual Worlds: Contemporary Perspectives, edited by Charles Ess and May Thorseth, 103–119. New York: Peter Lang.
Carusi, Annamaria. 2012. “Making the Visual Visible in Philosophy of Science.” Spontaneous Generations 6 (1): 106–114.
Carusi, Annamaria, Aud Sissel Hoel and Timothy Webmoor, eds. 2012. “Computational Picturing.” Special issue, Interdisciplinary Science Reviews 37 (1).
Coopmans, Catelijne, Janet Vertesi, Michael Lynch and Steve Woolgar. 2014. Representation in Scientific Practice Revisited. Cambridge, MA: MIT Press.
Crary, Jonathan. 1990. Techniques of the Observer: On Vision and Modernity in the Nineteenth Century. Cambridge, MA: MIT Press.
da Costa, Beatriz, and Kavita Philip, eds. 2008. Tactical Biopolitics: Art, Activism, and Technoscience. Cambridge, MA: MIT Press.
Daston, Lorraine, and Peter Galison. 2007. Objectivity. New York: Zone Books.
Eisenstein, Elizabeth. 1980. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early Modern Europe. Cambridge: Cambridge University Press.
Elkins, James. 2007. Visual Practices across the University. Munich: Wilhelm Fink Verlag.
Ellenbogen, Josh. 2008.
“Camera and Mind.” Representations 101 (1): 86–115.
Friedberg, Anne. 2006. The Virtual Window: From Alberti to Microsoft. Cambridge, MA: MIT Press.
Galison, Peter. 1997. Image & Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press.
Galison, Peter, and Rob Moss. 2008. Secrecy. Redacted Pictures. DVD.
Gell, Alfred. 1998. Art and Agency: An Anthropological Theory. Oxford: Clarendon.
Hacking, Ian. 1983. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press.
Harman, Graham. 2009. The Prince of Networks: Bruno Latour and Metaphysics. Melbourne: Re.Press.
Harman, Graham. 2011. “On the Undermining of Objects: Grant, Bruno and Radical Philosophy.” In The Speculative Turn: Continental Materialism and Realism, edited by Levi R. Bryant, Nick Srnicek and Graham Harman, 21–40. Melbourne: Re.Press.
Henare, Amiria, Martin Holbraad and Sari Wastell. 2007. “Introduction.” In Thinking through Things: Theorising Artefacts in Ethnographic Perspective, edited by Amiria Henare, Martin Holbraad and Sari Wastell, 1–31. London: Routledge.
Hoel, Aud Sissel. 2011. “Thinking ‘Difference’ Differently: Cassirer versus Derrida on Symbolic Mediation.” Synthese 179 (1): 75–91.
Hoel, Aud Sissel. 2012. “Technics of Thinking.” In Ernst Cassirer on Form and Technology: Contemporary Readings, edited by Aud Sissel Hoel and Ingvild Folkvord, 65–91. Basingstoke: Palgrave Macmillan.
Hoel, Aud Sissel, and Iris van der Tuin. 2013. “The Ontological Force of Technicity: Reading Cassirer and Simondon Diffractively.” Philosophy and Technology 26 (2): 187–202.
Hogan, Pamela. 2000. Ultimate Weapon: The H-Bomb Dilemma. Superbomb Documentary Production Company. DVD.
Horn, Robert. 1998. Visual Language: Global Communication for the 21st Century. San Francisco: MacroVU.
Jones, Caroline A., ed. 2006. Sensorium: Embodied Experience, Technology and Contemporary Art. Cambridge, MA: MIT Press.
Jones, Caroline A., and Peter Galison, eds. 1998. Picturing Science Producing Art. New York: Routledge.
Kemp, Martin. 2006. Seen | Unseen: Art, Science, and Intuition from Leonardo to the Hubble Telescope. Oxford: Oxford University Press.
Knuuttila, Tarja, Martina Merz and Erika Mattila. 2006. “Editorial.” In “Computer Models and Simulations in Scientific Practice,” edited by Tarja Knuuttila, Martina Merz and Erika Mattila. Special issue, Science Studies 19 (1): 3–11.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory Life: The Social Construction of Scientific Facts. Beverly Hills: SAGE.
Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Lynch, Michael, and Steve Woolgar, eds. 1990. Representation in Scientific Practice. Cambridge, MA: MIT Press.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: MIT Press.
McCormick, Bruce, Thomas DeFanti and Maxine Brown, eds. 1987. “Visualization in Scientific Computing.” Special issue, Computer Graphics 21 (6).
Merz, Martina. 2006. “Locating the Dry Lab on the Lab Map.” In Simulation: Pragmatic Construction of Reality, edited by Johannes Lenhard, Günter Küppers and Terry Shinn, 155–172. Dordrecht: Springer.
Mitchell, W. J. T. 1994. “The Pictorial Turn.” In Picture Theory: Essays on Verbal and Visual Representation, 11–34. Chicago: University of Chicago Press.
Neuhaus, Fabian, and Timothy Webmoor. 2012. “Agile Ethics for Massified Research and Visualisation.” Information, Communication and Society 15 (1): 43–65.
Olsen, Bjørnar, Michael Shanks, Timothy Webmoor and Christopher Witmore. 2012. Archaeology: The Discipline of Things. Berkeley: University of California Press.
Pinney, Christopher. 1990. “The Quick and the Dead: Images, Time, and Truth.” Visual Anthropology Review 6 (2): 42–54.
Pinney, Christopher. 2004. “Photos of the Gods”: The Printed Image and Political Struggle in India. London: Reaktion Books.
Rheingold, Howard. 1992. Virtual Reality. New York: Simon and Schuster.
Rogers, Richard. 2009. The End of the Virtual: Digital Methods. Amsterdam: Vossiuspers.
Rudwick, Martin J. S. 1976. “The Emergence of a Visual Language for Geological Science 1760–1840.” History of Science 14 (3): 149–195.
Stafford, Barbara Maria. 1991. Body Criticism: Imaging the Unseen in Enlightenment Art and Medicine. Cambridge, MA: MIT Press.
Stafford, Barbara Maria. 1996. Good Looking: Essays on the Virtue of Images. Cambridge, MA: MIT Press.
Thrift, Nigel. 2004. “Remembering the Technological Unconscious by Foregrounding Knowledges of Position.” Environment and Planning D: Society and Space 22 (1): 175–190.
Trentmann, Frank. 2009. “Materiality in the Future of History: Things, Practices, and Politics.” Journal of British Studies 48: 283–307.
Ware, Colin. 2000. Information Visualization: Perception for Design. San Francisco: Morgan Kaufmann.
Webmoor, Timothy. 2013. “STS, Symmetry, Archaeology.” In The Oxford Handbook of the Archaeology of the Contemporary World, edited by Paul Graves-Brown, Rodney Harrison and Angela Piccini, 105–120. Oxford: Oxford University Press.
Webmoor, Timothy. 2014. “Object-Oriented Metrologies of Care and the Proximate Ruin of Building 500.” In Ruin Memories: Materialities, Aesthetics and the Archaeology of the Recent Past, edited by Bjørnar Olsen and Þóra Pétursdóttir, 462–485. London: Routledge.
West, Thomas G. 2004. Thinking Like Einstein: Returning to Our Roots with the Emerging Revolution in Computer Information Visualization. New York: Prometheus Books.
Woolgar, Steve, and Javier Lezaun, eds. 2013. A Turn to Ontology? Special issue, Social Studies of Science 43 (3).
Wouters, Paul, Anne Beaulieu, Andrea Scharnhorst and Sally Wyatt, eds. 2013. Virtual Knowledge: Experimenting in the Humanities and the Social Sciences. Cambridge, MA: MIT Press.
Yaneva, Albena. 2012. Mapping Controversies in Architecture. Farnham: Ashgate.
Part I
Visualization in the Age of Computerization
1 Algorithmic Alchemy, or the Work of Code in the Age of Computerized Visualization

Timothy Webmoor

INTRODUCTION

“I’m doing something very dangerous right now” (Informant 1, July 8, 2010).
“Yah, now is not a good time for me!” (Informant 2, July 8, 2010).

Ethnographic silence can speak volumes. Despite prompts from the anthropologist, the dialogue dried up. Falling back on observation, the two informants were rapidly, if calmly, moving between their multiple program windows on their multiple computer displays. I had been observing this customary activity of coding visualizations for nearly a month now—a visual multitasking that is so characteristic of the post–Microsoft Windows age (Friedberg 2006). Looking around the open plan office setting, everyone was huddled in front of a workstation. Unlike ethnographic work in “wet labs,” where the setting and activities at hand differ from the anthropologist’s own cubicled site of production, this dry lab (Merz 2006) seemed so mundane and familiar, as a fellow office worker and computer user, that admittedly I had little idea of how to gain any analytic purchase as their resident anthropologist. What was interesting about what these researchers and programmers were doing? It wasn’t until I had moved on to questioning some of the PhD researchers in the cramped backroom that the importance of what the two informants were doing was overheard: “Well done! So it’s live?” (Director, July 8, 2010). The two programmers had launched a web-based survey program, where visitors to the site could create structured questionnaires for other visitors to answer. The responses would then be compiled for each visitor and integrated with geo-locational information obtained from browser-based statistics to display results spatially on a map. It was part of what this laboratory was well known for: mashing up and visualizing crowd-sourced and other “open” data.
Though focused upon the UK, the platform had within a few months received over 25,000 visitors from around the world. The interface looked deceptively simple, even comical given that a
cartoon giraffe graced the splash page as a mascot of sorts. However, the reticence the day of its launch was due to the sheer labor of coding to dissimulate the complicated operations allowing such an “open” visualization. “If you’re going to allow the world to ask the rest of the world anything, it is actually quite complicated” (Director, June 11, 2010). As one of the programmers later explained his shifting of attention between paper notes, notes left in the code, and the code itself, “I have to keep abreast of what I’ve done because an error . . . (pause) . . . altering the back channel infrastructure goes live on a website” (Informant 1, July 21, 2010). Being familiar with basic HTML, a type of code or, more precisely, a markup text often used to render websites, I knew the two programmers were writing and debugging code that day. Alphanumeric lines, full of symbols and incongruous capitalizations and spacing, were recognizable enough; at the lab this lingua franca was everywhere apparent and their multiple screens were full of lines of code. Indeed, there were “1,000 lines just for the web page view itself [of the web-based survey platform]” (Informant 2, June 6, 2011)—that is, for the window on the screen to correctly size, place and frame the visualized data. Yet when looking at what they were doing there was little to see, per se. There was no large image on their screens to anchor my visual attention—a link between the programming they were engrossed in and what it was displaying, or, more accurately, the dispersed data the code was locating, compiling and rendering in the visual register. The web-based survey and visualization platform was just one of several visualizing platforms that the laboratory was working on.
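The kind of work those thousand lines do, sizing, placing and framing data in a window, can be glimpsed in a simple coordinate transform. The following is a minimal illustrative sketch, not the lab’s actual code: the function name and the flat linear scaling are my own assumptions (real web maps use proper projections), but it shows how code quietly fits dispersed geographic data into a pixel viewport.

```python
def fit_to_viewport(points, width, height, pad=10):
    """Map (lon, lat) data points into pixel coordinates for a
    width x height window: the 'size, place and frame' work of view code.
    Illustrative sketch only; a flat linear scaling, not a map projection."""
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    min_lon, max_lon = min(lons), max(lons)
    min_lat, max_lat = min(lats), max(lats)
    sx = (width - 2 * pad) / (max_lon - min_lon)
    sy = (height - 2 * pad) / (max_lat - min_lat)
    pixels = []
    for lon, lat in points:
        x = pad + (lon - min_lon) * sx
        y = pad + (max_lat - lat) * sy  # screen y runs downward
        pixels.append((x, y))
    return pixels

# Two hypothetical corner points over greater London, fitted to an 800x600 view.
corners = fit_to_viewport([(-0.5, 51.3), (0.2, 51.7)], 800, 600)
```

The inversion of latitude (`max_lat - lat`) is the small but essential detail: screen coordinates grow downward while latitude grows northward, one of many invisible conventions that view code must absorb.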
For other types of completed visualizations rendering large amounts of data—for example, a London transport model based upon publicly available data from the Transport for London (TfL) authority—you might have much more coding: “This is about 6,000 lines of code, for this visualization [of London traffic]” (Informant 3, June 10, 2011) (Figure 1.1). Over the course of roughly a year during which I regularly visited the visualization laboratory in London, I witnessed the process of launching many such web-based visualizations, some from the initial designing sessions around the whiteboards to the critical launches. A core orientation of the research lab was a desire to make the increasingly large amounts of data in digital form accessible to scholars and the interested public. Much information has become widely available from government authorities (such as the TfL) through mandates to make collected digital data publicly available, for instance, through the 2000 Freedom of Information Act in the UK, or the 1996 Electronic Freedom of Information Act Amendments in the US. Massive quantities are also being generated through our everyday engagements with the Internet. Of course, the tracking of our clicks, “likes,” visited pages, search keywords, browsing patterns, and even email content has been exploited by Internet marketing and service companies since the commercialization of the Internet in the late 1990s (see Neuhaus and Webmoor 2012 on research with social media). Yet embedded within an
academic institution, this laboratory was at the “bleeding edge” of harvesting and visualizing such open databases and other traces left on the Internet for scholarly purposes. Operating in a radically new arena for potential research, there is a growing discussion in the social sciences and digital humanities over how to adequately and ethically data mine our “digital heritage” (Webmoor 2008; Bredl, Hünniger and Jensen 2012; Giglietto, Rossi and Bennato 2012). Irrespective of what sources of data were being rendered into the visual register by programmers and researchers at this lab, I constantly found myself asking for a visual counterpart to their incessant coding: “Can you show me what the code is doing?” I needed to see what they were up to.

Figure 1.1 Lines of code in the programming language C++ (on right) rendering the visualization (on left) of a London transport model (Informant 3, June 10, 2011).

CODEWORK

Code, or specifically working with code as a software programmer, has often been portrayed as a complicated and arcane activity. Broadly defined, code is “[a]ny system of symbols and rules for expressing information or instructions in a form usable by a computer or other machine for processing or transmitting information” (OED 2013). Of course, like language,
there are many forms of code: C++, JavaScript, PHP, Python—to name a few more common ones discussed later. The ability to “speak computer” confers on programmers a perceived image of possessing inscrutable and potent abilities to get computers to comply.1 Part nerd, part hero, programmers and specifically hacker culture have been celebrated in cyberpunk literature and cinema for being mysterious and libertarian.2 The gothic sensibility informing such portrayals reinforces a darkened and distanced view of working with code. The academic study of code, particularly from the science and technology studies (STS) perspective of exhibiting what innervates the quintessential “black boxes” that are our computing devices, has only recently been pursued ethnographically (e.g., Coleman and Golub 2008; Demazière, Horn and Zune 2007; Kelty 2008). Oftentimes, however, such studies scale out from code, from a consideration of its performative role in generating computerized outputs, to discuss the identity and social practices of code workers. More closely examining working with code has received less attention (though see Brooker, Greiffenhagen and Sharrock 2011; Rooksby, Martin and Rouncefield’s 2006 ethnomethodological study). Sterne (2003) addresses the absent presence of code and software more generally in academic work and suggests it is due to the analytic challenge that code presents. The reasons for this relate to my own ethnographic encounter. It is boring. It is also nonindexical of visual outputs (at least to the untrained eye unfamiliar with “reading” code; see Rooksby, Martin and Rouncefield 2006). In other words, code, like Thrift’s (2004) “technological unconscious,” tends to recede from immediate attention into infrastructural systems sustaining and enabling topics and practices of concern.
Adrian Mackenzie, in his excellent study Cutting Code (2006, 2), describes how software is felt to be intangible and immaterial, and for this reason it is often on the fringe of academic and commercial analyses of digital media. He is surely right to bemoan not taking code seriously, downplayed as it is in favor of supposed higher-order gestalt shifts in culture (“convergence”), political economy (“digital democracy” and “radical sharing”) and globalization (“network society”). No doubt the “technical practices of programming interlace with cultural practices” (ibid., 4), with the shaping and reshaping of sociality, forms of collectivity and ideas of selfhood; what Manovich (2001, 45) termed “trans-coding” (e.g., Ghosh 2005; Himanen, Torvalds and Castells 2002; Lessig 2004; Weber 2004; for academic impacts see Bartscherer and Coover 2011). However, these larger order processes have dominated analyses of the significance involving the ubiquity of computer code. Boring and analytically slippery, code is also highly ambiguous. The Oxford Dictionary of Computing (1996) offers no less than 113 technical terms that use the word “code” in the domain of computer science and information technology. So despite acknowledging the question “Why is it hard to pin down what software is?” (2006, 19), Mackenzie, a sociologist
of science, admirably takes up the summons in his work. For Mackenzie, code confounds normative concepts in the humanities and social sciences. It simply does not sit still long enough to be easily assigned to conventional explanatory categories, to be labeled as object or practice, representation or signified, agent or effect, process or event. He calls this “the shifting status of code” (ibid., 18). Mackenzie’s useful approach is to stitch together code with agency. Based upon Alfred Gell’s (1998) innovative anthropological analysis of art and agency, Mackenzie (2006, 2005) pursues an understanding of software and code in terms of its performative capacity. “Code itself is structured as a distribution of agency” (2006, 19). To string together what he sees as distributed events involving code’s agency, he takes up another anthropologist’s methodological injunction to pursue “multi-sited ethnography” (Marcus 1995). In terms of how code is made to travel, distributed globally across information and communication technologies (ICTs) and networked servers as a mutable mobile (cf. Latour 1986), this approach permits Mackenzie to follow (the action of) code and offer one of the first non-technical considerations of its importance in the blood flow of contemporary science, commerce and society. Given its mobility, mutability, its slippery states, code can usefully be studied through such network approaches. Yet I am sympathetic with recent moves within ethnography to reassess the importance of locality and resist the tendency (post-globalization) to scale out (see Candea 2009; Falzon 2009; in STS see Lynch 1993).
While there is of course a vast infrastructural network that supports the work code performs in terms of the “final” visualizations, which will be discussed with respect to “middle-ware,” most of the work involving code happens in definite local settings—in this case, in a mid-sized visualization and research laboratory in central London. Describing how code works and what it does for the “hackers” of computerized visualizations will help ground the larger order studies of cultural impacts of computerization, as well as complement the more detailed research into the effects of computerization on scientific practices. I am, therefore, going to pass over the much studied effects of software in the workplace (e.g., Flowers 1996; Hughes and Cotterell 2002) and focus upon when technology is the work (Grint and Woolgar 1997; Hine 2006). Staying close to code entails unpacking what occurs at the multiple screens on programmers’ computers. Like a summer holiday spent at home, it is mundane and a little boring to “stay local,” but like the launch of the new web-based open survey visualizer that tense day, there are all the same quite complex operations taking place with code. With the computerization of data and visualizations, the work with code weaves together many formerly distinct roles. This workflow wraps together the practices of: sourcing data to be visualized; programming to transform and render data visually; visualizing as a supposed final stage. I term these activities “codework.” Merging often sequential stages involved with the generation of visual outputs, I highlight how proliferating web-based
Timothy Webmoor

visualizations challenge analytic models oriented by paper-based media. Computerized visualizations, such as those in this case study, are open-ended. They require constant care in the form of coding in order to be sustained on the Internet. Moreover, they are open in terms of their continuing influence in a feedback cycle that plays into both the sourcing of data and the programming involved to render the data visually. Codework, as an emergent and distinct form of practice in scientific research involving visualization, also blends several sets of binary categories often deployed in visual studies: private/public, visible/invisible, material/immaterial. While these are of interest, I focus upon the manner in which code confounds the binary of creativity/containment and discuss the implications for the political economy of similar visualization labs and the accountability of codework. For clarity, I partially parse these categories and activities in what follows.

SOURCING/PROGRAMMING/VISUALIZING

The July 6, 2010 edition of The Guardian, a very popular UK newspaper, featured a map of London on the second page compiled from "tweets," or posts to the microblogging service Twitter. It resembled a topographic map in that it visually depicted the density of tweeting activity around the city by using classic hypsometric shading from landscape representations. Given that there was a mountain over Soho, it was apparent to anyone familiar with the city that the data being visualized were not topographical. The caption read, "London's Twitterscape: Mapping the City Tweet by Tweet."3 It was good publicity for the lab. The visualizations were a spin-off or side project of one of the PhD researchers. As he stated, it was an experiment "to get at the social physics of large cities through Twitter activity" (Informant 4, July 7, 2010).
It was one of my earliest visits to the lab at a time when research deploying emergent social media such as Facebook, Flickr, Foursquare and Twitter, or other online sources such as Wikipedia editing activity, was in its infancy (see Viégas and Wattenberg 2004 as an early example). Indeed, many of these now popular online services did not exist before 2006. Sourcing the data to visualize from these online platforms is not particularly difficult. It does, however, take an understanding of how the data are encoded, how they might be "mined" and made portable with appropriate programming, and whether the information will be amenable to visualizing. These programming skills are driven by a creative acumen: knowing where to look online for information relevant to research and/or commercial interests, and whether it might provide interesting and useful, or at least aesthetic, visualizations. Creative sourcing and programming are necessary crafts of codework.
Many online services, such as Twitter, provide data through an application programming interface (API). Doing so allows third-party developers to provide "bolt-on" applications, and this extensibility benefits the service provider through increased usage. "There's an app for that!": It is much like developing "apps" for Apple iTunes or Google Android. Importantly, though, sourcing these APIs or "scraping" web pages for data to visualize does require programming, and both the format of the data and the programming language(s) involved heavily determine the "final" visualizations themselves. In the case of Twitter, the company streams "live" a whole set of information bundled with every tweet. Most of this information, such as Internet protocol (IP) location, browser or service used to tweet, link to user profile and sometimes latitude and longitude coordinates (with a 5–15 m accuracy), is not apparent to users of the service.4 The researchers at the London lab applied to Twitter to download this open data from their development site.5 The format is key for what types of visualization will be possible, or at least how much translation of the data by code will be required. For instance, as discussed later, many open data sources are already encoded in a spatial format like Keyhole Markup Language (KML) for display in industry standard analytic software and mapping platforms (for Google Earth or in ESRI's geographic information system [GIS] programs such as ArcGIS). Twitter, like most social media companies, government institutions and scientific organizations, formats its data as comma-separated values (CSV). For simplicity and portability across programming languages, this format for organizing data has become the de facto standard for open datasets.
Information is arranged in tabular format much like an Excel or Numbers spreadsheet, and values are separated by either a comma or a tab (tab-separated values [TSV] format is a variety of CSV). The London lab logged a week's worth of tweets for various metropolises. This amounted to raw data tables containing about 150,000 tweets and over 1.5 million discrete data points for each city. Such massive datasets could be visualized based upon various criteria—for instance, semantically for word frequency or patterning. Importantly, being in CSV format means that the Twitter data are highly mutable by a wide range of programming languages, and therefore there are a number of possible paths to visualization. The researcher was interested in tying such Twitter "landscapes" into his larger PhD research involving travel patterns in the city of London. Spatial and temporal aspects were therefore most important, and a program was written to mine the data for spatial coordinates within a certain radius of the urban centers, as well as for time stamps. When and where a Twitter user sent a tweet could then be plotted. Once aggregated, the visualizations indicated patterns of use in terms of diurnal and weekly activity. They also suggested varying usage around the cities of concern.
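The mining step described here—filtering CSV tweet logs for coordinates within a radius of an urban center, together with their time stamps—can be sketched in Python. This is an illustrative reconstruction, not the lab's actual program: the column names (`timestamp`, `lat`, `lon`) and the use of a haversine distance filter are my assumptions.

```python
import csv
import io
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tweets_near(csv_text, centre, radius_km):
    """Yield (timestamp, lat, lon) for rows within radius_km of centre."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        lat, lon = float(row["lat"]), float(row["lon"])
        if haversine_km(centre[0], centre[1], lat, lon) <= radius_km:
            yield row["timestamp"], lat, lon

# Two synthetic rows: one in central London, one in Paris.
sample = ("timestamp,lat,lon\n"
          "2010-07-06T09:00,51.515,-0.131\n"
          "2010-07-06T09:01,48.857,2.352\n")
london = (51.507, -0.128)
print(list(tweets_near(sample, london, 25)))
```

Aggregating the filtered points by hour and by day of week would then yield the diurnal and weekly patterns the researcher observed.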
Sourcing data packaged as CSV files was common for the laboratory. Indeed, in step with the growing online, open data repositories, where much data are user-generated or at least user-contributed, the lab was developing a visualizing middle-ware program to display spatial data based upon OpenStreetMap and OpenLayers.6 "A place to share maps and compare data visually," as their website states, is the goal. Unlike either large commercial companies that typically fund and manage server farms where these large datasets are uploaded, such as Google's Spreadsheets or Yahoo!'s Pipes, or commercially funded research groups like IBM's ManyEyes,7 this lab was providing a visualizing tool, or "visualizer," that would fetch data which was located remotely. Otherwise, despite the recent purchase of three new stacked Dell servers located in a closet in the hallway, there simply was not enough storage space for the modest lab to host the huge datasets. Described as "a service for researchers," the web-based platform would, for example, "serve up on an ad hoc basis a visualization of the dataset whenever a query was made by a user" (Informant 1, November 10, 2010). As he went on to unpack the operation, when a user selected a set of statistics (e.g., crime statistics) to display over a selected region (e.g., the UK), there were a host of operations that had to transpire rapidly, and all hinged upon the web-based platform and the coding the researcher had written and "debugged" (and rewritten). These operations might be thought of as putting together layers consisting of different information and different formats in order to build a complexly laminar visualization. As the informant described, the programming allows the "fusing of CSV data [like census data] with geospatial data [coordinates] on a tile by tile basis [portions of the map viewed on the screen] . . . this involves three servers and six websites" (Informant 1, November 10, 2010).
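At its core, the "fusing" the informant describes is a join: tabular statistics keyed by region are matched with geometry fetched from elsewhere, assembled per request. The toy sketch below, in Python, shows only that join; the region codes, field names and flat geometry are invented stand-ins for the lab's actual tile-by-tile pipeline.

```python
import csv
import io

def fuse(stats_csv, boundaries):
    """Join CSV statistics (keyed by region code) with separately
    fetched geometry -- a toy version of 'fusing CSV data with
    geospatial data' for display on a map."""
    fused = []
    for row in csv.DictReader(io.StringIO(stats_csv)):
        geom = boundaries.get(row["region"])
        if geom is not None:  # only regions we have geometry for
            fused.append({"region": row["region"],
                          "value": float(row["crime_rate"]),
                          "geometry": geom})
    return fused

# Hypothetical crime statistics and region outlines.
stats = "region,crime_rate\nE09000001,4.2\nE09000007,2.9\n"
boundaries = {"E09000001": [(51.515, -0.092)],
              "E09000007": [(51.545, -0.140)]}
print(fuse(stats, boundaries))
```

In the lab's platform this join happened across "three servers and six websites" and only for the tiles currently on screen, which is what made the coordination work non-trivial.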
Data, plug-ins and the program code itself were variously dispersed and pulled together to form the visualization on a user's screen—hence the "middle" or go-between function of the software package developed by the lab and later released as GMapCreator. For programming web-based visualizations, code is coordination work. In addition to the growing online datasets stored primarily as CSV files, many government agencies now provide their stored data in accessible digital repositories. The UK's data.london.gov.uk was another popular archive where the visualization lab and other researchers were obtaining UK census data. This was variously fetched and displayed as a dynamic, interactive map through the lab's web-based platform. Twenty-seven months after the initial launch of the visualizing platform, 885 datasets had been "uploaded" to the map visualizer (or more precisely shared through linking to a remote server where the datasets were physically stored) and there were "about 10,000 users for the program" (Informant 1, November 10, 2010). Other data were being sourced by the lab through scraping. While the "ready-made" data stream from Twitter's API or other online data
repository requires some initial coding to obtain the data—for instance, writing a program to log data for a specified period of time as in the case of the Twitterscapes—web-scraping typically requires more involved codework. Frequently, the data scraped must go through a series of steps all defined by code in order to be useful; and in many cases converted into a standard data format such as CSV. Let's consider another example from the visualization lab. One of the programmers was interested in getting at "the social demography of, and city dynamics relating to, bike share usage" (Informant 5, October 19, 2010). Bike share or bike rental programs had recently become quite popular in international cities. London's own bike sharing scheme, known as the Barclays Cycle Hire after its principal commercial contributor, was launched on June 30, 2010, and represents a quite large and well-utilized example. Visualizing and making publicly accessible the status of the bike share schemes piggybacks off of the data that the corporate managers of the various schemes collect for operational purposes. The number of bikes at each docking station (London has over 560 stations) is updated electronically every three minutes. The stages involved begin with the programmer using the "view source" feature in the web browser Firefox to de-visualize the "front end" of the commercial websites in order to assess what types of information are encoded. The "back end" or source code of the scheme's website showed several types of information that could be rendered spatially. Specifically, time stamps, dock location and number of bicycles were data he thought could be harvested from these websites. With visualizations in mind, he was confident "that they would be something cool to put out there" (Informant 5, September 23, 2010).
He wrote code in Python to parse the information scraped, narrowing in on geographic coordinates and number of bicycles. Using a MySQL database to store the information, the Python program pulled the selected data into CSV format by removing extraneous information (such as HTML markup). A cron program was written to schedule how frequently the web scraping takes place. Next, the programmer aggregated the information scraped for each individual bike station to scale up to the entire system or city. To visualize the data, he used JavaScript to make it compatible with many web-based map displays, such as Google Maps. In this case, he used OpenStreetMap with (near) real time information displayed for ninety-nine cities worldwide (Figure 1.2). Finally, he used Google's Visualization API8 to generate and embed the charts and graphs. Several months after the world bike sharing schemes visualizer went live, the programmer related how he was contacted by the corporate managers of four different international cities asking him "to take down the visualizations . . . likely because it made apparent which schemes were being underutilized and poorly managed" (Informant 7, October 27, 2010).
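The scrape-and-convert stage of this pipeline can be sketched in Python. The HTML pattern below is entirely invented—a real scraper is tied to the actual markup of the scheme's site—and the station IDs are hypothetical; the point is only the shape of the step: strip the markup, keep the station, count and time stamp, emit CSV rows.

```python
import csv
import io
import re

# Invented markup pattern standing in for the scheme's real "back end".
DOCK = re.compile(r'<div class="dock" id="(\w+)" data-bikes="(\d+)">')

def scrape_docks(html, stamp):
    """Turn one snapshot of a bike-scheme page into CSV rows of
    (timestamp, station, bikes), discarding the HTML markup."""
    out = io.StringIO()
    writer = csv.writer(out)
    for station, bikes in DOCK.findall(html):
        writer.writerow([stamp, station, int(bikes)])
    return out.getvalue()

page = ('<div class="dock" id="BX01" data-bikes="7"></div>'
        '<div class="dock" id="BX02" data-bikes="0"></div>')
print(scrape_docks(page, "2010-10-19T10:03"))
```

The three-minute scheduling the text mentions would then be a standard cron entry along the lines of `*/3 * * * * python scrape.py`, with each run appending a fresh snapshot to the database.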
SOURCING/PROGRAMMING/VISUALIZING

Visualizations crafted by the lab were clearly having a form of public "impact" and response. While not necessarily the type of impact accredited within the political economy of academia, the lab was all the same successful in attracting more funding based upon a reputation of expanding public engagement with its web presence. As the director remarked, "We are flush with grant money right now . . . we are chasing the impact" (October 19, 2010). Not dissimilarly to other studies of scientific visualizations (e.g., Beaulieu 2001; Lynch and Edgerton 1988; Lynch and Ridder-Vignone this volume), there was a need to pursue public outreach achieved through accessible visualizations. At the same time, the researchers and programmers themselves de-emphasized such "final" visualizations. Of course, I overheard several informants speaking approvingly of the visual styles used by colleagues in the lab. "He has good visualizations" (Informant 7, October 27, 2010) was a typical, if infrequent, response to my queries about their fellow researchers. More often, though, were the comments:

Figure 1.2 Cached MySQL database consisting of number of bicycles and time stamps "scraped" from websites (on left) with the near real time visualization of the data in JavaScript (on right; in this case London's bicycle share scheme). Bubbles indicate location of bicycle docks, with size proportional to the number of bicycles currently available. Colored lines show movement of bicycles between dock stations (Informant 5, October 19, 2010).
"He has clean code" (Informant 5, October 27, 2010); or "the style of programming is very important . . . everything around the actual algorithm, even the commenting [leaving lines within the code prefaced by '//' so as not to be readable/executable by the computer] is very important" (Informant 2, June 10, 2011). More than visualizations or data, the programming skill that made both possible was identified as the guarantor of reliability in web-based visualizations. "Code is king" assertions contrast with previous studies where researchers engaging visualizations stated greater confidence in, or preference for, the supposed raw data visualized. In this laboratory, the programmers and researchers tended to hold a conflicted opinion of data. Ingenuity in finding sources of data to visualize was admired, such as with the bike share scheme or the Twitter maps. Equally, the acts of researchers who leveraged open data initiatives to amass new repositories, such as making requests through the Freedom of Information Act to the Transport for London (TfL) to release transportation statistics, were approvingly mentioned (e.g., Informant 5, November 10, 2010). Yet once sourced, the data tended to recede into the background or bedrock as a substrate to be worked upon through programming skill.

Everyone has their preferred way of programming, and preferred style of programming . . . The data is much more clearly defined. The structure of the data and how you interface, or interact, with it. Whereas programming is so much more complex. It's not so easy . . . maybe sounds like it's isolated. But it's really hundreds of lines of code. (Informant 3, June 10, 2011)

Like this informant, many felt that the data were fairly "static and closed," and for this reason were less problematic and, consequently, less interesting to work with.
One programmer explained, with reference to the prevalence of digital data in CSV format, that you need to "use a base-set that was ubiquitous . . . because web-based visualizations were changing constantly . . . with new versions being released every week or so" (Informant 1, June 13, 2011). Such a perception of data as a relatively stable "base" was often declared in contrast to the code to "mash up" the data and render it readable by the many changing platforms and plug-ins. As a go-between, code has to be dynamic to ensure the data remains compatible with the perpetually changing platforms for visualizing. Programmers often bemoaned how much time they spent updating their code to keep their visualizations and platforms live on the Internet. Informant 1 (November 10, 2010) explained how he was "developing a point-click interface to make it easier to use the site [the mapping visualizer that the lab hosted], but it requires much more KML [a standard geospatial markup language] to make it compatible with Google Maps and OpenStreetMap [which the site used to display the maps]." When Google or
another API provider updated their software, the programmers often had to update the code for their visualizations accordingly. Perhaps unsurprisingly, activities that required skill were more highly regarded. These preferences fed into the treatment of data vis-à-vis code in terms of lab management. There was much informal sharing of datasets that had already been sourced, sharing links to where relevant data might be found, or otherwise discussing interesting and pertinent information for one another's research projects. To make sharing and collaborating with data easier and more accountable, the lab was beginning to set up a data repository using Trac, an open source project management program, on a central server. When asked about a code repository, the informant identified the need, but admitted that a "data repository was much less of a problem" (Informant 5, October 13, 2010). Instead, despite "revision control being essential to programming . . . and taught in the 2nd year of software engineering," it was largely left to the individual programmer to create a "code repository so that [you] can back up to where [the software] was working . . . like a safety net" (Informant 2, March 16, 2011). Updating the felicitous phrase describing Victorian sex, code was everywhere admired, spoken about and inherent to the survival of the lab, but never shared. This incongruous observation prompted a conversation later in the fieldwork:

TW: You don't share code, you share data?
Informant: Mostly data, not so much code. The thing with data, once you load it and you have your algorithms, you can start to transform the data in different ways. You can also (pause) . . . quite often you download data in a certain format, and you load it, and your programming transforms it in different ways. Then it becomes useful . . . With C++ there is no boundary to what you can do.
TW: So you can do whatever with the data. So you do that by writing new code?
Do you write a bit of code in C++?
Informant: I do it all the time. It's how I spend my days. (Informant 3, June 10, 2011)

Programming was held in deference for two principal reasons. First, as a creative skill it was seen as proprietary (see Graham 2004 on the parallels between artists and programmers). It was not something that could be straightforwardly taught, but as a craft it was learned on the job; and programmers strove to improve this skill in order to write the cleanest, most minimalist code. This fostered a definite peer awareness and review of code. These interpersonal dynamics fed into the larger management and economy of the lab already mentioned. This lab was chasing the impact. To do so they were creating "fast visualizations." In addition, as discussed with respect to the mapping visualizer, web-based visualizations rely upon middle-ware or go-between programs gathering distributed data and
rendering it on an ad hoc basis on the computer screen. Given the larger infrastructural medium, the lab's visualizations needed constant coding and recoding in order to maintain operability with the rapidly changing software platforms and APIs that they were tied to. Each researcher who began a project had to quickly and on an ad hoc basis develop code that could render source data visually. Intended for rapid results but not necessarily long-term sustainability, the only reasonable way to manage such projects was to minimize confusion and the potential for working at cross-purposes by leaving the coding to individuals. At a smaller research lab where programming happens, this was feasible. At larger, corporate laboratories, the many tasks of codework are often broken up in a Fordist manner among several specialists: software engineers, network engineers and graphic designers, all of whom may be involved in creating web-based visualizing platforms. Bolted together by teams, such coding is more standardized, minimalist and therefore more compatible and fungible. In contrast, the London lab found itself in a double-bind of sorts. They clearly knew they needed some form of accountability of the code produced by the lab: "Revision control is essential for the lab" (Informant 2, March 2, 2011). At the same time, the success of the lab in garnering financial support, the "soft funding" it was entirely dependent upon, was largely due "to the creative mix of code and tools" and the "hands-off management style" (Director, March 2, 2011). Individual researchers were therefore largely left to practice their own codework. Secondly, in addition to being a highly creative and skilled craft, writing code, and specifically the code itself, was seen as dynamic and potent in relation to data.
Mackenzie discusses the agency of code in general terms with respect to its ability to have effects at a distance (2005, 2006)—for instance, forging a shared sense of identity among the internationally based software engineers who develop the open source software Unix. For this lab, the software they were writing had agency in the very definite sense of transforming data. You want to "build an algorithm so you have good performance" (Informant 6, October 13, 2010). Unlike code more generally, which includes human-readable comments and other instructions, an algorithm is a subset or specific type of code that is expressly written for a computer to perform a specified function in a defined manner. The defining function of many algorithms in computer programming is the ability to manipulate data. For instance, as the informant describes ahead, a basic operation of algorithms is precisely the transformation of data from one format (e.g., the markup language KML) to a different format (e.g., C++).

An algorithm performs a well-defined task. Like sorting a series of numbers, for example. Like a starting point [for the] input of data and output of data in different formats. You can do everything with source code. You can write algorithms, but you can do all other kinds of stuff as well. (Informant 3, June 10, 2011)
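A "well-defined task" of the format-translating kind the informant describes can be sketched concretely: the Python fragment below turns KML placemark points into CSV rows. It is a minimal illustration of one such algorithm, not the lab's code; the sample placemark is invented.

```python
import csv
import io
import xml.etree.ElementTree as ET

# KML elements live in this XML namespace.
NS = "{http://www.opengis.net/kml/2.2}"

def kml_points_to_csv(kml_text):
    """Translate KML Placemark points into CSV rows (name, lon, lat):
    one small, well-defined transformation between data formats."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["name", "lon", "lat"])
    for pm in ET.fromstring(kml_text).iter(NS + "Placemark"):
        name = pm.findtext(NS + "name", default="")
        coords = pm.findtext(NS + "Point/" + NS + "coordinates", default="")
        if coords.strip():
            # KML coordinates are "lon,lat[,altitude]".
            lon, lat = coords.strip().split(",")[:2]
            writer.writerow([name, lon, lat])
    return out.getvalue()

kml = ('<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       '<Placemark><name>Soho</name>'
       '<Point><coordinates>-0.131,51.513,0</coordinates></Point>'
       '</Placemark></Document></kml>')
print(kml_points_to_csv(kml))
```

Once in CSV, the same points are "highly mutable" in the sense the chapter describes: any of the lab's languages, from Python to C++, can read them.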
Code as transitive revealed the conflicting view of data as being less than stable. In fact, during many conversations, code's potency was estimated in fairly direct relation to how malleable it rendered data.

TW: You transform these two sources of data that you have into XML files, so that you can look at them in ArcGIS?
Informant: From ArcGIS you understand how the data, how these things are supposed to be connected. So then you can connect the network in the way it's supposed to be connected, in your connection structure that you have defined in your code, in C++.
TW: Does it have to be in C++, or is that a standard for running simulations? . . . So you have to transform this data in C++, let me get at this, for a couple of reasons: One, you are familiar with it; two, it's going to be much faster when you make queries [as opposed to ArcGIS]. Anything else?
Informant: Well, I'm familiar with the code. (Informant 6, October 13, 2010)

Code's purpose was to translate and to work with data. Yet different types of code worked on data differently. Part of this was due to personal preference and background. What certain programmers could do with code depended upon their familiarity with it. For this reason, depending upon the programmer, certain code was asserted to be "top end" or "higher order." This implies that the coding language is more generalized and so could be written to perform many types of tasks. It also means, however, that the code must be programmed much more extensively. Whatever code was preferred, the acknowledgment that it transformed and manipulated data rarely led to discussion of a potential corollary: that data were somehow "constructed" or may become flawed. Part of this has to do with the data asserting a measure of independence from code. More specifically, the format of the data partially determines the type of visualization pursued and so constrains to a certain degree the coding deployed.
More emphasis was, however, given to code's neutral operation upon data. It was held to merely translate the data's format for "readability" across the networked computers and programs in order to render the visualization. Put another way, code transforms metadata, not data. The view that code transformed without corruption became most apparent when researchers discussed the feedback role of visualization in examining and correcting the original datasets.

SOURCING/PROGRAMMING/VISUALIZING

Most visualizations mashed up by the lab were not finalized outputs. Just as the format of the sourced data influenced the programming required and the choices for visualization, the visualizations themselves recursively
looped back into this process of codework. Several informants flipped the usual expectation that visualizations were end products in the information chain by discussing their integral role at the beginning of the process.

It's a necessary first step to try and visualize certain facets of the information. Because it does give you a really quick way of orienting your research. You can see certain patterns straightaway . . . you literally do need to see the big picture sometimes. It informed my research completely . . . There are some themes in your research. But you still, (pause) the visualization informs your trajectory . . . Is part of the trajectory of your research . . . You draw on that . . . because you see something. (Informant 8, October 27, 2010)

For this informant, deploying "sample" visualizations allowed him to identify patterns or other salient details in an otherwise enormous corpus of data. He was researching with Foursquare, a social media service that personalizes information based upon location. As he stated, he was "mining their API to harvest 300,000,000 records" (Informant 8, October 27, 2010). Awash in data, he needed to reduce the complexity in order to identify and anchor research problems. Much like the lab's resident anthropologist, he needed a visual counterpart to what was otherwise undifferentiated and unfamiliar clutter on the computer screen. More than suggesting what to do with mined data, another researcher noted with pride that the working visualizations actually identified flaws in the data. He was coding with open data from Transport for London and the UK Ordnance Survey. Much of this entails merging traffic flow volume (TfL data) with geospatial coordinates (Ordnance data) to create large visuals consisting of lines (roadways, paths) and nodes (intersections).
Responding to a question about accuracy in the original data and worries about creating errors through transforming the CSV files into C++, he discussed the "bridge problem." This was an instance where the visualization he began to run on the small scale actually pinpointed an error in the data. Running simulations of a small set of nodes and lines, he visually noticed traffic moving across where no roadway had been labeled. After inspecting a topographic map, he concluded that the Ordnance Survey had not plotted an overpass where it should have been.

TW: When you say you know exactly what is going on with the algorithm, does that mean you can visualize each particular node that is involved in this network?
Informant: Yes, once you program you can debug it where you can stop where the algorithm is running, you can stop it at anytime and see what kind of data it is reading and processing . . . you can monitor it, how it behaves, what it is doing. And once you check that and you are happy with that, you go on to a larger scale
. . . once you see what it is giving you. (Informant 6, October 13, 2010)

Running visualizations allowed the researcher to feel confident in the reliability of his data as he aggregated (eventually) to the scale of the UK. Whether visualizations were used by the lab researchers at the beginning of the process of codework to orient their investigations, or throughout the process to periodically corroborate the data, visualizations as final outputs were sometimes expected. Where this was the case, especially with respect to paper-based visualizations, many at the lab were resigned to their necessity but skeptical of the quality. Several felt paper was an inappropriate medium for visualizations developed to be web-based and dynamic. Yet they openly acknowledged the need for such publication within the confines of academia's political economy.

Online publishing has to be critical. For me, my research is mainly online. And that's a real problem as a PhD student. Getting published is important. And the more you publish online independently the less scope you have to publish papers . . . still my motivation is to publish online, particularly when it's dynamic . . . You might have a very, very interesting dynamic visualization that reveals a lot and its impact in the academic community might be limited . . . the main constraint is that these are printed quite small, that's why I have to kind of tweak the visuals. So they make sense when they are reproduced so small. Because I look at them at twenty inches, on a huge screen. That's the main difference, really. There's a lot of fine-grained stuff in there. (Informant 8, October 27, 2010)

Set within academic strictures of both promotion and funding, the lab's researchers found themselves needing to generate fast visualizations, while at the same time "freezing" them, or translating and reducing them, to fit traditional print.
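The "bridge problem" the researcher described—traffic observed where the network has no plotted link—amounts to a consistency check between observed trips and the edge list. A toy sketch in Python (the node labels and data structures are illustrative, not the researcher's C++ simulation):

```python
def missing_links(edges, observed_trips):
    """Flag origin-destination pairs present in the traffic data but
    absent from the plotted network -- the kind of mismatch that
    exposed the Ordnance Survey's unplotted overpass."""
    known = {frozenset(e) for e in edges}
    return sorted({tuple(sorted(t)) for t in observed_trips
                   if frozenset(t) not in known})

# The network knows links A-B and B-C; the traffic data also records
# repeated trips between C and D, where no roadway has been labeled.
edges = [("A", "B"), ("B", "C")]
trips = [("A", "B"), ("C", "D"), ("C", "D")]
print(missing_links(edges, trips))
```

Run at a small scale first, as the informant describes, such checks let anomalies be inspected one node pair at a time before aggregating to the whole country.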
CONCLUDING DISCUSSION: CODEWORK AND RIGHT WRITING IN SCIENTIFIC PRODUCTION

Given the way codework weaves together the many activities happening at the visualization lab, the demands of academic publication to assign definite credit became an arena for contestation. This is because programming, as a mode of writing and a skill integral to all of these activities, abrades against a tradition of hierarchical assignation of authorship going back to pre-Modern science (Shapin 1989). This tradition would restrict the role of software programming along the lines of Shapin's "invisible technicians." Writing code is not the right type of writing.
  • 54. law in creating the Project Gutenberg™ collection. Despite these efforts, Project Gutenberg™ electronic works, and the medium on which they may be stored, may contain “Defects,” such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment. 1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the “Right of Replacement or Refund” described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg™ trademark, and any other party distributing a Project Gutenberg™ electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE. 1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund. If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund.
  • 55. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem. 1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE. 1.F.5. Some states do not allow disclaimers of certain implied warranties or the exclusion or limitation of certain types of damages. If any disclaimer or limitation set forth in this agreement violates the law of the state applicable to this agreement, the agreement shall be interpreted to make the maximum disclaimer or limitation permitted by the applicable state law. The invalidity or unenforceability of any provision of this agreement shall not void the remaining provisions. 1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the trademark owner, any agent or employee of the Foundation, anyone providing copies of Project Gutenberg™ electronic works in accordance with this agreement, and any volunteers associated with the production, promotion and distribution of Project Gutenberg™ electronic works, harmless from all liability, costs and expenses, including legal fees, that arise directly or indirectly from any of the following which you do or cause to occur: (a) distribution of this or any Project Gutenberg™ work, (b) alteration, modification, or additions or deletions to any Project Gutenberg™ work, and (c) any Defect you cause. Section 2. Information about the Mission of Project Gutenberg™
  • 56. Project Gutenberg™ is synonymous with the free distribution of electronic works in formats readable by the widest variety of computers including obsolete, old, middle-aged and new computers. It exists because of the efforts of hundreds of volunteers and donations from people in all walks of life. Volunteers and financial support to provide volunteers with the assistance they need are critical to reaching Project Gutenberg™’s goals and ensuring that the Project Gutenberg™ collection will remain freely available for generations to come. In 2001, the Project Gutenberg Literary Archive Foundation was created to provide a secure and permanent future for Project Gutenberg™ and future generations. To learn more about the Project Gutenberg Literary Archive Foundation and how your efforts and donations can help, see Sections 3 and 4 and the Foundation information page at www.gutenberg.org. Section 3. Information about the Project Gutenberg Literary Archive Foundation The Project Gutenberg Literary Archive Foundation is a non- profit 501(c)(3) educational corporation organized under the laws of the state of Mississippi and granted tax exempt status by the Internal Revenue Service. The Foundation’s EIN or federal tax identification number is 64-6221541. Contributions to the Project Gutenberg Literary Archive Foundation are tax deductible to the full extent permitted by U.S. federal laws and your state’s laws. The Foundation’s business office is located at 809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up to date contact information can be found at the Foundation’s website and official page at www.gutenberg.org/contact
  • 57. Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation Project Gutenberg™ depends upon and cannot survive without widespread public support and donations to carry out its mission of increasing the number of public domain and licensed works that can be freely distributed in machine-readable form accessible by the widest array of equipment including outdated equipment. Many small donations ($1 to $5,000) are particularly important to maintaining tax exempt status with the IRS. The Foundation is committed to complying with the laws regulating charities and charitable donations in all 50 states of the United States. Compliance requirements are not uniform and it takes a considerable effort, much paperwork and many fees to meet and keep up with these requirements. We do not solicit donations in locations where we have not received written confirmation of compliance. To SEND DONATIONS or determine the status of compliance for any particular state visit www.gutenberg.org/donate. While we cannot and do not solicit contributions from states where we have not met the solicitation requirements, we know of no prohibition against accepting unsolicited donations from donors in such states who approach us with offers to donate. International donations are gratefully accepted, but we cannot make any statements concerning tax treatment of donations received from outside the United States. U.S. laws alone swamp our small staff. Please check the Project Gutenberg web pages for current donation methods and addresses. Donations are accepted in a number of other ways including checks, online payments and
  • 58. credit card donations. To donate, please visit: www.gutenberg.org/donate. Section 5. General Information About Project Gutenberg™ electronic works Professor Michael S. Hart was the originator of the Project Gutenberg™ concept of a library of electronic works that could be freely shared with anyone. For forty years, he produced and distributed Project Gutenberg™ eBooks with only a loose network of volunteer support. Project Gutenberg™ eBooks are often created from several printed editions, all of which are confirmed as not protected by copyright in the U.S. unless a copyright notice is included. Thus, we do not necessarily keep eBooks in compliance with any particular paper edition. Most people start at our website which has the main PG search facility: www.gutenberg.org. This website includes information about Project Gutenberg™, including how to make donations to the Project Gutenberg Literary Archive Foundation, how to help produce our new eBooks, and how to subscribe to our email newsletter to hear about new eBooks.
  • 59. Welcome to Our Bookstore - The Ultimate Destination for Book Lovers Are you passionate about books and eager to explore new worlds of knowledge? At our website, we offer a vast collection of books that cater to every interest and age group. From classic literature to specialized publications, self-help books, and children’s stories, we have it all! Each book is a gateway to new adventures, helping you expand your knowledge and nourish your soul Experience Convenient and Enjoyable Book Shopping Our website is more than just an online bookstore—it’s a bridge connecting readers to the timeless values of culture and wisdom. With a sleek and user-friendly interface and a smart search system, you can find your favorite books quickly and easily. Enjoy special promotions, fast home delivery, and a seamless shopping experience that saves you time and enhances your love for reading. Let us accompany you on the journey of exploring knowledge and personal growth! ebookgate.com