
Data at the Dawn of the Anthropocene

How did the passive observation of nature give way to efforts to predict and ultimately control it? Author Aaron R. Hanlon weaves a historical account.

First came the comet, then the computers. In 1682, when the comet cut its path across the summer sky, Edmond Halley was ready. Having conducted research on comet sightings from the 14th century onward, he could recognize that the paths of three comets—the first observed by Peter Apian in 1531, the second by Johannes Kepler in 1607, and now the path of the comet Halley himself witnessed at home—were suspiciously similar.

Reasons for a Bill: A Reward for the Discovery of Longitude. Courtesy the Library and Archives of the Royal Society

Halley surmised that these three comets were not distinct celestial phenomena but a single comet that could be seen from Earth roughly every seventy-six years. At first Halley was not sure what to do with his data, since no one really knew what kind of path comets took, whether they moved in orbits or in some other fashion. Later, in 1705, using Newton’s laws to calculate gravitational effects on the orbits of comets, Halley published his findings in the groundbreaking volume A Synopsis of the Astronomy of Comets. If Halley was right, his comet would return to view in 1758. Halley died in 1742. But sure enough, come 1758, what we now call Halley’s Comet returned, and has done so every 74–79 years. It is due to be sighted next in 2061.1

“Computer” was the term used in 17th- and 18th-century Britain to refer to people who made laborious mathematical calculations by hand, typically in astronomical contexts.2 They did the unglamorous work that predictions like Halley’s required, processing massive amounts of observational data taken from instruments like the astronomical sextant, a telescope mounted on an arc that enabled one to measure angles between two objects in the sky. When the comet appeared in 1682, Halley spent seven days assiduously recording its position against that of fixed stars in the night sky and measuring the length of its tail. But to account for inconsistencies in Halley’s published data and to ascertain the comet’s date of return, computers set to work.

I begin with this story because it illustrates one of the most important developments of the historical Enlightenment: the effort not only to passively observe nature to know its ways but to predict and ultimately control it. Though this episode was not the first time that people sought both to know and to apply new knowledge, it is a particularly lucid example of the fusion of the desire to know and the desire to claim some semblance of control in a world full of unknowns. Today, rightly or wrongly, we associate astronomy with—per the cliché—“blue sky” thinking, futurity, an unwillingness to be constrained by immediate concerns. But during the Enlightenment, astronomy was one of the most consequentially applied sciences. It held the key to the knowledge required to give humans a modicum of control in dealing with one of nature’s most formidable variables: the earth’s vast oceans and seas.

In the Scilly naval disaster of 1707, the British Royal Navy lost four ships, amid perilous weather off the Isles of Scilly, to disorientation at sea. In one of the worst maritime disasters in British naval history, more than 1,300 sailors lost their lives. This event drew attention to a vulnerability of which Queen Anne and parliament were already acutely aware. Although it was easy enough for navigators to calculate the latitude of a ship’s position by observing the position of the sun during the day and using declination charts, or by sighting fixed stars of known declination at night, the calculation of longitude, by contrast, was a seemingly intractable problem. When ships’ crews lacked the ability to orient themselves by the presence of visible land or according to latitudinal and longitudinal coordinates, voyages could be delayed, rations depleted, direct routes avoided out of necessity, and, worst of all, ships lost at sea.
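For readers who want the arithmetic, here is a minimal sketch, in Python, of the noon-sight latitude calculation described above. It assumes the sun bears due south at local noon; the function name and sample figures are illustrative rather than drawn from any period source or from the article.

```python
# A minimal sketch (illustrative, not from the article) of the noon-sight
# arithmetic behind latitude-by-declination-chart navigation.
# Assumption: the sun bears due south at local noon (observer north of the sun).

def latitude_from_noon_sight(observed_altitude_deg: float,
                             solar_declination_deg: float) -> float:
    """Return latitude in degrees from a noon altitude of the sun.

    Zenith distance = 90 - altitude; with the sun due south of the observer,
    latitude = zenith distance + declination (from a published chart).
    """
    zenith_distance = 90.0 - observed_altitude_deg
    return zenith_distance + solar_declination_deg

# Hypothetical example: the sun culminates 38.5 degrees above the horizon and
# the declination chart gives -2.0 degrees; estimated latitude is about 49.5 N.
print(latitude_from_noon_sight(38.5, -2.0))
```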

Once out of sight of land, and therefore without a visible reference point, navigators relied on a technique called “dead reckoning,” which involved periodically estimating the ship’s position from a previously known position by tracking its heading, estimated speed, and the time elapsed along the way. As one might imagine, dead reckoning was highly prone to error. It required making frequent estimations—and thus there were many occasions for possible error—and inevitably ran up against the variable and unpredictable effects of ocean conditions, visibility, and weather. It was also difficult to make precise measurements and calculations in a leaky cabin or on a swaying ship. When Halley traced the comet’s path across the sky, measuring its position and tail length, he stood on firm ground. Dead reckoning was another matter.
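The sketch below, again in Python and again with invented numbers, shows the bookkeeping behind dead reckoning as described above: each estimated position is built from the last using logged heading, estimated speed, and elapsed time, so any error in one leg is carried into every later fix. The flat-earth approximation and the sample log are assumptions for illustration only, not a reconstruction of period practice.

```python
import math

# A minimal dead-reckoning sketch (illustrative only): each new position is
# estimated from the previous one using logged heading, estimated speed, and
# elapsed time. Errors in any leg propagate into every subsequent fix.

def dead_reckon(start_lat: float, start_lon: float, legs):
    """legs: iterable of (heading_deg, speed_knots, hours).
    Returns the estimated (lat, lon) after each leg, using a crude
    flat-earth approximation (1 degree of latitude ~ 60 nautical miles)."""
    lat, lon = start_lat, start_lon
    fixes = []
    for heading_deg, speed_knots, hours in legs:
        distance_nm = speed_knots * hours
        heading_rad = math.radians(heading_deg)
        # North component moves latitude; east component moves longitude,
        # scaled by the shrinking length of a degree of longitude at high latitude.
        lat += (distance_nm * math.cos(heading_rad)) / 60.0
        lon += (distance_nm * math.sin(heading_rad)) / (60.0 * math.cos(math.radians(lat)))
        fixes.append((lat, lon))
    return fixes

# Hypothetical log: three four-hour watches sailed roughly west-southwest
# from a last known fix near 49.5 N, 6.5 W.
print(dead_reckon(49.5, -6.5, [(250, 6.0, 4), (245, 5.5, 4), (255, 6.5, 4)]))
```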

To address this challenge, Queen Anne’s government passed the Longitude Act of 1714, which established a prize competition to encourage the development of new ways of determining longitude at sea. By 1720, when Halley became the second Astronomer Royal, the director of the Royal Observatory in Greenwich, computers were hard at work on their calculations, and inventors were designing and testing new sextants and chronometers (instruments for measuring time accurately under conditions of varying motion, temperature, humidity, and air pressure). The business of charting the sky was integral to the business of transoceanic trade and British imperialism.

This history is important in light of our contemporary understanding of the Anthropocene. In 2000 Paul J. Crutzen and Eugene F. Stoermer defined the Anthropocene as our current geological and historical epoch, in which “humans have become the single most potent force in shaping terrestrial and marine ecology, global geology, and the atmosphere.”3 Central to this definition is the idea that human industrial and scientific advancement is not the result of passive activities conducted under the auspices of an endlessly accommodating Earth. In other words, the Anthropocene is to some extent a product of modern conceptions of our relationship to nature: we are not perpetually at its mercy, but rather have the knowledge and the capabilities required to bring nature to order, to impose our will upon it.

For this reason, scholars of the Anthropocene frequently locate its origins in the historical Enlightenment, at the end of the 18th century in particular. And, in turn, scholars examining the 18th century and the historical Enlightenment have taken interest in the Anthropocene. As Alan Mikhail writes, “the idea of the Anthropocene allows for a new, expanded, ecologically inflected understanding of the beginnings of modernity, one that invites both humanists and scientists to the table of a long tradition of trying to explain the emergence of the modern world.”4

My contention here will be that the concept of data, including the data that gave rise to Halley’s predictions and the toiling of countless computers from the 17th century onward, is a central component of both the form of modernity characterized by the Enlightenment and the emergence of the Anthropocene age. Halley’s story illustrates a series of developments without which neither the Enlightenment nor the Anthropocene would be comprehensible: the belief that the mysteries of nature can be known and that specific elements of nature are predictable and controllable; and the prominence given to data, so that it became the currency of both modern knowledge and the Anthropocene age itself.

Like “computer,” “data” sounds like it does not belong in the English language of the 1600s, given its association today with computing and information science of the 20th-century sort, and with what Rita Raley calls “dataveillance,” the 21st-century practice of monitoring and collecting personal digital data to know things about people.5 But the first instance of the term “data” I have come across in an English-language text appears in 1630, in a tract by William Batten on—as one might have guessed by now—how to calculate the sun’s azimuth and amplitude for maritime navigational purposes.

William Batten, data, 1630.

Batten uses “data” in one of its two most common 17th-century senses.6 Imported from Latin, in which the plural word data means “things given,” “data” in English tended to describe various kinds of givens, things that could be taken as true or axiomatic.7 Batten labels the figures in his table “data” because we are to take them as givens of the ship’s position, recordings not to be questioned but to be accepted for the sake of the mathematical exercise he demonstrates here. In this usage, in other words, data describes axiomatic, and therefore unarguable, statements or figures given for the sake of calculation.

The second most common usage of the term in the 17th century was theological. As Daniel Rosenberg notes, the phrase “a heap of data,” as it appears in a 1646 theological tract by Henry Hammond, is “not a pile of numbers but a list of theological propositions accepted as true for the sake of argument.”8 In this sense, these propositions are to be taken as “data” because, like Batten’s reference to figures in his chart, they are to be taken as given, as axiomatic: the word of God is neither to be questioned nor subjected to empirical testing. Over the course of the 18th century and into the 19th century, “data” was also used to describe historical facts, as in Joseph Priestley’s Lectures on History and General Policy (1788), and the events that took place in fictional narratives, as in Elizabeth Hamilton’s quixotic novel Memoirs of Modern Philosophers (1800).

Crucially, when “data” enters the English language, its function is to signal a type of information meant to be taken as given, not to be questioned, whether in the realm of mathematics or of theology. What I have called Halley’s astronomical data, the data the computers computed, was not actually called “data.” This omission points to a curious fact about the emergence of “data” as a word and a concept in the English language: though used as early as the 17th century with regard to forms of evidence or items in support of a proof or an argument, it was not employed to describe the results of scientific experimentation, numerical or otherwise, until the end of the 18th century.

This observation allows us to draw an important lesson about the history of data and its relationship to the Anthropocene. Because of the term’s Latin meaning—things given—“data” entered the English language for the purpose not of describing any particular form of evidence—such as numbers or empirical results—but of treating evidence or propositions as given. Astronomers like Halley understandably would not have referred to the heaps of figures the computers processed as “data” because their method was generally to observe things in nature and then subject their observations to empirical and mathematical tests: nothing was to be taken as given. The motto of the Royal Society—the patron organization for experimental science, founded in 1660 and chartered by King Charles II in 1662—was “Nullius in verba,” “on the word of no one” or “take no one’s word for it.” Halley and other Fellows of the Royal Society were interested in things tested, things demonstrated, not things given. By the end of the 18th century, however, when the legitimacy of experimental science as a way of knowing had become more widespread and data was increasingly associated with empirical or experimental results, the term came to signify a different kind of given: that which is given because of the reliability of empirical science to vouch for its authority.

The word “data,” then, I suggest, matured around the same time that the Anthropocene age began. Not only had scientific and technological progress brought us to a point where we could reliably predict and sometimes control aspects of nature, such that humans could become the primary shapers of ecology, geology, and atmosphere, but also the givens—the unquestionable—had changed. This is not to say that scientific findings themselves were not questioned or were regarded as unquestionable—far from it—but that the rhetorical force of the word “data” began to shift, moving from the realms of mathematics and theology into the spheres of empirical science and public policy.

Today, in the throes of the Anthropocene age, as the stakes of our understanding of and trust in climate data increase with the passing of time, it is crucial to observe two key components of the historical concept of data discussed above. First, the modernization of data as a concept coincided with a growing association of data with the empirical, the observable. As Paul N. Edwards writes of 21st-century climate science in A Vast Machine, “the models we use to project the future of climate are not pure theories, ungrounded in observation. They are filled with data—data that bind the models to measurable realities.”9 That is, regardless of evidentiary form, the basis of data remains empirical, observational. Numerical or quantitative representation may be the most common way to show data today, though we should not fail to acknowledge the importance of understanding where any particular dataset comes from. The second key component—especially important to bear in mind in relation to the first—is that data has always performed a rhetorical function.10 The legacy of the choice to call something “data” is to signal its given-ness, to posit that we should accept its truth-value.

Appeals to data therefore have a double edge. They ask you to accept something as given, as factual, as unquestionable. But they also tend to come with a tacit presupposition that given x, a particular response should logically follow: “given the data, you should do the following. . . .”

But as the history of climate denial demonstrates, it is not enough to give a fact or pose a question. We must answer an additional question. If data is the currency of knowledge in the Anthropocene age, what will be the currency of persuasion?

  1. “Halley” is pronounced like “Sally,” though the “Halley” in “Halley’s Comet” is often mispronounced with a long “a” and a long “e,” like “Bayley” or “Kayleigh.”
  2. See David Alan Grier, When Computers Were Human (Princeton, NJ: Princeton University Press, 2007).
  3. Alan Mikhail, “Enlightenment Anthropocene,” Eighteenth-Century Studies 49, no. 2 (2016): 211.
  4. Mikhail, “Enlightenment Anthropocene,” 212.
  5. Rita Raley, “Dataveillance and Counterveillance,” in Raw Data Is an Oxymoron, ed. Lisa Gitelman (Cambridge, MA: MIT Press, 2013), 121–46.
  6. “Data” was a term not widely used in the English language in the 17th and 18th centuries, although, as in the examples cited here, it was employed in various specialist ways.
  7. When the plural word “data” entered the English language, it was almost immediately singularized: people wrote of “the data” about as often as “these data.” The Latin singular “datum” never really took off in English, I suspect because the epistemological value of “data” was always in the aggregate. A “datum” is simply a singular account, an item, something less than an anecdote.
  8. Daniel Rosenberg, “Data Before the Fact,” in Raw Data Is an Oxymoron, ed. Gitelman, 20.
  9. Paul N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (Cambridge, MA: MIT Press, 2010), xii.
  10. See Rosenberg, “Data Before the Fact,” 18: “The semantic function of data is specifically rhetorical.”

Contributor

Aaron R. Hanlon

Aaron R. Hanlon is an assistant professor of English at Colby College and a Lisa Jardine History of Science Fellow (2019) at the Royal Society of London, whose library collections are the basis for this article. His first book is A World of Disorderly Notions (University of Virginia Press, 2019). His second book, a conceptual history of science denial, is with Johns Hopkins University Press.
