History and Technology
An International Journal
Volume 39, 2023 - Issue 3-4
Historiographic Essay

Computation, data and AI in Anthropocene history

Pages 328-346 | Received 15 Nov 2023, Accepted 11 Mar 2024, Published online: 08 Apr 2024

ABSTRACT

This essay engages with recent scholarship on the epistemology of AI, data and automation to show how these practices are becoming increasingly central to the projects of both monitoring and managing a global environment. We also review Jürgen Renn’s recent contribution The Evolution of Knowledge (2020) in relation to the history of environmental data. Using Renn as a point of departure, we stake out a way of understanding the Anthropocene through the interaction between data and environment, taking into account the deeper political implications of datafication. We conclude with a discussion of how historians of technology and environment could play an important role in assessing the opportunities and risks of AI for global environmental justice before their full-scale implementation is a fait accompli. In the face of the Anthropocene, there is a general need today for integrative efforts that bridge knowledge from natural, technical, social and humanistic domains, and therefore a strong imperative for humanistic studies to transpose tools, methodologies and insights into the realms of policymaking and legislation. Assessments of AI and environment must thus account for these historical processes in the present as well as offer critical analysis of the full ontological spectrum from object to epistemology via data and mediation.

The datafication of the global environment

Over the past half century, the global environment has become subject to an accelerating mediation and datafication. Knowledge, management and governance of the Earth system are completely dependent on enormous flows of data from a ‘vast machine’ of measuring tools.Footnote1 These processes combined have formed what we call a ‘mediated planet’, which is to say that the dominant mode of knowledge of our planet – of a global environment – is the result of sensing, data processing, storage and transmission.Footnote2 If the planet and its overlaying spheres are understood as global environmental commons – as suggested by the UN reports on the Sustainable Development Goals (SDGs) from 2018 to 2023Footnote3 – this raises questions regarding the governance of the digital infrastructures that we depend on for that planetary knowledge today.

We contend, in brief, that mediation is always subject to politics. This is not to say that mediation is reducible to political preference, but the point is worth emphasizing because of the predominant conception in academia, in policymaking and in public debates that more data in and of itself leads to better global environmental governance.Footnote4 This is not the case. While it is true that the trajectory since the postwar era has been towards increased datafication of the environment – owing to successively greater computing capacities – this datafication has not, in comparative terms, helped humanity mitigate its environmental impact. Quite the opposite: increasing datafication has been synchronous with increasing extractivism.

In this sense, then, the increasing amounts of environmental data being produced can be said to constitute a part of the Great Acceleration in all manner of resource use, with no end in sight.Footnote5 Against this background, this essay engages with recent critical work on data and AI in relation to environmental applications and environmental change. We focus in particular on the contribution made by Jürgen Renn’s book The Evolution of Knowledge: Rethinking Science for the Anthropocene, as it offers a deep historicization of the digitalization of the environment by placing it within a longer evolution of abstraction and externalization reaching back to the economic structures of Mesopotamia. By digitalization, we refer here to the widespread deployment of digital information and communication technologies.Footnote6

Today, datafication and digitalization of the environment have reached a point where intergovernmental actors like the European Union (EU) are building a full-scale digital twin of the planet. Such large-scale digital twins are intended as replicas of large physical systems, like the world ocean or the whole Earth system. To this end, digital twins use continuous real-time data from sensor networks on past and present behavior to model and forecast future scenarios, which then become central for policies and actions in the physical world.Footnote7 In brief, the modelling of parts or the entirety of the Earth system as digital twins represents an increasing reliance on large-scale digital infrastructure for global environmental governance.

The digital twin of the planet is presented as a coherent, high-resolution, multi-dimensional, multi-variable and near real-time representation of the Earth system that integrates new data sources with modelling, artificial intelligence and high-performance computing.Footnote8 The so-called Digital Twin Earth (EU DTE) was initiated by the European Commission, along with partners such as the European Space Agency (ESA). The EU DTE is intended not only to integrate data on both natural and human activity on the planet but also to provide forecasts of these.

A main feature of the digital twin is connectivity and interactivity. The aim is to supply citizens and decision makers with up-to-date, science-backed knowledge for the express purpose of protecting environments. This two-way exchange of information between the Earth system and the digital twin creates a feedback loop between the digital and the physical realms, so that modelled scenarios guide decision-making to change the modelled environment. In a comment from the EU, this form of modelling technology is described as a means to transform our understanding of, and relationship with, the planet. The Commission develops the EU DTE explicitly to empower a shared European responsibility to monitor, preserve and enhance ecosystems and habitats, and to support a sustainable future. The scope of the EU DTE is vast, as it professes to provide real-time updates and long- and short-term forecasts of anthropogenic impacts, along with biodiversity conservation strategies at regional and planetary scales.

As mentioned above, the mediated planet emerging from initiatives like the EU DTE has co-evolved with the Great Acceleration. It is synchronous with the planet’s entry into the Anthropocene, as understood by the criteria used within the Anthropocene Working Group.Footnote9 While the concept of the Anthropocene has been criticized for naming an environmental impact after all of humanity,Footnote10 with reference to the causes and effects of this impact not being evenly distributed among humans, geologists make use of a narrower set of data on impacts on the Earth system, which identifies a mid-century ‘golden spike’ (Global boundary Stratotype Section and Point) corresponding with the use of nuclear weapons as the starting date for this proposed geological epoch. This is not to say that no anthropogenic planetary changes occurred before this point – indeed, there is a longer history of world systems and the development of infrastructures on which subsequent global environmental knowledge has been founded.Footnote11

On the one hand, the knowledge that relies on accumulating environmental data has contributed to the emergence of a particular conception of the environment, since we could overview – and see – the harmful effects of human activities on that environment.Footnote12 On the other hand, those same surveying technologies rely on staggering increases in emissions and are deployed to enable still further extraction of fossil fuels, and can thus be said to constitute a core part of the Great Acceleration itself. Wealth has come to more people on Earth, but at a high cost for the planet’s habitability, for example with regard to other species. Today, these tendencies converge in what can be described as the technosphere – a term popularized by geologist Peter Haff to describe an emerging planetary layer spread throughout and above the biosphere, lithosphere, hydrosphere, cryosphere and atmosphere. Much as the biosphere constitutes the Earth’s total biomass, the technosphere’s thirty trillion tons of human-manufactured objects can be observed as an interconnected, parasitical technical system. The technosphere draws resources and fuel from all other spheres in order to grow, and in the process it tampers with the Earth system as a whole. The observation of the technosphere, for example through sensors spread over land, throughout the seas, and up into or beyond the atmosphere, is crucial to knowing about any planetary phenomenon, while the very same tools of observation themselves constitute part of the technosphere’s expansion.Footnote13

With the onset of the Anthropocene, humans have irreversibly changed the environment of the planet in every aspect. In this essay, we argue that the central enablers of this radical environmental change were, and remain, data-gathering technologies. While these have a longer prehistory, it is particularly the development of computational technology since the postwar era that drove the accelerated development into the present-day Anthropocene condition. What is the role of environmental data in this historical development, and how can we understand a near future in which the coupling of digital technology and environment is likely to intensify? In this sense, we approach Anthropocene history as a question of the relationship between sustainability and digitization, in which technological path dependencies from the past and present extend into the shaping of the future. This technological focus in Anthropocene history deals with the material conditions of possibility for the emergence of the new epoch rather than with responsive concepts such as planetary boundaries or Half-Earth. The point is that the knowledge infrastructures we describe form the conditions of possibility for any given response to the Anthropocene.

Today, rapidly growing big datasets are used to train Artificial Intelligence (AI), which is developed as a means to compete commercially, to secure geopolitical interests, as well as for scientific purposes. While the corporations known as Big Tech (Alphabet, Amazon, Apple and Microsoft) have developed AI systems since the early 2010s to secure an oligarchic dominance over digital infrastructure and data capture, state actors and intergovernmental bodies of all kinds are now trying to grapple with the new political conditions for the knowledge economy that these infrastructures create.

So far, critical scholarship on data and AI has mostly focused on the consequences of extracting and exploiting human behavioral data, with notable exceptions like Kate Crawford’s Atlas of AI and Thomas Mullaney et al.’s co-edited volume Your Computer is on Fire, which offer routes for dismantling the power of Big Tech. The authors of the latter volume show how computing and new media are nothing if not entirely physical, material and organic, in which the physical fires of energy combustion are a counterpoint to the concept of the lofty data cloud that companies encourage us to talk about. This extractive metabolism consists in a direct link between ever-growing computing power and rising environmental costs (Ensmenger). In addition to the extractive metabolism of energy, there is also a huge hidden cost in the form of exploitative human labor, most recently discussed in the case of clickworkers in Kenya, paid below international minimum-wage standards, who sift through material considered controversial so that users of large language models such as OpenAI’s ChatGPT do not have to see it. In brief, everything that happens online or in the virtual has a stark material counterpart that should not be forgotten, which is particularly clear in the case of large language models, as pointed out by Emily Bender, Timnit Gebru and colleagues in their seminal 2021 paper ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’.Footnote14

Another recent development concerns the use of AI models for environmental data, such as the digital twins mentioned above. A key feature of AI systems is the ability to perceive and learn from their surroundings, i.e. digital and physical environments. Models are trained using big data sets as source material – a ‘ground truth’ of sorts for how these systems learn and interact with new data.Footnote15 In this sense, the past and present affect visions of the future, as past biases are programmed into new AI models. This has raised concerns that algorithmic bias reproduces pattern discrimination,Footnote16 of which class, gender and race are the most frequently cited examples of social categories affecting interactions, predictions and decisions.Footnote17 But what about algorithmic bias with regard to environmental data? To understand this, we have to trace the epistemology and economics of big data alongside emerging uses of AI that underpin how a global environment is known and managed.

AI, capitalism and the environment

In relation to environmental data, we believe that AI should be understood historically – as part of a longer development of human interventions at various scales in the Earth system. This entails analyzing the technical mediation of knowledge about the planet over centuries as a process having both epistemological and material dimensions, in that knowledge about the natural world comes to inform interventions into it on an increasing scale, from early modern maps and sea charts to the digital twins of today. Our theoretical concept of ‘environing media’ – which builds on the concept of environing technology, presented previously in this journal by Sörlin and WormbsFootnote18 – captures a feedback loop between knowing and doing on a planetary scale, as new knowledge gained from environmental data processing is implicated in interventions into and alterations of the environment.Footnote19 We claim, in brief, that knowing the environment is increasingly crucial for making environments. Theoretical concepts like environing media and the mediated planet are means whereby we can analyze how AI enters the field of knowing and doing a global environment.

AI is in many ways a perfect example of the feedback loop inherent to environing media. As autonomous systems like AI are trained on available data, they produce new knowledge which then informs interventions back into the material world. To take a straightforward example, consider how autonomous systems for sensing and interpreting CO2 concentration in the atmosphere inform decisions on implementing new technological systems to reduce the global heating resulting from that increased CO2 concentration. NASA and Harvard University’s project SCoPEx (Stratospheric Controlled Perturbation Experiment) is another example of this feedback loop. Using a scientific balloon to release calcium carbonate into the atmosphere so as to test solar radiation management techniques, SCoPEx demonstrates a form of geoengineering informed by an environmental epistemology of anthropogenic climate change, one that would in turn be deployed as a fully automated system. As various data-driven Earth sciences couple with the capabilities of AI, we see a need for an integrative approach that questions, but also offers alternatives to, how the owners of these systems are concentrating power. There is, for starters, the question of how future uses of AI will affect the Earth system as a whole, but also at what levels of governance solutions to unwanted effects can be formulated or legitimized.

Perhaps surprisingly given its prominence, ‘AI’ is notoriously hard to pin down to a textbook definition and therefore also difficult to regulate and craft policy for. Machines with traditionally biological capabilities, like perception and cognition, often feature in the various attempts to define AI. In simple terms, then, we may understand AI as machines that can perceive their surroundings – i.e. environments – and from this perception learn how to solve problems and reach goals. A recent review of AI and the SDGs defines AI as a ‘software technology with at least one of the following capabilities: perception, decision-making, prediction, automatic knowledge extraction and pattern recognition from data, interactive communication and logical reasoning’.Footnote20 Like humans and non-human biological organisms, AI systems can then be understood through their ability to learn and change behavior to solve problems or improve desired outcomes. Somewhat counterintuitively, the surging popularity of AI has so far aggravated ambiguities around the concept even within expert discourse,Footnote21 in turn hampering attempts by policymakers to regulate the technologies involved, with the debates and lobbying around the EU AI Act being a case in point.Footnote22 Many developers simply use ‘machine learning’ as a less hyped and more technically specific term. As AI researcher Michael Wooldridge puts it,

While the media generically uses the label ‘AI’ in their coverage of our field, the headline advances have been in machine learning (ML), and more specifically, in the field of neural networks. Advances in algorithms for training neural nets, new neural network structures (particularly convolutional neural nets, generative networks, and adversarial networks), coupled with the availability of rich, carefully curated data sets and the availability of cheap computer power for training neural nets, have made possible systems that seemed firmly out of reach just two decades ago.Footnote23

In Wooldridge’s view, these advances are within what researchers call ‘narrow’ AI systems: systems which have the ability to solve clearly defined problems but where humans essentially control and steer the process. By contrast, ‘general’ AI systems – those in which a machine can reason for itself – have yet to be developed. Narrow AI works to solve the precise tasks it has been designed for, and works well until it encounters phenomena that greatly differ from what it has experienced before. In this way, AI in its current state depends heavily on the notions, worldviews and aims of its developers, which is why these systems, in their biases and discriminations, currently reflect the ideology of Silicon Valley.

Kate Crawford, who focuses on equity and justice in AI, has recently proposed that AI is neither artificial nor intelligent.Footnote24 By this she means that the abstraction of a disembodied AI hides the huge human and environmental costs that go into running any such technological system. The hype around AI has even led to instances of fauxtomation, where the services of alleged AI systems are actually carried out by a global proletariat, working below what the UN defines as a minimum wage, to create for customers the semblance of an efficient AI system that does not yet exist.Footnote25 In another project, called Anatomy of an AI System, Crawford demonstrated how the disembodied appearance of an AI system like the Amazon Echo is part of its commercial attraction but obfuscates the costs of human labor, unclean energy and rare-mineral mining.Footnote26

Definitions of AI often reflect assumptions, or wishes, by developers of digital technologies that the sector should remain neutral or apolitical. Since this putative neutrality helps keep accountability at bay, it ends up serving the AI companies. A recent example is the intense lobbying of the US Senate and the EU by OpenAI and its CEO Sam Altman to avoid having their model GPT-4 defined in a way (as a general-purpose model) that would allow legal accountability for its harmful effects. The envisaged uses of AI models range from very practical applications that increase revenues from digital platforms operated by Silicon Valley-based tech firms, like Microsoft and Alphabet, to solving existential challenges like climate change or modelling epidemics. While government agencies and universities conduct their own development of AI, many institutions, initiatives and personnel still relate to, or end up being recruited by, these tech firms.Footnote27

Shoshana Zuboff has proposed the term ‘surveillance capitalism’ for the new economic logic whereby human experience is claimed as free raw material for hidden commercial practices of extraction, prediction and sales. Until recently, this new form of capitalism has primarily monetized the digital traces that end users produce when using a supposedly free service like Google’s search engine or Facebook’s social media network. The data rendered by users is continuously captured, stored and processed to be turned into products on behavioral futures markets, i.e. sold to corporations interested in buying predictions of, and increasingly steering, human behavior for commercial or political purposes. The latter can be exemplified by dubious attempts by tech firms, like the notorious Cambridge Analytica, to influence democratic processes, a feature widely discussed in relation to the UK referendum and US election of 2016.

Zuboff contends that the economic model of surveillance capitalism differs from that of industrial capitalism primarily because it ‘unilaterally claims human experience as free raw material for translation into behavioral data’. The data harnessed is then fed into machine learning algorithms, which produce predictions, or anticipations, of how people will behave. In the later stages of behavioral data analysis, the surveillance capitalist shifts from predicting to modifying human behavior by combining data from the user’s online activities with data gathered about their activities in the physical world, for example from smartphones and other electronically connected devices that together make up what is referred to as the Internet of Things. This process was made possible by the lack of regulation and jurisdiction – surveillance capitalists could simply claim the data from unknowing users. Surveillance capitalism also notably presupposes the technology of controlled and steered AI to make the goal-oriented processing of big data possible.

Nick Couldry and Ulises Mejias have proposed that we should also think of these translations into behavioral data in terms of data colonialism.Footnote28 They insist that the term colonialism is not used in a metaphorical sense but because it accurately describes the logics of operation into which surveillance capitalism has evolved. Historical colonialism had four key components, according to Couldry and Mejias: resource appropriation; the unequal social and economic relations that secured it (including slavery); an unequal global distribution of the benefits of these resource appropriations; and the spread of ideologies to justify the process. However, Couldry and Mejias do not discuss environmental data as a source of resource appropriation.

Recent developments by leading surveillance capitalists suggest that a new field of AI systems will indeed be constituted of environmental data. Microsoft has since 2018 invested US$50 million yearly in its program ‘AI for Earth’, claiming it can reduce greenhouse gas emissions while boosting GDP by US$5 trillion by the year 2030 – a claim that says nothing of how the company’s own emissions are accounted for using dubious carbon dioxide removal offsets.Footnote29 Ahead of the COP26 climate meeting in 2021, Microsoft launched an AI system called Microsoft Cloud for Sustainability, a carbon accounting platform to be used for reporting on CO2 emissions globally. It is self-evident that controlling a global carbon accounting system yields immense power in an economic system based on carbon emissions, which raises the question of who should be allowed to own, control and profit from such a digital infrastructure. As an example, the EU’s Corporate Sustainability Reporting Directive (CSRD), implemented in 2023, requires companies by law to publish regular reports on their environmental and social impact activities. The control of the digital and automated accounting of these activities thereby becomes a potential vessel for power and epistemic truth.

Currently, the tech industry is responsible for 3 percent of annual global greenhouse gas emissions, on a par with aviation. At an exponential growth rate, which AI systems would perpetuate and accelerate, emissions could rise to 14 percent by 2040, if developments from past decades are projected into the future. A growing interest in large language models in recent years points in this direction.Footnote30 These emissions increasingly come from cloud computing services. In addition, Big Tech companies offering AI services, like Microsoft, Google and Amazon, are aggressively marketing their services to the fossil fuel industry, claiming to help optimize and accelerate oil production and resource extraction. Based on current trends, AI systems are rapidly becoming integral to the very ecological exploitation their developers profess to mitigate.

In relation to how AI systems have the potential both to accelerate and to mitigate the extraction and consumption of natural resources, we should also consider the current shift from economic theories focused on GDP towards something akin to what economist Partha Dasgupta refers to as natural capital.Footnote31 The concept of global environmental commons is increasingly used in environmental policy – for example by the EU and UN, notably in the two landmark Global Sustainable Development Reports of 2019 and 2023 – to promote regulation of crucial life-supporting planetary phenomena that are visible only through aggregated datasets depicting a global environment. The global environmental commons include the oceans, land, biodiversity, cryosphere, forests and the atmosphere, and can be understood as systems, or large-scale biomes, that contribute directly or indirectly to the functioning of the Earth system and hence to supporting life itself.Footnote32 In economic terms, these commons make up a stock of natural capital from which flow benefits that are often shared across humanity.

As Jürgen Renn suggests, it is specific forms of knowledge, rather than just data and information, that are necessary, even essential, elements for equitable governance in the Anthropocene.Footnote33 Against the background of Big Tech developers of AI systems such as OpenAI’s GPT-4, Renn’s argument can be used to expand policy regulation from individual rights to protecting these global environmental commons of knowledge, as AI systems directly affect them, even when implemented for scientific purposes. Under the present voluntary regulation paradigm, which has proven toothless in a number of court cases regarding the use of language models, the pool of common knowledge becomes private property merely through the capture of the enormous data flows needed to train and run these models. As the datafication and digitization of these global environmental commons proceeds, questions about the control and ownership of these big data become more salient.

With this said, Microsoft’s AI offer to reduce emissions while boosting GDP is a technically, and technocratically, attractive option for world leaders, as became visible when world economies struggled during and after the COVID-19 pandemic, an episode that induced recession and saw growing demand for political action on climate change. It is worth keeping in mind then, as Zuboff exemplifies abundantly, that ‘free’ services in the digital economy come with hidden price tags. Just as in earlier phases of surveillance capitalism, offers will be attractive but will likely hide that the basis of this business model is the surveillance capitalist’s ability to capture data and profit from AI-driven processing of that data.

With growing political demands to act on behalf of the global environment, it becomes harder for democratic institutions to think through or resist implementing these technologies, regardless of the bottom line of the offers. Arguing against perceptions entertained by the tech industry, Mullaney et al. present a number of cases where computing and new media depend upon the flesh-and-bone metabolism of human and non-human labor, as well as a complex system of environmental impacts: extremely high water use to cool data centers, pollution from a fast-growing mining sector supplying electronics manufacturers with rare earth minerals, and rising greenhouse gas emissions.Footnote34

A counter-example to this corporate tendency is the EU’s project of building digital twins of the Earth and ocean discussed above, launched while the EU is implementing new regulations on AI uses of data, such as the Digital Markets Act, which first came into effect in spring 2023.

In the context of governance, the key aspect of AI systems is ownership and control of the tools whereby available environmental data can be used to train algorithms to analyze or predict environmental change. Based on how Big Tech has operated algorithmic platforms to date, we have to assume that any offer of a platform that monitors an environment implies allowing the sponsoring corporation to own the data that the monitoring produces, which can in turn be used for whatever purpose they wish. One recent example is how Big Tech has moved into AI-assisted farming in the Global South.Footnote35 Microsoft, Amazon and Google all offer apps to farmers that collect their data and offer advice on improving crop yields. The data is then sold to third parties offering products like fertilizers, pesticides and seeds. Microsoft’s platform Azure FarmBeats is connected to the company’s cloud-computing service and provides farmers with real-time information on the condition of their soils and waters, crop growth, pests and diseases, as well as climate change threats. This data is in turn sold to companies who develop technologies that receive and act upon the information, for example by deploying pesticide-spraying drones.Footnote36

The larger trend at play here is how big tech companies become indispensable to farmers and to the agricultural companies that in turn constitute every link of the food supply chain. Prakash et al. argue that AI application in the agricultural sector is related to the older model and concept of the Green Revolution, from the early to the late twentieth century, when synthetic fertilizers were promised to end world hunger but in the process also expanded the reach and influence of Western companies into the Global South, effectively replacing local agricultural community expertise. More recently, both in the Global South and in numerous European countries, there have been farmer revolts against the increasing influence of tech companies whose edge in smart farming is to deploy capital-intensive monitoring and machinery to increase productivity per unit, often at the cost of local farmers and agro-ecological practices, most notably observed in the Netherlands.Footnote37

The larger trend, then, is tech companies using AI to gain control over the global food market and merging with influential stakeholders in the food industry to control food supplies, from farm to table. This is the sort of development that underpins the growing interest in, and projected value of, not only developing but also owning and controlling AI systems trained on environmental data. Applications range from managing the atmosphere, oceans and natural resources to wildlife conservation and farming.

From this overview of environmental data and AI systems, we observe that once the two become coupled on a larger scale, great possibilities as well as difficult challenges emerge. Against this background of myriad debates surrounding the concept of AI and its relation to the environment, we now turn to Jürgen Renn for a historical understanding of data as part of the evolution of knowledge.

Data evolution

The Evolution of Knowledge: Rethinking Science for the Anthropocene, published by Princeton University Press in 2020, is a synthesis of decades of research conducted by Jürgen Renn and colleagues at the Max Planck Institute for the History of Science. What Renn describes as the ‘evolution of knowledge’ – that is, his understanding of what knowledge is and how it comes about – is a process encompassing everything from the experiential and experimental to the esoteric and abstract. It ranges from technoscientific inventions to social interactions. In making this argument, Renn draws upon case studies from various societal sectors, places, and historical periods. And for his analysis of data, he builds on and responds to some of the literature discussed above.

Knowledge, Renn argues, takes shape through an interaction between social, mental, and material dimensions. Abstract concepts are the residue of these interactions. There are in this sense no clear-cut changes, or paradigms, in knowledge systems; instead, individual knowledge components from past systems continue to be used in new ones. From this theoretical standpoint, Renn explores how societies and their knowledge influence each other, or co-evolve, over time. In particular, he focuses on processes of production, reproduction, transfer or distribution, as well as sharing or appropriation of knowledge throughout societies.Footnote38 In addition, Renn demonstrates how knowledge sharing comprises a global history of human activity. From a long-term perspective, knowledge is entangled in processes of globalization, of which naval trade and colonial enterprises are among the most striking, and which resulted in intercultural transfers that transformed both the production of knowledge and society itself. Importantly, Renn argues that environmental change is to be understood in relation to these changes in knowledge production, specifically the emergence of a capitalist knowledge economy that correlates with the Great Acceleration in particular, and the Anthropocene in general.

Renn concludes that coping with the Anthropocene condition starts with rethinking our current system of knowledge production. Specifically, it is the ties between the knowledge economy and corporate and military actors that need to be severed, as these serve narrow interests which have yielded unintended and increasingly untenable environmental consequences for the entire planet. By contrast, a new knowledge system would need to integrate not only the natural sciences but also social, political, and ethical concepts, derived from nonacademic settings, and in turn revalue existing traditional knowledge systems while also making the new knowledge widely accessible throughout society.

The methodological motivation for Renn’s far-reaching scope is to conduct what he calls longitudinal studies, where the number of figures – certain strands of knowledge development – is reduced to a select few, so as to make possible the tracing of these knowledge strands across different contexts.Footnote39 One of the crucial areas for these longitudinal studies of knowledge is that of mechanical knowledge, which has existed in many cultures and has evolved towards what one would today describe as AI.

Renn situates AI, which occupies only a smaller subsection of his book, in a longer history of the evolution of knowledge that has transformed not only knowing but also how knowledge has been used in altering environments. It should be stated, then, that our own conceptualization of the global environment as a mediated planet – with its reliance on the processing, storing and transmission of data – resonates with Renn’s conceptualization of how knowledge over time transforms not only social structures but also the environmental surroundings it is part of. Instead of scientific revolutions or technological innovations, the rise of AI is conceptualized as the gradual accumulation, transmission, and evolution of knowledge.Footnote40 As pointed out above, however, this way of portraying the history of AI as a co-evolution with knowledge writ large runs the risk of neutralizing present-day developments and applications by Big Tech as inevitable. It is worth juxtaposing this account with alternative sources, such as Eden Medina’s Cybernetic Revolutionaries, which tells the story of how Project Cybersyn, led by Stafford Beer under Salvador Allende’s government, attempted to use cybernetics and automation to increase governmental control over the economy, which at the time stood in contrast to the interests of foreign, especially American, companies.Footnote41 A recent nine-episode podcast series by historian of science Evgeny Morozov, The Santiago Boys, returns to this story with new source material and reinforces the need to understand the Big Tech version of AI currently spearheaded by companies like OpenAI and Google as contingent on particular historical and political forces.

Renn’s concern with AI in this context is as part of the larger crisis of the human-Earth relationship known as the Anthropocene. For Renn, AI constitutes the machine room of the Anthropocene – an essential part of the growing technosphere – and the history of that mechanical knowledge is a story about how we got to this place.

A central aspect of the cultural evolution of knowledge is the concept of ‘externalization’. Since the invention of writing and numerical calculation, humans have learned to externalize knowledge into abstract forms. Renn highlights how material embodiments of knowledge provided a backbone to cultural evolution. He proposes that these embodiments – commonly referred to as technologies – serve as a means for cognition, both symbolically and practically. As technologies change, so too change the structures of human thinking.

In the rise of industrial capitalism, for example, the medium of money emerged as the external representation of exchange value in a market economy and became capital, which in turn regulated the material production of the market. Renn predicts that data, which initially served as an external representation of information and as a medium for storage and transfer, is on its way to becoming a means of acquiring control over societal and economic processes. This claim is exemplified by the AI-related catchphrase ‘data is the new oil’, coined in 2006 by mathematician Clive Humby but now used prolifically to describe the economic order of surveillance/platform/data capitalismFootnote42 – a subject to which we will return later in this essay.

Renn’s argument echoes that of media historian Lisa Gitelman, who conceptualized data as both culturally and historically specific.Footnote43 By contrast, current developments tend towards understanding data less as something personal and increasingly as a resource waiting to be extracted. Stark and Hoffmann point to how the rhetoric about data being a resource is reminiscent of colonial appropriation ideologies,Footnote44 which in turn would aid the use of data as a form of capital.Footnote45 Data collection, in this instance, is driven by capital accumulation, where the whole world is perceived as data to be collected, extracted, and made profitable. Often corporations collect data without a predetermined means for making a profit, but with an expectation that sooner or later there will be a way to do so.Footnote46

In this emerging landscape of Big Data, environmental science is increasingly reliant on digital infrastructures. Without the availability of large quantities of data, adequate computing facilities and sophisticated modelling, our concept of global climate change would look very different and, more importantly, would have less proximity to technocratic policy processes that at present rely on such models, as is the case with work on several of the UN’s SDGs. This particular point resonates with the core idea of our theoretical concept of environing media – envisaged as a tool to aid such meta-analysis of environmental epistemology on the empirical level.

So if present-day knowledge relies on externalization, what is the role of AI in its continued evolution? Machine learning algorithms, Renn suggests, ‘are simply a new form of the externalization of human thinking, even if they are a particularly intelligent kind’. With the concept ‘externalization’, Renn refers to the result of the dynamic interplay between a society and its environment, in which certain features may be repurposed according to potential and need. This can be exemplified by the biological concept of ‘exaptation’: according to one theory, animals first evolved feathers as a means to regulate temperature, and these were eventually repurposed to aid flight.Footnote47 Renn uses the concept of exaptation to propose an inverse process of ‘endaptation’, in which human material interventions into their environments over time come to affect their cognitive and social systems, as experiences are assimilated into systems of shared knowledge and the regulative networks of existing institutions.

Understood as externalization, then, AI is set up to affect how human societies practically relate to their environment, which in turn leads to new internalization of shared and accumulated experiences as accumulated knowledge. From this vantage point, the core tension between the risks and benefits surrounding AI and the global environment becomes a question of almost existential dimensions: which existing values, beliefs and practices (e.g. subjugation, exploitation and extractivism) will be part of the new externalization, and how will the increased scale and intensity of this technology reinforce or undo existing internalizations in human society?

So what is to be done? According to Renn, AI could instead be developed to enable a Web of Knowledge, drawing upon the ideals of the Internet’s founders for how it was intended to function. This proposition will probably strike many readers as somewhat unrealistic, particularly given the internet’s militarized history as it sprang from the ARPANET. The founding myth of the internet has also repeatedly been subject to critique by historians of technology.Footnote48 According to media theorist Tiziana Terranova, the internet as such has already become residual, operating in the background while displaced by a new hegemonic logic of digital connectivity controlled by corporate private platforms.Footnote49

Against this background of a corporate takeover of the means of connectivity, the need for alternative visions is admittedly strong, but any such proposal would also have to figure out how to dismantle the current hegemonic power of Big Tech. Tech activist and author Cory Doctorow recently proposed a manifesto-like roadmap to this end with the straightforward title How To Destroy Surveillance Capitalism, a book that also offers a critique of the limitations of Zuboff’s account.Footnote50 Other scholars like economist Mariana Mazzucato have proposed that we are already facing tech-feudalism in the digital world. This proposition rests on the idea that Big Tech exploits technologies originally built by the public sector to acquire a market position, which then lets them operate with rents and dispossession to monopolize data. Mazzucato insists that governments can and should be shaping digital markets to ensure that collectively created value serves collective ends, much like what the EU is currently attempting.Footnote51

Renn is thus far from alone in seeking ways out of the current Big Tech lock-in, but his conception is the one that draws most closely upon historical accounts for its analysis. For Renn, the aim is a global coproduction of knowledge by distributed prosumers (producers and consumers) who locally organize, adapt, and implement scientific knowledge. For this shift to come about – from Internet to Web of Knowledge – not only data but also the network links and the digital infrastructure itself would have to become public goods, open and accessible for all to use, annotate, and develop,Footnote52 in resonance with Mazzucato’s proposal. Historically, the emergence of information technologies stimulated new fields of research, not only in technical but also in humanistic domains. For example, identifying intelligence as artificial (as opposed to human) addresses how rationality, and the logic of thinking, does not presuppose a universal, individual thinker; instead, our abstractions can be traced back to their sources, to their data, which ultimately are our experiences of the world. AI thus affects the very criteria by which knowledge is defined (reproducibility, credibility, causality). This makes the interfaces between humans and AI all the more relevant to open up, to unpack as black boxes, and to make understandable.Footnote53

AI illustrates less how technology is becoming autonomous from humans and more how humans are increasingly co-dependent on each other through technology. The eight billion people currently inhabiting Earth do so within highly varied and unequally accessible technological niches of information and energy systems that enable the production and distribution of food, shelter, and transport. In particular, AI can be used to limit the use of energy and the resulting greenhouse gas emissions by monitoring everything from agricultural to industrial production,Footnote54 a point we will come back to in this essay. As with the Anthropocene in general, the scaling up to a planetary view needs to be complemented by attention to how these historical techno-environmental changes play out in local contexts and affect different groups of people.

Given the ubiquity of AI, the knowledge generated needs to become a common good. Renn reminds us that the different commons, such as arable land, forests, oceans, biodiversity and the atmosphere, cannot be managed as if they were separate entities. He draws on Shoshana Zuboff’s work on surveillance capitalism to point to previous scandals in the use of AI in the personal and political realms as warning examples of environmental data, too, being traded as a commodity or service for private profit. As more services become connected, resulting in a datafication not only of everyday life but of the daily environment, Renn’s point is that it becomes increasingly relevant to expand the regulation of AI.Footnote55

After reading Renn’s tour de force describing the cultural evolution of knowledge, we wonder if it might not be wise to include knowledge itself in the aforementioned pool of global commons. As Renn points out, access to and connectivity between items of knowledge is crucial to surviving in the Anthropocene, and, we would like to add, knowledge processed and used from big data now takes center stage in global power struggles between nations and Big Tech corporations. Renn does recognize that knowledge alone is not a sufficient condition for regulation to come about,Footnote56 but he remains vague as to what the fault lines will look like in the ensuing political struggle. For example, how is democratic data governance to be negotiated within existing belief and value systems of liberal trade policies and data capitalism? One of the more pressing issues concerns how the capability of AI to maximize an objective function – such as making profit – is to be coupled with advanced digital capitalism in the twenty-first century.

The notion of knowledge as a global common would also need to be further debated in relation to the role of the traditional custodians of knowledge, most notably scholars and scientists. One could revisit the taxonomies of Roger Pielke – political scientist and ecomodern philosopher – who distinguishes between four ideal roles for scientists to take on: first, the pure scientist; second, the science arbiter; third, the issue advocate; or, fourth, the honest broker. The purpose of Pielke’s distinction is to define how knowledge relates to politics – that is, the process of negotiating who gets what, when, and how – and to policy, which Pielke understands to be a ‘commitment to a particular course of action’, that is, a specific political preference.Footnote57 One can of course also ask whether the position from which Pielke presents these options is not itself a deeply political one.

Since Renn believes knowledge should contribute to the possibility for life to go on within – and despite – the Anthropocene, he would opt for scientists and scholars to take on the roles of issue advocates or honest brokers so as to promote specific policy preferences. Renn envisions a stakeholder model for science, where a plurality of epistemologies – from various contexts – are included and considered in the production of knowledge. These then add up to alternatives presented by experts for a democratic public to choose between. But against this idea stands the view that democracy is best guided not by knowledge but by a pluralism of interest groups with conflicting values and resource claims. It is these competing notions of democracy that a theory of the evolution of knowledge would need to turn to next, as knowledge itself becomes increasingly politicized not only by politicians or activists but by scientists and scholars themselves. If data has evolved into the de facto most valuable resource of the Anthropocene, how are access to and control over environmental data to be negotiated? Data and models increasingly steer environmental futures, and as such the neutral position of the scholar is flawed from the outset.

Conclusion

From our discussion in this essay, certain crucial points about AI and the evolution of environmental knowledge can be made. First, the strong dominance of surveillance capitalists could become a real threat to democracy in the face of the global eco-crisis. Second, AI can be productively understood historically, as a technology that changes environments and profoundly affects the human-Earth relation, and as such it appears as a central dimension of Anthropocene history. As data assumes center stage in global economic and political power relations, knowledge – which is deduced from data – needs to be safeguarded as a common good in order to respond responsibly and ethically to the eco-crisis. The concept of global environmental commons lends itself to this end. Third, the datafication of life on Earth is now far-reaching and is rapidly changing what it means to manage the global environment. As this change is ontological, mediation becomes increasingly important to analyze and understand. Fourth, due to the scale and intensity of projected AI operations in the global environment, AI holds great potential for rapidly changing the escalating factors of the eco-crisis – such as CO2 emissions or biodiversity loss – but as long as the primary incentive of data capitalists continues to be increased economic profit, there is a risk that green AI initiatives become little more than greenwashing.

Historians of science, technology and environment could make important contributions to understanding the fundamental reorganization of life on Earth that AI and Big Data represent for the global environment, historically analyzing and assessing the implications of an AI-powered, data-driven conception and management of the Earth before its full-scale implementation. In the face of the Anthropocene, there is a general need today for integrative efforts bridging knowledge from natural, technical, social and humanistic domains, and this need is particularly urgent in areas with fast development and far-reaching consequences for planetary habitability. Understanding new technologies, like AI, should not be limited to empirical histories, but should also encompass an assessment of the present period, of how these technologies unfold in our own time. There is a strong imperative, we believe, for humanistic studies to transpose our tools, methodologies, and insights into the realms of policymaking, legislation, and politics without compromising the power of well-informed historical analysis. It is not only a matter of so-called historical context, but rather that forces from the past remain active in the present and continue to shape our future. Thus, any exhaustive assessment of AI and environment needs to account for these historical processes across the full ontological spectrum, from object to epistemology via data and mediation, so as to derive from the past and present a projection of our future.

In view of how personal experience is harnessed as data and processed with AI for behavioral prediction and modification, what will be the business model for environmental data under surveillance capitalism? How will surveillance capitalists use the big data harnessed in applying AI to various societal sectors? Is there a risk that climate change and extreme weather events will become yet another layer of behavioral prediction and modification? How will we secure transparency and democracy in dealing with the difficult trade-offs between mitigating climate change, saving biodiversity and feeding a growing population if data is controlled and owned by corporations with huge financial interests? If the success of companies like Google lies in their ability to predict and thereby shape human behaviour with AI, their move into the prediction of human-environmental futures needs thorough regulation and legislation. In addition, trends in smart farming already point to the reinforcement of inequality as big corporations monopolize the circuit of data – tools – products. In the context of the global environment, many datasets are at present open source, but the risk is that the models trained on the data and the tools to use them fall under the complete control of Big Tech as it moves into new environmental schemes, like CO2 removal.

What if, in the face of the global eco-crisis, the massive surveillance and behavioral steering capacity of AI were to be regulated by sovereign governing bodies, or for a planetary common good beyond the short-term interests of specific actors? This may sound like a provocative proposition, because it is, but consider the fact that as citizens we are already nudged toward unsustainable behavioral patterns by market forces. As the UN Global Sustainable Development Report (GSDR 2019) makes clear, management of global commons must address environmental injustice and work against unequal use of resources, by means of available technical and political interventions.Footnote58 If available technology could secure resilience, should it not be incentivized for that use instead of being deployed in the service of the current power concentration of Big Tech in the form of surveillance capitalism? If the management of global environmental commons is crucial to human survival, such management must start with more transparent control and use of global environmental data by all actors involved in harnessing it.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Svenska Forskningsrådet Formas.

Notes

1. Edwards, Vast Machine.

2. Wickberg et al., “The Mediated Planet.”

3. Disco and Kranakis, eds., Cosmopolitan Commons.

4. Bonneuil and Fressoz, The Shock of the Anthropocene.

5. Ensmenger, “The Environmental History of Computing.”

6. Creutzig et al., “Digitalisation and the Anthropocene.”

7. Jones, “Characterising the Digital Twin.”

8. Bye, “Digital Twin of the Ocean.”

9. Steffen et al., “The Trajectory of the Anthropocene.”

10. Malm and Hornborg, “The Geology of Mankind?”; and Barua, “Plantationocene: A Vegetal Geography.”

11. See Peters and Wickberg, “Media: The Case of Spain and New Spain.”

12. Warde, Robin, and Sörlin, The Environment.

13. Wickberg, “Environing Media and Cultural Techniques.”

14. Bender et al., “On the Dangers of Stochastic Parrots.”

15. Jaton, The Constitution of Algorithms.

16. Chun et al., Pattern Discrimination.

17. O’Neil, Weapons of Math Destruction.

18. Sörlin and Wormbs, “Environing Technologies.”

19. Wickberg and Gärdebo, “Where Humans and the Planetary Conflate”; and Wickberg and Gärdebo, eds., Environing Media.

20. Vinuesa et al., “The Role of Artificial Intelligence.”

21. Wooldridge, “Artificial Intelligence Requires More Than Deep Learning.”

22. Krafft et al., “Defining AI in Policy vs Practice.”

23. See note 21 above.

24. Crawford, Atlas of AI.

25. Ibid., 66.

26. Crawford and Joler, Anatomy of an AI System; and Jones-Imhotep, “Ghost Factories.”

27. Tegmark, Life 3.0, 121–5.

28. Couldry and Mejias, The Costs of Connection.

29. Joppa and Herweijer, “How AI Can Enable a Sustainable Future.”

30. Belkhir and Elmeligi, “Assessing ICT Global Emissions Footprint”; and Bender et al., “On the Dangers of Stochastic Parrots.”

31. Dasgupta, The Economics of Biodiversity.

32. Global Sustainable Development Report 2019, 94.

33. Renn, Evolution of Knowledge, 397.

34. Mullaney et al., eds., Your Computer is on Fire, 20.

35. See note 13 above.

36. Grain, Digital Control.

37. Prakash et al., “Roundtable”; Gärdebo, “Environmental History of Science.”

38. Renn, Evolution of Knowledge, 146.

39. Ibid., x.

40. Ibid., xi.

41. Medina, Cybernetic Revolutionaries.

42. Arthur, “Tech Giants.”

43. Gitelman and Jackson, “Introduction,” 1–14.

44. Stark and Hoffman, “Data is the New What?”

45. See Crawford, Atlas of AI; and Sadowski, “When Data is Capital.”

46. Zuboff, Surveillance Capitalism.

47. Renn, Evolution of Knowledge, 328, 398.

48. Edwards, “Some Say the Internet should have never Happened.”

49. Terranova, After the Internet.

50. Doctorow, How to Destroy Surveillance Capitalism.

51. Mazzucato et al., “Public Value.”

52. Renn, Evolution of Knowledge, 33.

53. Ibid., 234, 398–9.

54. Ibid., 395.

55. Ibid., 400, 403.

56. Ibid., 403.

57. Pielke, The Honest Broker, 31.

58. See note 32 above.

Bibliography

  • Arthur, C. “Tech Giants May be Huge, but Nothing Matches Big Data.” The Guardian, August 23, 2013.
  • Barua, M. “Plantationocene: A Vegetal Geography,” Annals of the American Association of Geographers 113, no. 1 (2023): 13–29. doi:10.1080/24694452.2022.2094326.
  • Belkhir, L., and A. Elmeligi. “Assessing ICT Global Emissions Footprint: Trends to 2040 & Recommendations.” Journal of Cleaner Production 177 (March 2018): 448–463. doi:10.1016/j.jclepro.2017.12.239.
  • Bender, E., T. Gebru, A. McMillan-Major, and S. Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. New York: Association for Computing Machinery, 2021.
  • Bonneuil, C., and J.-B Fressoz. The Shock of the Anthropocene: The Earth, History and Us. London & New York: Verso Books, 2016.
  • Bye, B. L., G. Sylaios, A.-J. Berre, S. Van Dam, and V. Kiousi. “Digital Twin of the Ocean – an Introduction to the ILIAD Project.” EGU General Assembly 2022, Vienna, Austria, 2022.
  • Chun, W., H. Steyerl, F. Cramer, and C. Apprich. Pattern Discrimination. Minneapolis: University of Minnesota Press, 2019.
  • Cieslik, K., and D. Margócsy. “Datafication, Power and Control in Development: A Historical Perspective on the Perils and Longevity of Data,” Progress in Development Studies 22, no. 4 (2022): 352–373. doi:10.1177/14649934221076580.
  • Couldry, N., and U. Mejias. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism. Palo Alto: Stanford University Press, 2019.
  • Crawford, K. Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.
  • Crawford, K., R. Dobbe, T. Dryer, G. Fried, B. Green, E. Kaziunas, A. Kak, et al. AI Now 2019 Report. New York: AI Now Institute, 2019. https://ainowinstitute.org/AI_Now_2019_Report.html
  • Crawford, K., and V. Joler. “Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources.” AI Now Institute and Share Lab, September 7, 2018. https://anatomyof.ai
  • Creutzig, F., D. Acemoglu, X. Bai, P. N. Edwards, M. J. Hintz, L. H. Kaack, S. Kilkis, et al. “Digitalisation and the Anthropocene,” Annual Review of Environment and Resources 47 (2022): 479–509. doi:10.1146/annurev-environ-120920-100056.
  • Dasgupta, P. The Economics of Biodiversity: The Dasgupta Review. Abridged Version. London: HM Treasury, 2021.
  • Disco, N., and E. Kranakis, eds. Cosmopolitan Commons: Sharing Resources and Risks Across Borders. Cambridge, Mass: MIT Press, 2013.
  • Doctorow, C. How to Destroy Surveillance Capitalism. New York: Medium Editions, 2021.
  • Edwards, P. “Some Say the Internet Should Have Never Happened.” In Media, Technology and Society: Theories of Media Evolution, edited by W. R. Neuman, 141–161. Ann Arbor: University of Michigan Press, 2010.
  • Edwards, P. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Cambridge, MA: MIT Press, 2010.
  • Ensmenger, N. “The Environmental History of Computing,” Technology and Culture 59, no. 4 Supplement (2018): 7–33. doi:10.1353/tech.2018.0148.
  • Gärdebo, J. “Environmental History of Science.” In Debating Contemporary Approaches to the History of Science, edited by L. M. Verburgt, 167–188. London: Bloomsbury Academic, 2024.
  • Gärdebo, J., A. Marzecova, and S. Knowles. “The Orbital Technosphere: The Provision of Meaning and Matter by Satellites,” The Anthropocene Review 4, no. 1 (2017): 44–52. doi:10.1177/2053019617696106.
  • Ghosh, A. The Great Derangement: Climate Change and the Unthinkable. Chicago: University of Chicago Press, 2016.
  • Gitelman, L., and V. Jackson. “Introduction.” In Raw Data is an Oxymoron, edited by L. Gitelman, 1–14. Cambridge, MA: MIT Press, 2013.
  • Grain. Digital Control: How Big Tech Moves into Food and Farming, and What It Means. Report, 2021. https://grain.org/en/article/6595-digital-control-how-big-tech-moves-into-food-and-farming-and-what-it-means
  • Jones, D. “Characterising the Digital Twin: A Systematic Literature Review,” CIRP Journal of Manufacturing Science and Technology 29, no. Part A (2020): 36–52. doi:10.1016/j.cirpj.2020.02.002.
  • Jones-Imhotep, E. The Unreliable Nation: Hostile Nature and Technological Failure in the Cold War. Cambridge, MA: MIT Press, 2017.
  • Jones-Imhotep, E. “The Ghost Factories: Histories of Automata and Artificial Intelligence,” History and Technology 36, no. 1 (2020): 3–29. doi:10.1080/07341512.2020.1757972.
  • Joppa, L., and C. Herweijer “How AI Can Enable a Sustainable Future.” Corporate report. Microsoft & PWC, 2019.
  • Krafft, P. M., M. Young, M. Katell, and K. Huang. “Defining AI in Policy vs Practice.” In AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 72–78, February 2020. doi:10.1145/3375627.3375835.
  • Malm, A., and A. Hornborg. “The Geology of Mankind? A Critique of the Anthropocene Narrative,” The Anthropocene Review 1, no. 1 (2014): 62–69. doi:10.1177/2053019613516291.
  • Mazzucato, M., R. Kattel, and P. Bahra. “Public Value, Platform Capitalism and Digital Feudalism.” In Faster Than the Future: Facing the Digital Age, edited by C. Artigas and C. Grau, 146–165. Barcelona: Digital Future Society, 2021.
  • Medina, E. Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. Cambridge., MA: MIT Press, 2011.
  • O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.
  • Peters, J. D., and A. Wickberg. “Media: The Case of Spain and New Spain,” Critical Inquiry 48, no. 4 (2022): 676–696. doi:10.1086/719838.
  • Pielke, R., Jr. The Honest Broker. Cambridge: Cambridge University Press, 2007.
  • Prakash, K., T. Lorek, T. C. Olsson, N. Sackley, S. Schmalzer, and G. Soto Laveaga. “Roundtable: New Narratives of the Green Revolution,” Agricultural History 91, no. 3 (2017): 397–422. doi:10.3098/ah.2017.091.3.397.
  • Renn, J. The Evolution of Knowledge: Rethinking Science for the Anthropocene. Princeton: Princeton University Press, 2020.
  • Sadowski, J. “When Data is Capital: Datafication, Accumulation, and Extraction,” Big Data & Society 6, no. 1 (2019). doi:10.1177/2053951718820549.
  • Sörlin, S., and N. Wormbs. “Environing Technologies: A Theory of Making Environment,” History and Technology 34, no. 2 (2018): 101–125. doi:10.1080/07341512.2018.1548066.
  • Stark, L., and A. Hoffmann. “Data Is the New What? Popular Metaphors and Professional Ethics in Emerging Data Culture,” Journal of Cultural Analytics 4, no. 1 (2019). doi:10.22148/16.036.
  • Steffen, W., W. Broadgate, L. Deutsch, O. Gaffney, and C. Ludwig. “The Trajectory of the Anthropocene: The Great Acceleration,” The Anthropocene Review 2, no. 1 (2015): 81–98. doi:10.1177/2053019614564785.
  • Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf, 2017.
  • Terranova, T. After the Internet: Digital Networks Between Capital and the Common. Los Angeles: Semiotext(e)/MIT Press, 2022.
  • United Nations. Global Sustainable Development Report 2019: The Future is Now – Science for Achieving Sustainable Development. New York: United Nations, December 2019.
  • Vinuesa, R., H. Azizpour, I. Leite, M. Balaam, V. Dignum, S. Domisch, A. Felländer, S. D. Langhans, M. Tegmark, and F. Fuso Nerini. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals,” Nature Communications 11, no. 1 (2020): 1–10. doi:10.1038/s41467-019-14108-y.
  • Warde, P., L. Robin, and S. Sörlin. The Environment. A History of the Idea. Baltimore: Johns Hopkins University Press, 2018.
  • Wickberg, A. “Plus Ultra: Coloniality and the Mapping of American Natureculture in the Empire of Philip II,” NECSUS: European Journal of Media Studies 7, no. 2 (2018): 181–205.
  • Wickberg, A. “Temporal Poetics of Planetary Transformations: Alexander von Humboldt and the Geo-Anthropological History of America.” In Scientific Temporalities: Mediating and Materializing Time, edited by A. Ekström and S. Bergwik, 205–227. Oxford: Berghahn Books, 2021.
  • Wickberg, A. “Environing Media and Cultural Techniques: From the History of Agriculture to AI-Driven Smart Farming,” International Journal of Cultural Studies 26, no. 4 (2022): 392–409. Special issue: Mediating Environments. doi:10.1177/13678779221144762.
  • Wickberg, A., and J. Gärdebo. “Where Humans and the Planetary Conflate—An Introduction to Environing Media,” Humanities 9, no. 3 (2020): 65. doi:10.3390/h9030065.
  • Wickberg, A., and J. Gärdebo, eds. Environing Media. London: Routledge, 2022.
  • Wickberg, A., S. Lidström, A. Lagerkvist, T. Meyer, N. Wormbs, J. Gärdebo, S. Sörlin, and S. Höhler. “The Mediated Planet: Datafication and the Environmental SDGs,” Environmental Science & Policy 153 (2024): 103673. doi:10.1016/j.envsci.2024.103673.
  • Wooldridge, M. “Artificial Intelligence Requires More Than Deep Learning — But What, Exactly?” Artificial Intelligence 289 (2020): 103386. doi:10.1016/j.artint.2020.103386.
  • Yusoff, K. A Billion Black Anthropocenes or None. Minneapolis: University of Minnesota Press, 2018.
  • Zuboff, S. The Age of Surveillance Capitalism. New York: Random House, 2019.