Spectral Earth: Electromagnetic Geo-Technics & Climate Governance

In 2019 the announcement of the spectrum allocations for 5G mobile communications caused a widely publicised uproar among meteorologists (link). The controversy centred on the frequency of 23.8 GHz, used by weather forecasters for the passive sensing of atmospheric water vapour. Their concern was that 5G radiation between 24.25 GHz and 27.5 GHz would interfere with measurements made by weather satellites, affecting the accuracy of forecasting. The impossibility of broadcasting communications at a wavelength of 12.4mm without disrupting sensors tuned to radiation at 12.6mm highlights the increasing competition between science and industry for narrow bandwidths of this frequential space, exemplifying the entanglement of technical media with geophysical phenomena through which the planetary is grasped and produced.
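The proximity of these two allocations is a one-line conversion from the frequencies cited above (wavelength = c/f):

```python
# Convert the contested frequency allocations into wavelengths.
C = 2.99792458e8  # speed of light, m/s

def wavelength_mm(frequency_ghz):
    """Wavelength in millimetres for a given frequency in GHz."""
    return C / (frequency_ghz * 1e9) * 1000  # metres -> millimetres

print(round(wavelength_mm(23.8), 2))   # passive water-vapour sensing band
print(round(wavelength_mm(24.25), 2))  # lower edge of the 5G allocation
```

The two wavelengths differ by barely two tenths of a millimetre, which is the whole source of the interference concern.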


During the 1999 talk in which she coined the term ‘planetarity’, Gayatri Spivak asked us to “think the planet as the proper receiver and transmitter of imperatives”. For Spivak, these ‘imperatives’ are principally the policies and practices of civil society; she speaks for example of “bio-prospecting leading to bio-piracy, leading to monocultures, leading to the death of biodiversity”. But here I would like to focus on understanding Earth as a receiver and transmitter – not so much of imperatives as of signals, frequencies and vibrations – to think the planet spectrally.


In Introduction to Comparative Planetology Lukáš Likavčan proposes the model of a ‘Spectral Earth’, but whereas his understanding is focused on spectres – a planet haunted by the species of its multiple mass extinctions – my understanding of Earth’s spectrality is grounded in the electromagnetic spectrum that surrounds, forms and transforms it, a spectrum that traverses geophysical, technological and geopolitical understandings of the planetary.

The electromagnetic spectrum connects the sensible phenomena of light, sound, heat and vibration in a continuum with the resonant Schumann frequencies of the planet, the toxic wavelengths of ionizing radiation, and the entire history of media communications from radio to 5G. If we centre these spectral phenomena, affording them an importance in sustaining the biosphere equivalent to that we commonly grant Earth’s unique atmosphere and moist environment, then we can understand the planet as a mass of mineral and organic matter that is immersed in and produced by – that emits, reflects, and absorbs – radiation across the full breadth of the electromagnetic spectrum. The conditions for abundant biodiversity are not only the chemistry of an oxygen-rich atmosphere and water-rich surface but also continuous solar radiation and the cyclical re/production of minerals by tectonic movement and volcanic activity. The infrared spectrum provides warmth from above, while a combination of convective flows of molten rock in the Earth’s interior and the decay of radioactive isotopes (which, counter-intuitively, are found in greater concentrations in the crust than in the core) provides warmth from below. Changes in the absorption and reflection spectra of the planet’s surface directly impact planetary albedo which, as James Lovelock’s Daisyworld model showed, can inflect climate change. According to current climate science “the balance between net incoming solar radiation and outgoing terrestrial radiation at the top of the Earth’s atmosphere fundamentally drives our climate system” (link). Many of the phenomena central to geophysics can therefore be understood spectrally: as the interaction of electromagnetic frequencies with mineral bodies, as the physical repercussions of a spectrum of vibrations spanning the tactile and toxic – in short, as matter in media.
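The albedo–climate coupling that Daisyworld dramatises can be sketched with a minimal zero-dimensional energy-balance model, in which absorbed shortwave radiation equals emitted longwave radiation (a textbook simplification, not a climate model):

```python
# Equilibrium radiating temperature from a zero-dimensional energy balance:
# absorbed shortwave = emitted longwave (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2

def equilibrium_temp(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    absorbed = S0 * (1 - albedo) / 4   # incoming flux averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

# A brighter surface reflects more shortwave radiation and cools the planet:
print(round(equilibrium_temp(0.30), 1))  # present-day albedo, roughly 255 K
print(round(equilibrium_temp(0.35), 1))  # a whitened surface, several K cooler
```

Daisyworld’s self-regulation works on exactly this sensitivity: by shifting the surface albedo, the daisies shift the radiative equilibrium.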

Planetary spectrality however is not only conditioned by these cosmic and geophysical phenomena, but is equally produced by technical media’s harnessing of the electromagnetic spectrum. Earth is wrapped in an atmosphere of broadcast frequencies and microwave communications as well as one of oxygen, nitrogen and carbon dioxide. Cables stretched from molten sand transmit words and images encoded as pulses of invisible infrared at three discrete wavelengths of 850, 1300 and 1550nm. And above ground, the air is so densely packed with transmissions that the allocation of this saturated spectral space increasingly poses logistical challenges and potential interference between competing functions of the same bandwidth. But the spectral relationship between geophysics and technics is not always one of friction. As Gilbert Simondon pointed out, radio wavelengths over 80 metres reflect partially off the Heaviside layer of ionised gases between 90 and 150km altitude in the atmosphere, and wavelengths longer than 800 metres “undergo a veritable metallic reflection” enabling radio transmissions to reach beyond the province of their origin and traverse national borders in a way that the shorter wavelengths used in broadcasting television are unable to. Through such examples we can observe what Yuk Hui describes as a “unity between the geographical milieu and the technical milieu” (link). Spectral technics can conjoin constructively with geophysical conditions to ensure an uninterrupted propagation of the worldview they articulate, or their signal can equally be inhibited, for example by the magnetic pull of large ore bodies in the ground. But, through planetary sensing and monitoring, spectral technics also translate geophysical and atmospheric conditions back into the orbit of the technosphere, visualising and quantifying the planetary. 
If we accept Peter Haff’s convincing thesis that the technosphere now constitutes a provisional global paradigm (link), then surely it is in large part the harnessing of the full electromagnetic spectrum that has enabled technics to become a geologic force.


This complete technical mobilisation of the electromagnetic environment is entirely reliant on a strict partitioning of the spectrum, on the allocation of discrete bands for specific purposes. In the same way that the prefixes infra- and ultra- (infrared, ultraviolet, infrasound, ultrasound) imply the centrality of anthropologically perceptible regions of the spectrum, the division of wavelengths into long-, short-, radio-, or micro- references only their technical apparatus or function. For Simondon “these distinctions are never founded on the very nature of the phenomenon considered; they do not exist properly speaking according to physical science but only according to technics”. This is not to say that all such spectral semantics are arbitrary: it is surely no coincidence that the two frequencies of light most productive in photosynthesis correspond with two of the colours to which the human retina is sensitive, red and blue. The dominant wavelengths of our shared spectral environment have shaped the evolution of biological sight and vegetal metabolism. But thinking the spectrum from a purely technical stance leads to thinking of multiple channels of interpenetrating communication and radiation as separable and discrete entities, whose effects can be contained.


Discontinuity is essential to understanding the relationality of spectral phenomena, but continuity is essential to totalising the exchanges of energy between reflected shortwave radiation at the top of the atmosphere and outgoing longwave radiation emitted from Earth, or in Haff’s terminology the “incident solar flux” and the absorption by the biosphere. Simondon illustrates this “antinomy of the continuous and the discontinuous” [2020, p.98] with the example of the photo-electric effect, a phenomenon closely related to the technics of photo-voltaic cells and digital photography, and one that can be seen as a microcosm of the relationship between solar radiation and terrestrial surface. To understand this energy exchange between photon and electron we must conceive of the photon and electron as discrete particles and yet, as Simondon describes, “when a plate of alkaline metal is illuminated by a beam of light … the free electrons [in the metal] behave as beings equivalent to the continuum”. Particles can exist relationally with one another and behave homogeneously.
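The discreteness of the exchange can be made concrete in a few lines: each photon carries a quantum of energy hf, and emission from a metal occurs only above its work function, however intense the beam (the caesium work function below is an assumed textbook value for illustration):

```python
# Photoelectric effect: a photon of frequency f transfers the discrete
# quantum hf to an electron; emission requires hf above the work function.
H = 6.62607015e-34          # Planck constant, J s
EV = 1.602176634e-19        # joules per electronvolt
C = 2.99792458e8            # speed of light, m/s

def photoelectron_energy_ev(wavelength_nm, work_function_ev):
    """Max kinetic energy (eV) of an emitted electron, or None below threshold."""
    f = C / (wavelength_nm * 1e-9)            # photon frequency, Hz
    e_kin = H * f / EV - work_function_ev     # Einstein's relation, in eV
    return e_kin if e_kin > 0 else None

# Caesium (work function ~2.1 eV, assumed) emits under blue light but not
# under telecoms infrared, no matter how many infrared photons arrive:
print(photoelectron_energy_ev(450, 2.1))
print(photoelectron_energy_ev(1550, 2.1))
```

The threshold behaviour is the discontinuity; the collective response of the metal’s electron sea is Simondon’s continuum.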


Understanding the planetary as a spectral entity might enable us to undo what Simondon refers to as the “two complementary representations of the real” which are “perhaps not merely complementary but really one” and to perceive the planetary system as one in which the apparently distinct spectral signatures of technical and geophysical phenomena are imbricated and entangled in an energetic exchange whose output fundamentally governs climatic conditions.


I want now to move on to the discussion of a specific spectral technicity that exemplifies the sort of geo–technical entanglements discussed above: remote sensing. This technique of infrared data visualisation makes direct use of the photo-electric effect to capture terrestrial surfaces, providing the technosphere with a geological analysis of its bedrock. Depending on how this data is employed, hyperspectral remote sensing has the potential to exacerbate, monitor or perhaps even curb anthropogenic climate impacts.


The 1983 edition of the Manual of Remote Sensing begins with a note from its editor proposing that the geological analysis of infrared satellite imagery could be used as a means of planetary resource accounting. With the spectral resolution of contemporary instruments, it is well within the bounds of technical possibility that the perpetual orbits of remote sensing satellites could enable calculations of remaining reserves of key minerals. However, from today’s perspective, in which such far-sighted resource management still seems a distant goal and remote sensing is increasingly touted as a commercial tool of geological prospecting for the new mineral resources required by the digital economy, it also seems a somewhat naïve proposition.

Geological remote sensing operates by photographing the infrared reflectance spectrum of a terrestrial surface and analysing it with respect to the known spectra of certain target minerals. In this instance the photo-electric effect is central both to the function of the apparatus and to the phenomenon observed. In his 1977 article ‘Spectral Signature of Particulate Minerals’ (link), Graham Hunt discusses the phenomenon of charge transfer as one of the “intrinsic spectral features” of minerals. Charge transfer, he writes, “refers to the process whereby absorbed energy causes an electron to migrate between neighbouring ions”, and it is this differential absorption and transfer of radiant energy by minerals that defines the spectral signature by which remote sensing identifies them. Charge transfer, however, is also the process by which incident photons are converted into electrons in digital imaging, so in remote sensing there exists a unity not of milieu, but of the geological phenomenon observed and the technics of observation. The photographic operation performed in camera mirrors the absorption of radiation by the surface beneath: a purified metallic semiconductor quantifies the charge transfers occurring in terrestrial minerals by charge-transferring the photons they reflect back. The photosensitivity of silicon measures the photosensitivity of the predominantly silicate crust from which it was mined and above which it orbits.
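The matching of a measured reflectance spectrum against reference mineral spectra can be sketched with the spectral angle, a standard similarity measure in remote sensing; the reference spectra below are invented toy values, not laboratory measurements:

```python
import math

# Toy reference spectra: reflectance values across five arbitrary bands.
# Real classification would use library spectra at known wavelengths.
REFERENCES = {
    "kaolinite": [0.62, 0.58, 0.31, 0.55, 0.60],
    "hematite":  [0.18, 0.25, 0.40, 0.47, 0.50],
}

def spectral_angle(a, b):
    """Angle (radians) between two spectra as vectors; smaller = closer match."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def classify(pixel_spectrum):
    """Name of the reference mineral whose spectral angle is smallest."""
    return min(REFERENCES, key=lambda m: spectral_angle(pixel_spectrum, REFERENCES[m]))

# A pixel whose reflectance dips in the third band resembles the kaolinite shape:
print(classify([0.60, 0.57, 0.33, 0.54, 0.58]))
```

Because the angle compares the shape of the spectrum rather than its absolute brightness, the method is relatively insensitive to illumination, which is one reason it is widely used in mineral mapping.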


The imbrication of remote sensing with planetary resource ecologies is emphasised by the target applications of current developers of hyperspectral cameras, which include: crop and infrastructure inspection, mining, and geological prospecting. All of these explicitly model or inspect terrestrial space to optimise its use for resource extraction, energy generation or commercial agricultural production. The hyperspectral imaging industry positions itself as the superior technology both to assess uncharted resources and to monitor the efficiency of established operations across multiple industries. These applications perpetuate the accelerating trend of outsourcing inspection and visual analysis to machinic processes, in which visualisation is synonymous with processes of quantification and terrestrial accounting: the image operationalised as a spatial display of geo-data for future economisation.


While the use of remote sensing as a method of geological prospecting is relatively recent, aggregating data from four decades of earth observation satellites enables climate scientists to analyse the spectral signature of global warming and calculate Earth’s radiation ‘budget’. Were the necessary political consensus achieved, these techniques could potentially be used to enforce a kind of climatic governance.


Satellite images have three different resolutions: a spatial resolution, measured in metres per pixel; a temporal resolution (more commonly known as the revisit rate), the number of days between successive images of the same location; and a spectral resolution, measured in nanometres of bandwidth that are imaged separately. Each of these resolutions plays a role in the use of satellite remote sensing for climate governance. As with the example of water vapour above, many climatic processes exhibit peaks or troughs at specific frequencies, meaning a fine-grained spectral resolution is required to identify and monitor multiple atmospheric processes with a single instrument, whereas “broadband measurements effectively integrate all the energy across the shortwave or longwave [and] may mask signatures associated with particular climate processes” (link). Once again the technics used in climate modelling requires a segregation of the continuous spectral radiation into discrete bandwidths to disaggregate individual phenomena, as can be seen in the graph below comparing the reflectance of snow and ice with the spectral resolution of various satellite missions.
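The masking effect described in that quotation can be shown numerically: a toy spectrum (invented values, not instrument data) with a narrow absorption dip keeps the feature under fine spectral sampling but loses it when the same radiation is integrated into one broad band.

```python
# Fine sampling: twenty 10 nm channels between 1000 and 1190 nm,
# with a narrow absorption dip around 1090 nm.
fine_bands = [1000 + 10 * i for i in range(20)]
reflectance = [0.3 if 1080 <= b <= 1100 else 0.8 for b in fine_bands]

narrowband_min = min(reflectance)                      # the dip survives
broadband_mean = sum(reflectance) / len(reflectance)   # the dip is averaged away

print(narrowband_min)
print(round(broadband_mean, 3))
```

At narrowband resolution the 0.3 absorption feature is unmistakable; integrated into a single band it is diluted to a value close to the background, and the climate signature it carried disappears.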

Among the future satellites currently planned by the European Space Agency is the earth observation mission CO2M or CarbonSat (link), scheduled for a 2025 launch. The development of CarbonSat was explicitly tied to monitoring global emissions targets at the local level. In the paper outlining the technique of remote sensing CO2 emissions at the pixel level of a satellite image (link), the authors frame their proposal as a means to address the Kyoto Protocol’s requirement for independent verification of emission reporting. The ability to make reliable estimates of the carbon output of individual coal-fired power plants is the express target of the mission and the rapid increase in construction of coal power plants in India and China is mentioned anecdotally as a likely cause of future emissions growth. Implicit among the complex technical specifications described is the positioning of European scientific method as a moralising emissions monitor over the coal-fired future of the developing world. In this context it’s hard not to see this satellite mission as an act of what Jack Stilgoe refers to as ‘anticipatory governance’ (link), where the rush to meet the power needs of growing populations in Asia is met in Western Europe with a simultaneous scramble to devise a technocratic means to enforce their emissions commitments.


The spectral specifications for CarbonSat call for sensitivity to three bands, one in the near infrared (NIR) and two in the shortwave infrared (SWIR). Of these the NIR resolution is highest at 0.1nm, meaning that in the 747–773nm band 260 discrete measurements will be made for each pixel of the array. The main objective of the scientists developing CarbonSat however relies on its spatial resolution, where, according to their calculations, a 2km² ground resolution will be sufficient to identify the emissions of a single power station. Spectral enforcement of emissions is only possible by a granular fragmentation of continuous solar radiation and terrestrial surface. As Holly Jean Buck writes (albeit in a different context): “engaging in this breaking apart, diagramming and modelling of the systems is how we have learned to think. Sciences both social and biophysical are doing just this. But it is not working. Another line of approach is needed”.
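The channel count follows directly from the stated band limits and resolution:

```python
# Channel count implied by the NIR specification above:
# a 747-773 nm band sampled at 0.1 nm spectral resolution.
band_start, band_end, resolution = 747.0, 773.0, 0.1  # nm
channels = round((band_end - band_start) / resolution)
print(channels)  # 260 discrete measurements per pixel
```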


CarbonSat makes clear the potential for such remotely sensed satellite observations to be used in implementing a form of spectral climate governance. And, if humanity were to actively engage in the kind of planetary scale geoengineering experiments surveyed by Buck in After Geoengineering, then these same hyperspectral instruments might provide our most immediate means of analysing their climatic effects. But the governance model suggested by the CarbonSat documents relies on satellite surveillance identifying local rogue polluters, which seems about as likely to reduce global emissions as CCTV cameras are to prevent crime. If another model is needed then it should be one that is capable of connecting both spectral phenomena and populations rather than establishing adversarial oversight. Perhaps conceiving of a spectral planetarity could enable us to abandon what Benjamin Bratton refers to as the “tenuous differentiation of geoculture from geotechnology” and instead realise and build upon the mutually constitutive spectral relations between geotechnics and geophysics.

Pixel Mining

From Friday 23rd to Sunday 25th July I will be showing a new video installation at D-UNIT, Bristol:


We mine and refine rocks to make pixels glow. Digital electronics now outnumber human beings, and each individual unit uses the vast majority of terrestrial metals in its components. According to the US Geological Survey 22 billion handheld electronic devices were manufactured in 2014. The LED screens in these devices used 130kg of gallium, 170kg of cerium, 120kg of arsenic and 180kg of lanthanum. If we knew the average number of pixels in each device we could calculate the geological cost per pixel of our screen time. Every five days the earth observation satellite Sentinel-2 produces another image of every site from which its raw materials were extracted. These images have a ground resolution of 10 metres per pixel, but we can’t calculate how many square metres of terrestrial surface were turned over to produce each pixel in its camera. If we could we might be able to derive a planetary resolution: the total number of pixels the Earth can support.
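Taking the USGS figures quoted above at face value, the per-device share of each metal is straightforward to derive (a back-of-envelope sketch, not a lifecycle analysis):

```python
# Rough per-device accounting from the quoted USGS figures:
# 22 billion handheld devices manufactured in 2014.
DEVICES = 22e9
metals_kg = {"gallium": 130, "cerium": 170, "arsenic": 120, "lanthanum": 180}

for metal, kg in metals_kg.items():
    micrograms = kg * 1e9 / DEVICES   # kg -> micrograms, shared across devices
    print(f"{metal}: {micrograms:.1f} µg per device")
```

Each screen’s share is a few micrograms of each metal, which is precisely why the geological cost per pixel remains invisible without this kind of planetary accounting.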


Using hacked PC monitors, satellite time-lapse, and found footage of electronics manufacture and recycling, this new video installation connects the flickering screen image with the cycles of extraction that make its appearance possible, asking how many pixels and how much screen time the planet can sustain.

D-UNIT has been initiated by artist Megan Broadmeadow and her partner Ed Metcalf with the aim of providing opportunities for 2020 graduates who were unable to have degree shows, as well as artist-led initiatives from Bristol-based artists and exhibitions for UK-based mid-career artists. They will also be running public workshops in practical work and digital skills over the winter months.


D-UNIT is located at:

Durnford Street




www.dunit.space Instagram – d.unit.studios

Current Concerns in Artistic Research

I was recently runner-up for a new job. For the interview I was asked to give a presentation on this title, and as I spent considerable time working on a response I thought I would post the text of the talk here (apologies for the length; I was asked to talk for half an hour):

I find it increasingly difficult to separate the most pressing concerns of artistic research from those of society at large. Looking at my current students’ research topics sketches the territory clearly. Among them are students working on, and writing about, the microbial health of the oceans, the surveillance of populations by data collection through smart speakers, and the effects of capitalism on the methods, markets and aesthetics of the arts. These same threads of ecological, technological and political critique seem to recur year on year, with ever more urgency. Over the coming decades we face an unprecedented triangulation of crises that will surely require extraordinary levels of co-operation between people of different cultures and disciplines.


Artistic research may not be equipped to provide solutions, but artists continue to engage with and contribute to these debates, to foster dialogue, to visualise possible futures, and to bring that which is obscured to the foreground.


Hopefully, faced with such challenges, we can declare the question of what exactly might or might not constitute artistic research to be irrelevant. I seem to have spent much of my professional life on the fringes of such fruitless semantic debates, for example about exactly where to draw the line between music and sound art. And I think we need to take seriously Hito Steyerl’s warning that such arguments over inclusion and exclusion in any one discipline become in themselves disciplinary. So – (as much as I don’t believe that science has a monopoly on truth) – I’m very happy to refer anyone still interested in drawing lines between disciplines to Karen Barad’s observation that the closer one looks at an edge, the more it disappears, dissolving into a diffraction pattern, oscillating between dark and light, interior and exterior.

Before considering the current concerns of artistic research I would first like to quickly identify one of its strengths, one that has previously been highlighted by Michael Dieter in his writings on Critical Technical Practice: the creation, formation, or articulation of problems. This is of course not the exclusive domain of artistic research. As Dieter reminds us, Foucault considered his writing to be ‘an act of thought involving the process of defining a problem’, and surely much critical writing in the humanities today continues that tradition. But artistic research is perhaps unique in working with these problems materially, articulating them through practice and therefore often directly engaging with the very materiality that defines the problem in the first place.


Holding this fondness for the formulation of problems in mind, I would like to propose that one crucial concern for a discipline with such heterogeneous foundations as artistic research – a discipline whose boundaries must necessarily remain flexible, porous and indistinct – is surely how it negotiates its relationship with other disciplines, both within and beyond academia.


Henk Borgdorff’s concept of ‘boundary work’ continues to prove useful in this regard because, as much as artistic research will always be located between art and academia, its knowledge also often inhabits the boundary of another practice, another discipline, another field. As Borgdorff has written elsewhere, “an important distinction between art practice in itself and artistic research” is that “artistic research seeks to contribute not just to the artistic universe, but to what we know and understand”, and that knowledge and understanding is often targeted beyond the boundaries of what he refers to as the ‘artistic universe’. If artistic research is good at framing problems and asking questions, then those problems and questions are often addressed to another sphere beyond the arts. This is perhaps both why researchers outside the arts like to collaborate with artists, and also why others become frustrated by working with artists: because we revel in creating problems in disciplines not our own.


This concern is not particularly new – the framing of the 2009 Sensuous Knowledge conference at Bergen National Academy of Arts, for example, included the question: “How can artistic research make a meaningful and relevant contribution outside of itself?” – but it is a question that persists today and shows no sign of either abating or becoming satisfactorily resolved just yet.


One presumption of the arts that appears to be actively challenged by creative practitioners from a wide range of backgrounds is the ambiguous relationship between the arts and functionality, or perhaps more accurately, purpose. We are, it seems to me, at a moment in which decreasing numbers of artists are content with the paradigm of ‘raising awareness’ of the issues with which their practice engages, while more and more are producing works that seek to operate actively in cultural spheres beyond their own.

From Amy Balkin’s Public Smog project, the long-term ambition of which is to have the Earth’s atmosphere listed as a UNESCO World Heritage Site, to the legal testimonies of Forensic Architecture, artist-researchers are creating work that no longer merely formulates problems, serves as a provocation or publicises its concerns but instead seeks to actively submit evidence, build a case, propose an alternative or challenge an existing power structure.


Examples such as these seem to me to move beyond what Tom Holert identifies as the demand “voiced in various sections of public culture” that artists “work on appropriate, adequate and timely responses to historical events, political change, social crises, or environmental catastrophes”. Conversely, the demands made by these practices reject the artist’s position as simply a ‘respondent’ to their geopolitical context, invoking in its place a role in which the work of art serves to actually alter that context.


Peter Sonderen has said that “artistic research actualises what it wants to show, it makes its knowledge tangible”, but in works like these there remain emphatic aspirations that are not realised, and that are often considered unrealisable, or perhaps even unrealistic.

It is then somewhat ironic that artist, activist and occasional curator Paolo Cirio used the title Evidentiary Realism for a group show encapsulating the work of artist-researchers who investigate, document and “examine the underpinning economic, political, legal, linguistic, and cultural structures that impact society at large”. Balkin and Weizman were both included in the 2018 exhibition alongside work by Suzanne Treister, who exhibited print-outs of documents from the Edward Snowden files, defaced or redacted with doodles that appropriated the graphic content of the original slides to partially obscure the leaks, and Ingrid Burrington, whose lenticular prints overlaid before and after satellite images of locations in which major data centres had been built, evidencing the physical scale and environmental impact of the data storage that we have all come to rely so heavily upon. Alongside these contemporary examples were what might be thought of as historical precedents for such research-based evidential practices, exemplified by the work of Hans Haacke, Mark Lombardi or Harun Farocki.

The controversy surrounding Cirio’s own most recent project Capture, which was censored prior to the opening of the exhibition Panorama 22 in France, exposes the difficulties of producing work on the boundary between art and politics. The work consists of a collection of widely available press and social media images of the faces of French riot police officers, processed by facial recognition software and then pasted both on the interior walls of the gallery and exterior walls throughout the city. The project is intended to highlight the danger to privacy represented by facial recognition, and is accompanied by a provocative online platform that proposes to crowd-source the officers’ identities. Cirio adopts the now familiar strategy of inverting the gaze of such technologies back upon the authorities who usually wield them.


The controversy surrounding the work and its subsequent censorship highlights the fact that when the research questions posed by artists raise implications beyond their own discipline, the consequences can also extend beyond the control of cultural institutions. In this case it is too soon to know whether the outrage and demands by the French Interior Minister to withdraw the work from the show will eventually serve Cirio’s own aim to challenge the increasing use of facial recognition systems, or are merely a demonstration that such inversions of existing power structures will never be tolerated. For the artist stepping beyond their discipline into a political arena, there can also be disciplinary consequences.

Stepping back to consider the relationship between a project such as this and research in other disciplines, I am struck by how often the agenda of research in engineering, technology and the sciences has – intentionally or otherwise – established possibilities, protocols and systems which end up becoming embedded in society at large. The streaming of this talk, and in fact the vast majority of university lectures this semester, are made possible by two research projects from the 1960s, one at the University of Southampton which pioneered the transmission of data in fibre optics and another at Bell Telephone Laboratories which invented a rudimentary image sensor capable of digitally encoding the incident light on its surface. We all carry the outcomes of innumerable research projects in these fields in our pockets and produce critical artworks or write theoretical tracts about their societal impact either too late or from too marginal a position to have an impact on their widespread adoption.


It will doubtless sound like what in business talk is referred to as blue-sky thinking – which is surely not so far from having one’s head in the clouds – to imagine that an artistic research project could ever realise such widespread impact. But nevertheless one of my questions today about the future of artistic research is: how might we develop mechanisms or means for its knowledge and understanding to be put into action, for the problems which it formulates to become part of our shared social discourse?


Another question that I believe remains unresolved is how exactly to make use of the position of artistic research within the academy or university. Now that it has become institutionally accepted that artistic projects can constitute research, might it be possible to leverage this privilege into some actual influence? And if one of the strengths of artistic research lies in its ability to formulate problems outside of itself, then might it be possible to cluster around those problems a transdisciplinary team of researchers, practitioners or experimentalists who between them have the expertise, facilities and resources to adequately address those problems?

Interdisciplinarity is certainly rife with the familiar difficulties brought about by collaborations in general, and with the conflicting interests and frames of reference that arise when people from different backgrounds work together. This has been highlighted by a current artistic research project at Central Saint Martins in London. Manifest Data Lab is a transdisciplinary research group “employing climate data within critical arts settings”. The project aims to provide a visual imaginary of climate change that is “capable of accounting for how the planet and its climate functions as a set of connected material, social and cultural relations within which we are implicated”.

The first in a series of slides mapping the problematics of art, data and climate states: “artists illustrating science rather than imaginative transformations of climate knowledge”, highlighting a particularly intransigent issue that was also identified by Hans-Jörg Rheinberger almost a decade ago. As he puts it, art-science collaborations have often been “nurtured on the part of sciences, mainly in the name of renewing understandings of science”. Indeed in my own experience of such collaborations scientists often seem unaware of – or surprised by – the ability of the arts to formulate and address many of the same questions that inform the ethics and ambitions of their own discipline.


The expectation that hiring an artist-in-residence will increase public engagement with – or comprehension of – one’s scientific research outcomes seems exemplified by a recent call from the Sinfonia research project at the Center for Biosustainability of the Technical University of Denmark. Their specification that a musician or composer is “especially welcome” to apply conveniently aligns with the project’s YouTube explainer, which relies heavily on musical metaphors of cellular harmony to argue the benefits of their synthetic biological methodology.

To break out of this pattern it might be necessary to move beyond the current model of the artist-in-residence, in which an individual artist is embedded in a discipline or organisation to produce work responsive to that context. Within this model there exists a structural imbalance between the organisation – which is always in the role of the host, and sometimes also that of the funder or commissioner – and the artist, who is bound by the etiquette of the guest, usually grateful for the opportunity, expenses or fee, and often isolated, immersed in a practice or disciplinary culture alien to them.


A precedent from before the time of artistic research is perhaps instructive here. The Artist Placement Group, conceived and founded by Barbara Steveni in 1965, arranged long-term placements for artists in various industries and government departments in an explicit attempt to “shift the function of art towards decision making”. Its ground-breaking activities throughout the 1970s are often cited as establishing the model of the artist-in-residence that is now so familiar to us. As John Walker wrote in 1972, “the Artist Placement Group’s position was one of realism: in the present society it is decision-making that counts, and therefore the greatest hope for change resides in the attempt to influence decision-makers”. This hope, I believe, is the same as that which motivated Amy Balkin to send 90,000 signed postcards to Germany’s Minister for the Environment in 2012. And it is the same hope which motivates the transdisciplinary team of researchers that make up Forensic Architecture to prepare meticulous reports into state-sanctioned atrocities.


Perhaps the model of the solitary artist-in-residence – striving to articulate problems in disciplines in which they have little expertise, while surrounded by experts – is not one capable of delivering this influence. This is not intended to discredit the impressive legacy of APG’s pioneering work, but to say that perhaps we need to look to other models of transdisciplinary collaboration if the research agenda of the arts is to be taken seriously beyond its own boundaries.


How else then might we think of the interaction between disciplines? While one obvious alternative would be to formulate research agendas in a transdisciplinary context in the first place, I would like to suggest that the model of “co-inquiry” articulated by curator Nicola Triscott, founder of the Arts Catalyst and now director of FACT Liverpool, might be more fruitful. According to Triscott, this model “enables different types of inquiry to work side-by-side, to cooperate rather than demanding collaboration which requires a continued attempt to construct and maintain a shared conception of a problem”.


The desire of artistic research to have an impact on decision-making brings us back again to the evidential role played by some contemporary practices, because – as Susan Schuppli has said – “the notion of evidence has become crucial under the conditions of climate change and global warming, because one requires evidence in order to make a political claim and to influence environmental policy or political decision-making”. Schuppli’s practice and writing are to my mind particularly pertinent here because, in reframing the legal-linguistic term “material witness” in relation to artistic research, she locates the evidential as a capacity of the material.


For Schuppli, “Materials record, capture and carry traces of external events, and can be scrutinised and unfolded to produce some kind of history, sometimes even a counter narrative”. In her own practice this capacity is demonstrated most recently through her project Learning from Ice, in which she has been working with ice core scientists who use the tiny bubbles of air trapped in an ice core to map historic changes in the quantities of atmospheric carbon dioxide. In this example, as Schuppli says, “the thing itself is captured by the materials”. Ice then carries an irrefutable testimony in its very materiality, one which connects to theoretical debates in artistic circles around indexicality and material truths.

But examples such as this might also be seen by some artists as placing demands upon artistic research that move the field beyond its traditional concerns – or even as implying that it is only through meeting the requirement for evidence which Schuppli cites that artists can contribute to such debates. It is certainly not my intention to suggest that the only way in which artists can make an epistemic contribution is through this sort of documentary practice. I would therefore like to close by briefly discussing a work which – to my mind – contributes equally to ecological debates, but through less earnest and more speculative means.


In their collaborative project Asunder, Tega Brain, Julian Oliver and Bengt Sjölen imagine network technologies diverted from their current disturbingly authoritarian, extractive and accumulative uses to face the environmental challenges of a changing climate. At the heart of their installation for transmediale 2019, a supercomputer analysed satellite, climate and geological data to generate geoengineering plans for various terrestrial regions before simulating these possible futures. On the one hand the project seems to propose a viable technological solution to repairing environmental damage by tasking an algorithmic intelligence trained on our communal knowledge of climatology.

But in the absurdity of some of the solutions generated – including for example the straightening of coastlines and re-routing of rivers – it also demonstrates a healthy dose of scepticism about what the reality of such a system could entail. The project poses a plausible scenario in which artificial intelligence is used to inform environmental planning while simultaneously pointing to its likely pitfalls.


In extrapolating from current trends in machine intelligence and applying them to planetary problems, the artists pre-empt a speculative science, but also embed its critique within its prototype. It seems to me that this capacity to poke fun at one’s own creations, to problematise solutions while working on them, will be indispensable if we are to envision and implement new relationships between biosphere and technosphere – and that artists should always be part of those conversations.

[some brief thoughts on] Semiconductor Supply Chains

As part of my ongoing Earth Art Fellowship at Bristol University I have been trying to research what raw materials might be found in the two iPhone 5s that we have been slicing up and melting. This in itself is a near-impossible task, as Apple are keen to obscure the details of their silicon and mirrors: the now widely available PDF of the PCB layout that I am using to locate possible raw materials is labelled ‘Foxconn Confidential’ at the top left. Luckily such secrecy breeds curiosity, and we are awash in teardowns identifying the parts and functions within this schematic. But even armed with knowledge of the manufacturer, function and chip-code of each semiconductor, working out the materials used, their proportions and origins is a far harder task given the deliberate opacity of smartphone supply chains. For now I just want to make two quick observations based on what I have found out so far.

The iPhone 5 used the then-new Apple A6 chip as its central processor. This chip, Wikipedia tells us, was among the first to use a ‘high-k dielectric’ material in place of the usual silicon dioxide gate insulator. Delving further, it seems that the principal benefit of this material is that it enables ‘further miniaturisation’. (This could be considered somewhat ironic given that ever since the release of the iPhone 5, Apple’s subsequent smartphones have all grown larger.) This miniaturisation is – like much of the functionality of contemporary digital media – reliant on rare metals; in this case it is speculated that the A6 chip is doped with Hafnium. Hafnium occurs in heavy mineral sand deposits, usually in beach environments such as those in Western Australia and South Africa, where it exists in solid solution with Zirconium. It is produced as a by-product in the refinement of the high purity Zirconium required by the nuclear industry for the outer cladding of nuclear fuel rods. Current production of Hafnium is approximately 70 tonnes per annum, and the increasing shutdown of nuclear reactors globally is likely to hinder the growth of the Hafnium market. The miniaturisation of consumer electronics is therefore incidentally entangled with, and reliant on, the nuclear energy industry.

At the bottom left of the rear side of the iPhone PCB we find a chip called the Skyworks 77352-15, a precursor of the equivalent chip in current models. This chip amplifies global satellite signals and is based on an Indium Gallium Phosphide (InGaP) substrate. Indium has become synonymous with contemporary technology, as it is a vital component of both touchscreens and solar cells, which are coated with Indium Tin Oxide (ITO). If, as both Marinetti and YoHa have contended, Aluminium was the defining metal of modernity for the twentieth century, then surely the conductivity of ITO, and its transparency to the visible spectrum, make it a leading contender for the defining substance of our technological present.

Indium is also produced as a by-product of a larger refinement process, in this case the production of Zinc from the mineral Sphalerite. Known Indium reserves are estimated at 15,000 tonnes. Although the true figure is likely to be considerably higher, as with Hafnium its availability is limited by the cost of its production. Recycling Indium from end-of-life devices currently accounts for less than 1% of global production. In recent years numerous scientific papers have shown that the Indium in ITO can be reclaimed from solar cells and LCD displays by crushing them to millimetre-sized particles, which are then soaked in an acid solution from which the Indium can be recovered electrolytically. As yet, however, this process has not been implemented at a scale sufficient for the mass recycling of Indium, largely because current price levels have not ‘justified’ the recovery of Indium from laptops, phones, and other e-waste. It is estimated that the price of Indium would need to exceed $700/kg to make recovery from end-of-life devices ‘profitable’. The myopia of the marketplace again takes precedence over an economy of means and materials; the abstract numerical economy outweighs the material, planetary ecology on which even this brief foray into one commodity shows it to be entirely reliant.
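The arithmetic behind such a threshold can be sketched in a few lines. The processing cost and indium yield below are illustrative assumptions of my own, not sourced figures; the point is only to show how a break-even price of roughly this order of magnitude emerges from the ratio of recovery cost to recoverable mass.

```python
# Back-of-envelope sketch of indium recovery economics.
# The cost and yield figures are illustrative assumptions, not sourced data.

def break_even_price(cost_per_kg_ewaste, indium_g_per_kg_ewaste):
    """Indium price ($/kg) at which recovery from e-waste covers its cost."""
    return cost_per_kg_ewaste / (indium_g_per_kg_ewaste / 1000)

# Suppose crushing, acid leaching and electrolysis cost $0.50 per kg of
# e-waste processed, and each kg yields roughly 0.7 g of recoverable indium.
price = break_even_price(0.50, 0.7)
print(f"break-even indium price ≈ ${price:.0f}/kg")
```

On these assumed numbers the break-even price lands near the $700/kg figure cited above; halve the yield or double the processing cost and the threshold doubles, which is why the market’s ‘justification’ is so sensitive to both.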

Petrified Media

Micrograph of fragment of molten iPhone 6, heated to 1500ºC

If one of the potential markers of the Anthropocene in the strata record of the planet will be the concentration of CO2 in the atmosphere, then – as Kathryn Yusoff points out – this marker has a cyclical fossilisation. It is the discovery and combustion of fossil fuels that has enabled the massive expansion of population and consumption over the last 200 years. The burning of fossils petrified over the millions of years since the Carboniferous period caused the CO2 spike whose effects we are now experiencing. And many of the technical and scientific discoveries that are emblematic of modernity are founded on the energetic intensity of this combustion – including the high temperature furnaces required for both volcanology and semiconductor manufacture. The extensive physical traces that we will leave in the sedimentary record of the planet have only been possible due to an equally extensive extraction and consumption of carboniferous fossil fuels from Earth’s deep past. As Yusoff writes, “in unearthing one fossil layer we create another contemporary fossil stratum that has our name on it”.

Contemporary geologists have begun to categorise these speculative future fossils according to the ichnological system used by palaeontologists. Using this system, habitation traces are termed domichnia, locomotion traces repichnia, feeding traces fodichnia, and so forth. There are, however, several categories of trace left by human habitation that do not translate directly onto existing ichnological classifications. Jan Zalasiewicz, for example, proposes the category of frivolichnia to stand for pleasure traces: “Think of it: cinemas, sports stadiums, parks, museums and art galleries, theatres, gardening centres…”. But what of our media? How might we classify the many technical objects that humans have invented and used for the purposes of recording, communicating, and computing? If we are to follow this method of categorisation by purpose or function then we can hardly reduce the many social, commercial, and cultural functions fulfilled by such devices to simply pleasure. A further expansion of such categorisations might then include commichnia for communications media or compichnia for computational media. And it is with the petrification of these devices within the strata record of the Anthropocene that I am primarily concerned here – a sedimentary accretion which Jussi Parikka describes as “piling up slowly but steadily as an emblem of an apocalypse in slow motion”.

Zalasiewicz has spent several years working (more broadly) on this very question as part of his role in the Anthropocene Working Group (AWG), and across various articles makes several observations that are of relevance to any attempt to speculate on the future fossilisation of contemporary electronics. He notes for example that “humans produce artefacts from materials that are either very rare in nature or are unknown naturally”. These novel or highly refined materials exist in our media in concentrations and combinations not found in nature, and it is reasonable to assume that the “anthropogenic lithologies” into which they will petrify will be no less extraordinary. Take, for example, the smartphone, which a recent geological research project at the University of Plymouth found to contain such a vast array of metals and minerals that they merit listing: iron, silicon, carbon, calcium, chromium, aluminium, copper, nickel, tin, indium, germanium, antimony, niobium, tantalum, molybdenum, cobalt, tungsten, gold, silver, dysprosium, gadolinium, praseodymium, and neodymium. How might such a densely packed combination of rare chemical elements petrify if buried, whether in landfill or by slow underwater sedimentation?

The key variables, Zalasiewicz et al. inform us, are moisture, temperature, oxygen content and pH. In the example of landfill, the human propensity to dispose of rubbish in plastic bags produces numerous micro-environments within the lining that surrounds the whole.

Placed in a bag with discarded food, a watch will soon stew in acid leachate and may corrode away completely. However, if placed together with some discarded plaster or concrete it could rapidly become encased in newly crystallised calcium carbonate. (legacy of the technosphere) 

How the plastic casings, printed circuit boards, glass screens, ceramic and metallic components of contemporary media will fare under these myriad subterranean chemical conditions is likely then to be almost as variable as the obscene diversity of brands and model numbers under which they are now manufactured. Some percentage of the plastics and polymers may, in the right conditions, ultimately percolate through the surrounding rock to form new oilfields. Some of the metals may erode fairly quickly, oxidise and recombine with other surrounding minerals, while others, particularly stainless and other industrially hardened types of steel, may well last long enough to leave an inscription of their shape in the surrounding rock. But one of the most intriguing possibilities lies in the omnipresent silicon microchip, or integrated circuit, which has become the defining component of our contemporary media. Silicon and quartz – which Zalasiewicz describes as “chief” of the most resistant minerals – are remarkably inert: most acids do not attack them, and they defy most chemical weathering. There is then a tantalising possibility that a significant number of these chips could survive the extremes of pressure and temperature; and, given that microscopic details of graptolites have been preserved in the process of fossilisation, might the microelectronic paths of some of these chips retain or impress their form in the surrounding lithosphere through deep time?

These microscopic details of graptolite structures are retained due to the formation of pyrite – otherwise known as fool’s gold – inside the hollow spaces left by their skeletons. Pyrite, Zalasiewicz informs us, “tends to form in subsurface cavities … often filling the entire space to create perfect replicas of their interior”. Once pyritised, these structures are remarkably resilient, surviving the extreme pressures through which mudrock transforms into slate; and although pyrite weathers away once exposed to oxygen and water, the cavity remains intact. Commenting on which contemporary urban detritus might be candidates for pyritisation in the coming millennia, Zalasiewicz includes: “the interiors of any of the myriads of tiny metal and electronic gadgets that we now produce in their millions … for these in themselves contain iron, one of the ingredients of pyrite”. According to the experiment referenced above, iron in fact accounts for the largest proportion of a current smartphone: 33 grams. So, as Zalasiewicz concludes, “part of the detritus of human civilisation will certainly bear the sheen of fool’s gold”.

According to recent geological expertise then, there is a significant chance of our current media persisting as petrified traces of our technological culture. While the apt poetic irony of the term fool’s gold will not survive through deep time, it seems likely that the media technological trinkets of the present will – perhaps in the form of polished rectangular pebbles of improbably pure silicon surrounded by a glistening pyritised cavity. If such a fossil is ever unearthed millions of years hence, then the folly of its mass production and visual appeal might well be legible in its coincidence with the dramatic increase in CO2 levels and its concomitant impact on the biosphere. As Sy Taffel writes: “technofossils leave curious material traces whose geological appearance will be accompanied by a major reduction in global biodiversity, the sixth mass extinction event in the stratigraphic record”.

Thermocultures of Volcanology

I have recently started an Earth Art Fellowship with the School of Earth Sciences at Bristol University, alongside a group of volcanologists working on what is known – in shorthand – as the DisEqm project. DisEqm stands for Disequilibrium, which I am told is a relatively new concept in volcanology, and one which marks a radical break with all previous laboratory models of volcanic eruptions. Those models were based on measurements taken during ‘equilibrium’ conditions, and are therefore irrelevant to modelling conditions during an eruption, when all of the variables of temperature, pressure and viscosity are in constant flux: disequilibrium.

The team at Bristol have spent the past three years building a high temperature, high pressure (HTHP) rheometer – a device that quantifies the viscosity of a liquid by measuring the torque required to stir it. The challenge in this instance is to build an apparatus capable of stirring a tiny sample of magma heated to temperatures as high as 1400˚C and at a pressure equivalent to that of magma 6km beneath the Earth’s surface. What quickly becomes apparent from hearing about their progress is the extent of the artifice required to synthesise these conditions. In volcanological laboratories pressure and scale are inversely proportional: the higher the pressure you wish to emulate, the smaller your sample has to be, for the simple reason that large samples at high pressure are potentially extremely powerful explosives. In this case the sample is just 6cm long and a few millimetres wide. To work at conditions equivalent to the Earth’s core, however, a sample must be squashed into the space between two diamonds measuring just a few microns. Processes that occur in a subterranean layer more than 2000km thick are modelled in laboratories on the area of a single pixel of your screen.
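The principle of the instrument can be illustrated with the textbook relation for a concentric-cylinder (Couette) rheometer – the Margules equation – which recovers viscosity from measured torque. This is a generic sketch with hypothetical dimensions, not the calibration of the Bristol apparatus, whose magnetic coupling and HTHP conditions complicate matters considerably.

```python
import math

def viscosity_from_torque(torque, omega, r_inner, r_outer, height):
    """Margules equation for a concentric-cylinder rheometer:
    viscosity (Pa·s) from the torque (N·m) needed to spin the inner
    cylinder at angular velocity omega (rad/s) inside the outer cup."""
    return (torque / (4 * math.pi * height * omega)) * (1 / r_inner**2 - 1 / r_outer**2)

# Hypothetical values: a 6 cm tall sample in a 2 mm annular gap,
# a slow spindle, and the torque reading from the coupling.
eta = viscosity_from_torque(torque=0.02, omega=0.1, r_inner=0.002, r_outer=0.004, height=0.06)
print(f"viscosity ≈ {eta:.0f} Pa·s")
```

With these made-up numbers the result comes out in the tens of thousands of Pa·s, the right order of magnitude for viscous silicic magmas; the formula makes plain why the spindle is the thermal weak point, since the same shaft that transmits torque would also conduct heat.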

In her essay on the ‘Thermocultures of Geological Media’, Nicole Starosielski uses the example of thermal image sensors composed of pure germanium doped with mercury, whose sensitivity to infrared frequencies is used in the geological remote sensing of minerals in the earth’s crust. To render these thermal images the sensor itself must be “cooled to between −243.15 degrees and −196.15˚C… The stabilisation of the thermal environment … in turn enables the remote detection of temperature”. Although a measurement of temperature is not the experimental goal, a similar dynamic is at work in the operation of the HTHP rheometer. To measure the torque required to stir pressurised magma without simultaneously melting your measuring apparatus requires several means of thermal control: primarily insulation and water-cooling, but also a physical discontinuity between sample and instrument. The magma sample must be pressurised and heated to 1400ºC; the electronics measuring the torque, however, are required to remain at room temperature and atmospheric pressure. So, while in a traditional rheometer the spindle stirring the liquid is the same as that used to measure torque, here the sample must be stirred magnetically to prevent the conduction of heat through the spindle.

Overheating is a common problem in technical apparatuses. The central processing chip of a computer can approach temperatures of 100˚C while performing CPU-intensive tasks. To mitigate these extremes of temperature, which would otherwise crash software and permanently damage the chip, a heatsink and fan are clamped against it, with thermal paste ensuring the efficient transfer of heat out of the silicon and into the aluminium. Most heatsinks used in consumer electronics are made from pure aluminium, the quintessential metal of contemporary technologies, and one with good thermal conductivity. This thermal relationship between silicon and aluminium in electronic circuitry is mirrored in the volcanology laboratory. The viscosity of magma samples is governed by the proportion of silicon dioxide (SiO2) they contain, and the crucibles in which these samples are melted are made of alumina (Al2O3). Computation extracts pure elements from raw ores, refining rocks in order that they can micromanage electron flux, process data, or record an image. But in exploiting their thermal and electrically semi/conductive properties it inevitably imitates lithic processes. The abstractions of computation are as reliant on the properties of the minerals from which they are made as they are on the cultural manipulations performed on those substances. The chemical properties of conductivity, photosensitivity, and inscription play out geologically in earth processes just as they do technologically in media processes.
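The journey of heat out of the silicon and into the aluminium is conventionally modelled as thermal resistances in series, from die to case to paste to sink to air. The figures below are typical datasheet-style assumptions for a desktop CPU, not measurements of any particular chip.

```python
# Steady-state thermal model of the silicon-to-aluminium path:
# junction temperature = ambient + power x sum of thermal resistances.
# Resistance values (degC per watt) are illustrative assumptions.

def junction_temp(power_w, t_ambient_c, resistances_c_per_w):
    """Die temperature for a chip dissipating power_w watts through a
    series chain of thermal resistances (junction-to-case, paste, sink-to-air)."""
    return t_ambient_c + power_w * sum(resistances_c_per_w)

# A 65 W CPU in a 25 degC room: junction-to-case 0.3, thermal paste 0.1,
# heatsink-to-air 0.5 degC/W.
tj = junction_temp(65, 25, [0.3, 0.1, 0.5])
print(f"junction temperature ≈ {tj:.0f} °C")
```

On these assumptions the die sits some 60 degrees above ambient; remove the aluminium sink (raising the sink-to-air term tenfold) and the same arithmetic pushes the silicon well past its safe operating limit, which is the whole argument for the silicon–aluminium pairing.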

Photography, Radiation & Robotics Beyond the Visible: Fukushima

While researching instances of cameras exposed to radiation during my PhD, I spent a long time combing through the media archive of the Tokyo Electric Power Company (TEPCO) which contains a vast repository of video from the investigations and attempted clean-up of the Fukushima plant. I quickly became fascinated by the videos from the interior of the Primary Containment Vessel in Reactor 2. 

Following the completion of the PhD I decided to write something about this archive and its relation to (in)visibility. That essay has just been published as part of a special issue of the online journal Continent on Apocryphal Technologies. It is available here: 



Before Our Eyes (part 3)

Lost Time and the Artificial Present

For such a system to succeed, the rate of the stimulus must exceed the speed of our nervous impulses. In DLP systems two distinct frequencies combine, both well above the temporal resolution of human sight. The colour wheel revolves at approximately 120 revolutions per second, while the micromirrors on the DMD chip dither at a frequency near 10,000Hz. When media technical operations so routinely outstrip human temporal resolution, the instantaneity so doggedly sought by the photographic industry during the twentieth century loses its meaning. The appearance of an image on the screen of a digital camera is now fast enough to be commonly described as instantaneous, at least with reference to our perception, yet the camera conducts many operations of correction, optimisation, reduction, and compression on each image before it is displayed. Even ‘an instant’ has become an interval capable of being instrumentalised by image processing algorithms.
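The disparity between these machine frequencies and the temporal resolution of sight can be made concrete with a little arithmetic, taking the figures above and a conventional flicker-fusion threshold of roughly 60Hz as an approximate perceptual limit (the threshold varies with luminance and viewer, so this is a simplifying assumption).

```python
# Comparing the DLP projector's internal frequencies with the
# approximate temporal resolution of human vision.

wheel_rps = 120          # colour wheel revolutions per second
segments = 3             # red, green, blue segments per revolution
dither_hz = 10_000       # micromirror switching frequency

colour_field_ms = 1000 / (wheel_rps * segments)      # one colour's window
mirror_flips_per_field = dither_hz * colour_field_ms / 1000
perceptual_window_ms = 1000 / 60                     # flicker fusion ≈ 60 Hz

print(f"each colour field lasts {colour_field_ms:.2f} ms")
print(f"≈ {mirror_flips_per_field:.0f} mirror states shape its intensity")
print(f"all within a single {perceptual_window_ms:.1f} ms perceptual window")
```

Each colour exists for under three milliseconds, modulated by dozens of mirror states, and roughly six such fields pass within one perceptual instant: the machine performs its entire chromatic choreography inside the gap below our temporal resolution.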

The micro-temporality of these technical operations is also predicated on a physiological understanding of human perceptual response established in the nineteenth century by Helmholtz’s measurements of stimulus and response. Prior to these experiments, nerves were presumed to transmit stimuli instantaneously around the body. Contrary to this presumption, Helmholtz “aimed at investigating this alleged instantaneity more closely and, if possible, to define it more precisely” (p. 61-2). To conduct this research Helmholtz first constructed an apparatus assembled from a sample of frog muscle, a rotating cylinder and a steel stylus (see image below). When the muscle was stimulated with an electrical impulse, its contraction caused the stylus to inscribe a curve in a soot-coated transparency that was wrapped around the clockwork-driven brass cylinder. From these curves it was possible to observe, and indeed measure, for the first time, a gap between sensation and resulting movement – cause and muscular effect – a gap which Helmholtz figured as temps perdu. Helmholtz’s subsequent experiments with human subjects measured a surprisingly consistent delay between stimulus and response of between 0.12 and 0.20 seconds. His repetition of these experiments in different areas of the body led him to conclude that “in humans the ‘message of an impression’ propagates itself to the brain with a speed of circa 60 meters per second” (p.144). The limit speed of lived experience was revealed and defined by a machine that hybridised the mechanical with the organic, stimulating the latter with electricity. Such precise measurements of physiological time were only made possible by the twin technics of clockwork and the electrical telegraph: time had to have been mechanised, and the body conceived as a network of electrical impulses, before the duration of human nervous impulses could be measured. Media again precede the mechanistic understanding of physiology.
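Helmholtz's 60 m/s figure also lets us separate transit time from the total delay he measured. A quick reconstruction (assuming a rough limb-to-brain path of one metre, my own simplification) shows that most of the temps perdu lies not in conduction but in processing.

```python
# Reconstructing Helmholtz's arithmetic: how long does a nervous impulse
# take to travel from limb to brain at his measured conduction speed,
# and how does that compare with the 0.12-0.20 s reaction times?

conduction_speed = 60    # m/s, Helmholtz's figure
path_length = 1.0        # m, rough limb-to-brain distance (assumed)

transit_s = path_length / conduction_speed
print(f"nerve transit time ≈ {transit_s * 1000:.0f} ms")

# Even the shortest measured reaction (0.12 s) leaves most of the delay
# unexplained by conduction alone.
unaccounted = 0.12 - transit_s
print(f"≈ {unaccounted * 1000:.0f} ms of the shortest delay lies beyond mere transit")
```

Conduction accounts for well under a fifth of the measured delay; the remainder – perception, decision, muscular initiation – is precisely the interval that, as the next paragraph argues, digital systems now colonise.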

In the context of digital technologies this temps perdu, the lost time of bodily reaction, has also been externalised – in an array of buffers, caches, and shift registers that all serve, be it in an operation of image capture, video playback or networked communication, to delay the materialisation of the instant in temporary stasis while it is archived or resynchronised by the time signature of the machine. And, due to the wide discrepancy between embodied temporalities and media-technical frequencies, these momentary delays are opportunities for further computation, or as Wolfgang Ernst puts it: “suspended in memory, time becomes mathematically available” (p. 28). To a chip whose clocking frequency is 10,000Hz, even the fastest possible human response time of 0.1 seconds represents a significant opportunity. The psychophysical quantification of a lag between stimulus and response enables the acquisition of the ephemeral by the logic of the machine. It is within this temps perdu that the processes of encoding, optimisation and compression are all achieved. As Florian Sprenger writes: “the fact that transmissions are constantly interrupted means that they are never completed in putative real-time … and that we have no direct access to the world we are connected to” (p. 20-1). Experience is extracted into memory before it registers in the mind.

What does it mean for an image to be instantaneous when it is routinely manipulated in advance of being seen? What is our experience of time when these operations are continually occurring in an imperceptible buffer before the screen? This is neither the time of the phenomenological present, nor the time of the live electronic broadcast, but time dissected, quantised and reconstructed in pre-instantaneous moments before our very eyes. For Ernst this means that “computing dislocates the metaphysics of the pure present to a micro-deferred now” (2018: 35). As Ernst shows in Chronopoetics, synchronicity was vital to the time-image of electronic television, but in the individualised playback of digital media synchronicity dissolves into myriad individualised timelines whose buffers and connectivity resynthesise the impression of synchronicity on demand. The live has been replaced with the live-like, a parallel temporality that slips in and out of sync with the now, in and out of sync with its soundtrack, in and out of sync with others.

In his analysis of The Helmholtz Curves, Schmidgen analogises Helmholtz’s method to photography, noting that these experiments both “cropped a specific part of reality in the lab” and “defined their own temporality”, which Schmidgen calls an artificially created present (14) – a temporality extracted from the conditions of the real in order that it might be measured. Conditions that were necessary for the precise study of bodily time are now replicated in media technical temporalities which capitalise on the relatively sluggish human physiological response times measured by Helmholtz under these same conditions. The artificial temporality of an experiment that revealed the durations of perceptual signals is now reproduced by one that capitalises on precisely those durations to construct the visible in advance of perception. Digital media recreate this artificial present anew every time we press play. Between the ‘stream’ of conscious experience and the ‘streaming’ of digital media lies a concatenation of technical processes of artificial colourimetry and temporalisation.

Duration and spectrum are not directly experienced, but recreated from micro-temporal and mono-chromatic fragments, re-synthesised afresh for each viewer. How do these media re-temporalisations of ‘the live’ and ‘the present’ re-model our own temporal perception? In media environments that are optimised for the individual, where search results, adverts and content are all tailored to our preferences, and where ‘timelines’ are personalised, do we still inhabit time communally? To be con-temporary is literally to be in-time-with, but what happens to communal experience of time when we are no longer in sync with our contemporaries?

Before Our Eyes (part 2)

Psychophysics of Colour

To reproduce a single colour frame of moving image, a DLP projector overlays three discrete images in quick succession, their output synchronised with the motion of a filter wheel divided equally into segments of red, green and blue, the three primaries that correspond with the colour sensitivities of our retinal cones. From a technical perspective the full colour image that we perceive never exists, but is only created in the audience’s perception by additive colour synthesis. From the perspective of the machine there is only a sequence of distinct red, green and blue images, whose intensity is micro-managed at the level of the individual pixel. Colour, as experienced in both DLP projection and unmediated human perception, is then never ‘true’ (as BenQ claim), but always a technical construction.
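This additive synthesis can be schematised in a few lines: the machine emits a sequence of monochrome fields, and the perceived colour is, to a first approximation, their temporal average. The integration model here is deliberately crude – an illustration of the principle rather than a model of retinal response.

```python
# Schematic of field-sequential additive colour synthesis: the machine
# outputs pure red, green and blue fields in turn; the eye integrates
# them into a single perceived colour. Channel values are 0-255.

def perceived_colour(fields):
    """Average each channel over a sequence of (r, g, b) fields,
    mimicking the eye's temporal integration of successive flashes."""
    n = len(fields)
    return tuple(round(sum(f[i] for f in fields) / n) for i in range(3))

# Three sequential monochrome fields: full red, full green, blank blue.
fields = [(255, 0, 0), (0, 255, 0), (0, 0, 0)]
print(perceived_colour(fields))  # (85, 85, 0): a dark yellow the machine never displays
```

The output colour appears nowhere in the machine's sequence of fields, which is exactly the point: the full-colour image exists only in the viewer's integration of monochrome instants.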

Through processes such as this, the production and reproduction of the digital image is founded on an externalisation of our perceptual faculties. Digital image technologies are designed so explicitly to be seen, that their technical specifications not only reflect but directly imitate the anatomical construction and perceptual effects of human vision. The pixelation of a digital micromirror device (DMD) reproduces on an optoelectronic grid the mosaic of cones lining the retina, while the colour wheel enforces a trichromatic filtering that targets their colour sensitivities. We can therefore conceptualise the optical mechanisms of a DLP projector as an attempt to build a projecting eye, a luminous electronic retina radiating colour onto the surfaces of its environment.

The optical principles on which this mechanism is based originate in the trichromatic theory of vision, hypothesised by Thomas Young in 1802 and subsequently proven through the psychophysical experiments of Hermann von Helmholtz and James Clerk Maxwell. The colour triangle, initially posited by Young (below, left) to describe colour spatially as created between the three poles of red, green and – as he supposed – violet, has now become a standard means of measuring the colour gamut of various display technologies, in which different technical standards can be described as differing sizes of triangle within the complete perceptual colour space circumscribed by the CIE system (below, right).
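This way of comparing gamuts as triangles of differing size can be made concrete with a short calculation. The sketch below uses the standard sRGB primaries as (x, y) coordinates in CIE chromaticity space and measures the triangle they enclose with the shoelace formula; treating triangle area as a proxy for gamut size is, of course, a simplification of colourimetric practice.

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle given three
    (x, y) points in CIE chromaticity space."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# The standard sRGB red, green and blue primaries in CIE xy space.
srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

print(f"sRGB gamut triangle area: {triangle_area(*srgb):.5f}")
```

A wider-gamut standard would simply yield primaries further apart, and hence a larger triangle within the same CIE horseshoe.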

This chromatic space postulated by Young was subsequently mapped empirically by Maxwell. To conduct his experiments, Maxwell constructed a handheld wheel (below, left) onto which could be clipped overlapping discs of different colours. The wheel was then spun fast enough that the colours mixed together in the perception of the observer, in much the same way that the discrete frames of a moving image appear as continuous motion. Using this simple instrument, Maxwell was able to quantify the perceptual effects of different ratios and combinations of the three primaries. In so doing, Maxwell ascribed numerical values to the proportions of vermillion, emerald and ultramarine used to achieve different tones, shades and hues; producing a series of discrete values within a field of subjective experience that had previously been understood as a continuous spectrum. To quantify colour in this manner can be understood as a kind of proto-digitisation, and Maxwell’s method prefigures the numericalisation of colour gamuts in media technical standards from the 216 ‘websafe’ colours to the considerably wider gamut of 16 million colours that can be coded in a six-digit RGB hex code.
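The arithmetic behind those two figures is straightforward: the ‘websafe’ palette permits six levels per primary, while a six-digit hex code permits 256 levels per primary, and the gamut is simply the number of levels cubed. A small sketch:

```python
# The 'websafe' palette allows 6 levels per channel; a six-digit
# hex code allows 256 levels (two hex digits) per channel.
websafe = 6 ** 3       # 216 colours
hex_gamut = 256 ** 3   # 16,777,216 colours

def to_hex(r, g, b):
    """Encode an RGB triple as a six-digit hex code."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(websafe, hex_gamut)        # 216 16777216
print(to_hex(255, 0, 128))       # #FF0080
```

Each two-digit pair of the hex code is, in effect, a Maxwellian proportion of one primary, now quantised to 256 steps.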

Maxwell’s conclusion from these perceptual experiments: “that the judgment thus formed is determined not by the real identity of the colours, but by a cause residing in the eye of the observer” (link) established human vision as a manipulable system of perceptual limitations. This psychophysical conception of sight as fallible and slow relative to mechanical motion persists throughout our contemporary media environment, and is the foundation on which all moving image technologies rely. And – in the colour filter of DLP projectors (below, right) – Maxwell’s colour wheel persists today as a techno-chromatic mechanism of externalised sight. A spinning disc originally used to measure the chromatic operation of human vision has now become a central component in the reproduction of projected colour. The dissolving of biological sight into its trichromatic primaries was diagnosed by the exact same mechanism that now resolves those colours before us.

In 1855, when black and white photography was still in its experimental infancy, Maxwell proposed a system for producing a colour photograph. By photographing the same scene through three separate red, green and blue filters and then, using magic lanterns, projecting each result through its respective filter so that the three images overlapped, he hypothesised that a full colour image could be produced. This process was successfully demonstrated six years later, creating a now much-reproduced image of a tartan ribbon. In DLP projection each frame of the moving image replicates exactly Maxwell’s process of additive colour synthesis, combining three discrete monochromatic images in the audience’s perception. Maxwell’s trichromatic system of projection is now automated by contemporary cinema to occur, in some systems, as often as ten times for every frame, or 250 times a second.
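The final figure quoted above is a simple multiplication, sketched below under the assumption of a 25 fps source and a wheel that repeats the primary sequence ten times per frame (marketed by some vendors as ‘10x colour speed’); actual field rates vary by projector and source format.

```python
# Back-of-envelope for the colour field rate: frames per second
# multiplied by the number of times the primary sequence is shown
# within each frame. Both values here are assumptions for illustration.
frames_per_second = 25
repeats_per_frame = 10

field_rate = frames_per_second * repeats_per_frame
print(f"{field_rate} colour fields per second")  # 250
```

At 250 fields a second, each monochrome image is on screen for four milliseconds, far beneath the threshold at which the eye could register it as a separate image.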

Such accelerations of photographic temporality began, as Paul Virilio writes, from the moment of its invention: “from Niépce’s thirty minutes in 1829 to roughly twenty seconds with Nadar in 1860” (p. 21), and rapidly continued past the frame rate of film projection to now operate habitually at rates far beneath human temporal perception. If celluloid cinema enabled the capture of movement by the intervention of a rotating shutter, fragmenting time into a sequence of freeze frames, then in DLP it is this now historic whole of the individual frame itself whose unity is dissolved: spatially, into pixels, and chromatically (and, as we will see in the next post, temporally), into three successive perceptual primaries.

Before Our Eyes (part 1)

A 2018 BenQ home cinema advert begins with a white middle-aged man (with whom the target market is presumably meant to identify) settling down in an armchair next to his projector to watch three cinematic clips, each with carefully managed near-monochrome colour spaces. The first, captioned BLUE MONDAY, stands for introspection, solitude, and melancholy; the second, RED VALENTINE, for passion, drama, love and loss; the third, GREEN MIRACLE, for the awe of the natural world, as embodied by the aurora borealis, whose cosmic light phenomena BenQ are at pains to analogise with their new digital light processing (DLP) projector. The ad then cuts – in a manner popularised by late twentieth century shampoo commercials – to a computer animation of the internal technics of the projector. This sequence begins with a close-up of the viewer’s eye that quickly fades to a similar perspective on the projector lens. Beams of white light flash across the screen as the camera appears to track back into the machine, falling on a spinning colour disc divided into six segments, two each of red, green and blue (RGB). Moving alongside this disc, the white light is shown as consisting of these three primaries. We cut to a second animation, this time of a digital micromirror chip seen from above; a saturated spectrum of digital light reflects off its surface with an accompanying swoosh, as the earnest voiceover informs us that “only true colours convey the deepest feelings”. At this point the ad cuts back from animated to cinematographic images, now in saturated technicolour, flashing between clichés of strolling through the Casbah, a sunset embrace, playing in autumn leaves, a newborn yawn, a kiss on a window pane. Obscured behind its hackneyed equivalences of emotion and colour, and yet hinted at by the knowing analogy between human eye and projector lens, is a far deeper historical and technical connection between physiology and projection.
As Henning Schmidgen has shown, this connection in fact dates back beyond the invention of cinema to 1872, when German physiologist Johann Czermak pioneered the use of projection in what he called his Spectatorium: “a fragmentary cinematographic apparatus consisting of projector, screen, and rows of seats”. In this mediatised version of an anatomy theatre “cells, tissues and organs functioned in the place of recordings on celluloid” (p. 44). Schmidgen goes on to describe an arrangement of an eviscerated frog’s heart, two mirrors, lenses and a light source that projected an enlarged image of the contracting heart – removed from the frog’s body but still connected to its nerves – onto a screen above the audience.

In the decades that both preceded and followed this anecdotal convergence of projection and physiology, experimental discoveries about human physiology were made by, among others, James Clerk Maxwell and Hermann von Helmholtz, which comprehensively undermined the conception of human sight as objective and transparent, insisting on – and indeed proving – its complexity, subjectivity and flaws. In these posts I will discuss the technical correspondence between the operation of DLP and human visual perception, with a particular emphasis on how contemporary projection has instrumentalised the knowledge of nineteenth century psychophysics, showing how the technical specifications of DLP projection are derived from a history of the empirical measurement and quantification of subjective phenomena. This relationship is emblematic of what Jonathan Crary has described as “the reconfiguration of optical experience into synthetic and machinic operations that occur external to the observing subject” (p. 226). The literal externalisation of the still beating heart in Czermak’s projections precedes a less violent externalisation of sight in the technics of contemporary projection. However, whereas in Czermak’s Spectatorium the frog heart projections served to demonstrate anatomical function through direct visual reproduction, in the case of DLP, knowledge of human physiology is used to ensure that its operation remains imperceptible to its audience. So, while for Czermak projection was a transparent tool of instruction, DLP projection relies on the opacity of its technics to maintain the spectacle of its moving image. The psychophysical discoveries of Maxwell and Helmholtz are inscribed in DLP as a series of chromatic principles and temporal intervals within which certain operations must occur to retain the illusory nature of its image.
Whereas in the nineteenth century projection served to reveal physiological operations, projection now uses nineteenth century knowledge to conceal its operation.