Pixel Mining

From Friday 23rd to Sunday 25th July I will be showing a new video installation at D-UNIT, Bristol:

 

We mine and refine rocks to make pixels glow. Digital electronics now outnumber human beings, and each individual unit uses the vast majority of terrestrial metals in its components. According to the US Geological Survey, 22 billion handheld electronic devices were manufactured in 2014. The LED screens in these devices used 130kg of gallium, 170kg of cerium, 120kg of arsenic and 180kg of lanthanum. If we knew the average number of pixels in each device we could calculate the geological cost per pixel of our screen time. Every five days, the earth observation satellite Sentinel-2 produces another image of every site from which its raw materials were extracted. These images have a ground resolution of 10 metres per pixel, but we can't calculate how many square metres of terrestrial surface were turned over to produce each pixel in its camera. If we could, we might be able to derive a planetary resolution: the total number of pixels the Earth can support.
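As a rough illustration of the calculation gestured at here – and only a sketch, since the average pixel count per device is my own assumed figure, not one reported by the USGS – the arithmetic might run as follows:

```python
# Back-of-envelope estimate of the 'geological cost per pixel' of 2014's
# handheld devices, using the USGS figures quoted above. The average pixel
# count per device is an assumption, not a published figure.

DEVICES_2014 = 22e9           # handheld electronic devices manufactured in 2014 (USGS)
METALS_KG = {                 # metals attributed to their LED screens (USGS)
    "gallium": 130,
    "cerium": 170,
    "arsenic": 120,
    "lanthanum": 180,
}
ASSUMED_PIXELS_PER_DEVICE = 1e6   # hypothetical average (roughly a 1280x800 screen)

total_pixels = DEVICES_2014 * ASSUMED_PIXELS_PER_DEVICE

for metal, kg in METALS_KG.items():
    micrograms_per_pixel = kg * 1e9 / total_pixels   # 1 kg = 1e9 micrograms
    print(f"{metal}: {micrograms_per_pixel:.2e} µg per pixel")
```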

 

Using hacked PC monitors, satellite time-lapse, and found footage of electronics manufacture and recycling, this new video installation connects the flickering screen image with the cycles of extraction that make its appearance possible, asking how many pixels and how much screen time the planet can sustain.

D-UNIT has been initiated by artist Megan Broadmeadow and her partner Ed Metcalf with the aim of providing opportunities for 2020 graduates who were unable to have degree shows, as well as supporting artist-led initiatives from Bristol-based artists and exhibitions for UK-based mid-career artists. They will also be running public workshops in practical work and digital skills over the winter months.

 
 

D-UNIT is located at:

Durnford Street

Bristol

BS3 2AW

 

www.dunit.space | Instagram – d.unit.studios

Current Concerns in Artistic Research

I was recently runner-up for a new job. For the interview I was asked to give a presentation with this title, and as I spent considerable time working on a response I thought I would post the text of the talk here (apologies for the length; I was asked to talk for half an hour):

I find it increasingly difficult to separate the most pressing concerns of artistic research from those of society at large. Looking at my current students' research topics sketches the territory clearly. Among them are students working on, and writing about, the microbial health of the oceans, the surveillance of populations by data collection through smart speakers, and the effects of capitalism on the methods, markets and aesthetics of the arts. These same threads of ecological, technological and political critique seem to recur year on year, with ever more urgency. Over the coming decades we face an unprecedented triangulation of crises that will surely require extraordinary levels of co-operation between people of different cultures and disciplines.

 

Artistic research may not be equipped to provide solutions, but artists continue to engage with and contribute to these debates, to foster dialogue, to visualise possible futures, and to bring that which is obscured to the foreground.

 

Hopefully, faced with such challenges, we can declare the question of what exactly might or might not constitute artistic research to be irrelevant. I seem to have spent much of my professional life on the fringes of such fruitless semantic debates, for example about exactly where to draw the line between music and sound art. And I think we need to take seriously Hito Steyerl's warning that such arguments over inclusion and exclusion in any one discipline become in themselves disciplinary. So – as much as I don't believe that science has a monopoly on truth – I'm very happy to refer anyone still interested in drawing lines between disciplines to Karen Barad's observation that the closer one looks at an edge … the more it disappears, dissolving into a diffraction pattern, oscillating between dark and light, interior and exterior.

Before considering the current concerns of artistic research I would first like to quickly identify one of its strengths, one that has previously been highlighted by Michael Dieter in his writings on Critical Technical Practice: the creation, formation, or articulation of problems. This is of course not the exclusive domain of artistic research. As Dieter reminds us, Foucault considered his writing to be 'an act of thought involving the process of defining a problem', and surely the work of much critical writing in the humanities today continues that tradition. But artistic research is perhaps unique in working with these problems materially, articulating them through practice and therefore often directly engaging with the very materiality that defines the problem in the first place.

 

Holding this fondness for realising problems in mind, I would like to propose that one crucial concern for a discipline with such heterogeneous foundations as artistic research, a discipline whose boundaries must necessarily remain flexible, porous and indistinct, is surely how it negotiates its relationship with other disciplines, both within and beyond academia.

 

Henk Borgdorff's concept of 'boundary work' continues to prove useful in this regard, because as much as artistic research will always be located between art and academia, its knowledge also often inhabits the boundary of another practice, another discipline, another field. If, as Borgdorff has written elsewhere, "an important distinction between art practice in itself and artistic research" is that "artistic research seeks to contribute not just to the artistic universe, but to what we know and understand", then that knowledge and understanding is often targeted beyond the boundaries of what he refers to as the 'artistic universe'. If artistic research is good at framing problems and asking questions, then those problems and questions are often addressed to another sphere beyond the arts. This is perhaps both why researchers outside the arts like to collaborate with artists, and also why others become frustrated by working with artists: because we revel in the creation of problems outside of our own discipline.

 

This concern is not particularly new – the framing of the 2009 Sensuous Knowledge conference at Bergen National Academy of Arts, for example, included the question: "How can artistic research make a meaningful and relevant contribution outside of itself?" – but it is a question that persists today and shows no sign of either abating or becoming satisfactorily resolved just yet.

 

One presumption of the arts that appears to be actively challenged by creative practitioners from a wide range of backgrounds is the ambiguous relationship between the arts and functionality, or maybe more accurately, purpose. We are, it seems to me, at a moment in which decreasing numbers of artists are content with the paradigm of 'raising awareness' of the issues with which their practice engages, while more and more are producing works that seek to operate actively in cultural spheres beyond their own.

From Amy Balkin's Public Smog project, the long-term ambition of which is to have the earth's atmosphere listed as a UNESCO World Heritage Site, to the legal testimonies of Forensic Architecture, artist-researchers are creating work that no longer merely formulates problems, serves as a provocation or publicises its concerns, but instead seeks to actively submit evidence, build a case, propose an alternative or challenge an existing power structure.

 

Examples such as these seem to me to move beyond what Tom Holert identifies as the demand "voiced in various sections of public culture" that artists "work on appropriate, adequate and timely responses to historical events, political change, social crises, or environmental catastrophes". Rather, the demands made by these practices refute the artist's position as simply a 'respondent' to their geopolitical context, invoking in its place a role in which the work of art serves to actually alter that context.

 

Peter Sonderen has said that “artistic research actualises what it wants to show, it makes its knowledge tangible”, but in works like these there remain emphatic aspirations that are not realised, and that are often considered unrealisable, or perhaps even unrealistic.

It is then somewhat ironic that artist, activist and occasional curator Paolo Cirio used the title Evidentiary Realism for a group show encapsulating the work of artist-researchers who investigate, document and "examine the underpinning economic, political, legal, linguistic, and cultural structures that impact society at large". Balkin and Weizman were both included in the 2018 exhibition, alongside work by Suzanne Treister, who exhibited print-outs of documents from the Edward Snowden files, defaced or redacted with doodles that appropriated the graphic content of the original slides to partially obscure the leaks, and Ingrid Burrington, whose lenticular prints overlaid before and after satellite images of locations in which major data centres had been built, evidencing the physical scale and environmental impact of the data storage that we have all come to rely so heavily upon. Alongside these contemporary examples were what might be thought of as historical precedents for such research-based evidential practices, exemplified by the work of Hans Haacke, Mark Lombardi or Harun Farocki.

The controversy surrounding Cirio’s own most recent project Capture, which was censored prior to the opening of the exhibition Panorama 22 in France, exposes the difficulties of producing work on the boundary between art and politics. The work consists of a collection of widely available press and social media images of the faces of French riot police officers, processed by facial recognition software and then pasted both on the interior walls of the gallery and exterior walls throughout the city. The project is intended to highlight the danger to privacy represented by facial recognition, and is accompanied by a provocative online platform that proposes to crowd-source the officers’ identities. Cirio adopts the now familiar strategy of inverting the gaze of such technologies back upon the authorities who usually wield them.

 

The controversy surrounding the work and its subsequent censorship highlights the fact that when the research questions posed by artists raise implications beyond their own discipline, the consequences can also extend beyond the control of cultural institutions. In this case it is too soon to know whether the outrage and demands by the French Interior Minister to withdraw the work from the show will eventually serve Cirio’s own aim to challenge the increasing use of facial recognition systems, or are merely a demonstration that such inversions of existing power structures will never be tolerated. For the artist stepping beyond their discipline into a political arena, there can also be disciplinary consequences.

Stepping back to consider the relationship between a project such as this and research in other disciplines, I am struck by how often the agenda of research in engineering, technology and the sciences has – intentionally or otherwise – established possibilities, protocols and systems which end up becoming embedded in society at large. The streaming of this talk, and in fact the vast majority of university lectures this semester, is made possible by two research projects from the 1960s: one at the University of Southampton, which pioneered the transmission of data in fibre optics, and another at Bell Telephone Laboratories, which invented a rudimentary image sensor capable of digitally encoding the incident light on its surface. We all carry the outcomes of innumerable research projects in these fields in our pockets, and produce critical artworks or write theoretical tracts about their societal impact either too late or from too marginal a position to have an impact on their widespread adoption.

 

It will doubtless sound like what in business talk is referred to as blue-sky thinking – which is also surely not so far from having one's head in the clouds – to think that an artistic research project could ever realise such widespread impact. But nevertheless one of my questions today about the future of artistic research is: how might we develop mechanisms or means for its knowledge and understanding to be put into action, for the problems which it formulates to become part of our shared social discourse?

 

Another question that I believe remains unresolved is how exactly to make use of the position of artistic research within the academy or university. Now that it has become institutionally accepted that artistic projects can constitute research, might it be possible to leverage this privilege into some actual influence? And if one of the strengths of artistic research lies in its ability to formulate problems outside of itself, then might it be possible to cluster around those problems a transdisciplinary team of researchers, practitioners or experimentalists who between them have the expertise, facilities and resources to adequately address them?

Interdisciplinarity itself is also certainly rife with the familiar difficulties brought about by collaborations in general and the conflicting interests and frames of reference that arise when people from different backgrounds work together. This has been highlighted by a current artistic research project at Central Saint Martins in London. Manifest Data Lab is a transdisciplinary research group “employing climate data within critical arts settings”. The project aims to provide a visual imaginary of climate change that is “capable of accounting for how the planet and its climate functions as a set of connected material, social and cultural relations within which we are implicated”.

The first in a series of slides mapping the problematics of art, data and climate states: "artists illustrating science rather than imaginative transformations of climate knowledge", highlighting a particularly intransigent issue that was also identified by Hans-Jörg Rheinberger almost a decade ago. As he puts it, art-science collaborations have often been "nurtured on the part of sciences, mainly in the name of renewing understandings of science". Indeed, in my own experience of such collaborations, scientists often seem unaware of – or surprised by – the ability of the arts to formulate and address many of the same questions that inform the ethics and ambitions of their own discipline.

 

The expectation that hiring an artist-in-residence will increase public engagement with – or comprehension of – your scientific research outcomes seems exemplified by a recent call from the Sinfonia research project at the Center for Biosustainability of the Technical University of Denmark. Their specification that a musician or composer is "especially welcome" to apply conveniently aligns with the project's YouTube explainer, which relies heavily on musical metaphors of cellular harmony to argue the benefits of their synthetic biological methodology.

To break out of this pattern it might be necessary to develop the current model of the artist-in-residence, in which an individual artist is embedded in a discipline or organisation to produce work responsive to that context. Within this model there exists a structural imbalance between the organisation – which is always in the role of the host and sometimes also that of the funder or commissioner – and the artist, who is bound by the etiquette of the guest, is usually also grateful for the opportunity, expenses or fee, and may also be isolated, immersed in a practice or disciplinary culture which is alien to them.

 

A precedent from before the time of artistic research is perhaps instructive here. The Artist Placement Group, conceived and founded by Barbara Steveni in 1965, arranged long-term placements for artists in various industries and government departments in an explicit attempt to "shift the function of art towards decision making". Its ground-breaking activities throughout the 1970s are often cited as establishing the model of the artist-in-residence that is now so familiar to us. As John Walker wrote in 1972, "the Artist Placement Group's position was one of realism: in the present society it is decision-making that counts, and therefore the greatest hope for change resides in the attempt to influence decision-makers". This hope is, I believe, the same as that which motivated Amy Balkin to send 90,000 signed postcards to Germany's Minister for the Environment in 2012. And it is the same hope which motivates the transdisciplinary team of researchers that make up Forensic Architecture to prepare meticulous reports into state-sanctioned atrocities.

 

Perhaps the model of the solitary artist-in-residence – striving to articulate problems in other disciplines in which they have little expertise, while surrounded by experts – is not one capable of delivering this influence. This is not intended to discredit the impressive legacy of APG's pioneering work, but to say that perhaps we need to look to other models of transdisciplinary collaboration if the research agenda of the arts is to be taken seriously beyond its own boundaries.

 

How else then might we think of the interaction between disciplines? While one obvious alternative would be to formulate research agendas in a transdisciplinary context in the first place, I would like to suggest that perhaps the model of "co-inquiry" articulated by curator Nicola Triscott, founder of the Arts Catalyst and now director of FACT Liverpool, might be more fruitful. According to Triscott, this model "enables different types of inquiry to work side-by-side, to cooperate rather than demanding collaboration which requires a continued attempt to construct and maintain a shared conception of a problem".

 

The desire of artistic research to have an impact on decision-making brings us back again to the evidential role played by some contemporary practices, because – as Susan Schuppli has said – "the notion of evidence has become crucial under the conditions of climate change and global warming, because one requires evidence in order to make a political claim and to influence environmental policy or political decision-making". Schuppli's practice, and writing, is to my mind particularly pertinent here, because in reframing the legal-linguistic term "material witness" in relation to artistic research she locates the evidential as a capacity of the material.

 

For Schuppli, "Materials record, capture and carry traces of external events, and can be scrutinised and unfolded to produce some kind of history, sometimes even a counter narrative". In her own practice this capacity is demonstrated most recently through her project Learning from Ice, in which she has been working with ice core scientists who use the tiny bubbles of air trapped in an ice core to map the historic changes in the quantities of atmospheric carbon dioxide – in this example, as Schuppli says, "the thing itself is captured by the materials". Ice then carries an irrefutable testimony in its very materiality, one which connects to theoretical debates in artistic circles around indexicality and material truths.

But examples such as this might also be seen by some artists as placing demands upon artistic research that move the field beyond its traditional concerns, or even as implying that it is only through meeting the requirement for evidence which Schuppli cites that artists can contribute to such debates. It is certainly not my intention to suggest that the only way in which artists can make an epistemic contribution is through this sort of documentary practice, so I would like to close by briefly discussing a work which – to my mind – contributes equally to ecological debates, but through less earnest and more speculative means.

 

In their collaborative project Asunder, the conception shared by Tega Brain, Julian Oliver and Bengt Sjölén is of network technologies being diverted from their current disturbingly authoritarian, extractive and accumulative practices to face the environmental challenges of a changing climate. At the heart of their installation for transmediale 2019, a supercomputer analysed satellite, climate and geological data to generate geoengineering plans for various terrestrial regions before simulating these possible futures. On the one hand the project seems to propose a viable technological solution to repairing environmental damage, by putting to the task an algorithmic intelligence trained on our communal knowledge of climatology.

But in the absurdity of some of the solutions generated – including for example the straightening of coastlines and re-routing of rivers – it also demonstrates a healthy dose of scepticism about what the reality of such a system could entail. The project poses a plausible scenario in which artificial intelligence is used to inform environmental planning while simultaneously pointing to its likely pitfalls.

 

In extrapolating from current trends in machine intelligence and applying them to planetary problems, the artists pre-empt a speculative science, but also embed its critique within its prototype. It seems to me that this capacity to poke fun at one’s own creations, to problematise solutions while you are working on them will be indispensable if we are to envision and implement new relationships between biosphere and technosphere. And that artists should always be part of those conversations.

[some brief thoughts on] Semiconductor Supply Chains

As part of my ongoing Earth Art Fellowship at Bristol University I have been trying to research what raw materials might be found in the two iPhone 5s that we have been slicing up and melting. This in itself is a near-impossible task, as Apple are keen to obscure the details of their silicon and mirrors: the now widely available PDF of the PCB layout that I am using to locate possible raw materials is labelled 'Foxconn Confidential' in the top left. Luckily such secrecy breeds curiosity and we are awash in teardowns identifying the parts and functions within this schematic. But even armed with knowledge of the manufacturer, function and chip-code of each semiconductor, working out the materials used, their proportions and origins is a far harder task given the deliberate opacity of smartphone supply chains. For now I just want to make two quick observations based on what I have found out so far.

The iPhone 5 used the then-new Apple A6 chip as its central processor. This chip, Wikipedia tells us, was among the first to use a 'high-k dielectric' material in place of the usual silicon dioxide gate insulator. Delving further, it seems that the principal benefit of this material is that it enables 'further miniaturisation'. (This could be considered somewhat ironic given that ever since the release of the iPhone 5, Apple's subsequent smartphones have all got larger.) This miniaturisation is – like much of the functionality of contemporary digital media – reliant on rare metals: in this case it is speculated that the A6's dielectric layer is Hafnium-based. Hafnium occurs in heavy mineral sand deposits, usually in beach environments such as those of Western Australia and South Africa, where it exists in solid solution with Zirconium. Hafnium is produced as a by-product in the refinement of the high-purity Zirconium required by the nuclear industry for the outer cladding of nuclear fuel rods. Current production of Hafnium is approximately 70 tonnes per annum, but the increasing shutdown of nuclear reactors globally is likely to hinder the growth of the Hafnium market. The miniaturisation of consumer electronics is therefore incidentally entangled with and reliant on the nuclear energy industry.

At the bottom left of the rear side of the iPhone PCB we find a chip called the Skyworks 77352-15, the precursor to this current chip. This chip amplifies global satellite signals and is based on an Indium Gallium Phosphide (InGaP) substrate. Indium has become synonymous with contemporary technology, as it is a vital component of both touchscreens and solar cells, both of which are coated with Indium Tin Oxide (ITO). If, as both Marinetti and YoHa have contended, Aluminium was the defining metal of modernity for the twentieth century, then surely the conductivity and transparency to the visible spectrum of ITO make it a leading contender for the defining substance of our technological present.

Indium is also produced as a by-product of a larger refinement process, this time during the production of Zinc from the mineral Sphalerite. Known indium reserves are estimated to be 15,000 tonnes, although the true figure is likely to be considerably higher; as with Hafnium, its availability is limited by the cost of its production. Recycling Indium from end-of-life devices currently accounts for less than 1% of global production. In recent years numerous scientific papers have shown that the Indium in ITO can be reclaimed from solar cells and LCD displays by crushing them to millimetre-sized particles, which are then soaked in an acid solution from which the Indium can be recovered electrolytically. However, this process has not yet been implemented at a scale sufficient for the mass recycling of Indium, largely because current price levels have not 'justified' the recovery of Indium from laptops, phones, and other e-waste. It is estimated that the price of Indium would need to exceed $700/kg to make recovery from end-of-life devices 'profitable'. The myopia of the marketplace again takes precedence over an economy of means and materials. Once again the abstract numerical economy outweighs the material, planetary ecology on which, as even this brief foray into one commodity demonstrates, it is entirely reliant.

Petrified Media

Micrograph of fragment of molten iPhone 6, heated to 1500ºC

If one of the potential markers of the Anthropocene in the strata record of the planet will be the concentration of CO2 in the atmosphere, then – as Kathryn Yusoff points out – this marker has a cyclical fossilisation. It is the discovery and combustion of fossil fuels that has enabled the massive expansion of population and consumption over the last 200 years. The burning of fossils petrified over the millions of years since the Carboniferous period has caused the CO2 spike whose effects we are now experiencing. And many of the technical and scientific discoveries that are emblematic of modernity are founded on the energetic intensity of this combustion – including the high temperature furnaces required for both volcanology and semiconductor manufacture. The extensive physical traces that we will leave in the sedimentary record of the planet have only been possible due to an equally extensive extraction and consumption of carboniferous fossil fuels from Earth's deep past. As Yusoff writes, "in unearthing one fossil layer we create another contemporary fossil stratum that has our name on it".

Contemporary geologists have begun to categorise these speculative future fossils according to the ichnological system used by palaeontologists. Using this system, habitation traces are termed domichnia, locomotion traces repichnia, feeding traces fodichnia, and so forth. There are however several categories of trace that will be left by human habitation that do not translate directly onto existing ichnological classifications. Jan Zalasiewicz, for example, proposes the category of frivolichnia to stand for pleasure traces: "Think of it: cinemas, sports stadiums, parks, museums and art galleries, theatres, gardening centres…". But what of our media? How might we classify the many technical objects that humans have invented and used for the purposes of recording, communicating, and computing? If we are to follow this method of categorisation by purpose or function then we can hardly reduce the many social, commercial, and cultural functions fulfilled by such devices to simply pleasure. A further expansion of such categorisations might then include commichnia for communications media or compichnia for computational media. And it is with the petrification of these devices within the strata record of the Anthropocene that I am primarily concerned here, the sedimentary accretion of which Jussi Parikka describes as "piling up slowly but steadily as an emblem of an apocalypse in slow motion".

Zalasiewicz has spent several years working (more broadly) on this very question as part of his role in the Anthropocene Working Group (AWG), and across various articles makes several observations that are of relevance to an attempt to speculate on the future fossilisation of contemporary electronics. He notes for example that "humans produce artefacts from materials that are either very rare in nature or are unknown naturally". These novel or highly refined materials exist in our media in concentrations and combinations not found in nature, and it is reasonable to assume that the "anthropogenic lithologies" that they will petrify into will be no less extraordinary. Take, for example, the smartphone, which a recent geological research project at the University of Plymouth found to contain such a vast array of metals and minerals that they merit listing: iron, silicon, carbon, calcium, chromium, aluminium, copper, nickel, tin, indium, germanium, antimony, niobium, tantalum, molybdenum, cobalt, tungsten, gold, silver, dysprosium, gadolinium, praseodymium, and neodymium. How might such a densely packed combination of rare chemical elements petrify if buried, either in landfill or by slow underwater sedimentation?

The key variables, Zalasiewicz et al. inform us, are moisture, temperature, oxygen content and pH. In the example of landfill, the human propensity to dispose of rubbish in plastic bags produces numerous micro-environments within the lining that surrounds the whole.

Placed in a bag with discarded food, a watch will soon stew in acid leachate and may corrode away completely. However, if placed together with some discarded plaster or concrete it could rapidly become encased in newly crystallised calcium carbonate. (legacy of the technosphere) 

How the plastic casings, printed circuit boards, glass screens, ceramic and metallic components of contemporary media will fare under these myriad subterranean chemical conditions is likely then to be almost as variable as the obscene diversity of brands and model numbers under which they are now manufactured. Some percentage of the plastics and polymers may, in the right conditions, ultimately percolate through the surrounding rock to form new oilfields. Some of the metals may erode fairly quickly, oxidise and recombine with other surrounding minerals, while others, particularly stainless and other industrially hardened types of steel, may well last long enough to leave an inscription of their shape in the surrounding rock. But one of the most intriguing possibilities lies in the omnipresent silicon microchip, or integrated circuit, which has become the defining component of our contemporary media. Silicon and quartz – which Zalasiewicz describes as "chief" of the most resistant minerals – are remarkably inert: most acids do not attack them and they defy most chemical weathering. There is then a tantalising possibility that a significant number of these chips could survive the extremes of pressure and temperature. Furthermore, given that microscopic details of graptolites have been preserved in the process of fossilisation, might the microelectronic paths of some of these chips retain or impress their form in the surrounding lithosphere through deep time?

These microscopic details of graptolite structures are retained due to the formation of pyrite – otherwise known as fool's gold – inside the hollow spaces left by their skeletons. Pyrite, Zalasiewicz informs us, "tends to form in subsurface cavities … often filling the entire space to create perfect replicas of their interior". Once pyritised, these structures are remarkably resilient, surviving the extreme pressures through which mudrock transforms into slate. So, although pyrite weathers away once exposed to oxygen and water, the cavity remains intact. Commenting on which contemporary urban detritus might be candidates for pyritisation in the coming millennia, Zalasiewicz includes: "the interiors of any of the myriads of tiny metal and electronic gadgets that we now produce in their millions … for these in themselves contain iron, one of the ingredients of pyrite". According to the experiment referenced above, iron in fact accounts for the largest proportion of a current smartphone – 33 grams – so, as Zalasiewicz concludes, "part of the detritus of human civilisation will certainly bear the sheen of fool's gold".

According to recent geological expertise then, there is a significant chance of our current media persisting as petrified traces of our technological culture. While the apt poetic irony of the term fool's gold will not survive through deep time, it seems likely that the media technological trinkets of the present will, perhaps in the form of polished rectangular pebbles of improbably pure silicon surrounded by a glistening pyritised cavity. If such a fossil is ever unearthed millions of years hence, then the folly of its mass production and visual appeal might well be legible in its coincidence with the dramatic increase in CO2 levels and its concomitant impact on the biosphere. As Sy Taffel writes: "technofossils leave curious material traces whose geological appearance will be accompanied by a major reduction in global biodiversity, the sixth mass extinction event in the stratigraphic record".

Thermocultures of Volcanology

I have recently started an Earth Art Fellowship with the School of Earth Sciences at Bristol University, alongside a group of volcanologists working on what is known – in shorthand – as the DisEqm project. DisEqm stands for Disequilibrium, which I am told is a relatively new concept in volcanology, and one which marks a radical break with previous laboratory models of volcanic eruptions. Those models were based on measurements taken during 'equilibrium' conditions, and are therefore irrelevant to modelling conditions during an eruption, when all of the variables of temperature, pressure, viscosity and so on are in constant flux: disequilibrium.

The team at Bristol have spent the past three years building a high temperature, high pressure (HTHP) rheometer. A rheometer is a device that quantifies the viscosity of a liquid by measuring the torque required to stir it. The challenge in this instance is to build an apparatus capable of stirring a tiny sample of magma that has been heated to temperatures as high as 1400˚C and pressurised to the equivalent of magma 6km beneath the earth's surface. What quickly becomes apparent from hearing about their progress is the extent of the artifice required to synthesise these conditions. In volcanological laboratories pressure and scale are inversely proportional: the higher the pressure you wish to emulate, the smaller your sample has to be – for the simple reason that large samples at high pressure are potentially extremely powerful explosives. In this case their sample is just 6cm long and a few millimetres wide. But to work at conditions equivalent to the earth's core, for example, your magma sample must be squashed into a space just a few microns across between two diamonds. Processes that occur in a subterranean layer more than 2000km thick are modelled in laboratories on the area of a single pixel of your screen.
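For readers unfamiliar with the principle, the textbook relation for a rotational (concentric-cylinder) rheometer in the narrow-gap approximation gives a sense of what is being measured – this is a generic sketch, not the specific geometry or calibration of the Bristol instrument:

```latex
% Narrow-gap (Couette) approximation for a rotational rheometer:
% a spindle of radius R_i and immersed length L rotates at angular
% velocity \omega inside a cup of radius R_o, and the torque M is measured.
\[
\tau \approx \frac{M}{2\pi R_i^{2} L},
\qquad
\dot{\gamma} \approx \frac{\omega R_i}{R_o - R_i},
\qquad
\eta = \frac{\tau}{\dot{\gamma}} \approx \frac{M\,(R_o - R_i)}{2\pi R_i^{3} L\,\omega}.
\]
```

The more viscous the melt, the more torque is needed to sustain a given rotation rate; much of the apparatus exists simply to keep those geometric quantities known and stable at 1400˚C.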

In her essay on the 'Thermocultures of Geological Media', Nicole Starosielski uses the example of thermal image sensors composed of pure germanium doped with mercury, whose sensitivity to infrared frequencies is used in the geological remote sensing of minerals in the earth's crust. To render these thermal images the sensor itself must be "cooled to between −243.15 degrees and −196.15˚C… The stabilisation of the thermal environment … in turn enables the remote detection of temperature". Although a measurement of temperature is not the experimental goal, a similar dynamic is at work in the operation of the HTHP rheometer. To measure the torque required to stir pressurised magma without simultaneously melting your measuring apparatus requires several means of thermal control, primarily through insulation and water-cooling, but also a physical discontinuity between sample and instrument. The magma sample must be pressurised and heated to 1400ºC; the electronics measuring the torque, however, are required to remain at room temperature and atmospheric pressure. So, while in a traditional rheometer the spindle stirring the liquid is the same as that used to measure torque, here the sample must be stirred magnetically to prevent the conduction of heat through the spindle.

Overheating is a common problem in technical apparatuses. The central processing chip of a computer can approach temperatures of 100˚C while performing CPU-intensive tasks. To mitigate these extremes of temperature, which would otherwise crash software and permanently damage the chip, a heatsink and fan are clamped against it using thermal paste to ensure the efficient transfer of heat out of the silicon into the aluminium. Most heatsinks used in consumer electronics are cast from pure aluminium, the quintessential metal of contemporary technologies, and one with good thermal conductivity. This thermal relationship between silicon and aluminium in electronic circuitry is mirrored in the volcanology laboratory. The viscosity of magma samples is governed by the proportion of silicon dioxide (SiO2) they contain, and the crucibles in which these samples are melted are made of alumina (Al2O3). Computation extracts pure elements from raw ores, refining rocks in order that they can micromanage electron flux, process data, or record an image. But in exploiting their thermal and electrically semi/conductive properties it inevitably imitates lithic processes. The abstractions of computation are as reliant on the properties of the minerals from which they are made as they are on the cultural manipulations performed upon those substances. The chemical properties of conductivity, photosensitivity, and inscription play out geologically in earth processes just as they do technologically in media processes.

Photography, Radiation & Robotics Beyond the Visible: Fukushima

While researching instances of cameras exposed to radiation during my PhD, I spent a long time combing through the media archive of the Tokyo Electric Power Company (TEPCO) which contains a vast repository of video from the investigations and attempted clean-up of the Fukushima plant. I quickly became fascinated by the videos from the interior of the Primary Containment Vessel in Reactor 2. 

Following the completion of the PhD I decided to write something about this archive and its relation with (in)visibility. That essay has just been published as part of a special issue of the online journal Continent on Apocryphal Technologies. It is available here:

http://continentcontinent.cc/index.php/continent/article/view/330

 

Before Our Eyes (part 3)

Lost Time and the Artificial Present

For such a system to succeed, the speed of our nervous impulses must be exceeded by the rate of the stimulus. In DLP systems two distinct frequencies combine, both well above the temporal resolution of human sight. The colour wheel revolves at a frequency of approximately 120 revolutions per second, while the micromirrors on the DMD chip dither at a frequency near 10,000Hz. When media technical operations so routinely outstrip human temporal resolution, the instantaneity so hard sought by the photographic industry during the twentieth century loses its meaning. The appearance of an image on the screen of a digital camera is now fast enough to be commonly described as instantaneous, at least with reference to our perception, yet it conducts many operations of correction, optimisation, reduction, and compression on each image before it is displayed on the screen. Even ‘an instant’ has become an interval capable of being instrumentalised by image processing algorithms.
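To get a feel for the timescales involved, here is a rough calculation using the figures above – the six-segment wheel is taken from the BenQ animation described in part 1, and the ~60Hz figure for human temporal resolution is a commonly cited approximation rather than a hard threshold:

```python
# Rough timings implied by the figures quoted above. The six-segment wheel
# comes from the BenQ animation described in part 1; the ~60 Hz flicker-fusion
# threshold is an approximation for human vision, not a fixed law.

WHEEL_REV_PER_S = 120          # colour wheel revolutions per second
SEGMENTS_PER_REV = 6           # two each of red, green and blue
DMD_DITHER_HZ = 10_000         # approximate micromirror switching rate
FLICKER_FUSION_HZ = 60         # rough threshold of human temporal resolution

colour_field_s = 1 / (WHEEL_REV_PER_S * SEGMENTS_PER_REV)   # one colour segment
mirror_period_s = 1 / DMD_DITHER_HZ
perceptual_window_s = 1 / FLICKER_FUSION_HZ

print(f"one colour field lasts {colour_field_s * 1e3:.2f} ms")
print(f"one mirror state lasts {mirror_period_s * 1e3:.2f} ms")
print(f"one perceptual window of {perceptual_window_s * 1e3:.2f} ms "
      f"spans roughly {perceptual_window_s / mirror_period_s:.0f} mirror states")
```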

The micro-temporality of these technical operations is also predicated on a physiological understanding of human perceptual response established in the nineteenth century by Helmholtz's measurements of stimulus and response. Prior to these experiments, nerves were presumed to transmit stimuli instantaneously around the body. Contrary to this presumption, Helmholtz "aimed at investigating this alleged instantaneity more closely and, if possible, to define it more precisely" (p. 61-2). To conduct this research Helmholtz first constructed an apparatus assembled from a sample of frog muscle, a rotating cylinder and a steel stylus (see image below). When the muscle was stimulated with an electrical impulse, its contraction caused the stylus to inscribe a curve in a soot-coated transparency that was wrapped around the clockwork-driven brass cylinder. From these curves it was possible to observe, and indeed measure, for the first time, a gap between sensation and resulting movement – cause and muscular effect – a gap which Helmholtz figured as temps perdu. Helmholtz's subsequent experiments with human subjects measured a surprisingly consistent delay between stimulus and response of between 0.12 and 0.20 seconds. His repetition of these experiments in different areas of the body led him to conclude that "in humans the 'message of an impression' propagates itself to the brain with a speed of circa 60 meters per second" (p. 144). The limit speed of lived experience was revealed and defined by a machine that hybridised the mechanical with the organic, stimulating the latter with electricity. Such precise measurements of physiological time were only made possible by the twin technics of clockwork and the electrical telegraph: time had to have been mechanised and the body conceived as a network of electrical impulses before the duration of human nervous impulses could be measured. Media again precedes the mechanistic understanding of physiology.

In the context of digital technologies this temps perdu, the lost time of bodily reaction, has also become externalised, in an array of buffers, caches, and shift registers that all serve – be it in an operation of image capture, video playback or networked communication – to delay the materialisation of the instant in temporary stasis while it is archived or resynchronised by the time signature of the machine. And, due to the wide discrepancy between embodied temporalities and media-technical frequencies, these momentary delays are opportunities for further computation, or as Wolfgang Ernst puts it: "suspended in memory, time becomes mathematically available" (p. 28). To a chip whose clocking frequency is 10,000Hz, even the fastest possible human response time of 0.1 seconds represents a significant opportunity: a thousand clock cycles. The psychophysical quantification of a lag between stimulus and response enables the acquisition of the ephemeral by the logic of the machine. It is within this temps perdu that the processes of encoding, optimisation and compression are all achieved. As Florian Sprenger writes: "the fact that transmissions are constantly interrupted means that they are never completed in putative real-time … and that we have no direct access to the world we are connected to" (p. 20-1). Experience is extracted into memory before it registers in the mind.

What does it mean for an image to be instantaneous when it is routinely manipulated in advance of being seen?  What is our experience of time when these operations are continually occurring in an imperceptible buffer before the screen? This is neither the time of the phenomenological present, nor the time of the live electronic broadcast, but time dissected, quantized and reconstructed in pre-instantaneous moments before our very eyes. For Ernst this means that “computing dislocates the metaphysics of the pure present to a micro-deferred now” (2018: 35). As Ernst shows in Chronopoetics, synchronicity was vital to the time-image of electronic television, but in the individualised playback of digital media synchronicity dissolves into myriad individualised timelines whose buffers and connectivity resynthesise the impression of synchronicity on demand. The live has been replaced with the live-like, a parallel temporality that slips in and out of sync with the now, in and out of sync with its soundtrack, in and out of sync with others.

In his analysis of The Helmholtz Curves, Schmidgen analogises Helmholtz's method to photography, noting that these experiments both "cropped a specific part of reality in the lab" and "defined their own temporality", which Schmidgen calls an artificially created present (14), a temporality extracted from the conditions of the real in order that it might be measured. Conditions that were necessary for the precise study of bodily time are now replicated in media technical temporalities which capitalise on the relatively sluggish human physiological response times measured by Helmholtz under those same conditions. The artificial temporality of an experiment that revealed the durations of perceptual signals is now reproduced by one that capitalises on precisely those durations to construct the visible in advance of perception. Digital media recreate this artificial present anew every time we press play. Between the 'stream' of conscious experience and the 'streaming' of digital media lies a concatenation of technical processes of artificial colourimetry and temporalisation.

Duration and spectrum are not directly experienced, but recreated from micro-temporal and mono-chromatic fragments, re-synthesised afresh for each viewer. How do these media re-temporalisations of 'the live' and 'the present' re-model our own temporal perception? In media environments that are optimised for the individual, where search results, adverts and content are all tailored to our preferences, where 'timelines' are personalised, do we still inhabit time communally? To be con-temporary is literally to be in-time-with, but what happens to communal experience of time when we are no longer in sync with our contemporaries?

Before Our Eyes (part 2)

Psychophysics of Colour

To reproduce a single colour frame of a moving image, a DLP projector overlays three discrete images in quick succession, their output synchronised with the motion of a filter wheel divided equally into segments of red, green and blue, the three primaries that correspond with the colour sensitivities of our retinal cones. From a technical perspective the full colour image that we perceive never exists, but is only created in the audience's perception by additive colour synthesis. From the perspective of the machine there is only a sequence of distinct red, green and blue images, whose intensity is micro-managed at the level of the individual pixel. Colour, as experienced in both DLP projection and unmediated human perception then, is never 'true' (as BenQ claim), but always a technical construction.
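A minimal sketch of what additive synthesis amounts to computationally, assuming three already-separated monochrome intensity fields standing in for what the DMD displays during each wheel segment (the array sizes and values below are arbitrary placeholders):

```python
import numpy as np

# Three monochrome fields, one per wheel segment: per-pixel intensities
# between 0.0 and 1.0 for red, green and blue respectively.
height, width = 4, 4
red_field   = np.random.rand(height, width)
green_field = np.random.rand(height, width)
blue_field  = np.random.rand(height, width)

# The 'full colour' frame never exists inside the projector; stacking the
# three fields here models what additive synthesis produces in the viewer's eye.
perceived_frame = np.stack([red_field, green_field, blue_field], axis=-1)

print(perceived_frame.shape)   # (4, 4, 3): one RGB triple per pixel
```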

Through processes such as this, the production and reproduction of the digital image is founded on an externalisation of our perceptual faculties. Digital image technologies are designed so explicitly to be seen that their technical specifications not only reflect but directly imitate the anatomical construction and perceptual effects of human vision. The pixelation of a digital micromirror device (DMD) reproduces on an optoelectronic grid the mosaic of cones lining the retina, while the colour wheel enforces a trichromatic filtering that targets their colour sensitivities. We can therefore conceptualise the optical mechanisms of a DLP projector as an attempt to build a projecting eye, a luminous electronic retina radiating colour onto the surfaces of its environment.

The optical principles on which this mechanism is based originate in the trichromatic theory of vision, hypothesised by Thomas Young in 1802 and subsequently proven through the psychophysical experiments of Hermann von Helmholtz and James Clerk Maxwell. The colour triangle, initially posited by Young (below, left) to describe colour spatially as created between the three poles of red, green and – as he supposed – violet, has now become a standard means of measuring the colour gamut of various display technologies, in which different technical standards can be described as differing sizes of triangle within the complete perceptual colour space circumscribed by the CIE system (below, right).

This chromatic space postulated by Young was subsequently mapped empirically by Maxwell. To conduct his experiments, Maxwell constructed a handheld wheel (below, left) onto which could be clipped overlapping discs of different colours. The wheel was then spun fast enough that the colours mixed together in the perception of their observer, in much the same way that the discrete frames of a moving image appear as continuous motion. Using this simple instrument, Maxwell was able to quantify the perceptual effects of different ratios and combinations of the three primaries. In so doing, Maxwell ascribed numerical values to the proportions of vermillion, emerald and ultramarine used to achieve different tones, shades and hues, producing a series of discrete values within a field of subjective experience that had previously been understood as a continuous spectrum. To quantify colour in this manner can be understood as a kind of proto-digitisation, and Maxwell's method prefigures the numericalisation of colour gamuts in media technical standards, from the 216 'websafe' colours to the considerably wider gamut of 16 million colours that can be coded in a six-digit RGB hex code.
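The palette arithmetic behind those two figures is worth making explicit – a small sketch, in which the specific RGB values passed to the encoder are illustrative rather than Maxwell's own measurements:

```python
# The 'web-safe' palette allows six levels per primary; a 24-bit hex code
# allows 256 levels per primary.
websafe_colours = 6 ** 3         # 216
truecolour_colours = 256 ** 3    # 16,777,216

def to_hex(r, g, b):
    """Encode proportions of the three primaries (0-255 each) as a
    six-digit RGB hex code."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(websafe_colours, truecolour_colours)
print(to_hex(227, 66, 52))       # a vermillion-like red (illustrative values)
```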

Maxwell's conclusion from these perceptual experiments – "that the judgment thus formed is determined not by the real identity of the colours, but by a cause residing in the eye of the observer" – established human vision as a manipulable system of perceptual limitations. This psychophysical conception of sight as fallible and slow relative to mechanical motion persists throughout our contemporary media environment, and is the foundation on which all moving image technologies rely. And – in the colour filter of DLP projectors (below, right) – Maxwell's colour wheel persists today as a techno-chromatic mechanism of externalised sight. A spinning disc originally used to measure the chromatic operation of human vision has now become a central component in the reproduction of projected colour. The dissolving of biological sight into its trichromatic primaries was diagnosed by the exact same mechanism that now resolves those colours before us.

In 1855, when black and white photography was still in its experimental infancy, Maxwell proposed a system for producing a colour photograph. By photographing the same scene through three separate red, green and blue filters and then, using magic lanterns, projecting each result through its respective filter on top of one another, he hypothesised that a full colour image could be produced. This process was successfully demonstrated six years later, creating a now much-reproduced image of a tartan ribbon. In DLP projection each frame of the moving image replicates exactly Maxwell's process of additive colour synthesis, combining three discrete monochromatic images in the audience's perception. Maxwell's trichromatic system of projection is now automated by contemporary cinema to occur, in some systems, as often as ten times for every frame, or 250 times a second.

Such accelerations of photographic temporality began, as Paul Virilio writes, from the moment of its invention: "from Niépce's thirty minutes in 1829 to roughly twenty seconds with Nadar in 1860" (p. 21), and rapidly continued past the frame rate of film projection to now operate habitually at rates far beneath human temporal perception. If celluloid cinema enabled the capture of movement by the intervention of a rotating shutter, fragmenting time into a sequence of freeze frames, then in DLP it is this now historic whole of the individual frame itself whose unity is dissolved: spatially into pixels and chromatically – and, as we will see in the next post, temporally – into three successive perceptual primaries.

Before Our Eyes (part 1)

A 2018 BenQ home cinema advert begins with a white middle-aged man (with whom the target market is presumably meant to identify) settling down in an armchair next to his projector to watch three cinematic clips, each with carefully managed near-monochrome colour spaces. The first, captioned BLUE MONDAY, stands for introspection, solitude, and melancholy; the second, RED VALENTINE, for passion, drama, love and loss; the third, GREEN MIRACLE, for the awe of the natural world, as embodied by the aurora borealis, whose cosmic light phenomena BenQ are at pains to analogise with their new digital light processing (DLP) projector. The ad then cuts – in a manner popularised by late twentieth century shampoo commercials – to a computer animation of the internal technics of the projector. This sequence begins with a close-up of the viewer's eye that quickly fades to a similar perspective on the projector lens. Beams of white light flash across the screen as the camera appears to track back into the machine, falling on a spinning colour disc divided into six segments, two each of red, green and blue (RGB). Moving alongside this disc, the white light is shown as consisting of these three primaries. We cut to a second animation, this time of a digital micromirror chip seen from above; a saturated spectrum of digital light reflects off its surface with an accompanying swoosh, as the earnest voiceover informs us that "only true colours convey the deepest feelings". At this point the ad cuts back from animated to cinematographic images, now in saturated technicolour, flashing between clichés of strolling through the Casbah, a sunset embrace, playing in autumn leaves, a newborn yawn, a kiss on a window pane. Obscured behind its hackneyed equivalences of emotion and colour, and yet hinted at by the knowing analogy between human eye and projector lens, is a far deeper historical and technical connection between physiology and projection. As Henning Schmidgen has shown, this connection in fact dates back beyond the invention of cinema to 1872, when German physiologist Johann Czermak pioneered the use of projection in what he called his Spectatorium: "a fragmentary cinematographic apparatus consisting of projector, screen, and rows of seats". In this mediatised version of an anatomy theatre "cells, tissues and organs functioned in the place of recordings on celluloid" (p. 44). Schmidgen goes on to describe an arrangement of an eviscerated frog's heart, two mirrors, lenses and a light source that projected an enlarged image of the contracting heart – removed from the frog's body but still connected to its nerves – onto a screen above the audience.

In the decades that both preceded and followed this anecdotal convergence of projection and physiology, experimental discoveries about human physiology were made by, among others, James Clerk Maxwell and Hermann von Helmholtz, which comprehensively undermined the conception of human sight as objective and transparent, insisting on – and indeed proving – its complexity, subjectivity, and its flaws. In these posts I will discuss the technical correspondence between the operation of DLP and human visual perception, with a particular emphasis on how contemporary projection has instrumentalised the knowledge of nineteenth century psychophysics, showing how the technical specifications of DLP projection are derived from a history of the empirical measurement and quantification of subjective phenomena. This relationship is emblematic of what Jonathan Crary has described as "the reconfiguration of optical experience into synthetic and machinic operations that occur external to the observing subject" (p. 226). The literal externalisation of the still beating heart in Czermak's projections precedes a less violent externalisation of sight in the technics of contemporary projection. However, whereas in Czermak's Spectatorium the frog heart projections served to demonstrate anatomical function through direct visual reproduction, in the case of DLP, knowledge of human physiology is used to ensure that its operation remains imperceptible to its audience. So, while for Czermak projection was a transparent tool of instruction, DLP projection relies on the opacity of its technics to maintain the spectacle of its moving image. The psychophysical discoveries of Maxwell and Helmholtz are inscribed in DLP as a series of chromatic principles and temporal intervals within which certain operations must occur to retain the illusory nature of its image. Whereas in the nineteenth century projection served to reveal physiological operations, projection now uses nineteenth century knowledge to conceal its operation.

 

 

 

On Detritus

Constant Linear Velocity installed at the Onassis Cultural Centre for Detritus Festival, January 2018.

 

This text, the sleevenotes written for the publication of the Constant Linear Velocity CD on Consumer Waste, discusses the experience of rebuilding the work in January 2018 for the Detritus Festival:

The six-floor cube of the Onassis Cultural Centre stands on Syngrou Avenue, an eight-lane artery running between central Athens and the coast. Opposite, flanking the hotel where I am staying, stand two strip clubs: Babylon GIRLS Live Show GIRLS Night Club & the Everything you want right now!!! Dream Girls Bar, whose sign is bullet-pointed with all five senses, in case you doubt their definition of everything. Amongst the four-star hotels, car showrooms and strip joints, the OCC, wrapped in pinstripes of white marble, is incongruously opulent.

I am here to reconstruct a work made from numerous empty desktop computer cases, using end-of-life machines sourced in the city. Even on the brief walk out for dinner last night it was clear that this work has considerably more poignancy in a city and country which has borne the brunt of the last decade's financial meltdown in Europe. I have arranged with the festival producers for the hire of 120 or more desktop computers stripped down to just their metal chassis, in which I will install the customised optical drives that form the kinetic and auditory content of the work. When I arrive in the morning they are being wheeled in and unloaded, but my instructions to strip down the machines in advance have been lost in translation: most of the plastic and electronics remains.

For the next eight hours I perform the labour of low-waged e-gleaners on the polished marble of the fourth-floor foyer, systematically stripping out disc drives, power supplies, fans and USB ports, occasionally watched by an increasingly concerned production team as the volume of discarded components swells into heaps. In the dynamic established by the global electronics industry this work is supposed to be invisible: it happens in the margins and the fringes, not the foyers of a 'Centre'. For some of the OCC staff, I sense there is something shameful in this relocation of salvage labour to the gleaming interior of their privately financed art space. But this performance of manual labour, whose audience is restricted to workers of the Cultural Centre, feels more vital than the aesthetic work I am here to build for a festival-attending public. Over the course of the day there is a satisfying inversion in play as cleaners, caretakers, security and reception staff – all doubtless earning less than me today – drift past or linger to watch me hurriedly tearing down PC after PC.

I try to sort and stack the components as they come out, but the quantity regularly exceeds the spaces I have allocated. A janitor with a large roll of corrugated card is instructed to cover the floor I am working on to protect against scratches (casing screws skittering across marble make a lovely sound). Halfway through the day my proposal to keep all of this detritus, to build it into the work, raises concerned brows from the production team, and by the end of the day I am talked out of it. A team of men arrive, all hands on hips, rolled eyes and muted sighs. After the customary mutterings and gesticulations they bag it all into large rubble sacks and wheel it away, trolley-load after trolley-load. But as it is tidied out of sight, the emptied computer chassis feel stripped, not only of the functional parts and coagulates of dust which clung to them this morning, but also of the hierarchies of labour revealed by the day's activity.

The next morning, to the palpable relief of the production team, I am safely back in my art-worker role, able to contemplate the architectonic relation of the sculpture to the aggregated polykatoikia lining the horizon. The fringes of Athens, we are told a couple of days later, are dense with unfinished buildings, holiday homes begun over a decade ago whose completion was curtailed by the crash. Windowless, unfurnished concrete shells, projections of a future that has been denied.