For thousands of years, humans have searched for a way to turn matter into gold. Ancient alchemists considered this precious metal to be the highest form of matter. As human knowledge advanced, the mystical aspects of alchemy gave way to the sciences we know today. And yet, with all our advances in science and technology, the origin story of gold remained unknown. Until now.
Finally, scientists know how the universe makes gold. Using our most advanced telescopes and detectors, we’ve seen it created in the cosmic fire of two colliding stars, first detected by LIGO via the gravitational waves they emitted.
Origins of our elements
Scientists have been able to piece together where many of the elements of the periodic table come from. The Big Bang created hydrogen, the lightest and most abundant element. As stars shine, they fuse hydrogen into heavier elements like carbon and oxygen, the elements of life. In their dying years, stars create the common metals – aluminum and iron – and blast them out into space in different types of supernova explosions.
For decades, scientists have theorized that these stellar explosions also explained the origin of the heaviest and rarest elements, like gold. But they were missing a piece of the story. It hinges on the object left behind by the death of a massive star: a neutron star. Neutron stars pack one-and-a-half times the mass of the sun into a ball only 10 miles across. A teaspoon of material from their surface would weigh 10 million tons.
Many stars in the universe are in binary systems – two stars bound by gravity and orbiting around each other (think Luke’s home planet’s suns in “Star Wars”). A pair of massive stars might eventually end their lives as a pair of neutron stars. The neutron stars orbit each other for hundreds of millions of years. But Einstein says that their dance cannot last forever. Eventually, they must collide.
Massive collision, detected multiple ways
On the morning of August 17, 2017, a ripple in space passed through our planet. It was detected by the LIGO and Virgo gravitational wave detectors. This cosmic disturbance came from a pair of city-sized neutron stars colliding at one third the speed of light. The energy of this collision surpassed any atom-smashing laboratory on Earth.
Hearing about the collision, astronomers around the world, including us, jumped into action. Telescopes large and small scanned the patch of sky where the gravitational waves came from. Twelve hours later, three telescopes caught sight of a brand new star – called a kilonova – in a galaxy called NGC 4993, about 130 million light years from Earth.
Astronomers had captured the light from the cosmic fire of the colliding neutron stars. It was time to point the world’s biggest and best telescopes toward the new star to see the visible and infrared light from the collision’s aftermath. In Chile, the Gemini telescope swung its 26-foot mirror toward the kilonova. NASA steered the Hubble to the same location.
Just like the embers of an intense campfire grow cold and dim, the afterglow of this cosmic fire quickly faded away. Within days the visible light faded away, leaving behind a warm infrared glow, which eventually disappeared as well.
Observing the universe forging gold
But in this fading light was encoded the answer to the age-old question of how gold is made.
Shine sunlight through a prism and you will see our sun’s spectrum – the colors of the rainbow spread from short-wavelength blue light to long-wavelength red light. This spectrum contains the fingerprints of the elements bound up and forged in the sun. Each element is marked by a unique pattern of lines in the spectrum, reflecting its distinct atomic structure.
The spectrum of the kilonova contained the fingerprints of the heaviest elements in the universe. Its light carried the telltale signature of the neutron-star material decaying into platinum, gold and other so-called “r-process” elements.
For the first time, humans had seen alchemy in action, the universe turning matter into gold. And not just a small amount: This one collision created at least 10 Earths’ worth of gold. You might be wearing some gold or platinum jewelry right now. Take a look at it. That metal was created in the atomic fire of a neutron star collision in our own galaxy billions of years ago – a collision just like the one seen on August 17.
And what of the gold produced in this collision? It will be blown out into the cosmos and mixed with dust and gas from its host galaxy. Perhaps one day it will form part of a new planet whose inhabitants will embark on a millennia-long quest to understand its origin.
There are two forms of invariance, two ways in which something can persist in being amid the unceasing flight and flux of forms. There is that which is changeless, the cliff-face rigid and immutable against gale and tide, the frigid polestar-constancy of that which does not alter where it alteration finds. And there is that which, taking leave from itself, nevertheless is able to come back to itself. Unrelentingness: resilience. The one stands against and outside change, the other finds constancy in and through it. The rock that seems sternly to rebut the mutable; and then the boiling maelstrom that is never the same thing for one microsecond, but builds constancy through rotation, repetition. As Michel Serres has said, le dur ne dure pas; seul dure le doux – the hard does not endure, only softness survives. Hardness melts or is eroded, softness persists in being by yielding, by dying easily, like flies, like snow, giving way, expiring without a thought, like a breath, like a mist, like a thought. When the going gets tough, only the weak, giving way, going under, can last out. The condition of Annie Cattrell’s work is this oscillating condition of mutatis mutandis: the things that have been changed being in their turn changed.
Art is not the only way in which humans have battled against variation and inconstancy, though it is one of the emblematic ways, the way that affects to figure forth all efforts in general to instance the general, the unchanging.
Art long ago had to concede its claims to exactitude to what are sometimes, oddly and anachronistically, called the ‘hard sciences’. Exactness implies exaction, that is requirement, severity, coercion – exact being the past participle of Latin exigere, to drive out. Exactness is exigent, obdurate, hard to please. Exactitude used once to guarantee the authority of the absolute, of that which is exactly and precisely itself. To be exact always implies some correspondence, some exact fit between two measures or registers, between an original and a copy, an inside and an outside, a prescription and an action. But as exactitude increases, becoming ever more exacting, something unexpected happens. The closer one approaches to absolute exactitude, the more it recedes. No two measurements of a given phenomenon can ever be exactly the same, or only will be if those measurements are in fact inexact. No observation can take account of precisely all the positions and velocities of all the molecules in a given volume of gas. The only way to be exact is in fact to estimate statistically. To begin with, as certainty increases, variability declines, until that certain point at which it begins once again to increase. At very minute scales, exactness merges once again into approximation. Exactness begins by being hard and rigorous, but as it increases, it becomes ever more frail, and almost infinitely weak in its susceptibility to uncontrollable fluctuations. So what is called ‘art’ and what is called ‘science’ change places once again. Art, ethics, politics, can allow themselves to aspire to the absolute because they make fewer demands of exactitude on themselves. Science must nowadays earn its name through a precision that must expose the inevitability of fluctuation and the necessity of approximation. At its limits, finitude meets the infinitesimal.
Annie Cattrell’s works seem to be drawn towards this indeterminate zone where the exact and the fragile converge, at the point at the heart of every state of being at which there seems to be some tremor, some fading, flicker or deflection from itself. To be sure, her works are often characterised by a high-resolution, pinprick-sharp exactitude. They seem to offer the precision that we expect of perfectly-adjusted apparatus, in which form and function, appearance and effect, are locked tightly together, with neither residue nor deficit. The actions of casting and moulding are frequently employed or implied in her work, for example in the delicate bronze eggshell of From Within, which maps from the inside the delicate channels and filiations inscribed on the interior of the skull. The hard and the soft seem here to be brought into improbable association, the tough lustre of the bronze mingled with the buttery softness of an infant’s head. A cast seems like the ideal, absolute form of reproduction, in which some original form is induced to replicate itself exactly, without superfluity or omission, as though something could depart from itself while remaining itself, and 1+1 could magically still equal 1. It is not surprising that the history of casting, seal-making and minting has such sacred associations, implying the divine power of self-divergence without diminution, of that which, like divine grace, or the head of the sovereign, can become many while never ceasing to be one.
But Annie Cattrell’s castings and rapid-prototype three-dimensional scans are so precise that they go beyond static exactitude to encompass variation. Depending on the light and angle of viewing, concavity and convexity change places in From Within, in a version of what is known as the Hollow Face illusion, which turns declivities into lines of relief. A similar passion of the surface is apparent in Currents, which is a rendering of some surface agitated by undulations, whether of a body of water pestered by a stiff breeze, or of the score of a complex piece of music, or the ripplings of a mountain range seen from above. The piece is not just a breathtakingly exact rendering of a natural process of fluctuation, it is itself a kind of fluctuation between possibilities. It seems struck off in and from a single moment, as though it were possible instantly to scan and cast all the infinite complexity of a single stretch of mild turbulence. But the prototype becomes protean – reminding us that the ungraspable Proteus derives his name from the fact that he was the first-born of the sea-god Poseidon. In the beginning, at the first, there was variation, the manying of the one.
Annie Cattrell’s drawings play variations on these processes of variation, the rippled striations of Pressure and the interlaced tendrils of Sustain instituting for example a shimmering quiver between plane and depth. Their titles hint at material processes, lifting, pouring, parting, rather than the forms of matter that effect them or that they affect. Annie Cattrell’s art not only attends closely to processes of variation within each piece of work, the works themselves enact variations across and between each other. The title of Process establishes an interchange between the alimentary process it figures and the process required to make it, as though it were in some sense figuring its own workings, a machine made to make itself.
As above, so below, writes Hermes Trismegistus in the Smaragdine Tablet, and continues, in the translation of Isaac Newton, ‘to do the miracles of one only thing’. One might rotate this mystical formula (or turn it inside out), as many adherents of mystical doctrine have done, to assert likewise ‘as within, so without’. Much of Annie Cattrell’s work dwells in this logic of coincident contraries, in which the further inwards one goes, and especially into that interior of all interiors, the inside of the body, the more the forms seem to resemble the forms of the exterior world. Lungs, digestive system, cerebral tissue, bloom like clouds, and branch like coral-forests. Everywhere, there appears to be morphological rhyme: a brain swells like a mushroom, like a bomb burst, like a nebula, like the lacy fistulae of blood from a wound held underwater. Annie Cattrell’s collaborations, with neuroscientists, meteorologists and foresters, seem designed to limn these rhymings. And yet her forms seem to decline the unanimity of the ‘one only thing’, that mystical allergy to number, or to any number but one. The world of forms she patiently tracks is one that never quite becomes one or comes back to itself, in which the formative principle is endlessly branching and budding off.
Annie Cattrell is drawn and detained by secret, hidden, normally inaccessible spaces and forms, especially parts of the body which we not only rarely see, but of which we can also form no real continuous conception. But these forms are not merely inward. They have the quality that Gilles Deleuze called ‘increscence’. They bloat and blister, but inwards as well as outwards, turning into, rolling over on themselves, delving inwards into the inner space they themselves scoop out. Where leaves and flowers seem to grow the very space they bud and branch out into, the bronchial and cerebral arborescences that draw Annie Cattrell’s eye and hand complexify space rather than rarefying it, multiplying it inwards. They brood and breed, they go on out in, curling, tucking in and doubling back on themselves even as they billow outwards.
The lungs that are figured in Capacity are an image of this astonishing involution. Small creatures, such as flies, do not have lungs, because they do not need them, their volume being small enough in relation to their outer surfaces to be able to absorb the oxygen they need directly from the air around them. But as creatures grow, their volume increases by the cube of their length, while their surface area increases only by its square, so the larger creatures grow, the greater their oxygen needs in proportion to their surface areas. For a creature the size of a human, or indeed for most creatures larger than Craseonycteris thonglongyai, the bumblebee bat of Thailand, which weighs only a couple of grams and is the smallest lung-breathing creature in the world, the only way to absorb enough oxygen is in effect to turn themselves inside out, or outside in. They must, in the words of Marlowe’s Barabas, ‘enclose/Infinite riches in a little room’, approximating the effect of a large surface area within a very constrained space. This is achieved through millions of alveoli (Latin, ‘little alcoves’), which bud out from the ends of the bronchioles and are enmeshed in capillaries, maximising the exposure of the blood to the air from which it draws its oxygen. The human lung contains some 700 million of these structures, the equivalent, if opened out, of a surface area of 70 m², around the size of a tennis court. Capacity not only mimics the maximising of space through interior folding, it folds together time and space too; the work of countless hours is compressed into the image of a single inbreath, as though the work had spontaneously formed itself out of air made palpable and visible. We not only need room to breathe, it seems, but breathing also remakes space, burrows out room for itself.
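The square-cube arithmetic behind this involution can be sketched in a few lines. The sketch below is my own illustration, not part of the essay: it models a body as a simple sphere (an assumption) and uses the essay's own figures of 70 m² of lung surface spread across 700 million alveoli.

```python
import math

def area_to_volume_ratio(length_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere of diameter length_m.

    For a sphere of radius r this is (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r,
    so the ratio falls in inverse proportion to linear size.
    """
    r = length_m / 2
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)

# Doubling linear size halves the area-to-volume ratio: the outer
# surface alone cannot keep pace with the cube-law growth in oxygen demand.
small = area_to_volume_ratio(0.01)   # a 1 cm creature
large = area_to_volume_ratio(0.02)   # the same creature, doubled
assert math.isclose(small / large, 2.0)

# The lung's answer is to fold surface inwards: 70 m^2 over
# 700 million alveoli gives roughly 1e-7 m^2 per alveolus.
per_alveolus_m2 = 70 / 700_000_000
print(per_alveolus_m2)  # → 1e-07
```

The 3/r result makes the essay's point concrete: scale a body up tenfold and each unit of volume has only a tenth of the outer surface to breathe through, which is why the surface must be multiplied inwards instead.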
Annie Cattrell’s works are a serenely seething contour map of prepositions, out, back, on, in, through, along, beside. Mystical materialists like Teilhard de Chardin have evoked a kind of awareness in introversion, as though an energy that turned towards itself rather than jetting out and away were all that were required for consciousness to stir, whether in the coiling of the molecule, or the slow wheelings of galaxies. But Annie Cattrell’s forms seem to have a kind of consciousness without self-consciousness. This is why there can sometimes seem to be a kind of fungal horror in this obese blooming, amid all its delicacy; we recoil from the blind, shoving nescience of what seems to teem without limit or plan, a becoming-other that wants to become everything and to go everywhere, making everything itself, making itself everything, yet without ever quite coming back to what it is.
Sense gives us sculptural reconstructions of the areas of the brain activated by the work of the five different senses. Where previous ages emblematised the senses with different animals – the monkey or the spider for touch, the lynx for sight – Annie Cattrell gives us a more abstract morphological bestiary, presenting each of the senses as though it were not just the animation of an idea, but also the idea of some kind of animal, flaring into intermittent being. The piece might seem to offer the same kind of reassurance that neuroscience can often seem to offer to the incautious, that there is an inner architecture that answers precisely and predictably to an outer, and that the ideas we have about ourselves – that we have precisely five senses, for example, and not four (as Aristotle thought), or as many psychologists would nowadays prefer to say, nine – are verified point for point by what happens in the brain. But a sense is not the simple reflex in the brain of some equally simple cause in the world; it is the predisposition of a brain, acting, as brains always must, in complex concert with the body that it is both a part of and apart from, to make certain kinds of sense of the world. Few will now be surprised by the news that the brains of synaesthetes show more connectivity between different areas of the brain used to activate different sensory responses than non-synaesthetes. But the real question to ask may be why non-synaesthetes with no such immediate experience of, say, hearing colours or tasting shapes, can nevertheless make perfect sense of such experiences in narrative or metaphor.
And where, one wonders, might coenesthesia – the powerful, yet oddly fluctuating sense of the mineness of my senses – have its seat? Responding to Descartes’s conviction that each of us has immediate and undoubtable access to what we must all infallibly recognise as a self, David Hume protests, with mischievous, magisterial coolness, ‘I am certain there is no such principle in me’. Where might that certainty reside, if not somewhere between the brain that formed these words and the words themselves? Does the idea of nowhere have a location? Where or what would be the ‘I’-ness that is so positive about its nonentity, so certain of its inaccessibility to itself? There can be no doubt that there must have been some kind of brain state at the moment at which David Hume inscribed these words, very likely one that would seem roundly to contradict his statement, and that there may be some kind of equivalence between equivalent brain states induced in various readers, including David Hume and me and you, reading these words at various times. But David Hume’s point is not that there is no such thing as perception, but that there is no such thing as ‘pure’ perception, or perception as such, since all perception is perception of something else. What happens on the inside of brains is not a mere reaction to what happens to them, it is a construing of a relation to that exterior. The brain is constantly at work actively producing the forms of its responsiveness. It ceaselessly projects, from the inside, the kind of outside it takes there to be, just as it also constantly projects – for example in works like Sense – the kind of inside it takes itself to be, in relation to that outside.
Sense shows us what sensing neurologically is – seeing seems to have the form, for example, of a handkerchief suddenly ravelled by a gust of wind, or an egg splatted messily on a moving windscreen, while hearing is a pair of headphones or cauliflower ears. But we would be mistaken to see the sense regions as simple, invariant objects. As the brain functions modelled in a work such as Pleasure/Pain indicate, these apparent condensations of function are in fact the reified forms of connections, patterns of interchange between areas of the brain rather than sealed chambers. Each is a stochastic silhouette formed by the possible thickening into the probable. It is the sculpting of a neurological conversation rather than a portrait of a single interlocutor, a telephone network rather than the profile of a speaking head.
And these connections ramify not only within but beyond the brain. As we look into and through the cool, translucent acrylic cubes of Sense in which these abstract sense-homunculi are suspended, we sense that there must be some kind of answering topology in our own perceptions, that our brains must be miming out some kind of anagram of what they are seeing. But this very action adds our perception to the series. Are we outside the series, as its observer, or an extension to it? Are we looking out of what we are looking in on? And when we see hearing, or tasting or smelling, what new neurological ravellings, what new forms of consensus, are being effected? Are we to read this sequence of shapes as primal engrams, the Platonic solids of sensing? Or do they form a sequence of variables, a meteorological phase space of feeling, contouring Dylan Thomas’s restless ‘process in the weather of the heart’?
As the title of the recent piece Conditions suggests, Annie Cattrell’s art exists in a world (the world, there evidently being no other) of conditions. To say that some statement is true is to claim to define the conditions under which it will be true. To affirm that something exists is always also to assert the conditions under which its existence will be possible. Given certain conditions of existence, certain kinds of thing may exist. To say that something existed absolutely would be to say that no conditions exist or are conceivable under which it could not exist. And perhaps there are no such unconditional truths or existences. Everything is what it is only under certain conditions, certain forms of speaking or agreeing together, for, indeed, condition is from Latin condicere, to speak together. And that concordance is never complete, there are always at least two parties, the entity and its conditions, two halves to a compact that can never compact into simple unity. And, if things are what they are only under certain conditions, those conditions are never absolute or wholly and exactly specifiable, so there is never an exact fit between what something is and the conditions under which it comes to be that thing. All existence is, in this sense, as we say, iffy, making the being of what is almost infinitely fragile, infinitely open to the shifting contingencies that alone permit or prohibit its being. This makes being both finite and fragile.
In the case of Conditions, there are many images of this conditional agreement. There are first of all the cloud-forms themselves, etched by the same kind of focussed laser that is used in some kinds of surgery to reach into the inaccessible heart of the brain and other areas of delicate tissue in the body. Our looking in on these forms is a similar kind of action-at-a-distance, the kind of optical tactility provoked by inviolable interiorities of the snow-globe or the ship in a bottle. The etched cloud-forms suggest that they may be variations on some primary form, a lexicon derivable from some degree-zero of in-itself vapour, prior to any deformation, swervings away from some elemental or archetypal or as-such state of cloud. This is in accord with our thinking about conditionality, in which there is a primary essence which is subject to this or that variation, this or that inflection in response to changing conditions. But there is no primary or ur-cloud, there are only states of cloud, translations without an original. Something can be what it is only on condition that it converses with that which it is not, with that which, as we may say, provides the conditions for the thing it is.
The transparent columns of Conditions enclose cloud-forms that are typical of (but never, of course, absolutely identified with or definitional of) particular times of the year: January’s clouds are low, dense and brooding, June’s cottony and clumped, July’s a hazy cirrocumulus. The angles of the glass columns splinter, refract and multiply the cloud-forms, creating commerce between the incommensurable orders of the edged and the edgeless. But the neat divisions between the columns and the prevailing conditions they signify are an illusion, for in reality the divisions between the cloud-forms characteristic of particular months are no more hard and fast than the divisions between clouds themselves and the clear air in which they are suspended.
All of Annie Cattrell’s work institutes a strange doubling whereby the material forms she represents seem to suggest the shapes of the thoughts we have about them, but none more so than the cerebral nebulae of Conditions. There is a long tradition which associates the intangible, ephemeral forms of clouds with the drifting play of thought itself. Clouds, like thoughts, are only there as long as they are there for us, and yet can be there only if they are over there, remote from us. Here, Annie Cattrell gives to thought a kind of impossible, imponderable materiality, giving us up to our own thought, and, returning our thinking to itself, the changed thing changed back, the fluctuating uncertainty of the exchange captured with tender, rapt exactitude.
Exhibition Statement (excerpt) by Hannah Star Rogers:
Umwelt, a three-artist exhibition at BioBAT Art Space, takes the concept of collaboration to new heights and complications. It exposes the multilayered work of artists who engage with the sciences while offering visitors a nuanced view of what science both is and can be. Meredith Tromble, Patricia Olynyk, and Christine Davis are established artists who approach science as material for art. They have individually worked directly with scientists: as residents in their labs, as observers of scientific proceedings, as interviewers treating scientists as informants, and as direct co-creators of artworks. This collaborative presentation offers the opportunity to think about the different approaches that artists are taking to work with science in the new wave of art-science interactions and collaborations that is now well underway.
The complexities of science that these artists are investigating are reflected in the title of the exhibition. The concept of “umwelt,” as described in the semiotic theories of Jakob von Uexküll and interpreted by Thomas A. Sebeok (1976), is the world as it is experienced by a particular organism. As such, umwelt evokes more than environment; it emphasizes an organism’s ability to sense—a condition for the existence of shared signs. These signs offer meanings about the world, albeit of divergent sorts, to different types of organisms or even individual beings. Umwelt also calls attention to the specific senses that different organisms use to make meaning from their environments, including signs made by other organisms.
Christine Davis: Tlön, or How I held in my hands a vast methodical fragment of our planet’s entire history, 2019. Ethically sourced butterfly wings on black gessoed canvas, 48” x 70”
Christine Davis is a Canadian artist born in Vancouver. She currently lives and works in New York City. Modes of seeing, classifying and producing both scientific and cultural knowledge, often tied to the feminine and the natural world, underpin many of her projects. Through a cosmological impulse, Davis’s installations seem to propose that meanings from disparate historical and pedagogical contexts overlap and are released slowly over long periods of time. In her work “Tlön, or How I held in my hands a vast methodical fragment of an unknown planet’s entire history” (exhibited at the Musée des Beaux-Arts de Montréal), documentation of the heavens and classification of wildlife are overlaid in a system of ordering and symmetry that is at once mystical and sadistic, absurd and universal. As film scholar Olivier Asselin notes, “Davis’ work establishes a link between artistic abstraction and scientific abstraction – between formal abstraction and conceptual abstraction. [F]orm is chaotic; it is one of those complex phenomena, like climate change and liquid turbulence, which are determinate, but non-linear, and, as a result, remain largely unpredictable. As such, it prompts an epistemological reflection on the complexity of the sensible and the limits of the concept… from this perspective, her work is archaeological.” Exhibiting since 1987, she has work held in numerous collections including the National Gallery of Canada, the Musée d’art contemporain de Montréal, the Collection Helga de Alvear and the Yvon Lambert Collection, Avignon. Publications on her work include monographs published by CREDAC (Paris), MACM (Montreal), the AGO (Toronto) and Presentation House (Vancouver).
Patricia Olynyk: Extension II, 2014. Digital pigment print on archival paper, 22 ¼” x 61 ¼”
Patricia Olynyk is a multimedia artist, scholar and educator whose work explores art, science and technology-related themes that range from the mind-brain to interspecies communication and the environment. Her prints, photographs, and video installations investigate the ways in which social systems and institutional structures shape our understanding of science, human life, and the natural world. Working across disciplines to develop “third culture” projects, she frequently collaborates with scientists, humanists, and technology specialists. Her multimedia environments call upon the viewer to expand their awareness of the worlds they inhabit—whether those worlds are their own bodies or the spaces that surround them. Olynyk is the recipient of numerous awards and distinctions, including a Helmut S. Stern Fellowship at the Institute for the Humanities, University of Michigan, and a Francis C. Wood Fellowship at the College of Physicians of Philadelphia. She has held residencies at UCLA’s Design Media Arts Department, the Banff Centre for the Arts, Villa Montalvo in California, and the University of Applied Arts, Vienna. Her work has been featured in Venice Design 2018 at Palazzo Michiel, Venice; the Los Angeles International Biennial; the Saitama Modern Art Museum, Japan; the Museo del Corso in Rome; and the National Academy of Sciences in Washington. Olynyk is Chair of the Graduate School of Art and the Florence and Frank Bush Professor of Art at Washington University, and co-director of the Leonardo/ISAST NY LASER program in New York. Her writing is featured in publications including Public Journal, the Routledge Companion to Biology in Art and Architecture, Technoetic Arts, and Leonardo Journal.
Meredith Tromble: Dream Vortex: Lab Meeting, 2019. Matrix of 9 framed digital prints, each (framed) 20 x 23.5 inches; overall dimensions 62 x 72 inches
Meredith Tromble is an Oakland-based intermedia artist and writer whose curiosity about links between imagination and knowledge led her to form collaborations with scientists in addition to making installations, drawings, and performances. A central theme in her work is circulation: between ideas and materials, through collaborative creative process, from psychological impulses through images and texts. Her work asserts the continuity between the physical and virtual worlds. She has made drawings, installations and performances for venues ranging from the Yerba Buena Center for the Arts and Southern Exposure in San Francisco, to the National Academy of Sciences in Washington, D.C. and the Glasgow School of Art in the UK. She has been artist-in-residence at the Complexity Sciences Center at the University of California, Davis (UCD), since 2011 in active collaboration with UCD geobiologist Dawn Sumner. Their interactive 3-D digital art installation Dream Vortex has been widely presented in various iterations at ISEA2015, Vancouver, and Creativity & Cognition, Glasgow School of Art, and at more than a dozen American universities ranging from Stanford University in Palo Alto to Brown University in Providence. Dream Vortex was chosen as an “Exemplar Project” of interdisciplinary research by the Association for the Arts in Research Universities (a2ru) in 2015. A related performance project, The Vortex, in collaboration with Donna Sternberg and Dancers of Los Angeles, had weekend runs in Los Angeles in 2016 and 2018. Tromble’s other recent projects include an art installation developed with a neuroscientist at Gazzaley Lab, University of California San Francisco, and performance/lectures by “Madame Entropy.” Her 2012 blog “Art and Shadows,” on contemporary art and science, was supported by the Art Writers Initiative of the Andy Warhol Foundation.
From 2000 to 2010 she was a core member of the artist collective Stretcher, and made flash "guerrilla" performances using a mechanism based on the research of biologist Larry Rome to generate electricity from the motion of her body.
I recently visited the Hermitage in St Petersburg, Russia – one of the best art museums in the world. I was expecting to serenely experience its masterpieces, but my view was blocked by a wall of smart phones taking pictures of the paintings. And where I could find a bit of empty space, there were people taking selfies to create lasting memories of their visit.
For many people, taking hundreds, if not thousands, of pictures is now a crucial part of going on holiday – documenting every last detail and posting it on social media. But how does that affect our actual memories of the past – and how we view ourselves? As an expert on memory, I was curious.
Unfortunately, psychological research on the topic is so far scant. But we do know a few things. We use smart phones and new technologies as memory repositories. This is nothing new – humans have always used external devices as an aid when acquiring knowledge and remembering.
Writing certainly serves this function. Historical records are collective external memories. Testimonies of migrations, settlement or battles help entire nations trace a lineage, a past and an identity. In the life of an individual, written diaries serve a similar function.
Nowadays we tend to commit very little to memory – we entrust a huge amount to the cloud. Not only is it almost unheard of to recite poems, even the most personal events are generally recorded on our cellphones. Rather than remembering what we ate at someone’s wedding, we scroll back to look at all the images we took of the food.
This has serious consequences. Taking photos of an event rather than being immersed in it has been shown to lead to poorer recall of the actual event – we get distracted in the process.
Relying on photos to remember has a similar effect. Memory needs to be exercised on a regular basis in order to function well. There are many studies documenting the importance of memory retrieval practice – for example in university students. Memory is and will remain essential for learning. There is indeed some evidence showing that committing almost all knowledge and memories to the cloud might hinder the ability to remember.
However, there is a silver lining. Even if some studies claim that all this makes us more stupid, what is actually happening is a shift in skills – from remembering things ourselves to managing the way we remember more efficiently. This is called metacognition, and it is an overarching skill that is also essential for students – for example when planning what and how to study. There is also substantial and reliable evidence that external memories, selfies included, can help individuals with memory impairments.
But while photos can in some instances help people to remember, the quality of the memories may be limited. We may remember what something looked like more clearly, but this could be at the expense of other types of information. One study showed that while photos could help people remember what they saw during some event, they reduced their memory of what was said.
There are some rather profound risks when it comes to personal memory. Our identity is a product of our life experiences, which can be easily accessed through our memories of the past. So, does constant photographic documentation of life experiences alter how we see ourselves? There is no substantial empirical evidence on this yet, but I would speculate that it does.
Too many images are likely to make us remember the past in a fixed way – blocking other memories. Indeed, early childhood memories are often based on photos rather than the actual events – and these are not always true memories.
Another issue is that research has uncovered a lack of spontaneity in selfies and many other photos. They are planned, the poses are unnatural, and at times the image of the person is distorted. They also reflect a narcissistic tendency, shaping the face into unnatural expressions – artificial big smiles, sensual pouts, funny faces or offensive gestures.
Importantly, selfies and many other photos are also public displays of specific attitudes, intentions and stances. In other words, they do not really reflect who we are; they reflect what we want to show others about ourselves at that moment. If we rely heavily on photos when remembering our past, we may create a distorted self-identity based on the image we wanted to promote to others.
That said, our natural memory isn’t actually perfectly accurate. Research shows that we often create false memories about the past. We do this in order to maintain the identity that we want to have over time – and avoid conflicting narratives about who we are. So if you have always been rather soft and kind – but through some significant life experience decide you are tough – you may dig up memories of being aggressive in the past or even completely make them up.
Having multiple daily records on our phones of how we were in the past might therefore render our memory less malleable and less adaptable to the changes brought about by life – making our identity more stable and fixed.
But this can create problems if our present identity becomes different from our fixed, past one. That is an uncomfortable experience and exactly what the “normal” functioning of memory is aimed to avoid – it is malleable so that we can have a non-contradictory narrative about ourselves. We want to think of ourselves as having a certain unchanging “core”. If we feel unable to change how we see ourselves over time, this could seriously affect our sense of agency and mental health.
So our obsession with taking photos may be causing both memory loss and uncomfortable identity discrepancies.
It is interesting to think about how technology changes the way we behave and function. As long as we are aware of the risks, we can probably mitigate harmful effects. The possibility that actually sends shivers down my spine is that we lose all those precious pictures because of some widespread malfunctioning of our smart phones.
So the next time you’re at a museum, do take a moment to look up and experience it all. Just in case those photos go missing.
Humans have learned to travel through space, eradicate diseases and understand nature at the breathtakingly tiny level of fundamental particles. Yet we have no idea how consciousness – our ability to experience and learn about the world in this way and report it to others – arises in the brain.
In fact, while scientists have been preoccupied with understanding consciousness for centuries, it remains one of the most important unanswered questions of modern neuroscience. Now our new study, published in Science Advances, sheds light on the mystery by uncovering networks in the brain that are at work when we are conscious.
It’s not just a philosophical question. Determining whether a patient is “aware” after suffering a severe brain injury is a huge challenge both for doctors and families who need to make decisions about care. Modern brain imaging techniques are starting to lift this uncertainty, giving us unprecedented insights into human consciousness.
For example, we know that complex brain areas including the prefrontal cortex or the precuneus, which are responsible for a range of higher cognitive functions, are typically involved in conscious thought. However, large brain areas do many things. We therefore wanted to find out how consciousness is represented in the brain on the level of specific networks.
The reason it is so difficult to study conscious experiences is that they are entirely internal and cannot be accessed by others. For example, we can both be looking at the same picture on our screens, but I have no way to tell whether my experience of seeing that picture is similar to yours, unless you tell me about it. Only conscious individuals can have subjective experiences and, therefore, the most direct way to assess whether somebody is conscious is to ask them to tell us about them.
But what would happen if you lose your ability to speak? In that case, I could still ask you some questions and you could perhaps sign your responses, for example by nodding your head or moving your hand. Of course, the information I would obtain this way would not be as rich, but it would still be enough for me to know that you do indeed have experiences. If you were not able to produce any responses though, I would not have a way to tell whether you’re conscious and would probably assume you’re not.
Scanning for networks
Our new study, the product of a collaboration across seven countries, has identified brain signatures that can indicate consciousness without relying on self-report or the need to ask patients to engage in a particular task, and can differentiate between conscious and unconscious patients after brain injury.
When the brain gets severely damaged, for example in a serious traffic accident, people can end up in a coma. This is a state in which you lose the ability to be awake and aware of your surroundings, and need mechanical support to breathe. It typically doesn't last more than a few days. After that, patients sometimes wake up but don't show any evidence of having any awareness of themselves or the world around them – this is known as a "vegetative state". Another possibility is that they show evidence of only a very minimal awareness – referred to as a minimally conscious state. For most patients, this means that their brain still perceives things but they don't experience them. However, a small percentage of these patients are indeed conscious but simply unable to produce any behavioural responses.
We used a technique known as functional magnetic resonance imaging (fMRI), which allows us to measure the activity of the brain and the way some regions “communicate” with others. Specifically, when a brain region is more active, it consumes more oxygen and needs higher blood supply to meet its demands. We can detect these changes even when the participants are at rest and measure how it varies across regions to create patterns of connectivity across the brain.
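As an illustration of the idea of a connectivity pattern (this is a toy sketch, not the study's actual analysis pipeline), one can correlate each pair of regional signals: regions whose activity rises and falls together are said to "communicate". The simulated time series below stand in for real fMRI data.

```python
import numpy as np

def connectivity_matrix(timeseries):
    """Pearson correlation between every pair of regional signals.
    `timeseries` has shape (n_regions, n_timepoints)."""
    return np.corrcoef(timeseries)

# Toy data: three "regions", two of which fluctuate together.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 240)           # 240 samples over a minute (illustrative)
shared = np.sin(0.5 * t)              # a slow fluctuation shared by two regions
regions = np.vstack([
    shared + 0.1 * rng.standard_normal(t.size),
    shared + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),      # an independent third region
])
conn = connectivity_matrix(regions)
# conn[0, 1] is close to 1 (regions 0 and 1 "communicate"),
# while conn[0, 2] hovers near 0 (region 2 is decoupled).
```

Real analyses add many further steps (motion correction, filtering, parcellation into regions), but the core object – a region-by-region correlation matrix – is the same.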
We used the method on 53 patients in a vegetative state, 59 people in a minimally conscious state and 47 healthy participants. They came from hospitals in Paris, Liège, New York, London, and Ontario. Patients from Paris, Liège, and New York were diagnosed through standardised behavioural assessments, such as being asked to move a hand or blink an eye. In contrast, patients from London were assessed with other advanced brain imaging techniques that required the patient to modulate their brain to produce neural responses instead of external physical ones – such as imagining moving one’s hand instead of actually moving it.
We found two main patterns of communication across regions. One simply reflected physical connections of the brain, such as communication only between pairs of regions that have a direct physical link between them. This was seen in patients with virtually no conscious experience. The other represented very complex brain-wide dynamic interactions across a set of 42 brain regions that belong to six brain networks with important roles in cognition (see image above). This complex pattern was almost only present in people with some level of consciousness.
Importantly, this complex pattern disappeared when patients were under deep anaesthesia, confirming that our methods were indeed sensitive to the patients’ level of consciousness and not their general brain damage or external responsiveness.
Research like this has the potential to lead to an understanding of how objective biomarkers can play a crucial role in medical decision making. In the future it might be possible to develop ways to externally modulate these conscious signatures and restore some degree of awareness or responsiveness in patients who have lost them, for example by using non-invasive brain stimulation techniques such as transcranial electrical stimulation. Indeed, in my research group at the University of Birmingham, we are starting to explore this avenue.
Excitingly, the research also takes us a step closer to understanding how consciousness arises in the brain. With more data on the neural signatures of consciousness in people experiencing various altered states of consciousness – ranging from taking psychedelics to experiencing lucid dreams – we may one day crack the puzzle.
Universities in the US have long wrangled over who owns the world’s largest drum. Unsubstantiated claims to the title have included the “Purdue Big Bass Drum” and “Big Bertha”, which interestingly was named after the German World War I cannon and ended up becoming radioactive during the Manhattan Project.
Unfortunately for the Americans, however, the Guinness Book of World Records says a traditional Korean “CheonGo” drum holds the true title. This is over 5.5 metres in diameter, some six metres tall and weighs over seven tonnes. But my latest scientific results, just published in Nature Communications, have blown all of the contenders away. That’s because the world’s largest drum is actually several tens of times larger than our planet – and it exists in space.
You may think this is nonsense. But the magnetic field (magnetosphere) that surrounds the Earth, protecting us by diverting the solar wind around the planet, is a gigantic and complicated musical instrument. We’ve known for 50 years or so that weak magnetic types of sound waves can bounce around and resonate within this environment, forming well defined notes in exactly the same way wind and stringed instruments do. But these notes form at frequencies tens of thousands of times lower than we can hear with our ears. And this drum-like instrument within our magnetosphere has long eluded us – until now.
Massive magnetic membrane
The key feature of a drum is its surface – technically referred to as a membrane (drums are also known as membranophones). When you hit this surface, ripples can spread across it and get reflected back at the fixed edges. The original and reflected waves can interfere by reinforcing or cancelling each other out. This leads to “standing wave patterns”, in which specific points appear to be standing still while others vibrate back and forth. The specific patterns and their associated frequencies are determined entirely by the shape of the drum’s surface. In fact, the question “Can one hear the shape of a drum?” has intrigued mathematicians from the 1960s until today.
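For a circular membrane, those standing-wave frequencies can be written down explicitly: each mode frequency is a zero of a Bessel function scaled by the wave speed and the drum's radius. The sketch below uses standard tabulated Bessel zeros; the membrane parameters are purely illustrative, not measurements from any real instrument.

```python
import math

# First zeros x_nm of the Bessel functions J_n (standard tabulated values).
BESSEL_ZEROS = {
    0: [2.40483, 5.52008, 8.65373],
    1: [3.83171, 7.01559, 10.17347],
    2: [5.13562, 8.41724, 11.61984],
}

def drum_mode_frequencies(radius, tension, surface_density):
    """Standing-wave frequencies of a circular membrane of radius a:
    f_nm = x_nm * c / (2*pi*a), with wave speed c = sqrt(T / sigma)."""
    c = math.sqrt(tension / surface_density)
    return {
        (n, m + 1): x * c / (2 * math.pi * radius)
        for n, zeros in BESSEL_ZEROS.items()
        for m, x in enumerate(zeros)
    }

# Illustrative kettledrum-like parameters: a = 0.33 m, T = 4000 N/m,
# sigma = 0.26 kg/m^2.
modes = drum_mode_frequencies(0.33, 4000.0, 0.26)
# Unlike a string, the overtones are not integer multiples of the
# fundamental: this ratio is ~2.295, not 2.
ratio = modes[(0, 2)] / modes[(0, 1)]
```

That non-harmonic spectrum is exactly what makes a drum sound like a drum – and it is the shape-dependence of these frequencies that the "Can one hear the shape of a drum?" question probes.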
The outer boundary of Earth’s magnetosphere, known as the magnetopause, behaves very much like an elastic membrane. It grows or shrinks depending on the varying strength of the solar wind, and these changes often trigger ripples or surface waves to spread out across the boundary. While scientists have often focused on how these waves travel down the sides of the magnetosphere, they should also travel towards the magnetic poles.
Physicists often take complicated problems and simplify them considerably to gain insight. This approach helped theorists 45 years ago first demonstrate that these surface waves might indeed get reflected back, making the magnetosphere vibrate just like a drum. But it wasn’t clear whether removing some of the simplifications in the theory might stop the drum from being possible.
It also turned out to be very difficult to find compelling observational evidence for this theory from satellite data. In space physics, unlike say astronomy, we’re usually dealing with the completely invisible. We can’t just take a picture of what’s going on everywhere, we have to send satellites out and measure it. But that means we only know what’s happening in the locations where there are satellites. The conundrum is often whether the satellites are in the right place at the right time to find what you’re looking for.
Over the past few years, my colleagues and I have been further developing the theory of this magnetic drum to give us testable signatures to search for in our data. We were able to come up with some strict criteria that we thought could provide evidence for these oscillations. It basically meant that we needed at least four satellites all in a row near the magnetopause.
Thankfully, NASA’s THEMIS mission gave us not four but five satellites to play with. All we had to do was find the right driving event, equivalent to the drum stick hitting the drum, and measure how the surface moved in response and what sounds it created. The event in question was a jet of high speed particles impulsively slamming into the magnetopause. Once we had that, everything fell into place almost perfectly. We have even recreated what the drum actually sounds like (see the video above).
This research really goes to show how tricky science can be in reality. Something which sounds relatively straightforward has taken us 45 years to demonstrate. And this journey is far from over: there's plenty more work to do to find out how often these drum-like vibrations occur (both here at Earth and potentially at other planets, too) and what their consequences for our space environment are.
This will ultimately help us unravel what kind of rhythm the magnetosphere produces over time. As a former DJ, I can’t wait – I love a good beat.
From the raindrops that soak you on your way to work to the drops of coffee that inevitably end up on your white shirt when you arrive, you’d be forgiven for thinking of drops as a mere nuisance.
But beneath a mundane facade, droplets exhibit natural beauty and conceal complex physics that scientists have been trying to figure out for decades. Recently, I have contributed to this field by working on a new theory explaining what happens to the critical thin layer of air between a drop of water and a surface to cause a splash.
At just a few thousandths of a second, the lifetime of a splashing drop is too rapid for us to see. It took pioneering advances in high-speed imaging to capture these events – the most iconic being Edgerton’s Milk Drop Coronet in 1957. These pictures simultaneously captured the public’s imagination with their aesthetic nature while intriguing physicists with their surprising complexity. The most obvious question is why, and when, do drops splash?
Nowadays, cameras can take over a million frames per second and resolve the fine details of a splash. However, these advances have raised as many questions as they have answered. Most importantly, remarkable observations from the Nagel Lab in 2005 showed that the air surrounding the drop plays a critical role. By reducing the air pressure, one can prevent a splash (see second video). In fact, drops which splash at the bottom of Mount Everest may not do so at the top, where the air pressure is lower.
The discoveries created an explosion of experimental work aimed at uncovering the curious details of the air’s role. New experimental methods revealed incredible dynamics: millimetre-sized liquid drops are controlled by the behaviour of microscopic air films that are 1,000 times smaller.
Notably, after a liquid drop contacts a solid it can be prevented from spreading across it by a microscopically thin layer of air that it can’t push aside. The sizes involved are equivalent to a one-centimetre layer of air stopping a tsunami wave spreading across a beach. When this occurs, a sheet of liquid can fly away from the main drop and break into smaller droplets – so that a splash is generated.
From a coffee stain all we can see is the outcome of this event – a pool of liquid (the drop) surrounded by a ring of smaller drops (the splash).
Experimental analyses have produced incredibly detailed observations of drops splashing. But they do not establish why the drops splash, which means we don’t understand the underlying physics. Remarkably, for such a seemingly innocuous problem the classical theory of fluids – used to forecast weather, design ships and predict blood flow – is inadequate. This is because the air layer’s height becomes comparable to the distance air molecules travel between collisions. So for this specific problem we need to feed in microscopic details that the classical theory simply doesn’t account for.
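One common way to quantify when the classical (continuum) description fails is the Knudsen number – the ratio of the molecules' mean free path to the air layer's height. The back-of-envelope sketch below uses standard kinetic-theory formulas; the 100 nm film height and the effective molecular diameter are illustrative values I have chosen, not figures from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure, temperature, molecule_diameter):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temperature / (
        math.sqrt(2) * math.pi * molecule_diameter**2 * pressure
    )

def knudsen_number(film_height, pressure=101325.0, temperature=293.0,
                   molecule_diameter=3.7e-10):
    """Kn = lambda / h. Once Kn is no longer small (roughly Kn > 0.01),
    the continuum description of the gas film starts to fail."""
    return mean_free_path(pressure, temperature, molecule_diameter) / film_height

# An air film ~100 nm thick beneath a spreading drop, at sea-level pressure:
kn_sea_level = knudsen_number(100e-9)   # ~0.66: far outside the continuum regime
# At roughly Everest-summit pressure (~34 kPa) the mean free path is ~3x
# longer, pushing the film even deeper into the kinetic regime.
kn_everest = knudsen_number(100e-9, pressure=34000.0)
```

A Knudsen number of order one means individual molecular collisions matter as much as bulk flow – which is why the problem calls for kinetic theory rather than classical fluid dynamics.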
The air’s behaviour can only be captured by a theory originally developed for violent aerodynamic gas flows – such as for space shuttles entering the Earth’s atmosphere – namely the kinetic theory of gases. My new article, published in Physical Review Letters, is the first to use kinetic theory to understand how the air film behaves as it is displaced by a liquid spreading over a solid.
The article establishes criteria for the maximum speed at which a liquid can stably spread over a solid. It was already known that for a splash to be produced, this critical speed must be exceeded. If the speed is lower than that, the drop spreads smoothly instead. Notably, the new theory explains why reducing the air pressure can suppress splashing: in this case, air escapes more easily from the layer and provides less resistance to the liquid drop. This is the missing piece of a jigsaw to which numerous important scientific contributions have been made since the experimental discoveries of 2005.
As well as being of fundamental scientific interest, an understanding of the conditions that cause splashing can be exploited – leading to potential breakthroughs in a number of practical fields.
One example is 3D printing where liquid drops form the building blocks of tailor-made products such as hearing aids. Here, stopping splashing is key to making products of the desired quality. Another important area is forensic science, where blood-stain-pattern analysis relies on splash characteristics to provide insight into where the blood came from – yielding vital information in a criminal investigation.
Most promisingly, the new theory will have applications to a wide range of related flows where microscopic layers of air appear. For example, in climate science it will enable us to understand how water drops collide during the formation of clouds and to estimate the quantity of gas being dragged into our oceans by rainfall.
Do keep this in mind the next time you splatter coffee drops across your desk. Take a moment to admire the pattern and appreciate the underlying complexity before cursing and heading for your “mopper upper” of choice.
Language is conventionally understood as the interface between us (humans) and the 'out there'. This article proposes that there is an urgent need to write a philosophy of language from a perspective which can account for the new ontologies of language being promoted by its increasingly non-human, digital, disembodied applications and 'realities'. The work starts with a question: what is language when it is no longer made by humans, but by a machine? Employing Heinz von Foerster's distinction between 'Non-Trivial' and 'Trivial' Machines, which distinguishes machinic processes involving agency from those without it, this practice- and theory-based research explores that question.
Most philosophies of language still take as a given that language is a human-made artefact (speech/writing), albeit at different levels of 'proximity' to the human subject: speech being closer than writing, and writing being closer than printing or typing. By this argument, speech is more closely related to the human than typing, and in turn permits more human agency. With the typewriter's tendency towards automation and standardisation in mind, Nietzsche, who turned towards using a Malling Hansen Writing Ball (invented in 1867), has been described by Friedrich Kittler as the "first mechanized philosopher". Nietzsche noted, while using the Malling Hansen: "The writing ball is a thing like me: made of iron yet easily twisted on our journeys", and observed that "Our writing tools are also working on our thoughts". Beyerlen later commented on the act of typing: "[A]fter one briefly presses down on a key, the typewriter creates in the proper position on the paper a complete letter, which is not only untouched by the writer's hand but also located in a place entirely apart from where the hands work." This is a more general tendency of technology: to distance the subject from its object via various degrees of technological mediation (fig. 1). And yet, despite its mechanized characteristic, we cannot say that the writings of Nietzsche possess less agency in their typed form, only that the relationship between his thought and its transcription is rendered slightly more distant by the new technology of the typewriter.
Figure 1. Instruction manual for the Hermes 3000 typewriter.
Despite these distinctions, which in this case take the form of noting the distance between speech and writing, or writing and typing, human-made language is largely taken to be analogue, material, and definitional within philosophy. Language is made by rational human agents, not machines. It is definitional to the degree that humans possess language whereas animals do not, and this is taken as one of the defining characteristics of the 'human'. Therefore, when philosophers speak of language in its human-made form, it is with a full sense of language's significance – culturally, intellectually, and historically. To reinforce this point: in the film 'Threads' (1984), the bleakest description of the effects of nuclear war and its aftermath concludes that in the imagined post-apocalyptic future, language breaks down to such an extent that the threads which tie human to human and constitute primary social and ethical contracts are broken. The linguistic threads are also unravelled, and (after some time) language is reduced to single-word, brute-force descriptions of fact, directed entirely towards survival. The story provides a stark reminder of the significance of language within a culture, and of its role in lifting humans from a state of mere survival to the formation of social, political and legal bonds, along with literature, creativity and (most importantly within the narrative of the film) the capacity for human empathy.
This concern over the changing conditions of language is not new. Writing itself is a technology, one which took what the Romans called 'Verba Volant' (the spoken word flies) and relegated it to a mere 'Scripta Manent' (the written word remains – fixed, lifeless). Plato feared that writing would alter our relationship to the human act of memory. However, I wish to suggest here that such questions about language and agency, and language and the social, while not new, are being dramatically amplified by the new technological contexts within which language exists, and that the emergence of non-human languages (which mimic human language – or, more precisely, non-human agents whose material substrate for producing such language is code) requires us to radically rethink the philosophical assumption that language is a human-made phenomenon, and moreover to consider why that matters (fig. 2).
Figure 2. 000111111, by Aude Rouaux, 2016 (video: 15.42 minutes). Words are vocally performed, in binary code.
Language is rapidly changing and migrating to machine-driven forms, which are increasingly detached from human modes of articulation and yet which possess great power to shape human actions and affect human identity as articulated through language. Artificial intelligence, artificial languages, speech/text recognition systems and other forms of mechanization are changing the ways in which language relates to the human, and therefore, arguably, changing what it means to be human. To consider these matters, I will here apply Heinz von Foerster's distinction between Trivial and Non-Trivial Machines to language, revisit Deleuze's notion of 'The Event' (especially as it pertains to language), and consider Heidegger's reformulation of one of the classical laws of thought, known as The Principle of Identity. Here he poses identity not as a matter of direct equivalence between two things (A=A), but as a relation between them, located in the 'is', not in what lies on either side (A is A). This will be relevant to the mimetic qualities of non-human languages and the possibilities of seeing what lies between human and non-human language.
To conclude, I will suggest that we might think of the relationship between human and non-human language less as a question of seeking equivalence between the two (currently based on mimicry) and more as one of seeking a new relation, somewhere in between. I will briefly outline some practice-based experiments, currently in progress, which aim to explore this space. The collaborative publishing and language research project 'one' (provisionally titled: ontological non-human editions) will seek to evaluate the potential for this in-between space of human/non-human languages and to break the dichotomy between the two.
The Trivial Machine vs. The Non-Trivial Machine
Within the context of mid-20th century writing on cybernetics, Heinz von Foerster proposed the notion of the 'Non-Trivial Machine', referring to it as possessing the "well-defined properties of an abstract entity", and in so doing posed a machine as not necessarily something with 'wheels and cogs'. Instead, a machine is "how a certain state is transformed into a different state". Alan Turing had previously described a machine as a set of rules and laws. By these definitions a machine could be something immaterial every bit as much as something physical, opening the way for code-as-machine. The important aspect of a 'Non-Trivial Machine', for von Foerster, is that its "input-output relationship is not invariant, but is determined by the machine's previous output." In other words, its previous steps determine its present reactions, and so it is reactive, variant, and dynamic. In contrast, a 'Trivial Machine' is one in which the input creates an invariant output. This kind of machine is inherently stable and produces no fluctuations or errors: it is predictable. By definition, then, a 'Non-Trivial Machine' is one in which the output cannot be predicted from its input – a machine which has agency and autonomy. We might call these attributes 'intelligence', creativity, and the human propensity for unpredictability. Based on von Foerster's distinction, it seems clear that we are presently still operating in the realm of the 'Trivial Machine' with respect to non-human languages, including those produced by automated voice assistants or IBM's flagship debating technology 'Project Debater', since their operations lack agency and true linguistic contingency: they only mimic such effects, if ever more convincingly.
Even ‘Debater’, which claims to engage in true discussion/argument with a human interlocutor, uses the power of its almost unlimited access to databases of information, thereafter constructing its arguments using a (vocal) linguistic interface which has been ‘trained’ in the art of classical rhetoric and persuasive debating techniques.
Figure 3. Heinz von Foerster's own drawings of his trivial (left) and non-trivial machines (right). On the left, the input-output relation is invariant. On the right, the input-output relation is variant and therefore unpredictable, since it is non-linear: the internal logic changes with every operation. In other words, with the trivial machine you won't get peppermint or condoms if you put a coin in a chewing gum machine, but with a non-trivial machine you might (von Foerster and Poerkson, 1999, p.57).
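Von Foerster's distinction can be sketched in a few lines of code. The particular transformations below are arbitrary placeholders of my own, chosen only to make the contrast visible: the trivial machine is a pure function, while the non-trivial machine carries an internal state that every input rewrites.

```python
def trivial_machine(x):
    """Trivial Machine: the input-output relation is fixed, so the
    same input always yields the same output."""
    return x.upper()

class NonTrivialMachine:
    """Non-Trivial Machine: each input also updates an internal state,
    so the input-output relation depends on the machine's history."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        # Which transformation is applied depends on what has been fed in so far.
        out = x.upper() if self.state % 2 == 0 else x[::-1]
        self.state += 1  # the input rewrites the internal logic for next time
        return out

machine = NonTrivialMachine()
first = machine.step("echo")   # -> 'ECHO'
second = machine.step("echo")  # -> 'ohce': identical input, different output
```

Of course, this toy machine is still deterministic to anyone who can read its code; von Foerster's point concerns machines whose history-dependent behaviour is not predictable from the input alone.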
We might therefore make an initial observation: human beings are, borrowing this definition, 'Non-Trivial Machines', because the input humans receive does not (always) result in a predetermined output. Their (immaterial) thought processes could be seen to correspond to von Foerster's notion of an abstract entity with well-defined properties. Absent of wheels and cogs, these processes are nonetheless real. Returning to our present subject, language: such processes are materialised through the interface of language, and these abstract cognitive processes are evidenced in sounds and marks.
It follows that humans are unpredictable: they interpret, subvert, alter and take ownership of language at the point of input, creating new forms and bringing their subjectivity (including their identity/agency, along with the materiality and ‘event’ of language in time and space) into play. As a caveat, however, what they produce is at the same time based on their previous interactions with language, their understanding of its rules, and the linguistic elements with which they are familiar (everyone shares and utilizes the same letterforms within a specific language). This is a paradox: language is a site of intense non-trivial production (the output is not predictable from the input), but at the same time it works with pre-existing elements (a predictable, given input), and to that extent it could arguably be called ‘trivial’ (predictable within those given limits). We do not, for example, suddenly create new symbols within the existing chain of 26 letters of the Roman alphabet, but accept that restriction of the linguistic/symbolic ‘machine’. We do not normally rewrite grammar and syntax, unless we are experimenting with form. Nonetheless, what we do with this input, despite its pre-given nature, is intrinsically unpredictable. As humans we generate the new from the given.
These are not trivial questions. As we embark upon the full employment of artificial language[s] as the interface between ourselves and machines (Siri, Amazon Echo, Watson, chatbots, to name just a few), we see that the trivial (input = output) model of AI is potentially moving closer to a non-trivial form of language. When Amazon Alexa starts creating poetry connected to an autonomous thought process, we will be in the presence of the linguistic singularity, and we will know this because of the forms of language being used and the ways in which the input/output conditions are changed. Alan Turing famously used written language as the basis of his test for the presence of machine intelligence: the Turing Test. However, the use of language on the part of the non-human writer within the test was ‘trivial’, for the purposes of this definition.
To summarise, the distinction offered by Heinz von Foerster states that in a ‘trivial’ machine, input and output can be predicted (reliable/mechanical), while in a ‘non-trivial’ machine the output is unpredictable and involves risk (unreliable/creative). However, I want to propose that, however distant the prospect, these distinctions are now under threat from the potential for an autonomous machine (AI) to exist on the non-trivial side of the equation. The ‘trivial machine’ is fast moving closer to being ‘non-trivial’, and this requires us to critique and reassess what language is and what we value in it. It also requires a method of critiquing such language, leading to a further distinction posed by von Foerster: that between allo-observation and auto-observation. The method we apply to critiquing language relies on this distinction, which I will briefly summarize in the section to follow.
Allo/Auto-Observation of Language
As noted, language as the interface between ourselves and robots or ‘intelligent assistants’ such as Amazon Echo (or other forms of artificial voice assistant) is still relatively trivial. We don’t expect Watson or Siri to produce utterances or fragments of writing which are autonomous (not input = output). Language produced by human beings, on the other hand, is radically non-trivial: I cannot anticipate with any degree of reliability what you will say next. Literature and poetry are unequivocally non-trivial, tethered to the human subject with its essential autonomy, and the trivial machine will only demonstrate intelligence when it starts speaking and writing to us, or to other machines, non-trivially. This moves beyond the limits of Turing’s test, which identified the presence of intelligence through a language-based demonstration. [V]on Foerster offers us a useful method of working through some of the complexities of this terrain with another distinction, one which has been employed extensively by creative practitioners working with language, whether implicitly or explicitly.
“[V]on Foerster suggests that the non-trivial machine should change itself as a result of auto-observation: currently it does so as a result of allo-observation”.
Allo means different/other: a form of observation which comes from outside the subject under scrutiny, in the present example, language. Auto-observation would imply that the observation comes from within the subject: in this case, using language to examine language. The creative properties of material language would be used as a form/medium of investigation, and would not give way to the hierarchy of imposing an explanatory meta language (a language which explains another language). Any use of a meta language poses problems, because it is difficult to critique language from ‘a view from nowhere’ and claim any degree of objectivity. Meta languages (arguably) fall foul of a contradiction: while claiming to stand outside the subject, language is both the subject and the medium of any meta language.
In contrast, auto-observation would imply that the observation comes from within the subject (immanently), using language itself to examine language (without calling upon anything outside that language to do the explaining). In this method the paradox/contradiction of language being both subject and medium is embraced by exploring language from within language. The creative properties of material language are thus posed as a form, method, and means of investigation. The proposed creative works described at the end of this paper do not revert to a meta language which would ‘explain’ such language. The work will proceed based on ‘auto-observational’ interventions, not ‘allo-observational’ techniques. In this way (as von Foerster suggests), the non-trivial machine should undergo change. Rather than describing a static state of affairs in language, using language to do so (again, in a meta language), the work will take a form of language (human/non-human or in-between) and immersively interrogate its primary conditions, from within that language as a creative medium.
In the next section I would also like to pose a further method, one of looking ‘sideways’ at language and of relieving it of its representational function, challenging the basic assumption that identity forms the ground of language, and that meaning, and therefore ‘truth’, can be established on the basis of what it refers to (put simply: what it represents/points to, beyond itself). In place of mimesis (taken here to mean representation), thinkers such as Deleuze and Guattari, and also Derrida, suggest that an alternative ‘logic of representation’ is possible, one where an ‘a-signifying, a-syntactic material’ forms the ground for a discretely different grammar. This in turn brings forth other forms of understanding, or: “an essentially heterogeneous reality”. Deleuze and Guattari explain how: “A method of the Rhizome type [on the contrary], can analyse language only by decentering it onto other dimensions and other registers”, suggesting that language can only be scrutinized sideways, tangentially, without looking directly at the object itself. This may seem contradictory to the notion of an auto-observational method of interrogating language from within language. However, these might also be seen as complementary methods, since each asks us to relieve language of its straightforward representational function and look at it afresh. There are two further considerations of the trivial/non-trivial machine analogy which I would like to introduce with respect to non-human/human language: identity and the event. For this I will attempt to simplify some fairly complex philosophical remarks by both Heidegger and Deleuze.
Language and Identity/The Event
The Principle of Identity is also known as the Law of Identity. In its simplest form it states that A=A. This can be seen in the following examples:
“A rose is a rose is a rose is a rose”— Gertrude Stein, from the poem ‘Sacred Emily’, 1913.
“The number 1 is self-identical” — Gottlob Frege, The Foundations of Mathematics, 1884.
The first primitive truth of reason is stated as a self-referring form of identity: “Everything is what it is”— Gottfried Leibniz, Nouv. Ess. IV, 2, § i.
Challenging this classic law of thought known as The Principle of Identity, which takes as a given that A=A, Heidegger, in his lectures from 1957, wants to rethink the principle of identity as one of relation (with the emphasis on the relation), rather than one in which the terms being related take precedence. A=A therefore becomes A is A, where the ‘is’ takes precedence over the identities of the individual A’s. This represents a move away from metaphysics, which always casts the same as a self-unity. Heidegger states that, in its place, “The event of appropriation… should now serve as the key term in the service of thinking.”
The ‘event of appropriation’ is a singularity [an event] which delivers beings over into Being, whereas metaphysics asserts that identity presupposes Being (Being is subservient to identity: identity is its ground[ing]). In the event as posed, identity is recast in terms of a belonging together, with the emphasis on the belonging rather than on the terms being related. Perdurance is the term Heidegger uses for the simultaneous withholding and closure of the space between the terms, one which is forever in a state of oscillation between them.
In Heidegger’s conception of the event of appropriation, language itself provides the tools for this type of thought, since through its “self-suspended structure”, language holds everything in a fragile, delicate, susceptible framework, one which is infinitely collapsible at any point. The event of appropriation is thus to be found, and is founded, in language; in that “self-vibrating realm” where we dwell. Heidegger states it in this way: “The doctrine of Metaphysics represents identity as a fundamental characteristic of Being.” To ‘Be’ is to be identified. He wants to challenge this.
In Heidegger’s new formulation, the essential quality of identity is to be found within the event of appropriation (in the “self-vibrating realm”). Where Metaphysics presupposes that Being is the ground of beings, forming their identity and giving them their characteristics, the ‘spring’ away from identity as posed by the concrete relation A=A constitutes a leap into the relative ‘abyss’ of the event of appropriation, where stable identity gives way to a less familiar way of thinking and being. However, this abyss is not a place of loss or confusion, but the space of a more originary relation of identity, one which retains difference, and where the vibration, or oscillation, between beings and Being is retained; Heidegger thinks this is the place of true Being. Thinking is also transformed by this movement, and the “essential origin of identity” is retained through that which joins and separates them simultaneously (he calls this simultaneous process of opening and closing perdurance).
As Nietzsche also reminds us, this is a game of speed and intensity; one which denies a stable/causal ground for meaning, and we can apply this observation directly to language as well as his subject of logic: “Causality eludes us; to suppose a direct causal link between thoughts, as logic does–that is the consequence of the crudest and clumsiest observation. Between two thoughts, all kinds of affects play their game: but their motions are too fast, therefore we fail to recognize them, we deny them.” Much of what happens in logic (and by extension, other forms of language) takes place, Nietzsche claims, beyond the radar screen, since the non-metaphysical, affective attributes of language, including speed and intensity, are denied. To claim that causality is a simple relation (as logic does), is too simplistic a position. Not everything can be (nor should be) stated unambiguously, and thought should strain against its own limits, in search of conceptual integrity. It’s this space of encounter with the non-causal affects of language which intrigues and informs the present work, since it requires us to think of language (human and non-human) as something which operates on a far more complex ‘plane’ than that of straightforward representation. If we are to move beyond mimesis and mere mimicry of human language, then we need to move beyond simple notions of identity and recognize the complexity of language.
The speed and intensity at which such effects operate within language remind us of Walter Benjamin’s claim that thought necessarily involves the discontinuous presentation of ‘fragments of thoughts’, set in an interruptive relationship of infinite detours. Coherence is to be found in the ‘flashes’ and gaps between perceptible knowledge; not in the coherent sequencing of ideas, or in the relatively uncomplicated collision of ideas and their presentation. Dissolution and dissonance, rather than denotation; polyphony, rather than homophony; elision, rather than elucidation, bring meaning [truth] into view. Ideas precede presentation, but are only to be sought in the interstices, the oblique, the constellatory. Benjamin explains the constellation as the place where: “[I]deas are not represented in themselves, but solely and exclusively in an arrangement of concrete elements in the concept: as the configuration of these elements… Ideas are to objects as constellations are to stars.”
Finally, Goethe, in his Scientific Studies, points to a second and fundamental difficulty with correspondence theories of truth grounded in identity: “How difficult it is… to refrain from replacing the thing with its sign, to keep the object alive before us instead of killing it with the word.” I will now briefly turn to some comments on the notion of the ‘event’ in language.
Michel Foucault, in Theatrum Philosophicum, shows how Gilles Deleuze rejects, for thinking, the model of the circle, with its promise of closure, centre and certainty, in favour of ‘fibrils and bifurcations’, which open out onto extended and unanticipated series, and defy principles of organization. In Foucault’s own words:
‘As Deleuze has said to me, however… there is no heart, there is no centering, only decenterings, series, from one to another, with the limp of a presence and an absence of an excess, of a deficiency’
Similarly, Nietzsche directly confronts the concept of a ‘ground’ upon which to base a philosophy, offering instead a deconstruction, or critique, of the tradition. Thinking against ‘the reason and fetish of the totality’, he seeks to dismantle the ‘universal’ account, replacing it with a series of fragmentary, unstable perspectives on truth, knowledge, and subjectivity: “For Nietzsche, the world consists of an absolute parallax, infinite points of view determined and defined by, and within, a fragmented poetic fabrication”. In other words: shifting objects and observers, coupled with shifting positions, produce shifting meanings, and it is through the fragmentary, aphoristic style of his writing that Nietzsche articulates this unstable plurality. The correspondence between Nietzsche’s and Deleuze’s approaches is clear. Similarly, as previously seen, Walter Benjamin proposes that “meaning hangs loosely, as departure, tangentially, like a royal robe with ample folds” and that in language: “Fragments of a vessel which are to be glued together… need not be like one another… as fragments of a greater language.” Each views language itself as a productive site of philosophical critique, and questions its ability to provide singular, unambiguous and final meaning (based on a stable identity).
For Deleuze, there is ‘something else’ operating in language, but this ‘something’ (the event), is not describable by simple observation; it is not able to be represented, but nonetheless makes expression possible. In The Logic of Sense Deleuze attempts to show how the ‘event’ ‘haunts’ language. The ‘event’, which is synonymous with the unspoken, and incorporeal; the unrepresentable, nonetheless makes language possible, subsisting in language as its primary means of expression; partaking in the moment of expression and being both indistinguishable from it, and entirely different from it, at the same time:
“The expression, which differs in nature from the representation, acts no less as that which is enveloped (or not) inside the representation… Representation must encompass an expression which it does not represent, but without which it would not be ‘comprehensive’, and would have truth only by chance or from outside.”
Representation is problematic for Deleuze, since it is extrinsic by nature, operating on the basis of resemblance, or mimesis; exclusively externalized (fixed, static, immobilised and invariant). However, the ‘something’ (the event) which consistently escapes this manner of representation is a matter internal to the expression (enveloped, or subsisting within it), providing its fully ‘comprehensive’ character while remaining enigmatically inexpressible. Representation on this account is always abstract and empty, incomplete and unfulfilled.
As with those non-human languages which seek to mimic human forms, such representations are always empty of the fullness supplied by the ‘event’, since, according to Deleuze, without the event, representation would remain ‘lifeless and senseless’. In short: for Deleuze, the ‘extra-representative’ exceeds the functional, while the tension between the representable and the non-representable is that which makes possible the fullest form of representation:
“Representation envelops the event in another nature, it envelops it at its borders, it stretches until this point, and it brings about this lining or hem. This is the operation which defines living usage, to the extent that representation, when it does not reach this point, remains only a dead letter confronting that which it represents, and stupid in its representiveness”.
Differently stated, Deleuze proposes an expression which is both internal and invisible to language, but nonetheless intrinsic and crucial to meaning; something unrepresentable but irreducible and essential. This refers us back to von Foerster’s notion of the Non-Trivial in language. Whilst language continues to be merely replicated in non-human systems, it is this ‘event’ which is missing, and which denies the fullness of language. It’s found in the poem, the performance of language, and the relationship between the agency of the thinking human subject and the language being performed. It’s also found in the tension between linguistic utterances (speech/writing) made by human beings and those made by machines, which (at the time of writing) lack agency and comprehension: we might think of this as the ‘eventness’ of language as produced by human beings, one which requires Heidegger’s ‘oscillation’ in place of static identity. Thought in this way, human language retains instability and event-ness, or presence, in ways which non-human languages lack. Reliant on mimicry and without agency, they nonetheless remind us of what is missing in such languages, and what language is for human beings: irreducible evidence of the human propensity for creation, the non-identical and the unpredictable.
Deleuze criticizes the structuralist proposal of language as a system of signifiers, one which presupposes a referent (the world) upon which meaning is imposed or found. His difficulty is that in structuralism language is seen as transcendent: it stands to re-present the world as given to us, through a system of signs (which transcend that reality: a meta language, as discussed previously). It constructs and represents some ‘outside’ world, while being independent from that world. Deleuze wants to suggest instead that signs run throughout life, in forms such as genetic codes, biological processes, and computer actions; that there are only events, not stable meanings: “it is the myth of representation that separates man from an inert and passive world that he then brings to language”. Instead, he wants to say that:
‘Words are genuine intensities within certain aesthetic systems’. Once communication between diverse, or heterogeneous, series is established, all kinds of consequences follow. “Something ‘passes’ between the borders, events explode, phenomena flash, like thunder and lightning. Spatio-temporal dynamisms fill the system, expressing simultaneously the resonance of the coupled series and the amplitude of the forced movement which exceeds them”.
Deleuze objects to the structuralist programme on the basis that before signs are extensive, or representational, they are first of all intensive. This is amplified and demonstrated by the various ways in which artists, designers and poets have explored and exploited the intensive nature of language, and in doing so have pointed to an alternative ‘truth’ of language: one which embraces paradox, diversity, and a-logicality. For Deleuze, this affective, intensive dimension of language is its primary ‘event’; rhythmic, creative, infinitely productive, or non-trivial. Instead of doubling a pre-given world, language produces it. Nonsense literature, such as Lewis Carroll’s ‘Snark’, in his poem The Hunting of the Snark, and ‘Jabberwocky’ from Through the Looking Glass, in which no referents exist, shows how language still has a sense despite its lack of concrete referent, and reveals how language is ‘active creation’ rather than ‘reactive representation’. Moreover, sense is not reducible to the singular meanings of a language; it is what allows a language to be meaningful. It is not attachable to each instance of language, but is a method of thinking about or approaching things, in which we see language’s power to transform itself via the proliferation of meanings and intensive affects (events).
Representational painting or literature points beyond itself to an external world (secondarity); it is essentially ‘about’ something other than itself. It is referential. Conceptual art, including art about the act of making art, or about the surface of the work, is self-referring. In the same respect, non-representational language directs attention towards itself; towards language’s sensory, affective qualities, and this ‘concrete visual order of signifiers’ takes precedence in any semiotic account of it. Drawing attention to language as an event, as an image of itself, from within itself, based on auto-observation, reveals a phenomenon with its own characteristics and immanent qualities. We learn something about language’s limits and possibilities, its inherent instability, as well as its productivity, when contemplating its non-referential character as a pure event. This is the auto-observation of the ‘Non-Trivial Machine’, as posed by von Foerster.
Philosophy cannot as yet account for the emergence of non-human language, since it commences from the assumption that language is produced by human beings. This paper has offered insights into the theoretical ground for a new body of creative work, which emerges from a close investigation of the ‘new’ conditions of a language which is rapidly migrating to machines. Advances in AI technologies use increasingly sophisticated replications of human language as their interfaces. Much has been produced creatively (and written) about AI. However, little detailed attention is being paid to such ‘machinic’ language as the key means by which we will come to accept AI. This prompts a revisitation of the significance, value, and purpose of language for humans, alongside exploring the potential for hybrid/emergent forms of human/machine language to emerge. The creative method will be to work with the cybernetic theory outlined in the mid-20th century by Heinz von Foerster, for whom ‘Trivial Machines’ are those in which the input and output are predictable, while ‘Non-Trivial Machines’ are highly unpredictable. It’s clear that although significant advances have been made within Natural Language Processing, Voice Recognition, etc., we are still very much in the ‘Trivial’ zone in terms of machine-replicated language: input equals output. However, humans are by definition ‘Non-Trivial Machines’ in terms of how they use/inhabit language, which is highly autonomous, unpredictable, and creative. Rethinking the relationship between language and identity, and posing language as an event, allows us to loosen the ties between language and its role in representation: it places the emphasis elsewhere.
If we rethink how we presently configure such languages, to be less concerned with replicating human language (spoken/written) and more concerned with such creative potential, then we might be able to see a new role for non-human language as a creative force, closing the gap between these trivial and non-trivial machines and creating a hybrid collaboration somewhere between these two polar opposites.
The Non-Trivial [Language] Machines
In response to these questions, a series of workshops and projects under the provisional titles listed below are under development. Participants will include technology experts, artists, writers, philosophers, poets and ‘other’. If you are interested in being involved in any of these experiments, please contact firstname.lastname@example.org.
The focus within the creative works will be on exploring autonomy and imitation as the fundamental basis of the human/machine duality. What happens when that duality is less clear-cut? What creative potential can be tapped into, within the collision (and collusion) between humans and machines, across the interface[s] of language? We will create trivial/non-trivial machines (based on language), to both explore this potential, and expose its limits.
Chaosmos, or Materia Prima (What is Language?)
‘Chaosmos’ is James Joyce’s term for the way in which order comes from chaos (comprised of the raw materials of the universe), but not before it has moved to the limits of comprehension; prior to it becoming (in this case), language/literature/art. Chaosmos might be thought of as a ‘composed chaos (but something neither foreseen nor preconceived)’, which emerges out of a temporary alignment of images/sensations/ actions. In this initial experiment, the emphasis will be on working within an intentional form of linguistic chaos; a ‘Materia Prima’ (the primary materials of language), which precedes meaning. Random fragments of not-yet-language will be the medium, in writing, sound and drawing. The resulting work will be unknown; forged on the chaos of randomness and the randomness of chaos. This experiment is intended to draw attention to how language is arbitrary, and yet constructs a world for us. However, before we get to meaningful marks/sounds and/or conventional codes (human or otherwise), there is nothing but ‘wandering’.
Materia Secunda/Enactment (What is Code?)
Within alchemy, ‘Materia Secunda’ is the second phase of the emergence of meaning. It takes the raw materials of language (the Materia Prima) and produces meaning from agreed codes, creating coherent patterns and systems. In this experiment, the participants will examine codes, with a view to seeing how they are inherently artificial, even before they become aligned with the digital and, afterward, the machine. Codes (numerical or otherwise) are a conceit of the human intellect, and yet they contain infinite potential to create meaningful utterances and gestures. Working with people from philosophy, and technology/science, this experiment will examine codes in detail and consider how ‘to code’ is ‘to create’.
Imitation Game[s] (What is a Machine?)
To imitate is to mimic, to mimic is to replicate, to replicate is to repeat and to repeat does not imply creation, but simple adherence to the original (perhaps). We replicate language within AI systems, but we don’t yet have the capacity to re-code its raw materials (imitative forms of language), to create new patterns, or to perform unexpected language games. AI is still a trivial machine at this point in time. However, advances in robotics, and automation mean that we are moving ever closer to the point where autonomous creation is possible, not just imitation; resulting in a Non-Trivial Machine. By the time of this last experiment, we expect that technological developments will have advanced to the degree that we can glimpse the potential for ‘non-imitative’ technologies of voice and language; ones nonetheless produced by algorithms.
‘one’ stands for (amongst other things): ‘ontologies of non-human expression’, and is a collaborative project, focusing in particular on both the limits and potential of non-human publishing. The designers, artists, theorists and educators Jack Clarke, Joshua Trees, Yvan Martinez, Robert Hetherington and Sheena Calvert formed the group in late 2019, in response to shared questions about the shifting relationship between humans, machines and published work (books, texts and ‘other’). We started by revisiting the premise that to ‘publish’ is the act of making public, and, within that broad definition (which we aim to refine as the research progresses), proposing to consider the implications of autonomy, agency, automation and algorithmic production in published work produced by human and non-human agents. The intention is to interrupt our present understanding of how humans and machines both produce and disseminate such outputs. ‘one’ is at the same time a practice-based and a theoretical project, involving writings, practice-based experiments and performative modes of dissemination. We are interested in the relationship between the reader and the read, and in asking the question: Is non-human publishing changing the relationship between the seen (the published) and the seeing subject (the ‘reader’)? Ultimately, what if only machines are reading? By examining the social relations that underpin such technologies, we hope to raise some important questions about the ways in which non-human publishing challenges the definition of that term, as well as revisiting the central role of language and published expression in human life.
The non-trivial machine distinction will create a space for one of the first experiments of this newly formed research group. By asking software to generate as many possible versions of the ‘one’ acronym/title as it can, we can (playfully) see how and where machines are able to generate meaningful phrases. However, this will also be trivial, since the input is simply the store of possible words, in random combination, without any acknowledgment of language’s social context. We will publish this list as a first act of non-human publishing.
Typing in ‘human’, ‘machine’, ‘ontological’, or in fact any other word into the main box, and pressing ‘go’ (more than once), generates variations of the full ‘one’ project title. Scrolling down the page reveals the collection of words from which the system is drawing. Each interaction with the generator adds more to this list and so the number of possible variants multiplies with use. While the machine generates these collections of words, randomly (or ‘trivially’), and without making any sense of them, as humans we read these words and groupings quite differently, since we cannot escape our ‘non-trivial’ relationship to language.
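A minimal sketch of such a generator, assuming nothing about the actual ‘one’ software (the word store, slots and behaviour here are invented for illustration), shows how ‘trivial’ the underlying operation is: random selection from a growing store, with no grasp of the words’ social context:

```python
import random

# An expandable store of candidate words for the three slots of the
# acronym 'one' (e.g. 'ontologies of non-human expression').
word_store = {
    "o": ["ontologies", "objects", "outputs"],
    "n": ["non-human", "networked", "nonsensical"],
    "e": ["expression", "enactment", "event"],
}

def generate_title(new_words=None):
    # Each interaction may add words to the store, so the number of
    # possible variants multiplies with use.
    if new_words:
        for slot, word in new_words.items():
            if word not in word_store[slot]:
                word_store[slot].append(word)
    # The combination itself is 'trivial': a random choice per slot.
    return "{} of {} {}".format(*(random.choice(word_store[s]) for s in "one"))
```

The machine assembles these titles randomly and without making any sense of them; it is the human reader who cannot help reading them non-trivially.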
 Friedrich Kittler, Gramophone, Film, Typewriter (California, Stanford, 1999)
 Friedrich Nietzsche, letter of March 17, 1882, in idem 1975–84, pt.3, 1: 180.
 Friedrich Nietzsche, letter toward the end of February, in F. Nietzsche Briefwechsel: Kritische Gesamtausgabe, G. Colli and M. Montinari (eds), Berlin, 1975 – 84, pt. 3, 1: 172.
 Angelo Beyerlen, the royal stenographer of Württemberg, quoted in Herbertz, 1909, 559.
 Plato, Phaedrus, 370BC. Socrates (recounting a dialogue between Thueth and Thamos), questions the role of written language in supporting memory, and claims it will damage that facility in humans. ‘letters’ “… will create forgetfulness in the learners’ souls…”
 Heinz von Foerster, Heinz. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.
 For Turing, a ‘machine’ could be a set of set of rules and laws (mathematical/procedural). Hence, a ‘Turing Machine’. Cf: Turing, A.M. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. Also, 1948, “Intelligent Machinery.” Reprinted in “Cybernetics: Key Papers.” Ed. C.R. Evans and A.D.J. Robertson. Baltimore: University Park Press, 1968. p. 31.
 For more information on Project Debater, see: https:// artificial-intelligence/project-debater/www.research.ibm.com/
 Heinz von Foerster, Heinz. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.
 Jacques Derrida, Points… Interviews, 1974-1994, Jacques Derrida, Ed. Elizabeth Weber (California: Stanford, 1995), p.75.
 Gilles Deleuze, Cinema 2: The Time-Image (University of Minnesota Press, 1984), pp. 43–44.
 Deleuze and Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, University of Minnesota Press, 1987. D + G preface their remarks about heterogeneity with: ”A semiotic chain is like a tuber agglomerating very diverse acts, not only linguistic, but also perceptive, mimetic, gestural, and cognitive: there is no language in itself, nor are there any linguistic universals, only a throng of dialects, patois, slangs, and specialized languages. There is no ideal speaker-listener, any more than there is a homogeneous linguistic community”. p.7
 Martin Heidegger, Identity and Difference, (Chicago University Press, 2002), p. 36.
 Friedrich Nietzsche, Will to Power, Translated by W. A. Kaufmann and R. J. Hollingdale, (Vintage Books, 1967), p. 477.
 Op. cit., Negative Dialectics, ‘The Disenchantment of the Concept’, p. 12. Adorno puts the point in this way: “Initially, such concepts as that of “being” at the start of Hegel’s Logic emphatically mean non-conceptualities; as Lask put it, they “mean beyond themselves.” Dissatisfaction with their own conceptuality is part of their meaning, though the inclusion of non-conceptuality in their meaning makes it tendentially their equal and thus keeps them trapped within themselves”.
 See: Walter Benjamin, Selected Writings, volumes 1/2/3, edited by H. Eiland and M. W. Jennings (Harvard U. Press, 2006)
 Walter Benjamin, the ‘Epistemo-Critical Prologue’ to Ursprung des deutschen Trauerspiels (1928), translated as The Origin of German Tragic Drama (1977).
 Johann Goethe, Scientific Studies, The Collected Works, ed. D. Miller (Princeton U. Press, 1995).
 Michel Foucault: Aesthetics, Method and Epistemology. Edited by J. D. Faubion, and Translated by R. Hurley and others. (New York: The New Press, 1998).
 G. B. Madison, Coping with Nietzsche’s Legacy: Rorty, Derrida, Gadamer, The Politics of Postmodernity, Essays in Applied Hermeneutics. p.1
 Terry Eagleton, Awakening from modernity. Times Literary Supplement, 20th February (1987).
 Stephen Barker, Nietzsche/Derrida, Blanchot/Beckett: Fragmentary Progressions of the Unnamable (California, 1995)
 Walter Benjamin, “The Task of the Translator”, in Illuminations, trans. Harry Zohn, London, Fontana, 1992 pp. 70-82.
 In the Genealogy of Morals (1887), Nietzsche throws out a strident challenge that even those philosophers of the ‘modern’ era who grounded understanding in science and mathematics are shadowed by the same pursuit: “They are far from being Free Spirits: for they still have faith in truth.” Quoted in G. B. Madison, ‘Coping with Nietzsche’s Legacy’, Philosophy Today 36 (1): 3–19 (1992), p. 1.
 Gilles Deleuze, The Logic of Sense, Twentieth series (1969) Logique du sens (Paris: Minuit); tr. as The Logic of Sense, by M. Lester with C. Stivale (New York: Columbia University Press, 1990). p.145.
It’s a natural impulse to reach out and touch an original artwork, perhaps to feel the strong brushstrokes in van Gogh’s Starry Night or to trace the shape of a compelling sculpture. You can’t though, and for good reason: a multitude of individual touches would soon damage the work, so museums sensibly use “Please don’t touch” signs, velvet ropes and alert guards to keep viewers at a distance. It helps that those same museums have put their collections online so you can appreciate great art right on your digital device. However, even at high resolution, images on flat screens do not clearly show surface texture or convey volumes in space. But now researchers in art and technology are finding ways for viewers to experience the texture of artworks in 2D and the solidity of those in 3D.
The missing third dimension is significant even for flat works, which typically show the texture of the background paper or canvas, or of the pigment. Some nominally two-dimensional works are inherently textured, such as Canadian artist Brian Jungen’s People’s Flag (2006), an immense vertical hanging piece made of red textiles. Helen Fielding of the University of Western Ontario has perceptively noted how vision and touch intertwine in this work:
As my eyes run across the texture of the flag, I can almost feel the textures of the materials I see; my hands know the softness of wool, the smoothness of vinyl. Though touching the work is prohibited…my hands are drawn to the fabrics, subtly reversing the priority of vision over touch…
Textural features like these are a material record of the artist’s effort that enhances a viewer’s interaction with the work. Such flat but textured works are art in “2.5D” because they extend only slightly into the third dimension. Now artworks shown in 2.5D and 3D on flat screens and as solid 3D models are giving new pleasures and insights to art lovers, curators, and scholars. As exact copies, these replicas can also help conserve fragile works while raising questions about the meaning of original art.
One approach, developed at the Swiss Federal Institute of Technology (EPFL) in Lausanne, creates a digital 2.5D image of an artwork by manipulating its lighting. Near sunset, when the sun’s rays enter a scene at an angle, small surface elevations cast long shadows that make them stand out. Similarly, the EPFL process shines a simulated light source onto a digital image. As the source is moved, it produces highlights and shadows that enhance surface details to produce a quasi-3D appearance.
This approach has links to CGI, computer-generated imagery, the technology that creates imaginary scenes and characters in science fiction and fantasy films. One powerful CGI tool is an algorithm called the bidirectional scattering distribution function (BSDF). For every point in an imagined scene, the BSDF shows how incoming light traveling in any direction would be reflected or transmitted to produce the outgoing ray seen by a viewer. The result fully describes the scene for any location of the light source.
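To make the idea concrete, here is a minimal sketch (not the EPFL or film-industry code) of evaluating the simplest possible reflectance function, a perfectly diffuse Lambertian surface; all names and values are illustrative:

```python
import math

def lambertian_brdf(albedo):
    """The simplest BSDF: a perfectly diffuse surface scatters incoming
    light equally in all outgoing directions (value = albedo / pi)."""
    def f(w_in, w_out):
        return albedo / math.pi
    return f

def reflected_radiance(brdf, incoming_radiance, w_in, normal, w_out):
    """Outgoing radiance seen by a viewer for one incoming ray: the BSDF
    value times the incoming radiance times the cosine of the angle of
    incidence (zero if the light comes from below the surface)."""
    cos_theta = max(0.0, sum(a * b for a, b in zip(w_in, normal)))
    return brdf(w_in, w_out) * incoming_radiance * cos_theta

f = lambertian_brdf(albedo=0.8)
# Light arriving straight along the surface normal:
L_out = reflected_radiance(f, 1.0, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 1.0, 0.0))
```

A measured BSDF like Artmyn’s replaces the analytic function here with interpolated data captured under many real lighting directions, which is why the result can run to a terabyte.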
In films, the BSDF is obtained from optical theory and the properties of the imaginary scene. The EPFL group, however, generated it from real art. In 2014, they illuminated a pane of stained glass with light from different directions and recorded the results with a high-resolution camera, creating a BSDF and showing that the method works for nearly planar items. This approach has been commercialized by Artmyn, a Swiss company co-founded by Loïc Baboulaz, who led the EPFL team. Artmyn makes 2.5D digital images of artworks by lighting them with LEDs at different visible wavelengths to provide color fidelity, and at infrared and ultraviolet wavelengths to further probe the surface. The result is a BSDF with up to a terabyte of data.
As an illustration, Artmyn has worked with Sotheby’s auction house to digitize two Marc Chagall works: Le Printemps (1975, oil on canvas), a village scene with a couple embracing, and Dans L’Atelier (1980, tempera on board), an artist’s studio. The Artmyn software lets a viewer zoom from the full artwork down to the fine scale of the weave of the canvas, while moving the lighting to display blobs, islands and layers of pigment. This reveals how Chagall achieves his effects and clearly illustrates the difference between oils and tempera as artistic media. Currently in process for similar digitization, Baboulaz told me, are a Leonardo da Vinci painting and a drawing, in recognition of the 500th anniversary of his death this year.
Artmyn has also digitized cultural artifacts such as a Sumerian clay tablet circa 2000 BCE covered in cuneiform script; signatures and letters from important figures in the American Revolution; and a digital milestone, the original Apple-1 computer motherboard. These 2.5D images display signs of wear and of their creator’s presence that hugely enhance a viewer’s visceral appreciation of the real objects and their history.
For the next step, creating full 3D representations and physical replicas, the necessary data must be obtained without touching the original. One approach is LIDAR (light detection and ranging), where a laser beam is scanned over the object and reflected back to a sensor. The distance from the laser to each point on the object’s surface is found from the speed of light and its travel time, giving a map of the surface topography. LIDAR is most suitable for big artifacts such as a building façade at a coarse resolution of millimeters. Other approaches yield finer detail. In the triangulation method, for instance, a laser puts a dot of light on the object while a nearby camera records the dot’s location, giving data accurate to within 100 micrometers (0.1 millimeter). Copyists typically combine scanning methods to obtain the best surface replication and color rendition.
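The time-of-flight arithmetic behind LIDAR ranging is straightforward: the pulse travels to the surface and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A sketch (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_seconds):
    """One-way distance from a time-of-flight measurement: the pulse
    covers the laser-to-surface distance twice, there and back."""
    return C * round_trip_seconds / 2.0

# A pulse returning after about 6.67 nanoseconds indicates a surface
# roughly one metre away.
d = lidar_distance(6.67e-9)
```

The millimetre-scale resolution quoted above implies timing the return to within a few picoseconds, which is one reason triangulation is preferred for fine detail at close range.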
One big 3D copying effort is underway at the Smithsonian Institution, whose 19 museums preserve 155 million cultural and historic artifacts and artworks. Since 2013, the Smithsonian has put over 100 of these online as interactive 3D displays that can be viewed from different angles, and as data for 3D printers so people can make their own copies. The objects, chosen for popularity and diversity, include the original 1903 Wright Brothers flyer; a highly decorated 2nd century BCE Chinese incense burner; costume boots from the Broadway musical The Wiz from 1975; a mask of Abraham Lincoln’s face from shortly before his assassination in 1865; and for the 50th anniversary of the Apollo 11 moon landing, astronaut Neil Armstrong’s spacesuit. Recently added is a small 3D version of a full-sized dinosaur skeleton display at the National Museum of Natural History showing a T-rex attacking a triceratops, for which hundreds of bones were scanned by LIDAR and other methods.
A different goal animates the 3D art and technology studio Factum Arte in Madrid, Spain. Founded by British artist Adam Lowe in 2001, Factum Arte protects cultural artifacts by copying them, using its own high-resolution 3D scanning, printing and fabrication techniques.
Museums already use copies to preserve sensitive artworks on paper that need long recovery times in darkness and low humidity between showings. During these rests, the museum displays instead high-quality reproductions (and informs patrons that they are doing so). In a recent interview entitled “Datareality,” Adam Lowe expressed his similar belief that an artistically valid copy can provide a meaningful viewing experience while preserving a fragile original. One of his current projects is to replicate the tombs of the pharaohs Tutankhamun (King Tut) and Seti I, and queen Nefertari, in the Egyptian Valley of the Kings. The tombs were sealed by their builders, but once opened, they are deteriorating due to the throngs of visitors. As Lowe recently explained, “by going to see something that was designed to last for eternity, but never to be visited, you’re contributing to its destruction.”
The copies, approved by Egypt’s Supreme Council of Antiquities, will give visitors alternate sites to enter and view. At a resolution of 0.1 millimeter, the copies provide exact reproductions of the intricate colored images and text adorning thousands of square meters in the tombs. The first copy, King Tut’s burial chamber, was opened to the public in 2014, and in 2018, Factum Arte displayed its copied “Hall of Beauties” from the tomb of Seti I.
Earlier, Factum Arte had copied the huge Paolo Veronese oil on canvas The Wedding Feast at Cana (1563, 6.8 meters x 9.9 meters), which shows the biblical story where Jesus changes water into wine. The original was plundered from its church in Venice by Napoleon’s troops in 1797 and now hangs in the Louvre. The full-size copy, however, commissioned by the Louvre and an Italian foundation, was hung back at the original church site in 2007.
Factum Arte’s efforts highlight the questions that arise as exact physical copies of original art become available. Museums, after all, trade in authenticity. They offer viewers the chance to stand in the presence of a work that once felt the actual hands of its creator. But if the copy is indistinguishable from the work, does that dispel what the German cultural critic Walter Benjamin called the “aura” of the original? In his influential 1935 essay The Work of Art in the Age of Mechanical Reproduction, he asserted that a copy lacks this aura:
In even the most perfect reproduction, one thing is lacking: the here and now of the work of art – its unique existence in a particular place. It is this unique existence – and nothing else – that bears the mark of the history to which the work has been subject.
The Factum Arte reproductions show that “original vs copy” is more nuanced than Benjamin indicates. The Egyptian authorities will charge a higher fee to enter the original tombs and a lower one for the copies, giving visitors the chance to feel the experience without causing damage. Surely this helps preserve a “unique existence in a particular place” for the original work. And for the repatriated Wedding at Cana, Lowe tellingly points out that a copy can bring its own authenticity of history and place:
Many people started to question about whether the experience of seeing [the copy] in its correct setting, with the correct light, in dialogue with this building that it was painted for, is actually more authentic than the experience of seeing the original in the Louvre.
We are only beginning to grasp what it means to have near-perfect copies of artworks, far beyond what Walter Benjamin could have imagined. One lesson is that such a copy can enhance an original rather than diminish it, by preserving it, and by recovering or extending its meaning.
Copying art by technical means has often been an unpopular idea. Two centuries ago, the English artist William Blake, known for his unique personal vision, expressed his dislike of mechanical reproduction such as imposing a grid to copy an artwork square by square. Current technology can also often stand rightfully accused of replacing the human and the intuitive with the robotic and the soulless. But properly used, today’s high-tech replications show that technology can also enlarge the power and beauty of an innately human impulse, the need to make art.
Explaining consciousness is one of the hardest problems in science and philosophy. Recent neuroscientific discoveries suggest that a solution could be within reach – but grasping it will mean rethinking some familiar ideas. Consciousness, I argue in a new paper, may be caused by the way the brain generates loops of energetic feedback, similar to the video feedback that “blossoms” when a video camera is pointed at its own output.
I first saw video feedback in the late 1980s and was instantly entranced. Someone plugged the signal from a clunky video camera into a TV and pointed the lens at the screen, creating a grainy spiralling tunnel. Then the camera was tilted slightly and the tunnel blossomed into a pulsating organic kaleidoscope.
Video feedback is a classic example of complex dynamical behaviour. It arises from the way energy circulating in the system interacts chaotically with the electronic components of the hardware.
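A toy simulation makes this concrete. The sketch below (illustrative only, not a model of real camera hardware) repeatedly ‘re-captures’ a frame with a slight shift, a blur and a gain, the basic ingredients of the feedback loop:

```python
import numpy as np

def feedback_step(frame, gain=1.05, shift=1):
    """One pass around the loop: tilt the 'camera' (shift), soften the
    optics (a crude blur), amplify, and saturate at the screen maximum."""
    shifted = np.roll(frame, shift, axis=(0, 1))
    blurred = 0.25 * (shifted
                      + np.roll(shifted, 1, axis=0)
                      + np.roll(shifted, -1, axis=0)
                      + np.roll(shifted, 1, axis=1))
    return np.clip(gain * blurred, 0.0, 1.0)

# Seed the loop with a single bright pixel and let it circulate:
frame = np.zeros((64, 64))
frame[32, 32] = 1.0
for _ in range(50):
    frame = feedback_step(frame)
# After 50 passes the energy has spread into a pattern far richer
# than the seed that started it.
```

Small changes to the gain or the shift push the pattern into quite different regimes, which is the chaotic sensitivity described above.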
As an artist and VJ in the 1990s, I would often see this hypnotic effect in galleries and clubs. But it was a memorable if unnerving experience during an LSD-induced trip that got me thinking. I hallucinated almost identical imagery, only intensely saturated with colour. It struck me then there might be a connection between these recurring patterns and the operation of the mind.
Brains, information and energy
Fast forward 25 years and I’m a university professor still trying to understand how the mind works. Our knowledge of the relationship between the mind and brain has advanced hugely since the 1990s when a new wave of scientific research into consciousness took off. But a widely accepted scientific theory of consciousness remains elusive.
A widely held view is that consciousness arises from the brain’s processing of information. I doubt this claim for several reasons. First, there is little agreement among scientists about exactly what information is. Second, when scientists refer to information they are often actually talking about the way energetic activity is organised in physical systems. Third, brain imaging techniques such as fMRI, PET and EEG don’t detect information in the brain, but changes in energy distribution and consumption.
Brains, I argue, are not squishy digital computers – there is no information in a neuron. Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information.
Recognising that brains are primarily energy processors is the first step to understanding how they support consciousness. The next is rethinking energy itself.
What is energy?
We are all familiar with energy but few of us worry about what it is. Even physicists tend not to. They treat it as an abstract value in equations describing physical processes, and that suffices. But when Aristotle coined the term energeia he was trying to grasp the actuality of the lived world, why things in nature work in the way they do (the word “energy” is rooted in the Greek for “work”). This actualised concept of energy is different from, though related to, the abstract concept of energy used in contemporary physics.
When we study what energy actually is, it turns out to be surprisingly simple: it’s a kind of difference. Kinetic energy is a difference due to change or motion, and potential energy is a difference due to position or tension. Much of the activity and variety in nature occurs because of these energetic differences and the related actions of forces and work. I call these actualised differences because they do actual work and cause real effects in the world, as distinct from abstract differences (like that between 1 and 0) which feature in mathematics and information theory. This conception of energy as actualised difference, I think, may be key to explaining consciousness.
The human brain consumes some 20% of the body’s total energy budget, despite accounting for only 2% of its mass. The brain is expensive to run. Most of the cost is incurred by neurons firing bursts of energetic difference in unthinkably complex patterns of synchrony and diversity across convoluted neural pathways.
What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback. This causes a self-referential cascade of actualised differences to blossom with astronomical complexity, and it is this that we experience as consciousness. Video feedback, then, may be the nearest we have to visualising what conscious processing in the brain is like.
The neuroscientific evidence
The suggestion that consciousness depends on complex neural energy feedback is supported by neuroscientific evidence.
Researchers recently discovered a way to accurately index the amount of consciousness someone has. They fired magnetic pulses through the brains of healthy, anaesthetised and severely injured people, then measured the complexity of an EEG signal recording how each brain reacted. The complexity of the EEG signal predicted the person’s level of consciousness: the more complex the signal, the more conscious the person.
The researchers attributed the level of consciousness to the amount of information processing going on in each brain. But what was actually being measured in this study was the organisation of the neural energy flow (EEG measures differences of electrical energy). Therefore, the complexity of the energy flow in the brain tells us about the level of consciousness a person has.
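The sort of complexity measure used in such studies can be illustrated with a simple Lempel-Ziv-style phrase count on a binarised signal; the researchers’ actual metric is more sophisticated, so treat this only as a stand-in for the idea that irregular signals compress poorly:

```python
def lz_phrases(bits):
    """Parse a string into successive phrases, each the shortest
    substring not seen before (an LZ78-style count). The more phrases,
    the less compressible, and so the more complex, the signal."""
    seen, current, count = set(), "", 0
    for ch in bits:
        current += ch
        if current not in seen:
            seen.add(current)
            count += 1
            current = ""
    return count

lz_phrases("00000000")  # a flat, regular signal parses into 3 phrases
lz_phrases("01101000")  # a more varied signal parses into 4
```

Applied to a binarised EEG response, a count like this rises with the diversity of the brain’s reaction, which is the sense in which signal complexity can index consciousness here.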
Also relevant is evidence from studies of anaesthesia. No-one knows exactly how anaesthetic agents annihilate consciousness. But recent theories suggest that compounds including propofol interfere with the brain’s ability to sustain complex feedback loops in certain brain areas. Without these feedback loops, the functional integration between different brain regions breaks down, and with it the coherence of conscious awareness.
What this, and other neuroscientific work I cite in the paper, suggests is that consciousness depends on a complex organisation of energy flow in the brain, and in particular on what the biologist Gerald Edelman called “reentrant” signals. These are recursive feedback loops of neural activity that bind distant brain regions into a coherent functioning whole.
Explaining consciousness in scientific terms, or in any terms, is a notoriously hard problem. Some have worried it’s so hard we shouldn’t even try. But while not denying the difficulty, the task is made a bit easier, I suggest, if we begin by recognising what brains actually do.
The primary function of the brain is to manage the complex flows of energy that we rely on to thrive and survive. Instead of looking inside the brain for some undiscovered property, or “magic sauce”, to explain our mental life, we may need to look afresh at what we already know is there.
Paul Broks: I was putting the finishing touches to my first book, Into the Silent Land: Travels in Neuropsychology, when my wife, Sonja, was diagnosed with breast cancer. I put off submitting the manuscript in order to support her through the difficult first stages of treatment but I also used the time to write an additional chapter dealing with our response to her diagnosis. In that final chapter I recount the true story of a conversation we’d had over dinner in which I’d presented the following scenario (borrowed from a Milan Kundera novel). A kindly visitor from an advanced alien civilisation brings the good news that death is not the end and that she will be moving on to another life. But there’s a choice to be made. Does she want to commit to having me with her in that future life, or would she rather go it alone? She said she’d go it alone. One lifetime was enough, however much you loved someone. So I’d better make the most of it. We had another eight years together, more than twice as long as expected from the original, rather grim, prognosis.
Into the Silent Land ends with Sonja’s cancer diagnosis and The Darker the Night begins with her death. That Stoic injunction – Just the one life; better make the most of it – finds an echo in the first few pages. Close to the end she said to me, You don’t know how precious life is. You think you do, but you don’t, and those words, effectively an encapsulation of the Stoic message of Marcus Aurelius, resonate through the pages of the book. Both Into the Silent Land and The Darker the Night contain neurological case stories and autobiographical strands. They both make excursions into philosophy and fiction. But The Darker the Night is more layered and has a more discernible narrative arc. There is a beginning, a middle, and an end, even if the narrative thread that leads you through the journey at times dissolves into fictional digressions and retellings of stories from Greek mythology. I know I’m asking a lot from readers. And it’s a bookseller’s nightmare with its mix of genres. It ends with me entering a new relationship and an encounter with a self-proclaimed archangel at the top of Glastonbury Tor who presents me with a difficult dilemma. It’s a happy ending, I think, or at least a wistful one.
The idea of interspersing images with text occurred to me quite late on. I aim to counterbalance darkness and viscerality in my writing with humour and sheer wonder at the mystery of the world. There’s a similar chemistry in some of your work, I think, and you were the first person I considered approaching.
Garry Kennard: I had read Into the Silent Land some years before I founded the Art and Mind festivals and had been extremely impressed. I admired the way you blended the science, philosophy and a personal feel for the world via marvellous writing into a satisfying and moving work of art. It was exactly this kind of melding that I was looking for in the festivals. As director I was in the fortunate position of being able to invite anyone I liked to take part and you were the obvious first choice. Apart from a few more appearances at my events, that was the extent of our acquaintance.
That was that until I received your email asking if I would be interested in illustrating your new book. This was an amazing surprise – and a flattering offer. I wasn’t even aware that you had seen any of my work.
I had made a rule for myself, born of long experience, that I would always refuse commissions. I had realised that when I did, it was guaranteed I would produce awful work. The idea of someone looking over my shoulder, waiting expectantly for the masterwork, made me falter and stutter.
So I demurred to begin with. But after you had sent me some chapters to read, it began to dawn on me that I could do something. I was still very doubtful about it but the feeling grew that this would offer me the opportunity of trying something new, although I had to get over the problem of too close a collaboration spoiling the thing. I remember saying that I would have a go at this, but it would be under certain conditions. One was that I would not discuss the images with you. I would not send you sketches for you to comment on. I would not directly illustrate the book. I would read the text closely and then let my hand and brain semi-improvise the images and see what emerged. I would send you the finished pictures. If you didn’t like them or didn’t think them appropriate you could ditch them. I would continue to produce images to replace those you didn’t like. When you agreed to this, I started.
I had no idea of what you expected. You had said nothing about what you hoped for, no rules that you wanted me to adhere to. That left me an opening to start work without constraints – the only way I could do anything. I managed five or six pictures, sent them to you and waited for your reaction.
PB: I had no more than vague intuitions as to what to expect but didn’t at any stage envisage the pictures as mere ‘illustration’. I was more interested in seeing what might come of a more loosely imaginative – subconscious, even – reaction to the text. That wouldn’t work if you’d felt in any way obliged to produce images to order. Actually, I think there’s some similarity in the way I produced the text and the way you produced the pictures. Although my writing sometimes involves quite hard deliberation (over how to frame a philosophical argument, say) I think it works best when I don’t think too much and just leave it to my subconscious ‘brownies’ to do the creative legwork. I’m usually just noodling around with a notebook and pen and suddenly a phrase or image presents itself and sets off a train of thought and I think, where the hell did that come from? Nothing to do with me! I sense something similar was happening with your semi-improvised pictures. Parts of the book are, in fact, explicitly concerned with the unconscious, semi-autonomous machineries of ‘imaginal reality’, and it also chimes with the non-linear, “knight’s move” progression of the essays and stories, so it’s all quite apt.
We had a bit of to-ing and fro-ing over a couple of the images, I recall, and you made your own unprompted revisions here and there, but, for the most part, I was pretty much blown away by the “first takes”. By the way, I happened to be in the thick of a children’s birthday party when those first dark, still, mysterious images came through on my phone. It was exquisitely incongruous! I knew straight away they were going to work.
GK: In the past most of my paintings and drawings have been carefully worked out. They were designed to discombobulate the viewer by presenting two different modes of perception on the same plane – one highly realistic, the other of suggestive abstractions. Now, with the opportunity to free myself from this and to semi-improvise the images, I could explore a new way of reaching the same destination: of waiting for the image itself to disarm me as it grew.
I had a small note book by my side all the time and I would scribble in it at the most peculiar times – while watching television or doing a crossword. The scribbles, and they grew into hundreds, gave me a feel for what might be done. I didn’t refer to these when drawing the final pictures but some of those came out very similar to the initial sketches.
The deeper I got into this the more astonished I became at what was emerging. Themes kept re-appearing – a black sun, clocks, doors opening. I felt these held the sequence together. But on a deeper level what was appearing before me became an exploration of my own psyche, almost as if I was creating my own Rorschach tests with the added device of being able to develop and strengthen those emerging images to which I found myself responding. This is obviously not a new way of doing things. Artists have always produced work like this in some way. But it was new to me and a revelation. You can see from this how my ‘method’ was very close to your own in composing the text.
Of course, the other thing which preoccupied me was your text, which I read over and over, hoping that something from it would infiltrate the pictures without me actually illustrating the words. I think that happened to a greater or lesser extent in the series.
PB: Sorry to say a couple of my favourite pictures didn’t make it onto the pages of the book owing to the publisher’s squeamishness. One is the image of the Greek god Pan standing in a doorway, proudly erect as he often is in classical depictions. This was ruled out on grounds of being ‘pornographic’. The other was the image of the gipsy woman, again a very traditional one, which was rejected for reasons of political correctness. I found this hard to accept, and protested but, regrettably, didn’t have the final say.
GK: I was quite devastated when the publisher rejected a number of the images, some of which I had been very pleased with. It made no sense to me. With some of the pictures missing the connecting themes, apparent in the whole set, disappeared. I know you tried your best to reinstate some drawings (and succeeded with a couple of them). I realise you were as disappointed as me. But – it was done and that was that.
The Complete Drawings
1. Stairway to sunlit room: “Push, and the door will open into a sunlit room, forever sunlit, regardless of the depth of the night.”
2. Trees through the windows: “Doors opened into unexpected rooms. Through this window, a crisp winter morning, though that, a summer afternoon.”
3. Boy at night: “Sleep won’t come. Thoughts are running like rats through his head and a shadow on the far wall of the bedroom unsettles him.”
4. Man with dark moon: “For a minute or two I had the sense that she was still alive. I could catch up with her and we would carry on as normal.”
5. Time Traveller at the station: “‘Mike the Time Traveller?’ No. He was just a miserable dipso on his way home from the miserable office, having a drink or nine to gird his miserable loins for miserable home.”
6. Tabletops: “The tabletops are identical in size and shape, yet the one on the left appears elongated. There’s a mismatch between mental and physical reality.”
7. Pan at the door: “There’s a knock at the door and there he is with his hooves, his horns, his fur and, slightly worryingly, his large, erect penis.”
8. CS Lewis: “Jack has a morbid dread of insects. ‘To my eye,’ he says, ‘they are either machines that have come to life or life degenerating into mechanism.’”
9. Pat Martino: “With music as the golden thread, he began to weave a new version of himself. He was a genius twice over, but this time with a piece of his brain missing.”
10. The White Bull of Minos: “Listen, the bull said to himself, nonverbally, I may be a beast of the field but I’m no mug. I’m doing this of my own conscious volition.”
11. Zombies: “Now Lewys was telling me that zombies were real, not merely conceivable. They walk down every street.”
12. Multiplicity: There are infinite histories to choose from with infinite variations on the theme of you and your life.
13. Tyger, tyger: “Have moonlight if you want. There’s no sign of a tiger. Why would there be in an English forest?”
14. Spiral head figure: “The labyrinth is a primordial image of the psyche. It symbolizes the winding, snakelike path to psychological wholeness and authenticity.”
15. Into the Labyrinth: I’ll give you a clew, he told her, but she was in no mood for games and just wanted an answer. No, this sort of clew, he said, producing a ball of thread.
16. Time and the woman: “Stabs of absence; stabs to the brain and heart; an entering of the flesh, knowing in the flesh that she’s not here anymore.”
17. Gipsy at the door: “So you’re in a good place, then. The gipsy told me. You’re thinking of me, she said, and you want me to find another good woman.”
18. The drunk on the bench: “Isaac Newton, he told me, was a genius but died a virgin. He was a sad fucker. I was taken aback because I’d just then been thinking about celestial mechanics.”
19. Hierarchy: ‘One day I’ll be dead’. It’s an oddly exhilarating thought. Something unimaginable – eternal nothingness – awaits us all. It sharpened my senses. Let’s not forget we’re alive.
20. Perseus and the Dead Girl: The image of the dead girl surprised me. She bobbed in a flowing white garment, like an infant Ophelia. The sea itself was subdued. Small waves broke indifferently.
21. Sisyphus: “The toil of Sisyphus represents the human condition, ‘…his whole being exerted towards accomplishing nothing.’ ”
22. Incubus: The firewall between fantasy and reality collapses and all the monstrous archetypes break free: witches and goblins, demons and other strange creatures. They have the shine of sentience in their eyes.
23. Universe and beer: “All moments, all times, are equally real, equally present, including all the moments of your life, which are, from beginning to end, ‘in place’.”
24. Carpet flower: “Whenever I recall the carpet flower I have a sense of seeing, of being, for the very first time.”
25. Glastonbury Tor: “So you can, if you want, totally erase the life you’ve had. It will never have existed. Up to you, he said. I made my choice.”