
Language: The Non-Trivial Machine

Abstract

Language is conventionally understood as the interface between us (humans) and the ‘out there’. This article proposes that there is an urgent need to write a philosophy of language from a perspective which can account for the new ontologies of language being promoted by its increasingly non-human, digital, disembodied applications and ‘realities’. The work starts with a question: what is language when it is no longer made by humans, but by a machine? Employing Heinz von Foerster’s distinction between ‘Non-Trivial’ and ‘Trivial’ Machines, which distinguishes machinic processes involving agency from those which do not, this practice- and theory-based research explores that question.

Introduction

Most philosophies of language still take as a given that language is a human-made artefact (speech/writing), albeit at different levels of ‘proximity’ to the human subject: speech being closer than writing, and writing being closer than printing or typing. By this argument, speech is more closely related to the human than typing, and in turn permits more human agency. With the typewriter’s tendency towards automation and standardisation in mind, Nietzsche, who turned towards using a Malling Hansen Writing Ball (invented in 1867), has been described by Friedrich Kittler as the “first mechanized philosopher”.[1] Nietzsche noted, while using the Malling Hansen: “The writing ball is a thing like me: made of iron yet easily twisted on our journeys”,[2] and observed that “Our writing tools are also working on our thoughts”.[3] Beyerlen later comments on the act of typing: “[A]fter one briefly presses down on a key, the typewriter creates in the proper position on the paper a complete letter, which is not only untouched by the writer’s hand but also located in a place entirely apart from where the hands work.”[4] This is a more general tendency of technology: to distance the subject from its object via various degrees of technological mediation (fig. 1). And yet, despite its mechanized character, we cannot say that the writings of Nietzsche possess less agency in their typed form, only that the relationship between his thought and its transcription is rendered slightly more distant due to the new technology of the typewriter.

 

 

Figure 1. Instruction manual for the Hermes 3000 typewriter.

Despite these distinctions, which in this case take the form of noting the distance between speech and writing, or writing and typing, human-made language is largely taken to be analogue, material, and definitional within philosophy. Language is made by rational human agents, not machines. It is definitional to the degree that humans possess language, whereas animals do not, and this is taken as one of the defining characteristics of the ‘human’. Therefore, when philosophers speak of language, and its human-made form, it is with a full sense of language’s significance, culturally, intellectually, and historically. To reinforce this point, in the film ‘Threads’ (1984)[5], the bleakest description of the effects of nuclear war and its aftermath concludes that in the imagined post-apocalyptic future, language breaks down to such an extent that the threads which tie human to human and constitute primary social and ethical contracts are broken. The linguistic threads are also unravelled, and (after some time) language is reduced to single-word, brute-force descriptions of fact, directed entirely towards survival. The story provides a stark reminder of the significance of language within a culture, and its role in uplifting humans from a state of mere survival to the formation of social, political and legal bonds, along with literature, creativity and (most importantly within the narrative of the film) the capacity for human empathy.

This concern over the changing conditions of language is not new. Writing itself is a technology, which took what the Romans called ‘Verba Volant’ (the spoken word flies), and relegated it to a mere ‘Scripta Manent’ (the written word as something dead, lifeless). Plato feared that writing would alter our relationship to the human act of memory.[6] However, I wish to suggest here that such questions about language and agency, and language and the social, while not new, are being dramatically amplified by the new technological contexts within which language exists, and that the emergence of non-human languages (which mimic human language or, more precisely, non-human agents whose material substrate for producing such language is code) requires us to radically rethink the philosophical assumption that language is a human-made phenomenon, and moreover to consider why that matters (fig. 2).

Figure 2. 000111111, by Aude Rouaux, 2016 (video: 15.42 minutes). Words are vocally performed, in binary code.

Language is rapidly changing, and migrating to machine-driven forms, which are increasingly detached from human modes of articulation and yet which possess great power to shape human actions and affect human identity as articulated through language. Artificial intelligence, artificial languages, speech/text recognition systems and other forms of mechanization are changing the ways in which language relates to the human, and therefore, arguably, changing what it means to be human. In order to consider these matters, I will here apply Heinz von Foerster’s distinction between Trivial and Non-Trivial Machines to language, revisit Deleuze’s notion of ‘The Event’ (especially as it pertains to language), and consider Heidegger’s reformulation of one of the classical laws of thought, known as The Principle of Identity. Here he poses identity not as a matter of direct equivalence between two things (A=A), but as a relation between them, located in the ‘is’, not what lies on either side (A is A). This will be relevant to the mimetic qualities of non-human languages and the possibilities of seeing what lies between human and non-human language.

To conclude, I will suggest that we might think of the relationship between human and non-human language as less a question of seeking equivalence between human and non-human language (currently based on mimicry) and more one of seeking a new relation (somewhere in between the two). I will briefly outline some practice-based experiments, which aim to explore this space, and which are in progress. The collaborative publishing and language research project ‘one’ (provisionally titled: ontological non-human editions), will seek to evaluate the potential for this in-between space of human/non-human languages and to break the dichotomy between the two.

The Trivial Machine vs. The Non-Trivial Machine
Within the context of mid-20th century writing on cybernetics, Heinz von Foerster proposed the notion of the ‘Non-Trivial Machine’[7], referring to it as possessing the “well-defined properties of an abstract entity”, and in so doing posed a machine as not necessarily something with ‘wheels and cogs’. Instead, a machine is “how a certain state is transformed into a different state”.[8] Alan Turing previously described a machine as a set of rules and laws.[9] By these definitions a machine could be something immaterial, every bit as much as something physical, opening the way for code-as-machine. The important aspect of a ‘Non-Trivial Machine’, for von Foerster, is that its “input-output relationship is not invariant, but is determined by the machine’s previous output.”[10] In other words, its previous steps determine its present reactions, and so it is reactive, variant, and dynamic. In contrast, a ‘Trivial Machine’ would be one in which the input creates an invariant output. This kind of machine is inherently stable, and produces no fluctuations or errors: it’s predictable. As such, by definition, a ‘Non-Trivial Machine’ would be one in which the output cannot be predicted from its input, constituting a machine which has agency and autonomy. We might call these attributes ‘intelligence’, creativity, and the human propensity for unpredictability. Based on von Foerster’s distinction, it seems clear that we are presently still operating in the realm of the ‘Trivial Machine’ with respect to non-human languages, including those produced by automated voice assistants or IBM’s flagship debating technology ‘Project Debater’, since their operations lack agency and true linguistic contingency: they only mimic such effects, if ever more convincingly.
Even ‘Debater’, which claims to engage in true discussion/argument with a human interlocutor, uses the power of its almost unlimited access to databases of information, thereafter constructing its arguments using a (vocal) linguistic interface which has been ‘trained’ in the art of classical rhetoric and persuasive debating techniques.[11]

Figure 3. Heinz von Foerster’s own drawings of his trivial (left) and non-trivial machines (right). On the left, the input-output relation is invariant. On the right, the input-output relation is variant and therefore unpredictable, since it’s non-linear. The internal logic changes with every operation. In other words, in the trivial machine scenario, you won’t get peppermint or condoms if you put a coin in a chewing gum machine, but you might in a non-trivial machine (von Foerster and Poerkson, 1999, p.57).
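Von Foerster's abstract definition lends itself to a minimal sketch in code (a hypothetical illustration of the distinction, not von Foerster's own formalism): a trivial machine is a pure function of its input, while a non-trivial machine carries an internal state which every operation rewrites, so identical inputs need not yield identical outputs.

```python
def trivial_machine(x):
    """Trivial: output depends only on input; fully predictable."""
    return x * 2

class NonTrivialMachine:
    """Non-trivial: an internal state, altered by each operation,
    mediates between input and output, so the same input can
    produce different outputs over time."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        out = x + self.state    # output depends on input AND state
        self.state = out % 7    # the previous output reshapes the machine
        return out

# The trivial machine is invariant: the same input always gives 6.
assert trivial_machine(3) == 6
assert trivial_machine(3) == 6

# The non-trivial machine gives different outputs for the same input.
m = NonTrivialMachine()
print(m.step(3), m.step(3))  # 3 6 : same input, different output
```

The point of the sketch is only that "previous output determines present reaction": once the machine's internal logic is a function of its own history, its output can no longer be predicted from its input alone (without inspecting the hidden state).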

We might therefore make an initial observation: human beings are (borrowing this definition) ‘Non-Trivial Machines’ by definition, because the input humans receive does not (always) result in a predetermined output. Their (immaterial) thought processes could be seen to correspond to von Foerster’s notion of an abstract entity with well-defined properties. Absent of wheels and cogs, these processes are nonetheless real. Returning to our present subject, language: such processes are materialised through the interface of language, and these abstract cognitive processes are evidenced in sounds and marks.

It follows that humans are unpredictable: they interpret, subvert, alter and take ownership of language at the point of input, creating new forms, and bringing their subjectivity (including their identity/agency, along with the materiality and ‘event’ of language in time and space) into play. However, as a caveat, what they produce is at the same time based on their previous interactions with language, their understanding of the rules, and those linguistic elements with which they are familiar (everyone shares and utilizes the same letterforms within a specific language). This is a paradox: language is a site of intense non-trivial production (non-predictable input=output), but at the same time it works with pre-existing elements (predictable input), and to that extent it could (arguably) be called ‘trivial’ (input = output, predictable within those given limits). This is because, for example, we don’t suddenly create new symbols within the existing set of 26 letters in the Roman alphabet, but accept that restriction of the linguistic/symbolic ‘machine’. We don’t normally rewrite the grammar and syntax, unless we are experimenting with form. Nonetheless, what we do with this input, despite its pre-given nature, is intrinsically unpredictable. As humans we generate the new, from the given.
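The paradox of a fixed symbolic input yielding effectively unpredictable output can be made concrete with a back-of-the-envelope calculation (a hypothetical illustration, not drawn from the sources discussed): the ‘given’ alphabet is small and shared, yet the space of sequences it permits is so vast that no output can be anticipated from the inventory of inputs alone.

```python
# The 'given': a fixed, shared alphabet of 26 Roman letters.
ALPHABET_SIZE = 26

def possible_sequences(n):
    """Number of distinct letter-sequences of length n drawn from
    the fixed alphabet: exponential growth from a finite input."""
    return ALPHABET_SIZE ** n

# Even a ten-letter string is drawn from a space of ~1.4e14 possibilities.
print(possible_sequences(10))  # 141167095653376
```

The restriction to pre-given elements (the ‘trivial’ side of the paradox) thus places no meaningful limit on novelty: the combinatorial space is the formal counterpart of generating the new from the given.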

These are not trivial questions. As we embark upon the full employment of artificial language[s] as the interface between ourselves and machines (Siri, Amazon Echo, Watson, chatbots, to name just a few), we see that the trivial (input = output) model of AI is potentially moving closer to a non-trivial form of language. When Amazon Alexa starts creating poetry, connected to an autonomous thought process, we will be in the presence of the linguistic singularity, and we will know this because of the forms of language being used, and the ways in which the input/output conditions are changed. Alan Turing famously used written language as the basis of his test for the presence of machine intelligence: the Turing Test. However, the use of language on the part of the non-human writer within the test was ‘trivial’, for the purposes of this definition.

To summarise, the distinction offered by Heinz von Foerster states that in a ‘trivial’ machine, input and output can be predicted (reliable/mechanical). In ‘non-trivial’ machines the output is unpredictable and involves risk (unreliable/creative). However, I want to propose that, however distant, these distinctions are now under threat from the potential for an autonomous machine (AI) to exist on the non-trivial output side of the equation. The ‘trivial machine’ is fast becoming closer to being ‘non-trivial’, and this requires us to critique and reassess what language is and what we value in it. This requires a method of critiquing such language, leading to a further distinction posed by von Foerster: that between allo-observation and auto-observation. The method we apply to critiquing language relies on this distinction, which I will briefly summarize in the section to follow.

Allo/Auto-Observation of Language 

As noted, language as the interface between ourselves and robots or ‘intelligent assistants’ such as Amazon Echo (or other forms of artificial voice assistants) is still relatively trivial. We don’t expect Watson or Siri to produce utterances or fragments of writing which are autonomous (not input=output). Language produced by human beings, on the other hand, is radically non-trivial. I cannot anticipate, with any degree of reliability, what you will say next. Literature and poetry are unequivocally non-trivial, tethered to the human subject with its essential autonomy, but the trivial machine will only demonstrate intelligence when it starts speaking and writing to us, or to other machines, non-trivially. This moves beyond the limits of Turing’s test, which identified the presence of intelligence via a language-based demonstration. Von Foerster offers us a useful method of working through some of the complexities of this terrain, with another distinction, one which has been employed extensively by creative practitioners working with language, whether implicitly or explicitly.

“[V]on Foerster suggests that the non-trivial machine should change itself as a result of auto-observation: currently it does so as a result of allo-observation”.[12]

Allo means different/other: a form of observation which comes from outside the subject under scrutiny, in the present example, language. Auto-observation would imply that the observation comes from within the subject: in this case, using language to examine language. The creative properties of material language would be used as a form/medium of investigation, and would not give way to the hierarchy of imposing an explanatory meta language (a language which explains another language). Any use of a meta language poses problems, because it’s difficult to critique language from ‘a view from nowhere’ and claim any degree of objectivity. Meta languages (arguably) fall foul of a contradiction: while claiming to stand outside the subject, any meta language has language as both its subject and its medium.

In contrast, auto-observation would imply that the observation comes from within the subject (immanently), using language itself to examine language (without calling upon anything outside that language to do the explaining). In this method, the paradox of language being both subject and medium is embraced by exploring language from within language. The creative properties of material language are thus posed as a form, method, and means of investigation. The proposed creative works described at the end of this paper do not revert to a meta language which would ‘explain’ such language. The work will proceed based on ‘auto-observational’ interventions, not ‘allo-observational’ techniques. In this way (as von Foerster suggests), the non-trivial machine should undergo change. Rather than describing a static state of affairs in language, using language to do so (again, in a meta language), the work will take a form of language (human/non-human or in-between) and immersively interrogate its primary conditions, from within that language as a creative medium.

In the next section I would also like to pose a further method, one of looking ‘sideways’ at language and of relieving it of its representational function, challenging the basic assumption that identity forms the ground of language, and that meaning, and therefore ‘truth’, can be established on the basis of what it refers to (put simply: what it represents/points to, beyond itself). In place of mimesis (taken here to mean representation), thinkers such as Deleuze and Guattari, and also Derrida, suggest that an alternative ‘logic of representation’[13] is possible, one where an ‘a-signifying, a-syntactic material’[14] forms the ground for a discretely different grammar. This in turn brings forth other forms of understanding, or “an essentially heterogeneous reality”.[15] Deleuze and Guattari explain how “A method of the Rhizome type [on the contrary], can analyse language only by decentering it onto other dimensions and other registers”, suggesting that language can only be scrutinized sideways, tangentially, without looking directly at the object itself. This may seem contradictory to the notion of an auto-observational method of interrogating language from within language. However, these might also be seen as complementary methods, since each asks us to relieve language of its straightforward representational function and look at it afresh. There are two further considerations of the trivial/non-trivial machine analogy which I would like to introduce with respect to non-human/human language: identity and the event. For this I will attempt to simplify some fairly complex philosophical remarks by both Heidegger and Deleuze.

Language and Identity/The Event 

The Principle of Identity is also known as the Law of Identity. In its simplest form it states that A=A. This can be seen in the following examples:

“A rose is a rose is a rose is a rose”— Gertrude Stein, from the poem ‘Sacred Emily’, 1913.

“The number 1 is self-identical” — Gottlob Frege, The Foundations of Arithmetic, 1884.

The first primitive truth of reason is stated as a self-referring form of identity: “Everything is what it is”— Gottfried Leibniz, Nouv. Ess. IV, 2, § i.

Challenging this classic law of thought known as The Principle of Identity, which takes as a given that A=A, Heidegger, in his lectures from 1957, wants to rethink the principle of identity as one of relation (with the emphasis on the relation), rather than one in which the terms being related take precedence. A=A therefore becomes A is A, where the ‘is’ takes precedence over the identities of the individual A’s. This represents a move away from metaphysics, which always casts the same as a self-unity. Heidegger states that, in its place, “The event of appropriation… should now serve as the key term in the service of thinking.”[16]

The ‘event of appropriation’ is a singularity [an event] which delivers beings over into Being. Whereas metaphysics asserts that identity presupposes Being (Being is subservient to identity: identity is its ground[ing]), in the event as posed here, identity is recast as a relation of belonging together, with the emphasis on the belonging rather than on the terms being related. Perdurance is the term Heidegger uses for the simultaneous withholding and closure of the space between the terms; one which is forever in a state of oscillation between them.

In Heidegger’s conception of the event of appropriation, language itself provides the tools for this type of thought, since through its “self-suspended structure”, language holds everything in a fragile, delicate, susceptible framework, one which is infinitely collapsible at any point. The event of appropriation is thus to be found, and is founded, in language; in that “self-vibrating realm” where we dwell. Heidegger states it in this way: “The doctrine of Metaphysics represents identity as a fundamental characteristic of Being.”[17] To ‘Be’ is to be identified. He wants to challenge this.

In Heidegger’s new formulation, the essential quality of identity is to be found within the event of appropriation (in the “self-vibrating realm”). Where Metaphysics presupposes that Being is the ground of beings, forming their identity and giving them their characteristics, the ‘spring’ away from identity as posed by the concrete relation A=A constitutes a leap into the relative ‘abyss’ of the event of appropriation, where stable identity gives way to a less familiar way of thinking and being. However, this abyss is not a place of loss or confusion, but the space of a more originary relation of identity, one which retains difference, and where the vibration, or oscillation, between beings and Being is retained; Heidegger thinks this is the place of true Being. Thinking is also transformed by this movement, and the “essential origin of identity” is retained through that which joins and separates them simultaneously (he calls this simultaneous process of opening and closing perdurance).

As Nietzsche also reminds us, this is a game of speed and intensity; one which denies a stable/causal ground for meaning, and we can apply this observation directly to language as well as his subject of logic: “Causality eludes us; to suppose a direct causal link between thoughts, as logic does–that is the consequence of the crudest and clumsiest observation. Between two thoughts, all kinds of affects play their game: but their motions are too fast, therefore we fail to recognize them, we deny them.”[18] Much of what happens in logic (and by extension, other forms of language) takes place, Nietzsche claims, beyond the radar screen, since the non-metaphysical, affective attributes of language, including speed and intensity, are denied. To claim that causality is a simple relation (as logic does), is too simplistic a position. Not everything can be (nor should be) stated unambiguously, and thought should strain against its own limits, in search of conceptual integrity.[19] It’s this space of encounter with the non-causal affects of language which intrigues and informs the present work, since it requires us to think of language (human and non-human) as something which operates on a far more complex ‘plane’ than that of straightforward representation. If we are to move beyond mimesis and mere mimicry of human language, then we need to move beyond simple notions of identity and recognize the complexity of language.

The speed and intensity at which such effects operate within language reminds us of Walter Benjamin’s claim that thought necessarily involves the discontinuous presentation of ‘fragments of thoughts’[20], set in an interruptive relationship of infinite detours. Coherence is to be found in the ‘flashes’ and gaps between perceptible knowledge, not in the coherent sequencing of ideas, or in the relatively uncomplicated collision of ideas and their presentation. Dissolution and dissonance, rather than denotation; polyphony, rather than homophony; elision, rather than elucidation, bring meaning [truth] into view. Ideas precede presentation, but are only to be sought in the interstices, the oblique, the constellatory. Benjamin explains the constellation as the place where: “[I]deas are not represented in themselves, but solely and exclusively in an arrangement of concrete elements in the concept: as the configuration of these elements… Ideas are to objects as constellations are to stars.”[21]

Finally, Goethe, in his Scientific Studies, points to a second and fundamental difficulty with correspondence theories of truth grounded in identity: “How difficult it is… to refrain from replacing the thing with its sign, to keep the object alive before us instead of killing it with the word.”[22] I will now briefly turn to some comments on the notion of the ‘event’ in language.

The Event

Michel Foucault, in Theatrum Philosophicum, shows how Gilles Deleuze rejects, for thinking, the model of the circle, with its promise of closure, centre and certainty, in favour of ‘fibrils and bifurcations’, which open out onto extended and unanticipated series, and which defy principles of organization. In Foucault’s own words:

‘As Deleuze has said to me, however… there is no heart, there is no centering, only decenterings, series, from one to another, with the limp of a presence and an absence of an excess, of a deficiency’[23]

Similarly, Nietzsche directly confronts the concept of a ‘ground’[24] upon which to base a philosophy, offering instead a deconstruction, or critique, of the tradition. Thinking against ‘the reason and fetish of the totality’[25], he seeks to dismantle the ‘universal’ account, replacing it with a series of fragmentary, unstable perspectives on truth, knowledge, and subjectivity: “For Nietzsche, the world consists of an absolute parallax, infinite points of view determined and defined by, and within, a fragmented poetic fabrication”[26]. In other words: shifting objects and observers, coupled with shifting positions, produce shifting meaning, and it is through the fragmentary, aphoristic style of his writing that Nietzsche articulates this unstable plurality. The correspondence between Nietzsche’s and Deleuze’s approaches is clear. Similarly, as previously seen, Walter Benjamin proposes that “meaning hangs loosely, as departure, tangentially, like a royal robe with ample folds”[27] and that in language: “Fragments of a vessel which are to be glued together… need not be like one another… as fragments of a greater language.”[28] Each views language itself as a productive site of philosophical critique, and questions its ability to provide singular, unambiguous and final meaning (based on a stable identity).[29]

For Deleuze, there is ‘something else’ operating in language, but this ‘something’ (the event), is not describable by simple observation; it is not able to be represented, but nonetheless makes expression possible. In The Logic of Sense Deleuze attempts to show how the ‘event’ ‘haunts’ language. The ‘event’, which is synonymous with the unspoken, and incorporeal; the unrepresentable, nonetheless makes language possible, subsisting in language as its primary means of expression; partaking in the moment of expression and being both indistinguishable from it, and entirely different from it, at the same time:

“The expression, which differs in nature from the representation, acts no less as that which is enveloped (or not) inside the representation… Representation must encompass an expression which it does not represent, but without which it would not be ‘comprehensive’, and would have truth only by chance or from outside.”[30]

Representation is problematic for Deleuze, since it is extrinsic by nature, operating on the basis of resemblance, or mimesis; exclusively externalized (fixed, static, immobilised and invariant). However, the ‘something’ (the event) which consistently escapes this manner of representation is a matter internal to the expression (enveloped, or subsisting within it), providing its fully ‘comprehensive’ character while remaining enigmatically inexpressible. Representation on this account is always abstract and empty, incomplete and unfulfilled.

As with those non-human languages which seek to mimic human forms, such representations are always empty of the fullness supplied by the ‘event’, since, according to Deleuze, without the event, representation would remain ‘lifeless and senseless’. In short: for Deleuze, the ‘extra-representative’ exceeds the functional, while the tension between the representable and the non-representable is that which makes possible the fullest form of representation:

“Representation envelops the event in another nature, it envelops it at its borders, it stretches until this point, and it brings about this lining or hem. This is the operation which defines living usage, to the extent that representation, when it does not reach this point, remains only a dead letter confronting that which it represents, and stupid in its representiveness”.[31]

Differently stated, Deleuze proposes an expression which is both internal and invisible to language, but nonetheless intrinsic and crucial to meaning; something unrepresentable but irreducible and essential. This refers us back to von Foerster’s notion of the Non-Trivial in language. Whilst language continues to be merely replicated in non-human systems, it is this ‘event’ which is missing, and which denies the fullness of language. It’s found in the poem, the performance of language, and the relationship between the agency of the thinking human subject and the language being performed. It’s also found in the tension between linguistic utterances (speech/writing) made by human beings and those made by machines, which (at the time of writing) lack agency and comprehension: we might think of this as the ‘eventness’ of language as produced by human beings, one which requires Heidegger’s ‘oscillation’ in place of static identity. Thought in this way, human language retains instability and event-ness, or presence, in ways which non-human languages lack. Reliant on mimicry and without agency, non-human languages nonetheless remind us of what is missing in them, and of what language is for human beings: irreducible evidence of the human propensity for creation, the non-identical and the unpredictable.

Deleuze criticizes the structuralist proposal of language as a system of signifiers, one which presupposes a referent (the world) upon which meaning is imposed or found. His difficulty is that in structuralism, language is seen as transcendent: it stands to re-present the world as given to us, through a system of signs (which transcend that reality: a meta language, as discussed previously). It constructs and represents some ‘outside’ world, while being independent from that world. Deleuze wants to suggest instead that signs run throughout life, in forms such as genetic codes, biological processes, and computer actions; that there are only events, not stable meanings: “it is the myth of representation that separates man from an inert and passive world that he then brings to language”.[32] Instead, he wants to say that:

‘Words are genuine intensities within certain aesthetic systems’. Once communication between diverse, or heterogeneous, series is established, all kinds of consequences follow. “Something ‘passes’ between the borders, events explode, phenomena flash, like thunder and lightning. Spatio-temporal dynamisms fill the system, expressing simultaneously the resonance of the coupled series and the amplitude of the forced movement which exceeds them”.[33]

Deleuze objects to the structuralist programme on the basis that before signs are extensive, or representational, they are first of all intensive. This is amplified and demonstrated by the various ways in which artists, designers, and poets have explored and exploited the intensive nature of language, and in doing so have pointed to an alternative ‘truth’ of language: one which embraces paradox, diversity, and a-logicality. For Deleuze, this affective, intensive dimension of language is its primary ‘event’; rhythmic, creative, infinitely productive, or non-trivial. Instead of doubling a pre-given world, language produces it. Nonsense literature, such as Lewis Carroll’s ‘Snark’, in his poem The Hunting of the Snark, and ‘Jabberwocky’ from Through the Looking Glass, in which no referents exist, shows how language still has a sense, despite its lack of concrete referent, and reveals how language is ‘active creation’ rather than ‘reactive representation’.[34] Moreover, sense is not reducible to the singular meanings of a language; it is what allows a language to be meaningful. It is not attachable to each instance of language, but is a method of thinking about or approaching things, in which we see language’s power to transform itself via the proliferation of meanings, and intensive affects (events).

Representational painting or literature points beyond itself to an external world (secondarity); it is essentially ‘about’ something other than itself. It is referential. Conceptual art, including art about the act of making art, or about the surface of the work, is self-referring. In the same respect, non-representational language directs attention towards itself; towards language’s sensory, affective qualities, and this ‘concrete visual order of signifiers’ takes precedence in any semiotic account of it. Drawing attention to language as an event, as an image of itself, from within itself, based on auto-observation, reveals a phenomenon with its own characteristics and immanent qualities. We learn something about language’s limits and possibilities; its inherent instability, as well as its productivity, when contemplating its non-referential character as a pure event. This is the auto-observation of the ‘Non-Trivial Machine’, as posed by von Foerster.

Conclusion

Philosophy cannot as yet account for the emergence of non-human language, since it commences from the assumption that language is produced by human beings. This paper has offered insights into the theoretical ground for a new body of creative work, which emerges from a close investigation of the ‘new’ conditions of a language which is rapidly migrating to machines. Advances in AI technologies use increasingly sophisticated replications of human language as their interfaces. Much has been produced creatively (and written) about AI. However, little detailed attention is being paid to such ‘machinic’ language as the key means by which we will come to accept AI. This prompts a revisitation of the significance, value, and purpose of language for humans, alongside exploring the potential for hybrid/emergent forms of human/machine language. The creative method will be to work with the cybernetic theory outlined in the mid-20th century by Heinz von Foerster, for whom ‘Trivial Machines’ are those in which the input and output are predictable, while ‘Non-Trivial Machines’ are highly unpredictable. It is clear that although significant advances have been made within Natural Language Processing, Voice Recognition, etc., we are still very much in the ‘Trivial’ zone in terms of machine-replicated language: input equals output. However, humans are by definition ‘Non-Trivial Machines’ in terms of how they use/inhabit language, which is highly autonomous, unpredictable, and creative. Rethinking the relationship between language and identity, and posing language as an event, allows us to loosen the ties between language and its role in representation: it places the emphasis elsewhere.
If we rethink how we presently configure such languages, to be less concerned with replicating human language (spoken/written) and more with such creative potential, then we might be able to see a new role for non-human language as a creative force, closing the gap between these trivial and non-trivial machines and creating a hybrid, collaborative space somewhere between these two polar opposites.

The Non-Trivial [Language] Machines 

In response to these questions, a series of workshops and projects under the provisional titles listed below are under development. Participants will include technology experts, artists, writers, philosophers, poets and ‘others’. If you are interested in being involved in any of these experiments, please contact sheena.calvert@btinternet.com.

The focus within the creative works will be on exploring autonomy and imitation as the fundamental basis of the human/machine duality. What happens when that duality is less clear-cut? What creative potential can be tapped into, within the collision (and collusion) between humans and machines, across the interface[s] of language? We will create trivial/non-trivial machines (based on language), to both explore this potential, and expose its limits.
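Von Foerster’s distinction can be sketched in miniature. The following toy contrast is purely illustrative (the mappings and the state rule are invented for the example): a trivial machine is a fixed function, while a non-trivial machine carries an internal state that its own inputs keep rewriting, so identical inputs need not yield identical outputs.

```python
def trivial_machine(x):
    """A Trivial Machine: a fixed input-output mapping.
    The same input always produces the same output."""
    return x.upper()


class NonTrivialMachine:
    """A Non-Trivial Machine: the response depends on an internal state,
    and every input also rewrites that state, so the same input need
    not produce the same output twice."""

    def __init__(self):
        self.state = 0

    def respond(self, x):
        # Invented state rule: each input shifts the machine's state.
        self.state = (self.state + len(x)) % 3
        transforms = [x.upper(), x[::-1], x * 2]
        return transforms[self.state]
```

Feeding the word ‘word’ to the trivial machine always returns ‘WORD’; feeding it twice to the non-trivial machine returns two different responses, because the first input has already changed the machine.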

  1. Chaosmos, or Materia Prima (What is Language?)

‘Chaosmos’ is James Joyce’s term for the way in which order comes from chaos (comprised of the raw materials of the universe), but not before it has moved to the limits of comprehension; prior to it becoming (in this case) language/literature/art. Chaosmos might be thought of as a ‘composed chaos (but something neither foreseen nor preconceived)’, which emerges out of a temporary alignment of images/sensations/actions. In this initial experiment, the emphasis will be on working within an intentional form of linguistic chaos; a ‘Materia Prima’ (the primary materials of language), which precedes meaning. Random fragments of not-yet-language will be the medium, in writing, sound and drawing. The resulting work will be unknown; forged on the chaos of randomness and the randomness of chaos. This experiment is intended to draw attention to how language is arbitrary, and yet constructs a world for us. However, before we get to meaningful marks/sounds and/or conventional codes (human or otherwise), there is nothing but ‘wandering’.

  2. Materia Secunda/Enactment (What is Code?)

Within alchemy, ‘Materia Secunda’ is the second phase of the emergence of meaning. It takes the raw materials of language (the Materia Prima) and produces meaning from agreed codes, creating coherent patterns and systems. In this experiment, the participants will examine codes, with a view to seeing how they are inherently artificial, even before they become aligned with the digital, and afterward, the machine. Codes (numerical or otherwise) are a conceit of the human intellect, and yet they contain infinite potential to create meaningful utterances and gestures. Working with people from philosophy, and technology/science, this experiment will examine codes in detail and consider how ‘to code’ is ‘to create’.

  3. Imitation Game[s] (What is a Machine?)

To imitate is to mimic, to mimic is to replicate, to replicate is to repeat, and to repeat does not imply creation, but simple adherence to the original (perhaps). We replicate language within AI systems, but we don’t yet have the capacity to re-code its raw materials (imitative forms of language), to create new patterns, or to perform unexpected language games. AI is still a trivial machine at this point in time. However, advances in robotics and automation mean that we are moving ever closer to the point where autonomous creation is possible, not just imitation; resulting in a Non-Trivial Machine. By the time of this last experiment, we expect that technological developments will have advanced to the degree that we can glimpse the potential for ‘non-imitative’ technologies of voice and language; ones nonetheless produced by algorithms.

  4. one

‘one’ stands for (amongst other things) ‘ontologies of non-human expression’, and is a collaborative project focusing in particular on both the limits and potential of non-human publishing. A group of designers, artists, theorists and educators (Jack Clarke, Joshua Trees, Yvan Martinez, Robert Hetherington and Sheena Calvert) formed in late 2019, in response to shared questions about the shifting relationship between humans, machines and published work (books, texts and ‘other’). We started by revisiting the premise that to ‘publish’ is the act of making public, and, within that broad definition (which we aim to refine as the research progresses), proposing to consider the implications of autonomy, agency, automation and algorithmic production in published work produced by human and non-human agents. The intention is to interrupt our present understanding of how humans and machines both produce and disseminate such outputs. ‘one’ is at the same time a practice-based and a theoretical project, involving writings, practice-based experiments and performative modes of dissemination. We are interested in the relationship between the reader and the read, and in asking: is non-human publishing changing the relationship between the seen (the published) and the seeing subject (the ‘reader’)? Ultimately, what if only machines are reading? By examining the social relations that underpin such technologies, we hope to raise some important questions about the ways in which non-human publishing challenges the definition of that term, as well as revisiting the central role of language and published expression in human life.

The trivial/non-trivial machine distinction will create a space for one of the first experiments of this newly formed research group. By asking software to generate as many possible versions of the ‘one’ acronym/title, we can (playfully) see how and where machines are able to generate meaningful phrases. However, this will also be trivial, since the input is simply the store of possible words, in random combination, without an acknowledgment of language’s social context. We will publish this list as a first act of non-human publishing.

The test version of this language generator can be found at https://idealpress.org/one/

Typing ‘human’, ‘machine’, ‘ontological’, or in fact any other word into the main box, and pressing ‘go’ (more than once), generates variations of the full ‘one’ project title. Scrolling down the page reveals the collection of words from which the system is drawing. Each interaction with the generator adds more to this list, and so the number of possible variants multiplies with use. While the machine generates these collections of words randomly (or ‘trivially’), without making any sense of them, as humans we read these words and groupings quite differently, since we cannot escape our ‘non-trivial’ relationship to language.
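The ‘trivial’ mechanics just described can be sketched as follows. The word store here is a hypothetical stand-in for the growing lists on the project page, not the generator’s actual data; the point is only that the store fully determines the space of outputs, with no sense-making involved.

```python
import random

# Hypothetical word store: invented for illustration, standing in for
# the lists the generator draws from and grows with each interaction.
WORD_STORE = {
    "o": ["ontologies", "orders", "outputs", "objects"],
    "n": ["non-human", "networked", "novel", "nascent"],
    "e": ["expression", "events", "emergence", "enactment"],
}


def expand_one(store=WORD_STORE):
    """Trivially expand the acronym 'one': pick one word per letter at
    random. Input (the store) fully determines the space of outputs."""
    return " ".join(random.choice(store[letter]) for letter in "one")


def add_word(store, letter, word):
    """Each interaction can add to the store, so the number of possible
    variants multiplies with use."""
    if word not in store[letter]:
        store[letter].append(word)
```

To the machine the result is just a random triple; any sense a phrase like ‘ontologies of non-human expression’ carries is supplied by the human reader.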

……………………..

Notes

[1] Friedrich Kittler, Gramophone, Film, Typewriter (Stanford, CA: Stanford University Press, 1999).

[2] Friedrich Nietzsche, letter of March 17, 1882, in idem 1975–84, pt.3, 1: 180.

[3] Friedrich Nietzsche, letter toward the end of February, in F. Nietzsche Briefwechsel: Kritische Gesamtausgabe, G. Colli and M. Montinari (eds), Berlin, 1975 – 84, pt. 3, 1: 172.

[4] Angelo Beyerlen, the royal stenographer of Württemberg, quoted in Herbertz, 1909, 559.

[5] ‘Threads’, September 23rd, 1984, BBC.

[6] Plato, Phaedrus, 370 BC. Socrates (recounting a dialogue between Theuth and Thamus) questions the role of written language in supporting memory, and claims it will damage that facility in humans: ‘letters’ “… will create forgetfulness in the learners’ souls…”

[7] Heinz von Foerster, Understanding Understanding: Essays on Cybernetics and Cognition (New York: Springer, 2003).

[8] Heinz von Foerster (2003).

[9] For Turing, a ‘machine’ could be a set of rules and laws (mathematical/procedural). Hence, a ‘Turing Machine’. Cf: Turing, A.M. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. Also, 1948, “Intelligent Machinery.” Reprinted in “Cybernetics: Key Papers.” Ed. C.R. Evans and A.D.J. Robertson. Baltimore: University Park Press, 1968. p. 31.

[10] Heinz von Foerster (2003).

[11] For more information on Project Debater, see: https://www.research.ibm.com/artificial-intelligence/project-debater/

[12] Heinz von Foerster (2003).

[13] Jacques Derrida, Points… Interviews, 1974–1994, ed. Elizabeth Weber (Stanford: Stanford University Press, 1995), p. 75.

[14] Gilles Deleuze, Cinema 2: The Time-Image (University of Minnesota Press, 1984), pp. 43–44.

[15] Deleuze and Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, University of Minnesota Press, 1987. D + G preface their remarks about heterogeneity with: ”A semiotic chain is like a tuber agglomerating very diverse acts, not only linguistic, but also perceptive, mimetic, gestural, and cognitive: there is no language in itself, nor are there any linguistic universals, only a throng of dialects, patois, slangs, and specialized languages. There is no ideal speaker-listener, any more than there is a homogeneous linguistic community”. p.7

[16] Martin Heidegger, Identity and Difference, (Chicago University Press, 2002), p. 36.

[17] Ibid. p. 38.

[18] Friedrich Nietzsche, Will to Power, Translated by W. A. Kaufmann and R. J. Hollingdale, (Vintage Books, 1967), p. 477.

[19] Op. cit., Negative Dialectics, The Disenchantment of the Concept, p.12. Adorno puts the point in this way: “Initially, such concepts as that of “being” at the start of Hegel’s Logic emphatically mean non-conceptualities; as Lask put it, they “mean beyond themselves.” Dissatisfaction with their own conceptuality is part of their meaning, though the inclusion of non-conceptuality in their meaning makes it tendentially their equal and thus keeps them trapped within themselves”.

[20] See: Walter Benjamin, Selected Writings, volumes 1/2/3, edited by H. Eiland and M. W. Jennings (Harvard U. Press, 2006)

[21] Walter Benjamin, the ‘Epistemo-Critical Prologue’ to Ursprung des deutschen Trauerspiels (1928), translated as The Origin of German Tragic Drama (1977).

[22] Johann Goethe, Scientific Studies, The Collected Works, ed. D. Miller (Princeton U. Press, 1995).

[23] Michel Foucault: Aesthetics, Method and Epistemology. Edited by J. D. Faubion, and Translated by R. Hurley and others. (New York: The New Press, 1998).

[24] G. B. Madison, Coping with Nietzsche’s Legacy: Rorty, Derrida, Gadamer, The Politics of Postmodernity, Essays in Applied Hermeneutics. p.1

[25] Terry Eagleton, ‘Awakening from modernity’, Times Literary Supplement, 20th February 1987.

[26] Stephen Barker, Nietzsche/Derrida, Blanchot/Beckett:Fragmentary Progressions of the Unnamable, (California, 1995)

[27] Walter Benjamin, “The Task of the Translator”, in Illuminations, trans. Harry Zohn, London, Fontana, 1992 pp. 70-82.

[28] Ibid.

[29] In the Genealogy of Morals (1887), Nietzsche throws out a strident challenge that even those philosophers of the ‘modern’ era, who grounded understanding in science and mathematics, are shadowed by the same pursuit: “They are far from being Free Spirits: for they still have faith in truth”. Quoted in G. B. Madison, ‘Coping with Nietzsche’s Legacy’, Philosophy Today 36 (1): 3–19 (1992), p. 1.

[30] Gilles Deleuze, Logique du sens (Paris: Minuit, 1969); tr. as The Logic of Sense, by M. Lester with C. Stivale (New York: Columbia University Press, 1990), p. 145.

[31] Gilles Deleuze, The Logic of Sense, p. 146.

[32] Claire Colebrook, Deleuze, Routledge Critical Thinkers series (Routledge, 2002).  p. 108.

[33] Gilles Deleuze, Difference and Repetition. Translated by P. Patton. (London: The Athlone Press, 1994). p.118.

[34] Claire Colebrook, Deleuze, Routledge Critical Thinkers series (Routledge, 2002).

……………………

Bibliography

Theodor Adorno, Negative Dialectics, The Disenchantment of the Concept, Routledge, 1990.

Stephen Barker, Nietzsche/Derrida, Blanchot/Beckett:Fragmentary Progressions of the Unnamable, California, 1995.

Walter Benjamin, ‘The Task of the Translator’, in Illuminations, trans. Harry Zohn, London, Fontana, 1992.

Walter Benjamin, Selected Writings, Volumes 1/2/3, Harvard University Press, 2006.

Claire Colebrook, Deleuze, Routledge Critical Thinkers Series, Routledge, 2002.

Walter Benjamin, the ‘Epistemo-Critical Prologue’ to Ursprung des deutschen Trauerspiels (1928), translated as The Origin of German Tragic Drama (1977).

Deleuze and Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, University of Minnesota Press, 1987.

Gilles Deleuze, The Logic of Sense, Columbia University Press, 1990.

Gilles Deleuze, Difference and Repetition, The Athlone Press, 1994.

Terry Eagleton, ‘Awakening from modernity’, Times Literary Supplement, 20th February 1987.

Jacques Derrida, Points… Interviews, 1974-1994, California: Stanford, 1995.

Michel Foucault, Aesthetics, Method and Epistemology. New York: The New Press, 1998.

Heinz von Foerster. Understanding Understanding: Essays on Cybernetics and Cognition, New York: Springer, 2003.

Johann Goethe, Scientific Studies, The Collected Works, Princeton University Press, 1995.

Martin Heidegger, Identity and Difference, Chicago University Press, 2002.

Friedrich Kittler, Gramophone, Film, Typewriter, California, Stanford, 1999.

G. B. Madison, Coping with Nietzsche’s Legacy: Rorty, Derrida, Gadamer, The Politics of Postmodernity, Essays in Applied Hermeneutics, 2001.

Friedrich Nietzsche, letters, in F. Nietzsche Briefwechsel: Kritische Gesamtausgabe, Berlin, 1975 – 84.

Friedrich Nietzsche, Will to Power, Vintage Books, 1967.

Plato, Phaedrus, 370 B.C.

Threads (film), September 23rd, BBC, 1984.

Alan Turing (1936). ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. Proceedings of the London Mathematical Society. 2 (published 1937).

The post Language: The Non-Trivial Machine appeared first on Interalia Magazine.

Altered States: 2D digital displays become 3D reality – Digital Technology Lets You Touch Great Art

It’s a natural impulse to reach out and touch an original artwork, perhaps to feel the strong brushstrokes in van Gogh’s Starry Night or to trace the shape of a compelling sculpture. You can’t though, and for good reason: a multitude of individual touches would soon damage the work, so museums sensibly use “Please don’t touch” signs, velvet ropes and alert guards to keep viewers at a distance. It helps that those same museums have put their collections online so you can appreciate great art right on your digital device. However, even at high resolution, images on flat screens do not clearly show surface texture or convey volumes in space. But now researchers in art and technology are finding ways for viewers to experience the texture of artworks in 2D and the solidity of those in 3D.

The missing third dimension is significant even for flat works, which typically show the texture of the background paper or canvas, or of the pigment. Some nominally two-dimensional works are inherently textured, such as Canadian artist Brian Jungen’s People’s Flag (2006), an immense vertical hanging piece made of red textiles. Helen Fielding of the University of Western Ontario has perceptively noted how vision and touch intertwine in this work:

As my eyes run across the texture of the flag, I can almost feel the textures of the materials I see; my hands know the softness of wool, the smoothness of vinyl. Though touching the work is prohibited…my hands are drawn to the fabrics, subtly reversing the priority of vision over touch…

Textural features like these are a material record of the artist’s effort that enhances a viewer’s interaction with the work. Such flat but textured works are art in “2.5D” because they extend only slightly into the third dimension. Now artworks shown in 2.5D and 3D on flat screens and as solid 3D models are giving new pleasures and insights to art lovers, curators, and scholars. As exact copies, these replicas can also help conserve fragile works while raising questions about the meaning of original art.

One approach, developed at the Swiss Federal Institute of Technology (EPFL) in Lausanne, creates a digital 2.5D image of an artwork by manipulating its lighting. Near sunset, when the sun’s rays enter a scene at an angle, small surface elevations cast long shadows that make them stand out. Similarly, the EPFL process shines a simulated light source onto a digital image. As the source is moved, it produces highlights and shadows that enhance surface details to produce a quasi-3D appearance.

This approach has links to CGI, computer-generated imagery, the technology that creates imaginary scenes and characters in science fiction and fantasy films. One powerful CGI tool is an algorithm called the bidirectional scattering distribution function (BSDF). For every point in an imagined scene, the BSDF shows how incoming light traveling in any direction would be reflected or transmitted to produce the outgoing ray seen by a viewer. The result fully describes the scene for any location of the light source.
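The idea can be roughly sketched in code, using the simplest possible case, a perfectly matte (Lambertian) surface rather than a measured BSDF: the light a viewer sees from one surface point is the BSDF value for that incoming/outgoing direction pair, scaled by the incident intensity and the angle of incidence. This is a toy illustration of the concept, not Artmyn’s or any production renderer’s implementation.

```python
import math


def lambertian_bsdf(albedo):
    """The simplest possible BSDF: a matte surface that scatters incoming
    light equally in all outgoing directions."""
    def f(incoming, outgoing):
        return albedo / math.pi  # constant: independent of both directions
    return f


def shade(bsdf, normal, light_dir, view_dir, light_intensity=1.0):
    """Outgoing radiance toward the viewer from one surface point:
    BSDF value x incident intensity x cosine of the incidence angle."""
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return bsdf(light_dir, view_dir) * light_intensity * cos_theta
```

A measured BSDF replaces the constant function with a direction-dependent table built from real photographs, which is why moving the simulated light source reveals the surface relief.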

In films, the BSDF is obtained from optical theory and the properties of the imaginary scene. The EPFL group, however, generated it from real art. In 2014, they illuminated a pane of stained glass with light from different directions and recorded the results with a high-resolution camera, creating a BSDF and showing that the method works for nearly planar items. This approach has been commercialized by Artmyn, a Swiss company co-founded by Loïc Baboulaz, who led the EPFL team. Artmyn makes 2.5D digital images of artworks by lighting them with LEDs at different visible wavelengths to provide color fidelity, and at infrared and ultraviolet wavelengths to further probe the surface. The result is a BSDF with up to a terabyte of data.

As an illustration, Artmyn has worked with Sotheby’s auction house to digitize two Marc Chagall works: Le Printemps (1975, oil on canvas), a village scene with a couple embracing, and Dans L’Atelier (1980, tempera on board), an artist’s studio. The Artmyn software lets a viewer zoom from the full artwork down to the fine scale of the weave of the canvas, while moving the lighting to display blobs, islands and layers of pigment. This reveals how Chagall achieves his effects and clearly illustrates the difference between oils and tempera as artistic media. Currently in process for similar digitization, Baboulaz told me, are a Leonardo da Vinci painting and a drawing, in recognition of the 500th anniversary of his death this year.

Artmyn has also digitized cultural artifacts such as a Sumerian clay tablet circa 2,000 BCE covered in cuneiform script; signatures and letters from important figures in the American Revolution; and a digital milestone, the original Apple-1 computer motherboard. These 2.5D images display signs of wear and of their creator’s presence that hugely enhance a viewer’s visceral appreciation of the real objects and their history.

For the next step, creating full 3D representations and physical replicas, the necessary data must be obtained without touching the original. One approach is LIDAR (light detection and ranging), where a laser beam is scanned over the object and reflected back to a sensor. The distance from the laser to each point on the object’s surface is found from the speed of light and its travel time, giving a map of the surface topography. LIDAR is most suitable for big artifacts such as a building façade at a coarse resolution of millimeters. Other approaches yield finer detail. In the triangulation method, for instance, a laser puts a dot of light on the object while a nearby camera records the dot’s location, giving data accurate to within 100 micrometers (0.1 millimeter). Copyists typically combine scanning methods to obtain the best surface replication and color rendition.
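The arithmetic behind both measurements is straightforward. A minimal sketch follows; the triangulation geometry shown is one simple configuration chosen for illustration, not necessarily the layout of any particular scanner.

```python
import math

C = 299_792_458.0  # speed of light in metres per second


def lidar_distance(round_trip_time_s):
    """Time-of-flight LIDAR: the pulse travels to the surface and back,
    so the one-way distance is half the total path."""
    return C * round_trip_time_s / 2.0


def triangulated_depth(baseline_m, camera_angle_rad):
    """One simple triangulation geometry (illustrative only): the laser
    fires perpendicular to a baseline of known length, and a camera at
    the far end of the baseline sees the dot at a measured angle."""
    return baseline_m * math.tan(camera_angle_rad)
```

A two-nanosecond round trip, for example, corresponds to a surface about 30 centimetres away; in triangulation, the accuracy comes from how precisely the camera can resolve the dot’s angle rather than from timing.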

One big 3D copying effort is underway at the Smithsonian Institution, whose 19 museums preserve 155 million cultural and historic artifacts and artworks. Since 2013, the Smithsonian has put over 100 of these online as interactive 3D displays that can be viewed from different angles, and as data for 3D printers so people can make their own copies. The objects, chosen for popularity and diversity, include the original 1903 Wright Brothers flyer; a highly decorated 2nd century BCE Chinese incense burner; costume boots from the Broadway musical The Wiz from 1975; a mask of Abraham Lincoln’s face from shortly before his assassination in 1865; and for the 50th anniversary of the Apollo 11 moon landing, astronaut Neil Armstrong’s spacesuit. Recently added is a small 3D version of a full-sized dinosaur skeleton display at the National Museum of Natural History showing a T-rex attacking a triceratops, for which hundreds of bones were scanned by LIDAR and other methods.

A different goal animates the 3D art and technology studio Factum Arte in Madrid, Spain. Founded by British artist Adam Lowe in 2001, Factum Arte protects cultural artifacts by copying them, using its own high-resolution 3D scanning, printing and fabrication techniques.

Museums already use copies to preserve sensitive artworks on paper that need long recovery times in darkness and low humidity between showings. During these rests, the museum displays instead high-quality reproductions (and informs patrons that they are doing so). In a recent interview entitled “Datareality,” Adam Lowe expressed his similar belief that an artistically valid copy can provide a meaningful viewing experience while preserving a fragile original. One of his current projects is to replicate the tombs of the pharaohs Tutankhamun (King Tut) and Seti I, and queen Nefertari, in the Egyptian Valley of the Kings. The tombs were sealed by their builders, but once opened, they are deteriorating due to the throngs of visitors. As Lowe recently explained, “by going to see something that was designed to last for eternity, but never to be visited, you’re contributing to its destruction.”

The copies, approved by Egypt’s Supreme Council of Antiquities, will give visitors alternate sites to enter and view. At a resolution of 0.1 millimeter, the copies provide exact reproductions of the intricate colored images and text adorning thousands of square meters in the tombs. The first copy, King Tut’s burial chamber, was opened to the public in 2014, and in 2018, Factum Arte displayed its copied “Hall of Beauties” from the tomb of Seti I.

Earlier, Factum Arte had copied the huge Paolo Veronese oil on canvas The Wedding Feast at Cana (1563, 6.8 meters x 9.9 meters), which shows the biblical story where Jesus changes water into wine. The original was plundered from its church in Venice by Napoleon’s troops in 1797 and now hangs in the Louvre. The full-size copy, however, commissioned by the Louvre and an Italian foundation, was hung back at the original church site in 2007.

Factum Arte’s efforts highlight the questions that arise as exact physical copies of original art become available. Museums, after all, trade in authenticity. They offer viewers the chance to stand in the presence of a work that once felt the actual hands of its creator. But if the copy is indistinguishable from the work, does that dispel what the German cultural critic Walter Benjamin calls the “aura” of the original? In his influential 1935 essay The Work of Art in the Age of Mechanical Reproduction, he asserted that a copy lacks this aura:

In even the most perfect reproduction, one thing is lacking: the here and now of the work of art – its unique existence in a particular place. It is this unique existence – and nothing else – that bears the mark of the history to which the work has been subject.

The Factum Arte reproductions show that “original vs copy” is more nuanced than Benjamin indicates. The Egyptian authorities will charge a higher fee to enter the original tombs and a lower one for the copies, giving visitors the chance to feel the experience without causing damage. Surely this helps preserve a “unique existence in a particular place” for the original work. And for the repatriated Wedding at Cana, Lowe tellingly points out that a copy can bring its own authenticity of history and place:

Many people started to question about whether the experience of seeing [the copy] in its correct setting, with the correct light, in dialogue with this building that it was painted for, is actually more authentic than the experience of seeing the original in the Louvre.

We are only beginning to grasp what it means to have near-perfect copies of artworks, far beyond what Walter Benjamin could have imagined. One lesson is that such a copy can enhance an original rather than diminish it, by preserving it, and by recovering or extending its meaning.

Copying art by technical means has often been an unpopular idea. Two centuries ago, the English artist William Blake, known for his unique personal vision, expressed his dislike of mechanical reproduction such as imposing a grid to copy an artwork square by square. Current technology can also often stand rightfully accused of replacing the human and the intuitive with the robotic and the soulless. But properly used, today’s high-tech replications show that technology can also enlarge the power and beauty of an innately human impulse, the need to make art.


How a trippy 1980s video effect might help to explain consciousness


Still from a video feedback sequence.
© Robert Pepperell 2018, Author provided

Robert Pepperell, Cardiff Metropolitan University

Explaining consciousness is one of the hardest problems in science and philosophy. Recent neuroscientific discoveries suggest that a solution could be within reach – but grasping it will mean rethinking some familiar ideas. Consciousness, I argue in a new paper, may be caused by the way the brain generates loops of energetic feedback, similar to the video feedback that “blossoms” when a video camera is pointed at its own output.

I first saw video feedback in the late 1980s and was instantly entranced. Someone plugged the signal from a clunky video camera into a TV and pointed the lens at the screen, creating a grainy spiralling tunnel. Then the camera was tilted slightly and the tunnel blossomed into a pulsating organic kaleidoscope.

Video feedback is a classic example of complex dynamical behaviour. It arises from the way energy circulating in the system interacts chaotically with the electronic components of the hardware.
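A toy, one-dimensional analogue of the loop (invented purely for illustration, with a short row of pixel values standing in for the screen) shows the basic mechanism: each ‘frame’ is a slightly transformed, attenuated re-capture of the last, plus a little fresh input, and a stable pattern emerges from the recirculation.

```python
def feedback_step(frame, gain=0.97, shift=1, injection=0.1):
    """One pass around the loop: the displayed frame is re-captured
    slightly transformed (shifted along the row, like a tilted camera)
    and attenuated, then a spot of fresh light is injected, standing in
    for the room's ambient input."""
    n = len(frame)
    captured = [gain * frame[(i + shift) % n] for i in range(n)]
    captured[n // 2] += injection
    return captured


# Start from a dark screen and let the loop run.
frame = [0.0] * 16
for _ in range(200):
    frame = feedback_step(frame)
```

Even this crude model settles into a persistent circulating pattern; real video feedback adds optics, noise and nonlinear electronics, which is where the kaleidoscopic complexity comes from.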

As an artist and VJ in the 1990s, I would often see this hypnotic effect in galleries and clubs. But it was a memorable if unnerving experience during an LSD-induced trip that got me thinking. I hallucinated almost identical imagery, only intensely saturated with colour. It struck me then there might be a connection between these recurring patterns and the operation of the mind.

Brains, information and energy

Fast forward 25 years and I’m a university professor still trying to understand how the mind works. Our knowledge of the relationship between the mind and brain has advanced hugely since the 1990s when a new wave of scientific research into consciousness took off. But a widely accepted scientific theory of consciousness remains elusive.

The two leading contenders – Stanislas Dehaene’s Global Neuronal Workspace Model and Giulio Tononi’s Integrated Information Theory – both claim that consciousness results from information processing in the brain, from neural computation of ones and zeros, or bits.

I doubt this claim for several reasons. First, there is little agreement among scientists about exactly what information is. Second, when scientists refer to information they are often actually talking about the way energetic activity is organised in physical systems. Third, brain imaging techniques such as fMRI, PET and EEG don’t detect information in the brain, but changes in energy distribution and consumption.

Brains, I argue, are not squishy digital computers – there is no information in a neuron. Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information.

Recognising that brains are primarily energy processors is the first step to understanding how they support consciousness. The next is rethinking energy itself.

Is the human brain a squishy digital computer or a delicate organic instrument for processing energy?
Installation shot of ‘I am a brain’, 2008. Cast of human brain in resin and metal. Robert Pepperell

What is energy?

We are all familiar with energy but few of us worry about what it is. Even physicists tend not to. They treat it as an abstract value in equations describing physical processes, and that suffices. But when Aristotle coined the term energeia he was trying to grasp the actuality of the lived world, why things in nature work in the way they do (the word “energy” is rooted in the Greek for “work”). This actualised concept of energy is different from, though related to, the abstract concept of energy used in contemporary physics.

When we study what energy actually is, it turns out to be surprisingly simple: it’s a kind of difference. Kinetic energy is a difference due to change or motion, and potential energy is a difference due to position or tension. Much of the activity and variety in nature occurs because of these energetic differences and the related actions of forces and work. I call these actualised differences because they do actual work and cause real effects in the world, as distinct from abstract differences (like that between 1 and 0) which feature in mathematics and information theory. This conception of energy as actualised difference, I think, may be key to explaining consciousness.

The human brain consumes some 20% of the body’s total energy budget, despite accounting for only 2% of its mass. The brain is expensive to run. Most of the cost is incurred by neurons firing bursts of energetic difference in unthinkably complex patterns of synchrony and diversity across convoluted neural pathways.

What is special about the conscious brain, I propose, is that some of those pathways and energy flows are turned upon themselves, much like the signal from the camera in the case of video feedback. This causes a self-referential cascade of actualised differences to blossom with astronomical complexity, and it is this that we experience as consciousness. Video feedback, then, may be the nearest we have to visualising what conscious processing in the brain is like.

Does consciousness depend on the brain looking at itself?
Robert Pepperell, 2018

The neuroscientific evidence

The suggestion that consciousness depends on complex neural energy feedback is supported by neuroscientific evidence.

Researchers recently discovered a way to accurately index the amount of consciousness someone has. They fired magnetic pulses through the brains of healthy, anaesthetised, and severely injured people, then measured the complexity of an EEG signal that monitored how the brains reacted. The complexity of the EEG signal predicted the person's level of consciousness: the more complex the signal, the more conscious the person was.

The researchers attributed the level of consciousness to the amount of information processing going on in each brain. But what was actually being measured in this study was the organisation of the neural energy flow (EEG measures differences of electrical energy). Therefore, the complexity of the energy flow in the brain tells us about the level of consciousness a person has.
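The kind of signal complexity at issue here can be made concrete. In studies of this sort, EEG responses are typically binarised and then scored by how well they compress; Lempel-Ziv complexity is a standard measure of that kind. Below is a minimal Python sketch of the idea (illustrative only: the signals are invented, and this is not the researchers' actual analysis pipeline):

```python
import random

def lempel_ziv_complexity(s):
    """Count phrases in a simple LZ76-style parsing of a binary string:
    repeatedly take the shortest substring not already seen in the prefix."""
    i, phrases = 0, 0
    n = len(s)
    while i < n:
        length = 1
        # grow the candidate phrase while it already occurs in the preceding text
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A stereotyped, rhythmic "signal" compresses well; an irregular one does not.
regular = "01" * 100  # stand-in for a monotonous, anaesthesia-like pattern
rng = random.Random(0)
noisy = "".join(rng.choice("01") for _ in range(200))  # stand-in for rich, awake-like activity

print(lempel_ziv_complexity(regular), lempel_ziv_complexity(noisy))
```

The rhythmic signal parses into only a handful of phrases, while the irregular one needs many more, mirroring the finding that richer, less compressible EEG activity accompanies higher levels of consciousness.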

Also relevant is evidence from studies of anaesthesia. No-one knows exactly how anaesthetic agents annihilate consciousness. But recent theories suggest that compounds including propofol interfere with the brain’s ability to sustain complex feedback loops in certain brain areas. Without these feedback loops, the functional integration between different brain regions breaks down, and with it the coherence of conscious awareness.

What this, and other neuroscientific work I cite in the paper, suggests is that consciousness depends on a complex organisation of energy flow in the brain, and in particular on what the biologist Gerald Edelman called “reentrant” signals. These are recursive feedback loops of neural activity that bind distant brain regions into a coherent functioning whole.

Video feedback may be the nearest we have to visualising what conscious processing in the brain is like.
Still from video feedback sequence. Robert Pepperell, 2018

Explaining consciousness in scientific terms, or in any terms, is a notoriously hard problem. Some have worried it’s so hard we shouldn’t even try. But while not denying the difficulty, the task is made a bit easier, I suggest, if we begin by recognising what brains actually do.

The primary function of the brain is to manage the complex flows of energy that we rely on to thrive and survive. Instead of looking inside the brain for some undiscovered property, or "magic sauce", to explain our mental life, we may need to look afresh at what we already know is there.

Robert Pepperell, Professor, Cardiff Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post How a trippy 1980s video effect might help to explain consciousness appeared first on Interalia Magazine.

On ‘The Darker the Night, the Brighter the Stars’

Paul Broks: I was putting the finishing touches to my first book, Into the Silent Land: Travels in Neuropsychology, when my wife, Sonja, was diagnosed with breast cancer. I put off submitting the manuscript in order to support her through the difficult first stages of treatment, but I also used the time to write an additional chapter dealing with our response to her diagnosis. In that final chapter I recount the true story of a conversation we’d had over dinner in which I’d presented the following scenario (borrowed from a Milan Kundera novel). A kindly visitor from an advanced alien civilisation brings the good news that death is not the end and that she will be moving on to another life. But there’s a choice to be made. Does she want to commit to having me with her in that future life, or would she rather go it alone? She said she’d go it alone. One lifetime was enough, however much you loved someone. So I’d better make the most of it. We had another eight years together, more than twice as long as expected from the original, rather grim, prognosis.

Into the Silent Land ends with Sonja’s cancer diagnosis and The Darker the Night begins with her death. That Stoic injunction – Just the one life; better make the most of it – finds an echo in the first few pages. Close to the end she said to me, You don’t know how precious life is. You think you do, but you don’t, and those words, effectively an encapsulation of the Stoic message of Marcus Aurelius, resonate through the pages of the book. Both Into the Silent Land and The Darker the Night contain neurological case stories and autobiographical strands. They both make excursions into philosophy and fiction. But The Darker the Night is more layered and has a more discernible narrative arc. There is a beginning, a middle, and an end, even if at times the narrative thread that leads you through the journey dissolves into fictional digressions and the retelling of stories from Greek mythology. I know I’m asking a lot from readers. And it’s a bookseller’s nightmare with its mix of genres. It ends with me entering a new relationship and an encounter with a self-proclaimed archangel at the top of Glastonbury Tor who presents me with a difficult dilemma. It’s a happy ending, I think, or at least a wistful one.

The idea of interspersing images with text occurred to me quite late on. I aim to counterbalance darkness and viscerality in my writing with humour and sheer wonder at the mystery of the world. There’s a similar chemistry in some of your work, I think, and you were the first person I considered approaching.

Garry Kennard: I had read Into the Silent Land some years before I founded the Art and Mind festivals and had been extremely impressed. I admired the way you blended science, philosophy and a personal feel for the world, via marvellous writing, into a satisfying and moving work of art. It was exactly this kind of melding that I was looking for in the festivals. As director I was in the fortunate position of being able to invite anyone I liked to take part and you were the obvious first choice. Apart from a few more appearances at my events, that was the extent of our acquaintance.

That was that until I received your email asking if I would be interested in illustrating your new book. This was an amazing surprise – and a flattering offer. I wasn’t even aware that you had seen any of my work.

I had made a rule for myself, born of long experience, that I would always refuse commissions. I had realised that when I did, it was guaranteed I would produce awful work. The idea of someone looking over my shoulder, waiting expectantly for the masterwork, made me falter and stutter.

So I demurred to begin with. But after you had sent me some chapters to read, it began to dawn on me that I could do something. I was still very doubtful about it but the feeling grew that this would offer me the opportunity of trying something new, although I had to get over the problem of too close a collaboration spoiling the thing. I remember saying that I would have a go at this, but it would be under certain conditions. One was that I would not discuss the images with you. I would not send you sketches for you to comment on. I would not directly illustrate the book. I would read the text closely and then let my hand and brain semi-improvise the images and see what emerged. I would send you the finished pictures. If you didn’t like them or didn’t think them appropriate you could ditch them.  I would continue to produce images to replace those you didn’t like. When you agreed to this, I started.

I had no idea of what you expected. You had said nothing about what you hoped for, no rules that you wanted me to adhere to. That left me an opening to start work without constraints – the only way I could do anything. I managed five or six pictures, sent them to you and waited for your reaction.

PB: I had no more than vague intuitions as to what to expect but didn’t at any stage envisage the pictures as mere ‘illustration’. I was more interested in seeing what might come of a more loosely imaginative – subconscious, even – reaction to the text. That wouldn’t work if you’d felt in any way obliged to produce images to order. Actually, I think there’s some similarity in the way I produced the text and the way you produced the pictures. Although my writing sometimes involves quite hard deliberation (over how to frame a philosophical argument, say) I think it works best when I don’t think too much and just leave it to my subconscious ‘brownies’ to do the creative legwork. I’m usually just noodling around with a notebook and pen and suddenly a phrase or image presents itself and sets off a train of thought and I think, where the hell did that come from? Nothing to do with me! I sense something similar was happening with your semi-improvised pictures. Parts of the book are, in fact, explicitly concerned with the unconscious, semi-autonomous machineries of ‘imaginal reality’, and it also chimes with the non-linear, “knight’s move” progression of the essays and stories, so it’s all quite apt.

We had a bit of to-ing and fro-ing over a couple of the images, I recall, and you made your own unprompted revisions here and there, but, for the most part, I was pretty much blown away by the “first takes”. By the way, I happened to be in the thick of a children’s birthday party when those first dark, still, mysterious images came through on my phone. It was exquisitely incongruous! I knew straight away they were going to work.

GK: In the past most of my paintings and drawings have been carefully worked out. They were designed to discombobulate the viewer by presenting two different modes of perception on the same plane – one highly realistic, the other of suggestive abstractions. Now, with the opportunity to free myself from this and to semi-improvise the images, I could explore a new way of reaching the same destination. Of waiting for the image itself to disarm me as it grew.

I had a small notebook by my side all the time and I would scribble in it at the most peculiar times – while watching television or doing a crossword. The scribbles, which grew into hundreds, gave me a feel for what might be done. I didn’t refer to these when drawing the final pictures but some of those came out very similar to the initial sketches.

The deeper I got into this the more astonished I became at what was emerging. Themes kept re-appearing – a black sun, clocks, doors opening. I felt these held the sequence together. But on a deeper level what was appearing before me became an exploration of my own psyche, almost as if I was creating my own Rorschach tests with the added device of being able to develop and strengthen those emerging images to which I found myself responding. This is obviously not a new way of doing things. Artists have always produced work like this in some way. But it was new to me and a revelation. You can see from this how my ‘method’ was very close to your own in composing the text.

Of course, the other thing which preoccupied me was your text, which I read over and over, hoping that something from it would infiltrate the pictures without me actually illustrating the words. I think that happened to a greater or lesser extent in the series.

PB: Sorry to say a couple of my favourite pictures didn’t make it onto the pages of the book owing to the publisher’s squeamishness. One is the image of the Greek god Pan standing in a doorway, proudly erect as he often is in classical depictions. This was ruled out on grounds of being ‘pornographic’. The other was the image of the gipsy woman, again a very traditional one, which was rejected for reasons of political correctness. I found this hard to accept, and protested but, regrettably, didn’t have the final say.

GK: I was quite devastated when the publisher rejected a number of the images, some of which I had been very pleased with. It made no sense to me. With some of the pictures missing the connecting themes, apparent in the whole set, disappeared. I know you tried your best to reinstate some drawings (and succeeded with a couple of them). I realise you were as disappointed as me. But – it was done and that was that.

The Complete Drawings

1. Stairway to sunlit room:
“Push, and the door will open into a sunlit room, forever sunlit, regardless of the depth of the night.”

 

2. Trees through the windows:
“Doors opened into unexpected rooms. Through this window, a crisp winter morning, though that, a summer afternoon.”

 

3. Boy at night:
“Sleep won’t come. Thoughts are running like rats through his head and a shadow on the far wall of the bedroom unsettles him.”

 

4. Man with dark moon:
“For a minute or two I had the sense that she was still alive. I could catch up with her and we would carry on as normal.”

 

5. Time Traveller at the station:
‘Mike the Time Traveller?’ No. He was just a miserable dipso on his way home from the miserable office, having a drink or nine to gird his miserable loins for miserable home.

 

6. Tabletops:
“The tabletops are identical in size and shape, yet the one on the left appears elongated. There’s a mismatch between mental and physical reality.”

 

7. Pan at the door:
“There’s a knock at the door and there he is with his hooves, his horns, his fur and, slightly worryingly, his large, erect penis.”

 

8. CS Lewis:
“Jack has a morbid dread of insects. ‘To my eye,’ he says, ‘they are either machines that have come to life or life degenerating into mechanism.’”

 

9. Pat Martino:
“With music as the golden thread, he began to weave a new version of himself. He was a genius twice over, but this time with a piece of his brain missing.”

 

10. The White Bull of Minos:
“Listen, the bull said to himself, nonverbally, I may be a beast of the field but I’m no mug. I’m doing this of my own conscious volition.”

 

11. Zombies:
“Now Lewys was telling me that zombies were real, not merely conceivable. They walk down every street.”

 

12. Multiplicity:
There are infinite histories to choose from with infinite variations on the theme of you and your life.

 

13. Tyger, tyger:
“Have moonlight if you want. There’s no sign of a tiger. Why would there be in an English forest?”

 

14. Spiral head figure:
“The labyrinth is a primordial image of the psyche. It symbolizes the winding, snakelike path to psychological wholeness and authenticity.”

 

15. Into the Labyrinth:
I’ll give you a clew, he told her, but she was in no mood for games and just wanted an answer. No, this sort of clew, he said, producing a ball of thread.

 

16. Time and the woman:
“Stabs of absence; stabs to the brain and heart; an entering of the flesh, knowing in the flesh that she’s not here anymore.”

 

17. Gipsy at the door:
“So you’re in a good place, then. The gipsy told me. You’re thinking of me, she said, and you want me to find another good woman.”

 

18. The drunk on the bench:
“Isaac Newton, he told me, was a genius but died a virgin. He was a sad fucker. I was taken aback because I’d just then been thinking about celestial mechanics.”

 

19. Hierarchy:
‘One day I’ll be dead’. It’s an oddly exhilarating thought. Something unimaginable – eternal nothingness – awaits us all. It sharpened my senses. Let’s not forget we’re alive.

 

20. Perseus and the Dead Girl:
The image of the dead girl surprised me. She bobbed in a flowing white garment, like an infant Ophelia. The sea itself was subdued. Small waves broke indifferently.

 

21. Sisyphus:
“The toil of Sisyphus represents the human condition, ‘…his whole being exerted towards accomplishing nothing.’ ”

 

22. Incubus:
The firewall between fantasy and reality collapses and all the monstrous archetypes break free: witches and goblins, demons and other strange creatures. They have the shine of sentience in their eyes.

 

23. Universe and beer:
“All moments, all times, are equally real, equally present, including all the moments of your life, which are, from beginning to end, ‘in place’”

 

24. Carpet flower:
“Whenever I recall the carpet flower I have a sense of seeing, of being, for the very first time.”

 

25. Glastonbury Tor:
“So you can, if you want, totally erase the life you’ve had. It will never have existed. Up to you, he said. I made my choice.”

 

……………………..

www.garrykennard.com

All images copyright and courtesy of Garry Kennard

 

 

The post On ‘The Darker the Night, the Brighter the Stars’ appeared first on Interalia Magazine.

From Computational Creativity to Creative AI and Back Again

Abstract

I compare and contrast the AI research field of Computational Creativity and the Creative AI technological movement, both of which are contributing to progress in the arts. I raise the spectre of a looming crisis wherein public opinion moves on from the spectacle of software being creative to viewing the lack of authenticity in creative AI systems as being a major drawback. I propose a roadmap from Creative AI systems to Computationally Creative systems which address this lack of authenticity via the software expressing aspects of its computational life experiences in the art, music, games and literature that it produces. I posit that only by harnessing Creative AI technologies and Computational Creativity philosophies in the pursuit of truly creative software able to express the machine condition, will we gain maximum societal benefit in further understanding the human condition.

 

  1. Introduction

This year, we passed a milestone in my field, as the 10th annual International Conference on Computational Creativity (ICCC) was held in the USA. The conference brings together AI researchers who test the idea of software being independently creative, describing projects with goals ranging from enhancing human creativity to advancing our philosophical understanding of creativity and producing fully autonomous creative machines. The conference series was built on roughly ten years of preceding workshops [1], with interest in the idea of machine creativity going back to the birth of modern computing. For instance, in their 1958 paper [2], AI luminary Allen Newell and Nobel Prize winner Herbert Simon hypothesised that: “Within ten years, a digital computer will discover and prove an important mathematical theorem”. In [3], we proposed the following working definition of Computational Creativity research:

“the philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative.”

In the last few years, we have seen unprecedented interest across society in generative AI systems able to create culturally interesting artefacts such as pictures, musical compositions, texts and games. Indeed, it’s difficult to read a newspaper or magazine these days without stumbling across a story about a new project to generate poems, a symphony orchestra playing AI-generated music, or an art exhibition in which AI systems are purported to be artists.

This wave of interest has been fuelled by a step change in the quality of computer-generated cultural artefacts, brought on largely by advances in machine learning technologies, and in particular the deep learning of artificial neural networks. Such techniques are able to generate new material by learning from data about the structure of existing material – such as a database of images, a corpus of texts or a collection of songs – and determining a way to create more of the same. An umbrella term for this groundswell of interest and activity in generative art/music/literature/games is “Creative AI”, and people from arts and sciences, within and outwith academia are actively engaged in producing art using AI techniques. We surveyed different communities engaged in generative arts – including Creative AI practitioners – in a recent ICCC paper [4].

While we might have expected the Creative AI community to have grown from the field of Computational Creativity, this is not the case. Indeed, somewhat of a schism has developed where the two communities have different aims and ambitions. Both communities have a main interest in the development of generative technologies for societal good. The Creative AI movement has an emphasis on quality of output and developing apps to commercial level for mass consumption. There is also a tendency to disavow the idea that software itself could/should be independently creative, in favour of a strong commitment to producing software purely for people to use to enhance their own creativity. In contrast, Computational Creativity researchers tend to be interested in the bigger picture of Artificial Intelligence, philosophical discourse around notions of human and machine creativity, novel ways to automate creative processes, and the idea that software, itself, could one day be deemed to be creative.

To highlight the schism: I personally find it difficult to think of any computational system as being “a Creative AI” if it cannot communicate details about even a single decision it has taken, which is generally the case for approaches popular in Creative AI circles, such as Generative Adversarial Networks (GANs) [5]. I prefer therefore to describe Creative AI projects as “AI for creative people”, because the most literal reading of the phrase “Creative AI” is currently inaccurate for the majority of the projects under that banner. I often go further to point out that many Creative AI applications should be categorised as graphics (or audio, etc) projects which happen to employ techniques such as GANs that were originally developed by AI researchers.

As another example, I’ve argued in talks and papers many times that the end result of having more computer creativity in society is likely to be an increased understanding and celebration of human creativity, in much the same way that hand-made craft artefacts, like furniture or food, are usually preferred over machine-produced ones. I point out that I’ve met dozens of artists, musicians, poets and game designers, none of whom have expressed any concern about creative software, because they understand the value of humanity in creative practice. On the other hand, I’ve also spoken to Creative AI practitioners who remain convinced that truly creative software will lead to job losses, demoralisation and devaluation in the creative industries.

 

  2. Product versus Process

The Creative AI movement has helped to swing the global effort in engineering creative software systems firmly towards human-centric projects where AI techniques are used purely as tools for human use, with ease of use and quality of output disproportionately more important than any other considerations. I’ve been trying recently to put together arguments and thought experiments to help explain why I believe this is a retrograde step, and I’ve been trying to articulate ways in which the wealth of knowledge accrued through decades of Computational Creativity projects could be of use to Creative AI practitioners. Almost every project ever presented within Computational Creativity circles started with building a generative system with similar aims to Creative AI projects. Hence I feel we are well placed to consider the role that AI systems could have in creative practice, and to encourage Creative AI researchers and practitioners to consider some of the ideas we’ve developed over the years.

Imagine a generative music system created by a large technology company, which is able to generate 10,000 fully orchestrated symphonies in just 1 hour. Let’s say that each symphony would be lauded by experts as a beautiful work of genius had it been produced by a human composer like Beethoven; and each one sounds uniquely different to the others. If we accept the reality of an AI system (AlphaGo Zero) able to train itself from scratch to play Go, Chess and Shogi at superhuman levels [6], then we should entertain the idea that superhuman symphony writing is possible in our lifetimes. If we only concentrate on the quality of output and the ease with which software can generate outputs as complex as a symphony, then the above scenario is presumably a suitable end point for generative music and would be a cause for celebration – it would certainly tick the box of huge technical achievement, as the AlphaGo project did. However, one has to wonder what the benefits of having these symphonies (and the ability to generate them so easily) are for society.

I would predict that the classical music world would find very few practical applications for a database of 10,000 high-quality symphonies, and it would likewise find little value in generating more such material. I would also predict that there would be little, if any, devaluation of symphonic music as a whole, and no devaluation of the work of gifted composers able to hand-produce symphonies. Superhuman chess playing by computers has been around since the time of Deep Blue, and has likely increased rather than decreased the popularity of the game. The chess world has responded to computer chess by being clearer about the human-centric struggle at the heart of every game of chess, and “[a]mong the chess elite, the idea of challenging a computer has fallen into the realm of farce and retort” [7]. It is clear that computer chess has made the game of chess more human. Part of the attraction of the music from composers such as Mozart and Beethoven is that these were mere mortals with superhuman creative abilities in composition. Society celebrates such creative people, often by lauding the works they produce, but also by applauding their motivations, exploring their backgrounds, expressing awe about their process, and by taking inspiration for a fresh wave of creative activity. Creativity in society serves various purposes, only one of which is to bring into being artefacts of value.

While board games have hugely driven forward AI research, chess isn’t some mathematical Drosophila for AI problem solving (as some researchers would have you believe). It is actually a game and pastime played by two people, which can be elevated to highly competitive levels. Likewise, a symphony isn’t just a collection of notes to guide musicians to produce sound waves, but is created by human endeavour for human entertainment, often condensing into abstract form aspects of human life experience and expression. I would predict that – in an age of superhuman symphony generation – a huge premium would be placed on compositions borne of human blood, sweat and tears, with the generation of music via statistical manipulation of data by computer remaining a second class process.

 

  3. Computational Authenticity

To hit home with the points above, I usually turn to poetry, due to the highly human-centric nature of the medium: poems are condensed humanity, written by people, for people, usually about people. The following poem provides a useful focal point to illustrate the humanity gap [8] in Computational Creativity.

———————————————————————————————

Childbirth

by Maureen Q. Smith

The joy, the pain, the begin again. My boy.

Born of me, for me, through my tears, through my fears.

———————————————————————————————-

This short poem naturally invites interpretation, and we might think of the joy, pain, tears and fears as referring literally to the birth of a child, perhaps from the first-person perspective of the author, as possibly indicated by “My boy … Born of me”. We might also interpret the “begin again” as referring to the start of a baby’s life, but equally it might reflect a fresh start for the family.

Importantly, the poem was not actually written by Maureen Q. Smith. The author was in fact a man called Maurice Q. Smith. In this light, we might want to re-think our interpretation. The poem takes on a different flavour now, but we can still imagine the male author witnessing a childbirth, possibly with his own tears and fears, reflecting the joy and pain of a woman giving birth. However, I should reveal that Maurice Q. Smith was actually a convicted paedophile when he wrote this poem, and it was widely assumed to be about the act of grooming innocent children, which he referred to as “childbirth”. The poem now affords a rather sinister reading, with “tears” and “fears” perhaps reflecting the author’s concerns for his own freedom; and the phrases “Joy and pain” and “Born of me, for me” now taking on very dark tones.

Fortunately, as you may have guessed, the poem wasn’t written by a paedophile, but was instead generated by a computer program using a cut-up technique. Thankfully, we can now go back and project a different interpretation onto the poem. Looking at “Joy and pain”, perhaps the software was thinking about… Well, the part about “Born of me, for me” must have been written to convey… Hmmmm. We see fairly quickly that it is no longer possible to project feelings, background and experiences onto the author, and the poem has lost some of its value. If the words have been put together algorithmically with nothing resembling the human thought processes we might have expected, we may also think of the poem as having lost its authenticity and a lot, if not all, of its meaning. We could, of course, pretend that it was written by a person. In fact, it’s possible to imagine an entire anthology of computer generated poems that we are instructed to read as if written by various people. But then, why wouldn’t we prefer to read an anthology of poems written by actual people?

For full and final disclosure: I actually wrote the poem and found it remarkably easy to pen a piece for which a straightforward interpretation changes greatly as the nature of the author changes. I’ve been using this provocative poem to try to change the minds of researchers in Computational Creativity research for a few years, in particular to try and shift the focus away from an obsession with the quality of output judged as if it were produced by a person. I’ve argued that the nature of the generative processes [9], how software frames its creations [10], and where motivations for computational creativity come from [11] are more important for us to investigate than how to increase the quality or diversity of output. This led to a study of the notion of computational authenticity [12], which pays into the discussion below.

As with pretty much all things generative, the advent of deep learning has led to a step change in the quality of the output of poetry generators, which have a long history dating (at least) as far back as an anthology entitled: “The Policeman’s Beard is Half Constructed” [13]. On the whole, the scientists pushing forward these advances have barely thought of addressing the deficiencies with these poems, namely that they were made by an inauthentic process. It is not impossible to imagine a poem-shaped computer generated text that would have been classed as a masterpiece had it been written by a person, but is not accepted by anyone as even being a poem, because public opinion has swung against inauthentic generative processes. I have for many years advocated using the name “c-poem” for the poem-shaped texts produced by computers. Just as people know that they won’t be unwrapping a beautifully bound e-book for their birthday, they should know that their ability to project human beliefs, emotions and experiences onto the author of a c-poem will be very limited.
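For concreteness, the cut-up technique invoked in the thought experiment above is almost trivially easy to mechanise, which is part of the point about inauthentic process. A minimal Python sketch (the function and source text are my own illustrations, not the generator behind any published c-poem):

```python
import random

def cut_up(source, n_lines=2, words_per_line=6, seed=None):
    """Burroughs-style cut-up: shuffle the words of a source text and
    deal them out into lines, with no model of meaning at all."""
    rng = random.Random(seed)
    words = source.split()
    rng.shuffle(words)
    lines = [" ".join(words[i * words_per_line:(i + 1) * words_per_line])
             for i in range(n_lines)]
    return "\n".join(lines)

source = ("the joy the pain begin again my boy "
          "born of me for me through my tears through my fears")
print(cut_up(source, n_lines=2, words_per_line=6, seed=42))
```

Every word in the output is drawn from the source with no intent behind its placement, which is exactly why the reader's attempt to project feelings and experiences onto the "author" of such a text collapses.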

 

Responses to the Rise of Creative AI

Returning to the observation that the quality of the artistic output of AI systems has much increased in recent years, we can consider some appropriate responses to this situation.

One response is to follow the lead of the Creative AI community, and disavow the idea that software should be developed to be fully creative, concentrating instead on using AI techniques to aid human creativity. This certainly simplifies the situation, with AI systems becoming just the latest tools for creative people. It is also a public-friendly response, as journalists, broadcasters and documentary makers (along with the occasional politician, member of the clergy, philosopher or royal) often publish missives about how AI software is going to take everyone’s job, strangle our cats and devalue our lives. On the whole, I believe it would be very sad if this response were to dominate the discourse and drive the field, as it would certainly curtail the dream of Artificial General Intelligence, which brought many of us into AI, and would limit the ways in which people interact with software, which has the potential to be much more than a mere muse or tool. Software systems we have developed in Computational Creativity projects can be seen as creative collaborators; motivating yet critical partners; and sometimes independent creative entities. We should not throw away the idea that software can itself be creative, as the world always needs more creativity, and truly creative AI systems could radically drive humanity forward.

A second response is to accept the point above that the processes and personality behind creative practice are indeed important in the cultural appreciation of output from generative AI systems. In this context, given that software won’t be particularly human-like anytime soon, we could say that it’s impossible to take an AI system seriously as an authentic creative voice. An extreme version of this argument is that machines will never be valuable in the arts because they are not human. I argue below that this is shortsighted and misses an opportunity to understand technology in situ. A closely related opinion is that people should or could dislike computer generated material precisely because it has been made by a computer. This point of view has certainly been simmering under the surface of many conversations I’ve had, leading people to talk of computers lacking a soul or a spark, and often employing other such obfuscating rhetoric. Perhaps surprisingly, I’ve argued on a number of occasions that such a view is not extreme, and is indeed perfectly natural: it would, in my opinion, be a suitable personal response to the childbirth poem above, if indeed it had been computer generated.

Well-intentioned people would never dream of saying that they dislike something because it was produced by a particular minority (or majority) group of people. Hence it feels prejudicial to those people to say that a painting, poem or composition is inferior purely because it was computer generated. Moreover, the view that works such as paintings and novels should be evaluated on their own terms, i.e., independently from information about their author and the creative process, has been reinforced philosophically by movements such as the Death of the Author [14], and numerous artistic manifestos.

Software systems do not form a minority human group whose creative freedom has to be protected. Throughout the history of humanity, art has been celebrated as a particularly human endeavour, and the art world is utterly people-centric. Software is not human, but due to decades of anthropomorphic thinking on AI, it seems more acceptable to think of computers somehow as under-evolved or under-developed humans, perhaps like monkeys or toddlers, rather than as non-humans with intelligence, albeit low. Disliking a work of art purely because of its computational origins is more akin to expressing a preference for one type of process over another than to expressing a preference for one ethnicity, gender or religion over another. “I don’t like this painting because it is a pointillist piece” is not the same as: “I don’t like this painting because it was painted by a Brit”.

So, we could say that, while the output of the current/future wave of generative AI systems is remarkable, and could – under Turing-style conditions of anonymity – be taken for human works, there is a natural limiting factor in the non-humanity of computational systems which gives us a backstop against the devaluation of human artistic endeavour. This is a reasonable response and may lead to increased celebration of human creativity, which would be no bad thing. However, I believe that this response will also (eventually) be limiting and lead to missed opportunities, as I hope to explain below.

A third response, which I greatly favour, is to start from the truism that software is not human. In many research and industry circles, it often seems that creating human-like intelligence through neuroscience-inspired approaches such as deep learning is the only goal and the only approach. Not every AI researcher wants to build a software version of the brain, but this is often forgotten, which helps to obscure the fact that software has different experiences to people. The Painting Fool is software that I’ve developed over nearly 20 years [15], and it has met minor and major celebrities and painted their portraits in half a dozen different countries, often in front of large audiences in interesting venues ranging from science museums and art galleries to a pub in East London. I have, of course, anthropomorphised this experience, and The Painting Fool didn’t experience it as I have portrayed it. But it did have experiences, and those experiences were authentic in the sense that the software was present, did interact with people and created things independently of me which entertained and provoked people in equal measure.

We could therefore respond to the uptick in quality of output from Creative AI systems by agreeing to concentrate more on investigating plausible internal reasons for software to be creative, and developing ways in which it can impart its understanding of the world, through expressing aspects of its life experiences. Instead of challenging human creativity in terms of the quality of output, but failing due to lack of authenticity, Computational Creativity systems could be developed to explore aspects of creative independence such as intrinsic motivation, empowerment [10] and intentionality [8]. A side effect of this is that – if we get software to record and use its own experiences rather than pretending that it is a person having human experiences – we will gain a better understanding of computer processing, the impact of particular software systems and what it means for a machine to have a cultural existence in our human world. It may be that this communicative side effect actually becomes more important than having software be creative for the purpose of making things.

If software can express its experience of the world through artistic expression, surely this would add to our understanding of human culture in a digital age of tremendous, constant technological change. While the non-human life experiences of software systems can seem otherworldly, automation is very much a part of the human world, and our increasing interaction on a minute-by-minute basis with software means we should be constantly open to new ideas for understanding what it does. It’s not so strange to imagine building an automated painting system to add on to another piece of software so that it can express aspects of its experience. In fact, this would be a natural generalisation of projects such as DeepDream [16], where visualisations of deep-learned neural models were originally generated to enable people to better understand how the model processed image data. It turned out that the visualisations had artistic value as computational hallucinations, and were presented in artistic contexts, with this usage eventually dominating, fuelling a huge push in generative neural network research and development.
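The DeepDream mechanism just mentioned — gradient ascent on the input to maximise the activation of chosen units in a trained network — can be illustrated with a toy single linear-plus-ReLU layer. The random weights below merely stand in for a deep-learned model, and the whole setup is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))   # stand-in for one learned layer
x = 0.1 * rng.normal(size=16)  # stand-in for the input "image"

def activation(img, unit):
    """ReLU response of one unit in the layer."""
    return max(0.0, float(W[unit] @ img))

unit, step = 3, 0.1
before = activation(x, unit)
for _ in range(50):
    # Gradient ascent: nudge the input in the direction that most
    # increases this unit's pre-activation (for a linear layer,
    # that direction is simply the unit's weight vector).
    x = x + step * W[unit]
after = activation(x, unit)
```

In DeepDream proper the same ascent runs by backpropagation through a deep convolutional network, and the modified input is what gets exhibited as a "computational hallucination".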

 

A Roadmap from Creative AI to Computational Creativity

In a talk at a London Creative AI meetup event a while ago, I offered some advice for people in the Creative AI community who might be interested in pursuing the dream of making genuinely creative AI systems. At the time, there were already indications that Creative AI practitioners were beginning to see the limitations of mass generation of high-quality artefacts and were interested in handing over more creative responsibility to software. Some people were already testing the water using deep learning techniques in ways other than pastiche generation, for instance looking at style invention rather than just style transfer [17]. The advice I gave can be seen as a very rough roadmap, which reflects to some extent my own career arc in building creative AI systems, and provides one of many paths by which people can take their generative system into fascinating new territories.

While keeping much of the original, I will re-draw the roadmap below from a fresh perspective: improving authenticity by expanding the recording and creative usage of the life experiences that creative software might have. It is presented as a series of seven levels through which Creative AI systems can transition via increased software engineering and cultural usage, with each level representing a different type of system that the software graduates to become. Focused on generative visual art rather than poetry/music/games/etc., but intended to generalise across many domains, the roadmap offers direct advice to people who already have a generative system.

  • Generative Systems. So, you’ve designed a generative system and are having fun making pictures with it. You play around with input data and parameter settings, and realise that the output is not only high quality, but really varied. You write a little graphical user interface, which enables you to play around with the inputs/parameters, and this increases the fun and the variety. It becomes clear that the space of inputs/parameters is very large. You begin to suspect that the space of novel outputs is also vast. You’re at level one: you have an interesting generative system which is able to make stuff.

 

  • Appreciative Systems. Generating images becomes addictive, and you gorge on the output. In your gluttony, you get a strong fear of missing out – what if I miss the parameters for a really interesting picture? You decide to systematically sample the space of outputs, but there are millions of images that can be produced. So, you encode your aesthetic preferences into a fitness function and get the software to rank/display its best results, according to the fitness function, perhaps tempered by a novelty measure to keep things fresh. You’re at level two: you have an appreciative system which is able to discern quality in output.

 

  • Artistic Systems. At some stage, some humility sinks in, and you begin to think that maybe… just maybe… your particular aesthetic preferences aren’t the only ones which could be used to mine images. You give the software the ability to invent its own aesthetic fitness functions and use them to filter and rank the images that it generates. You’re at level three, with an artistic system which has some potential to affect the world artistically.

 

  • Persuasive Systems. Some of the output is great – beautiful new images that you perhaps wouldn’t have found/made yourself. But some of the pictures are unpalatable and you can’t imagine why the software likes them. However, sometimes, an awful image grows in appeal to you, and you realise that your own aesthetic sensibilities are being changed by the software. This is weird, but fun. You want to give the software the ability to influence you more easily, so you add a module which produces a little essay as a commentary on the aesthetic generation, the artefact generation and the style that the software has invented. You’re at level four, with a persuasive system that can change your mind through explanations as well as high quality, surprising output.

 

  • Inventive Systems. You begin to realise that you enjoy the output partially because of what it looks like and partially because of the backstory to the generation of the output and the aesthetics being considered. You want to increase both aspects by enabling the software to alter its own code, perhaps at process level, and by taking inspiration from outside sources like newspapers, Twitter, art books, other artists, etc., so you have less control. And you add natural language generation to turn the commentary about the process/product into a little drama. You’re at level five, where what your inventive system does is as important, interesting and unpredictable as its output.

 

  • Authentic Systems. You’re loving the commentaries/essays/stories about how and why your software has made a particular picture/aesthetic/style/series or invented a new technique, and the software pretty much has an artistic persona. However, sometimes the persona doesn’t ring true and actually verges on being insulting, given how little the software knows about the world. You realise that you’re reading/viewing the output as if it were created by a person, a falsehood that has grown old and somewhat disturbing. You decide to give the software plausible and believable reasons to be creative, by implementing models of intrinsic motivation, reflection, self-improvement, self-determination, empowerment and maybe even consciousness. In particular, much of this depends on implementing techniques to record the life experiences that your software has, via: sensors detecting aspects of the environment the software operates in; improved in-situ and online HCI, wherein the software’s interactions with people are recorded and the software is able to probe people with questions; and methods which take life experiences and outside knowledge and operationalise them into opinions that can be reflected in generative processing and output. You then give the software the ability to use its recorded life experiences to influence its creative direction, in much the same way that Twitter and newspaper sources were used previously. You’re at level six, with an authentic system that is seen more as an autonomous AI individual than as a pale reflection of a person.

 

  • Philosophical Systems. Ultimately, you find it thrilling to be in the presence of such an interesting creator as your software – it’s completely independent of you, and it teaches you new things, regularly inspiring you and others. You realise that for the software to be taken seriously as an artist, it needs to join the debate about what creativity means (as creativity is an essentially contested concept [18]) in practice and as a societal driving force. You implement methods for philosophical reasoning based on the software’s own creative endeavours, and you enable it to critique the thoughts of others. You add dialogue systems to propose, prove and disprove hypotheses about the nature of creativity, enabling your system to generally provoke discussion around the topic. You’re at level seven, where it’s difficult to argue that your philosophical system isn’t genuinely creative.
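To make levels two and three above concrete, here is a toy sketch of an appreciative/artistic system: a fitness function over hypothetical image features, tempered by a novelty measure, with the “invented” aesthetic being just a random re-weighting of those features. The feature names, weights and scoring scheme are all my own assumptions, not drawn from any real system:

```python
import random

rng = random.Random(0)
FEATS = ["symmetry", "contrast", "hue_spread"]

# Hypothetical pre-computed features for 100 generated images.
images = [{f: rng.random() for f in FEATS} for _ in range(100)]

def invent_aesthetic(rng):
    """Level three: the software 'invents' an aesthetic as its own
    weighting over the features."""
    return {f: rng.uniform(-1.0, 1.0) for f in FEATS}

def score(img, aesthetic, seen, novelty_weight=0.5):
    """Level two: fitness under the aesthetic, tempered by novelty
    (distance to the images already shown, to keep things fresh)."""
    fitness = sum(w * img[f] for f, w in aesthetic.items())
    if seen:
        novelty = min(sum(abs(img[f] - s[f]) for f in FEATS) for s in seen)
    else:
        novelty = 1.0
    return fitness + novelty_weight * novelty

aesthetic = invent_aesthetic(rng)
pool, shown = list(images), []
for _ in range(3):  # surface three images, preferring quality and freshness
    best = max(pool, key=lambda im: score(im, aesthetic, shown))
    pool.remove(best)
    shown.append(best)
```

The later levels resist this kind of sketch: persuasion, invention and authenticity are precisely where the roadmap leaves well-trodden engineering behind.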

 

It is fair to say that no AI system yet gets close to levels 6 and 7, but projects presented in Creative AI and Computational Creativity circles have tested the water up to and including level 5. If I were giving a talk about this roadmap, there would be much handwaving towards the end, as the road gets very blurry, with few signposts. This, of course, is the frontier of Computational Creativity research and reflects directions I will personally be taking software like The Painting Fool in. I’m particularly interested in exploring the notion of the machine condition and seeing how authentic we can make the processing and products from AI systems. That notwithstanding, I hope the roadmap offers some insight and inspiration to people from all backgrounds who are working with cool generative systems and want to take the project further.

 

In Conclusion

More than a decade ago, I was dismayed to read in a graphics textbook the following statement:

“Simulating artistic techniques means also simulating human thinking and reasoning, especially creative thinking. This is impossible to do using algorithms or information processing systems.” [19, p. 113]

The topic of the textbook is Non-photorealistic Computer Graphics, part of which involves getting software to simulate paint/pencil/pastel strokes on-screen. Stating that computational creative thinking is impossible was short-sighted and presumably written to placate creative industry practitioners, who use software like the Adobe Creative Suite, which employs such non-photorealistic graphics techniques. In the 17 years since the above statement was published, the argument seems to have moved on from whether software can be independently creative to whether it should be allowed to be. It is my sincere hope that the argument will soon shift to the question of how best truly creative AI systems can enhance and inform the human world, and how we can use autonomous software creativity to help us understand how technology works.

Creative AI practitioners have emerged as much via scientists in the machine learning community embracing art practice as via tech-savvy artists picking up and applying tools such as TensorFlow [20]. Speaking personally, having witnessed numerous transitions, I find that scientists tend to hold on too long to the idea that product is more important than process or personality in creative practice [21]. This is presumably because scientific evaluation aims to be objective, with scientific findings expected to be evaluated entirely independently of their origins.

It would be tempting to follow the lead of companies like DeepMind, who often justify working on applications to the automated playing of board games and video games [22] by stating that this research pushes forward AI technologies in general, which ultimately leads to improvements in applications to other, more worthwhile, domains like protein structure prediction [23] and healthcare. Getting software to produce better poems, paintings, games, etc., will likely lead to improvements in AI techniques overall, so concentrating on improving quality of output is in some senses a good thing. However, this would serve to deflect from what I believe is a looming crisis in Creative AI, which will come when the novelty of the computer generation gimmick wears off, and people begin to realise that authenticity of process, voice and life experience matters more than the so-called “quality” of computer generated artefacts.

The activities of playing games and predicting protein structures have the luxury of objective measures for success and thus progress (beating other players and nanoscale accuracy, respectively). This is not true in the arts, where there are only subjective – and highly debated – notions of the “best” painting, poem, game or musical composition. The humanity wrapped up in artefacts produced by creative people is absolutely critical in the evaluation of those artefacts, which is not true in scientific or (to a lesser extent) competitive scenarios.

It is similarly tempting to appeal to the creative outcomes of the AlphaGo match against Lee Sedol, which have been described beautifully by Cade Metz in [24]:

“In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.”

“But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine – no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.”

In the thought experiment above, in the corpus of 10,000 new symphonies generated by computer, there would surely be many moments of inventive genius: a phrase, passage or flourish of orchestration found in the notes of the music produced. Humankind would learn from the software, and would in turn develop better generative approaches to music production. But would we necessarily learn anything about the human condition, as we generally hope to in the arts?

I posit that only if software is developed to record its life experiences and use them in the pursuit of creative practice will we learn anything about the human condition, through increased understanding of the machine condition. Developing better AI painters means engineering software with more interesting life experiences, not software with better technical abilities. While there might be advantages, there is no imperative for these life experiences to be particularly human-like, and society might be better served if we try to understand computational lives through art generation. We hear all the time that the workings of black-box AI systems deep-learned over huge datasets are not understood even by the researchers on the project. While this difficulty is usually overstated, we face ever more scenarios in which AI-enhanced software makes decisions of real import for us, coupled with ever less understanding of how individual AI systems make those decisions.

Combining the best practices and understanding gained from both Computational Creativity as a research field and Creative AI as an artistic and technological movement may be the best approach to bringing about a future enhanced by creative software expressing its life experiences artistically for our benefit. The diversity, enthusiasm and innovative thinking coming daily from the Creative AI community, guided by the philosophy of the Computational Creativity movement, is a potent combination, and I’m optimistic that in my lifetime we will reap the benefits of cross-discipline, cross-community collaborations. Creative AI practitioners may rail against interventions from people like myself: stuffy academic disciples of the Computational Creativity discipline. But it is worth mentioning that we were once the angry young men and women of a largely ostracised and ignored arm of AI, shouting into the void at an establishment that thought notions of creativity in AI systems were too “wooly” to be taken seriously.

Who knows what history will record about the rise of creative machines in society. My sincere hope is that it will chart how Computational Creativity thinking evolved without the benefit of sophisticated technical implementations; how this was massively influenced by a surge in the technical abilities of Creative AI systems during the period of deep learning dominance; and how the field then naturally turned back to the philosophical thinking of Computational Creativity in order to properly reap the benefits of truly creative technologies in society.

 

References

[1] Cardoso, A., Veale, T. and Wiggins, G. A. (2009). Converging on the divergent: The history (and future) of the international joint workshops in computational creativity. AI Magazine, 30(3), 15–22.

[2] Simon, H., and Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1-10.

[3] Colton, S. and Wiggins, G. A. (2012). Computational Creativity: A Final Frontier? Proceedings of the European Conference on Artificial Intelligence, 2012.

[4] Cook, M. and Colton, S. (2018). Neighbouring Communities: Interaction, Lessons and Opportunities. Proceedings of the Ninth International Conference on Computational Creativity.

[5] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014). Generative Adversarial Networks. Proceedings of the International Conference on Neural Information Processing Systems.

[6] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. and Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature 550, 354-359.

[7] Max, D. T. (2011) The Prince’s Gambit: A chess star emerges for the post-computer age. New Yorker, March 14th 2011 edition.

[8] Colton, S., Cook, M., Hepworth, R. and Pease, A. (2014). On Acid Drops and Teardrops: Observer Issues in Computational Creativity. Proceedings of the AISB’50 Symposium on AI and Philosophy.

[9] Colton, S. (2008). Creativity versus the Perception of Creativity in Computational Systems. Proceedings of the AAAI Spring Symposium on Creative Systems.

[10] Charnley, J., Pease, A. and Colton, S. (2012). On the Notion of Framing in Computational Creativity. Proceedings of the Third International Conference on Computational Creativity.

[11] Guckelsberger, C., Salge, C. and Colton, S. (2017). Addressing the “Why?” in Computational Creativity: A Non-Anthropocentric, Minimal Model of Intentional Creative Agency. Proceedings of the Eighth International Conference on Computational Creativity.

[12] Colton, S., Pease, A. and Saunders, R. (2018). Issues of Authenticity in Autonomously Creative Systems. Proceedings of the Ninth International Conference on Computational Creativity.

[13] Chamberlain, W. and Etter, T. (1984). The Policeman’s Beard is Half-Constructed: Computer Prose and Poetry. Warner Books.

[14] Barthes, R. (1967). The death of the author. Aspen 5-6.

[15] Colton, S. (2012) The Painting Fool: Stories from building an automated painter. In McCormack, J. and d’Inverno, M., eds., Computers and Creativity, 3–38. Springer.

[16] Mordvintsev, A., Olah, C. and Tyka, M. (2015). DeepDream – a code example for visualizing Neural Networks. Google AI Blog, July 1st 2015.

[17] Elgammal, A., Liu, B., Elhoseiny, M. and Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. Proceedings of the Eighth International Conference on Computational Creativity.

[18] Gallie, W. (1956). Art as an essentially contested concept. The Philosophical Quarterly 6(23), 97–114.

[19] Strothotte, H. and Schlechtweg, S. (2002). Non-Photorealistic Computer Graphics: Modelling, Rendering and Animation. Morgan Kaufmann.

[20] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jozefowicz, R., Jia, Y., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Schuster, M., Monga, R., Moore, S., Murray, D., Olah, C., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y. and Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.

[21] Jordanous, A. (2016). Four PPPPerspectives on computational creativity in theory and in practice. Connection Science special issue on Computational Creativity, 28(2), 194-216.

[22] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A., Veness, J., Bellemare, M., Graves, A., Riedmiller, M., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S. and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature 518, 529-533.

[23] Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., Qin, C., Zidek, A., Nelson, A., Bridgland, A., Penedones, H., Petersen, S., Simonyan, K., Crossan, S., Jones, D., Silver, D., Kavukcuoglu, K., Hassabis, D. and Senior, A. (2018). De novo structure prediction with deep-learning based scoring. Proceedings of the Thirteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstracts).

[24] Metz, C. (2016). In Two Moves, AlphaGo and Lee Sedol Redefined the Future. Wired, 16th March 2016 edition.

The post From Computational Creativity to Creative AI and Back Again appeared first on Interalia Magazine.

Is consciousness a battle between your beliefs and perceptions?

Imagine you’re at a magic show, in which the performer suddenly vanishes. Of course, you ultimately know that the person is probably just hiding somewhere. Yet it continues to look as if the person has disappeared. We can’t reason away that appearance, no matter what logic dictates. Why are our conscious experiences so stubborn?

The fact that our perception of the world appears to be so intransigent, however much we might reflect on it, tells us something unique about how our brains are wired. Compare the magician scenario with how we usually process information. Say you have five friends who tell you it’s raining outside, and one weather website indicating that it isn’t. You’d probably just consider the website to be wrong and write it off. But when it comes to conscious perception, there seems to be something strangely persistent about what we see, hear and feel. Even when a perceptual experience is clearly ‘wrong’, we can’t just mute it.

Why is that so? Recent advances in artificial intelligence (AI) shed new light on this puzzle. In computer science, we know that neural networks for pattern recognition – so-called deep learning models – can benefit from a process known as predictive coding. Instead of just taking in information passively, from the bottom up, networks can make top-down hypotheses about the world, to be tested against observations. They generally work better this way. When a neural network identifies a cat, for example, it first develops a model that allows it to predict or imagine what a cat looks like. It can then examine incoming data to see whether it fits that expectation.
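A minimal sketch of that predictive-coding loop, using a scalar latent cause and an identity generative model (both simplifying assumptions), shows the core idea: the system holds a top-down hypothesis and repeatedly corrects it with the bottom-up prediction error, rather than passively ingesting the input.

```python
import numpy as np

rng = np.random.default_rng(1)
true_cause = 2.0
observation = true_cause + 0.05 * rng.normal()  # noisy sensory input

estimate = 0.0       # initial top-down hypothesis about the cause
learning_rate = 0.2
for _ in range(40):
    prediction = estimate              # generative model (identity here)
    error = observation - prediction   # bottom-up prediction error
    estimate += learning_rate * error  # revise the hypothesis
```

After a few dozen iterations the hypothesis has converged on the observation; in a real network the "generative model" is a learned, multi-layer mapping rather than the identity.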

The trouble is, while these generative models can be super efficient once they’re up and running, they usually demand huge amounts of time and information to train. One solution is to use generative adversarial networks (GANs) – hailed as the ‘coolest idea in deep learning in the last 20 years’ by Facebook’s head of AI research Yann LeCun. In GANs, we might train one network (the generator) to create pictures of cats, mimicking real cats as closely as it can. And we train another network (the discriminator) to distinguish between the manufactured cat images and the real ones. We can then pit the two networks against each other, such that the discriminator is rewarded for catching fakes, while the generator is rewarded for getting away with them. When they are set up to compete, the networks grow together in prowess, not unlike an arch art-forger trying to outwit an art expert. This makes learning very efficient for each of them.
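The adversarial game just described can be boiled down to a one-dimensional toy: a generator that shifts noise by a learned offset, and a logistic discriminator, each taking alternating gradient steps. The data distribution, parameterisation and learning rate here are all simplifying assumptions, not taken from any published GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1). The generator must learn to mimic it.
mu = 0.0          # generator parameter: g(z) = mu + z, with z ~ N(0, 1)
w, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(400):
    real = rng.normal(4.0, 1.0, size=32)
    fake = mu + rng.normal(0.0, 1.0, size=32)

    # Discriminator step: rewarded for catching fakes
    # (push D(real) towards 1 and D(fake) towards 0).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: rewarded for getting away with it
    # (push D(fake) towards 1).
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)
```

Over training, the generator's offset drifts towards the real data's mean, because moving there is the only way to keep fooling the discriminator — the forger and the expert growing in prowess together.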

As well as being a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. You have to be able to smell something before you can go about deciding whether the perception is real or fake; so why not just build a check into the detecting mechanism itself?

In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.

If this account is right, it’s fair to say that conscious experience is probably akin to a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.

This perceptual stubbornness is not just a feature of humans. Some primates have it too, as shown by their capacity to be amazed and amused by magic tricks. That is, they seem to understand that there’s a tension between what they’re seeing and what they know to be true. Given what we understand about their brains – specifically, that their perceptual neurons are also ‘recyclable’ for top-down functioning – the GAN theory suggests that these nonhuman animals probably have conscious experiences not dissimilar to ours.

The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either. At least by fleshing out some conjectures about the machinery of consciousness, we can begin to test them against our intuitions – and, more importantly, in experiments. What we do know is that a model of the mind involving an inner mechanism of doubt – a nit-picking system that’s constantly on the lookout for fakes and forgeries in perception – is one of the most promising ideas we’ve come up with so far.

Hakwan Lau

This article was originally published at Aeon and has been republished under Creative Commons.

The post Is consciousness a battle between your beliefs and perceptions? appeared first on Interalia Magazine.

The Maths of Life and Death

 

Q & A with Kit Yates:

Maths is an unloved subject. It’s a commonplace view that maths is hard, that maths is abstract and removed from everyday concerns. Why do you think that is?

There’s no doubt that maths is perceived as polarising; despised by many and loved by just a few. As a mathematician interested in sharing the wonders of my subject, my biggest struggle is with this self-imposed false dichotomy: those who believe that they can do maths and those who think they can’t. There are far too many of the latter. But there is almost no-one who understands no maths at all, no-one who cannot count. At the other extreme, for hundreds of years there have been no mathematicians who understand all of known mathematics. We all sit somewhere on this spectrum; how far we travel to the left or to the right depends on how much we think this knowledge can be useful to us. Exposing the uses and importance of maths in everyday life is one way to shift people along the spectrum, to bring them into the middle ground.

This is exactly what I’ve tried to do in my book. It’s important to say upfront that The Maths of Life and Death is not a maths book. Nor is it a book for mathematicians. There isn’t a single equation in it. The point of the book is not to bring back memories of the school mathematics lessons you might have given up years ago. Quite the opposite. If you’ve ever been disenfranchised and made to feel that you can’t take part in mathematics or aren’t good at it, consider this book an emancipation.

I genuinely believe that maths is for everyone and that we can all appreciate the beautiful mathematics at the heart of the complicated phenomena we experience daily. If you’ve ever been made to feel that you can’t comprehend maths or aren’t good at it, I say this: you are experiencing it all the time, perhaps without even knowing it. Mathematics, at its most fundamental, is pattern. If you spot a motif in the fractal branches of a tree, or in the multi-fold symmetry of a snowflake, then you are seeing maths. When you tap your foot in time to a piece of music, or when your voice reverberates and resonates as you sing in the shower, you are hearing maths. If you bend a shot into the back of the net or catch a cricket ball on its parabolic trajectory, then you are doing maths. Part of the job I undertake in the book is to highlight the places where people are using maths, intuitively, perhaps without even realising it.

Unfortunately, all too often, mathematics is viewed as a sterile, abstract subject: at best an esoteric plaything for out-of-touch academics, and at worst a waste of school children’s time and taxpayers’ money. Few explanations of everyday mathematics filter through to non-specialists. Instead they are told that mathematics is inaccessible and inscrutable. Mathematics is often lauded for its beauty, its purity, its abstraction and otherworldliness; untainted by the messy details of reality. But for me, an applied mathematician, mathematics is first and foremost a practical tool to make sense of our complex world. Mathematical modelling can give us an advantage in everyday situations, and it doesn’t have to comprise hundreds of tedious equations or lines of computer code to do so. In fact, the simplest models are stories and analogies. For me, the stories that comprise this book – the most basic models – are the most useful of all. When viewed through the right lens we can tease out the hidden mathematical rules that underlie our common experiences.

Is this attitude to maths changing?

I think societal changes are slowly altering attitudes towards the importance of maths. As our economies change, there is growing awareness that we need more mathematicians, engineers and scientists to fill the increasing numbers of jobs in the technology sector. To some degree this is reflected in maths’ rise to becoming the most popular A-level choice. This rise in popularity has also impacted on the number of students continuing to study mathematics in higher education. I always tell students who come to visit my department at open days, and who are trying to make up their mind about whether to study maths or not, that by studying maths they will only open doors for themselves and never close them. It’s so easy to jump out of mathematics and into another discipline, but much harder to go back the other way.

For example, I myself am a mathematical biologist. When I tell people this, the reaction I get is usually a polite nodding of the head accompanied by an awkward silence, as if I was about to test them on their recall of the quadratic formula or Pythagoras’ theorem. More than simply being daunted, people struggle to understand how a subject like maths, which they perceive as being abstract, pure and ethereal, can have anything to do with a subject like biology, which is typically thought of as being practical, messy and pragmatic.

I dropped biology at sixth-form and took A-levels in maths, further maths, physics and chemistry. When I went to university, I had to further streamline my subjects, and felt sad that I had to leave biology behind forever; a subject I thought had incredible power to change lives for the better. I was hugely excited about the opportunity to plunge myself into the world of mathematics, but I couldn’t help worrying that I was taking on a subject that seemed to have very few practical applications. I couldn’t have been more wrong.

Whilst I plodded through the pure maths we were taught at university I lived for the applied maths courses. I listened to lecturers as they demonstrated the maths that engineers use to build bridges so that they don’t resonate and collapse in the wind, or to design wings that ensure planes don’t fall out of the sky. I learned the quantum mechanics that physicists use to understand the strange goings-on at subatomic scales and the theory of special relativity that explores the strange consequences of the invariance of the speed of light. I took courses explaining the ways in which we use mathematics in chemistry, in finance and in economics. I read about how we use mathematics in sport to enhance the performance of our top athletes and how we use mathematics in the movies to create computer-generated images of scenes that couldn’t exist in reality. In short, I learned that mathematics can be used to describe almost everything.

I think as people start to see the way in which mathematics is increasingly pervading their everyday lives and to understand how even a little mathematical knowledge can be of benefit in real life, its importance will be increasingly realized. I also believe that when students see that there is a point to the maths they are being taught, rather than just rote learning to pass an exam, that maths can be transformed into something enjoyable.

This is what The Maths of Life and Death is all about. I try to convince the reader that maths is so much more than the esoteric subject they left behind at school. It is the false alarms that play on our minds and the false confidence that helps us sleep at night; the stories pushed at us on social media and the memes that spread through it. Maths is the loopholes in the law and the needle that closes them; the technology that saves lives and the mistakes that put them at risk; the outbreak of a deadly disease and the best way to control it. It is the best hope we have of answering the most fundamental questions about the enigmas of the cosmos and the mysteries of our own species. It leads us on the myriad paths of our lives and lies in wait, just beyond the veil, to stare back at us as we draw our final breaths.

A common everyday use of maths is in shopping – a trip to the greengrocer is one of the most cited examples in school maths teaching – but what are some other everyday, and more unusual uses of maths?

It’s funny you should mention shopping, because there’s actually so much more maths to shopping than just working out your change. For example, stores have traditionally over-represented price tags which end in .99, .95 or .90. In the UK .99 is the third most common price ending after .00 and .50. The marketing theory goes that because we read left to right we take account of the first digits on price tags, but ignore everything to the right of the decimal point. Unwittingly we are being tricked into thinking products are cheaper than they are because our brains are always subconsciously rounding down. In the book I also provide a nice rule of thumb called ‘the 37% rule’ which uses the maths of optimisation to help you join the shortest queue in the supermarket.
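The "37% rule" mentioned above is the classic optimal-stopping result (often called the secretary problem): look at roughly the first 37% of your options without committing, then take the first one that beats everything you have seen. A minimal simulation of the queue-joining version, with queue lengths invented for illustration:

```python
import random

random.seed(1)

def pick(queue_lengths, skip_fraction=0.37):
    """Watch the first ~37% of queues, then join the first one shorter
    than everything seen so far (the last queue if none ever is)."""
    n = len(queue_lengths)
    k = int(n * skip_fraction)
    best_seen = min(queue_lengths[:k], default=float("inf"))
    for q in queue_lengths[k:]:
        if q < best_seen:
            return q
    return queue_lengths[-1]

# Estimate how often the rule lands on the single best of 20 options.
trials, hits = 20000, 0
for _ in range(trials):
    queues = random.sample(range(1000), 20)   # 20 distinct queue lengths
    if pick(queues) == min(queues):
        hits += 1
print(round(hits / trials, 3))   # close to the theoretical 1/e ≈ 0.37
```

Despite seeing the options only once, in random order, the rule picks the single best one about 37% of the time – far better than the 5% a blind guess among 20 would manage.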

Of course there are so many more places where maths appears in everyday life. In the book, we explore the true stories of life-changing events in which the application (or misapplication) of mathematics has played a critical role: patients crippled by faulty genes and entrepreneurs bankrupted by faulty algorithms; innocent victims of miscarriages of justice and the unwitting victims of software glitches. I follow stories of investors who have lost fortunes and parents who have lost children, all because of mathematical misunderstanding. I wrestle with ethical dilemmas from screening to statistical subterfuge and examine pertinent societal issues such as political referenda, disease prevention, criminal justice and artificial intelligence. I show that mathematics has something profound or significant to say on all of these subjects, and more.

Rather than just pointing out the places in which maths might crop up, I also try to arm the reader with simple mathematical rules and tools which can help them in their everyday life: from getting the best seat on the train, to keeping one’s head when on the receiving end of an unexpected test result from the doctor. I suggest simple ways to avoid making numerical mistakes and get my hands dirty with newsprint when untangling the figures behind the headlines. I also get up close and personal with the maths behind consumer genetics and display maths in action as I highlight the steps we can all be taking to help halt the spread of deadly diseases.

What are some of the benefits of a better understanding of maths?

A little mathematical knowledge in our increasingly quantitative society can help us to harness the power of numbers for ourselves. Simple rules allow us to make the best choices and avoid the worst mistakes. Small alterations in the way we think about our rapidly evolving environments help us to ‘keep calm’ in the face of rapidly accelerating change, or adapt to our increasingly automated realities. Basic models of our actions, reactions and interactions can prepare us for the future before it arrives. The stories relating other people’s experiences are, in my view, the simplest and most powerful models of all. They allow us to learn from the mistakes of our predecessors so that, before we embark on any numerical expedition, we ensure we are all speaking the same language, have synchronised our watches, and checked we’ve got enough fuel in the tank.

Half the battle for mathematical empowerment is daring to question the perceived authority of those who wield the weapons – shattering the illusion of certainty. Appreciating absolute and relative risks, ratio biases, mismatched framing and bias gives us the power to be sceptical of the statistics screamed from newspaper headlines, the ‘studies’ pushed at us in adverts or the half-truths that come tumbling from the mouths of our politicians. Recognising mathematical sleights of hand allows us to disperse obfuscating smoke screens, making it harder to fool us with mathematical arguments, be they in the courtroom, the classroom or the clinic.

We must ensure that the person with the most shocking statistics doesn’t always win the argument, by demanding an explanation of the maths behind the figures. We shouldn’t let medical charlatans delay us from receiving potentially life-saving treatment when the benefits of their alternative therapies are just a mathematical anomaly. We mustn’t let anti-vaxxers make us doubt the efficacy of vaccinations, when mathematics demonstrates that they can save vulnerable lives and wipe out disease.
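The vaccination point rests on a standard piece of epidemic arithmetic, not spelled out in the interview: if each infected person would infect R0 others in a fully susceptible population, an outbreak shrinks once more than 1 − 1/R0 of the population is immune. A quick sketch (the R0 values are rough textbook figures, assumptions rather than the book's own numbers):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction that must be immune so each case causes, on average,
    fewer than one new case: solves r0 * (1 - p) = 1 for p."""
    return 1.0 - 1.0 / r0

# Rough, illustrative reproduction numbers for three diseases.
for disease, r0 in [("seasonal flu", 1.3), ("smallpox", 5.0), ("measles", 15.0)]:
    print(f"{disease}: immunise ~{herd_immunity_threshold(r0):.0%} of the population")
```

For highly transmissible diseases like measles the threshold sits uncomfortably close to 100%, which is why even a modest dip in vaccine coverage can let outbreaks return.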

As I hope I show throughout the book, it is time for us to take the power back into our own hands, because sometimes maths really is a matter of life and death.

………………………….

https://kityates.com

 


In Praise of Form: Towards a New Post-Humanist Art

Today the litany of crises we face culturally and globally has become so familiar that it needs no further recitation. Indeed, so often are we reminded that the world has gone wrong that the word “crisis” has acquired a patina of banality. But this is to be an essay of hope, so let us move on. For, protests to the contrary notwithstanding, there is good reason for it: across many strata of Western culture, there is a growing awareness, uneasy though it may be, that we have at last identified the problem. The problem is not out there, in some externalized other (would that it were so, so much more palatable would this be). Reluctantly, shamefully, but profoundly necessarily, we are finally meeting the enemy, and he is us: the human animal that placed itself in the center of the universe, the one that first severed itself from nature and then elevated itself above it, and the one that, in imagining that this was really possible, has dug its own grave. We can call this progress.

Daniel Hill, “Untitled 37,” 2012. Acrylic polymer emulsion on paper mounted to panel, 44″ x 60″ (diptych). Courtesy of ODETTA Gallery.

To be fair, the problem is more specific, and can be located in an idea. Although for most of us in the West the word “humanism” still conjures little but benevolence (“human values,” “human rights, “human dignity,” etc.), it harbors an implicit ideology that many are now challenging. This is none other than its premise of human exceptionalism: the assumption that the human being is the source of all meaning and, even further, the ultimate reality. In light of everything we’re witnessing in our ignoble Anthropocene, it is becoming increasingly clear that humanism has been as mistaken as the theism it sought to replace, for just as God’s omnipotence reduced us to servitude, so ours has done the same to the non-human world. The call for a post-humanist worldview grows ever more compelling. Can we achieve a new way of being that honors the nonhuman world, one that acknowledges its inherent richness and restores it to its rightful place in the cosmos? Spatially, chronologically, and in just about every other way, it does, after all, rather greatly exceed us.

William Holton, “Point of Convergence,” 2010. Oil and acrylic on canvas, 35″ x 36″. Courtesy of the artist.

But what does any of this have to do with art, you may be asking. And this is exactly the point. The answer is nothing – or very little, just yet. While the so-called non-human turn has inundated the humanities, leading even to the proposal of a new “inhuman humanities,” visual art has undergone nothing of the kind. In fact, it could be argued that just the opposite has happened; with art’s preoccupation with social justice and an exhausted postmodernism, it’s easy for those of us in the field to forget that anything beyond us exists. Adding to this our inherited assumptions about art being “self-expression” (and lest we be inclined to dismiss this as a pedestrian notion, what is our current “identity art” if not exactly this?), it becomes clear that visual art is mired in an obsolescent human centrism. Indeed, if “everything is a social construct,” as postmodernism tells us, the human being isn’t just the highest but the only reality.

But aside from the societal orientation of much visual art today, there is a deeper sense in which art has been complicit in perpetuating an old idea. It’s much more subtle than subject matter, and has to do with our very expectations for and valuations of art. For as art becomes ever more discursive, prioritizing issues and ideas over the forms in which they’re instantiated, it is reinforcing the implicit values of the humanist fallacy.

Werner Sun, “Double Vision 1B,” diptych, 2018. Archival inkjet prints and acrylic on board, 12″ x 25″ x 2″. Courtesy of the artist.

The problem is made evident when we consider prevailing attitudes toward form. “Empty formalism,” “mere formalism,” “shallow form devoid of content”: in a time when art is expected to address this or that issue, form has become a critical embarrassment, something insufficient in itself but useful for one purpose – namely, to serve as the delivery system for the real substance that is “content.” So pervasive is the disdain for “mere form” that today’s artists’ statements often read as hyper-intellectualized apologia – discursive treatises announcing in advance that there’s no “mere” happening here. And yet in the privacy of their studios, in the presence of that trust they have only with each other, many artists will confess that it is precisely form – the interplay of shapes, colors, textures, and materials, and the tensions and rhythms generated therein – that is not only captain but also navigator: the one with the first word, plenty in the middle, and certainly the last. A tacit understanding among those who make, discursive content is to many a mere maneuver of expediency.

David Mann, “YTB III,” 2016. Oil and alkyd on canvas stretched over board, 68″ x 72″.

Why the disavowal and disparagement of form? As our attitudes about art can’t be separated from the larger culture, we come back to humanism and its hierarchy of values. One of the most pernicious assumptions of the humanist worldview was its devaluation of the body and all that is associated with it. Carrying on the legacy of the great Cartesian cleavage, humanism had reason enthroned on high, casting off as inferior the emotions, the senses, all our autonomic functions – in short, anything rude enough to remind us that we are animals. And yet as today’s neuroscience has definitively shown, the body and the emotions are not separate from cognition; far from being “soft” and secondary faculties inferior to reason, they are in fact central to it, integral functions on which reason is entirely dependent. If form is something we apprehend with our senses and discursive content that which is grasped by the mind, the inferior status granted form is a tired recapitulation of the humanist error. But it is also more than this.  In denying form its rightful place in art, art is denying itself an exquisite opportunity. For if now is the time for us to move beyond ourselves, to reclaim our fleshly relations to earth, animal, and world, what better vehicle than the power of sensual form?

Debra Ramsay, “The Wind Turning in Circles Invents the Dance,” 2019. Acrylic on acrylic panel, 19″ x 18″. Courtesy of the artist.

In the spirit of the emerging ethos, then, can we imagine a new art for a post-humanist century? What would a post-humanist art look like, and how would it be experienced? First and foremost, a post-humanist art would be one that embraces form. It would be an art that considers form not as something that serves content, but rather as something that, like the body, possesses an intelligence of its own – an intelligence far deeper and more complex than conscious, discursive thought. In its address to the body and somatic experience, it would run directly counter to the prevailing emphasis on ideas, seeking not their propagation but exactly their cessation. For in order to gain access to the beyond-human world, conscious thought, discursive thought, must first be extinguished. Rather than focusing on the contents of consciousness, then, post-humanist art would alight on its structure – all the subtle rhythms and patterns that constitute its movement. And not least, being decidedly oriented away from the self – away from personal identity, above all that of the artist – a post-humanist art would be one of transcendence. For with the thinker that thought itself into the center of the world silenced, we become living organisms again just like all others, participating in, and exquisitely sensitive to, the dynamic flux of the natural world.

Linda Francis, “Nostalgia for Messier #2,” 1994. Chalk on paper, 52″ x 39″. Courtesy of the artist.

With the affirmation of form as the powerful force that it is, the question becomes how, exactly, it delivers us to the non-human. We can begin by examining how form works on us, and why it moves us so deeply when indeed it does. Of all the arts, visual art is singular in a particularly significant way, and this is that it is physically embodied.[1] Its material presence being the first thing we apprehend, we confront not just it but ourselves: body to body, there is a certain carnal reciprocity absent in music and literature. Grasping the whole with an uncanny instantaneity, the eye moves in to probe the parts and their interrelations – this part to that, these to those over there, all of them in active tension with the overall organization. Attraction and repulsion, assonance and dissonance, the ever-present tug of gravity that is the counterpoint to all visual form: whatever forces are enacted in the work’s particulars reverberate sympathetically on the instrument of our nervous system, causing subtle internal movements we cannot locate introspectively. Never fixing on any one area for too long, the eye is led by the forms in a rhythmic leaving and returning, ever expanding and contracting between the general and the particular. A kind of optical dance choreographed by the artist, the experience of viewing is far from the passive act of receiving information; rather, it is a profoundly active and participatory mode of engagement. When we say we are moved by a work of art, it is not just conceptual metaphor. In a very real sense, on every level of our organism we literally are moved. The experience of visual form is a distinct and particularly intense kind of electrochemical excitation.

But the real mystery of aesthetic form is not so much why it moves us but why it moves us so deeply. Why, when it does so, does it not merely delight? Why is it not just pleasant, the way the sound of a distant foghorn is pleasant, or the smell of fresh rain falling on stone, or the brush of a hand against the soft fur of an animal? Unlike these momentary pleasures, the experience of a great work of art seems in some way to change us, to rearrange the internal architecture on the deepest level of our being. And not only does it change us; it does so in a way that feels unusually significant. There is a profound rightness about it, a felt realignment, a re-membering of something unconsciously undone. Indeed, so right is the feeling that it has, in the largest sense, the quality of coming home.

Ed Kerns, “Degree of Freedom in a Liquid Field; Not Overwhelmed,” 2018. Acrylic on canvas, 40″ x 30″. Courtesy of the artist.

Perhaps the experience of aesthetic form feels like coming home precisely because it is coming home. Home, that is, to the world that gave rise to us: the world of inanimate matter in all its myriad manifestations, and the whole kingdom of sentient creatures from whom we are descended. For what is the nature of this non-human world if not an endless cycle of dynamic patterns, from the rhythms of the tides to the sonic undulations of the animals to the expansions and contractions of the earth moved by forces to all manner – not least life and death – of arisings and evanescings? If the world out there is constituted of patterns of movement, it is in their deep visceral experience that we gain access to that world, moving from a consciousness of separation to one of participation. The experience of aesthetic form is an active engagement in the largest kind of communion.

It is also, and not insignificantly, an act of self-recognition. For in transcending the thinker and entering the greater world, we find not just the greater world but the greater parts of ourselves: the millions of years of evolution we carry in our bodies, and all that constitutes, unbeknownst to us, the richest reservoirs of our intelligence. We all know the feeling of being thus transported. Little else is as satisfying. The separatist ego will return, of course, to reassert its authority, but the experience of having left it lodges deep in the body, where, like a benevolent nuisance, it reminds us of something we only half want to remember – namely, that we live most of our lives locked in the smallest room in the house. Summoned on occasion by the exquisite rightness of a form, it comes back, and there we are again, and again we have to humbly concede that we really should get out more.

Yoshiaki Mochizuki, “Untitled, 6/6,” 2012. Gesso on board, clay, palladium leaf, and ink, 10.5″ x 10.5″. Courtesy of the artist and Marlborough, New York and London.

While it may not be our only means of participating in the Great Beyond, aesthetic form is surely one of the most powerful. If visual art continues to dismiss it, insisting on art’s identity as a discursive enterprise, it may end up on the losing side of our century’s catastrophe. For if the arrogance of reason is what brought us to where we are, it can hardly be expected to be the thing to get us out. What we need is reason reunited with the sensorium that sustains it and with the misconceived “other” that gave rise to it in the first place. And what is art if not an agent of integration, and what are artists if not those who know how to show us what that might look like? So let us reclaim form. Let us reclaim it as the transformative force it always was, and let us reclaim it in the name of something larger than ourselves – something beyond art, beyond culture, beyond even human history, something that, in returning us to our smallness, grants us full citizenship in the greatest largeness.

[1] Unless it is not. There is certainly much conceptual art that lacks any material component, but our focus here is on visual art that is visual – which is to say visual art that has sensual form.

……………………………..

http://www.concatenations.org/
