What we thought we were doing (and I think we succeeded fairly well) was treating the brain
as a Turing machine; that is, as a device which could perform the kind of functions which a brain
must perform if it is only to go wrong and have a psychosis.
Warren McCulloch (1948)1
In 1948, at a conference on circuits and brains in Pasadena, California, the cybernetician and neural net pioneer Warren McCulloch addressed a room of the most prominent mathematicians, psychologists, and physiologists of the day. In his comments he sought to titillate his respectable audience by offering them a seemingly counterintuitive analogy. Finite state automata, those models of calculative and computational reason, the templates for programming, the very seats of repetition, reliability, and mechanical, logical, and anticipatable behavior, were “psychotic” but brain-like.
These statements cannot, however, be thought in terms of human subjectivity or psychology. McCulloch, while trained as a psychiatrist, was not discussing patients in mental clinics. Rather, he was responding to a famous paper delivered by the mathematician John von Neumann on logical automata.2 The psychiatrist had no intention of arguing about the essential characteristics, the ontology, of machines or minds. He recognized that computers were not yet the same as organic brains. The question of equivalence was not at stake.
What was at stake was the set of methodologies and practices, the epistemology that might build new machines – whether organic or mechanical. And the answer, both McCulloch and von Neumann suggested, was to develop a new form of logic, an epistemology they labeled “psychotic” and rational, that might make processes usually assigned to analytic functions of the brain, perhaps associated with consciousness and psychology, amenable to technical replication. McCulloch gave voice to an aspiration to turn a world framed in terms of consciousness and liberal reason, into one of control, communication, and rationality. And he did not dream alone. At this conference where many of the foremost architects of Cold War computing, psychology, economics, and life sciences sat, we hear a multitude of similar statements arguing for a new world, now comprised of “psychotic” but logical and rational agents.
I want to take this turn away from pathology and reason, to a new discourse of cognition and rationality, as a starting point to consider the relationship between memory, reason, and temporality in cybernetic discourse. McCulloch and the works of his colleagues serve as useful vehicles to begin examining the displacement of older concepts of agency, consciousness, and autonomy into circuits, cognition, and automata.
But if McCulloch was voicing the desire for a new form of intelligence, he was only doing so through the language of an older psychoanalytic psychology. While explicitly antagonistic to psychoanalysis, he defined psychosis in terms that largely conformed to the psychoanalytic definitions of the day. And his model was troubled by questions of memory and time, features he assigned to human psychology:
Two difficulties appeared [when making neurons and logic gates equivalent]. The first concerns facilitation and extinction, in which antecedent activity temporarily alters responsiveness to subsequent stimulation of one and the same part of the net. The second concerns learning, in which activities concurrent at some previous time have altered the net permanently, so that a stimulus which would previously have been inadequate is now adequate.3
Stated otherwise, for McCulloch historical time presented challenges to making thought and logic equivalent. As a result, McCulloch managed to build a new machine-mind out of the matter of neurons, but the problems of temporal organization, change, and perhaps even consciousness continued to trouble him. This paper is therefore not a theory of “psychosis” but rather a historical investigation into the relationship of psychoanalysis to cybernetics, and of cybernetics to temporality.
For cyberneticians, cognitive scientists, and neuroscientists seeking to logically represent thought, the literal mechanisms of thinking always haunted the computational models. Older histories of science, psychology, and philosophy, invested in the surplus and un-representable elements of human thought, troubled the new machinery of social and human science. Despite therefore proclaiming that “Mind no longer goes ghostly as a ghost” and that psychoanalytic concepts could now be disavowed in favor of neurophysiology, McCulloch continued to be haunted and animated by these ongoing problems of organizing time and space inside of networks.4
This discourse of time and logic also speaks to our contemporary debates about affect and time. If today we continue to insist that human beings are not liberal, enlightened, reasonable, or Cartesian, this insistence comes within a history of the human and social sciences that has long turned such pronouncements into entities such as financial instruments, psychiatric diagnostic manuals, and international relations models. I seek to offer historical sustenance and complication to contemporary accounts of biopolitics and affect that argue that life and capital are now inextricably linked through digital computation.5 This ever greater intertwining of economy and life, down to the affective, sensory, and even neural scale, has a history.
This piece will suggest that the mechanism driving economic and vital processes into such intimacy, if not equivalence, is not a single technology but rather an epistemological change in the type of truth claims and questions posed by cyberneticians about reason and calculation that have influenced the social, human, life, and computational sciences since the late 1940s. These speculative discourses, first murmured in numerous conferences, universities, and corporations, reformulated ideas of agents, and ultimately populations and systems, in important ways that continue to inform our present.
The statements articulated by many early cyberneticians and human scientists in the late 1940s thus bridge the gap between our contemporary concerns with agents, affect, preemption, networks, and collective intelligences, and older historical concerns within science about consciousness, temporality, and representation. At this pivotal moment, demarcated by a catastrophic world war, these sciences were part of producing an aspiration for a new world made up of communication and control – but not without producing a novel set of conflicts, desires, and problems. I turn, then, to outlining the conflicted relationships between memory and control, embodied, for example, within the discourse of psychosis, and what this might say about our present conception of both media and minds. It is my contention that this relationship between logic and archiving continues to animate our machines and digital networks, driving a dual imaginary of instantaneous analytics and collective intelligence, while encouraging the relentless penetration of media technologies into life.
It is perhaps no accident that autonomy and will were being re-scripted as circuits in machines at the time. Much of the logic that underpins contemporary ideals of intelligence emerged from the Second World War and the science of communication and control, labeled cybernetics. As is well documented, cybernetics emerged from work at the Radiation Lab at the Massachusetts Institute of Technology [MIT] on anti-aircraft defense and servo-mechanisms during the Second World War. The MIT mathematician Norbert Wiener, working with the MIT-trained electrical engineer Julian Bigelow and the physiologist Arturo Rosenblueth, reformulated the problem of shooting down planes in terms of communication – between an airplane pilot and the anti-aircraft gun. These researchers postulated that under stress, airplane pilots would act repetitively, and therefore have algorithmic behaviors analogous to servo-mechanisms and amenable to mathematical modeling and analysis. This approach treated all entities as “black boxes” to be studied behaviorally.6
In 1943, inspired by this idea that machines and minds might be thought together through the language of logic and mathematics, the psychiatrist Warren McCulloch, then at the University of Illinois in Chicago, and the logician Walter Pitts decided to study quite literally the machine-like nature of human beings. The pair would later go to MIT in 1952 at Norbert Wiener’s behest.7
The article “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which appeared in the Bulletin of Mathematical Biophysics, has now come to be one of the most commonly referenced pieces in cognitive science, philosophy, and computer science. Through a series of moves, neurons are made equivalent to logic gates, and “thought” is thereby made materially realizable from the physiological actions of the brain. These moves reformulated psychology, but they also demonstrated a broader transformation in the constitution of evidence and truth in science.
The model of the neural net put forth in “A Logical Calculus of the Ideas Immanent in Nervous Activity” has two characteristics of note that are critical in producing our contemporary ideas of rationality.8 The first characteristic of the model, according to McCulloch, is that every neuron firing has a “semiotic character”; it is mathematically rendered as a proposition. Pitts and McCulloch imagined (in a departure from the real brain) each neuron as operating on an “all or nothing” principle when firing electrical impulses over synaptic separations. The pair interpreted the fact that neurons possessed action potentials and delays as equivalent to a discrete decision. This event affirms or denies a fact (or activation), and therefore neurons can be thought of as signs (true/false), and nets as semiotic situations, or communication structures (just like the structured scenarios of communication theory).9 [Fig. 1 and 2]
This discrete decision (true or false, activate or not) also made neurons equivalent to logical propositions and Turing machines.
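To make this equivalence concrete, the following minimal sketch (written in Python, with weights and thresholds that are illustrative assumptions rather than values from the 1943 paper) shows how an all-or-nothing unit that fires when its weighted inputs reach a threshold can realize the logical propositions AND, OR, and NOT.

```python
# A minimal sketch of a McCulloch-Pitts unit: inputs and outputs are binary, and
# the unit "fires" (outputs 1, i.e. "true") only if the weighted sum of its inputs
# reaches a fixed threshold. The weights and thresholds are illustrative choices,
# not values given in the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """All-or-nothing firing: 1 if the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # An inhibitory connection: a negative weight suppresses firing.
    return mp_neuron([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```

The point of the sketch is not fidelity to the brain but the formal move itself: a binary firing event behaves as a truth value, and a small net of such events behaves as a logical expression.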
The second element of the model is a strictly probabilistic and predictive time. Neuronal nets are determinate in terms of the future (they are predictive), but indeterminate in terms of the past. [Fig. 3]
In the model, given a net in a particular time state (T), one can predict the future action of the net (T+1), but not the past action. From within the net, one cannot determine which neuron fired to excite the current situation. Put another way, from within a net (or network), the boundary between perception and cognition, the separation between interiority and exteriority, and the organization of causal time are all in-differentiable.
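A small illustration of this temporal asymmetry, under the same hedged assumptions as the sketch above (the two-neuron wiring is invented for the example): stepping the net forward from a state at time T yields exactly one successor, while asking which state preceded the present one generally returns several candidates.

```python
from itertools import product

# A toy two-neuron net under the same all-or-nothing assumptions as above.
# At T+1, neuron A fires if B fired at T; neuron B fires if A or B fired at T.
# (The wiring is an illustrative choice, not a circuit from the 1943 paper.)

def step(state):
    a, b = state
    return (b, 1 if (a or b) else 0)

current = (1, 1)
predecessors = [s for s in product((0, 1), repeat=2) if step(s) == current]

print("forward from (1, 0):", step((1, 0)))                          # one determinate successor
print("states that could have led to", current, ":", predecessors)   # several candidate pasts
```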
McCulloch offered as an example the model of a circular memory neuron activating itself with its own electrical impulses. [see Fig. 2]
At every moment, what results as a conscious experience of memory cannot be the recollection of the original activation of the neuron, but merely the fact that it was activated in the past at an indeterminate time. The firing of a signal, or the suppression of firing, can only be declarations of “true” or “false” – true, there was an impulse, or false, there was no firing – not an interpretative statement of context or meaning.
Within neural nets, at any moment, one cannot know which neuron sent the message, when it was sent, or whether the message is the result of a new stimulus or merely a misfire. In this model the net cannot determine with any certitude whether a stimulus comes from without or from within the circuit; whether it is a fresh input or simply a recycled “memory.”
McCulloch and Pitts ended on a triumphant note, announcing an aspiration for a subjective science. “Thus our knowledge [they wrote] of the world, including ourselves, is incomplete as to space and indefinite as to time. This ignorance, implicit in all our brains, is the counterpart of the abstraction which renders our knowledge useful.”10 If subjectivity had long been the site of inquiry for the human sciences, now, perhaps, it might become an explicit technology. Many cyberneticians forwarded this new concept of ignorance and partial perspective as a technical opportunity rather than an obstacle to knowledge.
McCulloch and Pitts were explicit that their work was a Gedankenexperiment, a thought experiment that produces a way of doing things, a methodological machine. Almost cheerily, McCulloch and Pitts admitted that this was an enormous “reduction” of the actual operations of the neurons.11 “But one point must be made clear: neither of us conceives the formal equivalence to be a factual explanation. Per contra!”12 McCulloch and Pitts were clear that for purposes of logical experiment, such aspects as fatigue and the speed of firing and activation must be disregarded as unimportant. At no point should anyone assume that neural nets were describing a “real” brain.13
But reduction or not, the pair had proved that logic and very sophisticated problem solving might emerge from small physiological units like neurons linked up in circuits. In doing so, and by way of labeling these circuits psychotic, amnesic, neurotic, and historically incoherent, McCulloch and Pitts made neural nets analogous to communication channels, and shifted the terms of dealing with psychology and consciousness to cognition and capacities. Neural nets were thus made equivalent to Markov chains and also compliant with the Russian mathematician A. A. Markov’s famous definition of algorithms as possessing three variables: definiteness, generality, and conclusiveness.14 I can make a final extrapolation and argue that for McCulloch and Pitts, processes of reasoning can be directly equated with logic gates, represented as algorithms, and derived from the physiological mechanism of the neuron’s actions. More significantly, the neural net suggests a change in attitudes to psychological processes that rests on an epistemological transformation in what constituted truth, reason, and evidence in science.
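The Markov analogy can be sketched in the same hedged terms (the transition probabilities below are invented for illustration): in a Markov chain the distribution over the next state is fixed by the present state alone, so two runs that arrive at the same state by different histories are, from that moment on, statistically indistinguishable; this is a formal version of the net’s indifference to its own past.

```python
import random

# A toy Markov chain over two neuron-like states, "firing" and "quiet".
# The transition probabilities are invented for illustration only.
transitions = {
    "firing": {"firing": 0.3, "quiet": 0.7},
    "quiet":  {"firing": 0.6, "quiet": 0.4},
}

def next_state(state, rng):
    """The next state depends only on the present state, not on how it was reached."""
    r, cumulative = rng.random(), 0.0
    for candidate, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return candidate
    return candidate  # guard against floating-point rounding

rng = random.Random(0)
state = "quiet"
history = [state]
for _ in range(10):
    state = next_state(state, rng)
    history.append(state)
print(history)
```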
This epistemology rests on three important points that are seemingly unimportant alone, but are significant when recognized as joining a history of logic, engineering practices, and the human sciences in a new assemblage. The first is that logic is now both material and behavioral, and agents are non-ontologically defined or “black-boxed.” The second is that cybernetic attitudes to mind rest upon a repression of all questions of documentation, indexicality, archiving, learning, and historical temporality. And third – the temporality of the net – is preemptive: it always operates in the future perfect tense but without necessarily defined endpoints or contexts.15
Rationality could thus be redefined as both embodied and affective, and good science was not the production of certitude but rather the account of chance and indeterminacy. For neural net researchers the question then turned to determining not whether minds are the same as or different from machines, but rather, as Joseph Dumit has put it: “What difference does it make to be in one network or another?”16
Having inserted the logic of the machine into the brain, this model would feed back into the design of machines. The model of the cycling memory neuron in fact directly refracts an earlier concept of control in the Turing machine (and would later become the model for memory in von Neumann’s architecture for EDVAC).17 Control in the Turing machine is the head that “reads” the program from memory, and then begins the process of executing it according to the directions in the memory. On the one hand, control directs the next operation of the machine. On the other hand, control is directed by the program. The control unit, or the reading head in a Turing machine, is directed by the tape it is reading from memory, not the reverse. Control is that function that will read and act upon this retrieved data, inserting the retrieved program or data into the run of the machine. Such machines do not operate top down, but rather in feedback loops between storage, processing segments, and the interface for input and output. In his 1946 report on building a computing machine, the ACE Report, Turing reiterated that only the possession of memory “give[s] the machine the possibility of constructing its own orders; i.e. there is always the possibility of taking a particular minor cycle out of storage and treating it as an order to be carried out. This can be very powerful.”18 If there is a feature that allows minds to act in uncanny and unexpected ways, it is this surprising capacity to change the pattern of action by way of insertion of a program from storage. Control in computers is like reverberating circuits in brains, and both are classically defined as psychotic in receiving memories without history. If today we regularly assume we know what control is and deploy the term critically at will, from within the machine it is far less clear, and far more dynamic. [Fig. 4]
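The feedback between control and stored program that Turing describes can be sketched as a toy stored-program loop (the instruction set below is invented for illustration and does not reproduce Turing’s ACE design or von Neumann’s EDVAC): instructions and data share one store, so the machine’s next order is itself fetched from memory, and a stored value can be taken out and treated as an order to be carried out.

```python
# A toy stored-program machine: "control" (the program counter) fetches its next
# order from the same memory that holds the data it operates on.
# The instruction set is invented for illustration only.

memory = [
    ("LOAD", 6),                # 0: load the value at address 6 into the accumulator
    ("ADD", 7),                 # 1: add the value at address 7
    ("STORE", 6),               # 2: write the accumulator back to address 6
    ("JUMP_IF_LT", (6, 8, 0)),  # 3: if memory[6] < memory[8], jump back to address 0
    ("HALT", None),             # 4: stop
    None,                       # 5: unused
    0,                          # 6: data: running total
    2,                          # 7: data: increment
    10,                         # 8: data: limit
]

pc, acc = 0, 0                  # program counter ("control") and accumulator
while True:
    op, arg = memory[pc]        # control is directed by what it reads from memory
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "JUMP_IF_LT":
        src, lim, target = arg
        if memory[src] < memory[lim]:
            pc = target
            continue
    elif op == "HALT":
        break
    pc += 1

print(memory[6])  # 10: the loop ran until the stored total reached the stored limit
```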
The historical redefinition of rationality therefore demands a reconsideration of what “control” means. In most scholarly documentation, control has been correlated with prediction, knowing the future, and (often military) command.19 In their famous report, Planning and Coding, von Neumann and Goldstine first introduced flow charts and circuits for stored program computers. In describing their circuits they wrote, “[w]e propose to indicate these portions of the flow diagram of C by a symbolism of lines oriented by arrows. [Fig. 4 above] […] Second, it is clear that this notation is incomplete and unsatisfactory.”20 In other words, control is not definable; its operable imagining and its explicit definition are incommensurate. But rather than treat this failure in representation as a problem, this threshold became a technological opportunity; this emergent space between the definable and the infinite provided the contours of the engineering problem – an opportunity to turn from logic to technology.
Significantly for us, McCulloch and Pitts inverted the problem posed by the original negative proof of the Entscheidungsproblem, that is, the Turing machine. Throughout the 19th and earlier 20th centuries, an army of mathematicians and philosophers, including Gottlob Frege, Kurt Gödel, David Hilbert, Bertrand Russell, Alfred North Whitehead, and Alan Turing, struggled to secure an absolute foundation for mathematical thought and to extend the limits of logical representation; the Turing machine stands as a negative proof within that effort, demonstrating the impossibility of mechanically deciding all statements of first-order logic.21 McCulloch and Pitts had a different epistemology and frame. Accepting that there were many things that could not be known or computed, McCulloch inverted the question Turing had posed. Instead of asking what could be proven, he asked what could be built: What if mental functioning could now be demonstrated to emanate from the physiological actions of multitudes of logic gates? The problem was thereby inverted from seeking the limits of calculation to examining the possibilities for logical nets. What had been an absolute limit to mathematical logic became an extendable threshold for engineering. McCulloch implied that we should turn instead to accepting our partial and incomplete perspectives, our inability to know ourselves, and make this “psychosis,” in his words, an “experimental epistemology.”22
What the cybernetic reformulation of logic as psychotic permitted was an abandonment of ontological concern with the past and the present in the interest of focusing on future interactions. These models measure not what is happening, but prepare us for what will happen, as a result of finding patterns in past data that, ironically, were devoid of historical temporality. The transformation in truth claims and epistemology opened a new frontier for study – subjective interactions in environments without complete information.
These literally nervous networks and logical rationalities proliferated in the social and human sciences. Cybernetic and communicative concepts of mind were part of a broader shift at the time in concepts of reason, psychology, and consciousness, informing everything from finance and options-trading equations, the environmental psychology and urban planning programs of individuals such as Kevin Lynch, and later MIT’s Architecture Machine Group and the Media Lab headed by Nicholas Negroponte, to the political science models of Karl Deutsch at Harvard, and the “bounded rationality” introduced by Herbert Simon and widely considered the start of contemporary finance. The postwar social sciences were repositories of these techniques, which transformed what had once been a question of political economy, value production, and the organization of human desire and social relations into problems of circulation and communication by way of a new approach to modeling intelligence and agency.23
This rationality is also sensible, perhaps affective, a situation that demands considerable revision of dominant understandings of digital and computational mediums as distancing, disembodied, or abstract. And if it is one of the dominant assumptions in the study of modern history and governance that liberal subjectivity and economic agency are defined by a logic guided by a reason separate from sense, then these discourses mark a clear departure.24 The science historian Lorraine Daston reminds us that we would do well to recall that those things today considered virtuous and intelligent, such as speed, logic, and definitiveness in action, were not always so. She is explicit: rationality in its Cold War formulation, despite the insistence of technocrats, policy makers, and free-market-advocating economists, is not reason as understood by Enlightenment thinkers, liberals, or even modern logicians.25
If this is true, then our financial instruments, markets, governments, organizations, and machines are all rational, affective, sensible, and preemptive, but never reasonable. To recognize the significance of this thinking in our present, it might help to contemplate Brian Massumi’s definition of “preemption.” Preemption, he argues, is not prevention; it is a different way of knowing the world. Prevention, he claims, “assumes an ability to assess threats empirically and identify their causes.” Preemption, on the other hand, is affective, lacks representation, and is a constant nervous anticipation, at a literally neural if not molecular level, of a never fully articulated threat or future.26
Within ten years of the war, cyberneticians moved from working on anti-aircraft prediction to building systems without clear end points or goals, and embracing an epistemology without final objectives, or perhaps objectivity (even if many practitioners denied this). Nets, taken as systems, are probabilistic scenarios, with multiple states and indefinite run times even if each separate neuron can act definitively. In cognitive and early neuroscience the forms of knowledge being espoused were always framed in terms of experiment, never definitive conclusions. “Experimental epistemologies,” as McCulloch put it, came to mean that there are never final facts, only ongoing experiments.
These human and social scientists made operative the unknowable space between legibility and emergence, and turned it into a technological impulse to proliferate new tools of measurement, diagrams, and interfaces. At the limits of this analysis is the possibility that emergence itself has been automated. As the theorist Luciana Parisi puts it in this volume, cybernetics takes hold of the space between infinity and logic, and makes it the very site of technical intervention, the very site to proliferate algorithms into life.27 If cybernetics initially sought to control the future, now control itself became the unclear site of emergence, an indefinable state that was part of networks operating in the future without full definition or information either about endpoints or pasts. The problem of how to act under conditions of uncertainty, or how to define a man or a machine, became instead a pragmatic mandate and a focus on process. Instead of asking what is a circuit, a neuron, or a market, human scientists turned to asking: What do circuits do? How do agents act? This created an ongoing opportunity to entangle calculation and life at the very level of nervous networks, by literally correlating the nervous system with the financial and political system.
Cybernetics had repressed the historical interests of the social and human sciences in desire, motivation, agency, and sovereignty; these older questions, however, returned to plague the new epistemology of the circuit. We might then seek to historically situate the relationship between these cybernetic and computational discourses and older modernist discourses of psychology and drives to ask: What is at stake in this movement from consciousness to cognition? And in particular: What work did a discourse of psychoanalysis and psychosis do to permit this redefinition of mind? For McCulloch, psychoanalysis obsessed and possessed his writings: he discussed it literally as an almost satanic presence possessing psychiatry and psychology and demanding exorcism. His language regularly invoked the occult, superstition, and ghosts in describing the older sciences of the psyche. Psychoanalysis, with its insistence on narrative and talk therapy, appeared to McCulloch to offer a religious and occult understanding of the mind as entirely spiritual and in the realm of representation, narrative, and human language – not an animated, vital, mechanical mind as he envisioned.28 Such violent repudiation, of course, might suggest affiliation rather than difference between the two projects.
It is perhaps no surprise that psychosis might offer this possibility of producing a logic “spoken” directly by nerves, or that it should be related to computational machines and digital mediums. Friedrich Kittler has already suggested that the initial effect of psychoanalysis was the externalization of the psyche and its incorporation into larger discursive networks. In delineating the “discourse network of 1800” from the “discourse network of 1900,” Kittler specifies the latter as being concerned with an obsession with the minute, unimportant, and indiscriminately recorded, which characterized the nascent media technologies of the time.29 Therefore Freud’s obsessive concern not with the obviously scripted “events,” but with slips of the tongue, minute details, and so forth, advances a larger technical assemblage obsessed with delivering recorded and stored events from any clear referential relation to an external, meaningful “reality.”30
Early psychoanalytic discussions of schizophrenia often bore striking resemblance to, and investment in, new media technologies of the day. The occult and telepathy, for example, obsessed Freud, and were phenomena linked to transgressed boundaries of time and space, often analogized to media technologies such as radio and photography, and increasingly figured in the theorization of both transference and psychosis. Freud’s concern with the occult, and telepathy in particular, stemmed from the space that such phenomena offered in relation to the physical nature of psychic phenomena; here, psychoanalysis appears directly in tandem with later efforts in both computation and neural nets. Freud argued, for example, in an essay on “Dreams and Occultism” that:
the telepathic process is supposed to consist of a mental act in one person instigating the same mental act in another person. What lies between these two mental acts may easily be a physical process into which the mental one is transformed at one end and which is transformed with other transformations back once more into the same mental one at the other end.31
As the critic Pamela Thurschwell points out, telepathy appeals to Freud precisely as a mechanism that refers back to an older, perhaps primordial form of communication – language as “inseparable from biology.”32 For Freud, telepathy and the occult re-emerged as sexuality, in drives that tied the seemingly psychical to the evolutionary and biological processes of reproduction and trait inheritance.
For example, the only major study of paranoid schizophrenia that Freud managed to collate was that of the famous Saxon judge Schreber, who had written an autobiography. In it, Freud comments at one point with pleasure on the manner in which psychotic persons assume that an external influence, similar to telepathic control and figured as such – an “influencing machine,” to use the language of Victor Tausk33 – is controlling them, even if that force actually emanates from within their own psychic machinery. He noted that these phenomena perfectly supported his own theories of sexual desire and of paranoid homosexual libidinal processes misdirected from their objects of desire.34 “The [only] difference,” Thurschwell writes, “between Schreber and Freud should be that the psychotic, Schreber, lives through his delusional systems while the doctor, Freud, analyzes them.”35 Psychoanalytic actions appear seamlessly anticipatory of later attitudes in computation to both the occult and the circuits of communication that make up rationality.
But cybernetic invocations of psychoanalysis, and of the occult, complicate the seamless extension of the 1900 discourse network into the present. For Freud also found such proximities – between the doctor and the patient, mind and body, and desire and knowledge – troubling. To the psychoanalysis of the earlier 20th century, struggling for credibility under the terms of objectivity offered at the time, telepathy – and with it, psychosis – presented a dangerous proximity to the practice of transference in psychoanalysis, and a threat to the clear-cut separation between the analyst and the analysand. Telepathy was an act that bridged boundaries, a reminder of both past mysticism and theological forms of knowledge, and an acknowledgment of the proximity, and perhaps indifferentiability, between the patient and the doctor that threatened the scientific and medical mores of objectivity to which Freud aspired.
The problem was that this clear separation between analyst and analysand was violated by the proximity between the paranoid fantasy of being penetrated by an external mind-force and the analytic practice in which the psychoanalyst is offered interior access to the patient, in a sense taking up the paranoid position as a penetrative force commanding the psychotic. For Freud, this psychotic possibility remained unthought. This tension revealed itself most clearly in Freud’s debates with figures such as Sandor Ferenczi and Carl Jung, who were proponents of telepathy, but who were also quicker to embrace (literally and figuratively) sexual and intimate proximity with patients, and to forgo the ability to clearly delineate the separation between the analysand and analyst in psychoanalytic therapy. Ferenczi was more than willing to sleep with his patients and to accede to his own desires in these relationships. For him, psychoanalysis opened up a world of intersubjectivity where the interior and the exterior of the subject were permeable terrains. Ferenczi was ready to rethink normative sexualities and theories of desire to recognize this psychotic and telepathic, perhaps occult, nature of psychoanalysis, and was also open to new forms of subjectivity and new ways to encounter difference.36 All therapy possessed a psychotic element in that it was intra-subjective, which was also the opportunity to re-channel desire.37
Freud, on the other hand, had a persistent ambivalence about psychosis and about telepathy, along with other media technologies of the day such as cinema, which was at the heart of his tensions in relationship to both the occult and his followers. Psychosis posed a threat to the authority of the analyst and seemingly made visible the occult and transgressive features of psychoanalytic therapy.38
At stake in this debate over the occult, telepathy, and psychoanalysis was none other than the status of analysis, the relationship of psychoanalysis to science, and its place in imposing normative subjectivities. The authority of the psychoanalyst as an external observer, subscribing to what Lorraine Daston and Peter Galison define as a “mechanical objectivity”39 over the analysand, was violated through this occult intimacy.
McCulloch, however, appeared to enjoy dallying with the mystical. In fact, at the zenith of his analysis, such things as haunting and possession could not exist once the new sciences of the mind were adopted. If there was a critical pivot upon which cybernetics would separate from previous histories of science, it involved this reformulation of authority and truth.40 This time, however, the occult had been merged with the machine. The regular invocation of ghosts and other spirit mediums in McCulloch’s discourse was not to provide a separation between the sciences of the mind and the vitalist fantasies of earlier eras but to argue that the mind is ghostly from the inside. Displacing the problem of analyst and analysand entirely, the autonomous circuit can speak directly, thus providing the biological substrate to language initially sought in theories of telepathy, occultism, and psychosis. “Mind no longer goes ghostly as a ghost” was his final declaration in the neural net article; it is no longer ghostly because it is material, but never bounded. Haunting cannot happen without history and without a bounded subject.41
McCulloch was often quick to assert the possibility of reformulating subjectivity and even quicker to dispense with ideals of an objectivity untempered by subjective experience or embodiment. In a series of later essays labeled “Of I and It,” he asserted that the decentralized nature of human cognition made the clear delineation of otherness murky. McCulloch wrote:
yet, from the use of I, me, mine in the communications of daily life in health and disease, we are entitled to infer that the vagrant solid which the speaker labels I in the moment of the experience consists only of events that occur as he intends. The rest he calls it.
He went on to argue that concepts of resistances and thresholds should replace any “explanations” that psychoanalysis might provide.42 For McCulloch, resistances that may come from lovers, or even from within the body, such as a causalgic arm or a nervous tic, produce a fluctuating threshold of differentiation instead of a clear, set boundary between analyst and analysand or between subject and object.
For McCulloch, Pitts, and their many interlocutors in the emerging cognitive and social sciences of the time, psychoanalytic concerns with pathology, normalcy, and in fact consciousness were displaced. If for Freud the occult returned in the form of the erotic, as a sexuality without boundaries,43 for cyberneticians – McCulloch most prominently – the occult returned in the form of a self-referential machine whose locus was never aimed at a desire for an external other demanding re-direction, but which instead became a self-generating world.
Cybernetics thus hijacked the apparatus of an earlier history of psychology to make human reason and machine intelligence equivalent. However, this theft was only made possible by deferring any encounter with historical time or the erotic. The anthropologist, cybernetician, and counter-cultural icon, Gregory Bateson, made this reorganization of the terms of desire into algorithm explicit:
In other words, I believe that much of early Freudian theory was upside down […]. Today we think of consciousness as the mysterious, and of the computational methods of the unconscious, e.g., primary process, as continually active, necessary, and all-embracing.44
Bateson implies that science now has a new technique – “computational methods” of the unconscious – to account not only for the behavior of individuals but one that is “all-embracing,” extendable to systems, ecologies, and organizations. He went so far as to label these unconscious and computational methods “algorithms of the heart, or, as they say, the unconscious.”45 Bateson’s statements suggest a transformation of psychological inquiry and concern from the conscious to the unconscious, and the displacement of what had once been a source of vexing scientific concern and a limit to knowledge (namely, the recognition of the subjective nature of human perception and consciousness) into a “method” and an “algorithm.”
At stake in the emergence of psychotic logic, therefore, was the stability of older histories of objectivity, truth, and documentation. But also up for negotiation were the terms of encounter between bodies and subjects. Cyberneticians literally took the apparatus of psychoanalysis – the circuits and drives, the repetitive automatisms, the relationship of transference – and inverted it. If, as media theorist Mary Ann Doane argues, Freud was obsessed with cataloguing the unconscious and representing time, cyberneticians now deferred the problem of storage and time to focus instead on process.46 The obsession with authority and intimacy was displaced – no longer the site of inquiry into the truth of the subject, and no longer the center of debates in cybernetic and computational research on the psyche or behavior. This slide from the occult to the erotic to the algorithmic and rational therefore has everything to do with contemporary structures of networks and capital. Within twenty years of the war, the centrality of reason as a tool to model human behavior, subjectivity, and society had been replaced with a new set of discourses and methods that made “algorithm” and “love” speakable in the same sentence and that explicitly correlated psychotic perspective with analytic logic. Cyberneticians sought to make the very space between rationality and reason, or the unconscious and conscious, amenable to logical and perhaps even mathematical intervention. The impossibility of visualizing or representing this process – an impossibility already faced by Freud in his turn to dream work – became the site for media intervention; the very distance between reason and the incoherent and mechanical repetitions of the unconscious reformulated into calculative and probabilistic technologies.
We have come far: from the interior of the mind to the structure of organizations and global economies. This still leaves a few questions. Having supposedly exorcised the ghosts of historicity, cyberneticians continued to struggle with memory and signification. In a letter in 1952 to the cybernetician Norbert Wiener, Gregory Bateson spelled out the problem of memory, time, repetition, and rationality:
What applications of the theory of games do is to reinforce the players’ acceptance of the rules and competitive premises, and therefore make it more and more difficult for the players to conceive that there might be other ways of meeting and dealing with each other[…] I question the wisdom of the static theory as a basis for action in a human world. The theory may be “static” within itself, but its use propagates changes, and I suspect that the long term changes so propagated are in a paranoidal direction and odious.47
Discussing the RAND Corporation, the premier private consulting group to the United States government and military on national security and public policy, Bateson makes explicit a new dilemma of violence. In this formulation, players no longer create violence because of a misdirected desire resulting in a loathing for an imagined Other, but instead produce violence through a self-referential performance of the game. Bateson correlates “static” games with paranoid schizophrenics, as a perceptual problem resulting in repetitive cycles culminating in potentially genocidal violence (nuclear war in this case) – in his language, a “paranoidal direction.” Authority is psychotic, and here it comes at the expense of futurity. But it is an authority emerging from the pure self-reference of the data field. Bateson fears that the performance of past data, paraded as prophecy and merged with older concepts of objectivity, will produce only repetition without difference. In a stunning inversion of psychoanalytic concerns, Bateson recognizes that the ubiquity of computational logics makes distance impossible to achieve and induces violence, not as a result of any misdirected object choices or imagined enemy Others – game theories have no such formulations within them – but as a pure result of performing and repeating commands without interpretation. In fact, it is precisely the lack of imagination that defines this condition. He suggests a total war without desire.
Having been displaced from the algorithmic rationality of the network, the older terms of consciousness, reason, and desire would return in cybernetics under the guise of visualization, time, and memory. At the famous Sixth Macy Conference on Circular Causal and Feedback Mechanisms in Biological and Social Systems, held in 1949 in New York City, memory was increasingly problematized between its dynamic and stable elements and its storage. In this instance, the immediacy and temporality of the televisual came to replace the older conceptions of tapes, photographs, and films. McCulloch opened the conference with a beacon and a warning. He offered the example of a new type of tube in development in Pasadena, similar to a cathode ray tube, in which a beam writes the items to be stored onto a screen. The screen, however, is mutable; the persistence of the memory left by the beam is temporary and must be refreshed. McCulloch viewed this idea of a cycling or scanning memory as offering great innovations in the possibility of miniaturizing and expanding machine memory.
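A minimal sketch of the kind of cycling store McCulloch described may clarify the stakes (the decay limit and interface below are invented assumptions, not the Pasadena tube’s specifications): each stored bit fades unless a refresh pass periodically reads and rewrites it, so “memory” persists only as an ongoing activity in time rather than as a fixed inscription.

```python
# Toy model of a storage tube whose charge fades: a bit survives only if the
# refresh cycle rewrites it before its age exceeds a decay limit.
# All constants are illustrative assumptions, not the Pasadena tube's specifications.

DECAY_LIMIT = 3  # ticks a cell survives without being rewritten

class CyclingStore:
    def __init__(self, size):
        self.cells = [0] * size   # stored bits
        self.age = [0] * size     # ticks since each cell was last written

    def write(self, addr, bit):
        self.cells[addr], self.age[addr] = bit, 0

    def tick(self, refresh=True):
        """Advance one time step; optionally run the refresh pass."""
        for addr in range(len(self.cells)):
            if refresh:
                self.write(addr, self.cells[addr])   # read and rewrite: persistence
            else:
                self.age[addr] += 1
                if self.age[addr] > DECAY_LIMIT:     # unrefreshed charge fades
                    self.cells[addr] = 0

store = CyclingStore(4)
store.write(2, 1)
for _ in range(5):
    store.tick(refresh=True)
print("refreshed:", store.cells)      # the bit persists: [0, 0, 1, 0]

store2 = CyclingStore(4)
store2.write(2, 1)
for _ in range(5):
    store2.tick(refresh=False)
print("unrefreshed:", store2.cells)   # the bit has faded: [0, 0, 0, 0]
```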
His second example was a warning from John von Neumann: that even the entire number of neurons in the brain cannot, according to calculation, account for the complexity of human behavior and ability. McCulloch went so far as to discuss “lower forms […] such as the army ant where you have some 300 neurons that are not strictly speaking sensory or strictly speaking motor items, and that the performance of the army ant […] is far more complicated than can be computed by 300 yes or no devices.”48 McCulloch, however, went on to say that there is no way that these capacities can be understood as illogical or analog. Rather, he turned to another model that might retain the logical nature of the neurons, but still account for the capacity to learn and behave at scales beyond the comprehension of computation.
The answer, coming through a range of discussions about protein structure and memory within cells, involved refreshing information in time. Wiener argued, “this variability in time here postulated will do in fact the sort of thing that von Neumann wants, that is, the variability need not be fixed variability in space, but may actually be a variability in time.” The psychologist John Stroud offered the example of a “very large macro-organism called a destroyer.” This military ship undergoes endless “metabolic” changes of small chores throughout the day, but still retains the function of a destroyer. This systemic stability, combined with internal differentiation and cycling, becomes the ideal of agency and action in memory.49
McCulloch and Stroud presented a model of memory as bifurcated between the perfect retention of all information with retroactive selection, and memory as a constantly active site of processing information for further action, based on internal “reflectors,” or “internal eyes”:
We may [they stated] need only very tiny little reflectors which somehow or other can become a stimulus pattern which is available for this particular mode of operation of our very ordinary thinking, seeing, and hearing machinery. This particular pattern of reflectors is what I see as it were with my internal eyes just as what I see when I look at a store window, is a pattern on the retinal mosaic.50
Mental processes are equated here with data processing and pattern seeking, but it is these internal “eyes” within the psychic apparatus that provide a self-reflexive mechanism for deferring decisions and agency.
Memory and mind became multiple time systems operating between the real-time present of reception and circulating data, and memory in time; a cyclical “refreshing,” as in a television screen system, where change and differentiation – between the organism and the environment, between networks – become possible through the delay and reorganization of circuits from within the organism. The problems of computational representation, the initial problems faced in mathematically and logically representing intelligence, were reorganized away from a language of conscious and unconscious, discrete and infinite, reason and psychosis, to the new terms of vacillating temporalities between immediacy and reflexivity.
Perhaps Bateson, also an attendee at the aforementioned conferences, offered one of the more compelling models and practices (he helped birth family therapy and influenced addiction treatment programs such as Alcoholics Anonymous) for rethinking memory and mind in his model of the “double bind,” developed to explain psychic suffering, addiction, and other maladjusted and compulsive behaviors. At a conference in 1969 at the National Institutes of Health, he offered this example to demonstrate his ideas of both psychology and treatment. He discussed a research project done on porpoises trained at Navy research facilities to perform tricks in return for fish (Bateson also worked with animals, particularly dolphins). One day, he recounted, one female porpoise was introduced to a new regimen. Her trainers deprived her of food if she repeated the same trick. Starved if she repeated the same act, but also if she did not perform, the porpoise was trapped. This experiment was repeated with numerous porpoises, usually culminating in extreme aggression and a descent into what from an anthropomorphic perspective might be labeled disaffection, confusion, and anti-social and violent behavior. Bateson, with his usual lack of reservation, was ready to label these dolphins as suffering the paranoid form of schizophrenia. The anthropologist was at pains to remind his audience, however, that these psychotic porpoises were acting very reasonably and rationally. In fact, they were only doing exactly what their training as animals in a navy laboratory would lead them to do. Their problem was that they had two conflicting signals. They had been taught to obey and be rewarded. But now obedience brought punishment, and so did disobedience. The poor animals, having no perspective on their situation as laboratory experiments, were naturally breaking apart; their personalities were fissuring (and Bateson thought they had them) in efforts to be both rebellious and compliant, but above all to act as they had been taught. Bateson argued this was the standard condition in contemporary societies.
Having established the mechanism for a now decentered and multiple subject, Bateson commenced to articulate the dangers and possibilities of this condition. He recalled how, between the 14th and 15th demonstrations, the porpoise “appeared much excited,” and for her final performance she gave an “elaborate” display, including multiple pieces of behavior of which four were “entirely new – never before observed in this species of animal.”51 These were not solely genetically endowed abilities, then; they were learned, the result of an experiment in time. This process, in which the subject, whether a patient or a dolphin, uses the memories of other interactions and other situations to transform their actions within the immediate scenario, can become the very seat of innovation. The dolphin’s ego (insofar as we decide she has one) was sufficiently weakened to develop new attachments to objects in its environment through the memories of its past and of other types of encounters. This re-wired network of relations can lead to emergence through the re-contextualization of the situation within which the confused and conflicted animal finds itself. Schizophrenia, therefore, can be the very seat of creativity.52
Bateson ended in triumph, having now successfully made the psyche inter-subjective and simultaneously amenable to technical appropriation in family therapy.53 The productivity of a schizoid situation rested for Bateson on the discovery made by both communication theory and physics that different times could not communicate directly with one another. Only temporal differences resist circulation from within the definition of communication. Bateson applied this understanding liberally to animals. In cybernetic models, the ability of a subject to differentiate itself from its environment and make autonomous choices is contingent on its ability to simultaneously engage in dangerous spatial proximities with other objects and to achieve distance through time.
At stake in the negotiation over the nature of networks and the timescale of analysis was nothing less than how to encounter difference – whether between individuals, value in markets, or between vast states during the Cold War. A question that perhaps started in psychoanalytic concerns over psychosis found technical realization in cybernetics. For cyberneticians the problem of analog or digital, otherwise understood as the limits between discrete logic and infinity, the separation between the calculable and the incalculable, the representable and the non-representable, and the differences between subjects and objects, was transferred into a reconfiguration of memory and storage; one that continues to inform our multiplying fantasies of real-time analytics while massive data storage infrastructures are erected to ensure the permanence, and recyclability, of data.
While the time of neural nets, Markov chains, and communication theories is always preemptive, the shadow archive haunting the speculative network is one of an endless data repository whose arrangement and visualization might return imagination and agency to subjects. These wavering interactions – between the networked individual and the fetish of data – preoccupy us in the present, speaking through our contemporary concerns with data mining, search engines, and connectivity. Our imagination of technology and ourselves now wavers between rationality and control, seeking an impossible dream of consciousness out of the nervous logic of our networks, and driving the ongoing penetration and application of media technologies into life.
I opened this essay arguing that cybernetics and its affiliated communication and human sciences aspired to the elimination of difference in the name of rationality, a dream of self-organizing systems and autopoietic intelligences produced from the minute actions of small, stupid, logic gates. The dream of a world of networks without limit focuses eternally on an indefinite, and extendable, future state.
Earlier in this essay, I also invoked Freud’s concerns about schizophrenia and telepathy to suggest that paranoia and psychosis were produced as pathologies at the moment the world became a mediated one. Cybernetics marks another turn. What Freud first articulated as a concern involving authority and difference in his discussion of psychosis has now been transformed into a concern over security – a concern Bateson expressed in the 1950s, already well within the age of computation and Cold War. What Bateson articulated was the worry that, in the real-time obsession to entangle life with logic, learning, and by extension thought and change, would be automated to the detriment, and perhaps destruction, of the world. Affect and circulation here become synonymous in producing violence. This is a conclusion regularly refracted in discussions of affect and war, such as that cited earlier by Brian Massumi.
This condition only becomes inevitable, however, if we ourselves descend into the logic of immediate and real-time analytics. We must avoid this conclusion and this condition. Like Bateson’s porpoise, torn between reactionary return and self-referentiality, we are forced to ask about the other possibilities that still lie inside our machines and our histories. The cycles of the porpoise reenact the telling of cybernetic history, where ideas of control and communication are often over-determined in their negative valence, cybernetics is rendered as a coherent and singular entity, and the inevitability of the past determining the future is regularly assumed. These systems that always use the past to telecast into a nervous future remade the boundaries of body, subject, and mind, but these imagined networks also pose the potential of violence through new forms of knowledge and governance based in self-reference, recombination, and self-containment. Telecast into our present, where these forms of thought and these problems of time and memory are the very architecture of our digital networks, these questions haunt us.
For psychoanalysts of the early 20th century, paranoid psychosis was a pathology of intimacy and proximity. The debates about its etiology and structure were also debates about the relationship between analyst and analysand, the forms of desire that could course through science. In the mid-20th century, these debates returned and we are left to ask about the implications of these two poles exhibited in the nervous network, one of a nostalgic reactivism and one of an operative amnesia for our own political and technical imaginaries. Perhaps the hope is in the very machinery that was built – systems that can both recognize and disavow their history, for which memory and archiving remain at tense, productively incommensurable distances. We still struggle with the enormous possibilities and the incredible perils posed through our nervous networks and psychotic logics.
1 William Aspray, John von Neumann and the Origins of Modern Computing (Cambridge, Mass.: MIT Press, 1990); John von Neumann, “The General and Logical Theory of Automata,” in: William Aspray and Arthur Burks, eds., Papers of John von Neumann on Computing and Computer Theory (Pasadena: MIT Press and Tomash Publishers, 1948/1986) p. 422 (my emphasis).
2 Von Neumann, “The General and Logical Theory of Automata,” p. 391–431.
3 Warren McCulloch, Embodiments of Mind (Cambridge, Mass: MIT Press, 1988), p. 21–22.
4 Ibid., 22.
5 Patricia T. Clough, “The New Empiricism: Affect and Sociological Method,” European Journal of Social Theory, 12.2 (2009): p. 43–61; see also: Tiziana Terranova, “Another Life: The Nature of Political Economy in Foucault’s Genealogy of Biopolitics,” Theory, Culture, and Society 26.6 (2009): p. 235.
6 See also: Arturo Rosenblueth, Norbert Wiener, Julian Bigelow, “Behavior, Purpose and Teleology,” Philosophy of Science 10.1 (1943): p. 18–24.
7 Lily E. Kay, “From Logical Neurons to Poetic Embodiments of Mind: Warren McCulloch’s Project in Neuroscience,” Science in Context 14.4 (2001): p. 591–594.
8 The model has been reviewed elsewhere; here I am briefly outlining the work with a focus on epistemology: Tara Abraham, “(Physio) Logical Circuits: The Intellectual Origins of the McCulloch-Pitts Neural Networks,” Journal of the History of the Behavioral Sciences 38 (Winter 2002): p. 3–25.
9 See also: Warren McCulloch and Walter H. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” in: McCulloch, Embodiments of Mind, p. 19–39, here p. 21–24.
10 Ibid., p. 35.
11 McCulloch and Pitts had derived their assumptions about how neurons work from what was at that time the dominantly accepted neural doctrine in neurophysiology. The pair used the research of the Spanish pathologist Ramón y Cajal and his student Lorente de Nó, and so had the neurological armory to begin thinking of neurons as logic gates. Namely, Ramón y Cajal was the first to suggest, in the 1890s, that the neuron was the anatomical and functional unit of the nervous system. He was, furthermore, largely responsible for the adoption of the neuronal doctrine as the basis of modern neuroscience. The work of Lorente de Nó focused on action potentials, synaptic delays between neurons, and reverberating circuits. See also: Santiago Ramón y Cajal, Texture of the Nervous System of Man and the Vertebrates (Vienna/New York: Springer, 1999).
12 McCulloch, Embodiments of Mind, p. 22.
13 Ibid.
14 Andrey A. Markov, Theory of Algorithms, trans. Jacques J. Schorr-Kon and PST Staff (Moscow: Academy of Sciences of the U.S.S.R., published for the National Science Foundation and the Department of Commerce, U.S.A. by the Israel Program for Scientific Translation, 1954), p. 1.
15 See also: Orit Halpern, “Dreams for Our Perceptual Present: Temporality, Storage, and Interactivity in Cybernetics,” Configurations 13.2 (2005).
16 Joseph Dumit, “Circuits in the Mind,” unpublished manuscript (April 2007): p. 7.
17 John von Neumann, “First Draft of a Report on the EDVAC,” Contract No. W-670-ORD-4926 between the United States Army Ordnance Department and the University of Pennsylvania (Philadelphia: Moore School of Electrical Engineering, June 30, 1945). Von Neumann confessed that the McCulloch-Pitts model had influenced him in conceiving machine memory. EDVAC stands for Electronic Discrete Variable Automatic Computer; it was one of the first electronic computers, a binary, stored-program machine.
18 Alan M. Turing, “Proposal for Development in the Mathematics Division of an Automatic Computing Engine (Ace) (1946),” in: A. M. Turing’s Ace Report of 1946 and Other Papers, Charles Babbage Institute Reprint Series for the History of Computing, eds. B. E. Carpenter and R. W. Doran (Cambridge/London: MIT Press, 1986), p. 21.
19 See also: Peter Galison, “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision,” Critical Inquiry 21 (1994); Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, Mass.: MIT Press, 1997).
20 Herman H. Goldstine and John von Neumann, “Planning and Coding of Problems for an Electronic Computing Instrument: Report on the Mathematical and Logical Aspects of an Electronic Computing Instrument, Part II, Vol. I,” (Princeton: The Institute for Advanced Study, 1948), p. 157.
21 Compare: Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd series, 42 (1936–37): p. 230–265; Bertrand Russell, The Principles of Mathematics (London/New York: Routledge, 2009); Erich H. Reck, From Frege to Wittgenstein: Perspectives on Early Analytic Philosophy (Oxford: Oxford University Press, 2002); Rebecca Goldstein, Incompleteness: The Proof and Paradox of Kurt Gödel (New York: W. W. Norton, 2005); Kurt Gödel, On Formally Undecidable Propositions of Principia Mathematica and Related Systems (New York: Basic Books, 1962); Alan M. Turing, The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Plus the Secrets of Enigma (Oxford: Oxford University Press, 2004).
22 McCulloch, Embodiments of Mind, p. 359.
23 See also: Orit Halpern, Beautiful Data: A History of Vision and Reason (Durham: Duke University Press, forthcoming); Herbert Simon, “A Behavioral Model of Rational Choice,” The Quarterly Journal of Economics 69.1 (1955): p. 99–118, here p. 101; Hunter Crowther-Heyck, Herbert A. Simon: The Bounds of Reason in Modern America (Baltimore: Johns Hopkins University Press, 2005); Herbert Simon et al., Economics, Bounded Rationality and the Cognitive Revolution (Northampton, MA: Edward Elgar Publishing, 1992).
24 See also on rationality: Donald A. MacKenzie, An Engine, Not a Camera: How Financial Models Shape Markets (Cambridge, Mass.: MIT Press, 2006); and Jonathan Crary, Techniques of the Observer: On Vision and Modernity in the Nineteenth Century (Cambridge, Mass.: MIT Press, 1990) for a discussion of mediation, reason, and observation that supports my critique of dominant histories.
25 Lorraine Daston, “The Rule of Rules, or How Reason Became Rationality,” unpublished talk (University of California at Berkeley, March 25, 2011).
26 Brian Massumi, “Potential Politics and the Primacy of Preemption,” Theory & Event 10.2 (2007): p. 4.
27 See also: Luciana Parisi, Contagious Architecture: Computation, Aesthetics, Space (Cambridge, Mass.: MIT, 2013).
28 McCulloch and Pitts, “A Logical Calculus.”
29 Friedrich A. Kittler, Gramophone, Film, Typewriter (Stanford: Stanford University Press, 1999), p. 3.
30 Friedrich A. Kittler, Discourse Networks 1800/1900 (Stanford: Stanford University Press, 1990).
31 Sigmund Freud cited in: Pamela Thurschwell, “Ferenczi’s Dangerous Proximities: Telepathy, Psychosis, and the Real Event,” Differences 11.1 (1999): p. 156.
32 Ibid., p. 157.
33 Victor Tausk, “On the Origin of the ‘Influencing Machine’ in Schizophrenia,” in: Jonathan Crary and Sanford Kwinter, eds., Incorporations (New York: Zone Books, 1992).
34 Sigmund Freud, The Schreber Case (New York: Penguin Classics, 2002), p. 3–4.
35 Thurschwell, “Ferenczi’s Dangerous Proximities,” p. 162.
36 See also: Arnold Rachman, Sandor Ferenczi: The Psychoanalyst of Tenderness and Passion (New York: Jason Aronson, 1997).
37 See also: C. G. Jung, The Psychology of Dementia Praecox, trans. Frederick W. Peterson and A. A. Brill (New York: Journal of Nervous and Mental Disease Pub. Co., 1909); Peter Gay, Freud: A Life for Our Time (New York: W. W. Norton and Company, 1998), p. 197–225.
38 Thurschwell, “Ferenczi’s Dangerous Proximities,” p. 162.
39 Lorraine Daston and Peter Galison, “The Image of Objectivity,” Representations 40 (Fall 1992): p. 81–125.
40 The history of the pathology of schizophrenia is both long and constantly changing. The disease was first formally identified as dementia praecox in the 1890s; the term “schizophrenia” and the aforementioned symptoms were formalized by Eugen Bleuler in 1908 to describe a “split” or cognitive dissonance between personality, thinking, memory, and perception. The definition of the disease continued to evolve: at the time McCulloch was writing, there was no Diagnostic and Statistical Manual, and his definitions adhered to those of the earlier 20th century. McCulloch himself was critical of the term, thinking it was used to catalogue too many psychiatric pathologies, particularly psychotic ones. Like other practitioners at the time, he classified schizophrenia into multiple subtypes, of which only one, the paranoid type, was prone to violence and to the regular imagination of threat and enemies to the self. See: Eugen Bleuler, Dementia Praecox; or, the Group of Schizophrenias (New York: International Universities Press, 1950); Warren McCulloch, “The Physiology of Thinking and Perception,” paper presented at the Creative Engineering Conference (June 22, 1954), in: The Papers of Warren McCulloch, B:M 139, Series III (American Philosophical Society, Philadelphia, PA).
41 McCulloch and Pitts, “A Logical Calculus,” p. 22.
42 Warren McCulloch, series of manuscripts “Of I and It,” originally titled “Of Eye and It,” in: The Papers of Warren McCulloch, p. 558–559.
43 Thurschwell, “Ferenczi’s Dangerous Proximities,” p. 173.
44 Gregory Bateson, Steps to an Ecology of Mind (Chicago: University of Chicago Press, 2000), p. 135–136 (emphasis added).
45 Ibid., p. 139.
46 Mary Ann Doane, “Freud, Marey, and the Cinema,” Critical Inquiry 22 (1996): p. 313–343.
47 Gregory Bateson, Letter to Norbert Wiener, September 22, 1952 (Norbert Wiener Papers, Massachusetts Institute of Technology, MC22, Box Number 10, Folder 155), p. 2.
48 Claus Pias, ed., Cybernetics. The Macy Conferences 1946–53, Vol. 1 (Berlin/Zurich: Diaphanes, 2003), p. 31.
49 Ibid., p. 31 and 35.
50 Heinz von Foerster, ed., Transactions of the Sixth Macy Conference (New York: Josiah Macy Jr. Foundation, 1950).
51 Bateson, Steps to an Ecology of Mind, p. 278.
52 On the relationship between schizophrenia, creativity, difference and genius, see also: Irving Gottesman, Schizophrenia Genesis: The Origins of Madness (New York: W. H. Freeman, 1991); Shoshana Felman, Writing and Madness (Stanford: Stanford University Press, 2003); Sander Gilman, Difference and Pathology: Stereotypes of Sexuality, Race, and Madness (Ithaca: Cornell University Press, 1985).
53 Bateson, Steps to an Ecology of Mind, p. 278.
Orit Halpern is an assistant professor of history at the New School for Social Research. Her focus is on histories of digital media, cybernetics, cognitive science and neuroscience, art, and design. Her current work excavates a genealogy of big data and interactivity. Her published works and multimedia projects have appeared in numerous venues, including the Journal of Visual Culture and Public Culture, and at ZKM in Karlsruhe, Germany.
Marie-Luise Angerer, Bernd Bösel, Michaela Ott (eds.)
Timing of Affect
Epistemologies, Aesthetics, Politics
Affect, or the process by which emotions come to be embodied, is a burgeoning area of interest in both the humanities and the sciences. For »Timing of Affect«, Marie-Luise Angerer, Bernd Bösel, and Michaela Ott have assembled leading scholars to explore the temporal aspects of affect through the perspectives of philosophy, music, film, media, and art, as well as technology and neurology. The contributions address possibilities for affect as a capacity of the body; as an anthropological inscription; as a primary, ontological conjunctive and disjunctive process; as an interruption of chains of stimulus and response; and as an arena within cultural history for political, media, and psychopharmacological interventions. Showing how these and other temporal aspects of affect are articulated both throughout history and in contemporary society, the editors explore the implications for the knowledge structures surrounding affect today.