
Sebastian Vehlken

Ghetto Blasts
Media Histories of Neighborhood Technologies between Segregation, Cooperation, and Craziness


This article gives a media-historical overview of several seminal applications of Neighborhood Technologies: (1) in Cellular Automata (CA), (2) in Swarm Intelligence (SI), and (3) in Agent-based Modeling (ABM). It by no means attempts to be exhaustive, but rather highlights some initial and seminal media-technological contributions towards a mindset which bears neighborhood principles at its core. The text thus centers on media technologies which are based upon the phenomenon that specific topological settings in local neighborhoods give rise to interesting emergent global patterns which develop dynamically over time and which yield novel ways of generating problem solutions: autonomy, emergence, and distributed functioning replace preprogramming, control, and centralization. Neighborhood Technologies can thus be understood on two levels: first, as specific spatial structures which initiate non-linear processes over time and whose results often cannot be determined in advance. As an effect, they provide media interfaces which visualize the interplay between local neighborhood interactions and global effects, e.g. in Cellular Automata, where the spatial layout of the media technology enables dynamic processes and at the same time visualizes them as computer graphics. And second, they can be perceived as engines of transdisciplinary thinking, bridging fields like mathematical modeling, computer simulation, and engineering. Neighborhood Technologies mediate between disciplines, e.g. by implementing findings from biology in swarm-intelligent robot collectives whose behavior is then re-applied as an experimental setting for (con-)testing supposed biological factors of collective motion in animal swarms. The central thesis of this article is that Neighborhood Technologies, by way of their foundation in neighborhood interaction, make the notion of space utterly dynamic and transformable, and always intriguingly connected to functions of time. Through their dynamic collective formations, Neighborhood Technologies provide decisive information about complex real-world phenomena.

Introduction1

When Babak Ghanadian and Cédric Trigoso founded their start-up niriu in Hamburg, Germany, in 2011, it was just another in a plethora of ideas for new social media platforms. In an alleged contemporary zeitgeist of a post-materialist and participatory culture of sharing2 – where acquired social relations rather than acquired property are said to be the new currency, and where former consumers, now able to engage in the development and design of products thanks to Web 2.0 and the like, transmute into prosumers – niriu tried to incorporate the concept of a local city neighborhood into the heart of its application. Or, as the marketing jargon in the About Us section of the website puts it:

On niriu, you see right away what your neighbourhood has to offer – on the city map or on the list. Yoga teachers, hobby musicians or wine lovers: your neighbours are much more versatile than you think. Now it’s time to finally get to know the people you pass in the street each day and to discover your neighbourhood! Borrowing a screw driver or taking part in a philosophical walk through the park – some things are only needed once a year, other things you’d like to try out but you don’t know how or with whom. On niriu, you create offers or demands or you react to your neighbours’ actions. Thus everybody profits from their neighbourhood’s potential! Via niriu’s profiles you’re able to get a first impression of your neighbours. Thus you can find people with similar interests as you – if you’d like to meet them, you can make the first step online in order to get to know your neighbourhood in real life!3

Regardless of how successful niriu has become since its founding phase, its intention to create an online network application that would support the revitalization of the somewhat aged idea of neighborly help and of lively neighborhoods in large cities is noteworthy in at least two respects: On the one hand, it revalues – similar to far more popular apps such as Foursquare – the potential of global multimedia peer-to-peer communication as a form of social swarming that initializes dynamic social networks and interactions in public spaces.4 Niriu thus defines itself as a kind of digital doorbell to next-door neighbors which circumvents the everyday obstructions of the allegedly anonymous neighborhoods of the contemporary metropolis. On the other hand, this example of a bottom-up city development initiative can systematically be seen as a re-engineering of Thomas Schelling’s segregation models. Whilst niriu as an application engages with the adjustment of real neighborhoods and their issues, Schelling developed his modeling approach with regard to problematic neighborhoods, motivated by the search for novel explanatory modes for the relationship between housing neighborhoods and social networks. His models not only had a strong influence on the formal analysis of social networks, but must also be seen as pioneering work for the present boom of the agent-based modeling and computer simulation paradigm (ABM). Schelling’s work thus appeals to a media history of social simulations which tried to explore the local interactions in social networks and the development of those (and other) global-scale effects which niriu – as a social network – now tries to establish and intensify.

Between these two poles, this article delves into some seminal examples which substantiate our notion of Neighborhood Technologies. Furthermore, it traces some genealogical developments between certain neighborhoods whose interaction principles become concrete in specific media technologies – media technologies whose social networking capacities in turn result in specific neighborhoods. The text is thus not at all interested in completeness, but in highlighting the complementary lines which define the intersection of a knowledge of neighborhoods and a knowledge by means of neighborhoods.

The first part attends to a media history of Cellular Automata (CA), which from the 1950s onwards became a medium of mathematical modeling and a somewhat playful approach to life-like, non-linear processes in systems of multiple elements, and which also put forth the often irreducible aspect of computer-graphical visualization that is indispensable for the understanding of such dynamics. It is the observation of system dynamics in space and time which leads to results that remain closed to rigid analytical approaches. The second part deals with the zoo-technological and reciprocal developments of robotics, computer science, and biology in the research area of swarm intelligence.5 Inspired by the self-organization capacities of animal collectives like ants, fish, or birds, robot engineers search for distributed networking technologies which facilitate the development of autonomously moving robot collectives. Conversely, projects like RoboFish6 implement artificial technical agents as mini-robots in biological collectives in order to explore their local interactions and collective behavior experimentally. The third part focuses on some of the essential constituents of the field of applications which from the 1990s onwards developed into the ABM paradigm. However, it will discuss ABM rather briefly, as the following article of this section on Neighborhood Epistemologies, by Sándor Fekete, investigates ABM in the concrete case of traffic simulations.

1. Cellular Automata

Of Ghettos and Lily Ponds

It might just be a case of historical coincidence that Thomas Schelling’s famous segregation model from 1969/1971 bears obvious similarities with CA, even if he assured his readers that he learnt about these techniques only at a later date. John Conway’s legendary Game of Life, after all, dates from 1968, and already in the 1940s Stanislaw Ulam and John von Neumann were working on a theory of self-replicating automata. Ulam’s idea of implementing the theory of self-replicating automata on CA relieved von Neumann’s initial model of the engineering problems resulting from a sort of mechanical primordial ooze.7 I will come to this in the second section of this chapter. Anyway: Seemingly unaware of such early computer simulation techniques, Schelling manually played with a set of local interaction rules analogous to those which, around the same time, popularized Conway’s bestiary of gliders, snakes, blinkers, beehives, or breeders.

Unlike the early advocates of CA, Schelling did not work with a self-reflective or computer-game-like approach; he was interested in socio-economic phenomena, more concretely – as he would note later – in the connection of micromotives and emerging macrobehaviors.8 He occupied himself with the reasons for the emergence of ghettos in US cities, i.e. the differentiation into clear-cut racially identical neighborhoods. The common-sense answer would imply a prevalent and profound racism. But could the phenomenon also result from other local motives, independent of ideological foundations? Schelling pursued this inquiry with the help of a schematic model. It functions according to a very simple scheme: he randomly distributed coins of two sorts on a checkerboard. If more than a defined number of unlike coins adjoin a certain coin, this coin is again randomly placed on another unoccupied square. The result, mapped by way of one- and two-dimensional graphics on paper tools9, is surprising: even in scenarios with mild tendencies of segregation (that is, if the number of unlike coins in the 8-cell neighborhood must be high to make a coin move), clearly separated accumulations of similar coins emerge very quickly on the macro level. Schelling deduced that housing segregation thus does not necessarily depend on racist ideologies, but can be effected by more neutral motives, such as the wish not to become a minority in a neighborhood. Nevertheless, and counter-intuitively, ghetto-like distribution patterns emerge even from such mild local preferences. Significant global patterns in dynamic neighborhood interactions can thus emerge even if these do not correlate explicitly or implicitly with the local preferences and objectives of the individual neighbors (Fig. 1).10
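Schelling’s checkerboard procedure is simple enough to be restated as a short program. The following Python sketch is a minimal reading of the model described above; the grid size, the share of empty squares, and the tolerance threshold are illustrative assumptions, not Schelling’s original parameters:

    import random

    SIZE, THRESHOLD, ROUNDS = 20, 0.375, 100      # illustrative, not Schelling's values

    # 0 = empty square, 1 and 2 = the two sorts of coins
    cells = [1] * 180 + [2] * 180 + [0] * (SIZE * SIZE - 360)
    random.shuffle(cells)
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

    def wants_to_move(x, y):
        """A coin moves if the share of unlike coins among its occupied
        8-cell (Moore) neighbors exceeds the tolerance threshold."""
        me = grid[y][x]
        if me == 0:
            return False
        hood = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dx, dy) != (0, 0)]
        occupied = [c for c in hood if c != 0]
        return bool(occupied) and sum(c != me for c in occupied) / len(occupied) > THRESHOLD

    for _ in range(ROUNDS):
        movers = [(x, y) for y in range(SIZE) for x in range(SIZE) if wants_to_move(x, y)]
        empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] == 0]
        for x, y in movers:
            tx, ty = random.choice(empties)           # relocate to a random empty square
            grid[ty][tx], grid[y][x] = grid[y][x], 0
            empties.remove((tx, ty))
            empties.append((x, y))

Runs of such a script typically reproduce the counter-intuitive effect described above: even with a mild threshold, homogeneous clusters appear within a few dozen rounds.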

Where people settle, frogs are likely to dwell close by. And whilst Schelling mapped his dynamic segregation and aggregation processes in cities, the British evolutionary biologist William D. Hamilton played around with dynamic models of biological aggregation phenomena in the same year, 1971. Some time before he was retroactively celebrated by Richard Dawkins as one of the trailblazers of sociobiology and as a precursor of Edward O. Wilson,11 and a decade before his fame culminated in his game-theoretical and interdisciplinary collaboration with political scientist and rational-choice theorist Robert Axelrod on the evolutionary emergence of cooperation strategies,12 he was eager to cast out from behavioral biology the malign but lasting spirits of weakly defined so-called social instincts.13 Like Schelling, Hamilton abstracted from general social motives as catalysts of aggregation patterns in animal collectives. Instead, he proposed a geometrical model based on egoistic individual behavior – no sign of a cooperation theory at this stage of his career. In his model, biological aggregations emerge solely from the individual actions of hypothetical frogs with regard to their spatial positioning in relation to their adjacent neighbors, and from an external motivational factor, that is, a hypothetical predatory snake. No wonder that his paper starts like a mad fairytale:

Imagine a circular lily pond. Imagine that the pond shelters a colony of frogs and a water snake. The snake preys on the frogs but only does so at a certain time of day – up to this time it sleeps on the bottom of the pond. Shortly before the snake is due to wake up all the frogs climb out onto the rim of the pond. This is because the snake prefers to catch frogs in the water. If it can’t find any, however, it rears its head out of the water and surveys the disconsolate line sitting on the rim – it is supposed that fear of terrestrial predators prevents the frogs from going back from the rim – the snake surveys this line and snatches the nearest one.14

This set-up triggers quite a dynamic chain reaction. Given the ability of the hypothetical frogs to move unrestrictedly around the rim of the lily pond, they would seek to optimize their randomly taken relative positions. The danger of being the one nearest to the snake can be reduced if a frog takes a position closely between two neighboring frogs. Put another way, each frog’s objective becomes the reduction of its individual domain of danger, that is, half the sum of the distances to its two nearest neighbors. This domain of danger certainly decreases if the next neighbors position themselves as close as possible. But just as certainly, all the other frogs will also try to reduce their individual domains of danger. Or, as Hamilton notes: “[O]ne can imagine a confused toing-and-froing in which the desirable narrow gaps are as elusive as the croquet hoops in Alice’s game in Wonderland” (Fig. 2).15

This model is played with one hundred hypothetical frogs, randomly spaced around the pond, and according to a simple algorithm:

In each ‘round’ of jumping a frog stays put only if the ‘gap’ it occupies is smaller than both neighbouring gaps; otherwise it jumps into the smaller of these gaps, passing the neighbour’s position by one-third of the gap-length. Note that at the termination of the experiment only the largest group is growing rapidly. The idea of this round pond and its circular rim is to study cover-seeking behaviour in an edgeless universe. No apology, therefore, need be made even for the rather ridiculous behaviour that tends to arise in the later stages of the model process, in which frogs supposedly fly right round the circular rim to “jump into” a gap on the other side of the aggregation. The model gives the hint which I wish to develop: that even when one starts with an edgeless group of animals, randomly or evenly spaced, the selfish avoidance of a predator can lead to aggregation.16
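The quoted rule can be turned into a few lines of code. The Python sketch below is one possible reading of Hamilton’s algorithm on a rim of unit circumference; the synchronous updating of all frogs per round is an assumption, since the quoted description does not fully specify the scheduling:

    import random

    N, ROUNDS = 100, 30
    frogs = sorted(random.random() for _ in range(N))    # positions on a rim of circumference 1

    def jump_round(pos):
        """One synchronous 'round of jumping' under one reading of Hamilton's rule."""
        n = len(pos)
        new = []
        for i in range(n):
            left, me, right = pos[(i - 1) % n], pos[i], pos[(i + 1) % n]
            own = (right - left) % 1.0                    # the gap frog i occupies
            left_gap = (left - pos[(i - 2) % n]) % 1.0    # gap beyond the left neighbour
            right_gap = (pos[(i + 2) % n] - right) % 1.0  # gap beyond the right neighbour
            if own < left_gap and own < right_gap:
                new.append(me)                            # stays put
            elif left_gap < right_gap:                    # leapfrog the left neighbour,
                new.append((left - left_gap / 3.0) % 1.0) # passing it by a third of the gap
            else:
                new.append((right + right_gap / 3.0) % 1.0)
        return sorted(new)

    for _ in range(ROUNDS):
        frogs = jump_round(frogs)
    # as in Hamilton's figures, the positions tend to pile up in a few dense aggregations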

The relevance of Hamilton’s and Schelling’s models for later developments of Neighborhood Technologies results from their approach, which roots the emergence of global patterns in the autonomous decision-making processes of locally interacting individuals. These interactions are mathematically described as topographical products (e.g., significant aggregations) of topological relations and are visualized as drawings of the dynamic neighborhood processes. An instructive insight from their approach is the fact that the respective global outcomes of collective processes can be totally independent of the individual objectives – like selfishness leading to aggregation. This interest in the non-deducible effects of nonlinear interaction, even among very simple local agents, is also investigated – as already mentioned – in the field of computer science. Its media-technological implementation in CA will be the subject of the following part.

Of Zoos and Suburbs

Whilst around 1970 economics and biology explored the micro- and macrodynamics of living collectives with checkerboards and frogs on paper, mathematics and the newly developing computer science had already spotlighted a different stage of life after World War II. At the end of the 1940s, John von Neumann started working on a general Theory of Self-Reproducing Automata, first presented at the Hixon Symposium in September 1948. Even if he did not explicitly define his concept of an automaton, von Neumann understood it as any system that processes information as part of its self-regulatory mechanisms – a system in which stimulations effect specific processes which autonomously run according to a defined set of rules.17 Leveling the ontological differences between computers and biological organisms,18 von Neumann refers neither to mechanical parts nor to chemical or organic compounds as the basic elements of his bio-logics, but to information.19

Due to their systemic complexities, von Neumann compared the best computing machinery of his time with natural organisms. He identified three fundamental boundary conditions for the construction of “really powerful computers”: the size of the building elements, their reliability, and the lack of a theory for the logical organization of complex computing systems. According to von Neumann, the adequate organization of even unreliable components could produce a reliability of the overall system that would exceed the product of the fault liabilities of the components.20 “He felt,” notes his editor Arthur W. Burks, “that there are qualitatively new principles involved in systems of great complexity.” And von Neumann searched for these principles by investigating phenomena of self-reproduction, because “[i]t is also to be expected that because of the close relation of self-reproduction to self-repair, results on self-reproduction would help to solve the reliability problem.”21 Although he does not explicitly allude to collective dynamics, the preoccupation with robust and adaptive features of a system involves a considerable conceptual affinity to the self-organizational capacities of dynamic networks. In both cases the adaptability to dynamically changing environmental conditions without a central control is decisive. But besides this conceptual link there is a media-technical relation that genealogically binds together the theory of automata and latter-day ABM, which can be perceived as a paradigmatic Neighborhood Technology. This link only becomes fully apparent after von Neumann’s first hypothetical model – a mechanical model of self-reproduction, later termed the Kinematic Model and supposedly inspired by von Neumann playing around with Tinker Toys22 – was abandoned in the course of his collaboration with Stanislaw Ulam. Ulam proposed to use a different model environment, unrestricted by the physical constraints of mechanical components and far better suited to mathematical analysis.23 His CA consisted of an infinite checkerboard as biotope, with every square of the grid potentially acting as a cell according to a program (a State Transition Table) effective for all the squares. Every cell is assigned information which defines its current state from a number of possible states. In each time-step of the system every cell updates its state, dependent on the number and states of neighboring cells, according to the transition table. The state of a cell at time-step t+1 can thus be described as a function of the states of the cell itself and of all adjacent cells at t: “Every cell had become a little automaton and was able to interact with neighboring cells according to defined rules. […] Thanks to Ulam’s suggestion the science fiction of a tinkering robot in a sea of spare parts transformed into a mathematical formalism called ‘cellular automaton’.”24

With such a CA, a self-reproducing artificial organism can be described in a mathematically exact way – in von Neumann’s case mutating into a “monster” (Steven Levy) of 200,000 cells with 29 possible states each. The behavior and all state combinations were implemented on an area of 80 × 400 cells upon which all functions of the components A, B, and C were executed. Only the construction plan D was transferred into a one-dimensional tail of 150,000 cells. By reading and executing the information encoded in this tail, the CA was able to reproduce itself (and its construction plan) as an identical duplicate. For von Neumann, this CA seemed satisfactory enough as proof of a life-like form of self-organization. But beyond this, it became clear that CA – thanks to their conceptual basis – are suitable for the modeling of a multitude of dynamic systems:

Compared to systems of differential equations, CA have the advantage that their simulations on digital computers do not produce round-off errors which can escalate especially in dynamic systems. Nonetheless, stochastic elements can be easily implemented in order to model disturbances. CA are characterized by dynamics in time and space. […] Mathematically expressed this means that CA can be defined by: 1. the cell space, i.e. the size of its playing field, its dimension (line, plane, cube etc.), and its geometry (rectangle, hexagon etc.); 2. its boundary conditions, i.e. the behavior of cells which do not have enough neighbors; 3. the neighborhood, i.e. the radius of influence exerted on a cell (e.g. the 5-cell neighborhood of von Neumann, or the 9-cell neighborhood of Moore); 4. the number of possible states of a cell […]; and 5. the rules that determine the change of states […].25
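The five defining components in this quotation can be made concrete in a few lines of code. The following Python sketch instantiates them with Conway’s Game of Life, mentioned above; the toroidal wrap-around chosen as boundary condition and the grid dimensions are assumptions of this sketch:

    import random

    W, H = 64, 48                       # 1. cell space: a two-dimensional rectangle
    grid = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]

    def live_neighbors(g, x, y):
        # 2. boundary condition: the plane wraps around (a torus), so edge cells
        # 3. still have a full Moore neighborhood (the 9-cell zone minus the cell itself)
        return sum(g[(y + dy) % H][(x + dx) % W]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dx, dy) != (0, 0))

    def step(g):
        new = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                n = live_neighbors(g, x, y)
                # 4. two states per cell (0 = dead, 1 = alive);
                # 5. Conway's rules: birth on three live neighbors, survival on two or three
                new[y][x] = 1 if n == 3 or (n == 2 and g[y][x]) else 0
        return new

    for _ in range(100):
        grid = step(grid)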

CA can best play out their epistemic potential if they are not only brought to paper as still images of process stages, but if their dynamics can be displayed computer-graphically. Starting with the pioneering work of Burks and his research group at the University of Michigan, this graphical approach has proved very fruitful. Once it became possible to computationally animate and interact with CA, this media technology became a popular tool. Aside from the graphically supported self-awareness of CA, the related anecdotes, and the bestiary of aggregations of small squares of the Game of Life, CA became attractive for application in other scientific disciplines because of their capability of exactly assigning particular characteristics to defined elements and the possibility of animating the interactions of these elements over time. As an effect, new scenarios could be observed and evaluated much faster.26 And this – recalling the keyword lifelike behavior – was not only instructive for biological research, where CA were applied in behavioral studies and the computer simulation of animal collectives, but also in histology, neurology, and evolutionary and population biology. CA-based computer simulation models were also utilized in socio-economic research contexts. Even today, CA are a popular simulation technique in urban planning, e.g. in order to map the development of urban sprawl. The geographers Xiaojun Yang and C.P. Lo underline the advantages of CA in this field over other media technologies:

Among all the documented dynamic models, those based on cellular automata (CA) are probably the most impressive in terms of their technological evolution in connection to urban applications. Cellular automata offer a framework for the exploration of complex adaptive systems because of CA’s advantage including their flexibility, linkages they provide to complexity theory, connection of form with function and pattern with process, and their affinities with remotely sensed data and GIS.27

However, the authors also mention the conventional simplicity of cellular automata, which has been considered one of their greatest weaknesses for representing real cities. But according to another seminal paper, a variety of research efforts of the 1990s have improved “the intricacies of cellular automata model construction, particularly in the modification and expansion of transition rules to include such notions as hierarchy, self-modification, probabilistic expressions, utility maximization, accessibility measures, exogenous links, inertia, and stochasticity.”28

Yang and Lo welcome the resulting coming-of-age of CA as these – say the authors – “grow out of an earlier game-like simulator and evolve into a promising tool for urban growth prediction and forecasting,”29 whilst Torrens and O’Sullivan underline the necessity of combining these technological developments with further research in applied areas. These include explorations in spatial complexity – an undertaking which would call for the infusion of CA with concepts from urban theory – novel strategies for validating cellular urban models, as well as scenario design and simulation in relation to urban planning practices, e.g. by taking into consideration data from traffic simulation systems (Fig. 5).30

This example from urban planning brings to the surface a first media-historical element of a genealogy imperative for Neighborhood Technologies: with the models of Schelling and Hamilton, and in the course of the development and transdisciplinary application of CA following the pioneering work of von Neumann, Ulam, and Conway, mathematical neighborhood models come alive in the artificial space of CA as dynamics in time. The non-linear and unpredictable interplay of neighborly micro-behaviors and global systemic effects is implemented media-technologically and rendered accessible to analysis on a novel level of computer-graphical visualizations of discrete pattern emergence.

2. Swarm Intelligence

Of Dancing Drones and Robot Fishes

Even if it became popular in the context of the algorithmization of the behavior of social insects, the birthplace of the term Swarm Intelligence is in robotics.31 Even engineers are subject to discourse dynamics: when Gerardo Beni and Jing Wang gave a short presentation on Cellular Robots at a NATO robotics workshop in 1988 – that is, on “groups of robots that could work like cells of an organism to assemble more complex parts” – commentators allegedly demanded a buzzword “to describe that sort of ‘swarm’.”32 As an effect, Beni and Wang published their paper under the header Swarm Intelligence in Cellular Robotic Systems, coining a term which in the following years was employed in biological studies33 and mathematical optimization problems34 before gaining traction in the mainstream of robotics several years ago.35 At first, design approaches to distributed robot collectives were mainly inspired by research on social insects and related computer simulation models (see the following chapter of this paper). But today, in the course of developing Unmanned Aerial Vehicles (UAV) as drone collectives for military or civil use, the interaction modes of animal collectives operating in four dimensions, such as schools of fish or flocks of birds, also come into focus.36

The basic interest is the question of how complex global patterns of multiple individuals can emerge from simply structured, (mostly) identical, autonomously acting elements which interact only over short distances and are independent of a central controller or a central synchronizing clock.37 The computer scientist Erol Sahin defines the field as follows: “Swarm robotics is the study of how a large number of relatively simple physically embodied agents can be designed such that a desired collective behavior emerges from the local interactions among agents and between the agents and the environment.”38 Such a concept is – at least theoretically – superior to centrally-controlled and more complex individual robots because of its greater robustness and flexibility as well as its scalability. Or, to put it briefly: “[U]sing swarms is the same as ‘getting a bunch of small cheap dumb things to do the same job as an expensive smart thing’.”39 The large number of simple and like elements increases redundancy and thus reduces the failure rate of critical functions. Even if a number of robots fail, their functions can be replaced by other identical robots. This effect is complemented by the multiplication of sensory capacities, “that is, distributed sensing by large numbers of individuals can increase the total signal-to-noise ratio of the system,” which makes such collectives especially suited for search or observation tasks.40 The capacity to automatically switch into various time-spatial patterns enables such robot collectives to develop modularized solutions for diverse situations by means of self-organization. They can adapt to unpredictable and random changes in their environments without being explicitly programmed to do so. And finally, they can be scaled to different sizes without affecting the functionality of the system.41

Essential for these capacities is the synchronization of the individual robots. In most cases, robot swarms update themselves partially synchronously:

In fact, during an UC [updating circle, SV], any unit may update more than once; also it may update simultaneously with any number of other units; and, in general, the order of updating, the number of repeated updates, and the number/identity of units updating simultaneously are all events that occur at random during any UC. We call the swarm type of updating Partial Random Synchronicity (PRS).42
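A toy Python sketch may illustrate the quoted updating scheme; the state variable, the batch sizes, and the number of update events per cycle are arbitrary stand-ins chosen purely for illustration:

    import random

    N_UNITS, CYCLES = 10, 5
    state = [0] * N_UNITS                 # stand-in for each unit's internal state

    for _ in range(CYCLES):               # one pass of the loop = one updating cycle (UC)
        events = random.randint(N_UNITS, 3 * N_UNITS)   # random number of updates per UC
        while events > 0:
            # a random subset of units updates simultaneously, in random order across batches
            batch = random.sample(range(N_UNITS), random.randint(1, N_UNITS))
            snapshot = state[:]           # simultaneous updates read the same snapshot
            for i in batch:
                state[i] = snapshot[i] + 1    # stand-in for the unit's local update rule
            events -= len(batch)
        # over a UC, a unit may thus have updated once, several times, or not at all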

This equips each robot with greater flexibility in adapting to external factors, since these can – due to the robots’ restricted sensing and interaction space – stimulate only a limited number of elements at a time. Beni calls this “order by disordered action.”43 The collective depends on spreading such influences from robot to robot and must be able to cope with time lags – a broad research area for stability analyses, which prove e.g. that convergence and cohesion in swarming robots can be maintained in the presence of communication delays.44 The specified number of neighborhood relations and the resulting spatial structure and morphology of mobile collectives shape their synchronization processes. Time lags do not necessarily lead to less stable systems, as the local transmission of information can moderate external influences throughout the collective. This dynamic stability thus not only seems to be a time-critical factor, but is also space-critical – dependent on the spatial ordering of a collective at a given time and on the topology of local interactions that determine the flow of information through the collective. The formation of the robot swarm produces information, and at the same time this information affects the formation of the collective.

A current example is the COLLMOT (Complex Structure and Dynamics of Collective Motion) project at ELTE University of Budapest.45 It explores the movement of swarms of drones, using neighborly control algorithms of the kind found in biological swarms. By engineering actual autonomous quadrocopter collectives, the research group also endeavors to understand the essential characteristics of the emergent collective behavior – which requires a thorough and realistic modeling of the robots and of their environment in ABM before letting them take off as a physically built collective. The authors refer to the seminal model of collective motion in swarms proposed by computer engineer and graphic designer Craig Reynolds. In 1987 – at that time working as an animation designer for the Hollywood movie industry – he invented an individual-based flocking algorithm which not only resulted in the lifelike behavior of bat flocks in Batman Returns, but also opened up a whole new research area in biology in the following years. His animation model and its visualizations were quickly adopted by biologists who were interested in computer simulation approaches to animal collectives:46

According to Reynolds, collective motion of various kinds of entities can be interpreted as a consequence of three simple principles: repulsion in short range to avoid collisions, a local interaction called alignment rule to align the velocity vectors of nearby units and preferably global positioning constraint to keep the flock together. These rules can be interpreted in mathematical form as an agent-based model, i.e., a (discrete or continuous) dynamical system that describes the time-evolution of the velocity of each unit individually. The simplest agent-based models of flocking describe the alignment rule as an explicit mathematical axiom: every unit aligns its velocity vector towards the average velocity vector of the units in its neighbourhood (including itself). It is possible to generalize this term by adding coupling of accelerations, preferred directions and adaptive decision-making schemes to extend the stability for higher velocities. In other (more specific) models, the alignment rule is a consequence of interaction forces or velocity terms based on over-damped dynamics. An important feature of the alignment rule terms in flocking models is their locality; units align their velocity towards the average velocity of other units within a limited range only. In flocks of autonomous robots, the communication between the robots usually has a finite range. In other words, the units can send messages (e.g., their positions and velocities) only to other nearby units. Another analogy between nature based flocking models and autonomous robotic systems is that both can be considered to be based on agents, i.e., autonomous units subject to some system-specific rules. In flocking models, the velocity vectors of the agents evolve individually through a dynamical system. In a group of autonomous flying robots, every robot has its own on-board computer and on-board sensors, thus the control of the dynamics is individual-based and decentralized.47
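Since the quotation compresses the three Reynolds rules considerably, a minimal sketch may help. The Python fragment below implements repulsion, alignment, and cohesion for point-like agents in the plane; the interaction radius, the rule weights, and the Euler time step are illustrative assumptions, not values from Reynolds or the COLLMOT group:

    import math, random

    N, RADIUS, DT = 50, 5.0, 0.1
    W_SEP, W_ALI, W_COH = 1.5, 1.0, 0.8            # illustrative rule weights

    boids = [{'x': random.uniform(0, 50), 'y': random.uniform(0, 50),
              'vx': random.uniform(-1, 1), 'vy': random.uniform(-1, 1)} for _ in range(N)]

    def step(boids):
        new = []
        for b in boids:
            hood = [o for o in boids
                    if math.hypot(o['x'] - b['x'], o['y'] - b['y']) < RADIUS]  # includes b
            # alignment: steer toward the average velocity of the neighborhood
            avx = sum(o['vx'] for o in hood) / len(hood)
            avy = sum(o['vy'] for o in hood) / len(hood)
            # cohesion: steer toward the neighborhood's center of mass
            cx = sum(o['x'] for o in hood) / len(hood) - b['x']
            cy = sum(o['y'] for o in hood) / len(hood) - b['y']
            # separation: short-range repulsion away from very close neighbors
            sx = sum(b['x'] - o['x'] for o in hood
                     if 0 < math.hypot(o['x'] - b['x'], o['y'] - b['y']) < RADIUS / 3)
            sy = sum(b['y'] - o['y'] for o in hood
                     if 0 < math.hypot(o['x'] - b['x'], o['y'] - b['y']) < RADIUS / 3)
            vx = b['vx'] + DT * (W_ALI * (avx - b['vx']) + W_COH * cx + W_SEP * sx)
            vy = b['vy'] + DT * (W_ALI * (avy - b['vy']) + W_COH * cy + W_SEP * sy)
            new.append({'x': b['x'] + DT * vx, 'y': b['y'] + DT * vy, 'vx': vx, 'vy': vy})
        return new

    for _ in range(200):
        boids = step(boids)

Note that the locality stressed in the quotation sits entirely in the neighborhood test: each unit reads only positions and velocities within RADIUS, never a global state.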

By programming and constructing UAV collectives, the COLLMOT group is, on the one hand, certainly interested in the increase of efficiency which a flock of drones can yield in comparison to single drones or other airborne technologies. As multiple units can cover an area much better than a single drone while looking for a possibly moving target, such a flock can be used in search, rescue, or hunt operations with onboard cameras and heatcams. It can also be employed in agricultural monitoring, with a humming flock of drones preventing disasters befalling freshly sprouting plants, measuring environmental conditions, assessing growth rates, or delivering nutrients or pesticides locally in small amounts. And in event surveillance, it could replace expensive cranes or helicopters and perform continuous surveillance or provide multiple viewpoints from the sky.48

But on the other hand, the engineered neighborhood technology of a UAV swarm also generates novel insights for the optimization and assessment of ABM of collective behavior, and thereby a “reverse-bio-inspiration for biological research.”49 In this case, Neighborhood Technologies explicitly work as a bridge between scientific disciplines and between theory-building, modeling, and the construction of technical artifacts. And this process is a reciprocal one, as e.g. the COLLMOT group became in turn “inspired to search for additional factors which allow the very highly coherent motion of pigeon flocks, since our experiments suggest that a very short reaction time itself cannot account for the perfectly synchronized flight of many kinds of birds.”50

Quite similarly, the RoboFish project of Freie Universität Berlin and the Leibniz Institute of Freshwater Ecology makes use of a reciprocal zoo-technological research perspective connecting robotics and biological studies. It develops a biomimetic fish school for the investigation of swarm intelligence.51 In this case, the researchers control an artificial fish in a research aquarium populated by biological fish. By animating the RoboFish according to parameters from a variety of computer simulation models of fish schools, its influence on the behavior and the individual decisions of biological fish can be experimentally tested. These findings can then be fed back into the CS models in order to make them more realistic. Propelled by electromagnets under the aquarium floor, this agent provocateur52 lets its effects on the other schooling fish be tested. Jens Krause, one of the project leaders, for example identified certain social thresholds: neighbors would assume that a fish with a more individualist swimming behavior possesses relevant information and follow it, whilst in larger schools only a critical number of deviating individuals would be able to initiate a turnaround of the whole collective. Individual behavior – which can always also be a non-optimal movement – is moderated by the multiplicity of neighboring individuals with their respective local information, resulting in the optimized collective movement of the whole school with regard to external factors.53 The absolute controllability of the RoboFish makes the social thresholds for decision-making in animal collectives quantifiable and generates experimental data which contribute – in combination with CS models – to the biological knowledge of swarm intelligence.

Of Busy Ants and Crazy Particles

The collective capacities of social insects have fascinated naturalists since antiquity, the early behavioral biologists of the 18th and 19th centuries, and, since the 1990s, also a growing horde of computer scientists.54 The latter became interested especially after Marco Dorigo et al., in the early 1990s, applied to the Traveling Salesman Problem (TSP) an optimization algorithm inspired by food-source allocation in ant colonies.55 With reference to the communication structure of biological ants, they designed a system of individually foraging ant-like computational agents laying behind them trails of simulated pheromones which evaporate over time. This leads to two major effects: if one of the artificial ants finds a food source and then travels constantly between it and the nest, it strengthens its pheromone trail, attracting other ants. And if several ants find different ways to the food source, after a while the majority of the other ants opt for the shortest way, as pheromone trails on shorter routes accumulate more pheromone than those on longer routes. This capacity makes such systems an interesting modeling framework which has since become popular as Ant Colony Optimization (ACO). It involves a colony of cooperating individuals, an interaction by way of the individually sensed spatial environment – based on artificial pheromone trails for indirect (stigmergic) communication – a sequence of local moves in order to determine the shortest paths, and a probabilistic individual decision rule based on local information.56
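Reduced to its bare bones, the scheme just described can be sketched in Python as follows; the number of cities, ants, and rounds as well as the evaporation and weighting constants are illustrative choices, not Dorigo’s original settings:

    import random

    CITIES = [(random.random(), random.random()) for _ in range(12)]
    N_ANTS, ROUNDS, EVAPORATION, Q = 20, 100, 0.5, 1.0
    ALPHA, BETA = 1.0, 2.0                       # pheromone vs. distance weighting

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    n = len(CITIES)
    tau = [[1.0] * n for _ in range(n)]          # pheromone level on each edge

    def build_tour():
        tour = [random.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            choices = [j for j in range(n) if j not in tour]
            weights = [tau[i][j] ** ALPHA * (1.0 / dist(CITIES[i], CITIES[j])) ** BETA
                       for j in choices]
            tour.append(random.choices(choices, weights)[0])  # probabilistic local rule
        return tour

    best, best_len = None, float('inf')
    for _ in range(ROUNDS):
        tours = [build_tour() for _ in range(N_ANTS)]
        for i in range(n):                       # pheromone evaporates over time
            for j in range(n):
                tau[i][j] *= (1.0 - EVAPORATION)
        for t in tours:
            length = sum(dist(CITIES[t[k]], CITIES[t[(k + 1) % n]]) for k in range(n))
            if length < best_len:
                best, best_len = t, length
            for k in range(n):                   # shorter tours deposit more pheromone
                a, b = t[k], t[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length

The division by the tour length in the deposit step is what lets short routes accumulate more pheromone than long ones and so reproduces the trail-reinforcement effect described above.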

ACO algorithms are suitable for solving problems that involve graph searching, especially when traditional approaches – e.g. dynamic programming – cannot be efficiently applied.57 Over the last 15 years, they have been implemented e.g. for optimizing telephone networks, vehicle routing, the coordination of manufacturing processes, or the scheduling of working personnel. Optimization algorithms based on the self-organizational capacities of animal collectives can thus assist where optimization problems have no analytical solution. One could say that the boundaries of calculability mark a movement towards biological principles.

Part of this field is also a stochastic optimization algorithm called Particle Swarm Optimization (PSO), introduced by James Kennedy and Russell Eberhart in 1995.58 Their model is based on the principles by which bird flocks find and circle around feeding grounds distributed in the environment. In the model, a so-called cornfield vector describes a target that motivates the exploration of the simulated particle swarm. The collective would – by way of distributing each bird-oid agent’s individual sensory information about its perceived environment to local neighbors – find this target faster than a systematic search of the complete simulation environment would, advancing e.g. from the upper left corner to the lower right one.

A swarm, according to their definition, is “a population of interacting elements that is able to optimize some global objective through collaborative search of a space. Interactions that are relatively local (topologically) are often emphasized. There is a general stochastic (or chaotic) tendency in a swarm for individuals to move toward a center of mass in the population on critical dimensions, resulting in convergence on an optimum.”59 Hence, Kennedy and Eberhart develop their optimization algorithm by taking advantage of the dynamic relation between the individual perceptions and resulting movements and their influence on the movement of the collective on the one hand, and on the other by combining it with evolutionary algorithms in order to enable the simulation to learn:

The method was discovered through simulation of a simplified social model. [… PSO] has roots in two main component methodologies. Perhaps more obvious are its ties to artificial life (A-life) in general, and to bird flocking, fish schooling, and swarming theory in particular. It is also related, however, to evolutionary computation, and has ties to both genetic algorithms and evolutionary programming.60

PSO was first used to calculate the maxima and minima of non-linear functions, and was afterwards also applied to multi-objective optimization problems, i.e. the optimization of a set of interdependent functions. In the latter case, the optimization of one function conflicts with that of the other functions, which makes it the task of PSO to moderate between these different demands – obviously in an optimal way. A possible example is a production process where all parameters that determine the production are interdependent, and where their combinations have an effect on the quantitative and qualitative outcomes of the process. To attain an optimal solution, theoretically all parameter combinations would have to be played through – and their number increases exponentially, making this intractable even with a relatively small number of parameters. Moreover, these parameters are oftentimes real and not integer numbers, which makes the enumeration of all possible cases simply impossible.61

PSO addresses these problems by examining the solution space of all possible parameter combinations by swarming, its bird-oid collectives being built along a simplified Reynolds model with the two basic parameters of nearest-neighbor velocity and craziness.

At the beginning, the swarm particles are randomly distributed in the solution space, defined only by their position and their velocity. They are in contact with a defined maximum number of neighboring particles. The respective particle position at the same time designates a possible solution to the target function of the optimization problem. In iterative steps the algorithm now calculates the personalBest positions of the individual particles (each optimum is stored as a kind of personal memory) and the neighborhoodBest positions of the defined number of next neighbors. These are compared and evaluated, and in relation to (1) the best position, (2) the respective individual distances to this position, and (3) the former particle velocity, the direction and velocity of each particle is updated for the next iterative step. This actuates a convergence of the local neighborhood towards the best position of the former time step.

However, the Craziness parameter is imperative for the later outcome, guaranteeing life-like swarming dynamics effected by incomplete information or external factors on the swarming individuals. Craziness simulates such disturbances by randomly interfering with the updated direction and velocity of a number of particles in each time step.

These variations prevent the particle swarm from converging too fast around a certain position – a.k.a. problem solution – as it could also be just a local optimum. And here the definition of the local neighborhood comes into play: if it is too large, the system shows a tendency to converge too early; if it is too small, the computation of the problem-solving process lasts longer. Similar to the swarming, hovering, and final convergence of biological bird flocks around a feeding ground, the particle swarm step by step aggregates at a certain position – the global maximum of the set of functions. The process of formation, the individual movements, and the neighborhood interactions indicate a mathematical solution. The dynamic spatial formation of the collective, based on local neighborhood communications, gives decisive information about the state of the environment.
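Put into code, the loop described in the last three paragraphs might look as follows. This Python sketch minimizes a simple test function; the coefficients, the neighborhood size K, and the craziness probability are illustrative assumptions (Kennedy and Eberhart’s original paper uses different constants):

    import random

    DIM, N, STEPS, K = 2, 30, 200, 4             # K nearest particles form the neighborhood
    W, C1, C2, CRAZINESS = 0.7, 1.5, 1.5, 0.05   # illustrative coefficients

    def f(x):                                    # target function to optimize (an assumption)
        return sum(xi ** 2 for xi in x)

    pos = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(N)]
    vel = [[0.0] * DIM for _ in range(N)]
    pbest = [p[:] for p in pos]                  # personalBest: each particle's memory

    for _ in range(STEPS):
        for i in range(N):
            if f(pos[i]) < f(pbest[i]):          # update the personal memory
                pbest[i] = pos[i][:]
        for i in range(N):
            # neighborhoodBest: best personal best among the K closest particles
            hood = sorted(range(N), key=lambda j: sum(
                (pos[j][d] - pos[i][d]) ** 2 for d in range(DIM)))[:K + 1]
            nbest = min((pbest[j] for j in hood), key=f)
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (nbest[d] - pos[i][d]))
                if random.random() < CRAZINESS:  # 'craziness': random perturbation
                    vel[i][d] += random.uniform(-1, 1)
                pos[i][d] += vel[i][d]

Without the craziness term the swarm in this sketch tends to collapse quickly onto the first good position found; with it, a few particles keep probing the surroundings, which is exactly the guard against premature convergence described above.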

3. Agent-based Modeling

Of Kisses and Sugarscapes

Robert Axelrod knows his stuff. With many years of experience at the US Department of Defense and with the RAND Corporation, he savors the fact that the so-called KISS principle of US army jargon has a far less delicate meaning than one might assume. It just means: “Keep it simple, stupid.”62 According to Axelrod, of all possible instructions it is thus a military order which initiates the freedom of autonomous agents and ABM:

The KISS Principle is vital […]. When a surprising result occurs, it is very helpful to be confident that we can understand everything that went into the model. Although the topic being investigated may be complicated, the assumptions underlying the agent-based model should be simple. The complexity of agent-based models should be in the simulations results, not in the assumptions of the model.63

Axelrod goes as far as describing agent-based modeling as

a third way of doing science. Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems. Instead, an agent-based model generates simulated data that can be analyzed inductively. Unlike typical induction, however, the simulated data come from a set of rules rather than direct measurement of the real world. Whereas the purpose of induction is to find patterns in data and that of deduction is to find consequences of assumptions, the purpose of agent-based modeling is to aid intuition.64

The application of ABM thus transforms the modes of describing and understanding dynamic systems. Joshua M. Epstein and Robert L. Axtell put this change of perspective as follows: “[ABM] may change the way we think about explanations […]. What constitutes an explanation of an observed […] phenomenon? Perhaps one day people will interpret the question, ‘Can you explain it?’ as asking ‘Can you grow it?’”65 In 1996, the authors published a computer program environment which responds to this task: combining conceptual principles from Schelling’s segregation models and Conway’s Game of Life CA, their modeling environment Sugarscape was designed as an interdisciplinary approach to explore complex dynamics in societies. Traditional sociological research would describe social processes in a more isolated way, only in some cases trying to aggregate them into “mega-models” (like the Limits to Growth report of the 1970s) whose several shortcomings attracted manifold critiques.66 In contrast, their model would work as an integrative approach, bringing together societal subprocesses which are not easily decomposable.

The models consist of a number of artificial agents (inhabitants), a two-dimensional environment, and the interaction rules governing the relations of the agents with each other and with the environment. Epstein and Axtell’s original model is based on a 51x51 cell grid, where each cell can contain different amounts of sugar (or spice). In every simulation step the individual agents search their local environment for the closest cell filled with sugar, move there, and metabolize. In the course of this process, they give rise to effects like polluting, dying, reproducing, inheriting resources, transferring information, trading or borrowing sugar, generating immunity, or transmitting diseases – depending on the specific scenario and the specific local rules defined at the set-up of the model.67 Thus, say the authors, “the resulting artificial society unavoidably links demography, economics, cultural adaptation, genetic evolution, combat, environmental effects, and epidemiology. Because the individual is multidimensional, so is the society.”68
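A radically reduced Python sketch of such a model follows, loosely modeled on Sugarscape’s movement and growback rules; the vision and metabolism ranges, the regrowth rate, and the toroidal grid are assumptions of this sketch rather than Epstein and Axtell’s exact specification:

    import random

    SIZE, N_AGENTS, STEPS = 51, 100, 50
    capacity = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
    sugar = [row[:] for row in capacity]               # current sugar starts at capacity

    agents = [{'x': random.randrange(SIZE), 'y': random.randrange(SIZE),
               'wealth': 5, 'vision': random.randint(1, 6),
               'metabolism': random.randint(1, 4)} for _ in range(N_AGENTS)]

    def step():
        global agents
        for a in random.sample(agents, len(agents)):   # random activation order
            best = (a['x'], a['y'])
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                for r in range(1, a['vision'] + 1):    # look along the four lattice axes
                    x, y = (a['x'] + dx * r) % SIZE, (a['y'] + dy * r) % SIZE
                    if sugar[y][x] > sugar[best[1]][best[0]]:
                        best = (x, y)                  # ties go to the first site seen
            a['x'], a['y'] = best
            a['wealth'] += sugar[a['y']][a['x']]       # harvest the sugar on the cell
            sugar[a['y']][a['x']] = 0
            a['wealth'] -= a['metabolism']             # metabolize
        agents = [a for a in agents if a['wealth'] > 0]    # starved agents die
        for y in range(SIZE):                          # sugar grows back by one unit
            for x in range(SIZE):
                sugar[y][x] = min(sugar[y][x] + 1, capacity[y][x])

    for _ in range(STEPS):
        step()

Even this stripped-down version already couples an ecology (the regrowing sugar) with a demography (agents dying of starvation), hinting at the multidimensional linkages the authors describe.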

As physicist Eric Bonabeau notes, such ABM have become more and more popular in diverse fields of application since the 1990s – thanks to the availability of powerful computers and graphics chips which enable simulations with large agent populations and their interactions.69 ABM, says Bonabeau, are superior to other methods of simulation for three reasons: first, they can reproduce emergent phenomena; second, they offer a natural system description; and third, they are flexible.70 Unlike simulations that work through differential equations, agent-based simulation models are based on the principle that complex global behavior can result from simple, locally-defined rules.71 In Bonabeau’s words:

Individual behavior is nonlinear and can be characterized by thresholds, if-then rules, or nonlinear coupling. Describing discontinuity in individual behavior is difficult with differential equations. […] Agent interactions are heterogeneous and can generate network effects. Aggregate flow equations usually assume global homogeneous mixing, but the topology of the interaction network can lead to significant deviations from predicted aggregate behavior. Averages will not work. Aggregate differential equations tend to smooth out fluctuations, not ABM, which is important because under certain conditions, fluctuations can be amplified: the system is linearly stable but unstable to larger perturbations.72

In addition, it seemed more natural and obvious to model the behavior of units on the basis of local behavioral rules than through equations which stipulate the dynamics of density distributions on a global level. Also, agent-based simulation’s flexibility made it easy to add further agents to an ABM and to adjust its parameters as well as the relations between the agents. Observation of the simulation system became possible on several levels, ranging from the system as a whole, through subordinate groups, down to the individual agent.73

However, we have to step back and ask: what exactly is an agent in the sense of ABM? Apparently, experts on the subject direct their attention to very different aspects. Bonabeau, for instance, considers every sort of independent component to be an agent, whether it is capable only of primitive reactions or of complex adaptive actions. In contrast, John L. Casti claims to apply the notion of the agent only to those components which are capable of adaptation and able to learn from environmental experiences in order to adjust their behavior when necessary; agents in that sense are only those components which consist of rules that enable them to change their rules.74 In turn, Nicholas R. Jennings underlines the aspect of autonomy, meaning the ability of agents to make their own decisions and, therefore, to be defined as active instead of being only passively affected by systemic action.75

Setting aside such terminological differentiations, Macal and North list a number of qualities which are important from the pragmatic perspective of a model-maker or a simulator:

An agent is identifiable, a discrete individual with a set of characteristics and rules governing its behaviors and decision-making capability. Agents are self-contained. The discreteness requirement implies that an agent has a boundary and one can easily determine whether something is part of an agent, is not part of an agent, or is a shared characteristic. An agent is situated, living in an environment with which it interacts along with other agents. Agents have protocols for interaction with other agents, such as for communication, and the capability to respond to the environment. Agents have the ability to recognize and distinguish the traits of other agents. An agent may be goal-directed, having goals to achieve (not necessarily objectives to maximize) with respect to its behavior. This allows an agent to compare the outcome of its behavior relative to its goals. An agent is autonomous and self-directed. An agent can function independently in its environment and in its dealings with other agents, at least over a limited range of situations that are of interest. An agent is flexible, having the ability to learn and adapt its behaviors based on experience. This requires some form of memory. An agent may have rules that modify its rules of behavior.76

In the light of these basic features it is no surprise that the adaptive behaviors of animal collectives – like Reynolds’ boid model and the abovementioned ACO – are explicitly listed as examples and sources of inspiration for the ABM “mindset.”77 And the features lead to five decisive steps in the course of programming an ABM: first, all types of agents and other objects that are part of the simulation have to be defined in classes – together with their respective attributes. Second, the environment with its external factors and its interaction potentials with the agents has to be modeled. Third, the agent methods are to be scripted, that is, the specific ways in which the agent attributes are updated as an effect of the interactions of the agent with other agents or with environmental factors. Fourth, the relational methods have to be defined that describe when, where, and how the agents are capable of interacting with other agents during the simulation run. And fifth, such an agent model has to be implemented in software.
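These five steps can be traced in a deliberately minimal Python skeleton; the energy attribute, the movement and interaction rules, and the scheduler are invented toy choices that merely mark where each of the five steps would go:

    import random

    class Agent:                                   # step 1: agent class with attributes
        def __init__(self, uid, x, y):
            self.uid, self.x, self.y = uid, x, y
            self.energy = 10                       # an invented toy attribute

        def act(self, world):                      # step 3: agent method updating attributes
            self.x = (self.x + random.choice((-1, 0, 1))) % world.size
            self.y = (self.y + random.choice((-1, 0, 1))) % world.size
            self.energy -= 1                       # interaction with the environment

        def interact(self, other):                 # step 4: relational method - adjacency
            if abs(self.x - other.x) + abs(self.y - other.y) <= 1:
                share = (other.energy - self.energy) // 4
                self.energy += share               # toy rule: energy levels equalize
                other.energy -= share

    class World:                                   # step 2: the environment
        def __init__(self, size, n_agents):
            self.size = size
            self.agents = [Agent(i, random.randrange(size), random.randrange(size))
                           for i in range(n_agents)]

        def run(self, steps):                      # step 5: implementation - the scheduler
            for _ in range(steps):
                for a in random.sample(self.agents, len(self.agents)):
                    a.act(self)
                for a in self.agents:
                    for b in self.agents:
                        if a is not b:
                            a.interact(b)
                self.agents = [a for a in self.agents if a.energy > 0]

    World(size=20, n_agents=50).run(100)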

This set-up leads to a procedural production of knowledge: by varying the agent attributes and methods and observing the system behavior of the computer simulation in iterated simulation runs, the model is adjusted and modulated step by step. And the interesting thing about ABM is that they can be seen as a medium that integrates alternative approaches of engaging with a problem (see also the articles by Sándor Fekete, Dirk Helbing, and Manfred Füllsack in this volume):

One may begin with a normative model in which agents attempt to optimize and use this model as a starting point for developing a simpler and more heuristic model of behavior. One may also begin with a behavioral model if applicable behavioral theory is available […], based on empirical studies. Alternatively, a number of formal logic frameworks have been developed in order to reason about agents […].78

However, and somewhat provocatively, one could state that in ABM “performance beats theoretical accuracy”:79 the pragmatic operationality of the applications is more often than not more important than their exact theoretical grounding. And this performance is intrinsically linked to graphical visualizations of the non-linear and dynamic interplay of the multiple agents. Otherwise, the emerging model dynamics would remain untraceable in static lines of code or in endless tables filled with quantified behavioral data lacking a dynamic time-spatial overall perspective on the system’s developments at runtime.

Hence, with ABM the interdependencies of media and mathematics of dynamic networks become obvious in at least three ways: (1) the agent-based mindset animates mathematical models and thus generates novel perspectives on the emergence of global outcomes resulting from local interactions. (2) ABM are capable of integrating diverse methodologies to address complex problems. And (3) with ABM the multidimensional effects of interaction processes can be investigated, thanks to the possible multidimensionality of agent attributes.

4. (Not a) Conclusion

This article has tried to give some seminal historical and contemporary examples of the media-technological genealogy of Neighborhood Technologies. Therefore, this part can be anything but a conclusion, since the paper merely wanted to present some landmarks which inspired us to take a transdisciplinary look at Neighborhood Technologies. Furthermore, the background of these historical examples might serve as a framework and a reference plane for the diverse and fresh views on Neighborhood Technologies presented in the contributions to this volume. As has become apparent, the investigation of dynamic (social) networks has profited enormously from approaches that follow neighborly perspectives. From examples like Schelling’s models of segregation onwards, it also became clear that only the coupling of mathematical modeling and analysis with media technologies of computing and visualization is suited for a multi-dimensional analysis of the emergent and unpredictable behavior of dynamic collectives. In Neighborhood Technologies, the investigation of dynamic (social) networks and the application of such dynamics – described by mathematical models and programmed into the media technologies of an agent-based computer simulation mindset – are thus two sides of the same coin.

Furthermore, a media history of Neighborhood Technologies alludes to particular relations between spatial and temporal orderings. It is not only in cellular automata that the spatial layout of the media technology enables dynamic processes and at the same time visualizes them as computer graphics. Also in the abovementioned examples from swarm intelligence and ABM, the reciprocal interplay between the topological and geometrical formations of the collectives and their internal information-processing capabilities plays a crucial role. This interplay is mediated by the respective definitions and sizes of local neighborhoods (how many and how distant members build a neighborhood, etc.).

Not least, it became clear that Neighborhood Technologies can be perceived as media-technological engines of transdisciplinary thinking. They span fields like mathematical modeling, computer simulation, and engineering, e.g. by implementing findings from biology in swarm-intelligent robot collectives whose behavior is then re-applied as an experimental setting for (con-)testing supposed biological factors of collective motion in animal swarms. It is precisely this transdisciplinary line of thought that our volume seeks to pursue further and encourage.

1 This article is partly based on some paragraphs of my Ph.D. thesis, published in German as Sebastian Vehlken, Zootechnologien. Eine Mediengeschichte der Schwarmforschung (Zürich-Berlin: diaphanes, 2012).

2 See e.g. Mirko Tobias Schaefer, Bastard Culture! How User Participation Transforms Cultural Production (Amsterdam: Amsterdam University Press, 2011); Axel Bruns, Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. Digital Formations (New York: Peter Lang, 2008); Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven: Yale University Press, 2006); George Ritzer, Paul Dean and Nathan Jurgenson, “The Coming of Age of the Prosumer,” American Behavioral Scientist 56/4 (2012), p. 379–398; Philip Kotler, “The Prosumer Movement. A New Challenge for Marketers,” Advances in Consumer Research 13 (1986), p. 510–513. See also the article by Dirk Helbing in this volume.

3 See the homepage of niriu, https://niriu.com/niriu (last accessed October 10, 2012).

4 See e.g. Geert Lovink and Miriam Rasch, eds., Unlike Us Reader: Social Media Monopolies and Their Alternatives (Amsterdam: Institute of Network Cultures, 2013).

5 See for a media-historical and -theoretical perspective e.g. Vehlken, Zootechnologien; Sebastian Vehlken, “Zootechnologies. ‘Swarming’ as a Cultural Technique,” Theory, Culture & Society, special issue Cultural Techniques, ed. Geoffrey Winthrop-Young, Jussi Parikka and Ilinca Iurascu (2012), p. 110–131; Jussi Parikka, Insect Media. An Archaeology of Animals and Technology (Minneapolis: University of Minnesota Press, 2011); Niels Werber, Ameisengesellschaften. Eine Faszinationsgeschichte (Frankfurt/M.: Fischer, 2014).

6 See the Biorobotics Lab of FU Berlin at http://biorobotics.mi.fu-berlin.de/wordpress/?works=robofish (last accessed October 25, 2014).

7 Stanislaw Ulam, “On Some Mathematical Problems Connected with Patterns of Growth of Figures,” Proceedings of the Symposium of Applied Mathematics 14 (1962), p. 219–231; John von Neumann, Collected Works, ed. A.H. Taub (New York: Pergamon Press, 1963), p. 288–328.

8 See Thomas C. Schelling, Micromotives and Macrobehavior (New York: Norton, 2006).

9 See Ursula Klein, Experiments, Models, Paper Tools. Cultures of Organic Chemistry in the Nineteenth Century (Stanford: Stanford University Press, 2003).

10 Thomas C. Schelling, “Dynamic Models of Segregation,” Journal of Mathematical Sociology 1 (1971), p. 143–186.

11 See Richard Dawkins, The Selfish Gene (Oxford: Oxford University Press, 1976); Edward O. Wilson, Sociobiology. The New Synthesis (Cambridge: Harvard University Press, 1975).

12 Robert Axelrod and William D. Hamilton, “The Evolution of Cooperation,” Science 211/4489 (1981), p. 1390–1396.

13 William D. Hamilton, “Geometry for the Selfish Herd,” Journal of Theoretical Biology 31 (1971), p. 295–311.

14 Hamilton, “Geometry for the Selfish Herd,” p. 295.

15 Ibid., p. 296.

16 Ibid., p. 297.

17 See Nancy Forbes, Imitation of Life. How Biology is Inspiring Computing (Cambridge: MIT Press, 2004), p. 26.

18 See Claus Pias, Computer Spiel Welten (München: sequenzia, 2002). Pias traces the genealogy of CA back to military board games of the 19th century and the numerical meteorology of the early 20th century. The Hixon lecture was first published in 1951 and is also part of John von Neumann, Collected Works, p. 288–328.

19 See Steven Levy, KL – Künstliches Leben aus dem Computer (München: Droemer Knaur, 1993), p. 32.

20 Von Neumann describes these restrictions in more detail in John von Neumann, “Probabilistic Logics and the Synthesis of Reliable Organisms From Unreliable Components,” Collected Works, vol. 5, p. 329–378.

21 John von Neumann, Theory of Self-Reproducing Automata, ed. Arthur W. Burks (Urbana/London: University of Illinois Press, 1966), p. 20.

22 This is executed in the following way: a construction part A (the factory) produces an output X according to the instruction b(X). B functions as the copying part or duplicator and provides, on the basis of the input b, this b and an identical copy b′ as output. C is the control unit which delivers the instruction b(X) to B. After B’s double-copying process, C delivers the first copy to A, where the output X is produced as determined by the instruction b. Finally, C combines the remaining b(X) with the output X produced by A and generates the output (X + b(X)) from the machine A+B+C. D is a particular instruction enabling A to produce A+B+C; it is the self-description of the machine, b(A+B+C). The automaton A+B+C+D thus produces as output precisely A+B+C+D, without any sub-element reproducing individually – yet every sub-part is necessary for the self-replication of the whole machine. Self-organization is thus conceptualized as a system feature that characterizes the internal relation, the organization of sub-units. Herman Goldstine recounts the anecdote that von Neumann used Tinker Toys to construct a three-dimensional model of his idea, see Herman H. Goldstine, The Computer from Pascal to von Neumann (Princeton: Princeton University Press, 1972), quoted in Robert Freitas Jr. and Ralph Merkle, Kinematic Self-Replicating Machines (Georgetown: Landes Bioscience, 2004), http://www.molecularassembler.com/KSRM/2.1.3.htm (last accessed September 22, 2014).

23 Compare Walter R. Stahl, “Self-Reproducing Automata,” Perspectives in Biology and Medicine 8 (1965), p. 373–393, here: p. 378.

24 Pias, Computer Spiel Welten, p. 259 (trans. Sebastian Vehlken).

25 Pias, Computer Spiel Welten, p. 257 (trans. Sebastian Vehlken).

26 Compare G. Bard Ermentrout and Leah Edelstein-Keshet, “Cellular Automata Approaches to Biological Modeling,” Journal of Theoretical Biology 160 (1993), p. 97–133.

27 Xiaojun Yang and C.P. Lo, “Modelling Urban Growth and Landscape Changes in the Atlanta Metropolitan Area,” International Journal of Geographical Information Science 17/5 (2003), p. 463–488, here: p. 464.

28 Ibid.; see also Paul M. Torrens and David O’Sullivan, “Cellular Automata and Urban Simulation: Where Do We Go from Here?,” Environment and Planning B 28 (2001), p. 163–168.

29 Yang and Lo, “Modelling Urban Growth,” p. 465. See also Michael Batty and Yichun Xie, “From Cells to Cities,” Environment and Planning B 21 (1994), p. 531–548; Helen Couclelis, “From Cellular Automata to Urban Models: New Principles for Model Development and Implementation,” Environment and Planning B 24 (1997), p. 165–174; Yeqiao Wang and Xinsheng Zhang, “A Dynamic Modeling Approach to Simulating Socioeconomic Effects on Landscape Changes,” Ecological Modelling 140 (2001), p. 141–162. For a comprehensive overview see Alison J. Heppenstall, Andrew T. Crooks, Linda M. See and Michael Batty, eds., Agent-Based Models of Geographical Systems (New York: Springer, 2012).

30 See also the paper by Sándor Fekete in this volume.

31 Eric Bonabeau, Marco Dorigo and Guy Theraulaz, Swarm Intelligence. From Natural to Artificial Systems (New York: Oxford University Press, 1999).

32 Gerardo Beni, “From Swarm Intelligence to Swarm Robotics,” Swarm Robotics, ed. Erol Sahin and William M. Spears (New York: Springer, 2005), p. 3–9, here: p. 3. Beni credits Alex Meystel with bringing up the term in the discussion.

33 Compare Bonabeau, Dorigo and Theraulaz, Swarm Intelligence.

34 See also James Kennedy and Russell C. Eberhart, “Particle Swarm Optimization,” Proceedings of the IEEE International Conference on Neural Networks (Piscataway: IEEE Service Center, 1995), p. 1942–1948.

35 See e.g. Alexis Drogoul et al., Collective Robotics: First International Workshop. Proceedings, Paris, July 4–5, 1998 (New York: Springer, 1998); Serge Kernbach, Handbook of Collective Robotics (Boca Raton: CRC Press, 2013).

36 See Joshua J. Corner and Gary B. Lamont, “Parallel Simulation of UAV Swarm Scenarios,” Proceedings of the 2004 Winter Simulation Conference, ed. R. G. Ingalls, M. D. Rossetti, J. S. Smith and B. A. Peters (Piscataway: IEEE Press, 2004), p. 355–363, here: p. 355, referring to Bruce Clough, “UAV Swarming? So What Are Those Swarms, What Are the Implications, and How Do We Handle Them?,” Proceedings of the 3rd Annual Conference on Future Unmanned Vehicles, Air Force Research Laboratory, Control Automation (2003); Ferry Bachmann, Ruprecht Herbst, Robin Gebbers and Verena Hafner, “Micro UAV-Based Geo-Referenced Orthophoto Generation in VIS+NIR for Precision Agriculture,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 40-1/W2 (2013), p. 11–16.

37 See Gerardo Beni, “Order by Disordered Action in Swarms,” Swarm Robotics, ed. Sahin and Spears, p. 153–172, here: p. 153.

38 Erol Sahin, “Swarm Robotics. From Sources of Inspiration to Domains of Application,” Swarm Robotics, ed. Sahin and Spears, p. 10–20, here: p. 12.

39 Corner and Lamont, “Parallel Simulation of UAV Swarm Scenarios,” p. 355.

40 Sahin, “Swarm Robotics,” p. 12.

41 Ibid., p. 11.

42 Beni, “Order by Disordered Action in Swarms,” p. 157.

43 Ibid., p. 153.

44 Compare Yang Liu, Kevin M. Passino and Marios M. Polycarpou, “Stability Analysis of M-Dimensional Asynchronous Swarms With a Fixed Communication Topology,” IEEE Transactions on Automatic Control 48/1 (2003), p. 76–95; compare Veysel Gazi and Kevin M. Passino, “Stability Analysis of Social Foraging Swarms,” IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 34/1 (2004), p. 539–557.

45 See https://hal.elte.hu/flocking/wiki/public/en/projects/CollectiveMotionOfFlyingRobots (last accessed September 26, 2014).

46 For a more detailed perspective see Vehlken, Zootechnologien; see Craig W. Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21/4 (1987), p. 25–34.

47 Tamás Vicsek et al., “Flocking Algorithm for Autonomous Flying Robots,” Bioinspiration & Biomimetics 9/2 (2014), p. 1–11, here: p. 2, http://iopscience.iop.org/1748-3190/9/2/025012/ (last accessed September 26, 2014).

48 See https://hal.elte.hu/flocking/wiki/public/en/projects/CollectiveMotionOfFlyingRobots.

49 Vicsek et al., “Flocking Algorithm for Autonomous Flying Robots,” p. 10.

50 Ibid., p. 10.

51 See http://robofish.mi.fu-berlin.de/wordpress/?page_id=9 (last accessed September 26, 2014).

52 See Jakob Kneser, “Rückschau: Der Robofisch und die Schwarmintelligenz,” Das Erste – W wie Wissen, broadcast of March 14, 2010, http://mediathek.daserste.de/daserste/servlet/content/3989916 (last accessed February 1, 2011).

53 Ibid.

54 See Leandro Nunes de Castro, Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications (Boca Raton: CRC, 2007); Eva Johach, “Schwarm-Logiken. Genealogien sozialer Organisation in Insektengesellschaften,” Schwärme – Kollektive ohne Zentrum. Eine Wissensgeschichte zwischen Leben und Information, ed. Eva Horn and Lucas M. Gisi (Bielefeld: transcript, 2009), p. 203–224; Niels Werber, Ameisengesellschaften. Eine Faszinationsgeschichte (Frankfurt/M.: Fischer, 2014); Parikka, Insect Media.

55 Alberto Colorni, Marco Dorigo and Vittorio Maniezzo, “Distributed Optimization by Ant Colonies,” Proceedings of ECAL91 – European Conference on Artificial Life, Paris, France (Amsterdam: Elsevier Publishing, 1991), p. 134–142; Marco Dorigo, “Optimization, Learning and Natural Algorithms,” Ph.D. diss., Politecnico di Milano, Italy, 1992; Marco Dorigo, Vittorio Maniezzo and Alberto Colorni, “Ant System: Optimization by a Colony of Cooperating Agents,” IEEE Transactions on Systems, Man, and Cybernetics – Part B 26/1 (1996), p. 29–41; Marco Dorigo, Mauro Birattari and Thomas Stützle, “Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique,” IEEE Computational Intelligence Magazine 1/4 (2006), p. 28–39.

56 See Marco Dorigo and Gianni Di Caro, “Ant Colony Optimization: A New Meta-Heuristic,” IEEE Congress on Evolutionary Computation – CEC ’99, ed. P. J. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao and A. Zalzala (Piscataway: IEEE Press, 1999), p. 1470–1477.

57 See Nunes de Castro, Fundamentals of Natural Computing, p. 223.

58 See Kennedy and Eberhart, “Particle Swarm Optimization;” compare Frank Heppner and Ulf Grenander, “A Stochastic Nonlinear Model for Coordinated Bird Flocks,” The Ubiquity of Chaos, ed. Saul Krasner (Washington: AAAS, 1990).

59 James Kennedy and Russell C. Eberhart, Swarm Intelligence (San Francisco: Morgan Kaufmann, 2001), p. 27.

60 Kennedy and Eberhart, “Particle Swarm Optimization,” p. 1942.

61 See Cai Ziegler, “Von Tieren lernen. Optimierungsprobleme lösen mit Schwarmintelligenz,” c’t 3 (2008), p. 188–191.

62 Robert Axelrod, The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration (Princeton: Princeton University Press, 1997), p. 3–4.

63 Ibid., p. 5.

64 Ibid., p. 5.

65 Joshua M. Epstein and Robert L. Axtell, Growing Artificial Societies: Social Science from the Bottom Up (Cambridge: MIT Press, 1996).

66 William D. Nordhaus, “Lethal Model 2: The Limits to Growth Revisited,” Brookings Papers on Economic Activity 2 (1992), p. 1–59.

67 See http://en.wikipedia.org/wiki/Sugarscape.

68 Epstein and Axtell, Growing Artificial Societies, p. 18.

69 See e.g. Roshan M. D’Souza, Mikola Lysenko and Keyvan Rahmani, “SugarScape on Steroids: Simulating Over a Million Agents at Interactive Rates,” Proceedings of the Agent 2007 Conference (Chicago, 2007); see also Sebastian Vehlken, “Epistemische Häufungen. Nicht-Dinge und Agentenbasierte Computersimulation,” Jenseits des Labors, ed. Florian Hoof, Eva-Maria Jung and Ulrich Salaschek (Bielefeld: transcript, 2011), p. 63–85.

70 Eric Bonabeau, “Agent-Based Modeling: Methods and Techniques for Simulating Human Systems,” PNAS 99, Suppl. 3 (2002), p. 7280–7287, here: p. 7280.

71 See Forbes, Imitation of Life, p. 35.

72 Bonabeau, “Agent-Based Modeling,” p. 7281.

73 Ibid., p. 7281.

74 See John L. Casti, Would-Be Worlds: How Simulation Is Changing the Frontiers of Science (New York: John Wiley, 1997).

75 Nicholas R. Jennings, “On Agent-Based Software Engineering,” Artificial Intelligence 117 (2000), p. 277–296.

76 Charles M. Macal and Michael J. North, “Tutorial on Agent-Based Modeling and Simulation. Part 2: How to Model with Agents,” Proceedings of the 2006 Winter Simulation Conference, ed. L.F. Perrone et al., p. 73–83, here: p. 74.

77 Ibid., p. 75.

78 Ibid., p. 79.

79 Günter Küppers and Johannes Lenhard, “The Controversial Status of Simulations,” Proceedings of the 18th European Simulation Multiconference (SCS Europe, 2004).



Sebastian Vehlken

studied film and television studies, journalism, and economics at Ruhr-Universität Bochum, and media studies at Edith Cowan University in Perth. Since October 2010 he has been a research associate at the ICAM Institute for Culture and Aesthetics of Digital Media at Leuphana University Lüneburg. His research interests include the theory and history of digital media, media theory, cybernetics and self-organizing systems, media in biology, think tanks and consultancy knowledge, the media history of sonar, and oceans as spaces of knowledge.
