Creating Subsets of the Masses
Taxonomies, Orders, and Crowds in Facebook’s Open Graph Protocol
The German version of Facebook’s homepage greets its visitors with the following euphemistic self-description: “Facebook enables you to connect with the people in your life and to share content with them.”1 If this motto is taken seriously (and there seems to be no reason not to do so, given the platform’s 1.5 billion users), this raises the question of how this connecting and sharing is designed to take place under the conditions of digital technologies and algorithms. What sort of relationship between media and the masses manifests itself in the interaction between formalization, staging (Inszenierung), and user activity on Facebook?
In an effort to answer this question, we direct our attention to one of the central features of Facebook’s technical infrastructure, namely its Open Graph protocol and the various applications, such as the “Like” button, that are based on it. Within a few years, the Like button has become a ubiquitous element of the World Wide Web. It is now possible to “Like” everything from individual texts, videos, and photographs to entire websites. At first glance, the main function of this button is seemingly to rank the popularity of online content by counting the number of times such content has been “Liked.” Upon closer inspection, however, it becomes clear that the growing prevalence of the Like button – and the Open Graph protocol with which it is associated – is establishing a new paradigm of order on the web.
Here we would like to investigate how the use of Open Graph generates a sort of interaction between organization and communication in which the media and the masses mutually (re)configure one another. In this regard, we are especially interested in the technical details of the markup protocol as well as its medial staging and types of use. We focus above all on how the masses are currently ordered and classified by a stratification of algorithmic and staging processes. In historical terms, such processes can be associated with particular medial and phantasmatic conceptions of oversight and emergence. The reconstruction of these historical genealogies will allow us to provide a more precise definition of the interaction between the characteristics of the media and our understanding of the masses.
Open Graph is a protocol that enables content on the web to be tagged with metadata and connected to Facebook, the company that developed the procedure. The protocol forms the basis of most of Facebook’s user functions, including clicking on the Like button or using other social plugins. Its central role within Facebook’s infrastructure only becomes evident, however, once one looks behind the graphical interface. The special feature of the protocol is that it makes content, both from within Facebook and outside of it, ascertainable in a formally describable and automated manner. This can be illustrated, for instance, by its treatment of the content on the film website imdb.com. Here, if someone looks up information about a given film, the corresponding page is furnished with a Like button. If the person in question clicks on this button, the movie is automatically incorporated into his or her user profile under the category “movies.” Moreover, this “Like” will also appear on the person’s own Facebook timeline and in the newsfeed of his or her “friends.” Both the user and the movie are thus saved as so-called objects in Open Graph, and the “Like” specifies at what time and in what manner these objects were connected with one another.
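To make the mechanism concrete, the following minimal sketch shows roughly how a movie page might be marked up under the Open Graph protocol and how its metadata can be read automatically. The property names (og:title, og:type, etc.) follow the protocol as published at ogp.me; the sample page, title, and URL are invented for illustration and are not taken from imdb.com.

```python
# A minimal sketch of Open Graph markup and its automated reading.
# Only the og:* property names are part of the published protocol;
# the page content and URLs below are illustrative placeholders.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html prefix="og: https://ogp.me/ns#">
<head>
  <meta property="og:title" content="The Rock" />
  <meta property="og:type"  content="video.movie" />
  <meta property="og:url"   content="https://www.example.com/movies/the-rock" />
  <meta property="og:image" content="https://www.example.com/the-rock.jpg" />
</head>
</html>
"""

class OpenGraphParser(HTMLParser):
    """Collects og:* metadata, i.e. the machine-readable description
    that turns an ordinary web page into an Open Graph 'object'."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property", "").startswith("og:"):
            self.properties[attrs["property"]] = attrs.get("content")

parser = OpenGraphParser()
parser.feed(SAMPLE_PAGE)
print(parser.properties)
# {'og:title': 'The Rock', 'og:type': 'video.movie', ...}
```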
On the one hand, Open Graph constitutes the format used internally by Facebook to save “semantic”2 information and user data. On the other hand, it can also be used by external providers as a basis for making personalized recommendations. The metadata remain hidden from the users themselves; they manifest themselves only in the automatic categorization of content. Nevertheless, Open Graph also plays a role in the activity of users. Its features and its staging suggest which objects should be integrated into one’s own profile; users collect “finds,” present them to one another, and write comments about them. The automatic incorporation of objects into a newsfeed will direct the attention of “friends” toward one’s own activity and stimulate them to comment on it, share it with others, or post their own recommendations in response.
In order for content from websites outside of Facebook to be integrated as objects into Open Graph, this content has to be equipped with metadata by the website operators in question. As an address, every object contains an individual identification number that makes its connections comprehensible to Facebook. By integrating their own content into Facebook’s Open Graph, website operators gain the advantage of being able to know with greater precision who has been using their website and exactly what they have been looking at. Facebook marks and records every time an Open-Graph object has been clicked on as well as whatever interaction with content or plugins might subsequently take place. This information is made available to the operators of the corresponding websites in two forms. First, they are able to look at various statistics via a function called “Facebook Insights.” These statistics consist of frequency data that can be filtered and sorted according to temporal, geographic, and demographic criteria.3 Second, each Open-Graph object has its own Facebook profile where, among other things, the comments of individual users can be recorded.4
The analytic tools made available by Facebook, however, are only a rudimentary variant of other possible ways to evaluate web traffic via the Open Graph. By means of an application program interface (API), the operators of websites can also access Open Graph directly and automatically in order to obtain far more comprehensive statistics about the use of their content. In this way it is possible to determine, for instance, how many users were directed to certain content by the recommendation of a friend. The restrictions that are supposed to accompany evaluations of this sort (for instance, the protection of personal information such as gender, age, and location) can be circumvented by encouraging users to download certain apps. When apps are installed, their operators are granted access to personal information that can include everything from the names of a user’s friends to the contents of his or her inbox.5 As soon as these rights are granted to the operator of a given app, the corresponding information is also made automatically accessible through the API.
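The following sketch indicates, under stated assumptions, what such direct API access might look like at its simplest: a URL is resolved to its Facebook-managed object record. The endpoint pattern follows the historically documented public Graph API; the exact response fields mentioned in the comments are assumptions and have varied across API versions.

```python
# A hedged sketch of the kind of API lookup described above: asking
# Facebook's Graph API which Open Graph object corresponds to a URL.
# The response fields ('id', 'shares') are assumptions, not guaranteed.
import json
import urllib.parse
import urllib.request

def lookup_object(url: str) -> dict:
    """Resolve a web address to its Open Graph object record."""
    query = urllib.parse.urlencode({"id": url})
    with urllib.request.urlopen(f"https://graph.facebook.com/?{query}") as resp:
        return json.load(resp)

# Hypothetical usage: the returned record carries the object's
# Facebook-managed ID, through which all subsequent interactions
# with the object are addressed.
# print(lookup_object("https://www.example.com/movies/the-rock"))
```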
This method of evaluating web traffic is used primarily by large content providers that want to have a precise idea of how and where their content is being circulated on social media. In fact, technical measures can be implemented to automate the dissemination of content and thus to promote its circulation on social networks. An example of this is an app for the eReading device Kobo. This app registers when Facebook users (if they are logged in) start reading a book, when they highlight a passage, take notes, and when they finish reading a book. It subsequently informs the user’s circle of friends through the newsfeed that “user X started reading [finished reading/highlighted a passage/wrote a note in] book Y.” Following the same schema, Tumblr, for example, informs a user’s circle of friends about which elements were posted or reblogged. Moreover, the entire chain of comments, “Likes,” and other interactions can also be monitored in real time by the employees of Kobo or Tumblr.
Of special interest are the technical conditions that need to be satisfied for a message such as “user X is reading article Y” to appear on someone’s Facebook newsfeed. The first condition is that a definition has to exist in Open Graph of what an “article” is and what “reading” means. In order for developers to establish definitions of this sort, Facebook provides a form with which app programmers can define such things as “cooking” a “meal” or “staying” in a “hotel.”6 The criteria for establishing such facts include interactions that leave a trace in Facebook’s database, such as clicking on a link or a person’s location as reported by a smartphone.
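A hedged sketch of the resulting publishing step: once “read” has been defined as an action on the object type “book,” an app like the Kobo reader can report the event against a user’s profile. The app namespace “myapp,” the access token, and the object URL are placeholders; only the general call pattern follows the Open Graph actions interface as documented by Facebook in this period.

```python
# A sketch of how an app might report "user X read book Y" once 'read'
# has been defined as an action on the object type 'book'. The
# namespace 'myapp', the token, and the URL are hypothetical.
import urllib.parse
import urllib.request

def publish_action(access_token: str, action: str,
                   object_type: str, object_url: str) -> bytes:
    """POST a custom Open Graph action against the user's profile."""
    data = urllib.parse.urlencode({
        object_type: object_url,      # e.g. book=https://example.com/book-y
        "access_token": access_token,
    }).encode()
    endpoint = f"https://graph.facebook.com/me/myapp:{action}"
    request = urllib.request.Request(endpoint, data=data)  # data => POST
    with urllib.request.urlopen(request) as resp:
        return resp.read()            # on success: the new action's ID

# Hypothetical usage:
# publish_action(TOKEN, "read", "book", "https://example.com/book-y")
```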
Here it seems important to take note of the convergence of interests that occurs during this process. The spread of Open Graph’s presence throughout the entire web (in the rudimentary form of the Like button) has made Facebook aware of the structures of links and user activity that exist outside of its own platform. The definitions of activities (such as cooking) and concepts (such as recipes) have provided Facebook with an unprecedented synoptic view of the semantic dimensions of the web.7 This has become possible because Open Graph creates a considerable amount of incentive for website and app operators to mark their content with metadata. It does this by offering them far more comprehensive and detailed ways to monitor and evaluate their web traffic and by contributing to the broader and faster circulation of their content.
Participants in Facebook use Open Graph primarily by clicking on Like buttons and by installing apps on their own profiles, as in the case of the Kobo app mentioned above. The act of “Liking” something links content to users’ profiles and timelines and appears in the newsfeeds of their “friends.” Beyond this, apps also generate information about the way people interact with the content that they have been linked to. What is more, Facebook users also receive content that was generated by Open Graph when they log in to the site and look at their newsfeed, which shows what their “friends” have “Liked” as well as the comments that others have posted about such things.
In this regard, the use of Open Graph creates a particular form of visibility.8 On the one hand, Open Graph can be used to manage one’s own profile and to present one’s own preferences with links to categorized content. On the other hand, the Like button and other Open-Graph apps are used to refer to other web content while on Facebook. The links appear simultaneously in one’s own timeline and in the newsfeed of one’s “friends.” Conversely, the links of one’s “friends” and their various comments appear in one’s own newsfeed.
Such use thus generates a visibility of showing and telling. What can be seen are links to content that has been seen by one’s “friends,” partial information about how they have interacted with this content, and comments about the content itself. This visibility is based on one’s list of friends and privacy settings, and it takes place as a sort of automatic distribution. By managing addresses based on lists of friends, Facebook gives order to the profiles, timelines, apps, and newsfeeds of its participants. The aim of the visibility created by this activity is for “friends” to follow the links of others and to repeat and comment about their perceptions and interactions. In this way, Open Graph produces a space of copresence (mainly in the form of the newsfeed) in which individual users can come into contact with their “friends,” see what these “friends” have seen and done, and express their opinions about such things. Perceptions are shared and evaluations are exchanged.
Similar observations have been made in studies devoted to the use of new media by American adolescents. In the book Hanging Out, Messing Around, Geeking Out, Mizuko Ito and her colleagues aggregated the findings of various studies conducted between 2005 and 2008. Regarding communication media, a category that encompasses Facebook use and the Like button, the authors claim that the aim of young men and women is “to construct spaces of copresence where they can engage in ongoing, lightweight social contact that moves fluidly between online and offline contact.”9 Bernadette Kneidiger has made a similar argument about Facebook users in German-speaking countries. A large majority of the 295 users that she surveyed (eighty-nine percent) stated that they had met most of their Facebook friends offline.10 For ninety percent of her sample, the motive for using Facebook consisted of keeping in touch with old friends. Eighty percent claimed to be interested in “finding out what [their] circle of acquaintances is up to,” and fifty-four percent agreed with the statement that they are more inclined to use the site “to share experiences with friends and acquaintances who live nearby.”11
Open Graph produces a type of visibility that invites people to repeat the perceptions of others and to exchange opinions about these perceptions. With regard to the relationship between the individual and the crowd, Facebook creates a space of copresence for a collective. It forms groups in which everyone looks at similar content and exchanges and compares opinions about it. Upon closer inspection, however, this rather seems to be an imaginary concept of a collective. Every individual user of Facebook can imagine himself or herself as being part of a group with his or her “friends.” However, one person’s “friends” are not necessarily “friends” with one another. When the occasion arises, they become visible to their common “friends” through comments made about certain content, but otherwise they do not perceive any content from one another. If, as far as users are concerned, Facebook’s Open Graph thus seems to be a medium of copresence, this observation needs to be modified at the structural level: It is rather an overlapping space of copresence, the overlapping nature of which also becomes somewhat porous when, within this space, certain “friends” become visible to other “friends” whose address lists do not contain their names.
Facebook’s Open Graph protocol is the basis of a conglomerate in which the technical functionalities of addressing and distribution go hand in hand with categorization, staging (Inszenierung), and user activity. A fundamental aspect of all of this is the fact that Open Graph can only be thought about in terms of crowds. An effective circulation of content requires an exceedingly large number of users; the semantic definition of content depends on the large-scale mobilization of website operators; and at the social level, the formation of “subsets” requires users to interact with online elements and with one another on a large scale. Open Graph produces a level of mediation between these crowds. It integrates organization and communication in such a way that crowds are rearranged; at the same time, it organizes the classification of users in terms of content and thus gains its medial character through the staging of its interfaces as well.
As regards the classification or ordering of crowds, which is achieved by means of technical infrastructure, two points need to be made. First, Open Graph has brought about a massive expansion of addressability. Web content is no longer exclusively retrievable via a URL that belongs to an entire site or to part of one; it is also retrievable via an ID that is managed by Facebook and can thus reveal to website operators which elements of their sites attract the most interest from Facebook’s users. This expansion has punctured the boundaries of observational space, but it has also caused differences to be leveled out. The integration of heterogeneous elements – user profiles, text-based communication, photos, descriptions of films, cooking recipes, etc. – into a common space of observation requires a severely reduced model. Here this is achieved by a type of network analysis that reduces objects and connections to the vertices and edges of a graph. Second, it is not in this reduced model – but rather in the data material – that orderings are made and thus that new categorizations are created. Categories are either determined by Facebook itself, as when a film from an internet movie database appears in a Facebook profile under the category “movies,” or they are defined by app operators on the basis of the Open Graph protocol, like the “read” books in the Kobo app.
Open Graph thus provides an infrastructure that is based on the expansion of observational space (by means of addressing and levelling) and offers the possibility of semantic categorization.12 In its staging and user offerings, Open Graph is also seemingly meant to generate traffic that is then made available as data or metadata for further evaluation. At the user level, Facebook’s Open Graph does not initially address crowds but rather the constitution of groups. It promises a communal sense of being present and participating in the life of one’s “friends” by circulating perceptions and opinions among them.
At the technical level, the main characteristics of Facebook’s Open Graph are thus addressing, levelling, categorizing, and the generation of large-scale web traffic. This protocol, along with the large-scale data storage by Facebook and app providers, thus provides the basis for mathematical analysis. The protocol enables the use of statistical methods, the methods of data mining, as well as methods of network analysis that are based on graph theory.13 Below we will summarize the fundamental concepts behind these types of analysis and offer a sketch of their inherent forms of knowledge. Above all, this will allow us to characterize and underscore the connections that exist between them.
The basis of “statistical knowledge” is the data collected about actors or events. These data correspond, for instance, to features such as gender, age, or other profile information of Facebook users. In this manner, every Facebook object can be classified according to a list of characteristics. In light of these collected characteristics, a crowd or group of Facebook objects can now be classified by means of a certain metric and by various calculation procedures. The metric determines the manner in which the collected data can be compared with respect to a given characteristic and thus provides a measure of proximity or similarity.14 For instance, deviation from the mean can be used to classify or order a given Facebook object with respect to a particular characteristic.
Moreover, it is also determined whether there are any connections between various characteristics, and this too is a form of ordering the group of objects under consideration. Two of the most important methods for determining such connections are correlation and (linear) regression. The method of correlation considers the extent to which two features of a data set (that is, two different characteristics of an object) simultaneously deviate from the mean value of these features. Simultaneous deviation indicates that there is a relation between the two features. By means of regression, efforts are made to determine relations in the form of a function and to make them calculable (that is, to determine dependencies). Statistical knowledge is thus based on collections of features and on a measure of similarity. On this basis, elements can be ordered according to their proximity or similarity, and the features in question can be classified according to their degree of interdependence.
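Stated formally, these measures read as follows; the definitions are the standard textbook ones and are not specific to Facebook or Open Graph:

```latex
% Ordering by deviation from the mean (the z-score), where s_x is the
% standard deviation of feature x:
z_i = \frac{x_i - \bar{x}}{s_x},
\qquad
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s_x = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}

% Correlation: the degree to which two features deviate from their
% means simultaneously (Pearson's r, ranging from -1 to 1):
r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
              {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;
               \sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

% (Linear) regression: the relation expressed as a calculable function,
% with slope b and intercept a chosen to minimize squared error:
\hat{y} = a + bx,
\qquad
b = r_{xy}\,\frac{s_y}{s_x},
\qquad
a = \bar{y} - b\,\bar{x}
```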
The type of network analysis that is modelled on graph theory takes as its starting point the relations between pairs of vertices (that is, the edges of a graph). A graph consists of a given number of vertices (also called “nodes”) and a given number of edges that connect the vertices with one another. The vertices that are connected by an edge are known as neighboring vertices. A path consists of a sequence of neighboring vertices and the edges that connect them. If the edges of a graph are associated with a direction (like arrows in a diagram), the graph is “directed”; otherwise, it is “undirected.” The degree of a vertex corresponds to the number of edges that are incident to it. In directed graphs, a distinction is made between a vertex’s indegree (the number of edges directed toward the vertex) and its outdegree (the number of edges directed away from it). Mathematical graph theory analyzes graphs with respect to their structures. On the basis of the relations between pairs of vertices and on conditions such as the total number of edges or the smallest degree that exists in a graph, graph theorists determine such things as whether a graph contains certain subgraphs, the length of a graph’s paths, or how many vertices (or edges) have to be removed for a graph to “disintegrate” (that is, for subgraphs to be created that are not connected by any paths). In addition to being concerned with the existence of subgraphs and with the cohesion of graphs, graph theory is also concerned, for instance, with combinatorial problems (the number of possible graphs or subgraphs within a certain structure) and with questions of optimization (determining the shortest path in a graph).15
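A minimal sketch of this vocabulary, using an adjacency-set representation; the example graphs are invented for illustration:

```python
# Graph basics: vertices, edges, degree, in/out degree.

# Undirected graph: four vertices, four edges.
undirected = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

def degree(g, v):
    """Number of edges incident to v, i.e. its neighboring vertices."""
    return len(g[v])

# Directed graph: each edge points from the key toward a value.
directed = {"a": {"b"}, "b": {"c"}, "c": {"a", "b"}}

def outdegree(g, v):
    """Edges directed away from v."""
    return len(g[v])

def indegree(g, v):
    """Edges directed toward v."""
    return sum(v in targets for targets in g.values())

print(degree(undirected, "c"))                            # 3
print(indegree(directed, "b"), outdegree(directed, "b"))  # 2 1
```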
Graph theory is thus interested in determining which structures can be deduced from the presence of pairwise relations between the elements of a given multitude. In doing so, it is less concerned with making statements about specific vertices than with generally establishing the existence of certain (sub)structures within networks. Network analysis, however, shifts this focus to some extent. Admittedly, it also uses the model of the graph to deduce the existence of structures in the empirical networks under investigation, and is thus likewise concerned with subgraphs, the issue of network cohesion, and so on. Crucial to network analysis, however, are the graph-theoretical methods for measuring the “importance” of a vertex, above all by means of various centrality measures. These are meant to determine how “central” a given vertex is, which depends (for example) on the extent to which a vertex has an especially high number of neighbors relative to the number of edges in the graph or on the extent to which a vertex is connected to all the other vertices via especially short paths. By performing matrix operations on adjacency matrices, moreover, it is possible to make calculations concerning the relations between neighboring vertices (to some extent, these calculations are concerned with the logical connections that define the relations between and among vertices that are not necessarily direct neighbors).16
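The following sketch illustrates two of the simplest centrality measures alluded to above, degree centrality and closeness centrality, on an invented example graph; production network analysis would of course rely on dedicated libraries rather than such hand-rolled helpers.

```python
# Two elementary centrality measures on an invented undirected graph.
from collections import deque

graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

def distances_from(g, start):
    """Breadth-first search: shortest path lengths from `start`."""
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        v = frontier.popleft()
        for w in g[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist

def degree_centrality(g, v):
    """Share of all other vertices to which v is directly connected."""
    return len(g[v]) / (len(g) - 1)

def closeness_centrality(g, v):
    """High when v reaches all other vertices via especially short paths."""
    dist = distances_from(g, v)
    return (len(g) - 1) / sum(d for w, d in dist.items() if w != v)

for v in sorted(graph):
    print(v, degree_centrality(graph, v),
          round(closeness_centrality(graph, v), 2))
# 'c' scores highest on both measures: it has the most neighbors and
# reaches every other vertex in a single step.
```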
In this perspective, the knowledge of network analysis is based on the relations between actors. Network analysis distinguishes itself explicitly from the sort of sociology that hopes to describe actors by means of their attributes and to measure them statistically: “As regards network research, the actor is primarily interesting as an abstract concept in which various relations can be established and observed.”17 Although the focus of network analysis, based as it is on graph theory, is on individual vertices, it is chiefly concerned with the function or importance of these vertices in relation to the network as a whole. From the perspective of network analysis, such importance is generated neither by personal attributes nor exclusively by a vertex’s direct relations to others. Rather, it is based on a vertex’s neighboring relations in the interaction with the relations between other actors.
The fundamental logic of network analysis is also quite relevant to Open Graph. The expansion of address space has made it possible to understand data traffic as a product of the relations (or edges) between addresses (or vertices). In this way (or so it is claimed), structures can be comprehended that have developed out of the activity and interrelations of a vast number of web users.
In its evaluation of profile information, Open Graph offers the possibility of drawing connections between the activity of an actor (in the sense of traffic between vertices) and the frequency of such activity. The features of a vertex exist in the form of a profile and its activity exists in the form of data saved by Open Graph, and the two can be analyzed together. Moreover, Open Graph also offers the opportunity to categorize the edges themselves (that is, the traffic between vertices) in terms of their features. Such features can include, for instance, the time that something was communicated, the geographic location of those participating, as well as the semantic evaluation or the “weight” (importance) of certain acts of communication.18 On the basis of these data, profile information can be combined with statistical methods, network analysis can be combined with profile information, or all three methods can be combined.
The statistical analysis of profile information allows for different types of profiles to be determined and subsequently classified. If certain profile information is associated with web traffic, the importance or function of the vertex within the network can be ascertained by means of network analysis.19 If this analysis is conducted with an extensive number of vertices and subnetworks, statistical methods can be applied to determine how the network-analytical importance of a vertex correlates with its profile or profile types. Even without the intermediary step of network analysis, it is possible to investigate whether and how certain profiles and interactions correlate with one another on the basis of the connections that exist between profile information and interaction data (along with an extensive amount of comparative data). In addition, Open Graph makes it possible to bundle together certain constellations of vertices and edges into a single vertex. For instance, the profile of a user can be examined as a network of vertices and edges and can thus be analyzed in terms of the relations that exist between these elements. It is also possible to treat the entire profile of a user as a single vertex and to analyze it, with the methods of network analysis, in light of its relations to other profiles. In this same way, profiles can be bundled together into a group that, in turn, can be analyzed with respect to its relations to other such groups.
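A sketch of the “bundling” just described: contracting a set of vertices (say, all the elements of one user’s profile) into a single vertex, so that the profile can itself be analyzed as one node in a larger network. The graph and the grouping are invented for illustration.

```python
# Contracting a group of vertices into a single vertex, so that a whole
# profile can be treated as one node in a larger network.
def contract(g, group, name):
    """Merge the vertices in `group` into one vertex called `name`."""
    merged = {name: set()}
    for v, neighbors in g.items():
        if v in group:
            # Edges internal to the group disappear; external ones
            # are re-attached to the new, bundled vertex.
            merged[name] |= neighbors - group
        else:
            merged[v] = {name if w in group else w for w in neighbors}
    return merged

# A toy profile: two of its elements link to an outside user.
profile_graph = {
    "post1": {"photo1"},
    "photo1": {"post1", "other_user"},
    "other_user": {"photo1"},
}
print(contract(profile_graph, {"post1", "photo1"}, "profile_X"))
# {'profile_X': {'other_user'}, 'other_user': {'profile_X'}}
```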
By now it should be clear that the analyses at these various levels can be combined to draw conclusions about different profile types, vertex functions, and web traffic. Network-analytical measures thus allow statements to be made about the value of individual vertices (or groups of vertices) to the structure or topology of the network under consideration. For their part, statistical methods provide insight into the similarities, normalities, deviations, and connections of characteristics, which have been collected according to their semantic evaluations. It is also possible, in turn, to zoom in and out between the structures and types (the macro level) and the individual vertices with their addresses and profile data (the micro level). In this way, there is an inherent promise in these methods to be able to observe and explain structural developments that take place from the bottom up.
Semantic information and statistical analysis thus fill in the gaps that network analysis leaves open. Such information enables edges to be determined qualitatively and networks to be hierarchized. If, for example, preferences for certain movies are defined as part of a given profile, this creates a tree structure in which the profile is placed at a higher level that subsumes the level of movie preferences. By means of statistical analysis, statements can be made at each of these levels about the position of a given vertex within a network and the thematic orientation of its content.
A variant of network analysis that is especially well suited to deal with Open Graph’s database is known as “semantic social network analysis.” With the traditional forms of analysis, vertices and edges are often reduced in such a way that the influence of certain vertices is overestimated, because the measures applied take into account only the number of connections from one vertex to another. Semantic social network analysis thus seeks to overcome this shortcoming:
The betweenness value mainly depends on the graph topology, and the knowledge shared between vertices is not taken into account. Thus, in an ordinary graph, a vertex can own a high betweenness value while sharing no knowledge or endogenous content with the members of the social graphs, just getting connected with people who belong to various communities within the social graph. For instance, a recruiter aided by SNA metrics, seeking the most efficient profiles related to some specific knowledge communities (e.g., a gas turbines engineer), would only obtain the most active and connected individuals, i.e., community managers, famous or media-friendly people.20
As a way out of this dilemma, the authors propose a statistical analysis that examines a somewhat different set of properties that can be assigned to the vertices within a network. If correspondences can be identified among these properties such that they can be assigned to a certain category, this is regarded as an indication that the properties belong to a group that might escape detection in traditional network analysis. The integration of statistical and graph-theoretical methods allows for groups, types, and topologies to be ordered or stratified out of a large quantity of heterogeneous elements, and such or similar methods are conceivably employed by Open Graph to achieve its results.
Whereas the taxonomy based on statistics employs types as representations of groups and thus limits its analysis of connections to the micro level, network analysis at least allows the topological structure of groups to remain theoretically visible. By combining these two approaches toward creating order, the distinction between micro and macro levels is resolved as a matter of projection. The projection methods used by network analysis determine whether a phenomenon is perceived as a micro phenomenon or as a macro phenomenon, whether (for example) a complete group is regarded as a single vertex within a network or whether the members of the group are themselves analyzed as individual vertices. Statistical aggregations can then be applied to classify the characteristics of a group into particular types.
For website operators, Open Graph represents a medium for observing and analyzing a self-organizing or self-ordering crowd, and this medium is meant to enable operators to zoom in and out between individuals, types, and groups. These new forms of scalability are compatible with the promise of media to make the emergent phenomena of structural formation observable within a crowd. In what follows, we will take a closer look at the genealogy of this medial promise, which lies at the intersection of figures of thought, formalization (as visualization and calculability), and their operationality. In our historical reconstruction, Open Graph is seen to belong to a specific epistemic tradition whose oppositional nature has incited a steady expansion of observational space and medial representation. Here we hope to clarify this development by considering certain focal points from the history of statistics, sociology, and network analysis.
As regards the development of statistics, we have chosen as our starting point the works of Adolphe Quetelet (1796–1874), which have served both as an inspiration and as a foil to subsequent studies. In his book Physique sociale (1835), Quetelet adopted the approach of so-called error theory, which was then prominent in the field of astronomy. This is based on the assumption that measurement errors when observing planetary orbits follow a particular distribution and that the true orbit of a planet can be calculated on the basis of this distribution. Quetelet transferred this idea to social processes and systematized it into a law of deviation. According to this law, human characteristics (such as height, for instance) likewise follow a fixed distribution curve. He considered the mean value of this distribution – the average man – to be both an abstract law, substantiated by empirical observations of crowds, and an ideal that could be applied as a normative standard to individuals and to society at large.
In Quetelet’s work, the act of observing and the law that derived from this act exist on two different levels. The macrostructure of law precedes the deviating micro-phenomena:
Quetelet sought to discover the laws of association between statistically defined social facts. The interdependence of social facts, he argued, could be studied through the systems of equations that define these laws. This basic insight was elaborated in economics and in social demography by those who applied concepts of “force” and “gravity” in their studies of the social world.21
It is only possible to infer a law from a large quantity of observations on account of the reductive moment that is inherent to statistical analysis. The variability of individual activity is ultimately uninteresting because it always converges toward the mean, which is the only relevant factor. This reductive moment also entails a specific conception of government: A well-governed society is distinguished by its orientation toward the mean, for it is this that represents “all which is grand, beautiful, and excellent.”22 As Jürgen Link has noted:
Quetelet elevated his “average man” […] into an ideal type that is both aesthetic and political, and he did this by associating the “average man” with the collective symbolism of balance, stability, beauty, and the optimal. […] Quetelet’s emphasis on invariant normalities tended toward a purely proto-normalistic strategy: Empirical and mathematical statistics were expected to discover the “natural” normalities toward which politics could then orient itself.23
This normative orientation toward the mean was problematized by subsequent thinkers, above all by Francis Galton (1822–1911). Although he accepted Quetelet’s ideas about normal distribution, Galton increasingly distanced himself from error theory. As a proponent of eugenics, he thought it was less and less possible to interpret certain deviations from the mean, namely those deviations that he considered desirable for society, as errors.24 Whereas Quetelet regarded variability as a problem that needed to be overcome, in Galton’s case it was considered a phenomenon that needed to be measured more accurately and even promoted (in certain instances). Thus, in itself, it was not the idea of distribution that Galton called into question but rather Quetelet’s normative judgment of distribution.25 Instead of regarding the mean or average as a sort of ideal, he considered deviations from the mean to be the basis for defining “group-specific types”: “Average size and average weight are no longer thought to represent the dimensions of the well-built individual in a well-governed society; rather, such averages enable individuals to be organized into different groups.”26
Instead of focusing on a universal average, Galton embraced the idea of “relative rank,”27 which allows individuals to be placed into narrowly defined groups. Of course, averages continued to play a central role in the definition of such groups,28 but the idea of averages was expanded to include other aspects and thus allowed a broader range of variability to be covered: “It is difficult to understand why statisticians commonly limit their inquiries to Averages, and do not revel in more comprehensive views. […] An Average is but a solitary fact, whereas if a single other fact be added to it, an entire Normal Scheme, which nearly corresponds to the observed one, starts potentially into existence.”29 According to Jürgen Link, Galton’s more pronounced focus on variability can be understood as a transition from a “fixed” to a “dynamic” form of normality.30 Instead of postulating a quasi-natural law of universal norms, which could be enforced with control mechanisms, Galton located optimization processes within groups that (at least to some extent) were placed in relation to one another. Parallel to his rejection of universal norms, he also cast doubt on the prevailing macro-perspective that Quetelet had relied on. Galton’s process of regression, for instance, can be seen as an effort to make the dependencies between various attributes comprehensible and analyzable. A total renunciation of Quetelet’s reductionist approach, however, did not seem to be achievable.31 Meanwhile, the field of sociology, which was just developing at the time, came to be dominated by statistical surveys aimed at comprehending the attributes of actors. The basis of such processes – even if they are not concerned with establishing averages but rather with finding correlations – has been the selection of which characteristics should be examined and thus the organization of these characteristics into categories. The inevitable result of this has been the creation of group definitions that Rom Harré refers to as “taxonomic collectives.”32 These are collectives whose coherence is endowed exclusively by an external observer and in which there have been no internal processes of structural formation.33
Around the turn of the twentieth century, two names became closely associated with the effort to analyze such processes from the bottom up: Gabriel Tarde and Georg Simmel. Today, both of these thinkers are regarded as pioneers in the development of network theory.34 Tarde (1843–1904) devised an understanding of sociology according to which society is based on the interactions between individuals. Central among these interactions is imitation, which is primarily the repetition of ideas and opinions but can also find expression in fashions or trends.35 By means of imitation, a complex network of influences is created between individuals, and it is this network that ultimately constitutes society. Tarde conceived of imitation in terms of suggestion and transference, and thus his work can be associated with the discourses of his time concerned with electricity, magnetism, hypnosis, and contagion.36
Simmel (1858–1918) likewise focused to a large extent on the reciprocal effects and interactions between actors. It was only by analyzing such interactions, he thought, that we would be able to explain the development of stable social structures: “[U]nity in an empirical sense is nothing other than the interaction of elements; an organic body is a unity because its organs are in a closer interchange of their energies than with any outside entity.”37 What is crucial here is that the concentration has shifted from the attributes of individual actors to the processes that operate between them. It is the intertwining of such micro-processes that is understood to form structures. This theoretical turn has since been radicalized and systematized by network research. Here the point of departure is no longer actors, who have been described and statistically measured according to their attributes, but rather the relations between these actors. One of the founders of network research was Jacob Moreno (1892–1974), who coined the term “sociometry” during the 1930s. At the heart of his approach was the representational form of the sociogram, which he developed in order to make certain patterns visible that, in turn, provide insight into the hidden dynamics of a group.
Moreno’s explicit goal was for sociometry to be an alternative to the traditional application of macro-sociological categories.38 The micro-sociological analysis of networks was at first limited to smaller and closely defined groups, the members of which were asked about their preferences and distastes. The results of these surveys were then recorded as points and lines on a sociogram. It was only in the visual formalization of the sociogram that the decisive structures of a given group became recognizable. In this regard, Moreno expressly distinguished structural analysis, as based on a sociogram, from quantitative analysis, which is based on the statistical evaluation of the frequencies with which certain choices are made within a group. In the case of structural analysis, according to Moreno, “[t]he sociometric status of an individual is defined by the quantitative index of choices, rejections and indifferences received in the particular group studied.”39 With his sociograms, moreover, Moreno also made use of certain forms of projection by representing networks of “cottage groups” as vertices (the respective importance of which was weighted, to some extent, with plus or minus signs).40
The most decisive aspects for Moreno were thus the addressability and legibility of sociograms, the purpose of which is to make structures visible.41 The goal of this visualization was to change the structures of groups in such a way that the preferences and distastes of its members would come to align with one another. Basing his work on a social-psychological approach, Moreno was concerned with identifying blockades or impasses that could be resolved in group therapy sessions. As this process became more methodologically advanced, network research also came to be applied to larger groups. Even these investigations, however, remained faithful to a fundamental principle, a principle that was theoretically formulated in the work of Tarde and Simmel and methodologically implemented in Moreno’s sociograms. With respect to the organization of groups, that is, the main issue at stake was not individual attributes such as age or gender but rather the ability to identify certain patterns in the relations between the individual actors in question.
In this regard, it is noteworthy that the early stages of network research took place around the same time that graph theory was being developed into a mathematical discipline of its own. At first, however, these two types of analysis adopted different perspectives. A look at the early textbooks devoted to graph theory reveals quite clearly the traditions to which the young field belonged as well as its perspectives and areas of application.42 Many of the problems addressed by graph theory, for instance, happened to originate as a sort of mathematical entertainment, that is, as mental games or brain teasers. They were concerned with the possibility of such things as walking across all the bridges in Königsberg without walking across any of the bridges twice (Euler formulated his famous problem regarding the seven bridges of this city in 1735) or with making one’s way through a labyrinth in an optimal manner. Around the middle of the nineteenth century, some of the problems to be faced by graph theory were introduced by the theory of electricity.43 At the end of the same century, the terminology of trees and graphs was adopted by organic chemistry in order to determine the number of isomeric arrangements.44 Beyond discussing different types of graphs and the various combinatorial problems associated with them, the textbooks also provide extensive illustrations of how certain concepts from algebra and group theory, not to mention matrix calculations, could be made fruitful for graph theory. Conversely, they demonstrate how the methods of graph theory can be applied to the findings of other disciplines. It is apparent that, during the 1930s, graph theory was struggling to establish itself as a legitimate field of mathematics. Although the early developments of graph theory and network analysis were not connected – but rather took place in parallel with one another – the crucial factor is that both fields made use of the same sort of visualization, be it that of a graph or a network diagram.
In an article about visual models and network theory before 1900, Sebastian Gießmann has shown how graphs, as visual models, were adopted from the diagrammatic practices of chemistry into other scientific disciplines, most notably into the logical notation developed by Charles Sanders Peirce.45 Florian Hoof has called attention to the trend of using diagrams and other visualizations in the field of business management between the years 1880 and 1925. As part of the so-called “visual management” of the 1910s, network diagrams were implemented as schematic abstractions for visualizing the relational structures of business processes and for lending support to decision-making procedures.46 Along with the forms of calculability employed by graph theory, especially by means of matrix representations and calculations, these forms of visualization were the preconditions that enabled the merger of the micro-sociological and macro-sociological concepts of the network, a convergence that took place in the 1950s, if not somewhat earlier: “In this translatability – between matrix tables with 1/0 conditions, on the one hand, and the topological network diagram on the other – lies an extremely reductive but nevertheless stable foundation for the isomorphism of social, electrical, and telecommunications networks, and for a general theory as well.”47 What is decisive about the network diagram is thus reduction, formalization, calculability, and the “continuous intersection of calculability and iconicity – the acts of writing and imaging converge.”48
In light of the lineages outlined above, it is apparent that two different and contrary traditions of knowledge, each with its own operative practices of visualization and calculability, intersect with one another in Facebook’s Open Graph protocol. The opposition of statistical approaches and the approach of network analysis reveals two epistemic constellations that, viewed historically, can be said to compete with each other. Statistics is concerned with the features of individual actors, but it leaves the relations among such actors undefined. Whereas statistics can claim to analyze structures at the macro level on the basis of the frequency of particular attributes, network research affords the possibility of ascertaining the processes of structural development from the bottom up. The reduction of phenomena to vertices and edges is the price that has to be paid in order to examine emergent structures. At the same time, it is this reduction that also enables different networks to be managed and compared with one another. The promise of visibility and manageability is thus closely connected to the visualization of sociograms. The most prominent characteristic of the latter (at least in contemporary practices and discourses) is that they make visible and legible, by means of visual formulations and the “top views” that such visualizations entail, something that would otherwise remain hidden from the individual. It is precisely this that enables the formation of structures to be managed and even controlled. This possibility becomes more and more achievable as visual formulations merge more and more with calculable formulations.
In addition to the medial intersection of formalistic practices and traditions of knowledge, there is something else that deserves our attention. This is the use of metaphors from the field of physics that allowed different theoretical traditions to be compatible with one another before the mathematical formalization of graphs and matrix calculations could guarantee the translatability of fields of knowledge. Quetelet, Simmel, Tarde, and (later) Moreno all relied to some extent on physical terminology and thus drew analogies between physics and society. What is instructive in this regard is the different nature of these analogies. Whereas Quetelet sought to demonstrate the general validity of laws in the social realm, Simmel, Tarde, and Moreno were concerned with describing the interactions that exist beyond the directly observable connections between individuals. Moreno, for instance, refers to the vertices in a network as atoms; accordingly, he discusses the connections between them in terms of the forces of attraction and repulsion, the effects of which are then used to explain the cohesion of the atoms. Elsewhere he discusses connections as “channels,” which serve to represent flows of communication as well as various types of influence.49
It is in these contradictory references to physical terminology that the two aforementioned epistemic traditions find expression: the search for universal natural laws, on the one hand, and the search for emergent structures on the other. Such physical terminology, however, has another side to it, one that is more normatively oriented than epistemic. Just as Quetelet hoped to understand “social physics” as the basis for governance, Moreno’s sociometry also contains an aspect of control. In his Foundations of Sociometry, Moreno notes: “[J]ust as we have been able to correct the direction of rivers and torrents, we may be able to correct the direction in which psychological currents flow.”50 Despite being used to different ends, these physical metaphors seem to be associated with the idea of making existing forces (in the sense employed by the natural sciences) explicable or at least calculable – and thus also with the idea of being able to affect these forces in a formative manner. The references to physical concepts thus function to make contradictory conceptions of the social compatible with one another and to ennoble them in scientific terms. Although the individual actor is therefore subjected to a sort of natural law, this same actor (or his or her preferences and rejections) only becomes operable within the social. This conglomerate now promises to be controllable.
This manner of thinking has advanced in network research as the analogies between physics and society have become less metaphorical and more and more technological and methodological. As more comprehensive means have become available for collecting and processing data, models from physics and graph theory have become increasingly effective at identifying universal laws that apply to any type of network.51 This reliance on the natural sciences has helped network models to gain greater acceptance, and it has also led to a “hardening” of the formalizations that are associated with them.
If network theory, as the basis of automated data analysis, has infiltrated the media-technological infrastructure of our time, this has not taken place without the accompanying phantasms of natural laws, emergence, and controllability. Physical models are actualized within an infrastructure that has been conceived of in “hard” and graph-theoretical terms, and they are thus cast into material and technology.52 As a sort of undercurrent, these models are simultaneously accompanied by the inconsistency of the fields of knowledge that enabled them to exist in the first place. They almost seem to be prompting yet another gesture of “physicalization.”
This can be demonstrated by the fact that analogies to physics continue to pervade the present discourse. Whereas network theory, especially as it is practiced by Barabási, has embraced the “objectification” of networks to such an extent that mathematical proofs (such as the development of the “power law”) have come to approximate natural laws,53 the current discussion held by engineers and computer scientists about semantic social network analysis contains references to physics yet again. These references are concerned with forces that govern phenomena beyond their observable relations:
The experiment of E. Branly on radio-construction (1890) has demonstrated that some flows can circulate between points without specific connections. On the radio-construction principle, we emit the hypothesis that in the social graphs, significant and invisible information flow exists between certain individuals, connected or not by visible arcs. We study our hypothesis, comparing the arcs of the skills graph to solid conductors transporting electrical flows, in the purpose to represent the invisible flows within the graph and to quantify the reactance conditioning the social network structure and evolution.54
On the one hand, then, there is a deterministic macro force, while on the other hand there is a multiplicity of interacting micro forces. The oscillation between these contradictory physical analogies seems less to be a symptom of the developments under discussion than their driving force. Whenever the moment of reduction seems all too apparent in the processes being applied, the analogy tips in the other direction, as a sort of compensatory gesture, and suggests that it is possible to fill in the respective empty spaces with a mere methodological adjustment. This dynamic of tipping back and forth generates a continuous “impulse to fill things in” (“Drang zum Auffüllen”), the backdrop to which seems to be the promise of loss-free scalability. If this compensation were in fact to succeed, then it would be possible at every point in a network to take both viewpoints into account and to organize the participants in a network both on the basis of their interrelations as well as on the basis of their attributes.
The overlapping nature of these types of organization, as identified in the case of Open Graph and in the case of current processes of analysis, can be understood as a direct result of this “impulse to fill things in.” An aspect that Markus Krajewski has referred to as being fundamental to the use of tables as instruments of knowledge, for instance, recurs here in a potent form:
The grid of a table is consequently oriented toward the completeness of that which it is designed to include. Neatly selected categorial subdivisions into columns and rows not only allow for all of the present elements to be classified; they also generate anticipation for the empty spaces that result from a strict combinatorics of categories. […] In this sense, a table can serve as an instrument of knowledge because its blank spaces immediately expose acute gaps in our understanding and other desiderata.55
In the case of Open Graph, as opposed to tables produced manually, categories are created automatically out of the data material at hand and are dynamically made to suit the respective state of information. At the same time, the overlapping processes of analysis in the technical infrastructure increase, many times over, the number of empty spaces and desiderata. Thus an ongoing sensation of shortage is produced, one that demands to be filled in with more and more data and metadata on the part of users. This infrastructure not only gains its medial quality as a tool of analysis in the hands of its operators but also by means of the specific staging of its empty spaces and by the way in which users deal with them. Here it seems important to note that the orderings of crowds and categories are neither determined a priori in the technical infrastructure and merely applied, nor do they arise exclusively from the interaction of users on a terrain that is quasi-technically neutral. It is therefore necessary to take into account technical preconditions, manners of use, and the interaction between them.
In our discussion of the ways in which Open Graph is used, we emphasized that, as a medium, it is a space of copresence that simulates communal presence and promises participation. A closer inspection indicated, however, that Open Graph in fact produces an overlapping of copresent spaces instead of individual spheres that are sealed off from one another. These groups are generated because their members engage with each other as a collective to share similar online content, to exchange opinions about it, and to create a feeling of togetherness by means of such interactions.56 Their visibility is based on public forms of distribution and reception in which what is seen is comprehended and commented upon. The decisive moments consist in the comprehension of others’ perceptions and in the circulation of opinions about them. It remains to be asked how the medial character of Facebook’s Open Graph and the manner in which it is used correlate with organizations or orders of crowds. In order to pursue this question, it will be necessary to provide a historical examination of the ways in which media, the masses, and their arrangements or orders have been connected.
Gabriel Tarde understood the public as a dispersed mass of physically separated individuals. On a mental level, however, these individuals are capable of forming a collective. The mutual influence of individuals over one another, which Tarde referred to as “suggestion,” results in a common opinion that unifies the collective. By distributing and circulating opinions, the newspaper thus served (and serves) as a decisive medium for influencing people remotely.57 According to Tarde, suggestion nevertheless takes place between readers, who unknowingly imitate the opinions expressed in the newspapers and share them with other readers. Crucial to all of this is that opinions or thoughts can circulate within a crowd so that the crowd can differentiate itself into collectives. Michael Gamper has summarized this process as follows: “The world of ‘public spheres’ was thus first of all the world of mechanical transmission media, then of thermodynamic transmission media, and finally of electromagnetic transmission media. Within the social units in question, these media were the actual bearers of suggestive exchange.”58
Tarde’s notion of the public (or public sphere) was based on his understanding of crowd psychology. Toward the end of the nineteenth century, a decisive question about crowds was how a mass of people – a disorderly congregation of individuals – could suddenly coalesce into a unit that, on the one hand, would function as something like a subject of activity and, on the other hand, would integrate individuals into itself in such a way that their individuality (and rationality) would dissolve. It is crucial to Tarde’s understanding that, in a crowd, the normal social interrelations of imitation and suggestion increase in intensity and speed. In this way, the individual is transferred into a sort of common psychic condition, and it is this condition that gives unity to a crowd. Within the concept of public spheres, this common psychic condition is reduced to common opinion, but such remote effects are based on the same principle as that which governs the influence of individuals over others in their presence.
As formulated by the likes of Tarde, Gustave Le Bon, and Scipio Sighele, early crowd psychology was concerned above all with the supposed irrationality that appears in crowds and that was thus assumed to be constitutive of them. At the moment of transference – a notion that is also integral to the knowledge of electromagnetism, hypnosis, and contagion – this irrationality is encapsulated in a certain way and thus made operational. Gamper has pointed out that the early theories of the cinema also relied on this type of knowledge. Perception, hypnosis, and transference became the modus operandi of these theories and were used to explain how an audience or public could become a community. The formation of crowds is thus explained by a type of knowledge that is simultaneously attributed to the leadership of crowds: the irrationality of transference can now be deployed to turn a crowd into a unit.59
The early discourse about the interaction of media, the masses, and the public is thus characterized by three figures of thought. Fundamental to this discourse is the notion of transference, which is what brings individuals together in the first place and orders them into a commonality. On the one hand, this is associated with an idea of perception according to which it is common perception that initiates the process of transference (and is controllable by means of one apparatus or another). On the other hand, Tarde’s notion of the public encompasses a concept of communication (or at least of distribution and circulation) that entails the formation of common opinion.
In considering Facebook’s Open Graph as a space of copresence, it is important to understand the developments that have taken place in the crowd-psychological conception of media. Christina Bartz has investigated how the discourse about television in the 1950s and 1960s relied on the theories of crowd psychology and how it was in such terms that television first came to be discussed as a mass medium. Accordingly, the mass-medium theory of television correlates the dispersed or scattered masses with the discourse about perception and hypnosis that had defined early theories concerned with crowds and the cinema. Television is conceived as a medium of perception rather than a medium of communication and can thus function as a simulator of presence.60
Facebook’s Open Graph protocol has reconfigured this mixture of transference, perception, and communication. With its form of visibility, which prompts users to see the same things that their “friends” have seen, Open Graph is thus associated with the discourse that regards television as a medium of collectivization based on perception. At the same time, Open Graph is presented as a medium of communication that not only displays links to content and to the interactions of others with this content but also circulates opinions and comments about it. In this regard, Open Graph is thus also associated with the understanding of public spheres in Tarde’s sense of the concept. It is crucial that, in both of these traditions, the idea of transference serves as a sort of basis or undercurrent. It lies at the heart of the discourse and generates the sense of belonging together – the actual connectivity in question.
By now, transference has itself migrated into the technical realm, where it is realized in the distribution and address management of friend lists. At the same time, it remains a crucial figure of thought for being able to imagine the staging of overlapping micro-publics as existing groups within a space of copresence. Open Graph thus reconfigures the constellation of transference, perception, and communication into a technical infrastructure that is staged as a simulator of presence. In line with the historical tradition outlined above, presence and participation are medially configured, even if they remain unoccupied in terms of content. Whereas communication and transference have been consolidated into a technological infrastructure, the level of perceptual content remains empty and demands to be filled in.
As a form of use, sharing one’s perceptions on the internet may indeed comply with this demand, but it does not necessarily do so in accordance with the dictates of technological structure. In light of recent work by Ralf Adelmann and Hartmut Winkler, it is possible to understand the relationship between use and infrastructure in a more differentiated manner.61 The concept of the “doubled subject” makes it clear that, as far as digital media are concerned, it is impossible to identify any unambiguous development with respect to the positions of the subject. Whereas postmodern critiques of the digital subject tend to stress the disempowerment of the subject’s autonomy, the prevailing forms of the subject in computer games, for instance, happen to be coherent and capable of action. Adelmann and Winkler have explained this “paradoxical” phenomenon by noting that there is a degree of interaction between real and imaginary levels. Thus the subject is able to compensate for its fragmentation (for instance, in the form of multiple individual bits of information that are stored in a database) by means of an imaginary coherence and thereby to maintain its capability to act. Crucial to this is precisely the interaction between both levels: “In metaphorical terms, the reenactment of the civil subject requires the real stage of the mediation and distribution of the subject in databases.”62
Corresponding tendencies can also be identified in the case of belonging to groups on Facebook. The real fragmentation of the subject, which on the internet or in Open Graph exists only in the form of disparate database entries, is compensated for by the imagination of a group that shares common perceptions and opinions. When Adelmann and Winkler speak of the pleasure that accompanies the change from being incapable of action to being capable of it, this is precisely the main characteristic of Facebook as a medium of collectivization. In the act of fragmentation, the process of transference is given a degree of “free rein” – the subject finds itself once again within a cascade of impressions, and this situation is suggestive of openness, of a lack of limitations, and even of freedom. These impressions are given order within the context of a group. First, one’s own newsfeed can serve as a sort of filter, something that must be taken into account because the group considers it to be “relevant.” Second, the web content on the newsfeed has already been evaluated with regard to its ability to attract attention within the group. Third, the assembly of the newsfeed – the presentation of one’s own online discoveries – can to some extent be regarded as a chain of action on the level of self-representation: The comments of “friends” concerning one’s own “Likes” serve to confirm one’s choices, generate attention, and enable the formation of common opinion. The circulation of one’s “Likes” and the reaction to them mirror the individual as a perceiving, opinion-forming, and active subject. Thus there appear moments of causal effectiveness; the impression of copresence is felt; and the decentralized, fragmented subject regains its bearings. The copresent space of a communally perceiving group therefore supports the conception of an active and non-fragmented subject that is in control of its perception and action.
In this regard, the concept of the doubled subject goes beyond approaches that regard social media as belonging to a history of loss, according to which the postmodern potential of digital media was supposedly squandered.63 Moreover, the integration of real and imaginary levels can be understood as a supplement to those philosophical and anthropological approaches whose point of departure is the “pleasure of self-fragmentation” or the demands for self-interrogation that are imposed by so-called technologies of the self.64
Our discussion of Facebook’s Open Graph protocol has raised questions about the referent of a concept of the crowd that, on the one hand, is always already medially preconfigured and yet, on the other hand, is reflected in the medial. Our bipartite approach to the historical development of “organization” and to the idea of “communication” has made it clear that, in principle, Open Graph is not a novel phenomenon. Rather, it appears to be a provisional high point in the sociological practice of taking surveys, here based on an extremely expansive space of observation, a long-term outlook, and a medial infrastructure.
A new understanding of the crowd or the masses nevertheless seems to emerge from these constellations. The main contributor to this understanding is the interaction between technical infrastructure and user practices, both of which entail new forms of phantasmatic excess. In the case of infrastructure, what is at stake is the ability to comprehend emergence in actu by means of technical processes and thus the ability to overcome the reductive moment of traditional methodological approaches. Loss-free scalability should ultimately enable forms of control that draw on the momentum and self-organization of the masses under investigation. In the case of user practices, on the contrary, the main idea is to constitute a space of copresence by “sharing” content and circulating opinions about it. At the same time, this space represents a common sphere of activity and thus enables a degree of alternation between fragmentation and coherence.
In both cases, the phantasmatic excesses are associated with the change between various modes, an act of alternation that initiates a process of “filling things in” that is both continuous and compensatory. At the level of analysis, this process manifested itself in an increasingly dense sequence of methodological adjustments, which led to the conflation or layering of orders and forms of knowledge that were originally contradictory. Each of these adjustments has generated new empty spaces that demand to be filled in and has thus specifically contributed, by making various sorts of data available, to the enrichment of Open Graph. The meagerness or reduction that, despite such large amounts of data, is inherent to this conflation or layering is concealed by an elaborate form of staging. However, it is in no way guaranteed that users will necessarily “fall for” this staging. As our discussion of the “doubled subject” has made clear, it is rather the case that the users’ imagination of belonging to a group can be understood as a gesture of compensation and thus as a form of actively “filling in” the reduction in question.
Our efforts suggest that the connection between empty spaces and phantasmatic excesses, which allow these spaces to be filled in, should be regarded as one of the decisive medial characteristics of social media. Paradoxically, however, it is precisely this view that is systematically obscured by the repeated references to the masses of people who use social media and by the massive amounts of collected data. When dealing with the masses, as our sketches of historical genealogies suggest, the need for infinite reduction seems to be immediately connected to the need for infinite inclusion.
1 See https://de-de.facebook.com/ (accessed on August 15, 2014): “Facebook ermöglicht es dir, mit den Menschen in deinem Leben in Verbindung zu treten und Inhalte mit diesen zu teilen.”
2 Obviously, “semantic” can only be understood here in the sense of formal semantics; see Max Cresswell, “Formal Semantics,” in The Blackwell Guide to the Philosophy of Language, ed. Michael Devitt and Richard Hanley (Malden, MA: Blackwell, 2006), 131–46.
3 See Facebook’s “Insights for Websites: Product Guide” (2011): http://developers.facebook.com/attachment/Insights_for_websites.pdf (accessed on August 18, 2014).
4 See https://developers.facebook.com/docs/reference/api/page (accessed on August 18, 2014).
5 See https://developers.facebook.com/docs/reference/api/permissions (accessed on August 18, 2014).
6 See https://developers.facebook.com/docs/opengraph/tutorial (accessed on August 18, 2014).
7 On the controversial relationship between the Open Graph protocol and other approaches to dealing with the “semantic web,” see Yuk Hui and Harry Halpin, “Collective Individuation: The Future of the Social Web,” in Unlike Us Reader: Social Media Monopolies and Their Alternatives, ed. Geert Lovink and Miriam Rasch (Amsterdam: Institute of Network Cultures, 2013), 103–16.
8 Crucial to this visibility are the privacy settings that allow users to determine exactly what should be visible to their “friends.” For an insightful overview of these settings, see Guilbert Gates, “Facebook Privacy: A Bewildering Tangle of Options,” The New York Times (May 12, 2010), http://nytimes.com/interactive/2010/05/12/business/facebook-privacy.html (accessed on August 18, 2014).
9 Mizuko Ito et al., Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media (Cambridge, MA: MIT Press, 2010), 38.
10 Bernadette Kneidinger, Facebook und Co.: Eine soziologische Analyse von Interaktionsformen in Online Social Networks (Wiesbaden: Verlag für Sozialwissenschaften, 2010), 38.
11 Ibid., 95–96.
12 The sequence of addressing, levelling, and categorization does not, however, necessarily result in a consistent semantic designation of web content, and this is because individual definitions can and often do conflict with one another.
13 Because the analytical methods that are actually used by Facebook and app providers are not public, we have to rely here on some degree of speculation. However, this speculation is informed by the current discourses that are taking place among software engineers and scientists.
14 To each pair of elements in the set under consideration, a metric or distance function assigns a real number that is greater than or equal to zero. A metric must satisfy the following conditions: The distance between two elements is exactly zero if and only if the two elements are identical; the distance between x and y corresponds to the distance between y and x; and the triangle inequality holds.
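Rendered in standard notation (our own conventional formulation of the three conditions just listed, with d denoting the distance function on a set X):

\[
d\colon X \times X \to \mathbb{R}_{\geq 0}, \qquad
d(x,y) = 0 \Leftrightarrow x = y, \qquad
d(x,y) = d(y,x), \qquad
d(x,z) \leq d(x,y) + d(y,z)
\]

for all elements x, y, z of the set under consideration.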
15 For textbooks on the subject, see J. A. Bondy and U. S. R. Murty, Graph Theory (London: Springer, 2008); and Jonathan L. Gross and Jay Yellen, Graph Theory and its Applications (Boca Raton, Fla.: CRC Press, 1999).
16 A matrix is a tabular arrangement of symbols in rows and columns. Matrices can be added, subtracted, and – by means of a specific procedure – multiplied. In an adjacency matrix, the rows and columns designate the respective vertices of a graph. If two vertices i and j are neighboring, the entry of the ith row and jth column will be a 1. Otherwise the entry there will be a 0.
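A minimal illustration (our own example): in a graph with three vertices in which vertex 1 is joined to vertices 2 and 3 while vertices 2 and 3 are not joined to each other, the adjacency matrix is

\[
A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},
\]

and the symmetry of A about its diagonal reflects the fact that the graph is undirected.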
17 Steffen Albrecht, “Knoten im Netzwerk,” in Handbuch Netzwerkforschung, ed. Christian Stegbauer and Roger Häußling (Wiesbaden: Verlag für Sozialwissenschaften, 2010), 125–34, at 129.
18 See Matthias Trier, “Struktur und Dynamik in der Netzwerkanalyse,” in ibid., 205–17, esp. 214–16. The type of event-based network analysis described by Trier seems to be perfectly suited for Open Graph’s data model. Its aim is to analyze changes in networks and thus to determine the effects of short-term changes on the importance of vertices.
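Since the analytical methods actually in use are not public (see note 13 above), the following Python sketch is purely hypothetical; it merely illustrates the event-based principle described by Trier: each timestamped interaction is treated as a new edge, and the short-term effect of every event on the importance of a vertex (crudely proxied here by its degree) can be read off from the resulting sequence of snapshots.

```python
from collections import defaultdict

# Hypothetical event log of timestamped "Like" interactions:
# (time, user vertex, object vertex). Illustrative data only.
events = [
    (1, "anna", "film_x"),
    (2, "ben",  "film_x"),
    (3, "anna", "song_y"),
    (4, "carl", "film_x"),
]

degree = defaultdict(int)  # current degree of each vertex
history = []               # degree snapshots after each event

for t, user, obj in events:
    # Each event adds an edge between a user and an object,
    # raising the degree of both vertices.
    degree[user] += 1
    degree[obj] += 1
    history.append((t, dict(degree)))

# Trace how one object's importance changes event by event.
for t, snapshot in history:
    print(t, snapshot.get("film_x", 0))
```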
19 Logically enough, only subnetworks are analyzed in this process, not all of Facebook.
20 Christophe Thovex and Francky Trichet, “Semantic Social Networks Analysis: Towards a Sociophysical Knowledge Analysis,” Social Network Analysis and Mining 3 (2013), 35–49, at 42.
21 John Scott, “Social Physics and Social Networks,” in The SAGE Handbook of Social Network Analysis, ed. John Scott and Peter J. Carrington (London: SAGE, 2011), 55–66, at 56.
22 Adolphe Quetelet, A Treatise on Man and the Development of His Faculties, trans. Robert Knox (Edinburgh: W. and R. Chambers, 1842), 100.
23 Jürgen Link, Versuch über den Normalismus: Wie Normalität produziert wird, 3rd ed. (Göttingen: Vandenhoeck & Ruprecht, 2006), 195. For an English introduction to Link’s concept of normalism, see Jürgen Link, “From the ‘Power of the Norm’ to ‘Flexible Normalism’: Considerations after Foucault,” Cultural Critique 57 (2004), 14–32. The question of why any controls are needed at all if nature, of its own accord, is always moving toward a mean has been discussed in recent debates in terms of “statistical determinism.” Ian Hacking has suggested that the goal of reformatory movements was rather concerned with an indirect level of control: “to reorganize ‘boundary conditions’ under which a population was governed by statistical law.” See Ian Hacking, “How Should We Do the History of Statistics?” in The Foucault Effect: Studies in Governmentality, ed. Graham Burchell et al. (Chicago: University of Chicago Press, 1991), 181–95, at 188.
24 See Donald A. MacKenzie, Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge (Edinburgh: Edinburgh University Press, 1981), 58.
25 See Link, Versuch über den Normalismus, 239.
26 François Ewald, Der Vorsorgestaat (Frankfurt am Main: Suhrkamp, 1993), 193.
27 See MacKenzie, Statistics in Britain, 1865–1930, 58.
28 In his studies motivated by eugenics, for instance, Galton relied on one of Booth’s classifications for defining “civic worth,” which falls on the X-axis of a normal distribution. See Alain Desrosières, The Politics of Large Numbers: A History of Statistical Reasoning, trans. Camille Naish (Cambridge, MA: Harvard University Press, 1998), 114–15.
29 Francis Galton, Natural Inheritance (London: Macmillan, 1899), 62.
30 Link, Versuch über den Normalismus, 157.
31 For a discussion of the opposition between a “living inventory” and “dead statistics” in the work of Hans Zeisel, see Isabell Otto, “Massenmedien wirken: Zur Aporie einer Evidenzlist,” in Die Listen der Evidenz, ed. Michael Cuntz (Cologne: DuMont, 2006), 221–38, esp. 225–26.
32 Rom Harré, “Philosophical Aspects of the Micro-Macro Problem,” in Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macro-Sociologies, ed. Karin D. Knorr-Cetina and Aaron V. Cicourel (Boston: Routledge & Kegan Paul, 1981), 139–60.
33 As regards the process of measuring television viewers as opposed to measuring internet users, see the discussions in Ien Ang, Desperately Seeking the Audience (London: Routledge, 1991); and Josef Wehner, “‘Taxonomische Kollektive’ – Zur Vermessung des Internets,” in Weltweite Welten: Internet-Figurationen aus wissenssoziologischer Perspektive, ed. Herbert Willems (Wiesbaden: Verlag für Sozialwissenschaften, 2008), 363–82.
34 See Scott, “Social Physics and Social Networks,” 55–66; Linton C. Freeman, The Development of Social Network Analysis: A Study in the Sociology of Science (Vancouver: Empirical Press, 2004); and Elihu Katz, “Rediscovering Gabriel Tarde,” Political Communication 23 (2006), 263–70.
35 Another central concept in Tarde’s theory is innovation or discovery, which defines moments of novelty in society.
36 See Michael Gamper, Masse lesen, Masse schreiben: Eine Diskurs- und Imaginationsgeschichte der Menschenmenge, 1765–1930 (Munich: W. Fink, 2007), 412; and idem, “Charisma, Hypnose, Nachahmung: Massenpsychologie und Medientheorie,” in Trancemedien und neue Medien um 1900: Ein anderer Blick auf die Moderne, ed. Marcus Hahn and Erhard Schüttpelz (Bielefeld: Transcript, 2009), 351–73, at 354–55.
37 Georg Simmel, Sociology: Inquiries into the Construction of Social Forms, trans. Anthony J. Blasi et al., 2 vols. (Leiden: Brill, 2009), 1:22.
38 See Katja Mayer, “On the Sociometry of Search Engines: A Historical Review of Methods,” in Deep Search: The Politics of Search Beyond Google, ed. Felix Stalder and Konrad Becker (Innsbruck: Studienverlag, 2009), 54–72.
39 J. L. Moreno, Who Shall Survive? Foundations of Sociometry, Group Psychotherapy and Sociodrama (Beacon, NY: Beacon House, 1953), 704.
40 Ibid., 435.
41 See ibid., 141: “A readable sociogram is a good sociogram.”
42 See Dénes König, Theory of Finite and Infinite Graphs, trans. Richard McCourt (Basel: Birkhäuser, 1990). The original German version of this book was published in 1936.
43 In this regard, König refers to the circuit laws that had been formulated by Gustav Robert Kirchhoff (1824–1887), especially to his loop rule (or mesh rule) and to his rule concerning nodes or vertices (see ibid., 241–42).
44 See ibid., 62–72.
45 Sebastian Gießmann, “Graphen können alles: Visuelle Modellierung und Netzwerktheorie vor 1900,” in Visuelle Modelle, ed. Ingeborg Reichle et al. (Munich: Wilhelm Fink, 2008), 269–84. On Peirce’s use of so-called “existential graphs,” see also Wolfgang Schäffner, “Electric Graphs: Charles Sanders Peirce und die Medien,” in Electric Laokoon: Zeichen und Medien, von der Lochkarte zur Grammatologie, ed. Michael Franz (Berlin: Akademie Verlag, 2007), 313–26.
46 Florian Hoof, “Vertraute Oberflächen: Möglichkeitsbedingungen der Systemkrise,” a presentation delivered at the Annual Conference of the Society for Media Studies (Jahrestagung der Gesellschaft für Medienwissenschaft) on October 4, 2012.
47 Erhard Schüttpelz, “Ein absoluter Begriff: Zur Genealogie und Karriere des Netzwerkkonzepts,” in Vernetzte Steuerung: Soziale Prozesse im Zeitalter technischer Netzwerke, ed. Stefan Kaufmann (Zurich: Chronos, 2007), 25–46, at 35.
48 Gießmann, “Graphen können alles,” 284.
49 See Martin Donner, “Rekursion und Wissen: Zur Emergenz technosozialer Netze,” in Rekursionen, ed. Philipp von Hilgers and Ana Ofak (Munich: Wilhelm Fink, 2009), 77–115, esp. 92.
50 Moreno, Who Shall Survive? Foundations of Sociometry, 439.
51 Scott, “Social Physics and Social Networks,” 57–58.
52 For a critique of the application of “hard” physical models to social phenomena, as currently practiced in the field of “social physics,” see ibid. At the same time, however, Scott also stresses the productivity of these analogies on a metaphorical level, as they have been used by social theorists since the time of Comte.
53 See Albert-László Barabási and James Fowler, “Social Networks,” in Science is Culture: Conversations at the New Intersection of Science + Technology, ed. Adam Bly (New York: Harper Perennial, 2010), 297–312.
54 Thovex and Trichet, “Semantic Social Networks Analysis,” 44.
55 Markus Krajewski, “In Formation: Aufstieg und Fall der Tabelle als Paradigma der Datenverarbeitung,” Nach Feierabend: Zürcher Jahrbuch für Wissensgeschichte 3 (2007), 37–55, at 45.
56 This figure of thought is also discussed in the social sciences. For an overview of the treatment of virtual groups by social and media theorists, see Karin Dollhausen and Josef Wehner, “Virtuelle Gruppen: Integration durch Netzkommunikation? Gesellschafts- und medientheoretische Überlegungen,” in Virtuelle Gruppen: Charakteristika und Problemdimensionen, ed. Udo Thiedeke (Wiesbaden: Westdeutscher Verlag, 2000), 68–87. From their social-scientific perspective, they discuss the relationship among social connections, communication, and technology in terms of changes in the constitution of society. Thus they maintain that modern forms of society entail an increase in individualization and the dissolution of traditional social forms. In this light, the technical infrastructure of social networks and the interactions that such networks enable are thought to correspond to the need for “flexible” people to create non-binding, fluid, but temporarily stable social forms in which to situate their identities. Open Graph would thus be the medium of fluid social groups that exchange experiences and opinions and that enable their members to orient themselves both flexibly and with mobility. Such groups, however, are no longer to be understood as “social groups” in the strict sense, which are defined by their diffuse (though close) and above all stable interrelations. On the concept of flexibility in modern society, see Richard Sennett, The Corrosion of Character: The Personal Consequences of Work in the New Capitalism (New York: Norton, 1998).
57 For a discussion of Tarde’s idea of the newspaper as a “social organism,” see John Durham Peters, “Satan and Savior: Mass Communication in Progressive Thought,” Critical Studies in Mass Communication 6 (1989), 247–63.
58 Michael Gamper, “Charisma, Hypnose, Nachahmung: Massenpsychologie und Medientheorie,” in Trancemedien und neue Medien um 1900: Ein anderer Blick auf die Moderne, ed. Marcus Hahn and Erhard Schüttpelz (Bielefeld: Transcript, 2009), 351–73, at 368.
59 See ibid., 366.
60 See Christina Bartz, MassenMedium Fernsehen: Die Semantik der Masse in der Medienbeschreibung (Bielefeld: Transcript, 2007), 15–16.
61 See Ralf Adelmann and Hartmut Winkler, “Kurze Ketten: Handeln und Subjektkonstitution in Computerspielen,” Ästhetik & Kommunikation 148 (2010), 99–107; and Ralf Adelmann, “‘There Is No Correct Way to Use the System’: Datenbanklogiken und mediale Formen,” in Sortieren, Sammeln, Suchen, Spielen: Die Datenbank als mediale Praxis, ed. Stefan Böhme et al. (Berlin: LIT Verlag, 2012), 253–67.
62 Ibid., 255.
63 This seems to be the position, for instance, of Geert Lovink, who complains that social media “are not postmodern machines but straightforward modernist products of the 1990s wave of digital globalization turned mass culture.” Quoted from his introductory article “A World Beyond Facebook,” in Unlike Us Reader: Social Media Monopolies and Their Alternatives, ed. Geert Lovink and Miriam Rasch (Amsterdam: Institute of Network Cultures, 2013), 9–15, at 12.
64 See Gerald Raunig, “Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung,” in Generation Facebook: Über das Leben im Social Net, ed. Oliver Leistert and Theo Röhle (Bielefeld: Transcript, 2011), 145–60; Carolin Wiedemann, “Facebook: Das Assessment-Center der alltäglichen Lebensführung,” in ibid., 161–82; and Hannelore Bublitz, Im Beichtstuhl der Medien: Die Produktion des Selbst im öffentlichen Bekenntnis (Bielefeld: Transcript, 2010).
is a postdoctoral researcher at the Digital Cultures Research Lab of Leuphana University Lüneburg, where she studies new media in the digital age. Her research focuses on the mediality of technical media, media history, the history of knowledge about movement, and the cultural history of mathematics.
is a research associate in the DFG project »Kulturtechnik Unternehmensplanspiel« at the Institute for Media Research of the Hochschule für Bildende Künste Braunschweig. His research interests include science and technology studies, digital orders of knowledge, and surveillance and power.
Mass gatherings and the positive or negative phantasms of the masses instigate various discourses and practices of social control, communication, and community formation. Yet the masses are not what they once were. In light of the algorithmic analysis of mass data, the diagnosis of dispersed public spheres in the age of digital media, and new conceptions of the masses such as swarms, flash mobs, and multitudes, the emergence, functions, and effects of today’s digital masses need to be examined and discussed anew. They provide us, moreover, with an opportunity to reevaluate the cultural and medial historiography of the masses. The present volume outlines the contours of this new field of research and brings together a collection of studies that analyze the differences between the new and former masses, their distinct media-technical conditions, and the political consequences of current mass phenomena.