CSS-TW1:
International Workshop on
Cooperation in Selfish Systems incorporating TagWorld I.

26 May - 29 May 2006
University of Bologna Residential Center
Bertinoro (Forlì), Italy

Talk Abstracts



Coalitional and Antagonistic Games: Algorithmic Issues

Paul Spirakis (Invited Speaker)
Research Academic Computer Technology Institute (CTI), Patras (Greece)

We present here a framework that allows us to study games among coalitions (and their anarchy cost) and also antagonism in networks. Our coalitional model is parameterised on the sizes of coalitions and allows for derivation of matching upper and lower bounds on the anarchy cost of the game. Our confrontational model examines the dynamic competition of populations over networks and allows for efficient predictions of the survival of the weakest species.
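
For reference, the "anarchy cost" (price of anarchy) of a game G is standardly defined as the ratio of the worst-case equilibrium social cost to the optimal social cost; in the coalitional setting the maximum ranges over the relevant coalitional equilibria (standard notation, not necessarily the authors' own):

\[ \mathrm{PoA}(G) \;=\; \frac{\max_{s \in \mathrm{Eq}(G)} \mathrm{SC}(s)}{\min_{s \in S} \mathrm{SC}(s)}. \]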


The Evolutionary Emergence of "Tit-For-Tat" Behaviour
Luis R. Izquierdo, *Segismundo S. Izquierdo
The Macaulay Institute, Aberdeen (UK), *University of Valladolid (Spain)

The particular research question I address in this paper is: "How plausible is it that Tit-For-Tat (TFT) emerges in a population of individuals playing the Prisoner's Dilemma embedded in an evolutionary context?". To address this rigorously, I am running many simulations (effectively many different models) aimed at the same question ("Does TFT emerge?"). All of these models are consistent with the essence of the theory of evolution, but they differ in the details of how its principles are actually implemented. In other words, I explore the question "Does TFT emerge?" across the widest possible range of assumptions that are consistent with the most basic evolutionary principles (within certain limits, mostly imposed by the availability of computing power). Some of these assumptions have already been explored in the literature; what is particularly novel about this research is that I explore many more assumptions, and I do so in a consistent and systematic way, so that all the results obtained from the different models can be contrasted and compared with analytical approaches within one single framework. The general aim behind this research is to test (with a particular research question) to what extent the assumptions made in mainstream evolutionary game theory for the sake of tractability (e.g. infinite populations, replicator dynamics, focus on equilibria...) affect the conclusions we obtain from those models. Given the amount of computing power required to conduct this research, all the simulations are being run on computer grids.
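
As an illustration of a single point in this space of models (a minimal sketch only: the payoff matrix, population size, mutation rate and fitness-proportional selection below are illustrative assumptions, not the paper's specification), an evolutionary round-robin iterated Prisoner's Dilemma with TFT, ALLC and ALLD might look like:

    import random

    # Illustrative PD payoffs (T > R > P > S): (my_payoff, opp_payoff) by (my_move, opp_move).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tft(hist_self, hist_opp):
        return 'C' if not hist_opp else hist_opp[-1]

    def alld(hist_self, hist_opp):
        return 'D'

    def allc(hist_self, hist_opp):
        return 'C'

    STRATEGIES = [tft, alld, allc]

    def play(s1, s2, rounds=10):
        h1, h2, p1, p2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            r1, r2 = PAYOFF[(m1, m2)]
            p1, p2 = p1 + r1, p2 + r2
            h1.append(m1)
            h2.append(m2)
        return p1, p2

    def generation(pop, mutation=0.01):
        # Round-robin payoffs, then fitness-proportional (roulette-wheel) reproduction.
        fitness = [0.0] * len(pop)
        for i in range(len(pop)):
            for j in range(i + 1, len(pop)):
                pi, pj = play(pop[i], pop[j])
                fitness[i] += pi
                fitness[j] += pj
        new_pop = random.choices(pop, weights=fitness, k=len(pop))
        return [random.choice(STRATEGIES) if random.random() < mutation else s
                for s in new_pop]

    pop = [random.choice(STRATEGIES) for _ in range(30)]
    for _ in range(50):
        pop = generation(pop)
    print({s.__name__: sum(1 for x in pop if x is s) for s in STRATEGIES})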


Cooperation in Networked Populations of Selfish Adaptive Agents: Sensitivity to Network Structure
Laszlo Gulyas
AITIA International Inc. and Lorand Eotvos University, Budapest (Hungary)

Trust and cooperation among agents in a multi-agent system have several interpretations, including shared preferences, work toward a common goal, dependable expertise and opinions, altruistic behavior, etc. Yet, in most fields of the social sciences, trust is understood in the strategic context: i.e., as the (missing) tendency of the agent to take unilateral advantage over its partners. The classic framework to explore such situations is the (Iterated) Prisoner's Dilemma game (IPD). Following the series of works by Axelrod, this framework is now ubiquitous in many disciplines and forms the basis of many models studying a variety of problems. Within this framework, two major strategies are investigated: the all-defective (ALLD) strategy that exploits its opponents in all rounds (i.e., never cooperates) and the tit-for-tat (TFT) strategy that cooperates on the first round and then reciprocates the moves of its partner. The role of interaction topology (i.e., network structure) has also been studied in IPD settings. In their seminal paper, Cohen, Axelrod and Riolo (CAR) found that, in their evolutionary model, context preservation (i.e., a static network), and not network structure, was the key to the survival (and domination) of altruistic/cooperative behavior (i.e., agents playing TFT). When network dynamics were allowed, ALLD agents took over the world. While this result has important and appealing interpretations in the social disciplines, it is in stark contrast with what is known about information/behavior cascades and adaptation in complex (social) networks. In the latter field, it is established that the average path length of the network plays an important role in the spreading of information and behavior. This theoretical paper revisits the question of network dependence in IPD games with the aim of resolving the apparent contradiction discussed above. It departs from the classic CAR model by assuming individual learning instead of evolutionary adaptation. Studying this altered model on static Watts-Strogatz networks, it shows that context preservation is not always sufficient for achieving a cooperative regime. Furthermore, and more importantly, the results suggest that the performance of the TFT strategy may depend on the (static) network structure. This resolves the contradiction above and may have important practical consequences when engineering networked populations of selfish adaptive agents that are intended to cooperate on certain tasks.
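
A minimal sketch of the network side of such an experiment (using networkx; the parameter values are illustrative, not those of the paper) shows how the rewiring probability of a Watts-Strogatz graph controls the average path length that the abstract argues matters for spreading:

    import networkx as nx

    # Watts-Strogatz small-world graphs: n nodes on a ring, each linked to its
    # k nearest neighbours, with every edge rewired with probability p.
    # Increasing p sharply reduces the average path length while clustering
    # stays high over a wide range of p (the "small-world" regime).
    n, k = 200, 6
    for p in (0.0, 0.01, 0.1, 1.0):
        g = nx.connected_watts_strogatz_graph(n, k, p, tries=200)
        print(p, nx.average_shortest_path_length(g), nx.average_clustering(g))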


Simulating the Effects of Altruistic Punishment on Cooperation
Shade T. Shutters
Arizona State University (USA)

Since cooperation among a population of self-interested agents often means overcoming an incentive to cheat or free-ride, the emergence of cooperation has long been an enigma in the life and social sciences. A prevailing explanation is the concept of "strong reciprocity" as a key mechanism leading to cooperation. This concept posits that agents altruistically reward those that cooperate and altruistically punish those that do not. Recent findings suggest that altruistic punishment, in particular, may be an important factor in the evolution of cooperation. I use agent-based modelling to investigate the importance of altruistic punishment to cooperation. Here I show that agents given the ability to altruistically punish will do so but that the emergence of cooperation is highly sensitive to the ratio of costs incurred by punishee and punisher.
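
A minimal sketch of the kind of interaction being simulated (the payoff values and punishment rule here are illustrative assumptions, not the model's actual parameters): after a one-shot Prisoner's Dilemma, a cooperator may pay a cost to inflict a larger cost on a defecting partner; the ratio of the punishee's cost to the punisher's cost is the quantity the abstract identifies as critical.

    # Illustrative one-shot PD payoffs.
    R, S, T, P = 3.0, 0.0, 5.0, 1.0

    def interact(a_coop, a_punishes, b_coop, b_punishes,
                 cost_to_punisher=1.0, cost_ratio=3.0):
        """Return (payoff_a, payoff_b) for one PD interaction followed by
        optional altruistic punishment of a defector by a cooperator.
        cost_ratio = cost imposed on punishee / cost paid by punisher."""
        if a_coop and b_coop:
            pa, pb = R, R
        elif a_coop and not b_coop:
            pa, pb = S, T
        elif not a_coop and b_coop:
            pa, pb = T, S
        else:
            pa, pb = P, P
        cost_to_punishee = cost_ratio * cost_to_punisher
        if a_coop and not b_coop and a_punishes:
            pa -= cost_to_punisher
            pb -= cost_to_punishee
        if b_coop and not a_coop and b_punishes:
            pb -= cost_to_punisher
            pa -= cost_to_punishee
        return pa, pb

    # Example: a punishing cooperator meets a defector.
    print(interact(True, True, False, False))   # (-1.0, 2.0)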


On Dynamic Coalitions and Möbius Inversion
Giovanni Rossi, University of Bologna (Italy)

Coalitions are subsets of agents, while coalition structures are partitions of agents, i.e., non-void collections of non-void and pairwise disjoint coalitions (called blocks) whose union yields the entire population. A coalitional game assigns a worth to each coalition. The worth of a coalition structure is obtained as the sum of its blocks' worths as defined by the underlying coalitional game. An optimal coalition structure is one of maximal worth. For a generic underlying coalitional game, finding an optimal coalition structure is a well-known NP-complete problem. A behavioral coalition structure generation algorithm defines a time pattern of coalition structures generated by agents' behavior. The latter specifies when and why agents move from their current coalition to another one. For example, each agent could periodically compare her payoff with that received by some randomly chosen other agent; if the former exceeds or equals the latter, then she remains a member of her current coalition; otherwise, she moves into the other agent's coalition. Given that coalition structures evolve in time, blocks are here termed 'dynamic coalitions', to emphasize that these coalitions, prevailing at any time, have a history, i.e., some members are older while others are younger. It must also be stressed that in a behavioral coalition structure generation algorithm, not only individual behavior but also coalitional behavior (i.e., how to divide the worth of cooperation within coalitions) has to be defined. This is modeled here through Möbius inversion of coalitional games (see Rota (1964), On the foundations of combinatorial theory I: theory of Möbius functions). The result is a dynamic version of the Shapley value (see Shapley (1953), A value for n-person games), i.e., one applying to dynamic coalitions. If the whole population partitions into types, then a type-symmetric coalitional game assigns to each coalition a worth depending only on the number of members of each type. Here, a type-symmetric game is used, and the worth of dynamic coalitions is shared among members through Möbius inversion so as to achieve the following: for any dynamic (i.e., prevailing at any time) coalition, (1) any two members of the same type and same age (as members) are rewarded equally; (2) for any two members of the same type but different age, the older is rewarded no less than the younger (age monotonicity). Furthermore, agents' behavior is modeled as described above, i.e., periodic random payoff comparisons are made; if her current payoff exceeds or equals the observed one, then an agent remains in her current coalition; otherwise, she moves into the coalition where the (strictly) higher payoff is observed. In addition, a criterion for dynamically fine-tuning the randomization is proposed.
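
For reference, the Möbius inversion (or dividend) of a coalitional game v, and the Shapley value written in terms of it, take the standard form (standard notation; the paper's dynamic, type-symmetric refinement is not reproduced here):

\[ \mu_v(S) = \sum_{T \subseteq S} (-1)^{|S \setminus T|}\, v(T), \qquad v(S) = \sum_{T \subseteq S} \mu_v(T), \qquad \phi_i(v) = \sum_{S \ni i} \frac{\mu_v(S)}{|S|}. \]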


Between Biology and Culture: For a Cognitive Description of Altruistic Behavior
Gennaro Di Tosto, Mario Paolucci, Rosaria Conte.
Istituto di Scienze e Tecnologie della Cognizione, CNR, Rome (Italy)

In evolutionary biology, an action is said to be altruistic if it is costly for the individual who performs it and provides a benefit for another individual. Costs and benefits are measured in terms of Darwinian fitness, i.e. reproductive success. Selfish individuals enhance their fitness, exploiting altruists and refusing to sustain the cost of the altruistic behavior. Hence, altruists can be out-competed by selfish individuals and eventually go extinct. We know, however, that cooperation can be established thanks to three factors: kinship, group formation, and reciprocity. With rules allowing for a positive assortation of altruists, evolutionary forces operating at the group level will tend to favor groups containing altruistic agents over those containing selfish ones. But inside each group, the single agent still pays the cost of the altruistic action and exposes itself to the risk of exploitation. Assuming the possibility of repeated interactions, cooperation between agents of the same group can be sustained through reciprocity. The theory of reciprocal altruism assumes dyadic interactions between the same two agents, constantly exchanging the roles of donor and recipient. Instead, we refer to indirect reciprocity if help is returned by a different agent from the one who receives help in the first instance. In both cases, cooperation is defended through punishment: altruists neutralize cheating by withholding help toward agents who refuse to reciprocate. Various models allowing for positive assortation of altruists have been explored. Some of them are based on the selection of arbitrary features attached to the agents, called tags. Agents carrying similar tags can be seen as a group: they prefer to interact with each other and tend to avoid interaction with agents whose tag value is different from their own. Computer simulations prove the tag-based model to perform well in establishing cooperation among agents with little or no complexity beyond these arbitrary features. It has been stated that this result is achieved without repeated interaction (no direct or indirect reciprocity). Without having to process information on past interaction, agent recognition or memory, tags are enough to establish cooperation without reciprocity. On the other hand, indirect reciprocity and altruistic punishment seem necessary in an open population. They demand, however, a high level of strategic thinking and a complex theoretical model. Indirect reciprocity leads to reputation formation and moral judgment. Moreover, results in experimental economics have documented forms of altruistic behavior among humans which the theory of reciprocal altruism fails to explain. The notion of strong reciprocity has been proposed for those kinds of behavior, and it has been argued that it is facilitated by the existence of social norms and institutions. We must question the theoretical status of these models: are they just a simulation technique, or do they have some explanatory power over important phenomena found in nature? For a theory of cooperation and altruistic behavior to be complete, it must have a cognitive basis. We claim that, without including the mental level in such a theory, every analysis will fail to capture all the features that will allow us to describe natural phenomena. Reciprocity is actually defined as an expectation of future help. This expectation introduces to the analysis the mental level, which still awaits a full, correct description. 
With this level missing from the analysis, how is it possible to distinguish reciprocity from simple donation, punishment from cheating, or a social norm from a mere behavioral equilibrium? The approach to a cognitive theory of altruism and cooperation should be: 1) co-evolutionary, provided that the nature of the replicators and the processes of replication (either biological or cultural) are justified; and 2) gradualistic, meaning that we need to know how higher levels of organization emerge from lower ones (e.g. the social from the individual level, the institutional from the social, etc.). In our view the gap between biology and culture should be filled by the mind.


Stochastic Models of Tagging Behavior
Ciro Cattuto, *Vittorio Loreto, *Luciano Pietronero
Centro Studi e Ricerche "Enrico Fermi", *Università di Roma "La Sapienza", Rome (Italy)

Collaborative tagging has been quickly gaining ground because of its ability to recruit the activity of web users into effectively organizing and sharing vast amounts of information. Here we approach collaborative tagging from the perspective of complex systems science and investigate the connection between emergent features, grounded in actual data, and the tagging behavior of users. We collect data from a popular social bookmarking system (http://del.icio.us) and select a semantic context by focusing on the resources marked with a given tag. On studying the frequency-rank distribution of the tags co-occurring with the selected one, we find a heavy-tailed behavior and observe properties that point to an emergent hierarchy of tags. We introduce a stochastic model embodying two main aspects of collaborative tagging: (i) a multiplicative character related to the idea that users are exposed to each other's tagging activity; (ii) a notion of memory - or aging of resources - in the form of a heavy-tailed access to the past of the system. Remarkably, our simple model is able to account quantitatively for the measured distributions, with a surprisingly high accuracy. We also report experimental observations on the accumulation of tags in the context of a resource, and find a universal scaling behavior. We suggest low-level mechanisms that might be responsible for the observed scaling law.
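
A minimal sketch of a process with the two ingredients described above, a new-tag probability and a fat-tailed memory kernel (the kernel shape, parameters and unoptimised sampling below are illustrative assumptions, not the fitted model):

    import random
    from collections import Counter

    def simulate_tag_stream(steps=5000, p_new=0.05, tau=20.0):
        """Didactic sketch: with probability p_new append a brand-new tag;
        otherwise copy a tag from the past, choosing a lag dt with probability
        roughly proportional to 1/(dt + tau), i.e. a fat-tailed memory kernel."""
        stream = [0]        # tag ids, in order of posting
        next_tag = 1
        for t in range(1, steps):
            if random.random() < p_new:
                stream.append(next_tag)
                next_tag += 1
            else:
                lags = range(1, t + 1)
                weights = [1.0 / (dt + tau) for dt in lags]
                dt = random.choices(lags, weights=weights)[0]
                stream.append(stream[t - dt])
        return stream

    # Frequency-rank distribution of the generated tags (heavy-tailed for small p_new).
    counts = sorted(Counter(simulate_tag_stream()).values(), reverse=True)
    print(counts[:10])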


Simple Rewire Protocols for Cooperation in Dynamic Networks
David Hales, Stefano Arteconi
University of Bologna (Italy)

Recently we presented some simple network rewiring rules that support cooperation in dynamic networks. They were based on some novel "tag" models from computational sociology. We have proposed these as peer-to-peer protocols that could potentially support various kinds of application. We will describe these models and their dynamics and discuss some open issues. We will briefly discuss how these processes produce a kind of emergent dynamic coalition formation process that supports cooperative coalitions.
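
A minimal sketch of the flavour of rewiring rule in question (a paraphrase of published SLAC-style descriptions; the exact comparison, copying and mutation details here are illustrative assumptions): each node periodically compares its utility with a randomly chosen node and, if the other is doing better, copies that node's strategy and neighbourhood, with occasional mutation.

    import random

    def slac_like_step(nodes, utility, strategy, links, mutate_prob=0.01):
        """One pass of a SLAC-style rewiring rule (illustrative sketch; link
        symmetry and utility recalculation are glossed over here)."""
        for i in random.sample(nodes, len(nodes)):          # asynchronous, random order
            j = random.choice(nodes)
            if j != i and utility[j] > utility[i]:
                # Copy the more successful node: adopt its strategy, drop own
                # links, take on its links and attach to it directly.
                strategy[i] = strategy[j]
                links[i] = (set(links[j]) | {j}) - {i}
            if random.random() < mutate_prob:               # mutate strategy
                strategy[i] = random.choice(['cooperate', 'defect'])
            if random.random() < mutate_prob:               # mutate neighbourhood
                links[i] = {random.choice(nodes)} - {i}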


Facilitating the Development of Social Structure in Evolutionary Domains
Bruce Edmonds
Centre for Policy Modelling, Manchester (UK)

A key question is: how can sophisticated "social structures" arise as the result of an evolutionary/adaptive process of the parts? Or, to be more exact: what conditions are necessary for complex coordination of action to arise in evolutionary systems? That some positive answers to these questions exist is suggested by the fact that complex ecosystems and societies have arisen as the result of biological evolution. Any progress in answering these questions could be applied in the design of complex adaptive distributed systems. A fundamental requisite for the development of such social complexity is the presence of some mechanism whereby the nature of other individuals can be recognised, albeit in a fallible and coarse manner. One such mechanism is that of "tags": socially observable cues which are not necessarily "hard-wired" to other characteristics of the individual. The presence of such tags allows more sophisticated manners of interaction to develop and, furthermore, is a minimal way of doing so. This talk will describe investigations into two domains which are particular cases of the above general question: (1) some existing work on using tags to facilitate the emergence of group-based resource-sharing when there is a specialisation of skills in an evolutionary setting, and (2) some new work looking at the effect of tags and tag-like properties on the evolution of food webs in a competitive (dog-eat-dog) domain. 1. Tags, specialisation and resource-sharing: here it is shown that donation levels can be maintained in a population of selfish individuals with specialised "harvesting" skills in an evolutionary process, even when there is no benefit other than the better distribution of nutrition among a population with tags. This model goes further than others since 5% of resources are dissipated during sharing. The evolutionary "benefits" to gene-lineages are only those obtainable via their participation in groups of individuals with similar tags. This shows a possible way in which the potential advantages of specialisation (such as those found in human societies) can be maintained in the face of possible defection/invasion by "hoarders". 2. Tags in the evolution of food webs: in this ongoing work, an evolutionary model of primitive life within a simple spatial environment is investigated. This is a more descriptive model in which the "organisms" inhabit a simple 2D grid world with unevenly distributed sunshine and water supplies. The organisms have a reasonably expressive genome coding for basic abilities such as moving one square, turning, recognising those displaying "tag1", displaying "tag2", being poisonous, etc. The model is being investigated for the dynamics that result and the food webs that emerge. This will be compared to the behaviour observed in previous tag models.
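
For the first domain, a minimal sketch of the donation step (only the tag-similarity matching and the 5% dissipation are taken from the abstract; the tolerance rule and data layout are illustrative assumptions):

    DISSIPATION = 0.05   # fraction of the resource lost in transfer (from the abstract)

    def maybe_donate(donor, recipient, amount=1.0, tolerance=0.1):
        """Donor gives `amount` of its harvested resource to a recipient whose
        tag is within `tolerance` of its own; 5% is dissipated in the transfer."""
        if abs(donor['tag'] - recipient['tag']) <= tolerance and donor['store'] >= amount:
            donor['store'] -= amount
            recipient['store'] += amount * (1.0 - DISSIPATION)

    a = {'tag': 0.42, 'store': 3.0}
    b = {'tag': 0.45, 'store': 0.0}
    maybe_donate(a, b)
    print(a['store'], b['store'])   # 2.0 0.95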


Tag Mechanisms applied to Grid Virtual Organizations Management
Isaac Chao
Polytechnic University of Catalonia, Barcelona (Spain)

Virtual Organizations (VOs) are geographically distributed, functionally diverse, dynamic and agile organizational entities linked through Information and Communications Technology (ICT). Next-generation large-scale open distributed systems, such as Grids, consist of ensembles of VOs. These systems are often composed of very heterogeneous components, potentially having divergent interests and belonging to very different administrative domains. We have to take into account a high probability of any of the following characteristics: a) lack of previous interactions between agents or any usable historical data; b) high levels of dynamicity, with agents entering and exiting the system continuously; c) impossibility of using a centralized institution to enforce the fulfilment of agreements; d) Grids can be formed by communities that continuously change their usage policies, membership and goals during their lifetime. Central control and manual management are exceedingly difficult or even impossible in this kind of system; decentralized, self-organized management and operation become a requirement. We aim to extend existing Tag mechanisms [Hales04], adapting them to the described target scenario with a twofold purpose: 1) address current limitations of Tag mechanisms, providing robustness to non-adaptive agents, support for a fitness or utility measure and comparison in a VO scenario, and adding a degree of control over the evolutionary learning mechanism; the goal is the achievement of high cooperation between agents, leading to the maximization of system-wide overall utility (social welfare), even in the presence of a sizeable number of non-adaptive agents; 2) coordinate the VO lifecycle in a self-organized fashion, sensitive to the VO ecosystem it inhabits. We rely on the emergence, through the Tag mechanism, of the desired macroscopic properties. We want to address the engineering of certain macroscopic VO properties (namely the cooperation level reached, the diversity of VOs in the system and VO stability) just by modifying the parameters of the microscopic mechanism. The goal is to achieve a parametric description of such micro-macro links, enabling the self-organized engineering of systems composed of VOs incorporating emergence. I will present a generic model for the management of VOs in the Grid based on Tag mechanisms, but incorporating several extensions addressing the current limitations (from a Grid VO point of view) of state-of-the-art Tag mechanisms. A concrete implementation of the model and experimental results for a specific Grid model will be presented for evaluation and discussion.


Greedy Cheating Liars and the Fools Who Believe Them
Stefano Arteconi, David Hales, Ozalp Babaoglu
University of Bologna (Italy)

Recently, evolutionary algorithms based on "tags", developed within social simulation and computational sociology, have been adapted for application in peer-to-peer (P2P) systems such as file sharing. Using this approach, nodes compare their utilities with random others and copy the behaviours and links of nodes with higher utility. Such algorithms have been shown to possess many of the attractive emergent properties of the earlier tag models, such as self-organised cooperation and coordination: even selfish local optimizers may come to act in an unselfish manner through a kind of "group-like" selection process. However, for this process to operate, it is assumed that nodes report their individual utility, behavior and links to other nodes honestly. But what if a node does not follow the specified protocol and attempts to subvert it, either for its own selfish ends or simply to destroy the network? A node may lie about its utility to another node, may lie about its behavior (when the other node wishes to copy it) and may lie about its neighbor links. We examine the robustness of this approach to two kinds of cheating behaviour: a) nodes that lie and cheat to maximize their own utility and b) nodes that act nihilistically to try to destroy cooperation in the network. We present results of extensive computer simulations of P2P networks using an existing evolutionarily inspired protocol called SLACER, which has been shown to robustly promote and maintain the formation of small-world networks with chains of cooperating nodes linking all nodes. We subject it to attack by four variants of malicious behavior, showing under what conditions it is robust or vulnerable. We find a counter-intuitive result: in case a), a certain percentage of such "greedy cheating liars" can actually improve some aspects of system performance, and in case b), the network can still function even when containing high levels of such nodes.
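
As an illustration of the first kind of attack (an assumption-laden sketch, not the SLACER specification): a "greedy cheating liar" can misreport its utility so that it is never imitated-from honestly but always imitated, while an honest node reports truthfully.

    def reported_utility(node):
        """What a node reports when asked for its utility (illustrative sketch)."""
        if node.get('liar'):
            # A greedy cheating liar always claims the highest possible utility,
            # so nodes comparing against it copy its behaviour and links; lying
            # about behaviour and links when copied gives further attack variants.
            return float('inf')
        return node['utility']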


SLAC and SLACER: what happens when you try to be "smart" about who you reject
Emma Norling
Centre for Policy Modelling, Manchester (UK)

Tags (visible cues by which agents form coalitions) have been shown to provide a simple and surprisingly robust (for the society, if not the individual) mechanism for cooperation. The SLAC and SLACER algorithms in particular have demonstrated how simple rules can be used to construct networks of cooperating individuals based on the perceived success of those individuals. Networks constructed using these algorithms are even reasonably robust in the presence of cheats and liars. My interest is in extending this work in two ways. The first is to see what (if any) benefits can be gained from a less 'lazy' approach to network configuration in the SLACER algorithm. Currently, when a node reconfigures its network, links to other nodes are dropped randomly. What improvements can be gained by keeping track of 'useful' neighbours and preferentially keeping these links? The second extension is to explore these algorithms in the context of applications that require cooperation from multiple partners. A real-world example of such an application would be a supply chain, where specialist firms supply different components to create a particular finished product (but the components of any given firm may be used in many different products). The applications of SLAC and SLACER to date have been in two-party cooperation situations, where the successful completion of a task requires only pair-wise cooperation. I have models in development looking at both of these extensions, with some preliminary results in the first case and no results yet in the second. I would be happy to discuss my work on one or both of these areas at the workshop.
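
A minimal sketch of the contrast being proposed for the first extension (both drop rules are illustrative assumptions; "usefulness" here is just a per-neighbour score maintained by the node):

    import random

    def drop_links_randomly(neighbours, keep):
        """Current 'lazy' behaviour: retain a random subset of links."""
        neighbours = list(neighbours)
        return set(random.sample(neighbours, min(keep, len(neighbours))))

    def drop_links_preferentially(neighbours, usefulness, keep):
        """Proposed variant: preferentially retain the neighbours that have been
        most 'useful' so far (e.g. cooperated most often), dropping the rest."""
        ranked = sorted(neighbours, key=lambda n: usefulness.get(n, 0.0), reverse=True)
        return set(ranked[:keep])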


Emergent Socially Rational Utilitarianism in a Peer-to-Peer System
Andrea Marcozzi, David Hales.
University of Bologna (Italy)

For many applications, peer-to-peer (P2P) systems require their member nodes (or agents) to behave in a socially beneficial (non-egotistical) way. Kalenka and Jennings (1999) termed this requirement the "Principle of Social Rationality": if an agent has a choice of actions, it should choose the action that maximizes the social utility (the sum of all agent utilities in the system). This principle can be contrasted with classical individual rationality, which states that agents should select actions that maximize their individual utility. However, developing protocols for realistic P2P systems that adhere to the principle of social rationality is very difficult and potentially so costly as to negate the benefits. This is because P2P systems have no central control, are potentially huge (composed of millions of nodes) and have high node turnover (with users continually entering and leaving the system). In addition, selfish or malicious nodes can get into the system via hacked client programs. These factors mean that individual nodes, even if they wish to follow a socially rational principle, often will not have enough information to gauge the effects of their actions on others. It would be too costly for every node to ask every other node to report its state before every individual action, even if that were possible. Recently, simple locally adaptive protocols have been proposed that claim to produce socially rational outcomes through a process of self-organisation, even though nodes only act on their own utility values. In this approach, nodes preferentially copy other nodes (by duplicating their behaviour and links) that have higher utilities. However, these previous works only consider specific scenarios in which certain plausible utility values are selected. In this poster we take one such existing P2P model (the SkillWorld) and modify the utilities to explore a large space of possible values. In each case we checked whether the protocol maximized the collective utility. We found in each case that if the collective cost of an action was less than or equal to the collective benefit, the protocol self-organized the network into a state where nodes selected this action. We present a synthesis of results from computer simulations covering over ten billion interactions.
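
In symbols (a standard reading of Kalenka and Jennings' principle, not necessarily the poster's notation), a socially rational agent i and an individually rational agent i respectively choose

\[ a^{\ast}_{\mathrm{social}} = \arg\max_{a \in A} \sum_{j} u_j(a), \qquad a^{\ast}_{\mathrm{individual}} = \arg\max_{a \in A} u_i(a), \]

and the reported finding can then be phrased as: whenever the collective cost of an action is at most its collective benefit, the self-organising protocol converges on nodes selecting that action.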

