Agent-Based Modelling in NetLogo
(ABM-NetLogo)
Assessment details
- All students will develop their own ABM in NetLogo and perform
some experiments and analysis. The program should contain appropriate comments in the code, a user interface with input parameters and outputs, and a completed “Info” tab in a similar format to the models in the NetLogo Models Library.
- PhD students will additionally produce a short technical report
describing some experimental results from their model. This should
include: 1) abstract; 2) introduction; 3) description of model; 4)
experiments performed and results (which may include charts, figures
and / or tables); and 5) discussion / conclusion. Minimum 3 pages.
Note: Do not worry if your model does not do what you thought or hoped it would; you just need to describe what you did and what you found.
Deadlines
- Topic (one short paragraph) to be e-mailed to dave@davidhales.com before 3 May (lab 9). [All students]
- ABM model deadline: 14 June (email the .nlogo file). [All students]
- Technical report deadline: 28 June (email a .pdf file). [PhD students only]
Some topic suggestions
Select your own topic if you can, but here are some potential ideas if you are stuck. Some of these topics are hard and you might only be able to complete part of one, or some other version of it. Some topics might not make sense until we cover certain material in the labs:
Implement a model described in a paper
and check if you get similar results. Find a paper that
describes an ABM and attempt to re-implement it. Do you get the same
(or similar) results as reported in the paper?
Produce a variation on an existing ABM
in the NetLogo models library. Incorporate agents with some new capabilities / behaviours. How do the new capabilities change the outcomes of the model (if at all)? Can you explain the results?
How can network topologies
self-organise? Suppose you wished agents to dynamically form
various network topologies without a top-down plan. Assume agents
(nodes) cannot access a unique ordered ID (the who number in NetLogo) but rather all start identically as entirely disconnected nodes. Each node only has access to its own and other nodes’ number of links and (perhaps) some other internal state variables that are updated in a local way (for example, you could store the age of a link). The preferential attachment algorithm creates a scale-free network using only the number of links of each node. Are there agent behaviours that could produce rings, stars or lattice-like structures in a similarly decentralised way?
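For orientation, a minimal NetLogo sketch of the preferential attachment growth rule (in the spirit of the Preferential Attachment model in the models library) might look like the following; a new node links to an existing node chosen with probability proportional to that node’s number of links:

  to setup
    clear-all
    ;; seed the network with two connected nodes
    ;; (who numbers are used here only as a setup convenience)
    create-turtles 2 [ setxy random-xcor random-ycor ]
    ask turtle 0 [ create-link-with turtle 1 ]
    reset-ticks
  end

  to go
    ;; choosing a random link and then one of its two ends selects an
    ;; existing node with probability proportional to its number of links
    let partner [ one-of both-ends ] of one-of links
    create-turtles 1 [
      setxy random-xcor random-ycor
      create-link-with partner
    ]
    tick
  end

Note that this rule uses nothing but link counts; the question is what other, equally local, rules could produce rings, stars or lattices.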
How can we keep a network connected
with only local queries? Make a model of a dynamic network
(graph) in which there are on average N nodes. Existing nodes leave and
new nodes join with probability P each time step. New nodes connect to
a randomly selected single existing node. Nodes can only query nodes
they are directly connected to and can only store a small number of
links to other nodes (K). Queries involve asking another node for its
current links. Implement some simple node behaviours that try to keep
the network connected while minimising the number of queries. Test with
different values of N, P and K.
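A rough sketch of the churn step, assuming interface sliders named P and K (the maintenance behaviour shown is only one naive possibility and is the part you would design and test):

  to go
    ;; churn: with probability P an existing node leaves and a new node
    ;; joins, connecting to a single randomly chosen existing node
    if random-float 1 < P and count turtles > 1 [
      ask one-of turtles [ die ]
      create-turtles 1 [
        setxy random-xcor random-ycor
        create-link-with one-of other turtles
      ]
    ]
    ;; maintenance: a node with spare capacity queries one neighbour for its
    ;; links and connects to one of them (a deliberately naive example rule)
    ask turtles with [ count my-links < K and any? link-neighbors ] [
      let candidate one-of [ link-neighbors ] of one-of link-neighbors
      if candidate != self and not link-neighbor? candidate [
        create-link-with candidate
      ]
    ]
    tick
  end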
How do ideas spread through networks?
Implement a population of N agents connected in a network. Each agent
can have one of two opinions (red or blue). Agents change their opinion
if > T proportion of their neighbours hold a different opinion.
Starting from a condition in which all agents share the same opinion, explore the conditions that allow a single node changing its opinion
to spread over the entire network. Experiment with different network
topologies and T values.
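A minimal sketch of the update rule, assuming opinions are stored in each turtle’s color (red or blue), agents are joined by links, and T is an interface slider:

  to go
    ask turtles with [ any? link-neighbors ] [
      let different link-neighbors with [ color != [ color ] of myself ]
      if (count different / count link-neighbors) > T [
        ;; with only two opinions, adopting the other opinion is a colour flip
        set color ifelse-value (color = red) [ blue ] [ red ]
      ]
    ]
    tick
  end

Note this updates agents one at a time (asynchronously); a synchronous version, where all agents decide before any change colour, can behave differently and is worth comparing.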
How does social learning affect a coordination game? Make a variation of the El Farol Bar model in which there are three kinds of learners: 1) individual learners (as in
the standard model); 2) social learners that copy the best strategies
from other agents rather than learn themselves; 3) mixed learners who
learn themselves and copy from others. Compare results from populations
composed entirely of each type and some mixtures of types.
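One hedged way to implement the social learners’ copying step, assuming hypothetical agent variables learner-type, strategies and score (the variable names in the library El Farol model differ):

  turtles-own [ learner-type strategies score ]   ;; hypothetical names

  to copy-best
    ;; social learners replace their strategies with those of the
    ;; currently best-scoring other agent
    ask turtles with [ learner-type = "social" ] [
      set strategies [ strategies ] of max-one-of other turtles [ score ]
    ]
  end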
How do assimilation and change affect segregation? Make a variation of the Schelling segregation model in which there are two additional parameters: 1) a second threshold C above the existing T threshold (C > T); if the C threshold is exceeded then, rather than move, the agent “converts” to a randomly selected neighbour’s colour; 2) a probability M; in each time step each agent spontaneously changes to a random colour with probability M. Experiment with different numbers of colours and with different C, M, T and agent density values. How do these affect segregation outcomes? Additionally, measure the largest and smallest clusters that emerge in stable segregation outcomes.
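A sketch of one possible reading of the modified agent rule, with C, T and M as interface sliders and the fraction of unlike neighbours triggering conversion above C and movement above T (the move step assumes density is below 100% so an empty patch exists):

  to update  ;; turtle procedure, run once per agent per time step
    let nearby turtles-on neighbors
    if any? nearby [
      let unlike (count nearby with [ color != [ color ] of myself ]) / (count nearby)
      ifelse unlike > C
        [ set color [ color ] of one-of nearby ]  ;; convert rather than move
        [ if unlike > T [ move-to one-of patches with [ not any? turtles-here ] ] ]
    ]
    ;; spontaneous change with probability M (example three-colour set)
    if random-float 1 < M [ set color one-of (list red green blue) ]
  end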
How do different topologies affect segregation? Make a variation of Schelling’s segregation model in which agents are located on graph topologies other than a lattice. Experiment with a number of different graph topologies, threshold values, numbers of colours and densities. How do these parameters affect the emergence of segregation? In addition, consider a
3D lattice and some form of dynamic topologies (where the graph changes
over time).
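The core change is in how an agent’s neighbourhood is defined; a hedged fragment, assuming agents sit on the nodes of a pre-built link network:

  ;; similarity is computed over graph neighbours rather than adjacent patches
  to-report fraction-similar  ;; turtle reporter
    if not any? link-neighbors [ report 1 ]
    report (count link-neighbors with [ color = [ color ] of myself ]) /
           (count link-neighbors)
  end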
Kill the things (in space)! The
universe is a 2D space. You are in a spaceship that can only be
rotated. It can fire torpedoes from the front. Things are trying to get
you. If they hit you then you die. If you shoot one with a torpedo it dies. Things store a movement rule that determines how fast and in what
way they move. Periodically things hatch a new thing with probability
P. New things are copies of their parent but with probability M they
mutate their movement rule in some way. If all things are killed a set
of new things appear with completely random movement rules. The user
can rotate the spaceship, left or right, and fire by pressing keys.
Additionally, add an autopilot function that controls the spaceship,
attempting to kill things without user intervention. How does the
behaviour of things evolve with different P and M values?
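A hedged fragment of the reproduction step for the things, assuming the movement rule is (purely for illustration) just a turn angle and a speed, with P and M as interface sliders:

  breed [ things thing ]
  things-own [ turn-angle speed ]

  to reproduce-things
    ask things [
      if random-float 1 < P [
        hatch 1 [
          ;; offspring inherit the parent's movement rule and
          ;; mutate it with probability M
          if random-float 1 < M [
            set turn-angle turn-angle + random-normal 0 10
            set speed max (list 0 (speed + random-normal 0 0.1))
          ]
        ]
      ]
    ]
  end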
How do dynamic graphs affect cooperation? Produce an evolutionary model of agents playing a
simple cooperation game (such as the single-round Prisoner’s Dilemma or
similar) on a dynamic graph. The graph could change following some
known growth rule (such as preferential attachment), some dynamic
change (such as random rewiring) or any other mechanism. Parameterise
the rate of change (R) of the graph and the mutation rate (M) of the
strategies and explore different values for R and M in relation to
cooperation level.
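A hedged sketch of one possible graph-change mechanism (random rewiring, with the rate of change R as an interface slider):

  to rewire-graph
    ;; with probability R per time step, remove one randomly chosen link and
    ;; replace it with a link between two currently unconnected agents
    if random-float 1 < R and any? links [
      ask one-of links [ die ]
      ask one-of turtles [
        let candidate one-of other turtles with [ not link-neighbor? myself ]
        if candidate != nobody [ create-link-with candidate ]
      ]
    ]
  end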
How do different fixed strategies in
repeated cooperation games compare? Implement several (at least
5) different strategies in the Iterated Prisoner’s Dilemma game.
Implement a “round robin” tournament in which all possible pairs of
strategies play a game. Present results of all games (a symmetric
matrix). Vary the number of games played G and the number of rounds in
each game R. How do these affect the results?
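Whichever strategies you choose, you will need a single-round payoff reporter; a minimal sketch using the commonly used Prisoner’s Dilemma payoffs 5, 3, 1 and 0 (an assumed parameterisation):

  to-report pd-payoff [ my-move their-move ]   ;; moves are "C" or "D"
    if my-move = "C" and their-move = "C" [ report 3 ]   ;; mutual cooperation
    if my-move = "C" and their-move = "D" [ report 0 ]   ;; sucker's payoff
    if my-move = "D" and their-move = "C" [ report 5 ]   ;; temptation
    report 1                                             ;; mutual defection
  end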
How do repeated cooperation strategies
evolve? Implement an evolutionary model of a population of
agents playing the Iterated Prisoner’s Dilemma. Each agent should store
3 values describing its strategy <p,q,r>, where p is the probability an agent will cooperate on the first move, q is the
probability to cooperate if the other player defected last move and r
is the probability to cooperate if the other player cooperated last
move. Hence <1,0,1> would equate to the tit-for-tat strategy,
<0,0,0> would equate to always defect and <1,1,1> would be
always cooperate. Experiment to determine what strategies evolve for
some different parameter settings, such as mutation rate, population size, number of rounds played between partners, and different PD payoff values.
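A hedged sketch of how a <p,q,r> agent might choose its next move, with the opponent’s previous move passed in and "none" used for the first move of a game:

  turtles-own [ p q r ]

  to-report choose-move [ opponent-last ]   ;; turtle reporter, returns "C" or "D"
    let prob r                                ;; opponent cooperated last move
    if opponent-last = "none" [ set prob p ]  ;; first move of the game
    if opponent-last = "D" [ set prob q ]     ;; opponent defected last move
    report ifelse-value (random-float 1 < prob) [ "C" ] [ "D" ]
  end

With this rule, <1,0,1> plays tit-for-tat, <0,0,0> always defects and <1,1,1> always cooperates, as described above.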
How do strategies in the rock / paper
/ scissors game evolve? Implement a population of N agents, each of which plays one strategy in the game: rock, paper or scissors. Each time step
each agent plays P other randomly chosen agents obtaining a payoff of 1
for a win, zero for a draw and -1 for a loss. After each round of games
is played, reproduce some high scoring agents into the next generation
and kill some low scoring agents (keeping the population size constant
at N). With small probability M mutate (randomly change) the strategy
of newly reproduced agents. Experiment with different P, M and N values. Do the parameters affect the distribution of strategies over time?
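A hedged sketch of the payoff reporter and the mutation step, assuming each agent stores its strategy as a string in a turtle variable and M is an interface slider:

  turtles-own [ strategy ]   ;; "rock", "paper" or "scissors"

  to-report rps-payoff [ mine theirs ]
    if mine = theirs [ report 0 ]                          ;; draw
    ifelse (mine = "rock" and theirs = "scissors") or
           (mine = "paper" and theirs = "rock") or
           (mine = "scissors" and theirs = "paper")
      [ report 1 ]                                         ;; win
      [ report -1 ]                                        ;; loss
  end

  to mutate   ;; turtle procedure, run on newly reproduced agents
    if random-float 1 < M [ set strategy one-of [ "rock" "paper" "scissors" ] ]
  end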