
Tidying up the repository

J. Fernando Sánchez 2017-07-03 18:40:00 +02:00
parent e1be3a730e
commit f8538fe057
9 changed files with 50 additions and 30185 deletions


@@ -1,9 +1,33 @@
# [Soil](https://github.com/gsi-upm/soil)
[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)
# [SOIL](https://github.com/gsi-upm/soil)
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](notebooks/soil_tutorial.ipynb) to develop your own agent models.
If you use Soil in your research, don't forget to cite this paper:
```bibtex
@inbook{soil-gsi-conference-2017,
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
doi = "10.1007/978-3-319-59930-4_19",
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
isbn = "978-3-319-59929-8",
keywords = "soil;social networks;agent based social simulation;python",
month = "June",
organization = "PAAMS 2017",
pages = "234-245",
publisher = "Springer Verlag",
series = "LNAI",
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
volume = "10349",
year = "2017",
}
```
@Copyright GSI - Universidad Politécnica de Madrid 2017

Binary file not shown.

data.txt (29002 lines): File diff suppressed because it is too large.


@@ -8,6 +8,31 @@ Welcome to Soil's documentation!
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
If you use Soil in your research, do not forget to cite this paper:
.. code:: bibtex
    @inbook{soil-gsi-conference-2017,
    author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
    booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
    doi = "10.1007/978-3-319-59930-4_19",
    editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
    isbn = "978-3-319-59929-8",
    keywords = "soil;social networks;agent based social simulation;python",
    month = "June",
    organization = "PAAMS 2017",
    pages = "234-245",
    publisher = "Springer Verlag",
    series = "LNAI",
    title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
    url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
    volume = "10349",
    year = "2017",
    }
.. toctree::
    :maxdepth: 2
    :caption: Learn more about soil:


@@ -1,112 +0,0 @@
Developing new models
---------------------
This document describes how to develop a new analysis model.
What is a model?
================
A model defines the behaviour of the agents with a view to assessing their effects on the system as a whole.
In practice, a model consists of at least two parts:
* Python module: the actual code that describes the behaviour.
* Setting up the variables in the Settings JSON file.
This separation allows us to run the simulation with different agents.
Models Code
===========
All the models are imported into the main file. The initialization looks like this:
.. code:: python
    import settings

    networkStatus = {}  # Dict that will contain the status of every agent in the network

    sentimentCorrelationNodeArray = []
    for x in range(0, settings.network_params["number_of_nodes"]):
        sentimentCorrelationNodeArray.append({'id': x})
    # Initialize agent states. Let's assume everyone is normal.
    init_states = [{'id': 0, } for _ in range(settings.network_params["number_of_nodes"])]
    # add keys as necessary, but "id" must always refer to that state category
A new model has to inherit from the BaseBehaviour class, which is in the same module.
There are two basic methods:
* __init__
* step: used to define the behaviour over time.
A minimal sketch of such a model is shown below.
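
The sketch below is only illustrative: the class name MyModel and the parameter my_prob are made-up names, and the actual behaviour depends on your model.

.. code:: python

    from ..BaseBehaviour import *


    class MyModel(BaseBehaviour):

        def __init__(self, environment=None, agent_id=0, state=()):
            super().__init__(environment=environment, agent_id=agent_id, state=state)
            # 'my_prob' is a made-up parameter, read from the Simulation Settings JSON file
            self.my_prob = environment.environment_params['my_prob']

        def step(self, now):
            # Define here what the agent does on each time step, e.g. change self.state['id']
            super().step(now)  # Always end with this call so that the values are stored
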
Variable Initialization
=======================
The different parameters of the model have to be initialized in the Simulation Settings JSON file, which will be passed as a parameter to the simulation.
.. code:: json
    {
        "agent": ["SISaModel", "ControlModelM2"],
        "neutral_discontent_spon_prob": 0.04,
        "neutral_discontent_infected_prob": 0.04,
        "neutral_content_spon_prob": 0.18,
        "neutral_content_infected_prob": 0.02,
        "discontent_neutral": 0.13,
        "discontent_content": 0.07,
        "variance_d_c": 0.02,
        "content_discontent": 0.009,
        "variance_c_d": 0.003,
        "content_neutral": 0.088,
        "standard_variance": 0.055,
        "prob_neutral_making_denier": 0.035,
        "prob_infect": 0.075,
        "prob_cured_healing_infected": 0.035,
        "prob_cured_vaccinate_neutral": 0.035,
        "prob_vaccinated_healing_infected": 0.035,
        "prob_vaccinated_vaccinate_neutral": 0.035,
        "prob_generate_anti_rumor": 0.035
    }
In this file you also define which models you are going to simulate. You can simulate as many models as you want; each model is executed separately and the simulation returns one result per model. For usage, see :doc:`usage`.
Example Model
=============
In this section, we will implement a Sentiment Correlation Model.
The class would look like this:
.. code:: python
    from ..BaseBehaviour import *
    from .. import sentimentCorrelationNodeArray


    class SentimentCorrelationModel(BaseBehaviour):

        def __init__(self, environment=None, agent_id=0, state=()):
            super().__init__(environment=environment, agent_id=agent_id, state=state)
            self.outside_effects_prob = environment.environment_params['outside_effects_prob']
            self.anger_prob = environment.environment_params['anger_prob']
            self.joy_prob = environment.environment_params['joy_prob']
            self.sadness_prob = environment.environment_params['sadness_prob']
            self.disgust_prob = environment.environment_params['disgust_prob']
            self.time_awareness = []
            for i in range(4):  # In this model we have 4 sentiments
                self.time_awareness.append(0)  # 0 -> Anger, 1 -> Joy, 2 -> Sadness, 3 -> Disgust
            sentimentCorrelationNodeArray[self.id][self.env.now] = 0

        def step(self, now):
            self.behaviour()  # Method which defines the behaviour
            super().step(now)
The variables will be modified by the user, so you have to include them in the Simulation Settings JSON file.


@@ -1,104 +0,0 @@
# settings.py
def init():
    global number_of_nodes
    global max_time
    global num_trials
    global bite_prob
    global timeout
    global network_type
    global heal_prob
    global innovation_prob
    global imitation_prob
    global outside_effects_prob
    global anger_prob
    global joy_prob
    global sadness_prob
    global disgust_prob
    global tweet_probability_users
    global tweet_relevant_probability
    global tweet_probability_about
    global sentiment_about
    global tweet_probability_enterprises
    global enterprises
    global neutral_discontent_spon_prob
    global neutral_discontent_infected_prob
    global neutral_content_spon_prob
    global neutral_content_infected_prob
    global discontent_content
    global discontent_neutral
    global content_discontent
    global content_neutral
    global variance_d_c
    global variance_c_d
    global standard_variance
    global prob_neutral_making_denier
    global prob_infect
    global prob_cured_healing_infected
    global prob_cured_vaccinate_neutral
    global prob_vaccinated_healing_infected
    global prob_vaccinated_vaccinate_neutral
    global prob_generate_anti_rumor

    # Network settings
    network_type = 1
    number_of_nodes = 1000
    max_time = 50
    num_trials = 1
    timeout = 2

    # Zombie model
    bite_prob = 0.01  # 0-1
    heal_prob = 0.01  # 0-1

    # Bass model
    innovation_prob = 0.001
    imitation_prob = 0.005

    # Sentiment Correlation model
    outside_effects_prob = 0.2
    anger_prob = 0.06
    joy_prob = 0.05
    sadness_prob = 0.02
    disgust_prob = 0.02

    # Big Market model
    ## Names
    enterprises = ["BBVA", "Santander", "Bankia"]
    ## Users
    tweet_probability_users = 0.44
    tweet_relevant_probability = 0.25
    tweet_probability_about = [0.15, 0.15, 0.15]
    sentiment_about = [0, 0, 0]  # Default values
    ## Enterprises
    tweet_probability_enterprises = [0.3, 0.3, 0.3]

    # SISa
    neutral_discontent_spon_prob = 0.04
    neutral_discontent_infected_prob = 0.04
    neutral_content_spon_prob = 0.18
    neutral_content_infected_prob = 0.02
    discontent_neutral = 0.13
    discontent_content = 0.07
    variance_d_c = 0.02
    content_discontent = 0.009
    variance_c_d = 0.003
    content_neutral = 0.088
    standard_variance = 0.055

    # Spread Model M2 and Control Model M2
    prob_neutral_making_denier = 0.035
    prob_infect = 0.075
    prob_cured_healing_infected = 0.035
    prob_cured_vaccinate_neutral = 0.035
    prob_vaccinated_healing_infected = 0.035
    prob_vaccinated_vaccinate_neutral = 0.035
    prob_generate_anti_rumor = 0.035


@@ -1,954 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"./logo_gsi.png\" alt=\"Grupo de Sistemas Inteligentes\" width=\"100px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SOIL Tutorial "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook contains a tutorial to learn how to use the SOcial network sImuLator (SOIL) written in Python. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL is based in 2 main files:\n",
"* __soil.py__: It's the main file of SOIL. The network creation, simulation and visualization are done in this file.\n",
"+ __settings.json__: This file contains every variable needed in the simulation in order to be modified easily.\n",
"- __models__: All the spread models already implemented are stored in this directory as modules."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Requirements"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL requires to install:\n",
"* **Python 3** - you can use the Conda distribution\n",
"* **NetworkX** - install with conda install networkx or pip install networkx\n",
"* **simpy** - install with pip install simpy\n",
"* **nxsim** - install with pip install nxsim\n",
"* **Gephi** - Available at https://gephi.org"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Soil.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Imports and data initialization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all, you need to make all the imports. This simulator is based on [nxsim](https://pypi.python.org/pypi/nxsim), using [networkx](https://networkx.github.io/) for network management. We will also include the models and settings files where the spread models and initialization variables are stored."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from models import *\n",
"from nxsim import NetworkSimulation\n",
"# import numpy\n",
"from matplotlib import pyplot as plt\n",
"import networkx as nx\n",
"import settings\n",
"import models\n",
"import math\n",
"import json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Network creation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using a parameter provided in the settings file, we can choose what type of network we want to create, as well as the number of nodes and some other parameters. More types of networks can be implemented using [networkx](https://networkx.github.io/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"if settings.network_params[\"network_type\"] == 0:\n",
" G = nx.complete_graph(settings.network_params[\"number_of_nodes\"])\n",
"if settings.network_params[\"network_type\"] == 1:\n",
" G = nx.barabasi_albert_graph(settings.network_params[\"number_of_nodes\"], 10)\n",
"if settings.network_params[\"network_type\"] == 2:\n",
" G = nx.margulis_gabber_galil_graph(settings.network_params[\"number_of_nodes\"], None)\n",
"# More types of networks can be added here"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to analyse the results of the simulation. We include them in the topology and a .gexf file is generated. This allows the user to picture the network in [Gephi](https://gephi.org/). A JSON file is also generated to permit further analysis.\n",
"\n",
"The JSON file follows this schema. The file has three depth levels. In the first one we can find the identifier of each agent in the network. Secondly, inside every agent we can observe every attribute that the creator of the model wanted to make visible. In the deepest level the different values of each attribute are\n",
"visible.\n",
"\n",
"\t{\n",
"\t\t\"agent_0\": {\n",
"\t\t\t\"attribute_X\": {\n",
"\t\t\t\t\"0\": 0,\n",
"\t\t\t\t\"2\": 0,\n",
"\t\t\t\t\"4\": 1,\n",
"\t\t\t\t\"6\": 2,\n",
"\t\t\t\t...\n",
"\t\t\t}\n",
"\t\t},\n",
"\t\t\"agent_1\": {\n",
"\t\t\t\"attribute_X\": {\n",
"\t\t\t\t\"0\": 0,\n",
"\t\t\t\t\"2\": 3,\n",
"\t\t\t\t...\n",
"\t\t\t}\n",
"\t\t},\n",
"\t\t...\t\t\n",
"\t}\n",
"\n",
"This is done with the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def visualization(graph_name):\n",
"\n",
" for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" for attribute in models.networkStatus[\"agent_%s\" % x]:\n",
" emotionStatusAux = []\n",
" for t_step in models.networkStatus[\"agent_%s\" % x][attribute]:\n",
" prec = 2\n",
" output = math.floor(models.networkStatus[\"agent_%s\" % x][attribute][t_step] * (10 ** prec)) / (10 ** prec) # 2 decimals\n",
" emotionStatusAux.append((output, t_step, t_step + settings.network_params[\"timeout\"]))\n",
" attributes = {}\n",
" attributes[attribute] = emotionStatusAux\n",
" G.add_node(x, attributes)\n",
"\n",
" print(\"Done!\")\n",
"\n",
" with open('data.txt', 'w') as outfile:\n",
" json.dump(models.networkStatus, outfile, sort_keys=True, indent=4, separators=(',', ': '))\n",
"\n",
" nx.write_gexf(G, graph_name+\".gexf\", version=\"1.2draft\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's only the basic visualization. Everything you need can be implemented as well. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def results(model_name):\n",
" x_values = []\n",
" infected_values = []\n",
" neutral_values = []\n",
" cured_values = []\n",
" vaccinated_values = []\n",
"\n",
" attribute_plot = 'status'\n",
" for time in range(0, settings.network_params[\"max_time\"]):\n",
" value_infectados = 0\n",
" value_neutral = 0\n",
" value_cured = 0\n",
" value_vaccinated = 0\n",
" real_time = time * settings.network_params[\"timeout\"]\n",
" activity = False\n",
" for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" if attribute_plot in models.networkStatus[\"agent_%s\" % x]:\n",
" if real_time in models.networkStatus[\"agent_%s\" % x][attribute_plot]:\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 1: ## Infected\n",
" value_infectados += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 0: ## Neutral\n",
" value_neutral += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 2: ## Cured\n",
" value_cured += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 3: ## Vaccinated\n",
" value_vaccinated += 1\n",
" activity = True\n",
"\n",
" if activity:\n",
" x_values.append(real_time)\n",
" infected_values.append(value_infectados)\n",
" neutral_values.append(value_neutral)\n",
" cured_values.append(value_cured)\n",
" vaccinated_values.append(value_vaccinated)\n",
" activity = False\n",
"\n",
" fig1 = plt.figure()\n",
" ax1 = fig1.add_subplot(111)\n",
"\n",
" infected_line = ax1.plot(x_values, infected_values, label='Infected')\n",
" neutral_line = ax1.plot(x_values, neutral_values, label='Neutral')\n",
" cured_line = ax1.plot(x_values, cured_values, label='Cured')\n",
" vaccinated_line = ax1.plot(x_values, vaccinated_values, label='Vaccinated')\n",
" ax1.legend()\n",
" fig1.savefig(model_name + '.png')\n",
" # plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simulation starts with the following code. The user can provide the network topology, the maximum time of simulation, the spread model to be used as well as other parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agents = settings.environment_params['agent']\n",
"\n",
"print(\"Using Agent(s): {agents}\".format(agents=agents))\n",
"\n",
"if len(agents) > 1:\n",
" for agent in agents:\n",
" sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params[\"max_time\"],\n",
" num_trials=settings.network_params[\"num_trials\"], logging_interval=1.0, **settings.environment_params)\n",
" sim.run_simulation()\n",
" print(str(agent))\n",
" results(str(agent))\n",
" visualization(str(agent))\n",
"else:\n",
" agent = agents[0]\n",
" sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params[\"max_time\"],\n",
" num_trials=settings.network_params[\"num_trials\"], logging_interval=1.0, **settings.environment_params)\n",
" sim.run_simulation()\n",
" results(str(agent))\n",
" visualization(str(agent))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Imports and initialization"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import settings\n",
"\n",
"networkStatus = {} # Dict that will contain the status of every agent in the network\n",
"\n",
"sentimentCorrelationNodeArray = []\n",
"for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" sentimentCorrelationNodeArray.append({'id': x})\n",
"# Initialize agent states. Let's assume everyone is normal.\n",
"init_states = [{'id': 0, } for _ in range(settings.network_params[\"number_of_nodes\"])]\n",
" # add keys as as necessary, but \"id\" must always refer to that state category"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Base behaviour"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Every spread model used in SOIL should extend the base behaviour class. By doing this the exportation of the attributes values will be automatic. This feature will be explained in the Spread Models section. The class looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import settings\n",
"from nxsim import BaseNetworkAgent\n",
"\n",
"\n",
"class BaseBehaviour(BaseNetworkAgent):\n",
"\n",
" def __init__(self, environment=None, agent_id=0, state=()):\n",
" super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
" self._attrs = {}\n",
"\n",
" @property\n",
" def attrs(self):\n",
" now = self.env.now\n",
" if now not in self._attrs:\n",
" self._attrs[now] = {}\n",
" return self._attrs[now]\n",
"\n",
" @attrs.setter\n",
" def attrs(self, value):\n",
" self._attrs[self.env.now] = value\n",
"\n",
" def run(self):\n",
" while True:\n",
" self.step(self.env.now)\n",
" yield self.env.timeout(settings.network_params[\"timeout\"])\n",
"\n",
" def step(self, now):\n",
" networkStatus['agent_%s'% self.id] = self.to_json()\n",
"\n",
" def to_json(self):\n",
" final = {}\n",
" for stamp, attrs in self._attrs.items():\n",
" for a in attrs:\n",
" if a not in final:\n",
" final[a] = {}\n",
" final[a][stamp] = attrs[a]\n",
" return final"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Spread models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Every model to be implemented must include an init and a step function. Depending on your model, you would need different attributes. If you want them to be automatic exported for a further analysis, you must name them like this *self.attrs['name_of_attribute']*. Moreover, the last thing you should do inside the step function is call the following method *super().step(now)*. This call will store the values.\n",
"\n",
"Some other tips:\n",
"* __self.state['id']__: To check the id of the current agent/node.\n",
"* __self.get_neighboring_agents(state_id=x)__: Returns the neighbours agents/nodes with the id provided\n",
"\n",
"An example of a spread model already implemented and working:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import settings\n",
"import random\n",
"import numpy as np\n",
"\n",
"\n",
"class ControlModelM2(BaseBehaviour):\n",
"\n",
" # Init infected\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 1}\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 1}\n",
"\n",
" # Init beacons\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 4}\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 4}\n",
"\n",
" def __init__(self, environment=None, agent_id=0, state=()):\n",
" super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"\n",
" self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" def step(self, now):\n",
"\n",
" if self.state['id'] == 0: # Neutral\n",
" self.neutral_behaviour()\n",
" elif self.state['id'] == 1: # Infected\n",
" self.infected_behaviour()\n",
" elif self.state['id'] == 2: # Cured\n",
" self.cured_behaviour()\n",
" elif self.state['id'] == 3: # Vaccinated\n",
" self.vaccinated_behaviour()\n",
" elif self.state['id'] == 4: # Beacon-off\n",
" self.beacon_off_behaviour()\n",
" elif self.state['id'] == 5: # Beacon-on\n",
" self.beacon_on_behaviour()\n",
"\n",
" self.attrs['status'] = self.state['id']\n",
" super().step(now)\n",
"\n",
" def neutral_behaviour(self):\n",
"\n",
" # Infected\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" if len(infected_neighbors) > 0:\n",
" if random.random() < self.prob_neutral_making_denier:\n",
" self.state['id'] = 3 # Vaccinated making denier\n",
"\n",
" def infected_behaviour(self):\n",
"\n",
" # Neutral\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_infect:\n",
" neighbor.state['id'] = 1 # Infected\n",
"\n",
" def cured_behaviour(self):\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
"\n",
" # Cure\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_cured_healing_infected:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" def vaccinated_behaviour(self):\n",
"\n",
" # Cure\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_cured_healing_infected:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
"\n",
" # Generate anti-rumor\n",
" infected_neighbors_2 = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors_2:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" def beacon_off_behaviour(self):\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" if len(infected_neighbors) > 0:\n",
" self.state['id'] == 5 # Beacon on\n",
"\n",
" def beacon_on_behaviour(self):\n",
"\n",
" # Cure (M2 feature added)\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
" neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors_infected:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
" infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors_infected:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Settings.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This file contains all the variables that can be modified from the simulation. In case of implementing a new spread model, the new variables should be also included in this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"[\n",
" {\n",
" \"network_type\": 1,\n",
" \"number_of_nodes\": 1000,\n",
" \"max_time\": 50,\n",
" \"num_trials\": 1,\n",
" \"timeout\": 2\n",
" },\n",
"\n",
" {\n",
" \"agent\": [\"BaseBehaviour\",\"SISaModel\",\"ControlModelM2\"],\n",
"\n",
"\n",
" \"bite_prob\": 0.01,\n",
" \"heal_prob\": 0.01,\n",
"\n",
" \"innovation_prob\": 0.001,\n",
" \"imitation_prob\": 0.005,\n",
"\n",
" \"outside_effects_prob\": 0.2,\n",
" \"anger_prob\": 0.06,\n",
" \"joy_prob\": 0.05,\n",
" \"sadness_prob\": 0.02,\n",
" \"disgust_prob\": 0.02,\n",
"\n",
" \"enterprises\": [\"BBVA\", \"Santander\", \"Bankia\"],\n",
"\n",
" \"tweet_probability_users\": 0.44,\n",
" \"tweet_relevant_probability\": 0.25,\n",
" \"tweet_probability_about\": [0.15, 0.15, 0.15],\n",
" \"sentiment_about\": [0, 0, 0],\n",
"\n",
" \"tweet_probability_enterprises\": [0.3, 0.3, 0.3],\n",
"\n",
" \"neutral_discontent_spon_prob\": 0.04,\n",
" \"neutral_discontent_infected_prob\": 0.04,\n",
" \"neutral_content_spon_prob\": 0.18,\n",
" \"neutral_content_infected_prob\": 0.02,\n",
"\n",
" \"discontent_neutral\": 0.13,\n",
" \"discontent_content\": 0.07,\n",
" \"variance_d_c\": 0.02,\n",
"\n",
" \"content_discontent\": 0.009,\n",
" \"variance_c_d\": 0.003,\n",
" \"content_neutral\": 0.088,\n",
"\n",
" \"standard_variance\": 0.055,\n",
"\n",
"\n",
" \"prob_neutral_making_denier\": 0.035,\n",
"\n",
" \"prob_infect\": 0.075,\n",
"\n",
" \"prob_cured_healing_infected\": 0.035,\n",
" \"prob_cured_vaccinate_neutral\": 0.035,\n",
"\n",
" \"prob_vaccinated_healing_infected\": 0.035,\n",
" \"prob_vaccinated_vaccinate_neutral\": 0.035,\n",
" \"prob_generate_anti_rumor\": 0.035\n",
" }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Library"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test this simulator in all the experiments we have used the Albert\n",
"Barabasi Graph [34] to automatically generate the network and the con-\n",
"nections among the agents due it is one of the most suitable graphs to\n",
"recreate social networks.\n",
"\n",
"Using different human behaviour models we will recreate the different\n",
"decisions of each agent.\n",
"\n",
"Moreover there are some parameters regarding the basic simulation that\n",
"have to be settled. In addition, more parameters will be needed depend-\n",
"ing on the spread model used for the experiment."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Spread Model M2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model is based on the New Spread Model\n",
"M2 [1] which also refers to the cascade model [2]. Agents, usually Twit-\n",
"ter users, have four states regarding a rumour: neutral (initial state),\n",
"infected, vaccinated and cured.\n",
"\n",
"An agent becomes: infected when believes the rumour; vaccinated when is\n",
"influenced before being infected by a cured or already vaccinated agent\n",
"and cured when after becoming infected the agent is influenced by a\n",
"vaccinated/cured user.\n",
"\n",
"After a certain period of time, a random infected user develops an anti-\n",
"rumour and spreads it to its neighbours in order to vaccinate the neutral\n",
"and cure the infected ones.\n",
"\n",
"This model includes the fact that infected users who made a mistake\n",
"believing in the rumour will not be in favour of spreading theirs mistakes\n",
"through the network. Therefore, only vaccinated users will spread anti-\n",
"rumours. The probability of making a denier and becoming vaccinated\n",
"when a neutral user has an infected neighbour and the first already had\n",
"information about the rumour being false.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016),\n",
"* [2] L. Weng et al. “Virality prediction and community structure\n",
"in social networks”. In: Scientific Reports 3 (2013)."
]
},
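{
"cell_type": "markdown",
"metadata": {},
"source": [
"The plain spread model code is not reproduced in this notebook (only the control variant shown earlier). The next cell is only an illustrative sketch of the transitions described above, put together from the ControlModelM2 code; the class name SpreadModelM2Sketch and the state ids are assumptions, and the real implementation may differ."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"import numpy as np\n",
"\n",
"\n",
"# Illustrative sketch only. Assumed state ids: 0 neutral, 1 infected, 2 cured, 3 vaccinated.\n",
"class SpreadModelM2Sketch(BaseBehaviour):  # BaseBehaviour is defined in an earlier cell\n",
"\n",
"    def __init__(self, environment=None, agent_id=0, state=()):\n",
"        super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"        env = environment.environment_params\n",
"        self.prob_neutral_making_denier = np.random.normal(env['prob_neutral_making_denier'], env['standard_variance'])\n",
"        self.prob_infect = np.random.normal(env['prob_infect'], env['standard_variance'])\n",
"        self.prob_cured_healing_infected = np.random.normal(env['prob_cured_healing_infected'], env['standard_variance'])\n",
"        self.prob_cured_vaccinate_neutral = np.random.normal(env['prob_cured_vaccinate_neutral'], env['standard_variance'])\n",
"        self.prob_generate_anti_rumor = np.random.normal(env['prob_generate_anti_rumor'], env['standard_variance'])\n",
"\n",
"    def step(self, now):\n",
"        if self.state['id'] == 0:  # Neutral: may become a denier (vaccinated) if it has infected neighbours\n",
"            if self.get_neighboring_agents(state_id=1) and random.random() < self.prob_neutral_making_denier:\n",
"                self.state['id'] = 3\n",
"        elif self.state['id'] == 1:  # Infected: spreads the rumour to neutral neighbours\n",
"            for neighbor in self.get_neighboring_agents(state_id=0):\n",
"                if random.random() < self.prob_infect:\n",
"                    neighbor.state['id'] = 1\n",
"        elif self.state['id'] == 2:  # Cured: vaccinates neutral and cures infected neighbours\n",
"            for neighbor in self.get_neighboring_agents(state_id=0):\n",
"                if random.random() < self.prob_cured_vaccinate_neutral:\n",
"                    neighbor.state['id'] = 3\n",
"            for neighbor in self.get_neighboring_agents(state_id=1):\n",
"                if random.random() < self.prob_cured_healing_infected:\n",
"                    neighbor.state['id'] = 2\n",
"        elif self.state['id'] == 3:  # Vaccinated: spreads the anti-rumour, curing infected and vaccinating neutral neighbours\n",
"            for neighbor in self.get_neighboring_agents(state_id=1):\n",
"                if random.random() < self.prob_generate_anti_rumor:\n",
"                    neighbor.state['id'] = 2\n",
"            for neighbor in self.get_neighboring_agents(state_id=0):\n",
"                if random.random() < self.prob_cured_vaccinate_neutral:\n",
"                    neighbor.state['id'] = 3\n",
"\n",
"        self.attrs['status'] = self.state['id']\n",
"        super().step(now)  # Store the attributes so they can be exported"
]
},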
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Control model M2,2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model is based on the New Control Model\n",
"M2,2 [1]. It includes the use of beacons, special agents, that represent\n",
"an authority which can work against the rumour once it is detected. It\n",
"only has two states: on or off. Beacons will switch to on status when they\n",
"detect the misinformation in an infected neighbour agent.\n",
"Once the beacon is activated, they will try to cure and vaccinate other\n",
"agents starting a anti-rumour. Therefore this model also takes into ac-\n",
"count that infected users might not admit a previous mistake.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016),"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SISa Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SISa model of infection is already included in the simulator. Its the evolution of the classic disease spread Susceptible-Infective-Susceptible (SIS) model [1, 2].\n",
"\n",
"The SISa model is proposed by [3] and the main new feature is considering the spontaneous generation process of sentiment. This model has two assumptions: first, a susceptible agent who is close and more exposed to the infected has a higher probability of infection that other agent; second, the number of infected agents does not affect the probability of recovery.\n",
"\n",
"Based on some recent implementations of the SISa model [3], every agent can be in three states: neutral (initial), content and discontent.\n",
"\n",
"All the transitions between every different state are allowed depending on customizable probabilities. This model includes the fact that an agent will be more likely to change state as the number of neighbours with this state increases.\n",
"\n",
"* [1] P. Weng and X.-Q. Zhao. “Spreading speed and traveling waves for a multi-type SIS epidemic model”. In: Journal of Differential Equations 229.1 (2006)\n",
"\n",
"* [2] P. V. Mieghem. “Epidemic phase transition of the SIS type in networks”. In: A Letters Journal Exploring the Frontiers of Physics 97.4 (2012).\n",
"* [3] A. L. Hill et al. “Emotions as infectious diseases in a large social network: the SISa model”. In: Proceedings of the Royal Society of London B: Biological Sciences 277.1701 (2010),"
]
},
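{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SISaModel code is not reproduced in this notebook. The next cell is only an illustrative sketch of two of its transitions (neutral to content and back), built on the BaseBehaviour class defined earlier; the class name SISaSketch and the state ids (0 neutral, 1 content) are assumptions, and the real SISaModel covers all the transitions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"import numpy as np\n",
"\n",
"\n",
"# Illustrative sketch only, not the actual SISaModel code.\n",
"class SISaSketch(BaseBehaviour):  # BaseBehaviour is defined in an earlier cell\n",
"\n",
"    def __init__(self, environment=None, agent_id=0, state=()):\n",
"        super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"        env = environment.environment_params\n",
"        # Spontaneous and contact-driven transition probabilities, taken from the settings above\n",
"        self.neutral_content_spon_prob = np.random.normal(env['neutral_content_spon_prob'], env['standard_variance'])\n",
"        self.neutral_content_infected_prob = np.random.normal(env['neutral_content_infected_prob'], env['standard_variance'])\n",
"        self.content_neutral = np.random.normal(env['content_neutral'], env['standard_variance'])\n",
"\n",
"    def step(self, now):\n",
"        if self.state['id'] == 0:  # Neutral (assumed id)\n",
"            if random.random() < self.neutral_content_spon_prob:\n",
"                self.state['id'] = 1  # Spontaneous generation of the content sentiment\n",
"            else:\n",
"                # Contact-driven change: the more content neighbours, the more likely the change\n",
"                content_neighbors = self.get_neighboring_agents(state_id=1)\n",
"                if random.random() < self.neutral_content_infected_prob * len(content_neighbors):\n",
"                    self.state['id'] = 1\n",
"        elif self.state['id'] == 1:  # Content (assumed id)\n",
"            # Recovery to neutral does not depend on the number of neighbours\n",
"            if random.random() < self.content_neutral:\n",
"                self.state['id'] = 0\n",
"        self.attrs['status'] = self.state['id']\n",
"        super().step(now)  # Store the attributes so they can be exported"
]
},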
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Big Market Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As stated in several papers [24], social networks like Twitter are the perfect scenario to study the propagation of ideas, sentiments and marketing strategies. In this scenario several enterprises want to take advantage of social networks to promote their companies and connect with their clients.\n",
"\n",
"The goal of this model [1] is to recreate the behaviour of several enterprises in a social network. Following the example of HashtKat, we want to measure the effect of different marketing strategies in social networks.\n",
"Depending on the sentiment towards an enterprise the user will post positive or negative tweets about these enterprises. The fact that an user can increase its probabilities of posting a relevant tweet about a certain\n",
"company depending on its sentiment towards it is also considered.\n",
"In this model the number of enterprises as well as tweet rate probabilities of both companies and users can be changed.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016)\n",
"* [2] B. A. Huberman et al. “Social Networks that Matter: Twitter\n",
"Under the Microscope”. In: Social Science Research Network\n",
"(2008).\n",
"* [3] M. Cha et al. “Measuring User Influence in Twitter: The\n",
"Million Follower Fallacy.” In: ICWSM 10.10-17 (2010),\n",
"* [4] M. Bulearca and S. Bulearca. “Twitter: a viable marketing\n",
"tool for SMEs?” In: Global business and management research\n",
"2.4 (2010),"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sentiment Correlation Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this model we want to study\n",
"the influence of different sentiments in a social network. In order to do so, we base our model on the research made by [1]. In this paper the authors found out that in a social network (in this case Weibo) the correlation\n",
"of anger is significantly higher than joy and sadness meaning that the anger sentiment would occasionally spread faster than the others.\n",
"\n",
"They also confirmed some intuitive ideas such as a pair of users who have higher interactions are more likely to be influenced by each other, and that users with more friends would influence their neighbours more than other agents.\n",
"\n",
"In this simulation we have four emotions: anger, joy, sadness and disgust.\n",
"\n",
"Using the probabilities extracted from the dataset used by [1] we can visualise the graph and confirm the conclusions of the paper. Anger sentiment propagation rate is much higher than any other. Joy sentiment also spreads easily to the neighbours. However, sadness and disgust propagation rate is really small, few neighbours get affected by them.\n",
"\n",
"* [1] R. Fan et al. “Anger is More Influential Than Joy: Sentiment\n",
"Correlation in Weibo”. In: CoRR abs/1309.2402 (2013)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bass Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even though Bass Model can be applied to many appli-\n",
"cations [5760] it can be used to study the diffusion of information as\n",
"well.\n",
"This model is based on the implementation proposed by Rand and Wilen-\n",
"sky [13]. In this scenario there are only two states: unaware (initial) and\n",
"aware. For this simulation we assume that agents can only change status\n",
"from advertising (outside effects) and word of mouth (information inside\n",
"the network).\n",
"The probability of being affected by imitation (word of mouth effect)\n",
"increases as a function of the agent aware neighbours. In this model once\n",
"the user changes to aware status it remains in this state for the whole\n",
"simulation.\n",
"\n",
"* F. M. Bass. “A New Product Growth for Model ConsumerDurables”. In: Management Science 15.5 (1969),\n",
"W. Dodds. “An Application of the Bass Model in Long-TermNew Product Forecasting”. In: Journal of Marketing Research\n",
"10.3 (1973),\n",
"* F. Douglas Tigert. “The Bass New Product Growth Model: A Sensitivity Analysis for a High Technology Product”. In: Journal of Marketing 45.4 (1981),\n",
"* Z. Jiang et al. “Virtual Bass Model and the left-hand data-truncation bias in diffusion of innovation studies”. In: International Journal of Research in Marketing 23.1 (2006), "
]
},
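{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Bass model code is not reproduced in this notebook. The next cell is only an illustrative sketch of the two transitions described above, built on the BaseBehaviour class defined earlier; the class name BassSketch and the state ids (0 unaware, 1 aware) are assumptions, while innovation_prob and imitation_prob are the parameters shown in the settings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"\n",
"# Illustrative sketch only: unaware agents become aware through advertising (innovation)\n",
"# or through aware neighbours (imitation); aware agents never change state again.\n",
"class BassSketch(BaseBehaviour):  # BaseBehaviour is defined in an earlier cell\n",
"\n",
"    def __init__(self, environment=None, agent_id=0, state=()):\n",
"        super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"        self.innovation_prob = environment.environment_params['innovation_prob']\n",
"        self.imitation_prob = environment.environment_params['imitation_prob']\n",
"\n",
"    def step(self, now):\n",
"        if self.state['id'] == 0:  # Unaware (assumed id)\n",
"            if random.random() < self.innovation_prob:\n",
"                self.state['id'] = 1  # Aware through outside effects (advertising)\n",
"            else:\n",
"                aware_neighbors = self.get_neighboring_agents(state_id=1)\n",
"                # Imitation: the probability grows with the number of aware neighbours\n",
"                if random.random() < self.imitation_prob * len(aware_neighbors):\n",
"                    self.state['id'] = 1  # Aware through word of mouth\n",
"        self.attrs['status'] = self.state['id']\n",
"        super().step(now)  # Store the attributes so they can be exported"
]
},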
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Independent Cascade Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As stated by Rand and Wilensky [1], the Independent Cascade Model [61] suits better the case we want to\n",
"study as it is more appropriate for social networks than the Bass Model.\n",
"\n",
"In this scenario we also have two states: unaware (initial) and aware. The new feature in this model is that one agent will only get infected once at least one neighbour became aware the previous time step. There is also\n",
"a probability of becoming aware by outside effects (innovation).\n",
"\n",
"This new feature can be explained intuitively, one agent will have more influence on another if the first just infected and wants to spread the new information he acquired.\n",
"\n",
"\n",
"* [1] W. Rand and U. Wilensky. An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo. MIT Press, 2015.\n",
"* [2] J. Goldenberg et al. “Talk of the Network: A Complex Systems Look at the Underlying Process of Word-of-Mouth”. In: Marketing Letters 12.3 (2001),\n"
]
},
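{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Independent Cascade code is not reproduced in this notebook. The next cell is only an illustrative sketch of the behaviour described above, built on the BaseBehaviour class defined earlier; the class name IndependentCascadeSketch, the state ids (0 unaware, 1 aware) and the time_awareness attribute used to remember when an agent became aware are all assumptions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"import settings\n",
"\n",
"\n",
"# Illustrative sketch only: an unaware agent becomes aware through innovation, or through\n",
"# imitation when at least one neighbour became aware in the previous time step.\n",
"class IndependentCascadeSketch(BaseBehaviour):  # BaseBehaviour is defined in an earlier cell\n",
"\n",
"    def __init__(self, environment=None, agent_id=0, state=()):\n",
"        super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"        self.innovation_prob = environment.environment_params['innovation_prob']\n",
"        self.imitation_prob = environment.environment_params['imitation_prob']\n",
"        self.time_awareness = 0  # Assumed attribute: when this agent became aware\n",
"\n",
"    def step(self, now):\n",
"        if self.state['id'] == 0:  # Unaware (assumed id)\n",
"            if random.random() < self.innovation_prob:\n",
"                self.state['id'] = 1  # Aware through outside effects (innovation)\n",
"                self.time_awareness = now\n",
"            else:\n",
"                # Only neighbours that became aware in the previous time step can infect this agent\n",
"                aware_neighbors = self.get_neighboring_agents(state_id=1)\n",
"                previous_step = now - settings.network_params['timeout']\n",
"                recently_aware = [n for n in aware_neighbors if n.time_awareness == previous_step]\n",
"                if recently_aware and random.random() < self.imitation_prob:\n",
"                    self.state['id'] = 1\n",
"                    self.time_awareness = now\n",
"        self.attrs['status'] = self.state['id']\n",
"        super().step(now)  # Store the attributes so they can be exported"
]
},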
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2017-06-20T22:35:29.596971Z",
"start_time": "2017-06-21T00:35:29.593251+02:00"
}
},
"source": [
"# Loading results"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2017-06-20T22:35:54.114530Z",
"start_time": "2017-06-21T00:35:54.058683+02:00"
}
},
"source": [
"ControlM2"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2017-06-20T22:35:07.741296Z",
"start_time": "2017-06-21T00:35:07.735962+02:00"
}
},
"source": [
"<img src=\"ControlModelM2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copyright"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL has been developed by the Intelligent Systems Group, Universidad Politécnica de Madrid, 2016-2017.\n",
"\n",
"@Copyright Universidad Politécnica de Madrid, 2016-2017\n",
" <img src=\"./logo_gsi.png\" alt=\"Grupo de Sistemas Inteligentes\" width=\"100px\">"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.1"
},
"toc": {
"colors": {
"hover_highlight": "#DAA520",
"running_highlight": "#FF0000",
"selected_highlight": "#FFD700"
},
"moveMenuLeft": true,
"nav_menu": {
"height": "439px",
"width": "252px"
},
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 4,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -1,12 +0,0 @@
<?xml version='1.0' encoding='utf-8'?>
<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:viz="http://www.gexf.net/1.2draft/viz" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
<graph defaultedgetype="undirected" mode="static">
<nodes>
<node id="1" label="1" />
<node id="2" label="2" />
</nodes>
<edges>
<edge id="0" source="1" target="2" />
</edges>
</graph>
</gexf>