mirror of https://github.com/gsi-upm/soil synced 2025-11-04 17:38:16 +00:00

Compare commits


14 Commits

Author SHA1 Message Date
J. Fernando Sánchez
5d89827ccf Fix history bug 2018-05-04 11:21:23 +02:00
J. Fernando Sánchez
fc48ed7e09 Added history class
Now the environment does not deal with history directly; it delegates it to a
specific class. The analysis also uses history instances instead of either
using the database directly or creating a proxy environment.

This should make it easier to change the implementation in the future.

In fact, the change was motivated by the large size of the csv files in previous
versions. This new implementation only stores results as deltas, and it fills
any necessary values when needed.
2018-05-04 10:01:49 +02:00
J. Fernando Sánchez
73c90887e8 Fix pip installation 2018-05-04 09:59:31 +02:00
J. Fernando Sánchez
497c8a55db Add workaround for geometric models
Closes soil/soil#4
2018-02-16 18:04:43 +01:00
J. Fernando Sánchez
7d1c800490 Parallelism and granular exporting options
* Graphs are not saved by default (not backwards compatible)
* Modified newsspread examples
* More granular options to save results (exporting to CSV and GEXF are now
optional)
* Updated tutorial to include exporting options
* Removed references from environment to simulation
* Added parallelism to simulations (can be turned off with a flag or argument).
2017-11-01 14:44:46 +01:00
J. Fernando Sánchez
a4b32afa2f Fix py3.4 and pypi bugs 2017-10-19 18:28:17 +02:00
J. Fernando Sánchez
a7c51742f6 Improved docs
Fixed several bugs
Added convenience methods in soil.analysis
2017-10-19 18:06:33 +02:00
J. Fernando Sánchez
78364d89d5 Fix gephi representation. Add sqlite 2017-10-17 19:48:56 +02:00
J. Fernando Sánchez
af76f54a28 Added rabbits 2017-10-16 19:23:52 +02:00
J. Fernando Sánchez
dbc182c6d0 Compatibility with py3.4 2017-10-09 14:44:21 +02:00
J. Fernando Sánchez
eafecc9e5e Make py3 compatibility explicit 2017-10-09 11:38:16 +02:00
J. Fernando Sánchez
e8988015e2 Add more options to the command line 2017-10-05 16:21:58 +02:00
J. Fernando Sánchez
ccc8e43416 Removed timeout from the simulation examples 2017-10-05 16:07:10 +02:00
J. Fernando Sánchez
347d295b09 Updated to match NetworkX's 2.0 API 2017-10-05 15:54:18 +02:00
104 changed files with 32081 additions and 11000 deletions

3
Dockerfile Normal file

@@ -0,0 +1,3 @@
FROM python:3.4-onbuild
ENTRYPOINT ["python", "-m", "soil"]

0
LICENSE Executable file → Normal file

4
MANIFEST.in Normal file

@@ -0,0 +1,4 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil

32
README.md Executable file → Normal file

@@ -1,12 +1,34 @@
#[Soil](https://github.com/gsi-upm/soil)
# [SOIL](https://github.com/gsi-upm/soil)
The purpose of Soil (SOcial network sImuLator) is providing an Agent-based Social Simulator written in Python for Social Networks.
To quickly see how to use Soil, you can follow the [tutorial](https://github.com/gsi-upm/soil/blob/master/soil_tutorial.ipynb).
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
If you use Soil in your research, don't forget to cite this paper:
```bibtex
@inbook{soil-gsi-conference-2017,
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
doi = "10.1007/978-3-319-59930-4_19",
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
isbn = "978-3-319-59929-8",
keywords = "soil;social networks;agent based social simulation;python",
month = "June",
organization = "PAAMS 2017",
pages = "234-245",
publisher = "Springer Verlag",
series = "LNAI",
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
volume = "10349",
year = "2017",
}
```
@Copyright GSI - Universidad Politécnica de Madrid 2017
[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)

8802
data.txt

File diff suppressed because it is too large.

3
debug.py Normal file

@@ -0,0 +1,3 @@
import soil
soil.main()
import pdb

8
docker-compose.yml Normal file

@@ -0,0 +1,8 @@
version: '3'
services:
dev:
build: .
volumes:
- .:/usr/src/app
tty: true
entrypoint: /bin/bash

0
docs/Makefile Executable file → Normal file

0
docs/conf.py Executable file → Normal file

32
docs/index.rst Executable file → Normal file

@@ -8,14 +8,40 @@ Welcome to Soil's documentation!
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
If you use Soil in your research, do not forget to cite this paper:
.. code:: bibtex
@inbook{soil-gsi-conference-2017,
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
doi = "10.1007/978-3-319-59930-4_19",
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
isbn = "978-3-319-59929-8",
keywords = "soil;social networks;agent based social simulation;python",
month = "June",
organization = "PAAMS 2017",
pages = "234-245",
publisher = "Springer Verlag",
series = "LNAI",
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
volume = "10349",
year = "2017",
}
.. toctree::
:maxdepth: 2
:maxdepth: 0
:caption: Learn more about soil:
installation
usage
models
quickstart
Tutorial <soil_tutorial>
..
.. Indices and tables

21
docs/installation.rst Executable file → Normal file

@@ -1,7 +1,24 @@
Installation
------------
The latest version can be installed through GitLab.
The easiest way to install Soil is through pip, with Python >= 3.4:
.. code:: bash
git clone https://lab.cluster.gsi.dit.upm.es/soil/soil.git
pip install soil
Now test that it worked by running the command line tool
.. code:: bash
soil --help
Or using soil programmatically:
.. code:: python
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.

0
docs/make.bat Executable file → Normal file

View File

@@ -1,112 +0,0 @@
Developing new models
---------------------
This document describes how to develop a new analysis model.
What is a model?
================
A model defines the behaviour of the agents with a view to assessing their effects on the system as a whole.
In practice, a model consists of at least two parts:
* Python module: the actual code that describes the behaviour.
* Setting up the variables in the Settings JSON file.
This separation allows us to run the simulation with different agents.
Models Code
===========
All the models are imported into the main file. The initialization looks like this:
.. code:: python
import settings
networkStatus = {} # Dict that will contain the status of every agent in the network
sentimentCorrelationNodeArray = []
for x in range(0, settings.network_params["number_of_nodes"]):
sentimentCorrelationNodeArray.append({'id': x})
# Initialize agent states. Let's assume everyone is normal.
init_states = [{'id': 0, } for _ in range(settings.network_params["number_of_nodes"])]
# add keys as necessary, but "id" must always refer to that state category
A new model has to inherit from the BaseBehaviour class, which is in the same module.
There are two basic methods:
* __init__
* step: used to define the behaviour over time.
Variable Initialization
=======================
The different parameters of the model have to be initialized in the Simulation Settings JSON file, which will be
passed as a parameter to the simulation.
.. code:: json
{
"agent": ["SISaModel","ControlModelM2"],
"neutral_discontent_spon_prob": 0.04,
"neutral_discontent_infected_prob": 0.04,
"neutral_content_spon_prob": 0.18,
"neutral_content_infected_prob": 0.02,
"discontent_neutral": 0.13,
"discontent_content": 0.07,
"variance_d_c": 0.02,
"content_discontent": 0.009,
"variance_c_d": 0.003,
"content_neutral": 0.088,
"standard_variance": 0.055,
"prob_neutral_making_denier": 0.035,
"prob_infect": 0.075,
"prob_cured_healing_infected": 0.035,
"prob_cured_vaccinate_neutral": 0.035,
"prob_vaccinated_healing_infected": 0.035,
"prob_vaccinated_vaccinate_neutral": 0.035,
"prob_generate_anti_rumor": 0.035
}
In this file you will also define the models you are going to simulate. You can simulate as many models as you want.
The simulation returns one result for each model, executing each model separately. For the usage, see :doc:`usage`.
Example Model
=============
In this section, we will implement a Sentiment Correlation Model.
The class would look like this:
.. code:: python
from ..BaseBehaviour import *
from .. import sentimentCorrelationNodeArray
class SentimentCorrelationModel(BaseBehaviour):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
self.anger_prob = environment.environment_params['anger_prob']
self.joy_prob = environment.environment_params['joy_prob']
self.sadness_prob = environment.environment_params['sadness_prob']
self.disgust_prob = environment.environment_params['disgust_prob']
self.time_awareness = []
for i in range(4): # In this model we have 4 sentiments
self.time_awareness.append(0) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
sentimentCorrelationNodeArray[self.id][self.env.now] = 0
def step(self, now):
self.behaviour() # Method which defines the behaviour
super().step(now)
The variables will be modified by the user, so you have to include them in the Simulation Settings JSON file.

BIN
docs/output_*.png (32 new files)
32 binary image files added (notebook output plots, roughly 5 KiB to 19 KiB each): output_21_0, output_54_0, output_54_1, output_55_0 through output_55_9, output_56_0 through output_56_9, output_61_0, output_63_1, output_66_1, output_67_1, output_72_0, output_72_1, output_74_1, output_75_1, output_76_1. Binary contents not shown.

197
docs/quickstart.rst Normal file

@@ -0,0 +1,197 @@
Quickstart
----------
This section shows how to run simulations from simulation configuration files.
First of all, you need to install the package (see :doc:`installation`).
Simulation configuration files are ``json`` or ``yaml`` files that define all the parameters of a simulation.
Here's an example (``example.yml``).
.. code:: yaml
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
network_type: barabasi_albert_graph
n: 100
m: 2
agent_distribution:
- agent_type: SISaModel
weight: 1
state:
id: content
- agent_type: SISaModel
weight: 1
state:
id: discontent
- agent_type: SISaModel
weight: 8
state:
id: neutral
environment_params:
prob_infect: 0.075
This example configuration will run three trials of a simulation containing a randomly generated network.
The 100 nodes in the network will be SISaModel agents; 10% of them will start in the content state, 10% in the discontent state, and the remaining 80% in the neutral state.
All agents will have access to the environment, which only contains one variable, ``prob_infect``.
The state of the agents will be updated every 2 seconds (``interval``).
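As a quick illustration of how those weights work (plain Python arithmetic, not Soil's actual assignment code):
.. code:: python

   # Illustration only: the 1:1:8 weights above as proportions of the 100 nodes.
   weights = {'content': 1, 'discontent': 1, 'neutral': 8}
   total = sum(weights.values())

   proportions = {k: w / total for k, w in weights.items()}
   print(proportions)       # {'content': 0.1, 'discontent': 0.1, 'neutral': 0.8}

   expected_nodes = {k: round(100 * w / total) for k, w in weights.items()}
   print(expected_nodes)    # {'content': 10, 'discontent': 10, 'neutral': 80}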
Now run the simulation with the command line tool:
.. code:: bash
soil example.yml
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
Four types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a csv file with the content of the state of every network node and the environment parameters at every step of the simulation, as well as the network in gephi format (``gexf``).
.. code::
soil_output
├── Sim_prob_0
│   ├── Sim_prob_0.dumped.yml
│   ├── Sim_prob_0.simulation.pickle
│   ├── Sim_prob_0_trial_0.environment.csv
│   └── Sim_prob_0_trial_0.gexf
Network
=======
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
Loading a network
#################
To load an existing network, specify its path in the configuration:
.. code:: yaml
---
network_params:
path: /tmp/mynetwork.gexf
Soil will try to guess what networkx method to use to read the file based on its extension.
However, we only test using ``gexf`` files.
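For reference, reading such a file directly with networkx looks roughly like this (an illustrative sketch, not Soil's internal code; the path is the example one above):
.. code:: python

   import networkx as nx

   # Soil picks the reader from the file extension; for a .gexf file this is
   # essentially the call being made.
   G = nx.read_gexf('/tmp/mynetwork.gexf')
   print(G.number_of_nodes(), G.number_of_edges())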
Generating a random network
###########################
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
For example, the following configuration is equivalent to :code:`nx.complete_graph(100)`:
.. code:: yaml
network_params:
network_type: complete_graph
n: 100
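For comparison, the direct networkx calls for this configuration and for the quickstart example above would be (illustrative only):
.. code:: python

   import networkx as nx

   # Equivalent of the complete_graph configuration above:
   G = nx.complete_graph(100)

   # Equivalent of the barabasi_albert_graph parameters used in the quickstart example:
   G2 = nx.barabasi_albert_graph(n=100, m=2)

   print(G.number_of_nodes(), G2.number_of_nodes())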
Environment
============
The environment is the place where the shared state of the simulation is stored.
For instance, the probability of disease outbreak.
The configuration file may specify the initial value of the environment parameters:
.. code:: yaml
environment_params:
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
Any agent has unrestricted access to the environment.
However, for the sake of simplicity, we recommend limiting environment updates to environment agents.
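A self-contained sketch of that dict-like access (the environment is replaced here by a plain dict so the snippet runs on its own; inside Soil agents you would use ``self.env`` instead, as in the examples later in this changeset):
.. code:: python

   import random

   # Stand-in for the shared environment defined above.
   env = {'daily_probability_of_earthquake': 0.001, 'number_of_earthquakes': 0}

   # What an environment agent might do on each step:
   if random.random() < env['daily_probability_of_earthquake']:
       env['number_of_earthquakes'] += 1

   print(env['number_of_earthquakes'])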
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: an agent type (``agent_type``) and its state.
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
Through the environment, it can access the network topology and the state of other agents.
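As a minimal sketch of such an agent (modelled on the NewsSpread agents included later in this changeset; the class name and the use of ``prob_infect`` are illustrative assumptions, not part of the documented API):
.. code:: python

   from soil.agents import FSM, state, default_state, prob

   class ExampleViewer(FSM):
       '''Hypothetical agent: becomes infected via the environment and spreads to neighbors.'''

       @default_state
       @state
       def neutral(self):
           # Read a shared environment parameter
           if prob(self.env['prob_infect']):
               self.set_state(self.infected)

       @state
       def infected(self):
           # Reach other agents through the network topology
           for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
               if prob(self.env['prob_infect']):
                   neighbor.set_state(neighbor.infected)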
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
The simplest way is to specify a single type of agent.
Hence, every node in the network will be associated with an agent of that type.
.. code:: yaml
agent_type: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
For instance, with the following configuration it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
agent_distribution:
- agent_type: SISaModel
weight: 1
- agent_type: CounterModel
weight: 5
In addition to agent type, you may also add a custom initial state to the distribution.
This is very useful for adding the same agent type with different states,
e.g., to populate the network with SISaModel agents, roughly 10% of them starting in the discontent state:
.. code:: yaml
agent_distribution:
- agent_type: SISaModel
weight: 9
state:
id: neutral
- agent_type: SISaModel
weight: 1
state:
id: discontent
Lastly, the configuration may include an initial state for one or more nodes.
For instance, to add a state for the two nodes in this configuration:
.. code:: yaml
agent_type: SISaModel
network:
network_type: complete_graph
n: 2
states:
- id: content
- id: discontent
Or to add state only to specific nodes (by ``id``).
For example, to apply special skills to Linus Torvalds in a simulation:
.. literalinclude:: ../examples/torvalds.yml
:language: yaml
Environment Agents
##################
In addition to network agents, more agents can be added to the simulation.
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
.. code::
environment_agents:
- agent_type: MyAgent
state:
mood: happy
- agent_type: DummyAgent
Visualizing the results
=======================
The simulation will return a dynamic graph .gexf file which can be visualized with
`Gephi <https://gephi.org/users/download/>`__.

2612
docs/soil_tutorial.rst Normal file

File diff suppressed because it is too large.

View File

@@ -1,99 +0,0 @@
Usage
-----
First of all, you need to install the package. See :doc:`installation` for installation instructions.
Simulation Settings
===================
Once installed, before running a simulation, you need to configure it.
* In the Settings JSON file you will find the configuration of the network.
.. code:: python
{
"network_type": 1,
"number_of_nodes": 1000,
"max_time": 50,
"num_trials": 1,
"timeout": 2
}
* In the Settings JSON file, you will also find the configuration of the models.
Network Types
=============
There are three types of networks implemented, but you can add more.
.. code:: python
if settings.network_type == 0:
G = nx.complete_graph(settings.number_of_nodes)
if settings.network_type == 1:
G = nx.barabasi_albert_graph(settings.number_of_nodes, 10)
if settings.network_type == 2:
G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
# More types of networks can be added here
Models Settings
===============
After having configured the simulation, the next step is setting up the variables of the models.
For this, you will need to modify the Settings JSON file again.
.. code:: json
{
"agent": ["SISaModel","ControlModelM2"],
"neutral_discontent_spon_prob": 0.04,
"neutral_discontent_infected_prob": 0.04,
"neutral_content_spon_prob": 0.18,
"neutral_content_infected_prob": 0.02,
"discontent_neutral": 0.13,
"discontent_content": 0.07,
"variance_d_c": 0.02,
"content_discontent": 0.009,
"variance_c_d": 0.003,
"content_neutral": 0.088,
"standard_variance": 0.055,
"prob_neutral_making_denier": 0.035,
"prob_infect": 0.075,
"prob_cured_healing_infected": 0.035,
"prob_cured_vaccinate_neutral": 0.035,
"prob_vaccinated_healing_infected": 0.035,
"prob_vaccinated_vaccinate_neutral": 0.035,
"prob_generate_anti_rumor": 0.035
}
In this file you will define the different models you are going to simulate. You can simulate as many models
as you want. Each model will be simulated separately.
After setting up the models, you have to initialize the parameters of each one. You will find the parameters needed
in the documentation of each model.
Parameter validation will fail if a required parameter without a default has not been provided.
Running the Simulation
======================
Once everything is configured, you will be able to run the simulation. All you need to do is execute:
.. code:: bash
python3 soil.py
The simulation will return a dynamic graph .gexf file which can be visualized with
`Gephi <https://gephi.org/users/download/>`__.
It will also return one .png picture for each model simulated.

334
examples/NewsSpread.ipynb Normal file

File diff suppressed because one or more lines are too long

26
examples/complete.yml Normal file

@@ -0,0 +1,26 @@
---
name: simple
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
dump: false
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
state:
id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_params:
am_i_complete: true
default_state:
incidents: 0
states:
- name: 'The first node'
- name: 'The second node'

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,138 @@
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_all_dumb
network_agents:
- agent_type: DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_half_herd
network_agents:
- agent_type: DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
state:
has_tv: true
weight: 1
- agent_type: HerdViewer
state:
has_tv: false
weight: 1
- agent_type: HerdViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
name: Sim_all_herd
network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
name: Sim_wise_herd
network_agents:
- agent_type: HerdViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
name: Sim_all_wise
network_agents:
- agent_type: WiseViewer
state:
has_tv: true
id: neutral
weight: 1
- agent_type: WiseViewer
state:
has_tv: true
weight: 1
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
network_params:
generator: barabasi_albert_graph
n: 500
m: 5
num_trials: 50

View File

@@ -0,0 +1,81 @@
from soil.agents import FSM, state, default_state, prob
import logging
class DumbViewer(FSM):
'''
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
'''
defaults = {
'prob_neighbor_spread': 0.5,
'prob_tv_spread': 0.1,
}
@default_state
@state
def neutral(self):
if self['has_tv']:
if prob(self.env['prob_tv_spread']):
self.set_state(self.infected)
@state
def infected(self):
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
if prob(self.env['prob_neighbor_spread']):
neighbor.infect()
def infect(self):
self.set_state(self.infected)
class HerdViewer(DumbViewer):
'''
A viewer whose probability of infection depends on the state of its neighbors.
'''
level = logging.DEBUG
def infect(self):
infected = self.count_neighboring_agents(state_id=self.infected.id)
total = self.count_neighboring_agents()
prob_infect = self.env['prob_neighbor_spread'] * infected/total
self.debug('prob_infect', prob_infect)
if prob(prob_infect):
self.set_state(self.infected.id)
class WiseViewer(HerdViewer):
'''
A viewer that can change its mind.
'''
defaults = {
'prob_neighbor_spread': 0.5,
'prob_neighbor_cure': 0.25,
'prob_tv_spread': 0.1,
}
@state
def cured(self):
prob_cure = self.env['prob_neighbor_cure']
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
if prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug('Viewer {} cannot be cured'.format(neighbor.id))
def cure(self):
self.set_state(self.cured.id)
@state
def infected(self):
cured = max(self.count_neighboring_agents(self.cured.id),
1.0)
infected = max(self.count_neighboring_agents(self.infected.id),
1.0)
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
if prob(prob_cure):
return self.cure()
return self.set_state(super().infected)

View File

@@ -0,0 +1,120 @@
from soil.agents import FSM, state, default_state, BaseAgent
from enum import Enum
from random import random, choice
from itertools import islice
import logging
import math
class Genders(Enum):
male = 'male'
female = 'female'
class RabbitModel(FSM):
level = logging.INFO
defaults = {
'age': 0,
'gender': Genders.male.value,
'mating_prob': 0.001,
'offspring': 0,
}
sexual_maturity = 4*30
life_expectancy = 365 * 3
gestation = 33
pregnancy = -1
max_females = 5
@default_state
@state
def newborn(self):
self['age'] += 1
if self['age'] >= self.sexual_maturity:
return self.fertile
@state
def fertile(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
if self['gender'] == Genders.female.value:
return
# Males try to mate
females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
for f in islice(females, self.max_females):
r = random()
if r < self['mating_prob']:
self.impregnate(f)
break # Take a break
def impregnate(self, whom):
if self['gender'] == Genders.female.value:
raise NotImplementedError('Females cannot impregnate')
whom['pregnancy'] = 0
whom['mate'] = self.id
whom.set_state(whom.pregnant)
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
@state
def pregnant(self):
self['age'] += 1
if self['age'] > self.life_expectancy:
return self.dead
self['pregnancy'] += 1
self.debug('Pregnancy: {}'.format(self['pregnancy']))
if self['pregnancy'] >= self.gestation:
number_of_babies = int(8+4*random())
self.info('Having {} babies'.format(number_of_babies))
for i in range(number_of_babies):
state = {}
state['gender'] = choice(list(Genders)).value
child = self.env.add_node(self.__class__, state)
self.env.add_edge(self.id, child.id)
self.env.add_edge(self['mate'], child.id)
# self.add_edge()
self.debug('A BABY IS COMING TO LIFE')
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.global_topology.number_of_nodes())+1
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
self['offspring'] += 1
self.env.get_agent(self['mate'])['offspring'] += 1
del self['mate']
self['pregnancy'] = -1
return self.fertile
@state
def dead(self):
self.info('Agent {} is dying'.format(self.id))
if 'pregnancy' in self and self['pregnancy'] > -1:
self.info('A mother has died carrying a baby!!')
self.die()
return
class RandomAccident(BaseAgent):
level = logging.DEBUG
def step(self):
rabbits_total = self.global_topology.number_of_nodes()
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
for i in self.env.network_agents:
if i.state['id'] == i.dead.id:
continue
r = random()
if r < prob_death:
self.debug('I killed a rabbit: {}'.format(i.id))
rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
i.set_state(i.dead)
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
if self.count_agents(state_id=RabbitModel.dead.id) == self.global_topology.number_of_nodes():
self.die()

View File

@@ -0,0 +1,23 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 1200
interval: 1
seed: MySeed
agent_type: RabbitModel
environment_agents:
- agent_type: RandomAccident
environment_params:
prob_death: 0.001
default_state:
mating_prob: 0.01
topology:
nodes:
- id: 1
state:
gender: female
- id: 0
state:
gender: male
directed: true
links: []

View File

@@ -0,0 +1,2 @@
balkian Torvalds {}
anonymous Torvalds {}

14
examples/torvalds.yml Normal file

@@ -0,0 +1,14 @@
---
name: torvalds_example
max_time: 10
interval: 2
agent_type: CounterModel
default_state:
skill_level: 'beginner'
network_params:
path: 'torvalds.edgelist'
states:
Torvalds:
skill_level: 'God'
balkian:
skill_level: 'developer'

File diff suppressed because it is too large.

File diff suppressed because one or more lines are too long

0
logo_gsi.png Executable file → Normal file

0
logo_gsi.svg Executable file → Normal file

View File

@@ -1,38 +0,0 @@
import settings
from nxsim import BaseNetworkAgent
from .. import networkStatus
class BaseBehaviour(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self._attrs = {}
@property
def attrs(self):
now = self.env.now
if now not in self._attrs:
self._attrs[now] = {}
return self._attrs[now]
@attrs.setter
def attrs(self, value):
self._attrs[self.env.now] = value
def run(self):
while True:
self.step(self.env.now)
yield self.env.timeout(settings.network_params["timeout"])
def step(self, now):
networkStatus['agent_%s'% self.id] = self.to_json()
def to_json(self):
final = {}
for stamp, attrs in self._attrs.items():
for a in attrs:
if a not in final:
final[a] = {}
final[a][stamp] = attrs[a]
return final

View File

@@ -1 +0,0 @@
from .BaseBehaviour import BaseBehaviour

View File

@@ -1,367 +0,0 @@
import random
import numpy as np
from ..BaseBehaviour import *
import settings
import networkx as nx
POPULATION = 0
LEADERS = 1
HAVEN = 2
TRAININGENV = 3
NON_RADICAL = 0
NEUTRAL = 1
RADICAL = 2
POPNON =0
POPNE=1
POPRAD=2
HAVNON=3
HAVNE=4
HAVRAD=5
LEADER=6
TRAINING = 7
class TerroristModel(BaseBehaviour):
num_agents = 0
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.population = settings.network_params["number_of_nodes"] * settings.environment_params['initial_population']
self.havens = settings.network_params["number_of_nodes"] * settings.environment_params['initial_havens']
self.training_enviroments = settings.network_params["number_of_nodes"] * settings.environment_params['initial_training_enviroments']
self.initial_radicalism = settings.environment_params['initial_radicalism']
self.information_spread_intensity = settings.environment_params['information_spread_intensity']
self.influence = settings.environment_params['influence']
self.relative_inequality = settings.environment_params['relative_inequality']
self.additional_influence = settings.environment_params['additional_influence']
if TerroristModel.num_agents < self.population:
self.state['type'] = POPULATION
TerroristModel.num_agents = TerroristModel.num_agents + 1
random1 = random.random()
if random1 < 0.7:
self.state['id'] = NON_RADICAL
self.state['fstatus'] = POPNON
elif random1 >= 0.7 and random1 < 0.9:
self.state['id'] = NEUTRAL
self.state['fstatus'] = POPNE
elif random1 >= 0.9:
self.state['id'] = RADICAL
self.state['fstatus'] = POPRAD
elif TerroristModel.num_agents < self.havens + self.population:
self.state['type'] = HAVEN
TerroristModel.num_agents = TerroristModel.num_agents + 1
random2 = random.random()
random1 = random2 + self.initial_radicalism
if random1 < 1.2:
self.state['id'] = NON_RADICAL
self.state['fstatus'] = HAVNON
elif random1 >= 1.2 and random1 < 1.6:
self.state['id'] = NEUTRAL
self.state['fstatus'] = HAVNE
elif random1 >= 1.6:
self.state['id'] = RADICAL
self.state['fstatus'] = HAVRAD
elif TerroristModel.num_agents < self.training_enviroments + self.havens + self.population:
self.state['type'] = TRAININGENV
self.state['fstatus'] = TRAINING
TerroristModel.num_agents = TerroristModel.num_agents + 1
def step(self, now):
if self.state['type'] == POPULATION:
self.population_and_leader_conduct()
if self.state['type'] == LEADERS:
self.population_and_leader_conduct()
if self.state['type'] == HAVEN:
self.haven_conduct()
if self.state['type'] == TRAININGENV:
self.training_enviroment_conduct()
self.attrs['status'] = self.state['id']
self.attrs['type'] = self.state['type']
self.attrs['radicalism'] = self.state['rad']
self.attrs['fstatus'] = self.state['fstatus']
super().step(now)
def population_and_leader_conduct(self):
if self.state['id'] == NON_RADICAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
self.non_radical_behaviour()
if self.state['id'] == NEUTRAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
while self.state['id'] == RADICAL:
self.radical_behaviour()
break
self.neutral_behaviour()
if self.state['id'] == RADICAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
self.radical_behaviour()
def haven_conduct(self):
non_radical_neighbors = self.get_neighboring_agents(state_id=NON_RADICAL)
neutral_neighbors = self.get_neighboring_agents(state_id=NEUTRAL)
radical_neighbors = self.get_neighboring_agents(state_id=RADICAL)
neighbors_of_non_radical = len(neutral_neighbors) + len(radical_neighbors)
neighbors_of_neutral = len(non_radical_neighbors) + len(radical_neighbors)
neighbors_of_radical = len(non_radical_neighbors) + len(neutral_neighbors)
threshold = 8
if (len(non_radical_neighbors) > neighbors_of_non_radical) and len(non_radical_neighbors) >= threshold:
self.state['id'] = NON_RADICAL
elif (len(neutral_neighbors) > neighbors_of_neutral) and len(neutral_neighbors) >= threshold:
self.state['id'] = NEUTRAL
elif (len(radical_neighbors) > neighbors_of_radical) and len(radical_neighbors) >= threshold:
self.state['id'] = RADICAL
if self.state['id'] == NEUTRAL:
for neighbor in non_radical_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
if self.state['id'] == RADICAL:
for neighbor in non_radical_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
for neighbor in neutral_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.6:
neighbor.state['id'] = RADICAL
if neighbor.state['type'] != HAVEN and neighbor.state['type']!=TRAININGENV:
if neighbor.state['rad'] >= 0.62:
if create_leader(neighbor):
neighbor.state['type'] = LEADERS
neighbor.state['fstatus'] = LEADER
# elif neighbor.state['type'] == LEADERS:
# neighbor.state['type'] = POPULATION
# neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVRAD
def training_enviroment_conduct(self):
self.state['id'] = RADICAL
self.state['rad'] = 1
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['id'] == NON_RADICAL:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
neighbor.state['rad'] = neighbor.state['rad'] + (neighbor.influence + neighbor.additional_influence) * neighbor.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] >= 0.6:
neighbor.state['id'] = RADICAL
if neighbor.state['type'] != HAVEN and neighbor.state['type'] != TRAININGENV:
if neighbor.state['rad'] >= 0.62:
if create_leader(neighbor):
neighbor.state['type'] = LEADERS
neighbor.state['fstatus'] = LEADER
# elif neighbor.state['type'] == LEADERS:
# neighbor.state['type'] = POPULATION
# neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVRAD
def non_radical_behaviour(self):
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['type'] == POPULATION:
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif self.state['rad'] > 0.59:
self.state['rad'] = 0.59
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif neighbor.state['type'] == LEADERS:
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif self.state['rad'] > 0.59:
self.state['rad'] = 0.59
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
def neutral_behaviour(self):
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['type'] == POPULATION:
if neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
if self.state['rad'] >= 0.6:
self.state['id'] = RADICAL
if self.state['type'] != HAVEN:
if self.state['rad'] >= 0.62:
if create_leader(self):
self.state['type'] = LEADERS
self.state['fstatus'] = LEADER
# elif self.state['type'] == LEADERS:
# self.state['type'] = POPULATION
# self.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
self.state['fstatus'] = POPRAD
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVRAD
elif neighbor.state['type'] == LEADERS:
if neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if self.state['rad'] >= 0.6:
self.state['id'] = RADICAL
if self.state['type'] != HAVEN:
if self.state['rad'] >= 0.62:
if create_leader(self):
self.state['type'] = LEADERS
self.state['fstatus'] = LEADER
# elif self.state['type'] == LEADERS:
# self.state['type'] = POPULATION
# self.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
self.state['fstatus'] = POPRAD
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVRAD
def radical_behaviour(self):
neighbors = self.get_neighboring_agents(state_id=RADICAL)
for neighbor in neighbors:
if self.state['rad']< neighbor.state['rad'] and self.state['type']== LEADERS and neighbor.state['type']==LEADERS:
self.state['type'] = POPULATION
self.state['fstatus'] = POPRAD
def set_radicalism(self):
if self.state['id'] == NON_RADICAL:
radicalism = random.uniform(0.0, 0.29) * self.relative_inequality
return radicalism
elif self.state['id'] == NEUTRAL:
radicalism = 0.3 + random.uniform(0.3, 0.59) * self.relative_inequality
if radicalism >= 0.6:
self.state['id'] = RADICAL
return radicalism
elif self.state['id'] == RADICAL:
radicalism = 0.6 + random.uniform(0.6, 1.0) * self.relative_inequality
return radicalism
def get_partition(agent):
return settings.partition_param[agent.id]
def get_centrality(agent):
return settings.centrality_param[agent.id]
def get_centrality_given_id(id):
return settings.centrality_param[id]
def get_leader(partition):
if not bool(settings.leaders) or partition not in settings.leaders.keys():
return None
return settings.leaders[partition]
def set_leader(partition, agent):
settings.leaders[partition] = agent.id
def create_leader(agent):
my_partition = get_partition(agent)
old_leader = get_leader(my_partition)
if old_leader == None:
set_leader(my_partition, agent)
return True
else:
my_centrality = get_centrality(agent)
old_leader_centrality = get_centrality_given_id(old_leader)
if my_centrality > old_leader_centrality:
set_leader(my_partition, agent)
return True
return False

View File

@@ -1 +0,0 @@
from .TerroristModel import TerroristModel

View File

@@ -1,3 +0,0 @@
from .models import *
from .BaseBehaviour import *
from .TerroristModel import *

View File

@@ -1,7 +0,0 @@
import settings
networkStatus = {} # Dict that will contain the status of every agent in the network
# Initialize agent states. Let's assume everyone is normal and all types are population.
init_states = [{'id': 0, 'type': 0, 'rad': 0, 'fstatus':0, } for _ in range(settings.network_params["number_of_nodes"])]

6
requirements.txt Executable file → Normal file

@@ -1,5 +1,7 @@
nxsim
simpy
networkx
networkx>=2.0
numpy
matplotlib
matplotlib
pyyaml
pandas

View File

@@ -1,23 +0,0 @@
[
{
"network_type": 0,
"number_of_nodes": 80,
"max_time": 50,
"num_trials": 1,
"timeout": 2
},
{
"agent": ["TerroristModel"],
"initial_population": 0.85,
"initial_havens": 0.1,
"initial_training_enviroments": 0.05,
"initial_radicalism": 0.12,
"relative_inequality": 0.33,
"information_spread_intensity": 0.1,
"influence": 0.4,
"additional_influence": 0.1
}
]

View File

@@ -1,13 +0,0 @@
# General configuration
import json
with open('settings.json', 'r') as f:
settings = json.load(f)
network_params = settings[0]
environment_params = settings[1]
centrality_param = {}
partition_param={}
leaders={}

49
setup.py Normal file

@@ -0,0 +1,49 @@
import os
from setuptools import setup
with open(os.path.join('soil', 'VERSION')) as f:
__version__ = f.readlines()[0].strip()
assert __version__
def parse_requirements(filename):
""" load requirements from a pip requirements file """
with open(filename, 'r') as f:
lineiter = list(line.strip() for line in f)
return [line for line in lineiter if line and not line.startswith("#")]
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
setup(
name='soil',
packages=['soil'], # this must be the same as the name above
version=__version__,
description=('An Agent-Based Social Simulator for Social Networks'),
author='J. Fernando Sanchez',
author_email='jf.sanchez@upm.es',
url='https://github.com/gsi-upm/soil', # use the URL to the github repo
download_url='https://github.com/gsi-upm/soil/archive/{}.tar.gz'.format(
__version__),
keywords=['agent', 'social', 'simulator'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: End Users/Desktop',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
install_requires=install_reqs,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
include_package_data=True,
entry_points={
'console_scripts':
['soil = soil.__init__:main']
})


215
soil.py

@@ -1,215 +0,0 @@
from models import *
from nxsim import NetworkSimulation
# import numpy
from matplotlib import pyplot as plt
import networkx as nx
import settings
import models
import math
import json
import operator
import community
POPULATION = 0
LEADERS = 1
HAVEN = 2
TRAINING = 3
NON_RADICAL = 0
NEUTRAL = 1
RADICAL = 2
#################
# Visualization #
#################
def visualization(graph_name):
for x in range(0, settings.network_params["number_of_nodes"]):
attributes = {}
spells = []
for attribute in models.networkStatus["agent_%s" % x]:
if attribute == 'visible':
lastvisible = False
laststep = 0
for t_step in models.networkStatus["agent_%s" % x][attribute]:
nowvisible = models.networkStatus["agent_%s" % x][attribute][t_step]
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
if lastvisible:
spells.append((laststep, None))
else:
emotionStatusAux = []
for t_step in models.networkStatus["agent_%s" % x][attribute]:
prec = 2
output = math.floor(models.networkStatus["agent_%s" % x][attribute][t_step] * (10 ** prec)) / (10 ** prec) # 2 decimals
emotionStatusAux.append((output, t_step, t_step + settings.network_params["timeout"]))
attributes[attribute] = emotionStatusAux
if spells:
G.add_node(x, attributes, spells=spells)
else:
G.add_node(x, attributes)
print("Done!")
with open('data.txt', 'w') as outfile:
json.dump(models.networkStatus, outfile, sort_keys=True, indent=4, separators=(',', ': '))
for node in range(settings.network_params["number_of_nodes"]):
G.node[node]['x'] = G.node[node]['pos'][0]
G.node[node]['y'] = G.node[node]['pos'][1]
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
nx.write_gexf(G, graph_name+".gexf", version="1.2draft")
###########
# Results #
###########
def results(model_name):
x_values = []
neutral_values = []
non_radical_values = []
radical_values = []
attribute_plot = 'status'
for time in range(0, settings.network_params["max_time"]):
value_neutral = 0
value_non_radical = 0
value_radical = 0
real_time = time * settings.network_params["timeout"]
activity = False
for x in range(0, settings.network_params["number_of_nodes"]):
if attribute_plot in models.networkStatus["agent_%s" % x]:
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NON_RADICAL:
value_non_radical += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NEUTRAL:
value_neutral += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == RADICAL:
value_radical += 1
activity = True
if activity:
x_values.append(real_time)
neutral_values.append(value_neutral)
non_radical_values.append(value_non_radical)
radical_values.append(value_radical)
activity = False
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
non_radical_line = ax1.plot(x_values, non_radical_values, label='Non radical')
neutral_line = ax1.plot(x_values, neutral_values, label='Neutral')
radical_line = ax1.plot(x_values, radical_values, label='Radical')
ax1.legend()
fig1.savefig(model_name+'.png')
plt.show()
###########
# Results #
###########
def resultadosTipo(model_name):
x_values = []
population_values = []
leaders_values = []
havens_values = []
training_enviroments_values = []
attribute_plot = 'type'
for time in range(0, settings.network_params["max_time"]):
value_population = 0
value_leaders = 0
value_havens = 0
value_training_enviroments = 0
real_time = time * settings.network_params["timeout"]
activity = False
for x in range(0, settings.network_params["number_of_nodes"]):
if attribute_plot in models.networkStatus["agent_%s" % x]:
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == POPULATION:
value_population += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == LEADERS:
value_leaders += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == HAVEN:
value_havens += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == TRAINING:
value_training_enviroments += 1
activity = True
if activity:
x_values.append(real_time)
population_values.append(value_population)
leaders_values.append(value_leaders)
havens_values.append(value_havens)
training_enviroments_values.append(value_training_enviroments)
activity = False
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
population_line = ax2.plot(x_values, population_values, label='Population')
leaders_line = ax2.plot(x_values, leaders_values, label='Leader')
havens_line = ax2.plot(x_values, havens_values, label='Havens')
training_enviroments_line = ax2.plot(x_values, training_enviroments_values, label='Training Enviroments')
ax2.legend()
fig2.savefig(model_name+'_type'+'.png')
plt.show()
####################
# Network creation #
####################
# nx.degree_centrality(G);
if settings.network_params["network_type"] == 0:
G = nx.random_geometric_graph(settings.network_params["number_of_nodes"], 0.2)
settings.partition_param = community.best_partition(G)
settings.centrality_param = nx.betweenness_centrality(G).copy()
# print(settings.centrality_param)
# print(settings.partition_param)
# More types of networks can be added here
##############
# Simulation #
##############
agents = settings.environment_params['agent']
print("Using Agent(s): {agents}".format(agents=agents))
if len(agents) > 1:
for agent in agents:
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
sim.run_simulation()
print(str(agent))
results(str(agent))
resultadosTipo(str(agent))
visualization(str(agent))
else:
agent = agents[0]
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
sim.run_simulation()
results(str(agent))
resultadosTipo(str(agent))
visualization(str(agent))

394
soil.py~

@@ -1,394 +0,0 @@
from nxsim import NetworkSimulation
from nxsim import BaseNetworkAgent
from nxsim import BaseLoggingAgent
from random import randint
from matplotlib import pyplot as plt
import random
import numpy as np
import networkx as nx
import settings
settings.init()
if settings.network_type == 0:
G = nx.complete_graph(settings.number_of_nodes)
if settings.network_type == 1:
G = nx.barabasi_albert_graph(settings.number_of_nodes,3)
if settings.network_type == 2:
G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
myList=[]
networkStatus=[]
for x in range(0, settings.number_of_nodes):
networkStatus.append({'id':x})
# # Just like subclassing a process in SimPy
# class MyAgent(BaseNetworkAgent):
# def __init__(self, environment=None, agent_id=0, state=()): # Make sure to have these three keyword arguments
# super().__init__(environment=environment, agent_id=agent_id, state=state)
# # Add your own attributes here
# def run(self):
# # Add your behaviors here
class SentimentCorrelationModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.outside_effects_prob = settings.outside_effects_prob
self.anger_prob = settings.anger_prob
self.joy_prob = settings.joy_prob
self.sadness_prob = settings.sadness_prob
self.disgust_prob = settings.disgust_prob
self.time_awareness=[]
for i in range(4):
self.time_awareness.append(0) #0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
if self.env.now > 10:
G.add_node(205)
G.add_edge(205,0)
angry_neighbors_1_time_step=[]
joyful_neighbors_1_time_step=[]
sad_neighbors_1_time_step=[]
disgusted_neighbors_1_time_step=[]
angry_neighbors = self.get_neighboring_agents(state_id=1)
for x in angry_neighbors:
if x.time_awareness[0] > (self.env.now-500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighboring_agents(state_id=2)
for x in joyful_neighbors:
if x.time_awareness[1] > (self.env.now-500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighboring_agents(state_id=3)
for x in sad_neighbors:
if x.time_awareness[2] > (self.env.now-500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
for x in disgusted_neighbors:
if x.time_awareness[3] > (self.env.now-500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
# #Outside effects. Assign a random state
# if random.random() < settings.outside_effects_prob:
# if self.state['id'] == 0:
# self.state['id'] = random.randint(1,4)
# myList.append(self.id)
# networkStatus[self.id][self.env.now]=self.state['id'] #Record when it became infected, for the dynamic network
# self.time_awareness = self.env.now #To know when they became infected
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Joy
# if random.random() < (settings.joy_prob*(num_neighbors_joyful)/10):
# myList.append(self.id)
# self.state['id'] = 2
# networkStatus[self.id][self.env.now]=2
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Sadness
# if random.random() < (settings.sadness_prob*(num_neighbors_sad)/10):
# myList.append(self.id)
# self.state['id'] = 3
# networkStatus[self.id][self.env.now]=3
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Disgust
# if random.random() < (settings.disgust_prob*(num_neighbors_disgusted)/10):
# myList.append(self.id)
# self.state['id'] = 4
# networkStatus[self.id][self.env.now]=4
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Anger
# if random.random() < (settings.anger_prob*(num_neighbors_angry)/10):
# myList.append(self.id)
# self.state['id'] = 1
# networkStatus[self.id][self.env.now]=1
# yield self.env.timeout(settings.timeout)
# yield self.env.timeout(settings.timeout)
###########################################
anger_prob= settings.anger_prob+(len(angry_neighbors_1_time_step)*settings.anger_prob)
print("anger_prob " + str(anger_prob))
joy_prob= settings.joy_prob+(len(joyful_neighbors_1_time_step)*settings.joy_prob)
print("joy_prob " + str(joy_prob))
sadness_prob = settings.sadness_prob+(len(sad_neighbors_1_time_step)*settings.sadness_prob)
print("sadness_prob "+ str(sadness_prob))
disgust_prob = settings.disgust_prob+(len(disgusted_neighbors_1_time_step)*settings.disgust_prob)
print("disgust_prob " + str(disgust_prob))
outside_effects_prob= settings.outside_effects_prob
print("outside_effects_prob " + str(outside_effects_prob))
num = random.random()
if(num<outside_effects_prob):
self.state['id'] = random.randint(1,4)
myList.append(self.id)
                networkStatus[self.id][self.env.now]=self.state['id']  # Store the infection time for the dynamic network
self.time_awareness[self.state['id']-1] = self.env.now
yield self.env.timeout(settings.timeout)
if(num<anger_prob):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<joy_prob+anger_prob and num>anger_prob):
myList.append(self.id)
self.state['id'] = 2
networkStatus[self.id][self.env.now]=2
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
myList.append(self.id)
self.state['id'] = 3
networkStatus[self.id][self.env.now]=3
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
myList.append(self.id)
self.state['id'] = 4
networkStatus[self.id][self.env.now]=4
self.time_awareness[self.state['id']-1] = self.env.now
yield self.env.timeout(settings.timeout)
# anger_propagation = settings.anger_prob*num_neighbors_angry/10
# joy_propagation = anger_propagation + (settings.joy_prob*num_neighbors_joyful/10)
# sadness_propagation = joy_propagation + (settings.sadness_prob*num_neighbors_sad/10)
# disgust_propagation = sadness_propagation + (settings.disgust_prob*num_neighbors_disgusted/10)
# outside_effects_propagation = disgust_propagation + settings.outside_effects_prob
# if (num<anger_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 1
# networkStatus[self.id][self.env.now]=1
# yield self.env.timeout(settings.timeout)
# if (num<joy_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 2
# networkStatus[self.id][self.env.now]=2
# yield self.env.timeout(settings.timeout)
# if(num<sadness_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 3
# networkStatus[self.id][self.env.now]=3
# yield self.env.timeout(settings.timeout)
# # if(num<disgust_propagation):
# # if(self.state['id'] !=0):
# # myList.append(self.id)
# # self.state['id'] = 4
# # networkStatus[self.id][self.env.now]=4
# # yield self.env.timeout(settings.timeout)
# if(num <outside_effects_propagation):
# if self.state['id'] == 0:
# self.state['id'] = random.randint(1,4)
# myList.append(self.id)
            # networkStatus[self.id][self.env.now]=self.state['id']  # Store the infection time for the dynamic network
            # self.time_awareness = self.env.now  # To know when they became infected
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
class BassModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = settings.innovation_prob
self.imitation_prob = settings.imitation_prob
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
#Outside effects
if random.random() < settings.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
myList.append(self.id)
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
#Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
num_neighbors_aware = len(aware_neighbors)
if random.random() < (settings.imitation_prob*num_neighbors_aware):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
class IndependentCascadeModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = settings.innovation_prob
self.imitation_prob = settings.imitation_prob
self.time_awareness = 0
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
aware_neighbors_1_time_step=[]
#Outside effects
if random.random() < settings.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
myList.append(self.id)
networkStatus[self.id][self.env.now]=1
                    self.time_awareness = self.env.now  # To know when they became infected
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
#Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
for x in aware_neighbors:
if x.time_awareness == (self.env.now-1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if random.random() < (settings.imitation_prob*num_neighbors_aware):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
class ZombieOutbreak(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.bite_prob = settings.bite_prob
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
if random.random() < settings.heal_prob:
if self.state['id'] == 1:
self.zombify()
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
else:
if self.state['id'] == 1:
print("Soy el zombie " + str(self.id) + " y me voy a curar porque el num aleatorio ha sido " + str(num))
networkStatus[self.id][self.env.now]=0
if self.id in myList:
myList.remove(self.id)
self.state['id'] = 0
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
def zombify(self):
normal_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in normal_neighbors:
if random.random() < self.bite_prob:
print("Soy el zombie " + str(self.id) + " y voy a contagiar a " + str(neighbor.id))
neighbor.state['id'] = 1 # zombie
myList.append(neighbor.id)
networkStatus[self.id][self.env.now]=1
networkStatus[neighbor.id][self.env.now]=1
                print(self.env.now, "I am zombie: " + str(self.id), "My neighbor is: " + str(neighbor.id), sep='\t')
break
# Initialize agent states. Let's assume everyone is normal.
init_states = [{'id': 0, } for _ in range(settings.number_of_nodes)] # add keys as as necessary, but "id" must always refer to that state category
# Seed a zombie
#init_states[5] = {'id': 1}
#init_states[3] = {'id': 1}
sim = NetworkSimulation(topology=G, states=init_states, agent_type=SentimentCorrelationModel,
max_time=settings.max_time, num_trials=settings.num_trials, logging_interval=1.0)
sim.run_simulation()
myList = sorted(myList, key=int)
#print("Los zombies son: " + str(myList))
trial = BaseLoggingAgent.open_trial_state_history(dir_path='sim_01', trial_id=0)
zombie_census = [sum([1 for node_id, state in g.items() if state['id'] == 1]) for t,g in trial.items()]
#for x in range(len(myList)):
# G.node[myList[x]]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
#G.node[1]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
#lista = nx.nodes(G)
#print('Nodos: ' + str(lista))
for x in range(0, settings.number_of_nodes):
networkStatusAux=[]
for tiempo in networkStatus[x]:
if tiempo != 'id':
networkStatusAux.append((networkStatus[x][tiempo],tiempo,None))
G.add_node(x, zombie= networkStatusAux)
#print(networkStatus)
nx.write_gexf(G,"test.gexf", version="1.2draft")
plt.plot(zombie_census)
plt.draw() # pyplot draw()
plt.savefig("zombie.png")
#print(networkStatus)
#nx.draw(G)
#plt.show()
#plt.savefig("path.png")

soil/VERSION Normal file

@@ -0,0 +1 @@
0.11.1

soil/__init__.py Normal file

@@ -0,0 +1,75 @@
import importlib
import sys
import os
import pdb
import logging
from .version import __version__
try:
basestring
except NameError:
basestring = str
logging.basicConfig()
from . import agents
from . import simulation
from . import environment
from . import utils
from . import analysis
def main():
import argparse
from . import simulation
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
parser.add_argument('file', type=str,
nargs="?",
default='simulation.yml',
help='python module containing the simulation configuration.')
parser.add_argument('--module', '-m', type=str,
help='file containing the code of any custom agents.')
parser.add_argument('--dry-run', '--dry', action='store_true',
help='Do not store the results of the simulation.')
parser.add_argument('--pdb', action='store_true',
help='Use a pdb console in case of exception.')
parser.add_argument('--graph', '-g', action='store_true',
help='Dump GEXF graph. Defaults to false.')
parser.add_argument('--csv', action='store_true',
help='Dump history in CSV format. Defaults to false.')
parser.add_argument('--output', '-o', type=str, default="soil_output",
help='folder to write results to. It defaults to the current directory.')
parser.add_argument('--synchronous', action='store_true',
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
args = parser.parse_args()
if args.module:
sys.path.append(os.getcwd())
importlib.import_module(args.module)
    logging.info('Loading config file: {}'.format(args.file))
try:
dump = []
if not args.dry_run:
if args.csv:
dump.append('csv')
if args.graph:
dump.append('gexf')
simulation.run_from_config(args.file,
dry_run=args.dry_run,
dump=dump,
parallel=(not args.synchronous and not args.pdb),
results_dir=args.output)
except Exception as ex:
if args.pdb:
pdb.post_mortem()
else:
raise
if __name__ == '__main__':
main()

soil/__main__.py Normal file

@@ -0,0 +1,4 @@
from . import main
if __name__ == '__main__':
main()

soil/agents/BassModel.py Normal file

@@ -0,0 +1,40 @@
import random
from . import BaseAgent
class BassModel(BaseAgent):
"""
Settings:
innovation_prob
imitation_prob
"""
def __init__(self, environment, agent_id, state):
super().__init__(environment=environment, agent_id=agent_id, state=state)
        env_params = environment.environment_params
        self.innovation_prob = env_params['innovation_prob']
        self.imitation_prob = env_params['imitation_prob']
        self.state['sentimentCorrelation'] = 0
def step(self):
self.behaviour()
def behaviour(self):
# Outside effects
        if random.random() < self.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
num_neighbors_aware = len(aware_neighbors)
            if random.random() < (self.imitation_prob * num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
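# Hypothetical configuration snippet for this model (parameter names follow the Settings
# docstring above; the overall YAML layout is an assumption based on soil's simulation format):
#
#   network_agents:
#     - agent_type: BassModel
#       weight: 1
#       state:
#         id: 0
#   environment_params:
#     innovation_prob: 0.001
#     imitation_prob: 0.005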

soil/agents/BigMarketModel.py Normal file

@@ -0,0 +1,102 @@
import random
from . import BaseAgent
class BigMarketModel(BaseAgent):
"""
Settings:
Names:
enterprises [Array]
tweet_probability_enterprises [Array]
Users:
tweet_probability_users
tweet_relevant_probability
tweet_probability_about [Array]
sentiment_about [Array]
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.enterprises = environment.environment_params['enterprises']
self.type = ""
self.number_of_enterprises = len(environment.environment_params['enterprises'])
if self.id < self.number_of_enterprises: # Enterprises
self.state['id'] = self.id
self.type = "Enterprise"
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
else: # normal users
self.state['id'] = self.number_of_enterprises
self.type = "User"
self.tweet_probability = environment.environment_params['tweet_probability_users']
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
self.sentiment_about = environment.environment_params['sentiment_about'] # List
def step(self):
if self.id < self.number_of_enterprises: # Enterprise
self.enterpriseBehaviour()
        else:  # User
self.userBehaviour()
for i in range(self.number_of_enterprises): # So that it never is set to 0 if there are not changes (logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def enterpriseBehaviour(self):
if random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
for x in aware_neighbors:
if random.uniform(0,10) < 5:
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
else:
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
                # Keep sentiment within the [-1, 1] limits
if x.sentiment_about[self.id] > 1:
x.sentiment_about[self.id] = 1
if x.sentiment_about[self.id]< -1:
x.sentiment_about[self.id] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
def userBehaviour(self):
if random.random() < self.tweet_probability: # Tweets
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
# Tweet probability per enterprise
for i in range(self.number_of_enterprises):
random_num = random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
if self.sentiment_about[i] < 0:
                            # NEGATIVE
self.userTweets("negative",i)
elif self.sentiment_about[i] == 0:
                            # NEUTRAL
pass
else:
                            # POSITIVE
self.userTweets("positive",i)
def userTweets(self,sentiment,enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
for x in aware_neighbors:
if sentiment == "positive":
x.sentiment_about[enterprise] +=0.003
elif sentiment == "negative":
x.sentiment_about[enterprise] -=0.003
else:
pass
            # Keep sentiment within the [-1, 1] limits
if x.sentiment_about[enterprise] > 1:
x.sentiment_about[enterprise] = 1
if x.sentiment_about[enterprise] < -1:
x.sentiment_about[enterprise] = -1
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]

soil/agents/CounterModel.py Normal file

@@ -0,0 +1,32 @@
from . import BaseAgent
class CounterModel(BaseAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
def step(self):
        # Count all agents in the network and this agent's neighbors
total = len(list(self.get_all_agents()))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = neighbors
self['total'] = total
class AggregatedCounter(BaseAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
def step(self):
        # Accumulate the agent and neighbor counts across steps
total = len(list(self.get_all_agents()))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = self.get('neighbors', 0) + neighbors
self['total'] = total = self.get('total', 0) + total
self.debug('Running for step: {}. Total: {}'.format(self.now, total))

soil/agents/DrawingAgent.py Normal file

@@ -0,0 +1,18 @@
from . import BaseAgent
import os.path
import matplotlib
import matplotlib.pyplot as plt
import networkx as nx
class DrawingAgent(BaseAgent):
"""
Agent that draws the state of the network.
"""
def step(self):
# Outside effects
f = plt.figure()
nx.draw(self.env.G, node_size=10, width=0.2, pos=nx.spring_layout(self.env.G, scale=100), ax=f.add_subplot(111))
f.savefig(os.path.join(self.env.get_path(), "graph-"+str(self.env.now)+".png"))

soil/agents/IndependentCascadeModel.py Normal file

@@ -0,0 +1,49 @@
import random
from . import BaseAgent
class IndependentCascadeModel(BaseAgent):
"""
Settings:
innovation_prob
imitation_prob
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = environment.environment_params['innovation_prob']
self.imitation_prob = environment.environment_params['imitation_prob']
self.state['time_awareness'] = 0
self.state['sentimentCorrelation'] = 0
def step(self):
self.behaviour()
def behaviour(self):
aware_neighbors_1_time_step = []
# Outside effects
if random.random() < self.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
self.state['time_awareness'] = self.env.now # To know when they have been infected
else:
pass
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
for x in aware_neighbors:
if x.state['time_awareness'] == (self.env.now-1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if random.random() < (self.imitation_prob*num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return

soil/agents/ModelM2.py Normal file

@@ -0,0 +1,242 @@
import random
import numpy as np
from . import BaseAgent
class SpreadModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
environment.environment_params['standard_variance'])
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
environment.environment_params['standard_variance'])
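        # Each probability above is drawn once per agent from a normal distribution centred on
        # the configured value; 'standard_variance' is passed to np.random.normal as its second
        # argument, i.e. it acts as a standard deviation despite its name.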
def step(self):
if self.state['id'] == 0: # Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: # Infected
self.infected_behaviour()
elif self.state['id'] == 2: # Cured
self.cured_behaviour()
elif self.state['id'] == 3: # Vaccinated
self.vaccinated_behaviour()
def neutral_behaviour(self):
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
def cured_behaviour(self):
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
class ControlModelM2(BaseAgent):
"""
Settings:
prob_neutral_making_denier
prob_infect
prob_cured_healing_infected
prob_cured_vaccinate_neutral
prob_vaccinated_healing_infected
prob_vaccinated_vaccinate_neutral
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
environment.environment_params['standard_variance'])
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
environment.environment_params['standard_variance'])
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
environment.environment_params['standard_variance'])
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
environment.environment_params['standard_variance'])
def step(self):
if self.state['id'] == 0: # Neutral
self.neutral_behaviour()
elif self.state['id'] == 1: # Infected
self.infected_behaviour()
elif self.state['id'] == 2: # Cured
self.cured_behaviour()
elif self.state['id'] == 3: # Vaccinated
self.vaccinated_behaviour()
elif self.state['id'] == 4: # Beacon-off
self.beacon_off_behaviour()
elif self.state['id'] == 5: # Beacon-on
self.beacon_on_behaviour()
def neutral_behaviour(self):
self.state['visible'] = False
# Infected
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
if random.random() < self.prob_neutral_making_denier:
self.state['id'] = 3 # Vaccinated making denier
def infected_behaviour(self):
# Neutral
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_infect:
neighbor.state['id'] = 1 # Infected
self.state['visible'] = False
def cured_behaviour(self):
self.state['visible'] = True
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
def vaccinated_behaviour(self):
self.state['visible'] = True
# Cure
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_cured_healing_infected:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated
# Generate anti-rumor
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_2:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
def beacon_off_behaviour(self):
self.state['visible'] = False
infected_neighbors = self.get_neighboring_agents(state_id=1)
if len(infected_neighbors) > 0:
            self.state['id'] = 5  # Beacon on
def beacon_on_behaviour(self):
self.state['visible'] = False
# Cure (M2 feature added)
infected_neighbors = self.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 3 # Vaccinated
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
for neighbor in infected_neighbors_infected:
if random.random() < self.prob_generate_anti_rumor:
neighbor.state['id'] = 2 # Cured
# Vaccinate
neutral_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in neutral_neighbors:
if random.random() < self.prob_cured_vaccinate_neutral:
neighbor.state['id'] = 3 # Vaccinated

soil/agents/SISaModel.py Normal file

@@ -0,0 +1,93 @@
import random
import numpy as np
from . import FSM, state
class SISaModel(FSM):
"""
Settings:
neutral_discontent_spon_prob
neutral_discontent_infected_prob
        neutral_content_spon_prob
neutral_content_infected_prob
discontent_neutral
discontent_content
variance_d_c
content_discontent
variance_c_d
content_neutral
standard_variance
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.neutral_discontent_spon_prob = np.random.normal(environment.environment_params['neutral_discontent_spon_prob'],
environment.environment_params['standard_variance'])
self.neutral_discontent_infected_prob = np.random.normal(environment.environment_params['neutral_discontent_infected_prob'],
environment.environment_params['standard_variance'])
self.neutral_content_spon_prob = np.random.normal(environment.environment_params['neutral_content_spon_prob'],
environment.environment_params['standard_variance'])
self.neutral_content_infected_prob = np.random.normal(environment.environment_params['neutral_content_infected_prob'],
environment.environment_params['standard_variance'])
self.discontent_neutral = np.random.normal(environment.environment_params['discontent_neutral'],
environment.environment_params['standard_variance'])
self.discontent_content = np.random.normal(environment.environment_params['discontent_content'],
environment.environment_params['variance_d_c'])
self.content_discontent = np.random.normal(environment.environment_params['content_discontent'],
environment.environment_params['variance_c_d'])
self.content_neutral = np.random.normal(environment.environment_params['content_neutral'],
environment.environment_params['standard_variance'])
@state
def neutral(self):
# Spontaneous effects
if random.random() < self.neutral_discontent_spon_prob:
return self.discontent
if random.random() < self.neutral_content_spon_prob:
return self.content
# Infected
        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
if random.random() < discontent_neighbors * self.neutral_discontent_infected_prob:
return self.discontent
content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
if random.random() < content_neighbors * self.neutral_content_infected_prob:
return self.content
return self.neutral
@state
def discontent(self):
# Healing
if random.random() < self.discontent_neutral:
return self.neutral
# Superinfected
content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
if random.random() < content_neighbors * self.discontent_content:
return self.content
return self.discontent
@state
def content(self):
# Healing
if random.random() < self.content_neutral:
return self.neutral
# Superinfected
discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
if random.random() < discontent_neighbors * self.content_discontent:
            return self.discontent
return self.content

soil/agents/SentimentCorrelationModel.py Normal file

@@ -0,0 +1,102 @@
import random
from . import BaseAgent
class SentimentCorrelationModel(BaseAgent):
"""
Settings:
outside_effects_prob
anger_prob
joy_prob
sadness_prob
disgust_prob
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
self.anger_prob = environment.environment_params['anger_prob']
self.joy_prob = environment.environment_params['joy_prob']
self.sadness_prob = environment.environment_params['sadness_prob']
self.disgust_prob = environment.environment_params['disgust_prob']
self.state['time_awareness'] = []
for i in range(4): # In this model we have 4 sentiments
self.state['time_awareness'].append(0) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
self.state['sentimentCorrelation'] = 0
def step(self):
self.behaviour()
def behaviour(self):
angry_neighbors_1_time_step = []
joyful_neighbors_1_time_step = []
sad_neighbors_1_time_step = []
disgusted_neighbors_1_time_step = []
angry_neighbors = self.get_neighboring_agents(state_id=1)
for x in angry_neighbors:
if x.state['time_awareness'][0] > (self.env.now-500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighboring_agents(state_id=2)
for x in joyful_neighbors:
if x.state['time_awareness'][1] > (self.env.now-500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighboring_agents(state_id=3)
for x in sad_neighbors:
if x.state['time_awareness'][2] > (self.env.now-500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
for x in disgusted_neighbors:
if x.state['time_awareness'][3] > (self.env.now-500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
anger_prob = self.anger_prob+(len(angry_neighbors_1_time_step)*self.anger_prob)
joy_prob = self.joy_prob+(len(joyful_neighbors_1_time_step)*self.joy_prob)
sadness_prob = self.sadness_prob+(len(sad_neighbors_1_time_step)*self.sadness_prob)
disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
outside_effects_prob = self.outside_effects_prob
num = random.random()
if num<outside_effects_prob:
self.state['id'] = random.randint(1, 4)
self.state['sentimentCorrelation'] = self.state['id'] # It is stored when it has been infected for the dynamic network
self.state['time_awareness'][self.state['id']-1] = self.env.now
self.state['sentiment'] = self.state['id']
if(num<anger_prob):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
self.state['time_awareness'][self.state['id']-1] = self.env.now
elif (num<joy_prob+anger_prob and num>anger_prob):
self.state['id'] = 2
self.state['sentimentCorrelation'] = 2
self.state['time_awareness'][self.state['id']-1] = self.env.now
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
self.state['id'] = 3
self.state['sentimentCorrelation'] = 3
self.state['time_awareness'][self.state['id']-1] = self.env.now
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
self.state['id'] = 4
self.state['sentimentCorrelation'] = 4
self.state['time_awareness'][self.state['id']-1] = self.env.now
self.state['sentiment'] = self.state['id']

soil/agents/__init__.py Normal file

@@ -0,0 +1,379 @@
# networkStatus = {} # Dict that will contain the status of every agent in the network
# sentimentCorrelationNodeArray = []
# for x in range(0, settings.network_params["number_of_nodes"]):
# sentimentCorrelationNodeArray.append({'id': x})
# Initialize agent states. Let's assume everyone is normal.
import nxsim
import logging
import random
from collections import OrderedDict
from copy import deepcopy
from functools import partial
import json
from functools import wraps
from .. import utils, history
agent_types = {}
class MetaAgent(type):
def __init__(cls, name, bases, nmspc):
super(MetaAgent, cls).__init__(name, bases, nmspc)
agent_types[name] = cls
class BaseAgent(nxsim.BaseAgent, metaclass=MetaAgent):
"""
A special simpy BaseAgent that keeps track of its state history.
"""
defaults = {}
def __init__(self, environment=None, agent_id=None, state=None,
name='network_process', interval=None, **state_params):
# Check for REQUIRED arguments
assert environment is not None, TypeError('__init__ missing 1 required keyword argument: \'environment\'. '
'Cannot be NoneType.')
# Initialize agent parameters
self.id = agent_id
self.name = name
self.state_params = state_params
# Global parameters
self.global_topology = environment.G
self.environment_params = environment.environment_params
# Register agent to environment
self.env = environment
self._neighbors = None
self.alive = True
real_state = deepcopy(self.defaults)
real_state.update(state or {})
self._state = real_state
self.interval = interval
if not hasattr(self, 'level'):
self.level = logging.DEBUG
self.logger = logging.getLogger('{}-Agent-{}'.format(self.env.name,
self.id))
self.logger.setLevel(self.level)
# initialize every time an instance of the agent is created
self.action = self.env.process(self.run())
@property
def state(self):
return self._state
@state.setter
def state(self, value):
for k, v in value.items():
self[k] = v
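    # Item access doubles as state access: agent['foo'] returns the current state value, while
    # agent['foo', t_step] builds a history.Key for this agent and asks the environment for the
    # recorded value. Assignment updates the state and writes a history record for the current step.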
def __getitem__(self, key):
if isinstance(key, tuple):
key, t_step = key
k = history.Key(key=key, t_step=t_step, agent_id=self.id)
return self.env[k]
return self.state.get(key, None)
def __delitem__(self, key):
self.state[key] = None
def __contains__(self, key):
return key in self.state
def __setitem__(self, key, value):
self.state[key] = value
k = history.Key(t_step=self.now,
agent_id=self.id,
key=key)
self.env[k] = value
def get(self, key, default=None):
return self[key] if key in self else default
@property
def now(self):
try:
return self.env.now
except AttributeError:
# No environment
return None
def run(self):
if self.interval is not None:
interval = self.interval
elif 'interval' in self:
interval = self['interval']
else:
interval = self.env.interval
while self.alive:
res = self.step()
yield res or self.env.timeout(interval)
def die(self, remove=False):
self.alive = False
if remove:
super().die()
def step(self):
pass
def to_json(self):
return json.dumps(self.state)
def count_agents(self, state_id=None, limit_neighbors=False):
if limit_neighbors:
agents = self.global_topology.neighbors(self.id)
else:
agents = self.global_topology.nodes()
count = 0
for agent in agents:
if state_id and state_id != self.global_topology.node[agent]['agent']['id']:
continue
count += 1
return count
def count_neighboring_agents(self, state_id=None):
return len(super().get_agents(state_id, limit_neighbors=True))
def get_agents(self, state_id=None, limit_neighbors=False, iterator=False, **kwargs):
if limit_neighbors:
agents = super().get_agents(state_id, limit_neighbors)
else:
agents = filter(lambda x: state_id is None or x.state.get('id', None) == state_id,
self.env.agents)
def matches_all(agent):
state = agent.state
for k, v in kwargs.items():
if state.get(k, None) != v:
return False
return True
f = filter(matches_all, agents)
if iterator:
return f
return list(f)
def log(self, message, *args, level=logging.INFO, **kwargs):
message = message + " ".join(str(i) for i in args)
message = "\t@{:>5}:\t{}".format(self.now, message)
        for k, v in kwargs.items():
            message += " {k}={v} ".format(k=k, v=v)
extra = {}
extra['now'] = self.now
extra['id'] = self.id
return self.logger.log(level, message, extra=extra)
def debug(self, *args, **kwargs):
return self.log(*args, level=logging.DEBUG, **kwargs)
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
def state(func):
'''
A state function should return either a state id, or a tuple (state_id, when)
The default value for state_id is the current state id.
    The default value for when is the interval defined in the environment.
'''
@wraps(func)
def func_wrapper(self):
next_state = func(self)
when = None
if next_state is None:
return when
try:
next_state, when = next_state
except (ValueError, TypeError):
pass
if next_state:
self.set_state(next_state)
return when
func_wrapper.id = func.__name__
func_wrapper.is_default = False
return func_wrapper
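# A minimal sketch (illustrative names only) of the protocol described above, using the FSM
# class defined further below:
#
#     class Light(FSM):
#         @default_state
#         @state
#         def on(self):
#             return self.off          # next state id
#         @state
#         def off(self):
#             return self.on, 2        # the (state_id, when) form from the docstring above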
def default_state(func):
func.is_default = True
return func
class MetaFSM(MetaAgent):
def __init__(cls, name, bases, nmspc):
super(MetaFSM, cls).__init__(name, bases, nmspc)
states = {}
# Re-use states from inherited classes
default_state = None
for i in bases:
if isinstance(i, MetaFSM):
for state_id, state in i.states.items():
if state.is_default:
default_state = state
states[state_id] = state
# Add new states
for name, func in nmspc.items():
if hasattr(func, 'id'):
if func.is_default:
default_state = func
states[func.id] = func
cls.default_state = default_state
cls.states = states
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, *args, **kwargs):
super(FSM, self).__init__(*args, **kwargs)
if 'id' not in self.state:
if not self.default_state:
raise ValueError('No default state specified for {}'.format(self.id))
self['id'] = self.default_state.id
def step(self):
if 'id' in self.state:
next_state = self['id']
elif self.default_state:
next_state = self.default_state.id
else:
raise Exception('{} has no valid state id or default state'.format(self))
if next_state not in self.states:
raise Exception('{} is not a valid id for {}'.format(next_state, self))
self.states[next_state](self)
def set_state(self, state):
if hasattr(state, 'id'):
state = state.id
if state not in self.states:
raise ValueError('{} is not a valid state'.format(state))
self['id'] = state
return state
def prob(prob=1):
'''
A true/False uniform distribution with a given probability.
To be used like this:
.. code-block:: python
if prob(0.3):
do_something()
'''
r = random.random()
return r < prob
def calculate_distribution(network_agents=None,
agent_type=None):
'''
Calculate the threshold values (thresholds for a uniform distribution)
of an agent distribution given the weights of each agent type.
The input has this form: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
'''
if network_agents:
network_agents = deepcopy(network_agents)
elif agent_type:
network_agents = [{'agent_type': agent_type}]
else:
return []
# Calculate the thresholds
total = sum(x.get('weight', 1) for x in network_agents)
acc = 0
for v in network_agents:
upper = acc + (v.get('weight', 1)/total)
v['threshold'] = [acc, upper]
acc = upper
return network_agents
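# For the example distribution in the docstring above, the computed thresholds would be
# [0, 0.2] for 'agent_type_1' and [0.2, 1.0] for 'agent_type_2', so a uniform draw in [0, 1)
# selects each type with its intended weight.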
def _serialize_distribution(network_agents):
d = _convert_agent_types(network_agents,
to_string=True)
'''
When serializing an agent distribution, remove the thresholds, in order
to avoid cluttering the YAML definition file.
'''
for v in d:
if 'threshold' in v:
del v['threshold']
return d
def _validate_states(states, topology):
'''Validate states to avoid ignoring states during initialization'''
states = states or []
if isinstance(states, dict):
for x in states:
assert x in topology.node
else:
assert len(states) <= len(topology)
return states
def _convert_agent_types(ind, to_string=False):
'''Convenience method to allow specifying agents by class or class name.'''
d = deepcopy(ind)
for v in d:
agent_type = v['agent_type']
if to_string and not isinstance(agent_type, str):
v['agent_type'] = str(agent_type.__name__)
elif not to_string and isinstance(agent_type, str):
v['agent_type'] = agent_types[agent_type]
return d
def _agent_from_distribution(distribution, value=-1):
"""Used in the initialization of agents given an agent distribution."""
if value < 0:
value = random.random()
for d in distribution:
threshold = d['threshold']
if value >= threshold[0] and value < threshold[1]:
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
raise Exception('Distribution for value {} not found in: {}'.format(value, distribution))
from .BassModel import *
from .BigMarketModel import *
from .IndependentCascadeModel import *
from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
from .DrawingAgent import *

soil/analysis.py Normal file

@@ -0,0 +1,166 @@
import pandas as pd
import glob
import yaml
from os.path import join
from . import utils, history
def read_data(*args, group=False, **kwargs):
iterable = _read_data(*args, **kwargs)
if group:
return group_trials(iterable)
else:
return list(iterable)
def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
if not process_args:
process_args = {}
for folder in glob.glob(pattern):
config_file = glob.glob(join(folder, '*.yml'))[0]
config = yaml.load(open(config_file))
df = None
if from_csv:
for trial_data in sorted(glob.glob(join(folder,
'*.environment.csv'))):
df = read_csv(trial_data, **kwargs)
yield config_file, df, config
else:
for trial_data in sorted(glob.glob(join(folder, '*.db.sqlite'))):
df = read_sql(trial_data, **kwargs)
yield config_file, df, config
def read_sql(db, *args, **kwargs):
h = history.History(db, backup=False)
df = h.read_sql(*args, **kwargs)
return df
def read_csv(filename, keys=None, convert_types=False, **kwargs):
'''
Read a CSV in canonical form: ::
<agent_id, t_step, key, value, value_type>
'''
df = pd.read_csv(filename)
if convert_types:
df = convert_types_slow(df)
if keys:
df = df[df['key'].isin(keys)]
df = process_one(df)
return df
def convert_row(row):
row['value'] = utils.convert(row['value'], row['value_type'])
return row
def convert_types_slow(df):
'''This is a slow operation.'''
dtypes = get_types(df)
for k, v in dtypes.items():
t = df[df['key']==k]
t['value'] = t['value'].astype(v)
df = df.apply(convert_row, axis=1)
return df
def split_df(df):
'''
Split a dataframe in two dataframes: one with the history of agents,
and one with the environment history
'''
envmask = (df['agent_id'] == 'env')
n_env = envmask.sum()
if n_env == len(df):
return df, None
elif n_env == 0:
return None, df
agents, env = [x for _, x in df.groupby(envmask)]
return env, agents
def process(df, **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
two dataframes with a column per key: one with the history of the agents, and one for the
history of the environment.
'''
env, agents = split_df(df)
return process_one(env, **kwargs), process_one(agents, **kwargs)
def get_types(df):
dtypes = df.groupby(by=['key'])['value_type'].unique()
return {k:v[0] for k,v in dtypes.iteritems()}
def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
fill=True, index=['t_step',],
aggfunc='first', **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
a dataframe with a column per key
'''
if df is None:
return df
if keys:
df = df[df['key'].isin(keys)]
df = df.pivot_table(values=values, index=index, columns=columns,
aggfunc=aggfunc, **kwargs)
if fill:
df = fillna(df)
return df
def get_count(df, *keys):
if keys:
df = df[list(keys)]
counts = pd.DataFrame()
for key in df.columns.levels[0]:
g = df[key].apply(pd.Series.value_counts, axis=1).fillna(0)
for value, series in g.iteritems():
counts[key, value] = series
counts.columns = pd.MultiIndex.from_tuples(counts.columns)
return counts
def get_value(df, *keys, aggfunc='sum'):
if keys:
df = df[list(keys)]
return df.groupby(axis=1, level=0).agg(aggfunc, axis=1)
def plot_all(*args, **kwargs):
'''
Read all the trial data and plot the result of applying a function on them.
'''
dfs = do_all(*args, **kwargs)
ps = []
for line in dfs:
f, df, config = line
df.plot(title=config['name'])
ps.append(df)
return ps
def do_all(pattern, func, *keys, include_env=False, **kwargs):
for config_file, df, config in read_data(pattern, keys=keys):
p = func(df, *keys, **kwargs)
p.plot(title=config['name'])
yield config_file, p, config
def group_trials(trials, aggfunc=['mean', 'min', 'max', 'std']):
trials = list(trials)
trials = list(map(lambda x: x[1] if isinstance(x, tuple) else x, trials))
return pd.concat(trials).groupby(level=0).agg(aggfunc).reorder_levels([2, 0,1] ,axis=1)
def fillna(df):
new_df = df.ffill(axis=0)
return new_df
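# Illustrative usage (the output pattern and key are assumptions):
#   for config_file, df, config in read_data('soil_output/Sim_*', keys=['id']):
#       print(config['name'], df.shape)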

soil/environment.py Normal file

@@ -0,0 +1,314 @@
import os
import sqlite3
import time
import csv
import random
import simpy
import tempfile
import pandas as pd
from copy import deepcopy
from networkx.readwrite import json_graph
import networkx as nx
import nxsim
from . import utils, agents, analysis, history
class SoilEnvironment(nxsim.NetworkEnvironment):
"""
The environment is key in a simulation. It contains the network topology,
a reference to network and environment agents, as well as the environment
params, which are used as shared state between agents.
The environment parameters and the state of every agent can be accessed
both by using the environment as a dictionary or with the environment's
:meth:`soil.environment.SoilEnvironment.get` method.
"""
def __init__(self, name=None,
network_agents=None,
environment_agents=None,
states=None,
default_state=None,
interval=1,
seed=None,
dry_run=False,
dir_path=None,
topology=None,
*args, **kwargs):
self.name = name or 'UnnamedEnvironment'
if isinstance(states, list):
states = dict(enumerate(states))
self.states = deepcopy(states) if states else {}
self.default_state = deepcopy(default_state) or {}
if not topology:
topology = nx.Graph()
super().__init__(*args, topology=topology, **kwargs)
self._env_agents = {}
self.dry_run = dry_run
self.interval = interval
self.dir_path = dir_path or tempfile.mkdtemp('soil-env')
self.get_path()
self._history = history.History(name=self.name if not dry_run else None,
dir_path=self.dir_path)
# Add environment agents first, so their events get
# executed before network agents
self.environment_agents = environment_agents or []
self.network_agents = network_agents or []
self['SEED'] = seed or time.time()
random.seed(self['SEED'])
@property
def agents(self):
yield from self.environment_agents
yield from self.network_agents
@property
def environment_agents(self):
for ref in self._env_agents.values():
yield ref
@environment_agents.setter
def environment_agents(self, environment_agents):
# Set up environmental agent
self._env_agents = {}
for item in environment_agents:
kwargs = deepcopy(item)
atype = kwargs.pop('agent_type')
kwargs['agent_id'] = kwargs.get('agent_id', atype.__name__)
kwargs['state'] = kwargs.get('state', {})
a = atype(environment=self, **kwargs)
self._env_agents[a.id] = a
@property
def network_agents(self):
for i in self.G.nodes():
node = self.G.node[i]
if 'agent' in node:
yield node['agent']
@network_agents.setter
def network_agents(self, network_agents):
if not network_agents:
return
for ix in self.G.nodes():
agent, state = agents._agent_from_distribution(network_agents)
self.set_agent(ix, agent_type=agent, state=state)
def set_agent(self, agent_id, agent_type, state=None):
node = self.G.nodes[agent_id]
defstate = deepcopy(self.default_state)
defstate.update(self.states.get(agent_id, {}))
if state:
defstate.update(state)
state = defstate
state.update(node.get('state', {}))
a = agent_type(environment=self,
agent_id=agent_id,
state=state)
node['agent'] = a
return a
def add_node(self, agent_type, state=None):
agent_id = int(len(self.G.nodes()))
self.G.add_node(agent_id)
a = self.set_agent(agent_id, agent_type, state)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, attrs=None):
return self.G.add_edge(agent1, agent2)
def run(self, *args, **kwargs):
self._save_state()
super().run(*args, **kwargs)
self._history.flush_cache()
def _save_state(self, now=None):
# for agent in self.agents:
# agent.save_state()
utils.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def save_state(self):
'''
:DEPRECATED:
Periodically save the state of the environment and the agents.
'''
self._save_state()
while self.peek() != simpy.core.Infinity:
delay = max(self.peek() - self.now, self.interval)
utils.logger.debug('Step: {}'.format(self.now))
ev = self.event()
ev._ok = True
# Schedule the event with minimum priority so
# that it executes before all agents
self.schedule(ev, -999, delay)
yield ev
self._save_state()
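    # Dictionary-style access: env['key'] reads or writes the current value of an environment
    # parameter (writes are also logged to history), while env[agent_id, t_step, key] builds a
    # history.Key and returns the recorded value(s) for that query.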
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
return self._history[key]
return self.environment_params[key]
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = history.Key(*key)
self._history.save_record(*k,
value=value)
return
self.environment_params[key] = value
self._history.save_record(agent_id='env',
t_step=self.now,
key=key,
value=value)
def __contains__(self, key):
return key in self.environment_params
def get(self, key, default=None):
'''
Get the value of an environment attribute in a
given point in the simulation (history).
If key is an attribute name, this method returns
the current value.
To get values at other times, use a
:meth: `soil.history.Key` tuple.
'''
return self[key] if key in self else default
def get_path(self, dir_path=None):
dir_path = dir_path or self.dir_path
if not os.path.exists(dir_path):
try:
os.makedirs(dir_path)
except FileExistsError:
pass
return dir_path
def get_agent(self, agent_id):
return self.G.node[agent_id]['agent']
def get_agents(self):
return list(self.agents)
def dump_csv(self, dir_path=None):
csv_name = os.path.join(self.get_path(dir_path),
'{}.environment.csv'.format(self.name))
with open(csv_name, 'w') as f:
cr = csv.writer(f)
cr.writerow(('agent_id', 't_step', 'key', 'value', 'value_type'))
for i in self.history_to_tuples():
cr.writerow(i)
def dump_gexf(self, dir_path=None):
G = self.history_to_graph()
graph_path = os.path.join(self.get_path(dir_path),
self.name+".gexf")
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.node[node]:
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
nx.write_gexf(G, graph_path, version="1.2draft")
def dump(self, dir_path=None, formats=None):
if not formats:
return
functions = {
'csv': self.dump_csv,
'gexf': self.dump_gexf
}
for f in formats:
if f in functions:
functions[f](dir_path)
else:
raise ValueError('Unknown format: {}'.format(f))
def state_to_tuples(self, now=None):
if now is None:
now = self.now
for k, v in self.environment_params.items():
yield history.Record(agent_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
for k, v in agent.state.items():
yield history.Record(agent_id=agent.id,
t_step=now,
key=k,
value=v)
def history_to_tuples(self):
return self._history.to_tuples()
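    # history_to_graph (below) rebuilds the topology with Gephi-friendly dynamic attributes:
    # every recorded key becomes an 'attr_<key>' list of (value, start, end) intervals, and the
    # 'visible' flag is converted into 'spells' so nodes can appear and disappear in the GEXF export.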
def history_to_graph(self):
G = nx.Graph(self.G)
for agent in self.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
history = self[agent.id, None, None]
if not history:
continue
for t_step, state in reversed(sorted(list(history.items()))):
for attribute, value in state.items():
if attribute == 'visible':
nowvisible = state[attribute]
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
else:
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (state[attribute], t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
value = (last_value, t_step, laststep)
if key not in attributes:
attributes[key] = list()
attributes[key].append(value)
lastattributes[key] = (state[attribute], t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], 0, v[1]))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
def __getstate__(self):
state = self.__dict__.copy()
state['G'] = json_graph.node_link_data(self.G)
state['network_agents'] = agents._serialize_distribution(self.network_agents)
state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
to_string=True)
del state['_queue']
return state
def __setstate__(self, state):
self.__dict__ = state
self.G = json_graph.node_link_graph(state['G'])
self.network_agents = self.calculate_distribution(self._convert_agent_types(self.network_agents))
self.environment_agents = self._convert_agent_types(self.environment_agents)
return state

soil/history.py Normal file

@@ -0,0 +1,231 @@
import time
import os
import pandas as pd
import sqlite3
import copy
from collections import UserDict, Iterable, namedtuple
from . import utils
class History:
"""
Store and retrieve values from a sqlite database.
"""
def __init__(self, db_path=None, name=None, dir_path=None, backup=True):
if db_path is None and name:
db_path = os.path.join(dir_path or os.getcwd(),
'{}.db.sqlite'.format(name))
if db_path is None:
db_path = ":memory:"
else:
if backup and os.path.exists(db_path):
newname = db_path + '.backup{}.sqlite'.format(time.time())
os.rename(db_path, newname)
self._db_path = db_path
if isinstance(db_path, str):
self._db = sqlite3.connect(db_path)
else:
self._db = db_path
with self._db:
            self._db.execute('''CREATE TABLE IF NOT EXISTS history (agent_id text, t_step int, key text, value text)''')
self._db.execute('''CREATE TABLE IF NOT EXISTS value_types (key text, value_type text)''')
self._db.execute('''CREATE UNIQUE INDEX IF NOT EXISTS idx_history ON history (agent_id, t_step, key);''')
self._dtypes = {}
self._tups = []
def conversors(self, key):
"""Get the serializer and deserializer for a given key."""
if key not in self._dtypes:
self.read_types()
return self._dtypes[key]
@property
def dtypes(self):
return {k:v[0] for k, v in self._dtypes.items()}
def save_tuples(self, tuples):
self.save_records(Record(*tup) for tup in tuples)
def save_records(self, records):
with self._db:
for rec in records:
if not isinstance(rec, Record):
rec = Record(*rec)
if rec.key not in self._dtypes:
name = utils.name(rec.value)
serializer = utils.serializer(name)
deserializer = utils.deserializer(name)
self._dtypes[rec.key] = (name, serializer, deserializer)
self._db.execute("replace into value_types (key, value_type) values (?, ?)", (rec.key, name))
self._db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
def save_record(self, *args, **kwargs):
self._tups.append(Record(*args, **kwargs))
if len(self._tups) > 100:
self.flush_cache()
def flush_cache(self):
'''
Use a cache to save state changes to avoid opening a session for every change.
The cache will be flushed at the end of the simulation, and when history is accessed.
'''
self.save_records(self._tups)
self._tups = list()
def to_tuples(self):
self.flush_cache()
with self._db:
res = self._db.execute("select agent_id, t_step, key, value from history ").fetchall()
for r in res:
agent_id, t_step, key, value = r
_, _ , des = self.conversors(key)
yield agent_id, t_step, key, des(value)
def read_types(self):
with self._db:
res = self._db.execute("select key, value_type from value_types ").fetchall()
for k, v in res:
serializer = utils.serializer(v)
deserializer = utils.deserializer(v)
self._dtypes[k] = (v, serializer, deserializer)
def __getitem__(self, key):
key = Key(*key)
agent_ids = [key.agent_id] if key.agent_id is not None else []
t_steps = [key.t_step] if key.t_step is not None else []
keys = [key.key] if key.key is not None else []
df = self.read_sql(agent_ids=agent_ids,
t_steps=t_steps,
keys=keys)
r = Records(df, filter=key, dtypes=self._dtypes)
return r.value()
def read_sql(self, keys=None, agent_ids=None, t_steps=None, convert_types=False, limit=-1):
self.read_types()
def escape_and_join(v):
if v is None:
return
return ",".join(map(lambda x: "\'{}\'".format(x), v))
filters = [("key in ({})".format(escape_and_join(keys)), keys),
("agent_id in ({})".format(escape_and_join(agent_ids)), agent_ids)
]
filters = list(k[0] for k in filters if k[1])
last_df = None
if t_steps:
# Look for the last value before the minimum step in the query
min_step = min(t_steps)
last_filters = ['t_step < {}'.format(min_step),]
last_filters = last_filters + filters
condition = ' and '.join(last_filters)
last_query = '''
select h1.*
from history h1
inner join (
select agent_id, key, max(t_step) as t_step
from history
where {condition}
group by agent_id, key
) h2
on h1.agent_id = h2.agent_id and
h1.key = h2.key and
h1.t_step = h2.t_step
'''.format(condition=condition)
last_df = pd.read_sql_query(last_query, self._db)
filters.append("t_step >= '{}' and t_step <= '{}'".format(min_step, max(t_steps)))
condition = ''
if filters:
condition = 'where {} '.format(' and '.join(filters))
query = 'select * from history {} limit {}'.format(condition, limit)
df = pd.read_sql_query(query, self._db)
if last_df is not None:
df = pd.concat([df, last_df])
df_p = df.pivot_table(values='value', index=['t_step'],
columns=['key', 'agent_id'],
aggfunc='first')
for k, v in self._dtypes.items():
if k in df_p:
dtype, _, deserial = v
df_p[k] = df_p[k].fillna(method='ffill').fillna(deserial()).astype(dtype)
if t_steps:
df_p = df_p.reindex(t_steps, method='ffill')
return df_p.ffill()
class Records():
def __init__(self, df, filter=None, dtypes=None):
if not filter:
filter = Key(agent_id=None,
t_step=None,
key=None)
self._df = df
self._filter = filter
self.dtypes = dtypes or {}
super().__init__()
def mask(self, tup):
res = ()
for i, k in zip(tup[:-1], self._filter):
if k is None:
res = res + (i,)
res = res + (tup[-1],)
return res
def filter(self, newKey):
f = list(self._filter)
for ix, i in enumerate(f):
if i is None:
f[ix] = newKey
self._filter = Key(*f)
@property
def resolved(self):
return sum(1 for i in self._filter if i is not None) == 3
def __iter__(self):
for column, series in self._df.iteritems():
key, agent_id = column
for t_step, value in series.iteritems():
r = Record(t_step=t_step,
agent_id=agent_id,
key=key,
value=value)
yield self.mask(r)
def value(self):
if self.resolved:
f = self._filter
try:
i = self._df[f.key][str(f.agent_id)]
ix = i.index.get_loc(f.t_step, method='ffill')
return i.iloc[ix]
except KeyError:
return self.dtypes[f.key][2]()
return self
def __getitem__(self, k):
n = copy.copy(self)
n.filter(k)
return n.value()
def __len__(self):
return len(self._df)
Key = namedtuple('Key', ['agent_id', 't_step', 'key'])
Record = namedtuple('Record', 'agent_id t_step key value')
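For reference, a minimal usage sketch of the History class above, using an in-memory database (the keyword arguments follow the Record namedtuple fields):

from soil.history import History

h = History()                         # no db_path or name -> in-memory sqlite
h.save_record(agent_id='a0', t_step=0, key='status', value='neutral')
h.save_record(agent_id='a0', t_step=3, key='status', value='infected')
h.flush_cache()                       # push the cached records into sqlite

print(h['a0', 3, 'status'])           # 'infected'; missing steps are forward-filled
print(list(h.to_tuples()))            # every stored delta, with values deserialized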

soil/settings.py (new file)

@@ -0,0 +1 @@
# General configuration

soil/simulation.py (new file)

@@ -0,0 +1,219 @@
import os
import time
import imp
import sys
import yaml
import networkx as nx
from networkx.readwrite import json_graph
from multiprocessing import Pool
from functools import partial
import pickle
from nxsim import NetworkSimulation
from . import utils, environment, basestring, agents
from .utils import logger
class SoilSimulation(NetworkSimulation):
"""
Subclass of nsim.NetworkSimulation with three main differences:
1) agent type can be specified by name or by class.
2) instead of just one type, a network agents distribution can be used.
The distribution specifies the weight (or probability) of each
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
"""
def __init__(self, name=None, topology=None, network_params=None,
network_agents=None, agent_type=None, states=None,
default_state=None, interval=1, dump=None, dry_run=False,
dir_path=None, num_trials=1, max_time=100,
agent_module=None, load_module=None, seed=None,
environment_agents=None, environment_params=None):
if topology is None:
topology = utils.load_network(network_params,
dir_path=dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
self.load_module = load_module
self.topology = nx.Graph(topology)
self.network_params = network_params
self.name = name or 'UnnamedSimulation'
self.num_trials = num_trials
self.max_time = max_time
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
self.seed = str(seed) if seed is not None else str(time.time())
self.dump = dump
self.dry_run = dry_run
self.environment_params = environment_params or {}
if load_module:
path = sys.path + [self.dir_path, os.getcwd()]
f, fp, desc = imp.find_module(load_module, path)
imp.load_module('soil.agents.custom', f, fp, desc)
environment_agents = environment_agents or []
self.environment_agents = agents._convert_agent_types(environment_agents)
distro = agents.calculate_distribution(network_agents,
agent_type)
self.network_agents = agents._convert_agent_types(distro)
self.states = agents._validate_states(states,
self.topology)
def run_simulation(self, *args, **kwargs):
return self.run(*args, **kwargs)
def run(self, *args, **kwargs):
return list(self.run_simulation_gen(*args, **kwargs))
def run_simulation_gen(self, *args, parallel=False, dry_run=False,
**kwargs):
p = Pool()
with utils.timer('simulation {}'.format(self.name)):
if parallel:
func = partial(self.run_trial, dry_run=dry_run or self.dry_run,
return_env=not parallel, **kwargs)
for i in p.imap_unordered(func, range(self.num_trials)):
yield i
else:
for i in range(self.num_trials):
yield self.run_trial(i, dry_run=dry_run or self.dry_run, **kwargs)
if not (dry_run or self.dry_run):
logger.info('Dumping results to {}'.format(self.dir_path))
self.dump_pickle(self.dir_path)
self.dump_yaml(self.dir_path)
else:
logger.info('NOT dumping results')
def get_env(self, trial_id=0, **kwargs):
opts = self.environment_params.copy()
env_name = '{}_trial_{}'.format(self.name, trial_id)
opts.update({
'name': env_name,
'topology': self.topology.copy(),
'seed': self.seed+env_name,
'initial_time': 0,
'dry_run': self.dry_run,
'interval': self.interval,
'network_agents': self.network_agents,
'states': self.states,
'default_state': self.default_state,
'environment_agents': self.environment_agents,
'dir_path': self.dir_path,
})
opts.update(kwargs)
env = environment.SoilEnvironment(**opts)
return env
def run_trial(self, trial_id=0, until=None, return_env=True, **opts):
"""Run a single trial of the simulation
Parameters
----------
trial_id : int
"""
# Set-up trial environment and graph
until = until or self.max_time
env = self.get_env(trial_id=trial_id, **opts)
# Set up agents on nodes
with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
env.run(until)
if self.dump and not self.dry_run:
with utils.timer('Dumping simulation {} trial {}'.format(self.name, trial_id)):
env.dump(formats=self.dump)
if return_env:
return env
def to_dict(self):
return self.__getstate__()
def to_yaml(self):
return yaml.dump(self.to_dict())
def dump_yaml(self, dir_path=None, file_name=None):
dir_path = dir_path or self.dir_path
if not os.path.exists(dir_path):
os.makedirs(dir_path)
if not file_name:
file_name = os.path.join(dir_path,
'{}.dumped.yml'.format(self.name))
with open(file_name, 'w') as f:
f.write(self.to_yaml())
def dump_pickle(self, dir_path=None, pickle_name=None):
dir_path = dir_path or self.dir_path
if not os.path.exists(dir_path):
os.makedirs(dir_path)
if not pickle_name:
pickle_name = os.path.join(dir_path,
'{}.simulation.pickle'.format(self.name))
with open(pickle_name, 'wb') as f:
pickle.dump(self, f)
def __getstate__(self):
state = self.__dict__.copy()
state['topology'] = json_graph.node_link_data(self.topology)
state['network_agents'] = agents._serialize_distribution(self.network_agents)
state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
to_string=True)
return state
def __setstate__(self, state):
self.__dict__ = state
self.topology = json_graph.node_link_graph(state['topology'])
self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
self.environment_agents = agents._convert_agent_types(self.environment_agents)
return state
def from_config(config):
config = list(utils.load_config(config))
if len(config) > 1:
raise AttributeError('Provide only one configuration')
config = config[0][0]
sim = SoilSimulation(**config)
return sim
def run_from_config(*configs, results_dir='soil_output', dry_run=False, dump=None, timestamp=False, **kwargs):
for config_def in configs:
# logger.info("Found {} config(s)".format(len(ls)))
for config, _ in utils.load_config(config_def):
name = config.get('name', 'unnamed')
logger.info("Using config(s): {name}".format(name=name))
if timestamp:
sim_folder = '{}_{}'.format(name,
time.strftime("%Y-%m-%d_%H:%M:%S"))
else:
sim_folder = name
dir_path = os.path.join(results_dir, sim_folder)
sim = SoilSimulation(dir_path=dir_path, dump=dump, **config)
logger.info('Dumping results to {} : {}'.format(sim.dir_path, sim.dump))
sim.run_simulation(**kwargs)
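As a quick sketch of how the constructor above can be driven from Python (the CounterModel agent name and the generator parameters are illustrative placeholders, not part of this change set):

from soil.simulation import SoilSimulation

sim = SoilSimulation(name='example',
                     network_params={'generator': 'barabasi_albert_graph',
                                     'n': 100, 'm': 2},          # forwarded to nx.generators
                     network_agents=[{'agent_type': 'CounterModel',  # hypothetical agent class name
                                      'weight': 1}],
                     max_time=50,
                     num_trials=1,
                     dry_run=True)     # skip dumping results to disk
envs = sim.run()                       # one environment object per trial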

soil/utils.py (new file)

@@ -0,0 +1,105 @@
import os
import yaml
import logging
import importlib
from time import time
from glob import glob
from random import random
from copy import deepcopy
import networkx as nx
from contextlib import contextmanager
logger = logging.getLogger('soil')
logger.setLevel(logging.INFO)
def load_network(network_params, dir_path=None):
if network_params is None:
return nx.Graph()
path = network_params.get('path', None)
if path:
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
return method(path, **kwargs)
net_args = network_params.copy()
net_type = net_args.pop('generator')
method = getattr(nx.generators, net_type)
return method(**net_args)
def load_file(infile):
with open(infile, 'r') as f:
return list(yaml.load_all(f))
def load_files(*patterns):
for pattern in patterns:
for i in glob(pattern):
for config in load_file(i):
yield config, os.path.abspath(i)
def load_config(config):
if isinstance(config, dict):
yield config, None
else:
yield from load_files(config)
@contextmanager
def timer(name='task', pre="", function=logger.info, to_object=None):
start = time()
function('{}Starting {} at {}.'.format(pre, name, start))
yield start
end = time()
function('{}Finished {} in {} seconds'.format(pre, name, str(end-start)))
if to_object:
to_object.start = start
to_object.end = end
def repr(v):
tname = name(v)
func = serializer(tname)
return func(v), tname
def name(v):
return type(v).__name__
def serializer(type_):
if type_ == 'bool':
return lambda x: "true" if x else ""
return lambda x: x
def deserializer(type_):
try:
# Check if it's a builtin type
module = importlib.import_module('builtins')
cls = getattr(module, type_)
except AttributeError:
# if not, separate module and class
module, type_ = type_.rsplit(".", 1)
module = importlib.import_module(module)
cls = getattr(module, type_)
return cls
def convert(value, type_):
return deserializer(type_)(value)
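A small sketch of how these helpers round-trip values through the string form used for storage (booleans are the only type with a custom serializer here):

from soil import utils

ser = utils.serializer('bool')
print(ser(True), repr(ser(False)))     # true ''
print(utils.convert('true', 'bool'))   # True
print(utils.convert('', 'bool'))       # False
print(utils.convert('42', 'int'))      # 42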

soil/version.py (new file)

@@ -0,0 +1,20 @@
import os
import logging
logger = logging.getLogger(__name__)
ROOT = os.path.dirname(__file__)
DEFAULT_FILE = os.path.join(ROOT, 'VERSION')
def read_version(versionfile=DEFAULT_FILE):
try:
with open(versionfile) as f:
return f.read().strip()
except IOError: # pragma: no cover
logger.error(('Running an unknown version of {}.'
'Be careful!.').format(__name__))
return '0.0'
__version__ = read_version()
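A trivial check of the resulting value:

from soil.version import __version__
print(__version__)   # contents of soil/VERSION, or '0.0' if that file cannot be read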

(deleted file: SOIL tutorial notebook)

@@ -1,913 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"./logo_gsi.png\" alt=\"Grupo de Sistemas Inteligentes\" width=\"100px\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SOIL Tutorial "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook contains a tutorial to learn how to use the SOcial network sImuLator (SOIL) written in Python. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL is based in 2 main files:\n",
"* __soil.py__: It's the main file of SOIL. The network creation, simulation and visualization are done in this file.\n",
"+ __settings.json__: This file contains every variable needed in the simulation in order to be modified easily.\n",
"- __models__: All the spread models already implemented are stored in this directory as modules."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Requirements"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL requires to install:\n",
"* **Python 3** - you can use the Conda distribution\n",
"* **NetworkX** - install with conda install networkx or pip install networkx\n",
"* **simpy** - install with pip install simpy\n",
"* **nxsim** - install with pip install nxsim\n",
"* **Gephi** - Available at https://gephi.org"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Soil.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Imports and data initialization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all, you need to make all the imports. This simulator is based on [nxsim](https://pypi.python.org/pypi/nxsim), using [networkx](https://networkx.github.io/) for network management. We will also include the models and settings files where the spread models and initialization variables are stored."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from models import *\n",
"from nxsim import NetworkSimulation\n",
"# import numpy\n",
"from matplotlib import pyplot as plt\n",
"import networkx as nx\n",
"import settings\n",
"import models\n",
"import math\n",
"import json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Network creation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using a parameter provided in the settings file, we can choose what type of network we want to create, as well as the number of nodes and some other parameters. More types of networks can be implemented using [networkx](https://networkx.github.io/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"if settings.network_params[\"network_type\"] == 0:\n",
" G = nx.complete_graph(settings.network_params[\"number_of_nodes\"])\n",
"if settings.network_params[\"network_type\"] == 1:\n",
" G = nx.barabasi_albert_graph(settings.network_params[\"number_of_nodes\"], 10)\n",
"if settings.network_params[\"network_type\"] == 2:\n",
" G = nx.margulis_gabber_galil_graph(settings.network_params[\"number_of_nodes\"], None)\n",
"# More types of networks can be added here"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to analyse the results of the simulation. We include them in the topology and a .gexf file is generated. This allows the user to picture the network in [Gephi](https://gephi.org/). A JSON file is also generated to permit further analysis.\n",
"\n",
"The JSON file follows this schema. The file has three depth levels. In the first one we can find the identifier of each agent in the network. Secondly, inside every agent we can observe every attribute that the creator of the model wanted to make visible. In the deepest level the different values of each attribute are\n",
"visible.\n",
"\n",
"\t{\n",
"\t\t\"agent_0\": {\n",
"\t\t\t\"attribute_X\": {\n",
"\t\t\t\t\"0\": 0,\n",
"\t\t\t\t\"2\": 0,\n",
"\t\t\t\t\"4\": 1,\n",
"\t\t\t\t\"6\": 2,\n",
"\t\t\t\t...\n",
"\t\t\t}\n",
"\t\t},\n",
"\t\t\"agent_1\": {\n",
"\t\t\t\"attribute_X\": {\n",
"\t\t\t\t\"0\": 0,\n",
"\t\t\t\t\"2\": 3,\n",
"\t\t\t\t...\n",
"\t\t\t}\n",
"\t\t},\n",
"\t\t...\t\t\n",
"\t}\n",
"\n",
"This is done with the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def visualization(graph_name):\n",
"\n",
" for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" for attribute in models.networkStatus[\"agent_%s\" % x]:\n",
" emotionStatusAux = []\n",
" for t_step in models.networkStatus[\"agent_%s\" % x][attribute]:\n",
" prec = 2\n",
" output = math.floor(models.networkStatus[\"agent_%s\" % x][attribute][t_step] * (10 ** prec)) / (10 ** prec) # 2 decimals\n",
" emotionStatusAux.append((output, t_step, t_step + settings.network_params[\"timeout\"]))\n",
" attributes = {}\n",
" attributes[attribute] = emotionStatusAux\n",
" G.add_node(x, attributes)\n",
"\n",
" print(\"Done!\")\n",
"\n",
" with open('data.txt', 'w') as outfile:\n",
" json.dump(models.networkStatus, outfile, sort_keys=True, indent=4, separators=(',', ': '))\n",
"\n",
" nx.write_gexf(G, graph_name+\".gexf\", version=\"1.2draft\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's only the basic visualization. Everything you need can be implemented as well. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def results(model_name):\n",
" x_values = []\n",
" infected_values = []\n",
" neutral_values = []\n",
" cured_values = []\n",
" vaccinated_values = []\n",
"\n",
" attribute_plot = 'status'\n",
" for time in range(0, settings.network_params[\"max_time\"]):\n",
" value_infectados = 0\n",
" value_neutral = 0\n",
" value_cured = 0\n",
" value_vaccinated = 0\n",
" real_time = time * settings.network_params[\"timeout\"]\n",
" activity = False\n",
" for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" if attribute_plot in models.networkStatus[\"agent_%s\" % x]:\n",
" if real_time in models.networkStatus[\"agent_%s\" % x][attribute_plot]:\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 1: ## Infected\n",
" value_infectados += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 0: ## Neutral\n",
" value_neutral += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 2: ## Cured\n",
" value_cured += 1\n",
" activity = True\n",
" if models.networkStatus[\"agent_%s\" % x][attribute_plot][real_time] == 3: ## Vaccinated\n",
" value_vaccinated += 1\n",
" activity = True\n",
"\n",
" if activity:\n",
" x_values.append(real_time)\n",
" infected_values.append(value_infectados)\n",
" neutral_values.append(value_neutral)\n",
" cured_values.append(value_cured)\n",
" vaccinated_values.append(value_vaccinated)\n",
" activity = False\n",
"\n",
" fig1 = plt.figure()\n",
" ax1 = fig1.add_subplot(111)\n",
"\n",
" infected_line = ax1.plot(x_values, infected_values, label='Infected')\n",
" neutral_line = ax1.plot(x_values, neutral_values, label='Neutral')\n",
" cured_line = ax1.plot(x_values, cured_values, label='Cured')\n",
" vaccinated_line = ax1.plot(x_values, vaccinated_values, label='Vaccinated')\n",
" ax1.legend()\n",
" fig1.savefig(model_name + '.png')\n",
" # plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simulation starts with the following code. The user can provide the network topology, the maximum time of simulation, the spread model to be used as well as other parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"agents = settings.environment_params['agent']\n",
"\n",
"print(\"Using Agent(s): {agents}\".format(agents=agents))\n",
"\n",
"if len(agents) > 1:\n",
" for agent in agents:\n",
" sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params[\"max_time\"],\n",
" num_trials=settings.network_params[\"num_trials\"], logging_interval=1.0, **settings.environment_params)\n",
" sim.run_simulation()\n",
" print(str(agent))\n",
" results(str(agent))\n",
" visualization(str(agent))\n",
"else:\n",
" agent = agents[0]\n",
" sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params[\"max_time\"],\n",
" num_trials=settings.network_params[\"num_trials\"], logging_interval=1.0, **settings.environment_params)\n",
" sim.run_simulation()\n",
" results(str(agent))\n",
" visualization(str(agent))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Imports and initialization"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import settings\n",
"\n",
"networkStatus = {} # Dict that will contain the status of every agent in the network\n",
"\n",
"sentimentCorrelationNodeArray = []\n",
"for x in range(0, settings.network_params[\"number_of_nodes\"]):\n",
" sentimentCorrelationNodeArray.append({'id': x})\n",
"# Initialize agent states. Let's assume everyone is normal.\n",
"init_states = [{'id': 0, } for _ in range(settings.network_params[\"number_of_nodes\"])]\n",
" # add keys as as necessary, but \"id\" must always refer to that state category"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Base behaviour"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Every spread model used in SOIL should extend the base behaviour class. By doing this the exportation of the attributes values will be automatic. This feature will be explained in the Spread Models section. The class looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import settings\n",
"from nxsim import BaseNetworkAgent\n",
"\n",
"\n",
"class BaseBehaviour(BaseNetworkAgent):\n",
"\n",
" def __init__(self, environment=None, agent_id=0, state=()):\n",
" super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
" self._attrs = {}\n",
"\n",
" @property\n",
" def attrs(self):\n",
" now = self.env.now\n",
" if now not in self._attrs:\n",
" self._attrs[now] = {}\n",
" return self._attrs[now]\n",
"\n",
" @attrs.setter\n",
" def attrs(self, value):\n",
" self._attrs[self.env.now] = value\n",
"\n",
" def run(self):\n",
" while True:\n",
" self.step(self.env.now)\n",
" yield self.env.timeout(settings.network_params[\"timeout\"])\n",
"\n",
" def step(self, now):\n",
" networkStatus['agent_%s'% self.id] = self.to_json()\n",
"\n",
" def to_json(self):\n",
" final = {}\n",
" for stamp, attrs in self._attrs.items():\n",
" for a in attrs:\n",
" if a not in final:\n",
" final[a] = {}\n",
" final[a][stamp] = attrs[a]\n",
" return final"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Spread models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Every model to be implemented must include an init and a step function. Depending on your model, you would need different attributes. If you want them to be automatic exported for a further analysis, you must name them like this *self.attrs['name_of_attribute']*. Moreover, the last thing you should do inside the step function is call the following method *super().step(now)*. This call will store the values.\n",
"\n",
"Some other tips:\n",
"* __self.state['id']__: To check the id of the current agent/node.\n",
"* __self.get_neighboring_agents(state_id=x)__: Returns the neighbours agents/nodes with the id provided\n",
"\n",
"An example of a spread model already implemented and working:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import settings\n",
"import random\n",
"import numpy as np\n",
"\n",
"\n",
"class ControlModelM2(BaseBehaviour):\n",
"\n",
" # Init infected\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 1}\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 1}\n",
"\n",
" # Init beacons\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 4}\n",
" init_states[random.randint(0, settings.network_params[\"number_of_nodes\"]-1)] = {'id': 4}\n",
"\n",
" def __init__(self, environment=None, agent_id=0, state=()):\n",
" super().__init__(environment=environment, agent_id=agent_id, state=state)\n",
"\n",
" self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],\n",
" environment.environment_params['standard_variance'])\n",
" self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],\n",
" environment.environment_params['standard_variance'])\n",
"\n",
" def step(self, now):\n",
"\n",
" if self.state['id'] == 0: # Neutral\n",
" self.neutral_behaviour()\n",
" elif self.state['id'] == 1: # Infected\n",
" self.infected_behaviour()\n",
" elif self.state['id'] == 2: # Cured\n",
" self.cured_behaviour()\n",
" elif self.state['id'] == 3: # Vaccinated\n",
" self.vaccinated_behaviour()\n",
" elif self.state['id'] == 4: # Beacon-off\n",
" self.beacon_off_behaviour()\n",
" elif self.state['id'] == 5: # Beacon-on\n",
" self.beacon_on_behaviour()\n",
"\n",
" self.attrs['status'] = self.state['id']\n",
" super().step(now)\n",
"\n",
" def neutral_behaviour(self):\n",
"\n",
" # Infected\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" if len(infected_neighbors) > 0:\n",
" if random.random() < self.prob_neutral_making_denier:\n",
" self.state['id'] = 3 # Vaccinated making denier\n",
"\n",
" def infected_behaviour(self):\n",
"\n",
" # Neutral\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_infect:\n",
" neighbor.state['id'] = 1 # Infected\n",
"\n",
" def cured_behaviour(self):\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
"\n",
" # Cure\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_cured_healing_infected:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" def vaccinated_behaviour(self):\n",
"\n",
" # Cure\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_cured_healing_infected:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
"\n",
" # Generate anti-rumor\n",
" infected_neighbors_2 = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors_2:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" def beacon_off_behaviour(self):\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" if len(infected_neighbors) > 0:\n",
" self.state['id'] == 5 # Beacon on\n",
"\n",
" def beacon_on_behaviour(self):\n",
"\n",
" # Cure (M2 feature added)\n",
" infected_neighbors = self.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
" neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors_infected:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 3 # Vaccinated\n",
" infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)\n",
" for neighbor in infected_neighbors_infected:\n",
" if random.random() < self.prob_generate_anti_rumor:\n",
" neighbor.state['id'] = 2 # Cured\n",
"\n",
" # Vaccinate\n",
" neutral_neighbors = self.get_neighboring_agents(state_id=0)\n",
" for neighbor in neutral_neighbors:\n",
" if random.random() < self.prob_cured_vaccinate_neutral:\n",
" neighbor.state['id'] = 3 # Vaccinated"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Settings.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This file contains all the variables that can be modified from the simulation. In case of implementing a new spread model, the new variables should be also included in this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"[\n",
" {\n",
" \"network_type\": 1,\n",
" \"number_of_nodes\": 1000,\n",
" \"max_time\": 50,\n",
" \"num_trials\": 1,\n",
" \"timeout\": 2\n",
" },\n",
"\n",
" {\n",
" \"agent\": [\"BaseBehaviour\",\"SISaModel\",\"ControlModelM2\"],\n",
"\n",
"\n",
" \"bite_prob\": 0.01,\n",
" \"heal_prob\": 0.01,\n",
"\n",
" \"innovation_prob\": 0.001,\n",
" \"imitation_prob\": 0.005,\n",
"\n",
" \"outside_effects_prob\": 0.2,\n",
" \"anger_prob\": 0.06,\n",
" \"joy_prob\": 0.05,\n",
" \"sadness_prob\": 0.02,\n",
" \"disgust_prob\": 0.02,\n",
"\n",
" \"enterprises\": [\"BBVA\", \"Santander\", \"Bankia\"],\n",
"\n",
" \"tweet_probability_users\": 0.44,\n",
" \"tweet_relevant_probability\": 0.25,\n",
" \"tweet_probability_about\": [0.15, 0.15, 0.15],\n",
" \"sentiment_about\": [0, 0, 0],\n",
"\n",
" \"tweet_probability_enterprises\": [0.3, 0.3, 0.3],\n",
"\n",
" \"neutral_discontent_spon_prob\": 0.04,\n",
" \"neutral_discontent_infected_prob\": 0.04,\n",
" \"neutral_content_spon_prob\": 0.18,\n",
" \"neutral_content_infected_prob\": 0.02,\n",
"\n",
" \"discontent_neutral\": 0.13,\n",
" \"discontent_content\": 0.07,\n",
" \"variance_d_c\": 0.02,\n",
"\n",
" \"content_discontent\": 0.009,\n",
" \"variance_c_d\": 0.003,\n",
" \"content_neutral\": 0.088,\n",
"\n",
" \"standard_variance\": 0.055,\n",
"\n",
"\n",
" \"prob_neutral_making_denier\": 0.035,\n",
"\n",
" \"prob_infect\": 0.075,\n",
"\n",
" \"prob_cured_healing_infected\": 0.035,\n",
" \"prob_cured_vaccinate_neutral\": 0.035,\n",
"\n",
" \"prob_vaccinated_healing_infected\": 0.035,\n",
" \"prob_vaccinated_vaccinate_neutral\": 0.035,\n",
" \"prob_generate_anti_rumor\": 0.035\n",
" }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Library"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test this simulator in all the experiments we have used the Albert\n",
"Barabasi Graph [34] to automatically generate the network and the con-\n",
"nections among the agents due it is one of the most suitable graphs to\n",
"recreate social networks.\n",
"\n",
"Using different human behaviour models we will recreate the different\n",
"decisions of each agent.\n",
"\n",
"Moreover there are some parameters regarding the basic simulation that\n",
"have to be settled. In addition, more parameters will be needed depend-\n",
"ing on the spread model used for the experiment."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Spread Model M2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model is based on the New Spread Model\n",
"M2 [1] which also refers to the cascade model [2]. Agents, usually Twit-\n",
"ter users, have four states regarding a rumour: neutral (initial state),\n",
"infected, vaccinated and cured.\n",
"\n",
"An agent becomes: infected when believes the rumour; vaccinated when is\n",
"influenced before being infected by a cured or already vaccinated agent\n",
"and cured when after becoming infected the agent is influenced by a\n",
"vaccinated/cured user.\n",
"\n",
"After a certain period of time, a random infected user develops an anti-\n",
"rumour and spreads it to its neighbours in order to vaccinate the neutral\n",
"and cure the infected ones.\n",
"\n",
"This model includes the fact that infected users who made a mistake\n",
"believing in the rumour will not be in favour of spreading theirs mistakes\n",
"through the network. Therefore, only vaccinated users will spread anti-\n",
"rumours. The probability of making a denier and becoming vaccinated\n",
"when a neutral user has an infected neighbour and the first already had\n",
"information about the rumour being false.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016),\n",
"* [2] L. Weng et al. “Virality prediction and community structure\n",
"in social networks”. In: Scientific Reports 3 (2013)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Control model M2,2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model is based on the New Control Model\n",
"M2,2 [1]. It includes the use of beacons, special agents, that represent\n",
"an authority which can work against the rumour once it is detected. It\n",
"only has two states: on or off. Beacons will switch to on status when they\n",
"detect the misinformation in an infected neighbour agent.\n",
"Once the beacon is activated, they will try to cure and vaccinate other\n",
"agents starting a anti-rumour. Therefore this model also takes into ac-\n",
"count that infected users might not admit a previous mistake.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016),"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SISa Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SISa model of infection is already included in the simulator. Its the evolution of the classic disease spread Susceptible-Infective-Susceptible (SIS) model [1, 2].\n",
"\n",
"The SISa model is proposed by [3] and the main new feature is considering the spontaneous generation process of sentiment. This model has two assumptions: first, a susceptible agent who is close and more exposed to the infected has a higher probability of infection that other agent; second, the number of infected agents does not affect the probability of recovery.\n",
"\n",
"Based on some recent implementations of the SISa model [3], every agent can be in three states: neutral (initial), content and discontent.\n",
"\n",
"All the transitions between every different state are allowed depending on customizable probabilities. This model includes the fact that an agent will be more likely to change state as the number of neighbours with this state increases.\n",
"\n",
"* [1] P. Weng and X.-Q. Zhao. “Spreading speed and traveling waves for a multi-type SIS epidemic model”. In: Journal of Differential Equations 229.1 (2006)\n",
"\n",
"* [2] P. V. Mieghem. “Epidemic phase transition of the SIS type in networks”. In: A Letters Journal Exploring the Frontiers of Physics 97.4 (2012).\n",
"* [3] A. L. Hill et al. “Emotions as infectious diseases in a large social network: the SISa model”. In: Proceedings of the Royal Society of London B: Biological Sciences 277.1701 (2010),"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Big Market Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As stated in several papers [24], social networks like Twitter are the perfect scenario to study the propagation of ideas, sentiments and marketing strategies. In this scenario several enterprises want to take advantage of social networks to promote their companies and connect with their clients.\n",
"\n",
"The goal of this model [1] is to recreate the behaviour of several enterprises in a social network. Following the example of HashtKat, we want to measure the effect of different marketing strategies in social networks.\n",
"Depending on the sentiment towards an enterprise the user will post positive or negative tweets about these enterprises. The fact that an user can increase its probabilities of posting a relevant tweet about a certain\n",
"company depending on its sentiment towards it is also considered.\n",
"In this model the number of enterprises as well as tweet rate probabilities of both companies and users can be changed.\n",
"\n",
"* [1] E. Serrano and C. A. Iglesias. “Validating viral marketing\n",
"strategies in Twitter via agent-based social simulation”. In:\n",
"Expert Systems with Applications 50.1 (2016)\n",
"* [2] B. A. Huberman et al. “Social Networks that Matter: Twitter\n",
"Under the Microscope”. In: Social Science Research Network\n",
"(2008).\n",
"* [3] M. Cha et al. “Measuring User Influence in Twitter: The\n",
"Million Follower Fallacy.” In: ICWSM 10.10-17 (2010),\n",
"* [4] M. Bulearca and S. Bulearca. “Twitter: a viable marketing\n",
"tool for SMEs?” In: Global business and management research\n",
"2.4 (2010),"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sentiment Correlation Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this model we want to study\n",
"the influence of different sentiments in a social network. In order to do so, we base our model on the research made by [1]. In this paper the authors found out that in a social network (in this case Weibo) the correlation\n",
"of anger is significantly higher than joy and sadness meaning that the anger sentiment would occasionally spread faster than the others.\n",
"\n",
"They also confirmed some intuitive ideas such as a pair of users who have higher interactions are more likely to be influenced by each other, and that users with more friends would influence their neighbours more than other agents.\n",
"\n",
"In this simulation we have four emotions: anger, joy, sadness and disgust.\n",
"\n",
"Using the probabilities extracted from the dataset used by [1] we can visualise the graph and confirm the conclusions of the paper. Anger sentiment propagation rate is much higher than any other. Joy sentiment also spreads easily to the neighbours. However, sadness and disgust propagation rate is really small, few neighbours get affected by them.\n",
"\n",
"* [1] R. Fan et al. “Anger is More Influential Than Joy: Sentiment\n",
"Correlation in Weibo”. In: CoRR abs/1309.2402 (2013)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bass Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even though Bass Model can be applied to many appli-\n",
"cations [5760] it can be used to study the diffusion of information as\n",
"well.\n",
"This model is based on the implementation proposed by Rand and Wilen-\n",
"sky [13]. In this scenario there are only two states: unaware (initial) and\n",
"aware. For this simulation we assume that agents can only change status\n",
"from advertising (outside effects) and word of mouth (information inside\n",
"the network).\n",
"The probability of being affected by imitation (word of mouth effect)\n",
"increases as a function of the agent aware neighbours. In this model once\n",
"the user changes to aware status it remains in this state for the whole\n",
"simulation.\n",
"\n",
"* F. M. Bass. “A New Product Growth for Model ConsumerDurables”. In: Management Science 15.5 (1969),\n",
"W. Dodds. “An Application of the Bass Model in Long-TermNew Product Forecasting”. In: Journal of Marketing Research\n",
"10.3 (1973),\n",
"* F. Douglas Tigert. “The Bass New Product Growth Model: A Sensitivity Analysis for a High Technology Product”. In: Journal of Marketing 45.4 (1981),\n",
"* Z. Jiang et al. “Virtual Bass Model and the left-hand data-truncation bias in diffusion of innovation studies”. In: International Journal of Research in Marketing 23.1 (2006), "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Independent Cascade Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As stated by Rand and Wilensky [1], the Independent Cascade Model [61] suits better the case we want to\n",
"study as it is more appropriate for social networks than the Bass Model.\n",
"\n",
"In this scenario we also have two states: unaware (initial) and aware. The new feature in this model is that one agent will only get infected once at least one neighbour became aware the previous time step. There is also\n",
"a probability of becoming aware by outside effects (innovation).\n",
"\n",
"This new feature can be explained intuitively, one agent will have more influence on another if the first just infected and wants to spread the new information he acquired.\n",
"\n",
"\n",
"* [1] W. Rand and U. Wilensky. An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo. MIT Press, 2015.\n",
"* [2] J. Goldenberg et al. “Talk of the Network: A Complex Systems Look at the Underlying Process of Word-of-Mouth”. In: Marketing Letters 12.3 (2001),\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copyright"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SOIL has been developed by the Intelligent Systems Group, Universidad Politécnica de Madrid, 2016-2017.\n",
"\n",
"@Copyright Universidad Politécnica de Madrid, 2016-2017\n",
" <img src=\"./logo_gsi.png\" alt=\"Grupo de Sistemas Inteligentes\" width=\"100px\">"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

test-requirements.txt (new, empty file)

tests/test.csv (new file)

@@ -0,0 +1,16 @@
agent_id,t_step,key,value,value_type
a0,0,hello,w,str
a0,1,hello,o,str
a0,2,hello,r,str
a0,3,hello,l,str
a0,4,hello,d,str
a0,5,hello,!,str
env,1,started,,bool
env,2,started,True,bool
env,7,started,,bool
a0,0,hello,w,str
a0,1,hello,o,str
a0,2,hello,r,str
a0,3,hello,l,str
a0,4,hello,d,str
a0,5,hello,!,str

Some files were not shown because too many files have changed in this diff.