Compare commits
2 Commits
0.13.4-fix
...
TFG-David
| Author | SHA1 | Date |
|---|---|---|
|  | c73503d9f6 |  |
|  | de67fe3e74 |  |
@@ -1,2 +0,0 @@
|
||||
**/soil_output
|
||||
.*
|
||||
@@ -1,25 +0,0 @@
|
||||
stages:
|
||||
- build
|
||||
- test
|
||||
|
||||
build:
|
||||
stage: build
|
||||
image:
|
||||
name: gcr.io/kaniko-project/executor:debug
|
||||
entrypoint: [""]
|
||||
tags:
|
||||
- docker
|
||||
script:
|
||||
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
|
||||
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
|
||||
only:
|
||||
- tags
|
||||
|
||||
|
||||
test:
|
||||
tags:
|
||||
- docker
|
||||
image: python:3.7
|
||||
stage: test
|
||||
script:
|
||||
- python setup.py test
|
||||
12
Dockerfile
@@ -1,12 +0,0 @@
|
||||
FROM python:3.7
|
||||
|
||||
WORKDIR /usr/src/app
|
||||
|
||||
COPY test-requirements.txt requirements.txt /usr/src/app/
|
||||
RUN pip install --no-cache-dir -r test-requirements.txt -r requirements.txt
|
||||
|
||||
COPY ./ /usr/src/app
|
||||
|
||||
RUN pip install '.[web]'
|
||||
|
||||
ENTRYPOINT ["python", "-m", "soil"]
|
||||
@@ -1,4 +0,0 @@
|
||||
include requirements.txt
|
||||
include test-requirements.txt
|
||||
include README.rst
|
||||
graft soil
|
||||
4
Makefile
@@ -1,4 +0,0 @@
|
||||
test:
|
||||
docker-compose exec dev python -m pytest -s -v
|
||||
|
||||
.PHONY: test
|
||||
32
README.md
Normal file → Executable file
@@ -1,34 +1,12 @@
|
||||
# [SOIL](https://github.com/gsi-upm/soil)
|
||||
#[Soil](https://github.com/gsi-upm/soil)
|
||||
|
||||
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
|
||||
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
|
||||
The purpose of Soil (SOcial network sImuLator) is to provide an Agent-based Social Simulator written in Python for Social Networks.
|
||||
|
||||
|
||||
To see quickly how to use Soil, you can follow this [tutorial](https://github.com/gsi-upm/soil/blob/master/soil_tutorial.ipynb).
|
||||
|
||||
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
|
||||
|
||||
If you use Soil in your research, don't forget to cite this paper:
|
||||
|
||||
```bibtex
|
||||
@inbook{soil-gsi-conference-2017,
|
||||
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
|
||||
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
|
||||
doi = "10.1007/978-3-319-59930-4_19",
|
||||
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
|
||||
isbn = "978-3-319-59929-8",
|
||||
keywords = "soil;social networks;agent based social simulation;python",
|
||||
month = "June",
|
||||
organization = "PAAMS 2017",
|
||||
pages = "234-245",
|
||||
publisher = "Springer Verlag",
|
||||
series = "LNAI",
|
||||
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
|
||||
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
|
||||
volume = "10349",
|
||||
year = "2017",
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
@Copyright GSI - Universidad Politécnica de Madrid 2017
|
||||
|
||||
[](https://www.gsi.dit.upm.es)
|
||||
|
||||
|
||||
BIN
TerroristModel.png
Normal file
|
After Width: | Height: | Size: 24 KiB |
BIN
TerroristModel_type.png
Normal file
|
After Width: | Height: | Size: 16 KiB |
BIN
clase_base.pyc
Executable file
@@ -1,12 +0,0 @@
|
||||
version: '3'
|
||||
services:
|
||||
dev:
|
||||
build: .
|
||||
environment:
|
||||
PYTHONDONTWRITEBYTECODE: 1
|
||||
volumes:
|
||||
- .:/usr/src/app
|
||||
tty: true
|
||||
entrypoint: /bin/bash
|
||||
ports:
|
||||
- '8001:8001'
|
||||
0
docs/Makefile
Normal file → Executable file
0
docs/conf.py
Normal file → Executable file
@@ -1,244 +0,0 @@
|
||||
Configuring a simulation
|
||||
------------------------
|
||||
|
||||
There are two ways to configure a simulation: programmatically and with a configuration file.
|
||||
In both cases, the parameters used are the same.
|
||||
The advantage of a configuration file is that it is a clean declarative description, and it makes it easier to reproduce.
|
||||
|
||||
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
|
||||
Here's an example (``example.yml``).
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
name: MyExampleSimulation
|
||||
max_time: 50
|
||||
num_trials: 3
|
||||
interval: 2
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 100
|
||||
m: 2
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: content
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
- agent_type: SISaModel
|
||||
weight: 8
|
||||
state:
|
||||
id: neutral
|
||||
environment_params:
|
||||
prob_infect: 0.075
|
||||
|
||||
|
||||
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
|
||||
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
|
||||
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
|
||||
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infect``.
|
||||
The state of the agents will be updated every 2 seconds (``interval``).
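As a rough illustration, the weights above normalise to exactly the fractions mentioned:

.. code:: python

   weights = [1, 1, 8]                  # content, discontent, neutral
   total = sum(weights)
   print([w / total for w in weights])  # [0.1, 0.1, 0.8]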
|
||||
|
||||
Now run the simulation with the command line tool:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
soil example.yml
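The same configuration can also be launched programmatically; a minimal sketch, using the ``run_from_config`` helper that the pubcrawl example below relies on:

.. code:: python

   from soil import simulation

   # Runs every simulation defined in the configuration file shown above
   simulation.run_from_config('example.yml')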
|
||||
|
||||
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
|
||||
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
|
||||
|
||||
|
||||
.. code::
|
||||
|
||||
soil_output
|
||||
└── MyExampleSimulation
|
||||
├── MyExampleSimulation.dumped.yml
|
||||
├── MyExampleSimulation.simulation.pickle
|
||||
├── MyExampleSimulation_trial_0.db.sqlite
|
||||
├── MyExampleSimulation_trial_1.db.sqlite
|
||||
└── MyExampleSimulation_trial_2.db.sqlite
|
||||
|
||||
|
||||
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
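For instance, mirroring the flags used in the quickstart section, both exports can be requested from the command line:

.. code:: bash

   soil --csv --graph example.yml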
|
||||
|
||||
Network
|
||||
=======
|
||||
|
||||
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
|
||||
|
||||
Loading a network
|
||||
#################
|
||||
|
||||
To load an existing network, specify its path in the configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
network_params:
|
||||
path: /tmp/mynetwork.gexf
|
||||
|
||||
Soil will try to guess what networkx method to use to read the file based on its extension.
|
||||
However, we only test using ``gexf`` files.
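If you want to inspect such a file before handing it to Soil, a minimal sketch using networkx directly (the path is the hypothetical one from the snippet above):

.. code:: python

   import networkx as nx

   # Load the same file Soil would read and check its size
   G = nx.read_gexf('/tmp/mynetwork.gexf')
   print(G.number_of_nodes(), G.number_of_edges())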
|
||||
|
||||
For simple networks, you may also include them in the configuration itself using the ``topology`` parameter, like so:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
topology:
|
||||
nodes:
|
||||
- id: First
|
||||
- id: Second
|
||||
links:
|
||||
- source: First
|
||||
target: Second
|
||||
|
||||
|
||||
Generating a random network
|
||||
###########################
|
||||
|
||||
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
|
||||
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_params:
|
||||
generator: complete_graph
|
||||
n: 100
|
||||
|
||||
Environment
|
||||
============
|
||||
The environment is the place where the shared state of the simulation is stored.
|
||||
For instance, the probability of disease outbreak.
|
||||
The configuration file may specify the initial value of the environment parameters:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
environment_params:
|
||||
daily_probability_of_earthquake: 0.001
|
||||
number_of_earthquakes: 0
|
||||
|
||||
All agents have access to the environment parameters.
|
||||
|
||||
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
|
||||
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
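A minimal sketch of such a custom environment, assuming a hypothetical ``lottery_prob`` environment parameter (the subclassing pattern mirrors the ``CityPubs`` environment in the pubcrawl example below):

.. code:: python

   from random import random
   from soil import Environment

   class LotteryEnvironment(Environment):
       # Hypothetical helper: the environment decides whether an agent
       # wins, instead of leaving that logic to the agent itself.
       def wins_lottery(self, agent):
           return random() < self['lottery_prob']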
|
||||
|
||||
|
||||
Agents
|
||||
======
|
||||
Agents are a way of modelling behavior.
|
||||
Agents can be characterized with two variables: agent type (``agent_type``) and state.
|
||||
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
|
||||
Through the environment, it can access the network topology and the state of other agents.
|
||||
|
||||
There are two types of agents, according to how they are added to the simulation: network agents and environment agents.
|
||||
|
||||
Network Agents
|
||||
##############
|
||||
Network agents are attached to a node in the topology.
|
||||
The configuration file allows you to specify how agents will be mapped to topology nodes.
|
||||
|
||||
The simplest way is to specify a single type of agent.
|
||||
Hence, every node in the network will be associated with an agent of that type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
|
||||
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
|
||||
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
- agent_type: CounterModel
|
||||
weight: 5
|
||||
|
||||
The third option is to specify the type of agent on the node itself, e.g.:
|
||||
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
topology:
|
||||
nodes:
|
||||
- id: first
|
||||
agent_type: BaseAgent
|
||||
states:
|
||||
first:
|
||||
agent_type: SISaModel
|
||||
|
||||
|
||||
This would also work with a randomly generated network:
|
||||
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network:
|
||||
generator: complete
|
||||
n: 5
|
||||
agent_type: BaseAgent
|
||||
states:
|
||||
- agent_type: SISaModel
|
||||
|
||||
|
||||
|
||||
In addition to agent type, you may add a custom initial state to the distribution.
|
||||
This is very useful to add the same agent type with different states.
|
||||
For example, to populate the network with SISaModel agents, roughly 10% of them starting in the discontent state:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 9
|
||||
state:
|
||||
id: neutral
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
|
||||
Lastly, the configuration may include initial state for one or more nodes.
|
||||
For instance, to add a state for the two nodes in this configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
network:
|
||||
generator: complete_graph
|
||||
n: 2
|
||||
states:
|
||||
- id: content
|
||||
- id: discontent
|
||||
|
||||
|
||||
Or to add state only to specific nodes (by ``id``).
|
||||
For example, to apply special skills to Linus Torvalds in a simulation:
|
||||
|
||||
.. literalinclude:: ../examples/torvalds.yml
|
||||
:language: yaml
|
||||
|
||||
|
||||
Environment Agents
|
||||
##################
|
||||
In addition to network agents, more agents can be added to the simulation.
|
||||
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
|
||||
|
||||
|
||||
.. code::
|
||||
|
||||
environment_agents:
|
||||
- agent_type: MyAgent
|
||||
state:
|
||||
mood: happy
|
||||
- agent_type: DummyAgent
|
||||
|
||||
|
||||
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
|
||||
They are also useful to add behavior that has little to do with the network and the interactions within that network.
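A minimal sketch of such an agent, reusing the ``daily_probability_of_earthquake`` and ``number_of_earthquakes`` parameters from the environment example above (the pattern mirrors the ``RandomAccident`` agent in the rabbits example below):

.. code:: python

   from random import random
   from soil.agents import BaseAgent

   class Earthquake(BaseAgent):
       # Hypothetical environment agent: with a small probability,
       # record a new earthquake in the shared environment state.
       def step(self):
           if random() < self.env['daily_probability_of_earthquake']:
               self.env['number_of_earthquakes'] += 1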
|
||||
35
docs/index.rst
Normal file → Executable file
@@ -6,43 +6,16 @@
|
||||
Welcome to Soil's documentation!
|
||||
================================
|
||||
|
||||
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
|
||||
|
||||
If you use Soil in your research, do not forget to cite this paper:
|
||||
|
||||
.. code:: bibtex
|
||||
|
||||
@inbook{soil-gsi-conference-2017,
|
||||
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
|
||||
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
|
||||
doi = "10.1007/978-3-319-59930-4_19",
|
||||
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
|
||||
isbn = "978-3-319-59929-8",
|
||||
keywords = "soil;social networks;agent based social simulation;python",
|
||||
month = "June",
|
||||
organization = "PAAMS 2017",
|
||||
pages = "234-245",
|
||||
publisher = "Springer Verlag",
|
||||
series = "LNAI",
|
||||
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
|
||||
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
|
||||
volume = "10349",
|
||||
year = "2017",
|
||||
}
|
||||
|
||||
|
||||
|
||||
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 0
|
||||
:maxdepth: 2
|
||||
:caption: Learn more about soil:
|
||||
|
||||
installation
|
||||
quickstart
|
||||
configuration
|
||||
Tutorial <soil_tutorial>
|
||||
usage
|
||||
models
|
||||
|
||||
..
|
||||
|
||||
|
||||
.. Indices and tables
|
||||
|
||||
21
docs/installation.rst
Normal file → Executable file
@@ -1,24 +1,7 @@
|
||||
Installation
|
||||
------------
|
||||
|
||||
The easiest way to install Soil is through pip, with Python >= 3.4:
|
||||
The latest version can be installed through GitLab.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
pip install soil
|
||||
|
||||
|
||||
Now test that it worked by running the command line tool
|
||||
|
||||
.. code:: bash
|
||||
|
||||
soil --help
|
||||
|
||||
Or using soil programmatically:
|
||||
|
||||
.. code:: python
|
||||
|
||||
import soil
|
||||
print(soil.__version__)
|
||||
|
||||
The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.
|
||||
git clone https://lab.cluster.gsi.dit.upm.es/soil/soil.git
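For development, the cloned copy can then be installed in editable mode (a common pip workflow; the editable install is an assumption, not part of the documented steps):

.. code:: bash

   git clone https://lab.cluster.gsi.dit.upm.es/soil/soil.git
   cd soil
   pip install -e .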
|
||||
0
docs/make.bat
Normal file → Executable file
112
docs/models.rst
Executable file
@@ -0,0 +1,112 @@
|
||||
Developing new models
|
||||
---------------------
|
||||
This document describes how to develop a new analysis model.
|
||||
|
||||
What is a model?
|
||||
================
|
||||
|
||||
A model defines the behaviour of the agents with a view to assessing their effects on the system as a whole.
|
||||
In practice, a model consists of at least two parts:
|
||||
|
||||
* Python module: the actual code that describes the behaviour.
|
||||
* Settings: the variables of the model, set up in the Settings JSON file.
|
||||
|
||||
This separation allows us to run the simulation with different agents.
|
||||
|
||||
Models Code
|
||||
===========
|
||||
|
||||
All the models are imported into the main file. The initialization looks like this:
|
||||
|
||||
.. code:: python
|
||||
|
||||
import settings
|
||||
|
||||
networkStatus = {} # Dict that will contain the status of every agent in the network
|
||||
|
||||
sentimentCorrelationNodeArray = []
|
||||
for x in range(0, settings.network_params["number_of_nodes"]):
|
||||
sentimentCorrelationNodeArray.append({'id': x})
|
||||
# Initialize agent states. Let's assume everyone is normal.
|
||||
init_states = [{'id': 0, } for _ in range(settings.network_params["number_of_nodes"])]
|
||||
# add keys as necessary, but "id" must always refer to that state category
|
||||
|
||||
A new model has to inherit from the BaseBehaviour class, which is in the same module.
|
||||
There are two basic methods (see the skeleton sketched after this list):
|
||||
|
||||
* __init__
|
||||
* step: used to define the behaviour over time.
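A minimal skeleton, assuming the same constructor signature used by the ``SentimentCorrelationModel`` example further down:

.. code:: python

   from ..BaseBehaviour import *


   class MyModel(BaseBehaviour):

       def __init__(self, environment=None, agent_id=0, state=()):
           super().__init__(environment=environment, agent_id=agent_id, state=state)
           # read model parameters from environment.environment_params here

       def step(self, now):
           # the behaviour over time goes here
           super().step(now)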
|
||||
|
||||
Variable Initialization
|
||||
=======================
|
||||
|
||||
The different parameters of the model have to be initialized in the Simulation Settings JSON file, which will be
|
||||
passed as a parameter to the simulation.
|
||||
|
||||
.. code:: json
|
||||
|
||||
{
|
||||
"agent": ["SISaModel","ControlModelM2"],
|
||||
|
||||
"neutral_discontent_spon_prob": 0.04,
|
||||
"neutral_discontent_infected_prob": 0.04,
|
||||
"neutral_content_spon_prob": 0.18,
|
||||
"neutral_content_infected_prob": 0.02,
|
||||
|
||||
"discontent_neutral": 0.13,
|
||||
"discontent_content": 0.07,
|
||||
"variance_d_c": 0.02,
|
||||
|
||||
"content_discontent": 0.009,
|
||||
"variance_c_d": 0.003,
|
||||
"content_neutral": 0.088,
|
||||
|
||||
"standard_variance": 0.055,
|
||||
|
||||
|
||||
"prob_neutral_making_denier": 0.035,
|
||||
|
||||
"prob_infect": 0.075,
|
||||
|
||||
"prob_cured_healing_infected": 0.035,
|
||||
"prob_cured_vaccinate_neutral": 0.035,
|
||||
|
||||
"prob_vaccinated_healing_infected": 0.035,
|
||||
"prob_vaccinated_vaccinate_neutral": 0.035,
|
||||
"prob_generate_anti_rumor": 0.035
|
||||
}
|
||||
|
||||
In this file you will also define the models you are going to simulate. You can simulate as many models as you want.
|
||||
The simulation returns one result for each model, executing each model separately. For the usage, see :doc:`usage`.
|
||||
|
||||
Example Model
|
||||
=============
|
||||
|
||||
In this section, we will implement a Sentiment Correlation Model.
|
||||
|
||||
The class would look like this:
|
||||
|
||||
.. code:: python
|
||||
|
||||
from ..BaseBehaviour import *
|
||||
from .. import sentimentCorrelationNodeArray
|
||||
|
||||
class SentimentCorrelationModel(BaseBehaviour):
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
|
||||
self.anger_prob = environment.environment_params['anger_prob']
|
||||
self.joy_prob = environment.environment_params['joy_prob']
|
||||
self.sadness_prob = environment.environment_params['sadness_prob']
|
||||
self.disgust_prob = environment.environment_params['disgust_prob']
|
||||
self.time_awareness = []
|
||||
for i in range(4): # In this model we have 4 sentiments
|
||||
self.time_awareness.append(0) # 0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
|
||||
sentimentCorrelationNodeArray[self.id][self.env.now] = 0
|
||||
|
||||
def step(self, now):
|
||||
self.behaviour() # Method which define the behaviour
|
||||
super().step(now)
|
||||
|
||||
The variables will be modified by the user, so you have to include them in the Simulation Settings JSON file.
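For this example, that means adding the probabilities the constructor reads; the values below are placeholders:

.. code:: json

   {
     "outside_effects_prob": 0.2,
     "anger_prob": 0.06,
     "joy_prob": 0.05,
     "sadness_prob": 0.02,
     "disgust_prob": 0.02
   }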
|
||||
@@ -1,93 +0,0 @@
|
||||
Quickstart
|
||||
----------
|
||||
|
||||
This section shows how to run your first simulation with Soil.
|
||||
For installation instructions, see :doc:`installation`.
|
||||
|
||||
There are mainly two parts in a simulation: agent classes and simulation configuration.
|
||||
An agent class defines how the agent will behave throughout the simulation.
|
||||
The configuration includes things such as number of agents to use and their type, network topology to use, etc.
|
||||
|
||||
|
||||
.. image:: soil.png
|
||||
:width: 80%
|
||||
:align: center
|
||||
|
||||
|
||||
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
|
||||
If you are interested in developing your own agents classes, see :doc:`soil_tutorial`.
|
||||
|
||||
Configuration
|
||||
=============
|
||||
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):
|
||||
|
||||
.. literalinclude:: quickstart.yml
|
||||
:language: yaml
|
||||
|
||||
The agent type used, SISa, is a very simple model.
|
||||
It only has three states (neutral, content and discontent).
|
||||
Its parameters are the probabilities to change from one state to another, either spontaneously or because of contagion from neighboring agents.
|
||||
|
||||
Running the simulation
|
||||
======================
|
||||
|
||||
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
|
||||
|
||||
.. code::
|
||||
|
||||
❯ soil --graph --csv quickstart.yml [13:35:29]
|
||||
INFO:soil:Using config(s): quickstart
|
||||
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
|
||||
INFO:soil:Starting simulation quickstart at 13:35:30.
|
||||
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
|
||||
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
|
||||
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
|
||||
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
|
||||
INFO:soil:Dumping results to soil_output/quickstart
|
||||
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
|
||||
|
||||
|
||||
The ``CSV`` file should look like this:
|
||||
|
||||
.. code::
|
||||
|
||||
agent_id,t_step,key,value
|
||||
env,0,neutral_discontent_spon_prob,0.05
|
||||
env,0,neutral_discontent_infected_prob,0.1
|
||||
env,0,neutral_content_spon_prob,0.2
|
||||
env,0,neutral_content_infected_prob,0.4
|
||||
env,0,discontent_neutral,0.2
|
||||
env,0,discontent_content,0.05
|
||||
env,0,content_discontent,0.05
|
||||
env,0,variance_d_c,0.05
|
||||
env,0,variance_c_d,0.1
|
||||
|
||||
Results and visualization
|
||||
=========================
|
||||
|
||||
The environment variables are marked with ``env`` as their ``agent_id``.
|
||||
The exported values are only stored when they change.
|
||||
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
|
||||
|
||||
The dynamic graph is exported as a ``.gexf`` file, which can be visualized with
|
||||
`Gephi <https://gephi.org/users/download/>`__.
|
||||
Now it is your turn to experiment with the simulation.
|
||||
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
|
||||
|
||||
|
||||
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
|
||||
To make it work, you have to install soil like this:
|
||||
|
||||
.. code::
|
||||
|
||||
pip install soil[web]
|
||||
|
||||
Once installed, the soil web UI can be run in two ways:
|
||||
|
||||
.. code::
|
||||
|
||||
soil-web
|
||||
|
||||
# OR
|
||||
|
||||
python -m soil.web
|
||||
@@ -1,30 +0,0 @@
|
||||
---
|
||||
name: quickstart
|
||||
num_trials: 1
|
||||
max_time: 1000
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
state:
|
||||
id: neutral
|
||||
weight: 1
|
||||
- agent_type: SISaModel
|
||||
state:
|
||||
id: content
|
||||
weight: 2
|
||||
network_params:
|
||||
n: 100
|
||||
k: 5
|
||||
p: 0.2
|
||||
generator: newman_watts_strogatz_graph
|
||||
environment_params:
|
||||
neutral_discontent_spon_prob: 0.05
|
||||
neutral_discontent_infected_prob: 0.1
|
||||
neutral_content_spon_prob: 0.2
|
||||
neutral_content_infected_prob: 0.4
|
||||
discontent_neutral: 0.2
|
||||
discontent_content: 0.05
|
||||
content_discontent: 0.05
|
||||
variance_d_c: 0.05
|
||||
variance_c_d: 0.1
|
||||
content_neutral: 0.1
|
||||
standard_variance: 0.1
|
||||
BIN
docs/soil.png
|
Before Width: | Height: | Size: 43 KiB |
99
docs/usage.rst
Executable file
@@ -0,0 +1,99 @@
|
||||
Usage
|
||||
-----
|
||||
|
||||
First of all, you need to install the package. See :doc:`installation` for installation instructions.
|
||||
|
||||
Simulation Settings
|
||||
===================
|
||||
|
||||
Once installed, before running a simulation, you need to configure it.
|
||||
|
||||
* In the Settings JSON file you will find the configuration of the network.
|
||||
|
||||
.. code:: json
|
||||
|
||||
{
|
||||
"network_type": 1,
|
||||
"number_of_nodes": 1000,
|
||||
"max_time": 50,
|
||||
"num_trials": 1,
|
||||
"timeout": 2
|
||||
}
|
||||
|
||||
* In the Settings JSON file, you will also find the configuration of the models.
|
||||
|
||||
Network Types
|
||||
=============
|
||||
|
||||
There are three types of networks implemented, but you can add more.
|
||||
|
||||
.. code:: python
|
||||
|
||||
if settings.network_type == 0:
|
||||
G = nx.complete_graph(settings.number_of_nodes)
|
||||
if settings.network_type == 1:
|
||||
G = nx.barabasi_albert_graph(settings.number_of_nodes, 10)
|
||||
if settings.network_type == 2:
|
||||
G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
|
||||
# More types of networks can be added here
|
||||
|
||||
Models Settings
|
||||
===============
|
||||
|
||||
After having configured the simulation, the next step is setting up the variables of the models.
|
||||
For this, you will need to modify the Settings JSON file again.
|
||||
|
||||
.. code:: json
|
||||
|
||||
{
|
||||
"agent": ["SISaModel","ControlModelM2"],
|
||||
|
||||
"neutral_discontent_spon_prob": 0.04,
|
||||
"neutral_discontent_infected_prob": 0.04,
|
||||
"neutral_content_spon_prob": 0.18,
|
||||
"neutral_content_infected_prob": 0.02,
|
||||
|
||||
"discontent_neutral": 0.13,
|
||||
"discontent_content": 0.07,
|
||||
"variance_d_c": 0.02,
|
||||
|
||||
"content_discontent": 0.009,
|
||||
"variance_c_d": 0.003,
|
||||
"content_neutral": 0.088,
|
||||
|
||||
"standard_variance": 0.055,
|
||||
|
||||
|
||||
"prob_neutral_making_denier": 0.035,
|
||||
|
||||
"prob_infect": 0.075,
|
||||
|
||||
"prob_cured_healing_infected": 0.035,
|
||||
"prob_cured_vaccinate_neutral": 0.035,
|
||||
|
||||
"prob_vaccinated_healing_infected": 0.035,
|
||||
"prob_vaccinated_vaccinate_neutral": 0.035,
|
||||
"prob_generate_anti_rumor": 0.035
|
||||
}
|
||||
|
||||
In this file you will define the different models you are going to simulate. You can simulate as many models
|
||||
as you want. Each model will be simulated separately.
|
||||
|
||||
After setting up the models, you have to initialize the parameters of each one. You will find the parameters needed
|
||||
in the documentation of each model.
|
||||
|
||||
Parameter validation will fail if a required parameter without a default has not been provided.
|
||||
|
||||
Running the Simulation
|
||||
======================
|
||||
|
||||
After setting all the configuration, you will be able to run the simulation. All you need to do is execute:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
python3 soil.py
|
||||
|
||||
The simulation will return a dynamic graph .gexf file which could be visualized with
|
||||
`Gephi <https://gephi.org/users/download/>`__.
|
||||
|
||||
It will also return one .png picture for each model simulated.
|
||||
80808
examples/Untitled.ipynb
@@ -1,28 +0,0 @@
|
||||
---
|
||||
name: simple
|
||||
dir_path: "/tmp/"
|
||||
num_trials: 3
|
||||
dry_run: True
|
||||
max_time: 100
|
||||
interval: 1
|
||||
seed: "CompleteSeed!"
|
||||
dump: false
|
||||
network_params:
|
||||
generator: complete_graph
|
||||
n: 10
|
||||
network_agents:
|
||||
- agent_type: CounterModel
|
||||
weight: 1
|
||||
state:
|
||||
id: 0
|
||||
- agent_type: AggregatedCounter
|
||||
weight: 0.2
|
||||
environment_agents: []
|
||||
environment_class: Environment
|
||||
environment_params:
|
||||
am_i_complete: true
|
||||
default_state:
|
||||
incidents: 0
|
||||
states:
|
||||
- name: 'The first node'
|
||||
- name: 'The second node'
|
||||
@@ -1,138 +0,0 @@
|
||||
---
|
||||
default_state: {}
|
||||
load_module: newsspread
|
||||
environment_agents: []
|
||||
environment_params:
|
||||
prob_neighbor_spread: 0.0
|
||||
prob_tv_spread: 0.01
|
||||
interval: 1
|
||||
max_time: 30
|
||||
name: Sim_all_dumb
|
||||
network_agents:
|
||||
- agent_type: DumbViewer
|
||||
state:
|
||||
has_tv: false
|
||||
weight: 1
|
||||
- agent_type: DumbViewer
|
||||
state:
|
||||
has_tv: true
|
||||
weight: 1
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
num_trials: 50
|
||||
---
|
||||
default_state: {}
|
||||
load_module: newsspread
|
||||
environment_agents: []
|
||||
environment_params:
|
||||
prob_neighbor_spread: 0.0
|
||||
prob_tv_spread: 0.01
|
||||
interval: 1
|
||||
max_time: 30
|
||||
name: Sim_half_herd
|
||||
network_agents:
|
||||
- agent_type: DumbViewer
|
||||
state:
|
||||
has_tv: false
|
||||
weight: 1
|
||||
- agent_type: DumbViewer
|
||||
state:
|
||||
has_tv: true
|
||||
weight: 1
|
||||
- agent_type: HerdViewer
|
||||
state:
|
||||
has_tv: false
|
||||
weight: 1
|
||||
- agent_type: HerdViewer
|
||||
state:
|
||||
has_tv: true
|
||||
weight: 1
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
num_trials: 50
|
||||
---
|
||||
default_state: {}
|
||||
load_module: newsspread
|
||||
environment_agents: []
|
||||
environment_params:
|
||||
prob_neighbor_spread: 0.0
|
||||
prob_tv_spread: 0.01
|
||||
interval: 1
|
||||
max_time: 30
|
||||
name: Sim_all_herd
|
||||
network_agents:
|
||||
- agent_type: HerdViewer
|
||||
state:
|
||||
has_tv: true
|
||||
id: neutral
|
||||
weight: 1
|
||||
- agent_type: HerdViewer
|
||||
state:
|
||||
has_tv: true
|
||||
id: neutral
|
||||
weight: 1
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
num_trials: 50
|
||||
---
|
||||
default_state: {}
|
||||
load_module: newsspread
|
||||
environment_agents: []
|
||||
environment_params:
|
||||
prob_neighbor_spread: 0.0
|
||||
prob_tv_spread: 0.01
|
||||
prob_neighbor_cure: 0.1
|
||||
interval: 1
|
||||
max_time: 30
|
||||
name: Sim_wise_herd
|
||||
network_agents:
|
||||
- agent_type: HerdViewer
|
||||
state:
|
||||
has_tv: true
|
||||
id: neutral
|
||||
weight: 1
|
||||
- agent_type: WiseViewer
|
||||
state:
|
||||
has_tv: true
|
||||
weight: 1
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
num_trials: 50
|
||||
---
|
||||
default_state: {}
|
||||
load_module: newsspread
|
||||
environment_agents: []
|
||||
environment_params:
|
||||
prob_neighbor_spread: 0.0
|
||||
prob_tv_spread: 0.01
|
||||
prob_neighbor_cure: 0.1
|
||||
interval: 1
|
||||
max_time: 30
|
||||
name: Sim_all_wise
|
||||
network_agents:
|
||||
- agent_type: WiseViewer
|
||||
state:
|
||||
has_tv: true
|
||||
id: neutral
|
||||
weight: 1
|
||||
- agent_type: WiseViewer
|
||||
state:
|
||||
has_tv: true
|
||||
weight: 1
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 500
|
||||
m: 5
|
||||
num_trials: 50
|
||||
@@ -1,81 +0,0 @@
|
||||
from soil.agents import FSM, state, default_state, prob
|
||||
import logging
|
||||
|
||||
|
||||
class DumbViewer(FSM):
|
||||
'''
|
||||
A viewer that gets infected via TV (if it has one) and tries to infect
|
||||
its neighbors once it's infected.
|
||||
'''
|
||||
defaults = {
|
||||
'prob_neighbor_spread': 0.5,
|
||||
'prob_tv_spread': 0.1,
|
||||
}
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def neutral(self):
|
||||
if self['has_tv']:
|
||||
if prob(self.env['prob_tv_spread']):
|
||||
self.set_state(self.infected)
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
|
||||
if prob(self.env['prob_neighbor_spread']):
|
||||
neighbor.infect()
|
||||
|
||||
def infect(self):
|
||||
self.set_state(self.infected)
|
||||
|
||||
|
||||
class HerdViewer(DumbViewer):
|
||||
'''
|
||||
A viewer whose probability of infection depends on the state of its neighbors.
|
||||
'''
|
||||
|
||||
level = logging.DEBUG
|
||||
|
||||
def infect(self):
|
||||
infected = self.count_neighboring_agents(state_id=self.infected.id)
|
||||
total = self.count_neighboring_agents()
|
||||
prob_infect = self.env['prob_neighbor_spread'] * infected/total
|
||||
self.debug('prob_infect', prob_infect)
|
||||
if prob(prob_infect):
|
||||
self.set_state(self.infected.id)
|
||||
|
||||
|
||||
class WiseViewer(HerdViewer):
|
||||
'''
|
||||
A viewer that can change its mind.
|
||||
'''
|
||||
|
||||
defaults = {
|
||||
'prob_neighbor_spread': 0.5,
|
||||
'prob_neighbor_cure': 0.25,
|
||||
'prob_tv_spread': 0.1,
|
||||
}
|
||||
|
||||
@state
|
||||
def cured(self):
|
||||
prob_cure = self.env['prob_neighbor_cure']
|
||||
for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
|
||||
if prob(prob_cure):
|
||||
try:
|
||||
neighbor.cure()
|
||||
except AttributeError:
|
||||
self.debug('Viewer {} cannot be cured'.format(neighbor.id))
|
||||
|
||||
def cure(self):
|
||||
self.set_state(self.cured.id)
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
cured = max(self.count_neighboring_agents(self.cured.id),
|
||||
1.0)
|
||||
infected = max(self.count_neighboring_agents(self.infected.id),
|
||||
1.0)
|
||||
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
|
||||
if prob(prob_cure):
|
||||
return self.cure()
|
||||
return self.set_state(super().infected)
|
||||
@@ -1,10 +0,0 @@
|
||||
Simulation of pubs and drinking pals that go from pub to pub.
|
||||
|
||||
The custom environment includes a list of pubs and methods to allow agents to discover and enter pubs.
|
||||
There are two types of agents:
|
||||
|
||||
* Patron. A patron will do three things, in this order:
|
||||
* Look for other patrons to drink with
|
||||
* Look for a pub where the agent and other agents in the same group can get in.
|
||||
* While in the pub, patrons only drink, until they get drunk and taken home.
|
||||
* Police. There is only one police agent that will take any drunk patrons home (kick them out of the pub).
|
||||
@@ -1,174 +0,0 @@
|
||||
from soil.agents import FSM, state, default_state
|
||||
from soil import Environment
|
||||
from random import random, shuffle
|
||||
from itertools import islice
|
||||
import logging
|
||||
|
||||
|
||||
class CityPubs(Environment):
|
||||
'''Environment with Pubs'''
|
||||
level = logging.INFO
|
||||
|
||||
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
|
||||
super(CityPubs, self).__init__(*args, **kwargs)
|
||||
pubs = {}
|
||||
for i in range(number_of_pubs):
|
||||
newpub = {
|
||||
'name': 'The awesome pub #{}'.format(i),
|
||||
'open': True,
|
||||
'capacity': pub_capacity,
|
||||
'occupancy': 0,
|
||||
}
|
||||
pubs[newpub['name']] = newpub
|
||||
self['pubs'] = pubs
|
||||
|
||||
def enter(self, pub_id, *nodes):
|
||||
'''Agents will try to enter. The pub checks if it is possible'''
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
|
||||
return False
|
||||
pub['occupancy'] += len(nodes)
|
||||
for node in nodes:
|
||||
node['pub'] = pub_id
|
||||
return True
|
||||
|
||||
def available_pubs(self):
|
||||
for pub in self['pubs'].values():
|
||||
if pub['open'] and (pub['occupancy'] < pub['capacity']):
|
||||
yield pub['name']
|
||||
|
||||
def exit(self, pub_id, *node_ids):
|
||||
'''Agents will notify the pub they want to leave'''
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
for node_id in node_ids:
|
||||
node = self.get_agent(node_id)
|
||||
if pub_id == node['pub']:
|
||||
del node['pub']
|
||||
pub['occupancy'] -= 1
|
||||
|
||||
|
||||
class Patron(FSM):
|
||||
'''Agent that looks for friends to drink with. It will do three things:
|
||||
1) Look for other patrons to drink with
|
||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||
'''
|
||||
level = logging.INFO
|
||||
|
||||
defaults = {
|
||||
'pub': None,
|
||||
'drunk': False,
|
||||
'pints': 0,
|
||||
'max_pints': 3,
|
||||
}
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def looking_for_friends(self):
|
||||
'''Look for friends to drink with'''
|
||||
self.info('I am looking for friends')
|
||||
available_friends = list(self.get_agents(drunk=False,
|
||||
pub=None,
|
||||
state_id=self.looking_for_friends.id))
|
||||
if not available_friends:
|
||||
self.info('Life sucks and I\'m alone!')
|
||||
return self.at_home
|
||||
befriended = self.try_friends(available_friends)
|
||||
if befriended:
|
||||
return self.looking_for_pub
|
||||
|
||||
@state
|
||||
def looking_for_pub(self):
|
||||
'''Look for a pub that accepts me and my friends'''
|
||||
if self['pub'] != None:
|
||||
return self.sober_in_pub
|
||||
self.debug('I am looking for a pub')
|
||||
group = list(self.get_neighboring_agents())
|
||||
for pub in self.env.available_pubs():
|
||||
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
|
||||
if self.env.enter(pub, self, *group):
|
||||
self.info('We\'re all {} getting in {}!'.format(len(group), pub))
|
||||
return self.sober_in_pub
|
||||
|
||||
@state
|
||||
def sober_in_pub(self):
|
||||
'''Drink up.'''
|
||||
self.drink()
|
||||
if self['pints'] > self['max_pints']:
|
||||
return self.drunk_in_pub
|
||||
|
||||
@state
|
||||
def drunk_in_pub(self):
|
||||
'''I'm out. Take me home!'''
|
||||
self.info('I\'m so drunk. Take me home!')
|
||||
self['drunk'] = True
|
||||
pass # out drunk
|
||||
|
||||
@state
|
||||
def at_home(self):
|
||||
'''The end'''
|
||||
self.debug('Life sucks. I\'m home!')
|
||||
|
||||
def drink(self):
|
||||
self['pints'] += 1
|
||||
self.debug('Cheers to that')
|
||||
|
||||
def kick_out(self):
|
||||
self.set_state(self.at_home)
|
||||
|
||||
def befriend(self, other_agent, force=False):
|
||||
'''
|
||||
Try to become friends with another agent. The chances of
|
||||
success depend on both agents' openness.
|
||||
'''
|
||||
if force or self['openness'] > random():
|
||||
self.env.add_edge(self, other_agent)
|
||||
self.info('Made some friend {}'.format(other_agent))
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_friends(self, others):
|
||||
''' Look for random agents around me and try to befriend them'''
|
||||
befriended = False
|
||||
k = int(10*self['openness'])
|
||||
shuffle(others)
|
||||
for friend in islice(others, k): # random.choice >= 3.7
|
||||
if friend == self:
|
||||
continue
|
||||
if friend.befriend(self):
|
||||
self.befriend(friend, force=True)
|
||||
self.debug('Hooray! new friend: {}'.format(friend.id))
|
||||
befriended = True
|
||||
else:
|
||||
self.debug('{} does not want to be friends'.format(friend.id))
|
||||
return befriended
|
||||
|
||||
|
||||
class Police(FSM):
|
||||
'''Simple agent to take drunk people out of pubs.'''
|
||||
level = logging.INFO
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def patrol(self):
|
||||
drunksters = list(self.get_agents(drunk=True,
|
||||
state_id=Patron.drunk_in_pub.id))
|
||||
for drunk in drunksters:
|
||||
self.info('Kicking out the trash: {}'.format(drunk.id))
|
||||
drunk.kick_out()
|
||||
else:
|
||||
self.info('No trash to take out. Too bad.')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
from soil import simulation
|
||||
simulation.run_from_config('pubcrawl.yml',
|
||||
dry_run=True,
|
||||
dump=None,
|
||||
parallel=False)
|
||||
@@ -1,26 +0,0 @@
|
||||
---
|
||||
name: pubcrawl
|
||||
num_trials: 3
|
||||
max_time: 10
|
||||
dump: false
|
||||
network_params:
|
||||
# Generate 100 empty nodes. They will be assigned a network agent
|
||||
generator: empty_graph
|
||||
n: 30
|
||||
network_agents:
|
||||
- agent_type: pubcrawl.Patron
|
||||
description: Extroverted patron
|
||||
state:
|
||||
openness: 1.0
|
||||
weight: 9
|
||||
- agent_type: pubcrawl.Patron
|
||||
description: Introverted patron
|
||||
state:
|
||||
openness: 0.1
|
||||
weight: 1
|
||||
environment_agents:
|
||||
- agent_type: pubcrawl.Police
|
||||
environment_class: pubcrawl.CityPubs
|
||||
environment_params:
|
||||
altercations: 0
|
||||
number_of_pubs: 3
|
||||
@@ -1,120 +0,0 @@
|
||||
from soil.agents import FSM, state, default_state, BaseAgent
|
||||
from enum import Enum
|
||||
from random import random, choice
|
||||
from itertools import islice
|
||||
import logging
|
||||
import math
|
||||
|
||||
|
||||
class Genders(Enum):
|
||||
male = 'male'
|
||||
female = 'female'
|
||||
|
||||
|
||||
class RabbitModel(FSM):
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
defaults = {
|
||||
'age': 0,
|
||||
'gender': Genders.male.value,
|
||||
'mating_prob': 0.001,
|
||||
'offspring': 0,
|
||||
}
|
||||
|
||||
sexual_maturity = 4*30
|
||||
life_expectancy = 365 * 3
|
||||
gestation = 33
|
||||
pregnancy = -1
|
||||
max_females = 5
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def newborn(self):
|
||||
self['age'] += 1
|
||||
|
||||
if self['age'] >= self.sexual_maturity:
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
self['age'] += 1
|
||||
if self['age'] > self.life_expectancy:
|
||||
return self.dead
|
||||
|
||||
if self['gender'] == Genders.female.value:
|
||||
return
|
||||
|
||||
# Males try to mate
|
||||
females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
|
||||
for f in islice(females, self.max_females):
|
||||
r = random()
|
||||
if r < self['mating_prob']:
|
||||
self.impregnate(f)
|
||||
break # Take a break
|
||||
|
||||
def impregnate(self, whom):
|
||||
if self['gender'] == Genders.female.value:
|
||||
raise NotImplementedError('Females cannot impregnate')
|
||||
whom['pregnancy'] = 0
|
||||
whom['mate'] = self.id
|
||||
whom.set_state(whom.pregnant)
|
||||
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self['age'] += 1
|
||||
if self['age'] > self.life_expectancy:
|
||||
return self.dead
|
||||
|
||||
self['pregnancy'] += 1
|
||||
self.debug('Pregnancy: {}'.format(self['pregnancy']))
|
||||
if self['pregnancy'] >= self.gestation:
|
||||
number_of_babies = int(8+4*random())
|
||||
self.info('Having {} babies'.format(number_of_babies))
|
||||
for i in range(number_of_babies):
|
||||
state = {}
|
||||
state['gender'] = choice(list(Genders)).value
|
||||
child = self.env.add_node(self.__class__, state)
|
||||
self.env.add_edge(self.id, child.id)
|
||||
self.env.add_edge(self['mate'], child.id)
|
||||
# self.add_edge()
|
||||
self.debug('A BABY IS COMING TO LIFE')
|
||||
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.global_topology.number_of_nodes())+1
|
||||
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
|
||||
self['offspring'] += 1
|
||||
self.env.get_agent(self['mate'])['offspring'] += 1
|
||||
del self['mate']
|
||||
self['pregnancy'] = -1
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
self.info('Agent {} is dying'.format(self.id))
|
||||
if 'pregnancy' in self and self['pregnancy'] > -1:
|
||||
self.info('A mother has died carrying a baby!!')
|
||||
self.die()
|
||||
return
|
||||
|
||||
|
||||
class RandomAccident(BaseAgent):
|
||||
|
||||
level = logging.DEBUG
|
||||
|
||||
def step(self):
|
||||
rabbits_total = self.global_topology.number_of_nodes()
|
||||
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
|
||||
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
|
||||
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
|
||||
for i in self.env.network_agents:
|
||||
if i.state['id'] == i.dead.id:
|
||||
continue
|
||||
r = random()
|
||||
if r < prob_death:
|
||||
self.debug('I killed a rabbit: {}'.format(i.id))
|
||||
rabbits_alive = self.env['rabbits_alive'] = rabbits_alive -1
|
||||
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
|
||||
i.set_state(i.dead)
|
||||
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
|
||||
if self.count_agents(state_id=RabbitModel.dead.id) == self.global_topology.number_of_nodes():
|
||||
self.die()
|
||||
@@ -1,23 +0,0 @@
|
||||
---
|
||||
load_module: rabbit_agents
|
||||
name: rabbits_example
|
||||
max_time: 500
|
||||
interval: 1
|
||||
seed: MySeed
|
||||
agent_type: RabbitModel
|
||||
environment_agents:
|
||||
- agent_type: RandomAccident
|
||||
environment_params:
|
||||
prob_death: 0.001
|
||||
default_state:
|
||||
mating_prob: 0.01
|
||||
topology:
|
||||
nodes:
|
||||
- id: 1
|
||||
state:
|
||||
gender: female
|
||||
- id: 0
|
||||
state:
|
||||
gender: male
|
||||
directed: true
|
||||
links: []
|
||||
@@ -1,2 +0,0 @@
|
||||
balkian Torvalds {}
|
||||
anonymous Torvalds {}
|
||||
@@ -1,14 +0,0 @@
|
||||
---
|
||||
name: torvalds_example
|
||||
max_time: 10
|
||||
interval: 2
|
||||
agent_type: CounterModel
|
||||
default_state:
|
||||
skill_level: 'beginner'
|
||||
network_params:
|
||||
path: 'torvalds.edgelist'
|
||||
states:
|
||||
Torvalds:
|
||||
skill_level: 'God'
|
||||
balkian:
|
||||
skill_level: 'developer'
|
||||
0
logo_gsi.png
Normal file → Executable file
|
Before Width: | Height: | Size: 35 KiB After Width: | Height: | Size: 35 KiB |
0
logo_gsi.svg
Normal file → Executable file
|
Before Width: | Height: | Size: 18 KiB After Width: | Height: | Size: 18 KiB |
38
models/BaseBehaviour/BaseBehaviour.py
Executable file
@@ -0,0 +1,38 @@
|
||||
import settings
|
||||
from nxsim import BaseNetworkAgent
|
||||
from .. import networkStatus
|
||||
|
||||
|
||||
class BaseBehaviour(BaseNetworkAgent):
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self._attrs = {}
|
||||
|
||||
@property
|
||||
def attrs(self):
|
||||
now = self.env.now
|
||||
if now not in self._attrs:
|
||||
self._attrs[now] = {}
|
||||
return self._attrs[now]
|
||||
|
||||
@attrs.setter
|
||||
def attrs(self, value):
|
||||
self._attrs[self.env.now] = value
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
self.step(self.env.now)
|
||||
yield self.env.timeout(settings.network_params["timeout"])
|
||||
|
||||
def step(self, now):
|
||||
networkStatus['agent_%s'% self.id] = self.to_json()
|
||||
|
||||
def to_json(self):
|
||||
final = {}
|
||||
for stamp, attrs in self._attrs.items():
|
||||
for a in attrs:
|
||||
if a not in final:
|
||||
final[a] = {}
|
||||
final[a][stamp] = attrs[a]
|
||||
return final
|
||||
1
models/BaseBehaviour/__init__.py
Executable file
@@ -0,0 +1 @@
|
||||
from .BaseBehaviour import BaseBehaviour
|
||||
367
models/TerroristModel/TerroristModel.py
Normal file
@@ -0,0 +1,367 @@
|
||||
import random
|
||||
import numpy as np
|
||||
from ..BaseBehaviour import *
|
||||
import settings
|
||||
import networkx as nx
|
||||
|
||||
|
||||
|
||||
POPULATION = 0
|
||||
LEADERS = 1
|
||||
HAVEN = 2
|
||||
TRAININGENV = 3
|
||||
|
||||
NON_RADICAL = 0
|
||||
NEUTRAL = 1
|
||||
RADICAL = 2
|
||||
|
||||
POPNON =0
|
||||
POPNE=1
|
||||
POPRAD=2
|
||||
|
||||
HAVNON=3
|
||||
HAVNE=4
|
||||
HAVRAD=5
|
||||
|
||||
LEADER=6
|
||||
|
||||
TRAINING = 7
|
||||
|
||||
|
||||
class TerroristModel(BaseBehaviour):
|
||||
num_agents = 0
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.population = settings.network_params["number_of_nodes"] * settings.environment_params['initial_population']
|
||||
self.havens = settings.network_params["number_of_nodes"] * settings.environment_params['initial_havens']
|
||||
self.training_enviroments = settings.network_params["number_of_nodes"] * settings.environment_params['initial_training_enviroments']
|
||||
|
||||
self.initial_radicalism = settings.environment_params['initial_radicalism']
|
||||
self.information_spread_intensity = settings.environment_params['information_spread_intensity']
|
||||
self.influence = settings.environment_params['influence']
|
||||
self.relative_inequality = settings.environment_params['relative_inequality']
|
||||
self.additional_influence = settings.environment_params['additional_influence']
|
||||
|
||||
if TerroristModel.num_agents < self.population:
|
||||
self.state['type'] = POPULATION
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
random1 = random.random()
|
||||
if random1 < 0.7:
|
||||
self.state['id'] = NON_RADICAL
|
||||
self.state['fstatus'] = POPNON
|
||||
elif random1 >= 0.7 and random1 < 0.9:
|
||||
self.state['id'] = NEUTRAL
|
||||
self.state['fstatus'] = POPNE
|
||||
elif random1 >= 0.9:
|
||||
self.state['id'] = RADICAL
|
||||
self.state['fstatus'] = POPRAD
|
||||
|
||||
elif TerroristModel.num_agents < self.havens + self.population:
|
||||
self.state['type'] = HAVEN
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
random2 = random.random()
|
||||
random1 = random2 + self.initial_radicalism
|
||||
if random1 < 1.2:
|
||||
self.state['id'] = NON_RADICAL
|
||||
self.state['fstatus'] = HAVNON
|
||||
elif random1 >= 1.2 and random1 < 1.6:
|
||||
self.state['id'] = NEUTRAL
|
||||
self.state['fstatus'] = HAVNE
|
||||
elif random1 >= 1.6:
|
||||
self.state['id'] = RADICAL
|
||||
self.state['fstatus'] = HAVRAD
|
||||
|
||||
elif TerroristModel.num_agents < self.training_enviroments + self.havens + self.population:
|
||||
self.state['type'] = TRAININGENV
|
||||
self.state['fstatus'] = TRAINING
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
|
||||
def step(self, now):
|
||||
if self.state['type'] == POPULATION:
|
||||
self.population_and_leader_conduct()
|
||||
if self.state['type'] == LEADERS:
|
||||
self.population_and_leader_conduct()
|
||||
if self.state['type'] == HAVEN:
|
||||
self.haven_conduct()
|
||||
if self.state['type'] == TRAININGENV:
|
||||
self.training_enviroment_conduct()
|
||||
|
||||
self.attrs['status'] = self.state['id']
|
||||
self.attrs['type'] = self.state['type']
|
||||
self.attrs['radicalism'] = self.state['rad']
|
||||
self.attrs['fstatus'] = self.state['fstatus']
|
||||
super().step(now)
|
||||
|
||||
def population_and_leader_conduct(self):
|
||||
if self.state['id'] == NON_RADICAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
self.non_radical_behaviour()
|
||||
if self.state['id'] == NEUTRAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
while self.state['id'] == RADICAL:
|
||||
self.radical_behaviour()
|
||||
break
|
||||
self.neutral_behaviour()
|
||||
if self.state['id'] == RADICAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
self.radical_behaviour()
|
||||
|
||||
def haven_conduct(self):
|
||||
non_radical_neighbors = self.get_neighboring_agents(state_id=NON_RADICAL)
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=NEUTRAL)
|
||||
radical_neighbors = self.get_neighboring_agents(state_id=RADICAL)
|
||||
|
||||
neighbors_of_non_radical = len(neutral_neighbors) + len(radical_neighbors)
|
||||
neighbors_of_neutral = len(non_radical_neighbors) + len(radical_neighbors)
|
||||
neighbors_of_radical = len(non_radical_neighbors) + len(neutral_neighbors)
|
||||
threshold = 8
|
||||
if (len(non_radical_neighbors) > neighbors_of_non_radical) and len(non_radical_neighbors) >= threshold:
|
||||
self.state['id'] = NON_RADICAL
|
||||
elif (len(neutral_neighbors) > neighbors_of_neutral) and len(neutral_neighbors) >= threshold:
|
||||
self.state['id'] = NEUTRAL
|
||||
elif (len(radical_neighbors) > neighbors_of_radical) and len(radical_neighbors) >= threshold:
|
||||
self.state['id'] = RADICAL
|
||||
|
||||
if self.state['id'] == NEUTRAL:
|
||||
for neighbor in non_radical_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] > 0.59:
|
||||
neighbor.state['rad'] = 0.59
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
|
||||
if self.state['id'] == RADICAL:
|
||||
|
||||
for neighbor in non_radical_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] > 0.59:
|
||||
neighbor.state['rad'] = 0.59
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
|
||||
for neighbor in neutral_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.6:
|
||||
neighbor.state['id'] = RADICAL
|
||||
if neighbor.state['type'] != HAVEN and neighbor.state['type']!=TRAININGENV:
|
||||
if neighbor.state['rad'] >= 0.62:
|
||||
if create_leader(neighbor):
|
||||
neighbor.state['type'] = LEADERS
|
||||
neighbor.state['fstatus'] = LEADER
|
||||
# elif neighbor.state['type'] == LEADERS:
|
||||
# neighbor.state['type'] = POPULATION
|
||||
# neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVRAD
|
||||
|
||||
    def training_enviroment_conduct(self):
        self.state['id'] = RADICAL
        self.state['rad'] = 1
        neighbors = self.get_neighboring_agents()
        for neighbor in neighbors:
            if neighbor.state['id'] == NON_RADICAL:
                neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
                if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
                    neighbor.state['id'] = NEUTRAL
                    if neighbor.state['type'] == POPULATION:
                        neighbor.state['fstatus'] = POPNE
                    elif neighbor.state['type'] == HAVEN:
                        neighbor.state['fstatus'] = HAVNE
                elif neighbor.state['rad'] > 0.59:
                    neighbor.state['rad'] = 0.59
                    neighbor.state['id'] = NEUTRAL
                    if neighbor.state['type'] == POPULATION:
                        neighbor.state['fstatus'] = POPNE
                    elif neighbor.state['type'] == HAVEN:
                        neighbor.state['fstatus'] = HAVNE

            neighbor.state['rad'] = neighbor.state['rad'] + (neighbor.influence + neighbor.additional_influence) * neighbor.information_spread_intensity
            if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
                neighbor.state['id'] = NEUTRAL
                if neighbor.state['type'] == POPULATION:
                    neighbor.state['fstatus'] = POPNE
                elif neighbor.state['type'] == HAVEN:
                    neighbor.state['fstatus'] = HAVNE
            elif neighbor.state['rad'] >= 0.6:
                neighbor.state['id'] = RADICAL
                if neighbor.state['type'] != HAVEN and neighbor.state['type'] != TRAININGENV:
                    if neighbor.state['rad'] >= 0.62:
                        if create_leader(neighbor):
                            neighbor.state['type'] = LEADERS
                            neighbor.state['fstatus'] = LEADER
                    # elif neighbor.state['type'] == LEADERS:
                    #     neighbor.state['type'] = POPULATION
                    #     neighbor.state['fstatus'] = POPRAD
                    elif neighbor.state['type'] == POPULATION:
                        neighbor.state['fstatus'] = POPRAD
                elif neighbor.state['type'] == HAVEN:
                    neighbor.state['fstatus'] = HAVRAD
    def non_radical_behaviour(self):
        neighbors = self.get_neighboring_agents()

        for neighbor in neighbors:
            if neighbor.state['type'] == POPULATION:
                if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
                    if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
                        self.state['id'] = NEUTRAL
                        if self.state['type'] == POPULATION:
                            self.state['fstatus'] = POPNE
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVNE
                    elif self.state['rad'] > 0.59:
                        self.state['rad'] = 0.59
                        self.state['id'] = NEUTRAL
                        if self.state['type'] == POPULATION:
                            self.state['fstatus'] = POPNE
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVNE

            elif neighbor.state['type'] == LEADERS:
                if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
                    if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
                        self.state['id'] = NEUTRAL
                        if self.state['type'] == POPULATION:
                            self.state['fstatus'] = POPNE
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVNE
                    elif self.state['rad'] > 0.59:
                        self.state['rad'] = 0.59
                        self.state['id'] = NEUTRAL
                        if self.state['type'] == POPULATION:
                            self.state['fstatus'] = POPNE
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVNE
    def neutral_behaviour(self):
        neighbors = self.get_neighboring_agents()
        for neighbor in neighbors:
            if neighbor.state['type'] == POPULATION:
                if neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
                    if self.state['rad'] >= 0.6:
                        self.state['id'] = RADICAL
                        if self.state['type'] != HAVEN:
                            if self.state['rad'] >= 0.62:
                                if create_leader(self):
                                    self.state['type'] = LEADERS
                                    self.state['fstatus'] = LEADER
                            # elif self.state['type'] == LEADERS:
                            #     self.state['type'] = POPULATION
                            #     self.state['fstatus'] = POPRAD
                            elif self.state['type'] == POPULATION:
                                self.state['fstatus'] = POPRAD
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVRAD

            elif neighbor.state['type'] == LEADERS:
                if neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
                    if self.state['rad'] >= 0.6:
                        self.state['id'] = RADICAL
                        if self.state['type'] != HAVEN:
                            if self.state['rad'] >= 0.62:
                                if create_leader(self):
                                    self.state['type'] = LEADERS
                                    self.state['fstatus'] = LEADER
                            # elif self.state['type'] == LEADERS:
                            #     self.state['type'] = POPULATION
                            #     self.state['fstatus'] = POPRAD
                            elif self.state['type'] == POPULATION:
                                self.state['fstatus'] = POPRAD
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVRAD
    def radical_behaviour(self):
        neighbors = self.get_neighboring_agents(state_id=RADICAL)

        for neighbor in neighbors:
            if self.state['rad'] < neighbor.state['rad'] and self.state['type'] == LEADERS and neighbor.state['type'] == LEADERS:
                self.state['type'] = POPULATION
                self.state['fstatus'] = POPRAD

    def set_radicalism(self):
        if self.state['id'] == NON_RADICAL:
            radicalism = random.uniform(0.0, 0.29) * self.relative_inequality
            return radicalism
        elif self.state['id'] == NEUTRAL:
            radicalism = 0.3 + random.uniform(0.3, 0.59) * self.relative_inequality
            if radicalism >= 0.6:
                self.state['id'] = RADICAL
            return radicalism
        elif self.state['id'] == RADICAL:
            radicalism = 0.6 + random.uniform(0.6, 1.0) * self.relative_inequality
            return radicalism
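The rad value produced here is interpreted against fixed cut-offs throughout the model: below 0.3 is non-radical, 0.3–0.59 neutral, 0.6 and above radical, with 0.62 as the additional bar for leader promotion. A small sketch of that mapping as a standalone helper, reusing the state constants defined in soil.py (illustrative only, not part of the committed file):

```python
NON_RADICAL, NEUTRAL, RADICAL = 0, 1, 2  # as defined in soil.py

def state_for_radicalism(rad):
    # Cut-offs used throughout TerroristModel; 0.62 additionally gates promotion to LEADERS.
    if rad >= 0.6:
        return RADICAL
    elif rad >= 0.3:
        return NEUTRAL
    return NON_RADICAL

print(state_for_radicalism(0.45))  # -> 1 (NEUTRAL)
```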
def get_partition(agent):
    return settings.partition_param[agent.id]


def get_centrality(agent):
    return settings.centrality_param[agent.id]


def get_centrality_given_id(id):
    return settings.centrality_param[id]


def get_leader(partition):
    if not bool(settings.leaders) or partition not in settings.leaders.keys():
        return None
    return settings.leaders[partition]


def set_leader(partition, agent):
    settings.leaders[partition] = agent.id


def create_leader(agent):
    my_partition = get_partition(agent)
    old_leader = get_leader(my_partition)

    if old_leader is None:
        set_leader(my_partition, agent)
        return True
    else:
        my_centrality = get_centrality(agent)
        old_leader_centrality = get_centrality_given_id(old_leader)
        if my_centrality > old_leader_centrality:
            set_leader(my_partition, agent)
            return True
    return False
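create_leader only promotes an agent when it is the most central radical seen so far within its community partition. The lookup tables it consults are expected to be filled by the driver, along the lines of the soil.py code further down in this diff; the snippet below is a sketch of that setup (graph size and generator mirror the driver, the dict names stand in for the globals in settings.py):

```python
import networkx as nx
import community  # python-louvain, as used by the soil.py driver below

G = nx.random_geometric_graph(80, 0.2)
partition_param = community.best_partition(G)     # node id -> community id
centrality_param = nx.betweenness_centrality(G)   # node id -> betweenness centrality
leaders = {}                                       # community id -> current leader's node id
# create_leader(agent) then compares centrality_param[agent.id] against the current
# leader of partition_param[agent.id] and replaces it only if the new value is higher.
```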
1
models/TerroristModel/__init__.py
Normal file
@@ -0,0 +1 @@
from .TerroristModel import TerroristModel

3
models/__init__.py
Executable file
@@ -0,0 +1,3 @@
from .models import *
from .BaseBehaviour import *
from .TerroristModel import *

7
models/models.py
Executable file
@@ -0,0 +1,7 @@
import settings

networkStatus = {}  # Dict that will contain the status of every agent in the network

# Initialize agent states. Let's assume everyone is normal and all types are population.
init_states = [{'id': 0, 'type': 0, 'rad': 0, 'fstatus': 0} for _ in range(settings.network_params["number_of_nodes"])]
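Each element of init_states above is the starting state handed to one agent, so with the default settings.json all 80 agents begin with the same dict:

```python
# Illustrative: the first entry produced by the comprehension above.
example_entry = {'id': 0, 'type': 0, 'rad': 0, 'fstatus': 0}
# id -> NON_RADICAL, type -> POPULATION, rad -> no radicalism yet, fstatus -> default status
```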
4
requirements.txt
Normal file → Executable file
@@ -1,7 +1,5 @@
nxsim
simpy
networkx>=2.0
networkx
numpy
matplotlib
pyyaml
pandas
23
settings.json
Executable file
@@ -0,0 +1,23 @@
[
    {
        "network_type": 0,
        "number_of_nodes": 80,
        "max_time": 50,
        "num_trials": 1,
        "timeout": 2
    },

    {
        "agent": ["TerroristModel"],

        "initial_population": 0.85,
        "initial_havens": 0.1,
        "initial_training_enviroments": 0.05,

        "initial_radicalism": 0.12,
        "relative_inequality": 0.33,
        "information_spread_intensity": 0.1,
        "influence": 0.4,
        "additional_influence": 0.1
    }
]
13
settings.py
Executable file
@@ -0,0 +1,13 @@
# General configuration
import json

with open('settings.json', 'r') as f:
    settings = json.load(f)

network_params = settings[0]
environment_params = settings[1]

centrality_param = {}
partition_param = {}
leaders = {}
54
setup.py
@@ -1,54 +0,0 @@
|
||||
import os
|
||||
from setuptools import setup
|
||||
|
||||
|
||||
with open(os.path.join('soil', 'VERSION')) as f:
|
||||
__version__ = f.readlines()[0].strip()
|
||||
assert __version__
|
||||
|
||||
|
||||
def parse_requirements(filename):
|
||||
""" load requirements from a pip requirements file """
|
||||
with open(filename, 'r') as f:
|
||||
lineiter = list(line.strip() for line in f)
|
||||
return [line for line in lineiter if line and not line.startswith("#")]
|
||||
|
||||
|
||||
install_reqs = parse_requirements("requirements.txt")
|
||||
test_reqs = parse_requirements("test-requirements.txt")
|
||||
|
||||
|
||||
setup(
|
||||
name='soil',
|
||||
packages=['soil'], # this must be the same as the name above
|
||||
version=__version__,
|
||||
description=('An Agent-Based Social Simulator for Social Networks'),
|
||||
author='J. Fernando Sanchez',
|
||||
author_email='jf.sanchez@upm.es',
|
||||
url='https://github.com/gsi-upm/soil', # use the URL to the github repo
|
||||
download_url='https://github.com/gsi-upm/soil/archive/{}.tar.gz'.format(
|
||||
__version__),
|
||||
keywords=['agent', 'social', 'simulator'],
|
||||
classifiers=[
|
||||
'Development Status :: 5 - Production/Stable',
|
||||
'Environment :: Console',
|
||||
'Intended Audience :: End Users/Desktop',
|
||||
'Intended Audience :: Developers',
|
||||
'License :: OSI Approved :: Apache Software License',
|
||||
'Operating System :: MacOS :: MacOS X',
|
||||
'Operating System :: Microsoft :: Windows',
|
||||
'Operating System :: POSIX',
|
||||
'Programming Language :: Python :: 3'],
|
||||
install_requires=install_reqs,
|
||||
extras_require={
|
||||
'web': ['tornado']
|
||||
|
||||
},
|
||||
tests_require=test_reqs,
|
||||
setup_requires=['pytest-runner', ],
|
||||
include_package_data=True,
|
||||
entry_points={
|
||||
'console_scripts':
|
||||
['soil = soil.__init__:main',
|
||||
'soil-web = soil.web.__init__:main']
|
||||
})
|
||||
BIN
sim_01/log.0.state.pickled
Executable file
BIN
sim_01/log.1.state.pickled
Normal file
215
soil.py
Executable file
@@ -0,0 +1,215 @@
|
||||
from models import *
|
||||
from nxsim import NetworkSimulation
|
||||
# import numpy
|
||||
from matplotlib import pyplot as plt
|
||||
import networkx as nx
|
||||
import settings
|
||||
import models
|
||||
import math
|
||||
import json
|
||||
import operator
|
||||
import community
|
||||
|
||||
|
||||
|
||||
POPULATION = 0
|
||||
LEADERS = 1
|
||||
HAVEN = 2
|
||||
TRAINING = 3
|
||||
|
||||
NON_RADICAL = 0
|
||||
NEUTRAL = 1
|
||||
RADICAL = 2
|
||||
#################
|
||||
# Visualization #
|
||||
#################
|
||||
|
||||
def visualization(graph_name):
|
||||
|
||||
for x in range(0, settings.network_params["number_of_nodes"]):
|
||||
attributes = {}
|
||||
spells = []
|
||||
for attribute in models.networkStatus["agent_%s" % x]:
|
||||
if attribute == 'visible':
|
||||
lastvisible = False
|
||||
laststep = 0
|
||||
for t_step in models.networkStatus["agent_%s" % x][attribute]:
|
||||
nowvisible = models.networkStatus["agent_%s" % x][attribute][t_step]
|
||||
if nowvisible and not lastvisible:
|
||||
laststep = t_step
|
||||
if not nowvisible and lastvisible:
|
||||
spells.append((laststep, t_step))
|
||||
|
||||
lastvisible = nowvisible
|
||||
if lastvisible:
|
||||
spells.append((laststep, None))
|
||||
else:
|
||||
emotionStatusAux = []
|
||||
for t_step in models.networkStatus["agent_%s" % x][attribute]:
|
||||
prec = 2
|
||||
output = math.floor(models.networkStatus["agent_%s" % x][attribute][t_step] * (10 ** prec)) / (10 ** prec) # 2 decimals
|
||||
emotionStatusAux.append((output, t_step, t_step + settings.network_params["timeout"]))
|
||||
attributes[attribute] = emotionStatusAux
|
||||
if spells:
|
||||
G.add_node(x, attributes, spells=spells)
|
||||
else:
|
||||
G.add_node(x, attributes)
|
||||
|
||||
print("Done!")
|
||||
|
||||
|
||||
with open('data.txt', 'w') as outfile:
|
||||
json.dump(models.networkStatus, outfile, sort_keys=True, indent=4, separators=(',', ': '))
|
||||
|
||||
for node in range(settings.network_params["number_of_nodes"]):
|
||||
G.node[node]['x'] = G.node[node]['pos'][0]
|
||||
G.node[node]['y'] = G.node[node]['pos'][1]
|
||||
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
|
||||
del (G.node[node]['pos'])
|
||||
|
||||
nx.write_gexf(G, graph_name+".gexf", version="1.2draft")
|
||||
|
||||
###########
|
||||
# Results #
|
||||
###########
|
||||
|
||||
def results(model_name):
|
||||
x_values = []
|
||||
neutral_values = []
|
||||
non_radical_values = []
|
||||
radical_values = []
|
||||
|
||||
attribute_plot = 'status'
|
||||
for time in range(0, settings.network_params["max_time"]):
|
||||
value_neutral = 0
|
||||
value_non_radical = 0
|
||||
value_radical = 0
|
||||
real_time = time * settings.network_params["timeout"]
|
||||
activity = False
|
||||
for x in range(0, settings.network_params["number_of_nodes"]):
|
||||
if attribute_plot in models.networkStatus["agent_%s" % x]:
|
||||
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NON_RADICAL:
|
||||
value_non_radical += 1
|
||||
activity = True
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NEUTRAL:
|
||||
value_neutral += 1
|
||||
activity = True
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == RADICAL:
|
||||
value_radical += 1
|
||||
activity = True
|
||||
|
||||
|
||||
if activity:
|
||||
x_values.append(real_time)
|
||||
neutral_values.append(value_neutral)
|
||||
non_radical_values.append(value_non_radical)
|
||||
radical_values.append(value_radical)
|
||||
activity = False
|
||||
|
||||
fig1 = plt.figure()
|
||||
ax1 = fig1.add_subplot(111)
|
||||
|
||||
non_radical_line = ax1.plot(x_values, non_radical_values, label='Non radical')
|
||||
neutral_line = ax1.plot(x_values, neutral_values, label='Neutral')
|
||||
radical_line = ax1.plot(x_values, radical_values, label='Radical')
|
||||
ax1.legend()
|
||||
fig1.savefig(model_name+'.png')
|
||||
plt.show()
|
||||
|
||||
###################
# Results by type #
###################
|
||||
|
||||
def resultadosTipo(model_name):
|
||||
x_values = []
|
||||
population_values = []
|
||||
leaders_values = []
|
||||
havens_values = []
|
||||
training_enviroments_values = []
|
||||
|
||||
attribute_plot = 'type'
|
||||
for time in range(0, settings.network_params["max_time"]):
|
||||
value_population = 0
|
||||
value_leaders = 0
|
||||
value_havens = 0
|
||||
value_training_enviroments = 0
|
||||
real_time = time * settings.network_params["timeout"]
|
||||
activity = False
|
||||
for x in range(0, settings.network_params["number_of_nodes"]):
|
||||
if attribute_plot in models.networkStatus["agent_%s" % x]:
|
||||
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == POPULATION:
|
||||
value_population += 1
|
||||
activity = True
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == LEADERS:
|
||||
value_leaders += 1
|
||||
activity = True
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == HAVEN:
|
||||
value_havens += 1
|
||||
activity = True
|
||||
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == TRAINING:
|
||||
value_training_enviroments += 1
|
||||
activity = True
|
||||
if activity:
|
||||
x_values.append(real_time)
|
||||
population_values.append(value_population)
|
||||
leaders_values.append(value_leaders)
|
||||
havens_values.append(value_havens)
|
||||
training_enviroments_values.append(value_training_enviroments)
|
||||
activity = False
|
||||
|
||||
fig2 = plt.figure()
|
||||
ax2 = fig2.add_subplot(111)
|
||||
|
||||
population_line = ax2.plot(x_values, population_values, label='Population')
|
||||
leaders_line = ax2.plot(x_values, leaders_values, label='Leader')
|
||||
havens_line = ax2.plot(x_values, havens_values, label='Havens')
|
||||
training_enviroments_line = ax2.plot(x_values, training_enviroments_values, label='Training Environments')
|
||||
ax2.legend()
|
||||
fig2.savefig(model_name+'_type'+'.png')
|
||||
plt.show()
|
||||
|
||||
####################
# Network creation #
####################

# nx.degree_centrality(G);

if settings.network_params["network_type"] == 0:
    G = nx.random_geometric_graph(settings.network_params["number_of_nodes"], 0.2)

    settings.partition_param = community.best_partition(G)
    settings.centrality_param = nx.betweenness_centrality(G).copy()

# print(settings.centrality_param)
# print(settings.partition_param)
# More types of networks can be added here
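As a sketch of what an additional branch could look like here, a hypothetical network_type of 1 might select the Barabási–Albert generator that the old soil.py~ script below already used; the branch value and the attachment parameter m=3 are illustrative assumptions, not part of the committed code:

```python
# Hypothetical extension of the if-block above; network_type == 1 and m=3 are illustrative.
elif settings.network_params["network_type"] == 1:
    G = nx.barabasi_albert_graph(settings.network_params["number_of_nodes"], 3)
    settings.partition_param = community.best_partition(G)
    settings.centrality_param = nx.betweenness_centrality(G).copy()
```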
##############
|
||||
# Simulation #
|
||||
##############
|
||||
|
||||
agents = settings.environment_params['agent']
|
||||
|
||||
print("Using Agent(s): {agents}".format(agents=agents))
|
||||
|
||||
if len(agents) > 1:
|
||||
for agent in agents:
|
||||
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
|
||||
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
|
||||
sim.run_simulation()
|
||||
print(str(agent))
|
||||
results(str(agent))
|
||||
resultadosTipo(str(agent))
|
||||
visualization(str(agent))
|
||||
else:
|
||||
agent = agents[0]
|
||||
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
|
||||
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
|
||||
sim.run_simulation()
|
||||
results(str(agent))
|
||||
resultadosTipo(str(agent))
|
||||
|
||||
visualization(str(agent))
|
||||
394
soil.py~
Executable file
@@ -0,0 +1,394 @@
|
||||
from nxsim import NetworkSimulation
|
||||
from nxsim import BaseNetworkAgent
|
||||
from nxsim import BaseLoggingAgent
|
||||
from random import randint
|
||||
from matplotlib import pyplot as plt
|
||||
import random
|
||||
import numpy as np
|
||||
import networkx as nx
|
||||
import settings
|
||||
|
||||
|
||||
settings.init()
|
||||
|
||||
if settings.network_type == 0:
|
||||
G = nx.complete_graph(settings.number_of_nodes)
|
||||
if settings.network_type == 1:
|
||||
G = nx.barabasi_albert_graph(settings.number_of_nodes,3)
|
||||
if settings.network_type == 2:
|
||||
G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
|
||||
|
||||
|
||||
myList=[]
|
||||
networkStatus=[]
|
||||
for x in range(0, settings.number_of_nodes):
|
||||
networkStatus.append({'id':x})
|
||||
|
||||
|
||||
|
||||
# # Just like subclassing a process in SimPy
|
||||
# class MyAgent(BaseNetworkAgent):
|
||||
# def __init__(self, environment=None, agent_id=0, state=()): # Make sure to have these three keyword arguments
|
||||
# super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
# # Add your own attributes here
|
||||
|
||||
# def run(self):
|
||||
# # Add your behaviors here
|
||||
|
||||
|
||||
|
||||
|
||||
class SentimentCorrelationModel(BaseNetworkAgent):
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.outside_effects_prob = settings.outside_effects_prob
|
||||
self.anger_prob = settings.anger_prob
|
||||
self.joy_prob = settings.joy_prob
|
||||
self.sadness_prob = settings.sadness_prob
|
||||
self.disgust_prob = settings.disgust_prob
|
||||
self.time_awareness=[]
|
||||
for i in range(4):
|
||||
self.time_awareness.append(0) #0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
|
||||
networkStatus[self.id][self.env.now]=0
|
||||
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
if self.env.now > 10:
|
||||
G.add_node(205)
|
||||
G.add_edge(205,0)
|
||||
angry_neighbors_1_time_step=[]
|
||||
joyful_neighbors_1_time_step=[]
|
||||
sad_neighbors_1_time_step=[]
|
||||
disgusted_neighbors_1_time_step=[]
|
||||
|
||||
|
||||
angry_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for x in angry_neighbors:
|
||||
if x.time_awareness[0] > (self.env.now-500):
|
||||
angry_neighbors_1_time_step.append(x)
|
||||
num_neighbors_angry = len(angry_neighbors_1_time_step)
|
||||
|
||||
|
||||
joyful_neighbors = self.get_neighboring_agents(state_id=2)
|
||||
for x in joyful_neighbors:
|
||||
if x.time_awareness[1] > (self.env.now-500):
|
||||
joyful_neighbors_1_time_step.append(x)
|
||||
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
|
||||
|
||||
|
||||
sad_neighbors = self.get_neighboring_agents(state_id=3)
|
||||
for x in sad_neighbors:
|
||||
if x.time_awareness[2] > (self.env.now-500):
|
||||
sad_neighbors_1_time_step.append(x)
|
||||
num_neighbors_sad = len(sad_neighbors_1_time_step)
|
||||
|
||||
|
||||
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
|
||||
for x in disgusted_neighbors:
|
||||
if x.time_awareness[3] > (self.env.now-500):
|
||||
disgusted_neighbors_1_time_step.append(x)
|
||||
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
|
||||
|
||||
# # Outside effects. Assign a random state
|
||||
# if random.random() < settings.outside_effects_prob:
|
||||
# if self.state['id'] == 0:
|
||||
# self.state['id'] = random.randint(1,4)
|
||||
# myList.append(self.id)
|
||||
# networkStatus[self.id][self.env.now]=self.state['id'] # Store the moment of infection, for the dynamic network
|
||||
# self.time_awareness = self.env.now # To know when they became infected
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# else:
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
# #Imitation effects-Joy
|
||||
|
||||
# if random.random() < (settings.joy_prob*(num_neighbors_joyful)/10):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 2
|
||||
# networkStatus[self.id][self.env.now]=2
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
# #Imitation effects-Sadness
|
||||
|
||||
# if random.random() < (settings.sadness_prob*(num_neighbors_sad)/10):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 3
|
||||
# networkStatus[self.id][self.env.now]=3
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
# #Imitation effects-Disgust
|
||||
|
||||
# if random.random() < (settings.disgust_prob*(num_neighbors_disgusted)/10):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 4
|
||||
# networkStatus[self.id][self.env.now]=4
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
# #Imitation effects-Anger
|
||||
|
||||
# if random.random() < (settings.anger_prob*(num_neighbors_angry)/10):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 1
|
||||
# networkStatus[self.id][self.env.now]=1
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
###########################################
|
||||
|
||||
|
||||
anger_prob= settings.anger_prob+(len(angry_neighbors_1_time_step)*settings.anger_prob)
|
||||
print("anger_prob " + str(anger_prob))
|
||||
joy_prob= settings.joy_prob+(len(joyful_neighbors_1_time_step)*settings.joy_prob)
|
||||
print("joy_prob " + str(joy_prob))
|
||||
sadness_prob = settings.sadness_prob+(len(sad_neighbors_1_time_step)*settings.sadness_prob)
|
||||
print("sadness_prob "+ str(sadness_prob))
|
||||
disgust_prob = settings.disgust_prob+(len(disgusted_neighbors_1_time_step)*settings.disgust_prob)
|
||||
print("disgust_prob " + str(disgust_prob))
|
||||
outside_effects_prob= settings.outside_effects_prob
|
||||
print("outside_effects_prob " + str(outside_effects_prob))
|
||||
|
||||
|
||||
num = random.random()
|
||||
|
||||
|
||||
if(num<outside_effects_prob):
|
||||
self.state['id'] = random.randint(1,4)
|
||||
myList.append(self.id)
|
||||
networkStatus[self.id][self.env.now]=self.state['id'] # Store the moment of infection, for the dynamic network
|
||||
self.time_awareness[self.state['id']-1] = self.env.now
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
if(num<anger_prob):
|
||||
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 1
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
self.time_awareness[self.state['id']-1] = self.env.now
|
||||
elif (num<joy_prob+anger_prob and num>anger_prob):
|
||||
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 2
|
||||
networkStatus[self.id][self.env.now]=2
|
||||
self.time_awareness[self.state['id']-1] = self.env.now
|
||||
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
|
||||
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 3
|
||||
networkStatus[self.id][self.env.now]=3
|
||||
self.time_awareness[self.state['id']-1] = self.env.now
|
||||
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
|
||||
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 4
|
||||
networkStatus[self.id][self.env.now]=4
|
||||
self.time_awareness[self.state['id']-1] = self.env.now
|
||||
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
# anger_propagation = settings.anger_prob*num_neighbors_angry/10
|
||||
# joy_propagation = anger_propagation + (settings.joy_prob*num_neighbors_joyful/10)
|
||||
# sadness_propagation = joy_propagation + (settings.sadness_prob*num_neighbors_sad/10)
|
||||
# disgust_propagation = sadness_propagation + (settings.disgust_prob*num_neighbors_disgusted/10)
|
||||
# outside_effects_propagation = disgust_propagation + settings.outside_effects_prob
|
||||
|
||||
# if (num<anger_propagation):
|
||||
# if(self.state['id'] !=0):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 1
|
||||
# networkStatus[self.id][self.env.now]=1
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# if (num<joy_propagation):
|
||||
# if(self.state['id'] !=0):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 2
|
||||
# networkStatus[self.id][self.env.now]=2
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# if(num<sadness_propagation):
|
||||
# if(self.state['id'] !=0):
|
||||
# myList.append(self.id)
|
||||
# self.state['id'] = 3
|
||||
# networkStatus[self.id][self.env.now]=3
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# # if(num<disgust_propagation):
|
||||
# # if(self.state['id'] !=0):
|
||||
# # myList.append(self.id)
|
||||
# # self.state['id'] = 4
|
||||
# # networkStatus[self.id][self.env.now]=4
|
||||
# # yield self.env.timeout(settings.timeout)
|
||||
# if(num <outside_effects_propagation):
|
||||
# if self.state['id'] == 0:
|
||||
# self.state['id'] = random.randint(1,4)
|
||||
# myList.append(self.id)
|
||||
# networkStatus[self.id][self.env.now]=self.state['id'] #Almaceno cuando se ha infectado para la red dinamica
|
||||
# self.time_awareness = self.env.now # To know when they became infected
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# else:
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
# else:
|
||||
# yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
class BassModel(BaseNetworkAgent):
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.innovation_prob = settings.innovation_prob
|
||||
self.imitation_prob = settings.imitation_prob
|
||||
networkStatus[self.id][self.env.now]=0
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
|
||||
|
||||
#Outside effects
|
||||
if random.random() < settings.innovation_prob:
|
||||
if self.state['id'] == 0:
|
||||
self.state['id'] = 1
|
||||
myList.append(self.id)
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
#Imitation effects
|
||||
if self.state['id'] == 0:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
num_neighbors_aware = len(aware_neighbors)
|
||||
if random.random() < (settings.imitation_prob*num_neighbors_aware):
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 1
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
class IndependentCascadeModel(BaseNetworkAgent):
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.innovation_prob = settings.innovation_prob
|
||||
self.imitation_prob = settings.imitation_prob
|
||||
self.time_awareness = 0
|
||||
networkStatus[self.id][self.env.now]=0
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
aware_neighbors_1_time_step=[]
|
||||
#Outside effects
|
||||
if random.random() < settings.innovation_prob:
|
||||
if self.state['id'] == 0:
|
||||
self.state['id'] = 1
|
||||
myList.append(self.id)
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
self.time_awareness = self.env.now # To know when they became infected
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
#Imitation effects
|
||||
if self.state['id'] == 0:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for x in aware_neighbors:
|
||||
if x.time_awareness == (self.env.now-1):
|
||||
aware_neighbors_1_time_step.append(x)
|
||||
num_neighbors_aware = len(aware_neighbors_1_time_step)
|
||||
if random.random() < (settings.imitation_prob*num_neighbors_aware):
|
||||
myList.append(self.id)
|
||||
self.state['id'] = 1
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
class ZombieOutbreak(BaseNetworkAgent):
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.bite_prob = settings.bite_prob
|
||||
networkStatus[self.id][self.env.now]=0
|
||||
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
if random.random() < settings.heal_prob:
|
||||
if self.state['id'] == 1:
|
||||
self.zombify()
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
if self.state['id'] == 1:
|
||||
print("I am zombie " + str(self.id) + " and I am going to be healed")
|
||||
networkStatus[self.id][self.env.now]=0
|
||||
if self.id in myList:
|
||||
myList.remove(self.id)
|
||||
self.state['id'] = 0
|
||||
yield self.env.timeout(settings.timeout)
|
||||
else:
|
||||
yield self.env.timeout(settings.timeout)
|
||||
|
||||
|
||||
def zombify(self):
|
||||
normal_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in normal_neighbors:
|
||||
if random.random() < self.bite_prob:
|
||||
print("I am zombie " + str(self.id) + " and I am going to infect " + str(neighbor.id))
|
||||
neighbor.state['id'] = 1 # zombie
|
||||
myList.append(neighbor.id)
|
||||
networkStatus[self.id][self.env.now]=1
|
||||
networkStatus[neighbor.id][self.env.now]=1
|
||||
print(self.env.now, "I am zombie: " + str(self.id), "My neighbour is: " + str(neighbor.id), sep='\t')
|
||||
break
|
||||
|
||||
|
||||
# Initialize agent states. Let's assume everyone is normal.
|
||||
init_states = [{'id': 0, } for _ in range(settings.number_of_nodes)] # add keys as as necessary, but "id" must always refer to that state category
|
||||
|
||||
# Seed a zombie
|
||||
#init_states[5] = {'id': 1}
|
||||
#init_states[3] = {'id': 1}
|
||||
|
||||
sim = NetworkSimulation(topology=G, states=init_states, agent_type=SentimentCorrelationModel,
|
||||
max_time=settings.max_time, num_trials=settings.num_trials, logging_interval=1.0)
|
||||
|
||||
|
||||
sim.run_simulation()
|
||||
|
||||
myList = sorted(myList, key=int)
|
||||
#print("The zombies are: " + str(myList))
|
||||
|
||||
trial = BaseLoggingAgent.open_trial_state_history(dir_path='sim_01', trial_id=0)
|
||||
zombie_census = [sum([1 for node_id, state in g.items() if state['id'] == 1]) for t,g in trial.items()]
|
||||
|
||||
#for x in range(len(myList)):
|
||||
# G.node[myList[x]]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
|
||||
|
||||
#G.node[1]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
|
||||
|
||||
#lista = nx.nodes(G)
|
||||
#print('Nodes: ' + str(lista))
|
||||
for x in range(0, settings.number_of_nodes):
|
||||
networkStatusAux=[]
|
||||
for tiempo in networkStatus[x]:
|
||||
if tiempo != 'id':
|
||||
networkStatusAux.append((networkStatus[x][tiempo],tiempo,None))
|
||||
G.add_node(x, zombie= networkStatusAux)
|
||||
#print(networkStatus)
|
||||
|
||||
|
||||
nx.write_gexf(G,"test.gexf", version="1.2draft")
|
||||
plt.plot(zombie_census)
|
||||
plt.draw() # pyplot draw()
|
||||
plt.savefig("zombie.png")
|
||||
#print(networkStatus)
|
||||
#nx.draw(G)
|
||||
#plt.show()
|
||||
#plt.savefig("path.png")
|
||||
@@ -1 +0,0 @@
|
||||
0.13.4
|
||||
@@ -1,76 +0,0 @@
|
||||
import importlib
|
||||
import sys
|
||||
import os
|
||||
import pdb
|
||||
import logging
|
||||
|
||||
from .version import __version__
|
||||
|
||||
try:
|
||||
basestring
|
||||
except NameError:
|
||||
basestring = str
|
||||
|
||||
from . import agents
|
||||
from .simulation import *
|
||||
from .environment import Environment
|
||||
from . import utils
|
||||
from . import analysis
|
||||
|
||||
def main():
|
||||
import argparse
|
||||
from . import simulation
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logging.info('Running SOIL version: {}'.format(__version__))
|
||||
|
||||
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
|
||||
parser.add_argument('file', type=str,
|
||||
nargs="?",
|
||||
default='simulation.yml',
|
||||
help='python module containing the simulation configuration.')
|
||||
parser.add_argument('--module', '-m', type=str,
|
||||
help='file containing the code of any custom agents.')
|
||||
parser.add_argument('--dry-run', '--dry', action='store_true',
|
||||
help='Do not store the results of the simulation.')
|
||||
parser.add_argument('--pdb', action='store_true',
|
||||
help='Use a pdb console in case of exception.')
|
||||
parser.add_argument('--graph', '-g', action='store_true',
|
||||
help='Dump GEXF graph. Defaults to false.')
|
||||
parser.add_argument('--csv', action='store_true',
|
||||
help='Dump history in CSV format. Defaults to false.')
|
||||
parser.add_argument('--output', '-o', type=str, default="soil_output",
|
||||
help='folder to write results to. It defaults to the current directory.')
|
||||
parser.add_argument('--synchronous', action='store_true',
|
||||
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if os.getcwd() not in sys.path:
|
||||
sys.path.append(os.getcwd())
|
||||
if args.module:
|
||||
importlib.import_module(args.module)
|
||||
|
||||
logging.info('Loading config file: {}'.format(args.file))
|
||||
|
||||
try:
|
||||
dump = []
|
||||
if not args.dry_run:
|
||||
if args.csv:
|
||||
dump.append('csv')
|
||||
if args.graph:
|
||||
dump.append('gexf')
|
||||
simulation.run_from_config(args.file,
|
||||
dry_run=args.dry_run,
|
||||
dump=dump,
|
||||
parallel=(not args.synchronous),
|
||||
results_dir=args.output)
|
||||
except Exception:
|
||||
if args.pdb:
|
||||
pdb.post_mortem()
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -1,4 +0,0 @@
|
||||
from . import main
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -1,40 +0,0 @@
|
||||
import random
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class BassModel(BaseAgent):
|
||||
"""
|
||||
Settings:
|
||||
innovation_prob
|
||||
imitation_prob
|
||||
"""
|
||||
|
||||
def __init__(self, environment, agent_id, state):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
env_params = environment.environment_params
|
||||
self.state['sentimentCorrelation'] = 0
|
||||
|
||||
def step(self):
|
||||
self.behaviour()
|
||||
|
||||
def behaviour(self):
|
||||
# Outside effects
|
||||
if random.random() < self.state_params['innovation_prob']:
|
||||
if self.state['id'] == 0:
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
else:
|
||||
pass
|
||||
|
||||
return
|
||||
|
||||
# Imitation effects
|
||||
if self.state['id'] == 0:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
num_neighbors_aware = len(aware_neighbors)
|
||||
if random.random() < (self.state_params['imitation_prob']*num_neighbors_aware):
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
|
||||
else:
|
||||
pass
|
||||
@@ -1,102 +0,0 @@
|
||||
import random
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class BigMarketModel(BaseAgent):
|
||||
"""
|
||||
Settings:
|
||||
Names:
|
||||
enterprises [Array]
|
||||
|
||||
tweet_probability_enterprises [Array]
|
||||
Users:
|
||||
tweet_probability_users
|
||||
|
||||
tweet_relevant_probability
|
||||
|
||||
tweet_probability_about [Array]
|
||||
|
||||
sentiment_about [Array]
|
||||
"""
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.enterprises = environment.environment_params['enterprises']
|
||||
self.type = ""
|
||||
self.number_of_enterprises = len(environment.environment_params['enterprises'])
|
||||
|
||||
if self.id < self.number_of_enterprises: # Enterprises
|
||||
self.state['id'] = self.id
|
||||
self.type = "Enterprise"
|
||||
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
|
||||
else: # normal users
|
||||
self.state['id'] = self.number_of_enterprises
|
||||
self.type = "User"
|
||||
self.tweet_probability = environment.environment_params['tweet_probability_users']
|
||||
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
|
||||
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
|
||||
self.sentiment_about = environment.environment_params['sentiment_about'] # List
|
||||
|
||||
def step(self):
|
||||
|
||||
if self.id < self.number_of_enterprises: # Enterprise
|
||||
self.enterpriseBehaviour()
|
||||
else: # User
|
||||
self.userBehaviour()
|
||||
for i in range(self.number_of_enterprises): # So that it is never set to 0 when there are no changes (logs)
|
||||
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
|
||||
|
||||
def enterpriseBehaviour(self):
|
||||
|
||||
if random.random() < self.tweet_probability: # Tweets
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbour users
|
||||
for x in aware_neighbors:
|
||||
if random.uniform(0,10) < 5:
|
||||
x.sentiment_about[self.id] += 0.1 # Increments for enterprise
|
||||
else:
|
||||
x.sentiment_about[self.id] -= 0.1 # Decrements for enterprise
|
||||
|
||||
# Set the limits
|
||||
if x.sentiment_about[self.id] > 1:
|
||||
x.sentiment_about[self.id] = 1
|
||||
if x.sentiment_about[self.id]< -1:
|
||||
x.sentiment_about[self.id] = -1
|
||||
|
||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
|
||||
|
||||
def userBehaviour(self):
|
||||
|
||||
if random.random() < self.tweet_probability: # Tweets
|
||||
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
|
||||
# Tweet probability per enterprise
|
||||
for i in range(self.number_of_enterprises):
|
||||
random_num = random.random()
|
||||
if random_num < self.tweet_probability_about[i]:
|
||||
# The condition is fulfilled, sentiments are evaluated towards that enterprise
|
||||
if self.sentiment_about[i] < 0:
|
||||
# NEGATIVE
|
||||
self.userTweets("negative",i)
|
||||
elif self.sentiment_about[i] == 0:
|
||||
# NEUTRAL
|
||||
pass
|
||||
else:
|
||||
# POSITIVE
|
||||
self.userTweets("positive",i)
|
||||
|
||||
def userTweets(self,sentiment,enterprise):
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Nodes neighbours users
|
||||
for x in aware_neighbors:
|
||||
if sentiment == "positive":
|
||||
x.sentiment_about[enterprise] +=0.003
|
||||
elif sentiment == "negative":
|
||||
x.sentiment_about[enterprise] -=0.003
|
||||
else:
|
||||
pass
|
||||
|
||||
# Set the limits
|
||||
if x.sentiment_about[enterprise] > 1:
|
||||
x.sentiment_about[enterprise] = 1
|
||||
if x.sentiment_about[enterprise] < -1:
|
||||
x.sentiment_about[enterprise] = -1
|
||||
|
||||
x.attrs['sentiment_enterprise_%s'% self.enterprises[enterprise]] = x.sentiment_about[enterprise]
|
||||
@@ -1,32 +0,0 @@
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class CounterModel(BaseAgent):
|
||||
"""
|
||||
Dummy behaviour. It counts the number of nodes in the network and neighbors
|
||||
in each step and adds it to its state.
|
||||
"""
|
||||
|
||||
def step(self):
|
||||
# Outside effects
|
||||
total = len(list(self.get_all_agents()))
|
||||
neighbors = len(list(self.get_neighboring_agents()))
|
||||
self['times'] = self.get('times', 0) + 1
|
||||
self['neighbors'] = neighbors
|
||||
self['total'] = total
|
||||
|
||||
|
||||
class AggregatedCounter(BaseAgent):
|
||||
"""
|
||||
Dummy behaviour. It counts the number of nodes in the network and neighbors
|
||||
in each step and adds it to its state.
|
||||
"""
|
||||
|
||||
def step(self):
|
||||
# Outside effects
|
||||
total = len(list(self.get_all_agents()))
|
||||
neighbors = len(list(self.get_neighboring_agents()))
|
||||
self['times'] = self.get('times', 0) + 1
|
||||
self['neighbors'] = self.get('neighbors', 0) + neighbors
|
||||
self['total'] = total = self.get('total', 0) + total
|
||||
self.debug('Running for step: {}. Total: {}'.format(self.now, total))
|
||||
@@ -1,18 +0,0 @@
|
||||
from . import BaseAgent
|
||||
|
||||
import os.path
|
||||
import matplotlib
|
||||
import matplotlib.pyplot as plt
|
||||
import networkx as nx
|
||||
|
||||
|
||||
class DrawingAgent(BaseAgent):
|
||||
"""
|
||||
Agent that draws the state of the network.
|
||||
"""
|
||||
|
||||
def step(self):
|
||||
# Outside effects
|
||||
f = plt.figure()
|
||||
nx.draw(self.env.G, node_size=10, width=0.2, pos=nx.spring_layout(self.env.G, scale=100), ax=f.add_subplot(111))
|
||||
f.savefig(os.path.join(self.env.get_path(), "graph-"+str(self.env.now)+".png"))
|
||||
@@ -1,49 +0,0 @@
|
||||
import random
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class IndependentCascadeModel(BaseAgent):
|
||||
"""
|
||||
Settings:
|
||||
innovation_prob
|
||||
|
||||
imitation_prob
|
||||
"""
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.innovation_prob = environment.environment_params['innovation_prob']
|
||||
self.imitation_prob = environment.environment_params['imitation_prob']
|
||||
self.state['time_awareness'] = 0
|
||||
self.state['sentimentCorrelation'] = 0
|
||||
|
||||
def step(self):
|
||||
self.behaviour()
|
||||
|
||||
def behaviour(self):
|
||||
aware_neighbors_1_time_step = []
|
||||
# Outside effects
|
||||
if random.random() < self.innovation_prob:
|
||||
if self.state['id'] == 0:
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
self.state['time_awareness'] = self.env.now # To know when they have been infected
|
||||
else:
|
||||
pass
|
||||
|
||||
return
|
||||
|
||||
# Imitation effects
|
||||
if self.state['id'] == 0:
|
||||
aware_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for x in aware_neighbors:
|
||||
if x.state['time_awareness'] == (self.env.now-1):
|
||||
aware_neighbors_1_time_step.append(x)
|
||||
num_neighbors_aware = len(aware_neighbors_1_time_step)
|
||||
if random.random() < (self.imitation_prob*num_neighbors_aware):
|
||||
self.state['id'] = 1
|
||||
self.state['sentimentCorrelation'] = 1
|
||||
else:
|
||||
pass
|
||||
|
||||
return
|
||||
@@ -1,242 +0,0 @@
|
||||
import random
|
||||
import numpy as np
|
||||
from . import BaseAgent
|
||||
|
||||
|
||||
class SpreadModelM2(BaseAgent):
|
||||
"""
|
||||
Settings:
|
||||
prob_neutral_making_denier
|
||||
|
||||
prob_infect
|
||||
|
||||
prob_cured_healing_infected
|
||||
|
||||
prob_cured_vaccinate_neutral
|
||||
|
||||
prob_vaccinated_healing_infected
|
||||
|
||||
prob_vaccinated_vaccinate_neutral
|
||||
|
||||
prob_generate_anti_rumor
|
||||
"""
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
def step(self):
|
||||
|
||||
if self.state['id'] == 0: # Neutral
|
||||
self.neutral_behaviour()
|
||||
elif self.state['id'] == 1: # Infected
|
||||
self.infected_behaviour()
|
||||
elif self.state['id'] == 2: # Cured
|
||||
self.cured_behaviour()
|
||||
elif self.state['id'] == 3: # Vaccinated
|
||||
self.vaccinated_behaviour()
|
||||
|
||||
def neutral_behaviour(self):
|
||||
|
||||
# Infected
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
if random.random() < self.prob_neutral_making_denier:
|
||||
self.state['id'] = 3 # Vaccinated making denier
|
||||
|
||||
def infected_behaviour(self):
|
||||
|
||||
# Neutral
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_infect:
|
||||
neighbor.state['id'] = 1 # Infected
|
||||
|
||||
def cured_behaviour(self):
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if random.random() < self.prob_cured_healing_infected:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
def vaccinated_behaviour(self):
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if random.random() < self.prob_cured_healing_infected:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
|
||||
# Generate anti-rumor
|
||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors_2:
|
||||
if random.random() < self.prob_generate_anti_rumor:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
|
||||
class ControlModelM2(BaseAgent):
|
||||
"""
|
||||
Settings:
|
||||
prob_neutral_making_denier
|
||||
|
||||
prob_infect
|
||||
|
||||
prob_cured_healing_infected
|
||||
|
||||
prob_cured_vaccinate_neutral
|
||||
|
||||
prob_vaccinated_healing_infected
|
||||
|
||||
prob_vaccinated_vaccinate_neutral
|
||||
|
||||
prob_generate_anti_rumor
|
||||
"""
|
||||
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
|
||||
environment.environment_params['standard_variance'])
|
||||
|
||||
def step(self):
|
||||
|
||||
if self.state['id'] == 0: # Neutral
|
||||
self.neutral_behaviour()
|
||||
elif self.state['id'] == 1: # Infected
|
||||
self.infected_behaviour()
|
||||
elif self.state['id'] == 2: # Cured
|
||||
self.cured_behaviour()
|
||||
elif self.state['id'] == 3: # Vaccinated
|
||||
self.vaccinated_behaviour()
|
||||
elif self.state['id'] == 4: # Beacon-off
|
||||
self.beacon_off_behaviour()
|
||||
elif self.state['id'] == 5: # Beacon-on
|
||||
self.beacon_on_behaviour()
|
||||
|
||||
def neutral_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
|
||||
# Infected
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
if random.random() < self.prob_neutral_making_denier:
|
||||
self.state['id'] = 3 # Vaccinated making denier
|
||||
|
||||
def infected_behaviour(self):
|
||||
|
||||
# Neutral
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_infect:
|
||||
neighbor.state['id'] = 1 # Infected
|
||||
self.state['visible'] = False
|
||||
|
||||
def cured_behaviour(self):
|
||||
|
||||
self.state['visible'] = True
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if random.random() < self.prob_cured_healing_infected:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
def vaccinated_behaviour(self):
|
||||
self.state['visible'] = True
|
||||
|
||||
# Cure
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if random.random() < self.prob_cured_healing_infected:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
|
||||
# Generate anti-rumor
|
||||
infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors_2:
|
||||
if random.random() < self.prob_generate_anti_rumor:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
def beacon_off_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
if len(infected_neighbors) > 0:
|
||||
self.state['id'] = 5  # Beacon on
|
||||
|
||||
def beacon_on_behaviour(self):
|
||||
self.state['visible'] = False
|
||||
# Cure (M2 feature added)
|
||||
infected_neighbors = self.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors:
|
||||
if random.random() < self.prob_generate_anti_rumor:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors_infected:
|
||||
if random.random() < self.prob_generate_anti_rumor:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
|
||||
for neighbor in infected_neighbors_infected:
|
||||
if random.random() < self.prob_generate_anti_rumor:
|
||||
neighbor.state['id'] = 2 # Cured
|
||||
|
||||
# Vaccinate
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=0)
|
||||
for neighbor in neutral_neighbors:
|
||||
if random.random() < self.prob_cured_vaccinate_neutral:
|
||||
neighbor.state['id'] = 3 # Vaccinated
|
||||
@@ -1,93 +0,0 @@
|
||||
import random
|
||||
import numpy as np
|
||||
from . import FSM, state
|
||||
|
||||
|
||||
class SISaModel(FSM):
|
||||
"""
|
||||
Settings:
|
||||
neutral_discontent_spon_prob
|
||||
|
||||
neutral_discontent_infected_prob
|
||||
|
||||
neutral_content_spon_prob
|
||||
|
||||
neutral_content_infected_prob
|
||||
|
||||
discontent_neutral
|
||||
|
||||
discontent_content
|
||||
|
||||
variance_d_c
|
||||
|
||||
content_discontent
|
||||
|
||||
variance_c_d
|
||||
|
||||
content_neutral
|
||||
|
||||
standard_variance
|
||||
"""
|
||||
|
||||
def __init__(self, environment, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_discontent_infected_prob = np.random.normal(self.env['neutral_discontent_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_spon_prob = np.random.normal(self.env['neutral_content_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_infected_prob = np.random.normal(self.env['neutral_content_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
|
||||
self.discontent_neutral = np.random.normal(self.env['discontent_neutral'],
|
||||
self.env['standard_variance'])
|
||||
self.discontent_content = np.random.normal(self.env['discontent_content'],
|
||||
self.env['variance_d_c'])
|
||||
|
||||
self.content_discontent = np.random.normal(self.env['content_discontent'],
|
||||
self.env['variance_c_d'])
|
||||
self.content_neutral = np.random.normal(self.env['content_neutral'],
|
||||
self.env['standard_variance'])
|
||||
|
||||
    @state
    def neutral(self):
        # Spontaneous effects
        if random.random() < self.neutral_discontent_spon_prob:
            return self.discontent
        if random.random() < self.neutral_content_spon_prob:
            return self.content

        # Infected
        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
        if random.random() < discontent_neighbors * self.neutral_discontent_infected_prob:
            return self.discontent
        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
        if random.random() < content_neighbors * self.neutral_content_infected_prob:
            return self.content
        return self.neutral

    @state
    def discontent(self):
        # Healing
        if random.random() < self.discontent_neutral:
            return self.neutral

        # Superinfected
        content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
        if random.random() < content_neighbors * self.discontent_content:
            return self.content
        return self.discontent

    @state
    def content(self):
        # Healing
        if random.random() < self.content_neutral:
            return self.neutral

        # Superinfected
        discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
        if random.random() < discontent_neighbors * self.content_discontent:
            return self.discontent
        return self.content
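SISaModel reads each of the settings listed in its docstring from the simulation environment (`self.env[...]`) and draws a per-agent value from a normal distribution around it. The sketch below shows the shape of the corresponding environment parameters; the numbers are placeholders, not values taken from this repository.

```python
# Illustrative placeholder values for the environment parameters consumed by
# SISaModel.__init__ above; real simulations define these in their own
# configuration, not here.
environment_params = {
    'neutral_discontent_spon_prob': 0.05,
    'neutral_discontent_infected_prob': 0.05,
    'neutral_content_spon_prob': 0.2,
    'neutral_content_infected_prob': 0.02,
    'discontent_neutral': 0.1,
    'discontent_content': 0.05,
    'variance_d_c': 0.02,
    'content_discontent': 0.01,
    'variance_c_d': 0.01,
    'content_neutral': 0.1,
    'standard_variance': 0.05,
}
```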
@@ -1,102 +0,0 @@
import random
from . import BaseAgent


class SentimentCorrelationModel(BaseAgent):
    """
    Settings:
        outside_effects_prob
        anger_prob
        joy_prob
        sadness_prob
        disgust_prob
    """

    def __init__(self, environment, agent_id=0, state=()):
        super().__init__(environment=environment, agent_id=agent_id, state=state)
        self.outside_effects_prob = environment.environment_params['outside_effects_prob']
        self.anger_prob = environment.environment_params['anger_prob']
        self.joy_prob = environment.environment_params['joy_prob']
        self.sadness_prob = environment.environment_params['sadness_prob']
        self.disgust_prob = environment.environment_params['disgust_prob']
        self.state['time_awareness'] = []
        for i in range(4):  # In this model we have 4 sentiments
            self.state['time_awareness'].append(0)  # 0 -> anger, 1 -> joy, 2 -> sadness, 3 -> disgust
        self.state['sentimentCorrelation'] = 0

    def step(self):
        self.behaviour()

    def behaviour(self):
        # Collect neighbours that expressed each sentiment within the last 500 time units
        angry_neighbors_1_time_step = []
        joyful_neighbors_1_time_step = []
        sad_neighbors_1_time_step = []
        disgusted_neighbors_1_time_step = []

        angry_neighbors = self.get_neighboring_agents(state_id=1)
        for x in angry_neighbors:
            if x.state['time_awareness'][0] > (self.env.now - 500):
                angry_neighbors_1_time_step.append(x)
        num_neighbors_angry = len(angry_neighbors_1_time_step)

        joyful_neighbors = self.get_neighboring_agents(state_id=2)
        for x in joyful_neighbors:
            if x.state['time_awareness'][1] > (self.env.now - 500):
                joyful_neighbors_1_time_step.append(x)
        num_neighbors_joyful = len(joyful_neighbors_1_time_step)

        sad_neighbors = self.get_neighboring_agents(state_id=3)
        for x in sad_neighbors:
            if x.state['time_awareness'][2] > (self.env.now - 500):
                sad_neighbors_1_time_step.append(x)
        num_neighbors_sad = len(sad_neighbors_1_time_step)

        disgusted_neighbors = self.get_neighboring_agents(state_id=4)
        for x in disgusted_neighbors:
            if x.state['time_awareness'][3] > (self.env.now - 500):
                disgusted_neighbors_1_time_step.append(x)
        num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)

        anger_prob = self.anger_prob + (len(angry_neighbors_1_time_step) * self.anger_prob)
        joy_prob = self.joy_prob + (len(joyful_neighbors_1_time_step) * self.joy_prob)
        sadness_prob = self.sadness_prob + (len(sad_neighbors_1_time_step) * self.sadness_prob)
        disgust_prob = self.disgust_prob + (len(disgusted_neighbors_1_time_step) * self.disgust_prob)
        outside_effects_prob = self.outside_effects_prob

        num = random.random()

        if num < outside_effects_prob:
            self.state['id'] = random.randint(1, 4)
            self.state['sentimentCorrelation'] = self.state['id']  # Stored for the dynamic network when the agent becomes infected
            self.state['time_awareness'][self.state['id'] - 1] = self.env.now
            self.state['sentiment'] = self.state['id']

        if num < anger_prob:
            self.state['id'] = 1
            self.state['sentimentCorrelation'] = 1
            self.state['time_awareness'][self.state['id'] - 1] = self.env.now
        elif num < anger_prob + joy_prob and num > anger_prob:
            self.state['id'] = 2
            self.state['sentimentCorrelation'] = 2
            self.state['time_awareness'][self.state['id'] - 1] = self.env.now
        elif num < anger_prob + joy_prob + sadness_prob and num > anger_prob + joy_prob:
            self.state['id'] = 3
            self.state['sentimentCorrelation'] = 3
            self.state['time_awareness'][self.state['id'] - 1] = self.env.now
        elif num < anger_prob + joy_prob + sadness_prob + disgust_prob and num > anger_prob + joy_prob + sadness_prob:
            self.state['id'] = 4
            self.state['sentimentCorrelation'] = 4
            self.state['time_awareness'][self.state['id'] - 1] = self.env.now

        self.state['sentiment'] = self.state['id']
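The chained comparisons at the end of behaviour() partition the single random draw `num` into consecutive intervals whose widths are the neighbour-scaled sentiment probabilities. The sketch below restates that interval logic with explicit cumulative thresholds, purely as an illustration of how the branches relate; it reuses the local variable names from behaviour() and is not part of the file.

```python
# Illustrative restatement of the elif chain above using cumulative
# thresholds; variable names mirror the local variables in behaviour().
thresholds = [
    (anger_prob, 1),                                           # anger
    (anger_prob + joy_prob, 2),                                # joy
    (anger_prob + joy_prob + sadness_prob, 3),                 # sadness
    (anger_prob + joy_prob + sadness_prob + disgust_prob, 4),  # disgust
]
for threshold, sentiment_id in thresholds:
    if num < threshold:
        self.state['id'] = sentiment_id
        self.state['sentimentCorrelation'] = sentiment_id
        self.state['time_awareness'][sentiment_id - 1] = self.env.now
        break
```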