mirror of
https://github.com/gsi-upm/soil
synced 2024-11-13 06:52:28 +00:00
Merge branch 'refactor-imports'
* remove leftover import in example
* Update quickstart tutorial
* Add gitlab-ci
* Added missing gexf for tests
* Upgrade to python3.7 and pandas 0.3.4, because pandas has dropped support for python 3.4 -> There are some API changes in pandas, and I've updated the code accordingly.
* Set pytest as the default test runner
* Update dockerignore
* Skip testing long examples (>1000 steps)
This commit is contained in:
commit
01cc8e9128
2
.dockerignore
Normal file
@ -0,0 +1,2 @@
|
||||
**/soil_output
|
||||
.*
|
21
.gitlab-ci.yml
Normal file
@ -0,0 +1,21 @@
|
||||
image: python:3.7
|
||||
|
||||
stages:
|
||||
- build
|
||||
- test
|
||||
|
||||
build:
|
||||
stage: build
|
||||
image:
|
||||
name: gcr.io/kaniko-project/executor:debug
|
||||
entrypoint: [""]
|
||||
script:
|
||||
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
|
||||
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
|
||||
only:
|
||||
- tags
|
||||
|
||||
|
||||
test:
|
||||
script:
|
||||
python setup.py test
|
@ -1,4 +1,11 @@
|
||||
FROM python:3.4-onbuild
|
||||
FROM python:3.7
|
||||
|
||||
WORKDIR /usr/src/app
|
||||
|
||||
COPY test-requirements.txt requirements.txt /usr/src/app/
|
||||
RUN pip install --no-cache-dir -r test-requirements.txt -r requirements.txt
|
||||
|
||||
COPY ./ /usr/src/app
|
||||
|
||||
RUN pip install '.[web]'
|
||||
|
||||
|
4
Makefile
Normal file
@ -0,0 +1,4 @@
|
||||
test:
|
||||
docker-compose exec dev python -m pytest -s -v
|
||||
|
||||
.PHONY: test
|
@ -2,6 +2,8 @@ version: '3'
|
||||
services:
|
||||
dev:
|
||||
build: .
|
||||
environment:
|
||||
PYTHONDONTWRITEBYTECODE: 1
|
||||
volumes:
|
||||
- .:/usr/src/app
|
||||
tty: true
|
||||
|
244
docs/configuration.rst
Normal file
@ -0,0 +1,244 @@
|
||||
Configuring a simulation
|
||||
------------------------
|
||||
|
||||
There are two ways to configure a simulation: programmatically and with a configuration file.
|
||||
In both cases, the parameters used are the same.
|
||||
The advantage of a configuration file is that it is a clean declarative description, and it makes it easier to reproduce.
|
||||
|
||||
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
|
||||
Here's an example (``example.yml``).
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
name: MyExampleSimulation
|
||||
max_time: 50
|
||||
num_trials: 3
|
||||
interval: 2
|
||||
network_params:
|
||||
generator: barabasi_albert_graph
|
||||
n: 100
|
||||
m: 2
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: content
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
- agent_type: SISaModel
|
||||
weight: 8
|
||||
state:
|
||||
id: neutral
|
||||
environment_params:
|
||||
prob_infect: 0.075
|
||||
|
||||
|
||||
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
|
||||
The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
|
||||
10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
|
||||
All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infect``.
|
||||
The state of the agents will be updated every 2 seconds (``interval``).
|
||||
|
||||
Now run the simulation with the command line tool:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
soil example.yml
|
||||
|
||||
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
|
||||
Three types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a ``sqlite`` file with the content of the state of every network node and the environment parameters at every step of the simulation.
|
||||
|
||||
|
||||
.. code::
|
||||
|
||||
soil_output
|
||||
└── MyExampleSimulation
|
||||
├── MyExampleSimulation.dumped.yml
|
||||
├── MyExampleSimulation.simulation.pickle
|
||||
├── MyExampleSimulation_trial_0.db.sqlite
|
||||
├── MyExampleSimulation_trial_1.db.sqlite
|
||||
└── MyExampleSimulation_trial_2.db.sqlite
|
||||
|
||||
|
||||
You may also ask soil to export the states in a ``csv`` file, and the network in gephi format (``gexf``).
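For instance, ``soil --csv --graph example.yml`` (the same flags used in the quickstart) writes a per-trial ``csv`` history and a ``gexf`` dump of the network alongside the default output.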
|
||||
|
||||
Network
|
||||
=======
|
||||
|
||||
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
|
||||
|
||||
Loading a network
|
||||
#################
|
||||
|
||||
To load an existing network, specify its path in the configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
network_params:
|
||||
path: /tmp/mynetwork.gexf
|
||||
|
||||
Soil will try to guess what networkx method to use to read the file based on its extension.
|
||||
However, we only test using ``gexf`` files.
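If in doubt about whether a file will load cleanly, a quick and independent sanity check is to read it with networkx yourself (a sketch; the path is the example one from the snippet above):

.. code:: python

    import networkx as nx

    # gexf is the format the Soil test suite exercises
    G = nx.read_gexf('/tmp/mynetwork.gexf')
    print('Loaded {} nodes and {} edges'.format(len(G.nodes()), len(G.edges())))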
|
||||
|
||||
For simple networks, you may also include them in the configuration itself, using the ``topology`` parameter like so:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
topology:
|
||||
nodes:
|
||||
- id: First
|
||||
- id: Second
|
||||
links:
|
||||
- source: First
|
||||
target: Second
|
||||
|
||||
|
||||
Generating a random network
|
||||
###########################
|
||||
|
||||
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
|
||||
For example, the following configuration is equivalent to :code:`nx.complete_graph(n=100)`:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_params:
|
||||
generator: complete_graph
|
||||
n: 100
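The ``generator`` name and its parameters map directly onto networkx calls, so you can preview a topology in plain Python before handing the same parameters to Soil (a quick sketch, independent of how Soil builds the network internally):

.. code:: python

    import networkx as nx

    # Equivalent to the generator/parameter pairs used in this document
    complete = nx.complete_graph(n=100)
    scale_free = nx.barabasi_albert_graph(n=100, m=2)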
|
||||
|
||||
Environment
|
||||
============
|
||||
The environment is the place where the shared state of the simulation is stored.
|
||||
For instance, it may hold the probability of a disease outbreak.
|
||||
The configuration file may specify the initial value of the environment parameters:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
environment_params:
|
||||
daily_probability_of_earthquake: 0.001
|
||||
number_of_earthquakes: 0
|
||||
|
||||
All agents have access to the environment parameters.
|
||||
|
||||
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
|
||||
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
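As a minimal sketch of that idea (the ``Lottery`` class and its parameters are made up for illustration; the structure follows the ``CityPubs`` environment shipped with the examples):

.. code:: python

    from random import random

    from soil import Environment


    class Lottery(Environment):
        '''Hypothetical environment that decides whether an agent wins, instead of the agent itself.'''

        def __init__(self, *args, prob_winning=0.001, **kwargs):
            super().__init__(*args, **kwargs)
            # Environments support dictionary-style access to shared state
            self['prob_winning'] = prob_winning

        def play(self, agent):
            '''Called by agents; returns True if this agent wins the draw.'''
            return random() < self['prob_winning']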
|
||||
|
||||
|
||||
Agents
|
||||
======
|
||||
Agents are a way of modelling behavior.
|
||||
Agents can be characterized with two variables: agent type (``agent_type``) and state.
|
||||
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
|
||||
Through the environment, it can access the network topology and the state of other agents.
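The bundled examples implement agent behavior as finite state machines, using ``soil.agents.FSM`` and the ``state``/``default_state`` decorators. A minimal sketch (the ``MoodAgent`` class and its states are hypothetical):

.. code:: python

    from random import random

    from soil.agents import FSM, state, default_state


    class MoodAgent(FSM):
        '''Hypothetical agent that may get excited, driven by an environment parameter.'''

        @default_state
        @state
        def calm(self):
            # Environment parameters are reachable through self.env
            if random() < self.env['prob_infect']:
                return self.excited

        @state
        def excited(self):
            self.info('I got excited!')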
|
||||
|
||||
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
|
||||
|
||||
Network Agents
|
||||
##############
|
||||
Network agents are attached to a node in the topology.
|
||||
The configuration file allows you to specify how agents will be mapped to topology nodes.
|
||||
|
||||
The simplest way is to specify a single type of agent.
|
||||
That way, every node in the network will be associated with an agent of that type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
|
||||
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
|
||||
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
- agent_type: CounterModel
|
||||
weight: 5
|
||||
|
||||
The third option is to specify the type of agent on the node itself, e.g.:
|
||||
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
topology:
|
||||
nodes:
|
||||
- id: first
|
||||
agent_type: BaseAgent
|
||||
states:
|
||||
first:
|
||||
agent_type: SISaModel
|
||||
|
||||
|
||||
This would also work with a randomly generated network:
|
||||
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network:
|
||||
generator: complete_graph
|
||||
n: 5
|
||||
agent_type: BaseAgent
|
||||
states:
|
||||
- agent_type: SISaModel
|
||||
|
||||
|
||||
|
||||
In addition to agent type, you may add a custom initial state to the distribution.
|
||||
This is very useful for adding the same agent type with different initial states.
For example, to populate the network with SISaModel agents, roughly 10% of them starting in the discontent state:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
weight: 9
|
||||
state:
|
||||
id: neutral
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
|
||||
Lastly, the configuration may include initial state for one or more nodes.
|
||||
For instance, to add a state for the two nodes in this configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
network:
|
||||
generator: complete_graph
|
||||
n: 2
|
||||
states:
|
||||
- id: content
|
||||
- id: discontent
|
||||
|
||||
|
||||
You can also add state only to specific nodes, selected by ``id``.
For example, to apply special skills to Linus Torvalds in a simulation:
|
||||
|
||||
.. literalinclude:: ../examples/torvalds.yml
|
||||
:language: yaml
|
||||
|
||||
|
||||
Environment Agents
|
||||
##################
|
||||
In addition to network agents, more agents can be added to the simulation.
|
||||
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
|
||||
|
||||
|
||||
.. code::
|
||||
|
||||
environment_agents:
|
||||
- agent_type: MyAgent
|
||||
state:
|
||||
mood: happy
|
||||
- agent_type: DummyAgent
|
||||
|
||||
|
||||
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
|
||||
They are also useful to add behavior that has little to do with the network and the interactions within that network.
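A sketch of such an agent, following the same FSM pattern as the ``Police`` agent in the pub-crawl example (the ``Earthquake`` class is hypothetical; the parameter names match the environment example above):

.. code:: python

    from random import random

    from soil.agents import FSM, state, default_state


    class Earthquake(FSM):
        '''Hypothetical environment agent that triggers a disaster with a small probability each step.'''

        @default_state
        @state
        def waiting(self):
            if random() < self.env['daily_probability_of_earthquake']:
                # Update the shared state so every other agent can react to it
                self.env['number_of_earthquakes'] += 1
                self.info('An earthquake just happened!')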
|
@ -6,7 +6,7 @@
|
||||
Welcome to Soil's documentation!
|
||||
================================
|
||||
|
||||
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
|
||||
Soil is an Agent-based Social Simulator in Python focused on Social Networks.
|
||||
|
||||
If you use Soil in your research, do not forget to cite this paper:
|
||||
|
||||
@ -39,6 +39,7 @@ If you use Soil in your research, do not forget to cite this paper:
|
||||
|
||||
installation
|
||||
quickstart
|
||||
configuration
|
||||
Tutorial <soil_tutorial>
|
||||
|
||||
..
|
||||
|
@ -1,197 +1,94 @@
|
||||
Quickstart
|
||||
----------
|
||||
|
||||
This section shows how to run simulations from simulation configuration files.
|
||||
First of all, you need to install the package (See :doc:`installation`)
|
||||
This section shows how to run your first simulation with Soil.
|
||||
For installation instructions, see :doc:`installation`.
|
||||
|
||||
Simulation configuration files are ``json`` or ``yaml`` files that define all the parameters of a simulation.
|
||||
Here's an example (``example.yml``).
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
name: MyExampleSimulation
|
||||
max_time: 50
|
||||
num_trials: 3
|
||||
interval: 2
|
||||
network_params:
|
||||
network_type: barabasi_albert_graph
|
||||
n: 100
|
||||
m: 2
|
||||
agent_distribution:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: content
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
- agent_type: SISaModel
|
||||
weight: 8
|
||||
state:
|
||||
id: neutral
|
||||
environment_params:
|
||||
prob_infect: 0.075
|
||||
A simulation has two main parts: agent classes and the simulation configuration.
|
||||
An agent class defines how the agent will behave throughout the simulation.
|
||||
The configuration includes things such as the number of agents and their types, the network topology to use, etc.
|
||||
|
||||
|
||||
This example configuration will run three trials of a simulation containing a randomly generated network.
|
||||
The 100 nodes in the network will be SISaModel agents, 10% of them will start in the content state, 10% in the discontent state, and the remaining 80% in the neutral state.
|
||||
All agents will have access to the environment, which only contains one variable, ``prob_infected``.
|
||||
The state of the agents will be updated every 2 seconds (``interval``).
|
||||
|
||||
Now run the simulation with the command line tool:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
soil example.yml
|
||||
|
||||
Once the simulation finishes, its results will be stored in a folder named ``MyExampleSimulation``.
|
||||
Four types of objects are saved by default: a pickle of the simulation; a ``YAML`` representation of the simulation (which can be used to re-launch it); and for every trial, a csv file with the content of the state of every network node and the environment parameters at every step of the simulation, as well as the network in gephi format (``gexf``).
|
||||
.. image:: soil.png
|
||||
:width: 80%
|
||||
:align: center
|
||||
|
||||
|
||||
.. code::
|
||||
Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
|
||||
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.
|
||||
The configuration is the following:
|
||||
|
||||
soil_output
|
||||
├── Sim_prob_0
|
||||
│ ├── Sim_prob_0.dumped.yml
|
||||
│ ├── Sim_prob_0.simulation.pickle
|
||||
│ ├── Sim_prob_0_trial_0.environment.csv
|
||||
│ └── Sim_prob_0_trial_0.gexf
|
||||
|
||||
|
||||
Network
|
||||
=======
|
||||
|
||||
The network topology for the simulation can be loaded from an existing network file or generated with one of the random network generation methods from networkx.
|
||||
|
||||
Loading a network
|
||||
#################
|
||||
|
||||
To load an existing network, specify its path in the configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
network_params:
|
||||
path: /tmp/mynetwork.gexf
|
||||
|
||||
Soil will try to guess what networkx method to use to read the file based on its extension.
|
||||
However, we only test using ``gexf`` files.
|
||||
|
||||
Generating a random network
|
||||
###########################
|
||||
|
||||
To generate a random network using one of networkx's built-in methods, specify the `graph generation algorithm <https://networkx.github.io/documentation/development/reference/generators.html>`_ and other parameters.
|
||||
For example, the following configuration is equivalent to :code:`nx.complete_graph(100)`:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
network_params:
|
||||
network_type: complete_graph
|
||||
n: 100
|
||||
|
||||
Environment
|
||||
============
|
||||
The environment is the place where the shared state of the simulation is stored.
|
||||
For instance, the probability of disease outbreak.
|
||||
The configuration file may specify the initial value of the environment parameters:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
environment_params:
|
||||
daily_probability_of_earthquake: 0.001
|
||||
number_of_earthquakes: 0
|
||||
|
||||
Any agent has unrestricted access to the environment.
|
||||
However, for the sake of simplicity, we recommend limiting environment updates to environment agents.
|
||||
|
||||
Agents
|
||||
======
|
||||
Agents are a way of modelling behavior.
|
||||
Agents can be characterized with two variables: an agent type (``agent_type``) and its state.
|
||||
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
|
||||
Through the environment, it can access the network topology and the state of other agents.
|
||||
|
||||
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
|
||||
|
||||
Network Agents
|
||||
##############
|
||||
Network agents are attached to a node in the topology.
|
||||
The configuration file allows you to specify how agents will be mapped to topology nodes.
|
||||
|
||||
The simplest way is to specify a single type of agent.
|
||||
Hence, every node in the network will be associated to an agent of that type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
|
||||
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
|
||||
For instance, with the following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_distribution:
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
- agent_type: CounterModel
|
||||
weight: 5
|
||||
|
||||
In addition to agent type, you may also add a custom initial state to the distribution.
|
||||
This is very useful to add the same agent type with different states.
|
||||
e.g., to populate the network with SISaModel, roughly 10% of them with a discontent state:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_distribution:
|
||||
- agent_type: SISaModel
|
||||
weight: 9
|
||||
state:
|
||||
id: neutral
|
||||
- agent_type: SISaModel
|
||||
weight: 1
|
||||
state:
|
||||
id: discontent
|
||||
|
||||
Lastly, the configuration may include initial state for one or more nodes.
|
||||
For instance, to add a state for the two nodes in this configuration:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
agent_type: SISaModel
|
||||
network:
|
||||
network_type: complete_graph
|
||||
n: 2
|
||||
states:
|
||||
- id: content
|
||||
- id: discontent
|
||||
|
||||
|
||||
Or to add state only to specific nodes (by ``id``).
|
||||
For example, to apply special skills to Linus Torvalds in a simulation:
|
||||
|
||||
.. literalinclude:: ../examples/torvalds.yml
|
||||
.. literalinclude:: quickstart.yml
|
||||
:language: yaml
|
||||
|
||||
Configuration
|
||||
=============
|
||||
|
||||
Environment Agents
|
||||
##################
|
||||
In addition to network agents, more agents can be added to the simulation.
|
||||
These agents are programmed in much the same way as network agents; the only difference is that they will not be assigned to network nodes.
|
||||
You may :download:`download the file <quickstart.yml>` directly.
|
||||
The agent type used, SISa, is a very simple model.
|
||||
It only has three states (neutral, content and discontent).
|
||||
Its parameters are the probabilities to change from one state to another, either spontaneously or because of contagion from neighboring agents.
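Those probabilities are exactly the ``environment_params`` listed in the configuration (``neutral_content_spon_prob``, ``neutral_content_infected_prob``, ``discontent_neutral`` and so on).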
|
||||
|
||||
Running the simulation
|
||||
======================
|
||||
|
||||
To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.
|
||||
|
||||
.. code::
|
||||
|
||||
environment_agents:
|
||||
- agent_type: MyAgent
|
||||
state:
|
||||
mood: happy
|
||||
- agent_type: DummyAgent
|
||||
❯ soil --graph --csv quickstart.yml [13:35:29]
|
||||
INFO:soil:Using config(s): quickstart
|
||||
INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
|
||||
INFO:soil:Starting simulation quickstart at 13:35:30.
|
||||
INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
|
||||
INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
|
||||
INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
|
||||
INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
|
||||
INFO:soil:Dumping results to soil_output/quickstart
|
||||
INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds
|
||||
|
||||
|
||||
Visualizing the results
|
||||
=======================
|
||||
The ``CSV`` file should look like this:
|
||||
|
||||
The simulation will return a dynamic graph .gexf file which could be visualized with
|
||||
.. code::
|
||||
|
||||
agent_id,t_step,key,value
|
||||
env,0,neutral_discontent_spon_prob,0.05
|
||||
env,0,neutral_discontent_infected_prob,0.1
|
||||
env,0,neutral_content_spon_prob,0.2
|
||||
env,0,neutral_content_infected_prob,0.4
|
||||
env,0,discontent_neutral,0.2
|
||||
env,0,discontent_content,0.05
|
||||
env,0,content_discontent,0.05
|
||||
env,0,variance_d_c,0.05
|
||||
env,0,variance_c_d,0.1
|
||||
|
||||
Results and visualization
|
||||
=========================
|
||||
|
||||
The environment variables are stored under the special ``agent_id`` ``env``.
The exported values are only stored when they change.
|
||||
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.
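A quick way to poke at that file is pandas (a sketch; the path follows the naming pattern shown above and may differ on your machine):

.. code:: python

    import pandas as pd

    # Path is an example; adjust it to your output folder
    df = pd.read_csv('soil_output/quickstart/quickstart_trial_0.environment.csv')

    # Environment parameters are stored under the special agent_id 'env'
    env_history = df[df['agent_id'] == 'env']

    # One column per key, indexed by time step.
    # Remember that values are only written when they change.
    wide = env_history.pivot_table(index='t_step', columns='key',
                                   values='value', aggfunc='first')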
|
||||
|
||||
The dynamic graph is exported as a .gexf file which could be visualized with
|
||||
`Gephi <https://gephi.org/users/download/>`__.
|
||||
Now it is your turn to experiment with the simulation.
|
||||
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.
|
||||
|
||||
|
||||
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
|
||||
To make it work, you have to install soil like this:
|
||||
|
||||
.. code:: bash

   pip install soil[web]
|
||||
|
||||
Once installed, the soil web UI can be run in two ways:
|
||||
|
||||
.. code:: bash

   soil-web

or

.. code:: bash

   python -m soil.web
|
30
docs/quickstart.yml
Normal file
@ -0,0 +1,30 @@
|
||||
---
|
||||
name: quickstart
|
||||
num_trials: 1
|
||||
max_time: 1000
|
||||
network_agents:
|
||||
- agent_type: SISaModel
|
||||
state:
|
||||
id: neutral
|
||||
weight: 1
|
||||
- agent_type: SISaModel
|
||||
state:
|
||||
id: content
|
||||
weight: 2
|
||||
network_params:
|
||||
n: 100
|
||||
k: 5
|
||||
p: 0.2
|
||||
generator: newman_watts_strogatz_graph
|
||||
environment_params:
|
||||
neutral_discontent_spon_prob: 0.05
|
||||
neutral_discontent_infected_prob: 0.1
|
||||
neutral_content_spon_prob: 0.2
|
||||
neutral_content_infected_prob: 0.4
|
||||
discontent_neutral: 0.2
|
||||
discontent_content: 0.05
|
||||
content_discontent: 0.05
|
||||
variance_d_c: 0.05
|
||||
variance_c_d: 0.1
|
||||
content_neutral: 0.1
|
||||
standard_variance: 0.1
|
BIN
docs/soil.png
Normal file
Binary file not shown.
@ -214,7 +214,7 @@ nodes in that network. Notice how node 0 is the only one with a TV.
|
||||
MAX_TIME = 100
|
||||
EVENT_TIME = 10
|
||||
|
||||
sim = soil.simulation.SoilSimulation(topology=G,
|
||||
sim = soil.Simulation(topology=G,
|
||||
num_trials=1,
|
||||
max_time=MAX_TIME,
|
||||
environment_agents=[{'agent_type': NewsEnvironmentAgent,
|
||||
|
@ -2,14 +2,22 @@
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"start_time": "2017-11-02T09:48:41.843Z"
|
||||
},
|
||||
"scrolled": false
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Populating the interactive namespace from numpy and matplotlib\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import soil\n",
|
||||
"import networkx as nx\n",
|
||||
@ -39,26 +47,216 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"total 288K\r\n",
|
||||
"drwxr-xr-x 7 j users 4.0K May 23 12:48 .\r\n",
|
||||
"drwxr-xr-x 15 j users 20K May 7 18:59 ..\r\n",
|
||||
"-rw-r--r-- 1 j users 451 Oct 17 2017 complete.yml\r\n",
|
||||
"drwxr-xr-x 2 j users 4.0K Feb 18 11:22 .ipynb_checkpoints\r\n",
|
||||
"drwxr-xr-x 2 j users 4.0K Oct 17 2017 long_running\r\n",
|
||||
"-rw-r--r-- 1 j users 1.2K May 23 12:49 .nbgrader.log\r\n",
|
||||
"drwxr-xr-x 4 j users 4.0K May 4 11:23 newsspread\r\n",
|
||||
"-rw-r--r-- 1 j users 225K May 4 11:23 NewsSpread.ipynb\r\n",
|
||||
"drwxr-xr-x 4 j users 4.0K May 4 11:21 rabbits\r\n",
|
||||
"-rw-r--r-- 1 j users 42 Jul 3 2017 torvalds.edgelist\r\n",
|
||||
"-rw-r--r-- 1 j users 245 Oct 13 2017 torvalds.yml\r\n",
|
||||
"drwxr-xr-x 4 j users 4.0K May 4 11:23 tutorial\r\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"!ls "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"start_time": "2017-11-02T09:48:43.440Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"---\r\n",
|
||||
"default_state: {}\r\n",
|
||||
"load_module: newsspread\r\n",
|
||||
"environment_agents: []\r\n",
|
||||
"environment_params:\r\n",
|
||||
" prob_neighbor_spread: 0.0\r\n",
|
||||
" prob_tv_spread: 0.01\r\n",
|
||||
"interval: 1\r\n",
|
||||
"max_time: 30\r\n",
|
||||
"name: Sim_all_dumb\r\n",
|
||||
"network_agents:\r\n",
|
||||
"- agent_type: DumbViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: false\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: DumbViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" weight: 1\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"num_trials: 50\r\n",
|
||||
"---\r\n",
|
||||
"default_state: {}\r\n",
|
||||
"load_module: newsspread\r\n",
|
||||
"environment_agents: []\r\n",
|
||||
"environment_params:\r\n",
|
||||
" prob_neighbor_spread: 0.0\r\n",
|
||||
" prob_tv_spread: 0.01\r\n",
|
||||
"interval: 1\r\n",
|
||||
"max_time: 30\r\n",
|
||||
"name: Sim_half_herd\r\n",
|
||||
"network_agents:\r\n",
|
||||
"- agent_type: DumbViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: false\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: DumbViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: HerdViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: false\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: HerdViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" weight: 1\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"num_trials: 50\r\n",
|
||||
"---\r\n",
|
||||
"default_state: {}\r\n",
|
||||
"load_module: newsspread\r\n",
|
||||
"environment_agents: []\r\n",
|
||||
"environment_params:\r\n",
|
||||
" prob_neighbor_spread: 0.0\r\n",
|
||||
" prob_tv_spread: 0.01\r\n",
|
||||
"interval: 1\r\n",
|
||||
"max_time: 30\r\n",
|
||||
"name: Sim_all_herd\r\n",
|
||||
"network_agents:\r\n",
|
||||
"- agent_type: HerdViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" id: neutral\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: HerdViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" id: neutral\r\n",
|
||||
" weight: 1\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"num_trials: 50\r\n",
|
||||
"---\r\n",
|
||||
"default_state: {}\r\n",
|
||||
"load_module: newsspread\r\n",
|
||||
"environment_agents: []\r\n",
|
||||
"environment_params:\r\n",
|
||||
" prob_neighbor_spread: 0.0\r\n",
|
||||
" prob_tv_spread: 0.01\r\n",
|
||||
" prob_neighbor_cure: 0.1\r\n",
|
||||
"interval: 1\r\n",
|
||||
"max_time: 30\r\n",
|
||||
"name: Sim_wise_herd\r\n",
|
||||
"network_agents:\r\n",
|
||||
"- agent_type: HerdViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" id: neutral\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: WiseViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" weight: 1\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"num_trials: 50\r\n",
|
||||
"---\r\n",
|
||||
"default_state: {}\r\n",
|
||||
"load_module: newsspread\r\n",
|
||||
"environment_agents: []\r\n",
|
||||
"environment_params:\r\n",
|
||||
" prob_neighbor_spread: 0.0\r\n",
|
||||
" prob_tv_spread: 0.01\r\n",
|
||||
" prob_neighbor_cure: 0.1\r\n",
|
||||
"interval: 1\r\n",
|
||||
"max_time: 30\r\n",
|
||||
"name: Sim_all_wise\r\n",
|
||||
"network_agents:\r\n",
|
||||
"- agent_type: WiseViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" id: neutral\r\n",
|
||||
" weight: 1\r\n",
|
||||
"- agent_type: WiseViewer\r\n",
|
||||
" state:\r\n",
|
||||
" has_tv: true\r\n",
|
||||
" weight: 1\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"network_params:\r\n",
|
||||
" generator: barabasi_albert_graph\r\n",
|
||||
" n: 500\r\n",
|
||||
" m: 5\r\n",
|
||||
"num_trials: 50\r\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"!cat NewsSpread.yml"
|
||||
"!cat newsspread/NewsSpread.yml"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 10,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"start_time": "2017-11-02T09:48:43.879Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"ename": "ValueError",
|
||||
"evalue": "No objects to concatenate",
|
||||
"output_type": "error",
|
||||
"traceback": [
|
||||
"\u001b[0;31m----------------------------------------------------------------------\u001b[0m",
|
||||
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
|
||||
"\u001b[0;32m<ipython-input-10-bae848826594>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mevodumb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0manalysis\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_data\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'soil_output/Sim_all_dumb/'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mgroup\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mprocess\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0manalysis\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_count\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkeys\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'id'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m;\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
|
||||
"\u001b[0;32m~/git/lab.gsi/soil/soil/soil/analysis.py\u001b[0m in \u001b[0;36mread_data\u001b[0;34m(group, *args, **kwargs)\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0miterable\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_read_data\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mgroup\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mgroup_trials\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0miterable\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 14\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mlist\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0miterable\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
|
||||
"\u001b[0;32m~/git/lab.gsi/soil/soil/soil/analysis.py\u001b[0m in \u001b[0;36mgroup_trials\u001b[0;34m(trials, aggfunc)\u001b[0m\n\u001b[1;32m 159\u001b[0m \u001b[0mtrials\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlist\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrials\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 160\u001b[0m \u001b[0mtrials\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlist\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmap\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mlambda\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtuple\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32melse\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtrials\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 161\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mconcat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrials\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgroupby\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlevel\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0magg\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0maggfunc\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreorder_levels\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m,\u001b[0m\u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 162\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 163\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
|
||||
"\u001b[0;32m~/.local/lib/python3.6/site-packages/pandas/core/reshape/concat.py\u001b[0m in \u001b[0;36mconcat\u001b[0;34m(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, copy)\u001b[0m\n\u001b[1;32m 210\u001b[0m \u001b[0mkeys\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mkeys\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlevels\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlevels\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnames\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mnames\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 211\u001b[0m \u001b[0mverify_integrity\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mverify_integrity\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 212\u001b[0;31m copy=copy)\n\u001b[0m\u001b[1;32m 213\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mop\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_result\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 214\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
|
||||
"\u001b[0;32m~/.local/lib/python3.6/site-packages/pandas/core/reshape/concat.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy)\u001b[0m\n\u001b[1;32m 243\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 244\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mobjs\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 245\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mValueError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'No objects to concatenate'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 246\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 247\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mkeys\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
|
||||
"\u001b[0;31mValueError\u001b[0m: No objects to concatenate"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evodumb = analysis.read_data('soil_output/Sim_all_dumb/', group=True, process=analysis.get_count, keys=['id']);"
|
||||
]
|
||||
@ -302,7 +500,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.2"
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"toc": {
|
||||
"colors": {
|
||||
|
80808
examples/Untitled.ipynb
Normal file
File diff suppressed because it is too large
@ -2,6 +2,7 @@
|
||||
name: simple
|
||||
dir_path: "/tmp/"
|
||||
num_trials: 3
|
||||
dry_run: True
|
||||
max_time: 100
|
||||
interval: 1
|
||||
seed: "CompleteSeed!"
|
||||
@ -17,6 +18,7 @@ network_agents:
|
||||
- agent_type: AggregatedCounter
|
||||
weight: 0.2
|
||||
environment_agents: []
|
||||
environment_class: Environment
|
||||
environment_params:
|
||||
am_i_complete: true
|
||||
default_state:
|
||||
|
10
examples/pubcrawl/README.md
Normal file
@ -0,0 +1,10 @@
|
||||
Simulation of pubs and drinking pals that go from pub to pub.
|
||||
|
||||
The custom environment includes a list of pubs and methods to allow agents to discover and enter pubs.
|
||||
There are two types of agents:
|
||||
|
||||
* Patron. A patron will do three things, in this order:
  * Look for other patrons to drink with
  * Look for a pub where the agent and other agents in the same group can get in.
  * While in the pub, patrons only drink, until they get drunk and are taken home.
* Police. There is only one police agent that will take any drunk patrons home (kick them out of the pub).
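One way to run this example (judging from the `__main__` block in `pubcrawl.py`) is `python pubcrawl.py`, which loads `pubcrawl.yml` and runs it as a dry run, without dumping results to disk.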
|
174
examples/pubcrawl/pubcrawl.py
Normal file
@ -0,0 +1,174 @@
|
||||
from soil.agents import FSM, state, default_state
|
||||
from soil import Environment
|
||||
from random import random, shuffle
|
||||
from itertools import islice
|
||||
import logging
|
||||
|
||||
|
||||
class CityPubs(Environment):
|
||||
'''Environment with Pubs'''
|
||||
level = logging.INFO
|
||||
|
||||
def __init__(self, *args, number_of_pubs=3, pub_capacity=10, **kwargs):
|
||||
super(CityPubs, self).__init__(*args, **kwargs)
|
||||
pubs = {}
|
||||
for i in range(number_of_pubs):
|
||||
newpub = {
|
||||
'name': 'The awesome pub #{}'.format(i),
|
||||
'open': True,
|
||||
'capacity': pub_capacity,
|
||||
'occupancy': 0,
|
||||
}
|
||||
pubs[newpub['name']] = newpub
|
||||
self['pubs'] = pubs
|
||||
|
||||
def enter(self, pub_id, *nodes):
|
||||
'''Agents will try to enter. The pub checks if it is possible'''
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
if not pub['open'] or (pub['capacity'] < (len(nodes) + pub['occupancy'])):
|
||||
return False
|
||||
pub['occupancy'] += len(nodes)
|
||||
for node in nodes:
|
||||
node['pub'] = pub_id
|
||||
return True
|
||||
|
||||
def available_pubs(self):
|
||||
for pub in self['pubs'].values():
|
||||
if pub['open'] and (pub['occupancy'] < pub['capacity']):
|
||||
yield pub['name']
|
||||
|
||||
def exit(self, pub_id, *node_ids):
|
||||
'''Agents will notify the pub they want to leave'''
|
||||
try:
|
||||
pub = self['pubs'][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError('Pub {} is not available'.format(pub_id))
|
||||
for node_id in node_ids:
|
||||
node = self.get_agent(node_id)
|
||||
if pub_id == node['pub']:
|
||||
del node['pub']
|
||||
pub['occupancy'] -= 1
|
||||
|
||||
|
||||
class Patron(FSM):
|
||||
'''Agent that looks for friends to drink with. It will do three things:
|
||||
1) Look for other patrons to drink with
|
||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||
'''
|
||||
level = logging.INFO
|
||||
|
||||
defaults = {
|
||||
'pub': None,
|
||||
'drunk': False,
|
||||
'pints': 0,
|
||||
'max_pints': 3,
|
||||
}
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def looking_for_friends(self):
|
||||
'''Look for friends to drink with'''
|
||||
self.info('I am looking for friends')
|
||||
available_friends = list(self.get_agents(drunk=False,
|
||||
pub=None,
|
||||
state_id=self.looking_for_friends.id))
|
||||
if not available_friends:
|
||||
self.info('Life sucks and I\'m alone!')
|
||||
return self.at_home
|
||||
befriended = self.try_friends(available_friends)
|
||||
if befriended:
|
||||
return self.looking_for_pub
|
||||
|
||||
@state
|
||||
def looking_for_pub(self):
|
||||
'''Look for a pub that accepts me and my friends'''
|
||||
if self['pub'] is not None:
|
||||
return self.sober_in_pub
|
||||
self.debug('I am looking for a pub')
|
||||
group = list(self.get_neighboring_agents())
|
||||
for pub in self.env.available_pubs():
|
||||
self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
|
||||
if self.env.enter(pub, self, *group):
|
||||
self.info('We\'re all {} getting in {}!'.format(len(group), pub))
|
||||
return self.sober_in_pub
|
||||
|
||||
@state
|
||||
def sober_in_pub(self):
|
||||
'''Drink up.'''
|
||||
self.drink()
|
||||
if self['pints'] > self['max_pints']:
|
||||
return self.drunk_in_pub
|
||||
|
||||
@state
|
||||
def drunk_in_pub(self):
|
||||
'''I'm out. Take me home!'''
|
||||
self.info('I\'m so drunk. Take me home!')
|
||||
self['drunk'] = True
|
||||
pass # out drunk
|
||||
|
||||
@state
|
||||
def at_home(self):
|
||||
'''The end'''
|
||||
self.debug('Life sucks. I\'m home!')
|
||||
|
||||
def drink(self):
|
||||
self['pints'] += 1
|
||||
self.debug('Cheers to that')
|
||||
|
||||
def kick_out(self):
|
||||
self.set_state(self.at_home)
|
||||
|
||||
def befriend(self, other_agent, force=False):
|
||||
'''
|
||||
Try to become friends with another agent. The chances of
|
||||
success depend on both agents' openness.
|
||||
'''
|
||||
if force or self['openness'] > random():
|
||||
self.env.add_edge(self, other_agent)
|
||||
self.info('Made some friend {}'.format(other_agent))
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_friends(self, others):
|
||||
''' Look for random agents around me and try to befriend them'''
|
||||
befriended = False
|
||||
k = int(10*self['openness'])
|
||||
shuffle(others)
|
||||
for friend in islice(others, k): # random.choice >= 3.7
|
||||
if friend == self:
|
||||
continue
|
||||
if friend.befriend(self):
|
||||
self.befriend(friend, force=True)
|
||||
self.debug('Hooray! new friend: {}'.format(friend.id))
|
||||
befriended = True
|
||||
else:
|
||||
self.debug('{} does not want to be friends'.format(friend.id))
|
||||
return befriended
|
||||
|
||||
|
||||
class Police(FSM):
|
||||
'''Simple agent to take drunk people out of pubs.'''
|
||||
level = logging.INFO
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def patrol(self):
|
||||
drunksters = list(self.get_agents(drunk=True,
|
||||
state_id=Patron.drunk_in_pub.id))
|
||||
for drunk in drunksters:
|
||||
self.info('Kicking out the trash: {}'.format(drunk.id))
|
||||
drunk.kick_out()
|
||||
if not drunksters:
|
||||
self.info('No trash to take out. Too bad.')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
from soil import simulation
|
||||
simulation.run_from_config('pubcrawl.yml',
|
||||
dry_run=True,
|
||||
dump=None,
|
||||
parallel=False)
|
26
examples/pubcrawl/pubcrawl.yml
Normal file
@ -0,0 +1,26 @@
|
||||
---
|
||||
name: pubcrawl
|
||||
num_trials: 3
|
||||
max_time: 10
|
||||
dump: false
|
||||
network_params:
|
||||
# Generate 100 empty nodes. They will be assigned a network agent
|
||||
generator: empty_graph
|
||||
n: 30
|
||||
network_agents:
|
||||
- agent_type: pubcrawl.Patron
|
||||
description: Extroverted patron
|
||||
state:
|
||||
openness: 1.0
|
||||
weight: 9
|
||||
- agent_type: pubcrawl.Patron
|
||||
description: Introverted patron
|
||||
state:
|
||||
openness: 0.1
|
||||
weight: 1
|
||||
environment_agents:
|
||||
- agent_type: pubcrawl.Police
|
||||
environment_class: pubcrawl.CityPubs
|
||||
environment_params:
|
||||
altercations: 0
|
||||
number_of_pubs: 3
|
@ -1,7 +1,7 @@
|
||||
---
|
||||
load_module: rabbit_agents
|
||||
name: rabbits_example
|
||||
max_time: 1200
|
||||
max_time: 500
|
||||
interval: 1
|
||||
seed: MySeed
|
||||
agent_type: RabbitModel
|
||||
|
@ -12327,7 +12327,7 @@ Notice how node 0 is the only one with a TV.</p>
|
||||
<span class="n">MAX_TIME</span> <span class="o">=</span> <span class="mi">100</span>
|
||||
<span class="n">EVENT_TIME</span> <span class="o">=</span> <span class="mi">10</span>
|
||||
|
||||
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">simulation</span><span class="o">.</span><span class="n">SoilSimulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
|
||||
<span class="n">sim</span> <span class="o">=</span> <span class="n">soil</span><span class="o">.</span><span class="n">Simulation</span><span class="p">(</span><span class="n">topology</span><span class="o">=</span><span class="n">G</span><span class="p">,</span>
|
||||
<span class="n">num_trials</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
|
||||
<span class="n">max_time</span><span class="o">=</span><span class="n">MAX_TIME</span><span class="p">,</span>
|
||||
<span class="n">environment_agents</span><span class="o">=</span><span class="p">[{</span><span class="s1">'agent_type'</span><span class="p">:</span> <span class="n">NewsEnvironmentAgent</span><span class="p">,</span>
|
||||
@ -21883,7 +21883,7 @@ bgAAAABJRU5ErkJggg==
|
||||
|
||||
|
||||
<div class="output_subarea output_stream output_stdout output_text">
|
||||
<pre>267M ../rabbits/soil_output/rabbits_example/
|
||||
<pre>267M ../rabbits/soil_output/rabbits_example/
|
||||
</pre>
|
||||
</div>
|
||||
</div>
|
||||
|
@ -426,7 +426,7 @@
|
||||
"MAX_TIME = 100\n",
|
||||
"EVENT_TIME = 10\n",
|
||||
"\n",
|
||||
"sim = soil.simulation.SoilSimulation(topology=G,\n",
|
||||
"sim = soil.Simulation(topology=G,\n",
|
||||
" num_trials=1,\n",
|
||||
" max_time=MAX_TIME,\n",
|
||||
" environment_agents=[{'agent_type': NewsEnvironmentAgent,\n",
|
||||
|
4
setup.cfg
Normal file
@ -0,0 +1,4 @@
|
||||
[aliases]
|
||||
test=pytest
|
||||
[tool:pytest]
|
||||
addopts = --verbose
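With the ``test=pytest`` alias in place, ``python setup.py test`` (the command used in the CI job above) runs the suite through pytest, and ``addopts`` makes it verbose by default.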
|
@ -1 +1 @@
|
||||
0.12.0
|
||||
0.13.0
|
@ -14,12 +14,11 @@ except NameError:
|
||||
logging.basicConfig()
|
||||
|
||||
from . import agents
|
||||
from . import simulation
|
||||
from . import environment
|
||||
from .simulation import *
|
||||
from .environment import Environment
|
||||
from . import utils
|
||||
from . import analysis
|
||||
|
||||
|
||||
def main():
|
||||
import argparse
|
||||
from . import simulation
|
||||
@ -46,11 +45,12 @@ def main():
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.module:
|
||||
if os.getcwd() not in sys.path:
|
||||
sys.path.append(os.getcwd())
|
||||
if args.module:
|
||||
importlib.import_module(args.module)
|
||||
|
||||
logging.info('Loading config file: {}'.format(args.file, args.output))
|
||||
logging.info('Loading config file: {}'.format(args.file))
|
||||
|
||||
try:
|
||||
dump = []
|
||||
@ -64,7 +64,7 @@ def main():
|
||||
dump=dump,
|
||||
parallel=(not args.synchronous and not args.pdb),
|
||||
results_dir=args.output)
|
||||
except Exception as ex:
|
||||
except Exception:
|
||||
if args.pdb:
|
||||
pdb.post_mortem()
|
||||
else:
|
||||
|
@ -10,7 +10,7 @@ class SISaModel(FSM):
|
||||
|
||||
neutral_discontent_infected_prob
|
||||
|
||||
neutral_content_spong_prob
|
||||
neutral_content_spon_prob
|
||||
|
||||
neutral_content_infected_prob
|
||||
|
||||
@ -29,27 +29,27 @@ class SISaModel(FSM):
|
||||
standard_variance
|
||||
"""
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
def __init__(self, environment, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.neutral_discontent_spon_prob = np.random.normal(environment.environment_params['neutral_discontent_spon_prob'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.neutral_discontent_infected_prob = np.random.normal(environment.environment_params['neutral_discontent_infected_prob'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.neutral_content_spon_prob = np.random.normal(environment.environment_params['neutral_content_spon_prob'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.neutral_content_infected_prob = np.random.normal(environment.environment_params['neutral_content_infected_prob'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_discontent_infected_prob = np.random.normal(self.env['neutral_discontent_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_spon_prob = np.random.normal(self.env['neutral_content_spon_prob'],
|
||||
self.env['standard_variance'])
|
||||
self.neutral_content_infected_prob = np.random.normal(self.env['neutral_content_infected_prob'],
|
||||
self.env['standard_variance'])
|
||||
|
||||
self.discontent_neutral = np.random.normal(environment.environment_params['discontent_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.discontent_content = np.random.normal(environment.environment_params['discontent_content'],
|
||||
environment.environment_params['variance_d_c'])
|
||||
self.discontent_neutral = np.random.normal(self.env['discontent_neutral'],
|
||||
self.env['standard_variance'])
|
||||
self.discontent_content = np.random.normal(self.env['discontent_content'],
|
||||
self.env['variance_d_c'])
|
||||
|
||||
self.content_discontent = np.random.normal(environment.environment_params['content_discontent'],
|
||||
environment.environment_params['variance_c_d'])
|
||||
self.content_neutral = np.random.normal(environment.environment_params['content_neutral'],
|
||||
environment.environment_params['standard_variance'])
|
||||
self.content_discontent = np.random.normal(self.env['content_discontent'],
|
||||
self.env['variance_c_d'])
|
||||
self.content_neutral = np.random.normal(self.env['content_neutral'],
|
||||
self.env['standard_variance'])
|
||||
|
||||
@state
|
||||
def neutral(self):
|
||||
|
@ -16,7 +16,7 @@ class SentimentCorrelationModel(BaseAgent):
|
||||
disgust_prob
|
||||
"""
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
def __init__(self, environment, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
|
||||
self.anger_prob = environment.environment_params['anger_prob']
|
||||
|
@ -16,23 +16,15 @@ from functools import wraps
|
||||
|
||||
from .. import utils, history
|
||||
|
||||
agent_types = {}
|
||||
|
||||
|
||||
class MetaAgent(type):
|
||||
def __init__(cls, name, bases, nmspc):
|
||||
super(MetaAgent, cls).__init__(name, bases, nmspc)
|
||||
agent_types[name] = cls
|
||||
|
||||
|
||||
class BaseAgent(nxsim.BaseAgent, metaclass=MetaAgent):
|
||||
class BaseAgent(nxsim.BaseAgent):
|
||||
"""
|
||||
A special simpy BaseAgent that keeps track of its state history.
|
||||
"""
|
||||
|
||||
defaults = {}
|
||||
|
||||
def __init__(self, environment=None, agent_id=None, state=None,
|
||||
def __init__(self, environment, agent_id=None, state=None,
|
||||
name='network_process', interval=None, **state_params):
|
||||
# Check for REQUIRED arguments
|
||||
assert environment is not None, TypeError('__init__ missing 1 required keyword argument: \'environment\'. '
|
||||
@ -152,14 +144,18 @@ class BaseAgent(nxsim.BaseAgent, metaclass=MetaAgent):
|
||||
def count_neighboring_agents(self, state_id=None):
|
||||
return len(super().get_agents(state_id, limit_neighbors=True))
|
||||
|
||||
def get_agents(self, state_id=None, limit_neighbors=False, iterator=False, **kwargs):
|
||||
def get_agents(self, state_id=None, agent_type=None, limit_neighbors=False, iterator=False, **kwargs):
|
||||
agents = self.env.agents
|
||||
if limit_neighbors:
|
||||
agents = super().get_agents(state_id, limit_neighbors)
|
||||
else:
|
||||
agents = filter(lambda x: state_id is None or x.state.get('id', None) == state_id,
|
||||
self.env.agents)
|
||||
|
||||
def matches_all(agent):
|
||||
if state_id is not None:
|
||||
if agent.state.get('id', None) != state_id:
|
||||
return False
|
||||
if agent_type is not None:
|
||||
if type(agent) != agent_type:
|
||||
return False
|
||||
state = agent.state
|
||||
for k, v in kwargs.items():
|
||||
if state.get(k, None) != v:
|
||||
@ -219,7 +215,7 @@ def default_state(func):
|
||||
return func
|
||||
|
||||
|
||||
class MetaFSM(MetaAgent):
|
||||
class MetaFSM(type):
|
||||
def __init__(cls, name, bases, nmspc):
|
||||
super(MetaFSM, cls).__init__(name, bases, nmspc)
|
||||
states = {}
|
||||
@ -328,16 +324,39 @@ def calculate_distribution(network_agents=None,
|
||||
return network_agents
|
||||
|
||||
|
||||
def _serialize_distribution(network_agents):
|
||||
d = _convert_agent_types(network_agents,
|
||||
to_string=True)
|
||||
def serialize_type(agent_type, known_modules=[], **kwargs):
|
||||
if isinstance(agent_type, str):
|
||||
return agent_type
|
||||
known_modules += ['soil.agents']
|
||||
return utils.serialize(agent_type, known_modules=known_modules, **kwargs)[1] # Get the name of the class
|
||||
|
||||
|
||||
def serialize_distribution(network_agents, known_modules=[]):
|
||||
'''
|
||||
When serializing an agent distribution, remove the thresholds, in order
|
||||
to avoid cluttering the YAML definition file.
|
||||
'''
|
||||
d = deepcopy(network_agents)
|
||||
for v in d:
|
||||
if 'threshold' in v:
|
||||
del v['threshold']
|
||||
v['agent_type'] = serialize_type(v['agent_type'],
|
||||
known_modules=known_modules)
|
||||
return d
|
||||
|
||||
|
||||
def deserialize_type(agent_type, known_modules=[]):
|
||||
if not isinstance(agent_type, str):
|
||||
return agent_type
|
||||
known = known_modules + ['soil.agents', 'soil.agents.custom' ]
|
||||
agent_type = utils.deserializer(agent_type, known_modules=known)
|
||||
return agent_type
|
||||
|
||||
|
||||
def deserialize_distribution(ind, **kwargs):
|
||||
d = deepcopy(ind)
|
||||
for v in d:
|
||||
v['agent_type'] = deserialize_type(v['agent_type'], **kwargs)
|
||||
return d
|
||||
|
||||
|
||||
@ -352,16 +371,11 @@ def _validate_states(states, topology):
|
||||
return states
|
||||
|
||||
|
||||
def _convert_agent_types(ind, to_string=False):
|
||||
def _convert_agent_types(ind, to_string=False, **kwargs):
|
||||
'''Convenience method to allow specifying agents by class or class name.'''
|
||||
d = deepcopy(ind)
|
||||
for v in d:
|
||||
agent_type = v['agent_type']
|
||||
if to_string and not isinstance(agent_type, str):
|
||||
v['agent_type'] = str(agent_type.__name__)
|
||||
elif not to_string and isinstance(agent_type, str):
|
||||
v['agent_type'] = agent_types[agent_type]
|
||||
return d
|
||||
if to_string:
|
||||
return serialize_distribution(ind, **kwargs)
|
||||
return deserialize_distribution(ind, **kwargs)
|
||||
|
||||
|
||||
def _agent_from_distribution(distribution, value=-1):
|
||||
|
@@ -56,7 +56,7 @@ def read_csv(filename, keys=None, convert_types=False, **kwargs):


    def convert_row(row):
        row['value'] = utils.convert(row['value'], row['value_type'])
        row['value'] = utils.deserialize(row['value_type'], row['value'])
        return row


@@ -123,7 +123,7 @@ def get_count(df, *keys):
    df = df[list(keys)]
    counts = pd.DataFrame()
    for key in df.columns.levels[0]:
        g = df[key].apply(pd.Series.value_counts, axis=1).fillna(0)
        g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
        for value, series in g.iteritems():
            counts[key, value] = series
    counts.columns = pd.MultiIndex.from_tuples(counts.columns)
@@ -15,7 +15,7 @@ import nxsim
from . import utils, agents, analysis, history


class SoilEnvironment(nxsim.NetworkEnvironment):
class Environment(nxsim.NetworkEnvironment):
    """
    The environment is key in a simulation. It contains the network topology,
    a reference to network and environment agents, as well as the environment
@@ -23,7 +23,7 @@ class SoilEnvironment(nxsim.NetworkEnvironment):

    The environment parameters and the state of every agent can be accessed
    both by using the environment as a dictionary or with the environment's
    :meth:`soil.environment.SoilEnvironment.get` method.
    :meth:`soil.environment.Environment.get` method.
    """

    def __init__(self, name=None,
@@ -49,7 +49,8 @@ class SoilEnvironment(nxsim.NetworkEnvironment):
        self.dry_run = dry_run
        self.interval = interval
        self.dir_path = dir_path or tempfile.mkdtemp('soil-env')
        self.get_path()
        if not dry_run:
            self.get_path()
        self._history = history.History(name=self.name if not dry_run else None,
                                        dir_path=self.dir_path)
        # Add environment agents first, so their events get
@@ -93,17 +94,35 @@ class SoilEnvironment(nxsim.NetworkEnvironment):
        if not network_agents:
            return
        for ix in self.G.nodes():
            agent, state = agents._agent_from_distribution(network_agents)
            self.set_agent(ix, agent_type=agent, state=state)
            self.init_agent(ix, agent_distribution=network_agents)

    def init_agent(self, agent_id, agent_distribution):
        node = self.G.nodes[agent_id]
        init = False
        state = dict(node)

        agent_type = None
        if 'agent_type' in self.states.get(agent_id, {}):
            agent_type = self.states[agent_id]
        elif 'agent_type' in node:
            agent_type = node['agent_type']
        elif 'agent_type' in self.default_state:
            agent_type = self.default_state['agent_type']

        if agent_type:
            agent_type = agents.deserialize_type(agent_type)
        else:
            agent_type, state = agents._agent_from_distribution(agent_distribution)
        return self.set_agent(agent_id, agent_type, state)

    def set_agent(self, agent_id, agent_type, state=None):
        node = self.G.nodes[agent_id]
        defstate = deepcopy(self.default_state)
        defstate = deepcopy(self.default_state) or {}
        defstate.update(self.states.get(agent_id, {}))
        defstate.update(node.get('state', {}))
        if state:
            defstate.update(state)
        state = defstate
        state.update(node.get('state', {}))
        a = agent_type(environment=self,
                       agent_id=agent_id,
                       state=state)
@@ -118,6 +137,10 @@ class SoilEnvironment(nxsim.NetworkEnvironment):
        return a

    def add_edge(self, agent1, agent2, attrs=None):
        if hasattr(agent1, 'id'):
            agent1 = agent1.id
        if hasattr(agent2, 'id'):
            agent2 = agent2.id
        return self.G.add_edge(agent1, agent2)

    def run(self, *args, **kwargs):
@@ -202,7 +225,7 @@ class SoilEnvironment(nxsim.NetworkEnvironment):

        with open(csv_name, 'w') as f:
            cr = csv.writer(f)
            cr.writerow(('agent_id', 't_step', 'key', 'value', 'value_type'))
            cr.writerow(('agent_id', 't_step', 'key', 'value'))
            for i in self.history_to_tuples():
                cr.writerow(i)

@@ -302,7 +325,6 @@ class SoilEnvironment(nxsim.NetworkEnvironment):
        state['network_agents'] = agents._serialize_distribution(self.network_agents)
        state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
                                                                  to_string=True)
        del state['_queue']
        return state

    def __setstate__(self, state):
@@ -311,3 +333,6 @@ class SoilEnvironment(nxsim.NetworkEnvironment):
        self.network_agents = self.calculate_distribution(self._convert_agent_types(self.network_agents))
        self.environment_agents = self._convert_agent_types(self.environment_agents)
        return state


SoilEnvironment = Environment
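A minimal usage sketch of the renamed Environment class (not part of the diff; it mirrors what the updated tests at the bottom of this page do):

    import networkx as nx
    from soil import Environment, agents

    G = nx.cycle_graph(5)
    distribution = agents.calculate_distribution(None, agents.BaseAgent)
    env = Environment(topology=G, network_agents=distribution, dry_run=True)

    # Environment state is recorded per (agent_id, t_step, key):
    env[0, 0, 'testvalue'] = 'start'
    env[0, 10, 'testvalue'] = 'finish'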
103  soil/history.py
@@ -3,7 +3,7 @@ import os
import pandas as pd
import sqlite3
import copy
from collections import UserDict, Iterable, namedtuple
from collections import UserDict, namedtuple

from . import utils

@@ -17,12 +17,12 @@ class History:
        if db_path is None and name:
            db_path = os.path.join(dir_path or os.getcwd(),
                                   '{}.db.sqlite'.format(name))
        if db_path is None:
            db_path = ":memory:"
        else:
        if db_path:
            if backup and os.path.exists(db_path):
                newname = db_path + '.backup{}.sqlite'.format(time.time())
                os.rename(db_path, newname)
        else:
            db_path = ":memory:"
        self.db_path = db_path

        self.db = db_path
@@ -34,12 +34,6 @@ class History:
        self._dtypes = {}
        self._tups = []

    def conversors(self, key):
        """Get the serializer and deserializer for a given key."""
        if key not in self._dtypes:
            self.read_types()
        return self._dtypes[key]

    @property
    def db(self):
        try:
@@ -58,55 +52,88 @@ class History:

    @property
    def dtypes(self):
        self.read_types()
        return {k:v[0] for k, v in self._dtypes.items()}

    def save_tuples(self, tuples):
        '''
        Save a series of tuples, converting them to records if necessary
        '''
        self.save_records(Record(*tup) for tup in tuples)

    def save_records(self, records):
        with self.db:
            for rec in records:
                if not isinstance(rec, Record):
                    rec = Record(*rec)
                if rec.key not in self._dtypes:
                    name = utils.name(rec.value)
                    serializer = utils.serializer(name)
                    deserializer = utils.deserializer(name)
                    self._dtypes[rec.key] = (name, serializer, deserializer)
                    self.db.execute("replace into value_types (key, value_type) values (?, ?)", (rec.key, name))
                self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
        '''
        Save a collection of records
        '''
        for record in records:
            if not isinstance(record, Record):
                record = Record(*record)
            self.save_record(*record)

    def save_record(self, *args, **kwargs):
        self._tups.append(Record(*args, **kwargs))
    def save_record(self, agent_id, t_step, key, value):
        '''
        Save a collection of records to the database.
        Database writes are cached.
        '''
        value = self.convert(key, value)
        self._tups.append(Record(agent_id=agent_id,
                                 t_step=t_step,
                                 key=key,
                                 value=value))
        if len(self._tups) > 100:
            self.flush_cache()

    def convert(self, key, value):
        """Get the serialized value for a given key."""
        if key not in self._dtypes:
            self.read_types()
            if key not in self._dtypes:
                name = utils.name(value)
                serializer = utils.serializer(name)
                deserializer = utils.deserializer(name)
                self._dtypes[key] = (name, serializer, deserializer)
                with self.db:
                    self.db.execute("replace into value_types (key, value_type) values (?, ?)", (key, name))
        return self._dtypes[key][1](value)

    def recover(self, key, value):
        """Get the deserialized value for a given key, and the serialized version."""
        if key not in self._dtypes:
            self.read_types()
        if key not in self._dtypes:
            raise ValueError("Unknown datatype for {} and {}".format(key, value))
        return self._dtypes[key][2](value)

    def flush_cache(self):
        '''
        Use a cache to save state changes to avoid opening a session for every change.
        The cache will be flushed at the end of the simulation, and when history is accessed.
        '''
        self.save_records(self._tups)
        with self.db:
            for rec in self._tups:
                self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
        self._tups = list()

    def to_tuples(self):
        self.flush_cache()
        with self.db:
            res = self.db.execute("select agent_id, t_step, key, value from history ").fetchall()
        for r in res:
            agent_id, t_step, key, value = r
            _, _ , des = self.conversors(key)
            yield agent_id, t_step, key, des(value)
        self.flush_cache()
        with self.db:
            res = self.db.execute("select agent_id, t_step, key, value from history ").fetchall()
        for r in res:
            agent_id, t_step, key, value = r
            value = self.recover(key, value)
            yield agent_id, t_step, key, value

    def read_types(self):
        with self.db:
            res = self.db.execute("select key, value_type from value_types ").fetchall()
        for k, v in res:
            serializer = utils.serializer(v)
            deserializer = utils.deserializer(v)
            self._dtypes[k] = (v, serializer, deserializer)
        with self.db:
            res = self.db.execute("select key, value_type from value_types ").fetchall()
        for k, v in res:
            serializer = utils.serializer(v)
            deserializer = utils.deserializer(v)
            self._dtypes[k] = (v, serializer, deserializer)

    def __getitem__(self, key):
        self.flush_cache()
        key = Key(*key)
        agent_ids = [key.agent_id] if key.agent_id is not None else []
        t_steps = [key.t_step] if key.t_step is not None else []
@@ -176,7 +203,7 @@ class History:
        for k, v in self._dtypes.items():
            if k in df_p:
                dtype, _, deserial = v
                df_p[k] = df_p[k].fillna(method='ffill').fillna(deserial()).astype(dtype)
                df_p[k] = df_p[k].fillna(method='ffill').astype(dtype)
        if t_steps:
            df_p = df_p.reindex(t_steps, method='ffill')
        return df_p.ffill()
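In plain terms, records are now buffered in memory and written to SQLite in batches; reading the history forces a flush. A minimal sketch (not part of the diff, it mirrors the new test_history test at the bottom of this page):

    from soil import history

    h = history.History()   # no db_path given, so an in-memory SQLite database is used
    h.save_record(agent_id=0, t_step=0, key="test", value="hello")
    # The write sits in the cache until the history is read (or flush_cache() is called):
    assert h[0, 0, "test"] == "hello"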
@@ -1,8 +1,9 @@
import os
import time
import imp
import importlib
import sys
import yaml
import traceback
import networkx as nx
from networkx.readwrite import json_graph
from multiprocessing import Pool
@@ -12,11 +13,12 @@ import pickle

from nxsim import NetworkSimulation

from . import utils, environment, basestring, agents
from . import utils, basestring, agents
from .environment import Environment
from .utils import logger


class SoilSimulation(NetworkSimulation):
class Simulation(NetworkSimulation):
    """
    Subclass of nxsim.NetworkSimulation with three main differences:
    1) agent type can be specified by name or by class.
@@ -43,13 +45,48 @@ class SoilSimulation(NetworkSimulation):
       'agent_type_1'.
    3) if no initial state is given, each node's state will be set
       to `{'id': 0}`.

    Parameters
    ----------
    name : str, optional
        name of the Simulation
    topology : networkx.Graph instance, optional
    network_params : dict
        parameters used to create a topology with networkx, if no topology is given
    network_agents : dict
        definition of agents to populate the topology with
    agent_type : NetworkAgent subclass, optional
        Default type of NetworkAgent to use for nodes not specified in network_agents
    states : list, optional
        List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
        whose value indicates the state
    dir_path : str, optional
        Directory path where to save pickled objects
    seed : str, optional
        Seed to use for the random generator
    num_trials : int, optional
        Number of independent simulation runs
    max_time : int, optional
        Time how long the simulation should run
    environment_params : dict, optional
        Dictionary of globally-shared environmental parameters
    environment_agents: dict, optional
        Similar to network_agents. Distribution of Agents that control the environment
    environment_class: soil.environment.Environment subclass, optional
        Class for the environment. It defaults to soil.environment.Environment
    load_module : str, module name, deprecated
        If specified, soil will load the content of this module under 'soil.agents.custom'

    """

    def __init__(self, name=None, topology=None, network_params=None,
                 network_agents=None, agent_type=None, states=None,
                 default_state=None, interval=1, dump=None, dry_run=False,
                 dir_path=None, num_trials=1, max_time=100,
                 agent_module=None, load_module=None, seed=None,
                 environment_agents=None, environment_params=None, **kwargs):
                 load_module=None, seed=None,
                 environment_agents=None, environment_params=None,
                 environment_class=None, **kwargs):

        if topology is None:
            topology = utils.load_network(network_params,
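For orientation only (this is not part of the commit), the parameters documented above can be used programmatically along these lines; the name, generator and sizes below are made up for the example:

    from soil import simulation, agents

    s = simulation.Simulation(name='example',
                              network_params={'generator': 'complete_graph', 'n': 10},
                              network_agents=[{'agent_type': agents.CounterModel, 'weight': 1}],
                              max_time=20,
                              num_trials=1)
    envs = s.run_simulation(dry_run=True)   # one Environment per trial, nothing written to disk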
@@ -69,19 +106,21 @@ class SoilSimulation(NetworkSimulation):
        self.seed = str(seed) or str(time.time())
        self.dump = dump
        self.dry_run = dry_run
        self.environment_params = environment_params or {}

        if load_module:
            path = sys.path + [self.dir_path, os.getcwd()]
            f, fp, desc = imp.find_module(load_module, path)
            imp.load_module('soil.agents.custom', f, fp, desc)
            sys.path += [self.dir_path, os.getcwd()]

        self.environment_params = environment_params or {}
        self.environment_class = utils.deserialize(environment_class,
                                                   known_modules=['soil.environment', ]) or Environment

        environment_agents = environment_agents or []
        self.environment_agents = agents._convert_agent_types(environment_agents)
        self.environment_agents = agents._convert_agent_types(environment_agents,
                                                              known_modules=[self.load_module])

        distro = agents.calculate_distribution(network_agents,
                                               agent_type)
        self.network_agents = agents._convert_agent_types(distro)
        self.network_agents = agents._convert_agent_types(distro,
                                                          known_modules=[self.load_module])

        self.states = agents._validate_states(states,
                                              self.topology)
@@ -97,13 +136,17 @@ class SoilSimulation(NetworkSimulation):
            p = Pool()
            with utils.timer('simulation {}'.format(self.name)):
                if parallel:
                    func = partial(self.run_trial, dry_run=dry_run or self.dry_run,
                                   return_env=not parallel, **kwargs)
                    func = partial(self.run_trial_exceptions, dry_run=dry_run or self.dry_run,
                                   return_env=True,
                                   **kwargs)
                    for i in p.imap_unordered(func, range(self.num_trials)):
                        if isinstance(i, Exception):
                            logger.error('Trial failed:\n\t{}'.format(i.message))
                            continue
                        yield i
                else:
                    for i in range(self.num_trials):
                        yield self.run_trial(i, dry_run=dry_run or self.dry_run, **kwargs)
                        yield self.run_trial(i, dry_run = dry_run or self.dry_run, **kwargs)
            if not (dry_run or self.dry_run):
                logger.info('Dumping results to {}'.format(self.dir_path))
                self.dump_pickle(self.dir_path)
@@ -111,9 +154,9 @@ class SoilSimulation(NetworkSimulation):
            else:
                logger.info('NOT dumping results')

    def get_env(self, trial_id=0, **kwargs):
        opts = self.environment_params.copy()
        env_name = '{}_trial_{}'.format(self.name, trial_id)
    def get_env(self, trial_id = 0, **kwargs):
        opts=self.environment_params.copy()
        env_name='{}_trial_{}'.format(self.name, trial_id)
        opts.update({
            'name': env_name,
            'topology': self.topology.copy(),
@@ -128,10 +171,10 @@ class SoilSimulation(NetworkSimulation):
            'dir_path': self.dir_path,
        })
        opts.update(kwargs)
        env = environment.SoilEnvironment(**opts)
        env=self.environment_class(**opts)
        return env

    def run_trial(self, trial_id=0, until=None, return_env=True, **opts):
    def run_trial(self, trial_id = 0, until = None, return_env = True, **opts):
        """Run a single trial of the simulation

        Parameters
@@ -139,16 +182,27 @@ class SoilSimulation(NetworkSimulation):
        trial_id : int
        """
        # Set-up trial environment and graph
        until = until or self.max_time
        env = self.get_env(trial_id=trial_id, **opts)
        until=until or self.max_time
        env=self.get_env(trial_id = trial_id, **opts)
        # Set up agents on nodes
        with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
            env.run(until)
        if self.dump and not self.dry_run:
            with utils.timer('Dumping simulation {} trial {}'.format(self.name, trial_id)):
                env.dump(formats=self.dump)
                env.dump(formats = self.dump)
        if return_env:
            return env

    def run_trial_exceptions(self, *args, **kwargs):
        '''
        A wrapper for run_trial that catches exceptions and returns them.
        It is meant for async simulations
        '''
        try:
            return self.run_trial(*args, **kwargs)
        except Exception as ex:
            c = ex.__cause__
            c.message = ''.join(traceback.format_tb(c.__traceback__)[3:])
            return c

    def to_dict(self):
        return self.__getstate__()
@@ -156,39 +210,53 @@ class SoilSimulation(NetworkSimulation):
    def to_yaml(self):
        return yaml.dump(self.to_dict())

    def dump_yaml(self, dir_path=None, file_name=None):
        dir_path = dir_path or self.dir_path
    def dump_yaml(self, dir_path = None, file_name = None):
        dir_path=dir_path or self.dir_path
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)
        if not file_name:
            file_name = os.path.join(dir_path,
            file_name=os.path.join(dir_path,
                                   '{}.dumped.yml'.format(self.name))
        with open(file_name, 'w') as f:
            f.write(self.to_yaml())

    def dump_pickle(self, dir_path=None, pickle_name=None):
        dir_path = dir_path or self.dir_path
    def dump_pickle(self, dir_path = None, pickle_name = None):
        dir_path=dir_path or self.dir_path
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)
        if not pickle_name:
            pickle_name = os.path.join(dir_path,
            pickle_name=os.path.join(dir_path,
                                     '{}.simulation.pickle'.format(self.name))
        with open(pickle_name, 'wb') as f:
            pickle.dump(self, f)

    def __getstate__(self):
        state = self.__dict__.copy()
        state['topology'] = json_graph.node_link_data(self.topology)
        state['network_agents'] = agents._serialize_distribution(self.network_agents)
        state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
                                                                  to_string=True)
        state={}
        for k, v in self.__dict__.items():
            if k[0] != '_':
                state[k]=v
        state['topology']=json_graph.node_link_data(self.topology)
        state['network_agents']=agents.serialize_distribution(self.network_agents,
                                                              known_modules = [])
        state['environment_agents']=agents.serialize_distribution(self.environment_agents,
                                                                  known_modules = [])
        state['environment_class']=utils.serialize(self.environment_class,
                                                   known_modules=['soil.environment'])[1]  # func, name
        if state['load_module'] is None:
            del state['load_module']
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        self.load_module = getattr(self, 'load_module', None)
        if self.dir_path not in sys.path:
            sys.path += [self.dir_path, os.getcwd()]
        self.topology = json_graph.node_link_graph(state['topology'])
        self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
        self.environment_agents = agents._convert_agent_types(self.environment_agents)
        self.environment_agents = agents._convert_agent_types(self.environment_agents,
                                                              known_modules=[self.load_module])
        self.environment_class = utils.deserialize(self.environment_class,
                                                   known_modules=[self.load_module, 'soil.environment', ])  # func, name
        return state


@@ -197,11 +265,11 @@ def from_config(config):
    if len(config) > 1:
        raise AttributeError('Provide only one configuration')
    config = config[0][0]
    sim = SoilSimulation(**config)
    sim = Simulation(**config)
    return sim


def run_from_config(*configs, results_dir='soil_output', dry_run=False, dump=None, timestamp=False, **kwargs):
def run_from_config(*configs, results_dir='soil_output', dump=None, timestamp=False, **kwargs):
    for config_def in configs:
        # logger.info("Found {} config(s)".format(len(ls)))
        for config, _ in utils.load_config(config_def):
@@ -214,6 +282,8 @@ def run_from_config(*configs, results_dir='soil_output', dry_run=False, dump=Non
            else:
                sim_folder = name
            dir_path = os.path.join(results_dir, sim_folder)
            sim = SoilSimulation(dir_path=dir_path, dump=dump, **config)
            if dump is not None:
                config['dump'] = dump
            sim = Simulation(dir_path=dir_path, **config)
            logger.info('Dumping results to {} : {}'.format(sim.dir_path, sim.dump))
            sim.run_simulation(**kwargs)
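And the config-file entry point, as exercised by the example tests added further down (the file name here is illustrative, not part of the commit):

    from soil import simulation

    # Runs every simulation defined in the given YAML file(s) and stores the
    # results under soil_output/<simulation name>/.
    simulation.run_from_config('example.yml', results_dir='soil_output')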
101  soil/utils.py
@@ -1,8 +1,9 @@
import os
import ast
import yaml
import logging
import importlib
from time import time
import time
from glob import glob
from random import random
from copy import deepcopy
@@ -62,44 +63,92 @@ def load_config(config):

@contextmanager
def timer(name='task', pre="", function=logger.info, to_object=None):
    start = time()
    function('{}Starting {} at {}.'.format(pre, name, start))
    start = time.time()
    function('{}Starting {} at {}.'.format(pre, name,
                                           time.strftime("%X", time.gmtime(start))))
    yield start
    end = time()
    function('{}Finished {} in {} seconds'.format(pre, name, str(end-start)))
    end = time.time()
    function('{}Finished {} at {} in {} seconds'.format(pre, name,
                                                        time.strftime("%X", time.gmtime(end)),
                                                        str(end-start)))
    if to_object:
        to_object.start = start
        to_object.end = end


def repr(v):
    func = serializer(v)
    tname = name(v)
    return func(v), tname
builtins = importlib.import_module('builtins')


def name(v):
    return type(v).__name__
def name(value, known_modules=[]):
    '''Return a name that can be imported, to serialize/deserialize an object'''
    if value is None:
        return 'None'
    if not isinstance(value, type):  # Get the class name first
        value = type(value)
    tname = value.__name__
    if hasattr(builtins, tname):
        return tname
    modname = value.__module__
    if modname == '__main__':
        return tname
    if known_modules and modname in known_modules:
        return tname
    for kmod in known_modules:
        if not kmod:
            continue
        module = importlib.import_module(kmod)
        if hasattr(module, tname):
            return tname
    return '{}.{}'.format(modname, tname)


def serializer(type_):
    if type_ == 'bool':
        return lambda x: "true" if x else ""
    if type_ != 'str' and hasattr(builtins, type_):
        return repr
    return lambda x: x


def deserializer(type_):
    try:
        # Check if it's a builtin type
        module = importlib.import_module('builtins')
        cls = getattr(module, type_)
    except AttributeError:
        # if not, separate module and class
def serialize(v, known_modules=[]):
    '''Get a text representation of an object.'''
    tname = name(v, known_modules=known_modules)
    func = serializer(tname)
    return func(v), tname

def deserializer(type_, known_modules=[]):
    if type_ == 'str':
        return lambda x='': x
    if type_ == 'None':
        return lambda x=None: None
    if hasattr(builtins, type_):  # Check if it's a builtin type
        cls = getattr(builtins, type_)
        return lambda x=None: ast.literal_eval(x) if x is not None else cls()
    # Otherwise, see if we can find the module and the class
    modules = known_modules or []
    options = []

    for mod in modules:
        if mod:
            options.append((mod, type_))

    if '.' in type_:  # Fully qualified module
        module, type_ = type_.rsplit(".", 1)
        module = importlib.import_module(module)
        cls = getattr(module, type_)
        return cls
        options.append ((module, type_))

    errors = []
    for modname, tname in options:
        try:
            module = importlib.import_module(modname)
            cls = getattr(module, tname)
            return getattr(cls, 'deserialize', cls)
        except (ImportError, AttributeError) as ex:
            errors.append((modname, tname, ex))
    raise Exception('Could not find type {}. Tried: {}'.format(type_, errors))


def convert(value, type_):
    return deserializer(type_)(value)
def deserialize(type_, value=None, **kwargs):
    '''Get an object from a text representation'''
    if not isinstance(type_, str):
        return type_
    des = deserializer(type_, **kwargs)
    if value is None:
        return des
    return des(value)
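A quick sketch (not part of the diff) of the round trip these helpers now provide, in line with the new serialization tests below:

    from soil import utils, agents

    ser, tname = utils.serialize(1)                  # -> ('1', 'int')
    assert utils.deserialize(tname, ser) == 1

    ser, tname = utils.serialize(agents.BaseAgent)   # classes keep their import path
    assert tname == 'soil.agents.BaseAgent'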
@@ -271,4 +271,4 @@ def main():
    parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
    args = parser.parse_args()

    run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
    run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
@@ -0,0 +1 @@
pytest
12  tests/test.gexf  Normal file
@@ -0,0 +1,12 @@
<?xml version='1.0' encoding='utf-8'?>
<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:viz="http://www.gexf.net/1.2draft/viz" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
  <graph defaultedgetype="undirected" mode="static">
    <nodes>
      <node id="0" label="0" />
      <node id="1" label="1" />
    </nodes>
    <edges>
      <edge id="0" source="0" target="1" />
    </edges>
  </graph>
</gexf>
48  tests/test_examples.py  Normal file
@@ -0,0 +1,48 @@
from unittest import TestCase
import os
from os.path import join

from soil import utils, simulation

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')


class TestExamples(TestCase):
    pass


def make_example_test(path, config):
    def wrapped(self):
        root = os.getcwd()
        os.chdir(os.path.dirname(path))
        s = simulation.from_config(config)
        iterations = s.max_time * s.num_trials
        if iterations > 1000:
            self.skipTest('This example would probably take too long')
        envs = s.run_simulation(dry_run=True)
        assert envs
        for env in envs:
            assert env
            try:
                n = config['network_params']['n']
                assert len(list(env.network_agents)) == n
                assert env.now > 2  # It has run
                assert env.now <= config['max_time']  # But not further than allowed
            except KeyError:
                pass
        os.chdir(root)
    return wrapped


def add_example_tests():
    for config, path in utils.load_config(join(EXAMPLES, '**', '*.yml')):
        p = make_example_test(path=path, config=config)
        fname = os.path.basename(path)
        p.__name__ = 'test_example_file_%s' % fname
        p.__doc__ = '%s should be a valid configuration' % fname
        setattr(TestExamples, p.__name__, p)
        del p


add_example_tests()
@@ -116,6 +116,7 @@ class TestHistory(TestCase):
        db_path = os.path.join(DBROOT, 'test')
        h = history.History(db_path=db_path)
        h.save_tuples(tuples)
        h.flush_cache()
        assert os.path.exists(db_path)

        # Recover the data
@@ -128,6 +129,28 @@ class TestHistory(TestCase):
        backuppaths = glob(db_path + '.backup*.sqlite')
        assert len(backuppaths) == 1
        backuppath = backuppaths[0]
        assert newhistory._db_path == h._db_path
        assert newhistory.db_path == h.db_path
        assert os.path.exists(backuppath)
        assert not len(newhistory[None, None, None])

    def test_history_tuples(self):
        """
        The data recovered should be equal to the one recorded.
        """
        tuples = (
            ('a_1', 0, 'id', 'v'),
            ('a_1', 1, 'id', 'a'),
            ('a_1', 2, 'id', 'l'),
            ('a_1', 3, 'id', 'u'),
            ('a_1', 4, 'id', 'e'),
            ('env', 1, 'prob', 1),
            ('env', 2, 'prob', 2),
            ('env', 3, 'prob', 3),
            ('a_2', 7, 'finished', True),
        )
        h = history.History()
        h.save_tuples(tuples)
        recovered = list(h.to_tuples())
        assert recovered
        for i in recovered:
            assert i in tuples
@@ -6,14 +6,18 @@ import networkx as nx
from functools import partial

from os.path import join
from soil import simulation, environment, agents, utils
from soil import simulation, Environment, agents, utils, history


ROOT = os.path.abspath(os.path.dirname(__file__))

EXAMPLES = join(ROOT, '..', 'examples')


class CustomAgent(agents.BaseAgent):
    def step(self):
        self.state['neighbors'] = self.count_agents(state_id=0,
                                                    limit_neighbors=True)


class TestMain(TestCase):

    def test_load_graph(self):
@@ -127,10 +131,6 @@ class TestMain(TestCase):

    def test_custom_agent(self):
        """Allow for search of neighbors with a certain state_id"""
        class CustomAgent(agents.BaseAgent):
            def step(self):
                self.state['neighbors'] = self.count_agents(state_id=0,
                                                            limit_neighbors=True)
        config = {
            'dry_run': True,
            'network_params': {
@@ -188,8 +188,6 @@ class TestMain(TestCase):
        recovered = yaml.load(serial)
        with utils.timer('deleting'):
            del recovered['topology']
            del recovered['load_module']
            del recovered['dry_run']
        assert config == recovered

    def test_configuration_changes(self):
@@ -197,25 +195,17 @@ class TestMain(TestCase):
        The configuration should not change after running
        the simulation.
        """
        config = utils.load_file('examples/complete.yml')[0]
        config = utils.load_file(join(EXAMPLES, 'complete.yml'))[0]
        s = simulation.from_config(config)
        s.dry_run = True
        for i in range(5):
            s.run_simulation(dry_run=True)
        nconfig = s.to_dict()
        del nconfig['topology']
        del nconfig['dry_run']
        del nconfig['load_module']
        assert config == nconfig

    def test_examples(self):
        """
        Make sure all examples in the examples folder are correct
        """
        pass

    def test_row_conversion(self):
        env = environment.SoilEnvironment(dry_run=True)
        env = Environment(dry_run=True)
        env['test'] = 'test_value'

        res = list(env.history_to_tuples())
@@ -234,7 +224,7 @@ class TestMain(TestCase):
        from geometric models. We should work around it.
        """
        G = nx.random_geometric_graph(20, 0.1)
        env = environment.SoilEnvironment(topology=G, dry_run=True)
        env = Environment(topology=G, dry_run=True)
        env.dump_gexf('/tmp/dump-gexf')

    def test_save_graph(self):
@@ -245,7 +235,7 @@ class TestMain(TestCase):
        '''
        G = nx.cycle_graph(5)
        distribution = agents.calculate_distribution(None, agents.BaseAgent)
        env = environment.SoilEnvironment(topology=G, network_agents=distribution, dry_run=True)
        env = Environment(topology=G, network_agents=distribution, dry_run=True)
        env[0, 0, 'testvalue'] = 'start'
        env[0, 10, 'testvalue'] = 'finish'
        nG = env.history_to_graph()
@@ -253,33 +243,65 @@ class TestMain(TestCase):
        assert ('start', 0, 10) in values
        assert ('finish', 10, None) in values

    def test_serialize_class(self):
        ser, name = utils.serialize(agents.BaseAgent)
        assert name == 'soil.agents.BaseAgent'
        assert ser == agents.BaseAgent

def make_example_test(path, config):
    def wrapped(self):
        root = os.getcwd()
        os.chdir(os.path.dirname(path))
        s = simulation.from_config(config)
        envs = s.run_simulation(dry_run=True)
        assert envs
        for env in envs:
            assert env
            try:
                n = config['network_params']['n']
                assert len(env.get_agents()) == n
            except KeyError:
                pass
        os.chdir(root)
    return wrapped
        class CustomAgent(agents.BaseAgent):
            pass

        ser, name = utils.serialize(CustomAgent)
        assert name == 'test_main.CustomAgent'
        assert ser == CustomAgent

def add_example_tests():
    for config, path in utils.load_config(join(EXAMPLES, '*.yml')):
        p = make_example_test(path=path, config=config)
        fname = os.path.basename(path)
        p.__name__ = 'test_example_file_%s' % fname
        p.__doc__ = '%s should be a valid configuration' % fname
        setattr(TestMain, p.__name__, p)
        del p
    def test_serialize_builtin_types(self):

        for i in [1, None, True, False, {}, [], list(), dict()]:
            ser, name = utils.serialize(i)
            assert type(ser) == str
            des = utils.deserialize(name, ser)
            assert i == des

add_example_tests()
    def test_serialize_agent_type(self):
        '''A class from soil.agents should be serialized without the module part'''
        ser = agents.serialize_type(CustomAgent)
        assert ser == 'test_main.CustomAgent'
        ser = agents.serialize_type(agents.BaseAgent)
        assert ser == 'BaseAgent'

    def test_deserialize_agent_distribution(self):
        agent_distro = [
            {
                'agent_type': 'CounterModel',
                'weight': 1
            },
            {
                'agent_type': 'test_main.CustomAgent',
                'weight': 2
            },
        ]
        converted = agents.deserialize_distribution(agent_distro)
        assert converted[0]['agent_type'] == agents.CounterModel
        assert converted[1]['agent_type'] == CustomAgent

    def test_serialize_agent_distribution(self):
        agent_distro = [
            {
                'agent_type': agents.CounterModel,
                'weight': 1
            },
            {
                'agent_type': CustomAgent,
                'weight': 2
            },
        ]
        converted = agents.serialize_distribution(agent_distro)
        assert converted[0]['agent_type'] == 'CounterModel'
        assert converted[1]['agent_type'] == 'test_main.CustomAgent'

    def test_history(self):
        '''Test storing in and retrieving from history (sqlite)'''
        h = history.History()
        h.save_record(agent_id=0, t_step=0, key="test", value="hello")
        assert h[0, 0, "test"] == "hello"