mirror of https://github.com/gsi-upm/soil synced 2025-10-27 13:48:17 +00:00

Compare commits


32 Commits

Author SHA1 Message Date
J. Fernando Sánchez
cd62c23cb9 WIP: all tests pass 2022-10-13 22:43:16 +02:00
J. Fernando Sánchez
f811ee18c5 WIP 2022-10-06 15:49:19 +02:00
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
J. Fernando Sánchez
50cba751a6 Release 0.20.6 2022-07-05 12:08:34 +02:00
J. Fernando Sánchez
dfb6d13649 version 0.20.5 2022-05-18 16:13:53 +02:00
J. Fernando Sánchez
5559d37e57 version 0.20.4 2022-05-18 15:20:58 +02:00
J. Fernando Sánchez
2116fe6f38 Bug fixes and minor improvements 2022-05-12 16:14:47 +02:00
J. Fernando Sánchez
affeeb9643 Update examples 2022-04-04 16:47:58 +02:00
J. Fernando Sánchez
42ddc02318 CI: delay PyPI check 2022-03-07 14:35:07 +01:00
J. Fernando Sánchez
cab9a3440b Fix typo CI/CD 2022-03-07 13:57:25 +01:00
J. Fernando Sánchez
db505da49c Minor CI change 2022-03-07 13:35:02 +01:00
J. Fernando Sánchez
8eb8eb16eb Minor CI change 2022-03-07 12:51:22 +01:00
J. Fernando Sánchez
3fc5ca8c08 Fix requirements issue CI/CD 2022-03-07 12:46:01 +01:00
J. Fernando Sánchez
c02e6ea2e8 Fix die bug 2022-03-07 11:17:27 +01:00
J. Fernando Sánchez
38f8a8d110 Merge branch 'mesa'
First iteration to achieve MESA compatibility.
As a side effect, we have removed `simpy`.

For a full list of changes, see `CHANGELOG.md`.
2022-03-07 10:54:47 +01:00
J. Fernando Sánchez
cb72aac980 Add random activation example 2022-03-07 10:48:59 +01:00
J. Fernando Sánchez
6c4f44b4cb Partial MESA compatibility and several fixes
Documentation for the new APIs is still a work in progress :)
2021-10-15 20:16:49 +02:00
J. Fernando Sánchez
af9a392a93 WIP: mesa compat
All tests pass but some features are still missing/unclear:

- Mesa agents do not have a `state`, so their "metrics" don't get stored. I will
probably refactor this to remove some magic in this regard. This should get rid
of the `_state` dictionary and the setitem/getitem magic.
- Simulation is still different from a runner. So far only Agent and
Environment/Model have been updated.
2021-10-15 13:36:39 +02:00
J. Fernando Sánchez
5d7e57675a WIP: mesa compatibility 2021-10-14 17:37:06 +02:00
J. Fernando Sánchez
e860bdb922 v0.15.2
See CHANGELOG.md for a complete list of changes
2021-05-22 16:33:52 +02:00
J. Fernando Sánchez
d6b684c1c1 Fix docs requirements 2021-05-22 16:08:38 +02:00
J. Fernando Sánchez
05f7f49233 Refactoring v0.15.1
See CHANGELOG.md for a full list of changes

* Removed nxsim
* Refactored `agents.NetworkAgent` and `agents.BaseAgent`
* Refactored exporters
* Added stats to history
2020-11-19 23:58:47 +01:00
J. Fernando Sánchez
3b2c6a3db5 Seed before env initialization
Fixes #6
2020-07-27 12:29:24 +02:00
J. Fernando Sánchez
6118f917ee Fix Windows bug
Update URLs to gsi.upm.es
2020-07-07 10:57:10 +02:00
J. Fernando Sánchez
6adc8d36ba minor change in docs 2020-03-13 12:50:05 +01:00
J. Fernando Sánchez
c8b8149a17 Updated to 0.14.6
Fix compatibility issues with newer networkx and pandas versions
2020-03-11 16:17:14 +01:00
84 changed files with 8498 additions and 2953 deletions


@@ -1,2 +1,5 @@
 **/soil_output
 .*
+**/__pycache__
+__pycache__
+*.pyc

.gitignore vendored

@@ -8,3 +8,4 @@ soil_output
 docs/_build*
 build/*
 dist/*
+prof


@@ -1,9 +1,10 @@
 stages:
   - test
-  - build
+  - publish
+  - check_published

-build:
-  stage: build
+docker:
+  stage: publish
   image:
     name: gcr.io/kaniko-project/executor:debug
     entrypoint: [""]
@@ -16,13 +17,37 @@ build:
   only:
     - tags

 test:
+  except:
+    - tags # Avoid running tests for tags, because they are already run for the branch
   tags:
     - docker
   image: python:3.7
   stage: test
   script:
+    - pip install -r requirements.txt -r test-requirements.txt
     - python setup.py test
+
+push_pypi:
+  only:
+    - tags
+  tags:
+    - docker
+  image: python:3.7
+  stage: publish
+  script:
+    - echo $CI_COMMIT_TAG > soil/VERSION
+    - pip install twine
+    - python setup.py sdist bdist_wheel
+    - TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*
+
+check_pypi:
+  only:
+    - tags
+  tags:
+    - docker
+  image: python:3.7
+  stage: check_published
+  script:
+    - pip install soil==$CI_COMMIT_TAG
+  # Allow PYPI to update its index before we try to install
+  when: delayed
+  start_in: 2 minutes


@@ -3,6 +3,116 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.3 UNRELEASED]
### Added
* Simple debugging capabilities, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents)
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are also using Pydantic for (de)serialization.
* There may be more than one topology/network in the simulation
* Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to be assigned to.
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
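The datacollector approach that replaces the history integration can be illustrated with a minimal pure-Python sketch (hypothetical names, not Soil's or Mesa's actual classes):

```python
class DataCollector:
    """Minimal sketch of step-wise data collection (hypothetical, not Soil's API)."""

    def __init__(self, reporters):
        self.reporters = reporters                      # name -> function(model)
        self.records = {name: [] for name in reporters}

    def collect(self, model):
        # Record the current value of every reported variable
        for name, reporter in self.reporters.items():
            self.records[name].append(reporter(model))


class Model:
    def __init__(self):
        self.count = 0
        self.datacollector = DataCollector({"count": lambda m: m.count})

    def step(self):
        self.count += 1
        self.datacollector.collect(self)


m = Model()
for _ in range(3):
    m.step()
print(m.datacollector.records["count"])  # [1, 2, 3]
```

Unlike the full history, only the variables you declare as reporters are recorded, which is where the memory/performance trade-off mentioned above comes from.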
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
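The non-nesting behaviour of `time.When` can be sketched with `__new__` (a hypothetical minimal version, not the actual implementation):

```python
class When:
    """Sketch: an absolute-time event marker that does not nest."""

    def __new__(cls, time):
        # Creating a When from another When returns the argument as-is
        if isinstance(time, When):
            return time
        obj = super().__new__(cls)
        obj.abs_time = float(time)
        return obj


w = When(3)
assert When(w) is w            # no nesting: the same object comes back
assert When(5).abs_time == 5.0
```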
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` does not re-export built-in time (change in `soil.simulation`). It used to create subtle import conflicts when importing soil.time.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent __init__, not in the environment. This decouples both classes a bit more, and it is more intuitive
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
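The `run`/`step` split mentioned for 0.20.3 is a template-method pattern; here is a sketch under assumed names, which also shows the open interval on `until`:

```python
class Environment:
    """Sketch: run() drives the loop; subclasses only override step()."""

    def __init__(self, until):
        self.until = until
        self.now = 0
        self.log = []

    def step(self):
        self.log.append(self.now)
        self.now += 1

    def run(self):
        # Runs right up to `until` (open interval): now takes values 0..until-1
        while self.now < self.until:
            self.step()


class CountingEnvironment(Environment):
    events = 0

    def step(self):
        # Overriding step() is enough to change per-tick behaviour
        self.events += 1
        super().step()


env = CountingEnvironment(until=3)
env.run()
print(env.log)  # [0, 1, 2]
```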
## [0.20.2]
### Fixed
* CI/CD testing issues
## [0.20.1]
### Fixed
* Agents would run another step after dying.
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id` to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history
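The rename from `agent.id` to `agent.unique_id` with a compatibility property can be sketched as (hypothetical minimal version):

```python
class BaseAgent:
    """Sketch of the mesa-style naming described for 0.20.0."""

    def __init__(self, unique_id, model):
        self.unique_id = unique_id  # mesa's canonical attribute
        self.model = model          # formerly `environment`

    @property
    def id(self):
        # Backwards-compatible alias for old soil code
        return self.unique_id


agent = BaseAgent(unique_id=42, model=None)
assert agent.id == agent.unique_id == 42
```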
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default
## [0.15.1]
### Added
* read-only `History`
### Fixed
* Serialization problem with the `Environment` on parallel mode.
* Analysis functions now work as they should in the tutorial
## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open
## [0.14.9]
### Changed
* Seed random before environment initialization
## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5
## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception
## [0.14.6]
### Fixed
* Bug with newer versions of networkx (0.24) where the Graph.node attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit yaml.SafeLoader to avoid deprecation warnings when using yaml.load. It should not break any existing setups, but we could move to the FullLoader in the future if needed.
## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.


@@ -5,6 +5,45 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
# Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Mesa compatibility
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
## Citation
 If you use Soil in your research, don't forget to cite this paper:

 ```bibtex
@@ -28,7 +67,6 @@ If you use Soil in your research, don't forget to cite this paper:
 ```

-@Copyright GSI - Universidad Politécnica de Madrid 2017
+@Copyright GSI - Universidad Politécnica de Madrid 2017-2021

-[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)
+[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)


@@ -31,7 +31,7 @@
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = []
+extensions = ['IPython.sphinxext.ipython_console_highlighting']

 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -69,7 +69,7 @@ language = None
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This patterns also effect to html_static_path and html_extra_path
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']

 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = 'sphinx'


@@ -8,36 +8,12 @@ The advantage of a configuration file is that it is a clean declarative descript
 Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
 Here's an example (``example.yml``).

-.. code:: yaml
-
-    ---
-    name: MyExampleSimulation
-    max_time: 50
-    num_trials: 3
-    interval: 2
-    network_params:
-        generator: barabasi_albert_graph
-        n: 100
-        m: 2
-    network_agents:
-        - agent_type: SISaModel
-          weight: 1
-          state:
-            id: content
-        - agent_type: SISaModel
-          weight: 1
-          state:
-            id: discontent
-        - agent_type: SISaModel
-          weight: 8
-          state:
-            id: neutral
-    environment_params:
-        prob_infect: 0.075
+.. literalinclude:: example.yml
+   :language: yaml

 This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
-The 100 nodes in the network will be SISaModel agents (``network_agents.agent_type``), which is an agent behavior that is included in Soil.
+The 100 nodes in the network will be SISaModel agents (``network_agents.agent_class``), which is an agent behavior that is included in Soil.
 10% of the agents (``weight=1``) will start in the content state, 10% in the discontent state, and the remaining 80% (``weight=8``) in the neutral state.
 All agents will have access to the environment (``environment_params``), which only contains one variable, ``prob_infected``.
 The state of the agents will be updated every 2 seconds (``interval``).
@@ -112,9 +88,18 @@ For example, the following configuration is equivalent to :code:`nx.complete_gra
 Environment
 ============

 The environment is the place where the shared state of the simulation is stored.
-For instance, the probability of disease outbreak.
-The configuration file may specify the initial value of the environment parameters:
+That means both global parameters, such as the probability of disease outbreak.
+But it also means other data, such as a map, or a network topology that connects multiple agents.
+As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
+
+Last but not least, an environment controls when and how its agents will be executed.
+By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this on the following section).
+
+Soil environments are very similar, and often interchangeable with, mesa models (``mesa.Model``).
+
+A configuration may specify the initial value of the environment parameters:

 .. code:: yaml
@@ -122,23 +107,33 @@ The configuration file may specify the initial value of the environment paramete
     daily_probability_of_earthquake: 0.001
     number_of_earthquakes: 0

-All agents have access to the environment parameters.
+All agents have access to the environment (and its parameters).
 In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
 For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.

 Agents
 ======

 Agents are a way of modelling behavior.
-Agents can be characterized with two variables: agent type (``agent_type``) and state.
-Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
+Agents can be characterized with two variables: agent type (``agent_class``) and state.
+The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
+The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
+All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
+
+When and how agent steps are executed in a simulation depends entirely on the ``environment``.
+Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
+In soil, we generally used the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a
+
+When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
 Through the environment, it can access the network topology and the state of other agents.

-There are three three types of agents according to how they are added to the simulation: network agents and environment agent.
+There are two types of agents according to how they are added to the simulation: network agents and environment agent.

 Network Agents
 ##############
 Network agents are attached to a node in the topology.
 The configuration file allows you to specify how agents will be mapped to topology nodes.
@@ -147,17 +142,19 @@ Hence, every node in the network will be associated to an agent of that type.
 .. code:: yaml

-   agent_type: SISaModel
+   agent_class: SISaModel

-It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
+It is also possible to add more than one type of agent to the simulation.
+To control the ratio of each type (using the ``weight`` property).
 For instance, with following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.

 .. code:: yaml

    network_agents:
-      - agent_type: SISaModel
+      - agent_class: SISaModel
         weight: 1
-      - agent_type: CounterModel
+      - agent_class: CounterModel
         weight: 5

 The third option is to specify the type of agent on the node itself, e.g.:
@@ -168,10 +165,10 @@ The third option is to specify the type of agent on the node itself, e.g.:
    topology:
      nodes:
        - id: first
-         agent_type: BaseAgent
+         agent_class: BaseAgent
    states:
      first:
-       agent_type: SISaModel
+       agent_class: SISaModel

 This would also work with a randomly generated network:
@@ -182,9 +179,9 @@ This would also work with a randomly generated network:
    network:
      generator: complete
      n: 5
-   agent_type: BaseAgent
+   agent_class: BaseAgent
    states:
-     - agent_type: SISaModel
+     - agent_class: SISaModel
@@ -195,11 +192,11 @@ e.g., to populate the network with SISaModel, roughly 10% of them with a discont
 .. code:: yaml

    network_agents:
-      - agent_type: SISaModel
+      - agent_class: SISaModel
         weight: 9
         state:
           id: neutral
-      - agent_type: SISaModel
+      - agent_class: SISaModel
         weight: 1
         state:
           id: discontent
@@ -209,7 +206,7 @@ For instance, to add a state for the two nodes in this configuration:
 .. code:: yaml

-   agent_type: SISaModel
+   agent_class: SISaModel
    network:
      generator: complete_graph
      n: 2
@@ -234,11 +231,32 @@ These agents are programmed in much the same way as network agents, the only dif
 .. code::

    environment_agents:
-      - agent_type: MyAgent
+      - agent_class: MyAgent
         state:
           mood: happy
-      - agent_type: DummyAgent
+      - agent_class: DummyAgent

 You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
 They are also useful to add behavior that has little to do with the network and the interactions within that network.

+Templating
+==========
+
+Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters in the simulation.
+For instance, you may want to run a simulation with different agent distributions.
+
+This can be done in Soil using **templates**.
+A template is a configuration where some of the values are specified with a variable.
+e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
+
+There are two types of variables, depending on how their values are decided:
+
+* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than a variable is given, a new simulation will be run per combination of values.
+* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
+
+When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
+
+Here is an example with a single fixed variable and two bounded variable:
+
+.. literalinclude:: ../examples/template.yml
+   :language: yaml
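The combination of fixed and sampled variables described in this hunk can be sketched with `itertools.product` (the variable names below are made up for illustration):

```python
from itertools import product
import random

# Fixed variables: one run per value, combined across variables
fixed = {
    "agent_class": ["SISaModel", "CounterModel"],
    "weight": [1, 5],
}
configs = [dict(zip(fixed, values)) for values in product(*fixed.values())]
assert len(configs) == 4  # 2 x 2 combinations

# A bounded/sampled variable adds one configuration per sampled value
random.seed(0)
samples = [random.uniform(0.0, 1.0) for _ in range(3)]
all_configs = [dict(cfg, prob_infect=p) for cfg in configs for p in samples]
assert len(all_configs) == 12  # 4 fixed combinations x 3 samples
```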

docs/example.yml Normal file

@@ -0,0 +1,35 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
  generator: barabasi_albert_graph
  n: 100
  m: 2
network_agents:
  - agent_class: SISaModel
    weight: 1
    state:
      id: content
  - agent_class: SISaModel
    weight: 1
    state:
      id: discontent
  - agent_class: SISaModel
    weight: 8
    state:
      id: neutral
environment_params:
  prob_infect: 0.075
  neutral_discontent_spon_prob: 0.1
  neutral_discontent_infected_prob: 0.3
  neutral_content_spon_prob: 0.3
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.5
  discontent_content: 0.5
  variance_d_c: 0.2
  content_discontent: 0.2
  variance_c_d: 0.2
  content_neutral: 0.2
  standard_variance: 1
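The weights in this configuration (1, 1, 8) assign states to the 100 nodes in roughly a 10/10/80 split; a sketch of how weighted assignment works, using Python's `random.choices`:

```python
import random
from collections import Counter

random.seed(42)

# (state, weight) pairs as in the configuration above
specs = [("content", 1), ("discontent", 1), ("neutral", 8)]
states = [s for s, _ in specs]
weights = [w for _, w in specs]

# One state per node, drawn proportionally to the weights
assignment = random.choices(states, weights=weights, k=100)
print(Counter(assignment))  # neutral dominates, roughly 80 of 100
```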


@@ -14,11 +14,11 @@ Now test that it worked by running the command line tool
    soil --help

-Or using soil programmatically:
+Or, if you're using using soil programmatically:

 .. code:: python

    import soil
    print(soil.__version__)

-The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.
+The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.


@@ -3,11 +3,11 @@ name: quickstart
 num_trials: 1
 max_time: 1000
 network_agents:
-  - agent_type: SISaModel
+  - agent_class: SISaModel
     state:
       id: neutral
     weight: 1
-  - agent_type: SISaModel
+  - agent_class: SISaModel
     state:
       id: content
     weight: 2

docs/requirements.txt Normal file

@@ -0,0 +1 @@
ipython>=7.31.1

docs/soil-vs.rst Normal file

@@ -0,0 +1,12 @@
### MESA
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
Here are some reasons to use Soil instead of plain mesa:
- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed)
- Reporting functions that aggregate multiple
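The multiple-trials idea (same parameters, a different randomness seed per trial) can be sketched as follows; the seed-plus-trial-id scheme mirrors what the changelog describes, and the helper name is made up:

```python
import random

def run_trial(seed, trial_id):
    # Each trial seeds its own RNG with the base seed plus the trial id,
    # so runs are repeatable but trials differ from each other
    rng = random.Random(f"{seed}-{trial_id}")
    return [rng.random() for _ in range(3)]


trials = [run_trial("my-sim", i) for i in range(3)]
assert run_trial("my-sim", 0) == trials[0]  # repeatable
assert trials[0] != trials[1]               # trials differ
```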


@@ -47,12 +47,6 @@ There are three main elements in a soil simulation:
 - The environment. It assigns agents to nodes in the network, and
   stores the environment parameters (shared state for all agents).

-Soil is based on ``simpy``, which is an event-based network simulation
-library. Soil provides several abstractions over events to make
-developing agents easier. This means you can use events (timeouts,
-delays) in soil, but for the most part we will assume your models will
-be step-based.

 Modeling behaviour
 ------------------
@@ -217,11 +211,11 @@ nodes in that network. Notice how node 0 is the only one with a TV.
 sim = soil.Simulation(topology=G,
                       num_trials=1,
                       max_time=MAX_TIME,
-                      environment_agents=[{'agent_type': NewsEnvironmentAgent,
+                      environment_agents=[{'agent_class': NewsEnvironmentAgent,
                                            'state': {
                                                'event_time': EVENT_TIME
                                            }}],
-                      network_agents=[{'agent_type': NewsSpread,
+                      network_agents=[{'agent_class': NewsSpread,
                                        'weight': 1}],
                       states={0: {'has_tv': True}},
                       default_state={'has_tv': False},
@@ -291,14 +285,14 @@ For this demo, we will use a python dictionary:
     },
     'network_agents': [
         {
-            'agent_type': NewsSpread,
+            'agent_class': NewsSpread,
             'weight': 1,
             'state': {
                 'has_tv': False
             }
         },
         {
-            'agent_type': NewsSpread,
+            'agent_class': NewsSpread,
             'weight': 2,
             'state': {
                 'has_tv': True
@@ -306,7 +300,7 @@ For this demo, we will use a python dictionary:
             }
         }
     ],
     'environment_agents':[
-        {'agent_type': NewsEnvironmentAgent,
+        {'agent_class': NewsEnvironmentAgent,
         'state': {
             'event_time': 10
         }


@@ -98,11 +98,11 @@
     "max_time: 30\r\n",
     "name: Sim_all_dumb\r\n",
     "network_agents:\r\n",
-    "- agent_type: DumbViewer\r\n",
+    "- agent_class: DumbViewer\r\n",
     " state:\r\n",
     " has_tv: false\r\n",
     " weight: 1\r\n",
-    "- agent_type: DumbViewer\r\n",
+    "- agent_class: DumbViewer\r\n",
     " state:\r\n",
     " has_tv: true\r\n",
     " weight: 1\r\n",
@@ -122,19 +122,19 @@
     "max_time: 30\r\n",
     "name: Sim_half_herd\r\n",
     "network_agents:\r\n",
-    "- agent_type: DumbViewer\r\n",
+    "- agent_class: DumbViewer\r\n",
     " state:\r\n",
     " has_tv: false\r\n",
     " weight: 1\r\n",
-    "- agent_type: DumbViewer\r\n",
+    "- agent_class: DumbViewer\r\n",
     " state:\r\n",
     " has_tv: true\r\n",
     " weight: 1\r\n",
-    "- agent_type: HerdViewer\r\n",
+    "- agent_class: HerdViewer\r\n",
     " state:\r\n",
     " has_tv: false\r\n",
     " weight: 1\r\n",
-    "- agent_type: HerdViewer\r\n",
+    "- agent_class: HerdViewer\r\n",
     " state:\r\n",
     " has_tv: true\r\n",
     " weight: 1\r\n",
@@ -154,12 +154,12 @@
     "max_time: 30\r\n",
     "name: Sim_all_herd\r\n",
     "network_agents:\r\n",
-    "- agent_type: HerdViewer\r\n",
+    "- agent_class: HerdViewer\r\n",
     " state:\r\n",
     " has_tv: true\r\n",
     " id: neutral\r\n",
     " weight: 1\r\n",
-    "- agent_type: HerdViewer\r\n",
+    "- agent_class: HerdViewer\r\n",
     " state:\r\n",
     " has_tv: true\r\n",
     " id: neutral\r\n",
@@ -181,12 +181,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_wise_herd\r\n", "name: Sim_wise_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -207,12 +207,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_wise\r\n", "name: Sim_all_wise\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -500,7 +500,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.6.5" "version": "3.8.5"
}, },
"toc": { "toc": {
"colors": { "colors": {


@@ -141,10 +141,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -1758,10 +1758,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -3363,10 +3363,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -4977,10 +4977,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -6591,10 +6591,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -8211,10 +8211,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -9828,10 +9828,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -11448,10 +11448,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -13062,10 +13062,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -14679,10 +14679,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -16296,10 +16296,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -17916,10 +17916,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -19521,10 +19521,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -21144,10 +21144,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -22767,10 +22767,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -24375,10 +24375,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -25992,10 +25992,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -27603,10 +27603,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -29220,10 +29220,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -30819,10 +30819,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -32439,10 +32439,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -34056,10 +34056,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -35676,10 +35676,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -37293,10 +37293,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -38913,10 +38913,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -40518,10 +40518,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -42129,10 +42129,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -43746,10 +43746,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -45357,10 +45357,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -46974,10 +46974,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -48588,10 +48588,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -50202,10 +50202,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -51819,10 +51819,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -53436,10 +53436,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -55041,10 +55041,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -56655,10 +56655,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -58257,10 +58257,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -59877,10 +59877,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
     " 'seed': 'None',\n",
@@ -61494,10 +61494,10 @@
     " 'load_module': 'newsspread',\n",
     " 'max_time': 30,\n",
     " 'name': 'Sim_all_dumb',\n",
-    " 'network_agents': [{'agent_type': 'DumbViewer',\n",
+    " 'network_agents': [{'agent_class': 'DumbViewer',\n",
     " 'state': {'has_tv': False},\n",
     " 'weight': 1},\n",
-    " {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
+    " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
     " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
     " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -63108,10 +63108,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -64713,10 +64713,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -66330,10 +66330,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -67947,10 +67947,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -69561,10 +69561,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -71178,10 +71178,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -72801,10 +72801,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -74418,10 +74418,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -76035,10 +76035,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -77643,10 +77643,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -79260,10 +79260,10 @@
" 'load_module': 'newsspread',\n", " 'load_module': 'newsspread',\n",
" 'max_time': 30,\n", " 'max_time': 30,\n",
" 'name': 'Sim_all_dumb',\n", " 'name': 'Sim_all_dumb',\n",
" 'network_agents': [{'agent_type': 'DumbViewer',\n", " 'network_agents': [{'agent_class': 'DumbViewer',\n",
" 'state': {'has_tv': False},\n", " 'state': {'has_tv': False},\n",
" 'weight': 1},\n", " 'weight': 1},\n",
" {'agent_type': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n", " {'agent_class': 'DumbViewer', 'state': {'has_tv': True}, 'weight': 1}],\n",
" 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n", " 'network_params': {'generator': 'barabasi_albert_graph', 'm': 5, 'n': 500},\n",
" 'num_trials': 50,\n", " 'num_trials': 50,\n",
" 'seed': 'None',\n", " 'seed': 'None',\n",
@@ -80800,7 +80800,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.6.5" "version": "3.8.6"
} }
}, },
"nbformat": 4, "nbformat": 4,


@@ -1,27 +1,61 @@
--- ---
version: '2'
name: simple name: simple
group: tests group: tests
dir_path: "/tmp/" dir_path: "/tmp/"
num_trials: 3 num_trials: 3
max_time: 100 max_steps: 100
interval: 1 interval: 1
seed: "CompleteSeed!" seed: "CompleteSeed!"
network_params: model_class: Environment
model_params:
am_i_complete: true
topologies:
default:
params:
generator: complete_graph generator: complete_graph
n: 10 n: 10
network_agents: another_graph:
- agent_type: CounterModel params:
weight: 1 generator: complete_graph
n: 2
environment:
agents:
agent_class: CounterModel
topology: default
state: state:
times: 1
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: BaseAgent
group: environment
topology: null
hidden: true
state:
times: 10
- agent_class: CounterModel
id: 0 id: 0
- agent_type: AggregatedCounter group: other_counters
topology: another_graph
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
group: other_counters
id: 1
distribution:
- agent_class: CounterModel
weight: 1
group: general_counters
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2 weight: 0.2
environment_agents: [] override:
environment_class: Environment - filter:
environment_params: agent_class: AggregatedCounter
am_i_complete: true n: 2
default_state: state:
incidents: 0 times: 5
states:
- name: 'The first node'
- name: 'The second node'


@@ -0,0 +1,63 @@
---
version: '2'
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_steps: 100
interval: 1
seed: "CompleteSeed!"
model_class: "soil.Environment"
model_params:
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
agents:
# The values here will be used as default values for any agent
agent_class: CounterModel
topology: default
state:
times: 1
# This specifies a distribution of agents, each with a `weight` or an explicit number of agents
distribution:
- agent_class: CounterModel
weight: 1
# This is inherited from the default settings
#topology: default
state:
times: 3
- agent_class: AggregatedCounter
topology: default
weight: 0.2
fixed:
- name: 'Environment Agent 1'
# All the other agents will assigned to the 'default' group
group: environment
# Do not count this agent towards total limits
hidden: true
agent_class: soil.BaseAgent
topology: null
state:
times: 10
- agent_class: CounterModel
topology: another_graph
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
topology: another_graph
id: 1
override:
# 2 agents that match this filter will be updated to match the state {times: 5}
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
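The `distribution` section above assigns agent classes proportionally by `weight` (here 1 vs 0.2), and `override` then patches the state of `n` agents matching a filter. The weighted assignment can be sketched in plain Python; this is an illustration of the config's semantics, not soil's actual assignment code:

```python
import random

def assign_classes(n_agents, distribution, seed=None):
    # Illustrative sketch only; soil's real logic may differ.
    # Each entry is (class_name, weight); weights need not sum to 1.
    rng = random.Random(seed)
    names = [name for name, _ in distribution]
    weights = [w for _, w in distribution]
    return rng.choices(names, weights=weights, k=n_agents)

classes = assign_classes(10, [("CounterModel", 1), ("AggregatedCounter", 0.2)],
                         seed=42)
```

With a 1:0.2 ratio, roughly five in six agents end up as `CounterModel`.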


@@ -2,7 +2,7 @@
name: custom-generator name: custom-generator
description: Using a custom generator for the network description: Using a custom generator for the network
num_trials: 3 num_trials: 3
max_time: 100 max_steps: 100
interval: 1 interval: 1
network_params: network_params:
generator: mymodule.mygenerator generator: mymodule.mygenerator
@@ -10,7 +10,7 @@ network_params:
n: 10 n: 10
n_edges: 5 n_edges: 5
network_agents: network_agents:
- agent_type: CounterModel - agent_class: CounterModel
weight: 1 weight: 1
state: state:
id: 0 state_id: 0


@@ -1,6 +1,6 @@
from networkx import Graph from networkx import Graph
import random
import networkx as nx import networkx as nx
from random import choice
def mygenerator(n=5, n_edges=5): def mygenerator(n=5, n_edges=5):
''' '''
@@ -14,9 +14,9 @@ def mygenerator(n=5, n_edges=5):
for i in range(n_edges): for i in range(n_edges):
nodes = list(G.nodes) nodes = list(G.nodes)
n_in = choice(nodes) n_in = random.choice(nodes)
nodes.remove(n_in) # Avoid loops nodes.remove(n_in) # Avoid loops
n_out = choice(nodes) n_out = random.choice(nodes)
G.add_edge(n_in, n_out) G.add_edge(n_in, n_out)
return G return G
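The change above swaps `from random import choice` for the module-level `random.choice`. The edge-sampling idea itself — pick a source node, remove it from the candidate list, then pick the target, so a node can never connect to itself — can be sketched without networkx (the function name here is illustrative):

```python
import random

def sample_edges(n=5, n_edges=5, seed=None):
    # Same sampling idea as mygenerator: removing the chosen source
    # from the candidates before picking the target avoids self-loops.
    rng = random.Random(seed)
    edges = []
    for _ in range(n_edges):
        nodes = list(range(n))
        n_in = rng.choice(nodes)
        nodes.remove(n_in)  # avoid self-loops
        n_out = rng.choice(nodes)
        edges.append((n_in, n_out))
    return edges
```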


@@ -27,8 +27,8 @@ if __name__ == '__main__':
import logging import logging
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
from soil import Simulation from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci}, s = Simulation(network_agents=[{'ids': [0], 'agent_class': Fibonacci},
{'ids': [1], 'agent_type': Odds}], {'ids': [1], 'agent_class': Odds}],
network_params={"generator": "complete_graph", "n": 2}, network_params={"generator": "complete_graph", "n": 2},
max_time=100, max_time=100,
) )

examples/mesa/mesa.yml Normal file

@@ -0,0 +1,24 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_steps: 100
interval: 1
seed: '1'
model_class: social_wealth.MoneyEnv
model_params:
topologies:
default:
params:
generator: social_wealth.graph_generator
n: 5
agents:
distribution:
- agent_class: social_wealth.SocialMoneyAgent
topology: default
weight: 1
mesa_agent_class: social_wealth.MoneyAgent
N: 10
width: 50
height: 50

examples/mesa/server.py Normal file

@@ -0,0 +1,105 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
portrayal["nodes"] = [
{
"id": agent_id,
"size": env.get_agent(agent_id).wealth,
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
"color": "#CC0000",
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
}
for (agent_id) in env.G.nodes
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth*10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})"
}
grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"network_agents": [{"agent_class": SocialMoneyAgent}],
"height": UserSettableParameter(
"slider",
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width",
5,
5,
10,
1,
description="Grid width",
),
"network_params": {
'generator': graph_generator
},
}
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
)
server.port = 8521
server.launch(open_browser=False)


@@ -0,0 +1,119 @@
'''
This is an example that adds soil agents and environment in a normal
mesa workflow.
'''
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
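`compute_gini` implements the standard discrete Gini coefficient: with wealths sorted ascending, `B` accumulates each wealth weighted by its rank from the top, and the coefficient is `1 + 1/N - 2B`. A standalone version of the same formula, easy to sanity-check:

```python
def gini(wealths):
    # Same formula as compute_gini, over a plain list of wealths:
    # G = 1 + 1/N - 2 * sum(x_i * (N - i)) / (N * sum(x))
    x = sorted(wealths)
    n = len(x)
    b = sum(xi * (n - i) for i, xi in enumerate(x)) / (n * sum(x))
    return 1 + (1 / n) - 2 * b

# Perfect equality gives 0; one agent holding everything gives (N-1)/N.
```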
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model):
super().__init__(unique_id=unique_id, model=model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.info("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents())
self.info("Trying to give money")
self.debug("Cellmates: ", cellmates)
self.debug("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(self, width, height, *args, topologies, **kwargs):
super().__init__(*args, topologies=topologies, **kwargs)
self.grid = MultiGrid(width, height, False)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
if __name__ == '__main__':
G = graph_generator()
fixed_params = {"topology": G,
"width": 10,
"network_agents": [{"agent_class": SocialMoneyAgent,
'weight': 1}],
"height": 10}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

examples/mesa/wealth.py Normal file

@@ -0,0 +1,83 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == '__main__':
fixed_params = {"width": 10,
"height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)


@@ -89,11 +89,11 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_dumb\r\n", "name: Sim_all_dumb\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -113,19 +113,19 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_half_herd\r\n", "name: Sim_half_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: DumbViewer\r\n", "- agent_class: DumbViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: false\r\n", " has_tv: false\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -145,12 +145,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_herd\r\n", "name: Sim_all_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
@@ -172,12 +172,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_wise_herd\r\n", "name: Sim_wise_herd\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: HerdViewer\r\n", "- agent_class: HerdViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",
@@ -198,12 +198,12 @@
"max_time: 30\r\n", "max_time: 30\r\n",
"name: Sim_all_wise\r\n", "name: Sim_all_wise\r\n",
"network_agents:\r\n", "network_agents:\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" id: neutral\r\n", " id: neutral\r\n",
" weight: 1\r\n", " weight: 1\r\n",
"- agent_type: WiseViewer\r\n", "- agent_class: WiseViewer\r\n",
" state:\r\n", " state:\r\n",
" has_tv: true\r\n", " has_tv: true\r\n",
" weight: 1\r\n", " weight: 1\r\n",


@@ -1,19 +1,18 @@
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_dumb name: Sim_all_dumb
network_agents: network_agents:
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -24,28 +23,27 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_half_herd name: Sim_half_herd
network_agents: network_agents:
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: DumbViewer - agent_class: newsspread.DumbViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: false has_tv: false
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -56,24 +54,23 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_herd name: Sim_all_herd
network_agents: network_agents:
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
id: neutral state_id: neutral
weight: 1 weight: 1
network_params: network_params:
generator: barabasi_albert_graph generator: barabasi_albert_graph
@@ -82,22 +79,21 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
prob_neighbor_cure: 0.1 prob_neighbor_cure: 0.1
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_wise_herd name: Sim_wise_herd
network_agents: network_agents:
- agent_type: HerdViewer - agent_class: newsspread.HerdViewer
state: state:
has_tv: true has_tv: true
id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1
@@ -108,22 +104,21 @@ network_params:
num_trials: 50 num_trials: 50
--- ---
default_state: {} default_state: {}
load_module: newsspread
environment_agents: [] environment_agents: []
environment_params: environment_params:
prob_neighbor_spread: 0.0 prob_neighbor_spread: 0.0
prob_tv_spread: 0.01 prob_tv_spread: 0.01
prob_neighbor_cure: 0.1 prob_neighbor_cure: 0.1
interval: 1 interval: 1
max_time: 300 max_steps: 300
name: Sim_all_wise name: Sim_all_wise
network_agents: network_agents:
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
id: neutral state_id: neutral
weight: 1 weight: 1
- agent_type: WiseViewer - agent_class: newsspread.WiseViewer
state: state:
has_tv: true has_tv: true
weight: 1 weight: 1


@@ -1,8 +1,8 @@
-from soil.agents import FSM, state, default_state, prob
+from soil.agents import FSM, NetworkAgent, state, default_state, prob
 import logging


-class DumbViewer(FSM):
+class DumbViewer(FSM, NetworkAgent):
     '''
     A viewer that gets infected via TV (if it has one) and tries to infect
     its neighbors once it's infected.
@@ -16,16 +16,22 @@ class DumbViewer(FSM):
     @state
     def neutral(self):
         if self['has_tv']:
-            if prob(self.env['prob_tv_spread']):
-                self.set_state(self.infected)
+            if self.prob(self.model['prob_tv_spread']):
+                return self.infected

     @state
     def infected(self):
         for neighbor in self.get_neighboring_agents(state_id=self.neutral.id):
-            if prob(self.env['prob_neighbor_spread']):
+            if self.prob(self.model['prob_neighbor_spread']):
                 neighbor.infect()

     def infect(self):
+        '''
+        This is not a state. It is a function that other agents can use to try to
+        infect this agent. DumbViewer always gets infected, but other agents like
+        HerdViewer might not become infected right away
+        '''
         self.set_state(self.infected)

@@ -34,15 +40,14 @@ class HerdViewer(DumbViewer):
     A viewer whose probability of infection depends on the state of its neighbors.
     '''

-    level = logging.DEBUG

     def infect(self):
-        '''Notice again that this is NOT a state. See DumbViewer.infect for reference'''
         infected = self.count_neighboring_agents(state_id=self.infected.id)
         total = self.count_neighboring_agents()
-        prob_infect = self.env['prob_neighbor_spread'] * infected/total
+        prob_infect = self.model['prob_neighbor_spread'] * infected/total
         self.debug('prob_infect', prob_infect)
-        if prob(prob_infect):
-            self.set_state(self.infected.id)
+        if self.prob(prob_infect):
+            self.set_state(self.infected)


 class WiseViewer(HerdViewer):
@@ -58,9 +63,9 @@ class WiseViewer(HerdViewer):
     @state
     def cured(self):
-        prob_cure = self.env['prob_neighbor_cure']
+        prob_cure = self.model['prob_neighbor_cure']
         for neighbor in self.get_neighboring_agents(state_id=self.infected.id):
-            if prob(prob_cure):
+            if self.prob(prob_cure):
                 try:
                     neighbor.cure()
                 except AttributeError:
@@ -75,7 +80,7 @@ class WiseViewer(HerdViewer):
                           1.0)
         infected = max(self.count_neighboring_agents(self.infected.id),
                        1.0)
-        prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
-        if prob(prob_cure):
-            return self.cure()
+        prob_cure = self.model['prob_neighbor_cure'] * (cured/infected)
+        if self.prob(prob_cure):
+            return self.cured
         return self.set_state(super().infected)
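The diff above migrates agents from the module-level `prob()` helper and `self.env` to the agent-bound `self.prob()` and `self.model`, and lets a state transition by returning the next state instead of calling `set_state`. As a rough, framework-free sketch of that pattern (plain Python; `MiniViewer` and its attributes are illustrative stand-ins, not the soil API):

```python
import random


class MiniViewer:
    """Toy stand-in for the FSM pattern in the diff: each state method
    may return the next state; step() invokes the current state and
    transitions when a new one is returned."""

    def __init__(self, prob_tv_spread, seed=None):
        self.random = random.Random(seed)   # per-agent RNG, as in the new API
        self.model = {'prob_tv_spread': prob_tv_spread}
        self.state = self.neutral

    def prob(self, p):
        # Bernoulli draw using the agent's own RNG (reproducible per seed)
        return self.random.random() < p

    def neutral(self):
        if self.prob(self.model['prob_tv_spread']):
            return self.infected    # returning a state transitions to it

    def infected(self):
        return None                 # absorbing state: stay infected

    def step(self):
        next_state = self.state()
        if next_state is not None:
            self.state = next_state
```

With `prob_tv_spread=1.0` a single `step()` moves the agent from `neutral` to `infected`; with `0.0` it never leaves `neutral`.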

View File

@@ -18,21 +18,23 @@ class MyAgent(agents.FSM):
     @agents.default_state
     @agents.state
     def neutral(self):
-        self.info('I am running')
+        self.debug('I am running')
+        if agents.prob(0.2):
+            self.info('This runs 2/10 times on average')


 s = Simulation(name='Programmatic',
                network_params={'generator': mygenerator},
                num_trials=1,
                max_time=100,
-               agent_type=MyAgent,
+               agent_class=MyAgent,
                dry_run=True)

+# By default, logging will only print WARNING logs (and above).
+# You need to choose a lower logging level to get INFO/DEBUG traces
 logging.basicConfig(level=logging.INFO)
 envs = s.run()
-s.dump_yaml()
-
-for env in envs:
-    env.dump_csv()
+# Uncomment this to output the simulation to a YAML file
+# s.dump_yaml('simulation.yaml')

View File

@@ -1,6 +1,5 @@
-from soil.agents import FSM, state, default_state
+from soil.agents import FSM, NetworkAgent, state, default_state
 from soil import Environment
-from random import random, shuffle
 from itertools import islice
 import logging
@@ -53,7 +52,7 @@ class CityPubs(Environment):
         pub['occupancy'] -= 1


-class Patron(FSM):
+class Patron(FSM, NetworkAgent):
     '''Agent that looks for friends to drink with. It will do three things:
         1) Look for other patrons to drink with
         2) Look for a bar where the agent and other agents in the same group can get in.
@@ -61,12 +60,10 @@ class Patron(FSM):
     '''
     level = logging.DEBUG

-    defaults = {
-        'pub': None,
-        'drunk': False,
-        'pints': 0,
-        'max_pints': 3,
-    }
+    pub = None
+    drunk = False
+    pints = 0
+    max_pints = 3

     @default_state
     @state
@@ -90,9 +87,9 @@ class Patron(FSM):
             return self.sober_in_pub
         self.debug('I am looking for a pub')
         group = list(self.get_neighboring_agents())
-        for pub in self.env.available_pubs():
+        for pub in self.model.available_pubs():
             self.debug('We\'re trying to get into {}: total: {}'.format(pub, len(group)))
-            if self.env.enter(pub, self, *group):
+            if self.model.enter(pub, self, *group):
                 self.info('We\'re all {} getting in {}!'.format(len(group), pub))
                 return self.sober_in_pub
@@ -128,8 +125,8 @@ class Patron(FSM):
         Try to become friends with another agent. The chances of
         success depend on both agents' openness.
         '''
-        if force or self['openness'] > random():
-            self.env.add_edge(self, other_agent)
+        if force or self['openness'] > self.random.random():
+            self.model.add_edge(self, other_agent)
             self.info('Made some friend {}'.format(other_agent))
             return True
         return False
@@ -138,7 +135,7 @@ class Patron(FSM):
         ''' Look for random agents around me and try to befriend them'''
         befriended = False
         k = int(10*self['openness'])
-        shuffle(others)
+        self.random.shuffle(others)
         for friend in islice(others, k):  # random.choice >= 3.7
             if friend == self:
                 continue

View File

@@ -1,25 +1,25 @@
 ---
 name: pubcrawl
 num_trials: 3
-max_time: 10
+max_steps: 10
 dump: false
 network_params:
   # Generate 100 empty nodes. They will be assigned a network agent
   generator: empty_graph
   n: 30
 network_agents:
-- agent_type: pubcrawl.Patron
+- agent_class: pubcrawl.Patron
   description: Extroverted patron
   state:
     openness: 1.0
   weight: 9
-- agent_type: pubcrawl.Patron
+- agent_class: pubcrawl.Patron
   description: Introverted patron
   state:
     openness: 0.1
   weight: 1
 environment_agents:
-- agent_type: pubcrawl.Police
+- agent_class: pubcrawl.Police
 environment_class: pubcrawl.CityPubs
 environment_params:
   altercations: 0

View File

@@ -0,0 +1,4 @@
There are two similar implementations of this simulation.
- `basic`. Using simple primitives
- `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.
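The `improved` variant relies on generator states that yield a time delta, so the scheduler can jump straight to the next relevant instant instead of re-evaluating the agent at every step. A minimal, framework-free sketch of that idea (plain Python; `youngling` and `run_until_mature` are illustrative names, not the soil `time` API):

```python
def youngling(age, sexual_maturity):
    """Generator state: instead of incrementing age once per step,
    yield how many steps to sleep, letting the scheduler skip ahead."""
    while age < sexual_maturity:
        interval = sexual_maturity - age
        age = yield interval        # resumed with the updated age
    return 'fertile'


def run_until_mature(start_age, sexual_maturity):
    """Drive the generator, advancing a simulated clock by each yielded
    interval. Returns (final_state, elapsed_steps)."""
    gen = youngling(start_age, sexual_maturity)
    clock = 0
    try:
        interval = next(gen)
        while True:
            clock += interval
            interval = gen.send(start_age + clock)
    except StopIteration as stop:
        return stop.value, clock
```

A rabbit born at age 0 with maturity 30 reaches `fertile` after a single resume at t=30, rather than 30 separate activations.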

View File

@@ -0,0 +1,130 @@
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math


class RabbitModel(FSM, NetworkAgent):

    sexual_maturity = 30
    life_expectancy = 300

    @default_state
    @state
    def newborn(self):
        self.info('I am a newborn.')
        self.age = 0
        self.offspring = 0
        return self.youngling

    @state
    def youngling(self):
        self.age += 1
        if self.age >= self.sexual_maturity:
            self.info(f'I am fertile! My age is {self.age}')
            return self.fertile

    @state
    def fertile(self):
        raise Exception("Each subclass should define its fertile state")

    @state
    def dead(self):
        self.die()


class Male(RabbitModel):
    max_females = 5
    mating_prob = 0.001

    @state
    def fertile(self):
        self.age += 1

        if self.age > self.life_expectancy:
            return self.dead

        # Males try to mate
        for f in self.model.agents(agent_class=Female,
                                   state_id=Female.fertile.id,
                                   limit=self.max_females):
            self.debug('FOUND A FEMALE: ', repr(f), self.mating_prob)
            if self.prob(self['mating_prob']):
                f.impregnate(self)
                break  # Take a break


class Female(RabbitModel):
    gestation = 100

    @state
    def fertile(self):
        # Just wait for a Male
        self.age += 1
        if self.age > self.life_expectancy:
            return self.dead

    def impregnate(self, male):
        self.info(f'{repr(male)} impregnating female {repr(self)}')
        self.mate = male
        self.pregnancy = -1
        self.set_state(self.pregnant, when=self.now)
        self.number_of_babies = int(8+4*self.random.random())
        self.debug('I am pregnant')

    @state
    def pregnant(self):
        self.age += 1
        self.pregnancy += 1

        if self.prob(self.age / self.life_expectancy):
            return self.die()

        if self.pregnancy >= self.gestation:
            self.info('Having {} babies'.format(self.number_of_babies))
            for i in range(self.number_of_babies):
                state = {}
                agent_class = self.random.choice([Male, Female])
                child = self.model.add_node(agent_class=agent_class,
                                            topology=self.topology,
                                            **state)
                child.add_edge(self)
                try:
                    child.add_edge(self.mate)
                    self.model.agents[self.mate].offspring += 1
                except ValueError:
                    self.debug('The father has passed away')

                self.offspring += 1
            self.mate = None
            return self.fertile

    @state
    def dead(self):
        super().dead()
        if 'pregnancy' in self and self['pregnancy'] > -1:
            self.info('A mother has died carrying a baby!!')


class RandomAccident(BaseAgent):

    level = logging.INFO

    def step(self):
        rabbits_alive = self.model.topology.number_of_nodes()

        if not rabbits_alive:
            return self.die()

        prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
        self.debug('Killing some rabbits with prob={}!'.format(prob_death))
        for i in self.iter_agents(agent_class=RabbitModel):
            if i.state.id == i.dead.id:
                continue
            if self.prob(prob_death):
                self.info('I killed a rabbit: {}'.format(i.id))
                rabbits_alive -= 1
                i.set_state(i.dead)
        self.debug('Rabbits alive: {}'.format(rabbits_alive))
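The `RandomAccident` agent above scales the per-rabbit death probability with the order of magnitude of the living population: `prob_death * floor(log10(max(1, alive)))`. A quick, standalone check of how that factor behaves (plain Python; `accident_prob` is an illustrative extraction of that one expression):

```python
import math


def accident_prob(prob_death, rabbits_alive):
    """Per-rabbit death probability used by RandomAccident: the base
    probability times the order of magnitude of the population."""
    return prob_death * math.floor(math.log10(max(1, rabbits_alive)))


# The factor is 0 below 10 rabbits, then grows by one with each power of ten,
# so crowded populations suffer proportionally more accidents.
table = {n: accident_prob(0.001, n) for n in (1, 9, 10, 100, 1000)}
```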

View File

@@ -0,0 +1,41 @@
---
version: '2'
name: rabbits_basic
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: soil.environment.Environment
model_params:
  agents:
    topology: default
    agent_class: rabbit_agents.RabbitModel
    distribution:
      - agent_class: rabbit_agents.Male
        topology: default
        weight: 1
      - agent_class: rabbit_agents.Female
        topology: default
        weight: 1
    fixed:
      - agent_class: rabbit_agents.RandomAccident
        topology: null
        hidden: true
        state:
          group: environment
    state:
      group: network
      mating_prob: 0.1
  prob_death: 0.001
  topologies:
    default:
      topology:
        directed: true
        links: []
        nodes:
          - id: 1
          - id: 0
extra:
  visualization_params: {}

View File

@@ -0,0 +1,130 @@
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
from soil.time import Delta, When, NEVER
from enum import Enum
import logging
import math


class RabbitModel(FSM, NetworkAgent):

    mating_prob = 0.005
    offspring = 0
    birth = None

    sexual_maturity = 3
    life_expectancy = 30

    @default_state
    @state
    def newborn(self):
        self.birth = self.now
        self.info(f'I am a newborn.')
        self.model['rabbits_alive'] = self.model.get('rabbits_alive', 0) + 1

        # Here we can skip the `youngling` state by using a coroutine/generator.
        while self.age < self.sexual_maturity:
            interval = self.sexual_maturity - self.age
            yield Delta(interval)

        self.info(f'I am fertile! My age is {self.age}')
        return self.fertile

    @property
    def age(self):
        return self.now - self.birth

    @state
    def fertile(self):
        raise Exception("Each subclass should define its fertile state")

    def step(self):
        super().step()
        if self.prob(self.age / self.life_expectancy):
            return self.die()


class Male(RabbitModel):

    max_females = 5

    @state
    def fertile(self):
        # Males try to mate
        for f in self.model.agents(agent_class=Female,
                                   state_id=Female.fertile.id,
                                   limit=self.max_females):
            self.debug('Found a female:', repr(f))
            if self.prob(self['mating_prob']):
                f.impregnate(self)
                break  # Take a break, don't try to impregnate the rest


class Female(RabbitModel):
    due_date = None
    age_of_pregnancy = None
    gestation = 10
    mate = None

    @state
    def fertile(self):
        return self.fertile, NEVER

    @state
    def pregnant(self):
        self.info('I am pregnant')

        if self.age > self.life_expectancy:
            return self.dead

        self.due_date = self.now + self.gestation

        number_of_babies = int(8+4*self.random.random())

        while self.now < self.due_date:
            yield When(self.due_date)

        self.info('Having {} babies'.format(number_of_babies))
        for i in range(number_of_babies):
            agent_class = self.random.choice([Male, Female])
            child = self.model.add_node(agent_class=agent_class,
                                        topology=self.topology)
            self.model.add_edge(self, child)
            self.model.add_edge(self.mate, child)
            self.offspring += 1
            self.model.agents[self.mate].offspring += 1
        self.mate = None
        self.due_date = None
        return self.fertile

    @state
    def dead(self):
        super().dead()
        if self.due_date is not None:
            self.info('A mother has died carrying a baby!!')

    def impregnate(self, male):
        self.info(f'{repr(male)} impregnating female {repr(self)}')
        self.mate = male
        self.set_state(self.pregnant, when=self.now)


class RandomAccident(BaseAgent):

    level = logging.INFO

    def step(self):
        rabbits_total = self.model.topology.number_of_nodes()
        if 'rabbits_alive' not in self.model:
            self.model['rabbits_alive'] = 0
        rabbits_alive = self.model.get('rabbits_alive', rabbits_total)
        prob_death = self.model.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
        self.debug('Killing some rabbits with prob={}!'.format(prob_death))
        for i in self.model.network_agents:
            if i.state.id == i.dead.id:
                continue
            if self.prob(prob_death):
                self.info('I killed a rabbit: {}'.format(i.id))
                rabbits_alive = self.model['rabbits_alive'] = rabbits_alive - 1
                i.set_state(i.dead)
        self.debug('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
        if self.model.count_agents(state_id=RabbitModel.dead.id) == self.model.topology.number_of_nodes():
            self.die()

View File

@@ -0,0 +1,41 @@
---
version: '2'
name: rabbits_improved
num_trials: 1
seed: MySeed
description: null
group: null
interval: 1.0
max_time: 100
model_class: soil.environment.Environment
model_params:
  agents:
    topology: default
    agent_class: rabbit_agents.RabbitModel
    distribution:
      - agent_class: rabbit_agents.Male
        topology: default
        weight: 1
      - agent_class: rabbit_agents.Female
        topology: default
        weight: 1
    fixed:
      - agent_class: rabbit_agents.RandomAccident
        topology: null
        hidden: true
        state:
          group: environment
    state:
      group: network
      mating_prob: 0.1
  prob_death: 0.001
  topologies:
    default:
      topology:
        directed: true
        links: []
        nodes:
          - id: 1
          - id: 0
extra:
  visualization_params: {}

View File

@@ -1,120 +0,0 @@
from soil.agents import FSM, state, default_state, BaseAgent
from enum import Enum
from random import random, choice
from itertools import islice
import logging
import math


class Genders(Enum):
    male = 'male'
    female = 'female'


class RabbitModel(FSM):

    level = logging.INFO

    defaults = {
        'age': 0,
        'gender': Genders.male.value,
        'mating_prob': 0.001,
        'offspring': 0,
    }

    sexual_maturity = 4*30
    life_expectancy = 365 * 3
    gestation = 33
    pregnancy = -1
    max_females = 5

    @default_state
    @state
    def newborn(self):
        self['age'] += 1

        if self['age'] >= self.sexual_maturity:
            return self.fertile

    @state
    def fertile(self):
        self['age'] += 1
        if self['age'] > self.life_expectancy:
            return self.dead

        if self['gender'] == Genders.female.value:
            return

        # Males try to mate
        females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
        for f in islice(females, self.max_females):
            r = random()
            if r < self['mating_prob']:
                self.impregnate(f)
                break  # Take a break

    def impregnate(self, whom):
        if self['gender'] == Genders.female.value:
            raise NotImplementedError('Females cannot impregnate')
        whom['pregnancy'] = 0
        whom['mate'] = self.id
        whom.set_state(whom.pregnant)
        self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))

    @state
    def pregnant(self):
        self['age'] += 1
        if self['age'] > self.life_expectancy:
            return self.dead

        self['pregnancy'] += 1
        self.debug('Pregnancy: {}'.format(self['pregnancy']))
        if self['pregnancy'] >= self.gestation:
            number_of_babies = int(8+4*random())
            self.info('Having {} babies'.format(number_of_babies))
            for i in range(number_of_babies):
                state = {}
                state['gender'] = choice(list(Genders)).value
                child = self.env.add_node(self.__class__, state)
                self.env.add_edge(self.id, child.id)
                self.env.add_edge(self['mate'], child.id)
                # self.add_edge()
                self.debug('A BABY IS COMING TO LIFE')
                self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.global_topology.number_of_nodes())+1
                self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
                self['offspring'] += 1
            self.env.get_agent(self['mate'])['offspring'] += 1
            del self['mate']
            self['pregnancy'] = -1
            return self.fertile

    @state
    def dead(self):
        self.info('Agent {} is dying'.format(self.id))
        if 'pregnancy' in self and self['pregnancy'] > -1:
            self.info('A mother has died carrying a baby!!')
        self.die()
        return


class RandomAccident(BaseAgent):

    level = logging.DEBUG

    def step(self):
        rabbits_total = self.global_topology.number_of_nodes()
        rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
        prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
        self.debug('Killing some rabbits with prob={}!'.format(prob_death))
        for i in self.env.network_agents:
            if i.state['id'] == i.dead.id:
                continue
            r = random()
            if r < prob_death:
                self.debug('I killed a rabbit: {}'.format(i.id))
                rabbits_alive = self.env['rabbits_alive'] = rabbits_alive - 1
                self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
                i.set_state(i.dead)
        self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
        if self.count_agents(state_id=RabbitModel.dead.id) == self.global_topology.number_of_nodes():
            self.die()

View File

@@ -1,23 +0,0 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 500
interval: 1
seed: MySeed
agent_type: RabbitModel
environment_agents:
  - agent_type: RandomAccident
environment_params:
  prob_death: 0.001
default_state:
  mating_prob: 0.01
topology:
  nodes:
    - id: 1
      state:
        gender: female
    - id: 0
      state:
        gender: male
  directed: true
  links: []

View File

@@ -0,0 +1,44 @@
'''
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from soil.time import Delta
import logging


class MyAgent(agents.FSM):
    '''
    An agent that first does a ping
    '''

    defaults = {'pong_counts': 2}

    @agents.default_state
    @agents.state
    def ping(self):
        self.info('Ping')
        return self.pong, Delta(self.random.expovariate(1/16))

    @agents.state
    def pong(self):
        self.info('Pong')
        self.pong_counts -= 1
        self.info(str(self.pong_counts))
        if self.pong_counts < 1:
            return self.die()
        return None, Delta(self.random.expovariate(1/16))


s = Simulation(name='Programmatic',
               network_agents=[{'agent_class': MyAgent, 'id': 0}],
               topology={'nodes': [{'id': 0}], 'links': []},
               num_trials=1,
               max_time=100,
               agent_class=MyAgent,
               dry_run=True)

logging.basicConfig(level=logging.INFO)
envs = s.run()
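The ping/pong agent above schedules its next activation after an exponentially distributed delay (`Delta(self.random.expovariate(1/16))`, mean 16 time units). A framework-free sketch of that kind of discrete-event scheduling (plain Python; not the soil scheduler):

```python
import heapq
import random


def run_ping_pong(pong_counts=2, mean_delay=16, seed=1):
    """Tiny discrete-event loop: each activation re-schedules the next
    one after an exponentially distributed delay, mimicking
    `return next_state, Delta(expovariate(1/mean_delay))`."""
    rng = random.Random(seed)
    events = [(0.0, 'ping')]        # (time, state) min-heap
    trace = []
    while events:
        now, state = heapq.heappop(events)
        trace.append((now, state))
        if state == 'ping':
            # ping always transitions to pong
            heapq.heappush(events, (now + rng.expovariate(1 / mean_delay), 'pong'))
        else:  # 'pong'
            pong_counts -= 1
            if pong_counts >= 1:
                heapq.heappush(events, (now + rng.expovariate(1 / mean_delay), 'pong'))
            # otherwise the agent "dies": no further events are scheduled
    return trace
```

With the default `pong_counts=2`, the run produces one ping followed by two pongs, at strictly increasing simulated times.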

View File

@@ -1,29 +1,30 @@
 ---
+sampler:
+  method: "SALib.sample.morris.sample"
+  N: 10
+template:
+  group: simple
+  num_trials: 1
+  interval: 1
+  max_steps: 2
+  seed: "CompleteSeed!"
+  dump: false
+  model_params:
+    network_params:
+      generator: complete_graph
+      n: 10
+    network_agents:
+      - agent_class: CounterModel
+        weight: "{{ x1 }}"
+        state:
+          state_id: 0
+      - agent_class: AggregatedCounter
+        weight: "{{ 1 - x1 }}"
+    name: "{{ x3 }}"
+  skip_test: true
 vars:
   bounds:
     x1: [0, 1]
     x2: [1, 2]
   fixed:
     x3: ["a", "b", "c"]
-sampler: "SALib.sample.morris.sample"
-samples: 10
-template: |
-  group: simple
-  num_trials: 1
-  interval: 1
-  max_time: 2
-  seed: "CompleteSeed!"
-  dump: false
-  network_params:
-    generator: complete_graph
-    n: 10
-  network_agents:
-    - agent_type: CounterModel
-      weight: {{ x1 }}
-      state:
-        id: 0
-    - agent_type: AggregatedCounter
-      weight: {{ 1 - x1 }}
-  environment_params:
-    name: {{ x3 }}
-skip_test: true

View File

@@ -1,4 +1,3 @@
-import random
 import networkx as nx
 from soil.agents import Geo, NetworkAgent, FSM, state, default_state
 from soil import Environment
@@ -18,34 +17,34 @@ class TerroristSpreadModel(FSM, Geo):
         prob_interaction
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=model, unique_id=unique_id, state=state)

-        self.information_spread_intensity = environment.environment_params['information_spread_intensity']
-        self.terrorist_additional_influence = environment.environment_params['terrorist_additional_influence']
-        self.prob_interaction = environment.environment_params['prob_interaction']
+        self.information_spread_intensity = model.environment_params['information_spread_intensity']
+        self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
+        self.prob_interaction = model.environment_params['prob_interaction']

         if self['id'] == self.civilian.id:  # Civilian
-            self.mean_belief = random.uniform(0.00, 0.5)
+            self.mean_belief = self.random.uniform(0.00, 0.5)
         elif self['id'] == self.terrorist.id:  # Terrorist
-            self.mean_belief = random.uniform(0.8, 1.00)
+            self.mean_belief = self.random.uniform(0.8, 1.00)
         elif self['id'] == self.leader.id:  # Leader
             self.mean_belief = 1.00
         else:
             raise Exception('Invalid state id: {}'.format(self['id']))

-        if 'min_vulnerability' in environment.environment_params:
-            self.vulnerability = random.uniform( environment.environment_params['min_vulnerability'], environment.environment_params['max_vulnerability'] )
+        if 'min_vulnerability' in model.environment_params:
+            self.vulnerability = self.random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
         else :
-            self.vulnerability = random.uniform( 0, environment.environment_params['max_vulnerability'] )
+            self.vulnerability = self.random.uniform( 0, model.environment_params['max_vulnerability'] )

     @state
     def civilian(self):
-        neighbours = list(self.get_neighboring_agents(agent_type=TerroristSpreadModel))
+        neighbours = list(self.get_neighboring_agents(agent_class=TerroristSpreadModel))
         if len(neighbours) > 0:
             # Only interact with some of the neighbors
-            interactions = list(n for n in neighbours if random.random() <= self.prob_interaction)
+            interactions = list(n for n in neighbours if self.random.random() <= self.prob_interaction)
             influence = sum( self.degree(i) for i in interactions )
             mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
             mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
@@ -64,7 +63,7 @@ class TerroristSpreadModel(FSM, Geo):
     @state
     def terrorist(self):
         neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
-                                     agent_type=TerroristSpreadModel,
+                                     agent_class=TerroristSpreadModel,
                                      limit_neighbors=True)
         if len(neighbours) > 0:
             influence = sum( self.degree(n) for n in neighbours )
@@ -82,6 +81,26 @@ class TerroristSpreadModel(FSM, Geo):
                 return
         return self.leader

+    def ego_search(self, steps=1, center=False, node=None, **kwargs):
+        '''Get a list of nodes in the ego network of *node* of radius *steps*'''
+        node = as_node(node if node is not None else self)
+        G = self.subgraph(**kwargs)
+        return nx.ego_graph(G, node, center=center, radius=steps).nodes()
+
+    def degree(self, node, force=False):
+        node = as_node(node)
+        if force or (not hasattr(self.model, '_degree')) or getattr(self.model, '_last_step', 0) < self.now:
+            self.model._degree = nx.degree_centrality(self.G)
+            self.model._last_step = self.now
+        return self.model._degree[node]
+
+    def betweenness(self, node, force=False):
+        node = as_node(node)
+        if force or (not hasattr(self.model, '_betweenness')) or getattr(self.model, '_last_step', 0) < self.now:
+            self.model._betweenness = nx.betweenness_centrality(self.G)
+            self.model._last_step = self.now
+        return self.model._betweenness[node]
+

 class TrainingAreaModel(FSM, Geo):
     """
@@ -93,17 +112,17 @@ class TrainingAreaModel(FSM, Geo):
     Requires TerroristSpreadModel.
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        self.training_influence = environment.environment_params['training_influence']
-        if 'min_vulnerability' in environment.environment_params:
-            self.min_vulnerability = environment.environment_params['min_vulnerability']
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=model, unique_id=unique_id, state=state)
+        self.training_influence = model.environment_params['training_influence']
+        if 'min_vulnerability' in model.environment_params:
+            self.min_vulnerability = model.environment_params['min_vulnerability']
         else: self.min_vulnerability = 0

     @default_state
     @state
     def terrorist(self):
-        for neighbour in self.get_neighboring_agents(agent_type=TerroristSpreadModel):
+        for neighbour in self.get_neighboring_agents(agent_class=TerroristSpreadModel):
             if neighbour.vulnerability > self.min_vulnerability:
                 neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
@@ -120,16 +139,16 @@ class HavenModel(FSM, Geo):
     Requires TerroristSpreadModel.
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        self.haven_influence = environment.environment_params['haven_influence']
-        if 'min_vulnerability' in environment.environment_params:
-            self.min_vulnerability = environment.environment_params['min_vulnerability']
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=model, unique_id=unique_id, state=state)
+        self.haven_influence = model.environment_params['haven_influence']
+        if 'min_vulnerability' in model.environment_params:
+            self.min_vulnerability = model.environment_params['min_vulnerability']
         else: self.min_vulnerability = 0
-        self.max_vulnerability = environment.environment_params['max_vulnerability']
+        self.max_vulnerability = model.environment_params['max_vulnerability']

     def get_occupants(self, **kwargs):
-        return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs)
+        return self.get_neighboring_agents(agent_class=TerroristSpreadModel, **kwargs)

     @state
     def civilian(self):
@@ -162,13 +181,13 @@ class TerroristNetworkModel(TerroristSpreadModel):
         weight_link_distance
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        self.vision_range = environment.environment_params['vision_range']
-        self.sphere_influence = environment.environment_params['sphere_influence']
-        self.weight_social_distance = environment.environment_params['weight_social_distance']
-        self.weight_link_distance = environment.environment_params['weight_link_distance']
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=model, unique_id=unique_id, state=state)
+        self.vision_range = model.environment_params['vision_range']
+        self.sphere_influence = model.environment_params['sphere_influence']
+        self.weight_social_distance = model.environment_params['weight_social_distance']
+        self.weight_link_distance = model.environment_params['weight_link_distance']

     @state
     def terrorist(self):
@@ -182,27 +201,27 @@ class TerroristNetworkModel(TerroristSpreadModel):
     def update_relationships(self):
         if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
-            close_ups = set(self.geo_search(radius=self.vision_range, agent_type=TerroristNetworkModel))
-            step_neighbours = set(self.ego_search(self.sphere_influence, agent_type=TerroristNetworkModel, center=False))
-            neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_type=TerroristNetworkModel))
+            close_ups = set(self.geo_search(radius=self.vision_range, agent_class=TerroristNetworkModel))
+            step_neighbours = set(self.ego_search(self.sphere_influence, agent_class=TerroristNetworkModel, center=False))
+            neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_class=TerroristNetworkModel))
             search = (close_ups | step_neighbours) - neighbours
             for agent in self.get_agents(search):
                 social_distance = 1 / self.shortest_path_length(agent.id)
                 spatial_proximity = ( 1 - self.get_distance(agent.id) )
                 prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
-                if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction:
+                if agent['id'] == agent.civilian.id and self.random.random() < prob_new_interaction:
                     self.add_edge(agent)
                     break

     def get_distance(self, target):
-        source_x, source_y = nx.get_node_attributes(self.global_topology, 'pos')[self.id]
-        target_x, target_y = nx.get_node_attributes(self.global_topology, 'pos')[target]
+        source_x, source_y = nx.get_node_attributes(self.G, 'pos')[self.id]
+        target_x, target_y = nx.get_node_attributes(self.G, 'pos')[target]
         dx = abs( source_x - target_x )
         dy = abs( source_y - target_y )
         return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 )

     def shortest_path_length(self, target):
         try:
-            return nx.shortest_path_length(self.global_topology, self.id, target)
+            return nx.shortest_path_length(self.G, self.id, target)
         except nx.NetworkXNoPath:
             return float('inf')
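The new `degree`/`betweenness` helpers above cache a whole-graph centrality computation on the model and recompute it at most once per simulation step. A framework-free sketch of that per-step memoization pattern (plain Python; `StepCache` is an illustrative stand-in, not part of soil):

```python
class StepCache:
    """Memoize an expensive whole-population computation, recomputing
    at most once per simulation step (the same pattern as degree()
    and betweenness() above)."""

    def __init__(self, compute):
        self.compute = compute      # expensive function: () -> dict
        self._value = None
        self._last_step = -1
        self.calls = 0              # instrumentation for the example

    def get(self, node, now, force=False):
        # Recompute only if forced, never computed, or the clock advanced
        if force or self._value is None or self._last_step < now:
            self._value = self.compute()
            self._last_step = now
            self.calls += 1
        return self._value[node]
```

Any number of lookups within the same step reuse one computation; advancing `now` invalidates the cache.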

View File

@@ -1,32 +1,31 @@
 name: TerroristNetworkModel_sim
-load_module: TerroristNetworkModel
-max_time: 150
+max_steps: 150
 num_trials: 1
-network_params:
+model_params:
+  network_params:
     generator: random_geometric_graph
     radius: 0.2
     # generator: geographical_threshold_graph
     # theta: 20
     n: 100
-network_agents:
-  - agent_type: TerroristNetworkModel
+  network_agents:
+    - agent_class: TerroristNetworkModel.TerroristNetworkModel
       weight: 0.8
       state:
         id: civilian  # Civilians
-  - agent_type: TerroristNetworkModel
+    - agent_class: TerroristNetworkModel.TerroristNetworkModel
       weight: 0.1
       state:
         id: leader  # Leaders
-  - agent_type: TrainingAreaModel
+    - agent_class: TerroristNetworkModel.TrainingAreaModel
       weight: 0.05
       state:
         id: terrorist  # Terrorism
-  - agent_type: HavenModel
+    - agent_class: TerroristNetworkModel.HavenModel
       weight: 0.05
       state:
         id: civilian  # Civilian
-environment_params:
   # TerroristSpreadModel
   information_spread_intensity: 0.7
   terrorist_additional_influence: 0.035


@@ -1,13 +1,14 @@
 ---
 name: torvalds_example
-max_time: 10
+max_steps: 10
 interval: 2
-agent_type: CounterModel
-default_state:
+model_params:
+  agent_class: CounterModel
+  default_state:
     skill_level: 'beginner'
   network_params:
     path: 'torvalds.edgelist'
   states:
     Torvalds:
       skill_level: 'God'
     balkian:

@@ -12330,11 +12330,11 @@ Notice how node 0 is the only one with a TV.
 sim = soil.Simulation(topology=G,
                       num_trials=1,
                       max_time=MAX_TIME,
-                      environment_agents=[{'agent_type': NewsEnvironmentAgent,
+                      environment_agents=[{'agent_class': NewsEnvironmentAgent,
                                            'state': {
                                                'event_time': EVENT_TIME
                                            }}],
-                      network_agents=[{'agent_type': NewsSpread,
+                      network_agents=[{'agent_class': NewsSpread,
                                        'weight': 1}],
                       states={0: {'has_tv': True}},
                       default_state={'has_tv': False},
@@ -12468,14 +12468,14 @@ For this demo, we will use a python dictionary:
     },
     'network_agents': [
         {
-            'agent_type': NewsSpread,
+            'agent_class': NewsSpread,
             'weight': 1,
             'state': {
                 'has_tv': False
             }
         },
         {
-            'agent_type': NewsSpread,
+            'agent_class': NewsSpread,
             'weight': 2,
             'state': {
                 'has_tv': True
@@ -12483,7 +12483,7 @@ For this demo, we will use a python dictionary:
             }
     ],
     'environment_agents':[
-        {'agent_type': NewsEnvironmentAgent,
+        {'agent_class': NewsEnvironmentAgent,
          'state': {
              'event_time': 10
          }

File diff suppressed because one or more lines are too long


@@ -1,10 +1,10 @@
-nxsim>=0.1.2
-simpy
-networkx>=2.0
+networkx>=2.5
 numpy
 matplotlib
 pyyaml>=5.1
-pandas>=0.23
-scipy==1.2.1  # scipy 1.3.0rc1 is not compatible with salib
+pandas>=1
 SALib>=1.3
 Jinja2
+Mesa>=1
+pydantic>=1.9
+sqlalchemy>=1.4


@@ -16,6 +16,12 @@ def parse_requirements(filename):
 install_reqs = parse_requirements("requirements.txt")
 test_reqs = parse_requirements("test-requirements.txt")

+extras_require = {
+    'mesa': ['mesa>=0.8.9'],
+    'geo': ['scipy>=1.3'],
+    'web': ['tornado']
+}
+extras_require['all'] = [dep for package in extras_require.values() for dep in package]

 setup(
@@ -40,12 +46,10 @@ setup(
         'Operating System :: POSIX',
         'Programming Language :: Python :: 3'],
     install_requires=install_reqs,
-    extras_require={
-        'web': ['tornado']
-    },
+    extras_require=extras_require,
     tests_require=test_reqs,
     setup_requires=['pytest-runner', ],
-    pytest_plugins = ['pytest_profiling'],
     include_package_data=True,
     entry_points={
         'console_scripts':


@@ -1 +1 @@
-0.14.4
+0.20.7


@@ -1,8 +1,10 @@
+from __future__ import annotations
 import importlib
 import sys
 import os
-import pdb
 import logging
+import traceback

 from .version import __version__
@@ -11,50 +13,78 @@ try:
 except NameError:
     basestring = str

+from .agents import *
 from . import agents
 from .simulation import *
 from .environment import Environment
-from .history import History
 from . import serialization
-from . import analysis
+from .utils import logger
+from .time import *


-def main():
+def main(cfg='simulation.yml', **kwargs):
     import argparse
     from . import simulation

-    logging.basicConfig(level=logging.INFO)
-    logging.info('Running SOIL version: {}'.format(__version__))
+    logger.info('Running SOIL version: {}'.format(__version__))

     parser = argparse.ArgumentParser(description='Run a SOIL simulation')
     parser.add_argument('file', type=str,
                         nargs="?",
-                        default='simulation.yml',
-                        help='python module containing the simulation configuration.')
+                        default=cfg,
+                        help='Configuration file for the simulation (e.g., YAML or JSON)')
+    parser.add_argument('--version', action='store_true',
+                        help='Show version info and exit')
     parser.add_argument('--module', '-m', type=str,
                         help='file containing the code of any custom agents.')
     parser.add_argument('--dry-run', '--dry', action='store_true',
-                        help='Do not store the results of the simulation.')
+                        help='Do not store the results of the simulation to disk, show in terminal instead.')
     parser.add_argument('--pdb', action='store_true',
                         help='Use a pdb console in case of exception.')
+    parser.add_argument('--debug', action='store_true',
+                        help='Run a customized version of a pdb console to debug a simulation.')
     parser.add_argument('--graph', '-g', action='store_true',
-                        help='Dump GEXF graph. Defaults to false.')
+                        help='Dump each trial\'s network topology as a GEXF graph. Defaults to false.')
     parser.add_argument('--csv', action='store_true',
-                        help='Dump history in CSV format. Defaults to false.')
+                        help='Dump all data collected in CSV format. Defaults to false.')
+    parser.add_argument('--level', type=str,
+                        help='Logging level')
     parser.add_argument('--output', '-o', type=str, default="soil_output",
                         help='folder to write results to. It defaults to the current directory.')
     parser.add_argument('--synchronous', action='store_true',
                         help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
     parser.add_argument('-e', '--exporter', action='append',
                         help='Export environment and/or simulations using this exporter')
+    parser.add_argument('--only-convert', '--convert', action='store_true',
+                        help='Do not run the simulation, only convert the configuration file(s) and output them.')
+    parser.add_argument("--set",
+                        metavar="KEY=VALUE",
+                        action='append',
+                        help="Set a number of parameters that will be passed to the simulation."
+                        "(do not put spaces before or after the = sign). "
+                        "If a value contains spaces, you should define "
+                        "it with double quotes: "
+                        'foo="this is a sentence". Note that '
+                        "values are always treated as strings.")

     args = parser.parse_args()
+    logger.setLevel(getattr(logging, (args.level or 'INFO').upper()))
+
+    if args.version:
+        return

     if os.getcwd() not in sys.path:
         sys.path.append(os.getcwd())
     if args.module:
         importlib.import_module(args.module)

-    logging.info('Loading config file: {}'.format(args.file))
+    logger.info('Loading config file: {}'.format(args.file))
+
+    if args.pdb or args.debug:
+        args.synchronous = True
+        if args.debug:
+            os.environ['SOIL_DEBUG'] = 'true'

     try:
         exporters = list(args.exporter or ['default', ])
@@ -65,18 +95,52 @@ def main():
         exp_params = {}
         if args.dry_run:
             exp_params['copy_to'] = sys.stdout
-        simulation.run_from_config(args.file,
-                                   dry_run=args.dry_run,
-                                   exporters=exporters,
-                                   parallel=(not args.synchronous),
-                                   outdir=args.output,
-                                   exporter_params=exp_params)
-    except Exception:
+
+        if not os.path.exists(args.file):
+            logger.error('Please, input a valid file')
+            return
+        for sim in simulation.iter_from_config(args.file):
+            if args.set:
+                for s in args.set:
+                    k, v = s.split('=', 1)[:2]
+                    v = eval(v)
+                    tail, *head = k.rsplit('.', 1)[::-1]
+                    target = sim
+                    if head:
+                        for part in head[0].split('.'):
+                            try:
+                                target = getattr(target, part)
+                            except AttributeError:
+                                target = target[part]
+                    try:
+                        setattr(target, tail, v)
+                    except AttributeError:
+                        target[tail] = v
+
+            if args.only_convert:
+                print(sim.to_yaml())
+                continue
+
+            sim.run_simulation(dry_run=args.dry_run,
+                               exporters=exporters,
+                               parallel=(not args.synchronous),
+                               outdir=args.output,
+                               exporter_params=exp_params,
+                               **kwargs)
+    except Exception as ex:
         if args.pdb:
-            pdb.post_mortem()
+            from .debugging import post_mortem
+            print(traceback.format_exc())
+            post_mortem()
         else:
             raise


+def easy(cfg, debug=False):
+    sim = simulation.from_config(cfg)
+    if debug or os.environ.get('SOIL_DEBUG'):
+        from .debugging import setup
+        setup(sys._getframe().f_back)
+    return sim
+
+
 if __name__ == '__main__':
     main()
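The new `--set KEY=VALUE` handling above walks a dotted path, trying attribute access first and falling back to mapping keys at each step. A standalone sketch of that override logic (extracted from the loop above; `FakeSim` is an illustrative stand-in for a simulation object):

```python
def apply_override(sim, assignment):
    """Apply one KEY=VALUE override, walking a dotted path through
    attributes or mapping keys, as the --set loop does."""
    k, v = assignment.split('=', 1)
    v = eval(v)  # note: the CLI evaluates the value as a Python expression
    tail, *head = k.rsplit('.', 1)[::-1]  # tail is the last path component
    target = sim
    if head:
        for part in head[0].split('.'):
            try:
                target = getattr(target, part)
            except AttributeError:
                target = target[part]
    try:
        setattr(target, tail, v)
    except AttributeError:
        # e.g. plain dicts reject setattr, so assign by key instead
        target[tail] = v

class FakeSim:
    pass

sim = FakeSim()
sim.model_params = {'n': 100}
apply_override(sim, 'model_params.n=50')
print(sim.model_params['n'])  # → 50
```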


@@ -1,40 +1,30 @@
-import random
-from . import BaseAgent
+from . import FSM, state, default_state


-class BassModel(BaseAgent):
+class BassModel(FSM):
     """
     Settings:
         innovation_prob
         imitation_prob
     """
+    sentimentCorrelation = 0

-    def __init__(self, environment, agent_id, state):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        env_params = environment.environment_params
-        self.state['sentimentCorrelation'] = 0
-
     def step(self):
         self.behaviour()

-    def behaviour(self):
-        # Outside effects
-        if random.random() < self.state_params['innovation_prob']:
-            if self.state['id'] == 0:
-                self.state['id'] = 1
-                self.state['sentimentCorrelation'] = 1
-            else:
-                pass
-            return
-
-        # Imitation effects
-        if self.state['id'] == 0:
-            aware_neighbors = self.get_neighboring_agents(state_id=1)
-            num_neighbors_aware = len(aware_neighbors)
-            if random.random() < (self.state_params['imitation_prob']*num_neighbors_aware):
-                self.state['id'] = 1
-                self.state['sentimentCorrelation'] = 1
-            else:
-                pass
+    @default_state
+    @state
+    def innovation(self):
+        if self.prob(self.innovation_prob):
+            self.sentimentCorrelation = 1
+            return self.aware
+        else:
+            aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
+            num_neighbors_aware = len(aware_neighbors)
+            if self.prob((self['imitation_prob']*num_neighbors_aware)):
+                self.sentimentCorrelation = 1
+                return self.aware
+
+    @state
+    def aware(self):
+        self.die()


@@ -1,8 +1,7 @@
-import random
-from . import BaseAgent
+from . import FSM, state, default_state


-class BigMarketModel(BaseAgent):
+class BigMarketModel(FSM):
     """
     Settings:
         Names:
@@ -19,39 +18,30 @@ class BigMarketModel(BaseAgent):
         sentiment_about [Array]
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        self.enterprises = environment.environment_params['enterprises']
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.enterprises = self.env.environment_params['enterprises']
         self.type = ""
-        self.number_of_enterprises = len(environment.environment_params['enterprises'])

-        if self.id < self.number_of_enterprises:  # Enterprises
-            self.state['id'] = self.id
+        if self.id < len(self.enterprises):  # Enterprises
+            self.set_state(self.enterprise.id)
             self.type = "Enterprise"
             self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
         else:  # normal users
-            self.state['id'] = self.number_of_enterprises
             self.type = "User"
+            self.set_state(self.user.id)
             self.tweet_probability = environment.environment_params['tweet_probability_users']
             self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
             self.tweet_probability_about = environment.environment_params['tweet_probability_about']  # List
             self.sentiment_about = environment.environment_params['sentiment_about']  # List

-    def step(self):
-
-        if self.id < self.number_of_enterprises:  # Enterprise
-            self.enterpriseBehaviour()
-        else:  # Usuario
-            self.userBehaviour()
-            for i in range(self.number_of_enterprises):  # So that it never is set to 0 if there are not changes (logs)
-                self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
-
-    def enterpriseBehaviour(self):
-        if random.random() < self.tweet_probability:  # Tweets
+    @state
+    def enterprise(self):
+        if self.random.random() < self.tweet_probability:  # Tweets
             aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises)  # Nodes neighbour users
             for x in aware_neighbors:
-                if random.uniform(0,10) < 5:
+                if self.random.uniform(0,10) < 5:
                     x.sentiment_about[self.id] += 0.1  # Increments for enterprise
                 else:
                     x.sentiment_about[self.id] -= 0.1  # Decrements for enterprise
@@ -64,13 +54,13 @@ class BigMarketModel(BaseAgent):
             x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]

-    def userBehaviour(self):
-        if random.random() < self.tweet_probability:  # Tweets
-            if random.random() < self.tweet_relevant_probability:  # Tweets something relevant
+    @state
+    def user(self):
+        if self.random.random() < self.tweet_probability:  # Tweets
+            if self.random.random() < self.tweet_relevant_probability:  # Tweets something relevant
                 # Tweet probability per enterprise
-                for i in range(self.number_of_enterprises):
-                    random_num = random.random()
+                for i in range(len(self.enterprises)):
+                    random_num = self.random.random()
                     if random_num < self.tweet_probability_about[i]:
                         # The condition is fulfilled, sentiments are evaluated towards that enterprise
                         if self.sentiment_about[i] < 0:
@@ -82,8 +72,10 @@ class BigMarketModel(BaseAgent):
                         else:
                             # POSITIVO
                             self.userTweets("positive",i)
+        for i in range(len(self.enterprises)):  # So that it never is set to 0 if there are not changes (logs)
+            self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]

-    def userTweets(self,sentiment,enterprise):
+    def userTweets(self, sentiment,enterprise):
         aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises)  # Nodes neighbours users
         for x in aware_neighbors:
             if sentiment == "positive":


@@ -1,38 +1,40 @@
-from . import BaseAgent
+from . import NetworkAgent


-class CounterModel(BaseAgent):
+class CounterModel(NetworkAgent):
     """
     Dummy behaviour. It counts the number of nodes in the network and neighbors
     in each step and adds it to its state.
     """

+    times = 0
+    neighbors = 0
+    total = 0
+
     def step(self):
         # Outside effects
-        total = len(list(self.get_all_agents()))
+        total = len(list(self.model.schedule._agents))
         neighbors = len(list(self.get_neighboring_agents()))
         self['times'] = self.get('times', 0) + 1
         self['neighbors'] = neighbors
         self['total'] = total


-class AggregatedCounter(BaseAgent):
+class AggregatedCounter(NetworkAgent):
     """
     Dummy behaviour. It counts the number of nodes in the network and neighbors
     in each step and adds it to its state.
     """

-    defaults = {
-        'times': 0,
-        'neighbors': 0,
-        'total': 0
-    }
+    times = 0
+    neighbors = 0
+    total = 0

     def step(self):
         # Outside effects
         self['times'] += 1
         neighbors = len(list(self.get_neighboring_agents()))
         self['neighbors'] += neighbors
-        total = len(list(self.get_all_agents()))
+        total = len(list(self.model.schedule.agents))
         self['total'] += total
         self.debug('Running for step: {}. Total: {}'.format(self.now, total))

soil/agents/Geo.py Normal file

@@ -0,0 +1,21 @@
+from scipy.spatial import cKDTree as KDTree
+import networkx as nx
+
+from . import NetworkAgent, as_node
+
+
+class Geo(NetworkAgent):
+    '''In this type of network, nodes have a "pos" attribute.'''
+
+    def geo_search(self, radius, node=None, center=False, **kwargs):
+        '''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
+        node = as_node(node if node is not None else self)
+        G = self.subgraph(**kwargs)
+        pos = nx.get_node_attributes(G, 'pos')
+        if not pos:
+            return []
+        nodes, coords = list(zip(*pos.items()))
+        kdtree = KDTree(coords)  # Cannot provide generator.
+        indices = kdtree.query_ball_point(pos[node], radius)
+        return [nodes[i] for i in indices if center or (nodes[i] != node)]
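`geo_search` delegates the radius query to scipy's `cKDTree.query_ball_point`, which is efficient for repeated queries over many nodes. Semantically it is equivalent to this brute-force scan over the position dictionary (a standalone sketch; the function name and sample data are illustrative):

```python
import math

def nodes_within(pos, center, radius, include_center=False):
    """Brute-force equivalent of the KDTree radius query in geo_search:
    keep every node whose position lies within *radius* of *center*."""
    cx, cy = pos[center]
    hits = []
    for node, (x, y) in pos.items():
        if node == center and not include_center:
            continue  # mirror the default center=False behaviour
        if math.hypot(x - cx, y - cy) <= radius:
            hits.append(node)
    return hits

pos = {0: (0.0, 0.0), 1: (0.1, 0.0), 2: (1.0, 1.0)}
print(nodes_within(pos, 0, 0.5))  # → [1]
```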


@@ -1,4 +1,3 @@
-import random
 from . import BaseAgent

@@ -10,10 +9,10 @@ class IndependentCascadeModel(BaseAgent):
         imitation_prob
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-        self.innovation_prob = environment.environment_params['innovation_prob']
-        self.imitation_prob = environment.environment_params['imitation_prob']
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.innovation_prob = self.env.environment_params['innovation_prob']
+        self.imitation_prob = self.env.environment_params['imitation_prob']
         self.state['time_awareness'] = 0
         self.state['sentimentCorrelation'] = 0

@@ -23,7 +22,7 @@ class IndependentCascadeModel(BaseAgent):
     def behaviour(self):
         aware_neighbors_1_time_step = []
         # Outside effects
-        if random.random() < self.innovation_prob:
+        if self.prob(self.innovation_prob):
             if self.state['id'] == 0:
                 self.state['id'] = 1
                 self.state['sentimentCorrelation'] = 1
@@ -40,7 +39,7 @@ class IndependentCascadeModel(BaseAgent):
             if x.state['time_awareness'] == (self.env.now-1):
                 aware_neighbors_1_time_step.append(x)
         num_neighbors_aware = len(aware_neighbors_1_time_step)
-        if random.random() < (self.imitation_prob*num_neighbors_aware):
+        if self.prob(self.imitation_prob*num_neighbors_aware):
             self.state['id'] = 1
             self.state['sentimentCorrelation'] = 1
         else:


@@ -1,4 +1,3 @@
-import random
 import numpy as np

 from . import BaseAgent

@@ -21,25 +20,28 @@ class SpreadModelM2(BaseAgent):
         prob_generate_anti_rumor
     """

-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
-
-        self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
-                                                           environment.environment_params['standard_variance'])
-        self.prob_infect = np.random.normal(environment.environment_params['prob_infect'],
-                                            environment.environment_params['standard_variance'])
-        self.prob_cured_healing_infected = np.random.normal(environment.environment_params['prob_cured_healing_infected'],
-                                                            environment.environment_params['standard_variance'])
-        self.prob_cured_vaccinate_neutral = np.random.normal(environment.environment_params['prob_cured_vaccinate_neutral'],
-                                                             environment.environment_params['standard_variance'])
-        self.prob_vaccinated_healing_infected = np.random.normal(environment.environment_params['prob_vaccinated_healing_infected'],
-                                                                 environment.environment_params['standard_variance'])
-        self.prob_vaccinated_vaccinate_neutral = np.random.normal(environment.environment_params['prob_vaccinated_vaccinate_neutral'],
-                                                                  environment.environment_params['standard_variance'])
-        self.prob_generate_anti_rumor = np.random.normal(environment.environment_params['prob_generate_anti_rumor'],
-                                                         environment.environment_params['standard_variance'])
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=model, unique_id=unique_id, state=state)
+
+        # Use a single generator with the same seed as `self.random`
+        random = np.random.default_rng(seed=self._seed)
+        self.prob_neutral_making_denier = random.normal(model.environment_params['prob_neutral_making_denier'],
+                                                        model.environment_params['standard_variance'])
+        self.prob_infect = random.normal(model.environment_params['prob_infect'],
+                                         model.environment_params['standard_variance'])
+        self.prob_cured_healing_infected = random.normal(model.environment_params['prob_cured_healing_infected'],
+                                                         model.environment_params['standard_variance'])
+        self.prob_cured_vaccinate_neutral = random.normal(model.environment_params['prob_cured_vaccinate_neutral'],
+                                                          model.environment_params['standard_variance'])
+        self.prob_vaccinated_healing_infected = random.normal(model.environment_params['prob_vaccinated_healing_infected'],
+                                                              model.environment_params['standard_variance'])
+        self.prob_vaccinated_vaccinate_neutral = random.normal(model.environment_params['prob_vaccinated_vaccinate_neutral'],
+                                                               model.environment_params['standard_variance'])
+        self.prob_generate_anti_rumor = random.normal(model.environment_params['prob_generate_anti_rumor'],
+                                                      model.environment_params['standard_variance'])

     def step(self):
@@ -58,7 +60,7 @@ class SpreadModelM2(BaseAgent):
         # Infected
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         if len(infected_neighbors) > 0:
-            if random.random() < self.prob_neutral_making_denier:
+            if self.prob(self.prob_neutral_making_denier):
                 self.state['id'] = 3  # Vaccinated making denier

     def infected_behaviour(self):
@@ -66,7 +68,7 @@ class SpreadModelM2(BaseAgent):
         # Neutral
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_infect:
+            if self.prob(self.prob_infect):
                 neighbor.state['id'] = 1  # Infected

     def cured_behaviour(self):
@@ -74,13 +76,13 @@ class SpreadModelM2(BaseAgent):
         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

     def vaccinated_behaviour(self):
@@ -88,19 +90,19 @@ class SpreadModelM2(BaseAgent):
         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Generate anti-rumor
         infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors_2:
-            if random.random() < self.prob_generate_anti_rumor:
+            if self.prob(self.prob_generate_anti_rumor):
                 neighbor.state['id'] = 2  # Cured
@@ -123,8 +125,8 @@ class ControlModelM2(BaseAgent):
     """
-    def __init__(self, environment=None, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
+    def __init__(self, model=None, unique_id=0, state=()):
+        super().__init__(model=environment, unique_id=unique_id, state=state)
         self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
                                                            environment.environment_params['standard_variance'])
@@ -165,7 +167,7 @@ class ControlModelM2(BaseAgent):
         # Infected
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         if len(infected_neighbors) > 0:
-            if random.random() < self.prob_neutral_making_denier:
+            if self.random(self.prob_neutral_making_denier):
                 self.state['id'] = 3  # Vaccinated making denier
     def infected_behaviour(self):
@@ -173,7 +175,7 @@ class ControlModelM2(BaseAgent):
         # Neutral
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_infect:
+            if self.prob(self.prob_infect):
                 neighbor.state['id'] = 1  # Infected

         self.state['visible'] = False
@@ -183,13 +185,13 @@ class ControlModelM2(BaseAgent):
         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

     def vaccinated_behaviour(self):
@@ -198,19 +200,19 @@ class ControlModelM2(BaseAgent):
         # Cure
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_cured_healing_infected:
+            if self.prob(self.prob_cured_healing_infected):
                 neighbor.state['id'] = 2  # Cured

         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated

         # Generate anti-rumor
         infected_neighbors_2 = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors_2:
-            if random.random() < self.prob_generate_anti_rumor:
+            if self.prob(self.prob_generate_anti_rumor):
                 neighbor.state['id'] = 2  # Cured

     def beacon_off_behaviour(self):
@@ -224,19 +226,19 @@ class ControlModelM2(BaseAgent):
         # Cure (M2 feature added)
         infected_neighbors = self.get_neighboring_agents(state_id=1)
         for neighbor in infected_neighbors:
-            if random.random() < self.prob_generate_anti_rumor:
+            if self.prob(self.prob_generate_anti_rumor):
                 neighbor.state['id'] = 2  # Cured
                 neutral_neighbors_infected = neighbor.get_neighboring_agents(state_id=0)
                 for neighbor in neutral_neighbors_infected:
-                    if random.random() < self.prob_generate_anti_rumor:
+                    if self.prob(self.prob_generate_anti_rumor):
                         neighbor.state['id'] = 3  # Vaccinated
                 infected_neighbors_infected = neighbor.get_neighboring_agents(state_id=1)
                 for neighbor in infected_neighbors_infected:
-                    if random.random() < self.prob_generate_anti_rumor:
+                    if self.prob(self.prob_generate_anti_rumor):
                         neighbor.state['id'] = 2  # Cured

         # Vaccinate
         neutral_neighbors = self.get_neighboring_agents(state_id=0)
         for neighbor in neutral_neighbors:
-            if random.random() < self.prob_cured_vaccinate_neutral:
+            if self.prob(self.prob_cured_vaccinate_neutral):
                 neighbor.state['id'] = 3  # Vaccinated
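The recurring change in this diff replaces checks against the global `random` module (`random.random() < p`) with a `self.prob(p)` helper. A minimal sketch of what such a helper looks like, assuming the agent draws from a seeded, per-instance RNG (this is an illustration, not Soil's exact implementation):

```python
import random

class BaseAgent:
    """Sketch of an agent whose stochastic checks go through a
    seeded RNG instead of the global `random` module."""
    def __init__(self, seed=None):
        # In a mesa-style model, `self.random` would come from the
        # model; here it is created locally for illustration.
        self.random = random.Random(seed)

    def prob(self, probability):
        # True with the given probability, drawn from the seeded RNG,
        # so runs are reproducible for a fixed seed.
        return self.random.random() < probability

a = BaseAgent(seed=1)
b = BaseAgent(seed=1)
# Same seed -> identical sequence of draws
assert [a.prob(0.5) for _ in range(10)] == [b.prob(0.5) for _ in range(10)]
```

Centralizing the draw also makes every stochastic decision traceable to the simulation seed, which is what makes trials repeatable.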


@@ -1,4 +1,3 @@
-import random
 import numpy as np

 from . import FSM, state
@@ -29,65 +28,67 @@ class SISaModel(FSM):
     standard_variance
     """
-    def __init__(self, environment, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
+    def __init__(self, environment, unique_id=0, state=()):
+        super().__init__(model=environment, unique_id=unique_id, state=state)
+        random = np.random.default_rng(seed=self._seed)
-        self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
+        self.neutral_discontent_spon_prob = random.normal(self.env['neutral_discontent_spon_prob'],
                                                              self.env['standard_variance'])
-        self.neutral_discontent_infected_prob = np.random.normal(self.env['neutral_discontent_infected_prob'],
+        self.neutral_discontent_infected_prob = random.normal(self.env['neutral_discontent_infected_prob'],
                                                                  self.env['standard_variance'])
-        self.neutral_content_spon_prob = np.random.normal(self.env['neutral_content_spon_prob'],
+        self.neutral_content_spon_prob = random.normal(self.env['neutral_content_spon_prob'],
                                                           self.env['standard_variance'])
-        self.neutral_content_infected_prob = np.random.normal(self.env['neutral_content_infected_prob'],
+        self.neutral_content_infected_prob = random.normal(self.env['neutral_content_infected_prob'],
                                                               self.env['standard_variance'])
-        self.discontent_neutral = np.random.normal(self.env['discontent_neutral'],
+        self.discontent_neutral = random.normal(self.env['discontent_neutral'],
                                                    self.env['standard_variance'])
-        self.discontent_content = np.random.normal(self.env['discontent_content'],
+        self.discontent_content = random.normal(self.env['discontent_content'],
                                                    self.env['variance_d_c'])
-        self.content_discontent = np.random.normal(self.env['content_discontent'],
+        self.content_discontent = random.normal(self.env['content_discontent'],
                                                    self.env['variance_c_d'])
-        self.content_neutral = np.random.normal(self.env['content_neutral'],
+        self.content_neutral = random.normal(self.env['content_neutral'],
                                                 self.env['standard_variance'])
     @state
     def neutral(self):
         # Spontaneous effects
-        if random.random() < self.neutral_discontent_spon_prob:
+        if self.prob(self.neutral_discontent_spon_prob):
             return self.discontent
-        if random.random() < self.neutral_content_spon_prob:
+        if self.prob(self.neutral_content_spon_prob):
             return self.content

         # Infected
         discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent)
-        if random.random() < discontent_neighbors * self.neutral_discontent_infected_prob:
+        if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
             return self.discontent
         content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
-        if random.random() < content_neighbors * self.neutral_content_infected_prob:
+        if self.prob(content_neighbors * self.neutral_content_infected_prob):
             return self.content
         return self.neutral

     @state
     def discontent(self):
         # Healing
-        if random.random() < self.discontent_neutral:
+        if self.prob(self.discontent_neutral):
             return self.neutral

         # Superinfected
         content_neighbors = self.count_neighboring_agents(state_id=self.content.id)
-        if random.random() < content_neighbors * self.discontent_content:
+        if self.prob(content_neighbors * self.discontent_content):
             return self.content
         return self.discontent

     @state
     def content(self):
         # Healing
-        if random.random() < self.content_neutral:
+        if self.prob(self.content_neutral):
             return self.neutral

         # Superinfected
         discontent_neighbors = self.count_neighboring_agents(state_id=self.discontent.id)
-        if random.random() < discontent_neighbors * self.content_discontent:
+        if self.prob(discontent_neighbors * self.content_discontent):
             self.discontent
         return self.content
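The `__init__` above also replaces module-level `np.random.normal` calls with a generator seeded per agent (`np.random.default_rng(seed=self._seed)`). The pattern in isolation, with illustrative names (the `_seed` attribute and parameter keys above belong to the diffed code, not this sketch):

```python
import numpy as np

def sample_param(seed, mean=0.5, std=0.1):
    # A dedicated Generator, seeded explicitly, keeps parameter draws
    # reproducible and independent of global NumPy RNG state.
    rng = np.random.default_rng(seed=seed)
    return rng.normal(mean, std)

# Same seed, same draw:
assert sample_param(123) == sample_param(123)
```

Seeding a local `Generator` instead of calling `np.random.seed` avoids cross-talk between agents that would otherwise share (and advance) the same global stream.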


@@ -1,4 +1,3 @@
-import random
 from . import BaseAgent
@@ -16,8 +15,8 @@ class SentimentCorrelationModel(BaseAgent):
     disgust_prob
     """
-    def __init__(self, environment, agent_id=0, state=()):
-        super().__init__(environment=environment, agent_id=agent_id, state=state)
+    def __init__(self, environment, unique_id=0, state=()):
+        super().__init__(model=environment, unique_id=unique_id, state=state)
         self.outside_effects_prob = environment.environment_params['outside_effects_prob']
         self.anger_prob = environment.environment_params['anger_prob']
         self.joy_prob = environment.environment_params['joy_prob']
@@ -68,10 +67,10 @@ class SentimentCorrelationModel(BaseAgent):
         disgust_prob = self.disgust_prob+(len(disgusted_neighbors_1_time_step)*self.disgust_prob)
         outside_effects_prob = self.outside_effects_prob

-        num = random.random()
+        num = self.random.random()
         if num<outside_effects_prob:
-            self.state['id'] = random.randint(1, 4)
+            self.state['id'] = self.random.randint(1, 4)
             self.state['sentimentCorrelation'] = self.state['id']  # It is stored when it has been infected for the dynamic network
             self.state['time_awareness'][self.state['id']-1] = self.env.now

File diff suppressed because it is too large


@@ -1,166 +0,0 @@
import pandas as pd
import glob
import yaml
from os.path import join
from . import serialization, history
def read_data(*args, group=False, **kwargs):
iterable = _read_data(*args, **kwargs)
if group:
return group_trials(iterable)
else:
return list(iterable)
def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
if not process_args:
process_args = {}
for folder in glob.glob(pattern):
config_file = glob.glob(join(folder, '*.yml'))[0]
config = yaml.load(open(config_file))
df = None
if from_csv:
for trial_data in sorted(glob.glob(join(folder,
'*.environment.csv'))):
df = read_csv(trial_data, **kwargs)
yield config_file, df, config
else:
for trial_data in sorted(glob.glob(join(folder, '*.db.sqlite'))):
df = read_sql(trial_data, **kwargs)
yield config_file, df, config
def read_sql(db, *args, **kwargs):
h = history.History(db_path=db, backup=False)
df = h.read_sql(*args, **kwargs)
return df
def read_csv(filename, keys=None, convert_types=False, **kwargs):
'''
Read a CSV in canonical form: ::
<agent_id, t_step, key, value, value_type>
'''
df = pd.read_csv(filename)
if convert_types:
df = convert_types_slow(df)
if keys:
df = df[df['key'].isin(keys)]
df = process_one(df)
return df
def convert_row(row):
row['value'] = serialization.deserialize(row['value_type'], row['value'])
return row
def convert_types_slow(df):
'''This is a slow operation.'''
dtypes = get_types(df)
for k, v in dtypes.items():
t = df[df['key']==k]
t['value'] = t['value'].astype(v)
df = df.apply(convert_row, axis=1)
return df
def split_df(df):
'''
Split a dataframe in two dataframes: one with the history of agents,
and one with the environment history
'''
envmask = (df['agent_id'] == 'env')
n_env = envmask.sum()
if n_env == len(df):
return df, None
elif n_env == 0:
return None, df
agents, env = [x for _, x in df.groupby(envmask)]
return env, agents
def process(df, **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
two dataframes with a column per key: one with the history of the agents, and one for the
history of the environment.
'''
env, agents = split_df(df)
return process_one(env, **kwargs), process_one(agents, **kwargs)
def get_types(df):
dtypes = df.groupby(by=['key'])['value_type'].unique()
return {k:v[0] for k,v in dtypes.iteritems()}
def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
fill=True, index=['t_step',],
aggfunc='first', **kwargs):
'''
Process a dataframe in canonical form ``(t_step, agent_id, key, value, value_type)`` into
a dataframe with a column per key
'''
if df is None:
return df
if keys:
df = df[df['key'].isin(keys)]
df = df.pivot_table(values=values, index=index, columns=columns,
aggfunc=aggfunc, **kwargs)
if fill:
df = fillna(df)
return df
def get_count(df, *keys):
if keys:
df = df[list(keys)]
counts = pd.DataFrame()
for key in df.columns.levels[0]:
g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
for value, series in g.iteritems():
counts[key, value] = series
counts.columns = pd.MultiIndex.from_tuples(counts.columns)
return counts
def get_value(df, *keys, aggfunc='sum'):
if keys:
df = df[list(keys)]
return df.groupby(axis=1, level=0).agg(aggfunc, axis=1)
def plot_all(*args, **kwargs):
'''
Read all the trial data and plot the result of applying a function on them.
'''
dfs = do_all(*args, **kwargs)
ps = []
for line in dfs:
f, df, config = line
df.plot(title=config['name'])
ps.append(df)
return ps
def do_all(pattern, func, *keys, include_env=False, **kwargs):
for config_file, df, config in read_data(pattern, keys=keys):
p = func(df, *keys, **kwargs)
p.plot(title=config['name'])
yield config_file, p, config
def group_trials(trials, aggfunc=['mean', 'min', 'max', 'std']):
trials = list(trials)
trials = list(map(lambda x: x[1] if isinstance(x, tuple) else x, trials))
return pd.concat(trials).groupby(level=0).agg(aggfunc).reorder_levels([2, 0,1] ,axis=1)
def fillna(df):
new_df = df.ffill(axis=0)
return new_df
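The deleted `process_one` pivots records in the canonical long form `(t_step, agent_id, key, value)` into one column per key. Its core, reduced to a standalone sketch (the sample rows are made up):

```python
import pandas as pd

# Made-up records in the canonical form used by read_csv/process_one
df = pd.DataFrame([
    (0, 'a1', 'id', 0),
    (0, 'a2', 'id', 1),
    (1, 'a1', 'id', 1),
], columns=['t_step', 'agent_id', 'key', 'value'])

# One column per (key, agent_id), indexed by time step
wide = df.pivot_table(values='value', index=['t_step'],
                      columns=['key', 'agent_id'], aggfunc='first')
# Same role as fillna(): carry each agent's last known value forward
wide = wide.ffill(axis=0)
```

Agent `a2` has no record at `t_step=1`, so the forward fill keeps its `t_step=0` value, which is exactly the gap-filling `fill=True` performs in `process_one`.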

soil/config.py Normal file (+266)

@@ -0,0 +1,266 @@
from __future__ import annotations
from enum import Enum
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml
import os
import sys
from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
from . import environment, utils
import networkx as nx
# Could use TypeAlias in python >= 3.10
nodeId = int
class Node(BaseModel):
id: nodeId
state: Optional[Dict[str, Any]] = {}
class Edge(BaseModel):
source: nodeId
target: nodeId
value: Optional[float] = 1
class Topology(BaseModel):
nodes: List[Node]
directed: bool
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
params: Optional[NetParams]
topology: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
class Config:
arbitrary_types_allowed = True
@staticmethod
def default():
return NetConfig(topology=None, params=None)
@root_validator
def validate_all(cls, values):
if 'params' not in values and 'topology' not in values:
raise ValueError('You must specify either a topology or the parameters to generate a graph')
return values
class EnvConfig(BaseModel):
@staticmethod
def default():
return EnvConfig()
class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None
unique_id: Optional[int] = None
topology: Optional[str] = None
node_id: Optional[Union[int, str]] = None
state: Optional[Dict[str, Any]] = {}
class FixedAgentConfig(SingleAgentConfig):
n: Optional[int] = 1
hidden: Optional[bool] = False # Do not count this agent towards total agent count
@root_validator
def validate_all(cls, values):
if values.get('agent_id', None) is not None and values.get('n', 1) > 1:
raise ValueError(f"An agent_id can only be provided when there is only one agent ({values.get('n')} given)")
return values
class OverrideAgentConfig(FixedAgentConfig):
filter: Optional[Dict[str, Any]] = None
class Strategy(Enum):
topology = 'topology'
total = 'total'
class AgentDistro(SingleAgentConfig):
weight: Optional[float] = 1
strategy: Strategy = Strategy.topology
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
topology: Optional[str]
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
@root_validator
def validate_all(cls, values):
if 'distribution' in values and ('n' not in values and 'topology' not in values):
raise ValueError("You need to provide the number of agents or a topology to extract the value from.")
return values
class Config(BaseModel, extra=Extra.allow):
version: Optional[str] = '1'
name: str = 'Unnamed Simulation'
description: Optional[str] = None
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
max_steps: int = -1
interval: float = 1
seed: str = ""
dry_run: bool = False
model_class: Union[Type, str] = environment.Environment
model_params: Optional[Dict[str, Any]] = {}
visualization_params: Optional[Dict[str, Any]] = {}
@classmethod
def from_raw(cls, cfg):
if isinstance(cfg, Config):
return cfg
if cfg.get('version', '1') == '1' and any(k in cfg for k in ['agents', 'agent_class', 'topology', 'environment_class']):
return convert_old(cfg)
return Config(**cfg)
def convert_old(old, strict=True):
'''
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
'''
utils.logger.warning('The old configuration format is deprecated. The converted file MAY NOT yield the right results')
new = old.copy()
network = {}
if 'topology' in old:
del new['topology']
network['topology'] = old['topology']
if 'network_params' in old and old['network_params']:
del new['network_params']
for (k, v) in old['network_params'].items():
if k == 'path':
network['path'] = v
else:
network.setdefault('params', {})[k] = v
topologies = {}
if network:
topologies['default'] = network
agents = {'fixed': [], 'distribution': []}
def updated_agent(agent):
'''Convert an agent definition'''
newagent = dict(agent)
return newagent
by_weight = []
fixed = []
override = []
if 'environment_agents' in new:
for agent in new['environment_agents']:
agent.setdefault('state', {})['group'] = 'environment'
if 'agent_id' in agent:
agent['state']['name'] = agent['agent_id']
del agent['agent_id']
agent['hidden'] = True
agent['topology'] = None
fixed.append(updated_agent(agent))
del new['environment_agents']
if 'agent_class' in old:
del new['agent_class']
agents['agent_class'] = old['agent_class']
if 'default_state' in old:
del new['default_state']
agents['state'] = old['default_state']
if 'network_agents' in old:
agents['topology'] = 'default'
agents.setdefault('state', {})['group'] = 'network'
for agent in new['network_agents']:
agent = updated_agent(agent)
if 'agent_id' in agent:
agent['state']['name'] = agent['agent_id']
del agent['agent_id']
fixed.append(agent)
else:
by_weight.append(agent)
del new['network_agents']
if 'agent_class' in old and (not fixed and not by_weight):
agents['topology'] = 'default'
by_weight = [{'agent_class': old['agent_class'], 'weight': 1}]
# TODO: translate states properly
if 'states' in old:
del new['states']
states = old['states']
if isinstance(states, dict):
states = states.items()
else:
states = enumerate(states)
for (k, v) in states:
override.append({'filter': {'node_id': k},
'state': v})
agents['override'] = override
agents['fixed'] = fixed
agents['distribution'] = by_weight
model_params = {}
if 'environment_params' in new:
del new['environment_params']
model_params = dict(old['environment_params'])
if 'environment_class' in old:
del new['environment_class']
new['model_class'] = old['environment_class']
if 'dump' in old:
del new['dump']
new['dry_run'] = not old['dump']
model_params['topologies'] = topologies
model_params['agents'] = agents
return Config(version='2',
model_params=model_params,
**new)
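`Config.from_raw` above decides between the two config formats with a simple heuristic: version `'1'` (or no version) plus any legacy top-level key triggers `convert_old`. Stripped of pydantic, the check is just:

```python
def needs_conversion(cfg: dict) -> bool:
    # Mirrors Config.from_raw: old-style configs declare version '1'
    # (or none at all) and use at least one legacy top-level key.
    legacy = ('agents', 'agent_class', 'topology', 'environment_class')
    return cfg.get('version', '1') == '1' and any(k in cfg for k in legacy)

assert needs_conversion({'topology': {}, 'name': 'x'})
assert not needs_conversion({'version': '2', 'model_params': {}})
```

A config that declares `version: '2'` is always taken at face value, which is why `convert_old` emits `version='2'` in its result.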

soil/datacollection.py Normal file (+6)

@@ -0,0 +1,6 @@
from mesa import DataCollector as MDC
class SoilDataCollector(MDC):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

soil/debugging.py Normal file (+151)

@@ -0,0 +1,151 @@
from __future__ import annotations
import pdb
import sys
import os
from textwrap import indent
from functools import wraps
from .agents import FSM, MetaFSM
def wrapcmd(func):
@wraps(func)
def wrapper(self, arg: str, temporary=False):
sys.settrace(self.trace_dispatch)
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
known['agent'] = known.get('self', None)
known['model'] = known.get('self', {}).get('model')
known['attrs'] = arg.strip().split()
exec(func.__code__, known, known)
return wrapper
class Debug(pdb.Pdb):
def __init__(self, *args, skip_soil=False, **kwargs):
skip = kwargs.get('skip', [])
if skip_soil:
skip.append('soil.*')
skip.append('mesa.*')
super(Debug, self).__init__(*args, skip=skip, **kwargs)
self.prompt = "[soil-pdb] "
@staticmethod
def _soil_agents(model, attrs=None, pretty=True, **kwargs):
for agent in model.agents(**kwargs):
d = agent
print(' - ' + indent(agent.to_str(keys=attrs, pretty=pretty), ' '))
@wrapcmd
def do_soil_agents():
return Debug._soil_agents(model, attrs=attrs or None)
do_sa = do_soil_agents
@wrapcmd
def do_soil_list():
return Debug._soil_agents(model, attrs=['state_id'], pretty=False)
do_sl = do_soil_list
@wrapcmd
def do_soil_self():
if not agent:
print('No agent available')
return
keys = None
if attrs:
keys = []
for k in attrs:
for key in agent.keys():
if key.startswith(k):
keys.append(key)
print(agent.to_str(pretty=True, keys=keys))
do_ss = do_soil_self
def do_break_state(self, arg: str, temporary=False):
'''
Break before a specified state is stepped into.
'''
klass = None
state = arg.strip()
if not state:
self.error("Specify at least a state name")
return
comma = arg.find(':')
if comma > 0:
state = arg[comma+1:].lstrip()
klass = arg[:comma].rstrip()
klass = eval(klass,
self.curframe.f_globals,
self.curframe_locals)
if klass:
klasses = [klass]
else:
klasses = [k for k in self.curframe.f_globals.values() if isinstance(k, type) and issubclass(k, FSM)]
print(klasses)
if not klasses:
self.error('No agent classes found')
for klass in klasses:
try:
func = getattr(klass, state)
except AttributeError:
continue
if hasattr(func, '__func__'):
func = func.__func__
code = func.__code__
#use co_name to identify the bkpt (function names
#could be aliased, but co_name is invariant)
funcname = code.co_name
lineno = code.co_firstlineno
filename = code.co_filename
# Check for reasonable breakpoint
line = self.checkline(filename, lineno)
if not line:
raise ValueError('no line found')
# now set the break point
cond = None
existing = self.get_breaks(filename, line)
if existing:
self.message("Breakpoint already exists at %s:%d" %
(filename, line))
continue
err = self.set_break(filename, line, temporary, cond, funcname)
if err:
self.error(err)
else:
bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" %
(bp.number, bp.file, bp.line))
do_bs = do_break_state
def setup(frame=None):
debugger = Debug()
frame = frame or sys._getframe().f_back
debugger.set_trace(frame)
def debug_env():
if os.environ.get('SOIL_DEBUG'):
return setup(frame=sys._getframe().f_back)
def post_mortem(traceback=None):
p = Debug()
t = sys.exc_info()[2]
p.reset()
p.interaction(None, t)
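`wrapcmd` above executes a command function's code object inside a namespace assembled from the current frame, which is why the `do_soil_*` bodies can refer to `agent`, `model` and `attrs` without declaring them. The trick in isolation, with a toy command rather than one of the debugger's:

```python
from functools import wraps

def wrapcmd(func):
    # Run func's *code object* with an augmented namespace as its
    # globals. Using exec instead of calling func lets names such as
    # `attrs` resolve to whatever the wrapper injects here.
    @wraps(func)
    def wrapper(self, arg: str):
        known = dict(globals())
        known['attrs'] = arg.strip().split()
        exec(func.__code__, known, known)
    return wrapper

seen = []

@wrapcmd
def do_demo():
    seen.append(attrs)  # `attrs` is injected by the wrapper

do_demo(None, 'state_id name')
```

The full version additionally merges `f_globals` and `f_locals` of the paused frame, so the injected `agent` and `model` track whatever `self` means at the breakpoint.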


@@ -1,356 +1,302 @@
+from __future__ import annotations

 import os
 import sqlite3
-import time
+import math
-import csv
 import random
-import simpy
+import logging
-import yaml
-import tempfile
+from typing import Any, Dict, Optional, Union
-import pandas as pd
+from collections import namedtuple
+from time import time as current_time
 from copy import deepcopy
-from collections import Counter
 from networkx.readwrite import json_graph

 import networkx as nx
-import nxsim

-from . import serialization, agents, analysis, history, utils
+from mesa import Model
+from mesa.datacollection import DataCollector

-# These properties will be copied when pickling/unpickling the environment
+from . import agents as agentmod, config, serialization, utils, time, network

-_CONFIG_PROPS = [ 'name',
-                  'states',
-                  'default_state',
-                  'interval',
-                  ]

-class Environment(nxsim.NetworkEnvironment):
+Record = namedtuple('Record', 'dict_id t_step key value')


+class BaseEnvironment(Model):
     """
-    The environment is key in a simulation. It contains the network topology,
-    a reference to network and environment agents, as well as the environment
-    params, which are used as shared state between agents.
+    The environment is key in a simulation. It controls how agents interact,
+    and what information is available to them.
+
+    This is an opinionated version of the `mesa.Model` class, which adds many
+    convenience methods and abstractions.

     The environment parameters and the state of every agent can be accessed
-    both by using the environment as a dictionary or with the environment's
+    both by using the environment as a dictionary and with the environment's
     :meth:`soil.environment.Environment.get` method.
     """
-    def __init__(self, name=None,
-                 network_agents=None,
-                 environment_agents=None,
-                 states=None,
-                 default_state=None,
-                 interval=1,
-                 seed=None,
-                 topology=None,
-                 *args, **kwargs):
-        self.name = name or 'UnnamedEnvironment'
-        if isinstance(states, list):
-            states = dict(enumerate(states))
-        self.states = deepcopy(states) if states else {}
-        self.default_state = deepcopy(default_state) or {}
-        if not topology:
-            topology = nx.Graph()
-        super().__init__(*args, topology=topology, **kwargs)
-        self._env_agents = {}
-        self.interval = interval
-        self._history = history.History(name=self.name,
-                                        backup=True)
-        # Add environment agents first, so their events get
-        # executed before network agents
-        self.environment_agents = environment_agents or []
-        self.network_agents = network_agents or []
-        self['SEED'] = seed or time.time()
-        random.seed(self['SEED'])
+    def __init__(self,
+                 id='unnamed_env',
+                 seed='default',
+                 schedule=None,
+                 dir_path=None,
+                 interval=1,
+                 agent_class=None,
+                 agents: [tuple[type, Dict[str, Any]]] = {},
+                 agent_reporters: Optional[Any] = None,
+                 model_reporters: Optional[Any] = None,
+                 tables: Optional[Any] = None,
+                 **env_params):
+
+        super().__init__(seed=seed)
+        self.current_id = -1
+
+        self.id = id
+
+        self.dir_path = dir_path or os.getcwd()
+
+        if schedule is None:
+            schedule = time.TimedActivation(self)
+        self.schedule = schedule
+
+        self.agent_class = agent_class or agentmod.BaseAgent
+
+        self.init_agents(agents)
+
+        self.env_params = env_params or {}
+
+        self.interval = interval
+
+        self.logger = utils.logger.getChild(self.id)
+
+        self.datacollector = DataCollector(
+            model_reporters=model_reporters,
+            agent_reporters=agent_reporters,
+            tables=tables,
+        )
+    def _read_single_agent(self, agent):
+        agent = dict(**agent)
+        cls = agent.pop('agent_class', None) or self.agent_class
+        unique_id = agent.pop('unique_id', None)
+        if unique_id is None:
+            unique_id = self.next_id()
+
+        return serialization.deserialize(cls)(unique_id=unique_id,
+                                              model=self, **agent)
+
+    def init_agents(self, agents: Union[config.AgentConfig, [Dict[str, Any]]] = {}):
+        if not agents:
+            return
+
+        lst = agents
+        override = []
+        if not isinstance(lst, list):
+            if not isinstance(agents, config.AgentConfig):
+                lst = config.AgentConfig(**agents)
+            if lst.override:
+                override = lst.override
+            lst = agentmod.from_config(lst,
+                                       topologies=getattr(self, 'topologies', None),
+                                       random=self.random)
+
+        # TODO: check override is working again. It cannot (easily) be part of agents.from_config anymore,
+        # because it needs attributes such as unique_id, which are only present after init
+        new_agents = [self._read_single_agent(agent) for agent in lst]
+
+        for a in new_agents:
+            self.schedule.add(a)
+
+        for rule in override:
+            for agent in agentmod.filter_agents(self.schedule._agents, **rule.filter):
+                for attr, value in rule.state.items():
+                    setattr(agent, attr, value)
     @property
     def agents(self):
-        yield from self.environment_agents
-        yield from self.network_agents
+        return agentmod.AgentView(self.schedule._agents)
+
+    def find_one(self, *args, **kwargs):
+        return agentmod.AgentView(self.schedule._agents).one(*args, **kwargs)
+
+    def count_agents(self, *args, **kwargs):
+        return sum(1 for i in self.agents(*args, **kwargs))

     @property
-    def environment_agents(self):
-        for ref in self._env_agents.values():
-            yield ref
+    def now(self):
+        if self.schedule:
+            return self.schedule.time
+        raise Exception('The environment has not been scheduled, so it has no sense of time')
-    @environment_agents.setter
-    def environment_agents(self, environment_agents):
-        # Set up environmental agent
-        self._env_agents = {}
-        for item in environment_agents:
-            kwargs = deepcopy(item)
-            atype = kwargs.pop('agent_type')
-            kwargs['agent_id'] = kwargs.get('agent_id', atype.__name__)
-            kwargs['state'] = kwargs.get('state', {})
-            a = atype(environment=self, **kwargs)
-            self._env_agents[a.id] = a
-
-    @property
-    def network_agents(self):
-        for i in self.G.nodes():
-            node = self.G.node[i]
-            if 'agent' in node:
-                yield node['agent']
-
-    @network_agents.setter
-    def network_agents(self, network_agents):
-        self._network_agents = network_agents
-        for ix in self.G.nodes():
-            self.init_agent(ix, agent_distribution=network_agents)
-
-    def init_agent(self, agent_id, agent_distribution):
-        node = self.G.nodes[agent_id]
-        init = False
-        state = dict(node)
-
-        agent_type = None
-        if 'agent_type' in self.states.get(agent_id, {}):
-            agent_type = self.states[agent_id]['agent_type']
-        elif 'agent_type' in node:
-            agent_type = node['agent_type']
-        elif 'agent_type' in self.default_state:
-            agent_type = self.default_state['agent_type']
-
-        if agent_type:
-            agent_type = agents.deserialize_type(agent_type)
-        elif agent_distribution:
-            agent_type, state = agents._agent_from_distribution(agent_distribution, agent_id=agent_id)
-        else:
-            serialization.logger.debug('Skipping node {}'.format(agent_id))
-            return
-        return self.set_agent(agent_id, agent_type, state)
-
-    def set_agent(self, agent_id, agent_type, state=None):
-        node = self.G.nodes[agent_id]
-        defstate = deepcopy(self.default_state) or {}
-        defstate.update(self.states.get(agent_id, {}))
-        defstate.update(node.get('state', {}))
-        if state:
-            defstate.update(state)
+    def add_agent(self, agent_id, agent_class, **kwargs):
         a = None
-        if agent_type:
-            state = defstate
-            a = agent_type(environment=self,
-                           agent_id=agent_id,
-                           state=state)
-            node['agent'] = a
+        if agent_class:
+            a = agent_class(model=self,
+                            unique_id=agent_id,
+                            **kwargs)
+
+        self.schedule.add(a)
         return a
def add_node(self, agent_type, state=None): def log(self, message, *args, level=logging.INFO, **kwargs):
agent_id = int(len(self.G.nodes())) if not self.logger.isEnabledFor(level):
self.G.add_node(agent_id)
a = self.set_agent(agent_id, agent_type, state)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, start=None, **attrs):
if hasattr(agent1, 'id'):
agent1 = agent1.id
if hasattr(agent2, 'id'):
agent2 = agent2.id
start = start or self.now
return self.G.add_edge(agent1, agent2, **attrs)
def run(self, *args, **kwargs):
self._save_state()
self.log_stats()
super().run(*args, **kwargs)
self._history.flush_cache()
self.log_stats()
def _save_state(self, now=None):
serialization.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def save_state(self):
'''
:DEPRECATED:
Periodically save the state of the environment and the agents.
'''
self._save_state()
while self.peek() != simpy.core.Infinity:
delay = max(self.peek() - self.now, self.interval)
serialization.logger.debug('Step: {}'.format(self.now))
ev = self.event()
ev._ok = True
# Schedule the event with minimum priority so
# that it executes before all agents
self.schedule(ev, -999, delay)
yield ev
self._save_state()
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
return self._history[key]
return self.environment_params[key]
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = history.Key(*key)
self._history.save_record(*k,
value=value)
-            return
-        self.environment_params[key] = value
-        self._history.save_record(agent_id='env',
-                                  t_step=self.now,
-                                  key=key,
-                                  value=value)
+        return
+    message = message + " ".join(str(i) for i in args)
+    message = " @{:>3}: {}".format(self.now, message)
+    for k, v in kwargs.items():
+        message += " {k}={v} ".format(k=k, v=v)
+    extra = {}
+    extra['now'] = self.now
+    extra['id'] = self.id
+    return self.logger.log(level, message, extra=extra)
def step(self):
'''
Advance one step in the simulation, and update the data collection and scheduler appropriately
'''
super().step()
self.logger.info(f'--- Step {self.now:^5} ---')
self.schedule.step()
self.datacollector.collect(self)
 def __contains__(self, key):
-    return key in self.environment_params
+    return key in self.env_params

 def get(self, key, default=None):
     '''
-    Get the value of an environment attribute in a
-    given point in the simulation (history).
-    If key is an attribute name, this method returns
-    the current value.
-    To get values at other times, use a
-    :meth:`soil.history.Key` tuple.
+    Get the value of an environment attribute.
+    Return `default` if the value is not set.
     '''
-    return self[key] if key in self else default
+    return self.env_params.get(key, default)

-def get_agent(self, agent_id):
-    return self.G.node[agent_id]['agent']
+def __getitem__(self, key):
+    return self.env_params.get(key)

-def get_agents(self, nodes=None):
-    if nodes is None:
-        return list(self.agents)
-    return [self.G.node[i]['agent'] for i in nodes]
+def __setitem__(self, key, value):
+    return self.env_params.__setitem__(key, value)
-def dump_csv(self, f):
-    with utils.open_or_reuse(f, 'w') as f:
-        cr = csv.writer(f)
-        cr.writerow(('agent_id', 't_step', 'key', 'value'))
-        for i in self.history_to_tuples():
-            cr.writerow(i)
-
-def dump_gexf(self, f):
-    G = self.history_to_graph()
-    # Workaround for geometric models
-    # See soil/soil#4
-    for node in G.nodes():
-        if 'pos' in G.node[node]:
-            G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
-            del (G.node[node]['pos'])
-    nx.write_gexf(G, f, version="1.2draft")
-
-def dump(self, *args, formats=None, **kwargs):
-    if not formats:
-        return
-    functions = {
-        'csv': self.dump_csv,
-        'gexf': self.dump_gexf
-    }
-    for f in formats:
-        if f in functions:
-            functions[f](*args, **kwargs)
-        else:
-            raise ValueError('Unknown format: {}'.format(f))
-
-def dump_sqlite(self, f):
-    return self._history.dump(f)
-
-def state_to_tuples(self, now=None):
-    if now is None:
-        now = self.now
-    for k, v in self.environment_params.items():
-        yield history.Record(agent_id='env',
-                             t_step=now,
-                             key=k,
-                             value=v)
-    for agent in self.agents:
-        for k, v in agent.state.items():
-            yield history.Record(agent_id=agent.id,
-                                 t_step=now,
-                                 key=k,
-                                 value=v)
+def _agent_to_tuples(self, agent, now=None):
+    if now is None:
+        now = self.now
+    for k, v in agent.state.items():
+        yield Record(dict_id=agent.id,
+                     t_step=now,
+                     key=k,
+                     value=v)
+
+def state_to_tuples(self, agent_id=None, now=None):
+    if now is None:
+        now = self.now
+    if agent_id:
+        agent = self.agents[agent_id]
+        yield from self._agent_to_tuples(agent, now)
+        return
+    for k, v in self.env_params.items():
+        yield Record(dict_id='env',
+                     t_step=now,
+                     key=k,
+                     value=v)
+    for agent in self.agents:
+        yield from self._agent_to_tuples(agent, now)
def history_to_tuples(self):
return self._history.to_tuples()
def history_to_graph(self):
G = nx.Graph(self.G)
for agent in self.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
history = self[agent.id, None, None]
if not history:
continue
for t_step, attribute, value in sorted(list(history)):
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
def stats(self):
stats = {}
stats['network'] = {}
stats['network']['n_nodes'] = self.G.number_of_nodes()
stats['network']['n_edges'] = self.G.number_of_edges()
c = Counter()
c.update(a.__class__.__name__ for a in self.network_agents)
stats['agents'] = {}
stats['agents']['model_count'] = dict(c)
c2 = Counter()
c2.update(a['id'] for a in self.network_agents)
stats['agents']['state_count'] = dict(c2)
stats['params'] = self.environment_params
return stats
def log_stats(self):
stats = self.stats()
serialization.logger.info('Environment stats: \n{}'.format(yaml.dump(stats, default_flow_style=False)))
def __getstate__(self):
state = {}
for prop in _CONFIG_PROPS:
state[prop] = self.__dict__[prop]
state['G'] = json_graph.node_link_data(self.G)
state['environment_agents'] = self._env_agents
state['history'] = self._history
return state
def __setstate__(self, state):
for prop in _CONFIG_PROPS:
self.__dict__[prop] = state[prop]
self._env_agents = state['environment_agents']
self.G = json_graph.node_link_graph(state['G'])
self._history = state['history']
-SoilEnvironment = Environment
+class NetworkEnvironment(BaseEnvironment):
def __init__(self, *args, topology: nx.Graph = None, topologies: Dict[str, config.NetConfig] = {}, **kwargs):
agents = kwargs.pop('agents', None)
super().__init__(*args, agents=None, **kwargs)
self._node_ids = {}
assert not hasattr(self, 'topologies')
if topology is not None:
if topologies:
raise ValueError('Please, provide either a single topology or a dictionary of them')
topologies = {'default': topology}
self.topologies = {}
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg, graph=name)
self.init_agents(agents)
def _read_single_agent(self, agent, unique_id=None):
agent = dict(agent)
if agent.get('topology', None) is not None:
topology = agent.get('topology')
if unique_id is None:
unique_id = self.next_id()
if topology:
node_id = self.agent_to_node(unique_id, graph_name=topology, node_id=agent.get('node_id'))
agent['node_id'] = node_id
agent['topology'] = topology
agent['unique_id'] = unique_id
return super()._read_single_agent(agent)
@property
def topology(self):
return self.topologies['default']
def set_topology(self, cfg=None, dir_path=None, graph='default'):
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology
def topology_for(self, unique_id):
return self.topologies[self._node_ids[unique_id][0]]
@property
def network_agents(self):
yield from self.agents(agent_class=agentmod.NetworkAgent)
def agent_to_node(self, unique_id, graph_name='default',
node_id=None, shuffle=False):
node_id = network.agent_to_node(G=self.topologies[graph_name],
agent_id=unique_id,
node_id=node_id,
shuffle=shuffle,
random=self.random)
self._node_ids[unique_id] = (graph_name, node_id)
return node_id
def add_node(self, agent_class, topology, **kwargs):
unique_id = self.next_id()
self.topologies[topology].add_node(unique_id)
node_id = self.agent_to_node(unique_id=unique_id, node_id=unique_id, graph_name=topology)
a = self.add_agent(unique_id=unique_id, agent_class=agent_class, node_id=node_id, topology=topology, **kwargs)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
agent1 = agent1.node_id
agent2 = agent2.node_id
return self.topologies[graph].add_edge(agent1, agent2, start=start)
def add_agent(self, unique_id, state=None, graph='default', **kwargs):
node = self.topologies[graph].nodes[unique_id]
node_state = node.get('state', {})
if node_state:
node_state.update(state or {})
state = node_state
a = super().add_agent(unique_id, state=state, **kwargs)
node['agent'] = a
return a
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
Environment = NetworkEnvironment
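The new `agents` property above returns an `agentmod.AgentView` instead of a generator, which is what makes helpers like `find_one` and `count_agents` possible. Below is a dependency-free sketch of that querying pattern; the class names and filtering semantics are illustrative assumptions, not soil's actual `AgentView` implementation.

```python
# Minimal stand-ins for an agent registry keyed by unique_id,
# like the scheduler's `_agents` mapping.
class Agent:
    def __init__(self, unique_id, state=None):
        self.unique_id = unique_id
        self.state = state or {}

class AgentView:
    def __init__(self, agents):
        # agents: dict mapping unique_id -> Agent
        self._agents = agents

    def filter(self, agent_class=None, **state):
        # Yield agents matching an optional class and any state key/values
        for a in self._agents.values():
            if agent_class is not None and not isinstance(a, agent_class):
                continue
            if all(a.state.get(k) == v for k, v in state.items()):
                yield a

    def one(self, **kwargs):
        # First matching agent, or None (roughly what find_one relies on)
        return next(self.filter(**kwargs), None)

    def count(self, **kwargs):
        # Number of matching agents (roughly what count_agents relies on)
        return sum(1 for _ in self.filter(**kwargs))

agents = {i: Agent(i, {'status': 'infected' if i % 2 else 'neutral'})
          for i in range(4)}
view = AgentView(agents)
```

Keeping the view as an object (rather than a generator) lets the same registry be filtered repeatedly without exhausting it.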


@@ -1,25 +1,18 @@
 import os
-import time
+from time import time as current_time
 from io import BytesIO
+from sqlalchemy import create_engine

 import matplotlib.pyplot as plt
 import networkx as nx
-import pandas as pd

 from .serialization import deserialize
 from .utils import open_or_reuse, logger, timer
-from . import utils
+from . import utils, network
def for_sim(simulation, names, *args, **kwargs):
'''Return the set of exporters for a simulation, given the exporter names'''
exporters = []
for name in names:
mod = deserialize(name, known_modules=['soil.exporters'])
exporters.append(mod(simulation, *args, **kwargs))
return exporters
 class DryRunner(BytesIO):

@@ -37,8 +30,12 @@ class DryRunner(BytesIO):

         super().write(bytes(txt, 'utf-8'))

     def close(self):
-        logger.info('**Not** written to {} (dry run mode):\n\n{}\n\n'.format(self.__fname,
-                                                                             self.getvalue().decode()))
+        content = '(binary data not shown)'
+        try:
+            content = self.getvalue().decode()
+        except UnicodeDecodeError:
+            pass
+        logger.info('**Not** written to {} (dry run mode):\n\n{}\n\n'.format(self.__fname, content))
         super().close()
@@ -49,7 +46,7 @@ class Exporter:
     '''

     def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
-        self.sim = simulation
+        self.simulation = simulation
         outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
         self.outdir = os.path.join(outdir,
                                    simulation.group or '',

@@ -57,14 +54,21 @@ class Exporter:

         self.dry_run = dry_run
         self.copy_to = copy_to

-    def start(self):
+    def sim_start(self):
         '''Method to call when the simulation starts'''
+        pass

-    def end(self):
+    def sim_end(self):
         '''Method to call when the simulation ends'''
+        pass
+
+    def trial_start(self, env):
+        '''Method to call when a trial starts'''
+        pass

     def trial_end(self, env):
         '''Method to call when a trial ends'''
+        pass

     def output(self, f, mode='w', **kwargs):
         if self.dry_run:
@@ -81,93 +85,76 @@ class Exporter:
 class default(Exporter):
     '''Default exporter. Writes sqlite results, as well as the simulation YAML'''
-    def start(self):
+    def sim_start(self):
         if not self.dry_run:
             logger.info('Dumping results to %s', self.outdir)
-            self.sim.dump_yaml(outdir=self.outdir)
+            with self.output(self.simulation.name + '.dumped.yml') as f:
+                f.write(self.simulation.to_yaml())
         else:
             logger.info('NOT dumping results')

     def trial_end(self, env):
         if not self.dry_run:
-            with timer('Dumping simulation {} trial {}'.format(self.sim.name,
-                                                               env.name)):
-                with self.output('{}.sqlite'.format(env.name), mode='wb') as f:
-                    env.dump_sqlite(f)
+            with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
+                                                               env.id)):
+                engine = create_engine('sqlite:///{}.sqlite'.format(env.id), echo=False)
+                dc = env.datacollector
+                for (t, df) in get_dc_dfs(dc):
+                    df.to_sql(t, con=engine, if_exists='append')

+def get_dc_dfs(dc):
+    dfs = {'env': dc.get_model_vars_dataframe(),
+           'agents': dc.get_agent_vars_dataframe()}
+    for table_name in dc.tables:
+        dfs[table_name] = dc.get_table_dataframe(table_name)
+    yield from dfs.items()
 class csv(Exporter):
     '''Export the state of each environment (and its agents) in a separate CSV file'''
     def trial_end(self, env):
-        with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.sim.name,
-                                                                          env.name,
+        with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.simulation.name,
+                                                                          env.id,
                                                                           self.outdir)):
-            with self.output('{}.csv'.format(env.name)) as f:
-                env.dump_csv(f)
+            for (df_name, df) in get_dc_dfs(env.datacollector):
+                with self.output('{}.{}.csv'.format(env.id, df_name)) as f:
+                    df.to_csv(f)

+#TODO: reimplement GEXF exporting without history
 class gexf(Exporter):
     def trial_end(self, env):
         if self.dry_run:
             logger.info('Not dumping GEXF in dry_run mode')
             return

-        with timer('[GEXF] Dumping simulation {} trial {}'.format(self.sim.name,
-                                                                  env.name)):
-            with self.output('{}.gexf'.format(env.name), mode='wb') as f:
-                env.dump_gexf(f)
-                self.dump_gexf(env, f)
+        with timer('[GEXF] Dumping simulation {} trial {}'.format(self.simulation.name,
+                                                                  env.id)):
+            with self.output('{}.gexf'.format(env.id), mode='wb') as f:
+                network.dump_gexf(env.history_to_graph(), f)
 class dummy(Exporter):
-    def start(self):
+    def sim_start(self):
         with self.output('dummy', 'w') as f:
-            f.write('simulation started @ {}\n'.format(time.time()))
+            f.write('simulation started @ {}\n'.format(current_time()))
+
+    def trial_start(self, env):
+        with self.output('dummy', 'w') as f:
+            f.write('trial started @ {}\n'.format(current_time()))

     def trial_end(self, env):
         with self.output('dummy', 'w') as f:
-            for i in env.history_to_tuples():
-                f.write(','.join(map(str, i)))
-                f.write('\n')
+            f.write('trial ended @ {}\n'.format(current_time()))

-    def end(self):
+    def sim_end(self):
         with self.output('dummy', 'a') as f:
-            f.write('simulation ended @ {}\n'.format(time.time()))
+            f.write('simulation ended @ {}\n'.format(current_time()))
class distribution(Exporter):
'''
Write the distribution of agent states at the end of each trial,
the mean value, and its deviation.
'''
def start(self):
self.means = []
self.counts = []
def trial_end(self, env):
df = env[None, None, None].df()
ix = df.index[-1]
attrs = df.columns.levels[0]
vc = {}
stats = {}
for a in attrs:
t = df.loc[(ix, a)]
try:
self.means.append(('mean', a, t.mean()))
except TypeError:
for name, count in t.value_counts().iteritems():
self.counts.append(('count', a, name, count))
def end(self):
dfm = pd.DataFrame(self.means, columns=['metric', 'key', 'value'])
dfc = pd.DataFrame(self.counts, columns=['metric', 'key', 'value', 'count'])
dfm = dfm.groupby(by=['key']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
dfc = dfc.groupby(by=['key', 'value']).agg(['mean', 'std', 'count', 'median', 'max', 'min'])
with self.output('counts.csv') as f:
dfc.to_csv(f)
with self.output('metrics.csv') as f:
dfm.to_csv(f)
class graphdrawing(Exporter):

@@ -175,5 +162,53 @@ class graphdrawing(Exporter):

         # Outside effects
         f = plt.figure()
         nx.draw(env.G, node_size=10, width=0.2, pos=nx.spring_layout(env.G, scale=100), ax=f.add_subplot(111))
-        with open('graph-{}.png'.format(env.name)) as f:
+        with open('graph-{}.png'.format(env.id)) as f:
             f.savefig(f)
def env_to_graph(env, history=None):
    '''
    Convert an environment into a NetworkX graph
    '''
G = nx.Graph(env.G)
for agent in env.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
if not history:
history = sorted(list(env.state_to_tuples()))
for _, t_step, attribute, value in history:
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
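The `env_to_graph` function above folds the time series of each agent's `visible` attribute into GEXF "spells" (visibility intervals). That interval logic can be isolated into a small, testable sketch; this is a simplification that assumes a pre-sorted `(t_step, value)` series for a single attribute, unlike the full attribute loop above.

```python
def visibility_spells(history):
    '''history: iterable of (t_step, value) pairs for the `visible`
    attribute, sorted by time. Returns a list of (start, end) spells,
    where end is None for a spell that is still open.'''
    spells = []
    lastvisible = False
    laststep = None
    for t_step, value in history:
        if value and not lastvisible:
            laststep = t_step              # a spell starts
        if not value and lastvisible:
            spells.append((laststep, t_step))  # a spell ends
        lastvisible = value
    if lastvisible:
        spells.append((laststep, None))    # still visible at the end
    return spells

print(visibility_spells([(0, True), (3, False), (5, True)]))
# prints [(0, 3), (5, None)]
```

An open-ended `(start, None)` spell is exactly what the GEXF writer expects for a node that remains visible when the simulation ends.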


@@ -1,315 +0,0 @@
import time
import os
import pandas as pd
import sqlite3
import copy
import logging
import tempfile
logger = logging.getLogger(__name__)
from collections import UserDict, namedtuple
from . import serialization
from .utils import open_or_reuse
class History:
"""
Store and retrieve values from a sqlite database.
"""
def __init__(self, name=None, db_path=None, backup=False):
self._db = None
if db_path is None:
if not name:
name = time.time()
_, db_path = tempfile.mkstemp(suffix='{}.sqlite'.format(name))
if backup and os.path.exists(db_path):
newname = db_path + '.backup{}.sqlite'.format(time.time())
os.rename(db_path, newname)
self.db_path = db_path
self.db = db_path
with self.db:
logger.debug('Creating database {}'.format(self.db_path))
self.db.execute('''CREATE TABLE IF NOT EXISTS history (agent_id text, t_step int, key text, value text text)''')
self.db.execute('''CREATE TABLE IF NOT EXISTS value_types (key text, value_type text)''')
self.db.execute('''CREATE UNIQUE INDEX IF NOT EXISTS idx_history ON history (agent_id, t_step, key);''')
self._dtypes = {}
self._tups = []
@property
def db(self):
try:
self._db.cursor()
except (sqlite3.ProgrammingError, AttributeError):
self.db = None # Reset the database
return self._db
@db.setter
def db(self, db_path=None):
self._close()
db_path = db_path or self.db_path
if isinstance(db_path, str):
logger.debug('Connecting to database {}'.format(db_path))
self._db = sqlite3.connect(db_path)
else:
self._db = db_path
def _close(self):
if self._db is None:
return
self.flush_cache()
self._db.close()
self._db = None
@property
def dtypes(self):
self.read_types()
return {k:v[0] for k, v in self._dtypes.items()}
def save_tuples(self, tuples):
'''
Save a series of tuples, converting them to records if necessary
'''
self.save_records(Record(*tup) for tup in tuples)
def save_records(self, records):
'''
Save a collection of records
'''
for record in records:
if not isinstance(record, Record):
record = Record(*record)
self.save_record(*record)
def save_record(self, agent_id, t_step, key, value):
'''
Save a collection of records to the database.
Database writes are cached.
'''
value = self.convert(key, value)
self._tups.append(Record(agent_id=agent_id,
t_step=t_step,
key=key,
value=value))
if len(self._tups) > 100:
self.flush_cache()
def convert(self, key, value):
"""Get the serialized value for a given key."""
if key not in self._dtypes:
self.read_types()
if key not in self._dtypes:
name = serialization.name(value)
serializer = serialization.serializer(name)
deserializer = serialization.deserializer(name)
self._dtypes[key] = (name, serializer, deserializer)
with self.db:
self.db.execute("replace into value_types (key, value_type) values (?, ?)", (key, name))
return self._dtypes[key][1](value)
def recover(self, key, value):
"""Get the deserialized value for a given key, and the serialized version."""
if key not in self._dtypes:
self.read_types()
if key not in self._dtypes:
raise ValueError("Unknown datatype for {} and {}".format(key, value))
return self._dtypes[key][2](value)
def flush_cache(self):
'''
Use a cache to save state changes to avoid opening a session for every change.
The cache will be flushed at the end of the simulation, and when history is accessed.
'''
logger.debug('Flushing cache {}'.format(self.db_path))
with self.db:
for rec in self._tups:
self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
self._tups = list()
def to_tuples(self):
self.flush_cache()
with self.db:
res = self.db.execute("select agent_id, t_step, key, value from history ").fetchall()
for r in res:
agent_id, t_step, key, value = r
value = self.recover(key, value)
yield agent_id, t_step, key, value
def read_types(self):
with self.db:
res = self.db.execute("select key, value_type from value_types ").fetchall()
for k, v in res:
serializer = serialization.serializer(v)
deserializer = serialization.deserializer(v)
self._dtypes[k] = (v, serializer, deserializer)
def __getitem__(self, key):
self.flush_cache()
key = Key(*key)
agent_ids = [key.agent_id] if key.agent_id is not None else []
t_steps = [key.t_step] if key.t_step is not None else []
keys = [key.key] if key.key is not None else []
df = self.read_sql(agent_ids=agent_ids,
t_steps=t_steps,
keys=keys)
r = Records(df, filter=key, dtypes=self._dtypes)
if r.resolved:
return r.value()
return r
def read_sql(self, keys=None, agent_ids=None, t_steps=None, convert_types=False, limit=-1):
self.read_types()
def escape_and_join(v):
if v is None:
return
return ",".join(map(lambda x: "\'{}\'".format(x), v))
filters = [("key in ({})".format(escape_and_join(keys)), keys),
("agent_id in ({})".format(escape_and_join(agent_ids)), agent_ids)
]
filters = list(k[0] for k in filters if k[1])
last_df = None
if t_steps:
# Look for the last value before the minimum step in the query
min_step = min(t_steps)
last_filters = ['t_step < {}'.format(min_step),]
last_filters = last_filters + filters
condition = ' and '.join(last_filters)
last_query = '''
select h1.*
from history h1
inner join (
select agent_id, key, max(t_step) as t_step
from history
where {condition}
group by agent_id, key
) h2
on h1.agent_id = h2.agent_id and
h1.key = h2.key and
h1.t_step = h2.t_step
'''.format(condition=condition)
last_df = pd.read_sql_query(last_query, self.db)
filters.append("t_step >= '{}' and t_step <= '{}'".format(min_step, max(t_steps)))
condition = ''
if filters:
condition = 'where {} '.format(' and '.join(filters))
query = 'select * from history {} limit {}'.format(condition, limit)
df = pd.read_sql_query(query, self.db)
if last_df is not None:
df = pd.concat([df, last_df])
df_p = df.pivot_table(values='value', index=['t_step'],
columns=['key', 'agent_id'],
aggfunc='first')
for k, v in self._dtypes.items():
if k in df_p:
dtype, _, deserial = v
df_p[k] = df_p[k].fillna(method='ffill').astype(dtype)
if t_steps:
df_p = df_p.reindex(t_steps, method='ffill')
return df_p.ffill()
def __getstate__(self):
state = dict(**self.__dict__)
del state['_db']
del state['_dtypes']
return state
def __setstate__(self, state):
self.__dict__ = state
self._dtypes = {}
self._db = None
def dump(self, f):
self._close()
for line in open_or_reuse(self.db_path, 'rb'):
f.write(line)
class Records():
def __init__(self, df, filter=None, dtypes=None):
if not filter:
filter = Key(agent_id=None,
t_step=None,
key=None)
self._df = df
self._filter = filter
self.dtypes = dtypes or {}
super().__init__()
def mask(self, tup):
res = ()
for i, k in zip(tup[:-1], self._filter):
if k is None:
res = res + (i,)
res = res + (tup[-1],)
return res
def filter(self, newKey):
f = list(self._filter)
for ix, i in enumerate(f):
if i is None:
f[ix] = newKey
self._filter = Key(*f)
@property
def resolved(self):
return sum(1 for i in self._filter if i is not None) == 3
def __iter__(self):
for column, series in self._df.iteritems():
key, agent_id = column
for t_step, value in series.iteritems():
r = Record(t_step=t_step,
agent_id=agent_id,
key=key,
value=value)
yield self.mask(r)
def value(self):
if self.resolved:
f = self._filter
try:
i = self._df[f.key][str(f.agent_id)]
ix = i.index.get_loc(f.t_step, method='ffill')
return i.iloc[ix]
except KeyError as ex:
return self.dtypes[f.key][2]()
return list(self)
def df(self):
return self._df
def __getitem__(self, k):
n = copy.copy(self)
n.filter(k)
if n.resolved:
return n.value()
return n
def __len__(self):
return len(self._df)
def __str__(self):
if self.resolved:
return str(self.value())
return '<Records for [{}]>'.format(self._filter)
Key = namedtuple('Key', ['agent_id', 't_step', 'key'])
Record = namedtuple('Record', 'agent_id t_step key value')

soil/network.py Normal file

@@ -0,0 +1,78 @@
from __future__ import annotations
from typing import Dict
import os
import sys
import random
import networkx as nx
from . import config, serialization, basestring
def from_config(cfg: config.NetConfig, dir_path: str = None):
if not isinstance(cfg, config.NetConfig):
cfg = config.NetConfig(**cfg)
if cfg.path:
path = cfg.path
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
return method(path, **kwargs)
if cfg.params:
net_args = cfg.params.dict()
net_gen = net_args.pop('generator')
if dir_path not in sys.path:
sys.path.append(dir_path)
method = serialization.deserializer(net_gen,
known_modules=['networkx.generators',])
return method(**net_args)
if isinstance(cfg.topology, config.Topology):
cfg = cfg.topology.dict()
if isinstance(cfg, str) or isinstance(cfg, dict):
return nx.json_graph.node_link_graph(cfg)
return nx.Graph()
def agent_to_node(G, agent_id, node_id=None, shuffle=False, random=random):
'''
Link an agent to a node in a topology.
If node_id is None, a node without an agent_id will be found.
'''
#TODO: test
if node_id is None:
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if data.get('agent_id', None) is None:
node_id = next_id
break
if node_id is None:
raise ValueError(f"Not enough nodes in topology to assign one to agent {agent_id}")
G.nodes[node_id]['agent_id'] = agent_id
return node_id
def dump_gexf(G, f):
for node in G.nodes():
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
nx.write_gexf(G, f, version="1.2draft")
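`agent_to_node` above claims the first node whose `agent_id` attribute is still unset, and fails if the topology has no free nodes left. The same rule is sketched below over a plain dict of node-attribute dicts, so it runs without networkx; the `assign_agent` name and the dict-based graph are illustrative assumptions.

```python
def assign_agent(nodes, agent_id, node_id=None):
    '''nodes: dict mapping node_id -> attribute dict.
    Reuse the given node_id, or claim the first unclaimed node.'''
    if node_id is None:
        for candidate, data in nodes.items():
            if data.get('agent_id') is None:
                node_id = candidate
                break
    if node_id is None:
        raise ValueError(
            f"Not enough nodes in topology to assign one to agent {agent_id}")
    nodes[node_id]['agent_id'] = agent_id
    return node_id

nodes = {0: {}, 1: {}, 2: {}}
first = assign_agent(nodes, 'a')   # claims node 0
```

Because claimed nodes are marked in place, repeated calls naturally spread agents over distinct nodes until the topology is exhausted.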


@@ -2,10 +2,13 @@ import os
 import logging
 import ast
 import sys
+import re
 import importlib
 from glob import glob
 from itertools import product, chain
+from .config import Config

 import yaml
 import networkx as nx

@@ -13,43 +16,47 @@ from jinja2 import Template

 logger = logging.getLogger('soil')
+logger.setLevel(logging.INFO)
-def load_network(network_params, dir_path=None):
-    if network_params is None:
-        return nx.Graph()
-
-    path = network_params.get('path', None)
-    if path:
-        if dir_path and not os.path.isabs(path):
-            path = os.path.join(dir_path, path)
-        extension = os.path.splitext(path)[1][1:]
-        kwargs = {}
-        if extension == 'gexf':
-            kwargs['version'] = '1.2draft'
-            kwargs['node_type'] = int
-        try:
-            method = getattr(nx.readwrite, 'read_' + extension)
-        except AttributeError:
-            raise AttributeError('Unknown format')
-        return method(path, **kwargs)
-
-    net_args = network_params.copy()
-    if 'generator' not in net_args:
-        return nx.Graph()
-
-    net_gen = net_args.pop('generator')
-
-    if dir_path not in sys.path:
-        sys.path.append(dir_path)
-
-    method = deserializer(net_gen,
-                          known_modules=['networkx.generators',])
-    return method(**net_args)
+# def load_network(network_params, dir_path=None):
+#     G = nx.Graph()
+
+#     if not network_params:
+#         return G
+
+#     if 'path' in network_params:
+#         path = network_params['path']
+#         if dir_path and not os.path.isabs(path):
+#             path = os.path.join(dir_path, path)
+#         extension = os.path.splitext(path)[1][1:]
+#         kwargs = {}
+#         if extension == 'gexf':
+#             kwargs['version'] = '1.2draft'
+#             kwargs['node_type'] = int
+#         try:
+#             method = getattr(nx.readwrite, 'read_' + extension)
+#         except AttributeError:
+#             raise AttributeError('Unknown format')
+#         G = method(path, **kwargs)
+
+#     elif 'generator' in network_params:
+#         net_args = network_params.copy()
+#         net_gen = net_args.pop('generator')
+
+#         if dir_path not in sys.path:
+#             sys.path.append(dir_path)
+
+#         method = deserializer(net_gen,
+#                               known_modules=['networkx.generators',])
+#         G = method(**net_args)
+
+#     return G
 def load_file(infile):
+    folder = os.path.dirname(infile)
+    if folder not in sys.path:
+        sys.path.append(folder)
     with open(infile, 'r') as f:
         return list(chain.from_iterable(map(expand_template, load_string(f))))
@@ -66,11 +73,32 @@ def expand_template(config):
     raise ValueError(('You must provide a definition of variables'
                       ' for the template.'))

-template = Template(config['template'])
-sampler_name = config.get('sampler', 'SALib.sample.morris.sample')
-n_samples = int(config.get('samples', 100))
-sampler = deserializer(sampler_name)
+template = config['template']
+if not isinstance(template, str):
+    template = yaml.dump(template)
+
+template = Template(template)
+
+params = params_for_template(config)
+
+blank_str = template.render({k: 0 for k in params[0].keys()})
+blank = list(load_string(blank_str))
+if len(blank) > 1:
+    raise ValueError('Templates must not return more than one configuration')
+if 'name' in blank[0]:
+    raise ValueError('Templates cannot be named, use group instead')
+
+for ps in params:
+    string = template.render(ps)
+    for c in load_string(string):
+        yield c
+
+def params_for_template(config):
+    sampler_config = config.get('sampler', {'N': 100})
+    sampler = sampler_config.pop('method', 'SALib.sample.morris.sample')
+    sampler = deserializer(sampler)

 bounds = config['vars']['bounds']
 problem = {
@@ -78,7 +106,7 @@ def expand_template(config):
         'names': list(bounds.keys()),
         'bounds': list(v for v in bounds.values())
     }
-    samples = sampler(problem, n_samples)
+    samples = sampler(problem, **sampler_config)
     lists = config['vars'].get('lists', {})
     names = list(lists.keys())
@@ -88,42 +116,32 @@ def expand_template(config):
     allnames = names + problem['names']
     allvalues = [(list(i[0])+list(i[1])) for i in product(combs, samples)]
     params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
-
-    blank_str = template.render({k: 0 for k in allnames})
-    blank = list(load_string(blank_str))
-    if len(blank) > 1:
-        raise ValueError('Templates must not return more than one configuration')
-    if 'name' in blank[0]:
-        raise ValueError('Templates cannot be named, use group instead')
-
-    confs = []
-    for ps in params:
-        string = template.render(ps)
-        for c in load_string(string):
-            yield c
+    return params
 def load_files(*patterns, **kwargs):
     for pattern in patterns:
         for i in glob(pattern, **kwargs):
-            for config in load_file(i):
-                path = os.path.abspath(i)
-                if 'dir_path' not in config:
-                    config['dir_path'] = os.path.dirname(path)
-                yield config, path
+            for cfg in load_file(i):
+                path = os.path.abspath(i)
+                yield Config.from_raw(cfg), path
-def load_config(config):
-    if isinstance(config, dict):
-        yield config, None
-    else:
-        yield from load_files(config)
+def load_config(cfg):
+    if isinstance(cfg, Config):
+        yield cfg, os.getcwd()
+    elif isinstance(cfg, dict):
+        yield Config.from_raw(cfg), os.getcwd()
+    else:
+        yield from load_files(cfg)
 builtins = importlib.import_module('builtins')

-def name(value, known_modules=[]):
+KNOWN_MODULES = ['soil', ]
+
+
+def name(value, known_modules=KNOWN_MODULES):
     '''Return a name that can be imported, to serialize/deserialize an object'''
     if value is None:
         return 'None'
@@ -152,13 +170,30 @@ def serializer(type_):
     return lambda x: x


-def serialize(v, known_modules=[]):
+def serialize(v, known_modules=KNOWN_MODULES):
     '''Get a text representation of an object.'''
     tname = name(v, known_modules=known_modules)
     func = serializer(tname)
     return func(v), tname


-def deserializer(type_, known_modules=[]):
+def serialize_dict(d, known_modules=KNOWN_MODULES):
+    d = dict(d)
+    for (k, v) in d.items():
+        if isinstance(v, dict):
+            d[k] = serialize_dict(v, known_modules=known_modules)
+        elif isinstance(v, list):
+            for ix in range(len(v)):
+                v[ix] = serialize_dict(v[ix], known_modules=known_modules)
+        elif isinstance(v, type):
+            d[k] = serialize(v, known_modules=known_modules)[1]
+    return d
+
+
+IS_CLASS = re.compile(r"<class '(.*)'>")
+
+
+def deserializer(type_, known_modules=KNOWN_MODULES):
     if type(type_) != str:  # Already deserialized
         return type_
     if type_ == 'str':
@@ -168,17 +203,23 @@ def deserializer(type_, known_modules=[]):
     if hasattr(builtins, type_):  # Check if it's a builtin type
         cls = getattr(builtins, type_)
         return lambda x=None: ast.literal_eval(x) if x is not None else cls()
+    match = IS_CLASS.match(type_)
+    if match:
+        modname, tname = match.group(1).rsplit(".", 1)
+        module = importlib.import_module(modname)
+        cls = getattr(module, tname)
+        return getattr(cls, 'deserialize', cls)

     # Otherwise, see if we can find the module and the class
-    modules = known_modules or []
     options = []
-    for mod in modules:
+    for mod in known_modules:
         if mod:
             options.append((mod, type_))
     if '.' in type_:  # Fully qualified module
         module, type_ = type_.rsplit(".", 1)
-        options.append ((module, type_))
+        options.append((module, type_))

     errors = []
     for modname, tname in options:
@@ -199,3 +240,13 @@ def deserialize(type_, value=None, **kwargs):
     if value is None:
         return des
     return des(value)
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
'''Return the list of deserialized objects'''
objects = []
for name in names:
mod = deserialize(name, known_modules=known_modules)
objects.append(mod(*args, **kwargs))
return objects
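The lookup that `deserializer` performs — try each known module first, then fall back to treating the string as a fully qualified dotted name — can be sketched on its own. The `resolve` helper below is illustrative, not soil's API:

```python
import importlib

def resolve(qualname, known_modules=()):
    """Resolve a name to an object: try each known module,
    then treat the name as fully qualified (module.attr)."""
    candidates = [(mod, qualname) for mod in known_modules if mod]
    if '.' in qualname:
        modname, attr = qualname.rsplit('.', 1)
        candidates.append((modname, attr))
    errors = []
    for modname, attr in candidates:
        try:
            module = importlib.import_module(modname)
            return getattr(module, attr)
        except (ImportError, AttributeError) as ex:
            errors.append(ex)
    raise ValueError(f'Could not resolve {qualname}: {errors}')
```

Both spellings resolve to the same object, which is what lets configs refer to agents either by a bare name plus `known_modules` or by a fully qualified path.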


@@ -1,306 +1,221 @@
 import os
-import time
+from time import time as current_time, strftime
 import importlib
 import sys
 import yaml
 import traceback
+import inspect
+import logging
 import networkx as nx

+from textwrap import dedent
+from dataclasses import dataclass, field, asdict
+from typing import Any, Dict, Union, Optional

 from networkx.readwrite import json_graph
-from multiprocessing import Pool
 from functools import partial
 import pickle

-from nxsim import NetworkSimulation

 from . import serialization, utils, basestring, agents
 from .environment import Environment
-from .utils import logger
-from .exporters import for_sim as exporters_for_sim
+from .utils import logger, run_and_return_exceptions
+from .exporters import default
+from .time import INFINITY
+from .config import Config, convert_old


-class Simulation(NetworkSimulation):
+#TODO: change documentation for simulation
+@dataclass
+class Simulation:
""" """
Subclass of nsim.NetworkSimulation with three main differences:
1) agent type can be specified by name or by class.
2) instead of just one type, a network agents distribution can be used.
The distribution specifies the weight (or probability) of each
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
Parameters Parameters
--------- ---------
name : str, optional config (optional): :class:`config.Config`
name of the Simulation name of the Simulation
group : str, optional
a group name can be used to link simulations
topology : networkx.Graph instance, optional
network_params : dict
parameters used to create a topology with networkx, if no topology is given
network_agents : dict
definition of agents to populate the topology with
agent_type : NetworkAgent subclass, optional
Default type of NetworkAgent to use for nodes not specified in network_agents
states : list, optional
List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
whose value indicates the state
dir_path: str, optional
Directory path to load simulation assets (files, modules...)
seed : str, optional
Seed to use for the random generator
num_trials : int, optional
Number of independent simulation runs
max_time : int, optional
Time how long the simulation should run
environment_params : dict, optional
Dictionary of globally-shared environmental parameters
environment_agents: dict, optional
Similar to network_agents. Distribution of Agents that control the environment
environment_class: soil.environment.Environment subclass, optional
Class for the environment. It defailts to soil.environment.Environment
load_module : str, module name, deprecated
If specified, soil will load the content of this module under 'soil.agents.custom'
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
""" """
version: str = '2'
name: str = 'Unnamed simulation'
description: Optional[str] = ''
group: str = None
model_class: Union[str, type] = 'soil.Environment'
model_params: dict = field(default_factory=dict)
seed: str = field(default_factory=lambda: current_time())
dir_path: str = field(default_factory=lambda: os.getcwd())
max_time: float = float('inf')
max_steps: int = -1
interval: int = 1
num_trials: int = 3
dry_run: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
-    def __init__(self, name=None, group=None, topology=None, network_params=None,
-                 network_agents=None, agent_type=None, states=None,
-                 default_state=None, interval=1, num_trials=1,
-                 max_time=100, load_module=None, seed=None,
-                 dir_path=None, environment_agents=None,
-                 environment_params=None, environment_class=None,
-                 **kwargs):
-
-        self.seed = str(seed) or str(time.time())
-        self.load_module = load_module
-        self.network_params = network_params
-        self.name = name or 'Unnamed_' + time.strftime("%Y-%m-%d_%H:%M:%S")
-        self.group = group or None
-        self.num_trials = num_trials
-        self.max_time = max_time
-        self.default_state = default_state or {}
-        self.dir_path = dir_path or os.getcwd()
-        self.interval = interval
-
-        sys.path += list(x for x in [os.getcwd(), self.dir_path] if x not in sys.path)
-
-        if topology is None:
-            topology = serialization.load_network(network_params,
-                                                  dir_path=self.dir_path)
-        elif isinstance(topology, basestring) or isinstance(topology, dict):
-            topology = json_graph.node_link_graph(topology)
-        self.topology = nx.Graph(topology)
-
-        self.environment_params = environment_params or {}
-        self.environment_class = serialization.deserialize(environment_class,
-                                                           known_modules=['soil.environment', ]) or Environment
-
-        environment_agents = environment_agents or []
-        self.environment_agents = agents._convert_agent_types(environment_agents,
-                                                              known_modules=[self.load_module])
-
-        distro = agents.calculate_distribution(network_agents,
-                                               agent_type)
-        self.network_agents = agents._convert_agent_types(distro,
-                                                          known_modules=[self.load_module])
-
-        self.states = agents._validate_states(states,
-                                              self.topology)
+    @classmethod
+    def from_dict(cls, env):
+
+        ignored = {k: v for k, v in env.items()
+                   if k not in inspect.signature(cls).parameters}
+
+        kwargs = {k: v for k, v in env.items() if k not in ignored}
+        if ignored:
+            kwargs.setdefault('extra', {}).update(ignored)
+        if ignored:
+            print(f'Warning: Ignoring these parameters (added to "extra"): { ignored }')
+
+        return cls(**kwargs)
     def run_simulation(self, *args, **kwargs):
         return self.run(*args, **kwargs)

     def run(self, *args, **kwargs):
         '''Run the simulation and return the list of resulting environments'''
-        return list(self._run_simulation_gen(*args, **kwargs))
+        logger.info(dedent('''
+        Simulation:
+        ---
+        ''') +
+                    self.to_yaml())
+        return list(self.run_gen(*args, **kwargs))

-    def _run_sync_or_async(self, parallel=False, *args, **kwargs):
-        if parallel:
-            p = Pool()
-            func = partial(self.run_trial_exceptions,
-                           *args,
-                           **kwargs)
-            for i in p.imap_unordered(func, range(self.num_trials)):
-                if isinstance(i, Exception):
-                    logger.error('Trial failed:\n\t%s', i.message)
-                    continue
-                yield i
-        else:
-            for i in range(self.num_trials):
-                yield self.run_trial(i,
-                                     *args,
-                                     **kwargs)
-
-    def _run_simulation_gen(self, *args, parallel=False, dry_run=False,
-                            exporters=['default', ], outdir=None, exporter_params={}, **kwargs):
+    def run_gen(self, parallel=False, dry_run=False,
+                exporters=[default, ], outdir=None, exporter_params={},
+                log_level=None,
+                **kwargs):
+        '''Run the simulation and yield the resulting environments.'''
+        if log_level:
+            logger.setLevel(log_level)
         logger.info('Using exporters: %s', exporters or [])
         logger.info('Output directory: %s', outdir)
-        exporters = exporters_for_sim(self,
-                                      exporters,
-                                      dry_run=dry_run,
-                                      outdir=outdir,
-                                      **exporter_params)
+        exporters = serialization.deserialize_all(exporters,
+                                                  simulation=self,
+                                                  known_modules=['soil.exporters', ],
+                                                  dry_run=dry_run,
+                                                  outdir=outdir,
+                                                  **exporter_params)

         with utils.timer('simulation {}'.format(self.name)):
             for exporter in exporters:
-                exporter.start()
+                exporter.sim_start()

-            for env in self._run_sync_or_async(*args, parallel=parallel,
-                                               **kwargs):
+            for env in utils.run_parallel(func=self.run_trial,
+                                          iterable=range(int(self.num_trials)),
+                                          parallel=parallel,
+                                          log_level=log_level,
+                                          **kwargs):
+                for exporter in exporters:
+                    exporter.trial_start(env)
+
                 for exporter in exporters:
                     exporter.trial_end(env)

                 yield env

             for exporter in exporters:
-                exporter.end()
+                exporter.sim_end()
     def get_env(self, trial_id=0, **kwargs):
         '''Create an environment for a trial of the simulation'''
-        opts = self.environment_params.copy()
-        env_name = '{}_trial_{}'.format(self.name, trial_id)
-        opts.update({
-            'name': env_name,
-            'topology': self.topology.copy(),
-            'seed': self.seed+env_name,
-            'initial_time': 0,
-            'interval': self.interval,
-            'network_agents': self.network_agents,
-            'states': self.states,
-            'default_state': self.default_state,
-            'environment_agents': self.environment_agents,
-        })
-        opts.update(kwargs)
-        env = self.environment_class(**opts)
-        return env
+        def deserialize_reporters(reporters):
+            for (k, v) in reporters.items():
+                if isinstance(v, str) and v.startswith('py:'):
+                    reporters[k] = serialization.deserialize(v.split(':', 1)[1])
+            return reporters
+
+        model_params = self.model_params.copy()
+        model_params.update(kwargs)
+
+        agent_reporters = deserialize_reporters(model_params.pop('agent_reporters', {}))
+        model_reporters = deserialize_reporters(model_params.pop('model_reporters', {}))
+
+        env = serialization.deserialize(self.model_class)
+        return env(id=f'{self.name}_trial_{trial_id}',
+                   seed=f'{self.seed}_trial_{trial_id}',
+                   dir_path=self.dir_path,
+                   agent_reporters=agent_reporters,
+                   model_reporters=model_reporters,
+                   **model_params)

-    def run_trial(self, trial_id=0, until=None, **opts):
-        """Run a single trial of the simulation
-
-        Parameters
-        ----------
-        trial_id : int
+    def run_trial(self, trial_id=None, until=None, log_file=False, log_level=logging.INFO, **opts):
         """
-        # Set-up trial environment and graph
-        until = until or self.max_time
-        env = self.get_env(trial_id = trial_id, **opts)
-        # Set up agents on nodes
+        Run a single trial of the simulation
+        """
+        if log_level:
+            logger.setLevel(log_level)
+        model = self.get_env(trial_id, **opts)
+        trial_id = trial_id if trial_id is not None else current_time()
         with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
-            env.run(until)
-        return env
+            return self.run_model(model=model, trial_id=trial_id, until=until, log_level=log_level)
-    def run_trial_exceptions(self, *args, **kwargs):
-        '''
-        A wrapper for run_trial that catches exceptions and returns them.
-        It is meant for async simulations
-        '''
-        try:
-            return self.run_trial(*args, **kwargs)
-        except Exception as ex:
-            c = ex.__cause__
-            c.message = ''.join(traceback.format_exception(type(c), c, c.__traceback__)[:])
-            return c
+    def run_model(self, model, until=None, **opts):
+        # Set-up trial environment and graph
+        until = float(until or self.max_time or 'inf')
+
+        # Set up agents on nodes
+        def is_done():
+            return False
+
+        if until and hasattr(model.schedule, 'time'):
+            prev = is_done
+
+            def is_done():
+                return prev() or model.schedule.time >= until
+
+        if self.max_steps and self.max_steps > 0 and hasattr(model.schedule, 'steps'):
+            prev_steps = is_done
+
+            def is_done():
+                return prev_steps() or model.schedule.steps >= self.max_steps
+
+        newline = '\n'
+        logger.info(dedent(f'''
+        Model stats:
+          Agents (total: { model.schedule.get_agent_count() }):
+            - { (newline + ' - ').join(str(a) for a in model.schedule.agents) }'''
+        f'''
+          Topologies (size):
+            - { dict( (k, len(v)) for (k, v) in model.topologies.items()) }
+        ''' if getattr(model, "topologies", None) else ''
+        ))
+
+        while not is_done():
+            utils.logger.debug(f'Simulation time {model.schedule.time}/{until}. Next: {getattr(model.schedule, "next_time", model.schedule.time + self.interval)}')
+            model.step()
+        return model
     def to_dict(self):
-        return self.__getstate__()
+        d = asdict(self)
+        if not isinstance(d['model_class'], str):
+            d['model_class'] = serialization.name(d['model_class'])
+        d['model_params'] = serialization.serialize_dict(d['model_params'])
+        d['dir_path'] = str(d['dir_path'])
+        d['version'] = '2'
+        return d
     def to_yaml(self):
         return yaml.dump(self.to_dict())
-    def dump_yaml(self, f=None, outdir=None):
-        if not f and not outdir:
-            raise ValueError('specify a file or an output directory')
-        if not f:
-            f = os.path.join(outdir, '{}.dumped.yml'.format(self.name))
-        with utils.open_or_reuse(f, 'w') as f:
-            f.write(self.to_yaml())
-
-    def dump_pickle(self, f=None, outdir=None):
-        if not outdir and not f:
-            raise ValueError('specify a file or an output directory')
-        if not f:
-            f = os.path.join(outdir,
-                             '{}.simulation.pickle'.format(self.name))
-        with utils.open_or_reuse(f, 'wb') as f:
-            pickle.dump(self, f)
-
-    def __getstate__(self):
-        state = {}
-        for k, v in self.__dict__.items():
-            if k[0] != '_':
-                state[k] = v
-        state['topology'] = json_graph.node_link_data(self.topology)
-        state['network_agents'] = agents.serialize_distribution(self.network_agents,
-                                                                known_modules=[])
-        state['environment_agents'] = agents.serialize_distribution(self.environment_agents,
-                                                                    known_modules=[])
-        state['environment_class'] = serialization.serialize(self.environment_class,
-                                                             known_modules=['soil.environment'])[1]  # func, name
-        if state['load_module'] is None:
-            del state['load_module']
-        return state
-
-    def __setstate__(self, state):
-        self.__dict__ = state
-        self.load_module = getattr(self, 'load_module', None)
-        if self.dir_path not in sys.path:
-            sys.path += [self.dir_path, os.getcwd()]
-        self.topology = json_graph.node_link_graph(state['topology'])
-        self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
-        self.environment_agents = agents._convert_agent_types(self.environment_agents,
-                                                              known_modules=[self.load_module])
-        self.environment_class = serialization.deserialize(self.environment_class,
-                                                           known_modules=[self.load_module, 'soil.environment', ])  # func, name
-        return state
-
-
-def all_from_config(config):
-    configs = list(serialization.load_config(config))
-    for config, _ in configs:
-        sim = Simulation(**config)
-        yield sim
+def iter_from_config(*cfgs):
+    for config in cfgs:
+        configs = list(serialization.load_config(config))
+        for config, path in configs:
+            d = dict(config)
+            if 'dir_path' not in d:
+                d['dir_path'] = os.path.dirname(path)
+            yield Simulation.from_dict(d)
 def from_config(conf_or_path):
-    config = list(serialization.load_config(conf_or_path))
-    if len(config) > 1:
+    lst = list(iter_from_config(conf_or_path))
+    if len(lst) > 1:
         raise AttributeError('Provide only one configuration')
-    config = config[0][0]
-    sim = Simulation(**config)
-    return sim
+    return lst[0]
 def run_from_config(*configs, **kwargs):
-    for config_def in configs:
-        # logger.info("Found {} config(s)".format(len(ls)))
-        for config, path in serialization.load_config(config_def):
-            name = config.get('name', 'unnamed')
-            logger.info("Using config(s): {name}".format(name=name))
-
-            dir_path = config.pop('dir_path', os.path.dirname(path))
-            sim = Simulation(dir_path=dir_path,
-                             **config)
-            sim.run_simulation(**kwargs)
+    for sim in iter_from_config(*configs):
+        logger.info(f"Using config(s): {sim.name}")
+        sim.run_simulation(**kwargs)
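The stop-condition handling in the new `run_model` composes predicates through closures: each configured limit wraps the previous `is_done` and ORs in its own check. A standalone sketch of that pattern (`make_stop_condition` and `FakeSchedule` are illustrative names, not part of soil):

```python
def make_stop_condition(schedule, until=None, max_steps=None):
    """Compose stopping predicates: each limit wraps the previous one."""
    def is_done():
        return False

    if until is not None:
        prev = is_done

        def is_done():
            return prev() or schedule.time >= until

    if max_steps is not None and max_steps > 0:
        prev_steps = is_done

        def is_done():
            return prev_steps() or schedule.steps >= max_steps

    return is_done


class FakeSchedule:
    """Stand-in for a mesa-style scheduler, for illustration only."""
    def __init__(self):
        self.time = 0
        self.steps = 0

    def step(self):
        self.time += 1
        self.steps += 1
```

Whichever limit triggers first ends the loop, and unset limits simply never wrap the base predicate.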

soil/time.py

@@ -0,0 +1,99 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapify
import math
from .utils import logger
from mesa import Agent as MesaAgent
INFINITY = float('inf')
class When:
    def __init__(self, time):
        if isinstance(time, When):
            time = time._time
        self._time = time
def abs(self, time):
return self._time
NEVER = When(INFINITY)
class Delta(When):
def __init__(self, delta):
self._delta = delta
def __eq__(self, other):
return self._delta == other._delta
def abs(self, time):
return time + self._delta
class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
In each activation, each agent will update its 'next_time'.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._next = {}
self._queue = []
self.next_time = 0
self.logger = logger.getChild(f'time_{ self.model }')
def add(self, agent: MesaAgent, when=None):
if when is None:
when = self.time
if agent.unique_id in self._agents:
self._queue.remove((self._next[agent.unique_id], agent.unique_id))
del self._agents[agent.unique_id]
heapify(self._queue)
heappush(self._queue, (when, agent.unique_id))
self._next[agent.unique_id] = when
super().add(agent)
def step(self) -> None:
"""
Executes agents in order, one at a time. After each step,
an agent will signal when it wants to be scheduled next.
"""
self.logger.debug(f'Simulation step {self.next_time}')
if not self.model.running:
return
self.time = self.next_time
when = self.time
while self._queue and self._queue[0][0] == self.time:
(when, agent_id) = heappop(self._queue)
self.logger.debug(f'Stepping agent {agent_id}')
agent = self._agents[agent_id]
returned = agent.step()
if not agent.alive:
self.remove(agent)
continue
when = (returned or Delta(1)).abs(self.time)
if when < self.time:
raise Exception("Cannot schedule an agent for a time in the past ({} < {})".format(when, self.time))
self._next[agent_id] = when
heappush(self._queue, (when, agent_id))
self.steps += 1
if not self._queue:
self.time = INFINITY
self.next_time = INFINITY
self.model.running = False
return self.time
self.next_time = self._queue[0][0]
self.logger.debug(f'Next step: {self.next_time}')
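`TimedActivation` keeps agents in a heap keyed by their next activation time: stepping pops everything due now and reschedules each agent by the delay it returns. A minimal, self-contained sketch of the same idea (`MiniScheduler` and `TickAgent` are illustrative stand-ins, not soil classes):

```python
from heapq import heappush, heappop

class MiniScheduler:
    """Toy heap-based scheduler: pop all agents due at the current
    time, step them, and reschedule by their returned delay."""
    def __init__(self):
        self.time = 0
        self._queue = []

    def add(self, agent, when=0):
        # unique_id breaks ties so agents never get compared directly
        heappush(self._queue, (when, agent.unique_id, agent))

    def step(self):
        if not self._queue:
            self.time = float('inf')
            return self.time
        self.time = self._queue[0][0]
        while self._queue and self._queue[0][0] == self.time:
            _, _, agent = heappop(self._queue)
            delay = agent.step() or 1
            heappush(self._queue, (self.time + delay, agent.unique_id, agent))
        return self.time

class TickAgent:
    def __init__(self, unique_id, delay=1):
        self.unique_id = unique_id
        self.delay = delay
        self.activations = 0

    def step(self):
        self.activations += 1
        return self.delay
```

An agent that asks to be woken every step runs twice as often as one that asks for a delay of 2, which is the point of scheduling by requested time instead of activating every agent on every tick.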


@@ -1,24 +1,40 @@
 import logging
-import time
+from time import time as current_time, strftime, gmtime, localtime
 import os
+import traceback

+from functools import partial
 from shutil import copyfile
+from multiprocessing import Pool

 from contextlib import contextmanager

 logger = logging.getLogger('soil')
 logger.setLevel(logging.INFO)

+timeformat = "%H:%M:%S"
+
+if os.environ.get('SOIL_VERBOSE', ''):
+    logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
+else:
+    logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"
+
+logFormatter = logging.Formatter(logformat, timeformat)
+
+consoleHandler = logging.StreamHandler()
+consoleHandler.setFormatter(logFormatter)
+logger.addHandler(consoleHandler)


 @contextmanager
 def timer(name='task', pre="", function=logger.info, to_object=None):
-    start = time.time()
+    start = current_time()
     function('{}Starting {} at {}.'.format(pre, name,
-                                           time.strftime("%X", time.gmtime(start))))
+                                           strftime("%X", gmtime(start))))
     yield start
-    end = time.time()
+    end = current_time()
     function('{}Finished {} at {} in {} seconds'.format(pre, name,
-                                                        time.strftime("%X", time.gmtime(end)),
+                                                        strftime("%X", gmtime(end)),
                                                         str(end-start)))
     if to_object:
         to_object.start = start
@@ -31,20 +47,87 @@ def safe_open(path, mode='r', backup=True, **kwargs):
         os.makedirs(outdir)
     if backup and 'w' in mode and os.path.exists(path):
         creation = os.path.getctime(path)
-        stamp = time.strftime('%Y-%m-%d_%H:%M', time.localtime(creation))
+        stamp = strftime('%Y-%m-%d_%H.%M.%S', localtime(creation))

-        backup_dir = os.path.join(outdir, stamp)
+        backup_dir = os.path.join(outdir, 'backup')
         if not os.path.exists(backup_dir):
             os.makedirs(backup_dir)
-        newpath = os.path.join(backup_dir, os.path.basename(path))
-        if os.path.exists(newpath):
-            newpath = '{}@{}'.format(newpath, time.time())
+        newpath = os.path.join(backup_dir, '{}@{}'.format(os.path.basename(path),
+                                                          stamp))
         copyfile(path, newpath)
     return open(path, mode=mode, **kwargs)


+@contextmanager
 def open_or_reuse(f, *args, **kwargs):
     try:
-        return safe_open(f, *args, **kwargs)
+        with safe_open(f, *args, **kwargs) as f:
+            yield f
     except (AttributeError, TypeError):
-        return f
+        yield f
def flatten_dict(d):
if not isinstance(d, dict):
return d
return dict(_flatten_dict(d))
def _flatten_dict(d, prefix=''):
if not isinstance(d, dict):
# print('END:', prefix, d)
yield prefix, d
return
if prefix:
prefix = prefix + '.'
for k, v in d.items():
# print(k, v)
res = list(_flatten_dict(v, prefix='{}{}'.format(prefix, k)))
# print('RES:', res)
yield from res
def unflatten_dict(d):
out = {}
for k, v in d.items():
target = out
if not isinstance(k, str):
target[k] = v
continue
tokens = k.split('.')
if len(tokens) < 2:
target[k] = v
continue
for token in tokens[:-1]:
if token not in target:
target[token] = {}
target = target[token]
target[tokens[-1]] = v
return out
def run_and_return_exceptions(func, *args, **kwargs):
'''
A wrapper for run_trial that catches exceptions and returns them.
It is meant for async simulations.
'''
try:
return func(*args, **kwargs)
except Exception as ex:
if ex.__cause__ is not None:
ex = ex.__cause__
ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__)[:])
return ex
def run_parallel(func, iterable, parallel=False, **kwargs):
if parallel and not os.environ.get('SOIL_DEBUG', None):
p = Pool()
wrapped_func = partial(run_and_return_exceptions,
func, **kwargs)
for i in p.imap_unordered(wrapped_func, iterable):
if isinstance(i, Exception):
logger.error('Trial failed:\n\t%s', i.message)
continue
yield i
else:
for i in iterable:
yield func(i, **kwargs)
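The dotted-key flattening used by `flatten_dict`/`unflatten_dict` round-trips nested dictionaries: nesting becomes `a.b.c` keys and back. A simplified, self-contained version of the same pair (edge-case behaviour may differ slightly from soil's helpers):

```python
def flatten_dict(d, prefix=''):
    """Flatten nested dicts into dotted keys: {'a': {'b': 1}} -> {'a.b': 1}."""
    if not isinstance(d, dict):
        return {prefix: d}
    out = {}
    for k, v in d.items():
        key = f'{prefix}.{k}' if prefix else str(k)
        if isinstance(v, dict):
            out.update(flatten_dict(v, prefix=key))
        else:
            out[key] = v
    return out

def unflatten_dict(d):
    """Rebuild nested dicts from dotted keys; non-string keys pass through."""
    out = {}
    for k, v in d.items():
        target = out
        tokens = k.split('.') if isinstance(k, str) else [k]
        for token in tokens[:-1]:
            target = target.setdefault(token, {})
        target[tokens[-1]] = v
    return out
```

This is the shape exporters rely on when turning hierarchical model state into flat columns and back.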

soil/visualization.py

@@ -0,0 +1,5 @@
from mesa.visualization.UserParam import UserSettableParameter
class UserSettableParameter(UserSettableParameter):
def __str__(self):
return self.value


@@ -118,9 +118,9 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
             elif msg['type'] == 'download_gexf':
                 G = self.trials[ int(msg['data']) ].history_to_graph()
                 for node in G.nodes():
-                    if 'pos' in G.node[node]:
-                        G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
-                        del (G.node[node]['pos'])
+                    if 'pos' in G.nodes[node]:
+                        G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
+                        del (G.nodes[node]['pos'])
                 writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft')
                 writer.add_graph(G)
                 self.write_message({'type': 'download_gexf',
@@ -130,9 +130,9 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
             elif msg['type'] == 'download_json':
                 G = self.trials[ int(msg['data']) ].history_to_graph()
                 for node in G.nodes():
-                    if 'pos' in G.node[node]:
-                        G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
-                        del (G.node[node]['pos'])
+                    if 'pos' in G.nodes[node]:
+                        G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
+                        del (G.nodes[node]['pos'])
                 self.write_message({'type': 'download_json',
                                     'filename': self.config['name'] + '_trial_' + str(msg['data']),
                                     'data': nx.node_link_data(G) })
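Both handlers apply the same migration from the `G.node` accessor (removed in recent networkx releases) to `G.nodes`, moving layout positions into a GEXF-style `viz` attribute. The transformation can be sketched on plain dictionaries, without a networkx dependency (`pos_to_viz` is an illustrative name):

```python
def pos_to_viz(node_attrs):
    """Move a 'pos' (x, y) pair into a GEXF-style 'viz' position,
    mirroring what the server handlers do on G.nodes[node]."""
    for attrs in node_attrs.values():
        if 'pos' in attrs:
            x, y = attrs.pop('pos')
            attrs['viz'] = {'position': {'x': x, 'y': y, 'z': 0.0}}
    return node_attrs
```

GEXF writers pick up the `viz` attribute for node placement, so deleting the raw `pos` tuple avoids exporting it as an unsupported attribute type.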


@@ -6,11 +6,11 @@ network_params:
   n: 100
   m: 2
 network_agents:
-  - agent_type: ControlModelM2
+  - agent_class: ControlModelM2
     weight: 0.1
     state:
       id: 1
-  - agent_type: ControlModelM2
+  - agent_class: ControlModelM2
     weight: 0.9
     state:
       id: 0


@@ -1 +1,4 @@
 pytest
pytest-profiling
scipy>=1.3
tornado


@@ -0,0 +1,50 @@
---
version: '2'
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
model_class: Environment
model_params:
topologies:
default:
params:
generator: complete_graph
n: 4
agents:
agent_class: CounterModel
state:
group: network
times: 1
topology: 'default'
distribution:
- agent_class: CounterModel
weight: 0.25
state:
state_id: 0
times: 1
- agent_class: AggregatedCounter
weight: 0.5
state:
times: 2
override:
- filter:
node_id: 1
state:
name: 'Node 1'
- filter:
node_id: 2
state:
name: 'Node 2'
fixed:
- agent_class: BaseAgent
hidden: true
topology: null
state:
name: 'Environment Agent 1'
times: 10
group: environment
am_i_complete: true

tests/old_complete.yml

@@ -0,0 +1,37 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 4
network_agents:
- agent_class: CounterModel
weight: 0.25
state:
state_id: 0
times: 1
- agent_class: AggregatedCounter
weight: 0.5
state:
times: 2
environment_agents:
- agent_id: 'Environment Agent 1'
agent_class: BaseAgent
state:
times: 10
environment_class: Environment
environment_params:
am_i_complete: true
agent_class: CounterModel
default_state:
times: 1
states:
1:
name: 'Node 1'
2:
name: 'Node 2'

tests/test_agents.py

@@ -0,0 +1,24 @@
from unittest import TestCase
import pytest
from soil import agents, environment
from soil import time as stime
class Dead(agents.FSM):
@agents.default_state
@agents.state
def only(self):
return self.die()
class TestMain(TestCase):
def test_die_raises_exception(self):
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(agents.DeadAgent):
d.step()
def test_die_returns_infinity(self):
d = Dead(unique_id=0, model=environment.Environment())
ret = d.step().abs(0)
print(ret, 'next')
assert ret == stime.INFINITY


@@ -1,89 +0,0 @@
from unittest import TestCase

import os
import pandas as pd
import yaml
from functools import partial
from os.path import join

from soil import simulation, analysis, agents

ROOT = os.path.abspath(os.path.dirname(__file__))


class Ping(agents.FSM):

    defaults = {
        'count': 0,
    }

    @agents.default_state
    @agents.state
    def even(self):
        self['count'] += 1
        return self.odd

    @agents.state
    def odd(self):
        self['count'] += 1
        return self.even


class TestAnalysis(TestCase):

    # Code to generate a simple sqlite history
    def setUp(self):
        """
        The initial states should be applied to the agent and the
        agent should be able to update its state."""
        config = {
            'name': 'analysis',
            'seed': 'seed',
            'network_params': {
                'generator': 'complete_graph',
                'n': 2
            },
            'agent_type': Ping,
            'states': [{'interval': 1}, {'interval': 2}],
            'max_time': 30,
            'num_trials': 1,
            'environment_params': {
            }
        }
        s = simulation.from_config(config)
        self.env = s.run_simulation(dry_run=True)[0]

    def test_saved(self):
        env = self.env
        assert env.get_agent(0)['count', 0] == 1
        assert env.get_agent(0)['count', 29] == 30
        assert env.get_agent(1)['count', 0] == 1
        assert env.get_agent(1)['count', 29] == 15
        assert env['env', 29, None]['SEED'] == env['env', 29, 'SEED']

    def test_count(self):
        env = self.env
        df = analysis.read_sql(env._history.db_path)
        res = analysis.get_count(df, 'SEED', 'id')
        assert res['SEED']['seedanalysis_trial_0'].iloc[0] == 1
        assert res['SEED']['seedanalysis_trial_0'].iloc[-1] == 1
        assert res['id']['odd'].iloc[0] == 2
        assert res['id']['even'].iloc[0] == 0
        assert res['id']['odd'].iloc[-1] == 1
        assert res['id']['even'].iloc[-1] == 1

    def test_value(self):
        env = self.env
        df = analysis.read_sql(env._history._db)

        res_sum = analysis.get_value(df, 'count')
        assert res_sum['count'].iloc[0] == 2

        import numpy as np
        res_mean = analysis.get_value(df, 'count', aggfunc=np.mean)
        assert res_mean['count'].iloc[0] == 1

        res_total = analysis.get_value(df)
        res_total['SEED'].iloc[0] == 'seedanalysis_trial_0'

tests/test_config.py (new file)

@@ -0,0 +1,152 @@
from unittest import TestCase

import os
import yaml
import copy
from os.path import join

from soil import simulation, serialization, config, network, agents, utils

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')

FORCE_TESTS = os.environ.get('FORCE_TESTS', '')


def isequal(a, b):
    if isinstance(a, dict):
        for (k, v) in a.items():
            if v:
                isequal(a[k], b[k])
            else:
                assert not b.get(k, None)
        return
    assert a == b


class TestConfig(TestCase):

    def test_conversion(self):
        expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
        old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
        converted_defaults = config.convert_old(old, strict=False)
        converted = converted_defaults.dict(exclude_unset=True)
        isequal(converted, expected)

    def test_configuration_changes(self):
        """
        The configuration should not change after running
        the simulation.
        """
        config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
        s = simulation.from_config(config)
        init_config = copy.copy(s.to_dict())
        s.run_simulation(dry_run=True)
        nconfig = s.to_dict()
        # del nconfig['to
        isequal(init_config, nconfig)

    def test_topology_config(self):
        netconfig = config.NetConfig(**{
            'path': join(ROOT, 'test.gexf')
        })
        net = network.from_config(netconfig, dir_path=ROOT)
        assert len(net.nodes) == 2
        assert len(net.edges) == 1

    def test_env_from_config(self):
        """
        Simple configuration that tests that the graph is loaded, and that
        network agents are initialized properly.
        """
        cfg = {
            'name': 'CounterAgent',
            'network_params': {
                'path': join(ROOT, 'test.gexf')
            },
            'agent_class': 'CounterModel',
            # 'states': [{'times': 10}, {'times': 20}],
            'max_time': 2,
            'dry_run': True,
            'num_trials': 1,
            'environment_params': {
            }
        }
        conf = config.convert_old(cfg)
        s = simulation.from_config(conf)
        env = s.get_env()
        assert len(env.topologies['default'].nodes) == 2
        assert len(env.topologies['default'].edges) == 1
        assert len(env.agents) == 2
        assert env.agents[0].G == env.topologies['default']

    def test_agents_from_config(self):
        '''We test that the known complete configuration produces
        the right agents in the right groups'''
        cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
        s = simulation.from_config(cfg)
        env = s.get_env()
        assert len(env.topologies['default'].nodes) == 4
        assert len(env.agents(group='network')) == 4
        assert len(env.agents(group='environment')) == 1

    def test_yaml(self):
        """
        The YAML version of a newly created configuration should be equivalent
        to the configuration file used.
        Values not present in the original config file should have reasonable
        defaults.
        """
        with utils.timer('loading'):
            config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
            s = simulation.from_config(config)
        with utils.timer('serializing'):
            serial = s.to_yaml()
        with utils.timer('recovering'):
            recovered = yaml.load(serial, Loader=yaml.SafeLoader)
        for (k, v) in config.items():
            assert recovered[k] == v


def make_example_test(path, cfg):
    def wrapped(self):
        root = os.getcwd()
        print(path)
        s = simulation.from_config(cfg)
        # for s in simulation.all_from_config(path):
        #     iterations = s.config.max_time * s.config.num_trials
        #     if iterations > 1000:
        #         s.config.max_time = 100
        #         s.config.num_trials = 1
        #     if config.get('skip_test', False) and not FORCE_TESTS:
        #         self.skipTest('Example ignored.')
        #     envs = s.run_simulation(dry_run=True)
        #     assert envs
        #     for env in envs:
        #         assert env
        #         try:
        #             n = config['network_params']['n']
        #             assert len(list(env.network_agents)) == n
        #             assert env.now > 0  # It has run
        #             assert env.now <= config['max_time']  # But not further than allowed
        #         except KeyError:
        #             pass
    return wrapped


def add_example_tests():
    for config, path in serialization.load_files(
            join(EXAMPLES, '*', '*.yml'),
            join(EXAMPLES, '*.yml'),
    ):
        p = make_example_test(path=path, cfg=config)
        fname = os.path.basename(path)
        p.__name__ = 'test_example_file_%s' % fname
        p.__doc__ = '%s should be a valid configuration' % fname
        setattr(TestConfig, p.__name__, p)
        del p


add_example_tests()
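Note that the `isequal` helper used throughout these config tests is deliberately one-sided: it only checks the keys present in its first argument, and keys whose values are falsy may be missing from the second argument entirely. A standalone sketch of that behaviour (plain Python, with the helper copied from the listing above):

```python
def isequal(a, b):
    # One-sided comparison: only keys of `a` are checked;
    # falsy values in `a` may be absent from `b` entirely.
    if isinstance(a, dict):
        for (k, v) in a.items():
            if v:
                isequal(a[k], b[k])
            else:
                assert not b.get(k, None)
        return
    assert a == b

isequal({'x': 1, 'y': 0}, {'x': 1})      # passes: falsy 'y' may be missing in b
isequal({'x': 1}, {'x': 1, 'extra': 2})  # passes: extra keys in b are ignored
try:
    isequal({'x': 1}, {'x': 2})
    raised = False
except AssertionError:
    raised = True
assert raised  # mismatching values do fail
```

This asymmetry is what lets `test_conversion` compare a converted config (with unset fields excluded) against the fully expanded expected file.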


@@ -2,7 +2,7 @@ from unittest import TestCase
 import os
 from os.path import join
-from soil import serialization, simulation
+from soil import serialization, simulation, config

 ROOT = os.path.abspath(os.path.dirname(__file__))
 EXAMPLES = join(ROOT, '..', 'examples')
@@ -14,36 +14,37 @@ class TestExamples(TestCase):
     pass

-def make_example_test(path, config):
+def make_example_test(path, cfg):
     def wrapped(self):
         root = os.getcwd()
-        for s in simulation.all_from_config(path):
-            iterations = s.max_time * s.num_trials
-            if iterations > 1000:
-                s.max_time = 100
+        for s in simulation.iter_from_config(cfg):
+            iterations = s.max_steps * s.num_trials
+            if iterations < 0 or iterations > 1000:
+                s.max_steps = 100
                 s.num_trials = 1
-            if config.get('skip_test', False) and not FORCE_TESTS:
+            assert isinstance(cfg, config.Config)
+            if getattr(cfg, 'skip_test', False) and not FORCE_TESTS:
                 self.skipTest('Example ignored.')
             envs = s.run_simulation(dry_run=True)
             assert envs
             for env in envs:
                 assert env
                 try:
-                    n = config['network_params']['n']
+                    n = cfg.model_params['network_params']['n']
                     assert len(list(env.network_agents)) == n
-                    assert env.now > 2  # It has run
-                    assert env.now <= config['max_time']  # But not further than allowed
                 except KeyError:
                     pass
+                assert env.schedule.steps > 0  # It has run
+                assert env.schedule.steps <= s.max_steps  # But not further than allowed
     return wrapped

 def add_example_tests():
-    for config, path in serialization.load_files(
+    for cfg, path in serialization.load_files(
             join(EXAMPLES, '*', '*.yml'),
             join(EXAMPLES, '*.yml'),
     ):
-        p = make_example_test(path=path, config=config)
+        p = make_example_test(path=path, cfg=config.Config.from_raw(cfg))
         fname = os.path.basename(path)
         p.__name__ = 'test_example_file_%s' % fname
         p.__doc__ = '%s should be a valid configuration' % fname


@@ -2,12 +2,11 @@ import os
 import io
 import tempfile
 import shutil
-from time import time
 from unittest import TestCase

 from soil import exporters
-from soil.utils import safe_open
 from soil import simulation
+from soil import agents


 class Dummy(exporters.Exporter):
@@ -15,58 +14,57 @@ class Dummy(exporters.Exporter):
     trials = 0
     ended = False
     total_time = 0
+    called_start = 0
+    called_trial = 0
+    called_end = 0

-    def start(self):
+    def sim_start(self):
+        self.__class__.called_start += 1
         self.__class__.started = True

     def trial_end(self, env):
         assert env
         self.__class__.trials += 1
         self.__class__.total_time += env.now
+        self.__class__.called_trial += 1

-    def end(self):
+    def sim_end(self):
         self.__class__.ended = True
+        self.__class__.called_end += 1


 class Exporters(TestCase):
     def test_basic(self):
+        # We need to add at least one agent to make sure the scheduler
+        # ticks every step
+        num_trials = 5
+        max_time = 2
         config = {
             'name': 'exporter_sim',
-            'network_params': {},
-            'agent_type': 'CounterModel',
-            'max_time': 2,
-            'num_trials': 5,
-            'environment_params': {}
+            'model_params': {
+                'agents': [{
+                    'agent_class': agents.BaseAgent
+                }]
+            },
+            'max_time': max_time,
+            'num_trials': num_trials,
         }
         s = simulation.from_config(config)
-        s.run_simulation(exporters=[Dummy], dry_run=True)
+        for env in s.run_simulation(exporters=[Dummy], dry_run=True):
+            assert len(env.agents) == 1
+            assert env.now == max_time

         assert Dummy.started
         assert Dummy.ended
-        assert Dummy.trials == 5
-        assert Dummy.total_time == 2*5
-
-    def test_distribution(self):
-        '''The distribution exporter should write the number of agents in each state'''
-        config = {
-            'name': 'exporter_sim',
-            'network_params': {
-                'generator': 'complete_graph',
-                'n': 4
-            },
-            'agent_type': 'CounterModel',
-            'max_time': 2,
-            'num_trials': 5,
-            'environment_params': {}
-        }
-        output = io.StringIO()
-        s = simulation.from_config(config)
-        s.run_simulation(exporters=[exporters.distribution], dry_run=True, exporter_params={'copy_to': output})
-        result = output.getvalue()
-        assert 'count' in result
-        assert 'SEED,Noneexporter_sim_trial_3,1,,1,1,1,1' in result
+        assert Dummy.called_start == 1
+        assert Dummy.called_end == 1
+        assert Dummy.called_trial == num_trials
+        assert Dummy.trials == num_trials
+        assert Dummy.total_time == max_time * num_trials

     def test_writing(self):
-        '''Try to write CSV, GEXF, sqlite and YAML (without dry_run)'''
+        '''Try to write CSV, sqlite and YAML (without dry_run)'''
         n_trials = 5
         config = {
             'name': 'exporter_sim',
@@ -74,9 +72,10 @@ class Exporters(TestCase):
                 'generator': 'complete_graph',
                 'n': 4
             },
-            'agent_type': 'CounterModel',
+            'agent_class': 'CounterModel',
             'max_time': 2,
             'num_trials': n_trials,
+            'dry_run': False,
             'environment_params': {}
         }
         output = io.StringIO()
@@ -85,9 +84,8 @@ class Exporters(TestCase):
         envs = s.run_simulation(exporters=[
                                     exporters.default,
                                     exporters.csv,
-                                    exporters.gexf,
-                                    exporters.distribution,
                                 ],
-                                dry_run=False,
                                 outdir=tmpdir,
                                 exporter_params={'copy_to': output})
         result = output.getvalue()
@@ -99,11 +97,7 @@ class Exporters(TestCase):
         try:
             for e in envs:
-                with open(os.path.join(simdir, '{}.gexf'.format(e.name))) as f:
-                    result = f.read()
-                    assert result
-                with open(os.path.join(simdir, '{}.csv'.format(e.name))) as f:
+                with open(os.path.join(simdir, '{}.env.csv'.format(e.id))) as f:
                     result = f.read()
                     assert result
         finally:


@@ -1,156 +0,0 @@
from unittest import TestCase

import os
import shutil
from glob import glob

from soil import history

ROOT = os.path.abspath(os.path.dirname(__file__))
DBROOT = os.path.join(ROOT, 'testdb')


class TestHistory(TestCase):

    def setUp(self):
        if not os.path.exists(DBROOT):
            os.makedirs(DBROOT)

    def tearDown(self):
        if os.path.exists(DBROOT):
            shutil.rmtree(DBROOT)

    def test_history(self):
        """
        """
        tuples = (
            ('a_0', 0, 'id', 'h'),
            ('a_0', 1, 'id', 'e'),
            ('a_0', 2, 'id', 'l'),
            ('a_0', 3, 'id', 'l'),
            ('a_0', 4, 'id', 'o'),
            ('a_1', 0, 'id', 'v'),
            ('a_1', 1, 'id', 'a'),
            ('a_1', 2, 'id', 'l'),
            ('a_1', 3, 'id', 'u'),
            ('a_1', 4, 'id', 'e'),
            ('env', 1, 'prob', 1),
            ('env', 3, 'prob', 2),
            ('env', 5, 'prob', 3),
            ('a_2', 7, 'finished', True),
        )
        h = history.History()
        h.save_tuples(tuples)
        # assert h['env', 0, 'prob'] == 0
        for i in range(1, 7):
            assert h['env', i, 'prob'] == ((i-1)//2)+1
        for i, k in zip(range(5), 'hello'):
            assert h['a_0', i, 'id'] == k
        for record, value in zip(h['a_0', None, 'id'], 'hello'):
            t_step, val = record
            assert val == value
        for i, k in zip(range(5), 'value'):
            assert h['a_1', i, 'id'] == k
        for i in range(5, 8):
            assert h['a_1', i, 'id'] == 'e'
        for i in range(7):
            assert h['a_2', i, 'finished'] == False
        assert h['a_2', 7, 'finished']

    def test_history_gen(self):
        """
        """
        tuples = (
            ('a_1', 0, 'id', 'v'),
            ('a_1', 1, 'id', 'a'),
            ('a_1', 2, 'id', 'l'),
            ('a_1', 3, 'id', 'u'),
            ('a_1', 4, 'id', 'e'),
            ('env', 1, 'prob', 1),
            ('env', 2, 'prob', 2),
            ('env', 3, 'prob', 3),
            ('a_2', 7, 'finished', True),
        )
        h = history.History()
        h.save_tuples(tuples)
        for t_step, key, value in h['env', None, None]:
            assert t_step == value
            assert key == 'prob'

        records = list(h[None, 7, None])
        assert len(records) == 3
        for i in records:
            agent_id, key, value = i
            if agent_id == 'a_1':
                assert key == 'id'
                assert value == 'e'
            elif agent_id == 'a_2':
                assert key == 'finished'
                assert value
            else:
                assert key == 'prob'
                assert value == 3

        records = h['a_1', 7, None]
        assert records['id'] == 'e'

    def test_history_file(self):
        """
        History should be saved to a file
        """
        tuples = (
            ('a_1', 0, 'id', 'v'),
            ('a_1', 1, 'id', 'a'),
            ('a_1', 2, 'id', 'l'),
            ('a_1', 3, 'id', 'u'),
            ('a_1', 4, 'id', 'e'),
            ('env', 1, 'prob', 1),
            ('env', 2, 'prob', 2),
            ('env', 3, 'prob', 3),
            ('a_2', 7, 'finished', True),
        )
        db_path = os.path.join(DBROOT, 'test')
        h = history.History(db_path=db_path)
        h.save_tuples(tuples)
        h.flush_cache()
        assert os.path.exists(db_path)

        # Recover the data
        recovered = history.History(db_path=db_path)
        assert recovered['a_1', 0, 'id'] == 'v'
        assert recovered['a_1', 4, 'id'] == 'e'

        # Using backup=True should create a backup copy, and initialize an empty history
        newhistory = history.History(db_path=db_path, backup=True)
        backuppaths = glob(db_path + '.backup*.sqlite')
        assert len(backuppaths) == 1
        backuppath = backuppaths[0]
        assert newhistory.db_path == h.db_path
        assert os.path.exists(backuppath)
        assert len(newhistory[None, None, None]) == 0

    def test_history_tuples(self):
        """
        The data recovered should be equal to the one recorded.
        """
        tuples = (
            ('a_1', 0, 'id', 'v'),
            ('a_1', 1, 'id', 'a'),
            ('a_1', 2, 'id', 'l'),
            ('a_1', 3, 'id', 'u'),
            ('a_1', 4, 'id', 'e'),
            ('env', 1, 'prob', 1),
            ('env', 2, 'prob', 2),
            ('env', 3, 'prob', 3),
            ('a_2', 7, 'finished', True),
        )
        h = history.History()
        h.save_tuples(tuples)
        recovered = list(h.to_tuples())
        assert recovered
        for i in recovered:
            assert i in tuples
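The deleted tests above encode the lookup semantics of the old `soil.history` module: querying `(agent_id, t_step, key)` returns the value recorded at the latest `t_step` at or before the one requested, so values persist forward in time (e.g. `h['a_1', 6, 'id'] == 'e'` even though the last record is at step 4). A minimal, dependency-free sketch of that rule (a toy class, not the sqlite-backed `soil.history` API):

```python
import bisect

class LastValueStore:
    """Toy last-write-wins lookup: value at the latest t_step <= query."""
    def __init__(self):
        self._series = {}  # (agent_id, key) -> sorted [(t_step, value), ...]

    def save_tuples(self, tuples):
        for agent_id, t_step, key, value in tuples:
            self._series.setdefault((agent_id, key), []).append((t_step, value))
        for series in self._series.values():
            series.sort(key=lambda r: r[0])

    def __getitem__(self, idx):
        agent_id, t_step, key = idx
        series = self._series.get((agent_id, key), [])
        steps = [s for s, _ in series]
        pos = bisect.bisect_right(steps, t_step)
        if pos == 0:
            raise KeyError(idx)  # nothing recorded at or before t_step
        return series[pos - 1][1]

h = LastValueStore()
h.save_tuples((
    ('env', 1, 'prob', 1),
    ('env', 3, 'prob', 2),
    ('env', 5, 'prob', 3),
))
assert h['env', 1, 'prob'] == 1
assert h['env', 4, 'prob'] == 2  # last record at or before step 4
assert h['env', 6, 'prob'] == 3  # values persist past the last record
```

This reproduces the `h['env', i, 'prob'] == ((i-1)//2)+1` pattern asserted in `test_history` above.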


@@ -1,163 +1,131 @@
 from unittest import TestCase
 import os
-import io
-import yaml
 import pickle
 import networkx as nx
 from functools import partial
 from os.path import join
-from soil import (simulation, Environment, agents, serialization,
-                  history, utils)
+from soil import (simulation, Environment, agents, network, serialization,
+                  utils, config)
+from soil.time import Delta

 ROOT = os.path.abspath(os.path.dirname(__file__))
 EXAMPLES = join(ROOT, '..', 'examples')


-class CustomAgent(agents.FSM):
+class CustomAgent(agents.FSM, agents.NetworkAgent):
     @agents.default_state
     @agents.state
     def normal(self):
-        self.state['neighbors'] = self.count_agents(state_id='normal',
-                                                    limit_neighbors=True)
+        self.neighbors = self.count_agents(state_id='normal',
+                                           limit_neighbors=True)

     @agents.state
     def unreachable(self):
         return


 class TestMain(TestCase):
-    def test_load_graph(self):
-        """
-        Load a graph from file if the extension is known.
-        Raise an exception otherwise.
-        """
-        config = {
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            }
-        }
-        G = serialization.load_network(config['network_params'])
-        assert G
-        assert len(G) == 2
-        with self.assertRaises(AttributeError):
-            config = {
-                'network_params': {
-                    'path': join(ROOT, 'unknown.extension')
-                }
-            }
-            G = serialization.load_network(config['network_params'])
-            print(G)
-
-    def test_generate_barabasi(self):
-        """
-        If no path is given, a generator and network parameters
-        should be used to generate a network
-        """
-        config = {
-            'network_params': {
-                'generator': 'barabasi_albert_graph'
-            }
-        }
-        with self.assertRaises(TypeError):
-            G = serialization.load_network(config['network_params'])
-        config['network_params']['n'] = 100
-        config['network_params']['m'] = 10
-        G = serialization.load_network(config['network_params'])
-        assert len(G) == 100
-
     def test_empty_simulation(self):
         """A simulation with a base behaviour should do nothing"""
         config = {
+            'model_params': {
                 'network_params': {
                     'path': join(ROOT, 'test.gexf')
                 },
-            'agent_type': 'BaseAgent',
-            'environment_params': {
+                'agent_class': 'BaseAgent',
             }
         }
         s = simulation.from_config(config)
         s.run_simulation(dry_run=True)

+    def test_network_agent(self):
+        """
+        The initial states should be applied to the agent and the
+        agent should be able to update its state."""
+        config = {
+            'name': 'CounterAgent',
+            'num_trials': 1,
+            'max_time': 2,
+            'model_params': {
+                'network_params': {
+                    'generator': nx.complete_graph,
+                    'n': 2,
+                },
+                'agent_class': 'CounterModel',
+                'states': {
+                    0: {'times': 10},
+                    1: {'times': 20},
+                },
+            }
+        }
+        s = simulation.from_config(config)
+
     def test_counter_agent(self):
         """
         The initial states should be applied to the agent and the
         agent should be able to update its state."""
         config = {
+            'version': '2',
             'name': 'CounterAgent',
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            },
-            'agent_type': 'CounterModel',
-            'states': [{'times': 10}, {'times': 20}],
-            'max_time': 2,
+            'dry_run': True,
             'num_trials': 1,
-            'environment_params': {
-            }
-        }
-        s = simulation.from_config(config)
-        env = s.run_simulation(dry_run=True)[0]
-        assert env.get_agent(0)['times', 0] == 11
-        assert env.get_agent(0)['times', 1] == 12
-        assert env.get_agent(1)['times', 0] == 21
-        assert env.get_agent(1)['times', 1] == 22
-
-    def test_counter_agent_history(self):
-        """
-        The evolution of the state should be recorded in the logging agent
-        """
-        config = {
-            'name': 'CounterAgent',
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
+            'max_time': 2,
+            'model_params': {
+                'topologies': {
+                    'default': {
+                        'path': join(ROOT, 'test.gexf')
+                    }
+                },
+                'agents': {
+                    'agent_class': 'CounterModel',
+                    'topology': 'default',
+                    'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
+                }
             },
-            'network_agents': [{
-                'agent_type': 'AggregatedCounter',
-                'weight': 1,
-                'state': {'id': 0}
-            }],
-            'max_time': 10,
-            'environment_params': {
-            }
         }
         s = simulation.from_config(config)
-        env = s.run_simulation(dry_run=True)[0]
-        for agent in env.network_agents:
-            last = 0
-            assert len(agent[None, None]) == 10
-            for step, total in sorted(agent['total', None]):
-                assert total == last + 2
-                last = total
+        env = s.get_env()
+        assert isinstance(env.agents[0], agents.CounterModel)
+        assert env.agents[0].G == env.topologies['default']
+        assert env.agents[0]['times'] == 10
+        assert env.agents[0]['times'] == 10
+        env.step()
+        assert env.agents[0]['times'] == 11
+        assert env.agents[1]['times'] == 21

-    def test_custom_agent(self):
-        """Allow for search of neighbors with a certain state_id"""
+    def test_init_and_count_agents(self):
+        """Agents should be properly initialized and counting should filter them properly"""
+        #TODO: separate this test into two or more test cases
         config = {
-            'network_params': {
-                'path': join(ROOT, 'test.gexf')
-            },
-            'network_agents': [{
-                'agent_type': CustomAgent,
-                'weight': 1
-            }],
             'max_time': 10,
-            'environment_params': {
-            }
+            'model_params': {
+                'agents': [{'agent_class': CustomAgent, 'weight': 1, 'topology': 'default'},
+                           {'agent_class': CustomAgent, 'weight': 3, 'topology': 'default'},
+                ],
+                'topologies': {
+                    'default': {
+                        'path': join(ROOT, 'test.gexf')
+                    }
+                },
+            },
         }
         s = simulation.from_config(config)
         env = s.run_simulation(dry_run=True)[0]
-        assert env.get_agent(0).state['neighbors'] == 1
-        assert env.get_agent(0).state['neighbors'] == 1
-        assert env.get_agent(1).count_agents(state_id='normal') == 2
-        assert env.get_agent(1).count_agents(state_id='normal', limit_neighbors=True) == 1
+        assert env.agents[0].weight == 1
+        assert env.count_agents() == 2
+        assert env.count_agents(weight=1) == 1
+        assert env.count_agents(weight=3) == 1
+        assert env.count_agents(agent_class=CustomAgent) == 2

     def test_torvalds_example(self):
         """A complete example from a documentation should work."""
         config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
-        config['network_params']['path'] = join(EXAMPLES,
-                                                config['network_params']['path'])
+        config['model_params']['network_params']['path'] = join(EXAMPLES,
+                                                                config['model_params']['network_params']['path'])
         s = simulation.from_config(config)
         env = s.run_simulation(dry_run=True)[0]
         for a in env.network_agents:
@@ -175,80 +143,15 @@ class TestMain(TestCase):
             assert a.state['total'] == 3
             assert a.state['neighbors'] == 1

-    def test_yaml(self):
-        """
-        The YAML version of a newly created simulation
-        should be equivalent to the configuration file used
-        """
-        with utils.timer('loading'):
-            config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
-            s = simulation.from_config(config)
-        with utils.timer('serializing'):
-            serial = s.to_yaml()
-        with utils.timer('recovering'):
-            recovered = yaml.load(serial)
-        with utils.timer('deleting'):
-            del recovered['topology']
-        assert config == recovered
-
-    def test_configuration_changes(self):
-        """
-        The configuration should not change after running
-        the simulation.
-        """
-        config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
-        s = simulation.from_config(config)
-        for i in range(5):
-            s.run_simulation(dry_run=True)
-            nconfig = s.to_dict()
-            del nconfig['topology']
-            assert config == nconfig
-
-    def test_row_conversion(self):
-        env = Environment()
-        env['test'] = 'test_value'
-        res = list(env.history_to_tuples())
-        assert len(res) == len(env.environment_params)
-        env._now = 1
-        env['test'] = 'second_value'
-        res = list(env.history_to_tuples())
-        assert env['env', 0, 'test'] == 'test_value'
-        assert env['env', 1, 'test'] == 'second_value'
-
-    def test_save_geometric(self):
-        """
-        There is a bug in networkx that prevents it from creating a GEXF file
-        from geometric models. We should work around it.
-        """
-        G = nx.random_geometric_graph(20, 0.1)
-        env = Environment(topology=G)
-        f = io.BytesIO()
-        env.dump_gexf(f)
-
-    def test_save_graph(self):
-        '''
-        The history_to_graph method should return a valid networkx graph.
-        The state of the agent should be encoded as intervals in the nx graph.
-        '''
-        G = nx.cycle_graph(5)
-        distribution = agents.calculate_distribution(None, agents.BaseAgent)
-        env = Environment(topology=G, network_agents=distribution)
-        env[0, 0, 'testvalue'] = 'start'
-        env[0, 10, 'testvalue'] = 'finish'
-        nG = env.history_to_graph()
-        values = nG.node[0]['attr_testvalue']
-        assert ('start', 0, 10) in values
-        assert ('finish', 10, None) in values
-
     def test_serialize_class(self):
-        ser, name = serialization.serialize(agents.BaseAgent)
+        ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
         assert name == 'soil.agents.BaseAgent'
         assert ser == agents.BaseAgent

+        ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
+        assert name == 'BaseAgent'
+        assert ser == agents.BaseAgent
+
         ser, name = serialization.serialize(CustomAgent)
         assert name == 'test_main.CustomAgent'
         assert ser == CustomAgent
@@ -262,7 +165,7 @@ class TestMain(TestCase):
             des = serialization.deserialize(name, ser)
             assert i == des

-    def test_serialize_agent_type(self):
+    def test_serialize_agent_class(self):
         '''A class from soil.agents should be serialized without the module part'''
         ser = agents.serialize_type(CustomAgent)
         assert ser == 'test_main.CustomAgent'
@@ -273,74 +176,93 @@ class TestMain(TestCase):
     def test_deserialize_agent_distribution(self):
         agent_distro = [
             {
-                'agent_type': 'CounterModel',
+                'agent_class': 'CounterModel',
                 'weight': 1
             },
             {
-                'agent_type': 'test_main.CustomAgent',
+                'agent_class': 'test_main.CustomAgent',
                 'weight': 2
             },
         ]
-        converted = agents.deserialize_distribution(agent_distro)
-        assert converted[0]['agent_type'] == agents.CounterModel
-        assert converted[1]['agent_type'] == CustomAgent
+        converted = agents.deserialize_definition(agent_distro)
+        assert converted[0]['agent_class'] == agents.CounterModel
+        assert converted[1]['agent_class'] == CustomAgent
         pickle.dumps(converted)

     def test_serialize_agent_distribution(self):
         agent_distro = [
             {
-                'agent_type': agents.CounterModel,
+                'agent_class': agents.CounterModel,
                 'weight': 1
             },
             {
-                'agent_type': CustomAgent,
+                'agent_class': CustomAgent,
                 'weight': 2
             },
         ]
-        converted = agents.serialize_distribution(agent_distro)
-        assert converted[0]['agent_type'] == 'CounterModel'
-        assert converted[1]['agent_type'] == 'test_main.CustomAgent'
+        converted = agents.serialize_definition(agent_distro)
+        assert converted[0]['agent_class'] == 'CounterModel'
+        assert converted[1]['agent_class'] == 'test_main.CustomAgent'
         pickle.dumps(converted)

-    def test_pickle_agent_environment(self):
-        env = Environment(name='Test')
-        a = agents.BaseAgent(environment=env, agent_id=25)
-        a['key'] = 'test'
-        pickled = pickle.dumps(a)
-        recovered = pickle.loads(pickled)
-        assert recovered.env.name == 'Test'
-        assert list(recovered.env._history.to_tuples())
-        assert recovered['key', 0] == 'test'
-        assert recovered['key'] == 'test'
-
-    def test_history(self):
-        '''Test storing in and retrieving from history (sqlite)'''
-        h = history.History()
-        h.save_record(agent_id=0, t_step=0, key="test", value="hello")
-        assert h[0, 0, "test"] == "hello"
-
-    def test_subgraph(self):
-        '''An agent should be able to subgraph the global topology'''
-        G = nx.Graph()
-        G.add_node(3)
-        G.add_edge(1, 2)
-        distro = agents.calculate_distribution(agent_type=agents.NetworkAgent)
-        env = Environment(name='Test', topology=G, network_agents=distro)
-        lst = list(env.network_agents)
-
-        a2 = env.get_agent(2)
-        a3 = env.get_agent(3)
-        assert len(a2.subgraph(limit_neighbors=True)) == 2
-        assert len(a3.subgraph(limit_neighbors=True)) == 1
-        assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
-        assert len(a3.subgraph(agent_type=agents.NetworkAgent)) == 3
-
     def test_templates(self):
         '''Loading a template should result in several configs'''
         configs = serialization.load_file(join(EXAMPLES, 'template.yml'))
         assert len(configs) > 0
+
+    def test_until(self):
+        config = {
+            'name': 'until_sim',
+            'model_params': {
+                'network_params': {},
+                'agents': {
+                    'fixed': [{
+                        'agent_class': agents.BaseAgent,
+                    }]
+                },
+            },
+            'max_time': 2,
+            'num_trials': 50,
+        }
+        s = simulation.from_config(config)
+        runs = list(s.run_simulation(dry_run=True))
+        over = list(x.now for x in runs if x.now > 2)
+        assert len(runs) == config['num_trials']
+        assert len(over) == 0
+
+    def test_fsm(self):
+        '''Basic state change'''
+        class ToggleAgent(agents.FSM):
+            @agents.default_state
+            @agents.state
+            def ping(self):
+                return self.pong
+
+            @agents.state
+            def pong(self):
+                return self.ping
+
+        a = ToggleAgent(unique_id=1, model=Environment())
+        assert a.state_id == a.ping.id
+        a.step()
+        assert a.state_id == a.pong.id
+        a.step()
+        assert a.state_id == a.ping.id
+
+    def test_fsm_when(self):
+        '''Basic state change'''
+        class ToggleAgent(agents.FSM):
+            @agents.default_state
+            @agents.state
+            def ping(self):
+                return self.pong, 2
+
+            @agents.state
+            def pong(self):
+                return self.ping
+
+        a = ToggleAgent(unique_id=1, model=Environment())
+        when = a.step()
+        assert when == 2
+        when = a.step()
+        assert when == Delta(a.interval)

tests/test_mesa.py (new file)

@@ -0,0 +1,69 @@
'''
Mesa-SOIL integration tests

We have to test that:
- Mesa agents can be used in SOIL
- Simplified soil agents can be used in mesa simulations
- Mesa and soil agents can interact in a simulation
- Mesa visualizations work with SOIL simulations
'''
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid


class MoneyAgent(Agent):
    """ An agent with fixed initial wealth."""
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.wealth = 1

    def step(self):
        self.move()
        if self.wealth > 0:
            self.give_money()

    def give_money(self):
        cellmates = self.model.grid.get_cell_list_contents([self.pos])
        if len(cellmates) > 1:
            other = self.random.choice(cellmates)
            other.wealth += 1
            self.wealth -= 1

    def move(self):
        possible_steps = self.model.grid.get_neighborhood(
            self.pos,
            moore=True,
            include_center=False)
        new_position = self.random.choice(possible_steps)
        self.model.grid.move_agent(self, new_position)


class MoneyModel(Model):
    """A model with some number of agents."""
    def __init__(self, N, width, height):
        self.num_agents = N
        self.grid = MultiGrid(width, height, True)
        self.schedule = RandomActivation(self)
        # Create agents
        for i in range(self.num_agents):
            a = MoneyAgent(i, self)
            self.schedule.add(a)
            # Add the agent to a random grid cell
            x = self.random.randrange(self.grid.width)
            y = self.random.randrange(self.grid.height)
            self.grid.place_agent(a, (x, y))

    def step(self):
        '''Advance the model by one step.'''
        self.schedule.step()


# model = MoneyModel(10)
# for i in range(10):
#     model.step()
# agent_wealth = [a.wealth for a in model.schedule.agents]
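One property worth noting about the MoneyModel listing above: the exchange rule only moves wealth between agents, so total wealth is conserved at every step. A Mesa-free sketch of that invariant (the grid and neighbourhood logic are dropped, and agents give to a random peer instead of a cellmate, so this is only an illustration of the rule, not the model itself):

```python
import random

def step(wealth, rng):
    # Each agent with wealth > 0 gives one unit to a random other agent,
    # mirroring MoneyAgent.give_money without the grid.
    for i in range(len(wealth)):
        if wealth[i] > 0:
            other = rng.randrange(len(wealth))
            if other != i:
                wealth[other] += 1
                wealth[i] -= 1

rng = random.Random(42)
wealth = [1] * 10  # ten agents with one unit each, as in MoneyAgent.__init__
for _ in range(10):
    step(wealth, rng)

assert sum(wealth) == 10            # total wealth is conserved
assert all(w >= 0 for w in wealth)  # no agent goes negative
```

The same conservation check is a cheap sanity assertion for any Mesa or SOIL run of the real model.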

tests/test_network.py (new file)

@@ -0,0 +1,133 @@
from unittest import TestCase
import io
import os
import networkx as nx
from os.path import join

from soil import config, network, environment, agents, simulation
from test_main import CustomAgent

ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')


class TestNetwork(TestCase):
    def test_load_graph(self):
        """
        Load a graph from file if the extension is known.
        Raise an exception otherwise.
        """
        config = {
            'network_params': {
                'path': join(ROOT, 'test.gexf')
            }
        }
        G = network.from_config(config['network_params'])
        assert G
        assert len(G) == 2
        with self.assertRaises(AttributeError):
            config = {
                'network_params': {
                    'path': join(ROOT, 'unknown.extension')
                }
            }
            G = network.from_config(config['network_params'])
            print(G)

    def test_generate_barabasi(self):
        """
        If no path is given, a generator and network parameters
        should be used to generate a network
        """
        cfg = {
            'params': {
                'generator': 'barabasi_albert_graph'
            }
        }
        with self.assertRaises(Exception):
            G = network.from_config(cfg)
        cfg['params']['n'] = 100
        cfg['params']['m'] = 10
        G = network.from_config(cfg)
        assert len(G) == 100
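The test above expects an exception until both `n` and `m` are supplied, because a Barabási–Albert generator cannot run without knowing the node count and the number of edges each new node attaches with. The following stdlib-only sketch (a toy, not networkx's `barabasi_albert_graph` implementation) shows where each parameter enters the preferential-attachment loop.

```python
import random

def barabasi_albert(n, m, seed=1):
    """Toy preferential-attachment generator: each new node attaches to
    up to m existing nodes, chosen with probability proportional to
    their current degree."""
    if m < 1 or m >= n:
        raise ValueError("need 1 <= m < n")
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))  # the first new node attaches to nodes 0..m-1
    repeated = []             # node ids repeated once per incident edge
    for new in range(m, n):
        for t in set(targets):
            edges.add((t, new))
        # Grow the degree-weighted pool, then sample targets for the next node
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return n, edges

num_nodes, edges = barabasi_albert(100, 10)
assert num_nodes == 100
```

Without `n` there is no loop bound, and without `m` there is no attachment count, which is exactly why `network.from_config` is expected to raise until the test fills in both.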
    def test_save_geometric(self):
        """
        There is a bug in networkx that prevents it from creating a GEXF file
        from geometric models. We should work around it.
        """
        G = nx.random_geometric_graph(20, 0.1)
        env = environment.NetworkEnvironment(topology=G)
        f = io.BytesIO()
        assert env.topologies['default']
        network.dump_gexf(env.topologies['default'], f)

    def test_networkenvironment_creation(self):
        """NetworkEnvironment should accept a network config as parameters"""
        model_params = {
            'topologies': {
                'default': {
                    'path': join(ROOT, 'test.gexf')
                }
            },
            'agents': {
                'topology': 'default',
                'distribution': [{
                    'agent_class': CustomAgent,
                }]
            }
        }
        env = environment.Environment(**model_params)
        assert env.topologies
        env.step()
        assert len(env.topologies['default']) == 2
        assert len(env.agents) == 2
        assert env.agents[1].count_agents(state_id='normal') == 2
        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_custom_agent_neighbors(self):
        """Allow for search of neighbors with a certain state_id"""
        config = {
            'model_params': {
                'topologies': {
                    'default': {
                        'path': join(ROOT, 'test.gexf')
                    }
                },
                'agents': {
                    'topology': 'default',
                    'distribution': [
                        {
                            'weight': 1,
                            'agent_class': CustomAgent
                        }
                    ]
                }
            },
            'max_time': 10,
        }
        s = simulation.from_config(config)
        env = s.run_simulation(dry_run=True)[0]
        assert env.agents[1].count_agents(state_id='normal') == 2
        assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
        assert env.agents[0].neighbors == 1

    def test_subgraph(self):
        '''An agent should be able to subgraph the global topology'''
        G = nx.Graph()
        G.add_node(3)
        G.add_edge(1, 2)
        distro = agents.calculate_distribution(agent_class=agents.NetworkAgent)
        aconfig = config.AgentConfig(distribution=distro, topology='default')
        env = environment.Environment(name='Test', topologies={'default': G}, agents=aconfig)
        lst = list(env.network_agents)

        a2 = env.find_one(node_id=2)
        a3 = env.find_one(node_id=3)
        assert len(a2.subgraph(limit_neighbors=True)) == 2
        assert len(a3.subgraph(limit_neighbors=True)) == 1
        assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
        assert len(a3.subgraph(agent_class=agents.NetworkAgent)) == 3
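The four `subgraph` assertions follow directly from the toy topology (one edge 1–2, one isolated node 3). The sketch below reproduces that neighbor-limiting logic with a plain adjacency dict, as an illustration only, not SOIL's actual `subgraph` implementation.

```python
# Toy topology matching the test: edge 1-2, node 3 isolated
adjacency = {1: {2}, 2: {1}, 3: set()}

def subgraph(node, limit_neighbors=False, center=True):
    """Return the set of node ids visible from `node`."""
    if limit_neighbors:
        nodes = set(adjacency[node])  # only direct neighbors...
        if center:
            nodes.add(node)           # ...plus the node itself, if requested
    else:
        nodes = set(adjacency)        # otherwise the whole topology
    return nodes

assert len(subgraph(2, limit_neighbors=True)) == 2   # node 2 and its neighbor 1
assert len(subgraph(3, limit_neighbors=True)) == 1   # isolated node: just itself
assert len(subgraph(3, limit_neighbors=True, center=False)) == 0
assert len(subgraph(3)) == 3                         # full topology
```

The isolated node is what makes the test interesting: with `limit_neighbors=True` it sees only itself, and dropping `center` leaves it with an empty view, exactly the `== 1` and `== 0` cases asserted above.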