mirror of https://github.com/gsi-upm/soil synced 2025-09-13 19:52:20 +00:00

Compare commits


51 Commits

Author SHA1 Message Date
J. Fernando Sánchez
0a9c6d8b19 WIP: removed stats 2022-09-16 18:14:16 +02:00
J. Fernando Sánchez
3dc56892c1 WIP: working config 2022-09-15 19:27:17 +02:00
J. Fernando Sánchez
e41dc3dae2 WIP 2022-09-13 18:16:31 +02:00
J. Fernando Sánchez
bbaed636a8 WIP 2022-07-19 17:18:02 +02:00
J. Fernando Sánchez
6f7481769e WIP 2022-07-19 17:17:23 +02:00
J. Fernando Sánchez
1a8313e4f6 WIP 2022-07-19 17:12:41 +02:00
J. Fernando Sánchez
a40aa55b6a Release 0.20.7 2022-07-06 09:23:46 +02:00
J. Fernando Sánchez
50cba751a6 Release 0.20.6 2022-07-05 12:08:34 +02:00
J. Fernando Sánchez
dfb6d13649 version 0.20.5 2022-05-18 16:13:53 +02:00
J. Fernando Sánchez
5559d37e57 version 0.20.4 2022-05-18 15:20:58 +02:00
J. Fernando Sánchez
2116fe6f38 Bug fixes and minor improvements 2022-05-12 16:14:47 +02:00
J. Fernando Sánchez
affeeb9643 Update examples 2022-04-04 16:47:58 +02:00
J. Fernando Sánchez
42ddc02318 CI: delay PyPI check 2022-03-07 14:35:07 +01:00
J. Fernando Sánchez
cab9a3440b Fix typo CI/CD 2022-03-07 13:57:25 +01:00
J. Fernando Sánchez
db505da49c Minor CI change 2022-03-07 13:35:02 +01:00
J. Fernando Sánchez
8eb8eb16eb Minor CI change 2022-03-07 12:51:22 +01:00
J. Fernando Sánchez
3fc5ca8c08 Fix requirements issue CI/CD 2022-03-07 12:46:01 +01:00
J. Fernando Sánchez
c02e6ea2e8 Fix die bug 2022-03-07 11:17:27 +01:00
J. Fernando Sánchez
38f8a8d110 Merge branch 'mesa'
First iteration to achieve MESA compatibility.
As a side effect, we have removed `simpy`.

For a full list of changes, see `CHANGELOG.md`.
2022-03-07 10:54:47 +01:00
J. Fernando Sánchez
cb72aac980 Add random activation example 2022-03-07 10:48:59 +01:00
J. Fernando Sánchez
6c4f44b4cb Partial MESA compatibility and several fixes
Documentation for the new APIs is still a work in progress :)
2021-10-15 20:16:49 +02:00
J. Fernando Sánchez
af9a392a93 WIP: mesa compat
All tests pass but some features are still missing/unclear:

- Mesa agents do not have a `state`, so their "metrics" don't get stored. I will
probably refactor this to remove some magic in this regard. This should get rid
of the `_state` dictionary and the setitem/getitem magic.
- Simulation is still different from a runner. So far only Agent and
Environment/Model have been updated.
2021-10-15 13:36:39 +02:00
J. Fernando Sánchez
5d7e57675a WIP: mesa compatibility 2021-10-14 17:37:06 +02:00
J. Fernando Sánchez
e860bdb922 v0.15.2
See CHANGELOG.md for a complete list of changes
2021-05-22 16:33:52 +02:00
J. Fernando Sánchez
d6b684c1c1 Fix docs requirements 2021-05-22 16:08:38 +02:00
J. Fernando Sánchez
05f7f49233 Refactoring v0.15.1
See CHANGELOG.md for a full list of changes

* Removed nxsim
* Refactored `agents.NetworkAgent` and `agents.BaseAgent`
* Refactored exporters
* Added stats to history
2020-11-19 23:58:47 +01:00
J. Fernando Sánchez
3b2c6a3db5 Seed before env initialization
Fixes #6
2020-07-27 12:29:24 +02:00
J. Fernando Sánchez
6118f917ee Fix Windows bug
Update URLs to gsi.upm.es
2020-07-07 10:57:10 +02:00
J. Fernando Sánchez
6adc8d36ba minor change in docs 2020-03-13 12:50:05 +01:00
J. Fernando Sánchez
c8b8149a17 Updated to 0.14.6
Fix compatibility issues with newer networkx and pandas versions
2020-03-11 16:17:14 +01:00
J. Fernando Sánchez
6690b6ee5f Fix incompatibility and bug in get_agents 2019-05-16 19:59:46 +02:00
J. Fernando Sánchez
97835b3d10 Clean up exporters 2019-05-03 13:17:27 +02:00
J. Fernando Sánchez
b0add8552e Tag version 0.14.0 2019-04-30 16:26:08 +02:00
J. Fernando Sánchez
1cf85ea450 Avoid writing gexf in test 2019-04-30 16:16:46 +02:00
J. Fernando Sánchez
c32e167fb8 Bump pyyaml to 5.1 2019-04-30 16:04:12 +02:00
J. Fernando Sánchez
5f68b5321d Pinning scipy to 1.2.1
1.3.0rc1 is not compatible with salib
2019-04-30 15:52:37 +02:00
J. Fernando Sánchez
2a2843bd19 Add tests exporters 2019-04-30 09:28:53 +02:00
J. Fernando Sánchez
d1006bd55c WIP: exporters 2019-04-29 18:47:15 +02:00
J. Fernando Sánchez
9bc036d185 WIP: exporters 2019-04-26 19:22:45 +02:00
J. Fernando Sánchez
a3ea434f23 0.13.8 2019-02-19 21:17:19 +01:00
J. Fernando Sánchez
65f6aa72f3 fix timeout in FSM. Improve logs 2019-02-01 19:05:07 +01:00
J. Fernando Sánchez
09e14c6e84 Add generator and programmatic examples 2018-12-20 19:25:33 +01:00
J. Fernando Sánchez
8593ac999d Swap test and build in CI. Remove tests in tags 2018-12-20 17:56:33 +01:00
J. Fernando Sánchez
90338c3549 skip-tls-verify in kaniko 2018-12-20 17:48:58 +01:00
J. Fernando Sánchez
1d532dacfe Remove entrypoint build stage 2018-12-20 15:14:58 +01:00
J. Fernando Sánchez
a1f8d8c9c5 Change entrypoint build stage 2018-12-20 15:07:45 +01:00
J. Fernando Sánchez
de326eb331 Remove CI global image 2018-12-20 15:05:45 +01:00
J. Fernando Sánchez
04b4380c61 Fix wrong import soil.web 2018-12-20 14:06:18 +01:00
J. Fernando Sánchez
d70a0c865c limit ci jobs to docker runners 2018-12-09 17:22:40 +01:00
J. Fernando Sánchez
625c28e4ee Fix CI syntax 2018-12-09 17:09:31 +01:00
J. Fernando Sánchez
9749f4ca14 Fix multithreading
Multithreading needs pickling to work.
Pickling/unpickling didn't work in some situations, like when the
environment_agents parameter was left blank.
This was due to two reasons:

1) agents and history didn't have a `__setstate__` method, and some of their
attributes cannot be pickled (generators, the sqlite connection)
2) the environment was adding generators (agents) to its state.

This fixes the situation by restricting the keys that the environment exports
when it pickles, and by adding `__getstate__`/`__setstate__` methods to agents.

The resulting pickles should contain enough information to inspect
them (history, state values, etc.), but they are deliberately limited.
2018-12-09 16:58:49 +01:00
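The fix described in that last commit boils down to controlling what gets pickled. Below is a minimal, self-contained sketch of the approach (the class and attribute names are illustrative, not soil's actual ones): unpicklable attributes such as generators or a sqlite connection are dropped in `__getstate__` and recreated after `__setstate__`.

```python
import pickle


class Agent:
    def __init__(self, unique_id):
        self.unique_id = unique_id
        self.state = {'wealth': 1}
        self._gen = (i for i in range(10))  # generators cannot be pickled

    def __getstate__(self):
        # Export only the picklable, inspectable parts of the agent
        return {'unique_id': self.unique_id, 'state': self.state}

    def __setstate__(self, newstate):
        self.__dict__.update(newstate)
        self._gen = None  # recreated on demand after unpickling


clone = pickle.loads(pickle.dumps(Agent(0)))
print(clone.state)  # {'wealth': 1}
```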
74 changed files with 8169 additions and 2207 deletions

View File

@@ -1,2 +1,5 @@
**/soil_output
.*
**/__pycache__
__pycache__
*.pyc

1
.gitignore vendored
View File

@@ -8,3 +8,4 @@ soil_output
docs/_build*
build/*
dist/*
prof

View File

@@ -1,21 +1,53 @@
image: python:3.7
steps:
- build
stages:
- test
- publish
- check_published
build:
stage: build
docker:
stage: publish
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
tags:
- docker
script:
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
# The skip-tls-verify flag is there because our registry certificate is self signed
- /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
only:
- tags
test:
tags:
- docker
image: python:3.7
stage: test
script:
python setup.py test
- pip install -r requirements.txt -r test-requirements.txt
- python setup.py test
push_pypi:
only:
- tags
tags:
- docker
image: python:3.7
stage: publish
script:
- echo $CI_COMMIT_TAG > soil/VERSION
- pip install twine
- python setup.py sdist bdist_wheel
- TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*
check_pypi:
only:
- tags
tags:
- docker
image: python:3.7
stage: check_published
script:
- pip install soil==$CI_COMMIT_TAG
# Allow PYPI to update its index before we try to install
when: delayed
start_in: 2 minutes

171
CHANGELOG.md Normal file
View File

@@ -0,0 +1,171 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [UNRELEASED]
### Changed
* Configuration schema is very different now. Check `soil.config` for more information. We are using Pydantic for (de)serialization (a rough sketch follows this block).
* There may be more than one topology/network in the simulation
* Agents are split into groups now. Each group may be assigned a given set of agents or an agent distribution, and a network topology to be assigned to.
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
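A rough illustration of the Pydantic-based (de)serialization mentioned above. These are not soil's actual `soil.config` classes, just a sketch of the idea using the pydantic v1 API, with a configuration that holds several topologies and agent groups:

```python
from typing import Dict, List, Optional

from pydantic import BaseModel  # assumes pydantic v1 (>=1.9), as in requirements.txt


class Topology(BaseModel):
    params: Dict[str, object] = {}


class AgentGroup(BaseModel):
    agent_class: Optional[str] = None
    topology: Optional[str] = None
    fixed: List[dict] = []
    distribution: List[dict] = []


class Config(BaseModel):
    topologies: Dict[str, Topology] = {}
    agents: Dict[str, AgentGroup] = {}


cfg = Config.parse_obj({
    'topologies': {'default': {'params': {'generator': 'complete_graph', 'n': 10}}},
    'agents': {'default': {'agent_class': 'CounterModel', 'topology': 'default'}},
})
print(cfg.topologies['default'].params['n'])  # 10
```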
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` does not re-export the built-in `time` module anymore (change in `soil.simulation`). It used to create subtle import conflicts when importing `soil.time`.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent `__init__`, not in the environment. This decouples both classes a bit more, and it is more intuitive.
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
## [0.20.2]
### Fixed
* CI/CD testing issues
## [0.20.1]
### Fixed
* Agents would run another step after dying.
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id`, to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default
## [0.15.1]
### Added
* read-only `History`
### Fixed
* Serialization problem with the `Environment` in parallel mode.
* Analysis functions now work as they should in the tutorial
## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open
## [0.14.9]
### Changed
* Seed random before environment initialization
## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5
## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception
## [0.14.6]
### Fixed
* Bug with newer versions of networkx (2.4), where the `Graph.node` attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit yaml.SafeLoader to avoid deprecation warnings when using yaml.load. It should not break any existing setups, but we could move to the FullLoader in the future if needed.
## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.
## [0.14.3]
### Fixed
* Incompatibility with py3.3-3.6 due to ModuleNotFoundError and TypeError in DryRunner
## [0.14.2]
### Fixed
* Output path for exporters is now soil_output
### Changed
* CSV output to stdout in dry_run mode
## [0.14.1]
### Changed
* Exporter names in lower case
* Add default exporter in runs
## [0.14.0]
### Added
* Loading configuration from template definitions in the yaml, in preparation for SALib support.
The definition of the variables and their possible values (i.e., a problem in SALib terms), as well as a sampler function, can be provided.
Soil uses this definition and the template to generate a set of configurations.
* Simulation group names, to link related simulations. For now, they are only used to group all simulations in the same group under the same folder.
* Exporters unify exporting/dumping results and other files to disk. If `dry_run` is set to `True`, exporters will write to stdout instead of a file (useful for testing/debugging).
* Distribution exporter, to write statistics about values and value_counts in every simulation. The results are dumped to two CSV files.
### Changed
* `dir_path` is now the directory for resources (modules, files)
* Environments and simulations do not export or write anything by default. That task is delegated to Exporters
### Removed
* The output dir for environments and simulations (see Exporters)
* DrawingAgent, because it wrote to disk and was not being used. We provide a partial alternative in the form of the GraphDrawing exporter. A complete alternative will be provided once the network at each state can be accessed by exporters.
### Fixed
* Modules with custom agents/environments failed to load when they were run from outside the directory of the definition file. Modules are now loaded from the directory of the simulation file in addition to the working directory
* Memory databases (in history) can now be shared between threads.
* Testing all examples, not just subdirectories
## [0.13.8]
### Changed
* Moved TerroristNetworkModel to examples
### Added
* `get_agents` and `count_agents` methods now accept lists as inputs. They can be used to retrieve agents from node ids
* `subgraph` in BaseAgent
* `agents.select` method, to filter out agents
* `skip_test` property in yaml definitions, to force skipping some examples
* `agents.Geo`, with a search function based on position
* `BaseAgent.ego_search` to get nodes from the ego network of a node
* `BaseAgent.degree` and `BaseAgent.betweenness`
### Fixed
## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load the history for examination, at the risk of overwriting existing data. That should not happen, because History is only created in the Environment, and that one has `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state, e.g. `return self.idle, self.env.timeout(2)` will execute the *idle* state in 2 *units of time* (`t_step=now+2`). A short sketch of this appears after this changelog excerpt.
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.
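As a sketch of the timeout mechanism from the 0.13.7 entry above (the class and state names are made up; the API follows the FSM examples included elsewhere in this changeset):

```python
from soil.agents import FSM, state, default_state


class Sleeper(FSM):
    @default_state
    @state
    def working(self):
        self.log('Working at t={}'.format(self.now))
        # Switch to `idle`, but don't run again until t = now + 2
        return self.idle, self.env.timeout(2)

    @state
    def idle(self):
        self.log('Idle at t={}'.format(self.now))
        return self.working
```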

View File

@@ -1,4 +1,7 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil
graft soil
global-exclude __pycache__
global-exclude soil_output
global-exclude *.py[co]

View File

@@ -1,4 +1,7 @@
test:
quick-test:
docker-compose exec dev python -m pytest -s -v
.PHONY: test
test:
docker run -t -v $$PWD:/usr/src/app -w /usr/src/app python:3.7 python setup.py test
.PHONY: test

View File

@@ -5,6 +5,9 @@ Learn how to run your own simulations with our [documentation](http://soilsim.re
Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.
## Citation
If you use Soil in your research, don't forget to cite this paper:
```bibtex
@@ -28,7 +31,25 @@ If you use Soil in your research, don't forget to cite this paper:
```
@Copyright GSI - Universidad Politécnica de Madrid 2017
## Mesa compatibility
[![SOIL](logo_gsi.png)](https://www.gsi.dit.upm.es)
Soil is in the process of becoming fully compatible with MESA.
As of this writing, this is a non-exhaustive list of tasks to achieve compatibility:
* Environments.agents and mesa.Agent.agents are not the same. env is a property, and it only takes into account network and environment agents. We might rename environment_agents to other_agents or something similar.
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Document the new APIs and usage
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

View File

@@ -31,7 +31,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
extensions = ['IPython.sphinxext.ipython_console_highlighting']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -69,7 +69,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

View File

@@ -8,32 +8,8 @@ The advantage of a configuration file is that it is a clean declarative descript
Simulation configuration files can be formatted in ``json`` or ``yaml`` and they define all the parameters of a simulation.
Here's an example (``example.yml``).
.. code:: yaml
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
- agent_type: SISaModel
weight: 1
state:
id: content
- agent_type: SISaModel
weight: 1
state:
id: discontent
- agent_type: SISaModel
weight: 8
state:
id: neutral
environment_params:
prob_infect: 0.075
.. literalinclude:: example.yml
:language: yaml
This example configuration will run three trials (``num_trials``) of a simulation containing a randomly generated network (``network_params``).
@@ -112,9 +88,18 @@ For example, the following configuration is equivalent to :code:`nx.complete_gra
Environment
============
The environment is the place where the shared state of the simulation is stored.
For instance, the probability of disease outbreak.
The configuration file may specify the initial value of the environment parameters:
That means global parameters, such as the probability of disease outbreak, but also other data, such as a map or a network topology that connects multiple agents.
As a result, it is also typical to add custom functions in an environment that help agents interact with each other and with the state of the simulation.
Last but not least, an environment controls when and how its agents will be executed.
By default, soil environments incorporate a ``soil.time.TimedActivation`` model for agent execution (more on this in the following section).
Soil environments are very similar to, and often interchangeable with, mesa models (``mesa.Model``).
A configuration may specify the initial value of the environment parameters:
.. code:: yaml
@@ -122,23 +107,33 @@ The configuration file may specify the initial value of the environment paramete
daily_probability_of_earthquake: 0.001
number_of_earthquakes: 0
All agents have access to the environment parameters.
All agents have access to the environment (and its parameters).
In some scenarios, it is useful to have a custom environment, to provide additional methods or to control the way agents update environment state.
For example, if our agents play the lottery, the environment could provide a method to decide whether the agent wins, instead of leaving it to the agent.
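A minimal sketch of that idea follows (``LotteryEnv``, ``play_lottery`` and ``prob_win`` are illustrative names, not part of soil's API); agents would call ``self.env.play_lottery(self)`` instead of rolling on their own:

.. code:: python

   from random import random

   from soil import Environment


   class LotteryEnv(Environment):
       def play_lottery(self, agent):
           '''Decide whether the given agent wins this round.'''
           return random() < self['prob_win']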
Agents
======
Agents are a way of modelling behavior.
Agents can be characterized with two variables: agent type (``agent_type``) and state.
Only one agent is executed at a time (generally, every ``interval`` seconds), and it has access to its state and the environment parameters.
The agent type is a ``soil.Agent`` class, which contains the code that encapsulates the behavior of the agent.
The state is a set of variables, which may change during the simulation, and that the code may use to control the behavior.
All agents provide a ``step`` method either explicitly or implicitly (by inheriting it from a superclass), which controls how the agent will behave in each step of the simulation.
When and how agent steps are executed in a simulation depends entirely on the ``environment``.
Most environments will internally use a scheduler (``mesa.time.BaseScheduler``), which controls the activation of agents.
In soil, we generally use the ``soil.time.TimedActivation`` scheduler, which allows agents to specify when their next activation will happen, defaulting to a delay of one unit of time.
When an agent's step is executed (generally, every ``interval`` seconds), the agent has access to its state and the environment.
Through the environment, it can access the network topology and the state of other agents.
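For reference, here is a bare-bones agent with an explicit ``step`` method (``Walker`` and its counter are illustrative, not part of soil); when and how often it runs is up to the environment's scheduler:

.. code:: python

   from soil.agents import BaseAgent


   class Walker(BaseAgent):
       def step(self):
           # Keep a simple counter in the agent's state
           if 'steps_taken' not in self:
               self['steps_taken'] = 0
           self['steps_taken'] += 1
           self.debug('Stepping at t={}'.format(self.now))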
There are three three types of agents according to how they are added to the simulation: network agents and environment agent.
There are two types of agents according to how they are added to the simulation: network agents and environment agents.
Network Agents
##############
Network agents are attached to a node in the topology.
The configuration file allows you to specify how agents will be mapped to topology nodes.
@@ -149,7 +144,9 @@ Hence, every node in the network will be associated to an agent of that type.
agent_type: SISaModel
It is also possible to add more than one type of agent to the simulation, and to control the ratio of each type (using the ``weight`` property).
It is also possible to add more than one type of agent to the simulation.
To control the ratio of each type, use the ``weight`` property.
For instance, with following configuration, it is five times more likely for a node to be assigned a CounterModel type than a SISaModel type.
.. code:: yaml
@@ -242,3 +239,24 @@ These agents are programmed in much the same way as network agents, the only dif
You may use environment agents to model events that a normal agent cannot control, such as natural disasters or chance.
They are also useful to add behavior that has little to do with the network and the interactions within that network.
Templating
==========
Sometimes, it is useful to parameterize a simulation and run it over a range of values in order to compare each run and measure the effect of those parameters in the simulation.
For instance, you may want to run a simulation with different agent distributions.
This can be done in Soil using **templates**.
A template is a configuration where some of the values are specified with a variable.
e.g., ``weight: "{{ var1 }}"`` instead of ``weight: 1``.
There are two types of variables, depending on how their values are decided:
* Fixed. A list of values is provided, and a new simulation is run for each possible value. If more than one variable is given, a new simulation will be run per combination of values.
* Bounded/Sampled. The bounds of the variable are provided, along with a sampler method, which will be used to compute all the configuration combinations.
When fixed and bounded variables are mixed, Soil generates a new configuration per combination of fixed values and bounded values.
Here is an example with a single fixed variable and two bounded variables:
.. literalinclude:: ../examples/template.yml
:language: yaml

35
docs/example.yml Normal file
View File

@@ -0,0 +1,35 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
network_params:
generator: barabasi_albert_graph
n: 100
m: 2
network_agents:
- agent_type: SISaModel
weight: 1
state:
id: content
- agent_type: SISaModel
weight: 1
state:
id: discontent
- agent_type: SISaModel
weight: 8
state:
id: neutral
environment_params:
prob_infect: 0.075
neutral_discontent_spon_prob: 0.1
neutral_discontent_infected_prob: 0.3
neutral_content_spon_prob: 0.3
neutral_content_infected_prob: 0.4
discontent_neutral: 0.5
discontent_content: 0.5
variance_d_c: 0.2
content_discontent: 0.2
variance_c_d: 0.2
content_neutral: 0.2
standard_variance: 1

View File

@@ -14,11 +14,11 @@ Now test that it worked by running the command line tool
soil --help
Or using soil programmatically:
Or, if you're using soil programmatically:
.. code:: python
import soil
print(soil.__version__)
The latest version can be installed through `GitLab <https://lab.cluster.gsi.dit.upm.es/soil/soil.git>`_.
The latest version can be installed through `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_ or `GitHub <https://github.com/gsi-upm/soil>`_.

1
docs/requirements.txt Normal file
View File

@@ -0,0 +1 @@
ipython>=7.31.1

View File

@@ -26,7 +26,7 @@ But before that, let's import the soil module and networkx.
%autoreload 2
%pylab inline
# To display plots in the notebooed_
# To display plots in the notebook_
.. parsed-literal::
@@ -47,12 +47,6 @@ There are three main elements in a soil simulation:
- The environment. It assigns agents to nodes in the network, and
stores the environment parameters (shared state for all agents).
Soil is based on ``simpy``, which is an event-based network simulation
library. Soil provides several abstractions over events to make
developing agents easier. This means you can use events (timeouts,
delays) in soil, but for the most part we will assume your models will
be step-based.
Modeling behaviour
------------------
@@ -323,7 +317,7 @@ Let's run our simulation:
.. code:: ipython3
soil.simulation.run_from_config(config, dump=False)
soil.simulation.run_from_config(config)
.. parsed-literal::
@@ -2531,7 +2525,7 @@ Dealing with bigger data
.. parsed-literal::
267M ../rabbits/soil_output/rabbits_example/
267M ../rabbits/soil_output/rabbits_example/
If we tried to load the entire history, we would probably run out of

View File

@@ -500,7 +500,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.8.5"
},
"toc": {
"colors": {

View File

@@ -80800,7 +80800,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.8.6"
}
},
"nbformat": 4,

View File

@@ -1,28 +1,65 @@
---
name: simple
dir_path: "/tmp/"
num_trials: 3
dry_run: True
max_time: 100
interval: 1
seed: "CompleteSeed!"
dump: false
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 1
version: '2'
general:
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
topologies:
default:
params:
generator: complete_graph
n: 10
another_graph:
params:
generator: complete_graph
n: 2
environment:
environment_class: Environment
params:
am_i_complete: true
agents:
# Agents are split several groups, each with its own definition
default: # This is a special group. Its values will be used as default values for the rest of the groups
agent_class: CounterModel
topology: default
state:
id: 0
- agent_type: AggregatedCounter
weight: 0.2
environment_agents: []
environment_class: Environment
environment_params:
am_i_complete: true
default_state:
incidents: 0
states:
- name: 'The first node'
- name: 'The second node'
times: 1
environment:
# In this group we are not specifying any topology
fixed:
- name: 'Environment Agent 1'
agent_class: CounterModel
state:
times: 10
general_counters:
topology: default
distribution:
- agent_class: CounterModel
weight: 1
state:
times: 3
- agent_class: AggregatedCounter
weight: 0.2
override:
- filter:
agent_class: AggregatedCounter
n: 2
state:
times: 5
other_counters:
topology: another_graph
fixed:
- agent_class: CounterModel
id: 0
state:
times: 1
total: 0
- agent_class: CounterModel
id: 1
# If not specified, it will use the state set in the default
# state:

View File

@@ -0,0 +1,16 @@
---
name: custom-generator
description: Using a custom generator for the network
num_trials: 3
max_time: 100
interval: 1
network_params:
generator: mymodule.mygenerator
# These are custom parameters
n: 10
n_edges: 5
network_agents:
- agent_type: CounterModel
weight: 1
state:
state_id: 0

View File

@@ -0,0 +1,27 @@
from networkx import Graph
import networkx as nx
from random import choice
def mygenerator(n=5, n_edges=5):
'''
Just a simple generator that creates a network with n nodes and
n_edges edges. Edges are assigned randomly, only avoiding self loops.
'''
G = nx.Graph()
for i in range(n):
G.add_node(i)
for i in range(n_edges):
nodes = list(G.nodes)
n_in = choice(nodes)
nodes.remove(n_in) # Avoid loops
n_out = choice(nodes)
G.add_edge(n_in, n_out)
return G

View File

@@ -0,0 +1,35 @@
from soil.agents import FSM, state, default_state
class Fibonacci(FSM):
'''Agent that only executes in t_steps that are Fibonacci numbers'''
defaults = {
'prev': 1
}
@default_state
@state
def counting(self):
self.log('Stopping at {}'.format(self.now))
prev, self['prev'] = self['prev'], max([self.now, self['prev']])
return None, self.env.timeout(prev)
class Odds(FSM):
'''Agent that only executes in odd t_steps'''
@default_state
@state
def odds(self):
self.log('Stopping at {}'.format(self.now))
return None, self.env.timeout(1+self.now%2)
if __name__ == '__main__':
import logging
logging.basicConfig(level=logging.INFO)
from soil import Simulation
s = Simulation(network_agents=[{'ids': [0], 'agent_type': Fibonacci},
{'ids': [1], 'agent_type': Odds}],
network_params={"generator": "complete_graph", "n": 2},
max_time=100,
)
s.run(dry_run=True)

20
examples/mesa/mesa.yml Normal file
View File

@@ -0,0 +1,20 @@
---
name: mesa_sim
group: tests
dir_path: "/tmp"
num_trials: 3
max_time: 100
interval: 1
seed: '1'
network_params:
generator: social_wealth.graph_generator
n: 5
network_agents:
- agent_type: social_wealth.SocialMoneyAgent
weight: 1
environment_class: social_wealth.MoneyEnv
environment_params:
mesa_agent_type: social_wealth.MoneyAgent
N: 10
width: 50
height: 50

105
examples/mesa/server.py Normal file
View File

@@ -0,0 +1,105 @@
from mesa.visualization.ModularVisualization import ModularServer
from soil.visualization import UserSettableParameter
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
portrayal["nodes"] = [
{
"id": agent_id,
"size": env.get_agent(agent_id).wealth,
# "color": "#CC0000" if not agents or agents[0].wealth == 0 else "#007959",
"color": "#CC0000",
"label": f"{agent_id}: {env.get_agent(agent_id).wealth}",
}
for (agent_id) in env.G.nodes
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth*10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})"
}
grid = MyNetwork(network_portrayal, 500, 500, library="sigma")
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
model_params = {
"N": UserSettableParameter(
"slider",
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"network_agents": [{"agent_type": SocialMoneyAgent}],
"height": UserSettableParameter(
"slider",
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": UserSettableParameter(
"slider",
"width",
5,
5,
10,
1,
description="Grid width",
),
"network_params": {
'generator': graph_generator
},
}
canvas_element = CanvasGrid(gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
)
server.port = 8521
server.launch(open_browser=False)

View File

@@ -0,0 +1,119 @@
'''
This is an example that adds soil agents and environment in a normal
mesa workflow.
'''
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model):
super().__init__(unique_id=unique_id, model=model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.info("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(NetworkAgent, MoneyAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighboring_agents())
self.info("Trying to give money")
self.debug("Cellmates: ", cellmates)
self.debug("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(self, width, height, *args, topologies, **kwargs):
super().__init__(*args, topologies=topologies, **kwargs)
self.grid = MultiGrid(width, height, False)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
if __name__ == '__main__':
G = graph_generator()
fixed_params = {"topology": G,
"width": 10,
"network_agents": [{"agent_type": SocialMoneyAgent,
'weight': 1}],
"height": 10}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

83
examples/mesa/wealth.py Normal file
View File

@@ -0,0 +1,83 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum( xi * (N-i) for i,xi in enumerate(x) ) / (N*sum(x))
return (1 + (1/N) - 2*B)
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"})
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == '__main__':
fixed_params = {"width": 10,
"height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini})
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

View File

@@ -1,19 +1,18 @@
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
max_time: 300
name: Sim_all_dumb
network_agents:
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: true
weight: 1
@@ -24,28 +23,27 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
max_time: 300
name: Sim_half_herd
network_agents:
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: false
weight: 1
- agent_type: DumbViewer
- agent_type: newsspread.DumbViewer
state:
has_tv: true
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: false
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
weight: 1
@@ -56,24 +54,23 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
interval: 1
max_time: 30
max_time: 300
name: Sim_all_herd
network_agents:
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
network_params:
generator: barabasi_albert_graph
@@ -82,22 +79,21 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
max_time: 300
name: Sim_wise_herd
network_agents:
- agent_type: HerdViewer
- agent_type: newsspread.HerdViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
weight: 1
@@ -108,22 +104,21 @@ network_params:
num_trials: 50
---
default_state: {}
load_module: newsspread
environment_agents: []
environment_params:
prob_neighbor_spread: 0.0
prob_tv_spread: 0.01
prob_neighbor_cure: 0.1
interval: 1
max_time: 30
max_time: 300
name: Sim_all_wise
network_agents:
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
id: neutral
state_id: neutral
weight: 1
- agent_type: WiseViewer
- agent_type: newsspread.WiseViewer
state:
has_tv: true
weight: 1

View File

@@ -1,8 +1,8 @@
from soil.agents import FSM, state, default_state, prob
from soil.agents import FSM, NetworkAgent, state, default_state, prob
import logging
class DumbViewer(FSM):
class DumbViewer(FSM, NetworkAgent):
'''
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
@@ -17,7 +17,7 @@ class DumbViewer(FSM):
def neutral(self):
if self['has_tv']:
if prob(self.env['prob_tv_spread']):
self.set_state(self.infected)
return self.infected
@state
def infected(self):
@@ -26,6 +26,12 @@ class DumbViewer(FSM):
neighbor.infect()
def infect(self):
'''
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
'''
self.set_state(self.infected)
@@ -34,15 +40,14 @@ class HerdViewer(DumbViewer):
A viewer whose probability of infection depends on the state of its neighbors.
'''
level = logging.DEBUG
def infect(self):
'''Notice again that this is NOT a state. See DumbViewer.infect for reference'''
infected = self.count_neighboring_agents(state_id=self.infected.id)
total = self.count_neighboring_agents()
prob_infect = self.env['prob_neighbor_spread'] * infected/total
self.debug('prob_infect', prob_infect)
if prob(prob_infect):
self.set_state(self.infected.id)
self.set_state(self.infected)
class WiseViewer(HerdViewer):
@@ -77,5 +82,5 @@ class WiseViewer(HerdViewer):
1.0)
prob_cure = self.env['prob_neighbor_cure'] * (cured/infected)
if prob(prob_cure):
return self.cure()
return self.cured
return self.set_state(super().infected)

1
examples/programmatic/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
Programmatic*

View File

@@ -0,0 +1,40 @@
'''
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
return G
class MyAgent(agents.FSM):
@agents.default_state
@agents.state
def neutral(self):
self.debug('I am running')
if agents.prob(0.2):
self.info('This runs 2/10 times on average')
s = Simulation(name='Programmatic',
network_params={'generator': mygenerator},
num_trials=1,
max_time=100,
agent_type=MyAgent,
dry_run=True)
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = s.run()
# Uncomment this to output the simulation to a YAML file
# s.dump_yaml('simulation.yaml')

View File

@@ -1,4 +1,4 @@
from soil.agents import FSM, state, default_state
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment
from random import random, shuffle
from itertools import islice
@@ -53,13 +53,13 @@ class CityPubs(Environment):
pub['occupancy'] -= 1
class Patron(FSM):
class Patron(FSM, NetworkAgent):
'''Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home.
'''
level = logging.INFO
level = logging.DEBUG
defaults = {
'pub': None,
@@ -113,7 +113,8 @@ class Patron(FSM):
@state
def at_home(self):
'''The end'''
self.debug('Life sucks. I\'m home!')
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
self.debug('I\'m home. Just like {} of my friends'.format(len(others)))
def drink(self):
self['pints'] += 1
@@ -150,7 +151,7 @@ class Patron(FSM):
return befriended
class Police(FSM):
class Police(FSM, NetworkAgent):
'''Simple agent to take drunk people out of pubs.'''
level = logging.INFO

View File

@@ -1,7 +1,6 @@
from soil.agents import FSM, state, default_state, BaseAgent
from soil.agents import FSM, state, default_state, BaseAgent, NetworkAgent
from enum import Enum
from random import random, choice
from itertools import islice
import logging
import math
@@ -11,9 +10,7 @@ class Genders(Enum):
female = 'female'
class RabbitModel(FSM):
level = logging.INFO
class RabbitModel(FSM, NetworkAgent):
defaults = {
'age': 0,
@@ -22,7 +19,7 @@ class RabbitModel(FSM):
'offspring': 0,
}
sexual_maturity = 4*30
sexual_maturity = 3 #4*30
life_expectancy = 365 * 3
gestation = 33
pregnancy = -1
@@ -31,10 +28,23 @@ class RabbitModel(FSM):
@default_state
@state
def newborn(self):
self.debug(f'I am a newborn at age {self["age"]}')
self['age'] += 1
if self['age'] >= self.sexual_maturity:
self.debug('I am fertile!')
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.info('Agent {} is dying'.format(self.id))
self.die()
class Male(RabbitModel):
@state
def fertile(self):
@@ -46,21 +56,26 @@ class RabbitModel(FSM):
return
# Males try to mate
females = self.get_agents(state_id=self.fertile.id, gender=Genders.female.value, limit_neighbors=False)
for f in islice(females, self.max_females):
for f in self.get_agents(state_id=Female.fertile.id,
agent_type=Female,
limit_neighbors=False,
limit=self.max_females):
r = random()
if r < self['mating_prob']:
self.impregnate(f)
break # Take a break
def impregnate(self, whom):
if self['gender'] == Genders.female.value:
raise NotImplementedError('Females cannot impregnate')
whom['pregnancy'] = 0
whom['mate'] = self.id
whom.set_state(whom.pregnant)
self.debug('{} impregnating: {}. {}'.format(self.id, whom.id, whom.state))
class Female(RabbitModel):
@state
def fertile(self):
# Just wait for a Male
pass
@state
def pregnant(self):
self['age'] += 1
@@ -80,7 +95,7 @@ class RabbitModel(FSM):
self.env.add_edge(self['mate'], child.id)
# self.add_edge()
self.debug('A BABY IS COMING TO LIFE')
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.global_topology.number_of_nodes())+1
self.env['rabbits_alive'] = self.env.get('rabbits_alive', self.topology.number_of_nodes())+1
self.debug('Rabbits alive: {}'.format(self.env['rabbits_alive']))
self['offspring'] += 1
self.env.get_agent(self['mate'])['offspring'] += 1
@@ -90,11 +105,9 @@ class RabbitModel(FSM):
@state
def dead(self):
self.info('Agent {} is dying'.format(self.id))
super().dead()
if 'pregnancy' in self and self['pregnancy'] > -1:
self.info('A mother has died carrying a baby!!')
self.die()
return
class RandomAccident(BaseAgent):
@@ -102,7 +115,9 @@ class RandomAccident(BaseAgent):
level = logging.DEBUG
def step(self):
rabbits_total = self.global_topology.number_of_nodes()
rabbits_total = self.env.topology.number_of_nodes()
if 'rabbits_alive' not in self.env:
self.env['rabbits_alive'] = 0
rabbits_alive = self.env.get('rabbits_alive', rabbits_total)
prob_death = self.env.get('prob_death', 1e-100)*math.floor(math.log10(max(1, rabbits_alive)))
self.debug('Killing some rabbits with prob={}!'.format(prob_death))
@@ -116,5 +131,5 @@ class RandomAccident(BaseAgent):
self.log('Rabbits alive: {}'.format(self.env['rabbits_alive']))
i.set_state(i.dead)
self.log('Rabbits alive: {}/{}'.format(rabbits_alive, rabbits_total))
if self.count_agents(state_id=RabbitModel.dead.id) == self.global_topology.number_of_nodes():
if self.env.count_agents(state_id=RabbitModel.dead.id) == self.env.topology.number_of_nodes():
self.die()

View File

@@ -1,23 +1,20 @@
---
load_module: rabbit_agents
name: rabbits_example
max_time: 500
max_time: 100
interval: 1
seed: MySeed
agent_type: RabbitModel
agent_type: rabbit_agents.RabbitModel
environment_agents:
- agent_type: RandomAccident
- agent_type: rabbit_agents.RandomAccident
environment_params:
prob_death: 0.001
default_state:
mating_prob: 0.01
mating_prob: 0.1
topology:
nodes:
- id: 1
state:
gender: female
agent_type: rabbit_agents.Male
- id: 0
state:
gender: male
agent_type: rabbit_agents.Female
directed: true
links: []

View File

@@ -0,0 +1,45 @@
'''
Example of setting a
Example of a fully programmatic simulation, without definition files.
'''
from soil import Simulation, agents
from soil.time import Delta
from random import expovariate
import logging
class MyAgent(agents.FSM):
'''
An agent that first does a ping
'''
defaults = {'pong_counts': 2}
@agents.default_state
@agents.state
def ping(self):
self.info('Ping')
return self.pong, Delta(expovariate(1/16))
@agents.state
def pong(self):
self.info('Pong')
self.pong_counts -= 1
self.info(str(self.pong_counts))
if self.pong_counts < 1:
return self.die()
return None, Delta(expovariate(1/16))
s = Simulation(name='Programmatic',
network_agents=[{'agent_type': MyAgent, 'id': 0}],
topology={'nodes': [{'id': 0}], 'links': []},
num_trials=1,
max_time=100,
agent_type=MyAgent,
dry_run=True)
logging.basicConfig(level=logging.INFO)
envs = s.run()

30
examples/template.yml Normal file
View File

@@ -0,0 +1,30 @@
---
sampler:
method: "SALib.sample.morris.sample"
N: 10
template:
group: simple
num_trials: 1
interval: 1
max_time: 2
seed: "CompleteSeed!"
dump: false
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: "{{ x1 }}"
state:
state_id: 0
- agent_type: AggregatedCounter
weight: "{{ 1 - x1 }}"
environment_params:
name: "{{ x3 }}"
skip_test: true
vars:
bounds:
x1: [0, 1]
x2: [1, 2]
fixed:
x3: ["a", "b", "c"]

View File

@@ -0,0 +1,208 @@
import random
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, state, default_state
from soil import Environment
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
information_spread_intensity
terrorist_additional_influence
min_vulnerability (optional else zero)
max_vulnerability
prob_interaction
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.information_spread_intensity = model.environment_params['information_spread_intensity']
self.terrorist_additional_influence = model.environment_params['terrorist_additional_influence']
self.prob_interaction = model.environment_params['prob_interaction']
if self['id'] == self.civilian.id: # Civilian
self.mean_belief = random.uniform(0.00, 0.5)
elif self['id'] == self.terrorist.id: # Terrorist
self.mean_belief = random.uniform(0.8, 1.00)
elif self['id'] == self.leader.id: # Leader
self.mean_belief = 1.00
else:
raise Exception('Invalid state id: {}'.format(self['id']))
if 'min_vulnerability' in model.environment_params:
self.vulnerability = random.uniform( model.environment_params['min_vulnerability'], model.environment_params['max_vulnerability'] )
else :
self.vulnerability = random.uniform( 0, model.environment_params['max_vulnerability'] )
@state
def civilian(self):
neighbours = list(self.get_neighboring_agents(agent_type=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(n for n in neighbours if random.random() <= self.prob_interaction)
influence = sum( self.degree(i) for i in interactions )
mean_belief = sum( i.mean_belief * self.degree(i) / influence for i in interactions )
mean_belief = mean_belief * self.information_spread_intensity + self.mean_belief * ( 1 - self.information_spread_intensity )
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
if self.mean_belief >= 0.8:
return self.terrorist
@state
def leader(self):
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]):
if self.betweenness(neighbour) > self.betweenness(self):
return self.terrorist
@state
def terrorist(self):
neighbours = self.get_agents(state_id=[self.terrorist.id, self.leader.id],
agent_type=TerroristSpreadModel,
limit_neighbors=True)
if len(neighbours) > 0:
influence = sum( self.degree(n) for n in neighbours )
mean_belief = sum( n.mean_belief * self.degree(n) / influence for n in neighbours )
mean_belief = mean_belief * self.vulnerability + self.mean_belief * ( 1 - self.vulnerability )
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state.id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
for neighbour in neighbours:
if self.betweenness(self) < self.betweenness(neighbour):
return
return self.leader
class TrainingAreaModel(FSM, Geo):
"""
Settings:
training_influence
min_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.training_influence = model.environment_params['training_influence']
if 'min_vulnerability' in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
@default_state
@state
def terrorist(self):
for neighbour in self.get_neighboring_agents(agent_type=TerroristSpreadModel):
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
class HavenModel(FSM, Geo):
"""
Settings:
haven_influence
min_vulnerability
max_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.haven_influence = model.environment_params['haven_influence']
if 'min_vulnerability' in model.environment_params:
self.min_vulnerability = model.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
self.max_vulnerability = model.environment_params['max_vulnerability']
def get_occupants(self, **kwargs):
return self.get_neighboring_agents(agent_type=TerroristSpreadModel, **kwargs)
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
if not civilians:
return self.terrorist
for neighbour in self.get_occupants():
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence )
return self.civilian
@state
def terrorist(self):
for neighbour in self.get_occupants():
if neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence )
return self.terrorist
class TerroristNetworkModel(TerroristSpreadModel):
"""
Settings:
sphere_influence
vision_range
weight_social_distance
weight_link_distance
"""
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.vision_range = model.environment_params['vision_range']
self.sphere_influence = model.environment_params['sphere_influence']
self.weight_social_distance = model.environment_params['weight_social_distance']
self.weight_link_distance = model.environment_params['weight_link_distance']
@state
def terrorist(self):
self.update_relationships()
return super().terrorist()
@state
def leader(self):
self.update_relationships()
return super().leader()
def update_relationships(self):
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
close_ups = set(self.geo_search(radius=self.vision_range, agent_type=TerroristNetworkModel))
step_neighbours = set(self.ego_search(self.sphere_influence, agent_type=TerroristNetworkModel, center=False))
neighbours = set(agent.id for agent in self.get_neighboring_agents(agent_type=TerroristNetworkModel))
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.id)
spatial_proximity = ( 1 - self.get_distance(agent.id) )
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction:
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.topology, 'pos')[self.id]
target_x, target_y = nx.get_node_attributes(self.topology, 'pos')[target]
dx = abs( source_x - target_x )
dy = abs( source_y - target_y )
return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 )
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.topology, self.id, target)
except nx.NetworkXNoPath:
return float('inf')
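For reference, a hedged sketch of the environment_params these helper classes read on construction. The keys are taken from the __init__ methods above; the values are placeholders, not the ones shipped with the example configuration (TerroristSpreadModel reads its own settings separately):
environment_params = {
    # TrainingAreaModel
    'training_influence': 0.1,
    # HavenModel
    'haven_influence': 0.1,
    'max_vulnerability': 0.7,
    # optional, shared by TrainingAreaModel and HavenModel (defaults to 0)
    'min_vulnerability': 0.5,
    # TerroristNetworkModel
    'vision_range': 0.3,
    'sphere_influence': 2,
    'weight_social_distance': 0.2,
    'weight_link_distance': 0.2,
}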


@@ -1,5 +1,4 @@
name: TerroristNetworkModel_sim
load_module: TerroristNetworkModel
max_time: 150
num_trials: 1
network_params:
@@ -9,19 +8,19 @@ network_params:
# theta: 20
n: 100
network_agents:
- agent_type: TerroristNetworkModel
- agent_type: TerroristNetworkModel.TerroristNetworkModel
weight: 0.8
state:
id: civilian # Civilians
- agent_type: TerroristNetworkModel
- agent_type: TerroristNetworkModel.TerroristNetworkModel
weight: 0.1
state:
id: leader # Leaders
- agent_type: TrainingAreaModel
- agent_type: TerroristNetworkModel.TrainingAreaModel
weight: 0.05
state:
id: terrorist # Terrorism
- agent_type: HavenModel
- agent_type: TerroristNetworkModel.HavenModel
weight: 0.05
state:
id: civilian # Civilian
@@ -60,3 +59,4 @@ visualization_params:
background_image: 'map_4800x2860.jpg'
background_opacity: '0.9'
background_filter_color: 'blue'
skip_test: true # This simulation takes too long for automated tests.



@@ -1,7 +1,9 @@
nxsim
simpy
networkx>=2.0
networkx>=2.5
numpy
matplotlib
pyyaml
pandas
pyyaml>=5.1
pandas>=0.23
SALib>=1.3
Jinja2
Mesa>=0.8.9
pydantic>=1.9


@@ -16,6 +16,12 @@ def parse_requirements(filename):
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require={
'mesa': ['mesa>=0.8.9'],
'geo': ['scipy>=1.3'],
'web': ['tornado']
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]
setup(
@@ -40,12 +46,10 @@ setup(
'Operating System :: POSIX',
'Programming Language :: Python :: 3'],
install_requires=install_reqs,
extras_require={
'web': ['tornado']
},
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
entry_points={
'console_scripts':


@@ -1 +1 @@
0.13.1
0.20.7


@@ -11,59 +11,83 @@ try:
except NameError:
basestring = str
logging.basicConfig()
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment
from . import utils
from . import serialization
from . import analysis
from .utils import logger
from .time import *
def main():
import argparse
from . import simulation
logger.info('Running SOIL version: {}'.format(__version__))
parser = argparse.ArgumentParser(description='Run a SOIL simulation')
parser.add_argument('file', type=str,
nargs="?",
default='simulation.yml',
help='python module containing the simulation configuration.')
help='Configuration file for the simulation (e.g., YAML or JSON)')
parser.add_argument('--version', action='store_true',
help='Show version info and exit')
parser.add_argument('--module', '-m', type=str,
help='file containing the code of any custom agents.')
parser.add_argument('--dry-run', '--dry', action='store_true',
help='Do not store the results of the simulation.')
help='Do not store the results of the simulation to disk, show in terminal instead.')
parser.add_argument('--pdb', action='store_true',
help='Use a pdb console in case of exception.')
parser.add_argument('--graph', '-g', action='store_true',
help='Dump GEXF graph. Defaults to false.')
help='Dump each trial\'s network topology as a GEXF graph. Defaults to false.')
parser.add_argument('--csv', action='store_true',
help='Dump history in CSV format. Defaults to false.')
help='Dump all data collected in CSV format. Defaults to false.')
parser.add_argument('--level', type=str,
help='Logging level')
parser.add_argument('--output', '-o', type=str, default="soil_output",
help='folder to write results to. It defaults to the current directory.')
parser.add_argument('--synchronous', action='store_true',
help='Run trials serially and synchronously instead of in parallel. Defaults to false.')
parser.add_argument('-e', '--exporter', action='append',
help='Export environment and/or simulations using this exporter')
args = parser.parse_args()
logging.basicConfig(level=getattr(logging, (args.level or 'INFO').upper()))
if args.version:
return
if os.getcwd() not in sys.path:
sys.path.append(os.getcwd())
if args.module:
importlib.import_module(args.module)
logging.info('Loading config file: {}'.format(args.file))
logger.info('Loading config file: {}'.format(args.file))
if args.pdb:
args.synchronous = True
try:
dump = []
if not args.dry_run:
if args.csv:
dump.append('csv')
if args.graph:
dump.append('gexf')
exporters = list(args.exporter or ['default', ])
if args.csv:
exporters.append('csv')
if args.graph:
exporters.append('gexf')
exp_params = {}
if args.dry_run:
exp_params['copy_to'] = sys.stdout
if not os.path.exists(args.file):
logger.error('Please provide a valid configuration file')
return
simulation.run_from_config(args.file,
dry_run=args.dry_run,
dump=dump,
parallel=(not args.synchronous and not args.pdb),
results_dir=args.output)
exporters=exporters,
parallel=(not args.synchronous),
outdir=args.output,
exporter_params=exp_params)
except Exception:
if args.pdb:
pdb.post_mortem()


@@ -1,40 +1,31 @@
import random
from . import BaseAgent
from . import FSM, state, default_state
class BassModel(BaseAgent):
class BassModel(FSM):
"""
Settings:
innovation_prob
imitation_prob
"""
def __init__(self, environment, agent_id, state):
super().__init__(environment=environment, agent_id=agent_id, state=state)
env_params = environment.environment_params
self.state['sentimentCorrelation'] = 0
sentimentCorrelation = 0
def step(self):
self.behaviour()
def behaviour(self):
# Outside effects
if random.random() < self.state_params['innovation_prob']:
if self.state['id'] == 0:
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
else:
pass
return
# Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
@default_state
@state
def innovation(self):
if random.random() < self.innovation_prob:
self.sentimentCorrelation = 1
return self.aware
else:
aware_neighbors = self.get_neighboring_agents(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if random.random() < (self.state_params['imitation_prob']*num_neighbors_aware):
self.state['id'] = 1
self.state['sentimentCorrelation'] = 1
if random.random() < (self['imitation_prob']*num_neighbors_aware):
self.sentimentCorrelation = 1
return self.aware
else:
pass
@state
def aware(self):
self.die()


@@ -1,8 +1,8 @@
import random
from . import BaseAgent
from . import FSM, state, default_state
class BigMarketModel(BaseAgent):
class BigMarketModel(FSM):
"""
Settings:
Names:
@@ -19,34 +19,25 @@ class BigMarketModel(BaseAgent):
sentiment_about [Array]
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.enterprises = environment.environment_params['enterprises']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.enterprises = self.env.environment_params['enterprises']
self.type = ""
self.number_of_enterprises = len(environment.environment_params['enterprises'])
if self.id < self.number_of_enterprises: # Enterprises
self.state['id'] = self.id
if self.id < len(self.enterprises): # Enterprises
self.set_state(self.enterprise.id)
self.type = "Enterprise"
self.tweet_probability = environment.environment_params['tweet_probability_enterprises'][self.id]
else: # normal users
self.state['id'] = self.number_of_enterprises
self.type = "User"
self.set_state(self.user.id)
self.tweet_probability = environment.environment_params['tweet_probability_users']
self.tweet_relevant_probability = environment.environment_params['tweet_relevant_probability']
self.tweet_probability_about = environment.environment_params['tweet_probability_about'] # List
self.sentiment_about = environment.environment_params['sentiment_about'] # List
def step(self):
if self.id < self.number_of_enterprises: # Enterprise
self.enterpriseBehaviour()
else: # User
self.userBehaviour()
for i in range(self.number_of_enterprises): # So that it is never set to 0 when there are no changes (for the logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def enterpriseBehaviour(self):
@state
def enterprise(self):
if random.random() < self.tweet_probability: # Tweets
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Neighbouring user nodes
@@ -64,12 +55,12 @@ class BigMarketModel(BaseAgent):
x.attrs['sentiment_enterprise_%s'% self.enterprises[self.id]] = x.sentiment_about[self.id]
def userBehaviour(self):
@state
def user(self):
if random.random() < self.tweet_probability: # Tweets
if random.random() < self.tweet_relevant_probability: # Tweets something relevant
# Tweet probability per enterprise
for i in range(self.number_of_enterprises):
for i in range(len(self.enterprises)):
random_num = random.random()
if random_num < self.tweet_probability_about[i]:
# The condition is fulfilled, sentiments are evaluated towards that enterprise
@@ -82,8 +73,10 @@ class BigMarketModel(BaseAgent):
else:
# POSITIVE
self.userTweets("positive",i)
for i in range(len(self.enterprises)): # So that it is never set to 0 when there are no changes (for the logs)
self.attrs['sentiment_enterprise_%s'% self.enterprises[i]] = self.sentiment_about[i]
def userTweets(self,sentiment,enterprise):
def userTweets(self, sentiment, enterprise):
aware_neighbors = self.get_neighboring_agents(state_id=self.number_of_enterprises) # Neighbouring user nodes
for x in aware_neighbors:
if sentiment == "positive":


@@ -1,32 +1,44 @@
from . import BaseAgent
from . import NetworkAgent
class CounterModel(BaseAgent):
class CounterModel(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
defaults = {
'times': 0,
'neighbors': 0,
'total': 0
}
def step(self):
# Outside effects
total = len(list(self.get_all_agents()))
total = len(list(self.env.agents))
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = neighbors
self['total'] = total
class AggregatedCounter(BaseAgent):
class AggregatedCounter(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
defaults = {
'times': 0,
'neighbors': 0,
'total': 0
}
def step(self):
# Outside effects
total = len(list(self.get_all_agents()))
self['times'] += 1
neighbors = len(list(self.get_neighboring_agents()))
self['times'] = self.get('times', 0) + 1
self['neighbors'] = self.get('neighbors', 0) + neighbors
self['total'] = total = self.get('total', 0) + total
self['neighbors'] += neighbors
total = len(list(self.env.agents))
self['total'] += total
self.debug('Running for step: {}. Total: {}'.format(self.now, total))


@@ -1,18 +0,0 @@
from . import BaseAgent
import os.path
import matplotlib
import matplotlib.pyplot as plt
import networkx as nx
class DrawingAgent(BaseAgent):
"""
Agent that draws the state of the network.
"""
def step(self):
# Outside effects
f = plt.figure()
nx.draw(self.env.G, node_size=10, width=0.2, pos=nx.spring_layout(self.env.G, scale=100), ax=f.add_subplot(111))
f.savefig(os.path.join(self.env.get_path(), "graph-"+str(self.env.now)+".png"))

soil/agents/Geo.py Normal file

@@ -0,0 +1,21 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent, as_node
class Geo(NetworkAgent):
'''In this type of network, nodes have a "pos" attribute.'''
def geo_search(self, radius, node=None, center=False, **kwargs):
'''Get a list of nodes whose coordinates are closer than *radius* to *node*.'''
node = as_node(node if node is not None else self)
G = self.subgraph(**kwargs)
pos = nx.get_node_attributes(G, 'pos')
if not pos:
return []
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
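A hedged usage sketch (the agent class and radius are made up; Geo requires scipy and a topology whose nodes carry a 'pos' attribute). geo_search returns node ids, as used in TerroristNetworkModel.update_relationships above:
from soil.agents import FSM, Geo, state, default_state

class ProximityAgent(FSM, Geo):
    @default_state
    @state
    def wander(self):
        # Node ids whose 'pos' lies within a radius of 0.2 of this agent's node
        nearby = self.geo_search(radius=0.2)
        self.debug('{} nodes nearby'.format(len(nearby)))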


@@ -10,10 +10,10 @@ class IndependentCascadeModel(BaseAgent):
imitation_prob
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = environment.environment_params['innovation_prob']
self.imitation_prob = environment.environment_params['imitation_prob']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.innovation_prob = self.env.environment_params['innovation_prob']
self.imitation_prob = self.env.environment_params['imitation_prob']
self.state['time_awareness'] = 0
self.state['sentimentCorrelation'] = 0


@@ -21,8 +21,8 @@ class SpreadModelM2(BaseAgent):
prob_generate_anti_rumor
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])
@@ -123,8 +123,8 @@ class ControlModelM2(BaseAgent):
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, model=None, unique_id=0, state=()):
super().__init__(model=model, unique_id=unique_id, state=state)
self.prob_neutral_making_denier = np.random.normal(environment.environment_params['prob_neutral_making_denier'],
environment.environment_params['standard_variance'])


@@ -29,8 +29,8 @@ class SISaModel(FSM):
standard_variance
"""
def __init__(self, environment, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.neutral_discontent_spon_prob = np.random.normal(self.env['neutral_discontent_spon_prob'],
self.env['standard_variance'])


@@ -16,8 +16,8 @@ class SentimentCorrelationModel(BaseAgent):
disgust_prob
"""
def __init__(self, environment, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
def __init__(self, environment, unique_id=0, state=()):
super().__init__(model=environment, unique_id=unique_id, state=state)
self.outside_effects_prob = environment.environment_params['outside_effects_prob']
self.anger_prob = environment.environment_params['anger_prob']
self.joy_prob = environment.environment_params['joy_prob']


@@ -1,67 +1,104 @@
# networkStatus = {} # Dict that will contain the status of every agent in the network
# sentimentCorrelationNodeArray = []
# for x in range(0, settings.network_params["number_of_nodes"]):
# sentimentCorrelationNodeArray.append({'id': x})
# Initialize agent states. Let's assume everyone is normal.
import nxsim
import logging
from collections import OrderedDict
from collections import OrderedDict, defaultdict
from collections.abc import MutableMapping, Mapping, Set
from abc import ABCMeta
from copy import deepcopy
from functools import partial
from functools import partial, wraps
from itertools import islice, chain
import json
import networkx as nx
from functools import wraps
from mesa import Agent as MesaAgent
from typing import Dict, List
from .. import utils, history
from random import shuffle
from .. import serialization, utils, time, config
class BaseAgent(nxsim.BaseAgent):
def as_node(agent):
if isinstance(agent, BaseAgent):
return agent.id
return agent
IGNORED_FIELDS = ('model', 'logger')
class DeadAgent(Exception):
pass
class BaseAgent(MesaAgent, MutableMapping):
"""
A special simpy BaseAgent that keeps track of its state history.
A special type of Mesa Agent that:
* Can be used as a dictionary to access its state.
* Has logging built-in
* Can be given default arguments through a defaults class attribute,
which will be used on construction to initialize each agent's state
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
"""
defaults = {}
def __init__(self, environment, agent_id=None, state=None,
name='network_process', interval=None, **state_params):
def __init__(self,
unique_id,
model,
name=None,
interval=None,
**kwargs
):
# Check for REQUIRED arguments
assert environment is not None, TypeError('__init__ missing 1 required keyword argument: \'environment\'. '
'Cannot be NoneType.')
# Initialize agent parameters
self.id = agent_id
self.name = name
self.state_params = state_params
if isinstance(unique_id, MesaAgent):
raise Exception()
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
# Global parameters
self.global_topology = environment.G
self.environment_params = environment.environment_params
self.name = str(name) if name else '{}[{}]'.format(type(self).__name__, self.unique_id)
# Register agent to environment
self.env = environment
self._neighbors = None
self.alive = True
real_state = deepcopy(self.defaults)
real_state.update(state or {})
self.state = real_state
self.interval = interval
if not hasattr(self, 'level'):
self.level = logging.DEBUG
self.logger = logging.getLogger('{}-Agent-{}'.format(self.env.name,
self.id))
self.logger.setLevel(self.level)
self.interval = interval or self.get('interval', 1)
self.logger = logging.getLogger(self.model.id).getChild(self.name)
# initialize every time an instance of the agent is created
self.action = self.env.process(self.run())
if hasattr(self, 'level'):
self.logger.setLevel(self.level)
for (k, v) in self.defaults.items():
if not hasattr(self, k) or getattr(self, k) is None:
setattr(self, k, deepcopy(v))
for (k, v) in kwargs.items():
setattr(self, k, v)
for (k, v) in getattr(self, 'defaults', {}).items():
if not hasattr(self, k) or getattr(self, k) is None:
setattr(self, k, v)
def __hash__(self):
return hash(self.unique_id)
# TODO: refactor to clean up mesa compatibility
@property
def id(self):
return self.unique_id
@property
def env(self):
return self.model
@env.setter
def env(self, model):
self.model = model
@property
def state(self):
'''
Return the agent itself, which behaves as a dictionary.
Changes made to `agent.state` will be reflected in the history.
This method shouldn't be used, but is kept here for backwards compatibility.
'''
@@ -69,32 +106,40 @@ class BaseAgent(nxsim.BaseAgent):
@state.setter
def state(self, value):
self._state = {}
for k, v in value.items():
self[k] = v
@property
def environment_params(self):
return self.model.environment_params
@environment_params.setter
def environment_params(self, value):
self.model.environment_params = value
def __getitem__(self, key):
if isinstance(key, tuple):
key, t_step = key
k = history.Key(key=key, t_step=t_step, agent_id=self.id)
return self.env[k]
return self._state.get(key, None)
return getattr(self, key)
def __delitem__(self, key):
self._state[key] = None
return delattr(self, key)
def __contains__(self, key):
return key in self._state
return hasattr(self, key)
def __setitem__(self, key, value):
self._state[key] = value
k = history.Key(t_step=self.now,
agent_id=self.id,
key=key)
self.env[k] = value
setattr(self, key, value)
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return self.items()
def keys(self):
return (k for k in self.__dict__ if k[0] != '_')
def items(self):
return self._state.items()
return ((k, v) for (k, v) in self.__dict__.items() if k[0] != '_')
def get(self, key, default=None):
return self[key] if key in self else default
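A quick illustration of this mapping protocol (the agent class and attribute names are made up, and `env` is assumed to be an already-constructed environment/model):
class NoisyAgent(BaseAgent):
    defaults = {'noise': 0.1}

a = NoisyAgent(unique_id=0, model=env)
a['noise']        # 0.1, same as a.noise (defaults are copied in on construction)
a['volume'] = 5   # equivalent to setattr(a, 'volume', 5)
'volume' in a     # True
list(a.keys())    # every attribute not starting with '_': 'noise', 'volume', 'name', ...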
@@ -102,112 +147,153 @@ class BaseAgent(nxsim.BaseAgent):
@property
def now(self):
try:
return self.env.now
return self.model.now
except AttributeError:
# No environment
return None
def run(self):
if self.interval is not None:
interval = self.interval
elif 'interval' in self:
interval = self['interval']
else:
interval = self.env.interval
while self.alive:
res = self.step()
yield res or self.env.timeout(interval)
def die(self, remove=False):
self.info(f'agent {self.unique_id} is dying')
self.alive = False
if remove:
super().die()
self.remove_node(self.id)
return time.NEVER
def step(self):
pass
def to_json(self):
return json.dumps(self.state)
def count_agents(self, state_id=None, limit_neighbors=False):
if limit_neighbors:
agents = self.global_topology.neighbors(self.id)
else:
agents = self.global_topology.nodes()
count = 0
for agent in agents:
if state_id and state_id != self.global_topology.node[agent]['agent']['id']:
continue
count += 1
return count
def count_neighboring_agents(self, state_id=None):
return len(super().get_agents(state_id, limit_neighbors=True))
def get_agents(self, state_id=None, agent_type=None, limit_neighbors=False, iterator=False, **kwargs):
agents = self.env.agents
if limit_neighbors:
agents = super().get_agents(state_id, limit_neighbors)
def matches_all(agent):
if state_id is not None:
if agent.state.get('id', None) != state_id:
return False
if agent_type is not None:
if type(agent) != agent_type:
return False
state = agent.state
for k, v in kwargs.items():
if state.get(k, None) != v:
return False
return True
f = filter(matches_all, agents)
if iterator:
return f
return list(f)
if not self.alive:
raise DeadAgent(self.unique_id)
return super().step() or time.Delta(self.interval)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = message + " ".join(str(i) for i in args)
message = "\t@{:>5}:\t{}".format(self.now, message)
message = " @{:>3}: {}".format(self.now, message)
for k, v in kwargs.items():
message += " {}={} ".format(k, v)
extra = {}
extra['now'] = self.now
extra['id'] = self.id
extra['unique_id'] = self.unique_id
extra['agent_name'] = self.name
return self.logger.log(level, message, extra=extra)
def debug(self, *args, **kwargs):
return self.log(*args, level=logging.DEBUG, **kwargs)
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
# Alias
# Agent = BaseAgent
def state(func):
'''
A state function should return either a state id, or a tuple (state_id, when)
The default value for state_id is the current state id.
The default value for when is the interval defined in the environment.
'''
class NetworkAgent(BaseAgent):
@wraps(func)
def func_wrapper(self):
next_state = func(self)
when = None
if next_state is None:
@property
def topology(self):
return self.env.topology_for(self.unique_id)
@property
def node_id(self):
return self.env.node_id_for(self.unique_id)
@property
def G(self):
return self.model.topologies[self._topology]
def count_agents(self, **kwargs):
return len(list(self.get_agents(**kwargs)))
def count_neighboring_agents(self, state_id=None, **kwargs):
return len(self.get_neighboring_agents(state_id=state_id, **kwargs))
def get_neighboring_agents(self, state_id=None, **kwargs):
return self.get_agents(limit_neighbors=True, state_id=state_id, **kwargs)
def get_agents(self, *args, limit=None, **kwargs):
it = self.iter_agents(*args, **kwargs)
if limit is not None:
it = islice(it, limit)
return list(it)
def iter_agents(self, unique_id=None, limit_neighbors=False, **kwargs):
if limit_neighbors:
unique_id = [self.topology.nodes[node]['agent_id'] for node in self.topology.neighbors(self.node_id)]
if not unique_id:
return
yield from self.model.agents(unique_id=unique_id, **kwargs)
def subgraph(self, center=True, **kwargs):
include = [self] if center else []
G = self.topology.subgraph(n.node_id for n in list(self.get_agents(**kwargs)+include))
return G
def remove_node(self, unique_id):
self.topology.remove_node(unique_id)
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
# return super(NetworkAgent, self).add_edge(node1=self.id, node2=other, **kwargs)
if self.unique_id not in self.topology.nodes(data=False):
raise ValueError('{} not in list of existing agents in the network'.format(self.unique_id))
if other.unique_id not in self.topology.nodes(data=False):
raise ValueError('{} not in list of existing agents in the network'.format(other))
self.topology.add_edge(self.unique_id, other.unique_id, edge_attr_dict=edge_attr_dict, *edge_attrs)
def ego_search(self, steps=1, center=False, node=None, **kwargs):
'''Get a list of nodes in the ego network of *node* of radius *steps*'''
node = as_node(node if node is not None else self)
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, node, force=False):
node = as_node(node)
if force or (not hasattr(self.model, '_degree')) or getattr(self.model, '_last_step', 0) < self.now:
self.model._degree = nx.degree_centrality(self.topology)
self.model._last_step = self.now
return self.model._degree[node]
def betweenness(self, node, force=False):
node = as_node(node)
if force or (not hasattr(self.model, '_betweenness')) or getattr(self.model, '_last_step', 0) < self.now:
self.model._betweenness = nx.betweenness_centrality(self.topology)
self.model._last_step = self.now
return self.model._betweenness[node]
def state(name=None):
def decorator(func, name=None):
'''
A state function should return either a state id, or a tuple (state_id, when)
The default value for state_id is the current state id.
The default value for when is the interval defined in the environment.
'''
@wraps(func)
def func_wrapper(self):
next_state = func(self)
when = None
if next_state is None:
return when
try:
next_state, when = next_state
except (ValueError, TypeError):
pass
if next_state:
self.set_state(next_state)
return when
try:
next_state, when = next_state
except (ValueError, TypeError):
pass
if next_state:
self.set_state(next_state)
return when
func_wrapper.id = func.__name__
func_wrapper.is_default = False
return func_wrapper
func_wrapper.id = name or func.__name__
func_wrapper.is_default = False
return func_wrapper
if callable(name):
return decorator(name)
else:
return partial(decorator, name=name)
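A hedged sketch of the two return forms a state function can take (the class, state names and times are illustrative):
class Light(FSM):
    @default_state
    @state
    def green(self):
        return self.yellow       # change state; wait the default interval

    @state
    def yellow(self):
        return self.red, 2       # change state and wait 2 time units

    @state
    def red(self):
        if self.now > 100:
            self.die()           # the agent stops being scheduled
            return
        return self.green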
def default_state(func):
@@ -215,7 +301,7 @@ def default_state(func):
return func
class MetaFSM(type):
class MetaFSM(ABCMeta):
def __init__(cls, name, bases, nmspc):
super(MetaFSM, cls).__init__(name, bases, nmspc)
states = {}
@@ -241,28 +327,32 @@ class MetaFSM(type):
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, *args, **kwargs):
super(FSM, self).__init__(*args, **kwargs)
if 'id' not in self.state:
if not hasattr(self, 'state_id'):
if not self.default_state:
raise ValueError('No default state specified for {}'.format(self.id))
self['id'] = self.default_state.id
raise ValueError('No default state specified for {}'.format(self.unique_id))
self.state_id = self.default_state.id
self.set_state(self.state_id)
def step(self):
if 'id' in self.state:
next_state = self['id']
elif self.default_state:
next_state = self.default_state.id
else:
raise Exception('{} has no valid state id or default state'.format(self))
if next_state not in self.states:
raise Exception('{} is not a valid id for {}'.format(next_state, self))
self.states[next_state](self)
self.debug(f'Agent {self.unique_id} @ state {self.state_id}')
interval = super().step()
if 'id' not in self.state:
if self.default_state:
self.set_state(self.default_state.id)
else:
raise Exception('{} has no valid state id or default state'.format(self))
interval = self.states[self.state_id](self) or interval
if not self.alive:
return time.NEVER
return interval
def set_state(self, state):
if hasattr(state, 'id'):
state = state.id
if state not in self.states:
raise ValueError('{} is not a valid state'.format(state))
self['id'] = state
self.state_id = state
return state
@@ -282,7 +372,7 @@ def prob(prob=1):
def calculate_distribution(network_agents=None,
agent_type=None):
agent_class=None):
'''
Calculate the threshold values (thresholds for a uniform distribution)
of an agent distribution given the weights of each agent type.
@@ -290,13 +380,13 @@ def calculate_distribution(network_agents=None,
The input has this form: ::
[
{'agent_type': 'agent_type_1',
{'agent_class': 'agent_class_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
{'agent_class': 'agent_class_2',
'weight': 0.8,
'state': {
'id': 1
@@ -305,58 +395,64 @@ def calculate_distribution(network_agents=None,
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
'agent_class_1'.
'''
if network_agents:
network_agents = deepcopy(network_agents)
elif agent_type:
network_agents = [{'agent_type': agent_type}]
network_agents = [deepcopy(agent) for agent in network_agents if not hasattr(agent, 'id')]
elif agent_class:
network_agents = [{'agent_class': agent_class}]
else:
return []
raise ValueError('Specify a distribution or a default agent type')
# Fix missing weights and incompatible types
for x in network_agents:
x['weight'] = float(x.get('weight', 1))
# Calculate the thresholds
total = sum(x.get('weight', 1) for x in network_agents)
total = sum(x['weight'] for x in network_agents)
acc = 0
for v in network_agents:
upper = acc + (v.get('weight', 1)/total)
if 'ids' in v:
continue
upper = acc + (v['weight']/total)
v['threshold'] = [acc, upper]
acc = upper
return network_agents
def serialize_type(agent_type, known_modules=[], **kwargs):
if isinstance(agent_type, str):
return agent_type
def serialize_type(agent_class, known_modules=[], **kwargs):
if isinstance(agent_class, str):
return agent_class
known_modules += ['soil.agents']
return utils.serialize(agent_type, known_modules=known_modules, **kwargs)[1] # Get the name of the class
return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[1] # Get the name of the class
def serialize_distribution(network_agents, known_modules=[]):
def serialize_definition(network_agents, known_modules=[]):
'''
When serializing an agent distribution, remove the thresholds, in order
to avoid cluttering the YAML definition file.
'''
d = deepcopy(network_agents)
d = deepcopy(list(network_agents))
for v in d:
if 'threshold' in v:
del v['threshold']
v['agent_type'] = serialize_type(v['agent_type'],
v['agent_class'] = serialize_type(v['agent_class'],
known_modules=known_modules)
return d
def deserialize_type(agent_type, known_modules=[]):
if not isinstance(agent_type, str):
return agent_type
def deserialize_type(agent_class, known_modules=[]):
if not isinstance(agent_class, str):
return agent_class
known = known_modules + ['soil.agents', 'soil.agents.custom' ]
agent_type = utils.deserializer(agent_type, known_modules=known)
return agent_type
agent_class = serialization.deserializer(agent_class, known_modules=known)
return agent_class
def deserialize_distribution(ind, **kwargs):
def deserialize_definition(ind, **kwargs):
d = deepcopy(ind)
for v in d:
v['agent_type'] = deserialize_type(v['agent_type'], **kwargs)
v['agent_class'] = deserialize_type(v['agent_class'], **kwargs)
return d
@@ -365,32 +461,327 @@ def _validate_states(states, topology):
states = states or []
if isinstance(states, dict):
for x in states:
assert x in topology.node
assert x in topology.nodes
else:
assert len(states) <= len(topology)
return states
def _convert_agent_types(ind, to_string=False, **kwargs):
def _convert_agent_classs(ind, to_string=False, **kwargs):
'''Convenience method to allow specifying agents by class or class name.'''
if to_string:
return serialize_distribution(ind, **kwargs)
return deserialize_distribution(ind, **kwargs)
return serialize_definition(ind, **kwargs)
return deserialize_definition(ind, **kwargs)
def _agent_from_distribution(distribution, value=-1):
def _agent_from_definition(definition, value=-1, unique_id=None):
"""Used in the initialization of agents given an agent distribution."""
if value < 0:
value = random.random()
for d in distribution:
threshold = d['threshold']
if value >= threshold[0] and value < threshold[1]:
for d in sorted(definition, key=lambda x: x.get('threshold')):
threshold = d.get('threshold', (-1, -1))
# Check if the definition matches by id (first) or by threshold
if (unique_id is not None and unique_id in d.get('ids', [])) or \
(value >= threshold[0] and value < threshold[1]):
state = {}
if 'state' in d:
state = deepcopy(d['state'])
return d['agent_type'], state
return d['agent_class'], state
raise Exception('Distribution for value {} not found in: {}'.format(value, distribution))
raise Exception('Definition for value {} not found in: {}'.format(value, definition))
def _definition_to_dict(definition, size=None, default_state=None):
state = default_state or {}
agents = {}
remaining = {}
if size:
for ix in range(size):
remaining[ix] = copy(state)
else:
remaining = defaultdict(lambda x: copy(state))
distro = sorted([item for item in definition if 'weight' in item])
id = 0
def init_agent(item, id=ix):
while id in agents:
id += 1
agent = remaining[id]
agent['state'].update(copy(item.get('state', {})))
agents[agent.unique_id] = agent
del remaining[id]
return agent
for item in definition:
if 'ids' in item:
ids = item['ids']
del item['ids']
for id in ids:
agent = init_agent(item, id)
for item in definition:
if 'number' in item:
times = item['number']
del item['number']
for times in range(times):
if size:
ix = random.choice(remaining.keys())
agent = init_agent(item, id)
else:
agent = init_agent(item)
if not size:
return agents
if len(remaining) < 0:
raise Exception('Invalid definition. Too many agents to add')
total_weight = float(sum(s['weight'] for s in distro))
unit = size / total_weight
for item in distro:
times = unit * item['weight']
del item['weight']
for times in range(times):
ix = random.choice(remaining.keys())
agent = init_agent(item, id)
return agents
class AgentView(Mapping, Set):
"""A lazy-loaded list of agents.
"""
__slots__ = ("_agents",)
def __init__(self, agents):
self._agents = agents
def __getstate__(self):
return {"_agents": self._agents}
def __setstate__(self, state):
self._agents = state["_agents"]
# Mapping methods
def __len__(self):
return sum(len(x) for x in self._agents.values())
def __iter__(self):
yield from iter(chain.from_iterable(g.values() for g in self._agents.values()))
def __getitem__(self, agent_id):
if isinstance(agent_id, slice):
raise ValueError(f"Slicing is not supported")
for group in self._agents.values():
if agent_id in group:
return group[agent_id]
raise ValueError(f"Agent {agent_id} not found")
def filter(self, *args, **kwargs):
yield from filter_groups(self._agents, *args, **kwargs)
def one(self, *args, **kwargs):
return next(filter_groups(self._agents, *args, **kwargs))
def __call__(self, *args, **kwargs):
return list(self.filter(*args, **kwargs))
def __contains__(self, agent_id):
return any(agent_id in g for g in self._agents)
def __str__(self):
return str(list(a.unique_id for a in self))
def __repr__(self):
return f"{self.__class__.__name__}({self})"
def filter_groups(groups, *, group=None, **kwargs):
assert isinstance(groups, dict)
if group is not None and not isinstance(group, list):
group = [group]
if group:
groups = list(groups[g] for g in group if g in groups)
else:
groups = list(groups.values())
agents = chain.from_iterable(filter_group(g, **kwargs) for g in groups)
yield from agents
def filter_group(group, *id_args, unique_id=None, state_id=None, agent_class=None, ignore=None, state=None, **kwargs):
'''
Filter agents given as a dict, by the criteria given as arguments (e.g., certain type or state id).
'''
assert isinstance(group, dict)
ids = []
if unique_id is not None:
if isinstance(unique_id, list):
ids += unique_id
else:
ids.append(unique_id)
if id_args:
ids += id_args
if state_id is not None and not isinstance(state_id, (tuple, list)):
state_id = tuple([state_id])
if agent_class is not None:
agent_class = deserialize_type(agent_class)
try:
agent_class = tuple(agent_class)
except TypeError:
agent_class = tuple([agent_class])
if ids:
agents = (group[aid] for aid in ids if aid in group)
else:
agents = (a for a in group.values())
f = agents
if ignore:
f = filter(lambda x: x not in ignore, f)
if state_id is not None:
f = filter(lambda agent: agent.get('state_id', None) in state_id, f)
if agent_class is not None:
f = filter(lambda agent: isinstance(agent, agent_class), f)
state = state or dict()
state.update(kwargs)
for k, v in state.items():
f = filter(lambda agent: agent.state.get(k, None) == v, f)
yield from f
def from_config(cfg: Dict[str, config.AgentConfig], env):
'''
Agents are specified in groups.
Each group can be specified in two ways, either through a fixed list in which each item has
the agent type, the number of agents to create, and the other parameters, or through what we call
an `agent distribution`, which is similar but instead of number of agents, it specifies the weight
of each agent type.
'''
default = cfg.get('default', None)
return {k: _group_from_config(c, default=default, env=env) for (k, c) in cfg.items() if k != 'default'}
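A hedged sketch of one such group definition, using the config classes from soil/config.py (shown further below); the class paths, weights and node id are illustrative and may need adjusting to how deserialization resolves names:
from soil.config import AgentConfig, AgentDistro, FixedAgentConfig

agents_cfg = {
    # Used as the fallback for every group
    'default': AgentConfig(agent_class='soil.agents.CounterModel'),
    'network': AgentConfig(
        topology='default',
        fixed=[
            # One specific agent pinned to node 0 of the 'default' topology
            FixedAgentConfig(node_id=0, state={'times': 0}),
        ],
        distribution=[
            # The remaining nodes are split 80/20 between the two classes
            AgentDistro(agent_class='soil.agents.CounterModel', weight=0.8),
            AgentDistro(agent_class='soil.agents.AggregatedCounter', weight=0.2),
        ],
    ),
}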
def _group_from_config(cfg: config.AgentConfig, default: config.SingleAgentConfig, env):
agents = {}
if cfg.fixed is not None:
agents = _from_fixed(cfg.fixed, topology=cfg.topology, default=default, env=env)
if cfg.distribution:
n = cfg.n or len(env.topologies[cfg.topology or default.topology])
target = n - len(agents)
agents.update(_from_distro(cfg.distribution, target,
topology=cfg.topology or default.topology,
default=default,
env=env))
assert len(agents) == n
if cfg.override:
for attrs in cfg.override:
if attrs.filter:
filtered = list(filter_group(agents, **attrs.filter))
else:
filtered = list(agents)
if attrs.n > len(filtered):
raise ValueError(f'Not enough agents to sample. Got {len(filtered)}, expected >= {attrs.n}')
for agent in random.sample(filtered, attrs.n):
agent.state.update(attrs.state)
return agents
def _from_fixed(lst: List[config.FixedAgentConfig], topology: str, default: config.SingleAgentConfig, env):
agents = {}
for fixed in lst:
agent_id = fixed.agent_id
if agent_id is None:
agent_id = env.next_id()
cls = serialization.deserialize(fixed.agent_class or default.agent_class)
state = fixed.state.copy()
state.update(default.state)
agent = cls(unique_id=agent_id,
model=env,
**state)
topology = fixed.topology if (fixed.topology is not None) else (topology or default.topology)
if topology:
env.agent_to_node(agent_id, topology, fixed.node_id)
agents[agent.unique_id] = agent
return agents
def _from_distro(distro: List[config.AgentDistro],
n: int,
topology: str,
default: config.SingleAgentConfig,
env):
agents = {}
if n is None:
if any(dist.n is None for dist in distro):
raise ValueError('You must provide a total number of agents, or the number of each type')
n = sum(dist.n for dist in distro)
weights = list(dist.weight if dist.weight is not None else 1 for dist in distro)
minw = min(weights)
norm = list(weight / minw for weight in weights)
total = sum(norm)
chunk = n // total
# random.choices would be enough to get a weighted distribution. But it can vary a lot for smaller k
# So instead we calculate our own distribution to make sure the actual ratios are close to what we would expect
# Calculate how many times each has to appear
indices = list(chain.from_iterable([idx] * int(n*chunk) for (idx, n) in enumerate(norm)))
# Complete with random agents following the original weight distribution
if len(indices) < n:
indices += random.choices(list(range(len(distro))), weights=[d.weight for d in distro], k=n-len(indices))
# Deserialize classes for efficiency
classes = list(serialization.deserialize(i.agent_class or default.agent_class) for i in distro)
# Add them in random order
random.shuffle(indices)
for idx in indices:
d = distro[idx]
cls = classes[idx]
agent_id = env.next_id()
state = d.state.copy()
if default:
state.update(default.state)
agent = cls(unique_id=agent_id, model=env, **state)
topology = d.topology if (d.topology is not None) else topology or default.topology
if topology:
env.agent_to_node(agent.unique_id, topology)
assert agent.name is not None
assert agent.name != 'None'
assert agent.name
agents[agent.unique_id] = agent
return agents
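A worked illustration of the allocation described in the comments above (the weights and n are made up):
weights = [1, 3]                              # two agent classes, in a 1:3 ratio
n = 8
norm = [w / min(weights) for w in weights]    # [1.0, 3.0]
chunk = n // sum(norm)                        # 8 // 4.0 == 2.0
counts = [int(w * chunk) for w in norm]       # [2, 6]: exactly n agents, so no random fill is needed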
from .BassModel import *
@@ -400,4 +791,10 @@ from .ModelM2 import *
from .SentimentCorrelationModel import *
from .SISaModel import *
from .CounterModel import *
from .DrawingAgent import *
try:
import scipy
from .Geo import Geo
except ImportError:
import sys
print('Could not load the Geo Agent, scipy is not installed', file=sys.stderr)


@@ -4,7 +4,8 @@ import glob
import yaml
from os.path import join
from . import utils, history
from . import serialization
from tsih import History
def read_data(*args, group=False, **kwargs):
@@ -20,7 +21,7 @@ def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
process_args = {}
for folder in glob.glob(pattern):
config_file = glob.glob(join(folder, '*.yml'))[0]
config = yaml.load(open(config_file))
config = yaml.load(open(config_file), Loader=yaml.SafeLoader)
df = None
if from_csv:
for trial_data in sorted(glob.glob(join(folder,
@@ -28,13 +29,13 @@ def _read_data(pattern, *args, from_csv=False, process_args=None, **kwargs):
df = read_csv(trial_data, **kwargs)
yield config_file, df, config
else:
for trial_data in sorted(glob.glob(join(folder, '*.db.sqlite'))):
for trial_data in sorted(glob.glob(join(folder, '*.sqlite'))):
df = read_sql(trial_data, **kwargs)
yield config_file, df, config
def read_sql(db, *args, **kwargs):
h = history.History(db, backup=False)
h = History(db_path=db, backup=False, readonly=True)
df = h.read_sql(*args, **kwargs)
return df
@@ -56,12 +57,17 @@ def read_csv(filename, keys=None, convert_types=False, **kwargs):
def convert_row(row):
row['value'] = utils.deserialize(row['value_type'], row['value'])
row['value'] = serialization.deserialize(row['value_type'], row['value'])
return row
def convert_types_slow(df):
'''This is a slow operation.'''
'''
Go over every column in a dataframe and convert it to the type determined by the `get_types`
function.
This is a slow operation.
'''
dtypes = get_types(df)
for k, v in dtypes.items():
t = df[df['key']==k]
@@ -69,6 +75,13 @@ def convert_types_slow(df):
df = df.apply(convert_row, axis=1)
return df
def split_processed(df):
env = df.loc[:, df.columns.get_level_values(1).isin(['env', 'stats'])]
agents = df.loc[:, ~df.columns.get_level_values(1).isin(['env', 'stats'])]
return env, agents
def split_df(df):
'''
Split a dataframe in two dataframes: one with the history of agents,
@@ -95,6 +108,9 @@ def process(df, **kwargs):
def get_types(df):
'''
Get the value type for every key stored in a raw history dataframe.
'''
dtypes = df.groupby(by=['key'])['value_type'].unique()
return {k:v[0] for k,v in dtypes.iteritems()}
@@ -119,8 +135,14 @@ def process_one(df, *keys, columns=['key', 'agent_id'], values='value',
def get_count(df, *keys):
'''
For every t_step and key, get the value count.
The result is a dataframe with `t_step` as index, and a multiindex column based on `key` and the values found for each `key`.
'''
if keys:
df = df[list(keys)]
df.columns = df.columns.remove_unused_levels()
counts = pd.DataFrame()
for key in df.columns.levels[0]:
g = df[[key]].apply(pd.Series.value_counts, axis=1).fillna(0)
@@ -130,13 +152,28 @@ def get_count(df, *keys):
return counts
def get_majority(df, *keys):
'''
For every t_step and key, get the value of the majority of agents
The result is a dataframe with `t_step` as index, and columns based on `key`.
'''
df = get_count(df, *keys)
return df.stack(level=0).idxmax(axis=1).unstack()
def get_value(df, *keys, aggfunc='sum'):
'''
For every t_step and key, get the value of *numeric columns*, aggregated using a specific function.
'''
if keys:
df = df[list(keys)]
return df.groupby(axis=1, level=0).agg(aggfunc, axis=1)
df.columns = df.columns.remove_unused_levels()
df = df.select_dtypes('number')
return df.groupby(level='key', axis=1).agg(aggfunc)
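A hedged usage sketch of these helpers (the output pattern and the 'id' key are illustrative):
from soil import analysis

# For every simulation folder matching the pattern, count how many agents hold
# each value of 'id' at every t_step, and plot the result.
for config_file, counts, cfg in analysis.do_all('soil_output/*', analysis.get_count, 'id'):
    print(config_file, counts.iloc[-1])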
def plot_all(*args, **kwargs):
def plot_all(*args, plot_args={}, **kwargs):
'''
Read all the trial data and plot the result of applying a function on them.
'''
@@ -144,14 +181,17 @@ def plot_all(*args, **kwargs):
ps = []
for line in dfs:
f, df, config = line
df.plot(title=config['name'])
if len(df) < 1:
continue
df.plot(title=config['name'], **plot_args)
ps.append(df)
return ps
def do_all(pattern, func, *keys, include_env=False, **kwargs):
for config_file, df, config in read_data(pattern, keys=keys):
if len(df) < 1:
continue
p = func(df, *keys, **kwargs)
p.plot(title=config['name'])
yield config_file, p, config

soil/config.py Normal file

@@ -0,0 +1,242 @@
from __future__ import annotations
from pydantic import BaseModel, ValidationError, validator, root_validator
import yaml
import os
import sys
from typing import Any, Callable, Dict, List, Optional, Union, Type
from pydantic import BaseModel, Extra
import networkx as nx
class General(BaseModel):
id: str = 'Unnamed Simulation'
group: str = None
dir_path: Optional[str] = None
num_trials: int = 1
max_time: float = 100
interval: float = 1
seed: str = ""
@staticmethod
def default():
return General()
# Could use TypeAlias in python >= 3.10
nodeId = int
class Node(BaseModel):
id: nodeId
state: Optional[Dict[str, Any]] = {}
class Edge(BaseModel):
source: nodeId
target: nodeId
value: Optional[float] = 1
class Topology(BaseModel):
nodes: List[Node]
directed: bool
links: List[Edge]
class NetParams(BaseModel, extra=Extra.allow):
generator: Union[Callable, str]
n: int
class NetConfig(BaseModel):
group: str = 'network'
params: Optional[NetParams]
topology: Optional[Union[Topology, nx.Graph]]
path: Optional[str]
class Config:
arbitrary_types_allowed = True
@staticmethod
def default():
return NetConfig(topology=None, params=None)
@root_validator
def validate_all(cls, values):
if 'params' not in values and 'topology' not in values:
raise ValueError('You must specify either a topology or the parameters to generate a graph')
return values
class EnvConfig(BaseModel):
environment_class: Union[Type, str] = 'soil.Environment'
params: Dict[str, Any] = {}
schedule: Union[Type, str] = 'soil.time.TimedActivation'
@staticmethod
def default():
return EnvConfig()
class SingleAgentConfig(BaseModel):
agent_class: Optional[Union[Type, str]] = None
agent_id: Optional[int] = None
topology: Optional[str] = None
node_id: Optional[Union[int, str]] = None
name: Optional[str] = None
state: Optional[Dict[str, Any]] = {}
class FixedAgentConfig(SingleAgentConfig):
n: Optional[int] = 1
@root_validator
def validate_all(cls, values):
if values.get('agent_id', None) is not None and values.get('n', 1) > 1:
print(values)
raise ValueError(f"An agent_id can only be provided when there is only one agent ({values.get('n')} given)")
return values
class OverrideAgentConfig(FixedAgentConfig):
filter: Optional[Dict[str, Any]] = None
class AgentDistro(SingleAgentConfig):
weight: Optional[float] = 1
class AgentConfig(SingleAgentConfig):
n: Optional[int] = None
topology: Optional[str] = None
distribution: Optional[List[AgentDistro]] = None
fixed: Optional[List[FixedAgentConfig]] = None
override: Optional[List[OverrideAgentConfig]] = None
@staticmethod
def default():
return AgentConfig()
@root_validator
def validate_all(cls, values):
if 'distribution' in values and ('n' not in values and 'topology' not in values):
raise ValueError("You need to provide the number of agents or a topology to extract the value from.")
return values
class Config(BaseModel, extra=Extra.forbid):
version: Optional[str] = '1'
general: General = General.default()
topologies: Optional[Dict[str, NetConfig]] = {}
environment: EnvConfig = EnvConfig.default()
agents: Optional[Dict[str, AgentConfig]] = {}
def convert_old(old, strict=True):
'''
Try to convert old style configs into the new format.
This is still a work in progress and might not work in many cases.
'''
new = {}
general = {}
for k in ['id',
'group',
'dir_path',
'num_trials',
'max_time',
'interval',
'seed']:
if k in old:
general[k] = old[k]
if 'name' in old:
general['id'] = old['name']
network = {}
if 'network_params' in old and old['network_params']:
for (k, v) in old['network_params'].items():
if k == 'path':
network['path'] = v
else:
network.setdefault('params', {})[k] = v
if 'topology' in old:
network['topology'] = old['topology']
agents = {
'network': {},
'default': {},
}
if 'agent_type' in old:
agents['default']['agent_class'] = old['agent_type']
if 'default_state' in old:
agents['default']['state'] = old['default_state']
def updated_agent(agent):
newagent = dict(agent)
newagent['agent_class'] = newagent['agent_type']
del newagent['agent_type']
return newagent
for agent in old.get('environment_agents', []):
agents['environment'] = {'distribution': [], 'fixed': []}
if 'agent_id' in agent:
agent['name'] = agent['agent_id']
del agent['agent_id']
agents['environment']['fixed'].append(updated_agent(agent))
by_weight = []
fixed = []
override = []
if 'network_agents' in old:
agents['network']['topology'] = 'default'
for agent in old['network_agents']:
agent = updated_agent(agent)
if 'agent_id' in agent:
fixed.append(agent)
else:
by_weight.append(agent)
if 'agent_type' in old and (not fixed and not by_weight):
agents['network']['topology'] = 'default'
by_weight = [{'agent_class': old['agent_type']}]
# TODO: translate states properly
if 'states' in old:
states = old['states']
if isinstance(states, dict):
states = states.items()
else:
states = enumerate(states)
for (k, v) in states:
override.append({'filter': {'node_id': k},
'state': v
})
agents['network']['override'] = override
agents['network']['fixed'] = fixed
agents['network']['distribution'] = by_weight
environment = {'params': {}}
if 'environment_class' in old:
environment['environment_class'] = old['environment_class']
for (k, v) in old.get('environment_params', {}).items():
environment['params'][k] = v
return Config(version='2',
general=general,
topologies={'default': network},
environment=environment,
agents=agents)
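A hedged sketch of the simple case this conversion targets (the old-style keys appear in the mapping above; the values are made up):
from soil.config import convert_old

old = {
    'name': 'example_sim',
    'max_time': 100,
    'network_params': {'generator': 'complete_graph', 'n': 10},
    'network_agents': [
        {'agent_type': 'CounterModel', 'weight': 1, 'state': {'id': 0}},
    ],
    'environment_params': {'speed': 0.5},
}
new = convert_old(old)
# new.general.id == 'example_sim'
# new.topologies['default'].params holds the graph generator arguments
# new.agents['network'].distribution[0].agent_class == 'CounterModel'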

soil/datacollection.py Normal file

@@ -0,0 +1,24 @@
from mesa import DataCollector as MDC
class SoilDataCollector(MDC):
def __init__(self, environment, *args, **kwargs):
super().__init__(*args, **kwargs)
# Populate model and env reporters so they have a key per
# So they can be shown in the web interface
self.environment = environment
raise NotImplementedError()
@property
def model_vars(self):
raise NotImplementedError()
@model_vars.setter
def model_vars(self, value):
raise NotImplementedError()
@property
def agent_reporters(self):
raise NotImplementedError()


@@ -1,338 +1,293 @@
from __future__ import annotations
import os
import sqlite3
import time
import csv
import math
import random
import simpy
import tempfile
import pandas as pd
import logging
from typing import Dict
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
from networkx.readwrite import json_graph
import networkx as nx
import nxsim
from . import utils, agents, analysis, history
from mesa import Model
from mesa.datacollection import DataCollector
from . import serialization, agents, analysis, utils, time, config, network
class Environment(nxsim.NetworkEnvironment):
Record = namedtuple('Record', 'dict_id t_step key value')
class Environment(Model):
"""
The environment is key in a simulation. It contains the network topology,
a reference to network and environment agents, as well as the environment
params, which are used as shared state between agents.
The environment parameters and the state of every agent can be accessed
both by using the environment as a dictionary or with the environment's
:meth:`soil.environment.Environment.get` method.
"""
def __init__(self, name=None,
network_agents=None,
environment_agents=None,
states=None,
default_state=None,
interval=1,
seed=None,
dry_run=False,
def __init__(self,
env_id='unnamed_env',
seed='default',
schedule=None,
dir_path=None,
topology=None,
*args, **kwargs):
self.name = name or 'UnnamedEnvironment'
if isinstance(states, list):
states = dict(enumerate(states))
self.states = deepcopy(states) if states else {}
self.default_state = deepcopy(default_state) or {}
if not topology:
topology = nx.Graph()
super().__init__(*args, topology=topology, **kwargs)
self._env_agents = {}
self.dry_run = dry_run
interval=1,
agents: Dict[str, config.AgentConfig] = {},
topologies: Dict[str, config.NetConfig] = {},
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
**env_params):
super().__init__()
self.current_id = -1
self.seed = '{}_{}'.format(seed, env_id)
self.id = env_id
self.dir_path = dir_path or os.getcwd()
if schedule is None:
schedule = time.TimedActivation()
self.schedule = schedule
seed = seed or current_time()
random.seed(seed)
self.topologies = {}
self._node_ids = {}
for (name, cfg) in topologies.items():
self.set_topology(cfg=cfg,
graph=name)
self.agents = agents or {}
self.env_params = env_params or {}
self.interval = interval
self.dir_path = dir_path or tempfile.mkdtemp('soil-env')
if not dry_run:
self.get_path()
self._history = history.History(name=self.name if not dry_run else None,
dir_path=self.dir_path)
# Add environment agents first, so their events get
# executed before network agents
self.environment_agents = environment_agents or []
self.network_agents = network_agents or []
self['SEED'] = seed or time.time()
random.seed(self['SEED'])
self['SEED'] = seed
self.logger = utils.logger.getChild(self.id)
self.datacollector = DataCollector(model_reporters, agent_reporters, tables)
@property
def agents(self):
yield from self.environment_agents
yield from self.network_agents
@property
def environment_agents(self):
for ref in self._env_agents.values():
yield ref
@environment_agents.setter
def environment_agents(self, environment_agents):
# Set up environmental agent
self._env_agents = {}
for item in environment_agents:
kwargs = deepcopy(item)
atype = kwargs.pop('agent_type')
kwargs['agent_id'] = kwargs.get('agent_id', atype.__name__)
kwargs['state'] = kwargs.get('state', {})
a = atype(environment=self, **kwargs)
self._env_agents[a.id] = a
def topology(self):
return self.topologies['default']
@property
def network_agents(self):
for i in self.G.nodes():
node = self.G.node[i]
if 'agent' in node:
yield node['agent']
yield from self.agents(agent_class=agents.NetworkAgent)
@network_agents.setter
def network_agents(self, network_agents):
if not network_agents:
return
for ix in self.G.nodes():
self.init_agent(ix, agent_distribution=network_agents)
@staticmethod
def from_config(conf: config.Config, trial_id, **kwargs) -> Environment:
'''Create an environment for a trial of the simulation'''
conf = conf
if kwargs:
conf = config.Config(**conf.dict(exclude_defaults=True), **kwargs)
seed = '{}_{}'.format(conf.general.seed, trial_id)
id = '{}_trial_{}'.format(conf.general.id, trial_id).replace('.', '-')
opts = conf.environment.params.copy()
dir_path = conf.general.dir_path
opts.update(conf)
opts.update(kwargs)
env = serialization.deserialize(conf.environment.environment_class)(env_id=id, seed=seed, dir_path=dir_path, **opts)
return env
def init_agent(self, agent_id, agent_distribution):
node = self.G.nodes[agent_id]
@property
def now(self):
if self.schedule:
return self.schedule.time
raise Exception('The environment has not been scheduled, so it has no sense of time')
def topology_for(self, agent_id):
return self.topologies[self._node_ids[agent_id][0]]
def node_id_for(self, agent_id):
return self._node_ids[agent_id][1]
def set_topology(self, cfg=None, dir_path=None, graph='default'):
topology = cfg
if not isinstance(cfg, nx.Graph):
topology = network.from_config(cfg, dir_path=dir_path or self.dir_path)
self.topologies[graph] = topology
@property
def agents(self):
return agents.AgentView(self._agents)
def count_agents(self, *args, **kwargs):
return sum(1 for i in self.find_all(*args, **kwargs))
def find_all(self, *args, **kwargs):
return agents.AgentView(self._agents).filter(*args, **kwargs)
def find_one(self, *args, **kwargs):
return agents.AgentView(self._agents).one(*args, **kwargs)
@agents.setter
def agents(self, agents_def: Dict[str, config.AgentConfig]):
self._agents = agents.from_config(agents_def, env=self)
for d in self._agents.values():
for a in d.values():
self.schedule.add(a)
def init_agent(self, agent_id, agent_definitions, graph='default'):
node = self.topologies[graph].nodes[agent_id]
init = False
state = dict(node)
agent_type = None
if 'agent_type' in self.states.get(agent_id, {}):
agent_type = self.states[agent_id]
elif 'agent_type' in node:
agent_type = node['agent_type']
elif 'agent_type' in self.default_state:
agent_type = self.default_state['agent_type']
agent_class = None
if 'agent_class' in self.states.get(agent_id, {}):
agent_class = self.states[agent_id]['agent_class']
elif 'agent_class' in node:
agent_class = node['agent_class']
elif 'agent_class' in self.default_state:
agent_class = self.default_state['agent_class']
if agent_type:
agent_type = agents.deserialize_type(agent_type)
if agent_class:
agent_class = agents.deserialize_type(agent_class)
elif agent_definitions:
agent_class, state = agents._agent_from_definition(agent_definitions, unique_id=agent_id)
else:
agent_type, state = agents._agent_from_distribution(agent_distribution)
return self.set_agent(agent_id, agent_type, state)
serialization.logger.debug('Skipping node {}'.format(agent_id))
return
return self.set_agent(agent_id, agent_class, state)
def set_agent(self, agent_id, agent_type, state=None):
node = self.G.nodes[agent_id]
def agent_to_node(self, agent_id, graph_name='default', node_id=None, shuffle=False):
#TODO: test
if node_id is None:
G = self.topologies[graph_name]
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if data.get('agent_id', None) is None:
node_id = next_id
data['agent_id'] = agent_id
break
self._node_ids[agent_id] = (graph_name, node_id)
print(self._node_ids)
def set_agent(self, agent_id, agent_class, state=None, graph='default'):
node = self.topologies[graph].nodes[agent_id]
defstate = deepcopy(self.default_state) or {}
defstate.update(self.states.get(agent_id, {}))
defstate.update(node.get('state', {}))
if state:
defstate.update(state)
state = defstate
a = agent_type(environment=self,
agent_id=agent_id,
state=state)
a = None
if agent_class:
state = defstate
a = agent_class(model=self,
unique_id=agent_id
)
for (k, v) in state.items():
setattr(a, k, v)
node['agent'] = a
self.schedule.add(a)
return a
def add_node(self, agent_type, state=None):
agent_id = int(len(self.G.nodes()))
self.G.add_node(agent_id)
a = self.set_agent(agent_id, agent_type, state)
def add_node(self, agent_class, state=None, graph='default'):
agent_id = int(len(self.topologies[graph].nodes()))
self.topologies[graph].add_node(agent_id)
a = self.set_agent(agent_id, agent_class, state, graph=graph)
a['visible'] = True
return a
def add_edge(self, agent1, agent2, attrs=None):
def add_edge(self, agent1, agent2, start=None, graph='default', **attrs):
if hasattr(agent1, 'id'):
agent1 = agent1.id
if hasattr(agent2, 'id'):
agent2 = agent2.id
return self.G.add_edge(agent1, agent2)
start = start or self.now
return self.topologies[graph].add_edge(agent1, agent2, **attrs)
def run(self, *args, **kwargs):
self._save_state()
super().run(*args, **kwargs)
self._history.flush_cache()
def _save_state(self, now=None):
# for agent in self.agents:
# agent.save_state()
utils.logger.debug('Saving state @{}'.format(self.now))
self._history.save_records(self.state_to_tuples(now=now))
def save_state(self):
'''
:DEPRECATED:
Periodically save the state of the environment and the agents.
'''
self._save_state()
while self.peek() != simpy.core.Infinity:
delay = max(self.peek() - self.now, self.interval)
utils.logger.debug('Step: {}'.format(self.now))
ev = self.event()
ev._ok = True
# Schedule the event with minimum priority so
# that it executes before all agents
self.schedule(ev, -999, delay)
yield ev
self._save_state()
def __getitem__(self, key):
if isinstance(key, tuple):
self._history.flush_cache()
return self._history[key]
return self.environment_params[key]
def __setitem__(self, key, value):
if isinstance(key, tuple):
k = history.Key(*key)
self._history.save_record(*k,
value=value)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
self.environment_params[key] = value
self._history.save_record(agent_id='env',
t_step=self.now,
key=key,
value=value)
message = message + " ".join(str(i) for i in args)
message = " @{:>3}: {}".format(self.now, message)
for k, v in kwargs.items():
message += " {k}={v} ".format(k=k, v=v)
extra = {}
extra['now'] = self.now
extra['unique_id'] = self.id
return self.logger.log(level, message, extra=extra)
def step(self):
'''
Advance one step in the simulation, and update the data collection and scheduler appropriately
'''
super().step()
self.schedule.step()
self.datacollector.collect(self)
def run(self, until, *args, **kwargs):
until = until or float('inf')
while self.schedule.next_time < until:
self.step()
utils.logger.debug(f'Simulation step {self.schedule.time}/{until}. Next: {self.schedule.next_time}')
self.schedule.time = until
def __contains__(self, key):
return key in self.environment_params
return key in self.env_params
def get(self, key, default=None):
'''
Get the value of an environment attribute in a
given point in the simulation (history).
If key is an attribute name, this method returns
the current value.
To get values at other times, use a
:meth: `soil.history.Key` tuple.
Get the value of an environment attribute.
Return `default` if the value is not set.
'''
return self[key] if key in self else default
return self.env_params.get(key, default)
def get_path(self, dir_path=None):
dir_path = dir_path or self.dir_path
if not os.path.exists(dir_path):
try:
os.makedirs(dir_path)
except FileExistsError:
pass
return dir_path
def __getitem__(self, key):
return self.env_params.get(key)
def get_agent(self, agent_id):
return self.G.node[agent_id]['agent']
def __setitem__(self, key, value):
return self.env_params.__setitem__(key, value)
def get_agents(self):
return list(self.agents)
def dump_csv(self, dir_path=None):
csv_name = os.path.join(self.get_path(dir_path),
'{}.environment.csv'.format(self.name))
with open(csv_name, 'w') as f:
cr = csv.writer(f)
cr.writerow(('agent_id', 't_step', 'key', 'value'))
for i in self.history_to_tuples():
cr.writerow(i)
def dump_gexf(self, dir_path=None):
G = self.history_to_graph()
graph_path = os.path.join(self.get_path(dir_path),
self.name+".gexf")
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.node[node]:
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
nx.write_gexf(G, graph_path, version="1.2draft")
def dump(self, dir_path=None, formats=None):
if not formats:
return
functions = {
'csv': self.dump_csv,
'gexf': self.dump_gexf
}
for f in formats:
if f in functions:
functions[f](dir_path)
else:
raise ValueError('Unknown format: {}'.format(f))
def state_to_tuples(self, now=None):
def _agent_to_tuples(self, agent, now=None):
if now is None:
now = self.now
for k, v in self.environment_params.items():
yield history.Record(agent_id='env',
t_step=now,
key=k,
value=v)
for k, v in agent.state.items():
yield Record(dict_id=agent.id,
t_step=now,
key=k,
value=v)
def state_to_tuples(self, agent_id=None, now=None):
if now is None:
now = self.now
if agent_id:
agent = self.agents[agent_id]
yield from self._agent_to_tuples(agent, now)
return
for k, v in self.env_params.items():
yield Record(dict_id='env',
t_step=now,
key=k,
value=v)
for agent in self.agents:
for k, v in agent.state.items():
yield history.Record(agent_id=agent.id,
t_step=now,
key=k,
value=v)
yield from self._agent_to_tuples(agent, now)
def history_to_tuples(self):
return self._history.to_tuples()
def history_to_graph(self):
G = nx.Graph(self.G)
for agent in self.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
history = self[agent.id, None, None]
if not history:
continue
for t_step, attribute, value in sorted(list(history)):
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
def __getstate__(self):
state = self.__dict__.copy()
state['G'] = json_graph.node_link_data(self.G)
state['network_agents'] = agents.serialize_distribution(self.network_agents)
state['environment_agents'] = agents._convert_agent_types(self.environment_agents,
to_string=True)
return state
def __setstate__(self, state):
self.__dict__ = state
self.G = json_graph.node_link_graph(state['G'])
self.network_agents = self.calculate_distribution(self._convert_agent_types(self.network_agents))
self.environment_agents = self._convert_agent_types(self.environment_agents)
return state
SoilEnvironment = Environment
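# Sketch of the dict-style parameter access defined above. The constructor is
# still in flux in this changeset, so the exact signature used here is an
# illustrative assumption: keyword arguments end up in env_params and are
# exposed through __getitem__/__setitem__.
#
#   env = Environment(env_id='example', seed=1, prob_infect=0.05)
#   env['prob_infect']            # -> 0.05 (read from env_params)
#   env['prob_infect'] = 0.1      # agents see the updated value through env[...]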

soil/exporters.py

@@ -0,0 +1,232 @@
import os
from time import time as current_time
from io import BytesIO
from sqlalchemy import create_engine
import matplotlib.pyplot as plt
import networkx as nx
from .serialization import deserialize
from .utils import open_or_reuse, logger, timer
from . import utils
class DryRunner(BytesIO):
def __init__(self, fname, *args, copy_to=None, **kwargs):
super().__init__(*args, **kwargs)
self.__fname = fname
self.__copy_to = copy_to
def write(self, txt):
if self.__copy_to:
self.__copy_to.write('{}:::{}'.format(self.__fname, txt))
try:
super().write(txt)
except TypeError:
super().write(bytes(txt, 'utf-8'))
def close(self):
content = '(binary data not shown)'
try:
content = self.getvalue().decode()
except UnicodeDecodeError:
pass
logger.info('**Not** written to {} (dry run mode):\n\n{}\n\n'.format(self.__fname, content))
super().close()
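# Illustrative use of DryRunner (a sketch, not part of this changeset): writes
# are buffered in memory and logged on close instead of touching the
# filesystem, which is what Exporter.output() hands back in dry-run mode.
#
#   out = DryRunner('results.csv')
#   out.write('agent_id,value\n')
#   out.close()   # logs the buffered content instead of creating results.csv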
class Exporter:
'''
Interface for all exporters. Inheriting from it is not required, but it is
useful if you do not plan to implement all the methods.
'''
def __init__(self, simulation, outdir=None, dry_run=None, copy_to=None):
self.simulation = simulation
outdir = outdir or os.path.join(os.getcwd(), 'soil_output')
self.outdir = os.path.join(outdir,
simulation.config.general.group or '',
simulation.config.general.id)
self.dry_run = dry_run
self.copy_to = copy_to
def sim_start(self):
'''Method to call when the simulation starts'''
pass
def sim_end(self):
'''Method to call when the simulation ends'''
pass
def trial_start(self, env):
'''Method to call when a trial starts'''
pass
def trial_end(self, env):
'''Method to call when a trial ends'''
pass
def output(self, f, mode='w', **kwargs):
if self.dry_run:
f = DryRunner(f, copy_to=self.copy_to)
else:
try:
if not os.path.isabs(f):
f = os.path.join(self.outdir, f)
except TypeError:
pass
return open_or_reuse(f, mode=mode, **kwargs)
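# A minimal custom exporter, sketched against the interface above. The class
# name and the env.count_agents() call are illustrative assumptions based on
# the Environment API in this changeset; it is not an exporter shipped with
# soil.
class agentcount(Exporter):
    '''Example: append the final number of agents of every trial to a text file'''

    def trial_end(self, env):
        with self.output('agent_counts.txt', mode='a') as f:
            f.write('{} {}\n'.format(env.id, env.count_agents()))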
class default(Exporter):
'''Default exporter. Writes sqlite results, as well as the simulation YAML'''
# def sim_start(self):
# if not self.dry_run:
# logger.info('Dumping results to %s', self.outdir)
# self.simulation.dump_yaml(outdir=self.outdir)
# else:
# logger.info('NOT dumping results')
# def trial_start(self, env, stats):
# if not self.dry_run:
# with timer('Dumping simulation {} trial {}'.format(self.simulation.name,
# env.name)):
# engine = create_engine('sqlite:///{}.sqlite'.format(env.name), echo=False)
# dc = env.datacollector
# tables = {'env': dc.get_model_vars_dataframe(),
# 'agents': dc.get_agent_vars_dataframe(),
# 'agents': dc.get_agent_vars_dataframe()}
# for table in dc.tables:
# tables[table] = dc.get_table_dataframe(table)
# for (t, df) in tables.items():
# df.to_sql(t, con=engine)
# def sim_end(self, stats):
# with timer('Dumping simulation {}\'s stats'.format(self.simulation.name)):
# engine = create_engine('sqlite:///{}.sqlite'.format(self.simulation.name), echo=False)
# with self.output('{}.sqlite'.format(self.simulation.name), mode='wb') as f:
# self.simulation.dump_sqlite(f)
def get_dc_dfs(dc):
dfs = {'env': dc.get_model_vars_dataframe(),
'agents': dc.get_agent_vars_dataframe()}
for table_name in dc.tables:
dfs[table_name] = dc.get_table_dataframe(table_name)
yield from dfs.items()
class csv(Exporter):
'''Export the state of each environment (and its agents) in a separate CSV file'''
def trial_end(self, env):
with timer('[CSV] Dumping simulation {} trial {} @ dir {}'.format(self.simulation.name,
env.id,
self.outdir)):
for (df_name, df) in get_dc_dfs(env.datacollector):
with self.output('{}.stats.{}.csv'.format(env.id, df_name)) as f:
df.to_csv(f)
class gexf(Exporter):
def trial_end(self, env):
if self.dry_run:
logger.info('Not dumping GEXF in dry_run mode')
return
with timer('[GEXF] Dumping simulation {} trial {}'.format(self.simulation.name,
env.id)):
with self.output('{}.gexf'.format(env.id), mode='wb') as f:
self.dump_gexf(env, f)
def dump_gexf(self, env, f):
G = env.history_to_graph()
# Workaround for geometric models
# See soil/soil#4
for node in G.nodes():
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
nx.write_gexf(G, f, version="1.2draft")
class dummy(Exporter):
def sim_start(self):
with self.output('dummy', 'w') as f:
f.write('simulation started @ {}\n'.format(current_time()))
def trial_start(self, env):
with self.output('dummy', 'a') as f:
f.write('trial started @ {}\n'.format(current_time()))
def trial_end(self, env):
with self.output('dummy', 'a') as f:
f.write('trial ended @ {}\n'.format(current_time()))
def sim_end(self):
with self.output('dummy', 'a') as f:
f.write('simulation ended @ {}\n'.format(current_time()))
class graphdrawing(Exporter):
def trial_end(self, env):
# Outside effects
fig = plt.figure()
nx.draw(env.G, node_size=10, width=0.2, pos=nx.spring_layout(env.G, scale=100), ax=fig.add_subplot(111))
with open('graph-{}.png'.format(env.id), 'wb') as f:
fig.savefig(f)
'''
Convert an environment into a NetworkX graph
'''
def env_to_graph(env, history=None):
G = nx.Graph(env.G)
for agent in env.network_agents:
attributes = {'agent': str(agent.__class__)}
lastattributes = {}
spells = []
lastvisible = False
laststep = None
if not history:
history = sorted(list(env.state_to_tuples()))
for _, t_step, attribute, value in history:
if attribute == 'visible':
nowvisible = value
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
continue
key = 'attr_' + attribute
if key not in attributes:
attributes[key] = list()
if key not in lastattributes:
lastattributes[key] = (value, t_step)
elif lastattributes[key][0] != value:
last_value, laststep = lastattributes[key]
commit_value = (last_value, laststep, t_step)
if key not in attributes:
attributes[key] = list()
attributes[key].append(commit_value)
lastattributes[key] = (value, t_step)
for k, v in lastattributes.items():
attributes[k].append((v[0], v[1], None))
if lastvisible:
spells.append((laststep, None))
if spells:
G.add_node(agent.id, spells=spells, **attributes)
else:
G.add_node(agent.id, **attributes)
return G
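# Note: the graph produced above encodes node dynamics the way nx.write_gexf
# expects for dynamic GEXF: a 'spells' list of (start, end) visibility
# intervals per node, plus 'attr_<name>' lists of (value, start, end) tuples
# for every recorded attribute.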


@@ -1,281 +0,0 @@
import time
import os
import pandas as pd
import sqlite3
import copy
from collections import UserDict, namedtuple
from . import utils
class History:
"""
Store and retrieve values from a sqlite database.
"""
def __init__(self, db_path=None, name=None, dir_path=None, backup=True):
if db_path is None and name:
db_path = os.path.join(dir_path or os.getcwd(),
'{}.db.sqlite'.format(name))
if db_path:
if backup and os.path.exists(db_path):
newname = db_path + '.backup{}.sqlite'.format(time.time())
os.rename(db_path, newname)
else:
db_path = ":memory:"
self.db_path = db_path
self.db = db_path
with self.db:
self.db.execute('''CREATE TABLE IF NOT EXISTS history (agent_id text, t_step int, key text, value text)''')
self.db.execute('''CREATE TABLE IF NOT EXISTS value_types (key text, value_type text)''')
self.db.execute('''CREATE UNIQUE INDEX IF NOT EXISTS idx_history ON history (agent_id, t_step, key);''')
self._dtypes = {}
self._tups = []
@property
def db(self):
try:
self._db.cursor()
except sqlite3.ProgrammingError:
self.db = None # Reset the database
return self._db
@db.setter
def db(self, db_path=None):
db_path = db_path or self.db_path
if isinstance(db_path, str):
self._db = sqlite3.connect(db_path)
else:
self._db = db_path
@property
def dtypes(self):
self.read_types()
return {k:v[0] for k, v in self._dtypes.items()}
def save_tuples(self, tuples):
'''
Save a series of tuples, converting them to records if necessary
'''
self.save_records(Record(*tup) for tup in tuples)
def save_records(self, records):
'''
Save a collection of records
'''
for record in records:
if not isinstance(record, Record):
record = Record(*record)
self.save_record(*record)
def save_record(self, agent_id, t_step, key, value):
'''
Save a single record to the database.
Writes are cached and flushed periodically.
'''
value = self.convert(key, value)
self._tups.append(Record(agent_id=agent_id,
t_step=t_step,
key=key,
value=value))
if len(self._tups) > 100:
self.flush_cache()
def convert(self, key, value):
"""Get the serialized value for a given key."""
if key not in self._dtypes:
self.read_types()
if key not in self._dtypes:
name = utils.name(value)
serializer = utils.serializer(name)
deserializer = utils.deserializer(name)
self._dtypes[key] = (name, serializer, deserializer)
with self.db:
self.db.execute("replace into value_types (key, value_type) values (?, ?)", (key, name))
return self._dtypes[key][1](value)
def recover(self, key, value):
"""Get the deserialized value for a given key from its serialized representation."""
if key not in self._dtypes:
self.read_types()
if key not in self._dtypes:
raise ValueError("Unknown datatype for {} and {}".format(key, value))
return self._dtypes[key][2](value)
def flush_cache(self):
'''
Use a cache to save state changes to avoid opening a session for every change.
The cache will be flushed at the end of the simulation, and when history is accessed.
'''
with self.db:
for rec in self._tups:
self.db.execute("replace into history(agent_id, t_step, key, value) values (?, ?, ?, ?)", (rec.agent_id, rec.t_step, rec.key, rec.value))
self._tups = list()
def to_tuples(self):
self.flush_cache()
with self.db:
res = self.db.execute("select agent_id, t_step, key, value from history ").fetchall()
for r in res:
agent_id, t_step, key, value = r
value = self.recover(key, value)
yield agent_id, t_step, key, value
def read_types(self):
with self.db:
res = self.db.execute("select key, value_type from value_types ").fetchall()
for k, v in res:
serializer = utils.serializer(v)
deserializer = utils.deserializer(v)
self._dtypes[k] = (v, serializer, deserializer)
def __getitem__(self, key):
self.flush_cache()
key = Key(*key)
agent_ids = [key.agent_id] if key.agent_id is not None else []
t_steps = [key.t_step] if key.t_step is not None else []
keys = [key.key] if key.key is not None else []
df = self.read_sql(agent_ids=agent_ids,
t_steps=t_steps,
keys=keys)
r = Records(df, filter=key, dtypes=self._dtypes)
if r.resolved:
return r.value()
return r
def read_sql(self, keys=None, agent_ids=None, t_steps=None, convert_types=False, limit=-1):
self.read_types()
def escape_and_join(v):
if v is None:
return
return ",".join(map(lambda x: "\'{}\'".format(x), v))
filters = [("key in ({})".format(escape_and_join(keys)), keys),
("agent_id in ({})".format(escape_and_join(agent_ids)), agent_ids)
]
filters = list(k[0] for k in filters if k[1])
last_df = None
if t_steps:
# Look for the last value before the minimum step in the query
min_step = min(t_steps)
last_filters = ['t_step < {}'.format(min_step),]
last_filters = last_filters + filters
condition = ' and '.join(last_filters)
last_query = '''
select h1.*
from history h1
inner join (
select agent_id, key, max(t_step) as t_step
from history
where {condition}
group by agent_id, key
) h2
on h1.agent_id = h2.agent_id and
h1.key = h2.key and
h1.t_step = h2.t_step
'''.format(condition=condition)
last_df = pd.read_sql_query(last_query, self.db)
filters.append("t_step >= '{}' and t_step <= '{}'".format(min_step, max(t_steps)))
condition = ''
if filters:
condition = 'where {} '.format(' and '.join(filters))
query = 'select * from history {} limit {}'.format(condition, limit)
df = pd.read_sql_query(query, self.db)
if last_df is not None:
df = pd.concat([df, last_df])
df_p = df.pivot_table(values='value', index=['t_step'],
columns=['key', 'agent_id'],
aggfunc='first')
for k, v in self._dtypes.items():
if k in df_p:
dtype, _, deserial = v
df_p[k] = df_p[k].fillna(method='ffill').astype(dtype)
if t_steps:
df_p = df_p.reindex(t_steps, method='ffill')
return df_p.ffill()
class Records():
def __init__(self, df, filter=None, dtypes=None):
if not filter:
filter = Key(agent_id=None,
t_step=None,
key=None)
self._df = df
self._filter = filter
self.dtypes = dtypes or {}
super().__init__()
def mask(self, tup):
res = ()
for i, k in zip(tup[:-1], self._filter):
if k is None:
res = res + (i,)
res = res + (tup[-1],)
return res
def filter(self, newKey):
f = list(self._filter)
for ix, i in enumerate(f):
if i is None:
f[ix] = newKey
self._filter = Key(*f)
@property
def resolved(self):
return sum(1 for i in self._filter if i is not None) == 3
def __iter__(self):
for column, series in self._df.iteritems():
key, agent_id = column
for t_step, value in series.iteritems():
r = Record(t_step=t_step,
agent_id=agent_id,
key=key,
value=value)
yield self.mask(r)
def value(self):
if self.resolved:
f = self._filter
try:
i = self._df[f.key][str(f.agent_id)]
ix = i.index.get_loc(f.t_step, method='ffill')
return i.iloc[ix]
except KeyError:
return self.dtypes[f.key][2]()
return list(self)
def __getitem__(self, k):
n = copy.copy(self)
n.filter(k)
if n.resolved:
return n.value()
return n
def __len__(self):
return len(self._df)
def __str__(self):
if self.resolved:
return str(self.value())
return '<Records for [{}]>'.format(self._filter)
Key = namedtuple('Key', ['agent_id', 't_step', 'key'])
Record = namedtuple('Record', 'agent_id t_step key value')
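# For reference, a minimal round trip with the History store removed here:
# records are cached in memory, flushed to sqlite, and queried back with an
# (agent_id, t_step, key) tuple.
#
#   h = History()                                  # no name/path -> in-memory db
#   h.save_record(agent_id='env', t_step=0, key='temperature', value=25)
#   h.flush_cache()
#   h['env', 0, 'temperature']                     # -> 25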

soil/network.py

@@ -0,0 +1,42 @@
from typing import Dict
import os
import sys
import networkx as nx
from . import config, serialization, basestring
def from_config(cfg: config.NetConfig, dir_path: str = None):
if not isinstance(cfg, config.NetConfig):
cfg = config.NetConfig(**cfg)
if cfg.path:
path = cfg.path
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
return method(path, **kwargs)
if cfg.params:
net_args = cfg.params.dict()
net_gen = net_args.pop('generator')
if dir_path not in sys.path:
sys.path.append(dir_path)
method = serialization.deserializer(net_gen,
known_modules=['networkx.generators',])
return method(**net_args)
if isinstance(cfg.topology, basestring) or isinstance(cfg.topology, dict):
return nx.json_graph.node_link_graph(cfg.topology)
return nx.Graph()
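# Illustrative call, assuming NetConfig accepts generator parameters as a
# plain mapping (the exact schema lives in soil.config and is not shown here):
#
#   G = from_config({'params': {'generator': 'complete_graph', 'n': 10}})
#
# This resolves 'complete_graph' through networkx.generators and returns the
# generated graph.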

soil/serialization.py

@@ -0,0 +1,235 @@
import os
import logging
import ast
import sys
import re
import importlib
from glob import glob
from itertools import product, chain
import yaml
import networkx as nx
from jinja2 import Template
logger = logging.getLogger('soil')
# def load_network(network_params, dir_path=None):
# G = nx.Graph()
# if not network_params:
# return G
# if 'path' in network_params:
# path = network_params['path']
# if dir_path and not os.path.isabs(path):
# path = os.path.join(dir_path, path)
# extension = os.path.splitext(path)[1][1:]
# kwargs = {}
# if extension == 'gexf':
# kwargs['version'] = '1.2draft'
# kwargs['node_type'] = int
# try:
# method = getattr(nx.readwrite, 'read_' + extension)
# except AttributeError:
# raise AttributeError('Unknown format')
# G = method(path, **kwargs)
# elif 'generator' in network_params:
# net_args = network_params.copy()
# net_gen = net_args.pop('generator')
# if dir_path not in sys.path:
# sys.path.append(dir_path)
# method = deserializer(net_gen,
# known_modules=['networkx.generators',])
# G = method(**net_args)
# return G
def load_file(infile):
folder = os.path.dirname(infile)
if folder not in sys.path:
sys.path.append(folder)
with open(infile, 'r') as f:
return list(chain.from_iterable(map(expand_template, load_string(f))))
def load_string(string):
yield from yaml.load_all(string, Loader=yaml.FullLoader)
def expand_template(config):
if 'template' not in config:
yield config
return
if 'vars' not in config:
raise ValueError(('You must provide a definition of variables'
' for the template.'))
template = config['template']
if not isinstance(template, str):
template = yaml.dump(template)
template = Template(template)
params = params_for_template(config)
blank_str = template.render({k: 0 for k in params[0].keys()})
blank = list(load_string(blank_str))
if len(blank) > 1:
raise ValueError('Templates must not return more than one configuration')
if 'name' in blank[0]:
raise ValueError('Templates cannot be named, use group instead')
for ps in params:
string = template.render(ps)
for c in load_string(string):
yield c
def params_for_template(config):
sampler_config = config.get('sampler', {'N': 100})
sampler = sampler_config.pop('method', 'SALib.sample.morris.sample')
sampler = deserializer(sampler)
bounds = config['vars']['bounds']
problem = {
'num_vars': len(bounds),
'names': list(bounds.keys()),
'bounds': list(v for v in bounds.values())
}
samples = sampler(problem, **sampler_config)
lists = config['vars'].get('lists', {})
names = list(lists.keys())
values = list(lists.values())
combs = list(product(*values))
allnames = names + problem['names']
allvalues = [(list(i[0])+list(i[1])) for i in product(combs, samples)]
params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
return params
def load_files(*patterns, **kwargs):
for pattern in patterns:
for i in glob(pattern, **kwargs):
for config in load_file(i):
path = os.path.abspath(i)
if 'general' in config and 'dir_path' not in config['general']:
config['general']['dir_path'] = os.path.dirname(path)
yield config, path
def load_config(config):
if isinstance(config, dict):
yield config, os.getcwd()
else:
yield from load_files(config)
builtins = importlib.import_module('builtins')
KNOWN_MODULES = ['soil', ]
def name(value, known_modules=KNOWN_MODULES):
'''Return a name that can be imported, to serialize/deserialize an object'''
if value is None:
return 'None'
if not isinstance(value, type): # Get the class name first
value = type(value)
tname = value.__name__
if hasattr(builtins, tname):
return tname
modname = value.__module__
if modname == '__main__':
return tname
if known_modules and modname in known_modules:
return tname
for kmod in known_modules:
if not kmod:
continue
module = importlib.import_module(kmod)
if hasattr(module, tname):
return tname
return '{}.{}'.format(modname, tname)
def serializer(type_):
if type_ != 'str' and hasattr(builtins, type_):
return repr
return lambda x: x
def serialize(v, known_modules=KNOWN_MODULES):
'''Get a text representation of an object.'''
tname = name(v, known_modules=known_modules)
func = serializer(tname)
return func(v), tname
IS_CLASS = re.compile(r"<class '(.*)'>")
def deserializer(type_, known_modules=KNOWN_MODULES):
if type(type_) != str: # Already deserialized
return type_
if type_ == 'str':
return lambda x='': x
if type_ == 'None':
return lambda x=None: None
if hasattr(builtins, type_): # Check if it's a builtin type
cls = getattr(builtins, type_)
return lambda x=None: ast.literal_eval(x) if x is not None else cls()
match = IS_CLASS.match(type_)
if match:
modname, tname = match.group(1).rsplit(".", 1)
module = importlib.import_module(modname)
cls = getattr(module, tname)
return getattr(cls, 'deserialize', cls)
# Otherwise, see if we can find the module and the class
options = []
for mod in known_modules:
if mod:
options.append((mod, type_))
if '.' in type_: # Fully qualified module
module, type_ = type_.rsplit(".", 1)
options.append((module, type_))
errors = []
for modname, tname in options:
try:
module = importlib.import_module(modname)
cls = getattr(module, tname)
return getattr(cls, 'deserialize', cls)
except (ImportError, AttributeError) as ex:
errors.append((modname, tname, ex))
raise Exception('Could not find type {}. Tried: {}'.format(type_, errors))
def deserialize(type_, value=None, **kwargs):
'''Get an object from a text representation'''
if not isinstance(type_, str):
return type_
des = deserializer(type_, **kwargs)
if value is None:
return des
return des(value)
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
'''Return the list of deserialized objects'''
objects = []
for name in names:
mod = deserialize(name, known_modules=known_modules)
objects.append(mod(*args, **kwargs))
return objects
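# A quick round trip with the helpers above, as a self-contained sketch:
if __name__ == '__main__':
    text, tname = serialize(25)                     # -> ('25', 'int')
    assert deserialize(tname, text) == 25
    # Class names resolve back to the class itself:
    assert deserialize('networkx.classes.graph.Graph') is nx.Graph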


@@ -1,197 +1,149 @@
import os
import time
from time import time as current_time, strftime
import importlib
import sys
import yaml
import traceback
import logging
import networkx as nx
from networkx.readwrite import json_graph
from multiprocessing import Pool
from functools import partial
import pickle
from nxsim import NetworkSimulation
from . import utils, basestring, agents
from . import serialization, utils, basestring, agents
from .environment import Environment
from .utils import logger
from .exporters import default
from .config import Config, convert_old
class Simulation(NetworkSimulation):
#TODO: change documentation for simulation
class Simulation:
"""
Subclass of nxsim.NetworkSimulation with three main differences:
1) agent type can be specified by name or by class.
2) instead of just one type, a network agents distribution can be used.
The distribution specifies the weight (or probability) of each
agent type in the topology. This is an example distribution: ::
[
{'agent_type': 'agent_type_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_type': 'agent_type_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_type_1'.
3) if no initial state is given, each node's state will be set
to `{'id': 0}`.
Parameters
----------
name : str, optional
config (optional): :class:`config.Config`
name of the Simulation
topology : networkx.Graph instance, optional
network_params : dict
parameters used to create a topology with networkx, if no topology is given
network_agents : dict
definition of agents to populate the topology with
agent_type : NetworkAgent subclass, optional
Default type of NetworkAgent to use for nodes not specified in network_agents
states : list, optional
List of initial states corresponding to the nodes in the topology. Basic form is a list of integers
whose value indicates the state
dir_path : str, optional
Directory path where to save pickled objects
seed : str, optional
Seed to use for the random generator
num_trials : int, optional
Number of independent simulation runs
max_time : int, optional
Maximum time for which the simulation should run
environment_params : dict, optional
Dictionary of globally-shared environmental parameters
environment_agents: dict, optional
Similar to network_agents. Distribution of Agents that control the environment
environment_class: soil.environment.Environment subclass, optional
Class for the environment. It defaults to soil.environment.Environment
load_module : str, module name, deprecated
If specified, soil will load the content of this module under 'soil.agents.custom'
kwargs: parameters to use to initialize a new configuration, if one has not been provided.
"""
def __init__(self, name=None, topology=None, network_params=None,
network_agents=None, agent_type=None, states=None,
default_state=None, interval=1, dump=None, dry_run=False,
dir_path=None, num_trials=1, max_time=100,
load_module=None, seed=None,
environment_agents=None, environment_params=None,
environment_class=None, **kwargs):
def __init__(self, config=None,
**kwargs):
if kwargs:
cfg = {}
if config:
cfg.update(config.dict(include_defaults=False))
cfg.update(kwargs)
config = Config(**cfg)
if not config:
raise ValueError("You need to specify a simulation configuration")
self.config = config
if topology is None:
topology = utils.load_network(network_params,
dir_path=dir_path)
elif isinstance(topology, basestring) or isinstance(topology, dict):
topology = json_graph.node_link_graph(topology)
self.load_module = load_module
self.topology = nx.Graph(topology)
self.network_params = network_params
self.name = name or 'UnnamedSimulation'
self.num_trials = num_trials
self.max_time = max_time
self.default_state = default_state or {}
self.dir_path = dir_path or os.getcwd()
self.interval = interval
self.seed = str(seed) or str(time.time())
self.dump = dump
self.dry_run = dry_run
sys.path += [self.dir_path, os.getcwd()]
self.environment_params = environment_params or {}
self.environment_class = utils.deserialize(environment_class,
known_modules=['soil.environment', ]) or Environment
environment_agents = environment_agents or []
self.environment_agents = agents._convert_agent_types(environment_agents,
known_modules=[self.load_module])
distro = agents.calculate_distribution(network_agents,
agent_type)
self.network_agents = agents._convert_agent_types(distro,
known_modules=[self.load_module])
self.states = agents._validate_states(states,
self.topology)
@property
def name(self) -> str:
return self.config.general.id
def run_simulation(self, *args, **kwargs):
return self.run(*args, **kwargs)
def run(self, *args, **kwargs):
return list(self.run_simulation_gen(*args, **kwargs))
'''Run the simulation and return the list of resulting environments'''
return list(self.run_gen(*args, **kwargs))
def run_simulation_gen(self, *args, parallel=False, dry_run=False,
**kwargs):
p = Pool()
with utils.timer('simulation {}'.format(self.name)):
if parallel:
func = partial(self.run_trial_exceptions, dry_run=dry_run or self.dry_run,
return_env=True,
**kwargs)
for i in p.imap_unordered(func, range(self.num_trials)):
if isinstance(i, Exception):
logger.error('Trial failed:\n\t{}'.format(i.message))
continue
yield i
else:
for i in range(self.num_trials):
yield self.run_trial(i, dry_run = dry_run or self.dry_run, **kwargs)
if not (dry_run or self.dry_run):
logger.info('Dumping results to {}'.format(self.dir_path))
self.dump_pickle(self.dir_path)
self.dump_yaml(self.dir_path)
else:
logger.info('NOT dumping results')
def _run_sync_or_async(self, parallel=False, **kwargs):
if parallel and not os.environ.get('SENPY_DEBUG', None):
p = Pool()
func = partial(self.run_trial_exceptions, **kwargs)
for i in p.imap_unordered(func, range(self.config.general.num_trials)):
if isinstance(i, Exception):
logger.error('Trial failed:\n\t%s', i.message)
continue
yield i
else:
for i in range(self.config.general.num_trials):
yield self.run_trial(trial_id=i,
**kwargs)
def get_env(self, trial_id = 0, **kwargs):
opts=self.environment_params.copy()
env_name='{}_trial_{}'.format(self.name, trial_id)
opts.update({
'name': env_name,
'topology': self.topology.copy(),
'seed': self.seed+env_name,
'initial_time': 0,
'dry_run': self.dry_run,
'interval': self.interval,
'network_agents': self.network_agents,
'states': self.states,
'default_state': self.default_state,
'environment_agents': self.environment_agents,
'dir_path': self.dir_path,
})
opts.update(kwargs)
env=self.environment_class(**opts)
def run_gen(self, parallel=False, dry_run=False,
exporters=[default, ], outdir=None, exporter_params={},
log_level=None,
**kwargs):
'''Run the simulation and yield the resulting environments.'''
if log_level:
logger.setLevel(log_level)
logger.info('Using exporters: %s', exporters or [])
logger.info('Output directory: %s', outdir)
exporters = serialization.deserialize_all(exporters,
simulation=self,
known_modules=['soil.exporters',],
dry_run=dry_run,
outdir=outdir,
**exporter_params)
with utils.timer('simulation {}'.format(self.config.general.id)):
for exporter in exporters:
exporter.sim_start()
for env in self._run_sync_or_async(parallel=parallel,
log_level=log_level,
**kwargs):
for exporter in exporters:
exporter.trial_start(env)
for exporter in exporters:
exporter.trial_end(env)
yield env
for exporter in exporters:
exporter.sim_end()
def get_env(self, trial_id=0, **kwargs):
'''Create an environment for a trial of the simulation'''
# opts = self.environment_params.copy()
# opts.update({
# 'name': '{}_trial_{}'.format(self.name, trial_id),
# 'topology': self.topology.copy(),
# 'network_params': self.network_params,
# 'seed': '{}_trial_{}'.format(self.seed, trial_id),
# 'initial_time': 0,
# 'interval': self.interval,
# 'network_agents': self.network_agents,
# 'initial_time': 0,
# 'states': self.states,
# 'dir_path': self.dir_path,
# 'default_state': self.default_state,
# 'history': bool(self._history),
# 'environment_agents': self.environment_agents,
# })
# opts.update(kwargs)
print(self.config)
env = Environment.from_config(self.config, trial_id=trial_id, **kwargs)
return env
def run_trial(self, trial_id = 0, until = None, return_env = True, **opts):
"""Run a single trial of the simulation
Parameters
----------
trial_id : int
def run_trial(self, trial_id=None, until=None, log_level=logging.INFO, **opts):
"""
Run a single trial of the simulation
"""
trial_id = trial_id if trial_id is not None else current_time()
if log_level:
logger.setLevel(log_level)
# Set-up trial environment and graph
until=until or self.max_time
env=self.get_env(trial_id = trial_id, **opts)
until = until or self.config.general.max_time
env = self.get_env(trial_id, **opts)
# Set up agents on nodes
with utils.timer('Simulation {} trial {}'.format(self.name, trial_id)):
with utils.timer('Simulation {} trial {}'.format(self.config.general.id, trial_id)):
env.run(until)
if self.dump and not self.dry_run:
with utils.timer('Dumping simulation {} trial {}'.format(self.name, trial_id)):
env.dump(formats = self.dump)
if return_env:
return env
return env
def run_trial_exceptions(self, *args, **kwargs):
'''
A wrapper for run_trial that catches exceptions and returns them.
@@ -200,90 +152,47 @@ class Simulation(NetworkSimulation):
try:
return self.run_trial(*args, **kwargs)
except Exception as ex:
c = ex.__cause__
c.message = ''.join(traceback.format_tb(c.__traceback__)[3:])
return c
if ex.__cause__ is not None:
ex = ex.__cause__
ex.message = ''.join(traceback.format_exception(type(ex), ex, ex.__traceback__)[:])
return ex
def to_dict(self):
return self.__getstate__()
return self.config.dict()
def to_yaml(self):
return yaml.dump(self.to_dict())
def dump_yaml(self, dir_path = None, file_name = None):
dir_path=dir_path or self.dir_path
if not os.path.exists(dir_path):
os.makedirs(dir_path)
if not file_name:
file_name=os.path.join(dir_path,
'{}.dumped.yml'.format(self.name))
with open(file_name, 'w') as f:
f.write(self.to_yaml())
def dump_pickle(self, dir_path = None, pickle_name = None):
dir_path=dir_path or self.dir_path
if not os.path.exists(dir_path):
os.makedirs(dir_path)
if not pickle_name:
pickle_name=os.path.join(dir_path,
'{}.simulation.pickle'.format(self.name))
with open(pickle_name, 'wb') as f:
pickle.dump(self, f)
def __getstate__(self):
state={}
for k, v in self.__dict__.items():
if k[0] != '_':
state[k]=v
state['topology']=json_graph.node_link_data(self.topology)
state['network_agents']=agents.serialize_distribution(self.network_agents,
known_modules = [])
state['environment_agents']=agents.serialize_distribution(self.environment_agents,
known_modules = [])
state['environment_class']=utils.serialize(self.environment_class,
known_modules=['soil.environment'])[1] # func, name
if state['load_module'] is None:
del state['load_module']
return state
def __setstate__(self, state):
self.__dict__ = state
self.load_module = getattr(self, 'load_module', None)
if self.dir_path not in sys.path:
sys.path += [self.dir_path, os.getcwd()]
self.topology = json_graph.node_link_graph(state['topology'])
self.network_agents = agents.calculate_distribution(agents._convert_agent_types(self.network_agents))
self.environment_agents = agents._convert_agent_types(self.environment_agents,
known_modules=[self.load_module])
self.environment_class = utils.deserialize(self.environment_class,
known_modules=[self.load_module, 'soil.environment', ]) # func, name
return state
return yaml.dump(self.config.dict())
def from_config(config):
config = list(utils.load_config(config))
def all_from_config(config):
configs = list(serialization.load_config(config))
for config, path in configs:
if config.get('version', '1') == '1':
config = convert_old(config)
if not isinstance(config, Config):
config = Config(**config)
if not config.general.dir_path:
config.general.dir_path = os.path.dirname(path)
sim = Simulation(config=config)
yield sim
def from_config(conf_or_path):
lst = list(all_from_config(conf_or_path))
if len(lst) > 1:
raise AttributeError('Provide only one configuration')
return lst[0]
def from_old_config(conf_or_path):
config = list(serialization.load_config(conf_or_path))
if len(config) > 1:
raise AttributeError('Provide only one configuration')
config = config[0][0]
sim = Simulation(**config)
return sim
config = convert_old(config[0][0])
return Simulation(config)
def run_from_config(*configs, results_dir='soil_output', dump=None, timestamp=False, **kwargs):
for config_def in configs:
# logger.info("Found {} config(s)".format(len(ls)))
for config, _ in utils.load_config(config_def):
name = config.get('name', 'unnamed')
logger.info("Using config(s): {name}".format(name=name))
if timestamp:
sim_folder = '{}_{}'.format(name,
time.strftime("%Y-%m-%d_%H:%M:%S"))
else:
sim_folder = name
dir_path = os.path.join(results_dir, sim_folder)
if dump is not None:
config['dump'] = dump
sim = Simulation(dir_path=dir_path, **config)
logger.info('Dumping results to {} : {}'.format(sim.dir_path, sim.dump))
sim.run_simulation(**kwargs)
def run_from_config(*configs, **kwargs):
for config_def in configs:
for sim in all_from_config(config_def):
logger.info("Using config(s): {name}".format(name=sim.name))
sim.run_simulation(**kwargs)
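# Typical entry point, as a sketch (the file name is illustrative):
#
#   sim = from_config('examples/spread.yml')    # one YAML file -> one Simulation
#   envs = sim.run(parallel=False)              # one resulting Environment per trial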

soil/time.py

@@ -0,0 +1,80 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop
import math
from .utils import logger
from mesa import Agent as MesaAgent
INFINITY = float('inf')
class When:
def __init__(self, time):
if isinstance(time, When):
time = time._time
self._time = time
def abs(self, time):
return self._time
NEVER = When(INFINITY)
class Delta(When):
def __init__(self, delta):
self._delta = delta
def __eq__(self, other):
return self._delta == other._delta
def abs(self, time):
return time + self._delta
class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
In each activation, each agent will update its 'next_time'.
"""
def __init__(self, *args, **kwargs):
super().__init__(self)
self._queue = []
self.next_time = 0
def add(self, agent: MesaAgent):
if agent.unique_id not in self._agents:
heappush(self._queue, (self.time, agent.unique_id))
super().add(agent)
def step(self) -> None:
"""
Executes agents in order, one at a time. After each step,
an agent will signal when it wants to be scheduled next.
"""
if self.next_time == INFINITY:
return
self.time = self.next_time
when = self.time
while self._queue and self._queue[0][0] == self.time:
(when, agent_id) = heappop(self._queue)
logger.debug(f'Stepping agent {agent_id}')
returned = self._agents[agent_id].step()
when = (returned or Delta(1)).abs(self.time)
if when < self.time:
raise Exception("Cannot schedule an agent for a time in the past ({} < {})".format(when, self.time))
heappush(self._queue, (when, agent_id))
self.steps += 1
if not self._queue:
self.time = INFINITY
self.next_time = INFINITY
return
self.next_time = self._queue[0][0]
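# A minimal sketch of how an agent drives this scheduler: the value returned
# from step() (a When, a Delta, or None for "one unit later") decides when the
# agent is activated next. PatientAgent is purely illustrative.
if __name__ == '__main__':
    class PatientAgent(MesaAgent):
        def step(self):
            logger.debug('agent %s stepping', self.unique_id)
            return Delta(3)    # ask to be activated again three time units from now

    schedule = TimedActivation()
    agent = PatientAgent(unique_id=0, model=None)   # a real model would be an Environment
    schedule.add(agent)
    schedule.step()            # runs at t=0, next activation queued for t=3
    schedule.step()            # time jumps to 3 and the agent runs again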


@@ -1,154 +1,91 @@
import os
import ast
import yaml
import logging
import importlib
import time
from glob import glob
from random import random
from copy import deepcopy
from time import time as current_time, strftime, gmtime, localtime
import os
import networkx as nx
from shutil import copyfile
from contextlib import contextmanager
logger = logging.getLogger('soil')
logger.setLevel(logging.INFO)
def load_network(network_params, dir_path=None):
if network_params is None:
return nx.Graph()
path = network_params.get('path', None)
if path:
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == 'gexf':
kwargs['version'] = '1.2draft'
kwargs['node_type'] = int
try:
method = getattr(nx.readwrite, 'read_' + extension)
except AttributeError:
raise AttributeError('Unknown format')
return method(path, **kwargs)
net_args = network_params.copy()
net_type = net_args.pop('generator')
method = getattr(nx.generators, net_type)
return method(**net_args)
def load_file(infile):
with open(infile, 'r') as f:
return list(yaml.load_all(f))
def load_files(*patterns):
for pattern in patterns:
for i in glob(pattern):
for config in load_file(i):
yield config, os.path.abspath(i)
def load_config(config):
if isinstance(config, dict):
yield config, None
else:
yield from load_files(config)
# logging.basicConfig()
# logger.setLevel(logging.INFO)
@contextmanager
def timer(name='task', pre="", function=logger.info, to_object=None):
start = time.time()
start = current_time()
function('{}Starting {} at {}.'.format(pre, name,
time.strftime("%X", time.gmtime(start))))
strftime("%X", gmtime(start))))
yield start
end = time.time()
end = current_time()
function('{}Finished {} at {} in {} seconds'.format(pre, name,
time.strftime("%X", time.gmtime(end)),
strftime("%X", gmtime(end)),
str(end-start)))
if to_object:
to_object.start = start
to_object.end = end
builtins = importlib.import_module('builtins')
def name(value, known_modules=[]):
'''Return a name that can be imported, to serialize/deserialize an object'''
if value is None:
return 'None'
if not isinstance(value, type): # Get the class name first
value = type(value)
tname = value.__name__
if hasattr(builtins, tname):
return tname
modname = value.__module__
if modname == '__main__':
return tname
if known_modules and modname in known_modules:
return tname
for kmod in known_modules:
if not kmod:
def safe_open(path, mode='r', backup=True, **kwargs):
outdir = os.path.dirname(path)
if outdir and not os.path.exists(outdir):
os.makedirs(outdir)
if backup and 'w' in mode and os.path.exists(path):
creation = os.path.getctime(path)
stamp = strftime('%Y-%m-%d_%H.%M.%S', localtime(creation))
backup_dir = os.path.join(outdir, 'backup')
if not os.path.exists(backup_dir):
os.makedirs(backup_dir)
newpath = os.path.join(backup_dir, '{}@{}'.format(os.path.basename(path),
stamp))
copyfile(path, newpath)
return open(path, mode=mode, **kwargs)
@contextmanager
def open_or_reuse(f, *args, **kwargs):
try:
with safe_open(f, *args, **kwargs) as f:
yield f
except (AttributeError, TypeError):
yield f
def flatten_dict(d):
if not isinstance(d, dict):
return d
return dict(_flatten_dict(d))
def _flatten_dict(d, prefix=''):
if not isinstance(d, dict):
# print('END:', prefix, d)
yield prefix, d
return
if prefix:
prefix = prefix + '.'
for k, v in d.items():
# print(k, v)
res = list(_flatten_dict(v, prefix='{}{}'.format(prefix, k)))
# print('RES:', res)
yield from res
def unflatten_dict(d):
out = {}
for k, v in d.items():
target = out
if not isinstance(k, str):
target[k] = v
continue
module = importlib.import_module(kmod)
if hasattr(module, tname):
return tname
return '{}.{}'.format(modname, tname)
def serializer(type_):
if type_ != 'str' and hasattr(builtins, type_):
return repr
return lambda x: x
def serialize(v, known_modules=[]):
'''Get a text representation of an object.'''
tname = name(v, known_modules=known_modules)
func = serializer(tname)
return func(v), tname
def deserializer(type_, known_modules=[]):
if type_ == 'str':
return lambda x='': x
if type_ == 'None':
return lambda x=None: None
if hasattr(builtins, type_): # Check if it's a builtin type
cls = getattr(builtins, type_)
return lambda x=None: ast.literal_eval(x) if x is not None else cls()
# Otherwise, see if we can find the module and the class
modules = known_modules or []
options = []
for mod in modules:
if mod:
options.append((mod, type_))
if '.' in type_: # Fully qualified module
module, type_ = type_.rsplit(".", 1)
options.append ((module, type_))
errors = []
for modname, tname in options:
try:
module = importlib.import_module(modname)
cls = getattr(module, tname)
return getattr(cls, 'deserialize', cls)
except (ImportError, AttributeError) as ex:
errors.append((modname, tname, ex))
raise Exception('Could not find type {}. Tried: {}'.format(type_, errors))
def deserialize(type_, value=None, **kwargs):
'''Get an object from a text representation'''
if not isinstance(type_, str):
return type_
des = deserializer(type_, **kwargs)
if value is None:
return des
return des(value)
tokens = k.split('.')
if len(tokens) < 2:
target[k] = v
continue
for token in tokens[:-1]:
if token not in target:
target[token] = {}
target = target[token]
target[tokens[-1]] = v
return out
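# flatten_dict and unflatten_dict are inverses for dotted string keys, e.g.
# nested configuration sections:
if __name__ == '__main__':
    flat = flatten_dict({'general': {'id': 'sim', 'num_trials': 3}})
    # -> {'general.id': 'sim', 'general.num_trials': 3}
    assert unflatten_dict(flat) == {'general': {'id': 'sim', 'num_trials': 3}}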

soil/visualization.py

@@ -0,0 +1,5 @@
from mesa.visualization.UserParam import UserSettableParameter
class UserSettableParameter(UserSettableParameter):
def __str__(self):
return str(self.value)


@@ -1,255 +0,0 @@
import random
import networkx as nx
from soil.agents import BaseAgent, FSM, state, default_state
from scipy.spatial import cKDTree as KDTree
global betweenness_centrality_global
global degree_centrality_global
betweenness_centrality_global = None
degree_centrality_global = None
class TerroristSpreadModel(FSM):
"""
Settings:
information_spread_intensity
terrorist_additional_influence
min_vulnerability (optional; defaults to zero)
max_vulnerability
prob_interaction
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
global betweenness_centrality_global
global degree_centrality_global
if betweenness_centrality_global == None:
betweenness_centrality_global = nx.betweenness_centrality(self.global_topology)
if degree_centrality_global == None:
degree_centrality_global = nx.degree_centrality(self.global_topology)
self.information_spread_intensity = environment.environment_params['information_spread_intensity']
self.terrorist_additional_influence = environment.environment_params['terrorist_additional_influence']
self.prob_interaction = environment.environment_params['prob_interaction']
if self['id'] == self.civilian.id: # Civilian
self.initial_belief = random.uniform(0.00, 0.5)
elif self['id'] == self.terrorist.id: # Terrorist
self.initial_belief = random.uniform(0.8, 1.00)
elif self['id'] == self.leader.id: # Leader
self.initial_belief = 1.00
else:
raise Exception('Invalid state id: {}'.format(self['id']))
if 'min_vulnerability' in environment.environment_params:
self.vulnerability = random.uniform( environment.environment_params['min_vulnerability'], environment.environment_params['max_vulnerability'] )
else :
self.vulnerability = random.uniform( 0, environment.environment_params['max_vulnerability'] )
self.mean_belief = self.initial_belief
self.betweenness_centrality = betweenness_centrality_global[self.id]
self.degree_centrality = degree_centrality_global[self.id]
# self.state['radicalism'] = self.mean_belief
def count_neighboring_agents(self, state_id=None):
if isinstance(state_id, list):
return len(self.get_neighboring_agents(state_id))
else:
return len(super().get_agents(state_id, limit_neighbors=True))
def get_neighboring_agents(self, state_id=None):
if isinstance(state_id, list):
_list = []
for i in state_id:
_list += super().get_agents(i, limit_neighbors=True)
return [ neighbour for neighbour in _list if isinstance(neighbour, TerroristSpreadModel) ]
else:
_list = super().get_agents(state_id, limit_neighbors=True)
return [ neighbour for neighbour in _list if isinstance(neighbour, TerroristSpreadModel) ]
@state
def civilian(self):
if self.count_neighboring_agents() > 0:
neighbours = []
for neighbour in self.get_neighboring_agents():
if random.random() < self.prob_interaction:
neighbours.append(neighbour)
influence = sum( neighbour.degree_centrality for neighbour in neighbours )
mean_belief = sum( neighbour.mean_belief * neighbour.degree_centrality / influence for neighbour in neighbours )
self.initial_belief = self.mean_belief
mean_belief = mean_belief * self.information_spread_intensity + self.initial_belief * ( 1 - self.information_spread_intensity )
self.mean_belief = mean_belief * self.vulnerability + self.initial_belief * ( 1 - self.vulnerability )
if self.mean_belief >= 0.8:
return self.terrorist
@state
def leader(self):
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
if self.count_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]) > 0:
for neighbour in self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]):
if neighbour.betweenness_centrality > self.betweenness_centrality:
return self.terrorist
@state
def terrorist(self):
if self.count_neighboring_agents(state_id=[self.terrorist.id, self.leader.id]) > 0:
neighbours = self.get_neighboring_agents(state_id=[self.terrorist.id, self.leader.id])
influence = sum( neighbour.degree_centrality for neighbour in neighbours )
mean_belief = sum( neighbour.mean_belief * neighbour.degree_centrality / influence for neighbour in neighbours )
self.initial_belief = self.mean_belief
self.mean_belief = mean_belief * self.vulnerability + self.initial_belief * ( 1 - self.vulnerability )
self.mean_belief = self.mean_belief ** ( 1 - self.terrorist_additional_influence )
if self.count_neighboring_agents(state_id=self.leader.id) == 0 and self.count_neighboring_agents(state_id=self.terrorist.id) > 0:
max_betweenness_centrality = self
for neighbour in self.get_neighboring_agents(state_id=self.terrorist.id):
if neighbour.betweenness_centrality > max_betweenness_centrality.betweenness_centrality:
max_betweenness_centrality = neighbour
if max_betweenness_centrality == self:
return self.leader
def add_edge(self, G, source, target):
G.add_edge(source.id, target.id, start=self.env._now)
def link_search(self, G, node, radius):
pos = nx.get_node_attributes(G, 'pos')
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
edge_indexes = kdtree.query_pairs(radius, 2)
_list = [ edge[int(not edge.index(node))] for edge in edge_indexes if node in edge ]
return [ G.nodes()[index]['agent'] for index in _list ]
def social_search(self, G, node, steps):
nodes = list(nx.ego_graph(G, node, radius=steps).nodes())
nodes.remove(node)
return [ G.nodes()[index]['agent'] for index in nodes ]
class TrainingAreaModel(FSM):
"""
Settings:
training_influence
min_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.training_influence = environment.environment_params['training_influence']
if 'min_vulnerability' in environment.environment_params:
self.min_vulnerability = environment.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
@default_state
@state
def terrorist(self):
for neighbour in self.get_neighboring_agents():
if isinstance(neighbour, TerroristSpreadModel) and neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.training_influence )
class HavenModel(FSM):
"""
Settings:
haven_influence
min_vulnerability
max_vulnerability
Requires TerroristSpreadModel.
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.haven_influence = environment.environment_params['haven_influence']
if 'min_vulnerability' in environment.environment_params:
self.min_vulnerability = environment.environment_params['min_vulnerability']
else: self.min_vulnerability = 0
self.max_vulnerability = environment.environment_params['max_vulnerability']
@state
def civilian(self):
for neighbour_agent in self.get_neighboring_agents():
if isinstance(neighbour_agent, TerroristSpreadModel) and neighbour_agent['id'] == neighbour_agent.civilian.id:
for neighbour in self.get_neighboring_agents():
if isinstance(neighbour, TerroristSpreadModel) and neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * ( 1 - self.haven_influence )
return self.civilian
return self.terrorist
@state
def terrorist(self):
for neighbour in self.get_neighboring_agents():
if isinstance(neighbour, TerroristSpreadModel) and neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** ( 1 - self.haven_influence )
return self.terrorist
class TerroristNetworkModel(TerroristSpreadModel):
"""
Settings:
sphere_influence
vision_range
weight_social_distance
weight_link_distance
"""
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.vision_range = environment.environment_params['vision_range']
self.sphere_influence = environment.environment_params['sphere_influence']
self.weight_social_distance = environment.environment_params['weight_social_distance']
self.weight_link_distance = environment.environment_params['weight_link_distance']
@state
def terrorist(self):
self.update_relationships()
return super().terrorist()
@state
def leader(self):
self.update_relationships()
return super().leader()
def update_relationships(self):
if self.count_neighboring_agents(state_id=self.civilian.id) == 0:
close_ups = self.link_search(self.global_topology, self.id, self.vision_range)
step_neighbours = self.social_search(self.global_topology, self.id, self.sphere_influence)
search = list(set(close_ups).union(step_neighbours))
neighbours = self.get_neighboring_agents()
search = [item for item in search if item not in neighbours and isinstance(item, TerroristNetworkModel)]
for agent in search:
social_distance = 1 / self.shortest_path_length(self.global_topology, self.id, agent.id)
spatial_proximity = ( 1 - self.get_distance(self.global_topology, self.id, agent.id) )
prob_new_interaction = self.weight_social_distance * social_distance + self.weight_link_distance * spatial_proximity
if agent['id'] == agent.civilian.id and random.random() < prob_new_interaction:
self.add_edge(self.global_topology, self, agent)
break
def get_distance(self, G, source, target):
source_x, source_y = nx.get_node_attributes(G, 'pos')[source]
target_x, target_y = nx.get_node_attributes(G, 'pos')[target]
dx = abs( source_x - target_x )
dy = abs( source_y - target_y )
return ( dx ** 2 + dy ** 2 ) ** ( 1 / 2 )
def shortest_path_length(self, G, source, target):
try:
return nx.shortest_path_length(G, source, target)
except nx.NetworkXNoPath:
return float('inf')
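Taken together, the classes above pull their parameters from the environment. A minimal sketch of the environment_params such a simulation might use; the keys mirror the settings listed in the docstrings and constructors above, but the values here are purely illustrative:
environment_params = {
    # TerroristSpreadModel
    'information_spread_intensity': 0.7,
    'terrorist_additional_influence': 0.035,
    'min_vulnerability': 0.5,
    'max_vulnerability': 0.7,
    # TrainingAreaModel
    'training_influence': 0.1,
    # HavenModel
    'haven_influence': 0.1,
    # TerroristNetworkModel
    'vision_range': 0.3,
    'sphere_influence': 2,
    'weight_social_distance': 0.035,
    'weight_link_distance': 0.035,
}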


@@ -19,7 +19,7 @@ from xml.etree.ElementTree import tostring
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor
from ..simulation import SoilSimulation
from ..simulation import Simulation
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
@@ -118,9 +118,9 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
elif msg['type'] == 'download_gexf':
G = self.trials[ int(msg['data']) ].history_to_graph()
for node in G.nodes():
if 'pos' in G.node[node]:
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
writer = nx.readwrite.gexf.GEXFWriter(version='1.2draft')
writer.add_graph(G)
self.write_message({'type': 'download_gexf',
@@ -130,9 +130,9 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
elif msg['type'] == 'download_json':
G = self.trials[ int(msg['data']) ].history_to_graph()
for node in G.nodes():
if 'pos' in G.node[node]:
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
if 'pos' in G.nodes[node]:
G.nodes[node]['viz'] = {"position": {"x": G.nodes[node]['pos'][0], "y": G.nodes[node]['pos'][1], "z": 0.0}}
del (G.nodes[node]['pos'])
self.write_message({'type': 'download_json',
'filename': self.config['name'] + '_trial_' + str(msg['data']),
'data': nx.node_link_data(G) })
@@ -168,7 +168,7 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
@run_on_executor
def nonblocking(self, config):
simulation = SoilSimulation(**config)
simulation = Simulation(**config)
return simulation.run()
@tornado.gen.coroutine
@@ -180,7 +180,7 @@ class SocketHandler(tornado.websocket.WebSocketHandler):
with self.logging(self.simulation_name):
try:
config = dict(**self.config)
config['dir_path'] = os.path.join(self.application.dir_path, config['name'])
config['outdir'] = os.path.join(self.application.outdir, config['name'])
config['dump'] = self.application.dump
self.trials = yield self.nonblocking(config)
@@ -232,12 +232,12 @@ class ModularServer(tornado.web.Application):
settings = {'debug': True,
'template_path': ROOT + '/templates'}
def __init__(self, dump=False, dir_path='output', name='SOIL', verbose=True, *args, **kwargs):
def __init__(self, dump=False, outdir='output', name='SOIL', verbose=True, *args, **kwargs):
self.verbose = verbose
self.name = name
self.dump = dump
self.dir_path = dir_path
self.outdir = outdir
# Initializing the application itself:
super().__init__(self.handlers, **self.settings)
@@ -271,4 +271,4 @@ def main():
parser.add_argument('--verbose', '-v', help='verbose mode', action='store_true')
args = parser.parse_args()
run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)
run(name=args.name, port=(args.port[0] if isinstance(args.port, list) else args.port), verbose=args.verbose)


@@ -1 +1,4 @@
pytest
pytest
pytest-profiling
scipy>=1.3
tornado


@@ -0,0 +1,49 @@
---
version: '2'
general:
id: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
topologies:
default:
params:
generator: complete_graph
n: 10
agents:
default:
agent_class: CounterModel
state:
times: 1
network:
topology: 'default'
distribution:
- agent_class: CounterModel
weight: 0.4
state:
state_id: 0
- agent_class: AggregatedCounter
weight: 0.6
override:
- filter:
node_id: 0
state:
name: 'The first node'
- filter:
node_id: 1
state:
name: 'The second node'
environment:
fixed:
- name: 'Environment Agent 1'
agent_class: CounterModel
state:
times: 10
environment:
environment_class: Environment
params:
am_i_complete: true
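This file uses the new version-2 layout (presumably tests/complete_converted.yml, the file loaded in tests/test_config.py further down). A minimal sketch of loading it, following the calls exercised in that test; the path is illustrative:
from soil import serialization, simulation

cfg = serialization.load_file('tests/complete_converted.yml')[0]  # illustrative path
s = simulation.from_config(cfg)
env = s.get_env()
assert len(env.topologies['default'].nodes) == 10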

32
tests/old_complete.yml Normal file

@@ -0,0 +1,32 @@
---
name: simple
group: tests
dir_path: "/tmp/"
num_trials: 3
max_time: 100
interval: 1
seed: "CompleteSeed!"
network_params:
generator: complete_graph
n: 10
network_agents:
- agent_type: CounterModel
weight: 0.4
state:
state_id: 0
- agent_type: AggregatedCounter
weight: 0.6
environment_agents:
- agent_id: 'Environment Agent 1'
agent_type: CounterModel
state:
times: 10
environment_class: Environment
environment_params:
am_i_complete: true
agent_type: CounterModel
default_state:
times: 1
states:
- name: 'The first node'
- name: 'The second node'
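The file above keeps the old flat format. A minimal sketch of converting it to the new schema, mirroring test_conversion in tests/test_config.py below; the path is illustrative:
from os.path import join
from soil import serialization, config

old = serialization.load_file(join('tests', 'old_complete.yml'))[0]
converted = config.convert_old(old, strict=False)
print(converted.dict(skip_defaults=True))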

22
tests/test_agents.py Normal file

@@ -0,0 +1,22 @@
from unittest import TestCase
import pytest
from soil import agents, environment
from soil import time as stime
class Dead(agents.FSM):
@agents.default_state
@agents.state
def only(self):
self.die()
class TestMain(TestCase):
def test_die_raises_exception(self):
d = Dead(unique_id=0, model=environment.Environment())
d.step()
with pytest.raises(agents.DeadAgent):
d.step()
def test_die_returns_infinity(self):
d = Dead(unique_id=0, model=environment.Environment())
assert d.step().abs(0) == stime.INFINITY


@@ -21,11 +21,13 @@ class Ping(agents.FSM):
@agents.default_state
@agents.state
def even(self):
self.debug(f'Even {self["count"]}')
self['count'] += 1
return self.odd
@agents.state
def odd(self):
self.debug(f'Odd {self["count"]}')
self['count'] += 1
return self.even
@@ -39,7 +41,6 @@ class TestAnalysis(TestCase):
agent should be able to update its state."""
config = {
'name': 'analysis',
'dry_run': True,
'seed': 'seed',
'network_params': {
'generator': 'complete_graph',
@@ -49,11 +50,12 @@ class TestAnalysis(TestCase):
'states': [{'interval': 1}, {'interval': 2}],
'max_time': 30,
'num_trials': 1,
'history': True,
'environment_params': {
}
}
s = simulation.from_config(config)
self.env = s.run_simulation()[0]
self.env = s.run_simulation(dry_run=True)[0]
def test_saved(self):
env = self.env
@@ -65,26 +67,25 @@ class TestAnalysis(TestCase):
def test_count(self):
env = self.env
df = analysis.read_sql(env._history._db)
res = analysis.get_count(df, 'SEED', 'id')
assert res['SEED']['seedanalysis_trial_0'].iloc[0] == 1
assert res['SEED']['seedanalysis_trial_0'].iloc[-1] == 1
assert res['id']['odd'].iloc[0] == 2
assert res['id']['even'].iloc[0] == 0
assert res['id']['odd'].iloc[-1] == 1
assert res['id']['even'].iloc[-1] == 1
df = analysis.read_sql(env._history.db_path)
res = analysis.get_count(df, 'SEED', 'state_id')
assert res['SEED'][self.env['SEED']].iloc[0] == 1
assert res['SEED'][self.env['SEED']].iloc[-1] == 1
assert res['state_id']['odd'].iloc[0] == 2
assert res['state_id']['even'].iloc[0] == 0
assert res['state_id']['odd'].iloc[-1] == 1
assert res['state_id']['even'].iloc[-1] == 1
def test_value(self):
env = self.env
df = analysis.read_sql(env._history._db)
df = analysis.read_sql(env._history.db_path)
res_sum = analysis.get_value(df, 'count')
assert res_sum['count'].iloc[0] == 2
import numpy as np
res_mean = analysis.get_value(df, 'count', aggfunc=np.mean)
assert res_mean['count'].iloc[0] == 1
assert res_mean['count'].iloc[15] == (16+8)/2
res_total = analysis.get_value(df)
res_total['SEED'].iloc[0] == 'seedanalysis_trial_0'
res_total = analysis.get_majority(df)
res_total['SEED'].iloc[0] == self.env['SEED']

119
tests/test_config.py Normal file

@@ -0,0 +1,119 @@
from unittest import TestCase
import os
from os.path import join
from soil import simulation, serialization, config, network, agents
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
class TestConfig(TestCase):
def test_conversion(self):
expected = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
old = serialization.load_file(join(ROOT, "old_complete.yml"))[0]
converted_defaults = config.convert_old(old, strict=False)
converted = converted_defaults.dict(skip_defaults=True)
def isequal(a, b):
if isinstance(a, dict):
for (k, v) in a.items():
if v:
isequal(a[k], b[k])
else:
assert not b.get(k, None)
return
assert a == b
isequal(converted, expected)
def test_topology_config(self):
netconfig = config.NetConfig(**{
'path': join(ROOT, 'test.gexf')
})
net = network.from_config(netconfig, dir_path=ROOT)
assert len(net.nodes) == 2
assert len(net.edges) == 1
def test_env_from_config(self):
"""
Simple configuration that tests that the graph is loaded, and that
network agents are initialized properly.
"""
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'agent_type': 'CounterModel',
# 'states': [{'times': 10}, {'times': 20}],
'max_time': 2,
'dry_run': True,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_old_config(config)
env = s.get_env()
assert len(env.topologies['default'].nodes) == 2
assert len(env.topologies['default'].edges) == 1
assert len(env.agents) == 2
assert env.agents[0].topology == env.topologies['default']
def test_agents_from_config(self):
'''We test that the known complete configuration produces
the right agents in the right groups'''
cfg = serialization.load_file(join(ROOT, "complete_converted.yml"))[0]
s = simulation.from_config(cfg)
env = s.get_env()
assert len(env.topologies['default'].nodes) == 10
assert len(env.agents(group='network')) == 10
assert len(env.agents(group='environment')) == 1
assert sum(1 for a in env.agents(group='network', agent_type=agents.CounterModel)) == 4
assert sum(1 for a in env.agents(group='network', agent_type=agents.AggregatedCounter)) == 6
def make_example_test(path, cfg):
def wrapped(self):
root = os.getcwd()
print(path)
s = simulation.from_config(cfg)
# for s in simulation.all_from_config(path):
# iterations = s.config.max_time * s.config.num_trials
# if iterations > 1000:
# s.config.max_time = 100
# s.config.num_trials = 1
# if config.get('skip_test', False) and not FORCE_TESTS:
# self.skipTest('Example ignored.')
# envs = s.run_simulation(dry_run=True)
# assert envs
# for env in envs:
# assert env
# try:
# n = config['network_params']['n']
# assert len(list(env.network_agents)) == n
# assert env.now > 0 # It has run
# assert env.now <= config['max_time'] # But not further than allowed
# except KeyError:
# pass
return wrapped
def add_example_tests():
for config, path in serialization.load_files(
join(EXAMPLES, '*', '*.yml'),
join(EXAMPLES, '*.yml'),
):
p = make_example_test(path=path, cfg=config)
fname = os.path.basename(path)
p.__name__ = 'test_example_file_%s' % fname
p.__doc__ = '%s should be a valid configuration' % fname
setattr(TestConfig, p.__name__, p)
del p
add_example_tests()


@@ -2,11 +2,13 @@ from unittest import TestCase
import os
from os.path import join
from soil import utils, simulation
from soil import serialization, simulation
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
FORCE_TESTS = os.environ.get('FORCE_TESTS', '')
class TestExamples(TestCase):
pass
@@ -15,28 +17,32 @@ class TestExamples(TestCase):
def make_example_test(path, config):
def wrapped(self):
root = os.getcwd()
os.chdir(os.path.dirname(path))
s = simulation.from_config(config)
iterations = s.max_time * s.num_trials
if iterations > 1000:
self.skipTest('This example would probably take too long')
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = config['network_params']['n']
assert len(list(env.network_agents)) == n
assert env.now > 2 # It has run
assert env.now <= config['max_time'] # But not further than allowed
except KeyError:
pass
os.chdir(root)
for s in simulation.all_from_config(path):
iterations = s.config.general.max_time * s.config.general.num_trials
if iterations > 1000:
s.config.general.max_time = 100
s.config.general.num_trials = 1
if config.get('skip_test', False) and not FORCE_TESTS:
self.skipTest('Example ignored.')
envs = s.run_simulation(dry_run=True)
assert envs
for env in envs:
assert env
try:
n = config['network_params']['n']
assert len(list(env.network_agents)) == n
assert env.now > 0 # It has run
assert env.now <= config['max_time'] # But not further than allowed
except KeyError:
pass
return wrapped
def add_example_tests():
for config, path in utils.load_config(join(EXAMPLES, '**', '*.yml')):
for config, path in serialization.load_files(
join(EXAMPLES, '*', '*.yml'),
join(EXAMPLES, '*.yml'),
):
p = make_example_test(path=path, config=config)
fname = os.path.basename(path)
p.__name__ = 'test_example_file_%s' % fname

99
tests/test_exporters.py Normal file

@@ -0,0 +1,99 @@
import os
import io
import tempfile
import shutil
from unittest import TestCase
from soil import exporters
from soil import simulation
class Dummy(exporters.Exporter):
started = False
trials = 0
ended = False
total_time = 0
called_start = 0
called_trial = 0
called_end = 0
def sim_start(self):
self.__class__.called_start += 1
self.__class__.started = True
def trial_end(self, env):
assert env
self.__class__.trials += 1
self.__class__.total_time += env.now
self.__class__.called_trial += 1
def sim_end(self):
self.__class__.ended = True
self.__class__.called_end += 1
class Exporters(TestCase):
def test_basic(self):
config = {
'name': 'exporter_sim',
'network_params': {},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': 5,
'environment_params': {}
}
s = simulation.from_config(config)
for env in s.run_simulation(exporters=[Dummy], dry_run=True):
assert env.now <= 2
assert Dummy.started
assert Dummy.ended
assert Dummy.called_start == 1
assert Dummy.called_end == 1
assert Dummy.called_trial == 5
assert Dummy.trials == 5
assert Dummy.total_time == 2*5
def test_writing(self):
'''Try to write CSV, GEXF, sqlite and YAML (without dry_run)'''
n_trials = 5
config = {
'name': 'exporter_sim',
'network_params': {
'generator': 'complete_graph',
'n': 4
},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': n_trials,
'dry_run': False,
'environment_params': {}
}
output = io.StringIO()
s = simulation.from_config(config)
tmpdir = tempfile.mkdtemp()
envs = s.run_simulation(exporters=[
exporters.default,
exporters.csv,
exporters.gexf,
],
dry_run=False,
outdir=tmpdir,
exporter_params={'copy_to': output})
result = output.getvalue()
simdir = os.path.join(tmpdir, s.group or '', s.name)
with open(os.path.join(simdir, '{}.dumped.yml'.format(s.name))) as f:
result = f.read()
assert result
try:
for e in envs:
with open(os.path.join(simdir, '{}.gexf'.format(e.name))) as f:
result = f.read()
assert result
with open(os.path.join(simdir, '{}.csv'.format(e.name))) as f:
result = f.read()
assert result
finally:
shutil.rmtree(tmpdir)


@@ -1,156 +1,128 @@
from unittest import TestCase
import os
import shutil
from glob import glob
import io
import yaml
import copy
import pickle
import networkx as nx
from functools import partial
from soil import history
from os.path import join
from soil import (simulation, Environment, agents, serialization,
utils)
from soil.time import Delta
from tsih import NoHistory, History
ROOT = os.path.abspath(os.path.dirname(__file__))
DBROOT = os.path.join(ROOT, 'testdb')
EXAMPLES = join(ROOT, '..', 'examples')
class CustomAgent(agents.FSM):
@agents.default_state
@agents.state
def normal(self):
self.neighbors = self.count_agents(state_id='normal',
limit_neighbors=True)
@agents.state
def unreachable(self):
return
class TestHistory(TestCase):
def setUp(self):
if not os.path.exists(DBROOT):
os.makedirs(DBROOT)
def tearDown(self):
if os.path.exists(DBROOT):
shutil.rmtree(DBROOT)
def test_history(self):
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
"""
tuples = (
('a_0', 0, 'id', 'h'),
('a_0', 1, 'id', 'e'),
('a_0', 2, 'id', 'l'),
('a_0', 3, 'id', 'l'),
('a_0', 4, 'id', 'o'),
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 3, 'prob', 2),
('env', 5, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
# assert h['env', 0, 'prob'] == 0
for i in range(1, 7):
assert h['env', i, 'prob'] == ((i-1)//2)+1
config = {
'name': 'CounterAgent',
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'state_id': 0}
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 11
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
for i, k in zip(range(5), 'hello'):
assert h['a_0', i, 'id'] == k
for record, value in zip(h['a_0', None, 'id'], 'hello'):
t_step, val = record
assert val == value
def test_row_conversion(self):
env = Environment(history=True)
env['test'] = 'test_value'
for i, k in zip(range(5), 'value'):
assert h['a_1', i, 'id'] == k
for i in range(5, 8):
assert h['a_1', i, 'id'] == 'e'
for i in range(7):
assert h['a_2', i, 'finished'] == False
assert h['a_2', 7, 'finished']
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
def test_history_gen(self):
"""
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
for t_step, key, value in h['env', None, None]:
assert t_step == value
assert key == 'prob'
env.schedule.time = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
records = list(h[None, 7, None])
assert len(records) == 3
for i in records:
agent_id, key, value = i
if agent_id == 'a_1':
assert key == 'id'
assert value == 'e'
elif agent_id == 'a_2':
assert key == 'finished'
assert value
else:
assert key == 'prob'
assert value == 3
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
records = h['a_1', 7, None]
assert records['id'] == 'e'
def test_nohistory(self):
'''
Make sure that no history(/sqlite) is used by default
'''
env = Environment(topology=nx.Graph(), network_agents=[])
assert isinstance(env._history, NoHistory)
def test_history_file(self):
"""
History should be saved to a file
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
db_path = os.path.join(DBROOT, 'test')
h = history.History(db_path=db_path)
h.save_tuples(tuples)
h.flush_cache()
assert os.path.exists(db_path)
def test_save_graph_history(self):
'''
The history_to_graph method should return a valid networkx graph.
# Recover the data
recovered = history.History(db_path=db_path, backup=False)
assert recovered['a_1', 0, 'id'] == 'v'
assert recovered['a_1', 4, 'id'] == 'e'
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=True)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
# Using the same name should create a backup copy
newhistory = history.History(db_path=db_path, backup=True)
backuppaths = glob(db_path + '.backup*.sqlite')
assert len(backuppaths) == 1
backuppath = backuppaths[0]
assert newhistory.db_path == h.db_path
assert os.path.exists(backuppath)
assert not len(newhistory[None, None, None])
def test_save_graph_nohistory(self):
'''
The history_to_graph method should return a valid networkx graph.
def test_history_tuples(self):
"""
The data recovered should be equal to the one recorded.
"""
tuples = (
('a_1', 0, 'id', 'v'),
('a_1', 1, 'id', 'a'),
('a_1', 2, 'id', 'l'),
('a_1', 3, 'id', 'u'),
('a_1', 4, 'id', 'e'),
('env', 1, 'prob', 1),
('env', 2, 'prob', 2),
('env', 3, 'prob', 3),
('a_2', 7, 'finished', True),
)
h = history.History()
h.save_tuples(tuples)
recovered = list(h.to_tuples())
assert recovered
for i in recovered:
assert i in tuples
When NoHistory is used, only the last known value is available
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, history=False)
env.get_agent(0)['testvalue'] = 'start'
env.schedule.time = 10
env.get_agent(0)['testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.nodes[0]['attr_testvalue']
assert ('start', 0, None) not in values
assert ('finish', 10, None) in values
def test_pickle_agent_environment(self):
env = Environment(name='Test', history=True)
a = agents.BaseAgent(model=env, unique_id=25)
a['key'] = 'test'
pickled = pickle.dumps(a)
recovered = pickle.loads(pickled)
assert recovered.env.name == 'Test'
assert list(recovered.env._history.to_tuples())
assert recovered['key', 0] == 'test'
assert recovered['key'] == 'test'


@@ -1,22 +1,31 @@
from unittest import TestCase
import os
import io
import yaml
import copy
import pickle
import networkx as nx
from functools import partial
from os.path import join
from soil import simulation, Environment, agents, utils, history
from soil import (simulation, Environment, agents, network, serialization,
utils, config)
from soil.time import Delta
ROOT = os.path.abspath(os.path.dirname(__file__))
EXAMPLES = join(ROOT, '..', 'examples')
class CustomAgent(agents.BaseAgent):
def step(self):
self.state['neighbors'] = self.count_agents(state_id=0,
limit_neighbors=True)
class CustomAgent(agents.FSM, agents.NetworkAgent):
@agents.default_state
@agents.state
def normal(self):
self.neighbors = self.count_agents(state_id='normal',
limit_neighbors=True)
@agents.state
def unreachable(self):
return
class TestMain(TestCase):
@@ -26,22 +35,20 @@ class TestMain(TestCase):
Raise an exception otherwise.
"""
config = {
'dry_run': True,
'network_params': {
'path': join(ROOT, 'test.gexf')
}
}
G = utils.load_network(config['network_params'])
G = network.from_config(config['network_params'])
assert G
assert len(G) == 2
with self.assertRaises(AttributeError):
config = {
'dry_run': True,
'network_params': {
'path': join(ROOT, 'unknown.extension')
}
}
G = utils.load_network(config['network_params'])
G = network.from_config(config['network_params'])
print(G)
def test_generate_barabasi(self):
@@ -49,23 +56,21 @@ class TestMain(TestCase):
If no path is given, a generator and network parameters
should be used to generate a network
"""
config = {
'dry_run': True,
'network_params': {
cfg = {
'params': {
'generator': 'barabasi_albert_graph'
}
}
with self.assertRaises(TypeError):
G = utils.load_network(config['network_params'])
config['network_params']['n'] = 100
config['network_params']['m'] = 10
G = utils.load_network(config['network_params'])
with self.assertRaises(Exception):
G = network.from_config(cfg)
cfg['params']['n'] = 100
cfg['params']['m'] = 10
G = network.from_config(cfg)
assert len(G) == 100
def test_empty_simulation(self):
"""A simulation with a base behaviour should do nothing"""
config = {
'dry_run': True,
'network_params': {
'path': join(ROOT, 'test.gexf')
},
@@ -73,91 +78,97 @@ class TestMain(TestCase):
'environment_params': {
}
}
s = simulation.from_config(config)
s = simulation.from_old_config(config)
s.run_simulation(dry_run=True)
def test_network_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'CounterAgent',
'network_params': {
'generator': nx.complete_graph,
'n': 2,
},
'agent_type': 'CounterModel',
'states': {
0: {'times': 10},
1: {'times': 20},
},
'max_time': 2,
'num_trials': 1,
'environment_params': {
}
}
s = simulation.from_old_config(config)
def test_counter_agent(self):
"""
The initial states should be applied to the agent and the
agent should be able to update its state."""
config = {
'name': 'CounterAgent',
'dry_run': True,
'network_params': {
'path': join(ROOT, 'test.gexf')
'version': '2',
'general': {
'name': 'CounterAgent',
'max_time': 2,
'dry_run': True,
'num_trials': 1,
},
'agent_type': 'CounterModel',
'states': [{'times': 10}, {'times': 20}],
'max_time': 2,
'num_trials': 1,
'environment_params': {
'topologies': {
'default': {
'path': join(ROOT, 'test.gexf')
}
},
'agents': {
'default': {
'agent_class': 'CounterModel',
},
'counters': {
'topology': 'default',
'fixed': [{'state': {'times': 10}}, {'state': {'times': 20}}],
}
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(0)['times', 0] == 11
assert env.get_agent(0)['times', 1] == 12
assert env.get_agent(1)['times', 0] == 21
assert env.get_agent(1)['times', 1] == 22
def test_counter_agent_history(self):
"""
The evolution of the state should be recorded in the logging agent
"""
config = {
'name': 'CounterAgent',
'dry_run': True,
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': 'AggregatedCounter',
'weight': 1,
'state': {'id': 0}
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
env = s.run_simulation(dry_run=True)[0]
for agent in env.network_agents:
last = 0
assert len(agent[None, None]) == 10
for step, total in sorted(agent['total', None]):
assert total == last + 2
last = total
env = s.get_env()
assert isinstance(env.agents[0], agents.CounterModel)
assert env.agents[0].topology == env.topologies['default']
assert env.agents[0]['times'] == 10
assert env.agents[0]['times'] == 10
env.step()
assert env.agents[0]['times'] == 11
assert env.agents[1]['times'] == 21
def test_custom_agent(self):
"""Allow for search of neighbors with a certain state_id"""
config = {
'dry_run': True,
'network_params': {
'path': join(ROOT, 'test.gexf')
},
'network_agents': [{
'agent_type': CustomAgent,
'weight': 1,
'state': {'id': 0}
'weight': 1
}],
'max_time': 10,
'environment_params': {
}
}
s = simulation.from_config(config)
s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
assert env.get_agent(0).state['neighbors'] == 1
assert env.agents[1].count_agents(state_id='normal') == 2
assert env.agents[1].count_agents(state_id='normal', limit_neighbors=True) == 1
assert env.agents[0].neighbors == 1
def test_torvalds_example(self):
"""A complete example from a documentation should work."""
config = utils.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
config = serialization.load_file(join(EXAMPLES, 'torvalds.yml'))[0]
config['network_params']['path'] = join(EXAMPLES,
config['network_params']['path'])
s = simulation.from_config(config)
s.dry_run = True
env = s.run_simulation()[0]
s = simulation.from_old_config(config)
env = s.run_simulation(dry_run=True)[0]
for a in env.network_agents:
skill_level = a.state['skill_level']
if a.id == 'Torvalds':
@@ -175,48 +186,34 @@ class TestMain(TestCase):
def test_yaml(self):
"""
The YAML version of a newly created simulation
should be equivalent to the configuration file used
The YAML version of a newly created configuration should be equivalent
to the configuration file used.
Values not present in the original config file should have reasonable
defaults.
"""
with utils.timer('loading'):
config = utils.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
s.dry_run = True
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_old_config(config)
with utils.timer('serializing'):
serial = s.to_yaml()
with utils.timer('recovering'):
recovered = yaml.load(serial)
with utils.timer('deleting'):
del recovered['topology']
assert config == recovered
recovered = yaml.load(serial, Loader=yaml.SafeLoader)
for (k, v) in config.items():
assert recovered[k] == v
def test_configuration_changes(self):
"""
The configuration should not change after running
the simulation.
"""
config = utils.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_config(config)
s.dry_run = True
for i in range(5):
s.run_simulation(dry_run=True)
nconfig = s.to_dict()
del nconfig['topology']
assert config == nconfig
config = serialization.load_file(join(EXAMPLES, 'complete.yml'))[0]
s = simulation.from_old_config(config)
init_config = copy.copy(s.config)
def test_row_conversion(self):
env = Environment(dry_run=True)
env['test'] = 'test_value'
res = list(env.history_to_tuples())
assert len(res) == len(env.environment_params)
env._now = 1
env['test'] = 'second_value'
res = list(env.history_to_tuples())
assert env['env', 0, 'test' ] == 'test_value'
assert env['env', 1, 'test' ] == 'second_value'
s.run_simulation(dry_run=True)
nconfig = s.config
# del nconfig['to
assert init_config == nconfig
def test_save_geometric(self):
"""
@@ -224,43 +221,30 @@ class TestMain(TestCase):
from geometric models. We should work around it.
"""
G = nx.random_geometric_graph(20, 0.1)
env = Environment(topology=G, dry_run=True)
env.dump_gexf('/tmp/dump-gexf')
def test_save_graph(self):
'''
The history_to_graph method should return a valid networkx graph.
The state of the agent should be encoded as intervals in the nx graph.
'''
G = nx.cycle_graph(5)
distribution = agents.calculate_distribution(None, agents.BaseAgent)
env = Environment(topology=G, network_agents=distribution, dry_run=True)
env[0, 0, 'testvalue'] = 'start'
env[0, 10, 'testvalue'] = 'finish'
nG = env.history_to_graph()
values = nG.node[0]['attr_testvalue']
assert ('start', 0, 10) in values
assert ('finish', 10, None) in values
env = Environment(topology=G)
f = io.BytesIO()
env.dump_gexf(f)
def test_serialize_class(self):
ser, name = utils.serialize(agents.BaseAgent)
ser, name = serialization.serialize(agents.BaseAgent, known_modules=[])
assert name == 'soil.agents.BaseAgent'
assert ser == agents.BaseAgent
class CustomAgent(agents.BaseAgent):
pass
ser, name = serialization.serialize(agents.BaseAgent, known_modules=['soil', ])
assert name == 'BaseAgent'
assert ser == agents.BaseAgent
ser, name = utils.serialize(CustomAgent)
ser, name = serialization.serialize(CustomAgent)
assert name == 'test_main.CustomAgent'
assert ser == CustomAgent
pickle.dumps(ser)
def test_serialize_builtin_types(self):
for i in [1, None, True, False, {}, [], list(), dict()]:
ser, name = utils.serialize(i)
ser, name = serialization.serialize(i)
assert type(ser) == str
des = utils.deserialize(name, ser)
des = serialization.deserialize(name, ser)
assert i == des
def test_serialize_agent_type(self):
@@ -269,7 +253,8 @@ class TestMain(TestCase):
assert ser == 'test_main.CustomAgent'
ser = agents.serialize_type(agents.BaseAgent)
assert ser == 'BaseAgent'
pickle.dumps(ser)
def test_deserialize_agent_distribution(self):
agent_distro = [
{
@@ -281,9 +266,10 @@ class TestMain(TestCase):
'weight': 2
},
]
converted = agents.deserialize_distribution(agent_distro)
converted = agents.deserialize_definition(agent_distro)
assert converted[0]['agent_type'] == agents.CounterModel
assert converted[1]['agent_type'] == CustomAgent
pickle.dumps(converted)
def test_serialize_agent_distribution(self):
agent_distro = [
@@ -296,12 +282,83 @@ class TestMain(TestCase):
'weight': 2
},
]
converted = agents.serialize_distribution(agent_distro)
converted = agents.serialize_definition(agent_distro)
assert converted[0]['agent_type'] == 'CounterModel'
assert converted[1]['agent_type'] == 'test_main.CustomAgent'
pickle.dumps(converted)
def test_history(self):
'''Test storing in and retrieving from history (sqlite)'''
h = history.History()
h.save_record(agent_id=0, t_step=0, key="test", value="hello")
assert h[0, 0, "test"] == "hello"
def test_subgraph(self):
'''An agent should be able to subgraph the global topology'''
G = nx.Graph()
G.add_node(3)
G.add_edge(1, 2)
distro = agents.calculate_distribution(agent_type=agents.NetworkAgent)
distro[0]['topology'] = 'default'
aconfig = config.AgentConfig(distribution=distro, topology='default')
env = Environment(name='Test', topologies={'default': G}, agents={'network': aconfig})
lst = list(env.network_agents)
a2 = env.find_one(node_id=2)
a3 = env.find_one(node_id=3)
assert len(a2.subgraph(limit_neighbors=True)) == 2
assert len(a3.subgraph(limit_neighbors=True)) == 1
assert len(a3.subgraph(limit_neighbors=True, center=False)) == 0
assert len(a3.subgraph(agent_type=agents.NetworkAgent)) == 3
def test_templates(self):
'''Loading a template should result in several configs'''
configs = serialization.load_file(join(EXAMPLES, 'template.yml'))
assert len(configs) > 0
def test_until(self):
config = {
'name': 'until_sim',
'network_params': {},
'agent_type': 'CounterModel',
'max_time': 2,
'num_trials': 50,
'environment_params': {}
}
s = simulation.from_old_config(config)
runs = list(s.run_simulation(dry_run=True))
over = list(x.now for x in runs if x.now>2)
assert len(runs) == config['num_trials']
assert len(over) == 0
def test_fsm(self):
'''Basic state change'''
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state
def ping(self):
return self.pong
@agents.state
def pong(self):
return self.ping
a = ToggleAgent(unique_id=1, model=Environment())
assert a.state_id == a.ping.id
a.step()
assert a.state_id == a.pong.id
a.step()
assert a.state_id == a.ping.id
def test_fsm_when(self):
'''Basic state change'''
class ToggleAgent(agents.FSM):
@agents.default_state
@agents.state
def ping(self):
return self.pong, 2
@agents.state
def pong(self):
return self.ping
a = ToggleAgent(unique_id=1, model=Environment())
when = a.step()
assert when == 2
when = a.step()
assert when == Delta(a.interval)

69
tests/test_mesa.py Normal file

@@ -0,0 +1,69 @@
'''
Mesa-SOIL integration tests
We have to test that:
- Mesa agents can be used in SOIL
- Simplified soil agents can be used in mesa simulations
- Mesa and soil agents can interact in a simulation
- Mesa visualizations work with SOIL simulations
'''
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos,
moore=True,
include_center=False)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
def step(self):
'''Advance the model by one step.'''
self.schedule.step()
# model = MoneyModel(10, 10, 10)
# for i in range(10):
# model.step()
# agent_wealth = [a.wealth for a in model.schedule.agents]
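A minimal runnable sketch of the driver hinted at in the comments above; the grid dimensions are arbitrary, since MoneyModel takes N, width and height:
if __name__ == '__main__':
    model = MoneyModel(10, 10, 10)  # 10 agents on a 10x10 toroidal grid (arbitrary choice)
    for _ in range(10):
        model.step()
    agent_wealth = [a.wealth for a in model.schedule.agents]
    print(agent_wealth)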