mirror of https://github.com/gsi-upm/soil synced 2025-10-18 17:28:30 +00:00

Compare commits


2 Commits

Author                SHA1        Message                          Date
David García Martín   c73503d9f6  Delete TerroristModel_tipo.png   2017-07-05 11:21:33 +00:00
David García Martín   de67fe3e74  Initial commit                   2017-07-05 13:19:56 +02:00
131 changed files with 11015 additions and 37041 deletions


@@ -1,7 +0,0 @@
**/soil_output
.*
**/.*
**/__pycache__
__pycache__
*.pyc
**/backup

2
.gitignore vendored

@@ -8,5 +8,3 @@ soil_output
docs/_build*
build/*
dist/*
prof
backup


@@ -1,53 +0,0 @@
stages:
  - test
  - publish
  - check_published

docker:
  stage: publish
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - docker
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # The skip-tls-verify flag is there because our registry certificate is self signed
    - /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

test:
  tags:
    - docker
  image: python:3.8
  stage: test
  script:
    - pip install -r requirements.txt -r test-requirements.txt
    - python setup.py test

push_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.8
  stage: publish
  script:
    - echo $CI_COMMIT_TAG > soil/VERSION
    - pip install twine
    - python setup.py sdist bdist_wheel
    - TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*

check_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.8
  stage: check_published
  script:
    - pip install soil==$CI_COMMIT_TAG
  # Allow PYPI to update its index before we try to install
  when: delayed
  start_in: 2 minutes


@@ -1,196 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0 UNRELEASED]
Version 1.0 introduced multiple changes, especially on the `Simulation` class and anything related to how configuration is handled.
For an explanation of the general changes in version 1.0, please refer to the file `docs/notes_v1.0.rst`.
### Added
* A modular set of classes for environments/models. Now the ability to configure the agents through an agent definition and a topology through a network configuration is split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* Environments now have a class method (`run`) to make them easier to use without a `Simulation`. Notice that this is different from `run_model`, which is an instance method.
* Ability to run simulations using mesa models
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* Agents can now have generators as a step function or a state. They work similarly to normal functions, with one caveat in the case of `FSM`: only `time` values (or None) can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states (see the sketch after this list).
* Simulations can now specify a `matrix` with possible values for every simulation parameter. The final parameters will be calculated based on the `parameters` used and a cartesian product (i.e., all possible combinations) of each parameter.
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
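A minimal sketch of a generator-based state, assuming the decorators and the `Delta` helper used in the examples of this repository (the agent and state names are made up for illustration):

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Talker(FSM):
    @default_state
    @state
    def talking(self):
        self.info("Start talking")
        yield Delta(2)      # resume this same state two time units later
        self.info("Still talking")
        yield               # only time values (or None) may be yielded
        return self.silent  # the return value can still be a state

    @state
    def silent(self):
        self.die()
```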
### Changed
* Configuration schema (`Simulation`) is very simplified. All simulations should be checked
* Model / environment variables are expected (but not enforced) to be a single value. This is done to more closely align with mesa
* `Exporter.iteration_end` now takes two parameters: `env` (same as before) and `params` (specific parameters for this environment). We considered including a `parameters` attribute in the environment, but this would not be compatible with mesa.
* `num_trials` renamed to `iterations`
* General renaming of `trial` to `iteration`, to work better with `mesa`
* `model_parameters` renamed to `parameters` in simulation
* Simulation results for every iteration of a simulation with the same name are stored in a single `sqlite` database
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.
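For reference, a minimal sketch of the datacollector-based alternative, using the reporter helpers that appear in the bundled examples (`MyAgent`, `RecordedEnv`, and the reported attribute name are made up for illustration):

```python
from soil import Environment, Simulation, agents


class MyAgent(agents.FSM):
    times_run = 0

    @agents.default_state
    @agents.state
    def neutral(self):
        self.times_run += 1


class RecordedEnv(Environment):
    def init(self):
        self.add_agents(MyAgent, k=5)
        # Recorded values end up in model.datacollector and are written out by the exporters
        self.add_agent_reporter("times_run")


if __name__ == "__main__":
    Simulation(model=RecordedEnv, max_steps=5, dump=False).run()
```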
## [0.20.8]
### Changed
* Tsih bumped to version 0.1.8
### Fixed
* Mentions of `id` in the docs. It should be `state_id` now.
* Fixed bug: environment agents were not being added to the simulation
## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with time.NEVER/time.INFINITY
## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` does not re-export built-in time (change in `soil.simulation`). It used to create subtle import conflicts when importing soil.time.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.
## [0.20.5]
### Changed
* Defaults are now set in the agent __init__, not in the environment. This decouples both classes a bit more, and it is more intuitive
## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a log method, just like agents
## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)
## [0.20.2]
### Fixed
* CI/CD testing issues
## [0.20.1]
### Fixed
* Agents would run another step after dying.
## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id`, to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history
## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default
## [0.15.1]
### Added
* read-only `History`
### Fixed
* Serialization problem with the `Environment` on parallel mode.
* Analysis functions now work as they should in the tutorial
## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open
## [0.14.9]
### Changed
* Seed random before environment initialization
## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5
## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception
## [0.14.6]
### Fixed
* Bug with newer versions of networkx (2.4) where the Graph.node attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit yaml.SafeLoader to avoid deprecation warnings when using yaml.load. It should not break any existing setups, but we could move to the FullLoader in the future if needed.
## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.
## [0.14.3]
### Fixed
* Incompatibility with py3.3-3.6 due to ModuleNotFoundError and TypeError in DryRunner
## [0.14.2]
### Fixed
* Output path for exporters is now soil_output
### Changed
* CSV output to stdout in dry_run mode
## [0.14.1]
### Changed
* Exporter names in lower case
* Add default exporter in runs
## [0.14.0]
### Added
* Loading configuration from template definitions in the yaml, in preparation for SALib support.
The definition of the variables and their possible values (i.e., a problem in SALib terms), as well as a sampler function, can be provided.
Soil uses this definition and the template to generate a set of configurations.
* Simulation group names, to link related simulations. For now, they are only used to group all simulations in the same group under the same folder.
* Exporters unify exporting/dumping results and other files to disk. If `dry_run` is set to `True`, exporters will write to stdout instead of a file (useful for testing/debugging).
* Distribution exporter, to write statistics about values and value_counts in every simulation. The results are dumped to two CSV files.
### Changed
* `dir_path` is now the directory for resources (modules, files)
* Environments and simulations do not export or write anything by default. That task is delegated to Exporters
### Removed
* The output dir for environments and simulations (see Exporters)
* DrawingAgent, because it wrote to disk and was not being used. We provide a partial alternative in the form of the GraphDrawing exporter. A complete alternative will be provided once the network at each state can be accessed by exporters.
### Fixed
* Modules with custom agents/environments failed to load when they were run from outside the directory of the definition file. Modules are now loaded from the directory of the simulation file in addition to the working directory
* Memory databases (in history) can now be shared between threads.
* Testing all examples, not just subdirectories
## [0.13.8]
### Changed
* Moved TerroristNetworkModel to examples
### Added
* `get_agents` and `count_agents` methods now accept lists as inputs. They can be used to retrieve agents from node ids
* `subgraph` in BaseAgent
* `agents.select` method, to filter out agents
* `skip_test` property in yaml definitions, to force skipping some examples
* `agents.Geo`, with a search function based on position
* `BaseAgent.ego_search` to get nodes from the ego network of a node
* `BaseAgent.degree` and `BaseAgent.betweenness`
### Fixed
## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load the history for examination, at the expense of rewriting something. That should not happen because History is only created in the Environment, and that has `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state. e.g. `return self.idle, self.env.timeout(2)` will execute the *idle* state in 2 *units of time* (`t_step=now+2`).
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.


@@ -1,12 +0,0 @@
FROM python:3.7
WORKDIR /usr/src/app
COPY test-requirements.txt requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r test-requirements.txt -r requirements.txt
COPY ./ /usr/src/app
RUN pip install '.[web]'
ENTRYPOINT ["python", "-m", "soil"]

0
LICENSE Normal file → Executable file

@@ -1,7 +0,0 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil
global-exclude __pycache__
global-exclude soil_output
global-exclude *.py[co]


@@ -1,7 +0,0 @@
quick-test:
	docker-compose exec dev python -m pytest -s -v

test:
	docker run -t -v $$PWD:/usr/src/app -w /usr/src/app python:3.7 python setup.py test

.PHONY: test

89
README.md Normal file → Executable file

@@ -1,91 +1,12 @@
# [SOIL](https://github.com/gsi-upm/soil)
# [Soil](https://github.com/gsi-upm/soil)
The purpose of Soil (SOcial network sImuLator) is providing an Agent-based Social Simulator written in Python for Social Networks.
Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
In order to see quickly how to use Soil, you can follow this [tutorial](https://github.com/gsi-upm/soil/blob/master/soil_tutorial.ipynb).
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).
Follow our [tutorial](docs/tutorial/soil_tutorial.ipynb) to develop your own agent models.
> **Warning**
> Soil 1.0 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v1.0.rst)
## Features
* Integration with (social) networks (through `networkx`)
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
* Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
* Based on the topology of the network
* **Several types of abstractions for agents**:
* Finite state machine, where methods can be turned into a state
* Network agents, which have convenience methods to access the model's topology
* Generator-based agents, whose state is paused though a `yield` and resumed on the next step
* **Reporting and data collection**:
* Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
* All data collected are exported by default to a SQLite database and a description file
* Options to export to other formats, such as CSV, or defining your own exporters
* A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**
* Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
* Time intervals between each step are flexible.
* There are primitives to specify when (or under which conditions) the next execution of an agent should happen
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
* Run models in parallel
* Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to quickly run simulations with different parameters
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states
## Mesa compatibility
SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
For instance, it should be possible to run `mesa.Model` models using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler on a `mesa.Model`.
Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
For instance, you may add any `soil.agent` agent on a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on that may not work.
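As a rough sketch of the first combination (running a plain `mesa.Model` through a `soil.Simulation`), assuming the vanilla `MoneyModel` shown under the mesa examples further down is saved as `mesa_basic.py` (the module name and exact arguments are illustrative, not a documented API):

```python
from soil import Simulation
from mesa_basic import MoneyModel  # a vanilla mesa.Model, no soil-specific code

sim = Simulation(name="mesa-in-soil",
                 model=MoneyModel,
                 parameters=dict(N=10, width=10, height=10),
                 max_steps=100,
                 dump=False)

if __name__ == "__main__":
    sim.run()
```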
## Changes in version 0.3
Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.
If you have an older Soil simulation, you have two options:
* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
## Citation
If you use Soil in your research, don't forget to cite this paper:
@Copyright GSI - Universidad Politécnica de Madrid 2017
```bibtex
@inbook{soil-gsi-conference-2017,
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
doi = "10.1007/978-3-319-59930-4_19",
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
isbn = "978-3-319-59929-8",
keywords = "soil;social networks;agent based social simulation;python",
month = "June",
organization = "PAAMS 2017",
pages = "234-245",
publisher = "Springer Verlag",
series = "LNAI",
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
volume = "10349",
year = "2017",
}
```
@Copyright GSI - Universidad Politécnica de Madrid 2017-2021
[![SOIL](logo_gsi.png)](https://www.gsi.upm.es)

BIN
TerroristModel.png Normal file

Binary file not shown (added image, 24 KiB).

BIN
TerroristModel_type.png Normal file

Binary file not shown (added image, 16 KiB).

BIN
clase_base.pyc Executable file

Binary file not shown.

8802
data.txt Executable file

File diff suppressed because it is too large


@@ -1,3 +0,0 @@
import soil
soil.main()
import pdb


@@ -1,12 +0,0 @@
version: '3'
services:
  dev:
    build: .
    environment:
      PYTHONDONTWRITEBYTECODE: 1
    volumes:
      - .:/usr/src/app
    tty: true
    entrypoint: /bin/bash
    ports:
      - '8001:8001'

0
docs/Makefile Normal file → Executable file

12
docs/conf.py Normal file → Executable file

@@ -31,10 +31,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "IPython.sphinxext.ipython_console_highlighting",
    "nbsphinx",
]
extensions = []

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -67,12 +64,12 @@ release = '0.1'
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
@@ -155,3 +152,6 @@ texinfo_documents = [
    author, 'Soil', 'One line description of project.',
    'Miscellaneous'),
]


@@ -1,40 +0,0 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
model_params:
  topology:
    params:
      generator: barabasi_albert_graph
      n: 100
      m: 2
  agents:
    distribution:
      - agent_class: SISaModel
        topology: True
        ratio: 0.1
        state:
          state_id: content
      - agent_class: SISaModel
        topology: True
        ratio: .1
        state:
          state_id: discontent
      - agent_class: SISaModel
        topology: True
        ratio: 0.8
        state:
          state_id: neutral
  prob_infect: 0.075
  neutral_discontent_spon_prob: 0.1
  neutral_discontent_infected_prob: 0.3
  neutral_content_spon_prob: 0.3
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.5
  discontent_content: 0.5
  variance_d_c: 0.2
  content_discontent: 0.2
  variance_c_d: 0.2
  content_neutral: 0.2
  standard_variance: 1

53
docs/index.rst Normal file → Executable file

@@ -1,56 +1,21 @@
.. Soil documentation master file, created by
sphinx-quickstart on Tue Apr 25 12:48:56 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Soil's documentation!
================================
Soil is an opinionated Agent-based Social Simulator in Python focused on Social Networks.
Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.
To get started developing your own simulations and agent behaviors, check out our :doc:`Tutorial <soil_tutorial>` and the `examples on GitHub <https://github.com/gsi-upm/soil/tree/master/examples>`_.
Soil can be installed through pip (see more details in the :doc:`installation` page):
.. image:: soil.png
:width: 80%
:align: center
.. code:: bash
pip install soil
If you use Soil in your research, do not forget to cite this paper:
.. code:: bibtex
@inbook{soil-gsi-conference-2017,
author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
doi = "10.1007/978-3-319-59930-4_19",
editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
isbn = "978-3-319-59929-8",
keywords = "soil;social networks;agent based social simulation;python",
month = "June",
organization = "PAAMS 2017",
pages = "234-245",
publisher = "Springer Verlag",
series = "LNAI",
title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
volume = "10349",
year = "2017",
}
.. toctree::
   :maxdepth: 0
   :maxdepth: 2
   :caption: Learn more about soil:

   installation
   Tutorial <tutorial/soil_tutorial>
   usage
   notes_v1.0
   models

..
.. Indices and tables

62
docs/installation.rst Normal file → Executable file

@@ -1,65 +1,7 @@
Installation
------------
The latest version can be installed through GitLab.
Through pip
===========
The easiest way to install Soil is through pip, with Python >= 3.8:
.. code:: bash
pip install soil
git clone https://lab.cluster.gsi.dit.upm.es/soil/soil.git
Now test that it worked by running the command line tool
.. code:: bash
soil --help
#or
python -m soil --help
Or, if you're using soil programmatically:
.. code:: python
import soil
print(soil.__version__)
Web UI
======
Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:
.. code::
pip install soil[web]
Once installed, the soil web UI can be run in two ways:
.. code::
soil-web
# OR
python -m soil.web
Development
===========
The latest version can be downloaded from `GitHub <https://github.com/gsi-upm/soil>`_ and installed manually:
.. code:: bash
git clone https://github.com/gsi-upm/soil
cd soil
python -m venv .venv
source .venv/bin/activate
pip install --editable .

2
docs/make.bat Normal file → Executable file

@@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil
if "%1" == "" goto help
eE
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.


@@ -1,22 +0,0 @@
Mesa compatibility
------------------
Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.
This is a non-exhaustive list of tasks to achieve compatibility:
- [ ] Integrate `soil.Simulation` with mesa's runners:
- [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
- [x] `Soil.Environment` inherits from `mesa.Model`
- [x] `Soil.Environment` includes a Mesa-like Scheduler (see the `soil.time` module).
- [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
- [x] Rename agent.id to unique_id?
- [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
- [ ] Using mesa modules in a soil simulation
- [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage

112
docs/models.rst Executable file

@@ -0,0 +1,112 @@
Developing new models
---------------------
This document describes how to develop a new analysis model.
What is a model?
================
A model defines the behaviour of the agents with a view to assessing their effects on the system as a whole.
In practice, a model consists of at least two parts:
* Python module: the actual code that describes the behaviour.
* Setting up the variables in the Settings JSON file.
This separation allows us to run the simulation with different agents.
Models Code
===========
All the models are imported to the main file. The initialization looks like this:
.. code:: python
import settings

networkStatus = {}  # Dict that will contain the status of every agent in the network

sentimentCorrelationNodeArray = []
for x in range(0, settings.network_params["number_of_nodes"]):
    sentimentCorrelationNodeArray.append({'id': x})

# Initialize agent states. Let's assume everyone is normal.
init_states = [{'id': 0, } for _ in range(settings.network_params["number_of_nodes"])]
# add keys as necessary, but "id" must always refer to that state category
A new model has to inherit from the BaseBehaviour class, which is in the same module.
There are two basic methods:
* __init__
* step: used to define the behaviour over time.
Variable Initialization
=======================
The different parameters of the model have to be initialized in the Simulation Settings JSON file, which will be
passed as a parameter to the simulation.
.. code:: json
{
"agent": ["SISaModel","ControlModelM2"],
"neutral_discontent_spon_prob": 0.04,
"neutral_discontent_infected_prob": 0.04,
"neutral_content_spon_prob": 0.18,
"neutral_content_infected_prob": 0.02,
"discontent_neutral": 0.13,
"discontent_content": 0.07,
"variance_d_c": 0.02,
"content_discontent": 0.009,
"variance_c_d": 0.003,
"content_neutral": 0.088,
"standard_variance": 0.055,
"prob_neutral_making_denier": 0.035,
"prob_infect": 0.075,
"prob_cured_healing_infected": 0.035,
"prob_cured_vaccinate_neutral": 0.035,
"prob_vaccinated_healing_infected": 0.035,
"prob_vaccinated_vaccinate_neutral": 0.035,
"prob_generate_anti_rumor": 0.035
}
In this file you will also define the models you are going to simulate. You can simulate as many models as you want.
The simulation returns one result for each model, executing each model separately. For the usage, see :doc:`usage`.
Example Model
=============
In this section, we will implement a Sentiment Correlation Model.
The class would look like this:
.. code:: python
from ..BaseBehaviour import *
from .. import sentimentCorrelationNodeArray


class SentimentCorrelationModel(BaseBehaviour):
    def __init__(self, environment=None, agent_id=0, state=()):
        super().__init__(environment=environment, agent_id=agent_id, state=state)
        self.outside_effects_prob = environment.environment_params['outside_effects_prob']
        self.anger_prob = environment.environment_params['anger_prob']
        self.joy_prob = environment.environment_params['joy_prob']
        self.sadness_prob = environment.environment_params['sadness_prob']
        self.disgust_prob = environment.environment_params['disgust_prob']
        self.time_awareness = []
        for i in range(4):  # In this model we have 4 sentiments
            self.time_awareness.append(0)  # 0 -> Anger, 1 -> joy, 2 -> sadness, 3 -> disgust
        sentimentCorrelationNodeArray[self.id][self.env.now] = 0

    def step(self, now):
        self.behaviour()  # Method which defines the behaviour
        super().step(now)
The variables will be modified by the user, so you have to include them in the Simulation Settings JSON file.


@@ -1,38 +0,0 @@
Upgrading to Soil 1.0
---------------------
What are the main changes in version 1.0?
#########################################
Version 1.0 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.
We drew several lessons from the previous version of Soil, and tried to address them in this version.
Mainly:
- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases. For most use cases, only a handful of variables need to be stored. This fits nicely with Mesa's data collection system.
- The API was too complex, and it was difficult to understand how to use it.
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.
The 0.30 rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
The new Soil is less configuration-centric.
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.
How are agents assigned to nodes in the network
###############################################
The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
If `topology` is not provided, it will default to `self.model.topology`.
This assignment might fail if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
If `node_id` is not provided, a random node will be selected from the topology, until a node with no agent is found.
Then, the `node_id` of that node is assigned to the agent.
If no node with no agent is found, a new node is automatically added to the topology.
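For illustration, a minimal sketch of both cases, mirroring the `add_agent` calls used in the bundled examples (the environment class and the network size are made up):

.. code:: python

  from networkx import complete_graph
  from soil import Environment, CounterModel

  class MyEnv(Environment):
      def init(self):
          self.create_network(generator=complete_graph, n=3)
          # Explicit placement: this agent is bound to node 0
          self.add_agent(agent_class=CounterModel, node_id=0)
          # No node_id given: a random node without an agent is selected;
          # if every node is taken, a new node is added to the topology
          self.add_agent(agent_class=CounterModel)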
Can Soil environments include more than one network / topology?
###############################################################
Yes, but each network has to be included manually.
Somewhere between 0.20 and 0.30 we added support for multiple networks, but it was deemed too complex and was removed.


@@ -1,2 +0,0 @@
ipython>=7.31.1
nbsphinx==0.9.1


@@ -1,12 +0,0 @@
### MESA
Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples that show how that integration may be used, in the `examples/mesa` folder in the repository.
Here are some reasons to use Soil instead of plain mesa (a short sketch follows the list):
- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed)
- Reporting functions that aggregate the results of multiple runs
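A condensed sketch of those conveniences (the class names are made up, and the calls follow the patterns in the `examples/` folder rather than a documented API):

```python
from networkx import erdos_renyi_graph
from soil import Environment, Simulation, agents, parameters


class Red(agents.NetworkAgent, agents.FSM):
    @agents.default_state
    @agents.state
    def idle(self):
        self.debug("red tick")


class Blue(Red):
    pass


class DistributionEnv(Environment):
    ratio_red: parameters.probability = 0.2  # class attributes double as tunable parameters

    def init(self):
        self.create_network(generator=erdos_renyi_graph, n=100, p=0.1)
        # Populate the topology following a distribution of agent classes
        self.populate_network([Red, Blue], [self.ratio_red, 1 - self.ratio_red])


if __name__ == "__main__":
    # The Simulation runner repeats the experiment with a different seed per iteration
    Simulation(model=DistributionEnv, iterations=5, max_steps=20, dump=False).run()
```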

Binary file not shown (removed image, 43 KiB).

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

99
docs/usage.rst Executable file

@@ -0,0 +1,99 @@
Usage
-----
First of all, you need to install the package. See :doc:`installation` for installation instructions.
Simulation Settings
===================
Once installed, before running a simulation, you need to configure it.
* In the Settings JSON file you will find the configuration of the network.
.. code:: json

   {
     "network_type": 1,
     "number_of_nodes": 1000,
     "max_time": 50,
     "num_trials": 1,
     "timeout": 2
   }
* In the Settings JSON file, you will also find the configuration of the models.
Network Types
=============
There are three types of network implemented, but you could add more.
.. code:: python
if settings.network_type == 0:
    G = nx.complete_graph(settings.number_of_nodes)
if settings.network_type == 1:
    G = nx.barabasi_albert_graph(settings.number_of_nodes, 10)
if settings.network_type == 2:
    G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
# More types of networks can be added here
Models Settings
===============
After having configured the simulation, the next step is setting up the variables of the models.
For this, you will need to modify the Settings JSON file again.
.. code:: json
{
"agent": ["SISaModel","ControlModelM2"],
"neutral_discontent_spon_prob": 0.04,
"neutral_discontent_infected_prob": 0.04,
"neutral_content_spon_prob": 0.18,
"neutral_content_infected_prob": 0.02,
"discontent_neutral": 0.13,
"discontent_content": 0.07,
"variance_d_c": 0.02,
"content_discontent": 0.009,
"variance_c_d": 0.003,
"content_neutral": 0.088,
"standard_variance": 0.055,
"prob_neutral_making_denier": 0.035,
"prob_infect": 0.075,
"prob_cured_healing_infected": 0.035,
"prob_cured_vaccinate_neutral": 0.035,
"prob_vaccinated_healing_infected": 0.035,
"prob_vaccinated_vaccinate_neutral": 0.035,
"prob_generate_anti_rumor": 0.035
}
In this file you will define the different models you are going to simulate. You can simulate as many models
as you want. Each model will be simulated separately.
After setting up the models, you have to initialize the parameters of each one. You will find the parameters needed
in the documentation of each model.
Parameter validation will fail if a required parameter without a default has not been provided.
Running the Simulation
======================
After setting all the configuration, you will be able to run the simulation. All you need to do is execute:
.. code:: bash
python3 soil.py
The simulation will return a dynamic graph .gexf file which could be visualized with
`Gephi <https://gephi.org/users/download/>`__.
It will also return one .png picture for each model simulated.


@@ -1,39 +0,0 @@
from networkx import Graph
import random
import networkx as nx

from soil import Simulation, Environment, CounterModel, parameters


def mygenerator(n=5, n_edges=5):
    """
    Just a simple generator that creates a network with n nodes and
    n_edges edges. Edges are assigned randomly, only avoiding self loops.
    """
    G = nx.Graph()

    for i in range(n):
        G.add_node(i)

    for i in range(n_edges):
        nodes = list(G.nodes)
        n_in = random.choice(nodes)
        nodes.remove(n_in)  # Avoid loops
        n_out = random.choice(nodes)
        G.add_edge(n_in, n_out)
    return G


class GeneratorEnv(Environment):
    """Using a custom generator for the network"""

    generator: parameters.function = staticmethod(mygenerator)

    def init(self):
        self.create_network(generator=self.generator, n=10, n_edges=5)
        self.add_agents(CounterModel)


sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)

if __name__ == '__main__':
    sim.run(dump=False)


@@ -1,41 +0,0 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Fibonacci(FSM):
    """Agent that only executes in t_steps that are Fibonacci numbers"""

    prev = 1

    @default_state
    @state
    def counting(self):
        self.log("Stopping at {}".format(self.now))
        prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
        return None, Delta(prev)


class Odds(FSM):
    """Agent that only executes in odd t_steps"""

    @default_state
    @state
    def odds(self):
        self.log("Stopping at {}".format(self.now))
        return None, Delta(1 + self.now % 2)


from soil import Environment, Simulation
from networkx import complete_graph


class TimeoutsEnv(Environment):
    def init(self):
        self.create_network(generator=complete_graph, n=2)
        self.add_agent(agent_class=Fibonacci, node_id=0)
        self.add_agent(agent_class=Odds, node_id=1)


sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)

if __name__ == "__main__":
    sim.run(dump=False)


@@ -1,9 +0,0 @@
This example can be run with command-line options, like this:
```bash
python cars.py --level DEBUG -e summary --csv
#or
soil cars.py -e summary
```
This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.


@@ -1,231 +0,0 @@
"""
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
from their location to their desired location.
An example scenario could play like the following:
- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
- Passenger(1) tells every driver that it wants to request a Journey.
- Each driver receives the request.
If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
- When Passenger(1) accepts the request, two things happen:
- Passenger(1) changes its state to `driving_home`
- Driver(2) starts moving towards the origin of the Journey
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
- When Driver(2) reaches the destination (carrying Passenger(1) along):
- Driver(2) starts wandering again
- Passenger(1) dies, and is removed from the simulation
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid
# More complex scenarios may use more than one type of message between objects.
# A common pattern is to use `enum.Enum` to represent state changes in a request.
@dataclass
class Journey:
"""
This represents a request for a journey. Passengers and drivers exchange this object.
A journey may have a driver assigned or not. If the driver has not been assigned, this
object is considered a "request for a journey".
"""
origin: (int, int)
destination: (int, int)
tip: float
passenger: Passenger
driver: Optional[Driver] = None
class City(EventedEnvironment):
"""
An environment with a grid where drivers and passengers will be placed.
The number of drivers and riders is configurable through its parameters:
:param str n_cars: The total number of drivers to add
:param str n_passengers: The number of passengers in the simulation
:param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
and `n_cars` params.
:param int height: Height of the internal grid
:param int width: Width of the internal grid
"""
n_cars = 1
n_passengers = 10
height = 100
width = 100
def init(self):
self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
if not self.agents:
self.add_agents(Driver, k=self.n_cars)
self.add_agents(Passenger, k=self.n_passengers)
for agent in self.agents:
self.grid.place_agent(agent, (0, 0))
self.grid.move_to_empty(agent)
self.total_earnings = 0
self.add_model_reporter("total_earnings")
@report
@property
def number_passengers(self):
return self.count_agents(agent_class=Passenger)
class Driver(Evented, FSM):
pos = None
journey = None
earnings = 0
def on_receive(self, msg, sender):
"""This is not a state. It will run (and block) every time check_messages is invoked"""
if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
msg.driver = self
self.journey = msg
def check_passengers(self):
"""If there are no more passengers, stop forever"""
c = self.count_agents(agent_class=Passenger)
self.debug(f"Passengers left {c}")
if not c:
self.die("No more passengers")
@default_state
@state
def wandering(self):
"""Move around the city until a journey is accepted"""
target = None
self.check_passengers()
self.journey = None
while self.journey is None: # No potential journeys detected (see on_receive)
if target is None or not self.move_towards(target):
target = self.random.choice(
self.model.grid.get_neighborhood(self.pos, moore=False)
)
self.check_passengers()
# This will call on_receive behind the scenes, and the agent's status will be updated
self.check_messages()
yield Delta(30) # Wait at least 30 seconds before checking again
try:
# Re-send the journey to the passenger, to confirm that we have been selected
self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
except events.TimedOut:
# No journey has been accepted. Try again
self.journey = None
return
return self.driving
@state
def driving(self):
"""The journey has been accepted. Pick them up and take them to their destination"""
self.info(f"Driving towards Passenger {self.journey.passenger.unique_id}")
while self.move_towards(self.journey.origin):
yield
self.info(f"Driving {self.journey.passenger.unique_id} from {self.journey.origin} to {self.journey.destination}")
while self.move_towards(self.journey.destination, with_passenger=True):
yield
self.info("Arrived at destination")
self.earnings += self.journey.tip
self.model.total_earnings += self.journey.tip
self.check_passengers()
return self.wandering
def move_towards(self, target, with_passenger=False):
"""Move one cell at a time towards a target"""
self.debug(f"Moving { self.pos } -> { target }")
if target[0] == self.pos[0] and target[1] == self.pos[1]:
return False
next_pos = [self.pos[0], self.pos[1]]
for idx in [0, 1]:
if self.pos[idx] < target[idx]:
next_pos[idx] += 1
break
if self.pos[idx] > target[idx]:
next_pos[idx] -= 1
break
self.model.grid.move_agent(self, tuple(next_pos))
if with_passenger:
self.journey.passenger.pos = (
self.pos
) # This could be communicated through messages
return True
class Passenger(Evented, FSM):
pos = None
def on_receive(self, msg, sender):
"""This is not a state. It will be run synchronously every time `check_messages` is run"""
if isinstance(msg, Journey):
self.journey = msg
return msg
@default_state
@state
def asking(self):
destination = (
self.random.randint(0, self.model.grid.height-1),
self.random.randint(0, self.model.grid.width-1),
)
self.journey = None
journey = Journey(
origin=self.pos,
destination=destination,
tip=self.random.randint(10, 100),
passenger=self,
)
timeout = 60
expiration = self.now + timeout
self.info(f"Asking for journey at: { self.pos }")
self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
while not self.journey:
self.debug(f"Waiting for responses at: { self.pos }")
try:
# This will call check_messages behind the scenes, and the agent's status will be updated
# If you want to avoid that, you can call it with: check=False
yield self.received(expiration=expiration)
except events.TimedOut:
self.info(f"Still no response. Waiting at: { self.pos }")
self.model.broadcast(
journey, ttl=timeout, sender=self, agent_class=Driver
)
expiration = self.now + timeout
self.info(f"Got a response! Waiting for driver")
return self.driving_home
@state
def driving_home(self):
while (
self.pos[0] != self.journey.destination[0]
or self.pos[1] != self.journey.destination[1]
):
try:
yield self.received(timeout=60)
except events.TimedOut:
pass
self.die("Got home safe!")
simulation = Simulation(name="RideHailing",
model=City,
seed="carsSeed",
max_time=1000,
parameters=dict(n_passengers=2))
if __name__ == "__main__":
easy(simulation)


@@ -1,7 +0,0 @@
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator
sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, parameters=dict(generator=graph_generator, N=10, width=50, height=50))
if __name__ == "__main__":
    sim.run()


@@ -1,111 +0,0 @@
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.UserParam import Slider, Choice
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx
class MyNetwork(NetworkModule):
def render(self, model):
return self.portrayal_method(model)
def network_portrayal(env):
# The model ensures there is 0 or 1 agent per node
portrayal = dict()
wealths = {
node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
}
portrayal["nodes"] = [
{
"id": node_id,
"size": 2 * (wealth + 1),
"color": "#CC0000" if wealth == 0 else "#007959",
# "color": "#CC0000",
"label": f"{node_id}: {wealth}",
}
for (node_id, wealth) in wealths.items()
]
portrayal["edges"] = [
{"id": edge_id, "source": source, "target": target, "color": "#000000"}
for edge_id, (source, target) in enumerate(env.G.edges)
]
return portrayal
def gridPortrayal(agent):
"""
This function is registered with the visualization server to be called
each tick to indicate how to draw the agent in its current state.
:param agent: the agent in the simulation
:return: the portrayal dictionary
"""
color = max(10, min(agent.wealth * 10, 100))
return {
"Shape": "rect",
"w": 1,
"h": 1,
"Filled": "true",
"Layer": 0,
"Label": agent.unique_id,
"Text": agent.unique_id,
"x": agent.pos[0],
"y": agent.pos[1],
"Color": f"rgba(31, 10, 255, 0.{color})",
}
grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule(
[{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)
parameters = {
"N": Slider(
"N",
5,
1,
10,
1,
description="Choose how many agents to include in the model",
),
"height": Slider(
"height",
5,
5,
10,
1,
description="Grid height",
),
"width": Slider(
"width",
5,
5,
10,
1,
description="Grid width",
),
"agent_class": Choice(
"Agent class",
value="MoneyAgent",
choices=["MoneyAgent", "SocialMoneyAgent"],
),
"generator": graph_generator,
}
canvas_element = CanvasGrid(
gridPortrayal, parameters["width"].value, parameters["height"].value, 500, 500
)
server = ModularServer(
MoneyEnv, [grid, chart, canvas_element], "Money Model", parameters
)
server.port = 8521
if __name__ == '__main__':
server.launch(open_browser=False)


@@ -1,137 +0,0 @@
"""
This is an example that adds soil agents and environment in a normal
mesa workflow.
"""
from mesa import Agent as MesaAgent
from mesa.space import MultiGrid
# from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
import networkx as nx
from soil import NetworkAgent, Environment, serialization
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.agents]
x = sorted(agent_wealths)
N = len(list(model.agents))
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(MesaAgent):
"""
A MESA agent with fixed initial wealth.
It will only share wealth with neighbors based on grid proximity
"""
def __init__(self, unique_id, model, wealth=1, **kwargs):
super().__init__(unique_id=unique_id, model=model)
self.wealth = wealth
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
print("Crying wolf", self.pos)
self.move()
if self.wealth > 0:
self.give_money()
class SocialMoneyAgent(MoneyAgent, NetworkAgent):
wealth = 1
def give_money(self):
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
friends = set(self.get_neighbors())
self.info("Trying to give money")
self.info("Cellmates: ", cellmates)
self.info("Friends: ", friends)
nearby_friends = list(cellmates & friends)
if len(nearby_friends):
other = self.random.choice(nearby_friends)
other.wealth += 1
self.wealth -= 1
def graph_generator(n=5):
G = nx.Graph()
for ix in range(n):
G.add_edge(0, ix)
return G
class MoneyEnv(Environment):
"""A model with some number of agents."""
def __init__(
self,
width,
height,
N,
generator=graph_generator,
agent_class=SocialMoneyAgent,
topology=None,
**kwargs
):
generator = serialization.deserialize(generator)
agent_class = serialization.deserialize(agent_class, globs=globals())
topology = generator(n=N)
super().__init__(topology=topology, N=N, **kwargs)
self.grid = MultiGrid(width, height, False)
self.populate_network(agent_class=agent_class)
# Create agents
for agent in self.agents:
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(agent, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
if __name__ == "__main__":
fixed_params = {
"generator": nx.complete_graph,
"width": 10,
"network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
"height": 10,
}
variable_params = {"N": range(10, 100, 10)}
batch_run = BatchRunner(
MoneyEnv,
variable_parameters=variable_params,
fixed_parameters=fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)


@@ -1,87 +0,0 @@
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return 1 + (1 / N) - 2 * B
class MoneyAgent(Agent):
"""An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = self.random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = self.random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
class MoneyModel(Model):
"""A model with some number of agents."""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.schedule = RandomActivation(self)
self.running = True
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = self.random.randrange(self.grid.width)
y = self.random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
)
def step(self):
self.datacollector.collect(self)
self.schedule.step()
if __name__ == "__main__":
fixed_params = {"width": 10, "height": 10}
variable_params = {"N": range(10, 500, 10)}
batch_run = BatchRunner(
MoneyModel,
variable_params,
fixed_params,
iterations=5,
max_steps=100,
model_reporters={"Gini": compute_gini},
)
batch_run.run_all()
run_data = batch_run.get_model_vars_dataframe()
run_data.head()
print(run_data.Gini)

File diff suppressed because one or more lines are too long


@@ -1,134 +0,0 @@
from soil.agents import FSM, NetworkAgent, state, default_state, prob
from soil.parameters import *
import logging
from soil.environment import Environment
class DumbViewer(FSM, NetworkAgent):
"""
A viewer that gets infected via TV (if it has one) and tries to infect
its neighbors once it's infected.
"""
has_been_infected: bool = False
has_tv: bool = False
@default_state
@state
def neutral(self):
if self.has_tv:
if self.prob(self.get("prob_tv_spread")):
return self.infected
if self.has_been_infected:
return self.infected
@state
def infected(self):
for neighbor in self.get_neighbors(state_id=self.neutral.id):
if self.prob(self.get("prob_neighbor_spread")):
neighbor.infect()
def infect(self):
"""
This is not a state. It is a function that other agents can use to try to
infect this agent. DumbViewer always gets infected, but other agents like
HerdViewer might not become infected right away
"""
self.has_been_infected = True
class HerdViewer(DumbViewer):
"""
A viewer whose probability of infection depends on the state of its neighbors.
"""
def infect(self):
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
infected = self.count_neighbors(state_id=self.infected.id)
total = self.count_neighbors()
prob_infect = self.get("prob_neighbor_spread") * infected / total
self.debug("prob_infect", prob_infect)
if self.prob(prob_infect):
self.has_been_infected = True
class WiseViewer(HerdViewer):
"""
A viewer that can change its mind.
"""
@state
def cured(self):
prob_cure = self.get("prob_neighbor_cure")
for neighbor in self.get_neighbors(state_id=self.infected.id):
if self.prob(prob_cure):
try:
neighbor.cure()
except AttributeError:
self.debug("Viewer {} cannot be cured".format(neighbor.id))
def cure(self):
self.has_been_cured = True
@state
def infected(self):
if self.has_been_cured:
return self.cured
cured = max(self.count_neighbors(self.cured.id), 1.0)
infected = max(self.count_neighbors(self.infected.id), 1.0)
prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
if self.prob(prob_cure):
return self.cured
class NewsSpread(Environment):
ratio_dumb: probability = 1,
ratio_herd: probability = 0,
ratio_wise: probability = 0,
prob_tv_spread: probability = 0.1,
prob_neighbor_spread: probability = 0.1,
prob_neighbor_cure: probability = 0.05,
def init(self):
self.populate_network([DumbViewer, HerdViewer, WiseViewer],
[self.ratio_dumb, self.ratio_herd, self.ratio_wise])
from itertools import product
from soil import Simulation
# We want to investigate the effect of different agent distributions on the spread of news.
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
# a scale-free one (Barabási-Albert) and a random one (Erdős-Rényi).
counter = 0
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
for (generator, netparams) in {
"barabasi_albert_graph": {"m": 5},
"erdos_renyi_graph": {"p": 0.1},
}.items():
print(r1, r2, 1-r1-r2, generator)
# Create new simulation
netparams["n"] = 500
Simulation(
name='newspread_sim',
model=NewsSpread,
parameters=dict(
ratio_dumb=r1,
ratio_herd=r2,
ratio_wise=1-r1-r2,
network_generator=generator,
network_params=netparams,
prob_neighbor_spread=0,
),
iterations=5,
max_steps=300,
dump=False,
).run()
counter += 1
# All the necessary simulation instances have been run at this point
print(f"A total of {counter} simulations were run.")

View File

@@ -1 +0,0 @@
Programmatic*

View File

@@ -1,53 +0,0 @@
"""
Example of a fully programmatic simulation, without definition files.
"""
from soil import Simulation, Environment, agents
from networkx import Graph
import logging
def mygenerator():
# Add only a node
G = Graph()
G.add_node(1)
G.add_node(2)
return G
class MyAgent(agents.NetworkAgent, agents.FSM):
times_run = 0
@agents.default_state
@agents.state
def neutral(self):
self.debug("I am running")
if self.prob(0.2):
self.times_run += 1
self.info("This runs 2/10 times on average")
class ProgrammaticEnv(Environment):
def init(self):
self.create_network(generator=mygenerator)
assert len(self.G)
self.populate_network(agent_class=MyAgent)
self.add_agent_reporter('times_run')
simulation = Simulation(
name="Programmatic",
model=ProgrammaticEnv,
seed='Program',
iterations=1,
max_time=100,
dump=False,
)
if __name__ == "__main__":
# By default, logging will only print WARNING logs (and above).
# You need to choose a lower logging level to get INFO/DEBUG traces
logging.basicConfig(level=logging.INFO)
envs = simulation.run()
for agent in envs[0].agents:
print(agent.times_run)

View File

@@ -1,10 +0,0 @@
Simulation of pubs and drinking pals that go from pub to pub.
The custom environment includes a list of pubs and methods that allow agents to discover and enter pubs.
There are two types of agents (a simplified sketch of both roles is shown below):
* Patron. A patron will do three things, in this order:
    * Look for other patrons to drink with
    * Look for a pub where the agent and other agents in the same group can get in.
    * While in the pub, patrons only drink, until they get drunk and are taken home.
* Police. There is only one police agent, which takes any drunk patrons home (kicks them out of the pub).
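
As a rough sketch of how these two roles map onto soil's FSM agents (hypothetical class names; the full implementation follows in the next file):

```
from soil.agents import FSM, NetworkAgent, state, default_state


class PatronSketch(FSM, NetworkAgent):
    drunk = False
    sent_home = False

    @default_state
    @state
    def looking_for_friends(self):
        # 1) find company, then go looking for a pub together
        return self.looking_for_pub

    @state
    def looking_for_pub(self):
        # 2) find a pub that lets the whole group in
        return self.in_pub

    @state
    def in_pub(self):
        # 3) drink until drunk, then wait for the police to send us home
        self.drunk = True
        if self.sent_home:
            return self.at_home

    @state
    def at_home(self):
        """Final state: stay home."""

    def go_home(self):
        # Not a state: the police agent calls this directly.
        self.sent_home = True


class PoliceSketch(FSM):
    @default_state
    @state
    def patrol(self):
        for patron in self.get_agents(agent_class=PatronSketch, drunk=True):
            patron.go_home()
```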

View File

@@ -1,195 +0,0 @@
from soil.agents import FSM, NetworkAgent, state, default_state
from soil import Environment, Simulation, parameters
from itertools import islice
import networkx as nx
import logging
class CityPubs(Environment):
"""Environment with Pubs"""
level = logging.INFO
number_of_pubs: parameters.Integer = 3
ratio_extroverted: parameters.probability = 0.1
pub_capacity: parameters.Integer = 10
def init(self):
self.pubs = {}
for i in range(self.number_of_pubs):
newpub = {
"name": "The awesome pub #{}".format(i),
"open": True,
"capacity": self.pub_capacity,
"occupancy": 0,
}
self.pubs[newpub["name"]] = newpub
self.add_agent(agent_class=Police)
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
[self.ratio_extroverted, 1-self.ratio_extroverted])
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
def enter(self, pub_id, *nodes):
"""Agents will try to enter. The pub checks if it is possible"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
return False
pub["occupancy"] += len(nodes)
for node in nodes:
node["pub"] = pub_id
return True
def available_pubs(self):
for pub in self["pubs"].values():
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
yield pub["name"]
def exit(self, pub_id, *node_ids):
"""Agents will notify the pub they want to leave"""
try:
pub = self["pubs"][pub_id]
except KeyError:
raise ValueError("Pub {} is not available".format(pub_id))
for node_id in node_ids:
node = self.get_agent(node_id)
if pub_id == node["pub"]:
del node["pub"]
pub["occupancy"] -= 1
class Patron(FSM, NetworkAgent):
"""Agent that looks for friends to drink with. It will do three things:
1) Look for other patrons to drink with
2) Look for a bar where the agent and other agents in the same group can get in.
3) While in the bar, patrons only drink, until they get drunk and taken home.
"""
level = logging.DEBUG
pub = None
drunk = False
pints = 0
max_pints = 3
kicked_out = False
@default_state
@state
def looking_for_friends(self):
"""Look for friends to drink with"""
self.info("I am looking for friends")
available_friends = list(
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
)
if not available_friends:
self.info("Life sucks and I'm alone!")
return self.at_home
befriended = self.try_friends(available_friends)
if befriended:
return self.looking_for_pub
@state
def looking_for_pub(self):
"""Look for a pub that accepts me and my friends"""
if self["pub"] != None:
return self.sober_in_pub
self.debug("I am looking for a pub")
group = list(self.get_neighbors())
for pub in self.model.available_pubs():
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
if self.model.enter(pub, self, *group):
self.info("We're all {} getting in {}!".format(len(group), pub))
return self.sober_in_pub
@state
def sober_in_pub(self):
"""Drink up."""
self.drink()
if self["pints"] > self["max_pints"]:
return self.drunk_in_pub
@state
def drunk_in_pub(self):
"""I'm out. Take me home!"""
self.info("I'm so drunk. Take me home!")
self["drunk"] = True
if self.kicked_out:
return self.at_home
        pass  # stay drunk in the pub until kicked out
@state
def at_home(self):
"""The end"""
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
self.debug("I'm home. Just like {} of my friends".format(len(others)))
def drink(self):
self["pints"] += 1
self.debug("Cheers to that")
def kick_out(self):
self.kicked_out = True
def befriend(self, other_agent, force=False):
"""
Try to become friends with another agent. The chances of
success depend on both agents' openness.
"""
if force or self["openness"] > self.random.random():
self.add_edge(self, other_agent)
self.info("Made some friend {}".format(other_agent))
return True
return False
def try_friends(self, others):
"""Look for random agents around me and try to befriend them"""
befriended = False
k = int(10 * self["openness"])
self.random.shuffle(others)
for friend in islice(others, k): # random.choice >= 3.7
if friend == self:
continue
if friend.befriend(self):
self.befriend(friend, force=True)
self.debug("Hooray! new friend: {}".format(friend.unique_id))
befriended = True
else:
self.debug("{} does not want to be friends".format(friend.unique_id))
return befriended
class Police(FSM):
"""Simple agent to take drunk people out of pubs."""
level = logging.INFO
@default_state
@state
def patrol(self):
        drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
        if not drunksters:
            self.info("No trash to take out. Too bad.")
        for drunk in drunksters:
            self.info("Kicking out the trash: {}".format(drunk.unique_id))
            drunk.kick_out()
sim = Simulation(
model=CityPubs,
name="pubcrawl",
iterations=3,
max_steps=10,
dump=False,
parameters=dict(
network_generator=nx.empty_graph,
network_params={"n": 30},
model=CityPubs,
altercations=0,
number_of_pubs=3,
)
)
if __name__ == "__main__":
sim.run(parallel=False)

View File

@@ -1,14 +0,0 @@
There are two similar implementations of this simulation.
- `basic`. Using simple primitives
- `improved`. Using more advanced features such as the `time` module to avoid unnecessary computations (i.e., skip steps), and generator functions.
The examples can be run directly in the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:
```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```
To learn more about how this functionality works, check out the `soil.easy` function.
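
For reference, a rough programmatic equivalent of the invocation above, without the exporters (the module name `rabbits_improved_sim` and the parameter names are assumptions based on the `Simulation` calls used in these examples):

```
import logging

from soil import Simulation
from rabbits_improved_sim import RabbitsImprovedEnv  # hypothetical module name

logging.basicConfig(level=logging.INFO)

sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="CustomSeed", iterations=1)
envs = sim.run()
```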

View File

@@ -1,153 +0,0 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
from soil.time import Delta
from enum import Enum
from collections import Counter
import logging
import math
from rabbits_basic_sim import RabbitEnv
class RabbitsImprovedEnv(RabbitEnv):
def init(self):
"""Initialize the environment with the new versions of the agents"""
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
class Rabbit(FSM, NetworkAgent):
sexual_maturity = 30
life_expectancy = 300
birth = None
@property
def age(self):
if self.birth is None:
return None
return self.now - self.birth
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.birth = self.now
self.offspring = 0
return self.youngling, Delta(self.sexual_maturity - self.age)
@state
def youngling(self):
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Do not try to impregnate other females
class Female(Rabbit):
gestation = 10
conception = None
@state
def fertile(self):
# Just wait for a Male
if self.age > self.life_expectancy:
return self.dead
if self.conception is not None:
return self.pregnant
@property
def pregnancy(self):
if self.conception is None:
return None
return self.now - self.conception
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.conception = self.now
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.debug("I am pregnant")
if self.age > self.life_expectancy:
self.info("Dying before giving birth")
return self.die()
if self.pregnancy >= self.gestation:
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
if self.mate:
child.add_edge(self.mate)
self.mate.offspring += 1
else:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
return self.fertile
def die(self):
if self.pregnancy is not None:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.iter_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()

View File

@@ -1,161 +0,0 @@
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
from collections import Counter
import logging
import math
class RabbitEnv(Environment):
prob_death: params.probability = 1e-100
def init(self):
a1 = self.add_node(Male)
a2 = self.add_node(Female)
a1.add_edge(a2)
self.add_agent(RandomAccident)
@report
@property
def num_rabbits(self):
return self.count_agents(agent_class=Rabbit)
@report
@property
def num_males(self):
return self.count_agents(agent_class=Male)
@report
@property
def num_females(self):
return self.count_agents(agent_class=Female)
class Rabbit(NetworkAgent, FSM):
sexual_maturity = 30
life_expectancy = 300
@default_state
@state
def newborn(self):
self.info("I am a newborn.")
self.age = 0
self.offspring = 0
return self.youngling
@state
def youngling(self):
self.age += 1
if self.age >= self.sexual_maturity:
self.info(f"I am fertile! My age is {self.age}")
return self.fertile
@state
def fertile(self):
raise Exception("Each subclass should define its fertile state")
@state
def dead(self):
self.die()
class Male(Rabbit):
max_females = 5
mating_prob = 0.001
@state
def fertile(self):
self.age += 1
if self.age > self.life_expectancy:
return self.dead
# Males try to mate
for f in self.model.agents(
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
):
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
if self.prob(self["mating_prob"]):
f.impregnate(self)
break # Take a break
class Female(Rabbit):
gestation = 10
pregnancy = -1
@state
def fertile(self):
# Just wait for a Male
self.age += 1
if self.age > self.life_expectancy:
return self.dead
if self.pregnancy >= 0:
return self.pregnant
def impregnate(self, male):
self.info(f"impregnated by {repr(male)}")
self.mate = male
self.pregnancy = 0
self.number_of_babies = int(8 + 4 * self.random.random())
@state
def pregnant(self):
self.info("I am pregnant")
self.age += 1
if self.age >= self.life_expectancy:
return self.die()
if self.pregnancy < self.gestation:
self.pregnancy += 1
return
self.info("Having {} babies".format(self.number_of_babies))
for i in range(self.number_of_babies):
state = {}
agent_class = self.random.choice([Male, Female])
child = self.model.add_node(agent_class=agent_class, **state)
child.add_edge(self)
try:
child.add_edge(self.mate)
self.model.agents[self.mate].offspring += 1
except ValueError:
self.debug("The father has passed away")
self.offspring += 1
self.mate = None
self.pregnancy = -1
return self.fertile
def die(self):
if "pregnancy" in self and self["pregnancy"] > -1:
self.info("A mother has died carrying a baby!!")
return super().die()
class RandomAccident(BaseAgent):
def step(self):
rabbits_alive = self.model.G.number_of_nodes()
if not rabbits_alive:
return self.die()
prob_death = self.model.prob_death * math.floor(
math.log10(max(1, rabbits_alive))
)
self.debug("Killing some rabbits with prob={}!".format(prob_death))
for i in self.get_agents(agent_class=Rabbit):
if i.state_id == i.dead.id:
continue
if self.prob(prob_death):
self.info("I killed a rabbit: {}".format(i.id))
rabbits_alive -= 1
i.die()
self.debug("Rabbits alive: {}".format(rabbits_alive))
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", iterations=1)
if __name__ == "__main__":
sim.run()

View File

@@ -1,47 +0,0 @@
"""
Example of setting custom random delays between agent states, in a fully
programmatic simulation without definition files.
"""
from soil import Simulation, agents, Environment
from soil.time import Delta
class MyAgent(agents.FSM):
"""
An agent that first does a ping
"""
defaults = {"pong_counts": 2}
@agents.default_state
@agents.state
def ping(self):
self.info("Ping")
return self.pong, Delta(self.random.expovariate(1 / 16))
@agents.state
def pong(self):
self.info("Pong")
self.pong_counts -= 1
self.info(str(self.pong_counts))
if self.pong_counts < 1:
return self.die()
return None, Delta(self.random.expovariate(1 / 16))
class RandomEnv(Environment):
def init(self):
self.add_agent(agent_class=MyAgent)
s = Simulation(
name="Programmatic",
model=RandomEnv,
iterations=1,
max_time=100,
dump=False,
)
envs = s.run()

View File

@@ -1,341 +0,0 @@
import networkx as nx
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
from soil import Environment, Simulation
from soil.parameters import *
from soil.utils import int_seed
class TerroristEnvironment(Environment):
n: Integer = 100
radius: Float = 0.2
information_spread_intensity: probability = 0.7
    terrorist_additional_influence: probability = 0.035
max_vulnerability: probability = 0.7
prob_interaction: probability = 0.5
# TrainingAreaModel and HavenModel
training_influence: probability = 0.20
haven_influence: probability = 0.20
# TerroristNetworkModel
vision_range: Float = 0.30
sphere_influence: Integer = 2
weight_social_distance: Float = 0.035
weight_link_distance: Float = 0.035
ratio_civil: probability = 0.8
ratio_leader: probability = 0.1
ratio_training: probability = 0.05
ratio_haven: probability = 0.05
def init(self):
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
self.populate_network([
TerroristNetworkModel.w(state_id='civilian'),
TerroristNetworkModel.w(state_id='leader'),
TrainingAreaModel,
HavenModel
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
def generator(self, *args, **kwargs):
return nx.random_geometric_graph(*args, **kwargs, seed=int_seed(self._seed))
class TerroristSpreadModel(FSM, Geo):
"""
Settings:
information_spread_intensity
terrorist_additional_influence
min_vulnerability (optional else zero)
max_vulnerability
"""
information_spread_intensity = 0.1
terrorist_additional_influence = 0.1
min_vulnerability = 0
max_vulnerability = 1
def init(self):
if self.state_id == self.civilian.id: # Civilian
self.mean_belief = self.model.random.uniform(0.00, 0.5)
elif self.state_id == self.terrorist.id: # Terrorist
self.mean_belief = self.random.uniform(0.8, 1.00)
elif self.state_id == self.leader.id: # Leader
self.mean_belief = 1.00
else:
            raise Exception("Invalid state id: {}".format(self.state_id))
self.vulnerability = self.random.uniform(
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
)
@default_state
@state
def civilian(self):
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
if len(neighbours) > 0:
# Only interact with some of the neighbors
interactions = list(
n for n in neighbours if self.random.random() <= self.model.prob_interaction
)
influence = sum(self.degree(i) for i in interactions)
mean_belief = sum(
i.mean_belief * self.degree(i) / influence for i in interactions
)
mean_belief = (
mean_belief * self.information_spread_intensity
+ self.mean_belief * (1 - self.information_spread_intensity)
)
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
if self.mean_belief >= 0.8:
return self.terrorist
@state
def leader(self):
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
for neighbour in self.get_neighbors(
state_id=[self.terrorist.id, self.leader.id]
):
if self.betweenness(neighbour) > self.betweenness(self):
return self.terrorist
@state
def terrorist(self):
neighbours = self.get_agents(
state_id=[self.terrorist.id, self.leader.id],
agent_class=TerroristSpreadModel,
limit_neighbors=True,
)
if len(neighbours) > 0:
influence = sum(self.degree(n) for n in neighbours)
mean_belief = sum(
n.mean_belief * self.degree(n) / influence for n in neighbours
)
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
1 - self.vulnerability
)
self.mean_belief = self.mean_belief ** (
1 - self.terrorist_additional_influence
)
# Check if there are any leaders in the group
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
if not leaders:
# Check if this is the potential leader
# Stop once it's found. Otherwise, set self as leader
for neighbour in neighbours:
if self.betweenness(self) < self.betweenness(neighbour):
return
return self.leader
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
node = agent.node_id if agent else self.node_id
G = self.subgraph(**kwargs)
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
def degree(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_degree"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._degree = nx.degree_centrality(self.G)
self.model._last_step = self.now
return self.model._degree[agent.node_id]
def betweenness(self, agent, force=False):
if (
force
or (not hasattr(self.model, "_betweenness"))
or getattr(self.model, "_last_step", 0) < self.now
):
self.model._betweenness = nx.betweenness_centrality(self.G)
self.model._last_step = self.now
return self.model._betweenness[agent.node_id]
class TrainingAreaModel(FSM, Geo):
"""
Settings:
training_influence
min_vulnerability
Requires TerroristSpreadModel.
"""
training_influence = 0.1
min_vulnerability = 0
def init(self):
        self.mean_belief = 1
self.vulnerability = 0
@default_state
@state
def terrorist(self):
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.training_influence
)
class HavenModel(FSM, Geo):
"""
Settings:
haven_influence
min_vulnerability
max_vulnerability
Requires TerroristSpreadModel.
"""
min_vulnerability = 0
haven_influence = 0.1
max_vulnerability = 0.5
def init(self):
        self.mean_belief = 0
self.vulnerability = 0
def get_occupants(self, **kwargs):
return self.get_neighbors(agent_class=TerroristSpreadModel,
**kwargs)
@default_state
@state
def civilian(self):
civilians = self.get_occupants(state_id=self.civilian.id)
if not civilians:
return self.terrorist
for neighbour in self.get_occupants():
if neighbour.vulnerability > self.min_vulnerability:
neighbour.vulnerability = neighbour.vulnerability * (
1 - self.haven_influence
)
return self.civilian
@state
def terrorist(self):
for neighbour in self.get_occupants():
if neighbour.vulnerability < self.max_vulnerability:
neighbour.vulnerability = neighbour.vulnerability ** (
1 - self.haven_influence
)
return self.terrorist
class TerroristNetworkModel(TerroristSpreadModel):
"""
Settings:
sphere_influence
vision_range
weight_social_distance
weight_link_distance
"""
sphere_influence: float = 1
vision_range: float = 1
weight_social_distance: float = 0.5
weight_link_distance: float = 0.2
@state
def terrorist(self):
self.update_relationships()
return super().terrorist()
@state
def leader(self):
self.update_relationships()
return super().leader()
def update_relationships(self):
if self.count_neighbors(state_id=self.civilian.id) == 0:
close_ups = set(
self.geo_search(
radius=self.vision_range, agent_class=TerroristNetworkModel
)
)
step_neighbours = set(
self.ego_search(
self.sphere_influence,
agent_class=TerroristNetworkModel,
center=False,
)
)
neighbours = set(
agent.unique_id
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
)
search = (close_ups | step_neighbours) - neighbours
for agent in self.get_agents(search):
social_distance = 1 / self.shortest_path_length(agent.unique_id)
spatial_proximity = 1 - self.get_distance(agent.unique_id)
prob_new_interaction = (
self.weight_social_distance * social_distance
+ self.weight_link_distance * spatial_proximity
)
if (
agent.state_id == "civilian"
and self.random.random() < prob_new_interaction
):
self.add_edge(agent)
break
def get_distance(self, target):
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.unique_id]
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
dx = abs(source_x - target_x)
dy = abs(source_y - target_y)
return (dx**2 + dy**2) ** (1 / 2)
def shortest_path_length(self, target):
try:
return nx.shortest_path_length(self.G, self.unique_id, target)
except nx.NetworkXNoPath:
return float("inf")
sim = Simulation(
model=TerroristEnvironment,
iterations=1,
name="TerroristNetworkModel_sim",
max_steps=150,
seed="default2",
skip_test=False,
dump=False,
)
# TODO: integrate visualization
# visualization_params:
# # Icons downloaded from https://www.iconfinder.com/
# shape_property: agent
# shapes:
# TrainingAreaModel: target
# HavenModel: home
# TerroristNetworkModel: person
# colors:
# - attr_id: civilian
# color: '#40de40'
# - attr_id: terrorist
# color: red
# - attr_id: leader
# color: '#c16a6a'
# background_image: 'map_4800x2860.jpg'
# background_opacity: '0.9'
# background_filter_color: 'blue'

View File

@@ -1,2 +0,0 @@
balkian Torvalds {}
anonymous Torvalds {}

View File

@@ -1,25 +0,0 @@
from soil import Environment, Simulation, CounterModel, report
# Get directory path for current file
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
class TorvaldsEnv(Environment):
def init(self):
self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
self.populate_network(CounterModel, skill_level='beginner')
self.agent(node_id="Torvalds").skill_level = 'God'
self.agent(node_id="balkian").skill_level = 'developer'
self.add_agent_reporter("times")
@report
def god_developers(self):
return self.count_agents(skill_level='God')
sim = Simulation(name='torvalds_example',
max_steps=10,
interval=2,
model=TorvaldsEnv)

0
logo_gsi.png Normal file → Executable file
View File


0
logo_gsi.svg Normal file → Executable file
View File


View File

@@ -0,0 +1,38 @@
import settings
from nxsim import BaseNetworkAgent
from .. import networkStatus
class BaseBehaviour(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self._attrs = {}
@property
def attrs(self):
now = self.env.now
if now not in self._attrs:
self._attrs[now] = {}
return self._attrs[now]
@attrs.setter
def attrs(self, value):
self._attrs[self.env.now] = value
def run(self):
while True:
self.step(self.env.now)
yield self.env.timeout(settings.network_params["timeout"])
def step(self, now):
networkStatus['agent_%s'% self.id] = self.to_json()
def to_json(self):
final = {}
for stamp, attrs in self._attrs.items():
for a in attrs:
if a not in final:
final[a] = {}
final[a][stamp] = attrs[a]
return final
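
The `to_json` method pivots the per-step history from `{timestep: {attribute: value}}` into `{attribute: {timestep: value}}`. A standalone sketch of that transformation, using made-up data and no nxsim dependency:

```
# Same pivot as BaseBehaviour.to_json(), on hypothetical data
history = {0: {"status": 1, "type": 0}, 2: {"status": 2, "type": 0}}

final = {}
for stamp, attrs in history.items():
    for a, value in attrs.items():
        final.setdefault(a, {})[stamp] = value

print(final)  # {'status': {0: 1, 2: 2}, 'type': {0: 0, 2: 0}}
```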

View File

@@ -0,0 +1 @@
from .BaseBehaviour import BaseBehaviour

View File

@@ -0,0 +1,367 @@
import random
import numpy as np
from ..BaseBehaviour import *
import settings
import networkx as nx
POPULATION = 0
LEADERS = 1
HAVEN = 2
TRAININGENV = 3
NON_RADICAL = 0
NEUTRAL = 1
RADICAL = 2
POPNON = 0
POPNE = 1
POPRAD = 2
HAVNON = 3
HAVNE = 4
HAVRAD = 5
LEADER = 6
TRAINING = 7
class TerroristModel(BaseBehaviour):
num_agents = 0
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.population = settings.network_params["number_of_nodes"] * settings.environment_params['initial_population']
self.havens = settings.network_params["number_of_nodes"] * settings.environment_params['initial_havens']
self.training_enviroments = settings.network_params["number_of_nodes"] * settings.environment_params['initial_training_enviroments']
self.initial_radicalism = settings.environment_params['initial_radicalism']
self.information_spread_intensity = settings.environment_params['information_spread_intensity']
self.influence = settings.environment_params['influence']
self.relative_inequality = settings.environment_params['relative_inequality']
self.additional_influence = settings.environment_params['additional_influence']
if TerroristModel.num_agents < self.population:
self.state['type'] = POPULATION
TerroristModel.num_agents = TerroristModel.num_agents + 1
random1 = random.random()
if random1 < 0.7:
self.state['id'] = NON_RADICAL
self.state['fstatus'] = POPNON
elif random1 >= 0.7 and random1 < 0.9:
self.state['id'] = NEUTRAL
self.state['fstatus'] = POPNE
elif random1 >= 0.9:
self.state['id'] = RADICAL
self.state['fstatus'] = POPRAD
elif TerroristModel.num_agents < self.havens + self.population:
self.state['type'] = HAVEN
TerroristModel.num_agents = TerroristModel.num_agents + 1
random2 = random.random()
random1 = random2 + self.initial_radicalism
if random1 < 1.2:
self.state['id'] = NON_RADICAL
self.state['fstatus'] = HAVNON
elif random1 >= 1.2 and random1 < 1.6:
self.state['id'] = NEUTRAL
self.state['fstatus'] = HAVNE
elif random1 >= 1.6:
self.state['id'] = RADICAL
self.state['fstatus'] = HAVRAD
elif TerroristModel.num_agents < self.training_enviroments + self.havens + self.population:
self.state['type'] = TRAININGENV
self.state['fstatus'] = TRAINING
TerroristModel.num_agents = TerroristModel.num_agents + 1
def step(self, now):
if self.state['type'] == POPULATION:
self.population_and_leader_conduct()
if self.state['type'] == LEADERS:
self.population_and_leader_conduct()
if self.state['type'] == HAVEN:
self.haven_conduct()
if self.state['type'] == TRAININGENV:
self.training_enviroment_conduct()
self.attrs['status'] = self.state['id']
self.attrs['type'] = self.state['type']
self.attrs['radicalism'] = self.state['rad']
self.attrs['fstatus'] = self.state['fstatus']
super().step(now)
def population_and_leader_conduct(self):
if self.state['id'] == NON_RADICAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
self.non_radical_behaviour()
if self.state['id'] == NEUTRAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
            if self.state['id'] == RADICAL:
                # set_radicalism() above may have just promoted this agent to RADICAL
                self.radical_behaviour()
            self.neutral_behaviour()
if self.state['id'] == RADICAL:
if self.state['rad'] == 0.000:
self.state['rad'] = self.set_radicalism()
self.radical_behaviour()
def haven_conduct(self):
non_radical_neighbors = self.get_neighboring_agents(state_id=NON_RADICAL)
neutral_neighbors = self.get_neighboring_agents(state_id=NEUTRAL)
radical_neighbors = self.get_neighboring_agents(state_id=RADICAL)
neighbors_of_non_radical = len(neutral_neighbors) + len(radical_neighbors)
neighbors_of_neutral = len(non_radical_neighbors) + len(radical_neighbors)
neighbors_of_radical = len(non_radical_neighbors) + len(neutral_neighbors)
threshold = 8
if (len(non_radical_neighbors) > neighbors_of_non_radical) and len(non_radical_neighbors) >= threshold:
self.state['id'] = NON_RADICAL
elif (len(neutral_neighbors) > neighbors_of_neutral) and len(neutral_neighbors) >= threshold:
self.state['id'] = NEUTRAL
elif (len(radical_neighbors) > neighbors_of_radical) and len(radical_neighbors) >= threshold:
self.state['id'] = RADICAL
if self.state['id'] == NEUTRAL:
for neighbor in non_radical_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
if self.state['id'] == RADICAL:
for neighbor in non_radical_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
for neighbor in neutral_neighbors:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if neighbor.state['rad'] >= 0.6:
neighbor.state['id'] = RADICAL
if neighbor.state['type'] != HAVEN and neighbor.state['type']!=TRAININGENV:
if neighbor.state['rad'] >= 0.62:
if create_leader(neighbor):
neighbor.state['type'] = LEADERS
neighbor.state['fstatus'] = LEADER
# elif neighbor.state['type'] == LEADERS:
# neighbor.state['type'] = POPULATION
# neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVRAD
def training_enviroment_conduct(self):
self.state['id'] = RADICAL
self.state['rad'] = 1
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['id'] == NON_RADICAL:
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
                if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] > 0.59:
neighbor.state['rad'] = 0.59
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
neighbor.state['rad'] = neighbor.state['rad'] + (neighbor.influence + neighbor.additional_influence) * neighbor.information_spread_intensity
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
neighbor.state['id'] = NEUTRAL
if neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPNE
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVNE
elif neighbor.state['rad'] >= 0.6:
neighbor.state['id'] = RADICAL
if neighbor.state['type'] != HAVEN and neighbor.state['type'] != TRAININGENV:
if neighbor.state['rad'] >= 0.62:
if create_leader(neighbor):
neighbor.state['type'] = LEADERS
neighbor.state['fstatus'] = LEADER
# elif neighbor.state['type'] == LEADERS:
# neighbor.state['type'] = POPULATION
# neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
neighbor.state['fstatus'] = POPRAD
elif neighbor.state['type'] == HAVEN:
neighbor.state['fstatus'] = HAVRAD
def non_radical_behaviour(self):
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['type'] == POPULATION:
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif self.state['rad'] > 0.59:
self.state['rad'] = 0.59
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif neighbor.state['type'] == LEADERS:
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
elif self.state['rad'] > 0.59:
self.state['rad'] = 0.59
self.state['id'] = NEUTRAL
if self.state['type']==POPULATION:
self.state['fstatus'] = POPNE
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVNE
def neutral_behaviour(self):
neighbors = self.get_neighboring_agents()
for neighbor in neighbors:
if neighbor.state['type'] == POPULATION:
if neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
if self.state['rad'] >= 0.6:
self.state['id'] = RADICAL
if self.state['type'] != HAVEN:
if self.state['rad'] >= 0.62:
if create_leader(self):
self.state['type'] = LEADERS
self.state['fstatus'] = LEADER
# elif self.state['type'] == LEADERS:
# self.state['type'] = POPULATION
# self.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
self.state['fstatus'] = POPRAD
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVRAD
elif neighbor.state['type'] == LEADERS:
if neighbor.state['id'] == RADICAL:
self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
if self.state['rad'] >= 0.6:
self.state['id'] = RADICAL
if self.state['type'] != HAVEN:
if self.state['rad'] >= 0.62:
if create_leader(self):
self.state['type'] = LEADERS
self.state['fstatus'] = LEADER
# elif self.state['type'] == LEADERS:
# self.state['type'] = POPULATION
# self.state['fstatus'] = POPRAD
elif neighbor.state['type'] == POPULATION:
self.state['fstatus'] = POPRAD
elif self.state['type'] == HAVEN:
self.state['fstatus'] = HAVRAD
def radical_behaviour(self):
neighbors = self.get_neighboring_agents(state_id=RADICAL)
for neighbor in neighbors:
if self.state['rad']< neighbor.state['rad'] and self.state['type']== LEADERS and neighbor.state['type']==LEADERS:
self.state['type'] = POPULATION
self.state['fstatus'] = POPRAD
def set_radicalism(self):
if self.state['id'] == NON_RADICAL:
radicalism = random.uniform(0.0, 0.29) * self.relative_inequality
return radicalism
elif self.state['id'] == NEUTRAL:
radicalism = 0.3 + random.uniform(0.3, 0.59) * self.relative_inequality
if radicalism >= 0.6:
self.state['id'] = RADICAL
return radicalism
elif self.state['id'] == RADICAL:
radicalism = 0.6 + random.uniform(0.6, 1.0) * self.relative_inequality
return radicalism
def get_partition(agent):
return settings.partition_param[agent.id]
def get_centrality(agent):
return settings.centrality_param[agent.id]
def get_centrality_given_id(id):
return settings.centrality_param[id]
def get_leader(partition):
if not bool(settings.leaders) or partition not in settings.leaders.keys():
return None
return settings.leaders[partition]
def set_leader(partition, agent):
settings.leaders[partition] = agent.id
def create_leader(agent):
my_partition = get_partition(agent)
old_leader = get_leader(my_partition)
if old_leader == None:
set_leader(my_partition, agent)
return True
else:
my_centrality = get_centrality(agent)
old_leader_centrality = get_centrality_given_id(old_leader)
if my_centrality > old_leader_centrality:
set_leader(my_partition, agent)
return True
return False

View File

@@ -0,0 +1 @@
from .TerroristModel import TerroristModel

3
models/__init__.py Executable file
View File

@@ -0,0 +1,3 @@
from .models import *
from .BaseBehaviour import *
from .TerroristModel import *

7
models/models.py Executable file
View File

@@ -0,0 +1,7 @@
import settings
networkStatus = {} # Dict that will contain the status of every agent in the network
# Initialize agent states. Let's assume everyone is normal and all types are population.
init_states = [{'id': 0, 'type': 0, 'rad': 0, 'fstatus':0, } for _ in range(settings.network_params["number_of_nodes"])]

16
requirements.txt Normal file → Executable file
View File

@@ -1,13 +1,5 @@
-networkx>=2.5
+nxsim
+simpy
+networkx
 numpy
 matplotlib
-pyyaml>=5.1
-pandas>=1
-SALib>=1.3
-Jinja2
-Mesa>=1.2
-pydantic>=1.9
-sqlalchemy>=1.4
-typing-extensions>=4.4
-annotated-types>=0.4
-tqdm>=4.64

23
settings.json Executable file
View File

@@ -0,0 +1,23 @@
[
{
"network_type": 0,
"number_of_nodes": 80,
"max_time": 50,
"num_trials": 1,
"timeout": 2
},
{
"agent": ["TerroristModel"],
"initial_population": 0.85,
"initial_havens": 0.1,
"initial_training_enviroments": 0.05,
"initial_radicalism": 0.12,
"relative_inequality": 0.33,
"information_spread_intensity": 0.1,
"influence": 0.4,
"additional_influence": 0.1
}
]

13
settings.py Executable file
View File

@@ -0,0 +1,13 @@
# General configuration
import json
with open('settings.json', 'r') as f:
settings = json.load(f)
network_params = settings[0]
environment_params = settings[1]
centrality_param = {}
partition_param={}
leaders={}
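
Model code reads these two dictionaries directly, as in `TerroristModel` above; a minimal sketch (values taken from the settings.json shown earlier):

```
import settings

n_nodes = settings.network_params["number_of_nodes"]
initial_population = n_nodes * settings.environment_params["initial_population"]
print(n_nodes, initial_population)  # 80 and 68.0 with the settings above
```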

View File

@@ -1,8 +0,0 @@
[metadata]
long_description = file: README.md
long_description_content_type = text/markdown
[aliases]
test=pytest
[tool:pytest]
addopts = --verbose

View File

@@ -1,63 +0,0 @@
import os
from setuptools import setup
with open(os.path.join('soil', 'VERSION')) as f:
__version__ = f.readlines()[0].strip()
assert __version__
def parse_requirements(filename):
""" load requirements from a pip requirements file """
with open(filename, 'r') as f:
lineiter = list(line.strip() for line in f)
return [line for line in lineiter if line and not line.startswith("#")]
install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require={
'geo': ['scipy>=1.3'],
'web': ['tornado'],
'ipython': ['ipython==8.12', 'nbformat==5.8'],
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]
setup(
name='soil',
packages=['soil'], # this must be the same as the name above
version=__version__,
description=('An Agent-Based Social Simulator for Social Networks'),
author='J. Fernando Sanchez',
author_email='jf.sanchez@upm.es',
url='https://github.com/gsi-upm/soil', # use the URL to the github repo
download_url='https://github.com/gsi-upm/soil/archive/{}.tar.gz'.format(
__version__),
keywords=['agent', 'social', 'simulator'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: End Users/Desktop',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
install_requires=install_reqs,
extras_require=extras_require,
tests_require=test_reqs,
setup_requires=['pytest-runner', ],
pytest_plugins = ['pytest_profiling'],
include_package_data=True,
python_requires=">=3.8",
entry_points={
'console_scripts':
['soil = soil.__main__:main',
'soil-web = soil.web.__init__:main']
})

BIN
sim_01/log.0.state.pickled Executable file

Binary file not shown.

BIN
sim_01/log.1.state.pickled Normal file

Binary file not shown.

215
soil.py Executable file
View File

@@ -0,0 +1,215 @@
from models import *
from nxsim import NetworkSimulation
# import numpy
from matplotlib import pyplot as plt
import networkx as nx
import settings
import models
import math
import json
import operator
import community
POPULATION = 0
LEADERS = 1
HAVEN = 2
TRAINING = 3
NON_RADICAL = 0
NEUTRAL = 1
RADICAL = 2
#################
# Visualization #
#################
def visualization(graph_name):
for x in range(0, settings.network_params["number_of_nodes"]):
attributes = {}
spells = []
for attribute in models.networkStatus["agent_%s" % x]:
if attribute == 'visible':
lastvisible = False
laststep = 0
for t_step in models.networkStatus["agent_%s" % x][attribute]:
nowvisible = models.networkStatus["agent_%s" % x][attribute][t_step]
if nowvisible and not lastvisible:
laststep = t_step
if not nowvisible and lastvisible:
spells.append((laststep, t_step))
lastvisible = nowvisible
if lastvisible:
spells.append((laststep, None))
else:
emotionStatusAux = []
for t_step in models.networkStatus["agent_%s" % x][attribute]:
prec = 2
output = math.floor(models.networkStatus["agent_%s" % x][attribute][t_step] * (10 ** prec)) / (10 ** prec) # 2 decimals
emotionStatusAux.append((output, t_step, t_step + settings.network_params["timeout"]))
attributes[attribute] = emotionStatusAux
if spells:
G.add_node(x, attributes, spells=spells)
else:
G.add_node(x, attributes)
print("Done!")
with open('data.txt', 'w') as outfile:
json.dump(models.networkStatus, outfile, sort_keys=True, indent=4, separators=(',', ': '))
for node in range(settings.network_params["number_of_nodes"]):
G.node[node]['x'] = G.node[node]['pos'][0]
G.node[node]['y'] = G.node[node]['pos'][1]
G.node[node]['viz'] = {"position": {"x": G.node[node]['pos'][0], "y": G.node[node]['pos'][1], "z": 0.0}}
del (G.node[node]['pos'])
nx.write_gexf(G, graph_name+".gexf", version="1.2draft")
###########
# Results #
###########
def results(model_name):
x_values = []
neutral_values = []
non_radical_values = []
radical_values = []
attribute_plot = 'status'
for time in range(0, settings.network_params["max_time"]):
value_neutral = 0
value_non_radical = 0
value_radical = 0
real_time = time * settings.network_params["timeout"]
activity = False
for x in range(0, settings.network_params["number_of_nodes"]):
if attribute_plot in models.networkStatus["agent_%s" % x]:
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NON_RADICAL:
value_non_radical += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == NEUTRAL:
value_neutral += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == RADICAL:
value_radical += 1
activity = True
if activity:
x_values.append(real_time)
neutral_values.append(value_neutral)
non_radical_values.append(value_non_radical)
radical_values.append(value_radical)
activity = False
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
non_radical_line = ax1.plot(x_values, non_radical_values, label='Non radical')
neutral_line = ax1.plot(x_values, neutral_values, label='Neutral')
radical_line = ax1.plot(x_values, radical_values, label='Radical')
ax1.legend()
fig1.savefig(model_name+'.png')
plt.show()
###########
# Results #
###########
def resultadosTipo(model_name):
x_values = []
population_values = []
leaders_values = []
havens_values = []
training_enviroments_values = []
attribute_plot = 'type'
for time in range(0, settings.network_params["max_time"]):
value_population = 0
value_leaders = 0
value_havens = 0
value_training_enviroments = 0
real_time = time * settings.network_params["timeout"]
activity = False
for x in range(0, settings.network_params["number_of_nodes"]):
if attribute_plot in models.networkStatus["agent_%s" % x]:
if real_time in models.networkStatus["agent_%s" % x][attribute_plot]:
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == POPULATION:
value_population += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == LEADERS:
value_leaders += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == HAVEN:
value_havens += 1
activity = True
if models.networkStatus["agent_%s" % x][attribute_plot][real_time] == TRAINING:
value_training_enviroments += 1
activity = True
if activity:
x_values.append(real_time)
population_values.append(value_population)
leaders_values.append(value_leaders)
havens_values.append(value_havens)
training_enviroments_values.append(value_training_enviroments)
activity = False
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
population_line = ax2.plot(x_values, population_values, label='Population')
leaders_line = ax2.plot(x_values, leaders_values, label='Leader')
havens_line = ax2.plot(x_values, havens_values, label='Havens')
training_enviroments_line = ax2.plot(x_values, training_enviroments_values, label='Training Enviroments')
ax2.legend()
fig2.savefig(model_name+'_type'+'.png')
plt.show()
####################
# Network creation #
####################
# nx.degree_centrality(G);
if settings.network_params["network_type"] == 0:
G = nx.random_geometric_graph(settings.network_params["number_of_nodes"], 0.2)
settings.partition_param = community.best_partition(G)
settings.centrality_param = nx.betweenness_centrality(G).copy()
# print(settings.centrality_param)
# print(settings.partition_param)
# More types of networks can be added here
##############
# Simulation #
##############
agents = settings.environment_params['agent']
print("Using Agent(s): {agents}".format(agents=agents))
if len(agents) > 1:
for agent in agents:
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
sim.run_simulation()
print(str(agent))
results(str(agent))
resultadosTipo(str(agent))
visualization(str(agent))
else:
agent = agents[0]
sim = NetworkSimulation(topology=G, states=init_states, agent_type=locals()[agent], max_time=settings.network_params["max_time"],
num_trials=settings.network_params["num_trials"], logging_interval=1.0, **settings.environment_params)
sim.run_simulation()
results(str(agent))
resultadosTipo(str(agent))
visualization(str(agent))

394
soil.py~ Executable file
View File

@@ -0,0 +1,394 @@
from nxsim import NetworkSimulation
from nxsim import BaseNetworkAgent
from nxsim import BaseLoggingAgent
from random import randint
from matplotlib import pyplot as plt
import random
import numpy as np
import networkx as nx
import settings
settings.init()
if settings.network_type == 0:
G = nx.complete_graph(settings.number_of_nodes)
if settings.network_type == 1:
G = nx.barabasi_albert_graph(settings.number_of_nodes,3)
if settings.network_type == 2:
G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
myList=[]
networkStatus=[]
for x in range(0, settings.number_of_nodes):
networkStatus.append({'id':x})
# # Just like subclassing a process in SimPy
# class MyAgent(BaseNetworkAgent):
# def __init__(self, environment=None, agent_id=0, state=()): # Make sure to have these three keyword arguments
# super().__init__(environment=environment, agent_id=agent_id, state=state)
# # Add your own attributes here
# def run(self):
# # Add your behaviors here
class SentimentCorrelationModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.outside_effects_prob = settings.outside_effects_prob
self.anger_prob = settings.anger_prob
self.joy_prob = settings.joy_prob
self.sadness_prob = settings.sadness_prob
self.disgust_prob = settings.disgust_prob
self.time_awareness=[]
for i in range(4):
self.time_awareness.append(0) #0-> Anger, 1-> joy, 2->sadness, 3 -> disgust
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
if self.env.now > 10:
G.add_node(205)
G.add_edge(205,0)
angry_neighbors_1_time_step=[]
joyful_neighbors_1_time_step=[]
sad_neighbors_1_time_step=[]
disgusted_neighbors_1_time_step=[]
angry_neighbors = self.get_neighboring_agents(state_id=1)
for x in angry_neighbors:
if x.time_awareness[0] > (self.env.now-500):
angry_neighbors_1_time_step.append(x)
num_neighbors_angry = len(angry_neighbors_1_time_step)
joyful_neighbors = self.get_neighboring_agents(state_id=2)
for x in joyful_neighbors:
if x.time_awareness[1] > (self.env.now-500):
joyful_neighbors_1_time_step.append(x)
num_neighbors_joyful = len(joyful_neighbors_1_time_step)
sad_neighbors = self.get_neighboring_agents(state_id=3)
for x in sad_neighbors:
if x.time_awareness[2] > (self.env.now-500):
sad_neighbors_1_time_step.append(x)
num_neighbors_sad = len(sad_neighbors_1_time_step)
disgusted_neighbors = self.get_neighboring_agents(state_id=4)
for x in disgusted_neighbors:
if x.time_awareness[3] > (self.env.now-500):
disgusted_neighbors_1_time_step.append(x)
num_neighbors_disgusted = len(disgusted_neighbors_1_time_step)
            # #Outside effects. Assign a random state
# if random.random() < settings.outside_effects_prob:
# if self.state['id'] == 0:
# self.state['id'] = random.randint(1,4)
# myList.append(self.id)
            # networkStatus[self.id][self.env.now]=self.state['id'] # Record when it got infected, for the dynamic network
            # self.time_awareness = self.env.now # To know when they got infected
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Joy
# if random.random() < (settings.joy_prob*(num_neighbors_joyful)/10):
# myList.append(self.id)
# self.state['id'] = 2
# networkStatus[self.id][self.env.now]=2
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Sadness
# if random.random() < (settings.sadness_prob*(num_neighbors_sad)/10):
# myList.append(self.id)
# self.state['id'] = 3
# networkStatus[self.id][self.env.now]=3
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Disgust
# if random.random() < (settings.disgust_prob*(num_neighbors_disgusted)/10):
# myList.append(self.id)
# self.state['id'] = 4
# networkStatus[self.id][self.env.now]=4
# yield self.env.timeout(settings.timeout)
# #Imitation effects-Anger
# if random.random() < (settings.anger_prob*(num_neighbors_angry)/10):
# myList.append(self.id)
# self.state['id'] = 1
# networkStatus[self.id][self.env.now]=1
# yield self.env.timeout(settings.timeout)
# yield self.env.timeout(settings.timeout)
###########################################
anger_prob= settings.anger_prob+(len(angry_neighbors_1_time_step)*settings.anger_prob)
print("anger_prob " + str(anger_prob))
joy_prob= settings.joy_prob+(len(joyful_neighbors_1_time_step)*settings.joy_prob)
print("joy_prob " + str(joy_prob))
sadness_prob = settings.sadness_prob+(len(sad_neighbors_1_time_step)*settings.sadness_prob)
print("sadness_prob "+ str(sadness_prob))
disgust_prob = settings.disgust_prob+(len(disgusted_neighbors_1_time_step)*settings.disgust_prob)
print("disgust_prob " + str(disgust_prob))
outside_effects_prob= settings.outside_effects_prob
print("outside_effects_prob " + str(outside_effects_prob))
num = random.random()
if(num<outside_effects_prob):
self.state['id'] = random.randint(1,4)
myList.append(self.id)
                    networkStatus[self.id][self.env.now]=self.state['id'] # Record when it got infected, for the dynamic network
self.time_awareness[self.state['id']-1] = self.env.now
yield self.env.timeout(settings.timeout)
if(num<anger_prob):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<joy_prob+anger_prob and num>anger_prob):
myList.append(self.id)
self.state['id'] = 2
networkStatus[self.id][self.env.now]=2
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<sadness_prob+anger_prob+joy_prob and num>joy_prob+anger_prob):
myList.append(self.id)
self.state['id'] = 3
networkStatus[self.id][self.env.now]=3
self.time_awareness[self.state['id']-1] = self.env.now
elif (num<disgust_prob+sadness_prob+anger_prob+joy_prob and num>sadness_prob+anger_prob+joy_prob):
myList.append(self.id)
self.state['id'] = 4
networkStatus[self.id][self.env.now]=4
self.time_awareness[self.state['id']-1] = self.env.now
yield self.env.timeout(settings.timeout)
# anger_propagation = settings.anger_prob*num_neighbors_angry/10
# joy_propagation = anger_propagation + (settings.joy_prob*num_neighbors_joyful/10)
# sadness_propagation = joy_propagation + (settings.sadness_prob*num_neighbors_sad/10)
# disgust_propagation = sadness_propagation + (settings.disgust_prob*num_neighbors_disgusted/10)
# outside_effects_propagation = disgust_propagation + settings.outside_effects_prob
# if (num<anger_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 1
# networkStatus[self.id][self.env.now]=1
# yield self.env.timeout(settings.timeout)
# if (num<joy_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 2
# networkStatus[self.id][self.env.now]=2
# yield self.env.timeout(settings.timeout)
# if(num<sadness_propagation):
# if(self.state['id'] !=0):
# myList.append(self.id)
# self.state['id'] = 3
# networkStatus[self.id][self.env.now]=3
# yield self.env.timeout(settings.timeout)
# # if(num<disgust_propagation):
# # if(self.state['id'] !=0):
# # myList.append(self.id)
# # self.state['id'] = 4
# # networkStatus[self.id][self.env.now]=4
# # yield self.env.timeout(settings.timeout)
# if(num <outside_effects_propagation):
# if self.state['id'] == 0:
# self.state['id'] = random.randint(1,4)
# myList.append(self.id)
            # networkStatus[self.id][self.env.now]=self.state['id'] # Record when it got infected, for the dynamic network
            # self.time_awareness = self.env.now # To know when they got infected
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
# else:
# yield self.env.timeout(settings.timeout)
class BassModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = settings.innovation_prob
self.imitation_prob = settings.imitation_prob
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
#Outside effects
if random.random() < settings.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
myList.append(self.id)
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
#Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
num_neighbors_aware = len(aware_neighbors)
if random.random() < (settings.imitation_prob*num_neighbors_aware):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
class IndependentCascadeModel(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.innovation_prob = settings.innovation_prob
self.imitation_prob = settings.imitation_prob
self.time_awareness = 0
networkStatus[self.id][self.env.now]=0
def run(self):
while True:
aware_neighbors_1_time_step=[]
#Outside effects
if random.random() < settings.innovation_prob:
if self.state['id'] == 0:
self.state['id'] = 1
myList.append(self.id)
networkStatus[self.id][self.env.now]=1
self.time_awareness = self.env.now # To know when they became aware
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
#Imitation effects
if self.state['id'] == 0:
aware_neighbors = self.get_neighboring_agents(state_id=1)
for x in aware_neighbors:
if x.time_awareness == (self.env.now-1):
aware_neighbors_1_time_step.append(x)
num_neighbors_aware = len(aware_neighbors_1_time_step)
if random.random() < (settings.imitation_prob*num_neighbors_aware):
myList.append(self.id)
self.state['id'] = 1
networkStatus[self.id][self.env.now]=1
yield self.env.timeout(settings.timeout)
else:
yield self.env.timeout(settings.timeout)
class ZombieOutbreak(BaseNetworkAgent):
def __init__(self, environment=None, agent_id=0, state=()):
super().__init__(environment=environment, agent_id=agent_id, state=state)
self.bite_prob = settings.bite_prob
networkStatus[self.id][self.env.now]=0
def run(self):
    while True:
        num = random.random()
        if num < settings.heal_prob:
            if self.state['id'] == 1:
                print("I am zombie " + str(self.id) + " and I am going to heal because the random number was " + str(num))
                networkStatus[self.id][self.env.now]=0
                if self.id in myList:
                    myList.remove(self.id)
                self.state['id'] = 0
                yield self.env.timeout(settings.timeout)
            else:
                yield self.env.timeout(settings.timeout)
        else:
            if self.state['id'] == 1:
                self.zombify()
                yield self.env.timeout(settings.timeout)
            else:
                yield self.env.timeout(settings.timeout)
def zombify(self):
normal_neighbors = self.get_neighboring_agents(state_id=0)
for neighbor in normal_neighbors:
if random.random() < self.bite_prob:
print("Soy el zombie " + str(self.id) + " y voy a contagiar a " + str(neighbor.id))
neighbor.state['id'] = 1 # zombie
myList.append(neighbor.id)
networkStatus[self.id][self.env.now]=1
networkStatus[neighbor.id][self.env.now]=1
print(self.env.now, "I am zombie: " + str(self.id), "My neighbour is: " + str(neighbor.id), sep='\t')
break
# Initialize agent states. Let's assume everyone is normal.
init_states = [{'id': 0, } for _ in range(settings.number_of_nodes)] # add keys as necessary, but "id" must always refer to that state category
# Seed a zombie
#init_states[5] = {'id': 1}
#init_states[3] = {'id': 1}
sim = NetworkSimulation(topology=G, states=init_states, agent_type=SentimentCorrelationModel,
max_time=settings.max_time, num_trials=settings.num_trials, logging_interval=1.0)
sim.run_simulation()
myList = sorted(myList, key=int)
#print("Los zombies son: " + str(myList))
trial = BaseLoggingAgent.open_trial_state_history(dir_path='sim_01', trial_id=0)
zombie_census = [sum([1 for node_id, state in g.items() if state['id'] == 1]) for t,g in trial.items()]
#for x in range(len(myList)):
# G.node[myList[x]]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
#G.node[1]['viz'] = {'color': {'r': 255, 'g': 0, 'b': 0, 'a': 0}}
#lista = nx.nodes(G)
#print('Nodes: ' + str(lista))
for x in range(0, settings.number_of_nodes):
networkStatusAux=[]
for tiempo in networkStatus[x]:
if tiempo != 'id':
networkStatusAux.append((networkStatus[x][tiempo],tiempo,None))
G.add_node(x, zombie= networkStatusAux)
#print(networkStatus)
nx.write_gexf(G,"test.gexf", version="1.2draft")
plt.plot(zombie_census)
plt.draw() # pyplot draw()
plt.savefig("zombie.png")
#print(networkStatus)
#nx.draw(G)
#plt.show()
#plt.savefig("path.png")

View File

@@ -1 +0,0 @@
1.0.0rc2

View File

@@ -1,287 +0,0 @@
from __future__ import annotations
import importlib
from importlib.resources import path
import sys
import os
import logging
import traceback
from contextlib import contextmanager
from .version import __version__
try:
basestring
except NameError:
basestring = str
from pathlib import Path
from .analysis import *
from .agents import *
from . import agents
from .simulation import *
from .environment import Environment, EventedEnvironment
from .datacollection import SoilCollector
from . import serialization
from .utils import logger
from .time import *
from .decorators import *
def main(
cfg="simulation.yml",
exporters=None,
num_processes=1,
output="soil_output",
*,
debug=False,
pdb=False,
**kwargs,
):
sim = None
if isinstance(cfg, Simulation):
sim = cfg
import argparse
from . import simulation
logger.info("Running SOIL version: {}".format(__version__))
parser = argparse.ArgumentParser(description="Run a SOIL simulation")
parser.add_argument(
"file",
type=str,
nargs="?",
default=cfg if sim is None else "",
help="Configuration file for the simulation (e.g., YAML or JSON)",
)
parser.add_argument(
"--version", action="store_true", help="Show version info and exit"
)
parser.add_argument(
"--module",
"-m",
type=str,
help="file containing the code of any custom agents.",
)
parser.add_argument(
"--dry-run",
"--dry",
action="store_true",
help="Do not run the simulation",
)
parser.add_argument(
"--no-dump",
action="store_true",
help="Do not store the results of the simulation to disk, show in terminal instead.",
)
parser.add_argument(
"--pdb", action="store_true", help="Use a pdb console in case of exception."
)
parser.add_argument(
"--debug",
action="store_true",
help="Run a customized version of a pdb console to debug a simulation.",
)
parser.add_argument(
"--graph",
"-g",
action="store_true",
help="Dump each iteration's network topology as a GEXF graph. Defaults to false.",
)
parser.add_argument(
"--csv",
action="store_true",
help="Dump all data collected in CSV format. Defaults to false.",
)
parser.add_argument("--level", type=str, help="Logging level")
parser.add_argument(
"--output",
"-o",
type=str,
default=output or "soil_output",
help="folder to write results to. It defaults to the current directory.",
)
parser.add_argument(
"--num-processes",
default=num_processes,
help="Number of processes to use for parallel execution. Defaults to 1.",
)
parser.add_argument(
"-e",
"--exporter",
action="append",
default=[],
help="Export environment and/or simulations using this exporter",
)
parser.add_argument(
"--max_time",
default="-1",
help="Set maximum time for the simulation to run. ",
)
parser.add_argument(
"--max_steps",
default="-1",
help="Set maximum number of steps for the simulation to run.",
)
parser.add_argument(
"--iterations",
default="",
help="Set maximum number of iterations (runs) for the simulation.",
)
parser.add_argument(
"--seed",
default=None,
help="Manually set a seed for the simulation.",
)
parser.add_argument(
"--only-convert",
"--convert",
action="store_true",
help="Do not run the simulation, only convert the configuration file(s) and output them.",
)
parser.add_argument(
"--set",
metavar="KEY=VALUE",
action="append",
help="Set a number of parameters that will be passed to the simulation."
"(do not put spaces before or after the = sign). "
"If a value contains spaces, you should define "
"it with double quotes: "
'foo="this is a sentence". Note that '
"values are always treated as strings.",
)
args = parser.parse_args()
level = getattr(logging, (args.level or "INFO").upper())
logger.setLevel(level)
if args.version:
return
exporters = exporters or [
"default",
]
for exp in args.exporter:
if exp not in exporters:
exporters.append(exp)
if args.csv:
exporters.append("csv")
if args.graph:
exporters.append("gexf")
if os.getcwd() not in sys.path:
sys.path.append(os.getcwd())
if args.module:
importlib.import_module(args.module)
if output is None:
output = args.output
debug = debug or args.debug
if args.pdb or debug:
args.synchronous = True
os.environ["SOIL_POSTMORTEM"] = "true"
res = []
try:
exp_params = {}
opts = dict(
dry_run=args.dry_run,
dump=not args.no_dump,
debug=debug,
exporters=exporters,
num_processes=args.num_processes,
level=level,
outdir=output,
exporter_params=exp_params,
**kwargs)
if args.seed is not None:
opts["seed"] = args.seed
if args.iterations:
opts["iterations"] =int(args.iterations)
if sim:
logger.info("Loading simulation instance")
for (k, v) in opts.items():
setattr(sim, k, v)
sims = [sim]
else:
logger.info("Loading config file: {}".format(args.file))
if not os.path.exists(args.file):
logger.error("Please, input a valid file")
return
assert opts["debug"] == debug
sims = list(
simulation.iter_from_file(
args.file,
**opts,
)
)
for sim in sims:
assert sim.debug == debug
if args.set:
for s in args.set:
k, v = s.split("=", 1)[:2]
v = eval(v)
tail, *head = k.rsplit(".", 1)[::-1]
target = sim.parameters
if head:
for part in head[0].split("."):
try:
target = getattr(target, part)
except AttributeError:
target = target[part]
try:
setattr(target, tail, v)
except AttributeError:
target[tail] = v
if args.only_convert:
print(sim.to_yaml())
continue
max_time = float(args.max_time) if args.max_time != "-1" else None
max_steps = float(args.max_steps) if args.max_steps != "-1" else None
res.append(sim.run(max_time=max_time, max_steps=max_steps))
except Exception as ex:
if args.pdb:
from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
else:
raise
if debug:
from .debugging import set_trace
os.environ["SOIL_DEBUG"] = "true"
set_trace()
return res
@contextmanager
def easy(cfg, pdb=False, debug=False, **kwargs):
try:
yield main(cfg, debug=debug, pdb=pdb, **kwargs)[0]
except Exception as e:
if os.environ.get("SOIL_POSTMORTEM"):
from .debugging import post_mortem
print(traceback.format_exc())
post_mortem()
raise
if __name__ == "__main__":
main()
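# A minimal usage sketch for the entry point above (the configuration file name
# and the KEY=VALUE pair are hypothetical; the flags are the ones defined by the
# argparse parser in main()):
#
#   python -m soil simulation.yml --csv --graph -o soil_output --set density=0.2
#
# The same run can be started programmatically through the easy() helper,
# assuming a valid simulation file in the working directory:
#
#   from soil import easy
#   with easy("simulation.yml") as results:
#       print(results)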

View File

@@ -1,9 +0,0 @@
from . import main as init_main
def main():
init_main()
if __name__ == "__main__":
init_main()

View File

@@ -1,31 +0,0 @@
from . import FSM, state, default_state
class BassModel(FSM):
"""
Settings:
innovation_prob
imitation_prob
"""
sentimentCorrelation = 0
# FSM.step dispatches the state methods below; no step() override is needed.
@default_state
@state
def innovation(self):
if self.prob(self.innovation_prob):
self.sentimentCorrelation = 1
return self.aware
else:
aware_neighbors = self.get_neighbors(state_id=self.aware.id)
num_neighbors_aware = len(aware_neighbors)
if self.prob((self.imitation_prob * num_neighbors_aware)):
self.sentimentCorrelation = 1
return self.aware
@state
def aware(self):
self.die()

View File

@@ -1,46 +0,0 @@
from . import BaseAgent, NetworkAgent
class Ticker(BaseAgent):
times = 0
def step(self):
self.times += 1
class CounterModel(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
times = 0
neighbors = 0
total = 0
def step(self):
# Count the agents in the model and this agent's neighbors
total = len(list(self.model.schedule._agents))
neighbors = len(list(self.get_neighbors()))
self["times"] = self.get("times", 0) + 1
self["neighbors"] = neighbors
self["total"] = total
class AggregatedCounter(NetworkAgent):
"""
Dummy behaviour. It counts the number of nodes in the network and neighbors
in each step and adds it to its state.
"""
times = 0
neighbors = 0
total = 0
def step(self):
# Accumulate the number of agents and neighbors seen over time
self["times"] += 1
neighbors = len(list(self.get_neighbors()))
self["neighbors"] += neighbors
total = len(list(self.model.schedule.agents))
self["total"] += total
self.debug("Running for step: {}. Total: {}".format(self.now, total))

View File

@@ -1,21 +0,0 @@
from scipy.spatial import cKDTree as KDTree
import networkx as nx
from . import NetworkAgent
class Geo(NetworkAgent):
"""In this type of network, nodes have a "pos" attribute."""
def geo_search(self, radius, center=False, **kwargs):
"""Get a list of nodes whose coordinates are closer than *radius* to *node*."""
node = self.node_id
G = self.subgraph(**kwargs)
pos = nx.get_node_attributes(G, "pos")
if not pos:
return []
nodes, coords = list(zip(*pos.items()))
kdtree = KDTree(coords) # Cannot provide generator.
indices = kdtree.query_ball_point(pos[node], radius)
return [nodes[i] for i in indices if center or (nodes[i] != node)]
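# A small, self-contained sketch of the KDTree query that geo_search relies on
# (the coordinates and radius are hypothetical):
from scipy.spatial import cKDTree as KDTree

coords = [(0.0, 0.0), (0.5, 0.1), (3.0, 3.0)]
kdtree = KDTree(coords)
within_radius = kdtree.query_ball_point((0.0, 0.0), r=1.0)  # -> [0, 1]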

View File

@@ -1,30 +0,0 @@
from . import Agent, state, default_state
class IndependentCascadeModel(Agent):
"""
Settings:
innovation_prob
imitation_prob
"""
time_awareness = 0
sentimentCorrelation = 0
# Outside effects
@default_state
@state
def outside(self):
if self.prob(self.model.innovation_prob):
self.sentimentCorrelation = 1
self.time_awareness = self.model.now # To know when they have been infected
return self.imitate
@state
def imitate(self):
aware_neighbors = self.get_neighbors(state_id=1, time_awareness=self.now-1)
if self.prob(self.model.imitation_prob * len(aware_neighbors)):
self.sentimentCorrelation = 1
return self.outside

View File

@@ -1,110 +0,0 @@
import numpy as np
from hashlib import sha512
from . import Agent, state, default_state
class SISaModel(Agent):
"""
Settings:
neutral_discontent_spon_prob
neutral_discontent_infected_prob
neutral_content_spon_prob
neutral_content_infected_prob
discontent_neutral
discontent_content
variance_d_c
content_discontent
variance_c_d
content_neutral
standard_variance
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
seed = self.model._seed
if isinstance(seed, (str, bytes, bytearray)):
if isinstance(seed, str):
seed = seed.encode()
seed = int.from_bytes(seed + sha512(seed).digest(), 'big')
random = np.random.default_rng(seed=seed)
self.neutral_discontent_spon_prob = random.normal(
self.model.neutral_discontent_spon_prob, self.model.standard_variance
)
self.neutral_discontent_infected_prob = random.normal(
self.model.neutral_discontent_infected_prob, self.model.standard_variance
)
self.neutral_content_spon_prob = random.normal(
self.model.neutral_content_spon_prob, self.model.standard_variance
)
self.neutral_content_infected_prob = random.normal(
self.model.neutral_content_infected_prob, self.model.standard_variance
)
self.discontent_neutral = random.normal(
self.model.discontent_neutral, self.model.standard_variance
)
self.discontent_content = random.normal(
self.model.discontent_content, self.model.variance_d_c
)
self.content_discontent = random.normal(
self.model.content_discontent, self.model.variance_c_d
)
self.content_neutral = random.normal(
    self.model.content_neutral, self.model.standard_variance
)
@default_state
@state
def neutral(self):
# Spontaneous effects
if self.prob(self.neutral_discontent_spon_prob):
return self.discontent
if self.prob(self.neutral_content_spon_prob):
return self.content
# Infected
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
if self.prob(discontent_neighbors * self.neutral_discontent_infected_prob):
return self.discontent
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(content_neighbors * self.neutral_content_infected_prob):
return self.content
return self.neutral
@state
def discontent(self):
# Healing
if self.prob(self.discontent_neutral):
return self.neutral
# Superinfected
content_neighbors = self.count_neighbors(state_id=self.content.id)
if self.prob(content_neighbors * self.discontent_content):
return self.content
return self.discontent
@state
def content(self):
# Healing
if self.prob(self.content_neutral):
return self.neutral
# Superinfected
discontent_neighbors = self.count_neighbors(state_id=self.discontent.id)
if self.prob(discontent_neighbors * self.content_discontent):
return self.discontent
return self.content

View File

@@ -1,672 +0,0 @@
from __future__ import annotations
import logging
from collections import OrderedDict, defaultdict
from collections.abc import MutableMapping, Mapping, Set
from abc import ABCMeta
from copy import deepcopy, copy
from functools import partial, wraps
from itertools import islice, chain
import inspect
import types
import textwrap
import networkx as nx
import warnings
import sys
from typing import Any
from mesa import Agent as MesaAgent, Model
from typing import Dict, List
from .. import serialization, network, utils, time, config
IGNORED_FIELDS = ("model", "logger")
class MetaAgent(ABCMeta):
def __new__(mcls, name, bases, namespace):
defaults = {}
# Re-use defaults from inherited classes
for i in bases:
if isinstance(i, MetaAgent):
defaults.update(i._defaults)
new_nmspc = {
"_defaults": defaults,
"_last_return": None,
"_last_except": None,
}
for attr, func in namespace.items():
if attr == "step" and inspect.isgeneratorfunction(func):
orig_func = func
new_nmspc["_coroutine"] = None
@wraps(func)
def func(self):
while True:
if not self._coroutine:
self._coroutine = orig_func(self)
try:
if self._last_except:
return self._coroutine.throw(self._last_except)
else:
return self._coroutine.send(self._last_return)
except StopIteration as ex:
self._coroutine = None
return ex.value
finally:
self._last_return = None
self._last_except = None
func.id = name or func.__name__
func.is_default = False
new_nmspc[attr] = func
elif (
isinstance(func, types.FunctionType)
or isinstance(func, property)
or isinstance(func, classmethod)
or attr[0] == "_"
):
new_nmspc[attr] = func
elif attr == "defaults":
defaults.update(func)
else:
defaults[attr] = copy(func)
return super().__new__(mcls, name, bases, new_nmspc)
class BaseAgent(MesaAgent, MutableMapping, metaclass=MetaAgent):
"""
A special type of Mesa Agent that:
* Can be used as a dictionary to access its state.
* Has logging built-in
* Can be given default arguments through a defaults class attribute,
which will be used on construction to initialize each agent's state
Any attribute that is not preceded by an underscore (`_`) will also be added to its state.
"""
def __init__(self, unique_id, model, name=None, init=True, interval=None, **kwargs):
assert isinstance(unique_id, int)
super().__init__(unique_id=unique_id, model=model)
self.name = (
str(name) if name else "{}[{}]".format(type(self).__name__, self.unique_id)
)
self.alive = True
self.interval = interval or self.get("interval", 1)
logger = utils.logger.getChild(getattr(self.model, "id", self.model)).getChild(
self.name
)
self.logger = logging.LoggerAdapter(logger, {"agent_name": self.name})
if hasattr(self, "level"):
self.logger.setLevel(self.level)
for (k, v) in self._defaults.items():
if not hasattr(self, k) or getattr(self, k) is None:
setattr(self, k, deepcopy(v))
for (k, v) in kwargs.items():
setattr(self, k, v)
if init:
self.init()
def init(self):
pass
def __hash__(self):
return hash(self.unique_id)
def prob(self, probability):
return prob(probability, self.model.random)
@classmethod
def w(cls, **kwargs):
return custom(cls, **kwargs)
# TODO: refactor to clean up mesa compatibility
@property
def id(self):
msg = "This attribute is deprecated. Use `unique_id` instead"
warnings.warn(msg, DeprecationWarning)
print(msg, file=sys.stderr)
return self.unique_id
@classmethod
def from_dict(cls, model, attrs, warn_extra=True):
ignored = {}
args = {}
for k, v in attrs.items():
if k in inspect.signature(cls).parameters:
args[k] = v
else:
ignored[k] = v
if ignored and warn_extra:
utils.logger.info(
f"Ignoring the following arguments for agent class { agent_class.__name__ }: { ignored }"
)
return cls(model=model, **args)
def __getitem__(self, key):
try:
return getattr(self, key)
except AttributeError:
raise KeyError(f"key {key} not found in agent")
def __delitem__(self, key):
return delattr(self, key)
def __contains__(self, key):
return hasattr(self, key)
def __setitem__(self, key, value):
setattr(self, key, value)
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return self.items()
def keys(self):
return (k for k in self.__dict__ if k[0] != "_" and k not in IGNORED_FIELDS)
def items(self, keys=None, skip=None):
keys = keys if keys is not None else self.keys()
it = ((k, self.get(k, None)) for k in keys)
if skip:
return filter(lambda x: x[0] not in skip, it)
return it
def get(self, key, default=None):
if key in self:
return self[key]
elif key in self.model:
return self.model[key]
return default
@property
def now(self):
try:
return self.model.now
except AttributeError:
# No environment
return None
def die(self, msg=None):
if msg:
self.info("Agent dying:", msg)
self.debug(f"agent dying")
self.alive = False
try:
self.model.schedule.remove(self)
except KeyError:
pass
return time.NEVER
def step(self):
raise NotImplementedError("Agent must implement step method")
def _check_alive(self):
if not self.alive:
raise time.DeadAgent(self.unique_id)
def log(self, *message, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = " ".join(str(i) for i in message)
message = "[@{:>4}]\t{:>10}: {}".format(self.now, repr(self), message)
for k, v in kwargs.items():
    message += " {}={} ".format(k, v)
extra = {}
extra["now"] = self.now
extra["unique_id"] = self.unique_id
extra["agent_name"] = self.name
return self.logger.log(level, message, extra=extra)
def debug(self, *args, **kwargs):
return self.log(*args, level=logging.DEBUG, **kwargs)
def info(self, *args, **kwargs):
return self.log(*args, level=logging.INFO, **kwargs)
def count_agents(self, **kwargs):
return len(list(self.get_agents(**kwargs)))
def get_agents(self, *args, **kwargs):
it = self.iter_agents(*args, **kwargs)
return list(it)
def iter_agents(self, *args, **kwargs):
yield from filter_agents(self.model.schedule._agents, *args, **kwargs)
def __str__(self):
return self.to_str()
def to_str(self, keys=None, skip=None, pretty=False):
content = dict(self.items(keys=keys))
if pretty and content:
d = content
content = "\n"
for k, v in d.items():
content += f"- {k}: {v}\n"
content = textwrap.indent(content, " ")
return f"{repr(self)}{content}"
def __repr__(self):
return f"{self.__class__.__name__}({self.unique_id})"
def prob(prob, random):
"""
Return True with probability *prob*, using a uniform random draw.
To be used like this:
.. code-block:: python
if prob(0.3):
do_something()
"""
r = random.random()
return r < prob
def calculate_distribution(network_agents=None, agent_class=None):
"""
Calculate the threshold values (thresholds for a uniform distribution)
of an agent distribution given the weights of each agent type.
The input has this form: ::
[
{'agent_class': 'agent_class_1',
'weight': 0.2,
'state': {
'id': 0
}
},
{'agent_class': 'agent_class_2',
'weight': 0.8,
'state': {
'id': 1
}
}
]
In this example, 20% of the nodes will be marked as type
'agent_class_1'.
"""
if network_agents:
network_agents = [
deepcopy(agent) for agent in network_agents if not hasattr(agent, "id")
]
elif agent_class:
network_agents = [{"agent_class": agent_class}]
else:
raise ValueError("Specify a distribution or a default agent type")
# Fix missing weights and incompatible types
for x in network_agents:
x["weight"] = float(x.get("weight", 1))
# Calculate the thresholds
total = sum(x["weight"] for x in network_agents)
acc = 0
for v in network_agents:
if "ids" in v:
continue
upper = acc + (v["weight"] / total)
v["threshold"] = [acc, upper]
acc = upper
return network_agents
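# Worked example for the docstring above: with weights 0.2 and 0.8 the computed
# thresholds are agent_class_1 -> [0, 0.2] and agent_class_2 -> [0.2, 1.0]; a
# uniform draw in [0, 1) then selects the class whose interval it falls into.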
def _serialize_type(agent_class, known_modules=[], **kwargs):
if isinstance(agent_class, str):
return agent_class
known_modules += ["soil.agents"]
return serialization.serialize(agent_class, known_modules=known_modules, **kwargs)[
1
] # Get the name of the class
def _deserialize_type(agent_class, known_modules=[]):
if not isinstance(agent_class, str):
return agent_class
known = known_modules + ["soil.agents", "soil.agents.custom"]
agent_class = serialization.deserializer(agent_class, known_modules=known)
return agent_class
class AgentView(Mapping, Set):
"""A lazy-loaded list of agents."""
__slots__ = ("_agents",)
def __init__(self, agents):
self._agents = agents
def __getstate__(self):
return {"_agents": self._agents}
def __setstate__(self, state):
self._agents = state["_agents"]
# Mapping methods
def __len__(self):
return len(self._agents)
def __iter__(self):
yield from self._agents.values()
def __getitem__(self, agent_id):
if isinstance(agent_id, slice):
raise ValueError(f"Slicing is not supported")
if agent_id in self._agents:
return self._agents[agent_id]
raise ValueError(f"Agent {agent_id} not found")
def filter(self, *args, **kwargs):
yield from filter_agents(self._agents, *args, **kwargs)
def one(self, *args, **kwargs):
return next(filter_agents(self._agents, *args, **kwargs))
def __call__(self, *args, **kwargs):
return list(self.filter(*args, **kwargs))
def __contains__(self, agent_id):
return agent_id in self._agents
def __str__(self):
return str(list(unique_id for unique_id in self.keys()))
def __repr__(self):
return f"{self.__class__.__name__}({self})"
def filter_agents(
agents: dict,
*id_args,
unique_id=None,
state_id=None,
agent_class=None,
ignore=None,
state=None,
limit=None,
**kwargs,
):
"""
Filter agents given as a dict, by the criteria given as arguments (e.g., certain type or state id).
"""
assert isinstance(agents, dict)
ids = []
if unique_id is not None:
if isinstance(unique_id, list):
ids += unique_id
else:
ids.append(unique_id)
if id_args:
ids += id_args
if ids:
f = (agents[aid] for aid in ids if aid in agents)
else:
f = agents.values()
if state_id is not None and not isinstance(state_id, (tuple, list)):
state_id = tuple([state_id])
if agent_class is not None:
agent_class = _deserialize_type(agent_class)
try:
agent_class = tuple(agent_class)
except TypeError:
agent_class = tuple([agent_class])
if ignore:
f = filter(lambda x: x not in ignore, f)
if state_id is not None:
f = filter(lambda agent: agent.get("state_id", None) in state_id, f)
if agent_class is not None:
f = filter(lambda agent: isinstance(agent, agent_class), f)
state = state or dict()
state.update(kwargs)
for k, v in state.items():
f = filter(lambda agent: getattr(agent, k, None) == v, f)
if limit is not None:
f = islice(f, limit)
yield from f
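# A minimal sketch of how the filter above is typically called (the state id and
# the extra attribute are hypothetical):
#
#   filter_agents(model.schedule._agents, state_id="neutral", limit=5)
#
# yields at most five agents whose state_id is "neutral"; any extra keyword
# argument (e.g. wealth=0) is matched against the corresponding agent attribute.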
def from_config(
cfg: config.AgentConfig, random, topology: nx.Graph = None
) -> List[Dict[str, Any]]:
"""
This function turns an agentconfig into a list of individual "agent specifications", which are just a dictionary
with the parameters that the environment will use to construct each agent.
This function does NOT return a list of agents, mostly because some attributes to the agent are not known at the
time of calling this function, such as `unique_id`.
"""
default = cfg or config.AgentConfig()
if not isinstance(cfg, config.AgentConfig):
cfg = config.AgentConfig(**cfg)
agents = []
assigned_total = 0
assigned_network = 0
if cfg.fixed is not None:
agents, assigned_total, assigned_network = _from_fixed(
cfg.fixed, topology=cfg.topology, default=cfg
)
n = cfg.n
if cfg.distribution:
topo_size = len(topology) if topology else 0
networked = []
total = []
for d in cfg.distribution:
if d.strategy == config.Strategy.topology:
topo = d.topology if ("topology" in d.__fields_set__) else cfg.topology
if not topo:
raise ValueError(
'The "topology" strategy only works if the topology parameter is set to True'
)
if not topo_size:
raise ValueError(
f"Topology does not have enough free nodes to assign one to the agent"
)
networked.append(d)
if d.strategy == config.Strategy.total:
if not cfg.n:
raise ValueError(
'Cannot use the "total" strategy without providing the total number of agents'
)
total.append(d)
if networked:
new_agents = _from_distro(
networked,
n=topo_size - assigned_network,
topology=topo,
default=cfg,
random=random,
)
assigned_total += len(new_agents)
assigned_network += len(new_agents)
agents += new_agents
if total:
remaining = n - assigned_total
agents += _from_distro(total, n=remaining, default=cfg, random=random)
if assigned_network < topo_size:
utils.logger.warning(
    f"The total number of agents does not match the total number of nodes in "
    "every topology. This may be due to a definition error: assigned: "
    f"{ assigned_network } total size: { topo_size }"
)
return agents
def _from_fixed(
lst: List[config.FixedAgentConfig],
topology: bool,
default: config.SingleAgentConfig,
) -> List[Dict[str, Any]]:
agents = []
counts_total = 0
counts_network = 0
for fixed in lst:
agent = {}
if default:
agent = default.state.copy()
agent.update(fixed.state)
cls = serialization.deserialize(
fixed.agent_class or (default and default.agent_class)
)
agent["agent_class"] = cls
topo = (
fixed.topology
if ("topology" in fixed.__fields_set__)
else topology or default.topology
)
if topo:
agent["topology"] = True
counts_network += 1
if not fixed.hidden:
counts_total += 1
agents.append(agent)
return agents, counts_total, counts_network
def _from_distro(
distro: List[config.AgentDistro],
n: int,
default: config.SingleAgentConfig,
random,
topology: str = None
) -> List[Dict[str, Any]]:
agents = []
if n is None:
if any(dist.n is None for dist in distro):
raise ValueError(
"You must provide a total number of agents, or the number of each type"
)
n = sum(dist.n for dist in distro)
weights = list(dist.weight if dist.weight is not None else 1 for dist in distro)
minw = min(weights)
norm = list(weight / minw for weight in weights)
total = sum(norm)
chunk = n // total
# random.choices would be enough to get a weighted distribution. But it can vary a lot for smaller k
# So instead we calculate our own distribution to make sure the actual ratios are close to what we would expect
# Calculate how many times each has to appear
indices = list(
chain.from_iterable([idx] * int(n * chunk) for (idx, n) in enumerate(norm))
)
# Complete with random agents following the original weight distribution
if len(indices) < n:
indices += random.choices(
list(range(len(distro))),
weights=[d.weight for d in distro],
k=n - len(indices),
)
# Deserialize classes for efficiency
classes = list(
serialization.deserialize(i.agent_class or default.agent_class) for i in distro
)
# Add them in random order
random.shuffle(indices)
for idx in indices:
d = distro[idx]
agent = d.state.copy()
cls = classes[idx]
agent["agent_class"] = cls
if default:
agent.update(default.state)
topology = (
d.topology
if ("topology" in d.__fields_set__)
else topology or default.topology
)
if topology:
agent["topology"] = topology
agents.append(agent)
return agents
from .network_agents import *
from .fsm import *
from .evented import *
from typing import Optional
class Agent(NetworkAgent, FSM, EventedAgent):
"""Default agent class, has both network and event capabilities"""
from ..environment import NetworkEnvironment
from .BassModel import *
from .IndependentCascadeModel import *
from .SISaModel import *
from .CounterModel import *
try:
import scipy
from .Geo import Geo
except ImportError:
import sys
print("Could not load the Geo Agent, scipy is not installed", file=sys.stderr)
def custom(cls, **kwargs):
"""Create a new class from a template class and keyword arguments"""
return type(cls.__name__, (cls,), kwargs)
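# A small sketch of the template helper above (the attribute is hypothetical):
#
#   RichAgent = custom(BaseAgent, wealth=100)
#
# creates a BaseAgent subclass whose instances start with wealth == 100, through
# the defaults mechanism implemented in MetaAgent.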

View File

@@ -1,77 +0,0 @@
from . import BaseAgent
from ..events import Message, Tell, Ask, TimedOut
from ..time import BaseCond
from functools import partial
from collections import deque
class ReceivedOrTimeout(BaseCond):
def __init__(
self, agent, expiration=None, timeout=None, check=True, ignore=False, **kwargs
):
if expiration is None:
if timeout is not None:
expiration = agent.now + timeout
self.expiration = expiration
self.ignore = ignore
self.check = check
super().__init__(**kwargs)
def expired(self, time):
return self.expiration and self.expiration < time
def ready(self, agent, time):
return len(agent._inbox) or self.expired(time)
def return_value(self, agent):
if not self.ignore and self.expired(agent.now):
raise TimedOut("No messages received")
if self.check:
agent.check_messages()
return None
def schedule_next(self, time, delta, first=False):
if self._delta is not None:
delta = self._delta
return (time + delta, self)
def __repr__(self):
return f"ReceivedOrTimeout(expires={self.expiration})"
class EventedAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._inbox = deque()
self._processed = 0
def on_receive(self, *args, **kwargs):
pass
def received(self, *args, **kwargs):
return ReceivedOrTimeout(self, *args, **kwargs)
def tell(self, msg, sender=None):
self._inbox.append(Tell(timestamp=self.now, payload=msg, sender=sender))
def ask(self, msg, timeout=None, **kwargs):
ask = Ask(timestamp=self.now, payload=msg, sender=self)
self._inbox.append(ask)
expiration = float("inf") if timeout is None else self.now + timeout
return ask.replied(expiration=expiration, **kwargs)
def check_messages(self):
changed = False
while self._inbox:
msg = self._inbox.popleft()
self._processed += 1
if msg.expired(self.now):
continue
changed = True
reply = self.on_receive(msg.payload, sender=msg.sender)
if isinstance(msg, Ask):
msg.reply = reply
return changed
Evented = EventedAgent
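# A minimal sketch of the messaging helpers above (two hypothetical agents, a and
# b, that already belong to the same model):
#
#   b.tell("ping", sender=a)   # appends a Tell to b's inbox
#   b.check_messages()         # drains the inbox, calling b.on_receive("ping", sender=a)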

View File

@@ -1,148 +0,0 @@
from . import MetaAgent, BaseAgent
from ..time import Delta
from functools import partial, wraps
import inspect
def state(name=None, default=False):
def decorator(func, name=None):
"""
A state function should return either a state id, or a tuple (state_id, when)
The default value for state_id is the current state id.
The default value for when is the interval defined in the environment.
"""
if inspect.isgeneratorfunction(func):
orig_func = func
@wraps(func)
def func(self):
while True:
if not self._coroutine:
self._coroutine = orig_func(self)
try:
if self._last_except:
n = self._coroutine.throw(self._last_except)
else:
n = self._coroutine.send(self._last_return)
if n:
return None, n
return n
except StopIteration as ex:
self._coroutine = None
next_state = ex.value
if next_state is not None:
self._set_state(next_state)
return next_state
finally:
self._last_return = None
self._last_except = None
func.id = name or func.__name__
func.is_default = default
return func
if callable(name):
return decorator(name)
else:
return partial(decorator, name=name)
def default_state(func):
func.is_default = True
return func
class MetaFSM(MetaAgent):
def __new__(mcls, name, bases, namespace):
states = {}
# Re-use states from inherited classes
default_state = None
for i in bases:
if isinstance(i, MetaFSM):
for state_id, state in i._states.items():
if state.is_default:
default_state = state
states[state_id] = state
# Add new states
for attr, func in namespace.items():
if hasattr(func, "id"):
if func.is_default:
default_state = func
states[func.id] = func
namespace.update(
{
"_default_state": default_state,
"_states": states,
}
)
return super(MetaFSM, mcls).__new__(
mcls=mcls, name=name, bases=bases, namespace=namespace
)
class FSM(BaseAgent, metaclass=MetaFSM):
def __init__(self, init=True, **kwargs):
super().__init__(**kwargs, init=False)
if not hasattr(self, "state_id"):
if not self._default_state:
raise ValueError(
"No default state specified for {}".format(self.unique_id)
)
self.state_id = self._default_state.id
self._coroutine = None
self.default_interval = Delta(self.model.interval)
self._set_state(self.state_id)
if init:
self.init()
@classmethod
def states(cls):
return list(cls._states.keys())
def step(self):
self.debug(f"Agent {self.unique_id} @ state {self.state_id}")
self._check_alive()
next_state = self._states[self.state_id](self)
when = None
try:
next_state, *when = next_state
if not when:
when = None
elif len(when) == 1:
when = when[0]
else:
raise ValueError(
"Too many values returned. Only state (and time) allowed"
)
except TypeError:
pass
if next_state is not None:
self._set_state(next_state)
return when or self.default_interval
def _set_state(self, state, when=None):
if hasattr(state, "id"):
state = state.id
if state not in self._states:
raise ValueError("{} is not a valid state".format(state))
self.state_id = state
if when is not None:
self.model.schedule.add(self, when=when)
return state
def die(self, *args, **kwargs):
return self.dead, super().die(*args, **kwargs)
@state
def dead(self):
return self.die()
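# A minimal sketch of how the decorators above are meant to be used (the agent
# and state names are hypothetical; return values follow FSM.step above: either
# the next state, or a (state, when) tuple):
from soil.agents import FSM, state, default_state

class TrafficLight(FSM):
    @default_state
    @state
    def green(self):
        return self.red    # on the next activation, run the "red" state

    @state
    def red(self):
        return self.green  # and switch back again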

View File

@@ -1,100 +0,0 @@
from . import BaseAgent
class NetworkAgent(BaseAgent):
def __init__(self, *args, topology=None, init=True, node_id=None, **kwargs):
super().__init__(*args, init=False, **kwargs)
self.G = topology or self.model.G
assert self.G
if node_id is None:
nodes = self.random.choices(list(self.G.nodes), k=len(self.G))
for n_id in nodes:
if "agent" not in self.G.nodes[n_id] or self.G.nodes[n_id]["agent"] is None:
node_id = n_id
break
else:
node_id = len(self.G)
self.info(f"All nodes ({len(self.G)}) have an agent assigned, adding a new node to the graph for agent {self.unique_id}")
self.G.add_node(node_id)
assert node_id is not None
self.G.nodes[node_id]["agent"] = self
self.node_id = node_id
if init:
self.init()
def count_neighbors(self, state_id=None, **kwargs):
return len(self.get_neighbors(state_id=state_id, **kwargs))
def iter_neighbors(self, **kwargs):
return self.iter_agents(limit_neighbors=True, **kwargs)
def get_neighbors(self, **kwargs):
return list(self.iter_neighbors(**kwargs))
@property
def node(self):
return self.G.nodes[self.node_id]
def iter_agents(self, unique_id=None, *, limit_neighbors=False, **kwargs):
unique_ids = None
if unique_id is not None:
try:
unique_ids = set(unique_id)
except TypeError:
unique_ids = set([unique_id])
if limit_neighbors:
neighbor_ids = set()
for node_id in self.G.neighbors(self.node_id):
agent = self.G.nodes[node_id].get("agent")
if agent is not None:
neighbor_ids.add(agent.unique_id)
if unique_ids:
unique_ids = unique_ids & neighbor_ids
else:
unique_ids = neighbor_ids
if not unique_ids:
return
unique_ids = list(unique_ids)
yield from super().iter_agents(unique_id=unique_ids, **kwargs)
def subgraph(self, center=True, **kwargs):
include = [self] if center else []
G = self.G.subgraph(
n.node_id for n in list(self.get_agents(**kwargs) + include)
)
return G
def remove_node(self):
self.debug(f"Removing node for {self.unique_id}: {self.node_id}")
self.G.remove_node(self.node_id)
self.node_id = None
def add_edge(self, other, edge_attr_dict=None, *edge_attrs):
if self.node_id not in self.G.nodes(data=False):
raise ValueError(
"{} not in list of existing agents in the network".format(
self.unique_id
)
)
if other.node_id not in self.G.nodes(data=False):
raise ValueError(
"{} not in list of existing agents in the network".format(other)
)
self.G.add_edge(
self.node_id, other.node_id, edge_attr_dict=edge_attr_dict, *edge_attrs
)
def die(self, remove=True):
if not self.alive:
return None
if remove:
self.remove_node()
return super().die()
NetAgent = NetworkAgent

View File

@@ -1,56 +0,0 @@
import os
import sys
import sqlalchemy
import pandas as pd
from collections import namedtuple
def plot(env, agent_df=None, model_df=None, steps=False, ignore=["agent_count", ]):
"""Plot the model dataframe and agent dataframe together."""
if model_df is None:
model_df = env.model_df()
ignore = list(ignore)
if not steps:
ignore.append("step")
else:
ignore.append("time")
ax = model_df.drop(ignore, axis='columns').plot()
if agent_df is None:
try:
agent_df = env.agent_df()
except UserWarning:
print("No agent dataframe provided and no agent reporters found. Skipping agent plot.", file=sys.stderr)
return
if not agent_df.empty:
agent_df.unstack().apply(lambda x: x.value_counts(),
axis=1).fillna(0).plot(ax=ax,
secondary_y=True)
Results = namedtuple("Results", ["config", "parameters", "env", "agents"])
#TODO implement reading from CSV and SQLITE
def read_sql(fpath=None, name=None, include_agents=False):
if not (fpath is None) ^ (name is None):
raise ValueError("Specify either a path or a simulation name")
if name:
fpath = os.path.join("soil_output", name, f"{name}.sqlite")
fpath = os.path.abspath(fpath)
# TODO: improve url parsing. This is a hacky way to check we weren't given a URL
if "://" not in fpath:
fpath = f"sqlite:///{fpath}"
engine = sqlalchemy.create_engine(fpath)
with engine.connect() as conn:
env = pd.read_sql_table("env", con=conn,
index_col="step").reset_index().set_index([
"simulation_id", "params_id",
"iteration_id", "step"
])
agents = pd.read_sql_table("agents", con=conn, index_col=["simulation_id", "params_id", "iteration_id", "step", "agent_id"])
config = pd.read_sql_table("configuration", con=conn, index_col="simulation_id")
parameters = pd.read_sql_table("parameters", con=conn, index_col=["iteration_id", "params_id", "simulation_id"])
try:
parameters = parameters.pivot(columns="key", values="value")
except Exception as e:
print(f"warning: coult not pivot parameters: {e}")
return Results(config, parameters, env, agents)
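# A minimal usage sketch (the simulation name is hypothetical; it assumes results
# were dumped to the default SQLite file under soil_output/):
#
#   res = read_sql(name="my_simulation")
#   res.env.head()      # model-level reporters, indexed by simulation/params/iteration/step
#   res.agents.head()   # agent-level reporters, if any were collected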

View File

@@ -1,2 +0,0 @@
def load_config(cfg):
return cfg

View File

@@ -1,19 +0,0 @@
from mesa import DataCollector as MDC
class SoilCollector(MDC):
def __init__(self, model_reporters=None, agent_reporters=None, tables=None, **kwargs):
model_reporters = model_reporters or {}
agent_reporters = agent_reporters or {}
tables = tables or {}
if 'agent_count' not in model_reporters:
model_reporters['agent_count'] = lambda m: m.schedule.get_agent_count()
if 'time' not in model_reporters:
model_reporters['time'] = lambda m: m.now
# if 'state_id' not in agent_reporters:
# agent_reporters['state_id'] = lambda agent: getattr(agent, 'state_id', None)
super().__init__(model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
**kwargs)

View File

@@ -1,243 +0,0 @@
from __future__ import annotations
import pdb
import sys
import os
from textwrap import indent
from functools import wraps
from .agents import FSM, MetaFSM
from mesa import Model, Agent
def wrapcmd(func):
@wraps(func)
def wrapper(self, arg: str, temporary=False):
sys.settrace(self.trace_dispatch)
lastself = self
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
known["attrs"] = arg.strip().split()
this = known.get("self", None)
if isinstance(this, Model):
known["model"] = this
elif isinstance(this, Agent):
known["agent"] = this
known["model"] = this.model
known["self"] = lastself
return exec(func.__code__, known, known)
return wrapper
class Debug(pdb.Pdb):
def __init__(self, *args, skip_soil=False, **kwargs):
skip = kwargs.get("skip", [])
if skip_soil:
skip.append("soil")
skip.append("contextlib")
skip.append("soil.*")
skip.append("mesa.*")
super(Debug, self).__init__(*args, skip=skip, **kwargs)
self.prompt = "[soil-pdb] "
@staticmethod
def _soil_agents(model, attrs=None, pretty=True, **kwargs):
for agent in model.agents(**kwargs):
d = agent
print(" - " + indent(agent.to_str(keys=attrs, pretty=pretty), " "))
@wrapcmd
def do_soil_agents():
return Debug._soil_agents(model, attrs=attrs or None)
do_sa = do_soil_agents
@wrapcmd
def do_soil_list():
return Debug._soil_agents(model, attrs=["state_id"], pretty=False)
do_sl = do_soil_list
def do_continue_state(self, arg):
"""Continue until next time this state is reached"""
self.do_break_state(arg, temporary=True)
return self.do_continue("")
do_cs = do_continue_state
@wrapcmd
def do_soil_agent():
if not agent:
print("No agent available")
return
keys = None
if attrs:
keys = []
for k in attrs:
for key in agent.keys():
if key.startswith(k):
keys.append(key)
print(agent.to_str(pretty=True, keys=keys))
do_aa = do_soil_agent
def do_break_step(self, arg: str):
"""
Break before the next step.
"""
try:
known = globals()
known.update(self.curframe.f_globals)
known.update(self.curframe.f_locals)
func = getattr(known["model"], "step")
except AttributeError as ex:
self.error(f"The model does not have a step function: {ex}")
return
if hasattr(func, "__func__"):
func = func.__func__
code = func.__code__
# use co_name to identify the bkpt (function names
# could be aliased, but co_name is invariant)
funcname = code.co_name
lineno = code.co_firstlineno
filename = code.co_filename
# Check for reasonable breakpoint
line = self.checkline(filename, lineno)
if not line:
raise ValueError("no line found")
# now set the break point
existing = self.get_breaks(filename, line)
if existing:
self.message("Breakpoint already exists at %s:%d" % (filename, line))
return
cond = f"self.schedule.steps > {model.schedule.steps}"
err = self.set_break(filename, line, True, cond, funcname)
if err:
self.error(err)
else:
bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
return self.do_continue("")
do_bstep = do_break_step
def do_break_state(self, arg: str, instances=None, temporary=False):
"""
Break before a specified state is stepped into.
"""
klass = None
state = arg
if not state:
self.error("Specify at least a state name")
return
state, *tokens = state.lstrip().split()
if tokens:
instances = list(eval(token) for token in tokens)
colon = state.find(":")
if colon > 0:
klass = state[:colon].rstrip()
state = state[colon + 1 :].strip()
print(klass, state, tokens)
if klass:
    klass = eval(klass, self.curframe.f_globals, self.curframe_locals)
    klasses = [klass]
else:
klasses = [
k
for k in self.curframe.f_globals.values()
if isinstance(k, type) and issubclass(k, FSM)
]
if not klasses:
self.error("No agent classes found")
for klass in klasses:
try:
func = getattr(klass, state)
except AttributeError:
self.error(f"State {state} not found in class {klass}")
continue
if hasattr(func, "__func__"):
func = func.__func__
code = func.__code__
# use co_name to identify the bkpt (function names
# could be aliased, but co_name is invariant)
funcname = code.co_name
lineno = code.co_firstlineno
filename = code.co_filename
# Check for reasonable breakpoint
line = self.checkline(filename, lineno)
if not line:
raise ValueError("no line found")
# now set the break point
cond = None
if instances:
cond = f"self.unique_id in { repr(instances) }"
existing = self.get_breaks(filename, line)
if existing:
self.message("Breakpoint already exists at %s:%d" % (filename, line))
continue
err = self.set_break(filename, line, temporary, cond, funcname)
if err:
self.error(err)
else:
bp = self.get_breaks(filename, line)[-1]
self.message("Breakpoint %d at %s:%d" % (bp.number, bp.file, bp.line))
do_bs = do_break_state
def do_break_state_self(self, arg: str, temporary=False):
"""
Break before a specified state is stepped into, for the current agent
"""
agent = self.curframe.f_locals.get("self")
if not agent:
self.error("No current agent.")
self.error("Try this again when the debugger is stopped inside an agent")
return
arg = f"{agent.__class__.__name__}:{ arg } {agent.unique_id}"
return self.do_break_state(arg)
do_bss = do_break_state_self
debugger = None
def set_trace(frame=None, **kwargs):
global debugger
if debugger is None:
debugger = Debug(**kwargs)
frame = frame or sys._getframe().f_back
debugger.set_trace(frame)
def post_mortem(traceback=None, **kwargs):
global debugger
if debugger is None:
debugger = Debug(**kwargs)
t = sys.exc_info()[2]
debugger.reset()
debugger.interaction(None, t)
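# A sketch of an interactive session with the commands defined above (the agent
# class, state name and agent ids are hypothetical):
#
#   [soil-pdb] sl                       # list every agent and its state_id
#   [soil-pdb] bs ZombieAgent:dead 3 7  # break when agent 3 or 7 enters the "dead" state
#   [soil-pdb] bstep                    # break right before the next model step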

View File

@@ -1,6 +0,0 @@
def report(f: property):
if isinstance(f, property):
setattr(f.fget, "add_to_report", True)
else:
setattr(f, "add_to_report", True)
return f
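# A minimal sketch of the decorator above (the environment subclass and the state
# id are hypothetical): properties marked with @report are picked up by
# BaseEnvironment.__new__ and registered as model reporters automatically.
from soil import Environment, report

class OutbreakEnv(Environment):
    @report
    @property
    def num_infected(self):
        return self.count_agents(state_id="infected")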

View File

@@ -1,431 +0,0 @@
from __future__ import annotations
import os
import sqlite3
import math
import logging
import inspect
from typing import Any, Callable, Dict, Optional, Union, List, Type
from collections import namedtuple
from time import time as current_time
from copy import deepcopy
import networkx as nx
from mesa import Model, Agent
from . import agents as agentmod, datacollection, serialization, utils, time, network, events
# TODO: maybe add metaclass to read attributes of a model
class BaseEnvironment(Model):
"""
The environment is key in a simulation. It controls how agents interact,
and what information is available to them.
This is an opinionated version of `mesa.Model` class, which adds many
convenience methods and abstractions.
The environment parameters and the state of every agent can be accessed
both by using the environment as a dictionary and with the environment's
:meth:`soil.environment.Environment.get` method.
"""
collector_class = datacollection.SoilCollector
def __new__(cls,
*args: Any,
seed="default",
dir_path=None,
collector_class: type = None,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
**kwargs: Any) -> Any:
"""Create a new model with a default seed value"""
self = super().__new__(cls, *args, seed=seed, **kwargs)
self.dir_path = dir_path or os.getcwd()
collector_class = collector_class or cls.collector_class
collector_class = serialization.deserialize(collector_class)
self.datacollector = collector_class(
model_reporters=model_reporters,
agent_reporters=agent_reporters,
tables=tables,
)
for k in dir(cls):
v = getattr(cls, k)
if isinstance(v, property):
v = v.fget
if getattr(v, "add_to_report", False):
self.add_model_reporter(k, v)
return self
def __init__(
self,
*,
id="unnamed_env",
seed="default",
dir_path=None,
schedule_class=time.TimedActivation,
interval=1,
logger = None,
agents: Optional[Dict] = None,
collector_class: type = datacollection.SoilCollector,
agent_reporters: Optional[Any] = None,
model_reporters: Optional[Any] = None,
tables: Optional[Any] = None,
init: bool = True,
**env_params,
):
super().__init__()
self.current_id = -1
self.id = id
if logger:
self.logger = logger
else:
self.logger = utils.logger.getChild(self.id)
if schedule_class is None:
schedule_class = time.TimedActivation
else:
schedule_class = serialization.deserialize(schedule_class)
self.interval = interval
self.schedule = schedule_class(self)
for (k, v) in env_params.items():
self[k] = v
if agents:
self.add_agents(**agents)
if init:
self.init()
self.datacollector.collect(self)
def init(self):
pass
@property
def agents(self):
return agentmod.AgentView(self.schedule._agents)
def agent(self, *args, **kwargs):
return agentmod.AgentView(self.schedule._agents).one(*args, **kwargs)
def count_agents(self, *args, **kwargs):
return sum(1 for i in self.agents(*args, **kwargs))
def agent_df(self, steps=False):
df = self.datacollector.get_agent_vars_dataframe()
if steps:
df.index.rename(["step", "agent_id"], inplace=True)
return df
model_df = self.datacollector.get_model_vars_dataframe()
df.index = df.index.set_levels(model_df.time, level=0).rename(["time", "agent_id"])
return df
def model_df(self, steps=False):
df = self.datacollector.get_model_vars_dataframe()
if steps:
return df
df.index.rename("step", inplace=True)
return df.reset_index().set_index("time")
@property
def now(self):
if self.schedule:
return self.schedule.time
raise Exception(
"The environment has not been scheduled, so it has no sense of time"
)
def init_agents(self):
pass
def add_agent(self, agent_class, unique_id=None, **agent):
if unique_id is None:
unique_id = self.next_id()
agent["unique_id"] = unique_id
agent = dict(**agent)
unique_id = agent.pop("unique_id", None)
if unique_id is None:
unique_id = self.next_id()
a = serialization.deserialize(agent_class)(unique_id=unique_id, model=self, **agent)
self.schedule.add(a)
return a
def add_agents(self, agent_classes: List[type], k, weights: Optional[List[float]] = None, **kwargs):
if isinstance(agent_classes, type):
agent_classes = [agent_classes]
if weights is None:
weights = [1] * len(agent_classes)
for cls in self.random.choices(agent_classes, weights=weights, k=k):
self.add_agent(agent_class=cls, **kwargs)
def log(self, message, *args, level=logging.INFO, **kwargs):
if not self.logger.isEnabledFor(level):
return
message = message + " ".join(str(i) for i in args)
message = " @{:>3}: {}".format(self.now, message)
for k, v in kwargs.items():
    message += " {}={} ".format(k, v)
extra = {}
extra["now"] = self.now
extra["id"] = self.id
return self.logger.log(level, message, extra=extra)
def step(self):
"""
Advance one step in the simulation, and update the data collection and scheduler appropriately
"""
super().step()
self.schedule.step()
self.datacollector.collect(self)
if self.logger.isEnabledFor(logging.DEBUG):
msg = "Model data:\n"
max_width = max(len(k) for k in self.datacollector.model_vars.keys())
for (k, v) in self.datacollector.model_vars.items():
msg += f"\t{k:<{max_width}}: {v[-1]:>6}\n"
self.logger.debug(f"--- Steps: {self.schedule.steps:^5} - Time: {self.now:^5} --- " + msg)
def add_model_reporter(self, name, func=None):
if not func:
func = lambda env: getattr(env, name)
self.datacollector._new_model_reporter(name, func)
def add_agent_reporter(self, name, agent_type=None):
if agent_type:
reporter = lambda a: getattr(a, name) if isinstance(a, agent_type) else None
else:
reporter = lambda a: getattr(a, name, None)
self.datacollector._new_agent_reporter(name, reporter)
@classmethod
def run(cls, *,
name=None,
iterations=1,
num_processes=1, **kwargs):
from .simulation import Simulation
return Simulation(name=name or cls.__name__,
model=cls, iterations=iterations,
num_processes=num_processes, **kwargs).run()
def __getitem__(self, key):
try:
return getattr(self, key)
except AttributeError:
raise KeyError(f"key {key} not found in environment")
def __delitem__(self, key):
return delattr(self, key)
def __contains__(self, key):
return hasattr(self, key)
def __setitem__(self, key, value):
setattr(self, key, value)
def __str__(self):
return str(dict(self))
def __len__(self):
return sum(1 for n in self.keys())
def __iter__(self):
return iter(self.agents())
def get(self, key, default=None):
return self[key] if key in self else default
def keys(self):
return (k for k in self.__dict__ if k[0] != "_")
class NetworkEnvironment(BaseEnvironment):
"""
The NetworkEnvironment is an environment that includes one or more networkx.Graph instances
and methods to associate agents to nodes and vice versa.
"""
def __init__(self,
*args,
topology: Optional[Union[nx.Graph, str]] = None,
agent_class: Optional[Type[agentmod.Agent]] = None,
network_generator: Optional[Callable] = None,
network_params: Optional[Dict] = {},
init=True,
**kwargs):
self.topology = topology
self.network_generator = network_generator
self.network_params = network_params
if topology or network_params or network_generator:
self.create_network(topology, generator=network_generator, **network_params)
else:
self.G = nx.Graph()
super().__init__(*args, **kwargs, init=False)
self.agent_class = agent_class
if agent_class:
self.agent_class = serialization.deserialize(agent_class)
if self.agent_class:
self.populate_network(self.agent_class)
self._check_agent_nodes()
if init:
self.init()
self.datacollector.collect(self)
def add_agent(self, agent_class, *args, node_id=None, topology=None, **kwargs):
if node_id is None and topology is None:
return super().add_agent(agent_class, *args, **kwargs)
try:
a = super().add_agent(agent_class, *args, node_id=node_id, **kwargs)
except TypeError:
self.logger.warning(f"Agent constructor for {agent_class} does not have a node_id attribute. Might be a bug.")
a = super().add_agent(agent_class, *args, **kwargs)
self.G.nodes[node_id]["agent"] = a
return a
def add_agents(self, *args, k=None, **kwargs):
if not k and not self.G:
raise ValueError("Cannot add agents to an empty network")
super().add_agents(*args, k=k or len(self.G), **kwargs)
def create_network(self, topology=None, generator=None, path=None, **network_params):
if topology is not None:
topology = network.from_topology(topology, dir_path=self.dir_path)
elif path is not None:
topology = network.from_topology(path, dir_path=self.dir_path)
elif generator is not None:
topology = network.from_params(generator=generator, dir_path=self.dir_path, **network_params)
else:
raise ValueError("topology must be a networkx.Graph or a string, or network_generator must be provided")
self.G = topology
def init_agents(self, *args, **kwargs):
"""Initialize the agents from a"""
super().init_agents(*args, **kwargs)
@property
def network_agents(self):
"""Return agents still alive and assigned to a node in the network."""
for (id, data) in self.G.nodes(data=True):
if "agent" in data:
agent = data["agent"]
if getattr(agent, "alive", True):
yield agent
def add_node(self, agent_class, unique_id=None, node_id=None, **kwargs):
if unique_id is None:
unique_id = self.next_id()
if node_id is None:
node_id = network.find_unassigned(
G=self.G, shuffle=True, random=self.random
)
if node_id is None:
node_id = f"node_for_{unique_id}"
if node_id not in self.G.nodes:
self.G.add_node(node_id)
assert "agent" not in self.G.nodes[node_id]
a = self.add_agent(
unique_id=unique_id,
agent_class=agent_class,
topology=self.G,
node_id=node_id,
**kwargs,
)
a["visible"] = True
return a
def _check_agent_nodes(self):
"""
Detect nodes that have agents assigned to them.
"""
for (id, data) in self.G.nodes(data=True):
if "agent_id" in data:
agent = self.agents[data["agent_id"]]
self.G.nodes[id]["agent"] = agent
assert not getattr(agent, "node_id", None) or agent.node_id == id
agent.node_id = id
for agent in self.agents():
if hasattr(agent, "node_id"):
node_id = agent["node_id"]
if node_id not in self.G.nodes:
raise ValueError(f"Agent {agent} is assigned to node {agent.node_id} which is not in the network")
node = self.G.nodes[node_id]
if node.get("agent") is not None and node["agent"] != agent:
raise ValueError(f"Node {node_id} already has a different agent assigned to it")
self.G.nodes[node_id]["agent"] = agent
def add_agents(self, agent_classes: List[type], k=None, weights: Optional[List[float]] = None, **kwargs):
if k is None:
k = len(self.G)
if not k:
raise ValueError("Cannot add agents to an empty network")
super().add_agents(agent_classes, k=k, weights=weights, **kwargs)
def agent_for_node_id(self, node_id):
return self.G.nodes[node_id].get("agent")
def populate_network(self, agent_class: List[Model], weights: List[float] = None, **agent_params):
if isinstance(agent_class, type):
agent_class = [agent_class]
else:
agent_class = list(agent_class)
if not weights:
weights = [1] * len(agent_class)
assert len(self.G)
classes = self.random.choices(agent_class, weights, k=len(self.G))
toadd = []
for (cls, (node_id, node)) in zip(classes, self.G.nodes(data=True)):
if "agent" in node:
continue
node["agent"] = None # Reserve
toadd.append(dict(node_id=node_id, topology=self.G, agent_class=cls, **agent_params))
for d in toadd:
a = self.add_agent(**d)
self.G.nodes[d["node_id"]]["agent"] = a
assert all("agent" in node for (_, node) in self.G.nodes(data=True))
assert len(list(self.network_agents))
class EventedEnvironment(BaseEnvironment):
def broadcast(self, msg, sender=None, expiration=None, ttl=None, **kwargs):
for agent in self.agents(**kwargs):
if agent == sender:
continue
self.logger.debug(f"Telling {repr(agent)}: {msg} ttl={ttl}")
try:
inbox = agent._inbox
except AttributeError:
self.logger.info(
f"Agent {agent.unique_id} cannot receive events because it does not have an inbox"
)
continue
# Allow for AttributeError exceptions in this part of the code
inbox.append(
events.Tell(
payload=msg,
sender=sender,
expiration=expiration if ttl is None else self.now + ttl,
)
)
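# A hedged sketch of how an agent could use broadcast from its own step
# function (it assumes the usual mesa convention that agents keep a reference
# to their model in self.model; the payload is arbitrary):
#
#     self.model.broadcast("price_update", sender=self, ttl=5)
#
# Agents without an _inbox are skipped with an informational log message.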
class Environment(NetworkEnvironment, EventedEnvironment):
"""Default environment class, has both network and event capabilities"""

@@ -1,56 +0,0 @@
from .time import BaseCond
from dataclasses import dataclass, field
from typing import Any
from uuid import uuid4
class Event:
pass
@dataclass
class Message:
payload: Any
sender: Any = None
expiration: float = None
timestamp: float = None
id: int = field(default_factory=uuid4)
def expired(self, when):
return self.expiration is not None and self.expiration < when
class Reply(Message):
source: Message
class ReplyCond(BaseCond):
def __init__(self, ask, *args, **kwargs):
self._ask = ask
super().__init__(*args, **kwargs)
def ready(self, agent, time):
return self._ask.reply is not None or self._ask.expired(time)
def return_value(self, agent):
if self._ask.expired(agent.now):
raise TimedOut()
return self._ask.reply
def __repr__(self):
return f"ReplyCond({self._ask.id})"
class Ask(Message):
reply: Message = None
def replied(self, expiration=None):
return ReplyCond(self)
class Tell(Message):
pass
class TimedOut(Exception):
pass

@@ -1,282 +0,0 @@
import os
import sys
from time import time as current_time
from io import BytesIO
from sqlalchemy import create_engine
from textwrap import dedent, indent
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
from .serialization import deserialize, serialize
from .utils import try_backup, open_or_reuse, logger, timer
from . import utils, network
class DryRunner(BytesIO):
def __init__(self, fname, *args, copy_to=None, **kwargs):
super().__init__(*args, **kwargs)
self.__fname = fname
self.__copy_to = copy_to
def write(self, txt):
if self.__copy_to:
self.__copy_to.write("{}:::{}".format(self.__fname, txt))
try:
super().write(txt)
except TypeError:
super().write(bytes(txt, "utf-8"))
def close(self):
content = "(binary data not shown)"
try:
content = self.getvalue().decode()
except UnicodeDecodeError:
pass
logger.info(
"**Not** written to {} (no_dump mode):\n\n{}\n\n".format(
self.__fname, content
)
)
super().close()
class Exporter:
"""
Interface for all exporters. Inheriting from it is not strictly necessary,
but it is convenient if you don't plan to implement all the methods.
"""
def __init__(self, simulation, outdir=None, dump=True, copy_to=None):
self.simulation = simulation
outdir = outdir or os.path.join(os.getcwd(), "soil_output")
self.outdir = os.path.join(outdir, simulation.group or "", simulation.name)
self.dump = dump
if copy_to is None and not dump:
copy_to = sys.stdout
self.copy_to = copy_to
def sim_start(self):
"""Method to call when the simulation starts"""
pass
def sim_end(self):
"""Method to call when the simulation ends"""
pass
def iteration_start(self, env):
"""Method to call when a iteration start"""
pass
def iteration_end(self, env, params, params_id):
"""Method to call when a iteration ends"""
pass
def output(self, f, mode="w", **kwargs):
if not self.dump:
f = DryRunner(f, copy_to=self.copy_to)
else:
try:
if not os.path.isabs(f):
f = os.path.join(self.outdir, f)
except TypeError:
pass
return open_or_reuse(f, mode=mode, backup=self.simulation.backup, **kwargs)
def get_dfs(self, env, **kwargs):
yield from get_dc_dfs(env.datacollector,
simulation_id=self.simulation.id,
iteration_id=env.id,
**kwargs)
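# A sketch of a custom exporter built on this interface. The "state_id" column
# is only an example; any column collected by the datacollector would work:
#
#     class state_counts(Exporter):
#         def iteration_end(self, env, params, params_id, *args, **kwargs):
#             for (name, df) in self.get_dfs(env, params_id=params_id):
#                 if name == "agents" and "state_id" in df.columns:
#                     with self.output(f"{env.id}.state_counts.csv") as f:
#                         df["state_id"].value_counts().to_csv(f)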
def get_dc_dfs(dc, **kwargs):
dfs = {}
dfe = dc.get_model_vars_dataframe()
dfe.index.rename("step", inplace=True)
dfs["env"] = dfe
try:
dfa = dc.get_agent_vars_dataframe()
dfa.index.rename(["step", "agent_id"], inplace=True)
dfs["agents"] = dfa
except UserWarning:
pass
for table_name in dc.tables:
dfs[table_name] = dc.get_table_dataframe(table_name)
for (name, df) in dfs.items():
for (k, v) in kwargs.items():
df[k] = v
df.set_index(["simulation_id", "iteration_id"], append=True, inplace=True)
yield from dfs.items()
class SQLite(Exporter):
"""Writes sqlite results"""
sim_started = False
def sim_start(self):
if not self.dump:
logger.debug("NOT dumping results")
return
self.dbpath = os.path.join(self.outdir, f"{self.simulation.name}.sqlite")
logger.info("Dumping results to %s", self.dbpath)
if self.simulation.backup:
try_backup(self.dbpath, remove=True)
if self.simulation.overwrite:
if os.path.exists(self.dbpath):
os.remove(self.dbpath)
self.engine = create_engine(f"sqlite:///{self.dbpath}", echo=False)
sim_dict = {k: serialize(v)[0] for (k,v) in self.simulation.to_dict().items()}
sim_dict["simulation_id"] = self.simulation.id
df = pd.DataFrame([sim_dict])
df.to_sql("configuration", con=self.engine, if_exists="append")
def iteration_end(self, env, params, params_id, *args, **kwargs):
if not self.dump:
logger.info("Running in NO DUMP mode. Results will NOT be saved to a DB.")
return
with timer(
"Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
):
pd.DataFrame([{"simulation_id": self.simulation.id,
"params_id": params_id,
"iteration_id": env.id,
"key": k,
"value": serialize(v)[0]} for (k,v) in params.items()]).to_sql("parameters", con=self.engine, if_exists="append")
for (t, df) in self.get_dfs(env, params_id=params_id):
df.to_sql(t, con=self.engine, if_exists="append")
class csv(Exporter):
"""Export the state of each environment (and its agents) a CSV file for the simulation"""
def sim_start(self):
super().sim_start()
def iteration_end(self, env, params, params_id, *args, **kwargs):
with timer(
"[CSV] Dumping simulation {} iteration {} @ dir {}".format(
self.simulation.name, env.id, self.outdir
)
):
for (df_name, df) in self.get_dfs(env, params_id=params_id):
with self.output("{}.{}.csv".format(env.id, df_name), mode="a") as f:
df.to_csv(f)
# TODO: reimplement GEXF exporting without history
class gexf(Exporter):
def iteration_end(self, env, *args, **kwargs):
if not self.dump:
logger.info("Not dumping GEXF (NO_DUMP mode)")
return
with timer(
"[GEXF] Dumping simulation {} iteration {}".format(self.simulation.name, env.id)
):
with self.output("{}.gexf".format(env.id), mode="wb") as f:
network.dump_gexf(env.history_to_graph(), f)
self.dump_gexf(env, f)
class dummy(Exporter):
def sim_start(self):
with self.output("dummy", "w") as f:
f.write("simulation started @ {}\n".format(current_time()))
def iteration_start(self, env):
with self.output("dummy", "w") as f:
f.write("iteration started@ {}\n".format(current_time()))
def iteration_end(self, env, *args, **kwargs):
with self.output("dummy", "w") as f:
f.write("iteration ended@ {}\n".format(current_time()))
def sim_end(self):
with self.output("dummy", "a") as f:
f.write("simulation ended @ {}\n".format(current_time()))
class graphdrawing(Exporter):
def iteration_end(self, env, *args, **kwargs):
# Outside effects
f = plt.figure()
nx.draw(
env.G,
node_size=10,
width=0.2,
pos=nx.spring_layout(env.G, scale=100),
ax=f.add_subplot(111),
)
with open("graph-{}.png".format(env.id)) as f:
f.savefig(f)
class summary(Exporter):
"""Print a summary of each iteration to sys.stdout"""
def iteration_end(self, env, *args, **kwargs):
msg = ""
for (t, df) in self.get_dfs(env):
if not len(df):
continue
tabs = "\t" * 2
description = indent(str(df.describe()), tabs)
last_line = indent(str(df.iloc[-1:]), tabs)
# value_counts = indent(str(df.value_counts()), tabs)
value_counts = indent(str(df.apply(lambda x: x.value_counts()).T.stack()), tabs)
msg += dedent("""
Dataframe {t}:
Last line:
{last_line}
Description:
{description}
Value counts:
{value_counts}
""").format(**locals())
logger.info(msg)
class YAML(Exporter):
"""Writes the configuration of the simulation to a YAML file"""
def sim_start(self):
if not self.dump:
logger.debug("NOT dumping results")
return
with self.output(self.simulation.id + ".dumped.yml") as f:
logger.info(f"Dumping simulation configuration to {self.outdir}")
f.write(self.simulation.to_yaml())
class default(Exporter):
"""Default exporter. Writes sqlite results, as well as the simulation YAML"""
def __init__(self, *args, exporter_cls=[], **kwargs):
exporter_cls = exporter_cls or [YAML, SQLite]
self.inner = [cls(*args, **kwargs) for cls in exporter_cls]
def sim_start(self, *args, **kwargs):
for exporter in self.inner:
exporter.sim_start(*args, **kwargs)
def sim_end(self, *args, **kwargs):
for exporter in self.inner:
exporter.sim_end(*args, **kwargs)
def iteration_end(self, *args, **kwargs):
for exporter in self.inner:
exporter.iteration_end(*args, **kwargs)

@@ -1,83 +0,0 @@
from __future__ import annotations
from typing import Dict
import os
import sys
import random
import networkx as nx
from . import config, serialization, basestring
def from_topology(topology, dir_path: str = None):
if topology is None:
return nx.Graph()
if isinstance(topology, nx.Graph):
return topology
# If it's a dict, assume it's a node-link graph
if isinstance(topology, dict):
try:
return nx.json_graph.node_link_graph(topology)
except Exception as ex:
raise ValueError("Unknown topology format")
# Otherwise, treat like a path
path = topology
if dir_path and not os.path.isabs(path):
path = os.path.join(dir_path, path)
extension = os.path.splitext(path)[1][1:]
kwargs = {}
if extension == "gexf":
kwargs["version"] = "1.2draft"
kwargs["node_type"] = int
try:
method = getattr(nx.readwrite, "read_" + extension)
except AttributeError:
raise AttributeError("Unknown format")
return method(path, **kwargs)
def from_params(generator, dir_path: str = None, **params):
if dir_path not in sys.path:
sys.path.append(dir_path)
method = serialization.deserializer(
generator,
known_modules=[
"networkx.generators",
],
)
return method(**params)
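# For instance, generator names are resolved against networkx.generators:
#
#     G = from_params(generator="barabasi_albert_graph", n=100, m=2)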
def find_unassigned(G, shuffle=False, random=random):
"""
Find a node in the topology that has no agent assigned to it yet.
Returns the node id, or None if every node is already taken.
"""
candidates = list(G.nodes(data=True))
if shuffle:
random.shuffle(candidates)
for next_id, data in candidates:
if "agent" not in data:
return next_id
return None
def dump_gexf(G, f):
for node in G.nodes():
if "pos" in G.nodes[node]:
G.nodes[node]["viz"] = {
"position": {
"x": G.nodes[node]["pos"][0],
"y": G.nodes[node]["pos"][1],
"z": 0.0,
}
}
del G.nodes[node]["pos"]
nx.write_gexf(G, f, version="1.2draft")

@@ -1,32 +0,0 @@
from __future__ import annotations
from typing_extensions import Annotated
import annotated_types
from typing import *
from dataclasses import dataclass
class Parameter:
pass
def floatrange(
*,
gt: Optional[float] = None,
ge: Optional[float] = None,
lt: Optional[float] = None,
le: Optional[float] = None,
multiple_of: Optional[float] = None,
) -> type[float]:
return Annotated[
float,
annotated_types.Interval(gt=gt, ge=ge, lt=lt, le=le),
annotated_types.MultipleOf(multiple_of) if multiple_of is not None else None,
]
function = Annotated[Callable, Parameter]
Integer = Annotated[int, Parameter]
Float = Annotated[float, Parameter]
probability = floatrange(ge=0, le=1)
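# These annotated types are meant to be used as hints for model parameters,
# e.g. (a sketch; the attribute names are illustrative only):
#
#     num_agents: Integer = 100
#     prob_infect: probability = 0.2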

@@ -1,276 +0,0 @@
import os
import logging
import ast
import sys
import re
import importlib
import importlib.machinery, importlib.util
from glob import glob
from itertools import product, chain
from contextlib import contextmanager
import yaml
import networkx as nx
from . import config
from jinja2 import Template
logger = logging.getLogger("soil")
def load_file(infile):
folder = os.path.dirname(infile)
if folder not in sys.path:
sys.path.append(folder)
with open(infile, "r") as f:
return list(chain.from_iterable(map(expand_template, load_string(f))))
def load_string(string):
yield from yaml.load_all(string, Loader=yaml.FullLoader)
def expand_template(config):
if "template" not in config:
yield config
return
if "vars" not in config:
raise ValueError(
("You must provide a definition of variables" " for the template.")
)
template = config["template"]
if not isinstance(template, str):
template = yaml.dump(template)
template = Template(template)
params = params_for_template(config)
blank_str = template.render({k: 0 for k in params[0].keys()})
blank = list(load_string(blank_str))
if len(blank) > 1:
raise ValueError("Templates must not return more than one configuration")
if "name" in blank[0]:
raise ValueError("Templates cannot be named, use group instead")
for ps in params:
string = template.render(ps)
for c in load_string(string):
yield c
def params_for_template(config):
sampler_config = config.get("sampler", {"N": 100})
sampler = sampler_config.pop("method", "SALib.sample.morris.sample")
sampler = deserializer(sampler)
bounds = config["vars"]["bounds"]
problem = {
"num_vars": len(bounds),
"names": list(bounds.keys()),
"bounds": list(v for v in bounds.values()),
}
samples = sampler(problem, **sampler_config)
lists = config["vars"].get("lists", {})
names = list(lists.keys())
values = list(lists.values())
combs = list(product(*values))
allnames = names + problem["names"]
allvalues = [(list(i[0]) + list(i[1])) for i in product(combs, samples)]
params = list(map(lambda x: dict(zip(allnames, x)), allvalues))
return params
def load_files(*patterns, **kwargs):
for pattern in patterns:
for i in glob(pattern, **kwargs, recursive=True):
for cfg in load_file(i):
path = os.path.abspath(i)
yield cfg, path
def load_config(cfg):
if isinstance(cfg, dict):
yield config.load_config(cfg), os.getcwd()
else:
yield from load_files(cfg)
builtins = importlib.import_module("builtins")
KNOWN_MODULES = {
'soil': None,
}
MODULE_FILES = {}
def _add_source_file(file):
"""Add a file to the list of known modules"""
file = os.path.abspath(file)
if file in MODULE_FILES:
logger.warning(f"File {file} already added as module {MODULE_FILES[file]}. Reloading")
_remove_source_file(file)
modname = f"imported_module_{len(MODULE_FILES)}"
loader = importlib.machinery.SourceFileLoader(modname, file)
spec = importlib.util.spec_from_loader(loader.name, loader)
my_module = importlib.util.module_from_spec(spec)
loader.exec_module(my_module)
MODULE_FILES[file] = modname
KNOWN_MODULES[modname] = my_module
def _remove_source_file(file):
"""Remove a file from the list of known modules"""
file = os.path.abspath(file)
modname = None
try:
modname = MODULE_FILES.pop(file)
KNOWN_MODULES.pop(modname)
except KeyError as ex:
raise ValueError(f"File {file} had not been added as a module: {ex}")
@contextmanager
def with_source(file=None):
"""Add a file to the list of known modules, and remove it afterwards"""
if file:
_add_source_file(file)
try:
yield
finally:
if file:
_remove_source_file(file)
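# For example (MyAgent and my_agents.py are hypothetical):
#
#     with with_source("my_agents.py"):
#         cls = deserialize("MyAgent")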
def get_module(modname):
"""Get a module from the list of known modules"""
if modname not in KNOWN_MODULES or KNOWN_MODULES[modname] is None:
module = importlib.import_module(modname)
KNOWN_MODULES[modname] = module
return KNOWN_MODULES[modname]
def name(value, known_modules=KNOWN_MODULES):
"""Return a name that can be imported, to serialize/deserialize an object"""
if value is None:
return "None"
if not isinstance(value, type): # Get the class name first
value = type(value)
tname = value.__name__
if hasattr(builtins, tname):
return tname
modname = value.__module__
if modname == "__main__":
return tname
if known_modules and modname in known_modules:
return tname
for kmod in known_modules:
module = get_module(kmod)
if hasattr(module, tname):
return tname
return "{}.{}".format(modname, tname)
def serializer(type_):
if type_ != "str" and hasattr(builtins, type_):
return repr
return lambda x: x
def serialize(v, known_modules=KNOWN_MODULES):
"""Get a text representation of an object."""
tname = name(v, known_modules=known_modules)
func = serializer(tname)
return func(v), tname
def serialize_dict(d, known_modules=KNOWN_MODULES):
try:
d = dict(d)
except (ValueError, TypeError) as ex:
return serialize(d)[0]
for (k, v) in reversed(list(d.items())):
if isinstance(v, dict):
d[k] = serialize_dict(v, known_modules=known_modules)
elif isinstance(v, list):
for ix in range(len(v)):
v[ix] = serialize_dict(v[ix], known_modules=known_modules)
elif isinstance(v, type):
d[k] = serialize(v, known_modules=known_modules)[1]
return d
IS_CLASS = re.compile(r"<class '(.*)'>")
def deserializer(type_, known_modules=KNOWN_MODULES):
if type(type_) != str: # Already deserialized
return type_
if type_ == "str":
return lambda x="": x
if type_ == "None":
return lambda x=None: None
if hasattr(builtins, type_): # Check if it's a builtin type
cls = getattr(builtins, type_)
return lambda x=None: ast.literal_eval(x) if x is not None else cls()
match = IS_CLASS.match(type_)
if match:
modname, tname = match.group(1).rsplit(".", 1)
module = get_module(modname)
cls = getattr(module, tname)
return getattr(cls, "deserialize", cls)
# Otherwise, see if we can find the module and the class
options = []
for mod in known_modules:
if mod:
options.append((mod, type_))
if "." in type_: # Fully qualified module
module, type_ = type_.rsplit(".", 1)
options.append((module, type_))
errors = []
for modname, tname in options:
try:
module = get_module(modname)
cls = getattr(module, tname)
return getattr(cls, "deserialize", cls)
except (ImportError, AttributeError) as ex:
errors.append((modname, tname, ex))
raise ValueError('Could not find type "{}". Tried: {}'.format(type_, errors))
def deserialize(type_, value=None, globs=None, **kwargs):
"""Get an object from a text representation"""
if not isinstance(type_, str):
return type_
if globs and type_ in globs:
des = globs[type_]
else:
try:
des = deserializer(type_, **kwargs)
except ValueError as ex:
try:
des = eval(type_)
except Exception:
raise ex
if value is None:
return des
return des(value)
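# Round-trip sketch with a builtin value:
#
#     text, tname = serialize(2.5)            # -> ("2.5", "float")
#     assert deserialize(tname, text) == 2.5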
def deserialize_all(names, *args, known_modules=KNOWN_MODULES, **kwargs):
"""Return the list of deserialized objects"""
objects = []
for name in names:
mod = deserialize(name, known_modules=known_modules)
objects.append(mod(*args, **kwargs))
return objects

@@ -1 +0,0 @@
# General configuration

@@ -1,384 +0,0 @@
import os
from time import time as current_time, strftime
import sys
import yaml
import hashlib
import inspect
import logging
import networkx as nx
from tqdm.auto import tqdm
from textwrap import dedent
from dataclasses import dataclass, field, asdict, replace
from typing import Any, Dict, Union, Optional, List
from functools import partial
from contextlib import contextmanager
from itertools import product
import json
from . import serialization, exporters, utils, basestring, agents
from .environment import Environment
from .utils import logger, run_and_return_exceptions
from .debugging import set_trace
_AVOID_RUNNING = False
_QUEUED = []
@contextmanager
def do_not_run():
global _AVOID_RUNNING
_AVOID_RUNNING = True
try:
logger.debug("NOT RUNNING")
yield
finally:
logger.debug("RUNNING AGAIN")
_AVOID_RUNNING = False
def _iter_queued():
while _QUEUED:
(cls, params) = _QUEUED.pop(0)
yield replace(cls, parameters=params)
# TODO: change documentation for simulation
# TODO: rename iterations to iterations
# TODO: make parameters a dict of iterable/any
@dataclass
class Simulation:
"""
A simulation is a collection of agents and a model. It is responsible for running the model and agents, and collecting data from them.
Args:
version: The version of the simulation. This is used to determine how to load the simulation.
name: The name of the simulation.
description: A description of the simulation.
group: The group that the simulation belongs to.
model: The model to use for the simulation. This can be a string or a class.
parameters: The parameters to pass to the model.
matrix: A matrix of values for each parameter.
seed: The seed to use for the simulation.
dir_path: The directory path to use for the simulation.
max_time: The maximum time to run the simulation.
max_steps: The maximum number of steps to run the simulation.
interval: The interval to use for the simulation.
iterations: The number of iterations (times) to run the simulation.
num_processes: The number of processes to use for the simulation. If greater than one, simulations will be performed in parallel. This may make debugging and error handling difficult.
tables: The tables to use in the simulation datacollector
agent_reporters: The agent reporters to use in the datacollector
model_reporters: The model reporters to use in the datacollector
dry_run: Whether or not to run the simulation. If True, the simulation will not be run.
backup: Whether or not to backup the simulation. If True, the simulation files will be backed up to a different directory.
overwrite: Whether or not to replace existing simulation data.
source_file: Python file to use to find additional classes.
"""
version: str = "2"
source_file: Optional[str] = None
name: Optional[str] = None
description: Optional[str] = ""
group: str = None
backup: bool = False
overwrite: bool = False
dry_run: bool = False
dump: bool = False
model: Union[str, type] = "soil.Environment"
parameters: dict = field(default_factory=dict)
matrix: dict = field(default_factory=dict)
seed: str = "default"
dir_path: str = field(default_factory=lambda: os.getcwd())
max_time: float = None
max_steps: int = None
interval: int = 1
iterations: int = 1
num_processes: Optional[int] = 1
exporters: Optional[List[str]] = field(default_factory=lambda: [exporters.default])
model_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
agent_reporters: Optional[Dict[str, Any]] = field(default_factory=dict)
tables: Optional[Dict[str, Any]] = field(default_factory=dict)
outdir: str = field(default_factory=lambda: os.path.join(os.getcwd(), "soil_output"))
# outdir: Optional[str] = None
exporter_params: Optional[Dict[str, Any]] = field(default_factory=dict)
level: int = logging.INFO
skip_test: Optional[bool] = False
debug: Optional[bool] = False
def __post_init__(self):
if self.name is None:
if isinstance(self.model, str):
self.name = self.model
else:
self.name = self.model.__name__
self.logger = logger.getChild(self.name)
self.logger.setLevel(self.level)
if self.source_file and (not os.path.isabs(self.source_file)):
self.source_file = os.path.abspath(os.path.join(self.dir_path, self.source_file))
with serialization.with_source(self.source_file):
if isinstance(self.model, str):
self.model = serialization.deserialize(self.model)
def deserialize_reporters(reporters):
for (k, v) in reporters.items():
if isinstance(v, str) and v.startswith("py:"):
reporters[k] = serialization.deserialize(v.split(":", 1)[1])
return reporters
self.agent_reporters = deserialize_reporters(self.agent_reporters)
self.model_reporters = deserialize_reporters(self.model_reporters)
self.tables = deserialize_reporters(self.tables)
self.id = f"{self.name}_{current_time()}"
def run(self, **kwargs):
"""Run the simulation and return the list of resulting environments"""
if kwargs:
return replace(self, **kwargs).run()
self.logger.debug(
dedent(
"""
Simulation:
---
"""
)
+ self.to_yaml()
)
param_combinations = self._collect_params(**kwargs)
if _AVOID_RUNNING:
_QUEUED.extend((self, param) for param in param_combinations)
return []
self.logger.debug("Using exporters: %s", self.exporters or [])
exporters = serialization.deserialize_all(
self.exporters,
simulation=self,
known_modules=[
"soil.exporters",
],
dump=self.dump and not self.dry_run,
outdir=self.outdir,
**self.exporter_params,
)
results = []
for exporter in exporters:
exporter.sim_start()
for params in tqdm(param_combinations, desc=self.name, unit="configuration"):
for (k, v) in params.items():
tqdm.write(f"{k} = {v}")
sha = hashlib.sha256()
sha.update(repr(sorted(params.items())).encode())
params_id = sha.hexdigest()[:7]
for env in self._run_iters_for_params(params):
for exporter in exporters:
exporter.iteration_end(env, params, params_id)
results.append(env)
for exporter in exporters:
exporter.sim_end()
return results
def _collect_params(self):
parameters = []
if self.parameters:
parameters.append(self.parameters)
if self.matrix:
assert isinstance(self.matrix, dict)
for values in product(*(self.matrix.values())):
parameters.append(dict(zip(self.matrix.keys(), values)))
if not parameters:
parameters = [{}]
if self.dump:
self.logger.info("Output directory: %s", self.outdir)
return parameters
def _run_iters_for_params(
self,
params
):
"""Run the simulation and yield the resulting environments."""
with serialization.with_source(self.source_file):
with utils.timer(f"running for config {params}"):
if self.dry_run:
def func(*args, **kwargs):
return None
else:
func = self._run_model
for env in tqdm(utils.run_parallel(
func=func,
iterable=range(self.iterations),
**params,
), total=self.iterations, leave=False):
if env is None and self.dry_run:
continue
yield env
def _get_env(self, iteration_id, params):
"""Create an environment for a iteration of the simulation"""
iteration_id = str(iteration_id)
agent_reporters = self.agent_reporters
agent_reporters.update(params.pop("agent_reporters", {}))
model_reporters = self.model_reporters
model_reporters.update(params.pop("model_reporters", {}))
return self.model(
id=iteration_id,
seed=f"{self.seed}_iteration_{iteration_id}",
dir_path=self.dir_path,
interval=self.interval,
logger=self.logger.getChild(iteration_id),
agent_reporters=agent_reporters,
model_reporters=model_reporters,
tables=self.tables,
**params,
)
def _run_model(self, iteration_id, **params):
"""
Run a single iteration of the simulation
"""
# Set-up iteration environment and graph
model = self._get_env(iteration_id, params)
with utils.timer("Simulation {} iteration {}".format(self.name, iteration_id)):
max_time = self.max_time
max_steps = self.max_steps
if (max_time is not None) and (max_steps is not None):
is_done = lambda model: (not model.running) or (model.schedule.time >= max_time) or (model.schedule.steps >= max_steps)
elif max_time is not None:
is_done = lambda model: (not model.running) or (model.schedule.time >= max_time)
elif max_steps is not None:
is_done = lambda model: (not model.running) or (model.schedule.steps >= max_steps)
else:
is_done = lambda model: not model.running
if not model.schedule.agents:
raise Exception("No agents in model. This is probably a bug. Make sure that the model has agents scheduled after its initialization.")
newline = "\n"
self.logger.debug(
dedent(
f"""
Model stats:
Agent count: { model.schedule.get_agent_count() }
Topology size: { len(model.G) if hasattr(model, "G") else 0 }
"""
)
)
if self.debug:
set_trace()
while not is_done(model):
self.logger.debug(
f'Simulation time {model.schedule.time}/{max_time}.'
)
model.step()
return model
def to_dict(self):
d = asdict(self)
return serialization.serialize_dict(d)
def to_yaml(self):
return yaml.dump(self.to_dict())
def iter_from_file(*files, **kwargs):
for f in files:
try:
yield from iter_from_py(f, **kwargs)
except ValueError as ex:
yield from iter_from_config(f, **kwargs)
def from_file(*args, **kwargs):
return list(iter_from_file(*args, **kwargs))
def iter_from_config(*cfgs, **kwargs):
for config in cfgs:
configs = list(serialization.load_config(config))
for config, path in configs:
d = dict(config)
d.update(kwargs)
if "dir_path" not in d:
d["dir_path"] = os.path.dirname(path)
yield Simulation(**d)
def from_config(conf_or_path):
lst = list(iter_from_config(conf_or_path))
if len(lst) > 1:
raise AttributeError("Provide only one configuration")
return lst[0]
def iter_from_py(pyfile, module_name='imported_file', **kwargs):
"""Try to load every Simulation instance in a given Python file"""
import importlib
added = False
sims = []
assert not _AVOID_RUNNING
with do_not_run():
assert _AVOID_RUNNING
spec = importlib.util.spec_from_file_location(module_name, pyfile)
folder = os.path.dirname(pyfile)
if folder not in sys.path:
added = True
sys.path.append(folder)
if not spec:
raise ValueError(f"{pyfile} does not seem to be a Python module")
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
for (_name, sim) in inspect.getmembers(module, lambda x: isinstance(x, Simulation)):
sims.append(sim)
for sim in _iter_queued():
sims.append(sim)
if not sims:
for (_name, sim) in inspect.getmembers(module, lambda x: inspect.isclass(x) and issubclass(x, Simulation)):
sims.append(sim(**kwargs))
del sys.modules[module_name]
assert not _AVOID_RUNNING
if not sims:
raise AttributeError(f"No valid configurations found in {pyfile}")
if added:
sys.path.remove(folder)
for sim in sims:
yield replace(sim, **kwargs)
def from_py(pyfile):
return next(iter_from_py(pyfile))
def run_from_file(*files, **kwargs):
for sim in iter_from_file(*files):
logger.info(f"Using config(s): {sim.name}")
sim.run(**kwargs)
def run(env, iterations=1, num_processes=1, dump=False, name="test", **kwargs):
return Simulation(model=env, iterations=iterations, name=name, dump=dump, num_processes=num_processes, **kwargs).run()
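# A minimal sketch (MyModel is a hypothetical Environment subclass importable
# from the calling module):
#
#     sim = Simulation(model=MyModel,
#                      iterations=3,
#                      max_steps=100,
#                      matrix={"rate": [0.1, 0.5]})
#     envs = sim.run()   # 2 parameter combinations x 3 iterations = 6 environments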

@@ -1,213 +0,0 @@
from mesa.time import BaseScheduler
from queue import Empty
from heapq import heappush, heappop, heapreplace
import math
from inspect import getsource
from numbers import Number
from textwrap import dedent
from .utils import logger
from mesa import Agent as MesaAgent
INFINITY = float("inf")
class DeadAgent(Exception):
pass
class When:
def __init__(self, time):
if isinstance(time, When):
time = time._time
self._time = time
def abs(self, time):
return self._time
def schedule_next(self, time, delta, first=False):
return (self._time, None)
NEVER = When(INFINITY)
class Delta(When):
def __init__(self, delta):
self._delta = delta
def abs(self, time):
return time + self._delta
def __eq__(self, other):
if isinstance(other, Delta):
return self._delta == other._delta
return False
def schedule_next(self, time, delta, first=False):
return (time + self._delta, None)
def __repr__(self):
return str(f"Delta({self._delta})")
class BaseCond:
def __init__(self, msg=None, delta=None, eager=False):
self._msg = msg
self._delta = delta
self.eager = eager
def schedule_next(self, time, delta, first=False):
if first and self.eager:
return (time, self)
if self._delta:
delta = self._delta
return (time + delta, self)
def return_value(self, agent):
return None
def __repr__(self):
return self._msg or self.__class__.__name__
class Cond(BaseCond):
def __init__(self, func, *args, **kwargs):
self._func = func
super().__init__(*args, **kwargs)
def ready(self, agent, time):
return self._func(agent)
def __repr__(self):
if self._msg:
return self._msg
return str(f'Cond("{dedent(getsource(self._func)).strip()}")')
class TimedActivation(BaseScheduler):
"""A scheduler which activates each agent when the agent requests.
In each activation, each agent will update its 'next_time'.
"""
def __init__(self, *args, shuffle=True, **kwargs):
super().__init__(*args, **kwargs)
self._next = {}
self._queue = []
self._shuffle = shuffle
# self.step_interval = getattr(self.model, "interval", 1)
self.step_interval = self.model.interval
self.logger = getattr(self.model, "logger", logger).getChild(f"time_{ self.model }")
self.next_time = self.time
def add(self, agent: MesaAgent, when=None):
if when is None:
when = self.time
elif isinstance(when, When):
when = when.abs(self.time)
self._schedule(agent, None, when)
super().add(agent)
def _schedule(self, agent, condition=None, when=None, replace=False):
if condition:
if not when:
when, condition = condition.schedule_next(
when or self.time, self.step_interval
)
else:
if when is None:
when = self.time + self.step_interval
condition = None
if self._shuffle:
key = (when, self.model.random.random(), condition)
else:
key = (when, agent.unique_id, condition)
self._next[agent.unique_id] = key
if replace:
heapreplace(self._queue, (key, agent))
else:
heappush(self._queue, (key, agent))
def step(self) -> None:
"""
Executes agents in order, one at a time. After each step,
an agent will signal when it wants to be scheduled next.
"""
self.logger.debug(f"Simulation step {self.time}")
if not self.model.running or self.time == INFINITY:
return
self.logger.debug(f"Queue length: %s", len(self._queue))
while self._queue:
((when, _id, cond), agent) = self._queue[0]
if when > self.time:
break
if cond:
if not cond.ready(agent, self.time):
self._schedule(agent, cond, replace=True)
continue
try:
agent._last_return = cond.return_value(agent)
except Exception as ex:
agent._last_except = ex
else:
agent._last_return = None
agent._last_except = None
self.logger.debug("Stepping agent %s", agent)
self._next.pop(agent.unique_id, None)
try:
returned = agent.step()
except DeadAgent:
agent.alive = False
heappop(self._queue)
continue
# Check status for MESA agents
if not getattr(agent, "alive", True):
heappop(self._queue)
continue
if returned:
next_check = returned.schedule_next(
self.time, self.step_interval, first=True
)
self._schedule(agent, when=next_check[0], condition=next_check[1], replace=True)
else:
next_check = (self.time + self.step_interval, None)
self._schedule(agent, replace=True)
self.steps += 1
if not self._queue:
self.model.running = False
self.time = INFINITY
return
next_time = self._queue[0][0][0]
if next_time < self.time:
raise Exception(
f"An agent has been scheduled for a time in the past, there is probably an error ({when} < {self.time})"
)
self.logger.debug("Updating time step: %s -> %s ", self.time, next_time)
self.time = next_time
class ShuffledTimedActivation(TimedActivation):
def __init__(self, *args, **kwargs):
super().__init__(*args, shuffle=True, **kwargs)
class OrderedTimedActivation(TimedActivation):
def __init__(self, *args, **kwargs):
super().__init__(*args, shuffle=False, **kwargs)

@@ -1,160 +0,0 @@
import logging
from time import time as current_time, strftime, gmtime, localtime
import os
import traceback
from functools import partial
from shutil import copyfile, move
from multiprocessing import Pool, cpu_count
from contextlib import contextmanager
logger = logging.getLogger("soil")
logger.setLevel(logging.WARNING)
timeformat = "%H:%M:%S"
if os.environ.get("SOIL_VERBOSE", ""):
logformat = "[%(levelname)-5.5s][%(asctime)s][%(name)s]: %(message)s"
else:
logformat = "[%(levelname)-5.5s][%(asctime)s] %(message)s"
logFormatter = logging.Formatter(logformat, timeformat)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logging.basicConfig(
level=logging.INFO,
handlers=[
consoleHandler,
],
)
@contextmanager
def timer(name="task", pre="", function=logger.info, to_object=None):
start = current_time()
function("{}Starting {} at {}.".format(pre, name, strftime("%X", gmtime(start))))
yield start
end = current_time()
function(
"{}Finished {} at {} in {} seconds".format(
pre, name, strftime("%X", gmtime(end)), str(end - start)
)
)
if to_object:
to_object.start = start
to_object.end = end
def try_backup(path, remove=False):
if not os.path.exists(path):
return None
outdir = os.path.dirname(path)
if outdir and not os.path.exists(outdir):
os.makedirs(outdir)
creation = os.path.getctime(path)
stamp = strftime("%Y-%m-%d_%H.%M.%S", localtime(creation))
backup_dir = os.path.join(outdir, "backup")
if not os.path.exists(backup_dir):
os.makedirs(backup_dir)
newpath = os.path.join(backup_dir, "{}@{}".format(os.path.basename(path), stamp))
if remove:
move(path, newpath)
else:
copyfile(path, newpath)
return newpath
def safe_open(path, mode="r", backup=True, **kwargs):
outdir = os.path.dirname(path)
if outdir and not os.path.exists(outdir):
os.makedirs(outdir)
if backup and "w" in mode:
try_backup(path)
return open(path, mode=mode, **kwargs)
@contextmanager
def open_or_reuse(f, *args, **kwargs):
try:
with safe_open(f, *args, **kwargs) as f:
yield f
except (AttributeError, TypeError) as ex:
yield f
def flatten_dict(d):
if not isinstance(d, dict):
return d
return dict(_flatten_dict(d))
def _flatten_dict(d, prefix=""):
if not isinstance(d, dict):
# print('END:', prefix, d)
yield prefix, d
return
if prefix:
prefix = prefix + "."
for k, v in d.items():
# print(k, v)
res = list(_flatten_dict(v, prefix="{}{}".format(prefix, k)))
# print('RES:', res)
yield from res
def unflatten_dict(d):
out = {}
for k, v in d.items():
target = out
if not isinstance(k, str):
target[k] = v
continue
tokens = k.split(".")
if len(tokens) < 2:
target[k] = v
continue
for token in tokens[:-1]:
if token not in target:
target[token] = {}
target = target[token]
target[tokens[-1]] = v
return out
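# For example:
#
#     flatten_dict({"a": {"b": 1}, "c": 2})    # -> {"a.b": 1, "c": 2}
#     unflatten_dict({"a.b": 1, "c": 2})       # -> {"a": {"b": 1}, "c": 2}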
def run_and_return_exceptions(func, *args, **kwargs):
"""
A wrapper for a function that catches exceptions and returns them.
It is meant for async simulations.
"""
try:
return func(*args, **kwargs)
except Exception as ex:
if ex.__cause__ is not None:
ex = ex.__cause__
ex.message = "".join(
traceback.format_exception(type(ex), ex, ex.__traceback__)[:]
)
return ex
def run_parallel(func, iterable, num_processes=1, **kwargs):
if num_processes > 1 and not os.environ.get("SOIL_DEBUG", None):
if num_processes < 1:
num_processes = cpu_count() - num_processes
p = Pool(processes=num_processes)
wrapped_func = partial(run_and_return_exceptions, func, **kwargs)
for i in p.imap_unordered(wrapped_func, iterable):
if isinstance(i, Exception):
logger.error("Trial failed:\n\t%s", i.message)
continue
yield i
else:
for i in iterable:
yield func(i, **kwargs)
def int_seed(seed: str):
return int.from_bytes(seed.encode(), "little")

@@ -1,21 +0,0 @@
import os
import logging
logger = logging.getLogger(__name__)
ROOT = os.path.dirname(__file__)
DEFAULT_FILE = os.path.join(ROOT, "VERSION")
def read_version(versionfile=DEFAULT_FILE):
try:
with open(versionfile) as f:
return f.read().strip()
except IOError: # pragma: no cover
logger.error(
("Running an unknown version of {}." "Be careful!.").format(__name__)
)
return "0.0"
__version__ = read_version()

soil/web/.gitignore
@@ -1,4 +0,0 @@
__pycache__/
output/
tests/
soil_output/

@@ -1,59 +0,0 @@
# Graph Visualization with D3.js
The aim of this software is to provide a useful tool for visualising and analysing the results of different graph-based simulations. Once you run a simulation, you will be able to interact with it in real time.
For this purpose, a model that simulates the spread of information, intended to help understand how radicalism propagates in a society, is included. With all this, the main project goals can be divided into the following five:
* Simulate the spread of information through a network, applied to radicalism.
* Visualize the results of the simulation.
* Interact with the simulation in real time.
* Extract data from the results.
* Present the data in a form suitable for research.
## Deploying the server
To deploy the application, you only need to run the following command.
`python3 run.py [--name NAME] [--dump] [--port PORT] [--verbose]`
Where the options are detailed in the following table.
| Option | Description |
| --- | --- |
| `--name NAME` | The name of the simulation. It will appear on the app. |
| `--dump` | Dump the results on the server side. |
| `--port PORT` | The port where the server will listen. |
| `--verbose` | Verbose mode. |
> You can dump the results of the simulation on the server side. In any case, you will be able to download them in GEXF or JSON Graph format directly from the browser.
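For example, to launch the server on port 8001 in verbose mode (the simulation name shown here is just a label for the app):
```bash
python3 run.py --name "SOIL Demo" --port 8001 --verbose
```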
## Visualization Params
The configuration of the simulation follows the [SOIL](https://github.com/gsi-upm/soil) configuration syntax. To make visualising the results more comfortable, additional params can be added to the `visualization_params` dictionary.
* To set a background image, use the `background_image` tag. You can also add a `background_opacity` and a `background_filter_color` if the image is so light that the nodes are hard to see.
* Node colors can be set based on the values of their properties. Under the `colors` tag, indicate the attribute key and value, and then the color you want to apply.
* The shape applied to a group of nodes is always the same, i.e. it will not change dynamically. Indicate the property with the `shape_property` tag and add a dictionary called `shapes` in which you indicate the shape for each value.
All shapes have to be downloaded beforehand in SVG format and added to the server.
An example of this configuration applied to the TerroristNetworkModel is shown below.
```yaml
visualization_params:
# Icons downloaded from https://www.iconfinder.com/
shape_property: agent
shapes:
TrainingAreaModel: target
HavenModel: home
TerroristNetworkModel: person
colors:
- attr_id: 0
color: '#40de40'
- attr_id: 1
color: red
- attr_id: 2
color: '#c16a6a'
background_image: 'map_4800x2860.jpg'
background_opacity: '0.9'
background_filter_color: 'blue'
```

@@ -1,334 +0,0 @@
import io
import threading
import asyncio
import logging
import networkx as nx
import os
import sys
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.escape
import tornado.gen
import yaml
import webbrowser
from contextlib import contextmanager
from time import sleep
from xml.etree.ElementTree import tostring
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor
from ..simulation import Simulation
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
ROOT = os.path.abspath(os.path.dirname(__file__))
MAX_WORKERS = 4
LOGGING_INTERVAL = 0.5
# Workaround to let Soil load the required modules
sys.path.append(ROOT)
class PageHandler(tornado.web.RequestHandler):
"""Handler for the HTML template which holds the visualization."""
def get(self):
self.render(
"index.html", port=self.application.port, name=self.application.name
)
class SocketHandler(tornado.websocket.WebSocketHandler):
"""Handler for websocket."""
executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)
def open(self):
if self.application.verbose:
logger.info("Socket opened!")
def check_origin(self, origin):
return True
def on_message(self, message):
"""Receiving a message from the websocket, parse, and act accordingly."""
msg = tornado.escape.json_decode(message)
if msg["type"] == "config_file":
if self.application.verbose:
print(msg["data"])
self.config = list(yaml.load_all(msg["data"]))
if len(self.config) > 1:
error = "Please, provide only one configuration."
if self.application.verbose:
logger.error(error)
self.write_message({"type": "error", "error": error})
return
self.config = self.config[0]
self.send_log(
"INFO." + self.simulation_name,
"Using config: {name}".format(name=self.config["name"]),
)
if "visualization_params" in self.config:
self.write_message(
{
"type": "visualization_params",
"data": self.config["visualization_params"],
}
)
self.name = self.config["name"]
self.run_simulation()
settings = []
for key in self.config["environment_params"]:
if (
type(self.config["environment_params"][key]) == float
or type(self.config["environment_params"][key]) == int
):
if self.config["environment_params"][key] <= 1:
setting_type = "number"
else:
setting_type = "great_number"
elif type(self.config["environment_params"][key]) == bool:
setting_type = "boolean"
else:
setting_type = "undefined"
settings.append(
{
"label": key,
"type": setting_type,
"value": self.config["environment_params"][key],
}
)
self.write_message({"type": "settings", "data": settings})
elif msg["type"] == "get_trial":
if self.application.verbose:
logger.info("Trial {} requested!".format(msg["data"]))
self.send_log("INFO." + __name__, "Trial {} requested!".format(msg["data"]))
self.write_message(
{"type": "get_trial", "data": self.get_trial(int(msg["data"]))}
)
elif msg["type"] == "run_simulation":
if self.application.verbose:
logger.info(
"Running new simulation for {name}".format(name=self.config["name"])
)
self.send_log(
"INFO." + self.simulation_name,
"Running new simulation for {name}".format(name=self.config["name"]),
)
self.config["environment_params"] = msg["data"]
self.run_simulation()
elif msg["type"] == "download_gexf":
G = self.trials[int(msg["data"])].history_to_graph()
for node in G.nodes():
if "pos" in G.nodes[node]:
G.nodes[node]["viz"] = {
"position": {
"x": G.nodes[node]["pos"][0],
"y": G.nodes[node]["pos"][1],
"z": 0.0,
}
}
del G.nodes[node]["pos"]
writer = nx.readwrite.gexf.GEXFWriter(version="1.2draft")
writer.add_graph(G)
self.write_message(
{
"type": "download_gexf",
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
"data": tostring(writer.xml).decode(writer.encoding),
}
)
elif msg["type"] == "download_json":
G = self.trials[int(msg["data"])].history_to_graph()
for node in G.nodes():
if "pos" in G.nodes[node]:
G.nodes[node]["viz"] = {
"position": {
"x": G.nodes[node]["pos"][0],
"y": G.nodes[node]["pos"][1],
"z": 0.0,
}
}
del G.nodes[node]["pos"]
self.write_message(
{
"type": "download_json",
"filename": self.config["name"] + "_trial_" + str(msg["data"]),
"data": nx.node_link_data(G),
}
)
else:
if self.application.verbose:
logger.info("Unexpected message!")
def update_logging(self):
try:
if (
not self.log_capture_string.closed
and self.log_capture_string.getvalue()
):
for i in range(len(self.log_capture_string.getvalue().split("\n")) - 1):
self.send_log(
"INFO." + self.simulation_name,
self.log_capture_string.getvalue().split("\n")[i],
)
self.log_capture_string.truncate(0)
self.log_capture_string.seek(0)
finally:
if self.capture_logging:
tornado.ioloop.IOLoop.current().call_later(
LOGGING_INTERVAL, self.update_logging
)
def on_close(self):
if self.application.verbose:
logger.info("Socket closed!")
def send_log(self, logger, logging):
self.write_message({"type": "log", "logger": logger, "logging": logging})
@property
def simulation_name(self):
return self.config.get("name", "NoSimulationRunning")
@run_on_executor
def nonblocking(self, config):
simulation = Simulation(**config)
return simulation.run()
@tornado.gen.coroutine
def run_simulation(self):
# Run simulation and capture logs
logger.info("Running simulation!")
if "visualization_params" in self.config:
del self.config["visualization_params"]
with self.logging(self.simulation_name):
try:
config = dict(**self.config)
config["outdir"] = os.path.join(self.application.outdir, config["name"])
config["dump"] = self.application.dump
self.trials = yield self.nonblocking(config)
self.write_message(
{
"type": "trials",
"data": list(trial.name for trial in self.trials),
}
)
except Exception as ex:
error = "Something went wrong:\n\t{}".format(ex)
logging.info(error)
self.write_message({"type": "error", "error": error})
self.send_log("ERROR." + self.simulation_name, error)
def get_trial(self, trial):
logger.info("Available trials: %s " % len(self.trials))
logger.info("Ask for : %s" % trial)
trial = self.trials[trial]
G = trial.history_to_graph()
return nx.node_link_data(G)
@contextmanager
def logging(self, logger):
self.capture_logging = True
self.logger_application = logging.getLogger(logger)
self.log_capture_string = io.StringIO()
ch = logging.StreamHandler(self.log_capture_string)
self.logger_application.addHandler(ch)
self.update_logging()
yield self.capture_logging
sleep(0.2)
self.log_capture_string.close()
self.logger_application.removeHandler(ch)
self.capture_logging = False
return self.capture_logging
class ModularServer(tornado.web.Application):
"""Main visualization application."""
port = 8001
page_handler = (r"/", PageHandler)
socket_handler = (r"/ws", SocketHandler)
static_handler = (
r"/(.*)",
tornado.web.StaticFileHandler,
{"path": os.path.join(ROOT, "static")},
)
local_handler = (r"/local/(.*)", tornado.web.StaticFileHandler, {"path": ""})
handlers = [page_handler, socket_handler, static_handler, local_handler]
settings = {"debug": True, "template_path": ROOT + "/templates"}
def __init__(
self, dump=False, outdir="output", name="SOIL", verbose=True, *args, **kwargs
):
self.verbose = verbose
self.name = name
self.dump = dump
self.outdir = outdir
# Initializing the application itself:
super().__init__(self.handlers, **self.settings)
def launch(self, port=None):
"""Run the app."""
if port is not None:
self.port = port
url = "http://127.0.0.1:{PORT}".format(PORT=self.port)
print("Interface starting at {url}".format(url=url))
self.listen(self.port)
# webbrowser.open(url)
tornado.ioloop.IOLoop.instance().start()
def run(*args, **kwargs):
asyncio.set_event_loop(asyncio.new_event_loop())
server = ModularServer(*args, **kwargs)
server.launch()
def main():
import argparse
parser = argparse.ArgumentParser(description="Visualization of a Graph Model")
parser.add_argument(
"--name", "-n", nargs=1, default="SOIL", help="name of the simulation"
)
parser.add_argument(
"--dump", "-d", help="dumping results in folder output", action="store_true"
)
parser.add_argument(
"--port", "-p", nargs=1, default=8001, help="port for launching the server"
)
parser.add_argument("--verbose", "-v", help="verbose mode", action="store_true")
args = parser.parse_args()
run(
name=(args.name[0] if isinstance(args.name, list) else args.name),
port=(args.port[0] if isinstance(args.port, list) else args.port),
dump=args.dump,
verbose=args.verbose,
)

@@ -1,5 +0,0 @@
from . import main
if __name__ == "__main__":
main()

Some files were not shown because too many files have changed in this diff.