Compare commits: be65592055 ... TFG-David

2 commits:
- c73503d9f6
- de67fe3e74
@@ -1,5 +0,0 @@
**/soil_output
.*
**/__pycache__
__pycache__
*.pyc
.gitignore (vendored, 1 change)
@@ -8,4 +8,3 @@ soil_output
docs/_build*
build/*
dist/*
prof
@@ -1,53 +0,0 @@
stages:
  - test
  - publish
  - check_published

docker:
  stage: publish
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - docker
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # The skip-tls-verify flag is there because our registry certificate is self signed
    - /kaniko/executor --context $CI_PROJECT_DIR --skip-tls-verify --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

test:
  tags:
    - docker
  image: python:3.8
  stage: test
  script:
    - pip install -r requirements.txt -r test-requirements.txt
    - python setup.py test

push_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.8
  stage: publish
  script:
    - echo $CI_COMMIT_TAG > soil/VERSION
    - pip install twine
    - python setup.py sdist bdist_wheel
    - TWINE_PASSWORD=$PYPI_PASSWORD TWINE_USERNAME=$PYPI_USERNAME python -m twine upload dist/*

check_pypi:
  only:
    - tags
  tags:
    - docker
  image: python:3.8
  stage: check_published
  script:
    - pip install soil==$CI_COMMIT_TAG
  # Allow PYPI to update its index before we try to install
  when: delayed
  start_in: 2 minutes
CHANGELOG.md (177 changes)
@@ -1,177 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.30 UNRELEASED]
### Added
* Simple debugging capabilities in `soil.debugging`, with a custom `pdb.Debugger` subclass that exposes commands to list agents and their status and set breakpoints on states (for FSM agents). Try it with `soil --debug <simulation file>`
* Ability to run mesa simulations
* The `soil.exporters` module to export the results of datacollectors (`model.datacollector`) into files at the end of trials/simulations
* A modular set of classes for environments/models. The ability to configure the agents through an agent definition and a topology through a network configuration is now split into two classes (`soil.agents.BaseEnvironment` for agents, `soil.agents.NetworkEnvironment` to add topology).
* FSM agents can now have generators as states. They work similarly to normal states, with one caveat: only `time` values can be yielded, not a state. This is because the state will not change; it will be resumed after the yield, at the appropriate time. The return value *can* be a state, or a `(state, time)` tuple, just like in normal states (a sketch follows this changelog).
### Changed
* Configuration schema is greatly simplified
### Removed
* Any `tsih` and `History` integration in the main classes. To record the state of environments/agents, just use a datacollector. In some cases this may be slower or consume more memory than the previous system. However, few cases actually used the full potential of the history, and it came at the cost of unnecessary complexity and worse performance for the majority of cases.

## [0.20.7]
### Changed
* Creating a `time.When` from another `time.When` does not nest them anymore (it returns the argument)
### Fixed
* Bug with `time.NEVER`/`time.INFINITY`

## [0.20.6]
### Fixed
* Agents now return `time.INFINITY` when dead, instead of 'inf'
* `soil.__init__` does not re-export the built-in `time` module (change in `soil.simulation`). It used to create subtle import conflicts when importing `soil.time`.
* Parallel simulations were broken because lambdas cannot be pickled properly, which is needed for multiprocessing.
### Changed
* Some internal simulation methods do not accept `*args` anymore, to avoid ambiguity and bugs.

## [0.20.5]
### Changed
* Defaults are now set in the agent `__init__`, not in the environment. This decouples both classes a bit more, and it is more intuitive.

## [0.20.4]
### Added
* Agents can now be given any kwargs, which will be used to set their state
* Environments have a default logger `self.logger` and a `log` method, just like agents

## [0.20.3]
### Fixed
* Default state values are now deepcopied again.
* Seeds for environments only concatenate the trial id (i.e., a number), to provide repeatable results.
* `Environment.run` now calls `Environment.step`, to allow for easy overloading of the environment step
### Removed
* Datacollectors are not being used for now.
* `time.TimedActivation.step` does not use an `until` parameter anymore.
### Changed
* Simulations now run right up to `until` (open interval)
* Time instants (`time.When`) don't need to be floats anymore. Now we can avoid precision issues with big numbers by using ints.
* Rabbits simulation is more idiomatic (using subclasses)

## [0.20.2]
### Fixed
* CI/CD testing issues

## [0.20.1]
### Fixed
* Agents would run another step after dying.

## [0.20.0]
### Added
* Integration with MESA
* `not_agent_ids` parameter to get sql in history
### Changed
* `soil.Environment` now also inherits from `mesa.Model`
* `soil.Agent` now also inherits from `mesa.Agent`
* `soil.time` to replace `simpy` events, delays, duration, etc.
* `agent.id` is now `agent.unique_id`, to be compatible with `mesa`. A property `BaseAgent.id` has been added for compatibility.
* `agent.environment` is now `agent.model`, for the same reason as above. The parameter name in `BaseAgent.__init__` has also been renamed.
### Removed
* `simpy` dependency and compatibility. Each agent used to be a simpy generator, but that made debugging and error handling more complex. That has been replaced by a scheduler within the `soil.Environment` class, similar to how `mesa` does it.
* `soil.history` is now a separate package named `tsih`. The keys namedtuple uses `dict_id` instead of `agent_id`.
### Added
* An option to choose whether a database should be used for history

## [0.15.2]
### Fixed
* Pass the right known_modules and parameters to stats discovery in simulation
* The configuration file must exist when launching through the CLI. If it doesn't, an error will be logged
* Minor changes in the documentation of the CLI arguments
### Changed
* Stats are now exported by default

## [0.15.1]
### Added
* Read-only `History`
### Fixed
* Serialization problem with the `Environment` in parallel mode.
* Analysis functions now work as they should in the tutorial

## [0.15.0]
### Added
* Control logging level in CLI and simulation
* `Stats` to calculate trial and simulation-wide statistics
* Simulation statistics are stored in a separate table in history (see `History.get_stats` and `History.save_stats`, as well as `soil.stats`)
* Aliased `NetworkAgent.G` to `NetworkAgent.topology`.
### Changed
* Templates in config files can be given as dictionaries in addition to strings
* Samplers are used more explicitly
* Removed nxsim dependency. We had already made a lot of changes, and nxsim has not been updated in 5 years.
* Exporter methods renamed to `trial` and `end`. Added `start`.
* `Distribution` exporter is now a stats class
* `global_topology` renamed to `topology`
* Moved topology-related methods to `NetworkAgent`
### Fixed
* Temporary files used for history in dry_run mode are no longer left open

## [0.14.9]
### Changed
* Seed random before environment initialization

## [0.14.8]
### Fixed
* Invalid directory names in Windows gsi-upm/soil#5

## [0.14.7]
### Changed
* Minor change to traceback handling in async simulations
### Fixed
* Incomplete example in the docs (example.yml) caused an exception

## [0.14.6]
### Fixed
* Bug with newer versions of networkx (2.4) where the `Graph.node` attribute has been removed. We have updated our calls, but the code in nxsim is not under our control, so we have pinned the networkx version until that issue is solved.
### Changed
* Explicit `yaml.SafeLoader` to avoid deprecation warnings when using `yaml.load`. It should not break any existing setups, but we could move to the FullLoader in the future if needed.

## [0.14.4]
### Fixed
* Bug in `agent.get_agents()` when `state_id` is passed as a string. The tests have been modified accordingly.

## [0.14.3]
### Fixed
* Incompatibility with py3.3-3.6 due to ModuleNotFoundError and TypeError in DryRunner

## [0.14.2]
### Fixed
* Output path for exporters is now soil_output
### Changed
* CSV output to stdout in dry_run mode

## [0.14.1]
### Changed
* Exporter names in lower case
* Add default exporter in runs

## [0.14.0]
### Added
* Loading configuration from template definitions in the yaml, in preparation for SALib support.
  The definition of the variables and their possible values (i.e., a problem in SALib terms), as well as a sampler function, can be provided.
  Soil uses this definition and the template to generate a set of configurations.
* Simulation group names, to link related simulations. For now, they are only used to group all simulations in the same group under the same folder.
* Exporters unify exporting/dumping results and other files to disk. If `dry_run` is set to `True`, exporters will write to stdout instead of a file (useful for testing/debugging).
* Distribution exporter, to write statistics about values and value_counts in every simulation. The results are dumped to two CSV files.

### Changed
* `dir_path` is now the directory for resources (modules, files)
* Environments and simulations do not export or write anything by default. That task is delegated to Exporters

### Removed
* The output dir for environments and simulations (see Exporters)
* DrawingAgent, because it wrote to disk and was not being used. We provide a partial alternative in the form of the GraphDrawing exporter. A complete alternative will be provided once the network at each state can be accessed by exporters.

### Fixed
* Modules with custom agents/environments failed to load when they were run from outside the directory of the definition file. Modules are now loaded from the directory of the simulation file in addition to the working directory
* Memory databases (in history) can now be shared between threads.
* Testing all examples, not just subdirectories

## [0.13.8]
### Changed
* Moved TerroristNetworkModel to examples
### Added
* `get_agents` and `count_agents` methods now accept lists as inputs. They can be used to retrieve agents from node ids
* `subgraph` in BaseAgent
* `agents.select` method, to filter out agents
* `skip_test` property in yaml definitions, to force skipping some examples
* `agents.Geo`, with a search function based on position
* `BaseAgent.ego_search` to get nodes from the ego network of a node
* `BaseAgent.degree` and `BaseAgent.betweenness`
### Fixed

## [0.13.7]
### Changed
* History now defaults to not backing up! This makes it more intuitive to load the history for examination, at the expense of possibly overwriting something. That should not happen, because History is only created in the Environment, and that has `backup=True`.
### Added
* Agent names are assigned based on their agent types
* Agent logging uses the agent name.
* FSM agents can now return a timeout in addition to a new state. E.g., `return self.idle, self.env.timeout(2)` will execute the *idle* state in 2 *units of time* (`t_step=now+2`).
* Example of using timeouts in FSM (custom_timeouts)
* `network_agents` entries may include an `ids` entry. If set, it should be a list of node ids that should be assigned that agent type. This complements the previous behavior of setting agent type with `weights`.
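The generator-state feature described in the 0.30 changelog entry above can be illustrated with a short sketch. It is not part of the diff: the API names (`soil.agents.FSM`, `@state`, `@default_state`, `soil.time.Delta`, `self.log`) are taken from the FSM examples further down in this comparison, and the exact semantics are assumptions based on the entry's wording.

```python
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Waiter(FSM):
    @default_state
    @state
    def warming_up(self):
        # Yielding a time value pauses this state; it resumes here later.
        yield Delta(1)
        self.log("resumed after one unit of time")
        yield Delta(2)
        # The return value may be a state, or a (state, time) tuple,
        # just like in regular (non-generator) states.
        return self.ready, Delta(5)

    @state
    def ready(self):
        self.log("ready")
```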
Dockerfile (12 changes)
@@ -1,12 +0,0 @@
FROM python:3.7

WORKDIR /usr/src/app

COPY test-requirements.txt requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r test-requirements.txt -r requirements.txt

COPY ./ /usr/src/app

RUN pip install '.[web]'

ENTRYPOINT ["python", "-m", "soil"]
@@ -1,7 +0,0 @@
include requirements.txt
include test-requirements.txt
include README.rst
graft soil
global-exclude __pycache__
global-exclude soil_output
global-exclude *.py[co]
Makefile (7 changes)
@@ -1,7 +0,0 @@
quick-test:
	docker-compose exec dev python -m pytest -s -v

test:
	docker run -t -v $$PWD:/usr/src/app -w /usr/src/app python:3.7 python setup.py test

.PHONY: test
README.md (90 changes, mode changed: normal file → executable file)
@@ -1,92 +1,12 @@
# [SOIL](https://github.com/gsi-upm/soil)
#[Soil](https://github.com/gsi-upm/soil)

The purpose of Soil (SOcial network sImuLator) is providing an Agent-based Social Simulator written in Python for Social Networks.

Soil is an extensible and user-friendly Agent-based Social Simulator for Social Networks.
Learn how to run your own simulations with our [documentation](http://soilsim.readthedocs.io).

Follow our [tutorial](examples/tutorial/soil_tutorial.ipynb) to develop your own agent models.

> **Warning**
> Soil 0.30 introduced many fundamental changes. Check the [documentation on how to update your simulations to work with newer versions](docs/notes_v0.30.rst)

## Features

* Integration with (social) networks (through `networkx`)
* Convenience functions and methods to easily assign agents to your model (and optionally to its network):
  * Following a given distribution (e.g., 2 agents of type `Foo`, 10% of the network should be agents of type `Bar`)
  * Based on the topology of the network
* **Several types of abstractions for agents**:
  * Finite state machine, where methods can be turned into a state
  * Network agents, which have convenience methods to access the model's topology
  * Generator-based agents, whose state is paused through a `yield` and resumed on the next step
* **Reporting and data collection**:
  * Soil models include data collection and record some data by default (# of agents, state of each agent, etc.)
  * All data collected are exported by default to a SQLite database and a description file
  * Options to export to other formats, such as CSV, or defining your own exporters
  * A summary of the data collected is shown in the command line, for easy inspection
* **An event-based scheduler**
  * Agents can be explicit about when their next time/step should be, and not all agents run in every step. This avoids unnecessary computation.
  * Time intervals between each step are flexible.
  * There are primitives to specify when the next execution of an agent should happen (or under which conditions)
* **Actor-inspired** message-passing
* A simulation runner (`soil.Simulation`) that can:
  * Run models in parallel
  * Save results to different formats
* Simulation configuration files
* A command line interface (`soil`), to quickly run simulations with different parameters
* An integrated debugger (`soil --debug`) with custom functions to print agent states and break at specific states

## Mesa compatibility

SOIL has been redesigned to integrate well with [Mesa](https://github.com/projectmesa/mesa).
For instance, it should be possible to run `mesa.Model` models using a `soil.Simulation` and the `soil` CLI, or to integrate the `soil.TimedActivation` scheduler into a `mesa.Model`.

Note that some combinations of `mesa` and `soil` components, while technically possible, are much less useful or might yield surprising results.
For instance, you may add any `soil.agent` agent to a regular `mesa.Model` with a vanilla scheduler from `mesa.time`.
But in that case the agents will not get any of the advanced event-based scheduling, and most agent behaviors that depend on it may not work.
A hedged sketch of this integration is included right after this README.

## Changes in version 0.3

Version 0.3 came packed with many changes to provide much better integration with MESA.
For a long time, we tried to keep soil backwards-compatible, but it turned out to be a big endeavour and the resulting code was less readable.
This translates to harder maintenance and a worse experience for newcomers.
In the end, we decided to make some breaking changes.

If you have an older Soil simulation, you have two options:

* Update the necessary configuration files and code. You may use the examples in the `examples` folder for reference, as well as the documentation.
* Keep using a previous `soil` version.
To see quickly how to use Soil, you can follow this [tutorial](https://github.com/gsi-upm/soil/blob/master/soil_tutorial.ipynb).

## Citation

If you use Soil in your research, don't forget to cite this paper:

```bibtex
@inbook{soil-gsi-conference-2017,
    author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
    booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
    doi = "10.1007/978-3-319-59930-4_19",
    editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
    isbn = "978-3-319-59929-8",
    keywords = "soil;social networks;agent based social simulation;python",
    month = "June",
    organization = "PAAMS 2017",
    pages = "234-245",
    publisher = "Springer Verlag",
    series = "LNAI",
    title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
    url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
    volume = "10349",
    year = "2017",
}
```

@Copyright GSI - Universidad Politécnica de Madrid 2017-2021

[](https://www.gsi.upm.es)
@Copyright GSI - Universidad Politécnica de Madrid 2017
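As a hedged illustration of the Mesa compatibility described in the README above, the following sketch runs a plain `mesa.Model` through `soil.Simulation`. It is not part of the diff: the model and agent names are hypothetical, and the `Simulation` arguments mirror the `examples/mesa` runner shown later in this comparison. Note that the compatibility checklist in this comparison still lists "allow `mesa.Model` to be used in a simulation" as pending, so this is illustrative rather than guaranteed to run against any particular Soil version.

```python
from mesa import Agent, Model
from mesa.time import RandomActivation
from soil import Simulation


class Walker(Agent):
    """A trivial mesa agent that only counts its own steps."""
    def step(self):
        self.steps_taken = getattr(self, "steps_taken", 0) + 1


class PlainMesaModel(Model):
    """A vanilla mesa model with a RandomActivation scheduler."""
    def __init__(self, n_agents=5):
        super().__init__()
        self.schedule = RandomActivation(self)
        for i in range(n_agents):
            self.schedule.add(Walker(i, self))

    def step(self):
        self.schedule.step()


# Driving the model with soil.Simulation, following the pattern of
# `Simulation(model=MoneyEnv, model_params=...)` used in examples/mesa.
sim = Simulation(name="plain_mesa", model=PlainMesaModel,
                 model_params=dict(n_agents=5), max_steps=10, dump=False)

if __name__ == "__main__":
    sim.run()
```

As the README notes, agents scheduled this way do not get Soil's event-based scheduling; they simply run once per step.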
BIN: TerroristModel.png (new file, 24 KiB)
BIN: TerroristModel_type.png (new file, 16 KiB)
BIN: clase_base.pyc (new executable file)
@@ -1,12 +0,0 @@
version: '3'
services:
  dev:
    build: .
    environment:
      PYTHONDONTWRITEBYTECODE: 1
    volumes:
      - .:/usr/src/app
    tty: true
    entrypoint: /bin/bash
    ports:
      - '8001:8001'
docs/Makefile (0 changes, mode changed: normal file → executable file)
docs/conf.py (4 changes, mode changed: normal file → executable file)
@@ -31,7 +31,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['IPython.sphinxext.ipython_console_highlighting']
extensions = []

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

@@ -69,7 +69,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '**.ipynb_checkpoints']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
@@ -1,40 +0,0 @@
---
name: MyExampleSimulation
max_time: 50
num_trials: 3
interval: 2
model_params:
  topology:
    params:
      generator: barabasi_albert_graph
      n: 100
      m: 2
  agents:
    distribution:
      - agent_class: SISaModel
        topology: True
        ratio: 0.1
        state:
          state_id: content
      - agent_class: SISaModel
        topology: True
        ratio: .1
        state:
          state_id: discontent
      - agent_class: SISaModel
        topology: True
        ratio: 0.8
        state:
          state_id: neutral
  prob_infect: 0.075
  neutral_discontent_spon_prob: 0.1
  neutral_discontent_infected_prob: 0.3
  neutral_content_spon_prob: 0.3
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.5
  discontent_content: 0.5
  variance_d_c: 0.2
  content_discontent: 0.2
  variance_c_d: 0.2
  content_neutral: 0.2
  standard_variance: 1
docs/index.rst (40 changes, mode changed: normal file → executable file)
@@ -1,43 +1,21 @@
.. Soil documentation master file, created by
   sphinx-quickstart on Tue Apr 25 12:48:56 2017.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Soil's documentation!
================================

Soil is an Agent-based Social Simulator in Python focused on Social Networks.

If you use Soil in your research, do not forget to cite this paper:

.. code:: bibtex

    @inbook{soil-gsi-conference-2017,
        author = "S{\'a}nchez, Jes{\'u}s M. and Iglesias, Carlos A. and S{\'a}nchez-Rada, J. Fernando",
        booktitle = "Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection",
        doi = "10.1007/978-3-319-59930-4_19",
        editor = "Demazeau Y., Davidsson P., Bajo J., Vale Z.",
        isbn = "978-3-319-59929-8",
        keywords = "soil;social networks;agent based social simulation;python",
        month = "June",
        organization = "PAAMS 2017",
        pages = "234-245",
        publisher = "Springer Verlag",
        series = "LNAI",
        title = "{S}oil: {A}n {A}gent-{B}ased {S}ocial {S}imulator in {P}ython for {M}odelling and {S}imulation of {S}ocial {N}etworks",
        url = "https://link.springer.com/chapter/10.1007/978-3-319-59930-4_19",
        volume = "10349",
        year = "2017",
    }


Soil is an Agent-based Social Simulator in Python for modelling and simulation of Social Networks.

.. toctree::
   :maxdepth: 0
   :maxdepth: 2
   :caption: Learn more about soil:

   installation
   quickstart
   configuration
   Tutorial <soil_tutorial>
   usage
   models

..

.. Indices and tables
docs/installation.rst (25 changes, mode changed: normal file → executable file)
@@ -1,28 +1,7 @@
Installation
------------

The easiest way to install Soil is through pip, with Python >= 3.4:
The latest version can be installed through GitLab.

.. code:: bash

   pip install soil


Now test that it worked by running the command line tool

.. code:: bash

   soil --help

   # or
   python -m soil --help

Or, if you're using soil programmatically:

.. code:: python

   import soil
   print(soil.__version__)

The latest version can be installed through `GitHub <https://github.com/gsi-upm/soil>`_ or `GitLab <https://lab.gsi.upm.es/soil/soil.git>`_.
git clone https://lab.cluster.gsi.dit.upm.es/soil/soil.git
docs/make.bat (2 changes, mode changed: normal file → executable file)
@@ -12,7 +12,7 @@ set BUILDDIR=_build
set SPHINXPROJ=Soil

if "%1" == "" goto help
eE

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
@@ -1,22 +0,0 @@
Mesa compatibility
------------------

Soil is in the process of becoming fully compatible with MESA.
The idea is to provide a set of modular classes and functions that extend the functionality of mesa, whilst staying compatible.
In the end, it should be possible to add regular mesa agents to a soil simulation, or use a soil agent within a mesa simulation/model.

This is a non-exhaustive list of tasks to achieve compatibility:

- [ ] Integrate `soil.Simulation` with mesa's runners:
  - [ ] `soil.Simulation` could mimic/become a `mesa.batchrunner`
- [ ] Integrate `soil.Environment` with `mesa.Model`:
  - [x] `Soil.Environment` inherits from `mesa.Model`
  - [x] `Soil.Environment` includes a Mesa-like scheduler (see the `soil.time` module)
  - [ ] Allow for `mesa.Model` to be used in a simulation.
- [ ] Integrate `soil.Agent` with `mesa.Agent`:
  - [x] Rename agent.id to unique_id
  - [x] mesa agents can be used in soil simulations (see `examples/mesa`)
- [ ] Provide examples
  - [ ] Using mesa modules in a soil simulation
  - [ ] Using soil modules in a mesa simulation
- [ ] Document the new APIs and usage
docs/models.rst (112 changes, new executable file)
@@ -0,0 +1,112 @@
Developing new models
---------------------
This document describes how to develop a new analysis model.

What is a model?
================

A model defines the behaviour of the agents with a view to assessing their effects on the system as a whole.
In practice, a model consists of at least two parts:

* Python module: the actual code that describes the behaviour.
* Setting up the variables in the Settings JSON file.

This separation allows us to run the simulation with different agents.

Models Code
===========

All the models are imported into the main file. The initialization looks like this:

.. code:: python

    import settings

    networkStatus = {}  # Dict that will contain the status of every agent in the network

    sentimentCorrelationNodeArray = []
    for x in range(0, settings.network_params["number_of_nodes"]):
        sentimentCorrelationNodeArray.append({'id': x})
    # Initialize agent states. Let's assume everyone is normal.
    init_states = [{'id': 0, } for _ in range(settings.network_params["number_of_nodes"])]
    # add keys as necessary, but "id" must always refer to that state category

A new model has to inherit from the BaseBehaviour class, which is in the same module.
There are two basic methods:

* __init__
* step: used to define the behaviour over time.

Variable Initialization
=======================

The different parameters of the model have to be initialized in the Simulation Settings JSON file, which will be
passed as a parameter to the simulation.

.. code:: json

    {
        "agent": ["SISaModel", "ControlModelM2"],

        "neutral_discontent_spon_prob": 0.04,
        "neutral_discontent_infected_prob": 0.04,
        "neutral_content_spon_prob": 0.18,
        "neutral_content_infected_prob": 0.02,

        "discontent_neutral": 0.13,
        "discontent_content": 0.07,
        "variance_d_c": 0.02,

        "content_discontent": 0.009,
        "variance_c_d": 0.003,
        "content_neutral": 0.088,

        "standard_variance": 0.055,

        "prob_neutral_making_denier": 0.035,

        "prob_infect": 0.075,

        "prob_cured_healing_infected": 0.035,
        "prob_cured_vaccinate_neutral": 0.035,

        "prob_vaccinated_healing_infected": 0.035,
        "prob_vaccinated_vaccinate_neutral": 0.035,
        "prob_generate_anti_rumor": 0.035
    }

In this file you will also define the models you are going to simulate. You can simulate as many models as you want.
The simulation returns one result for each model, executing each model separately. For the usage, see :doc:`usage`.

Example Model
=============

In this section, we will implement a Sentiment Correlation Model.

The class would look like this:

.. code:: python

    from ..BaseBehaviour import *
    from .. import sentimentCorrelationNodeArray


    class SentimentCorrelationModel(BaseBehaviour):

        def __init__(self, environment=None, agent_id=0, state=()):
            super().__init__(environment=environment, agent_id=agent_id, state=state)
            self.outside_effects_prob = environment.environment_params['outside_effects_prob']
            self.anger_prob = environment.environment_params['anger_prob']
            self.joy_prob = environment.environment_params['joy_prob']
            self.sadness_prob = environment.environment_params['sadness_prob']
            self.disgust_prob = environment.environment_params['disgust_prob']
            self.time_awareness = []
            for i in range(4):  # In this model we have 4 sentiments
                self.time_awareness.append(0)  # 0 -> anger, 1 -> joy, 2 -> sadness, 3 -> disgust
            sentimentCorrelationNodeArray[self.id][self.env.now] = 0

        def step(self, now):
            self.behaviour()  # Method which defines the behaviour
            super().step(now)

The variables will be modified by the user, so you have to include them in the Simulation Settings JSON file.
@@ -1,35 +0,0 @@
What are the main changes between version 0.3 and 0.2?
######################################################

Version 0.3 is a major rewrite of the Soil system, focused on simplifying the API, aligning it with Mesa, and making it easier to use.
Unfortunately, this comes at the cost of backwards compatibility.

We drew several lessons from the previous version of Soil, and tried to address them in this version.
Mainly:

- The split between simulation configuration and simulation code was overly complicated for most use cases. As a result, most users ended up reusing configuration.
- Storing **all** the simulation data in a database is costly and unnecessary for most use cases. For most use cases, only a handful of variables need to be stored. This fits nicely with Mesa's data collection system.
- The API was too complex, and it was difficult to understand how to use it.
- Most parts of the API were not aligned with Mesa, which made it difficult to use Mesa's features or to integrate Soil modules with Mesa code, especially for newcomers.
- Many parts of the API were tightly coupled, which made it difficult to find bugs, test the system and add new features.

The 0.30 rewrite should provide a middle ground between Soil's opinionated approach and Mesa's flexibility.
The new Soil is less configuration-centric.
It aims to provide more modular and convenient functions, most of which can be used in vanilla Mesa.

How are agents assigned to nodes in the network?
#################################################

The constructor of the `NetworkAgent` class has two arguments: `node_id` and `topology`.
If `topology` is not provided, it will default to `self.model.topology`.
This assignment might fail if the model does not have a `topology` attribute, but most Soil environments derive from `NetworkEnvironment`, so they include a topology by default.
If `node_id` is not provided, a random node will be selected from the topology, until a node with no agent is found.
Then, the `node_id` of that node is assigned to the agent.
If no node without an agent is found, a new node is automatically added to the topology.
A sketch of this behaviour is included right after this section.

Can Soil environments include more than one network / topology?
################################################################

Yes, but each network has to be included manually.
Somewhere between 0.20 and 0.30 we added the ability to include multiple networks, but it was deemed too complex and was removed.
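A minimal sketch of the node-assignment rules just described. It is not part of the diff: it assumes the `Environment.create_network`/`add_agent`/`add_agents` API shown in the examples elsewhere in this comparison, with `CounterModel` as a stand-in agent class.

```python
from networkx import complete_graph
from soil import Environment, Simulation, CounterModel


class AssignmentDemo(Environment):
    def init(self):
        # A 3-node topology.
        self.create_network(generator=complete_graph, n=3)
        self.add_agent(agent_class=CounterModel, node_id=0)  # explicit node id
        self.add_agents(CounterModel, k=2)                   # free nodes picked at random
        # With no free node left, a new node should be added to the
        # topology automatically, as described above.
        self.add_agent(agent_class=CounterModel)


sim = Simulation(model=AssignmentDemo, max_steps=5, interval=1)

if __name__ == "__main__":
    sim.run(dump=False)
```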
BIN: 32 deleted image files (5–19 KiB each; filenames not preserved in this view)
@@ -1,93 +0,0 @@
Quickstart
----------

This section shows how to run your first simulation with Soil.
For installation instructions, see :doc:`installation`.

There are mainly two parts in a simulation: agent classes and simulation configuration.
An agent class defines how the agent will behave throughout the simulation.
The configuration includes things such as the number of agents to use and their type, the network topology to use, etc.

.. image:: soil.png
  :width: 80%
  :align: center

Soil includes several agent classes in the ``soil.agents`` module, and we will use them in this quickstart.
If you are interested in developing your own agent classes, see :doc:`soil_tutorial`.

Configuration
=============
To get you started, we will use this configuration (:download:`download the file <quickstart.yml>` directly):

.. literalinclude:: quickstart.yml
   :language: yaml

The agent type used, SISa, is a very simple model.
It only has three states (neutral, content and discontent).
Its parameters are the probabilities to change from one state to another, either spontaneously or because of contagion from neighboring agents.

Running the simulation
======================

To see the simulation in action, simply point soil to the configuration, and tell it to store the graph and the history of agent states and environment parameters at every point.

.. code::

    ❯ soil --graph --csv quickstart.yml [13:35:29]
    INFO:soil:Using config(s): quickstart
    INFO:soil:Dumping results to soil_output/quickstart : ['csv', 'gexf']
    INFO:soil:Starting simulation quickstart at 13:35:30.
    INFO:soil:Starting Simulation quickstart trial 0 at 13:35:30.
    INFO:soil:Finished Simulation quickstart trial 0 at 13:35:49 in 19.43677067756653 seconds
    INFO:soil:Starting Dumping simulation quickstart trial 0 at 13:35:49.
    INFO:soil:Finished Dumping simulation quickstart trial 0 at 13:35:51 in 1.7733407020568848 seconds
    INFO:soil:Dumping results to soil_output/quickstart
    INFO:soil:Finished simulation quickstart at 13:35:51 in 21.29862952232361 seconds


The ``CSV`` file should look like this:

.. code::

    agent_id,t_step,key,value
    env,0,neutral_discontent_spon_prob,0.05
    env,0,neutral_discontent_infected_prob,0.1
    env,0,neutral_content_spon_prob,0.2
    env,0,neutral_content_infected_prob,0.4
    env,0,discontent_neutral,0.2
    env,0,discontent_content,0.05
    env,0,content_discontent,0.05
    env,0,variance_d_c,0.05
    env,0,variance_c_d,0.1

Results and visualization
=========================

The environment variables are marked with ``env`` as their ``agent_id``.
The exported values are only stored when they change.
To find out how to get every key and value at every point in the simulation, check out the :doc:`soil_tutorial`.

The dynamic graph is exported as a .gexf file which can be visualized with
`Gephi <https://gephi.org/users/download/>`__.
Now it is your turn to experiment with the simulation.
Change some of the parameters, such as the number of agents, the probability of becoming content, or the type of network, and see how the results change.

Soil also includes a web server that allows you to upload your simulations, change parameters, and visualize the results, including a timeline of the network.
To make it work, you have to install soil like this:

.. code::

    pip install soil[web]

Once installed, the soil web UI can be run in two ways:

.. code::

    soil-web

    # OR

    python -m soil.web
@@ -1,33 +0,0 @@
---
name: quickstart
num_trials: 1
max_time: 1000
model_params:
  agents:
    - agent_class: SISaModel
      topology: true
      state:
        id: neutral
      weight: 1
    - agent_class: SISaModel
      topology: true
      state:
        id: content
      weight: 2
  topology:
    params:
      n: 100
      k: 5
      p: 0.2
      generator: newman_watts_strogatz_graph
  neutral_discontent_spon_prob: 0.05
  neutral_discontent_infected_prob: 0.1
  neutral_content_spon_prob: 0.2
  neutral_content_infected_prob: 0.4
  discontent_neutral: 0.2
  discontent_content: 0.05
  content_discontent: 0.05
  variance_d_c: 0.05
  variance_c_d: 0.1
  content_neutral: 0.1
  standard_variance: 0.1
@@ -1 +0,0 @@
ipython>=7.31.1
@@ -1,12 +0,0 @@
### MESA

Starting with version 0.3, Soil has been redesigned to complement Mesa, while remaining compatible with it.
That means that every component in Soil (i.e., Models, Environments, etc.) can be mixed with existing mesa components.
In fact, there are examples in the `examples/mesa` folder in the repository that show how that integration may be used.

Here are some reasons to use Soil instead of plain mesa:

- Less boilerplate for common scenarios (by some definitions of common)
- Functions to automatically populate a topology with an agent distribution (i.e., different ratios of agent class and state)
- The `soil.Simulation` class allows you to run multiple instances of the same experiment (i.e., multiple trials with the same parameters but a different randomness seed); see the sketch after this list
- Reporting functions that aggregate multiple
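A hypothetical sketch of the multi-trial point above. It is not part of the diff: it assumes `Simulation` accepts a `num_trials` argument mirroring the `num_trials` key used in the YAML configurations in this comparison, and reuses the `CounterModel` agent and `Environment` API from the other examples.

```python
from networkx import complete_graph
from soil import Environment, Simulation, CounterModel


class TinyEnv(Environment):
    def init(self):
        self.create_network(generator=complete_graph, n=5)
        self.add_agents(CounterModel)


# num_trials (assumed keyword) would rerun the same model several times,
# each trial with a different randomness seed, as described above.
sim = Simulation(model=TinyEnv, max_steps=10, num_trials=3)

if __name__ == "__main__":
    sim.run(dump=False)
```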
BIN: docs/soil.png (deleted, 43 KiB)
docs/usage.rst (99 changes, new executable file)
@@ -0,0 +1,99 @@
Usage
-----

First of all, you need to install the package. See :doc:`installation` for installation instructions.

Simulation Settings
===================

Once installed, before running a simulation, you need to configure it.

* In the Settings JSON file you will find the configuration of the network.

.. code:: python

    {
        "network_type": 1,
        "number_of_nodes": 1000,
        "max_time": 50,
        "num_trials": 1,
        "timeout": 2
    }

* In the Settings JSON file, you will also find the configuration of the models.

Network Types
=============

There are three types of network implemented, but you could add more.

.. code:: python

    if settings.network_type == 0:
        G = nx.complete_graph(settings.number_of_nodes)
    if settings.network_type == 1:
        G = nx.barabasi_albert_graph(settings.number_of_nodes, 10)
    if settings.network_type == 2:
        G = nx.margulis_gabber_galil_graph(settings.number_of_nodes, None)
    # More types of networks can be added here

Models Settings
===============

After having configured the simulation, the next step is setting up the variables of the models.
For this, you will need to modify the Settings JSON file again.

.. code:: json

    {
        "agent": ["SISaModel", "ControlModelM2"],

        "neutral_discontent_spon_prob": 0.04,
        "neutral_discontent_infected_prob": 0.04,
        "neutral_content_spon_prob": 0.18,
        "neutral_content_infected_prob": 0.02,

        "discontent_neutral": 0.13,
        "discontent_content": 0.07,
        "variance_d_c": 0.02,

        "content_discontent": 0.009,
        "variance_c_d": 0.003,
        "content_neutral": 0.088,

        "standard_variance": 0.055,

        "prob_neutral_making_denier": 0.035,

        "prob_infect": 0.075,

        "prob_cured_healing_infected": 0.035,
        "prob_cured_vaccinate_neutral": 0.035,

        "prob_vaccinated_healing_infected": 0.035,
        "prob_vaccinated_vaccinate_neutral": 0.035,
        "prob_generate_anti_rumor": 0.035
    }

In this file you will define the different models you are going to simulate. You can simulate as many models
as you want. Each model will be simulated separately.

After setting up the models, you have to initialize the parameters of each one. You will find the parameters needed
in the documentation of each model.

Parameter validation will fail if a required parameter without a default has not been provided.

Running the Simulation
======================

After setting all the configuration, you will be able to run the simulation. All you need to do is execute:

.. code:: bash

    python3 soil.py

The simulation will return a dynamic graph .gexf file which can be visualized with
`Gephi <https://gephi.org/users/download/>`__.

It will also return one .png picture for each model simulated.
@@ -1,39 +0,0 @@
from networkx import Graph
import random
import networkx as nx
from soil import Simulation, Environment, CounterModel, parameters


def mygenerator(n=5, n_edges=5):
    """
    Just a simple generator that creates a network with n nodes and
    n_edges edges. Edges are assigned randomly, only avoiding self loops.
    """
    G = nx.Graph()

    for i in range(n):
        G.add_node(i)

    for i in range(n_edges):
        nodes = list(G.nodes)
        n_in = random.choice(nodes)
        nodes.remove(n_in)  # Avoid loops
        n_out = random.choice(nodes)
        G.add_edge(n_in, n_out)
    return G


class GeneratorEnv(Environment):
    """Using a custom generator for the network"""

    generator: parameters.function = staticmethod(mygenerator)

    def init(self):
        self.create_network(generator=self.generator, n=10, n_edges=5)
        self.add_agents(CounterModel)


sim = Simulation(model=GeneratorEnv, max_steps=10, interval=1)

if __name__ == '__main__':
    sim.run(dump=False)
@@ -1,41 +0,0 @@
from soil.agents import FSM, state, default_state
from soil.time import Delta


class Fibonacci(FSM):
    """Agent that only executes in t_steps that are Fibonacci numbers"""
    prev = 1

    @default_state
    @state
    def counting(self):
        self.log("Stopping at {}".format(self.now))
        prev, self["prev"] = self["prev"], max([self.now, self["prev"]])
        return None, Delta(prev)


class Odds(FSM):
    """Agent that only executes in odd t_steps"""

    @default_state
    @state
    def odds(self):
        self.log("Stopping at {}".format(self.now))
        return None, Delta(1 + self.now % 2)


from soil import Environment, Simulation
from networkx import complete_graph


class TimeoutsEnv(Environment):
    def init(self):
        self.create_network(generator=complete_graph, n=2)
        self.add_agent(agent_class=Fibonacci, node_id=0)
        self.add_agent(agent_class=Odds, node_id=1)


sim = Simulation(model=TimeoutsEnv, max_steps=10, interval=1)

if __name__ == "__main__":
    sim.run(dump=False)
@@ -1,9 +0,0 @@
This example can be run with command-line options, like this:

```bash
python cars.py --level DEBUG -e summary --csv
# or
soil cars.py -e summary
```

This will set the `CSV` (save the agent and model data to a CSV) and `summary` (print a summary of the data to stdout) exporters, and set the log level to DEBUG.
@@ -1,226 +0,0 @@
"""
This is an example of a simplified city, where there are Passengers and Drivers that can take those passengers
from their location to their desired location.

An example scenario could play like the following:

- Drivers start in the `wandering` state, where they wander around the city until they have been assigned a journey
- Passenger(1) tells every driver that it wants to request a Journey.
- Each driver receives the request.
  If Driver(2) is interested in providing the Journey, it asks Passenger(1) to confirm that it accepts Driver(2)'s request
- When Passenger(1) accepts the request, two things happen:
  - Passenger(1) changes its state to `driving_home`
  - Driver(2) starts moving towards the origin of the Journey
- Once Driver(2) reaches the origin, it starts moving itself and Passenger(1) to the destination of the Journey
- When Driver(2) reaches the destination (carrying Passenger(1) along):
  - Driver(2) starts wandering again
  - Passenger(1) dies, and is removed from the simulation
- If there are no more passengers available in the simulation, Drivers die
"""
from __future__ import annotations
from typing import Optional
from soil import *
from soil import events
from mesa.space import MultiGrid


# More complex scenarios may use more than one type of message between objects.
# A common pattern is to use `enum.Enum` to represent state changes in a request.
@dataclass
class Journey:
    """
    This represents a request for a journey. Passengers and drivers exchange this object.

    A journey may have a driver assigned or not. If the driver has not been assigned, this
    object is considered a "request for a journey".
    """

    origin: (int, int)
    destination: (int, int)
    tip: float

    passenger: Passenger
    driver: Optional[Driver] = None


class City(EventedEnvironment):
    """
    An environment with a grid where drivers and passengers will be placed.

    The number of drivers and riders is configurable through its parameters:

    :param str n_cars: The total number of drivers to add
    :param str n_passengers: The number of passengers in the simulation
    :param list agents: Specific agents to use in the simulation. It overrides the `n_passengers`
        and `n_cars` params.
    :param int height: Height of the internal grid
    :param int width: Width of the internal grid
    """
    n_cars = 1
    n_passengers = 10
    height = 100
    width = 100

    def init(self):
        self.grid = MultiGrid(width=self.width, height=self.height, torus=False)
        if not self.agents:
            self.add_agents(Driver, k=self.n_cars)
            self.add_agents(Passenger, k=self.n_passengers)

        for agent in self.agents:
            self.grid.place_agent(agent, (0, 0))
            self.grid.move_to_empty(agent)

        self.total_earnings = 0
        self.add_model_reporter("total_earnings")

    @report
    @property
    def number_passengers(self):
        return self.count_agents(agent_class=Passenger)


class Driver(Evented, FSM):
    pos = None
    journey = None
    earnings = 0

    def on_receive(self, msg, sender):
        """This is not a state. It will run (and block) every time check_messages is invoked"""
        if self.journey is None and isinstance(msg, Journey) and msg.driver is None:
            msg.driver = self
            self.journey = msg

    def check_passengers(self):
        """If there are no more passengers, stop forever"""
        c = self.count_agents(agent_class=Passenger)
        self.info(f"Passengers left {c}")
        if not c:
            self.die()

    @default_state
    @state
    def wandering(self):
        """Move around the city until a journey is accepted"""
        target = None
        self.check_passengers()
        self.journey = None
        while self.journey is None:  # No potential journeys detected (see on_receive)
            if target is None or not self.move_towards(target):
                target = self.random.choice(
                    self.model.grid.get_neighborhood(self.pos, moore=False)
                )

            self.check_passengers()
            # This will call on_receive behind the scenes, and the agent's status will be updated
            self.check_messages()
            yield Delta(30)  # Wait at least 30 seconds before checking again

        try:
            # Re-send the journey to the passenger, to confirm that we have been selected
            self.journey = yield self.journey.passenger.ask(self.journey, timeout=60)
        except events.TimedOut:
            # No journey has been accepted. Try again
            self.journey = None
            return

        return self.driving

    @state
    def driving(self):
        """The journey has been accepted. Pick them up and take them to their destination"""
        while self.move_towards(self.journey.origin):
            yield
        while self.move_towards(self.journey.destination, with_passenger=True):
            yield
        self.earnings += self.journey.tip
        self.model.total_earnings += self.journey.tip
        self.check_passengers()
        return self.wandering

    def move_towards(self, target, with_passenger=False):
        """Move one cell at a time towards a target"""
        self.info(f"Moving { self.pos } -> { target }")
        if target[0] == self.pos[0] and target[1] == self.pos[1]:
            return False

        next_pos = [self.pos[0], self.pos[1]]
        for idx in [0, 1]:
            if self.pos[idx] < target[idx]:
                next_pos[idx] += 1
                break
            if self.pos[idx] > target[idx]:
                next_pos[idx] -= 1
                break
        self.model.grid.move_agent(self, tuple(next_pos))
        if with_passenger:
            self.journey.passenger.pos = (
                self.pos
            )  # This could be communicated through messages
        return True


class Passenger(Evented, FSM):
    pos = None

    def on_receive(self, msg, sender):
        """This is not a state. It will be run synchronously every time `check_messages` is run"""

        if isinstance(msg, Journey):
            self.journey = msg
            return msg

    @default_state
    @state
    def asking(self):
        destination = (
            self.random.randint(0, self.model.grid.height),
            self.random.randint(0, self.model.grid.width),
        )
        self.journey = None
        journey = Journey(
            origin=self.pos,
            destination=destination,
            tip=self.random.randint(10, 100),
            passenger=self,
        )

        timeout = 60
        expiration = self.now + timeout
        self.model.broadcast(journey, ttl=timeout, sender=self, agent_class=Driver)
        while not self.journey:
            self.info(f"Passenger at: { self.pos }. Checking for responses.")
            try:
                # This will call check_messages behind the scenes, and the agent's status will be updated
                # If you want to avoid that, you can call it with: check=False
                yield self.received(expiration=expiration)
            except events.TimedOut:
                self.info(f"Passenger at: { self.pos }. Asking for journey.")
                self.model.broadcast(
                    journey, ttl=timeout, sender=self, agent_class=Driver
                )
                expiration = self.now + timeout
        return self.driving_home

    @state
    def driving_home(self):
        while (
            self.pos[0] != self.journey.destination[0]
            or self.pos[1] != self.journey.destination[1]
        ):
            try:
                yield self.received(timeout=60)
            except events.TimedOut:
                pass

        self.die("Got home safe!")


simulation = Simulation(name="RideHailing",
                        model=City,
                        seed="carsSeed",
                        max_time=1000,
                        model_params=dict(n_passengers=2))

if __name__ == "__main__":
    easy(simulation)
@@ -1,7 +0,0 @@
from soil import Simulation
from social_wealth import MoneyEnv, graph_generator

sim = Simulation(name="mesa_sim", dump=False, max_steps=10, interval=2, model=MoneyEnv, model_params=dict(generator=graph_generator, N=10, width=50, height=50))

if __name__ == "__main__":
    sim.run()
@@ -1,111 +0,0 @@
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.UserParam import Slider, Choice
from mesa.visualization.modules import ChartModule, NetworkModule, CanvasGrid
from social_wealth import MoneyEnv, graph_generator, SocialMoneyAgent
import networkx as nx


class MyNetwork(NetworkModule):
    def render(self, model):
        return self.portrayal_method(model)


def network_portrayal(env):
    # The model ensures there is 0 or 1 agent per node

    portrayal = dict()
    wealths = {
        node_id: data["agent"].wealth for (node_id, data) in env.G.nodes(data=True)
    }
    portrayal["nodes"] = [
        {
            "id": node_id,
            "size": 2 * (wealth + 1),
            "color": "#CC0000" if wealth == 0 else "#007959",
            # "color": "#CC0000",
            "label": f"{node_id}: {wealth}",
        }
        for (node_id, wealth) in wealths.items()
    ]

    portrayal["edges"] = [
        {"id": edge_id, "source": source, "target": target, "color": "#000000"}
        for edge_id, (source, target) in enumerate(env.G.edges)
    ]

    return portrayal


def gridPortrayal(agent):
    """
    This function is registered with the visualization server to be called
    each tick to indicate how to draw the agent in its current state.
    :param agent: the agent in the simulation
    :return: the portrayal dictionary
    """
    color = max(10, min(agent.wealth * 10, 100))
    return {
        "Shape": "rect",
        "w": 1,
        "h": 1,
        "Filled": "true",
        "Layer": 0,
        "Label": agent.unique_id,
        "Text": agent.unique_id,
        "x": agent.pos[0],
        "y": agent.pos[1],
        "Color": f"rgba(31, 10, 255, 0.{color})",
    }


grid = MyNetwork(network_portrayal, 500, 500)
chart = ChartModule(
    [{"Label": "Gini", "Color": "Black"}], data_collector_name="datacollector"
)

model_params = {
    "N": Slider(
        "N",
        5,
        1,
        10,
        1,
        description="Choose how many agents to include in the model",
    ),
    "height": Slider(
        "height",
        5,
        5,
        10,
        1,
        description="Grid height",
    ),
    "width": Slider(
        "width",
        5,
        5,
        10,
        1,
        description="Grid width",
    ),
    "agent_class": Choice(
        "Agent class",
        value="MoneyAgent",
        choices=["MoneyAgent", "SocialMoneyAgent"],
    ),
    "generator": graph_generator,
}


canvas_element = CanvasGrid(
    gridPortrayal, model_params["width"].value, model_params["height"].value, 500, 500
)


server = ModularServer(
    MoneyEnv, [grid, chart, canvas_element], "Money Model", model_params
)
server.port = 8521

if __name__ == '__main__':
    server.launch(open_browser=False)
@@ -1,137 +0,0 @@
|
||||
"""
|
||||
This is an example that adds soil agents and environment in a normal
|
||||
mesa workflow.
|
||||
"""
|
||||
from mesa import Agent as MesaAgent
|
||||
from mesa.space import MultiGrid
|
||||
|
||||
# from mesa.time import RandomActivation
|
||||
from mesa.datacollection import DataCollector
|
||||
from mesa.batchrunner import BatchRunner
|
||||
|
||||
import networkx as nx
|
||||
|
||||
from soil import NetworkAgent, Environment, serialization
|
||||
|
||||
|
||||
def compute_gini(model):
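    # Gini coefficient as in the classic Mesa tutorial: sort wealths ascending,
    # compute the normalized Lorenz-curve term B, and return 1 + 1/N - 2B.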
|
||||
agent_wealths = [agent.wealth for agent in model.agents]
|
||||
x = sorted(agent_wealths)
|
||||
N = len(list(model.agents))
|
||||
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
|
||||
return 1 + (1 / N) - 2 * B
|
||||
|
||||
|
||||
class MoneyAgent(MesaAgent):
|
||||
"""
|
||||
A MESA agent with fixed initial wealth.
|
||||
It will only share wealth with neighbors based on grid proximity
|
||||
"""
|
||||
|
||||
def __init__(self, unique_id, model, wealth=1, **kwargs):
|
||||
super().__init__(unique_id=unique_id, model=model)
|
||||
self.wealth = wealth
|
||||
|
||||
def move(self):
|
||||
possible_steps = self.model.grid.get_neighborhood(
|
||||
self.pos, moore=True, include_center=False
|
||||
)
|
||||
new_position = self.random.choice(possible_steps)
|
||||
self.model.grid.move_agent(self, new_position)
|
||||
|
||||
def give_money(self):
|
||||
cellmates = self.model.grid.get_cell_list_contents([self.pos])
|
||||
if len(cellmates) > 1:
|
||||
other = self.random.choice(cellmates)
|
||||
other.wealth += 1
|
||||
self.wealth -= 1
|
||||
|
||||
def step(self):
|
||||
print("Crying wolf", self.pos)
|
||||
self.move()
|
||||
if self.wealth > 0:
|
||||
self.give_money()
|
||||
|
||||
|
||||
class SocialMoneyAgent(MoneyAgent, NetworkAgent):
|
||||
wealth = 1
|
||||
|
||||
def give_money(self):
|
||||
cellmates = set(self.model.grid.get_cell_list_contents([self.pos]))
|
||||
friends = set(self.get_neighbors())
|
||||
self.info("Trying to give money")
|
||||
self.info("Cellmates: ", cellmates)
|
||||
self.info("Friends: ", friends)
|
||||
|
||||
nearby_friends = list(cellmates & friends)
|
||||
|
||||
if len(nearby_friends):
|
||||
other = self.random.choice(nearby_friends)
|
||||
other.wealth += 1
|
||||
self.wealth -= 1
|
||||
|
||||
|
||||
def graph_generator(n=5):
|
||||
G = nx.Graph()
|
||||
for ix in range(n):
|
||||
G.add_edge(0, ix)
|
||||
return G
|
||||
|
||||
|
||||
class MoneyEnv(Environment):
|
||||
"""A model with some number of agents."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
width,
|
||||
height,
|
||||
N,
|
||||
generator=graph_generator,
|
||||
agent_class=SocialMoneyAgent,
|
||||
topology=None,
|
||||
**kwargs
|
||||
):
|
||||
|
||||
generator = serialization.deserialize(generator)
|
||||
agent_class = serialization.deserialize(agent_class, globs=globals())
|
||||
topology = generator(n=N)
|
||||
super().__init__(topology=topology, N=N, **kwargs)
|
||||
self.grid = MultiGrid(width, height, False)
|
||||
|
||||
self.populate_network(agent_class=agent_class)
|
||||
|
||||
# Create agents
|
||||
for agent in self.agents:
|
||||
x = self.random.randrange(self.grid.width)
|
||||
y = self.random.randrange(self.grid.height)
|
||||
self.grid.place_agent(agent, (x, y))
|
||||
|
||||
self.datacollector = DataCollector(
|
||||
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
fixed_params = {
|
||||
"generator": nx.complete_graph,
|
||||
"width": 10,
|
||||
"network_agents": [{"agent_class": SocialMoneyAgent, "weight": 1}],
|
||||
"height": 10,
|
||||
}
|
||||
|
||||
variable_params = {"N": range(10, 100, 10)}
|
||||
|
||||
batch_run = BatchRunner(
|
||||
MoneyEnv,
|
||||
variable_parameters=variable_params,
|
||||
fixed_parameters=fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini},
|
||||
)
|
||||
batch_run.run_all()
|
||||
|
||||
run_data = batch_run.get_model_vars_dataframe()
|
||||
run_data.head()
|
||||
print(run_data.Gini)
|
@@ -1,87 +0,0 @@
|
||||
from mesa import Agent, Model
|
||||
from mesa.space import MultiGrid
|
||||
from mesa.time import RandomActivation
|
||||
from mesa.datacollection import DataCollector
|
||||
from mesa.batchrunner import BatchRunner
|
||||
|
||||
|
||||
def compute_gini(model):
|
||||
agent_wealths = [agent.wealth for agent in model.schedule.agents]
|
||||
x = sorted(agent_wealths)
|
||||
N = model.num_agents
|
||||
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
|
||||
return 1 + (1 / N) - 2 * B
|
||||
|
||||
|
||||
class MoneyAgent(Agent):
|
||||
"""An agent with fixed initial wealth."""
|
||||
|
||||
def __init__(self, unique_id, model):
|
||||
super().__init__(unique_id, model)
|
||||
self.wealth = 1
|
||||
|
||||
def move(self):
|
||||
possible_steps = self.model.grid.get_neighborhood(
|
||||
self.pos, moore=True, include_center=False
|
||||
)
|
||||
new_position = self.random.choice(possible_steps)
|
||||
self.model.grid.move_agent(self, new_position)
|
||||
|
||||
def give_money(self):
|
||||
cellmates = self.model.grid.get_cell_list_contents([self.pos])
|
||||
if len(cellmates) > 1:
|
||||
other = self.random.choice(cellmates)
|
||||
other.wealth += 1
|
||||
self.wealth -= 1
|
||||
|
||||
def step(self):
|
||||
self.move()
|
||||
if self.wealth > 0:
|
||||
self.give_money()
|
||||
|
||||
|
||||
class MoneyModel(Model):
|
||||
"""A model with some number of agents."""
|
||||
|
||||
def __init__(self, N, width, height):
|
||||
self.num_agents = N
|
||||
self.grid = MultiGrid(width, height, True)
|
||||
self.schedule = RandomActivation(self)
|
||||
self.running = True
|
||||
|
||||
# Create agents
|
||||
for i in range(self.num_agents):
|
||||
a = MoneyAgent(i, self)
|
||||
self.schedule.add(a)
|
||||
# Add the agent to a random grid cell
|
||||
x = self.random.randrange(self.grid.width)
|
||||
y = self.random.randrange(self.grid.height)
|
||||
self.grid.place_agent(a, (x, y))
|
||||
|
||||
self.datacollector = DataCollector(
|
||||
model_reporters={"Gini": compute_gini}, agent_reporters={"Wealth": "wealth"}
|
||||
)
|
||||
|
||||
def step(self):
|
||||
self.datacollector.collect(self)
|
||||
self.schedule.step()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
fixed_params = {"width": 10, "height": 10}
|
||||
variable_params = {"N": range(10, 500, 10)}
|
||||
|
||||
batch_run = BatchRunner(
|
||||
MoneyModel,
|
||||
variable_params,
|
||||
fixed_params,
|
||||
iterations=5,
|
||||
max_steps=100,
|
||||
model_reporters={"Gini": compute_gini},
|
||||
)
|
||||
batch_run.run_all()
|
||||
|
||||
run_data = batch_run.get_model_vars_dataframe()
|
||||
run_data.head()
|
||||
print(run_data.Gini)
|
@@ -1,134 +0,0 @@
|
||||
from soil.agents import FSM, NetworkAgent, state, default_state, prob
|
||||
from soil.parameters import *
|
||||
import logging
|
||||
|
||||
from soil.environment import Environment
|
||||
|
||||
|
||||
class DumbViewer(FSM, NetworkAgent):
|
||||
"""
|
||||
A viewer that gets infected via TV (if it has one) and tries to infect
|
||||
its neighbors once it's infected.
|
||||
"""
|
||||
|
||||
has_been_infected: bool = False
|
||||
has_tv: bool = False
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def neutral(self):
|
||||
if self.has_tv:
|
||||
if self.prob(self.get("prob_tv_spread")):
|
||||
return self.infected
|
||||
if self.has_been_infected:
|
||||
return self.infected
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
for neighbor in self.get_neighbors(state_id=self.neutral.id):
|
||||
if self.prob(self.get("prob_neighbor_spread")):
|
||||
neighbor.infect()
|
||||
|
||||
def infect(self):
|
||||
"""
|
||||
This is not a state. It is a function that other agents can use to try to
|
||||
infect this agent. DumbViewer always gets infected, but other agents like
|
||||
HerdViewer might not become infected right away
|
||||
"""
|
||||
self.has_been_infected = True
|
||||
|
||||
|
||||
class HerdViewer(DumbViewer):
|
||||
"""
|
||||
A viewer whose probability of infection depends on the state of its neighbors.
|
||||
"""
|
||||
|
||||
def infect(self):
|
||||
"""Notice again that this is NOT a state. See DumbViewer.infect for reference"""
|
||||
infected = self.count_neighbors(state_id=self.infected.id)
|
||||
total = self.count_neighbors()
|
||||
prob_infect = self.get("prob_neighbor_spread") * infected / total
|
||||
self.debug("prob_infect", prob_infect)
|
||||
if self.prob(prob_infect):
|
||||
self.has_been_infected = True
|
||||
|
||||
|
||||
class WiseViewer(HerdViewer):
|
||||
"""
|
||||
A viewer that can change its mind.
|
||||
"""
|
||||
|
||||
@state
|
||||
def cured(self):
|
||||
prob_cure = self.get("prob_neighbor_cure")
|
||||
for neighbor in self.get_neighbors(state_id=self.infected.id):
|
||||
if self.prob(prob_cure):
|
||||
try:
|
||||
neighbor.cure()
|
||||
except AttributeError:
|
||||
self.debug("Viewer {} cannot be cured".format(neighbor.id))
|
||||
|
||||
def cure(self):
|
||||
self.has_been_cured = True
|
||||
|
||||
@state
|
||||
def infected(self):
|
||||
if self.has_been_cured:
|
||||
return self.cured
|
||||
cured = max(self.count_neighbors(self.cured.id), 1.0)
|
||||
infected = max(self.count_neighbors(self.infected.id), 1.0)
|
||||
prob_cure = self.get("prob_neighbor_cure") * (cured / infected)
|
||||
if self.prob(prob_cure):
|
||||
return self.cured
|
||||
|
||||
|
||||
class NewsSpread(Environment):
|
||||
ratio_dumb: probability = 1,
|
||||
ratio_herd: probability = 0,
|
||||
ratio_wise: probability = 0,
|
||||
prob_tv_spread: probability = 0.1,
|
||||
prob_neighbor_spread: probability = 0.1,
|
||||
prob_neighbor_cure: probability = 0.05,
|
||||
|
||||
def init(self):
|
||||
self.populate_network([DumbViewer, HerdViewer, WiseViewer],
|
||||
[self.ratio_dumb, self.ratio_herd, self.ratio_wise])
|
||||
|
||||
|
||||
from itertools import product
|
||||
from soil import Simulation
|
||||
|
||||
|
||||
# We want to investigate the effect of different agent distributions on the spread of news.
|
||||
# To do that, we will run different simulations, with a varying ratio of DumbViewers, HerdViewers, and WiseViewers
|
||||
# Because the effect of these agents might also depend on the network structure, we will run our simulations on two different networks:
# a scale-free one (Barabási-Albert) and a random one (Erdős-Rényi).
|
||||
|
||||
counter = 0
|
||||
for [r1, r2] in product([0, 0.5, 1.0], repeat=2):
|
||||
for (generator, netparams) in {
|
||||
"barabasi_albert_graph": {"m": 5},
|
||||
"erdos_renyi_graph": {"p": 0.1},
|
||||
}.items():
|
||||
print(r1, r2, 1-r1-r2, generator)
|
||||
# Create new simulation
|
||||
netparams["n"] = 500
|
||||
Simulation(
|
||||
name='newspread_sim',
|
||||
model=NewsSpread,
|
||||
model_params=dict(
|
||||
ratio_dumb=r1,
|
||||
ratio_herd=r2,
|
||||
ratio_wise=1-r1-r2,
|
||||
network_generator=generator,
|
||||
network_params=netparams,
|
||||
prob_neighbor_spread=0,
|
||||
),
|
||||
num_trials=5,
|
||||
max_steps=300,
|
||||
dump=False,
|
||||
).run()
|
||||
counter += 1
|
||||
# Run all the necessary instances
|
||||
|
||||
print(f"A total of {counter} simulations were run.")
|
1
examples/programmatic/.gitignore
vendored
@@ -1 +0,0 @@
|
||||
Programmatic*
|
@@ -1,53 +0,0 @@
|
||||
"""
|
||||
Example of a fully programmatic simulation, without definition files.
|
||||
"""
|
||||
from soil import Simulation, Environment, agents
|
||||
from networkx import Graph
|
||||
import logging
|
||||
|
||||
|
||||
def mygenerator():
|
||||
# Add only a node
|
||||
G = Graph()
|
||||
G.add_node(1)
|
||||
G.add_node(2)
|
||||
return G
|
||||
|
||||
|
||||
class MyAgent(agents.NetworkAgent, agents.FSM):
|
||||
times_run = 0
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
def neutral(self):
|
||||
self.debug("I am running")
|
||||
if self.prob(0.2):
|
||||
self.times_run += 1
|
||||
self.info("This runs 2/10 times on average")
|
||||
|
||||
|
||||
class ProgrammaticEnv(Environment):
|
||||
|
||||
def init(self):
|
||||
self.create_network(generator=mygenerator)
|
||||
assert len(self.G)
|
||||
self.populate_network(agent_class=MyAgent)
|
||||
self.add_agent_reporter('times_run')
|
||||
|
||||
|
||||
simulation = Simulation(
|
||||
name="Programmatic",
|
||||
model=ProgrammaticEnv,
|
||||
seed='Program',
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
dump=False,
|
||||
)
|
||||
|
||||
if __name__ == "__main__":
|
||||
# By default, logging will only print WARNING logs (and above).
|
||||
# You need to choose a lower logging level to get INFO/DEBUG traces
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
envs = simulation.run()
|
||||
|
||||
for agent in envs[0].agents:
|
||||
print(agent.times_run)
|
@@ -1,10 +0,0 @@
Simulation of pubs and drinking pals that go from pub to pub.

The custom environment includes a list of pubs and methods that allow agents to discover and enter pubs.
There are two types of agents:

* Patron. A patron will do three things, in this order:
  * Look for other patrons to drink with.
  * Look for a pub that the agent and the other agents in its group can get into.
  * While in the pub, patrons only drink until they get drunk and are taken home.
* Police. There is only one police agent, which takes any drunk patrons home (kicks them out of the pub).
@@ -1,195 +0,0 @@
|
||||
from soil.agents import FSM, NetworkAgent, state, default_state
|
||||
from soil import Environment, Simulation, parameters
|
||||
from itertools import islice
|
||||
import networkx as nx
|
||||
import logging
|
||||
|
||||
|
||||
class CityPubs(Environment):
|
||||
"""Environment with Pubs"""
|
||||
|
||||
level = logging.INFO
|
||||
number_of_pubs: parameters.Integer = 3
|
||||
ratio_extroverted: parameters.probability = 0.1
|
||||
pub_capacity: parameters.Integer = 10
|
||||
|
||||
def init(self):
|
||||
self.pubs = {}
|
||||
for i in range(self.number_of_pubs):
|
||||
newpub = {
|
||||
"name": "The awesome pub #{}".format(i),
|
||||
"open": True,
|
||||
"capacity": self.pub_capacity,
|
||||
"occupancy": 0,
|
||||
}
|
||||
self.pubs[newpub["name"]] = newpub
|
||||
self.add_agent(agent_class=Police)
|
||||
self.populate_network([Patron.w(openness=0.1), Patron.w(openness=1)],
|
||||
[self.ratio_extroverted, 1-self.ratio_extroverted])
|
||||
assert all(["agent" in node and isinstance(node["agent"], Patron) for (_, node) in self.G.nodes(data=True)])
|
||||
|
||||
def enter(self, pub_id, *nodes):
|
||||
"""Agents will try to enter. The pub checks if it is possible"""
|
||||
try:
|
||||
pub = self["pubs"][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError("Pub {} is not available".format(pub_id))
|
||||
if not pub["open"] or (pub["capacity"] < (len(nodes) + pub["occupancy"])):
|
||||
return False
|
||||
pub["occupancy"] += len(nodes)
|
||||
for node in nodes:
|
||||
node["pub"] = pub_id
|
||||
return True
|
||||
|
||||
def available_pubs(self):
|
||||
for pub in self["pubs"].values():
|
||||
if pub["open"] and (pub["occupancy"] < pub["capacity"]):
|
||||
yield pub["name"]
|
||||
|
||||
def exit(self, pub_id, *node_ids):
|
||||
"""Agents will notify the pub they want to leave"""
|
||||
try:
|
||||
pub = self["pubs"][pub_id]
|
||||
except KeyError:
|
||||
raise ValueError("Pub {} is not available".format(pub_id))
|
||||
for node_id in node_ids:
|
||||
node = self.get_agent(node_id)
|
||||
if pub_id == node["pub"]:
|
||||
del node["pub"]
|
||||
pub["occupancy"] -= 1
|
||||
|
||||
|
||||
class Patron(FSM, NetworkAgent):
|
||||
"""Agent that looks for friends to drink with. It will do three things:
|
||||
1) Look for other patrons to drink with
|
||||
2) Look for a bar where the agent and other agents in the same group can get in.
|
||||
3) While in the bar, patrons only drink, until they get drunk and taken home.
|
||||
"""
|
||||
|
||||
level = logging.DEBUG
|
||||
|
||||
pub = None
|
||||
drunk = False
|
||||
pints = 0
|
||||
max_pints = 3
|
||||
kicked_out = False
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def looking_for_friends(self):
|
||||
"""Look for friends to drink with"""
|
||||
self.info("I am looking for friends")
|
||||
available_friends = list(
|
||||
self.get_agents(drunk=False, pub=None, state_id=self.looking_for_friends.id)
|
||||
)
|
||||
if not available_friends:
|
||||
self.info("Life sucks and I'm alone!")
|
||||
return self.at_home
|
||||
befriended = self.try_friends(available_friends)
|
||||
if befriended:
|
||||
return self.looking_for_pub
|
||||
|
||||
@state
|
||||
def looking_for_pub(self):
|
||||
"""Look for a pub that accepts me and my friends"""
|
||||
if self["pub"] != None:
|
||||
return self.sober_in_pub
|
||||
self.debug("I am looking for a pub")
|
||||
group = list(self.get_neighbors())
|
||||
for pub in self.model.available_pubs():
|
||||
self.debug("We're trying to get into {}: total: {}".format(pub, len(group)))
|
||||
if self.model.enter(pub, self, *group):
|
||||
self.info("We're all {} getting in {}!".format(len(group), pub))
|
||||
return self.sober_in_pub
|
||||
|
||||
@state
|
||||
def sober_in_pub(self):
|
||||
"""Drink up."""
|
||||
self.drink()
|
||||
if self["pints"] > self["max_pints"]:
|
||||
return self.drunk_in_pub
|
||||
|
||||
@state
|
||||
def drunk_in_pub(self):
|
||||
"""I'm out. Take me home!"""
|
||||
self.info("I'm so drunk. Take me home!")
|
||||
self["drunk"] = True
|
||||
if self.kicked_out:
|
||||
return self.at_home
|
||||
pass  # passed out drunk; wait to be taken home
|
||||
|
||||
@state
|
||||
def at_home(self):
|
||||
"""The end"""
|
||||
others = self.get_agents(state_id=Patron.at_home.id, limit_neighbors=True)
|
||||
self.debug("I'm home. Just like {} of my friends".format(len(others)))
|
||||
|
||||
def drink(self):
|
||||
self["pints"] += 1
|
||||
self.debug("Cheers to that")
|
||||
|
||||
def kick_out(self):
|
||||
self.kicked_out = True
|
||||
|
||||
def befriend(self, other_agent, force=False):
|
||||
"""
|
||||
Try to become friends with another agent. The chances of
|
||||
success depend on both agents' openness.
|
||||
"""
|
||||
if force or self["openness"] > self.random.random():
|
||||
self.add_edge(self, other_agent)
|
||||
self.info("Made some friend {}".format(other_agent))
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_friends(self, others):
|
||||
"""Look for random agents around me and try to befriend them"""
|
||||
befriended = False
|
||||
k = int(10 * self["openness"])
|
||||
self.random.shuffle(others)
|
||||
for friend in islice(others, k): # random.choice >= 3.7
|
||||
if friend == self:
|
||||
continue
|
||||
if friend.befriend(self):
|
||||
self.befriend(friend, force=True)
|
||||
self.debug("Hooray! new friend: {}".format(friend.unique_id))
|
||||
befriended = True
|
||||
else:
|
||||
self.debug("{} does not want to be friends".format(friend.unique_id))
|
||||
return befriended
|
||||
|
||||
|
||||
class Police(FSM):
|
||||
"""Simple agent to take drunk people out of pubs."""
|
||||
|
||||
level = logging.INFO
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def patrol(self):
|
||||
drunksters = list(self.get_agents(drunk=True, state_id=Patron.drunk_in_pub.id))
|
||||
for drunk in drunksters:
|
||||
self.info("Kicking out the trash: {}".format(drunk.unique_id))
|
||||
drunk.kick_out()
|
||||
if not drunksters:
    self.info("No trash to take out. Too bad.")
|
||||
|
||||
|
||||
sim = Simulation(
|
||||
model=CityPubs,
|
||||
name="pubcrawl",
|
||||
num_trials=3,
|
||||
max_steps=10,
|
||||
dump=False,
|
||||
model_params=dict(
|
||||
network_generator=nx.empty_graph,
|
||||
network_params={"n": 30},
|
||||
model=CityPubs,
|
||||
altercations=0,
|
||||
number_of_pubs=3,
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sim.run(parallel=False)
|
@@ -1,14 +0,0 @@
There are two similar implementations of this simulation:

- `basic`. Uses simple primitives.
- `improved`. Uses more advanced features, such as the `time` module to avoid unnecessary computations (i.e., skipping steps) and generator functions.

The examples can be run directly from the terminal, and they accept command-line arguments.
For example, to enable the CSV exporter and the Summary exporter, while setting `max_time` to `100` and `seed` to `CustomSeed`:

```
python rabbit_agents.py --set max_time=100 --csv -e summary --set 'seed="CustomSeed"'
```

To learn more about how this functionality works, check out the `soil.easy` function.
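For reference, this is roughly what that entry point looks like from Python. A minimal sketch, assuming `easy` can be imported directly from `soil` and that it handles the command-line flags shown above (both are assumptions based on how `easy` is used elsewhere in these examples):

```
from soil import Simulation, easy  # assumed import path for the `easy` helper

from rabbits_basic_sim import RabbitEnv  # environment defined by the basic example

simulation = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", num_trials=1)

if __name__ == "__main__":
    # `easy` is assumed to parse flags such as --set, --csv and -e before running.
    easy(simulation)
```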
@@ -1,153 +0,0 @@
|
||||
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation
|
||||
from soil.time import Delta
|
||||
from enum import Enum
|
||||
from collections import Counter
|
||||
import logging
|
||||
import math
|
||||
|
||||
from rabbits_basic_sim import RabbitEnv
|
||||
|
||||
|
||||
class RabbitsImprovedEnv(RabbitEnv):
|
||||
def init(self):
|
||||
"""Initialize the environment with the new versions of the agents"""
|
||||
a1 = self.add_node(Male)
|
||||
a2 = self.add_node(Female)
|
||||
a1.add_edge(a2)
|
||||
self.add_agent(RandomAccident)
|
||||
|
||||
|
||||
class Rabbit(FSM, NetworkAgent):
|
||||
|
||||
sexual_maturity = 30
|
||||
life_expectancy = 300
|
||||
birth = None
|
||||
|
||||
@property
|
||||
def age(self):
|
||||
if self.birth is None:
|
||||
return None
|
||||
return self.now - self.birth
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def newborn(self):
|
||||
self.info("I am a newborn.")
|
||||
self.birth = self.now
|
||||
self.offspring = 0
|
||||
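        # Return (next_state, delay): skip straight to the moment the rabbit becomes
        # sexually mature, instead of re-evaluating this state on every step.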
return self.youngling, Delta(self.sexual_maturity - self.age)
|
||||
|
||||
@state
|
||||
def youngling(self):
|
||||
if self.age >= self.sexual_maturity:
|
||||
self.info(f"I am fertile! My age is {self.age}")
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
raise Exception("Each subclass should define its fertile state")
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
self.die()
|
||||
|
||||
|
||||
class Male(Rabbit):
|
||||
max_females = 5
|
||||
mating_prob = 0.001
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
|
||||
# Males try to mate
|
||||
for f in self.model.agents(
|
||||
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||
):
|
||||
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||
if self.prob(self["mating_prob"]):
|
||||
f.impregnate(self)
|
||||
break # Do not try to impregnate other females
|
||||
|
||||
|
||||
class Female(Rabbit):
|
||||
gestation = 10
|
||||
conception = None
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
# Just wait for a Male
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
if self.conception is not None:
|
||||
return self.pregnant
|
||||
|
||||
@property
|
||||
def pregnancy(self):
|
||||
if self.conception is None:
|
||||
return None
|
||||
return self.now - self.conception
|
||||
|
||||
def impregnate(self, male):
|
||||
self.info(f"impregnated by {repr(male)}")
|
||||
self.mate = male
|
||||
self.conception = self.now
|
||||
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self.debug("I am pregnant")
|
||||
|
||||
if self.age > self.life_expectancy:
|
||||
self.info("Dying before giving birth")
|
||||
return self.die()
|
||||
|
||||
if self.pregnancy >= self.gestation:
|
||||
self.info("Having {} babies".format(self.number_of_babies))
|
||||
for i in range(self.number_of_babies):
|
||||
state = {}
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class, **state)
|
||||
child.add_edge(self)
|
||||
if self.mate:
|
||||
child.add_edge(self.mate)
|
||||
self.mate.offspring += 1
|
||||
else:
|
||||
self.debug("The father has passed away")
|
||||
|
||||
self.offspring += 1
|
||||
self.mate = None
|
||||
return self.fertile
|
||||
|
||||
def die(self):
|
||||
if self.pregnancy is not None:
|
||||
self.info("A mother has died carrying a baby!!")
|
||||
return super().die()
|
||||
|
||||
|
||||
class RandomAccident(BaseAgent):
|
||||
def step(self):
|
||||
rabbits_alive = self.model.G.number_of_nodes()
|
||||
|
||||
if not rabbits_alive:
|
||||
return self.die()
|
||||
|
||||
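        # The death probability grows with the order of magnitude (log10) of the
        # living population, so accidents become more likely as rabbits multiply.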
prob_death = self.model.get("prob_death", 1e-100) * math.floor(
|
||||
math.log10(max(1, rabbits_alive))
|
||||
)
|
||||
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||
for i in self.iter_agents(agent_class=Rabbit):
|
||||
if i.state_id == i.dead.id:
|
||||
continue
|
||||
if self.prob(prob_death):
|
||||
self.info("I killed a rabbit: {}".format(i.id))
|
||||
rabbits_alive -= 1
|
||||
i.die()
|
||||
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||
|
||||
|
||||
sim = Simulation(model=RabbitsImprovedEnv, max_time=100, seed="MySeed", num_trials=1)
|
||||
|
||||
if __name__ == "__main__":
|
||||
sim.run()
|
@@ -1,161 +0,0 @@
|
||||
from soil import FSM, state, default_state, BaseAgent, NetworkAgent, Environment, Simulation, report, parameters as params
|
||||
from collections import Counter
|
||||
import logging
|
||||
import math
|
||||
|
||||
|
||||
class RabbitEnv(Environment):
|
||||
prob_death: params.probability = 1e-100
|
||||
|
||||
def init(self):
|
||||
a1 = self.add_node(Male)
|
||||
a2 = self.add_node(Female)
|
||||
a1.add_edge(a2)
|
||||
self.add_agent(RandomAccident)
|
||||
|
||||
@report
|
||||
@property
|
||||
def num_rabbits(self):
|
||||
return self.count_agents(agent_class=Rabbit)
|
||||
|
||||
@report
|
||||
@property
|
||||
def num_males(self):
|
||||
return self.count_agents(agent_class=Male)
|
||||
|
||||
@report
|
||||
@property
|
||||
def num_females(self):
|
||||
return self.count_agents(agent_class=Female)
|
||||
|
||||
|
||||
class Rabbit(NetworkAgent, FSM):
|
||||
|
||||
sexual_maturity = 30
|
||||
life_expectancy = 300
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def newborn(self):
|
||||
self.info("I am a newborn.")
|
||||
self.age = 0
|
||||
self.offspring = 0
|
||||
return self.youngling
|
||||
|
||||
@state
|
||||
def youngling(self):
|
||||
self.age += 1
|
||||
if self.age >= self.sexual_maturity:
|
||||
self.info(f"I am fertile! My age is {self.age}")
|
||||
return self.fertile
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
raise Exception("Each subclass should define its fertile state")
|
||||
|
||||
@state
|
||||
def dead(self):
|
||||
self.die()
|
||||
|
||||
|
||||
class Male(Rabbit):
|
||||
max_females = 5
|
||||
mating_prob = 0.001
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
self.age += 1
|
||||
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
|
||||
# Males try to mate
|
||||
for f in self.model.agents(
|
||||
agent_class=Female, state_id=Female.fertile.id, limit=self.max_females
|
||||
):
|
||||
self.debug("FOUND A FEMALE: ", repr(f), self.mating_prob)
|
||||
if self.prob(self["mating_prob"]):
|
||||
f.impregnate(self)
|
||||
break # Take a break
|
||||
|
||||
|
||||
class Female(Rabbit):
|
||||
gestation = 10
|
||||
pregnancy = -1
|
||||
|
||||
@state
|
||||
def fertile(self):
|
||||
# Just wait for a Male
|
||||
self.age += 1
|
||||
if self.age > self.life_expectancy:
|
||||
return self.dead
|
||||
if self.pregnancy >= 0:
|
||||
return self.pregnant
|
||||
|
||||
def impregnate(self, male):
|
||||
self.info(f"impregnated by {repr(male)}")
|
||||
self.mate = male
|
||||
self.pregnancy = 0
|
||||
self.number_of_babies = int(8 + 4 * self.random.random())
|
||||
|
||||
@state
|
||||
def pregnant(self):
|
||||
self.info("I am pregnant")
|
||||
self.age += 1
|
||||
|
||||
if self.age >= self.life_expectancy:
|
||||
return self.die()
|
||||
|
||||
if self.pregnancy < self.gestation:
|
||||
self.pregnancy += 1
|
||||
return
|
||||
|
||||
self.info("Having {} babies".format(self.number_of_babies))
|
||||
for i in range(self.number_of_babies):
|
||||
state = {}
|
||||
agent_class = self.random.choice([Male, Female])
|
||||
child = self.model.add_node(agent_class=agent_class, **state)
|
||||
child.add_edge(self)
|
||||
try:
|
||||
child.add_edge(self.mate)
|
||||
self.model.agents[self.mate].offspring += 1
|
||||
except ValueError:
|
||||
self.debug("The father has passed away")
|
||||
|
||||
self.offspring += 1
|
||||
self.mate = None
|
||||
self.pregnancy = -1
|
||||
return self.fertile
|
||||
|
||||
def die(self):
|
||||
if "pregnancy" in self and self["pregnancy"] > -1:
|
||||
self.info("A mother has died carrying a baby!!")
|
||||
return super().die()
|
||||
|
||||
|
||||
class RandomAccident(BaseAgent):
|
||||
def step(self):
|
||||
rabbits_alive = self.model.G.number_of_nodes()
|
||||
|
||||
if not rabbits_alive:
|
||||
return self.die()
|
||||
|
||||
prob_death = self.model.prob_death * math.floor(
|
||||
math.log10(max(1, rabbits_alive))
|
||||
)
|
||||
self.debug("Killing some rabbits with prob={}!".format(prob_death))
|
||||
for i in self.get_agents(agent_class=Rabbit):
|
||||
if i.state_id == i.dead.id:
|
||||
continue
|
||||
if self.prob(prob_death):
|
||||
self.info("I killed a rabbit: {}".format(i.id))
|
||||
rabbits_alive -= 1
|
||||
i.die()
|
||||
self.debug("Rabbits alive: {}".format(rabbits_alive))
|
||||
|
||||
|
||||
|
||||
sim = Simulation(model=RabbitEnv, max_time=100, seed="MySeed", num_trials=1)
|
||||
|
||||
if __name__ == "__main__":
|
||||
sim.run()
|
@@ -1,47 +0,0 @@
|
||||
"""
|
||||
Example of setting custom activation delays for agents.
Example of a fully programmatic simulation, without definition files.
|
||||
"""
|
||||
from soil import Simulation, agents, Environment
|
||||
from soil.time import Delta
|
||||
|
||||
|
||||
class MyAgent(agents.FSM):
|
||||
"""
|
||||
An agent that first does a ping
|
||||
"""
|
||||
|
||||
defaults = {"pong_counts": 2}
|
||||
|
||||
@agents.default_state
|
||||
@agents.state
|
||||
def ping(self):
|
||||
self.info("Ping")
|
||||
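        # Return (next_state, delay): switch to pong after an exponentially
        # distributed delay with a mean of 16 time units.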
return self.pong, Delta(self.random.expovariate(1 / 16))
|
||||
|
||||
@agents.state
|
||||
def pong(self):
|
||||
self.info("Pong")
|
||||
self.pong_counts -= 1
|
||||
self.info(str(self.pong_counts))
|
||||
if self.pong_counts < 1:
|
||||
return self.die()
|
||||
return None, Delta(self.random.expovariate(1 / 16))
|
||||
|
||||
|
||||
class RandomEnv(Environment):
|
||||
|
||||
def init(self):
|
||||
self.add_agent(agent_class=MyAgent)
|
||||
|
||||
|
||||
s = Simulation(
|
||||
name="Programmatic",
|
||||
model=RandomEnv,
|
||||
num_trials=1,
|
||||
max_time=100,
|
||||
dump=False,
|
||||
)
|
||||
|
||||
|
||||
envs = s.run()
|
@@ -1,340 +0,0 @@
|
||||
import networkx as nx
|
||||
from soil.agents import Geo, NetworkAgent, FSM, custom, state, default_state
|
||||
from soil import Environment, Simulation
|
||||
from soil.parameters import *
|
||||
|
||||
|
||||
class TerroristEnvironment(Environment):
|
||||
n: Integer = 100
|
||||
radius: Float = 0.2
|
||||
|
||||
information_spread_intensity: probability = 0.7
|
||||
terrorist_additional_influence: probability = 0.035
|
||||
max_vulnerability: probability = 0.7
|
||||
prob_interaction: probability = 0.5
|
||||
|
||||
# TrainingAreaModel and HavenModel
|
||||
training_influence: probability = 0.20
|
||||
haven_influence: probability = 0.20
|
||||
|
||||
# TerroristNetworkModel
|
||||
vision_range: Float = 0.30
|
||||
sphere_influence: Integer = 2
|
||||
weight_social_distance: Float = 0.035
|
||||
weight_link_distance: Float = 0.035
|
||||
|
||||
ratio_civil: probability = 0.8
|
||||
ratio_leader: probability = 0.1
|
||||
ratio_training: probability = 0.05
|
||||
ratio_haven: probability = 0.05
|
||||
|
||||
def init(self):
|
||||
self.create_network(generator=self.generator, n=self.n, radius=self.radius)
|
||||
self.populate_network([
|
||||
TerroristNetworkModel.w(state_id='civilian'),
|
||||
TerroristNetworkModel.w(state_id='leader'),
|
||||
TrainingAreaModel,
|
||||
HavenModel
|
||||
], [self.ratio_civil, self.ratio_leader, self.ratio_training, self.ratio_haven])
|
||||
|
||||
@staticmethod
|
||||
def generator(*args, **kwargs):
|
||||
return nx.random_geometric_graph(*args, **kwargs)
|
||||
|
||||
class TerroristSpreadModel(FSM, Geo):
|
||||
"""
|
||||
Settings:
|
||||
information_spread_intensity
|
||||
|
||||
terrorist_additional_influence
|
||||
|
||||
min_vulnerability (optional else zero)
|
||||
|
||||
max_vulnerability
|
||||
"""
|
||||
|
||||
information_spread_intensity = 0.1
|
||||
terrorist_additional_influence = 0.1
|
||||
min_vulnerability = 0
|
||||
max_vulnerability = 1
|
||||
|
||||
def init(self):
|
||||
if self.state_id == self.civilian.id: # Civilian
|
||||
self.mean_belief = self.model.random.uniform(0.00, 0.5)
|
||||
elif self.state_id == self.terrorist.id: # Terrorist
|
||||
self.mean_belief = self.random.uniform(0.8, 1.00)
|
||||
elif self.state_id == self.leader.id: # Leader
|
||||
self.mean_belief = 1.00
|
||||
else:
|
||||
raise Exception("Invalid state id: {}".format(self["id"]))
|
||||
|
||||
self.vulnerability = self.random.uniform(
|
||||
self.get("min_vulnerability", 0), self.get("max_vulnerability", 1)
|
||||
)
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def civilian(self):
|
||||
neighbours = list(self.get_neighbors(agent_class=TerroristSpreadModel))
|
||||
if len(neighbours) > 0:
|
||||
# Only interact with some of the neighbors
|
||||
interactions = list(
|
||||
n for n in neighbours if self.random.random() <= self.model.prob_interaction
|
||||
)
|
||||
influence = sum(self.degree(i) for i in interactions)
|
||||
mean_belief = sum(
|
||||
i.mean_belief * self.degree(i) / influence for i in interactions
|
||||
)
|
||||
mean_belief = (
|
||||
mean_belief * self.information_spread_intensity
|
||||
+ self.mean_belief * (1 - self.information_spread_intensity)
|
||||
)
|
||||
self.mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||
1 - self.vulnerability
|
||||
)
|
||||
|
||||
if self.mean_belief >= 0.8:
|
||||
return self.terrorist
|
||||
|
||||
@state
|
||||
def leader(self):
|
||||
self.mean_belief = self.mean_belief ** (1 - self.terrorist_additional_influence)
|
||||
for neighbour in self.get_neighbors(
|
||||
state_id=[self.terrorist.id, self.leader.id]
|
||||
):
|
||||
if self.betweenness(neighbour) > self.betweenness(self):
|
||||
return self.terrorist
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
neighbours = self.get_agents(
|
||||
state_id=[self.terrorist.id, self.leader.id],
|
||||
agent_class=TerroristSpreadModel,
|
||||
limit_neighbors=True,
|
||||
)
|
||||
if len(neighbours) > 0:
|
||||
influence = sum(self.degree(n) for n in neighbours)
|
||||
mean_belief = sum(
|
||||
n.mean_belief * self.degree(n) / influence for n in neighbours
|
||||
)
|
||||
mean_belief = mean_belief * self.vulnerability + self.mean_belief * (
|
||||
1 - self.vulnerability
|
||||
)
|
||||
self.mean_belief = self.mean_belief ** (
|
||||
1 - self.terrorist_additional_influence
|
||||
)
|
||||
|
||||
# Check if there are any leaders in the group
|
||||
leaders = list(filter(lambda x: x.state_id == self.leader.id, neighbours))
|
||||
if not leaders:
|
||||
# Check if this is the potential leader
|
||||
# Stop once it's found. Otherwise, set self as leader
|
||||
for neighbour in neighbours:
|
||||
if self.betweenness(self) < self.betweenness(neighbour):
|
||||
return
|
||||
return self.leader
|
||||
|
||||
def ego_search(self, steps=1, center=False, agent=None, **kwargs):
|
||||
"""Get a list of nodes in the ego network of *node* of radius *steps*"""
|
||||
node = agent.node_id
|
||||
G = self.subgraph(**kwargs)
|
||||
return nx.ego_graph(G, node, center=center, radius=steps).nodes()
|
||||
|
||||
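    # Centrality measures are expensive to compute, so they are cached on the model
    # and refreshed at most once per simulation step (see _last_step below).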
def degree(self, agent, force=False):
|
||||
if (
|
||||
force
|
||||
or (not hasattr(self.model, "_degree"))
|
||||
or getattr(self.model, "_last_step", 0) < self.now
|
||||
):
|
||||
self.model._degree = nx.degree_centrality(self.G)
|
||||
self.model._last_step = self.now
|
||||
return self.model._degree[agent.node_id]
|
||||
|
||||
def betweenness(self, agent, force=False):
|
||||
if (
|
||||
force
|
||||
or (not hasattr(self.model, "_betweenness"))
|
||||
or getattr(self.model, "_last_step", 0) < self.now
|
||||
):
|
||||
self.model._betweenness = nx.betweenness_centrality(self.G)
|
||||
self.model._last_step = self.now
|
||||
return self.model._betweenness[agent.node_id]
|
||||
|
||||
|
||||
class TrainingAreaModel(FSM, Geo):
|
||||
"""
|
||||
Settings:
|
||||
training_influence
|
||||
|
||||
min_vulnerability
|
||||
|
||||
Requires TerroristSpreadModel.
|
||||
"""
|
||||
|
||||
training_influence = 0.1
|
||||
min_vulnerability = 0
|
||||
|
||||
def init(self):
|
||||
self.mean_belief = 1
|
||||
self.vulnerability = 0
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def terrorist(self):
|
||||
for neighbour in self.get_neighbors(agent_class=TerroristSpreadModel):
|
||||
if neighbour.vulnerability > self.min_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||
1 - self.training_influence
|
||||
)
|
||||
|
||||
|
||||
class HavenModel(FSM, Geo):
|
||||
"""
|
||||
Settings:
|
||||
haven_influence
|
||||
|
||||
min_vulnerability
|
||||
|
||||
max_vulnerability
|
||||
|
||||
Requires TerroristSpreadModel.
|
||||
"""
|
||||
|
||||
min_vulnerability = 0
|
||||
haven_influence = 0.1
|
||||
max_vulnerability = 0.5
|
||||
|
||||
def init(self):
|
||||
self.mean_belief = 0
|
||||
self.vulnerability = 0
|
||||
|
||||
def get_occupants(self, **kwargs):
|
||||
return self.get_neighbors(agent_class=TerroristSpreadModel,
|
||||
**kwargs)
|
||||
|
||||
@default_state
|
||||
@state
|
||||
def civilian(self):
|
||||
civilians = self.get_occupants(state_id=self.civilian.id)
|
||||
if not civilians:
|
||||
return self.terrorist
|
||||
|
||||
for neighbour in self.get_occupants():
|
||||
if neighbour.vulnerability > self.min_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability * (
|
||||
1 - self.haven_influence
|
||||
)
|
||||
return self.civilian
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
for neighbour in self.get_occupants():
|
||||
if neighbour.vulnerability < self.max_vulnerability:
|
||||
neighbour.vulnerability = neighbour.vulnerability ** (
|
||||
1 - self.haven_influence
|
||||
)
|
||||
return self.terrorist
|
||||
|
||||
|
||||
class TerroristNetworkModel(TerroristSpreadModel):
|
||||
"""
|
||||
Settings:
|
||||
sphere_influence
|
||||
|
||||
vision_range
|
||||
|
||||
weight_social_distance
|
||||
|
||||
weight_link_distance
|
||||
"""
|
||||
|
||||
sphere_influence: float = 1
|
||||
vision_range: float = 1
|
||||
weight_social_distance: float = 0.5
|
||||
weight_link_distance: float = 0.2
|
||||
|
||||
@state
|
||||
def terrorist(self):
|
||||
self.update_relationships()
|
||||
return super().terrorist()
|
||||
|
||||
@state
|
||||
def leader(self):
|
||||
self.update_relationships()
|
||||
return super().leader()
|
||||
|
||||
def update_relationships(self):
|
||||
if self.count_neighbors(state_id=self.civilian.id) == 0:
|
||||
close_ups = set(
|
||||
self.geo_search(
|
||||
radius=self.vision_range, agent_class=TerroristNetworkModel
|
||||
)
|
||||
)
|
||||
step_neighbours = set(
|
||||
self.ego_search(
|
||||
self.sphere_influence,
|
||||
agent_class=TerroristNetworkModel,
|
||||
center=False,
|
||||
)
|
||||
)
|
||||
neighbours = set(
|
||||
agent.id
|
||||
for agent in self.get_neighbors(agent_class=TerroristNetworkModel)
|
||||
)
|
||||
search = (close_ups | step_neighbours) - neighbours
|
||||
for agent in self.get_agents(search):
|
||||
social_distance = 1 / self.shortest_path_length(agent.id)
|
||||
spatial_proximity = 1 - self.get_distance(agent.id)
|
||||
prob_new_interaction = (
|
||||
self.weight_social_distance * social_distance
|
||||
+ self.weight_link_distance * spatial_proximity
|
||||
)
|
||||
if (
|
||||
agent["id"] == agent.civilian.id
|
||||
and self.random.random() < prob_new_interaction
|
||||
):
|
||||
self.add_edge(agent)
|
||||
break
|
||||
|
||||
def get_distance(self, target):
|
||||
source_x, source_y = nx.get_node_attributes(self.G, "pos")[self.id]
|
||||
target_x, target_y = nx.get_node_attributes(self.G, "pos")[target]
|
||||
dx = abs(source_x - target_x)
|
||||
dy = abs(source_y - target_y)
|
||||
return (dx**2 + dy**2) ** (1 / 2)
|
||||
|
||||
def shortest_path_length(self, target):
|
||||
try:
|
||||
return nx.shortest_path_length(self.G, self.id, target)
|
||||
except nx.NetworkXNoPath:
|
||||
return float("inf")
|
||||
|
||||
|
||||
sim = Simulation(
|
||||
model=TerroristEnvironment,
|
||||
num_trials=1,
|
||||
name="TerroristNetworkModel_sim",
|
||||
max_steps=150,
|
||||
skip_test=False,
|
||||
dump=False,
|
||||
)
|
||||
|
||||
# TODO: integrate visualization
|
||||
# visualization_params:
|
||||
# # Icons downloaded from https://www.iconfinder.com/
|
||||
# shape_property: agent
|
||||
# shapes:
|
||||
# TrainingAreaModel: target
|
||||
# HavenModel: home
|
||||
# TerroristNetworkModel: person
|
||||
# colors:
|
||||
# - attr_id: civilian
|
||||
# color: '#40de40'
|
||||
# - attr_id: terrorist
|
||||
# color: red
|
||||
# - attr_id: leader
|
||||
# color: '#c16a6a'
|
||||
# background_image: 'map_4800x2860.jpg'
|
||||
# background_opacity: '0.9'
|
||||
# background_filter_color: 'blue'
|
@@ -1,2 +0,0 @@
|
||||
balkian Torvalds {}
|
||||
anonymous Torvalds {}
|
@@ -1,25 +0,0 @@
|
||||
from soil import Environment, Simulation, CounterModel, report
|
||||
|
||||
|
||||
# Get directory path for current file
|
||||
import os, sys, inspect
|
||||
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
|
||||
|
||||
class TorvaldsEnv(Environment):
|
||||
|
||||
def init(self):
|
||||
self.create_network(path=os.path.join(currentdir, 'torvalds.edgelist'))
|
||||
self.populate_network(CounterModel, skill_level='beginner')
|
||||
self.agent(node_id="Torvalds").skill_level = 'God'
|
||||
self.agent(node_id="balkian").skill_level = 'developer'
|
||||
self.add_agent_reporter("times")
|
||||
|
||||
@report
|
||||
def god_developers(self):
|
||||
return self.count_agents(skill_level='God')
|
||||
|
||||
|
||||
sim = Simulation(name='torvalds_example',
|
||||
max_steps=10,
|
||||
interval=2,
|
||||
model=TorvaldsEnv)
|
0
logo_gsi.png
Normal file → Executable file
Before Width: | Height: | Size: 35 KiB After Width: | Height: | Size: 35 KiB |
0
logo_gsi.svg
Normal file → Executable file
Before Width: | Height: | Size: 18 KiB After Width: | Height: | Size: 18 KiB |
38
models/BaseBehaviour/BaseBehaviour.py
Executable file
@@ -0,0 +1,38 @@
|
||||
import settings
|
||||
from nxsim import BaseNetworkAgent
|
||||
from .. import networkStatus
|
||||
|
||||
|
||||
class BaseBehaviour(BaseNetworkAgent):
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
self._attrs = {}
|
||||
|
||||
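    # Attributes are stored per simulation step (keyed by env.now), so to_json()
    # can later rebuild a full time series for every attribute.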
@property
|
||||
def attrs(self):
|
||||
now = self.env.now
|
||||
if now not in self._attrs:
|
||||
self._attrs[now] = {}
|
||||
return self._attrs[now]
|
||||
|
||||
@attrs.setter
|
||||
def attrs(self, value):
|
||||
self._attrs[self.env.now] = value
|
||||
|
||||
def run(self):
|
||||
while True:
|
||||
self.step(self.env.now)
|
||||
yield self.env.timeout(settings.network_params["timeout"])
|
||||
|
||||
def step(self, now):
|
||||
networkStatus['agent_%s'% self.id] = self.to_json()
|
||||
|
||||
def to_json(self):
|
||||
final = {}
|
||||
for stamp, attrs in self._attrs.items():
|
||||
for a in attrs:
|
||||
if a not in final:
|
||||
final[a] = {}
|
||||
final[a][stamp] = attrs[a]
|
||||
return final
|
1
models/BaseBehaviour/__init__.py
Executable file
@@ -0,0 +1 @@
|
||||
from .BaseBehaviour import BaseBehaviour
|
367
models/TerroristModel/TerroristModel.py
Normal file
@@ -0,0 +1,367 @@
|
||||
import random
|
||||
import numpy as np
|
||||
from ..BaseBehaviour import *
|
||||
import settings
|
||||
import networkx as nx
|
||||
|
||||
|
||||
|
||||
POPULATION = 0
|
||||
LEADERS = 1
|
||||
HAVEN = 2
|
||||
TRAININGENV = 3
|
||||
|
||||
NON_RADICAL = 0
|
||||
NEUTRAL = 1
|
||||
RADICAL = 2
|
||||
|
||||
POPNON =0
|
||||
POPNE=1
|
||||
POPRAD=2
|
||||
|
||||
HAVNON=3
|
||||
HAVNE=4
|
||||
HAVRAD=5
|
||||
|
||||
LEADER=6
|
||||
|
||||
TRAINING = 7
|
||||
|
||||
|
||||
class TerroristModel(BaseBehaviour):
|
||||
num_agents = 0
|
||||
|
||||
def __init__(self, environment=None, agent_id=0, state=()):
|
||||
|
||||
super().__init__(environment=environment, agent_id=agent_id, state=state)
|
||||
|
||||
self.population = settings.network_params["number_of_nodes"] * settings.environment_params['initial_population']
|
||||
self.havens = settings.network_params["number_of_nodes"] * settings.environment_params['initial_havens']
|
||||
self.training_enviroments = settings.network_params["number_of_nodes"] * settings.environment_params['initial_training_enviroments']
|
||||
|
||||
self.initial_radicalism = settings.environment_params['initial_radicalism']
|
||||
self.information_spread_intensity = settings.environment_params['information_spread_intensity']
|
||||
self.influence = settings.environment_params['influence']
|
||||
self.relative_inequality = settings.environment_params['relative_inequality']
|
||||
self.additional_influence = settings.environment_params['additional_influence']
|
||||
|
||||
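        # Roles are assigned in creation order: the first agents become general
        # population, the next block havens, and the rest training environments.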
if TerroristModel.num_agents < self.population:
|
||||
self.state['type'] = POPULATION
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
random1 = random.random()
|
||||
if random1 < 0.7:
|
||||
self.state['id'] = NON_RADICAL
|
||||
self.state['fstatus'] = POPNON
|
||||
elif random1 >= 0.7 and random1 < 0.9:
|
||||
self.state['id'] = NEUTRAL
|
||||
self.state['fstatus'] = POPNE
|
||||
elif random1 >= 0.9:
|
||||
self.state['id'] = RADICAL
|
||||
self.state['fstatus'] = POPRAD
|
||||
|
||||
elif TerroristModel.num_agents < self.havens + self.population:
|
||||
self.state['type'] = HAVEN
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
random2 = random.random()
|
||||
random1 = random2 + self.initial_radicalism
|
||||
if random1 < 1.2:
|
||||
self.state['id'] = NON_RADICAL
|
||||
self.state['fstatus'] = HAVNON
|
||||
elif random1 >= 1.2 and random1 < 1.6:
|
||||
self.state['id'] = NEUTRAL
|
||||
self.state['fstatus'] = HAVNE
|
||||
elif random1 >= 1.6:
|
||||
self.state['id'] = RADICAL
|
||||
self.state['fstatus'] = HAVRAD
|
||||
|
||||
elif TerroristModel.num_agents < self.training_enviroments + self.havens + self.population:
|
||||
self.state['type'] = TRAININGENV
|
||||
self.state['fstatus'] = TRAINING
|
||||
TerroristModel.num_agents = TerroristModel.num_agents + 1
|
||||
|
||||
def step(self, now):
|
||||
if self.state['type'] == POPULATION:
|
||||
self.population_and_leader_conduct()
|
||||
if self.state['type'] == LEADERS:
|
||||
self.population_and_leader_conduct()
|
||||
if self.state['type'] == HAVEN:
|
||||
self.haven_conduct()
|
||||
if self.state['type'] == TRAININGENV:
|
||||
self.training_enviroment_conduct()
|
||||
|
||||
self.attrs['status'] = self.state['id']
|
||||
self.attrs['type'] = self.state['type']
|
||||
self.attrs['radicalism'] = self.state['rad']
|
||||
self.attrs['fstatus'] = self.state['fstatus']
|
||||
super().step(now)
|
||||
|
||||
def population_and_leader_conduct(self):
|
||||
if self.state['id'] == NON_RADICAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
self.non_radical_behaviour()
|
||||
if self.state['id'] == NEUTRAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
while self.state['id'] == RADICAL:
|
||||
self.radical_behaviour()
|
||||
break
|
||||
self.neutral_behaviour()
|
||||
if self.state['id'] == RADICAL:
|
||||
if self.state['rad'] == 0.000:
|
||||
self.state['rad'] = self.set_radicalism()
|
||||
self.radical_behaviour()
|
||||
|
||||
def haven_conduct(self):
|
||||
non_radical_neighbors = self.get_neighboring_agents(state_id=NON_RADICAL)
|
||||
neutral_neighbors = self.get_neighboring_agents(state_id=NEUTRAL)
|
||||
radical_neighbors = self.get_neighboring_agents(state_id=RADICAL)
|
||||
|
||||
neighbors_of_non_radical = len(neutral_neighbors) + len(radical_neighbors)
|
||||
neighbors_of_neutral = len(non_radical_neighbors) + len(radical_neighbors)
|
||||
neighbors_of_radical = len(non_radical_neighbors) + len(neutral_neighbors)
|
||||
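        # A haven adopts the opinion held by an absolute majority of its neighbours,
        # but only when that majority has at least `threshold` members.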
threshold = 8
|
||||
if (len(non_radical_neighbors) > neighbors_of_non_radical) and len(non_radical_neighbors) >= threshold:
|
||||
self.state['id'] = NON_RADICAL
|
||||
elif (len(neutral_neighbors) > neighbors_of_neutral) and len(neutral_neighbors) >= threshold:
|
||||
self.state['id'] = NEUTRAL
|
||||
elif (len(radical_neighbors) > neighbors_of_radical) and len(radical_neighbors) >= threshold:
|
||||
self.state['id'] = RADICAL
|
||||
|
||||
if self.state['id'] == NEUTRAL:
|
||||
for neighbor in non_radical_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] > 0.59:
|
||||
neighbor.state['rad'] = 0.59
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
|
||||
if self.state['id'] == RADICAL:
|
||||
|
||||
for neighbor in non_radical_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] > 0.59:
|
||||
neighbor.state['rad'] = 0.59
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
|
||||
for neighbor in neutral_neighbors:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.6:
|
||||
neighbor.state['id'] = RADICAL
|
||||
if neighbor.state['type'] != HAVEN and neighbor.state['type']!=TRAININGENV:
|
||||
if neighbor.state['rad'] >= 0.62:
|
||||
if create_leader(neighbor):
|
||||
neighbor.state['type'] = LEADERS
|
||||
neighbor.state['fstatus'] = LEADER
|
||||
# elif neighbor.state['type'] == LEADERS:
|
||||
# neighbor.state['type'] = POPULATION
|
||||
# neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVRAD
|
||||
|
||||
def training_enviroment_conduct(self):
|
||||
self.state['id'] = RADICAL
|
||||
self.state['rad'] = 1
|
||||
neighbors = self.get_neighboring_agents()
|
||||
for neighbor in neighbors:
|
||||
if neighbor.state['id'] == NON_RADICAL:
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] > 0.59:
|
||||
neighbor.state['rad'] = 0.59
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
|
||||
|
||||
neighbor.state['rad'] = neighbor.state['rad'] + (neighbor.influence + neighbor.additional_influence) * neighbor.information_spread_intensity
|
||||
if neighbor.state['rad'] >= 0.3 and neighbor.state['rad'] <= 0.59:
|
||||
neighbor.state['id'] = NEUTRAL
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPNE
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVNE
|
||||
elif neighbor.state['rad'] >= 0.6:
|
||||
neighbor.state['id'] = RADICAL
|
||||
if neighbor.state['type'] != HAVEN and neighbor.state['type'] != TRAININGENV:
|
||||
if neighbor.state['rad'] >= 0.62:
|
||||
if create_leader(neighbor):
|
||||
neighbor.state['type'] = LEADERS
|
||||
neighbor.state['fstatus'] = LEADER
|
||||
# elif neighbor.state['type'] == LEADERS:
|
||||
# neighbor.state['type'] = POPULATION
|
||||
# neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == POPULATION:
|
||||
neighbor.state['fstatus'] = POPRAD
|
||||
elif neighbor.state['type'] == HAVEN:
|
||||
neighbor.state['fstatus'] = HAVRAD
|
||||
|
||||
def non_radical_behaviour(self):
|
||||
neighbors = self.get_neighboring_agents()
|
||||
|
||||
for neighbor in neighbors:
|
||||
if neighbor.state['type'] == POPULATION:
|
||||
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
|
||||
self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
|
||||
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
|
||||
self.state['id'] = NEUTRAL
|
||||
|
||||
if self.state['type']==POPULATION:
|
||||
self.state['fstatus'] = POPNE
|
||||
elif self.state['type'] == HAVEN:
|
||||
self.state['fstatus'] = HAVNE
|
||||
elif self.state['rad'] > 0.59:
|
||||
self.state['rad'] = 0.59
|
||||
self.state['id'] = NEUTRAL
|
||||
if self.state['type']==POPULATION:
|
||||
self.state['fstatus'] = POPNE
|
||||
elif self.state['type'] == HAVEN:
|
||||
self.state['fstatus'] = HAVNE
|
||||
|
||||
elif neighbor.state['type'] == LEADERS:
|
||||
|
||||
if neighbor.state['id'] == NEUTRAL or neighbor.state['id'] == RADICAL:
|
||||
self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
|
||||
if self.state['rad'] >= 0.3 and self.state['rad'] <= 0.59:
|
||||
self.state['id'] = NEUTRAL
|
||||
if self.state['type']==POPULATION:
|
||||
self.state['fstatus'] = POPNE
|
||||
elif self.state['type'] == HAVEN:
|
||||
self.state['fstatus'] = HAVNE
|
||||
elif self.state['rad'] > 0.59:
|
||||
self.state['rad'] = 0.59
|
||||
self.state['id'] = NEUTRAL
|
||||
if self.state['type']==POPULATION:
|
||||
self.state['fstatus'] = POPNE
|
||||
elif self.state['type'] == HAVEN:
|
||||
self.state['fstatus'] = HAVNE
|
||||
|
||||
|
||||
    def neutral_behaviour(self):
        # Influence received while the agent is NEUTRAL: radical neighbours can push it
        # over the 0.6 RADICAL threshold and, from 0.62, make it a candidate leader.
        neighbors = self.get_neighboring_agents()
        for neighbor in neighbors:
            if neighbor.state['type'] == POPULATION:
                if neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + self.influence * self.information_spread_intensity
                    if self.state['rad'] >= 0.6:
                        self.state['id'] = RADICAL
                        if self.state['type'] != HAVEN:
                            if self.state['rad'] >= 0.62:
                                if create_leader(self):
                                    self.state['type'] = LEADERS
                                    self.state['fstatus'] = LEADER
                            # elif self.state['type'] == LEADERS:
                            #     self.state['type'] = POPULATION
                            #     self.state['fstatus'] = POPRAD
                            elif self.state['type'] == POPULATION:
                                # mark the agent's own status according to its type
                                self.state['fstatus'] = POPRAD
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVRAD

            elif neighbor.state['type'] == LEADERS:
                if neighbor.state['id'] == RADICAL:
                    self.state['rad'] = self.state['rad'] + (self.influence + self.additional_influence) * self.information_spread_intensity
                    if self.state['rad'] >= 0.6:
                        self.state['id'] = RADICAL
                        if self.state['type'] != HAVEN:
                            if self.state['rad'] >= 0.62:
                                if create_leader(self):
                                    self.state['type'] = LEADERS
                                    self.state['fstatus'] = LEADER
                            # elif self.state['type'] == LEADERS:
                            #     self.state['type'] = POPULATION
                            #     self.state['fstatus'] = POPRAD
                            elif self.state['type'] == POPULATION:
                                self.state['fstatus'] = POPRAD
                        elif self.state['type'] == HAVEN:
                            self.state['fstatus'] = HAVRAD

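For reference, a short sketch (not part of this diff) of the step sizes that the update rule rad = rad + influence * information_spread_intensity produces, assuming the parameter values from the settings.json added further down:

influence = 0.4                       # from settings.json
additional_influence = 0.1            # from settings.json
information_spread_intensity = 0.1    # from settings.json

plain_step = influence * information_spread_intensity                             # 0.04 per interaction with a radical POPULATION neighbour
leader_step = (influence + additional_influence) * information_spread_intensity   # 0.05 per interaction with a LEADERS neighbour

rad = 0.12            # e.g. an agent seeded at initial_radicalism
rad += leader_step    # 0.17 after one step; roughly ten such steps reach the 0.6 RADICAL threshold
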
    def radical_behaviour(self):
        neighbors = self.get_neighboring_agents(state_id=RADICAL)

        for neighbor in neighbors:
            # a leader yields its role when a neighbouring leader is more radicalized
            if self.state['rad'] < neighbor.state['rad'] and self.state['type'] == LEADERS and neighbor.state['type'] == LEADERS:
                self.state['type'] = POPULATION
                self.state['fstatus'] = POPRAD

    def set_radicalism(self):
        if self.state['id'] == NON_RADICAL:
            radicalism = random.uniform(0.0, 0.29) * self.relative_inequality
            return radicalism
        elif self.state['id'] == NEUTRAL:
            radicalism = 0.3 + random.uniform(0.3, 0.59) * self.relative_inequality
            if radicalism >= 0.6:
                self.state['id'] = RADICAL
            return radicalism
        elif self.state['id'] == RADICAL:
            radicalism = 0.6 + random.uniform(0.6, 1.0) * self.relative_inequality
            return radicalism

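The rad bands implied by set_radicalism and the behaviours above can be summarised as follows. This is an editorial sketch, not part of the diff, and the numeric values of the NON_RADICAL/NEUTRAL/RADICAL constants are assumed (they are defined earlier in TerroristModel.py):

NON_RADICAL, NEUTRAL, RADICAL = 0, 1, 2   # assumed values for the constants used above

def classify_radicalism(rad):
    """Map a rad value to the id band used by the behaviours above."""
    if rad < 0.3:
        return NON_RADICAL
    if rad < 0.6:
        return NEUTRAL   # the non-radical branches additionally cap rad at 0.59
    return RADICAL       # from 0.62 upwards the agent may also become a leader via create_leader()
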
def get_partition(agent):
    return settings.partition_param[agent.id]


def get_centrality(agent):
    return settings.centrality_param[agent.id]


def get_centrality_given_id(id):
    return settings.centrality_param[id]


def get_leader(partition):
    if not settings.leaders or partition not in settings.leaders:
        return None
    return settings.leaders[partition]


def set_leader(partition, agent):
    settings.leaders[partition] = agent.id


def create_leader(agent):
    my_partition = get_partition(agent)
    old_leader = get_leader(my_partition)

    if old_leader is None:
        set_leader(my_partition, agent)
        return True
    else:
        my_centrality = get_centrality(agent)
        old_leader_centrality = get_centrality_given_id(old_leader)
        if my_centrality > old_leader_centrality:
            set_leader(my_partition, agent)
            return True
    return False

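A minimal usage sketch of the leader-selection helpers (hypothetical ids, partitions and centralities; not part of the diff), assuming it runs in the same module where create_leader is defined:

import settings

settings.partition_param = {1: 'A', 2: 'A'}     # hypothetical community labels per agent id
settings.centrality_param = {1: 0.30, 2: 0.55}  # hypothetical centralities per agent id
settings.leaders = {}

class FakeAgent:
    def __init__(self, id):
        self.id = id

create_leader(FakeAgent(1))   # True: partition 'A' has no leader yet, agent 1 takes it
create_leader(FakeAgent(2))   # True: 0.55 > 0.30, agent 2 replaces agent 1 as leader
create_leader(FakeAgent(1))   # False: agent 1 is less central than the current leader
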
1
models/TerroristModel/__init__.py
Normal file
@@ -0,0 +1 @@
from .TerroristModel import TerroristModel
3
models/__init__.py
Executable file
@@ -0,0 +1,3 @@
from .models import *
from .BaseBehaviour import *
from .TerroristModel import *
7
models/models.py
Executable file
@@ -0,0 +1,7 @@
import settings

networkStatus = {}  # Dict that will contain the status of every agent in the network

# Initialize agent states. Assume everyone starts non-radical and every agent type is population.
init_states = [{'id': 0, 'type': 0, 'rad': 0, 'fstatus': 0} for _ in range(settings.network_params["number_of_nodes"])]
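With the number_of_nodes value from the settings.json below, the comprehension above produces one identical starting state per node; a quick sketch of the expected result (not part of the diff):

assert len(init_states) == 80                                          # settings.network_params["number_of_nodes"]
assert init_states[0] == {'id': 0, 'type': 0, 'rad': 0, 'fstatus': 0}  # every node starts in the same state
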
15
requirements.txt
Normal file → Executable file
@@ -1,12 +1,5 @@
networkx>=2.5
nxsim
simpy
networkx
numpy
matplotlib
pyyaml>=5.1
pandas>=1
SALib>=1.3
Jinja2
Mesa>=1.2
pydantic>=1.9
sqlalchemy>=1.4
typing-extensions>=4.4
annotated-types>=0.4
matplotlib
23
settings.json
Executable file
@@ -0,0 +1,23 @@
[
    {
        "network_type": 0,
        "number_of_nodes": 80,
        "max_time": 50,
        "num_trials": 1,
        "timeout": 2
    },

    {
        "agent": ["TerroristModel"],

        "initial_population": 0.85,
        "initial_havens": 0.1,
        "initial_training_enviroments": 0.05,

        "initial_radicalism": 0.12,
        "relative_inequality": 0.33,
        "information_spread_intensity": 0.1,
        "influence": 0.4,
        "additional_influence": 0.1
    }
]
13
settings.py
Executable file
@@ -0,0 +1,13 @@
# General configuration
import json

with open('settings.json', 'r') as f:
    settings = json.load(f)

network_params = settings[0]
environment_params = settings[1]

centrality_param = {}
partition_param = {}
leaders = {}
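A short sketch (not part of the diff) of how these globals are consumed elsewhere in the branch; the runtime values for centrality_param and partition_param are hypothetical:

import settings

n_nodes = settings.network_params["number_of_nodes"]                  # 80 in settings.json
spread = settings.environment_params["information_spread_intensity"]  # 0.1 in settings.json

# centrality_param, partition_param and leaders start empty and are filled at runtime,
# keyed by agent id, before TerroristModel's get_leader()/create_leader() read them.
settings.centrality_param[0] = 0.42   # hypothetical centrality for agent 0
settings.partition_param[0] = 0       # hypothetical partition label for agent 0
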
63
setup.py
@@ -1,63 +0,0 @@
import os
from setuptools import setup


with open(os.path.join('soil', 'VERSION')) as f:
    __version__ = f.readlines()[0].strip()
    assert __version__


def parse_requirements(filename):
    """ load requirements from a pip requirements file """
    with open(filename, 'r') as f:
        lineiter = list(line.strip() for line in f)
    return [line for line in lineiter if line and not line.startswith("#")]


install_reqs = parse_requirements("requirements.txt")
test_reqs = parse_requirements("test-requirements.txt")
extras_require = {
    'mesa': ['mesa>=0.8.9'],
    'geo': ['scipy>=1.3'],
    'web': ['tornado']
}
extras_require['all'] = [dep for package in extras_require.values() for dep in package]


setup(
    name='soil',
    packages=['soil'],  # this must be the same as the name above
    version=__version__,
    description=('An Agent-Based Social Simulator for Social Networks'),
    author='J. Fernando Sanchez',
    author_email='jf.sanchez@upm.es',
    url='https://github.com/gsi-upm/soil',  # use the URL to the github repo
    download_url='https://github.com/gsi-upm/soil/archive/{}.tar.gz'.format(
        __version__),
    keywords=['agent', 'social', 'simulator'],
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Environment :: Console',
        'Intended Audience :: End Users/Desktop',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Operating System :: MacOS :: MacOS X',
        'Operating System :: Microsoft :: Windows',
        'Operating System :: POSIX',
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
    install_requires=install_reqs,
    extras_require=extras_require,
    tests_require=test_reqs,
    setup_requires=['pytest-runner', ],
    pytest_plugins=['pytest_profiling'],
    include_package_data=True,
    python_requires=">=3.8",
    entry_points={
        'console_scripts':
            ['soil = soil.__main__:main',
             'soil-web = soil.web.__init__:main']
    })